The metafor Package

A Meta-Analysis Package for R

~~NOTOC~~
  
==== 2024-03-28: Version 4.6-0 Released on CRAN ====

A new version of the metafor package has been released on CRAN. This update came a bit sooner than originally planned: two minor issues flagged metafor as requiring an update, as otherwise the package would have been archived on CRAN, which would have had unpleasant consequences for other packages that depend on metafor. So an update needed to be pushed out relatively quickly.

The issues themselves were easy to fix. The first was a very minor formatting oversight in one of the help files. The second was the result of two packages being archived that metafor had listed as suggested packages, namely [[https://cran.r-project.org/package=Rcgmin|Rcgmin]] and [[https://cran.r-project.org/package=Rvmmin|Rvmmin]]. These packages provided some alternative optimizers that could be chosen for fitting certain models, but they were not essential dependencies of metafor and hence could easily be removed. These optimizers have in fact been moved into the [[https://cran.r-project.org/package=optimx|optimx]] package and will probably be reincorporated into metafor later on.

The update itself took a bit longer (prompting a few well-deserved reminders from the CRAN team) due to other work-related responsibilities, plus I wanted to finish a few other updates to the package that I was working on in the meantime. The full changelog can be found [[:updates#version_46-0_2024-03-28|here]], but I would like to highlight a few items.

First of all, I have finally added effect size measures for computing the standardized mean change using raw score standardization with pooled standard deviations to the ''[[https://wviechtb.github.io/metafor/reference/escalc.html|escalc()]]'' function. Traditionally, following Becker's seminal 1988 paper ([[https://doi.org/10.1111/j.2044-8317.1988.tb00901.x|link]]), this measure was computed as $$d = \frac{\bar{x}_1 - \bar{x}_2}{s_1},$$ where $\bar{x}_1$ and $\bar{x}_2$ are the means at the two measurement occasions and $s_1$ is the standard deviation of the raw scores observed at the first measurement occasion (followed by a slight bias correction applied to $d$). In principle, one can also use $s_2$ in the denominator, but crucially, only one of the two standard deviations is used for the standardization. While there is nothing inherently wrong with doing so (and it simplifies the derivation of the exact distribution of $d$), some may prefer to pool the two standard deviations and hence use $$d = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2 + s_2^2}{2}}}$$ as the effect size measure (i.e., we average the variances and then take the square root thereof).
This is now possible with ''measure="SMCRP"'' under the assumption that the true variances are the same at the two measurement occasions and with ''measure="SMCRPH"'' without this assumption, that is, allowing for heteroscedasticity of the two variances (in the latter case, the computation of the sampling variance needs to be adjusted slightly). See the [[https://wviechtb.github.io/metafor/reference/escalc.html#-outcome-measures-for-change-or-matched-pairs|documentation of the escalc() function]] for further details.
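
To make the difference concrete, here is a quick base R sketch of the two standardizations (the numbers are hypothetical; in practice, one would simply pass the summary statistics to ''escalc()'' with one of the measures above, and the slight bias correction applied to $d$ is omitted here):

<code rsplus>
# hypothetical means and SDs at the two measurement occasions
x1 <- 24.1; x2 <- 21.6
s1 <- 5.2;  s2 <- 6.0

(x1 - x2) / s1                        # raw-score standardization using s1 only
(x1 - x2) / sqrt((s1^2 + s2^2) / 2)   # pooled standard deviations (as in SMCRP)
</code>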
  
Second, the ''[[https://wviechtb.github.io/metafor/reference/selmodel.html|selmodel()]]'' function has received a few updates. To start, the function no longer stops with an error when one or more intervals defined by the ''steps'' argument do not contain any observed p-values (instead, a warning is issued and model fitting proceeds, although it may fail). For automating analyses and for simulation studies, one can now set ''ptable=TRUE'', in which case the function simply returns the table with the number of p-values falling into the various intervals, based on which one can decide how to proceed.

Furthermore, by setting the argument ''decreasing=TRUE'', it is now possible to fit the step function model under the assumption that the selection function parameters are a monotonically decreasing function of the p-values. This feature is somewhat experimental: it requires either optimization with inequality constraints or a clever reformulation of the objective function that enforces such a constraint, which complicates some internal issues and makes model fitting more difficult. One can also debate whether one should ever make this assumption in the first place, but it is a feature I wanted to implement for testing and research purposes anyway.

Finally, per request, it is now also possible to pass the observed p-values of the studies directly to the function via the ''pval'' argument. This can in principle be of interest when the observed p-values were not computed with a standard Wald-type test (as assumed by the function) but with a different method. This is an undocumented and experimental feature, because doing so creates a bit of a mismatch between the assumptions internal to the function (the integration step used to compute the weighted density of the effect size estimates still assumes the use of a standard Wald-type test). To what extent this is actually a problem and whether this feature can improve the accuracy of the results from selection models remains to be determined in future research.

==== 2023-08-13: Dynamic Plot Colors Based on the RStudio Theme ====

I am quite excited about a new feature that will be part of the upcoming version of the metafor package. The package now creates all plots in such a way that (if one sets ''[[https://wviechtb.github.io/metafor/reference/mfopt.html|setmfopt]]''''(theme="auto")'') they will have a consistent look depending on the chosen [[https://docs.posit.co/ide/user/ide/guide/ui/appearance.html#editor-themes|RStudio theme]]. You can see below some examples of how various plots change their look according to the theme:

{{news:plots-rstudio-theme.mp4?800x500&nolink}}

This creates a more pleasing experience when working interactively with RStudio, but could also be useful when creating presentations that do not use a white background.

The default setting (''[[https://wviechtb.github.io/metafor/reference/mfopt.html|setmfopt]]''''(theme="default")'') uses the default colors of the plotting device (typically a white background and a black foreground color), but one can also use ''[[https://wviechtb.github.io/metafor/reference/mfopt.html|setmfopt]]''''(theme="dark")'' to force plots to be drawn in a 'dark mode' (this may be useful when working with a different programming editor or IDE that also uses a dark mode).

==== 2023-05-08: Version 4.2-0 Released on CRAN ====

I made a relatively quick release of another version of the package. There were a few minor buglets that annoyed me and that I wanted to get rid of right away. Along the way, I made some improvements to various plotting functions. In particular, the various ''forest()'' functions now do a better job of choosing default values for the ''xlim'' argument, and the ''ilab.xpos'' argument values are now also chosen automatically when not specified (although it is still recommended to adjust the default values to tweak the look of each forest plot to perfection). There is now also a ''shade'' argument for shading particular rows of the plot (e.g., for a 'zebra-like' look).
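
A minimal sketch of the ''shade'' argument in action, using the ''dat.bcg'' dataset that comes with the package (the exact defaults chosen for ''xlim'' and ''ilab.xpos'' will depend on the data):

<code rsplus>
library(metafor)

# random-effects model for the BCG vaccine data included in the package
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)

# forest plot with alternating shaded rows (a 'zebra-like' look)
forest(res, shade="zebra")
</code>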

Also, the various plotting functions now respect ''par("fg")''. This makes it possible to easily create plots with a dark background and light plotting colors. By default, plots created in R have a light (white) background and use dark colors, like this:

{{news:plots-light.png?nolink}}

But if we set up the plotting device like this:

<code rsplus>
bg <- "gray10"   # dark (near-black) background color
fg <- "gray95"   # light (near-white) foreground color
dev.new(canvas=bg)   # open a new plotting device with a dark canvas
par(fg=fg, bg=bg, col=fg, col.axis=fg, col.lab=fg, col.main=fg, col.sub=fg)
</code>

then the resulting plots look like this:

{{news:plots-dark.png?nolink}}

So in case you prefer a dark mode for your IDE/editor, opening a plot in this manner no longer feels like staring directly into the sun.

{{news:sunshine.jpg?nolink}}

Such an awesome [[https://en.wikipedia.org/wiki/Sunshine_(2007_film)|movie]], by the way.

Finally, aside from a few other improvements (e.g., functions that issue a warning when omitting studies due to NAs now indicate how many were omitted), the ''rma.glmm()'' function (when ''measure="OR", model="CM.EL", method="ML"'') now treats $\tau^2$ values below 1e-04 effectively as zero before computing the standard errors of the fixed effects. This helps to avoid numerical problems when approximating the Hessian. Similarly, ''selmodel()'' now treats $\tau^2$ values below 1e-04 or ''min(vi/10)'' effectively as zero before computing the standard errors.
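
The thresholding step amounts to the following (a hypothetical base R illustration of the idea, not metafor's actual internals):

<code rsplus>
tau2 <- 6.5e-05      # estimated between-study variance (hypothetical value)

# treat a near-zero estimate effectively as zero before computing the
# standard errors of the fixed effects
if (tau2 < 1e-04) tau2 <- 0
</code>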

The full changelog can be found [[:updates#version_42-0_2023-05-08|here]].

==== 2023-03-19: Version 4.0-0 Released on CRAN ====

I am excited to announce the official (i.e., CRAN) release of version 4.0-0 of the metafor package. This is the 30th update to the package since its initial release in 2009. Since then, the package has grown from a measly 4460 lines of code / 60 functions / 76 pages of documentation to a respectable 36879 lines of code / 330 functions / 347 pages of documentation. Aside from a few improvements related to modeling (e.g., the ''[[https://wviechtb.github.io/metafor/reference/emmprep.html|emmprep()]]'' function provides easier interoperability with the [[https://cran.r-project.org/package=emmeans|emmeans]] package and the ''[[https://wviechtb.github.io/metafor/reference/selmodel.html|selmodel()]]'' function gains a few additional selection models), I would say the focus of this update was on steps that occur prior to modeling, namely the calculation of the chosen effect size measure (or outcome measure, as I prefer to call it) and the construction of the dataset in general.

In particular, the ''[[https://wviechtb.github.io/metafor/reference/escalc.html|escalc()]]'' function now allows the user to also input appropriate test statistics and/or p-values for a number of measures where these can be directly transformed into the corresponding values of the measure. For example, the t-statistic from an independent samples t-test can be easily transformed into a standardized mean difference, and the t-statistic from a standard test of a correlation coefficient can be easily transformed into the correlation coefficient or its r-to-z transformed version. Speaking of the latter, essentially all correlation-type measures can now be transformed using the r-to-z transformation, although it should be noted that this is not a proper variance-stabilizing transformation for all of these measures. It can still be useful, though, since the r-to-z transformation also has normalizing properties, and when combining different types of correlation coefficients in the same analysis (e.g., Pearson product-moment correlations and tetrachoric/biserial correlations).
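
The underlying conversions are straightforward. For example (base R, with hypothetical values; ''escalc()'' now handles such conversions directly):

<code rsplus>
# t-statistic from an independent samples t-test -> standardized mean difference
t1 <- 2.75; n1 <- 20; n2 <- 25
d <- t1 * sqrt(1/n1 + 1/n2)

# t-statistic from a standard test of a correlation -> r and its r-to-z version
t2 <- 3.10; n <- 50
r <- t2 / sqrt(t2^2 + (n - 2))
z <- atanh(r)   # r-to-z (Fisher) transformation
</code>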

Finally, there are now several functions in the package that facilitate the construction of the dataset for a meta-analysis more generally. The ''[[https://wviechtb.github.io/metafor/reference/conv.2x2.html|conv.2x2()]]'' function helps to reconstruct 2x2 tables based on various pieces of information (e.g., odds ratios, chi-square statistics), while the ''[[https://wviechtb.github.io/metafor/reference/conv.fivenum.html|conv.fivenum()]]'' function provides various methods for computing (or, more precisely, estimating) means and standard deviations based on five-number summary values (i.e., the minimum, first quartile, median, third quartile, and maximum) and subsets thereof. The ''[[https://wviechtb.github.io/metafor/reference/conv.wald.html|conv.wald()]]'' function converts Wald-type tests and/or confidence intervals into effect sizes and corresponding sampling variances (e.g., to transform a reported odds ratio and its confidence interval into the corresponding log odds ratio and sampling variance). And the ''[[https://wviechtb.github.io/metafor/reference/conv.delta.html|conv.delta()]]'' function transforms effect sizes or outcomes and their sampling variances using the delta method, which can be useful in several data preparation steps. See the documentation of these functions for further details and examples.
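
For instance, the kind of back-calculation that ''conv.wald()'' automates for a reported odds ratio with a 95% CI amounts to (base R, hypothetical values):

<code rsplus>
or <- 1.75; ci.lb <- 1.12; ci.ub <- 2.74   # reported OR with 95% CI bounds

yi  <- log(or)                                          # log odds ratio
sei <- (log(ci.ub) - log(ci.lb)) / (2 * qnorm(0.975))   # SE from the CI width
vi  <- sei^2                                            # sampling variance
</code>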

If you come across any issues/bugs, please report them [[https://github.com/wviechtb/metafor/issues|here]]. However, for questions or discussions about these functions (or really anything related to the metafor package or meta-analysis with R in general), please use the [[https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis|R-sig-meta-analysis]] mailing list.
  
news/news.txt · Last modified: 2024/03/29 10:44 by Wolfgang Viechtbauer