The metafor Package

A Meta-Analysis Package for R

~~NOTOC~~
  
==== 2023-05-08: Version 4.2-0 Released on CRAN ====
  
I made a relatively quick release of another version of the package: a few minor buglets were annoying me and I wanted to get rid of them right away. Along the way, I made some improvements to various plotting functions. In particular, the various ''forest()'' functions now do a better job of choosing default values for the ''xlim'' argument, and the ''ilab.xpos'' argument values are now also chosen automatically when not specified (although it is still recommended to adjust these defaults to tweak the look of each forest plot to perfection). There is now also a ''shade'' argument for shading particular rows of the plot (e.g., for a 'zebra-like' look).
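
For example, something along these lines should produce a forest plot with an extra column of labels and alternating shaded rows (a rough sketch using the ''dat.bcg'' dataset; the ''shade="zebra"'' value and the reliance on the automatic ''ilab.xpos'' placement are assumptions based on the description above, so double-check against ''help(forest.rma)''):

<code rsplus>
library(metafor)

# example data: the BCG vaccine trials that come with metafor
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)

# forest plot with an extra column of labels (absolute latitude) and
# alternating shaded rows; ilab.xpos is left unspecified on purpose,
# relying on the new automatic placement
forest(res, ilab=dat$ablat, shade="zebra", header=TRUE)
</code>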
  
Also, the various plotting functions now respect ''par("fg")''. This makes it possible to easily create plots with a dark background and light plotting colors. By default, plots created in R have a light (white) background and use dark colors, like this:
  
{{news:plots-light.png?nolink}}
  
But if we set up the plotting device like this:
  
<code rsplus>
bg <- "gray10"
fg <- "gray95"
dev.new(canvas=bg)
par(fg=fg, bg=bg, col=fg, col.axis=fg, col.lab=fg, col.main=fg, col.sub=fg)
</code>
  
then the resulting plots look like this:
  
{{news:plots-dark.png?nolink}}
  
So in case you prefer a dark mode for your IDE/editor, opening a plot in this manner no longer feels like staring directly into the sun.
  
{{news:sunshine.jpg?nolink}}
  
Such an awesome [[https://en.wikipedia.org/wiki/Sunshine_(2007_film)|movie]], by the way.
  
Finally, aside from a few other improvements (e.g., functions that issue a warning when omitting studies due to NAs now indicate how many were omitted), the ''rma.glmm()'' function (when ''measure="OR", model="CM.EL", method="ML"'') now treats $\tau^2$ values below 1e-04 effectively as zero before computing the standard errors of the fixed effects. This helps to avoid numerical problems in approximating the Hessian. Similarly, ''selmodel()'' now treats $\tau^2$ values below 1e-04 or ''min(vi/10)'' effectively as zero before computing the standard errors.
  
The full changelog can be found [[:updates#version_42-0_2023-05-08|here]].
  
==== 2023-03-19: Version 4.0-0 Released on CRAN ====
  
I am excited to announce the official (i.e., CRAN) release of version 4.0-0 of the metafor package. This is the 30th update since the initial release of the package in 2009. Over that time, the package has grown from a measly 4460 lines of code / 60 functions / 76 pages of documentation to a respectable 36879 lines of code / 330 functions / 347 pages of documentation. Aside from a few improvements related to modeling (e.g., the ''[[https://wviechtb.github.io/metafor/reference/emmprep.html|emmprep()]]'' function provides easier interoperability with the [[https://cran.r-project.org/package=emmeans|emmeans]] package and the ''[[https://wviechtb.github.io/metafor/reference/selmodel.html|selmodel()]]'' function gains a few additional selection models), I would say the focus of this update was on steps that occur prior to modeling, namely the calculation of the chosen effect size measure (or outcome measure, as I prefer to call it) and the construction of the dataset in general.
  
In particular, the ''[[https://wviechtb.github.io/metafor/reference/escalc.html|escalc()]]'' function now allows the user to also input appropriate test statistics and/or p-values for a number of measures where these can be directly transformed into the corresponding values of the measure. For example, the t-statistic from an independent-samples t-test can be easily transformed into a standardized mean difference, and the t-statistic from a standard test of a correlation coefficient can be easily transformed into the correlation coefficient or its r-to-z transformed version. Speaking of the latter, essentially all correlation-type measures can now be transformed using the r-to-z transformation, although it should be noted that this is not a proper variance-stabilizing transformation for all of these measures. It can still be useful, though, since the r-to-z transformation also has normalizing properties and since a common transformation helps when combining different types of correlation coefficients (e.g., Pearson product-moment correlations and tetrachoric/biserial correlations) in the same analysis.
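
For instance, something like the following should work (a sketch with made-up numbers; the argument names ''ti'' for the test statistics and ''n1i''/''n2i''/''ni'' for the sample sizes are assumed here, so double-check against ''help(escalc)''):

<code rsplus>
library(metafor)

# standardized mean differences computed from the t-statistics of
# independent-samples t-tests plus the two group sizes (made-up numbers)
dat1 <- escalc(measure="SMD", ti=c(2.31, 1.87, 3.02),
               n1i=c(25, 40, 30), n2i=c(25, 38, 32))

# r-to-z transformed correlations computed from the t-statistics of the
# tests of the correlation coefficients plus the sample sizes
dat2 <- escalc(measure="ZCOR", ti=c(2.10, 2.75), ni=c(50, 80))
</code>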
  
Finally, there are now several functions in the package that facilitate the construction of the dataset for a meta-analysis more generally. The ''[[https://wviechtb.github.io/metafor/reference/conv.2x2.html|conv.2x2()]]'' function helps to reconstruct 2x2 tables based on various pieces of information (e.g., odds ratios, chi-square statistics), while the ''[[https://wviechtb.github.io/metafor/reference/conv.fivenum.html|conv.fivenum()]]'' function provides various methods for computing (or, more precisely, estimating) means and standard deviations based on five-number summary values (i.e., the minimum, first quartile, median, third quartile, and maximum) and subsets thereof. The ''[[https://wviechtb.github.io/metafor/reference/conv.wald.html|conv.wald()]]'' function converts Wald-type tests and/or confidence intervals into effect sizes and corresponding sampling variances (e.g., to transform a reported odds ratio and its confidence interval into the corresponding log odds ratio and sampling variance). And the ''[[https://wviechtb.github.io/metafor/reference/conv.delta.html|conv.delta()]]'' function transforms effect sizes or outcomes and their sampling variances using the delta method, which can be useful in several data preparation steps. See the documentation of these functions for further details and examples.
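
As a small illustration of the ''conv.wald()'' function (a sketch with made-up numbers; the argument names ''out'', ''ci.lb'', and ''ci.ub'' are given from memory, so consult ''help(conv.wald)'' for the authoritative usage):

<code rsplus>
library(metafor)

# a study reported an odds ratio of 1.75 with a 95% CI of 1.20 to 2.55;
# convert this into the log odds ratio and corresponding sampling variance
dat <- data.frame(or=1.75, lower=1.20, upper=2.55)
dat <- conv.wald(out=or, ci.lb=lower, ci.ub=upper, data=dat, transf=log)
dat
</code>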
  
If you come across any issues/bugs, please report them [[https://github.com/wviechtb/metafor/issues|here]]. However, for questions or discussions about these functions (or really anything related to the metafor package or meta-analysis in general), please use the [[https://stat.ethz.ch/mailman/listinfo/r-sig-meta-analysis|R-sig-meta-analysis]] mailing list.

==== November 14th, 2020: An Aggregate Function ====

In many meta-analyses, multiple effect size estimates or outcomes can be extracted from the same study. Ideally, such structures should be analyzed using an appropriate multilevel/multivariate model, as can be fitted with the ''rma.mv()'' function. However, there may occasionally be reasons for aggregating multiple effect sizes or outcomes belonging to the same study (or to the same level of some other clustering variable) into a single combined effect size or outcome. I've added an ''aggregate()'' function (or, to be precise, an ''aggregate.escalc()'' method function) to the package for this purpose. You can read the documentation for this function (and see some examples illustrating its use) [[https://wviechtb.github.io/metafor/reference/aggregate.escalc.html|here]].
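
A minimal sketch of what this can look like, using the ''dat.assink2016'' dataset that comes with metafor and assuming a correlation of 0.6 among the sampling errors of estimates from the same study (see the linked documentation for the authoritative examples):

<code rsplus>
library(metafor)

# dat.assink2016 contains multiple standardized mean differences per study;
# declare it as an 'escalc' object so aggregate() knows where yi and vi are
dat <- escalc(measure="SMD", yi=yi, vi=vi, data=dat.assink2016)

# aggregate the estimates within studies into one combined estimate per
# study, assuming a correlation of 0.6 among estimates from the same study
agg <- aggregate(dat, cluster=study, rho=0.6)
</code>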

==== October 14th, 2020: Selection Models ====

I've added the possibility to fit so-called 'selection models' with the metafor package. In case you are not familiar with such models: selection models attempt to model, and therefore account for, the process by which the studies included in a meta-analysis may have been influenced by some form of publication bias. In other words, some kind of selection process may have happened that made it more likely that certain types of studies were published and hence are more easily found and therefore can be included in a meta-analysis (yes, one should always search the 'gray literature' for unpublished studies to be included in a meta-analysis, but uncovering those studies lingering in some file drawers out there can be exceedingly difficult).

The classical example of such a selection process is the fact that statistically significant findings are more likely to be submitted/accepted for publication. As a result, the findings from a meta-analysis can be biased, sometimes quite severely (especially because the smaller studies can only achieve statistical significance if they happen to have obtained a large effect). Selection models attempt to correct for this (or can be used for sensitivity analyses by varying the degree of severity of such a selection process).

To make this possible directly within the metafor package, I've added the [[https://wviechtb.github.io/metafor/reference/selmodel.html|selmodel()]] function, which provides a wide variety of selection model types (there are lots of proposals out there for how to model the selection process), including the 'beta selection model' by Citkowicz and Vevea (2017), a bunch of selection models suggested by Preston et al. (2004), an extension thereof that I call the 'negative exponential power selection model' (sounds fancy, huh?), and so-called 'step function models' as described by Iyengar and Greenhouse (1988), Hedges (1992), Vevea and Hedges (1995), and Vevea and Woods (2005). I wrote the code so that it would be relatively easy to add further selection models to the function in case additional models are suggested in the statistical literature.
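
For instance, a step function model with a single cutpoint at p = 0.025 (the classic 'three-parameter selection model') could be fitted along these lines (a hedged sketch; the BCG vaccine data are used purely for illustration and the cutpoint and ''alternative'' setting are assumptions, not recommendations):

<code rsplus>
library(metafor)

# random-effects model (with ML estimation) for the BCG vaccine trials
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat, method="ML")

# step function selection model with a single cutpoint at p = 0.025;
# alternative="less" since selection is assumed to favor significant
# protective (i.e., negative) log risk ratios in this illustration
sel <- selmodel(res, type="stepfun", alternative="less", steps=c(0.025, 1))
sel
</code>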

Note that the [[https://cran.r-project.org/package=weightr|weightr]] package can also fit step function models, and some other selection models are implemented in the [[https://cran.r-project.org/package=metasens|metasens]] and [[https://cran.r-project.org/package=selectMeta|selectMeta]] packages.

==== August 9th, 2020: R Code for Even More Meta-Analysis Books ====

The R code for two more books has been added to the [[https://github.com/wviechtb/meta_analysis_books|GitHub repo]]: //The Handbook of Research Synthesis and Meta-Analysis// by Cooper et al. (2019) and //Publication Bias in Meta-Analysis// by Rothstein et al. (2005).

==== July 17th, 2020: R Code for Meta-Analysis Books ====

I've started a [[https://github.com/wviechtb/meta_analysis_books|repo on GitHub]] to provide R code for various books on meta-analysis. It now contains //Introduction to Meta-Analysis// by Borenstein et al. (2009) and //Practical Meta-Analysis// by Lipsey and Wilson (2001). More to be added. The items in the repo will also be listed under the [[:analyses#books_on_meta-analysis|Analysis Examples]] section.

==== June 8th, 2020: Weights in Models Fitted with the rma.mv() Function ====

And another entry to the 'Tips and Notes' section, this time discussing how weighting works in more complex models, such as those that can be fitted with the ''rma.mv()'' function. You can read the tutorial [[tips:weights_in_rma.mv_models|here]].

==== May 27th, 2020: Computing Adjusted Effects Based on Meta-Regression Models ====

I've added an entry to the 'Tips and Notes' section, discussing how to compute 'adjusted effects' based on meta-regression models. You can read the tutorial [[tips:computing_adjusted_effects|here]].

==== May 9th, 2020: Modeling Non-Linear Associations in Meta-Regression ====

I've added an entry to the 'Tips and Notes' section, illustrating how to model non-linear associations in meta-regression using polynomial and restricted cubic spline models. You can read the little tutorial [[tips:non_linear_meta_regression|here]].
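
The polynomial case boils down to something like this (a sketch with placeholder names ''dat'' and ''xi'', not the tutorial's actual code):

<code rsplus>
library(metafor)

# meta-regression allowing for a quadratic (i.e., non-linear) relationship
# between a moderator 'xi' and the outcomes; 'dat' is assumed to contain
# the variables yi, vi, and xi
res <- rma(yi, vi, mods = ~ xi + I(xi^2), data=dat)

# predicted average outcomes across the observed range of the moderator
xs <- seq(min(dat$xi), max(dat$xi), length=100)
predict(res, newmods = cbind(xs, xs^2))
</code>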

==== March 31st, 2020: Interpreting Coefficients in Meta-Regression Models with (Log) Risk Ratios ====

Based on a question I received, I wrote up a little tutorial on how to interpret the coefficients in meta-regression models when using the log risk ratio as the outcome measure. When exponentiating coefficients, this leads to values that represent ratios of risk ratios, which may not be entirely obvious. You can read the tutorial [[tips:meta_regression_with_log_rr|here]].
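
In a nutshell (a hedged sketch with a hypothetical 0/1 moderator called ''blinded''; the tutorial linked above works this out properly):

<code rsplus>
library(metafor)

# meta-regression with log risk ratios (yi, vi) and a 0/1 dummy moderator;
# 'dat' is a placeholder dataset assumed to contain yi, vi, and 'blinded'
res <- rma(yi, vi, mods = ~ blinded, data=dat)

# exponentiating the coefficients: the intercept becomes the risk ratio for
# studies with blinded=0, while the coefficient for 'blinded' becomes the
# ratio of risk ratios comparing blinded=1 to blinded=0 studies
exp(coef(res))
</code>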

==== March 20th, 2020: Two New Functions for Network Meta-Analysis ====

As a follow-up to yesterday's note, it is maybe worth mentioning that I also added two functions that are especially useful for those conducting network meta-analyses with the metafor package. With the ''to.wide()'' function, one can rearrange a dataset that is in an arm-based 'long' format into a contrast-based 'wide' format. Two examples illustrating the use of this function can be found under [[https://wviechtb.github.io/metafor/reference/to.wide.html|help(to.wide)]] (the link takes you to the corresponding help file, which is nicely formatted and shows the output of the examples). Once the dataset is in such a wide format, an important next step is the construction of variables that reflect which two groups are being compared with each other in each row (through +1, 0, -1 coding). Such a contrast matrix can be easily created with the ''contrmat()'' function. See [[https://wviechtb.github.io/metafor/reference/contrmat.html|help(contrmat)]] for the help file and two examples illustrating its use. The analysis of such data (using arm- and contrast-based models) is illustrated under [[https://wviechtb.github.io/metafor/reference/dat.hasselblad1998.html|help(dat.hasselblad1998)]].

==== March 19th, 2020: New Version (2.4-0) on Its Way to CRAN ====

Just submitted a new version (2.4-0) to CRAN. This update was prompted by the upcoming change in R where the new default will be ''stringsAsFactors=FALSE'' (at long last!). As a result, some tests were failing on R-devel, so these needed fixing. Along the way, I made various minor internal updates and added some convenience functionality to several functions. The full changelog can be found [[:updates#changes_in_version_24-0_2020-03-19|here]].
  
news/news.txt · Last modified: 2024/03/29 10:44 by Wolfgang Viechtbauer