The metafor Package

A Meta-Analysis Package for R

~~NOTOC~~
  
==== 2024-03-28: Version 4.6-0 Released on CRAN ====

A new version of the metafor package has been released on CRAN. This update occurred a bit sooner than originally planned, but two minor issues flagged metafor as requiring an update, as otherwise it would have been archived on CRAN, which would have led to some unpleasant consequences for other packages that depend on metafor. So an update needed to be pushed out relatively quickly.

The issues themselves were easy to fix. The first was a very minor formatting oversight in one of the help files. The second issue was the result of two packages being archived that metafor had listed as suggested packages, namely [[https://cran.r-project.org/package=Rcgmin|Rcgmin]] and [[https://cran.r-project.org/package=Rvmmin|Rvmmin]]. These packages provided some alternative optimizers that could be chosen for fitting certain models, but they were not essential dependencies for the metafor package and hence could be easily removed. Actually, these optimizers have been moved to the [[https://cran.r-project.org/package=optimx|optimx]] package and will probably be reincorporated into metafor later on.

The update itself took a bit longer (prompting a few well-deserved reminders from the CRAN team) due to other work-related responsibilities, plus I wanted to finish a few other updates to the package I was working on in the meantime. The full changelog can be found [[:updates#version_46-0_2024-03-28|here]], but I would like to highlight a few items.

First of all, I have finally added effect size measures for computing the standardized mean change using raw score standardization with pooled standard deviations to the ''[[https://wviechtb.github.io/metafor/reference/escalc.html|escalc()]]'' function. Traditionally, following Becker's 1988 seminal paper ([[https://doi.org/10.1111/j.2044-8317.1988.tb00901.x|link]]), this measure was computed with $$d = \frac{\bar{x}_1 - \bar{x}_2}{s_1},$$ where $\bar{x}_1$ and $\bar{x}_2$ are the means at the two measurement occasions and $s_1$ is the standard deviation of the raw scores observed at the first measurement occasion (followed by a slight bias correction applied to $d$). In principle, one can also use $s_2$ in the denominator, but crucially, only one of the two standard deviations is used for the standardization. While there is nothing inherently wrong with doing so (and it simplifies the derivation of the exact distribution of $d$), some would prefer to pool the two standard deviations and hence use $$d = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2 + s_2^2}{2}}}$$ as the effect size measure (i.e., we average the variances and then take the square root thereof). This is now possible with ''measure="SMCRP"'' under the assumption that the true variances are the same at the two measurement occasions and with ''measure="SMCRPH"'' without this assumption, that is, allowing for heteroscedasticity of the two variances (in the latter case, the computation of the sampling variance needs to be adjusted slightly). See the [[https://wviechtb.github.io/metafor/reference/escalc.html#-outcome-measures-for-change-or-matched-pairs|documentation of the escalc function]] for further details.
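
A minimal sketch of how these new measures can be computed (the data below are made up purely for illustration; the argument names are the usual ''escalc()'' arguments for change-score measures):

<code r>
library(metafor)

# hypothetical data: means and SDs at the two measurement occasions,
# pre-post correlations, and sample sizes (made up for illustration)
dat <- data.frame(m1i  = c(5.2, 4.8, 6.1),
                  m2i  = c(4.1, 4.0, 5.0),
                  sd1i = c(1.9, 2.1, 2.4),
                  sd2i = c(2.2, 1.8, 2.6),
                  ri   = c(0.55, 0.60, 0.48),
                  ni   = c(30, 45, 28))

# standardized mean change with pooled SDs (equal true variances assumed)
dat1 <- escalc(measure="SMCRP", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i,
               ri=ri, ni=ni, data=dat)

# the same, but allowing for heteroscedasticity of the two variances
dat2 <- escalc(measure="SMCRPH", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i,
               ri=ri, ni=ni, data=dat)

# random-effects model based on the pooled-SD version
rma(yi, vi, data=dat1)
</code>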
  
Second, the ''[[https://wviechtb.github.io/metafor/reference/selmodel.html|selmodel()]]'' function has received a few updates. To start, the function no longer stops with an error when one or more intervals defined by the ''steps'' argument do not contain any observed p-values (instead, a warning is issued and model fitting proceeds, but may fail). For automating analyses and simulation studies, one can now set ''ptable=TRUE'', in which case the function will simply return the table with the number of p-values falling into the various intervals, based on which one can decide how to proceed (see the first sketch below).

Furthermore, by setting argument ''decreasing=TRUE'', it is now possible to fit the step function model under the assumption that the selection function parameters are a monotonically decreasing function of the p-values (see the second sketch below). This feature is somewhat experimental -- it requires using optimization with inequality constraints or a clever reformulation of the objective function that enforces such a constraint, which complicates some internal issues and makes model fitting more difficult. One can also debate whether one should ever make this assumption in the first place, but it is a feature I wanted to implement for testing and research purposes anyway.

Finally, per request, it is now also possible to pass the observed p-values of the studies to the function directly via the ''pval'' argument. This can in principle be of interest when the observed p-values were not computed with a standard Wald-type test (as assumed by the function) but based on a different method. This is an undocumented and experimental feature, because doing so creates a bit of a mismatch between the assumptions internal to the function (since the integration step to compute the weighted density of the effect size estimates still assumes the use of a standard Wald-type test). To what extent this is actually a problem and whether this feature can improve the accuracy of the results from selection models remains to be determined in future research.
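
A minimal sketch of the ''ptable=TRUE'' option, using the ''dat.hackshaw1998'' dataset included in the package (the cutoffs in ''steps'' are chosen just for illustration):

<code r>
library(metafor)

# random-effects model for the studies on passive smoking and lung cancer
res <- rma(yi, vi, data=dat.hackshaw1998)

# just return the table with the number of p-values falling into the
# intervals defined by 'steps' (no model fitting)
selmodel(res, type="stepfun", steps=c(0.025, 0.05, 0.5, 1), ptable=TRUE)

# fit the corresponding step function selection model
selmodel(res, type="stepfun", steps=c(0.025, 0.05, 0.5, 1))
</code>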
  
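
And a sketch of the (experimental) ''decreasing=TRUE'' option; the (undocumented) ''pval'' argument is only shown as a commented-out line, since ''pvals'' would have to be a vector of p-values obtained in some other way:

<code r>
library(metafor)

res <- rma(yi, vi, data=dat.hackshaw1998)

# step function model with the constraint that the selection function
# parameters are monotonically decreasing in the p-values (experimental)
selmodel(res, type="stepfun", steps=c(0.025, 0.05, 0.5, 1), decreasing=TRUE)

# passing observed p-values directly (undocumented/experimental); 'pvals'
# would be a vector with the p-values of the studies from some other test
# selmodel(res, type="stepfun", steps=c(0.025, 0.05, 0.5, 1), pval=pvals)
</code>
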
==== June 9th, 2021: Version 3.0 Released on CRAN ====

A new version of the metafor package (version 3.0) has been published on CRAN. This version includes a lot of updates that have accumulated in the development version of the package over the past 14-15 months. Some highlights:

  * The documentation has been further improved. I now make use of the [[https://cran.r-project.org/package=mathjaxr|mathjaxr]] package to nicely render equations in the HTML help pages (and in order to do this, I had to create the mathjaxr package in the first place!).
  * ''selmodel()'' was added for fitting a wide variety of selection models, including the beta selection model by Citkowicz and Vevea (2017), various models described by Preston et al. (2004), and step function models (with the three-parameter selection model (3PSM) as a special case) (see the first sketch after this list).
  * As another technique related to publication/small-sample bias, the ''tes()'' function was added to carry out the test of 'excess significance' (Ioannidis & Trikalinos, 2007; see also Francis, 2013).
  * The ''regtest()'' function now shows the 'limit estimate' of the (average) true effect/outcome. This is in essence what the PET/PEESE methods do (when the standard errors / sampling variances are used as predictors in a meta-regression model).
  * One can now also fit so-called 'location-scale models' via the ''rma()'' function (using the ''scale'' argument). With this, one can specify predictors for the amount of heterogeneity in the outcomes (to examine if the outcomes are more/less heterogeneous under certain circumstances) (see the second sketch after this list).
  * The ''regplot()'' function can be used to draw bubble plots based on meta-regression models. For models involving multiple predictors, the function draws the line for the 'marginal relationship' of a predictor. Confidence/prediction interval bands can also be shown.
  * Two functions were added that are related to the meta-analysis of correlation matrices / regression coefficients: ''rcalc()'' for calculating the var-cov matrix of correlation coefficients and ''matreg()'' for fitting regression models based on correlation/covariance matrices.
  * Sometimes, it might be necessary to aggregate a meta-analytic dataset with multiple outcomes from the same study to the study level. An ''aggregate()'' method for ''escalc'' objects was added that can do this, while (approximately) accounting for various types of dependencies.
  * When using functions that allow for parallel processing, progress bars can now also be shown, thanks to the [[https://cran.r-project.org/package=pbapply|pbapply]] package. Gives you an idea whether to just grab a coffee or go out for lunch while your computer is chugging along.
  * 24 new datasets were added (there are now over 60 datasets included in the package). These datasets also cover advanced methodology, such as multivariate/multilevel models, network meta-analysis, phylogenetic meta-analysis, and models with a spatial correlation structure.

Lots of smaller tweaks/improvements were also made. I feel like so much has accumulated that this warranted a version jump to version 3.0.
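
A sketch combining some of the publication/small-sample bias tools mentioned above, using the BCG vaccine dataset included in the package (purely for illustration):

<code r>
library(metafor)

# log risk ratios for the BCG vaccine trials
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)

# three-parameter selection model (step function with a single cutoff at 0.025)
selmodel(res, type="stepfun", steps=c(0.025, 1))

# test of 'excess significance'
tes(res)

# regression test for funnel plot asymmetry; the output also shows the
# 'limit estimate' of the average true outcome (cf. PET/PEESE)
regtest(res)
</code>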
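
And a sketch of a location-scale model and of the ''aggregate()'' method (the use of ''measure="GEN"'' to tag ''dat.konstantopoulos2011'' as an 'escalc' object is just one way to set this up; the correlation of 0.7 is an arbitrary illustrative value):

<code r>
library(metafor)

# location-scale model: absolute latitude as a predictor of both the average
# log risk ratio and the amount of heterogeneity
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
rma(yi, vi, mods = ~ ablat, scale = ~ ablat, data=dat)

# aggregate multiple estimates per district to one estimate per district,
# (approximately) accounting for their dependence via an assumed correlation
dat2 <- escalc(measure="GEN", yi=yi, vi=vi, data=dat.konstantopoulos2011)
aggregate(dat2, cluster=dat2$district, rho=0.7)
</code>
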
==== April 21st, 2021: Better Degrees of Freedom Calculation ====

In random/mixed-effects models as can be fitted with the [[https://wviechtb.github.io/metafor/reference/rma.html|rma()]] function, tests and confidence intervals for the model coefficients are by default constructed based on a standard normal distribution. In general, it is better to use the Knapp-Hartung method for this purpose, which does two things: (1) the standard errors of the model coefficients are estimated in a slightly different way and (2) a t-distribution is used with $k-p$ degrees of freedom (where $k$ is the total number of estimates and $p$ the number of coefficients in the model). When conducting a simultaneous (or 'omnibus') test of multiple coefficients, an F-distribution with $m$ and $k-p$ degrees of freedom is used (for the 'numerator' and 'denominator' degrees of freedom, respectively), with $m$ denoting the number of coefficients tested. To use this method, set argument ''test="knha"''.

The Knapp-Hartung method cannot be directly generalized to more complex models as can be fitted with the [[https://wviechtb.github.io/metafor/reference/rma.mv.html|rma.mv()]] function, although we can still use t- and F-distributions for conducting tests of one or multiple model coefficients in the context of such models. This is possible by setting ''test="t"''. However, this then raises the question of how the (denominator) degrees of freedom for such tests should be calculated. By default, the degrees of freedom are calculated as described above. However, this method does not reflect the complexities of models that are typically fitted with the ''rma.mv()'' function. For example, in multilevel models (with multiple estimates nested within studies), a predictor (or 'moderator') may be measured at the study level (i.e., it is constant for all estimates belonging to the same study) or at the level of the individual estimates (i.e., it might vary within studies). By setting argument ''dfs="contain"'', a method is used for calculating the degrees of freedom that tends to provide tests with better control of the Type I error rate and confidence intervals with closer to nominal coverage rates. See the documentation of the function for further details.
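
A minimal sketch of these options, using datasets included in the package (the particular moderators are chosen just for illustration):

<code r>
library(metafor)

# mixed-effects meta-regression with the Knapp-Hartung method
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
rma(yi, vi, mods = ~ ablat, data=dat, test="knha")

# multilevel model (estimates nested within districts) with t-/F-tests and
# the 'contain' method for calculating the degrees of freedom
rma.mv(yi, vi, mods = ~ year, random = ~ 1 | district/school,
       data=dat.konstantopoulos2011, test="t", dfs="contain")
</code>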
  
==== April 3rd, 2021: Scatter Plots / Bubble Plots for Meta-Regression Models ====

I finally got around to adding a function to the package for drawing scatter plots (also known as bubble plots) for meta-regression models. See the documentation of the [[https://wviechtb.github.io/metafor/reference/regplot.html|regplot()]] function for further details. An example illustrating such a plot is provided [[plots:meta_analytic_scatterplot|here]].
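
A quick sketch of such a plot, based on the BCG vaccine dataset included in the package:

<code r>
library(metafor)

# meta-regression of the log risk ratios on absolute latitude
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, mods = ~ ablat, data=dat)

# bubble plot with the regression line and confidence interval band
regplot(res, xlab="Absolute Latitude")
</code>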