The metafor Package

A Meta-Analysis Package for R

~~NOTOC~~
  
==== 2024-03-28: Version 4.6-0 Released on CRAN ====

A new version of the metafor package has been released on CRAN. This update occurred a bit sooner than originally planned, but there were two minor issues that flagged metafor as requiring an update, as otherwise it would have been archived on CRAN, which would have led to some unpleasant consequences for other packages that depend on metafor. So an update needed to be pushed out relatively quickly.

The issues themselves were easy to fix. The first was a very minor formatting oversight in one of the help files. The second issue was the result of two packages being archived that metafor had listed as suggested packages, namely [[https://cran.r-project.org/package=Rcgmin|Rcgmin]] and [[https://cran.r-project.org/package=Rvmmin|Rvmmin]]. These packages provided some alternative optimizers that could be chosen for fitting certain models, but they were not essential dependencies for the metafor package and hence could be easily removed. Actually, these optimizers have been moved to the [[https://cran.r-project.org/package=optimx|optimx]] package and will probably be reincorporated into metafor later on.

The update itself took a bit longer (prompting a few well-deserved reminders from the CRAN team) due to other work-related responsibilities, plus I wanted to finish a few other updates to the package that I was working on in the meantime. The full changelog can be found [[:updates#version_46-0_2024-03-28|here]], but I would like to highlight a few items.

First of all, I have finally added effect size measures for computing the standardized mean change using raw score standardization with pooled standard deviations to the ''[[https://wviechtb.github.io/metafor/reference/escalc.html|escalc()]]'' function. Traditionally, following Becker's 1988 seminal paper ([[https://doi.org/10.1111/j.2044-8317.1988.tb00901.x|link]]), this measure was computed with $$d = \frac{\bar{x}_1 - \bar{x}_2}{s_1},$$ where $\bar{x}_1$ and $\bar{x}_2$ are the means at the two measurement occasions and $s_1$ is the standard deviation of the raw scores observed at the first measurement occasion (followed by a slight bias correction applied to $d$). In principle, one can also use $s_2$ in the denominator, but crucially, only one of the two standard deviations is used for the standardization. While there is nothing inherently wrong with doing so (and it simplifies the derivation of the exact distribution of $d$), some would prefer to pool the two standard deviations and hence use $$d = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{s_1^2 + s_2^2}{2}}}$$ as the effect size measure (i.e., we average the variances and then take the square root thereof). This is now possible with ''measure="SMCRP"'' under the assumption that the true variances are the same at the two measurement occasions and with ''measure="SMCRPH"'' without this assumption, that is, allowing for heteroscedasticity of the two variances (in the latter case, the computation of the sampling variance needs to be adjusted slightly). See the [[https://wviechtb.github.io/metafor/reference/escalc.html#-outcome-measures-for-change-or-matched-pairs|documentation of the escalc() function]] for further details.
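
As a quick illustration, here is a minimal sketch of how the new measures can be computed (the data below are made up for illustration purposes; the argument names follow the conventions used for the other change measures in ''escalc()''):

<code r>
library(metafor)

# hypothetical data: means and SDs at the two measurement occasions,
# sample sizes, and correlations between the two measurements
dat <- data.frame(m1i  = c(5.2, 4.9, 6.1),
                  m2i  = c(4.1, 4.0, 5.3),
                  sd1i = c(1.2, 1.1, 1.4),
                  sd2i = c(1.3, 1.0, 1.6),
                  ni   = c(30, 45, 28),
                  ri   = c(0.60, 0.55, 0.70))

# standardized mean change with pooled SDs (assuming homoscedasticity)
escalc(measure="SMCRP", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i,
       ni=ni, ri=ri, data=dat)

# the same, but allowing for heteroscedasticity of the two variances
escalc(measure="SMCRPH", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i,
       ni=ni, ri=ri, data=dat)
</code>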
  
Second, the ''[[https://wviechtb.github.io/metafor/reference/selmodel.html|selmodel()]]'' function has received a few updates. To start, the function no longer stops with an error when one or more of the intervals defined by the ''steps'' argument do not contain any observed p-values (instead, a warning is issued and model fitting proceeds, although it may fail). For automating analyses and for simulation studies, one can now set ''ptable=TRUE'', in which case the function simply returns the table with the number of p-values falling into the various intervals, based on which one can decide how to proceed.
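
For example, the following sketch (using the ''dat.hackshaw1998'' dataset that comes with the package as an assumed example, with arbitrarily chosen steps) first inspects the p-value table before fitting a step function selection model:

<code r>
library(metafor)

# fit a random-effects model to the data from Hackshaw et al. (1998)
res <- rma(yi, vi, data=dat.hackshaw1998)

# just return the table with the number of p-values per interval
selmodel(res, type="stepfun", steps=c(0.025, 0.10, 0.50, 1), ptable=TRUE)

# fit the step function selection model
sel <- selmodel(res, type="stepfun", steps=c(0.025, 0.10, 0.50, 1))
sel
</code>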
Furthermore, by setting the argument ''decreasing=TRUE'', it is now possible to fit the step function model under the assumption that the selection function parameters are a monotonically decreasing function of the p-values. This feature is somewhat experimental -- it requires using optimization with inequality constraints or a clever reformulation of the objective function that enforces such a constraint, which complicates some internal issues and makes model fitting more difficult. One can also debate whether one should ever make this assumption in the first place, but it is a feature I wanted to implement for testing and research purposes anyway.
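
Continuing the sketch above, such a model could be fitted as follows (treat this as an experimental illustration rather than a recommended analysis):

<code r>
# step function model with monotonically decreasing selection
# function parameters (experimental)
sel.dec <- selmodel(res, type="stepfun", steps=c(0.025, 0.10, 0.50, 1),
                    decreasing=TRUE)
sel.dec
</code>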
Finally, per request, it is now also possible to pass the observed p-values of the studies directly to the function via the ''pval'' argument. This can in principle be of interest when the observed p-values were not computed with a standard Wald-type test (as assumed by the function) but based on a different method. This is an undocumented and experimental feature, because doing so creates a bit of a mismatch between the assumptions internal to the function (since the integration step to compute the weighted density of the effect size estimates still assumes the use of a standard Wald-type test). To what extent this is actually a problem and whether this feature can improve the accuracy of the results from selection models remains to be determined in future research.
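
A minimal sketch of this, again continuing the example above (the p-values here are made-up stand-ins for values one might have obtained via some other testing procedure; remember that this feature is undocumented and experimental):

<code r>
# hypothetical p-values for the studies (e.g., from exact tests);
# random values are used here purely as placeholders
pvals <- runif(res$k, 0, 1)

# pass the observed p-values directly to the function
sel.p <- selmodel(res, type="stepfun", steps=c(0.025, 0.10, 0.50, 1),
                  pval=pvals)
</code>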
==== April 21st, 2021: Better Degrees of Freedom Calculation ====

In random/mixed-effects models as can be fitted with the [[https://wviechtb.github.io/metafor/reference/rma.html|rma()]] function, tests and confidence intervals for the model coefficients are by default constructed based on a standard normal distribution.((In a random-effects model, there is of course just one coefficient, namely $\hat{\mu}$, the estimated average true outcome.)) In general, it is better to use the Knapp-Hartung method for this purpose, which does two things: (1) the standard errors of the model coefficients are estimated in a slightly different way and (2) a t-distribution with $k-p$ degrees of freedom is used (where $k$ is the total number of estimates and $p$ the number of coefficients in the model). When conducting a simultaneous (or 'omnibus') test of multiple coefficients, an F-distribution with $m$ and $k-p$ degrees of freedom is used (for the 'numerator' and 'denominator' degrees of freedom, respectively), with $m$ denoting the number of coefficients tested. To use this method, set the argument ''test="knha"''.

The Knapp-Hartung method cannot be directly generalized to more complex models as can be fitted with the [[https://wviechtb.github.io/metafor/reference/rma.mv.html|rma.mv()]] function, although we can still use t- and F-distributions for conducting tests of one or multiple model coefficients in the context of such models. This is possible by setting ''test="t"''. However, this raises the question of how the (denominator) degrees of freedom for such tests should be calculated. By default, the degrees of freedom are calculated as described above. However, this method does not reflect the complexities of models that are typically fitted with the ''rma.mv()'' function. For example, in multilevel models (with multiple estimates nested within studies), a predictor (or 'moderator') may be measured at the study level (i.e., it is constant for all estimates belonging to the same study) or at the level of the individual estimates (i.e., it might vary within studies). By setting the argument ''dfs="contain"'', a method is used for calculating the degrees of freedom that tends to provide tests with better control of the Type I error rate and confidence intervals with coverage rates closer to the nominal level. See the documentation of the function for further details.

==== April 3rd, 2021: Scatter Plots / Bubble Plots for Meta-Regression Models ====

I finally got around to adding a function to the package for drawing scatter plots (also known as bubble plots) for meta-regression models. See the documentation of the [[https://wviechtb.github.io/metafor/reference/regplot.html|regplot()]] function for further details. An example illustrating such a plot is provided [[plots:meta_analytic_scatterplot|here]].
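
To tie the two items above together, here is a minimal sketch (using the ''dat.bcg'' dataset that comes with the package as an assumed example) that fits a meta-regression model using the Knapp-Hartung method and then draws the corresponding bubble plot:

<code r>
library(metafor)

# compute log risk ratios from the BCG vaccine dataset
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# meta-regression on publication year, using the Knapp-Hartung method
res <- rma(yi, vi, mods = ~ year, data=dat, test="knha")
res

# bubble plot for the fitted meta-regression model
regplot(res)

# for more complex models fitted with rma.mv(), one could use t-tests
# with the "contain" method for the degrees of freedom, e.g. (assuming
# a hypothetical multilevel structure with estimates nested in studies):
# res <- rma.mv(yi, vi, random = ~ 1 | study/esid, data=dat,
#               test="t", dfs="contain")
</code>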