A new version of the metafor package (version 3.0) has been published on CRAN. This version includes a lot of updates that have accumulated in the development version of the package over the past 14-15 months. Some highlights:
- A new selmodel() function was added for fitting a wide variety of selection models, including the beta selection model by Citkowicz and Vevea (2017), various models described by Preston et al. (2004), and step function models (with the three-parameter selection model (3PSM) as a special case).
- A new tes() function was added to carry out the test of 'excess significance' (Ioannidis & Trikalinos, 2007; see also Francis, 2013).
- The regtest() function now shows the 'limit estimate' of the (average) true effect/outcome. This is in essence what the PET/PEESE methods do (when the standard errors / sampling variances are used as predictors in a meta-regression model).
- Location-scale models can now be fitted with the rma() function (using the scale argument). With this, one can specify predictors for the amount of heterogeneity in the outcomes (to examine whether the outcomes are more/less heterogeneous under certain circumstances).
- The regplot() function can be used to draw bubble plots based on meta-regression models. For models involving multiple predictors, the function draws the line for the 'marginal relationship' of a predictor. Confidence/prediction interval bands can also be shown.
- An aggregate() method for escalc objects was added that can combine multiple estimates within studies into a single estimate per study, while (approximately) accounting for various types of dependencies.

Lots of smaller tweaks/improvements were also made. I feel like so much has accumulated that this warranted a version jump to version 3.0.
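To make the highlights above a bit more concrete, here is a minimal sketch of some of the new functionality, using the dat.bcg and dat.konstantopoulos2011 datasets that come with metafor. The specific choices below (the cutpoint in steps, the moderators, and the rho value) are illustrative assumptions, not recommendations.

```r
library(metafor)

# compute log risk ratios for the BCG vaccine trials
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# random-effects model
res <- rma(yi, vi, data=dat)

# step function selection model with one cutpoint at p = .025 (i.e., the 3PSM)
selmodel(res, type="stepfun", steps=c(.025))

# test of 'excess significance'
tes(res)

# regression test for funnel plot asymmetry; the output now also shows the
# 'limit estimate' of the average true outcome (as in PET/PEESE)
regtest(res)

# location-scale model: absolute latitude as a predictor of both the average
# outcome and the amount of heterogeneity (via the 'scale' argument)
rma(yi, vi, mods = ~ ablat, scale = ~ ablat, data=dat)

# bubble plot for a meta-regression model with multiple predictors, showing
# the marginal relationship for 'ablat'
res.mr <- rma(yi, vi, mods = ~ ablat + year, data=dat)
regplot(res.mr, mod="ablat")

# aggregate multiple estimates within clusters: dat.konstantopoulos2011 has
# multiple schools per district; rho = 0.6 is an illustrative value for the
# assumed correlation among the sampling errors within districts
dat2 <- escalc(measure="GEN", yi=yi, vi=vi, data=dat.konstantopoulos2011)
agg <- aggregate(dat2, cluster=district, rho=0.6)
```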
In random/mixed-effects models as can be fitted with the rma() function, tests and confidence intervals for the model coefficients are by default constructed based on a standard normal distribution. In general, it is better to use the Knapp-Hartung method for this purpose, which does two things: (1) the standard errors of the model coefficients are estimated in a slightly different way and (2) a t-distribution with $k-p$ degrees of freedom is used (where $k$ is the total number of estimates and $p$ the number of coefficients in the model). When conducting a simultaneous (or 'omnibus') test of multiple coefficients, an F-distribution with $m$ and $k-p$ degrees of freedom is used (for the 'numerator' and 'denominator' degrees of freedom, respectively), with $m$ denoting the number of coefficients tested. To use this method, set the argument test="knha".
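As a small sketch (again using the BCG data that ship with the package, with two illustrative moderators): here $k = 13$ and $p = 3$, so the coefficient tests use a t-distribution with 10 degrees of freedom and the omnibus test of the two moderators an F-distribution with 2 and 10 degrees of freedom.

```r
library(metafor)

# log risk ratios for the BCG vaccine trials
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# mixed-effects meta-regression with the Knapp-Hartung method: coefficient
# tests use t-distributions with k-p = 10 dfs; the omnibus test of the two
# moderators uses an F-distribution with m = 2 and k-p = 10 dfs
res <- rma(yi, vi, mods = ~ ablat + year, data=dat, test="knha")
res
```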
The Knapp-Hartung method cannot be directly generalized to more complex models as can be fitted with the rma.mv() function, although we can still use t- and F-distributions for conducting tests of one or multiple model coefficients in the context of such models. This is possible by setting test="t". However, this raises the question of how the (denominator) degrees of freedom for such tests should be calculated. By default, the degrees of freedom are calculated as described above (i.e., $k-p$), but this method does not reflect the complexities of models that are typically fitted with the rma.mv() function. For example, in multilevel models (with multiple estimates nested within studies), a predictor (or 'moderator') may be measured at the study level (i.e., it is constant across all estimates belonging to the same study) or at the level of the individual estimates (i.e., it may vary within studies). By setting the argument dfs="contain", a method is used for calculating the degrees of freedom that tends to provide tests with better control of the Type I error rate and confidence intervals with coverage rates closer to the nominal level. See the documentation of the function for further details.
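For instance, a minimal sketch with the multilevel dat.konstantopoulos2011 dataset included with metafor (schools nested within districts); the random-effects structure and the moderator chosen here are just one reasonable setup for these data, not the only one:

```r
library(metafor)

# multilevel dataset: multiple schools nested within districts
dat <- dat.konstantopoulos2011

# multilevel model with random effects for districts and for schools within
# districts; test="t" requests t-/F-tests and dfs="contain" applies the
# containment method for the (denominator) degrees of freedom
res <- rma.mv(yi, vi, mods = ~ year,
              random = ~ 1 | district/school,
              data=dat, test="t", dfs="contain")
res
```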