The metafor Package

A Meta-Analysis Package for R

??? Why is the package called 'metafor'?
  
!!! The name 'metafor' stands for 'META-analysis FOr R' (so the package name is not 'metaphor', even if your spellchecker insists on that spelling ...).
  
??? What is (was) the 'mima' function?
!!! There are actually many R packages available for conducting meta-analyses. To get an appreciation for what the "meta-analysis package ecosystem" currently looks like, take a look at the [[http://cran.r-project.org/web/views/MetaAnalysis.html|Task View for Meta-Analysis]], which provides a pretty thorough overview of the different packages and their capabilities.
  
??? Why can I not just use the lm(), lme(), and lmer() functions to conduct my meta-analysis?
  
!!! First of all, meta-analytic models (as can be fitted with the ''[[https://wviechtb.github.io/metafor/reference/rma.uni.html|rma.uni()]]'' and ''[[https://wviechtb.github.io/metafor/reference/rma.mv.html|rma.mv()]]'' functions) make different assumptions about the nature of the sampling variances (that indicate the (im)precision of the estimates) compared to models fitted by the ''lm()'', ''lme()'', and ''lmer()'' functions, which assume that the sampling variances are known only up to a proportionality constant (when using their ''weights'' arguments). Extra steps must therefore be taken to fix up the output to bring the results in line with standard meta-analytic practices. For more details, I have written up a more comprehensive [[tips:rma_vs_lm_lme_lmer|comparison of the rma() and the lm(), lme(), and lmer() functions]].
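As a quick illustration of this difference (a sketch using the ''dat.bcg'' example data that comes with metafor; the variable names below are from that dataset), the same equal-effects model can be fitted with ''rma()'' and with ''lm()'': the point estimates agree, but the standard errors differ, because ''lm()'' rescales the weights by the estimated residual variance.

```r
library(metafor)

# compute log risk ratios and sampling variances for the BCG vaccine data
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# equal-effects model: sampling variances vi treated as exactly known
res.rma <- rma(yi, vi, data=dat, method="FE")

# weighted regression: weights 1/vi known only up to a proportionality constant
res.lm <- lm(yi ~ 1, weights=1/vi, data=dat)

coef(res.rma)                # point estimate
coef(res.lm)                 # same point estimate
res.rma$se                   # SE based on exactly known vi
summary(res.lm)$coef[1, 2]   # SE rescaled by the residual standard error
```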
??? For mixed-effects models, how is the $R^2$ statistic computed by the rma() function?
  
!!! The pseudo $R^2$ statistic (Raudenbush, 2009) is computed with $$R^2 = \frac{\hat{\tau}_{RE}^2 - \hat{\tau}_{ME}^2}{\hat{\tau}_{RE}^2} = 1 - \frac{\hat{\tau}_{ME}^2}{\hat{\tau}_{RE}^2},$$ where $\hat{\tau}_{RE}^2$ denotes the estimated value of $\tau^2$ based on the random-effects model (i.e., the total amount of heterogeneity) and $\hat{\tau}_{ME}^2$ denotes the estimated value of $\tau^2$ based on the mixed-effects model (i.e., the residual amount of heterogeneity). It can happen that $\hat{\tau}_{RE}^2 < \hat{\tau}_{ME}^2$, in which case $R^2$ is set to zero. Again, the value of $R^2$ will change depending on the estimator of $\tau^2$ used. Also note that this statistic is only computed when the mixed-effects model includes an intercept (so that the random-effects model is clearly nested within the mixed-effects model). You can also use the ''[[https://wviechtb.github.io/metafor/reference/anova.rma.html|anova()]]'' function to compute $R^2$ for any two models that are known to be nested.
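The statistic can be reproduced by hand (a sketch using the ''dat.bcg'' example data, with ''ablat'' as an illustrative moderator; both models must use the same $\tau^2$ estimator, REML by default):

```r
library(metafor)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

res.RE <- rma(yi, vi, data=dat)                  # random-effects model
res.ME <- rma(yi, vi, mods = ~ ablat, data=dat)  # mixed-effects model

# pseudo R^2, truncated at zero (in percent, as metafor reports it)
R2 <- 100 * max(0, (res.RE$tau2 - res.ME$tau2) / res.RE$tau2)
c(manual = R2, reported = res.ME$R2)   # should match
```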
  
??? For random-effects models fitted with the rma() function, how is the prediction interval computed by the predict() function?
!!! By default, the interval is computed with $$\hat{\mu} \pm z_{1-\alpha/2} \sqrt{\mbox{SE}[\hat{\mu}]^2 + \hat{\tau}^2},$$ where $\hat{\mu}$ is the estimated average true outcome, $z_{1-\alpha/2}$ is the $100 \times (1-\alpha/2)$th percentile of a standard normal distribution (e.g., $1.96$ for $\alpha = .05$), $\mbox{SE}[\hat{\mu}]$ is the standard error of $\hat{\mu}$, and $\hat{\tau}^2$ is the estimated amount of heterogeneity (i.e., the variance in the true outcomes across studies). If the model was fitted with the Knapp and Hartung (2003) method (i.e., with ''test="knha"'' in ''[[https://wviechtb.github.io/metafor/reference/rma.uni.html|rma()]]''), then instead of $z_{1-\alpha/2}$, the $100 \times (1-\alpha/2)$th percentile of a t-distribution with $k-1$ degrees of freedom is used.
  
Note that this differs slightly from Riley et al. (2011), who suggest using a t-distribution with $k-2$ degrees of freedom for constructing the interval. Neither a normal distribution nor a t-distribution with $k-1$ or $k-2$ degrees of freedom is exactly correct; all of these are approximations. The computations in metafor are done in the way described above, so that the prediction interval is identical to the confidence interval for $\mu$ when $\hat{\tau}^2 = 0$, which is arguably the logical behavior. If the prediction interval should be computed exactly as described by Riley et al. (2011), one can use the argument ''pi.type="riley"'' in ''predict()''.
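A sketch of the default computation (again using the ''dat.bcg'' example data), comparing the bounds from ''predict()'' with the formula above:

```r
library(metafor)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)   # random-effects model (default test="z")

predict(res)   # pi.lb and pi.ub give the prediction interval bounds

# the same bounds computed manually
c(res$beta) + c(-1, 1) * qnorm(0.975) * sqrt(res$se^2 + res$tau2)
```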
  
??? How is the Freeman-Tukey transformation of proportions and incidence rates computed?
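For proportions, the following sketch checks ''escalc()'' with ''measure="PFT"'' against the double arcsine transformation and its sampling variance as given in the metafor documentation (the values of ''xi'' and ''ni'' below are purely hypothetical):

```r
library(metafor)

xi <- 4; ni <- 20   # hypothetical: 4 events out of 20 individuals

dat <- escalc(measure="PFT", xi=xi, ni=ni)

# double arcsine transformation and its sampling variance computed directly
yi.manual <- 0.5 * (asin(sqrt(xi / (ni + 1))) + asin(sqrt((xi + 1) / (ni + 1))))
vi.manual <- 1 / (4 * ni + 2)

c(dat$yi, yi.manual)   # identical
c(dat$vi, vi.manual)   # identical
```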
==== References ====
  
Freeman, M. F., & Tukey, J. W. (1950). Transformations related to the angular and the square root. //Annals of Mathematical Statistics, 21//(4), 607--611. https://doi.org/10.1214/aoms/1177729756
  
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. //Statistics in Medicine, 21//(11), 1539--1558. https://doi.org/10.1002/sim.1186
  
van Houwelingen, H. C., Arends, L. R., & Stijnen, T. (2002). Advanced methods in meta-analysis: Multivariate approach and meta-regression. //Statistics in Medicine, 21//(4), 589--624. https://doi.org/10.1002/sim.1040
  
Lipsey, M. W., & Wilson, D. B. (2001). //Practical meta-analysis.// Sage, Thousand Oaks, CA.
  
Raudenbush, S. W. (2009). Analyzing effect sizes: Random effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), //The handbook of research synthesis and meta-analysis// (2nd ed., pp. 295--315). New York: Russell Sage Foundation.

Riley, R. D., Higgins, J. P. T., & Deeks, J. J. (2011). Interpretation of random effects meta-analyses. //British Medical Journal, 342//, d549. https://doi.org/10.1136/bmj.d549
  
Sterne, J. A. C. (Ed.) (2009). //Meta-analysis in Stata: An updated collection from the Stata Journal.// Stata Press, College Station, TX.
  
faq.txt · Last modified: 2024/06/18 19:26 by Wolfgang Viechtbauer