The metafor Package

A Meta-Analysis Package for R

faq [2021/11/08 16:03] Wolfgang Viechtbauer
faq [2022/08/30 11:18] Wolfgang Viechtbauer
??? Why is the package called 'metafor'?
  
!!! The name 'metafor' stands for 'META-analysis FOr R' (so the package name is not 'metaphor', even if your spellchecker insists on that spelling ...).
  
??? What is (was) the 'mima' function?
Similar (and much more thorough/extensive) tests have been conducted for the more intricate methods in the package.
  
It may also be useful to note that there is now an appreciable user base of the metafor package. The [[https://www.jstatsoft.org/v36/i03/|Viechtbauer (2010)]] article describing the package [[http://scholar.google.nl/scholar?oi=bibs&hl=en&cites=8753688964455559681|has been cited over 10,000 times]]. Many of the citations are from applied meta-analyses and/or methodological/statistical papers that have used the metafor package as part of their research. This increases the chances that any bugs would be detected, reported, and corrected.
  
Finally, I have become very proficient at hitting the [[https://xkcd.com/323/|Ballmer Peak]].
!!! By default, the interval is computed with $$\hat{\mu} \pm z_{1-\alpha/2} \sqrt{\mbox{SE}[\hat{\mu}]^2 + \hat{\tau}^2},$$ where $\hat{\mu}$ is the estimated average true outcome, $z_{1-\alpha/2}$ is the $100 \times (1-\alpha/2)$th percentile of a standard normal distribution (e.g., $1.96$ for $\alpha = .05$), $\mbox{SE}[\hat{\mu}]$ is the standard error of $\hat{\mu}$, and $\hat{\tau}^2$ is the estimated amount of heterogeneity (i.e., the variance in the true outcomes across studies). If the model was fitted with the Knapp and Hartung (2003) method (i.e., with ''test="knha"'' in ''[[https://wviechtb.github.io/metafor/reference/rma.uni.html|rma()]]''), then instead of $z_{1-\alpha/2}$, the $100 \times (1-\alpha/2)$th percentile of a t-distribution with $k-1$ degrees of freedom is used.
  
Note that this differs slightly from Riley et al. (2011), who suggest using a t-distribution with $k-2$ degrees of freedom for constructing the interval. Neither a normal distribution nor a t-distribution with $k-1$ or $k-2$ degrees of freedom is correct; all of these are approximations. The computations in metafor are done in the way described above, so that the prediction interval is identical to the confidence interval for $\mu$ when $\hat{\tau}^2 = 0$, which one could argue is the logical thing that should happen. If the prediction interval should be computed exactly as described by Riley et al. (2011), one can use argument ''pi.type="riley"'' in ''predict()''.
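As a concrete illustration of the formula above, the following R sketch evaluates the default (normal-based) and Knapp-Hartung-style prediction interval bounds directly. The numbers are toy values, not from any actual meta-analysis; in practice, ''predict()'' applied to a fitted ''rma()'' model object returns these bounds.

```r
# Toy values standing in for a fitted random-effects model
# (in practice these come from rma()):
mu_hat <- 0.5    # estimated average true outcome
se_mu  <- 0.10   # standard error of mu_hat
tau2   <- 0.04   # estimated amount of heterogeneity
k      <- 13     # number of studies
alpha  <- 0.05

# default: standard normal critical value
crit_z <- qnorm(1 - alpha/2)
pi_z <- mu_hat + c(-1, 1) * crit_z * sqrt(se_mu^2 + tau2)

# with test="knha": t-distribution with k-1 degrees of freedom
crit_t <- qt(1 - alpha/2, df = k - 1)
pi_t <- mu_hat + c(-1, 1) * crit_t * sqrt(se_mu^2 + tau2)

round(pi_z, 3)  # normal-based prediction interval
round(pi_t, 3)  # Knapp-Hartung-based interval (slightly wider)
```

Note that setting ''tau2'' to 0 in this sketch collapses the prediction interval to the usual confidence interval for $\mu$, as described above.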
  
??? How is the Freeman-Tukey transformation of proportions and incidence rates computed?
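!!! For proportions, a minimal R sketch of the standard Freeman and Tukey (1950) double arcsine transformation of $x$ events out of $n$ is shown below. This is the textbook form of the transformation and its commonly used sampling variance; whether it matches the package's ''escalc()'' computation (''measure="PFT"'') in every detail is an assumption here, so consult the package documentation for the exact implementation.

```r
# Standard Freeman-Tukey double arcsine transformation of a proportion x/n
# (textbook form, assumed here; not copied verbatim from the metafor source).
ft_prop <- function(x, n) {
  0.5 * (asin(sqrt(x / (n + 1))) + asin(sqrt((x + 1) / (n + 1))))
}

# The sampling variance is commonly taken as 1 / (4n + 2) under this
# parameterization (again an assumption, not a quote from the package).
ft_var <- function(n) {
  1 / (4 * n + 2)
}

ft_prop(10, 40)  # transformed value of the proportion 10/40
ft_var(40)       # corresponding sampling variance
```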
==== References ====
  
Freeman, M. F., & Tukey, J. W. (1950). Transformations related to the angular and the square root. //Annals of Mathematical Statistics, 21//(4), 607--611. https://doi.org/10.1214/aoms/1177729756
  
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. //Statistics in Medicine, 21//(11), 1539--1558. https://doi.org/10.1002/sim.1186
  
van Houwelingen, H. C., Arends, L. R., & Stijnen, T. (2002). Advanced methods in meta-analysis: Multivariate approach and meta-regression. //Statistics in Medicine, 21//(4), 589--624. https://doi.org/10.1002/sim.1040
  
Lipsey, M. W., & Wilson, D. B. (2001). //Practical meta-analysis.// Sage, Thousand Oaks, CA.
  
Raudenbush, S. W. (2009). Analyzing effect sizes: Random effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), //The handbook of research synthesis and meta-analysis// (2nd ed., pp. 295--315). New York: Russell Sage Foundation.
Riley, R. D., Higgins, J. P. T., & Deeks, J. J. (2011). Interpretation of random effects meta-analyses. //British Medical Journal, 342//, d549. https://doi.org/10.1136/bmj.d549

Sterne, J. A. C. (Ed.) (2009). //Meta-analysis in Stata: An updated collection from the Stata Journal.// Stata Press, College Station, TX.
  
faq.txt · Last modified: 2024/06/18 19:26 by Wolfgang Viechtbauer