The metafor Package

A Meta-Analysis Package for R

!!! By default, the interval is computed with $$\hat{\mu} \pm z_{1-\alpha/2} \sqrt{\mbox{SE}[\hat{\mu}]^2 + \hat{\tau}^2},$$ where $\hat{\mu}$ is the estimated average true outcome, $z_{1-\alpha/2}$ is the $100 \times (1-\alpha/2)$th percentile of a standard normal distribution (e.g., $1.96$ for $\alpha = .05$), $\mbox{SE}[\hat{\mu}]$ is the standard error of $\hat{\mu}$, and $\hat{\tau}^2$ is the estimated amount of heterogeneity (i.e., the variance in the true outcomes across studies). If the model was fitted with the Knapp and Hartung (2003) method (i.e., with ''test="knha"'' in ''[[https://wviechtb.github.io/metafor/reference/rma.uni.html|rma()]]''), then instead of $z_{1-\alpha/2}$, the $100 \times (1-\alpha/2)$th percentile of a t-distribution with $k-1$ degrees of freedom is used.
  
Note that this differs slightly from Riley et al. (2011), who suggest using a t-distribution with $k-2$ degrees of freedom for constructing the interval. Neither a normal distribution nor a t-distribution with $k-1$ or $k-2$ degrees of freedom is correct; all of these are approximations. The computations in metafor are done in the way described above, so that the prediction interval is identical to the confidence interval for $\mu$ when $\hat{\tau}^2 = 0$, which, one could argue, is the logical thing that should happen. If the prediction interval should be computed exactly as described by Riley et al. (2011), one can use the argument ''pi.type="riley"'' in ''predict()''.
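
For example, here is a minimal sketch using the ''dat.bcg'' dataset that comes with metafor (the choice of dataset and effect size measure is purely illustrative), comparing the default prediction interval with the one computed as described by Riley et al. (2011):

<code r>
library(metafor)

# calculate log risk ratios and corresponding sampling variances
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# fit a random-effects model (REML estimation by default)
res <- rma(yi, vi, data=dat)

# default prediction interval (pi.lb and pi.ub), back-transformed to the risk ratio scale
predict(res, transf=exp)

# prediction interval computed as described by Riley et al. (2011)
predict(res, transf=exp, pi.type="riley")
</code>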
  
??? How is the Freeman-Tukey transformation of proportions and incidence rates computed?
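
As a minimal sketch, such transformed outcomes can be obtained with ''escalc()'' (using ''measure="PFT"'' for proportions and ''measure="IRFT"'' for incidence rates; the data below are purely hypothetical):

<code r>
library(metafor)

# hypothetical event counts (xi), sample sizes (ni), and person-time totals (ti)
dat <- data.frame(xi = c(3, 8, 12), ni = c(20, 40, 60), ti = c(100, 150, 200))

# Freeman-Tukey (double arcsine) transformed proportions and sampling variances
escalc(measure="PFT", xi=xi, ni=ni, data=dat)

# Freeman-Tukey transformed incidence rates and sampling variances
escalc(measure="IRFT", xi=xi, ti=ti, data=dat)
</code>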
==== References ====
  
Freeman, M. F., & Tukey, J. W. (1950). Transformations related to the angular and the square root. //Annals of Mathematical Statistics, 21//(4), 607--611. https://doi.org/10.1214/aoms/1177729756
  
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. //Statistics in Medicine, 21//(11), 1539--1558. https://doi.org/10.1002/sim.1186
  
van Houwelingen, H. C., Arends, L. R., & Stijnen, T. (2002). Advanced methods in meta-analysis: Multivariate approach and meta-regression. //Statistics in Medicine, 21//(4), 589--624. https://doi.org/10.1002/sim.1040
  
Lipsey, M. W., & Wilson, D. B. (2001). //Practical meta-analysis.// Sage, Thousand Oaks, CA.
  
Raudenbush, S. W. (2009). Analyzing effect sizes: Random effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), //The handbook of research synthesis and meta-analysis// (2nd ed., pp. 295--315). New York: Russell Sage Foundation.

Riley, R. D., Higgins, J. P. T., & Deeks, J. J. (2011). Interpretation of random effects meta-analyses. //British Medical Journal, 342//, d549. https://doi.org/10.1136/bmj.d549
  
Sterne, J. A. C. (Ed.) (2009). //Meta-analysis in Stata: An updated collection from the Stata Journal.// Stata Press, College Station, TX.
  