The metafor Package

A Meta-Analysis Package for R

Frequently Asked Questions

??? How are $I^2$ and $H^2$ computed in the metafor package?
  
!!! For random-effects models, the $I^2$ statistic is computed with $$I^2 = 100\% \times \frac{\hat{\tau}^2}{\hat{\tau}^2 + \tilde{v}},$$ where $\hat{\tau}^2$ is the estimated value of $\tau^2$ and $$\tilde{v} = \frac{(k-1) \sum w_i}{(\sum w_i)^2 - \sum w_i^2},$$ where $w_i = 1 / v_i$ is the inverse of the sampling variance of the $i$th study ($\tilde{v}$ is equation 9 in Higgins & Thompson, 2002, and can be regarded as the 'typical' within-study variance of the observed effect sizes or outcomes). The $H^2$ statistic is computed with $$H^2 = \frac{\hat{\tau}^2 + \tilde{v}}{\tilde{v}}.$$ Analogous equations are used for mixed-effects models.
  
Therefore, depending on the estimator of $\tau^2$ used, the values of $I^2$ and $H^2$ will change. For random-effects models, $I^2$ and $H^2$ are often computed in practice with $I^2 = 100\% \times (Q-(k-1))/Q$ and $H^2 = Q/(k-1)$, where $Q$ denotes the statistic for the test of heterogeneity and $k$ the number of studies (i.e., observed effects or outcomes) included in the meta-analysis. The equations used in the metafor package to compute these statistics are based on more general definitions and have the advantage that the values of $I^2$ and $H^2$ will be consistent with the estimated value of $\tau^2$ (i.e., if $\hat{\tau}^2 = 0$, then $I^2 = 0$ and $H^2 = 1$, and if $\hat{\tau}^2 > 0$, then $I^2 > 0$ and $H^2 > 1$).
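
To illustrate, here is a minimal sketch in R (using the ''dat.bcg'' example dataset purely for illustration; any dataset prepared with ''escalc()'' works the same way) showing that the $I^2$ and $H^2$ values reported by ''rma()'' can be reproduced from $\hat{\tau}^2$ and $\tilde{v}$ as defined above:

<code r>
library(metafor)

### example data (BCG vaccine trials) and a random-effects model
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)

wi <- 1/dat$vi                                   # inverse sampling variances
k  <- length(wi)                                 # number of studies
vt <- (k-1) * sum(wi) / (sum(wi)^2 - sum(wi^2))  # 'typical' sampling variance
100 * res$tau2 / (res$tau2 + vt)                 # should match res$I2
(res$tau2 + vt) / vt                             # should match res$H2
</code>
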
??? How is the pseudo $R^2$ statistic computed in the metafor package?

!!! The pseudo $R^2$ statistic (Raudenbush, 2009) is computed with $$R^2 = \frac{\hat{\tau}_{RE}^2 - \hat{\tau}_{ME}^2}{\hat{\tau}_{RE}^2},$$ where $\hat{\tau}_{RE}^2$ denotes the estimated value of $\tau^2$ based on the random-effects model (i.e., the total amount of heterogeneity) and $\hat{\tau}_{ME}^2$ denotes the estimated value of $\tau^2$ based on the mixed-effects model (i.e., the residual amount of heterogeneity). It can happen that $\hat{\tau}_{RE}^2 < \hat{\tau}_{ME}^2$, in which case $R^2$ is set to zero. Again, the value of $R^2$ will change depending on the estimator of $\tau^2$ used. Also note that this statistic is only computed when the mixed-effects model includes an intercept (so that the random-effects model is clearly nested within the mixed-effects model). You can also use the ''anova.rma.uni()'' function to compute $R^2$ for any two models that are known to be nested.
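
As a rough sketch (again using the ''dat.bcg'' data, with ''ablat'' as a purely illustrative moderator), the value reported as $R^2$ can be reproduced by fitting the random-effects and the mixed-effects model and comparing the two $\tau^2$ estimates:

<code r>
library(metafor)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res.RE <- rma(yi, vi, data=dat)                  # random-effects model
res.ME <- rma(yi, vi, mods = ~ ablat, data=dat)  # mixed-effects model

### pseudo R^2 (in percent); negative values are truncated to zero
max(0, 100 * (res.RE$tau2 - res.ME$tau2) / res.RE$tau2)
res.ME$R2                                        # value reported by the package
</code>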
  
??? For random-effects models fitted with the rma() function, how is the prediction interval computed by the predict() function?
  
!!! By default, the interval is computed with $$\hat{\mu} \pm z_{1-\alpha/2} \sqrt{\mbox{SE}[\hat{\mu}]^2 + \hat{\tau}^2},$$ where $\hat{\mu}$ is the estimated average true outcome, $z_{1-\alpha/2}$ is the $100 \times (1-\alpha/2)$th percentile of a standard normal distribution (e.g., $1.96$ for $\alpha = .05$), $\mbox{SE}[\hat{\mu}]$ is the standard error of $\hat{\mu}$, and $\hat{\tau}^2$ is the estimated amount of heterogeneity (i.e., the variance in the true outcomes across studies). If the model was fitted with the Knapp and Hartung (2003) method (i.e., with ''test="knha"'' in ''rma()''), then instead of $z_{1-\alpha/2}$, the $100 \times (1-\alpha/2)$th percentile of a t-distribution with $k-1$ degrees of freedom is used.
  
Note that this differs slightly from Riley et al. (2011), who suggest using a t-distribution with $k-2$ degrees of freedom for constructing the interval. Neither a normal distribution nor a t-distribution with $k-1$ or $k-2$ degrees of freedom is exactly correct; all of these are approximations. The computations in metafor are done in the way described above, so that the prediction interval is identical to the confidence interval for $\mu$ when $\hat{\tau}^2 = 0$, which arguably is the logical thing that should happen.
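
As a purely illustrative sketch with the same example data, the default interval returned by ''predict()'' (labeled ''pi.lb'' and ''pi.ub'' in current versions of the package) can be reproduced directly from the elements of the fitted model object:

<code r>
library(metafor)

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)

predict(res)   # pi.lb and pi.ub give the bounds of the prediction interval

### manual computation of the default (normal-based) prediction interval
c(res$beta) + c(-1, 1) * qnorm(.975) * sqrt(res$se^2 + res$tau2)

### with test="knha", qnorm(.975) is replaced by qt(.975, df=res$k - 1)
</code>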
  
??? How is the Freeman-Tukey transformation of proportions and incidence rates computed?