faq
??? Why is the package called ''metafor''?
!!! The name ''metafor'' stands for 'META-analysis FOr R'.
??? What is (was) the ''mima()'' function?
Second, results provided by the metafor package have been compared with published results described in articles and books (the assumption being that those results are in fact correct). On this website, I provide a number of such [[analyses|analysis examples]] that you can examine yourself. All of these examples (and some more) are also encapsulated in automated tests.
Third, I have conducted extensive simulation studies for many of the methods implemented in the package to ensure that their statistical properties are as one would expect based on the underlying theory. To give a simple example, under the assumptions of an equal-effects model (i.e., homogeneous true effects, normally distributed effect size estimates, known sampling variances), the empirical rejection rate of $\mbox{H}_0{:}\; \theta = 0$ must be nominal (within the margin of error one would expect when randomly simulating such data). This is in fact the case, providing support that the ''rma()'' function is working correctly.

<code rsplus>
library(metafor)
</code>
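To illustrate this logic, here is a minimal sketch (not the actual test code used for the package; the number of studies and the sampling variances are arbitrary choices) that simulates data under an equal-effects model with $\theta = 0$ and checks the empirical rejection rate:

<code rsplus>
library(metafor)

set.seed(1234)
k  <- 10                    # number of studies per meta-analysis
vi <- runif(k, 0.01, 0.1)   # known sampling variances

# simulate 1000 meta-analyses under H_0 and store the p-values
pvals <- replicate(1000, {
   yi <- rnorm(k, mean = 0, sd = sqrt(vi))   # homogeneous true effects (theta = 0)
   rma(yi, vi, method = "EE")$pval           # equal-effects model
})

mean(pvals <= 0.05)   # should be close to the nominal 0.05 level
</code>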
!!! There are actually many R packages available for conducting meta-analyses. To get an appreciation for what the "meta-analysis universe" in R looks like, see the CRAN Task View on Meta-Analysis.
??? Why can I not just use the lm(), lme(), and lmer() functions to conduct my meta-analysis?
!!! First of all, meta-analytic models (as can be fitted with the ''rma()'' function) assume that the sampling variances of the observed outcomes are known, whereas the lm(), lme(), and lmer() functions treat the (residual) variance components as unknown parameters to be estimated from the data.
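For example, an equal-effects model and a weighted regression via ''lm()'' (with weights equal to the inverse sampling variances) yield the same pooled estimate, but not the same standard error, since ''lm()'' still estimates a residual variance from the data. A sketch using the BCG vaccine dataset that ships with the metafor package:

<code rsplus>
library(metafor)

# compute log risk ratios and corresponding sampling variances
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

res <- rma(yi, vi, data=dat, method="EE")   # sampling variances treated as known
fit <- lm(yi ~ 1, weights=1/vi, data=dat)   # residual variance estimated from the data

coef(res); coef(fit)      # identical pooled estimates
res$se; sqrt(vcov(fit))   # but different standard errors
</code>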
??? For mixed-effects models, how is the $R^2$ statistic computed by the rma() function?
!!! The pseudo $R^2$ statistic (Raudenbush, 2009) is computed as the proportional reduction in the estimated amount of heterogeneity (i.e., $\hat{\tau}^2$) when including the moderator(s) in the model.
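Specifically, letting $\hat{\tau}^2_{RE}$ denote the estimated amount of heterogeneity from the random-effects model without moderators and $\hat{\tau}^2_{ME}$ the corresponding estimate from the mixed-effects model that includes them, the statistic can be expressed as $$R^2 = \frac{\hat{\tau}^2_{RE} - \hat{\tau}^2_{ME}}{\hat{\tau}^2_{RE}},$$ with negative values truncated to zero (see help(rma) for the details of this computation).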
??? For random-effects models fitted with the rma() function, how is the prediction interval computed by the predict() function?
!!! By default, the interval is computed with $$\hat{\mu} \pm z_{1-\alpha/2} \sqrt{\hat{\tau}^2 + \mbox{SE}[\hat{\mu}]^2},$$ where $\hat{\mu}$ is the estimated average true effect, $z_{1-\alpha/2}$ is the $100 \times (1-\alpha/2)$th percentile of a standard normal distribution, $\hat{\tau}^2$ is the estimated amount of heterogeneity, and $\mbox{SE}[\hat{\mu}]$ is the standard error of the estimated average true effect.
Note that this differs slightly from Riley et al. (2011), who suggest using a t-distribution with $k-2$ degrees of freedom for constructing the interval. Neither a normal distribution nor a t-distribution with $k-1$ or $k-2$ degrees of freedom is correct; all of these are approximations. The computations in metafor are done in the way described above, so that the prediction interval is identical to the confidence interval for $\mu$ when $\hat{\tau}^2 = 0$, which could be argued is the logical thing that should happen. If the prediction interval should be computed exactly as described by Riley et al. (2011), one can use argument ''pi.type="Riley"'' of the predict() function.
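A sketch of both computations, using the ''pi.type'' argument of the predict() function and the BCG vaccine dataset that ships with the metafor package:

<code rsplus>
library(metafor)

# compute log risk ratios and fit a random-effects model
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)

predict(res)                    # default prediction interval (as described above)
predict(res, pi.type="Riley")   # interval as described by Riley et al. (2011)
</code>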
??? How is the Freeman-Tukey transformation of proportions and incidence rates computed?
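!!! As computed by the ''escalc()'' function (measures ''PFT'' and ''IRFT''; see help(escalc) for the details), for a proportion based on $x_i$ events out of $n_i$ individuals, the Freeman-Tukey (double arcsine) transformation is $$y_i = \frac{1}{2} \left( \mbox{asin} \sqrt{\frac{x_i}{n_i+1}} + \mbox{asin} \sqrt{\frac{x_i+1}{n_i+1}} \right)$$ with sampling variance $v_i = \frac{1}{4 n_i + 2}$. Analogously, for an incidence rate based on $x_i$ events over $t_i$ person-years, $$y_i = \frac{1}{2} \left( \sqrt{\frac{x_i}{t_i}} + \sqrt{\frac{x_i+1}{t_i}} \right)$$ with sampling variance $v_i = \frac{1}{4 t_i}$ (Freeman & Tukey, 1950).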
==== References ====
Freeman, M. F., & Tukey, J. W. (1950). Transformations related to the angular and the square root. //Annals of Mathematical Statistics, 21//(4), 607--611.

Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. //Statistics in Medicine, 21//(11), 1539--1558.

van Houwelingen, H. C., Arends, L. R., & Stijnen, T. (2002). Advanced methods in meta-analysis: Multivariate approach and meta-regression. //Statistics in Medicine, 21//(4), 589--624.

Lipsey, M. W., & Wilson, D. B. (2001). //Practical meta-analysis.// Thousand Oaks, CA: Sage.

Raudenbush, S. W. (2009). Analyzing effect sizes: Random effects models. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), //The handbook of research synthesis and meta-analysis// (2nd ed., pp. 295--315). New York: Russell Sage Foundation.

Riley, R. D., Higgins, J. P. T., & Deeks, J. J. (2011). Interpretation of random effects meta-analyses. //British Medical Journal, 342//, d549. https://doi.org/10.1136/bmj.d549

Sterne, J. A. C. (Ed.). (2009). //Meta-analysis in Stata: An updated collection from the Stata Journal.// College Station, TX: Stata Press.
faq.txt · Last modified: 2024/06/18 19:26 by Wolfgang Viechtbauer