===== Weights in Models Fitted with the rma.mv() Function =====
  
One of the fundamental concepts underlying a meta-analysis is the idea of weighting: More precise estimates are given more weight in the analysis than less precise estimates. In 'standard' equal- and random-effects models (such as those that can be fitted with the ''rma()'' function), the weighting scheme is quite simple and covered in standard textbooks on meta-analysis. However, in more complex models (such as those that can be fitted with the ''rma.mv()'' function), the way estimates are weighted is more complex. Here, I will discuss some of those intricacies.
  
==== Models Fitted with the rma() Function ====
Variable ''yi'' contains the log risk ratios and variable ''vi'' the corresponding sampling variances.
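
The study names mentioned further below (e.g., Stein and Aronson, 1953; TPT Madras, 1980) suggest that these are the log risk ratios from the BCG vaccine trials; the following is a minimal sketch of how such ''yi'' and ''vi'' values could be computed with ''escalc()'' (the use of the ''dat.bcg'' dataset and the object name ''dat'' are assumptions here).

<code rsplus>
# minimal sketch; assumes the BCG vaccine trial data in dat.bcg
library(metafor)

# compute log risk ratios (yi) and corresponding sampling variances (vi)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg,
              data=dat.bcg, slab=paste(author, year))
head(dat[, c("author", "year", "yi", "vi")])
</code>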
  
We now fit equal- and random-effects models to these estimates.
  
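A minimal sketch of these model fits and the corresponding forest plots is shown below (the object names ''res.EE'' and ''res.RE'' and the data frame ''dat'' from the sketch above are assumptions).

<code rsplus>
# minimal sketch; assumes the data frame 'dat' from the sketch above

# equal-effects model (all studies assumed to share one true effect)
res.EE <- rma(yi, vi, data=dat, method="EE")
res.EE

# random-effects model (REML estimation of tau^2 is the default)
res.RE <- rma(yi, vi, data=dat)
res.RE

# forest plots of the two models with the weights shown
forest(res.EE, showweights=TRUE, header=TRUE, atransf=exp)
forest(res.RE, showweights=TRUE, header=TRUE, atransf=exp)
</code>
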
{{ tips:weights_forest_rma.png?nolink }}
  
In the equal-effects model, the weights given to the estimates are equal to $w_i = 1 / v_i$, where $v_i$ is the sampling variance of the $i$th study. This is called 'inverse-variance weighting' and can be shown to be the most efficient way of weighting the estimates (i.e., the summary estimate has the lowest possible variance and is therefore most precise). As a result, the estimates with the lowest sampling variances, namely the ones from Stein and Aronson (1953), TPT Madras (1980), and Comstock et al. (1974), are given considerably more weight than the rest of the studies.((Depending on the outcome measure, the sampling variance of an estimate is not just an inverse function of the sample size of the study, but can also depend on other factors (e.g., for log risk ratios as used in the present example, the prevalence of the outcome also matters). Therefore, while Stein and Aronson (1953) has a smaller sample size than, for example, Hart and Sutherland (1977), it has a smaller sampling variance and hence receives more weight. However, roughly speaking, the weight an estimate receives is directly related to the study's sample size.)) Together, these three studies receive almost 80% of the total weight and therefore exert a great deal of influence on the summary estimate. The TPT Madras study in particular 'pulls' the estimate to the right (closer to a risk ratio of 1).
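
As a quick check of this formula, the weights (in %) used by the equal-effects model can be compared with the inverse-variance weights computed by hand (again a sketch, assuming the objects ''res.EE'' and ''dat'' from above).

<code rsplus>
# sketch; assumes objects 'res.EE' and 'dat' from the code above

# weights (in %) used by the equal-effects model
round(weights(res.EE), 2)

# the same weights computed by hand: w_i = 1/v_i, rescaled to percentages
wi <- 1 / dat$vi
round(100 * wi / sum(wi), 2)
</code>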
  
In the random-effects model, the estimates are weighted with $w_i = 1 / (\hat{\tau}^2 + v_i)$. Therefore, not only the sampling variance, but also the (estimated) amount of heterogeneity (i.e., the variance in the underlying true effects) is taken into consideration when determining the weights. When $\hat{\tau}^2$ is large (relative to the size of the sampling variances), then the weights actually become quite similar to each other. Hence, smaller (less precise) studies may receive almost as much weight as larger (more precise) studies. We can in fact see this happening in the present example. While the three studies mentioned above still receive the largest weights, their weights are now much more similar to those of the other studies. As a result, the summary estimate is not as strongly pulled to the right by the TPT Madras study.
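
The same comparison can be made for the random-effects model, where the estimate of $\tau^2$ enters the weights (again a sketch, assuming the objects ''res.RE'' and ''dat'' from above).

<code rsplus>
# sketch; assumes objects 'res.RE' and 'dat' from the code above

# weights (in %) used by the random-effects model
round(weights(res.RE), 2)

# the same weights computed by hand: w_i = 1/(tau^2 + v_i), rescaled to percentages
wi <- 1 / (res.RE$tau2 + dat$vi)
round(100 * wi / sum(wi), 2)
</code>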