The metafor Package

A Meta-Analysis Package for R

analyses:konstantopoulos2011

As noted in the article, the data have an unbalanced structure, with the number of studies/schools per district ranging from 3 to 11 (''range(table(dat$district))'') and an average of 5.1 (''round(mean(table(dat$district)), 1)'').
  
To obtain the descriptives about the effect size estimates per district (Table 3 in the paper), we can use:
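A sketch of one way to compute such descriptives (the exact code and the set of descriptives reported in Table 3 may differ):
<code rsplus>
# number of estimates, mean, minimum, and maximum of the effect size estimates per district
aggregate(yi ~ district, data=dat, FUN=function(x)
   c(k=length(x), mean=round(mean(x),3), min=round(min(x),3), max=round(max(x),3)))
</code>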
==== Two-Level Model ====
  
First, a standard (two-level) random-effects model is fitted to the data. Here, we treat the 56 studies as independent (which, as we will see later, is not justified). We can fit such a model with:
<code rsplus>
res <- rma(yi, vi, data=dat)
</code>
The ''random = ~ 1 | study'' argument adds random effects corresponding to the ''study'' level to the model. Since ''study'' takes on a unique value for every row of the dataset, we are essentially adding a random effect for every estimate. Note that these are the same results as we obtained earlier. As before, moderators/covariates can be added to the model via the ''mods'' argument.
  
==== Three-Level Model ====
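The model fitted in this section can be sketched as follows (the object name ''res.ml'' and the exact form of the call are our assumptions, based on the description below):
<code rsplus>
# three-level model: random effects for districts and for studies within districts
res.ml <- rma.mv(yi, vi, random = ~ 1 | district/study, data=dat)
res.ml
</code>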
These results correspond to those given on the left-hand side of Table 5 in the paper. Somewhat confusingly, the results given in the table appear to be based on ML (instead of REML) estimation. With the argument ''method="ML"'', we could reproduce the results given in the paper more closely.
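For example (a sketch, continuing with the assumed call from above):
<code rsplus>
# refit the three-level model using maximum likelihood estimation
res.ml.ML <- rma.mv(yi, vi, random = ~ 1 | district/study, data=dat, method="ML")
</code>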

**Note**: We would obtain the same results when using ''random = ~ 1 | district/school''. Although the ''school'' variable restarts at 1 within each district (so school numbers are reused across districts), this nested notation means that random effects are added for each level of ''district'' and for each level of ''school'' within each level of ''district''. The latter is the same as adding random effects for every level of ''study'', so the results are identical.
  
==== Profile Likelihood Plots ====
  
Whenever we start fitting more complicated models with the ''rma.mv()'' function, it is a good idea to check the profile likelihood plots of the variance components of the model. The ''profile()'' function can be used to obtain such plots. Here, the model includes two variance components, which are denoted $\sigma^2_1$ (for the between-district heterogeneity) and $\sigma^2_2$ (for the between-school-within-district heterogeneity). Likelihood profiles for these two components can be obtained with:
<code rsplus>
par(mfrow=c(2,1))
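# profile each of the two variance components in turn
# (the name 'res.ml' for the three-level model object is assumed here)
profile(res.ml, sigma2=1)
profile(res.ml, sigma2=2)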
</code>
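The estimated correlation between the true effects within districts can then be computed as the proportion of the total heterogeneity that is due to between-district differences, which gives the value of 0.665 referred to below. A minimal sketch of this computation (again assuming the three-level model object is called ''res.ml''):
<code rsplus>
# between-district variance divided by the total amount of heterogeneity
round(res.ml$sigma2[1] / sum(res.ml$sigma2), 3)
</code>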
Therefore, the underlying true effects within districts are estimated to correlate quite strongly (the simpler two-level model we fitted at the beginning ignores this dependence).
  
Also, it is worth noting that the sum of the two variance components can be interpreted as the total amount of heterogeneity in the true effects:
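A sketch of this computation (object name assumed as before):
<code rsplus>
# total heterogeneity = between-district variance + within-district variance
round(sum(res.ml$sigma2), 3)
</code>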
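As described in the next paragraph, the same model can also be fitted using a multivariate parameterization. A minimal sketch of such a call (the object name ''res.mv'' matches the one used in the profiling code further below; the exact call may differ):
<code rsplus>
# multivariate parameterization: correlated random effects for the studies within districts,
# with a compound symmetric variance-covariance structure
res.mv <- rma.mv(yi, vi, random = ~ factor(study) | district, struct="CS", data=dat)
res.mv
</code>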
The ''random = ~ factor(study) | district'' argument adds correlated random effects for the different studies within districts to the model, where the variance-covariance matrix of the random effects takes on a compound symmetric structure (''struct="CS"'' is the default). Note that the estimate of $\rho$ is exactly the same as the ICC value we computed earlier based on the multilevel model. Also, the estimate of $\tau^2$ obtained from the multivariate parameterization is the same as the total amount of heterogeneity computed earlier based on the multilevel model. Using ''random = ~ factor(school) | district'' would again yield identical results.
  
As long as $\rho$ is estimated to be positive, the multilevel and multivariate parameterizations are in essence identical. In fact, the log likelihoods of the two models should be identical, which we can confirm with:
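A sketch, using the assumed object names from above:
<code rsplus>
# the log likelihoods of the two parameterizations should be (essentially) identical
logLik(res.ml)
logLik(res.mv)
</code>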
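The profile plots referred to in the next sentence can be obtained in the same way as before (a sketch; under the compound symmetric structure, ''res.mv'' has one $\tau^2$ and one $\rho$ component):
<code rsplus>
par(mfrow=c(2,1))
# profile the variance component and the correlation of the multivariate model
profile(res.mv, tau2=1)
profile(res.mv, rho=1)
</code>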
Again, both plots indicate that the estimates obtained in fact correspond to the peaks of the respective likelihood profiles, with decreasing log likelihoods as the values of the parameters are moved away from the actual estimates.
  
Since the log likelihood drops off quite dramatically when $\rho$ is set equal to a value very close to 1, the left-hand side of the profile gets 'squished' together at the top and it becomes more difficult to see the curvature around the estimate. One can change the x-axis limits with the ''xlim'' argument. For example, with ''profile(res.mv, rho=1, xlim=c(0.3,0.9))'' we can obtain a nicer profile plot for $\rho$.
  
==== Uncorrelated Sampling Errors ====