===== Comparing Estimates of Independent Meta-Analyses or Subgroups =====

Suppose we have summary estimates (e.g., estimated average effects) obtained from two independent meta-analyses or from two subgroups of studies within the same meta-analysis and we want to test whether the estimates differ from each other. A Wald-type test can be used for this purpose. Alternatively, we can fit a meta-regression model to the combined data with a dummy variable that distinguishes the two sets of studies and test the coefficient corresponding to the dummy variable. Both approaches are illustrated below.

==== Data Preparation ====

We will use the ''dat.bcg'' dataset for this illustration, which contains the results from 13 studies examining the effectiveness of the BCG vaccine against tuberculosis. We first compute the log risk ratios and corresponding sampling variances with the ''escalc()'' function and recode the allocation method into two groups ("random" versus "other"):
<code rsplus>
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
dat$alloc <- ifelse(dat$alloc == "random", "random", "other")
dat
</code>
<code output>
   trial               author year tpos  tneg cpos  cneg ablat  alloc      yi     vi
1      1              Aronson 1948    4   119   11   128    44 random -0.8893 0.3256
2      2     Ferguson & Simes 1949    6   300   29   274    55 random -1.5854 0.1946
3      3      Rosenthal et al 1960    3   228   11   209    42 random -1.3481 0.4154
4      4    Hart & Sutherland 1977   62 13536  248 12619    52 random -1.4416 0.0200
5      5 Frimodt-Moller et al 1973   33  5036   47  5761    13  other -0.2175 0.0512
6      6      Stein & Aronson 1953  180  1361  372  1079    44  other -0.7861 0.0069
7      7     Vandiviere et al 1973    8  2537   10   619    19 random -1.6209 0.2230
8      8           TPT Madras 1980  505 87886  499 87892    13 random  0.0120 0.0040
9      9     Coetzee & Berjak 1968   29  7470   45  7232    27 random -0.4694 0.0564
10    10      Rosenthal et al 1961   17  1699   65  1600    42  other -1.3713 0.0730
11    11       Comstock et al 1974  186 50448  141 27197    18  other -0.3394 0.0124
12    12   Comstock & Webster 1969    5  2493    3  2338    33  other  0.4459 0.5325
13    13       Comstock et al 1976   27 16886   29 17825    33  other -0.0173 0.0714
</code>

==== Separate Meta-Analyses ====

First, we fit two separate random-effects models within each subset of studies defined by the ''alloc'' variable:
<code rsplus>
res1 <- rma(yi, vi, data=dat, subset=alloc=="random")
res2 <- rma(yi, vi, data=dat, subset=alloc=="other")
</code>

We then combine the estimates and standard errors from each model into a data frame. We also add a variable to distinguish the two models and, for reasons to be explained in more detail below, we add the estimated amounts of heterogeneity within each subset to the data frame.
<code rsplus>
dat.comp <- data.frame(estimate = c(coef(res1), coef(res2)), stderror = c(res1$se, res2$se),
                       meta = c("random","other"), tau2 = round(c(res1$tau2, res2$tau2), 3))
dat.comp
</code>
<code output>
    estimate  stderror   meta  tau2
1 -0.9709645 0.2759557 random 0.393
2 -0.4812706 0.2169886  other 0.212
</code>

We can now compare the two estimates (i.e., the estimated average log risk ratios) by feeding them back to the ''rma()'' function, using the standard errors as the corresponding ''sei'' values and including ''meta'' as a moderator. Note that we use a fixed-effects model here, since the (residual) heterogeneity within each subset was already accounted for when fitting the random-effects models above:
<code rsplus>
rma(estimate, sei=stderror, mods = ~ meta, method="FE", data=dat.comp, digits=3)
</code>
<code output>
Fixed-Effects with Moderators Model (k = 2)

Test for Residual Heterogeneity:
QE(df = 0) = 0.000, p-val = 1.000

Test of Moderators (coefficient(s) 2):
QM(df = 1) = 1.946, p-val = 0.163

Model Results:

            estimate     se    zval   pval   ci.lb   ci.ub
intrcpt       -0.481  0.217  -2.218  0.027  -0.907  -0.056  *
metarandom    -0.490  0.351  -1.395  0.163  -1.178   0.198

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code>
While we find that studies using random assignment obtain larger (i.e., more negative) effects than studies not using random assignment ($b_1 = -0.490$, $SE = 0.351$), the difference between the two estimates is not significant ($z = -1.395$, $p = .163$).

The test of the difference between the two estimates is really just a Wald-type test, given by the equation $$z = \frac{\hat{\mu}_1 - \hat{\mu}_2}{\sqrt{SE[\hat{\mu}_1]^2 + SE[\hat{\mu}_2]^2}},$$ where $\hat{\mu}_1$ and $\hat{\mu}_2$ denote the two estimates and $SE[\hat{\mu}_1]$ and $SE[\hat{\mu}_2]$ the corresponding standard errors. We can compute this test statistic directly with:
<code rsplus>
with(dat.comp, round(c(zval = (estimate[1] - estimate[2]) / sqrt(stderror[1]^2 + stderror[2]^2)), 3))
</code>
<code output>
  zval
-1.395
</code>
This is the same value that we obtained above.
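The two-sided p-value and a 95% confidence interval for the difference can also be computed directly in base R from the same quantities (a quick sketch; the estimates and standard errors below are copied from the ''dat.comp'' output above):
<code rsplus>
# estimates and standard errors of the two models (from dat.comp above)
mu1 <- -0.9709645; se1 <- 0.2759557   # random assignment
mu2 <- -0.4812706; se2 <- 0.2169886   # other

diff    <- mu1 - mu2                  # difference between the two estimates
se.diff <- sqrt(se1^2 + se2^2)        # standard error of the difference
zval    <- diff / se.diff             # Wald-type test statistic
pval    <- 2 * pnorm(abs(zval), lower.tail=FALSE)    # two-sided p-value
ci      <- diff + c(-1, 1) * qnorm(0.975) * se.diff  # 95% CI for the difference

round(c(diff=diff, se=se.diff, zval=zval, pval=pval, ci.lb=ci[1], ci.ub=ci[2]), 3)
</code>
These values match the ''metarandom'' row of the fixed-effects model output shown earlier.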

==== Meta-Regression with All Studies ====

Now let's take a different approach and fit a meta-regression model with ''alloc'' as a moderator to the entire dataset:
<code rsplus>
rma(yi, vi, mods = ~ alloc, data=dat, digits=3)
</code>
<code output>
Mixed-Effects Model (k = 13; tau^2 estimator: REML)

tau^2 (estimated amount of residual heterogeneity):
tau (square root of estimated tau^2 value):
I^2 (residual heterogeneity / unaccounted variability):
H^2 (unaccounted variability / sampling variability):
R^2 (amount of heterogeneity accounted for):            0.00%

Test for Residual Heterogeneity:
QE(df = 11) = 138.511, p-val < .001

Test of Moderators (coefficient(s) 2):
QM(df = 1) = 1.833, p-val = 0.176

Model Results:

             estimate     se    zval   pval  ci.lb  ci.ub
intrcpt
allocrandom

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code>
The result is very similar to what we saw earlier: The coefficient for the ''alloc'' dummy variable is not significantly different from zero ($QM(df = 1) = 1.833$, $p = .176$), so again we find no significant difference between the two subsets of studies.

However, the results are not exactly identical. The reason for this is as follows: When we fit separate random-effects models in the two subsets, we are allowing the amount of heterogeneity within each set to be different (as shown earlier, the estimates were $\hat{\tau}^2 = 0.393$ and $\hat{\tau}^2 = 0.212$ for the studies using and not using random assignment, respectively). On the other hand, the mixed-effects meta-regression model fitted above includes a single variance component for the amount of residual heterogeneity, which implies that the amount of heterogeneity within each subset is assumed to be the same.

==== Meta-Regression with All Studies but Different Amounts of (Residual) Heterogeneity ====

Using the ''rma.mv()'' function, we can fit a meta-regression model with ''alloc'' as a moderator that allows the amount of residual heterogeneity to differ across the two subsets:
<code rsplus>
rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="DIAG", data=dat, digits=3)
</code>
<code output>
Multivariate Meta-Analysis Model (k = 13; method: REML)

Variance Components:

outer factor: trial (nlvls = 13)
inner factor: alloc (nlvls = 2)

        estim   sqrt  k.lvl  fixed   level
tau^2.1 0.212  0.460      6     no   other
tau^2.2 0.393  0.627      7     no  random

Test for Residual Heterogeneity:
QE(df = 11) = 138.511, p-val < .001

Test of Moderators (coefficient(s) 2):
QM(df = 1) = 1.946, p-val = 0.163

Model Results:

             estimate     se    zval   pval   ci.lb   ci.ub
intrcpt        -0.481  0.217  -2.218  0.027  -0.907  -0.056  *
allocrandom    -0.490  0.351  -1.395  0.163  -1.178   0.198

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code>
Note that the two estimates of $\tau^2$ are now identical to the ones we obtained earlier when fitting separate random-effects models in the two subsets. Also, the coefficient, standard error, test statistic, and p-value for the comparison of the two estimates are identical to the results of the Wald-type test conducted earlier.

A discussion/comparison of these various approaches can be found in the following article:

Rubio-Aparicio, M., López-López, J. A., Viechtbauer, W., Marín-Martínez, F., Botella, J., & Sánchez-Meca, J. (2020). Testing categorical moderators in mixed-effects meta-analysis in the presence of heteroscedasticity. The Journal of Experimental Education, 88(2), 288-310.

We can also do a likelihood ratio test (LRT) to examine whether there are significant differences in the $\tau^2$ values across the two subsets. This can be done by comparing the model above (which allows the two $\tau^2$ values to differ) with the reduced model that assumes a single $\tau^2$ value:

<code rsplus>
res1 <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="DIAG", data=dat)
res0 <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="ID", data=dat)
anova(res1, res0)
</code>
<code output>
        df     AIC     BIC    AICc   logLik    LRT   pval        QE
Full     4 29.2959 30.8875 35.9626 -10.6480               138.5111
Reduced  3
</code>

So, in this example, we would not reject the null hypothesis $H_0: \tau^2_1 = \tau^2_2$ ($p = .58$).
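As a small aside on how the fit statistics in this table are computed: the information criteria follow directly from the log-likelihoods, e.g., $AIC = -2 \log L + 2\,\mathrm{df}$, and the LRT statistic is twice the difference between the log-likelihoods of the full and reduced models, referred to a chi-square distribution with 1 degree of freedom (the one extra variance component in the full model). A quick base-R check using the full model's values from the output above:
<code rsplus>
# values for the full model, as reported by anova() above
ll.full <- -10.6480   # (restricted) log-likelihood
df.full <- 4          # number of model parameters

# AIC = -2*logLik + 2*df
aic.full <- -2 * ll.full + 2 * df.full
round(aic.full, 3)

# given the LRT statistic, the p-value is obtained from a chi-square
# distribution with 1 df, i.e., pchisq(lrt, df=1, lower.tail=FALSE)
</code>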
tips/comp_two_independent_estimates.txt · Last modified: 2024/03/28 09:01 by Wolfgang Viechtbauer