The metafor Package

A Meta-Analysis Package for R

tips:comp_two_independent_estimates
  
<code rsplus>
dat.comp <- data.frame(alloc    = c("random", "other"),
                       estimate = c(coef(res1), coef(res2)),
                       stderror = c(res1$se, res2$se),
                       tau2     = c(res1$tau2, res2$tau2))
dfround(dat.comp, 3)
</code>

<code output>
   alloc estimate stderror  tau2
1 random   -0.971    0.276 0.393
2  other   -0.481    0.217 0.212
</code>
  
  
<code rsplus>
rma(estimate, sei=stderror, mods = ~ alloc, method="FE", data=dat.comp, digits=3)
</code>
  
<code output>
Model Results:

             estimate     se    zval   pval   ci.lb   ci.ub
intrcpt        -0.481  0.217  -2.218  0.027  -0.907  -0.056
allocrandom    -0.490  0.351  -1.395  0.163  -1.178   0.198

---
</code>
  
While we find that studies using random assignment obtain on average larger (i.e., more negative) effects than studies not using random assignment ($b_1 = -0.490$, $SE = 0.351$), the difference between the two estimates is not statistically significant ($z = -1.395$, $p = .163$).
  
The test of the difference between the two estimates is really just a [[https://en.wikipedia.org/wiki/Wald_test|Wald-type test]], given by the equation $$z = \frac{\hat{\mu}_1 - \hat{\mu}_2}{\sqrt{SE[\hat{\mu}_1]^2 + SE[\hat{\mu}_2]^2}},$$ where $\hat{\mu}_1$ and $\hat{\mu}_2$ are the two estimates and $SE[\hat{\mu}_1]$ and $SE[\hat{\mu}_2]$ the corresponding standard errors. The test statistic can therefore also be computed with:
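
For example, a minimal sketch of this computation, using the estimates and standard errors stored in ''dat.comp'' above (the object names ''zval'' and ''pval'' are just illustrative):

<code rsplus>
# Wald-type test computed 'by hand' from the two estimates and their standard errors
zval <- with(dat.comp, (estimate[1] - estimate[2]) / sqrt(stderror[1]^2 + stderror[2]^2))
pval <- 2 * pnorm(abs(zval), lower.tail=FALSE)
round(c(zval = zval, pval = pval), 3)   # should match z = -1.395 and p = .163 from above
</code>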

The result is very similar to what we saw earlier: The coefficient corresponding to the ''alloc'' dummy is equal to $b_1 = -0.490$ ($SE = 0.362$) and not significant ($p = .176$).
  
However, the results are not exactly identical. The reason for this is as follows. When we fit separate random-effects models in the two subsets, we are allowing the amount of heterogeneity within each set to be different (as shown earlier, the estimates were $\hat{\tau}^2 = 0.393$ and $\hat{\tau}^2 = 0.212$ for studies using and not using random assignment, respectively). On the other hand, the mixed-effects meta-regression model fitted above has a single variance component for the amount of residual heterogeneity, which implies that the amount of heterogeneity //within each subset// is assumed to be the same ($\hat{\tau}^2 = 0.318$ in this example; note that this value falls somewhere between the two $\hat{\tau}^2$ values we obtained within the subsets).
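
The single $\hat{\tau}^2$ estimate mentioned above can be inspected by (re)fitting this meta-regression model. A minimal sketch, assuming ''dat'' contains the observed outcomes ''yi'', the sampling variances ''vi'', and the dichotomized ''alloc'' variable used above:

<code rsplus>
# mixed-effects meta-regression with a single variance component for the
# residual heterogeneity; the printed output includes the tau^2 estimate
# (approximately 0.318 in this example)
rma(yi, vi, mods = ~ alloc, data=dat, digits=3)
</code>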
  
==== Meta-Regression with All Studies but Different Amounts of (Residual) Heterogeneity ====
  
<code rsplus>
rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial,
       struct="DIAG", data=dat, digits=3)
</code>
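
If one wants to formally test whether allowing for different amounts of heterogeneity in the two subsets actually improves the fit, one possibility is a likelihood ratio test comparing this model against the corresponding model with a single variance component. A sketch of such a comparison (the object names ''res.same'' and ''res.diff'' are just illustrative):

<code rsplus>
# model assuming the same amount of heterogeneity in both subsets
res.same <- rma.mv(yi, vi, mods = ~ alloc, random = ~ 1 | trial, data=dat, digits=3)

# model allowing a different amount of heterogeneity in each subset
res.diff <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial,
                   struct="DIAG", data=dat, digits=3)

# likelihood ratio test of the two variance structures
anova(res.same, res.diff)
</code>

Since both models include the same moderator, the (REML-based) likelihood ratio test of the two variance structures is appropriate here.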
  