The metafor Package

A Meta-Analysis Package for R

tips:comp_two_independent_estimates

While we find that studies using random assignment obtain on average larger (i.e., more negative) effects than studies not using random assignment ($b_1 = -0.490$, $SE = 0.351$), the difference between the two estimates is not significant ($z = -1.395$, $p = .163$).
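
The comparison above is based on fitting a random-effects model separately within each subset of studies and then collecting the two estimates and their standard errors in a data frame (called ''dat.comp'' below). A minimal sketch of how this could be set up, assuming log risk ratios computed from the BCG vaccine data (''dat.bcg'') and hypothetical object names ''res.r'' and ''res.o'' for the two subset models:
<code rsplus>
library(metafor)

# compute log risk ratios and corresponding sampling variances (assumed dataset: dat.bcg)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# fit a random-effects model within each subset (hypothetical object names)
res.r <- rma(yi, vi, data=dat, subset=(alloc=="random"))
res.o <- rma(yi, vi, data=dat, subset=(alloc!="random"))

# collect the two estimates and their standard errors in a data frame
dat.comp <- data.frame(estimate = c(coef(res.r), coef(res.o)),
                       stderror = c(res.r$se, res.o$se))
dat.comp
</code>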
  
The test of the difference between the two estimates is really just a [[https://en.wikipedia.org/wiki/Wald_test|Wald-type test]], given by the equation $$z = \frac{\hat{\mu}_1 - \hat{\mu}_2}{\sqrt{SE[\hat{\mu}_1]^2 + SE[\hat{\mu}_2]^2}},$$ where $\hat{\mu}_1$ and $\hat{\mu}_2$ are the two estimates and $SE[\hat{\mu}_1]$ and $SE[\hat{\mu}_2]$ the corresponding standard errors. The test statistic can therefore also be computed with:
<code rsplus>
with(dat.comp, round(c(zval = (estimate[1] - estimate[2])/sqrt(stderror[1]^2 + stderror[2]^2)), 3))
</code>
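
The corresponding two-sided p-value can be obtained by comparing this statistic against a standard normal distribution. A minimal sketch:
<code rsplus>
# two-sided p-value for the Wald-type test based on a standard normal distribution
with(dat.comp, round(c(pval = 2*pnorm(abs(estimate[1] - estimate[2]) /
                                      sqrt(stderror[1]^2 + stderror[2]^2), lower.tail=FALSE)), 3))
</code>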
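
An alternative way to carry out the comparison is to fit a mixed-effects meta-regression model with a dummy variable that distinguishes the two sets of studies. A minimal sketch of such a model, assuming the ''dat'' object from the first sketch above and a (hypothetical) recoding of ''alloc'' into a two-level factor:
<code rsplus>
# hypothetical recoding: "random" versus all other forms of treatment allocation
dat$alloc <- ifelse(dat$alloc == "random", "random", "other")

# mixed-effects meta-regression model with the allocation dummy as moderator
res <- rma(yi, vi, mods = ~ alloc, data=dat)
res
</code>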
The result is very similar to what we saw earlier: The coefficient corresponding to the ''alloc'' dummy is equal to $b_1 = -0.490$ ($SE = 0.362$) and not significant ($p = .176$).
  
However, the results are not exactly identical. The reason for this is as follows. When we fit separate random-effects models in the two subsets, we are allowing the amount of heterogeneity within each set to be different (as shown earlier, the estimates were $\hat{\tau}^2 = 0.393$ and $\hat{\tau}^2 = 0.212$ for studies using and not using random assignment, respectively). On the other hand, the mixed-effects meta-regression model fitted above has a single variance component for the amount of residual heterogeneity, which implies that the amount of heterogeneity //within each subset// is assumed to be the same ($\hat{\tau}^2 = 0.318$ in this example).
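
One way to allow the amount of residual heterogeneity to differ between the two subsets within a single model is to use the ''rma.mv()'' function with a random effect that has a separate variance component for each level of the allocation factor (an inner | outer term with ''struct="DIAG"''). A minimal sketch, again assuming the ''dat'' object (with the recoded two-level ''alloc'' factor) from the sketches above:
<code rsplus>
# meta-regression model with a separate amount of residual heterogeneity in each subset
res.sub <- rma.mv(yi, vi, mods = ~ alloc, random = ~ alloc | trial, struct="DIAG", data=dat)
res.sub
</code>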