tips:comp_two_independent_estimates
<code rsplus>
dat.comp <- data.frame(meta     = c("random", "other"),
                       estimate = c(coef(res1), coef(res2)),
                       stderror = c(res1$se, res2$se),
                       tau2     = c(res1$tau2, res2$tau2))
dfround(dat.comp, 3)
</code>
<code output>
    meta estimate stderror  tau2
1 random   -0.971    0.286 0.393
2  other   -0.481    0.204 0.212
</code>
</code>
While we find that studies using random assignment obtain on average larger (i.e., more negative) effects than studies not using random assignment ($b_1 = -0.490$, $SE = 0.351$), the difference between the two estimates is not statistically significant ($z = -1.395$, $p = .163$).
The test of the difference between the two estimates is really just a Wald-type test: the difference between the two estimates is divided by the standard error of the difference, which, for two independent estimates, is equal to $\sqrt{SE_1^2 + SE_2^2}$.
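As a check, this test can be computed by hand; below, the rounded estimates and standard errors of the two models are plugged in, so the results match the ones above up to rounding:

<code rsplus>
# rounded estimates and standard errors of the two models
b  <- c(-0.971, -0.481)
se <- c( 0.286,  0.204)

# difference between the estimates divided by its standard error
z <- (b[1] - b[2]) / sqrt(se[1]^2 + se[2]^2)
p <- 2 * pnorm(abs(z), lower.tail=FALSE)
round(c(z = z, p = p), 3)
</code>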
The result is very similar to what we saw earlier: The coefficient corresponding to the dummy variable that distinguishes the two subsets reflects the difference between the two estimates, and its test is essentially the same Wald-type test as above.
However, the results are not exactly identical. The reason for this is as follows. When we fit separate random-effects models in the two subsets, we are allowing the amount of heterogeneity within each set to be different (as shown earlier, the estimates were $\hat{\tau}^2 = 0.393$ and $\hat{\tau}^2 = 0.212$ for studies using and not using random assignment, respectively). On the other hand, the mixed-effects meta-regression model fitted above has a single variance component for the amount of residual heterogeneity, which implies that the amount of heterogeneity within each subset is assumed to be the same.
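For reference, a mixed-effects meta-regression of this form can be sketched as follows (the moderator name ''group'' and data frame name ''dat'' are assumptions for illustration; the actual names in the example may differ):

<code rsplus>
# meta-regression with a dummy variable distinguishing the two subsets;
# note that a single tau^2 is estimated for the residual heterogeneity
res <- rma(yi, vi, mods = ~ group, data=dat)
res
</code>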
==== Meta-Regression with All Studies but Different Amounts of (Residual) Heterogeneity ====
Note that the two estimates of $\tau^2$ are now identical to the ones we obtained earlier from the separate random-effects models. Also, the coefficient, standard error, and test for the difference between the two estimates now match the results from the Wald-type test conducted earlier.
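A sketch of such a model using ''rma.mv()'' with ''struct="DIAG"'' (the variable names ''group'' and ''id'' are again assumptions for illustration):

<code rsplus>
# allow a separate tau^2 for each level of the grouping variable;
# 'group' distinguishes the two subsets, 'id' identifies the studies
res <- rma.mv(yi, vi, mods = ~ group,
              random = ~ group | id, struct="DIAG", data=dat)
res
</code>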
A discussion/comparison of these approaches can be found in the following papers:

Rubio-Aparicio, M., López-López, J. A., Viechtbauer, W., Marín-Martínez, F., Botella, J., & Sánchez-Meca, J. (2020). Testing categorical moderators in mixed-effects meta-analysis in the presence of heteroscedasticity. The Journal of Experimental Education, 88(2), 288–310.

Rubio-Aparicio, M., Sánchez-Meca, J., López-López, J. A., Botella, J., & Marín-Martínez, F. (2017). Analysis of categorical moderators in mixed-effects meta-analysis: Consequences of using pooled versus separate estimates of the residual between-studies variances. British Journal of Mathematical and Statistical Psychology, 70(3), 439–456.
- | We can also do a likelihood ratio test (LRT) to examine whether there are significant differences in the $\tau^2$ values across subsets. This can be done with: | + | We can also conduct |
<code rsplus> | <code rsplus> | ||
Line 188: | Line 192: | ||
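A sketch of such a comparison (assuming ''res'' is the model allowing separate $\tau^2$ values; the variable names ''group'' and ''id'' are assumptions for illustration):

<code rsplus>
# reduced model: a single tau^2 for the residual heterogeneity in both subsets
res0 <- rma.mv(yi, vi, mods = ~ group, random = ~ 1 | id, data=dat)

# likelihood ratio test of H0: tau^2_1 = tau^2_2
anova(res, res0)
</code>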
So in this example, we would not reject the null hypothesis $H_0: \tau^2_1 = \tau^2_2$ ($p = .58$).
==== Other Types of Models ====

The issue discussed above also arises for other types of models (e.g., multilevel meta-analytic models). When fitting a particular model separately within several subgroups, the variance components of the model are automatically allowed to differ across the subgroups. On the other hand, when fitting the same type of model to all studies combined (but including a moderator to allow the mean effect size to differ across subgroups), the variance components are assumed to be the same within each subgroup (unless one takes extra steps, as illustrated above, to allow them to differ across subgroups). Consequently, the results obtained with the two approaches may again differ.
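For example, for a multilevel model with effect sizes nested within studies, the two approaches can be sketched as follows (the variable names ''group'', ''study'', and ''esid'' are assumptions for illustration):

<code rsplus>
# fitting the model separately within each subgroup automatically
# allows the variance components to differ across subgroups
res1 <- rma.mv(yi, vi, random = ~ 1 | study/esid, data=dat, subset=group=="random")
res2 <- rma.mv(yi, vi, random = ~ 1 | study/esid, data=dat, subset=group=="other")

# fitting the model to all studies with a moderator assumes that the
# variance components are identical within the subgroups
res <- rma.mv(yi, vi, mods = ~ group, random = ~ 1 | study/esid, data=dat)
</code>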
tips/comp_two_independent_estimates.txt · Last modified: 2024/06/18 19:28 by Wolfgang Viechtbauer