The metafor Package

A Meta-Analysis Package for R

 The ''btt'' argument stands for "betas to test" and is used to specify which coefficients we want to include in the test.
  
A different way of conducting the same test is to use the ''X'' argument, which allows us to specify one or more vectors of numbers that are multiplied with the model coefficients. In particular, we can use:
<code rsplus>
anova(res, X=rbind(c(0,1,0),c(0,0,1)))
</code>
to test the two hypotheses
&\beta_2 = \mu_s - \mu_a.
\end{align}
But what about the contrast between systematic and random allocation? It turns out that we can obtain this from the model as the difference between the $\beta_1$ and $\beta_2$ coefficients. In particular, if we subtract $\beta_1$ from $\beta_2$, then
$$
\beta_2 - \beta_1 = (\mu_s - \mu_a) - (\mu_r - \mu_a) = \mu_s - \mu_r
$$
so this contrast reflects how different systematic allocation is compared to random allocation. Using the ''anova()'' function, we can obtain this contrast with
<code rsplus>
anova(res, X=c(0,-1,1))
</code>
<code output>
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code>
(I shortened the names of the coefficients in the output above to make the table under ''Model Results'' more readable.) Now the intercept reflects the estimated (average) log risk ratio for random allocation, while the coefficients for ''alternate'' and ''systematic'' are again contrasts of these two levels compared to random allocation. Note how the coefficient for systematic allocation is the same as we obtained earlier using ''anova(res, X=c(0,-1,1))''. Moreover, as we can see in the output, the results for the omnibus test of these two coefficients are identical to what we obtained earlier.
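The releveling that makes ''random'' the reference level is plain base R; here is a minimal sketch with a toy factor (made-up values, not the actual trial data) showing how ''relevel()'' changes which dummy variables ''model.matrix()'' creates, and hence what the intercept means:
<code rsplus>
## toy factor with the same three levels as the allocation moderator
alloc <- factor(c("random", "alternate", "systematic", "random"))

levels(alloc)                     # alphabetical: "alternate" "random" "systematic"
model.matrix(~ alloc)             # reference level = alternate

alloc2 <- relevel(alloc, ref="random")
model.matrix(~ alloc2)            # reference level = random
</code>
The same idea should also work directly inside the ''mods'' formula (i.e., ''mods = ~ relevel(factor(alloc), ref="random")'').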
  
==== Model Without Intercept ====
res
</code>
Alternatively, one could use ''mods = ~ 0 + factor(alloc)''. In either case, the output is then:
<code output>
Mixed-Effects Model (k = 13; tau^2 estimator: REML)
Again, we could use the ''anova()'' function to carry out the same test explicitly with:
<code rsplus>
anova(res, X=rbind(c(1,0,0),c(0,1,0),c(0,0,1)))
</code>
<code output>
</code>
  
It is important to realize that this does not test whether there are differences between the different forms of allocation (this is what we tested earlier in the model that included the intercept term). However, we can again use contrasts of the model coefficients to test differences between the levels. Let's test all pairwise differences (i.e., between random and alternating allocation, between systematic and alternating allocation, and between systematic and random allocation):
<code rsplus>
anova(res, X=rbind(c(-1,1,0),c(-1,0,1),c(0,-1,1)))
</code>
<code output>
1:     -factor(alloc)alternate + factor(alloc)random = 0
2: -factor(alloc)alternate + factor(alloc)systematic = 0
3:    -factor(alloc)random + factor(alloc)systematic = 0

Results:

1:  -0.4478 0.5158 -0.8682 0.3853
2:   0.0890 0.5600  0.1590 0.8737
3:   0.5369 0.4364  1.2303 0.2186
</code>
These are now the exact same results we obtained earlier for the model that included the intercept term.
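Numerically, each row of the matrix passed to ''X'' is simply taken as an inner product with the coefficient vector. A base-R sketch with made-up coefficients (hypothetical values, not the actual estimates) makes the arithmetic explicit:
<code rsplus>
b <- c(-0.5, -0.9, -0.4)    # hypothetical mu_alternate, mu_random, mu_systematic
X <- rbind(c(-1, 1, 0),     # mu_random     - mu_alternate
           c(-1, 0, 1),     # mu_systematic - mu_alternate
           c( 0,-1, 1))     # mu_systematic - mu_random
drop(X %*% b)               # -0.4  0.1  0.5
</code>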
  
Note that the output does not contain an omnibus test for the three contrasts because the matrix with the contrast coefficients (''X'' above) is not of full rank (i.e., one of the three contrasts is redundant). If we only include two of the three contrasts (again, it does not matter which two), then we also get the omnibus test (rest of the output omitted):
<code rsplus>
anova(res, X=rbind(c(-1,1,0),c(-1,0,1)))
</code>
<code output>
Omnibus Test of Hypotheses:
QM(df = 2) = 1.7675, p-val = 0.4132
</code>
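That the full set of three contrast rows is rank deficient is easy to verify in base R (the third row is just the difference of the first two):
<code rsplus>
X3 <- rbind(c(-1, 1, 0), c(-1, 0, 1), c(0, -1, 1))
qr(X3)$rank         # 2, not 3: row 3 equals row 2 minus row 1
qr(X3[1:2,])$rank   # 2: full row rank, so the omnibus test is defined
</code>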
  
==== Parameterization ====
  
What the example above shows is that, whether we remove the intercept or not, we are essentially fitting the same model, just under a different [[wp>Parametrization_(geometry)|parameterization]]. The Wikipedia page linked here discusses parameterization in the context of geometry, but this is directly relevant, since the process of fitting (meta-)regression models can also be conceptualized geometrically (in terms of projections and vector spaces). An excellent (but quite technical) reference for this perspective (on regression models in general) is Christensen (2011).
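The "same model, different parameterization" point can be illustrated with the two design matrices themselves; a base-R sketch with a toy factor (not the actual data): both matrices have rank 3, and stacked side by side they still have rank 3, so they span the same column space and therefore produce identical fitted values.
<code rsplus>
x  <- factor(c("alternate", "random", "systematic", "random"))
Z1 <- model.matrix(~ x)       # intercept + two dummy variables
Z2 <- model.matrix(~ 0 + x)   # three group indicators, no intercept
qr(Z1)$rank                   # 3
qr(Z2)$rank                   # 3
qr(cbind(Z1, Z2))$rank        # still 3: same column space
</code>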
  
==== Models with Continuous Moderators ====
tips/models_with_or_without_intercept · Last modified: 2022/08/03 11:34 by Wolfgang Viechtbauer