The metafor Package

A Meta-Analysis Package for R

In essence, the commonly used meta-analytic models are just special cases of the linear (mixed-effects) model, with the only peculiar aspect that the variances of the error terms (i.e., the sampling variances) are known. Not surprisingly, the question therefore comes up occasionally why one cannot use the ''lm()'', ''lme()'', and ''lmer()'' functions for conducting meta-analyses. After all, these functions can be used to fit linear (mixed-effects) models, and they allow the user to specify the sampling variances via the ''weights'' argument. So it seems that one should also be able to fit meta-analytic models with these functions. Below, I describe and illustrate how the models fitted via the ''lm()'', ''lme()'', and ''lmer()'' functions differ from the models fitted by the ''rma()'' function and why those functions are therefore not suitable for fitting meta-analytic models.
  
==== Equal-Effects Model ====
  
Let's start with an equal-effects model. As an example, consider the data from the meta-analysis by Molloy et al. (2014) on the relationship between conscientiousness and medication adherence. For each study, we can compute the r-to-z transformed correlation coefficient and corresponding sampling variance with:
<code rsplus>
library(metafor)
dat <- escalc(measure="ZCOR", ri=ri, ni=ni, data=dat.molloy2014)
</code>
The ''yi'' values are the r-to-z transformed correlations and the ''vi'' values are the corresponding sampling variances.
  
We can now fit an equal-effects model to these data with:
<code rsplus>
res.ee <- rma(yi, vi, data=dat, method="EE")
res.ee
</code>
<code output>
Equal-Effects Model (k = 16)
  
I^2 (total heterogeneity / total variability):   60.69%
H^2 (total variability / sampling variability):  2.54

Test for Heterogeneity:
Q(df = 15) = 38.1595, p-val = 0.0009

Model Results:

estimate      se    zval    pval   ci.lb   ci.ub
  0.1252  0.0170  7.3642  <.0001  0.0919  0.1585  ***

---
Signif. codes: '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
</code>
  
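The notes below compare these results against a weighted least-squares fit with ''lm()''. That fit is not shown in this excerpt, but a minimal sketch (using the ''res.lm'' name that later code refers to, and the ''dat'' object from above) is:
<code rsplus>
## intercept-only regression, weighting each estimate by its inverse
## sampling variance; note that lm() only uses the weights up to
## proportionality, which is exactly the issue discussed below
res.lm <- lm(yi ~ 1, weights=1/vi, data=dat)
summary(res.lm)
</code>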
Two things are of note here:
  - The estimated intercept (''0.12518'') is exactly the same as the model coefficient obtained earlier with the ''rma()'' function (the value reported by the ''rma()'' function is rounded to 4 decimals, but that can be changed with ''print(res.ee, digits=5)'', yielding the same value).
  - The standard error (of the estimated intercept) is different (and hence, the test statistic and p-value also differ).
  
This can be demonstrated by extracting the estimated error variance from the ''lm'' object, multiplying the sampling variances by that value, and re-fitting the model with the ''rma()'' function:
<code rsplus>
rma(yi, vi*summary(res.lm)$sigma^2, data=dat, method="EE")
</code>
<code output>
Equal-Effects Model (k = 16)
  
I^2 (total heterogeneity / total variability):   0.00%
H^2 (total variability / sampling variability):  1.00

Test for Heterogeneity:
Q(df = 15) = 15.0000, p-val = 0.4514

Model Results:

estimate      se    zval    pval   ci.lb   ci.ub
  0.1252  0.0271  4.6171  <.0001  0.0720  0.1783  ***

---
Signif. codes: '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
</code>
Now these results are exactly the same as those obtained by the ''lm()'' function. Note that multiplying the sampling variances by some constant value does not affect the model estimate, but only the corresponding standard error.
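As a quick side check (not part of the original example), this invariance can be verified directly by rescaling all of the sampling variances by an arbitrary constant:
<code rsplus>
## scaling all sampling variances by 10 leaves the estimate unchanged,
## but inflates the standard error by sqrt(10)
res1 <- rma(yi, vi,    data=dat, method="EE")
res2 <- rma(yi, vi*10, data=dat, method="EE")
coef(res1) - coef(res2)  # zero: identical estimates
res2$se / res1$se        # sqrt(10): SE scales with the constant
</code>
This happens because the inverse-variance weighted mean only depends on the relative sizes of the weights, while its variance depends on their absolute sizes.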
Therefore, if we want to obtain the same standard error from ''lm()'' as from ''rma()'', we must adjust the standard error by $\hat{\sigma}_e$ (i.e., the square root of the estimated error variance, or what is called the "residual standard error" in the output of the ''lm()'' function):
<code rsplus>
coef(summary(res.ee))
</code>
<code rsplus>
coef(summary(res.lm))[1,2] / sigma(res.lm)
</code>
To illustrate this, we can again factor that constant (the error variance estimated by the ''lme()'' fit) into the sampling variances and refit the model with ''rma()'':
<code rsplus>
rma(yi, vi*sigma(res.lme)^2, data=dat)
</code>
tips/rma_vs_lm_lme_lmer.txt · Last modified: 2023/11/14 08:00 by Wolfgang Viechtbauer