The metafor Package

A Meta-Analysis Package for R

====== Model Selection with glmulti and MuMIn ======

Revision of 2022/08/09 05:28 by Wolfgang Viechtbauer
==== Data Preparation ====
  
For the example, I will use data from the meta-analysis by Bangert-Drowns et al. (2004) on the effectiveness of school-based writing-to-learn interventions on academic achievement (''help(dat.bangertdrowns2004)'' provides a bit more background on this dataset). The data can be loaded with:
<code rsplus>
library(metafor)
dat <- dat.bangertdrowns2004
</code>

The dataset includes the following moderator variables:
  
  * length: treatment length (in weeks)
  * wic: writing tasks were completed in class (0 = no; 1 = yes)
  * feedback: feedback on writing was provided (0 = no; 1 = yes)
  * info: writing contained informational components (0 = no; 1 = yes)
  * pers: writing contained personal components (0 = no; 1 = yes)
  * imag: writing contained imaginative components (0 = no; 1 = yes)
  * meta: prompts for metacognitive reflection were given (0 = no; 1 = yes)
  
More details about the meaning of these variables can be found in Bangert-Drowns et al. (2004). For the purposes of this illustration, it is sufficient to understand that we have 7 variables that are potential (and a priori plausible) predictors of the size of the treatment effect. Since we will fit various models to these data (containing all possible subsets of these 7 variables) and need to keep the data included in the models identical across models, we will remove rows where at least one of these 7 moderator variables is missing. We can do this with:
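A sketch of this subsetting step, using the moderator names from the list above (the exact code on the original page may differ slightly):

<code rsplus>
# keep only rows where none of the 7 moderator variables is missing
mods <- c("length", "wic", "feedback", "info", "pers", "imag", "meta")
dat  <- dat[!apply(dat[, mods], 1, anyNA), ]
</code>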
==== Model Selection ====
  
We will now examine the fit and plausibility of various models, focusing on models that contain none, one, and up to seven (i.e., all) of these moderator variables. For this, we install and load the glmulti package and define a function that (a) takes a model formula and dataset as input and (b) then fits a mixed-effects meta-regression model to the given data using maximum likelihood estimation:
<code rsplus>
install.packages("glmulti")
library(glmulti)
</code>
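The fitting function described above might look as follows (a sketch; the function name and the exact glmulti call are illustrative assumptions, not taken verbatim from this page):

<code rsplus>
# fit a mixed-effects meta-regression model via ML for a given formula/dataset;
# glmulti will call this function for every candidate model
rma.glmulti <- function(formula, data, ...)
   rma(formula, vi, data=data, method="ML", ...)

# fit all possible main-effects subsets of the 7 moderators
# (level=1 means no interactions, so 2^7 = 128 candidate models)
res <- glmulti(yi ~ length + wic + feedback + info + pers + imag + meta,
               data=dat, level=1, fitfunction=rma.glmulti, crit="aicc")
</code>

Note that ''method="ML"'' (rather than the default REML) is needed so that models with different fixed effects can be compared via information criteria.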
  
We see that the "best" model is the one that only includes ''imag'' as a moderator. The second best includes ''imag'' and ''meta''. And so on. The values under ''weights'' are the model weights (also called "Akaike weights"). From an information-theoretic perspective, the Akaike weight for a particular model can be regarded as the probability that the model is the best model (in a Kullback-Leibler sense of minimizing the loss of information when approximating full reality or the real data generating mechanism by a fitted model) out of all of the models considered/fitted. So, while the "best" model has the highest weight/probability, its weight in this example is not substantially larger than that of the second model (and also the third, fourth, and so on). So, we shouldn't be all too certain here that the top model is really //the// best model in the set. Several models are almost equally plausible (in other examples, one or two models may carry most of the weight, but not here).
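The Akaike weights themselves are straightforward to compute from the information criterion values: each model's weight is proportional to exp(-Δ/2), where Δ is the difference between that model's criterion value and the smallest one in the set. A quick sketch with made-up AICc values (not the values from this analysis):

<code rsplus>
aicc  <- c(24.1, 24.9, 25.6, 27.3)           # illustrative AICc values
delta <- aicc - min(aicc)                    # differences from the best model
w     <- exp(-delta/2) / sum(exp(-delta/2))  # Akaike weights (sum to 1)
round(w, 3)
</code>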
  
So, we could now examine the "best" model with:
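glmulti stores the fitted model objects in its result, sorted by the information criterion, so (assuming the glmulti output was saved as ''res'' as above) the top-ranked model can be inspected with something like:

<code rsplus>
# the first element is the model with the lowest AICc
summary(res@objects[[1]])
</code>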
tips/model_selection_with_glmulti_and_mumin.txt · Last modified: 2022/10/13 06:07 by Wolfgang Viechtbauer