

Morris (2008) discusses various ways of computing a (standardized) effect size measure for pretest-posttest control group designs, where the characteristic, response, or dependent variable assessed in the individual studies is a quantitative variable.

As described by Becker (1988), we can compute the standardized mean change (with raw score standardization) for a treatment and control group with $$g_T = c(n_T-1) \frac{\bar{x}_{post,T} - \bar{x}_{pre,T}}{SD_{pre,T}}$$ and $$g_C = c(n_C-1) \frac{\bar{x}_{post,C} - \bar{x}_{pre,C}}{SD_{pre,C}},$$ where $\bar{x}_{pre,T}$ and $\bar{x}_{post,T}$ are the treatment group pretest and posttest means, $SD_{pre,T}$ is the standard deviation of the pretest scores, $c(m) = \sqrt{2/m} \Gamma[m/2] / \Gamma[(m-1)/2]$ is a bias-correction factor^{1)}, $n_T$ is the size of the treatment group, and $\bar{x}_{pre,C}$, $\bar{x}_{post,C}$, $SD_{pre,C}$, and $n_C$ are the analogous values for the control group. Then the difference in the two standardized mean change values, namely $$g = g_T - g_C$$ indicates how much larger (or smaller) the change in the treatment group was (in standard deviation units) when compared to the change in the control group. Values of $g$ computed for a number of studies could then be meta-analyzed with standard methods.^{2)}
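To make these computations concrete, here is a small sketch in R. The summary statistics below are made up purely for illustration (they are not taken from any study); the `cmi()` function implements the bias-correction factor $c(m)$ defined above:

```r
# bias-correction factor c(m) as defined above
cmi <- function(m) sqrt(2/m) * gamma(m/2) / gamma((m-1)/2)

# made-up pretest/posttest means, pretest SDs, and group sizes (illustrative only)
nT <- 25
gT <- cmi(nT-1) * (12.4 - 10.0) / 3.0   # standardized mean change, treatment group
nC <- 25
gC <- cmi(nC-1) * (10.5 - 10.1) / 3.2   # standardized mean change, control group
g  <- gT - gC                           # difference in the standardized mean changes
round(c(gT = gT, gC = gC, g = g), 3)
```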

Morris (2008) uses five studies from a meta-analysis on training effectiveness by Carlson and Schmidt (1999) to illustrate these computations. We can create the same dataset with:

datT <- data.frame(m_pre   = c(30.6, 23.5, 0.5, 53.4, 35.6),
                   m_post  = c(38.5, 26.8, 0.7, 75.9, 36.0),
                   sd_pre  = c(15.0, 3.1, 0.1, 14.5, 4.7),
                   sd_post = c(11.6, 4.1, 0.1, 4.4, 4.6),
                   ni      = c(20, 50, 9, 10, 14),
                   ri      = c(0.47, 0.64, 0.77, 0.89, 0.44))

and

datC <- data.frame(m_pre   = c(23.1, 24.9, 0.6, 55.7, 34.8),
                   m_post  = c(19.7, 25.3, 0.6, 60.7, 33.4),
                   sd_pre  = c(13.8, 4.1, 0.2, 17.3, 3.1),
                   sd_post = c(14.8, 3.3, 0.2, 17.9, 6.9),
                   ni      = c(20, 42, 9, 11, 14),
                   ri      = c(0.47, 0.64, 0.77, 0.89, 0.44))

The contents of `datT` and `datC` are then:

  m_pre m_post sd_pre sd_post ni   ri
1  30.6   38.5   15.0    11.6 20 0.47
2  23.5   26.8    3.1     4.1 50 0.64
3   0.5    0.7    0.1     0.1  9 0.77
4  53.4   75.9   14.5     4.4 10 0.89
5  35.6   36.0    4.7     4.6 14 0.44

and

  m_pre m_post sd_pre sd_post ni   ri
1  23.1   19.7   13.8    14.8 20 0.47
2  24.9   25.3    4.1     3.3 42 0.64
3   0.6    0.6    0.2     0.2  9 0.77
4  55.7   60.7   17.3    17.9 11 0.89
5  34.8   33.4    3.1     6.9 14 0.44

After loading the metafor package with `library(metafor)`, the standardized mean change within each group can be computed with:

datT <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre, ni=ni, ri=ri, data=datT)
datC <- escalc(measure="SMCR", m1i=m_post, m2i=m_pre, sd1i=sd_pre, ni=ni, ri=ri, data=datC)

Now the contents of `datT` and `datC` are:

  m_pre m_post sd_pre sd_post ni   ri     yi     vi
1  30.6   38.5   15.0    11.6 20 0.47 0.5056 0.0594
2  23.5   26.8    3.1     4.1 50 0.64 1.0481 0.0254
3   0.5    0.7    0.1     0.1  9 0.77 1.8054 0.2322
4  53.4   75.9   14.5     4.4 10 0.89 1.4181 0.1225
5  35.6   36.0    4.7     4.6 14 0.44 0.0801 0.0802

and

  m_pre m_post sd_pre sd_post ni   ri      yi     vi
1  23.1   19.7   13.8    14.8 20 0.47 -0.2365 0.0544
2  24.9   25.3    4.1     3.3 42 0.64  0.0958 0.0173
3   0.6    0.6    0.2     0.2  9 0.77  0.0000 0.0511
4  55.7   60.7   17.3    17.9 11 0.89  0.2667 0.0232
5  34.8   33.4    3.1     6.9 14 0.44 -0.4250 0.0864

The standardized mean change values are given in the `yi` columns. Note that internally, the `escalc()` function computes `m1i-m2i`, so the argument `m1i` should be set equal to the posttest means and `m2i` to the pretest means if one wants to compute the standardized mean change in the way described above. The sampling variances (the values in the `vi` columns) are computed based on equation 13 in Becker (1988).
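These values can be verified by hand. For study 1 of the treatment group, the sampling variance formula $2(1-r)/n + g^2/(2n)$ reproduces the `vi` value shown above:

```r
# by-hand check for study 1 of the treatment group (values from datT above)
cmi <- function(m) sqrt(2/m) * gamma(m/2) / gamma((m-1)/2)  # bias-correction factor
gT1 <- cmi(20 - 1) * (38.5 - 30.6) / 15.0      # standardized mean change
vT1 <- 2 * (1 - 0.47) / 20 + gT1^2 / (2 * 20)  # sampling variance
round(c(yi = gT1, vi = vT1), 4)
```

which matches the first row of `datT` above (yi = 0.5056, vi = 0.0594).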

We can now compute the difference between the two standardized mean change values for each study. In addition, since the treatment and control groups are independent, the corresponding sampling variances can be computed by adding up the sampling variances of the two groups:

dat <- data.frame(yi = datT$yi - datC$yi, vi = datT$vi + datC$vi)
round(dat, 2)

    yi   vi
1 0.74 0.11
2 0.95 0.04
3 1.81 0.28
4 1.15 0.15
5 0.51 0.17

The `yi` values above are exactly the same as the values given in Table 5 (under the $d_{ppc1}$ column) by Morris (2008).

Equation 16 in Morris (2008) is the exact sampling variance of $g$. To actually compute the sampling variance in practice, the unknown parameters in this equation must be replaced with their sample counterparts. As noted earlier, the `escalc()` function uses a slightly different method to estimate the sampling variance (based on equation 13 in Becker, 1988). Hence, the values above and the ones given in Table 5 (column $\hat{\sigma}^2(d_{ppc1})$ in Morris, 2008) differ slightly.

There are in fact dozens of ways in which the sampling variance of the standardized mean change can be estimated (see Viechtbauer, 2007, Tables 2 and 3, and even that is not an exhaustive list). Hence, there are also dozens of ways of estimating the sampling variance of $g$ above. The differences should only be relevant in small samples.

For the actual meta-analysis, we simply pass the `yi` and `vi` values to the `rma()` function. For example, an equal-effects model can be fitted with:

rma(yi, vi, data=dat, method="EE", digits=2)

Equal-Effects Model (k = 5)

I^2 (total heterogeneity / total variability):   9.69%
H^2 (total variability / sampling variability):  1.11

Test for Heterogeneity:
Q(df = 4) = 4.43, p-val = 0.35

Model Results:

estimate    se  zval  pval  ci.lb  ci.ub
    0.95  0.14  6.62  <.01   0.67   1.23  ***

---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Note that these results are slightly different from the ones in Table 5 due to the different ways of estimating the sampling variances.
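As a quick sanity check, the equal-effects estimate is simply the inverse-variance weighted mean of the `yi` values. Using the (rounded) values from above:

```r
# equal-effects estimate as an inverse-variance weighted mean
yi <- c(0.74, 0.95, 1.81, 1.15, 0.51)  # rounded effect sizes from above
vi <- c(0.11, 0.04, 0.28, 0.15, 0.17)  # rounded sampling variances from above
wi  <- 1 / vi                  # inverse-variance weights
est <- sum(wi * yi) / sum(wi)  # weighted average
se  <- sqrt(1 / sum(wi))       # standard error of the weighted average
round(c(estimate = est, se = se), 2)
```

which reproduces the estimate (0.95) and standard error (0.14) shown in the output above.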

In his article, Morris (2008) discusses two other ways of computing an effect size measure for pretest-posttest control group designs. The second approach, which pools the two pretest SDs, can actually be more efficient under certain conditions. However, that approach assumes that the true pretest SDs are equal in the two groups, which may not be the case. The approach given above does not make that assumption and is therefore more broadly applicable (but may be slightly less efficient).

If you really want to use the approach with pooled pretest SDs, then this can be done as follows:

sd_pool <- sqrt((with(datT, (ni-1)*sd_pre^2) + with(datC, (ni-1)*sd_pre^2)) / (datT$ni + datC$ni - 2))
dat <- data.frame(yi = metafor:::.cmicalc(datT$ni + datC$ni - 2) *
                       (with(datT, m_post - m_pre) - with(datC, m_post - m_pre)) / sd_pool)
dat$vi <- 2*(1-datT$ri) * (1/datT$ni + 1/datC$ni) + dat$yi^2 / (2*(datT$ni + datC$ni))
round(dat, 2)

    yi   vi
1 0.77 0.11
2 0.80 0.04
3 1.20 0.14
4 1.05 0.07
5 0.44 0.16

The `yi` values above are exactly the same as the values given in Table 5 (under the $d_{ppc2}$ column) by Morris (2008). Note that the equation used for computing the sampling variances above is slightly different from the one used in the paper, so the `vi` values above and the ones given in Table 5 (column $\hat{\sigma}^2(d_{ppc2})$ in Morris, 2008) differ slightly.
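Again, the first value can be verified by hand, using the study 1 summary statistics and the same equations as in the code above (with the bias-correction factor written out via the gamma function instead of the internal `metafor:::.cmicalc()` helper):

```r
# by-hand check of the pooled-SD effect size for study 1
cmi <- function(m) sqrt(2/m) * gamma(m/2) / gamma((m-1)/2)  # bias-correction factor
nT <- 20; nC <- 20; r <- 0.47                               # study 1 sample sizes and correlation
sd_pool <- sqrt(((nT-1)*15.0^2 + (nC-1)*13.8^2) / (nT + nC - 2))  # pooled pretest SD
yi1 <- cmi(nT + nC - 2) * ((38.5 - 30.6) - (19.7 - 23.1)) / sd_pool
vi1 <- 2*(1-r) * (1/nT + 1/nC) + yi1^2 / (2*(nT + nC))
round(c(yi = yi1, vi = vi1), 2)
```

which matches the first row above (yi = 0.77, vi = 0.11).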

The example above assumes that the pretest-posttest correlations (the values given under the `ri` column) are the same for the control and treatment groups. Ideally, those values should be coded separately for the two groups.

In practice, one is likely to encounter difficulties in actually obtaining those correlations from the information reported in the articles. In that case, one can substitute approximate values (e.g., based on known properties of the dependent variable being measured) and conduct a sensitivity analysis to ensure that the conclusions from the meta-analysis are unchanged when those correlations are varied.
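Such a sensitivity analysis can be sketched as follows. Assuming (for illustration) a single common correlation $r$ for all studies and both groups, and taking the group-level sampling variance to be $2(1-r)/n + g^2/(2n)$ (which reproduces the `vi` values shown earlier), we can recompute the sampling variances for several plausible values of $r$ and refit the equal-effects model by hand:

```r
# group-level standardized mean changes and sample sizes from the tables above
gT <- c(0.5056, 1.0481, 1.8054, 1.4181, 0.0801)
gC <- c(-0.2365, 0.0958, 0.0000, 0.2667, -0.4250)
nT <- c(20, 50, 9, 10, 14)
nC <- c(20, 42, 9, 11, 14)
yi <- gT - gC

# recompute the sampling variances under several assumed correlations and refit
ests <- sapply(c(0.3, 0.5, 0.7), function(r) {
   vi <- (2*(1-r)/nT + gT^2/(2*nT)) + (2*(1-r)/nC + gC^2/(2*nC))
   wi <- 1/vi
   sum(wi*yi)/sum(wi)  # equal-effects (inverse-variance weighted) estimate
})
round(ests, 2)
```

If the pooled estimate barely moves across plausible values of $r$, the conclusions are robust to the assumed correlation.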

Becker, B. J. (1988). Synthesizing standardized mean-change measures. *British Journal of Mathematical and Statistical Psychology, 41*(2), 257–278.

Carlson, K. D., & Schmidt, F. L. (1999). Impact of experimental design on effect size: Findings from the research literature on training. *Journal of Applied Psychology, 84*(6), 851–862.

Morris, S. B. (2000). Distribution of the standardized mean change effect size for meta-analysis on repeated measures. *British Journal of Mathematical and Statistical Psychology, 53*(1), 17–29.

Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. *Organizational Research Methods, 11*(2), 364–386.

Viechtbauer, W. (2007). Approximate confidence intervals for standardized effect sizes in the two-independent and two-dependent samples design. *Journal of Educational and Behavioral Statistics, 32*(1), 39–60.

1) The bias-correction factor given on page 261 by Becker (1988) includes a slight error; see Morris (2000).

2) Note that $g$ is used here to denote the bias-corrected value (as opposed to Becker, 1988, who uses $d$ to denote this). There was (and sometimes still is) some inconsistency in notation when referring to the biased and bias-corrected versions of standardized mean difference / change measures, but I would say the general trend has been to use $d$ for the biased version and $g$ for the bias-corrected version, and this is the notation I am using here.

analyses/morris2008.txt · Last modified: 2021/11/08 13:17 by Wolfgang Viechtbauer

Except where otherwise noted, content on this wiki is licensed under the following license: CC Attribution-Noncommercial-Share Alike 4.0 International