Effect Size Meta-analysis

 

Menu location: Analysis_Meta-Analysis_Effect Size.

 

Case-control studies of continuous outcomes (e.g. serum creatinine) may be investigated with respect to the overall size of the effect of an intervention. Meta-analysis may be used to investigate the combination or interaction of a group of independent studies, for example a series of effect sizes from similar studies conducted at different centres. This StatsDirect function examines the effect size within each stratum and across all of the studies/strata.

 

There are a number of statistical methods for estimating effect size; StatsDirect uses g (modified Glass statistic with pooled sample standard deviation) and the unbiased estimator d (Hedges and Olkin, 1985):

 

g = (μe - μc) / σp,  where σp = √[((ne - 1)σe² + (nc - 1)σc²) / (ne + nc - 2)]

d = J(N - 2) g,  where J(m) = Γ(m/2) / [√(m/2) Γ((m - 1)/2)]

 

where ne is the number in the experimental group, nc is the number in the control group, μe is the sample mean of the experimental group, μc is the sample mean of the control group, σe is the sample standard deviation for the experimental group, σc is the sample standard deviation for the control group, N = ne + nc, J(m) is the correction factor given m and Γ is the gamma function.
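The same calculation can be written out directly. Below is a minimal Python sketch (illustrative only, not StatsDirect's code) that computes g, the exact correction factor J(m) via the log-gamma function, and d for a single study:

```python
# Illustrative sketch, not StatsDirect's implementation.
import math

def effect_size(ne, mean_e, sd_e, nc, mean_c, sd_c):
    # pooled sample standard deviation
    sp = math.sqrt(((ne - 1) * sd_e**2 + (nc - 1) * sd_c**2) / (ne + nc - 2))
    g = (mean_e - mean_c) / sp               # modified Glass statistic
    m = ne + nc - 2                          # m = N - 2
    # exact correction factor J(m) = Gamma(m/2) / (sqrt(m/2) * Gamma((m-1)/2))
    j = math.exp(math.lgamma(m / 2) - math.lgamma((m - 1) / 2)) / math.sqrt(m / 2)
    d = j * g                                # unbiased estimator
    return g, j, d

# Kottke study from the example below: g ≈ 0.826, J(42) ≈ 0.982, d ≈ 0.811
print(effect_size(27, 18.50, 14.90, 17, 5.40, 17.30))
```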

 

For each study StatsDirect gives g with an exact confidence interval and d with an approximate confidence interval. An iterative method based on the non-central t distribution is used to construct the confidence interval for g (Hedges and Olkin, 1985).
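The approximate interval for d uses the usual large-sample variance of d. A minimal sketch, assuming the Hedges-Olkin variance formula var(d) ≈ (ne + nc)/(ne·nc) + d²/(2(ne + nc)):

```python
# Illustrative sketch: approximate 95% confidence interval for d.
import math

def d_confidence_interval(d, ne, nc, z=1.959964):
    var_d = (ne + nc) / (ne * nc) + d**2 / (2 * (ne + nc))
    half_width = z * math.sqrt(var_d)
    return d - half_width, d + half_width

# Kottke study: d = 0.8113 gives roughly (0.18, 1.44), as in the output below.
print(d_confidence_interval(0.8113, 27, 17))
```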

 

The pooled mean effect size estimate (d+) is calculated using direct weights defined as the inverse of the variance of d for each study/stratum. An approximate confidence interval for d+ is given, together with a chi-square statistic and the probability that this pooled effect size is equal to zero (Hedges and Olkin, 1985).
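In outline, each study's d is weighted by the reciprocal of its variance and the weighted estimates are combined. A minimal sketch of this fixed-effects pooling (the function name and interface are illustrative):

```python
# Illustrative sketch: inverse-variance (fixed effects) pooling of d,
# with an approximate 95% CI and a Z test of d+ = 0.
import math

def pool_fixed(ds, variances, z=1.959964):
    weights = [1.0 / v for v in variances]
    total_weight = sum(weights)
    d_plus = sum(w * d for w, d in zip(weights, ds)) / total_weight
    se = 1.0 / math.sqrt(total_weight)
    return d_plus, (d_plus - z * se, d_plus + z * se), d_plus / se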

 

StatsDirect also gives the option to base effect size calculations on weighted mean difference (a non-standardized estimate unlike g and d) as described in the Cochrane Collaboration Handbook (Mulrow and Oxman, 1996).
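With that option, the study-level statistic is simply the difference between the two group means, and its variance is taken from the two group variances; pooling then proceeds with inverse-variance weights as for d. A sketch assuming the usual Cochrane definitions:

```python
# Illustrative sketch: weighted mean difference (non-standardized) for one study.
def weighted_mean_difference(ne, mean_e, sd_e, nc, mean_c, sd_c):
    wmd = mean_e - mean_c                    # difference in means, original units
    var_wmd = sd_e**2 / ne + sd_c**2 / nc    # variance of the difference
    return wmd, var_wmd
```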

 

The inconsistency of results across studies is summarised in the I² statistic, which is the percentage of variation across studies that is due to heterogeneity rather than chance – see the heterogeneity section for more information.
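Q and I² can be computed from the fixed-effects weights; a minimal sketch (I² is floored at zero):

```python
# Illustrative sketch: Cochran's Q and I² from study effects and variances.
def inconsistency(ds, variances):
    weights = [1.0 / v for v in variances]
    d_plus = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
    q = sum(w * (d - d_plus) ** 2 for w, d in zip(weights, ds))
    df = len(ds) - 1
    i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i_squared

# In the worked example below, Q ≈ 7.74 with 6 df gives I² ≈ 22.5%.
```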

 

Please note that the results from StatsDirect may be slightly different from the results you obtain using other packages or from those quoted in papers; this is due to StatsDirect's use of an exact bias correction calculated from the gamma function.

 

DATA INPUT:

You may enter the number, mean and standard deviation for the control and experimental groups of each study. Alternatively, you may enter just the numbers in the experimental groups, the numbers in the control groups and the effect size g (NB: make sure that g is defined exactly as above).

 

Example

Data from a personal communication from Dr N. Freemantle.

 

The following data represent test outcomes for six studies in which an educational intervention was investigated:

 

                   Experimental              Control
Trial              N     Mean    SD          N     Mean    SD
Kottke             27    18.50   14.90       17    5.40    17.30
Levinson           16    60.00   29.20       15    55.00   29.20
Oliver intensive   25    10.72   6.46        66    6.92    6.83
Oliver standard    62    9.20    6.16        66    6.92    6.83
Sulmasy            9     3.75    2.55        22    1.05    2.12
White              63    60.00   13.30       40    46.30   18.60
Wilson             23    9.24    5.35        23    5.33    4.48

 

To analyse these data in StatsDirect, first prepare them in seven workbook columns (experimental n, mean and SD; control n, mean and SD; and trial labels) and label these columns appropriately. Alternatively, open the test workbook using the file open function of the file menu. Then select effect size from the meta-analysis section of the analysis menu, select the option to use mean, n and sd, and then select the columns 'Exptal. number', 'Exptal. mean', 'Exptal. SD', 'Control number', 'Control mean', 'Control SD' and 'Trial' as prompted.

 

For this example:

 

Study   J(N-2)   g        Exact 95% CI              Trial
1       0.982    0.8261   0.19       to 1.453       Kottke
2       0.9739   0.1712   -0.536     to 0.8759      Levinson
3       0.9915   0.5644   0.0954     to 1.0303      Oliver intensive
4       0.994    0.35     2.945E-17  to 0.6986      Oliver standard
5       0.9739   1.2017   0.3581     to 2.0267      Sulmasy
6       0.9925   0.8804   0.4639     to 1.2928      White
7       0.9828   0.7924   0.187      to 1.3893      Wilson

 

Study   N (exptal.)   N (control)   d        Approximate 95% CI      Trial
1       27            17            0.8113   0.1812  to 1.4413       Kottke
2       16            15            0.1668   -0.5389 to 0.8724       Levinson
3       25            66            0.5597   0.0923  to 1.0271       Oliver intensive
4       62            66            0.3479   -0.0013 to 0.6972       Oliver standard
5       9             22            1.1703   0.3419  to 1.9987       Sulmasy
6       63            40            0.8738   0.46    to 1.2876       White
7       23            23            0.7788   0.1794  to 1.3783       Wilson

 

Fixed effects (Hedges-Olkin)

Pooled effect size d+ = 0.612354 (95% CI = 0.421251 to 0.803457)

Z (test d+ differs from 0) = 6.280333 P < 0.0001

 

Non-combinability of studies

Cochran Q = 7.737692 (df = 6) P = 0.258

Moment-based estimate of between studies variance = 0.020397

I² (inconsistency) = 22.5% (95% CI = 0% to 67.1%)

 

Random effects (DerSimonian-Laird)

Pooled d+ = 0.627768 (95% CI = 0.403026 to 0.85251)

Z (test d+ differs from 0) = 5.474734 P < 0.0001
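For reference, the DerSimonian-Laird calculation adds the moment-based between-studies variance (tau²) to each study's variance before re-weighting. A minimal sketch (not StatsDirect's code):

```python
# Illustrative sketch: DerSimonian-Laird random-effects pooling.
import math

def pool_random(ds, variances, z=1.959964):
    w = [1.0 / v for v in variances]
    sw = sum(w)
    d_fixed = sum(wi * di for wi, di in zip(w, ds)) / sw
    q = sum(wi * (di - d_fixed) ** 2 for wi, di in zip(w, ds))
    df = len(ds) - 1
    # moment-based estimate of the between-studies variance
    tau2 = max(0.0, (q - df) / (sw - sum(wi ** 2 for wi in w) / sw))
    w_star = [1.0 / (v + tau2) for v in variances]      # random-effects weights
    sws = sum(w_star)
    d_plus = sum(wi * di for wi, di in zip(w_star, ds)) / sws
    se = 1.0 / math.sqrt(sws)
    return d_plus, (d_plus - z * se, d_plus + z * se), d_plus / se
```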

 

Bias indicators

Begg-Mazumdar: Kendall's tau = 0.238095 P = 0.5619 (low power)

Egger: bias = 1.439076 (95% CI = -2.5809 to 5.459051) P = 0.3997
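Egger's test regresses the standardized effect (d divided by its standard error) on precision (the reciprocal of the standard error); the intercept of that regression estimates bias. A minimal sketch of the regression, omitting the standard error and P value of the intercept:

```python
# Illustrative sketch: intercept of Egger's regression (bias estimate).
import math

def egger_intercept(ds, variances):
    y = [d / math.sqrt(v) for d, v in zip(ds, variances)]  # standardized effects
    x = [1.0 / math.sqrt(v) for v in variances]            # precisions
    n = len(ds)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return mean_y - slope * mean_x                          # intercept = bias
```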

 

Here we can say with 95% confidence, assuming a random effects model, that the true size of the effect was at least 0.4 greater for the group who received the educational intervention compared with those who did not. Assuming a fixed effects model, a slightly stronger inference could be made, with a lower confidence limit of 0.42, but the inter-study variation observed here makes the fixed effects model less appropriate.

 

See also: P values, confidence intervals.