Hartung-Knapp adjustment


I am currently working on a systematic review and meta-analysis project with several exposure-outcome associations. There is a general recommendation to use the Hartung-Knapp adjustment when the number of studies in the meta-analysis is small.

I have also experimented with the Hartung-Knapp adjustment when the number of studies is small. The Hartung-Knapp adjustment uses a t-distribution with n-1 degrees of freedom (where n is the number of studies) to calculate the 95% CI.
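To get a feel for how much the t-distribution alone widens the interval at small n, compare its critical value with the normal one (a quick Python sketch using SciPy; this ignores the Hartung-Knapp variance estimator itself and only shows the effect of the degrees of freedom):

```python
from scipy.stats import norm, t

z = norm.ppf(0.975)  # normal 97.5% critical value, ~1.96
for n in (2, 3, 5, 10):
    tcrit = t.ppf(0.975, df=n - 1)  # t critical value with n-1 df
    print(f"n={n}: t={tcrit:.2f}, ratio to z={tcrit / z:.2f}")
```

With n = 2 the t critical value is about 12.7, more than six times the normal value, so the interval is more than six times wider before the variance estimator is even considered.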

One thing is clear: the Hartung-Knapp adjustment makes the effect-size estimates less precise. The resulting confidence intervals are often extremely wide, much wider than the individual studies' 95% CIs combined. Even if you take the minimum of the individual lower limits and the maximum of the individual upper limits and form an interval from those, the Hartung-Knapp interval is still much wider. I thought one of the fundamental goals of statistical modeling is to find a model that fits the data well, which results in small errors and narrow confidence intervals. It is reasonable to be less precise and to expect wider confidence intervals in meta-analyses with few studies, but intervals this wide are not convincing to me. What are your thoughts on this? I have added a forest plot showing the meta-analysis with and without the Hartung-Knapp adjustment.


1 Answer


I suggest the following exercise. Say you measure the height of two individuals and find they are 178 cm and 185 cm tall. Construct a 95% confidence interval for the mean height of individuals in the population from which these two individuals have come. This is easily done in R with:

x <- c(178, 185)  # heights in cm
t.test(x)         # one-sample t-test; reports a 95% CI for the mean

The 95% CI goes from 137 to 226; in other words, we have no idea how tall people are on average in the population. That is what happens when you try to make an inference from a sample of two observations to the entire population.
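The same interval can be reproduced by hand (a Python sketch using SciPy, mirroring the R call above): the width is driven almost entirely by the t critical value with 1 degree of freedom.

```python
import math
from scipy.stats import t

x = [178, 185]
n = len(x)
mean = sum(x) / n                                           # 181.5
sd = math.sqrt(sum((v - mean) ** 2 for v in x) / (n - 1))   # sample SD
se = sd / math.sqrt(n)                                      # standard error of the mean
tcrit = t.ppf(0.975, df=n - 1)                              # ~12.71 with 1 df
lo, hi = mean - tcrit * se, mean + tcrit * se
print(round(lo, 1), round(hi, 1))                           # ~137.0 to ~226.0
```

Swapping the t critical value for the normal 1.96 would give roughly 175 to 188, which illustrates how much of the width comes from having only one degree of freedom rather than from the data themselves.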

Of course, meta-analysis is a different application and the formulas are different, but the principle is the same. You are trying to estimate the mean effect size in a population of studies from which you have observed only two studies. Yes, the confidence interval will be huge, and rightly so.

If you still intend to make an inference about the population of studies (i.e., stick to a random-effects model), then you can narrow the CI by bringing in external information, typically via a Bayesian model. Some modifications to the Hartung-Knapp procedure have also been proposed for meta-analyses with few studies. Some relevant references are:

Röver, C., Knapp, G., & Friede, T. (2015). Hartung-Knapp-Sidik-Jonkman approach and its modification for random-effects meta-analysis with few studies. BMC Medical Research Methodology, 15, 99. https://doi.org/10.1186/s12874-015-0091-1

Bender, R., Friede, T., Koch, A., Kuss, O., Schlattmann, P., Schwarzer, G., & Skipka, G. (2018). Methods for evidence synthesis in the case of very few studies. Research Synthesis Methods, 9(3), 382-392. https://doi.org/10.1002/jrsm.1297

Seide, S. E., Röver, C., & Friede, T. (2019). Likelihood-based random-effects meta-analysis with few studies: Empirical and simulation studies. BMC Medical Research Methodology, 19(1), 16. https://doi.org/10.1186/s12874-018-0618-3