Owen Yang

There seems to be a group of meta-analysis researchers who are taught to do a random-effects meta-analysis whenever there is heterogeneity in their results. I can generally respect this protocol, but I do not agree with it.

What frustrates me is that more and more meta-analysis researchers are trained to follow a protocol without understanding its purpose. When you ask them why they use a random-effects model, they tend to answer 'the protocol says so.'

Why do you do a meta-analysis?

For me, the purpose of a meta-analysis is to obtain an aggregated result from a number of studies that are similar enough; by aggregating the results we get some sort of quantified consensus. If the studies are not similar enough, then aggregating them would not make sense.
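To make the aggregation concrete, here is a minimal sketch of the usual inverse-variance (fixed-effect) pooling, assuming each study is summarised by an effect estimate and its standard error; the study numbers are invented purely for illustration.

```python
# A minimal sketch of fixed-effect (inverse-variance) pooling, assuming
# each study is summarised by an effect estimate and a standard error.
# The study values below are made up purely for illustration.
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance weighted average of study effects."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect estimates (e.g. log odds ratios) from three similar studies
effects = [-0.20, -0.35, -0.10]
std_errors = [0.10, 0.15, 0.12]
print(fixed_effect_pool(effects, std_errors))
```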

Again, even if the studies appear similar, if the 'opinions' from the studies are very different, aggregating the results is just like taking an average of extreme results, and would not be a fair reflection of what is actually going on.

It is as if you took a group of European countries with left-wing and right-wing governments and concluded that the opinion of the European Union is neutral. That is not just untrue, but a little insulting to the individual countries.

Of course, the so-called similarity depends on how specific the question is. There are ways to see European countries as similar, and other ways to see them as very different. One researcher may be interested in the effect of paracetamol on knee pain, in which case studies of hip pain should not be considered. But another may feel that both are joint pain in general, and decide they are similar enough as far as their question is concerned.

When there is no consensus

Sometimes we are not certain in advance whether two types of studies should be treated the same, but we can 'test' it afterwards by checking whether the collection of studies to be meta-analysed reflects discretely different opinions, i.e. whether they are 'heterogeneous'. For example, if we find strong heterogeneity across 25 studies about the effect of paracetamol on joint pain, we could take a second look at these studies and see whether the difference can be more or less explained by the type of joint studied. If the type of joint appears to explain the heterogeneity, that is, if there is little difference among studies of the same joint but a large difference between studies of different joints, then it is possible that the studies of different joints should be treated as different studies, as in the sketch below.
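As a rough illustration of this kind of check, here is a minimal sketch that computes Cochran's Q and I² within each joint type and then across all studies combined. The knee/hip split and all the numbers are invented for illustration.

```python
# A minimal sketch of checking whether heterogeneity is explained by a
# grouping variable, using Cochran's Q and I-squared. Each study is assumed
# to be summarised by an effect estimate and a standard error.
def cochran_q(effects, std_errors):
    """Cochran's Q and I-squared for a set of study effects."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i_squared

# Invented effects and standard errors for six paracetamol studies
studies = {
    "knee": ([-0.40, -0.35, -0.45], [0.10, 0.12, 0.11]),
    "hip":  ([-0.05, 0.00, -0.10], [0.10, 0.11, 0.12]),
}

# Little heterogeneity within each joint type...
for joint, (effects, ses) in studies.items():
    print(joint, cochran_q(effects, ses))

# ...but substantial heterogeneity across all six studies combined,
# suggesting the joint type should be treated as a real distinction.
all_effects = [e for eff, _ in studies.values() for e in eff]
all_ses = [s for _, ses in studies.values() for s in ses]
print("all", cochran_q(all_effects, all_ses))
```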

If paracetamol has different effects on knee pain and hip pain, why would you aggregate the results?

Why do some people feel it is okay to do a random-effects meta-analysis and report the aggregated result?

When a random effect could work

My guess comes back to basics. An overarching random effect could work if you have many study results conducted in very different ways, and the results vary wildly but not in clusters. In that case, I may be open to reporting both a standard fixed-effect meta-analysis result and a random-effects result. But in theory, if the studies are 'random enough,' there should not be much difference between the fixed-effect and random-effects results, because the randomness has already taken care of itself. If the studies are not random enough, then I would wonder why we bother with a random effect (if it is not random). In all, my feeling is that a difference between a fixed-effect and a random-effects analysis can be used to give ourselves a little warning, but drawing a conclusion from the random-effects result always baffles me.
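Here is a minimal sketch of that comparison, using the DerSimonian-Laird estimate of the between-study variance for the random-effects pool; the inputs are hypothetical, invented to show two 'clusters' of results.

```python
# A minimal sketch contrasting fixed-effect pooling with a DerSimonian-Laird
# random-effects pool, using hypothetical effect estimates and standard errors.
import math

def pool(effects, std_errors, random_effects=False):
    """Inverse-variance pooled effect and standard error."""
    w = [1 / se**2 for se in std_errors]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    if random_effects:
        # DerSimonian-Laird estimate of between-study variance tau^2
        q = sum(wi * (e - pooled)**2 for wi, e in zip(w, effects))
        df = len(effects) - 1
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        # Re-weight each study by its total (within + between) variance
        w = [1 / (se**2 + tau2) for se in std_errors]
        pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return pooled, math.sqrt(1 / sum(w))

effects = [-0.40, -0.35, -0.05, 0.00]
ses = [0.05, 0.20, 0.05, 0.20]
print("fixed: ", pool(effects, ses))
print("random:", pool(effects, ses, random_effects=True))
```

Here the point estimate shifts and the random-effects interval is much wider; to me that is a signal to go looking for the clusters, not to report the wider interval as the answer.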

Coming back to the basics of random effects, a random-effects meta-analysis could also work in a scenario where we actually have the individual-level data behind each study in the meta-analysis. But that is probably a different topic.

Please let me know what you think about this.