Try to stifle that yawn because this is important stuff.
In the age of PubMed and Google Scholar, where scores of papers on health research are uploaded daily, science is very accessible. The main problem is that there’s no filter separating the poor and irrelevant research from the useful stuff. It’s all science. It’s often poorly reported in the media, and cherry-picked by people to push their opinion – or their product – onto you.
In other words, if you’re interested in reading about health and taking responsibility for your own, knowing how to tell the good from the bad is crucial. Here’s how we do that.
So, here’s a survival guide for use with health science and the different factors you should consider when reading a piece of research:
1) Read more than the abstract alone.
Abstracts are great, but they’re there to do a job, i.e. give a condensed feel for the paper. They won’t give you a good idea of how well the research was conducted or how valid the conclusions are.
> The abstract is the sales pitch; go to the ‘Methods’ section and consider the following points:
2) Think about who the research was done on.
Experiments use a lot of different subjects. Mice are commonly used in nutrition research, such as fasting research, because they’re small, easy to look after, and their lifecycle is short, which means you can follow them over their whole life, or over several generations, in a matter of years. The problem is: how relevant is a result from a mouse to you? ‘Not very’ is the answer.
> If it’s in humans it may be relevant; if not, it’s interesting at best.
3) How relevant is the population used?
Is the research in diseased people? Are they infants? Are they 50 years older than you?
> Needs differ. The more similar the population studied is to you, the more their needs and reactions to the intervention reflect yours.
Speaking of which:
4) Was there an intervention or was it epidemiology?
An ‘intervention’ is when you take a group of people and actually give them the thing you are testing to consume; most nutrition work is not done this way. Nutrition relies heavily on ‘epidemiology’, where you follow a group of people and ask them what they’re doing – with a food diary, for example – then use statistics to connect outcomes (health levels, specific diseases, etc.) with what they ate. The advantage here is that it’s much easier to do: you can follow thousands of people over many years and gather a lot of data. The problem is that you can only draw connections; they do not prove a definite, linked cause and effect, and this is very important.
> Epidemiology gives you something to ponder and raises a question, but only interventions can give you an answer.
5) How long did the experiment run for and how many people were used?
Often, especially where people are being studied, teams will run small, short ‘pilot studies’. These test the water to see if a bigger study is feasible or warranted; they don’t lend much weight to an argument.
> Health is a lifelong concern: the longer the experiment, the better the idea of the outcome, especially in diet-related research. The same goes for numbers – the more participants, the less likely the result is down to chance.
6) What was the ‘control’ like? Were they similar? Was it placebo controlled? Did they do a ‘cross over’?
If you’re doing an intervention, a control group gives you a baseline against which to compare the ‘test’ group; controls are very important. The best are:
Similar? This allows you to make valid comparisons.
Placebo controlled? The placebo effect is an amazing thing. To avoid it skewing the results, give the control group a fake pill, and do this ‘blind’, so that the control group doesn’t know they aren’t getting the ‘real thing’.
Double blinded? Here neither the person taking it nor the experimenters know whether it is real or fake. This makes the data far harder to fiddle, and the participants can’t pick up hints about whether there should be some effect.
Blinding is an issue in nutrition, because it is pretty hard to feed someone a piece of fish without them realising.
> Blinded experiments rule out the placebo effect and are a sign of a well-run piece of research; however, they’re difficult to do in nutrition.
7) Did they control the confounds well and collect the data in a sensible way?
Red meat eaters drink more alcohol, smoke more and eat fewer vegetables. These factors are ‘confounds’: things that might produce the same result (worse health) and that may be correlated with, but are not caused by, the thing being studied. Many highly quoted studies don’t control these factors, so the results and conclusions become skewed.
Dig deeper into some papers and you might find that when the diet diaries were collected, red meat was lumped in with burger meals, processed foods and the like; these are failures of the data collection method. How can you compare a fillet steak to a frankfurter, or say that deep-fried chips are red meat? Astoundingly, this happens.
> In nutrition, many factors collude to give us the end result. You have to control well for these to single out the one you’re studying, or the conclusions are groundless.
8) Does the conclusion gel with the results?
Sometimes, though not often, you’ll find that the conclusion (and more often the discussion) drifts away from what the paper actually showed. This is where problems often start in the press, because a single sentence can be seized upon to spin a whole story.
> Go back to the results and really think about what the analysis showed: does it match the conclusions?
The Bottom Line
These are the major factors for your consideration. There are others, such as the statistical methods used and the journal the paper appears in, but these are the more obvious points to look for when weighing up a piece of research. They are not the be-all and end-all: not all good research is blinded, and epidemiology and animal studies should not be ignored. Taken together, though, these points tell you how relevant a study might be to you.