Scientists today announced that they have found a link between baldness and eating cheese toasties.
How often do you see a news story announcing some sort of new scientific discovery and think, 'How can they have proved that? It can't be true'? This entry attempts to explain some things you should think about before deciding whether to believe a new medical discovery.
Where Does the New Research First Appear?
Scientists and doctors perform tests and write up the results in a form known as a paper; they then send the paper off to scientific journals such as the British Medical Journal or The Lancet. These learned journals will publish the research if they think it's good enough. There are lots of scientific journals - some of them are rated more highly than others. The best journals will have every submitted article reviewed by a panel of experts before it is published. These are called peer-reviewed journals.
So if something is in a peer-reviewed journal you'd be more likely to believe it than if it's been written up in a non-peer-reviewed journal. If the results of an experiment haven't been published in any journal then you should be very suspicious of the research - either the experiments haven't been completed, or the research isn't good enough to be published due to some major flaw. If all you have is a report in the 'Your Health' section of a newspaper, read the original research before you make a decision about it.
Is it Any Good?
When you're making a decision about a paper, start off by being very cynical and see if it convinces you. For instance, a paper's publication in the BMJ doesn't guarantee that it's been well-researched. The same applies if the paper's been written by your boss or if it happens to arrive at a conclusion that you want to be true.
Scientific papers are generally written in a standard style. Firstly, there's a section that looks at the background to the study, which explains why the author chose to do this experiment. Next comes an explanation of how they carried out the experiment, and then the results are outlined, finally leading on to a conclusion.
Is the Experiment Well-designed?
If the purpose of the study is to decide whether a particular drug is the best treatment for a disease, the ideal design is a double blind randomised controlled trial. To find out whether a new treatment will treat a particular disease, you need to give it to people who have the disease and examine the results. You then have to compare these results to those people who are taking the existing treatment; if there isn't any existing treatment, you can compare it to people who aren't being treated. This group of people is known as the control group. Of course, to be fair, you need to choose who gets the new treatment at random.
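The random allocation itself is simple to do. Here's a minimal Python sketch (the patient IDs are hypothetical) that shuffles the participants and splits them into a treatment group and a control group:

```python
import random

def randomise(participants, seed=None):
    """Randomly split participants into a treatment group and a control group."""
    rng = random.Random(seed)
    shuffled = participants[:]          # copy, so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical patient identifiers
patients = ["P01", "P02", "P03", "P04", "P05", "P06"]
treatment, control = randomise(patients, seed=42)
```

Because the split comes from a shuffle rather than from anyone's judgement, neither the researchers nor the patients can influence who ends up in which group.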
If you tell someone who has an illness that you are going to give them a new drug that will relieve pain, they may be so pleased to have the new drug that their pain seems to ease. They may also want to be helpful and tell you that the pain has lessened even if it hasn't. To counteract this you can give half the people a tablet the same size and shape as the real drug but made out of an inactive substance such as sugar. This is called a placebo.
Of course, if you tell someone that you are giving them a useless sugar pill then they won't tell you it reduced their pain. The research is only valid if the participants don't know whether they are taking the placebo or the real treatment.
However, researchers sometimes see what they want to see. This may be conscious, such as misreading blood pressure in a way that leads to the 'right' results, or subconscious, such as giving hints to the patients that they are on the active drug. The best kind of research is one where neither the patients nor the researchers know who is on the active drug: a double blind controlled trial.
Problems with Double Blind Controlled Trials
If these trials are done properly they are the best way to decide if a new drug or treatment is effective. So if you are reading a paper that claims to have used a double blind controlled trial, ask yourself if it's really blind.
If the placebo drug is small and pink and tastes of sugar then it will be easy for the patients to work out that they're on an inactive drug. If the active drug causes very bad side effects and the placebo doesn't, the researchers and patients will be able to tell which is which.
You also need to check that they're measuring the right thing. Imagine you have a drug that it is hoped will reduce the risk of a particular disease occurring. You think that this disease is caused by having too much of a certain chemical in the blood. So when measuring the results you should count how many people in the group that took the drug contracted the disease, rather than looking at the average level of the chemical in the blood of the patients.
If you are going to use the results of this study to change things, for example the treatments that people are receiving, you need to be certain that a treatment is of benefit. It is better if several different experiments using the same method are performed in several different places: this is known as a multi-centre study. If you are reading a multi-centre study then you have to be sure that all the centres were in fact using the same protocol. For example, if interviewing people or asking them to fill out questionnaires formed part of the results, you would want to see that there were no subtle changes to the questions during the translation process.
If several different researchers have done several similar studies in the past, their results can be analysed together as a meta-analysis.
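The pooling step behind a meta-analysis can be sketched in a few lines of Python. This is a minimal fixed-effect (inverse-variance) sketch with made-up study figures, not the method of any particular published meta-analysis:

```python
def pooled_estimate(estimates, standard_errors):
    """Fixed-effect meta-analysis: inverse-variance weighted average of study results.

    More precise studies (smaller standard errors) get larger weights.
    """
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical studies of the same treatment effect
estimates = [0.40, 0.55, 0.48]
errors = [0.10, 0.20, 0.15]
effect, se = pooled_estimate(estimates, errors)
```

Note that the pooled standard error is smaller than that of any single study - combining studies is precisely how a meta-analysis gains its extra power.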
Matched and Unmatched
For almost all diseases there are several risk factors that could affect the outcome. So you have to make sure that the control group is as similar as possible to the group who are receiving the treatment. One way of doing this is to match each person in the treatment group with someone in the control group of similar sex, age and occupation. If this is possible, better results are usually obtained.
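A crude illustration of matching in Python, pairing each case with the unused candidate of the same sex who is closest in age (all the names and figures below are invented):

```python
def match_controls(cases, pool):
    """Pair each case with the unused control candidate of the same sex closest in age."""
    matched = []
    available = pool[:]                 # copy, so each control is used at most once
    for case in cases:
        candidates = [p for p in available if p["sex"] == case["sex"]]
        if not candidates:
            continue                    # no suitable control left for this case
        best = min(candidates, key=lambda p: abs(p["age"] - case["age"]))
        available.remove(best)
        matched.append((case, best))
    return matched

# Hypothetical participants
cases = [{"name": "A", "sex": "F", "age": 50},
         {"name": "B", "sex": "M", "age": 62}]
pool = [{"name": "C", "sex": "F", "age": 48},
        {"name": "D", "sex": "M", "age": 70},
        {"name": "E", "sex": "M", "age": 60}]
pairs = match_controls(cases, pool)
```

Real studies match on more factors (occupation, smoking status and so on), but the principle is the same: every comparison is between people who are alike in everything except the thing being studied.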
Other Sorts of Trials
Sometimes it isn't possible to do a randomised controlled trial. Sometimes it isn't ethical. For instance, you can't try to prove that taking ecstasy can kill you by giving it to people as part of a randomised controlled trial and seeing how many people die. For a lot of situations it just doesn't make sense; for example, if you are investigating the cause of a disease or how satisfied people are with services provided by a hospital.
Case Control Studies
These are usually studies into the causes of diseases. The researchers take one set of people who have a particular disease and compare them to similar people without the disease. The important thing to consider when reading one of these papers is: are the controls as similar as possible to the cases, or are they just the people it was convenient to access?
A common mistake with case control trials carried out in a hospital is to use patients from a different ward as the controls. For example, imagine research being carried out on the causes of lung cancer. The researchers have interviewed several patients on the hospital's respiratory ward and have then compared the findings to those of a supposed control group from the cardiology ward. But smoking can cause both lung cancer and heart disease, so the controls on the cardiology ward may well smoke just as much as the cases on the respiratory ward, masking any link between smoking and lung cancer.
Cohort Studies

Cohort studies are used to study the association between a disease and a probable risk factor. Two groups of people are studied, one with the risk factor and one without it. These groups are followed up on a regular basis to see how many people get the disease. The biggest problem with these studies is that following the participants up sometimes becomes impossible. If someone does not answer a questionnaire five years after agreeing to take part in a study, is it because they have decided not to take part in the study or because they have died of the disease that is being studied?
A method often used to get around this is to perform the studies on doctors, who in many countries have to be registered (eg, with the GMC in the UK) - so it is easy to locate them wherever they are. An important study that went a long way to proving the link between smoking and lung cancer used this method.
A more general point to consider when looking at either a case control or a cohort study is whether anything else besides the risk factor we are looking at could have caused the differences in the incidence of this disease between these two groups.
Now Look at the Results
All scientific papers should include the actual results of the experiments (raw data) and some statistical interpretation of them. This is to enable you to check if their sums are correct. Be very suspicious if it isn't there. There are numerous types of statistical tests; however, the two outlined below occur in nearly every scientific paper.
All scientific papers will include an average of some sort - this will usually be an arithmetic mean, with figures called 'confidence intervals' afterwards. These are a pair of values, accompanied by a percentage (usually 95%), showing how sure the researchers are that the true mean lies between those two values.
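The mean and its confidence interval are easy enough to compute yourself. Here is a small Python sketch using the usual normal approximation for a 95% interval, with made-up blood-pressure readings:

```python
from math import sqrt

def mean_and_ci(values, z=1.96):
    """Arithmetic mean with an approximate 95% confidence interval.

    Uses the normal approximation: mean +/- 1.96 standard errors.
    """
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    sem = sqrt(variance / n)                                   # standard error of the mean
    return mean, (mean - z * sem, mean + z * sem)

# Hypothetical systolic blood-pressure readings
readings = [118, 124, 121, 130, 126, 119, 123, 127]
mean, (low, high) = mean_and_ci(readings)
```

The narrower the interval, the more precisely the study has pinned down the true value - and a larger sample generally gives a narrower interval.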
The P Value
This is a value that shows how likely it is that results as strong as those observed would have turned up purely by chance, if there were really no connection between the two things being compared. It is expressed as a decimal of 1, so 0.5 means that there is a 50/50 chance that these results occurred by chance. Usually scientists require the P value to be less than 0.05 - that's a one in 20 chance that the results occurred by chance.
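One way to get a feel for what the P value means is to simulate it: shuffle the group labels many times and count how often a difference as large as the observed one turns up by chance alone. A small Python sketch with invented pain-score data (not real trial results):

```python
import random

def permutation_p_value(group_a, group_b, trials=10000, seed=0):
    """Estimate how often a difference in means this large arises by chance alone.

    Repeatedly reshuffles all scores into two random groups and counts how often
    the shuffled difference is at least as big as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    combined = group_a + group_b
    count = 0
    for _ in range(trials):
        rng.shuffle(combined)
        a = combined[:len(group_a)]
        b = combined[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / trials

drug = [5, 6, 7, 6, 8, 7]      # hypothetical pain-relief scores on the new drug
placebo = [3, 4, 3, 5, 4, 4]   # hypothetical pain-relief scores on placebo
p = permutation_p_value(drug, placebo)
```

If almost none of the random shuffles produce a gap as big as the real one, the P value is small and the difference is unlikely to be a fluke.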
After the results comes a discussion of what the results mean. Check that this is consistent with the results. Reputable journals will occasionally publish a paper with a conclusion that cannot be supported by the results. Also check that any recommendations made in the paper are reasonable. For example, if a small study of 10 patients shows that a treatment may have some small side effects, the researchers should be recommending further research rather than stopping treatment altogether.
Now reflect on your existing knowledge of the subject. For example, you are reading about a study that claims to make a link between baldness and eating cheese toasties, while stating that there is a 1 in 20 chance of the link being coincidental. Either a completely new biological mechanism causing baldness has been found or the link was coincidental. If you think this occurred by coincidence, don't dismiss the research out of hand - wait until more research has been published and have a look at their results.
Look at who funded the research: this should be at the end of the paper. Is it a university, a charitable body, or a company that is trying to sell you its drug? This is not to say that all research by drug companies is poor research; much of it is valuable. As a rule of thumb, if it is published in a reputable journal, it is generally good research; if it has just been handed to you by an attractive woman who is trying to sell you the drug, take it with a pinch of salt.
In the light of all this you must decide: perhaps you believe the study is true and will make decisions based on it; perhaps you are unsure and want to wait until further research is produced; or perhaps you don't believe a word of it and will dismiss it out of hand.