"Types of study design" (
x), развернутый конспект.
There are literally thousands of ways to design a study. When a news story suggests, "A new scientific study has found..." or a celebrity doctor begins a sentence with, "Studies show...", you need to ask, "What kind of studies?" Because "studies" are not equally reliable, they all have different limitations, and they should not be acted on in the same manner — or even acted on at all.
Much of health research can be broken down into two types:
I. Observational studies - scientists observe and gather data on some phenomenon that's already happening. They don't intervene at all to change anything in people's lives; they merely gather descriptive information on habits, beliefs, or events.
The four most common types of observational studies:
1). "Cross-sectional surveys" take a random sample of people and record information about them at one point in time.
Example: Researchers might survey randomly selected inhabitants of Washington, DC to figure out how many have heart disease (i.e., an epidemiological survey) or what they think about the quality of green space for outdoor exercise (i.e., a public opinion poll).
2). "Cohort studies" (often called "longitudinal" and "prospective" studies) are just like surveys but they track the same groups of people over an extended period of time.
Example: Instead of just gathering data on heart disease in Washington DC at one point in time, a cohort study would follow groups (or cohorts) of study participants over a period of, say, 10 years, and see how many people in each of the groups develop heart disease. This allows researchers to record changes in the health of the participants over time and compare the levels of health in different groups of people.
3). "Case-control studies" start with an end point and work backward, figuring out what might have caused that outcome, and are often called "retrospective studies."
Example: Researchers could take two groups of people who live in Washington, DC: those who have been diagnosed with heart disease and those who haven't. They could then work backwards and survey the two groups about their earlier health behaviors to figure out what might have caused the disease to develop or not. They may ask about saturated fat consumption or exposure to disease-inducing viruses. From there, they would note any differences in risk factors or exposures that emerge between the two groups, which can help suggest what may have led to heart disease in some people.
4). "Case reports" are basically detailed stories about a particular patient's medical history.
Example: If a doctor writes up case reports about a cluster of patients with the same condition or disease, this is a "case series." Though these are considered the weakest kind of observational studies, they can still be very helpful for rare diseases and powerful for advocacy.
Observational studies have limits: from a single study, researchers will only be able to suggest whether there's an association between a risk like fat consumption and an outcome like heart disease, not that one caused the other. A major reason is "confounding factors": difficult-to-predict variables that are associated with both the suspected cause (e.g., saturated fat) and the potential effect (e.g., heart disease) under study.
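To make confounding concrete, here is a toy simulation sketch. Every number and variable in it is invented for illustration, not taken from any real study: age drives both saturated-fat intake and heart disease, so the two end up associated even though the diet has no effect at all in this model.

```python
import random

random.seed(0)

# Hypothetical model: age raises the chance of both a high-fat diet
# and heart disease, but the diet itself has no causal effect here.
people = []
for _ in range(10_000):
    age = random.uniform(20, 80)
    high_fat_diet = random.random() < age / 100   # older -> more likely exposed
    heart_disease = random.random() < age / 200   # older -> more likely ill
    people.append((high_fat_diet, heart_disease))

def disease_rate(exposed):
    outcomes = [ill for diet, ill in people if diet == exposed]
    return sum(outcomes) / len(outcomes)

# The high-fat group shows more disease only because it is older on
# average: an association without causation.
print(f"disease rate, high-fat diet: {disease_rate(True):.3f}")
print(f"disease rate, low-fat diet:  {disease_rate(False):.3f}")
```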
II. Experimental research - scientists do intervene, or at least use statistical methods to mimic intervention. In the best-designed experiments, study participants are randomly divided into at least two groups: those who get the intervention (i.e., treatment) and those who don't (i.e., placebo). Random allocation ensures that the groups are statistically comparable with potential "confounding factors" equally distributed among them. The only difference between the groups is the intervention, which allows researchers to tease out what effect that intervention causes. This is why conclusions from experiments are generally considered to be more reliable and trustworthy.
The two basic types of experimental research:
1). "Randomized controlled trials" are designed to tease out cause-and-effect relationships; randomization means treatment groups are comparable, and the only difference between them is the intervention (i.e., whether they received the drug or not) so any difference in outcome between the two groups can be attributed to the intervention.
These experiments are considered the gold standard of medical evidence, though they aren't necessarily the best study design for every research question. When blinded, they're even more powerful: blinding means either the study participants, the doctors, or both ("double-blinded") do not know whether they are receiving/giving the real treatment or a placebo. So blinded studies account for any placebo effects that may arise.
2). "Quasi-experiment" lies somewhere between experimental and observational research. These are essentially a type of unplanned or uncontrolled experiment that uses statistics and human ingenuity to mimic the conditions of an experiment.
Examples: Comparing tobacco consumption in a border town before and after it is subjected to new state smoking regulations, using its neighboring town in a different state that keeps the old regulations as the comparison group. Or evaluating the effects of GPA-based university scholarships by comparing students who fell just above and just below the grade point cut-off for receiving them.
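As promised above, here is a minimal sketch of why randomization licenses causal conclusions. Everything in it is hypothetical (the risk model, the assumed 10-point treatment effect): because allocation is a coin flip, the age confounder from the earlier sketch balances out across groups, and the gap between event rates recovers the treatment effect.

```python
import random

random.seed(1)

# Hypothetical participants; age is the lurking variable again.
participants = [random.uniform(20, 80) for _ in range(10_000)]

# Random allocation: a coin flip per person, so age (and every other
# confounder, measured or not) balances out across groups on average.
random.shuffle(participants)
half = len(participants) // 2
treatment, placebo = participants[:half], participants[half:]

def had_event(age, treated):
    # Toy outcome model: risk rises with age; the assumed treatment
    # removes a fixed 10 percentage points of risk.
    risk = age / 200 - (0.10 if treated else 0.0)
    return random.random() < max(risk, 0.0)

treated_rate = sum(had_event(a, True) for a in treatment) / half
placebo_rate = sum(had_event(a, False) for a in placebo) / half

# With comparable groups, the roughly 0.10 gap can be read as the
# causal effect of the intervention itself.
print(f"event rate, treatment: {treated_rate:.3f}")
print(f"event rate, placebo:   {placebo_rate:.3f}")
```

Note that nothing about age had to be measured or even known for the comparison to be fair; that is exactly what random allocation buys.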
Systematic reviews
Rather than relying on just one person's experience or even just one randomized controlled trial, synthesized evidence draws on multiple sources and weighs their contributions according to each study's rigor and relevance to arrive at a more fully supported conclusion. This kind of research is regarded as the highest form of evidence and the best science to inform decision-making.
Within synthesized evidence, the most reliable type for evaluating health claims is the "systematic review." These studies represent the best available syntheses of global evidence about the likely effects of different decisions, therapies and policies.
As their name suggests, systematic reviews use particular methods for finding helpful information, assembling it, and assessing its quality and applicability to the question you're interested in answering. Following this approach to the evidence — which is usually independently repeated at least twice by separate reviewers — reduces the bias that can creep into single studies. This process also helps to make sure results are not skewed or distorted by an individual author's preconceptions or cognitive biases. Finally, such transparency means that readers can know what the authors did to arrive at their conclusions and can easily evaluate the quality of the review itself.
"Stop googling your health questions" (x), excerpts.
Databases of Systematic Reviews
If you don't find information about the health question you're researching in one of these databases, there are other good, evidence-based sources. Try MedlinePlus, Mayo Clinic, and NHS Choices. For more reliable health information, bookmark this page on the top 100 health websites you can trust. And if you want to nerd out about medical evidence, check out the book Testing Treatments, which is free to download.
The group that's done more to further that cause than perhaps any other is the Cochrane Collaboration, an international not-for-profit established in the early 1990s. You've probably never heard of it (incidentally, like the evidence-based medicine movement, it was also co-founded by prudent Canadians), but they're one of the best sources for unbiased medical information in existence.
The Cochrane Collaboration logo has a cool story behind it.
The horizontal lines on the logo represent seven experiments looking at whether a course of corticosteroids for women who were expected to give birth prematurely reduced the risk of death in their babies. The left-hand side of the circle means the results of the studies were positive and the drug was proven to be useful; the right-hand side means the opposite was shown to be true. The middle, vertical line means there was 'no difference,' or that the drug may or may not work. And the diamond represents the combined results of all the studies.
As you can see, most of the studies showed the drug worked, and the combined results came out in support of using corticosteroids in mothers to save their babies' lives. But until the first systematic review was published almost 20 years after the drug hit the market, doctors were left to wade through contradictory studies on the question and basically guess about what to do with their patients. Thousands of babies suffered and died needlessly.
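For the curious, here is a sketch of how the "diamond" on a forest plot like the one in Cochrane's logo can be computed. The numbers below are invented, not the corticosteroid data, and the method shown is standard fixed-effect, inverse-variance pooling of log odds ratios; it is offered only as an illustration of how combined results can be decisive when individual trials look contradictory.

```python
import math

# Invented data: (log odds ratio, variance) for several hypothetical
# trials of the same treatment. Negative values favor the treatment.
studies = [
    (-0.45, 0.04),
    (-0.20, 0.02),
    ( 0.10, 0.09),   # one trial pointing the other way
    (-0.60, 0.05),
    (-0.05, 0.03),
]

# Fixed-effect (inverse-variance) pooling: each trial is weighted by
# 1/variance, so larger, more precise trials count for more.
weights = [1.0 / var for _, var in studies]
pooled = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))

# The 95% confidence interval is the width of the forest plot's diamond.
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
verdict = "favors treatment" if hi < 0 else "inconclusive or favors control"
print(f"pooled log odds ratio: {pooled:+.3f} (95% CI {lo:+.3f} to {hi:+.3f})")
print(verdict)
```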
"Research study hype" (x), excerpts.
In 2003, researchers writing in the American Journal of Medicine discovered something that should change how you think about medical news. They looked at 101 studies published in top scientific journals between 1979 and 1983 that claimed a new therapy or medical technology was very promising. Only five, they found, made it to market within a decade. Only one (ACE inhibitors, a pharmaceutical drug) was still extensively used at the time of their publication.
>>> the overwhelming majority of studies in medicine fail <<<