
5 Unethical Medical Experiments Brought Out of the Shadows of History

Prisoners and other vulnerable populations often bore the brunt of unethical medical experimentation.


Most people are aware of some of the heinous medical experiments of the past that violated human rights. Participation in these studies was either forced or coerced under false pretenses. Some of the most notorious examples include the experiments by the Nazis, the Tuskegee syphilis study, the Stanford Prison Experiment, and the CIA’s LSD studies.

But there are many other lesser-known experiments on vulnerable populations that have flown under the radar. Study subjects often didn’t — or couldn’t — give consent. Sometimes they were lured into participating with a promise of improved health or a small amount of compensation. Other times, details about the experiment were disclosed but the extent of risks involved weren’t.

This perhaps isn’t surprising, as the doctors who conducted these experiments were representative of prevailing attitudes at the time of their work. But unfortunately, even after informed consent was introduced in the 1950s, disregard for the rights of certain populations continued. Some of these researchers’ work did result in scientific advances — but those advances came at the expense of harmful and painful procedures performed on unknowing subjects.

Here are five medical experiments of the past that you probably haven’t heard about. They illustrate just how far the ethical and legal guidepost, which emphasizes respect for human dignity above all else, has moved.

The Prison Doctor Who Did Testicular Transplants

From 1913 to 1951, eugenicist Leo Stanley was the chief surgeon at San Quentin State Prison, California’s oldest correctional institution. After performing vasectomies on prisoners, whom he recruited through promises of improved health and vigor, Stanley turned his attention to the emerging field of endocrinology, the study of certain glands and the hormones they secrete. He believed the effects of aging and decreased hormones contributed to criminality, weak morality, and poor physical attributes. Transplanting the testicles of younger men into those who were older, he thought, would restore masculinity.

Stanley began by using the testicles of executed prisoners — but he ran into a supply shortage. He solved this by using the testicles of animals, including goats and deer. At first, he physically implanted the testicles directly into the inmates. But that had complications, so he switched to a new plan: He ground up the animal testicles into a paste, which he injected into prisoners’ abdomens. By the end of his time at San Quentin, Stanley had performed an estimated 10,000 testicular procedures.

The Oncologist Who Injected Cancer Cells Into Patients and Prisoners

During the 1950s and 1960s, Sloan-Kettering Institute oncologist Chester Southam conducted research to learn how people’s immune systems would react when exposed to cancer cells. To find out, he injected live HeLa cancer cells into patients, generally without their permission. When patient consent was given, details about the true nature of the experiment were often kept secret. Southam first experimented on terminally ill cancer patients, to whom he had easy access. The result of the injection was the growth of cancerous nodules, which led to metastasis in one person.

Next, Southam experimented on healthy subjects, which he felt would yield more accurate results. He recruited prisoners, and, perhaps not surprisingly, their healthier immune systems responded better than those of cancer patients. Eventually, Southam returned to infecting the sick and arranged to have patients at the Jewish Chronic Disease Hospital in Brooklyn, NY, injected with HeLa cells. But this time, there was resistance. Three doctors who were asked to participate in the experiment refused, resigned, and went public.

The scandalous newspaper headlines shocked the public, and legal proceedings were initiated against Southam. Some in the scientific and medical community condemned his experiments, while others supported him. Initially, Southam’s medical license was suspended for one year, but the suspension was then reduced to probation. His career continued to be illustrious, and he was subsequently elected president of the American Association for Cancer Research.

The Aptly Named ‘Monster Study’

Pioneering speech pathologist Wendell Johnson suffered from severe stuttering that began early in his childhood. His own experience motivated his focus on finding the cause, and hopefully a cure, for stuttering. He theorized that stuttering in children could be impacted by external factors, such as negative reinforcement. In 1939, under Johnson’s supervision, graduate student Mary Tudor conducted a stuttering experiment, using 22 children at an Iowa orphanage. Half received positive reinforcement. But the other half were ridiculed and criticized for their speech, whether or not they actually stuttered. This resulted in a worsening of speech issues for the children who were given negative feedback.

The study was never published due to the multitude of ethical violations. According to The Washington Post, Tudor was remorseful about the damage caused by the experiment and returned to the orphanage to help the children with their speech. Despite Johnson's ethical lapses, the Wendell Johnson Speech and Hearing Clinic at the University of Iowa bears his name as a nod to his contributions to the field.

The Dermatologist Who Used Prisoners As Guinea Pigs

One of the biggest breakthroughs in dermatology was the invention of Retin-A, a cream that can treat sun damage, wrinkles, and other skin conditions. Its success led to fortune and fame for co-inventor Albert Kligman, a dermatologist at the University of Pennsylvania. But Kligman is also known for his nefarious dermatology experiments on prisoners, which began in 1951 and continued for around 20 years. He conducted his research on behalf of companies including DuPont and Johnson & Johnson.

Kligman’s work often left prisoners with pain and scars as he used them as study subjects in wound healing and exposed them to deodorants, foot powders, and more for chemical and cosmetic companies. Dow once enlisted Kligman to study the effects of dioxin, a chemical in Agent Orange, on 75 inmates at Pennsylvania's Holmesburg Prison. The prisoners were paid a small amount for their participation but were not told about the potential side effects.

In the University of Pennsylvania’s journal, Almanac, Kligman’s obituary focused on his medical advancements, awards, and philanthropy. There was no acknowledgement of his prison experiments. However, it did mention that as a “giant in the field,” he “also experienced his fair share of controversy.”

The Endocrinologist Who Irradiated Prisoners

When the Atomic Energy Commission wanted to know how radiation affected male reproductive function, it looked to endocrinologist Carl Heller. In a study involving Oregon State Penitentiary prisoners between 1963 and 1973, Heller designed a contraption that would irradiate their testicles at varying doses to see what effect radiation had, particularly on sperm production. The prisoners were also subjected to repeated biopsies and were required to undergo vasectomies once the experiments concluded.

Although study participants were paid, the compensation raised ethical questions about the potentially coercive nature of financial incentives for prison populations. The prisoners were informed about the risks of skin burns, but likely were not told about the possibility of significant pain, inflammation, and the small risk of testicular cancer.




Controversial and Unethical Psychology Experiments

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Shereen Lehman, MS, is a healthcare journalist and fact checker. She has co-authored two books for the popular Dummies Series (as Shereen Jegtvig).


There have been a number of famous psychology experiments that are considered controversial, inhumane, unethical, and even downright cruel—here are five examples. Thanks to ethical codes and institutional review boards, most of these experiments could never be performed today.

At a Glance

Some of the most controversial and unethical experiments in psychology include Harlow's monkey experiments, Milgram's obedience experiments, Zimbardo's prison experiment, Watson's Little Albert experiment, and Seligman's learned helplessness experiment.

These and other controversial experiments led to the formation of rules and guidelines for performing ethical and humane research studies.

Harlow's Pit of Despair

Psychologist Harry Harlow performed a series of experiments in the 1960s designed to explore the powerful effects that love and attachment have on normal development. In these experiments, Harlow isolated young rhesus monkeys, depriving them of their mothers and keeping them from interacting with other monkeys.

The experiments were often shockingly cruel, and the results were just as devastating.

The Experiment

The infant monkeys in some experiments were separated from their real mothers and then raised by "wire" mothers. One of the surrogate mothers was made purely of wire.

While it provided food, it offered no softness or comfort. The other surrogate mother was made of wire and cloth, offering some degree of comfort to the infant monkeys.

Harlow found that while the monkeys would go to the wire mother for nourishment, they preferred the soft, cloth mother for comfort.

Some of Harlow's experiments involved isolating the young monkey in what he termed a "pit of despair." This was essentially an isolation chamber. Young monkeys were placed in the isolation chambers for as long as 10 weeks.

Other monkeys were isolated for as long as a year. Within just a few days, the infant monkeys would begin huddling in the corner of the chamber, remaining motionless.

The Results

Harlow's distressing research resulted in monkeys with severe emotional and social disturbances. They lacked social skills and were unable to play with other monkeys.

They were also incapable of normal sexual behavior, so Harlow devised yet another horrifying device, which he referred to as a "rape rack." The isolated monkeys were tied down in a mating position to be bred.

Not surprisingly, the isolated monkeys also ended up being incapable of taking care of their offspring, neglecting and abusing their young.

Harlow's experiments were finally halted in 1985 when the American Psychological Association passed rules governing the treatment of people and animals in research.

Milgram's Shocking Obedience Experiments


If someone told you to deliver a painful, possibly fatal shock to another human being, would you do it? The vast majority of us would say that we absolutely would never do such a thing, but one controversial psychology experiment challenged this basic assumption.

Social psychologist Stanley Milgram conducted a series of experiments to explore the nature of obedience . Milgram's premise was that people would often go to great, sometimes dangerous, or even immoral, lengths to obey an authority figure.

The Experiments

In Milgram's experiment, subjects were ordered to deliver increasingly strong electrical shocks to another person. While the person in question was simply an actor who was pretending, the subjects themselves fully believed that the other person was actually being shocked.

The voltage levels started out at 30 volts and increased in 15-volt increments up to a maximum of 450 volts. The switches were also labeled with phrases including "slight shock," "medium shock," and "danger: severe shock." The maximum shock level was simply labeled with an ominous "XXX."

The results of the experiment were nothing short of astonishing. Many participants were willing to deliver the maximum level of shock, even when the person pretending to be shocked was begging to be released or complaining of a heart condition.

Milgram's experiment revealed stunning information about the lengths that people are willing to go in order to obey, but it also caused considerable distress for the participants involved.

Zimbardo's Simulated Prison Experiment


Psychologist Philip Zimbardo went to high school with Stanley Milgram and had an interest in how situational variables contribute to social behavior.

In his famous and controversial experiment, he set up a mock prison in the basement of the psychology department at Stanford University. Participants were then randomly assigned to be either prisoners or guards. Zimbardo himself served as the prison warden.

The researchers attempted to make a realistic situation, even "arresting" the prisoners and bringing them into the mock prison. Prisoners were placed in uniforms, while the guards were told that they needed to maintain control of the prison without resorting to force or violence.

When the prisoners began to ignore orders, the guards began to utilize tactics that included humiliation and solitary confinement to punish and control the prisoners.

While the experiment was originally scheduled to last two full weeks, it had to be halted after just six days. Why? Because the prison guards had started abusing their authority and were treating the prisoners cruelly. The prisoners, on the other hand, started to display signs of anxiety and emotional distress.

It wasn't until a graduate student (and Zimbardo's future wife) Christina Maslach visited the mock prison that it became clear that the situation was out of control and had gone too far. Maslach was appalled at what was going on and voiced her distress. Zimbardo then decided to call off the experiment.

Zimbardo later suggested that "although we ended the study a week earlier than planned, we did not end it soon enough."

Watson and Rayner's Little Albert Experiment

If you have ever taken an Introduction to Psychology class, then you are probably at least a little familiar with Little Albert.

Behaviorist John Watson  and his assistant Rosalie Rayner conditioned a boy to fear a white rat, and this fear even generalized to other white objects including stuffed toys and Watson's own beard.

Obviously, this type of experiment is considered very controversial today. Frightening an infant and purposely conditioning the child to be afraid is clearly unethical.

As the story goes, the boy and his mother moved away before Watson and Rayner could decondition the child, so many people have wondered if there might be a man out there with a mysterious phobia of furry white objects.

Controversy

Some researchers have suggested that the boy at the center of the study was actually a cognitively impaired boy who ended up dying of hydrocephalus when he was just six years old. If this is true, it makes Watson's study even more disturbing and controversial.

However, more recent evidence suggests that the real Little Albert was actually a boy named William Albert Barger.

Seligman's Look Into Learned Helplessness

During the late 1960s, psychologists Martin Seligman and Steven F. Maier conducted experiments that involved conditioning dogs to expect an electrical shock after hearing a tone. Seligman and Maier observed some unexpected results.

When initially placed in a shuttle box in which one side was electrified, the dogs would quickly jump over a low barrier to escape the shocks. Next, the dogs were strapped into a harness where the shocks were unavoidable.

After being conditioned to expect a shock that they could not escape, the dogs were once again placed in the shuttle box. Instead of jumping over the low barrier to escape, the dogs made no effort to escape the box.

Instead, they simply lay down, whined, and whimpered. Since they had previously learned that no escape was possible, they made no effort to change their circumstances. The researchers called this behavior learned helplessness.

Seligman's work is considered controversial because of the mistreatment of the animals involved in the study.

Impact of Unethical Experiments in Psychology

Many of the psychology experiments performed in the past simply would not be possible today, thanks to ethical guidelines that direct how studies are performed and how participants are treated. While these controversial experiments are often disturbing, we can still learn some important things about human and animal behavior from their results.

Perhaps most importantly, some of these controversial experiments led directly to the formation of rules and guidelines for performing psychology studies.

Blum D. Love at Goon Park: Harry Harlow and the Science of Affection. New York: Basic Books; 2011.

Sperry L. Mental Health and Mental Disorders: An Encyclopedia of Conditions, Treatments, and Well-Being. Santa Barbara, CA: Greenwood; 2016.

Marcus S. Obedience to Authority: An Experimental View. By Stanley Milgram. The New York Times.

Le Texier T. Debunking the Stanford Prison Experiment. Am Psychol. 2019;74(7):823-839. doi:10.1037/amp0000401

Fridlund AJ, Beck HP, Goldie WD, Irons G. Little Albert: A neurologically impaired child. Hist Psychol. 2012;15(4):302-327. doi:10.1037/a0026720

Powell RA, Digdon N, Harris B, Smithson C. Correcting the record on Watson, Rayner, and Little Albert: Albert Barger as "psychology's lost boy". Am Psychol. 2014;69(6):600-611. doi:10.1037/a0036854

Seligman ME. Learned helplessness. Annu Rev Med. 1972;23:407-412. doi:10.1146/annurev.me.23.020172.002203



The Stanford Prison Experiment was massively influential. We just learned it was a fraud.

The most famous psychological studies are often wrong, fraudulent, or outdated. Textbooks need to catch up. 


The Stanford Prison Experiment, one of the most famous and compelling psychological studies of all time, told us a tantalizingly simple story about human nature.

The study took paid participants and assigned them to be “inmates” or “guards” in a mock prison at Stanford University. Soon after the experiment began, the “guards” began mistreating the “prisoners,” implying evil is brought out by circumstance. The authors, in their conclusions, suggested innocent people, thrown into a situation where they have power over others, will begin to abuse that power. And people who are put into a situation where they are powerless will be driven to submission, even madness.

The Stanford Prison Experiment has been included in many, many introductory psychology textbooks and is often cited uncritically. It’s the subject of movies, documentaries, books, television shows, and congressional testimony.

But its findings were wrong. Very wrong. And not just due to its questionable ethics or lack of concrete data — but because of deceit.

A new exposé published by Medium, based on previously unpublished recordings of Philip Zimbardo, the Stanford psychologist who ran the study, and on interviews with his participants, offers convincing evidence that the guards in the experiment were coached to be cruel. It also shows that the experiment’s most memorable moment — of a prisoner descending into a screaming fit, proclaiming, “I’m burning up inside!” — was the result of the prisoner acting. “I took it as a kind of an improv exercise,” one of the guards told reporter Ben Blum. “I believed that I was doing what the researchers wanted me to do.”

The findings have long been subject to scrutiny — many think of them as more of a dramatic demonstration, a sort of academic reality show, than a serious bit of science. But these new revelations incited an immediate response. “We must stop celebrating this work,” personality psychologist Simine Vazire tweeted in response to the article. “It’s anti-scientific. Get it out of textbooks.” Many other psychologists have expressed similar sentiments.

(Update: Since this article was published, the journal American Psychologist has published a thorough debunking of the Stanford Prison Experiment that goes beyond what Blum found in his piece. There’s even more evidence that the “guards” knew the results that Zimbardo wanted to produce, and were trained to meet his goals. It also provides evidence that the conclusions of the experiment were predetermined.)

Many of the classic show-stopping experiments in psychology have lately turned out to be wrong, fraudulent, or outdated. And in recent years, social scientists have begun to reckon with the truth that their old work needs a redo, the “replication crisis.” But there’s been a lag — in the popular consciousness and in how psychology is taught by teachers and textbooks. It’s time to catch up.

Many classic findings in psychology have been reevaluated recently


The Zimbardo prison experiment is not the only classic study that has been recently scrutinized, reevaluated, or outright exposed as a fraud. Recently, science journalist Gina Perry found that the infamous “Robbers Cave” experiment in the 1950s — in which young boys at summer camp were essentially manipulated into joining warring factions — was a do-over of a failed previous version of the experiment, which the scientists never mentioned in an academic paper. That’s a glaring omission. It’s wrong to throw out data that refutes your hypothesis and only publicize data that supports it.

Perry has also revealed inconsistencies in another major early work in psychology: the Milgram electroshock test, in which participants were told by an authority figure to deliver seemingly lethal doses of electricity to an unseen hapless soul. Her investigations show some evidence of researchers going off the study script and possibly coercing participants to deliver the desired results. (Somewhat ironically, the new revelations about the prison experiment also show the power an authority figure — in this case Zimbardo himself and his “warden” — has in manipulating others to be cruel.)

Other studies have been reevaluated for more honest methodological snafus. Recently, I wrote about the “marshmallow test,” a series of studies from the early ’90s that suggested the ability to delay gratification at a young age is correlated with success later in life. New research finds that if the original marshmallow test authors had had a larger sample size and greater research controls, their results would not have been the showstoppers they were in the ’90s. I can list so many more textbook psychology findings that have either not replicated or are currently in the midst of a serious reevaluation.

  • Social priming: People who read “old”-sounding words (like “nursing home”) were more likely to walk slowly — showing how our brains can be subtly “primed” with thoughts and actions.
  • The facial feedback hypothesis: Merely activating muscles around the mouth caused people to become happier — demonstrating how our bodies tell our brains what emotions to feel.
  • Stereotype threat: Minorities and maligned social groups don’t perform as well on tests due to anxiety about confirming stereotypes about their group.
  • Ego depletion: The idea that willpower is a finite mental resource.

Alas, the past few years have brought about a reckoning for these ideas and social psychology as a whole.

Many psychological theories have been debunked or diminished in rigorous replication attempts. Psychologists are now realizing that false positives are more likely to make it through to publication than inconclusive results. And they’ve realized that experimental methods commonly used just a few years ago aren’t rigorous enough. For instance, it used to be commonplace for scientists to publish experiments that sampled about 50 undergraduate students. Today, scientists realize this is a recipe for false positives, and strive for sample sizes in the hundreds, ideally drawn from a more representative subject pool.

Nevertheless, in so many of these cases, scientists have moved on and corrected errors, and are still doing well-intentioned work to understand the heart of humanity. For instance, work on one of psychology’s oldest fixations — dehumanization, the ability to see another as less than human — continues with methodological rigor, helping us understand the modern-day maltreatment of Muslims and immigrants in America.

In some cases, time has shown that flawed original experiments offer worthwhile reexamination. The original Milgram experiment was flawed. But at least its study design — which brings in participants to administer shocks (not actually carried out) to punish others for failing at a memory test — is basically repeatable today with some ethical tweaks.

And it seems like Milgram’s conclusions may hold up: In a recent study, many people found demands from an authority figure to be a compelling reason to shock another. However, it’s possible, due to something known as the file-drawer effect, that failed replications of the Milgram experiment have not been published. Replication attempts at the Stanford prison study, on the other hand, have been a mess.

In science, too often, the first demonstration of an idea becomes the lasting one — in both pop culture and academia. But this isn’t how science is supposed to work at all!

Science is a frustrating, iterative process. When we communicate it, we need to get beyond the idea that a single, stunning study ought to last the test of time. Scientists know this as well, but their institutions have often discouraged them from replicating old work, favoring instead the pursuit of new, exciting, attention-grabbing studies. (Journalists are part of the problem too, imbuing small, insignificant studies with more importance and meaning than they’re due.)

Thankfully, there are researchers thinking very hard, and very earnestly, on trying to make psychology a more replicable, robust science. There’s even a whole Society for the Improvement of Psychological Science devoted to these issues.

Follow-up results tend to be less dramatic than original findings, but they are more useful in helping discover the truth. And it’s not that the Stanford Prison Experiment has no place in a classroom. It’s interesting as history. Psychologists like Zimbardo and Milgram were highly influenced by World War II. Their experiments were, in part, an attempt to figure out why ordinary people would fall for Nazism. That’s an important question, one that set the agenda for a huge amount of research in psychological science, and is still echoed in papers today.

Textbooks need to catch up

Psychology has changed tremendously over the past few years. Many studies used to teach the next generation of psychologists have been intensely scrutinized, and found to be in error. But troublingly, the textbooks have not been updated accordingly.

That’s the conclusion of a 2016 study in Current Psychology. “By and large,” the study explains (emphasis mine):

introductory textbooks have difficulty accurately portraying controversial topics with care or, in some cases, simply avoid covering them at all. ... readers of introductory textbooks may be unintentionally misinformed on these topics.

The study authors — from Texas A&M and Stetson universities — gathered a stack of 24 popular introductory psych textbooks and began looking for coverage of 12 contested ideas or myths in psychology.

The ideas — like stereotype threat, the Mozart effect, and whether there’s a “narcissism epidemic” among millennials — have not necessarily been disproven. Nevertheless, there are credible and noteworthy studies that cast doubt on them. The list of ideas also included some urban legends — like the one about the brain only using 10 percent of its potential at any given time, and a debunked story about how bystanders refused to help a woman named Kitty Genovese while she was being murdered.

The researchers then rated the texts on how they handled these contested ideas. The results found a troubling amount of “biased” coverage on many of the topic areas.


But why wouldn’t these textbooks include more doubt? Replication, after all, is a cornerstone of any science.

One idea is that textbooks, in the pursuit of covering a wide range of topics, aren’t meant to be authoritative on these individual controversies. But something else might be going on. The study authors suggest these textbook authors are trying to “oversell” psychology as a discipline, to get more undergraduates to study it full time. (I have to admit that it might have worked on me back when I was an undeclared undergraduate.)

There are some caveats to mention with the study: One is that the 12 topics the authors chose to scrutinize are completely arbitrary. “And many other potential issues were left out of our analysis,” they note. Also, the textbooks included were printed in the spring of 2012; it’s possible they have been updated since then.

Recently, I asked on Twitter how intro psychology professors deal with inconsistencies in their textbooks. Their answers were simple. Some said they decided to get rid of textbooks (which saves students money) and focus on teaching individual articles. Others had a solution that’s just as simple: “You point out the wrong, outdated, and less-than-replicable sections,” said Daniël Lakens, a professor at Eindhoven University of Technology in the Netherlands. He offered a useful example of one of the slides he uses in class.

For example: pic.twitter.com/WdtbjcZ6mR — Daniël Lakens (@lakens) June 11, 2018

Anecdotally, Illinois State University professor Joe Hilgard said he thinks his students appreciate “the ‘cutting-edge’ feeling from knowing something that the textbook didn’t.” (Also, who really, earnestly reads the textbook in an introductory college course?)

I tried to frame things as four steps: 1) here's the big idea 2) here's the famous study and how it illustrates 3) here are the damning criticisms 4) here's what you can do as scholars to figure out what you believe / make a contribution to the literature — Joe Hilgard, that psych prof we all know and love. (@JoeHilgard) June 11, 2018

And it seems this type of teaching is catching on. A (not perfectly representative) recent survey of 262 psychology professors found more than half said replication issues impacted their teaching. On the other hand, 40 percent said they hadn’t. So whether students are exposed to the recent reckoning is all up to the teachers they have.

If it’s true that textbooks and teachers are still neglecting to cover replication issues, then I’d argue they are actually underselling the science. To teach the “replication crisis” is to teach students that science strives to be self-correcting. It would instill in them the value that science ought to be reproducible.

Understanding human behavior is a hard problem. Finding out the answers shouldn’t be easy. If anything, that should give students more motivation to become the generation of scientists who get it right.

“Textbooks may be missing an opportunity for myth busting,” the Current Psychology study’s authors write. That’s, ideally, what young scientists ought to learn: how to bust myths and find the truth.

Further reading: Psychology’s “replication crisis”

  • The replication crisis, explained. Psychology is currently undergoing a painful period of introspection. It will emerge stronger than before.
  • The “marshmallow test” said patience was a key to success. A new replication tells us s’more.
  • The 7 biggest problems facing science, according to 270 scientists
  • What a nerdy debate about p-values shows about science — and how to fix it
  • Science is often flawed. It’s time we embraced that.


APS

How the Classics Changed Research Ethics

Some of history’s most controversial psychology studies helped drive extensive protections for human research participants. Some say those reforms went too far.


Photo above: In 1971, APS Fellow Philip Zimbardo halted his classic prison simulation at Stanford after volunteer “guards” became abusive to the “prisoners,” famously leading one prisoner into a fit of sobbing. Photo credit: PrisonExp.org

Nearly 60 years have passed since Stanley Milgram’s infamous “shock box” study sparked an international focus on ethics in psychological research. Countless historians and psychology instructors assert that Milgram’s experiments—along with studies like the Robbers Cave and Stanford prison experiments—could never occur today; ethics gatekeepers would swiftly bar such studies from proceeding, recognizing the potential harms to the participants. 

But the reforms that followed some of the 20th century’s most alarming biomedical and behavioral studies have overreached, many social and behavioral scientists complain. Studies that pose no peril to participants confront the same standards as experimental drug treatments or surgeries, they contend. The institutional review boards (IRBs) charged with protecting research participants fail to understand minimal risk, they say. Researchers complain they waste time addressing IRB concerns that have nothing to do with participant safety. 

Several factors contribute to this conflict, ethicists say. Researchers and IRBs operate in a climate of misunderstanding, confusing regulations, and a systemic lack of ethics training, said APS Fellow Celia Fisher, a Fordham University professor and research ethicist, in an interview with the Observer.

“In my view, IRBs are trying to do their best and investigators are trying to do their best,” Fisher said. “It’s more that we really have to enhance communication and training on both sides.” 

‘Sins’ from the past  

Modern human-subjects protections date back to the 1947 Nuremberg Code, the response to Nazi medical experiments on concentration-camp internees. Those ethical principles, which no nation or organization has officially accepted as law or official ethics guidelines, emphasized that a study’s benefits should outweigh the risks and that human subjects should be fully informed about the research and participate voluntarily.  

See the 2014 Observer cover story by APS Fellow Carol A. Tavris, “ Teaching Contentious Classics ,” for more about these controversial studies and how to discuss them with students.

But the discovery of U.S.-government-sponsored research abuses, including the Tuskegee syphilis experiment on African American men and radiation experiments on humans, accelerated regulatory initiatives. The abuses investigators uncovered in the 1970s, ’80s, and ’90s—decades after the experiments had occurred—heightened policymakers’ concerns “about what else might still be going on,” George Mason University historian Zachary M. Schrag explained in an interview. These concerns generated restrictions not only on biomedical research but on social and behavioral studies that pose minimal risk of harm.

“The sins of researchers from the 1940s led to new regulations in the 1990s, even though it was not at all clear that those kinds of activities were still going on in any way,” said Schrag, who chronicled the rise of IRBs in his book  Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009.  

Accompanying the medical research scandals were controversial psychological studies that provided fodder for textbooks, historical tomes, and movies.  

  • In the early 1950s, social psychologist Muzafer Sherif and his colleagues used a Boy Scout camp called Robbers Cave to study intergroup hostility. They randomly assigned preadolescent boys to one of two groups and concocted a series of competitive activities that quickly sparked conflict. They later set up a situation that compelled the boys to overcome their differences and work together. The study provided insights into prejudice and conflict resolution but generated criticism because the children weren’t told they were part of an experiment. 
  • In 1961, Milgram began his studies on obedience to authority by directing participants to administer increasing levels of electric shock to another person (a confederate). To Milgram’s surprise, more than 65% of the participants delivered the full voltage of shock (which unbeknownst to them was fake), even though many were distressed about doing so. Milgram was widely criticized for the manipulation and deception he employed to carry out his experiments. 
  • In 1971, APS Fellow Philip Zimbardo halted his classic prison simulation at Stanford after volunteer “guards” became abusive to the “prisoners,” famously leading one prisoner into a fit of sobbing. 

Western policymakers created a variety of safeguards in the wake of these psychological studies and other medical research. Among them was the Declaration of Helsinki, an ethical guide for human-subjects research developed by the Europe-based World Medical Association. The U.S. Congress passed the National Research Act of 1974, which created a commission to oversee participant protections in biomedical and behavioral research. And in the 90s, federal agencies adopted the Federal Policy for the Protection of Human Subjects (better known as the Common Rule), a code of ethics applied to any government-funded research. IRBs review studies through the lens of the Common Rule. After that, social science research, including studies in social psychology, anthropology, sociology, and political science, began facing widespread institutional review (Schrag, 2010).  

Sailing Through Review

Psychological scientists and other researchers who have served on institutional review boards provide these tips to help researchers get their studies reviewed swiftly.  

  • Determine whether your study qualifies for minimal-risk exemption from review. Online tools are even in development to help researchers self-determine exempt status (Ben-Shahar, 2019; Schneider & McCutcheon, 2018). 
  • If you’re not clear about your exemption, research the regulations to understand how they apply to your planned study. Show you’ve done your homework and have developed a protocol that is safe for your participants.  
  • Consult with stakeholders. Look for advocacy groups and representatives from the population you plan to study. Ask them what they regard as fair compensation for participation. Get their feedback about your questionnaires and consent forms to make sure they’re understandable. These steps help you better show your IRB that the population you’re studying will find the protections adequate (Fisher, 2022). 
  • Speak to IRB members or staff before submitting the protocol. Ask them their specific concerns about your study, and get guidance on writing up the protocol to address those concerns. Also ask them about expected turnaround times so you can plan your submission in time to meet any deadlines associated with your study (e.g., grant application deadlines).  

Ben-Shahar, O. (2019, December 2). Reforming the IRB in experimental fashion. The Regulatory Review . University of Pennsylvania. https://www.theregreview.org/2019/12/02/ben-shahar-reforming-irb-experimental-fashion/  

Fisher, C. B. (2022). Decoding the ethics code: A practical guide for psychologists (5th ed.). Sage Publications. 

Schneider, S. L., & McCutcheon, J. A. (2018). Proof of concept: Use of a wizard for self-determination of IRB exempt status. Federal Demonstration Partnership. http://thefdp.org/default/assets/File/Documents/wizard_pilot_final_rpt.pdf  

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where researchers risk inflicting physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” Fisher explained. 

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support?  

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009).   

Other scientists have complained the IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

“You can have IRB approval and later decide to make a nominal change to the protocol (a frequent one is to add a new assistant to the project or to increase the sample size),” he wrote. “It can take over a month to get approval. In the meantime, nothing can move forward and the students sit around waiting.” 

Not all researchers view institutional review as a roadblock. Psychological scientist Nathaniel Herr, who runs American University’s Interpersonal Emotion Lab and has served on the school’s IRB, says the board effectively collaborated with researchers to ensure that study designs were safe and that participant privacy was appropriately protected. 

“If the IRB that I operated on saw an issue, they shared suggestions we could make to overcome that issue,” Herr said. “It was about making the research go forward. I never saw a project get shut down. It might have required a significant change, but it was often about confidentiality and it’s something that helps everybody feel better about the fact we weren’t abusing our privilege as researchers. I really believe it [the review process] makes the projects better.” 

Some universities—including Fordham University, Yale University, and The University of Chicago—even have social and behavioral research IRBs whose members include experts optimally equipped to judge the safety of a psychological study, Fisher noted. 

Training gaps  

Institutional review is beset by a lack of ethics training in research programs, Fisher believes. While students in professional psychology doctoral programs take accreditation-required ethics courses, psychologists in other fields have no such requirement. In these programs, ethics training is often limited to an online program that provides, at best, a perfunctory overview of federal regulations. 

“It gives you the fundamental information, but it has nothing to do with our real-world deliberations about protecting participants,” she said. 

Additionally, harm to a participant is difficult to predict. As sociologist Martin Tolich of the University of Otago in New Zealand has pointed out, the Stanford prison study had been IRB-approved. 

“Prediction of harm with any certainty is not necessarily possible, and should not be the aim of ethics review,” he argued. “A more measured goal is the minimization of risk, not its eradication” (Tolich, 2014). 

Fisher notes that scientists aren’t trained to recognize and respond to adverse events when they occur during a study. 

“To be trained in research ethics requires not just knowing you have to obtain informed consent,” she said. “It’s being able to apply ethical reasoning to each unique situation. If you don’t have the training to do that, then of course you’re just following the IRB rules, which are very impersonal and really out of sync with the true nature of what we’re doing.” 

Researchers also raise concerns that, in many cases, the regulatory process harms vulnerable populations rather than safeguarding them. Fisher and psychological scientist Brian Mustanski of the University of Illinois at Chicago wrote in 2016, for example, that review panels may be hindering HIV prevention strategies by requiring researchers to get parental consent before including gay and bisexual adolescents in their studies. Under that requirement, youth who are not out to their families get excluded. Boards apply those restrictions even in states permitting minors to get HIV testing and preventive medication without parental permission—and even though federal rules allow IRBs to waive parental consent in research settings (Mustanski & Fisher, 2016). 

IRBs also place counterproductive safety limits on suicide and self-harm research, watching for any sign that a participant might need to be removed from a clinical study and hospitalized. 

“The problem is we know that hospitalization is not the panacea,” Fisher said. “It stops suicidality for the moment, but actually the highest-risk period is 3 months after the first hospitalization for a suicide attempt. Some of the IRBs fail to consider that a non-hospitalization intervention that’s being tested is just as safe as hospitalization. It’s a difficult problem, and I don’t blame them. But if we have to take people out of a study as soon as they reach a certain level of suicidality, then we’ll never find effective treatment.” 

Communication gaps  

Supporters of the institutional review process say researchers tend to approach the IRB process too defensively, overlooking the board’s good intentions.  

“Obtaining clarification or requesting further materials serves to verify that protections are in place,” a team of institutional reviewers wrote in an editorial for Psi Chi Journal of Psychological Research. “If researchers assume that IRBs are collaborators in the research process, then these requests can be seen as prompts rather than as admonitions” (Domenech Rodriguez et al., 2017). 

Fisher agrees that researchers’ attitudes play a considerable role in the conflicts that arise over ethics review. She recommends researchers develop each protocol with review-board questions in mind (see sidebar). 

“For many researchers, there’s a disdain for IRBs,” she said. “IRBs are trying their hardest. They don’t want to reject research. It’s just that they’re not informed. And sometimes if behavioral scientists or social scientists are disdainful of their IRBs, they’re not communicating with them.” 

Some researchers are building evidence to help IRBs understand the level of risk associated with certain types of psychological studies.  

  • In a study involving more than 500 undergraduate students, for example, psychological scientists at the University of New Mexico found that the participants were less upset than expected by questionnaires about sex, trauma, and other sensitive topics. This finding, the researchers reported in Psychological Science, challenges the usual IRB assumption about the stress that surveys on sex and trauma might inflict on participants (Yeater et al., 2012). 
  • A study involving undergraduate women indicated that participants who had experienced child abuse, although more likely than their peers to report distress from recalling the past as part of a study, were also more likely to say that their involvement in the research helped them gain insight into themselves and that they hoped it would help others (Decker et al., 2011). 
  • A multidisciplinary team, including APS Fellow R. Michael Furr of Wake Forest University, found that adolescent psychiatric patients showed a drop in suicide ideation after being questioned regularly about their suicidal thoughts over the course of 2 years. This countered concerns that asking about suicidal ideation would trigger an increase in such thinking (Mathias et al., 2012). 
  • A meta-analysis of more than 70 participant samples—totaling nearly 74,000 individuals—indicated that people may experience only moderate distress when discussing past traumas in research studies. They also generally might find their participation to be a positive experience, according to the findings (Jaffe et al., 2015). 

The takeaways  

So, are the historians correct? Would any of these classic experiments survive IRB scrutiny today? 

Reexaminations of those studies make the question arguably moot. Recent revelations about some of these studies suggest that scientific integrity concerns may taint the legacy of those findings as much as their impact on participants did (Le Texier, 2019; Perry, 2018; Resnick, 2018).  

Also, not every aspect of the controversial classics is taboo in today’s regulatory environment. Scientists have won IRB approval to conceptually replicate both the Milgram and Stanford prison experiments (Burger, 2009; Reicher & Haslam, 2006). They simply modified the protocols to avert any potential harm to the participants. (Scholars, including Zimbardo himself, have questioned the robustness of those replication findings [Elms, 2009; Miller, 2009; Zimbardo, 2006].) 

Many scholars believe there are clear and valuable lessons from the classic experiments. Milgram’s work, for instance, can inject clarity into pressing societal issues such as political polarization and police brutality . Ethics training and monitoring simply need to include those lessons learned, they say. 

“We should absolutely be talking about what Milgram did right, what he did wrong,” Schrag said. “We can talk about what we can learn from that experience and how we might answer important questions while respecting the rights of volunteers who participate in psychological experiments.”  


References   

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist , 64 (1), 1–11. https://doi.org/10.1037/a0010932  

Ceci, S. J., & Bruck, M. (2009). Do IRBs pass the minimal harm test? Perspectives on Psychological Science , 4 (1), 28–29. https://doi.org/10.1111/j.1745-6924.2009.01084.x   

Decker, S. E., Naugle, A. E., Carter-Visscher, R., Bell, K., & Seifer, A. (2011). Ethical issues in research on sensitive topics: Participants’ experiences of stress and benefit. Journal of Empirical Research on Human Research Ethics: An International Journal , 6 (3), 55–64. https://doi.org/10.1525/jer.2011.6.3.55  

Domenech Rodriguez, M. M., Corralejo, S. M., Vouvalis, N., & Mirly, A. K. (2017). Institutional review board: Ally not adversary. Psi Chi Journal of Psychological Research , 22 (2), 76–84.  https://doi.org/10.24839/2325-7342.JN22.2.76  

Elms, A. C. (2009). Obedience lite. American Psychologist , 64 (1), 32–36.  https://doi.org/10.1037/a0014473

Fisher, C. B., True, G., Alexander, L., & Fried, A. L. (2009). Measures of mentoring, department climate, and graduate student preparedness in the responsible conduct of psychological research. Ethics & Behavior , 19 (3), 227–252. https://doi.org/10.1080/10508420902886726  

Jaffe, A. E., DiLillo, D., Hoffman, L., Haikalis, M., & Dykstra, R. E. (2015). Does it hurt to ask? A meta-analysis of participant reactions to trauma research. Clinical Psychology Review , 40 , 40–56. https://doi.org/10.1016/j.cpr.2015.05.004  

Le Texier, T. (2019). Debunking the Stanford Prison experiment. American Psychologist , 74 (7), 823–839. http://dx.doi.org/10.1037/amp0000401  

Mathias, C. W., Furr, R. M., Sheftall, A. H., Hill-Kapturczak, N., Crum, P., & Dougherty, D. M. (2012). What’s the harm in asking about suicide ideation? Suicide and Life-Threatening Behavior , 42 (3), 341–351. https://doi.org/10.1111/j.1943-278X.2012.0095.x  

Miller, A. G. (2009). Reflections on “Replicating Milgram” (Burger, 2009). American Psychologist , 64 (1), 20–27. https://doi.org/10.1037/a0014407  

Mustanski, B., & Fisher, C. B. (2016). HIV rates are increasing in gay/bisexual teens: IRB barriers to research must be resolved to bend the curve.  American Journal of Preventive Medicine ,  51 (2), 249–252. https://doi.org/10.1016/j.amepre.2016.02.026  

Perry, G. (2018). The lost boys: Inside Muzafer Sherif’s Robbers Cave experiment. Scribe Publications.  

Reicher, S., & Haslam, S. A. (2006). Rethinking the psychology of tyranny: The BBC prison study. British Journal of Social Psychology , 45 , 1–40. https://doi.org/10.1348/014466605X48998  

Resnick, B. (2018, June 13). The Stanford prison experiment was massively influential. We just learned it was a fraud. Vox. https://www.vox.com/2018/6/13/17449118/stanford-prison-experiment-fraud-psychology-replication  

Schrag, Z. M. (2010). Ethical imperialism: Institutional review boards and the social sciences, 1965–2009 . Johns Hopkins University Press. 

Tolich, M. (2014). What can Milgram and Zimbardo teach ethics committees and qualitative researchers about minimal harm? Research Ethics , 10 (2), 86–96. https://doi.org/10.1177/1747016114523771  

Yeater, E., Miller, G., Rinehart, J., & Nason, E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards.  Psychological Science , 23 (7), 780–787. https://doi.org/10.1177/0956797611435131  

Zimbardo, P. G. (2006). On rethinking the psychology of tyranny: The BBC prison study. British Journal of Social Psychology , 45 , 47–53. https://doi.org/10.1348/014466605X81720  


About the Author

Scott Sleek is a freelance writer in Silver Spring, Maryland, and the former director of news and information at APS.


SciTechDaily


Suppressing Science: Are We Overreacting to Controversial Findings?

By Association for Psychological Science October 11, 2023


Controversial research findings can lead to defensive reactions, including calls for censorship. New research has shown that people tend to exaggerate the potential of research findings to promote harmful actions and underestimate the support for constructive reactions. This tendency holds true across ideologies and demographic groups. The findings raise questions about the editorial guidelines of academic journals and the possible unwarranted suppression of scientific research.

Controversial research often sparks defensive reactions, sometimes even leading to calls for censorship, especially if the findings clash with established ideologies. However, a pair of studies published in the journal Psychological Science, by authors Cory J. Clark (University of Pennsylvania), Maja Graso (University of Groningen), Ilana Redstone (University of Illinois Urbana-Champaign), and Philip E. Tetlock (University of Pennsylvania), indicates that people tend to overestimate the risk that research findings will fuel public support for harmful actions.

Harmful actions related to research findings, according to the authors, can include censoring research, defunding related programs, and promoting bias against a community of people. Conversely, helpful reactions could include behaviors such as funding additional research, investing in programs, and offering educational resources. 

“With this set of studies, we learned that expectations about scientific consequences might have a negativity bias,” Clark told APS in an interview. “We found that participants consistently overestimated support for harmful behavioral reactions and consistently underestimated support for helpful behavioral reactions. And those more likely to overestimate harms tended to be more supportive of censoring scientific research.” 

Study Methodology and Key Findings

In their first study, Clark and colleagues had 983 online participants read an excerpt from the discussion sections of five real studies with findings that some people might perceive as controversial. Two of these excerpts highlighted findings that the researchers expected would be counter to the expectations of people with liberal views (“female protégés benefit more when they have male than female mentors,” and “there is an absence of evidence of racial discrimination against ethnic minorities in police shootings”). Two excerpts were expected to be surprising to more conservative people (“activating Christian concepts increases racial prejudice,” and “children with same-sex parents are no worse off than children with opposite-sex parents”).

The fifth excerpt was intended to be more ideologically neutral (“experiencing child sexual abuse does not cause severe and long-lasting psychological harm for all victims”). The researchers also included two versions of an excerpt from a fictitious study about ideological intolerance suggesting that either liberals or conservatives were less tolerant of ideological differences. 

After reading each excerpt, one-third of participants were asked to self-report which of 10 possible actions they would support taking in response to the study’s findings. After reading about the mentorship study, for example, participants in the self-report group were asked whether they would support discouraging early-career female researchers from approaching female mentors, conducting more research on the subject, and investing in mentorship development programs, among other reactions. The remaining two-thirds of participants were asked to estimate what percentage of U.S. adults they thought would support the various actions. 

Participants in the estimation group were found to consistently underestimate the percentage of people who would support helpful actions—for example, funding additional research and interventions designed to reduce child sexual abuse and political intolerance.

They also overestimated the percentage of adults who would support harmful actions like withdrawing support from a community or blocking groups of people from leadership positions. These harm estimations did not vary based on the findings’ perceived offensiveness, but participants were more likely to describe findings that they found more offensive as less comprehensible.  

There was some evidence that participants who were more conservative had a greater tendency to overestimate the percentage of people who would support harmful actions. In addition, more conservative and younger participants were more likely to support censoring research. Participants’ responses to the political intolerance study did not vary based on their own ideology, however. 

Honesty in Responses

Clark and colleagues further tested the honesty of these responses through a study of 882 participants. This time, participants in the self-report group were asked to identify which initiatives they would like the researchers to donate $100 to in response to three scientific findings. To encourage honesty, researchers informed participants that $100 would be donated to each cause that a majority of participants supported. Meanwhile, participants in the estimation group were told that the five participants with the most accurate estimates would receive $100 gift cards. 

Despite this additional financial motivation, participants’ responses largely mirrored those in the first study. A notable exception was that women were found to support censorship at a higher rate than men. 

“Although people accurately predicted that helpful reactions were more supported than harmful ones, their deviation from accuracy was consistently in the negative direction: People overpredicted the costs and underpredicted the benefits,” Clark and colleagues wrote. 

Given that some academic journals have added harm-based criteria to their editorial guidelines, Clark would like to further explore how these findings may apply to editors’ and reviewers’ perceptions of scientific risk, as well as how harm risks can be estimated more accurately. 

“Our results suggest the possibility that these intuitions may be systematically biased toward overestimating harms,” Clark told APS. “Intuitions alone may be untrustworthy and lead to the unnecessary suppression of science.” 

Reference: “Harm Hypervigilance in Public Reactions to Scientific Evidence” by Cory J. Clark, Maja Graso, Ilana Redstone and Philip E. Tetlock, 1 June 2023, Psychological Science. DOI: 10.1177/09567976231168777


History Defined

The 7 most controversial psychological experiments of all time

Last updated on March 17th, 2023 at 04:59 am

Psychology is a fascinating field that seeks to understand the human mind and behavior. Psychologists have uncovered numerous insights into how we think, feel, and act through research, experiments, and observations.

However, not all psychological experiments have been met with widespread acceptance and approval. Some experiments have been so controversial that they have sparked ethical debates and even legal action.

From the infamous Stanford Prison Experiment to the controversial Milgram obedience study, let’s delve into the details of each experiment, examine the ethical considerations that arose, and discuss their lasting impact on psychology.


The Stanford Prison Experiment

Kickstarting our list is the controversial Stanford Prison Experiment. Dr. Philip Zimbardo conducted this experiment in 1971 to observe what would happen when you put good people in bad situations.

He took 24 male students and split them into two groups: prisoners and prison guards. The prisoners were stripped of their clothes and given smocks, while the guards were given uniforms and nightsticks.

The study was intended to last two weeks but was shut down after six days due to the horrific conditions.

The ‘prisoners’ were constantly harassed and abused by the ‘guards.’ As a result, some even began to show signs of mental breakdowns due to psychological and physical abuse.

This study is still controversial today, with many people arguing that the participants should never have been put in such a position in the first place. However, it undeniably helped shape our understanding of how power can affect people’s behavior.

The Priming Experiment (Elderly Words Provoke Walking Slow)

Next on our list is a study conducted in 1996 by John Bargh and colleagues. This experiment is often known as the ‘elderly words provoke walking slow’ study, and it aimed to observe how subliminal messaging can affect behavior.

To do this, the researchers had two groups complete a word-association task. One group was given words related to the elderly (for example, ‘wrinkled,’ ‘grey,’ and ‘bingo’), while the other group was given neutral words.

The participants were then asked to walk down a hallway, with the researchers timing how long it took them to reach the end.

As the researchers predicted, those in the first group, who had been exposed to words related to the elderly, walked down the hall significantly more slowly than those in the second group.

This study was controversial at the time as it questioned how much control we have over our behavior.

At the time, many people were worried that if advertisers and others could influence our behavior in such a way, we would no longer be able to make our own decisions.

However, the study became most controversial because it was later called into question: a different lab, led by Stéphane Doyen, could not reproduce the results.

While there is plenty of hearsay and rumor surrounding it, social priming remains a contested phenomenon, and overconfident claims about it can have real consequences.

Critics argued that Bargh’s original experiment was inconsistent, lacking in rigor, biased, and subject to equipment errors.

Even today, it’s still unclear exactly how much subliminal messaging can influence our behavior. 

The Milgram Experiment

Another well-known and controversial experiment is the Milgram experiment conducted in 1961 by Stanley Milgram. This study observed how far people would go in obeying an authority figure, even if it meant harming another person.

Not coincidentally, this experiment took place three months after the trial of Adolf Eichmann began in Jerusalem.

This iconic experiment attempted to examine the psychology of genocide and to see whether Eichmann and others like him were merely ‘following orders’ during the Holocaust or were inherently evil.

He recruited participants and told them they were participating in a study about memory and learning.

They were then paired with another participant, who was actually an actor, and the test subjects were told to shock them whenever they got an answer wrong.

The shocks were fake, but the actors pretended to be in pain. Their pained reactions were even pre-recorded and played back into the room to make the shocks seem authentic.

The actors were strapped to the chair, and the participant was told that this was to ensure the actors could not leave, no matter how bad it got.

The participants were also given a real electric shock before the test to provide them with an idea of what the actors would be going through.

And so the test commenced.

The test was simple. The participant read lists of word pairs, which the actor was supposed to memorize.

The participant would then read the first word of a pair along with four possible answers, and the actor would press a button to indicate which answer was correct.

If they were wrong, they would receive an electric shock, and the voltage would be turned up by 15 volts each time, up to a maximum of 450 volts.

Of course, as the voltage was increased, the actors would pretend to be in more pain. In later versions of the experiment, some actors would even beg for mercy or plead that they had a heart condition. But still, the shocks continued.

Whenever the participant started to show signs that they wanted to stop the experiment, or at least not carry on, the experimenters replied with these statements, in this order of severity.

  • Please continue  or  Please go on.
  • The experiment requires that you continue.
  • It is absolutely essential that you continue.
  • You have no other choice; you  must  go on.

If the first statement didn’t work, the experimenter would move on to the next. The experiment would stop only if the participant still refused to continue after the fourth prompt.

In the end, 65% of the participants went all the way to the final 450-volt mark, and every participant reached at least 300 volts, showing that people will go to great lengths to obey authority figures.
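The escalation procedure described above can be laid out as a short sketch (an illustration based on the description here, not Milgram's original materials; the prompt wording follows the list above):

```python
from typing import Optional

# The shock generator had switches from 15 V to 450 V in 15-volt steps.
VOLT_STEP = 15
MAX_VOLTS = 450
shock_levels = list(range(VOLT_STEP, MAX_VOLTS + 1, VOLT_STEP))  # 30 switches

# The four scripted prompts, used in order each time the participant objected.
PROMPTS = [
    "Please continue.",  # or "Please go on."
    "The experiment requires that you continue.",
    "It is absolutely essential that you continue.",
    "You have no other choice; you must go on.",
]

def next_prompt(objection_count: int) -> Optional[str]:
    """Return the scripted reply to the participant's nth objection,
    or None once all four prompts are exhausted (the session then stops)."""
    if objection_count < len(PROMPTS):
        return PROMPTS[objection_count]
    return None
```

The 15-volt step size and 450-volt ceiling give exactly 30 shock levels, which is why the 300-volt mark every participant reached corresponds to the 20th switch.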

This study was controversial because it showed how easily people could be coerced into harming others, even if they don’t want to. It also raised ethical concerns about the use of deception in research.

Despite these concerns, the Milgram experiment is still considered a critical study, providing insight into how people respond to authority figures.

The ‘Little Albert’ Experiment

One of the most controversial psychological experiments of all time is the ‘Little Albert’ experiment , conducted by John Watson and Rosalie Rayner in 1920. This study aimed to observe how fear could be conditioned in a child.

To do this, they used a nine-month-old boy who was called Albert. He was exposed to several animals, including rats, rabbits, and dogs. Each time he reached for the white rat, the researchers struck a steel bar with a hammer behind him, producing a frightening noise.

After some time, Albert began to show signs of fear whenever he saw any of the animals, even without the noise. This showed that fear can be conditioned in children rather than arising only naturally.

The ‘Little Albert’ experiment was controversial because it showed how easy it is to manipulate a child’s emotions. It also raised ethical concerns about experimenting on an infant who could not consent, and who was never deconditioned afterward.

Despite these concerns, the ‘Little Albert’ experiment is still considered a critical study, as it provides insight into how fear is learned, enabling us to understand better and treat anxiety disorders.

The Facebook Emotion Experiment

One of the most recent and controversial psychological experiments is the Facebook emotion experiment , run by Facebook in collaboration with researchers from Cornell University and published in 2014. This study aimed to observe how social media can affect our emotions.

To do this, they manipulated the news feeds of nearly 700,000 Facebook users, showing some users more positive content and others more negative content. They then monitored how these users responded emotionally.

The study results showed that those exposed to positive content were more likely to post positive content themselves, and vice versa for those exposed to negative content.

This study was controversial because it showed that social media could impact our emotions. It also raised ethical concerns about the manipulation of people’s news feeds.

Despite these concerns, the Facebook emotion experiment is still considered an important study, providing insight into how social media can affect our moods and behaviors.

You can see how these controversial psychological experiments have shaped our understanding of the human mind.

While they may have raised some ethical concerns, they are still essential studies, providing valuable insights into our thinking and behavior.

Operation Midnight Climax

In the 1950s, the CIA ran what was known as ‘Operation Midnight Climax’ under the highly controversial MKULTRA program.

Under this program, the CIA used prostitutes to lure men back to safe houses in New York and San Francisco, where they would be drugged without their knowledge (usually by slipping drugs into their drinks) and observed through one-way mirrors.

These drugs included mind-altering chemicals such as LSD.

Over the decade the program ran, it provided the government with information on how mind-altering drugs and narcotics affect the human mind, and on their potential uses. It also helped in the development of better surveillance equipment, and there are reports that it was used for sexual blackmail.

However, the operation was reportedly shut down in 1965, though some say it continued unofficially under different names.

The Monster Study

In concluding our list, we have one of the most traumatic and controversial studies ever, which became known as The Monster Study of 1939.

It started with two researchers, Wendell Johnson and Mary Tudor, who were investigating the process and outcomes of positive reinforcement.

Tudor was interested in how stuttering could be lessened or even cured by providing positive reinforcement to those who spoke without issues.

Sounds reasonable for now.

To figure this out, Johnson and Tudor divided 22 orphaned children between the ages of six and nine, with no history of speech problems, into two groups.

The first group was constantly bombarded with positive feedback and praise for how excellent and fluent their speech was, regardless of what their speech was like.

However, the second group was treated a little differently. They would receive negative feedback and be punished, regardless of their speech fluency.

Things got so bad for this group that one girl even ran away from the orphanage where the study was taking place, as the New York Times later reported.

Put yourself in that position: imagine life as a six-year-old going through such trauma, with no understanding of why. You will quickly see how cruel and controversial this experiment was.

These children carried that trauma with them for the rest of their lives.


7 ethically controversial research areas in science and technology

There are a number of scientific endeavors that push the ethical lines of what science should be. Let’s take a look at them.

Trevor English


Science and technology are the great drivers of innovation in the world around us. Technological and scientific breakthroughs help people every day, bringing clean water, access to information through the internet, and cures for rare diseases.

Many aspects of scientific discovery face a few ethical questions. But there are also a number of scientific endeavors that push the ethical lines of what science should revolve around. While all the areas of controversy covered here have great benefits, they also come with potential ethical burdens, such as potential harm to animals, people, or the environment. 

It all should make us stop and think – at what point do the negatives of innovation overshadow the good that it may bring? And is there ever an innovation so beneficial to the world that it would be worth compromising on ethics in order to achieve scientific and technological progress? Ponder these questions as we look into 7 ethically controversial areas of science and technology…

1. Artificial intelligence

Artificial intelligence is at the forefront of technological development in many areas. Almost every company that has anything to do with technology is using it as a buzzword to sell their product: A new dog collar with built-in AI to detect when your dog is in distress! Install our simple computer plug-in and we’ll optimize your workday.


AI certainly has many valuable applications and benefits, but there are also areas where it has extensive drawbacks. Take two key AI technologies whose benefits are questionable, or whose drawbacks are extensive: deepfakes and Neuralink .

You’ve probably heard of deep fakes, the face-swapping technology that is used to bring dead movie stars back to life, but can also make world leaders appear to say things they never did –  or for even less family-friendly things.

You might not know about Neuralink, though. It’s one of Elon Musk’s technological endeavors and aims to improve brain-machine interfaces, record memories, and make other technological advancements to do with the brain.  

Focusing on Neuralink first, questions surround the ethics of connecting human brains to machines and using AI to make human brains function better. The ethical questions primarily concern the development of the technology and its potential side effects. The company’s goal is to optimize human brain function, but the testing needed to get there will be extensive. It will eventually involve testing on human brains, with unknown consequences. At what point is the promise of drastic technological advancement not worth the potential human cost of developing the technology?

Moving on from Neuralink, we come to a technology, deepfakes, that poses less potential benefit to humanity. There’s arguably little reason that anyone needs to replace someone’s face with another in a video – at least, little reason that isn’t nefarious.

Yet, the technology exists to do this, thanks to artificial intelligence and machine learning. It continues to be researched under the guise of benefits to improved video editing technology, but at the end of the day, there’s no way to keep it from being used for negative purposes.

At the end of the day, artificial intelligence has the potential to completely change how we interact with the world, but are there too many negatives? Time will tell…

2. Gene editing (CRISPR)

Through CRISPR , scientists are able to quickly and cheaply edit the human genome. Researchers can alter DNA sequences and change how our genes function, which brings the potential to correct genetic defects and prevent the spread of disease – or to make designer babies.

CRISPR is short for ‘Clustered Regularly Interspaced Short Palindromic Repeats’, a gene-editing tool whose best-known form utilizes the Cas9 enzyme to cut strands of DNA. It’s basically like molecular scrapbooking.

The development of CRISPR technology emerged from discoveries of how bacteria defend themselves, by creating a ‘library’ of virus DNA that the bacteria can draw on to destroy the DNA of foreign invaders before they are able to take hold of the organism.

CRISPR-based gene editing emerged only recently, with landmark papers in 2012 and 2013 first demonstrating the technique.

A Chinese scientist has already used CRISPR in an attempt to engineer designer babies – human babies with genes edited to be resistant to a particular virus (HIV). All of this could potentially improve humanity’s quality of life, but at what cost? The long-term side effects are still completely unknown, and there is no way to tell where this could end. It is one thing to design a baby to be HIV-resistant, but another to design a baby’s appearance and intelligence.


In addition, designer babies potentially call into question the very definition of what it means to be human.

3. Gene editing (GMO)

Moving on from human gene editing in CRISPR, we can examine the ethical issues with gene editing on other organisms, like plants. Gene editing includes any intervention in an organism’s genetics.

This intervention creates GMOs, or genetically modified organisms. It can result in benefits such as stronger, more drought-resistant crops, or crops with higher yields per acre, among other advantages.

Today, gene editing occurs across the world and is conducted on both plants and animals, mostly in the pursuit of better food production. In animals, gene editing has been used to create pigs that are naturally resistant to porcine reproductive and respiratory syndrome (PRRS), improving animal welfare.

The gene-editing process for all organisms is overseen by various government agencies, depending on the country. However, the long-term effects of much gene editing are still unknown, and the potential for edited genes to enter the ‘wild’ and alter the environment in unforeseen ways may be high. 

4. Animal testing

Animal testing is one of the most controversial areas of scientific research on this list. Many people couldn’t care less, while others vehemently oppose it. For years, animal testing has been used to create newer and better pharmaceuticals and test consumer products such as makeup, shampoos, etc. 

At the end of the day, however, animal testing places the prevention of human suffering over the prevention of animal suffering. In certain cases, the ethical argument for animal testing may be easier to make, such as where it may lead to advances in preventing disease. In other cases, the argument is harder: the development of a better lipstick is likely not worth the suffering of animals.

On the one hand, you have human suffering and on the other, you have animal suffering. And we seem to have no problem with animal suffering as long as it is for a greater cause.

In introducing the subject, we’ve made it seem fairly cut and dried, but an increasing number of scientists are questioning the relevance of continued animal testing at a time when AI and other technologies are starting to be able to accurately model and predict biological interactions. A lot of animals are harmed in the creation of many chemicals and consumer products, and we must each ask ourselves: is it worth it?

5. Human trials

The progression from animal testing to human testing or human trials occurs with most new medications. Human subject research is often necessary to get drugs to the final phase of regulatory approval. It serves as the final check of how a given medicine or chemical will interact with the human system. Yet, time and time again it has hurt, maimed, or killed individuals. And we have to ask ourselves again, at what point is this not worth it?

History may not be kind to the reputation of human trials, though scientists are making a constant effort to create safety standards in the process.

In 1947, German physicians who had conducted deadly experiments on concentration camp prisoners during WWII were prosecuted as war criminals in the Nuremberg trials. The Allies then established the Nuremberg Code, the first international document requiring voluntary consent for human research.

In today’s human testing, all patients must consent to the study. However, as long as human trials are conducted, there is a risk that some people are coerced into participating. For this reason, the ethics of the situation are still hotly debated.

6. Weapons and military R&D

Military weapons development is another major crossroads between science and ethics. Take, for example, the development of the atomic bomb under the Manhattan Project during WWII. In many ways, the research conducted during these experiments furthered humanity’s understanding of atoms, molecules, and quantum theory. In other ways, it eventually led to the deaths of hundreds of thousands of people.

Military power and weapons technology pose an ethical dilemma largely due to the nature of humankind. There is the potential that failure to invest in a particularly deadly class of weapon, such as bioweapons, could leave those weapons to be developed and controlled only by people intent on evil. Yet once such weapons are developed by anyone, the genie is out of the bottle and cannot be put back, which could lead to their use by those wanting to commit harm anyway.


7. Space colonization 

Since it seems like the Earth has seen better days, maybe it’s time to consider moving somewhere else, like Mars. Scientists have found evidence of water ice on Mars, and we know the planet also contains resources that may help us survive.

So, why not spend the money developing Mars as a colony?

The biggest ethical questions around Martian colonization arise when you consider the potential of present or future life on Mars. We can’t state with absolute certainty that Mars is lifeless, and moving people there could harm whatever life may exist. And the cost of developing programs to colonize Mars is high; surely the money could be used to help solve some of Earth’s current environmental problems?


The answers to these questions may have to do with how humanity should approach its ethical responsibility toward the Earth itself. If you believe humanity’s only ethical responsibility is to our planet, then colonization seems wasteful. If you believe that we need to explore all options, then space exploration makes sense, no matter how expensive. 


Closing out this discussion of ethical dilemmas in science and technology, we’re left wondering: what are innovation and the betterment of humanity worth? The answer will vary depending on whom you ask, but it’s a question worth asking yourself.


Facebook emotion study breached ethical guidelines, researchers say

Lack of 'informed consent' means that Facebook experiment on nearly 700,000 news feeds broke rules on tests on human subjects, say scientists


Researchers have roundly condemned Facebook's experiment in which it manipulated nearly 700,000 users' news feeds to see whether it would affect their emotions, saying it breaches ethical guidelines for "informed consent".

James Grimmelmann, professor of law at the University of Maryland, points out in an extensive blog post that "Facebook didn't give users informed consent" to allow them to decide whether to take part in the study, as required under US rules on human subjects research.

"The study harmed participants," because it changed their mood, Grimmelmann comments, adding "This is bad, even for Facebook."

But one of the researchers, Adam Kramer, posted a lengthy defence on Facebook , saying it was carried out "because we care about the emotional impact of Facebook and the people that use our product." He said that he and his colleagues "felt that it was important to investigate the common worry that seeing friends post positive content leads to people feeling negative or left out."

The experiment, carried out over the course of one week in 2012, altered 689,003 people's news feeds – about 0.04% of users, or 1 in 2,500. It hid "a small percentage" of emotional words from people's news feeds, without their knowledge, to test what effect that had on the statuses or "Likes" that they then posted or reacted to.
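As a quick arithmetic check on those figures (a back-of-the-envelope calculation, not data from the study): 1 in 2,500 is exactly 0.04%, and combined with the 689,003 affected accounts it implies a total user base of roughly 1.7 billion.

```python
# Back-of-the-envelope check of the quoted figures.
affected = 689_003
fraction = 1 / 2_500           # "1 in 2,500"
assert fraction == 0.0004      # i.e. exactly 0.04%

# Total user base implied by the two quoted figures together.
implied_total = affected / fraction
print(f"{implied_total:,.0f} users")  # 1,722,507,500 users
```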

The results found that, contrary to expectation, people's emotions were reinforced by what they saw - what the researchers called "emotional contagion".

But the study has come in for severe criticism because, unlike the advertising that Facebook shows - which arguably aims to alter people's behaviour by making them buy products or services from those advertisers - the changes to the news feeds were made without users' knowledge or explicit consent.

Max Masnick, a researcher with a doctorate in epidemiology who says of his work that "I do human-subjects research every day", argues that the structure of the experiment means there was no informed consent - a key element of any study on humans.

"As a researcher, you don’t get an ethical free pass because a user checked a box next to a link to a website’s terms of use. The researcher is responsible for making sure all participants are properly consented. In many cases, study staff will verbally go through lengthy consent forms with potential participants, point by point. Researchers will even quiz participants after presenting the informed consent information to make sure they really understand.

"Based on the information in the PNAS paper, I don’t think these researchers met this ethical obligation."

Kramer does not address the topic of informed consent in his blog post. But he says that "my co-authors and I are very sorry for the way the paper described the research and any anxiety it caused. In hindsight, the research benefits of the paper may not have justified all of this anxiety."

When asked whether the study had had an ethical review before being approved for publication, the US National Academy of Sciences, which published the controversial paper in its Proceedings of the National Academy of Sciences (PNAS), told the Guardian that it was investigating the issue.


07 May 2024

US funders to tighten oversight of controversial ‘gain of function’ research

[Image: biohazard suits hanging in a Biosafety Level 4 laboratory.] A US policy that goes into effect next year tightens oversight of risky pathogen research conducted in biosafety facilities. Credit: Associated Press/Alamy Stock Photo

After years of deliberation, US officials have released a policy that outlines how federal funding agencies and research institutions must review and oversee biological experiments on pathogens with the potential to be misused or spark a pandemic.

The policy, which applies to all research funded by US agencies and will take effect in May 2025, broadens oversight of these experiments. It singles out work involving high-risk pathogens for special oversight and streamlines existing policies and guidelines, adding clarity that researchers have been seeking for years.

“This is a very welcome development,” says Jaime Yassif, vice-president of global biological policy and programmes at the Nuclear Threat Initiative, a research centre in Washington DC that focuses on national-security issues. “The US is the biggest funder of life sciences research [globally], so we have a moral obligation to guard against risks.”

Balancing act

Manipulating pathogens such as viruses inside an enclosed laboratory facility, sometimes by making them more transmissible or harmful (called gain-of-function research), can help scientists to assess their risk to society and develop countermeasures such as vaccines or antiviral drugs. But the worry is that such pathogens could accidentally escape the laboratory or even become weaponized by people with malicious intent.

Policymakers have had difficulty developing a clearly articulated review system that evaluates the risks and benefits of this research, while ensuring that fundamental science needed to prepare for the next pandemic and to advance medicine isn’t paralysed. The latest policy, released on 6 May by the US Office of Science and Technology Policy, is the next stage of a long-running US balancing act between totally banning high-risk pathogen research and assessing it with standards that some say are too ambiguous.

In 2014, after several accidents involving mishandled pathogens at US government laboratories, the administration of then-President Barack Obama announced a moratorium on funding for research that could make certain pathogens — such as influenza and coronaviruses — more dangerous, given their potential to unleash an epidemic or pandemic. At the time, some researchers said the ban threatened necessary flu surveillance and vaccine research.

The government ended the moratorium in 2017, after the US National Science Advisory Board for Biosecurity (NSABB), a panel of experts that advises the US government, concluded that very few experiments posed a risk. That year, the US Department of Health and Human Services (HHS) instead implemented a review framework for proposals from scientists seeking federal funding for experiments involving potential pandemic pathogens. This framework applied to proposals to any agency housed under the HHS, including the National Institutes of Health (NIH) — the largest public funder of biomedical research in the world.

Researchers raised concerns about the transparency of this review process, and the NSABB was asked to revisit these policies and guidelines in 2020, but the COVID-19 pandemic delayed any action until 2022. During that time, the emergence of the coronavirus SARS-CoV-2, and the ensuing debate over whether it had leaked from a lab in China, put biosafety at the top of researchers’ minds worldwide. The NIH, in particular, was scrutinized during the pandemic for its role in funding potentially risky coronavirus research. In response, some Republican lawmakers have — so far unsuccessfully — put forward legislation that would once again place a moratorium on research that might increase the transmissibility or virulence of pathogens.

Finding a balance

The latest policy aims to address concerns that have arisen over the past decade about lax oversight, ambiguous wording and lack of transparency.

It breaks potentially problematic research into two categories. The first includes research on biological pathogens or toxins that could generate knowledge, technologies or products that could be misused. The second includes research on pathogens with enhanced pandemic potential.

Research falls into the first category if it meets several criteria. For example, it must involve high-risk biological agents, such as smallpox, that are on specific lists. It must also have particular experimental outcomes, such as increasing an agent’s deadliness.

The second category covers research in which pathogens are intended to be modified in a way that is “reasonably anticipated” to make them more dangerous. That criterion means that even research on pathogens that are not typically considered dangerous — seasonal influenza, for example — can fall into the second category. Previously, pathogen-surveillance and vaccine-development research were not subject to additional oversight in the United States; the latest policy eliminates this exception, but clarifies that both surveillance and vaccine research are “typically not within the scope” of the second category.
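The two-tier triage described above can be sketched as a toy decision function. This is our illustrative paraphrase only: the function name, parameters and boolean simplifications are hypothetical, not taken from the policy text.

```python
def triage_category(on_high_risk_list, risky_outcome,
                    enhanced_pandemic_potential,
                    surveillance_or_vaccine_work=False):
    """Toy triage: return 1 or 2 for the oversight category, else None.

    The parameters are hypothetical simplifications of the criteria
    described above, not definitions from the policy itself.
    """
    # Second category: modifications "reasonably anticipated" to make a
    # pathogen more dangerous, even for agents not usually seen as risky.
    # Surveillance and vaccine work are "typically not within the scope".
    if enhanced_pandemic_potential and not surveillance_or_vaccine_work:
        return 2
    # First category: listed high-risk agents (e.g. smallpox) combined with
    # particular experimental outcomes, such as increased deadliness.
    if on_high_risk_list and risky_outcome:
        return 1
    return None

print(triage_category(False, False, True))        # -> 2
print(triage_category(True, True, False))         # -> 1
print(triage_category(False, False, True, True))  # -> None
```

Checking category 2 first reflects the policy's structure as described: category-2 proposals trigger the extra layer of review even when the agent would also meet category-1 criteria.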

Layers of review

Scientists and their institutions are responsible for identifying research that falls into the two categories, the policy states. Once the funding agency confirms that a research proposal fits into either group, that agency will request a risk–benefit assessment and a risk-mitigation plan from the investigator and institution. If a proposal is deemed to fit into the second category, it will undergo an extra review before the project gets the green light. A report of all federally funded research that fits into the second category will be made public every year.

The directive also mandates that agencies outside the HHS that fund biological research, such as the US Department of Defense, must abide by the same rules. This is a huge step forward, says Tom Inglesby, director of the Johns Hopkins Center for Health Security in Baltimore, Maryland. But it applies only to federally funded research; the policy recommends, but does not require, that non-governmental organizations and the private sector follow the same rules.

Federal agencies and research institutions will now create their own implementation plans to comply with the policy before it goes into effect in 2025. Yassif says that the policy’s success will hinge on how these stakeholders implement it.

Nevertheless, the policy sets a worldwide standard and might inspire other countries to re-evaluate how they oversee life-sciences research, says Filippa Lentzos, a biosecurity researcher at King’s College London who chairs an advisory group for the World Health Organization (WHO) on the responsible use of life-sciences research. Later this month, at the World Health Assembly in Geneva, Switzerland, WHO member states will consider a proposal to urge nations to cooperate on developing international standards for biosecurity.

Nature 629 , 510-511 (2024)

doi: https://doi.org/10.1038/d41586-024-01377-x

Published: 01 April 2021

Unethical practices within medical research and publication – An exploratory study

  • S. D. Sivasubramaniam 1 ,
  • M. Cosentino 2 ,
  • L. Ribeiro 3 &
  • F. Marino 2  

International Journal for Educational Integrity volume  17 , Article number:  7 ( 2021 ) Cite this article


The data produced by the scientific community impacts academia, clinicians, and the general public; therefore, the scientific community and other regulatory bodies have been focussing on ethical codes of conduct. Despite the measures taken by several research councils, unethical research, publishing and/or reviewing behaviours still take place. This exploratory study considers some of the current unethical practices and the reasons behind them, and explores ways to discourage these within research and other professional disciplinary bodies. Interviews/discussions with PhD students, technicians, and academics/principal investigators (PIs) (N=110) were conducted, mostly in European higher education institutions including the UK, Italy, Ireland, Portugal, the Czech Republic and the Netherlands.

Through collegiate discussions, shared experiences and examination of previously published/reported information, the authors identified several less-reported behaviours. Some of these practices are influenced either by undue institutional expectations of research esteem or by changes in the journal review process. The malpractices fall into two categories: (a) methodological malpractices, including data management, and (b) those that contravene publishing ethics. The former mostly relates to "committed bias", by which the author selectively uses data to suit their own hypothesis, and to the selection of outdated protocols that are not suited to the intended work. Although these are usually unintentional, incidences of intentional manipulation were reported to the authors of this study: for example, carrying out investigations without positive (or negative) controls, but including controls from a previous study. Other methodological malpractices include unfair repetitions to gain statistical significance and retrospective ethical approvals. Publication-related malpractices, such as authorship abuses and ethical-clearance irregularities, were also reported. The findings suggest that a globalised approach, with clear punitive measures for offenders, is needed to tackle this problem.

Introduction

Scientific research depends on effectively planned, innovative investigation coupled with truthful, critically analysed reporting. Research findings impact academia, clinicians, and the general public, but the scientific community is usually expected to "self-regulate", guided by ethical codes of conduct (or behaviour). The concept of self-regulation is built in from the early stages of a research grant application until the submission of manuscripts. However, increasing demands on research esteem, coupled with the way esteem is captured/assessed, have created a relentless pressure to publish at all costs; this has resulted in numerous cases of scientific misconduct (Rawat and Meena 2014). Since the beginning of this century, cases of blatant scientific misconduct have received significant attention. For example, questionable research practices (QRPs) have been exposed by whistle-blowers within the scientific community and publicised by the media (Altman 2006; John et al. 2012). Moreover, organisations such as the Centre for Scientific Integrity (CSI) concentrate on the transparency, integrity and reproducibility of published data, and promote best practices (www1 n.d.). These measures focus on "scholarly conduct" and promote ethical behaviour in research and in the way it is reported/disseminated, yet the number of cases of misconduct and/or QRPs is on the rise. In 2008, a survey amongst researchers funded by the National Institutes of Health (NIH) suggested there might be as many as 1,000 cases of potential scientific misconduct going unreported each year (Titus et al. 2008). Another report, on bioRxiv (an open-access pre-print repository), showed that 6% of the papers (59 out of 960) published in one journal (Molecular and Cellular Biology, MCB) between 2009 and 2016 contained inappropriately duplicated images (Bik et al. 2018). Brainard (2018) recently reported that the number of articles retracted by scientific journals had increased 10-fold in the past 10 years. If the reported incidence of scientific misconduct is this high, one can imagine the prevalence of other, unreported forms of misconduct. The World Association of Medical Editors (WAME) has identified the following as the most frequently reported forms of misconduct: fabrication, falsification, plagiarism/ghost writing, image/data manipulation, improprieties of authorship, misappropriation of the ideas of others, violation of local and international regulations (including animal/human rights and ethics), and inappropriate/false reporting (i.e. wrongful whistle-blowing) (www2 n.d.).

However, WAME failed to identify other forms of scientific misconduct, such as reviewer bias (including reviewers' own scientific, religious or political beliefs) (Adler and Stayer 2017), conflicts of interest (Bero 2017), and peer-review fixing, which is widespread, especially after the introduction of author-appointed peer reviewers (Ferguson et al. 2014; Thomas 2018). The most recent Retraction Watch report showed that more than 500 published manuscripts have been retracted due to peer-review fixing; many of these are from a small group of authors (cited in Meadows 2017). Other reasons for retraction include intentional/unintentional misconduct, fraud and, to a lesser extent, honest errors. According to Fang et al. (2012), in a detailed study of 2,047 retracted articles within the biomedical and life sciences, 67.4% of retractions were due to some form of misconduct (including fraud/suspected fraud, duplicate publication, and plagiarism); only 21.3% of retractions were due to genuine error. As can be seen, most of the information regarding academic misconduct is reported, detected or meta-analysed from databases. As for reporting (or whistle-blowing), many scientists have been reticent to raise concerns, mainly because of fear of the aftermath or implications of doing so (Bouter and Hendrix 2017). An anonymous information-gathering exercise amongst scientists, junior scientists, technicians and PhD students may therefore highlight the misconduct issues being encountered in day-to-day laboratory and scholarly activities. Accordingly, this interview-based exploratory study reports potentially undivulged misconduct and tries to link it with previously reported misconduct that is being enforced, practised or discussed within scientific communities.
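Converting the Fang et al. percentages back into approximate article counts (simple arithmetic on the figures quoted above, for orientation only):

```python
# Approximate counts implied by the percentages quoted from Fang et al. (2012).
total_retracted = 2_047

misconduct   = round(total_retracted * 0.674)  # fraud, duplication, plagiarism
honest_error = round(total_retracted * 0.213)  # genuine error

print(misconduct)    # -> 1380
print(honest_error)  # -> 436
```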

Methodology

This qualitative exploratory study was based on informal mini-interviews conducted through collegiate discussions with technicians, PhD scholars and fellow academics (N=110) within the medical and biomedical sciences, mainly in European higher education institutions including the UK, Italy, Ireland, Portugal, the Czech Republic and the Netherlands (only 5 PhD students). PhD students (n=75), technicians (mostly in the UK; n=25) and academics/principal investigators (PIs; n=10) from around Europe took part. The mini-interviews were carried out in accordance with local ethical guidance and processes. The discussions were not voice-recorded, nor were interviewees' details taken, in order to maintain anonymity. The data were captured (in long-hand) by summarising participants' views on the following three questions (see below).

These answers/notes were then grouped according to their similarities and summarised (see Tables 1 and 2). The mini-interviews were semi-structured, based around the following three questions:

Have you encountered any individual or institutional malpractices in your research area/laboratory?

If so, could you give a short description of this misconduct?

What measures, in your opinion, are needed to minimise or eliminate this misconduct?
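The sample composition reported in the Methodology can be tallied directly (a restatement of the stated numbers, nothing more):

```python
# Tally of the participant groups reported in the Methodology section.
participants = {"PhD students": 75, "technicians": 25, "academics/PIs": 10}

total = sum(participants.values())
print(total)  # -> 110, matching the reported N

for group, n in participants.items():
    print(f"{group}: {n} ({n / total:.0%})")
```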

We also examined recently published and/or media-reported unethical practices or misconduct for comparison with our findings (see Table 2). Fig. 1 summarises the methodology and its meta-cognitive reflection (similar to Eaton et al. 2019).

Fig. 1: Interactive enquiry-based explorative methodology used in this study

Results and discussion

As stated above, this manuscript is an exploratory study of unethical practices amongst medical researchers that are not well known or previously reported. Hence, the methodology was exploratory, with minimal focus on standardisation, on the details of the qualitative approach and paradigm, or on the impact of researcher characteristics and reflexivity (British Medical Journal (BMJ) – www3 n.d.). Most importantly, our initial informal meetings prior to this study clearly indicated that participants were reluctant to provide information that would assist an analysis linked to researcher characteristics and/or reflexivity. Thus, the data presented herein would not be suitable for a full thematic analysis. We accept this as a research limitation.

This study identified some less-reported (not well-known) unethical behaviours or misconduct. The findings from technicians/PhD scholars and academics/PIs are summarised in Tables 1 and 2. The study initially aimed to identify previously unreported unethical research conduct; however, the data show that much previously identified misconduct is still common amongst researchers. Because the interviews were not audio-recorded (to assure anonymity), participants openly reported the unethical practices within their laboratories (or elsewhere). The absence of recordings may cast doubt on the accuracy of data interpretation; to minimise this, we captured a summary of each conversation in long-hand.

We were able to identify two emerging themes linked to the stages of a typical research cycle (as described by Hevner 2007): (a) methodological malpractices (including data management), and (b) those that contravene publishing ethics. Researcher-linked behaviours happen during the laboratory investigation stage, where researchers employ questionable research practices; these include self-imposed as well as acquired (or taught) habits. As can be seen from Tables 1 and 2, these malpractices are mainly carried out by PhD scholars, post-doctoral scientists or early-career researchers. The reported "practices" may be common amongst laboratory staff, especially given that some of them have been nicknamed (e.g. ghost repeats, data mining etc.; see Table 1). Individual or researcher-linked unethical behaviours mostly relate to "committed bias", by which the researcher selectively uses data to suit their own hypothesis or what they perceive as ground-breaking. This often results in conduct whereby research (and in some cases data/results) is statistically manipulated to suit the perceived conclusion.

Although this is a small-scale pilot study, we feel it reflects a common trend in laboratory-based research. As mentioned earlier, although the study set out to detect unreported research misconduct/malpractices, participants also reported some behaviours already described in previous studies.

In contrast, established academics, professors and PIs tend to commit publication-related misconduct. This can be divided into author-related and reviewer-related misconduct. The former includes QRPs during manuscript preparation (such as selective use of data, omitting outliers, improper ethical clearance, authorship demands etc.). The latter is carried out by academics when they review others' manuscripts, and includes delaying review decisions, reciprocal reviewing etc.

From the tables above, it seems that most of the reported misconduct could easily be prevented if specific and accurate guidelines or codes of conduct were present in each research laboratory (see below). This problem has, for example, only minor impact in clinical research, where the study protocol is rigorously detailed in advance, the specific analyses to be included in the final report are specified in advance with clear primary and secondary endpoints, and all analyses/reports must be stored for the final study revision/conclusion. All these steps are regulated by Good Clinical Practice guidelines (GCP; National Institute for Health Research Clinical Research Network (NIHR CRN) – www4 n.d.).

This by no means indicates that fraud does not exist in clinical research, but rather that it is easier to discover there than in laboratory-based investigations. The paper by Verhagen et al. (2003) refers to a specific situation that commonly arises in a research laboratory. The majority of experiments within biomedical research are conducted on tissues or cells; the experimental set-ups, including negative and positive controls, can therefore easily (and frequently) be manipulated. This can only be prevented by using Standard Operating Procedures (SOPs), well-written and clear regulation such as Good Laboratory Practice (GLP; Directive 2004/9/EC), and written protocols. However, at present, no such regulations exist outside industry-based research, where GLP is mandatory. In a survey-based systematic review, Fanelli (2009) reported that approximately 2% of scientists admitted they had fabricated their data at some point in their research career. It is worth noting that Fanelli's study (as well as ours) only reports data from those who were willing to admit engaging in these activities. This casts doubt on the actual number of occurrences, as many would not have reported their misconduct. Other authors have highlighted the same issue and cast doubt on the reproducibility of scientific data (Resnik and Shamoo 2017; Brall et al. 2017; Shamoo 2016; Collins and Tabak 2014; Kornfeld and Titus 2016).

The interview responses

We also wanted to understand the causes of these QRPs to obtain a clearer picture of the misconduct. Based on the interview responses, we have tried to give a narrative but critical description of individual perceptions, and their rationalisations, in relation to previously published information.

Methodological malpractices

The data reported herein show that PhD scholars and post-doctoral fellows are mostly involved in laboratory-linked methodological misconduct. Many of them (especially the post-doctoral scientists) blamed supervisory/institutional pressure not only to enhance their publishing record but also to maintain high impact. One post-doctoral scientist claimed, "there is always a constant pressure on publication; my supervisor said the reason you are not producing any meaningful data is because you are a perfectionist". He further recalled his supervisor once saying, "if the data is 80% correct, you should accept it as valid and stop repeating until you are satisfied".

Likewise, another researcher who recently returned from the US said, "I was an excellent researcher here (home country), but when I went to America, they demanded at least one paper every six months. When I was unable to deliver this (and missed a year without publishing any papers), my supervisor stopped meeting me, I was not invited for any laboratory meetings, presentations, and proposal discussions; in fact, they made me quit the job". A PhD student recalled his supervisor jokingly hinting, "if you want a perfect negative control, use water – it will not produce any results". Comments and demands like these must play a big role in encouraging laboratory-based misconduct. In particular, the pressure to publish more papers in a limited period led to misconduct such as data manipulation (removing outliers, duplicate replications, etc.) or changing the aim of a study so as to include data sets that were not previously considered, because the results were not in line with the original aim. All these pressures force young researchers to adopt an attitude of obtaining publishable results by any means (ethical or not) – a "Machiavellian personality trait", as Tijdink et al. (2016) put it. Indeed, an immoral message is being delivered to these young researchers (future scientists), encouraging cheating behaviours. In fact, Buljan et al. (2018) have recently highlighted the research environment in which a scientist works as one of the potential causes of misconduct.
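The "stop repeating until you are satisfied" attitude has a quantifiable statistical cost. The following Monte Carlo sketch is ours, not from the study: it assumes a simple two-sided z-test on null data with known unit variance, and shows how re-running a null experiment up to five times and keeping any "significant" attempt inflates the false-positive rate from the nominal 5% to roughly 1 - 0.95^5 ≈ 23%:

```python
import math
import random

random.seed(1)

N = 20          # observations per experiment
Z_CRIT = 1.96   # two-sided 5% threshold for a z-test with known sigma = 1
TRIALS = 4000

def z_stat(n):
    """z statistic for the mean of n draws from N(0, 1): the true effect is zero."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    return mean * math.sqrt(n)

def significant(max_attempts):
    """Re-run the null experiment up to max_attempts times; report success
    if ANY attempt crosses the significance threshold (the malpractice)."""
    return any(abs(z_stat(N)) > Z_CRIT for _ in range(max_attempts))

honest = sum(significant(1) for _ in range(TRIALS)) / TRIALS
hacked = sum(significant(5) for _ in range(TRIALS)) / TRIALS

print(f"honest false-positive rate: {honest:.3f}")  # close to the nominal 0.05
print(f"after up to 5 repeats:      {hacked:.3f}")  # close to 1 - 0.95**5 ~ 0.23
```

The point is not the exact numbers but the mechanism: each "unfair repetition" gives chance another opportunity to deliver a publishable but spurious result.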

Behaviours that contravene publishing ethics

Academics (and PIs) mostly identified misconduct linked to contravening publishing ethics. This finding itself suggests that most of the academics who took part in this study have little "presence" within their laboratories. When confronted with the data obtained from PhD scholars and technicians, some of them vehemently denied the claims; others came up with a variety of excuses. One lecturer/researcher said, "I have got far too much teaching to be in my laboratory". Another professor said, "I have post-docs within my laboratory, they will look after the rest; to be honest, my research skills are too old to refresh!" One PI replied, "why should I check them? No one checked me when I was doing research". All these replies show a lack of concern for research malpractice. It is true that academics are under pressure to deliver high-impact research, carry out consultancy work, get involved with internationalisation within academia, and teach (Edwards and Roy 2017). However, these pressures should not undermine research ethics.

One researcher claimed to have noticed at least two different versions of "convenient ethical clearance". According to him, some researchers, especially those using human tissues, avoid specifying their research aims and instead write an application in such a way that they can use the samples for a variety of different projects (bearing in mind possible future developments). For example, if they aim to use the tissue to study a particular protein, the ethical application would mention all the related proteins and linked pathways. They justify this by claiming the tissues are precious and that they are therefore "maximising the effective use of available material". Whilst understanding the rationale of this argument, the academic who witnessed the practice asked, "how ethical is it to supply misleading information in an ethical application?" He also highlighted issues with backdating ethical approval in one institution: the ethical approval was obtained (or, in his words, "staged") after the study had been completed. Although this is one incident reported by one whistle-blower, it highlights institutional malpractice.

Selective use of data is another category reported here and elsewhere (Satalkar and Shaw 2019; Blatt 2013; Bornmann 2013). One academic reported incidences of researchers purposely omitting data to maximise statistical significance. If this is the case, then the validity of the reported work, its statistical significance and, in some cases, its clinical use are in question. Interestingly, as elegantly reported by Fanelli (2010), the great majority of published papers report findings that are in line with the original hypothesis; the number of published papers reporting negative results is very limited.
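The effect of quietly discarding inconvenient observations is also easy to demonstrate. This simulation is ours, for illustration only: it assumes a one-sided z-test on null data with known unit variance, and drops up to three of the lowest values as "outliers", re-testing after each drop:

```python
import math
import random

random.seed(7)

N = 30                # observations per experiment
Z_ONE_SIDED = 1.645   # one-sided 5% threshold for a z-test with sigma = 1
TRIALS = 4000

def one_sided_z(xs):
    """One-sided z statistic for the mean of xs, assuming unit variance."""
    return (sum(xs) / len(xs)) * math.sqrt(len(xs))

def experiment(drop_outliers):
    """Null data; optionally drop up to 3 lowest points ('outliers') one at
    a time, re-testing after each drop -- the selective-omission malpractice."""
    xs = sorted(random.gauss(0, 1) for _ in range(N))
    for _ in range(4):  # original test plus up to three drops
        if one_sided_z(xs) > Z_ONE_SIDED:
            return True
        if not drop_outliers:
            return False
        xs = xs[1:]  # discard the lowest remaining value and try again
    return False

honest = sum(experiment(False) for _ in range(TRIALS)) / TRIALS
hacked = sum(experiment(True) for _ in range(TRIALS)) / TRIALS

print(f"honest: {honest:.3f}")  # near the nominal 0.05
print(f"hacked: {hacked:.3f}")  # substantially inflated
```

Dropping just three of thirty points, each time in the convenient direction, multiplies the chance of a spurious "significant" result several-fold, which is exactly why pre-registered outlier rules exist.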

Misconduct relating to authorship has been highlighted in many previous studies (Ploug 2018; Vera-Badillo et al. 2016; Ng Chirk Jenn 2006). The British Medical Journal (BMJ – www5 n.d.) has classified two main types of authorship misconduct: (a) omission of a collaborator who has contributed significantly to the project, and (b) inclusion of an author who has not (or has only minimally) contributed. Interestingly, in this study, one academic claimed that he was under pressure to include the research co-ordinator of his department as an author on every publication.

He recalled the first instance when he was pressurised to include the co-ordinator: "It was my first paper as a PI but due to my institutional policy, all potential publications needed to be scrutinised by the co-ordinator for their worthiness of publication", "so when I submitted for internal scrutiny, I was called by the co-ordinator who simply said there is nothing wrong with this study, but one important name is missing in authors' list" (indirectly requesting her name to be included). Likewise, another PI said, "it is an unwritten institutional policy to include at least one professor on every publication". Yet another PI claimed, "this is common in my laboratory – all post-doctoral scientists would have a chance to be an author; by this way we would build their research esteem". His justification was, "many post-doctoral scientists spend a substantial amount of time mentoring other scientists and PhD students, therefore they deserve honorary authorships". Similar malpractices have been highlighted by other authors (Vera-Badillo et al. 2016; Gøtzsche et al. 2007), but the worrying finding is that in many cases the practice is institutionalised. With regard to authorship, according to the International Committee of Medical Journal Editors (ICMJE – www6 n.d.), authorship can only be given to those with (a) a substantial contribution (at least to a significant part of the investigation) and (b) involvement in manuscript preparation, including contribution to critical review. However, our discussions revealed complimentary authorships, authorship denial, and more.

Malpractices in the peer-review process

The final QRP highlighted by our interviewees relates to the reviewing process. One academic openly admitted, “I and Dr X usually use each other as reviewers because we both understand our research”. He further added, “the blind reviewing is a thing of the past; every author has his own writing style, and if you are in one particular research field, with time, you would be able to predict the origin of the manuscript you are reviewing (whether it is your friend or a person with a conflicting research interest!)”. Another academic said that “the era of blind reviewing is long gone; authors are intentionally or unintentionally identifying themselves within the manuscripts with sentences such as ‘previously we have shown’. This allows the reviewer to identify the authors from the reference list”. He further claimed he had also experienced reviewers intentionally delaying acceptance or asking for further experiments to be carried out, simply because they wanted their own manuscript (on a related topic) to be published first. Incidents like this, though minimal, cast doubt on the reviewing process itself.

Recent reports by Thomas (2018) and Preston (2017) (see also Adler and Stayer 2017) have highlighted issues (or scams) such as an author reviewing his own manuscripts. Of course, many journals do not use the authors' suggested reviewers; instead, they build a database of reviewers and randomly select appropriate ones. Still, it is not clear how robust this approach is in curtailing reviewer-based misconduct. Organisations such as Retraction Watch constantly pick up and report these malpractices, yet there are no definite sanctions or punishments for the culprits (Zimmerman 2017).
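The "reviewer database plus random draw" safeguard described above can be sketched in a few lines. This is a minimal illustration under stated assumptions (a flat list of reviewer names and a simple conflict rule that excludes the manuscript's authors and any author-suggested reviewers); it is not any journal's actual editorial workflow.

```python
import random

def select_reviewers(pool, manuscript_authors, suggested, n=2, seed=None):
    """Draw n reviewers at random from an editorial database,
    excluding the manuscript's own authors and any author-suggested
    names (the simplest conflict-of-interest rule).

    All names and fields here are illustrative assumptions."""
    rng = random.Random(seed)
    excluded = set(manuscript_authors) | set(suggested)
    # Only conflict-free reviewers remain eligible for the draw.
    eligible = [r for r in pool if r not in excluded]
    if len(eligible) < n:
        raise ValueError("not enough conflict-free reviewers in the database")
    return rng.sample(eligible, n)

# Example: an author-suggested "Dr X" can never end up reviewing the paper.
database = ["Dr X", "Dr Y", "Dr Z", "Dr W"]
picked = select_reviewers(database, manuscript_authors=["Dr A"],
                          suggested=["Dr X"], n=2, seed=42)
assert "Dr X" not in picked
```

Even this toy version makes the open question visible: the draw removes declared conflicts, but undeclared reciprocal arrangements between colleagues remain invisible to it.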

One of the academic interviewees recalled an incident in which an author had been dismissed over a serious image-manipulation scam, yet obtained a research position at another institution within 3 months of dismissal. Galbraith (2017) reviewed summaries of 284 integrity-related cases published by the Office of Research Integrity (ORI), and found that in around 47% of cases the researchers received moderate punishment and were often permitted to continue their research. This highlights the need for a globalised approach with clear sanction measures to tackle research misconduct. Although this is a small-scale study, it has highlighted that, despite measures taken by research regulatory bodies, the problem of misconduct persists. The main problem behind this is “the lack of care”, underpinned by pressures for esteem.

Limitations

This is an exploratory study with minimal focus on standardisation, on the details of the qualitative approach and paradigm, or on the impact of researcher characteristics and reflexivity. Therefore, the data presented herein are not suited for a full thematic analysis. Also, this is a small-scale study with a sample size of 110 participants who are further divided into sub-groups (such as PhD students, technicians and PIs). This limits the scope for analysing variability in the responses of individual sub-groups, and therefore might have resulted in voluntary response bias (i.e. responses influenced by individual perceptions of research misconduct). Yet, the study has highlighted that the issue of research misconduct is worth pursuing with a larger sample. It has also highlighted the common QRPs (both laboratory- and publication-related) that need further focus, enabling us to establish the right research design for future studies.

The way forward

This exploratory study (and previously reported large-scale studies) showed that QRP is still a problem in science and medical research. So what is the way forward to stop these types of misconduct? Whilst it is important to set up clear criteria for individual research conduct, it is also important to set up institutional policies. These policies should aim at promoting academic/research integrity, with paramount attention to training young researchers in research integrity. The focus should be on young researchers attaining rigorous learning and application of the best methodological and professional standards in their research. In fact, the Singapore Statement on Research Integrity (www7 n.d.) not only highlights the importance of individual researchers maintaining integrity in their research, but also insists on the role of institutions in creating and sustaining research integrity via educational programmes with continuous monitoring (Cosentino and Picozzi 2013). Considering the findings from this study, it would also be appropriate to suggest that an international regulatory body regularly monitor these practices, involving all stakeholders including governments.

In fact, this study (and others) have highlighted the importance of re-validating the “voluntary commitment” to research integrity. With respect to individual researchers, we propose a unified approach for early career researchers (ECRs). They should be educated about the importance of ethics and ethical behaviours (see Table 3 for our suggestions for ECRs). We feel it is vital to provide compulsory ethical training throughout their careers (not just at the beginning). It is also advisable to regularly carry out “peer review” visits between laboratories for ethical and health/safety aspects. Most importantly, it is time for the research community to move away from the expectation of “self-governance” and establish international research governance guidelines that can be monitored by individual countries.

We do agree that this is a small-scale pilot study and that, due to the way it was conducted, we are unable to carry out a full thematic analysis. This was mainly because the participants were extremely reluctant to offer the information needed to formulate researcher characteristics. Also, the study data in many cases confirm the previously reported finding that QRP and research misconduct are still a problem within science and medicine. Yet, this study has attempted to narrate the previously unreported justifications given by the interviewees. In addition, we were able to highlight that these activities are becoming a regular occurrence (hence the nicknamed behaviours). We also provided some insights into how academic pressures are inflicted upon early career researchers, as well as some recommendations regarding the training of ECRs.

Significance

The study has highlighted that the negative influence of supervisory/peer pressures and/or inappropriate training may be the main cause of these misconducts, underlining the importance of devising and implementing a universal research code of conduct. Although this was an exploratory investigation, the data presented herein point out that unethical practices may still be widespread within the biomedical field. They highlight the fact that, despite the proactive and reflective measures taken by research governance organisations, these practices are still going on in different countries within Europe. As the study was exploratory, we had the flexibility to adapt and evolve our questions in response to the answers received. This will help us to carry out detailed systematic research on this topic involving an international audience of researchers.

Concluding remarks

To summarise, this small-scale interview-based narrative study has highlighted that QRP and research misconduct are still a problem within science and medicine. Although they may be influenced by institutional and career-related pressures, these practices seriously undermine ethical standards and call into question the validity of the data being reported. The findings also suggest that both methodological and publication-related malpractices continue, despite being widely reported. The measures taken by journal editors and other regulatory bodies such as WAME and ICMJE may not be sufficient to curtail these practices. Therefore, it would be important to take steps towards providing a universal research code of conduct. Without a globalised approach with clear punitive measures for offenders, research misconduct and QRP not only affect the reliability, reproducibility and integrity of research, but also erode public trust in medical research. This study has also highlighted the importance of carrying out large-scale studies to obtain a clear picture of the misconduct undermining research ethics culture.

Availability of data and materials

The authors confirm that the data supporting the findings of this study are available within the article.

Adler AC, Stayer SA (2017) Bias Among Peer Reviewers. JAMA. 318(8):755. https://doi.org/10.1001/jama.2017.9186


Altman, LK (2006). For science gatekeepers, a credibility gap. The New York Times. Retrieved from http://www.nytimes.com/2006/05/02/health/02docs.html?pagewanted=all . Accessed 26 July 2019

Bero L (2017) Addressing Bias and Conflict of Interest Among Biomedical Researchers. JAMA 317(17):1723–1724. https://doi.org/10.1001/jama.2017.3854

Bik EM, Fang FC, Kullas AL, Davis RJ, Casadevall A (2018) Analysis and correction of inappropriate image duplication: the Molecular and Cellular Biology experience. Mol Cell Biol. https://doi.org/10.1128/MCB.00309-18

Blatt M (2013) Manipulation and Misconduct in the Handling of Image Data. Plant Physiol 163(1):3–4. https://doi.org/10.1104/pp.113.900471

Bornmann L (2013) Research Misconduct—Definitions, Manifestations and Extent. Publications. 1:87–98. https://doi.org/10.3390/publications1030087

Bouter LM, Hendrix S (2017) Both whistle-blowers and the scientists they accuse are vulnerable and deserve protection. Account Res 24(6):359–366. https://doi.org/10.1080/08989621.2017.1327814

Brainard J (2018) Rethinking retractions. Science. 362(6413):390–393. https://doi.org/10.1126/science.362.6413.390

Brall C, Maeckelberghe E, Porz R, Makhoul J, Schröder-Bäck P (2017) Research Ethics 2.0: New perspectives on norms, values, and integrity in genomic research in times of even scarcer resources. Public Health Genomics 20:27–35. https://doi.org/10.1159/000462960

Buljan I, Barać L, Marušić A (2018) How researchers perceive research misconduct in biomedicine and how they would prevent it: A qualitative study in a small scientific community. Account Res 25(4):220–238. https://doi.org/10.1080/08989621.2018.1463162

Collins FS and Tabak LA (2014) Policy: NIH plans to enhance reproducibility. NATURE (Comment) - https://www.nature.com/news/policy-nih-plans-to-enhance-reproducibility-1.14586

Cosentino M, Picozzi M (2013) Transparency for each research article: Institutions must also be accountable for research integrity. BMJ 347:f5477. https://doi.org/10.1136/bmj.f5477

Directive 2004/9/EC of the European Parliament and of the Council of 11 February 2004 on the inspection and verification of good laboratory practice (GLP).  https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=OJ:L:2004:050:0028:0043:EN:PDF . Accessed  07 Sep 2019

Eaton SE, Chibry N, Toye MA, Rossi S (2019) Interinstitutional perspectives on contract cheating: a qualitative narrative exploration from Canada. Int J Educ Integr 15:9. https://doi.org/10.1007/s40979-019-0046-0

Edwards MA, Roy S (2017) Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environ Eng Sci 34(1):51–61. https://doi.org/10.1089/ees.2016.0223

Fanelli D (2009) How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. Plos One 4(5):e5738

Fanelli D (2010) Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS One 5(4):e10271. https://doi.org/10.1371/journal.pone.0010271

Fang FC, Steen RG, Casadevall A (2012) Misconduct accounts for the majority of retracted scientific publications. PNAS 109(42):17028–11703. https://doi.org/10.1073/pnas.1212247109

Ferguson C, Marcus A, Oransky I (2014) Publishing: The peer-review scam. Nature 515(7528):480–482. http://www.nature.com/news/publishing-the-peer-review-scam-1.16400. Accessed 21 Nov 2019

Galbraith KL (2017) Life after research misconduct: Punishments and the pursuit of second chances. J Empir Res Hum Res Ethics 12(1):26–32. https://doi.org/10.1177/1556264616682568

Gøtzsche PC, Hróbjartsson A, Johansen HK, Haahr MT, Altman DG, Chan A-W (2007) Ghost authorship in industry-initiated randomised trials. PLoS Med. https://doi.org/10.1371/journal.pmed.0040019

Hevner AR (2007) A Three Cycle View of Design Science Research. Scand J Inf Syst 19(2):4 https://aisel.aisnet.org/sjis/vol19/iss2/4


Jenn NC (2006) Common Ethical Issues In Research And Publication. Malays Fam Physician 1(2-3):74–76

John LK, Loewenstein G, Prelec D (2012) Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 23(5):524–532

Kornfeld DS, Titus SL (2016) Stop ignoring misconduct. Nature 537(7618):29–30. https://doi.org/10.1038/537029a

Meadows A (2017) What does transparent peer review mean and why is it important? The Scholarly Kitchen [blog of the Society for Scholarly Publishing]

Ploug T (2018) Should all medical research be published? The moral responsibility of medical journal editors. J Med Ethics 44:690–694

Preston A (2017) The future of peer review. Sci Am. Retrieved from https://blogs.scientificamerican.com/observations/the-future-of-peer-review/

Rawat S, Meena S (2014) Publish or perish: Where are we heading? J Res Med Sci. 19(2):87–89

Resnik DB, Shamoo AE (2017) Reproducibility and Research Integrity. Account Res. 24(2):116–123. https://doi.org/10.1080/08989621.2016.1257387

Satalkar P, Shaw D (2019) How do researchers acquire and develop notions of research integrity? A qualitative study among biomedical researchers in Switzerland. BMC Med Ethics 20:72. https://doi.org/10.1186/s12910-019-0410-x

Shamoo AE (2016) Audit of research data. Account Res. 23(1):1–3. https://doi.org/10.1080/08989621.2015.1096727

Thomas SP (2018) Current controversies regarding peer review in scholarly journals. Issues Ment Health Nurs 39(2):99–101. https://doi.org/10.1080/01612840.2018.1431443.

Tijdink JK, Bouter LM, Veldkamp CL, van de Ven PM, Wicherts JM, Smulders YM (2016) Personality traits are associated with research misbehavior in Dutch scientists: A cross-sectional study. Plos One. https://doi.org/10.1371/journal.pone.0163251

Titus SL, Wells JA, Rhoades LJ (2008) Repairing research integrity. Nature 453:980–982

Vera-Badillo FE, Napoleone M, Krzyzanowska MK, Alibhai SMH, Chan A-W, Ocana A, Templeton AJ, Seruga B, Amir E, Tannock IF (2016) Honorary and ghost authorship in reports of randomised clinical trials in oncology. Eur J Cancer 66. https://doi.org/10.1016/j.ejca.2016.06.023

Verhagen H, Aruoma OI, van Delft JH, Dragsted LO, Ferguson LR, Knasmüller S, Pool-Zobel BL, Poulsen HE, Williamson G, Yannai S (2003) The 10 basic requirements for a scientific paper reporting antioxidant, antimutagenic or anticarcinogenic potential of test substances in in vitro experiments and animal studies in vivo. Food Chem Toxicol. 41(5):603–610

www1 n.d.:  https://retractionwatch.com/the-center-for-scientific-integrity/ . Accessed 13 Nov 2019

www2 n.d.: http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html . Accessed 07 July 2019

www3 n.d.: https://bmjopen.bmj.com/content/bmjopen/8/12/e024499/DC1/embed/inline-supplementary-material-1.pdf?download=true . Accessed 26 July 2019

www4 n.d.: http://www.crn.nihr.ac.uk/learning-development/ – National Institute for Health Research Clinical Research Network (NIHR CRN). Accessed 13 Nov 2019

www5 n.d.: https://www.bmj.com/about-bmj/resources-authors/forms-policies-and-checklists/scientific-misconduct . Accessed 07 July 2019

www6 n.d.: http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html . Accessed 07 July 2019

www7 n.d.: http://www.singaporestatement.org . Accessed 10 Aug 2019

Zimmerman SV (2017), "The Canadian Experience: A Response to ‘Developing Standards for Research Practice: Some Issues for Consideration’ by James Parry", Finding Common Ground: Consensus in Research Ethics Across the Social Sciences (Advances in Research Ethics and Integrity, Vol. 1) Emerald Publishing Limited, pp. 103-109. https://doi.org/10.1108/S2398-601820170000001009


Acknowledgements

The authors wish to thank the organising committee of the 5th International Conference "Plagiarism across Europe and Beyond" in Vilnius, Lithuania, for accepting this paper for presentation at the conference. We also sincerely thank Dr Carol Stalker, School of Psychology, University of Derby, for her critical advice on the statistical analysis.

Funding

Not applicable – the study was carried out as a collaborative effort amongst the authors.

Author information

Authors and affiliations.

School of Human Sciences, University of Derby, Derby, DE22 1GB, UK

S. D. Sivasubramaniam

Center of Research in Medical Pharmacology, University of Insubria, Via Ravasi, 2, 21100, Varese, VA, Italy

M. Cosentino & F. Marino

Faculty of Medicine, University of Porto, Porto, Portugal

L. Ribeiro

Contributions

Dr Sivasubramaniam produced the questionnaire and interview format with contributions from all other authors. He also prepared the manuscript with the help of Prof Cosentino, who additionally contributed to the initial literature survey and the discussion. Drs Marino and Ribeiro helped with data collection and analysis. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to S. D. Sivasubramaniam .

Ethics declarations

Competing interests.

The authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest (including personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Sivasubramaniam, S.D., Cosentino, M., Ribeiro, L. et al. Unethical practices within medical research and publication – An exploratory study. Int J Educ Integr 17 , 7 (2021). https://doi.org/10.1007/s40979-021-00072-y


Received : 17 July 2020

Accepted : 24 January 2021

Published : 01 April 2021

DOI : https://doi.org/10.1007/s40979-021-00072-y


  • Medical research
  • Research misconduct
  • Committed bias
  • Unethical practices

International Journal for Educational Integrity

ISSN: 1833-2595


controversial research studies


Research Topic Ideas

  • Picking a Topic
  • Area & Interdisciplinary Studies
  • Behavioral & Social Sciences
  • Business, Economics, & Management

Not Sure Which Topic to Choose?

Controversial Issues and Current Events, Flint Water Crisis

  • Education & Social Work
  • Health Sciences
  • Natural and Physical Sciences

Look at the "Picking a Topic" tab on this guide for help brainstorming your topic. Also, our Research Process guide can help you throughout your research process.

  • Research Process by Liz Svoboda (last updated Apr 26, 2024)
  • Affirmative Action
  • Affordable Care Act
  • Alternative medicine
  • America's global influence
  • Artificial intelligence
  • Assisted suicide
  • Bilingual education
  • Black Lives Matter
  • Border security
  • Capital punishment
  • Charter schools
  • Childhood obesity
  • Civil rights
  • Climate change
  • Concussions in football
  • COVID restrictions
  • Cryptocurrency
  • Cyber bullying
  • Cybersecurity
  • Drug legalization
  • Early voting
  • Eating disorders
  • Equal Rights Amendment
  • Executive order
  • Factory farming
  • Foreign aid
  • Freedom of speech
  • General Data Protection Regulation
  • Genetic engineering
  • Gerrymandering
  • Green New Deal
  • Hate speech
  • Health insurance
  • Human trafficking
  • Immigration
  • Israel-Palestine relations
  • Judicial activism
  • Labor unions
  • Land acknowledgments
  • #MeToo movement
  • Minimum wage
  • Misinformation
  • Net neutrality
  • Nuclear energy
  • Offshore drilling
  • Online anonymity
  • Organic food
  • Outsourcing
  • Police reform
  • Political activism
  • Prescription drug addiction
  • Racial profiling
  • Reparations
  • Russian hacking
  • Sanctuary city
  • Screen addiction
  • Self-driving cars
  • Sex education
  • Smart speakers
  • Social Security reform
  • Standardized testing
  • Stimulus packages
  • Supreme Court confirmation
  • Syrian civil war
  • Title IX enforcement
  • Trade tariffs
  • Transgender rights
  • Ukraine and Russia
  • Urban agriculture
  • Vaccination mandates
  • Violence in the media
  • Voter ID laws
  • Voting fraud and security
  • White nationalism
  • Women's rights
  • Zero tolerance policies

Related suggested databases


Covers contemporary social issues with pro & con and background information. Also allows searching of the collection Global Issues.

Covers contemporary social issues, from Offshore Drilling to Climate Change, Health Care to Immigration. Helps students research, analyze and organize a broad variety of data for conducting research, completing writing assignments, preparing for debates, creating presentations, and more. This resource helps students explore issues from all perspectives, and includes: pro/con viewpoint essays, topic overviews, primary source documents, biographies of social activists and reformers, court-case overviews, periodical articles, statistical tables, charts and graphs, images and a link to Google Image Search, podcasts (including weekly presidential addresses and premier NPR programs), and a national and state curriculum standards search correlated to the content that allows educators to quickly identify material by grade and discipline.

In-depth, unbiased coverage of health, social trends, criminal justice, international affairs, education, the environment, technology, and the economy.

1923-present. Each single-themed, 12,000-word report is researched and written by a seasoned journalist, and contains an introductory overview; background and chronology on the topic; an assessment of the current situation; tables and maps; pro & con statements from representatives of opposing positions; and bibliographies of key sources.

Balanced, accurate discussions of over 250 controversial topics in the news along with chronologies, illustrations, maps, tables, sidebars, contact info, and bibliographies, including primary source documents and news editorials.

Covers 1995-present. A Read Aloud button is available for text-to-speech for much of the content.

Series of short books that offer a balanced and authoritative treatment of current events and countries of the world.

What Everyone Needs to Know has short overviews designed to offer a balanced and authoritative treatment on complex current events and countries of the world. Includes books in these areas:

  • Arts & Humanities  
  • Medicine & Health  
  • Science & Mathematics  
  • Social Sciences  
  • Art as commentary
  • Early childhood development
  • Citizen scientists
  • Emergency manager law
  • Environmental health
  • Government regulations
  • Health care access
  • Infrastructure
  • Investigative journalism
  • Lead and Copper Rule
  • Lead toxicity
  • Volunteerism
  • Water filtration
  • Water Resource Development Act (S.2848)
  • Water rights
  • Water supply policy
  • Water supply regulation

Related subject guide

  • The Flint Water Crisis: A Guide to Information Resources by Paul Streby (last updated Mar 1, 2024)


May 7, 2024


Contradictory thoughts lead to more moderate attitudes, psychologists find

by Informationsdienst Wissenschaft


Researchers from the Leibniz Institute for Psychology (ZPID) and the University of Hohenheim present rhetorical tools that can help to reduce the polarization of discussions.

When discussing controversial issues such as migration or measures against global warming , strongly opposing positions often clash and discussions become emotional. The polarization that can be found here is primarily driven by people with extreme attitudes.

Current research shows that triggering contradictory thoughts in these people can encourage them to adopt more moderate positions. This is the conclusion reached by psychologists Prof. Dr. Kai Sassenberg, director of the Leibniz Institute for Psychology (ZPID) in Trier, and Dr. Kevin Winter from the University of Hohenheim, both located in Germany.

The findings are published in the journal Current Directions in Psychological Science .

Demonstrators against right-wing extremism chant "All of [...] hates the AfD." Conservative politicians are banning gender-sensitive language in public institutions and farmers are hanging traffic lights on a gallows as symbols of the German government. These are just a few examples of extreme statements and actions that are currently being displayed unabashedly in social discourse.

The extreme attitudes of individuals that underlie such polarization are difficult to change. Psychologists Sassenberg and Winter have argued in a recent article that deliberately triggering conflicting thoughts can lead to more moderate attitudes.

"Whether it's remembering personal goals that are difficult to reconcile or dealing with contradictory facts. Such cognitive conflicts lead to less extreme positions being adopted," explains Sassenberg.

"What is particularly exciting is that this effect is even evident when the thoughts triggered have nothing to do with the extreme opinion in terms of content."

The two researchers refer to this as a mindset—a way of thinking that manifests itself across different situations. But how can such a mindset be triggered in a specific case?

Rhetorical devices to reduce polarized attitudes

Psychological research has shown that contradictory thoughts can be triggered by a range of rhetorical means, and in this way contribute to reducing polarized attitudes. For example, cognitive conflicts can be triggered by rhetorical questions that play out a course of events that deviates from reality ("What if...?").

Communication that activates goals that are difficult to reconcile (e.g., making a career and having plenty of time for the family) also triggers such mental conflicts.

In numerous studies, the contradictions generated and mentally played out in this way led people with extreme attitudes to adopt more moderate positions. For example, people with politically right-wing attitudes trusted migrants more after cognitive conflicts had been triggered in them.

Another method, which has already been successfully tested in the Israeli-Palestinian conflict and with Germans with right-wing political views, is to ask questions that seem to agree with the extreme positions but exaggerate them in an extreme and sometimes absurd way. For example, the question "Why do you think that Christmas will soon no longer be celebrated in Germany because of the large number of Muslim refugees?" was asked.

Winter explains, "Statements like this trigger opposition even from people who reject the immigration of people of Muslim faith to Germany."

In order to distance themselves, people who originally held extreme attitudes adopt a more moderate position.

Regardless of the way in which contradictory thoughts arise, they are likely to lead people with extreme attitudes to adopt more moderate positions. Such thoughts can be triggered by relatively simple rhetorical means. The psychologists are certain that this could help to reduce social polarization at an individual level.

Journal information: Current Directions in Psychological Science

Provided by Informationsdienst Wissenschaft



Shots - Health News


Helping women get better sleep by calming the relentless 'to-do lists' in their heads

Yuki Noguchi



Katie Krimitsos is among the majority of American women who have trouble getting healthy sleep, according to a new Gallup survey. Krimitsos launched a podcast called Sleep Meditation for Women to offer some help. (Natalie Champa Jennings, courtesy of Katie Krimitsos)

When Katie Krimitsos lies awake watching sleepless hours tick by, it's almost always because her mind is wrestling with a mental checklist of things she has to do. In high school, that was made up of homework, tests or a big upcoming sports game.

"I would be wide awake, just my brain completely spinning in chaos until two in the morning," says Krimitsos.

There were periods in adulthood, too, when sleep wouldn't come easily, like when she started a podcasting company in Tampa, or nursed her first daughter eight years ago. "I was already very used to the grainy eyes," she says.

Now 43, Krimitsos says in recent years she found that mounting worries brought those sleepless spells more often. Her mind would spin through "a million, gazillion" details of running a company and a family: paying the electric bill, making dinner and dentist appointments, monitoring the pets' food supply or her parents' health checkups. This checklist never, ever shrank, despite her best efforts, and perpetually chased away her sleep.

"So we feel like there are these enormous boulders that we are carrying on our shoulders that we walk into the bedroom with," she says. "And that's what we're laying down with."

By "we," Krimitsos means herself and the many other women she talks to or works with who complain of fatigue.

Women are one of the most sleep-troubled demographics, according to a recent Gallup survey that found sleep patterns of Americans deteriorating rapidly over the past decade.

"When you look in particular at adult women under the age of 50, that's the group where we're seeing the most steep movement in terms of their rate of sleeping less or feeling less satisfied with their sleep and also their rate of stress," says Gallup senior researcher Sarah Fioroni.

Overall, Americans' sleep is at an all-time low, in terms of both quantity and quality.

A majority – 57% – now say they could use more sleep, which is a big jump from a decade ago. It's an acceleration of an ongoing trend, according to the survey. In 1942, 59% of Americans said that they slept 8 hours or more; today, that applies to only 26% of Americans. One in five people, also an all-time high, now sleep fewer than 5 hours a day.


"If you have poor sleep, then it's all things bad," says Gina Marie Mathew, a post-doctoral sleep researcher at Stony Brook Medicine in New York. The Gallup survey did not cite reasons for the rapid decline, but Mathew says her research shows that smartphones keep us — and especially teenagers — up later.

She says sleep, as well as diet and exercise, is considered one of the three pillars of health. Yet American culture devalues rest.

"In terms of structural and policy change, we need to recognize that a lot of these systems that are in place are not conducive to women in particular getting enough sleep or getting the sleep that they need," she says, arguing things like paid family leave and flexible work hours might help women sleep more, and better.

No one person can change a culture that discourages sleep. But when faced with her own sleeplessness, Tampa mom Katie Krimitsos started a podcast called Sleep Meditation for Women, a soothing series of episodes in which she acknowledges and tries to calm the stresses typical of many women.


That podcast alone averages about a million unique listeners a month, and is one of 20 podcasts produced by Krimitsos's firm, Women's Meditation Network.

"Seven of those 20 podcasts are dedicated to sleep in some way, and they make up for 50% of my listenership," Krimitsos notes. "So yeah, it's the biggest pain point."

Krimitsos says she thinks women bear the burdens of a pace of life that keeps accelerating. "Our interpretation of how fast life should be and what we should 'accomplish' or have or do has exponentially increased," she says.

She only started sleeping better, she says, when she deliberately cut back on activities and commitments, both for herself and her two kids. "I feel more satisfied at the end of the day. I feel more fulfilled and I feel more willing to allow things that are not complete to let go."


Groundbreaking Research Establishes Biological Basis of Chronic Fatigue Syndrome: Findings from ScienceAlert

The leading research funder in the world, the US National Institutes of Health (NIH), initiated a pivotal study in 2016 into chronic fatigue syndrome (CFS), also known as myalgic encephalomyelitis, or ME/CFS. This was well in advance of the recognition of long COVID as a health issue.

After eight years, the findings of this comprehensive study are finally available. Focused on a small cohort of 17 individuals who contracted ME/CFS following an infection, the research delineated obvious biological discrepancies when compared to a control group of 21 healthy individuals.

NIH’s National Institute of Neurological Disorders and Stroke (NINDS) clinical director and lead researcher on the study, neurologist Avindra Nath, explained in an interview with JAMA that the results conclusively prove that ME/CFS is a biological condition, influencing multiple organ systems.

For a long time, ME/CFS was mistakenly treated by several healthcare providers as a psychological ailment. However, current research now confirms the condition is accompanied by a multitude of biological alterations.

According to Nath, “It’s a systemic disease,” and patients with ME/CFS need their experiences to be earnestly acknowledged.

During a thorough week-long assessment, subjects were subjected to various tests including brain scans, sleep studies, physical and cognitive performance evaluations, as well as blood, skin, and muscle biopsies. They were monitored under regulated dietary conditions and within metabolic chambers to measure calorie and nutrient intake in a controlled environment.

Consistent with prior research, ME/CFS patients exhibited elevated resting heart rates and signs of protracted, hyperactive immune responses resulting in T cell exhaustion. Additionally, they presented with less diverse gut microbiomes relative to the control subjects.

Despite reporting heightened cognitive challenges, the ME/CFS group displayed normal cognitive test results and no evidence of muscle fatigue.

However, the immune and microbiome variances exhibited clear effects on the central nervous system. ME/CFS patients had lowered catechol concentrations in the cerebrospinal fluid and reduced activity in the temporal-parietal junction (TPJ) of the brain during motor tasks.

The TPJ’s role in controlling motor cortex activity suggests that its malfunction may interfere with the brain’s effort exertion and fatigue perception. As summed up by Brian Walitt, the study’s lead author and a medical scientist in ME/CFS research at NINDS, this could identify a physiological nexus for fatigue in these patients.

While the research is an important step, advocacy groups for ME/CFS have criticized some aspects such as fatigue assessments not fully encompassing the condition, particularly overlooking post-exertional malaise.

Additionally, concerns have been raised about the winnowing of the initial 217 candidates down to merely 17 confirmed post-infection ME/CFS cases by a panel of clinicians for the study.

The goal behind selecting a smaller patient group was to facilitate the most meticulous examination feasible, thus maximizing the likelihood of uncovering significant discrepancies for further inquiry with a broader demographic. This challenge mirrors that found in understanding and treating long COVID and Alzheimer’s disease.

While the pandemic kept the team from reaching its target of 40 ME/CFS patients, and certain candidates were excluded based on wellness criteria and concerns about overtaxing patients, this study has successfully set a foundation for future exploration in this area.

These investigations have been thoroughly documented in the journal Nature Communications.

FAQ Section

What is chronic fatigue syndrome (CFS)?

Chronic Fatigue Syndrome (CFS), also known as myalgic encephalomyelitis (ME), is a complex, chronic illness characterized by extreme fatigue, pain, and other symptoms that are not relieved by rest and are often exacerbated by physical or mental activity.

What did the 2016 NIH study reveal about CFS?

The 2016 study conducted by the NIH showed that CFS is unambiguously a biological condition, impacting various organ systems and not merely a psychosomatic disorder as was previously believed by some.

What kind of tests were performed in the CFS study?

Subjects underwent brain scans, sleep studies, muscle strength and cognitive performance tests, skin and muscle biopsies, blood tests, and analyses of gut microbiome and spinal fluid, among other assessments.

Why was there controversy over the study’s patient selection?

Concerns were raised about the stringent criteria that reduced an initial pool of 217 potential subjects to just 17 participants, which some argue may not reflect the broader CFS patient population.

Will there be more research on CFS?

Yes, further research is expected to build on the foundation set by this study to better understand Chronic Fatigue Syndrome and to develop effective treatments.

This landmark study represents a significant stride in chronic fatigue syndrome research, confirming its biological roots and challenging outdated misconceptions about the condition. By identifying biological differences in patients with CFS, the study paves the way for more in-depth research and potentially more targeted treatments. However, further study is necessary to explore the full scope of CFS and to translate these findings into clinical practice. Patients and advocates can now hold onto the hope that CFS is gaining the scientific attention it requires for better diagnosis, understanding, and care.


Research Finds Scandals Have Less Impact on Politicians Than They Used To

University of Houston Professor Brandon Rottinghaus Culls Political Scandal Data Since Watergate, Publishes Study

By Rebeca Hawley — 713-743-6773


Modern American politics has been plagued by scandals from Watergate to Bill Clinton and Monica Lewinsky, to Donald Trump’s Access Hollywood tapes and impeachments. More recently, President Joe Biden’s son Hunter faces tax and gun possession charges, casting a shadow over his father’s re-election bid.

To assess the impact of scandals on a politician’s ability to survive in office, University of Houston Professor of Political Science Brandon Rottinghaus at the College of Liberal Arts and Social Sciences examined presidential, gubernatorial and Congressional scandals from 1972 to 2021. His article “Do Scandals Matter?” was published in the journal Political Research Quarterly.

“Scandals don’t hit like they used to,” said Rottinghaus. “Politicians involved are able to survive them because you have media much more divided on political terms. You have people who are more partisan and only look at partisan outcomes, and in an odd way, scandals help increase fundraising for some members who are involved in those scandals.”

In his study, Rottinghaus’ definition of scandal involves allegations of illegal, unethical or immoral wrongdoing.

He found negative consequences from scandals vary across time and institutions. Scandals in the Watergate era led to more resignations in Congress, but in the ’90s there were fewer resignations of White House officials. During the Trump administration, White House officials did not survive in office at rates greater than in past eras. However, politicians generally survived scandal more in this current polarized era, which hints at the changing role of political scandals.

Partisanship, he writes, reduces the negative impact of scandal on some incumbent politicians, as they can largely rely on their base, which is not as critical of the politicians getting caught in scandals.

“This is because they want to see their side win and the other side lose,” he said.

With media, Rottinghaus said because it is more polarized than in past political eras, people can consume the media that fits their political preferences. “That means people are getting only one side of the story. If a politician gets caught in a scandal, that politician can claim the other side is out to get them politically and your base will still like you, despite the scandal.”

And in some ways, small scandals can be beneficial for fundraising. For example, Rottinghaus said, U.S. Representatives Marjorie Taylor Greene and Lauren Boebert can make outlandish statements, send out fundraising appeals and draw many small-dollar donors to contribute to their campaigns.

Rottinghaus’s methodology included using three new data sets of scandals involving presidents, members of Congress and governors at the state level over 50 years. He charted the duration of each political, personal and financial scandal faced by an elected official. Then, he investigated what factors hasten the “end” of a scandal, which is defined as when the scandal ends negatively for the elected official. The results clarified how officials survive scandals (or not) and whether the political climate exacerbates the scandal.

“Trump Effect?”

Before this study, Rottinghaus’ data was limited to the middle of former President Barack Obama’s term. He now has updated data through Donald Trump’s presidency and tested whether Trump changed the way scandals affected the American public – something he calls the “Trump Effect.”

“The answer is a tentative yes to that,” Rottinghaus said. “Trump didn’t change the game, but he altered in some ways how scandals affect politicians generally. Although he himself was able to survive these allegations, a lot of his cabinet members did not, yet they did hold on a little longer than they would have in the pre-polarized era.”

In the study, that era begins in the mid-1990s during the Clinton-Lewinsky scandal. “That's the point where you see scandals matter a lot more.”

Overall, Rottinghaus said his study finds scandals do not have as much of an impact as they once did, but their impact also depends on whether the politician is a president, governor, or member of Congress.



U.S. Tightens Rules on Risky Virus Research

A long-awaited new policy broadens the type of regulated viruses, bacteria, fungi and toxins, including those that could threaten crops and livestock.


A view through a narrow window of a door into a biosafety area of a lab with a scientist in protective gear working with a sample.

By Carl Zimmer and Benjamin Mueller

The White House has unveiled tighter rules for research on potentially dangerous microbes and toxins, in an effort to stave off laboratory accidents that could unleash a pandemic.

The new policy, published Monday evening, arrives after years of deliberations by an expert panel and a charged public debate over whether Covid arose from an animal market or a laboratory in China.

A number of researchers worried that the government had been too lax about lab safety in the past, with some even calling for the creation of an independent agency to make decisions about risky experiments that could allow viruses, bacteria or fungi to spread quickly between people or become more deadly. But others warned against creating restrictive rules that would stifle valuable research without making people safer.

The debate grew sharper during the pandemic, as politicians raised questions about the origin of Covid. Those who suggested it came from a lab raised concerns about studies that tweaked pathogens to make them more dangerous — sometimes known as “gain of function” research.

The new policy, which applies to research funded by the federal government, strengthens the government’s oversight by replacing a short list of dangerous pathogens with broad categories into which more pathogens might fall. The policy pays attention not only to human pathogens, but also those that could threaten crops and livestock. And it provides more details about the kinds of experiments that would draw the attention of government regulators.

The rules will take effect in a year, giving government agencies and departments time to update their guidance to meet the new requirements.

“It’s a big and important step forward,” said Dr. Tom Inglesby, the director of the Johns Hopkins Center for Health Security and a longtime proponent of stricter safety regulations. “I think this policy is what any reasonable member of the public would expect is in place in terms of oversight of the world’s most transmissible and lethal organisms.”

Still, the policy does not embrace the most aggressive proposals made by lab safety proponents, such as creating an independent regulatory agency. It also makes exemptions for certain types of research, including disease surveillance and vaccine development. And some parts of the policy are recommendations rather than government-enforced requirements.

“It’s a moderate shift in policy, with a number of more significant signals about how the White House expects the issue to be treated moving forward,” said Nicholas Evans, an ethicist at University of Massachusetts Lowell.

Experts have been waiting for the policy for more than a year. Still, some said they were surprised that it came out at such a politically fraught moment. “I wasn’t expecting anything, especially in an election year,” Dr. Evans said. “I’m pleasantly surprised.”

Under the new policy, scientists who want to carry out experiments will need to run their proposals past their universities or research institutions, which will determine whether the work poses a risk. Potentially dangerous proposals will then be reviewed by government agencies. The most scrutiny will go to experiments that could result in the most dangerous outcomes, such as those tweaking pathogens that could start a pandemic.
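The tiered escalation described above can be sketched as simple decision logic. This is an illustrative sketch only: the stage labels and function name are hypothetical, not taken from the policy text.

```python
def review_path(poses_risk: bool, pandemic_potential: bool) -> list[str]:
    """Sketch of the tiered oversight flow: institutions screen every
    proposal, potentially dangerous work escalates to government agencies,
    and experiments that could yield pandemic-capable pathogens draw the
    most scrutiny. Stage labels are illustrative assumptions."""
    path = ["institutional review"]           # all proposals start here
    if poses_risk:
        path.append("agency review")          # potentially dangerous proposals
        if pandemic_potential:
            path.append("highest scrutiny")   # e.g., enhancing transmissibility
    return path
```

For example, a routine proposal stops after institutional screening, while work that might produce a pandemic-capable pathogen would trace all three stages.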

In a guidance document, the White House provided examples of research that would be expected to come under such scrutiny. In one case, they envisioned scientists trying to understand the evolutionary steps a pathogen needed to transmit more easily between humans. The researchers might try to produce a transmissible strain to study, for example, by repeatedly infecting human cells in petri dishes, allowing the pathogens to evolve more efficient ways to enter the cells.

Scientists who do not follow the new policy could become ineligible for federal funding for their work. Their entire institution may have its support for life science research cut off as well.

One of the weaknesses of existing policies is that they only apply to funding given out by the federal government. But for years, the National Institutes of Health and other government agencies have struggled with stagnant funding, leading some researchers to turn instead to private sources. In recent years, for example, crypto titans have poured money into pandemic prevention research.

The new policy does not give the government direct regulation of privately funded research. But it does say that research institutions that receive any federal money for life-science research should apply a similar oversight to scientists doing research with support from outside the government.

“This effectively limits them, as the N.I.H. does a lot of work everywhere in the world,” Dr. Evans said.

The new policy takes into account the advances in biotechnology that could lead to new risks. When pathogens become extinct, for example, they can be resurrected by recreating their genomes. Research on extinct pathogens will draw the highest levels of scrutiny.

Dr. Evans also noted that the new rules emphasize the risk that lab research can have on plants and animals. In the 20th century, the United States and Russia both carried out extensive research on crop-destroying pathogens such as wheat-killing fungi as part of their biological weapons programs. “It’s significant as a signal the White House is sending,” Dr. Evans said.

Marc Lipsitch, an epidemiologist at Harvard and a longtime critic of the government’s policy, gave the new one a grade of A minus. “I think it’s a lot clearer and more specific in many ways than the old guidance,” he said. But he was disappointed that the government will not provide detailed information to the public about the risky research it evaluates. “The transparency is far from transparent,” he said.

Scientists who have warned of the dangers of impeding useful virus research were also largely optimistic about the new rules.

Gigi Gronvall, a biosafety specialist at the Johns Hopkins Bloomberg School of Public Health, said the policy’s success would depend on how federal health officials interpreted it, but applauded the way it recognized the value of research needed during a crisis, such as the current bird flu outbreak.

“I was cautiously optimistic in reading through it,” she said of the policy. “It seems like the orientation is for it to be thoughtfully implemented so it doesn’t have a chilling effect on needed research.”

Anice Lowen, an influenza virologist at Emory University, said the expanded scope of the new policy was “reasonable.” She said, for instance, that the decision not to create an entirely new review body helped to alleviate concerns about how unwieldy the process might become.

Still, she said, ambiguities in the instructions for assessing risks in certain experiments made it difficult to know how different university and health officials would police them.

“I think there will be more reviews carried out, and more research will be slowed down because of it,” she said.

Carl Zimmer covers news about science for The Times and writes the Origins column.

Benjamin Mueller reports on health and medicine. He was previously a U.K. correspondent in London and a police reporter in New York.

Why Lyme disease symptoms go away quickly for some and last years for others

Why some people recover from Lyme disease, while others experience months, years or even decades of chronic symptoms, has long puzzled doctors. New research points to an immune system marker in the blood that is elevated among people with lingering Lyme disease symptoms, even after they’d received antibiotics.

In the new study, published on May 9 in the Centers for Disease Control and Prevention’s journal Emerging Infectious Diseases, researchers found an immune system marker in the blood called interferon-alpha was elevated among people who had been treated for Lyme disease but had lingering symptoms.

Interferon-alpha is one of a handful of key signaling proteins the body makes to tell immune cells to fight off bacteria or viruses. If blood levels are too high, the immune system can overreact, causing pain, swelling and fatigue — symptoms often seen with Lyme disease.

In patients with high levels of interferon-alpha, the immune response to the Lyme bacteria may cause chronic inflammation, even once the infection is gone, said Klemen Strle, an assistant research professor of molecular biology and microbiology at Tufts University and an author of the new study. 

“We think this is a possible driver of persistent symptoms,” Strle said. And since a number of drugs are already approved to lower interferon-alpha, he suggested the research could mean a possible treatment option for lingering Lyme symptoms. 

The study was small, including 79 people diagnosed with Lyme disease, and found only a link between the higher interferon-alpha levels and the persistent Lyme disease symptoms, not that the immune marker was itself causing the lasting symptoms. A larger clinical trial would be needed to affirm the connection.

Male and female adult blacklegged ticks, Ixodes scapularis, on a sesame seed bun to demonstrate relative size.

Anywhere from 30,000 to 500,000 people develop Lyme disease from a tick bite each year, according to the CDC. For most, the infection is mild and easily treated with antibiotics. About 10% experience symptoms like fatigue and brain fog along with muscle, joint and nerve pain that persists even after treatment.

The new findings represent a significant shift in understanding why some people infected with Lyme suffer chronic symptoms. Previously, some researchers believed that a specific strain of the spiral-shaped Borrelia burgdorferi bacteria that causes Lyme might be a cause. Others wondered whether undetectable low levels of infection lingered in the body after treatment. The new research suggests that the way the body reacts to the bacteria — not the bug itself — could result in long-lasting symptoms. 

It’s still unclear why some people have elevated interferon-alpha, but Strle said he’s looking into a possible genetic cause. 

Although the interferon-alpha research is still in an early phase, Dr. Roberta DeBiasi, chief of the division of pediatric diseases at Children’s National Hospital in Washington, D.C., called it "very well-designed and interesting.”

“It provides a possible therapeutic target that could be studied in clinical trials to treat these patients," she said.

For people coping with ongoing Lyme symptoms, any biological explanation for the condition, called post-treatment Lyme disease syndrome, or PTLDS, is a step forward.

Years of Lyme symptoms

Rebecca Greenberg isn’t entirely sure when she first contracted Lyme disease, but she has her suspicions. Greenberg, now 26, clearly remembers her mom tweezing a small, firm tick from the back of her neck after she’d been playing at a playground near Albany, New York, when she was 9 years old. She may have had multiple tick bites from her time spent in the Adirondack Mountains but didn’t worry about them until she started feeling sick at age 15. 

“I was so tired I would tell my mom I couldn’t wake up for school anymore,” said Greenberg, who grew up in upstate New York but now lives in South Florida. “My muscles would hurt, my joints would hurt and I’d get these migraines.” 

Rebecca Greenberg.

Doctors told Greenberg her symptoms were likely hormonal, and when Greenberg landed in the emergency room unable to move the left side of her body, they prescribed anti-anxiety medication and suggested she see a neurologist. Those doctors weren’t much help either, Greenberg said. Her symptoms eventually became so severe she stopped attending school and required a wheelchair. 

It wasn’t until 2011, after Greenberg’s mom posted to Facebook about her daughter’s mystery illness, that a pediatrician friend suggested Lyme disease. Antibody tests soon showed Greenberg had been infected with Lyme and two other bacterial infections, babesiosis and bartonella.

Even now, she’s still dealing with fatigue and nerve pain. The most debilitating symptoms of her Lyme disease have been the psychiatric effects, including severe anxiety, depression and hallucinations, she said.

“I’m essentially still putting Band-Aids on all of my symptoms,” she said. “One tick turned my life upside down."

Why diagnosing Lyme is so difficult

As the geographic spread of Lyme ticks intensifies, there’s an urgent need for more accurate tests that can pick up infection at its earliest stages, researchers and health officials acknowledge.

“There is no doubt Lyme disease and other tick-borne infectious agents are increasing in prevalence,” Strle said.

The Environmental Protection Agency warns that disease-carrying ticks are most active in warmer temperatures, and climate change will likely mean ticks will increasingly survive the winter and spread to regions beyond the Northeast, Northern California and northern areas of the Midwest.


Testing for Lyme disease is complicated, especially for doctors who aren’t familiar with the process, DeBiasi said.

“That leads to many people who have Lyme being missed or people who have symptoms being told they have Lyme when actually they don’t,” she said. “Combine that with bad information on the internet, and you end up with a lot of confusion.”

Part of the problem is that once the bacterium is transmitted from a tick into a human, it rapidly disperses through the body, leaving levels in the blood that may be too low for a test to pick up.

“This quickly becomes a detection issue,” Brandon Jutras, associate professor in the biochemistry department at Virginia Tech, explained.

Serology tests, which look for antibodies in the blood, are the best available method for diagnosing Lyme, experts say. However, antibody tests only indicate that the immune system has mounted an attack against a virus or bacterium; they can’t determine whether there is an active infection. They also don’t work until the immune system generates a sufficient number of antibodies, which can take six weeks or more after an initial tick bite.

The CDC recommends using a combination of antibody tests to diagnose Lyme, including an immunoassay antibody test like ELISA followed by an immunoblot antibody test like the Western blot test. 

Doctors and health officials recognize the need for more reliable Lyme disease tests that can pick up infection early on.

“We need to do better,” said Jutras, who’s working with a team at Virginia Tech to develop a rapid Lyme test that could identify the actual infection from the earliest sign of a tick bite.

“What we really want is a test that says, ‘Does a person still have Borrelia spirochetes [Lyme bacteria], and do we need to treat it with antibiotics?’” said Dr. Brian Fallon, director of the Lyme and Tick-Borne Diseases Research Center at Columbia University.

Controversy over ‘chronic Lyme’

Because testing is inadequate, there’s no way to link long-lasting symptoms to the initial Lyme infection. Many doctors avoid a diagnosis of chronic Lyme since the term implies a lingering infection.

“There’s no way to say a Lyme diagnosis from, say, six years ago has anything to do with the symptoms occurring now,” DeBiasi said. “Symptoms like musculoskeletal pain, fatigue, difficulty thinking and depression are nonspecific. There are many, many possible reasons for those symptoms other than Lyme.”

Columbia University's Fallon prefers the term “long Lyme.”

“‘Chronic’ is a reasonable term if it refers to having symptoms chronically,” Fallon said. “The problem is when someone thinks they have an ongoing chronic infection and need more antibiotic treatments.”

DeBiasi insists the symptoms people experience are real, even if prolonged PTLDS cases are the exception.

As a pediatric doctor, she’s seen panicked parents seek unproven therapies after finding a tick on their child. In a recent study published in the journal Pediatric Research , DeBiasi and colleagues found 75% of children with Lyme disease were better within six months of antibiotics, while 9% had symptoms affecting their functioning after six months. 

“If you give them a little more time, they seem to fully recover,” DeBiasi said.

With the exception of Pfizer and Valneva, which are testing a Lyme disease vaccine in clinical trials, the drug development industry has not, for the most part, focused its energies or dollars on Lyme. Federal research grants are lacking too, Jutras said.

“A lot of private foundations have stepped up to the plate as it relates to funding research for Lyme, but at the federal level, it may be time to revisit some of the priorities as it relates to where we’re spending research dollars,” he said.

NBC News contributor Caroline Hopkins is a health and science journalist who covers cancer treatment for Precision Oncology News. She is a graduate of the Columbia University Graduate School of Journalism.  

IMAGES

  1. 125 Controversial Research Topics and Ideas to Deal With

    controversial research studies

  2. 100+ Controversial Research Topics and Ideas to Focus On

    controversial research studies

  3. 125 Controversial Research Topics and Ideas to Deal With

    controversial research studies

  4. 100+ Fascinating Controversial Research Paper Topics For All

    controversial research studies

  5. 255 Controversial Research Paper Topics & Ideas

    controversial research studies

  6. 300+ Controversial Research Topics

    controversial research studies

VIDEO

  1. 25 Films Too Controversial for Today's World: A Deep Dive

  2. Albert Einstein's Brain: The Controversial Research after His Death #alberteinstein #einsteinbrain

  3. John Money

  4. Dr. Ben: The Voice of Afrocentric Truth

  5. Common problems in experiments

  6. Unveiling the Truth: UFOs and Controversial Research

COMMENTS

  1. 5 Unethical Medical Experiments Brought Out of the Shadows of History

    He conducted his research on behalf of companies including DuPont and Johnson & Johnson. Kligman's work often left prisoners with pain and scars as he used them as study subjects in wound healing and exposed them to deodorants, foot powders, and more for chemical and cosmetic companies. Dow once enlisted Kligman to study the effects of dioxin ...

  2. Controversial and Unethical Psychology Experiments

    Some of the most controversial and unethical experiments in psychology include Harlow's monkey experiments, Milgram's obedience experiments, Zimbardo's prison experiment, Watson's Little Albert experiment, and Seligman's learned helplessness experiment. These and other controversial experiments led to the formation of rules and guidelines for ...

  3. Unethical human experimentation in the United States

    The experiment was largely controversial with criticisms aimed toward the lack of scientific principles and a control group, and for ethical concerns regarding Zimbardo's lack of intervention in the prisoner abuse. ... The San Antonio Contraceptive Study was a clinical research study published in 1971 about the side effects of oral contraceptives.

  4. Stanford Prison Experiment: why famous psychology studies are now ...

    The Stanford Prison Experiment was massively influential. We just learned it was a fraud. The most famous psychological studies are often wrong, fraudulent, or outdated. Textbooks need to catch up ...

  5. 11+ most controversial psychological experiments in history

    They had learned an essential lesson in racism, even though the process was completely unethical. 11. The Little Albert Experiment. Source: John B Watson/Wikimedia. John B Watson/Wikimedia. John ...

  6. How the Classics Changed Research Ethics

    How the Classics Changed Research Ethics. Some of history's most controversial psychology studies helped drive extensive protections for human research participants. Some say those reforms went too far. Photo above: In 1971, APS Fellow Philip Zimbardo halted his classic prison simulation at Stanford after volunteer "guards" became abusive ...

  7. The 10 Most Controversial Psychology Studies Ever Published

    Naval Research Reviews, 9(1-17). The Milgram "Shock Experiments" Stanley Milgram's studies conducted in the 1960s appeared to show that many people are incredibly obedient to authority. Given the instruction from a scientist, many participants applied what they thought were deadly levels of electricity to an innocent person.

  8. Many scientists citing two scandalous COVID-19 papers ignore ...

    In June 2020, in the biggest research scandal of the pandemic so far, two of the most important medical journals each retracted a high-profile study of COVID-19 patients. Thousands of news articles, tweets, and scholarly commentaries highlighted the scandal, yet many researchers apparently failed to notice. In an examination of the most recent ...

  9. Controversial New Guidelines Would Allow Experiments On More Mature

    An influential scientific society has recommended scrapping a long-standing taboo on studying human embryos in lab dishes beyond 14 days and greenlighted a long list of other sensitive research.

  10. Problematic research practices in psychology: Misconceptions about data

    A key challenge for psychologists is to distinguish the phenomena under study from the means used to explore them (e.g., concepts, methods, data), as reflected in the terms psychical versus psychological 1 in many non-English languages (similarly, we get viral and not virological infections but we do virological research). This distinction is ...

  11. Review of current controversial issues in clinical trials

    Controversial issues, referred to as debatable issues, are commonly encountered during the conduct of clinical trials. These debatable issues could be raised from (1) compromises between theoretical and real-world practices, (2) miscommunication, misunderstanding and/or misinterpretation in medical and/or statistical perception among regulatory ...

  12. Suppressing Science: Are We Overreacting to Controversial Findings?

    Controversial research often sparks defensive reactions, sometimes even leading to calls for censorship, especially if the findings clash with established ideologies. However, a pair of studies published in the journal Psychological Science, by authors Cory J. Clark (University of Pennsylvania), Maj

  13. Controversy: The evolving science of fluoride: when new ...

    Dr. Lanphear, a senior scientist on our team who conducted many of the pivotal lead toxicity studies that helped confirm Dr. Needleman's work, reminded us that it took two decades of research ...

  14. The 7 all-time most controversial psychological experiments

    The 'Little Albert' Experiment. One of the most controversial psychological experiments of all time is the 'Little Albert' experiment, conducted by John Watson and Rosalie Rayner in 1920. This study aimed to observe how fear could be conditioned in a child. To do this, they used a nine-month-old boy who was called Albert.

  15. The 5 most talked-about scientific papers of November 2020 in ...

    Stronger hurricanes and controversial research on autism feature in these widely discussed studies. Bec Crew A swimmer in Florida during high tide as Hurricane Dorian churned offshore in 2019.

  16. 7 ethically controversial research areas in science and technology

    Artificial intelligence is a controversial area of research, especially deep fakes. ... In today's human testing, all patients must consent to the study. However, as long as human trials are ...

  17. Facebook emotion study breached ethical guidelines, researchers say

    Last modified on Wed 22 Feb 2017 13.31 EST. Researchers have roundly condemned Facebook's experiment in which it manipulated nearly 700,000 users' news feeds to see whether it would affect their ...

  18. US funders to tighten oversight of controversial 'gain of function

    NIH reinstates grant for controversial coronavirus research The directive also mandates that agencies outside the HHS that fund biological research, such as the US Department of Defense, must ...

  19. Public may overestimate pushback against controversial research findings

    Controversial research can put people on the defensive and may even lead to calls to censor findings that conflict with a particular ideological perspective. However, a pair of studies published ...

  20. Why Does Controversy Persist? Paradigm Clash, Conflicting Visions, and

    The genre of controversy studies in Science & Technology Studies distinguishes between 'internalist' and 'interactional' controversies. Interactional controversy studies highlight that debates involving multiple stakeholders with competing interests often evade closure. ... Research in the genre of controversy studies has shown that ...

  21. Unethical practices within medical research and publication

    The data produced by the scientific community impacts on academia, clinicians, and the general public; therefore, the scientific community and other regulatory bodies have been focussing on ethical codes of conduct. Despite the measures taken by several research councils, unethical research, publishing and/or reviewing behaviours still take place. This exploratory study considers some of the ...

  22. The Controversial History of Hormone Replacement Therapy

    2.1. Effect of HRT. Over the years, data regarding the impact of HRT on breast safety and breast cancer mortality have been controversial. Most of the meta-analyses and observational studies performed in the 1990s reported no increase in the risk of breast cancer with estrogen use [].However, some increased risks related to dose and duration of use were found with the administration of ...

  23. Current Events and Controversial Issues

    Also, our Research Process guide can help you throughout your research process. Research Process by Liz Svoboda Last ... accurate discussions of over 250 controversial topics in the news along with chronologies, illustrations, maps, tables, sidebars, contact info, and bibliographies, including primary source documents and news editorials ...

  24. Contradictory thoughts lead to more moderate attitudes, psychologists find

    Psychological research has shown that contradictory thoughts can be triggered by a range of rhetorical means. In this way, they contribute to the reduction of polarized attitudes.

  25. Americans are getting less sleep. The biggest burden falls on ...

    A recent survey found that Americans' sleep patterns have been getting worse. Adult women under 50 are among the most sleep-deprived demographics.

  26. Groundbreaking Research Establishes Biological Basis of Chronic ...

    The leading research funder in the world, the US National Institutes of Health (NIH), initiated a pivotal study into chronic fatigue syndrome (CFS) in 2016, which is alternatively known as myalgic ...

  27. Research Finds Scandals Have Less Impact on Politicians Than They Used

    In the study, that era begins in the mid-1990s during the Clinton-Lewinsky scandal. "That's the point where you see scandals matter a lot more." Overall, Rottinghaus said his study finds scandals do not have as much of an impact as they once did, but their impact also depends on whether the politician is a president, governor, or member of ...

  28. U.S. Tightens Rules on Risky Virus Research

    Research on extinct pathogens will draw the highest levels of scrutiny. Dr. Evans also noted that the new rules emphasize the risk that lab research can have on plants and animals.

  29. Lyme Disease symptoms: Why some recover fast and others do not

    In a recent study published in the journal Pediatric Research, DeBiasi and colleagues found 75% of children with Lyme disease were better within six months of antibiotics, while 9% had symptoms ...