Drug Education: The Triumph of Bad Science
By Jason Cohn, Rolling Stone, May 24, 2001
DARE and Programs Like It Don’t Stop Kids from Using Drugs. But There’s Too Much at Stake to Replace Them.
In February, the head of Drug Abuse Resistance Education, a program used in seventy-five percent of U.S. school districts and in fifty-five countries worldwide, made the extraordinary admission that the program has not been effective. Nonetheless, the Robert Wood Johnson Foundation gave DARE a $13.7 million grant to bring the curriculum up to date and to scientifically evaluate its usefulness. The foundation reasoned that it would be easier to change DARE than to bring another program to its level of penetration. And so, in September, DARE will launch its new and improved program with great fanfare in six cities, including New York and Los Angeles. In March 2002, administrators will implement it worldwide.
The DARE makeover announcement is being interpreted by some as a signal that science is coming to the rescue at last in the politically sensitive field of drug education. Zili Sloboda, the former director of the Division of Epidemiology and Prevention Research at the National Institute on Drug Abuse, was chosen to oversee the evaluation of the renovated program. She says that DARE “will do everything it can to update its programs and to make them evidence based.”
But many social scientists are unimpressed: They argue that drug prevention education must be the only category in their field where failure – such as DARE’s – is used as an occasion to continue and even expand a program.
In fact, the problem may go way beyond DARE. In interviews with more than a dozen experts, a picture emerges of a dysfunctional and highly politicized drug education environment in which even the “research-based programs” now favored by the federal government don’t stand up to scientific scrutiny. Indeed, many say, despite all the “scientific” claims to the contrary, drug prevention education – at least the abstinence-based model that reigns in America’s schools – is just as likely to have no effect, or to make kids curious, as it is to persuade them not to use drugs.
Here’s why the current models are flawed: Drug education researchers generally evaluate their own programs, and, with few exceptions, they tend to parse out their data so programs seem more successful than they actually are. Scientists call it “over-advocating.” Positive results in limited situations are exaggerated, and instances of increased drug use are obscured or suppressed. Such practices should never survive the process of peer review, critics say, but they do.
The federal government plays a major role. Key agencies set unrealistic guidelines that ensure failure, and they continue to nurture programs despite bountiful evidence that they don’t work. What’s worse, drug education is big business. Fueled by a perpetual sense of crisis, schools and communities pour scarce resources into prevention programs. Each year, the federal government spends upward of $2 billion on drug prevention education, and states and localities contribute more, according to data extrapolated by Joel Brown, director of the Center for Educational Research and Development, in Berkeley, California. Estimates on total expenditures range as high as $5 billion annually. Researchers who evaluate their own programs stand to profit only when they can report success. And these same researchers are often asked to sit on exclusive government panels, deciding which programs will be recommended for sale to the nation’s schools.
DARE may be “the only game in town,” as Sloboda puts it, but that hasn’t kept other researchers from developing programs to fight for a share of the market. These competitors have been buoyed by DARE’s public-relations woes and by a 1998 law limiting Department of Education drug prevention funds to programs that at least minimally demonstrate “the promise of success” in reducing teen drug use.
Only one of the programs deemed exemplary by all three major agencies – the Department of Education, NIDA and the Center for Substance Abuse Prevention – is commercially available nationwide. That program is called Life Skills Training. While LST is not nearly as big as DARE, the program is currently in about 3,000 schools, and an estimated 800,000 students have gone through it to date, according to a spokesman.
LST has never been evaluated independently; two studies are going on now, with the results expected this summer. But the program’s creator, Gilbert Botvin, a professor of psychology and public health at Cornell University, claims that the program reduces tobacco, alcohol and marijuana use in young people by up to an incredible seventy-five percent. He has published more than a dozen articles in leading journals, including the Journal of the American Medical Association, saying as much. These would be remarkable outcomes indeed, enough to warrant implementing LST in every school in the country, if it weren’t for one thing: They are probably not true.
As LST has risen in prominence, other researchers have begun analyzing Botvin’s published articles, and many have discerned a common pattern. “Botvin gets positive effects but only in a very small subsample,” says Dennis Gorman, an expert in prevention and evaluation methodology at the Texas A&M University System Health Center.
As an example, Gorman showed in the 1998 article “The Irrelevance of Evidence in the Development of School-based Drug-Prevention Policy, 1986-1996,” in Evaluation Review, that Botvin emphasizes specific data from students who were exposed to at least sixty percent of the program’s curriculum. Gorman states that the students Botvin ends up focusing on are likely to be those who were most motivated and least inclined to be involved with drugs in the first place. Botvin responds that breaking down the data from only the high-implementation group tells you most about the usefulness of a program. (This is a practice that the National Academy of Sciences called “misleading” in a report that condemned the quality of current prevention research.)
Others have gone beyond Gorman in criticizing Botvin’s methods. In an article in the April issue of Journal of Drug Education, Joel Brown found that when students received fifty-nine percent or less of Life Skills Training, their drug use was actually higher than that of students who didn’t go through LST at all. Botvin categorically denies any negative results: “The fact of the matter is, we present more data than any of these other researchers.”
Another person who takes issue with Botvin’s claims is Stephanie Tortu, an associate professor of public health at Tulane University in New Orleans. In 1984, she was project manager on one of LST’s first major studies – an investigation of the effectiveness of the program in fifty-six schools across New York state.
She says that when Botvin presented her with the draft of the study’s results, she was shocked to discover that crucial data on the students’ alcohol use had been left out. Tortu and the other researchers had found that students who went through LST were more likely to drink alcohol than students who weren’t exposed to the program, but this information was nowhere to be found in the report.
She and several colleagues on the project, including Barbara Bettes, a data analyst, sent Botvin a memo documenting their concern and asking that an investigation of the alcohol findings be made their highest priority.
“He was the principal investigator,” Tortu says. “When he saw that alcohol use was up in his prevention group, he should have been trying to figure out why. I felt he was required ethically to call attention to it and investigate it.”
Shortly afterward, Tortu says, Botvin denied her a standard raise, a message she interpreted as punishment for sticking her neck out. Soon after that, Botvin informed her that there was no longer enough money to keep her on the project.
“To be straightforward and candid,” says Botvin, “we’ve produced the strongest effects for tobacco, and also strong effects for marijuana, but the alcohol effects in some studies have been inconsistent.” And he maintains that the data at issue in the staff memo was just preliminary. Bettes counters that Botvin felt comfortable using the same set of data to announce positive effects on tobacco and marijuana.
In the end, the report delivered to New York state did not indicate that alcohol use had increased among students who went through the program. Joel Moskowitz, director of the Center for Family and Community Health at UC-Berkeley, notes that “unfortunately, Botvin is not the only researcher to engage in such practices of overstating positive program effects or neglecting to report negative program effects and limitations of the research.”
Tortu maintains that, ultimately, Botvin has a conflict of interest in both evaluating and profiting from LST, but Botvin says that he has fully disclosed the fact that he receives royalties from sales of the program. Furthermore, he says, the evidence confirming LST’s success is superior. “In my view, the quality of the science in the Life Skills Training research is higher than for any other prevention program that I’m aware of in America,” he says.
That may not be far from true, but it’s also not saying a whole lot. Botvin has been careful to distance LST from DARE and other programs that have been found to be ineffective. He calls LST a “comprehensive” approach to drug education.
“Life Skills Training deals with a broad array of skills that we think kids need to navigate their way through the dangerous minefield of adolescence,” Botvin says. “Skills that will help them be more successful as adolescents and help them to avoid high-risk behaviors, including pressures to drink, smoke or use drugs.”
These skills include how to make conversation with strangers and politely end a conversation when it could lead to offers of drugs. “It goes beyond ‘Just Say No’ to identifying unreasonable requests and reacting to those requests in an appropriate way,” Botvin says.
But critics of LST, who include top DARE officials, say the difference is not so great as Botvin likes to make out. “DARE has ‘Eight Ways to Say No’; LST has ‘Nine Ways to Say No,’” says DARE spokesman Ralph Lochridge. “A lot of these programs look, talk and walk like DARE.”
Despite recent critical attention, LST has emerged as the leading contender to DARE. Major news organizations have hyped it, and Botvin even seems to see DARE’s ongoing troubles as an opportunity. After the recent DARE announcement, he wrote a letter to the New York Times and encouraged me to paint LST as a David to DARE’s Goliath. “Why wait two, three or five years to find out whether or not [the new DARE] works,” Botvin asks, “when we already have prevention programs available today that have been extensively researched and for which there is strong scientific evidence of effectiveness?” Even Sloboda, now in DARE’s camp, is quick to list LST as an exemplar of “highly effective prevention programs in use today.”
LST may be ready for its close-up, but has the program gotten an easy ride from scholarly journals and the government agencies that endorse it? Some researchers think so.
“If I had been asked to review these studies, I would not have recommended publication,” says Richard Clayton, director of the Center for Prevention Research at the University of Kentucky. “Some people have a vested interest in saying that our current drug prevention strategies work. They’re looking for a poster child for prevention to take attention away from DARE, and they’ve chosen LST. But that doesn’t mean it works.”
LST and DARE are only two of the many programs drawing criticism from those in the prevention/research community who have grown disgusted with the field. Prevention science, they say, is a niche that tends to attract those more concerned with waging the War on Drugs in America’s classrooms than with performing careful science. Evidence is ignored or used so selectively that it becomes irrelevant. When studies turn up negative or neutral results, prevention boosters employ a variety of deceptions, according to critics.
One tactic is to continually change the measurements for success. Rather than looking for changes in drug behavior, researchers might look for changes in reported attitudes toward drugs. If attitudes haven’t changed much, they can always test how much kids know about the dangers of drugs. DARE, for example, has continued to claim success because kids who go through the program tend to have a better attitude toward the police – as if the goal of the program was to raise awareness for law enforcement rather than to keep kids off drugs.
Another common trick is to revise the program. Since any longitudinal evaluation requires, by definition, years to compile, researchers can always deflect criticism by saying the program has evolved since the evaluation began.
As with other aspects of the War on Drugs, hawkish prevention makes for good politics, and the truth be damned. The Center for Substance Abuse Prevention has developed a set of talking points for advocates to present. The title? “Winning the Numbers Game: How to Keep Saying ‘Prevention Works’ When the Numbers Say Something Else.”
This dissing of science is part of what makes independent researchers conclude that the whole enterprise of drug prevention is run through with politics and ideological arthritis. Since it’s a small area of study, they say, the same group of scientists is continually chosen to sit on the panels that recommend programs to schools and to review articles for publication, a practice that encourages mutual back-scratching rather than critical investigation. (One prominent figure calls it a “research mafia.”) It’s a closed feedback loop that works in favor of those researchers whose programs fit the narrow guidelines defined by the federal government. They get the big grants, so they publish the most research and are consequently chosen to peer-review the research of others.
The federal guidelines in question are found in the 1994 Safe and Drug Free Schools and Communities Act, which codified zero-tolerance drug policy as a “no-use” education strategy. The philosophy of no-use is similar to abstinence-based sex education: Kids learn that they have a choice between keeping their bodies pure or … not. This is a “choice” that few kids will fail to see as rigged.
While it may seem appropriate for schools to take that stance, no-use programs, from DARE on down, have never been shown to help keep kids off drugs. Even the General Accounting Office, which evaluates how effectively federal money is being spent, reported, “There is no evidence that the no use approach is more successful than alternative approaches, or even successful in its own right.”
Federal policymakers have chosen to ignore the GAO’s recommendation that they broaden the search for effective prevention strategies. So today, schools and researchers who want federal funding must demonstrate that their programs teach abstinence as the only option.
Not all researchers have gone along with the policy. Joel Brown, for one, wants to study an alternative program based on “resilience education,” the subject and title of his recent book. Brown says resiliency is a general scientific concept that focuses on young people’s ability to adapt and thrive in the face of educational challenges. Applying the idea to drug education is natural, he says.
“Over the course of their lives, kids will inevitably face a variety of decisions about drugs, including legal drugs and alcohol when they are of the right age,” Brown says. Rather than starting with the idea that kids don’t have the capacity to make wise decisions if they’re dealt with honestly, a resiliency approach would allow educators to deal credibly with students on the issue of drugs. In this way, Brown says, teachers can help kids become skilled decision makers rather than merely telling them which decision is the right one.
Brown says a resilience-based program would not condone youth drug use. However, he says, it’s critical to provide honest, accurate and complete drug information while focusing on health and safety. Marsha Rosenbaum, director of the San Francisco office of the Lindesmith Center/Drug Policy Foundation, says that this approach is exactly what America’s schools need.
“What’s missing from ‘drug education’ is education,” says Rosenbaum. “For the kids who don’t say no, where can they go for honest, realistic information about drugs in a life-or-death situation?” she asks. “They sure can’t go to the so-called educator in a no-use prevention program.”
Realistically, educators must recognize that some kids will do drugs no matter what they’re told, Rosenbaum says. And that means adults have the responsibility to provide information that can help save lives. She offers the example of Ecstasy, whose most common health risk comes from dehydration when people take it and go dancing. Some deaths have occurred when kids either fail to drink water or drink so much water that it becomes toxic.
But such useful information is rarely taught in America’s schools. Federal policy and the overall zeitgeist of the War on Drugs make it too difficult to implement and study programs based on resiliency or its less comprehensive cousin, harm reduction, Rosenbaum says.
If education is missing from drug education, then reality is what’s missing from federal policy. In addition to prescribing no-use messages, the Safe and Drug Free Schools Act also set as a goal “that by the year 2000, all schools in America will be free of drugs and violence.” It was unsuccessful in more ways than one. By chasing unrealistic goals, the policy has endangered the most at-risk students and failed to properly educate anyone.
Copyright: Straight Arrow Publishers, Inc. May 24, 2001