New rules are coming for sanitizing conflicts of interest in research financed by the National Institutes of Health (NIH), dispenser of the government’s biggest budget for civilian science, some $31 billion this year. The conflicted need not fear. The draft rules, soon to be made final, continue the NIH’s longtime practice of trust-but-don’t-verify, relying on universities to police the outside business dealings of their faculty members. There’s the difficulty: universities have repeatedly demonstrated reluctance to pry into their faculty members’ income-producing sidelines, or, in some cases, mainlines. Relations between campus administrators and scientists can be especially prickly as grant competition intensifies and the ethos of our time encourages professorial entrepreneurship. What dean wants to lose a grant-laden superstar? As documented by federal investigators, the NIH tends to be aloof from enforcing its own rules about conflicts of interest.
We’ve witnessed such reformist gestures before, and little has changed. And little, if anything, is likely to change under the NIH’s proposed revised regulations, with the bemusing title, “Responsibility of Applicants for Promoting Objectivity in Research for which PHS [Public Health Service] Funding is Sought and Responsible Prospective Contractors.” Basically, the revisions call for reliance on today’s favored antidote for white-collar wrongdoing: greater transparency, applied, in this instance, to money-making endeavors by university scientists. Never great, the perils for miscreants won’t be appreciably greater. Even so, as predictable as soporific commencement oratory, the mainstay Washington representatives of academic science and medicine have protested the proposed regulatory changes as onerous and unnecessary.
Also looming, but of uncertain impact, is a little-noted statutory tag-along to the massive health-care legislation passed last spring. Titled the Physician Payment Sunshine Act, it was inspired by revelations of covert payments from drug and medical-device manufacturers to physicians and medical researchers. The new law requires these firms to report such payments to the government, but the scope is limited to physicians and teaching hospitals, which leaves out a large swath of the biomedical research community. The first public reports are due March 31, 2013, covering calendar year 2012.
A Case Study in Conflicted Science
Let’s look at one of the great running episodes in the annals of conflict of interest, a tale rich in official languor.
The July 2006 issue of the journal Neuropsychopharmacology contained an article titled “VNS Therapy in Treatment-Resistant Depression: Clinical Evidence and Putative Neurobiological Mechanisms.” The article reported favorably on an implanted electrical device for treatment of depression. Among the nine authors, the lead author was a professionally renowned psychiatrist and major recipient of the NIH’s largesse, Charles B. Nemeroff, chair of the Department of Psychiatry and Behavioral Science at Emory University. Following publication of the article, the Wall Street Journal reported that Nemeroff and six of the authors were consultants for the manufacturer of the device, a firm named Cyberonics. The Neuropsychopharmacology article listed Cyberonics as the institutional affiliation for one of the authors; the others were identified as academics. In violation of the medical journal’s disclosure policy, the article did not reveal the authors’ affiliation with the firm. A subsequent issue of Neuropsychopharmacology disclosed the connection. Also announced was the resignation of the editor of Neuropsychopharmacology, none other than lead author Charles B. Nemeroff. Nemeroff later explained that the requisite information about commercial connections was submitted with the article but was omitted “due to an unfortunate clerical oversight.”
In 2008, Nemeroff returned to public attention because of his collision with NIH rules that require grantees to report to their institutions annual outside income above $10,000 (reduced to $5,000 in the forthcoming revised regulations). Under both the old and new regulations, if the income is deemed to create a conflict of interest with a recipient’s academic obligations, the school is required to report the matter to the NIH and specify steps to manage, reduce, or eliminate the conflict. Adopted in 1995, the rules were developed in response to conflicted deals, such as those in which NIH grantees touted drugs for companies that rewarded them with cash or stock or favorably reported on clinical trials for drugs in which they had a financial interest—performances akin to a restaurant critic gushing over an eatery in which he or she is a silent partner.
On file at Emory, according to the New York Times, was a letter from Nemeroff, dated July 15, 2004, stating that he would earn less than $10,000 from the GlaxoSmithKline (GSK) pharmaceutical firm. “But on that day,” the Times reported, “he was at the Four Seasons Resort in Jackson Hole, Wyoming, earning $3,000 of what would become $170,000 in income from the British drug giant.” But that was insignificant compared with the lapses subsequently reported by Senator Charles E. Grassley, Republican from Iowa and congressional scourge of conflicts of interest in medicine. According to Grassley, Nemeroff had failed to disclose at least $1.2 million of $2.8 million he had received from pharmaceutical firms over seven years for consulting and speeches. While receiving the money from GlaxoSmithKline, Nemeroff was principal investigator on a depression study funded by the NIH for $3.95 million over five years. During that time, Nemeroff lectured widely and favorably on a leading GSK drug, the antidepressant Paxil. In response, the NIH temporarily suspended the funding—a rare event in grantland.
The episode was distressing to Emory, an institution high in total federal research receipts—in 2008, number twenty-eight in university rankings, with $290 million—thanks in part to Nemeroff’s grant-winning prowess. However, punishment was in order, if only to placate the NIH’s cashiers, who were increasingly prodded by Capitol Hill to enforce their own rules. Emory responded to the challenge by barring Nemeroff from applying for NIH grants for two years, whereupon he stepped down from his position as chair of psychiatry and entered into negotiations to become the psychiatry chair at the University of Miami School of Medicine. He was appointed to the post in 2009.
Critical to this migration was the issue of whether Emory’s two-year ban on applying for NIH grants would accompany Nemeroff to Miami. To clarify the matter, the dean of the Miami School of Medicine, Pascal J. Goldschmidt, consulted Thomas R. Insel, director of the NIH’s National Institute of Mental Health (NIMH), the spigot for Nemeroff’s government research money. Some years earlier, Insel had worked at Emory University, where he was acquainted with Nemeroff. Old-boy network? Who could be faulted for entertaining the possibility?
It may be assumed that Insel was au courant on the intricacies of grant eligibility, as he was serving as co-chair of the committee charged with the aforementioned revision of the NIH’s conflict-of-interest regulations. According to the Chronicle of Higher Education, Insel told the Miami dean that “Charlie [Nemeroff] was absolutely in fine standing” with the NIH. In fact, he was in such fine standing with the NIH that while Emory barred Nemeroff from applying for NIH grants, he continued to serve on NIH advisory panels that evaluate grant applications. As for eligibility for applying for grants if he was appointed to the Miami post—no problem, the NIMH director assured the Miami dean. Grants are awarded to the institution, not to the applicant, and once out of Emory, Nemeroff would be free of the ban. In the NIH scheme of things, people don’t get grants; institutions get grants.
With suggestions in the air that NIMH’s Insel helped Nemeroff get the Miami job because, years ago, Nemeroff had helped Insel get the Emory job, Insel wrote on his NIMH Director’s Blog on June 15, 2010, “To my knowledge, Dr. Nemeroff had no significant impact on my selection [for his long-ago appointment at Emory].” Apparently realizing that that explanation wouldn’t clear up the matter, Insel later added, “While my response to Dr. Goldschmidt was simply to describe the facts, in retrospect it would have been better to refer the Dean’s specific questions about grant eligibility to someone from the NIH Office of Extramural Research, which coordinated the investigation of Emory University. . . . Note, however,” Insel insisted, “that I must comply with the current policy, which permits someone to apply for NIH funding unless they have been de-barred.”
Later, in a letter to Senator Grassley, Insel asserted that Nemeroff’s failure “to disclose large sums of income from industry was an egregious violation of NIH policy,” and he pledged an “aggressive stance” against conflicted scientists seeking funds from his institute.
In response to the buck-passing extravaganza, NIH director Francis Collins said that the NIH should explore ways to amend the rules so that banishment from grant eligibility sticks to the individual rather than the institution. The draft regulations currently under consideration were two years in the making—but apparently without attention devoted to the issue of individual responsibility and personal sanctions.
Ghostwriters and Grantees
When it comes to policing conflicts of interest, the NIH is a reluctant cop, preferring to rely on universities to maintain reasonably decent ethical standards. Some do, and, fearful of damaging publicity, many more are attempting to do so. But, as Derek Bok, former president of Harvard University, despairingly observed in his 2003 book, Universities in the Marketplace: The Commercialization of Higher Education, tenured professors are a free-running species:
The university strikes many critics as a kind of anarchy, ill-suited for any purpose other than securing the comfort and convenience of the tenured professors. Officials of the university have very little authority over their senior faculty. The latter have almost complete license to do as they choose, thanks to the security of tenure buttressed by the safeguards of academic freedom. Since it is difficult to monitor closely the work of highly educated professionals, faculty members can travel more than the university rules allow or remain home tending their garden or enjoying their hobbies without much fear of detection. So long as they meet their scheduled classes and refrain from criminal acts, they can stay happily in their jobs until they retire.
If gardening and hobbies, to the neglect of academic duties, were really the problem, the trusting public could rest easy about the university-based experts on whom it relies for scientific progress and objective guidance about the benefits and dangers of technical and medical developments. The public record, however, reveals less wholesome deviations from required duties. Among them is what might be called professorial reverse plagiarism. Prestigious names in science are commercially valuable because they lend credibility to scientific papers related to pharmaceutical drugs and other products. If a renowned professor puts his or her name to a paper reporting good results for a drug, that portends profit for patent holders, manufacturers, and investors. In this circumstance, as is customary in the American marketplace, a racket ensues, involving the nefarious practice of “ghostwriting” and what is euphemized as “honorary” or “guest” authorship. In the former case, the actual author—sometimes a pharmaceutical employee or consultant—goes unnamed and unknown, while a professionally recognized name appears on the paper. Often in positions of authority over subordinates who performed the research, honorary or guest authors thereby receive credit for little or no work in producing a published scientific paper, which is the currency of success in the scientific professions. Because ghostwriting and honorary and guest recognition are naughty practices, their prevalence defies accounting, but the few studies that have been made indicate that they are no rarity.
One of these studies, titled “Prevalence of Honorary and Ghost Authorship in Six General Medical Journals,” by Joseph Wislar, Annette Flanagin, Phil B. Fontanarosa, and Catherine D. DeAngelis, presented some doleful numbers to an international meeting of journal editors last year: among a sampling of papers published in 2008, 26 percent had honorary authors, 8 percent had ghostwriters, and 2 percent had both honorary authors and ghostwriters. The figures actually represent a small decline from the results reported in a study conducted a year earlier, suggesting that disapproval directed at the practices may be having a cleansing effect. Singed by dubious authorship and undisclosed conflicts of interest, many scientific and medical journals require prospective authors to vouch for their integrity in authorship and bare their financial souls. In some of these journals, published notes appended to all articles state that the author has declared no conflict or report the organizations from which the author has received money.
The cleansing power of transparency is widely assumed, but the evidence for its efficacy is uncertain. A veteran editor on the front lines of medical publishing, Drummond Rennie, associate editor of JAMA, spelled out his doubts in 2006 when I interviewed him for a book I was researching. Requirements for disclosure are critically important, he stressed, “but, of course, you’re an idiot if you think that solves the problem. . . . We demand everything. You specify as much as you can. But we don’t have the sort of manpower to go into Standard & Poor’s or Dow Jones to look up who’s director [of a company] and so on. Because that’s a huge thing to do, and because people lie to us all the time. And then they claim they didn’t understand the meaning of the term ‘money’ or ‘consult’ or whatever. We’ve had that. ‘Well, we thought you meant a lot of money.’ Who’s to say two million is a lot?”
Perhaps change is on the way. But the NIH’s insouciance toward conflicted grantees has endured through a long series of critical official reports, such as How Grantees Manage Financial Conflicts of Interest in Research Funded by the National Institutes of Health, and repeated congressional scoldings. Among these was a report issued last year by the inspector general of the Department of Health and Human Services (HHS), the NIH’s parent agency. The report examined forty-one research institutions that received NIH grants in 2006. Among the findings were that 90 percent of the schools relied solely on the discretion of grantees to report outside financial dealings related to their government-supported research and that faculty disclosure statements were not routinely verified.
The findings jibed with those in a 2008 report by the inspector general, National Institutes of Health: Conflicts of Interest in Extramural Research, which stated that “many [NIH] institutes rely on the good faith of the grantee institutions to ensure compliance with Federal financial conflict-of-interest regulations, rather than directly overseeing or reviewing grantee institutions’ management of financial conflict of interest.” The NIH’s record keeping of conflict reports was found to be spotty to nonexistent, as was follow-up on pledges to manage, reduce, or eliminate the few conflicts that were actually reported.
Eight years earlier, what was then the General Accounting Office (GAO), an investigative service for the Congress now called the Government Accountability Office, issued a report with similar findings titled Biomedical Research: HHS Direction Needed to Address Financial Conflicts of Interest. Taking a close look at five research universities, the GAO reported that “all five universities had difficulty providing basic data on investigators’ financial conflicts of interest in clinical research.” The report added that none of the universities had formal processes for verifying that individuals fully disclosed their financial interests.
The NIH’s reluctance to play cop is further demonstrated by the absence of regulations for institutional conflicts of interest—for example, the not-uncommon situations in which a university owns shares in a company whose products undergo testing on campus. University administrators and government policy makers have been dodging that one for decades, deeming it so complex that further study is required—more than two decades after the issue was seriously raised in various official studies.
Having defied remedies so far, the system is likely to continue more or less unchanged, with institutional conflicts ignored and individual conflicts winked at or, at most, mildly reprimanded. The rare exceptions that occur involve egregious cases that not even the NIH can stomach, especially if Congress expresses interest. Professors who supplement their personal incomes and laboratory budgets with commercial money are not at the top of the crowded worry lists of university presidents and medical school deans. To the contrary, big fundraisers are lauded for bringing in research money, even if conflict-of-interest issues tag along. Higher education, and especially its scientific components, is suffused with a sense of societal benefaction and dedication to good works—at the expense of forgoing personal enrichment in the marketplace. Greedy? Not universities, particularly when rookie bankers outearn university presidents. And scientists who neglect pesky financial disclosure forms while seeking cures with industrial collaborators see themselves as victims of bureaucratic zeal and folly. Meanwhile, the government is calling for closer collaboration between academe and industry as a means of boosting the economy, and invasive regulation by government officials is out of political fashion.
With Wall Street getting away with grand larceny on a cosmic scale, why worry about professors dodging the rules to make extra bucks? The answer is that even openly delivered commercial money can impinge on scientific integrity—that is, truthfulness and commitment to honesty—which suggests that undisclosed commercial money is likely to be even more insidious. “Strong and consistent evidence,” JAMA reported in 2003, “shows that industry-sponsored research tends to draw pro-industry conclusions.” The report added that “industry preferentially supports trial designs that favor positive results,” and it noted that “industry ties are associated with both publication delays and data withholding.”
These sins against science do not invariably occur in the tango between academic researchers and commerce, but when they do, science is not always the sole victim. The public, too, may feel the impact of polluted science, as evidenced by a series of episodes in which pharmaceutical research data and the prestige of science were repeatedly manipulated for marketplace gain. One of the most disastrous of recent times involved the painkilling drug Vioxx, manufactured and heavily marketed by Merck, despite warning signs of serious cardiac side effects. With sales of $2.3 billion in 2003, Merck defended the drug as safe and effective, despite accumulating contrary evidence—including an estimated 27,000 heart-related deaths over four years. Finally, under the weight of evidence, Merck took the drug off the market in 2004. As wrongful-death lawsuits piled up, sordid details came to light, including, as JAMA reported in 2008, “guest authorship and ghostwriting” of published papers favorable to the drug. Noting the Vioxx experience, JAMA’s editor, Catherine DeAngelis, lamented that “the manipulation of study results, authors, editors, and reviewers is not the sole purview of one company.”
Finding a Cure
There’s no single cure for conflicted interests and other manifestations of commercially induced rogue science. But greater transparency—though overhyped as a panacea—is surely part of the remedy. Faculty members and administrators should be required to post on a public site all professionally related deals with off-campus organizations. Evaders may be difficult to catch without an undesirable level of scrutiny, but when caught, they should be subjected to public disclosure. Though it appears to be a declining force in academic culture these days, fear of embarrassment may still encourage good behavior.
Meanwhile, the NIH needs to surmount its naive indolence about goodness in academe. Sorry, but crooks in lab coats do exist. And their outing, as happens now and then, erodes the public confidence that helps keep billions pouring into the NIH’s grant-making system. In all times, but especially in this era of budget-slashing fervor, NIH administrators prefer to spend on science rather than employ gumshoes to hound scientists. But an expert team or two might be assembled for occasional unannounced spot checks of university compliance with conflict-of-interest requirements and other NIH regulations.
Finally, when the rules are broken, a two-year time-out for applying for the next grant does not even rise to the level of frivolous. Something a bit nastier might impress errant professors.
Daniel S. Greenberg, a journalist, was a reporter for the Washington Post and news editor of Science. He has written for many popular and professional journals and is the author of four nonfiction books as well as a novel, Tech Transfer: Science, Money, Love, and the Ivory Tower. His e-mail address is firstname.lastname@example.org.