The Journal of the American Medical Association saw a 21 percent drop in industry-financed research after it began requiring that data in company-sponsored medical trials be independently verified by university researchers, a study has concluded.
The study, by a team of medical researchers in England and Florida, found that two of JAMA’s competitors saw their proportions of industry-backed research grow after JAMA decided to impose the requirement in 2005 to deter companies from shading descriptions of medical-test results to favor their products.
The findings suggest JAMA could face significant financial pressure to abandon the policy, given the reliance of medical journals on corporate dollars, said one of the study’s authors, Benjamin Djulbegovic, a professor of medicine and oncology at the University of South Florida.
“Major medical journals face an inherent conflict of interest” when trying to ensure the integrity of their published findings, Dr. Djulbegovic said in presenting the findings at the International Congress on Peer Review and Biomedical Publication here, a quadrennial conference of medical-journal publishers organized by JAMA with support from several of the other journals.
The three-day conference was dominated by investigations of the ways that corporate money is believed to be misleading both the public and medical professionals who rely on the journals for impartial evaluations of the safety and effectiveness of drugs and medical procedures.
Such warnings have been common features of the JAMA-initiated conferences, which began in 1989, though the issue took center stage in Vancouver after another year of allegations of corporate distortions of medical-research findings.
In the past few months, lawyers suing drug makers alleging harmful effects of their medications have found evidence of concerted attempts by the companies to secretly influence the presentation of medical-journal articles that appeared to have been written by independent university scientists.
Dozens of universities have meanwhile seen the need to toughen requirements on their researchers to disclose details of their financial relationships with makers of pharmaceuticals and medical devices. The federal government has also formed a registry where researchers are encouraged to describe their studies in advance so that any published conclusions can be compared with the objectives promised at the outset.
‘Lots of Warts’
Despite those efforts, the studies presented to the journal editors gathered here covered a range of ways in which articles in their pages may still contain inaccuracies, often resulting from a financial conflict involving a scientist or a reviewer.
Such concerns led some conference participants to question the journals’ financial models: They rely on unpaid volunteers to review article submissions and on revenue from companies that buy reprints of articles that depict their products favorably.
Conference topics included the failure of journals and their authors to disclose corporate connections, the reluctance of researchers to share their data, the use of misleading rhetoric in journal articles, and the almost uniform ability of authors rejected by one journal to get published in another.
“We still have lots of warts,” Catherine D. DeAngelis, editor of JAMA, said of her industry after listening to the presentations.
Even some areas of improvement were shown to have their limits. About 300 journals have now joined a commitment by JAMA and other leading journals to publish only research in which the authors registered their intended outcomes in advance. The system, using either the federal registry or a recognized alternative, is designed to guard against researchers’ using their studies to selectively identify data that support a drug or treatment rather than sticking to the criteria they initially promised to measure.
But studies presented in Vancouver showed that the registry system isn’t yet having a significant effect because too many researchers are making registry entries that are either vague or filled with too many measurement criteria. “Registration alone cannot improve research quality,” Deborah A. Zarin, director of the federal registry, told the conference.
Independent Data Reviews
JAMA also found its competitors still unwilling to join its commitment to publish industry-supported studies only if the data get an independent review.
Dr. Djulbegovic and his colleagues at the Committee on Publication Ethics, in Britain, compared JAMA’s experience under its 2005 policy with that of two other leading medical journals, tallying the proportion of research in each journal that involved industry financing in 2002 and again in 2008.
JAMA’s share of industry-supported studies fell from more than 60 percent of its published trials to 47 percent, a relative decline of about 21 percent. The Lancet, by contrast, saw a 17 percent increase over the same period, and The New England Journal of Medicine an 11 percent increase, the group reported.
Dr. Djulbegovic called the finding “quite dramatic,” while acknowledging that his investigation had a number of limitations, including the fact that it did not demonstrate the degree to which the shift could be attributed to the 2005 policy.
The study also didn’t show whether JAMA’s policy is actually producing more reliable research, said Fiona Godlee, editor in chief of BMJ, another leading medical journal. Dr. Godlee told Dr. Djulbegovic that other journals “would be flocking to” join JAMA if anyone could show the policy produced better science.
‘Something to Hide’
Dr. DeAngelis, JAMA’s editor, followed Dr. Godlee with an impassioned pledge to stick with the policy, saying she was pushed into imposing the requirement after at least two instances in which a corporate sponsor refused to allow an outside review of its data before publication.
The policy, she said, means she will always have the ability to call a university dean and ask for an investigation any time she encounters a challenge to data published in JAMA. And while some companies may be boycotting JAMA, the journal hasn’t seen its ad revenue drop any more than its competitors have during the recession, and its “impact factor”—a measure of its authors’ influence—has grown since 2005, she said.
“Until somebody can prove that what we’re doing is wrong, we’re going to keep it,” Dr. DeAngelis told her fellow editors. “The cynic in me says, if you’re not submitting to JAMA because you have something to hide or you don’t want anybody else to look at it, so be it.”
JAMA, meanwhile, had its own issues with data accuracy at the conference. Dr. DeAngelis and other JAMA editors presented a survey of authors conducted last year that concluded, according to the summary given to conference participants, that the prevalence of “ghostwriting”—in which university scientists sign their names to research articles that secretly originated with writers sometimes paid by companies—has grown significantly since their previous survey in 1996.
But in the actual presentation to the conference, a JAMA researcher, Joseph S. Wislar, said his subsequent analysis of the data showed no significant increase in ghostwriting during that period. Nevertheless, the prevalence of “ghost” authors in top-ranked medical journals remains a concern, Mr. Wislar said, ranging last year from 2 percent at Nature Medicine to 11 percent at The New England Journal of Medicine.
Industry Silence
The conference participants included representatives of several of the drug companies, who largely sat silently through the repeated depiction of their industry as an obstacle to the unbiased pursuit of medical research.
Many of the concerns raised at the conference appeared to reflect industry tactics that may have been practiced by some companies a decade or more ago but aren’t common now, said Fran Young, director of science publications at Shire Pharmaceuticals. “Things have changed,” Ms. Young said after watching the presentations.
“Either I have worked at the three most rigorous companies out there,” said Ms. Young, formerly with AstraZeneca and GlaxoSmithKline, “or things are not as bad as being painted in this room.”
The companies and the journals appear to share concern over some of the problems identified at the conference, Ms. Young said. Those include publication bias, described by several presenters as the tendency to publish trial results showing a significant effect of a medication or procedure rather than “negative” results showing no significant effect. Both kinds of findings can contain critical information, yet data presented at the conference suggest that positive results are more likely to be published.
And any resulting bias in journal articles may not be the fault of drug companies alone. During one discussion of the reluctance of medical journals to publish negative results, Faina Linkov, a research assistant professor at the University of Pittsburgh, said it was a well-known problem.
“All of my colleagues and I,” she said, “are very much tempted to massage the data until we find some positive results.”