Deetee • 12 years ago

Quite interesting and comprehensive paper on ways of improving AER reporting:

Improving reporting of adverse drug reactions: Systematic review.
Molokhia M, Tanna S, Bell D.
Clin Epidemiol. 2009 Aug 9;1:75-92.
http://www.ncbi.nlm.nih.gov...

Deetee • 12 years ago

Followup to the above:

Barbara Loe Fisher stated:
"Former FDA Commissioner David Kessler estimated in a 1993 article in the Journal of the American Medical Association that fewer than 1 percent of all doctors report injuries and deaths following the administration of prescription drugs. This estimate may be even lower for vaccines."

Neat misquote, Babs. Kessler actually said this:
"Although the FDA receives many adverse event reports, these probably represent only a fraction of the serious adverse events encountered by providers. A recent review article(12) found that between 3% and 11% of hospital admissions could be attributed to adverse drug reactions. Only about 1% of serious events are reported to the FDA, according to one study.(13)"
http://jama.ama-assn.org/co...

So Kessler himself never made this claim; he just cited another study as an example of how low AER rates may be. Yet the cite has morphed, in popular antivaccine mythology, into being from him in his capacity as FDA Commissioner.

Reference 13 is this one:
http://jama.ama-assn.org/co...

It doesn't say that 1% of serious reactions are reported.
It compares reporting rates before and after an initiative to improve AER reporting in Rhode Island. Where the 1% comes from is still a mystery, since the study has no meaningful denominator from which to estimate real overall AER rates as compared to those reported.

Deetee • 12 years ago

One issue I have is with the trumpeted claim that "less than 1% of reactions are reported" and the use of this inaccurate and unsourced statistic to multiply known side-effect rates 100-fold by whichever idiot is doing the claiming.

Trying to track the source of underreporting estimates is tricky. There are sources, including the FDA, stating that "as few" as 10% of reactions are officially reported (which does not surprise me, since medics will only report events they feel are clinically significant for the patient or unusual; eg, no-one would issue an incident report for dizziness or postural hypotension in someone on antihypertensives).
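
The arithmetic behind these "correction" claims is trivial, which is part of why it is so easily abused. A minimal sketch, assuming purely hypothetical report counts, dose counts, and reporting fractions (none of these numbers come from any real surveillance system):

```python
def corrected_rate(reported_events: int, doses_given: int,
                   reporting_fraction: float) -> float:
    """Estimated true events per dose, assuming only `reporting_fraction`
    of all events are ever reported."""
    if not 0 < reporting_fraction <= 1:
        raise ValueError("reporting_fraction must be in (0, 1]")
    return reported_events / reporting_fraction / doses_given

reported = 50        # hypothetical number of reports received
doses = 1_000_000    # hypothetical number of doses given

raw = reported / doses
# Assuming 10% reporting inflates the raw rate 10-fold; asserting
# "less than 1%" instead inflates it more than 100-fold.
print(raw, corrected_rate(reported, doses, 0.10),
      corrected_rate(reported, doses, 0.01))
```

The entire dispute is over the value of that reporting fraction: with no defensible estimate of it (or of the denominator), the 100-fold multiplier is an assumption, not a measurement.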

One document cited frequently for the less-than-1% claim is by former FDA chief Kessler in JAMA, but the wording of his statement remains elusive and I have never been able to find the exact text. It gets repeated by luminaries in the antivaccine world like Meryl Dorey ("The fact is that study after study has shown that the vast majority - up to 99% - of reactions are never reported. Yet the government and the medical community rely on these figures which are 99% incorrect.") and Barbara Loe Fisher ("Former FDA Commissioner David Kessler estimated in a 1993 article in the Journal of the American Medical Association that fewer than 1 percent of all doctors report injuries and deaths following the administration of prescription drugs. This estimate may be even lower for vaccines.") [thereby dropping the reported percentage of "injuries and deaths" to under 1%]
(see Whale.to for these claims if you dare)

I once did a check on how often serious vaccine adverse events were reported (eg paralysis after oral polio vaccine) and recall that the reporting was consistently over 50% (unfortunately I cannot find these citations any longer).

lilady • 12 years ago

Scott...I have a long comment held in moderation...too long and too many links, perhaps?

Thanks in advance for releasing it from the *moderation hopper*....lilady

lilady • 12 years ago

The question about rotavirus vaccines' safety records has been brought up recently on a Respectful Insolence blog. I have responded to the one person who persistently posts some inanities about rotavirus and the original vaccine that was licensed:

http://www.cdc.gov/vaccines...

"What action did CDC take when cases of intussusception were reported to VAERS?

CDC, in collaboration with the Food and Drug Administration (FDA), and state and local health departments throughout the United States, conducted two large investigations. One was a multi-state investigation which evaluated whether or not rotavirus vaccine was associated with intussusception. Based on the results of the investigation, CDC estimated that RotaShield® vaccine increased the risk for intussusception by one or two cases of intussusception among each 10,000 infants vaccinated. The other was a similar investigation in children vaccinated at large managed care organizations. When the results of these investigations became available, the Advisory Committee on Immunization Practices (ACIP) withdrew its recommendation to vaccinate infants with RotaShield® vaccine, and the manufacturer voluntarily withdrew RotaShield® from the market in October 1999. "

RotaShield vaccine was first licensed in July 1998 and removed from the marketplace within 14 months of licensing, a testament to the effectiveness of the FDA's and the CDC's monitoring of an adverse event that occurred within a relatively small subset of infants who had received RotaShield vaccine.

This same poster on Respectful Insolence then opined that cases of Kawasaki syndrome, and deaths, have been "reported" following immunization with RotaTeq vaccine (one of two currently licensed rotavirus vaccines). I then posted this link about Kawasaki disease incidence reported during clinical trials, as well as the incidence of reports of Kawasaki syndrome from VAERS and the Vaccine Safety Datalink after receipt of the vaccine. No Kawasaki syndrome deaths have ever been reported in association with the administration of rotavirus vaccines.

http://www.cdc.gov/vaccines...

"The FDA reports that five cases of Kawasaki syndrome have been identified in children less than 1 year of age who received the RotaTeq vaccine during clinical trials conducted before the vaccine was licensed. Three reports of Kawasaki syndrome were detected following the vaccine's approval in February 2006 through routine monitoring using the Vaccine Adverse Event Reporting System (VAERS). After learning about these Kawasaki syndrome reports, CDC identified one additional unconfirmed case through its Vaccine Safety Datalink project. The vaccine label has been revised to notify healthcare providers and the public about the reports of Kawasaki syndrome following RotaTeq vaccination.

The number of Kawasaki syndrome reports does not exceed the number of cases we expect to see based on the usual occurrence of Kawasaki syndrome in children. There is no known cause-and-effect relationship between receiving RotaTeq or any other vaccine and the occurrence of Kawasaki syndrome."

The persistent poster again opined that RotaTeq vaccine was implicated in an increased risk of intussusception. I then linked to this article from JAMA:

http://jama.ama-assn.org/co...

"Main Outcome Measure Intussusception occurring in the 1- to 7-day and 1- to 30-day risk windows following RV5 vaccination.

Results During the study period, 786 725 total RV5 doses, which included 309 844 first doses, were administered. We did not observe a statistically significant increased risk of intussusception with RV5 for either comparison group following any dose in either the 1- to 7-day or 1- to 30-day risk window. For the 1- to 30-day window following all RV5 doses, we observed 21 cases of intussusception compared with 20.9 expected cases (SIR, 1.01; 95% CI, 0.62-1.54); following dose 1, we observed 7 cases compared with 5.7 expected cases (SIR, 1.23; 95% CI, 0.5-2.54). For the 1- to 7-day window following all RV5 doses, we observed 4 cases compared with 4.3 expected cases (SIR, 0.92; 95% CI, 0.25-2.36); for dose 1, we observed 1 case compared with 0.8 expected case (SIR, 1.21; 95% CI, 0.03-6.75). The upper 95% CI limit of the SIR (6.75) from the historical comparison translates to an upper limit for the attributable risk of 1 intussusception case per 65 287 RV5 dose-1 recipients.

Conclusion Among US infants aged 4 to 34 weeks who received RV5, the risk of intussusception was not increased compared with infants who did not receive the rotavirus vaccine. "
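
For anyone who wants to check the mechanics behind the quoted figures: a standardized incidence ratio (SIR) is just observed cases divided by expected cases, and an upper confidence limit on the SIR can be translated into a worst-case attributable risk. A rough sketch using the rounded numbers from the abstract above (the paper's own values come from unrounded counts, so the results here only approximate its reported 1.01 SIR and 1-in-65,287 upper bound):

```python
def sir(observed: float, expected: float) -> float:
    """Standardized incidence ratio: observed / expected cases."""
    return observed / expected

def worst_case_one_in_n(sir_upper: float, expected: float, doses: int) -> float:
    """Translate an upper CI limit on the SIR into '1 case per N doses':
    excess cases = (SIR_upper - 1) * expected, spread over the doses given."""
    excess_cases = (sir_upper - 1) * expected
    return doses / excess_cases

# 1- to 30-day window, all doses: 21 observed vs 20.9 expected cases.
print(sir(21, 20.9))                              # just over 1.0

# Dose-1 historical comparison: SIR upper limit 6.75, ~0.8 expected
# cases, 309,844 first doses given.
print(worst_case_one_in_n(6.75, 0.8, 309_844))    # roughly 1 in 67,000
```

The point of the exercise: even the worst-case corner of the confidence interval corresponds to a very small absolute risk, which is what the quoted conclusion is saying.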

It's hard work to dispel the myths promulgated by notorious anti-vaccine websites and individuals who have no experience in immunology or epidemiology, and who "plug into" the various and sundry conspiracy theories (*Big Pharma*, *Big Gubmint*) posted by the pseudo-science bloggers.

MerColOzcopy • 12 years ago

You thought wrong; I don't think CAM is better. Adverse or favorable reporting is not relevant when any form of treatment was not needed in the first place. Unnecessary vaccines, antibiotics, and CAM are all guilty. Using your analogy, sometimes the trip is not worth the risk, or even necessary.

Your "CAM treatment" analogy ("takes you nowhere but blows up and injures or kills someone every so often") is the way many see vaccines. If SBM cannot determine conclusively whether an adverse event is valid, then it "seems" to be at odds with itself, hence CAM.

CAMry :)

PJLandis • 12 years ago

@MerColOzcopy

I think you're trying to say CAM is better because we don't know anything, as opposed to knowing something but not enough to make a solid conclusion? SBM isn't at "odds with itself"; if anything, I think you're seeing the self-correcting nature of science as an Achilles heel, when it's perhaps science's greatest strength. Of course CAM seems without risk when you never look for any risks and ignore anything but positive evidence.

Either way, favorable events and adverse events are both most useful when they come from a defined population, are compared against a control group, and when all groups (control and treatment) report everything, good or bad. VAERS doesn't do this, hence Dr. Hall's complaint that people are drawing conclusions from data that at best might help develop a hypothesis for a study that itself might yield supportable conclusions.

If a group of people given a flu shot is compared to a similar group which doesn't receive a flu shot, and both groups are followed to determine whether they get the flu and to report adverse events, then, assuming no other source of bias is apparent and the groups are large enough, we can make statements about the safety and efficacy of the flu vaccine. Is it possible that adverse events might be missed or not attributed to the vaccine? Yes, but that is why science is self-correcting, and it is a good argument for more post-marketing surveillance studies.

On the other hand, an unstudied CAM modality has little or no data to be evaluated. Or, more commonly, poorly designed studies which give unreliable conclusions. So, should we bet on a well-studied vaccine with known benefits and adverse events, or an untested CAM modality that is unlikely to offer any efficacy (I'm actually interested in hearing of a CAM treatment for the flu), which makes any risk foolish?

CAM treatments are akin to getting into a car (CAMry?) everyday that takes you nowhere but blows up and injures or kills someone every so often. Driving my car may have the same or even greater risks, but at least it takes me places.

MerColOzcopy • 12 years ago

"Nothing is without risk" says it all.

If the interpretation of AERS is in question, what value is there in favorable event reporting? If a test group is given a flu shot with no adverse events reported, and everyone avoids getting the flu, can it be said the shot is safe and effective? Certainly not.

The fact that such information exists would seem to put some drugs and vaccines in a category envious of CAM. When SBM seems to be at odds with itself perhaps the path of minimal risk for some is CAM.

dhallai • 12 years ago

Great post, Scott. I agree with some of the previous commenters that structured post-marketing surveillance using a computerized database -- all drugs, all people (anonymized, of course) -- would be ideal from a post marketing safety and efficacy standpoint. The cost would be relatively low. The earlier detection of a single fiasco (e.g., Vioxx, Avandia, etc) would probably pay for the whole system many times over.

windriven • 12 years ago

@Angora Rabbit

You said: "I know that companies are not keen to have registries and possibly the consumer will again foot the bill."

Consumers are going to foot the bill one way or the other. Drug companies generally have one substantial source of income: those who purchase their products. The cost of monitoring will come from that primary income source. I guess I'm wondering where else you think it might come from?

I'm also not certain that 'companies are not keen to have registries.' I don't claim to know either way. I'm in the devices business not the drugs business. I would have no hesitation at all about registries - so long as all companies in my field had to share the cost. I suspect that drug makers take their responsibilities just as seriously as device makers.

Angora Rabbit • 12 years ago

Dr. Hall, I like your idea as computerized records become widespread. Is it going to be feasible to centralize all uses and not just those associated with adverse outcomes? I don't know. I know that companies are not keen to have registries and possibly the consumer will again foot the bill.

Angora Rabbit • 12 years ago

Thanks for a great article on an important topic. In my own field of teratology I too wish there was a better system to record potential adverse outcomes. Many states (but not all!) have mandatory birth defects registries and from those one at least gets a feel for incidence vs. general population. We publish these annually. But this only works because all births are recorded. For most medications outside the birth defects field, as you rightly point out, we do not know the denominator (size of user population) so it is challenging (impossible) to use these datasets to calculate an exposure risk. By definition these reporting systems are biased because the entire population is not sampled, just a subset and only that subset that reports an adverse outcome. It's recall bias to the max.

Thalidomide highlights both the best and the worst of these reporting systems. It worked because the spectrum of defects was extremely narrow (= well-defined) and, most important, they were unique to thalidomide exposure. The signal-to-noise was huge and so the cause was quickly and correctly identified.

But most birth defects are not unique and occur at some frequency in the population. Trying to link a drug exposure to, say, midfacial clefting is a challenge because, unless this happens in a high % of the drug users, it will be near impossible to identify those drug-related cases against the low background incidence. Magnify this against outcomes that are already common in society (heart attack, stroke) or occur much later after exposure (cancer) and the registries become nigh useless.

I completely agree with you, they are not much better than hypothesis-generating and certainly are not sufficient to draw a conclusion. What we need is a better system of recording both use and adverse outcomes on a population scale. I am not smart enough to know how to make this happen. But those two websites are not the answer and I too predict a wave of ambulance chasing to follow.

Harriet Hall • 12 years ago

Systematic post-marketing surveillance is the answer. There is great potential in computerized databases that could track everyone taking a medication and compare them to those not taking it.

PJLandis • 12 years ago

My understanding is that VAERS is a purposeful dumping ground, not meant to create an accurate profile of any particular treatment. That profile is created, or should be, by other means, such as post-marketing studies.

I didn't see it mentioned in the article, but VAERS creates a database where any report can be added to the system regardless of quality as opposed to a more stringent reporting system which would likely result in far fewer spontaneous reports from the medical community at large.

Once an issue is identified, likely through some other means, the reports within the VAERS database could provide useful information to guide further research. Study populations equal money, and if the VAERS database brings up little in terms of some new, unexpected adverse event, it's an indication that the event isn't a 1-in-100,000,000 event and that a smaller, more manageable study could be justified. Plus, it gives information on what might otherwise be little-studied or unexpected events.

If you up the VAERS standards, you might get a better database for determining incidence, but you're also losing a lot of reporting that won't otherwise happen, and perhaps adding time to the discovery of serious adverse events. Anyway, just because people are misusing the data doesn't mean there is something wrong with the VAERS system; all evidence can be misused.

cervantes • 12 years ago

Well, see my comment preceding.

The problem with the system as a whole is not so much that it's insensitive as that it's not specific. There's no way to sort out real signal from noise. Once you think you see a signal, you need to ramp up a whole new investigation, but what the threshold for that should be, and who will pay for it, and what to do in the meantime, is basically undefined.

In fact the FDA has mandated post-marketing surveillance studies for many medications, and the companies have just ignored the mandates.

Scott • 12 years ago

@ Sid:

What part of

There is no question that adverse effect databases can serve as a valuable resource as part of an overall program to monitor the safety and efficacy of a drug or vaccine. However, in isolation, these databases have limited utility. Patterns or “signals” are recurrent events observed in the data. They are hypothesis-generating — not hypothesis-answering.

is unclear?

cervantes • 12 years ago

What we really need -- although it's expensive -- is more structured post-marketing surveillance. That means following a cohort of people who receive a treatment so you have a denominator, and can compare their experience to people not getting the drug. There are still many complications, e.g. confounding by indication, but these can be handled to some extent with techniques such as propensity scores and instrumental variables. I am among those who think we really need to do this -- clinical trials are of much too short a duration and, as you say, have unrepresentative populations. Adverse event reporting systems are basically a lame substitute for meaningful surveillance that let the FDA and the pharm companies say they're watching out for us when they really aren't.
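
The confounding-by-indication problem cervantes mentions is easy to see with a toy cohort (every number below is invented): sicker patients are both more likely to get the drug and more likely to have a bad outcome, so a naive comparison blames the drug; stratifying on the confounder, the simplest relative of a propensity-score adjustment, removes the artifact.

```python
def make(n, sick, treated, bad):
    """n identical patient records (invented data)."""
    return [{"sick": sick, "treated": treated, "bad": bad}] * n

# Sick patients: mostly treated, 30% bad outcomes whether treated or not.
# Well patients: mostly untreated, 5% bad outcomes whether treated or not.
cohort = (make(30, 1, 1, 1) + make(70, 1, 1, 0) +
          make(6, 1, 0, 1) + make(14, 1, 0, 0) +
          make(1, 0, 1, 1) + make(19, 0, 1, 0) +
          make(5, 0, 0, 1) + make(95, 0, 0, 0))

def rate(rows):
    return sum(r["bad"] for r in rows) / len(rows)

treated = [r for r in cohort if r["treated"]]
untreated = [r for r in cohort if not r["treated"]]

# Naive comparison: the drug looks roughly three times as risky...
print(rate(treated), rate(untreated))

# ...but within each severity stratum the treated and untreated rates
# are identical, so the apparent harm is pure confounding.
for sick in (1, 0):
    t = rate([r for r in treated if r["sick"] == sick])
    u = rate([r for r in untreated if r["sick"] == sick])
    print(sick, t, u)
```

Real propensity-score methods do the same thing with many covariates at once, by modeling the probability of receiving treatment; the structured cohort surveillance discussed in this thread is what would supply the data for such models.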

Scott • 12 years ago

"I cannot think of any positive outcomes for these sort of websites except for the owners I suppose."

And the lawyers who will file lawsuits based on the dubious analyses.

drsteverx • 12 years ago

Seems like there is always room for more poor statistical analysis on the interwebs. I cannot think of any positive outcomes for this sort of website, except for the owners, I suppose. Most lay people are not able to look at this sort of information and know its limitations; I would expect panic from the majority reading about their med on one of these websites.

"We need to get back to believing the evidence of our own eyes."

Wow. That is the whole purpose of adhering to the scientific method as best we can: our perceptions are faulty at best. I would expect this quote from a homeopathic website or other purveyor of woo. Though I admit I did not read his book Pharmageddon, so maybe that does fit.

zeno • 12 years ago

Surely Rxrisk.org is likely to get duplicate events: those reported via AERS and those reported directly? With the anonymous data in AERS, how will they be able to tell?