Medical Studies are Biased

ASKING THE RIGHT QUESTIONS:

 12-11-2008. This is an editorial from ODA Today (Ohio Dental Association) by Dr. Matthew J. Messina:

 When I graduated from Michigan State University with my BS in Physiology, I landed a great job with the Cleveland Clinic Department of Psychiatry before starting dental school. When one of the staff physicians was planning a research study, it was my task to evaluate all pertinent previous research, determine its content and value, and record the study methodology in order to devise the best possible research protocol for the new study. I read thousands of abstracts and hundreds of papers: some good, some bad, some insightful, some awful, and some downright silly.

 What bothered me were the conflicting results of so many studies. I had my suspicions, but I could never quite put my theory into words. This week, David Michaels, an epidemiologist teaching at the George Washington University School of Public Health, wrote in the Washington Post that "It's not the answers that are biased, it's the questions." How right he is!

 When a scientist is hired by a corporation with a financial interest in the outcome of the study, the likelihood that the result of the research will be favorable to that firm is dramatically increased.  This close correlation between results desired by a study's patron and those reported by the researchers is known in the scientific literature as the "funding effect".

 No scientist sets out to do bad research or to intentionally prostitute himself for research dollars. However, having a financial stake in the outcome can change the approach of even the most respected scientists. At first, it was assumed that the misleading results in manufacturer-sponsored studies came from shoddy research done by scientists who manipulated methods and data. While scientific malpractice does happen, closer examination of manufacturer-supported research shows these studies to be as good, on average, as independent ones.

 Richard Smith, the recently retired editor of the British Medical Journal (BMJ), noted that it would be "far too crude, and possibly detectable, for companies to fiddle directly with results." He suggested that it was far more important to ask the "right" questions.

 What Smith and other critics have found is that industry researchers design studies in ways that make the products of their sponsor appear superior to those of their competitors. Smith and others have catalogued these "tricks of the trade," which include: testing your drug against a treatment that either does not work or does not work very well; testing your drug against too low or too high a dose of the comparison drug, because this will make your drug appear more effective or less toxic; publishing the results of a single trial many times in different forms to make it appear that multiple studies reached the same conclusions; and publishing only those studies, or parts of studies, that are favorable to your drug, and burying the rest.

 Among my favorite dental examples of bad questions is the bonding agent manufacturer who boldly pronounced that it had achieved significantly higher bond strength than the industry leader. Only when you followed the asterisk to the actual trial data did you find that the test was done on alligator dentin. I would have preferred a human trial, and I know we as a profession have some history with animal models in monkeys, rats, cats, and even beagle dogs. Alligator is a new one on me. I'm sure they used this research subject because it offered the greatest number of test teeth per mouth. As it is, if I'm the dentist at Disney's Animal Kingdom, I know which bonding agent I want in my tackle box. In private practice, this research is questionable at best.

 I also found humorous the research protocol for a pre-brushing rinse that purported to radically improve effectiveness over brushing alone. While the reported improvement in brushing efficiency was impressive, the methodology called for allowing control subjects to brush for only 15 seconds. The experimental subjects rinsed vigorously with the product for one full minute, then brushed for the same 15 seconds. I suspect that a pre-rinse with water as a control might have leveled the playing field a bit. The product in question actually worked, but only because the rinse tasted terrible, so people brushed twice as long just to get out the foul flavor.

 Questionable research will always be with us. Our responsibility to our patients and to ourselves is to read the fine print and not just listen to the marketing spiel. We have the training in the scientific method to be skeptical and to suspect when something smells fishy. Staying on top of our professional game means more than just taking CE courses on the latest techniques. It means remaining intellectually curious, and asking the right questions!