Subluxation, Hill's Criteria of Causation and EBM
James Demetrious, Private Practice
29 December 2009

I read with interest the paper written by Mirtz et al. I have reservations regarding the authors' conclusions, both in the manner in which they have editorialized the subject matter and in their application of Hill's criteria of causation.

First, I would direct the authors to the paper by Phillips and Goodman [1] entitled "The missed lessons of Sir Austin Bradford Hill." Phillips and Goodman report the following:

Making a good decision does not depend on having studies with confidence intervals that exclude the null. A best decision can be based on whatever information we have now, and indeed a decision will be made – after all, the decision to maintain the status quo is still a decision. Hill offered his clearest condemnation of over-emphasizing statistical significance testing, not when he discussed p-values, but when he concluded by saying: "All scientific work is incomplete – whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time."

This would release us from the trap of letting ignorance trump knowledge. Regulators often fail to act because we have not yet statistically "proven" an association between an exposure and a disease, even when there is enough evidence to strongly suspect a causal relationship. There is a growing movement to escape this mistake by making a similar mistake in the other direction: adopting precautionary principles, which typically call for restrictions until we have "proven" lack of causal association – a decision based on ignorance that merely reverses the default. If we can escape from the false dichotomy of "proven vs. not proven," facilitated by the non-existent bright line implied by statistical hypothesis testing and by the notion that causality can be definitively inferred from a list of criteria, then we can make decisions based on what we do know rather than what we don't.

The uncritical repetition of Hill's "causal criteria" is probably counterproductive in promoting sophisticated understanding of causal inference. But a different list of considerations that can be found in his address is worthy of repeating:

• Statistical significance should not be mistaken for evidence of a substantial association.
• Association does not prove causation (other evidence must be considered).
• Precision should not be mistaken for validity (non-random errors exist).
• Evidence (or belief) that there is a causal relationship is not sufficient to suggest action should be taken.
• Uncertainty about whether there is a causal relationship (or even an association) is not sufficient to suggest action should not be taken.

These points may seem obvious when stated so bluntly, but causal inference and health policy decision making would benefit tremendously if they were considered more carefully and more often. The last point may be the most important unlearned lesson in health decision making. In fairness to those who do not appreciate these points even today, it over-interprets Hill's short paper to claim that he clearly laid out these considerations, or that he was calling for modern decision analysis and uncertainty quantification. But the fundamental concepts were clearly there (and the over-interpretation is not as great as that required to derive a checklist of criteria for determining causation). Several generations of advancement in epidemiology and policy analysis provide much deeper exposition of his points. But Hill still offers timeless, insightful analysis about how to interpret our observations.
Strangely, these forgotten lessons, which are only slowly and grudgingly being appreciated in modern epidemiology, are hidden in plain sight, in what is possibly the best known paper in the field.

It is my impression that Mirtz et al. have engaged in an uncritical repetition of Hill's "causal criteria" that is counterproductive in promoting a sophisticated understanding of causal inference related to the term "subluxation."

I would also caution the authors to apply the tenets of evidence based medicine carefully. Sackett et al. [2] conveyed the following thoughts:

• Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.
• The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.
• Good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough.
• Evidence based medicine is not restricted to randomised trials and meta-analyses. It involves tracking down the best external evidence with which to answer our clinical questions.

Finally, the opinion of Resnick [3] bears consideration: "Evidence-based medicine is a useful tool for summarizing and grading the evidence available in the literature for or against a particular treatment strategy. Its utility is limited by the quality of the primary literature, and the absence of proof cannot be equated with the proof of absence."

When considering the term "subluxation" as utilized by the chiropractic profession, it is my impression that stringent adherence to epidemiologic constructs and evidence based medical protocols must not overshadow clinical experience. Authors must integrate clinical experience with the best available external evidence.

References

1. Phillips CV, Goodman KJ: The missed lessons of Sir Austin Bradford Hill.
Epidemiologic Perspectives & Innovations 2004, 1:3.
2. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn't: it's about integrating individual clinical expertise and the best external evidence. British Medical Journal 1996, 312(7023):71-72.
3. Resnick DK: Evidence based spine surgery. Spine 2007, 32(11):S15-S19.

James Demetrious, DC, FACO
Wilmington, NC

Competing interests

No competing interest exists; my professional judgment about the referenced paper has not been influenced by considerations other than the paper's validity and importance.