The Evidence Pyramid Revisited

This is the 9th anniversary of my “Best of Health” articles. It’s hard to believe I’ve been writing these for 9 years! Thank you to everyone who reads them and to those who correspond with me following their publication. You can find a complete archive of my articles on my blog at: https://sunsongdachshunds.wordpress.com/health-welfare/our-dogs-best-of-health-articles/

Last month I wrote about the challenges of cherry-picking data from published research studies and how such data can be used to generate click-bait headlines in the national press and on social media. I emphasised the importance of breed clubs collecting their own data with robust health surveys. Ideally, these should include responses from owners of dogs that aren’t part of the show community or that aren’t KC registered. These non-show and non-KC data have the potential to demonstrate whether or not there are differences in the health of these different sub-populations of our breeds.

My caveat at the end of last month’s article was that more data, on its own, won’t improve dog health or longevity. There is little point in endlessly arguing with the published research or debating whether or not the sample in a survey is truly representative of what’s happening in a breed.

I was reminded of an article I wrote in 2017, following the Breed Health Coordinators’ Conference. One of the presentations was by Dr Zoe Belshaw from the Centre for Evidence Based Veterinary Medicine at Nottingham University. Zoe talked about the so-called Trust Triangle, which describes the different types of information you might come across and the levels of trust that can be associated with each.

A variation of this is the Evidence Pyramid, which has expert opinion at the bottom, followed by Case Studies, Cohort Studies and Randomised Controlled Trials (RCTs). These latter three are unfiltered information, which may be available as Open Access papers. Sitting above these are a series of filtered information sources such as Systematic Reviews and meta-analyses. These publications dissect and critique a set of primary research papers in order to arrive at “the best evidence” to support a particular case (or to disprove it). This is the sort of work that Zoe’s colleagues do at the Nottingham Centre for EBVM; they then publish what can be considered to be best practice for vets and clinicians to adopt. As with all science, “best practice” today could well change if new research evidence emerges.

This all seems quite logical but, recently, I was intrigued to read an article by Dr Michael Putnam, an Associate Professor of Medicine in Wisconsin, who argued that the Evidence Pyramid is flawed.

One of the points he makes is that, in the real world, when a medical professional needs an answer to some obscure clinical question, they rarely dig through published case reports. Their pragmatic approach is to ask a respected colleague for their expert opinion because they feel this carries more weight than reading some randomly published paper.

Putnam argues that systematic reviews are emphatically not the highest level of evidence. This is a good point, because a review is simply a synthesis of the actual evidence collected from a range of RCTs and observational studies. Its value therefore depends on the quality of the review process and on the papers and studies in the review pool. Many of the studies included in these reviews involve small sample sizes and (sometimes) dubious methodologies, particularly when it comes to statistical analysis. I have had several conversations recently about papers published on research into intervertebral disc disease where the statistical analyses were less than ideal and/or where the studies were underpowered due to small sample sizes. The reason I queried these papers was that their findings contradicted previous studies (both in dogs and humans). Luckily, I have some very capable statistician friends and am in contact with researchers to whom I can turn for a critical appraisal of new papers.
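To make “underpowered” concrete, here is a minimal simulation sketch in Python (the effect size and group sizes are illustrative assumptions of mine, not figures taken from any of the IVDD papers). It repeatedly draws two small groups that genuinely differ and counts how often a standard t-test actually detects the difference:

```python
# Illustrative sketch: how often does a study detect a real effect?
# The effect size and group sizes below are hypothetical.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

def estimated_power(n_per_group, effect_size=0.5, trials=10_000, alpha=0.05):
    """Fraction of simulated studies that detect a true difference of
    `effect_size` standard deviations with `n_per_group` dogs per group."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(effect_size, 1.0, n_per_group)
        if ttest_ind(control, treated).pvalue < alpha:
            hits += 1
    return hits / trials

for n in (10, 30, 64, 100):
    print(f"n = {n:>3} per group: estimated power = {estimated_power(n):.2f}")
```

With a true effect of half a standard deviation, groups of 10 detect it less than one time in five; roughly 64 per group are needed for the conventional 80% power. A “significant” result from a much smaller study that contradicts a larger body of evidence therefore deserves extra scrutiny.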

A paper published by Prof. John Ioannidis (Stanford University) in 2016 warned of the “massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses” which, instead of promoting evidence-based medicine and health care, “often serve mostly as easily produced publishable units or marketing tools”. He concluded that “China has rapidly become the most prolific producer of English-language, PubMed-indexed meta-analyses. The most massive presence of Chinese meta-analyses is on genetic associations (63% of global production in 2014), where almost all results are misleading since they combine fragmented information from mostly abandoned era of candidate genes.” More shockingly, Ioannidis’s 2005 paper “Why Most Published Research Findings Are False” stated that simulations show that, for most study designs and settings, it is more likely for a research claim to be false than true, and that many research findings may simply be accurate measures of the prevailing bias.
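The arithmetic behind that startling claim is simple enough to sketch. Ioannidis’s formula gives the probability that a statistically significant finding is actually true (its positive predictive value, PPV), based on the pre-study odds R that the hypothesis is true, the study’s power, and the significance threshold. The input values below are my own illustrative assumptions:

```python
# Ioannidis-style arithmetic: the chance a "significant" finding is true.
# PPV = (1 - beta) * R / (R + alpha - beta * R), where R is the pre-study
# odds that a tested hypothesis is true and beta is the type II error rate.
def positive_predictive_value(R, power, alpha=0.05):
    """Probability that a statistically significant result reflects a true
    relationship, ignoring bias (per Ioannidis 2005); inputs are illustrative."""
    beta = 1 - power
    return power * R / (R + alpha - beta * R)

# A well-powered trial of a plausible hypothesis: most hits are real.
print(positive_predictive_value(R=1.0, power=0.8))   # ~0.94
# An underpowered screen of unlikely candidate genes: most hits are false.
print(positive_predictive_value(R=0.1, power=0.2))   # ~0.29
```

When plausible hypotheses are tested in well-powered studies, most significant results are real; when thousands of unlikely candidate genes are screened in underpowered studies, most “discoveries” are false, which is exactly the pattern Ioannidis describes.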

What should be at the top of the pyramid?

Putnam argues that RCTs should be at the top of the evidence pyramid. However, he goes on to say that we should acknowledge that many RCTs are poorly designed, underpowered and subject to bias. Therefore, instead of the pyramid having discrete layers that differentiate between sources of evidence, the model should recognise that some trials are worse than some cohort studies. In other words, there is a blurred boundary between observational studies and RCTs. The case for this re-engineering of the Evidence Pyramid was also made by Murad et al. in the BMJ in 2016.

Does it matter in the real world?

We are encouraged to consider Evidence Based Veterinary Medicine as an underpinning principle for recommending approaches to the diagnosis and treatment of canine health conditions. The quality of the evidence should determine how much confidence we place in the recommendations.

In practice, there don’t seem to be many published RCTs that are of relevance to us, but there are numerous observational and cohort studies (retrospective and prospective). Putnam ended his article by saying that “good observational studies may be better than bad RCTs” and that we should read and judge each paper by its individual merits, “not by its strata on a colourful pyramid”.

So, in the real world, it might be worth reading the “Limitations” section of any research paper before diving into the full text. It’s also worth checking the “Conflicts of Interest” declaration to find out who funded the study. I think it was Sid Vicious who said “Today, everything’s a conflict of interest”!
