It’s pretty clear from what’s happened in recent months that we’re living in what’s being labelled a “post-truth world” where personal opinions and “alternative facts” are used as the basis of policy-making.
I’ve written before about analysis paralysis and the dangers of waiting for the perfect set of data before taking action to address breed health issues (you’ll be waiting a very long time!). Surely, there has to be a middle ground where we can develop plans and implement improvement actions that are evidence-based but where we can be agile enough to change course if new evidence emerges.
Breed Health Coordinators know only too well how hard it can be to have a sensible conversation with a breeder who “has never seen this issue in 25 years of breeding” and therefore believes it cannot possibly be a concern. BHCs are constantly trying to explain (in plain English) that survey data describes what is happening in a population, and that this may be very different from what is happening to the health of an individual dog.
One way to build a case that action may be needed (or not needed) is to triangulate the evidence from several sources. So, for example, the Swedish Agria insurance database and the UK’s VetCompass database provide a large quantity of data on multiple conditions and thousands of dogs. Individual breed surveys, including the KC’s 2004 and 2014 surveys, provide additional data, typically covering fewer dogs and drawn from different owner samples. A third source is published research papers, many of which focus on very specific health conditions, and these accumulate steadily over the years. A simple search on Google Scholar will find hundreds; for example, I found over 350 papers on IVDD in Dachshunds. You can even create an alert so that you are sent an email every time a new paper matching a keyword you choose is published.
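As a rough illustration of what triangulation can look like in numbers (the figures below are invented placeholders, not real survey or database results), a sample-size-weighted pooled estimate is one simple way to combine prevalence figures from several sources:

```python
# Hedged sketch: combining prevalence estimates from several sources.
# All figures are made-up placeholders for illustration only.
sources = {
    "insurance database": (0.22, 5000),   # (observed prevalence, dogs sampled)
    "breed survey":       (0.27, 1200),
    "published study":    (0.31, 350),
}

# Weight each source's estimate by the number of dogs behind it
total = sum(n for _, n in sources.values())
pooled = sum(p * n for p, n in sources.values()) / total
print(f"Sample-size weighted pooled estimate: {pooled:.1%} (n={total})")
```

The point is not the arithmetic but the convergence: if several independent sources all land in the same region, the case for action is far stronger than any single figure on its own.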
Of course, one thing everyone needs to understand is the difference between “data” and “evidence”.
I could tell you that as many as 1 in 4 Dachshunds is likely to have some degree of back problem during its life. That’s data, but on its own it doesn’t have much validity or reliability unless you know something about its context. What was the sample size, how was the data collected, and what is it going to be used for? Data can exist on its own, but it is pretty useless without context.
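To make the sample-size point concrete (using made-up numbers, not any real survey): the same headline figure of “1 in 4” carries very different weight depending on how many dogs sit behind it. A standard way to express this is a confidence interval, which narrows as the sample grows:

```python
import math

def wilson_interval(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# The same "25% affected" observation, from samples of very different sizes
for n in (40, 400, 4000):
    lo, hi = wilson_interval(round(n / 4), n)
    print(f"n={n}: 25% observed, plausible range {lo:.1%} to {hi:.1%}")
```

With 40 dogs, “1 in 4” is compatible with anything from roughly 1 in 7 to 2 in 5; with 4,000 dogs the range is far tighter. That is exactly why the context behind a number matters.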
Evidence, however, can only exist to support a theory, an opinion or an argument. So, if in my opinion too many Dachshunds have back problems, I need to provide some data to support that opinion. That data comes from research, including routine health surveillance.
If you want to improve something, you need to have evidence to support a case for taking action. In the case of Dachshunds, there is evidence to show that the more calcifications you can count in X-rays of a dog’s spine at around 24 months of age, the more likely it is to suffer IVDD, and the more likely its offspring are to be at risk too. There is plenty of data to back up that evidence, published in peer-reviewed papers, and that’s why we launched a new X-ray screening programme in November last year.
My 2 Golden Rules for Breed Health Improvement are:
- There should be no action without evidence
- There can be no evidence without data
An important point here is that the people expected to implement the action (e.g. the owners we are asking to screen their dogs) don’t need to understand the data, but they do need to believe the evidence. So, those of us who love getting our hands dirty with the data need to become better storytellers. We need to present the evidence in easy-to-digest formats: infographics are one way, as are success stories from other breeds or other countries.
The UK’s National Statistician John Pullinger recently wrote that there is a huge opportunity for statistics in the post-truth world. He said there is great potential to mobilise the power of data to help us make better decisions. But, he points out that, with people spending ever more time getting their news from social media channels, we risk connecting only with those who hold views similar to our own and never encountering those who think differently. This can leave us prey to those who choose to support their own opinions with “alternative facts”.
Government is supposed to follow the principles of evidence-based policy-making. The whole point of this approach is that government asks Civil Servants to review and analyse the available data before drafting legislation. They should also be analysing the counterfactuals – what would happen in the absence of the policy or legislation. Both human and veterinary medicine should also be developing evidence-based practice and we need to be doing this with breed health improvement too.
Evidence-based practice is designed to avoid policies being developed either as a knee-jerk reaction to circumstances (exactly what happened with the Dangerous Dogs Act) or on the basis of a politician’s personal agenda or ministerial whim.
Politicians and those in positions of power, such as ministers, are notoriously bad at asking for data and evidence, let alone using them to inform decisions. Steve Dean also noted this in one of his Our Dogs articles on the outcomes of the EFRACom review of canine welfare issues. His article “Poor research and little science” discussed the lack of critical information to support the committee’s views and recommendations. He concluded by saying “attempting to impose sanctions on the majority, to deal with a disreputable minority, is a repetitive misdemeanor of governing bodies”.
Politicians and animal welfare campaigners too often look for simple solutions to complex problems. The last thing they want to do is to look at the data or evidence because, often, these would undermine the rationale for their current “pet policy”. As a consequence, they end up implementing the wrong solution to the wrong problem, which is exactly what happened with the Dangerous Dogs Act. They also end up with unintended consequences and even more bad publicity!
The New York Times’ Andrew Revkin blames pervasive misinformation in part on “single-study syndrome,” in which agenda-driven fringe groups promote studies supporting a predetermined position — no matter how questionable the research behind them may be.
We mustn’t fall into that trap with breed health improvement. We need just enough data and evidence-based policy-making.
I’ll end with a quote from Jill Abramson writing in the Guardian: “Alternative facts are just lies, whatever Kellyanne Conway (advisor to Donald Trump) says”.