There seem to be endless discussions about the evidence for or against the prevalence of health conditions in specific breeds of pedigree dogs. The “front line” of these battles over data has been among the Brachycephalic breeds in the past 12 months. Numerous other breeds crop up for debate with a predictable regularity (GSDs, Cavaliers, Dachshunds, BMD, Flatcoats – the list goes on).
To use that dreadful cliché, “at the end of the day”, there is no single RIGHT answer for the prevalence of any given condition, or conditions, in each breed. The published answers depend heavily on who is doing the research, what their objectives were, how they designed the study, which dogs were used as sources of data and, finally, how the data was analysed and presented.
This month, I want to focus on that last aspect: how the data was “manipulated” and presented. However, with an eye on unintended consequences, please don’t use this article as a checklist of ways to spin your data. It is better viewed as a starting point for being curious (sceptical?) about the studies being published and the data being presented.
What answer would you like?
Cherry-picking is probably the easiest way to spin data; simply select the results that support your case and ignore the rest. It is not unusual for research studies to come up with different answers to previously published material. For example, Packer et al (2012) studied the relationship between body length and back disease (IVDD) and concluded that the longer and lower the dog, the higher the odds of it having IVDD. That clearly plays to an agenda that links exaggerated conformation to health issues. A subsequent analysis of a much larger dataset collected by the Dachshund Breed Council also published by Packer et al (2015) did not reproduce those findings. It would be wrong to cherry-pick the latter study as a way of justifying exaggerated conformation (particularly when our Breed Standard calls for moderation in body length and asks for sufficient ground clearance).
The Cobra Effect occurs when an incentive produces the opposite result to the one intended (also known as “perverse incentive”). A classic example here would be the decision to publish the results of a screening programme to showcase dogs with, for example, good hips and to show an overall improvement in scores over time. If owners choose only to submit “good” scores for publication, the published results will give a false impression of the state of the breed.
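The arithmetic of selective reporting is easy to sketch. The snippet below uses entirely invented hip scores (the distribution and the “owners only submit scores below 15” rule are purely illustrative, not drawn from any real screening scheme) to show how publishing only the “good” results flatters the published average:

```python
import random

random.seed(1)

# Hypothetical illustration: 500 dogs with hip scores (lower = better).
true_scores = [max(0, round(random.gauss(15, 6))) for _ in range(500)]

# Suppose owners only submit scores below 15 for publication.
published = [s for s in true_scores if s < 15]

true_mean = sum(true_scores) / len(true_scores)
published_mean = sum(published) / len(published)

print(f"True mean score:      {true_mean:.1f}")
print(f"Published mean score: {published_mean:.1f}")  # flatteringly lower
```

The published mean is always lower than the true mean, because the published sample is truncated from above; over time, the scheme appears to “improve” even if the breed does not.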
False causality occurs when you assume that, because two events occur together, one must have caused the other. There is, for example, data suggesting that Pugs with a higher Body Condition Score tend to have a higher risk of BOAS. It might be unwise to conclude that “being overweight causes BOAS”. It may be more appropriate to say there is an association between being overweight and BOAS, and that good husbandry advice to owners would therefore be to keep their dogs at an ideal body condition score. Having said that, we know that being overweight is generally unhealthy and leads to all sorts of adverse health outcomes!
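Association without causation is easy to manufacture in a few lines. In this sketch a single hidden factor (imagined here as low activity levels; the factor, coefficients and “BOAS risk score” are all invented for illustration) drives up both body condition score and the risk score, so the two correlate strongly even though neither causes the other:

```python
import random

random.seed(2)

# Hypothetical hidden factor: low activity, scaled 0..1 per dog.
n = 1000
low_activity = [random.random() for _ in range(n)]

# Both outcomes are driven by the hidden factor, plus independent noise.
bcs = [5.0 + 4.0 * z + random.gauss(0, 0.5) for z in low_activity]
boas_risk = [0.2 + 0.6 * z + random.gauss(0, 0.1) for z in low_activity]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"Correlation(BCS, BOAS risk): {pearson(bcs, boas_risk):.2f}")
```

The correlation is strong, yet by construction neither variable causes the other; intervening on body condition alone would not change the risk score in this toy model. That is exactly why “association” is the safer word.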
Don’t be surprised by contradictory results
Sampling bias is a great argument for anyone who wants to challenge a set of results. In its purest sense, it means the sample chosen is unrepresentative of the general population. For most canine studies, the reality is that a particular sampling frame was chosen, either deliberately or by default, and the results will inevitably reflect that decision. The sampling frame might be “pet dogs”, “show dogs”, “dogs seen at first-opinion vets”, “dogs seen at referral practices” and so on. That is one reason why it is perfectly possible for apparently contradictory results to be obtained.
There are other aspects of sampling bias which can affect the results obtained in a survey or research exercise. There may be Area Bias which means the geographic origin of the sample is not representative of the whole population. For example, our 2015 Dachshund Health survey includes data on about 90 Australian Dachshunds. This group has a high prevalence of skin conditions compared with UK dogs and this is likely to be an area bias related to climate and environment.
Self-selection bias is perhaps one of the most used “excuses” for results being challenged. The argument is usually along the lines of “people whose dogs have been ill are more likely to respond” or “you can’t rely on show people to report honestly, if at all”. Both of these might be true and would lead to biased samples and results.
Social desirability bias occurs when people don’t want to admit to doing something that is perceived to be socially undesirable or, in the case of their dogs, undesirable for the dog. Typically, owner-reported estimates of a dog’s body condition underestimate the degree to which dogs are overweight and the amount they are fed. Similarly, owners may overestimate the amount of exercise their dog gets; e.g. 40 minutes is rounded up and reported as “an hour”.
Of course, if you actually want to lie or mislead with your study results, building a sampling bias into your data collection is an important tool!
Averages can hide a multitude of sins
Finally, the use of Summary Statistics can be misleading. Calculating an Arithmetic Mean (average) may hide a large amount of variation and/or multiple causes of that variation. Dachshunds are generally considered to be a long-lived breed and were used as one of the breeds in a recent GWAS project comparing the genomes of long and short-lived breeds. A look at the age of death (AoD) histogram for the breed shows a Mean AoD of 9 years but this is skewed by the number of deaths due to IVDD. On average, these IVDD dogs die at 6, whereas all other causes of death occur at an average age of 10.
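The arithmetic behind that skewed mean is simple to check as a weighted average of the two sub-groups. Note that the 25%/75% split below is my inference from the quoted figures (solving 6p + 10(1 − p) = 9 gives p = 0.25), not a number taken from the survey:

```python
# Weighted mean of two sub-groups of deaths. The 25%/75% split is
# back-calculated from the quoted means (6 and 10) and overall mean (9);
# it is an inferred illustration, not survey data.
ivdd_share, ivdd_mean_aod = 0.25, 6.0
other_share, other_mean_aod = 0.75, 10.0

overall_mean = ivdd_share * ivdd_mean_aod + other_share * other_mean_aod
print(f"Overall mean age of death: {overall_mean:.0f} years")  # -> 9 years
```

A single figure of “9 years” hides the fact that one identifiable cause of death (IVDD) pulls the average down by a full year; the histogram, not the mean, tells that story.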
The most worrying misuse of summary statistics I have come across is the choice of denominator in the calculation of a prevalence. Say, for example, a large multi-breed population survey of 1,000,000 dogs explores a health condition known to be prevalent in particular breeds. The prevalence in the total population might be just 1% (10,000 dogs). If there are 20,000 examples of one breed and, of those, 1,000 have the condition, it would be misleading to report the prevalence as 0.1% “among dogs”. The meaningful calculation is to report a prevalence of 5% “in that breed”; in other words, the condition is five times more common in that breed than in the sample population as a whole, and fifty times the misleading 0.1% figure. We need to understand whether health conditions should be addressed at the level of dogs in general, or whether they are breed-specific. Both types of issue exist, and masking breed-specific issues by reporting population prevalence is simply avoidance and denial.
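The denominator arithmetic can be laid out explicitly. The numbers below are the hypothetical ones from the text; note that comparing the breed’s 5% with the whole-population 1% gives a factor of five, while comparing it with the misleading 0.1% figure gives a factor of fifty:

```python
# Hypothetical figures from the worked example in the text.
total_dogs = 1_000_000
total_cases = 10_000      # 1% prevalence across all dogs
breed_dogs = 20_000
breed_cases = 1_000

overall_prevalence = total_cases / total_dogs   # 1%
misleading = breed_cases / total_dogs           # 0.1% "among dogs"
breed_prevalence = breed_cases / breed_dogs     # 5% within the breed

print(f"Overall prevalence:       {overall_prevalence:.1%}")
print(f"Misleading 'among dogs':  {misleading:.1%}")
print(f"Within the breed:         {breed_prevalence:.1%}")
print(f"vs the average dog:       {breed_prevalence / overall_prevalence:.0f}x")
print(f"vs the misleading figure: {breed_prevalence / misleading:.0f}x")
```

Same raw counts, three very different headlines; the only thing that changed is which denominator was chosen.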
So, next time somebody shares some statistical analysis with you, approach it with curiosity and try to figure out whether they have an ulterior motive to manipulate your opinion. It might just be their lies, damned lies and statistics (to quote Disraeli).
This article was inspired by “Data fallacies to avoid” published at http://www.datasciencecentral.com