Monday, August 19, 2019

9 Great Suggestions For Improving The Quality Of Dietary Research (And 1 That According To @JamesHeathers Is "Deeply Silly")

Last week saw the publication of an op-ed authored by Drs. David Ludwig, Cara Ebbeling, and Steven Heymsfield entitled, "Improving the Quality of Dietary Research". In it they discuss the many limitations of dietary research and chart a way forward that includes the following 9 great suggestions,
  1. Recognize that the design features of phase 3 drug studies are not always feasible or appropriate in nutrition research, and clarify the minimum standards necessary for diet studies to be considered successful.
  2. Distinguish among study design categories, including mechanistic, pilot (exploratory), efficacy (explanatory), effectiveness (pragmatic), and translational (with implications for public health and policy). Each of these study types is important for generating knowledge about diet and chronic disease, and some overlap may invariably exist; however, the findings from small-scale, short-term, or low-intensity trials should not be conflated with definitive hypothesis testing.
  3. Define diets more precisely when feasible (eg, with quantitative nutrient targets and other parameters, rather than qualitative descriptors such as Mediterranean) to allow for rigorous and reproducible comparisons.
  4. Improve the methods for addressing common design challenges, such as how to promote adherence to dietary prescriptions (ie, with feeding studies and more intensive behavioral and environmental intervention), and reduce dropout or loss to follow-up.
  5. Develop sensitive and specific biomeasures of adherence (eg, metabolomics), and use available methods when feasible (eg, doubly labeled water method for total energy expenditure).
  6. Create and adequately fund local (or regional) cores to enhance research infrastructure.
  7. Standardize practices to mitigate the risk of bias related to conflicts of interest in nutrition research, including independent oversight of data management and analysis, as has been done for drug trials.
  8. Make databases publicly available at time of study publication to facilitate reanalyses and scholarly dialogue.
  9. Establish best practices for media relations to help reduce hyperbole surrounding publication of small, preliminary, or inconclusive research with limited generalizability.
But there is one recommendation that seems at odds with the rest,
Acknowledge that changes to, or discrepancies in, clinical registries of diet trials are commonplace, and update final analysis plans before unmasking random study group assignments and initiating data analysis.
For those who aren't aware, clinical trial registries are where researchers document, in advance, a trial's pre-specified methods and outcomes. The purpose of pre-registration is to reduce the risk of bias, selective reporting, and overt p-hacking that can occur (and has occurred) in dietary research.

Now to be clear, I'm a clinician, not a researcher, and I'm not sure how commonplace changes to, or discrepancies in, clinical registries of diet trials actually are, but even if they are commonplace, I'm not sure that's an argument in their favour. I do know that two of the authors claiming registry changes are commonplace were recently found to have modified one of their own pre-specified statistical analysis plans which, had it been adhered to, would have rendered their results non-significant.

But commonplace or not, is it good science?

To answer that question I turned to James Heathers, a researcher and self-described "data thug" whose area of interest is methodology (and whom you should definitely follow on Twitter). He described the notion of accepting that changes to, and discrepancies in, clinical registries are commonplace as "deeply silly".

He went on to elaborate as to why,
First of all - the whole definition of a theory is something which sets your expectations. The idea that 'reality is messy' does not interfere with the idea that you have hypothesis-driven expectations which are derived from theories.

Second: there is nothing to prevent you saying "WE DID NOT FIND WHAT WE EXPECTED TO FIND" and then *following it* with your insightful exploratory analysis. In fact, that would almost be a better exposition of the facts by definition as you are presenting your expectations as expectations, and your after-the-fact speculations likewise.

Third: if you have a power analysis which determines there is a correct amount of observations necessary to reliably observe an effect, having the freedom to go 'never mind that then' is not a good thing by definition.

Fourth: The fact that changes were made is never ever included in the manuscript. I.e., they are proposing being able to make changes to the protocol in the registry *without* having to say so. It's a 'new plan' rather than a 'changed plan'.

Fifth: If you can still do the original analysis then no-one will ever believe that you didn't change the plan after looking at the data. You have to protect yourself, and the best way to do that is to follow your own damned plans and be realistic from the get-go.
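To make his third point concrete: a pre-specified power analysis tells you, before any data are collected, how many participants you need to reliably detect the effect you registered. Here's a minimal sketch using Python's statsmodels library; the effect size and thresholds below are hypothetical illustrations, not values from any actual diet trial.

```python
# A minimal sketch of a pre-specified power calculation for a
# two-group comparison, using statsmodels. All numbers here are
# illustrative assumptions, not drawn from any particular study.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Suppose the registered protocol anticipates a moderate effect
# (Cohen's d = 0.5) at the conventional alpha = 0.05 and 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required participants per group: {n_per_group:.0f}")  # ~64
```

Stop recruiting short of that pre-specified target, or quietly swap the analysis, and the study can no longer reliably detect the very effect it was registered to find, which is exactly the "never mind that then" freedom Heathers is objecting to.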
Lastly, Heathers was unimpressed with the argument that registry changes are A-OK simply because they're commonplace, and he went so far as to discuss ancient Aztec punishments for those citing it.

All this to say, there's plenty of room to improve the quality of dietary research. Here's hoping the bulk of these suggestions are taken to heart, but please don't hold your breath.