Editor’s Note: The authors of this editorial have previously worked with and received funding from Coca-Cola, ILSI, and other industry or industry-funded groups. Their arguments, both here and elsewhere, have been used to undermine the value of nutrition research categorically and to dismiss any link between diet and disease.
As shown repeatedly on CrossFit.com, however, there is substantial evidence that certain dietary factors contribute to the progression of metabolic diseases and that their removal can slow or reverse disease progression. It is thus important to note that while these authors’ arguments do substantially undermine nutritional epidemiology specifically, along with other forms of research that rely on surveys to estimate what and how much subjects eat, they in no way reduce the value of the many randomized controlled trials that clearly demonstrate links between diet and disease.
Question
Do dietary surveys — used in nutritional epidemiology to assess what and how much subjects eat — provide an accurate estimate of food intake? If not, to what extent are they flawed, and what are the implications of these flaws?
Takeaway
Dietary surveys are both practically and theoretically flawed to an extent that renders them fundamentally unscientific. Any data or conclusions that rely on surveys to assess dietary intake are unreliable and should not be used to inform nutritional recommendations or individual behavior. Given that the majority of public health recommendations regarding the link between diet and disease are heavily informed by nutritional epidemiology, and nearly all nutritional epidemiology has relied on dietary surveys, the majority of these recommendations are unscientific. The authors argue nutrition research can only regain its credibility by discarding any data or beliefs informed by dietary surveys and exclusively using more rigorous methodologies in all future research.
This 2018 review argues the tool used to estimate intake in the majority of nutritional research — memory-based dietary assessment methods, or M-BMs (1) — is so theoretically and practically flawed that any data or conclusions derived from it are fundamentally inadmissible as evidence.
The authors note the historical successes of nutrition research. More than a century ago, deficiency-related diseases (beriberi, rickets, goiter, pellagra, etc.) were relatively common; today, less than 20% of the population is at risk of a consequential dietary deficiency (2). The effort to identify and mitigate deficiencies in the American diet is a notable public health success. Beginning in the mid-20th century, nutrition research began instead to focus on the link between diet and other noncommunicable diseases, such as obesity, diabetes, and heart disease (3). This research did not directly measure what subjects ate and drank, instead asking them what they recalled eating and drinking (4) — as Walter Willett describes such methods in his seminal textbook on nutrition epidemiology, they represent “perceptions of usual intake” (5). These recalled values were then correlated to risk of disease. The conclusions, which linked specific dietary factors to increased risk of specific disease states, implied that the subjects’ recollections of the foods they ate and drank were accurate reflections of their actual intake.
As early as the 1950s, research indicated these recollections did not provide reliable estimates (6). These findings, along with subsequent research, have repeatedly demonstrated that what subjects say they eat in dietary surveys bears a trivial relationship to the types and amounts of foods they actually consume, and the results of dietary surveys, if taken at face value, often imply diets that are biologically implausible (7). Research has also found the correlations between diet and disease derived from these studies are consistently unreliable; one recent analysis found zero out of 50 claims from nutritional epidemiology were successfully replicated in subsequent randomized controlled trials, with five studies showing correlations in the opposite direction (8).
Despite decades of data undermining their validity, these methods remain prominent: 80% of studies in the USDA’s Nutrition Evidence Library — the basis for the quinquennial dietary guidelines — rely on dietary recall methods, and the National Academy of Sciences specifically argues for their continued use (9). Additionally, NHANES, a large recurring survey performed by the CDC, uses dietary recall to estimate Americans’ intake levels. This survey is regularly used to inform both public policy and public perception of the healthfulness or unhealthfulness of certain foods (10). When the authors of this review previously argued against the validity of these data in a published editorial, they were met with ad hominem attacks and vitriolic rebuttals (11).
Here, the authors argue indirect dietary assessments are both practically and theoretically flawed to an extent that undermines all data and conclusions drawn from them. Broadly, such assessments provide only uncorroborated self-reported data — a set of anecdotes describing perceived or recalled dietary intake rather than a direct measurement of what subjects ate. They constitute a form of evidence that would be considered clearly inadmissible in many other fields of study (12).
Specific practical issues with such assessments include:
- Deception: It is well-established that when subjects are asked about what they ate, they will deliberately lie, systematically biasing their reported intake toward foods that are considered healthy and away from those considered unhealthy. This error alone, the authors argue, establishes dietary recall as “pure fiction” (13).
- Reactivity: When subjects know their intake is being monitored or that they will have to report on their intake in the future, their dietary patterns shift toward foods they perceive as healthy and away from foods they perceive as unhealthy. More health-conscious subjects who have healthier non-dietary behaviors are more likely to make such shifts, which may exaggerate healthy-user biases. Subjects’ perceptions of what is healthy or unhealthy are themselves often derived from the results of previous research using dietary recall, which then leads to a pattern in which repeated manifestation of bias can be interpreted as support for a dietary hypothesis (14).
- Credulousness: It is unequivocal that even in the absence of deception or reactivity, we are simply unable to accurately recall what we ate. One editorial, speaking about the use of survey-based methodologies across a variety of fields, argued half of what informants report is “probably incorrect.” Nutritional epidemiology, however, analyzes this data as if it provides a completely accurate assessment of each subject’s historical intake (15).
These practical issues are sufficient to disqualify the use of any research using dietary recall surveys to track intake. Additional theoretical issues undermine the methodology at a more fundamental level:
- Categorical errors: Nutrition epidemiology does not measure dietary intake. Instead, it measures subjects’ perceptions of their dietary intake. It is a categorical error to conclude on the basis of any dietary recall that there is a link (or lack thereof) between any dietary factor and a disease state; instead, we can only say that there is a correlation between the recollection of having consumed a certain amount or type of food and the frequency of a particular condition.
- Misplaced concreteness: Nutrition epidemiology treats the dietary recollections subjects provide as direct, concrete measurements of actual intake. They are not. To mistake abstract estimates of intake for direct measurements is pseudoscientific and misleading (16).
- Pseudo-quantification: Because these surveys do not directly measure dietary intake, researchers must derive numeric estimates of the amounts and types of calories consumed in order to perform the intended statistical analyses. The surveys attempt to quantify intake by asking subjects how much of a certain type of food they recall eating, or how often they eat it. Such methods do not produce truly quantitative data but merely assign numerical values to anecdotal data; the actual dietary intake patterns remain unknown (17).
These errors are compounded by these surveys’ reliance on incomplete and faulty nutrition databases. The USDA database, which many of these surveys reference to turn an estimated amount of food consumed (e.g., 2 cups of broccoli per week) into nutrient estimates (e.g., 62 calories, 60 mg sodium, etc.), contains 8,000 items. The U.S. food supply contains an estimated 85,000 items. Each of these 85,000 items can vary in nutrition content based on production, transport, storage, processing, and preparation, all of which vary over time. These figures clearly indicate that even if these dietary surveys could accurately track the types and amounts of foods subjects ate — and as shown above, they cannot — they would fail to accurately estimate nutrient consumption (18).
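The database problem described above can be made concrete with a short sketch. The Python below is a minimal, hypothetical illustration (the function, the food names, and the per-cup nutrient values are invented for this example; the broccoli figures are simply chosen so that 2 cups yields the article’s 62 calories and 60 mg of sodium) of how a recalled weekly amount is converted into nutrient estimates via a database lookup, and how items missing from the database silently drop out of the totals.

```python
# Illustrative sketch only: a tiny stand-in for a nutrient database.
# The real USDA database covers roughly 8,000 of an estimated 85,000
# items in the U.S. food supply, so most foods have no entry at all.
NUTRIENTS_PER_CUP = {
    "broccoli": {"calories": 31, "sodium_mg": 30},  # hypothetical per-cup values
}

def estimate_weekly_nutrients(recalled_intake):
    """Sum nutrient estimates from recalled (food, cups-per-week) pairs.

    Foods absent from the database contribute nothing to the totals,
    which is the kind of coverage gap the authors describe.
    """
    totals = {"calories": 0, "sodium_mg": 0}
    missing = []
    for food, cups in recalled_intake:
        entry = NUTRIENTS_PER_CUP.get(food)
        if entry is None:
            missing.append(food)  # no database entry: silently uncounted
            continue
        for nutrient, per_cup in entry.items():
            totals[nutrient] += per_cup * cups
    return totals, missing

totals, missing = estimate_weekly_nutrients([("broccoli", 2), ("bibimbap", 3)])
print(totals)   # {'calories': 62, 'sodium_mg': 60}
print(missing)  # ['bibimbap']
```

Note that even the "successful" lookup rests on two unverified inputs: the subject’s recollection of eating 2 cups of broccoli, and a single fixed nutrient profile that ignores variation from production, storage, and preparation.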
Recently developed tools used to monitor dietary intake, such as using photographs of meals to track what subjects eat, are subject to these same issues and so are equally flawed (19).
Crucially, we cannot accurately estimate the direction or magnitude of the errors involved in these dietary assessments. Science requires that we be able to distinguish fact from fiction and accurately assess error. Dietary recalls are “non-falsifiable anecdotes” rather than direct measures of food intake. Thus, dietary assessment data are fundamentally unscientific, as are claims in some papers that these errors can be corrected for (20).
The authors of this review argue these issues are sufficiently ubiquitous and egregious to render spurious the majority of the perceived relationships between diet and disease (21). See Editor’s Note above for additional context.