The Question of Paleolithic Nutrition and Modern Health

Kenneth F. Kiple. Cambridge World History of Food. Editors: Kenneth F. Kiple and Kriemhild Conee Ornelas. Volume 2. Cambridge, UK: Cambridge University Press, 2000.

A conviction has been growing among some observers that contemporary human health could be substantially improved if we would just emulate our hunter–gatherer ancestors in dietary matters. A look at this contention seems an appropriate way to bring this work to a close because the subject takes us full circle—linking contemporary issues of food and nutrition with our Paleolithic past. In addition, it offers an opportunity for some summary, and, finally, it provides a chance to remind ourselves of how ephemeral food and nutritional dogma can be.

In fact, in retrospect we call such fleeting tenets “Food Fads” (see the treatment by Jeffrey M. Pilcher, this work chapter VI.12), and in the United States at least, it was not very long ago that vitamin E capsules were being wistfully washed down in the hope of jump-starting sluggish libidos (Benedek, this work chapter VI.15). Not long before that, the egg was enshrined as the “perfect” food, with milk in second place, and cholesterol, now an apparently significant ingredient in the gooey deposits that plug heart arteries, was not a word in everyday vocabularies (Tannahill 1989).

In those “B.C.” (before cholesterol) days, meat was in, and the starchy foods (potatoes, breads, and pastas), although full of fiber, were out—considered bad for a person’s waistline and health, not to mention social standing. Garlic had a similarly dismal reputation—only foreigners ate it—and only winos drank wine. Who could have foreseen then that we would soon toss into the ash heap of history all the nutritional lore that had guided us and embrace its polar opposite—the “Mediterranean Diet” (Keys and Keys 1975; Spiller 1991; see also Marion Nestle, this work chapter V.C.1)?

This, however, leads us back to the just-mentioned “growing conviction” among some students of current science and distant history that a fundamental flaw in our present approach to nutrition is a failure (or refusal) to realize that our diets were programmed at least 40,000 years ago and thus we are, to a very important extent, what our ancestors ate. Moreover, these same students believe that it is our obliviousness to the obvious that will prove most baffling of all to future observers of our nurture and nature (Eaton, Shostak, and Konner 1988a, 1988b; Nesse and Williams 1994; Profet 1995).

The purveyors of The Paleolithic Prescription (Eaton et al. 1988a; see also their “Stone Agers in the Fast Lane” [1988b]) belong to a group that subscribes to what has been labeled “Darwinian medicine”—a school of thought that has called into question many of the notions we have about nutriments and nutrients, as well as many practices of medicine itself.

Illustrative is the medical treatment of fever, routinely but perhaps mistakenly lowered with pain relievers. Fever, it is argued, is a part of a highly evolved defense system for combating pathogens. When fever is artificially lowered, that system is impaired, and recovery may take longer and be less perfect (Kluger 1979).

Another example is “morning sickness,” said to arise from a mechanism evolved in the distant past to produce sufficient queasiness in newly pregnant women that they avoid foods—like slightly overripe meat, or cruciferous vegetables—that could be mildly toxic and, therefore, harmful to fetuses, which are especially vulnerable during the first trimester of pregnancy. If such ideas are correct, then it is possible that treating this Stone Age condition with modern nausea-preventing drugs increases the risk of birth defects (Profet 1995).

A final example from the medical side of the Darwinian thinkers’ agenda has to do with what have become alarmingly high rates of breast and ovarian cancer. The Darwinians suggest that contemporary women endure an estimated three times more days with menstrual cycles than their hunter–gatherer ancestors because the latter averaged more children over a lifetime and breast-fed each for about three years—all of which suppressed menstrual cycles. The condition of modern women means three times the exposure to surges of estrogen that occur during those cycles—and estrogen has been implicated in the generation of female cancers (Nesse and Williams 1994).

No less disturbing are the nutritional implications that emerge from similar lines of reasoning that reach back into Paleolithic times. As we saw in Part I of this work, written by bioanthropologists, our hunter–gatherer forebears may have enjoyed such variety in viands that they fared better nutritionally than any of their descendants who settled down to invent agriculture; indeed, in terms of stature, it would seem that they did better than practically everyone who has lived ever since (see Mark Nathan Cohen 1989 and this work chapter I.6; Clark Spencer Larsen, this work chapter I.1).

Data on both stature and nutrition indicate that human height began to diminish with the transition from collecting to growing food, finally reaching a nadir in the nineteenth century. It has only been in the twentieth century that humankind, at least in the West, has started to measure up once again to our ancestors of some 25,000 to 10,000 years ago (Eaton et al. 1988a; Harris, this work chapter VI.5).

Proof of the early decline in stature and health, along with soaring rates of anemia and infant and child mortality, lies mostly in the skeletal remains of those who settled into sedentary agriculture. Such remains also bear witness to deteriorating health through bone and tooth lesions—caused on the one hand by an ever-more-circumscribed diet and on the other by the increased parasitism that invariably accompanied sedentism (Larsen 1995).

Paradoxically, then, enhanced food production, made possible by the switch to herding and growing, resulted in circumscribed diets and nutritional deficiencies. Thus it would seem that the various Neolithic revolutions of the world, which invented and reinvented agriculture and are collectively regarded as the most important stride forward in human history, were actually backward tumbles as far as human health was concerned. Moreover, when we remember that the superior nutritional status of the hunter-gatherers over that of their sedentary successors was achieved and maintained without two of the food groups—grains and dairy products—that we now call “basic” but which only came with the Neolithic Revolution, the hunter-gatherers’ superiority seems downright heretical in the face of current nutritional dogma.

In longitudinal terms, humans have been on the earth in one form or another for a few million years, during which time they seem to have perpetuated themselves with some efficiency. About 1.5 million years ago, they shifted from a diet of primarily unprocessed plant foods to one composed of increasing amounts of meat, some of which was scavenged. Approximately 700,000 years ago, humans began the deliberate hunting of animals, and Homo sapiens, who some 100,000 years ago had brains as large as ours today, became expert at it, judging from the frequent discoveries of abundant large-mammal remains at ancient sites (Larsen, this work chapter I.1). Meat may have constituted as much as 80 percent of the diet of those proficient at hunting, but they also ate wild vegetables and fruits; and certainly, as the Darwinians point out, it was these foods—meat, vegetables, and fruits—that constituted the diet from 25,000 to roughly 10,000 years ago (Eaton et al. 1988a).

In light of this nutritional past, it can be argued that humans are best adapted to these “old” food groups. Certainly, one piece of evidence in such an argument seems to be that humans are among the very few animal species that cannot synthesize their own vitamin C—an ability made superfluous by the great amounts supplied by the fruits and vegetables in the diet, and apparently lost along the evolutionary trail (Carpenter 1986; see also R. E. Hughes, this work chapter IV.A.3).

The other side of that coin, however, is that humans may not be well adapted to the “new” foods—dairy products and grains—and it is worth noting that hunter-gatherers seem to have gotten along better nutritionally without these two food groups than the sedentary folk who created them and have used them ever since.

Recently, of course, meat—one of the old food groups—has become a primary suspect in the etiology of chronic illnesses, especially that of coronary artery disease, and it has been pointed out that modern-day hunter-gatherers (for example, the Kung San in and around Africa’s Kalahari Desert) take in much more meat and, thus, more cholesterol than medicine today recommends. The traditional Kung San and other modern-day hunter-gatherers, however, have very low blood-cholesterol levels and virtually no heart disease. Similarly, the Inuit of the Arctic and Subarctic, until recently at least, consumed mostly animal foods with the same lack of deleterious effects. It has only been with their switch to the foods the rest of us eat that their health has begun to deteriorate (Eaton et al. 1988a; see also Harold H. Draper, this work chapter VI.10; Linda Reed, this work chapter V.D.7).

It is true that hunted animals as a rule have only a fraction (about one-seventh to one-tenth) of the fat of their domesticated counterparts, along with a better ratio of polyunsaturated to saturated fats. Consequently, it has been estimated that our hunter–gatherer ancestors got only about 20 percent of their calories from fats, whereas fats deliver about 40 percent of the total calories in modern diets. But because the amount of cholesterol in meat is little affected by fat content, it would seem that our ancestors, like the Kung San, also consumed more cholesterol than we do today and more than is recommended by nearly all medical authorities (Eaton et al. 1988a; see also Stephen Beckerman, this work chapter II.G.10).

It is interesting that a group of researchers in the United Kingdom has argued that cholesterol may not be the culprit—or at least not the only culprit—in bringing about coronary artery disease. Their search for its causes entailed the systematic and painstaking gathering of data on diets and deaths from heart disease throughout Europe, and analysis of these data has revealed one positive correlation: People living in the four European countries with the highest rates of heart disease (all in northern Europe) take in far more calcium than people in the four countries with the lowest rates (all in southern Europe) (see Stephen Seely, this work chapter IV.F.4).

Of course, that which is a significant correlation to some is merely an interesting speculation to others. But it may be the case that calcium can be harmful, depending on its source. During hunting-and-gathering times, calcium came mostly from plant foods, and bone remains indicate that our ancestors got plenty of it. Since the Neolithic, however, dietary calcium has increasingly been derived from dairy products, such as cheese and yoghurt—and milk, for those who can tolerate it.

Indeed, “tolerate” is the key word as far as milk is concerned, and, presumably, much of the reason for the difference in calcium intake between northern and southern Europeans is that the latter are less likely to consume milk after weaning because they are more likely to be lactose intolerant (cheese and yoghurt are less of a problem because when milk is fermented, lactose is converted to lactic acid) (Kretchmer 1993; see also K. David Patterson, this work chapter IV.E.6). Northern Europeans, by contrast, collectively constitute a lactose-tolerant enclave in a largely lactose-intolerant world, and, not incidentally, the point is made that white North Americans, whose ancestral homeland was northern Europe, are also able to tolerate milk and also suffer high rates of coronary artery disease (Seely, this work chapter IV.F.4).

One might then argue that in an evolutionary scheme of things, dairy products are newfangled: Allergies to cow’s milk are common among children, and milk-based formulas can be deadly for infants (Fauve-Chamoux, this work chapter III.2). It may be that lactose intolerance is Nature’s way of limiting the consumption of dairy foods that can calcify the arteries, and that the heart attacks suffered by adult northern Europeans and their progeny in other parts of the world are the price of overcoming this trait. It is interesting to note that areas in Europe where oats are consumed are also those with the highest rates of coronary disease, and oats contain a great deal of calcium (Seely, this work chapter IV.F.4), which moves us to grains—the other “new” food group.

As with dairy products, it may be that humans are not well adapted to grains either. People with celiac disease—caused by the proteins in wheat and some other grains—most certainly are not. And perhaps significantly, this relatively rare genetic condition has its highest frequency in those regions that spawned the most recent major wheat-producing societies (see Donald D. Kasarda, this work chapter IV.E.2). Is it, therefore, possible that celiac disease is a holdover from earlier times, when the body evolved a mechanism to prevent the consumption of toxic wild grains? One trouble with such an argument, of course, is the need to explain how it was that the Neolithic Revolution ever got off the ground in the first place if people were genetically incapable—at least initially—of digesting the crops they grew.

One answer that has been proposed is that grains were not originally grown because of a need for food but, rather, because of a taste for alcohol (Katz and Voigt 1986; see also Phillip A. Cantrell, this work chapter III.1). It appears that barley-ale was being produced at least 7,500 years ago, and the records of the ancient Sumerians, for example, show that a staggering (no pun intended) 40 percent of their grain production went into brewing (Tannahill 1989). And ale making did precede bread making, so perhaps there was a time when grains were mostly for drinking, and only later was there a shift toward eating them.

The rise to prominence of grains that tended to narrow the diet—wheat in temperate Asia and Europe, rice in tropical and semitropical Asia, millet and sorghums in Africa, and maize in the Americas—was perhaps the most important factor in the previously mentioned decrease in human stature and significant deterioration of dental and skeletal tissues. This is because these so-called “super” foods are super only in the sense that they have sustained great numbers of people; they are far from super nutritionally. Almost all are poor sources of calcium and iron; each is deficient in essential amino acids, and each tends to inhibit the activities of other important nutrients (Larsen, this work chapter I.1, upon which the following paragraph also is based).

Rice, for example, inhibits the activity of vitamin A, and if its thiamine-rich hulls are pounded or otherwise stripped away, beriberi can be the result. The phytic acid in wheat bran chemically binds with zinc to inhibit its absorption, which, in turn, can inhibit growth in children. Zein—the protein in maize—is deficient in three essential amino acids, which, if not supplemented, can also lead to growth retardation; moreover, because the niacin contained in maize is in bound form, many of its consumers have suffered the ravages of pellagra. In addition, maize (along with some other cereals) contains phytates that act against iron absorption and also has sucrose, which—we are continually reminded—exerts its own negative impact on human health (significantly, dental caries were exceedingly rare in preagricultural societies). In short, it is arguable that humans may not be programmed for the newfangled grains either.

Much firmer ground, however, is reached with the assertion that humans are definitely not programmed for the quantities of sodium chloride many now take in. Even the most carnivorous of our ancestors, on an 80 percent meat diet, would have ingested less than half of the average per capita daily salt intake in the United States, and of course, excessive dietary salt has been linked with stomach cancer and hypertension, the latter disease a major contributor to heart attacks, kidney failure, and especially strokes. Modern-day hunting-and-gathering populations that consume little sodium (meaning amounts comparable to those of our hunter–gatherer forebears) suffer no hypertension—not even, it is interesting to note, the age-related increase in blood pressure that medicine has come to believe is “normal” (Cohen, this work chapter I.6).

In the case of sodium chloride, as in that of lactose absorption, western Europeans, once again, pioneered in the attempt to reprogram the Stone Age metabolism, although in this case they were joined by East Asians. All mammals normally consume less sodium than potassium, which complements bodily mechanisms that strive to conserve sodium, because the mineral is, after all, crucial to life itself. But not the Europeans—at least not after salt became an inexpensive commodity some 1,000 years ago. They preserved foods with salt, cooked them in salt, then added more salt at the table, with the result that over the ages, their bodies seem to have learned to some extent to rid themselves of sodium with brisk efficiency through urine and perspiration (Kiple 1984; Eaton et al. 1988a; see also Thomas W. Wilson and Clarence E. Grim, this work chapter IV.B.7).

By contrast, it has been argued that those not of western European (and probably eastern Asian) ancestry and, thus, without a long history of heavy salt consumption, such as African-Americans (whose ancestral lands were poor in salt, whose ancestral cooking habits did not call for it anyway, and whose bodies seem still to treat the mineral as something precious to be conserved), are in considerable peril in cultures where sodium is routinely added at almost every step in food preservation, processing, preparation, and consumption (Wilson 1987; Wilson and Grim, this work chapter IV.B.7). Moreover, because potassium is progressively leached out during these procedures, people in affluent countries have now turned the Stone Age potassium-to-sodium ratio upside down and are consuming some 1.5 times as much sodium as potassium (see David S. Newman, this work chapter IV.B.6). Modern-day hunter-gatherers, by contrast, are said to have potassium intakes that exceed sodium by 10 times or more (Eaton et al. 1988a).
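The reversal of the Stone Age sodium-to-potassium balance described above is simple enough to illustrate with a small sketch. The ratios (1.5 to 1 for affluent modern diets; potassium exceeding sodium by 10 times or more for hunter-gatherers) come from the chapter itself, but the daily intakes below are hypothetical numbers chosen only to match those ratios, and the function name is ours:

```python
# Sodium-to-potassium ratios implied by the figures cited in the text.
# The intakes are illustrative, not measured values.

def sodium_to_potassium(sodium_mg, potassium_mg):
    """Ratio of dietary sodium to dietary potassium."""
    return sodium_mg / potassium_mg

# Modern affluent diet: ~1.5 times as much sodium as potassium.
modern = sodium_to_potassium(sodium_mg=3_600, potassium_mg=2_400)

# Hunter-gatherer diet: potassium exceeds sodium by 10 times or more.
forager = sodium_to_potassium(sodium_mg=700, potassium_mg=7_000)

print(f"Modern affluent diet:  Na/K = {modern:.2f} (sodium dominates)")
print(f"Hunter-gatherer diet:  Na/K = {forager:.2f} (potassium dominates)")
```

The point of the sketch is only that the two ratios sit on opposite sides of 1: the modern diet inverts the relationship that, the chapter argues, mammalian sodium-conserving physiology evolved to expect.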

Another controversial, but fascinating, avenue of investigation traveled by some of the Darwinian thinkers has to do with iron, an essential mineral found in all cells in the body and a key component of hemoglobin, wherein can be found about 70 percent of all our iron. The remaining 30 percent is stored in the spleen, the liver, and especially in bone marrow, where it can be called upon as needed. Iron-deficiency anemia is the most common form of anemia worldwide and is especially prevalent in developing countries, where the diet is likely to contain little meat (from which iron is most efficiently absorbed) but much vegetable food (from which iron is poorly absorbed). Moreover, some dietary substances in vegetable foods, such as oxalic acid and tannic acid, actually interfere with iron absorption (Eaton et al. 1988a; Bollet and Brown 1993).

As a few of the authors in the present work point out, however, the presence of pathogens also affects iron levels, and the impact of this presence is more profound than that of dietary imbalance. Helminthic parasites, such as the hookworm, make their living on human blood and the iron it contains, and they frequently cause anemia when they prosper to the point that they become too many, taking too much advantage of a good thing (see Susan Kent, this work chapter IV.D.2; Kent and Patricia Stuart-Macadam, this work chapter IV.B.3; Nevin S. Scrimshaw, this work chapter VI.3).

What is of interest in addition to helminthic parasites, however, is a recently demonstrated and much more subtle interaction of iron and smaller parasites. Bacteria and viruses also need iron to multiply. To forestall such multiplication, or at least to slow it down, our bodies have developed the knack of reacting to pathogenic presence by drawing down the iron supply in the blood and putting it in storage—in effect, starving the pathogens—until the danger has passed, whereupon iron levels in the blood are permitted to rise again (Kent, this work chapter IV.D.2). But it would seem that only a small percentage of physicians are aware of this phenomenon, with the result that they may mistake bodily defenses for anemia and issue prescriptions for iron that can hurt the patients while helping the pathogens (Kent, this work chapter IV.D.2; see also Nesse and Williams 1994).

Moreover, studies in Polynesia, New Guinea, and West Africa have demonstrated that infants given iron have a much higher incidence of serious infectious diseases than those who are not given iron, and that discontinuing the administration of iron in itself lowers the incidence of infections. Thus, it is urged that health workers be reeducated to understand that, in many instances, the high incidence of anemia in the developing world does not mean a dietary failure that requires iron administration; rather, it means that an important bodily defense is at work, combating high rates of parasitism (Nesse and Williams 1994; Kent, this work chapter IV.D.2).

In the developed world, the argument continues, the extent to which iron may be a threat to health is compounded, and the body confounded, by the indiscriminate and massive iron fortification of many cereal products and the regular use of iron supplements. This may be especially serious in countries like the United States, where elderly men and post-menopausal women have the highest incidence of anemia—which, it is contended, is really a bodily defense against the chronic diseases to which these groups are most vulnerable, including neoplastic cells that, like pathogens, also need iron for multiplication. In addition, evidence is accumulating that high iron levels play a significant role in coronary artery disease (Kent, this work chapter IV.D.2).

Another recently recognized bodily mechanism that evolved with humans has to do with an unfortunately tenacious ability of the body to maintain a certain level of fat, even if it is an unhealthy level. A recent Rockefeller study (Leibel, Rosenbaum, and Hirsch 1995) indicates that the body has a preset level of fat that it strives to maintain by changing the efficiency with which it metabolizes food, so that it can either reduce or increase the amount of fat going into storage. What this means—to put the Rockefeller study together with the so-called thrifty-gene theory first advanced by the geneticist James Neel (1962)—is that those who try to lose weight seem to be up against Stone Age “thrifty genes” that probably evolved during the glacial epochs. Then, nutrition under harsh climatic conditions often would have been a matter of feast and famine, and individuals who, during times of feasting, could most efficiently store excess calories as fat to ride out famines would have enjoyed a considerable survival advantage over those lacking this ability (see Leslie Sue Lieberman 1993 and this work chapter IV.E.7).

Today, however, these thrifty genes are blamed directly for diabetes and gallstones and indirectly for all conditions in which obesity is a contributing factor. Apparently, here is another Stone Age mechanism that is troubling modern humans, particularly when today’s diets give us calories in such compact forms that we can easily get more than we need before ever feeling comfortably full. It has been estimated that, in hunter–gatherer days, 5 pounds of food were required to provide 3,000 calories; today, 5 pounds of food may deliver upwards of 9,000 calories—some three times as many (Eaton et al. 1988a; Lieberman 1993).
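The scale of that shift is easy to verify with a back-of-envelope calculation. The figures (5 pounds yielding 3,000 calories then, as much as 9,000 calories now) are the chapter's own estimates; the brief sketch below merely computes the caloric densities they imply:

```python
# Caloric densities implied by the chapter's estimates:
# hunter-gatherer era: ~3,000 kcal from 5 lb of food;
# today: upwards of ~9,000 kcal from the same 5 lb.

POUNDS_OF_FOOD = 5

def kcal_per_pound(total_kcal, pounds=POUNDS_OF_FOOD):
    """Caloric density: calories delivered per pound of food."""
    return total_kcal / pounds

forager_density = kcal_per_pound(3_000)  # ~600 kcal per pound
modern_density = kcal_per_pound(9_000)   # ~1,800 kcal per pound
ratio = modern_density / forager_density

print(f"Hunter-gatherer diet: {forager_density:.0f} kcal/lb")
print(f"Modern diet:          {modern_density:.0f} kcal/lb")
print(f"Modern food is roughly {ratio:.0f} times as calorically dense")
```

At three times the density, a stomach calibrated by Stone Age genes to feel full on 5 pounds of food can take in a large caloric surplus long before satiety arrives, which is the conundrum the paragraph describes.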

Such a caloric conundrum presents an especially serious hazard to some groups of humans in transition. Studies of Native Americans, Australian Aborigines, Pacific islanders, and Alaskan Eskimos all have reported precipitous declines in health, suffered upon the abandonment of traditional diets and patterns of exercise (Lieberman 1993; Kunitz 1994). North American Indians, along with Polynesians, Micronesians, and native Hawaiians, have some of the highest rates of diabetes mellitus in the world. In sub-Saharan Africa, hypertension rates are rising alarmingly in the cities, where salt-laden prepared foods are replacing the foods of the villages. And coronary artery disease and hypertension are becoming commonplace in the Pacific, where just decades ago they were unknown (Wilson 1987; Lieberman 1993; Kunitz 1994; Draper, this work chapter VI.10; see also Nancy Davis Lewis, this work chapter V.E.3).

By way of concluding, it remains to be seen how much blame for current chronic diseases can be laid on Paleolithic nutritional adaptations. Another factor, of course, is that to some extent, humans have also been shaped by the requirements of more recent ancestors. Those who settled into cold and damp places, for example, chose diets that were rich and fatty and that helped build fat to insulate against the weather. In warm climates, where evaporating perspiration served to cool the body, strong herbs and spices that encouraged sweating were consumed; and much liquid was drunk to replace lost fluids. Today, central heating obviates the need for a fatty diet, and air conditioning reduces the need to sweat. But there are many dietary holdovers from those old days, too, that do humans no good and, in tandem with Stone Age genetic mechanisms, perhaps a great deal of harm. For example, it was not all that long ago that the author of a late-eighteenth-century cookbook picked a quarrel with a competing writer who called for 6 pounds of butter to fry 12 eggs. One-half that amount of butter was plenty, she primly assured her readers (Tannahill 1989).

Yet, hardly anybody wants to return to a hunting-and-gathering lifestyle. And although history sheds no light on an ideal diet, it does, in a way, defend those grains and dairy products that have just been examined and which, it can be argued, at least have been consumer tested for the past 8,000 to 10,000 years.

Nonetheless, what our authors and others have had to say on matters of Paleolithic nutrition has been well researched, well reasoned, and well received in many quarters. In other words, their work is not wild speculation, and their point, namely that natural selection has not had time to revise our bodies to cope with modern diets, is a good one. But about all Darwinian nutrition can actually do for us at this point is to remind us of the wisdom of moderation in food consumption and nutritional supplementation from yet one more perspective.