Michael Corballis & Stephen Lea. The International Handbook of Psychology. Editor: Kurt Pawlik & Mark R Rosenzweig. Sage Publications. 2000.
By the middle years of the twentieth century, the phrase ‘comparative psychology’ had come to be used to refer to any investigation of psychological processes, especially learning, in animals other than humans. Comparative psychology in its original sense, however, is concerned with comparisons of behavior and mental function across species, and this more accurate usage is now once again common. Sometimes, the aim of comparative psychology is simply to understand different species in their own right, and comparative psychologists can often provide practical information to people who keep animals or deal with them in the wild. Often, though, the aim is the more ambitious one of using comparisons between species in order to understand the evolution of psychological processes, and such use of the comparative method can properly be called ‘evolutionary psychology’ (Corballis & Lea, 1999). Most ambitious of all is the attempt to use comparative psychology to gain an understanding of our own, human, mental evolution, and the place of humans in relation to other species. The term evolutionary psychology is commonly restricted to this narrower sense, and in particular to the attempt to understand the contingencies that have shaped the modern human mind during the last stages of its emergence. A good deal of evolutionary psychology therefore makes little reference to other species; rather, through a process that has been dubbed ‘reverse engineering,’ it seeks to explain the modern human mind in terms of adaptations that arose in the supposed ‘hunter-gatherer’ phase of hominid evolution, in the so-called ‘environment of evolutionary adaptedness’ of the human species.
A recurring question is whether there is a continuity of mental development between ourselves and other species, including our closest relatives, the great apes; or whether there is somehow a profound discontinuity, or dichotomy, that may even render comparative psychology largely irrelevant to understanding the human mind. The theologically inclined may see humans as closer to angels than to animals, and we are indeed often at pains to suppress the ‘animal’ side of our nature; moral systems often explicitly call on us to do so. But the idea of a fundamental discontinuity was given scientific and philosophical respectability by the seventeenth-century philosopher René Descartes.
A Historical Perspective
Impressed with mechanical toys, popular at the time, Descartes asked the hypothetical question of whether it would be possible, at least in principle, to construct a mechanical replica of a human being. He concluded that it might be possible to construct a mechanical replica of an animal, even an ape, but that humans possessed a freedom of thought and action that defied explanation in mechanical terms. Religion did feature in Descartes’ account, however, since he proposed that human thought must be governed, at least in part, by God-given nonmaterial forces operating through the pineal gland. He was especially impressed with the unbounded nature of language, enjoyed even by human imbeciles, but apparently unattainable by any other species. Language has continued to feature prominently in debates about continuity vs. discontinuity in human evolution, as we shall see below.
Although some of Descartes’ contemporaries had supposed that humans might, after all, be just mechanical devices, it was not until Charles Darwin’s theory of evolution in the mid-nineteenth century that the idea of a continuity between ourselves and other species gained prominence. Darwin himself was at first reluctant to spell out the implications for human evolution, but he evidently saw an important role for psychology, and in the first edition of Origin of Species he wrote:
In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Light will be thrown on the origin of man and his history. (Darwin, 1859, p. 488)
In later editions, he amended the second sentence of this passage to refer to ‘the foundation already laid by Mr Herbert Spencer,’ a philosopher who had in many respects anticipated the theory of evolution. It was Spencer, in fact, who coined the phrase ‘survival of the fittest.’ But despite this early promise, psychologists have until recently paid remarkably little attention to evolutionary theory. In his popular account of evolutionary psychology, Steven Pinker (1997) writes: ‘The allergy to evolution in the social and cognitive sciences has been, I think, a barrier to understanding’ (p. 23).
The first psychological laboratory was established in Leipzig in 1879 by Wilhelm Wundt, a dualist who owed more to Descartes than to Darwin, and sought to base psychology on subjects’ introspective observations of their own minds—although he also carried out more objective experiments, including some on reaction time. Wundt’s introspectionism was transported from Leipzig to Cornell University in the United States by an Englishman, Edward B. Titchener, and came to be known there as structuralism. These developments in mainstream psychology did nothing to encourage a comparative approach, since by its very nature introspective psychology seems to preclude investigation of other animals.
Perhaps not surprisingly, given Darwin’s influence, comparative psychology was founded in England, and by biologists rather than psychologists. It was based on objective study rather than introspection, and much of the early interest lay in establishing a phylogenetic scale, a scala naturae, in which species might be ordered in terms of their intellectual prowess. In 1883, the British biologist George John Romanes published Mental Evolution in Animals, in which he identified some 50 levels of intellectual development, with only humans reaching the top. The next in line, apes and dogs, made it only to rung 28, followed closely by monkeys and elephants on rung 27.
In 1894, Conwy Lloyd Morgan published his Introduction to Comparative Psychology, which introduced his famous canon:
In no case may we interpret an action as the outcome of a higher psychical activity, if it can be interpreted as the outcome of one which stands lower in the psychological scale. (p. 53; his italics)
Lloyd Morgan’s canon has stood as a corrective to the common and perhaps wishful tendency to proclaim human-like intelligence in other animals, famously demonstrated in 1904 in the claim that a horse, known as ‘Clever Hans,’ was capable of human-like thoughts and language ability. Its trainer, a retired schoolteacher called Wilhelm von Osten, had taught Clever Hans to answer questions by tapping out letters of the alphabet with a front hoof, with each letter represented by a different number of taps. In this manner, the animal was apparently able to answer quite sophisticated questions. The psychologist Oskar Pfungst demonstrated that von Osten, unknown even to himself, was giving subtle signals to the animal as to when to stop tapping, and so was himself generating the answers. Nevertheless, this celebrated case convinced a number of prominent scholars of the time that animals could think and even talk, if only instructed in the right way.
When Darwinian ideas began to infiltrate mainstream psychology itself, they led to the idea that thinking was governed by instincts as well as by learning, in humans as well as in nonhuman animals. The most influential of the ‘instinct psychologists’ was William McDougall, whose 1908 book An Introduction to Social Psychology drew explicitly on Darwinian theory to develop a theory of the human mind that was based on the comparative method. McDougall systematically equated emotions and instinct, but eventually instinct psychology degenerated into the simple listing of instincts for different functions, and lacked explanatory power. The author of one text counted 1,594 instincts that had been attributed to animals and humans (Bernard, 1924).
It is not surprising, then, that instinct psychology succumbed to the relative parsimony of behaviorism, which was to dominate academic psychology, at least in North America, for most of the first half of the twentieth century. Even so, the father of behaviorism, John B. Watson, had begun his research career with the study of instinctive behaviors in the noddy and sooty terns in the Tortugas, and his first book, published in 1914, was entitled Behavior: An Introduction to Comparative Psychology. But the behaviorists gradually came to emphasize learning at the expense of instincts; in the Psychological Abstracts, entries under the term ‘instinct’ itself, relative to the terms ‘drive,’ ‘reinforcement,’ and ‘motivation,’ dropped from 68% to 8% between 1927 and 1958 (Herrnstein, 1974). Ultimately, then, behaviorism did little to encourage comparative psychology or even evolutionary thinking. The aim was to discover common principles underlying the behavior of humans and other species, and not to dwell on differences between species that might reflect specific evolutionary adaptations.
The reluctance to pursue evolutionary theory in the early part of the twentieth century was not restricted to psychology, and it was not until the so-called ‘modern synthesis’ of the 1930s that evolutionary ideas returned to mainstream biology. They have been even slower to return to psychology. In the social sciences, Darwinian ideas had been tainted by the excesses of social Darwinism and the eugenics movement, promulgated by Darwin’s cousin, Francis Galton, and others in the latter part of the nineteenth century. By the 1930s, this had led to sterilization laws for criminals and mental defectives in a number of countries, including Australia, Canada, Japan, Sweden, and many states in the US. Such practices, which persisted until quite recently in some countries, along with the rising threat of Nazism with its emphasis on racial purity, led to a revulsion toward social Darwinism. What Pinker (1997) calls the ‘standard social science model’ follows the cultural determinism of the followers of Franz Boas, such as Benedict (1935); it accounts for human behavior almost wholly in terms of culture rather than instinct, and regards culture as unpredictable in terms of geographical, economic, or ecological constraints.
This is not to say that the comparative method was totally eclipsed. Wolfgang Köhler’s (1925) studies of insightful problem-solving in chimpanzees long anticipated much of the present-day work on cognitive processes in apes. Beginning in the 1930s the ethologists, who were based in zoology rather than psychology, continued to do work that largely complemented that of the behaviorists, documenting differences between species—although works such as Maier and Schneirla’s (1935) classic Principles of Animal Psychology ensured at least some balance between learning and instinct as explanations for adaptive behaviors. In a long series of studies originating in the 1930s, Konrad Lorenz (1971) and Niko Tinbergen (1972) described a large number of instinctive behaviors, in a range of species, that seemed to owe little to conditioning or any other form of learning the comparative psychologists had studied. Activities like the following response of goslings, the egg fanning of sticklebacks, or the ways in which black-headed gulls remove egg shells, were regarded as adaptations selected because they increased reproductive success. Instinctive behaviors were often found to be dependent on innate releasing signals whose effectiveness did not depend on prior conditioning. Both authors stressed that the comparative approach was also essential to an understanding of human behavior, in marked contrast to the behaviorist notion that the mind of the human infant is a tabula rasa, merely awaiting the imprint of experience.
The notion that complex behavior might be purely instinctive was attacked by psychologists, such as Frank Beach (1955), who argued that it was not possible to separate instinct from learning. Nevertheless, by the 1950s, there was an increasing realization that a psychology based on a single species was unlikely to be wholly satisfactory, and it was again Beach (1965, p. 10) who asked, ‘Are we building a general science of behavior or merely a science of rat learning?’ Two behaviorists, Keller and Marian Breland, observed that even in experiments designed to measure pure operant conditioning, instinctive behaviors would intervene, a phenomenon they called ‘instinctual drift.’ They concluded their article, pertinently entitled ‘The misbehavior of organisms,’ as follows:
After 14 years of continuous conditioning and observing thousands of animals, it is our reluctant conclusion that the behavior of any species cannot be adequately understood, predicted, or controlled without knowledge of its instinctive patterns, evolutionary history, and ecological niche. (Breland & Breland, 1961, p. 684)
Such observations led to modifications to the learning theory of the time, with the suggestion that evolution prepares animals to learn some things more easily than others, and that even classical conditioning is dependent on environmental contingencies, not only in the animal’s experience, but in its evolutionary history as well. Nonetheless, the demonstration of such ‘constraints on learning’ had only a limited impact on evolutionary thinking, in part because of the confrontational nature of the interaction between the two approaches to animal behavior, in part because adaptationist arguments seemed to lead only to a fragmentation of comparative psychology—a hundred sciences of learning in a hundred species, rather than the general science of animal behavior Beach had called for.
The ‘Cognitive Revolution’
The late 1950s saw another wind-change that had quite profound consequences for comparative and evolutionary psychology. Behaviorism lost its role as the dominant force in North American psychology, but not so much because of biological arguments as because of linguistic ones. In 1959, the linguist Noam Chomsky wrote a highly influential review of Skinner’s (1957) book Verbal Behavior, an ambitious but doomed attempt to explain language in behavioristic terms. That review, along with Chomsky’s other writings, was a powerful influence in persuading psychologists that language, arguably the most distinctive and sophisticated of human cognitive achievements, could not be explained in terms of associative learning. Rather, it depended on innately determined rules, better understood in terms of computation than in terms of learned connections. Chomsky’s work, combined with the increasing sophistication of digital computers and the rise of artificial intelligence, brought about the so-called ‘cognitive revolution.’
Chomsky argued not only that learning a language is inexplicable in terms of learning theory, but also that it is a uniquely human accomplishment, innately given. But Chomsky only took to an extreme a tendency that was widely evident in the information-processing models of cognition that emerged from the late 1950s. Such approaches had little use for evidence from species other than humans, and offered little by way of explanation for the behavior of nonhuman animals. No less than the rat-dominated learning theory that Beach deplored, they seemed to lead to a one-species psychology, albeit with a choice of species that is more easily defended.
Actually, the cognitive revolution was not quite so totally at variance with conclusions from animal psychology as some of the new breed of cognitivists may have thought. Although an avowed behaviorist, Edward C. Tolman had long recognized the importance of cognitive concepts to the understanding of animal behavior. His classic work Purposive Behavior in Animals and Men, published in 1932, established a tradition known variously as purposive behaviorism, sign-gestalt theory, or expectancy theory. Tolman’s work was influential from the 1930s into the 1950s, and he is still widely acknowledged for his view that spatial learning is governed by the establishment of cognitive maps, and not by stimulus–response learning. But it was not until after some years of elaboration of the new human cognitive psychology that some animal psychologists sought explicitly to reconcile cognitive insights with investigations of animal learning, leading to a new discipline of ‘animal cognition.’ This concept has been construed in two rather different ways. On the one hand, in the past three decades a huge volume of research has been devoted to fundamental processes of Pavlovian conditioning, and associative learning generally. Many leading researchers now believe that the resulting data can only be explained with the aid of cognitive concepts like representation. This approach leads to a bottom-up science of animal cognition, in which behavioral methods are rigorously applied, and cognitive conclusions drawn almost as a last resort, strictly in the spirit of Lloyd Morgan. Alternatively, the insights and concepts of cognitive psychology can be applied more directly to animal behavior, to see whether parallel analyses will prove fruitful. Whichever approach is taken, the very phrase ‘animal cognition’ is itself a quiet reassertion of Darwinian continuity at the mental level.
But other comparative psychologists rose more directly to the challenge, and mounted an empirical assault on Chomsky’s assertions about the uniqueness of language and the fundamental discontinuity it creates between humans and all other species.
Continuity versus Discontinuity in Human Evolution
As we have seen, mainstream psychology has swung sharply from one extreme to the other with respect to the question of whether humans are fundamentally different from other animals. It began with Wundt’s Cartesian assumptions, swung to extreme continuity with Watson’s behaviorism, then swung back to Cartesian discontinuity with the cognitive revolution. It is not surprising, then, that this issue should continue to loom large in modern comparative psychology, and especially in the study of the great apes—the species evolutionarily closest to our own.
Despite the assertions of Chomsky and his followers that there is a fundamental dichotomy between humans and all other animals, at least with respect to language, biochemical analyses have revealed an unexpectedly close genetic kinship between humans and the other great apes. By one estimate, the chimpanzee has about 99.6% of its amino acid sequences and 98.4% of its DNA nucleotide sequences in common with our own species (Goodman, 1992). Until as recently as the 1960s, it was supposed that the common ancestor of human and chimpanzee dated from some 20 million years ago, allowing plenty of time for physical and mental divergence. That estimate was revised in 1967, on the basis of biochemical evidence, to a mere five or six million years ago (Sarich & Wilson, 1967)—an estimate that has held in subsequent investigations.
These findings have led to renewed efforts to demonstrate human-like mental processes in the great apes, and have even led some to argue that the most important discontinuity lies, not between ourselves and the great apes, but between the great apes and the other primates (e.g., Byrne, 1995). Not surprisingly, much of the research effort has focused on language.
Teaching Language to Apes
At first, the battle seemed lost, as attempts to teach chimpanzees to actually talk have never been even remotely successful. For example, Catherine and Keith Hayes (Hayes, 1952) raised a chimpanzee called Viki from the age of three days until about six and a half years in their own home, treating her as one of their own children, but Viki never learned to speak more than three or four crudely articulated words. However, Viki’s failure to talk might have been due to deficiencies of the vocal tract rather than to the lack of any capacity for language, and greater success was later achieved in teaching a form of manual sign language, not only to chimpanzees, but to other great apes as well. Chimpanzees have also proven fairly adept at using visual symbols, such as plastic tokens or keyboard lexigrams, to represent objects and actions, and to compose simple requests. The most impressive achievements appear to be those of a young bonobo, or pygmy chimpanzee (Pan paniscus), named Kanzi, who learned to use gestures and lexigrams spontaneously while watching his mother being taught. Most of his productions, it is claimed, are not random sequences or meaningless repetitions, but are spontaneous comments, requests, or announcements. Kanzi even appears to have an understanding of spoken human language, with at least some regard to grammatical structure, at about the level of a two-year-old child (Savage-Rumbaugh & Lewin, 1994).
But even the work on gestures and tokens has not been universally accepted as demonstrating true language. Herbert Terrace (1979) analyzed the utterances of a trained chimpanzee, optimistically named Nim Chimpsky, and observed that they consisted mainly of repetitions and simple sequencing of ideas. By and large, linguists and cognitive psychologists have remained unconvinced even by the exploits of Kanzi. For example, Pinker (1994) remarks that the chimpanzees, Kanzi included, ‘just don’t “get it”’ (p. 340). What Pinker and others argue is that all attempts to teach language to chimpanzees, or any other species, have so far produced nothing resembling grammar, considered the hallmark of human language.
This is not to say that chimpanzees are incapable of communication, or even symbolic representation. Derek Bickerton (1995), for example, concedes what he calls ‘protolanguage’ to chimpanzees, suggesting that this is a precursor to true language that is shared by two-year-old children, people with language impairment due to brain damage, and speakers of pidgins. Bickerton also argues that true language must have evolved in all-or-none fashion very recently in our evolutionary history: ‘… true language, via the emergence of syntax, was a catastrophic event, occurring within the first few generations of the species Homo sapiens sapiens’ (p. 69).
Another to argue that language evolved late is Philip Lieberman (1991), who has claimed that the production of speech was not possible until the larynx descended in the neck, and that this adaptation, as well as concomitant changes in the brain mechanisms involved in producing speech, occurred only recently in human evolution, and perhaps only with the emergence of H. sapiens. According to Lieberman, even the Neanderthals of 30,000 years ago would have suffered gross speech deficits that not only kept them apart from anatomically modern humans, but led to their eventual extinction. The Neanderthals are generally considered a species distinct from H. sapiens, but the divergence probably took place within the past 500,000 years, suggesting that the adaptation of the vocal tract in H. sapiens was recent. This analysis is supported by recent evidence that the facial structure of H. sapiens might have been uniquely adapted to speech (D. Lieberman, 1998).
Some have argued, however, that language may have evolved, not from vocalization, but from manual gestures (e.g., Armstrong, Stokoe, & Wilcox, 1995; Corballis, 1991). As Terence Deacon (1997) points out, voluntary control over action depends on a shift from midbrain to cortical control, and in primates voluntary control over the limbs greatly exceeds that over vocalization. This could explain why apes learn a form of sign language much more successfully than they learn anything resembling speech, even taking into account the difficulties posed by their vocal tract anatomy. The distinctive bipedal form of locomotion among even the earliest-known hominids would have freed the hands for further development of manual activity, not only for carrying things, but also for the potential development of gestural communication. There is little doubt that the sign languages of the deaf are true languages, with fully developed syntax, confirming that gesture is as much a ‘natural’ medium for language as is vocalization (Armstrong et al., 1995). The ‘catastrophic event’ that led to the dominance of H. sapiens over other hominid species may not have been the emergence of syntax, as Bickerton proposed, but rather the switch from a system based predominantly on manual gesture to one based on vocalization (Corballis, 1991).
Theory of Mind in Apes?
In spite of the historical prominence of language in the continuity—discontinuity debate, many primate researchers have shifted their interest from the study of language to the study of other cognitive processes. A leader in this endeavor is the one-time behaviorist, David Premack, who gave up his attempts to teach language to the chimpanzee and turned to other questions about primate cognition, such as whether or not a chimpanzee can be said to possess a ‘theory of mind’ (Premack & Woodruff, 1978). In some respects, questions about theory of mind resemble those about language. For example, both appear to depend on recursion. Theory of mind depends on mental propositions such as ‘I know that she can see me,’ or ‘I know that he knows that I can see her,’ and the phrase structure of language lends itself precisely to the expression of these kinds of embedded thoughts.
Richard W. Byrne (1995) has argued that theory of mind is in fact a necessary precursor to language. In a detailed examination of such behaviors as tactical deception and mirror self-recognition, he concludes that the great apes do indeed demonstrate evidence of theory of mind, if not of language itself, but that other primates do not. It is for this reason that Byrne suggests that it was the emergence of the great apes, and not specifically of our hominid ancestors, that created a discontinuity in mental evolution. Unsurprisingly, Byrne’s conclusions are controversial, even among comparative psychologists; for example, they have been challenged by the psychologist Celia Heyes (1998), who suggests that behaviors attributed to theory of mind can be explained in terms of learned associations.
In addressing the issue of continuity vs. discontinuity, some investigators have appealed to brain anatomy. It is often supposed that mental capacity might increase simply as a function of the overall size of the brain. Table 1 shows the cranial volumes of the primates, and it is clear that there is a general increase from monkeys through the lesser apes (Hylobates) to the great apes (orangutans, gorillas, and chimpanzees), and a massive increase to humans. This might seem to support the idea that any discontinuity of mental function does indeed lie between humans and the other great apes. But although humans are at the top, it may nevertheless be misleading to suppose that brain size is directly proportional to intelligence. For example, the chimpanzee is generally thought to be the most intelligent of the great apes (humans excluded), but in terms of brain size comes well behind the other two great apes, the gorilla and the orangutan. Moreover, male brains tend to be larger than female ones, especially in orangutans, gorillas, and humans, yet there is no evidence for a corresponding difference in intelligence.
It has long been observed, however, that brain size depends, at least in part, on body size. In 1891, Otto Snell showed that, across different species of mammals, the logarithm of brain weight was positively and linearly related to the logarithm of body weight, with a slope of 2/3. Harry Jerison (1973) has therefore argued that, in assessing the relation between intelligence and brain size, it is necessary first to compensate for differences in body size. Following the earlier work of Eugene Dubois (1913), he proposed that the way to do this is to compute the empirical relation between brain size and body size across different species, and then express the actual brain size of a given species as a ratio of the predicted brain size. He called this the encephalization quotient (EQ). Based on comparative studies of the EQ, Passingham (1982) concluded that the human brain is about three times the size one would expect for a primate of our build, which he regards as ‘perhaps the single most important fact about mankind’ (p. 78). The EQ also restores the relatively small-bodied chimpanzee to its accustomed place next to humans, although the gap between humans and chimpanzees is considerably larger than that between chimpanzees and gorillas.
Even so, the EQ may still be a crude measure of mental capacity, since it does not differentiate one part of the brain from another. In particular, the neocortex is presumably especially important in intellectual function, and Dunbar (1993) has suggested that what he calls the neocortex ratio, which is the ratio of the volume of neocortex to the volume of the rest of the brain, might be more indicative of mental capacity. He has shown that there is a positive relation among monkeys and apes between the neocortex ratio and the size of the social group that the animals form, suggesting that cognitive capacity establishes an upper limit to the number of individuals with which the animal can maintain personal relationships.
Byrne (1995) has further shown a linear relation in primates between the neocortex ratio and the estimated prevalence of tactical deception. In humans, the ratio is 4.1:1, which is about 30% larger than that of any other primate. While this still places humans at the top of the intellectual tree, it is more suggestive of continuity than are measures based on total brain size.
But even within the nonhuman primates, there are reasons to doubt that the neocortex ratio is an altogether adequate measure of mental capacity. For instance, it is as large in the baboon as it is in the gorilla, yet the gorilla shows much more evidence of insightful behavior. It is clear, too, that group size is not the only determinant of the neocortex ratio, since the great apes do not live in larger groups than monkeys do, and indeed the orangutan is notably solitary—although this may have been a recent adaptation. Byrne therefore suggests that while an enlarged neocortex ratio may have been selected for by social pressures, it may also reflect intelligent behavior in nonsocial contexts, such as the insightful solving of the mechanical problems posed by food-processing; this might explain why great apes are capable of insight in abstract tasks (Köhler, 1925). Indeed, one might expect actual computational power to be dependent to some extent on actual brain size, uncorrected for body size.
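The arithmetic behind the two measures discussed above can be made explicit. The sketch below is illustrative only: the coefficient 0.12 is an approximation of Jerison's mammalian baseline, and the round-number weights are hypothetical, not figures taken from Table 1.

```python
def expected_brain_weight(body_g: float, c: float = 0.12, exponent: float = 2 / 3) -> float:
    """Snell-style allometric prediction of brain weight (grams) from body
    weight (grams): a slope of 2/3 on log-log axes is equivalent to the
    power law E = c * P**(2/3). The coefficient c = 0.12 approximates
    Jerison's mammalian baseline and is an assumption, not handbook data."""
    return c * body_g ** exponent


def encephalization_quotient(brain_g: float, body_g: float) -> float:
    """Jerison's EQ: actual brain weight as a ratio of the predicted weight."""
    return brain_g / expected_brain_weight(body_g)


def neocortex_ratio(neocortex_vol: float, rest_of_brain_vol: float) -> float:
    """Dunbar's neocortex ratio: neocortex volume over the rest of the brain."""
    return neocortex_vol / rest_of_brain_vol


# Hypothetical round numbers: a 1,300 g brain in a 65 kg primate.
# Against the mammalian baseline this yields an EQ of about 6.7; note that
# Passingham's factor of three is computed against a primate baseline,
# which predicts larger brains than the mammalian one does.
print(round(encephalization_quotient(1300, 65_000), 1))  # prints 6.7
```

Because both measures are ratios, they are unchanged by uniform scaling of the units, which is what makes cross-species comparison meaningful.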
As Table 1 shows, the brains of the early hominids did not differ substantially from those of modern apes. For example, the absolute size of the australopithecine brain was about the same as that of the chimpanzee, although it was somewhat larger relative to body size. A significant increase in brain size did not emerge until some 2.5 million years ago, with the emergence of the hominids we place in the genus Homo. Again, though, interpretation of these data is complicated by differences in body size. For example, although absolute brain size increased dramatically over the successive stages from early Homo through to H. sapiens, it has been estimated that when body size is taken into account there was virtually no increase in brain size over nearly two million years of evolution in Homo, but then a dramatic increase from about 600,000 to 150,000 years ago. Although the Neanderthals appear to have had bigger brains than modern humans, they also had larger bodies, perhaps by as much as 24%, so that the modern human brain is slightly larger than the Neanderthal brain relative to body size. There has also been a reduction in both brain and body size over the past 50,000 years.
Development of Specific Brain Regions
To understand the evolution of intelligence, Holloway (1996) argues that we should also consider specific areas of the brain relative to others. For example, the ability to ‘think’ independently of the immediate sensory input probably depends more on the so-called association areas of the brain than on sensory or motor areas. One such area is contained within the parietal lobe, which is concerned with spatial relations among objects. In humans, the left parietal cortex also includes part of Wernicke’s area, which is critically involved in the perception and comprehension of language. The human parietal lobe is also enlarged relative to the occipital lobe, apparently at its expense; according to Holloway (1996), the visual striate cortex in humans is only about 45% of the size one would expect in a primate with the same overall brain size. Holloway also claims that this trend had already begun in the australopithecines, but this has been challenged by Falk (1985).
Another critical association area is contained within the frontal lobe, which is greatly enlarged in the human brain relative to that in apes (Deacon, 1997). There is evidence that an endocast of the skull of Homo rudolfensis, the earliest-known species of Homo, dating from over 2 million years ago, has a human-like rather than an ape-like third frontal convolution, which is the area containing an important speech area (Broca’s area) in modern humans. On this basis, Tobias (1987) argued that H. rudolfensis (then called H. habilis) had invented at least rudimentary language. This further suggests that the increase in brain size and the relative growth of association areas were driven at least in part by pressure for more effective communication, and began with the split of Homo from the australopithecines perhaps 2.5 million years ago. But it may have been the development of the prefrontal cortex, rather than Broca’s area alone, that was critical in the evolution of the hominid mind. Deacon (1997) estimates that the human prefrontal cortex is about twice the size that would be predicted in an ape with a brain as large as a human’s, and is probably the most divergently enlarged of any brain region. He argues that the development of the prefrontal cortex underlies the emergence of symbolic thinking that is unique to our species.
Another aspect of brain evolution is cerebral asymmetry; most humans are right-handed and left-cerebrally dominant for language. Since these asymmetries seem to be associated with the distinctively human attributes of tool-making and language, respectively, it has long been supposed that they are themselves uniquely human, and further evidence of a discontinuity between humans and apes (e.g., Corballis, 1991). The idea of asymmetry as uniquely human has been somewhat eroded over the past decade. Hopkins (1996) has documented evidence of a population bias toward right-handedness in chimpanzees (and to a lesser extent in other great apes), although McGrew and Marchant (1997) sound a note of caution. In an exhaustive meta-analysis that includes Hopkins’ work, they conclude that of all the nonhuman primates studied, ‘only chimpanzees show signs of a population-level bias … to the right, but only in captivity and only incompletely’ (p. 201).
On the basis of a review of cerebral asymmetries in other primates, Bradshaw and Rogers (1993) argue for continuity between other animals and humans, although it is still possible to argue that the pattern and strength of lateral asymmetries sets humans apart. Summarizing work on anatomical asymmetries of the brain, for example, Holloway (1996) remarks that ‘while asymmetries certainly exist in pongids, neither the pattern nor direction is anywhere near as strong as in Homo’ (p. 94). However, it has recently been reported that the temporal planum, an important language-mediating area in humans, is larger on the left than on the right in 17 out of 18 chimpanzees examined postmortem (Gannon, Holloway, Broadfield, & Braun, 1998).
The Impact of Sociobiology
As we have seen, evolutionary ideas were somewhat suppressed for much of the early part of the twentieth century, especially within the social sciences. An important influence in their revival was the publication of Edward O. Wilson’s (1975) textbook and manifesto, Sociobiology. Taking his cue from the so-called ‘modern synthesis’ of the 1930s, Wilson claimed to be producing the ‘new synthesis’ of population genetics, behavioral ecology, and ethology. His book was widely publicized and was swiftly followed by effective popularizations, such as Richard Dawkins’ (1976) The Selfish Gene. Suddenly, evolutionary theory was being applied to behavior in a wholly different way; sociobiology offered a new, or renewed, ‘grand theory,’ able to generate hypotheses about a huge range of animal and, as later emerged, human behaviors.
An important precursor to Wilson’s book, but somewhat neglected at the time, was George Williams’ (1966) Adaptation and Natural Selection, which presented arguments against the idea that group selection plays a significant role in evolution. For example, behaviors that could be construed as altruistic, in which individuals act in apparently selfless ways for the good of the group, could actually be interpreted in terms of individual benefit in the longer term—virtue, in other words, could bring its own rewards. The point was elaborated by Robert Trivers (1971) with his concept of reciprocal altruism, in which a selfless act might be selected because it leads to probable reciprocation, to the ultimate benefit of both parties. Even cases of extreme altruism, in which individuals put their own lives at risk, might be interpreted in terms of the enhanced survival of the individuals’ genes. In two classic papers, W. D. Hamilton (1964) argued that an altruistic trait would be adaptive if the disadvantage to the individual were outweighed by the product of the benefactor’s degree of genetic relationship and the amount of benefit conferred. Hamilton’s rule, as it has come to be called, is a more formal statement of a remark attributed to the British biologist J. B. S. Haldane, who is said to have declared that he would not put his life at risk for another person, but he would do so for two brothers or eight cousins, since they would equal, in total, his own genetic endowment.
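Hamilton’s rule has a standard compact form (the symbols below follow the usual formulation, which is described but not written out in the text), and Haldane’s quip can be checked against it directly:

```latex
% Hamilton's rule: an altruistic trait is favored by selection when
%   rB > C
% where r = coefficient of genetic relatedness between actor and recipient,
%       B = reproductive benefit conferred on the recipient(s),
%       C = reproductive cost to the actor.
\[
  rB > C
\]
% Haldane's remark, with the cost C set at one life (= 1 unit):
%   two brothers:   r = 1/2, so total rB = 2 \times 1/2 = 1 \geq C
%   eight cousins:  r = 1/8, so total rB = 8 \times 1/8 = 1 \geq C
% In both cases the saved relatives just balance the actor's own
% genetic endowment, which is the point of the quip.
```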
But sociobiology was not universally welcomed, especially among social scientists, as it was directly antagonistic to the standard social science model, and Wilson (1975) even threatened that the biologists would take over:
… sociology and the other social sciences, as well as the humanities, are the last branches of biology waiting to be included in the modern synthesis. (p. 4)
To compound the threat, Wilson argued for a reductionist approach, based on neurobiology rather than on the traditional methods of the social sciences:
… having cannibalized psychology, the new neurobiology will yield an enduring set of first principles for sociology. (p. 75)
Not surprisingly, then, the earliest impact of sociobiology was in biology, with the emergence of a subdiscipline dubbed behavioral ecology, marked by the first edition of Krebs and Davies’ (1987) edited volume under that title. Behavioral ecology applied the powerful and general optimizing ideas of sociobiology at a more detailed level. Unlike the earlier ethologists, however, behavioral ecologists were interested in the adaptive significance of learning as well as instincts; learning about prey densities, for example, was an essential feature of the ‘optimal foraging’ models that were at the core of behavioral ecology. Furthermore, the behavioral ecologists recognized the importance of what was already known, within comparative psychology, about animal learning, and this interaction was much more fruitful than the earlier study of constraints on learning had been (Lea, 1985). For example, the so-called ‘matching law,’ originally formulated in the context of operant conditioning, was found to apply to a wide variety of species in foraging situations. Attempts to extend this to human choice, however, have been less successful; despite promising early results, in most situations human choice behavior has been found to be extensively modified, relative to animal choice, by human linguistic abilities (see Horne & Lowe, 1993).
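The matching law referred to above has a simple quantitative statement (Herrnstein’s standard formulation, which the text names but does not reproduce):

```latex
% The matching law: the relative rate of responding allocated to an
% alternative matches the relative rate of reinforcement obtained from it.
\[
  \frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}
\]
% B_i = rate of behavior directed at alternative i
% R_i = rate of reinforcement obtained from alternative i
% In foraging terms, an animal dividing its time between two patches in
% proportion to the food each yields is behaving in accordance with the law.
```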
Two further examples of the benefits of an ecological approach are especially revealing because they bear on capacities that are often considered uniquely human, but can in fact be studied within a broader comparative framework. One is birdsong, and the other is spatial memory.
Comparative Studies of Birdsong
Twenty of the 23 orders of birds inherit their calls, and develop their species-specific vocalizations even if they are raised in isolation and never hear another bird. The three exceptional orders are the perching birds (Passeriformes)—which include obvious songbirds such as canaries, but also mockingbirds and crows—the hummingbirds (Trochiliformes), and the parrots (Psittaciformes). In these orders, the song the bird sings depends on vocal learning, giving rise to a greater variety of song dialects. This variety is important for attracting mates and defending territory, and is in these respects necessary for reproduction and survival. The added plasticity conferred by learning gives rise to a greater diversity of specialization, and this in turn has led to a greater variety of species; the three orders of birds that learn their songs include more species than are included in the remaining 20 orders together.
A phylogenetic tree of all of the species of birds shows that the three orders of birds that learn their songs are not closely related, suggesting that unlearned song is the primitive condition, and that the capacity to learn songs evolved independently in these three orders. This is known as convergent evolution. In canaries, the learning of song depends on a network of interconnected nuclei, which contain receptors for gonadal sex hormones, in the forebrain of the male bird. Similar hormone-sensitive systems have been found in many other species of songbirds, and the size of the song-control centers appears to be correlated with the variety of songs each species can acquire (DeVoogd, Krebs, Healy, & Purvis, 1993). This appears to be a product of natural selection, since the number of surviving offspring in a population of birds studied over several years was correlated with the size of the fathers’ song repertoires, and so with the size of the forebrain centers (Hasselquist, Bensch, & von Schantz, 1996).
The acquisition of song in birds has parallels with the acquisition of speech in humans; it can be sequentially complex, it depends on a critical period in development, there are variations in dialect, and in some (but not all) species of songbird it depends on asymmetric structures—the syrinx (which produces the actual sound) is innervated by the left hypoglossal nerve (Nottebohm, 1977). This need not imply, however, that there is an asymmetry in the higher centers controlling song; in at least one species, the brown thrasher, the asymmetry is at the level of muscles that gate the production of sound, implying that the lateralization of birdsong may not always be a useful model for the asymmetrical control of speech in humans (Goller & Suthers, 1995). Moreover, in birds it is only the males that sing. Even so, the independent evolution of complex communication systems, endowed with plasticity as well as a strong innate component, goes some way toward countering the idea that human language is unique.
Spatial Memory in Birds
Many animals, including mammals and birds, demonstrate considerable ability to remember specific locations, in homing and migration, foraging for specific foodstuffs, and in recovering stored food. Some of the more dramatic examples are birds that store food in caches, and later return to the caches during the winter so that they can recover enough food to enable them to survive until the spring.
In one representative study, Balda and Kamil (1989) compared cache recovery in three species of crow that are found in the forests of the southwestern United States. One species, Clark’s nutcracker, lives in high-altitude forests and stores up to 33,000 pine seeds for the winter; finding them again is critical to its survival. Another species, the pinyon jay, is found at lower altitudes where the winter climate is less severe, so that later recovery is not quite so critical. These birds store only about 20,000 seeds, but stored seeds still make up about 70 to 90% of their winter diet. A third species, the scrub jay, lives at still lower altitudes, stores only about 6,000 seeds, and relies on its caches for less than 60% of its winter diet. Members of each species were tested in a laboratory setting in which they hid seeds in a room containing 180 holes for caching seeds. They were tested for recovery of the cached seeds after a retention interval of seven days. The nutcrackers and pinyon jays were significantly more accurate at retrieving the seeds than the scrub jays, suggesting that the different ecological niches had selected for different memory abilities. Despite being less dependent on cache recovery in their natural environment, however, the pinyon jays were rather surprisingly better than the nutcrackers, perhaps because they adopted a strategy of placing their caches close together.
Although scrub jays showed relatively poor memory, a more recent study suggests that they are able to code stored caches for the time at which they were stored. In a laboratory setting, the birds stored wax worms and peanuts. When retrieving the food, they preferred the wax worms if they had been recently stored and were therefore still fresh, but they preferred peanuts if the wax worms had been stored long enough to lose their freshness (Clayton & Dickinson, 1998). This result is of special interest, since the ability to encode stored information for both location and time meets two of the criteria for episodic memory, a form of memory that some have claimed to be uniquely human.
It is also of interest that there is a positive correlation between the amount of storing and the size of the hippocampus, a brain structure often identified, after Tolman, as a cognitive map (O’Keefe & Nadel, 1978). Reviewing the literature on the correlation in corvids (including jays and nutcrackers) and in parids (including chickadees and titmice), Rosenzweig, Leiman, and Breedlove (1999) conclude that ‘even within closely related species, reliance on storing and recovering food appears to have been a selective pressure that has led to larger hippocampal size’ (p. 141). Again, this raises a question, albeit obliquely, about human uniqueness, since the hippocampus in humans has been associated with episodic memory.
Human Behavioral Ecology
In his 1975 book, Wilson devoted only a closing chapter to humans; and even when he turned to analyze human behavior specifically (Wilson, 1978), his theorizing was essentially speculative, and reminiscent of the earlier instinct psychology. For the most part it consisted of listing aspects of human behaviors, such as aggression, incest avoidance, religion, and so forth, and of arguing that they represented genetically determined predispositions that had led to increased reproductive success.
These ideas eventually led to the new discipline of human behavioral ecology, which has been defined as ‘the study of the evolutionary ecology of human behavior. Its central problem is to discover the ways in which the behavior of modern humans reflects our species’ natural selection’ (Cronk, 1991, p. 25). Human behavioral ecology has relatively little connection with the wider discipline of behavioral ecology, discussed earlier, although some studies make direct use of the kinds of optimizing theory that were developed for the analysis of animal foraging (e.g., Winterhalder & Smith, 1981). Human behavioral ecologists are typically affiliated with anthropology rather than biology or psychology. Their general approach is to study contemporary human societies, with a focus on non-industrial societies and the few remaining hunter-gatherer societies, and to view behavior in terms of its role in maximizing reproductive success. But human behavioral ecologists also emphasize the plasticity of human behavior, and its adaptability to very different ecological and social circumstances. This adaptability is seen as dependent on the emergence of consciousness. This is not to say that the drive to increase reproductive fitness is itself conscious, but rather that it is consciousness that gives humans the unique ability to adapt to a wide variety of different contexts.
An example of this adaptability comes from Borgerhoff-Mulder’s (1996) study of bride-wealth payments among the Kipsigis of rural Kenya. Under conditions of plenty, when it was possible to support large families, bride-wealth payments from men reflected physical signs of fertility in the prospective brides, such as early menarche. When economic changes made large families insupportable, the preference for high fertility became unimportant, and this was reflected in reduced payments. Similarly, Crook and Crook (1988), in their study of polyandry in Tibet, have argued that polyandry is an adaptive response to low levels of subsistence and the lack of opportunity for expansion and dispersal. Such ecological arguments flow smoothly into more strictly sociobiological discussions, such as Symons’ (1979) analysis of human sexuality, or Daly and Wilson’s (1988) analysis of different kinds of human killing. Such modern human sociobiology differs from Wilson’s original speculations by its extensive use of social statistics, but its focus on modern humans means that it lacks comparative perspective.
Another development that grew out of sociobiology was evolutionary psychology, and those who identify with this movement are typically affiliated with psychology rather than anthropology. Its manifesto is the edited volume The Adapted Mind (Barkow, Cosmides, & Tooby, 1995), but it has reached a wider public through Steven Pinker’s (1997) book How the Mind Works. Unlike human behavioral ecology, which stresses behavior, evolutionary psychology emphasizes the way in which the human mind has been structured by natural selection. The mind is assumed to be computational, and to comprise a number of distinct computational mechanisms, known as adaptive specializations, rather than to be a single general-purpose computational device. In these respects, evolutionary psychology owes much to the computational approach earlier adopted by David Marr (1982) in his study of vision, and by Fodor (1983) in his claim that the mind is best understood as composed of independent, dedicated modules (Tooby & Cosmides, 1995).
The aim of the evolutionary psychology program, then, is to discover what the evolutionary specializations might be—to ‘carve the mind at the joints’—and then to ‘reverse engineer’ these adaptations to the early environment that shaped them (Pinker, 1997). Although some aspects of the human mind, such as our capacity for visual processing, were largely formed during our pre-hominid primate ancestry, evolutionary psychologists lay particular stress on the Pleistocene—the period from about 1.6 million to about 10,000 years ago—as being especially critical to most of the distinctive aspects of the human mind. In the pre-agricultural world of the Pleistocene, our hominid forebears are thought to have existed primarily as hunter-gatherers, and for most of the Pleistocene they are thought to have lived on the African savanna (the earliest migration of H. sapiens out of Africa is thought to have occurred between about 60,000 and 100,000 years ago). A good deal of reverse engineering therefore has to do with how present-day dispositions might relate to conditions on the savanna. It has been proposed, for example, that we like to eat potato chips because fatty foods were scarce but nutritionally valuable during the Pleistocene; we like landscapes with trees because trees provided shade and escape from dangerous carnivores on the African savanna; our love of flowers derives from their distinctiveness as markers for edible fruit, nuts, or tubers amid the greenery of the savanna; and so on (Pinker, 1997). But there are no fossils to document the origins of these preferences, so these attempts at reverse engineering are almost entirely speculative.
A somewhat more rigorous example of reverse engineering is based on a reasoning task, known as the Wason selection task, in which people are shown cards with symbols on them. For example, they might be shown cards with E, K, 2, and 7 showing, and asked which two cards they should turn over to check the truth of the following claim: ‘If a card has a vowel on one side, then it has an even number on the other side.’ In general, people perform poorly on this task; they typically turn over the cards with the E and the 2, whereas the rational strategy is to turn over the cards with the E and the 7 (since turning over the 2 cannot disconfirm the statement, whereas turning over the 7 can). However, they are much more accurate on exactly analogous tasks that refer to social settings. For example, the cards might refer to beverages on one side and ages on the other, and the subject might be shown cards bearing the labels beer, coke, 22, and 17. If asked which two cards to turn over to check the truth of the statement: ‘If a person is drinking beer, he or she must be over 20 years old,’ most people easily understand that the critical cards are those bearing the labels beer and 17, as any under-age drinker will clearly understand (Cox and Griggs, 1982).
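The normative logic of the selection task can be made explicit: a card can falsify ‘if P then Q’ only if it turns out to show P and not-Q, so the only faces worth turning are visible-P and visible-not-Q. A minimal sketch of that logic (the function and predicate names here are illustrative, not from any source):

```python
# Sketch of the logic behind the Wason selection task: to test the rule
# "if P then Q", turn over only the cards that could reveal a P-and-not-Q
# counterexample -- i.e., those visibly showing P (hidden side might be
# not-Q) or visibly showing not-Q (hidden side might be P).

def cards_to_turn(visible, is_p, is_q, is_p_side, is_q_side):
    """Return the visible faces worth turning over to test 'if P then Q'."""
    picks = []
    for face in visible:
        if is_p_side(face) and is_p(face):
            picks.append(face)          # shows P: hidden side might violate Q
        elif is_q_side(face) and not is_q(face):
            picks.append(face)          # shows not-Q: hidden side might be P
    return picks

# Abstract version: "if a card has a vowel, it has an even number on the back."
abstract = cards_to_turn(
    ["E", "K", "2", "7"],
    is_p=lambda f: f in "AEIOU",                        # P: vowel
    is_q=lambda f: f.isdigit() and int(f) % 2 == 0,     # Q: even number
    is_p_side=lambda f: f.isalpha(),
    is_q_side=lambda f: f.isdigit(),
)
print(abstract)  # -> ['E', '7']

# Social-contract version: "if drinking beer, must be over 20."
drinkers = cards_to_turn(
    ["beer", "coke", "22", "17"],
    is_p=lambda f: f == "beer",                         # P: drinking beer
    is_q=lambda f: f.isdigit() and int(f) > 20,         # Q: over 20
    is_p_side=lambda f: not f.isdigit(),
    is_q_side=lambda f: f.isdigit(),
)
print(drinkers)  # -> ['beer', '17']
```

Both versions are formally identical, which is precisely what makes the behavioral difference between them interesting: people fail the abstract version but pass the social-contract one.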
Tooby and Cosmides (1989) infer from this that reasoning of this sort does not depend on a general-purpose reasoning device, but applies specifically to situations involving social contracts. In particular, people are especially adept at strategies for detecting cheaters, such as the under-age drinker or the spouse who takes a lover. In short, there is a cheater-detection module. It is not quite obvious why such a module should give people an equal facility in deciding what type of stamp to put on a letter, as demonstrated in another experiment on the Wason task (Wason & Shapiro, 1971). And there remains a general problem. Experiments in evolutionary psychology are necessarily carried out with present-day human subjects, albeit sometimes in different cultural settings, and the evolutionary component has to be derived from a purely conceptual process of reverse engineering to what is known (or guessed) about social pressures operating during the Pleistocene.
A fertile ground for evolutionary psychology is that of differences between men and women in sexual strategies. It appears to be a fairly general rule across species that the sex that invests more in offspring should be the more selective in the choice of mates, and that the sex that invests less should compete more vigorously for access to mates. Buss and Schmitt (1993) find evidence across cultures that human females and males, respectively, exhibit behaviors and preferences that conform to these expectations, which can be derived from evolutionary principles. They also list other ways in which the optimal strategies for ensuring perpetuation of their genes are differently constrained in men and women.
One limitation of evolutionary psychology is its emphasis on the Pleistocene. Toward the end of the Pleistocene, from about 13,000 years ago, the human condition began to change quite radically, beginning with the domestication of wild plants and animals in the Near East. Although a few hunter-gatherer societies remain today, human societies gradually began to assume different and varied characteristics, culminating in the extraordinary cultural diversity that we see in the world today. Evolutionary psychologists have insisted that cultural differences are too recent to have a significant effect on our biological make-up, but it cannot be denied that culture nevertheless has an important impact on what we believe, how we interpret the world, and how we behave in different social settings. In stressing our common biological heritage at the expense of this cultural diversity, evolutionary psychology is often in conflict with recent traditions of postmodernism and cultural relativism in psychology and the social sciences (Pinker, 1997).
But perhaps the more important difficulty faced by evolutionary psychology is that it relies heavily on reverse engineering to a period in our prehistory about which rather little is known. If it is to progress, it will need to escape from the trap of relatively unconstrained speculation. The most obvious way of doing so is for it to rediscover its roots in a broader, comparative psychology. The key question in the evolutionary psychology of cognition is always, ‘What kind of selection pressure could have produced this kind of cognitive ability?’ A thoroughgoing comparative analysis allows one to ask that question, not just in a speculative way about modern humans, or in a tentative way about the very limited evidence we can find for the behavior of our hominid ancestors, but in a truly empirical, even experimental way, because we can ask it about thousands of other, extant, species. To do so of course requires us to admit that the modules of the human mind may have analogues, or even homologues, elsewhere in the animal kingdom. Such an admission should not be difficult for an evolutionary psychologist, necessarily placed on the side of Darwinian continuity, against Cartesian dualism. Oddly enough, however, it does not involve a denial of human uniqueness. To concede that some modules of the human mind may be found in other species leaves open the possibility that others (most obviously, the language module) may be unique. But even without that possibility, the most dedicated Darwinian can allow that the human mind must represent, at a minimum, a unique configuration of modules, allowing properties to emerge that cannot be paralleled in other species.