Michael Hogg. The International Handbook of Psychology. Editors: Kurt Pawlik & Mark R. Rosenzweig. Sage Publications, 2000.
Studying Social Psychology
A widely accepted and very common definition of social psychology is that it is ‘the scientific investigation of how the thoughts, feelings, and behaviors of individuals are influenced by the actual, imagined or implied presence of others.’ Social psychologists study all aspects of human interaction, including verbal and nonverbal communication between people in dyads or groups; the behavior of people in groups and large-scale social categories; how people in different groups and categories treat and think about one another; how people perceive, interpret, and represent their own and others’ behavior; how interaction produces shared representations of the social and physical world that shape thought and behavior; how close friendships and relationships develop and dissolve; how people can change one another’s attitudes and behaviors; how people form a sense of who they are from their interaction with, experience of, and treatment by others; and why people harm others, but can also help others.
Because the subject matter of social psychology is what goes on around us all the time, human beings are all social psychologists—we all need to have a working understanding of social psychology in order to function adequately as human beings. Some important approaches within social psychology argue that scientific social psychological theories are actually a formalization of these naive or lay theories of social psychology (e.g., Heider, 1958) and that an important way to develop theories of social psychology is simply to ask people about their explanations of the social world or to see what sorts of explanatory logic may underlie what people actually say to one another.
Another consequence of the pervasiveness of social psychology in everyday life is that there is a notable applied dimension to social psychology—many social psychologists are primarily applied (e.g., in the areas of organizations, health, family, discrimination) and almost all dabble in applications of basic research and theory. Indeed, Kurt Lewin (e.g., Lewin, 1951), the acknowledged ‘father’ of modern social psychology, believed that there is nothing so practical as a good theory, and advocated ‘full cycle research’ in which basic and applied research were closely intertwined. Lewin also believed that social psychologists should put their theories into action to help make the world a better place to live in—that people should engage in ‘action research.’ This latter goal is formally represented, in the United States, by the Society for the Psychological Study of Social Issues. Nevertheless, the scientific heart and mainstream of social psychology is tightly conceptual and largely based on controlled laboratory experiments, which are designed to specify basic cognitive processes and social contexts and how they interact to produce specific forms of social behavior.
Social psychology employs the scientific method to develop and test theories, and tailors its research methodology to the particular research question being investigated. On the basis of hunches, personal experience, casual observation, and the study of previous research, a theory is developed—generally in such a way that it specifies not only what causes what, but through what process. From the theory, hypotheses are elaborated that predict that if such and such observable conditions exist (called the independent variable) then there will be such and such observable outcomes (the dependent variable). The preferred way to test predictions is by experimentation, in which the independent variable is manipulated and the dependent variable measured under carefully controlled conditions, usually in a laboratory, that rule out alternative explanations for the results. Experiments are very good for establishing clean cause–effect relationships and for unpacking underlying, often cognitive, processes. Not surprisingly, the pre-eminent scholarly societies for social psychology in North America and Europe have ‘experimental’ in their titles: the Society for Experimental Social Psychology, and the European Association of Experimental Social Psychology.
Laboratory experiments strive to maximize ‘experimental realism’ or ‘internal validity’ by making the manipulation impactful and strong. However, experiments are intentionally unrealistic in that they do not represent the richness of the real-world phenomenon being investigated—the degree to which they do is called ‘mundane realism’ or ‘external validity.’ Experimentation also requires tremendous care to ensure that participants’ behavior is natural and automatic rather than a deliberate attempt to please the experimenter, confirm hypotheses, or project a favorable impression. To do this, social psychologists generally need to conceal hypotheses and procedures from participants—a scientifically necessary practice that attracted the emotive label of ‘deception’ and, using Milgram’s classic obedience studies (e.g., Milgram, 1963—see below) as the sacrificial lamb, stirred up enormous controversy in the 1960s. Although experimentation still dominates social psychology, strict ethical prescriptions have made it difficult since the early 1980s to conduct the vivid and attention-grabbing studies that characterized earlier research.
Experiments tend to use introductory psychology students as participants—this is convenient for the senior undergraduates and graduate students who actually conduct much of the research, and it is scientifically appropriate for research in which individual differences are treated as error variance and cultural forms are not the focus of study. However, even the most dedicated experimentalist feels the urge, from time to time, to see if the same processes operate in different populations. Experiments are also inappropriate for a large number of research questions—for example, it is difficult to study a riot or an established street gang in the laboratory. However, social psychologists are tenacious and inventive. One researcher tried to create a riot in the laboratory by wafting smoke under the locked door of the laboratory—some groups of participants kicked the door open and disengaged the smoke generator, and other groups calmly discussed the possibility that they were being observed!
Not surprisingly social psychologists have other research methods in their armory. These include field experiments where a variable is manipulated in a naturalistic setting outside the laboratory (e.g., the reaction of passers-by to a well-dressed or shabbily dressed experimental confederate can be measured), and a whole range of non-experimental methods. The latter include archival research (e.g., comparison of government data on TV viewing in Japan and in Britain), case studies (e.g., in-depth multi-method analysis of a specific riot), survey research (e.g., questionnaires on language use and ethnic identity), and field studies (e.g., unobtrusive observation of the behavior of traders on the floor of the stock market).
Theories in Social Psychology
Social psychological theories vary in generality, rigor, testability, and general perspective or meta-theory. Behaviorist theories, based on Pavlov’s and Skinner’s approaches to psychology, focus on situational factors that reinforce behavior and produce learning—for example, social exchange theories (see Kelley & Thibaut, 1978) analyze social behavior in terms of people’s assessment of the personal costs and benefits of performing certain actions relative to other actions. Behaviorist theories are sometimes charged with incorrectly viewing people as passive targets of external influences. In response to this, cognitive theories, based on Koffka’s and Köhler’s Gestalt psychology of the 1930s, focus on the way that people actively interpret and change their environment through the agency of cognitive processes and cognitive representations. This perspective has a strong tradition in social psychology—it surfaces in Lewin’s (1951) field theory approach to social psychology, where representations of the social world motivate specific behaviors; in cognitive consistency theories of the 1950s and 1960s that view incompatible cognitions as motivating cognitive and behavioral change (see Abelson et al., 1968); in attribution theories of the 1970s that focus on people’s causal explanations of behavior (see Kelley, 1967); and in contemporary social cognition, which specifies in detail how cognitive processes and representations relate to social behavior (Fiske & Taylor, 1991).
As in other areas of psychology, social psychologists often try to explain social behavior in terms of enduring (now rarely innate) personality differences—for example some people conform more than others or are better leaders than others because they have conformist or leadership personalities. Most social psychologists now feel that personality is not a social psychological explanation at all—what is needed is an analysis of the interplay between situational and cognitive factors that cause apparently stable behavior patterns. Some of the strongest critics of personality or individual difference perspectives in social psychology are ‘situationists’—social psychologists who focus on the way that people in groups are constituted by their immediate or more enduring social context and thus change when circumstances change. Extreme forms of this are social constructivism and discourse analysis, which virtually do away with psychology and place full explanatory load on social history or spoken language. In reality, however, most theories in social psychology are a complex mixture of perspectives, which reflect metatheory only in their theoretical emphasis.
Controversies and Debates
Social psychology sometimes appears to have an identity problem, and to be in the throes of fiery debates about what it should be doing and how it should be doing it. This makes social psychology exciting—it reflects the passionate dedication that social psychologists have for their subject. There are at least two reasons why debate in social psychology can sometimes become heated. One is that its subject matter is people’s interaction within the context of human society, and therefore social values and political ideologies impact on and frame theories. The other reason is that social psychology lies at the intersection of a number of (sub)disciplines, including cognitive psychology, organizational psychology, sociology, political science, social anthropology, economics, developmental psychology, and sociolinguistics. To have a distinct identity, social psychology needs to have a scientific niche that it alone occupies. Since many disciplines study the same phenomena as does social psychology (e.g., groups, prejudice, families, aggression, cognition), some social psychologists argue that to be distinct, social psychologists must have a distinct way of studying social behavior—it’s how we do it, not what we do it on. The main problem is the level of explanation. Social psychology tends towards reductionism—explaining social phenomena exclusively in terms of individuals, individual cognitive processes, and in extreme cases neuropsychology. It is clear that this approach gives social psychology away to cognitive psychology or even neuropsychology. Some social psychologists, many from Europe, urge the discipline to focus on the social rather than the individual dimension of social behavior (Tajfel, 1984), and to develop concepts that integrate or articulate processes that operate at the cognitive, social interactive, and societal levels.
During the late 1960s and early 1970s there was a widespread crisis of confidence in social psychology which brought enduring worries into the open. Social psychologists were worried that social psychology was reductionist and immature in its theories; positivist and unsophisticated in its methods; blind to the role of language, history, and culture; inhumane and disrespectful in its treatment of research participants; and self-indulgently engaged in the explanation of trivial behaviors. The charge of positivism rested on the view that objectivity in social psychology is impossible because social psychologists are people and the subject matter of social psychology is people, ergo oneself. Therefore the scientific method is particularly inappropriate to social psychology. Out of this angst arose a diversity of ‘resolutions’ of the crisis – the two most successful are social cognition with its sophisticated methodologies and tight theories (e.g., Fiske & Taylor, 1991), and social perspectives that focus on culture, intergroup relations, and social identity (e.g., Tajfel, 1984). In recent years these approaches have drawn closer together. There is, however, another set of responses that radically rejects traditional social psychological methods, theories and research foci – this includes social constructionism, humanistic psychology, ethogenics, discourse analysis, and post-structuralist perspectives. These approaches are diverse but share a focus on subjectivity, language, and qualitative methods.
It was only in the second half of the nineteenth century that an empirical approach to the study of social behavior emerged. A group in Germany styled themselves as students of Völkerpsychologie (folk psychology) and focused on the collective mind (as distinct from Wundt’s focus on the individual mind), defined variously as a societal way of thinking within the individual and also as a form of supermentality that could enfold a whole group of people. This latter emphasis developed in the 1890s and 1900s into theories of the group mind (e.g., Le Bon, McDougall), which, although outmoded today, still address fundamentally social psychological issues to do with the relation between the individual and the collective.
Social psychology was quickly and decisively influenced by Wundt’s vision of psychology as an experimental science, and by Watson’s behaviorist manifesto for psychology. Although one of the earliest programs in social psychology was at the University of Chicago, where G. H. Mead’s influential symbolic interactionist perspective was developing, this ‘sociological’ form of social psychology became marginalized within a mainstream that viewed social psychology as a part of psychology rather than sociology, and increasingly focused on the isolated individual, observable behavior, and laboratory experimentation. This dominant form of social psychology is captured by F. H. Allport’s (1924) classic text that set the agenda for modern social psychology.
In the early 1900s America ousted Germany as the powerhouse of social psychology. This shift was accelerated in the 1930s by the rise of fascism in Germany—Germany’s leading social psychologists fled mainly to the United States. These émigrés included Kurt Lewin, the ‘father’ of experimental social psychology. This influx, coupled with research demands and questions associated with the Second World War, produced an explosion of activity, much of it applied, in social psychology that focused on small group processes (e.g., Lewin, 1951), attitudes and attitude change (e.g., the Yale attitude change program—Hovland, Janis, & Kelley, 1953), and prejudice (e.g., the authoritarian personality—Adorno, Frenkel-Brunswik, Levinson, & Sanford, 1950).
Since the late 1940s social psychology has grown at a prodigious rate, in terms of programs, publication, and significance within psychology. Although such growth embraces a diversity of research agendas, themes, and perspectives, there have been dominant trends that attract attention for periods of time. During the 1950s and into the early 1960s small group research flourished (e.g., the study of cohesion, leadership, communication networks, group influence) and articulated well with a general social exchange perspective on interpersonal relations and interaction (Thibaut & Kelley, 1959). This period was also characterized by cognitive dissonance theory—building on Lewin’s (1951) field theory approach to social psychology and Heider’s (1958) discussion of cognitive balance, Festinger (1957) developed a cognitive dissonance perspective that traced behavior (e.g., attitude change) to motivations arising from inconsistent cognitions. This cognitive emphasis gathered strength in the mid-1960s with Jones and Davis’s (1965) development of Heider’s (1958) early model of people as naive or lay psychologists in the business of developing science-like causal explanations of their social world—thus was born attribution theory (e.g., Kelley, 1967), which dominated social psychology through the 1970s (see Fiske & Taylor, 1991).
Concerns about the rational model of human behavior that underpinned attribution theories, in conjunction with the crisis of confidence in social psychology of the late 1960s and early 1970s, provided fertile ground for the most recent and far-reaching cognitive revolution in social psychology—social cognition (e.g., Fiske & Taylor, 1991; Nisbett & Ross, 1980). Social cognition has tried to emulate cognitive psychology in its types of theories, and its methods of research—it focuses on the way in which people construct and are influenced by their cognitive representations of experience. The emphasis is on cognitive processes and structures within the individual—an individual who is either a cognitive miser (using the least cognitive effort to get by) or a motivated tactician (picking and choosing among cognitive processes in order to best satisfy goals), or both. The rise of social cognition in the United States since the late 1970s has been shadowed by the development in Europe of a different emphasis in social psychology, that focuses on the social dimension of human existence (e.g., Tajfel, 1984)—for example, intergroup relations, prejudice, collective representations, social identity, large-scale social categories. There is some evidence that since the early 1990s American social cognition and European intergroup approaches are finally drawing together to integrate their strengths.
Cultural and National Forms of Social Psychology
For most of the twentieth century, social psychology has been dominated by the United States. There are at least four reasons for this: (a) English is the international language of science and there are well over 250 million English speakers in the United States—Australasia, Canada, and the UK can only muster 100 million; (b) the enormous wealth of the United States has allowed a well-funded and respected research culture to thrive within an elite group of top universities; (c) early twentieth century conflicts in Europe culminated in 1930s fascism which effectively removed Europe as a significant competitor in social psychology for almost half a century; and (d) career advancement in other countries depends on American journal publication criteria and thus local forms of social psychology are inhibited.
Social psychology is done by people—people who study things which interest them and which often come from their own day-to-day experiences. The agendas and perspectives of social psychology are influenced by cultural and historical experiences. It has often been suggested that because social psychology is dominated by America, it is framed by American cultural experiences which emphasize the primacy of individuality. Other cultural milieux might produce different emphases and agendas. Indeed this has happened—most notably with the re-emergence of European social psychology. In 1945 there was effectively no European social psychology. A concerted reconstruction effort led to the birth of the European Association of Experimental Social Psychology in 1966, the launching of the European Journal of Social Psychology in 1971, and the European Review of Social Psychology in 1990. The British Journal of Social Psychology has also played its part, as have European social psychology textbooks (e.g., Hewstone, Stroebe, & Stephenson, 1996; Hogg & Vaughan, 1998). Together this infrastructure has, since the mid-1980s, made Europe a major player in social psychology—a player whose contribution has been to criticize theories that explain collective phenomena purely in terms of individuality or individual cognitive processes. European social psychology has focused on intergroup relations and collective phenomena. However, there is also great, and growing, national diversity in Europe—for instance, social cognition and the study of small groups are significant themes in Germany, social representations thrives in France, and since the mid-1980s post-structuralist and discourse analytic approaches have flourished in the UK.
In Canada, social psychology is heavily influenced by the United States. However, the policy of multiculturalism, and the cultural and language context in Quebec, has focused Canadian social psychologists on language, ethnicity, intergroup relations, and social identity—Canadian social psychology has some close links to European social psychology. These links with European social psychology are also to be found in Australia and New Zealand. Social psychology in Australia was initially an offshoot of British social psychology. But with Australia’s post-Second World War shift of allegiance to the United States it became influenced by American social psychology. During the 1970s Australia re-focused on Asia and on its own multicultural nature, and during the 1980s there was an influx of British social psychologists fleeing Thatcher’s Britain and bringing with them a European perspective. Australia is now a kaleidoscope of diverse perspectives that are strongly influenced by a focus on intergroup relations, social identity, culture, ethnicity, and language. New Zealand has been influenced by British post-structuralist and discourse analytic perspectives.
Social psychology, as a scientific debate conducted in English, has been much less influenced by non-European or non-English-speaking nations. The exception is Asia—particularly the more prosperous East Asian nations such as Hong Kong, Japan, Korea, and Taiwan. There have been recent moves to organize social psychology in Asia around a common interest in collectivist perspectives and research on culture. This has been structurally facilitated by the recent formation of the Asian Association of Social Psychology, and by the launching of the Asian Journal of Social Psychology. Prominent East Asian social psychologists have largely been trained in the United States, but the focus on collectivism and culture is metatheoretically closer to European social psychology and to the sort of eclecticism that characterizes Australasian social psychology—indeed there are developing links between Asian and Australasian social psychology.
Nevertheless, social psychological research is overwhelmingly conducted in Western nations (North America, Europe, and Australasia). In their 1998 text, Social psychology across cultures, Peter Smith and Michael Bond note that only 2 to 3% of the total of research references in the top contemporary North American and European social psychology texts refer to studies conducted in non-Western cultures. The coverage of social psychology in this chapter reflects this cultural constraint.
Landmarks in Social Psychology
Although social psychology is a young science, there are some landmark studies or programs that are cited repeatedly as reference points for prolific subsequent research. A study by Triplett in 1898 is often identified as social psychology’s first experiment—Triplett first discovered from analysis of published records that cyclists went faster in paced than unpaced trials, and then went on to conduct a controlled experiment in which he had people reel in fishing line either alone or in coaction with others to see how the presence of others influenced performance. In 1936 Muzafer Sherif reported a program of experiments in which participants judged the apparent movement of a fixed point of light in a completely darkened room (the autokinetic effect) by themselves or in the presence of others. These studies showed how norms developed very rapidly to guide judgments, and how these norms persisted in their influence even when the original members of the group had all been replaced by new people. Developing this theme, Solomon Asch (1956) showed how people’s judgments can be swayed by a consistent majority even when they are judging something completely unambiguous, such as which line is longest. This research showed how readily people conform to a majority. Stanley Milgram (1963) wondered whether people would conform where the consequences of conformity involved inflicting pain on others—he found that people would inflict electric shocks that they believed would be injurious to a victim simply because an experimenter had told them to do so. Milgram’s focus quickly shifted from conformity to destructive obedience of commands.
Similarly, Philip Zimbardo (Zimbardo, Haney, Banks, & Jaffe, 1982) found that people would readily comply with role prescriptions even if they entailed inflicting discomfort—he constructed a simulated prison in the basement of the Stanford psychology department and assigned students to prisoner or guard roles for a prolonged role-playing study that had to be curtailed after only a few days because role adherence became too extreme.
In 1939, Dollard, Doob, Miller, Mowrer, and Sears published their frustration–aggression hypothesis. It traced prejudice and mass aggression to individual frustrations that are expressed as aggression displaced onto targets who are weaker than the original cause of the frustration. In 1950, Adorno, Frenkel-Brunswik, Levinson, and Sanford published their authoritarian personality theory. It traced prejudice and intergroup aggression to prejudiced personalities that had developed out of childhood experience of distorted family relationships involving authoritarian parenting. Later perspectives on prejudice and intergroup aggression focused on intergroup relations. In the early 1950s Muzafer Sherif (1966) conducted a series of naturalistic field experiments at boys’ camps in the United States. These studies showed how mutually exclusive goal relations produced competitive behavior, intergroup conflict, and stereotyping, and how superordinate goals that encouraged cooperative interaction improved intergroup relations. In 1954, Gordon Allport published a book on prejudice, which promoted meaningful and enduring equal-status contact between members of social groups as a way to reduce prejudice. This recommendation was influential in the United States government’s decision to desegregate the American school system.
In 1970, Henri Tajfel reported an experiment which showed that intergroup behaviors could emerge even if people were merely categorized into two non-interactive, anonymous groups on the basis of minimal and trivial criteria—categorization alone was sufficient to produce intergroup discrimination. This has become a popular paradigm in social psychology. In a typical minimal group experiment, 10 to 20 student participants who have volunteered for a one-hour study of social judgment sit at separate tables in a classroom and do not communicate or interact with one another. They complete a perception or judgment task (e.g., painting preference, dot estimation) and are ostensibly categorized into two groups on the basis of painting/painter preference or under- or over-estimation of the number of dots (in reality they are randomly categorized). They then complete a paper-and-pencil task in which they distribute points that they can think of as representing some valued resource between their own group and the other group. This is followed by a questionnaire in which they evaluate themselves, their group, and their membership in their group. Control participants are not categorized—merely aggregated by similar code number (e.g., 20s and 40s). Categorized participants show significantly greater behavioral and evaluative in-group favoritism and in-group belonging than do control participants.
Another classic study, by Stoner in 1961, challenged prevailing wisdom that conformity was all about averaging and that group decisions were cautious. Stoner found that sometimes group decisions could be more extreme and more risky than the average of the opinions held by the members of the group—groups could polarize towards extremity. In 1972 Irving Janis published an analysis of groupthink—a group decision-making phenomenon in which highly cohesive groups with overly-directive leaders can disregard optimal decision-making procedures as they blindly pursue consensus. This leads to poor decisions that can have disastrous consequences.
There are many other influential studies and publications. The brief coverage of social psychology in this chapter is necessarily framed by these influential works.
Social Cognition and Social Explanation
Probably because social psychology allied itself very early on with general psychology, it has always placed explanatory emphasis on intra-individual cognitive processes and structures—social psychology has always been markedly cognitive. As explained in the history section above, this cognitive emphasis has taken different forms at different times—culminating in modern social cognition (Fiske & Taylor, 1991) which has dominated social psychology since the early 1980s.
Forming Impressions of People
The question of how people combine information to form impressions of other people lies at the heart of social cognition. Early research by Asch adopted a Gestalt perspective to show that some pieces of information act as central cues which influence the meaning of other peripheral cues and are disproportionately influential in impression formation. Cues can be central because people think they are more important (people have their own personal constructs, or implicit personality theories that identify what constellations of traits they feel are important in judging people), or because they stand out from other information (negative information is often distinctive and thus disproportionately influential), or even because they are the first pieces of information encountered (first impressions are often hard to change—a primacy effect). There is some evidence that social norms proscribing certain impressions can inhibit the influence of central traits on impression formation—stereotypic impressions can thus sometimes be inhibited.
An initially more mechanical perspective on impression formation has been proposed by Anderson. People focus primarily on the evaluative implications (positive or negative) of pieces of information about others, and integrate this information arithmetically—the information is cognitively summed or averaged to produce an overall evaluative impression of the person. Research favors averaging, but actually goes further to suggest that the components are first subjectively weighted to reflect the context-dependent subjective importance of information in forming an impression. This weighted averaging model revisits Asch’s central traits—the difference is that Asch focuses on the meaning of traits whereas Anderson focuses on their evaluative implications. Modern social cognition replaces central traits with the more general concept of schemas.
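The arithmetic contrast between summation and averaging can be made concrete with a short sketch. The code below is purely illustrative and not from the chapter: the trait valences and weights are invented numbers, chosen to show why the two integration rules make different predictions when a mildly positive trait is added to an already very positive impression.

```python
# Illustrative sketch of Anderson-style information integration.
# All valences and weights are hypothetical values for demonstration only.

def summation(valences):
    """Summation model: the impression is the sum of trait valences."""
    return sum(valences)

def weighted_average(valences, weights=None):
    """Weighted averaging model: the impression is the weighted mean
    of trait valences; unweighted averaging is the special case where
    every weight is 1."""
    if weights is None:
        weights = [1.0] * len(valences)
    return sum(v * w for v, w in zip(valences, weights)) / sum(weights)

# Target A: two highly positive traits (+3, +3).
# Target B: the same two traits plus a mildly positive one (+1).
a = [3, 3]
b = [3, 3, 1]

# Summation predicts B is rated higher (7 > 6); averaging predicts
# A is rated higher (3.0 > 2.33), because mild information dilutes
# a strong impression. Evidence favors the averaging prediction.
print(summation(a), summation(b))                      # 6 7
print(weighted_average(a), round(weighted_average(b), 2))  # 3.0 2.33

# Weighting lets context-dependent importance dominate: here a negative
# trait given double weight pulls the overall impression negative.
print(weighted_average([3, -3], weights=[1, 2]))       # -1.0
```

The dilution effect in the middle of the sketch is the standard empirical test separating the two models: purely additive integration cannot explain why adding weakly positive information lowers an overall impression.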
Schemas and Categories
For social psychology, a schema is a cognitive structure that represents knowledge about a concept or type of stimulus—it represents attributes and relationships among attributes. We have schemas about roles (e.g., chairperson), events (usually called ‘scripts,’ e.g., eating at a restaurant), social groups (e.g., Norwegians), specific people (e.g., your spouse), and oneself (how we are, how we would like to be, and how we ought to be). Once invoked by contextual cues, schemas have a powerful tendency to replace data-driven or bottom-up processing (i.e., reliance on information gleaned directly from the immediate context) with theory/concept-driven or top-down processing (i.e., reliance on information provided by prior knowledge and preconceptions). In order for a schema to come into play a person, event or situation needs to be categorized as fitting a specific schema.
Social psychologists believe that categories are collections of instances that have a general appearance of similarity (called family resemblance), rather than a shared set of criterial attributes, and that people represent categories in terms of prototypes (an abstraction of relatively common attributes) or exemplars (a concrete instance of a category member). For example, consider the category café – instances are diverse but have a family resemblance, and we can represent the category as a prototype (amalgam of café attributes) or an exemplar (a specific café we have known). If an instance successfully matches a category prototype or exemplar it is categorized, and the relevant schema is activated. Categorization accentuates perceived similarities within and differences between categories. These processes, applied to the categorization of people, can produce stereotyping – people represent social categories (e.g., Canadians) as prototypes, exemplars or schemas (i.e., stereotypical images), and when an individual person is categorized as a Canadian the perceptual accentuation process in conjunction with top-down schematic processing leads to stereotypic perception of that person.
Research on schema use suggests that people relatively spontaneously employ schemas relating to social categories, roles, current mood states, easily detected features (e.g., skin color), contextually distinctive features (e.g., single male in a group of females), and schemas that are chronically accessible because they are frequently or recently used, or they relate to important aspects of self. Schemas such as these are functional and accurate enough for immediate interactive purposes in most contexts (they have circumscribed accuracy), and are particularly useful when quick social perceptual decisions have to be made (e.g., time pressure, distraction). However, if the perceived costs of mis-perception are high, people strive for greater accuracy by relying on more specific schemas or engaging in bottom-up data-driven information processing. Although people sometimes know that schemas are undesirable (e.g., derogatory stereotypes of social groups), and can actively try to avoid using them, it can be surprisingly difficult to do this. Schema use is overwhelmingly influenced by situational demands, but there are some individual differences broadly revolving around the complexity of people's representation of the social world and their predilection for quick and simple or slower and more detailed social perceptions.
People acquire schemas indirectly from other people, literature, and the media, and from encounters with category instances which make schemas more abstract, compact, richer in content, and ultimately more resistant to change. This resistance to change comes directly from the fact that schemas lend a sense of order, structure, and coherence to the social world. Rapidly changing schemas would not satisfy this need – and indeed research shows that once fully developed, schemas are remarkably resistant to schema-disconfirming information. However, schemas are not rigidly immutable; they change through at least three processes: (a) slow change in response to new information—called bookkeeping, (b) sudden cataclysmic change as the consequence of gradually accumulating information—called conversion, or (c) a configuration change in which new information forms the basis of a new schema nested within the original schema—called subtyping. Research suggests that subtyping is the most common process of schema change.
Encoding, Remembering, and Using Social Information
Social cognition is heavily influenced by salient stimuli and information because they attract attention and involve increased cognitive work. Novel, unusual, distinctive, and subjectively important stimuli/information are generally salient, and salient people are perceived more coherently and are seen as more influential and more extreme. Their behavior is also considered to reflect their dispositions rather than situational constraints. Social cognition is also heavily influenced, as described above, by schemas and information that is accessible in memory. Indeed, social cognition researchers have explored the way in which people store social information in memory—this work relates closely to cognitive psychology in its reference to associative networks, long- and short-term memory, and so forth. Most directly relevant to social psychology is research which shows that we can store information about other people ‘by person’ or ‘by group’—that is, we cluster attributes under individual people, or we cluster people under attributes of groups. One view in social psychology is that organization by group is tied largely to encounters with relative strangers and that the cognitive system strives to transform this structure into the preferred organization by person—this view reflects the traditional individualistic metatheory that places ultimate explanatory emphasis on the individual. Another view is that the two organizations coexist as distinct ways of representing social experience—this view reflects the alternative collectivist metatheory that guards against explaining groups in terms of individuals.
The cognitive decision-making processes we use to make social inferences (i.e., identify, sample, and combine information to form impressions and make judgments) have been studied in detail by social cognition researchers. A major distinction is between (a) relatively automatic top-of-the-head, schema-based, processing (also called heuristic or peripheral route processing); and (b) relatively deliberate bottom-up data-based processing (also called systematic or central route processing). Social inference research has generally specified ways in which social inference is biased and error prone because people fall short of ideal inferential processes. For instance, people are over-influenced by schemas, individual cases, and distinctive or extreme stimuli, and do not adjust inferences to accommodate more statistical information about large numbers of people (i.e., base-rate information and regression effects). One inferential bias that has been well studied is called illusory correlation—people tend to overestimate the co-occurrence of unusual or distinctive events (called paired distinctiveness) and of events which ‘ought’ to belong together on the basis of past experience (called associative meaning). Thus contextually distinctive people (a black person in a white society) and behaviors (e.g., antisocial behavior) are believed to co-occur significantly more than they actually do.
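The paired-distinctiveness account of illusory correlation can be made concrete with a toy count. The design below follows the spirit of the classic majority/minority paradigm, but the specific frequencies and the 'recall inflation' factor are invented for illustration.

```python
# Toy illustration of illusory correlation via paired distinctiveness.
# Numbers and the recall-bias factor are invented for the example.

# Actual behavior frequencies: the same 2:1 desirable-to-undesirable
# ratio in both groups, so group and behavior are truly uncorrelated.
actual = {("A", "desirable"): 16, ("A", "undesirable"): 8,   # majority
          ("B", "desirable"): 8,  ("B", "undesirable"): 4}   # minority

ratio_a = actual[("A", "desirable")] / actual[("A", "undesirable")]
ratio_b = actual[("B", "desirable")] / actual[("B", "undesirable")]
assert ratio_a == ratio_b == 2.0  # no real correlation

# Doubly distinctive events (rare group AND rare behavior) are better
# encoded, so recall inflates the minority/undesirable cell.
recalled = dict(actual)
recalled[("B", "undesirable")] = round(actual[("B", "undesirable")] * 1.5)

recalled_ratio_b = recalled[("B", "desirable")] / recalled[("B", "undesirable")]
print(ratio_a, recalled_ratio_b)  # minority now *seems* less desirable
```

The recalled ratio for the minority group drops below 2:1 even though the underlying frequencies carried no group/behavior correlation at all, which is the signature of the illusory correlation effect.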
Although people are poor at making inferences they have developed cognitive decision-making shortcuts, called heuristics by Tversky and Kahneman (1974), that are adequate for most day-to-day interactive needs. There are three main heuristics that people use: (a) representativeness—people are rapidly categorized on the basis of superficial and impressionistic assessment of how well they represent the prototype or an exemplar of the category; (b) availability—estimates of the frequency or likelihood of an event are a function of how readily the event comes to mind; and (c) anchoring and adjustment—inferences are tied to, and disproportionately influenced by, initial standards or earlier inferences.
Causal Attribution of Behavior
Although contemporary social cognition lies at the core of social psychology, it has its critics who identify at least three limitations. First, social cognition has largely ignored the social psychological role of language and communication—people speak to one another. Second, social cognition has tended to overlook affect—people have feelings and emotions. Third, social cognition focuses on intraindividual cognitive processes and structures without properly connecting this level of analysis to human interaction, group processes, and intergroup relations—the issue of reductionism discussed earlier. Recent developments in social cognition have begun to redress these limitations. Nevertheless, the emergence of social cognition in the early 1980s itself addressed limitations in the preceding dominant paradigm, which was attribution theory (see Nisbett & Ross, 1980).
Attribution theory takes its lead from Heider (1958) who believed that in order to function adaptively, people need to have a causal understanding of the social world. People are naive or commonsense psychologists who adopt sciencelike methods to understand the causes and consequences of social behaviors. In doing this people are concerned to identify stable and enduring properties of people and situations that reliably produce certain behaviors. In particular they distinguish between internal/dispositional causes of behavior and external/situational causes. According to Jones and Davis' (1965) theory of correspondent inference people are more likely to attribute behavior internally to characteristics of the person if the behavior was freely chosen, was not socially desirable, had direct and intended impact on us, and the effects of the behavior were unlikely to be produced by other behaviors. Better known is Kelley's (1967) covariation model which went so far as to characterize people as naive statisticians employing analysis of variance (ANOVA) to attribute causality. People canvass three sources of information to make a decision about whether to attribute behavior internally to the person or externally to the situation: (a) consistency—a person must consistently behave the same way in the same situation for either the person or that situation to be a valid causal candidate; (b) distinctiveness—if the person's behavior is distinctive to the situation then the likely cause is the situation, but if she behaves in this way irrespective of situation then an internal attribution is warranted; and (c) consensus—if everyone behaves in this way in this situation then the cause is probably the situation, whereas if he is the only person behaving in this way in this situation then an internal attribution is made.
Although experimental research shows that people can make attributions in this way, it does not mean that people ordinarily do make attributions in this way in everyday life.
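Kelley's covariation logic can be sketched as a simple decision rule. Treating consistency, distinctiveness, and consensus as booleans is a deliberate simplification for illustration; the model itself is informal and admits graded information.

```python
# Minimal sketch of Kelley's covariation logic as a decision rule.
# Boolean inputs are a simplification invented for this illustration.

def covariation_attribution(consistency, distinctiveness, consensus):
    if not consistency:
        # Behavior varies across occasions in the same situation:
        # neither person nor situation is a valid causal candidate.
        return "discount / seek more information"
    if distinctiveness and consensus:
        # Only in this situation, and everyone does it -> the situation.
        return "external (situation)"
    if not distinctiveness and not consensus:
        # Across situations, and few others do it -> the person.
        return "internal (person)"
    # Mixed patterns (e.g., distinctive but low consensus) point to
    # some combination of person and situation.
    return "mixed person-situation attribution"

# She laughs at this comedian every time (consistent), only at this
# comedian (distinctive), and everyone laughs at him (consensus):
print(covariation_attribution(True, True, True))  # external (situation)
```

The first branch encodes the point that without consistency no stable attribution is licensed; the remaining branches map the two clear-cut patterns described above onto situational and dispositional attributions respectively.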
Attribution theory has been extended in a number of ways. Stanley Schachter applied it to the experience of emotions (Schachter, 1964). He suggested that emotions involve an undifferentiated state of physiological arousal and a cognitive label that specifies the particular emotion. Thus, it should be possible to change emotions if people can be induced to attribute the physiological arousal to a different cause (i.e., label it differently)—perhaps this is why a crying (sad) child can sometimes quickly become a laughing (happy) child if a parent makes silly faces. There is some support for this idea, but subsequent research has shown that arousal is somewhat differentiated and distinctive to specific emotions, which reduces the possibility that alternative causal attributions can be made or cognitive labels accepted—the arousal associated with sadness is different to that associated with happiness. Another development of attribution theory, self-perception theory, suggests that people may learn about themselves by internally attributing their own behavior—I know I like seafood because I often freely eat seafood in preference to other foods, and not everyone likes seafood. One well-supported finding (the overjustification effect) is that if people are induced to perform a task by large rewards or harsh penalties they externally attribute their behavior and experience reduced motivation, whereas the absence of adequate external explanations for the behavior encourages increased motivation through internal attribution. Other applications of attribution theory have explored individual differences in attributional styles (people differ in the extent to which they are inclined to make internal or external attributions for their own or others' behaviors) and the role of attributions in close relationships (attributional conflict and internal attribution of negative behavior seem to prevail in dysfunctional relationships).
Biases in Causal Attribution
Accumulating evidence suggests that people do not attribute causality in a rational and scientific manner—there are many biases and errors which challenge the naive-scientist model that frames the attribution approach (Nisbett & Ross, 1980). People tend to make internal attributions for others’ behavior even when there are clear situational causes (the fundamental attribution error), and yet are quite likely to attribute their own behavior externally. People largely ignore objective consensus information in making attributions, and instead supply their own—there is a false consensus effect in which people assume that other people behave like they do. Finally, the entire attribution process is subject to self-serving biases aimed at protecting or enhancing self-esteem and self-image. People attribute their own or in-group members’ successes internally and their failures externally, and others’ or out-group members’ successes externally and their failures internally. They also like to believe in a just world, and sustain this belief by tending to attribute responsibility for people's misfortunes (e.g., sickness, unemployment, poverty, rape) internally—they blame the victim.
Although people can make causal attributions, they probably only do this when prepackaged causal knowledge does not exist, or when unusual events occur or we feel a lack of control. Generally we rely on cultural beliefs, social stereotypes, collective ideologies and social representations that automatically explain what is going on. For example, social representations are interactively elaborated, widely shared, commonsense causal understandings of complex phenomena, such as unemployment, AIDS, or global warming. There are also conspiracy theories which blame various social circumstances on the intentional and organized activities of specific social groups (the so-called ‘world Jewish conspiracy’ is a well-documented example of a conspiracy theory). The sorts of social explanations that people give are also culturally influenced. For example, the fundamental attribution error is more prevalent in Western societies—people in Eastern societies lean more towards external attributions for people's behavior.
Attitudes and Persuasion
Attitudes are a core construct in social psychology (Eagly & Chaiken, 1993). Until the mid-1930s it was considered social psychology's most indispensable concept, and many definitions of social psychology actually defined social psychology as the study of attitudes. There are different definitions of attitude that underpin different measurement techniques. However, most social psychologists probably agree that an attitude is a set of beliefs about an attitude object (e.g., a product, a behavior, a group) together with positive or negative feelings towards the attitude object, and perhaps some intentions to behave in certain ways towards the attitude object. Attitudes are highly functional because they integrate sets of beliefs and provide object appraisal that may help in planning action. Attitudes tend to form only around significant events or objects, and they tend to be relatively enduring structures. Attitudes tend to be more specific in terms of their referent than are values, which are much broader, more general orientations to life (e.g., the value of freedom).
During the 1920s and 1930s attitude researchers concentrated on how to measure attitudes, and elaborated all sorts of more or less complicated methods. More recently this issue has resurfaced in a slightly different guise—how do you know whether expressed attitudes about controversial attitude objects (e.g., racial or ethnic groups, affirmative action, immigration) are people's true attitudes or merely socially desirable responses? One relatively effective method is the bogus pipeline technique—people complete an attitude questionnaire while they are wired up to a machine that they are (falsely) told can detect when they are lying or telling the truth. Another method capitalizes on the fact that people are quicker to decide about negative than positive descriptors in a checklist, when a stimulus they have a negative attitude towards is displayed. There are also language- and communication-based techniques which unobtrusively monitor nonverbal cues to positive and negative affect, or analyze the subtext of people's discourse about attitude objects, or measure how abstract or concrete people are in their discussion of positive or negative attributes of a group (if people dislike a group they will talk abstractly about the group's negative attributes).
Attitudes and Behavior
One might think that attitudes should predict behavior rather well—if someone says he likes running it is not unreasonable to expect to encounter him running frequently. Research, however, clearly indicates that attitudes alone are rather poor predictors of behavior. A classic case in point is a study by LaPiere, published in 1934, in which he traveled across the United States with a Chinese-American couple and was only refused service in one out of 250 establishments, yet a subsequent questionnaire returned by 128 of these establishments indicated that 92% of them would not accept Chinese customers. Research has shown that attitudes are better predictors of behavior if we focus on attitudes which are very specific to the behavior being predicted, and thus relate more to behavioral intentions.
Two related models of attitude behavior relations have been developed (see Ajzen, 1989)—Ajzen and Fishbein's theory of reasoned action, and Ajzen's theory of planned behavior. A behavior is more likely to occur if (a) the person's attitude towards the behavior is favorable, (b) the person thinks significant other individuals are also favorably inclined towards the behavior, and (c) the person believes it is relatively easy to perform the behavior (opportunity and resources exist). This model allows much better prediction of behavior from attitudes, but the correspondence is still surprisingly low. Another factor which may be important is attitude accessibility—attitudes that are strongly held, are self-relevant, and which relate to objects with which one has had direct experience tend to be more accessible in memory and thus more influential on behavior. Finally, attitudes that define group membership may be strong predictors of behavior in contexts where people subjectively identify with the group.
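The three factors above are commonly operationalized as a weighted sum predicting behavioral intention. The sketch below is illustrative only: the weights, the 1-7 rating scale, and the specific numbers are invented for the example, not taken from Ajzen's model.

```python
# Hedged sketch of the weighted-sum operationalization often used with
# the theory of planned behavior. Weights and scales are invented.

def behavioral_intention(attitude, subjective_norm, perceived_control,
                         w_att=0.5, w_norm=0.3, w_control=0.2):
    """All three predictors rated on a 1 (low) to 7 (high) scale."""
    return (w_att * attitude
            + w_norm * subjective_norm
            + w_control * perceived_control)

# Favorable attitude (6) and supportive others (5), but the behavior
# feels hard to perform (2): intention ends up only moderate.
print(behavioral_intention(attitude=6, subjective_norm=5, perceived_control=2))
```

The point of the example is structural rather than numerical: even with a favorable attitude and supportive norms, low perceived behavioral control pulls the predicted intention down, which is exactly the contribution the theory of planned behavior adds over the theory of reasoned action.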
Persuasion and Attitude Change
Attitudes are acquired through the socialization process, with parents, peers, social groups and, in recent decades, the mass media playing an important role. We can acquire our attitudes through direct experience with attitude objects, classical conditioning, instrumental conditioning, observational learning (modeling), or cognitive learning. According to self-perception theory we may even learn our attitudes by internally attributing our overt behavior—‘I'm drinking tea, therefore I must have a positive attitude towards tea.’ Once formed, attitudes, like schemas, are relatively enduring constructs that we use to locate ourselves in the social world and to make decisions about behavior. However, attitudes can change—otherwise advertising, propaganda, and even education would be a complete waste of time.
Almost 50 years of research on persuasive communication (since Hovland et al., 1953) has shown that people who are considered expert, trustworthy, credible, popular, or attractive are more effective in changing attitudes. The message itself may be more effective if it does not appear to be a deliberate persuasion attempt (obvious persuasion attempts, particularly when we are forewarned, can produce reactance or negative attitude change), if it arouses some fear, or if it is weighted towards evaluation rather than facts. If the audience is hostile or intelligent then it is worthwhile presenting both sides of the argument—otherwise one side is best. People with very low or very high self-esteem may be less susceptible to persuasion, as are people who are distracted (if the message is simple). Early findings that women were more easily persuaded than men are now conclusively attributed to other factors like familiarity with the topic of persuasion. Finally, simple messages are most effective if communicated by video, whereas complex messages are best in writing. As a rule, the persuasive communication should be tailored to whether the audience is motivated or able to process the information systematically via a central processing route, or heuristically via a peripheral processing route (e.g., Eagly & Chaiken, 1993)—for instance, where the message is complex and strong then it would help to allow the audience time and ability to process it systematically, whereas if it is simple but weak it might be better to inhibit systematic processing and encourage superficial heuristic processing.
The process of attitude change has been explored most systematically from a cognitive dissonance perspective. Cognitive dissonance theory (Festinger, 1957), which became perhaps the most studied topic in social psychology in the 1960s (see Abelson et al., 1968), states that inconsistent cognitions produce a state of dissonance—an unpleasant state of psychological tension. People strive to avoid dissonance, but when it arises they must change one or other cognition, seek additional information to bolster one or other cognition, or derogate the source of one of the cognitions. The aim is to re-establish cognitive harmony. Often one cognition is about one's attitude (e.g., towards smoking) and the other about one's behavior (e.g., I have a cigarette in my hand), in which case there is pressure on attitude change to reduce dissonance. The three best-known research paradigms for investigating attitude change through dissonance reduction are effort justification, forced compliance, and free choice. Together, these paradigms reveal that attitudes towards a behavior improve if people feel they have entirely freely chosen to enact the behavior, if they have exerted cognitive or physical effort to engage in the behavior, or if there was initially very little to choose between possible behaviors. In one classic experiment from the 1960s military cadets were invited to try eating grasshoppers by either a friendly and cheerful officer (no dissonance would arise—I did it to please the nice officer) or a cool and official officer (dissonance would arise)—the latter was more effective in inducing the cadets to eat more grasshoppers and to report greater liking for them. Dissonance seems to be a plausible explanation of attitude change when people behave in a markedly counter-attitudinal manner (e.g., eat grasshoppers).
When the behavior is only slightly out of line with attitudes (e.g., paying just a little too much for a meal), attitude change is more likely to occur through self-perception—I have paid $50 for this meal when I really only wanted to pay $35, so I must have really enjoyed the meal.
Compliance and Obedience
In many ways, social psychology is the study of social influence—look at the definition of social psychology that opens this chapter. One of the most common forms of social influence involves trying to persuade someone to do something—the idea here is not to change someone's inner attitudes, but merely to secure behavioral compliance with a simple interpersonal request (e.g., how do you persuade someone to donate money?). Generally, with small requests, people are remarkably, and ‘mindlessly,’ compliant. However, if the request is more significant there are some tactics that can be used. One effective tactic is ingratiation—people are more likely to say ‘yes’ if you have succeeded in getting them to like you by, for example, praising them (i.e., flattery), drawing attention to interpersonal similarities, or linking yourself to prestigious or attractive others (i.e., basking in reflected glory, or name-dropping). Another tactic involves capitalizing on people's belief that they should reciprocate small favors (the reciprocity principle)—people are more likely to say ‘yes’ if you have first done something positive for them, even something trivial and uninvited. A third tactic employs two requests, in which the focal request is preceded by a priming request. Sometimes compliance can be increased when the focal request is preceded by a very small request that people are bound to comply with (the foot-in-the-door technique), while at other times compliance can be increased when the focal request is preceded by a very large request that people are bound not to comply with (the door-in-the-face technique). A final tactic, called low balling, involves first getting compliance at any price, usually by attaching all sorts of inducements to the request, and then removing the attractive inducements—having said ‘yes’ one is unlikely subsequently to change one's mind.
Compliance, and other forms of persuasion, are also influenced by the perceived power of the source of influence. Research has identified at least six bases of power: ability to reward compliance; ability to punish non-compliance; possession of specific pieces of information; having general expertise and knowledge; securing respect, liking and identification from others; and being a legitimate authority figure. Authority has been the focus of one of social psychology's most significant and socially meaningful pieces of research—Milgram's (1963) studies of destructive obedience to authority. Milgram discovered that quite normal people taking part in a laboratory experiment were prepared to administer electric shocks (450 V) which they believed would be injurious to another participant, simply because an authoritative experimenter told them that they must do so. By showing that apparently pathological behavior may not be due to individual pathology (the participants were ‘normal’), but to particular social circumstances (the situation encouraged extreme obedience), this research places explanatory emphasis for socially unacceptable behaviors on social psychology rather than clinical or abnormal psychology. Subsequent research, exploring factors that influence obedience, suggests that social support is the single strongest moderator of the effect—obedience is strengthened if others are obedient, and massively reduced if others are disobedient. However, emblems of authority (e.g., a uniform) can secure obedience even in domains well outside their sphere of legitimate authority.
Social influence also operates through conformity to social norms. Sherif's (1936) classic auto-kinetic experiments demonstrated that small groups rapidly develop norms that influence group members’ judgments, and that these norms continue to influence new members even when the original members are long gone. The participants in these experiments were calling out their judgment of a highly ambiguous stimulus (the movement of a spot of light that ‘appeared’ to move about). People may have conformed because they were concerned about social evaluation (e.g., being liked or thought badly of) by the others in the group (called normative social influence), or because they were using the others’ judgments to disambiguate reality (called informational social influence). A subsequent series of experiments by Asch (1956) controlled for informational influence—participants called out which of three comparison lines was the same length as a standard line. Although the task was unambiguous, participants were strongly inclined to conform to the unanimously erroneous judgments of five to seven preceding participants (experimental confederates).
Subsequent Asch-type experiments explored the parameters of majority influence. Conformity reaches full strength with three to five apparently independent sources of influence—larger groups are not stronger, and non-independent sources are treated as a single source. Conformity is significantly reduced if the majority is not unanimous—dissenters and deviates of almost any type can produce this effect. In one study conformity was reduced by a deviate who was virtually blind and could not actually see the stimuli. Conformity is also reduced if participants make their judgments privately. However, even when participants judged unambiguous stimuli completely privately in cubicles, they still showed some residual conformity—suggesting that the processes of informational and normative influence do not completely explain conformity. Recent research, based on social identity theory, has suggested that conformity is the behavioral consequence of defining oneself in terms of a norm that defines membership in a self-inclusive group—in which case the ‘ambiguity’ of the stimulus is irrelevant, as is the absence or presence of an ‘audience’ for one's behavior.
Conformity research mainly focuses on how a majority exerts influence. A valid question is how does a minority have influence, and thus how does social change occur? A program of research, initiated by European social psychologists Serge Moscovici (1976) and Gabriel Mugny (1982), suggests that because people dislike and avoid conflict, minorities must actively create and accentuate conflict to draw attention to themselves. In doing this they need to present a message that is consistent, but not rigidly presented, across time, modality, and members. Minorities are also more effective if they appear to be acting out of principle, and making personal sacrifices for their beliefs. These strategies disrupt majority consensus and raise uncertainty, draw attention to the minority as a group which is committed to its perspective, and convey a coherent alternative viewpoint that challenges the hegemony of majority views. It also helps if the minority can present itself as an in-group for the majority—minorities are typically vilified as outsiders. Effective minorities influence by conversion—the deviant message is superficially and publicly rejected, but is centrally and systematically processed to produce latent private attitude change that emerges behaviorally as apparent conversion to the minority.
In addition to compliance, obedience and conformity, people can find that the way they perform tasks can be influenced by being in a group.
Social facilitation research reveals that people perform well-learned tasks better in the presence of a passive audience or group than alone, but poorly-learned tasks worse. There are competing explanations for this effect—drive theory suggests that the presence of others is an innate source of drive, a learned source of drive (via evaluation apprehension), or a source of attentional conflict that produces drive—drive then energizes habitual behavior patterns which may be correct (i.e., well-learned tasks) or incorrect (i.e., poorly-learned tasks). Social facilitation may also occur because of distraction and subsequent attentional narrowing that hinders performance of poorly-learned/difficult tasks, but leaves unaffected or improves performance of well-learned/easy tasks. Self-awareness and self-presentation may also play a role—social presence motivates increased effort which is able to improve performance of easy tasks, but is unable to affect difficult tasks that are hindered by anticipated poor performance.
Groups can often perform better than individuals because more hands are involved, the human resource pool is enlarged, or because people may compete, or try to compensate for a perceived lack of motivation, ability, or effort of others. However, research suggests that in many cases group performance is worse than one might expect, because members not only perform their part of the task, but have to contend with distraction and coordination problems. Another problem is social loafing—individual motivation can suffer in groups, particularly where the task is relatively meaningless and uninvolving, the group is large and unimportant, and individuals’ contribution to the group is personally unidentifiable. Related to this is the free-rider effect—people selfishly take advantage of a limited public resource without contributing to its maintenance (e.g., tax evasion).
Structural Features of Groups
Groups vary in cohesion (often measured as how much people like one another), and in the nature and structure of the norms that regulate group behavior. Cohesive groups tend to retain their members and have tighter adherence to group standards; however, the critical factor may be the extent to which people psychologically identify with the group and internalize its defining features as a part of their own self-concept. Group norms are relatively enduring, but change in line with changing circumstances to prescribe attitudes, feelings, and behaviors that are appropriate for group members in a particular context. Norms relating to group loyalty and central aspects of group life are usually more specific and have a narrower latitude of acceptable behavior than norms relating to more peripheral features of the group. Norms are also more forgiving of deviation among higher-status than lower-status group members.
Almost all groups are internally structured—notably into roles that prescribe different activities that exist in relation to one another to facilitate overall group functioning. In addition to task-specific roles, there are also more general roles that describe members’ place in the life of the group, e.g., ‘prospective member,’ ‘newcomer,’ ‘oldtimer,’ ‘past member.’ Rites of passage mark movement between these generic roles, which are characterized by varying degrees of mutual commitment between member and group. Roles can be very real in their consequences for role occupants. For example, Philip Zimbardo conducted a realistic role-playing study in which students were randomly assigned to be prisoners or guards in a simulated prison (Zimbardo et al., 1982)—the participants were so zealous in their adherence to role prescriptions that the study was prematurely terminated. Roles are rarely equal—some are more prestigious than others. According to expectation states theory, role assignment in new groups is not only influenced by characteristics that relate directly to the group task (specific status characteristics, e.g., being well-organized), but also by characteristics that are more widely socially valued (diffuse status characteristics, e.g., being a doctor). Roles also define functions within a group that need to communicate with one another. Research on communication networks focuses on centralization as the critical factor. More-centralized networks have a hub person or group that regulates communication flow, whereas less-centralized networks allow free communication among all roles. Centralized networks work well for simple tasks (they liberate peripheral members to perform their role), but not for complex tasks (the hub becomes overwhelmed, delays and miscommunications occur, frustration and stress increase, peripheral members feel loss of autonomy).
The most basic role differentiation within groups is into leaders and followers. Despite a long research tradition that attributes leadership to innate or acquired leadership personality attributes (the great person approach to leadership), there are almost no traits that are reliably associated with effective leadership in all situations. Situational perspectives that view leadership purely as a function of situational demands do better. One variant of this (leader categorization theory) is that we have leadership schemas for different activities and categorize people as leaders on the basis of their fit to the task-activated schema. Another variant (an application of self-categorization theory) is that when people identify strongly with a group they ‘appoint’ as leader the person who best fits their representation of a typical/ideal group member.
Interactionist perspectives view effective leaders as those whose general leadership style (the distinction is between a socio-emotional/relationship-oriented style and a task-oriented style) is best suited to situational/task demands. Fiedler’s contingency theory states that task-oriented leaders are most effective when the situation is highly controlled and the task very well organized, and also at the other extreme of situational and task disorganization—otherwise socio-emotional leaders do best. Other approaches focus on the dynamic transactional relationship between leaders and followers. People who are disproportionately responsible for helping a group achieve its goals are rewarded by the group with the trappings of leadership in order to restore equity. Hollander suggests that part of the reward is being able to be relatively idiosyncratic and thus to be innovative—people who are highly conformist and attain leadership in a democratic manner tend to accumulate significant idiosyncrasy credits that they can expend once they achieve leadership. Leaders who have a high idiosyncrasy credit rating may be able to exercise what organizational psychologists call transformational leadership—they are able to induce significant group change because they are imbued with charisma by the group.
Group Discussion and Decision Making
A major function of groups is, through discussion, to reach a collective decision from an initial diversity of views. Research on social decision schemes identifies a number of implicit or explicit decision-making rules that groups can adopt to transform diversity into a group decision: (a) unanimity—discussion pressures deviants to conform; (b) majority wins—discussion confirms the majority position, which becomes the group decision; (c) truth wins—discussion reveals the position that is demonstrably correct; (d) two-thirds majority—discussion establishes a two-thirds majority, which becomes the group decision; (e) first shift—the group adopts a position consistent with the direction of the first shift in opinion. On intellective tasks (there is a demonstrably correct solution, e.g., matters of fact), groups adopt truth wins. On judgmental tasks (there is no demonstrably correct solution, e.g., matters of taste), groups adopt majority wins.
The process of group discussion involves the recall of information. Because the process is subject to the process and motivation losses discussed earlier (e.g., distraction, uneven power, social loafing), group remembering is often a constructive task in which the group forges its own idiosyncratic version of the truth (a social representation)—a version which is internalized and carried away by group members. Some researchers suggest that there are two components to group memory—not only do members have to remember their own specialized role prescriptions (i.e., what they have to do), but they also need to know where in the group other memories are stored (i.e., where to go to access other information or expertise). This latter form of memory is called transactive memory—it is a shared system for encoding, storing, and retrieving information. Research suggests that because transactive memory emerges from interindividual interaction, and is essential for group functioning, groups function better if members learn together rather than individually.
One popular method to harness the potential of groups is brainstorming—the uninhibited generation of as many ideas as possible, regardless of quality, in an interactive group. Although it is commonly thought that brainstorming enhances individual creativity, research convincingly shows that this is not the case—people may loaf, they are distracted, and the generation of ideas is blocked by others’ ideas. In fact it is more effective for people to generate ideas on their own and then pool them, rather than generate ideas interactively—electronic brainstorming may help, as may having a very heterogeneous group.
Popular opinion and conformity research suggest that groups are conservative and cautious entities which exclude extremes in a ponderous process of averaging. Two phenomena that challenge this view are groupthink and group polarization. Janis (1972) has argued that highly cohesive groups that are ideologically homogeneous, under stress, insulated from external influence, and lacking impartial leadership and norms for proper decision-making procedures, adopt a mode of thinking (groupthink) in which the desire for unanimity overrides the motivation to adopt proper rational decision-making procedures. Such groups feel invulnerable, unanimous, and absolutely correct. They also discredit contradictory information, pressurize deviants, and stereotype out-groups. The consequence is poor decision-making procedures that produce suboptimal decisions that can have widespread disastrous consequences—particularly if the decision-making group is a government body. A related pitfall of group decision making is group polarization, which is defined as a tendency for groups to make decisions that are more extreme than the average of pre-discussion opinions in the group, in the direction initially favored by that average—group polarization extremitizes group decisions (and can sometimes shift members’ enduring attitudes towards the polarized group position). Polarization does not require group interaction or discussion—it can happen on merely being exposed to a distribution of in-group positions. Polarization is particularly likely to occur when an important self-defining in-group confronts a salient out-group that holds an opposing point of view.
Intergroup Behavior
The study of intergroup behavior (behavior regulated by people’s awareness of and identification with different social groups) is entwined with the study of prejudice and discrimination. To explain discrimination and intergroup aggression, Dollard et al. (1939) adopted a psychodynamic model to explain how individual frustrations (e.g., economic failure) produce among group members a need to aggress, which, because the cause of the frustration is usually intangible (e.g., the economy) or too powerful (e.g., a military government), is vented on a weaker and available scapegoat (e.g., immigrants). Later variants of this frustration-aggression hypothesis removed the psychodynamic process, and focused on frustration in the form of people’s sense of relative deprivation. This research suggests that relative deprivation is most acute when one’s expectations have been rising and there is a sudden fall in one’s attainments, and that this is associated with social unrest and intergroup violence (sometimes called revolutions of rising expectations). People can also feel relatively deprived by comparing their attainments with those of others. Research suggests that interpersonal comparisons produce a sense of egoistic relative deprivation associated with individual stress and depression, whereas intergroup comparisons, between one’s own reference group and relevant other groups, produce fraternalistic relative deprivation associated with collective behaviors and intergroup conflict.
Another psychodynamic approach to prejudice is Adorno et al.’s (1950) authoritarian personality theory. They argued that harsh family rearing strategies produce a love-hate conflict in children’s feelings towards their parents. The conflict is resolved by idolizing parents and all power figures, despising weaker others, and striving for a rigidly unchanging and hierarchical world order. People with this authoritarian personality syndrome are predisposed to be prejudiced. Research suggests that although this syndrome does exist, its genesis in family dynamics is unconfirmed, and its relationship to prejudice is weak. By far the best predictor of prejudice is the existence of a culture of prejudice that is legitimized by societal norms. This finding also challenges the contribution of other, non-psychodynamic, personality explanations of prejudice—for example dogmatism and closed-mindedness.
Sherif (1966) provides an alternative perspective on intergroup behavior, based upon a series of naturalistic field experiments on conflict and cooperation at boys’ camps in the United States in the early 1950s. Sherif argued that mutually exclusive goals engender competitive behavior that, at the individual level, fragments groups, and at the group level produces intergroup conflict and stereotyping. Superordinate goals requiring interdependence for their achievement produce cooperative behavior that, at the individual level, forms groups, and at the group level improves intergroup relations. The real nature of goal relations determines intergroup behavior—hence the theory is often called realistic conflict theory.
Other research into cooperation and competition has had people play dyadic laboratory ‘games’ (e.g., the prisoner’s dilemma, the trucking game) that are constructed to manipulate causes and consequences of cooperation or competition. This research repeatedly shows that people are so distrustful of one another that they willingly adopt mutually harmful competitive strategies even when a mutually beneficial strategy is clearly available. When a number of people or groups are confronted by the dilemma of whether to cooperate or compete, a commons dilemma can exist—if everyone cooperates a common resource (e.g., the natural environment) is preserved for all to enjoy, but if everyone competes the resource is quickly destroyed in a frenzied rush for self-gain. Research shows that self-interest almost always wins out, unless those accessing the resource derive their social identity from the entire group which has access to the resource, or there is a leader who can manage the resource. Resource destruction can also be moderated by any means that limit the number of people accessing the resource or increase the relative attractiveness of cooperation over competition. As in other areas of social psychology, most cooperation/competition and dilemmas research has been conducted in Western cultures. People in non-Western societies tend to rest their decision to cooperate or compete more carefully on their relationship to their partner/opponent.
Although goal relations influence intergroup behavior, minimal group studies, first published by Henri Tajfel in 1970, show that competitive intergroup behavior can be an intrinsic feature of merely being categorized as a group member. This research spawned the social identity perspective on group processes and intergroup relations. When category memberships are contextually salient, people categorize themselves and others in terms of contrasting in-group and out-group defining prototypes that prescribe category-appropriate perceptions, attitudes, feelings, and behaviors. This process of prototypical ‘depersonalization’ of self produces a sense of group identification and belonging, as well as in-group solidarity, conformity, normative behavior, ethnocentrism, in-group bias, intergroup discrimination, and perceptions of intragroup stereotypic similarity. Because groups define and evaluate who we are, intergroup relations are a continual struggle for evaluative superiority of one group over others. How the struggle is conducted, and thus the specific nature of intergroup behavior (acquiescent, competitive, conflictual, destructively aggressive), depends on people’s beliefs about the stability, legitimacy, and permeability of status relations between groups.
Collective and Crowd Behavior
Social psychologists have tended to view collective behavior as irrational, aggressive, antisocial, and primitive. The general model is that people in interactive groups such as crowds are anonymous and distracted, which causes them to lose their sense of individuality and to become deindividuated. Deindividuation prevents people from adhering to the prosocial norms of society that usually govern our behavior, because the critical factor of identifiability, which is necessary for conformity to norms, is no longer present. People regress to a primitive, selfish, and uncivilized behavioral level. Research (see Zimbardo, 1970) typically manipulates anonymity (people in dark rooms, or wearing hoods and robes) to discover that deindividuation does increase aggression and antisocial behavior. The anonymity of the city has also been invoked to explain the aggression, selfishness, and rudeness that is often associated with city living.
Another perspective on the crowd is Berkowitz’s long hot summer analysis of urban race riots in the United States during the 1960s (Berkowitz, 1972). Against a background of relative deprivation, excessive heat (an environmental stressor) amplified existing frustration and produced individual acts of aggression, which were exacerbated by aggressive cues (armed police). Aggression became the dominant response, which was socially facilitated by the presence of other people in the street, and thus became widespread and extreme—a riot.
A rather different sort of explanation is provided by emergent norm theory. The essence of collective/group behavior is adherence to norms. The problem of the crowd is that it is normless, because it comprises an ad hoc collection of people who have no tradition of being together. So, where do the norms come from? One possibility is that distinctive behaviors (antisocial behavior would be distinctive due to its rarity in everyday life) attract attention and are assumed to be the appropriate norm. This raises an additional problem—for people to conform to this norm they should be identifiable, and yet most crowd research rests on the assumption that people are anonymous in the crowd. In reality, however, most crowds are not normless—they often comprise people with a shared identity and shared purpose, and thus shared norms that provide the parameters for situation-specific norms to emerge. Social identity would be particularly salient in such a crowd and so self-categorization and depersonalization may explain the generation of normative crowd behavior.
Prejudice and Conflict Reduction
Prejudice and conflict are significant social ills that produce enormous human suffering—ranging from damaged self-esteem, reduced opportunities, stigma and socio-economic disadvantage, all the way to intergroup violence, war, and genocide. Prejudices can be muted by public service propaganda and education, mainly because this conveys societal disapproval of the expression of prejudice and may allay some of the ignorance and fears that fuel prejudice. These strategies are not very effective if isolated from wider social reforms that address health, educational, occupational, and economic disadvantage.
Many people believe that prejudice can be more significantly reduced by intergroup contact—indeed the contact hypothesis (G. W. Allport, 1954) suggested that prolonged, cooperative, purposeful, equal-status contact, occurring within a framework of official institutional support for integration, should reduce prejudice and improve intergroup relations. This idea fueled the desegregation of the American education system. Almost 50 years of research on the contact hypothesis paints a less optimistic picture. Contact is very likely to consolidate or even amplify prejudices—it can confirm that the out-group is more different and less likable than anticipated (particularly if contact proponents promulgate a sanitized image of the out-group that conceals real differences), it can make people feel anxious enough to avoid further contact (intergroup anxiety), and it can remind one that there is often a real conflict of interest between groups. A significant problem with contact is that pleasant contact with a specific out-group member rarely generalizes to the group as a whole—the contact is essentially interpersonal. However, true intergroup contact is, by definition, unlikely to be pleasant and thus to change perceptions.
Another strategy to reduce prejudice is to encourage the development of a common identity based upon re-categorization, superordinate goals, or a common threat. If the in-group—out-group distinction disappears, then so does the prejudice—this is the ‘melting pot’ practice of assimilation or cultural monism. Although initially compelling, there are some pitfalls of this approach—superordinate goals do not reduce conflict if the groups fail to achieve the goal, and groups which furnish well-defined different identities can feel that assimilation is a threat to their distinctiveness. One way around the latter problem may be multiculturalism or cultural pluralism in which group differences are recognized and nurtured within a common superordinate identity that stresses cooperative interdependence and diversity.
One obvious way to reduce conflict between groups is for the groups or their representatives to resolve specific intergroup disagreements through discussion. Representatives can bargain directly with one another, but this often accentuates intergroup orientations and hinders conflict resolution—in which case a credible, respected, and impartial mediator can be brought in to reduce emotional heat and misperceptions and explore novel compromises and face-saving strategies. The least satisfactory resort is arbitration, where a powerful third party imposes a resolution. Where intergroup relations are so poor as to preclude direct interaction groups become trapped in an escalating cycle of threats and retaliation, which can only be stopped if each group advertises and makes a small concession, and then invites the other group to reciprocate. This may work because it establishes trust and engages the norm of reciprocity.
Aggression and Helping Behavior
Theories of human aggression tend to emphasize its innate or instinctive aspects on the grounds that aggression has survival value. More popular with social psychologists are approaches that either emphasize the drive aspect of aggression, or how aggression can be learned. As we saw earlier, the frustration-aggression hypothesis views aggression as an automatic consequence of frustration. A more complex process is described by Zillmann’s excitation-transfer model—arousal produced in one situation (e.g., exercise) can spill over into another context which is interpreted as requiring an aggressive response, and thus be available to drive that aggression. The most popular learning approach is Albert Bandura’s social learning theory (Bandura, 1977)—through socialization people learn aggressive behavior directly (i.e., by being reinforced for aggressive behavior) or vicariously (i.e., by witnessing relevant others being reinforced or not censured for aggressive behavior).
Research on personality and aggression has found that people with a Type A personality tend, in competitive contexts, to be more aggressive than other people. Sex differences in aggression have also been a focus of study. Although there is little difference between males and females in verbal aggression, males are generally more physically aggressive than females. This is mainly attributed to sex-role socialization, although there is evidence that the male hormone testosterone does produce more dominant behavior that can, when social norms encourage it, lead to aggressive behavior. Frustration, however caused (e.g., blocked goals, disadvantage, relative deprivation, uncomfortable climate, crowding), can also produce aggression, sometimes against targets that are entirely unrelated to the original cause of frustration. Direct provocation is another source of aggression—because it engages the reciprocity principle, even a small act of aggression can produce strong retaliation, and subsequent escalation. Disinhibition, through alcohol consumption or loss of personal identifiability (i.e., deindividuation), can cause people to behave in ways that are usually constrained by societal norms—for example, frustrations or aggressive attitudes that are normally kept in check may be expressed as aggression. However, much collective aggression may be better explained in terms of identification with a group that has norms that actually prescribe certain forms of aggression. Aggression can become institutionalized through official or unofficial sanctions—for example, terrorism, war, gang violence, prison culture. Generally speaking, cultural, sub-cultural, and group norms are powerful influences on what are considered acceptable levels or forms of expression of aggression (compare Hare Krishnas with skinheads).
Probably the most effective moderator of aggression is the existence of cultural norms, practices, and legislation that proscribe aggressive behavior, remove societal frustrations based on social disadvantage and discrimination, and encourage non-aggressive outlets for individual frustrations.
Human aggression is balanced by a drive for people to help one another—helping is one type of prosocial behavior, which may or may not involve altruism (benefitting others at some cost to oneself). Although aggression can perhaps be reduced by facilitating helping behavior, research on helping behavior suggests that there are many factors that prevent people from helping. General theories of helping are very similar to theories of aggression—some emphasize the evolutionary advantage of helping others, while the most popular theories for social psychologists are ones that emphasize social learning through direct reinforcement, vicarious learning, and formal education.
The decision to help someone can be based on assessment of whether someone deserves help and whether the help will be effective. In order to preserve the perception of a just world, people tend to blame victims for their plight, and thus do not feel that help is deserved. This effect can be minimized by making people more aware of others’ suffering, and showing how a surprisingly uncostly helpful act can effectively reduce that suffering. More effective than awareness is empathy—perceived similarity is particularly effective in making people experience others’ suffering as their own, and thus provide help in order to reduce their own empathic suffering. Self-attribution processes may also play a role—internal attribution of a helpful act that one has performed helps construct a helpful personality for oneself that facilitates helping in other situations. External pressures to help others hinder this process. Helping is also increased by the existence of prosocial societal or group norms—these can be general norms of reciprocity (help those who help you) or social responsibility (help those in need), or more specific helping norms tied to the nature of a social group.
The bulk of helping research focuses on immediate situational factors that determine whether bystanders help someone who is in need of help—bystander intervention. This approach was given impetus by the widely reported murder of Kitty Genovese in New York in 1964—although 38 people admitted witnessing the murder, not a single person ran to her aid. To explain bystander intervention/apathy, Darley and Latané (1968) proposed a cognitive decision-making model of helping. Bystanders (a) need to notice an event, (b) need to be able to define that event as an emergency that calls for help, (c) need to assume personal responsibility to help, and (d) need to decide what can be done. Piliavin and associates (1981) propose a bystander calculus model that assigns a role to arousal—emergencies make us aroused, situational factors determine how that arousal is labeled and thus what emotion is felt, and then people assess the costs and benefits of helping or not helping before deciding what to do. The presence of multiple bystanders reduces personal responsibility and is the strongest inhibitor of bystander intervention—due to diffusion of personal responsibility, fear of social blunders, and social reinforcement for inaction. In addition, the costs of not helping are reduced by the presence of other potential helpers. People tend to help more if they are alone or among friends, if situational norms or others’ behavior prescribe helping, if they feel they have the skills to offer effective help, or if the personal costs of not helping are high. Other factors which increase helping include being in a good mood, and assuming or recognizing that one has a leadership role in the situation. Relative to situational variables, personality and gender are poor predictors of helping.
Interpersonal Processes and Close Relationships
Human beings have a strong need to affiliate with other people, through belonging to groups and developing close interpersonal relationships. The consequences of social deprivation are severely maladaptive (ranging from loneliness through psychosis), and social isolation is a potent punishment which can take many forms (e.g., solitary confinement, shunning, ostracism, the silent treatment). Research on social comparison processes shows that almost all knowledge we have about ourselves, our skills, abilities, perceptions, and attitudes, comes from being able to make comparisons between ourselves and other people.
Generally, people seek out and preserve the company of people they like. We tend to like others whom we consider physically attractive, and who are nearby, familiar, available, and with whom we expect continued interaction. We also like people who genuinely like us (the reciprocity principle may explain this), particularly if we have relatively low self-esteem, and people who grow to like us over time (the gain-loss hypothesis)—people who shift from less to more liking for us reduce rejection anxiety and may be perceived as being discerning. One of the most important determinants of liking is attitude or value similarity—we tend to like people who have similar attitudes and values to our own. But in contrast to this, we can also be attracted to people who satisfy our needs through having complementary qualities to our own. Research suggests that similarity is important in the early stages of a relationship, and need complementarity in later stages.
A popular approach to the study of interpersonal relationships is framed by social exchange theory (Thibaut & Kelley, 1959). Relationships are effectively trading interactions where the partners trade goods (e.g., objects), information (e.g., advice), love (e.g., affection, warmth), money (e.g., things of value), services (e.g., activities of the body), and status (e.g., evaluative judgments). A relationship continues to the extent that both partners feel that the benefits of remaining in the relationship outweigh the costs of the relationship and the benefits of other relationships—relationships are based on complex cost-benefit analyses. Relationships are also influenced by equity considerations—people only remain in relationships if they feel that distributive justice exists (i.e., both partners’ outcomes are proportional to their inputs—particularly that the other person is not free-riding).
Close relationships often involve love, which is difficult to study at all let alone under controlled conditions. Research distinguishes between companionate love (caring and affection arising from spending time together) and passionate/romantic love (intense absorption involving physiological arousal). The latter, romantic love, is considered to be a function of (a) the existence of a cultural concept of love, (b) physiological arousal, and (c) a culturally appropriate love object. All three conditions must exist for love to exist. Sternberg has proposed a taxonomy of love based on three dimensions: passion, commitment, and intimacy (Sternberg, 1988). Passion alone is infatuation, commitment alone is ‘empty love,’ intimacy alone is liking, passion and commitment is fatuous love, passion and intimacy is romantic love, commitment and intimacy is companionate love, and all three together is consummate love.
The study of the development and disintegration of close relationships has tended to focus on marital satisfaction in Western-style marriages. A notable feature of distressed marriages is that partners blame each other for negative outcomes but do not make dispositional attributions for positive outcomes. The warning signs of relationship dissolution are that a new life seems to be the only solution, alternative partners are available, relationship failure is expected, and there is a lack of commitment to relationship continuance. Partners can respond passively, by expressing relationship loyalty (waiting for things to improve) or neglect (allowing deterioration to continue), or actively, by voice behavior (working at relationship improvement) or exit behavior (ending the relationship).
Communication and Language
Communication and especially language are the underplayed dimensions of much conventional social psychology. Nevertheless both are obviously social psychological phenomena. The social psychology of language has tended to focus on how things are said (speech style) rather than what is said (speech content). Spoken language contains social markers that tell us something about who is speaking to whom in what context. For example, slow, precise speech suggests that we are talking to a young child, an elderly person, or a foreigner. Powerless speech (i.e., rising intonation, intensifiers, hedges, tag questions) suggests we are in the presence of a higher status person. Social markers also tell us about group memberships such as social class, ethnicity, sex, and age. Research using the matched guise technique (people evaluate speech extracts that are constructed to be identical in all respects except speech style, e.g., accent, dialect, language) shows that, on the basis of speech style alone we can categorize people in terms of their group membership and evaluate them accordingly. Although accommodation of speech style to communicative contexts can be largely automatic, the evaluative implications of speech style mean that we can sometimes deliberately adopt a more prestigious speech style (e.g., speaking formally in an interview).
Speech accommodation theory states that in interpersonal interactions we tend to accommodate to each other’s speech styles (bilateral convergence) to improve communication and increase attraction via reciprocity and increased similarity. However, when intergroup relations are salient, people with the higher-prestige speech style accentuate usage of that speech style—divergence. People with the low-prestige style show upward convergence on the high-prestige variety, unless they believe that their low-status position is unstable and illegitimate, in which case they accentuate their own speech style—divergence. Speech accommodation in intergroup contexts reflects wider intergroup dynamics that have been extensively studied in multi-ethnic contexts containing ethnic groups for whom language is a cultural anchor point (e.g., Australia, Québec)—ethnolinguistic identity theory. Members of ethnolinguistic minority groups who feel that their group has low status, poor demography, and little support (low subjective ethnolinguistic vitality), and that this is a stable and possibly legitimate state of affairs, try to pass linguistically into the dominant group by learning to be fluent speakers of the dominant group’s language (the motivation to be fluent is strong, because fluency is a passport to higher status). Consequently the minority group can gradually lose its language and culture, while, because it is difficult to pass successfully, individuals are left in ethnolinguistic limbo. Where subjective ethnolinguistic vitality is high, and the legitimacy of the status quo is challenged, minority individuals promote their language and culture, and need only a working knowledge of the dominant group’s language (the motivation to be fluent is weak, because fluency conflicts with pride in one’s existing ethnolinguistic identity)—there is an ethnolinguistic revival.
People communicate not only through what they say and how they say it, but also through postures, gestures, facial expressions, touch, and even how close they stand to one another. Verbal and non-verbal messages do not have to be consistent—sometimes they clash, and it can be difficult to know which message is ‘true.’ Generally, people are good at managing the verbal content of a message but not the non-verbal channels—lies may be detected through non-verbal cues. Indeed, one way to discover underlying attitudes on socially sensitive issues is to analyze non-verbal cues. Non-verbal channels tend to specialize in communicating feelings and status, and in regulating conversational turn taking. Although non-verbal communication often operates automatically, without our paying conscious attention, some of us are better than others at systematically reading and strategically using non-verbal cues.
The eyes are highly informative. If someone persistently gazes at you it could mean they like you, they are trying to persuade you or ingratiate themselves, they are a higher-status person trying to exert control, or they are a speaker signaling that it is now your turn to talk. Facial expressions are powerful communicators of universally recognizable emotions; however, there are culture-specific rules—called display rules—that encourage or discourage the expression of emotion, or of specific emotions, in different contexts. Research indicates that although facial expression of emotion is expressive, it is also communicative—people express emotions when other people are around. The entire body can be used to illustrate spoken language (postures) or replace spoken language (gestures). For example, a relaxed, forward-leaning, face-to-face posture communicates liking, and in India a sideways tilt of the head with a simultaneous slight upswing of the head (a gesture that is distinct from the half-head shake that forms part of the full-head shake used for denial in most societies) means ‘yes.’ Touch can be used to communicate, among other things, positive affect, playfulness, or control, depending on who touches whom, on what part of the body, for how long, and in what context. Research indicates that higher-status people touch lower-status people more than vice versa, and that there are marked cross-cultural differences in the amount and pattern of touching (e.g., Italians touch more than Germans, and there is a taboo in Buddhist countries against touching the head). Finally, interpersonal distance is a potent cue to liking (people who like one another stand closer together) and to the formality of the interactive context (the greater the distance, the more formal the interaction). But again, there is marked situational and cultural variation. Intimacy equilibrium theory suggests that if interpersonal distance is inappropriately intimate, people reduce intimacy in other modalities (reduced gaze, deflected posture)—typical elevator behavior.
Conversation and Discourse
Verbal and non-verbal communication occur in face-to-face interaction. In this context, nonverbal behavior is important for regulating the flow of conversation. This can involve attempt-suppressing signals such as a raised hand to indicate one has not yet finished, or backchannel communication such as nodding to indicate that one is still listening and not intending to interrupt. Depending on context, interruptions can signify rudeness, power and influence, or involvement, interest and support. Analysis of the entire conversational event, the discourse, can reveal a subtext of meanings that tell a great deal about the context and the interactants—for example analysis of discourse can reveal bigotry that would otherwise be difficult to detect.
Applications of Social Psychology
Social psychology and social psychologists move easily between basic research, applied research, and action research. This versatility was advocated at the outset by Kurt Lewin, and has been continued by key researchers and their associates, and by the discipline as a whole. It is a strength of social psychology. It is beyond the scope of this chapter to do more than give a flavor of some significant areas of application. Impression formation and social inference research has been applied in the context of eyewitness testimony. Small-group decision-making research has focused on jury decision making and the legal system. Cohesion research has been applied in sports and military contexts. Aggression research has been pursued in the contexts of media violence, pornography, alcohol, soccer hooliganism, rape, and domestic violence. Attitude research has been significantly and enduringly applied in a range of areas to do with attitude measurement (survey construction), attitude change, and attitude-behavior correspondence—for example, commercial and public service advertising, safe driving, safe sex, dietary and exercise behavior, tobacco smoking, and sun-related behaviors. A range of social psychological constructs, including communication and helping behavior research, have been applied to built environment design. Attribution theory has been applied in clinical settings, and other social psychological constructs have been applied to the understanding of stress and coping, and of family interactions. Research on intergroup relations and prejudice has been applied in negotiation and bargaining contexts as well as to race, ethnicity, and gender relations. One of the most significant applications of social psychology, particularly of group processes and intergroup relations, is to organizations—there is a relatively close link between organizational and social psychology.