Given a man’s name or a woman’s, which are you more likely to judge as famous? Academic research finds that, in general, you’ll judge a man’s name as famous over a woman’s. But you might ask a different question: why is that worth studying? We’ll get to that, but know that the National Science Foundation thought the topic had merit. In 1995 the NSF approved two grants, totaling more than $200,000, for Mahzarin Banaji and Anthony Greenwald to investigate the phenomenon and find a way to measure it.
On its face, those grants seem the sort of thing to attract the ire of politicians seeking to score points by exposing wasteful government spending. Think William Proxmire’s Golden Fleece Awards, or the wastebooks put out by Jeff Flake or Tom Coburn.
But Banaji and Greenwald weren’t studying names, or even memory per se. They were studying implicit bias, something that is pretty much universal but the scope of which hadn’t really been realized until there was a way to measure it. “The brain is an association-seeking machine,” Banaji told interviewer David Edmonds in a recent Social Science Bites podcast. “It puts things together that repeatedly get paired in our experience. Implicit bias is just another word for capturing what those are when they concern social groups.” It’s why people can sincerely say they feel little or no bias — and certainly no animosity toward social groups — and yet the data from everyday access, opportunity, and treatment shows they must be biased.
For their work on creating a measuring device, the Implicit Association Test, or IAT, and for their colleague Brian Nosek’s role in popularizing the IAT, Banaji, Greenwald and Nosek received the Golden Goose Award Thursday night. The Golden Goose Awards have since 2012 celebrated federally funded research that may have seemed odd or obscure when it was first conducted but that over time has significantly benefited society. (SAGE, the parent of Social Science Space, is a sponsor of the Golden Goose Awards.) The award’s name is a deliberate callback to, and refutation of, Proxmire’s Golden Fleece Award.
Banaji and Greenwald met on Banaji’s first day in the United States as a new psychology student at The Ohio State University. Greenwald would serve as Banaji’s Ph.D. adviser and later her mentor when she joined the psychology faculty at Yale University in 1986.
At Yale she began studying how amnesia impaired the conscious recollection of memories that nonetheless seemed to remain stored in the brain. She modeled her research on the work of Larry Jacoby and showed that although men’s and women’s names had equal status in memory, a bias emerged: people were far less likely to judge women’s names as famous than men’s. When asked, not a single participant thought the gender of the names was a factor. And yet gender bias in the results was clear.
Here’s how the National Science Foundation describes the Implicit Association Test:
The test asks people to sort words or pictures into one of two columns as they flash onto a screen, and then measures their rate of sorting errors and speed of test completion as these stimuli appear with varying instructions.
In one version designed to test racial bias (available on the World Wide Web at http://www.tolerance.org/hidden_bias/), the first items to sort are faces identifiable racially as either black or white. The next items to be sorted are words associated with positive qualities (peace, pleasure, friend) or negative qualities (violent, failure, awful). Almost all research participants find it easy to do these sortings.
Next, participants are asked to sort words into combined categories, assigning positive words and white faces to one column, and negative words and black faces into the other. As the items flash on the screen — peace, [white face], awful, [black face], friend — the vast majority of people continue to have little trouble at the sorting task.
The signals of bias appear in the next step, when people are asked to reverse the process: to group positive words with black faces, and negative words with white faces. Intellectually, the task should be precisely the same difficulty as the previous step. However, most test-takers take longer and make more errors when trying to group good qualities with blacks — and, in other versions of the test, with other socially stigmatized groups. The cultural imprint appears to cross racial lines: in the black-white version of the test, many African Americans also struggle with the last portion of the test, but not as large a percentage as European Americans.
Overall, a high percentage of those who take the test make more errors and take more time as they struggle to accomplish the final task. The strength of the IAT’s effect has surprised and impressed other scientists.
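The measurement logic the NSF describes, comparing speed and errors between the "easy" and reversed pairing blocks, can be sketched in a few lines. The following is a minimal illustration with hypothetical latency data; it scales the mean latency difference by the pooled spread of responses, which is similar in spirit to (but much simpler than) the published IAT scoring algorithm, which also handles errors and outlier latencies.

```python
from statistics import mean, stdev

def iat_effect(congruent_ms, incongruent_ms):
    """Compare sorting speed between the two pairing conditions.

    A positive score means the reversed pairing (e.g., positive words
    grouped with the stigmatized category) took longer on average,
    scaled by the pooled standard deviation of all latencies.
    """
    diff = mean(incongruent_ms) - mean(congruent_ms)
    pooled_sd = stdev(congruent_ms + incongruent_ms)
    return diff / pooled_sd

# Hypothetical per-trial response times in milliseconds.
congruent = [610, 580, 650, 602, 595]    # first pairing block
incongruent = [780, 820, 755, 801, 790]  # reversed pairing block

score = iat_effect(congruent, incongruent)
print(round(score, 2))  # larger values = slower on the reversed task
```

In this toy example the reversed block is consistently slower, so the score comes out well above zero; real test-takers produce noisier trial data, which is why the actual scoring procedure includes error penalties and latency trimming.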
Mahzarin and Anthony suggested that perhaps the bias wasn’t experienced by the people who showed it because it was not consciously accessible to them. And if that was true, how pervasive was this inaccessible, “implicit” information about social groups? This needed to be studied further, so the collaborators applied for and received two grants totaling $200,000 from the National Science Foundation to further their exploration. Eventually, the NSF would grant a total of $600,000 to their research.
Mahzarin asked colleague Robert Crowder whether adapting the term “implicit memory” would be a good idea. He suggested that “implicit attitudes” (the title of the first NSF grant) was a better fit. Meanwhile, two of Mahzarin’s students at Yale, Curtis Hardin and Alex Rothman, worked with her on studies which found that priming people with the same concept, such as “assertive,” didn’t translate into equal assessments of men and women. Their paper “Implicit Stereotyping” was the first to use the term “implicit” with the meaning it has now acquired. Banaji and Greenwald’s pioneering 1995 paper in Psychological Review, “Implicit Social Cognition: Attitudes, Stereotypes, and Self-Esteem,” first used the term “implicit bias” and concluded that methods to study implicit cognition should be a priority.
Greenwald subsequently conducted the first study using the IAT, and over the next decade he and Banaji deployed it in their respective labs. The early data were met with surprise and even resistance: how could scientists, the very people who studied prejudice, show evidence of implicit bias? Banaji and Greenwald were clear that implicit bias was not the same thing as prejudice, the conscious feeling of antipathy. So whatever the IAT was measuring, it wasn’t an attitude or preference in the way social psychologists had described them.
Neuroscientist Elizabeth Phelps suggested pairing the IAT race test with functional magnetic resonance imaging to view the test subject’s brain’s response to black and white faces. The results showed that those with higher IAT race bias also showed higher levels of response in their amygdala, which was known to be involved in emotional learning and fear conditioning.
In 1996, intrigued by grad student Nosek’s studies in computer science before he took up psychology, Banaji brought him on board. During this time, even though the NSF and National Institutes of Health saw merit in the work, the methodology grew more robust, and the inventory of responses expanded, many academics remained critical of the findings even as they couldn’t fault the science. Then Nosek suggested creating an IAT website. When the Project Implicit site went live in 1998, the team began seeing thousands of users a day taking the tests and flooding them with data. Within the first month, they had received 50,000 completed tests.
The team knew that they had struck a chord. The more tests that went up (today there are tests for gender, sexuality, race, religion, and many more), the more interest the work received. Slowly, the term “implicit bias” caught on. In 2013, Banaji and Greenwald wrote a bestselling book, Blindspot: Hidden Biases of Good People, to share the idea of implicit bias with the public.
These days, political candidates debate implicit bias. Businesses use it to improve the quality of decision-making. Teachers use it to explore whether they are teaching all students equally. Police departments are engaging with it to improve law enforcement practices. Legal scholars and practitioners are asking about implications for the law and creating unbiased courtrooms. Clinical psychologists use it to detect mental states and track whether treatments are effective. And doctors and healthcare providers use the test to ask if their bias may lead them, quite implicitly, to behave in ways that are opposed to their own values of equal treatment.
More than 30 million people across schools, churches, police departments, and the military have taken the IAT to date. Meanwhile, other researchers have made their own versions. Researcher (and Yalie) Matthew K. Nock has taken the test a step further by changing the images to reflect self-harm and harming others. This version helps identify treatable impulses among people who have attempted suicide or among military members returning from war zones, for example.
This post contains material taken from the Golden Goose website.