
Truman Michelson notes on Southern Cheyenne and Arapaho story, 1930 June

National Anthropological Archives
Digital surrogates are available online.

Digitization and preparation of these materials for online access has been funded through generous support from the Arcadia Fund.

Citation corrected from 3188 (part) to 3188-b on 2/28/12.

Title changed from "Miscellaneous notes June 9, 11, 13, 1930" 5/22/2014.

Place supplied from 47th Annual Report of Bureau of American Ethnology, page 2.

Handwritten Cheyenne linguistic and ethnographic notes and anthropometric data collected by Truman Michelson in Oklahoma. Much of the information is from his work with Mack Haag. The materials include vocabulary and notes on grammar and phonetics; a short story in Arapaho about spider with an interlinear English translation; notes on Cheyenne family and kinship relationships, marriage, divorce, adultery, illegitimacy, incest, pregnancy, death, etc.; and anthropometric data on 22 Cheyenne adult males, identified by name and age.

Current, I – Lehua M. Taitano

Smithsonian Asian Pacific American Program
Presented by the Smithsonian Asian Pacific American Center. Part of A Day in the Life of Queer Asian Pacific America: https://smithsonianapa.org/day-queer-life. Also part of Care Package: https://smithsonianapa.org/care. In "Current, I," poet Lehua M. Taitano reflects on the natural elements of which she is composed, and on how social norms find friction with her very existence. Lehua's words find companionship in a video featuring scenes from her daily encounters with water, plants, and sea creatures – a montage ecosystem from which she is inseparable. Lehua M. Taitano is a queer CHamoru writer and interdisciplinary artist from Yigu, Guåhan (Guam) and co-founder of Art 25: Art in the Twenty-fifth Century. Taitano’s work investigates modern indigeneity, decolonization, and cultural identity in the context of diaspora. She is the author of two volumes of poetry—Inside Me an Island (WordTech Editions) and A Bell Made of Stones (TinFish Press).

Psychological Test, A Social Study

National Museum of American History
A Social Study was written by Manly H. Harper, Ph.D. It was published and copyrighted in 1927 by the Bureau of Publication, Teachers College, Columbia University. The four-page booklet (double sided) included 71 different questions that the test taker was instructed to mark with either a + or a – sign. The study was designed to ascertain what teachers and educators thought were important considerations to take into account when developing good citizenship. This test is therefore of interest to historians of psychology, while also providing insight into ideas about citizenship, the state, and political economy in the late 1920s.

The collection also includes a sheet entitled “Directions to Directors of the Study.” Here, we gain insight into how to proctor the study, as well as information on scoring, interpreting the results, and the norms for educators. The paper described how the study “gave a highly valid and reliable measure of social beliefs and attitudes as included in the trait conservatism-liberalism-radicalism.” Moreover, the norms section broke results down by level of education, geographic location, and, in one instance, race.

E. P. Richardson Symposium: New Perspectives on Portraiture: Prince Demah

National Portrait Gallery
New Perspectives on Portraiture: “Prince Demah and the Profession of Portrait Painting” by Jennifer Van Horn. From the Edgar P. Richardson Symposium: New Perspectives on Portraiture at the National Portrait Gallery, Sept. 20 and Sept. 21, 2018. Day 1, Session 1: Materiality and the Profession of Portraiture. Jennifer Van Horn, Assistant Professor of Art History and History, University of Delaware. In “Prince Demah and the Profession of Portrait Painting,” we hear the story of the enslaved artist Prince Demah, who was active in Boston at the end of the 18th century. The merchants who owned Demah billed his talent as an oddity; witnessing his act of painting was as much a draw as the works themselves. Despite this dehumanizing description of his work, Demah’s presence in the art world destabilized social norms in a daring way. Dr. Van Horn investigates not only the contradictions in his training, but the unique position granted to him by the artist’s seat.

Transforming Teaching and Learning about American Indians: 2 Maria Elena Campisteguy

National Museum of the American Indian
Contemporary teaching about American Indians frequently presents just a tiny glimpse into the rich and diverse cultures, histories, and contemporary lives of Native peoples. Transforming Teaching and Learning about American Indians is a symposium that explores approaches to education about Native Americans that seek to address this deficiency and others. In this segment, Maria Elena Campisteguy, Metropolitan Group, speaks on "The Power of Narrative to Change Education and Society." Maria Elena Campisteguy, Principal and Senior Executive Vice President, Metropolitan Group, has more than 30 years of professional experience in strategic communication, narrative change, and public will building, in the United States and internationally. She is passionate about designing approaches that allow people from diverse cultural backgrounds, perspectives, and experiences to co-create a shared vision, and strategies and messages to bring that vision to life. As the leader of Metropolitan Group’s Multicultural Engagement practice, Campisteguy designs collaborative, values-based approaches to motivate behavior change, affect systems change through shifts in policy and social norms, establish effective and lasting collaborations, engage diverse audience groups, and build inclusive organizational cultures. Most recently, she facilitated the development of a new narrative and narrative change strategy for Reclaiming Native Truth, an initiative to dispel myths and misconceptions about Native peoples and tribes in the U.S. She is currently leading similar work in Mexico. She holds a master’s in business administration with an emphasis in international marketing from Portland State University and a bachelor of science from Georgetown University. The symposium was webcast and recorded in the National Museum of the American Indian Rasmuson Theater on November 1, 2018.

Cap mask

National Museum of African Art
Wood cap mask with large, broad face and protruding hemispherical ears, prominent cheekbones, large, open mouth showing teeth, large eyes, and prominent brow ridges and nose on top of a large, broad neck. The mask has an overall dark patina with white pigment outlining the eyes and mouth.

Marisol Escobar

National Portrait Gallery
Marisol, who was born in Paris to Venezuelan parents, was profoundly affected by her mother’s suicide in 1941. The eleven-year-old retreated into a protective shell of silence and sustained an enigmatic, aloof persona, even after becoming a star of the New York City art scene during the 1960s. Marisol’s sculptures defy easy categorization. The life-size figures that she crafted from blocks of wood satirize middle-class American dress and behavior with the playfulness of Pop art. Yet beneath the witty surface, she probed her own identity, often incorporating plaster casts and photographs of her face to reflect her fascination with the many different “selves” we present to the world.

As Hans Namuth’s photograph suggests, Marisol was particularly attuned to the artificial masks that women adopt in compliance with social norms of “femininity” and “womanhood.” In the 1970s, she produced a series of masks that range from self-portraits to goddesses and other female archetypes.


William S. Burroughs

National Portrait Gallery
Allen Ginsberg’s photograph of William Burroughs pictures the thirty-nine-year-old writer in 1953, the year his first novel, Junkie, was published. This thinly veiled autobiographical tale of an "unredeemed drug addict" offended many readers on account of both its grim subject matter and its hallucinogenic, stream-of-consciousness literary style. Burroughs’s next novel, Naked Lunch, originally published in Paris in 1959, was equally controversial for its unabashed rendering of heroin use and gay sexuality. Banned in the United States for several years, it solidified his reputation as one of the formative members of the Beat generation. A quiet, deeply private man, Burroughs wrote knowingly about alienation from the social norm and defiance of authority and helped to lay the foundation for the cultural dissonance of the 1960s. He once remarked that his purpose was "to make people aware of the true criminality of our times, to wire up the marks."

Harnessing the Power of Peer Pressure Could Help Reduce Traffic

Smithsonian Magazine

Much like frustrating gridlocked city streets and clogged, slow-moving highways… people are slow to get going when it comes to shifting their driving habits. But CityLab’s Eric Jaffe explains that there might be what he calls an “underrated approach” to getting traffic moving again—the power of peer pressure.

Jaffe cites a recent study from a group of Canadian researchers who asked how social norms impact the use of private vehicles. After recruiting 78 regular commuters, researchers asked them to keep a journal of their journeys. They provided participants with information on alternative modes of transportation, like carpooling and mass transit, and asked them to reduce their vehicle use by 25 percent. Jaffe explains the details of how this went down:

Here’s the twist: not everyone was asked the same way. One group, a control, was simply given the information on alternative modes and nothing more. Another group received a peer pressure message with “low” power, telling them “only” 4 percent of other campus commuters had given up the single-occupancy drive. A third group got the “high” peer pressure push—told that about one in four commuters had switched from a car to a more sustainable travel mode.

But it was in the results that the rubber hit the road: the team found that the higher the peer pressure, the lower the use of private vehicles. In fact, commuters who received the most peer pressure reduced their use of private vehicles five times more than those in the control group. There could have been some self-selection bias at play, Jaffe notes, as participants more interested in alternative forms of transportation might have been more inclined to join the study. But it's possible that comparing people to one another could be a powerful way to impact city driving patterns.

Since the study seems to show that peer pressure could be effective in reducing single-vehicle use and, eventually, commute times, does traffic-related public shame play into cities' future plans? Maybe, but maybe not — rather than investing in wider roads or public education campaigns, some communities like this Florida city are taking a different approach. Faced with ever more congested roads, they’re opting to do nothing — and hoping that people get so frustrated that they change how they commute.

How Technology Makes Us Better Social Beings

Smithsonian Magazine

About a decade ago, Robert Putnam, a political scientist at Harvard University, wrote a book called Bowling Alone. In it, he explained how Americans were more disconnected from each other than they were in the 1950s. They were less likely to be involved in civic organizations and entertained friends in their homes about half as often as they did just a few decades before.

So what is the harm in fewer neighborhood poker nights? Well, Putnam feared that fewer get-togethers, formal or informal, meant fewer opportunities for people to talk about community issues. More than urban sprawl or the fact that more women were working outside the home, he attributed Americans’ increasingly isolated lifestyle to television. Putnam’s concern, articulated by Richard Flacks in a Los Angeles Times book review, was with “the degree to which we have become passive consumers of virtual life rather than active bonders with others.”

Then, in 2006, sociologists from the University of Arizona and Duke University sent out another distress signal—a study titled “Social Isolation in America.” In comparing the 1985 and 2004 responses to the General Social Survey, used to assess attitudes in the United States, they found that the average American’s support system—or the people he or she discussed important matters with—had shrunk by one-third and consisted primarily of family. This time, the Internet and cellphones were allegedly to blame.

Keith Hampton, a sociologist at the University of Pennsylvania, is starting to poke holes in this theory that technology has weakened our relationships. Partnering with the Pew Research Center’s Internet & American Life Project, he turned his gaze, most recently, to users of social networking sites like Facebook, Twitter and LinkedIn.

“There has been a great deal of speculation about the impact of social networking site use on people’s social lives, and much of it has centered on the possibility that these sites are hurting users’ relationships and pushing them away from participating in the world,” Hampton said in a recent press release. He surveyed 2,255 American adults this past fall and published his results in a study last month. “We’ve found the exact opposite—that people who use sites like Facebook actually have more close relationships and are more likely to be involved in civic and political activities.”

Hampton’s study paints one of the fullest portraits of today’s social networking site user. His data shows that 47 percent of adults, averaging 38 years old, use at least one site. Every day, 15 percent of Facebook users update their status and 22 percent comment on another’s post. In the 18- to 22-year-old demographic, 13 percent post status updates several times a day. At those frequencies, “user” seems fitting. Social networking starts to sound like an addiction, but Hampton’s results suggest perhaps it is a good addiction to have. After all, he found that people who use Facebook multiple times a day are 43 percent more likely than other Internet users to feel that most people can be trusted. They have about 9 percent more close relationships and are 43 percent more likely to have said they would vote.

Image by Oren Livio, copyright 2011 Keith N. Hampton. Urban public spaces, such as Rittenhouse Square in Philadelphia, shown here, are increasingly places for the use of mobile phones, computers and other devices connected to the wireless Internet.

Image by Oren Livio, copyright 2011 Keith N. Hampton. The more devices present, the less in-person interaction, as shown here in Bryant Park in New York City. The majority of public Internet users are online communicating with people they know, but who aren't physically present.

Image by Ed Quinn. Keith Hampton, a sociologist at the University of Pennsylvania, is starting to poke holes in the theory that technology has weakened our relationships.

The Wall Street Journal recently profiled the Wilsons, a New York City-based family of five that collectively maintains nine blogs and tweets incessantly. (Dad, Fred Wilson, is a venture capitalist whose firm, Union Square Ventures, invested in Tumblr, Foursquare and Etsy.) “They are a very connected family—connected in terms of technology,” says writer Katherine Rosman on WSJ.com. “But what makes it super interesting is that they are also a very close-knit family and very traditional in many ways. [They have] family dinner five nights a week.” The Wilsons have managed to seamlessly integrate social media into their everyday lives, and Rosman believes that while what they are doing may seem extreme now, it could be the norm soon. “With the nature of how we all consume media, being on the internet all the time doesn’t mean being stuck in your room. I think they are out and about doing their thing, but they’re online,” she says.

This has been of particular interest to Hampton, who has been studying how mobile technology is used in public spaces. To illustrate how pervasive Internet use has become, he cites a 2008 survey: 38 percent of people use the Internet while at a public library, 18 percent while at a café or coffee shop and even 5 percent while at church. He modeled two recent projects on the work of William Whyte, an urbanist who studied human behavior in New York City’s public parks and plazas in the 1960s and 1970s. Hampton borrowed the observation and interview techniques that Whyte used in his 1980 study “The Social Life of Small Urban Spaces” and applied them to his own updated version, “The Social Life of Wireless Urban Spaces.” He and his students spent a total of 350 hours watching how people behaved in seven public spaces with wireless Internet in New York, Philadelphia, San Francisco and Toronto in the summer of 2007.

Though laptop users tended to be alone and less apt to interact with strangers in public spaces, Hampton says, “It’s interesting to recognize that the types of interactions that people are doing in these spaces are not isolating. They are not alone in the true sense because they are interacting with very diverse people through social networking websites, e-mail, video conferencing, Skype, instant messaging and a multitude of other ways. We found that the types of things that they are doing online often look a lot like political engagement, sharing information and having discussions about important matters. Those types of discussions are the types of things we’d like to think people are having in public spaces anyway. For the individual, there is probably something being gained and for the collective space there is probably something being gained in that it is attracting new people.” About 25 percent of those he observed using the Internet in the public spaces said that they had not visited the space before they could access the Internet there. In one of the first longitudinal studies of its kind, Hampton is also studying changes in the way people interact in public spaces by comparing film he has gathered from public spaces in New York in the past few years with Super 8 time-lapse films that were made by William Whyte over the decades.

“There are a lot of chances now to do these sort of 2.0 versions of studies that have been ongoing studies from the ’60s and ’70s, when we first became interested in the successes and failures of the cities that we have made for ourselves,” says Susan Piedmont-Palladino, a curator at the National Building Museum in Washington, D.C. Hampton spoke earlier this month at the museum’s “Intelligent Cities” forum, which focused on how data, including his, can be used to help cities adapt to urbanization. More than half of the world’s population is living in cities now and that figure is expected to rise to 70 percent by 2050.

“Our design world has different rates of change. Cities change really, really slowly. Buildings change a little faster, but most of them should outlive a human. Interiors, furniture, fashion—the closer you get to the body, the faster things are changing. And technology right now is changing fastest of all,” says Piedmont-Palladino. “We don’t want the city to change at the rate that our technology changes, but a city that can receive those things is going to be a healthy city into the future.”

How Victorian Gender Norms Shaped the Way We Think About Animal Sex

Smithsonian Magazine

That males are naturally promiscuous while females are coy and choosy is a widely held belief. Even many scientists—including some biologists, psychologists and anthropologists—tout this notion when interviewed by the media about almost any aspect of male-female differences, including in human beings. In fact, certain human behaviors such as rape, marital infidelity and some forms of domestic abuse have been portrayed as adaptive traits that evolved because males are promiscuous while females are sexually reluctant.

These ideas, which are pervasive in Western culture, also have served as the cornerstone for the evolutionary study of sexual selection, sex differences and sex roles among animals. Only recently have some scientists—fortified with modern data—begun to question their underlying assumptions and the resulting paradigm.

It all comes down to sperm and eggs?

These simple assumptions are based, in part, on the differences in size and presumed energy cost of producing sperm versus eggs—a contrast that we biologists call anisogamy. Charles Darwin was the first to allude to anisogamy as a possible explanation for male-female differences in sexual behavior.

His brief mention was ultimately expanded by others into the idea that because males produce millions of cheap sperm, they can mate with many different females without incurring a biological cost. Conversely, females produce relatively few “expensive,” nutrient-containing eggs; they should be highly selective and mate only with one “best male.” He, of course, would provide more than enough sperm to fertilize all a female’s eggs.

In 1948, Angus Bateman—a botanist who never again published in this area—was the first to test Darwin’s predictions about sexual selection and male-female sexual behavior. He set up a series of breeding experiments using several inbred strains of fruit flies with different mutations as markers. He placed equal numbers of males and females in laboratory flasks and allowed them to mate for several days. Then he counted their adult offspring, using inherited mutation markers to infer how many individuals each fly had mated with and how much variation there was in mating success.

One of Bateman’s most important conclusions was that male reproductive success—as measured by offspring produced—increases linearly with his number of mates. But female reproductive success peaks after she mates with only one male. Moreover, Bateman alleged this was a near-universal characteristic of all sexually reproducing species.

In 1972, theoretical biologist Robert Trivers highlighted Bateman’s work when he formulated the theory of “parental investment.” He argued that sperm are so cheap (low investment) that males evolved to abandon their mate and indiscriminately seek other females for mating. Female investment is so much greater (expensive eggs) that females guardedly mate monogamously and stay behind to take care of the young.

In other words, females evolved to choose males prudently and mate with only one superior male; males evolved to mate indiscriminately with as many females as possible. Trivers believed that this pattern is true for the great majority of sexual species.

The problem is, modern data simply don’t support most of Bateman’s and Trivers’ predictions and assumptions. But that didn’t stop “Bateman’s Principle” from influencing evolutionary thought for decades.

A single sperm versus a single egg isn’t an apt comparison. (Gametes image via www.shutterstock.com)

In reality, it makes little sense to compare the cost of one egg to one sperm. As comparative psychologist Don Dewsbury pointed out, a male produces millions of sperm to fertilize even one egg. The relevant comparison is the cost of millions of sperm versus that of one egg.

In addition, males produce semen which, in most species, contains critical bioactive compounds that presumably are very expensive to produce. As is now also well-documented, sperm production is limited and males can run out of sperm—what researchers term “sperm depletion.”

Consequently, we now know males may allocate more or less sperm to any given female, depending on her age, health or previous mated status. Such differential treatment among preferred and nonpreferred females is a form of male mate choice. In some species, males may even refuse to copulate with certain females. Indeed, male mate choice is now a particularly active field of study.

If sperm were as inexpensive and unlimited as Bateman and Trivers proposed, one would not expect sperm depletion, sperm allocation or male mate choice.

Birds have played a critical role in dispelling the myth that females evolved to mate with a single male. In the 1980s, approximately 90 percent of all songbird species were believed to be “monogamous”—that is, one male and one female mated exclusively with one another and raised their young together. At present, only about 7 percent are classified as monogamous.

Modern molecular techniques that allow for paternity analysis revealed that both males and females often mate and produce offspring with multiple partners. That is, they engage in what researchers call “extra-pair copulations” (EPCs) and “extra-pair fertilizations” (EPFs).

Because of the assumption that reluctant females mate with only one male, many scientists initially assumed promiscuous males coerced reluctant females into engaging in sexual activity outside their home territory. But behavioral observations quickly determined that females play an active role in searching for nonpair males and soliciting extra-pair copulations.

Rates of EPCs and EPFs vary greatly from species to species, but the superb fairy wren is one socially monogamous bird that provides an extreme example: 95 percent of clutches contain young sired by extra-pair males and 75 percent of young have extra-pair fathers.

This situation is not limited to birds—across the animal kingdom, females frequently mate with multiple males and produce broods with multiple fathers. In fact, Tim Birkhead, a well-known behavioral ecologist, concluded in his 2000 book “Promiscuity: An Evolutionary History of Sperm Competition,” “Generations of reproductive biologists assumed females to be sexually monogamous but it is now clear that this is wrong.”

Ironically, Bateman’s own study demonstrated the idea that female reproductive success peaks after mating with only one male is not correct. When Bateman presented his data, he did so in two different graphs; only one graph (which represented fewer experiments) led to the conclusion that female reproductive success peaks after one mating. The other graph—largely ignored in subsequent treatises—showed that the number of offspring produced by a female increases with the number of males she mates with. That finding runs directly counter to the theory there is no benefit for a “promiscuous” female.

Modern studies have demonstrated this is true in a broad range of species: females that mate with more than one male produce more young.

What’s happening in society outside the lab can influence what you see inside it. (National Library of Ireland on The Commons)

So if closer observation would have disproved this promiscuous male/sexually coy female myth, in the animal world at least, why didn’t scientists see what was in front of their eyes?

Bateman’s and Trivers’ ideas had their origins in Darwin’s writings, which were greatly influenced by the cultural beliefs of the Victorian era. Victorian social attitudes and science were closely intertwined. The common belief was that males and females were radically different. Moreover, attitudes about Victorian women influenced beliefs about nonhuman females. Males were considered to be active, combative, more variable, and more evolved and complex. Females were deemed to be passive, nurturing, less variable, and of arrested development equivalent to that of a child. “True women” were expected to be pure, submissive to men, sexually restrained and uninterested in sex—and this representation was also seamlessly applied to female animals.

Although these ideas may now seem quaint, most scholars of the time embraced them as scientific truths. These stereotypes of men and women survived through the 20th century and influenced research on male-female sexual differences in animal behavior.

Unconscious biases and expectations can influence the questions scientists ask and also their interpretations of data. Behavioral biologist Marcy Lawton and colleagues describe a fascinating example. In 1992, eminent male scientists studying a species of bird wrote an excellent book on the species—but were mystified by the lack of aggression in males. They did report violent and frequent clashes among females, but dismissed their importance. These scientists expected males to be combative and females to be passive—when observations failed to meet their expectations, they were unable to envision alternative possibilities, or realize the potential significance of what they were seeing.

The same likely happened with regard to sexual behavior: Many scientists saw promiscuity in males and coyness in females because that is what they expected to see and what theory—and societal attitudes—told them they should see.

In fairness, prior to the advent of molecular paternity analysis, it was extremely difficult to accurately ascertain how many mates an individual actually had. Likewise, only in modern times has it been possible to accurately measure sperm counts, which led to the realization that sperm competition, sperm allocation and sperm depletion are important phenomena in nature. Thus, these modern techniques also contributed to overturning stereotypes of male and female sexual behavior that had been accepted for more than a century.

What looks like monogamy at first glance very often isn’t. (Waved Albatross image via www.shutterstock.com.)

Besides the data summarized above, there is the question of whether Bateman’s experiments are replicable. Given that replication is an essential criterion of science, and that Bateman’s ideas became an unquestioned tenet of behavioral and evolutionary science, it is shocking that more than 50 years passed before an attempt to replicate the study was published.

Behavioral ecologist Patricia Gowaty and collaborators had found numerous methodological and statistical problems with Bateman’s experiments; when they reanalyzed his data, they were unable to support his conclusions. Subsequently, they reran Bateman’s critical experiments, using the exact same fly strains and methodology—and couldn’t replicate his results or conclusions.

Counterevidence, evolving social attitudes, recognitions of flaws in the studies that started it all—Bateman’s Principle, with its widely accepted preconception about male-female sexual behavior, is currently undergoing serious scientific debate. The scientific study of sexual behavior may be experiencing a paradigm shift. Facile explanations and assertions about male-female sexual behaviors and roles just don’t hold up.

Elephants Never Forget When You Slaughter Their Family

Smithsonian Magazine

African elephants in Kruger National Park. Photo: Clive Reid

They say that elephants never forget: they never forget a friendly face, or an injury, or the scent of an abuser. And, as a herd, says new research, elephants never forget the effects of mass killings carried out in the name of conservation. Culling an elephant herd, directed killing that often targets older elephants first, leaves some survivors distraught and creates a suddenly young herd that is deaf to elephant social norms. Science magazine:

African elephants that have lived through the trauma of a cull—or selected killing of their kin—may look normal enough to the casual observer, but socially they are a mess. That’s the conclusion of a new study, the first to show that human activities can disrupt the social skills of large-brained mammals that live in complex societies for decades.

Conservationists used to selectively trim elephant herds to keep their numbers down. But, by targeting the older members of the group, they were also killing the herd’s social memory. For the survivors, says Science, “Scientists have known since the late 1990s that many of these elephants were psychologically affected by their experiences during the culling. Other studies have described these effects as akin to posttraumatic stress disorder.”

Much of an elephant herd’s memory is tied up in the leading matriarch. With her picked off, says the new research, the elephants don’t know how to confront unexpected dangers, like the sudden appearance of a strange dominant female elephant. Science:

Because the Pilanesberg elephants grew up without the social knowledge of their original families, they will likely never properly respond to social threats and may even pass on their inappropriate behaviors to the next generation, the team concludes in the current issue of Frontiers in Zoology. And it may be that elephant populations that are heavily poached or otherwise adversely affected by human activities are similarly socially damaged, they say.

More than just eroding elephant culture, they say, this loss of social memory could make elephants that have gone through a cull less likely to survive and reproduce than elephants who didn’t lose their families.


On the Science of Creepiness

Smithsonian Magazine

It’s the spider crawling up the wall next to your bed. Someone knocking at your door late at night. The guy who stands just a bit too close to you on the subway and for a bit too long. “Hello Barbie” with embedded WiFi and Siri-like capabilities. Overgrown graveyards. Clowns.

As with the Supreme Court standard for obscenity, we know creepy when we see it (or perhaps, more accurately, feel it). But what exactly is it? Why do we experience “the creeps”? And is being creeped out useful?

Though the sensation has probably been around since humans began experiencing emotions, it wasn’t until the middle of the 19th century that some of us called this touch of the uncanny “the creeps”. Charles Dickens, who gave the English language only marginally fewer new words and expressions than Shakespeare, is credited with the first use of the phrase, in his 1849 novel David Copperfield, to mean an unpleasant, tingly chill up the spine. In the years after the book, using “creepy” to describe something that causes unease took off – a Google Ngram search shows use of the word increasing dramatically since about 1860.

For all its ubiquity, however, the sensation of being “creeped out” has been little studied by psychologists. Frank McAndrew, professor of psychology at Knox College in Illinois, is one of the few. In 2013, he and graduate student Sara Koehnke presented a small and admittedly preliminary paper based on the results of their survey asking more than 1,300 people "what is creepy?" And as it turns out, “creepy” isn’t actually all that complicated.

“[Creepy is] about the uncertainty of threat. You’re feeling uneasy because you think there might be something to worry about here, but the signals are not clear enough to warrant your doing some sort of desperate, life-saving kind of thing,” explains McAndrew.

Being creeped out is different from fear or revulsion, he says; in both of those emotional states, the person experiencing them usually feels no confusion about how to respond. But when you’re creeped out, your brain and your body are telling you that something is not quite right and you’d better pay attention because it might hurt you.

This is sometimes manifest in a physical sensation: In 2012, researchers from the University of Groningen in the Netherlands found that when subjects felt creeped out, they felt colder and believed that the temperature in the room had actually dropped. (Dickens might not have used the word in quite the way it soon came to mean, but he did get the chills part right.)

That physical response further heightens your senses, and, continues McAndrew: “You don’t know how to act but you’re really concerned about getting more information … It kind of takes your attention and focuses it like a laser on this particular stimulus, whatever it is.”

Whatever it is can be things, situations, places and, of course, people. Most creepy research has looked at what makes people seem creepy. For example, the 2012 study successfully creeped people out by exposing them to others who didn’t practice normal non-verbal behavior.

In the experiment, subjects interacted with researchers who practiced degrees of subtle mimicry: When the subject scratched her head, the researcher would do something similar, such as touch his nose. Subjects felt creeped out – and colder – when the researcher didn’t mimic, indicating a discomfort with people who may not be able to follow social norms and cues.

McAndrew and Koehnke’s survey also explored what made creepy people appear creepy, first asking participants to rate the likelihood a person described as creepy exhibited a set of characteristics or behaviors, such as greasy hair, extreme pallor or thinness, or an unwillingness to let a conversation drop. In another section, it asked people to indicate how much they agreed or disagreed with a series of statements about “the nature of creepy people”.

Perhaps the biggest predictor of whether someone was considered creepy was unpredictability. “So much of [what is creepy] is about wanting to be able to predict what’s going to happen, and that’s why creepy people creep us out – because they’re unpredictable,” explains McAndrew, noting that the 2012 study also seemed to underscore that point. “We find it hard to know what they’re going to do next.”

Creepiness in people is also related to individuals breaking certain tacit social rules and conventions, even if sometimes that rule breaking is necessary. This becomes more evident when we look at the kinds of jobs a majority of respondents found creepy. However unfairly, taxidermists and funeral directors were among the creepiest professions listed in McAndrew and Koehnke’s survey, likely because these people routinely interact with macabre things that most other people would avoid.

“If you’re dealing with somebody who’s really interested in dead things, that sets off alarm bells. Because if they’re different in that way, what other unpleasant ways they might be different?” says McAndrew.

Garbage collectors, who also deal with things that people would rather avoid, were not considered creepy; evidently, the type of thing being avoided needs to be symbolic of or related to a latent threat. But the study respondents did find a fascination with sex to be creepy, so “sex shop owner” was considered a creepy profession.

By far the creepiest profession, according to the survey, was being a clown. Clowns are by nature unpredictable and difficult to fathom – makeup disguises their features and facial cues, and they typically do things outside the social norm, such as give unexpected hugs, with few consequences.

“Creepy” these days is often used to describe things like data surveillance or artificial intelligence (though the creepiness of the Uncanny Valley is best left for other discussions) – anything that has the potential to be used for evil. But creepiness also relies heavily on context: A doll on a child’s bed isn’t creepy, but a doll who looks eerily like your own child found on your doorstep definitely is.

McAndrew believes that there’s an evolutionary advantage to feeling creeped out, one that’s in line with the evolutionary psychology theory of “agency detection”. The idea is that humans are inclined to construe willful agency behind circumstances and to seek out patterns in events and visual stimuli, a phenomenon called pareidolia. This is why we see faces in toast, hear words in static or believe that things “happen for a reason”.

Though the theory is most often invoked in explaining the psychological inclination towards religion, McAndrew says it helps make sense of why we get creeped out – because very often, we think that willful agent is malicious.

“We’re predisposed to see willful agents that mean us harm in situations that are ambiguous, but this was an adaptive thing to do,” he explains. Our ancestors saw a saber-toothed tiger in every shadow and a slithering snake in the motion of the swaying grass because it was better to be safe than sorry.

McAndrew believes that other findings from the survey are consistent with an evolutionary directive behind the creeped-out response: first, that respondents, both men and women, overwhelmingly thought that men were more likely to be creepy than women, and second, that women were likely to perceive someone as creepy if that person showed an unwanted sexual interest in them.

From an evolutionary psychology perspective, McAndrew says, this makes sense. Males are perceived as more capable of and responsible for violence than females, while women have historically faced a much wider range of threats, including sexual threats. Acting on even the whisper of such a threat is infinitely preferable to not acting at all and suffering the consequences.

But being afraid of the right things at the right time is only half of the story of creepiness. Just as our brains were being shaped by being constantly on guard against potential threats, they were also being shaped by the practical necessity of getting along in a group.

The quiet creeped-out response is a result not only of being perpetually wary, but also of being wary of overreacting – the same social norms that, when violated, creep us out are also what keep us from reacting in an overtly terrified way. We don’t want to seem impolite or suspicious, or jump to the wrong conclusions, so we tread carefully.

There’s something appropriate about the fact that the first appearance of the word “creepy” in The New York Times was in an 1877 article about a ghost story. Because for all of the evolutionary priming, all of the prey’s instincts for self-preservation that seem to have gone into shaping the creeped-out response, there’s at least a little part of us that likes to be creeped out.

Sort of.

McAndrew points out that truly creepy things and situations are not attractive, not even a little bit: “We don’t enjoy real creepy situations, and we will avoid them like the plague. Like if there’s a person who creeps you out, you’ll cross the street to get away.” What we do enjoy is playacting, in the same way we enjoy the vicarious thrills of watching a horror movie.

McAndrew and other psychologists, anthropologists, and even Stephen King, in his 1981 exploration of the genre he dominated, Danse Macabre, see horror films as a safe place for us to explore our fears and rehearse what we would do if, say, zombies tore apart our town. 

The same thing that keeps us tense and attentive in a truly creepy situation is not unlike what keeps us moving, shrieking and shaking, through a Halloween haunted house. “It’s going to trigger a lot of things that scare and startle you, but deep down you know there’s no danger,” McAndrew says. “You can have all the creepy biological sensations without any real risk.” And there’s something important (and fun) about that defanged kind of creepy.

Just keep an eye out for the real creeps. 

A Look Back at South Africa Under Apartheid, Twenty-Five Years After Its Repeal

Smithsonian Magazine

The year 1990 signaled a new era for apartheid South Africa: Nelson Mandela was released from prison, President F.W. de Klerk lifted the ban on Mandela's political party, the African National Congress, and Parliament repealed the law that legalized apartheid. 

There are few words more closely associated with 20th-century South African history than apartheid, the Afrikaans word for "apartness" that describes the nation's official system of racial segregation. And though the discriminatory divide between whites of European descent and black Africans stretches back to the era of 19th-century British and Dutch imperialism, the concept of apartheid did not become law until 1953, when the white-dominated parliament passed the Reservation of Separate Amenities Act, which officially segregated public spaces such as taxis, ambulances, hearses, buses, trains, elevators, benches, bathrooms, parks, church halls, town halls, cinemas, theaters, cafes, restaurants, hotels, schools, universities—and later, with an amendment, beaches and the seashore.

But the repeal was more symbolic than substantive, because the intended result was already in motion, says Daniel Magaziner, associate professor of history at Yale University and author of The Law and the Prophets: Black Consciousness in South Africa, 1968-1977. By the time of the repeal, South Africans had already begun to ignore some of the legal separation of the races in public spaces. For instance, blacks were supposed to yield the sidewalk to whites, but in large cities like Johannesburg, that social norm had long since passed. And in many places total racial segregation was impossible; among these were whites-only parks, where blacks were the maintenance crew and black nannies took white children to play.

“The fact that the repeal was passed so overwhelmingly by Parliament, I don’t think speaks to the sudden liberalization of South African politics,” says Magaziner. “I think it speaks to people recognizing the reality that this was a law that was anachronistic and wasn’t in practical effect anymore.”

The impact of apartheid, however, was nowhere near over when the repeal went into effect on October 15, 1990. While white South Africans only made up 10 percent of the country’s population at the end of apartheid, they owned nearly 90 percent of the land. In the quarter-century since the act's repeal, land distribution remains a point of inequality in the country. Despite the post-apartheid government's stated plan to redistribute one-third of the country's land from whites to blacks by 2014, less than 10 percent of this land has been redistributed, and the 2014 deadline has been postponed to 2025.

Magaziner cautions that focusing on the repeal of the Separate Amenities Act as a sign of the end of apartheid obscures the deeper problems caused by racial segregation that continue to impact the country today.

“The Separate Amenities Act made visible what had been longstanding practices,” says Magaziner, “but it also made invisible other aspects of segregation that weren’t covered by the Act but have a much longer lasting impact in South Africa.”

The photos above, selected from the photo archives of the United Nations and Corbis, show the impact of the Reservation of Separate Amenities Act in public spaces in South Africa.

Designing Media: Jimmy Wales

Cooper Hewitt, Smithsonian Design Museum
One of 31 video segments featured in 'Designing Media', the new book, DVD and website by Bill Moggridge. Jimmy Wales, the founder of Wikipedia, has harnessed voluntary contributions from anyone with sufficient interest and time to create the world's largest encyclopedia. He has evolved a hierarchical structure that benefits from the combination of automation and human judgment; the software is enhanced through the emergence of social rules and norms for interaction, which bring members of the community together to do something enjoyable and productive. More info on 'Designing Media' available at http://www.designing-media.com

Q&A: Cynthia Saltzman

Smithsonian Magazine

Your book profiles several of the great 19th-century American collectors of European Old Master paintings. What was happening in the 1880s and 1890s that prompted these wealthy Americans to go after these works?
I think it was because America was really becoming a world power. It was overtaking England and Germany as the leading economic power. Americans began to focus on culture. They had built the Metropolitan, they had built the Philadelphia Museum and the Boston Museum of Fine Arts, and then they needed great art to put in them. In order to have a major world-class museum you needed Old Master paintings. The Old Masters were a measure of the museum.

What, at the same time, was prompting the Europeans to sell?
Sometimes I think American taste is English taste. We bought so many things from the English. They had the huge collections. At the end of the 19th century there were two things: the English began importing American grain, which sold for so much less that it caused English grain prices to fall, and that meant that the value of their land went down. All these English noblemen had their rents go down, so they were squeezed that way, and then at the same time their taxes on land, and inheritance taxes, went up, so they were in a financial crisis at exactly the same time that the Americans, these big industrialists, had a great deal of money.

There seem to have been both public and private motivations for these collectors, educating the public and enhancing their own status.
I think these art collectors wanted to transform themselves, and they wanted to transform America. They were interested in turning themselves into collectors and giving themselves a new identity. They all did give their collections to the public, but the ones like Isabella Gardner and Henry Clay Frick, who create their own museums, are clearly interested in transforming themselves. And still today, when you go to their museums and you look at the art, you still think of it as their possessions. There's always a mix of motives, I think.

What in particular was driving Isabella Stewart Gardner?
She's an esthete; she loves art. I think as a collector she had such definite taste, and she was so enthusiastic. She saw Whistler's abstract pictures and she wanted them, and then she saw Sargent's Madame X, and she wants him to do her portrait. And also I think collecting just let her do something outside the social norms, the social expectations that were put upon her in Boston. Once she got involved in art, she could become a collector. And be something completely different. She is the patron of all these young men, artists and musicians, and it allowed her to be somebody completely outside Boston society. She modeled herself on Isabella d'Este.

You devote a large amount of the book to the dealers that these collectors used. Why?
I really wanted to take a different approach. I wanted to tell the backstage story. It seems to me that collectors always monopolize the credit for their collections, but almost always it's the work of a team, the dealers, experts, and the collectors.

Dealers like Otto Gutekunst?
He's one of the heroes of the book. He's important to Gardner's collection. She writes "I don't adore Rembrandt, I only like him." And yet Gutekunst is an expert on Northern painting. And Gardner has three fabulous Rembrandts. When Frick starts collecting, Gutekunst wants to get him "big, big game," or "angel's food." He's very outspoken, he's very honest. I just thought he was great. And so he goes to get Frick a major Rembrandt. He takes an active role.

What is the ultimate result of all this art collecting?
I think of it in huge, sweeping terms. All these Old Masters came over here, and then eventually American art becomes more and more important. After World War II it is the most influential for a while. And if we hadn't created these great museums with these great works of western art?...The American artists were really very influenced by them, and inspired by them, and I think it was really crucial to the development of American art, which of course was the vision of some of the first collectors.

Staying Near Home Becomes the Norm For Millennials

Smithsonian Magazine

Moving to a new city or moving to take a job used to be a much more common rite of passage for young people than it is today. Bloomberg reports that the mobility of people aged 25 to 34, a group that has been much more likely to move in the past, is down to 20.2 percent—only one out of five. That’s the lowest level since that data started being collected in 1947. 

From Bloomberg:  

Economists and demographers say a combination of relatively low-paying opportunities, the burden of student loans and an aversion to taking risks explains the reluctance to relocate. Student-loan debt rose $114 billion in the year ended in December to $1.08 trillion, according to the Federal Reserve Bank of New York.

The decline in mobility among young adults “is economically significant,” said Chris Christopher, director of consumer economics for IHS Global Insight Inc. in Lexington, Massachusetts. “It is a lot more expensive to get started, to move, to find a job. In terms of social mobility, job mobility, overall geographic mobility, they are not doing as well as their parents and grandparents.”

That’s not to say that millennials aren’t moving at all. Though the numbers of people relocating might have gone down, millennials are still nearly twice as likely to relocate as other age groups. But with limited well-paying job openings across the country, staying close to a social safety net seems to be a more secure option.

The young people who are moving seem overwhelmingly to prefer denser areas to the suburbs. For a more visual approach to migration data (one that isn’t limited to millennials), check out Chris Walker’s data visualization from last winter of where people were moving in 2012.

When I Say "You" But Really Mean "Me"

Smithsonian Magazine

“You can’t always get what you want.”

“You can’t be too careful around there.”

“Life is like a box of chocolates. You never know what you're gonna get."

As the above phrases show, “you” doesn’t always refer to you, the person I’m speaking to. The second-person pronoun can also take a broader meaning, referring to a "generic" person doing or saying or being something. In linguistics, this "generic you" refers to the use of the word "you" to mean an unspecified "someone" or "one," as opposed to the person being addressed.

But like much of our speech, this little pronoun could actually reflect something deeper: Research in recent years has shown that seemingly insignificant word choices can potentially reveal insights about a person's background and personality. And in some cases, using the word “you” could actually serve as an insulator from negative or traumatic emotions when talking about past experiences, according to a psychology study published Friday in the journal Science.

In recent years, Ariana Orvell, a social psychology student at the University of Michigan, noticed that participants in psychology studies conducted in her lab tended to use this seemingly "simple word" often, and in myriad different ways. Sometimes, they even used it to refer to themselves. "We thought it was kind of a curious puzzle as to why people would use a word that we typically think of as addressing specific others to refer to themselves and their own experiences," she says.

To dig into this puzzle, Orvell and her collaborators designed a series of experiments to study where this tendency might stem from.

Their first set of experiments looked specifically at social norms—the behaviors and traits considered acceptable or not by society for a certain person. About 200 participants recruited randomly online were asked questions in two basic structures: one designed to elicit an answer about the "norm" for an action or object ("What should you do with hammers?") and one designed to get at the person's preferences ("What do you like to do with hammers?")

The researchers found that participants were significantly more likely to use "generic you" when they were speaking about the "norm" for something than when they were speaking about their own personal preference. About 50 percent of responses speaking to the “norm” contained a use of “generic you” compared to less than 10 percent of the responses speaking to the preference.

The researchers next set out to test whether people unconsciously use the "generic you" to "normalize" a negative experience, building on previous research by some of the team on “meaning-making” from negative experiences. They asked roughly 200 more randomly recruited participants to recall a negative experience from their lives, and then write lessons that could be taken from it.

Another group of the study participants was asked to recall an emotionally neutral life experience, and also to find a lesson in it. A third group was asked simply to recall a negative experience without drawing a lesson from it.

The people trying to extract meaning from their negative emotional experiences were much more likely to use "generic you" in the lessons they created, Orvell says. Of that group, 46 percent used “you” at least once in their responses, compared to just 10 percent in the recall-only group and only 3 percent in the neutral group.

"'Generic you' was really coming online when they were trying to make meaning of their negative experience," Orvell says. This could reflect the people putting "psychological distance" between themselves and their traumatic experience—in essence, trying to shield themselves from negative emotions. Some of the lessons given demonstrate this: “Sometimes people don’t change, and you have to recognize that you cannot save them”; “when you are angry, you say and do things that you will most likely regret”; and “pride is something that can get in the way of your happiness.”

Mark Sicoli, an anthropological linguist at the University of Virginia, says this research has great potential for helping people work through traumatic experiences and mourning in therapy. "Across these experiments the findings are robust and show us not only how language can evoke feelings and affect the way we remember events, but also how choosing ways to talk about negative experience can help us frame and reframe the experience," says Sicoli, who was not involved in the study.

Sicoli says he hopes to see more research into this phenomenon in languages other than English, studies that look at actual communication between two people, and comparisons of "generic you" with other pronouns such as "one," "they," and even the "royal we." For her part, Orvell says she plans to look at children to see when and how the use of "generic you" develops. "This work gives us much to think about," Sicoli says.

In Japan, Couples Are Still Legally Required to Have the Same Surname

Smithsonian Magazine

In many countries and traditions, marriage comes with a name change, almost always for the woman. Yet about 20 percent of American women opt to keep their names and do not take on the name of their spouse. Other couples hyphenate, and sometimes a man will even take his wife’s surname. But that freedom of choice is barred in Japan. There, the Supreme Court recently upheld a century-old law that married couples must share a surname.

The decision came "as a blow to women’s rights activists," reports the BBC. The vast majority of couples use the husband’s surname, so the practice is discriminatory, the activists say. 

When 80-year-old Kyoko Tsukamoto heard the ruling she says she started crying, reports Jonathan Soble for the New York Times. The retired high school teacher was one of the plaintiffs trying to change the law. She and her husband of 55 years registered their marriage only to keep their three children from being born out of wedlock. They divorced and remarried, in protest of the law, between each of the children’s births. "My name is Kyoko Tsukamoto, but I can’t live or die as Kyoko Tsukamoto," she tells the Times. Instead, her legal married name, Kojima, appears on all her official government records.

Judge Itsuro Terada, the chief justice hearing the case, justified his decision by saying that the law’s effect isn’t strong because informal use of maiden names is already widespread. The government has allowed married civil servants to use the surnames from their unmarried days since 2001, Soble reports for the Times.

While the question of married names may seem to some as a small fight in the larger context of gender equality, the history demonstrates its significance. In 1855, American equal rights activist Lucy Stone kept her name when she married the abolitionist Henry Blackwell. "A wife should no more take her husband's name than he should hers," she said at the time, according to Biography.com. "My name is my identity and must not be lost."

Many countries let their residents choose whether to change their surnames when getting married, and some have laws that prohibit requiring a woman to change her name, reports the BBC. Others are more extreme. In Greece, married people, male or female, have to petition to change their name. In Quebec, a woman is actually barred from taking her husband’s surname. It’s still rare and difficult for men to take their wives’ surnames in many places.

Though there is no law that requires a woman to take her spouse's name in the U.S., the decision can still be fraught, report Claire Cain Miller and Derek Willis for the New York Times. "This is the strongest gendered social norm that we enforce and expect," Laurie Scheuble, who teaches sociology at Penn State, tells Miller and Willis. The heavy weight of tradition explains why most women do change their surname when they marry, though keeping maiden names is on the rise.

That tradition was behind the recent decision in Japan. Terada says that a single family name is "deeply rooted in our society," Tomohiro Osaki reports for the Japan Times. Terada adds that it "enables people to identify themselves as part of a family in the eyes of others."

To change the surname law, activists will have to turn away from the court and appeal to the legislature. However, they are still likely to face resistance: Osaki reports for the Japan Times that respondents to two different surveys were evenly split between those for and against the surname law.

There was one small win that came of the surname case in Japan, however: the court overturned a separate century-old statute that prevented women from remarrying within six months of a divorce, originally designed "to help determine the paternity of a child born shortly after the divorce," reports the BBC.

Human Hunting Is Driving the World's Biggest Animals Toward Extinction

Smithsonian Magazine

Prior to the conclusion of the Pleistocene Epoch, Earth boasted a vibrant population of enormous animals, including armadillo ancestors the size of a Volkswagen Beetle, ground sloths weighing up to 9,000 pounds and beavers the size of a black bear.

Today, the planet’s largest creatures—known collectively as megafauna—are decidedly smaller than these prehistoric counterparts. But as Marlene Cimons writes for Nexus Media, contemporary giants such as African elephants, rhinoceros and giraffes face many of the same threats as their extinct predecessors. First and foremost, according to new research published in Conservation Letters, is human activity, or more specifically, the killing of megafauna for their meat.

To assess the state of the world’s megafauna, a team of international researchers led by scientists from Oregon State University surveyed the populations of 292 large animal species. Of these, 70 percent, or just over 200, were classified as decreasing in number, while 59 percent, or 171, were deemed at risk of extinction.

Crucially, the team reports in the study, “direct harvesting of megafauna for human consumption” represented the largest individual threat for all six classes of vertebrates analyzed. Harvesting megafauna for meat presents a direct threat to 98 percent of the at-risk species included in the research. Additional threats include intensive agriculture, toxins, accidental entrapment, capture for medicinal use and invasive competitors.

Live Science’s Brandon Specktor explains that the researchers set various weight thresholds to determine whether an animal could be considered megafauna. Mammals, ray-finned and cartilaginous fish had to weigh in at more than 220 pounds, while amphibians, birds and reptiles needed to tip the scales at more than 88 pounds.
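
To make the cutoffs concrete, here is a minimal sketch, in Python, of those reported weight thresholds applied as a simple classification rule. It is an illustration only, not code from the study; the class labels, record format and function name are assumptions.

    # A minimal sketch of the reported megafauna cutoffs, applied as a
    # simple classification rule. Not code from the study; the class
    # labels and function name are illustrative assumptions.
    MEGAFAUNA_CUTOFF_LB = {
        "mammal": 220,
        "ray-finned fish": 220,
        "cartilaginous fish": 220,
        "amphibian": 88,
        "bird": 88,
        "reptile": 88,
    }

    def is_megafauna(taxon_class: str, body_mass_lb: float) -> bool:
        """Return True if a species weighs more than the cutoff for its class."""
        cutoff = MEGAFAUNA_CUTOFF_LB.get(taxon_class.lower())
        return cutoff is not None and body_mass_lb > cutoff

    print(is_megafauna("amphibian", 100))  # True: clears the 88-pound bar
    print(is_megafauna("mammal", 100))     # False: below the 220-pound bar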

The final group of established megafauna, according to Newsweek’s Kashmira Gander, included such little-known creatures as the Chinese giant salamander, an alligator-sized amphibian prized as a delicacy in certain parts of Asia, and the Somali ostrich, a flightless bird hunted for its meat, feathers, leather and eggs. Better-known animals featured in the study include whales, sharks, sea turtles, lions, tigers and bears.

The scientists’ findings suggest that megafauna are far more vulnerable to extinction than vertebrates as a whole. (As Specktor points out, only 21 percent of all vertebrates are threatened with extinction, while 46 percent have declining populations.) This trend has become increasingly apparent over the past 250 years. During this time period, according to Oliver Milman at the Guardian, nine megafauna species, including two varieties of giant tortoise and two types of deer, have gone extinct. The decline is in part due to what Specktor describes as “human over-hunting and habitat encroachment.”

Quartz’s Chase Purdy explains that humans’ ascension to the role of “Earth’s super-predator” began toward the end of the Pleistocene, when our species became increasingly technologically savvy and started using projectile weapons to hunt larger animals from a safe distance. Today, however, humans no longer need to rely on megafauna for food. As Purdy notes, the majority of contemporary food sources derive from agriculture and aquaculture, while most “wild” meat stems from the capture of smaller, and often more abundant, prey.

"It’s a complex issue,” lead author William Ripple, an ecologist at Oregon State University, tells the Guardian’s Milman. “Sometimes large animals are killed for trophies, sometimes it’s subsistence hunting and fishing, sometimes it’s illegal poaching—it runs the gamut."

Ripple continues, “Humans have become super predators who don’t even have to come into contact with the things we are killing. Many of these large animals have low reproduction rates so once you add in that pressure they become vulnerable.”

Effective megafauna conservation will require the minimization of direct harvesting for meat or other body parts, the authors write in the study. Although such curbing efforts will likely have little influence on food supply, the team admits that “economic values, cultural practices and social norms might complicate the picture.”

Still, Ripple says in a press release, “If we don’t consider, critique and adjust our behaviors, our heightened abilities as hunters may lead us to consume much of the last of the Earth’s megafauna.”

Bringing Light to West Africa

Smithsonian Center for Folklife and Cultural Heritage

As Rahama Wright embarked on a two-year Peace Corps assignment in Mali, she had every intention of seizing the opportunity to empower and transform the lives of countless women. Just how she planned to do it would require ingenuity, determination, and most importantly, flexibility. After completing her undergraduate studies, Rahama worked as a Peace Corps health volunteer, providing pre- and post-natal care and support at her village’s health center.

While medicine was not her original course of study, her adaptable spirit allowed her to flourish in this new discipline. As she worked away providing care for the mothers of a Malian village, she contemplated how she could incorporate her interest in shea butter with her current position. She had already heard of shea butter and was very interested in learning more. She considered creating a secondary venture that would consist of a small enterprise development project, not knowing that it would blossom into a business that would generate income for Malian women.


“Be flexible…There is a reason why you’re there!” was Rahama’s response when I asked what advice she would give to other Peace Corps volunteers. She recounts leaving behind many of the creature comforts we take for granted, like the bus running on schedule or the convenience of shopping. She emphasizes that every day brought her new experiences. In addition to being adaptable, she advises, “You don’t know everything.” While Peace Corps volunteers have attained higher education, they have to adapt to a completely new and alien culture.

Peace Corps training prepares its volunteers for the jolting experience of transitioning into a new world and how to understand new cultures and practices. It is important to couple knowledge with a willingness to learn more. “You can’t quantify your experience, you must qualify it…” sticks out in my mind as Rahama and I continue with our conversation. She explains to me that Peace Corps volunteers may become discouraged in their experiences. Volunteers, like Rahama, are often living in countries where basic infrastructure is lacking. This includes the running water and electricity we use freely every day. While volunteers work to transform the lives of those they work with, there are limitations. Rahama explains that it is necessary to focus on the positive experiences, and not dwell on what is beyond one's control. Peace Corps volunteers will have to learn new traditions, languages, and social norms. Rahama insists it was her flexibility and adaptability that allowed her to connect with community leaders, women’s leaders, and elders.

As her women’s cooperative continued to grow, Rahama bore witness to the transformation taking hold in her community. Women’s wages increased tremendously and new opportunities arose for all of them. Sending their children to school was now an attainable goal. The organization she founded, Shea Yeleen International, has now had over 800 women work in its women’s cooperatives in Mali, Ghana, and Burkina Faso. When I asked Rahama what “yeleen” means, she explains that it is a Malian word for “light/luminous.” While shea butter is known for its restorative effects on the skin and body, Shea Yeleen has been a source of light and warmth for the women of West Africa.

Katherine D. Campbell is an educational aide/facilitator in the Lemelson Center at the Smithsonian's National Museum of American History.

How Fake News Breaks Your Brain

Smithsonian Magazine

"Pope Francis shocks world, endorses Donald Trump for president." “Clinton's assistant J. W. McGill is found dead.” “‘Tens of thousands’ of fraudulent Clinton votes found in Ohio warehouse.” These shocking news headlines of the past year all had one thing in common: They weren’t true. Not in the slightest. Each was manufactured, either out of malice or an attempt to cash in on advertising revenue, in an effort to deceive as many unwitting Internet readers as possible. They were, in other words, “fake news.”

Fake news, of course, is nothing new. In the past it took the form of pamphlets created to smear political enemies or sensationalist stories designed to “go viral” the old-fashioned way through newspaper sales. But the recent surge of false information enabled by our new social media landscapes has propelled it forward as a serious problem worthy of national and even international debate.

The problem, people say, is the medium. Which makes sense: Social media platforms like Facebook face criticism for enabling the spread of this kind of misleading or incorrect information, because they allow any user or even automated bots to post legitimate-looking articles, which then proceed to spread like wildfire through "liking" and "sharing." Now Facebook has rolled out new tools to crack down on fake viral articles, while Twitter is testing a new feature to let users flag misleading, false or harmful information.

But a new study published this week in the journal Nature Human Behaviour shows that the limitations of the human brain are also to blame. When people are overloaded with new information, they tend to rely on less-than-ideal coping mechanisms to distinguish good from bad, and end up privileging popularity over quality, the study suggests. It’s this lethal combination of data saturation and short, stretched attention spans that can enable fake news to spread so effectively.

"Through networks such as Twitter and Facebook, users are exposed daily to a large number of transmissible pieces of information that compete to attain success," says Diego Fregolente Mendes de Oliveira, a physicist at Northwestern University who studies how networks of people work and lead author of the study.

Because of the significant impacts that social media can have on politics and life, Oliveira says, discriminating between good and bad information has become "more important in today's online information networks than ever before." Yet even though the stakes are higher, the dynamics of like-minded groups such as those found on social media can undermine the collective judgment of those groups—making judgment calls about fake news even harder to make. As the study puts it, when given too much information, humans become “vulnerable to manipulation.”

In 2016, Oliveira set out to study how information spreads on social networks, and particularly how "low-quality information" or fake news can end up rippling out like a contagion. He designed a theoretical model to predict how fake news spreads on social networks.

The model did not incorporate actual human users or actual fake articles. But it did draw on data collected by independent observers about debunked (but nonetheless popular) Facebook and Twitter articles to calculate an average ratio of real news to fake news in posts flagged for review by users. Oliveira used this ratio to run an algorithm he designed on the sharing of news in a network.

This model was similar in design to a previous study in which Oliveira showed how people who segregate themselves into separate networks—the social bubbles of like-minded people one tends to create on Facebook, for example—can contribute to hoaxes and fake information spreading. As the thinking goes, these people are less likely to be exposed to information contrary to the posts their like-minded friends are sharing that could oust fake news and reveal the truth.

At relatively low flows of information, his algorithm predicted that a theoretical social media user was able to discriminate between genuine and fake news well, sharing mostly genuine news. However, as Oliveira and his coauthors tweaked the algorithm to reflect greater and greater flows of information—the equivalent of scrolling through an endless Twitter or Facebook feed—the theoretical user proved less and less capable of sorting quality information from bad information.

Oliveira found that, in general, popularity had a stronger effect on whether a person shared something than quality. At higher levels of information flow that effect became more pronounced, meaning people would theoretically spend less or no time assessing the information’s quality before deciding to share it. Soon, as they paid less and less attention to each piece of information, the people were sharing fake news at higher and higher rates.

At the highest rates modeled, the quality of a piece of information had zero effect on the popularity of that information. "We show that both information overload and limited attention contribute to a degradation in the system's discriminative power," Oliveira said via email.
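
To make that dynamic concrete, here is a toy Python simulation in the spirit of Oliveira’s description. It is not the authors’ model: it captures only the limited-attention and information-overload aspect, and every parameter name and value is an assumption.

    # Toy simulation (not the authors' model): items of random "quality"
    # compete for a user's limited attention. As the arrival rate of new
    # items ("flow") rises, the correlation between an item's quality and
    # how often it gets shared drops -- the overload effect described above.
    import random
    from statistics import correlation  # Python 3.10+

    def simulate(flow, attention=10, steps=5000, seed=0):
        rng = random.Random(seed)
        items = []  # every item ever posted: [quality, share_count]
        feed = []   # items currently visible to the user
        for _ in range(steps):
            for _ in range(flow):            # `flow` new items arrive per step
                item = [rng.random(), 0]
                items.append(item)
                feed.append(item)
            feed = feed[-attention:]         # limited attention window
            # the user shares one visible item, weighted by its quality
            chosen = rng.choices(feed, weights=[q for q, _ in feed])[0]
            chosen[1] += 1
        return correlation([q for q, _ in items], [s for _, s in items])

    for flow in (1, 5, 20):
        print(f"flow={flow:2d}  quality-vs-shares correlation: {simulate(flow):.2f}")

Running the sketch with larger values of flow should show the correlation between quality and sharing shrinking toward zero, which is the qualitative pattern the study reports.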

While the model has clear limitations, it does provide one interpretation of how fake news spreads. "Traditionally it is believed that truth has some inherent power to overcome false," says Haluk Bingol, a computer engineer at Boğaziçi University in Turkey who has long studied online networks. "Similarly, the good eventually beats the bad. Social norms are based on these assumptions. Interestingly this has never been tested empirically."

Bingol, who was not involved in this study, says it highlights how the quality of information does not always win out when it comes to distribution. Oliveira’s research aligns with Bingol’s previous findings on the relationship between choice and the amount of information. In one paper, he found that a merchant’s recommendation of an item to a potential customer mattered even more when the customer was presented with more options to choose from.

"That is, if you artificially increase the number of choices, you can obtain better results with the same 'marketing push,'" Bingol says. In other words, a person being overloaded with information is much more easy to manipulate—for advertisers, and for purveyors of fake news. "Clearly this is not difficult to do today," he adds.

Walter Quattrociocchi, a computer scientist at the IMT School for Advanced Studies Lucca in Italy, is more skeptical of Oliveira's model. "Oversimplifying the complex social dynamics behind the emergence of narratives could be misleading," says Quattrociocchi, who was not involved in this research. For instance, the model used worked on the simplified assumption that social media users introduce new information at the same rate, and that users all start with the same attention spans.

While he found the study interesting, Quattrociocchi notes that other research has shown how confirmation bias and other factors beyond the scope of Oliveira's model can significantly affect the spread of information online.

For future research, Oliveira hopes to enhance his model with some of these other factors, including how a person's relationship to the sharer of information affects how they process it, and how likely people would be to change their minds upon receiving information online that conflicts with their current beliefs.

At the end of the day, Oliveira believes that stopping fake news starts with readers. He suggests that people read carefully what they share online, avoid unfriending or unfollowing people to create an online echo chamber, and avoid assuming anything is trustworthy even if they trust the person sharing it. "Keep in mind that our friends are probably not good editors and are driven by emotions and biases more than objectivity and trustworthiness," he points out.

So give this article another read, and check out where it came from before you click “share.” 

Portrait Gallery's Hide/Seek Uncovers an Intricate Visual History of Gay Relationships

Smithsonian Magazine

It's hard to consider a large pile of Jolly Rancher-type candies as a form of portraiture. And yet, in the corner of the National Portrait Gallery's new show "Hide/Seek: Difference and Desire in American Portraiture" is a tidy spill of sweets in Technicolor cellophane. You can't miss it, nor should you—it's one of the few opportunities you'll ever have to not only touch the art, but to eat it. (Minding the nearby choking hazard warning signs, of course.) But the sheer whimsy of it all is quickly undercut upon realizing that the piece is a memorial to Ross Laycock, partner and lover of the artist Felix Gonzalez-Torres. Laycock died of AIDS in 1991.

But what does a pile of candy really communicate about a human being? Minimalist art isn't always easy to read, so careful consideration has to be paid to what visual elements are there before you. To display the work of art, the museum had a number of guidelines they were required to follow. "There has to be an assortment of colors," says Hide/Seek curator David Ward, "and it has to weigh 175 pounds—Ross’s weight when healthy—at the start of the installation." As viewers pass by and eat the candy, they enjoy the sweetness of the relationship Gonzalez-Torres and Laycock shared.

The piece was created at a point in time when much of America—including the nation's leaders—was ignoring the AIDS epidemic, and the dwindling pile of candy is also a symbol for the dissolution of gay communities in the wake of this disease. Furthermore, the piece can be arranged in one of two ways: a mound in a corner or in a rectangle on the floor. "The mound in the corner is simply a way of collecting or organizing it so it’s not just a lump that gets spread out on the floor in a misshapen mass," Ward explains. "But organizing it flat suggests two things: either it’s a bed or it’s a grave. This makes it more powerful in a way but we didn’t have the space to install it like that."

But artwork that speaks to how AIDS impacted gay communities is only a facet of Hide/Seek. As a whole, the show reveals how American artists have explored human sexuality. Those who approach the show thinking that gay culture is a recent development may be surprised to find that it has been hidden in plain view for decades. It's all a matter of knowing how to crack the visual codes that artists hid in their work. "This is a show about oblique glances," says Ward. "It's a show about subversion."

For a preview of the show, be sure to check out the gallery below as well as Blake Gopnik's Washington Post review. Hide/Seek: Difference and Desire in American Portraiture will be on view at the National Portrait Gallery until February 13, 2011.

Walt Whitman by Thomas Cowperthwaite Eakins. National Portrait Gallery, Smithsonian Institution. “Walt Whitman is the founding spirit of this show,” says Ward. During the Civil War, Whitman, whose poetry collection Leaves of Grass contains themes of free love, worked as a nurse in the Patent Office Building, which is now the National Portrait Gallery. Thomas Eakins took this photograph a year before the poet’s death, in 1891.

Salutat by Thomas Cowperthwaite Eakins. National Portrait Gallery, Smithsonian Institution. In the late 19th century, sporting events that glorified masculinity rose in popularity. College football, rowing and boxing celebrated the fit and healthy physique of the athlete. Here, Eakins plays with social norms by portraying a scantily clad boxer instead of a nude female as the object of an all-male crowd’s gaze. The boxer is the 22-year-old featherweight Billy Smith, who was a close, devoted friend to the artist.

Painting No. 47, Berlin by Marsden Hartley. National Portrait Gallery, Smithsonian Institution. In this 1917 canvas, Marsden Hartley memorializes a man he fell in love with, a German soldier named Karl von Freyburg, who was killed during World War I. “Gays and lesbians were particularly attuned to abstraction because of the care with which they had to present themselves in society,” says Ward. “Their lives had to be coded to hide themselves from repressive or hostile forces, yet they also had to leave keys both to assert their identity and to link up with other members of the community.” Von Freyburg’s initials, his age at death and his position in the cavalry unit are all cautiously hidden in this abstraction, Painting No. 47, Berlin.

Self Portrait by Romaine Brooks. National Portrait Gallery, Smithsonian Institution. Romaine Brooks was both an artist and patron of the arts. In this 1923 self-portrait, she depicts herself in hyper-masculine clothing. “I think the element of cross-dressing has had an appeal in the lesbian community,” Ward says. “Brooks abandons a stereotypically female look for a combination of items that would signal how she was crossing gender and sexual lines.”

Janet Flanner by Berenice Abbott. National Portrait Gallery, Smithsonian Institution. Janet Flanner was an American living in Paris with her lover Solita Solano, and together they traveled in the most fashionable gay social circles. Flanner wrote a regular column for the New Yorker that gave readers a coded glimpse of the Parisian “in crowd.” In this 1923 portrait, Flanner’s masks are a symbol of the multiple disguises she wears, one for private life and one for public life.

Marsden Hartley by George Platt Lynes. National Portrait Gallery, Smithsonian Institution. This 1942 portrait by photographer George Platt Lynes captures artist Marsden Hartley mourning the death of another man he admired. A shadowy figure haunts the background, alluding to the loves of Hartley’s life that were lost and unspoken.

Robert Mapplethorpe Self-Portrait by Robert Mapplethorpe. National Portrait Gallery, Smithsonian Institution. Stricken with AIDS, Robert Mapplethorpe casts himself in this 1988 self-portrait as the figure of death. “What he is doing,” Ward says, “is refusing to accept our pity. He is refusing to be defined by us: poor gay man, poor dying gay man. He is also dying with dignity, turning himself into the King of Death. He is owning his status. And what he is telling us is that we are all going to die. We are all mortal and this is the fate that awaits us all. And I also think he is making a statement that he is going to survive after death because of his work as an artist. He is transcending death through art.”

Unfinished Painting by Keith Haring. National Portrait Gallery, Smithsonian Institution. As AIDS raged through gay communities across the United States beginning in the 1980s, Haring’s devastating 1989 canvas, entitled Unfinished Painting, mourns the loss of so many. Haring himself died from AIDS on February 16, 1990, a year that saw the incredible toll—18,447 deaths—of the disease.

Camouflage Self-Portrait by Andy Warhol. National Portrait Gallery, Smithsonian Institution. In this 1986 canvas, Andy Warhol plays with the concept of camouflage and the idea that portraiture is a means of masking oneself. Here he is hidden, yet in plain sight.

Ellen DeGeneres, Kauai, Hawaii by Annie Leibovitz. National Portrait Gallery, Smithsonian Institution. When Ellen DeGeneres publicly acknowledged her lesbianism in 1997, it was a landmark event. Coming out defied Hollywood’s convention of keeping one’s homosexuality out of public view, and it gave her a degree of control over her life. “For me,” DeGeneres said in a 1997 interview with Diane Sawyer, “this has been the most freeing experience, because people can’t hurt me anymore.”

Fall in Love With Cannibalism This Valentine's Day

Smithsonian Magazine

We "civilized" folk tend to write off cannibalism as a freak phenomenon reserved for psychopaths, starvation and weird animals (I’m looking at you, praying mantis). In fact, eating others of your kind is a well-established biological strategy employed throughout the animal kingdom. Moreover, our own species’ history is rich with examples of this "eccentric" behavior, from medicinal consumption of human body parts in Europe to more epicurean people-eating in China.

In Cannibalism: A Perfectly Natural History, zoologist and author Bill Schutt uses science, humor and engaging storytelling to expose all the gory details of this underappreciated yet surprisingly scrumptious subject. We spoke with Schutt about some of the more intriguing tidbits he learned while on the cannibalism beat—perfect conversation starters for wooing your Valentine’s date over dinner. 

What’s the biggest misconception surrounding cannibalism in animals?

Until 15 years ago or so, the party line for scientists was that cannibalism was the result of one of two things: Either there’s no food, or we stuck these animals in a cage and now they’re acting bizarrely. In other words, it was caused by starvation or captive conditions. Researchers have recently discovered that that’s a real misconception. In fact, across the whole animal kingdom cannibalism has all sorts of functions—including parental care. 

For example, some birds lay eggs asynchronously as a "lifeboat strategy." If there are enough resources, they’ll raise both chicks, but if not, they'll kill the younger chick and eat it so the older one survives. Cannibalism can also be a reproductive strategy: If a new male lion takes over the pride, for instance, he’ll kill and eat the existing cubs to make the females come into heat quicker.

Do you have a favorite example of cannibalism in the animal kingdom?

Probably my favorite example is this weird group of legless amphibians, the Caecilians. There are two types of Caecilians: egg-laying ones and ones that give birth to live young. Both have wild adaptations. In the egg layers, the hatchlings peel and eat their mother’s fat-laden skin—which grows back, only to be peeled again, for several weeks.

In the species that give birth to live young, on the other hand, the eggs hatch internally. Scientists were puzzled to find that the young are born with tiny teeth, which are lost soon after birth. They were like, “What’s going on here?” After dissecting some specimens, they found that the lining of the mother’s oviduct in the sections where the babies were developing—another area full of nutritious fat—was literally being eaten by them.

This behavior wasn’t aberrant; it was an evolved form of parental care. That blew me away.

What is cannibalism’s forgotten role in Western history?

The big surprise to me was finding out that medicinal cannibalism was practiced frequently throughout Europe, from the Middle Ages on and lasting even into the beginning of 20th century. When we talk about medicinal cannibalism, we’re talking about using human body parts or blood to treat disease. In most instances, people weren’t being killed to be eaten, although the bodies of the newly dead—or even the not-quite-dead—were often used after public executions. 

In fact, people believed that the more violent the death, the more potent and useful the person’s parts. From blood collected at executions and doled out to treat epileptic seizures to human fat used for skin ailments, to ground up skulls or mummies mixed into elixirs, nobility as well as commoners regularly consumed human parts. 

Why did cannibalism become taboo in the West?

Blame the Greeks. It started with Homer and the Cyclops—the one-eyed giant that eats Odysseus’ men—and then moved on to being demonized by the Romans and Shakespeare. It snowballed from there, with the Brothers Grimm turning it into a threat for children, to Robinson Crusoe and Freud—the list goes on and on. It was seen as something that monsters did.

Culture is king, and Western culture tells you that cannibalism is the worst thing you can do in a moral sense. Elsewhere, though, cannibalism was not taboo. As a result, some cultural groups that did not get that kind of Western input were just as horrified to learn that we buried our dead as Westerners were mortified to hear that they cannibalized theirs.

What was the effect of such thinking on other cultures?

As explorers went out and stuck flags in places, one of the main things they started with was a spiel along the lines of, “Oh, and that cannibalism thing you guys practice? You’re not doing that anymore.” It was also used as a tool by these “explorers” to justify destroying whole cultures. If you were seen as a cannibal, then it was ok to hunt you with dogs and butcher you, because you were seen as less than human.

In Spain in the 15th century, Queen Isabella basically told Columbus, “You have to treat people nicely when you meet them—unless they’re cannibals, then all bets are off.” (Or words to that effect.) By labeling millions of people across the Caribbean and Mexico as “cannibals,” the Spanish gave themselves permission to beat, enslave and murder those they encountered. There is not a shred of evidence that indicates that most indigenous groups encountered by the Spaniards were cannibals.

How about in the East, where Western influence arrived much later?

In China, human flesh was baked, boiled, fried and made into soup for maybe 2,000 years. There are all sorts of descriptions about human flesh being preferred—of invaders coming in and eating kids and women because they liked the way they tasted best—and recipes for preparing human meat. China also has a Confucian concept called filial piety, which emphasizes respect and care of elders. In its extreme expression, people would cut off pieces of their own bodies—eyeballs plucked out, part of their own livers removed—all to feed to sick relatives as a last-resort medicinal treatment.

In other cases, it’s not culture but stressful circumstances that lead to cannibalism. You write about the Siege of Leningrad during World War II, for example, when starving residents resorted to cannibalism, or of the Donner Party snowbound in the wilderness in the mid-19th century and forced to eat their dead to survive. Could this happen again?

Absolutely. If you look at the animal kingdom, two of the reasons cannibalism occurs are overcrowding (tiger salamander larvae eat each other when crowded into close quarters) and a lack of alternative forms of nutrition (many spiders, insects and snails lay “trophic eggs”—unfertilized eggs that the young eat when they hatch).

If you put human beings in that position—whether it’s a famine, a siege or they’re stranded somewhere—and there’s no food, then they are going to go through predictable steps in the process of starvation. In the end, they’ll either die or they’re going to consume human flesh, if it’s available.

That’s not based on science fiction but on the history of what has happened when there’s nothing to eat. In the future, if there’s an agricultural collapse in a place where there are suddenly no other forms of nutrition, people might resort to cannibalism. Horrible? Yes, but not surprising or abnormal.

What do examples of stress-induced cannibalism say about the limits of human social norms and morality?

We have these sets of rules we try to follow. But when the going gets tough, that stuff eventually goes out the window. The Donner Party were good Christians who never thought they’d be consuming their own relatives because of the horrible conditions they found themselves in. There’s a biological directive to survive, and at that point, when you reach that extreme, you’re not worried about the fact that there’s a taboo. You simply want to live.

Have you ever tasted human flesh?

While investigating the phenomenon of placentophagy (placenta-eating), I was invited to test some for myself. This was during my visit to what was basically a one-stop center for all your placenta-related needs. The husband of the woman who ran the place, a chef, prepared a bit of his wife's placenta osso bucco style. The consistency was like veal, but the flavor was more organ meat—like chicken gizzards. It was delicious.
