
Unveiling Stories: Project Zero Global Thinking Routine

SI Center for Learning and Digital Access
A Project Zero “Global Thinking” routine for revealing multiple layers of meaning. This routine invites students to investigate the world and develop powerful habits of global journalism consumption. The framework asks students to consider five questions: “What is the story?,” “What is the human story?,” “What is the world story?,” “What is the new story?,” and “What is the untold story?”

UNVEILING STORIES

A routine for revealing multiple layers of meaning

1. What is the story?

2. What is the human story?

3. What is the world story?

4. What is the new story?

5. What is the untold story?

Purpose: What kind of thinking does this routine encourage?

This routine invites students to reveal multiple layers of meaning in an image, text, or journalistic report. Each layer addresses a key dimension of quality global journalism: the central, most visible story; the way the story helps us understand the lives of fellow humans; the ways in which the story speaks to systemic global issues; what is new and instructive about the story and issues explored; and the important absences or unreported aspects of the story. This routine also invites students to investigate the world and develop powerful habits of global journalism consumption – habits that are transferable to information consumption more broadly.

Application: When and where can it be used?

This routine can be used in global competence development in the arts, geography, literature, and history.

Launch: What are some tips for starting and using this routine?

You may consider selecting some – not all – of the routine’s questions depending on your goals. You may also consider modifying the order in which the questions are introduced. In using this routine with your students, you may see “the story” interpreted in one of the following ways: 1) “the story” told by the article, image, or material that they read, or 2) “the story” proposed to explain or contextualize the event depicted, e.g., “the human story that led to the contamination of the Gulf of Mexico begins with our dependence on fossil fuels.”

Beauty and Truth: Project Zero Global Thinking Routine

SI Center for Learning and Digital Access
A Project Zero “Global Thinking” routine for exploring the complex interaction between beauty and truth. This routine invites students to consider how journalists and artists communicate ideas about the world. After picking an image or story to examine, the framework asks students to consider: “Can you find beauty in this [image, story]?,” “Can you find truth in this [image, story]?,” “How might beauty reveal truth?,” “How might beauty conceal truth?”

BEAUTY AND TRUTH

A routine for exploring the complex interaction between beauty and truth

1. Can you find beauty in this [image, story]?

2. Can you find truth in this [image, story]?

3. How might beauty reveal truth?

4. How might beauty conceal truth?

Purpose: What kind of thinking does this routine encourage?

This routine invites students to explore the complex interaction between beauty and truth and consider how journalists and artists comment on and communicate ideas about the world. This routine also helps students navigate the overwhelming quantities of information accessible in an increasingly visually informed world.

Application: When and where can it be used?

In art and journalism, the routine aims to slow students’ thinking down and invite them to reflect on how quality work uses beauty to engage us to learn more about an issue and seek truth. The routine also invites a critical analysis of the ways in which beauty can mislead.

Launch: What are some tips for starting and using this routine?

Think of this routine as one that invites you and your students to a broad and deep conversation about a photograph or work of art. Allow time for individual students to share ideas of beauty and truth – constructs unlikely to have been explored explicitly in the past. In their discussion, students may reveal the misconception that photographs by their very nature reveal truth. In questions three and four, the terms “beauty” and “truth” can be inverted.

Circles of Action: Project Zero Global Thinking Routine

SI Center for Learning and Digital Access
A Project Zero “Global Thinking” routine for fostering a disposition to participate and take responsible action. This routine invites students to distinguish between personal, local, and global spheres and deliberate about potential courses of action and their consequences. The framework asks students to consider what they can do to contribute to an issue within three circles of action: “In my inner circle (of friends, family, the people I know),” “In my community (my school, my neighborhood),” and “In the world (beyond my immediate environment).”

CIRCLES OF ACTION

A routine for fostering a disposition to participate

What can I do to contribute…

1. In my inner circle (of friends, family, the people I know)?

2. In my community (my school, my neighborhood)?

3. In the world (beyond my immediate environment)?

Purpose: What kind of thinking does this routine encourage?

This routine is designed to foster students’ dispositions to participate and take responsible action. It invites them to distinguish personal, local, and global spheres and make local-global connections. It also prepares them for an intentional deliberation about potential courses of action and their consequences.

Application: When and where can it be used?

This routine can be used across disciplines (e.g., geography, science, literature, economics) and with a broad range of resources (e.g., films, narratives, photographs) typically addressing a conflict, problem, system, or design that can be improved through participation and engagement. This routine can also be used informally in daily school contexts and interactions where individual students can exhibit agency (e.g., a conflict among friends, consumption patterns, the integration of immigrant students).

Launch: What are some tips for starting and using this routine?

This routine invites students to map possibilities for action, and the order of questions can be inverted if necessary. Call students’ attention to an issue that they can perceive as requiring solutions. Students are best prepared to use this routine when they have a moderate understanding of the issue, are primed to care about it, and have a sense of urgency or need for a response. This routine is particularly effective when students sense the need but have difficulty considering viable paths for action. The routine can be followed by discussing: What are the barriers to students’ capacity to take action at various levels? Drawing on a rich initial actions map, students may be invited to consider factors such as ethics, viability, personal interest, and potential impact as they decide what to do next.

How Else and Why: Project Zero Global Thinking Routine

SI Center for Learning and Digital Access
A Project Zero “Global Thinking” routine for cultivating a disposition to communicate across difference. This routine asks students to consider that they have communicative choices and that intention, context, and audience matter when communicating appropriately with diverse audiences. The routine asks students to make a statement (“What I want to say is…”) and then answer the question, “How else can I say this? And why?” multiple times.

HOW ELSE AND WHY?

A routine for cultivating a disposition to communicate across difference

1. What I want to say is… (Student makes a statement and explains intention)

2. How else can I say this? And why? (Student considers intention, audience, and situation to reframe things such as language, tone, and body language)

3. How else can I say this? And why? (Student considers intention, audience, and situation to reframe things such as language, tone, and body language)

(Repeat question)

Purpose: What kind of thinking does this routine encourage?

This routine is designed to develop students’ dispositions toward appropriate communication with diverse audiences, where students understand (a) that they have communicative choices and (b) that intention, context, and audience matter in communicating appropriately, especially across cultural, religious, economic, or linguistic differences. Through multiple reflective iterations of a particular statement (comment, question, story), the routine invites students to consider content, audience, purpose, and situation for communication (what, to whom, why, and where); refine the use of symbols (verbal, visual, nonverbal) to find forms of expression appropriate for the context; and reflect on communication and miscommunication.

Application: When and where can it be used?

This routine is broadly applicable to many communicative situations. These may include distinctly intercultural scenarios that are present in the curriculum (e.g., a story, historical event, conflict, scientific finding). They may also include moments when students re-represent ideas or phenomena (e.g., when producing a graph in statistics, a poster design, an interpretation of a work of art). Communicative situations may also include regular classroom discussions or informal interactions in and outside of school. In selecting communicative situations for analysis, you may prioritize ones that present an opportunity to reflect on the complexities of dialogue across difference and the broad repertoire of possible communicative choices. Examples of provocations include but are not limited to film excerpts, students’ own writings, classroom dialogue, and works of art.

Launch: What are some tips for starting and using this routine?

The phrase “How else can I say this? And why?” can be used with varying degrees of structure. In some cases, students may use the multiple iterations proposed by the routine to explore possible communicative choices in a given scenario and select the one they prefer. In guiding students through this routine, you may consider pairing students up for feedback. Peers can help students construct a concrete sense of audience. It is important to encourage students to consider speakers’ intention, audience, and context when they begin to revise the claims under study. Without doing so, the routine risks inviting students to repeat less-effective forms of communication or reinforce communication misconceptions. Regardless of the topics or contexts in which the routine is used, it is important that students offer an explicit rationale for their communicative choices, as students’ explanations will reveal their current understanding of communicative demands. As with all global thinking routines, students' responses are best seen as the beginning, rather than the end, of a conversation that will enable teachers and peers to offer perspectives and enrich communicative capacities.

Step In–Step Out–Step Back: Project Zero Global Thinking Routine

SI Center for Learning and Digital Access
A Project Zero "Global Thinking" routine to support responsible perspective-taking. This routine invites learners to take other people’s perspectives, recognize that understanding others is an ongoing process, and understand that our efforts to take perspective can reveal as much about ourselves as they can about the people we are seeking to understand. Asks students: “Step In: What do you think this person might feel, believe, know, or experience?”, “Step Out: What would you like or need to learn to understand this person’s perspective better?”, and “Step Back: What do you notice about your own perspective and what it takes to take somebody else’s?”

STEP IN–STEP OUT–STEP BACK

A routine to support responsible perspective-taking

Ask students to choose a person or agent in the situation you are examining, then ask:

1. Step In: What do you think this person might feel, believe, know, or experience?

2. Step Out: What would you like or need to learn to understand this person’s perspective better?

3. Step Back: What do you notice about your own perspective and what it takes to take somebody else’s?

Purpose: What kind of thinking does this routine encourage?

This routine invites learners to take other people’s perspectives (e.g., religious, linguistic, cultural, class, generational), recognize that understanding others is an ongoing, often uncertain process, and understand that our efforts to take perspective can reveal as much about ourselves as they can about the people we are seeking to understand.

Application: When and where can it be used?

This routine can be adapted to a broad range of topics, from examining the perspectives of agents in a story, a historical event, or a contemporary news article, to considering non-human perspectives such as species in an ecosystem, or collective perspectives such as interest groups in a given conflict. You may choose an image, video, story, or classroom incident to ground students’ thinking.

Launch: What are some tips for starting and using this routine?

In “step-in,” make sure learners understand they are reasoning with the information they have, which is always limited. You may point to the speculative nature of their interpretations. In “step-out,” invite learners to see that there is more to understanding another person than the first impression they construct. As they share their views, students may detect stereotypes in their own initial thinking and feel uneasy about “having been wrong” in their guess. It is important to normalize the fact that we all have first impressions of others (and others have them of us), and to stress the importance of committing to understand other persons’ perspectives beyond initial assumptions. Under “step back,” learners may explore how prior knowledge and cultural or linguistic perspectives inform or obscure their interpretation. This routine lends itself to small groups. You may invite students to write their responses to each question individually on separate Post-its first and then share.

The 3Ys: Project Zero Global Thinking Routine

SI Center for Learning and Digital Access
A Project Zero "Global Thinking" routine to discern the significance of a topic in global, local, and personal contexts. This routine encourages students to uncover the significance of a topic in multiple contexts, make local-global connections, and situate themselves in local and global spheres. Asks the questions: "Why might this [topic, question] matter to me?", "Why might it matter to people around me [family, friends, city, nation]?”, and "Why might it matter to the world?"

THE 3Ys

A routine to discern the significance of a topic in global, local, and personal contexts

1. Why might this [topic, question] matter to me?

2. Why might it matter to people around me [family, friends, city, nation]?

3. Why might it matter to the world?

Purpose: What kind of thinking does this routine encourage?

This routine encourages students to develop intrinsic motivation to investigate a topic by uncovering the significance of the topic in multiple contexts. The routine also helps students make local-global connections and situate themselves in local and global spheres.

Application: When and where can it be used?

The routine can be applied to a broad range of topics (from social inequality to a mathematician’s biography, balance in ecosystems, writing a story, or attending school) and questions. You may use a rich image, text, quote, video, or other materials to ground students' thinking. You may find this routine useful early in a unit after the initial introduction of a topic, when you want students to consider carefully why it might be worth investigating further. Teachers have also used this routine to expand on a given topic to help students become aware of how it has far-ranging impact and consequences at the local and global levels. In other cases (e.g., studying poverty in Brazil), the routine is used to create a personal connection to a topic that seems initially remote.

Launch: What are some tips for starting and using this routine?

Ensure that the students have clarity about the focal point of the analysis. For example, you might ask “Why might understanding social inequality matter to me, my people, the world?” as opposed to “Why might this image matter?” Use the questions in the order proposed or in reverse order beginning with the most accessible entry point. For instance, students might unfold the purpose and significance of a story they are writing by first reflecting about why the story matters to them, and then moving out to the world from there. In other cases, a teacher may seek to construct a more personal connection to a distant event (e.g., the Holocaust), thus beginning with the world, then working inward. It is recommended that students work on one step at a time as nuances and distinctions between the personal, local, and global may be lost if they work with the three questions in mind at once. If time allows, compare and group students’ thoughts to find shared motivations and rationales for learning the topic under study.

Global Competence Activities for World Language Classrooms

Smithsonian Education
Discover classroom-tested, ready-to-use teaching resources to enhance the World Language curriculum and foster students’ communication, critical thinking, and collaboration skills. Two Spanish-language teachers present activities for middle and high school language classrooms, all grounded in Project Zero Thinking Routines and Global Competence strategies and designed to promote global-mindedness, curiosity, and empathy. Presenters: Vicky Masson, Christ Episcopal School (MD), Marcela Velikovsky, Bullis School (MD), and Philippa Rappoport, Smithsonian Learning Lab.

Your Blood Type is a Lot More Complicated Than You Think

Smithsonian Magazine

Not long ago, a precious packet of blood traveled more than 7,000 miles by special courier, from America to Australia, to save the life of a newborn. Months before the delivery date, a routine checkup of the mom-to-be had revealed that the fetus suffered from hemolytic disease. Doctors knew that the baby would need a blood transfusion immediately after delivery. The problem was, the baby's blood type was so rare that there wasn't a single compatible donor in all of Australia. 

A request for compatible blood was sent first to England, where a global database search identified a potential donor in the United States. From there, the request was forwarded to the American Rare Donor Program, directed by Sandra Nance. The ARDP had compatible frozen blood on hand, but Nance knew that a frozen bag might rupture in transit. So her organization reached out to the compatible donor, collected half a liter of fresh blood, and shipped it across the Pacific. When the mother came in to give birth, the blood was waiting. “It was just magic,” Nance says.

You’re probably aware of eight basic blood types: A, AB, B and O, each of which can be “positive” or “negative.” They're the most important, because a patient who receives ABO +/– incompatible blood very often experiences a dangerous immune reaction. For the sake of simplicity, these are the types that organizations like the Red Cross usually talk about. But this system turns out to be a big oversimplification. Each of these eight types of blood can be subdivided into many distinct varieties. There are millions in all, each classified according to the little markers called antigens that coat the surface of red blood cells.

AB blood contains A and B antigens, while O blood doesn't contain either; “positive” blood contains the Rhesus D antigen, while “negative” blood lacks it. Patients shouldn’t receive antigens that their own blood lacks—otherwise their immune system may recognize the blood as foreign and develop antibodies to attack it. That’s why medical professionals pay attention to blood types in the first place, and why compatible blood was so important for the baby in Australia. There are in fact hundreds of antigens that fall into 33 recognized antigen systems, many of which can cause dangerous reactions during transfusion. One person's blood can contain a long list of antigens, which means that a fully specified blood type has to be written out antigen by antigen—for example, O, r”r”, K:–1, Jk(b-). Try fitting that into that little space on your Red Cross card.
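To make that rule concrete: if you think of a blood type as the set of antigens on the red cells, compatibility is simply a subset check. The sketch below is purely illustrative—the antigen names are a tiny, invented sample of the hundreds that exist—and not a clinical tool.

```python
# Illustrative sketch only: models the compatibility rule described above.
# The antigens named here are a tiny, invented sample of the real systems.

def is_compatible(recipient_antigens: set[str], donor_antigens: set[str]) -> bool:
    """Donor blood is safe only if it carries no antigen the recipient lacks;
    otherwise the recipient's immune system may treat the cells as foreign."""
    return donor_antigens <= recipient_antigens  # subset test

recipient = {"Vel"}                 # an O-negative, Vel-positive patient
donor_o_neg = {"Vel"}               # O-negative, Vel-positive donor
donor_a_pos = {"A", "RhD", "Vel"}   # A-positive donor

print(is_compatible(recipient, donor_o_neg))  # True
print(is_compatible(recipient, donor_a_pos))  # False: A and RhD are foreign
```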

Scientists have been discovering unexpected antigens ever since 1939, when two New York doctors transfused type O blood into a young woman at Bellevue Hospital. Type O was considered a “universal” blood type that anyone could receive, yet the woman experienced chills and body pain—clear signs that she was reacting to the blood. After running some lab tests, the doctors confirmed that even type O blood could contain previously unknown antigens. They’d accidentally discovered Rhesus antigens.

Additional kinds of antigens have been discovered every few years since then. Almost everyone has some. More than 99.9 percent of people carry the antigen Vel, for example. For every 2,500 people, there's one who lacks the Vel antigen who shouldn't receive blood from the remaining 2,499. (Like many blood types, Vel-negative is tightly linked to ethnicity, so how rare it is depends on what part of the world you’re in.) If a Vel-negative patient develops antibodies to Vel-positive blood, the immune system will attack the incoming cells, which then disintegrate inside the body. For a patient, the effects of such reactions range from mild pain to fever, shock and, in the worst cases, death.

Blood types are considered rare if fewer than 1 in 1,000 people have them. One of the rarest in existence is Rh-null blood, which lacks any antigens in the Rh system. “There are nine active donors in the whole community of rare blood donors. Nine.” That's in the entire world. If your blood is Rh-null, there are probably more people who share your name than your blood type. And if you receive blood that contains Rh antigens, your immune system may attack those cells. In all, around 20 antigen systems have the potential to cause transfusion reactions.

Just to be clear, transfusion patients today don't have much to worry about. In 2012, there were tens of millions of transfusions in the United States, but only a few dozen transfusion-related deaths were reported to the U.S. Food and Drug Administration. Medical practitioners go to great lengths to make sure that transfused blood is compatible. But curiously enough, they manage to do this without even knowing all the antigens present.

Before a transfusion takes place, lab technicians mix a sample of the patient's blood with the sample of a donor whose blood type is ABO +/– compatible. If the two samples clump, the blood may be unsafe to transfuse. “The moment you discover that, you do not know why,” Nance explains. Figuring out the precise cause of the problem is like solving a crossword puzzle, she says. “You test many donors that are known types, and you find out, just by process of elimination, what is the contributing factor that makes this incompatible.”
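That process of elimination can be pictured as simple set logic: any antigen present in every donor sample that clumped, and absent from every sample that didn't, is a suspect. Here is a minimal sketch, with invented donor panels and reactions:

```python
# Minimal sketch of crossmatch elimination; donor panels are invented.

def suspect_antigens(panels: dict[str, set[str]], reacted: set[str]) -> set[str]:
    """Return antigens present in every reacting sample and in no
    non-reacting sample -- the candidates causing incompatibility."""
    suspects = set.intersection(*(panels[name] for name in reacted))
    for name, antigens in panels.items():
        if name not in reacted:
            suspects -= antigens
    return suspects

panels = {
    "donor1": {"A", "RhD", "K1"},
    "donor2": {"A", "K1"},
    "donor3": {"A", "RhD"},
}
clumped = {"donor1", "donor2"}  # samples that reacted with the patient's blood
print(suspect_antigens(panels, clumped))  # {'K1'}
```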

This was the process that helped the newborn in Australia. Lab technicians there had tested the fetal blood and figured out which antigens they needed to avoid. But they still didn't know where in the world they might find suitable blood. So they sent a rare blood request to the international organization set up for cases just like this: the International Blood Group Reference Laboratory in Bristol, England. The IBGRL consults its database of hundreds of thousands of rare donors worldwide to find compatible blood. For the past 30 years, the process of global blood sharing has been gradually standardized during the biennial congress of the International Society for Blood Transfusion, which took place this week in Seoul, South Korea.

In the past two years, at least 241 packets of rare blood were shipped internationally, according to Nicole Thornton, head of Red Cell Reference at the IBGRL. Many more are shipped within national borders. In 2011, for example, more than 2,000 units of rare blood were shipped within the United States. It’s an impressive feat of coordination.

Even rare donor programs with the resources to identify and ship rare blood are looking to improve. There just aren't enough rare donors who come in regularly. The American Rare Donor Program has 45,000 rare donors in its database, but 5 percent of transfusion patients still don't get the blood they need. Coral Olsen, a scientist in charge of regional rare blood banking in South Africa, says that her laboratory often struggles to keep track of registered rare donors. “Because a lot of them are from rural settings, we often can't get ahold of them. So that's our challenge, as far as tracing and tracking and maintaining our rare donor base.”

For many countries, an even bigger challenge is simply dealing with resource constraints. National blood laboratories have to maintain a repository of samples if they want to run detailed antigen tests. Olsen says that in developing countries, where starting samples aren’t always available, it's difficult to even begin classifying and sourcing rare blood. Finally, there's the high cost of importing rare types, especially for patients who need chronic transfusions. In those cases, medical professionals sometimes have to use blood that's known to be incompatible, but unlikely to cause severe reactions because of the particular antigens involved.

One day, scientific breakthroughs may make it easier to find compatible blood for anyone. Geneticists are working on testing methods that determine blood types using DNA, without looking at the blood itself. (So far, this process only works with certain antigens.) Nance hopes that one day, every newborn will undergo testing so that blood banks can build a comprehensive database of every rare type, which would immediately point medical professionals to the nearest compatible donor. Biochemists, meanwhile, have been testing chemicals that effectively mask the antigens on red blood cells, seeking to turn them into “stealth” cells that are functionally universal.

Until then, researchers will probably go on discovering antigens one by one. It's as if the surface of red blood cells started out as a fuzzy picture that scientists have slowly brought into focus, revealing subtle differences that just weren't visible before. For blood scientists and patients with rare blood types, these differences can be tedious and troublesome. But they're also a reminder of our remarkable individuality. With hundreds of possible antigens and millions of possible antigen combinations, your blood can be as unique as your fingerprint.

Which of 2013’s Many Natural Disasters Can We Blame on Climate Change?

Smithsonian Magazine

In the normal course of the world, seasonal shifts in weather can cause crops to fail, water to rise and roads to ice over. But on top of that naturally chaotic system, anthropogenic climate change is creating hazardous conditions that pose similar—sometimes heightened—dangers. It's tricky to know, though, which events fall into which category—was that storm just an unusually bad one or the sign of things to come?

The relatively new scientific field of “extreme event attribution” seeks to separate routine (if unfortunate) weather from weather that can be chalked up to climate change. In a new report published by the American Meteorological Society, independent teams of scientists examined a swath of 2013's big natural disasters with an eye to figuring out how climate change may have contributed to the storms. Climate Central has a breakdown of the extreme events studied in the report, detailing which were tied to climate change and which were not.

Some of the events that the researchers tied to climate change—heat waves in Australia, South Korea, Japan, China and across Europe, and drought in New Zealand—make intuitive sense. Others, like the heavy rains that caused widespread flooding in northern India, fit when you consider the fact that India's rains are strongly guided by the seasonal monsoon, and the monsoon is already thought to be affected by climate change.

But some natural disasters that one would think would be attributable to climate change didn't seem to hold up under the scientists' scrutiny—most notably, the ongoing and record-setting California drought. At best, the drought's connection to climate change is up in the air, with some scientists saying there is a connection and others saying there is not.

In a certain theoretical sense, all weather is now being guided by our greenhouse gas emissions. Global warming is raising the energy in the air and the sea, and as an interconnected system any changes affect the formation of storms, drought and other extreme events. But in a more practical way, some extreme events owe their strength more to climate change than others—some heat waves are just heat waves, while others burn so hot or last so long that scientists think they would have been extremely unlikely, if not impossible, without the warming of climate change.

As Smart News has written before, “[t]here’s never an all-or-nothing relationship between climate change and a particular extreme event. But what event attribution allows us to say is how much more likely a particular weather event was or how much stronger it ended up being because of shifts caused by climate change.”
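As a toy illustration (the numbers below are invented, not from the report): attribution studies typically compare an event's probability in today's climate with its probability in a modeled climate without human influence, then report a risk ratio and a fraction of attributable risk.

```python
# Toy event-attribution arithmetic with invented probabilities.
# Attribution studies compare an event's likelihood in today's climate
# with its likelihood in a modeled climate without human influence.

p_without_warming = 0.01  # assumed 1%-per-year chance absent warming
p_with_warming = 0.03     # assumed 3%-per-year chance in today's climate

risk_ratio = p_with_warming / p_without_warming  # 3.0 -> "three times more likely"
attributable_fraction = 1 - p_without_warming / p_with_warming  # ~0.67

print(f"Risk ratio: {risk_ratio:.1f}")
print(f"Fraction of risk attributable to warming: {attributable_fraction:.0%}")
```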

While not everything that seems like climate change necessarily is, a lot of nasty weather events are. The longer humans keep pushing the system, the more chances climate change will have to show its ugly side.

This “Sweaty” Billboard Kills Mosquitoes

Smithsonian Magazine

Zika virus is spreading like a swarm of mosquitoes—since 2007, the World Health Organization reports, 66 countries have experienced transmission of the disease, and the WHO recently declared the microcephaly and other neurological disorders it’s believed to cause a public health emergency. But one group of Brazilian marketing agencies thinks it can stop the virus’s spread with an unlikely tool, the BBC reports: a billboard that secretes human-like “sweat,” then traps and kills mosquitoes.

It’s called The Mosquito Killer Billboard, and its premise is both disgusting and deceptively simple. On the device’s website, which includes free blueprints for those who might want to make one of their own, its inventors explain the premise. The billboard emits a solution containing carbon dioxide and lactic acid that mimics human sweat and breath, attracting mosquitoes from as far as nearly two and a half miles away. Fluorescent lights make it even more attractive to mosquitoes and take advantage of the bugs’ need for a fixed point of light to navigate. When mosquitoes make it to the billboard, they’re lured inside, where they dehydrate and die.

So far, two billboards (appropriately showcasing a Zika awareness message) have been installed in Rio de Janeiro. The BBC reports that the collective behind the anti-mosquito ads won’t be selling ad space on the billboards. But at least one expert worries that the innovation could backfire. Chris Jackson, an ecologist and pest control specialist at the University of Southampton, told the BBC that since the billboards are so good at sucking mosquitoes in, they could actually endanger people in proximity to the billboard, who could become targets of the hungry bugs.

The idea is just one of a spate of creative solutions coming out in the wake of a virus that could infect up to four million people by the end of the year. Earlier this month, Massachusetts General Hospital’s Consortium for Affordable Medical Technologies (CAMTech) hosted a Zika Innovation Hackathon that yielded ideas like a mobile app that helps hunt down mosquito larvae and a water buoy that automatically dispenses larvicide.

Over 50 engineers, global health specialists and students participated in a similar event at Johns Hopkins a few days later, and the ideas they came up with are just as brilliant and weird. Potential Zika solutions included mosquito trap surveillance systems, Zika-proof clothing, sporting event banners that also scare off bugs and even “Never Will Bite,” a body and laundry soap that could one day make mosquito prevention part of people’s everyday routine.

While a single billboard or bar of soap is unlikely to stop Zika’s deadly march any time soon, every prevented bite represents one less potential victim of the virus. And with mosquitoes implicated in the spread of other deadly diseases, like dengue and malaria, there’s no time like the present to take full advantage of human ingenuity in the war against mosquito-borne illness.

When Did the Human Mind Evolve to What It Is Today?

Smithsonian Magazine

Archaeologists excavating a cave on the coast of South Africa not long ago unearthed an unusual abalone shell. Inside was a rusty red substance. After analyzing the mixture and nearby stone grinding tools, the researchers realized they had found the world’s earliest known paint, made 100,000 years ago from charcoal, crushed animal bones, iron-rich rock and an unknown liquid. The abalone shell was a storage container—a prehistoric paint can.

The find revealed more than just the fact that people used paints so long ago. It provided a peek into the minds of early humans. Combining materials to create a product that doesn’t resemble the original ingredients and saving the concoction for later suggests people at the time were capable of abstract thinking, innovation and planning for the future.

These are among the mental abilities that many anthropologists say distinguished humans, Homo sapiens, from other hominids. Yet researchers have no agreed-upon definition of exactly what makes human cognition so special. 

“It’s hard enough to tell what the cognitive abilities are of somebody who’s standing in front of you,” says Alison Brooks, an archaeologist at George Washington University and the Smithsonian Institution in Washington, D.C. “So it’s really hard to tell for someone who’s been dead for half a million years or a quarter million years.”

Since archaeologists can’t administer psychological tests to early humans, they have to examine artifacts left behind. When new technologies or ways of living appear in the archaeological record, anthropologists try to determine what sort of novel thinking was required to fashion a spear, say, or mix paint or collect shellfish. The past decade has been particularly fruitful for finding such evidence. And archaeologists are now piecing together the patterns of behavior recorded in the archaeological record of the past 200,000 years to reconstruct the trajectory of how and when humans started to think and act like modern people.

There was a time when they thought they had it all figured out. In the 1970s, the consensus was simple: Modern cognition evolved in Europe 40,000 years ago. That’s when cave art, jewelry and sculpted figurines all seemed to appear for the first time. The art was a sign that humans could use symbols to represent their world and themselves, archaeologists reasoned, and therefore probably had language, too. Neanderthals living nearby didn’t appear to make art, and thus symbolic thinking and language formed the dividing line between the two species’ mental abilities. (Today, archaeologists debate whether, and to what degree, Neanderthals were symbolic beings.)

One problem with this analysis was that the earliest fossils of modern humans came from Africa and dated to as many as 200,000 years ago—roughly 150,000 years before people were depicting bison and horses on cave walls in Spain. Richard Klein, a paleoanthropologist at Stanford University, suggested that a genetic mutation occurred 40,000 years ago and caused an abrupt revolution in the way people thought and behaved.

In the decades following, however, archaeologists working in Africa brought down the notion that there was a lag between when the human body evolved and when modern thinking emerged. “As researchers began to more intensely investigate regions outside of Europe, the evidence of symbolic behavior got older and older,” says archaeologist April Nowell of the University of Victoria in Canada.

For instance, artifacts recovered over the past decade in South Africa—such as pigments made from red ochre, perforated shell beads and ostrich shells engraved with geometric designs—have pushed back the origins of symbolic thinking to more than 70,000 years ago, and in some cases, to as early as 164,000 years ago. Now many anthropologists agree that modern cognition was probably in place when Homo sapiens emerged.

“It always made sense that the origins of modern human behavior, the full assembly of modern uniqueness, had to occur at the origin point of the lineage,” says Curtis Marean, a paleoanthropologist at Arizona State University in Tempe.

Marean thinks symbolic thinking was a crucial change in the evolution of the human mind. “When you have that, you have the ability to develop language. You have the ability to exchange recipes of technology,” he says. It also aided the formation of extended, long-distance social and trading networks, which other hominids such as Neanderthals lacked. These advances enabled humans to spread into new, more complex environments, such as coastal locales, and eventually across the entire planet. “The world was their oyster,” Marean says.

Important artifacts found in the Sibudu Cave and Blombos Cave in Africa include shell beads, red pigments, engravings and projectile points. (Courtesy of M. Malina, University of Tübingen, The Royal Society)

Cave art evolved in Europe 40,000 years ago. Archaeologists reasoned the art was a sign that humans could use symbols to represent their world and themselves. (Courtesy of Wikimedia Commons)

Artifacts found in Blombos Cave in South Africa. (Courtesy of Wikimedia)

Deposit layers in Blombos Cave in South Africa. (Courtesy of Kari Janne Stenersen / Wikimedia)

But symbolic thinking may not account for all of the changes in the human mind, says Thomas Wynn, an archaeologist at the University of Colorado. Wynn and his colleague, University of Colorado psychologist Frederick Coolidge, suggest that advanced "working memory" was the final critical step toward modern cognition.

Working memory allows the brain to retrieve, process and hold in mind several chunks of information all at one time to complete a task. A particularly sophisticated kind of working memory “involves the ability to hold something in attention while you’re being distracted,” Wynn says. In some ways, it’s kind of like multitasking. And it’s needed in problem solving, strategizing, innovating and planning. In chess, for example, the brain has to keep track of the pieces on the board, anticipate the opponent’s next several steps and prepare (and remember) countermoves for each possible outcome.

Finding evidence of this kind of cognition is challenging because humans don’t use advanced working memory all that much. “It requires a lot of effort,” Wynn says. “If we don’t have to use it, we don’t.” Instead, during routine tasks, the brain is sort of on autopilot, like when you drive your car to work. You’re not really thinking about it. Based on frequency alone, behaviors requiring working memory are less likely to be preserved than common activities that don’t need it, such as making simple stone choppers and handaxes.

Yet there are artifacts that do seem to relate to advanced working memory. Tools composed of separate pieces, like a hafted spear or a bow and arrow, are examples that date to more than 70,000 years ago. But the most convincing example may be animal traps, Wynn says. At South Africa’s Sibudu cave, Lyn Wadley, an archaeologist at the University of the Witwatersrand, has found clues that humans were hunting large numbers of small, and sometimes dangerous, forest animals, including bush pigs and diminutive antelopes called blue duikers. The only plausible way to capture such critters was with snares and traps.

With a trap, you have to think up a device that can snag and hold an animal and then return later to see whether it worked. “That’s the kind of thing working memory does for us,” Wynn says. “It allows us to work out those kinds of problems by holding the necessary information in mind.”

It may be too simple to say that symbolic thinking, language or working memory is the single thing that defines modern cognition, Marean says. And there still could be important components that haven’t yet been identified. What’s needed now, Wynn adds, is more experimental archaeology. He suggests bringing people into a psych lab to evaluate what cognitive processes are engaged when participants make and use the tools and technology of early humans.

Another area that needs more investigation is what happened after modern cognition evolved. The pattern in the archaeological record shows a gradual accumulation of new and more sophisticated behaviors, Brooks says. Making complex tools, moving into new environments, engaging in long-distance trade and wearing personal adornments didn’t all show up at once at the dawn of modern thinking.

The appearance of a slow and steady buildup may just be a consequence of the quirks of preservation. Organic materials like wood often decompose without a trace, so some signs of behavior may be too ephemeral to find. It’s also hard to spot new behaviors until they become widely adopted, so archaeologists are unlikely to ever locate the earliest instances of novel ways of living.

Complex lifestyles might not have been needed early on in the history of Homo sapiens, even if humans were capable of sophisticated thinking. Sally McBrearty, an archaeologist at the University of Connecticut in Storrs, points out in the 2007 book Rethinking the Human Revolution that certain developments might have been spurred by the need to find additional resources as populations expanded. Hunting and gathering new types of food, such as blue duikers, required new technologies.

Some see a slow progression in the accumulation of knowledge, while others see modern behavior evolving in fits and starts. Archaeologist Francesco d’Errico of the University of Bordeaux in France suggests certain advances show up early in the archaeological record only to disappear for tens of thousands of years before these behaviors—for whatever reason—get permanently incorporated into the human repertoire about 40,000 years ago. “It’s probably due to climatic changes, environmental variability and population size,” d’Errico says.

He notes that several tool technologies and aspects of symbolic expression, such as pigments and engraved artifacts, seem to disappear after 70,000 years ago. The timing coincides with a global cold spell that made Africa drier. Populations probably dwindled and fragmented in response to the climate change. Innovations might have been lost in a prehistoric version of the Dark Ages. And various groups probably reacted in different ways depending on cultural variation, d’Errico says.  “Some cultures for example are more open to innovation.”

Perhaps the best way to settle whether the buildup of modern behavior was steady or punctuated is to find more archaeological sites to fill in the gaps. There are only a handful of sites, for example, that cover the beginning of human history. “We need those [sites] that date between 125,000 and 250,000 years ago,” Marean says. “That’s really the sweet spot.”

Erin Wayman writes Smithsonian.com's Hominid Hunting blog.

How Three Amateur Jewel Thieves Made Off With New York’s Most Precious Gems

Smithsonian Magazine

On the night of October 29, 1964, two self-styled Miami beach boys crept onto the grounds of New York City’s American Museum of Natural History while a lookout drove a white Cadillac around the museum’s block of Manhattan. The beach boys were talented, brazen and sure-footed. After scaling a fence to the museum’s courtyard, they scrambled up a fire escape to secure a rope to a pillar just above the fourth-floor windows of the J.P. Morgan Hall of Gems and Minerals.  Clinging to the rope, one of them swung to an open window and used his feet to lower the sash. They were in.

Allan Dale Kuhn and Jack Roland Murphy used a glasscutter and duct tape to breach three display cases, and then a squeegee to gather 24 gems. Their haul included the milky-blue Star of India (the world's biggest sapphire, weighing 563.35 carats); the orchid-red DeLong Star Ruby (100.32 carats, and considered the world’s most perfect), and the purplish-blue Midnight Star (the largest black sapphire, at 116 carats).  Fearing they’d tripped a silent alarm, the pair retraced their steps to the street and caught separate getaway cabs.  “For us, it wasn’t anything,” recalled Murphy, who was better known as Murf the Surf. “We just swung in there and took the stuff.”

***

The mid-1960s were salad days for jewel thievery. In 1963, when a U.S. gem heist occurred on average every 32 seconds, crooks stole $41 million worth of insured precious and semiprecious stones. Cash aside, diamonds were the anonymous currency of a thriving seller’s market. An estimated 3.5 million diamonds of one-third of a carat or more were being sold annually in the United States—but that was well short of demand. Abroad, jet-set Europeans, Arabs and Asians knew that jewels held their value in uncertain times. To grease the gears of this emerging global economy, many seemingly legitimate jewel merchants did double-duty as fences. They asked no untidy questions; routinely melted down precious-metal settings into salable ingots; cut conspicuous gems (or “went on the break”) to erase their identity, and then blithely intermixed stolen and honest merchandise.

The best jewel thieves were aristocrats atop a three-tiered class structure.  At its bottom was an army of lowly criminals who committed perhaps 80 percent of all jewel thefts, but did so in crude, often clueless ways. Sandwiched between were about 4,000 skilled professionals who, like the aristocrats, left unwanted items untouched and promptly disposed of their booty. Kuhn, Murphy and their Cadillac-driving lookout, Roger Frederick Clark, probably aspired to this middle class. But they were young—Kuhn was 26, Murphy 27 and Clark 29—and they liked living large. They courted betrayal.

***

James A. Oliver, the director of the American Museum of Natural History, was having a tooth pulled when the heist was first discovered. That afternoon, answering press questions about his institution’s more painful and costly extractions, Oliver conceded that security was “not good.” Other officials elaborated: Batteries in the display-case burglar alarm had been dead for months—a surprise to geology curator Brian H. Mason, who routinely deactivated the system to access the gems. The tops of all the gem hall’s 19 exterior windows were left open two inches overnight for ventilation, and none had burglar alarms. After years when nothing untoward happened, even the precaution of locking a security guard into the gem room overnight had lapsed. 

Museum bookkeepers valued the stolen jewels at $410,000 (about $3 million today). Historically speaking they were priceless, but because premiums were prohibitive, none were insured. Even as burglary detectives from New York’s 20th Squad dusted for prints (they found none), museum executives shuttered the barn. The J.P. Morgan Hall of Gems and Minerals was immediately closed to visitors and “Know Your Precious Gems,” a popular adult-education course, was postponed indefinitely.

The Star of India. (©AMNH/C. Chesek)

***

Authorities believed they were pursuing amateurs who had taken big and prominently displayed stones while ignoring more easily disposable clear gems.  Going on the break with these famous nuggets would involve considerable waste and, therefore, little recompense from fences.

Not so, according to Maurice Nadjari, then the assistant district attorney in charge of the case. “They knew what they wanted and took it,” Nadjari said in a recent phone interview. Kuhn, Nadjari said, planned to pass the biggest gems to an airline-pilot friend for quick conveyance to the Far East and resale to wealthy—and anonymous—foreign collectors.

Kuhn and Murphy were men of accomplishment—Kuhn a skin-diving expert, Murphy a violin virtuoso—but the gem-heisting was wanting for discretion. A vice and gambling plainclothesman named James Walsh heard from an informant who’d attended a party thrown by Kuhn, Clark and Murphy at the Cambridge House Hotel on West 86th Street—a short walk from the Natural History Museum. “I think I got something for you,” the source confided. “There are three guys upstairs in this place…spending money like wild. You’d think they were making it with a machine.”

After obtaining a search warrant, detectives went up to Room 1803, a $525-a-month suite of three rooms, and found marijuana, a floor plan of the Natural History Museum and books about precious stones. Their search was interrupted when a disheveled Roger Clark walked in.  Under questioning, Clark, according to Nadjari’s account, promptly caved and revealed that Murphy and Kuhn had flown to Florida. FBI agents soon arrested them for extradition to New York. Although the crime was nearly solved, the drama had just begun.

(L-R) Jack Murphy and Allan Kuhn, suspects in jewel robbery at The Museum of Natural History, at a hearing. (Lynn Pelham//Time Life Pictures/Getty Images)

***

The authorities held their suspects, but not for long.  The presiding New York judge considered Nadjari’s case shaky and set low bail. After posting bond, the suspects flew south, but not before Murf the Surf emerged as the trio’s photogenic and quotable front man. Interviewed at the Miami office of Kuhn’s attorney, a cigar-puffing Murf expressed annoyance over the whole affair. “I was supposed to be on my way to Hawaii to surf. Now all this inconvenience has fouled things up.” Kuhn sat quietly nearby. 

Things were going well for the rogues. On December 1, a Miami court dismissed federal charges. Nineteen-year-old New York stenographer Janet Florkiewicz, a key material witness who had purportedly carried the jewels when they fled to Miami, was no longer cooperating.  All of Nadjari’s efforts to hike the defendants’ bail failed.

But on December 13, Murphy’s longtime girlfriend, Bonnie Lou Sutera, 22, despondent after hearing that Murphy had a new love, was found dead in a suburban Miami apartment—an apparent suicide.  On January 2, Murphy and Clark were arrested for a Miami burglary, but only after leading police on a mile-long chase in a car registered to Sutera.

Murphy and Clark were arraigned on the burglary charge but soon made the $1,000 bail, in time to fly to a New York hearing—and a waiting trap. Searching files on unsolved jewelry thefts, police struck pay dirt. As soon as the hearing on the Natural History Museum theft adjourned, Kuhn, Murphy and Clark were charged with the January 4, 1964, jewel robbery and pistol-whipping of the actress Eva Gabor. With bail raised to $100,000, Kuhn, Murphy and Clark were suddenly willing to negotiate.

***

Maurice Nadjari faced a dilemma. His suspects were under lock and key, but he needed their help in recovering the loot. But he dared not ask the judge to ease their incarceration. Kuhn was spirited from his jail cell for negotiations with Nadjari and three New York plainclothes detectives. Kuhn said he could recover all the gems—if only he could go to Miami alone. “There’s no damn way you’re going anywhere alone,” Nadjari assured him.  But lured by the prospect of a quick recovery, and convinced that Kuhn’s custody wouldn’t be jeopardized if the three officers went along, Nadjari gambled on a secret trip to Miami.

The mission became a nightmare. Spotting a local TV newsman as they waited to board a Miami flight on January 5, Nadjari grabbed one cop’s fedora, shoved it onto Kuhn’s head and pulled the brim down to his ears. Press evasion continued in Miami. But at Kuhn’s insistence (and the cops’ encouragement), Nadjari agreed to rent a red Cadillac convertible. Just steps ahead of reporters and photographers, the men moved between perhaps a dozen hotels as Kuhn phoned and took calls from his contacts. A compulsive TV watcher, Kuhn offered elaborate excuses for the delay, along with hints of bribes if his custodians would just “look the other way.” At one point, Nadjari phoned his boss, District Attorney Frank S. Hogan. “If you get the jewels, come back,” Hogan advised him. “If you don’t, go to Argentina.”

Finally, a phone call delivered directions to the key for a locker at the Northeast Miami Trailways bus terminal. Detective Richard Maline returned with two water-logged suede pouches (a clue that the gems had been stowed underwater). Inside were just nine gems: the Star of India, the Midnight Star, five emeralds and two aquamarines—but neither the DeLong Ruby nor other lesser gems. With the clock ticking, Nadjari cut his losses. Abandoning the red Caddie in favor of a furtive ride to the airport with a local bail bondsman, Nadjari, the detectives and Kuhn caught an 8:15 A.M. flight. Before buckling in, Nadjari slid the sodden, jewel-laden pouches into an airsickness bag.

***

On April 6, 1965, two months after pleading guilty to the Natural History Museum heist, Allan Kuhn, Jack Murphy and Roger Clark were each sentenced to three-year terms at New York’s Rikers Island Correctional Facility. (The Eva Gabor case was eventually dropped after she refused to testify.) A few days after the sentencing, the Star of India went back on exhibit, this time secured in a thick glass display case stationed on the museum’s main floor. Each night the case pivoted out of sight into a black two-ton safe.

That September, the DeLong Star Ruby was recovered—rather, it was ransomed for $25,000 by the insurance millionaire John D. MacArthur (the same man who would establish the foundation that funds the fellowships known as “genius grants”). Though the New York DA’s office played no part, the recovery bore the earmarks of Nadjari’s scavenger hunt: MacArthur, after negotiating privately with a Florida fence, found the stone in a telephone booth near Palm Beach. (Eventually Duncan Pearson, 34, a Miami friend of the Rikers convicts, was convicted of hiding the gem.) With the DeLong’s return, 10 of the 24 most valuable gems were back in museum custody.  The rest were never found.

***

In the years since, interest in Roger Frederick Clark and Allan Dale Kuhn has faded—although Kuhn got a 1975 writer’s credit for Live a Little, Steal a Lot, a film about the Museum of Natural History caper. In 1967, Murphy and Kuhn were arrested for a string of Los Angeles jewelry burglaries, but they were never tried. Murf the Surf’s criminal career then took a much darker turn. In 1968 he was charged with conspiracy and assault in connection with a botched armed robbery of Miami Beach socialite Olive Wofford. The next year he was convicted of first-degree murder in the “Whiskey Creek” case: the bludgeoning deaths of two California secretaries—accomplices in a securities theft—whose bodies were found in a creek north of Miami.

Murphy was ultimately sentenced to two life terms plus 20 years (one term for the Whiskey Creek murder conviction, the balance for the Wofford robbery conviction) but won parole in 1986, emerging—he said—a changed man, dedicated to ministering to prison convicts. In 2012, he asked the state of Florida to grant clemency and restore his civil rights. Governor Rick Scott, who did not know about Murphy until the case came up, was apparently willing to grant clemency. But Murphy failed to garner the two additional cabinet votes required.

***

Today the Star of India, the DeLong Star Ruby and the Midnight Star are displayed in the Natural History Museum’s first-floor Morgan Hall of Minerals. (The former fourth-floor J.P. Morgan Hall of Gems and Minerals has long since been partitioned into staff offices—though its heavy metal gate and at least some of the original windows are still in place.) According to physical-sciences curator George E. Harlow, the three storied gems are the collection’s most popular pieces. But the current display offers no hint of past notoriety, and the room’s ambience is subdued. It’s as if the gems had escaped their tabloid days and settled into the long arc of geology.

The Gruesome History of Eating Corpses as Medicine

Smithsonian Magazine

The last line of a 17th-century poem by John Donne prompted Louise Noble’s quest. “Women,” the line read, are not only “Sweetness and wit,” but “mummy, possessed.”

Sweetness and wit, sure. But mummy? In her search for an explanation, Noble, a lecturer in English at the University of New England in Australia, made a surprising discovery: That word recurs throughout the literature of early modern Europe, from Donne’s “Love’s Alchemy” to Shakespeare’s “Othello” and Edmund Spenser’s “The Faerie Queene,” because mummies and other preserved and fresh human remains were a common ingredient in the medicine of that time. In short: Not long ago, Europeans were cannibals.

Noble’s new book, Medicinal Cannibalism in Early Modern English Literature and Culture, and another by Richard Sugg of England’s University of Durham, Mummies, Cannibals and Vampires: The History of Corpse Medicine from the Renaissance to the Victorians, reveal that for several hundred years, peaking in the 16th and 17th centuries, many Europeans, including royalty, priests and scientists, routinely ingested remedies containing human bones, blood and fat as medicine for everything from headaches to epilepsy. There were few vocal opponents of the practice, even though cannibalism in the newly explored Americas was reviled as a mark of savagery. Mummies were stolen from Egyptian tombs, and skulls were taken from Irish burial sites. Gravediggers robbed and sold body parts.

“The question was not, ‘Should you eat human flesh?’ but, ‘What sort of flesh should you eat?’ ” says Sugg. The answer, at first, was Egyptian mummy, which was crumbled into tinctures to stanch internal bleeding. But other parts of the body soon followed. Skull was one common ingredient, taken in powdered form to cure head ailments. Thomas Willis, a 17th-century pioneer of brain science, brewed a drink for apoplexy, or bleeding, that mingled powdered human skull and chocolate. And King Charles II of England sipped “The King’s Drops,” his personal tincture, containing human skull in alcohol. Even the toupee of moss that grew over a buried skull, called Usnea, became a prized additive, its powder believed to cure nosebleeds and possibly epilepsy. Human fat was used to treat the outside of the body. German doctors, for instance, prescribed bandages soaked in it for wounds, and rubbing fat into the skin was considered a remedy for gout.

Blood was procured as fresh as possible, while it was still thought to contain the vitality of the body. This requirement made it challenging to acquire. The 16th century German-Swiss physician Paracelsus believed blood was good for drinking, and one of his followers even suggested taking blood from a living body. While that doesn’t seem to have been common practice, the poor, who couldn’t always afford the processed compounds sold in apothecaries, could gain the benefits of cannibal medicine by standing by at executions, paying a small amount for a cup of the still-warm blood of the condemned. “The executioner was considered a big healer in Germanic countries,” says Sugg. “He was a social leper with almost magical powers.” For those who preferred their blood cooked, a 1679 recipe from a Franciscan apothecary describes how to make it into marmalade.

Rub fat on an ache, and it might ease your pain. Push powdered moss up your nose, and your nosebleed will stop. If you can afford the King’s Drops, the float of alcohol probably helps you forget you’re depressed—at least temporarily. In other words, these medicines may have been incidentally helpful—even though they worked by magical thinking, one more clumsy search for answers to the question of how to treat ailments at a time when even the circulation of blood was not yet understood.

However, consuming human remains fit with the leading medical theories of the day. “It emerged from homeopathic ideas,” says Noble. “It’s 'like cures like.' So you eat ground-up skull for pains in the head.” Or drink blood for diseases of the blood.

Human remains were also considered potent because they were thought to contain the spirit of the body from which they were taken. “Spirit” was considered a very real part of physiology, linking the body and the soul. In this context, blood was especially powerful. “They thought the blood carried the soul, and did so in the form of vaporous spirits,” says Sugg. The freshest blood was considered the most robust. Sometimes the blood of young men was preferred, sometimes that of virginal young women. By ingesting corpse materials, the thinking went, one gained the strength of the person consumed. Noble quotes Leonardo da Vinci on the matter: “We preserve our life with the death of others. In a dead thing insensate life remains which, when it is reunited with the stomachs of the living, regains sensitive and intellectual life.”

Egyptians embalming a corpse. (Bettmann / Corbis)

The idea also wasn’t new to the Renaissance, just newly popular. Romans drank the blood of slain gladiators to absorb the vitality of strong young men. Fifteenth-century philosopher Marsilio Ficino suggested drinking blood from the arm of a young person for similar reasons. Many healers in other cultures, including in ancient Mesopotamia and India, believed in the usefulness of human body parts, Noble writes.

Even at corpse medicine’s peak, two groups were demonized for related behaviors that were considered savage and cannibalistic. One was Catholics, whom Protestants condemned for their belief in transubstantiation, that is, that the bread and wine taken during Holy Communion were, through God’s power, changed into the body and blood of Christ. The other group was Native Americans; negative stereotypes about them were justified by the suggestion that these groups practiced cannibalism. “It looks like sheer hypocrisy,” says Beth A. Conklin, a cultural and medical anthropologist at Vanderbilt University who has studied and written about cannibalism in the Americas. People of the time knew that corpse medicine was made from human remains, but through some mental transubstantiation of their own, those consumers refused to see the cannibalistic implications of their own practices.

Conklin finds a distinct difference between European corpse medicine and the New World cannibalism she has studied. “The one thing that we know is that almost all non-Western cannibal practice is deeply social in the sense that the relationship between the eater and the one who is eaten matters,” says Conklin. “In the European process, this was largely erased and made irrelevant. Human beings were reduced to simple biological matter equivalent to any other kind of commodity medicine.”

The hypocrisy was not entirely missed. In his 16th-century essay “On the Cannibals,” for instance, Michel de Montaigne writes of cannibalism in Brazil as no worse than Europe’s medicinal version, and compares both favorably to the savage massacres of religious wars.

As science strode forward, however, cannibal remedies died out. The practice dwindled in the 18th century, around the time Europeans began regularly using forks for eating and soap for bathing. But Sugg found some late examples of corpse medicine: In 1847, an Englishman was advised to mix the skull of a young woman with treacle (molasses) and feed it to his daughter to cure her epilepsy. (He obtained the compound and administered it, as Sugg writes, but “allegedly without effect.”) A belief that a magical candle made from human fat, called a “thieves candle,” could stupefy and paralyze a person lasted into the 1880s. Mummy was sold as medicine in a German medical catalog at the beginning of the 20th century. And in 1908, the last known attempt was made in Germany to swallow blood at the scaffold.

This is not to say that we have moved on from using one human body to heal another. Blood transfusions, organ transplants and skin grafts are all examples of a modern form of medicine from the body. At their best, these practices are just as rich in poetic possibility as the mummies found in Donne and Shakespeare, as blood and body parts are given freely from one human to another. But Noble points to their darker incarnation, the global black market trade in body parts for transplants. Her book cites news reports on the theft of organs of prisoners executed in China, and, closer to home, of a body-snatching ring in New York City that stole and sold body parts from the dead to medical companies. It’s a disturbing echo of the past. Says Noble, “It’s that idea that once a body is dead you can do what you want with it.”

Maria Dolan is a writer based in Seattle. Her story about Vaux's swifts and their disappearing chimney habitat appeared on Smithsonian.com in November 2011.

Science and Tradition Are Resurrecting the Lost Art of Wave Piloting

Smithsonian Magazine

The Republic of the Marshall Islands lies more than 2,000 miles from the nearest continent, a smattering of coral atolls engulfed by the vastness of the central Pacific Ocean. The islands are tiny, together encompassing just 70 square miles, and they’re remote, spread over 750,000 square miles of ocean. They’re also gorgeous—white sand beaches, tropical foliage, and lagoons so turquoise they seem to glow. Traveling through in the 19th century, Robert Louis Stevenson called the area the “pearl of the Pacific.”

But the 50,000 or so Marshallese who call these islands home live in one of the most challenging environments on Earth. With so little land surrounded by so much water, most activities—from trading to gathering food—require dangerous trips across the sea. Because most of the islands rise just seven feet above the waves, they’re impossible to spot from a distance. If you were on a boat scanning the horizon, you wouldn’t see an island until you were nearly on top of it.

That’s why it’s so astounding that seafarers from Southeast Asia discovered and colonized these island chains some 2,000 years ago—and even more so that they stayed, eking out a life defined more by water than earth. Before European colonization, Marshallese navigators routinely sailed dugout canoes across vast stretches of open water, landing precisely on the only atoll for hundreds or even thousands of miles. They did so through a system that anthropologists call wave piloting. Instead of relying on the stars to find their way, wave pilots steer by the feeling of the ocean itself.

Over the last 150 years, wave piloting was nearly lost. But today, Western scientists and the last of the Marshall Islands’ expert navigators are attempting to explain the physics that underlie this ancient art for the first time. As they translate it into scientific terms, they’re helping preserve an integral part of Marshallese identity—even as rising sea levels threaten to push more Marshallese away from their homes and their seafaring heritage. 

A Marshall Islands stick navigation chart is less a literal representation of an area and more of a guide to how waves and currents interact with islands. (National Museum of Natural History)

When Alson Kelen was young, he used to lie at night against his father’s arm, on an island where there were no lights and no cars. The only sounds were waves slapping against wet sand, the breeze rustling through palm fronds, the delicate crackling of a coconut-shell fire. As the purple-blue evening gave way to night, Alson’s father would tell his son to close his eyes. And then he would tell stories about sailing, about flying on the wind, about surviving long and difficult journeys.

The island where Alson lived, Bikini, was a hub of traditional Marshallese navigation. In the old days, young men and women learning wave piloting would spend hours floating in the ocean blindfolded, memorizing the minute sensations of waves, currents and swells beneath them. Then they’d study stick charts—maps made of curved sticks that show the locations of islands and predominant swells—to place those waves in a larger mental geography. Later, if they became disoriented at sea, they could close their eyes and use the reflections and refractions of waves to determine the direction of land.

For generations, these skills were guarded like a family heirloom. But in the first half of the 20th century, under German, Japanese and eventually American occupation, they began to decline. Bikini, once a stronghold of sailing culture, became the center of nuclear testing by the United States. Between 1946 and 1958, the United States detonated 67 atomic bombs in the area. Communities like Alson’s were permanently displaced. The knowledge passed down for millennia “was fading away,” Alson says.

Across the world, equally sophisticated navigational systems have been pushed out by technology or lost through cultural oppression. But Alson had spent his whole life dreaming of canoes. In 1989, he launched a six-month program called Waan Aelõñ in Majel (Canoes of the Marshall Islands) that teaches life and job skills to local kids through building and sailing outrigger canoes. Roughly 400 teenagers and young adults have graduated from the program, and canoes, once on the brink of disappearing, are now part of life in dozens of outer islands.

Alson’s passion also caught the attention of John Huth. The Harvard experimental particle physicist works at the Large Hadron Collider and helped discover the Higgs boson, and he has long been fascinated by indigenous navigation. How could Marshallese stick charts, for instance—made without GPS or compasses or even sextants—show the location of far-flung islands with remarkably precise latitudinal accuracy?

In 2015, Huth was invited to the Marshall Islands to join a 120-mile outrigger canoe voyage with Alson, Dutch oceanographer Gerbrant van Vledder, University of Hawaii anthropologist Joe Genz and one of the Marshall Islands’ last navigators, an elder who calls himself Captain Korent Joel.

"My attempt,” Huth later explained at a lecture, “was to unravel what seems to be a rather mysterious and somewhat fragmented tradition. … In a sense what I’m trying to do is help some of the last of the Marshall Islands’ navigators try to piece together some of their traditions by employing what science can bring to the topic.”

Huth and the other Western scientists are trying to understand the oceanography, wave dynamics, climatology and physics of wave piloting. It’s not a straightforward task. Captain Korent’s understanding of wave patterns, finely tuned from generations of keen observation, doesn’t always mesh with Western scientific concepts. Korent describes four main ocean swells, for example, while most sailors in the region can only sense one or two. Even computerized buoys dropped in the ocean fail to detect the minute wave patterns Korent uses to navigate.

Alson Kelen started a program in the Marshall Islands to teach traditional wave piloting and canoe building to young Marshallese. (Krista Langlois)

But the biggest mystery is a technique that allows a navigator to sail between any two islands in the Marshalls by identifying a ridge of waves, called a dilep, that seems to connect neighboring islands. 

Korent’s explanation of dilep (or at least the translation of it) seemed to contradict basic wave dynamics. But as Huth lay awake in the hull of the chaser boat on the return leg of his journey last year, frantically scribbling wind speed and GPS coordinates into a yellow Rite-in-the-Rain notebook, he began to develop an idea that could explain dilep in scientific language for the first time. He’s reluctant to give too many details—it’s still unpublished—but he says that he thinks “it has more to do with the motion of the vessel and less to do with what’s happening with the swells.”

Huth hopes to return to the Marshalls to test this and other theories and eventually publish his hypotheses in a scientific journal. But his ultimate goal is to turn that academic paper into a layperson’s manual—a sort of "Introduction to Wave Piloting" that could be taught in Marshallese schools in the future. 

As it stands today, generations of Marshallese may never get the chance to practice wave piloting. As sea levels rise, life in the Marshall Islands is becoming increasingly precarious. Several times a year the rising ocean floods people’s homes, washes out roads and destroys staple crops. More than a third of the population—some 25,000 Marshallese—have already emigrated to the United States, and the number is likely to grow.

Most climate experts predict that global sea-level rise will render the Marshall Islands uninhabitable by the end of this century. The government of Bikini is already petitioning the U.S. Congress to allow the island’s former residents to use a nuclear testing trust fund to buy land in the U.S. for relocation.

By giving wave piloting new life, Huth, Alson and others are helping displaced Marshallese maintain a link to their place in the world no matter where they wind up. Even though the specifics of Marshallese wave piloting are unique to the waters around the Marshall Islands, any form of cultural revival—from wave piloting to weaving—is also a form of climate adaptation, a way of surviving.

If the skills their ancestors clung to for so long are validated by some of the world’s greatest scientists, perhaps climate change won’t mean cultural genocide. Perhaps the Marshallese are voyagers, not victims, with the skills to push off into the unknown and thrive.

A pair of racers wait for the canoe race to begin in Majuro in the Marshall Islands. (Krista Langlois)

With Semios, Farmers Can Monitor Their Fields Remotely and Keep Pests Away

Smithsonian Magazine

There’s definitely something in the air at John Freese’s cherry, apple and pear orchards in Central Washington. And that something is a cloud of pheromones—one that means fewer pesky moths snacking their way through his fields. For the past year, the fourth-generation fruit farmer has been testing Semios, a pheromone-delivery and precision farming system that uses wireless sensors and cameras to help growers monitor their crops remotely. The network of tiny cameras and data-collecting monitors keeps tabs on the fields and weather conditions. Strategically placed canisters routinely spray the trees to prevent infestations.

Last year, Freese hung several thousand Semios cartridges on trees across 100 acres of orchard. During peak insect mating seasons throughout the year, for up to 12 hours a day, the dispensers fire off pheromone mistings every 15 minutes. These biopesticides don’t kill insects, but they do disrupt breeding. Farmers using Semios schedule sprayings by using the company’s full suite of remote in-field monitors and weather stations that help measure wind and moisture levels, as well as pest traps, which track insects’ lifecycles.
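For a rough sense of scale (my arithmetic, not a figure quoted in the article), that schedule tops out at about 48 mistings per dispenser on a peak day:

    # Quick arithmetic on the misting schedule described above.
    hours_per_day = 12       # dispensers run for up to 12 hours a day
    interval_minutes = 15    # one misting every 15 minutes
    mistings_per_day = hours_per_day * 60 // interval_minutes
    print(mistings_per_day)  # -> 48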

“Mating disruption is not exciting,” Freese concedes. “If it works, you don’t see anything.” And that is what makes it a profoundly useful tool for fruit and nut growers trying to protect their farms.

Confused by the presence of pheromones in the air, male insects are unable to locate a mating partner and give up. For Freese, that’s meant contending with far fewer codling and Oriental fruit moths, the top two global pests for apples and pears that also attack other types of tree fruit, including apricots, peaches and quinces. Aside from some automatic misters in the trees, which look a bit like industrial-grade outdoor air fresheners, there’s no way to tell an orchard is managing pests by messing with their reproductive lives.

During peak insect mating seasons throughout the year, for up to 12 hours a day, Semios dispensers fire off pheromone mistings every 15 minutes. (Semios)

The biopesticides don’t kill insects, but they do disrupt breeding. (Semios)

Semios handles all the installation costs and maintenance on the hanging boxes, and also offers on-call support 24 hours a day. (Semios)

Michael Gilbert, the founder and CEO of Vancouver-based Semios, is a chemist with two decades of experience concocting, manufacturing and managing the distribution of pharmaceutical products based on naturally occurring substances. After stints at Merck and Cardiome Pharma Corp., where he managed the research and development of cardiovascular and other medications, he was bitten by the entrepreneurial bug and started looking into the class of chemicals known as pheromones.

Developed in earnest after the Environmental Protection Agency banned the use of DDT in 1972, biopesticides include everything from microbial pesticides that contain bacteria that kill specific insects to biochemical products, such as pheromone pesticides. In 1994, the EPA established the Biopesticides and Pollution Prevention Division within the Office of Pesticide Programs in order to facilitate the registration of biopesticides. As of 2014, the EPA reported more than 430 registered biopesticide ingredients and 1,320 products. As biopesticides are generally considered less toxic than chemical pesticides, the EPA review and approval process takes considerably less time. Biopesticides are typically approved in less than a year, versus the roughly three years it takes for a chemical product.

In recent years, organic and traditional farmers have increasingly used pheromone pesticides for a number of reasons. Targeted pests don’t become resistant to pheromones in the same way they can adapt to insecticides. Biopesticides don’t kill or harm pollinators like bees, other beneficial insects, or wildlife. Furthermore, farm workers are spared the excess labor and chemical exposure needed to spray entire fields with traditional pesticides. With pheromones, there’s no re-entry period, or time—sometimes more than a month—when farmers have to stay out of an area recently sprayed with toxic chemicals.

Historically, pheromones—and specifically pheromone pesticides—have been prohibitively expensive. Before partnering with Semios, Freese had already implemented pheromone pesticides in his cherry orchards in order to disrupt codling moth mating. But it cost him up to $11,000 just for pheromone bands, a flexible adhesive fiber ring farmers can wrap around a tree trunk to emit pheromones and block climbing insects like caterpillars. When he factored in the entire week it takes his whole crew to install bands on over 100 acres, the actual cost was much higher.

“My thought was, let’s find a better cheaper way of making these [pheromone-based] products,” Gilbert explains. He focused on airborne, winged insects for a reason: it’s much harder to locate and target pests that crawl or burrow.

Over two years of intense research and development, Semios developed patented wireless monitors whose radio signals can connect through broad tree leaves. A leafy orchard can be a logistics nightmare when it comes to wireless technology. Gilbert says that, contrary to what some might think, it’s easier for radio signals to travel through dense concrete city buildings than wet leaves blowing in the wind.

Freese reasoned that paying $150 per acre for the complete Semios system—or $15,000 for 100 acres, by his back-of-the-envelope math—would produce at least a comparable result to his pheromone tree bands for the same amount of money. As an added bonus, Semios handles all the installation costs and maintenance on the hanging boxes, and also offers on-call support 24 hours a day.
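His math is easy to reproduce. Here is a minimal sketch, using the figures quoted in this article plus an assumed labor cost for installing the bands (the crew size and wage below are illustrative placeholders, not reported numbers):

    # Back-of-the-envelope cost comparison. The acreage, the $150/acre
    # Semios price and the $11,000 band-materials figure come from this
    # article; the crew-labor estimate is a hypothetical placeholder.
    acres = 100

    semios_cost = 150 * acres    # $15,000, installation and support included

    band_materials = 11_000      # up to $11,000 for pheromone bands
    crew_labor = 6 * 5 * 8 * 15  # 6 workers x 5 days x 8 hrs x $15/hr (assumed)
    band_cost = band_materials + crew_labor

    print(f"Semios: ${semios_cost:,} vs. bands: ${band_cost:,}")
    # -> Semios: $15,000 vs. bands: $14,600

Either way the totals land in the same range, which is why Freese judged the Semios system, with its bundled installation and monitoring, the better value.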

Remote monitoring has saved the fruit farmer both time and energy by enabling acre-by-acre microanalysis. Instead of hiring consultants to walk the fields, for example, a farmer can check the Semios dashboard on his phone and receive alerts when the temperature drops in remote orchards. These early warnings could significantly improve crop yields.

When we spoke, Freese was just up from an early morning nap after a night out in the fields fighting frost. Being able to track temperatures in near-real-time in remote orchards miles away has been especially critical early in the growing seasons, when he can spot a cold patch in an orchard or note when a frost fan doesn’t kick on and send warm air to ground level before the temperature drops too low.
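Semios hasn’t published how its dashboard decides when to alert, so the logic Freese describes can only be sketched generically. A minimal sketch, assuming a simple per-block temperature threshold (every name, number and reading below is hypothetical, not Semios’s actual API):

    # A hypothetical frost-alert check; Semios's real thresholds and
    # interfaces are not public, so everything here is an assumption.
    FROST_THRESHOLD_F = 34.0  # warn a little above freezing (assumed)

    def frost_alerts(readings):
        """Return the orchard blocks reporting temps at or below threshold."""
        return [block for block, temp_f in readings.items()
                if temp_f <= FROST_THRESHOLD_F]

    # Made-up overnight readings from three orchard blocks.
    readings = {"block-A": 38.2, "block-B": 33.5, "block-C": 31.9}
    for block in frost_alerts(readings):
        print(f"ALERT: possible frost in {block} ({readings[block]:.1f} F); "
              "check that the frost fan kicked on.")

The real system presumably also folds in the wind and moisture data from its in-field weather stations, which a bare threshold check like this ignores.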

“Anybody else that fights frost in the spring,” Freese says, “I can’t believe they wouldn’t be excited about this technology.”

To Limit Pollution, The Chinese Are Faced With Giving Up an Ancient Tradition

Smithsonian Magazine

Keeping the air clean is one of China’s biggest challenges today. Large cities in the north, such as Beijing, fail air quality standards for health on more than half the days each year. In cities even farther north, such as Harbin, visibility last October was reduced to 50 feet due to severe air pollution, shutting down schools and airports. People walking the streets throughout China routinely wear face masks to protect their lungs. And what you see are not the simple cloth masks that doctors use for surgery; many Chinese wear elaborate breathing apparatuses, made of plastic with a variety of serious filtering systems.  

The facts about air pollution are not disputed. As the New York Times recently reported, “China is the world’s largest consumer of coal, using about 45 percent of the global total. It is also the largest emitter of carbon dioxide.” In 2013, the country burned 3.61 billion tons of coal, which accounted for 49.3 percent of the world’s total consumption, according to China Daily. A quarter century earlier, however, China was burning just 610 million tons of coal. Scientists and policy experts are debating the country's plans for more renewable energy sources and the removal of older, more inefficient vehicles from the roads. But as a folklorist, I was struck by something curious in the debates: a couple of longstanding popular traditions had been caught up in a clash with government efforts to improve air quality.

Last January in Beijing, shortly before the start of the Spring Festival or Chinese New Year, large posters on the streets, especially in areas near the historic hutongs (or narrow alleyways), flatly announced that fireworks were prohibited. Reactions were mixed. The Chinese invented gunpowder and fireworks more than 2,000 years ago, and many people insist on setting off their own fireworks just as they always have, especially on the 15th day of the Spring Festival—known as the Lantern Festival—to bid farewell with a bang to the old year, and to ensure good luck for the coming year.

Pollution has become a serious problem for Beijing in recent years due to more and more cars on the road, building construction, and other pollution sources. (© Liu Liqun/Corbis)

Fireworks celebrating the 2013 Lantern Festival go off in a residential area in Beijing. (© HOW HWEE YOUNG/epa/Corbis)

A view of the Forbidden City on a heavily polluted day. (© Sean Gallagher/National Geographic Society/Corbis)

Signs all over Beijing prohibit the use of fireworks. (James Deutsch)

The above photos were taken just 14 hours apart from the same vantage point in Beijing, illustrating the urgency of China's pollution problem. (James Deutsch)

Joss paper or “spirit money” is burnt as an offering to the dead, especially during the seventh month of the lunar year. (© Michael S. Yamashita/Corbis)

Such pyrotechnic displays in China have been occurring for millennia, but one of the earliest Western witnesses of these traditions was a British missionary to China, the Reverend George Smith. He observed a Lantern Festival on February 10, 1845, on the streets of Xiamen, Fujian Province, in southeastern China:

“A long pole was erected, fifty feet in height, hung round with cases of rockets and other combustibles. On its being lighted at the bottom, there was a rapid succession of squibs, roman-candles, guns, and rockets, which illuminated the sky to a great distance with their igneous masses. . . . A volley of lesser combustibles suddenly terminated in a beautiful cluster of grapes, which lasted for some time, and shed a deep blue light on the houses and walls for some distance around. A shower of golden rain was shortly after followed by an umbrella of fire, which suddenly flew open, amid the loud cheers of the spectators.”

Nearly 170 years later, some spectators are giving the same loud cheers, while others are bemoaning the costs to their health of such pyrotechnics. For instance, an article in the scientific journal Atmospheric Environment noted that fireworks contain an assortment of polluting chemicals, including potassium nitrates, potassium chlorate, potassium perchlorate, charcoal, sulfur, manganese, sodium oxalate, aluminum and iron dust powder, strontium nitrate, and barium nitrate. Based on samples collected during the Lantern Festival in Beijing in February 2006, the authors concluded that particulate matter in the air (including fine particles with diameters of 2.5 micrometers or less and respirable suspended particles with diameters of 10 micrometers or less—both of which are known to cause lung cancer) “went up over 6 and 4 times in the lantern day compared to the normal days.”

Similarly, the tradition of burning joss paper or “spirit money” to honor ancestors—especially during the seventh month or “Ghost Month” of the lunar year—is another Chinese tradition that creates dense smoke and thus reduces air quality. An article from 2011 in the scientific journal Aerosol and Air Quality Research noted that joss paper is “chiefly composed from bamboo and/or recycled waste paper,” which when burned creates significant amounts of particulate matter, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated dibenzo-p-dioxin/dibenzofurans. The authors concluded that “PAHs concentrations in ambient air during festivals were observed to be several times higher than those during other times.” As a result, government officials in Hong Kong and elsewhere are asking temples and mausoleums to install special incinerators for burning the joss paper.

There may not be much that individual Chinese can do about their country’s reliance on burning coal for energy. But an increasing number of people understand that not setting off fireworks and not burning joss paper will have positive consequences. As 28-year-old Hua Jingwen, who lives in Beijing, told me, “I think we can still have fireworks here, but maybe not so many. It is not really necessary for every household to shoot off fireworks.”

Even if only one in four persons were to take this sort of positive action, the numbers would add up. After all, there are roughly 1.35 billion people living in China today; one in four means some 337 million people, more than the total population of the United States.

Making the Best of Invasive Species

Smithsonian Magazine

The lowly garlic mustard had never seen so much love.

This prolific invasive plant—cursed by home gardeners and park and wildlife managers alike—is routinely wrenched from the ground or spritzed with herbicide in an attempt to keep it from taking over. But on April 14 at Cleveland’s Shaker Lakes Nature Center, garlic mustard was the guest—or rather, pest—of honor.

“Pestival 2011” featured seven of Cleveland’s most notable chefs making garlic mustard a gourmet treat. They rose to the occasion deliciously: garlic mustard sauce over thin slices of roast beef, garlic mustard pesto on pork tenderloin crostini, garlic mustard chutney on wonton-skin ravioli stuffed with tofu and paneer cheese, garlic mustard dip for thick-cut potato chips, and garlic mustard relish on chèvre cheesecake. The 125 attendees clustered around the chefs’ silvery platters, then carried artfully arranged portions of the garlic-mustard creations back to white-linen-draped tables.

Would all this culinary artfulness persuade people to cook up some garlic mustard on their own, or at least recognize it when they see it along a path in a public park and yank it out?

“We hope so!” says Terri Johnson, the nature center’s special events manager. “We look forward to the day when garlic mustard is eradicated. Then we’ll hold Pestival as a victory celebration.”

Garlic mustard is just one of 50,000 alien plant and animal species that have arrived in the United States. These invaders flourish in the absence of their native competitors and predators. European settlers brought garlic mustard here for their kitchen gardens. An attractive plant with heart-shaped leaves and tiny white flowers, it outcompetes native plants for light, moisture, nutrients, soil and space. It propagates at a fierce speed, producing thousands of seeds that spread by sticking to animals’ fur.

“If you don’t control it, woods filled with native species can be completely taken over by garlic mustard in five years,” says Sarah Cech, the nature center’s naturalist.

When the nature center first conceived Pestival six years ago—the first one was a simpler event in which the staff prepared a garlic-mustard pesto served with spaghetti for 80 guests—they didn’t realize they were part of a national trend. The United States spends around $120 billion each year to control invasive species, according to Cornell University ecologist David Pimentel. But in the past decade or so, a growing number of people have decided to view the crisis of surging alien populations as an opportunity to expand the American palate. If these species are out of control because they have no natural predators, then why not convince the fiercest predator of all—human beings—to eat them? The motto of these so-called invasivores is, “If you can’t beat ’em, eat ’em.”

Take the Asian carp (please!). Imported from China in 1973 to clean algae from Southern ponds, the carp soon broke from their confines and infested Mississippi River waterways. Gobbling up the phytoplankton that support native species, the carp can grow four feet long and weigh 100 pounds. They continue to swim north and could establish themselves in the Great Lakes, the world’s largest freshwater system, and decimate native fish populations there.

Wildlife managers have tried to prevent Asian carp and other invasive species from reaching the Great Lakes by installing electric underwater fences and, occasionally, poisoning the water. But chefs from New Orleans to Chicago have also tried to put a dent in the population by putting the fish on their menu. Now, a researcher at the Aquaculture Research Center at Kentucky State University is trying to figure out how to harvest and promote carp as a food source. Currently, a few processing plants are converting Asian carp into ingredients for fertilizer or pet food. “That’s a shame, because the meat quality is excellent,” says Siddhartha Disgupta, an associate professor at the center.

Garlic mustard is just one of 50,000 alien plant and animal species that have arrived in the United States. These invaders flourish in the absence of their native competitors and predators. (Winfred Wisniewski; Frank Lane Picture Agency / Corbis)

Asian carp, imported from China in 1973 to clean algae from Southern ponds, broke from their confines and infested the Mississippi River waterways. (Jim Weber / ZUMA Press / Corbis)

“Pestival 2011” featured seven of Cleveland’s most notable chefs making garlic mustard a gourmet treat. Shown here is Chef Scott Kim of SASA and his assistant. They prepared wonton-skin ravioli filled with garam masala-seasoned tofu and paneer cheese, served with garlic mustard chutney and cucumber salsa. (Kristin Ohlson)

Jonathon Sawyer is the owner of the Greenhouse Tavern and was named Best New Chef of 2010 by Food and Wine magazine. He plans to include garlic mustard as a regular part of his menu. (Kristin Ohlson)

Chef Britt-Marie Culey of Coquette Patisserie made chèvre cheesecake with garlic mustard relish. (Kristin Ohlson)

Disgupta argues that the carp has all the health benefits associated with eating fish and, since it eats low on the food chain, has few contaminants such as mercury that tend to be concentrated in the flesh of other fish species. He says he’s eaten Asian carp in various preparations and found it delicious. But even though this species of carp is prized as a tasty fish in China, Americans usually grimace at the idea of eating it.

“There’s a negative prejudice to the name,” Disgupta says. “People think they’re bottom feeders. They get them mixed up with suckers, which look similar but are from a different biological family.”

In Florida, George Cera has trained his fork on a different invasive creature: the spiny-tailed black iguana, which was imported as an exotic pet, then escaped and proliferated. Cera was hired by the town of Boca Grande on Gasparilla Island to hunt and kill the iguanas, which feast on endangered plants as well as the eggs of protected sea turtles, gopher tortoises and burrowing owls. “They grab and eat them like we’d eat a cherry tomato,” Cera says.

In two years, Cera bagged 12,000 iguanas, his conscience soothed as he found parts of protected species inside them. But it bothered him to kill an animal without eating it. Then, he met some Central and South American tourists who told him that iguanas are considered a delicacy back home, where they’re a native species. They gave Cera recipes. He tracked down more on his own and produced an iguana cookbook.

“I thought it would be a fun way to educate the public,” Cera says. “Now, people come and ask me where they can get some of this meat.”

Perhaps no one tackles the issue of eating invasives with as much gusto as Jackson Landers, author of The Locavore Hunter blog. Over the past year, he’s traveled the country hunting invasives and gathering material for his new book, Eating Aliens. Landers has hunted and eaten feral pigs in Georgia, green iguanas in the Florida Keys, pigeons in New York City, Canada geese in Virginia and European green crabs in Massachusetts, among others.

“As a systematic approach to invasives, eating them should be a major component,” Landers says. “After all, human beings have eaten other species to extinction.”

Not everyone agrees with this approach, however. Sarah Simons, executive director of the Global Invasive Species Programme, echoes the thoughts of some wildlife managers, saying, “There is currently no evidence whatsoever to demonstrate a reduction in population size, or effective management, of invasive species by consuming them. More often, it is quite the reverse which occurs—promoting the consumption of an invasive species can actually create a market, which in turn increases the spread or introduction of invasive species.”

The organizers of Cleveland’s Pestival are well aware of the fine and dangerous line between educating people about garlic mustard—including its edibility—and inadvertently inspiring them to cultivate it in their backyards. But there seemed to be little cause for worry at the event. Most of the preparations offered an array of flavors, and it was hard for the diners to isolate the particular taste of garlic mustard. Some of the chefs only shrugged when asked if they planned to make the wayward green a regular part of their menu.

The exception was Jonathon Sawyer, owner of the Greenhouse Tavern and named a Best New Chef of 2010 by Food and Wine magazine. Sawyer loves to forage the ring of parks around Cleveland and has been carrying garlic mustard back to use in his restaurant and home for five years. In the springtime, he likes to eat the leaves raw, comparing their taste and bite to arugula. As the plants get older, he blanches and eats them like mustard greens.

“Dude, it’s the ultimate food!” Sawyer exclaimed as he passed out his artichoke and spinach dip with crème fraîche, garlic mustard and thick-cut potato chips. “It’s free, and nature wants us to get rid of it.”

Science Still Bears the Fingerprints of Colonialism

Smithsonian Magazine

Sir Ronald Ross had just returned from an expedition to Sierra Leone. The British doctor had been leading efforts to tackle the malaria that so often killed English colonists in the country, and in December 1899 he gave a lecture to the Liverpool Chamber of Commerce about his experience. In the words of a contemporary report, he argued that “in the coming century, the success of imperialism will depend largely upon success with the microscope.”

Ross, who won the Nobel Prize for Medicine for his malaria research, would later deny he was talking specifically about his own work. But his point neatly summarized how the efforts of British scientists were intertwined with their country’s attempt to conquer a quarter of the world.

Ross was very much a child of empire, born in India and later working there as a surgeon in the imperial army. So when he used a microscope to identify how a dreaded tropical disease was transmitted, he would have realized that his discovery promised to safeguard the health of British troops and officials in the tropics. In turn, this would enable Britain to expand and consolidate its colonial rule.

Ross’s words also suggest how science was used to argue imperialism was morally justified because it reflected British goodwill towards colonized people. It implied that scientific insights could be redeployed to promote superior health, hygiene and sanitation among colonial subjects. Empire was seen as a benevolent, selfless project. As Ross’s fellow Nobel laureate Rudyard Kipling described it, it was the “white man’s burden” to introduce modernity and civilized governance in the colonies.

But science at this time was more than just a practical or ideological tool when it came to empire. Since its birth around the same time as Europeans began conquering other parts of the world, modern Western science was inextricably entangled with colonialism, especially British imperialism. And the legacy of that colonialism still pervades science today.

As a result, recent years have seen an increasing number of calls to “decolonize science”, even going so far as to advocate scrapping the practice and findings of modern science altogether. Tackling the lingering influence of colonialism in science is much needed. But there are also dangers that the more extreme attempts to do so could play into the hands of religious fundamentalists and ultra-nationalists. We must find a way to remove the inequalities promoted by modern science while making sure its huge potential benefits work for everyone, instead of letting it become a tool for oppression.

Ronald Ross at his lab in Calcutta, 1898. (Wellcome Collection, CC BY)

The gracious gift of science

When an enslaved laborer in an early 18th-century Jamaican plantation was found with a supposedly poisonous plant, his European overlords showed him no mercy. Suspected of conspiring to cause disorder on the plantation, he was treated with typical harshness and hanged to death. The historical records don’t even mention his name. His execution might also have been forgotten forever if it weren’t for the scientific inquiry that followed. Europeans on the plantation became curious about the plant and, building on the enslaved worker's “accidental finding,” they eventually concluded it wasn’t poisonous at all.

Instead it became known as a cure for worms, warts, ringworm, freckles and cold swellings, with the name Apocynum erectum. As the historian Pratik Chakrabarti argues in a recent book, this incident serves as a neat example of how, under European political and commercial domination, gathering knowledge about nature could take place simultaneously with exploitation.

For imperialists and their modern apologists, science and medicine were among the gracious gifts from the European empires to the colonial world. What’s more, the 19th-century imperial ideologues saw the scientific successes of the West as a way to allege that non-Europeans were intellectually inferior and so deserved and needed to be colonized.

In the incredibly influential 1835 memo “Minute on Indian Education,” British politician Thomas Macaulay denounced Indian languages partially because they lacked scientific words. He suggested that languages such as Sanskrit and Arabic were “barren of useful knowledge,” “fruitful of monstrous superstitions” and contained “false history, false astronomy, false medicine.”

Such opinions weren’t confined to colonial officials and imperial ideologues and were often shared by various representatives of the scientific profession. The prominent Victorian scientist Sir Francis Galton argued that “the average intellectual standard of the negro race is some two grades below our own (the Anglo Saxon).” Even Charles Darwin implied that “savage races” such as “the negro or the Australian” were closer to gorillas than were white Caucasians.

Yet 19th-century British science was itself built upon a global repertoire of wisdom, information and living and material specimens collected from various corners of the colonial world. Extracting raw materials from colonial mines and plantations went hand in hand with extracting scientific information and specimens from colonized people.

Sir Hans Sloane’s imperial collection started the British Museum. (Paul Hudson/Wikipedia, CC BY)

Imperial collections

Leading public scientific institutions in imperial Britain, such as the Royal Botanic Gardens at Kew and the British Museum, as well as ethnographic displays of “exotic” humans, relied on a global network of colonial collectors and go-betweens. By 1857, the East India Company’s London zoological museum boasted insect specimens from across the colonial world, including from Ceylon, India, Java and Nepal.

The British and Natural History museums were founded using the personal collection of doctor and naturalist Sir Hans Sloane. To gather these thousands of specimens, Sloane had worked intimately with the East India, South Sea and Royal African companies, which did a great deal to help establish the British Empire.

The scientists who used this evidence were rarely sedentary geniuses working in laboratories insulated from imperial politics and economics. The likes of Charles Darwin on the Beagle and botanist Sir Joseph Banks on the Endeavour literally rode on the voyages of British exploration and conquest that enabled imperialism.

Other scientific careers were directly driven by imperial achievements and needs. Early anthropological work in British India, such as Sir Herbert Hope Risley’s Tribes and Castes of Bengal, published in 1891, drew upon massive administrative classifications of the colonized population.

Map-making operations including the work of the Great Trigonometrical Survey in South Asia came from the need to cross colonial landscapes for trade and military campaigns. The geological surveys commissioned around the world by Sir Roderick Murchison were linked with intelligence gathering on minerals and local politics.

Efforts to curb epidemic diseases such as plague, smallpox and cholera led to attempts to discipline the routines, diets and movements of colonial subjects. This opened up a political process that the historian David Arnold has termed the “colonization of the body”. By controlling people as well as countries, the authorities turned medicine into a weapon with which to secure imperial rule.

New technologies were also put to use expanding and consolidating the empire. Photographs were used for creating physical and racial stereotypes of different groups of colonized people. Steamboats were crucial in the colonial exploration of Africa in the mid-19th century. Aircraft enabled the British to surveil and then bomb rebellions in 20th-century Iraq. The innovation of wireless radio in the 1890s was shaped by Britain’s need for discreet, long-distance communication during the South African war.

In these ways and more, Europe’s leaps in science and technology during this period both drove and were driven by its political and economic domination of the rest of the world. Modern science was effectively built on a system that exploited millions of people. At the same time it helped justify and sustain that exploitation, in ways that hugely influenced how Europeans saw other races and countries. What’s more, colonial legacies continue to shape trends in science today.

Polio eradication needs willing volunteers. (Department for International Development, CC BY)

Modern colonial science

Since the formal end of colonialism, we have become better at recognizing how scientific expertise has come from many different countries and ethnicities. Yet former imperial nations still appear almost self-evidently superior to most of the once-colonized countries when it comes to scientific study. The empires may have virtually disappeared, but the cultural biases and disadvantages they imposed have not.

You just have to look at the statistics on the way research is carried out globally to see how the scientific hierarchy created by colonialism continues. The annual rankings of universities are published mostly by the Western world and tend to favor its own institutions. Academic journals across the different branches of science are mostly dominated by the U.S. and western Europe.

It is unlikely that anyone who wishes to be taken seriously today would explain this data in terms of innate intellectual superiority determined by race. The blatant scientific racism of the 19th century has now given way to the notion that excellence in science and technology is a euphemism for significant funding, infrastructure and economic development.

Because of this, most of Asia, Africa and the Caribbean are seen either as playing catch-up with the developed world or as dependent on its scientific expertise and financial aid. Some academics have identified these trends as evidence of the persisting “intellectual domination of the West” and labeled them a form of “neo-colonialism.”

Various well-meaning efforts to bridge this gap have struggled to go beyond the legacies of colonialism. For example, scientific collaboration between countries can be a fruitful way of sharing skills and knowledge, and learning from the intellectual insights of one another. But when an economically weaker part of the world collaborates almost exclusively with very strong scientific partners, it can take the form of dependence, if not subordination.

A 2009 study showed that about 80 percent of Central Africa’s research papers were produced with collaborators based outside the region. With the exception of Rwanda, each of the African countries principally collaborated with its former colonizer. As a result, these dominant collaborators shaped scientific work in the region. They prioritized research on immediate local health-related issues, particularly infectious and tropical diseases, rather than encouraging local scientists to also pursue the fuller range of topics pursued in the West.

In the case of Cameroon, local scientists’ most common role was in collecting data and fieldwork while foreign collaborators shouldered a significant amount of the analytical science. This echoed a 2003 study of international collaborations in at least 48 developing countries that suggested local scientists too often carried out “fieldwork in their own country for the foreign researchers.”

In the same study, 60 percent to 70 percent of the scientists based in developed countries did not acknowledge their collaborators in poorer countries as co-authors in their papers. This is despite the fact they later claimed in the survey that the papers were the result of close collaborations.

A March for Science protester in Melbourne. (Wikimedia Commons)

Mistrust and resistance

International health charities, which are dominated by Western countries, have faced similar issues. After the formal end of colonial rule, global health workers long appeared to represent a superior scientific culture in an alien environment. Unsurprisingly, interactions between these skilled and dedicated foreign personnel and the local population have often been characterized by mistrust.

For example, during the smallpox eradication campaigns of the 1970s and the polio campaign of the past two decades, the World Health Organization’s representatives found it quite challenging to mobilize willing participants and volunteers in the interiors of South Asia. On occasion they even saw resistance on religious grounds from local people. But their stringent responses, which included the close surveillance of villages, cash incentives for identifying concealed cases and house-to-house searches, added to this climate of mutual suspicion. These experiences of mistrust are reminiscent of those created by strict colonial policies of plague control.

Western pharmaceutical firms also play a role by carrying out questionable clinical trials in the developing world where, as journalist Sonia Shah puts it, “ethical oversight is minimal and desperate patients abound.” This raises moral questions about whether multinational corporations misuse the economic weaknesses of once-colonized countries in the interests of scientific and medical research.

The colonial image of science as a domain of the white man even continues to shape contemporary scientific practice in developed countries. People from ethnic minorities are underrepresented in science and engineering jobs and more likely to face discrimination and other barriers to career progress.

To finally leave behind the baggage of colonialism, scientific collaborations need to become more symmetrical and founded on greater degrees of mutual respect. We need to decolonize science by recognizing the true achievements and potential of scientists from outside the Western world. Yet while this structural change is necessary, the path to decolonization has dangers of its own.

Science must fall?

In October 2016, a YouTube video of students discussing the decolonization of science went surprisingly viral. The clip, which has been watched more than 1 million times, shows a student from the University of Cape Town arguing that science as a whole should be scrapped and started again in a way that accommodates non-Western perspectives and experiences. The student’s point that science cannot explain so-called black magic earned the argument much derision and mockery. But you only have to look at the racist and ignorant comments left beneath the video to see why the topic is so in need of discussion.

Inspired by the recent “Rhodes Must Fall” campaign against the university legacy of the imperialist Cecil Rhodes, the Cape Town students became associated with the phrase “science must fall.” While it may be interestingly provocative, this slogan isn’t helpful at a time when government policies in a range of countries including the U.S., UK and India are already threatening to impose major limits on science research funding.

More alarmingly, the phrase also runs the risk of being used by religious fundamentalists and cynical politicians in their arguments against established scientific theories such as climate change. This is a time when the integrity of experts is under fire and science is the target of political maneuvering. So polemically rejecting the subject altogether only plays into the hands of those who have no interest in decolonization.

Alongside its imperial history, science has also inspired many people in the former colonial world to demonstrate remarkable courage, critical thinking and dissent in the face of established beliefs and conservative traditions. These include the iconic Indian anti-caste activist Rohith Vemula and the murdered atheist authors Narendra Dabholkar and Avijit Roy. Demanding that “science must fall” fails to do justice to this legacy.

The call to decolonize science, as in the case of other disciplines such as literature, can encourage us to rethink the dominant image that scientific knowledge is the work of white men. But this much-needed critique of the scientific canon carries the other danger of inspiring alternative national narratives in post-colonial countries.

For example, some Indian nationalists, including the country’s current prime minister, Narendra Modi, have emphasized the scientific glories of an ancient Hindu civilization. They argue that plastic surgery, genetic science, airplanes and stem cell technology were in vogue in India thousands of years ago. These claims are not just a problem because they are factually inaccurate. Misusing science to stoke a sense of nationalist pride can easily feed into jingoism.

Meanwhile, various forms of modern science and their potential benefits have been rejected as unpatriotic. In 2016, a senior Indian government official even went so far as to claim that “doctors prescribing non-Ayurvedic medicines are anti-national.”

The path to decolonization

Attempts to decolonize science need to contest jingoistic claims of cultural superiority, whether they come from European imperial ideologues or the current representatives of post-colonial governments. This is where new trends in the history of science can be helpful.

For example, instead of the parochial understanding of science as the work of lone geniuses, we could insist on a more cosmopolitan model. This would recognize how different networks of people have often worked together in scientific projects and the cultural exchanges that helped them—even if those exchanges were unequal and exploitative.

But if scientists and historians are serious about “decolonizing science” in this way, they need to do much more to present the culturally diverse and global origins of science to a wider, non-specialist audience. For example, we need to make sure this decolonized story of the development of science makes its way into schools.

Students should also be taught how empires affected the development of science and how scientific knowledge was reinforced, used and sometimes resisted by colonized people. We should encourage budding scientists to question whether science has done enough to dispel modern prejudices based on concepts of race, gender, class and nationality.

Decolonizing science will also involve encouraging Western institutions that hold imperial scientific collections to reflect more on the violent political contexts of war and colonization in which these items were acquired. An obvious step forward would be to discuss repatriating scientific specimens to former colonies, as botanists working on plants originally from Angola but held primarily in Europe have done. If repatriation isn’t possible, then co-ownership or priority access for academics from post-colonial countries should at least be considered.

This is also an opportunity for the broader scientific community to critically reflect on its own profession. Doing so will inspire scientists to think more about the political contexts that have kept their work going and about how changing them could benefit the scientific profession around the world. It should spark conversations between the sciences and other disciplines about their shared colonial past and how to address the issues it creates.

Unravelling the legacies of colonial science will take time. But the field needs strengthening at a time when some of the most influential countries in the world have adopted a lukewarm attitude towards scientific values and findings. Decolonization promises to make science more appealing by integrating its findings more firmly with questions of justice, ethics and democracy. Perhaps, in the coming century, success with the microscope will depend on success in tackling the lingering effects of imperialism.

Inuit Wisdom and Polar Science Are Teaming Up to Save the Walrus

Smithsonian Magazine

This article is from Hakai Magazine, a new online publication about science and society in coastal ecosystems. Read more stories like this at hakaimagazine.com.

The air is calm this Arctic morning as Zacharias Kunuk prepares for a long day. His morning routine does nothing to quell his nerves—today he’s going on his first walrus hunt.

It’s 1980, late July—the month walrus hunters climb into motorized freighter canoes and leave Igloolik, a small Inuit community in Nunavut, Canada. Every summer since he was a boy, Kunuk has watched the hunters return, weary but triumphant with walrus meat. He’s always wondered how far these men travel to reach the floating rafts of ice where walruses rest during the summer. And he’s pondered how just a few men can possibly kill a creature that might weigh more than 20 men and then wrestle it into a canoe. This is the day Kunuk will get answers. He also plans to capture it all on camera. A young filmmaker in his mid-20s, Kunuk has a small budget to finance the hunt, a cultural practice so vital to his community’s identity that he wants to record it for future generations.

The temperature on an Arctic summer day rarely exceeds 10°C, with much cooler air out by the sea ice, so the hunters dress for the climate: skin boots, mittens, and knee-length parkas with fur-lined hoods. Kunuk joins an experienced elder and the man’s brother as they load their boat with harpoons, guns, knives, tea, and bannock (a fry bread). Nearby, other men ready their own freighter canoes.

Then they push off—a tiny flotilla in a great big sea—on their way to hunt an enormous animal. As they travel, the hunters explain how to read the angle of the sun, the direction of the currents, and the subtle movements of the seaweed—a navigational system so baffling to young Kunuk that he silently questions how they will ever find their way home.

After several hours spent listening to the engine’s mechanical chug, Kunuk hears a chorus of mumbling and chattering, grunts and growls, a sign that they are close to the walruses. (That sound will later remind him of the cacophony in a busy bar.) They shut down the motors and drift toward the ice. As the walruses lift their hefty heads, the hunters raise their rifles and aim.

Throughout the Arctic, the traditional walrus hunt happens today much like it has for thousands of years—in teams armed with knowledge about walrus behavior accumulated over generations. But times are changing, and it’s not just that the hunters now have global positioning systems, speedboats, and cell phones. A rapidly changing environment is also altering walrus behavior in ways scientists are struggling to understand. As Arctic sea ice melts at a worrisome rate—in 2015 reaching the smallest maximum extent ever recorded—walruses are behaving strangely in parts of their range. That includes gathering in unusually large numbers on land.

Normally, females and calves prefer to haul out on sea ice instead of on land with the males. But as the ice disappears, the beaches are filling up. In September 2014, 35,000 Pacific walruses piled together near the village of Point Lay, Alaska, making international headlines for a record-setting heap of jostling tusks and whiskers on American soil. In October 2010, 120,000 walruses—perhaps half the world’s population—crowded onto one Russian haul-out site.

For their part, scientists are racing to gather information about walruses, including attempts to get the first accurate head count amidst increased shipping traffic, proposed oil drilling, and other disturbances in key walrus habitat. A 2017 deadline for a decision by the United States government on whether to list walruses under the Endangered Species Act is fueling a new sense of urgency. A major goal is to explain changing walrus behaviors and understand what, if any, protections they might require. But there is another unanswered question that is just as critical, if less quantifiable: What do new walrus behaviors mean for indigenous people who have long depended on the animals? 


Though related, these questions represent a clash between two contradictory ways of seeing the natural world. There’s science, which respects numbers and data above all else. And then there’s traditional knowledge, which instead prioritizes relationships between people and animals. In the Inuit view, walruses have a sense of personhood and agency, says Erica Hill, an anthropologist at the University of Alaska Southeast in Juneau. They act and react. As Kunuk points out, animal populations—caribou, fish, seals, and walruses—have always cycled. Unlike the scientists, the Inuit feel it’s best not to talk about how many come by each year. The animals might overhear, feel disrespected, and choose to stay away.

“If we talk about the walrus too much, they’re going to change,” says Kunuk. “If we were farmers we would count our stock. But we’re hunters and these are wild animals.”

Because scientists and hunters use wholly different systems to process knowledge, merging what they know is like trying to read a book in a foreign, if slightly familiar, language. Still, both worldviews share a deep caring for the animals, suggesting that a true understanding of the walrus may come only by allowing each perspective to teach the other. To accurately interpret emerging science, perhaps researchers must incorporate a much deeper history, one embedded in native traditions.

Walruses—and the people who have long relied on them—have, after all, been dealing with hunters, climate variations, and other obstacles for centuries. And Inuit hunters know that walruses have repeatedly adapted to change with more resilience than several decades of scientific data can detect. Within that intricate relationship may lie important lessons for maintaining a delicate balance between species that have coexisted in a harsh and unpredictable environment for millennia. This often-overlooked complexity adds a twist to the standard narrative surrounding Arctic creatures—that environmental change leads to certain catastrophe. It might not be so simple.

“We’re really good in the science world at seeing how things can go wrong, like ‘Gee, walruses need ice and the ice is going away, so whoops, we have a problem,’” says anthropologist Henry Huntington, who has been interviewing native hunters to complement a walrus satellite-tagging study by the Alaska Department of Fish and Game. “We know that ice is getting thinner in the summer, and it is easy to draw a straight line and extrapolate and say that at the end of this line is doom and gloom for the walrus population. What we’re not good at anticipating is what adjustments walruses can make. Walrus hunters are able to put that in perspective.”

On that first expedition some three decades ago, young Kunuk watched and filmed as the hunters shot and butchered walruses, then wrapped parcels of meat in walrus skin. When they returned to Igloolik, he kept filming as the men dug pits for the meat in the gravel beach. After fermenting for several months, the aged meat, called igunaq, takes on the consistency of blue cheese and smells like a week-old carcass, Kunuk says. Yet once acquired, a taste for this valuable delicacy is a lifelong love, and igunaq, along with fresh, boiled walrus meat, is coveted.

For a 700-kilogram polar bear, calorie-dense walrus is also fair game and, in the emerging quagmire of changing Arctic dynamics, this is the crux. As Arctic ice melts, polar bears are spending more time on land where they’re smelling hard-won igunaq, digging up the meat, and occasionally wandering into Igloolik or other villages. A generation ago, Kunuk’s father told him that one bear a year might come into the village. But between August 2012 and January 2013, over 30 bears were seen on Igloolik Island, including in and around Igloolik village.

Along the coasts of Alaska and Russia, another temptation lures polar bears closer to villages: extra-large gatherings of live walruses that are, like the bears, increasingly driven to shore, largely because of the lack of sea ice. Walruses are notoriously skittish and often stampede when spooked by something like a bear. In a stampede’s wake, they leave trampled animals, sometimes thousands of them. It’s like a free buffet for hungry bears.


Escalating conflicts between walruses, polar bears, and humans have prompted a new era of adaptation by indigenous communities, often with scientists supporting their efforts. In Igloolik and nearby Hall Beach, hunters are testing electric fences as deterrents to protect igunaq. Sometimes the bears get over or under the fences, but several years into the project, they have learned to avoid the live wires, which deliver a harmless but effective jolt. And communities are losing less of their valuable meat, especially when they’re vigilant about checking the fences, says Marcus Dyck, a polar bear biologist with the government of Nunavut. “I’ve seen polar bears move a thousand pounds of rocks to get at walrus meat. If a bear is determined, there’s nothing that can stop [it],” he says. “Surprisingly, electricity from the fences really spooks the shit out of them.”

On the Pacific side of the Arctic, efforts to manage the walrus situation began in 2006 after a polar bear killed a teenage girl in the Russian village of Riyrkaipiy. Along with a growing sense that more polar bears were hanging around on land, the concerned villagers took charge by restricting disturbances at haul-out sites and creating umky (polar bear) patrols to chase bears away with flares, pots and pans, and rubber bullets. Their work was so effective that at least seven communities now have active polar-bear patrol teams that keep watch along the northern coastline of Russia. In Alaska, communities are managing walrus stampedes in terrestrial haul-out sites—and thus deterring bears—by minimizing noise and other human-caused disturbances. Low-flying planes are diverted, film crews turned away, and hunting is avoided in an attempt to keep the herds calm.

The people who live among walruses, in other words, are adapting to new realities. But what about the walruses? What do the numbers show?

Before the onset of industrial European walrus hunting in the 19th century, it is estimated that hundreds of thousands of walruses swam freely throughout the Arctic. But the animals became so valued for their oil, meat, skin, and ivory that by the 1950s the population had fallen as low as 50,000. After a recovery that peaked in the 1980s, when there seemed to be more walruses than the environment could support, numbers declined again. Today, the best available data suggests that there may be as many as 25,000 Atlantic walruses and some 200,000 Pacific walruses. 

But nobody knows for sure. Walruses spend a lot of time underwater, diving for shellfish on the seafloor. And they tend to clump within an enormous range that is both inaccessible and inhospitable to people, which means that extrapolating the size of an entire population by surveying a fraction of the environment can lead to wild miscalculations. The last attempt to make an aerial count of Pacific walruses, in 2006, came up with an estimate of 129,000 individuals, but error margins were huge. The possible range was between 55,000 and 507,000.
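
To see why scaling up a partial count can produce such a wide range, consider a minimal simulation of the clumping problem. This is an illustration only, not the survey’s actual methodology; the transect counts, clumping odds, and sample size below are invented.

```python
import random

# Illustrative only: all numbers are invented, not from the 2006 survey.
# Walruses clump, so most transects hold none and a few hold thousands.
random.seed(42)
NUM_TRANSECTS = 1000   # hypothetical survey cells covering the whole range
true_counts = [random.choice([0] * 9 + [random.randint(500, 5000)])
               for _ in range(NUM_TRANSECTS)]

def survey_estimate(sample_size):
    """Count a random subset of transects and scale up to the full range."""
    sampled = random.sample(true_counts, sample_size)
    return sum(sampled) * NUM_TRANSECTS / sample_size

estimates = sorted(survey_estimate(50) for _ in range(1000))
print("true population:", sum(true_counts))
print("5th-95th percentile of estimates:",
      int(estimates[50]), "to", int(estimates[950]))
# Because the animals cluster, repeated small surveys swing wildly; this
# is the kind of effect behind the 55,000-to-507,000 range in the 2006 count.
```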

“They’re the gypsies of the sea and they’re a very challenging species to study,” says Rebecca Taylor, a research statistician with the United States Geological Survey (USGS) Alaska Science Center in Anchorage. “If you find walruses, you often find a lot of walruses. But you can go a long time at sea without finding any walruses. The logistics of getting out there and observing them are very challenging.”

Among the variety of scientific endeavors aiming to learn, once and for all, how walruses are faring, researchers at the USGS are tagging animals to track their movements and using statistical analyses to understand population trends. The U.S. Fish & Wildlife Service (USFWS) is studying biopsies and DNA sequences to try to get the first accurate count of Pacific walruses. Results, as they emerge, will help focus conservation efforts where they’re needed most.

Still, many questions remain unanswered. “We can definitively say they’ve altered their behavior in an unprecedented way,” says USGS wildlife biologist Anthony Fischbach. “We can report that they have a different energy budget, that they’re spending less time resting and more time in the water burning calories. And that leads us to think that is not a good thing. But integrating that into what it’s going to be like in the future, whether they will do fine or not, that is an open question. There is more science to do.”

An Inuit pipe carved from a walrus tusk. (Werner Forman/Corbis)

There may also be more history to unearth before researchers can blend that science with the trove of indigenous knowledge. For at least 2,000 years, people have relied on walrus for food for themselves and their dogs, Hill says. Her research also shows that native communities have long built their villages near haul-out sites that have remained in the same areas for hundreds, if not thousands, of years. But while hauling out on land appears to be normal behavior for walruses, it’s the staggering size of recent gatherings that is cause for concern. This new behavior suggests that the places walruses gather are limited. With less sea ice for walruses to rest on, Hill suspects that the beaches are only going to become more overcrowded. “It’s not a matter of walruses going someplace else to haul out,” she says, adding that walruses return repeatedly to the same haul-out locations for generations. “Because they have specific requirements for their [haul-out sites], they can’t just move elsewhere. There is no other place.”

Further scrutiny of the deep past offers insight into how, for many indigenous communities, animals are woven into the fabric of life. Early hunters used walrus bones, teeth, tusks, and skin, for instance, to fashion sled runners, ornaments, and sails. Scapulae became shovel blades, penis bones became harpoon sockets, intestines were stretched into skylights, and skulls formed the structural foundation of walls for homes. In Iñupiaq, a language spoken in northern Alaska, 15 words exist to describe a walrus’s position relative to a fishing boat, including samna, “that one on the southern side.” Walruses are also ingrained in Inuit religion. “There is an idea people still talk about today,” says archaeologist Sean Desjardins of McGill University in Montreal, “that the Northern Lights are actually spirits playing a ball game with a walrus head.” 

Merging these cultural tales with the stories that scientists piece together offers a chance to fully assess the walrus’s condition. Modern walrus research is wide-ranging geographically, but reaches back just 40 years, while indigenous hunters have longer-term knowledge that is more locally focused, says Jim MacCracken, a wildlife biologist with the USFWS in Anchorage. Together, these understandings build a fuller picture that goes beyond the usual story told to the public. “Environmental groups are quick to latch on to [dramatic stories of changing walrus behavior] and with 2014’s big haul outs, they were the ones pretty much making a big story out of it, telling people that walruses are in serious trouble and have no place to go but to shore,” MacCracken says. “These one- or two-minute reports on TV tend to sensationalize these events with ‘the world is coming to an end’ sorts of stuff. They can’t get into all the complexity of what’s going on there.”

Reaching across time and culture has other benefits, too. If studies show that walruses are in trouble, saving them is going to require that scientists and hunters listen to one another.  “Nobody likes it if you come in and say, ‘I studied your problem and here’s what you need to do,’” Huntington says. “Ultimately, if some kind of management action is needed, we need everyone working together.”

For his part, Kunuk continues to join the hunt each year. Today, he is also an established filmmaker who directed and produced the award-winning 2001 film Atanarjuat: The Fast Runner. Much of his work aims to preserve his culture in the midst of rapid change. In “Aiviaq (Walrus Hunt),” an episode of the television series Nunavut (Our Land), Kunuk tells the fictional story of a priest who arrived in Igloolik in 1946. Through the eyes of this outsider, viewers watch weathered, red-cheeked Inuit drink steaming tea and discuss the wind before piling into a boat. At the hunting site, some passengers cover their ears when a rifle fires. Soon, the hunters are chewing on raw meat as they slice through blubber, then bundle meat for igunaq. A more recent educational film, “Angirattut (Coming Home),” features an elder explaining the walrus hunt as it happens.

“When your son asks you how to butcher walrus, we have to know,” Kunuk says. “It’s part of our culture. It’s just our way, the way we live. It’s part of the routine. I hope it goes on forever.”

This article originally appeared under the headline "What Now, Walrus?"

One Hundred Years Ago, Einstein's Theory of General Relativity Baffled the Press and the Public

Smithsonian Magazine

When the year 1919 began, Albert Einstein was virtually unknown beyond the world of professional physicists. By year’s end, however, he was a household name around the globe. November 1919 was the month that made Einstein into “Einstein,” the beginning of the former patent clerk’s transformation into an international celebrity.

On November 6, scientists at a joint meeting of the Royal Society of London and the Royal Astronomical Society announced that measurements taken during a total solar eclipse earlier that year supported Einstein’s bold new theory of gravity, known as general relativity. Newspapers enthusiastically picked up the story. “Revolution in Science,” blared the Times of London; “Newtonian Ideas Overthrown.” A few days later, the New York Times weighed in with a six-tiered headline—rare indeed for a science story. “Lights All Askew in the Heavens,” trumpeted the main headline. A bit further down: “Einstein’s Theory Triumphs” and “Stars Not Where They Seemed, or Were Calculated to Be, But Nobody Need Worry.”

The spotlight would remain on Einstein and his seemingly impenetrable theory for the rest of his life. As he remarked to a friend in 1920: “At present every coachman and every waiter argues about whether or not the relativity theory is correct.” In Berlin, members of the public crowded into the classroom where Einstein was teaching, to the dismay of tuition-paying students. And then he conquered the United States. In 1921, when the steamship Rotterdam arrived in Hoboken, New Jersey, with Einstein on board, it was met by some 5,000 cheering New Yorkers. Reporters in small boats pulled alongside the ship even before it had docked. An even more over-the-top episode played out a decade later, when Einstein arrived in San Diego, en route to the California Institute of Technology where he had been offered a temporary position. Einstein was met at the pier not only by the usual throng of reporters, but by rows of cheering students chanting the scientist’s name.

The intense public reaction to Einstein has long intrigued historians. Movie stars have always attracted adulation, of course, and 40 years later the world would find itself immersed in Beatlemania—but a physicist? Nothing like it had ever been seen before, and—with the exception of Stephen Hawking, who experienced a milder form of celebrity—it hasn’t been seen since, either.

Over the years, a standard, if incomplete, explanation emerged for why the world went mad over a physicist and his work: In the wake of a horrific global war—a conflict that drove the downfall of empires and left millions dead—people were desperate for something uplifting, something that rose above nationalism and politics. Einstein, born in Germany, was a Swiss citizen living in Berlin, Jewish as well as a pacifist, and a theorist whose work had been confirmed by British astronomers. And it wasn’t just any theory, but one which moved, or seemed to move, the stars. After years of trench warfare and the chaos of revolution, Einstein’s theory arrived like a bolt of lightning, jolting the world back to life.

Mythological as this story sounds, it contains a grain of truth, says Diana Kormos-Buchwald, a historian of science at Caltech and director and general editor of the Einstein Papers Project. In the immediate aftermath of the war, the idea of a German scientist—a German anything—receiving acclaim from the British was astonishing.

“German scientists were in limbo,” Kormos-Buchwald says. “They weren’t invited to international conferences; they weren’t allowed to publish in international journals. And it’s remarkable how Einstein steps in to fix this problem. He uses his fame to repair contact between scientists from former enemy countries.”

Headline in the New York Times about Einstein's newly confirmed general theory of relativity, November 10, 1919. (The New York Times Archives / Dan Falk)

At that time, Kormos-Buchwald adds, the idea of a famous scientist was unusual. Marie Curie was one of the few widely known names. (She already had two Nobel Prizes by 1911; Einstein wouldn’t receive his until 1922, when he was retroactively awarded the 1921 prize.) However, Britain also had something of a celebrity-scientist in the form of Sir Arthur Eddington, the astronomer who organized the eclipse expeditions to test general relativity. Eddington was a Quaker and, like Einstein, had been opposed to the war. Even more crucially, he was one of the few people in England who understood Einstein’s theory, and he recognized the importance of putting it to the test.

“Eddington was the great popularizer of science in Great Britain. He was the Carl Sagan of his time,” says Marcia Bartusiak, science author and professor in MIT’s graduate Science Writing program. “He played a key role in getting the media’s attention focused on Einstein.”

It also helped Einstein’s fame that his new theory was presented as a kind of cage match between himself and Isaac Newton, whose portrait hung in the very room at the Royal Society where the triumph of Einstein’s theory was announced.

“Everyone knows the trope of the apple supposedly falling on Newton’s head,” Bartusiak says. “And here was a German scientist who was said to be overturning Newton, and making a prediction that was actually tested—that was an astounding moment.”

Much was made of the supposed incomprehensibility of the new theory. In the New York Times story of November 10, 1919—the “Lights All Askew” edition—the reporter paraphrases J.J. Thomson, president of the Royal Society, as stating that the details of Einstein’s theory “are purely mathematical and can only be expressed in strictly scientific terms” and that it was “useless to endeavor to detail them for the man in the street.” The same article quotes an astronomer, W.J.S. Lockyer, as saying that the new theory’s equations, “while very important,” do not “affect anything on this earth. They do not personally concern ordinary human beings; only astronomers are affected.” (If Lockyer could have time travelled to the present day, he would discover a world in which millions of ordinary people routinely navigate with the help of GPS satellites, which depend directly on both special and general relativity.)
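
Lockyer’s miss is easy to quantify with textbook physics. Here is a back-of-envelope sketch using standard constants and first-order approximations (a simplification, not a full relativistic treatment of the GPS system):

```python
import math

# Why GPS must correct for relativity, to first order.
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant
M = 5.972e24       # mass of Earth, kg
R_EARTH = 6.371e6  # Earth radius, m
R_SAT = 2.6571e7   # GPS orbital radius (about 20,200 km altitude), m

v_sat = math.sqrt(G * M / R_SAT)  # circular orbital speed, ~3.9 km/s

# Special relativity: the moving satellite clock runs slow by ~v^2/(2c^2).
sr_per_sec = -v_sat**2 / (2 * c**2)
# General relativity: the clock higher in Earth's gravity well runs fast.
gr_per_sec = (G * M / c**2) * (1 / R_EARTH - 1 / R_SAT)

drift = (sr_per_sec + gr_per_sec) * 86400 * 1e6
print(f"net satellite clock drift: {drift:+.1f} microseconds per day")
# About +38 microseconds/day; at light speed that is roughly 11 km of
# accumulated position error per day if left uncorrected.
```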

The idea that a handful of clever scientists might understand Einstein’s theory, but that such comprehension was off limits to mere mortals, did not sit well with everyone—including the New York Times’ own staff. The day after the “Lights All Askew” article ran, an editorial asked what “common folk” ought to make of Einstein’s theory, a set of ideas that “cannot be put in language comprehensible to them.” The editorial concluded with a mix of frustration and sarcasm: “If we gave it up, no harm would be done, for we are used to that, but to have the giving up done for us is—well, just a little irritating.”

A portrait of Albert Einstein published on the cover of Berliner Illustrirte Zeitung on December 14, 1919. (Ullstein Bild via Getty Images)

Things were not going any smoother in London, where the editors of the Times confessed their own ignorance but also placed some of the blame on the scientists themselves. “We cannot profess to follow the details and implications of the new theory with complete certainty,” they wrote on November 28, “but we are consoled by the reflection that the protagonists of the debate, including even Dr. Einstein himself, find no little difficulty in making their meaning clear.”

Readers of that day’s Times were treated to Einstein’s own explanation, translated from German. It ran under the headline, “Einstein on his Theory.” The most comprehensible paragraph was the final one, in which Einstein jokes about his own “relative” identity: “Today in Germany I am called a German man of science, and in England I am represented as a Swiss Jew. If I come to be regarded as a bête noire, the descriptions will be reversed, and I shall become a Swiss Jew for the Germans, and a German man of science for the English.”

Not to be outdone, the New York Times sent a correspondent to pay a visit to Einstein himself, in Berlin, finding him “on the top floor of a fashionable apartment house.” Again they try—both the reporter and Einstein—to illuminate the theory. Asked why it’s called “relativity,” Einstein explains how Galileo and Newton envisioned the workings of the universe and how a new vision is required, one in which time and space are seen as relative. But the best part was once again the ending, in which the reporter lays down a now-clichéd anecdote which would have been fresh in 1919: “Just then an old grandfather’s clock in the library chimed the mid-day hour, reminding Dr. Einstein of some appointment in another part of Berlin, and old-fashioned time and space enforced their wonted absolute tyranny over him who had spoken so contemptuously of their existence, thus terminating the interview.”

Efforts to “explain Einstein” continued. Eddington wrote about relativity in the Illustrated London News and, eventually, in popular books. So too did luminaries like Max Planck, Wolfgang Pauli and Bertrand Russell. Einstein wrote a book too, and it remains in print to this day. But in the popular imagination, relativity remained deeply mysterious. A decade after the first flurry of media interest, an editorial in the New York Times lamented: “Countless textbooks on relativity have made a brave try at explaining and have succeeded at most in conveying a vague sense of analogy or metaphor, dimly perceptible while one follows the argument painfully word by word and lost when one lifts his mind from the text.”

Eventually, the alleged incomprehensibility of Einstein’s theory became a selling point, a feature rather than a bug. Crowds continued to follow Einstein, not, presumably, to gain an understanding of curved space-time, but rather to be in the presence of someone who apparently did understand such lofty matters. This reverence explains, perhaps, why so many people showed up to hear Einstein deliver a series of lectures in Princeton in 1921. The classroom was filled to overflowing—at least at the beginning, Kormos-Buchwald says. “The first day there were 400 people there, including ladies with fur collars in the front row. And on the second day there were 200, and on the third day there were 50, and on the fourth day the room was almost empty.”

Original caption: From the report of Sir Arthur Eddington on the expedition to verify Albert Einstein's prediction of the bending of light around the sun. (Public Domain)

If the average citizen couldn’t understand what Einstein was saying, why were so many people keen on hearing him say it? Bartusiak suggests that Einstein can be seen as the modern equivalent of the ancient shaman who would have mesmerized our Paleolithic ancestors. The shaman “supposedly had an inside track on the purpose and nature of the universe,” she says. “Through the ages, there has been this fascination with people that you think have this secret knowledge of how the world works. And Einstein was the ultimate symbol of that.”

The physicist and science historian Abraham Pais has described Einstein similarly. To many people, Einstein appeared as “a new Moses come down from the mountain to bring the law and a new Joshua controlling the motion of the heavenly bodies.” He was the “divine man” of the 20th century.

Einstein’s appearance and personality helped. Here was a jovial, mild-mannered man with deep-set eyes, who spoke just a little English. (He did not yet have the wild hair of his later years, though that would come soon enough.) With his violin case and sandals—he famously shunned socks—Einstein was just eccentric enough to delight American journalists. (He would later joke that his profession was “photographer’s model.”) According to Walter Isaacson’s 2007 biography, Einstein: His Life and Universe, the reporters who caught up with the scientist “were thrilled that the newly discovered genius was not a drab or reserved academic” but rather “a charming 40-year-old, just passing from handsome to distinctive, with a wild burst of hair, rumpled informality, twinkling eyes, and a willingness to dispense wisdom in bite-sized quips and quotes.”

The timing of Einstein’s new theory helped heighten his fame as well. Newspapers were flourishing in the early 20th century, and the advent of black-and-white newsreels had just begun to make it possible to be an international celebrity. As Thomas Levenson notes in his 2004 book Einstein in Berlin, Einstein knew how to play to the cameras. “Even better, and usefully in the silent film era, he was not expected to be intelligible. ... He was the first scientist (and in many ways the last as well) to achieve truly iconic status, at least in part because for the first time the means existed to create such idols.”

Einstein, like many celebrities, had a love-hate relationship with fame, which he once described as “dazzling misery.” The constant intrusions into his private life were an annoyance, but he was happy to use his fame to draw attention to a variety of causes that he supported, including Zionism, pacifism, nuclear disarmament and racial equality.

A portrait of Albert Einstein taken at Princeton in 1935. (Sophie Delar)

Not everyone loved Einstein, of course. Various groups had their own distinctive reasons for objecting to Einstein and his work, John Stachel, the founding editor of the Einstein Papers Project and a professor at Boston University, told me in a 2004 interview. Some American philosophers rejected relativity for being too abstract and metaphysical, while some Russian thinkers felt it was too idealistic. Some simply hated Einstein because he was a Jew.

“Many of those who opposed Einstein on philosophical grounds were also anti-Semites, and later on, adherents of what the Nazis called Deutsche Physik—‘German physics’—which was ‘good’ Aryan physics, as opposed to this jüdische Spitzfindigkeit—‘Jewish subtlety,’” Stachel says. “So one gets complicated mixtures, but the myth that everybody loved Einstein is certainly not true. He was hated as a Jew, as a pacifist, as a socialist [and] as a relativist, at least.” As the 1920s wore on, with anti-Semitism on the rise, death threats against Einstein became routine. Fortunately he was on a working holiday in the United States when Hitler came to power. He would never return to the country where he had done his greatest work.

For the rest of his life, Einstein remained mystified by the relentless attention paid to him. As he wrote in 1942, “I never understood why the theory of relativity with its concepts and problems so far removed from practical life should for so long have met with a lively, or indeed passionate, resonance among broad circles of the public. ... What could have produced this great and persistent psychological effect? I never yet heard a truly convincing answer to this question.”

Today, a full century after his ascent to superstardom, the Einstein phenomenon continues to resist a complete explanation. The theoretical physicist burst onto the world stage in 1919, expounding a theory that was, as the newspapers put it, “dimly perceptible.” Yet in spite of the theory’s opacity—or, very likely, because of it—Einstein was hoisted onto the lofty pedestal where he remains to this day. The public may not have understood the equations, but those equations were said to reveal a new truth about the universe, and that, it seems, was enough.

Would You Trust Drone Software to Pilot Your Flight?

Smithsonian Magazine

Would you get on a plane that didn’t have a human pilot in the cockpit? Half of air travelers surveyed in 2017 said they would not, even if the ticket was cheaper. Modern pilots do such a good job that almost any air accident is big news, such as the Southwest engine disintegration on April 17.

But stories of pilot drunkenness, rants, fights and distraction, however rare, are reminders that pilots are only human. Not every plane can be flown by a disaster-averting pilot, like Southwest Capt. Tammie Jo Shults or Capt. Chesley “Sully” Sullenberger. But software could change that, equipping every plane with an extremely experienced guidance system that is always learning more.

In fact, on many flights, autopilot systems already control the plane for basically all of the flight. And software handles the most harrowing landings – when there is no visibility and the pilot can’t see anything to even know where he or she is. But human pilots are still on hand as backups.

A new generation of software pilots, developed for self-flying vehicles, or drones, will soon have logged more flying hours than all humans have – ever. By combining their enormous amounts of flight data and experience, drone-control software applications are poised to quickly become the world’s most experienced pilots.

**********

Drones come in many forms, from tiny quad-rotor copter toys to missile-firing winged planes, or even 7-ton aircraft that can stay aloft for 34 hours at a stretch.

When drones were first introduced, they were flown remotely by human operators. However, this merely substitutes a pilot on the ground for one aloft. And it requires significant communications bandwidth between the drone and control center, to carry real-time video from the drone and to transmit the operator’s commands.

Many newer drones no longer need pilots; some drones for hobbyists and photographers can now fly themselves along human-defined routes, leaving the human free to sightsee – or control the camera to get the best view.

University researchers, businesses and military agencies are now testing larger and more capable drones that will operate autonomously. Swarms of drones can fly without needing tens or hundreds of humans to control them. And they can perform coordinated maneuvers that human controllers could never handle.

Whether flying in swarms or alone, the software that controls these drones is rapidly gaining flight experience.

**********

Experience is the main qualification for pilots. Even a person who wants to fly a small plane for personal and noncommercial use needs 40 hours of flying instruction before getting a private pilot’s license. Commercial airline pilots must have at least 1,000 hours before even serving as a co-pilot.

On-the-ground training and in-flight experience prepare pilots for unusual and emergency scenarios, ideally to help save lives in situations like the “Miracle on the Hudson.” But many pilots are less experienced than “Sully” Sullenberger, who saved his planeload of people with quick and creative thinking. With software, though, every plane can have on board a pilot with as much experience – if not more. A popular software pilot system, in use in many aircraft at once, could gain more flight time each day than a single human might accumulate in a year.
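
The scale argument is a one-line calculation. The figures below are assumptions chosen for illustration; fleet size, daily utilization, and a pilot’s annual hours are not from the article:

```python
# Back-of-envelope comparison; all three inputs are assumed values.
fleet_size = 5_000            # aircraft running the same software pilot
hours_per_day = 10            # assumed daily utilization per airframe
human_hours_per_year = 900    # rough annual flight time for one pilot

software_hours_per_day = fleet_size * hours_per_day
print(software_hours_per_day)                         # 50,000 hours/day
print(software_hours_per_day / human_hours_per_year)  # ~55 pilot-years/day
```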

As someone who studies technology policy as well as the use of artificial intelligence for drones, cars, robots and other uses, I don’t lightly suggest handing over the controls for those additional tasks. But giving software pilots more control would maximize computers’ advantages over humans in training, testing and reliability.

**********

Unlike people, computers will follow sets of instructions in software the same way every time. That lets developers create instructions, test reactions and refine aircraft responses. Testing could make it far less likely, for example, that a computer would mistake the planet Venus for an oncoming jet and throw the plane into a steep dive to avoid it.

The most significant advantage is scale: Rather than teaching thousands of individual pilots new skills, updating thousands of aircraft would require only downloading updated software.

US Airways Flight 1549 passengers evacuate in the water after an emergency landing. (AP Photo/Bebeto Matthews)

These systems would also need to be thoroughly tested – in both real-life situations and in simulations – to handle a wide range of aviation situations and to withstand cyberattacks. But once they’re working well, software pilots are not susceptible to distraction, disorientation, fatigue or other human impairments that can create problems or cause errors even in common situations.

**********

Already, aircraft regulators are concerned that human pilots are forgetting how to fly on their own and may have trouble taking over from an autopilot in an emergency.

In the “Miracle on the Hudson” event, for example, a key factor in what happened was how long it took for the human pilots to figure out what had happened – that the plane had flown through a flock of birds, which had damaged both engines – and how to respond. Rather than the approximately one minute it took the humans, a computer could have assessed the situation in seconds, potentially saving enough time that the plane could have landed on a runway instead of a river.

At the NTSB hearing, investigators learned how the decision time made it impossible for Flight 1549 to return to the airport, forcing the water landing. (AP Photo/Charles Dharapak)

Aircraft damage can pose another particularly difficult challenge for human pilots: it can change how the controls affect the plane’s flight. In cases where damage renders a plane uncontrollable, the result is often tragedy. A sufficiently advanced automated system could make minute changes to the aircraft’s steering and use its sensors to quickly evaluate the effects of those movements – essentially learning how to fly all over again with a damaged plane.
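
The article doesn’t name a specific technique, but the probe-and-measure idea it describes resembles online system identification. Here is a toy sketch under invented assumptions (one control axis, a linear response, made-up numbers), not a real flight-control law:

```python
import random

# Toy "re-learn the damaged plane" loop: the true control effectiveness
# drops mid-flight, and the software re-estimates it from small probes.
random.seed(0)
true_gain = 1.0   # pitch response per unit of elevator, before damage
estimate = 1.0    # the autopilot's current belief about that gain
LEARN_RATE = 0.5

for step in range(200):
    if step == 50:
        true_gain = 0.3  # sudden damage: controls lose 70% of authority
    probe = random.uniform(-0.1, 0.1)                      # tiny deflection
    response = true_gain * probe + random.gauss(0, 0.005)  # noisy sensor
    error = response - estimate * probe                    # surprise
    # Normalized gradient step toward the gain that explains the response.
    estimate += LEARN_RATE * error * probe / (probe * probe + 1e-2)
    if step in (49, 60, 199):
        print(f"step {step}: estimated control gain = {estimate:.2f}")
# The estimate settles near 0.3 after the damage, so a controller could
# re-scale its commands to suit the crippled airframe.
```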

**********

The biggest barrier to fully automated flight is psychological, not technical. Many people may not want to trust their lives to computer systems. But they might come around when reassured that the software pilot has tens, hundreds or thousands more hours of flight experience than any human pilot.

Other autonomous technologies, too, are progressing despite public concerns. Regulators and lawmakers are allowing self-driving cars on the roads in many states. But more than half of Americans don’t want to ride in one, largely because they don’t trust the technology. And only 17 percent of travelers around the world are willing to board a plane without a pilot. However, as more people experience self-driving cars on the road and have drones deliver them packages, it is likely that software pilots will gain in acceptance.


The airline industry will certainly be pushing people to trust the new systems: Automating pilots could save tens of billions of dollars a year. And the current pilot shortage means software pilots may be the key to having any airline service to smaller destinations.

Both Boeing and Airbus have made significant investments in automated flight technology, which would remove or reduce the need for human pilots. Boeing has actually bought a drone manufacturer and is looking to add software pilot capabilities to the next generation of its passenger aircraft. (Other tests have tried to retrofit existing aircraft with robotic pilots.)

One way to help regular passengers become comfortable with software pilots – while also helping to both train and test the systems – could be to introduce them as co-pilots working alongside human pilots. Planes would be operated by software from gate to gate, with the pilots instructed to touch the controls only if the system fails. Eventually pilots could be removed from the aircraft altogether, just like they eventually were from the driverless trains that we routinely ride in airports around the world.

How Are Horoscopes Still a Thing?

Smithsonian Magazine

Astrology is either an ancient and valuable system of understanding the natural world and our place in it with roots in early Mesopotamia, China, Egypt and Greece, or complete rubbish, depending on whom you ask.

But newspaper and magazine horoscopes? The ones advising you to not “fight against changes” today, or to “go with the flow”, whatever that means, or to “keep things light and breezy with that new hottie today”? They get even less respect, from both skeptics and true believers. So it’s a bit surprising, then, that they remain so popular with everyone in between.

The first real newspaper horoscope column is widely credited to R.H. Naylor, a prominent British astrologer of the first half of the 20th century. Naylor was an assistant to the high-society neo-shaman Cheiro (born William Warner, a decidedly less shamanistic name), who’d read the palms of Mark Twain, Grover Cleveland, and Winston Churchill, and who was routinely tapped to do celebrity star charts. Cheiro, however, wasn’t available in August 1930 to do the horoscope for the recently born Princess Margaret, so Britain’s Sunday Express newspaper asked Naylor.

Like most astrologers of the day, Naylor used what’s called a natal star chart. Astrology posits that the natural world and we human beings in it are affected by the movements of the sun, moon and stars through the heavens, and that who we are is shaped by the exact position of these celestial bodies at the time of our birth. A natal star chart, therefore, presents the sky on the date and exact time of birth, from which the astrologer extrapolates character traits and predictions.

On August 24, 1930, three days after the Princess’s birth, Naylor’s published report predicted that her life would be “eventful”, an accurate if not entirely inspired forecast given that she was, after all, a princess (he didn’t, it appears, foresee the Princess’s later star-crossed romances and lifelong love affair with alcohol and cigarettes). He also noted that “events of tremendous importance to the Royal Family and the nation will come about near her seventh year”, a prediction that was somewhat more precise – and seemed to ring true right around the time that her uncle, King Edward VIII, abdicated the throne to her father.

Celebrity natal star charts weren’t a particularly novel idea; American and British newspapers routinely trotted astrologers out to find out what the stars had in store for society pagers like Helen Gould and “Baby Astor’s Half Brother”. Even the venerable New York Times wasn’t above consulting the stars: In 1908, a headline declared that President Theodore Roosevelt, a Sagittarius, “might have been different with another birthday”, according to “expert astrologer” Mme. Humphrey.

But though it wasn’t the first of its kind, Naylor’s article was a tipping point for the popular consumption of horoscopes. Following the interest the public showed in the Princess Margaret horoscope, the paper decided to run several more forecasts from Naylor. One of his next articles included a prediction that “a British aircraft will be in danger” between October 8 and 15. When British airship R101 crashed outside Paris on October 5, killing 48 of the 54 people on board, the tragedy was taken as eerie evidence of Naylor’s predictive skill. Suddenly, a lot more people were paying attention to the star column. The then-editor of the paper offered Naylor a weekly column – on the condition that he make it a bit less dry and a bit more the kind of thing that lots of people would want to read – and “What the Stars Foretell”, the first real newspaper horoscope column, was born.

The column offered advice to people whose birthdays fell that week, but within a few years, Naylor (or a clever editor) determined that he needed to come up with something that could apply to larger volumes of readers. By 1937, he’d hit upon the idea of using “star signs”, also known as “sun signs”, the familiar zodiac signs that we see today. “Sun sign” refers to the period of the year when the sun is passing through one of 12 30-degree celestial zones as visible from earth and named after nearby constellations; for example, if you’re born in the period when the sun is passing through the constellation Capricornus (the “horned goat”, often represented as a half-fish, half-goat), roughly December 22 to January 19, then that makes your sun sign Capricorn.
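
The sun-sign scheme reduces to a date-range lookup, which is part of why it scales to mass audiences so easily. A minimal sketch; the boundary dates are approximate and shift by a day or so between years and publications:

```python
from datetime import date

# Approximate sign boundaries: (month, day) on which each sign begins.
SIGNS = [
    ((1, 20), "Aquarius"), ((2, 19), "Pisces"), ((3, 21), "Aries"),
    ((4, 20), "Taurus"), ((5, 21), "Gemini"), ((6, 21), "Cancer"),
    ((7, 23), "Leo"), ((8, 23), "Virgo"), ((9, 23), "Libra"),
    ((10, 23), "Scorpio"), ((11, 22), "Sagittarius"), ((12, 22), "Capricorn"),
]

def sun_sign(birthday: date) -> str:
    """Return the zodiac sign whose period contains the birthdate."""
    md = (birthday.month, birthday.day)
    result = "Capricorn"  # dates before January 20 fall in Capricorn
    for start, name in SIGNS:
        if md >= start:
            result = name
    return result

print(sun_sign(date(1930, 8, 21)))  # Princess Margaret: Leo
print(sun_sign(date(1879, 3, 14)))  # Albert Einstein: Pisces
```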

“The only phenomenon in astrology allowing you to make wild generalizations about everybody born in this period to that period every year without fail is the sun sign,” explained Jonathan Cainer, a prominent astrologer who writes one of Britain’s most-read horoscope columns for The Daily Mail.

“[The column] was embraced by an enthusiastic public with open arms and it spawned a thousand imitations. Before we knew it tabloid astrology was born… this vast over-simplification of a noble, ancient art,” Cainer says. Cainer pointed out that even as newspaper and magazine horoscope writing became more and more popular – which it did and quickly, on both sides of the Atlantic – the practice was largely disregarded by the “proper” astrological community. The accusation, he says, was bolstered by the fact that historically, a lot of horoscope columns weren’t written by actual astrologers, but by writers told to read a book on astrology and get cracking.

Astrologers’ consternation notwithstanding, the popularity of newspaper and magazine horoscopes has never really died down; they became, along with standards like the crossword, newspaper “furniture”, as Cainer put it (and people hate it when the furniture is moved, Cainer says). Cainer also noted that there are few places in newspapers and, to some extent, magazines, that address the reader directly: “It’s an unusual form of language and form of relationship and as such, it lends itself well to a kind of attachment.”

Tiffanie Darke, editor of The Sunday Times Style section, which runs astrologer Shelley von Strunckel’s column, confirmed that via email, saying, “There is a significant readership who buy the paper particularly for Shelley's column, and there is a very considerable readership who you will see on Sundays in the pub, round the kitchen table, across a table at a cafe, reading out her forecasts to each other.”

This fits with what newspapers really are and have virtually always been – not just vehicles for hard news and so-called important stories, but also distributors of entertainment gossip and sports scores, advice on love matters and how to get gravy stains out of clothing, practical information about stock prices and TV schedules, recipes and knitting patterns, comics and humor, even games and puzzles. Whether those features are the spoonful of sugar to help the hard news medicine go down or whether people just pick up the paper for the horoscope makes little difference to the bottom line.

So as to why newspapers run horoscopes, the answer is simple: Readers like them.

But the figures on how many readers actually like horoscopes aren’t entirely clear. A National Science Foundation survey from 1999 found that just 12 percent of Americans read their horoscope every day or often, while 32 percent read them occasionally. More recently, the American Federation of Astrologers put the number of Americans who read their horoscope every day as high as 70 million, about 23 percent of the population. Anecdotally, enough people read horoscopes to be angry when they’re not in their usual place in the paper – Cainer says that he has a clause in his contract allowing him to take holidays, making him a rarity in the business: “The reading public is gloriously unsympathetic to an astrologer’s need for time off.”

Other evidence indicates that significant numbers of people do read their horoscopes if not daily, then regularly: When in 2011, astronomers claimed that the Earth’s naturally occurring orbital “wobble” could change star signs, many people promptly freaked out. (Astrologers, meanwhile, were far more sanguine – your sign is still your sign, they counseled; some, Cainer included, sighed that the wobble story was just another salvo in the fiercely pitched battle between astronomers and astrologers.)

At the same time, a significant portion of the population believes in the underpinnings of newspaper horoscopes. According to a 2009 Harris poll, 26 percent of Americans believe in astrology; that’s more people than believe in witches (23 percent), but fewer than believe in UFOs (32 percent), Creationism (40 percent) and ghosts (42 percent). Respect for astrology itself may be on the rise: A more recent survey from the National Science Foundation, published in 2014, found that fewer Americans rejected astrology as “not scientific” in 2012 than they did in 2010 – 55 percent as compared to 62 percent. The figure hasn’t been that low since 1983.

People who read their horoscopes also pay attention to what they say. In 2009, an iVillage poll – to mark the launch of the women-focused entertainment site’s dedicated astrology site, Astrology.com – found that of female horoscope readers, 33 percent check their horoscopes before job interviews; 35 percent before starting a new relationship; and 34 percent before buying a lottery ticket. More recent research, published in the October 2013 issue of the Journal of Consumer Research, found that people who read a negative horoscope were more likely to indulge in impulsive or self-indulgent behavior soon after.

So what’s going on? Why are people willing to re-order their love lives, buy a lottery ticket, or take a new job based on the advice of someone who knows nothing more about them than their birthdate?

One reason we can rule out is scientific validity. Of all the empirical tests that have been done on astrology, in all fields, says Dr. Chris French, a professor of psychology at London’s Goldsmiths College who studies belief in the paranormal, “They are pretty uniformly bad news for astrologers.”

There’s very little scientific proof that astrology is an accurate predictor of personality traits, future destinies, love lives, or anything else that mass-market astrology claims to know. For example, in a 1985 study published in the journal Nature, Dr. Shawn Carlson of University of California, Berkeley’s Physics department found that seasoned astrologers were unable to match individuals’ star charts with the results of a personality test any better than random chance; in a second test, individuals were unable to choose their own star charts, detailing their astrologically divined personality and character traits, any better than chance.

A smaller 1990 study, conducted by John McGrew and Richard McFall of Indiana University’s Psychology department and designed with a group of astrologers, found that astrologers were no better at matching star charts to the corresponding comprehensive case file of a volunteer than a non-astrologer control subject or random chance, and moreover, didn’t even agree with each other. A 2003 study, conducted by former astrologer Dr. Geoffrey Dean and psychologist Dr. Ivan Kelly, tracked over several decades the lives of 2,000 subjects who were all born within minutes of one another. The theory was that if astrological claims about star position and birthdates were true, then the individuals would have shared similar traits; they did not.
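
For intuition about the “random chance” baseline these studies report: when a judge matches k charts against k candidate profiles, blind guessing gets about 1 in k right. A quick simulation (illustrative only, not the studies’ actual protocols; Carlson’s test asked astrologers to choose among three profiles, so chance there was one in three):

```python
import random

# Chance baseline for chart-matching tests; illustrative, not the
# actual protocol of the Carlson or McGrew-McFall studies.
random.seed(1)

def random_matching_accuracy(k, trials=100_000):
    """Fraction of correct pairings when k charts are assigned to k
    profiles purely at random (each profile used exactly once)."""
    hits = 0
    for _ in range(trials):
        guesses = list(range(k))
        random.shuffle(guesses)
        hits += sum(g == i for i, g in enumerate(guesses))
    return hits / (trials * k)

print(random_matching_accuracy(3))  # ~0.333: one in three by pure luck
# An astrologer who gets a third of three-way matches right has shown
# nothing beyond this baseline.
```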

Studies that support the claims of astrology have been largely dismissed by the wider scientific community for a “self-attribution” bias – subjects had a prior knowledge of their sign’s supposed characteristics and therefore could not be reliable – or because they could not be replicated. Astrologers are, unsurprisingly, not impressed by scientific efforts to prove or disprove astrology, claiming that scientists are going about it all wrong – astrology is not empirical in the way that, say, physics is: “Experiments are set up by people who don’t have any context for this, even if they were attempting to do something constructive,” says Shelley von Strunckel, American astrologer and horoscope writer whose column appears in The Sunday Times, London Evening Standard, Chinese Vogue, Tatler and other major publications. “It’s like, ‘I’m going to cook this great French meal, I’ve got this great cook book in French – but I don’t speak French.’”

But despite a preponderance of scientific evidence to suggest that the stars do not influence our lives – and even personally demonstrable evidence such as that financial windfall your horoscope told you to expect on the eighth of the month failed to materialize – people continue to believe. (It’s important to note, however, that some astrologers balk at the notion of “belief” in astrology: “It’s not something you believe in,” says Strunckel. “It’s kind of like believing in dinner. The planets are there, the cycles of nature are there, the full moons are there, nature relates to all of that, it’s not something to believe in.”)

Why people continue to read and give credence to their horoscopes is most often explained by psychologist Bertram Forer’s classic 1948 “self-validation” study. Forer gave his students a personality test, followed by a description of their personality supposedly based on the results of the test. In reality, there was only ever one description, cobbled together from newspaper horoscopes, and everyone received the same one. Forer then asked them to rate, on a scale of 0 (very poor) to 5 (excellent), the description’s accuracy; the average score was 4.26 – pretty remarkable, unless all the students really were exactly the same. Forer’s observation was quickly dubbed the Forer effect and has often been replicated in other settings.

Part of what was happening was that the descriptions were positive enough, without being unbelievably positive:

You have a great deal of unused capacity which you have not turned to your advantage. While you have some personality weaknesses, you are generally able to compensate for them.

and, importantly, vague enough to be applicable to a wide audience:

At times you have serious doubts as to whether you have made the right decision or done the right thing.

At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved.

Even horoscope writers admit that some of their success rests in not saying too much. Says Cainer, “The art of writing a successful horoscope column probably confirms what all too many skeptics and cynics eagerly clutch to their bosoms as charlatanry. Because it’s writing ability that makes a horoscope column believable… ultimately a successful column will avoid specifics wherever possible. You develop the art of being vague.”

The other element of the Forer effect is that the individual readers did most of the work, shaping the descriptions to fit themselves – not for nothing is the Forer effect also called the Barnum effect, after the famous showman’s claim that his shows “had something for everyone”. French, the Goldsmiths psychologist, notes that people who read horoscopes are often invested in making their horoscope right for them. “If you buy into the system and the belief, it’s you that’s kind of making the reading appear to be more specific than it actually is,” he explains. “Most days for most people are a mix of good things and bad things, and depending on how you buy into the system… if you’re told to expect something good that day, then anything good that happens that day is read as confirmation.”

Astrologer Cainer has another, more practical explanation for why people read horoscopes: “It’s because they’re there.” There’s very much a “can’t hurt,” “might help” perception of horoscopes; at the same time, newspaper horoscopes, he says, also allow casual horoscope readers “a glorious sense of detachment: ‘I don’t believe in this rubbish but I’ll have a look.’” This resonates with what Julian Baggini, a British philosopher and writer for The Guardian, says about why people read horoscopes: “No matter how much the evidence is staring someone in the face that there’s nothing in this, there’s that ‘Well, you never know.’” (Even if you do know.)

But “you never know” and even the Forer effect don’t entirely explain the longevity of a form that many critics complain has no business being in a newspaper – so maybe there’s something else going on. When French taught a course with a section on astrological beliefs, he’d sometimes ask on exams: “Does astrology work?” “Basically, the good answers would be the ones that took apart the word ‘work,’” he says. On the one hand, the straightforward answer is that, according to a host of scientific studies, astrology does not work. “But you’ve then got the other question… ‘Does astrology provide any psychological benefit, does it have a psychological function?’” he says. “The answer to that is, sometimes, yes.”

Psychologists see people on a scale between those who have what’s called an external locus of control, where they feel that they are being acted upon by forces out of their influence, and people with an internal locus of control, who believe that they are the actors. “Not so surprisingly, people who believe in astrology tend to have an external locus of control,” says French. That observation tallies with what other psychologists say: Margaret Hamilton, a psychologist at the University of Wisconsin who found that people are more likely to believe favorable horoscopes, noted that people who are believers in astrology also tend to be more anxious or neurotic.

Newspaper horoscopes, she said, offer a bit of comfort, a sort of seeing through the veil on a casual level. French agrees: astrology and newspaper horoscopes can give people “some kind of sense of control and some kind of framework to help them understand what’s going on in their lives.” It’s telling, he notes, that in times of uncertainty – whether on a global, national or personal level – astrologers, psychics and others who claim to offer guidance do a pretty brisk business; the apparent rise in American belief in astrology, per the NSF survey published in 2014, may have something to do with recent financial uncertainty. Cainer agreed that people take horoscopes more seriously when they’re in distress: “If they’re going through a time of disruption, they suddenly start to take what’s written about their sign much more seriously…. If you’re worried and somebody tells you not to worry, you take that to heart.” (On whether astrologers are taking advantage of people, French is clear: “I am not saying that astrologers are deliberate con artists, I’m pretty sure they’re not. They’ve convinced themselves that this system works.”)

Philosophically, there is something about reading horoscopes that does imply a placing of oneself. As Hamilton notes, “It allows you to see yourself as part of the world: ‘Here’s where I fit in, oh, I’m Pisces.’” Looking deeper, Baggini, the philosopher, explains, “Human beings are pattern seekers. We have a very, very strong predisposition to notice regularities in nature and the world, to the extent that we see more than there are. There are good evolutionary reasons for this; in short, a false positive is less risky than failure to observe a truth.” But, more to the point, “We also tend to think things happen for a reason, and we tend to leap upon whatever reasons are available to us, even if they’re not entirely credible.”

Horoscopes walk a fine line, and, for many people, an appealing one. “On the one hand, people do want to feel they have some agency or control over the future, but on the other, it’s rather frightening to think they have too much,” explained Baggini. “So a rather attractive world view is that there is some sense of unfolding benign purpose in the universe, in which you weren’t fundamentally responsible for everything, but were given some kind of control… and astrology gives us a bit of both, a balance.”

Astrologers might agree. “I’m a great believer in free will,” says Cainer. “There’s a lovely old Latin phrase that astrologers like to quote to each other: Astra inclinant, non necessitant. The stars suggest, but they don’t force… I like to think that astrology is about a way of fighting planetary influences, not entirely about accepting them.”

But really, at the end of the day, are horoscopes doing more harm than good, or more good than harm? It all depends on whom you ask (and, of course, on the appropriateness of the advice being given). Von Strunckel and Cainer, obviously, see what they do as helping people, although both acknowledge that, as von Strunckel says, “Astrology isn’t everybody’s cup of tea.”

Richard Dawkins, the outspoken humanist and militant atheist, came out strongly against astrology and horoscopes in a 1995 Independent article published on New Year’s Eve, declaring, “Astrology not only demeans astronomy, shrivelling and cheapening the universe with its pre-Copernican dabblings. It is also an insult to the science of psychology and the richness of human personality.” Dawkins also took newspapers to task for even entertaining such “dabblings”. More recently, in 2011, British rockstar physicist Brian Cox came under fire from astrologers for calling astrology a “load of rubbish” on his Wonders of the Solar System program on the BBC. After the BBC fielded a raft of complaints, Cox offered a statement, which the broadcaster probably wisely chose not to release: “I apologize to the astrology community for not making myself clear. I should have said that this new age drivel is undermining the very fabric of our civilization.”

What Dawkins and Cox may not want to acknowledge is that humans don’t tend to make decisions based on a logical, rational understanding of facts (there’s a reason “cognitive dissonance” is a thing) – and horoscope reading might be just as good a system of action as any. “Most people don’t base their views and opinions on the best empirical evidence,” French says. “There are all kinds of reasons for believing what you believe, not least of which is believing stuff because it just kind of feels good.”

At their heart, horoscopes are a way to offset the uncertainty of daily life. “If the best prediction you’ve got is still completely rubbish or baseless, it’s better than no prediction at all,” says Baggini. “If you have no way of controlling the weather, you’ll continue to do incantations and dances, because the alternative is doing nothing. And people hate doing nothing.”

Can Kenya Light the Way Toward a Clean-Energy Economy?

Smithsonian Magazine

In the United States, we tend to think of electricity as something that is either on or off. You either have power, or you don’t. But in Nairobi, Kenya, electricity is experienced more like the hot water in an old building: sputtering, low-voltage brownouts contrast with sudden voltage spikes and power surges. Inconsistent electrical power does more harm than a suddenly ice-cold shower; refrigerators, computers and manufacturing equipment are frequently damaged, and routines are disrupted. Power outages cost the country an estimated 2 percent of gross domestic product annually.

That’s because the country’s power plants can provide just 1.2 gigawatts of electricity. The United States has more than 960 gigawatts of capacity, and one of its largest utilities, American Electric Power, serves about 5 million customers with its 38 gigawatts of generating capacity. In Kenya, that 1.2 gigawatt capacity serves more than 10 million customers, including homes, businesses, and industry—less than 30 percent of the entire country’s population. The remaining 70 percent have no electricity at all.
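To make those numbers concrete, here is a back-of-the-envelope comparison of nameplate generating capacity per customer, using only the figures quoted above. It is a rough illustration: it ignores load factors, transmission losses and how capacity is actually shared.

```python
# Rough generating capacity per customer, using the article's figures.
# Nameplate capacity only; ignores load factors and transmission losses.

US_AEP_CAPACITY_W = 38e9   # American Electric Power: 38 GW
US_AEP_CUSTOMERS = 5e6     # about 5 million customers

KENYA_CAPACITY_W = 1.2e9   # Kenya's entire grid: 1.2 GW
KENYA_CUSTOMERS = 10e6     # more than 10 million customers

us_per_customer = US_AEP_CAPACITY_W / US_AEP_CUSTOMERS   # 7,600 W
kenya_per_customer = KENYA_CAPACITY_W / KENYA_CUSTOMERS  # 120 W

print(f"AEP:   {us_per_customer:,.0f} W per customer")
print(f"Kenya: {kenya_per_customer:,.0f} W per customer")
print(f"Ratio: {us_per_customer / kenya_per_customer:.0f}x")  # ~63x
```

By this crude measure, a single American utility has on the order of 60 times more generating capacity per customer than Kenya’s entire grid.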

Kenya’s “Vision 2030” plan, widely praised when it was announced in 2008, calls for 10 percent annual economic growth, and estimates that at least 20 gigawatts of new energy capacity will need to come online in the next decade to support it. To achieve that goal, dozens of efforts are underway to aggressively expand Kenya’s electric power infrastructure and, in doing so, to “leapfrog” over fossil fuels toward a clean-energy economy.

The idea of leapfrogging first emerged when cellphones swept the continent, bypassing traditional landline technology. The number of cellphones in use in Africa ballooned to more than 615 million in 2011, from 16.5 million a decade earlier – a surge that has since spurred optimism among everyone from local politicians and NGOs to international businesses and media that other cutting-edge technologies could follow a similar trajectory. Because of the opportunities opened up by Vision 2030 and other factors, nowhere does this excitement run higher than in Kenya’s energy sector.

Taking the leap

The lack of an incumbent telecommunications industry or existing telephony infrastructure played a critical role in the cellphone’s success in Africa. For many, the absence of existing energy infrastructure suggests that Kenya has a similar opportunity to adopt and scale new technologies quickly, avoiding the mistakes of the past – in this case, the fossil-fuel-lined path to development.

“In many ways, the beauty of Africa is that you're almost starting with a blank canvas,” says Bob Chestnutt, a London-based project director for Aldwych International, which is developing a 300-megawatt wind farm near Kenya’s Lake Turkana. “You really do have the opportunity to be innovative. You're not dealing with the legacy of 40, 50 years of fossil generation.”

Renewables to the rescue?

Kenya is particularly well positioned for an end-run around fossil fuels. Its location along the equator gives the country plentiful sunlight (on average, each square meter collects an estimated 4.5 kilowatt-hours of solar radiation per day, which can be converted to electricity; a more northern city like Boston would expect about 3.6 kilowatt-hours per square meter per day). In the Lake Turkana region, Kenya also has some of the world’s greatest wind potential. And the Great Rift Valley, which carves a jagged arc through the heart of Kenya, sits atop a hot spot in the earth’s crust that creates ideal conditions for geothermal wells. At a policy level, it doesn’t hurt that Kenya has dropped its import duties on renewable energy technologies.
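To see what that insolation difference means for a single square meter of solar panel, here is a minimal sketch; the 15 percent panel efficiency is an assumed typical value, not a figure from this article.

```python
# Daily electrical output of one square meter of solar panel, using the
# insolation figures above. The 15% panel efficiency is an assumption.

PANEL_EFFICIENCY = 0.15       # assumed typical PV efficiency

kenya_insolation = 4.5        # kWh per m^2 per day (article figure)
boston_insolation = 3.6       # kWh per m^2 per day (article figure)

kenya_output = kenya_insolation * PANEL_EFFICIENCY    # ~0.68 kWh/day
boston_output = boston_insolation * PANEL_EFFICIENCY  # ~0.54 kWh/day

print(f"Kenya:  {kenya_output:.2f} kWh/day per m^2")
print(f"Boston: {boston_output:.2f} kWh/day per m^2")
print(f"Kenya's edge: {kenya_insolation / boston_insolation - 1:.0%}")  # 25%
```

In other words, the same panel simply does about a quarter more work near the equator than it would in New England.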

Much of the nation’s energy today comes from large hydropower projects, many of them part of a series of linked dams and reservoirs known as the Seven Forks scheme. Located primarily along the Tana and Turkwel rivers, these dams provide about 800 megawatts of electricity to Kenya’s grid. However, there’s little room for hydropower to grow; many rivers run dry for a good portion of the year, limiting their ability to provide consistent electricity.

Developers have already begun to tap into new energy opportunities, with geothermal leading the way. By next year, a series of geothermal wells will provide 280 megawatts of power to the grid, up from 157 megawatts today. By 2030, geothermal power is expected to meet more than a quarter of the country’s energy needs. “Geothermal is a very stable, sustainable source,” says Gregory Ngahu, a spokesman for Kenya Power, the nation’s only electric utility. “It’s quite robust.” 

Wind and hydropower projects account for more than 95 percent of the rest of the new capacity planned through 2030. Yet renewables are not a shoo-in for Kenya’s electrification push. Over the last few years, Kenya has discovered oil, natural gas, and coal deposits within its borders, tempting some to consider expansion of traditional fossil-fuel capacity. Hydropower has stumbled as climate change-linked droughts reduce water flow through critical rivers. And solar isn’t part of the Vision 2030 plan.

Another challenge for renewables is the need for new infrastructure to connect large projects to the grid. Led by state-backed organizations, Kenya’s power industry is building out several transmission lines to import power from neighboring Ethiopia, and also to bring electricity from new renewable projects to population centers where it’s needed. Developers of the Lake Turkana wind farm, for example, are building a 428-kilometer (266-mile) high-voltage transmission line from Lake Turkana to the existing grid. Crossing the geothermal-rich Rift Valley, the line will pave the way for future energy projects, Aldwych’s Chestnutt says. “Now, developers will take the initiative.”

Cutting the cord

Despite these efforts, the majority of Kenya’s population won’t gain access to electricity from these sources. Even though urban areas are growing dramatically, most Kenyans live far from the grid in rural towns and villages. And those who do live close to the grid can’t always tap into its benefits: Kenya Power charges approximately $400 per household for a grid connection.

“That is so far away, if you’re a poor Kenyan family,” says Jon Bøhmer, founder of Nairobi-based Kyoto Energy. “There are many places where the power lines cross over people’s huts and they have no way to connect to the grid.”

As a result, there’s a growing recognition that serving these areas will require a different approach. Locating a variety of smaller-scale resources in a single location, close to demand, could help expand energy access more quickly. Startups, nonprofits and even Kenya Power are all beginning to look to solar-based microgrids—small, self-contained power grids—as one possible solution.

While individual solar lighting systems, like the d.Light, have received much positive press in the U.S. and Europe, microgrids have the potential to power local industries. Bøhmer, a Norwegian software engineer who in 2006 moved with his Kenyan wife to Thika, near Nairobi, has introduced a solar microgrid system specifically for this market.

“Silicon Valley entrepreneurs come in saying, ‘We raised $3 million from a venture capitalist in San Francisco,’ with their 3-watt solar panel and LED light,” says Bøhmer. “They think they’ve sorted it out. Sure, now someone has lights and can charge their mobile. Great. But in the West, when you got power, you could run a machine, and build a business. That business could grow and build an entire industry. That kind of story is not possible, if you’re going to do it with these dead-end, stop-gap solutions.”

Bøhmer’s solution, dubbed the Butterfly Solar Farm, uses concentrating solar photovoltaics (PV) to generate electricity and captures solar thermal energy to heat water. His first customer is a commercial tea producer whose operations include both agricultural and drying facilities.

The first pilot project, planned for later this year, will place the concentrating system’s solar-tracking mirrors, or heliostats, among the bushes in the existing tea fields—a kind of triple-cropping arrangement that produces tea along with 1 megawatt of electricity and 2.5 megawatts of heat. The heat is used in the drying facility, reducing dependence on wood-fired heat, and the electricity provides power to 7,000 on-site homes. Bøhmer estimates that the project will have a four-year payback period.
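A rough division puts the pilot’s residential figures in perspective; it ignores the plant’s capacity factor and distribution losses, so it is an upper bound on average supply.

```python
# Average share of the Butterfly Solar Farm's electrical output per home,
# using the pilot figures above. An upper bound: ignores capacity factor
# (the sun doesn't shine all day) and distribution losses.

ELECTRIC_CAPACITY_W = 1_000_000   # 1 MW of electricity
HOMES_SERVED = 7_000

per_home_w = ELECTRIC_CAPACITY_W / HOMES_SERVED
print(f"~{per_home_w:.0f} W per home")  # ~143 W: lights, phone charging,
                                        # a radio; not a Western-style load
```

That is modest by Western standards, but roughly fifty times the output of the 3-watt panel-and-LED kits Bøhmer criticizes.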

In the northern part of the country, Kenya Power has 10 pilot microgrids with capacities ranging from 5 to 10 megawatts. Most were built in off-grid areas over the past several years using diesel generators; today, the utility is beginning to add solar to the mix. During the day, solar power feeds directly into the regional distribution network, and at night, diesel generation fills the gap.
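The dispatch logic behind these hybrid systems is simple in outline. Here is a toy sketch of the solar-by-day, diesel-by-night pattern described above; the load and solar profiles are invented for illustration.

```python
# Toy dispatch rule for a solar-plus-diesel hybrid microgrid, as described
# above: solar serves the load when available, diesel fills the gap.

def dispatch(load_kw: float, solar_kw: float):
    """Return (solar_used, diesel_used) in kW for one hour of operation."""
    solar_used = min(load_kw, solar_kw)
    diesel_used = max(0.0, load_kw - solar_used)
    return solar_used, diesel_used

# A crude daily cycle: solar available 6:00-18:00, flat 400 kW load.
for hour in (3, 12, 21):
    solar_available = 500.0 if 6 <= hour < 18 else 0.0
    s, d = dispatch(load_kw=400.0, solar_kw=solar_available)
    print(f"{hour:02d}:00  solar={s:.0f} kW  diesel={d:.0f} kW")
```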

“Operating diesel plants becomes very expensive and unsustainable,” says Kenya Power’s Ngahu. “We are eventually going solar throughout.”

Terry Mohn, CEO of General Microgrids, which has offices in Nairobi and San Diego, California, advocates for “opportunistic” microgrids that leverage a wider range of local energy resources, such as solar, biogas, or small-scale hydro. No matter what the energy source, microgrids can provide reliable shared energy infrastructure while slashing the need for large-scale transmission infrastructure.

Efficiency first

If these efforts seem small, that’s because they are.

Kenya’s per-capita consumption of electric power in 2010 was less than one-tenth the global average for nations considered middle income, such as Argentina, India, and South Africa. Even with expanded generating capacity, the available supply for households isn’t likely to grow quickly. Because much of the planned growth in Kenya’s power is intended to support industrialization and tourism, limiting the growth of residential use will be critical to the success of the plan.

For that reason, one of the key “leapfrog” opportunities in Kenya is the chance to develop an energy policy in which efficiency comes first. Implemented at the outset, efficiency efforts can give Kenya more bang for every buck it invests in new capacity.

One way to improve efficiency of the overall system is to meet some energy demands with heat instead of electricity. The central government has introduced programs aimed at spreading the use of solar thermal water heaters to harness the sun’s warmth for household water heating. Some innovators are looking for new ways to satisfy thermal needs on the industrial side, too. “Many industrial operations are still using wood fuel to power their boilers,” says Ernest Chitechi, Outreach and Partnership Manager for the nonprofit Kenya Climate Innovation Center, or CIC. As a substitute, the organization is working with entrepreneurs to develop a biomass briquette based on pineapple waste.

But the real challenge will be in controlling electricity usage where there is no substitute.

Pre-payment brings power to the people

Pre-paid electrical meters mirror the ubiquitous pre-paid cellphone. Users can purchase energy “tokens” from a handful of providers (including mobile payment providers). Each token has a 20-digit number that can be entered into an electric meter to unlock the purchased amount of electricity. Users pay higher prices per kilowatt-hour as they consume more electricity.
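That rising price per kilowatt-hour works like an increasing-block tariff. Here is a minimal sketch of the mechanism; the tier boundaries and rates are invented for illustration, not Kenya Power’s actual schedule.

```python
# A minimal increasing-block tariff, as described above. The tiers and
# rates here are invented for illustration.

TIERS = [
    (50, 0.02),            # first 50 kWh at $0.02/kWh (hypothetical)
    (150, 0.05),           # next 100 kWh at $0.05/kWh (hypothetical)
    (float("inf"), 0.10),  # everything beyond at $0.10/kWh (hypothetical)
]

def bill(kwh_used: float) -> float:
    """Total charge for kwh_used under the block tariff above."""
    total, lower = 0.0, 0.0
    for upper, rate in TIERS:
        block = min(kwh_used, upper) - lower
        if block <= 0:
            break
        total += block * rate
        lower = upper
    return total

print(bill(40))   # 0.80  -- all usage falls in the cheap block
print(bill(200))  # 11.00 -- 50*0.02 + 100*0.05 + 50*0.10
```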

These increases are quickly recognizable by the user, encouraging conservation. At least, that’s the idea. In practice, some complain that rate information isn’t transparent enough, and that different token providers charge wildly variable service fees, muddying the pricing signals customers receive. Further consumer education is likely needed to ensure the meters achieve these goals.

But pre-paid meters have another advantage. Like the rest of Kenya’s electrification initiative, they feed into the country’s broader economic development plan: The program is supporting new job growth, as vendors are needed to sell the energy tokens. In the mobile market, a similar marketing model created 100,000 new direct jobs.

Pre-payment has also helped the utility shore up cash reserves, because customers can’t miss payments. In September 2012, Business Daily Africa reported that by June of 2011, Kenya Power had accumulated Sh7.4 billion ($84 million) in unpaid electricity bills for the year. With pre-payment, that revenue arrives up front, and the utility can reinvest it in its electrification program.

Renewable energy entrepreneurs are looking to the success of the model as a way to introduce their products to rural Kenyans, as well. “In most cases, people may not have adequate resources to invest in the upfront costs,” says Chitechi. “It’s one of the biggest barriers to adoption.”

Stima, Angaza and Azuri are among the startups offering pay-as-you-go solar, which lets users install a few small solar panels at a time with no up-front cost. To access power from their panels, customers buy energy credits using a mobile payment system. Unlike the utility-installed pre-paid meters, however, solar customers can eventually pay off their solar panels and permanently “unlock” access to the electricity. Two entrepreneurs at the CIC are also looking at ways to leverage pre-payment to finance the up-front cost of renewable energy systems.
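In outline, pay-as-you-go solar couples metered access with rent-to-own financing. Here is a minimal sketch under that reading; the prices and credit amounts are invented, and real systems from companies like Angaza or Azuri add metering hardware, mobile-money integration and tamper protection.

```python
# Minimal pay-as-you-go solar account, as described above. Prices and
# credit amounts are invented for illustration.

class PaygSolarAccount:
    def __init__(self, purchase_price: float):
        self.purchase_price = purchase_price  # total cost to own the panel
        self.paid = 0.0                       # cumulative payments
        self.credit_days = 0.0                # prepaid days of access

    def top_up(self, amount: float, days_per_unit: float = 1.0):
        """A mobile-money payment buys days of access and also counts
        toward eventual ownership of the panel."""
        self.paid += amount
        self.credit_days += amount * days_per_unit

    @property
    def unlocked(self) -> bool:
        # Once cumulative payments cover the purchase price,
        # access is permanent and no further top-ups are needed.
        return self.paid >= self.purchase_price

    def has_power_today(self) -> bool:
        if self.unlocked:
            return True
        if self.credit_days > 0:
            self.credit_days -= 1
            return True
        return False

account = PaygSolarAccount(purchase_price=100.0)
account.top_up(5.0)               # buys about five days of light
print(account.has_power_today())  # True
print(account.unlocked)           # False: $95 still to go
```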

If innovations like these can support cleaner, more efficient energy use for urban and rural customers alike, Kenya just may have a chance to make the hop toward a strong, low-carbon economy. 

To Understand the Elusive Musk Ox, Researchers Must Become Its Worst Fear

Smithsonian Magazine

Joel Berger is on the hunt. Crouching on a snow-covered hillside, the conservation biologist sports a full-length cape of brown faux fur and what looks to be an oversized teddy bear head perched on a stake. Holding the head aloft in one hand, he begins creeping over the hill’s crest toward his target: a herd of huddling musk oxen.

It’s all part of a plan that Berger, who is the wildlife conservation chair at Colorado State University, has devised to help protect the enigmatic animal that roams the Alaskan wilderness. He slowly approaches the unsuspecting herd and makes note of how the musk oxen react. At what distance do they look his way? Do they run away, or stand their ground and face him? Do they charge? Each of their reactions will give him vital clues to the behavior of what has been a notoriously elusive study subject. 

Weighing up to 800 pounds, the Arctic musk ox resembles a smaller, woollier cousin of the iconic American bison. But the name is a misnomer; the creatures are more closely related to sheep and goats than to oxen. These quadrupeds are perfectly adapted to the remote Arctic wasteland, sporting a coat of thick fur with an insulating underlayer that seals them away from harsh temperatures.

Perhaps most astonishing is how ancient these beasts are, having stomped across the tundra for a quarter of a million years relatively unchanged. "They roamed North America when there were giant lions, when there were woolly mammoths," Berger told NPR's Science Friday earlier this year, awe evident in his voice. "And they're the ones that have hung on." They travel in herds of 10 or more, scrounging the barren landscape in search of lichen, grasses, roots and moss.

But despite their adaptations and resilience, musk oxen face many modern threats, among them human hunting, predation by grizzlies and wolves, and the steady effects of climate change. Extreme weather events—dumps of snow, freezing rain or high temperatures that create snowy slush—are especially tough on musk oxen. With their “short legs and squat bodies,” they can’t easily bound away like a caribou, explains Jim Lawler, an ecologist with the National Park Service.

In the 19th century, over-hunting these beasts for their hides and meat led to a statewide musk ox extinction—deemed "one of the tragedies of our generation" in a 1923 New York Times article. At the time, just 100 musk oxen remained in North America, trudging across the Canadian Arctic. In 1930, the U.S. government shipped 34 animals from Greenland to Alaska's Nunivak Island, hoping to save a dwindling species.

It worked: by 2000, roughly 4,000 of the charismatic beasts roamed the Alaskan tundra. Yet in recent years that growth has slowed, and some populations have even started to decline. 

Which brings us back to how little we know about musk oxen. Thanks to their tendency to live in sparse groupings in remote regions that are near-impossible for humans or vehicles to traverse, no one knows the reason for today’s mysterious decline. The first part of untangling the mystery is to figure out basic musk ox behavior, including how they respond to predators. 

This is why Berger is out in the Arctic cold, dressed up as a musk ox’s worst nightmare. 

Photo captions (all images courtesy of Joel Berger):

The name musk ox is a bit of a misnomer: the creatures don’t produce true musk and are more closely related to sheep and goats than to oxen.

In recent years, Berger began similar work on Wrangel Island, a Russian nature preserve in the Arctic Ocean, where musk oxen face the threat of an increasing population of polar bears on land.

These prehistoric beasts are known to face their predators head on, huddling together with their young tucked behind.

Berger poses as a grizzly bear in the Alaskan wilderness, slowly approaching a herd of musk oxen.

Musk oxen have a thick, insulating layer of underwool that protects them in harsh winter temperatures.

When Alaskan herds lack males, the musk oxen flee from their grizzly predators, which means that some of them, most often the babies, will get eaten.

When a charging musk ox seems like it could be serious, Berger stands up out of his crouched position and throws off the bear head. This move confuses the burly beasts, halting the attack.

When full grown, musk oxen stand up to five feet tall and weigh up to 800 pounds. These long-haired ungulates survive in the desolate Arctic landscape by eating roots, mosses, lichens and grasses.

Becoming the other

Donning a head-to-toe grizzly bear costume to stalk musk oxen wasn’t Berger’s initial plan. He’d been working with these animals in the field since 2008, studying how climate change was affecting the herds. Along with the National Park Service, he spent several years tracking the herds with radio collars and watching from a distance how they fared in several regions of western Alaska.

During this work, scientists began to notice that many herds lacked males. This was likely due to hunting, they surmised. In addition to recreational trophy hunting, musk oxen are important to Alaskan subsistence hunters, and the Alaska Department of Fish and Game grants a limited number of permits each year for taking a male musk ox. This is a common wildlife management strategy, explains Lawler: "You protect the females because they're your breeding stock." 

But as the male populations declined, park officials began finding that female musk oxen and their babies were also dying.

In 2013, a study published in PLOS ONE by members of the National Park Service and Alaska’s Department of Fish and Game suggested that gender could be playing a key role. In other animals, like baboons and zebras, males play an important part in deterring predators, either by making alarm calls or by staying behind to fight. But no one knew whether musk oxen had similar gender roles, and the study quickly came under criticism for a lack of direct evidence supporting the link, says Lawler.

That’s when Berger had his idea. He recalls having a conversation with his park service colleagues about how difficult these interactions would be to study. “Are there ways we can get into the mind of a musk ox?” he thought. And then it hit him: he could become a grizzly bear. “Joel took that kernel of an idea and ran with it,” says Lawler.

This wouldn’t be the first time Berger had walked in another creature’s skin in the name of science. Two decades earlier, he was investigating how carnivore reintroduction programs for predators such as wolves and grizzlies were affecting the flight behavior of moose. In that case, he dressed up as the prey, donning a moose costume. Then he covertly plunked down samples of urine and feces from predators to see if the real moose reacted to the scent.

It turns out that the creatures learned from past experiences: Mothers who had lost young to predators immediately took notice, while those who lost calves to other causes remained “blissfully ignorant” of the danger, he says.

To be a grizzly, Berger would need an inexpensive and extremely durable design that could withstand being bounced around "across permafrost, across rocks, across ice, up and over mountains and through canyons," he explains. The most realistic Hollywood costumes cost thousands of dollars, he says, and he couldn't find anyone willing to "lend one on behalf of science." 

So Berger, who is also a senior scientist at the Wildlife Conservation Society, turned to the WCS’s Bronx Zoo to borrow his teddy-bear-like ensemble. He then recruited a graduate student to make a caribou garment, so he could test how the musk oxen would react to a faux predator versus an unthreatening fellow ungulate.

After comparing the two disguises in the field, he found that the bear deception worked. When dressed as a caribou, he's largely ignored. But when he dons his grizzly suit, the “musk oxen certainly become more nervous,” he says. Now it was time to start gathering data.

The trouble with drones

Playing animal dress-up is far from a popular method for studying elusive creatures. More common strategies include footprint tracking, GPS collars and, most recently, drones. Capable of carrying an assortment of cameras and sensors, drones have grown in popularity for tracking elusive creatures and mapping hard-to-reach terrain. They’ve even been deployed to collect samples – among other things, whale snot.

But drones are far from perfect when it comes to understanding the complex predator-prey drama that unfolds between bear and musk ox, for several reasons. 

They’re expensive, challenging to operate and finicky in adverse weather. “You can’t have it all,” says Mary Cummings, a mechanical engineer at Duke University who has worked with drones as a wildlife management tool in Gabon. Cummings found that the heat and humidity there caused the machines to burst into flames. Meanwhile, Berger worries the Arctic cold would diminish battery life.

Moreover, when studying elusive creatures, the key is to leave them undisturbed so you can witness their natural behavior. But drones can cause creatures distress. Cummings learned this firsthand while tracking African elephants from the air. Upon the drone’s approach, the elephants’ trunks rose. “You could tell they were trying to figure out what was happening,” she says. As the drones got closer, the elephants began to scatter, with one even slinging mud at the noisemaker.

The problem, the researchers later realized, was that the drone mimics the creatures’ only nemesis: the African bee.

"Drones have kind of this cool cache," says Cummings. But she worries we've gone a little drone-crazy. "I can't open my email inbox without some new announcement that drones are going to be used in some new crazy way that's going to solve all our problems," she says. Berger agrees. "Sometimes we lose sight about the animals because we are so armed with the idea of a technological fix," he adds.

Another option for tracking hard-to-find animals is hiding motion-activated cameras that can snap images or video of unsuspecting subjects. These cameras exploded on the wildlife research scene after the introduction of the infrared trigger in the 1990s, and have provided unprecedented glimpses into the daily lives of wild animals ever since.

For musk oxen, however, observing from the sky or from covert cameras on the ground wasn't going to cut it.

Musk oxen are scarce. But even scarcer are records of bears or wolves preying on the massive creatures. In the last 130 years, Berger has found just two documented cases. That meant that to understand musk ox herd dynamics, Berger needed to get up close and personal with the burly beasts—even if doing so could put him in great personal danger. “We can’t wait another 130 years to solve this one,” he says.

When he first suggested his study technique, some of Berger’s colleagues laughed. But his idea was serious. By dressing as a grizzly, he hoped to simulate these otherwise rare interactions and study how musk ox react to threats—intimate details that would be missed by most other common study methods.

It's the kind of out-of-the-box thinking that has helped Berger tackle tough conservation questions throughout his career. "We call it Berger-ology," says Clayton Miller, a fellow wildlife researcher at WCS, "because you really have no idea what's going to come out of his mouth and somehow he ties it all together beautifully."

Risks of the trade

When Berger started his work, no one knew what to expect. "People don't go out and hang out with musk ox in the winter," he says. Which makes sense, considering their formidable size and helmet-like set of horns. When they spot a predator, musk oxen face the threat head on, lining up or forming a circle side-by-side with their young tucked behind. If the threat persists, a lone musk ox will charge.

Because of the real possibility that Berger would be killed, the park service was initially reluctant to approve permits for the work. Lawler recalls arguing on behalf of Berger’s work to his park service colleagues. “Joel’s got this reputation for … these wacky harebrained ideas,” he remembers telling them. “But I think you have to do these kinds of far-out things to make good advances. What the heck, why not?”

Eventually the organization relented, taking safety measures including sending out a local guide armed with a gun to assist Berger.

Besides the danger, Berger soon found that stalking musk ox is slow-going and often painful work. On average, he can only watch one group each day. To maintain the bear routine, he remains hunched over, scrambling over rocks and snow for nearly a mile in sub-zero temperatures and freezing winds. He sits at a "perilously close" distance to the musk ox, which puts him on edge. 

Between the physical challenge and the nerves, each approach leaves him completely exhausted. "When you are feeling really frostbitten, it’s hard to keep doing it," he says.

But by weathering these hardships, Berger has finally started to learn what makes a musk ox tick. He can now sense when they're nervous, when they'll charge and when it's time to abort his mission. (When things are looking tense, he stands up and throws his faux head in one direction and his cape in the other. This momentarily confuses the charging musk ox, halting them in their tracks.)

So far he’s been charged by seven male musk oxen, never by a female—suggesting that musk oxen do indeed have distinct gender roles in the pack. Moreover, he’s found, the presence of males changes the behavior of the herd: When the group lacks males, the females all flee. This is dangerous because, as any outdoor training course will tell you, “you don't run from a [grizzly] bear," says Berger. When the herds bolt, musk oxen—particularly babies—get eaten. 

The polar bear that wasn't

The charismatic polar bear has long been the poster child of Arctic climate change. Compared to musk ox, “they’re a more direct signal to climate,” says Berger. Polar bears need sea ice to forage for food, and as Earth warms, sea ice disappears. This means that tracking polar bear populations and health gives scientists a window into the impacts of climate change. Their luminous white fur, cuddly-looking cubs and characteristic lumber only make them more ideal as animal celebrities. 

As a result, much of the conservation attention—and funding—has been directed toward polar bear research. Yet Berger argues that musk ox are also a significant piece of the puzzle. "Musk ox are the land component of [the] polar equation," Berger explains. Though their connection to climate is less obvious, the impacts could be just as deadly for these brawny beasts. 

Musk oxen and their ancestors have lived in frosty climates for millennia. "If any species might be expected to be affected by warming temperatures, it might be them," he says.

Moreover, musk oxen have their own charisma – it’s just rare that people get close enough to witness it. The easiest time to spot them, says Berger, is during winter, when the animals’ dark tresses stand in stark contrast to the snowy white backdrop. “When you see black dots scattered across the hillside, they are magic,” he says.

From Greenland to Canada, musk oxen around the world face very different challenges. On Wrangel Island, a Russian nature preserve in the Arctic Ocean, the animals are facing increased encounters with deadly polar bears but fewer direct climate impacts. To get a more complete picture of musk oxen globally, Berger is now using similar methods to study predator interactions with the herds on this remote island, comparing how the creatures cope with threats.

"We can’t do conservation if we don’t know what the problems are," says Berger. "And we don’t know what the problems are if we don’t study them." By becoming a member of their ecosystem, Berger hopes to face these threats head on. And perhaps his work will help the musk ox do the same.

"We won't know if we don't try," he says.