
In Another Giant Leap, Apollo 11 Command Module Is 3D Digitized for Humankind

Smithsonian Magazine

On a Tuesday morning, an hour before the National Air and Space Museum opened to the public, Adam Metallo, a 3D digitization program officer at the Smithsonian Institution, stood in front of the Apollo 11 command module Columbia.

For 40 years, a Plexiglas “skin” had protected the module—which, on July 16, 1969, launched Neil Armstrong, Edwin “Buzz” Aldrin, and Michael Collins to the moon—but now it was nakedly exposed to the air.

More than $1.5 million worth of equipment, from lasers to structured light scanners to high-end cameras, surrounded the module, whose rusty, grizzled surface evoked Andrew Wyeth’s watercolor palette.

“We were asked about scanning the Apollo command module both inside and outside, and we gave an emphatic ‘Maybe’ to that question,” Metallo says. “This is one of the most complicated objects we could possibly scan.”

Typically, Metallo and colleague Vince Rossi, also a 3D digitization program officer at the Institution, have a “grab bag” of about half a dozen categories of tools available for 3D scanning projects, each of which might use one or two tool types. “This project uses pretty much everything we have in our lab,” he says. “We brought the lab on site here to the object.”

False colorization depicting the interior of the spaceship. (Digitization Program Office/SI)

By scanning and photographing the exterior of the module as well, the team can create cross-sections and, in the final digital product, offer perspectives on what it would be like to sit inside the module. Data will also be made available to those who want to make a 3D print of the object. (Although a full-size print is theoretically possible, Rossi says scaled models are much more likely.)

“Three-dimensional printing is a great way to engage kids by creating a replica of such an iconic object either in the classroom or at home,” he says. “But the online model is really what we’re excited about.”

That online model will engage both young and older visitors, according to Allan Needell, curator of the human spaceflight Apollo collections at the museum.

“They could look at old film and pictures, but now we have an opportunity to basically present to them an experience which is visually almost identical to if you were allowed to go in and lie down on one of those seats and look around,” he says.

The command module has been on display in the museum’s “Milestones” gallery since the museum opened in 1976, after an earlier stint at the Arts and Industries Building, where it was installed in 1970. It will become the centerpiece of the museum’s new gallery “Destination Moon,” which will open at the end of the decade.

A black and white rendering of the laser-collected data depicting the interior of the spacecraft and the seats of Neil Armstrong and Michael Collins. (Digitization Program Office/SI)

Laser scanners struggle with certain reflective and shiny surfaces, which for the module presents quite a problem. “A very dark and shiny surface doesn’t reflect light back into the sensor as accurately as a nice, clean matte, white surface,” says Metallo.

And most importantly for this project, the interior of the module is incredibly cramped and complex, and, to make matters more challenging, Metallo and Rossi aren’t allowed to touch the artifact, let alone to climb inside.

“We have a few tricks up our sleeves,” Metallo says with a smile.

He was also cheerful and philosophical about the technical challenges. “That’s integral to the story that we want to tell by scanning this object: what it’s like in there,” he says. “We can see the conditions that these astronauts went through and lived with. By scanning the interior with such fidelity and expressing that in 3D models online and potentially in virtual reality, we’re going to be able to give the public a really profound experience and understanding of the object.”

Unable to physically enter the module, the team used cameras on mechanical “arms” to reach inside and capture the interior’s nooks and crannies. Laser devices capture one million points per second. “It’s similar to a laser tape measure” capturing geometry, Rossi says, noting that the team will map photos onto the three-dimensional data. “We marry those two data sets,” he adds.
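As an illustration of what it means to “marry” the photo set to the laser geometry, the sketch below projects laser-scanned points into one calibrated photograph and samples a color for each point. This is a minimal sketch of the general technique, not the Smithsonian team's actual pipeline; the camera intrinsics K, pose (R, t), and function names are illustrative assumptions, and occlusion handling is omitted.

```python
# A minimal sketch (not the Smithsonian team's actual pipeline) of how a color
# photograph can be "married" to laser-scanned geometry: each 3D point is
# projected into a calibrated photo and picks up the pixel color it lands on.
# The camera intrinsics (K) and pose (R, t) here are illustrative assumptions.
import numpy as np

def colorize_points(points_xyz, image_rgb, K, R, t):
    """Project Nx3 laser points into one photo and sample an RGB color per point."""
    cam = (R @ points_xyz.T + t.reshape(3, 1)).T          # world -> camera frame
    in_front = cam[:, 2] > 0                               # keep only points in front of the camera
    uv = (K @ cam[in_front].T).T
    uv = uv[:, :2] / uv[:, 2:3]                            # perspective divide -> pixel coordinates
    h, w = image_rgb.shape[:2]
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    colors = np.zeros((points_xyz.shape[0], 3), dtype=image_rgb.dtype)
    colors[in_front] = image_rgb[v, u]                     # nearest-pixel color sample
    return colors
```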

A rendering of the data shows the complex instrument panel and the cramped quarters inside the Apollo 11 Command Module. (Digitization Program Office/SI)

Moving the artifact offers the museum a rare chance to study and scan an otherwise inaccessible object. “We recognize that it has enormous cultural significance, as well as engineering and technical significance,” Needell says. “The challenge is how to take an object like this—and experiencing it—and translate it to a new generation of people who don’t have personal familiarity with it, and weren’t following it on their own.”

Although digital experiences of the command module will help engage that younger generation, a core and growing museum audience, the original module will remain on display. “That experience of ‘I actually stood next to the only part of that spacecraft that in 1969 took three astronauts to the vicinity of the moon and two of them to the surface—I stood next to it,’ that iconic feeling of being next to the real thing will be there,” says Needell.

The ingenuity of the module, which had to keep three men alive for two weeks as they hurtled through space, will become even more apparent in the scans, which will demonstrate to viewers how engineers solved technical problems. Seat belts, for example, were configured so that the astronauts had room to put on their space suits.

“We can show all of those kinds of things by being able to virtually tour the command module,” Needell said.

After eight days of scanning—and Rossi says every second will count—the team will process the enormous amount of collected data, then conduct a second round of scanning, sometime in February, to fill in gaps. Each laser scan—about 50 will be completed—gathers 6GB of data, and the 5DSR cameras will take thousands of pictures, 50 megapixels each. When this reporter noted that the hard drive on one of the laptops Rossi and Metallo were using was nearly full, the latter said, “Thanks for noting.”

The two produced an iPhone and demonstrated the 3D display of the museum’s 1903 Wright Flyer, a project that, like the Apollo module scan, was done in collaboration with software company Autodesk. The software, which viewers can use without downloading any plugins, maps and triangulates two-dimensional photos and uses them to create three-dimensional models.
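The “triangulates” step mentioned here is the classic photogrammetry operation of recovering a 3D point from its position in two or more photos taken from known viewpoints. The sketch below shows the textbook linear method under that assumption; it is a generic illustration, not Autodesk's implementation, and the projection matrices P1 and P2 are presumed to be already estimated.

```python
# A generic sketch of the triangulation step photogrammetry tools rely on
# (the article doesn't describe Autodesk's internals; this is the textbook
# linear method, not their implementation). Given the same feature seen in
# two calibrated photos, recover its 3D position by homogeneous least squares.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 camera projection matrices; x1, x2: matching (u, v) pixel coordinates."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # homogeneous least-squares solution
    X = Vt[-1]
    return X[:3] / X[3]                # back to Euclidean coordinates
```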

“The version of the viewer that Autodesk helped us develop is a beta version. Of course we’re thinking about what a 1.0 version looks like,” Rossi said.

Brian Mathews, vice president and group chief technology officer at Autodesk, a software company headquartered in San Rafael, California, was on hand with some staff. “This technology is not even on the market yet, and this object is going to be perfect for it,” he said, as Autodesk employee and doctoral student Ronald Poelman demonstrated on a computer how the software pieced together images until the entire command module had been mapped out.

The 3D models won’t seek to displace the presence of the original artifact, says Needell. “The artifact is not to be replaced by digital archives,” he adds. “They complement each other.”

Hosting an Event? Don't Toss Leftover Food, Donate It

Smithsonian Magazine

The party is over and guests are dwindling. Then comes the perennial question: What should be done with all that leftover food? A New York-based company called Transfernation has the answer—donate it.

“We use technology to make the process of rescuing food from events and bringing it to communities in need as simple as possible,” says Samir Goel, the company’s co-founder.

Transfernation focuses on food rescue from corporate events, using an Uber-like app. During registered events, the app sends out alerts to potential volunteers nearby, who can boost their karma for the day by helping to transport the food from the event to the nearest shelter or soup kitchen.

“Most people, especially in a city as busy as New York, don't have an entire day to give to volunteering,” says Goel. “But finding 30 minutes to an hour is something that most people can do and is something that most people want to do.”

Goel and his friend, Hannah Dehradunwala, started the company in 2013, while students at New York University. “We realized that hunger wasn’t a problem of producing more but rather better using what we already had,” says Goel.

Many companies have sprouted up in recent years to solve this problem, transferring food from grocery stores, cafeterias and restaurants. But Goel and Dehradunwala had their sights specifically on another prime food waste culprit, corporate events. “Living in a city like New York, it's pretty clear that events are a large source of food waste,” says Goel. “But there’s no real solution to that right now.”

So the duo took it on themselves to pick up and deliver food to local shelters and soup kitchens.

In 2014, they won the Resolution Social Venture Challenge, which provided them with startup capital and support necessary for the budding business to grow. Now composed of several hundred volunteers, their team has rescued over 14,000 pounds of food and counting.

Goel shares his story with Smithsonian.com.

How did Transfernation start?

We started out by just manually rescuing food from events—galas, conferences, lunches, dinners. At first, it was Hannah and me with some of our close friends. But as we progressed, we built a large volunteer base, including college students, corporate employees and individuals already in the social sector. Now, we have around 300 people on our general listserv of volunteers. We've worked with everyone from small businesses to Fortune 500 companies to rescue their extra food.

What is the main goal for Transfernation?

There's two components to what we are doing. One is greater awareness and social education. We want people to be conscious about what they are doing with their extra food. In an ideal world, corporations actually stop having so much extra food.

The second part [of our goal] is that we want to be the event solution. So when someone has an event, it becomes second nature for them to donate that extra food. It shouldn't be something that they have to think about.

Tell me about your new app.

We launched our app this past fall, partnering with volunteers in a group called SocialEffort. SocialEffort is a platform people use to find volunteering opportunities, and we added a real-time component.

Event planners can input a few details about an event into the app, which will send out push notifications to registered volunteers on their iPhones or tablets. This works the same way as receiving a calendar notification or a text message, but alerts individuals of a volunteering opportunity with Transfernation in the near future.

These notifications are all based on an algorithm of when the volunteers say they are available and what their interests are. So if someone is walking past a building where an event will soon end, they get a notification that says, 'Hey, there's an opportunity to rescue food that's about five minutes away.’
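Transfernation's actual matching code isn't described beyond this, but the kind of availability-and-proximity filter Goel outlines might look something like the hypothetical sketch below; every class name, threshold, and interest tag in it is an assumption for illustration.

```python
# A hypothetical sketch of the filter described above (the real
# Transfernation/SocialEffort code isn't public): notify only volunteers who
# said they're free now, care about food rescue, and are within walking range.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Volunteer:
    name: str
    lat: float
    lon: float
    available_now: bool
    interests: set

def walking_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance; good enough for a city-scale radius."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def volunteers_to_notify(event_lat, event_lon, volunteers, radius_km=0.5):
    """Return the volunteers who should get a push notification for this event."""
    return [
        v for v in volunteers
        if v.available_now
        and "food rescue" in v.interests
        and walking_distance_km(v.lat, v.lon, event_lat, event_lon) <= radius_km
    ]
```

The 0.5 km default radius is simply a stand-in for the "about five minutes away" walking distance in the example above.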

Is it difficult to find volunteers?

When you go to a career fair, no one is not going to sign up for something like this. It seems really simple, it's a way to give back. No one is going to be like, ‘I don't care about the homeless.’

The question is: What percentage of those people are actually going to dedicate their time? What we've seen is that one out of every ten is going to be a serious, committed volunteer.

Did you run into any legality issues with the donations?

Legality issues were the first thing we had to solve, and one of the first things that most of our clients thought of. What's really interesting is that food donations are actually protected by federal and state law.

[On a federal level, donors are protected under the Bill Emerson Good Samaritan Food Donation Act that President Bill Clinton signed into effect on October 1, 1996 to encourage people to donate food to those in need.]

The standard for giving away food is that the food cannot be knowingly unfit for human consumption. If you have a container of milk that you leave outside for a couple of days then try to give it to someone, that is something you could be liable for. On the other hand, food that you serve at an event that you would take home for your family is not something you would be liable for. 

What we found is that it's more of an education thing. We just had to walk our corporate partners through the actual legal standards. For the most part, organizations really want to be involved. The more that we reassured them that there wasn't a real risk of liability, the more on board they were.

Are you planning to expand Transfernation beyond New York?

For right now, New York is such a massive market to be in, and there are so many events we can't even reach right now. But down the line we see Transfernation as something that is very replicable elsewhere.

We are willing to adapt it for other cities and markets. New York is a public transit based market. But a city like Chicago or Los Angeles is much more car driven, so we would have to adjust how we do operations. But it's something that we're willing and interested to do.

Food waste is everywhere. Do you have plans to expand into other markets beyond event food waste?

We work with events, but we also work with corporate cafeterias. A lot of companies have their own cafeterias.

Are you interested in collecting leftover food at restaurants, grocery stores and universities?

There's other companies that do food rescue, like City Harvest, and they do a really phenomenal job working with restaurants and grocery stores. We're not trying to encroach on what they're doing right now. We wanted to tackle the space that no one was looking at. That's why we do the events and that's what we are going to stick to. There's so much volume here.

There's very little competition or negative will between nonprofits in the food-waste space because there's so much to do. There could be another 150 organizations in the United States and there will still be enough to go around.

Editor's Note April 26, 2016: The total amount of food rescued by the company was corrected from 2,500 pounds to over 14,000 pounds.

We Asked Four Teenagers to Explain "Divergent" to Old People

Smithsonian Magazine

If you haven’t noticed already, something big is happening this weekend – ask any teenager. Divergent, the first book in author Veronica Roth’s post-apocalyptic trilogy, hits the big screen today.

Following in the footsteps of Harry Potter, Twilight, and The Hunger Games, Divergent is the latest in a slew of teen novels to get the Hollywood treatment. Like its predecessors, the book boasts a rabid teen fan base, but there's considerable debate as to whether the movie version can differentiate itself from the pack of futuristic teen fantasy stories—especially for novice audiences. (Slate has even conducted a textual analysis of the Divergent books that suggests it's just a lot of nodding and head shaking.)

Whether you plan to see the movie this weekend with your kids (or were dragged to a midnight showing last night) or are totally new to the books, you might be wondering what exactly the fuss is all about. To get to the bottom of the books' appeal, we went straight to the experts: we sat down with four seventh graders – Nick, Maddie, Nils, and Nicole – and they gave us a beginner’s guide to the world of Divergent. Warning: Spoilers below.

Explain Divergent to your parents in one sentence. Go.

Maddie: That’s impossible.
Nils: It has so much plot.

Ok, you each get one sentence in a paragraph. Go.

Maddie: In dystopian Chicago, there are five factions, and there’s a girl who has to choose.
Nils: She chooses Dauntless, and then has to go through training, and yeah.
Nicole: She almost fails training, but then gets the best score out of everyone.
Nick: Then everyone gets infected by a serum and almost ends up starting World War... like 17!

What does it mean to be "divergent"?

Nils: It basically means that you have more than one personality.
Maddie: It means that you’re a normal person with thoughts of your own.
Nick: To be divergent is to fit in to more than one faction.

So, there are these five factions based on character traits, all of which are thought to have contributed to this dystopian future. What are they?

Nick: So, there’s Candor, which is honesty. They always wear black and white.
Maddie: Because they thought that truth was black and white, obviously. Abnegation is selfless, and they wear grey. Then there’s Erudite – they’re the smart people, and they wear blue.
Nick: Amity is the happy faction, the hippie faction.
Nils: Then there’s Dauntless, and they’re brave.
Nicole: So, when they’re 16 [years old], they take an aptitude test that determines which faction is best for them.
Nick: The test is basically a simulation.
Nils: But, some people are special.

Is it cool or dangerous to be divergent?

Nick: The thing about divergence is you aren’t obviously out there as a divergent.
Nils: It’s dangerous because at some point the Erudite leader wants to kill all of the divergent. But, also it’s good because you can manipulate the simulations. There are pluses and minuses.
Maddie: You can manipulate the simulations and break out of them. That’s a sign of divergence because your mind doesn’t work the way the people want their minds to work. So, that’s dangerous.

Who is Tris?

Nils: She is the main character, and she starts out as Abnegation.
Nick: Any time before you’re 16, you aren’t in a faction. You live with your parents who may be of a faction that you might join into.
Nils: Yeah, so she grew up in the Abnegation faction with her parents. And then she transfers to Dauntless when she's 16.
Nils: She acts selfless, but she doesn’t really fit in. Like in her head she knows that she’s not really all that selfless.

Who’s the villain of Divergent?

All: Jeanine, the Erudite leader.
Nils: She’s plotting to overthrow the government.
Maddie: Because she doesn’t like the faction system, and she wants to be the leader of everybody. And she wants to put everyone under her mind control, so that they can be like brainless people. She pretty much calls for an overthrow of Abnegation, who control the government. Because she thinks that they’re not doing a good job.

So, who are the factionless?

Nils: The people who don’t fit in with a faction. They either choose not to be in a faction, or they fail the training that they go through.
Maddie: They are basically the homeless of today.

Is there anything else we need to know about?

Maddie: Umm, Four.

Who’s Four?

Nicole: He’s Tris’s instructor.
Nick: He’s a buff guy.
Maddie: In the first book, he’s really mysterious. No one knows anything about him.

So, you have this female main character who is an action hero in a post-apocalyptic world. This is starting to sound a lot like the Hunger Games. How is it different?

Nils: Besides the similarity of a futuristic dystopia, there’s not much similar after that. There really isn’t.
Maddie: Divergent is different because it’s sort of like a new idea that no one has ever [explored]. I’ve never read any other book that’s like it.

Unlike the Hunger Games, there’s no love triangle (at least in the first book) in Divergent.

Nicole: Well, there’s [still] a lot of romance in it.
Maddie: Everybody likes love triangles, but they get sort of boring.

Do you think Divergent was written with boys or girls specifically in mind?

Maddie: I think it was definitely aimed at girls.
Nick: It was definitely aimed at girls.
Nils: Yeah, but it turned out to be fine for both.
Maddie: Because there’s the whole war and fighting aspect of it.
Nick: The Dauntless faction is definitely…
Maddie:…as Nick would say, “bad ass”.

What do your teachers think of the book series?

Nick: My teacher, no matter what it was, would describe it as “tight."
Maddie: My English teacher actually owns them – and Percy Jackson and The Hunger Games.
Nick: My mom gave me a giant lecture about how basically authors today don’t know what good literature is.
Nils: I’d rather have a fun read, than good literature.
Nick: Let me finish my mom’s lecture! Authors these days have no idea what real reading is. They just have a whole bunch of action, action, love scene, love scene, action, action, love scene, really bad ending, cliffhanger that makes everybody want to buy the next book in the series!

Do you plan to see the movie?

All: Yes!

What parts of the book are you most excited to see come to life?

Maddie: The zip-lining scene.
Nils: I want to see when she jumps off the building into the Dauntless pit.
Nicole: I want to see the part where she decides to be Dauntless.
Nick: I want to see when they jump off the train. The Dauntless ride the trains, and they just jump on, when it’s fully moving.

Do you think the movie will live up to the book?

Nick: Oh no! We’re going to see the movie and rant at it.
Nicole: It better!
Maddie: It looks pretty accurate [from the trailer].
Nick: We expect way too much, though.
Nils: What’s going to happen is you’re going to expect it to be like the book, and then they’re going to cut out half the book.
Maddie: They’re not going to change it. If they change it, I will sue them!

Inside This Year's Miss Navajo Pageant

Smithsonian Magazine

It is perhaps the least celebrated and most rigorous pageant in the world, and, it is said, the only one where contestants must kill an animal. But in Window Rock, Arizona, five women with perfectly coiffed hair and jewel-toned, crushed velvet gowns lay out their knives in preparation for the first part of this year’s Miss Navajo pageant.

Several of the contestants have wound plastic wrap around their white moccasin leggings and tied on new cotton aprons—to avoid the blood stains. Finally, the sheep are carried in, placid and wide-eyed in the packed arena.

*****

Held every September since 1952 on the Navajo reservation, in an Arizona high desert town just over the New Mexico border, the pageant has reached cult status among girls in the Navajo culture.  Organizers see this as a unique bridge between Navajo elders and the younger generations that otherwise might not have an incentive to follow the traditions.

Like any pageant, contestants in the Miss Navajo competition are vying for a place of honor in their culture’s value system. And maybe for this reason, it is the antithesis of Miss America-style contests.

“Miss America does the swim suit competition. But in the Miss Navajo pageant, we don’t show our bodies—only our head and hands,” explains pageant coordinator Dinah Wauneka, who was instrumental in adding the butchering component to the competition in the late 1990s. “Butchering is the Navajo way of life. Within our culture, this is beauty.”

The stages of competition in the pageant are so challenging and demanding that only a handful of women even bother going for the title each year. For this year’s pageant, only 12 women, all between the ages of 18 and 25 as per the rules, picked up the application form. Of those,  five were able to complete all of the eligibility requirements: Ann Marie Salt, 25, Alyson Jeri Shirley, 20, Crystal Littleben, 23, Farrah Fae Mailboy, 25, and Starlene Tsinniginnie, 25.

*****

At high noon the day before butchering, a slow trickle of pick-up trucks, road-dusted a burnt orange, arrives in the cracked asphalt parking lot of St. Michael’s Mission. With the hem of her blue sateen dress in hand, contestant Crystal Littleben hops out of the passenger seat of one and begins directing her father where to drop off her bins and garment bags. Directions are given in English peppered with words in the staccato Navajo (Diné) language. Families carry hardhats, kindling, barbecue racks, and branches from a greasewood tree into the parish house—a modest former abbey where the participants will be bunking for the week.

One essential qualification for the pageant is fluency in Diné language—a tongue completely unrelated to English.  Vocal tone dictates meaning and verb conjugations change depending on the nature of the object: is it flat and flexible, solid and round, solid and thin?  Most of the contestants agree that this fluency is the most challenging aspect of the competition.

One dissenter is contestant Alyson Shirley, who began speaking Diné at age three. “I was raised by older Navajo ladies—my grandmother, mother, elders—around ceremonies and squaw dances. And they taught me this beautiful language.” She describes experiencing culture shock when she switched from a Diné language immersion program to a public junior high school and had to learn how to live in what she calls “the modern world.”

Inside the parish house, everyone is gathering in the room of Starlene Tsinniginnie as she unpacks.  “I must have brought a dozen dresses,” she says.  “I couldn’t decide.”  Fortunately for her, she’ll have the chance to wear most of her wardrobe; during the three-day competition, the women will change outfits several times a day for various parts of the competition.  Mothers, grandmothers, even the contestants themselves painstakingly constructed the outfits over past months.

Besides possessing a broad range of culturally relevant skills that they must demonstrate onstage, competitors must also know Navajo culture inside and out. Impromptu questions run the gamut from stories told by parents and grandparents to minute details of traditional daily life. What is the legend of Changing Woman, the deity who laid the foundation for the matriarchal Navajo way of life? Why are the tales of the coyote only told in winter? What are your four clans? These questions are posed onstage, one day in the Diné language, the next in English.

Contestant Farrah Mailboy remembers her first Miss Navajo pageant, when the judges asked her, “When a traditional ceremony is held for a male or female, which type of corn do they utilize?” She answered without hesitation: white corn for boys, yellow for girls.

Someone notices the stitches on the base of Starlene’s right thumb.  A butchering injury. “Last month I probably butchered between 10 to 15 sheep,” she explains.  “I put word out in my community that I would do it for free.  It was great practice for this competition."

The sheep butchering competition requires participants to slaughter, skin and gut an adult Navajo-Churro ewe in just over an hour. They must simultaneously answer, in the Diné language, impromptu questions posed by roving judges who are assessing the contestants’ skills and knowledge of each part of the animal and how it is used.

On the morning of the butchering, the women huddle just outside a sand-floor arena dense with spectators for a group blessing as the U.S. national anthem is sung in the Diné language. Contestant Ann Marie Salt keeps her eyes closed long after the prayer is over. Her mom and step-dad arrived just after sunrise to set up their camp chairs as close as possible to where she will be competing—as they will do for each event during the week.

She is intimidated but also relieved to know they are there. She has taken home six titles since she began competing in Native American competitions at age four, all with her family front and center. (Across the Navajo Nation, there are pageant competitions, referred to as “royalties,” for schools, colleges, agencies and states.) But today’s is the pinnacle of all of them for her.  

Like political candidates, the contestants are asked to have a “platform,” or topic that they pledge to focus on should they wear the crown. For Salt, her platform reflects the concept of “hózhó,” a Diné term meaning a state of balance and order. She urges a focus on young women like her who have one foot in the Navajo world and one foot outside.  “People see it as a disadvantage to grow up on a rural reservation, but I believe it’s an advantage for me and any person with dual identity.  We need both perspectives.”

Following Navajo tradition, no part of a sheep is wasted. The women perform in unison, simultaneously working their knives as male volunteers help maneuver the animals into the right positions, hoisting them with ropes onto beam-mounted hooks so the women can finish skinning and gutting. Cut the throat too quickly and the blood will not drain properly. Puncture the bladder and all of the meat is ruined. The contest is close.

After the end of this event—and those of the following two days— a winner is announced: Alyson Shirley.

Image by Allison Shelley. Contestant Alyson Jeri Shirley reacts as she is pronounced Miss Navajo.

Image by Allison Shelley. Contestant Alyson Jeri Shirley is crowned Miss Navajo.

Before accepting the crown, and the scholarships and other assorted gifts, Alyson reflected on why she entered the competition. “We got our culture from our deities.  Our entire lives—even our government—is set up based on Navajo teachings.  But we are forgetting that,” she emphasized.  “Miss Navajo stands for hope.  Even if you teach one person something, it’s enough because that person will go teach another person.”

The Moon Belongs to No One, but What About Its Artifacts?

Smithsonian Magazine

In 1969, the third man to walk on the moon, astronaut Charles "Pete" Conrad Jr., also became the first lunar archaeologist. As part of the Apollo 12 crew, he examined an earlier robotic lander, Surveyor 3, and retrieved its TV camera, aluminum tubing and other hardware, giving NASA scientists back on Earth the evidence they needed to study how human-made materials fared in the lunar environment.

Like all astronauts who have visited the moon, Conrad also left behind artifacts of his own. Some were symbolic, such as the U.S. flag. Others were prosaic: cameras, dirty laundry and bags of human waste. NASA's list of Apollo-related items left on the surface is 18 single-spaced pages. It ranges from geology hammers to earplug wrappers, seismographs to sleep hammocks. Even golf balls belonging to Alan Shepard, who managed some practice during Apollo 14, remain on the moon, though they appear to have escaped the notice of the list makers. All told, six manned landings, two manned orbital missions, over a dozen robotic landings and more than a dozen more crash sites offer signs of a multinational human presence on and around the moon. Each item left behind may seem like a small scrap for a man, but together they offer a giant look at mankind.

“These sites are time capsules," says Beth O'Leary, an anthropologist at New Mexico State University in Las Cruces. They host valuable artifacts for archaeologists and anthropologists who want to study humanity’s growing space heritage. Failed instruments at lunar landing sites, for example, might reveal the engineering or management missteps behind them, the same way the sinking of a ship on earth could tell us something about its commanders or passengers. Archaeologists might even want to study the DNA of microbes in the astronauts’ waste for clues to the diet and health of these early pioneers. “People's idea is that archaeologists are interested in 1,000 years ago, 100 years ago,” O’Leary says, “but here we're talking about the modern past.”

Conrad examines the unmanned Surveyor 3 spacecraft, which landed on the moon on April 19, 1967. He retrieved its TV camera, aluminum tubing and other hardware. Credit: NASA, Johnson Space Center

The effort may not sound urgent. The moon has almost no air, water or geological activity to corrode or otherwise damage artifacts, but a new generation of missions is headed there, boosting the risk that someone or something will interfere with existing sites. This week's planned robotic landing by the Chinese National Space Agency, the first controlled landing since the 1976 Luna 24 mission, signals a renewal of sophisticated lunar exploration. This time around, more countries will be involved, as will commercial entities. Private organizations are in hot pursuit of the Google Lunar X Prize, which offers cash rewards for achieving technical milestones, one of which is landing near the Apollo sites. A recent bill introduced in the House, called the Apollo Lunar Landing Legacy Act, proposes a novel form of protection. Unfortunately, it appears to conflict with existing space law.

O'Leary's interest goes back to 1999, when a graduate student in a seminar she was teaching asked if American preservation laws applied to artifacts left on the moon. O'Leary didn’t know, so she looked into the question, soon discovering that the Outer Space Treaty of 1967 prevents nations from making sovereignty claims in space. It does not address, however, the preservation of property that nations have left behind. O’Leary persuaded NASA to fund her research into the topic, and published what she calls the Lunar Legacy Project. She and colleagues created an inventory of the Apollo 11 landing site and began lobbying for its formal protection. By then, private companies such as Lockheed Martin were already discussing taking samples from other lunar sites for study. The hardware itself still belonged to the governments that put it there (the United States and Russia, the primary heir of the Soviet space program), but that would be little consolation if a modern mission ran over the first human footprints on the moon, for example, or moved an object without documenting its original location.

O'Leary helped lobby California and New Mexico, states with strong ties to the space program, to list the Apollo 11 objects in their state historic registers. The move offered symbolic protection and attracted attention to the problem but didn’t do anything to solve it. There was, and still is, nothing to stop new visitors from interfering with objects already in space.

Vandalism probably isn’t the biggest concern, but even unintentional interference is worrisome. Landing near existing sites could damage them, whether through a crash or from the spray of lunar dust and rocket exhaust. "My concern would be that they miss,” says Roger Launius, senior curator of space history at the Smithsonian National Air and Space Museum. “If they miss by just a little bit, they could end up landing on top of the site." And well-meaning archaeologists, though guided by cultural heritage laws and professional codes wherever they work, do destroy part of what they study as a matter of routine.

Apollo 11, 14 and 15 astronauts deployed retroreflector arrays on the moon. Credit: NASA

O’Leary would like the moon sites preserved as long as possible so that future archaeologists, perhaps with more sophisticated instruments and less damaging techniques, can examine them for clues about the human story of the landings. Scientists and engineers also have an interest in preserving the sites: They want to study how equipment left on the moon ages, like they did with the samples Conrad took from Surveyor 3. They also want to resolve questions about moon rocks that couldn’t be answered the first time around, including the size of a patch of orange volcanic glass discovered by geologist Harrison Schmitt during the Apollo 17 mission.

By 2011, O’Leary’s effort had become national: NASA researchers, engineers and managers called O'Leary and Launius, who is writing a book on space heritage, to a meeting to discuss guidelines for protecting lunar artifacts and sites. "We should avoid them until there is a collective agreement on how to study them," O’Leary told the meeting attendees. The non-binding guidelines that NASA later released, and which the Google Lunar X Prize organizers agreed to take into account, established "keep-out" zones for fly-overs, rovers or manned visits around Apollo-era sites. Rob Kelso, a former NASA manager, notes that he and the guideline’s other creators still depend on the threat of negative publicity to prevent sloppy visits: "If you damage those sites, you could get a backlash," he says.

Earlier this year, Maryland congresswoman Donna Edwards, who had previously worked on NASA’s Spacelab project, and Texas congresswoman Eddie Bernice Johnson took the protection efforts a step further by introducing a bill that would designate the Apollo landing sites as a unit of the U.S. National Park System and submit the sites for designation as a UNESCO World Heritage Site. But the bill presents a conundrum, as space policy experts Henry R. Hertzfeld and Scott N. Pace wrote last month in Science magazine (subscribers only). It may not comply with the Outer Space Treaty. How can you claim to own the site and its artifacts, to designate them under the control of the Park System, without claiming to own the land they sit on? How can you own a footprint, without owning the soil?

This is an image of Buzz Aldrin's bootprint on the lunar surface. He and Neil Armstrong walked on the moon on July 20, 1969, during the Apollo 11 mission. Credit: NASA

Instead of supporting the bill, Hertzfeld and Pace call on officials from the United States to work with the Russian and Chinese governments to draft a joint protection plan that can then be offered to other spacefaring nations. “The first step is to clearly distinguish between U.S. artifacts left on the Moon, such as flags and scientific equipment, and the territory they occupy. The second is to gain international, not unilateral, recognition for the sites upon which they rest,” Hertzfeld and Pace write.

Space is not the only place with a vacuum of sovereignty: Antarctica is a quilt of unrecognized sovereignty claims, and the open ocean belongs to nobody at all. People have found ad hoc ways to conduct scientific research and to preserve and learn from human historical artifacts there, but the results have not always been ideal. Consider, Launius says, the tourist-ransacked Scott hut in Antarctica. Or, notes Kelso, the way some commercial salvage operators take advantage of the absence of laws to cut corners when recovering valuable sunken material.

Unless countries work together to establish international heritage laws soon, Kelso adds, the landing sites may receive protection only once it’s too late. Preserving the first footprints on the moon, not quite property or territory, requires a new way of cooperating, a giant leap of its own.

Metaphorically Speaking, Your Nervous System is a Dictatorship

Smithsonian Magazine

How does the architecture of our brain and neurons allow each of us to make individual behavioral choices? Scientists have long used the metaphor of government to explain how they think nervous systems are organized for decision-making. Are we at root a democracy, like the U.K. citizenry voting for Brexit? A dictatorship, like the North Korean leader ordering a missile launch? A set of factions competing for control, like those within the Turkish military? Or something else?

In 1890, psychologist William James argued that in each of us “[t]here is… one central or pontifical [nerve cell] to which our consciousness is attached.” But in 1941, physiologist and Nobel laureate Sir Charles Sherrington argued against the idea of a single pontifical cell in charge, suggesting rather that the nervous system is “a million-fold democracy whose each unit is a cell.” So who was right?

For ethical reasons, we’re rarely justified in monitoring single cells in healthy people’s brains. But it is feasible to reveal the brain’s cellular mechanisms in many nonhuman animals. As I recount in my book “Governing Behavior,” experiments have revealed a range of decision-making architectures in nervous systems—from dictatorship, to oligarchy, to democracy.

For some behaviors, a single nerve cell does act as a dictator, triggering an entire set of movements via the electrical signals it uses to send messages. (We neurobiologists call those signals action potentials, or spikes.) Take the example of touching a crayfish on its tail; a single spike in the lateral giant neuron elicits a fast tail-flip that vaults the animal upward, out of potential danger. These movements begin within about one hundredth of a second of the touch.

Crayfish escapes thanks to its "dictator neuron." Each photo taken 10 hundredths of a second apart. (Jens Herberholz and Abigail Schadegg, University of Maryland, College Park)

Similarly, a single spike in the giant Mauthner neuron in the brain of a fish elicits an escape movement that quickly turns the fish away from a threat so it can swim to safety. (This is the only confirmed “command neuron” in a vertebrate.)

Each of these "dictator neurons" is unusually large—especially its axon, the long, narrow part of the cell that transmits spikes over long distances. Each dictator neuron sits at the top of a hierarchy, integrating signals from many sensory neurons, and conveying its orders to a large set of subservient neurons that themselves cause muscle contractions.

Such cellular dictatorships are common for escape movements, especially in invertebrates. They also control other kinds of movements that are basically identical each time they occur, including cricket chirping.

But these dictator cells aren’t the whole story. Crayfish can trigger a tail-flip another way too—via another small set of neurons that effectively act as an oligarchy.

These “non-giant” escapes are very similar to those triggered by giant neurons, but begin slightly later and allow more flexibility in the details. Thus, when a crayfish is aware it is in danger and has more time to respond, it typically uses an oligarchy instead of its dictator.

Similarly, even if a fish’s Mauthner neuron is killed, the animal can still escape from dangerous situations. It can quickly make similar escape movements using a small set of other neurons, though these actions begin slightly later.

This redundancy makes sense: it would be very risky to trust escape from a predator to a single neuron, with no backup–injury or malfunction of that neuron would then be life-threatening. So evolution has provided multiple ways to initiate escape.

Leeches hold a neuron election before recoiling from your touch. (Vitalii Hulai / iStock)

Neuronal oligarchies may also mediate our own high-level perceptions, such as when we recognize a human face. For many other behaviors, however, nervous systems make decisions through something like Sherrington’s “million-fold democracy.”

For example, when a monkey reaches out its arm, many neurons in its brain’s motor cortex generate spikes. Every neuron spikes for movements in many directions, but each has one particular direction that makes it spike the most.

Researchers hypothesized that each neuron contributes to all reaches to some degree, but spikes the most for reaches it’s contributing to most. To figure it out, they monitored many neurons and did some math.

Researchers measured the rate of spikes in several neurons when a monkey reached toward several targets. Then, for a single target, they represented each neuron by a vector—its angle indicates the neuron’s preferred reaching direction (when it spikes most) and the length indicates its relative rate of spiking for this particular target. They mathematically summed their effects (a weighted vector average) and could reliably predict the movement outcome of all the messages the neurons were sending.

This is like a neuronal election in which some neurons vote more often than others. An example is shown in the figure. The pale violet lines represent the movement votes of individual neurons. The orange line (the “population vector”) indicates their summed direction. The yellow line indicates the actual movement direction, which is quite similar to the population vector’s prediction. The researchers called this population coding.
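A minimal sketch of that weighted vector average, with invented firing rates and preferred directions standing in for real recordings, might look like this:

```python
# A minimal sketch of the "population vector" computation described above:
# each neuron votes with a unit vector along its preferred reach direction,
# weighted by how fast it is firing; the summed vote predicts the movement.
# The rates and preferred directions below are made-up numbers for illustration.
import numpy as np

preferred_deg = np.array([0, 45, 90, 135, 180, 225, 270, 315])   # each neuron's favorite direction
firing_rate   = np.array([5, 12, 30, 44, 28, 10, 4, 3])          # spikes/s measured on one reach

theta = np.deg2rad(preferred_deg)
votes = firing_rate[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])
population_vector = votes.sum(axis=0)

predicted_angle = np.degrees(np.arctan2(population_vector[1], population_vector[0]))
print(f"predicted reach direction: {predicted_angle:.1f} degrees")   # about 132 degrees for these numbers
```

For these made-up numbers the predicted direction lands near 132 degrees, illustrating how the summed votes, not any single neuron, determine the readout.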

For some animals and behaviors, it is possible to test the nervous system’s version of democracy by perturbing the election. For example, monkeys (and people) make movements called “saccades” to quickly shift the eyes from one fixation point to another. Saccades are triggered by neurons in a part of the brain called the superior colliculus. Like in the monkey reach example above, these neurons each spike for a wide variety of saccades but spike most for one direction and distance. If one part of the superior colliculus is anesthetized—disenfranchising a particular set of voters–all saccades are shifted away from the direction and distance that the now silent voters had preferred. The election has now been rigged.

A single-cell manipulation demonstrated that leeches also hold elections. Leeches bend their bodies away from a touch to their skin. The movement is due to the collective effects of a small number of neurons, some of which voted for the resulting outcome and some of which voted otherwise (but were outvoted).

Perturbing a leech movement "election." Left: researchers touched the animal’s skin at a location indicated by the arrow. Each solid line is the direction the leech bent away from this touch on one trial. Middle: electric stimulation to a different sensory neuron made the leech bend in a different direction. Right: Researchers touched the skin and stimulated the neuron simultaneously and the leech bent in intermediate directions. (Reprinted by permission from Macmillan Publishers Ltd: J. E. Lewis and W. B. Kristan, Nature 391: 76-79, copyright 1998)

If the leech is touched on the top, it tends to bend away from this touch. If a neuron that normally responds to touches on the bottom is electrically stimulated instead, the leech tends to bend in approximately the opposite direction (the middle panel of the figure). If this touch and this electrical stimulus occur simultaneously, the leech actually bends in an intermediate direction (the right panel of the figure).

This outcome is not optimal for either individual stimulus but is nonetheless the election result, a kind of compromise between two extremes. It’s like when a political party comes together at a convention to put together a platform. Taking into account what various wings of the party want can lead to a compromise somewhere in the middle.

Numerous other examples of neuronal democracies have been demonstrated. Democracies determine what we see, hear, feel and smell, from crickets and fruit flies to humans. For example, we perceive colors through the proportional voting of three kinds of photoreceptors that each respond best to a different wavelength of light, as physicist and physician Thomas Young proposed in 1802. One of the advantages of neuronal democracies is that variability in a single neuron’s spiking is averaged out in the voting, so perceptions and movements are actually more precise than if they depended on one or a few neurons. Also, if some neurons are damaged, many others remain to take up the slack.

Unlike countries, however, nervous systems can implement multiple forms of government simultaneously. A neuronal dictatorship can coexist with an oligarchy or democracy. The dictator, acting fastest, may trigger the onset of a behavior while other neurons fine-tune the ensuing movements. There does not need to be a single form of government as long as the behavioral consequences increase the probability of survival and reproduction.

Not “Just Another Doll”: Two Orchids for Miss Stafford

Smithsonian Institution Archives

Throughout March, we will be celebrating Women's History Month with photos in the Flickr Commons and a series of blog posts about women from the Archives' collections.

The stylish woman with bright eyes and an eager smile appears almost ethereally exuberant. By January 1937, when she posed in a flowered hat, sporting two orchids, Jane Stafford (1899-1991) was earning wide respect as a medical reporter, as a "typewritten bridge between the M.D. and the ultimate consumer." An earlier photograph, taken when she first joined the Science Service staff, sent a different message. That image may explain why one potential employer initially pegged her as "just another doll" - until she quickly, cleverly, and accurately fulfilled a test assignment.

Stafford had been born into a life of privilege (her father was a successful Chicago attorney) but she built a professional reputation with intelligence and hard work. At Smith College, she majored in chemistry and, after graduating in 1920, worked briefly as a hospital technician. Laboratory life offered first-hand glimpses of clinical medicine. Then, on the staff of the American Medical Association's popular magazine Hygeia, Stafford learned how to write for popular audiences and also acquired her lifelong love of writing.

When Stafford and her widowed mother moved to Baltimore (where her brother Edward was beginning medical training), she scoured the area for jobs. Fortunately, Science Service's medical editor had just quit. Stafford's chemistry degree, hospital experience, and intelligence outweighed any unfamiliarity with news deadlines. From 1928 until leaving in 1956 to work at the National Institutes of Health, she explored all aspects of medicine, from tuberculosis to toothaches. In the 1940s, she wrote about doctors on the battlefield, nutrition on the home front, and the "war on polio." As she explained in 1935, Science Service had few "taboo" subjects: they discussed venereal disease, pregnancy, "and all the glands and hormones." Although the organization's director "did not like the word 'intestine' and generally we avoid [the word] urine by calling it kidney secretion or excretion," Stafford assured former Hygeia colleague Mildred Whitcomb that the topics were always chosen by "scientific importance or news value."

Life for a female journalist was not all work, of course. Covering a medical meeting might include dinner with other science journalists or sampling local entertainment. After attending a bacteriology conference in New York City in January 1936, Stafford wrote Whitcomb that she had seen "three good plays": Dead End, Porgy and Bess, and Boy Meets Girl. That summer, Stafford and her mother vacationed in Rehoboth Beach, Delaware ("Nearby, quiet ocean resort, very charming, nice people, no crowds, cool and not too expensive. Had a pleasant time, swimming, bridge, dancing, reading, knitting and sunning. Came back with much suntan, and promptly had to write a story on that subject for Today Magazine."). 

Visible on Stafford's desk at left is a copy of A Textbook of Surgery for Nurses by Edward S. Stafford and Doris Diller. Edward was Jane's brother and a professor of surgery at Johns Hopkins University Medical School. Accession 90-105 - Science Service, Records, 1920s-1970s, Smithsonian Institution Archives, Neg. No. SIA2009-3715.

And the following month, Stafford confessed to Whitcomb that she had succumbed to the latest fad. "I am practically in retirement, socially, while reading Gone with the Wind–which I had firmly intended not to read. Entertaining book, but I still feel it is rather a waste of time to be reading it at all, though I don't know that I would do anything more profitable with the evenings it has taken."

As Stafford approached middle age, she retained her willowy figure and social skills. In 1938, Whitcomb wrote that a mutual acquaintance ("Mr. B") had seen Stafford at a meeting in Kansas City and reported that "you resembled a 'beautiful wild flower.' Your conversation, he said, was sophisticated but your dress belied it."

That combination of intelligence and style carried into the workplace. Even when Stafford was photographed in her office piled with textbooks, surrounded by the tools of her trade - telephone, typewriter, and advance proofs of medical articles - she wore a strand of pearls.

Neanderthals Hunted in Groups, One More Strike Against the Dumb Brute Myth

Smithsonian Magazine

On an autumn day around 120,000 years ago, in the dense forests of what would come to be Germany, fierce hunters prowled the landscape.

These hunters regularly brought down mammoths and woolly rhinoceroses, deer, wild horses, aurochs (extinct wild cattle) and straight-tusked elephants. They competed for these prizes against other predators like hyenas and lions, sometimes losing their lives in the process. But today their skills and tools proved their worth: A group of Neanderthals used their hand-crafted wooden spears to kill two male fallow deer, both in the prime of life and heavy with valuable meat and fat.

We know this because those skeletons, with bones bearing the marks of the people who killed them, were recovered in 1988 and 1997 at a site called Neumark-Nord. This week, researchers argue in a new paper in Nature Ecology & Evolution that those punctured bones are the oldest example of hunting marks in the history of homininkind. That would mean that Neanderthals used sophisticated close-range hunting techniques to capture their prey—adding more weight to the argument that they were much smarter than we once gave them credit for.

“This has a lot of implications, as groups of hunters had to closely cooperate, to rely on each other,” said Johannes Gutenberg University archaeologist Sabine Gaudzinski-Windheuser, one of the study’s authors, by email. “Our findings must be understood as one of the best evidence known so far that provides insight into the social set up of Neanderthals.”

This new research is only the latest in a recent string of studies that indicate Neanderthals were our genetic and perhaps cultural cousins: complex, empathetic hominins. Neanderthals have now been credited with creating symbolic art, producing geometric structures of broken stalagmites in underground caves and controlling fire to use on tools and food. Moreover, they successfully exploited whatever environment they happened to live in, be it the snowy tundra of Ice Age Europe, or heavily forested lakeshores during the interglacial periods.

This is a sea-change from how anthropologists once viewed this group of hominins: as a species doomed to extinction. Such a view meant that researchers were always looking for what weaknesses had set Neanderthals up for failure, rather than the skills that allowed them to successfully survive for so long.

Excavation of a 120,000-year-old Last Interglacial lake landscape at Neumark-Nord, near present-day Halle in eastern Germany. (W. Roebroeks / Leiden University)

“Maybe 10 years ago the story [of this study] would’ve been, Neanderthals couldn’t throw, because they had a different shoulder structure, and there’s an implication of cognitive limitation—that they weren’t using thrown projectiles,” says Penny Spikins, a senior lecturer in archaeology at the University of York who wasn’t affiliated with the study. “Now we see it in terms of the continuity of human adaptation. They’re choosing from various hunting options open to them, and this choice demonstrates a lot of collaboration.”

Spikins is particularly interested in hunting strategies, because her research focus is Neanderthal “healthcare.” No, Neanderthals weren’t opening up medical practices or offering insurance (that we know of)—but they did help one another recover from injuries that might’ve been sustained in dangerous activities like close-range hunting, as seen in bones that show recovery from wounds. To Spikins, that suggests tight-knit social networks and empathetic support of one another, which she and her colleagues wrote about in a February paper for World Archaeology.

To understand the precise mechanics of how this close-range hunting would have worked, Gaudzinski-Windheuser and her colleagues decided to recreate the scene. First, they set up the targets: 24 skeletons from German red deer (the species of fallow deer the Neanderthals hunted is now extinct, and this was the closest modern analogue) embedded in ballistics gel to simulate flesh. Then the group recruited three men versed in weaponry to recreate the attack.

The spears were made from metal poles with a wooden point at the end, as evidence from other archaeological sites shows the Neanderthals of the period were using wooden spears for their hunts. Sensors were attached to the spears to measure their motion and the velocity of the impact against the bones as the mock hunters thrust their weapons into the “deer.” The end result: damage patterns on the pelvis and scapulae bones that exactly mimicked the puncture marks on the ancient deer.

For the authors, that meant the spears were probably thrust rather than thrown—but in a different context, they note, throwing was still possible. “I like the fact that the authors are taking a more nuanced approach by acknowledging that spears can do both, thrusting and throwing,” says paleoanthropologist Rebecca Wragg Sykes, an archaeological researcher affiliated with the Université de Bordeaux who didn’t participate in the study.

Wragg Sykes agrees with Spikins that the interpretation of this study reflects a transformation in how researchers view Neanderthals. “People traditionally have looked for differences between the two species [Neanderthals and Homo sapiens], and if you’re looking for reasons why Neanderthals disappear from the fossil record, then you would want to look at whether their lives were riskier,” she says. Today, Neanderthals are considered “a parallel course of what it could mean to be a kind of human.”

Skeleton of an extinct fallow deer from Neumark-Nord, arranged in flight-posture. (© Landesamt für Denkmalpflege und Archäologie Sachsen-Anhalt, Juraj Lipták)

For Spikins, the origins of this paradigm shift date back to 2010, when researchers discovered that Neanderthal DNA lives on in modern humans of European and Asian descent. In other words, the two species interbred. Suddenly Neanderthals weren’t just an evolutionary dead end; they were more similar to and indeed a part of us. More research pointed to the possibility of other species of hominin populating the globe at the same time, from Homo heidelbergensis of Eurasia to Homo naledi of South Africa.

“Our own ancestors were just one of many different options of humans at the time,” Spikins says, “and that’s given us a perspective in which we can see there were different types of humans adapting in different ways.”

Both Spikins and Wragg Sykes have questions that remain unanswered. Wragg Sykes noted that the deer remains present an enigma: Normally, the hunters would’ve left far more cut marks on the bones, and would’ve removed parts of the body like the brain, the fat, and the tongue, which were the most nutrient-dense. These bones remain fully assembled, and only one deer shows faint traces of butchery. “They don’t tend to leave entire carcasses,” Wragg Sykes says.

Maybe the hunters were scared off from their prey by the arrival of other dangerous predators; or maybe they were so successful in their hunting that they didn’t require anything more than some meat and the skins of the animals.

Spikins wants to continue exploring the intersection between hunting and healthcare among Neanderthals, and this finding offers an intriguing avenue to do so. “Some of [the hunters] were volunteering to be in positions where they were more likely to be injured,” Spikins says of hunting at close-range. Taking that risk meant there was a high reward, and likely some kind of safety net that would allow them to do so. “I’m interested in how the emotional element of Neanderthal lives was intimately connected to the economics of their existence.”

As for Gaudzinski-Windheuser and her colleagues, they’re eager to bring their success with this experiment to the field at large. “Numerous researchers are currently dealing with studies on weaponry in Pleistocene contexts,” Gaudzinski-Windheuser said. She and a colleague have organized their work in ‘ballistic archaeology’ so that more archaeological work might be brought “under the umbrella of physics,” she says.

For now, paleoanthropologists will continue digging into Neanderthal history, focusing both on what makes them different from Homo sapiens, and what similarities they share. And anytime we start feeling complacent about the fact that our species survived and others didn’t, Spikins has her own remedy for that mindset: “They were successful for longer than we yet have been.” Neanderthals thrived for some 250,000 years in some of the harshest and most variable climates on Earth. As for whether Homo sapiens will have such a long run—that remains to be seen.

Viking Chess Pieces May Reveal Early Whale Hunts in Northern Europe

Smithsonian Magazine

In central and eastern Sweden from 550 to 793 CE, just before the Viking Age, members of the Vendel culture were known for their fondness for boat burials, their wars, and their deep abiding love of hnefatafl.

Also known as Viking chess, hnefatafl is a board game in which a centrally located king is attacked from all sides. The game wasn’t exclusive to the Vendels—people across northern Europe faced off over the gridded board from at least 400 BCE until the 18th century. But during the Vendel period, love for the game was so great that some people literally took it to their graves. Now, a new analysis of some hnefatafl game pieces unearthed in Vendel burial sites offers unexpected insight into the possible emergence of industrial whaling in northern Europe.

For most of the game’s history, its small, pebble-like pieces were made of stone, antler, or bone from animals such as reindeer. But later, starting in the sixth century CE, Vendels across Sweden and the Åland Islands were buried with game pieces made of whale bone.

In the new research, Andreas Hennius, an archaeology doctoral candidate at Uppsala University in Sweden, and his colleagues traced the source of the whale bone by following a trail of evidence that led them to the edge of the Norwegian Sea about 1,000 kilometers north of the Vendels’ heartland in central Sweden.

Hennius thinks the whale bones used to make the game pieces were the product of early industrial whaling. If so, the pieces would be evidence of the earliest-known cases of whaling in what is today Scandinavia, and a sign of the growing trade routes and coastal resource use that paved the way for future Viking expansion.

To come to this striking conclusion, Hennius and his colleagues first had to find out where the whale bone was coming from. The Vendels weren’t whalers, Hennius says, so the pieces must have been imported. But from whom? The researchers also needed to confirm that the bone was the result of deliberate whaling, not just scavenged from stranded whales.

To answer these and other questions, Hennius drew on genetic analysis, other archaeological finds, and ancient texts.

The first clue that the game pieces were indeed a sign of early industrial whaling emerged from genetic analysis of the whale bone. Though several whale species swam in Scandinavian waters, most hnefatafl pieces were made from North Atlantic right whale bones. This suggests the bones were the result of systematic hunting rather than opportunistic scavenging, Hennius says.

Other clues came from the Vendel graves. At first, whale bone game pieces appeared only in the graves of a few wealthy people. But later, a flood of whale bone hnefatafl pieces appeared in the graves of regular folks. “Not the poorest graves, but the middle-class graves,” Hennius says. To him, it seemed like a rare, prestigious commodity suddenly became available to the mass market. And that implied regular, reliable imports—an industry.

(Illustration by Mark Garrison)

Early texts hinted at where that whaling industry might have been located, since it almost certainly wasn’t in the Vendel lands of central and eastern Sweden.

The first known written record of whaling in Scandinavia describes a ninth-century Norwegian tradesman named Óttarr. In his travels, he visited the royal courts of England, where records describe him bragging about his whaling prowess. Óttarr claimed that he and his friends caught 60 whales in two days near what is now Tromsø, Norway. Though Óttarr’s exploits date several centuries after the appearance of whale bone in Vendel graves, his account suggests whaling may have been well established in northern Norway by the 800s CE.

It isn’t clear who was actually doing the difficult work of catching the whales, though it could have been any of the several groups of people living in northern Norway at the time, including the Sami. As for who was turning the whale bone into game pieces, that is also unknown. According to the researchers, it could have been the Sami or anyone along the long trade route south.

Hennius says further archaeological evidence also supports the idea of early whaling in northern Norway. Recently, other researchers discovered blubber rendering pits in the region, associated with the Sami, that date from about the time whale bone game pieces appeared farther south. The existence of these pits, Hennius says, implies the Sami were processing a steady supply of whales and not just the occasional stranding.

Hennius says all of this together—the Sami’s rendering pits, Óttarr’s exploits, the predominance of one species, and the presence of whale bone in middle-class graves—is “strong evidence that active whaling took place in northern Norway at this time,” and that the Vendels had established long-distance trade routes to ferry the material south.

Vicki Szabo, a historian at the University of North Carolina who studies medieval whaling across the North Atlantic, says Hennius and his colleagues make a good case for the existence of pre-Viking whaling in Scandinavia. “They’re linking ideas and trends that haven’t clearly been linked before,” she says.

Szabo’s own research suggests whaling in northern Norway was definitely feasible around 550 CE. After the collapse of the Roman Empire during the fifth century CE and the period of economic disruption that followed, it took time for societies across Europe to rebound. Szabo says whaling fits with a larger pattern of economic resurgence at the time.

As for the logistical challenges, Szabo says it’s unlikely these early whalers were out on the open ocean hunting whales from boats. Instead, hunters could have used poison-tipped spears, netted off narrow fjords, or driven whales onto shore.

Hennius is continuing to study the imported Vendel hnefatafl game pieces to see what else they can tell us about their origin and the trade routes on which they traveled. If the game pieces do, in fact, tell the tale of expanding coastal resource use in Norway, it is one of the first chapters in the dawning saga of Viking maritime dominance.


With AI Art, Process Is More Important Than the Product

Smithsonian Magazine

With AI becoming incorporated into more aspects of our daily lives, from writing to driving, it’s only natural that artists would also start to experiment with artificial intelligence.

In fact, Christie’s will be selling its first piece of AI art later this month – a blurred face titled “Portrait of Edmond Belamy.”

The piece being sold at Christie’s is part of a new wave of AI art created via machine learning. Paris-based artists Hugo Caselles-Dupré, Pierre Fautrel and Gauthier Vernier fed thousands of portraits into an algorithm, “teaching” it the aesthetics of past examples of portraiture. The algorithm then created “Portrait of Edmond Belamy.”

The painting is “not the product of a human mind,” Christie’s noted in its preview. “It was created by artificial intelligence, an algorithm defined by [an] algebraic formula.”

If artificial intelligence is used to create images, can the final product really be thought of as art? Should there be a threshold of influence over the final product that an artist needs to wield?

As the director of the Art & AI lab at Rutgers University, I’ve been wrestling with these questions – specifically, the point at which the artist should cede credit to the machine.

The machines enroll in art class

Over the last 50 years, several artists have written computer programs to generate art – what I call “algorithmic art.” It requires the artist to write detailed code with an actual visual outcome in mind.

One of the earliest practitioners of this form was Harold Cohen, who wrote the program AARON to produce drawings that followed a set of rules Cohen had created.

But the AI art that has emerged over the past couple of years incorporates machine learning technology.

Artists create algorithms not to follow a set of rules, but to “learn” a specific aesthetic by analyzing thousands of images. The algorithm then tries to generate new images in adherence to the aesthetics it has learned.

To begin, the artist chooses a collection of images to feed the algorithm, a step I call “pre-curation.”

For the purpose of this example, let’s say the artist chooses traditional portraits from the past 500 years.

Most of the AI artworks that have emerged over the past few years have used a class of algorithms called “generative adversarial networks.” First introduced by computer scientist Ian Goodfellow in 2014, these algorithms are called “adversarial” because there are two sides to them: One generates random images; the other has been taught, via the input, how to judge these images and deem which best align with the input.

So the portraits from the past 500 years are fed into a generative AI algorithm that tries to imitate these inputs. The algorithms then come back with a range of output images, and the artist must sift through them and select those he or she wishes to use, a step I call “post-curation.”

So there is an element of creativity: The artist is very involved in pre- and post-curation. The artist might also tweak the algorithm as needed to generate the desired outputs.
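For readers who want to see the “two sides” of that adversarial setup in concrete terms, here is a minimal Python sketch using the PyTorch library. The network sizes, image format and training data are illustrative assumptions on my part, not the actual pipeline behind “Portrait of Edmond Belamy.”

# A minimal sketch of a generative adversarial network. Shapes and sizes here
# are illustrative assumptions, not the setup used by any artist named above.
import torch
import torch.nn as nn

IMG_SIZE = 64 * 64    # assume small, flattened grayscale portraits
LATENT_DIM = 100      # size of the random "noise" vector the generator starts from

# Side one: the generator turns random noise into a candidate image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_SIZE), nn.Tanh(),
)

# Side two: the discriminator judges how closely an image matches the training portraits.
discriminator = nn.Sequential(
    nn.Linear(IMG_SIZE, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    """One adversarial round: the discriminator learns to tell real portraits
    from generated ones, then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Train the discriminator on real portraits and on detached fakes.
    fakes = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to produce images the discriminator rates as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# "Pre-curation" supplies real_images (for example, portraits spanning 500 years);
# "post-curation" is the artist choosing among the generator's outputs afterward.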

When creating AI art, the artist’s hand is involved in the selection of input images, tweaking the algorithm and then choosing from those that have been generated. (Ahmed Elgammal)

Serendipity or malfunction?

The generative algorithm can produce images that surprise even the artist presiding over the process.

For example, a generative adversarial network being fed portraits could end up producing a series of deformed faces.

What should we make of this?

Psychologist Daniel E. Berlyne studied the psychology of aesthetics for several decades. He found that novelty, surprise, complexity, ambiguity and eccentricity tend to be the most powerful stimuli in works of art.

When fed portraits from the last five centuries, an AI generative model can spit out deformed faces. (Ahmed Elgammal)

The generated portraits from the generative adversarial network – with all of the deformed faces – are certainly novel, surprising and bizarre.

They also evoke British figurative painter Francis Bacon’s famous deformed portraits, such as “Three Studies for a Portrait of Henrietta Moraes.”

‘Three Studies for a Portrait of Henrietta Moraes,’ Francis Bacon, 1963. (MoMA)

But there’s something missing in the deformed, machine-made faces: intent.

While it was Bacon’s intent to make his faces deformed, the deformed faces we see in the example of AI art aren’t necessarily the goal of either the artist or the machine. What we are looking at are instances in which the machine has failed to properly imitate a human face, and has instead spit out some surprising deformities.

Yet this is exactly the sort of image that Christie’s is auctioning.

A form of conceptual art

Does this outcome really indicate a lack of intent?

I would argue that the intent lies in the process, even if it doesn’t appear in the final image.

For example, to create “The Fall of the House of Usher,” artist Anna Ridler took stills from a 1929 film version of the Edgar Allan Poe short story “The Fall of the House of Usher.” She made ink drawings from the still frames and fed them into a generative model, which produced a series of new images that she then arranged into a short film.

Another example is Mario Klingemann’s “The Butcher’s Son,” a nude portrait that was generated by feeding the algorithm images of stick figures and images of pornography.

On the left: A still from ‘The Fall of the House of Usher’ by Anna Ridler. On the right: ‘The Butcher’s Son’ by Mario Klingemann.

I use these two examples to show how artists can really play with these AI tools in any number of ways. While the final images might have surprised the artists, they didn’t come out of nowhere: There was a process behind them, and there was certainly an element of intent.

Nonetheless, many are skeptical of AI art. Pulitzer Prize-winning art critic Jerry Saltz has said he finds the art produced by AI artists boring and dull, including “The Butcher’s Son.”

Perhaps they’re correct in some cases. In the deformed portraits, for example, you could argue that the resulting images aren’t all that interesting: They’re really just imitations – with a twist – of pre-curated inputs.

But it’s not just about the final image. It’s about the creative process – one that involves an artist and a machine collaborating to explore new visual forms in revolutionary ways.

For this reason, I have no doubt that this is conceptual art, a form that dates back to the 1960s, in which the idea behind the work and the process is more important than the outcome.

As for “The Butcher’s Son,” one of the pieces Saltz derided as boring?

It recently won the Lumen Prize, a prize dedicated to art created with technology.

As much as some critics might decry the trend, it seems that AI art is here to stay.

From Obscurity, Hilma af Klint Is Finally Being Recognized as a Pioneer of Abstract Art

Smithsonian Magazine

The arrival of artistic abstraction has long been attributed to a triumvirate of male painters: Wassily Kandinsky, a Russian Expressionist whose improvisational creations translated musical compositions into cacophonies of color; Kazimir Malevich, a Russian Suprematist who pioneered the concept of complete non-representation with his 1915 “Black Square,” a literal block of black painted onto a white canvas; and Piet Mondrian, co-founder of the Netherlands-based De Stijl movement, which advocated pure, universal beauty in the form of simple grids of primary colors.

But an elusive female figure actually beat these art world giants to the punch. As Roberta Smith reports for the New York Times, a new Guggenheim exhibition is putting the spotlight on the pioneering Swedish painter Hilma af Klint, whose work has only emerged from obscurity in recent decades. Af Klint not only began dabbling in abstraction in 1906—nearly a decade before Kandinsky, Malevich and Mondrian first defied traditional representation—but managed to do so at a time when her peers were largely constrained to painting flowers, animals and domestic scenes.

Af Klint saw herself as a “holy transcriptionist, a technician of the unknown” whose work was simply a stepping stone in the pursuit of knowledge (David Heald)

Born in 1862 to a middle-class Swedish family, af Klint graduated with honors from the Stockholm Royal Academy of Fine Arts. As a scholar, she showed herself to be an “eager botanist, well read in natural sciences and in world religions,” according to the non-profit Art Story. While her early works were typical of the period, it was her growing interest in spiritualism—which in the late Victorian era was stoked by new scientific discoveries of the "invisible world," including cathode rays, X-rays and the electron—that triggered a dramatic change in her style. As Caitlin Dover notes for the Guggenheim’s blog, beginning in 1896, af Klint and a group of women collectively dubbed the Five met regularly for sessions filled with prayer, meditation, sermons and séances. The Five believed they were in contact with spirits who would outline tasks for them to complete back on Earth, such as building a temple or creating artwork. On January 1, 1906, af Klint claimed a spirit known as Amaliel addressed her directly, asking her to create the paintings that would line the proposed temple’s walls.

“Amaliel offered me a work and I answered immediately Yes,” af Klint wrote in one of her many spiritually focused notebooks. “This was the large work, that I was to perform in my life.”

According to a separate Guggenheim blog post by Johan af Klint, the artist’s grand-nephew, and Hedvig Ersman, a member of the Hilma af Klint Foundation, af Klint readily followed the spirit’s instructions, completing 111 works in a series entitled “Paintings for the Temple” between November 1906 and April 1908—a staggering rate of one every few days.

Af Klint’s monumental canvases are characterized by her free-wheeling swirls, pastel curlicues and almost psychedelic vocabulary of unrestrained movement. The art is designed to overwhelm—which is exactly what it does in the Guggenheim show, entitled Hilma af Klint: Paintings for the Future.

The rousing retrospective, which features 170 works by the woman who may well deserve the title of Europe’s first abstract artist, is, in fact, af Klint’s first in the United States. Part of the reason for her lack of name recognition up until this point stems from an event that occurred in 1908. That year, af Klint invited famed spiritualist Rudolf Steiner to assess her creations. Rather than celebrate her paintings, he told her that no one must see the work for 50 years. Af Klint took this advice to heart, Kate Kellaway writes for the Observer, halting her work for the next four years and shifting focus to caring for her blind mother.

Following a second burst of inspiration that concluded in 1915, af Klint completed a total of 193 “Paintings for the Temple.” A selection of these canvases, fittingly dubbed “The Ten Largest,” dominates the Guggenheim’s High Gallery, providing a whimsical journey through the human life cycle. As the New York Times’ Smith explains, these works measure up to 10 feet by 9 feet and feature a pastel palette of curved shapes, symbols and even words.

“Evoking the passage of life, they combine depictions of lilies and roses with forms suggestive of male and female gonads, spermatozoa, breasts and a somewhat labial layering of curves,” Hettie Judah writes for the Independent.

Upon her death in 1944, Hilma af Klint stipulated that her paintings remain unseen for the next 20 years (Wikimedia Commons)

Frieze’s Anya Ventura believes that af Klint saw herself as a “holy transcriptionist, a technician of the unknown” whose work was simply a stepping stone in the pursuit of knowledge. And, after completing her “Paintings for the Temple,” the Swedish painter began the heady task of interpreting them, making annotations and edits aimed at decoding what Ventura calls a “new language delivered by the divine.”

Af Klint died penniless in 1944. Rather than bequeathing her creations to the world, she stipulated that they remain unseen for the next 20 years. This wish was fulfilled, albeit belatedly, with the first display of her work in 1986 and subsequent shows in the following decades. Now, thanks to renewed interest in her body of work, including the new Guggenheim exhibition, af Klint's place as one of the first pioneers of abstract art is being affirmed.

“The art history canon wasn’t ready to accept Hilma af Klint at the time of her death in 1944,” curator Tracey Bashkoff tells the Guggenheim’s Dover. “Now, hopefully, we’re pushing those boundaries enough that there is a willingness to see things differently, and to embrace work that was done by a woman, and was done outside of the normal mechanisms of the art world of her time. I think she understood that her work was really for a future audience."

Hilma af Klint: Paintings for the Future is on view at the Guggenheim through April 23, 2019.

The Shrewd Press Agent Who Transformed William Cody Into Larger-Than-Life Buffalo Bill

Smithsonian Magazine

To appreciate the wonder and luster of a star in the sky, one must look off to its side—“averted vision,” it is called.

So it was in the late 19th century with the rising star of republics—the United States—and with the man who, more than any other, came to epitomize our nation’s drive, character, promotional flair, and obsession with celebrity: William F. Cody.

In the second half of the century, Cody, also known as “Buffalo Bill,” achieved a measure of renown in the United States as a Pony Express rider, plainsman, buffalo hunter, and military scout. Brave, rugged, handsome, and decidedly Western, he was the subject of hundreds of popular dime novels and became a stage actor portraying himself in a series of shoot-’em-up dramas that were wretched productions but nevertheless titillated theater-goers. Starting in 1883, his action-packed outdoor arena show, “Buffalo Bill’s Wild West,” attracted large audiences in places like Lancaster, Woonsocket, and Zanesville.

Still, it wasn’t until Cody took his act to Europe, in 1887, that Americans truly began to revere him as an exemplar of national character. The Wild West was a huge hit in Britain. One million people saw the show, including statesmen (members of Parliament, and once-and-future Prime Minister William Gladstone) and famous actors (the estimable London actor-manager Henry Irving told one newspaper that the Wild West would “take the town by storm”). Queen Victoria emerged from seclusion to visit the show two days after it opened and enjoyed it a second time 40 days later during a command performance at Windsor Castle. The audience that day included many other kings, queens, and members of European royalty who had come to town to celebrate her Golden Jubilee.

W.F. “Buffalo Bill” Cody in 1875 (George Eastman House Collection/Wikimedia Commons)

Their adulation was picked up by the British newspapers, and that press coverage was then amplified by many American periodicals, which eagerly chronicled Cody’s every move through London society. The New York World observed that Cody was already as well-known to the masses in London as the queen. “You could not pick up in the most obscure quarter of London any one so ignorant as not to know who and what he is. His name is on every wall. His picture is in nearly every window.” The magazine Puck joked that Cody was mostly spending his time playing poker with duchesses. Other publications speculated that Cody might be knighted.

None of this happened by chance. Cody’s trip and its newspaper coverage had been engineered in large part by a burly, brilliant, sombrero-wearing press agent named John M. Burke, a man with a genius for promotion and a keen sense of what it meant to be American.

Upon first meeting Cody in 1869, Burke had recognized the scout’s quintessentially rugged Western character and universal appeal. “Physically superb, trained to the limit, in the zenith of manhood, features cast in nature’s most perfect mold…,” Burke wrote later, Cody was “…the finest specimen of God’s handiwork I had ever seen.” Burke himself was somewhat rootless—born to Irish immigrants who died when he was an infant; raised in a succession of towns and homes; trained as an itinerant theater manager, newspaperman, and scout. Perhaps for this reason, he intuited his countrymen’s emerging, visceral desire for belonging, and the prospect that Cody was an identity the American people could latch onto.

This was a remarkable insight from a man who seemingly had a crystal ball (as early as the 1890s, Burke predicted that women would get the vote, world war would break out in Alsace-Lorraine, and a member of a minority group would become president of the United States). For in the years following the Civil War, American identity was still on the blacksmith’s forge. The Republic had been formed during the lifetimes of people still alive to tell the tale, and it had been re-formed by the War Between the States. But there hadn’t been many prominent Americans in world or cultural affairs since the days of Jefferson and Franklin. Perhaps the most clearly identifiable American trait was neither intellectual nor artistic, but simply the enterprising, brash spirit of “Yankee push” best exemplified by P.T. Barnum, who was somehow both laudable and horrifying.

John Burke, the marketing force behind Buffalo Bill (Wikimedia Commons)

And so, unsure of its place, unsteady in its path, America looked across the ocean for validation. Writers, artists, statesmen, and entertainers from the United States sailed to Britain and the Continent to measure their growth and worth. The painter George Catlin, who had earned praise for his portraits of New York Governor DeWitt Clinton and General Sam Houston, and fame for his sketches of 48 tribes of American Indians with whom he had lived, still found it necessary to seek true legitimacy through a tour of London, Paris, and Brussels in the 1830s and ’40s. Even Barnum, famous and successful as he was, felt compelled to take one of his popular acts—his distant cousin Charles Stratton, also known as General Tom Thumb—on a similar sort of corroboration tour of Europe in 1844-45, appearing before audiences that included queens and tsars.

But Burke managed to do something with Cody and the Wild West that the earlier cultural exports never could. He burnished and redefined the American reputation by reflecting it in the shiny crowns of the Old World’s beloved monarchs, juxtaposing ancient and modern and thus validating the appeal of a new kind of American: the Westerner. He accomplished this by applying groundbreaking marketing tactics to promote a sort of on-the-sleeve patriotism throughout the Wild West’s tour of Britain in 1887-88, and during a subsequent tour of the Continent in 1889-92.

For example, he created an illustration of all the “Distinguished Visitors” to the show, with a dour-looking H.R.H. Queen Victoria and other royals surrounding a splendid portrait of Cody in the center. He invited reporters to see how efficiently Cody’s massive show unloaded its train cars, as a way of promoting American ingenuity. He devised a system of horse-drawn mobile billboards that awed one newspaper in Dresden, Germany seemingly as much as the show itself: “Already weeks in advance, the audience is prepared for the show through billboards etc. The American, in this matter as in many others, is very practically minded.” And everywhere the show traveled, Burke’s team plastered towns with iconic images to herald the arrival of the Wild West, employing “immense painted posters all over the city to advertise Buffalo Bill—his portraits pasted all in a row, many times larger than natural; the cowboys on their wild horses; the Indians looking very savage,” as the Brooklyn Daily Eagle reported. (In France in 1889, this campaign made a deep impression on even the most stuck-up Parisians. “Eh bien!,” wrote Le Temps. “All that ingenious and bold American advertising enterprise has proved to be as honest as our tame [publicity] ever was.” Crowds flocked to the Wild West show in Paris and clamored for cowboy gear in the shops all around town.)

And so Burke transformed the flesh-and-blood Cody into the almost mythic Buffalo Bill, a man whose spur-jingling acts of derring-do embodied America’s heroic past—and whose entrepreneurial wrangling of the world’s most successful entertainment property foretold of America’s promising future. Burke consciously crafted a new Western American self-image, in a rifle-toting, money-making, entrepreneurial husband-father and cultural conqueror who looked dashing in buckskins and dapper in business suits. For millions of Americans, Cody represented a new and uniquely American persona to which they could relate.

An 1898 advertisement for the Wild West show (Wikimedia Commons)

It all paid off handsomely. The Wild West returned to American shores triumphant, greeted dockside by thousands of grateful well-wishers. The show thrived, and in 1893 enjoyed its most successful season ever, a six-month stand outside the gates of the World’s Columbian Exposition in Chicago that played to full houses twice a day and raked in $1 million in profits. Soon, Burke would even float Cody’s name as a presidential candidate.

In the ensuing years, John M. Burke and William F. Cody continued building the Wild West brand, though mostly on American soil. What had begun with an averted vision across the Atlantic was now an American star of a completely different magnitude. Before it was all done in 1916, they had performed in front of 50 million people, and had carved out a place for Cody in that strange eternal pantheon of larger-than-life legends, where real people and fictional ones (George Washington, Johnny Appleseed, Davy Crockett, Paul Bunyan, Pecos Bill, John Henry, Babe Ruth) dwell side-by-side in a murky world of perpetual stories, myths, and singsong nursery rhymes. When Cody died in 1917, the country mourned in a way it hadn’t since Lincoln’s assassination. Around 25,000 people ascended the tortuous path up Lookout Mountain in Colorado to attend his funeral.

But perhaps the most important legacy of Burke and Cody was the double-barreled contribution they made to the new sense of American identity: a crystallized articulation of the Western ideal that would find expression in everything from the Hollywood Western to the Marlboro Man to Ronald Reagan; and their incredibly shrewd use of promotion to build celebrity and leverage it for commercial success. In that respect, Burke and Cody may be more a part of American life today than they ever were in their own times.

Prehistoric Wine Reveals Missing Pieces of Ancient Sicilian Culture

Smithsonian Magazine

This article was originally published on February 13, 2018, on The Conversation and has been republished for International Archaeology Day.

Monte Kronio rises 1,300 feet above the geothermally active landscape of southwestern Sicily. Hidden in its bowels is a labyrinthine system of caves, filled with hot sulfuric vapors. At lower levels, these caves average 99 degrees Fahrenheit and 100 percent humidity. Human sweat cannot evaporate and heat stroke can result in less than 20 minutes of exposure to these underground conditions.

Nonetheless, people have been visiting the caves of Monte Kronio since as far back as 8,000 years ago. They’ve left behind vessels from the Copper Age (early sixth to early third millennium B.C.) as well as various sizes of ceramic storage jars, jugs and basins. In the deepest cavities of the mountain these artifacts sometimes lie with human skeletons.

Archaeologists debate what unknown religious practices these artifacts might be evidence of. Did worshipers sacrifice their lives bringing offerings to placate a mysterious deity who puffed gasses inside Monte Kronio? Or did these people bury high-ranking individuals in that special place, close to what was probably considered a source of magical power?

The storage jars and their mysterious contents, left millennia ago in the recesses of Monte Kronio. (Davide Tanasi et al. 2017, CC BY-ND)

One of the most puzzling of questions around this prehistoric site has been what those vessels contained. What substance was so precious it might mollify a deity or properly accompany dead chiefs and warriors on their trip to the underworld?

Using tiny samples, scraped from these ancient artifacts, my analysis came up with a surprising answer: wine. And that discovery has big implications for the story archaeologists tell about the people who lived in this time and place.

Analyzing scraping samples

In November 2012, a team of expert geographers and speleologists ventured once again into the dangerous underground complex of Monte Kronio. They escorted archaeologists from the Superintendence of Agrigento down more than 300 feet to document artifacts and to take samples. The scientists scraped the inner walls of five ceramic vessels, removing about 100 mg (0.0035 ounces) of powder from each.

I led an international team of scholars, which hoped analyzing this dark brown residue could shed some light on what these Copper Age containers from Monte Kronio originally carried. Our plan was to use cutting-edge chemical techniques to characterize the organic residue.

We decided to use three different approaches. Nuclear magnetic resonance spectroscopy (NMR) would be able to tell us the physical and chemical properties of the atoms and molecules present. We turned to scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM/EDX) and attenuated total reflectance Fourier transform infrared spectroscopy (ATR FT-IR) for the elemental analysis – the chemical characterization of the samples.

These analysis methods are destructive: The sample gets used up when we run the tests. Since we had just that precious 100 mg of powder from each vessel, we needed to be extremely careful as we prepared the samples. If we messed up the analysis, we couldn’t just run it all over again.

There were no second chances with the tiny amount of samples that had been scraped from the ancient vessels. (Davide Tanasi, CC BY-ND)

We found that four of the five Copper Age large storage jars contained an organic residue. Two contained animal fats and another held plant residues, thanks to what we inferred was a semi-liquid kind of stew partially absorbed by the walls of the jars. But the fourth jar held the greatest surprise: pure grape wine from 5,000 years ago.

Presence of wine implies much more

Initially I did not fully grasp the import of such a discovery. It was only when I vetted the scientific literature on alcoholic beverages in prehistory that I realized the Monte Kronio samples represented the oldest wine known so far for Europe and the Mediterranean region. This was an incredible surprise, considering that southern Anatolia and the Transcaucasian region were traditionally believed to be the cradle of grape domestication and early viticulture. At the end of 2017, research similar to ours using Neolithic ceramic samples from Georgia pushed back the earliest known traces of pure grape wine even further, to 6,000-5,800 B.C.

This idea of the “oldest wine” conveyed in news headlines captured the public’s attention when we first published our results.

But what the media failed to convey are the tremendous historical implications that such a discovery has for how archaeologists understand Copper Age Sicilian cultures.

From an economic standpoint, the evidence of wine implies that people at this time and place were cultivating grapevines. Viticulture requires specific terrains, climates and irrigation systems. Archaeologists hadn’t, up to this point, included all these agricultural strategies in their theories about settlement patterns in these Copper Age Sicilian communities. It looks like researchers need to more deeply consider ways these people might have transformed the landscapes where they lived.

A view of Monte Kronio today. (Gianni Polizzi, 2018, CC BY-ND)

The discovery of wine from this time period has an even bigger impact on what archaeologists thought we knew about commerce and the trade of goods across the whole Mediterranean at this time. For instance, Sicily completely lacks metal ores. But the discovery of little copper artifacts – things like daggers, chisels and pins found at several sites – shows that Sicilians somehow developed metallurgy by the Copper Age.

The traditional explanation has been that Sicily engaged in an embryonic commercial relationship with people in the Aegean, especially with the northwestern regions of the Peloponnese. But that doesn’t really make a lot of sense because the Sicilian communities didn’t have much of anything to offer in exchange for the metals. The lure of wine, though, might have been what brought the Aegeans to Sicily, especially if other settlements hadn’t come this far in viticulture yet.

Ultimately, the discovery of wine remnants near gaseous crevices deep inside Monte Kronio adds more support to the hypothesis that the mountain was a sort of prehistoric sanctuary where purification or oracular practices were carried out, taking advantage of the cleansing and intoxicating features of sulfur.

Wine has been known as a magical substance since its appearances in Homeric tales. As red as blood, it had the unique power to bring euphoria and an altered state of consciousness and perception. Mixed with the incredible physical stress due to the hot and humid environment, it’s easy to imagine the descent into the darkness of Monte Kronio as a transcendent journey toward the gods. The trek likely ended with death for the weak, maybe with the conviction of immortality for the survivors.

And all of this was written in the grains of 100 milligrams of 6,000-year-old powder.

Musician José Feliciano shook up a baseball tradition at age 23

National Museum of American History

José Feliciano will remain forever celebrated for his perennial Christmas classic "Feliz Navidad," one of his many hit recordings that have resulted in 45 Gold and Platinum records and eight Grammy awards. His launch to stardom began 50 years ago, with his hit 1968 recording of "Light My Fire," but it was not until his appearance at a baseball game later that fall that he truly became a household name.

In 1967 this guitar was custom built for José Feliciano. On it, he recorded his first hit in 1968, “Light My Fire,” and performed before Game 5 of the 1968 World Series.

In 2018, José Feliciano welcomed new citizens into the United States during a naturalization ceremony hosted by the National Museum of American History.

Indeed, his early life in Lares, Puerto Rico, and then in New York City, where his family moved when he was five, conjured for him spectacular visions of the brilliant traditions of American music, song, and . . . baseball. So when he was asked to perform "The Star-Spangled Banner" before Game 5 of the 1968 World Series at Detroit's Tiger Stadium, he crafted the most beautiful, and meaningful, rendition that he could imagine. He was only 23 at the time, and his interpretation of the anthem was unexpected, new, different, and vital. It was soulful and searching. Steeped in blues and folk music traditions and seasoned with the percolation of his fingers across a guitar built in the Sunset Boulevard shop of an immigrant family from Torreon, Mexico, his rendition demonstrated the complexity of the American experience as none had before.

The live national broadcast of his youthful and pleading, yet unorthodox, performance reverberated throughout a country embroiled in the Vietnam War, reeling over the assassinations of Dr. Martin Luther King Jr. and Robert Kennedy, and recovering from the previous summer's civil uprisings in cities throughout the country, including Detroit.

Some were offended by the way he made the song his own. They believed that performances of the anthem should be delivered with the solemn pomp and circumstance of martial music, rather than incorporate the instruments, vocal inflections, and musical styles found in the more popular genres of the day. They considered Feliciano's version not as heartfelt and sincere, but as an attack on authority and tradition. The day after the game, the Los Angeles Times reported that NBC had "received a rash of calls from irate viewers." One spectator at the game called it "a disgrace, an insult. I’m going to write my senator about it." Another, also quoted in the Los Angeles Times, called it "non-patriotic." Feliciano heard boos from many in the crowd, and stood his ground while interviewed during the event: "I just do my thing—what I feel. . . . I love this country very much. I'm for everything this country stands for."

Others supported him and, through their embrace, Feliciano sent "The Star-Spangled Banner" into the pop charts for the first time ever. As Bill Freehan of the Detroit Tigers put it after the game, Feliciano made Marvin Gaye, who sang the anthem in a conventional manner before Game 4, "sound like a square." The attention that he drew from the performance launched a revolution through the present day for popular artists, from Jimi Hendrix to Whitney Houston to Lady Gaga, to personalize and seek new ways to find meaning in the anthem.

We continue to place great weight on the ritual singing of "The Star-Spangled Banner" at sporting events—both as an opportunity to express thanks for the sacrifices of those before us, and, through solemn protest, to challenge the country to do better, to continue our march toward a more perfect union. It was Feliciano's 1968 performance, however, that led the way for us all to search and explore together how and why "The Star-Spangled Banner" matters.

Following his keynote address, delivered just a few feet from the flag from Fort McHenry that inspired the anthem, Feliciano performed "The Star-Spangled Banner" on his 1967 Candelas guitar, just as he performed it during the 1968 World Series. You can also find this video on YouTube.

José Feliciano has donated a set of objects to the National Museum of American History that each speak to different facets of his life and career. The objects include the braillewriter that he has used since the 1960s to write lyrics, notes to fans, and love letters to his wife, Susan, who joined us at the donation ceremony on Flag Day. Feliciano has been blind since birth, and his braillewriter was a critical songwriting tool that also contributes magnificently to the museum's growing collection of objects that convey the stories of Americans who are blind. 

Feliciano's Perkins Brailler.

This letter, which begins "My dear Mr. Jose Feliciano, This is the first letter I have written to somebody in film world...," was embroidered in pink, green, and yellow on a black background and mailed to Feliciano in the early 1970s by a member of Japan's José Feliciano Fan Club.

 

Feliciano talks about the guitar he donated to our collections. This video is also available on YouTube.

Three objects now in our collection represent the extent of his global reach: a pair of his iconic sunglasses, the likes of which have featured on millions of album covers and concert posters throughout both hemispheres; a long-used performance stool that has journeyed with him to concert halls and recording studios all over the world; and a cherished letter from the early 1970s that had hung in his home studio for years—a piece of fan mail embroidered with a message in English from a member of Japan's José Feliciano Fan Club that demonstrates not only the breadth of his global appeal, but also the intense dedication of his fans. Finally, he donated his beloved 1967 Candelas guitar—the guitar that was built specifically for him by famed Mexican American instrument-maker Candelario Delgado. With this guitar, Feliciano recorded his first hit, "Light My Fire." And with this guitar he provided the world that historic 1968 performance of the national anthem.

During a naturalization ceremony in Flag Hall, Feliciano performed "The Star-Spangled Banner" on this 1967 Concerto Candelas guitar before donating it to the National Museum of American History.

John Troutman is Curator of American Music in the Division of Culture and the Arts. He has also blogged about the legacies of James Cotton and Chuck Berry.

Posted Date: 
Tuesday, October 9, 2018 - 10:00

These Are the United States’ 18 Most Dangerous Volcanoes

Smithsonian Magazine

Eighteen volcanoes scattered across the United States’ western coast officially pose a “very high threat” to their surrounding communities, according to a newly updated version of the U.S. Geological Survey National Volcanic Threat Assessment.

The Associated Press’ Seth Borenstein notes that a dozen volcanoes rose in threat level since the last assessment was completed in 2005, while 20 dropped. Of the top 18, 11 actually saw their overall threat scores decrease despite remaining at a very high threat level. The new assessment ranks 161 of the nation’s active or potentially active volcanoes based on the predicted damage inflicted by their hypothetical eruption. It does not forecast which volcano will erupt next, National Geographic’s Maya Wei-Haas emphasizes, but rather gauges the “potential severity” of an eruption’s impact.

Mount Kilauea, which bombarded Hawaii’s Big Island with lava bombs, ash and volcanic smog in a series of eruptions that lasted throughout the summer, and Washington’s Mount St. Helens, the infamous site of a 1980 eruption that killed 57 people, topped the updated rankings, while Washington’s Mount Rainier, Alaska’s Redoubt Volcano and California’s Mount Shasta rounded out the top five.

As CNN’s Andrea Diaz reports, the USGS conducted its threat assessment by weighing 24 factors related to a volcano’s “hazard potential and exposure of people and property to those hazards.” These factors, which were used to sort the country’s volcanoes into five threat levels ranging from very low to very high, included type of volcano, recent seismic activity, frequency of eruption, number of people living nearby and previously recorded instances of eruption-triggered evacuations.
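To give a sense of how a factor-based ranking like this can work in principle, here is a minimal Python sketch. The factor names, point values and the simple hazard-times-exposure formula are my own illustrative assumptions, not the USGS's published scoring method.

# Illustrative sketch of a factor-based volcano threat ranking. The factors,
# scores and formula below are hypothetical and are not the USGS methodology.

def threat_score(hazard_factors, exposure_factors):
    """Combine hazard factors (how dangerous an eruption would be) with
    exposure factors (who and what is in harm's way)."""
    return sum(hazard_factors.values()) * sum(exposure_factors.values())

volcanoes = {
    "Hypothetical Volcano A": (
        {"volcano_type": 1, "recent_seismicity": 1, "eruption_frequency": 2},
        {"nearby_population": 3, "aviation_exposure": 2, "past_evacuations": 1},
    ),
    "Hypothetical Volcano B": (
        {"volcano_type": 1, "recent_seismicity": 0, "eruption_frequency": 1},
        {"nearby_population": 1, "aviation_exposure": 3, "past_evacuations": 0},
    ),
}

# Rank from highest to lowest combined score.
for name, (hazard, exposure) in sorted(
        volcanoes.items(), key=lambda kv: threat_score(*kv[1]), reverse=True):
    print(f"{name}: {threat_score(hazard, exposure):.0f}")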

Volcanoes forecast to endanger people, property or critical infrastructure upon eruption were ranked as higher threats, according to The Verge’s Mary Beth Griggs. This helps account for Kilauea’s spot at the top of the list, George Dvorsky writes for Gizmodo: In addition to being a highly active volcano, Kilauea is situated directly beside an inhabited community and a geothermal power plant. Washington’s Mount Rainier, which finished third in the rankings, is located just 59 miles from Seattle and poses a significant hazard to roughly 300,000 people—the most of any active volcano included on the list.

Interestingly enough, risk to aviation also proved key to the updated assessment. As National Geographic’s Wei-Haas explains, ash from Alaskan volcanoes (five of which are classified as very high threats) threatens the planes navigating the state’s skies, causing problems like engine erosion, clogged air filters and, in worst-case scenarios, complete engine failure.

Eight volcanoes included in the 2005 rankings failed to make the cut this time around. This is in large part thanks to more accurate dating methods, which pinpointed the volcanoes’ last known eruptions to more than 11,700 years ago, the benchmark for a volcano to be considered inactive. But Ben Andrews, director of the Smithsonian’s Global Volcanism Program, tells Wei-Haas that three sites on the list—Wyoming’s Yellowstone supervolcano, New Mexico’s Valles caldera and California’s Long Valley volcano—represent the exception to the rule, as volcanologists believe they erupt on intervals longer than the 11,700-year mark.

The USGS findings provide a potent reminder of the United States' surprisingly high levels of volcanic activity. As the report notes, the country is home to more than 10 percent of the world’s known active and potentially active volcanoes. Since 1980, these geological hotspots have produced 120 eruptions and 52 episodes of notable volcanic unrest.

In an interview with AP's Borenstein, Denison University volcanologist Erik Klemetti said that the U.S. is “sorely deficient in monitoring” for many of the top 18 sites.

“Many of the volcanoes in the Cascades of Oregon and Washington have few, if any, direct monitoring beyond one or two seismometers,” Klemetti explained. “Once you move down into the high and moderate threat [volcanoes], it gets even dicier.”

USGS geologist Angie Diefenbach tells CNN that the rankings are not designed to scare the public. Instead, they aim to further support monitoring of high-risk volcanoes and better prepare at-risk communities for potential eruptions.

Volcanologist Janine Krippner of Concord University concurs, telling Wei-Haas that “this is a really good opportunity to remind people that, yeah, this is one of the most volcanically active countries in the world.”

She adds, “It needs to be a priority for funding and monitoring. USGS is working incredibly hard to understand what these hazards are and communicate them—and everyone needs to listen.”

Church bells and the noise of democracy

National Museum of American History

Like many other churches in the early republic, the Congregational meetinghouse in Castine, Maine, served both sacred and secular functions. Built in 1790, it was home not just to worship services but also to town meetings and judicial proceedings. Taxpayers paid its pastor’s wages. Though the ratification of the First Amendment, a year after the meetinghouse’s construction, made such arrangements unconstitutional at the federal level, church and state remained very much entwined at the local level for decades. When parishioners heard the tolling of the 692-pound bell Paul Revere cast for the meetinghouse steeple in 1802, they knew it might signal time to pray—or time to vote.

Currently on display in the museum’s "American Stories" exhibition, this bell, inscribed "Revere & Son Boston," was cast by Paul Revere.

More than two centuries later, Americans tend to expect a much sharper divide between religion and all levels of government. And yet churches and other houses of worship continue to play an essential role in local, state, and national elections. Along with schools, libraries, rec centers, and other private and public institutions, thousands of churches (and a growing number of synagogues and mosques) serve as polling places across the country. In some areas, churches account for half of all available voting sites.

Early on, electoral use of churches was a practical matter. In rural locales especially, churches were often the only buildings large enough to host community functions, acting when necessary as schools and hospitals as well as polling places. In towns and cities with more options available, voting might occur anywhere that could hold a crowd. As historian Rosemarie Zagarri has noted, in both the colonial era and the first decades of the republic “elections could be held at almost any public venue—from a town hall to a courthouse to a church or tavern.” No matter where voting occurred, it was often so disorderly—at times thanks to an air of celebration, at others due to the potential for violence—that every setting would be equally filled by the noise of democracy, even those that might be a peaceful sanctuary on any other day.

Despite the pragmatic origins of the practice, voting in churches has become far more complicated over time. The expected political neutrality of polling sites has not prevented some churches from using their status as moral arbiters on Election Day. In the 1800s, churches with well-known stances on hot-button political issues, such as temperance and woman suffrage, hoped votes cast within their walls might be in keeping with the tenets of their faith. As recently as 1986, hundreds of Florida churches refused to serve as polling places to protest petition drives seeking to put pro-gambling initiatives on the ballot.

A Church of the Nazarene house of worship serves as a polling place in this 2000 photograph by David Hume Kennerly. (©David Hume Kennerly/Courtesy of Photographic History Collection, National Museum of American History)

Other churches have seen their electoral stewardship not as proponents of particular measures with religious implications but rather as proponents of democratic participation. Throughout the middle of the 1900s, locally organized programs across the country encouraged ministers and priests to toll their church bells hourly while the polls were open. Doing so, one 1960 program in Pennsylvania put it, would “remind voters that liberty and freedom can be preserved only by the use of their free balloting.”

With questions of religious freedom and diversity becoming ever more politicized, in recent years the use of religious buildings for voting has been challenged in the courts and behind the scenes on election commissions. In 2007 a Florida lawsuit objecting to sectarian messages visible during voting in a Catholic Church was dismissed by a federal judge who maintained that voting in churches was constitutional. Elsewhere in Florida a few years later, a mosque was removed from a list of county polling places after local election officials received complaints and threats of violence if the practice continued.

Whether driven by First Amendment concerns or prejudice against minority religious groups, objections to voting in houses of worship often raise an issue highlighted by recent scholarship. Over the past decade, several studies have shown that where one votes matters. Though every state has laws prohibiting the display of campaign materials at voting locations (usually stating that political signs cannot be displayed within 100 feet of a polling place entrance), it is possible that polling places themselves subtly affect the choices of those who step behind the voting curtain. Religious sites, these studies suggest, exert a small but measurable influence on votes cast.

Yet at a time when non-voters far outnumber members of either political party, having as few barriers as possible to voting, and making abundant venues available, can only be a benefit. Whatever future generations might decide about the church-state implications of voting in houses of worship, the practice has been a vital part of American history since before the first ringing of Paul Revere’s bell.

Do you know where you would go to vote in the next election? Find your nearest polling place here.

Peter Manseau is the Lilly Endowment Curator of American Religious History at the National Museum of American History.

Posted Date: 
Friday, November 2, 2018 - 10:00

Place on the plate: Smith Island, Chesapeake Bay

National Museum of American History

"Regions Reimagined," the theme for this year's Smithsonian Food History Weekend, will explore the power of place and the dynamics of change in American regional foodways. This post by Curator Paula Johnson takes us to a maritime community that will be represented in a cooking demonstration at the museum on Saturday, November 3.

Sprawling 12 miles offshore from Crisfield on Maryland's lower Eastern Shore, Smith Island is a series of low-lying, marshy landmasses in the Chesapeake Bay. Unless you have your own boat, getting to the island requires a trip by ferry across the waters of Tangier Sound. When the sound is "dish ca'm" (an expression islanders use to mean "smooth as a dish"), the voyage allows for birdwatching and contemplation. But when the waters are riled by "weather," the choppy, rolling passage tends to focus a mainlander’s mind on survival. For island residents accustomed to whatever the bay hands out, a rough passage goes without notice.

The harbor with workboats at Ewell, Smith Island, on a calm day in October 2006. Photos in this post are by the author unless otherwise noted.

As descendants of settlers from Cornwall, England, who came to the island in the 17th century, Smith Islanders possess a depth of knowledge about the bay that infuses all aspects of their work and community life. Since the 19th century, as markets and transportation networks were established for local seafood, Smith Islanders have "followed the water" as commercial watermen, harvesting the seasonal round of seafood—blue crabs (Callinectes sapidus) in summer and oysters (Crassostrea virginica) in winter. In the 20th century, as the bay's fisheries declined due to widespread loss of habitat, disease, pollution, and increased harvesting competition, the number of watermen able to make a living, and the population of the island itself, also declined. The U.S. Census reported 777 residents of Smith Island in 1930; 453 in 1990; and only 276 people living in the island's three communities of Ewell, Tylerton, and Rhodes Point in 2010.

Rhodes Point, Smith Island, on a calm, sunny day, 2006

I began traveling to Smith Island in the 1980s as part of a research project organized by the Maryland Historical Trust's folklorist Elaine Eff. Elaine and I made regular trips to conduct oral history interviews and work with islanders to create the Smith Island Visitor's Center. While my main interest was documenting the designs and construction techniques of the island's working watercraft for my book, The Workboats of Smith Island (1997), I was also drawn to the expressive dimensions of island life, including traditional foodways and storytelling. Boats, food, and stories all embodied and reflected the islanders' deep sense of place and identity.

Interviewing boatbuilder Haynie Marshall on the waterfront in Tylerton, 1994. Photo by David A. Taylor.

Against this backdrop of a maritime community in transition (or "an island out of time," as Tom Horton referred to it in his 1996 book by the same name), we're looking forward to welcoming Smith Island native Janice Marshall to the 2018 Food History Weekend stage. Marshall, whose family extends back six generations on the island, is a storyteller, songwriter, home cook, expert crab-picker, and businesswoman. Like many island wives, she was an essential partner with her husband in the "water business."

Janice Marshall picking crabmeat at home several years before she established the Crabmeat Co-op at Tylerton, Smith Island, in 1996.
Janice Marshall performing a song and skit with other Smith Islanders for a charity event.

On Saturday, November 3, Marshall will be sharing two recipes in a cooking demonstration on the museum's stage: oyster pie and 10-layer cake (recipes will be posted online). Oyster pie is a winter dish, traditionally made and served at home. Marshall will prepare this easy-to-make comfort food that her husband, Bobby, would look forward to and then devour after a long, cold day of oystering on the water.

Janice Marshall making a Smith Island layer cake in her home kitchen.
Smithsonian staffer Nanci Edwards holding the finished masterpiece, 2006.
A Smith Island cake, topped with a Maryland flag. Photo courtesy of Edwin Remsberg.

Ten-layer Smith Island cake is the opposite of comfort food served at home: it's meant to be seen and appreciated by mainlanders and islanders alike. The women of Smith Island are known for creating multilayered cakes for church suppers, fundraisers, and other community events. After gaining attention over the years, the Smith Island cake was named the official state dessert of Maryland in 2008. While Marshall won't have time to make and frost all 10 layers during the demo, she will bring her cake pans and will put the finishing touches on a cake from home. She'll share her tricks for making, assembling, and frosting this fancy, sweet dessert that emerged from a salty maritime community on the Chesapeake Bay.

Smith Island cakes at a community dinner, 2006
Figure of a waterman dip-netting for soft crabs, made by Smith Islander Waverly Evans, 1996, in the museum's collection.

Paula Johnson is a curator in the Division of Work and Industry.

Our Smithsonian Food History Weekend takes place from November 1-3 and includes cooking demonstrations, roundtable discussions, a black-tie gala, the Last Call event on beer history, and more. We recommend completing our free registration for the roundtables soon, as these sessions are very popular. Last Call tickets are now on sale. Sign up for our monthly newsletter for more details.

Posted Date: 
Thursday, September 20, 2018 - 11:15


Scrapbook

Archives of American Art
Scrapbook : 86 p. : b&w ; 34 x 29 cm.

One scrapbook compiled by Tilton containing photographs of artists John Rollin Tilton and William Wetmore Story in their studios in Rome, of Paul Akers, of writers Robert Browning, Thomas Carlyle, and Alfred Tennyson; photographs of views of Italy, Switzerland, Scotland, Gibraltar, and New York state; photographs of works of art by Emma Stebbins, Tilton's aunt, and by old masters; watercolors by Tilton, H. Coleman, Thomas Hotchkiss, and Tilton's brother Paul H. Tilton.

Elaine de Kooning scrapbook relating to Caryl Chessman

Archives of American Art
Scrapbook : 136 p. : b&w and col. ; 38 x 32 cm. Scrapbook includes newspaper clippings relating to the case and execution of Caryl Chessman; correspondence sent and received by de Kooning regarding her opposition to the execution; and photos of de Kooning and other members of the Justice for Chessman Committee preparing for protests.
Pieces of correspondence primarily from Rosalie Asher to de Kooning are tucked into back pocket of scrapbook, scans follow scan of back cover.
Date range based on dated articles and pieces of correspondence throughout scrapbook.
Headline pasted on cover of scrapbook: The real story of Caryl Chessman

Satchel Paige: Pitching through history

National Museum of American History

"How old would you be if you didn't know how old you are?" – Satchel Paige

With a professional baseball career spanning the jazz age to the space age, pitcher Leroy Robert “Satchel” Paige (1906–1982) established himself not only as one of the most dominant American athletes of all time, but also as one of the most remarkable. Thanks to a generous gift, the museum recently acquired a baseball signed by Paige, inspiring us to share the story of this timeless legend.

Official American Association baseball autographed in ink by pitcher Satchel Paige, around 1970–1980. Gift of Thomas Tull.

Paige earned the nickname “Satchel” as a boy, when he made money carrying passengers' bags at the train station in his hometown of Mobile, Alabama. Sent to the Industrial School for Negro Children in Mount Meigs at the age of 12 for the minor offense of stealing some toy rings from a store, Paige worked on his baseball skills until his release just before his 18th birthday.

Satchel Paige baseball card, 1953. Note the misspelling of his first name.

In 1924 Paige earned his first baseball paycheck pitching for the semi-professional Mobile Tigers. Paige's lanky 6'3" frame helped him dominate the semi-pro opposition, and he was signed to the Negro Southern League’s Chattanooga Black Lookouts in 1926.

Paige thus began his lengthy and nomadic professional baseball career. Records for the various Negro League Organizations are scarce and incomplete, but we know that between 1926 and 1947 Paige played for the Lookouts, the Birmingham Black Barons, the Baltimore Black Sox, the Cleveland Cubs, the Pittsburgh Crawfords, the Kansas City Monarchs, the New York Black Yankees, the Memphis Red Sox, and the Philadelphia Stars. He also moonlighted in other exhibition games and winter leagues, and by "barnstorming" with rural traveling teams.

Baseball signed by Negro League players, including Satchel Paige.

Paige was beloved not only for his dominance on the mound, but for his enthusiasm and cocksure personality. He loved to impress the crowd, striking out batters with speed and control. Paige excelled with the 1942 Kansas City Monarchs, who won the Negro League World Series. The team, managed by Frank Duncan, and led by Paige and Buck O’Neil, is considered one of the most talented teams in Negro League history. As O’Neil has said of the club, “I do believe we could have given the New York Yankees a run for their money that year.”

Negro League jacket patch from Buck O’Neil’s 1942 Monarchs, designed to look like a baseball with the text “World Monarchs Champions 1942.”

Paige finally got his chance to pitch before Major League audiences in 1948, two years after Jackie Robinson broke baseball’s color barrier with the Brooklyn Dodgers. Signed mid-season by the Cleveland Indians, Paige became the oldest rookie in Major League history at 42 years old and set attendance records in Cleveland and Chicago in his first three starts.

Paige went 6-1 with the Indians, helping the team reach the World Series, where, called to the mound in Game 5, he became the first African American player to pitch in a Major League championship game. The Indians would take the title, defeating the Braves four games to two.

After pitching for Cleveland for another year, Paige briefly left Major League Baseball, barnstorming for a couple of years before returning to the Majors in 1951 with the St. Louis Browns, where he was named to two All-Star teams.

Baseball signed by the 1951 St. Louis Browns, including Satchel Paige.

After leaving the Browns in 1953, Paige continued to pitch for barnstorming teams and in the minor leagues. Paige’s last Major League appearance was in 1965, when at 59 he played one game for the Kansas City A’s and threw three shutout innings against the Boston Red Sox.

Paige’s last turns on the mound came in 1967, pitching for the Indianapolis Clowns, the last all-black baseball club. By his own estimation, he had pitched in about 2,500 games before putting down his glove for good.

Despite his popularity, success, and lengthy career, Paige’s legacy has been overlooked due to racial inequities. It is a testament to his abilities and charisma that he could become a living legend, despite being forced to play outside of the Major Leagues for the majority of his career and doing so while facing wide-ranging discriminatory practices and bigotry. As he said himself in 1982, the year of his death, “They said I was the greatest pitcher they ever saw. . . . I couldn’t understand why they couldn’t give me no justice.” 

Eric W. Jentsch is Curator of Popular Culture and Sports for the Division of Culture and the Arts.

Posted Date: 
Tuesday, October 23, 2018 - 02:30

How Cities Are Upgrading Infrastructure to Prepare for Climate Change

Smithsonian Magazine

The most recent international report on climate change paints a picture of disruption to society unless there are drastic and rapid cuts in greenhouse gas emissions.

Although it’s early days, some cities and municipalities are starting to recognize that past conditions can no longer serve as reasonable proxies for the future.

This is particularly true for the country’s infrastructure. Highways, water treatment facilities and the power grid are at increasing risk from extreme weather events and other effects of a changing climate.

The problem is that most infrastructure projects, including the Trump administration’s infrastructure revitalization plan, typically ignore the risks of climate change.

In our work researching sustainability and infrastructure, we encourage, and are starting to see, a shift toward designing man-made infrastructure systems with adaptability in mind.

Designing for the past

Infrastructure systems are the front line of defense against flooding, heat, wildfires, hurricanes and other disasters. City planners and citizens often assume that what is built today will continue to function in the face of these hazards, allowing services to continue and protecting us as they have in the past. But these systems are designed based on histories of extreme events.

Pumps, for example, are sized based on historical precipitation events. Transmission lines are designed within limits of how much power they can move while maintaining safe operating conditions relative to air temperatures. Bridges are designed to be able to withstand certain flow rates in the rivers they cross. Infrastructure and the environment are intimately connected.

Now, however, the country is more frequently exceeding these historical conditions and is expected to see more frequent and intense extreme weather events. Said another way, because of climate change, natural systems are now changing faster than infrastructure.

How can infrastructure systems adapt? First let’s consider the reasons infrastructure systems fail at extremes:

  • The hazard exceeds design tolerances. This was the case of Interstate 10 flooding in Phoenix in fall 2014, where the intensity of the rainfall exceeded design conditions.
  • During these extreme events there is less spare capacity across the system: when something goes wrong, there are fewer options for managing the stressor, such as rerouting flows, whether it’s water, electricity or even traffic.
  • We often demand the most from our infrastructure during extreme events, pushing systems at a time when there is little extra capacity.

Gradual change also presents serious problems, partly because there is no distinguishing event that spurs a call to action. This type of situation can be especially troublesome in the context of the maintenance backlogs and budget shortfalls that currently plague many infrastructure systems. Will cities and towns be lulled into complacency, only to find that their long-lifetime infrastructure is no longer operating as it should?

Currently the default seems to be securing funding to build more of what we’ve had for the past century. But infrastructure managers should take a step back and ask what our infrastructure systems need to do for us into the future.

Agile and flexible by design

Fundamentally new approaches are needed to meet the challenges not only of a changing climate, but also of disruptive technologies.

These include increasing integration of information and communication technologies, which raises the risk of cyberattacks. Other emerging technologies include autonomous vehicles and drones as well as intermittent renewable energy and battery storage in the place of conventional power systems. Also, digitally connected technologies fundamentally alter individuals’ cognition of the world around us: Consider how our mobile devices can now reroute us in ways that we don’t fully understand based on our own travel behavior and traffic across a region.

Yet our current infrastructure design paradigms emphasize large centralized systems intended to last for decades and to withstand environmental hazards to a preselected level of risk. The problem is that this level of risk is now uncertain because the climate is changing, sometimes in ways that are not well understood. As such, forecast extreme events may be a little or a lot worse than those of the past.

Given this uncertainty, agility and flexibility should be central to our infrastructure design. In our research, we’ve seen how a number of cities have adopted principles to advance these goals already, and the benefits they provide.

A ‘smart’ tunnel in Kuala Lumpur is designed to supplement the city’s stormwater drainage system. (David Boey, CC BY)

In Kuala Lumpur, traffic tunnels are able to transition to stormwater management during intense precipitation events, an example of multifunctionality.

Across the U.S., citizen-based smartphone technologies are beginning to provide real-time insights. For instance, the CrowdHydrology project uses flooding data submitted by citizens, information that limited networks of conventional sensors cannot collect.

Infrastructure designers and managers in a number of U.S. locations, including New York, Portland, Miami and Southeast Florida, and Chicago, are now required to plan for this uncertain future – a process called roadmapping. For example, Miami has developed a US$500 million plan to upgrade infrastructure, including installing new pumping capacity and raising roads to protect at-risk oceanfront property.

These competencies align with resilience-based thinking and move the country away from our default approaches of simply building bigger, stronger or more redundant.

Planning for uncertainty

Because there is now more uncertainty with regard to hazards, resilience instead of risk should be central to infrastructure design and operation in the future. Resilience means systems can withstand extreme weather events and come back into operation quickly.

Microgrid technology allows individual buildings to operate in the event of a broader power outage and is one way to make the electricity system more resilient. (Amy Vaughn/U.S. Department of Energy, CC BY-ND)

This means infrastructure planners cannot simply change their design parameters – for example, building to withstand a 1,000-year event instead of a 100-year event. Even if we could accurately predict what these new risk levels should be for the coming century, is it technically, financially or politically feasible to build these more robust systems?
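As a rough illustration of what those labels mean (standard return-period arithmetic, with a 50-year design life chosen here purely as an example, not a figure from any city’s plan): a “T-year event” is one with an annual exceedance probability of 1/T, so the chance it occurs at least once during an n-year lifetime is

P = 1 - \left(1 - \frac{1}{T}\right)^{n} .

For n = 50 years, a 100-year event has roughly a 40 percent chance of occurring at least once, while a 1,000-year event has about a 5 percent chance. If a changing climate shortens the true return period, those probabilities climb accordingly, which is why a fixed design parameter is no longer a safe anchor.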

This is why resilience-based approaches are needed that emphasize the capacity to adapt. Conventional approaches emphasize robustness, such as building a levee that is able to withstand a certain amount of sea level rise. These approaches are necessary but given the uncertainty in risk we need other strategies in our arsenal.

For example, planners can provide infrastructure services through alternative means when primary infrastructure fails, such as by deploying microgrids ahead of hurricanes. Or they can design infrastructure systems such that when they do fail, the consequences to human life and the economy are minimized.

This is a practice recently implemented in the Netherlands, where the Rhine delta rivers are allowed to flood but people are not allowed to live in the flood plain and farmers are compensated when their crops are lost.

Uncertainty is the new normal, and reliability hinges on positioning infrastructure to operate in and adapt to this uncertainty. If the country continues to commit to building last century’s infrastructure, we can continue to expect failures of these critical systems and the losses that come along with them.

The Mystery of Ancient Dolphins’ Super-Long Snouts

Smithsonian Magazine

For a period of several million years, ancient species of dolphins glided through the seas, looking in many ways similar to today’s toothed whales—with the notable exception of their remarkably long snouts. These odd cetaceans boasted proportionally longer snouts than any other aquatic mammal or reptile, living or extinct; some of their nose-like appendages extended more than 500 percent further than their braincases. Even Matthew McCurry, curator of palaeontology at the Australian Museum who has studied the evolution of long snouts in extant species, finds their skulls “extremely strange-looking.”

In 2015, as a pre-doctoral fellow at the Smithsonian National Museum of Natural History, McCurry decided to take a closer look at these extinct marine mammals. Scientists have known about them for more than 100 years, but no one had pinned down the function of their bountiful snouts. The hypotheses were “largely qualitative and offhand,” says Nicholas Pyenson, curator of fossil marine mammals at the Museum of Natural History. “People said, ‘Oh, the long snout is probably used for stirring up prey in the sediment.’ … [W]hat I would say is that those are adaptational hypotheses, but nothing had really been tested.”

So McCurry and Pyenson set out to do just that. And in a new paper published in Paleobiology, the researchers have put forth a solution to the curious case of the long-snouted dolphin: the creatures, they found, were able to swish their snouts through the water, using them to hit and stun prey, much as swordfish do today.

In their quest to analyze the unique skulls of long-gone cetaceans, McCurry and Pyenson turned to the Smithsonian’s vast trove of whale fossils. “We have so many that have not been looked at that I actually cannot tell you the full extent of the whale fossil records that we possess,” Pyenson says, but estimates that there may be as many as 15,000 in the collection.

The researchers performed computed tomography (CT) scans of the crania of three extinct species (Pomatodelphis inaequalis, Xiphiacetus bossi and Zarhachis flagellator), and casts of two other ancient cetaceans (Parapontoporia sternbergi and Zarhinocetus errabundus). To compare these creatures to animals that are alive today, McCurry and Pyenson scanned two species of river dolphins, which have considerably longer snouts than their ocean dwelling-counterparts, though not nearly as long as their prehistoric predecessors. The researchers also looked at two species of long-snouted fish: the Atlantic blue marlin and the swordfish.

McCurry and Pyenson then analyzed the digital models of the skulls using calculations that engineers rely on to assess the load-bearing capabilities of beams. According to Pyenson, “beam theory” is useful in the study of snouts because it “talks about these objects as they are built to respond to forces: how rigid it is, what kind of stresses are imposed on it.” And the researchers found that dolphins of yesteryear would have had no trouble sweeping their impressive snouts through the water to whack their prey.

Because the snouts of the species varied in shape, they moved their handy appendages in different ways. Some swept them from side to side, others up and down, and still others could move their snouts in multiple directions.

“Imagine a beam like a ski,” Pyenson says, as an example. “A ski flexes well up and down, but not side to side. A pole, which has the same shape distributed, can flex up and down [and] side to side, no problem.”
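To make the analogy concrete, here is a minimal sketch of the standard beam-theory quantity involved (textbook Euler-Bernoulli bending, not a calculation taken from the paper itself): a beam’s resistance to bending about a given axis is proportional to its second moment of area about that axis,

I_x = \int_A y^2 \, dA \qquad \text{and} \qquad I_y = \int_A x^2 \, dA .

For a rectangular cross-section of width b and thickness h, these work out to I_x = b h^3 / 12 and I_y = h b^3 / 12. A ski-like shape with b much greater than h therefore has a small I_x and a large I_y, so it flexes easily up and down but resists side-to-side bending, while a round pole has the same I about every axis. By the same logic, a snout flattened in one direction would favor sweeping motions in one plane.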

The researchers were particularly struck by the fact that these animals were not all closely related to one another. Several species appear to have independently evolved exceptionally long snouts, which suggests that something in their environment was driving the change. But what, precisely?

Long-snouted dolphins emerged in the Middle Miocene, a period stretching from 11.6 to 16 million years ago, when the climate was warmer than it is today. Ocean temperatures went up and sea levels rose, creating more near-shore sea floor, which is “a really great habitat for fish and other prey items for dolphins,” Pyenson says. But fish’s escape response gets quicker in warmer waters, making them more difficult to catch. It is possible, the researchers theorize, that dolphins evolved hyper-long snouts during this period to give them an extra advantage during the hunt.

For millions of years, global temperatures remained steady and dolphins with supremely elongated snouts frolicked in warm waters.

“Maybe this is a consequence of what happens when you have that kind of environment stable for several millions of years,” Pyenson theorizes. “These traits get exaggerated.”

But with the advent of the Pliocene epoch, the climate became more erratic and the abundance of temperate, near-shore feeding grounds fluctuated. With these changes, the long-nosed dolphins vanished. And this raises interesting questions about whether the evolutionary trajectory of extinct dolphins can tell us anything about how dolphins might fare in the current era of climate change.

The story of these ancient creatures highlights how an organism’s environment transforms its appearance, and clearly shows what we stand to lose in terms of biodiversity when an environment changes, zoologist Karina Amaral from the Federal University of Rio Grande do Sul, who was not involved in the study, tells Ed Yong of The Atlantic. And that’s important to consider, especially, “[at] a time when many people insist on ignoring our changing climate,” Amaral says.

So what can the evolutionary trajectory of extinct dolphins tell us about how dolphins might fare in the current era of climate change? McCurry notes that it is difficult to draw definitive conclusions because fluctuations in temperature today are “unprecedented in their cause and speed.” But he does see the study as a “cautionary tale,” and Pyenson adds that looking more closely at ancient whales can provide insight into the future of the Earth’s ocean systems.

“High sea level rise, acidified oceans, warmer oceans—those are all traits of past whale worlds,” he says. “And looking at the fossil record, looking at the biological response of those past worlds, that's going to be really important moving forward.”

Prehistoric Angolan “Sea Monsters” Take Up Residence at the Natural History Museum

Smithsonian Magazine

Ravaged by decades of civil war, the southwestern African country of Angola has spent the years following its 2002 peace accords in search of a cohesive sense of national pride, endeavoring to cultivate a distinctive cultural presence on the world stage. As humanitarian campaigns work to get displaced families back on their feet and infrastructure up to date, paleontologists are providing Angola with an unlikely source of excitement and unity: the fossils of massive “sea monsters” that roamed the oceans of the Cretaceous period. Today, Projecto Paleoangola, a multinational enterprise involving scientists from the U.S., Portugal, the Netherlands and of course Angola itself, is hard at work studying the region’s unique fossil record.

The beautifully preserved “sea monsters” of Angola are the focus of a new exhibition opening today at the Smithsonian’s National Museum of Natural History. The impressive display will give visitors a small but potent taste of the paleontological work—groundbreaking in every sense of the word—now unfolding across the country.

When the Cretaceous began nearly 150 million years ago, the south Atlantic Ocean, as we know it today, did not exist. The supercontinent of Pangaea was just beginning to break apart, and present-day South America was still firmly wedged into the recess of present-day Africa’s western coast. As tens of millions of years elapsed and a gap began to yawn between the two, the Atlantic Ocean expanded southward, bringing with it all manner of exotic marine lifeforms formerly confined to the Northern Hemisphere.

Trade winds buffeting the young Angolan shoreline made conditions in its waters particularly conducive to sea life, creating a salubrious upwelling effect that saw deep-water nutrients bubble to the surface. Great marine reptiles called mosasaurs migrated to the new habitat in droves, and their fossilized remains today litter the easily accessible sedimentary rock of uplifted Angolan crust.

Image by Donny Bajohr. The 72-million-year-old giant Euclastes sea turtle.

Image by Donny Bajohr. Detail of the cast of the Euclastes sea turtle, fossils of which were excavated from Angola's coastal cliffs.

It was in 2005 that Texas-based paleontologists Louis Jacobs and Michael Polcyn first set foot in the country. The two Americans had planned the trip alongside Dutch marine vertebrate expert Anne Schulp and Portuguese paleontologist Octávio Mateus, both of whom they had encountered at technical conferences in the preceding two years (in the Netherlands and Brazil, respectively). The aim of the quartet was to secure the permission of Angolan researchers to conduct wide-ranging fossil excavations.

As it turned out, Angola’s scientists were thrilled.

“We went to the geology department at Agostinho Neto University,” Jacobs recalls, “and we walked in and said, ‘We would like to do a project with you.’ And they said, ‘Good, we want to do it.’ That’s all it took. Just cold off the street.”

With the backing of Angolan researchers, the international team went on to secure multiple grants, and the team’s fieldwork soon ballooned to spectacular proportions.

“Since 2005, we’ve had time now to prospect from the very northern part of the country, up in the province of Cabinda, all the way down to the south,” Polcyn says. “In that transect, you have a lot of different slices of geological time. We not only have these marine Cretaceous sediments, we have much younger material in the north.” The team even got their hands on the premolar tooth of a never-before-seen early African primate, a species they are excited to comment on further in the months and years ahead.

The easily accessible sedimentary rock along modern Angola's sea cliffs is littered with fossilized remains of the life that thrived along the coast tens of millions of years ago. (Projecto Paleoangola)

As its name suggests, the new “Sea Monsters Unearthed” Smithsonian show centers on the team’s aquatic finds, which were far too numerous for all to be included. The fossils showcased were culled from two particularly rich locations. Set against an accurately illustrated Cretaceous mural backdrop, the centerpiece is a massive and remarkably well preserved 72-million-year-old mosasaur skeleton, whose 23-foot cast will fill the exhibition space—and the imagination of whoever takes it in.

What Polcyn says is most remarkable about this Prognathodon skeleton is the fact that three other sets of mosasaur remains were found within its stomach cavity—including one belonging to a member of its own species, the first-ever evidence of full-on mosasaur cannibalism. These fossilized remains offer unprecedented insights into mosasaur feeding habits, about which little was previously known.

“The strange thing is,” Polcyn says, “it’s primarily heads. This guy was eating heads.”

Visitors will get to see the cranial remains taken from the big mosasaur’s gut in a separate display case. “There’s not a lot of calories in that, which indicates [Prognathodon kianda] may have been a scavenger.”

Exhibition-goers can also look forward to seeing the picked-at bones of a plesiosaur and the skull and lower jaw of a prehistoric turtle species.

In time, the bones on view at the Smithsonian will return to Angola, where Jacobs and Polcyn hope they will be exhibited permanently along with the other outstanding discoveries of the ongoing Paleoangola movement, which in addition to producing astonishing results has given several aspiring Angolan paleontologists their first exposure to the rigors of fieldwork.

An artist's rendering of Angola's Cretaceous seas, where droves of large, carnivorous marine reptiles thrived on upwelling nutrients. (Karen Carr Studios, Inc.)

While getting the chance to raise awareness of these remarkable Angolan Cretaceous deposits through the apparatus of the Smithsonian is no doubt exciting for Jacobs, Polcyn and their team, the American scientists are quick to point out that this is after all Angola’s narrative. Their aim is simply to get that story out in the world—cementing Angola’s rightful status as a hotbed of incredible paleontological activity.

Jacobs has witnessed firsthand a slow but steady pivot toward the sciences in Angola’s national agenda, one which he is eager to see continue in the years to come. “When we started,” he recalls, “it wasn’t long after the peace treaty was signed, and everybody in the earth sciences was after oil.” In the years since, though, “you see a trend where there’s more of a general appreciation of knowledge, and a maturing of ideas.”

“Sea Monsters Unearthed: Life in Angola’s Ancient Seas” will remain on view at the Smithsonian’s National Museum of Natural History through 2020.

This Week Has Offered a Slew of Insights on the Western Hemisphere’s First Humans

Smithsonian Magazine

Scientists have come a long way since 2010, when researchers extracted DNA from a 4,000-year-old clump of hair to map out the first complete genome of an ancient human living in the Western Hemisphere. Today, that initial discovery has been supplemented by 229 genomes recovered from teeth and bone found across the Americas, providing geneticists with a comprehensive portrait of the region’s first inhabitants and their early migration patterns. Three new genomic studies published this week in Science, Cell and Science Advances fill in the details of ancient human migration in North and South America—and add some new twists and turns to their path.

As Science News’ Tina Hesman Saey writes, the studies build on past findings to chart the path of the Americas’ first humans—who spread out from Siberia and East Asia to populate the northern and southern lands of North America before heading downward to South America—and home in on a specific community based in the Andean Highlands between roughly 1,400 and 7,000 years ago. Summarizing the researchers’ extensive findings, George Dvorsky reports for Gizmodo that the new papers reveal rapid yet uneven movement south in at least three migratory waves beginning some 15,000 years ago, suggesting the individuals who settled the Americas were more genetically diverse than previously believed.

The Science study, led by Natural History Museum of Denmark researcher J. Víctor Moreno-Mayar, Southern Methodist University anthropologist David Meltzer, and University of Copenhagen and University of Cambridge evolutionary geneticist Eske Willerslev, draws on 15 ancient genomes—including that of a 9,000-year-old western Alaskan who is only the second Ancient Beringian to undergo DNA testing, according to The New York Times’ Carl Zimmer—to track early humans’ migration from Alaska to Patagonia, a region at the southernmost tip of South America.

Science magazine’s Lizzie Wade writes that previous studies suggested the first Americans arrived from Siberia and East Asia about 25,000 years ago. While some stayed in the now defunct Beringia region, others moved south, splitting into two groups: Southern Native Americans and Northern Native Americans—who largely settled in what is now Canada and Alaska. The former spread across North and South America some 14,000 years ago, moving at what Meltzer describes as “astonishing speed” given their unfamiliarity with the landscape.

One of the most significant insights offered by the Science report is confirmation that a 10,700-year-old skeleton dubbed the “Spirit Cave mummy” is an ancestor of modern-day Native Americans, not a member of the “Paleoamericans” hypothesized to have populated North America before these native groups arose. As Hannah Devlin explains for The Guardian, the mummy, which was discovered in a Nevada cave in 1940, has been the subject of intense controversy since 1996, when the local Fallon Paiute-Shoshone community learned of its existence and began campaigning for its repatriation. The body was returned to the group and reburied in a private ceremony held this summer.

The findings point toward three distinct waves of southward migration (Cell)

Another finding of note revolves around an individual who lived roughly 10,400 years ago in what is now Brazil. The skeleton revealed traces of a distinctly Australasian genetic marker unseen in any of the other samples included in the study, raising questions of how it ended up in South America. It’s possible, Meltzer tells Science’s Wade, that traces of Australasian ancestry were isolated to a small group of Siberian migrants who moved across continents without mingling amongst other populations, but additional research must be conducted before arriving at a definitive conclusion.

As Michael Greshko explains for National Geographic, the Cell study, led by Max Planck Institute geneticist Cosimo Posth, encompasses the genomes of 49 sets of ancient remains and offers evidence of two previously unidentified South American populations likely related to the main group of Southern Native Americans. One group consists of 4,200-year-old Andean residents closely linked to the Native Americans living in California’s Channel Islands, while the other connects communities that settled in Brazil and Chile around 9,000 years ago to Anzick-1, a 12,700-year-old Clovis child found in Montana.

Posth tells Gizmodo that this latter group speaks to the Clovis culture’s expansion south. He adds, however, that the Clovis-related group was soon completely replaced by an ancestral group with ties to today’s South American populations.

The final paper, published in Science Advances, sheds light on Andean peoples’ adaptation to the harsh conditions of high-elevation living. Researchers led by Emory University anthropologist John Lindo drew on the genomes of seven individuals living in the region between 1,400 and 6,800 years ago, as well as dozens of DNA samples sequenced from contemporary populations. As Gizmodo reports, the team found that ancient residents of the Andean Highlands rapidly gained resistance to cold temperatures, low oxygen and UV radiation. They also learned to digest potatoes and, Greshko says, developed stronger heart health.

Interestingly, analysis of Highland versus Lowland populations revealed vast differences in responses to European contact. Whereas the Lowlanders’ numbers dropped by 95 percent, the Highlanders only shrank by about 27 percent, likely due to adaptations in an immune gene linked with smallpox.

Overall, the studies show multiple distinct waves of migration, complicating the story of the Americas’ first inhabitants. About 16,000 years ago, descendants of the original Siberian and East Asian migrants split into the Northern and Southern Native American branches—both the Spirit Cave mummy and Anzick-1 belong to this latter group. Around 14,000 years ago, the southern branch further splintered into populations that rapidly dispersed across South America. Then, beginning 9,000 years ago, yet another wave of humans from North or Central America arrived in South America, overtaking its older populations. Finally, by at least 4,200 years ago, a group of Andean Highlanders linked to ancient Californians had spread across the Peruvian mountain range.

Jennifer Raff, an anthropological geneticist at the University of Kansas in Lawrence who was not involved in the work, tells Nature’s Ewen Callaway that the findings don’t negate centuries of previous research.

“It’s not that everything we know is getting overturned,” she says. “We’re just filling in details. We’re now moving to a much more detailed, much more accurate and richer history. That’s where the field was always going, and it’s nice to be there now.”
