
Forget the Vegetables—Junk Food Could Help Fight Obesity

Smithsonian Magazine

The 2004 release of Super Size Me—a documentary about Morgan Spurlock’s 24-pound weight gain and declining health during a month-long McDonald’s binge—along with a decade’s worth of other books and exposés, has tarnished the reputation of fast food and other processed foods.

But what if the food Spurlock ate at the chain had been healthier? What if, by eating food engineered as lower-calorie, lower-fat versions of popular favorites, he had lost weight over those 30 days rather than gained it?

Journalist David Freedman says in the world of weight loss, "what counts is calories, fat and sugar, which processed foods can be low in, and unprocessed foods can be high in." Photo courtesy of David Freedman.

Journalist David Freedman made this case—that fast food and processed food may actually help in the fight against obesity instead of hindering it—in an article this summer in The Atlantic. At a time when the loudest and clearest food message is to eat fresh, locally grown, organic foods, the piece prompted a range of reactions from scientists and fellow journalists in the food and health worlds.

In a nutshell, can you explain your big idea?

A high percentage of the obese are more or less hooked on fatty, sugary, processed foods, and we seem helpless to change that. Getting the 100 million obese people in the U.S. to eat less junk food and more unprocessed, "whole" foods would help turn the tide on the obesity epidemic—but unprocessed foods are largely too expensive and too hard to access for many of the poor obese. What we can do right now with food technology is create lower-calorie, lower-fat, lower-sugar processed foods that deliver the same stimulating sensations as the junkier stuff but help the obese make their diets healthier overall. We need to push the fast food and processed food industries to move toward these healthier versions of their foods.

So wait—Twinkies could actually help people lose weight?

Yes, Twinkies could actually help people lose weight, if there were lower-calorie but still tasty versions of them. But the statement needs some qualifications. It's not the ideal way to lose weight; it only makes sense if, for whatever reason, getting on a healthier diet isn't in the cards. It's the answer for someone who's going to keep eating Twinkies whether there are low-calorie versions or not. For that person, the lower-calorie Twinkie is potentially a step in the right direction. And, by the way, researchers have demonstrated that people can in fact lose weight on a diet of nothing but snack cakes, though no one is recommending it.

How did you get interested in this topic?

Six years ago, I struggled to lose 20 pounds, on doctor's orders. That got me wondering about obesity science in general, and the problem of behavior change in particular. Obesity is on track to rob Americans alive today of a combined billion years of life.

There is a cacophony of conflicting theories and advice promoted by my fellow science journalists. Cut down on fat but feel free to eat lots of carbs. Cut down on carbs but feel free to eat lots of fat. Calories are everything, or calories don't matter at all. Exercise is the key rather than diet. Diet is the key rather than exercise. It's nearly impossible to keep lost weight off. It's all in the genes. It's all in your gut bacteria, and on and on.

I've traveled the U.S. and the world interviewing highly credentialed obesity experts and observing their weight-loss programs. There's little controversy among scientists about what works, and it's been backed up by hundreds of studies. What works is gradually moving people to lower-calorie, less-fatty, less-sugary foods and getting them moving more, along with providing a broad array of behavioral supports so they stick with it forever. The claims pushed by prominent journalists for magic-bullet solutions like switching wholesale to natural foods or to ultra-low-carb diets just cause most obesity experts to smack their heads in frustration, even though the public eats them up.

Well-read laypeople seemed to mostly parrot journalist Michael Pollan's science-free declaration that shunning processed foods can solve obesity and all other food-related health problems, though processing in and of itself is utterly irrelevant to obesity. What counts is calories, fat and sugar, which processed foods can be low in, and unprocessed foods can be high in.

Honey and fruit jam right off the farm-stand shelf are sugary calorie nightmares, and pork belly from locally raised, free-range, antibiotic-free pigs is a fatty calorie nightmare. But a McDonald's egg-white breakfast sandwich, though processed, is a relatively low-calorie, tasty dish that delivers lean protein and whole grains—both key, satisfying target foods for people who want to keep weight off.

What is this pervasive message, that all processed foods are bad, doing to Americans’ ability to lose weight?

I realized this enormous misconception—the absurd dream of getting farm-fresh meals onto the plates of tens of millions of poor, obese people hooked on junk food—was standing in the way of what might be the one workable solution to attacking obesity: getting the food industry to create healthier versions of its popular foods that those people would actually eat. We need lower-fat meat, in particular, beef; reduced-sugar versions of candy, cakes and other sweets; reduced-fat substitutes for oily foods like salad dressing; whole-grain versions of floury foods like white bread. But we need these healthier versions to taste and look exactly like the originals, or most people won't switch to them.

What are the challenges to making low-calorie, low-fat, low-sugar alternatives tasty?

There are few serious technical or manufacturing obstacles to making healthier versions of popular processed foods. Food scientists know how to replace fat and sugar in foods with healthier alternatives that taste just about the same. It's not a perfect art yet, but it's getting there fast. The bigger challenge is getting the big food companies to really push this stuff, given that the public tends to be wary of healthier alternatives, and that health-food advocates condemn these efforts rather than applaud them. What's the incentive for these companies to make healthier foods? I'm in favor of forcing them to do it through regulation, but the American public hates that sort of regulation, so it won't happen.

A compounding problem is the relentless criticism that the delusional, misinformed, blind haters of all processed foods aim at Big Food companies that even try to bring out healthier stuff. Burger King's Satisfries and McDonald's Egg-White McMuffin have both been hooted down in the press by health-food advocates as not being truly healthy foods—never mind that these dishes are great steps in the right direction. It's absurd and disastrously counter-productive.

What makes your approach more realistic than a switch to whole, unprocessed foods, from an economic standpoint?

No one—absolutely no one—has advanced a clear plan for how at any time in the next 50 years we're going to be able to grow, ship and sell enough whole food to an entire population that today mostly lives on processed food. Add to this the fact that this movement wants to do away with giant farms, food factories and shipping foods over long distances. Then add that if there were some miraculous way to pull this off, the prices for the food would be astronomical by anyone's reckoning, compared to processed foods. It's a lovely idea—hey, I'd love to live in that world—but it's an absurd pipe dream. Meanwhile, the human race is giving up a billion years of life to obesity, and on average hugely lowering the quality of the years of life we do have.

In this Knight Science Journalism critique of your piece, the author writes: 

One way Freedman works his magic is to confuse ‘unprocessed foods’ with ‘wholesome foods.’ Most of his examples of unprocessed foods are things he says are ‘tailored to the dubious health fantasies of a small, elite minority.’… Grass-fed beef might be too expensive and too difficult to produce for the masses. But what about soybeans, whole grains, fruits, and vegetables? They are commodities, they are cheap, and they are plentiful.


What’s your response to this?

This is breathtakingly ignorant, and sadly typical of many of the loud, arrogant voices that objected to my article—though, to be sure, some of the objections were more thoughtful and well informed. These folks have clearly led cushy lives, and need to find out how most of the country and world lives. I've led a cushy life, too, but before opening my mouth on this subject I went out and spent many, many hours walking a number of different disadvantaged neighborhoods all over the country and the planet: talking to countless people in these communities about their diets and shopping, visiting their stores, and interviewing scientists and clinicians who work directly with overweight populations. Let me tell you, it doesn't get simpler or truer than this: processed food is, for all but the most geographically isolated communities, cheaper, more convenient and easier to access. What's more, it pushes people's taste-sensation buttons. We've been telling the world for nearly a century to eat more vegetables. How's that working out? This fellow might get all his buttons pushed by broccoli that's readily accessible and affordable to him (and so do I, by the way), but his assumption that the same holds for the rest of the world—in particular the obese world, and most particularly the obese population that is poor and vulnerable—is a good sign of how poor a job journalists have done in researching this subject before pontificating about it.

Every big thinker has predecessors whose work was crucial to his discovery. Who gave you the foundation to build your idea?

B.F. Skinner, a Harvard behavioral scientist and social philosopher, is, in my book, the patron saint of the science of behavior change. He took us 90 percent of the way there, and everything since then has either been in the wrong direction or is fighting to work out the remaining 10 percent. Skinner demonstrated with striking clarity how all organisms, including humans, tend to do what they are rewarded to do. It's really that simple. The tricky part sometimes is to identify what the rewards are behind certain behaviors, but in the case of obesity it's pretty obvious: People get the huge sensual reward of eating high-calorie, sweet and fatty foods, and of sitting around on their butts. These rewards are deceptively powerful, much more so for most of us than the negative consequences of overeating and under-exercising, consequences that tend to come on us at an imperceptible rate, versus the huge, immediate rush we get from eating. Thus to beat the problem we need to make sure people are getting similarly powerful rewards from eating healthier foods. Making available healthier versions of junk food that deliver similar sensations is a great way to do it.

Who will be most affected by this idea?

I've heard directly and indirectly that the article has had a big impact in the processed food industry, especially at fast food companies.

How so?  

Several major food companies have told me that the article has led to a stream of conversations about how they might move toward more healthy foods. I've also heard from a number of food industry groups asking me to speak at conferences.

Most of the public, as is true with politics and most everything else, has already made up its mind about this subject and won't be swayed by my article. But a small, more open-minded segment of the public seems to have found the article eye-opening. I take a lot of encouragement from that.

How might it change life, as we know it?

It would be wonderful if the article went at least a very small way toward making it easier for processed-food companies to bring out healthier versions of their products without being hooted down by the Pollanites. Burger King brought out its lower-calorie, lower-fat "Satisfries" a month or so after the article came out. I think that's entirely a coincidence, but hey, a journalist can dream.

What questions are left unanswered?

So many! Will Big Food actually bring out healthier products? If they do, will the obese public be willing to try embracing them? If they do move to these products, will it really get them on the road to losing and keeping off weight? Might the government be able to use regulation, or the threat of it, to accelerate the move to healthier processed foods?

What is next for you?

I hesitate to even mention what I'm working on, because it explores an argument that tends to provoke intensely negative reaction from most people. But it follows the theme of my trying to point out how sometimes the well-educated, generally affluent influencers in the public who see themselves as champions of beneficial change for all actually cling to notions that in the end are good for them but more generally bad for the poor and vulnerable.

Museums With Their Own Niche

Smithsonian Magazine

I peered at the rows of lunchboxes and stopped with a smile in front of a gleaming Strawberry Shortcake, its pink and white figures recalling peanut butter and jelly sandwiches, piles of crayons and an overnight party where at least one lucky girl unrolled a Strawberry Shortcake sleeping bag. I wondered if one of these lunchboxes was still hidden in the dusty recesses of my house. In an instant, a tall man with hair like gray steel wool was at my side.

“Ah, you’re of the metal lunchbox era!” said Tim Seewer, artist, cook and partner in Etta’s Lunchbox Café and Museum in New Plymouth, Ohio. “The Florida Board of Education decided in 1985 to ban metal lunchboxes because they could be used as a weapon. All across the United States, lunchboxes started to go plastic. Ironically, the last metal lunchbox was Rambo.”

Etta’s is a thoroughly charming bit of Americana. Lodged in an old blue-tiled general store, this free museum displays owner LaDora Ousley’s collection of 850 lunchboxes as well as the tobacco and lard tins that were the precursors to the lunchbox. The collection offers a unique lens into the popular culture of the last century—especially when accompanied by commentary from Seewer and Ousley, who do double time in the kitchen making pizza, sandwiches and salads. A 1953 Roy Rogers and Dale Evans lunchbox, the first to have a four-color lithograph panel, is among the collection’s notable items. Also on display are lunchboxes featuring the many television icons that followed: Gunsmoke, Looney Tunes, a host of Disney characters, Popeye, Space Cadet, the Dukes of Hazzard, and more.

The collection both chronicles the stories and characters that shaped many a childhood and offers a perspective on larger social trends in America. As an example, Ousley points to her tobacco tins, which were produced beginning in 1860 with sentimental domestic scenes on them. “It was a clever cross-marketing ploy,” Ousley explains. “Women weren’t allowed to buy tobacco, but it was a sign of status to own one of these tins. It showed you knew a man wealthy enough to buy one and that you were special enough to receive it as a gift.”

Museums with a singular focus—whether on an object or a theme—offer visitors an intimate educational experience, often enhanced by the presence of an owner or curator with an unmatched passion for the subject. Here are seven more narrowly focused museums from around the country, some tiny and precariously funded, others more firmly established.

Located in New Plymouth, Ohio, Etta's Lunchbox Café and Museum displays owner LaDora Ousley's collection of 850 lunchboxes. (Courtesy of Etta's Lunchbox Cafe & Lunchbox Museum)

A 1953 Roy Rogers and Dale Evans lunchbox, the first to have a four-color lithograph panel, is among the collection's notable items. (Courtesy of Etta's Lunchbox Cafe & Lunchbox Museum)

In 1985, the Florida Board of Education banned metal lunchboxes because they could be used as a weapon. Rambo was the last metal lunchbox made. (Courtesy of Etta's Lunchbox Cafe & Lunchbox Museum)

Lunchboxes on display at Etta's Lunchbox Café and Museum include television icons such as Looney Tunes, Disney characters, Popeye and the Dukes of Hazzard. (Courtesy of Etta's Lunchbox Cafe & Lunchbox Museum)

At last count, Velveteria, the Museum of Velvet Paintings, has nearly 2,500 velvet paintings. (Richard Clement / Reuters / Corbis)

The National Museum of Roller Skating boasts 2,000 square feet of memorabilia from roller derby, roller speed and figure skating, and roller hockey. (Courtesy of the National Museum of Roller Skating)

The National Museum of Roller Skating contains the largest collection of historical roller skates in the world. Some of its skates date back to 1819. (Courtesy of the National Museum of Roller Skating)

The Hobo Museum is located in the hobo capital of the world, Britt, Iowa. Every year the museum and Britt host a hobo convention that attracts up to 20,000 ramblers from all parts of the country. (Courtesy of Roadchix)

The Bigfoot Discovery Museum was inspired by owner Michael Rugg's encounter with a Sasquatch-like creature when he was a child. (Newscom)

Located in Fort Mitchell, Kentucky, the Vent Haven Museum is the world's only public collection of materials related to ventriloquism. It features 700 ventriloquist dummies arranged in three buildings, some sitting in rows as if waiting for a class to begin. (Courtesy of Vent Haven Museum)

Velveteria, the Museum of Velvet Paintings in Portland, Oregon, has nearly 2,500 velvet paintings at last count. Eleven years ago, Caren Anderson and Carl Baldwin were shopping in a thrift store, spied a black velvet painting of a naked woman emerging from a flower and had to have it. That impulse buy ultimately led to a vast collection, much of which is now displayed in an 1,800-square-foot museum. Co-authors of Black Velvet Masterpieces: Highlights from the Collection of the Velveteria Museum, Anderson and Baldwin have a connoisseur’s eye for this neglected art form and an appreciation for its history. The paint-on-velvet form had its origins in ancient China and Japan, enjoyed some popularity in Victorian England, then had its modern heyday when American servicemen like Edgar Leeteg expressed the beauty they saw in the South Seas islands on black velvet. You can tour the museum for $5.00, but watch out for unexpected emotion. “A young couple got engaged in our black light room the other day,” says Anderson.

The National Museum of Roller Skating in Lincoln, Nebraska, boasts 2,000 square feet of memorabilia from roller derby, roller speed and figure skating, and roller hockey. Included are a pair of the first skates ever made, which resemble modern inline skates, patent models from the history of roller skate design, costumes, trophies, photos and magazines on skating. Oddest items: a pair of skates powered by an engine worn on the back and a pair of skates made for a horse—with a photograph of the horse wearing them. This is the world’s only museum devoted to roller skating; admission is free.

The Hobo Museum is located in the hobo capital of the world, Britt, Iowa. According to curator Linda Hughes, the town fathers of Britt tossed out a welcome mat for hoboes in 1899 after hearing that Chicago rolled up theirs when Tourist Union 63—the hobo union—wanted to come to town. A famous hobo named Onion Cotton came to Britt in 1900, and hoboes have been gathering there ever since. The museum is currently housed in an old movie theater, but has so much material it plans to expand into a larger space. The collection includes contents of famous hobo satchels, a hat adorned with clothespins and feathers from Pennsylvania Kid, tramp art, hobo walking sticks, and an exhibit of the character language hoboes use to leave each other messages. Every year, Britt and the museum host a hobo convention that attracts up to 20,000 ramblers from all parts of the country. “It’s like a big family reunion,” Hughes says.

The Museum of Mountain Bike Art and Technology is located above a bike store in Statesville, North Carolina, with a 5,000-square-foot showroom displaying the evolution of mountain bikes. The collection includes “boneshakers”—bikes from 1869 with wooden spoke wheels—as well as turn-of-the-century bikes with interchangeable parts. Among this free museum’s 250 bikes are several from the mountain bike boom beginning in the 1970s, when the energy crisis pushed people to make tougher bikes. Many of these are highly designed with great craftsmanship. “Even if you have no interest in bikes, you’d hang one on the wall because they’re so pretty,” says owner Jeff Archer. The museum holds an annual mountain-bike festival that attracts many of the sport’s pioneers.

The Bigfoot Discovery Museum in Felton, California, was inspired by owner Michael Rugg’s encounter with a Sasquatch-like creature when he was a child. The museum offers local history tied to Bigfoot; plaster casts of foot and hand prints; hair, scat and tooth samples; displays that discuss hypotheses to explain Bigfoot sightings and Bigfoot in popular culture; and a research library. In the audio-visual section, the controversial Patterson-Gimlin film purporting to show a Bigfoot spied in the wild runs on a continuous loop. “I’ve got everything I’ve found dealing with Bigfoot or mystery primates here,” Rugg says.

Vent Haven Museum in Fort Mitchell, Kentucky, is the world’s only public collection of materials related to ventriloquism. William Shakespeare Berger, a Cincinnati businessman and later president of the International Brotherhood of Ventriloquists, began the collection in the early 1900s; ventriloquists—“vents”—still donate materials. There are 700 ventriloquist dummies arranged in three buildings, some sitting in rows as if waiting for a class to begin. Unusual creations include a head carved by a German prisoner in a Soviet POW camp during World War II—the vent performed for fellow prisoners as well as for the cook, to get extra food—and a family of figures used by a blind Vaudeville-era vent. Photographs and drawings of vents abound, including one from the late 1700s, when ventriloquism was more often a trick to con people out of money than a form of entertainment. The museum also has a library with 1,000 volumes and voluminous correspondence for scholars. Admission is by appointment only, and curator Jennifer Dawson leads hour-and-a-half tours for $5.00. A yearly convention is held nearby.

The Robert C. Williams Paper Museum in Atlanta originated with a collection by Dard Hunter, an artist from America’s Arts and Crafts Movement who traveled the world to record the ways that people made paper and collect artifacts. In the museum, visitors can examine precursors to modern paper, including many tapa cloths made from pounded bark in Sumatra and Tunisia with inscriptions from special occasions; a vat used by Chinese papermakers in 200 B.C.; and one of the one million prayers printed on paper and enshrined in wooden pagodas that were commissioned by the Empress Shotuku after Japan’s smallpox epidemic of 735. In all, there are over 100,000 watermarks, papers, tools, machines and manuscripts. Admission for individuals is free; guided tours are $5 per person or $8.50 for a tour and paper-making exercise.

Why Rare Hawaiian Monk Seals Are Lining Up to Get Their Shots

Smithsonian Magazine

On a summer day on the island of Kaua`i, a Hawaiian monk seal hauls his 500-pound body out of the surf and galumphs toward a nursing female and her newborn pup. When he gets a few feet away from the mother, she arches her back and faces him, head high. He does the same. She barks. He barks. Snot and saliva fly. 

It’s typical—if awkward—monk seal courtship behavior, more posturing than physical. But scientists are concerned that this kind of scene could swiftly turn into a deadly disease outbreak for one of the most endangered marine mammals in the world. The Hawaiian monk seal has been listed under the Endangered Species Act since 1976, after its numbers were devastated by decades of hunting and other forms of human contact.

About a decade ago, researchers grew worried that a strain of morbillivirus, the genus of viruses that includes measles and canine distemper, could wipe out the last of these rare seals. In response, they’ve launched the first-ever effort to vaccinate a species of wild marine mammals—an effort that has come with a host of first-ever challenges.

The 1,200 or so monk seals that survive in the wild are spread over vast swaths of ocean, coming ashore for only brief periods of time to rest, molt and give birth on islands that stretch across the Central Pacific. Morbillivirus, which is spread by respiratory secretions, could kill off a significant chunk of them without anyone knowing. Thankfully, a growing population of monk seals in the main Hawaiian Islands is making it easier for researchers and their dedicated volunteer network to find—and immunize—them.

For the endangered monk seal, disease has always been the “monster lurking over the horizon,” says Charles Littnan, lead scientist for the National Oceanic and Atmospheric Administration’s Hawaiian Monk Seal Research Program (HMSRP). But it wasn’t until the past decade that research revealed the species’ precariously low genetic diversity. At that point, infectious disease “rocketed to an immediate concern,” Littnan says.

In fact, disease may have contributed to the demise of the only other species of Neomonachus, the genus that includes the Hawaiian monk seal: the extinct Caribbean monk seal. Disease “can wipe out seal populations all over the world, and we know that there are disease concerns for the living monk seals,” Kris Helgen, a zoologist at the National Museum of Natural History who studies the extinct monk seal’s evolutionary history, told Smithsonian.com in 2014.

“Simply put, morbillivirus outbreaks in pinnipeds and cetaceans are the things that marine mammal stranding responders have nightmares about,” says Dr. Michelle Barbieri, the lead veterinarian with HMSRP who is supervising rollout of the vaccine program. “The disease could spread easily, infecting many animals out in the ocean before we are able to detect what's going on.” 

Two monk seals tussle on a beach on Kaua`i in 2015. (Kim Rogers)

Littnan and his team had already started developing a plan to respond in the event of a morbillivirus outbreak when, in 2010, their fears were validated. That was when researchers identified the first known case of morbillivirus in the Central Pacific, in a Longman’s beaked whale that stranded on Maui.

Littnan knew that the disease had already killed tens of thousands of seals and dolphins in the Atlantic, Mediterranean, Arctic and North Pacific oceans. Soon after, a northern fur seal, whose native habitat is the west coast of the United States, turned up on an O‘ahu beach near where monk seals are known to haul out and rest. While the fur seal wasn’t infected, its species is known to carry the disease.

Fortunately, there have been no known cases of morbillivirus in Hawaiian monk seals—yet. Blood tests indicate no prior population exposure, probably because the seals are buffered by the archipelago’s isolation in the middle of the Pacific Ocean. While that’s good, it also means there is no natural immunity. And that leaves this already-vulnerable species quite exposed.

If morbillivirus does break out, Hawaiian monk seals won’t stand a chance. An invasive disease, like an exotic species, can quickly wipe out a vulnerable population. In seals, morbillivirus targets the lungs and brain. Pneumonia may develop, skin lesions may erupt, and the animal may exhibit abnormal behavior, resulting in death in as little as five days.

Littnan and Barbieri knew the only hope for these seals was total vaccination. But 85 percent of the species live in the remote Northwestern Hawaiian Islands, among atolls and islets, elusive even to field biologists who study them. Finding monk seals to vaccinate, especially if the vaccine required a follow-up booster, would be a challenge.

Another challenge was finding the right vaccine. The most effective vaccines generally contain a live virus, which runs a chance of infecting the vaccinated animal. There was no way that the National Marine Fisheries Service, the regulatory agency overseeing the seal’s recovery, would risk introducing the live virus into the population. That left vaccines with dead viruses. But the immune responses in those are short-lived and require frequent boosters—hardly an option when dealing with a wild marine species that spends two-thirds of its life at sea.

The best choice turned out to be a recombinant vaccine, which takes advantage of the way viruses inject their genetic material into cells. Researchers create recombinant vaccines by inserting genetic material that stimulates an immune response into a harmless carrier virus. The vaccine the researchers chose was one made for ferrets. That isn’t as strange as it sounds: all morbilliviruses are antigenically similar, meaning that vaccines made for one can cross-protect against another. However, there can always be adverse reactions.

A juvenile and weaner monk seal greet each other on a Kauai beach in 2014. (Kim Rogers)

Meanwhile, across the Pacific in California, researchers were conducting trials using the ferret vaccine in five captive harbor seals. It worked: Tests found that the initial vaccination, followed by a booster one month later, produced persistent antibodies to the virus. The seals had no noticeable side effects. 

The project hit a snag when, in 2013, after nearly a decade of work toward a vaccination program, the manufacturer, Merial, put the vaccine on indefinite backorder. “That took us totally by surprise,” Littnan says. “It was unfortunate timing because this vaccine has been in strong production for a long time and used quite broadly not only for ferrets in the wild but very broadly in the zoo and aquaria industry to vaccinate marine mammals and other mammals.”

Littnan kept moving forward, modeling the potential spatial and temporal spread of the disease and planning his team’s response in the event of an outbreak.

This form of aggressive intervention to save the species wasn’t new to HMSRP. In the past, Littnan’s team had stepped in to disentangle seals trapped in marine debris and de-hook seals caught on fishing lines. They translocated young seals from areas of low survival to areas of high survival. And with The Marine Mammal Center of Sausalito, California, they started rehabilitating underweight and malnourished seals.

Littnan credits these interventionist efforts for more than 30 percent of the monk seals alive today. The annual decline of the population has slowed, from 8 percent in the 1980s to 2.8 percent now.

In late 2015, the manufacturer made a limited quantity of the ferret vaccine available. Littnan didn’t waste any time in procuring enough vaccines for 58 animals. Because the vaccines had about a year before they expired, he decided to inoculate the population immediately to—hopefully—prevent an outbreak rather than respond to one.

Barbieri started with seven monk seals at Ke Kai Ola, the rehabilitation center run by The Marine Mammal Center on Hawai‘i Island. Now, they’re targeting seals in the wild around O‘ahu and Kaua‘i, where 40 to 50 seals regularly show up on each island.

The inoculation itself is a simple process, utilizing a 10-milliliter pole syringe to inject one milliliter of vaccine, topped off with a booster three to five weeks later. As of this writing, at least 43 animals have received vaccinations. Because seals often go on multi-day foraging trips at sea and circumnavigate an island at will, you never know when or where they’ll turn up. Thus, finding a seal during the window when its booster is due may be the trickiest part of the inoculation process.

While vaccine for 58 animals certainly isn’t enough to cover every animal in the population, it is enough to create herd immunity among the growing pocket populations of seals around the Main Hawaiian Islands. The idea is that, if the disease does enter the population, it won’t spread to epidemic proportions.
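As a back-of-the-envelope illustration of that idea (the article reports no transmission figures for morbillivirus in monk seals, so the reproduction number below is purely an assumed value), the classic herd-immunity threshold can be written as:

```latex
% Illustrative herd-immunity arithmetic; R_0 here is an assumed value,
% not one reported for morbillivirus in monk seals.
\[
  p_c = 1 - \frac{1}{R_0}
\]
% If each infected seal would, on average, infect R_0 = 2 others in a fully
% susceptible group, then p_c = 1 - 1/2 = 0.5: immunizing roughly half of a
% local "pocket" population keeps an introduced infection from growing into
% an epidemic.
```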

“We’re using this project as an opportunity to learn about how long the antibodies are detectable in the blood of vaccinated monk seals,” Barbieri says, “And we will be able to compare those data to previous studies.” In the future, such a program could lay the groundwork for protecting seals against other diseases like West Nile.

Littnan hopes to roll out the vaccination program to the remote Northwestern Hawaiian Islands, a stretch of uninhabited islands, islets and atolls making up the recently expanded Papahānaumokuākea Marine National Monument, where his field crews stay for five months every summer. But that all depends on vaccine availability.

“There’s hope,” Littnan says. “We’ve been reaching out to the company. Hopefully, they understand the need and will stick with the product.”

Even with an unlimited supply of vaccines, however, the success of the program hinges on all vaccinated seals achieving what Barbieri calls “perfect immunity.” “Antibodies to morbillivirus do not exactly predict protection in the face of exposure,” says Barbieri. “We will never expose vaccinated monk seals to the virus to find out if they acquire disease or not, so there will remain several unknowns surrounding this question.”

That is, unless a monk seal finds itself naturally infected. But that is a scenario scientists would rather not ponder.

The Science of "Little House on the Prairie"

Smithsonian Magazine

To read Laura Ingalls Wilder’s Little House books is to step out of one’s own world and into hers. For all their relentless nostalgia, with their luscious descriptions of life on the prairie, it’s hard to fault the books’ rich detail.

Wilder has achieved folk hero status thanks to eight books she wrote and published between 1932 and 1943, and a ninth published posthumously. Based on her family’s travels as settlers in Wisconsin, Minnesota and South Dakota from the 1860s through the 1880s, the novels are considered to be semi-autobiographical, even with Wilder’s tweaking of dates, people and events.

Reading the books, though, it’s hard to resist treating the stories as a true historical account. So rich is Wilder’s detail that you’re on the prairies with her, bundled in furs during winter, or roasting in the summer sun in a full-sleeve dress. Readers don’t just get a window into her life; they walk by her side.

For this reason, her biggest fans hold the LauraPalooza conference every two years to celebrate their heroine’s life and works. But like a Russian nesting doll, within every subculture is yet another subculture, and one unexpected element of the conference is hard scientific study.

Wilder’s reflections on her life experiences have spurred some scientists to use remarkable research techniques to clarify details from the books that seem a little too incredible: finding the site of a schoolhouse where she taught, which hasn’t existed for decades; a terrible winter of blizzards pounding the Ingalls’ small town day after day for months; Laura’s sister being blinded by a fever that shouldn’t normally cause that kind of damage.

“Scientists are a bit like detectives,” said Barb Mayes Boustead, a presenter and co-organizer of this year’s conference, held in July at South Dakota State University. “We see something that isn’t explained, and we want to find the evidence that will help explain it. There is no shortage of aspects of Laura’s life and writings to investigate.”

********

From an early age, Jim Hicks had a special empathy for Laura: they both grew up on the prairie. Reading Wilder’s books next to a hearth in his small elementary school in Woodstock, Illinois, snow chipping away at the windows, he developed an interest in visiting the places Laura described in her books.

A retired high school physics teacher, Hicks strove to have his students understand physics in real-world terms. He turned his own classroom techniques on himself when trying to find the site of the Brewster school, where Laura went to teach as a mere teenager:

The Brewster settlement was still miles ahead. It was twelve miles from town. … At last she saw a house ahead. Very small at first, it grew larger as they came nearer to it. Half a mile away there was another, smaller one, and far beyond it, another. Then still another appeared. Four houses; that was all. They were far apart and small on the white prairie. Pa pulled up the horses. Mr. Brewster's house looked like two claim shanties put together to make a peaked roof. –These Happy Golden Years (1943)

Hicks knew that Laura traveled to the school in a horse cart. Thinking of a horse’s legs as compound pendulums, swinging back and forth with a constant period, Hicks measured his wife’s horse’s leg from knee to hoof to figure the time of one oscillation. Then, by measuring the stride length of a casual walk, Hicks could estimate the rate of travel—in this case, around 3 miles per hour.

Frances B. Hicks, Jim's wife, takes measurements to calculate travel time via a horse. (Courtesy of Jim Hicks)

In These Happy Golden Years, Laura describes the drive as occurring just after the family’s noon meal in December. To get back before dark, Hicks estimated, Laura’s driver—her father—had five hours of daylight to make the round trip, so one leg would take 2½ hours. At a horse speed of 3 miles per hour, a one-way trip would be between 7 and 8 miles, not the 12 that Laura estimated in the excerpt above.
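For readers who want to follow the arithmetic, here is a minimal sketch of the estimate in Python. The leg and step lengths are assumed, illustrative values, not Hicks’s actual measurements; the lower leg is idealized as a uniform rod pivoting at the knee, a compound pendulum with period T = 2π√(2L/3g).

```python
import math

# Rough reconstruction of Hicks's estimate (illustrative numbers only).
g = 9.81            # gravitational acceleration, m/s^2
leg_length = 0.75   # knee-to-hoof length, meters (assumed value)
step_length = 0.95  # length of one casual walking step, meters (assumed)

# Period of a uniform rod swinging about one end: T = 2*pi*sqrt(2L / 3g)
period = 2 * math.pi * math.sqrt(2 * leg_length / (3 * g))
step_time = period / 2          # one forward swing of the leg = half a period
speed_mph = (step_length / step_time) * 2.23694  # convert m/s to mph

hours_each_way = 5 / 2          # five hours of daylight, split over the round trip
distance_miles = speed_mph * hours_each_way

print(f"walking speed: {speed_mph:.1f} mph")         # about 3 mph
print(f"one-way distance: {distance_miles:.1f} mi")  # about 7.5 miles, not 12
```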

Finding an old map Laura drew of DeSmet, South Dakota, which showed the Brewster school in a southwesterly direction, Hicks drew a seven-to-eight-mile arc on a map of DeSmet. With the help of homestead land-claim records and Laura’s description of seeing the light of the setting sun glinting off the windows of a nearby shanty, Hicks predicted the most likely location of the Brewster school site, to the west of a homestead settled by the Bouchie family—the “Brewsters” of Laura’s books. Further research confirmed another book detail: Louis and Oliver Bouchie homesteaded on separate but adjoining parcels and, to satisfy homestead requirements, built the separate halves of their mutual home right on the dividing line.

The result: Laura’s peak-roofed shanty.

“Art, physics and all the liberal arts and sciences are an invention of the human spirit, to try and find answers for causes,” says Hicks. “For a true depth of understanding, to be able to think on your feet with a balanced worldview, you need both parts.”

*********************

When she’s not helping organize LauraPalooza, Barb Boustead spends her hours as a meteorologist in the National Weather Service’s Omaha office. An impassioned weather educator, she writes about the science of weather, its impacts, and how people can prepare for inclement weather on her blog, Wilder Weather.

At the end of a recent winter, Boustead revisited a Wilder book from her youth, The Long Winter, centered on the Ingalls’ trials during an exceptionally harsh South Dakota winter.

"There's women and children that haven't had a square meal since before Christmas," Almanzo put it to him. "They've got to get something to eat or they'll starve to death before spring." – The Long Winter (1940)

Boustead said she found herself wondering whether the back-to-back blizzards Laura wrote about had been as bad as she described. Boustead realized that as a meteorologist, she had the tools not only to find out, but to quantify that winter’s severity.

The winter of 1880-81 was relatively well documented for the time. Compiling records on temperature, precipitation and snow depth from 1950 through 2013, she developed a tool to assign a relative “badness” score to the weather recorded at one or more stations in a geographic area. The Accumulated Winter Season Severity Index (AWSSI, rhymes with “bossy”) assigns an absolute severity grade comparing the weather with the rest of the country, and a relative severity grade for comparisons within a region. It can also track year-over-year trends.
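As a rough illustration of how such an accumulating index works, here is a minimal Python sketch. The point thresholds below are invented for demonstration and are not the published AWSSI tables: each day scores points for cold, snowfall and snowpack, and the season’s severity is the running total.

```python
# Illustrative daily-scoring scheme, loosely in the spirit of AWSSI.
# The cutoffs and point values are assumed stand-ins, not the real tables.

def daily_points(max_temp_f, min_temp_f, snowfall_in, snow_depth_in):
    """Score one day's observations; higher means more severe."""
    points = 0
    if max_temp_f <= 32:
        points += 1                 # sub-freezing day
    if max_temp_f <= 20:
        points += 2                 # bitterly cold day
    if min_temp_f <= 0:
        points += 3                 # below-zero night
    points += int(snowfall_in)      # one point per inch of new snow
    if snow_depth_in >= 1:
        points += 1                 # lingering snowpack
    return points

def season_severity(daily_observations):
    """Accumulate daily points across a winter season."""
    return sum(daily_points(**day) for day in daily_observations)

# Example with three made-up January days:
sample_days = [
    {"max_temp_f": 28, "min_temp_f": 10, "snowfall_in": 4, "snow_depth_in": 6},
    {"max_temp_f": 15, "min_temp_f": -5, "snowfall_in": 0, "snow_depth_in": 6},
    {"max_temp_f": 34, "min_temp_f": 25, "snowfall_in": 1, "snow_depth_in": 5},
]
print(season_severity(sample_days))  # 15
```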

Boustead applied the tool to records from weather stations of the 1800s. Every site she investigated in Laura’s region that year falls into the “extreme” category on the AWSSI scale, marking it as a record year for snowfall and low temperatures. The season covered in The Long Winter still ranks among the top 10 worst winters on record for South Dakota, as well as in other regions of the country.

Boustead said she has found that people pay more attention to the science of weather when a good story is involved. “Scientists are told to give facts and information, and not tell a ‘story,’ since that becomes associated with fiction—but it’s not fiction,” Boustead said.

*********

During a meeting in 2000 between medical students and an attending physician at the Albert Einstein College of Medicine in New York City, the subject of scarlet fever came up.

Beth Tarini, now an assistant professor of pediatrics at the University of Michigan, but at the time a third-year medical student on her pediatrics rotation, piped up. “You can go blind from that, can’t you?”

The attending physician said no, but hesitated when Tarini insisted, citing it as the cause of Mary Ingalls’ blindness, as recounted by her sister Laura in By the Shores of Silver Lake.

Beth Tarini, an assistant professor of pediatrics at the University of Michigan, with her collection of Wilder books. (Courtesy of Beth Tarini)

Motivated, Tarini started digging through med school books and references from the 19th century to see if she could find even a hint of verification that scarlet fever could truly be the cause of Mary’s loss of vision.  Picking up the project after a decade-long hiatus, Tarini and an assistant, Sarah Allexan, broadened the search, seeking evidence of an epidemic that might have caused a spate of blindness in children.

They found something better: an actual account of Mary’s fever, facial paralysis and month-long descent into blindness in a local paper from the Minnesota town where the Ingalls family lived.

They also dug into letters between Laura and her daughter Rose, which eventually became part of Laura’s autobiography:

She was suddenly taken sick with a pain in her head and grew worse quickly. She was delirious with an awful fever. We feared for several days that she would not get well. … One morning when I looked at her I saw one side of her face drawn out of shape. Ma said Mary had had a stroke. –Pioneer Girl (Published posthumously in 2014)

Using the newspaper’s reports along with those letters, Tarini guessed Mary had been laid low by either meningitis or encephalitis. A main clue was Laura’s description of Mary’s affliction as a “spinal sickness.”

She narrowed down the likely cause as viral meningoencephalitis, an inflammation of the covering of the spinal cord and brain, not only because of the prolonged headache and fever, but because of the time it took for Mary to go blind. Losing her vision progressively was more indicative of nerve damage from chronic inflammation following an infection. Laura had probably described Mary’s illness as scarlet fever because it commonly plagued children in that time, and readers would have been familiar with it as a terrible illness.

“The newspaper reports brought home the fact that Mary was a real person and her suffering was witnessed and recorded by her community,” Tarini said. “That reinforced our sense that we were getting close to truth.”

Viral encephalitis does not have a cure. Like other virus-caused illnesses, it simply must run its course. But chances are, if Mary Ingalls were similarly stricken today, her blue eyes would still see after she recovered. Hospitalized immediately for a spinal tap and full bloodwork, she would be well fed and kept hydrated, treated for seizures if they occurred, and given steroids for any vision-threatening inflammation. Tissue and fluid samples may be sent to the Centers for Disease Control to help confirm the diagnosis of viral or bacterial meningitis or encephalitis.

“It’s the ultimate differential diagnostic challenge,” Tarini said. “I don’t have the patient there to give me the history or to examine. I had to assemble the clues that history left me.”

How Humankind Got Ahead of Infectious Disease

Smithsonian Magazine

World health officials and organizations are currently involved in a final push to eradicate polio, the paralyzing disease that was once a crisis in the United States but now remains in just three countries—Pakistan, Nigeria and Afghanistan. If the efforts succeed, polio will join smallpox as one of the only human infectious diseases to have been eliminated entirely. Such a feat involves cooperation, coordination and determination, but it also rests on one crucial development: vaccines, which career immunologist John Rhodes calls “the most successful medical measure of any.”

Rhodes has spent his life studying how the immune system reacts to first encounters with infectious agents and other fundamental aspects of vaccine development and success. His research interests have included influenza, malaria and HIV/AIDS vaccines, with time at the U.S. National Institutes of Health, the Wellcome Foundation in London and GlaxoSmithKline, where he was the director of strategy in immunology from 2001 until 2007. In his new book, The End of Plagues: The Global Battle Against Infectious Disease (MacSci), Rhodes traces the long road to vaccination and the twists and turns that are still ahead.

Your story begins with smallpox, widely cited as one of the biggest killers in history. How did that disease affect society?

Up until the 17th century, it was the Black Death, or bubonic plague, that had the most impact. The Great Plague of London, which happened in 1665, was the last major visitation, at least in Britain. After that, there was a considerable change in the pattern of disease, in that smallpox became the biggest killer. The difference between the plague and smallpox is that smallpox afflicted people across the social scale. Those at the very highest, the very top of society, the highest in the land, seemed equally at risk, whereas in the case of the plague it was mostly poor people who tended to die in very large numbers.

How many people were affected?

If you lived in London in the 18th century, then most children would have smallpox during their childhood. The mortality rates were about 20 to 30 percent. It was a common experience in virtually every household in the cities.

Help came from an unlikely source, a woman who was an aristocrat rather than a member of the medical profession. Who was Lady Mary Wortley Montagu, and what role did she play?

She was a remarkable woman and a pioneer of women’s rights. In 1717 she went to Constantinople, modern-day Istanbul, with her husband, who was ambassador there. She learned the customs of ordinary people and discovered that the Greek people in Constantinople had a long-standing custom of protecting their children with the forerunner of vaccination, called variolation. By giving small amounts of the smallpox germ under the skin, preferably from a non-serious case of smallpox, they could protect their children. When she came back to London, she championed and pioneered this against a good deal of resistance, especially from members of the medical profession, who were still promoting the classical idea that upsets in the four vital humors were the cause of disease. Purging, vomiting and bloodletting were the treatments of choice at the time.

Mary was a lone voice. Then she convinced Caroline of Ansbach, the wife of the Prince of Wales, that this was the way to protect aristocratic children whose families could afford the treatment. Mary and Caroline pioneered it, which led to the first trial in 1721, the so-called Royal Experiment in Newgate Prison, where a handful of prisoners were injected with smallpox on the understanding that if they survived they would be pardoned. (They were all due to be hanged.)

Was this approach seen as, well, gross at the time?

You have to remember that this was taking place when disease was rife, sanitation was poor, there was no reliable supply of clean water so diseases like cholera caused epidemics periodically. Inevitably, that is why people tended to drink beer—small beer it was called, with a low level of alcohol—because they knew it was safe. The standards of life were very much different from what they are today. Any sign of some sort of protective measure was seized upon and the standards of proof were very, very low. If it seemed to be safe, then people would adopt it because they hoped it would be lifesaving. That is how half a dozen prisoners came to persuade King George that this should be adopted for the members of his family.

At what point does Edward Jenner, the English doctor credited as the pioneer of vaccination, come into the picture?

Jenner was aware of variolation that had been championed by the Lady Mary and Princess Caroline, and also in the Americas by Cotton Mather. Jenner himself was variolated as a child; it was a horrendous experience. He was very unwell for quite a while. Part of the reason was that members of the medical profession were trying to regain ownership of the process from practitioners who they viewed as breaking from medical tradition, so they added a period of fasting and strange diet in order to remystify the process. Jenner came across the notion that milkmaids were never susceptible to smallpox, and he realized it might be possible to use an innocuous agent, cowpox, in order to do the same thing as the very dangerous variolation. It took him almost three decades before he actually did the experiments, in the late 1790s. It wasn’t a step in the dark. It was an improvement on something that already existed—a pivotal improvement, which relatively quickly spread across the world.

There are stunning stories of how vaccination spread. Can you offer an example?

The King of Spain and others essentially wanted to protect their colonies, which were enormously valuable assets to them. So, in the early 19th century, in what I’ve called “the founding voyages,” chains of children were vaccinated one by one so that the vaccine remained fresh over the course of a sea voyage. By the end of the voyage, the last few children would be vaccinated so there was fresh material, fresh cowpox material in this case, to begin to vaccinate in South America. The Portuguese also championed the same strategy. One of the good things was they didn’t confine it to their own colonies. They went into Asia as well. And that is how the spread of vaccination occurred across the globe.

Was there a backlash from skeptics?

I don’t think it was anything we would recognize as a legitimate concern over safety. It was much more to do with religious and philosophical objections to the introduction of a bestial humor [a vital fluid from a non-human animal] into the human body. The idea of deliberately using a disease from a cow to protect humans against disease was repugnant to a large group of people. There were more reasoned critics who believed there was little benefit from vaccination, and it took a little while to convince people. But it was only a matter of five years or so before vaccination began its inexorable spread.

How did vaccination evolve, and eventually move beyond smallpox?

There was a sort of gradual, slowly evolving, incremental improvement until the end of the 19th century, when there was an explosion in the field of bacteriology. Scientists began to realize that there were many other diseases that could be addressed with vaccines, and that led to widespread attempts to bring about vaccines for other infectious diseases. Louis Pasteur and Robert Koch were the important figures of the late 19th century.

It was germ theory that altered everything. In the 1860s, Pasteur was the first to show that germs do not arise spontaneously—they exist pretty much everywhere around us. He did away with the theory of spontaneous generation. He also managed to produce vaccines against rabies and cholera. And a lot of his discoveries were almost serendipitous. In the case of cholera, the researchers had left a culture of the cholera germ out on the bench, so it grew weak. Then, when they injected it into chickens, instead of getting cholera, the chickens were protected against subsequent infection… Pasteur knew all about Jenner’s work, by the way, and he used the term “vaccine” in Jenner’s honor, extending it to all kinds of vaccines.

Thereafter, there were all kinds of exciting stories. One of the most important was the discovery of antibodies, or antitoxins as they were then called.

It’s clear that vaccines have brought us a long way. What are the plagues that, contrary to your book’s title, are still threats?

Malaria is a huge killer on a global scale and a lot of the disease burden is in the developing world. There are exciting vaccines in the pipeline for malaria.

And tuberculosis, surprisingly, still produces a huge mortality on the global scale. The BCG vaccine, discovered in the early part of the 20th century, is highly controversial. It is used in Britain, Europe and developing countries, but it is not used in the U.S.A. One of the problems is that if you vaccinate against TB with BCG, you can’t then screen for whether someone has TB or not: if you have been vaccinated, it looks as though you’ve been exposed.

The third is HIV/AIDS, where there has been so much effort and interest in developing a protective vaccine. It has been hugely frustrating for at least a decade. That is partly because the virus targets the very system you are trying to enhance and strengthen—the immune system and the cells that normally defend us against infection. Those three I would pick as the major global targets, together with polio.


Interested in learning more? Read John Rhodes' The End of Plagues: The Global Battle Against Infectious Disease (MacSci).

A Brief History of Openly Gay Olympians

Smithsonian Magazine

Watching figure skater Adam Rippon compete, it’s easy to forget that he’s on skates. His dramatic, sharp movements – and facial expressions to match – emulate those of a professional dancer, at once complementing and contradicting his smooth, unfettered movement along the ice. He hides the technical difficulty of every jump and spin with head-flips and a commanding gaze, a performer as well as an athlete. But there’s one thing Rippon won’t be hiding: this year, he and freestyle skier Gus Kenworthy will become the first openly gay American men ever to compete in the Winter Olympics.

“The atmosphere in the country has changed dramatically,” says Cyd Zeigler, who co-founded Outsports, a news website that highlights the stories of LGBT athletes, in 1999. “Two men getting married wasn’t even a possibility when we started Outsports. Now it’s a reality in Birmingham, Alabama. There are gay role models at every turn – on television, on local sports, and in our communities.”

Even so, the last time that the United States sent an openly gay man to any Olympic Games was in 2004, when equestrians Guenter Seidel and Robert Dover won bronze in team dressage. It was Dover’s sixth time representing the United States at the Olympics; during his second Games, in 1988, Dover came out, becoming the first openly gay athlete to compete in the modern Olympics.

"I wish that all gay athletes would come out in all disciplines – football, baseball, the Olympics, whatever," Dover has said. "After six Olympics, I know they're in every sport. You just have to spend one day in the housing, the gyms, or at dinner to realize we're all over."

Indeed, by the time Dover came out on the international stage, it was clear that gay athletes were competing and winning at all levels of professional sports. Seven years earlier, tennis star Billie Jean King was famously outed when a lawsuit filed by a former lover led her to publicly admit to having had a lesbian affair. (King promptly lost all her professional endorsements, but later said she only wished she had come out sooner.) And in 1982, former Olympian Tom Waddell – who would die from AIDS at the height of the epidemic five years later – helped found the first Gay Games for LGBT athletes; 1,350 athletes competed.

But it was more than a decade earlier when an openly gay athlete first performed in the Olympic Games. Just not exactly during competition.

English figure skater John Curry had barely come off the high of winning gold at the 1976 Winter Olympics in Innsbruck, Austria, when reporters caught wind of his sexuality from an article published in the International Herald Tribune. They cornered the skater in a press conference to grill him on matters most personal, according to Bill Jones’s Alone: The Triumph and Tragedy of John Curry. Curry acknowledged that the rumors about his sexuality were true, but when journalists asked prurient questions betraying the era’s misconceptions about homosexuality and masculinity, Curry fought back: “I don’t think I lack virility, and what other people think of me doesn’t matter,” he said. “Do you think that what I did yesterday was not athletic?” (It should be noted as well that homosexual acts were still outlawed in parts of the U.K. at the time.)

But even though the competition was over for Curry, custom had it that medal winners were expected to appear in exhibition performances. There, in a fiery, unflinching athletic spectacle, Curry abandoned his usual lively routine of skips and hops for a stern technical masterpiece, making him the first openly gay athlete to perform on the Olympic stage.

“When everyone had telephoned their story and discussions broke out in many languages around the bar, opinion began to emerge that it was [Curry] who was normal and that it was we who were abnormal,” wrote Christopher Brasher, a reporter for The Observer, in his coverage that year.

LGBT journalists and historians, including Zeigler and Tony Scupham-Bilton, have catalogued the many Olympians who were homosexual but competed in a time before being “out” was safe and acceptable. German runner Otto Peltzer, for instance, competed in the 1928 and 1932 Olympics, but was arrested by the Nazis in 1934 for his homosexuality and was later sent to the concentration camps. In more recent years, athletes have waited to come out until after their time in competition was over, including figure skaters Johnny Weir and Brian Boitano and American diver Greg Louganis. Louganis was long rumored to be gay, but didn’t come out publicly until the opening ceremonies of the 1994 Gay Games: "Welcome to the Gay Games,” Louganis said to the crowd. “It's great to be out and proud."

Though the early history of openly gay Olympians is dotted with male athletes, openly gay women have quietly grown in number at recent Games. French tennis player Amélie Mauresmo was among the first women to come out publicly prior to an Olympic appearance – though, as Zeigler notes, whether an athlete comes out publicly depends in part on the prominence of their sport outside the Olympics. In 1999, a year before her first Olympic competition, reporters questioned Mauresmo’s sexuality after an opponent called her “half a man” for showing up to a match with her girlfriend. Mauresmo’s casual discussion of her sexuality as an integral part of her life, and her dismissal of concerns that she would lose sponsorship, represented a shift in the stigma surrounding coming out as an athlete. Fear of commercial failure still underpinned many athletes’ decisions not to come out, but Mauresmo was undaunted.

“No matter what I do, there will always be people against me,” Mauresmo has said. “With that in mind, I decided to make my sexuality clear… I wanted to say it once and for all. And now I want us to talk about tennis.” Mauresmo still faced criticism for her “masculinity.” But her sponsor, Nike, embraced her muscular look by designing clothes that would display her strength, according to the 2016 book Out in Sport. Mauresmo went on to win silver in women’s singles in 2004.

At the 2008 Summer Olympics in Beijing, 11 openly gay athletes competed, only one of whom – Australian diver Matthew Mitcham, who won gold and is a vocal LGBT activist – was a man. All six openly gay athletes at the 2010 Winter Olympics in Vancouver were women, as were all seven of the openly gay athletes at the 2014 Winter Olympics in Sochi. Both of the intervening Summer Olympics saw a greater turnout of openly gay athletes, but women still held the large majority. In 2016, four of the players on the U.S. women’s basketball team – Elena Delle Donne, Brittney Griner, Seimone Augustus and Angel McCoughtry – were openly gay.

This accounting, of course, elides the fact that sexual orientation is a spectrum. Olympians who openly identify as bisexual, for instance, are growing in number as well. Additionally, the International Olympic Committee, and the many governing bodies beneath it, have made some strides when it comes to recognizing that gender is not binary, though policies for transgender athletes remain a thorny debate among officials and athletes. That being said, the IOC allowed pre-surgery transgender athletes to take part in the 2016 Rio Games.

With this year’s Winter Games in Pyeongchang, Rippon and Kenworthy are the first openly gay American men to compete in the Olympics since same-sex marriage became legal throughout the United States in 2015, and the cultural shift is apparent. American tennis legend Martina Navratilova, who came out in 1981 but did not compete as an Olympian until 2004, has said that coming out cost her $10 million in sponsorships; Kenworthy, by contrast, boasts sponsorships with Visa, Toyota and Ralph Lauren, to name a few. The skier also recently appeared in an ad for Head & Shoulders, with a rainbow pride flag waving behind him.

“The atmosphere for LGBT athletes has changed quicker in [the] past decade,” says Scupham-Bilton, an LGBT and Olympic historian. “In the 20th century there was more homophobia in sport and society in general. As the increase in LGBT equality has progressed, so has acceptance of LGBT athletes.”

There’s one notable exception: Sochi 2014. The summer before hosting the Winter Olympics, in what many saw as an affront to gay rights activism, the Russian government passed a law prohibiting the promotion of “nontraditional” sexual relationships to minors. The United States used the Olympic platform as an opportunity for subtle protest, including prominent gay athletes Brian Boitano, Billie Jean King and Caitlin Cahow in its Olympic delegation, and protests were staged across the world. Despite the outpouring of international support, Canadian figure skater Eric Radford opted to wait until after Sochi to come out, citing his desire to be recognized for his skill, rather than his sexuality. He’s already made his mark at the Pyeongchang Games, where his performance with skating partner Meagan Duhamel vaulted Canada to the top of the team figure skating competition.

Rippon and Kenworthy have used their newfound platforms to make statements on political issues. Rippon recently made headlines when he refused an offer to meet with Vice President Mike Pence due to disagreements with his stances on LGBT rights – which include past statements that appear to support funding gay conversion therapy. Pence’s former press secretary denied his support for gay conversion therapy during the 2016 presidential campaign. Kenworthy also criticized the Vice President as a “bad fit” to lead the United States' delegation at the Opening Ceremony in Pyeongchang on Friday.

Political platforms and sponsorships aside, Rippon and Kenworthy ultimately hoped that by coming out they could live as freer, more authentic versions of themselves – and empower others to do the same.

“There is pressure that comes with this responsibility and I feel I have a responsibility to the LGBT community now,” Kenworthy has said. “I want to be a positive example and an inspiration for any kids that I can.”

Can This Berry Solve Both Obesity and World Hunger?

Smithsonian Magazine

The Chicago-based chef Homaro Cantu plans to open a new café with Wonka-esque ambitions. He will offer guests a “miracle berry”-laced appetizer that subsequently makes his lite jelly donut—baked without sugar—taste rich, gooey and calorific.

The concept of his Berrista Coffee, set to open next week on Chicago’s north side, rests on miracle fruit—berries native to West Africa that contain a glycoprotein called miraculin, which binds to the tongue and, when triggered by acids in foods, causes a sweet sensation. Once diners down the berry, which will be delivered at Berrista in the form of a small madeleine cake, everything subsequently sipped, slurped and swallowed is altered for somewhere between 30 and 45 minutes. In that time, mascarpone cheese will taste like whipped cream, low-fat yogurt will pass as decadent cheesecake, sparkling water with lemon will sub for Sprite, and cheap merlot will feign a rich port.

Miracle fruit doesn’t just amplify sweetness, it boosts flavor. “If you had a strawberry, it’s not just the sweet that goes up, but there’s a dramatic intense strawberry flavor,” says Linda Bartoshuk, director of human research at the Center for Smell and Taste at the University of Florida, who has studied the effects of miracle fruit since the 1970s. “That’s why people get such a kick out of it. The flavor increase is impressive.”

European explorers of West Africa first discovered local tribes eating the fruit before insipid meals, such as oatmeal gruel, in the 18th century. Researchers in the United States have been studying its effects as a sweetener since the 1960s. The berries are considered safe to ingest, according to Bartoshuk, but because they are exotic and still little known to the general public, they have yet to become part of our mainstream diet.

Guiding me on a pre-opening tour of his 1,400-square-foot shop, featuring an indoor vegetable garden at the front counter, the ebullient Cantu declares, “Let’s unjunk the junk food!” The Berrista menu will offer sugar-free pastries and dishes like chicken and waffle sandwiches that allow you to, in his words, “enjoy your vices,” without sacrificing your health.

Cantu is a restless tinkerer who holds dozens of patents in food technology, including an edible paper made of soy. He once worked with NASA on creating a “food replicator” for use in space, much like the replicator in Star Trek. Cantu has been experimenting with miracle berries since 2005, when a friend complained that her sense of taste had gone metallic as a side effect of chemotherapy. Last year, he published The Miracle Berry Diet Cookbook, giving dieters, diabetics and chemo patients recipes for whoopie pies, cakes and cookies as well as savory dishes, such as Korean beef with kimchi and spicy apricot chicken wings. Now, he hopes to introduce such berry-boosted dishes to mainstream commuters in the working-class Old Irving Park neighborhood, just two blocks from the I-94 expressway.

Miracle fruit, or Synsepalum dulcificum, grows on bushy trees, generally to about five feet. As part of Berrista’s indoor farm, Cantu plans to add a grove of 82 miracle berry plants in the basement by next spring, eventually shipping the harvest to mberry, the Arizona-based company that processes the fruit into tablets and powder, concentrations more potent than the berry itself, which the restaurant uses.

As Cantu sees it, the berry and the indoor farm are solutions to health and hunger issues, as well as to environmental sustainability.

“Refined sugar is a dense energy storage product,” he explains, while offering me a sample of Berrista’s chicken and waffle sandwich, a leaner-than-normal version that, after I down a purple, aspirin-sized miracle berry pill, tastes just like the sweet-savory, maple-syrup-drenched dish. “Throughout history your body got used to consuming raw vegetables and meat, then cooked meat. Sugar is a relatively new invention, maybe in the last 300 years. Now your body, which has taken so long to evolve, has so much thrown at it, it breaks down.”

Image by Amy Stallard. By serving a miracle berry appetizer, Cantu can make a donut—baked without sugar—taste rich and calorific.

Image by Amy Stallard. Sparkling water with lemon or lime subs for Sprite.

Image by Amy Stallard. Berrista's leaner-than-normal chicken and waffle sandwich tastes just like the sweet-savory, maple-syrup-drenched version.

Image by Amy Stallard. The menu, still in development, includes plenty of interesting indulgences, like these carbonated grapes.

Image by Amy Stallard. Panini Cristo and strawberry jam.

Image by Amy Stallard. Cappuccino.

Image by Amy Stallard. Sirloin flatbread.

Image by Amy Stallard. Croissants.

Image by Amy Stallard. Pineapple mango smoothie.

Image by Amy Stallard. Serrano panini.

Image by Amy Stallard. "Let's unjunk the junk food!" says chef Homaro Cantu. The owner of Berrista wants you to "enjoy your vices" without sacrificing your health.

The menu, still in development, includes plenty of indulgences, such as donuts and paninis. Eliminating sugar doesn't make them calorie-free, but they are better-for-you options, the chef argues. He plans to price his menu items to compete with fast food rivals, making his version of health food economically accessible.

"I don’t necessarily think it’s going to be the next magic pill or silver bullet for our obesity epidemic," said Louisa Chu, Chicago-based food journalist and co-host of the public radio podcast “Chewing the Fat.” "But it gets us thinking and it may wean us from the sugar we take for granted and the hidden sugar in foods we don’t know about."

If the berries can alter flavor perceptions of treats like sugar-free donuts, Cantu reasons, they can also help feed the developing world on bland or bitter foods that are digestible but normally considered inedible. To prove it, he once spent a summer eating his own lawn alongside miracle berries. “Kentucky bluegrass tastes like tarragon,” he reports.

His plans to scale up the campaign are vague, but hunger is something Cantu knew intimately as a child in Portland, Oregon. “I grew up floating from homeless shelter to homeless shelter with my mom and sister,” he says. “A character-building childhood, we’ll call it.”

By age 12, he was working in restaurants, spending his free time taking apart engines to see how they worked. “I actually still do that,” he laughs. He earned his practical education in haute cuisine over four years at Charlie Trotter’s, the famed, now-shuttered, high-end restaurant in Chicago. Just before opening his first restaurant, Moto, in 2004, the 38-year-old took a brief hiatus to create edible paper for menus and other food-related innovations, including utensils with spiral handles that chefs can stuff with aromatic herbs and a hand-held polymer oven that can withstand temperatures up to 400 degrees Fahrenheit and still feel cool to the touch, both of which he uses at Moto. “Over the years, I started realizing in food there’s a need for invention, a need for practical applications, because there are so many challenges,” he says.

One of those challenges, as he sees it, is eliminating food miles—the distance a food must be shipped, which dulls its flavor over time and wastes considerable fossil fuels in transit. The Natural Resources Defense Council says the average American meal includes ingredients from five countries outside the United States. After nearly four years and $200,000 spent perfecting his indoor farm growing herbs and vegetables at Moto in Chicago’s West Loop, he says he finally has the right combination of lighting, seeds and a siphoning water system that irrigates without an electrical pump, making the farm productive, energy-saving and therefore financially viable.

If visionary Chicago city planner Daniel Burnham, who famously said, “Make no little plans; they have no magic to stir men’s blood,” had a food counterpart, it would be Cantu, who envisions his indoor farms proliferating and disrupting today’s food system.

“Imagine if this whole neighborhood had access to zero-food-mile products and you were able to buy produce cheaper than at the grocery store up the block? This will happen,” he says with certainty, surveying the busy road on which Berrista resides, a block away from a Dunkin’ Donuts. “Now this is an opportunity for grocery stores to start doing this. This addresses so many problems, the California drought, plastics. We need to decentralize food production.”

One step at a time is not this chef’s multi-tasking, magic-stirring MO.

Why Is Washing Your Hands So Important, Anyway?

Smithsonian Magazine

Avoid close contact with sick patients. Stay home if you’re feeling unwell. Scrub your hands with soap and water for at least 20 seconds and for goodness’ sake, stop touching your face.

By now, you’ve probably heard or seen the advice from the Centers for Disease Control and Prevention (CDC) for staving off COVID-19, the viral epidemic ricocheting across the globe. Most cases of the disease are mild, triggering cold-like symptoms including fever, fatigue, dry cough and shortness of breath. The death rate appears to be low—about two or three percent, perhaps much less. But the virus responsible, called SARS-CoV-2, is a fearsomely fast spreader, hopping from person to person through the droplets produced by sneezes and coughs. Since COVID-19 was first detected in China’s Hubei province in December 2019, nearly 100,000 confirmed cases have been reported worldwide, with many more to come.

To curb the virus’ spread, experts stress the importance of hand hygiene: keeping your hands clean by regularly lathering up with soap and water, or, as a solid second choice, thoroughly rubbing them down with an alcohol-based sanitizer. That might sound like simple, even inconsequential advice. But such commonplace practices can be surprisingly powerful weapons in the war against infectious disease.

“[Washing your hands] is one of the most important ways to interrupt transmission of viruses or other pathogens,” says Sallie Permar, a physician and infectious disease researcher at Duke University. “It can have a major impact on an outbreak.”

How to Destroy a Virus

In the strictest sense of the word, viruses aren’t technically alive. Unlike most other microbes, which can grow and reproduce on their own, viruses must invade a host such as a human cell to manufacture more of themselves. Without a living organism to hijack, viruses can’t cause illness. Yet viral particles are hardy enough to remain active for a while outside of the host, with some staying infectious for hours, days or weeks. For this reason, viruses can easily spread unnoticed, especially when infected individuals don’t always exhibit symptoms—as appears to be the case with COVID-19.

Researchers are still nailing down the details of exactly how SARS-CoV-2 is transmitted and how resilient it is outside the body. Because the virus seems to hang out in mucus and other airway fluids, it almost certainly spreads when infected individuals cough or sneeze. Released into the air, infectious droplets can land on another person or a frequently touched surface like a doorknob, shopping cart or subway seat. The virus can also transfer through handshakes after someone carrying the virus sneezes or coughs into their hand.

After that, it’s a short trip for the virus from hand to head. Researchers estimate that, on average, humans touch their faces upwards of 20 times an hour, with about 44 percent of these encounters involving eyes, mouths and noses—some of the quickest entry points into the body’s interior.
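
A quick back-of-the-envelope calculation shows why that matters (the underlying estimates vary from study to study, so treat the result as rough): 20 touches per hour × 0.44 ≈ 9 touches to the eyes, mouth or nose every hour, or roughly one potential viral entry point touched every seven minutes.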

Breaking this chain of transmission can help stem the spread of disease, says Chidiebere Akusobi, an infectious disease researcher at Harvard’s School of Public Health. Sneezing or coughing into your elbow can keep mucus off your mitts; noticing when your hand drifts towards your face can help you reduce the habit.

All this public-health-minded advice boils down to a game of keep away. To actually infect a person, viruses must first get inside the body, where they can infect living cells—so if one lands on your hands, the best next move is to remove or destroy it.

The Science Behind Hand-Washing

The most important step to curbing infection may be hand-washing, especially before eating food, after using the bathroom and after caring for someone with symptoms. “It’s simply the best method to limit transmission,” says Kellie Jurado, a virologist at the University of Pennsylvania’s Perelman School of Medicine. “You can prevent yourself from being infected as well as transmitting to others.”

According to the CDC, you should wet your hands—front and back—with clean, running water; lather up with soap, paying mind to the easily forgotten spaces between your fingers and beneath your nails; scrub for at least 20 seconds; then rinse and dry. (Pro tip: If counting bores you or you’re sick of the birthday song, try singing the chorus of a favorite pop song to keep track.)

Done properly, this process accomplishes several virus-taming tasks. First, the potent trifecta of lathering, scrubbing and rinsing “physically removes pathogens from your skin,” says Shirlee Wohl, a virologist and epidemiologist at Johns Hopkins University.

In many ways, soap molecules are ideal for the task at hand. Soap can incapacitate SARS-CoV-2 and other viruses that have an outer coating called an envelope, which helps the pathogens latch onto and invade new cells. Viral envelopes and soap molecules both contain fatty substances that tend to interact with each other when placed in close proximity, breaking up the envelopes and incapacitating the pathogen. “Basically, the viruses become unable to infect a human cell,” Permar says.

Alcohol-based hand sanitizers also target these vulnerable viral envelopes, but in a slightly different way. While soap physically dismantles the envelope using brute force, alcohol changes the envelope’s chemical properties, making it less stable and more permeable to the outside world, says Benhur Lee, a microbiologist at the Icahn School of Medicine at Mount Sinai. (Note that “alcohol” here means a chemical like ethanol or isopropyl alcohol—not a beverage like vodka, which contains only some ethanol.)

Alcohol also can penetrate deep into the pathogen’s interior, wreaking havoc on proteins throughout the virus. (Importantly, not all viruses come with outer envelopes. Those that don’t, like HPV and the virus that causes polio, won’t be susceptible to soap, and to some extent alcohol, in the same way.)

A schematic of an enveloped virus (left) and a non-enveloped virus (right). SARS-CoV-2 and other coronaviruses are enveloped, meaning they have a fatty outer coating that can be targeted by soap and alcohol. (Modified from Nossedotti / Wikimedia Commons (CC BY-SA 3.0))

Hand sanitizers made without alcohol—like some marketed as “baby-safe” or “natural”—won’t have the same effect. The CDC recommends searching for a product with at least 60 percent alcohol content—the minimum concentration found to be effective in past studies. (Some water is necessary to unravel the pathogen’s proteins, so 100 percent alcohol isn’t a good option.)

As with hand-washing, timing matters with sanitizers. After squirting a dollop onto your palm, rub it all over your hands, front and back, until they’re completely dry—without wiping them off on a towel, which could keep the sanitizer from finishing its job, Jurado says.

But hand sanitizers come with drawbacks. For most people, using these products is less intuitive than hand-washing, and the CDC notes that many people don’t follow the instructions for proper application. Hand sanitizers also don’t jettison microbes off skin like soap, which is formulated to lift oily schmutz off surfaces, Akusobi says.

“Soap emulsifies things like dirt really well,” he says. “When you have a dirty plate, you don’t want to use alcohol—that would help sterilize it, but not clean it.”

Similarly, anytime the grit is visible on your hands, don’t grab the hand sanitizer; only a full 20 seconds (or more) of scrubbing with soapy water will do. All told, hand sanitizer “should not be considered a replacement for soap and water,” Lee says. “If I have access to soap and water, I will use it.”

Too Much of a Good Thing?

Technically, it is possible to overdo it with both hand-washing and hand sanitizing, Akusobi says. “If your skin is chronically dry and cracking, that’s no good. You could be exposing yourself to other infections,” he says. But “it would take a lot to get to that point.”

In recent weeks, hand sanitizers have been flying off the shelves, leading to shortages and even prompting some retailers to ration their supplies. Some people have begun brewing up hand sanitizers at home based on online recipes.

Many caution against this DIY approach, as the end products can’t be quality controlled for effectiveness, uniformity or safety, says Eric Rubin, an infectious disease researcher at Harvard’s School of Public Health. “On average, one would imagine that [a homemade sanitizer] would not work as well, so it would be a mistake to rely on it,” he says.

As more information on SARS-CoV-2 and COVID-19 emerges, experts stress the importance of awareness. Even as the news changes and evolves, people’s vigilance shouldn’t.

“Do the small things you need to do to physically and mentally prepare for what’s next,” Wohl says. “But don’t panic. That never helps anybody.”

What Scientists Know About Immunity to the Novel Coronavirus

Smithsonian Magazine

Resolving the COVID-19 pandemic quickly hinges on a crucial factor: how well a person’s immune system remembers SARS-CoV-2, the virus behind the disease, after an infection has resolved and the patient is back in good health.

This phenomenon, called immune memory, helps our bodies avoid reinfection by a bug we’ve had before and influences the potency of life-saving treatments and vaccines. By starving pathogens of hosts to infect, immune individuals cut off the chain of transmission, bolstering the health of the entire population.

Scientists don’t yet have definitive answers about SARS-CoV-2 immunity. For now, people who have had the disease appear unlikely to get it again, at least within the bounds of the current outbreak. Small, early studies in animals suggest immune molecules may stick around for weeks (at least) after an initial exposure. Because researchers have only known about the virus for a few months, however, they can’t yet confidently forecast how long immune defenses against SARS-CoV-2 will last.

“We are so early in this disease right now,” says C. Brandon Ogbunu, a computational epidemiologist at Brown University. “In many respects, we have no idea, and we won’t until we get a longitudinal look.”

A memorable infection

When a pathogen breaches the body’s barriers, the immune system will churn out a variety of immune molecules to fight it off. One subset of these molecules, called antibodies, recognizes specific features of the bug in question and mounts repeated attacks until the invader is purged from the body. (Antibodies can also be a way for clinicians to tell if a patient has been recently infected with a given pathogen, even when the microbe itself can no longer be detected.)

Though the army of antibodies dwindles after a disease has resolved, the immune system can whip up a new batch if it sees the same pathogen again, often quashing the new infection before it has the opportunity to cause severe symptoms. Vaccines safely simulate this process by exposing the body to a harmless version or piece of a germ, teaching the immune system to identify the invader without the need to endure a potentially grueling disease.

From the immune system’s perspective, some pathogens are unforgettable. One brush with the viruses that cause chickenpox or polio, for instance, is usually enough to protect a person for life. Other microbes, however, leave less of an impression, and researchers still aren’t entirely sure why. This applies to the four coronaviruses known to cause a subset of common cold cases, says Rachel Graham, an epidemiologist and coronavirus expert at the University of North Carolina at Chapel Hill. Immunity against these viruses seems to wane in a matter of months or a couple of years, which is why people get colds so frequently.

Because SARS-CoV-2 was only discovered recently, scientists don’t yet know how the human immune system will treat this new virus. Reports have surfaced in recent weeks of people who have tested positive for the virus after apparently recovering from COVID-19, fueling some suspicion that their first exposure wasn’t enough to protect them from a second bout of disease. Most experts don’t think these test results represent reinfections. Rather, the virus may have never left the patients’ bodies, temporarily dipping below detectable levels and allowing symptoms to abate before surging upward again. Tests are also imperfect, and can incorrectly indicate the virus’ presence or absence at different points.

Because the COVID-19 outbreak is still underway, “if you’ve already had this strain and you’re re-exposed, you would likely be protected,” says Taia Wang, an immunologist and virologist at Stanford University and the Chan Zuckerberg Biohub. Even antibodies against the most forgettable coronaviruses tend to stick around for at least that long: the few months the new virus has so far been circulating.

COVID-19 packs a stronger punch than the common cold, so antibodies capable of fending off this new coronavirus may have a shot at lingering longer. Broadly speaking, the more severe the disease, the more resources the body will dedicate to memorizing that pathogen’s features, and the stronger and longer lasting the immune response will be, says Allison Roder, a virologist at New York University. Previous studies have shown that people who survived SARS, another coronavirus disease that resulted in a 2003 epidemic, still have antibodies against the pathogen in their blood years after recovery. But this trend is not a sure thing, and scientists don’t know yet whether SARS-CoV-2 will fall in line.

Earlier this month, a team of researchers posted a study (which has yet to be published in a peer-reviewed journal) describing two rhesus macaques that could not be reinfected with SARS-CoV-2 several weeks after recovering from mild bouts of COVID-19. The authors chalked the protection up to the antibodies they found in the monkeys’ bodies, apparently produced in response to the virus—a result that appears to echo the detection of comparable molecules in human COVID-19 patients.

But the mere presence of antibodies doesn’t guarantee protection, Wang says. Reinfections with common cold coronaviruses can still happen in patients who carry antibodies against them. And a bevy of other factors, including a person’s age and genetics, can drastically alter the course of an immune response.

An evolving virus?

Complicating matters further is the biology of SARS-CoV-2 itself. Viruses aren’t technically alive: While they contain genetic instructions to make more of themselves, they lack the molecular tools to execute the steps, and must hijack living cells to complete the replication process for them.

After these pathogens infect cells, their genomes often duplicate sloppily, leading to frequent mutations that persist in the new copies. Most of these changes are inconsequential, or evolutionary dead ends. Occasionally, however, mutations will alter a viral strain so substantially that the immune system can no longer recognize it, sparking an outbreak—even in populations that have seen a previous version of the virus before. Viruses in the influenza family are the poster children for these drastic transformations, which is part of why scientists create a new flu vaccine every year.

When flu viruses copy their genomes, they often make mistakes. These errors can change the way their proteins look to the immune system, helping the viruses evade detection. (Rebecca Senft, Science in the News)

Some viruses have another immunity-thwarting trick as well: If a person is infected with two different strains of the flu at the same time, those viruses can swap genetic material with each other, generating a new hybrid strain that doesn’t look like either of its precursors, allowing it to skirt the body’s defenses.

Researchers don’t yet know how quickly similar changes could occur in SARS-CoV-2. Unlike flu viruses, coronaviruses can proofread their genomes as they copy them, correcting mistakes along the way. That feature reduces their mutation rate, and might make them “less of a moving target” for the immune system, says Scott Kenney, an animal coronavirus expert at Ohio State University. But coronaviruses still frequently trade segments of their genetic code with each other, leaving the potential for immune evasion wide open.

So far, SARS-CoV-2 also doesn’t appear to be undergoing any extreme mutations as it sweeps across the globe. That may be because it’s already hit on such a successful strategy, and doesn’t yet need to change its tactic. “Right now, it’s seeing a completely naive population” that’s never been exposed to the virus before, Graham says. The virus “doesn’t seem to be responding to any kind of pressure,” she adds.

Should SARS-CoV-2 get a second infectious wind, it may not come for some time. Even fast-mutating influenza strains can take years to reenter populations. And if or when that day comes, future COVID-19 outbreaks could be milder. Sometimes viral success means treading gently with the host, says Catherine Freije, a virologist at Harvard University.

“Viruses that cause severe disease actually tend to die out faster because a host that’s feeling ill can’t spread it as well.” In those cases, she says, sometimes, “the outbreak just sort of fizzles out.”

But we can’t rule out the possibility that SARS-CoV-2 could change in a way that bumps up its virulence instead, Kenney says. To steel the population for what’s ahead, sometimes, he adds, “We just have to be the ultimate pessimist when it comes to this type of outbreak.”

Protection without disease

Although much about COVID-19 remains unknown, researchers are racing through vaccine development to boost the world’s collective immunity—something that would stem the spread of the virus through the human population.

“Vaccine development is going to be critical to controlling this outbreak,” says Wang. That’s especially true if SARS-CoV-2 returns for an encore act. “If it’s an ever-present pathogen, we’ll certainly need vaccines to be part of our arsenal.”

Researchers have managed to concoct partially effective vaccines to combat other coronavirus infections in animals, such as pigs. In these creatures, immunity lasts “at least several months, possibly longer,” says Qiuhong Wang, a coronavirus expert at Ohio State University. (Because many of the subjects are livestock, they often don’t live long enough for researchers to test them further.) These vaccines may be reason for hope, she says, pointing out that “humans are animals, too.”

Two flu viruses can sometimes infect the same host cell. When they spill their contents into the cell, their genetic material can recombine, generating new hybrid viruses that are mixtures of their precursors. (Rebecca Senft, Science in the News)

Several research teams are designing human vaccines that trigger the production of antibodies that attack SARS-CoV-2’s spike protein—the molecular key the virus uses to unlock and enter human cells. Because the spike protein is crucial for viral infection, it makes an excellent target for a vaccine, says Benhur Lee, a virologist at the Icahn School of Medicine at Mount Sinai. But Lee also points out that the spike protein, like other parts of the virus, is capable of mutating—something that could compromise the ability of a vaccinated individual to ward off the virus.

If mutation regularly occurs to that extent, scientists may need to frequently reformulate COVID-19 vaccines, like they do with pathogens in the flu family, Wang says. “We’d be starting over to some degree if there is a new outbreak.”

However, Wang cautions that it’s too soon to tell whether that will be the case. As research worldwide proceeds at breakneck speed, scientists may instead be able to brew up a universal vaccine that’s active against multiple forms of SARS-CoV-2.

But vaccines, which require rigorous testing and retesting to ensure efficacy and safety, take a long time to develop—typically more than a year, Qiuhong Wang says. In the meantime, researchers are turning their attention to treatments that could save those who have already been infected.

Some solutions will inevitably require antiviral drugs that tackle active SARS-CoV-2 infections after they’ve already begun, usually by interfering with the virus’ infection cycle.

But another approach, based on a time-tested technique, also taps into the immune response: transferring blood plasma—and the disease-repelling antibodies it contains—from recovered patients into infected ones. Though new to the current pandemic, the treatment has been deployed in various forms since the 1890s, and saw modest success during outbreaks of SARS in 2003 and Ebola in 2014. Ongoing trials in New York are now recruiting carefully screened, healthy volunteers who no longer have symptoms or detectable virus in their bodies to donate plasma. Importantly, this doesn’t diminish donors’ own resistance to SARS-CoV-2, since their immune systems have already learned to manufacture more antibodies.

Antibodies degrade over time, and won’t protect the people who receive these transfusions forever. The plasma treatments also can’t teach their recipients’ immune systems to make new antibodies after the first batch disappears. But this stopgap measure could ease the burden on health care workers and buy time for some of the outbreak’s most vulnerable victims.

Even as the pandemic evolves, researchers are already looking ahead. Just as the response to this outbreak was informed by its predecessors, so too will COVID-19 teach us about what’s to come, Qiuhong Wang says. The entry of other coronavirus strains into our species “is inevitable.”

“We don’t know when or where that will happen,” she says. But hopefully by the time the next pandemic comes around, the world will be more ready.

How Central Park’s Complex History Played Into the Case Against the 'Central Park Five'

Smithsonian Magazine

For more than a century, New York City’s Central Park has been the soothing natural counter to the city’s steel-and-concrete chaos. Designed to be an amalgam of the best parts of nature, the park, though it has had its ups and downs, has played a special role as the leafy-green heart of the city.

So, when news of a brutal attack in the park swept the city on April 19, 1989, the public outcry was enormous. The assault and rape of an unnamed victim, a woman since identified as Trisha Meili but then only known as “the jogger,” was plastered across headlines for months. Even the media shorthand for the case revealed the significance of the crime’s setting—the five boys accused of the crime became forever known as the “Central Park Five.”

“Central Park was holy,” said Ed Koch, mayor of New York at the time of the attack, in Ken Burns’ 2012 documentary on the case. “If it had happened anyplace else other than Central Park, it would have been terrible, but it would not have been as terrible.”

All five of the teenage defendants—Kevin Richardson, Yusef Salaam, Raymond Santana, Korey Wise and Antron McCray—were found guilty and served between 6 and 13 years in prison. Most of the evidence against them came from a series of written and videotaped confessions, which, during the two trials, the boys said were coerced; DNA evidence from the crime scene produced no matches. Still, both juries, as well as most of the New York tabloids, were convinced of the teenagers’ guilt. The story of the case is retold in the new Netflix miniseries "When They See Us," which premieres today.

But in 2002, the case was reopened when Matias Reyes, a serial rapist serving a prison sentence for other crimes, confessed to being the sole attacker in the Central Park case. His DNA and his account of the attack matched the original evidence. A judge vacated the Central Park Five’s convictions later that year, after the defendants had all served out their sentences, and New York was left to once more reckon with a case that had been closed for years.

Within that reckoning lay the question: Why had this case become so closely tied with the identity of Central Park? Maybe it was because a brutal attack on park grounds was such a perversion of the park’s original mission to serve as a calming and even civilizing space for all the city’s residents. Or maybe it was because such an occurrence exposed how that mission, and the city’s egalitarian project, had never been fully realized.

***

In the mid-19th century, New York’s population boomed as immigrants flooded in, particularly from Ireland, and as American-born migrants fled country farms for city life in an ever-industrializing nation. Even as buildings sprouted rapidly across the city, conditions grew ever more cramped and hazardous. Amid this increasing city-wide claustrophobia, some New Yorkers began to call for a park where green spaces could provide a healing respite for city dwellers.

“Commerce is devouring inch by inch the coast of the island, and if we would rescue any part of it for health and recreation it must be done now,” wrote William C. Bryant, editor of the New York Evening Post and a leading advocate for Central Park’s creation, in an 1844 editorial.

Of course, some motives for creating the park were more paternalistic, as city elites thought a cultivated, natural area could help “civilize” the New York underclass. Others were more business-minded, as realtors knew beautifying undeveloped land would raise property values for surrounding properties. In any case, state legislators were convinced, and set out to build the first major landscaped public park in the United States.

The city landed upon the 700-acre Manhattan expanse where the park still rambles to this day, sprawling out between Fifth and Eighth Avenue and from 59th Street to 106th Street (later expanded a few blocks to 110th). Because of its rough terrain, in which swampy muck alternated with harsh rock, the area didn’t hold much appeal for real estate developers, and in 1853, the city used its power of eminent domain to claim the land as public property and begin its transformation.

“The Mall, Central Park, New York,” circa 1897. A pedestrian esplanade in Manhattan’s Central Park, designed to plans by Frederick Law Olmsted and Calvert Vaux. (The Print Collector/Getty Images)

From the beginning, though, the park had an element of controversy: When the city tapped the area for its own use, more than 1,600 people already lived on the future park’s land. Hundreds were occupants of Seneca Village, a community established by free African-American property owners in 1825, two years before slavery was abolished in New York. Once the city claimed the land, police forcibly evicted Seneca Village residents, who probably scattered throughout the New York area. The community’s houses, churches and school were razed to make way for the rolling landscape designs of Frederick Law Olmsted and his design partner, Calvert Vaux.

In Olmsted’s eyes, the park would be a great equalizer among New York’s stratified classes. He’d been inspired by gardens in Europe, and especially by a visit to Birkenhead Park, the first publicly funded park in England. He noted that the site was enjoyed “about equally by all classes,” unlike most of the other cultivated natural grounds at the time, which were privately held by the wealthy elite.

A similar park would be, for Olmsted, an important part of the “great American democratic experiment,” says Stephen Mexal, an English professor at California State University Fullerton who has researched Central Park and its role in the Central Park Five case.

“There was a link that he thought was meaningful between genteel manners, people of genteel birth and genteel landscapes,” Mexal says. “And he said, ‘Well, what if we just kind of took those landscapes and made them more available to everybody?’ So, he said that the park would have this, quote, ‘refining influence’ among everybody in the city.”

Olmsted and Vaux’s “Greensward Plan” beat out more than 30 other entries in a public contest, promising sweeping pastoral expanses and lush greenery. Their vision came to life quickly, and by 1858 the first section of the park opened to the public. Millions of visitors poured into the park in its first years. Families flocked to skate on the lake in winter, and the fashionable New York set paraded into the park in carriages to socialize. Strict rules tried to set a tone of tranquil decorum in the park, prohibiting rowdy sports, public concerts and even walking on the wide grass lawns.

For a time, it seemed like Olmsted’s dream was fulfilled: He’d created a beautiful green respite in the middle of the city’s chaos, an idealized image of nature for all to enjoy.

“There is no other place in the world that is as much home to me,” Olmsted wrote of Central Park. “I love it all through and all the more for the trials it has cost me.”

Image by Universal History Archive/UIG via Getty Images. Horse-drawn carriages and coaches on the driveway, Central Park.

Image by Rae Russel/Getty Images. A well-dressed couple enjoys boating on one of the ponds in Central Park, New York, New York, 1948.

Image by Robert Walker/New York Times Co./Getty Images. Anti-Vietnam War peace rally at Sheep Meadow in Central Park, New York City, in April 1968.

Image by Ernst Haas/Getty Images. People walking in Central Park in 1980.

Olmsted, however, may not have been prepared for the reality of a true “park for the people.” As the 19th century wore on, more working-class citizens and immigrants began frequenting the park, disrupting the “genteel” air its creator had so carefully cultivated on their supposed behalf. Sunday afternoon concerts, tennis matches, carousel rides and lawn picnics became important pieces of the park’s new character.

Though Olmsted bemoaned the “careless stupidity” with which many misused his perfectly groomed landscape, his democratic experiment, once set in action, could not be reeled in. Ultimately, even Olmsted’s best efforts couldn’t bring about harmony in the city. As New York continued its growth into the next century, Central Park, intended to be an outlet to relieve the pressures of city living, instead became a microcosm for the urban condition—its use reflecting the changing tides of its country.

In the 1940s, newspapers latched onto the idea of a “crime wave” in the park after a young boy was murdered, a fear that persisted even though Central Park remained one of the safest precincts in the city. Protesters filled the park’s lawns in the 1960s, staging counterculture “be-ins” to speak out against racism and the Vietnam War.

The park gradually fell into disrepair, and though the city government made some efforts to undo a century’s worth of damage to Olmsted’s carefully designed structures and landscapes, the city’s financial crisis of the 1970s sapped public funds, and park conservation fell by the wayside.

In 1975, a New York Times reporter lamented the park’s “state of galloping decay,” noting the “boarded windows, broken stonework and weed-gouged mortar” of the park’s famous Belvedere Castle.

“It can stand as a symbol of the decline of the park—the slow death of the Olmsted landscape in spite of spotty first aid and the private generosity that rebuilds an occasional bit of token architectural design,” the reporter wrote.

The decaying park, in turn, could stand as a symbol of the struggling city surrounding it. During the decade or so leading up to the Central Park Five case, New York City was a powder keg of competing fears and tensions. The crack-cocaine epidemic emerged as a major threat in the early 1980s. Homelessness swelled at the same time as a growing financial sector brought immense wealth to a select few. Violent crimes climbed ever higher, with a record 1,896 homicides reported in 1988.

When the Central Park jogger attack was reported, it ignited that powder keg, setting off widespread public outrage and a media firestorm.

One word in particular became a centerpiece for coverage of the case: “wilding.” Police reported that the boys had used the term to describe the attack’s motive, or rather, its lack thereof. The concept of “wilding”—roaming around and wreaking havoc, just for the fun of it—sparked fascination and terror. “Park marauders call it ‘wilding’ … and it’s street slang for going berserk,” the New York Daily News proclaimed.

The obsession over this concept, of totally random and gleeful criminality, helped fuel the continuing fervor over the case, Mexal says.

“That crime captured the public’s attention for a number of reasons. Partially because it was the assault of a white woman by, they thought, non-white males,” he says. “But also because of the beliefs about nature, savagery and wilderness that the word ‘wilding’ seemed to conjure, especially when it was put against this backdrop of Central Park, which is a built environment that is a stylized recreation of a natural space.”

The park was supposed to be a sanitized version of nature, Mexal explains—one that substituted calm civility for genuine wilderness and the danger that came with it. A pattern of “wilding” through the park’s cultivated landscapes would show a failure of this attempt to conquer the natural world.

Media coverage took this idea of “wildness” and ran with it. Newspapers repeatedly referred to the five defendants in sub-human terms: They were a “wolf pack,” “savages,” “monsters,” with the unsuspecting woman as their “prey.” In addition to following a long tradition of dehumanizing language about African-Americans, such headlines fed into the outrage that seemed to spring up any time something went wrong in Central Park.

An abandoned boathouse in Central Park in 1986. (Thomas Monaster/NY Daily News Archive via Getty Images)

Even through varying states of disarray, the park remained close to New Yorkers’ hearts. In the 1980s, commentators still referred to Central Park as “the most popular and democratic space in America” or as “the one truly democratic space in the city,” as Elizabeth Blackmar and Roy Rosenzweig write in their historical account of Central Park. Meili, the victim of the attack, recalled her love for running in the park, a routine she followed most days of the week.

"It was a release to be out there in nature, to see the beauty of the park ... as well as the skyscrapers and the lights of New York City, and the sense that, 'Wow, this is my city. I'm here in my park,’" Meili told ABC News in a recent interview. "I loved the freedom of the park. ... It just gave me a sense of vitality."

It follows that any crime in the park became all the more personal for New Yorkers because of its setting. Crime in Central Park “shock[ed] people like crime in heaven,” as one captain of the park police precinct said.

The Central Park Five case has been, at various points, a terrifying example of pointless crime, and a chilling story of false convictions; it has sparked cries to bring back the death penalty, and to reform the criminal justice system.

The case and its coverage have also been deeply shaped by the setting of the crime in question—a manmade piece of nature that represents its city not despite its many conflicts and paradoxes, but because of them.

Man’s Best Friend or the World’s Number-One Pest?

Smithsonian Magazine

A pack of street dogs naps on a traffic island in Bucharest, Romania. In spite of a culling program, the animals swarm the streets—and occasionally maul residents and tourists. Photo courtesy of Flickr user cod_gabriel.

Stray dogs are a common element of travel just about everywhere in the world—and generally they are a harmless nuisance. Hikers and cyclists are frequently swarmed by village mutts in developing countries, often on the outskirts of town where the animals are allowed to live—mangy, mean rejects of society that scrape by on trash and seem bent on hassling anyone carrying a passport. But usually, the animals are easily sent scattering, tails between their legs, if a person only turns to face them. An even better shooing technique—and standard practice worldwide—is to reach over and pick up a stone. Before you’ve even suggested you might throw it—and I don’t suggest you do unless you need to—the dogs will be slinking away with their heads down, as cowardly as they are predictable. It works every time.

Well, almost—because occasionally stray dogs bite. Even more occasionally, a pack of them, encouraged and emboldened by their own numbers, may escalate into full-fledged attack mode as their lupine instincts show through the grime, fleas and bald patches. It has been reported that one in 20 dogs will bite a person in its lifetime, and with perhaps 600 million strays skirmishing for food on the fringe of the human world, attacks on people are common—and for travelers to many places, dogs are a danger to be considered along with various other logistics of tourism. Though sterilization and controversial culling programs are underway in some countries, the dog problem may only be growing worse. Rabies outbreaks occur regularly, and the World Health Organization estimates that the disease kills 55,000 people per year. Dogs are the vector in 99 percent of these cases.
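
To put those figures in rough perspective, and assuming loosely that the one-in-20 lifetime bite rate applies to strays as well as to pets: 600 million ÷ 20 = 30 million stray dogs that will bite someone at some point, before even counting the world's much larger population of owned dogs.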

Asia and Africa are ground zero for dog-person maulings, but Eastern Europe—in spite of strident efforts to control the animals’ populations—also has serious problems with homeless, nameless mutts. Consider the headline “Killer stray dogs put Bulgaria on edge,” which sounds like something out of a pulp-fiction comic book. But that was a real headline in April, just weeks after a pack of more than two dozen dogs mauled an 87-year-old retired professor in the capital of Sofia, home to an estimated 10,000 stray dogs. The man, his face and limbs shredded, died after ten days in intensive care. Bulgaria, indeed, is swarming with strays, and a progressive government-funded sterilization program seems unable to curb the animals’ population. Most of the country’s street dogs seem gentle enough, sleeping away the days in the streets and plazas, many sporting the yellow ear tag signifying that they’ve been sterilized. But with dangerous regularity, the dogs turn mean. There was another death in 2007, when British tourist Ann Gordon was killed by a group of dogs in the village of Nedyalsko. And in 2009 a 6-year-old girl was reportedly “dismembered” by a pack of street dogs. In 2010, a pack of strays found its way into the Sofia zoo and killed 15 resident animals. Now, after the death of the elderly man in Sofia, the nation’s media are buzzing with dog talk. I even met a cyclist once in Greece who had just come from Bulgaria. I was on my way there—and he advised I carry a spear.

Just next door, in Romania, the dog problem is also out of control. Bucharest alone is said to be home to as many as 100,000 stray dogs. In late 2011, lawmakers voted to allow euthanizing the animals by the thousands. Even though the decision was a timely, measured response to the January 2011 mauling death of a 49-year-old woman, animal rights activists grew livid at the suggestion of killing the animals. They protested in the streets and demanded alternative methods of dog population control, like sterilization. Meanwhile, Romanian dogs still bite 75 people per day, according to one blog—and there is still talk of the 2006 death of a visiting Japanese businessman, killed in what may have been a freak accident; a single dog bit the tourist on the leg and chanced to puncture a vital artery. The man bled to death. Bucharest Deputy Mayor Razvan Murgeanu was later quoted as saying, “When we tried to solve the stray dogs problem in the past, we were held back by sensitive people who love animals. Now, look what happens.”

Stray dogs lurk and loiter in every nation on earth—and some, like this one in Egypt, live amid some of the most famous sites and scenery. Photo courtesy of Flickr user YoHandy.

In addition to the many challenges of rebuilding a war-torn nation, Iraq has dogs to contend with—and the government isn’t being particularly compassionate toward the animals. With an estimated 1.25 million strays roaming the Baghdad area, officials launched a militant culling program in 2010 in response to increasing reports of attacks, some of which have been fatal. Using guns and poisoned meat left in the streets, officials killed 58,000 stray dogs in a three-month period, and some reports say the effort aims to destroy a million dogs. The massive culling may remind one of America’s own grisly war on wolves in the 18th, 19th and 20th centuries, when the animals were poisoned, shot, blown up and burned.

Machismo in Mexico is to blame for a bizarre reluctance to neuter dogs, an operation that macho men supposedly believe will make a male dog gay. And so the dogs are generously left with their virility and fertility—and the population soars out of control. Millions reportedly wander Mexico City, where 20,000 per month are seized by government dog catchers and electrocuted, and for every 100 people in rural Mexican villages, there are as many as 30 mongrels. Mexico isn’t the only nation south of the Rio Grande where dogs run rampant, and where the efforts to manage them are archaic. “Every country across Latin America is about 40 years behind developed nations in terms of street dog welfare,” according to Humane Society International. That means packs living at garbage dumps, trotting the roadsides, yowling all night across the cities, in places outnumbering people and, sometimes, attacking. It also means that public agencies and private businesses have their hands full with killing dogs, a joyless job that may never end.

The small Indonesian island of Bali, a tourist hotspot roughly 50 miles square and home to 3.8 million people, is also home to about 500,000 stray dogs. Between November 2008 and early 2010, Bali officials reported 31,000 dog bites, while another source reported 30,000 dog bites in just the first half of 2010. Though many Balinese love and revere dogs, the government has come down with a heavy hand on the stray population, poisoning dogs in response to a rabies outbreak that, as of November 2011, had caused at least 100 human deaths in three years. The outbreak is ongoing, and the governments of the United States and Australia have both issued warnings about travel to Bali.

And, coming home, the United States has a stray population of its own. Consider Detroit, where the declining human population of this impoverished city has made way for homeless dogs, which now number 20,000 to 50,000, according to estimates. And throughout the country, dog bites send 1,000 people to the hospital every day. From January 2006 to December 2008, dogs reportedly killed 88 people in America; 59 percent of the deaths were attributed to pit bulls. Dogs, of course, know no political borders, and for travelers in America’s rural regions, dogs are a nuisance as noisy and ugly as they are in Bulgaria, or India, or Colombia. Cyclist and blogger Brendan Leonard rode his bike through the Deep South in 2010. Prompted by dozens of nasty dog incidents along the way, Leonard wrote a column advising other travelers on how to safely deal with mean dogs. He suggests blasting charging dogs with pepper spray, or whacking them with a broomstick. He also says that simply shouting back to match a pack’s own awful volume can send them away.

Last note: Let’s not hate all stray dogs. Many of them just want a friend. I’ve had mutts stay with me overnight in my camping places in Greece and Turkey, and I’ve had them chase me desperately for miles the next day, driven by the sense of loyalty that has made canines humanity’s most popular animal companions. The traveling cyclists I met recently in France had adopted a street dog in Spain and another in Morocco. And in how many stories of travel has the protagonist teamed up with a canine companion?

The author teamed up for a day with this stray puppy last year in Turkey. He found the dog—a Kangal sheep dog—tangled in a roadside briar patch and left it in a friendly village. Photo by Alastair Bland.

What do you think should be done about the large populations of stray dogs? Do they present a serious threat? Have you had any positive or negative experiences with strays in your travels abroad? Let us know in the comments below.

How to Resurrect a Lost Language

Smithsonian Magazine

Decades ago, when David Costa first started to unravel the mystery of Myaamia, the language of the Miami tribe, it felt like hunting for an invisible iceberg. There were no sound recordings, no speakers of the language, no fellow linguists engaged in the same search—in short, nothing that could attract his attention in an obvious way, like a tall tower of ice poking out of the water. But with some hunting, he discovered astonishing remnants hidden below the surface: written documents spanning thousands of pages and hundreds of years.

For Daryl Baldwin, a member of the tribe that lost all native speakers, the language wasn’t an elusive iceberg; it was a gaping void. Baldwin grew up with knowledge of his cultural heritage and some ancestral names, but nothing more linguistically substantial. “I felt that knowing my language would deepen my experience and knowledge of this heritage that I claim, Myaamia,” Baldwin says. So in the early 1990s Baldwin went back to school for linguistics so he could better understand the challenge facing him. His search was fortuitously timed—Costa’s PhD dissertation on the language was published in 1994.

United by their work on the disappearing language, Costa and Baldwin are now well into the task of resurrecting it. So far Costa, a linguist and the program director for the Language Research Office at the Myaamia Center, has spent 30 years of his life on it. He anticipates it’ll be another 30 or 40 years before the puzzle is complete and all the historical records of the language are translated, digitally assembled, and made available to members of the tribe.

Costa and Baldwin’s work is itself one part of a much larger puzzle: 90 percent of the 175 Native American languages that managed to survive the European invasion have no child speakers. Globally, linguists estimate that up to 90 percent of the planet’s 6,000 languages will go extinct or become severely endangered within a century.

“Most linguistic work is still field work with speakers,” Costa says. “When I first started, projects like mine [that draw exclusively on written materials] were pretty rare. Sadly, they’re going to become more and more common as the languages start losing their speakers.”

David Costa, a linguist and the program director for the Language Research Office at the Myaamia Center, has spent 30 years of his life on the task of reviving Myaamia. (Myaamia Center)

Despite the threat of language extinction, despite the brutal history of genocide and forced removals, this is a story of hope. It’s about reversing time and making that which has sunk below the surface visible once more. This is the story of how a disappearing language came back to life—and how it’s bringing other lost languages with it.

The Miami people traditionally lived in parts of Indiana, Illinois, Ohio, Michigan and Wisconsin. The language they spoke when French Jesuit missionaries first came to the region and documented it in the mid-1600s was one of several dialects that belong to the Miami-Illinois language (called Myaamia in the language itself, which is also the name for the Miami tribe—the plural form is Myaamiaki). Miami-Illinois belongs to a larger group of indigenous languages spoken across North America called Algonquian. Algonquian languages include everything from Ojibwe to Cheyenne to Narragansett.

Think of languages as the spoken equivalent of the taxonomic hierarchy. Just as all living things have common ancestors, moving from domain down to species, languages evolve in relation to one another. Algonquian is the genus, Miami-Illinois is the species, and it was once spoken by members of multiple tribes, who had their own dialects—something like a sub-species of Miami-Illinois. Today only one dialect of the language is studied, and it is generally referred to as Miami, or Myaamia.

Like cognates between English and Spanish (which are due in part to their common descent from the Indo-European language family), there are similarities between Miami and other Algonquian languages. These likenesses would prove invaluable to Baldwin and Costa’s reconstruction efforts.

Baldwin started with word lists found through the tribe in Oklahoma and in his family's personal collection, but he struggled with pronunciation and grammar. That's where Costa's work came in. (John D. & Catherine T. MacArthur Foundation)

But before we get to that, a quick recap of how the Miami people ended up unable to speak their own language. It’s a familiar narrative, but its commonness shouldn’t diminish the pain felt by those who lived through it.

The Miami tribe signed 13 treaties with the U.S. government, which led to the loss of the majority of their homelands. In 1840, the Treaty of the Forks of the Wabash required they give up 500,000 acres (almost 800 square miles) in north-central Indiana in exchange for a reservation of equal size in the Unorganized Indian Territory—what was soon to become Kansas. The last members of the tribe were forcibly removed in 1846, just eight years before the Kansas-Nebraska Act sent white settlers running for the territory. By 1867 the Miami people were sent on another forced migration, this time to Oklahoma, where a number of other small tribes, whose members spoke different languages, had already been relocated. As the tribe shifted to English with each new migration, their language withered into disuse. By the 1960s there were no more speakers among the 10,000 individuals who can claim Miami heritage (members are spread across the country, but the main population centers are Oklahoma, Kansas and Indiana). When Costa first visited the tribe in Oklahoma in 1989, that discovery was a shock.

“Most languages of tribes that got removed to Oklahoma did still have some speakers in the late 80s,” Costa says. “Now it’s an epidemic. Native languages of Oklahoma are severely endangered everywhere, but at that time, Miami was worse than most.”

When Baldwin came to the decision to learn more of the Miami language in order to share it with his children, there was little to draw on. Most of it was word lists that he’d found through the tribe in Oklahoma and in his family’s personal collection. Baldwin’s interest coincided with a growing interest in the language among members of the Miami Tribe of Oklahoma, which produced its first unpublished Myaamia phrase book in 1997. Baldwin had lists of words taped around the home to help his kids engage with the language, teaching them animal names and basic greetings, but he struggled with pronunciation and grammar. That’s where Costa’s work came in.

“David can really be credited with discovering the vast amount of materials that we work with,” Baldwin says. “I began to realize that there were other community members who also wanted to learn [from them].”

Together, the men assembled resources for other Miami people to learn their language, with the assistance of tribal leadership in Oklahoma and Miami University in southern Ohio. In 2001 the university (which owes its name to the tribe) collaborated with the tribe to start the Myaamia Project, which took on a larger staff and a new title (the Myaamia Center) in 2013.

When Baldwin first started as director of the Myaamia Center in 2001, following completion of his Master’s degree in linguistics, he had an office just big enough for a desk and two chairs. “I found myself on campus thinking, ok, now what?” But it didn’t take him long to get his bearings. Soon he organized a summer youth program with a specific curriculum that could be taught in Oklahoma and Indiana, and he implemented a program at Miami University for tribal students to take classes together that focus on the language, cultural history and issues for Native Americans in the modern world. Baldwin’s children all speak the language and teach it at summer camps. He’s even heard them talk in their sleep using Myaamia.

Baldwin organized a summer youth program with a specific curriculum that could be taught in Oklahoma and Indiana. (Karen L. Baldwin)

To emphasize the importance of indigenous languages, Baldwin and others researched the health impact of speaking a native language. They found that among indigenous bands in British Columbia, those in which at least 50 percent of the population spoke the language fluently saw one-sixth the rate of youth suicide of those with lower rates of spoken language. In the Southwestern U.S., tribes where the native language was widely spoken had smoking rates of only around 14 percent, compared with 50 percent in the Northern Plains tribes, where language usage is much lower. Then there are the results at Miami University: graduation rates for tribal students were 44 percent in the 1990s; since the implementation of the language study program, that rate has jumped to 77 percent.

“When we speak Myaamia we’re connecting to each other in a really unique way that strengthens our identity. At the very core of our educational philosophy is the fact that we as Myaamia people are kin,” Baldwin says.

While Baldwin worked on sharing the language with members of his generation and the younger generation, Costa focused on the technical side of the language: dissecting the grammar, syntax and pronunciation. While the grammar is fairly alien to English speakers—word order doesn’t determine a sentence’s meaning, and subjects and objects are reflected by changes to the verbs—the pronunciation was really the more complicated problem. How do you speak a language when no one knows what it should sound like? All the people who recorded the language in writing, from French missionaries to an amateur linguist from Indiana, had varying levels of skill and knowledge about linguistics. Some of their notes reflect pronunciation accurately, but the majority of what’s written is haphazard and inconsistent.

This is where knowledge of other Algonquian languages comes into play, Costa says. Knowing the rules Algonquian languages have about long versus short vowels and aspiration (making an h-sound) means they can apply some of that knowledge to Miami. But it would be an overstatement to say all the languages are the same; just because Spanish and Italian share similarities doesn’t mean they’re the same language.

“One of the slight hazards of extensively using comparative data is you run the risk of overstating how similar that language is,” Costa says. “You have to be especially careful to detect what the real differences are.”

The other challenge is finding vocabulary. Sometimes it’s a struggle to find words that seem like they should be obvious, like ‘poison ivy.’ “Even though we have a huge amount of plant names, no one in the 1890s or 1900s ever wrote down the word for poison ivy,” Costa says. “The theory is that poison ivy is much more common now than it used to be, since it’s a plant that thrives in disturbed habitats. And those habitats didn’t exist back then.”

And then there’s the task of creating words that fit life in the 21st century. Baldwin’s students recently asked for the word for ‘dorm rooms’ so they could talk about their lives on campus, and create a map of campus in Myaamia. Whenever such questions arise, Baldwin, Costa and others collaborate to determine whether the word already exists, whether it has been invented in another language in the Algonquian family (like a word for ‘computer’), and how to make it fit with Myaamia’s grammar and pronunciation rules. Above all, they want the language to be functional and relevant to the people who use it.

“It can’t be a language of the past. Every language evolves, and when a language stops evolving, why speak it?” Baldwin says.

A program at Miami University for tribal students offers classes that focus on the language, cultural history and issues for Native Americans in the modern world. (Karen L. Baldwin)

Their approach has been so successful that Baldwin began working with anthropology researchers at the Smithsonian Institution to help other communities learn how to use archival resources to revitalize their lost or disappearing languages. The initiative was developed out of the Recovering Voices program, a collaboration between the National Museum of Natural History, the Center for Folklife and Cultural Heritage and the National Museum of the American Indian. Researchers from each of the institutions aim to connect with indigenous communities around the world to sustain and celebrate linguistic diversity. From this initiative came the National Breath of Life Archival Institute for Indigenous Languages. The workshop was held in 2011, 2013 and 2015, and is slated to return in 2017.

According to Gabriela Pérez Báez, a linguist and researcher for Recovering Voices who works on Zapotec languages in Mexico, the workshop has hosted community members from 60 different languages already.

“When I started linguistics in 2001, one of my professors said, ‘You just need to face it, these languages are gonna go and there’s little we can do,’” Báez says. “I do remember at that time feeling like, is this what I want to do as a linguist? Because it looked very gloomy all around.”

But the more she learned about Baldwin and Costa’s work, and the work undertaken by other tribes whose languages were losing speakers, the more encouraged she became. She recently conducted a survey of indigenous language communities, and the preliminary results showed that 20 percent of the people who responded belonged to communities whose languages were undergoing a reawakening process. In other words, their indigenous language had either been lost or was highly endangered, but efforts were underway to reverse that. Even the linguistic terms used to describe these languages have changed: what were once spoken of as “dead” or “extinct” languages are now called “dormant” or “sleeping.”

“All of a sudden there’s all these language communities working to reawaken their languages, working to do something that was thought to be impossible,” Báez says. And what’s more, the groups are being realistic with their goals. No one expects perfect fluency or completely native speakers anytime soon. They just want a group of novice speakers, or the ability to pray in their language, or sing songs. And then they hope that effort will continue to grow throughout the generations.

“It’s amazing that people are committing to a process that’s going to outlive them,” Báez says. “That’s why Daryl [Baldwin] is so focused on the youth. The work the Myaamia Center is doing with tribal youth is just incredible. It’s multiplying that interest and commitment.”

That’s not to say Breath of Life can help every language community across the U.S. Some languages just weren’t thoroughly documented, like Esselen in northern California. But whatever resources are available through the Smithsonian’s National Anthropological Archives and the Library of Congress and elsewhere are made available to all the groups that come for the workshop. And the efforts don’t end in the U.S. and Canada, Báez says. Researchers in New Zealand, Australia, Latin America and elsewhere are going back to archives to dig up records of indigenous languages in hopes of bolstering them against the wave of endangerment.

“I’m a very science-y person. I want to see evidence, I want to see tangible whatevers,” Báez says. “But to see [these communities] so determined just blows you away.”

For Baldwin and Costa, their own experience with the Myaamia Project has been humbling and gratifying. There are now living people who speak Myaamia together, and while Costa doesn’t know if what they’re speaking is the same language as was spoken 200 years ago, it’s a language nonetheless. Baldwin even received a MacArthur “genius grant” for his work on the language in 2016.

They don’t want to predict the future of the language or its people; we live in a world where 4 percent of languages are spoken by 96 percent of the population. But both are hopeful that the project they’ve started is like a spring garden slowly growing into something much larger.

“You don’t know what the seed is, but you plant it and you water it,” Baldwin says. “I hope it’s a real cool plant, that it’s got nice flowers.”

The (Still) Mysterious Death of Edgar Allan Poe

Smithsonian Magazine

It was raining in Baltimore on October 3, 1849, but that didn't stop Joseph W. Walker, a compositor for the Baltimore Sun, from heading out to Gunner's Hall, a public house bustling with activity. It was Election Day, and Gunner's Hall served as a pop-up polling location for the 4th Ward. When Walker arrived at Gunner's Hall, he found a man, delirious and dressed in shabby second-hand clothes, lying in the gutter. The man was semi-conscious and unable to move, but as Walker approached him, he discovered something unexpected: the man was Edgar Allan Poe. Worried about the health of the addled poet, Walker stopped and asked Poe if he had any acquaintances in Baltimore who might be able to help him. Poe gave Walker the name of Joseph E. Snodgrass, a magazine editor with some medical training. Immediately, Walker penned Snodgrass a letter asking for help.

Baltimore City, Oct. 3, 1849 
Dear Sir,

There is a gentleman, rather the worse for wear, at Ryan's 4th ward polls, who goes under the cognomen of Edgar A. Poe, and who appears in great distress, & he says he is acquainted with you, he is in need of immediate assistance.

Yours, in haste, 
JOS. W. WALKER 
To Dr. J.E. Snodgrass.

On September 27—almost a week earlier—Poe had left Richmond, Virginia bound for Philadelphia to edit a collection of poems for Mrs. St. Leon Loud, a minor figure in American poetry at the time. When Walker found Poe in delirious disarray outside of the polling place, it was the first anyone had heard or seen of the poet since his departure from Richmond. Poe never made it to Philadelphia to attend to his editing business. Nor did he ever make it back to New York, where he had been living, to escort his aunt back to Richmond for his impending wedding. Poe was never to leave Baltimore, where he had launched his career in the early 19th century, again—and in the four days between Walker finding Poe outside the public house and Poe's death on October 7, he never regained enough consciousness to explain how he had come to be found, in soiled clothes not his own, incoherent on the streets. Instead, Poe spent his final days wavering between fits of delirium, gripped by visual hallucinations. The night before his death, according to his attending physician Dr. John J. Moran, Poe repeatedly called out for "Reynolds"—a figure who, to this day, remains a mystery.

Poe's death—shrouded in mystery—seems ripped directly from the pages of one of his own works. He had spent years crafting a careful image of a man inspired by adventure and fascinated with enigmas—a poet, a detective, an author, a world traveler who fought in the Greek War of Independence and was held prisoner in Russia. But though his death certificate listed the cause of death as phrenitis, or swelling of the brain, the mysterious circumstances surrounding his death have led many to speculate about the true cause of Poe's demise. "Maybe it’s fitting that since he invented the detective story," says Chris Semtner, curator of the Poe Museum in Richmond, Virginia, "he left us with a real-life mystery."

1. Beating

In 1867, one of the first theories to deviate from either phrenitis or alcohol was published by biographer E. Oakes Smith in her article "Autobiographic Notes: Edgar Allan Poe." "At the instigation of a woman," Smith writes, "who considered herself injured by him, he was cruelly beaten, blow upon blow, by a ruffian who knew of no better mode of avenging supposed injuries. It is well known that a brain fever followed. . . ." Other accounts also mention "ruffians" who had beaten Poe senseless before his death. Eugene Didier wrote in his 1872 article, "The Grave of Poe," that while in Baltimore, Poe ran into some friends from West Point, who prevailed upon him to join them for drinks. Poe, unable to handle liquor, became madly drunk after a single glass of champagne, after which he left his friends to wander the streets. In his drunken state, he "was robbed and beaten by ruffians, and left insensible in the street all night."

2. Cooping

Others believe that Poe fell victim to a practice known as cooping, a method of voter fraud practiced by gangs in the 19th century, in which an unsuspecting victim would be kidnapped, disguised and forced to vote for a specific candidate multiple times under multiple disguised identities. Voter fraud was extremely common in Baltimore in the mid-1800s, and the polling site where Walker found the disheveled Poe was a known spot where coopers brought their victims. The fact that Poe was found delirious on Election Day, then, is, the theory goes, no coincidence.

Over the years, the cooping theory has come to be one of the more widely accepted explanations for Poe's strange demeanor before his death. Before Prohibition, voters were given alcohol after voting as a sort of reward; had Poe been forced to vote multiple times in a cooping scheme, that might explain his semi-conscious, ragged state. 

Around the late 1870s, Poe's biographer J.H. Ingram received several letters that blamed Poe's death on a cooping scheme. A letter from William Hand Browne, a member of the faculty at Johns Hopkins, explains that "the general belief here is, that Poe was seized by one of these gangs, (his death happening just at election-time; an election for sheriff took place on Oct. 4th), 'cooped,' stupefied with liquor, dragged out and voted, and then turned adrift to die."

3. Alcohol

"A lot of the ideas that have come up over the years have centered around the fact that Poe couldn’t handle alcohol," says Semtner. "It has been documented that after a glass of wine he was staggering drunk. His sister had the same problem; it seems to be something hereditary."

Months before his death, Poe became a vocal member of the temperance movement, eschewing alcohol, which he'd struggled with all his life. Biographer Susan Archer Talley Weiss recalls, in her biography "The Last Days of Edgar A. Poe," an event toward the end of Poe's time in Richmond that might be relevant to theorists who favor a "death by drinking" explanation. Poe had fallen ill in Richmond, and after making a somewhat miraculous recovery, was told by his attending physician that "another such attack would prove fatal." According to Weiss, Poe replied that "if people would not tempt him, he would not fall," suggesting that the first illness was brought on by a bout of drinking.

Those around Poe during his final days seem convinced that the author did, indeed, fall into that temptation, drinking himself to death. As his close friend J. P. Kennedy wrote on October 10, 1849: "On Tuesday last Edgar A. Poe died in town here at the hospital from the effects of a debauch. . . . He fell in with some companion here who seduced him to the bottle, which it was said he had renounced some time ago. The consequence was fever, delirium, and madness, and in a few days a termination of his sad career in the hospital. Poor Poe! . . . A bright but unsteady light has been awfully quenched."

Though the theory that Poe's drinking led to his death fails to explain his five-day disappearance, or his second-hand clothes on October 3, it was nonetheless a popular theory propagated by Snodgrass after Poe's death. Snodgrass, a member of the temperance movement, gave lectures across the country, blaming Poe's death on binge drinking. Modern science, however, has thrown a wrench into Snodgrass's talking points: samples of Poe's hair from after his death show low levels of lead, explains Semtner, which is an indication that Poe remained faithful to his vow of sobriety up until his demise.

4. Carbon Monoxide Poisoning

In 1999, public health researcher Albert Donnay argued that Poe's death was a result of carbon monoxide poisoning from coal gas that was used for indoor lighting during the 19th century. Donnay took clippings of Poe's hair and tested them for certain heavy metals that would be able to reveal the presence of coal gas. The test was inconclusive, leading biographers and historians to largely discredit Donnay's theory.

5. Heavy Metal Poisoning

While Donnay's test didn't reveal levels of heavy metal consistent with carbon monoxide poisoning, the tests did reveal elevated levels of mercury in Poe's system months before his death. According to Semtner, Poe's mercury levels were most likely elevated as a result of a cholera epidemic he'd been exposed to in July of 1849, while in Philadelphia. Poe's doctor prescribed calomel, or mercury chloride. Mercury poisoning, Semtner says, could help explain some of Poe's hallucinations and delirium before his death. However, the levels of mercury found in Poe's hair, even at their highest, are still 30 times below the level consistent with mercury poisoning.

6. Rabies

In 1996, Dr. R. Michael Benitez was participating in a clinical pathologic conference where doctors are given patients, along with a list of symptoms, and instructed to make a diagnosis and compare notes with other doctors as well as the written record. The symptoms of the anonymous patient E.P., "a writer from Richmond," were clear: E.P. had succumbed to rabies. According to E.P.'s supervising physician, Dr. J.J. Moran, E.P. had been admitted to a hospital due to "lethargy and confusion." Once admitted, E.P.'s condition began a rapid downward spiral: shortly, the patient was exhibiting delirium, visual hallucinations, wide variations in pulse rate and rapid, shallow breathing. Within four days—the median length of survival after the onset of serious rabies symptoms—E.P. was dead.

E.P., Benitez soon found out, wasn't just any author from Richmond. It was Poe whose death the Maryland cardiologist had diagnosed as a clear case of rabies, a fairly common virus in the 19th century. Benitez's diagnosis, which ran counter to the prevailing theories of the time, appeared in the September 1996 issue of the Maryland Medical Journal. As Benitez pointed out in his article, without DNA evidence it's impossible to say with 100 percent certainty that Poe succumbed to the rabies virus. There are a few kinks in the theory, including no evidence of hydrophobia (those afflicted with rabies develop a fear of water; Poe was reported to have been drinking water at the hospital until his death) and no evidence of an animal bite (though some people with rabies don't remember being bitten). Still, at the time of the article's publication, Jeff Jerome, curator of the Poe House Museum in Baltimore, agreed with Benitez's diagnosis. "This is the first time since Poe died that a medical person looked at Poe's death without any preconceived notions," Jerome told the Chicago Tribune in October of 1996. "If he knew it was Edgar Allan Poe, he'd think, 'Oh yeah, drugs, alcohol,' and that would influence his decision. Dr. Benitez had no agenda."

7. Brain Tumor

One of the most recent theories about Poe's death suggests that the author succumbed to a brain tumor, which influenced his behavior before his death. When Poe died, he was buried, rather unceremoniously, in an unmarked grave in a Baltimore graveyard. Twenty-six years later, a statue honoring Poe was erected near the graveyard's entrance. Poe's coffin was dug up, and his remains exhumed, in order to be moved to the new place of honor. But more than two decades of buried decay had not been kind to Poe's coffin—or the corpse within it—and the coffin fell apart as workers tried to move it from one part of the graveyard to another. Little remained of Poe's body, but one worker did remark on a strange feature of Poe's skull: a mass rolling around inside. Newspapers of the day claimed that the clump was Poe's brain, shriveled yet intact after almost three decades in the ground.

We know, today, that the mass could not be Poe's brain, which is one of the first parts of the body to rot after death. But Matthew Pearl, an American author who wrote a novel about Poe's death, was nonetheless intrigued by this clump. He contacted a forensic pathologist, who told him that while the clump couldn't be a brain, it could be a brain tumor, which can calcify after death into hard masses. 

According to Semtner, Pearl isn't the only person to believe Poe suffered from a brain tumor: a New York physician once told Poe that he had a lesion on his brain that caused his adverse reactions to alcohol. 

8. Flu

A far less sinister theory suggests that Poe merely succumbed to the flu—which might have turned into deadly pneumonia—on his deathbed. As Semtner explains, in the days leading up to Poe's departure from Richmond, the author visited a physician, complaining of illness. "His last night in town, he was very sick, and his [soon-to-be] wife noted that he had a weak pulse, a fever, and she didn’t think he should take the journey to Philadelphia," says Semtner. "He visited a doctor, and the doctor also told him not to travel, that he was too sick." According to newspaper reports from the time, it was raining in Baltimore when Poe was there—which Semtner thinks could explain why Poe was found in clothes not his own. "The cold and the rain exacerbated the flu he already had," says Semtner, "and maybe that eventually led to pneumonia. The high fever might account for his hallucinations and his confusion."

9. Murder


In his 2000 book Midnight Dreary: The Mysterious Death of Edgar Allan Poe, author John Evangelist Walsh presents yet another theory about Poe's death: that Poe was murdered by the brothers of his wealthy fiancée, Elmira Shelton. Using evidence from newspapers, letters and memoirs, Walsh argues that Poe actually made it to Philadelphia, where he was ambushed by Shelton's three brothers, who warned Poe against marrying their sister. Frightened by the experience, Poe disguised himself in new clothes (accounting for, in Walsh's mind, his second-hand clothing) and hid in Philadelphia for nearly a week, before heading back to Richmond to marry Shelton. Shelton's brothers intercepted Poe in Baltimore, Walsh postulates, beat him, and forced him to drink whiskey, which they knew would send Poe into a deathly sickness. Walsh's theory has gained little traction among Poe historians—or book reviewers; Edwin J. Barton, in a review for the journal American Literature, called Walsh's story "only plausible, not wholly persuasive." "Midnight Dreary is interesting and entertaining," he concluded, "but its value to literary scholars is limited and oblique."

---

For Semtner, however, none of the theories fully explain Poe's curious end. "I've never been completely convinced of any one theory, and I believe Poe's cause of death resulted from a combination of factors," he says. "His attending physician is our best source of evidence. If he recorded on the mortality schedule that Poe died of phrenitis, Poe was most likely suffering from encephalitis or meningitis, either of which might explain his symptoms." 

A mother's solace: A letter from a World War I enemy

National Museum of American History

In 1922, four years after her American son was killed in action in World War I, Sallie Maxwell Bennett received a letter from Emil Merkelbach, a German officer who had fought against her son in the battle that ended his life.

Tan envelope with eight green postal stamps and German text

"You will look upon my writing, no doubt, as something unusual, and rightly so, for it is indeed not exactly usual for a former enemy of his own accord to report about his opponent in the World War. I was myself a German officer in the World War."

Portrait photograph of man standing on steps in German uniform

Emil Merkelbach was the leader of a German balloon squadron stationed in occupied northern France in August 1918. Balloons were used by both the Allied and Central powers during the war as a way to observe enemy targets at a greater distance and from behind the front lines, allowing armies to more accurately aim their long-range artillery. Antiaircraft machine guns defended the balloons from the ground and patrolling airplanes protected them from the air. Armies' reliance on balloon observations, and the firepower employed to protect them, made balloons both an important and dangerous target for fighter pilots like Louis Bennett Jr., Mrs. Bennett's son.

Eight uniformed soldiers sitting on grassy field

Louis, a Yale student from a prominent West Virginian family, had organized the West Virginia Flying Corps in early 1917 with the idea of training pilots to join the U.S. Army as part of a proposed West Virginia aerial unit. However, when the War Department rejected this idea and required that Louis go through the standard Army training program, he decided instead to join the British Royal Flying Corps (later the Royal Air Force or RAF) in October 1917 with the hope of fighting on the front as soon as possible. He left his studies at Yale in the middle of his senior year and, after attending flight school in Canada and additional training in England, was eventually stationed in northern France in the summer of 1918.

During the ten days he served in combat before being killed in action, Louis shot down three enemy planes and nine balloons, four of which he downed in a single day. These feats not only earned him the distinction of being named a flying ace, and West Virginia's only World War I ace, but also placed him near the top of all World War I flying aces. Merkelbach saw Louis's impressive skill and total fearlessness first-hand on the battlefield, a display he remembered years later and one that eventually prompted him to write to Mrs. Bennett.

Four soldiers watching the sky, with gun pointed in the air

Men stand on the ground under a large inflated balloon

Merkelbach wrote:

"[I] had an opportunity to admire the keenness and bravery of your son; for this reason I should like to give you the following short description [of Louis's final battle]. . . . I had been up several hours observing, and was at a height of 1000 meters. Over the enemy's front circled continuously two hostile airplanes. . . . I immediately gave the command to my men below to haul in my balloon. . . . When still about 300 meters high, I saw [another] German balloon . . . plunge to earth burning. At the same moment I saw the hostile flyer (Louis) come toward my balloon at terrific speed, and immediately the defensive fire of my heavy machine rifles below and of the anti-aircraft guns began; but the hostile aviator did not concern himself about that. . . . [He] opened fire on me. . . . I saw the gleaming fire of the missiles flying toward me, but fortunately was not hit. The hostile machine was shot into flames by the fire of my machine guns. The enemy aviator tried to spring from the aeroplane before the latter plunged to the ground and burned completely."

Merkelbach ordered the ambulance corps to attend to the "brave and severely wounded enemy." Louis was unconscious and severely burned. Both of his legs were broken, and he had a bullet wound in his head. He died just hours later in a German field hospital on August 24, 1918. The Germans buried him with military honors in an unmarked grave.

"A bold and brave officer had met his death."

Man stands with bowed head and hat in hand in front of a cross

Back home in West Virginia, four days after her son's death, Mrs. Bennett received a telegram from the Secretary of the British Air Ministry informing her that Louis was missing in action. She immediately wrote to her contacts in Europe offering a reward for more information and promising that she would spare no expense to locate the body if he had been killed. Having lost her husband to unexpected illness only weeks earlier, Mrs. Bennett spent a desolate and difficult two months waiting for word of her son. While she continued to work her contacts in Europe to try to locate Louis, she received a number of conflicting accounts from members of his squadron, some stating that he had been taken as a prisoner of war, some stating that he had been killed.

In the midst of all of this, she was also struggling to help settle her late husband's estate and became gravely ill with the "Spanish flu" that was sweeping the globe in a deadly pandemic. Finally, at the end of October 1918, she received official confirmation from the American Red Cross that Louis had been killed in action two months earlier. Although conflicting reports stated that he was buried in either France or Belgium, at last she knew that he was gone.

Mrs. Bennett spent the next several months attempting to travel to Europe to locate Louis's grave, first using her influential contacts to obtain a passport and, once in England, to gain permission to travel to France. Finally, in March 1919, with the help of the U.S. Army, the American Red Cross, and the local villagers, she found herself standing at his unmarked grave, number 590, in a military cemetery in Wavrin, near Lille in northern France. She had finally found her son.

Left: a memorial wreath with an American, British and French flag. Right: a stone memorial

Despite his distinguished combat service, Louis Bennett Jr. never received any service awards from either the British or American governments. In an attempt to right this wrong, Mrs. Bennett spent the rest of her life honoring her son's memory, eventually erecting memorials in three different countries. The first memorial was completed in 1919, when she rebuilt the church in Wavrin, France, where Louis was buried. The church and town had been utterly destroyed by the retreating German army, and Mrs. Bennett dedicated the rebuilt church to her son's memory on the one-year anniversary of his death. The rebuilt church was also her way of thanking the local curate and villagers who had not only helped her locate Louis's grave, but had helped her smuggle his remains out of the military cemetery, in direct violation of French law, so he could eventually be laid to rest in West Virginia.

The memorials continued in 1922 with a stained glass window in Westminster Abbey overlooking the Tomb of the Unknown Warrior. Dedicated to Louis and to all members of the Royal Flying Corps who died in World War I, the window features Archangel Michael, the patron saint of airmen, looking down at Louis, who is depicted as an angel holding a shield. In 1923 she donated a 16th-century Flemish tapestry in honor of Louis to St. Thomas's Church in New York City, where he had been confirmed as a boy.

Postcard of a red Victorian-style house surrounded by trees

In 1922 Mrs. Bennett donated the family's mansion and extensive collection of books to Lewis County as a war memorial and public library in honor of her deceased husband and son. The Louis Bennett Public Library opened in 1923 and is still in operation today. Mrs. Bennett also had the local airport renamed in Louis's honor and established a memorial organization that met every year in Weston on the anniversary of Louis's death to honor his memory. On Armistice Day in 1925, she unveiled The Aviator, a seven-and-a-half-foot-tall bronze sculpture on a granite base. Sculpted by Augustus Lukeman, it depicts Louis in uniform with wings on his back and is dedicated to all Americans who lost their lives in World War I. The pedestal bears the inscription, "And thus this man died, leaving his spirit as an example of able courage, not only unto young men, but unto all the nation."

Man dressed as WWI pilot with wings

Although never given an official award for his service, Louis Bennett Jr.'s courage and skill clearly inspired those around him to honor his memory in their own way: from the enemy German army that buried him with full military honors, to his mother who memorialized him across multiple countries, and finally to Emil Merkelbach, an enemy officer, who was inspired to write a glowing, respectful letter in memorial four years after they had fought on the battlefield.

"I hope that the foregoing lines, a memorial to your son, will be received by you living—he was my bravest enemy. Honor to his memory. With respect, Emil Merkelbach"

Patri O'Gan is a project assistant in the Division of Armed Forces History. She recommends reading more about the Bennetts. She has also blogged about the creative use of cars, planes, and trains in the struggle for woman suffrage.


Tracking Down the Origins of Cystic Fibrosis in Ancient Europe

Smithsonian Magazine

Imagine the thrill of discovery when more than 10 years of research on the origin of a common genetic disease, cystic fibrosis (CF), results in tracing it to a group of distinct but mysterious Europeans who lived about 5,000 years ago.

CF is the most common, potentially lethal, inherited disease among Caucasians—about one in 40 carry the so-called F508del mutation. Typically only beneficial mutations, which provide a survival advantage, spread widely through a population.

CF hinders the release of digestive enzymes from the pancreas, which triggers malnutrition, causes lung disease that is eventually fatal and produces high levels of salt in sweat that can be life-threatening.

Depending on the mutation a patient carries, they may experience some or all symptoms of cystic fibrosis. (Blausen.com staff (2014), CC BY-SA)

In recent years, scientists have revealed many aspects of this deadly lung disease, leading to routine early diagnosis in screened babies, better treatments and longer lives. On the other hand, the scientific community hasn’t been able to figure out when, where and why the mutation became so common. Collaborating with an extraordinary team of European scientists (David Barton in Ireland, Milan Macek in the Czech Republic and, in particular, a group of brilliant geneticists in Brest, France, led by Emmanuelle Génin and Claude Férec), we believe that we now know where and when the original mutation arose, and in which ancient tribe of people.

We share these findings in an article in the European Journal of Human Genetics which represents the culmination of 20 years’ work involving nine countries.

What is cystic fibrosis?

My quest to determine how CF arose and why it’s so common began soon after scientists discovered the CFTR gene causing the disease in 1989. The most common disease-causing mutation of that gene was called F508del. Two copies of the mutation—one inherited from the mother and the other from the father—caused the lethal disease. But inheriting just a single copy caused no symptoms, and made the person a “carrier.”

I had been employed at the University of Wisconsin since 1977 as a physician-scientist focusing on the early diagnosis of CF through newborn screening. Before the gene discovery, we identified babies at high risk for CF using a blood test that measured levels of a protein called immunoreactive trypsinogen (IRT). High levels of IRT suggested the baby had CF. When I learned of the gene discovery, I was convinced that it would be a game-changer for both screening test development and epidemiological research.

That’s because with the gene we could offer parents a more informative test. We could tell them not just whether their child had CF, but also whether they carried two copies of a CFTR mutation, which caused disease, or just one copy which made them a carrier.

Parents carrying one good copy of the CF gene (R) and one bad copy of the mutated CF gene (r) are called carriers. When both parents transmit a bad copy of the CF gene to their offspring, the child will suffer from cystic fibrosis. Children who inherit just one bad copy will be carriers like their parents and can transmit the gene to their children. (Cburnett, CC BY-SA)
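The carrier arithmetic described in the caption above is simple Mendelian probability, and easy to sketch in code. Below is a minimal, purely illustrative Python snippet (not from the study) that enumerates the four equally likely allele combinations of a carrier-by-carrier pairing:

```python
from itertools import product

# Each carrier parent has one working copy (R) and one mutated copy (r).
parent1 = ["R", "r"]
parent2 = ["R", "r"]

# Enumerate the four equally likely allele combinations (a Punnett square).
offspring = ["".join(sorted(pair)) for pair in product(parent1, parent2)]

for genotype in sorted(set(offspring)):
    share = offspring.count(genotype) / len(offspring)
    label = {"RR": "unaffected non-carrier",
             "Rr": "unaffected carrier",
             "rr": "affected (cystic fibrosis)"}[genotype]
    print(f"{genotype}: {share:.0%} {label}")
```

Running it prints the familiar 25/50/25 split: on average, one in four children of two carriers inherits two mutated copies and develops the disease, while half are carriers like their parents.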

One might ask what the connection is between studying CF newborn screening and learning about the disease’s origin. The answer lies in how our research team in Wisconsin transformed a biochemical screening test using the IRT marker to a two-tiered method called IRT/DNA.

Because about 90 percent of CF patients in the U.S. and Europe have at least one F508del mutation, we began analyzing newborn blood for its presence whenever the IRT level was high. When this two-step IRT/DNA screening is done, not only are patients with the disease diagnosed, but tenfold more infants who are genetic carriers of the disease are identified as well.
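That two-tier logic can be sketched in a few lines. The following Python snippet is a minimal illustration, not a clinical algorithm; the IRT cutoff is hypothetical, and real screening programs set their own thresholds and test for panels of mutations beyond F508del:

```python
# A minimal sketch of the two-tiered IRT/DNA screening logic described above.
# The cutoff value is hypothetical; real programs use population-specific
# thresholds and often screen for many CFTR mutations, not just F508del.
def screen_newborn(irt_ng_per_ml: float, f508del_copies: int,
                   irt_cutoff: float = 60.0) -> str:
    if irt_ng_per_ml < irt_cutoff:
        return "screen negative"                  # tier 1: IRT not elevated
    # Tier 2: DNA analysis is performed only on the high-IRT samples.
    if f508del_copies == 2:
        return "presumptive CF: refer for diagnostic sweat testing"
    if f508del_copies == 1:
        return "likely carrier: refer for extended mutation panel / sweat test"
    return "elevated IRT, no F508del detected: follow-up per local protocol"

print(screen_newborn(irt_ng_per_ml=85.0, f508del_copies=1))
```

The design point is that the expensive DNA tier runs only on the small fraction of samples flagged by the cheap biochemical tier, which is also why the method surfaces so many carriers as a byproduct.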

As preconception, prenatal and neonatal screening for CF has proliferated during the past two decades, the many thousands of individuals who discovered they were F508del carriers, and their concerned parents, often raised questions about the origin and significance of carrying this mutation themselves or in their children. Would they suffer with one copy? Was there a health benefit? It has been frustrating for a pediatrician specializing in CF to have no answer for them.

The challenge of finding origin of the CF mutation

I wanted to zero in on when this genetic mutation first started appearing. Pinpointing this period would allow us to understand how it could have evolved to provide a benefit—at least initially—to those people in Europe who had it. To expand my research, I decided to take a sabbatical and train in epidemiology while taking courses in 1993 at the London School of Hygiene and Tropical Medicine.

The timing was perfect because the field of ancient DNA research was starting to blossom. New breakthrough techniques like the Polymerase Chain Reaction made it possible to study the DNA of mummies and other human archaeological specimens from prehistoric burials. For example, early studies were performed on the DNA from the 5,000-year-old Tyrolean Iceman, which later became known as Ötzi.

A typical prehistoric burial in a crouched fetal position. (Philip Farrell, CC BY-SA)

I decided that we might be able to discover the origin of CF by analyzing the DNA in the teeth of Iron Age people buried between 700 and 100 B.C. in cemeteries throughout Europe.

Using this strategy, I teamed up with archaeologists and anthropologists such as Maria Teschler-Nicola at the Natural History Museum in Vienna, who provided access to 32 skeletons buried around 350 B.C. near Vienna. Geneticists in France extracted DNA from the ancient molars and analyzed it. To our surprise, we discovered the presence of the F508del mutation in DNA from three of the 32 skeletons.

This discovery of F508del in Central European Iron Age burials radiocarbon-dated to 350 B.C. suggested to us that the original CF mutation may have arisen earlier. But obtaining Bronze Age and Neolithic specimens for such direct studies proved difficult because fewer burials are available, skeletons are not as well-preserved and each cemetery merely represents a tribe or village. So rather than depend on ancient DNA, we shifted our strategy to examine the genes of modern humans to figure out when this mutation first arose.

Why would a harmful mutation spread?

To find the origin of CF in modern patients, we knew we needed to learn more about the signature mutation—F508del—in people who are carriers or have the disease.

This tiny mutation causes the loss of one amino acid from the 1,480-amino-acid chain and changes the shape of a protein on the surface of the cell that moves chloride in and out of the cell. When this protein is mutated, people carrying two copies of the mutation—one from the mother and one from the father—are plagued with thick sticky mucus in their lungs, pancreas and other organs. The mucus in their lungs allows bacteria to thrive, destroying the tissue and eventually causing the lungs to fail. In the pancreas, the thick secretions prevent the gland from delivering the enzymes the body needs to digest food.

So why would such a harmful mutation continue to be transmitted from generation to generation?

The Natural History Museum in Vienna, Austria, houses a large collection of Iron Age and Bronze Age skeletons which are curated by Dr. Maria Teschler-Nicola. These collections were the source of teeth and bones for investigation of ancient DNA and studies on ‘The Ancient Origin of Cystic Fibrosis.’ (Philip Farrell, CC BY-ND)

A mutation as harmful as F508del would never have survived among people with two copies of the mutated CFTR gene because they likely died soon after birth. On the other hand, those with one mutation may have a survival advantage, as predicted in Darwin’s “survival of the fittest” theory.

Perhaps the best example of a mutation favoring survival under stressful environmental conditions can be found in Africa, where fatal malaria has been endemic for centuries. The parasite that causes malaria infects the red blood cells in which the major constituent is the oxygen-carrying protein hemoglobin. Individuals who carry the normal hemoglobin gene are vulnerable to this mosquito-borne disease. But those who are carriers of the mutated “hemoglobin S” gene, with only one copy, are protected from severe malaria. However two copies of the hemoglobin S gene causes sickle cell disease, which can be fatal.

Here there is a clear advantage to carrying one mutant gene—in fact, about one in 10 Africans carries a single copy. Thus, for many centuries an environmental factor has favored the survival of individuals carrying a single copy of the sickle hemoglobin mutation.

Individuals who carry two copies of the sickle cell gene suffer from sickle cell anemia, in which the blood cells become rigid sickle shapes and get stuck in the blood vessels, causing pain. Normal red blood cells are flexible discs that slide easily through vessels. (Designua/Shutterstock.com)

Similarly we wondered whether there was a health benefit to carrying a single copy of this specific CF mutation during exposures to environmentally stressful conditions. Perhaps, we reasoned, that’s why the F508del mutation was common among Caucasian Europeans and Europe-derived populations.

Clues from modern DNA

To figure out the advantage of transmitting a single mutated F508del gene from generation to generation, we first had to determine when and where the mutation arose so that we could uncover the benefit this mutation conferred.

We obtained DNA samples from 190 CF patients bearing F508del, and from their parents, residing in geographically distinct European populations from Ireland to Greece, plus a Germany-derived population in the U.S. We then identified a collection of genetic markers—essentially sequences of DNA—within the CF gene and at flanking locations on the chromosome. By identifying when these mutations emerged in the populations we studied, we were able to estimate the age of the most recent common ancestor.

Next, by rigorous computer analyses, we estimated the age of the CF mutation in each population residing in the various countries.
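To illustrate the general principle behind such age estimates (the study's actual analysis was far more elaborate, combining many markers across nine countries), recombination breaks up the haplotype surrounding a mutation at a predictable per-generation rate, so the fraction of mutation-bearing chromosomes that still carry the ancestral allele at a linked marker dates the mutation. A toy moment estimator in Python, with hypothetical inputs chosen only so the output lands near the published range:

```python
import math

# Toy moment estimator for allele age from haplotype decay. If a linked
# marker recombines away from the mutation with probability r per generation,
# the fraction P of F508del chromosomes still carrying the ancestral allele
# after g generations is roughly (1 - r)**g, so g = ln(P) / ln(1 - r).
# The inputs below are hypothetical, for illustration only; they are not
# data from the study.
def allele_age_in_generations(p_ancestral: float, recomb_fraction: float) -> float:
    return math.log(p_ancestral) / math.log(1.0 - recomb_fraction)

g = allele_age_in_generations(p_ancestral=0.83, recomb_fraction=0.001)
print(f"~{g:.0f} generations, or about {g * 25:.0f} years at 25 years per generation")
```

With these illustrative numbers the estimator returns roughly 186 generations, or about 4,650 years, which is the order of magnitude the study reports.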

Two copies of the sickle cell gene cause the disease. But carrying one copy reduces the risk of malaria. The gene is widespread among people who live in regions of the world (red) where malaria is endemic. (ellepigrafica)

We then determined that the mutation's most recent common ancestor lived between 4,600 and 4,725 years ago, and that the mutation arose in southwestern Europe, probably in settlements along the Atlantic Ocean and perhaps in the region of France or Portugal. We believe that the mutation spread quickly from there to Britain and Ireland, and then later to central and southeastern European populations such as Greece, where F508del was introduced only about 1,000 years ago.

Who spread the CF mutation throughout Europe?

Thus, our newly published data suggest that the F508del mutation arose in the early Bronze Age and spread from west to southeast Europe during ancient migrations.

Moreover, taking the archaeological record into account, our results allow us to introduce a novel concept by suggesting that a population known as the Bell Beaker folk were the probable migrating population responsible for the early dissemination of F508del in prehistoric Europe. They appeared at the transition from the Late Neolithic period, around 4000 B.C., to the Early Bronze Age during the third millennium B.C. somewhere in Western Europe. They were distinguished by their ceramic beakers, pioneering copper and bronze metallurgy north of the Alps and great mobility. All studies, in fact, show that they were highly migratory, traveling all over Western Europe.

Distribution of Bell Beaker sites throughout Europe. (DieKraft via Wikimedia Commons)

Over approximately 1,000 years, a network of small families and/or elite tribes spread their culture from west to east into regions that correspond closely to the present-day European Union, where the highest incidence of CF is found. Their migrations are linked to the advent of Western and Central European metallurgy, as they manufactured and traded metal goods, especially weapons, while traveling over long distances. It is also speculated that their travels were motivated by establishing marriage networks. Most relevant to our study is evidence that they migrated in a direction and over a time period that fit well with our results. Recent genomic data suggest that both migration and cultural transmission played a major role in diffusion of the “Beaker Complex” and led to a “profound demographic transformation” of Britain and elsewhere after 2400 B.C.

Determining when F508del was first introduced in Europe and discovering where it arose should provide new insights about the high prevalence of carriers—and whether the mutation confers an evolutionary advantage. For instance, Bronze Age Europeans, while migrating extensively, were apparently spared exposure to endemic infectious diseases or epidemics; thus it seems unlikely that this mutation, like the sickle cell mutation, conferred protection against an infectious disease.

As more information on Bronze Age people and their practices during migrations becomes available through archaeological and genomic research, more clues about the environmental factors that favored people carrying this gene variant should emerge. Then we may be able to answer questions from patients and parents about why they have a CFTR mutation in their family and what advantage it conferred.

Examples of tools and ceramics created by the Bell Beaker people. (Benutzer:Thomas Ihle via German Wikipedia, CC BY-SA)

The Untold Story of the Vengeful Japanese Attack After the Doolittle Raid

Smithsonian Magazine

At midday on April 18, 1942, 16 U.S. Army bombers, under the command of daredevil pilot Lt. Col. Jimmy Doolittle, thundered into the skies over Tokyo and other key Japanese industrial cities in a surprise raid designed to avenge the attack on Pearl Harbor. For the 80 volunteer raiders, who lifted off that morning from the carrier Hornet, the mission was one-way. After attacking Japan, most of the aircrews flew on to Free China, where, low on fuel, the men either bailed out or crash-landed along the coast and were rescued by local villagers, guerrillas and missionaries.

That generosity shown by the Chinese would trigger a horrific retaliation by the Japanese that claimed an estimated quarter-million lives and would prompt comparisons to the 1937-38 Rape of Nanking. American military authorities, cognizant that a raid on Tokyo would result in a vicious counterattack upon Free China, saw the mission through regardless, even keeping the operation a secret from their Pacific theater allies. This chapter of the Doolittle Raid has largely gone unreported—until now.

Long-forgotten missionary records discovered in the archives of DePaul University for the first time shed important new light on the extent to which the Chinese suffered in the aftermath of the Doolittle raid.

In the moments after the attack on Tokyo, Japanese leaders fumed over the raid, which had revealed China’s coastal provinces as a dangerous blind spot in the defense of the homeland. American aircraft carriers not only could launch surprise attacks from the seas and land safely in China but could possibly even fly bombers directly from Chinese airfields to attack Japan. The Japanese military ordered an immediate campaign against strategically important airfields, issuing an operational plan in late April, just days after the Doolittle raid.

Survivor accounts point to an ulterior objective: to punish the Chinese allies of the United States forces, especially those towns where the American aviators had bailed out after the raid. At the time, Japanese forces occupied Manchuria as well as key coastal ports, railways and industrial and commercial centers in China.

The United States had neither boots on the ground nor faith that the Chinese military could repel any further advances by occupying Japanese forces. Details of the destruction that would soon follow—just as officials in Washington and Chungking, the provisional capital of China, and even Doolittle, had long predicted—would come from the records of American missionaries, some of whom had helped the raiders. The missionaries knew of the potential wrath of the Japanese, having lived under a tenuous peace in this border region just south of occupied China. Stories of the atrocities at Nanking, where the river had turned red from blood, had circulated widely. When the Japanese came into a town, “the first thing that you see is a group of cavalrymen,” Herbert Vandenberg, an American priest, would recall. “The horses have on shiny black boots. The men wear boots and a helmet. They are carrying sub-machine guns.”

Wreckage of Major General Doolittle's plane somewhere in China after the raid on Tokyo. Doolittle is seated on wreckage to the right. (Corbis)

Vandenberg had heard the news broadcasts of the Tokyo raid in the mission compound in the town of Linchwan, home to about 50,000 people, as well as to the largest Catholic church in southern China, with a capacity to serve as many as a thousand. Days after the raid, letters reached Vandenberg from nearby missions in Poyang and Ihwang, informing him that local priests had cared for some of the fliers. “They came to us on foot,” Vandenberg wrote. “They were tired and hungry. Their clothing was tattered and torn from climbing down the mountains after bailing out. We gave them fried chicken. We dressed their wounds and washed their clothes. The nuns baked cakes for the fliers. We gave them our beds.”

By early June, the devastation had begun. Father Wendelin Dunker observed the result of a Japanese attack on the town of Ihwang:

“They shot any man, woman, child, cow, hog, or just about anything that moved. They raped any woman from the ages of 10 to 65, and before burning the town they thoroughly looted it.”

He continued, writing in his unpublished memoir, “None of the humans shot were buried either, but were left to lay on the ground to rot, along with the hogs and cows.”

The Japanese marched into the walled city of Nancheng at dawn on June 11, beginning a reign of terror so horrendous that missionaries would later dub it “the Rape of Nancheng.” Soldiers rounded up 800 women and herded them into a storehouse outside the east gate. “For one month the Japanese remained in Nancheng, roaming the rubble-filled streets in loincloths much of the time, drunk a good part of the time and always on the lookout for women,” wrote the Reverend Frederick McGuire. “The women and children who did not escape from Nancheng will long remember the Japanese—the women and girls because they were raped time after time by Japan’s imperial troops and are now ravaged by venereal disease, the children because they mourn their fathers who were slain in cold blood for the sake of the ‘new order’ in East Asia.”

At the end of the occupation, Japanese forces systematically destroyed the city of 50,000 residents. Teams stripped Nancheng of all radios, while others looted the hospitals of drugs and surgical instruments. Engineers not only wrecked the electrical plant but pulled up the railroad lines, shipping the iron out. A special incendiary squad started its operation on July 7 in the city’s southern section. “This planned burning was carried on for three days,” one Chinese newspaper reported, “and the city of Nancheng became charred earth.”

Over the summer, the Japanese laid waste to some 20,000 square miles. They looted towns and villages, then stole honey and scattered beehives. Soldiers devoured, drove away, or simply slaughtered thousands of oxen, pigs, and other farm animals; some wrecked vital irrigation systems and set crops on fire. They destroyed bridges, roads, and airfields. “Like a swarm of locusts, they left behind nothing but destruction and chaos,” Dunker wrote.

Four of the American fliers who raided Tokyo grin out from beneath Chinese umbrellas that they borrowed. (Bettmann/Corbis)

Those discovered to have helped the Doolittle raiders were tortured. In Nancheng, soldiers forced a group of men who had fed the airmen to eat feces before lining up ten of them for a “bullet contest” to see how many people a single bullet would pass through before it stopped. In Ihwang, Ma Eng-lin, who had welcomed injured pilot Harold Watson into his home, was wrapped in a blanket, tied to a chair and soaked in kerosene. Then soldiers forced his wife to torch him.

“Little did the Doolittle men realize,” the Reverend Charles Meeus later wrote, “that those same little gifts which they gave their rescuers in grateful acknowledgement of their hospitality—parachutes, gloves, nickels, dimes, cigarette packages—would, a few weeks later, become the telltale evidence of their presence and lead to the torture and death of their friends!”

A missionary with the United Church of Canada, the Reverend Bill Mitchell traveled in the region, organizing aid on behalf of the Church Committee on China Relief. Mitchell gathered statistics from local governments to provide a snapshot of the destruction. The Japanese flew 1,131 raids against Chuchow—Doolittle’s intended destination—killing 10,246 people and leaving another 27,456 destitute. They destroyed 62,146 homes, stole 7,620 head of cattle, and burned 30 percent of the crops.

“Out of twenty-eight market towns in that region,” the committee’s report noted, “only three escaped devastation.” The city of Yushan, with a population of 70,000—many of whom had participated in a parade led by the mayor in honor of raiders Davy Jones and Hoss Wilder—saw 2,000 killed and 80 percent of the homes destroyed. “Yushan was once a large town filled with better-than-average houses. Now you can walk thru street after street seeing nothing but ruins,” Father Bill Stein wrote in a letter. “In some places you can go several miles without seeing a house that was not burnt.”

That August, Japan’s secret bacteriological warfare group, Unit 731, launched an operation to coincide with the withdrawal of Japanese troops from the region.

In what was known as land bacterial sabotage, troops would contaminate wells, rivers, and fields, hoping to sicken local villagers as well as the Chinese forces, which would no doubt move back in and reoccupy the border region as soon as the Japanese departed. Over the course of several meetings, Unit 731’s commanding officers debated the best bacteria to use, settling on plague, anthrax, cholera, typhoid, and paratyphoid, all of which would be spread via spray, fleas, and direct contamination of water sources. For the operation, almost 300 pounds of paratyphoid and anthrax germs were ordered.

Technicians filled peptone bottles with typhoid and paratyphoid bacteria, packaged them in boxes labeled “Water Supply,” and flew them to Nanking. Once in Nanking, workers transferred the bacteria to metal flasks—like those used for drinking water— and flew them into the target areas. Troops then tossed the flasks into wells, marshes, and homes. The Japanese also prepared 3,000 rolls, contaminated with typhoid and paratyphoid, and handed them to hungry Chinese prisoners of war, who were then released to go home and spread disease. Soldiers left another 400 biscuits infected with typhoid near fences, under trees, and around bivouac areas to make it appear as though retreating forces had left them behind, knowing hungry locals would devour them.

Major General Doolittle's fliers in China after the Doolittle Raid on Tokyo of April 18, 1942. (Corbis)

The region’s devastation made it difficult to tally who got sick and why, particularly since the Japanese had looted and burned hospitals and clinics. The thousands of rotting human and livestock carcasses that clogged wells and littered the rubble also contaminated the drinking water. Furthermore, the impoverished region, where villagers often defecated in holes outdoors, had been prone to such outbreaks before the invasion. Anecdotal evidence gathered from missionaries and journalists shows that many Chinese fell sick from malaria, dysentery, and cholera even before the Japanese reportedly began the operation.

Chinese journalist Yang Kang, who traveled the region for the Takung Pao newspaper, visited the village of Peipo in late July. “Those who returned to the village after the enemy had evacuated fell sick with no one spared,” she wrote. “This was the situation which took place not only in Peipo but everywhere.”

In December 1942, Tokyo radio reported massive outbreaks of cholera, and the following spring, the Chinese reported that a plague epidemic forced the government to quarantine the Chekiang town of Luangshuan. “The losses suffered by our people,” one later wrote, “were inestimable.” Unit 731’s victims even included Japanese soldiers. A lance corporal captured in 1944 told American interrogators that upward of 10,000 troops were infected during the Chekiang campaign.

“Diseases were particularly cholera, but also dysentery and pest,” an American intelligence report stated. “Victims were usually rushed to hospitals in rear, particularly the Hangchow Army Hospital, but cholera victims, usually being treated too late, mostly died.” The prisoner saw a report that listed 1,700 dead, most of cholera. Actual deaths likely were much higher, he said, “it being common practice to pare down unpleasant figures.”

The three-month campaign across Chekiang and Kiangsi Provinces infuriated many in the Chinese military, who understood it as a consequence of a U.S. raid designed to lift the spirits of Americans. Officials in Chungking and Washington had purposely withheld details of the U.S. raid from Chinese ruler Chiang Kai-shek, assuming the Japanese would retaliate.

“After they had been caught unawares by the falling of American bombs on Tokyo, Japanese troops attacked the coastal areas of China, where many of the American fliers had landed,” Chiang cabled to Washington. “These Japanese troops slaughtered every man, woman and child in those areas. Let me repeat—these Japanese troops slaughtered every man, woman and child in those areas.”

News trickled out in American media in the spring of 1943 as missionaries who witnessed the atrocities returned home. The New York Times editorialized, “The Japanese have chosen how they want to represent themselves to the world. We shall take them at their own valuation, on their own showing. We shall not forget, and we shall see that a penalty is paid.”

The Los Angeles Times was far more forceful:

To say that these slayings were motivated by cowardice as well as savagery is to say the obvious. The Nippon war lords have thus proved themselves to be made of the basest metal …

Those notices, however, did not get much traction, and the slaughter was soon forgotten. It was a tragedy best described by a Chinese journalist at the time. “The invaders made of a rich, flourishing country a human hell,” the reporter wrote, “a gruesome graveyard, where the only living thing we saw for miles was a skeleton-like dog, who fled in terror before our approach.” 

Excerpted from Target Tokyo: Jimmy Doolittle and the Raid that Avenged Pearl Harbor by James M. Scott. Copyright © 2015 by James M. Scott. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.

This Galentine's Day blog post is for you. You poetic, noble land-mermaid.

National Museum of American History

On February 13, women everywhere (we hope!) will be gathering together to celebrate Galentine's Day. First introduced in 2010 by the character Leslie Knope on the TV show Parks and Recreation, Galentine's Day is about "ladies celebrating ladies," be they friends, co-workers, family members, or personal heroes. What began as a fictional holiday for women to honor other women has crossed over into real life as more women learn about and celebrate this happy day. In honor of Galentine's Day we have chosen some of our favorite gal pals in our collections. Below are some of the women and girls who could have had their own Galentine's Day celebration.

The Monterey Gals

Black and white posed photo of women, each with their hair piled on top of their head and high-collared blouses.

Postcard with typed words and handwritten words. Green postage stamp.

Before telephones were common forms of communication, real photo postcards were the rapid messaging tool of the day. The postmark indicates that Elsie sent this card from Monterey, Virginia, at 9:00 a.m. on May 23, 1907, to a Miss Jay (maybe a nickname?) Yager in Bartow, West Virginia, some 30 miles away. With mail services often delivering twice a day, you could send a quick note in the morning to invite a friend for a late-night horseback ride, as Elsie did in May 1907: "am going horse back riding to night [sic], come and go along 'Moon-light' you know." Enticing!

Whether Elsie or Jay is depicted among the group of young women photographed in an unidentified photographer's studio, we don't know. It sounds as though Elsie was a store clerk at Dunlevie Drug Store in Dunlevie (now Thornwood), West Virginia: "Do you ever go to Dunlevie, anymore(?)/ Would love you to. Come in and see me." Maybe then Jay could have gotten the scoop from her gal pal: "Rec'd card it was a rich one." When this postcard was written, senders were not allowed to write on the back of the postcard; that side was reserved for the address only. Our rebellious sender, Elsie, continued her message there anyway. With their spunk, contemporary hair and clothes, and tight friendship, we would love to be friends with these gals!

The Seven Sutherland Sisters

Black and white photo of seven women with exceedingly long hair.

Women of late-19th-century America flaunted hair as their "crowning glory," the ultimate marker of feminine beauty, luxurious vitality, and even moral health. For the seven Sutherland sisters of Cambria, New York, luscious locks were never in short supply. Together, their womanly manes measured 37 feet, a fact that helped them become national celebrities and entrepreneurs.

Glass bottle labeled "hair grower" beside packaging featuring black and white image of seven women with long hair.

Sarah, Victoria, Isabella, Grace, Naomi, Dora, and Mary toured the country first as a musical act and later with the Barnum and Bailey Circus. Eventually, they took their show to the drug store, offering demonstrations and consultations to admirers and selling their father Reverend Fletcher Sutherland's Seven Sutherland Sisters hair and beauty products. Their success allowed them to retire from the road and build a mansion where they lived together back in New York. By the 1920s, however, the fashion for bobbed hair cut their sales short.

The Navy Nurses of Base Hospital No. 5

Black and white group photo of women wearing uniform coats and hats.

In May 1917, just a month after the United States officially entered World War I, four women set sail from New York Harbor amidst a flurry of noisily cheering crowds. Beulah Armor, Faye Fulton, Halberta Grosh, and Bertha Hamer were nurses with the Navy Nurse Corps and, along with hundreds of soldiers and sailors, they were heading to France well before the troops of the American Expeditionary Force could be fully mobilized to follow them. Young nurses in Philadelphia at the time of the war, the women joined a group of fellow medical professionals from the city to establish Navy Base Hospital No. 5 in Brest, France.

The hospital began operations in December 1917 and was quickly inundated with patients including new soldiers arriving from the United States, wounded soldiers returning from the war front, civilians injured in German submarine attacks, and victims of the 1918 flu epidemic. There's nothing like a war zone to forge lasting relationships, and no doubt the nurses of Base Hospital No. 5 relied on each other to get them through the long, grueling war.

Coat with "USR" on collar and two lines of buttons, vertical. Dark navy blue.

With the end of the war in November 1918, and the closing of the Base Hospital in March 1919, the four women returned to Philadelphia. Fulton, Grosh, and Hamer continued working as professional nurses, while Armor married a fellow member of Base Hospital No. 5: a cook named Elwood Basler, who was also briefly a patient. Clearly the relationships formed among the nurses in Brest remained strong over the years, as the women got together in 1970 to donate objects from their time in the war. These objects serve as a lasting reminder of brave women facing difficult situations with the support of their friends and colleagues.

Takayo Tsubouchi Fischer and Jayce Tsenami

Black and white photo of two smiling children standing outside, with a very plain building behind them.

Screen, stage, and voice actress Takayo Fischer was one of 120,000 citizens or residents of Japanese ancestry who were forcibly removed from their homes in Western states and incarcerated in camps during World War II. The youngest of five girls, she grew up with a rich tradition of valuing female friends. The communal nature of the camps—where inmates ate common meals in central mess halls and used bathrooms and showers without individual stalls—broke down the traditional Japanese family structure. As families lost the opportunity to share private family meals or carve out family time, friends like Jayce Tsenami took on even greater importance. Both Takayo and Jayce were inmates at the Jerome camp in Arkansas, where the wooded swampland of the Mississippi delta brought with it mosquitoes, poisonous snakes, malaria, and dysentery. The friendship they forged in camp helped to sustain them through the long and difficult incarceration.

Mary Hill and the Ladies of Maltaville

Quilt with white background and simple shapes in a variety of colors. These include flowers, leaves, and shapes.

In 1847, the women of the Presbyterian Church of Maltaville, New York, honored their friend Mary Hill by making her an album quilt. Album quilts, also called friendship quilts, are made up of appliquéd and embroidered blocks which are joined together to form a quilt. The blocks often contain inked inscriptions with special meaning to the maker, including names, dates, places, or poems.

For Mary Hill's quilt, the women of the church made, joined, lined, and quilted 61 blocks. Each block is signed in ink and features motifs such as birds, flowers, hearts, and stars. At the center of the quilt is a large block with a wreath of flowering vines surrounding the inscription, "Presented to Mrs. Mary B. Hill as an expression of esteem by the Ladies of Maltaville." The quilt was clearly treasured by Mary Hill and her family, as it remained with them for almost 100 years until it was donated to the museum by her granddaughter in 1930.

Gertrud Friedemann and Eva Morgenroth Lande

Black and white portrait of a young girl in a plaid coat. Her hair is on top of her head, perhaps in a hat. Her hands are in muff or hand warmer.

In the 1930s Gertrud Bejach Friedemann and her husband, the bacteriologist Ulrich Friedemann, took refuge in Great Britain and then the United States to avoid the terrors of Nazi Germany. They brought with them Gertrud's two children by her first marriage, Eva and Anton Morgenroth. Among their belongings was a small paper puzzle, called Zauberspiel (Magic Game), which Gertrud had played with as a child in Berlin.

Green piece of paper with rectangular windows cut into the top piece of paper. One window reveals the number 73.

Many green sheets of rigid paper with rectangular windows cut into them, some revealing numbers.

Gertrud passed the game on to her children and it remained a favorite of theirs in their new home in America. In 1988 Eva gave her Zauberspiel to the Smithsonian. The game suggests not only the enduring fascination of mathematical recreations and the rich culture of early 20th century Berlin, but also the power of a small object to tie a mother and daughter who took refuge in the United States to the past they left behind.

Care to try Zauberspiel? Visit our collections record to learn more. But be warned: according to Eva, when family friend Albert Einstein tried his hand at the game he spent days puzzling over it to no avail.

Patri O'Gan is a project assistant in the Division of Armed Forces History. Shannon Perich is a curator in the Division of Culture and the Arts. Mallory Warner is a curatorial assistant and Rachel Anderson is a research and project assistant in the Division of Medicine and Science. Lucy Harvey is a program assistant in the Division of Armed Forces History. Madelyn Shaw is a curator in the Division of Home and Community Life. Peggy Kidwell is a curator in the Division of Medicine and Science.

Author(s): 
Patri O’Gan, Shannon Perich, Mallory Warner, Rachel Anderson, Lucy Harvey, Madelyn Shaw, and Peggy Kidwell
Posted Date: 
Monday, February 13, 2017 - 08:00

Teddy Roosevelt's Epic (But Strangely Altruistic) Hunt for a White Rhino

Smithsonian Magazine

“I speak of Africa and golden joys.” The first line of Theodore Roosevelt’s own retelling of his epic safari made it clear that he saw it as the unfolding of a great drama, and one that might very well have led to his own death, for the quoted line is from Shakespeare, the Henry IV scene in which the death of the king was pronounced.

As a naturalist, Roosevelt is most often remembered for protecting millions of acres of wilderness, but he was equally committed to preserving something else—the memory of the natural world as it was before the onslaught of civilization. To him, being a responsible naturalist was also about recording the things that would inevitably pass, and he collected specimens and wrote about the life histories of animals when he knew it might be the last opportunity to study them extant. Just as the bison in the American West had faded, Roosevelt knew that the big game of East Africa would one day exist only in vastly diminished numbers. He had missed his chance to record much of the natural history of wild bison, but he was intent on collecting and recording everything possible while on his African expedition. Roosevelt shot and wrote about white rhinos as if they might someday be found only as fossils.

Interestingly, it was the elite European big-game-hunting fraternity that most loudly condemned Roosevelt’s scientific collecting. He had personally killed 296 animals, and his son Kermit killed 216 more, but that was not even a tenth of what they might have killed had they been so inclined. Far more animals were killed by the scientists who accompanied them, but those men escaped criticism because they were mostly collecting rats, bats, and shrews, which very few people cared about at the time. Roosevelt cared deeply about all these tiny mammals, too, and he could identify many of them to the species with a quick look at their skulls. As far as Roosevelt was concerned, his work was no different from what the other scientists were doing—his animals just happened to be bigger.

In June 1908, Roosevelt approached Charles Doolittle Walcott, the Secretary of the Smithsonian Institution, with an idea:

As you know, I am not in the least a game butcher. I like to do a certain amount of hunting, but my real and main interest is the interest of a faunal naturalist. Now, it seems to me that this opens up the best chance for the National Museum to get a fine collection, not only of the big game beasts, but of the smaller animals and birds of Africa; and looking at it dispassionately, it seems to me that the chance ought not to be neglected. I will make arrangements in connection with publishing a book which will enable me to pay for the expenses of myself and my son. But what I would like to do would be to get one or two professional field taxidermists, field naturalists, to go with us, who should prepare and send back the specimens we collect. The collection which would thus go to the National Museum would be of unique value.

The “unique value” Roosevelt was referring to, of course, was the chance to acquire specimens shot by him—the president of the United States. Always a tough negotiator, Roosevelt put pressure on Walcott by mentioning that he was also thinking about making his offer to the American Museum of Natural History in New York—but that, as president, he felt it was only appropriate that his specimens go to the Smithsonian in Washington, D.C.

Compared to those of other museums, the Smithsonian’s African-mammal collection was paltry back then. The Smithsonian had sent a man to explore Kilimanjaro in 1891 and another to the eastern Congo, but the museum still held relatively few specimens. Both the Field Museum in Chicago and the American Museum in New York had been sending regular expeditions to the continent, bringing home thousands of African specimens. Eager not to fall further behind, Walcott took up Roosevelt’s offer and agreed to pay for the preparation and transport of specimens. He also agreed to set up a special fund through which private donors could contribute to the expedition. (Because the Smithsonian was a public museum, its budget was largely controlled by Congress, and Roosevelt worried that politics might get in the way of his expedition—the fund solved this sticky issue.)

For Teddy Roosevelt, the white rhino was the only species of heavy game left for the expedition to collect, and, of all the species, it was the one the Smithsonian would likely never have an opportunity to collect again. (Smithsonian Institution Archives)

As far as Walcott was concerned, the expedition was both a scientific and a public-relations coup. Not only would the museum obtain an important collection from a little-explored corner of Africa, but the collection would come from someone who was arguably one of the most recognized men in America—the president of the United States. Under the aegis of the Smithsonian Institution, Roosevelt’s proposed safari had been transformed from a hunting trip to a serious natural-history expedition promising lasting scientific significance. An elated Roosevelt wrote British explorer and conservationist Frederick Courteney Selous to tell him the good news—the trip would be conducted for science, and he would contribute to the stock of important knowledge being accumulated on the habits of big game.

Roosevelt saw the trip as perhaps his “last chance for something in the nature of a great adventure,” and he devoted the last months of his lame-duck presidency to little other than making preparations. Equipment needed to be purchased, routes mapped, guns and ammo selected. He admitted that he found it very difficult to “devote full attention to his presidential work, he was so eagerly looking forward to his African trip.” Having studied the accounts of other hunters, he knew that the Northern Guaso Nyiro River and the regions north of Mount Elgon were the best places to hunt, and that he had to make a trip to Mount Kenya if he was to have any chance at getting a big bull elephant. He made a list of animals he sought, ordering them by priority: lion, elephant, black rhinoceros, buffalo, giraffe, hippo, eland, sable, oryx, kudu, wildebeest, hartebeest, warthog, zebra, waterbuck, Grant’s gazelle, reedbuck, and topi. He also hoped to get up into some of the fly-infested habitats of northern Uganda in search of the rare white rhino.

The Roosevelt rhinos as seen on display at the Natural History Museum in 1959 (Smithsonian Institution Archives)

As 1909 drew to a close, he prepared to embark on a most dangerous mission. Disbanding his foot safari on the shores of Lake Victoria, he requisitioned a flotilla of river craft—a “crazy little steam launch,” two sailboats, and two rowboats—to take him hundreds of miles down the Nile River to a place on the west bank called the Lado Enclave. A semiarid landscape of eye-high elephant grass and scattered thorn trees, it was the last holdout of the rare northern white rhinoceros, and it was here that Roosevelt planned to shoot two complete family groups—one for the Smithsonian’s National Museum, and another that he had promised to Carl Akeley, the sculptor and taxidermist working on the African mammal hall at the American Museum of Natural History in New York City.

Nestled between what was then the Anglo-Egyptian Sudan and the Belgian Congo, the Lado Enclave was a 220-mile-long strip of land that was the personal shooting preserve of Belgium’s King Leopold II. By international agreement, the king held the Lado on the condition that, six months after his death, it would pass to British-controlled Sudan. King Leopold was already on his deathbed when Roosevelt went to East Africa, and the area reverted to lawlessness as elephant poachers and ragtag adventurers poured into the region with “the greedy abandon of a gold rush.”

In Northern Uganda, the expedition moved downriver, past walls of impenetrable papyrus, until they came upon a low sandy bay that is to this day marked on maps as "Rhino Camp." (Roosevelt Papers, Smithsonian Institution Archives)

Getting to the Lado, however, required Roosevelt to pass through the hot zone of a sleeping-sickness epidemic—the shores and islands at the northern end of Lake Victoria. Hundreds of thousands of people had recently died of the disease, until the Uganda government wisely evacuated the survivors inland. Those who remained took their chances, and Roosevelt noted the emptiness of the land.

The white rhino lived there—a completely different species from the more common black rhino Roosevelt had been collecting. Color, though, actually has little to do with their differences. In fact, the two animals are so different that they are usually placed in separate genera. The white rhino—white being the English bastardization of the Afrikaans word wyd for “wide,” in reference to this species’ characteristically broad upper lip—is specialized for grazing. By comparison, the more truculent black rhino has a narrow and hooked upper lip specialized for munching on shrubs. Although both animals are gray and basically indistinguishable by color, they display plenty of other differences: the white rhino is generally bigger, has a distinctive hump on its neck, and boasts an especially elongated and massive head, which it carries only a few inches from the ground. Roosevelt also knew that of the two, the white rhino was closest in appearance to the prehistoric rhinos that once roamed across the continent of Europe, and the idea of connecting himself to a hunting legacy that spanned millennia thrilled him.

The expedition pitched their tents on the banks of the White Nile, "Rhino Camp," about two degrees above the equator. (Smithsonian Institution Archives)

For many decades since its description in 1817, the white rhino was known to be found only in that part of South Africa south of the Zambezi River, but in 1900 a new subspecies was discovered thousands of miles to the north, in the Lado Enclave. Such widely separated populations were unusual in the natural world, and it was assumed that the extant white rhinos were the remnants of what was once a more widespread and contiguous distribution. “It is almost as if our bison had never been known within historic times except in Texas and Ecuador,” Roosevelt wrote of the disparity.

At the time of Roosevelt’s expedition, as many as one million black rhino still existed in Africa, but the white rhino was already nearing extinction. The southern population had been hunted to the point that only a few individuals survived in just a single reserve, and even within the narrow ribbon of the Lado Enclave, these rhinos were found only in certain areas and were by no means abundant. On the one hand, Roosevelt’s instincts as a conservationist told him to refrain from shooting any white rhino specimens “until a careful inquiry has been made as to its numbers and exact distribution.” But on the other hand, as a pragmatic naturalist, he knew that the species was inevitably doomed and that it was important for him to collect specimens before it went extinct.

Roosevelt made a list of animals he sought, ordering them by priority . . . He also hoped to get up into some of the fly-infested habitats of northern Uganda in search of the rare white rhino. (Roosevelt Papers, Smithsonian Institution Archives)

As he steamed down the Nile, Roosevelt was followed by a second expedition of sorts, led by a former member of the British East Africa Police. But Captain W. Robert Foran was not intent on arresting Roosevelt—whom he referred to by the code name “Rex”; rather, he was the head of an expedition of the Associated Press. Roosevelt let Foran’s group follow at a respectable distance, by now wanting regular news to flow back to the United States. Foran had also been instrumental in securing a guide for Roosevelt on his jaunt into the virtually lawless Lado Enclave. The guide, Quentin Grogan, was among the most notorious of the elephant poachers in the Lado, and Roosevelt was chuffed to have someone of such ill repute steering his party.

Grogan was still recovering from a boozy, late-night revel when he first met Roosevelt. The poacher thought [the president’s son] Kermit was dull, and he deplored the lack of alcohol in the Roosevelts’ camp. Among some other hangers-on eager to meet Roosevelt was another character—John Boyes, a seaman who, after being shipwrecked on the African coast in 1896, “went native” and was so highly regarded as an elephant hunter there that he was christened the legendary King of the Kikuyu. Grogan, Boyes, and a couple of other unnamed elephant hunters had gathered in the hope of meeting Roosevelt, who characterized them all as “a hard-bit set.” These men who faced death at every turn, “from fever, from assaults of warlike native tribes, from their conflicts with their giant quarry,” were so like many of the tough cowpunchers he had encountered in the American West—rough and fiercely independent men—that Roosevelt loved them.

Downriver they went, past walls of impenetrable papyrus, until they came upon a low, sandy bay that is to this day marked on maps as “Rhino Camp.” Their tents pitched on the banks of the White Nile, about two degrees above the equator, Roosevelt was in “the heart of the African wilderness.” Hippos wandered dangerously close at night, while lions roared and elephants trumpeted nearby. Having spent the past several months in the cool Kenyan highlands, Roosevelt found the heat and swarming insects intense, and he was forced to wear a mosquito head net and gauntlets at all times. The group slept under mosquito nets “usually with nothing on, on account of the heat” and burned mosquito repellent throughout the night.

In the end, Roosevelt shot five northern white rhinoceros, with Kermit taking an additional four. (Smithsonian Institution Archives)

Although their camp was situated just beyond the danger zone for sleeping sickness, Roosevelt was still bracing himself to come down with some sort of fever or another. “All the other members of the party have been down with fever or dysentery; one gun bearer has died of fever, four porters of dysentery and two have been mauled by beasts; and in a village on our line of march, near which we camped and hunted, eight natives died of sleeping sickness during our stay,” he wrote. The stakes were certainly high in Rhino Camp, but Roosevelt would not have taken the risk if the mission was not important—the white rhino was the only species of heavy game left for the expedition to collect, and, of all the species, it was the one the Smithsonian would likely never have an opportunity to collect again.

Today, the northern white rhino is extinct in the wild and only three remain in captivity. One of the Roosevelt white rhinos is on view at the Natural History Museum. (NMNH)

In the end, Roosevelt shot five northern white rhinoceros, with Kermit taking an additional four. As game, these rhinos were unimpressive to hunt. Most were shot as they rose from slumber. But with a touch of poignancy, the hunts were punctuated with bouts of wildfire-fighting, injecting some drama into one of Roosevelt’s last accounts from the field. Flames licked sixty feet high as the men lit backfires to protect their camp, the evening sky turning red above the burning grass and papyrus. Awakening to a scene that resembled the aftermath of an apocalypse, the men tracked rhino through miles of white ash, the elephant grass having burned to the ground in the night.

Whether the species lived on or died out, Roosevelt was emphatic that people needed to see the white rhinoceros. If they couldn’t experience the animals in Africa, at least they should have the chance to see them in a museum.

Today, the northern white rhino is extinct in the wild and only three remain in captivity. One of the Roosevelt white rhinos is on view, along with 273 other taxidermy specimens, in the Smithsonian’s Hall of Mammals at the National Museum of Natural History.

Adapted from THE NATURALIST by Darrin Lunde. Copyright © 2016 by Darrin Lunde. Published by Crown Publishers, a division of Penguin Random House LLC.

Darrin Lunde is a mammal scholar who has named more than a dozen new species of mammals and led scientific field expeditions throughout the world. He previously worked at the American Museum of Natural History and is currently a supervisory museum specialist in the Division of Mammals at the Smithsonian's National Museum of Natural History. Lunde independently authored this book, The Naturalist, based on his own personal research. The views expressed in the book are his own and not those of the Smithsonian.

Science Still Bears the Fingerprints of Colonialism

Smithsonian Magazine

Sir Ronald Ross had just returned from an expedition to Sierra Leone. The British doctor had been leading efforts to tackle the malaria that so often killed English colonists in the country, and in December 1899 he gave a lecture to the Liverpool Chamber of Commerce about his experience. In the words of a contemporary report, he argued that “in the coming century, the success of imperialism will depend largely upon success with the microscope.”

Ross, who won the Nobel Prize for Medicine for his malaria research, would later deny he was talking specifically about his own work. But his point neatly summarized how the efforts of British scientists were intertwined with their country’s attempt to conquer a quarter of the world.

Ross was very much a child of empire, born in India and later working there as a surgeon in the imperial army. So when he used a microscope to identify how a dreaded tropical disease was transmitted, he would have realized that his discovery promised to safeguard the health of British troops and officials in the tropics. In turn, this would enable Britain to expand and consolidate its colonial rule.

Ross’s words also suggest how science was used to argue imperialism was morally justified because it reflected British goodwill towards colonized people. It implied that scientific insights could be redeployed to promote superior health, hygiene and sanitation among colonial subjects. Empire was seen as a benevolent, selfless project. As Ross’s fellow Nobel laureate Rudyard Kipling described it, it was the “white man’s burden” to introduce modernity and civilized governance in the colonies.

But science at this time was more than just a practical or ideological tool when it came to empire. Since its birth around the same time as Europeans began conquering other parts of the world, modern Western science was inextricably entangled with colonialism, especially British imperialism. And the legacy of that colonialism still pervades science today.

As a result, recent years have seen an increasing number of calls to “decolonize science”, even going so far as to advocate scrapping the practice and findings of modern science altogether. Tackling the lingering influence of colonialism in science is much needed. But there are also dangers that the more extreme attempts to do so could play into the hands of religious fundamentalists and ultra-nationalists. We must find a way to remove the inequalities promoted by modern science while making sure its huge potential benefits work for everyone, instead of letting it become a tool for oppression.

Ronald Ross at his lab in Calcutta, 1898. (Wellcome Collection, CC BY)

The gracious gift of science

When an enslaved laborer on an early 18th-century Jamaican plantation was found with a supposedly poisonous plant, his European overlords showed him no mercy. Suspected of conspiring to cause disorder on the plantation, he was treated with typical harshness and hanged. The historical records don’t even mention his name. His execution might also have been forgotten forever if it weren’t for the scientific inquiry that followed. Europeans on the plantation became curious about the plant and, building on the enslaved worker's “accidental finding,” they eventually concluded it wasn’t poisonous at all.

Instead it became known as a cure for worms, warts, ringworm, freckles and cold swellings, with the name Apocynum erectum. As the historian Pratik Chakrabarti argues in a recent book, this incident serves as a neat example of how, under European political and commercial domination, gathering knowledge about nature could take place simultaneously with exploitation.

For imperialists and their modern apologists, science and medicine were among the gracious gifts from the European empires to the colonial world. What’s more, the 19th-century imperial ideologues saw the scientific successes of the West as a way to allege that non-Europeans were intellectually inferior and so deserved and needed to be colonized.

In the incredibly influential 1835 memo “Minute on Indian Education,” British politician Thomas Macaulay denounced Indian languages partially because they lacked scientific words. He suggested that languages such as Sanskrit and Arabic were “barren of useful knowledge,” “fruitful of monstrous superstitions” and contained “false history, false astronomy, false medicine.”

Such opinions weren’t confined to colonial officials and imperial ideologues and were often shared by various representatives of the scientific profession. The prominent Victorian scientist Sir Francis Galton argued that “the average intellectual standard of the negro race is some two grades below our own (the Anglo Saxon).” Even Charles Darwin implied that “savage races” such as “the negro or the Australian” were closer to gorillas than were white Caucasians.

Yet 19th-century British science was itself built upon a global repertoire of wisdom, information and living and material specimens collected from various corners of the colonial world. Extracting raw materials from colonial mines and plantations went hand in hand with extracting scientific information and specimens from colonized people.

Sir Hans Sloane’s imperial collection started the British Museum. (Paul Hudson/Wikipedia, CC BY)

Imperial collections

Leading public scientific institutions in imperial Britain, such as the Royal Botanic Gardens at Kew and the British Museum, as well as ethnographic displays of “exotic” humans, relied on a global network of colonial collectors and go-betweens. By 1857, the East India Company’s London zoological museum boasted insect specimens from across the colonial world, including from Ceylon, India, Java and Nepal.

The British and Natural History museums were founded using the personal collection of doctor and naturalist Sir Hans Sloane. To gather these thousands of specimens, Sloane had worked intimately with the East India, South Sea and Royal African companies, which did a great deal to help establish the British Empire.

The scientists who used this evidence were rarely sedentary geniuses working in laboratories insulated from imperial politics and economics. The likes of Charles Darwin on the Beagle and botanist Sir Joseph Banks on the Endeavour literally rode on the voyages of British exploration and conquest that enabled imperialism.

Other scientific careers were directly driven by imperial achievements and needs. Early anthropological work in British India, such as Sir Herbert Hope Risley’s Tribes and Castes of Bengal, published in 1891, drew upon massive administrative classifications of the colonized population.

Map-making operations including the work of the Great Trigonometrical Survey in South Asia came from the need to cross colonial landscapes for trade and military campaigns. The geological surveys commissioned around the world by Sir Roderick Murchison were linked with intelligence gathering on minerals and local politics.

Efforts to curb epidemic diseases such as plague, smallpox and cholera led to attempts to discipline the routines, diets and movements of colonial subjects. This opened up a political process that the historian David Arnold has termed the “colonization of the body”. By controlling people as well as countries, the authorities turned medicine into a weapon with which to secure imperial rule.

New technologies were also put to use expanding and consolidating the empire. Photographs were used for creating physical and racial stereotypes of different groups of colonized people. Steamboats were crucial in the colonial exploration of Africa in the mid-19th century. Aircraft enabled the British to surveil and then bomb rebellions in 20th-century Iraq. The innovation of wireless radio in the 1890s was shaped by Britain’s need for discreet, long-distance communication during the South African war.

In these ways and more, Europe’s leaps in science and technology during this period both drove and were driven by its political and economic domination of the rest of the world. Modern science was effectively built on a system that exploited millions of people. At the same time it helped justify and sustain that exploitation, in ways that hugely influenced how Europeans saw other races and countries. What’s more, colonial legacies continue to shape trends in science today.

Polio eradication needs willing volunteers. (Department for International Development, CC BY)

Modern colonial science

Since the formal end of colonialism, we have become better at recognizing how scientific expertise has come from many different countries and ethnicities. Yet former imperial nations still appear almost self-evidently superior to most of the once-colonized countries when it comes to scientific study. The empires may have virtually disappeared, but the cultural biases and disadvantages they imposed have not.

You just have to look at the statistics on the way research is carried out globally to see how the scientific hierarchy created by colonialism continues. The annual rankings of universities are published mostly by the Western world and tend to favor its own institutions. Academic journals across the different branches of science are mostly dominated by the U.S. and western Europe.

It is unlikely that anyone who wishes to be taken seriously today would explain this data in terms of innate intellectual superiority determined by race. The blatant scientific racism of the 19th century has now given way to the notion that excellence in science and technology is a euphemism for significant funding, infrastructure and economic development.

Because of this, most of Asia, Africa and the Caribbean are seen either as playing catch-up with the developed world or as dependent on its scientific expertise and financial aid. Some academics have identified these trends as evidence of the persisting “intellectual domination of the West” and labeled them a form of “neo-colonialism.”

Various well-meaning efforts to bridge this gap have struggled to go beyond the legacies of colonialism. For example, scientific collaboration between countries can be a fruitful way of sharing skills and knowledge, and learning from the intellectual insights of one another. But when an economically weaker part of the world collaborates almost exclusively with very strong scientific partners, it can take the form of dependence, if not subordination.

A 2009 study showed that about 80 percent of Central Africa’s research papers were produced with collaborators based outside the region. With the exception of Rwanda, each of the African countries principally collaborated with its former colonizer. As a result, these dominant collaborators shaped scientific work in the region. They prioritized research on immediate local health-related issues, particularly infectious and tropical diseases, rather than encouraging local scientists to also study the fuller range of topics pursued in the West.

In the case of Cameroon, local scientists’ most common role was in collecting data and fieldwork while foreign collaborators shouldered a significant amount of the analytical science. This echoed a 2003 study of international collaborations in at least 48 developing countries that suggested local scientists too often carried out “fieldwork in their own country for the foreign researchers.”

In the same study, 60 percent to 70 percent of the scientists based in developed countries did not acknowledge their collaborators in poorer countries as co-authors in their papers. This is despite the fact they later claimed in the survey that the papers were the result of close collaborations.

A March for Science protester in Melbourne. (Wikimedia Commons)

Mistrust and resistance

International health charities, which are dominated by Western countries, have faced similar issues. After the formal end of colonial rule, global health workers long appeared to represent a superior scientific culture in an alien environment. Unsurprisingly, interactions between these skilled and dedicated foreign personnel and the local population have often been characterized by mistrust.

For example, during the smallpox eradication campaigns of the 1970s and the polio campaign of the past two decades, the World Health Organization’s representatives found it quite challenging to mobilize willing participants and volunteers in the interior of South Asia. On occasion they even met resistance on religious grounds from local people. But their stringent responses, which included the close surveillance of villages, cash incentives for identifying concealed cases and house-to-house searches, added to this climate of mutual suspicion. These experiences of mistrust are reminiscent of those created by strict colonial policies of plague control.

Western pharmaceutical firms also play a role by carrying out questionable clinical trials in the developing world where, as journalist Sonia Shah puts it, “ethical oversight is minimal and desperate patients abound.” This raises moral questions about whether multinational corporations misuse the economic weaknesses of once-colonized countries in the interests of scientific and medical research.

The colonial image of science as a domain of the white man even continues to shape contemporary scientific practice in developed countries. People from ethnic minorities are underrepresented in science and engineering jobs and more likely to face discrimination and other barriers to career progress.

To finally leave behind the baggage of colonialism, scientific collaborations need to become more symmetrical and founded on greater degrees of mutual respect. We need to decolonize science by recognizing the true achievements and potential of scientists from outside the Western world. Yet while this structural change is necessary, the path to decolonization has dangers of its own.

Science must fall?

In October 2016, a YouTube video of students discussing the decolonization of science went surprisingly viral. The clip, which has been watched more than 1 million times, shows a student from the University of Cape Town arguing that science as a whole should be scrapped and started again in a way that accommodates non-Western perspectives and experiences. The student’s point that science cannot explain so-called black magic earned the argument much derision and mockery. But you only have to look at the racist and ignorant comments left beneath the video to see why the topic is so in need of discussion.

Inspired by the recent “Rhodes Must Fall” campaign against the university legacy of the imperialist Cecil Rhodes, the Cape Town students became associated with the phrase “science must fall.” While it may be interestingly provocative, this slogan isn’t helpful at a time when government policies in a range of countries including the U.S., UK and India are already threatening to impose major limits on science research funding.

More alarmingly, the phrase also runs the risk of being used by religious fundamentalists and cynical politicians in their arguments against established scientific theories such as climate change. This is a time when the integrity of experts is under fire and science is the target of political maneuvering. So polemically rejecting the subject altogether only plays into the hands of those who have no interest in decolonization.

Alongside its imperial history, science has also inspired many people in the former colonial world to demonstrate remarkable courage, critical thinking and dissent in the face of established beliefs and conservative traditions. These include the iconic Indian anti-caste activist Rohith Vemula and the murdered atheist authors Narendra Dabholkar and Avijit Roy. Demanding that “science must fall” fails to do justice to this legacy.

The call to decolonize science, as in the case of other disciplines such as literature, can encourage us to rethink the dominant image that scientific knowledge is the work of white men. But this much-needed critique of the scientific canon carries the other danger of inspiring alternative national narratives in post-colonial countries.

For example, some Indian nationalists, including the country’s current prime minister, Narendra Modi, have emphasized the scientific glories of an ancient Hindu civilization. They argue that plastic surgery, genetic science, airplanes and stem cell technology were in vogue in India thousands of years ago. These claims are not just a problem because they are factually inaccurate. Misusing science to stoke a sense of nationalist pride can easily feed into jingoism.

Meanwhile, various forms of modern science and their potential benefits have been rejected as unpatriotic. In 2016, a senior Indian government official even went so far as to claim that “doctors prescribing non-Ayurvedic medicines are anti-national.”

The path to decolonization

Attempts to decolonize science need to contest jingoistic claims of cultural superiority, whether they come from European imperial ideologues or the current representatives of post-colonial governments. This is where new trends in the history of science can be helpful.

For example, instead of the parochial understanding of science as the work of lone geniuses, we could insist on a more cosmopolitan model. This would recognize how different networks of people have often worked together in scientific projects and the cultural exchanges that helped them, even if those exchanges were unequal and exploitative.

But if scientists and historians are serious about “decolonizing science” in this way, they need to do much more to present the culturally diverse and global origins of science to a wider, non-specialist audience. For example, we need to make sure this decolonized story of the development of science makes its way into schools.

Students should also be taught how empires affected the development of science and how scientific knowledge was reinforced, used and sometimes resisted by colonized people. We should encourage budding scientists to question whether science has done enough to dispel modern prejudices based on concepts of race, gender, class and nationality.

Decolonizing science will also involve encouraging Western institutions that hold imperial scientific collections to reflect more on the violent political contexts of war and colonization in which these items were acquired. An obvious step forward would be to discuss repatriating scientific specimens to former colonies, as botanists working on plants originally from Angola but held primarily in Europe have done. If repatriation isn’t possible, then co-ownership or priority access for academics from post-colonial countries should at least be considered.

This is also an opportunity for the broader scientific community to critically reflect on its own profession. Doing so will inspire scientists to think more about the political contexts that have kept their work going and about how changing them could benefit the scientific profession around the world. It should spark conversations between the sciences and other disciplines about their shared colonial past and how to address the issues it creates.

Unravelling the legacies of colonial science will take time. But the field needs strengthening at a time when some of the most influential countries in the world have adopted a lukewarm attitude towards scientific values and findings. Decolonization promises to make science more appealing by integrating its findings more firmly with questions of justice, ethics and democracy. Perhaps, in the coming century, success with the microscope will depend on success in tackling the lingering effects of imperialism.

Brain Pickings' Top 11 History Books of the Year

Smithsonian Magazine

After the year’s best children’s books, art and design books, photography books, and science books, the 2011 best-of series continues with a look at the most fascinating history books featured on Brain Pickings this year, tomes that unearth unknown treasures from the annals of yesteryear or offer an unusual lens on a familiar piece of our cultural past.

1. THE INFORMATION

The future of information can’t be complete without a full understanding of its past. That, in the context of so much more, is exactly what iconic science writer James Gleick explores in The Information: A History, a Theory, a Flood — the book you’d have to read if you only read one book this year. Flowing from tonal languages to early communication technology to self-replicating memes, Gleick delivers an astonishing 360-degree view of the vast and opportune playground for us modern “creatures of the information,” to borrow vocabulary from Jorge Luis Borges’ much more dystopian take on information in the 1941 classic, “The Library of Babel,” which casts a library’s endless labyrinth of books and shelves as a metaphor for the universe.

Gleick illustrates the central dogma of information theory through a riveting journey across African drum languages, the story of the Morse code, the history of the French optical telegraph, and a number of other fascinating facets of humanity’s infinite quest to transmit what matters with ever-greater efficiency.

“We know about streaming information, parsing it, sorting it, matching it, and filtering it. Our furniture includes iPods and plasma screens, our skills include texting and Googling, we are endowed, we are expert, so we see information in the foreground. But it has always been there.” ~ James Gleick

But what makes the book most compelling is that, unlike some of his more defeatist contemporaries, Gleick roots his core argument in a certain faith in humanity, in our moral and intellectual capacity for elevation, making the evolution and flood of information an occasion to celebrate new opportunities and expand our limits, rather than to despair and disengage.

Gleick concludes The Information with Borges’ classic portrait of the human condition:

“We walk the corridors, searching the shelves and rearranging them, looking for lines of meaning amid leagues of cacophony and incoherence, reading the history of the past and of the future, collecting our thoughts and collecting the thoughts of others, and every so often glimpsing mirrors, in which we may recognize creatures of the information.”

Originally featured on Brain Pickings in March and excerpted in Smithsonian magazine’s May issue.

2. THE SWERVE

Poggio Bracciolini is the most important man you’ve never heard of.

One cold winter night in 1417, the clean-shaven, slender young man pulled a manuscript off a dusty library shelf and could barely believe his eyes. In his hands was a thousand-year-old text that changed the course of human thought — the last surviving manuscript of On the Nature of Things, a seminal poem by Roman philosopher Lucretius, full of radical ideas about a universe operating without gods and about matter made up of minuscule particles in perpetual motion, colliding and swerving in ever-changing directions. With Bracciolini’s discovery began the copying and translation of this powerful ancient text, which in turn fueled the Renaissance and inspired minds as diverse as Shakespeare, Galileo, Thomas Jefferson, Einstein and Freud.

In The Swerve: How the World Became Modern, acclaimed Renaissance scholar Stephen Greenblatt tells the story of Bracciolini’s landmark discovery and its impact on centuries of human intellectual life, laying the foundations for nearly everything we take as a cultural given today.

“This is a story [of] how the world swerved in a new direction. The agent of change was not a revolution, an implacable army at the gates, or landfall of an unknown continent. […] The epochal change with which this book is concerned — though it has affected all our lives — is not so easily associated with a dramatic image.”

Central to the Lucretian worldview was the idea that beauty and pleasure were worthwhile pursuits, a notion that permeated every aspect of culture during the Renaissance and has since found its way to everything from design to literature to political strategy — a worldview in stark contrast with the culture of religious fear and superstition that gripped pre-Renaissance Europe. And, as if to remind us of the serendipitous shift that underpins our present reality, Greenblatt writes in the book’s preface:

“It is not surprising that the philosophical tradition from which Lucretius’ poem derived, so incompatible with the cult of the gods and the cult of the state, struck some, even in the tolerant culture of the Mediterranean, as scandalous […] What is astonishing is that one magnificent articulation of the whole philosophy — the poem whose recovery is the subject of this book — should have survived. Apart from a few odds and ends and secondhand reports, all that was left of the whole rich tradition was contained in that single work. A random fire, an act of vandalism, a decision to snuff out the last trace of views judged to be heretical, and the course of modernity would have been different.”

Illuminating and utterly absorbing, The Swerve is as much a precious piece of history as it is a timeless testament to the power of curiosity and rediscovery. In a world dominated by the newsification of culture where the great gets quickly buried beneath the latest, it’s a reminder that some of the most monumental ideas might lurk in a forgotten archive and today’s content curators might just be the Bracciolinis of our time, bridging the ever-widening gap between accessibility and access.

3. RADIOACTIVE

Wait, how can a book be among the year’s best art and design books, best science books, and best history books? Well, if it’s Radioactive: Marie & Pierre Curie: A Tale of Love and Fallout, it can. In this cross-disciplinary gem, artist Lauren Redniss tells the story of Marie Curie — one of the most extraordinary figures in the history of science, a pioneer in researching radioactivity, a field whose very name she coined, and not only the first woman to win a Nobel Prize but also the first person to win two Nobel Prizes, and in two different sciences — through the two invisible but immensely powerful forces that guided her life: radioactivity and love. Granted, the book was also atop my omnibus of the year’s best art and design books — but that’s because it’s truly extraordinary — a remarkable feat of thoughtful design and creative vision.


To honor Curie’s spirit and legacy, Redniss rendered her poetic artwork in cyanotype, a 19th-century cameraless photographic printing process in which paper is coated with light-sensitive chemicals — the kind of photosensitive chemistry that proved critical to the discovery of both X-rays and radioactivity itself. Once exposed to the sun’s UV rays, the chemically treated paper turns a deep shade of blue. The text in the book is a unique typeface Redniss designed using the title pages of 18th- and 19th-century manuscripts from the New York Public Library archive. She named it Eusapia LR, for the croquet-playing, sexually ravenous Italian Spiritualist medium whose séances the Curies used to attend. The book’s cover is printed in glow-in-the-dark ink.


Redniss tells a turbulent story — a passionate romance with Pierre Curie (honeymoon on bicycles!), the epic discovery of radium and polonium, Pierre’s sudden death in a freak accident in 1906, Marie’s affair with physicist Paul Langevin, her coveted second Nobel Prize — under which lie poignant reflections on the implications of Curie’s work more than a century later as we face ethically polarized issues like nuclear energy, radiation therapy in medicine, nuclear weapons and more.

Full review, with more images and Redniss’s TEDxEast talk, here.

4. HEDY’S FOLLY

Hedy’s Folly: The Life and Breakthrough Inventions of Hedy Lamarr, the Most Beautiful Woman in the World tells the fascinating story of a Hollywood-starlet-turned-inventor whose radio system for remote-controlling torpedoes laid the foundations for technologies like Wi-Fi and Bluetooth. But her story is also one of breaking free of society’s expectations for what inventors should be and look like. After our recent review, reader Carmelo “Nino” Amarena, an inventor himself, who interviewed Lamarr in 1997 shortly before her death, captures this friction in an email:

“Ever since I found out back in 1989 that Hedy had invented Spread Spectrum (Frequency Hopping type only), I followed her career historically until her death. My interview with her is one of the most notable memories I have of speaking with an inventor, and as luck would have it, she was underestimated for nearly 60 years on the smarts behind her beauty. One of the things she said to me in our 1997 talk was, ‘my beauty was my curse, so-to-speak, it created an impenetrable shield between people and who I really was’. I believe we all have our own version of Hedy’s curse and trying to overcome it could take a lifetime.”

In 1937, the dinner table of Fritz Mandl — an arms dealer who sold to both sides during the Spanish Civil War and the third richest man in Austria — entertained high-ranking Nazi officials who chatted about the newest munitions technologies. Mandl’s wife, a twenty-four-year-old former movie star, whom he respected but also claimed “didn’t know A from Z,” sat quietly listening. Hedy Kiesler, whose parents were assimilated Jews, and who would be rechristened by Louis B. Mayer as Hedy Lamarr, wanted to escape to Hollywood and return to the screen. From these dinner parties, she knew about submarines and wire-guided torpedoes, about the multiple frequencies used to guide bombs. She knew that she had to present herself as the glamorous wife of an arms dealer. And she knew that in order to leave her husband, she would have to take a good amount of this information with her.


Hedy’s story is intertwined with that of American composer George Antheil, who lived during the 1920s with his wife in Paris above the newly opened Shakespeare and Company, and who could count among his friends Man Ray, Ezra Pound, Louise Bryant, and Igor Stravinsky. When Antheil attended the premiere of Stravinsky’s Les Noces, the composer invited him afterward to a player piano factory, where he wished to have his work punched out for posterity. There, Antheil conceived of a grand composition for sixteen player pianos, bells, sirens, and several airplane propellers, which he called his Ballet mécanique. When he premiered the work in the U.S., the avant-garde composition proved a disaster.

Antheil and his wife decamped for Hollywood, where he attempted to write for the screen. When Antheil met Hedy, now a bona fide movie star, in the summer of 1940 at a dinner held by costume designer Adrian, they began talking about their interests in the war and their backgrounds in munitions (Antheil had been a young inspector in a Pennsylvania munitions plant during World War I). Hedy had been horrified by the German torpedoing of two ships carrying British children to Canada to avoid the Blitz, and she had begun to think about a way to control a torpedo remotely, without detection.

Hedy had the idea for a radio that hopped frequencies and Antheil had the idea of achieving this with a coded ribbon, similar to a player piano strip. A year of phone calls, drawings on envelopes, and fiddling with models on Hedy’s living room floor produced a patent for a radio system that was virtually jam-proof, constantly skipping signals.
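
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch (not code from the book or from the 1942 patent) of the core idea: transmitter and receiver share a secret hop schedule, the role the punched ribbon played, so both retune in lockstep while a jammer parked on any single frequency catches only fragments. The 88-channel count nods to the patent’s famous use of 88 frequencies, one per piano key; everything else here (the seed, the message) is hypothetical.

    import random

    CHANNELS = 88   # the patent specified 88 frequencies, echoing the piano keyboard
    SEED = 42       # hypothetical shared secret, standing in for the punched ribbon

    def hop_sequence(seed, length):
        """Generate the shared pseudo-random channel schedule."""
        rng = random.Random(seed)
        return [rng.randrange(CHANNELS) for _ in range(length)]

    message = "TORPEDO RUN"
    schedule = hop_sequence(SEED, len(message))

    # Transmitter: one symbol per time slot, each on the scheduled channel.
    transmitted = [(channel, symbol) for channel, symbol in zip(schedule, message)]

    # Receiver: regenerates the same schedule from the seed and listens on
    # the right channel at the right moment, so it recovers everything.
    received = "".join(
        symbol
        for (channel, symbol), expected in zip(transmitted, hop_sequence(SEED, len(message)))
        if channel == expected
    )
    assert received == message

    # A jammer parked on a single channel blocks only the slots that use it.
    jammed = schedule[0]
    heard = [s for c, s in transmitted if c == jammed]
    print(f"A jammer on channel {jammed} intercepts {len(heard)} of {len(message)} symbols")

The jam resistance falls out of the scheme itself: with the signal constantly skipping among many frequencies, no single-frequency interferer can follow it.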

Antheil responded to Hedy’s enthusiasm, although he thought her sometimes scatterbrained, and Hedy to Antheil’s mechanical focus as a composer. The two were always just friends and respected one another’s quirks. Antheil wrote to a friend about a new scheme Hedy was planning with Howard Hughes:

“Hedy is a quite nice, but mad, girl who besides being very beautiful indeed spends most of her spare time inventing things—she’s just invented a new ‘soda pop’ which she’s patenting—of all things!”

Hedy’s Folly isn’t the story of a science prodigy or a movie star with a few hobbies; it’s a star-studded picaresque about two undeniably creative people whose interests and backgrounds unlocked the best in one another — the mark of true inventors.

Adapted from Michelle Legro’s fantastic full review.

5. IN THE PLEX

Earlier this year, we looked at 7 essential books on the future of the Internet, how the iPhone changed everything and why Google’s algorithms might be stunting our intellectual growth. But there’s hardly a better way to understand the future of information and the web than by understanding how Google — the algorithm, the company, the ethos — changed everything. That’s exactly what acclaimed technology writer Steven Levy, he of Hackers fame, does in In The Plex: How Google Thinks, Works, and Shapes Our Lives — a sweeping look at how Google went from a startup headquartered above a Palo Alto bike shop to a global brand bigger than GE.

Levy, who has been covering the computing revolution for the past 30 years for titles like Newsweek and Wired, had developed a personal relationship with Larry Page and Sergey Brin, which granted him unprecedented access to the inner workings of the Big G, a company notorious for its caution with journalists. The result is a fascinating journey into the soul, culture and technology of our silent second brain, from Page and Brin’s legendary eccentricities that shaped the company’s creative culture to the uncompromising engineering genius that underpins its services. But most fascinating of all is the grace and insight with which Levy examines not only how Google has changed, but also how it has changed us and how, in the face of all these interconnected metamorphoses, it hopes to preserve its soul — all the while touching on timely topics like privacy, copyright law and censorship.

Levy, who calls himself “an outsider with an insider’s view,” recounts the mysteries he saw in Google, despite a decade of covering the company, which inspired his book:

“Google was a company built on the values of its founders, who harbored ambitions to build a powerful corporation that would impact the entire world, at the same time loathing the bureaucracy and commitments that running such a company would entail. Google professed a sense of moral purity — as exemplified by its informal motto, ‘Don’t be evil’ — but it seemed to have a blind spot regarding the consequences of its own technology on privacy and property rights. A bedrock principle of Google was serving its users — but a goal was building a giant artificial intelligence learning machine that would bring uncertain consequences to the way all of us live. From the very beginning, its founders said that they wanted to change the world. But who were they, and what did they envision this new world order to be?” ~ Steven Levy

Levy’s intimate account of Google’s inner tensions offers a sober look delivered with a kind of stern fatherly tenderness, animated by opposing forces of its own: his clear affection for Page and Brin, coupled with his at times begrudging fairness in writing about Google’s shortcomings.

“What I discovered was a company exulting in creative disorganization, even if the creativity was not always as substantial as hoped for. Google had massive goals, and the entire company channeled its values from the founders. Its mission was collecting and organizing all the world’s information — and that’s only the beginning. From the very start, its founders saw Google as a vehicle to realize the dream of artificial intelligence in augmenting humanity. To realize their dreams, Page and Brin had to build a huge company. At the same time, they attempted to maintain as much as possible the nimble, irreverent, answer-to-no-one freedom of a small start-up. In the two years I researched this book, the clash between those goals reached a peak, as David had become a Goliath.” ~ Steven Levy

Besides the uncommon history of Google, Levy reveals a parallel history of the evolution of information technology itself, a sobering invitation to look at the many technologies we’ve come to take for granted with new eyes. (Do you remember the days when you plugged a word into your search engine and it spat back a wildly unordered selection of results, most of them completely irrelevant to your query? Or when the most generous free web mail offered you the magnanimous storage space of four megabytes?)

Originally featured, with video, in August.

6. BOOKS: A LIVING HISTORY

What is an omnibus about history books without a book about the history of books? We’ve previously explored how books have been made from the Middle Ages to today, what the future might have in store for them, and why analog books still enchant us. In Books: A Living History, Australian historian Martyn Lyons (of A History of Reading and Writing in the Western World fame) explores how books became one of the most efficient and enduring information technologies ever invented — something we seem to forget in an era plagued by techno-dystopian alarmism about the death of books. Both a cultural time-capsule and an encyclopedia of bibliophilia, Lyons offers an invaluable record of our collective intellectual and informational journey across two millennia of written language and a profound peer into its future.

“It is difficult now to imagine how some of the great turning points in Western history could have been achieved without [the book]. The Renaissance, the Reformation, the Scientific Revolution and the Age of Enlightenment all relied on the printed word for their spread and permanent influence. For two and a half millennia, humanity used the book, in its manuscript or printed form, to record, to administer, to worship and to educate.” ~ Martyn Lyons

“Defining the book itself is a risky operation. I prefer to be inclusive rather than exclusive, and so I offer a very loose definition. The book, for example, does not simply exist as a bound text of sheets of printed paper — the traditional codex with which we are most familiar today. Such a definition forgets two millennia of books before print, and the various forms that textual communication took before the codex was invented.

“A traditional definition based only on the codex would also exclude hypertext and the virtual book, which have done away with the book’s conventional material support. I prefer to embrace all these forms, from cuneiform script to the printed codex to the digitized electronic book, and to trace the history of the book as far back as the invention of writing systems themselves. The term ‘book’, then, is a kind of shorthand that stands for many forms of written textual communication adopted in past societies, using a wide variety of materials.” ~ Martyn Lyons

From the first papyrus scrolls to the painstakingly made illuminated manuscripts of the Middle Ages to today’s ebooks and the iPad, Lyons distills the history and evolution of books in the context of a parallel cultural evolution and, as in the case of Gutenberg’s printing press, revolution.

Navigating through 2,000 gloriously illustrated years of literary milestones, genres, and groundswells, from serial and dime novels to paperbacks to manga, Lyons ends with a bittersweet contemplation of the fate of the book and the bibliophile after the turn of the digital century.

Originally reviewed, with more images, here.

7. 1493

In 2005, 1491: New Revelations of the Americas Before Columbus by Charles C. Mann came to be regarded as the most ambitious and sweeping look at pre-Columbus North and South America ever published. This year, Mann came back with 1493: Uncovering the New World Columbus Created — a fascinating look at one of the lesser-known, lesser-considered aspects of what happened when Columbus and his crew set foot on American soil: the environmental upheaval that began as they brought plants, animals and diseases that forever changed the local biosphere, both in America and in Europe once the explorers returned to the Old World. Known as The Columbian Exchange, this process is considered the most important ecological event since the extinction of the dinosaurs, and the paradoxes at its heart echo today’s polarized views of globalization as either a great cross-pollinator or a great contaminator of cultures.

“From the outset globalization brought enormous economic gains and ecological and social tumult that threatened to offset those gains. It is true that our times are different from the past. Our ancestors did not have the Internet, air travel, genetically modified crops, or computerized international stock exchanges. Still, reading the accounts of the creation of the world market one cannot help hearing echoes — some muted, some thunderously loud — of the disputes now on the television news. Events four centuries ago set a template for events we are living through today.”

Mann illustrates the fascinating interplay of organisms within ecological systems and the intricate yet powerful ways in which it impacts human civilization. For instance, when the Spaniards brought plantains to South America, they also brought the tiny scale insects that live in their roots, which turned out to be delicious new food for the local fire ants. This led to a plague-sized explosion in the fire ant population, which forced the terrified Spaniards to live on the roofs of their ant-infested houses and eventually drove them off the islands.

The most striking impact of The Columbian Exchange, however, comes from epidemiology. Because pre-Columbus America had no domesticated animals, it also had no animal-borne diseases. But when the Europeans came over, they brought with them enough disease to wipe out between two-thirds and 90 percent of the people in the Americas over the next 150 years — the worst demographic catastrophe in history by a long stretch. While early diaries mentioned these epidemics in describing life in the 1500s and 1600s, it wasn’t until the 1960s that epidemiologists and historians realized the true scale of the death toll in the decades following Columbus’s arrival.

NPR’s Fresh Air has an excellent interview with Mann.

From how tobacco became the world’s first global commodity to how forests were transformed by a new earthworm, 1493 will change the way you look at ecology, economy and epidemiology, and radically shift how you think about “local” and “global.”

Originally featured here in August and excerpted in Smithsonian magazine’s November 2011 issue.

8. WHEELS OF CHANGE

National Geographic’s Wheels of Change: How Women Rode the Bicycle to Freedom (With a Few Flat Tires Along the Way), which also happens to be one of the year’s best photography books, tells the riveting story of how the two-wheel wonder pedaled forward the emancipation of women in late-nineteenth-century America and radically redefined the normative conventions of femininity. (Not to be confused with another excellent tome that came out this year, It’s All About the Bike: The Pursuit of Happiness on Two Wheels, which offers a more general chronicle of the bike’s story, from its cultural history to its technical innovation to the fascinating, colorful stories of the people who ride it.)

“To men, the bicycle in the beginning was merely a new toy, another machine added to the long list of devices they knew in their work and play. To women, it was a steed upon which they rode into a new world.” ~ Munsey’s Magazine, 1896

A follow-up to Sue Macy’s excellent Winning Ways: A Photohistory of American Women in Sports, published nearly 15 years ago, the book weaves together fascinating research, rare archival images, and historical quotes that bespeak the era’s near-comic fear of the cycling revolution. (“The bicycle is the devil’s advance agent morally and physically in thousands of instances.”)

(Image: © Beth Emery Collection | via Sarah Goodyear / Grist.org)

From allowing young people to socialize without the chaperoning of clergymen and other merchants of morality to finally liberating women from the constraints of corsets and giant skirts (the “rational dress” pioneered by bike-riding women cut the weight of their undergarments to a “mere” 7 pounds), the velocipede made possible previously unthinkable actions and interactions that we now take for granted, to the point of forgetting the turbulence they once incited.

“Success in life depends as much upon a vigorous and healthy body as upon a clear and active mind.” ~ Elsa von Blumen, American racer, 1881

“Let me tell you what I think of bicycling. I think it has done more to emancipate women than anything else in the world. I stand and rejoice every time I see a woman ride by on a wheel.” ~ Susan B. Anthony, 1896

“Many [female cyclists on cigar box labels] were shown as decidedly masculine, with hair cut short or pulled back, and smoking cigars, then an almost exclusively male pursuit. This portrayal reflected the old fears that women in pants would somehow supplant men as breadwinners and decision-makers.” ~ Sue Macy

Originally featured here in March and discussed in Smithsonian’s Off the Road blog in December.

9. HARK! A VAGRANT

History doesn’t have to always take itself seriously. From New Yorker cartoonist Kate Beaton comes Hark! A Vagrant — a witty and wonderful collection of comics about historical and literary figures and events, based on her popular web comic of the same name. Scientists and artists, revolutionaries and superheroes, suffragists and presidents — they’re all there, as antique hipsters, and they’re all skewered with equal parts comedic and cerebral prod.

Beaton, whose background is in history and anthropology, has a remarkable penchant for conveying the momentous through the inane, aided by a truly special gift for simple, subtle, incredibly expressive caricature. From dude spotting with the Brontë Sisters to Nikola Tesla and Jane Austen dodging groupies, the six-panel vignettes will make you laugh out loud and slip you a dose of education while you aren’t paying attention.

“I think comics about topics like history or literature can be amazing educational tools, even at their silliest. So if you learn or look up a thing or two after reading these comics, and you’ve enjoyed them, then I will be more than pleased! If you’re just in it for the silly stuff, then there is plenty of that to go around, too.” ~ Kate Beaton

Beaton is also a masterful writer, her dialogue and captions adding depth to what’s already an absolute delight.

Handsome and hilarious, the six-panel stories in Hark! A Vagrant will undo all the uptightness about history instilled in you by academia, leaving you instead with a hearty laugh and some great lines for dinner party banter.

10. THE MAN OF NUMBERS

Imagine a day without numbers — how would you know when to wake up, how to call your mother, how the stock market is doing, or even how old you are? We live our lives by numbers. So fundamental are they to our understanding of the world that we’ve grown to take them for granted. And yet it wasn’t always so. Until the 13th century, even simple arithmetic was accessible almost exclusively to European scholars. Merchants kept track of quantifiables using Roman numerals, performing calculations either by an elaborate yet widespread finger-reckoning procedure or with a clumsy mechanical abacus. But in 1202, a young Italian man named Leonardo da Pisa — known today as Fibonacci — changed everything when he wrote Liber Abbaci, Latin for Book of Calculation, the first arithmetic textbook of the West.

Keith Devlin tells his incredible and important story in The Man of Numbers: Fibonacci’s Arithmetic Revolution, also one of the year’s best science books, tracing how Fibonacci revolutionized everything from education to economics by making arithmetic available to the masses. If you think the personal computing revolution of the 1980s was a milestone of our civilization, consider the personal computation revolution of the 13th century. And yet, da Pisa’s cultural contribution is hardly common knowledge.

“The change in society brought about by the teaching of modern arithmetic was so pervasive and all-powerful that within a few generations people simply took it for granted. There was no longer any recognition of the magnitude of the revolution that took the subject from an obscure object of scholarly interest to an everyday mental tool. Compared with Copernicus’s conclusions about the position of Earth in the solar system and Galileo’s discovery of the pendulum as a basis for telling time, Leonardo’s showing people how to multiply 193 by 27 simply lacks drama.” ~ Keith Devlin

Though “about” mathematics, Fibonacci’s story is really about a great number of remarkably timely topics: gamification for good (Liber Abbaci brimmed with puzzles and riddles like the rabbit problem to alleviate the tedium of calculation and engage readers with learning); modern finance (Fibonacci was the first to develop an early form of present-value analysis, a method for calculating the time value of money perfected by iconic economist Irving Fisher in the 1930s); publishing entrepreneurship (the first edition of Liber Abbaci was too dense for the average person to grasp, so da Pisa released — bear in mind, before the invention of the printing press — a simplified version accessible to the ordinary traders of Pisa, which allowed the text to spread around the world); abstract symbolism (because numbers, as objective as we’ve come to perceive them, are actually mere commonly agreed-upon abstractions); and even remix culture (Liber Abbaci is assumed to be the original source for many of the arithmetic bestsellers released after the invention of the printing press).
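
The present-value idea Devlin credits to Fibonacci is simple enough to make concrete in a few lines. Here is a minimal Python sketch of the discounting arithmetic at the heart of the technique; the amounts and interest rate are hypothetical, chosen only to illustrate the calculation, and are not figures from the book.

    # Present-value analysis in miniature: a payment arriving later is worth
    # less today, discounted by the interest it could otherwise have earned.
    def present_value(future_amount, annual_rate, years):
        """Discount a future payment back to today: PV = FV / (1 + r)^n."""
        return future_amount / (1 + annual_rate) ** years

    # Which is worth more today: 100 in hand now, or 120 paid two years
    # from now, if money could otherwise earn 8% a year?
    pv = present_value(120, 0.08, 2)
    print(round(pv, 2))  # 102.88 -> the deferred 120 still beats 100 today

The same comparison lets a merchant, medieval or modern, rank competing payment streams on a single scale: convert every future sum to its value today, then compare.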

Above all, however, Fibonacci’s feat was one of storytelling — much like TED, he took existing ideas that were far above the average person’s competence and grasp, and used his remarkable expository skills to make them accessible and attractive to the common man, allowing these ideas to spread far beyond the small and self-selected circles of the scholarly elite.

“A book about Leonardo must focus on his great contribution and his intellectual legacy. Having recognized that numbers, and in particular powerful and efficient ways to compute with them, could change the world, he set about making that happen at a time when Europe was poised for major advances in science, technology, and commercial practice. Through Liber Abbaci he showed that an abstract symbolism and a collection of seemingly obscure procedures for manipulating those symbols had huge practical applications.” ~ Keith Devlin

For an added layer of fascination, there’s also a complementary ebook titled Leonardo and Steve, drawing a curious parallel between Fibonacci and Steve Jobs.

Originally featured, with a Kindle preview, in July.

11. MASTERS OF MYSTERY

As far as unlikely friendships go, it hardly gets any unlikelier than that between Sherlock Holmes creator Sir Arthur Conan Doyle and legendary illusionist Harry Houdini. Born fifteen years apart into dramatically different families, one the educated product of a proper Scottish upbringing and the other the self-made son of a Hungarian immigrant, the two even stood in stark physical contrast, once likened by a journalist to Pooh and Piglet.

But when they met in 1920, something extraordinary began. In Masters of Mystery: The Strange Friendship of Arthur Conan Doyle and Harry Houdini, acclaimed pop culture biographer Christopher Sandford tells the story of the pair’s unique friendship, sometimes macabre, sometimes comic, and fundamentally human, underpinned by their shared longing for lost loved ones and their adventures in the world of Spiritualism — at the time, a world with unmatched popular allure.

From Queen Victoria to W. B. Yeats to Charles Dickens to Abraham Lincoln, even the era’s political, scientific, and artistic elite engaged in efforts to reach departed loved ones in worlds unseen. By the time Houdini arrived in America in 1878, more than 11 million people admitted to being Spiritualists. Spiritualism, of course, wasn’t a new idea at the time. The notion that the soul survives intact after physical death and lives on, on another plane, Sandford reminds us, could be traced at least as far back as the writings of Swedish mystic-philosopher Emanuel Swedenborg in the mid-18th century. His Arcana Coelestia (“Heavenly Secrets”) made an eight-volume case for the supernatural and provoked a published retort from Immanuel Kant, who pronounced Swedenborg’s opinions “nothing but illusions.”

This notion of illusion as a central part of Spiritualism turned out to be a central binding element for Houdini and Conan Doyle — one bringing to it the skepticism of a man making a living out of illusions and the other finding in it a saving grace of sorts.

“Spiritualism is nothing more or less than mental intoxication; intoxication of any sort when it becomes a habit is injurious to the body, but intoxication of the mind is always fatal to the mind.” ~ Harry Houdini

Houdini even called for a law that would “prevent these human leeches from sucking every bit of reason and common sense from their victims.” Still, when his father died, the 18-year-old Houdini sold his own watch to pay for a “professional psychic reunion” with the departed. In 1920, Houdini went on a six-month tour in Europe, attending more than a hundred séances. He wanted, desperately, to believe — but, himself a professional skeptic in the business of fooling people, he never quite managed to suspend his disbelief. In fact, he became the Penn & Teller of his day, seeing it as his duty to myth-bust psychics and other prophets of Spiritualism.

Conan Doyle, at first, seemed only interested in Spiritualism for its narrative potential, rather than “to change people’s hearts and minds,” as Sandford puts it. But after his father died when the author was only 34 and, mere months later, his wife was diagnosed with tuberculosis and given only a few months to live, Conan Doyle fell into a deep depression. Shortly thereafter, in 1893, he applied to join the Society for Psychical Research, a committee of academics aiming to study Spiritualism “without prejudice or prepossession.” Eventually, he gave up his lucrative literary career, killed off Sherlock Holmes, and dedicated himself wholly to his obsession with Spiritualism, an obsession which, as we’ve already seen in this rare footage from 1930, had reached manic proportions by his old age.

Yet, despite their passionate and diametrically opposed views on Spiritualism, Conan Doyle and Houdini had something intangible but powerful in common. Walter Prince, an ordained minister and a member of the SPR in the 1920s, put it this way:

“The more I reflect on Houdini [and] Doyle, the more it seems that the two men resembled each other. Each was a fascinating companion, each big-hearted and generous, yet each was capable of bitter and emotional denunciation, each was devoted to his home and family, each felt himself an apostle of good to men, the one to rid them of certain beliefs, the other to inculcate in them those beliefs.”

Originally featured here earlier this month.

This post appears courtesy of Brain Pickings, where it was originally published.
