
A Brief History of the Stoplight

Smithsonian Magazine

Driving home from a dinner party on a March night in 1913, the oil magnate George Harbaugh turned onto Cleveland’s Euclid Avenue. It was one of the city’s busiest streets, jammed with automobiles, horse-drawn carriages, bicyclists, trolleys and pedestrians, all believing they had the right of way. Harbaugh did not see the streetcar until it smashed into his roadster. “It is remarkable,” the local newspaper reported, “that the passengers escaped with their lives.”

Many others wouldn’t. More than 4,000 people died in car crashes in the United States in 1913, the same year that Model T’s started to roll off Henry Ford’s assembly line. The nation’s roads weren’t built for vehicles that could speed along at 40 miles an hour, and when those unforgiving machines met at a crowded intersection, there was confusion and, often, collision. Though police officers stood in the center of many of the most dangerous crossroads blowing whistles and waving their arms, few drivers paid attention.

A Cleveland engineer named James Hoge had a solution for all this chaos. Borrowing the red and green signals long used by railroads, and tapping into the electricity that ran through the trolley lines, Hoge created the first “municipal traffic control system.” Patented 100 years ago, Hoge’s invention was the forerunner of a ubiquitous and uncelebrated device that has shaped American cities and daily life ever since—the stoplight.

Hoge’s light made its debut on Euclid Avenue at 105th Street in Cleveland in 1914 (before the patent was issued). Drivers approaching the intersection now saw two lights suspended above it. A policeman sitting in a booth on the sidewalk controlled the signals with a flip of a switch. “The public is pleased with its operation, as it makes for greater safety, speeds up traffic, and largely controls pedestrians in their movements across the street,” the city’s public safety director wrote after a year of operation.

Others were already experimenting with and improving upon Hoge’s concept, and in the years that followed various inventors refined the design into the one that controls traffic and raises blood pressure today. We have William Potts, a Detroit police officer who had studied electrical engineering, to thank for the yellow light, though as a municipal employee he could not patent his invention.

By 1930, all major American cities and many small towns had at least one electric traffic signal, and the innovation was spreading around the world. The simple device tamed the streets; motor vehicle fatality rates in the United States fell by more than 50 percent between 1914 and 1930. And the technology became a symbol of progress. To be a “one stoplight town” was an embarrassment. “Because of the potent power of suggestion, [or] a delusion of grandeur, almost every crossroad hamlet, village, and town installed it where it was neither ornate nor useful,” the Ohio Department of Highways grumbled.

An additional complaint that gained traction was the device’s unfortunate impact on civility. Long before today’s epidemic of road rage, critics warned that drivers had surrendered some of their humanity; they didn’t have to acknowledge each other or pedestrians at intersections, but rather just stare at the light and wait for it to change. As early as 1916, the Detroit Automobile Club found it necessary to declare a “Courtesy Week,” during which drivers were encouraged to display “the breeding that motorists are expected to manifest in all other human relations.” As personal interactions declined, a new, particularly modern scourge appeared—impatience. In 1930, a Michigan policeman noted that drivers “are becoming more and more critical and will not tolerate sitting under red lights.”

The new rules of the road took some getting used to, and some indoctrination. In 1919, a Cleveland teacher invented a game to teach children how to recognize traffic signals, and today, kids still play a version of it, Red Light, Green Light. Within a few decades, the traffic light symbol had been incorporated into children’s entertainment and toys. Heeding the signals has become so ingrained that it governs all kinds of non-driving behavior. Elementary schools put the brakes on bad behavior with traffic light flashcards, and a pediatrician created the “Red Light, Green Light, Eat Right” program to promote healthful eating. Sexual assault prevention programs have adopted the traffic light scheme to signal consent. And the consulting firm Booz Allen suggested in 2002 that companies assess their CEOs as crisis (“red light”), visionary (“green light”) or analytical (“yellow light”) leaders. You can even find the colorful cues on the soccer field: A referee first issues a yellow warning card before holding up the red card, which tells the offending player to hit the road, so to speak.

A newsboy’s stand and traffic light in Los Angeles, 1942 (Library of Congress)

In a century the traffic light went from a contraption that only an engineer could love to a pervasive feature of everyday life—there are some two million of them in the United States today—and a powerful symbol. But its future is not bright. Driverless vehicles are the 21st century’s Model T, poised to dramatically change not only how we move from place to place but also our very surroundings. Researchers are already designing “autonomous intersections,” where smart cars will practice the art of nonverbal communication to optimize traffic flow, as drivers themselves once did. Traffic lights will begin to disappear from the landscape, and the new sign of modernity will be living in a “no stoplight town.”

Should I Stay or Should I Go?

U.S. crosswalk signals are downright pedestrian, but others are so clever they’ll stop you in your tracks.

Images by Chris Lyons.

These Are the World’s Most Dangerous Emerging Pathogens, According to WHO

Smithsonian Magazine

International officials recently gathered to discuss one of the biggest threats facing humanity—and this wasn’t the Paris climate talks. As Science’s Kai Kupferschmidt reports, the setting was Geneva, Switzerland, and the task was the selection of a shortlist of the world’s most dangerous emerging pathogens. These diseases are considered by a World Health Organization (WHO) committee of clinicians and scientists to be the pathogens “likely to cause severe outbreaks in the near future, and for which few or no medical countermeasures exist.” Here’s the WHO’s list, and what you should know about these scary diseases:

Crimean Congo hemorrhagic fever

This tick-borne fever got its name from the Crimea, where it first emerged in 1944, and the Congo, where it spread in 1969. Now, it can be found all over the world, though it primarily occurs in Asia. The disease is often misnamed “Asian Ebola virus” for its fast-moving effects, which include enlargement of the liver, fever, aching muscles and vomiting.

Outbreak News Today’s Chris Whitehouse writes that CCHF is currently spreading across India, where agricultural workers are often exposed to diseased, tick-bearing animals. According to the WHO, outbreaks of the disease can have a fatality rate of up to 40 percent. There is no vaccine for CCHF, but at least one has been shown to be effective in animals.

Ebola virus disease

It’s no surprise to see Ebola virus disease on the list—it has been ravaging African countries for decades, with widespread outbreaks throughout West Africa and the recent resurgence in Liberia. Also known as Ebola hemorrhagic fever, the disease has an average fatality rate of 50 percent, but has been as high as 90 percent in some outbreaks.

Though it is still unclear exactly how the virus is transmitted, scientists believe that bats serve as a natural “reservoir” for Ebola, which is then passed to humans through some form of contact. There are currently no licensed vaccines, but clinical trials for at least two are underway.

Marburg hemorrhagic fever

In 1967, a mysterious disease broke out in Europe, killing laboratory workers who had been exposed to monkeys from Uganda. The cause, Marburg virus, was named after the German city where it was first detected and is a filovirus—a member of the same family of viruses as Ebola.

Marburg virus has broken out only sporadically since the 1960s, usually in people who have spent time in caves frequented by Rousettus bats. Marburg causes a rash, malaise and bleeding and is often misdiagnosed. There is currently no vaccine or treatment.

Lassa fever 

First identified in 1969 in the Nigerian town of Lassa, Lassa fever can be difficult for doctors to diagnose and only becomes symptomatic in 20 percent of the people who become infected, according to the WHO. When it does strike, patients can move from mild effects like a slight fever to, in more severe cases, hemorrhaging, encephalitis and shock. But the fever’s most devastating and common complication is deafness. About a third of all Lassa patients lose some or all of their hearing.

Lassa fever is primarily found in West Africa and is contracted when people come into contact with the waste of infected Mastomys rats or the bodily fluids of those with the disease. Though the antiviral drug ribavirin may be effective in Lassa fever cases, there is no current vaccine.

MERS and SARS coronavirus diseases

Middle East Respiratory Syndrome (MERS) and Severe Acute Respiratory Syndrome (SARS) have had their fair share of media coverage. They’re members of the coronavirus family—viruses that usually cause upper respiratory illness. Though MERS appears to have jumped to humans from infected camels, both diseases are easy to catch from infected people’s coughs or sneezes.

Both conditions emerged relatively recently. SARS broke out in Asia in 2003, but the global outbreak was contained and no cases have been reported since 2004. The news isn’t that great concerning MERS: The disease, which started in Saudi Arabia in 2012, broke out again in South Korea this year. The WHO reports that 36 percent of reported patients die. Health officials tell SciDevNet that it’s unlikely a vaccine will be developed anytime soon.

Nipah and Rift Valley fever

The final two entries on the WHO’s list are viruses from animals—Nipah virus infection and Rift Valley fever. Nipah was first identified in 1998 when Malaysian pig farmers fell ill. To stop the outbreak, the Malaysian government ordered the euthanasia of over a million pigs. Even so, the virus later showed up in Bangladesh and India. Nipah causes brain inflammation, convulsions and even personality changes.

Rift Valley fever originated with Kenyan sheep farmers in 1931 and has since been identified in outbreaks throughout Africa. The disease is spread by handling diseased animal tissue, drinking infected milk or being bitten by infected mosquitoes. However, the WHO has never documented a case of human-to-human transmission. The disease causes symptoms similar to meningitis and can be hard to detect in its early stages. Though most people get a milder version of the disease, others aren’t so lucky: around eight percent of patients develop ocular disease or brain inflammation, and some eventually die. Neither Nipah nor Rift Valley fever has a currently approved human vaccine.

Though the diseases on this list were identified as the most likely to cause widespread epidemics, the WHO also designated three other diseases as “serious”: chikungunya, severe fever with thrombocytopenia syndrome, and Zika. Diseases like malaria and HIV/AIDS were not included because disease control and research into treatment and prevention are already well established.

What Pilgrims Heard When They Arrived in America

Smithsonian Magazine

For both the English settlers who landed at Plymouth Rock and the Native Americans who met them, their first meetings introduced an entirely new soundscape. But with the passage of time, many of those sounds were lost—especially as the religious traditions that were so important to colonists and indigenous peoples changed or died out. So it was even more meaningful when an audience in Washington, D.C., gathered to hear the sacred sounds of both English colonists and New England’s indigenous Wampanoag people earlier this month.

“Waking the Ancestors: Recovering the Lost Sacred Sounds of Colonial America” was no ordinary living history program. Performed by educators from Plimoth Plantation in Plymouth, Massachusetts, the program was developed as part of the Smithsonian’s Religion in America initiative.

Just as calls to prayer and church bells are part of city life around the world, the religious lives of America’s indigenous people and colonists had their own distinctive sounds. “Waking the Ancestors” explored just what those sounds might have been like. With the help of meticulous historical research, the team behind the program reconstructed how worship traditions sounded after the arrival of the Mayflower in 1620 in what is now Massachusetts.

That soundscape is anything but familiar to 21st-century listeners. The region was new to English colonists, but not to the Wampanoag, who once numbered over 100,000 in what is now Massachusetts and Rhode Island. The Pilgrims would have heard the traditional songs and dances of Wampanoag people when they arrived—and in turn, the Wampanoag would have heard Pilgrims worshiping in Anglican, Puritan and Separatist styles.

To demonstrate, the program featured worship music in all three styles, ranging from the choral harmonies of Anglicans to the unadorned chanting of Puritans and Separatists, which focused more on the text than music. “For [Separatists], music was just the handmaiden of worship,” Richard Pickering, Plimoth Plantation’s deputy director and the “Waking the Ancestors” program leader, tells Smithsonian.com. Attendees heard multiple versions of psalms sung in different styles and period accents—an attempt to illustrate the spiritual rifts and changes that occurred within what many think of as a homogenous group of colonists.

Those religious shifts were reflected in indigenous people as well. As Puritan missionaries like John Eliot began to organize indigenous people into townships based on religious beliefs, the sounds of Wampanoag worship changed.

“[Puritans were] so convinced of their own beliefs being the belief,” says Pickering. “Some [Wampanoag people] become Christian and some maintain their ancient faiths. There’s a very curious blending of both with some people. I don’t think you can even begin to grasp the complexity.”

“We’ve come through a lot in the past few centuries,” says Darius Coombs, who directs Plimoth Plantation’s Eastern Woodlands interpretation and research. “Christianity came along and that was pretty much put on us as native people. We had to go along with the flow and accept that.”

Coombs oversees the plantation’s Wampanoag Homesite, which showcases 17th-century life through the eyes of indigenous people—and unlike other areas of Plimoth Plantation, it is staffed not by role players, but by Wampanoag and other native people. He lent the perspective and traditions of native people to the program, which culminated in a traditional Stomp Dance designed to awaken past generations.

The arrival of colonists is inextricably linked with tragedy for the Wampanoag people, who were stricken with a series of epidemics after encountering Europeans, were slaughtered during a war against the English colonists, and whose language died almost entirely over time. But ironically, some of the very forces that endangered native peoples’ spiritual traditions during colonization helped bring back the Wampanoag language in the 21st century.

In 1992, Jessie Little Doe Baird, who belongs to the Wampanoag Nation’s Mashpee tribe, began having dreams in which her ancestors appeared to her speaking a language she could not understand. Compelled to bring back Wôpanâak, which had been little used since the 1830s, Baird and researchers from the Massachusetts Institute of Technology used a rare book by missionary John Eliot to reconstruct the language. Eliot, who earned the nickname “the Apostle of the American Indian” for his efforts to convert the area’s indigenous people, had produced his so-called “Indian Bible,” a translation of the King James Bible into the local indigenous language, as a tool of conversion. Centuries later, his book has helped the Wampanoag connect even more deeply to their past traditions.

Though Wôpanâak is being taught to children and indigenous people today with the help of the Wôpanâak Language Reclamation Project, it is fiercely guarded by the Wampanoag people and is rarely spoken in public. Toodie Coombs, Darius’ wife, spoke in the language in a moment that was not recorded out of respect for the language itself. “That was incredibly powerful,” says Pickering. Coombs agrees. “A lot of people think that language is just an object. You can’t [treat it] like that—it took us a century to get our language back.”

For Pickering, part of the program’s challenge was the need to portray the complexity—and pain—of early colonial and Native American interactions. “We always acknowledge the loss and anguish,” he says. “We always talk about the human cost, but we place an emphasis on persistence. There are native people among you, but for so long, native people were utterly invisible, even though in plain sight.” 

Coombs adds that, unlike other interpreters at Plimoth Plantation, his identity as a native person is not a costume or a role he can shed at the end of the day. “It’s not like a job we shut off at 5:00 and turn on at 9:00. We are the people 24 hours a day.” With that historical burden comes a personal one, too, he says—a responsibility to bring his own ancestors with him as he helps modern audiences imagine the sounds of nearly 400 years ago.

Meet the Colorful New Weapon Scientists Are Using to Save Toads From a Devastating Fungus

Smithsonian Magazine

Valerie McKenzie's hotel guests could only be described as extraordinarily high-maintenance.

First off, they each require individual portable plastic units, which come free-of-charge with a jaw-dropping view of Colorado's Collegiate Peaks Mountains. During the first half of October, they were also treated to painstakingly prepared, protein-packed daily meals, a two-week intensive probiotic bath treatment and a bi-weekly skin swab for microbiome analysis. Sadly, McKenzie's pampered charges weren’t able to express their appreciation for the royal treatment.

After all, the biologist at the University of Colorado, Boulder was running a “toad hotel.”

The fieldwork that McKenzie wrapped up in October has the potential to save billions of lives—amphibian lives, certainly, but possibly some human lives as well. She's hoping that the probiotic treatments she and her team administered to the toads in her hotels this fall could help give future toads a chance at fighting a deadly pathogen. 

For decades, frog, toad and salamander populations worldwide have been ravaged by a mysterious fungal pathogen called Batrachochytrium dendrobatidis (Bd). That's a problem, because amphibians—40 percent of which are at risk of imminent extinction worldwide—are crucial to healthy ecosystems. Journalist Elizabeth Kolbert helped bring this ecological crisis to the public’s attention in an article for the New Yorker in 2009, and later in her Pulitzer Prize-winning book The Sixth Extinction.

It’s not just that these amphibians keep insect populations in check and serve as food for larger predators. They are also especially sensitive to their environments, making them “indicator species,” or animals whose health and population fluctuations can be used to gauge wider environmental disruption and damage. As if that weren't bad enough, biologists are also concerned by the fact that frogs, toads and salamanders play a role in regulating mosquito populations, which carry devastating diseases from West Nile to Zika.

And from a commercial standpoint, “we’re potentially losing pharmaceuticals,” says Reid Harris, a James Madison University biologist and amphibian researcher. Harris is referring to the fact that frogs’ skin secretions might someday play a role in treatments for key human diseases like HIV. “Losing even one species is unacceptable, but we’re looking at losing 42 percent of species,” Harris says. In some places, he adds, the environment is already reacting to extinctions in unforeseen ways. “In Panama there has been a massive extinction, and now you see a lot more algae growing in streams there,” he says.

"Toad hotels" for treated and control toads. (Valerie McKenzie)

McKenzie’s work builds on research Harris started nearly a decade ago. In 2008, his lab discovered that J. lividum, a bacterium naturally found on the skin of many toad and frog species, had useful fungus-fighting effects. It seemed to hold off the Bd long enough for frog immune systems to kick in and finish the job.

Harris first found himself drawn to J. lividum after watching it turn a deep purple color in the lab, back when he was working with organic chemist Kevin P.C. Minbiole, now at Villanova University. “Any time a colony produced a color it got Kevin’s attention,” Harris says. He wanted to figure out the mechanism behind the color change.

As it turned out, the metabolite producing that hue change was key: While all the frogs he looked at had some J. lividum on them, only the ones who were bathed in a J. lividum solution were found to have the metabolite on them—and those were the ones that survived Bd exposure. All but one frog in the control group died.  

In 2010, Harris was involved in a field trial with J. lividum that went further. After Harris cultured a strain of the bacteria native to California mountain yellow-legged frogs, biologist Vance Vredenburg of San Francisco State University applied the treatment using plastic containers. The frogs treated with the bacteria were the only ones that survived a year. But in year two, trout ate the entire population. (The trout had been dropped into the water for recreational fishing.) Vredenburg never published the results.

McKenzie’s toad hotels—a project her team dubbed “Purple Rain” in memory of Prince and in homage to the bacteria’s color—involved bathing 150 wild boreal toads in a J. lividum probiotic solution, too. She began by isolating a native strain of the bacteria and demonstrating that it had a protective effect. The native component was key: “We don’t want to take a microbe from another part of the world and introduce it,” she explains.

In a paper published this September, McKenzie, Harris and several other researchers demonstrated that treating captive boreal toads with J. lividum showed no adverse health effects, and increased their likelihood of surviving Bd exposure by 40 percent. The paper emphasized the importance of maintaining healthy microbiome diversity in captive animal populations across the board—especially because we do not yet understand the myriad health-related roles that these bacteria play.

McKenzie’s team initially planned to treat captive-bred toads and release them into the wild, but a cold snap killed off that cohort of toads. The state of Colorado asked McKenzie’s team to work with a wild population of metamorphosing toads instead. “They were feeling quite desperate,” says McKenzie. “In the Collegiate Peaks area, boreal toads were thriving and uninfected until several years ago, when state biologists started detecting Bd in those sites.” Boreal toads were listed as endangered in Colorado in 1993.

Metamorphic toads from Brown's Creek, after treatment and sampling, just before they were to be released back into the wetland. (Stephanie Shively)

McKenzie’s lab is still waiting to get back the data that will tell them if J. lividum stayed on the toads in her toad hotels. She's hoping the bacteria will have stuck for at least two weeks. “Toads can become infected as tadpoles, but the Bd tends to remain on their mouth parts,” says McKenzie. “It spreads during metamorphosis. And during metamorphosis the toads are hanging out in giant congregations, so if there is one infected individual, the infection can spread quickly.”

She adds that “if there’s an epidemic during metamorphosis, it wipes out 90 percent of individuals.” In those cases, the youngest adults die out before they ever lay eggs. Her team won’t get a sense of the survival rate within the treatment group because they didn’t mark the toads they experimented with for recapture (there were too few to make that a likely possibility). But if any toads survive in the spring it will be a huge success.

The next step would be to treat several hundred or even thousands of toads, says McKenzie, and to mark those for recapture to better determine how effective J. lividum treatments are at protecting toads.  

While J. lividum treatments have shown promise for boreal toads and mountain yellow-legged frogs, they aren’t a magic bullet. For instance, they may not help all kinds of frogs, says Matt Becker, a frog researcher at the Smithsonian Conservation Biology Institute. Becker says he hasn’t seen success in treating captive Panamanian golden frogs with the bacteria. “The purple bacteria does not want to stick on their skin," he says.

There’s also the problem of rollout. “Each frog in its own Tupperware container, that’s not really going to work,” says Harris. Instead, he says scientists may someday transmit J. lividum or other probiotic treatments via water sources or by inoculating the treatment into the soil. “You can imagine scenarios where you augment the environment,” he says.

“When we get to populations where there are only a couple of strongholds left and we do targeted treatments, they may have a shot at persisting” or at least surviving a few more generations, says McKenzie. “That may give them a shot at continuing to evolve and adapt to the pathogen.” In other words, ultimately the goal isn’t to prop up amphibian populations indefinitely—but to buy them time. 

U.S. Life Expectancy Drops for Third Year in a Row, Reflecting Rising Drug Overdoses, Suicides

Smithsonian Magazine

On average, life expectancy across the globe is steadily ticking upward—but the same can’t be said for the United States. Three reports newly published by the Centers for Disease Control and Prevention highlight a worrying downward trend in Americans’ average life expectancy, with the country’s ongoing drug crisis and climbing suicide rates contributing to a third straight year of decline.

As Lenny Bernstein notes for The Washington Post, the three-year drop represents the longest sustained decline in expected lifespan since the tumultuous period of 1915 to 1918. Then, the decrease could be at least partially attributed to World War I and the devastating 1918 influenza pandemic. Now, the drivers are drug overdoses, which claimed 70,237 lives in 2017, and suicides, which numbered more than 47,000 over the same period. Both of these figures rose between 2016 and 2017.

“Life expectancy gives us a snapshot of the Nation’s overall health,” CDC Director Robert R. Redfield said in a statement, “and these sobering statistics are a wakeup call that we are losing too many Americans, too early and too often, to conditions that are preventable.”

According to Ars Technica’s Beth Mole, 2015 marked the first recorded drop in U.S. life expectancy since 1993, with Americans shaving an average of 0.1 years off of their lifespans. The same proved true in 2016 and 2017, Cathleen O’Grady writes in a separate Ars Technica piece, making the latest projection 78.6 years, down 0.2 years from 2015’s 78.8. Broken down by gender, men could expect to live an average of 76.1 years, down from 76.2 in 2016, while women could anticipate living until 81.1, the same age projected in 2016.

Although the country’s aging Baby Boomer population factored into the decline, Mike Stobbe of the Associated Press reports that increased deaths amongst younger and middle-aged individuals (particularly those between 24 and 44) had an outsized effect on calculations.

As Kathryn McHugh of Harvard Medical School tells NPR’s Richard Harris, “We're seeing the drop in life expectancy not because we're hitting a cap [for lifespans of] people in their 80s, [but] because people are dying in their 20s [and] 30s.”

The overall number of deaths across the U.S. totaled 2.8 million, or 69,255 more than in 2016, Erin Durkin notes for The Guardian. Of the top 10 leading causes of death—heart disease, cancer, unintentional injuries (drug overdoses constituted slightly less than half of this category in 2017), chronic lower respiratory disease, stroke, Alzheimer’s, diabetes, influenza and pneumonia, kidney disease, and suicide—only cancer witnessed a decrease in mortality rates. Seven, including suicide and unintentional injuries, experienced increases.

Josh Katz and Margot Sanger-Katz of The New York Times note that the rising number of overdose deaths corresponds with the growing use of synthetic opioids known as fentanyls. Deaths involving fentanyl increased more than 45 percent in 2017 alone, while deaths from legal painkillers remained stable from 2016 to 2017. To date, the overdose epidemic has wrought the most devastation in Northeast, Midwest and mid-Atlantic states.

Robert Anderson, chief of the Center for Health Statistics’ mortality branch, tells the Post’s Bernstein that the leveling off of prescription drug deaths may be the result of public health initiatives designed to curb the widespread availability and subsequent abuse of such medicines. Still, the rising prevalence of fentanyl, which is often mixed with heroin or falsely marketed as heroin, means the nation’s drug crisis is far from over.

In terms of deaths from suicide, Bernstein writes that there is a huge disparity between urban and rural Americans. The suicide rate amongst urban residents is 11.1 per 100,000 people, as opposed to rural residents’ 20 per 100,000.

“Higher suicide rates in rural areas are due to nearly 60 percent of rural homes having a gun versus less than half of homes in urban areas,” psychiatrist and behavioral scientist Keith Humphreys of Stanford University says. “Having easily available lethal means is a big risk factor for suicide.”

Speaking with NPR, disease prevention expert William Dietz of George Washington University stressed the links between overdoses and suicides. Both may occur amongst people “less connected to each other in communities” and are tied to a “sense of hopelessness, which in turn could lead to an increase in rates of suicide and certainly addictive behaviors.”

McHugh echoes Dietz, concluding, “There's a tremendous amount of overlap between the two that isn't talked about nearly enough.”

Why We're Giving People 20 Percent Doses of the Yellow Fever Vaccine

Smithsonian Magazine

Even as Zika dominates headlines, another mosquito-transmitted disease has been marching steadily across Africa: yellow fever. With over 900 confirmed cases and thousands more suspected in Angola and the Democratic Republic of Congo, health officials are scrambling to vaccinate populations in these areas in time to halt the virus’ spread. The problem: there isn’t enough of the vaccine to go around.

The yellow fever vaccine stockpile, which usually stands at 6 million doses, has already been depleted twice this year. Producing more takes nearly six months—time Africa doesn’t have. Last week, the dire situation led the World Health Organization to approve the use of a mini-dose—just 20 percent of the full vaccination—to help struggling populations make it through this latest epidemic.

According to WHO, the fractional dosing measure likely protects against the disease for at least 12 months, compared to the lifetime protection that regular vaccination usually affords. “We don't have any data on long-term durability,” says Anna Durbin, a researcher specializing in vaccines at Johns Hopkins Bloomberg School of Public Health. In fact, the vaccination decision illustrates a broken system when it comes to vaccine supply and demand.

Around 1 billion people in 46 countries are at risk for yellow fever, a mosquito-transmitted disease primarily found in South America and Africa that belongs to the same genus as Zika, Dengue and West Nile. About 15 percent of those who are infected fully develop the disease, whose symptoms include fever, chills, body aches, nausea, weakness and jaundice—the yellowing of the skin and eyes that inspired the virus’ name. Up to 50 percent die.

Once you have it, yellow fever is incurable; doctors can only treat the symptoms. But it can easily be prevented. A single dose of the highly-effective yellow fever vaccine can impart lifetime immunity. The yellow fever vaccine is a live attenuated vaccine, which means it contains a form of the live virus that has been altered to prevent it from causing disease. Injecting this hobbled virus stimulates the body to produce antibodies that guard against yellow fever infection.

This latest outbreak has proved to be unexpectedly virulent. “It is the largest outbreak [of yellow fever] that we've seen in a very, very long time,” says Durbin. The WHO and its partners have so far delivered an estimated 18 million vaccine doses to Angola, Democratic Republic of the Congo and Uganda. But it hasn’t been enough to quell the spread—hence the mini-doses. 

In the past, fractional dosing has been used successfully for rabies and is currently being used for polio, according to Sarah Cumberland, a spokesperson for WHO. Clinical trials have shown it elicits an antibody response similar to that of the full injection. In fact, some trials suggest that the dose can be reduced to as little as ten percent.

But no research has yet tested fractional dosing on children, notes Cumberland. It is still unclear how children respond to the vaccine, but some suggest they have a weaker response than adults, so the lower doses may not impart full immunity.

Aedes aegypti, the species of mosquito that transmits Zika and yellow fever, enjoying a blood meal. (Wikimedia Commons)

The latest recommendation for yellow fever is not a permanent mandate. Once vaccines become available again, WHO notes that doctors should return to full potency vaccines—and routine, preventative vaccinations—for all. “Vaccine shipments are being reprogrammed to prioritize the emergency response, but at the same time we are rescheduling vaccine supplies for routine vaccination,” says Cumberland. 

Yet at the root of this outbreak and the repeated vaccine shortages lurks a cyclical problem. As vaccine shortages grow, fewer people are routinely vaccinated and the population as a whole becomes more susceptible to the virus. This, in turn, could provoke more outbreaks that place even greater strain on the limited stores. “With the regular shortage of the vaccine, what we are seeing is less vaccine being given…as part of the routine immunization programs,” says Durbin. This lack of routine vaccination adds to the "vicious cycle" of perpetual shortage. 

Increasing production of the vaccine is no small task. Current methods rely on growing the weakened virus in chicken eggs, a nearly 80-year-old method that takes up to six months and requires pathogen-free eggs, which are hard to come by. Advancements in modern cell-culture technology may ultimately speed up yellow fever vaccine production. But making such a large change in production will take time and research to ensure the new products are safe.

The problem is, vaccines aren’t particularly profitable. They cost millions or billions of dollars to develop, and the resulting product is sold at low prices to impoverished regions. Plus, people only need one or two shots in a lifetime. 

“In the past, a lot of companies dropped out of making vaccines,” says Art Reingold, an epidemiologist at Berkeley School of Public Health who serves on the Advisory Committee on Immunization Practices. Ultimately, these companies realized that “they could make more profit by producing a drug that old people in the United States have to take every day of their life—to lower their cholesterol or their blood pressure or to give them an erection—than they could by making a vaccine to give to poor children that, when you give them one or two doses, they are protected for life,” he says.

As a result, today there are only six manufacturers worldwide producing yellow fever vaccines, and stores fall short nearly every year.

Fear and anti-vaccine sentiment further perpetuate these troubles, Reingold adds. Along with the cost of vaccination, fear also likely drives the black-market trade of fake yellow fever vaccination certificates, placing even more people at risk of contracting the disease.

But if we want vaccines, which have prevented millions of deaths and illnesses throughout history, then “somebody has to do the research, somebody needs to do the development, and somebody needs to invest the money in it,” says Reingold. If not, then these kinds of perpetual vaccine shortages will swiftly become the new normal. 

What Happened When Hong Kong's Schools Went Virtual to Combat the Spread of Coronavirus

Smithsonian Magazine

In the video, my son’s preschool teacher is sitting alone in an empty classroom, surrounded by wooden toy blocks. “When I am building, do I put the small block down and then the big block?” she asks the camera. “Or do I put the big block and then the small block?”

My 3-year-old son is lounging on the couch, half watching, half flipping through a pop-up book. He’s dressed in a fleece shark costume, his preferred attire when not forced to wear his school uniform.

This is what "school" looks like these days here in Hong Kong. Because of the coronavirus outbreak, all schools, including my son's private bilingual preschool, have been closed since January, and won’t reopen until late April at the earliest. "The exact date of class resumption is subject to further assessment," announced the Education Bureau, which controls all schools in Hong Kong, public and private, on February 25. It’s all part of the “social distancing” measures the city has mandated to slow the virus’s spread, which include closing libraries, museums and recreation facilities like pools. Students from preschoolers through PhD candidates are now doing all their education online, a move the Education Bureau calls "suspending classes without suspending learning."

As coronavirus spreads across the globe, other countries are joining Hong Kong and mainland China in this massive, unplanned experiment in online learning. According to UNESCO, as of Friday, 14 countries have shut schools down nationwide, affecting upwards of 290 million students, while 13 countries, including the United States, have seen localized school closings. In recent days, schools from Scarsdale, New York, to San Francisco have closed temporarily over contagion concerns. The University of Washington and Stanford University have turned to online classes for the remainder of the quarter, and others are following suit for various lengths of time. Some experts believe more widespread and long-term closures will be necessary in areas with high levels of community transmission. States are preparing for that possibility by looking at their own online learning policies.

A teacher edits a video lesson he recorded for his students. (Isaac Lawrence/AFP via Getty Images)

But what does online learning involve here in Hong Kong? It depends. The city benefits from high internet penetration—90 percent of citizens over 10 years old are online. But beyond that it gets more complicated. The city has a diverse variety of schools, from free government-run schools to partially subsidized English-language schools for non-Cantonese speakers to private religious and international schools. Hong Kong has no specific online curriculum, so schools are cobbling together their own solutions using a myriad of platforms and apps, from Google Classroom, a free web service for assigning and sharing work, to BrainPOP, a site offering animated educational videos. Some students are expected to work alongside their classmates in real time. Others are allowed to watch pre-recorded videos or complete emailed worksheets at their own pace. Some parents are happy with their setups. Others have taken to Facebook to commiserate over “mommy needs wine” memes. The situation can give some insight into what Americans might expect as some schools transition to online learning.

“I’ve been working from home the past four weeks, and it’s been incredibly insightful to actually see what’s going on, because normally I’m not in school,” says Anna Adasiewicz, a business development manager originally from Poland, who has lived in Hong Kong for 16 years. Her 12-year-old daughter attends a subsidized English-language school run by the English Schools Foundation, which runs 22 schools in Hong Kong.

Unlike my son and his shark costume, Adasiewicz’s daughter is expected to be “dressed appropriately” and sit at a table, not a couch, when she logs on to Google Classroom each morning. Her school has been using the free service to share assignments, monitor progress, and let students and teachers chat. They're also doing interactive lessons via Google Hangouts Meet, a virtual-meeting software made free in the wake of the coronavirus.

“I actually think she’s more focused with this approach,” Adasiewicz says. “She’s not distracted by other kids. The class sizes are normally about 30, so I imagine a typical teacher spends a good portion of the time on behavior management. Here the teacher can mute anyone!”

Cat Lao, a special education classroom assistant, whose daughters are 3, 6 and 8, has also been happy with the experience. Her youngest daughter is in a local preschool while her older two attend an English Schools Foundation primary school. Her middle daughter has been using the Seesaw app to share assignments with her teacher and receive feedback. Her eldest daughter has been using Google Classroom and Flipgrid, an app that lets teachers set topics or questions for students to respond to via video. This child especially appreciates the real-time Google Meets, Lao says, since she misses the social aspects of school.

“They’re still learning, and still part of their community as much as they can be,” she says.

But many parents are not happy to find themselves working as de facto part-time teachers.

“For parents who have to work from home, managing school can be quite a task,” says Pragati Mor, a teacher and mother of two young daughters who attend the French International School of Hong Kong.

Her children’s online learning program has been full of technological glitches, Mor says, which requires taking time from her own workday to fuss with unfamiliar programs.

“It needs adult supervision,” she says. “It can be quite daunting.”

Susan Bridges, an education professor at the University of Hong Kong who studies online learning, admits, “It’s a challenge; lots of parents are having to adjust their lifestyles to what feels like homeschooling.”

Research shows that it’s more difficult to keep students motivated online, which means teachers need to mix up their strategies, Bridges says. This can include making lectures shorter, and incorporating real-time quizzes and online small group work. Another problem is testing. If a teacher had planned a proctored exam, they may need to switch to an unsupervised type of assessment instead, such as a term paper. Then there’s the question of hands-on learning, which is especially important in some higher education fields, such as medicine or speech pathology.

“All of that field work that’s essential for our professional and clinical programs, all of these are very difficult to replace, so that’s a big challenge,” Bridges says.

Charles Taylor, the owner of an English-language tutoring center in Hong Kong’s New Territories district, has had to think outside the box to make online learning successful. Before coronavirus hit, he’d already begun using a virtual classroom platform called WizIQ to connect his students with classrooms in Southeast Asia, as a sort of online exchange program. This put him in a better position than many to jump directly to online learning, he says. The main challenge is keeping young children engaged without the physical presence of a teacher. To deal with this, he’s shortened class lengths from an hour to 30 minutes for his 5- and 6-year-old students.

“I think this situation is a really great opportunity for people to be utilizing technology in a more fundamental kind of way,” he says.

Successful online learning is all about “engagement and interaction,” Bridges says. The University of Hong Kong has been helping its professors create more dynamic online learning environments using video meeting platforms like Zoom and recording technology like Panopto, which make it possible to insert quizzes, PowerPoints and captions into pre-recorded lectures. Beyond that, class formats have been up to the individual professors.

But, as Bridges points out, privacy and space are major concerns. Professors are discovering that students won’t turn on their video cameras because they’re embarrassed to be sitting in their childhood bedrooms in front of old K-Pop posters. Zoom has a solution for this, as Bridges demonstrates to me. She turns on a digital background and suddenly she appears to be in a sunny, minimalist office, a potted plant on the desk behind her. Other than a slight pixelation of her face, it looks pretty real.

“These are just little fix-its,” she says.

Still, a digital background can’t change the stresses of multiple people learning and working in Hong Kong’s notoriously tiny apartments.

“It’s crowded, it’s complicated, there are demands on technology,” says Adasiewicz, whose husband, a lawyer, has also been working from home. “We had to update our router.”

A woman and a boy wear a mask as they play basketball on February 27, 2020, in Hong Kong. (Vernon Yuen/NurPhoto via Getty Images)

Childcare is a major issue as well. Many Hong Kongers are now returning to their offices after an extended period of working remotely, leaving children at home in front of screens. Some rely on their nannies—nearly half of Hong Kong families with children and a working mother employ a live-in “foreign domestic helper,” usually from the Philippines or Indonesia. Other families count on grandparents for childcare, which means elderly caregivers who may not speak English must serve as tech support.

And not all classes lend themselves to online education. It’s hard to teach physical education online, and missing out on exercise is a problem not only for obesity rates but also for vision. Hong Kong has one of the highest rates of myopia (near-sightedness) in the world, with some 70 percent of kids over 12 suffering, and experts believe it’s because children spend too much time indoors looking at close objects like books and tablets. For many kids, who live in crowded housing estates with little green space, schools’ tracks and rooftop basketball courts provide some of the few opportunities they have for outdoor play. Some schools are encouraging students to take frequent breaks to do mini-exercises like a minute of jumping jacks.

Many hope this experience will force Hong Kong schools to professionalize and standardize their online curricula. This could potentially provide a template for other cities and countries facing their own coronavirus school shutdowns.

“Could this crisis inspire the bureau [of education] to incorporate online learning into the official curriculum and take Hong Kong education to the next level?” wondered Chak Fu Lam, a professor of management at the City University of Hong Kong, in a letter to the editor of the South China Morning Post.

At the end of the day, most parents and teachers seem to understand the situation is out of their control, and that everyone is doing the best they can.

“We have to embrace technology,” Adasiewicz says. “It’s coming our way whether we like it or not.”

Unfortunately, it seems, so is coronavirus.

How the India Pale Ale Got Its Name

Smithsonian Magazine

The British Indian army was parched. Soaking through their khakis in the equatorial heat, they pined for real refreshment. These weren’t the jolly days of ice-filled gin-and-tonics, lawn chairs and cricket. The first Brits to come south were stuck with lukewarm beer—specifically dark, heavy porter, the most popular brew of the day in chilly Londontown, but unfit for the tropics. One Bombay-bound supply ship was saved from wrecking in the shallows when its crew lightened it by dumping some of its cargo — no great loss, a newspaper reported, "as the goods consisted principally of some heavy lumbersome casks of Government porter."

Most of that porter came from George Hodgson's Bow brewery, just a few miles up the river Lea from the East India Company's headquarters in east London. Outward bound, ships carried supplies for the army, who paid well enough for a taste of home, and particularly for beer, but the East India Company (EIC) made all its profit on the return trip, when its clippers rode low in the water, holds weighed with skeins of Chinese silk and sacks of cloves.

The trip to India took at least six months, crossing the equator twice. In these thousand-ton ships, called East Indiamen, the hold was a hellish cave, hazy with heat and packed gunwale to gunwale with crates and barrels that pitched and rolled and strained their ropes with every wave. While sailors sick from scurvy groaned above, the beer below fared just as poorly. It often arrived stale, infected, or worse, not at all, the barrels having leaked or broken — or been drunk — en route.

Hodgson sold his beer on 18-month credit, which meant the EIC could wait to pay for it until their ships returned from India, emptied their holds, and refilled the company's purses. Still, the army, and thus the EIC, was frustrated with the quality Hodgson was providing. Hodgson tried unfermented beer, adding yeast once it arrived safely in port. They tried beer concentrate, diluting it on shore. Nothing worked. Nothing, that is, until Hodgson offered, instead of porter, a few casks of a strong, pale beer called barleywine or "October beer." It got its name from its harvest-time brewing, made for wealthy country estates "to answer the like purpose of wine" — an unreliable luxury during years spent bickering with France. "Of a Vinous Nature" — that is, syrupy strong as good Sherry — these beers were brewed especially rich and aged for years to mellow out. Some lords brewed a batch to honor a first son's birth, and tapped it when the child turned eighteen. To keep them tasting fresh, they were loaded with just-picked hops. Barclay Perkins's KKKK ale used up to 10 pounds per barrel. Hodgson figured a beer that sturdy could withstand the passage to India.

He was right. His shipment arrived to fanfare. On a balmy January day in 1822, the Calcutta Gazette announced the unloading of "Hodgson's warranted prime picked ale of the genuine October brewing. Fully equal, if not superior, to any ever before received in the settlement." The army had been waiting for this — pale and bright and strong, those Kentish hops a taste of home (not to mention a scurvy-busting boost of antibiotics).

The praise turned Hodgson's sons Mark and Frederick, who took over the brewery from their father soon after, ruthless. In the years to come, if they heard that another brewer was preparing a shipment, they'd flood the market to drive down prices and scare off the competition. They tightened their credit limits and hiked up their prices, eventually dumping the EIC altogether and shipping beer to India themselves. The suits downriver were not amused. By the late 1820s, EIC director Campbell Marjoribanks, in particular, had had enough. He stormed into Bow's rival Allsopp with a bottle of Hodgson's October beer and asked for a replica.

Allsopp was good at making porter — dark, sweet, and strong, the way the Russians liked it. When Sam Allsopp tried the sample of Hodgson's beer Marjoribanks had brought, he spit it out — too bitter for the old man's palate. Still, India was an open market. Allsopp agreed to try a pale. He asked his maltster, Job Goodhead, to find the lightest, finest, freshest barley he could. Goodhead kilned it extra lightly, to preserve its subtle sweetness – he called it “white malt” – and steeped a test brew (legend has it) in a tea kettle. The beer that barley made was something special too: "a heavenly compound," one satisfied drinker reported. "Bright amber, crystal clear," he went on, with a "very peculiar fine flavor."

******

IPAs were high class. To recreate Allsopp’s legendary brew, I'd need the best ingredients available today, and that meant Maris Otter malt and Cascade hops. If your pint smells like a loaf of country bread, if you could almost eat your beer with a knife and fork and slice of sharp Wensleydale, if one sip swims in Anglicized visions of hearths and hay lofts, chances are these images are conjured by Maris Otter barley. Maris Otter is a touchstone for British and British-style beer. A hardy winter-harvested barley prized for its warm, full tones, its taste might be traditional, but its provenance is modern. Maris Otter was first developed in 1966 at the Plant Breeding Institute on Cambridge's Maris Lane. Those were dark days for British beer. Cheap, low-brow milds dominated the pubs, and an expensive grain like Maris Otter never quite caught on with big brewers. (Fuller's was an exception and Maris Otter is one reason its London Pride is so admired.) Maris Otter almost vanished. By the 1990s, no one was growing the barley at all. What grain stores were left in the few old-timers' barns was all that remained, the last aromatic breath of a golden age. Then, in 2002, two companies bought the rights to the heirloom strain, and Maris Otter started popping up again.

For hops, I went straight to the source. I met John Segal, Jr., a few years back over a plate of local duck at the Lagunitas Brewing Company's backyard beer garden in Petaluma, California. He was wearing a sterling silver, cowboy-style belt buckle emblazoned with a pair of twirling hop vines. Our conversation quickly turned to beer. Segal is a hops farmer in Washington's Yakima Valley, the hop world's Napa. The Segals are a dynasty there. John's dad wore a matching buckle. His son wears one too.

What Maris Otter is to British beer, Cascade hops are to American. Thanks to high-profile flagships like Sierra Nevada's Pale and Anchor Brewing's Liberty, American pales are defined by the spritzy grapefruit blossom nose of Cascade hops. And John Segal grew them first. As influential as Cascades are, they're relatively new. Like Maris Otter, their roots go back to the late '60s. The American hops industry had never fully recovered since the one-two of Prohibition and a plague of the hop-withering parasite downy mildew in the late 1920s wiped out the crop and many of its buyers. Farmers grew almost entirely Clusters, a workhorse bittering hop, leaving the specialty strains to Europe: Coors Light's image may have been all-American, but its spicy-sweet nose was decidedly Teutonic, from aromatic Czech and German strains like Hallertau Mittelfruh.

But when a fungus-spread epidemic of verticillium wilt in the 1950s cut the Mittelfruh harvest and inflated prices, American brewers — already wary of the Cluster monoculture's susceptibility to a similar outbreak — started pushing for homegrown diversity. Coors talked to the Department of Agriculture, who talked to some breeders, who talked to John Segal, who planted a few samples of a hybrid strain he called “USDA56013” in 1968. Four years of test brewing (and a name change) later, Coors bought Segal Ranch's first commercially available crop of Cascades, paying a dollar a pound at a time when most growers were lucky to get half that. Two years later, a fledgling San Francisco start-up called Anchor bought some for a new beer they were making, Liberty Ale. Liberty shocked American palates, the Cascade's citrus bite too aggressive for most. But growers saw its quality, and corresponding price, and Cascades soon swept the valley. Today, Liberty is a craft beer common denominator, and Cascades are an icon. I asked John for a sample, and a few days later a zip-tied bag of bright green leaves landed on my stoop.

******

I brewed carefully, watching my temperatures to the degree, lest my grains steep too hot and, like over-brewed tea, leach bitter tannins into the brew. I made sure not to boil my hops too vigorously or for too long, to keep as many of their fragile, volatile oils intact as possible. I carefully cleaned and sanitized a fermenter and added an all-purpose, classic yeast strain called "Whitbread Ale," a strain with none of an abbey yeast's fruit or a saison's pepper, described, lamb-like, as clean, mild, and delicate. I gave my beer time. I was gentle. I was patient. And then I sent my beer to India – symbolically.

First, safety: I added an extra handful of hops, a preservative boost for the aging time ahead. Then — no room for barrels in my galley-size kitchen, and no hold below-deck in my fourth-floor apartment — I simulated a wooden cask by sprinkling a handful of toasted oak chips into the fermenter. I banished the brew to the top of the fridge, the warmest, dustiest corner I could find.

Six months later, a bright January day felt equatorial enough to announce my IPA's arrival and dust off the jug for a taste. The beer-logged hops had settled to the bottom. A few wood chips remained afloat. In between, the beer was clear, pale, and sparkled through the dust. I siphoned out a glass and, opting against refrigeration in the name of authenticity, sipped it warm. I thought that months steeping with sodden leaves and lumber would stain the flavor of pure-bred hops and malt. I anticipated old and stale; traditional IPAs could not have been as great as the fantasy. Those thirsty soldiers would have relished any taste of home, their palates primed by want. Instead, the beer I made was fresh and flowery, finishing with just a touch of caramel sweetness, like a dusting of toasted coconut. Quenching and bright, a taste of spring in the dead of winter, a glimpse of the South Asian sun. What I thought would be flat tasted alive. Exactly as good beer should, no matter how old.

Editor's note, April 14, 2015: We have made a few slight changes to the above text to avoid confusion where there are discrepancies in the historical record and corrected the spelling of Frederick Hodgson's name.

The London Graveyard That’s Become a Memorial for the City’s Seedier Past

Smithsonian Magazine

London’s first red light district was on the south side of the River Thames, in the marshy, damp soils of the borough known as Southwark. There, in lands outside official London city limits, taverns, theaters, brothels and bear-baiting “amusements” flourished as popular forms of entertainment during the medieval era. Today, the South Bank is known for gleaming office towers, well-appointed cocktail bars and gastropubs; tourists flock to the Tate Modern museum in a repurposed power station, take in Shakespeare at the Globe Theatre and admire the area’s redevelopment. But the seamier side of Southwark history is recognized there too, in a small lot at the corner of Redcross Way.

Though rusted, the iron gates surrounding Cross Bones graveyard are festooned with ribbons, feathers, beads and other tokens commemorating those buried there. A plaque honoring “The Outcast Dead” was added in 2006, a more permanent version of a plaque said to have originally been placed on the gates by a group of Londoners in 1998. And every year since then, right around Halloween, these Southwark pilgrims re-enact a ritual drama to remember those whose final resting place is in Cross Bones, particularly the many prostitutes who are said to have been buried there during the Middle Ages.

Southwark’s association with prostitution goes back to the first century AD, when invading Roman soldiers used the area as a home base. Whorehouses operated in the area for centuries, through the Viking era and the Crusades, and became especially popular after the 12th-century construction of a permanent London Bridge brought a steady stream of commerce to the area’s taverns. By then, Southwark was controlled by the Bishop of Winchester, one of the oldest, richest and most important dioceses in England. Among other powers, the Bishop had the right to license and tax the borough’s prostitutes, who were derisively known as “Winchester Geese,” perhaps after their custom of baring their white breasts to entice customers. To be “bitten by a Winchester Goose” was to contract a sexually transmitted disease, likely syphilis or gonorrhea.

Southwark’s brothels—which numbered between a handful and 18, depending on the year—were known as “the stews,” and survived for centuries despite repeated attempts from the royal throne to close them down. The crown also tried controlling the brothels through regulation: In 1161, Henry II laid down 39 rules known as the "Ordinances Touching the Government of the Stewholders in Southwark Under the Direction of the Bishop of Winchester." The rules made sure the prostitutes were able to come and go at will, required that all new workers were registered, restricted their activities on religious holidays, prevented nuns and married women from joining, banned cursing, and prohibited the women from taking their own lovers for free. The penalty for the latter included fines, prison time, a dip on the “cucking stool” into raw sewage, and banishment from Southwark.

Although the Bishop of Winchester regulated and taxed the area’s prostitutes, Christian doctrine prevented them from being buried in consecrated ground. The first likely reference to Cross Bones as a cemetery for Southwark’s “geese” comes from Tudor historian John Stow, who wrote in his 1598 Survey of London: “I have heard of ancient men, of good credit, report, that these single women were forbidden the rites of the church, so long as they continued that sinful life, and were excluded from Christian burial, if they were not reconciled before their death. And therefore there was a plot of ground called the Single Woman’s churchyard, appointed for them far from the parish church.” 

“The stews” closed in the 17th century, and by the dawn of the Victorian era, Southwark was one of the worst slums in London, dense with crime and cholera, a place even policemen feared to tread. Cross Bones was repurposed into a pauper’s graveyard that served the parish of St. Saviour’s. In 1833, the antiquarian William Taylor wrote: “There is an unconsecrated burial ground known as the Cross Bones at the corner of Redcross Street, formerly called the Single Woman's burial ground, which is said to have been used for this purpose.” The area’s inhabitants led miserable lives, and suffered indignities even after death: Cross Bones was a favorite hunting ground for the bodysnatchers who unearthed corpses for use in anatomy classes at Southwark’s Guy's Hospital, among other places.

After the public complained that the overcrowded cemetery offended public health and decency, Cross Bones was closed in 1853 on the grounds that it was “completely overcharged with dead.” An 1832 letter from parish authorities had noted the ground was “so very full of coffins that it is necessary to bury within two feet of the surface,” and that “the effluviem is so very offensive that we fear the consequences may be very injurious to the surrounding neighborhood.” (At the time, people feared the city’s burgeoning population of foul-smelling corpses was partly responsible for its cholera epidemic. The true culprit, the water supply, was discovered later.) The land was sold for development 30 years later, but the sale was declared void under the Disused Burial Grounds Act of 1884. Locals resisted further attempts at development, although the land was briefly used as a fairground, until complaints about the showmen’s “steam organs and noisy music” became overwhelming.

The cemetery was more or less forgotten about until the 1990s, when the London Underground needed to build an electricity substation for the Jubilee Line extension on the site. Museum of London archeologists knew the land contained an old burial ground, and asked permission to excavate a small portion of the cemetery. They were given six weeks to complete the dig, in which they removed 148 skeletons from the top layers of the soil; by their estimate, less than one percent of the bodies packed beneath the ground. More than half of the skeletons the archeologists unearthed were from children, reflecting the high rates of infant mortality in that section of London during the 19th century, when Cross Bones served as a pauper’s cemetery. The scarred bones, encased in cheap coffins, showed that disease—including scurvy, syphilis and rickets—was rife. And the other 99 percent who remain underground? Their secrets will probably stay buried for generations more.

Image by Flickr user Porsupah Ree. A shrine marking London's Cross Bones Graveyard.

Image by Flickr user David Fisher. People hang tributes on the exterior of the Cross Bones Cemetery.

Image by Flickr user Garry Knight. Cross Bones is a place of complex modern rituals, meant to remember the women and children buried here, as well as mark recent history.

Image by Flickr user David Fisher. A plaque outside the gates of Cross Bones remembers its history as an unconsecrated graveyard for prostitutes.

Image by Flickr user G Travels. According to local historian Patricia Dark, the Cross Bones Cemetery "is a place where you can go and celebrate the people nobody remembers."

Meanwhile, author John Constable, a local poet and playwright, has begun his own work at Cross Bones. As Constable tells it, he was writing late one night in November, 1996, when he felt overtaken by a character he calls “The Goose,” the spirit of a medieval prostitute. She began dictating what would later become the first poem in Constable’s Southwark Mysteries:

For tonight in Hell
They are tolling the bell
For the Whore that lay at the Tabard,
And well we know
How the carrion crow
Doth feast in our Cross Bones Graveyard.

Constable says that later the same night, “the Goose” took him on a walk through the Southwark streets, whispering more poems, plays and songs in his ears, until the strange tour ended in a vacant lot. According to Constable, he didn’t know the lot contained Cross Bones until several years later. In fact, Constable insists that on that night in 1996, he had never heard of Cross Bones at all. 

The verse Constable wrote down that night was later published as the Southwark Mysteries and has been performed at Shakespeare’s Globe Theatre and Southwark Cathedral, both not far from where the “stews” once stood. The Southwark Mysteries also formed the centerpiece of the first Halloween ritual at Cross Bones in 1998. For 13 years, until 2010, a growing community around Cross Bones performed parts of the Southwark Mysteries, created altars to lost loved ones, and joined in a candle-lit procession that ended at the cemetery gates. The ritual now takes place in a more simplified form, as part of monthly vigils at the site. The International Union of Sex Workers has even called for Cross Bones to be the first World Heritage site dedicated to those in the sex trade. 

The modern rituals of remembrance at Cross Bones are complex, notes Patricia Dark, a Southwark historian and an archivist at Southwark Council. She points out that the identification of Cross Bones as a prostitute’s burial ground is more theory than proven fact, and rests primarily on Stow’s assertion in his Survey. And yet Cross Bones has become a potent site for remembrance because of more recent history, too. Southwark, once a vibrant riverside community filled with manufacturers, wharves, and warehouses, emptied out during the 1960s, when the rise of shipping containers greatly reduced the number of men necessary to work the docks. Redevelopment during the 1980s placed an emphasis on white-collar business, leaving little room for the remnants of Southwark’s working class community. “The Borough now has lots of shiny steel office towers,” Dark says, “and lots of upscale places for an office worker to get lunch or socialize after work, but very little that would support actual community life on a day-to-day basis—it's all a bit soulless. … I think that Crossbones, by its very nature ... is a place where you can go and celebrate the people nobody remembers. I'd argue that the act of doing that helps the people doing the remembering feel like they matter too.”

In 2007, Transport for London, which now owns the site, gave Constable access inside the gates, where he and other volunteers have created a wild garden. Today, an informal group known as the Friends of Cross Bones is working to ensure that a planned redevelopment of the site preserves the garden as a more permanent place of reflection and remembrance. While no final lease agreement has been signed, the Southwark Council Community Project Bank has pledged £100,000 to create such a garden, and Transport for London planning guidelines have promised to be “sympathetic to its heritage.” 

The community that has sprung up around Cross Bones is watching the developments closely. Monthly vigils to refresh the shrines at the site and honor the dead there continue, and several local homeless people have appointed themselves gatekeepers to keep desecration at bay. Constable has also developed a range of performances, workshops, and walks that continue to draw participants from London and beyond, many of whom choose to remember their own dead at the site. According to Constable, the rituals at Cross Bones are working to “heal the wound of history.” In some cases, they may also be a case of the community of today working to heal itself.

Fractal Patterns in Nature and Art Are Aesthetically Pleasing and Stress-Reducing

Smithsonian Magazine

Humans are visual creatures. Objects we call “beautiful” or “aesthetic” are a crucial part of our humanity. Even the oldest known examples of rock and cave art served aesthetic rather than utilitarian roles. Although aesthetics is often regarded as an ill-defined vague quality, research groups like mine are using sophisticated techniques to quantify it – and its impact on the observer.

We’re finding that aesthetic images can induce staggering changes to the body, including radical reductions in the observer’s stress levels. Job stress alone is estimated to cost American businesses many billions of dollars annually, so studying aesthetics holds a huge potential benefit to society.

Researchers are untangling just what makes particular works of art or natural scenes visually appealing and stress-relieving – and one crucial factor is the presence of the repetitive patterns called fractals.

Are fractals the key to why Pollock’s work captivates? (AP Photo/LM Otero)

Pleasing patterns, in art and in nature

When it comes to aesthetics, who better to study than famous artists? They are, after all, the visual experts. My research group took this approach with Jackson Pollock, who rose to the peak of modern art in the late 1940s by pouring paint directly from a can onto horizontal canvases laid across his studio floor. Although battles raged among Pollock scholars regarding the meaning of his splattered patterns, many agreed they had an organic, natural feel to them.

My scientific curiosity was stirred when I learned that many of nature’s objects are fractal, featuring patterns that repeat at increasingly fine magnifications. For example, think of a tree. First you see the big branches growing out of the trunk. Then you see smaller versions growing out of each big branch. As you keep zooming in, finer and finer branches appear, all the way down to the smallest twigs. Other examples of nature’s fractals include clouds, rivers, coastlines and mountains.
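
That branching rule is simple enough to sketch in a few lines of code. The following Python fragment is purely illustrative (the angles, scale factor, and recursion depth are arbitrary choices, not values drawn from the research described here), but it shows how repeating one simple rule at ever smaller scales produces the self-similar structure of a fractal tree.

import math

def branch(x, y, angle, length, depth, segments):
    # Each call adds one segment, then applies the same rule at a smaller scale.
    if depth == 0:
        return
    x2 = x + length * math.cos(angle)
    y2 = y + length * math.sin(angle)
    segments.append(((x, y), (x2, y2)))
    # Two smaller branches grow from the tip, tilted left and right.
    branch(x2, y2, angle + 0.4, length * 0.7, depth - 1, segments)
    branch(x2, y2, angle - 0.4, length * 0.7, depth - 1, segments)

segments = []
branch(0.0, 0.0, math.pi / 2, 100.0, 8, segments)
print(len(segments), "self-similar segments")  # 255 segments from one repeated rule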

In 1999, my group used computer pattern analysis techniques to show that Pollock’s paintings are as fractal as patterns found in natural scenery. Since then, more than 10 different groups have performed various forms of fractal analysis on his paintings. Pollock’s ability to express nature’s fractal aesthetics helps explain the enduring popularity of his work.
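
The article does not spell out those pattern analysis techniques, but a common way to quantify how fractal an image is involves box counting: overlay grids of progressively smaller boxes, count how many boxes contain part of the pattern, and estimate the fractal dimension from the slope of that relationship on a log-log plot. The sketch below is a simplified, generic illustration of that idea, assuming the pattern has already been reduced to a black-and-white NumPy array; it is not the research group's actual pipeline.

import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32, 64)):
    # Estimate the box-counting (fractal) dimension of a binary image.
    counts = []
    for size in box_sizes:
        # Trim so the image divides evenly into size-by-size boxes.
        h = (image.shape[0] // size) * size
        w = (image.shape[1] // size) * size
        boxes = image[:h, :w].reshape(h // size, size, w // size, size)
        # Count the boxes that contain at least one "painted" pixel.
        counts.append(boxes.any(axis=(1, 3)).sum())
    # The dimension is the slope of log(count) against log(1/size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled square should come out close to dimension 2.
demo = np.ones((256, 256), dtype=bool)
print(round(box_counting_dimension(demo), 2))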

The impact of nature’s aesthetics is surprisingly powerful. In the 1980s, architects found that patients recovered more quickly from surgery when given hospital rooms with windows looking out on nature. Other studies since then have demonstrated that just looking at pictures of natural scenes can change the way a person’s autonomic nervous system responds to stress.

Are fractals the secret to some soothing natural scenes? (Ronan, CC BY-NC-ND)

For me, this raises the same question I’d asked of Pollock: Are fractals responsible? Collaborating with psychologists and neuroscientists, we measured people’s responses to fractals found in nature (using photos of natural scenes), art (Pollock’s paintings) and mathematics (computer generated images) and discovered a universal effect we labeled “fractal fluency.”

Through exposure to nature’s fractal scenery, people’s visual systems have adapted to process fractals with ease. We found that this adaptation occurs at many stages of the visual system, from the way our eyes move to which regions of the brain get activated. This fluency puts us in a comfort zone and so we enjoy looking at fractals. Crucially, we used EEG to record the brain’s electrical activity and skin conductance techniques to show that this aesthetic experience is accompanied by stress reduction of 60 percent – a surprisingly large effect for a nonmedicinal treatment. This physiological change even accelerates post-surgical recovery rates.

Artists intuit the appeal of fractals

It’s therefore not surprising to learn that, as visual experts, artists have been embedding fractal patterns in their works through the centuries and across many cultures. Fractals can be found, for example, in Roman, Egyptian, Aztec, Incan and Mayan works. My favorite examples of fractal art from more recent times include da Vinci’s Turbulence (1500), Hokusai’s Great Wave (1830), M.C. Escher’s Circle Series (1950s) and, of course, Pollock’s poured paintings.

Although prevalent in art, the fractal repetition of patterns represents an artistic challenge. For instance, many people have attempted to fake Pollock’s fractals and failed. Indeed, our fractal analysis has helped identify fake Pollocks in high-profile cases. Recent studies by others show that fractal analysis can help distinguish real from fake Pollocks with a 93 percent success rate.

How artists create their fractals fuels the nature-versus-nurture debate in art: To what extent is aesthetics determined by automatic unconscious mechanisms inherent in the artist’s biology, as opposed to their intellectual and cultural concerns? In Pollock’s case, his fractal aesthetics resulted from an intriguing mixture of both. His fractal patterns originated from his body motions (specifically an automatic process related to balance known to be fractal). But he spent 10 years consciously refining his pouring technique to increase the visual complexity of these fractal patterns.

The Rorschach inkblot test relies on what you read into the image. (Hermann Rorschach)

Fractal complexity

Pollock’s motivation for continually increasing the complexity of his fractal patterns became apparent recently when I studied the fractal properties of Rorschach inkblots. These abstract blots are famous because people see imaginary forms (figures and animals) in them. I explained this process in terms of the fractal fluency effect, which enhances people’s pattern recognition processes. The low complexity fractal inkblots made this process trigger-happy, fooling observers into seeing images that aren’t there.

Pollock disliked the idea that viewers of his paintings were distracted by such imaginary figures, which he called “extra cargo.” He intuitively increased the complexity of his works to prevent this phenomenon.

Pollock’s abstract expressionist colleague, Willem De Kooning, also painted fractals. When he was diagnosed with dementia, some art scholars called for his retirement amid concerns that the disease would reduce the nurture component of his work. Yet, although they predicted a deterioration in his paintings, his later works conveyed a peacefulness missing from his earlier pieces. Recently, the fractal complexity of his paintings was shown to drop steadily as he slipped into dementia. The study focused on seven artists with different neurological conditions and highlighted the potential of using art works as a new tool for studying these diseases. To me, the most inspiring message is that, when fighting these diseases, artists can still create beautiful artworks.

Recognizing how looking at fractals reduces stress means it’s possible to create retinal implants that mimic the mechanism. (Nautilus image via www.shutterstock.com)

My main research focuses on developing retinal implants to restore vision to victims of retinal diseases. At first glance, this goal seems a long way from Pollock’s art. Yet, it was his work that gave me the first clue to fractal fluency and the role nature’s fractals can play in keeping people’s stress levels in check. To make sure my bio-inspired implants induce the same stress reduction when looking at nature’s fractals as normal eyes do, they closely mimic the retina’s design.

When I started my Pollock research, I never imagined it would inform artificial eye designs. This, though, is the power of interdisciplinary endeavors – thinking “out of the box” leads to unexpected but potentially revolutionary ideas.

The Gory Origins of Valentine's Day

Smithsonian Magazine

On Feb. 14, sweethearts of all ages will exchange cards, flowers, candy, and more lavish gifts in the name of St. Valentine. But as a historian of Christianity, I can tell you that at the root of our modern holiday is a beautiful fiction. St. Valentine was no lover or patron of love.

Valentine’s Day, in fact, originated as a liturgical feast to celebrate the decapitation of a third-century Christian martyr, or perhaps two. So, how did we get from beheading to betrothing on Valentine’s Day?

Early origins of St. Valentine

Ancient sources reveal that there were several St. Valentines who died on Feb. 14. Two of them were executed during the reign of Roman Emperor Claudius Gothicus in 269-270 A.D., at a time when persecution of Christians was common.

How do we know this? Because an order of Belgian monks spent three centuries collecting evidence for the lives of saints from manuscript archives around the known world.

They were called Bollandists after Jean Bolland, a Jesuit scholar who began publishing the massive 68-folio volumes of “Acta Sanctorum,” or “Lives of the Saints,” in 1643.

Since then, successive generations of monks continued the work until the last volume was published in 1940. The Brothers dug up every scrap of information about every saint on the liturgical calendar and printed the texts arranged according to the saint’s feast day.

The Valentine martyrs

The volume encompassing Feb. 14 contains the stories of a handful of “Valentini,” the earliest three of whom died in the third century.

St. Valentine blessing an epileptic (Wellcome Images, CC BY)

The earliest Valentinus is said to have died in Africa, along with 24 soldiers. Unfortunately, even the Bollandists could not find any more information about him. As the monks knew, sometimes all that the saints left behind was a name and day of death.

We know only a little more about the other two Valentines.

According to a late medieval legend reprinted in the “Acta,” which was accompanied by Bollandist critique about its historical value, a Roman priest named Valentinus was arrested during the reign of Emperor Gothicus and put into the custody of an aristocrat named Asterius.

As the story goes, Asterius made the mistake of letting the preacher talk. Father Valentinus went on and on about Christ leading pagans out of the shadow of darkness and into the light of truth and salvation. Asterius made a bargain with Valentinus: If the Christian could cure Asterius’s foster-daughter of blindness, he would convert. Valentinus put his hands over the girl’s eyes and chanted:

“Lord Jesus Christ, en-lighten your handmaid, because you are God, the True Light.”

Easy as that. The child could see, according to the medieval legend. Asterius and his whole family were baptized. Unfortunately, when Emperor Gothicus heard the news, he ordered them all to be executed. But Valentinus was the only one to be beheaded. A pious widow, though, made off with his body and had it buried at the site of his martyrdom on the Via Flaminia, the ancient highway stretching from Rome to present-day Rimini. Later, a chapel was built over the saint’s remains.

St. Valentine was not a romantic

The third third-century Valentinus was a bishop of Terni in the province of Umbria, Italy.

St. Valentine kneeling (David Teniers III)

According to his equally dodgy legend, Terni’s bishop got into a situation like the other Valentinus by debating a potential convert and afterward healing his son. The rest of the story is quite similar as well: He, too, was beheaded on the orders of Emperor Gothicus and his body buried along the Via Flaminia.

It is likely, as the Bollandists suggested, that there weren’t actually two decapitated Valentines, but that two different versions of one saint’s legend appeared in both Rome and Terni.

Nonetheless, African, Roman or Umbrian, none of the Valentines seems to have been a romantic.

Indeed, medieval legends, repeated in modern media, had St. Valentine performing Christian marriage rituals or passing notes between Christian lovers jailed by Gothicus. Still other stories romantically involved him with the blind girl whom he allegedly healed. Yet none of these medieval tales had any basis in third-century history, as the Bollandists pointed out.

St. Valentine baptizing St. Lucilla (Jacopo Bassano (Jacopo da Ponte))

In any case, historical veracity did not count for much with medieval Christians. What they cared about were stories of miracles and martyrdoms, and the physical remains or relics of the saint. To be sure, many different churches and monasteries around medieval Europe claimed to have bits of a St. Valentinus’ skull in their treasuries.

Santa Maria in Cosmedin in Rome, for example, still displays a whole skull. According to the Bollandists, other churches across Europe also claim to own slivers and bits of one or the other St. Valentinus’ body: For example, San Anton Church in Madrid, Whitefriar Street Church in Dublin, the Church of Sts. Peter and Paul in Prague, Saint Mary’s Assumption in Chelmno, Poland, as well as churches in Malta, Birmingham, Glasgow, and on the Greek isle of Lesbos, among others.

For believers, relics of the martyrs signified the saints’ continuing, invisible presence among communities of pious Christians. In 11th-century Brittany, for instance, one bishop used what was purported to be Valentine’s head to halt fires, prevent epidemics, and cure all sorts of illnesses, including demonic possession.

As far as we know, though, the saint’s bones did nothing special for lovers.

Unlikely pagan origins

Many scholars have deconstructed Valentine and his day in books, articles and blog postings. Some suggest that the modern holiday is a Christian cover-up of the more ancient Roman celebration of Lupercalia in mid-February.

Lupercalia originated as a ritual in a rural masculine cult involving the sacrifice of goats and dogs and evolved later into an urban carnival. During the festivities half-naked young men ran through the streets of Rome, streaking people with thongs cut from the skins of newly killed goats. Pregnant women thought it brought them healthy babies. In 496 A.D., however, Pope Gelasius supposedly denounced the rowdy festival.

Still, there is no evidence that the pope purposely replaced Lupercalia with the more sedate cult of the martyred St. Valentine or any other Christian celebration.

Chaucer and the love birds

The love connection probably appeared more than a thousand years after the martyrs’ deaths, when Geoffrey Chaucer, author of “The Canterbury Tales,” linked the February feast of St. Valentinus to the mating of birds. He wrote in his “Parlement of Foules”:

“For this was on seynt Volantynys day. Whan euery bryd comyth there to chese his make.”

It seems that, in Chaucer’s day, English birds paired off to produce eggs in February. Soon, nature-minded European nobility began sending love notes during bird-mating season. For example, the French Duke of Orléans, who spent some years as a prisoner in the Tower of London, wrote to his wife in February 1415 that he was “already sick of love” (by which he meant lovesick.) And he called her his “very gentle Valentine.”

English audiences embraced the idea of February mating. Shakespeare’s lovestruck Ophelia spoke of herself as Hamlet’s Valentine.

In the following centuries, Englishmen and women began using Feb. 14 as an excuse to pen verses to their love objects. Industrialization made it easier with mass-produced illustrated cards adorned with smarmy poetry. Then along came Cadbury, Hershey’s, and other chocolate manufacturers marketing sweets for one’s sweetheart on Valentine’s Day.

Valentine’s Day chocolates (GillianVann/Shutterstock.com)

Today, shops everywhere in England and the U.S. decorate their windows with hearts and banners proclaiming the annual Day of Love. Merchants stock their shelves with candy, jewelry and Cupid-related trinkets begging “Be My Valentine.” For most lovers, this request does not require beheading.

Invisible Valentines

It seems that the erstwhile saint behind the holiday of love remains as elusive as love itself. Still, as St. Augustine, the great fifth-century theologian and philosopher, argued in his treatise on “Faith in Invisible Things,” someone does not have to be standing before our eyes for us to love them.

And much like love itself, St. Valentine and his reputation as the patron saint of love are not matters of verifiable history, but of faith.

The Army's First Black Nurses Were Relegated to Caring for Nazi Prisoners of War

Smithsonian Magazine

On the summer afternoon in 1944 that 23-year-old Elinor Powell walked into the Woolworth’s lunch counter in downtown Phoenix, it never occurred to her that she would be refused service. She was, after all, an officer in the U.S. Army Nurse Corps, serving her country during wartime, and she had grown up in a predominantly white, upwardly mobile Boston suburb that didn’t subject her family to discrimination.

But the waiter who turned Elinor away wasn’t moved by her patriotism. All he saw was her brown skin. It probably never occurred to him that the woman in uniform came from a family with a history of service: Elinor’s father had served in the First World War, and another relative had fought with the Union Army during the Civil War. The only thing that counted at that moment—and in that place, where Jim Crow laws remained in force—was the waiter’s perception of a black army nurse as not standing on equal footing with his white customers.

Infuriated and humiliated, Elinor left Woolworth’s and returned to POW Camp Florence, in the Arizona desert. She was stationed there to look after German prisoners of war, who had been captured in Europe and Northern Africa and then sent across the Atlantic Ocean, for detainment in the United States during World War II.

Elinor, like many other black nurses in the Army Nurse Corps, was tasked with caring for German POWs—men who represented Hitler’s racist regime of white supremacy. Though their presence is rarely discussed in American history, from 1942 to 1946, there were 371,683 German POWs scattered across the country in more than 600 camps. Some POWs remained until 1948.

And these POWs were kept busy. Prisoners of war, under rules set by the Geneva Convention, could be made to work for the detaining power. And, with millions of American men away serving in the military, there was a significant labor shortage in the United States. Farms, plants, canneries, and other industries needed workers.

For black nurses, the assignment to take care of German POWs—to tend to Nazis—was deeply unwelcome. To the African-American women who had endured the arduous process of being admitted into the U.S. Army Nurse Corps, this assignment felt like a betrayal. They volunteered to serve to help wounded American soldiers, not the enemy.

Long before World War II, black nurses had been struggling to serve their country. After the United States declared war on Germany in 1917, black nurses tried to enroll in the Red Cross, which was then the procurement agency for the Army Nurse Corps. The Red Cross rejected them, because they didn’t have the required membership in the American Nurses Association (ANA), which didn’t allow blacks to join at the time. A few black nurses eventually served in the First World War, but not because they were finally admitted into the Army Nurse Corps. The 1918 flu epidemic wiped out so many thousands of people that a handful of black nurses were called to assist.

More than two decades later, after Hitler invaded Poland, the United States began an aggressive war preparedness program, and the Army Nurse Corps expanded its recruiting process. Wanting to serve their country and receive a steady military income, thousands of black nurses filled out applications to enlist. They received the following letter:

“Your application to the Army Nurse Corps cannot be given favorable consideration as there are no provisions in Army regulations for the appointment of colored nurses in the Corps.”

The rejection letter was a crushing blow, but also an honest appraisal of how the country regarded black nurses: They weren’t valued as American citizens or seen as fit to wear a military uniform.

The National Association of Colored Graduate Nurses (NACGN)—an organization founded in 1908 for black registered nurses as an alternative to the ANA, which still hadn’t extended its membership to black nurses—challenged the letter. And with political pressure from civil rights groups and the black press, 56 black nurses were finally admitted into the U.S. Army Nurse Corps in 1941. Some went to Fort Livingston in Louisiana and others to Fort Bragg, in North Carolina, both segregated bases.

When Elinor Powell entered the army in 1944, she completed her basic training an hour outside of Tucson, Arizona, at Fort Huachuca, which had become the largest military installation for black soldiers and nurses. The army had a strict quota for black nurses, and only 300 of them served in the entire Army Nurse Corps, which had 40,000 white nurses. The military made it clear that it didn’t really want black women to serve at all.

German POWs in Camp Florence, Arizona, circa 1944-1946 (Photo courtesy of Chris Albert)

Elinor’s cohort of newly trained Army nurses soon received shocking news: There had been too much fraternization between white nurses and German POWs at Camp Florence. So the Army was bringing in black nurses as replacements.

POW camps would become an ongoing assignment for the majority of African-American nurses. The remainder were stationed at segregated bases with black soldiers, who mostly performed maintenance and menial jobs during the war, and understood what it meant to wear a U.S. military uniform and still be treated like a second-class citizen.

Life for a black army nurse at a POW camp could be lonely and isolated. The camps in the South and Southwest, in particular, strictly enforced Jim Crow. The list of complaints from black nurses included being routinely left out of officer meetings and social functions, and being forced to eat in segregated dining halls. The trips to nearby towns were also degrading because of establishments that either relegated blacks to subpar seating and service or barred them from entering altogether.

At the hospitals in the POW camps, black nurses weren’t that fulfilled either. A great many of the prisoners were in good health, which had been a requirement to make the trans-Atlantic journey in the first place, so the black nurses weren’t utilized to full capacity. There were typical bedside nursing duties and occasional appendectomies performed, but rarely were there critical cases.

In some ways, from a social standpoint, the German POWs fared better than the black nurses. Local white residents, U.S. Army guards and officers were friendly toward them—a level of respect that black laborers, soldiers, and nurses did not experience with any regularity.

When German prisoners first arrived in the United States, many were shocked by the racial hierarchy entrenched in American culture. They saw the segregated bathrooms and restricted dining halls at train stations, and during their days-long journeys to their respective POW camps had black train attendants bringing them food and drinks and calling them “sir.” It was clear that in the United States, there was an inherent expectation of subservience to whites, even to those from Hitler’s army.

Once at camp, life for German POWs, for the most part, was comfortable. From the clean accommodations and regular meals, to the congeniality of Americans, some POWs were relieved to have been captured. And the interactions with black nurses were largely civilized.

But there were occasions when black nurses found themselves humiliated by German POWs and not backed up by the U.S. Army. At Camp Papago Park, outside of Phoenix, a German POW said he hated “niggers” in front of a black nurse. She reported the incident to the commanding officer, expecting a swift reprimand. The nurse later discovered the commanding officer didn’t think any punishment was necessary. She complained about the incident in a letter to the National Association of Colored Graduate Nurses:

“That is the worst insult an army officer should ever have to take. I think it is insult enough to be here taking care of them when we volunteered to come into the army to nurse military personnel…All of this is making us very bitter.”

Meanwhile, even though black nurses were underutilized, there was an urgent need for more nurses to care for the returning American soldiers, wounded in battle. Nevertheless, white nurses were tasked to tend to Americans almost exclusively. Yes, thousands of white nurses also had POW camp assignments—there were very few black women in the Army Nurse Corps. But if a black unit could replace a white one at a camp, the swap was made.

As the war entered its final year, the numbers of wounded men rose exponentially. President Roosevelt made the alarming announcement of legislation to establish a nursing draft in his State of the Union Address on January 6, 1945. Radio announcements said the draft would be instituted unless 18,000 additional nurses volunteered.

At the time of the president’s address, there were 9,000 applications from black nurses hoping to enlist in the Army Nurse Corps. But those nurses weren’t counted toward the goal, nor did they forestall FDR’s announcement—to the dismay of the NACGN, the black press and civil rights organizations.

Congressman Adam Clayton Powell Jr., the esteemed minister from Harlem, famously denounced the decision: “It is absolutely unbelievable that in times like these, when the world is going forward, that there are leaders in our American life who are going backward. It is further unbelievable that these leaders have become so blindly and unreasonably un-American that they have forced our wounded men to face the tragedy of death rather than allow trained nurses to aid because these nurses’ skins happen to be of a different color.”

Elinor and Frederick, summer 1947 (Photo courtesy of Chris Albert)

The draft legislation stalled in the Senate and the conscription of nurses never occurred. But with morale among black army nurses reaching record lows, the NACGN approached First Lady Eleanor Roosevelt for help, given her commitment to equal rights. And the meeting was a success.

In the final year of the war, black nurses were no longer assigned exclusively to POW camps. After a few months they were transferred to army hospitals for wounded American soldiers.

Elinor remained at POW Camp Florence for the duration of the war, and fell in love with a German prisoner, Frederick Albert. While fellow Americans humiliated her with segregation, a German, of all people, uplifted her. The two shunned the racist policies of Jim Crow and Nazism, seeking solace in a forbidden romance. They would spend their lives together in constant search of a community that accepted them, more than 20 years before laws banning interracial marriage were struck down in the 1967 Loving v. Virginia decision.

By war’s end, only about 500 black nurses had served in the U.S. Army Nurse Corps during WWII, even though thousands had applied. Despite the discrimination they faced, black army nurses demonstrated a persistent will to be a part of the U.S. Army Nurse Corps and serve their country. Their efforts paid off when President Truman issued an executive order to desegregate the entire military in 1948.

And by 1951, the National Association of Colored Graduate Nurses dissolved into the American Nurses Association, which had extended its membership to all nurses regardless of race.

The Doctor Who Starved Her Patients to Death

Smithsonian Magazine

Today the little town of Olalla, a ferry’s ride across Puget Sound from Seattle, is a mostly forgotten place, the handful of dilapidated buildings a testament to the hardscrabble farmers, loggers and fisherman who once tried to make a living among the blackberry vines and Douglas firs. But in the 1910s, Olalla was briefly on the front page of international newspapers for a murder trial the likes of which the region has never seen before or since.

At the center of the trial was a woman with a formidable presence and a memorable name: Dr. Linda Hazzard. Despite little formal training and a lack of a medical degree, she was licensed by the state of Washington as a “fasting specialist.” Her methods, while not entirely unique, were extremely unorthodox. Hazzard believed that the root of all disease lay in food—specifically, too much of it. “Appetite is Craving; Hunger is Desire. Craving is never satisfied; but Desire is relieved when Want is supplied,” she wrote in her self-published 1908 book Fasting for the Cure of Disease. The path to true health, Hazzard wrote, was to periodically let the digestive system “rest” through near-total fasts of days or more. During this time, patients consumed only small servings of vegetable broth, their systems “flushed” with daily enemas and vigorous massages that nurses said sometimes sounded more like beatings.

Despite the harsh methods, Hazzard attracted her fair share of patients. One was Daisey Maud Haglund, a Norwegian immigrant who died in 1908 after fasting for 50 days under Hazzard’s care. Haglund left behind a three-year-old son, Ivar, who would later go on to open the successful Seattle-based seafood restaurant chain that bears his name. But the best-remembered of Hazzard’s patients are a pair of British sisters named Claire and Dorothea (known as Dora) Williamson, the orphaned daughters of a well-to-do English army officer.

As Olalla-based author Gregg Olsen explains in his book Starvation Heights (named after the locals’ term for Hazzard’s institute), the sisters first saw an ad for Hazzard’s book in a newspaper while staying at the lush Empress Hotel in Victoria, British Columbia. Though not seriously ill, the pair felt they were suffering from a variety of minor ailments: Dorothea complained of swollen glands and rheumatic pains, while Claire had been told she had a dropped uterus. The sisters were great believers in what we might today call “alternative medicine,” and had already given up both meat and corsets in an attempt to improve their health. Almost as soon as they learned of Hazzard’s Institute of Natural Therapeutics in Olalla, they became determined to undergo what Claire called Hazzard’s “most beautiful treatment.”

The institute’s countryside setting appealed to the sisters almost as much as the purported medical benefits of Hazzard’s regimen. They dreamed of horses grazing the fields, and vegetable broths made with produce fresh from nearby farms. But when the women reached Seattle in February 1911 after signing up for treatment, they were told the sanitarium in Olalla wasn’t quite ready. Instead, Hazzard set them up in an apartment on Seattle’s Capitol Hill, where she began feeding them a broth made from canned tomatoes. A cup of it twice a day, and no more. They were given hours-long enemas in the bathtub, which was covered with canvas supports when the girls started to faint during their treatment.

By the time the Williamsons were transferred to the Hazzard home in Olalla two months later, they weighed about 70 pounds, according to one worried neighbor. Family members would have been worried too, if any of them had known what was going on. But the sisters were used to family disapproving of their health quests, and told no one where they were going. The only clue something was amiss came in a mysterious cable to their childhood nurse, Margaret Conway, who was then visiting family in Australia. It contained only a few words, but seemed so nonsensical the nurse bought a ticket on a boat to the Pacific Northwest to check up on them.

Dr. Hazzard’s husband Samuel Hazzard (a former Army lieutenant who served jail time for bigamy after marrying Linda) met Margaret in Vancouver. Aboard the bus to their hotel, Samuel delivered some startling news: Claire was dead. As Dr. Hazzard later explained it, the culprit was a course of drugs administered to Claire in childhood, which had shrunk her internal organs and caused cirrhosis of the liver. To hear the Hazzards tell it, Claire had been much too far gone for the “beautiful treatment” to save her.

Margaret Conway wasn’t trained as a doctor, but she knew something was amiss. Claire’s body, embalmed and on display at the Butterworth mortuary near Pike Place Market, looked like it belonged to another person—the hands, facial shape, and color of the hair all looked wrong to her. Once she was in Olalla, Margaret discovered that Dora weighed only about 50 pounds, her sitting bones protruding so sharply she couldn’t sit down without pain. But she didn’t want to leave Olalla, despite the fact that she was clearly starving to death.

The horrors revealed in Dora’s bedroom were matched by the ones in Hazzard’s office: the doctor had been appointed the executor of Claire’s considerable estate, as well as Dora’s guardian for life. Dora had also signed over her power of attorney to Samuel Hazzard. Meanwhile, the Hazzards had helped themselves to Claire’s clothes, household goods, and an estimated $6,000 worth of the sisters’ diamonds, sapphires and other jewels. Dr. Hazzard even delivered a report to Margaret concerning Dora’s mental state while dressed in one of Claire’s robes.

Margaret got nowhere trying to convince Dr. Hazzard to let Dora leave. Her position as a servant hindered her—she often felt too timid to contradict those in a class above her—and Hazzard was known for her terrible power over people. She seemed to hypnotize them with her booming voice and flashing dark eyes. In fact, some wondered if Hazzard’s interest in spiritualism, theosophy and the occult  had given her strange abilities; perhaps she hypnotized people into starving themselves to death?

In the end it took the arrival of John Herbert, one of the sisters’ uncles, whom Margaret had summoned from Portland, Oregon, to free Dora. After some haggling, he paid Hazzard nearly a thousand dollars to let Dora leave the property. But it took the involvement of the British vice consul in nearby Tacoma—Lucian Agassiz—as well as a murder trial to avenge Claire’s death.

As Herbert and Agassiz would discover once they started researching the case, Hazzard was connected to the deaths of several other wealthy individuals. Many had signed large portions of their estates over to her before their deaths. One, former state legislator Lewis E. Rader, even owned the property where her sanitarium was located (its original name was “Wilderness Heights”). Rader died in May 1911, after being moved from a hotel near Pike Place Market to an undisclosed location when authorities tried to question him. Another British patient, John “Ivan” Flux, had come to America to buy a ranch, yet died with $70 to his name. A New Zealand man named Eugene Wakelin was also reported to have shot himself while fasting under Hazzard’s care; Hazzard had gotten herself appointed administrator of his estate, draining it of funds. In all, at least a dozen people are said to have starved to death under Hazzard’s care, although some claim the total could be significantly higher.

On August 15, 1911, Kitsap County authorities arrested Linda Hazzard on charges of first-degree murder for starving Claire Williamson to death. The following January, Hazzard’s trial opened at the county courthouse in Port Orchard. Spectators crowded the building to hear servants and nurses testify about how the sisters had cried out in pain during their treatments, suffered through enemas lasting for hours, and endured baths that scalded at the touch. Then there was what the prosecution called “financial starvation”: forged checks, letters, and other fraud that had emptied the Williamson estate. To make matters darker, there were rumors (never proven) that Hazzard was in league with the Butterworth mortuary, and had switched Claire’s body with a healthier one so no one could see just how skeletal the younger Williamson sister had been when she died.

Hazzard herself refused to take any responsibility for Claire’s death, or the deaths of any of her other patients. She believed, as she wrote in Fasting for the Cure of Disease, that “[d]eath in the fast never results from deprivation of food, but is the inevitable consequence of vitality sapped to the last degree by organic imperfection.” In other words, if you died during a fast, you had something that was going to kill you soon anyway. In Hazzard’s mind, the trial was an attack on her position as a successful woman, and a battle between conventional medicine and more natural methods. Other names in the natural health world agreed, and several offered their support during her trial. Henry S. Tanner, a doctor who fasted publicly for 40 days in New York City in 1880, offered to testify in order to “hold up the [conventional] medical fraternity to the derision of the world.” (He was never given the chance.)

Though extreme, Hazzard’s fasting practice drew on a well-established lineage. As Hazzard noted in her book, fasting for health and spiritual development is an ancient idea, practiced by both yogis and Jesus Christ. The ancient Greeks thought demons could enter the mouth during eating, which helped encourage the idea of fasting for purification. Pythagoras, Moses and John the Baptist all recognized the spiritual power of the fast, while Cotton Mather thought prayer and fasting would solve the Salem "witchcraft" epidemic.

The practice experienced a revival in the late-19th century, when a doctor named Edward Dewey wrote a book called The True Science of Living, in which he said that "every disease that afflicts mankind [develops from] more or less habitual eating in excess of the supply of gastric juices." (He also advocated what he called the “no-breakfast plan.”) Dewey's patient and later publisher, Charles Haskel, declared himself "miraculously cured" after a fast, and his own book, Perfect Health: How to Get It and How to Keep It, helped promote the idea of starving yourself for your own good. Even Upton Sinclair, author of The Jungle, got into the act with his non-fiction book The Fasting Cure, published in 1911. And the idea of fasting your way into health is still around, of course: today there are juice cleanses, extreme calorie deprivation diets, and the breatharians, who try to live on light and air alone.

Back in 1912, the jury in Hazzard’s trial was unmoved by her claims of politically motivated persecution. After a short period of deliberation, they returned a verdict of manslaughter. Hazzard was sentenced to hard labor at the penitentiary in Walla Walla, and her medical license was revoked. (For reasons unknown, she was later pardoned by the governor, although her license was never reinstated.) She served two years, fasting in prison to prove the value of her regimen, and then moved to New Zealand to be near supporters. In 1920, she returned to Olalla to finally build the sanitarium of her dreams, calling the building a “school for health.”

The institute burned to the ground in 1935, and three years later, Hazzard, then in her early ’70s, fell ill and undertook a fast of her own. It failed to restore her to health, and she died shortly thereafter. Today, all that remains of her sanitarium are a 7-foot-tall concrete tower and the ruins of the building’s foundation, both now choked with ivy. The location of her downtown Seattle offices, the Northern Bank and Trust building at Fourth and Pike, still stands, the shoppers and tourists that swarm the streets below blissfully unaware of the schemes once plotted above.

How Woodrow Wilson’s War Speech to Congress Changed Him – and the Nation

Smithsonian Magazine

A group of activists calling themselves the Emergency Peace Federation visited the White House on February 28, 1917, to plead with their longtime ally, President Woodrow Wilson. Think of his predecessors George Washington and John Adams, they told him. Surely Wilson could find a way to protect American shipping without joining Europe’s war.

If they had met with him four months earlier, they would have encountered a different man. He had run on peace, after all, winning re-election in November 1916 on the slogan “He kept us out of war.” Most Americans had little interest in sending soldiers into the stalemated slaughter that had ravaged the landscapes of Belgium and France since 1914. Wilson, a careful, deliberative former professor, had even tried to convince England and Germany to end World War I through diplomacy throughout 1916. On January 22, speaking before the U.S. Senate, he had proposed a negotiated settlement to the European war, a “peace without victory.”

What the peace delegation didn’t fully realize was that Wilson, caught in a series of events, was turning from a peace proponent to a wartime president. And that agonizing shift, which took place over just 70 days in 1917, would transform the United States from an isolated, neutral nation to a world power.

“The President’s mood was stern,” recalled Federation member and renowned social worker Jane Addams, “far from the scholar’s detachment.” Earlier that month, Germany had adopted unrestricted submarine warfare: Its U-boats would attack any ship approaching Britain, France, and Italy, including neutral American ships. The peace delegation hoped to bolster Wilson’s diplomatic instincts and to press him to respond without joining the war. William I. Hull, a former student of Wilson’s and a Quaker pacifist, tried to convince Wilson that he, like the presidents who came before him, could protect American shipping through negotiation.

But when Hull suggested that Wilson try to appeal directly to the German people, not their government, Wilson stopped him.

“Dr. Hull,” Wilson said, “if you knew what I know at the present moment, and what you will see reported in tomorrow morning’s newspapers, you would not ask me to attempt further peaceful dealings with the Germans.”

Then Wilson told his visitors about the Zimmermann Telegram.

“U.S. BARES WAR PLOT,” read the Chicago Tribune’s headline the next day, March 1, 1917. “GERMANY SEEKS AN ALLIANCE AGAINST US; ASKS JAPAN AND MEXICO TO JOIN HER,” announced the New York Times. German foreign minister Arthur Zimmermann’s decoded telegram, which Wilson’s administration had leaked to the Associated Press, instructed the German ambassador in Mexico to propose an alliance. If the U.S. declared war over Germany’s unrestricted submarine warfare, Zimmermann offered to “make war together” with Mexico in exchange for “generous financial support and an understanding on our part that Mexico is to reconquer the lost territory in Texas, New Mexico, and Arizona” (ceded under the Treaty of Guadalupe Hidalgo that ended the Mexican-American War nearly 70 years earlier).

Until the dual shocks of unrestricted submarine warfare and the Zimmermann Telegram, Wilson had truly intended to keep the United States out of World War I. But just 70 days later, on April 2, 1917, he asked Congress to declare war on Germany. Wilson’s agonized decision over that period permanently changed America’s relationship with the world: He forsook George Washington's 124-year precedent of American neutrality in European wars. His idealistic justifications for that decision helped launch a century of American military alliances and interventions around the globe.

In his January speech, Wilson had laid out the idealistic international principles that would later guide him after the war. Permanent peace, he argued, required governments built on the consent of the governed, freedom of the seas, arms control and an international League of Peace (which later became the League of Nations). He argued that both sides in the war—the Allies, including England and France, and the Central Powers, including Germany—should accept what he called a “peace without victory.” The alternative, he argued, was a temporary “peace forced upon the loser, a victor’s terms imposed upon the vanquished.” That, Wilson warned, would leave “a sting, a resentment, a bitter memory” and build the peace on “quicksand.”

But nine days later, at 4 p.m. on January 31, the German ambassador in Washington informed the U.S. State Department that his nation would begin unrestricted submarine warfare—which threatened American commerce and lives on the Atlantic Ocean—at midnight. “The President was sad and depressed,” wrote Wilson’s adviser Edward House in his diary the next day. “[He] said he felt as if the world had suddenly reversed itself; that after going from east to west, it had begun to go from west to east and that he could not get his balance.”

Wilson cut off diplomatic relations with Germany, but refused to believe war was inevitable. “We do not desire any hostile conflict with the Imperial German Government,” he told Congress on February 3. “We are the sincere friends of the German people and earnestly desire to remain at peace with the Government which speaks for them. We shall not believe that they are hostile to us unless and until we are obliged to believe it.”

Though most Americans weren’t eager to fight, Wilson’s critics raged at his inaction. “I don’t believe Wilson will go to war unless Germany literally kicks him into it,” former President Theodore Roosevelt, who had failed in his bid to re-take the White House in 1912, wrote to U.S. Senator Henry Cabot Lodge.

Then, on February 23, came the “kick.” That day, the British government delivered a copy of the Zimmermann Telegram to Walter Hines Page, the American ambassador in London. It was the espionage coup of the war. Britain’s office of naval intelligence had intercepted and partially decoded it in January, and a British spy’s contact in a Mexican telegraph office had stolen another copy on February 10. Page stayed up all night drafting a message to Wilson about the telegram and its origins. When Zimmermann’s message arrived from London at the State Department in D.C. on Saturday night, February 24, Acting Secretary of State Frank L. Polk took it directly to the White House. Wilson, Polk recalled later, showed “much indignation.”

Four days later, when Wilson met with the peace activists, he revealed that his thoughts about how to bring about a lasting peace had changed. He told them, according to Addams’ recollection in her memoir, that “as head of a nation participating in the war, the President of the United States would have a seat at the Peace Table, but that if he remains the representative of a neutral country he could at best only ‘call through a crack in the door.’”

The telegram inflamed American public opinion and turned the nation toward war. Yet even then, the deliberative Wilson was not quite ready. His second inaugural address, delivered March 5, asked Americans to abandon isolationism. “We are provincials no longer,” he declared. “The tragic events of the 30 months of vital turmoil through which we have just passed have made us citizens of the world. There can be no turning back. Our own fortunes as a nation are involved whether we would have it so or not.” Today, Wilson’s address reads like a prelude to war—but at the time, pacifists like Addams heard it as a continuation of his focus on diplomacy.

When Wilson met with his cabinet on March 20, he was still undecided. But two events the previous week added to his calculus. German U-boats had sunk three American ships, killing 15 people. And the ongoing turmoil in Russia had forced Nicholas II to abdicate the throne, ending 300 years of Romanov rule. The czar’s abdication had ceded power to a short-lived provisional government created by the Russian legislature. That meant that all of the Allied nations in World War I were now democracies fighting a German-led coalition of autocratic monarchies.

The cabinet unanimously recommended war. Wilson left without announcing his plans. “President was solemn, very sad!” wrote Secretary of the Navy Josephus Daniels in his diary.

Wilson likely made his decision that night. On March 21, he set a date with Congress for a special session on April 2 on “grave matters of national policy.” Alone, Wilson wrote his speech by hand and by typewriter.

According to a story that appears in many Wilson biographies, the president invited his friend Frank Cobb, editor of the New York World, to the White House on the night before his speech. Wilson revealed his anguish to his friend. He’d tried every alternative to war, he said, and he feared Americans would forsake tolerance and freedom in wartime. In words that echoed his speech to the Senate, Wilson said he still feared that a military victory would prove hollow over time.

“Germany would be beaten and so badly beaten that there would be a dictated peace, a victorious peace,” Wilson said, according to Cobb. “At the end of the war there will be no bystanders with sufficient power to influence the terms. There won’t be any peace standards left to work with.” Even then, Wilson said, “If there is any alternative, for God’s sake, let’s take it!” (Cobb’s account, given to two fellow journalists and published after his death in 1924, is so dramatic that some historians think it’s not authentic. Other historians find it credible.)

On April 2, when Wilson came to the podium at the Capitol, no one but House and perhaps Wilson’s wife, Edith, knew what he would say. He asked Congress to “declare the recent course of the Imperial German Government to be in fact nothing less than war against the government and people of the United States,” and to “formally accept the status of belligerent.” He recounted Germany’s submarine attacks and called the Zimmermann Telegram evidence of “hostile purpose.” He also declared the German government a “natural foe of liberty.” His speech’s most famous phrase would resound through the next century, through American military victories and quagmires alike: “The world must be made safe for democracy.”

Cheers resounded through the House chamber. Later that week, Congress declared war, by a vote of 373 to 50 in the House and 82 to 6 in the Senate.

But after the speech, back at the White House, Wilson was melancholy. “My message today was a message of death for our young men,” Wilson said—and then broke into tears. “How strange it seems to applaud that.” (His secretary, Joseph Tumulty, recorded the president’s words in his 1921 memoir. But as with Cobb’s dramatic anecdote, there is doubt among historians about the story’s veracity.)

All in all, 116,516 Americans died in World War I among about nine million deaths worldwide. (More would die from the flu epidemic of 1918 and pneumonia than on the battlefield.) Wilson’s own administration struck blows against freedom and tolerance during the war, imprisoning anti-war activists such as socialist Eugene Debs. And at the Versailles conference of 1919, Wilson became one of the victors dictating peace terms to Germany. His earlier fears that such a peace would not last eerily foreshadowed the conflicts that eventually erupted into another world war.

Wilson’s high-minded argument that the U.S. should fight World War I to defend democracy has been debated ever since. A different president might have justified the war on simple grounds of self-defense, while diehard isolationists would have kept America neutral by cutting its commercial ties to Great Britain. Instead, Wilson’s sweeping doctrines promised that the United States would promote stability and freedom across the world. Those ideas have defined American diplomacy and war for the last 100 years, from World War II and NATO to Vietnam and the Middle East. A century later, we’re still living in Woodrow Wilson’s world. 

Plague hits Mouse Town, USA!

National Museum of American History

Antibodies are always looking out for us, and this week we're taking a closer look at them. Antibody-based tests, vaccines, and drugs have dramatically influenced American history, culture, and quality of life. Smallpox, polio, and syphilis, once constant threats, are now distant memories for many, and recent antibody-based therapies continue to further the human battle against disease. This is the third post in our series. Read the first on pregnancy tests and the second on celebratory an-tee-bodies t-shirts.

In the 1940s a silent killer stalked the streets of a very small town near San Francisco. On one side of town, the residents fell ill and succumbed to a deadly plague. On the other side, the townsfolk were untouched, and went about their lives as if nothing were wrong. Looming over this stricken town was a white-clad figure with the power of life and death at his fingertips.

A black and white print depicting skeletons dancing gleefully despite the decay of their bodies. One skeleton plays some variation of the clarinet.

The town was "Mouse Town," a laboratory where only mice lived, the giant figure was Karl F. Meyer, and the plague was, truly, the plague: Yersinia pestis, the bacterium responsible for bubonic plague. It was the same disease that killed more than half the population of Europe in the 14th century. But from 1936 to 1946, the plague was well-controlled and studied in a pristine laboratory at the University of California at San Francisco.

Yersinia pestis has plagued humans for thousands of years. The bacterium is transmitted by fleas, and can persist undetected in reservoirs of rodent hosts between human outbreaks. The disease presents in several forms, but the most common is bubonic plague, in which the lymphatic system is infected. The lymph nodes become swollen and blackened, producing buboes in the armpits and groin of the victim. Without proper treatment, bubonic plague is fatal in around 90% of cases. Pneumonic plague, contracted from airborne particles, takes up residence in the lungs and is almost invariably fatal within a few days, if untreated. Many diseases have been referred to as plagues, but only Yersinia pestis is the plague.

Four objects: three are wide jars with painted decoration and words in another language; the fourth is a separate photo of a round brownish ball with the texture of an avocado, a small brown piece of paper affixed to it.

Faced with such a deadly disease, people have tried practically everything to prevent or cure it. The collections of the Division of Medicine and Science include several attempts at plague remedies. Scorzonera, also known as viper's grass, was used for both plague and snake bite. The black root of this plant has a pronounced flavor of oyster. If that didn't work, the patient might try a sweeter medicine, the electuarium de ovo. This paste of honey, herbs, and egg was supposed to treat plague.

A more esoteric approach to plague treatment was a bezoar—a "stone" formed from indigestible material in the stomach of an animal. When pulverized or dissolved in vinegar, the bezoars were long thought to possess medicinal or magical properties. Unfortunately, neither bezoars nor the herbal remedies were effective against plague.

A man in a white lab coat and black tie with shiny black gloves stands in a room filled with binders and glass jars. He concentrates as he holds something white.

Which brings us to Mouse Town . . .

Even at the beginning of the 20th century, we still had no effective treatment for plague. American scientist Karl F. Meyer had been working on plague research for decades by the time he performed his Mouse Town experiments in 1946. The work required a specially constructed isolation facility from which the disease could not escape. In a secured room, a glass box eight feet long, four feet wide, and three feet deep surrounded the entire town. The little mouse village was divided into two sections, with 100 mice on each side of the wall. On the east side, the mice had plain water to drink, but those on the west side were given water dosed with sulfadiazine, an early antibiotic. The entire town was then invaded by a force of 800 plague-infected fleas.

A weathered teal and white paper box covered with text laying next to a lighter colored object of the same size with six holes in it and six square shaped light pink items contained in a clear wrap

A plague epidemic quickly took hold of the east end of town. Within nine days, the entire population of East Mouse Town was dead. Meanwhile, the mice on the west side, with their prophylactic doses of sulfadiazine, went on with their lives. Only 10 of them grew ill and died, probably because they got too small a dose of antibiotic, or too large a dose of the plague bacteria. Based on Meyer's research, drug therapies were introduced in China and India, hotspots of plague infection.

Meyer's end goal, however, was to develop vaccines that he hoped could immunize people against plague once and for all. Plague vaccines using killed bacteria have been in use since the 1890s. But due to the complex way in which Yersinia pestis attacks the body, it has been difficult to develop a vaccine that is effective against all forms of the disease. Meyer's improved vaccines saw action among the U.S. armed forces in Korea and Vietnam.

A clear vial with a white cap and label next to a white box with a blue line running down the side

Currently, there is no plague vaccine that is approved for general human use in the United States, although a vaccine is administered to armed forces personnel assigned to high-risk areas. Fortunately, most people have little chance of encountering plague, and prophylactic antibiotics still provide additional protection.

Among those who do have to worry about catching plague are prairie dogs. In their close-knit cities, prairie dogs have long been noted as potential carriers of the disease—and they can transmit plague to people. In 1933, fears of plague prompted ranchers in West Texas to try to wipe prairie dogs out entirely, and by 1937, the U.S. Public Health Service dispatched rolling laboratories throughout the western states to collect and examine rodents for fleas and signs of plague.

A lean ferret leaps after a prairie dog that is trying to leave its hole in a hurry

Controlling plague among prairie dogs became even more important when it was discovered that the rodents were transmitting the disease to one of their predators, the critically endangered black-footed ferret. Once declared extinct, the ferrets have been mounting a recovery in recent decades. To protect them from contracting plague from their dinners, various methods have been attempted: ferrets have been captured and vaccinated, and prairie dog colonies have been dusted with pesticide to kill fleas. Now a more modern method is being tested. The U.S. Fish and Wildlife Service plans to use drones to deliver vaccine-infused peanut butter pellets to black-footed ferret habitats. The irresistible treats will hopefully provide prairie dogs with immunity so the ferrets can continue dining on them without a side of plague.

Charles Richter is a guest blogger for the Division of Medicine and Science and a Ph.D. candidate in American religious history at The George Washington University.

On Monday, October 16, 2017, join us to learn about the vaccines of Dr. Maurice Hilleman. They changed American history—yet few of us now know his name.

Explore the Antibody Initiative website to see the museum's rich collections, which span the entire history of antibody-based therapies and diagnostics. 

The Antibody Initiative was made possible through the generous support of Genentech.

Author(s): 
guest blogger Charles Richter
Posted Date: 
Monday, October 16, 2017 - 10:00

How Samuel Mudd Went From Lincoln Conspirator to Medical Savior

Smithsonian Magazine

Fort Jefferson looks like a postcard version of paradise: a burnished brick fortress built on a coral island, circled by turquoise ocean stretching to the horizon in every direction. Magnificent frigatebirds and pelicans are the only permanent residents of the fort, which forms the heart of Dry Tortugas National Park, 70 miles west of Key West in the Gulf of Mexico. But 150 years ago, this was America’s largest military prison—and home to one of its most infamous men.

During the Civil War, Samuel A. Mudd was a surgeon and tobacco farmer in southern Maryland, a hotbed of Confederate sympathy. Thirty-one years old, with reddish hair, Mudd had a wife, Sarah, four young children and a brand-new house when John Wilkes Booth, on the run after assassinating Abraham Lincoln, came to his farm needing medical help in the early morning hours of April 15, 1865. Though Mudd proclaimed his innocence in the assassination plot, testimony during his trial for conspiracy revealed that he had met Booth at least once prior to the murder, and setting Booth’s broken leg did him no favors. His fate sealed, Mudd received a life sentence in federal prison.

Three other Lincoln conspirators were convicted with Mudd. Samuel Arnold and Michael O’Laughlen, former Confederate soldiers from Baltimore, received life sentences for helping Booth concoct a plan—never carried out—to kidnap Lincoln. Edward (or Edman) Spangler, a carpenter, worked for John T. Ford at Ford’s Theatre and got six years for helping Booth escape. In July 1865, the four men were sent to Fort Jefferson in irons.

“We thought that we had at last found a haven of rest, although in a government Bastile [sic], where, shut out from the world, we would dwell and pass the remaining days of our life. It was a sad thought, yet it had to be borne,” Arnold wrote in his memoir.

Built in the 1840s, Fort Jefferson defended American waters from Caribbean pirates; during the war, the fort remained with the Union and blockaded Confederate ships trying to enter the Gulf of Mexico. Arched ports called casemates, arranged in three tiers around the fort’s six sides, had space for 420 heavy guns. Outside the massive walls, a seawater moat and drawbridge guarded the sally port, the fortress’ single entrance.

After the war, the Army transformed the fortress into a prison. Vacant casemates became open-air cells for more than 500 inmates serving time for desertion, mutiny, murder and other offenses. In July 1865, when the conspirators arrived, 30 officers and 531 enlisted men continued to augment the fort’s defenses, using prisoner labor to hoist cannons into position, build barracks and powder magazines, continue excavating the moat and repair masonry.

Mudd shared a cell with O’Laughlen, Arnold and Spangler. They had full view of the comings and goings of the fort’s inhabitants across the parade ground, the fort’s central field, as well as the arrival of the supply boats, which brought food, letters and newspapers. It was comfortable compared to the “dungeon,” a first-floor cell where Mudd was sent temporarily after he tried, and failed, to escape on a supply boat in September 1865. There, one small window overlooked the moat, where the fort’s toilets emptied.

Mudd suffered through a monotonous diet of bread, coffee, potatoes and onions; he refused to eat the imported meat, which spoiled quickly in the humid warmth. Bread consisted of “flour, bugs, sticks and dirt,” Arnold carped. Mudd complained about the squalid conditions in letters to his wife. “I am nearly worn out, the weather is almost suffocating, and millions of mosquitos, fleas, and bedbugs infest the whole island. We can’t rest day or night in peace for the mosquitos,” he wrote.

Fort Jefferson provided an unusually fertile breeding ground for the pests, including Aedes aegypti, the mosquito that carries the yellow fever virus. Because there was no natural source of drinking water—the “dry” in Dry Tortugas—the fort installed steam condensers to desalinate seawater. The fresh water was then stored in open barrels in the parade ground. “Those steam condensers are one of the main reasons yellow fever came about at the fort,” says Jeff Jannausch, lead interpreter for the Yankee Freedom III, the ferry that brings visitors to the Dry Tortugas today.

Image by Kat Long. Built in the 1840s, Fort Jefferson defended American waters from Caribbean pirates.

Image by Kat Long. During the Civil War, the fort remained with the Union and blockaded Confederate ships trying to enter the Gulf of Mexico.

Image by Kat Long. A wide view of modern-day Fort Jefferson.

Image by Kat Long. The beautiful scenery was no solace for prisoners at Fort Jefferson.

Image by Kat Long. Mudd shared his cell with three other Lincoln conspirators.

Image by Kat Long. A landmarker at Fort Jefferson.

Image by Kat Long. Vacant casemates became open-air cells for more than 500 inmates serving time for desertion, mutiny, murder and other offenses.

Image by Library of Congress. Portrait of Samuel Mudd, believed to be taken when he worked in Fort Jefferson's carpentry shop.

In the mid-19th century, though, no one knew what caused yellow fever or how it spread. The most popular theory held that bad air or “miasmas” brought on the high fever and delirium; bleeding from eyes, nose and ears; digested blood that came up as “black vomit,” and the jaundice that gave the fever its name.

The first case emerged on August 18, 1867, and there were three more by August 21. By this time, the number of prisoners at Fort Jefferson had dwindled to 52, but hundreds of officers and soldiers remained stationed there. Cases spread. Thirty men in Company M got sick in a single night. “Quite a panic exists among soldiers and officers,” Mudd worried.

Without knowing the precise cause of the fever, the fort’s commanding officer, Major Val Stone, focused on containing the outbreak among the inhabitants as best he could. For men already showing symptoms, Stone had the post physician, Joseph Sim Smith, set up a makeshift quarantine hospital on Sand Key, a tiny island two-and-a-half miles away. Two companies were shipped to other keys to keep them from the contagion, and two remained to guard the inmates. “Prisoners had to stand the brunt of the fever, their only safety being an overruling Providence,” Arnold wrote in a 1902 newspaper article.

That left 387 souls at the fort. Smith contracted the fever on September 5 and died three days later. Mudd volunteered to take over the main hospital at Fort Jefferson, but not without some bitterness toward the government that had imprisoned him.  “Deprived of liberty, banished from home, family and friends, bound in chains,” Mudd wrote, “for having exercised a simple act of common humanity in setting the leg of a man for whose insane act I had no sympathy, but which was in line with my professional calling. It was but natural that resentment and fear should rankle in my heart.” But once committed, he threw himself into the patients’ care.

Mudd, like most doctors of the time, believed in purging and sweating to treat fevers. He administered calomel, a mercury-based drug that induced vomiting, and followed up with a dose of Dover’s Powder, which contained ipecac and opium to encourage sweating. He permitted patients to drink warm herbal teas, but no cold water.

He also closed down the Sand Key quarantine and treated those patients at the main hospital, believing—correctly—that isolating them would ensure their deaths and do nothing to stop the fever’s spread. “Mudd demanded clean bedding and clothes for the sick. Before he took over, when someone died they’d throw the next patient into the same bed,” says Marilyn Jumalon, a docent at the Dr. Mudd House Museum in Maryland. “He implemented a lot of the hygienic steps that saved people’s lives.”

By October 1, nearly all of the fort’s inhabitants were ill, and an elderly doctor from Key West arrived to help Mudd with the cascade of cases. “The fever raged in our midst, creating havoc among those dwelling there. Dr. Mudd was never idle. He worked both day and night, and was always at post, faithful to his calling,” Arnold wrote.

Through his exertions, the number of deaths remained remarkably low. Of 270 cases, only 38 people, or 14 percent, died—including conspirator Michael O’Laughlen. In comparison, mortality rates from other outbreaks in the second half of the 19th century were much worse. In 1873, yellow fever hit Fort Jefferson again, and this time 14 of 37 infected men died—a mortality rate of nearly 38 percent. In an 1853 epidemic in New Orleans, 28 percent of those afflicted died; in Norfolk and Portsmouth, Virginia in 1855, 43 percent; and in Memphis in 1878, 29 percent.

A grateful survivor, Lieutenant Edmund L. Zalinski, thought Mudd had earned clemency from the government. He petitioned President Andrew Johnson. “He inspired the hopeless with courage, and by his constant presence in the midst of danger and infection, regardless of his own life, tranquillized the fearful and desponding,” Zalinski wrote. “Many here who have experienced his kind and judicious treatment can never repay him.” Two hundred ninety-nine other officers and soldiers signed it.

Mudd sent a copy of the petition to his wife Sarah, who had visited Johnson several times to plead for her husband’s release, and she circulated it around Washington. In January 1869, a delegation of Maryland politicians met with Johnson at the White House and echoed Mrs. Mudd’s entreaty. They delivered a copy of the petition, and further argued that Mudd, Arnold and Spangler should be pardoned because they had nothing to do with planning Lincoln’s assassination.

The tide of public opinion was turning toward clemency, and Zalinski’s account gave Johnson leverage against critics. On February 8, 1869, less than a month before he would leave office and President-elect Grant would take over, President Johnson summoned Mrs. Mudd to the White House and gave her a copy of the pardon.

His life sentence dismissed, Mudd departed Fort Jefferson forever on March 11 of that year aboard the aptly named steamer Liberty. Spangler and Arnold were freed later that month.

The doctor, just 35 but appearing much older, returned to his family in Maryland—but his presence is still vivid at Fort Jefferson. A plaque mounted in the dungeon where Mudd battled mosquitos echoes his official pardon. “Samuel A. Mudd devoted himself to the care and cure of the sick…and earned the admiration and gratitude of all who observed or experienced his generous and faithful service to humanity.”

How Chicago Transformed From a Midwestern Outpost Town to a Towering City

Smithsonian Magazine

In 1833, Chicago was a wilderness outpost of just 350 residents, clumped around a small military fort on soggy land where the Chicago River trickled into Lake Michigan. The site was known to local natives as Chigagou, or the “wild garlic place.” By the end of the century, this desolate swamp had been transformed into a modern metropolis of 1.7 million, known the world over for its dense web of railroads, cruelly efficient slaughterhouses, fiery blast furnaces, and soaring skyscrapers.

Chicago’s rise was so sudden and so astounding that many observers concluded it must have been predestined by nature or God, a view that echoed the 19th-century belief in the inevitability of American expansion and progress known as Manifest Destiny. In 1880, for instance, the former lieutenant governor of Illinois, William Bross, told members of the Chicago Historical Society that, “He who is the Author of Nature selected the site of this great city.” In 1923, in an address to the Geographical Society of Chicago, a University of Chicago geographer, J. Paul Goode, argued that the city’s location made its growth inevitable. His talk was titled “Chicago: A City of Destiny.”

Nature had, indeed, endowed Chicago with a crucial locational advantage: The city sits between the Great Lakes and Mississippi River watersheds, making it possible for people working or living there to travel by boat all the way to the Atlantic Ocean or to the Gulf of Mexico. But geography alone would not secure the city’s destiny: Chicago’s growth, like that of many other American cities, was also predicated on government-led engineering projects—and the mastery of our most essential resource, water. Between the 1830s and 1900, lawmakers, engineers, and thousands of long-forgotten laborers created a new, manmade geography for Chicago—building a canal and sewers, raising city streets, and even reversing a river. These monumental feats of engineering—as much as nature—spurred Chicago’s miraculous growth, and provided a model for other American cities to engineer their way to success.

The promise of Chicago’s geography was immediately obvious to the first Europeans who passed through the site in 1673. Fur trader Louis Joliet and Jesuit missionary Jacques Marquette paddled up the Illinois and Des Plaines Rivers, crossing a short, but sometimes terribly muddy land route, or portage, to the Chicago River—which, in turn, flowed into Lake Michigan. Marveling at the route’s imperial possibilities because it connected the Gulf of Mexico to territories north of the Great Lakes, Joliet reported to the governor of French Canada, “we can quite easily go to Florida in boat” by building only one canal. Such a canal would link Quebec to the fertile lands of the continental interior where, Joliet advised the governor, there would be “great advantages…to founding new colonies,” thereby expanding the reach of its lucrative fur trading operations.

The French never undertook the canal or fulfilled their imperial vision. But even without a canal, the portage remained a vital, if often unpleasant, route for fur traders. In 1818, Gurdon S. Hubbard, an employee of the American Fur Company, paddled from Lake Michigan up the Chicago River to its source about six miles inland. At that point, the party's boats had to be “placed on short rollers…until the [Mud] lake was reached.” For three days, the men slogged through the portage. “Four men only remained in a boat and pushed with…poles, while six or eight others waded in the mud alongside…[and still] others busied themselves in transporting our goods on their backs.” All the while, the men were beset by leeches that “stuck so tight to the skin that they broke in pieces if force was used to remove them.”

By the 1830s, Illinois officials, inspired by the success of New York’s Erie Canal (1825) and the Ohio and Erie Canal (1832), began construction of the Illinois and Michigan Canal, which was designed to harness gravity to siphon water out of the Chicago River—effectively reversing the river’s flow so that it went away from, rather than into, Lake Michigan. The bold, costly plan called for making a “deep cut” channel through very tough clay called hardpan. The state began construction in 1836. Within a year, though, the Panic of 1837 struck, and by November 1841, Illinois had largely stopped work on the canal. By 1842, the state’s debt was $10.6 million and annual interest payments were $800,000. The canal—along with spending on a railroad and the failure of the state bank—had plunged Illinois into ruin. In 1843, the state abandoned the canal project, having already spent $5.1 million dollars.

The Chicago River in 2015 (Wikimedia Commons)

Real estate investors, who had a lot to lose if Chicago’s growth stalled, urged the state to resume canal construction. New York City land speculator Arthur Bronson and a group of Chicago boosters found lenders who were willing to provide the state with an additional $1.5 million to complete the canal. The lenders had one condition, however: To cut costs, the state had to abandon the deep cut for a cheaper, shallower channel. Instead of using the “deep cut” channel and its gravity-fed system to reverse the flow of the river, engineers would use pumps to push a smaller volume of river water into the canal without forcing the river to reverse its course. Crews began digging again in 1845, completing the project in 1848.

Just as Joliet had imagined, the canal transformed Chicago into a major center of trade. On April 24, 1848, the first cargo boat to arrive in Chicago by canal, General Thornton, hauled sugar from New Orleans through the city on its way to Buffalo. In its first decade of operation, the canal carried a staggering amount of freight: 5.5 million bushels of wheat; 26 million bushels of corn; 27 million pounds of pork; 563 million board feet of lumber. With the canal—and later the railroads—Chicago became an increasingly attractive location for manufacturers. Cyrus McCormick, for example, moved his mechanical reaper factory from Virginia to the banks of the Chicago River less than a year before the canal’s completion.

While the canal established Chicago as a major city, it also created problems whose solutions required still more engineering. One such issue arrived on April 29, 1849, when the John Drew, from New Orleans, carried cholera into the city. Within hours of the boat’s arrival, its captain and several passengers died. The disease spread rapidly throughout the city, sending physicians rushing from patient to patient to soothe fevers, cramps, and diarrhea. One-tenth of the city’s 29,000 residents contracted the disease and 678 died.

In swampy cities like Chicago, waterborne diseases like cholera thrived. By 1854, the city had survived epidemics of cholera, typhoid, and dysentery, killing as many as 1,500 people at a time. Though scientists had not yet identified the germs that caused these diseases, even casual observers understood that illness spread in places with poor drainage. In 1850, the newspaper Gem of the Prairie observed, for example, that parts of Chicago were “quagmires, the gutters running with filth at which the very swine turn up their noses.” From the “reeking mass of abominations” beneath the plank streets, the paper contended, “miasmas wafted into the neighboring shops and dwellings, to poison their inmates.” The only solution was “a thorough system of drainage.”

So, in 1855, officials mounted a dramatic attempt to rescue their city with another massive engineering project by hiring Ellis Sylvester Chesbrough, an engineer renowned for his work on Boston’s water system, to raise Chicago out of the muck. First, Chesbrough laid the sewers above the streets, positioning them so that gravity would carry their contents into the Chicago River. He then filled the streets with dirt, covering the sewers and elevating the city’s thoroughfares as much as eight feet above the buildings that flanked them. Many Chicagoans built staircases from the street down to their front doors. Others raised their structures—more than 200—using jacks.

As Chicagoans hoisted their buildings and the city began growing anew, Chesbrough’s sewers flooded the river with waste, causing new problems. The Chicago River flowed directly into Lake Michigan, the city’s source of drinking water. Initially, the volume of sewage was small and lake water diluted its polluting effects, as Chesbrough had calculated. But, when Chicago’s population tripled from 100,000 in 1860 to 300,000 in 1870, the amount of feces, chemicals, and decaying animal matter making its way into the waterways multiplied. The putrid smell of the river became unbearable and pollution began to flow into the city’s drinking water.

It was time for more engineering. In 1865, Chesbrough and state officials decided to manage Chicago’s water pollution by enacting an old proposal: making a deep cut through the Illinois and Michigan Canal and, this time, actually reversing the Chicago River and sending the city’s sewage down the canal, away from Lake Michigan. After six years, on July 15, 1871, throngs of people crowded the riverbanks to see workers chop down a temporary dam separating the river and the canal. The onlookers threw pieces of straw on the river and watched as they slowly began to float toward the canal, and away from their drinking water.

Ever since, Chicago has continued to grow, and most of the time, its river has run backward. In 1900, the Sanitary District of Chicago, a regional government agency, completed the new, deeper Sanitary and Ship Canal, which has largely kept the dirty Chicago River running away from the lake, even as the metropolitan area has grown to 9.5 million people today.

The reversal of the river marked a crucial juncture in the story of Chicago’s miraculous rise. It was the culmination of a series of great engineering projects orchestrated by the state that created the conditions—sewage, drinking water, and a route between the Great Lakes and Mississippi River basins—for Chicago to become the great industrial metropolis Carl Sandburg described in 1914: “Hog Butcher, Tool Maker, Stacker of Wheat, Player with Railroads and Freight Handler to the Nation.”

Chicago’s history confirms the old adage that geography is destiny. But the city’s experiences also suggest that geography is not just a fixed fact of nature, as Bross and Goode had implied; geography is also something continually made and remade by people and governments, a thing as fluid as water itself. Chicago’s model of growth—based on government-led water engineering projects—was duplicated by other cities—such as Los Angeles and Las Vegas—in the 20th century. This history of engineering-led growth in Chicago and other cities is both inspirational and a cautionary tale for our current age, when climate change demands that we engineer our cities to keep rising seas at bay. If geography is destiny, Chicago’s history offers the hope that fate is still partly in our hands.

How a Tiny Worm is Irritating the Most Majestic of Giraffes

Smithsonian Magazine

What is a fly to a giraffe?

It’s difficult to imagine a single insect even coming to the attention of these peculiar animals, which weigh in at thousands of pounds and routinely stretch their necks to heights of more than 14 feet. In Uganda’s Murchison Falls National Park, however, Michael B. Brown, a wildlife conservation researcher, has noticed something that might be harder to ignore: Whole clouds of insects swarming around the necks of these quadrupedal giants.

Under ordinary circumstances, such irritants might be unexceptional. But a growing body of evidence suggests that those flies might be linked to a more serious problem, a skin disease that seems to be spreading through giraffe populations across the continent. It sometimes takes the shape of holes in the animals’ flesh, circles of dead tissue, altogether distinct from the animals’ distinctive spots.

For giraffes, it’s just one problem among many—and it’s likely far less serious than the effects of climate change, poaching and habitat loss. But a better understanding of this ugly disease’s causes might help us make sense of the many other threats to these long-necked animals that have led wild giraffe populations to a precipitous decline—nearly 40 percent in the past 15 years.

According to a recent paper from the journal Biological Conservation, the giraffe skin disease “was first described in the mid-1990s in Uganda.” The Smithsonian National Zoo’s partners have identified similar lesions on giraffes in Tanzania and elsewhere. Since 1990, other possible evidence of the disease has been spotted in numerous other countries, including Namibia, Zimbabwe and Botswana. As the Biological Conservation paper’s authors note, however, it’s unclear whether the disease is becoming more common or whether we’re just getting better at spotting it as our ability to study giraffes improves.

One way to clear up that uncertainty would be to identify the disease’s etiology—the underlying cause of the problem, assuming there’s just one.

Image by Michael B. Brown. The skin disease sometimes takes the shape of holes in the animals’ flesh, circles of dead tissue, altogether distinct from the animals’ distinctive spots.

Image by Michael B. Brown. Even if the skin lesion doesn’t expose giraffes to other diseases, the mere presence of it could have other effects, including irritating them in a way that limits their willingness to socialize—and hence their capacity to reproduce.

Kali Holder, an infectious disease researcher and veterinary pathologist at the National Zoo's Global Health Program, whose work has been supported by the Morris Animal Foundation, is working on a likely possibility: A tiny parasitic nematode that another Zoo pathologist spotted in samples of diseased tissue. The nematode, Holder suspects, could be carried by flies like those Brown has reported.

Studied through a microscope, the problem doesn’t look like much, especially to the untrained eye: on the slide that Holder showed me, a bright pink flush crept down the magnified chasm of a giraffe’s hair follicle. That discoloration, Holder said, is probably evidence of the hyperkeratotic areas—unusually thickened skin under attack by the giraffe’s own immune system—that Brown and others in the field have spotted on the edges of skin lesions.

Though evidence of the disease is clearly visible in photographs of giraffes, the problem’s source is harder to spot back on the slide. Curled up against itself, and seen in cross section, the worm is barely recognizable as a worm. But, as Holder told me, it is still recognizably alien from its surrounding tissue, thanks in part to the shimmering outer layer that surrounds it. Resembling nothing so much as a cracked, but still intact, window, that region is, Holder says, “kind of like the cuticle. It’s a specialized protein that helps protect these guys from the hostile environments of a host body.” Surveilling the terrain within, she points out other landmarks, most notably the worm’s digestive tract and its reproductive organs.

“The skin is one of the most important defensive organs, both against the elements and infections,” said Holder, who is studying a likely possibility—a tiny parasitic nematode. (National Zoo)

If you were to study it with the naked eye, this tiny worm would be visible, but only just. That doesn’t mean that the worms are harmless. “The skin is one of the most important defensive organs, both against the elements and infections,” Holder said.

Accordingly, those lesions may be opening the giraffes to other pathogens. But she’s also concerned about other possibilities: “Maybe lower reproductive success because they’re spending more time grooming. Or maybe they aren’t as mobile, because they’re in pain, so they aren’t eating as much,” she says. Coupled with other stressors, including habitat loss, the nematode could have dire consequences for giraffe populations more generally.

They’re small, sure: some are thinner than a “strike from a mechanical pencil,” Holder says. “Their longest dimension might be two or three millimeters, and they’re fractions of a millimeter in diameter.” But there’s something on the slide that’s smaller still: the parasite’s young.

These nematodes, she explained, “don’t lay eggs. They lay live embryos called microfilariae, which just means ‘tiny threads.’” Though the slide Holder shows me is static, it’s hard not to imagine what it must have been like for the giraffe from which it was taken—flesh wriggling with tiny creatures, alive with a microscopic life not its own. In other words, this hungry invader is there to make more of its own.

That sounds horrifying, and it is, but only up to a point. Apart from those grotesque lesions, the nematode that Holder is studying doesn’t appear to be as terrible as some related parasites. In humans, other nematode species that reproduce by microfilariae are the causative agents of river blindness—a debilitating eye disease spread by black fly bites—and a handful of other tropical illnesses, but this one isn’t quite so troubling, as far as we know.

Even if the skin lesion doesn’t expose giraffes to other diseases, the mere presence of it could have other effects, including irritating them in a way that limits their willingness to socialize—and hence their capacity to reproduce. As Holder puts it, “For any given animal, [this nematode] may not be the cause of a specific problem or death. But on a population level, you may start to have lower reproductive success. There are cascading potential effects.”

For now, such fears are partially speculative, since scientists aren’t even sure what the worm is. That makes it hard to say how far it has spread, which makes it harder still to evaluate how much harm it’s doing. This is where Holder’s work becomes so important: She and her colleagues—including Chris Whittier, a veterinary global health researcher at Tufts University—suspect that the nematode infecting giraffes belongs to a genus called Stephanofilaria, which is best known for a species that parasitizes domestic cattle. To better confirm that, though, they’d need to acquire a fully intact sample of an adult parasite, in order to establish a full description of it.

That proved easier said than done: For a while, Holder couldn’t even figure out how to get a whole worm out of a host, partly because so little work has been done on Stephanofilaria. (Relatively easy to kill off with anti-worm drugs in cattle, the parasite has long been considered economically unimportant.)

Holder eventually found what looked to be a protocol in a veterinary journal, but there was a catch—it was written in Portuguese. Fortunately, she says, “I speak pathology. So I can read most Romance languages, as long as they’re talking about pathology.” After some careful study—and with the help of her “Romance language background, Google magic, and citing references”—she was able to puzzle out the method, which involves finely chopping up the infected flesh and then soaking it in a saline solution, at which point the worms should abandon ship of their own accord.

With a worm to examine, the Zoo and its partners in the field will be better positioned to make sense of the parasite’s genetics.

As Robert C. Fleischer, head of the Zoo’s Center for Conservation Genomics, tells me, they’ve already been able to examine the nematode’s DNA, but they can’t find a match for it in GenBank, a major database of genetic information for tens of thousands of organisms. That means in part that they can’t yet confirm whether the giraffe parasite is actually Stephanofilaria—or how it might be related to similar-seeming organisms in domestic cattle. More clearly identifying physical specimens—from both giraffe and cattle—would go a long way toward overcoming that uncertainty.

Once they do, they’ll have much more information about the scope of the problem. As is the case with cattle, treating such parasites should be relatively simple—Holder suggests a regimen of Ivermectin, which is sometimes administered to giraffes in zoo settings, would do the trick—but understanding its origins and the risks it presents is more difficult. Once they’ve genetically sequenced the nematode, it will be far easier for their partners in the field to confirm whether the same parasite is infecting different giraffes in discrete locations.

This matters in part because, as Brown says, they’ve noticed that the lesions seem to be far more common among some Ugandan giraffe populations but largely absent in other regions. That, in turn, would make it easier to target the infection vectors. They might also be able to determine whether this is a new parasite species or just one that’s on the rise due to other factors.

“Maybe this parasite is not that important, but knowing if the vector is new to this area may offer an insight into other vector-borne diseases that may be more relevant,” Holder says.

Brown, for one, says that he hasn’t identified declining birthrates among populations with the skin disease—though he also notes that it can be difficult to definitively confirm such observations in an animal with a 14-month gestation period. It’s entirely possible, then, that the parasites don’t present a real risk, at least not in and of themselves. But that exposed necrotic tissue could lead to other problems. It might, for example, attract oxpeckers, birds that can both expand lesions as they feed on them and potentially spread the infection to other animals. The only way to know for sure would be to study the nematodes more fully.

Suzan Murray, director of the Smithsonian’s Global Health Program, suggests that climate change may play a role: Insects such as horn flies that could be transmitting parasites could be thriving in generally warmer and wetter conditions. Such information could benefit wildlife conservation more generally, since it could help us anticipate and respond to emerging crises before they reach epidemic levels. Given that a similar skin disease has been identified in Kenyan rhinos, better understanding the underlying environmental roots of the problem might contribute to our understanding of the broader ecosystem, even if it doesn’t have an immediate effect on the well-being of giraffes.

In other words, the inquiries of scientists such as Holder and the field researchers whose efforts intersect with them have potentially enormous practical consequences, even when their actual objects of study are minute.

Field work supporting the Smithsonian's Global Health Program's research into the skin parasite has proceeded in large part through the work of the Uganda Wildlife Authority and the Uganda Conservation Foundation. They have collaborated on the Rothschild Giraffe Conservation Project, an effort funded by SeaWorld and Busch Gardens Conservation Fund.

Forget the Vegetables—Junk Food Could Help Fight Obesity

Smithsonian Magazine

The 2004 release of Super Size Me—a documentary about Morgan Spurlock’s 24-pound weight gain and health decline during a month-long McDonald’s binge—and other books and exposés of the last decade have tarnished the reputation of fast food and other processed foods.

But what if the food Spurlock ate at the chain had been healthier? What if, by eating food engineered to be lower-calorie, lower-fat versions of popular favorites, he had lost weight over the course of 30 days rather than gained it?

Journalist David Freedman says in the world of weight loss, "what counts is calories, fat and sugar, which processed foods can be low in, and unprocessed foods can be high in." Photo courtesy of David Freedman.

Journalist David Freedman made this case—that fast food and processed food may actually help in the fight against obesity instead of hindering it—in an article this summer in The Atlantic. At a time when the loudest and clearest food message is to eat fresh, locally grown, organic foods, the piece prompted a range of reactions from scientists and fellow journalists in the food and health worlds.

In a nutshell, can you explain your big idea?

A high percentage of the obese are more or less hooked on fatty, sugary, processed foods, and we seem helpless to change that. Getting the 100 million obese people in the U.S. to eat less junk food and more unprocessed, "whole" foods would help turn the tide on the obesity epidemic—but unprocessed foods are largely too expensive and hard to access for large numbers of the poor and obese. What we can do right now with food technology is create lower-calorie, lower-fat, lower-sugar processed foods that will deliver the same stimulating sensations as the junkier stuff but help the obese make their diet healthier overall. We need to push the fast food and processed food industries to move toward these healthier versions of their foods.

So wait—Twinkies could actually help people lose weight?

Yes, Twinkies could actually help people lose weight, if there were lower-calorie but still tasty versions of them. But the statement needs some qualifications. It's not the ideal way to lose weight; it only makes sense if for whatever reasons getting on a healthier diet isn't in the cards. It's the answer for someone who's going to keep eating Twinkies whether there are low-calorie versions or not. For that person, the lower-calorie Twinkie is potentially a step in the right direction. And, by the way, researchers have demonstrated that people can in fact lose weight on a diet of nothing but snack cakes, though no one is recommending it.

How did you get interested in this topic?

Six years ago, I struggled to lose 20 pounds, on doctor's orders. That got me wondering about obesity science in general, and the problem of behavior change in particular. Obesity is headed toward robbing living Americans today of a combined billion years of life.

There is a cacophony of conflicting theories and advice promoted by my fellow science journalists. Cut down on fat but feel free to eat lots of carbs. Cut down on carbs but feel free to eat lots of fat. Calories are everything, or calories don't matter at all. Exercise is the key rather than diet. Diet is the key rather than exercise. It's nearly impossible to keep lost weight off. It's all in the genes. It's all in your gut bacteria, and on and on.

I've traveled the U.S. and the world interviewing highly credentialed obesity experts and observing their weight-loss programs. There's little controversy among scientists about what works, and it's been backed up by hundreds of studies. What works is gradually moving people to lower-calorie, less-fatty, less-sugary foods and getting them moving more, along with providing a broad array of behavioral supports so they stick with it forever. The claims pushed by prominent journalists for magic-bullet solutions like switching wholesale to natural foods or to ultra-low-carb diets just cause most obesity experts to smack their heads in frustration, even though the public eats them up.

Well-read laypeople seemed to mostly parrot journalist Michael Pollan's science-free declaration that shunning processed foods can solve obesity and all other food-related health problems, though processing in and of itself is utterly irrelevant to obesity. What counts is calories, fat and sugar, which processed foods can be low in, and unprocessed foods can be high in.

Honey and fruit jam right off the farm-stand shelf are sugary calorie nightmares, and pork belly from locally raised, free-range, antibiotic-free pigs is a fatty calorie nightmare. But a McDonald's egg-white breakfast sandwich, though processed, is a relatively low-calorie, tasty dish that's a great source of lean protein, and has whole grains, both of which are key, satisfying target foods for people who want to keep weight off.

What is this pervasive message, that all processed foods are bad, doing to Americans’ ability to lose weight?

I realized this enormous misconception—the absurd dream of getting farm-fresh meals onto the plates of tens of millions of poor, obese people hooked on junk food—was standing in the way of what might be the one workable solution to attacking obesity: getting the food industry to create healthier versions of its popular foods that those people would actually eat. We need lower-fat meat, in particular, beef; reduced-sugar versions of candy, cakes and other sweets; reduced-fat substitutes for oily foods like salad dressing; whole-grain versions of floury foods like white bread. But we need these healthier versions to taste and look exactly like the originals, or most people won't switch to them.

What are the challenges to making low-calorie, low-fat, low-sugar alternatives tasty?

There are few serious technical or manufacturing obstacles to making healthier versions of popular processed foods. Food scientists know how to replace fat and sugar in foods with healthier alternatives that taste just about the same. It's not a perfect art yet, but it's getting there fast. The bigger challenge is getting the big food companies to really push this stuff, given that the public tends to be wary of healthier alternatives, and that health-food advocates condemn these efforts rather than applaud them. What's the incentive for these companies to make healthier foods? I'm in favor of forcing them to do it through regulation, but the American public hates that sort of regulation, so it won't happen.

A compounding problem is the relentless criticism that the delusional, misinformed, blind haters of all processed foods aim at Big Food companies that even try to bring out healthier stuff. Burger King's Satisfries and McDonald's Egg-White McMuffin have both been hooted down in the press by health-food advocates as not being truly healthy foods—never mind that these dishes are great steps in the right direction. It's absurd and disastrously counter-productive.

What makes your approach more realistic than a switch to whole, unprocessed foods, from an economic standpoint?

No one—absolutely no one—has advanced a clear plan for how at any time in the next 50 years we're going to be able to grow, ship and sell enough whole food to an entire population that today mostly lives on processed food. Add to this simple fact that this movement wants to do away with giant farms, food factories and shipping foods over distances. Then add to it that if there were some miraculous way to pull this off, the prices for the food would be astronomical by anyone's reckoning, compared to processed foods. It's a lovely idea—hey, I'd love to live in that world—but it's an absurd pipe dream. Meanwhile, the human race is giving up a billion years of life to obesity, and on average hugely lowering the quality of those years of life that we do have.

In this Knight Science Journalism critique of your piece, the author writes: 

One way Freedman works his magic is to confuse ‘unprocessed foods’ with ‘wholesome foods.’ Most of his examples of unprocessed foods are things he says are ‘tailored to the dubious health fantasies of a small, elite minority.’… Grass-fed beef might be too expensive and too difficult to produce for the masses. But what about soybeans, whole grains, fruits, and vegetables? They are commodities, they are cheap, and they are plentiful.


What’s your response to this?

This is breathtakingly ignorant, and sadly typical of many of the loud, arrogant voices that objected to my article. (To be sure, some of the objections were more thoughtful and well informed.) These folks have clearly led cushy lives, and need to find out how most of the country and world lives. I've led a cushy life, too, but before opening my mouth on this subject I went out and spent many, many hours walking a number of different disadvantaged neighborhoods all over the country and the planet: talking to countless people in these communities about their diets and shopping, visiting their stores, and interviewing scientists and clinicians who work directly with overweight populations. Let me tell you, it doesn't get simpler or truer than this: For all but the most geographically isolated communities, processed food is cheaper, more convenient, and easier to access. What's more, it pushes people's taste-sensation buttons. We've been telling the world for nearly a century to eat more vegetables. How's that working out? This fellow might get all his buttons pushed by broccoli that's readily accessible and affordable to him (and so do I, by the way). But the fact that he thinks the same holds for the rest of the world, and in particular for the obese world, and most particularly for the obese population that is poor and vulnerable, is a good sign of how poor a job journalists have done in researching this subject before pontificating about it.

Every big thinker has predecessors whose work was crucial to his discovery. Who gave you the foundation to build your idea?

B.F. Skinner, a Harvard behavioral scientist and social philosopher, is, in my book, the patron saint of the science of behavior change. He took us 90 percent of the way there, and everything since then has either been in the wrong direction or is fighting to work out the remaining 10 percent. Skinner demonstrated with striking clarity how all organisms, including humans, tend to do what they are rewarded to do. It's really that simple. The tricky part sometimes is to identify what the rewards are behind certain behaviors, but in the case of obesity it's pretty obvious: People get the huge sensual reward of eating high-calorie, sweet and fatty foods, and of sitting around on their butts. These rewards are deceptively powerful, much more so for most of us than the negative consequences of overeating and under-exercising, consequences that tend to come on us at an imperceptible rate, versus the huge, immediate rush we get from eating. Thus to beat the problem we need to make sure people are getting similarly powerful rewards from eating healthier foods. Making available healthier versions of junk food that deliver similar sensations is a great way to do it.

Who will be most affected by this idea?

I've heard directly and indirectly that the article has had a big impact in the processed food industry, especially at fast food companies.

How so?  

Several major food companies have told me that the article has led to a stream of conversations about how they might move toward more healthy foods. I've also heard from a number of food industry groups asking me to speak at conferences.

Most of the public, as is true with politics and most everything else, has already made up its mind about this subject and won't be swayed by my article. But a small, more open-minded segment of the public seems to have found the article eye-opening. I take a lot of encouragement from that.

How might it change life, as we know it?

It would be wonderful if the article went at least a very small way toward making it easier for processed-food companies to bring out healthier versions of their products without being hooted down by the Pollanites. Burger King brought out its lower-calorie, lower-fat "Satisfries" a month or so after the article came out. I think that's entirely a coincidence, but hey, a journalist can dream.

What questions are left unanswered?

So many! Will Big Food actually bring out healthier products? If they do, will the obese public be willing to try embracing them? If they do move to these products, will it really get them on the road to losing and keeping off weight? Might the government be able to use regulation, or the threat of it, to accelerate the move to healthier processed foods?

What is next for you?

I hesitate to even mention what I'm working on, because it explores an argument that tends to provoke intensely negative reaction from most people. But it follows the theme of my trying to point out how sometimes the well-educated, generally affluent influencers in the public who see themselves as champions of beneficial change for all actually cling to notions that in the end are good for them but more generally bad for the poor and vulnerable.

Museums With Their Own Niche

Smithsonian Magazine

I peered at the rows of lunchboxes and stopped with a smile in front of a gleaming Strawberry Shortcake, its pink and white figures recalling peanut butter and jelly sandwiches, piles of crayons and an overnight party where at least one lucky girl unrolled a Strawberry Shortcake sleeping bag. I wondered if one of these lunchboxes was still hidden in the dusty recesses of my house. In an instant, a tall man with hair like gray steel wool was at my side.

“Ah, you’re of the metal lunchbox era!” said Tim Seewer, artist, cook and partner in Etta’s Lunchbox Café and Museum in New Plymouth, Ohio. “The Florida Board of Education decided in 1985 to ban metal lunchboxes because they could be used as a weapon. All across the United States, lunchboxes started to go plastic. Ironically, the last metal lunchbox was Rambo.”

Etta’s is a thoroughly charming bit of Americana. Lodged in an old blue-tiled general store, this free museum displays owner LaDora Ousley’s collection of 850 lunchboxes as well as the tobacco and lard tins that were the precursors to the lunchbox. The collection offers a unique lens into the popular culture of the last century—especially when accompanied by commentary from Seewer and Ousley, who do double time in the kitchen making pizza, sandwiches and salads. A 1953 Roy Rogers and Dale Evans lunchbox, the first to have a four-color lithograph panel, is among the collection’s notable items. Also on display are lunchboxes featuring the many television icons that followed: Gunsmoke, Looney Tunes, a host of Disney characters, Popeye, Space Cadet, the Dukes of Hazzard, and more.

The collection both chronicles the stories and characters that shaped many a childhood and offers a perspective on larger social trends in America. As an example, Ousley points to her tobacco tins, which were produced beginning in 1860 with sentimental domestic scenes on them. “It was a clever cross-marketing ploy,” Ousley explains. “Women weren’t allowed to buy tobacco, but it was a sign of status to own one of these tins. It showed you knew a man wealthy enough to buy one and that you were special enough to receive it as a gift.”

Museums with a singular focus—whether on an object or a theme—offer visitors an intimate educational experience, often enhanced by the presence of an owner or curator with an unmatched passion for the subject. Here are seven more narrowly focused museums from around the country, some tiny and precariously funded, others more firmly established.

[Image gallery: photos courtesy of Etta's Lunchbox Cafe & Lunchbox Museum; Richard Clement / Reuters / Corbis (Velveteria); the National Museum of Roller Skating, whose holdings, the largest collection of historical roller skates in the world, include skates dating to 1819; Roadchix (the Hobo Museum in Britt, Iowa); Newscom (the Bigfoot Discovery Museum); and Vent Haven Museum.]

Velveteria, the Museum of Velvet Paintings in Portland, Oregon, has nearly 2,500 velvet paintings at last count. Eleven years ago, Caren Anderson and Carl Baldwin were shopping in a thrift store, spied a black velvet painting of a naked woman emerging from a flower and had to have it. That impulse buy ultimately led to a vast collection, much of which is now displayed in an 1,800-square-foot museum. Co-authors of Black Velvet Masterpieces: Highlights from the Collection of the Velveteria Museum, Anderson and Baldwin have a connoisseur’s eye for this neglected art form and an appreciation for its history. The paint-on-velvet form had its origins in ancient China and Japan, enjoyed some popularity in Victorian England, then had its modern heyday when American servicemen like Edgar Leeteg expressed the beauty they saw in the South Seas islands on black velvet. You can tour the museum for $5.00, but watch out for unexpected emotion. “A young couple got engaged in our black light room the other day,” says Anderson.

The National Museum of Roller Skating in Lincoln, Nebraska, boasts 2,000 square feet of memorabilia from roller derby, roller speed and figure skating, and roller hockey. Included are a pair of the first skates ever made, which resemble modern inline skates, patent models from the history of roller skate design, costumes, trophies, photos and magazines on skating. Oddest items: a pair of skates powered by an engine worn on the back and a pair of skates made for a horse—with a photograph of the horse wearing them. This is the world’s only museum devoted to roller skating; admission is free.

The Hobo Museum is located in the hobo capital of the world, Britt, Iowa. According to curator Linda Hughes, the town fathers of Britt tossed out a welcome mat for hoboes in 1899 after hearing that Chicago rolled up theirs when Tourist Union 63—the hobo union—wanted to come to town. A famous hobo named Onion Cotton came to Britt in 1900, and hoboes have been gathering there ever since. The museum is currently housed in an old movie theater, but has so much material it plans to expand into a larger space. The collection includes contents of famous hobo satchels, a hat adorned with clothespins and feathers from Pennsylvania Kid, tramp art, hobo walking sticks, and an exhibit of the character language hoboes use to leave each other messages. Every year, Britt and the museum host a hobo convention that attracts up to 20,000 ramblers from all parts of the country. “It’s like a big family reunion,” Hughes says.

The Museum of Mountain Bike Art and Technology is located above a bike store in Statesville, North Carolina, with a 5,000-square-foot showroom displaying the evolution of mountain bikes. The collection includes “boneshakers”—bikes from 1869 with wooden spoke wheels—as well as bikes with interchangeable parts from the turn of the century. Among this free museum’s 250 bikes are several from the mountain bike boom beginning in the 1970s, when the energy crisis pushed people to make tougher bikes. Many of these are highly designed with great craftsmanship. “Even if you have no interest in bikes, you’d hang one on the wall because they’re so pretty,” says owner Jeff Archer. The museum holds an annual mountain-bike festival that attracts many of the sport’s pioneers.

The Bigfoot Discovery Museum in Felton, California, was inspired by owner Michael Rugg’s encounter with a Sasquatch-like creature when he was a child. The museum offers local history tied to Bigfoot; plaster casts of foot and hand prints; hair, scat and tooth samples; displays that discuss hypotheses to explain Bigfoot sightings and Bigfoot in popular culture; and a research library. In the audio-visual section, the controversial Patterson-Gimlin film purporting to show a Bigfoot spied in the wild runs on a continuous loop. “I’ve got everything I’ve found dealing with Bigfoot or mystery primates here,” Rugg says.

Vent Haven Museum in Fort Mitchell, Kentucky, is the world’s only public collection of materials related to ventriloquism. A Cincinnati businessman named William Shakespeare Berger, later president of the International Brotherhood of Ventriloquists, began the collection in the early 1900s; ventriloquists—“vents”—still donate materials. There are 700 ventriloquist dummies arranged in three buildings, some sitting in rows as if waiting for a class to begin. Unusual creations include a head carved by a German prisoner in a Soviet POW camp during World War II—the vent performed for fellow prisoners as well as for the cook to get extra food—and a family of figures used by a blind Vaudeville-era vent. Photographs and drawings of vents abound, including one from the late 1700s, when ventriloquism was more often a trick to con people out of money than a form of entertainment. The museum also has a library with 1,000 volumes and voluminous correspondence for scholars. Admission is by appointment only, and curator Jennifer Dawson leads hour-and-a-half tours for $5.00. A yearly convention is held nearby.

The Robert C. Williams Paper Museum in Atlanta originated with a collection by Dard Hunter, an artist from America’s Arts and Crafts Movement who traveled the world to record the ways that people made paper and to collect artifacts. In the museum, visitors can examine precursors to modern paper, including many tapa cloths made from pounded bark in Sumatra and Tunisia with inscriptions from special occasions; a vat used by Chinese papermakers in 200 B.C.; and one of the one million prayers printed on paper and enshrined in wooden pagodas that were commissioned by the Empress Shotoku after Japan’s smallpox epidemic of 735. In all, there are over 100,000 watermarks, papers, tools, machines and manuscripts. Admission for individuals is free; guided tours are $5 per person or $8.50 for a tour and paper-making exercise.

Why Rare Hawaiian Monk Seals Are Lining Up to Get Their Shots

Smithsonian Magazine

On a summer day on the island of Kaua`i, a Hawaiian monk seal hauls his 500-pound body out of the surf and galumphs toward a nursing female and her newborn pup. When he gets a few feet away from the mother, she arches her back and faces him, head high. He does the same. She barks. He barks. Snot and saliva fly. 

It’s typical—if awkward—monk seal courtship behavior, more posturing than physical. But scientists are concerned that this kind of scene could swiftly turn into a deadly disease outbreak for one of the most endangered marine mammals in the world. The Hawaiian monk seal has been listed under the Endangered Species Act since 1976, after its numbers were devastated by decades of hunting and other forms of human contact.

About a decade ago, researchers grew worried that a strain of morbillivirus, the genus of viruses that includes measles and canine distemper, could wipe out the last of these rare seals. In response, they’ve launched the first-ever effort to vaccinate a species of wild marine mammals—an effort that has come with a host of first-ever challenges.

The 1,200 or so monk seals that survive in the wild are spread over vast swaths of ocean, coming ashore for only brief periods of time to rest, molt and give birth on islands that stretch across the Central Pacific. Morbillivirus, which is spread by respiratory secretions, could kill off a significant chunk of them without anyone knowing. Thankfully, a growing population of monk seals in the main Hawaiian Islands is making it easier for researchers and their dedicated volunteer network to find—and immunize—them.

For the endangered monk seal, disease has always been the “monster lurking over the horizon,” says Charles Littnan, lead scientist for the National Oceanic and Atmospheric Administration’s Hawaiian Monk Seal Research Program (HMSRP). But it wasn’t until the past decade that research revealed that the species had precariously low genetic diversity. At that point, infectious disease “rocketed to an immediate concern,” Littnan says.

In fact, disease may have contributed to the demise of the only other species of Neomonachus, the genus that includes the Hawaiian monk seal: the extinct Caribbean monk seal. Disease “can wipe out seal populations all over the world, and we know that there are disease concerns for the living monk seals,” Kris Helgen, a zoologist at the National Museum of Natural History who studies the extinct monk seal’s evolutionary history, told Smithsonian.com in 2014.

“Simply put, morbillivirus outbreaks in pinnipeds and cetaceans are the things that marine mammal stranding responders have nightmares about,” says Dr. Michelle Barbieri, the lead veterinarian with HMSRP who is supervising rollout of the vaccine program. “The disease could spread easily, infecting many animals out in the ocean before we are able to detect what's going on.” 

Two monk seals tussle on a beach on Kaua`i in 2015. (Kim Rogers)

Littnan and his team had already started developing a plan to respond to a morbillivirus outbreak when, in 2010, their fears were validated. That was when researchers identified the first known case of morbillivirus in the Central Pacific, in a Longman’s beaked whale that stranded on Maui.

Littnan knew that the disease had already killed tens of thousands of seals and dolphins in the Atlantic, Mediterranean, Arctic and North Pacific oceans. Soon after, a northern fur seal, whose native habitat is the west coast of the United States, turned up on an O‘ahu beach near where monk seals are known to haul out and rest. While the fur seal wasn’t infected, its species is known to carry the disease.

Fortunately, there have been no known cases of morbillivirus in Hawaiian monk seals—yet. Blood tests indicate no prior population exposure, probably because the seals are buffered by the archipelago’s isolation in the middle of the Pacific Ocean. While that’s good, it also means there is no natural immunity. And that leaves this already-vulnerable species quite exposed.

If morbillivirus does break out, Hawaiian monk seals won’t stand a chance. An invasive disease, like an exotic species, can quickly wipe out a vulnerable population. In seals, morbillivirus targets the lungs and brain. Pneumonia may develop, skin lesions may erupt, and the animal may exhibit abnormal behavior, resulting in death in as little as five days.

Littnan and Barbieri knew the only hope for these seals was total vaccination. But 85 percent of the species live in the remote Northwestern Hawaiian Islands, among atolls and islets, elusive even to field biologists who study them. Finding monk seals to vaccinate, especially if the vaccine required a follow-up booster, would be a challenge.

Another challenge was finding the right vaccine. The most effective vaccines generally contain a live virus, which runs a chance of infecting the vaccinated animal. There was no way that the National Marine Fisheries Service, the regulatory agency overseeing the seal’s recovery, would risk introducing the live virus into the population. That left vaccines with dead viruses. But the immune responses in those are short-lived and require frequent boosters—hardly an option when dealing with a wild marine species that spends two-thirds of its life at sea.

The best choice turned out to be a recombinant vaccine, which takes advantage of the way viruses inject their genetic material into cells. Researchers create recombinant vaccines by inserting into a harmless virus the genetic material that stimulates an immune response in the host. The vaccine the researchers chose was one made for ferrets. That isn’t as strange as it sounds: all morbilliviruses are antigenically similar, meaning a vaccine made for one can cross-protect against another. However, there can always be adverse reactions.

A juvenile and weaner monk seal greet each other on a Kauai beach in 2014. (Kim Rogers)

Meanwhile, across the Pacific in California, researchers were conducting trials using the ferret vaccine in five captive harbor seals. It worked: Tests found that the initial vaccination, followed by a booster one month later, produced persistent antibodies to the virus. The seals had no noticeable side effects. 

The project hit a snag when, in 2013, after nearly a decade of work on a vaccination program, the manufacturer, Merial, put the vaccine on indefinite backorder. “That took us totally by surprise,” Littnan says. “It was unfortunate timing, because this vaccine had been in strong production for a long time and used quite broadly, not only for ferrets in the wild but very broadly in the zoo and aquaria industry to vaccinate marine mammals and other mammals.”

Littnan kept moving forward, modeling the potential spatial and temporal spread of the disease and planning his team’s response in the event of an outbreak.

This form of aggressive intervention to save the species wasn’t new to HMSRP. In the past, Littnan’s team had stepped in to disentangle seals trapped in marine debris and de-hook seals caught on fishing lines. They translocated young seals from areas of low survival to areas of high survival. And with The Marine Mammal Center of Sausalito, California, they started rehabilitating underweight and malnourished seals.

Littnan reports that more than 30 percent of the monk seals alive today owe their survival to these interventions. The annual decline of the population has slowed, from 8 percent in the 1980s to 2.8 percent now.

In late 2015, the manufacturer made a limited quantity of the ferret vaccine available. Littnan didn’t waste any time in procuring enough vaccines for 58 animals. Because the vaccines had about a year before they expired, he decided to inoculate the population immediately to—hopefully—prevent an outbreak rather than respond to one.

Barbieri started with seven monk seals at Ke Kai Ola, the rehabilitation center run by The Marine Mammal Center on Hawai‘i Island. Now, they’re targeting seals in the wild around O‘ahu and Kaua‘i, where 40 to 50 seals regularly show up on each island.

The inoculation itself is a simple process, utilizing a pole syringe to inject one milliliter of vaccine from a 10-milliliter syringe and topping that off with a booster three to five weeks later. As of this writing, at least 43 animals have received vaccinations. Because seals often go on multi-day foraging trips at sea and circumnavigate an island at will, you never know when or where they’ll turn up. Thus, finding a seal during the window when its booster is required may be the trickiest part of the inoculation process.

While 58 doses certainly isn’t enough to vaccinate every animal in the population, it is enough to create herd immunity among the growing pocket populations of seals around the main Hawaiian Islands. The idea is that, if the disease does enter the population, it won’t spread to epidemic proportions.
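For readers unfamiliar with the term, the logic of herd immunity comes down to a simple threshold: enough animals must be immune that each infection produces, on average, fewer than one new case. The sketch below is purely illustrative; the article gives no reproduction number for morbillivirus in monk seals, so the R0 values here are hypothetical.

```python
# Textbook herd-immunity threshold: the fraction of a population that must be
# immune so that, on average, each infection causes fewer than one new case.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

# Hypothetical reproduction numbers for illustration only; not measured values
# for morbillivirus in Hawaiian monk seals.
for r0 in (2.0, 4.0, 6.0):
    print(f"R0 = {r0:.0f}: about {herd_immunity_threshold(r0):.0%} of the group must be immune")
```

The more transmissible the virus, the larger the share of each pocket population that would need protection for the outbreak-damping effect the researchers are counting on.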

“We’re using this project as an opportunity to learn about how long the antibodies are detectable in the blood of vaccinated monk seals,” Barbieri says, “And we will be able to compare those data to previous studies.” In the future, such a program could lay the groundwork for protecting seals against other diseases like West Nile.

Littnan hopes to roll out the vaccination program to the remote Northwestern Hawaiian Islands, a stretch of uninhabited islands, islets, and atolls that make up the recently expanded Papahānaumokuākea Marine National Monument, where his field crews stay for five months every summer. But that all depends on vaccine availability.

“There’s hope,” Littnan says. “We’ve been reaching out to the company. Hopefully, they understand the need and will stick with the product.”

Even with an unlimited supply of vaccines, however,  the success of the program hinges on all vaccinated seals achieving what Barbieri calls “perfect immunity.” “Antibodies to morbillivirus do not exactly predict protection in the face of exposure,” says Barbieri. “We will never expose vaccinated monk seals to the virus to find out if they acquire disease or not, so there will remain several unknowns surrounding this question.”

That is, unless a monk seal finds itself naturally infected. But that is a scenario scientists would rather not ponder.

The Science of "Little House on the Prairie"

Smithsonian Magazine

To read Laura Ingalls Wilder’s Little House books is to step out of one’s own world and into hers. For all their relentless nostalgia and luscious descriptions of life on the prairie, it’s hard to criticize their rich detail.

Wilder has achieved folk hero status thanks to eight books she wrote and published between 1932 and 1943, and a ninth published posthumously. Based on her family’s travels as settlers in Wisconsin, Minnesota and South Dakota from the 1860s through the 1880s, the novels are considered to be semi-autobiographical, even with Wilder’s tweaking of dates, people and events.

Reading the books, though, it’s hard to resist treating the stories as a true historical account. So rich is Wilder’s detail that you’re on the prairies with her, bundled in furs during winter, or roasting in the summer sun in a full-sleeve dress. Readers don’t just get a window into her life; they walk by her side.

For this reason, her biggest fans hold the LauraPalooza conference every two years to celebrate their heroine’s life and works. But like a Russian nesting doll, within every subculture is yet another subculture, and one unexpected element of the conference is hard scientific study.

Wilder’s reflections on her life experiences have spurred some scientists to use remarkable research techniques to clarify details from the books that seem a little too incredible: the site of a schoolhouse where she taught, which hasn’t existed for decades; a terrible winter of blizzards pounding the Ingalls’ small town day after day—for months; Laura’s sister blinded by a fever that shouldn’t normally cause that kind of damage.

“Scientists are a bit like detectives,” said Barb Mayes Boustead, a presenter and co-organizer of this year’s conference, held in July at South Dakota State University. “We see something that isn’t explained, and we want to find the evidence that will help explain it. There is no shortage of aspects of Laura’s life and writings to investigate.”

********

From an early age, Jim Hicks had a special empathy for Laura: they both grew up on the prairie. Reading Wilder’s books next to a hearth in his small elementary school in Woodstock, Illinois, snow chipping away at the windows, he developed an interest in visiting the places Laura described in her books.

A retired high school physics teacher, Hicks strived to have his students understand physics in real-world terms. He turned his own classroom techniques on himself when trying to find the site of the Brewster school, where Laura went to teach as a mere teenager:

The Brewster settlement was still miles ahead. It was twelve miles from town. … At last she saw a house ahead. Very small at first, it grew larger as they came nearer to it. Half a mile away there was another, smaller one, and far beyond it, another. Then still another appeared. Four houses; that was all. They were far apart and small on the white prairie. Pa pulled up the horses. Mr. Brewster's house looked like two claim shanties put together to make a peaked roof. – These Happy Golden Years (1943)

Hicks knew that Laura traveled to the school in a horse cart. Thinking of a horse's legs as compound pendulums, swinging back and forth with a constant period, Hicks measured the length of his wife's horse's leg from knee to hoof to figure out the time of one oscillation. Then, by measuring the stride length for a casual walk, Hicks could estimate the rate of travel, in this case around 3 miles per hour.

Frances B. Hicks, Jim's wife, takes measurements to calculate travel time via a horse. (Courtesy of Jim Hicks)

In These Happy Golden Years, Laura describes the drive as occurring just after the family's noon meal in December. To get back before dark, Hicks estimated, Laura's driver, her father, had five hours of daylight to make the round trip, so one leg would take 2 ½ hours. At a horse speed of 3 miles per hour, a one-way trip would be between 7 and 8 miles, not the 12 that Laura estimated in the excerpt above.
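Hicks's back-of-the-envelope physics can be reproduced in a few lines. The sketch below only illustrates his pendulum reasoning, not his actual figures; the leg length, stride length and daylight hours are assumed values chosen to land near the speed and distances quoted above.

```python
import math

# Model the horse's lower leg as a uniform rod swinging about the knee,
# i.e., a compound pendulum with period T = 2*pi*sqrt(2L / (3g)).
g = 9.81           # gravitational acceleration, m/s^2
leg_length = 0.9   # knee-to-hoof length in meters (assumed)
period = 2 * math.pi * math.sqrt(2 * leg_length / (3 * g))

# At a casual walk, each leg takes roughly one step per half swing.
stride = 1.0                      # meters per step (assumed)
speed_ms = stride / (period / 2)  # meters per second
speed_mph = speed_ms * 2.237
print(f"walking speed: about {speed_mph:.1f} mph")      # roughly 3 mph

# Five hours of December daylight for the round trip means 2.5 hours each way.
one_way_miles = speed_mph * 2.5
print(f"one-way distance: about {one_way_miles:.1f} miles")  # about 7 miles, not 12
```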

Finding an old map Laura drew of DeSmet, South Dakota, which showed the Brewster school in a southwesterly direction, Hicks drew a seven-to-eight mile arc on a map of DeSmet. With the help of homestead land claim records and Laura’s description that she could see the light of the setting sun glinting off the windows of a nearby shanty, Hicks predicted the most likely location of the Brewster school site, to the west of a homestead settled by the Bouchie family, the “Brewsters” of Laura’s books. Further research confirmed another book detail: Louis and Oliv Bouchie homesteaded on separate but adjoining parcels, and to satisfy homestead requirements, built the separate halves of their mutual home right on the dividing line.

The result: Laura’s peak-roofed shanty.

“Art, physics and all the liberal arts and sciences are an invention of the human spirit, to try and find answers for causes,” says Hicks. “For a true depth of understanding, to be able to think on your feet with a balanced worldview, you need both parts.”

*********************

When she’s not helping organize LauraPalooza, Barb Boustead spends her hours as a meteorologist in the National Weather Service’s Omaha office. An impassioned weather educator, she writes about the science of weather, its impacts, and how people can prepare for inclement weather on her blog, Wilder Weather.

At the end of a recent winter, Boustead revisited a Wilder book from her youth, The Long Winter, centered on the Ingalls' trials during an exceptionally harsh South Dakota winter.

"There's women and children that haven't had a square meal since before Christmas," Almanzo put it to him. "They've got to get something to eat or they'll starve to death before spring." – The Long Winter (1940)

Boustead said she found herself wondering whether the back-to-back blizzards Laura wrote about had been as bad as she described. Boustead realized that as a meteorologist, she had the tools not only to find out, but to quantify that winter’s severity.

The winter of 1880-81 was relatively well documented for the time. Compiling records on temperature, precipitation and snow depth from 1950 through 2013, she developed a tool to assign a relative “badness” score to the weather recorded at one or more stations in a geographic area. The Accumulated Winter Season Severity Index (AWSSI, rhymes with “bossy”) assigns an absolute severity grade for how the weather compares with the entire country, and a relative severity grade for comparing regional weather. It can also track year-over-year trends.
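To give a feel for how such an index accumulates, here is a toy version of the idea. It is only an illustration: the real AWSSI uses published point tables for daily temperature, snowfall and snow depth, whereas the thresholds below are invented.

```python
# Toy winter-severity accumulator, loosely inspired by the AWSSI concept.
# The point thresholds are made up for illustration; the real index uses
# published tables for temperature, snowfall and snow depth.
def daily_points(max_temp_f, min_temp_f, snowfall_in, snow_depth_in):
    points = 0
    if max_temp_f <= 32:
        points += 1                  # high stays below freezing
    if min_temp_f <= 0:
        points += 3                  # subzero overnight low
    points += int(snowfall_in)       # roughly a point per inch of new snow
    if snow_depth_in >= 12:
        points += 2                  # deep snowpack already on the ground
    return points

def season_severity(daily_observations):
    """Sum daily points over a season; higher totals mean a 'worse' winter."""
    return sum(daily_points(*obs) for obs in daily_observations)

# A single blizzard week like those in The Long Winter racks up points fast.
blizzard_week = [(10, -15, 8, 20)] * 7
print(season_severity(blizzard_week))   # 7 days * (1 + 3 + 8 + 2) = 98
```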

Boustead applied the tool to records at weather stations from the 1800s. Every site Boustead investigated in Laura’s region in that year falls into the “extreme” category rating on the AWSSI scale, marking it as a record year for snowfall and temperature lows. The season covered in The Long Winter still ranks in the top 10 worst winters on record for South Dakota, as well as other regions of the country.

Boustead said she has found that people pay more attention to the science of weather when a good story is involved.  “Scientists are told to give facts and information, and not tell a ‘story,’ since that becomes associated with fiction—but it’s not fiction,” Boustead said.  

*********

During a meeting in 2000 between medical students and an attending physician at the Albert Einstein College of Medicine in New York City, the subject of scarlet fever came up.

Beth Tarini, now an assistant professor of pediatrics at the University of Michigan, but at the time a third-year medical student on her pediatrics rotation, piped up. “You can go blind from that, can’t you?”

The attending physician said no, but hesitated when Tarini insisted, citing it as the cause of Mary Ingalls’ blindness, as recounted by her sister Laura in By the Shores of Silver Lake.

Beth Tarini, an assistant professor of pediatrics at the University of Michigan, with her collection of Wilder books. (Courtesy of Beth Tarini)

Motivated, Tarini started digging through med school books and references from the 19th century to see if she could find even a hint of verification that scarlet fever could truly be the cause of Mary’s loss of vision.  Picking up the project after a decade-long hiatus, Tarini and an assistant, Sarah Allexan, broadened the search, seeking evidence of an epidemic that might have caused a spate of blindness in children.

They found something better: an actual account of Mary’s fever, facial paralysis and month-long descent into blindness in a local paper from the Minnesota town where the Ingalls family lived.

They also dug into letters between Laura and her daughter Rose, which eventually became part of Laura’s autobiography:

She was suddenly taken sick with a pain in her head and grew worse quickly. She was delirious with an awful fever. We feared for several days that she would not get well. … One morning when I looked at her I saw one side of her face drawn out of shape. Ma said Mary had had a stroke. –Pioneer Girl (Published posthumously in 2014)

Using the newspaper’s reports along with those letters, Tarini guessed Mary had been laid low by either meningitis or encephalitis. A main clue was Laura’s description of Mary’s affliction as a “spinal sickness.”

She narrowed down the likely cause as viral meningoencephalitis, an inflammation of the covering of the spinal cord and brain, not only because of the prolonged headache and fever, but because of the time it took for Mary to go blind. Losing her vision progressively was more indicative of nerve damage from chronic inflammation following an infection. Laura had probably described Mary’s illness as scarlet fever because it commonly plagued children in that time, and readers would have been familiar with it as a terrible illness.

“The newspaper reports brought home the fact that Mary was a real person and her suffering was witnessed and recorded by her community,” Tarini said. “That reinforced our sense that we were getting close to truth.”

Viral encephalitis does not have a cure. Like other virus-caused illnesses, it simply must run its course. But chances are, if Mary Ingalls were similarly stricken today, her blue eyes would still see after she recovered. Hospitalized immediately for a spinal tap and full bloodwork, she would be well fed and kept hydrated, treated for seizures if they occurred, and given steroids for any vision-threatening inflammation. Tissue and fluid samples may be sent to the Centers for Disease Control to help confirm the diagnosis of viral or bacterial meningitis or encephalitis.

“It’s the ultimate differential diagnostic challenge,” Tarini said. “I don’t have the patient there to give me the history or to examine. I had to assemble the clues that history left me.”

How Humankind Got Ahead of Infectious Disease

Smithsonian Magazine

World health officials and organizations are currently involved in a final push to eradicate polio, the paralyzing disease that was once a crisis in the United States but now remains in just three countries—Pakistan, Nigeria and Afghanistan. If the efforts succeed, polio will join smallpox as one of the only human infectious diseases to have been eliminated entirely. Such a feat involves cooperation, coordination and determination, but it also rests on one crucial development: vaccines, what career immunologist John Rhodes calls “the most successful medical measure of any.”

Rhodes has spent his life studying how the immune system reacts to first encounters with infectious agents and other fundamental aspects of vaccine development and success. His research interests have included influenza, malaria and HIV/AIDS vaccines, with time at the U.S. National Institutes of Health, the Wellcome Foundation in London and GlaxoSmithKline, where he was the director of strategy in immunology from 2001 until 2007. In his new book, The End of Plagues: The Global Battle Against Infectious Disease (MacSci), Rhodes traces the long road to vaccination and the twists and turns that are still ahead.

Your story begins with smallpox, widely cited as one of the biggest killers in history. How did that disease affect society?

Up until the 17th century, it was the Black Death, or bubonic plague, which had the most impact. The Great Plague of London, which happened in 1666, was the last major visitation, at least in Britain. After that, there was a considerable change in the pattern of disease in that smallpox became the biggest killer. The difference between the plague and smallpox is that smallpox afflicted people across the social scale. Those at the very highest, the very top of society, the highest in the land, seemed equally at risk, whereas in the case of the plague it was just the poor people who tended to die in very large numbers.

How many people were affected?

If you lived in London in the 18th century, then most children would have smallpox during their childhood. The mortality rates were about 20 to 30 percent. It was a common experience in virtually every household in the cities.

Help came from an unlikely source, a woman who was an aristocrat rather than a member of the medical profession. Who was Lady Mary Wortley Montagu, and what role did she play?

She was a remarkable woman and a pioneer of women’s rights. In 1717 she went to Constantinople, modern-day Istanbul, with her husband, who was ambassador there. She learned the customs of ordinary people and discovered that the Greek community in Constantinople had a long-standing custom of protecting their children with the forerunner to vaccination, called variolation. By giving small amounts of the smallpox germ under the skin, preferably from a non-serious case of smallpox, they could protect their children. When she came back to London, she championed and pioneered this against a good deal of resistance, especially from members of the medical profession, who were still promoting the classical idea of upsets in the four vital humors as the cause of disease. Purging, vomiting and bloodletting were the treatments of choice at the time.

Mary was a lone voice. Then she convinced Caroline of Ansbach, the wife of the Prince of Wales, that this was the way to protect aristocratic children who could afford the treatment. Mary and Caroline pioneered it, which led to the first trial in 1721, the so-called Royal Experiment in Newgate Prison, where a handful of prisoners were injected with smallpox on the understanding that if they survived they would be pardoned. (They were all due to be hanged.)

Was this approach seen as, well, gross at the time?

You have to remember that this was taking place when disease was rife, sanitation was poor, there was no reliable supply of clean water so diseases like cholera caused epidemics periodically. Inevitably, that is why people tended to drink beer—small beer it was called, with a low level of alcohol—because they knew it was safe. The standards of life were very much different from what they are today. Any sign of some sort of protective measure was seized upon and the standards of proof were very, very low. If it seemed to be safe, then people would adopt it because they hoped it would be lifesaving. That is how half a dozen prisoners came to persuade King George that this should be adopted for the members of his family.

At what point does Edward Jenner, the English doctor credited as the pioneer of vaccination, come into the picture?

Jenner was aware of the variolation that had been championed by Lady Mary and Princess Caroline, and also in the Americas by Cotton Mather. Jenner himself was variolated as a child; it was a horrendous experience, and he was very unwell for quite a while. Part of the reason was that members of the medical profession were trying to regain ownership of the process from practitioners whom they viewed as breaking from medical tradition, so they added a period of fasting and a strange diet in order to remystify the process. Jenner came across the notion that milkmaids were never susceptible to smallpox, and he realized it might be possible to use an innocuous agent, cowpox, to do the same thing as the very dangerous variolation. It took him almost three decades before he actually did the experiments, in the late 1790s. It wasn’t a step in the dark. It was an improvement on something that already existed—a pivotal improvement, which relatively quickly spread across the world.

There are stunning stories of how vaccination spread. Can you offer an example?

The King of Spain and others essentially wanted to protect their colonies, which were enormously valuable assets to them. So, in the early 19th century, in what I’ve called “the founding voyages,” chains of children were vaccinated one by one so that the vaccine remained fresh over the course of a sea voyage. By the end of the voyage, the last few children would be vaccinated so there was fresh material, fresh cowpox material in this case, to begin to vaccinate in South America. The Portuguese also championed the same strategy. One of the good things was they didn’t confine it to their own colonies. They went into Asia as well. And that is how the spread of vaccination occurred across the globe.

Was there a backlash from skeptics?

I don’t think it was anything we would recognize as a legitimate concern over safety. It was much more to do with religious and philosophical objections to the introduction of a bestial humor [a vital fluid from a non-human animal] into the human body. The idea of deliberately using a disease from a cow to protect humans against disease was repugnant to a large group of people. There were more reasoned critics who believed there was little benefit from vaccination, and it took a little while to convince people. But it was only a matter of five years or so before it was beginning its inexorable spread.

How did vaccination evolve, and eventually move beyond smallpox?

There was a sort of gradual, slowly evolving incremental improvement until the end of the 19th century, when there was an explosion in the field of bacteriology. Scientists began to realize that there were many other diseases that could be addressed with vaccines, and that led to widespread attempts to bring about vaccines for other infectious diseases. Louis Pasteur and Robert Koch were the important figures of the late 19th century.

It was germ theory that altered everything. In the 1860s, Pasteur was first to show that germs do not arise spontaneously; they exist pretty much everywhere around us. He did away with the theory of spontaneous generation. He also managed to produce vaccines against rabies and cholera. And a lot of his discoveries were almost serendipitous. In the case of cholera, the researchers had left a culture of the cholera germ out on the bench, so it grew weak. Then, when they injected it into chickens, instead of getting cholera, the chickens were protected against subsequent infection… Pasteur knew all about Jenner’s work, by the way, and he used the term “vaccine,” extending it to all kinds of vaccines in Jenner’s honor.

Thereafter, there were all kinds of exciting stories. One of the most important was the discovery of antibodies, or antitoxins as they were then called.

It’s clear that vaccines have brought us a long way. What are the plagues that, contrary to your book’s title, are still threats?

Malaria is a huge killer on a global scale and a lot of the disease burden is in the developing world. There are exciting vaccines in the pipeline for malaria.

And tuberculosis, surprisingly, still produces a huge mortality on the global scale. The BCG vaccine, developed in the early part of the 20th century, is highly controversial. It is used in Britain, in Europe and in developing countries, but it is not used in the U.S.A. One of the problems is that if you vaccinate against TB with BCG, you can’t then screen for whether someone has TB or not. If you have been vaccinated, it looks as though you’ve been exposed.

The third is HIV/AIDS, where there has been so much effort and interest in developing a protective vaccine. It has been hugely frustrating for a decade at least. It is partly because the virus targets the very system you are trying to enhance and strengthen—it targets the immune system and the cells that normally defend us against infection. Those three I would pick as the major global targets, together with polio.

 

Interested in learning more? Read John Rhodes' The End of Plagues: The Global Battle Against Infectious Disease (MacSci).

A Brief History of Openly Gay Olympians

Smithsonian Magazine

Watching figure skater Adam Rippon compete, it’s easy to forget that he’s on skates. His dramatic, sharp movements – and facial expressions to match – emulate those of a professional dancer, at once complementing and contradicting his smooth, unfettered movement along the ice. He hides the technical difficulty of every jump and spin with head-flips and a commanding gaze, a performer as well as an athlete. But there’s one thing Rippon won’t be hiding – this year, he and freestyle skier Gus Kenworthy will become the first openly gay American men to ever compete in the Winter Olympics.

“The atmosphere in the country has changed dramatically,” says Cyd Zeigler, who co-founded Outsports, a news website that highlights the stories of LGBT athletes, in 1999. “Two men getting married wasn’t even a possibility when we started Outsports. Now it’s a reality in Birmingham, Alabama. There are gay role models at every turn – on television, on local sports, and in our communities.”

Even so, the last time that the United States sent an openly gay man to any Olympic Games was in 2004, when equestrians Guenter Seidel and Robert Dover won bronze in team dressage. It was Dover’s sixth time representing the United States at the Olympics; during his second Games, in 1988, Dover came out, becoming the first openly gay athlete to compete in the modern Olympics.

"I wish that all gay athletes would come out in all disciplines – football, baseball, the Olympics, whatever," Dover has said. "After six Olympics, I know they're in every sport. You just have to spend one day in the housing, the gyms, or at dinner to realize we're all over."

Indeed, by the time Dover came out on the international stage, it was clear that gay athletes were competing and winning in all levels of professional sports. Seven years earlier, tennis star Billie Jean King was famously outed when a lawsuit filed by a former lover led her to publicly admit to having had a lesbian affair. (King promptly lost all her professional endorsements, but later said she only wished that she had come out sooner.) And in 1982, former Olympian Tom Waddell – who would die from AIDS at the height of the epidemic five years later – helped found the first Gay Games for LGBT athletes. 1,350 athletes competed.

But it was more than a decade earlier when an openly gay athlete first performed in the Olympic Games. Just not exactly during competition.

English figure skater John Curry had barely come off the high of winning gold at the 1976 Winter Olympics in Innsbruck, Austria, when reporters caught wind of his sexuality from an article published in the International Herald Tribune. They cornered the skater in a press conference to grill him on matters most personal, according to Bill Jones’s Alone: The Triumph and Tragedy of John Curry. Curry acknowledged that the rumors about his sexuality were true, but when journalists asked prurient questions betraying the era’s misconceptions about homosexuality and masculinity, Curry fought back: “I don’t think I lack virility, and what other people think of me doesn’t matter,” he said. “Do you think that what I did yesterday was not athletic?” (It should be noted as well that homosexual acts were outlawed in the U.K. at the time.)

But even though the competition was over for Curry, custom had it that medal winners were expected to appear in exhibition performances. There, in a fiery, unflinching athletic spectacle, Curry abandoned his usual lively routine of skips and hops for a stern technical masterpiece, making him the first openly gay athlete to perform on the Olympic stage.

“When everyone had telephoned their story and discussions broke out in many languages around the bar, opinion began to emerge that it was [Curry] who was normal and that it was we who were abnormal,” wrote Christopher Brasher, a reporter for The Observer, in his coverage that year.

LGBT journalists and historians, including Zeigler and Tony Scupham-Bilton, have catalogued the many Olympians who were homosexual but competed in a time before being “out” was safe and acceptable. German runner Otto Peltzer, for instance, competed in the 1928 and 1932 Olympics, but was arrested by the Nazis in 1934 for his homosexuality and was later sent to the concentration camps. In more recent years, athletes have waited to come out until after their time in competition was over, including figure skaters Johnny Weir and Brian Boitano and American diver Greg Louganis. Louganis was long rumored to be gay, but didn’t come out publicly until the opening ceremonies of the 1994 Gay Games: "Welcome to the Gay Games,” Louganis said to the crowd. “It's great to be out and proud."

Though the early history of openly gay Olympians is dotted with male athletes, openly gay women have quietly gained prevalence in recent competitions. French tennis player Amélie Mauresmo is among the first women to come out publicly prior to an Olympic appearance – though, Zeigler added, whether an athlete comes out publicly is based in part on the prominence of their sport outside the Olympics. In 1999, a year before her first Olympic competition, reporters questioned her sexuality after an opponent called her “half a man” for showing up to a match with her girlfriend. Mauresmo’s casual discussion of her sexuality as an integral part of her life and dismissal of concerns that she would lose sponsorship represented a shift in the stigma surrounding coming out as an athlete. Fear of commercial failure still underpinned many athletes’ decisions not to come out, but Mauresmo was undaunted.

“No matter what I do, there will always be people against me,” Mauresmo has said. “With that in mind, I decided to make my sexuality clear… I wanted to say it once and for all. And now I want us to talk about tennis.” Mauresmo still faced criticism for her “masculinity.” But her sponsor, Nike, embraced her muscular look by designing clothes that would display her strength, according to the 2016 book Out in Sport. Mauresmo went on to win silver in women’s singles in 2004.

At the 2008 Summer Olympics in Beijing, 11 openly gay athletes competed, only one of whom – Australian diver Matthew Mitcham, who won gold and is a vocal LGBT activist – was a man. All six openly gay athletes at the 2010 Winter Olympics in Vancouver were women, as were all seven of the openly gay athletes at the 2014 Winter Olympics in Sochi. Both of the intervening Summer Olympics saw a greater turnout of openly gay athletes, but women still held the large majority. In 2016, four of the players on the U.S. women’s basketball team – Elena Delle Donne, Brittney Griner, Seimone Augustus and Angel McCoughtry – were openly gay.

This accounting of course elides that sexual orientation is a spectrum. Olympians who openly identify as bisexual, for instance, are growing in number as well. Additionally, the International Olympic Committee, and the many governing bodies within, have made some strides when it comes to recognizing that gender is not binary, though policies for transgender athletes remain a thorny debate among officials and athletes. That being said, the IOC allowed pre-surgery transgender athletes to take part in the 2016 Rio Games.

With this year’s Winter Games in Pyeongchang, Rippon and Kenworthy are the first openly gay American men to compete in the Olympics since the legality of same-sex marriage was established throughout the United States in 2015, and the cultural shift is apparent. While American tennis legend Martina Navratilova, who came out in 1981 but competed as an Olympian for the first time in 2004, has said that coming out in 1981 cost her $10 million in sponsorships, Kenworthy boasts sponsorships with Visa, Toyota and Ralph Lauren, to name a few. The skier also recently appeared in an ad for Head & Shoulders, with a rainbow pride flag waving behind him.

“The atmosphere for LGBT athletes has changed quicker in the past decade,” says Scupham-Bilton, an LGBT and Olympic historian. “In the 20th century there was more homophobia in sport and society in general. As the increase in LGBT equality has progressed, so has acceptance of LGBT athletes.”

There’s one notable exception: Sochi 2014. The summer before hosting the Winter Olympics, in what many saw as an affront to gay rights activism, the Russian government passed a law prohibiting the promotion of “nontraditional” sexual relationships to minors. The United States used the Olympic platform as an opportunity for subtle protest, including prominent gay athletes Brian Boitano, Billie Jean King and Caitlin Cahow in its Olympic delegation, and protests were staged across the world. Despite the outpouring of international support, Canadian figure skater Eric Radford opted to wait until after Sochi to come out, citing his desire to be recognized for his skill, rather than his sexuality. He’s already made his mark at the Pyeongchang Games, where his performance with skating partner Meagan Duhamel vaulted Canada to the top of the team figure skating competition.

Rippon and Kenworthy have used their newfound platforms to make statements on political issues. Rippon recently made headlines when he refused an offer to meet with Vice President Mike Pence due to disagreements with his stances on LGBT rights – which include past statements that appear to support funding gay conversion therapy. Pence’s former press secretary denied his support for gay conversion therapy during the 2016 presidential campaign. Kenworthy also criticized the Vice President as a “bad fit” to lead the United States' delegation at the Opening Ceremony in Pyeongchang on Friday.

Political platforms and sponsorships aside, Rippon and Kenworthy ultimately hoped that by coming out they could live as freer, more authentic versions of themselves – and empower others to do the same.

“There is pressure that comes with this responsibility and I feel I have a responsibility to the LGBT community now,” Kenworthy has said. “I want to be a positive example and an inspiration for any kids that I can.”
