The Future Is Here Festival Considers Extraterrestrial Life and the Essence of Humanity

Smithsonian Magazine

There’s no good reason to expect that alien life, should any prove detectable, will be made in humanity’s image, as Hollywood films tend to depict it, said Seth Shostak, director of the Center for SETI Research at the SETI Institute, on Sunday at Smithsonian magazine’s “Future is Here” festival in Washington, D.C. Shostak, incidentally, consults with film companies on their alien depictions.

“Hollywood usually resorts to little gray guys with big eyeballs, no hair, no sense of humor and no clothes, because it saves a whole lot of backstory,” he said. “We’ve been rather anthropocentric. We assume that they’re somewhat like we are. That may be fundamentally wrong.” In response to an audience member’s question, he added, “Our data set on alien sociology is sparse.”

Extraterrestrial life, Shostak said, is likely to be more computer-like than human in nature. Just as humans are building artificial intelligence, aliens may do the same, and instead of the kinds of aliens that show up in movies, we could be more likely to encounter the robots or computer systems the aliens created. Those hoping to find extraterrestrial life, then, ought to look in places different from those we’ve imagined to date: further-evolved alien life probably doesn’t require planets with water and oxygen, as people do, Shostak said.

Seth Shostak, director of the Center for SETI Research, spoke about the search for extraterrestrial life. (Richard Greenhouse Photography)

Shostak's critique of popular culture's take on aliens' appearance was one of many raised at the festival, which played host to scientists, philosophers, authors and engineers. While there, they envisioned a future where science meets science fiction. Sunday’s lineup of speakers, supported in part by the John Templeton Foundation, included Frans de Waal, a professor of primate behavior at Emory University; Marco Tempest, a “cyber illusionist”; Rebecca Newberger Goldstein, a philosopher and author; Sara Seager, a planetary scientist and astrophysicist; and several NASA scientists and engineers.

As varied as they were, the talks had one common thread: Human narcissism can be rather misleading and unproductive at times, while at others, it may hold great scientific promise.  

If aliens are too often imagined in human terms, there is an opposite tendency to underappreciate animal ingenuity because it is measured against human intelligence. That sells dolphins, apes, elephants, magpies, octopuses and others short, said de Waal, a primatologist. He’d rather scientists allow for more elasticity, adopting anthropomorphic vocabulary and concepts to consider certain animals as rather more like humans.

Frans de Waal, a primatologist, talked about animal cognition at the festival. (Richard Greenhouse Photography)

De Waal showed a video of a bonobo carrying a heavy rock on its back for half a kilometer until it arrived at the hardest surface in the sanctuary, where it used the rock to crack open some nuts. “That means she picked up her tool 15 minutes before she had the nuts,” de Waal said. “The whole idea that animals live only in the present has been abandoned.”

He showed a video of a chimp and another of an elephant each recognizing itself in a mirror, opening wide to gain an otherwise inaccessible view of the insides of their mouths. “If your dog did this, you’re going to call me,” he said.

All animal cognition, clearly, isn’t created equal, but de Waal stressed that for the animals that do exhibit cognition, it’s hardly a sin to use anthropomorphic terms to describe, say, a chimp laughing when tickled. It certainly looks and functions like a human laugh, he said.

The focus first on yet-unknown, and perhaps nonexistent, alien life, and then on the very familiar creatures with which we share the planet, served as a microcosm of the day’s broader agenda. Laying the groundwork for the notion that the future has arrived already, Michael Caruso, editor-in-chief of Smithsonian magazine, told the audience to consider itself a group of time machines.

“Your eyes are actually lenses of a time machine,” he said, noting that the further into space we look, the more of the past we see. “The light from the moon above us last night came to us a second and a half old. The light from the sun outside today is eight minutes and 19 seconds in the past. The light that we see from the stars at the center of the Milky Way is actually from the time of our last ice age, 25,000 years ago. Even the words I’m speaking right now, by the time you hear them, exist a nanosecond in the past.”
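Caruso’s figures are straightforward to check: the lookback time is simply distance divided by the speed of light. Here is a minimal sketch (not from the festival; the distances below are approximate, commonly cited values) that reproduces his numbers in Python:

    # Rough check of the light-travel-time figures quoted above.
    # Distances are approximate, commonly cited values (assumptions, not from the article).
    C_KM_PER_S = 299_792.458              # speed of light in km/s
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    distances_km = {
        "Moon": 384_400,                        # average Earth-Moon distance
        "Sun": 149_600_000,                     # one astronomical unit
        "Milky Way center": 25_000 * 9.461e12,  # ~25,000 light-years
    }

    for body, km in distances_km.items():
        seconds = km / C_KM_PER_S
        if seconds < 3600:
            print(f"{body}: light arrives ~{seconds:,.1f} seconds old")
        else:
            print(f"{body}: light arrives ~{seconds / SECONDS_PER_YEAR:,.0f} years old")

Run as written, this prints roughly 1.3 seconds for the moon, 499 seconds (eight minutes and 19 seconds) for the sun, and 25,000 years for the galactic center, matching Caruso’s arithmetic.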

While everything surrounding attendees represents the past, they themselves are the future. The key, he said, is to share knowledge, compare notes and overlap what we all know.

“That’s what we do here at the festival,” Caruso said.

Sara Seager, a planetary scientist and astrophysicist, studies exoplanets. (Richard Greenhouse Photography)

Other speakers picked up where Shostak and de Waal left off. In the search for extraterrestrial life, scientists are studying exoplanets, or planets that orbit stars other than the sun. Some of these, said Seager, an MIT professor of planetary science and physics, display ripe conditions to support life. “We know that small planets are out there waiting to be found,” she said. Though that doesn’t mean it’s easy hunting. “I liken it to winning the lottery—a few times,” she said.

Philosopher and writer Rebecca Newberger Goldstein, meanwhile, turned the lens not on planets many light-years away, but on the human condition here at home. She discussed what she called the “mattering map,” a spectrum upon which individuals weigh and evaluate the degree to which they matter. “We are endowed with a mattering instinct,” she said. Or put another way: Everyone has an address on the mattering map, “an address of your soul.”

So much psychic power is embedded in the notion of mattering, she added, that people will often give up their lives, whether to secure the opportunity to matter or because they feel they no longer matter. This is particularly relevant in the age of social media and selfies, she said, when there is a temptation to measure how much one matters by others’ approval.

“Who doesn’t like it when their Twitter following grows?” she asked.

Other speakers filled in more holes in the broader conversation about the future colliding with the present. “What was once magic is now reality,” said Marco Tempest, a “cyber illusionist” whose magic performance was enhanced by digital elements. He performed a card trick while wearing a digital headset, and the audience, presumably, saw what he saw projected on a screen. The projection overlaid digital information atop the cards, sometimes animating certain elements and other times layering in extra information. Magicians and hackers are alike, Tempest said, in that they don’t take what surrounds them at face value. They see material as something to be played with, examined and questioned, rather than taken for granted.

NASA engineer Adam Steltzner talked about the Mars 2020 project. (Richard Greenhouse Photography)

A variety of National Aeronautics and Space Administration representatives, including Dava Newman, NASA’s deputy administrator, discussed everything from Hollywood depictions of space exploration to augmented and virtual reality. NASA’s mission is “off the Earth, for the Earth,” Newman said: everything the agency does, even in areas quite far from our planet, relates back to what is best for the people on it. Jim Green, who directs NASA’s planetary science division, spoke highly of art’s capacity to impact the real-life space program. “Science fiction is so important to our culture, because it allows us to dream,” he said.

That melding of dreaming and reality, of searching for what humanity has never encountered, such as extraterrestrial life and new planets, is a vital mix that helps keep things grounded, said Seager, the astrophysicist, in an interview after her talk.

“We do have our ultimate goal, like the Holy Grail. I don’t want to say we may never find it [extraterrestrial life], but that thought is always kind of there,” she said. “At least we will be finding other stuff along the way.”

The Wealthy Activist Who Helped Turn “Bleeding Kansas” Free

Smithsonian Magazine

On May 24, 1854, Anthony Burns, a young African-American man, was captured on his way home from work. He had escaped from slavery in Virginia and had made his way to Boston, where he was employed in a men’s clothing store. His owner tracked him down and had him arrested. Under the Fugitive Slave Act of 1850 and the United States Constitution, Burns had no rights whatsoever.

To the people of Boston, his capture was an outrage. Seven thousand citizens tried to break him out of jail, and the finest lawyers in Boston tried to make a case for his freedom, all to no avail. On June 2, Burns was escorted to a waiting ship and returned to bondage.

This entire episode had a profound impact on many Bostonians, but one in particular: Amos Adams Lawrence. The Burns episode likely was the first time Lawrence came face-to-face with the evils of slavery, and shortly after Burns was returned to bondage, he wrote to his uncle that “we went to bed one night old-fashioned, conservative, Compromise Union Whigs and waked up stark mad Abolitionists.” (The Whig Party was divided over slavery at this time; by 1854, when the Republican Party was organized, the Whigs were no longer a strong force in U.S. politics.)

Lawrence was a somewhat unlikely abolitionist. He was born into one of the bluest of blue-blood families in Boston and had every benefit his family’s wealth could provide, attending Franklin Academy, an elite boarding school, and then Harvard. True, the Lawrence family had a strong philanthropic ethic. Amos’s uncle, Abbott Lawrence, donated $50,000 to Harvard in 1847—which at the time was the largest single donation given to any college in the United States—to establish Lawrence Scientific School, and Amos’s father, also named Amos, retired at age 45 to devote the remainder of his life to philanthropy. In 1854, Amos Adams Lawrence wrote in his private diary that he needed to make enough money in his business practices to support charities that were important to him.

A print created in Boston in the 1850s showing Anthony Burns and scenes from his life (Image courtesy of Library of Congress)

But those business practices made backing an anti-slavery charity unlikely. His family made its fortune in the textile industry, and Lawrence himself created a business niche as a commission merchant selling manufactured textiles produced in New England. Most of the textiles Lawrence and his family produced and sold were made from cotton, which was planted, picked, ginned, baled, and shipped by slaves. This fact presents an interesting conundrum. The Burns episode made Lawrence, as he wrote, “a stark mad abolitionist,” but, as far as we know, the fact that his business relied on the labor of the same people he was trying to free did not seem to bother him.

Lawrence very quickly had the opportunity to translate his new-found abolitionism into action. On May 30, 1854, in the midst of the Burns affair, President Franklin Pierce signed into law the Kansas-Nebraska Act, which established Kansas and Nebraska as territories but allowed each to decide for itself, under the concept of popular sovereignty, whether it wanted slavery or not. To many abolitionists, this was an outrage, because it opened the possibility for another slave state to enter the Union. Also, with the slave-holding state of Missouri right next door, the pro-slavery side seemed to have an undue advantage.

This was Lawrence’s chance. A friend introduced him to Eli Thayer, who had just organized the Emigrant Aid Company to encourage antislavery settlers to emigrate to Kansas with the goal of making the territory a free state. Lawrence became the company’s treasurer, and immediately began dipping into his pocket to cover expenses. When the first antislavery pioneers arrived in Kansas, they decided to call their new community “Lawrence,” knowing that without their benefactor’s financial aid, their venture likely would not have been possible.

Lawrence was frequently frustrated that the company’s leaders were not aggressive enough in raising money, but he quietly continued to cover the bills. At one point, he confided to his diary, when bills for the Emigrant Aid Company came due, he did not have enough of his own money on hand, so he sold shares in his business to cover the expenses. Whenever there was a need for special funding in Kansas, Lawrence would donate and ask others to do so as well. Lawrence and his brothers, for example, contributed to the purchase of Sharps rifles—the most advanced weapons of the day—for the citizens of Lawrence.

44-caliber Sharps percussion sporting rifle used by abolitionist John Brown, ca 1856 (Image courtesy of National Museum of American History)

They needed those guns. Because Lawrence, Kansas, was the center of the antislavery movement, it became the prime target of pro-slavery forces. In late 1855, Missourians massed to attack Lawrence in what was called the Wakarusa War. Nothing happened that time, and the Missourians returned home. But less than a year later came the “Sack of Lawrence,” in which pro-slavery Missourians burned much of the town to the ground. Amos Lawrence continued to support the effort to make Kansas a free state. In 1857, he again dug into his pocket, donating $12,696 to establish a fund “for the advancement of religious and intellectual education of the young in Kansas.”

Eventually, in 1861, Kansas was admitted to the Union as a free state. The town of Lawrence played an important role in this development, and several of its residents became leaders in the early state government. But the wounds of the territorial period continued to fester. In August 1863, during the Civil War, Lawrence burned again: William Clarke Quantrill, a Confederate guerrilla chieftain, led his cutthroat band into the town, killed more than 200 men and boys, and set the place on fire.

Just months before, the town had been granted approval from the new state legislature to build the University of Kansas. Citizens needed to raise $15,000 to make this happen, and the raid had left them nearly wiped out. Again, Amos Lawrence came to the rescue, digging into his pocket for $10,000 to make sure Lawrence, Kansas, would become the home of the state university.

In 1884, Amos Lawrence finally visited the town that bore his name. Citizens rolled out the red carpet: he was honored by the university he had been instrumental in creating and was the guest of honor at several other events. But Lawrence had always been a very private person, and the hoopla over his visit was too much. He stayed for a couple of days, then returned home to Boston. He never visited again.

To the people of modern-day Lawrence, Amos Lawrence has faded from memory. A reporter writing about him in a recent local newspaper article was unaware that he had visited the town. But Lawrence's support and money were essential in making Kansas a free state. When Lawrence responded to Burns's brutal treatment, he showed how a citizen can be shocked out of complacency and into action—and thus made history.

Robert K. Sutton is the former chief historian of the National Park Service. He is author of Stark Mad Abolitionists: Lawrence, Kansas, and the Battle Over Slavery in the Civil War Era (New York: Skyhorse Press, 2017).  He wrote this for What It Means to Be American, a project of the Smithsonian and Zócalo Public Square.

Revisiting the First Ladies’ Homes

Smithsonian Magazine

Preserving the memory of the woman some call the nation's first female president is a task that Farron and William Smith take seriously. Last fall, the couple opened a museum in Wytheville, Virginia, dedicated to Edith Bolling Wilson, who some historians claim ran the U.S. government while her husband, Woodrow Wilson, recovered from a massive stroke during his second term. The Smiths own the two-story brick building in this small southwest Virginia city, where Mrs. Wilson was born more than 100 years ago.

"We decided that once our kids were educated, we'd devote our time to making the museum," Farron Smith says. "We've spent a lot of money on it; we could have re-educated our children again. But we just feel a real responsibility to preserve this for future generations."

In doing so, the Smiths have joined forces with a clutch of other torchbearers for former first ladies. Their birthplaces, childhood homes and post-White House residences have been turned into museums and memorials across the country. The National Park Service operates some of them, while others are community efforts.

The Mamie Doud Eisenhower birthplace in Boone, Iowa, is a fine example of the latter. The wooden cottage had a succession of owners after Mrs. Eisenhower's birth in 1896 and decades later faced demolition. A group formed to save the house, and a neighbor then offered to tear down a house on a lot across the street to make way for the Doud residence. In 1975, the birthplace was moved to its new location and the museum opened five years later.

"We have a struggle," explains Charles Irwin, executive director of the Boone County Historical Society, which oversees the museum. "We've had declining attendance over the years, because we're getting further away from the Eisenhower era."

Two other factors affect the plight of first ladies museums: money and status.

"For so long, there was a certain devaluation of women in general, and wives in particular," said Carl Sferrazza Anthony, historian for the National First Ladies' Library – housed in the family home of 25th first lady Ida Saxton McKinley – in Canton, Ohio. "Sometimes it is merely practicality; the family may have needed money and sold the house or torn it down to sell for the property value. In the cases of those who came from politically powerful, socially prominent or wealthy families into which presidents married, some of these sites have been preserved…[T]he importance of the Todd family [Abraham Lincoln's in-laws] in Kentucky and Republican history [meant] that house was preserved."

Sometimes neither money nor prestige is enough. Take the case of Hammersmith Farm in Newport, R.I., where Jacqueline Bouvier spent her childhood summers and held the reception for her wedding to John F. Kennedy. In 1977, the family sold the estate to a private group called Camelot Gardens, which opened it as a museum. "It felt as if the family had just stepped outside," Anthony recalls. "Unfortunately, the state government didn't decide to buy it and it became too expensive to maintain. It was sold to a private owner and all the furnishings auctioned off."

The existing museums are cautiously optimistic that keen interest in the new first lady, Michelle Obama, will drum up business for them. Anthony says the library has been flooded with queries from the media since Mrs. Obama spoke at the Democratic National Convention last summer.

And about 200 miles southwest of the library, the number of visitors to the Lucy Hayes Heritage Center in Chillicothe, Ohio, was up to 149 for the month of April; the small frame house where the 19th president's wife was born typically never gets more than 500 visitors throughout the year.

"People often crack that had these women not been married to these men, we'd never know of them," Anthony said. "But the other side of that truth is that had many of these men not married the women they did, we'd never have heard of these men."

Image courtesy of the National First Ladies' Library. The Saxton McKinley house, located in Canton, Ohio, was home to former President William McKinley and his wife Ida. They lived there while William served in the U.S. House of Representatives.

Image courtesy of NPS WD Urbin. Eleanor Roosevelt used Val-Kill, located in New York, as a retreat, office and “laboratory” for social change. This is the only national historic site dedicated to a first lady.

Image by Bettmann / Corbis. Bess Truman grew up in this home located at 219 North Delaware Street, Independence, Mo. Bess and Harry S. Truman moved into the home after their marriage in 1919 and returned to the house after living in the White House.

Image by Suzanne Caswell. Located in Boone, Iowa, the Mamie Eisenhower birthplace went through multiple owners. In 1975 it was moved across the street to its current location to avoid demolition.

Image by Kevin R. Morris / Corbis. The Mary Todd Lincoln house in Lexington, Ky., was the first house restored to honor a first lady.

Image by Library of Congress. Located in Philadelphia, Pa., the Dolley Todd house was built by lawyer John Todd and his wife, Dolley Payne. When Todd died, Payne married James Madison, who was to become the fourth president of the United States.

Image by The Edith Bolling Wilson Birthplace Foundation. Edith Bolling Wilson’s two-story brick house is located in Wytheville, Va. The home was constructed in the 1840s and has three storefronts at street level.

Image by The Ohio Department of Development, Division of Tourism. Lucy Webb Hayes was born in this house in Chillicothe, Ohio, on August 28, 1831.

Image by the National Park Service. White Haven was the family home of Julia Dent Grant. This photo, taken in 1860, is the earliest known image of the main house.

Editor's Note: An earlier version of this article misspelled the location of the Edith Bolling Wilson birthplace. The correct spelling is Wytheville, Va.

Bowl with figures

National Museum of African Art
Wood bowl composed of a kneeling female figure holding the bowl supported by female and male figures. Four female figures clasp arms atop the bowl's lid. A bearded male head rolls under the bowl behind the supporting figures. The bowl has an overall dark patina with underlying pigments of red, white and green.

A Brief History of Openly Gay Olympians

Smithsonian Magazine

Watching figure skater Adam Rippon compete, it’s easy to forget that he’s on skates. His dramatic, sharp movements, and facial expressions to match, emulate those of a professional dancer, at once complementing and contradicting his smooth, unfettered movement along the ice. He hides the technical difficulty of every jump and spin with head-flips and a commanding gaze, a performer as well as an athlete. But there’s one thing Rippon won’t be hiding: this year, he and freestyle skier Gus Kenworthy will become the first openly gay American men ever to compete in the Winter Olympics.

“The atmosphere in the country has changed dramatically,” says Cyd Zeigler, who co-founded Outsports, a news website that highlights the stories of LGBT athletes, in 1999. “Two men getting married wasn’t even a possibility when we started Outsports. Now it’s a reality in Birmingham, Alabama. There are gay role models at every turn – on television, on local sports, and in our communities.”

Even so, the last time that the United States sent an openly gay man to any Olympic Games was in 2004, when equestrians Guenter Seidel and Robert Dover won bronze in team dressage. It was Dover’s sixth time representing the United States at the Olympics; during his second Games, in 1988, Dover came out, becoming the first openly gay athlete to compete in the modern Olympics.

"I wish that all gay athletes would come out in all disciplines – football, baseball, the Olympics, whatever," Dover has said. "After six Olympics, I know they're in every sport. You just have to spend one day in the housing, the gyms, or at dinner to realize we're all over."

Indeed, by the time Dover came out on the international stage, it was clear that gay athletes were competing and winning at all levels of professional sports. Seven years earlier, tennis star Billie Jean King was famously outed when a lawsuit filed by a former lover led her to publicly admit to having a lesbian affair. (King promptly lost all her professional endorsements, but later said she only wished that she had come out sooner.) And in 1982, former Olympian Tom Waddell – who would die from AIDS at the height of the epidemic five years later – helped found the first Gay Games for LGBT athletes; 1,350 athletes competed.

But it was more than a decade earlier when an openly gay athlete first performed in the Olympic Games. Just not exactly during competition.

English figure skater John Curry had barely come off the high of winning gold at the 1976 Winter Olympics in Innsbruck, Austria, when reporters caught wind of his sexuality from an article published in the International Herald Tribune. They cornered the skater in a press conference to grill him on matters most personal, according to Bill Jones’s Alone: The Triumph and Tragedy of John Curry. Curry acknowledged that the rumors about his sexuality were true, but when journalists asked prurient questions betraying the era’s misconceptions about homosexuality and masculinity, Curry fought back: “I don’t think I lack virility, and what other people think of me doesn’t matter,” he said. “Do you think that what I did yesterday was not athletic?” (It should be noted as well that homosexual acts were outlawed in the U.K. at the time.)

But even though the competition was over for Curry, custom had it that medal winners were expected to appear in exhibition performances. There, in a fiery, unflinching athletic spectacle, Curry abandoned his usual lively routine of skips and hops for a stern technical masterpiece, making him the first openly gay athlete to perform on the Olympic stage.

“When everyone had telephoned their story and discussions broke out in many languages around the bar, opinion began to emerge that it was [Curry] who was normal and that it was we who were abnormal,” wrote Christopher Brasher, a reporter for The Observer, in his coverage that year.

LGBT journalists and historians, including Zeigler and Tony Scupham-Bilton, have catalogued the many Olympians who were homosexual but competed in a time before being “out” was safe and acceptable. German runner Otto Peltzer, for instance, competed in the 1928 and 1932 Olympics, but was arrested by the Nazis in 1934 for his homosexuality and was later sent to the concentration camps. In more recent years, athletes have waited to come out until after their time in competition was over, including figure skaters Johnny Weir and Brian Boitano and American diver Greg Louganis. Louganis was long rumored to be gay, but didn’t come out publicly until the opening ceremonies of the 1994 Gay Games: "Welcome to the Gay Games,” Louganis said to the crowd. “It's great to be out and proud."

Though the early history of openly gay Olympians is dotted with male athletes, openly gay women have quietly gained prevalence in recent competitions. French tennis player Amélie Mauresmo is among the first women to come out publicly prior to an Olympic appearance – though, Zeigler added, whether an athlete comes out publicly is based in part on the prominence of their sport outside the Olympics. In 1999, a year before her first Olympic competition, reporters questioned her sexuality after an opponent called her “half a man” for showing up to a match with her girlfriend. Mauresmo’s casual discussion of her sexuality as an integral part of her life and dismissal of concerns that she would lose sponsorship represented a shift in the stigma surrounding coming out as an athlete. Fear of commercial failure still underpinned many athletes’ decisions not to come out, but Mauresmo was undaunted.

“No matter what I do, there will always be people against me,” Mauresmo has said. “With that in mind, I decided to make my sexuality clear… I wanted to say it once and for all. And now I want us to talk about tennis.” Mauresmo still faced criticism for her “masculinity.” But her sponsor, Nike, embraced her muscular look by designing clothes that would display her strength, according to the 2016 book Out in Sport. Mauresmo went on to win silver in women’s singles in 2004.

At the 2008 Summer Olympics in Beijing, 11 openly gay athletes competed, only one of whom – Australian diver Matthew Mitcham, who won gold and is a vocal LGBT activist – was a man. All six openly gay athletes at the 2010 Winter Olympics in Vancouver were women, as were all seven of the openly gay athletes at the 2014 Winter Olympics in Sochi. Both of the intervening Summer Olympics saw a greater turnout of openly gay athletes, but women still held the large majority. In 2016, four of the players on the U.S. women’s basketball team – Elena Delle Donne, Brittney Griner, Seimone Augustus and Angel McCoughtry – were openly gay.

This accounting of course elides that sexual orientation is a spectrum. Olympians who openly identify as bisexual, for instance, are growing in number as well. Additionally, the International Olympic Committee, and the many governing bodies within, have made some strides when it comes to recognizing that gender is not binary, though policies for transgender athletes remain a thorny debate among officials and athletes. That being said, the IOC allowed pre-surgery transgender athletes to take part in the 2016 Rio Games.

With this year’s Winter Games in Pyeongchang, Rippon and Kenworthy are the first openly gay American men to compete in the Olympics since the legality of same-sex marriage was established throughout the United States in 2015, and the cultural shift is apparent. While American tennis legend Martina Navratilova, who came out in 1981 but competed as an Olympian for the first time in 2004, has said that coming out in 1981 cost her $10 million in sponsorships, Kenworthy boasts sponsorships with Visa, Toyota and Ralph Lauren, to name a few. The skier also recently appeared in an ad for Head & Shoulders, with a rainbow pride flag waving behind him.

“The atmosphere for LGBT athletes has changed quicker in the past decade,” says Scupham-Bilton, an LGBT and Olympic historian. “In the 20th century there was more homophobia in sport and society in general. As the increase in LGBT equality has progressed, so has acceptance of LGBT athletes.”

There’s one notable exception: Sochi 2014. The summer before hosting the Winter Olympics, in what many saw as an affront to gay rights activism, the Russian government passed a law prohibiting the promotion of “nontraditional” sexual relationships to minors. The United States used the Olympic platform as an opportunity for subtle protest, including prominent gay athletes Brian Boitano, Billie Jean King and Caitlin Cahow in its Olympic delegation, and protests were staged across the world. Despite the outpouring of international support, Canadian figure skater Eric Radford opted to wait until after Sochi to come out, citing his desire to be recognized for his skill, rather than his sexuality. He’s already made his mark at the Pyeongchang Games, where his performance with skating partner Meagan Duhamel vaulted Canada to the top of the team figure skating competition.

Rippon and Kenworthy have used their newfound platforms to make statements on political issues. Rippon recently made headlines when he refused an offer to meet with Vice President Mike Pence due to disagreements with his stances on LGBT rights – which include past statements that appear to support funding gay conversion therapy. Pence’s former press secretary denied his support for gay conversion therapy during the 2016 presidential campaign. Kenworthy also criticized the Vice President as a “bad fit” to lead the United States' delegation at the Opening Ceremony in Pyeongchang on Friday.

Political platforms and sponsorships aside, Rippon and Kenworthy ultimately hoped that by coming out they could live as freer, more authentic versions of themselves – and empower others to do the same.

“There is pressure that comes with this responsibility and I feel I have a responsibility to the LGBT community now,” Kenworthy has said. “I want to be a positive example and an inspiration for any kids that I can.”

Delegate

National Museum of African American History and Culture
A 1984 issue of Delegate magazine published by MelPat Associates. The cover of the magazine is white with an image of the Olympic rings, all red except the bottom left ring, which has been replaced by a blue ribbon badge with text that reads [1984 / DELEGATE]. Below the rings is a grid of black and white photographic portraits. Twenty-two (22) men and women are pictured, with the name of everyone printed under his or her image in blue. Blue text at the bottom right corner reads [The / Olympics / Past and / Present / page 159]. The spine of the magazine is white with red text that reads [DELEGATE, 1984 - The 8th Year of the 3rd Century].

The magazine’s content opens with a masthead, set in white text against a black background reading [DELEGATE, 1983], and a table of contents, followed by an untitled editorial note about the 1984 presidential election.

The content then continues with profiles of African American business organizations, business leaders, events, community organizations, sororities, fraternities, doctors, dentists, politicians, actors, and journalists. This includes the National Urban League, NAACP, Harlem YMCA Sports Hall of Fame, Pan-Hellenic Council, American Bridge Association, Interracial Council for Business Opportunity, Delegate Magazine reception, Iota Phi Lambda Sorority, Chesebrough-Pond’s Inc., Ciba-Geigy, Prince Hall Grand Lodge, John Hunter Camp Fund, The Girl Friends, National Association for Equal Opportunity in Higher Education, Frederick Douglass Awards Dinner, Opportunities Industrialization Centers of America, Phelps-Stokes Center for Human Development, Mamie Phipps Clark, Northside Center, National Newspaper Publishers Association, 100 Black Men, AME Zion Church, Top Ladies of Distinction, Carats, Inc., Links, Republican Party, Lambda Kappa Mu Sorority, National United Church Ushers Association of America, National Association of Medical Minority Educators, Eddie Atkinson, National Association of Market Developers, Suzanne de Passe, Negro Ensemble Co., Dance Theatre of Harlem, Walter Mondale, Jesse Jackson, Democratic Party, CBS Records, Phi Beta Sigma Fraternity, Edgar B. Felton, Black Congress on Health, Law and Economics, National Black Nurses’ Association, National Bar Association, National Pharmaceutical Association, Alpha Kappa Alpha Sorority, National Medical Association, Morehouse School of Medicine, Zeta Phi Beta Sorority, Alpha Phi Alpha Fraternity, National Association of University Women, National Association of Negro Business and Professional Women’s Clubs, Tuskegee Airmen, Ancient Egyptian Arabic Order Nobles of the Mystic Shrine, 100 Black Women, Eta Phi Beta Sorority, Boys Choir of Harlem, Cardinal Cooke, Oliver C. Sutton, Adam Clayton Powell, Jr., Black Caucus Weekend, National Council of Negro Women, 369th Veterans’ Association, Beaux Arts Ball, The Edges Group, and Comus Social Club. In the middle is a large two-part feature on the Olympics, one part titled “The Story of the Past” and the other “The Story of the Future.” There are also features on black Hollywood and the cities of Houston, San Francisco, Los Angeles, Oakland, and Oak Bluffs.

There are approximately 511 pages with black and white photographs and advertisements throughout, as well as a few advertisements in color. The back cover of the magazine features a full page advertisement for Kool cigarettes.

Portrait of Egon Wellesz

Hirshhorn Museum and Sculpture Garden

Why 'Paradise Lost' Is Translated So Much

Smithsonian Magazine

"Paradise Lost," John Milton's 17th-century epic poem about sin and humanity, has been translated more than 300 times into at least 57 languages, academics have found.

“We expected lots of translations of 'Paradise Lost,'" literature scholar Islam Issa tells Alison Flood of the Guardian, "but we didn’t expect so many different languages, and so many which aren’t spoken by millions of people."

Issa is one of the editors of a new book called Milton in Translation. The research effort, led by Issa, Angelica Duran and Jonathan R. Olson, looks at the global influence of the English poet's massive composition in honor of its 350th anniversary. Published in 1667 after a blind Milton dictated it, "Paradise Lost" follows Satan's corruption of Adam and Eve, painting a parable of revolution and its consequences.

Milton himself knew these concepts intimately—he was an active participant in the English Civil War that toppled and executed King Charles I in favor of Oliver Cromwell's Commonwealth.

These explorations of revolt, Issa tells Flood, are part of what makes "Paradise Lost" maintain its relevance to so many people around the world today. The translators who adapt the epic poem to new languages are also taking part in its revolutionary teachings, Issa notes. One of the best examples is Yugoslav dissident Milovan Djilas, who spent years painstakingly translating "Paradise Lost" into Serbo-Croatian on thousands of sheets of toilet paper while he was imprisoned. The government banned the translation, along with the rest of Djilas' writing.

That wasn't the first time a translation was banned—when "Paradise Lost" was first translated into German, it was instantly censored for writing about Biblical events in "too romantic" a manner. Just four years ago, a bookstore in Kuwait was apparently shut down for selling a translation of Milton's work, though according to the owner, copies of "Paradise Lost" remained available at Kuwait University's library.

As the world becomes increasingly globalized, expect Milton's seminal work to continue to spread far and wide. The researchers found that more translations of "Paradise Lost" have been published in the last 30 years than in the 300 years before that.

Massachusetts - Cultural Destinations

Smithsonian Magazine

Isabella Stewart Gardner Museum
This jewel of a museum is housed in a 15th-century Venetian-style palace surrounding a verdant courtyard. Works by Rembrandt, Michelangelo, Degas, Titian and others share the space with the best in decorative and contemporary arts. The museum also features concerts every Sunday, September through May.

Plimoth Plantation
A living museum near present-day Plymouth, Plimoth Plantation interprets the colonial village as it was in 1627, seven years after the Mayflower’s arrival. At the Wampanoag Homesite, learn about the culture of the Wampanoag, who have lived in southeastern New England for more than 12,000 years. Climb aboard the Mayflower II, a full-scale reproduction of the famous ship. And at the Nye Barn, take a gander at heritage breeds of livestock from around the world, including Kerry cattle and Arapawa Island goats.

Old Sturbridge Village
Experience life in an 1830s New England village at this interpretive outdoor museum in central Massachusetts. Visitors can tour more than 40 original buildings and 200 acres of grounds, all meticulously maintained to recreate early American village life.

Whaling Museum (New Bedford)
"Moby Dick" fans take note. In 1907, the Old Dartmouth Historical Society founded the whaling museum to tell the story of whaling and of New Bedford, once the whaling capital of the world. The museum holds an extensive collection of artifacts and documents of the whaling industry and features contemporary exhibits on whales and human interaction with the sea mammals.

Harvard University and Massachusetts Institute of Technology
These two venerable institutions have shaped the city of Cambridge and together offer a vacation’s worth of sightseeing. Of Harvard’s many respected museums, the Fogg Art Museum, with its collection of European and American paintings, prints and photography, is a popular favorite. And Harvard’s Arnold Arboretum, designed by landscape architect Frederick Law Olmsted, is a wonderful place to spend a sunny morning or afternoon. For the more science- and technology-minded, the MIT Museum offers exhibits on robotics, holography and more.

Kennedy Library and Museum
The presidency of John F. Kennedy lasted only 1,000 days but left an indelible mark on American history and culture. This stunning museum is the official repository for all things Camelot.

Salem
More than 150 people were arrested and imprisoned during the witch-hunt that led to the infamous witch trials of 1692 and 1693. Of them, 29 were convicted and 19 hanged. Others died in prison. Learn about this dramatic moment in American history and enjoy the present charms of this picturesque New England town. To see both Salem and Boston in one day, hop aboard the Nathaniel Bowditch, which offers eight round-trips daily between the two cities.

National Historical Park (Lowell)
The exhibits and grounds here chronicle the shift from farm to factory, the rise of female and immigrant labor, as well as the industrial technology that fueled these changes. Housed in the restored former textile mill of the Boott Manufacturing Company, the park’s Boott Cotton Mills Museum features a 1920s weave room whose 88 power looms generate a deafening clatter (earplugs provided). Find out what it was like to be a "Mill Girl" at the heart of the United States’ industrial revolution. Nearby is a cluster of lively art museums and galleries, including the New England Quilt Museum and the Revolving Museum.

Lighthouse (Boston)
Built in 1716, it was the first lighthouse in North America and is the only one in the U.S. that has not been automated. The second-oldest lighthouse is on Martha’s Vineyard.

Faneuil Hall
Built as a gift to the city of Boston in 1742 by Peter Faneuil, the city’s richest merchant, the hall served as a central market as well as a platform for political and social change. Colonists first protested the Sugar Act here in 1764, establishing the doctrine of no taxation without representation. Samuel Adams rallied Bostonians to independence from Britain, George Washington celebrated the first birthday of the new nation, and Susan B. Anthony spoke out for civil rights, all at Faneuil Hall. In 1826, the hall was expanded to include Quincy Market. Today, shops and restaurants fill the bustling site, which attracts 18 million visitors a year.

Lost Footage of One of the Beatles' Last Live Performances Found in Attic

Smithsonian Magazine

More than 50 years after the beginning of Beatlemania, it seems that every recorded moment the Beatles spent together between forming in 1960 and dissolving in 1970 has been archived, restored, remastered and remastered again. But one long-lost Beatles performance recently resurfaced: a 92-second clip that shows the Fab Four playing their song “Paperback Writer” on a 1966 episode of the British TV program “Top of the Pops.”

The Press Association reports that the Beatles’ appearance on the show was believed to be lost to history, since back in the 1960s, the BBC was not as fastidious about recording and archiving its programs. But in the days before on-demand streaming or even VCR recording, music enthusiast David Chandler used his 8-millimeter wind-up camera to record the Beatles’ June 16, 1966 “Top of the Pops” appearance. Chandler gave the film to the television archive organization Kaleidoscope, which is trying to track down lost bits of the U.K.’s broadcast history.

Gianluca Mezzofiore at CNN reports that the film reel had sat in Chandler’s attic for more than 50 years until news broke this spring that a collector in Mexico had found an 11-second clip of the performance.

That find was considered significant: it’s the band’s only live “Top of the Pops” appearance (the show had aired pre-recorded songs in previous years). The clip also captured the Beatles as their touring days came to a close. Later that summer, the Fab Four played their last commercial gig ever at Candlestick Park in San Francisco before becoming a studio band. (They did, however, play a final surprise show on a London rooftop in 1969.)

“[I]f you’re a Beatles fan, it’s the holy grail,” Kaleidoscope C.E.O. Chris Perry told the BBC’s Colin Paterson after the 11-second find. “People thought it was gone forever.”

He’s even more stunned by the longer clip. “Kaleidoscope thought finding 11 seconds of ‘Paperback Writer’ was incredible, but to then be donated 92 seconds, and nine minutes of other 1966 Top of the Pops footage, was phenomenal,” he says in a statement.

The raw film Chandler captured is silent. That’s why Kaleidoscope worked to remaster the film, enhance the footage and sync it with audio of the song. The restored clip will debut at Birmingham City University on Saturday during a day-long event celebrating its discovery.

A little over a year ago, Kaleidoscope officially launched a hunt to find the U.K.’s top 100 missing television shows, surveying 1,000 television professionals, academics, journalists and TV nerds to determine what shows they’d most like to see recovered. At the top of the list were lost episodes of “Doctor Who,” while missing performances from “Top of the Pops,” which aired from 1964 until 2006, came in as the second most wanted. So far, the BBC reports, Kaleidoscope has recovered at least 240 musical performances, including Elton John singing “Rocket Man” on “Top of the Pops” in 1972.

“These lost episodes really can end up in the most unusual of places and people might not even know they have them,” Perry said in a statement released when the Kaleidoscope hunt for lost-to-history shows began. In this case, it’s probably best to ignore the Beatles’ advice: If you have vintage film stored somewhere in your attic, don’t let it be.

At Moab, Music Among the Red Rocks

Smithsonian Magazine

With its stunning red rocks, the area around Moab is an adventurer's paradise, attracting hikers, bikers and river rafters to southeastern Utah. But when the summer heat tapers off around Labor Day, the region becomes an extraordinary concert hall for world-class musicians. The Moab Music Festival, now in its 16th year, holds a series of chamber music concerts, most of them outdoors amid the spectacular red rock landscape and along the Colorado River. This year's festival runs from August 28–September 13.

I've been lucky enough to attend 13 of the festivals since the event was organized in 1992 by artistic director Leslie Tomkins and Michael Barrett, a conducting protégé of my father Leonard Bernstein.

In the interest of full disclosure, Michael Barrett and I have collaborated over the years on several concerts for children and families, similar to my father's Young People's Concerts that were televised from 1958 through 1972. How I wish my father had lived to hear music in Moab's beautiful natural settings. Music lovers hear anew some of the world's best classical music as it resonates off the rocks or finds acoustical purity in the dead silence of the remote settings.

Image by Steve Adams. (Left to right) Emily Bruskin, Jesse Mills, festival artistic director, co-founder and violist Leslie Tomkins, and Tanya Tomkins at Fisher Towers

Image by Steve J. Sherman. The view from the back of the grotto looking toward the Colorado River during a Moab Music Festival concert

Image by Steve J. Sherman. Moab Music Festival audiences listening to music in nature's own concert hall, a grotto along the Colorado River

Image by Neal Herbert. Violinists Karen Gomyo and Jennifer Frautschi and pianist Eric Zivian perform in the grotto at the Moab Music Festival

Image by Neal Herbert. The audience at the Moab Music Festival enjoys a concert at the Festival Tent as the sun sets over Onion Creek

Image by Neal Herbert. Moab Music Festival audiences are treated to a rainbow over Red Cliffs Lodge during a memorable concert

Image by Neal Herbert. A Moab Music Festival audience enjoys the music while relaxing at Hunter Canyon

My favorite Moab concerts are those set in a red rock grotto in Canyonlands National Park, accessible only by jet boating down the Colorado River. Getting there is a winding, gorgeous ride, snaking between the canyon walls that rear up on either side, a swath of deep blue sky above, and the striking formations dazzling concertgoers at every bend of the river. Thrilling! And the music hasn't even started yet.

The grotto is a natural amphitheatre with a sandy floor that accommodates camp and lawn chairs. If you want "box" seats, climb up to one of the niches or ledges on the rock walls. Taking in the scene for the first time, one may wonder how in the world that Steinway grand piano got there. River outfitters bring it down, snugly blanketed, at dawn on a jet boat. Eight men haul it up from the riverbank to the grotto, where they reattach its legs. Yet knowing that never seems to diminish my astonishment at the incongruity of the piano's presence. The enormous black instrument sits placidly in the red sand, like a tame stallion, awaiting the signal from its rider to unleash its magnificent strength.

I recall a two-piano performance of Stravinsky's "Rite of Spring" that was so intense that it seemed the very rocks themselves might crack. Toward the end of the first movement, Barrett's fierce playing caused his thumb to split open; blood smeared across the piano keys. During the quietest part of the second movement, a crow cawed in primal accompaniment. In a climactic section that ends in a great silence, we could hear Stravinsky's anguished chord yawning back at us from somewhere far across the river fully four seconds later. An acoustical marvel.

Classical chamber music is the mainstay of the festival, but it also serves up generous helpings of traditional folk, jazz, Latin music, and the works of living composers. This year's season includes William Bolcom and John Musto's brand-new comic chamber operas based on Italian folktales, tango-tinged jazz by Paquito d'Rivera, Scott Joplin piano rags and works by the versatile American composer Derek Bermel, plus chamber works by the likes of Bach, Beethoven and Brahms.

Founding a music festival in Moab was "a total gamble," says Barrett. Driving through the tiny town in the early '90s, he had been captivated by the "breathtaking landscape, the open spaces and the remoteness." The town, in an economic downturn at the time after losing its mining industry, was poised for something new. The festival remains a nonprofit "labor of love," he says, but over the years it has tripled its musical events and some 2,500 people attend annually. "It combines the best that humanity has to offer with the best nature has to offer," he says.

Hundreds of Artifacts Looted From Iraq and Afghanistan to Be Repatriated

Smithsonian Magazine

In 2002, border officials at London’s Heathrow Airport intercepted a pair of wooden crates brought into the country via a flight from Peshawar, Pakistan. Inside, they found a patchwork of 1,500-year-old clay limbs that had been crudely hacked off of sculptures that once stood in Buddhist monasteries in the ancient kingdom of Gandhāra in present-day northwestern Pakistan and northeastern Afghanistan.

Now, 17 years after their recovery, the looted artifacts are returning home. According to a British Museum press release, the 4th-century sculptures—which include nine sculpted heads and one torso—are among a group of 843 heritage objects scheduled to be repatriated from the London institution to the National Museum of Afghanistan in Kabul.

The stolen items had been seized with the help of the U.K. Border Force, the Metropolitan Police’s Art and Antiquities Unit, and even several private individuals. Following their identification as Afghan artifacts, they were ultimately stored at the museum for “safekeeping and recording.”

Speaking with the Guardian’s Mark Brown, curator St. John Simpson describes the sculptural fragments, which were likely targeted during the Taliban’s 2001 iconoclasm spree, as “stunning” and “quite outstanding.”

“We’ve returned thousands of objects to Kabul over the years,” he adds, “but this is the first time we’ve been able to work on Buddhist pieces.”

Mesopotamian cuneiform tablets (© Trustees of the British Museum)

According to the Independent’s Adam Forrest, the nine terracotta heads depict Buddha; meditating bodhisattvas, or individuals on the path of enlightenment; a bald monk; and three larger figures, one of whom may be Vajrapani, Buddha’s spiritual guide. The Evening Standard’s Robert Dex further identifies the stone torso as belonging to a statue of a Buddhist holy man.

As Brown writes, the sculptures speak to Buddhism’s short-lived influence in what is now Afghanistan, where the religion thrived between roughly the 4th and 8th centuries.

“The return of any object which has been illegally trafficked is hugely important symbolically,” Simpson tells Brown. “But these pieces will form one of the largest groups of Buddhist art to be returned to the Kabul museum. When you think [of] what Kabul has been through since the 1990s, culminating in the atrocity of Bamiyan”—in 2001, the Taliban blew up an iconic set of monumental 6th-century statues known as the Bamiyan Buddhas—“this is a huge event.”

With the National Museum of Afghanistan's blessing, some of the Buddhist sculptures will go on view at the British Museum before being transferred to officials at the Afghanistan embassy in London. Upon their return to Kabul, they will be exhibited at the National Museum.

Per the museum press release, artifacts set for repatriation also include examples of the 1st-century Begram Ivories, a Buddha statue dating to the 2nd or 3rd century, Bronze Age cosmetic flasks, medieval Islamic coins, pottery, stone bowls, and “other minor items of mixed date and materials.”

Separately, Naomi Rea reports for artnet News, the British Museum will return a set of 154 Mesopotamian cuneiform tablets to the National Museum of Iraq in Baghdad. Seized in 2011, the clay texts date to the late 3rd millennium B.C. and describe administrative operations in the lost city of Irisagrig. With the permission of the National Museum of Iraq, a selection of the artifacts will also go on view at the British Museum before returning home.

Second "Three-Parent" Baby Born. This Time, It's a Girl

Smithsonian Magazine

On January 5, a baby was born with the DNA from three parents—the second in the world. Doctors from the Nadiya Clinic in the Ukrainian capital Kiev announced that the baby girl was produced with a technique called pronuclear transfer, used to treat infertility. But the move is stirring up controversy in the medical community, reports Michelle Roberts at the BBC.

While “three-parent” babies may sound like a concerning step towards genetically modified humans, there’s a legitimate medical reason for the procedure. The treatment was designed to help mothers suffering from a disease of the mitochondria—the organelles that serve as a cellular “powerhouse”—give birth to children without passing down the condition.

During the procedure, doctors fertilize an egg from the mother, who has the mitochondrial dysfunction, with sperm from the father. The nucleus of that embryo is then removed and transferred into a healthy donor egg whose own nucleus has been removed. Susan Scutti at CNN reports that the resulting child receives the bulk of its 20,000 to 25,000 genes from its parents. About 37 genes, which regulate the mitochondria, come from the donor egg, technically giving the child genetic material from three people.

Last year, a couple from Jordan who had lost two daughters to Leigh syndrome underwent a similar procedure called spindle nuclear transfer. It was performed in Mexico by U.S. doctor John Zhang, since the procedure is not currently legal in the United States. The couple gave birth to a healthy boy, whose gender was selected to prevent him from passing along the altered genes (mitochondrial DNA comes only from the mother).

The Ukrainian procedure, however, is stirring controversy. It was used as a general treatment for infertility—not as a workaround for mitochondrial disease, Scutti reports. The couple also gave birth to a girl, meaning that she will pass along the donor mitochondrial DNA if she has children.

The mother in question had been unable to get pregnant for 15 years. Using the procedure as an IVF technique allows doctors to bypass cells or enzymes in the mother’s egg that might prevent pregnancy or hinder cell division, explains Andy Coghlan at New Scientist.

Though Great Britain voted to allow the procedure for mitochondrial problems in February 2015, this is the first test of the method as an IVF technique. Adam Balen, chairman of the British Fertility Society, tells Roberts that the latest use of the treatment is concerning. “Pronuclear transfer is highly experimental and has not been properly evaluated or scientifically proven,” he says. “We would be extremely cautious about adopting this approach to improve IVF outcomes.”

Valery Zukin, director of the Nadiya Clinic, tells Scutti that a medical review board approved the procedure and that a thorough genetic screening was performed before the treatment. “In Ukraine, the situation is very simple—it’s not forbidden,” Zukin tells Scutti. “We have not any regulation concerning this.”

Zhang, who performed the first three-parent treatment last year, tells Scutti he’s not too concerned about the couple having a female child as long as the mitochondria are healthy. But he does have a few problems with the procedure. First, he says using the treatment on a healthy 34-year-old woman is probably not appropriate and that, statistically, it was likely she could get pregnant without IVF. Second, Zukin used a virus protein to facilitate the procedure, which will integrate the virus into the baby’s DNA. Zhang says electronic techniques are the current standard.

According to Roberts, Zukin has a second patient who received the treatment and is scheduled to give birth in March.

Portrait of Mrs. Karpeles (Frau K.)

Hirshhorn Museum and Sculpture Garden

More Than Half of All Coffee Species Are at Risk of Extinction

Smithsonian Magazine

Most popular coffee blends derive from either the Arabica or Robusta bean, but as Somini Sengupta explains for The New York Times, these strains are just two of the world’s 124 wild coffee species. Although the majority of these varieties are neither cultivated nor consumed, the genetic diversity they represent could be the key to preserving your morning cup of joe—especially as climate change and deforestation threaten to eradicate the beloved source of caffeine.

A pair of papers published in Science Advances and Global Change Biology place the potential coffee crisis in perspective, revealing that 75 of Earth’s wild coffee species, or some 60 percent, are at risk of extinction. The Arabica bean, a native Ethiopian species used to make most high-quality brews, is one such threatened species: According to Helen Briggs of BBC News, the team behind the Global Change Biology study found that Arabica’s population could fall by around 50 percent by 2088.

Arabica beans are at the core of rich, flavorful blends including Javan coffee, Ethiopian sidamo and Jamaican blue mountain. Comparatively, Adam Moolna writes for The Conversation, Robusta has a harsher taste and is most often used in instant blends. Interestingly, Arabica actually originates from Robusta, which hybridized with a species known as Coffea eugenioides to create the crossbred bean.

Genetic interbreeding may be the best way to save commercial coffee species. As Helen Chadburn, a species conservation scientist at the Kew Royal Botanic Gardens and co-author of the Science Advances study, tells Popular Mechanics’ John Wenz, wild species carry “genetic traits”—think drought tolerance and pest or disease resistance—“that may be useful for the development … of our cultivated coffees.”

It’s also possible that experimenting with different types of wild coffee could yield tasty new brews. Chadburn adds, “Some other coffee species are naturally low in caffeine, or have an excellent (and unusual) flavor.”

There is a litany of obstacles associated with coffee conservation. In Madagascar and Tanzania, for example, some species are clustered in small areas, leaving them more vulnerable to a single extinction event. On a larger scale, habitat loss, land degradation, drought and deforestation also pose significant risks.

The main threat facing Arabica crops is climate change, according to Jeremy Hodges, Fabiana Batista and Aine Quinn of Bloomberg. Arabica requires a year-round temperature of 59 to 75 degrees Fahrenheit, as well as distinct rainy and dry seasons, in order to grow properly. When temperatures fall too low, the beans suffer frost damage; when temperatures rise, the quality of the coffee falls and yield per tree declines.

As global warming pushes temperatures upward, coffee farmers are being forced to innovate. Growers across Africa and South America are moving their crops to higher, cooler ground, but as Eli Meixler reports for Time, this may not be enough to save the Arabica bean—particularly in Ethiopia, where up to 60 percent of the area used for coffee cultivation could become unsuitable by century’s end.

Maintaining wild coffee species in seed banks or nationally protected forests could also prove essential to the caffeinated drink’s survival. Unfortunately, The New York Times’ Sengupta notes, the researchers found that just over half of wild coffee species are held in seed banks, while two-thirds grow in national forests. Even if scientists can boost the percentage of coffee seeds stored in seed banks, The Conversation’s Moolna points out that these samples don’t hold up in storage as well as crops such as wheat or maize.

Overall, the two new studies present a dire vision of coffee’s future—or lack thereof. As Aaron Davis, a Kew researcher who co-authored both papers, tells Daily Coffee News’ Nick Brown, in terms of sustainability and conservation efforts, the coffee sector is around 20 to 30 years behind other agricultural industries. As coffee yields shrink, Lauren Kent adds for CNN, consumers may notice their daily caffeine boost becoming both more expensive and less palatable.

Coffee isn’t completely out of the game yet: According to Moolna, conservation focused on maintaining genetic diversity and sustaining species in their native environments, rather than solely in collections such as seed banks, could save the drink from extinction. Still, if you’re a coffee fan, you may want to stock up on your favorite roasts sooner rather than later.

Shark Repellent: It’s Not Just For Batman Anymore

Smithsonian Magazine

Holy sardines! It’s a still from the 1966 film Batman

Every superhero would be wise to heed the lessons of the Caped Crusader, as explored below in the first of our series on shark-related patents and designs.

Today we look at shark repellent, the most famous of which was seen in the exciting opening of the original Batman film (that’s with Adam West, not Michael Keaton) when the Caped Crusader is attacked by a shark while trying to intercept a boat with a helicopter – I’m sorry, Batcopter. Pretty typical Batman stuff, really. His first solution? Punch the shark – sorry, Batpunch the shark. The shark doesn’t give up as easily as the average cartoonish henchman, so Batman tries plan B: Bat shark repellent. It works. The shark falls into the ocean and EXPLODES. I honestly didn’t see that coming.

Well, it turns out that shark repellent is real, although I’m not sure it has been bat-weaponized into a convenient aerosol bomb. So unfortunately, it looks less like this:

Thankfully, Batman clearly labels all his Bat Sprays so this image is pretty straightforward. A still from the 1966 film Batman

And more like this:

U.S. patent no. 2,458,540 for “a composition and device for discouraging the predatory intentions of carnivorous fish” aka SHARK REPELLENT (image: google patents)

It probably won’t surprise you to hear that it’s not quite as effective as the explosive bat spray. (Correction: The Joker had rigged the shark to explode, as villains are wont to do.)

Real shark repellent was first developed during World War II in an effort to help save the lives of seamen and pilots who had to await rescue in open water. The patent for “shark repellent” was issued to a team of American chemists (Richard L. Tuve, John M. Fogelberg, Frederic E. Brinnick, and Horace Stewart Spring) in 1949. Typically, these patent applications are pretty dry, but this one introduces the invention with a surprisingly vivid description of the problem faced by soldiers during the war:

“Since the beginning of the war with its submarine and air activity, numerous occasions have arisen in which men have been forced to swim for their lives. Our armed services and merchant marine have been helpful by providing the men with equipment to help them stay afloat. This phase of the problem or, rather, the equipment long ago reached a point of development where remaining afloat for extended periods offered little difficulty. In cold Atlantic waters, the greatest menace has been the cold. However, in the warm Pacific Ocean and the South Atlantic, a different menace arises for the waters are alive with carnivorous fish. The weakened condition of wounded men cast into the water puts them at a distinct disadvantage in trying to fight off sharks and barracuda which are attracted by their blood.”

Their design is a small chemical disk in a waterproof package that can be attached to a life vest. In the event that someone is stranded at sea, the disk can be exposed to seawater, which will activate the chemicals to “cast a protective veil of a chemical material around the swimmer.” Those chemicals consist primarily of copper acetate, which is safe for the swimmer but has been proven to be so distasteful to sharks that they’ll ignore raw meat floating in a pool of the mixture. It approximates the odor of dead shark – the only thing that’s been proven to repel the carnivorous fish.

The inventors had the good of all humanity in mind and specified that the deterrent could be used by any world government without the payment of royalties. While no shark repellent is foolproof, early tests of the 1949 repellent showed that the copper mixture was 72-96 percent effective. Later tests showed that maybe it wasn’t so effective. Work continued.

More recently, researchers have been working on a more effective shark repellent that is literally derived from a distilled essence of dead shark and has proven effective on a number of species. In 2001 chemical engineer Eric Stroud started the company Shark Defense to refine an array of chemical and electrochemical shark deterrents, such as shark-resistant sunscreen and fishing hooks; the company hopes to someday offer shark-repellent fishing nets and other products to protect boats and submarines.

Although advancements have been made, the perfect shark repellent continues to elude scientists. So if you’re planning to watch all of Shark Week in situ, I’d recommend getting to work on a weaponized Bat Spray.

Reality plus drama equals "EMERGENCY!"

National Museum of American History

The pre-reality television show EMERGENCY! premiered on January 27, 1972. Health- and medical-themed programs such as the radio and television drama Dr. Kildare had long been popular, but EMERGENCY! broke new ground. Set in Los Angeles, EMERGENCY! paid great attention to detail as it told the stories of fictional paramedics and doctors as they went about their jobs saving lives. The show didn't just look real, it was actually quite close to the real thing. An important but little-known part of the story involves the equipment used by the series' actors.

Black and white photo of actors on set, mid-scene

In pitching the premise of the show, coproducer Jack Webb collaborated with the Los Angeles County Fire Department. The close connection between the production staff and emergency personnel became a hallmark of the show. Webb had portrayed Sergeant Joe Friday on Dragnet and produced Adam-12, police shows that strove to convey a sense of reality. Technical advisers included firefighters and paramedics who enhanced the reality of the show. An additional boost to authenticity came with the casting of actor Mike Stoker, who drove Engine 51. Stoker was a firefighter in Los Angeles before joining the cast, and he continued to work in that profession while the series aired and after it ended.

Photo of defibrillator in orange case

In 2000 the National Museum of American History received a donation of materials relating to EMERGENCY! from the Project 51 Committee, a group formed to preserve the legacy of this important program that took its name from the show's Station 51. Some of these objects (helmets, shirts, and coats) are housed with other television costumes in our Culture and the Arts division. Medical-related objects came to the Medicine and Science division.

Two of the objects in the Medicine and Science division were used by actors Kevin Tighe and Randolph Mantooth who portrayed paramedics Roy De Soto and John Gage, respectively, on the show. One is a defibrillator, an electrical device used to shock a patient's heart back into a regular beating pattern (often after a heart attack). The other is a biophone, a portable radio and data transmitter used by paramedics to talk to doctors in the hospital and transmit information, such as electrocardiograms. Although these two units are non-operative, both objects were manufactured by companies that provided operable equipment to real paramedics.

Photo of Biophone in case

Photo of label

These objects illustrate how producers Webb and Robert Cinader aimed to make a program where the lines between reality and drama intersected. Their goal was not simply to entertain, but also to educate the public about life-saving measures. Although the stories presented in the episodes were scripted, they depicted real dangers faced by firefighters and paramedics. The series motivated many people to embark upon careers in the emergency medical field. The Atlanta Constitution reported that after the series premiere, Los Angeles County increased its paramedic units from three to fifteen and credited the show for that increase. One of our colleagues here at the Museum became an emergency medical technician (EMT) because she watched EMERGENCY! It would be interesting to learn how many others made the same career choice due to the influence of Roy De Soto and John Gage.

Connie Holland is a project assistant in Medicine and Science. She has also blogged about radio programming from 1928.

Author(s): 
Connie Holland
Posted Date: 
Tuesday, September 8, 2015 - 08:00

Five Things to Know About Roger Bannister, the First Person to Break the 4-Minute Mile

Smithsonian Magazine

Roger Bannister, the first person to break the 4-minute mile, died in Oxford on Saturday at age 88, the Associated Press reports.

More than 60 years ago, back on a cinder track at Oxford University's Iffley Road Stadium in 1954, Bannister completed four laps in 3:59.4, a record-breaking performance that many believed was not humanly possible. The image of the exhausted Bannister with his eyes closed and mouth agape appeared on the front page of newspapers around the world, a testament to what humankind could achieve.

“It became a symbol of attempting a challenge in the physical world of something hitherto thought impossible,” Bannister said as the 50th anniversary of the run approached, according to the AP. “I'd like to see it as a metaphor not only for sport, but for life and seeking challenges.”

Here are five things you should know about the iconic athlete and his stunning mid-century run.

He Sought the Record Due to Olympic Failure

Frank Litsky and Bruce Weber at The New York Times report that Bannister began running to avoid bullies and the air raid sirens during the WWII blitz of London.

The tall, lanky blond also happened to be book-smart, and used his intellect to land an athletic scholarship to Oxford University. There, Bannister caught the eye of coaches while serving as a pacemaker for a mile race in 1947. While pacemakers generally drop out before the end of the race, Bannister continued on, reportedly beating the field by 20 yards, AP sportswriter Chris Lehourites recounts.

Though Bannister quickly became one of the U.K.’s most promising track stars, he remained a true student-athlete. History.com reports that he skipped running the 1500 meters at the 1948 London Olympics so he could concentrate on his studies. In 1952, he competed at the Helsinki Olympics, coming in fourth in the 1500 meters. That performance was roundly criticized by the British press. Afterward, he resolved to break the 4-minute mile, which several other runners were chasing. Thanks to insights he gleaned from medical school, he created a specially tailored training regimen to prepare himself for his barrier-breaking run on May 6, 1954.

Track singlet worn by Englishman Roger Bannister (b. 1929) at the 1954 Commonwealth Games, Vancouver, Canada. Bannister barely beat Landy, finishing at 3:58.8, less than a second ahead of Landy’s 3:59.6. (National Museum of American History)

Breaking the Record Wasn’t His Most Famous Run

As it so happens, Bannister’s record lasted only 46 days before Australian runner John Landy shaved 1.5 seconds off his time at a meet in Turku, Finland. Michael McGowan at The Guardian reports that the back-to-back record-breaking performances set the stage for one of running’s most incredible showdowns when, in August of 1954, Bannister and Landy faced off at the British Empire and Commonwealth Games in Vancouver, at what had been the Vancouver Exhibition until it was renamed the Pacific National Exhibition in 1946.

During the race, Landy led with Bannister at his heels. At the final turn, however, Landy turned and looked over his left shoulder to find out where Bannister was. At that moment Bannister surpassed Landy on the right, winning the race. Both men finished what came to be known as the Miracle Mile in under 4 minutes, the first time that had ever happened.

Vancouver sculptor Jack Harman created a statue of the runners at that moment of the race, which still stands outside the exhibition grounds. In the work, Landy is looking over his shoulder at Bannister. McGowan reports that Landy joked that while Lot’s wife in the Bible was turned into a pillar of salt for looking back, “I am probably the only one ever turned into bronze for looking back.”

Bannister Retired From Running Soon After Setting the Record

Though he was chosen as Sports Illustrated’s first "Sportsman of the Year" and could have continued on with a professional running career, Bannister shocked the world by retiring from running at the end of that summer after winning the 1500 meters at the European Championships in Bern, Switzerland, reports McGowan.

“As soon as I ceased to be a student, I always knew I would stop being an athlete,” he once said, as Adam Addicott at The Sportsman recounts. That fall he began his rounds as a doctor.

Bannister went on to have a long career as a neurologist, serving for many years as the director of the National Hospital for Nervous Diseases in London.

He Fought Against Drugs in Sports

Bannister, who became Sir Roger Bannister after being knighted in 1975, never lost his interest in athletics. Between 1971 and 1974, he served as the chairman of the British Sports Council, and between 1976 and 1983, he served as the president of the International Council of Sport Science and Physical Recreation.

But most significantly, Addicott reports, as chair of the Sports Council he gathered together a group of researchers to develop the first test for anabolic steroids, a substance that Bannister and many others believed the Soviet Union and Eastern Bloc nations were using to juice their athletes. “I foresaw the problems in the 1970s and arranged for the group of chemists to detect the first radioimmunoassay test for anabolic steroids,” he told Mike Wise at The Washington Post in 2014. “The only problem was it took a long time for the Olympic and other authorities to introduce it on a random basis. I foresaw it being necessary.”

Addicott reports that in recent years Bannister remained a vocal anti-doping advocate and expressed "extreme sadness" at its prevalence in sport today.

"I hope that Wada (World Anti-Doping Agency) and Usada (US Anti-Doping Angency) will be successful in bringing this to an end," he said in an interview with ITV just last month.

Bannister’s Record Is Long Gone

Kevin J. Delaney at Quartz reports that Bannister’s record did not live much past the summer of 1954. Since then, 500 American men alone have broken the 4-minute mark, including 21 who have done so since the beginning of this year.

The current record to beat is 3:43.13, set by 24-year-old Moroccan runner Hicham El Guerrouj in 1999. Delaney reports that with the right body type and training, some models predict a 3:39 mile is theoretically achievable in the future.

For women, no athlete has broken the 4-minute mile...yet. Russian Svetlana Masterkova currently holds the world record in the race, ripping out a time of 4:12.56 at the Weltklasse Grand Prix track-and-field meet in Zurich, Switzerland, in 1996.

Five Books on World War I

Smithsonian Magazine

On the 11th hour of the 11th day of the 11th month of 1918, an armistice between Allied forces and Germany put an end to the fighting of what was then referred to as the Great War. President Woodrow Wilson declared November 11 of the following year Armistice Day. In 1938, an act of Congress made the day a legal holiday, and in 1954, that act was amended to create Veterans Day, to honor American veterans of all wars.

Journalist Adam Hochschild, author of To End All Wars (2011), an account of World War I from the perspective of both hawks and doves in Great Britain, provides his picks of books to read to better understand the conflict.

Hell’s Foundations (1992), by Geoffrey Moorhouse

Of the 84 British regiments that fought in the Gallipoli campaign in Turkey in 1915 and 1916, the Lancashire Fusiliers from Bury, in northern England, suffered the most casualties. The regiment lost 13,642 men in the war—1,816 in Gallipoli alone.

For journalist Geoffrey Moorhouse, the subject hit close to home. He grew up in the small mill town of Bury, and his grandfather had survived Gallipoli. In Hell’s Foundations, Moorhouse describes the town, its residents’ attitudes toward the war and the continued suffering of the soldiers who survived.

From Hochschild: A fascinating and unusual look at the war in microcosm, by showing its effects on one English town.

Testament of Youth (1933), by Vera Brittain

In 1915, Vera Brittain, then a student at the University of Oxford, enlisted as a nurse in the British Army’s Voluntary Aid Detachment. She saw the horrors of war firsthand while stationed in England, Malta and France. Wanting to write about her experiences, she initially set to work on a novel, but was discouraged by the form. She then considered publishing her actual diaries. Ultimately, however, she wrote cathartically about her life between the years 1900 and 1925 in a memoir, Testament of Youth. The memoir has been called the best-known book of a woman’s World War I experience, and is a significant work for the feminist movement and the development of autobiography as a genre.

From Hochschild: Brittain lost her brother, her fiancé and a close friend to the war, while working as a nurse herself.

Regeneration Trilogy, by Pat Barker

In the 1990s, British author Pat Barker penned three novels: Regeneration (1991), The Eye in the Door (1993) and The Ghost Road (1995). Though fictional, the series, about shell-shocked officers in the British army, is based, in part, on true-life stories. Barker’s character Siegfried Sassoon, for instance, was closely based on the real Siegfried Sassoon, a poet and soldier in the war, and Dr. W.H.R. Rivers was based on the actual neurologist of that name, who treated patients, including Sassoon, at the Craiglockhart War Hospital in Scotland. The New York Times once called the trilogy a “fierce meditation on the horrors of war and its psychological aftermath.”

From Hochschild: The finest account of the war in recent fiction, written with searing eloquence and a wide angle of vision that ranges from the madness of the front lines to the fate of war resisters in prison.

The Great War and Modern Memory (1975), by Paul Fussell

After serving as an infantry officer in World War II, Paul Fussell felt a kinship to soldiers of the First World War. Yet he wondered just how much he had in common with their experiences. “What did the war feel like to those whose world was the trenches? How did they get through this bizarre experience? And finally, how did they transform their feelings into language and literary form?” he writes in the afterword to the 25th anniversary edition of his monumental book The Great War and Modern Memory.

To answer these questions, Fussell went directly to firsthand accounts of World War I written by 20 or 30 British men who fought in it. It was from this literary perspective that he wrote The Great War and Modern Memory, about life in the trenches. Military historian John Keegan once called the book “an encapsulation of a collective European experience.”

From Hochschild: A subtle, superb examination of the literature and mythology of the war, by a scholar who was himself a wounded veteran of World War II.

The First World War (1998), by John Keegan

The title is simple and straightforward, and yet in and of itself poses an enormous challenge to its writer: to tell the full story of World War I. Keegan’s account of the war is, no doubt, panoramic. Its most commended elements include the historian’s dissections of military tactics, both geographical and technological, used in specific battles and his reflections on the thought processes of the world leaders involved.

From Hochschild: This enormous cataclysm is hard to contain in a single one-volume overview, but Keegan’s is probably the best attempt to do so.

The Blasphemous Geologist Who Rocked Our Understanding of Earth's Age

Smithsonian Magazine

On a June afternoon in 1788, James Hutton stood before a rock outcropping on Scotland’s western coast named Siccar Point. There, before a couple of other members of the Scottish Enlightenment, he staked his claim as the father of modern geology.

As Hutton told the skeptics who accompanied him there by boat, Siccar Point illustrated a blasphemous truth: the Earth was old, almost beyond comprehension.

Three years earlier, he’d unveiled two papers, together called "Theory of the Earth," at a pair of meetings of the Royal Society of Edinburgh. Hutton proposed that the Earth constantly cycled through disrepair and renewal. Exposed rocks and soil were eroded, and formed new sediments that were buried and turned into rock by heat and pressure. That rock eventually uplifted and eroded again, a cycle that continued uninterrupted.

“The result, therefore, of this physical enquiry,” Hutton concluded, “is that we find no vestige of a beginning, no prospect of an end.”

His ideas were startling at a time when most natural philosophers—the term scientist had not yet been coined—believed that the Earth had been created by God roughly 6,000 years earlier. The popular notion was that the world had been in a continual decline ever since the perfection of Eden. Therefore, it had to be young. The King James Bible even set a date: October 23, 4004 BC.

At Siccar Point, Hutton pointed to proof of his theory: the junction of two types of rock created at different times and by different forces. Gray layers of metamorphic rock rose vertically, like weathered boards stuck in the ground. They jutted into horizontal layers of red sandstone above, rock that had been deposited far more recently. The gray rock, Hutton explained, had originally been laid down long ago in horizontal layers of sediment, accumulating perhaps an inch a year. Over time, subterranean heat and pressure transformed the sediment into rock, and then a force caused the strata to buckle, fold and become vertical.

Here, he added, was irrefutable proof the Earth was far older than the prevailing belief of the time.

John Playfair, a mathematician who would go on to become Hutton's biographer with his 1805 book, Life of Dr. Hutton, accompanied him that day. “The mind seemed to grow giddy by looking so far back into the abyss of time; and whilst we listened with earnestness and admiration to the philosopher who was now unfolding to us the order and series of these wonderful events, we became sensible how much further reason may sometimes go than imagination may venture to follow,” he later wrote.

Hutton, born in 1726, never became famous for his theories during his life. It would take a generation before the geologist Charles Lyell and the biologist Charles Darwin would grasp the importance of his work. But his influence endures today.

An illustration of Hutton doing fieldwork, by artist John Kay. (Library of Congress)

"A lot of what is still in practice today in terms of how we think about geology came from Hutton," says Stephen Marshak, a geology professor at the University of Illinois who has made the pilgrimage to Siccar Point twice. To Marshak, Hutton is the father of geology.

Authors like Stephen Jay Gould and Jack Repcheck—who wrote a biography of Hutton titled The Man Who Found Time—credit him with freeing science from religious orthodoxy and laying the foundation for Charles Darwin’s theory of evolution.

"He burst the boundaries of time, thereby establishing geology's most distinctive and transforming contribution to human thought—Deep Time," Gould wrote in 1977.

Hutton developed his theory over 25 years, first while running a farm in eastern Scotland near the border with England and later in an Edinburgh house he built in 1770. There, one visitor wrote that "his study is so full of fossils and chemical apparatus of various kinds that there is barely room to sit down."

He was spared financial worries thanks to income from the farm and other ventures, and had no dependent family members, because he never married. Thus freed of most earthly burdens, he spent his days working in the study and reading. He traveled through Scotland, Wales and England, collecting rocks and surveying the geology. Through chemistry, he determined that rocks could not have precipitated from a catastrophe like Noah’s Flood, the prevailing view of previous centuries, otherwise they would be dissolved by water. Heat and pressure, he realized, formed rocks.

That discovery came with help from Joseph Black, a physician, chemist and the discoverer of carbon dioxide. When Hutton moved to Edinburgh, Black shared his love of chemistry, a key tool to understanding the effect of heat on rock. He deduced the existence of latent heat and the importance of pressure on heated substances. Water, for instance, stays liquid under pressure even when heated to a temperature that normally would transform it to steam. Those ideas about heat and pressure would become key to Hutton’s theory about how buried sediments became rock.

Black and Hutton were among the leading lights of the Royal Society of Edinburgh, along with Adam Smith, the economist and author of The Wealth of Nations; David Hume, the philosopher; Robert Burns, the poet; and James Watt, whose improved steam engine paved the way for the Industrial Revolution.

Hutton's principle of uniformitarianism—that the present is the key to the past—has been a guiding principle in geology and all sciences since. Marshak notes that despite his insight, Hutton didn’t grasp all the foundations of geology. He thought, for example, that everything happened at a similar rate, something that does not account for catastrophic actions like mountain building or volcanic eruptions, which have shaped the Earth.

Unlike many of his contemporaries, Hutton never found fame during his life. But his portrait of an ever-changing planet had a profound effect. Playfair's book found favor with Charles Lyell, who was born in 1797, the year that Hutton died. Lyell's first volume of "Principles of Geology" was published in 1830, using Hutton and Playfair as starting points.

Charles Darwin brought a copy aboard the Beagle in 1832 and later became a close friend of Lyell after completing his voyage in 1836. Darwin’s On the Origin of Species owes a debt to Hutton’s concept of deep time and rejection of religious orthodoxy.

"The concept of Deep Time is essential. Now, we take for granted the Earth is 4.5 billion years old. Hutton had no way of knowing it was that kind of age. But he did speculate that the Earth must be very, very old," Marshak says. "That idea ultimately led Darwin to come up with his phrasing of the theory of evolution. Because only by realizing there could be an immense amount of time could evolution produce the diversity of species and also the record of species found in fossils."

"The genealogy of these ideas," he adds, "goes from Hutton to Playfair to Lyell to Darwin."

Your Tweets Can Predict When You’ll Get the Flu

Smithsonian Magazine

Simply by looking at geo-tagged tweets, an algorithm can track the spread of flu and predict which users are going to get sick. Image via Adam Sadilek, University of Rochester

In 1854, in response to a devastating cholera epidemic that was sweeping through London, British doctor John Snow introduced an idea that would revolutionize the field of public health: the epidemiological map. By recording instances of cholera in different neighborhoods of the city and plotting them on a map based on patients’ residences, he discovered that a single contaminated water pump was responsible for a great deal of the infections.

The map persuaded him—and, eventually, the public authorities—that the miasma theory of disease (which claimed that diseases spread via noxious gases) was false, and that the germ theory (which correctly claimed that microorganisms were to blame) was true. The authorities put a lock on the handle of the pump responsible for the outbreak, signaling a paradigm shift that permanently changed how we deal with infectious diseases and sanitation.

The mapping technology is quite different, as is the disease, but there’s a certain similarity between Snow’s map and a new project conducted by a group of researchers led by Henry Kautz of the University of Rochester. By creating algorithms that can spot flu trends and make predictions based on keywords in publicly available geotagged tweets, they’re taking a new approach to studying the transmission of disease—one that could change the way we study and track the movement of diseases in society.

“We can think of people as sensors that are looking at the world around them and then reporting what they are seeing and experiencing on social media,” Kautz explains. “This allows us to do detailed measurements on a population scale, and doesn’t require active user participation.”

In other words, when we tweet that we’ve just been laid low by a painful cough and a fever, we’re unwittingly providing rich data for an enormous public health experiment, information that researchers can use to track the movement of diseases like flu in high resolution and real time.

Kautz’s project, called SocialHealth, has made use of tweets and other sorts of social media to track a range of public health issues. Recently, the team began using tweets to monitor instances of food poisoning at New York City restaurants by logging everyone who had posted geotagged tweets from a restaurant, then following their tweets for the next 72 hours, checking for mentions of vomiting, diarrhea, abdominal pain, fever or chills. In doing so, they detected 480 likely instances of food poisoning.
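In code, that food-poisoning pipeline amounts to a simple cohort follow-up: flag anyone who tweeted from a restaurant, then watch their next 72 hours of tweets for symptom keywords. Below is a minimal sketch of that logic in Python; the data structures, keyword list and function names are illustrative assumptions, not the SocialHealth team's actual implementation.

# A hypothetical sketch of the 72-hour follow-up described above.
from datetime import datetime, timedelta

SYMPTOMS = {"vomiting", "diarrhea", "abdominal pain", "fever", "chills"}
WINDOW = timedelta(hours=72)

def likely_food_poisoning(tweets, restaurant_visits):
    """tweets: (user, time, text) triples; restaurant_visits: {user: visit time}.
    Return users who mention a symptom within 72 hours of a restaurant visit."""
    flagged = set()
    for user, time, text in tweets:
        visit = restaurant_visits.get(user)
        if visit and visit <= time <= visit + WINDOW:
            if any(symptom in text.lower() for symptom in SYMPTOMS):
                flagged.add(user)
    return flagged

visits = {"alice": datetime(2013, 10, 1, 13, 0)}
stream = [("alice", datetime(2013, 10, 2, 9, 0), "Awful night, fever and chills")]
print(likely_food_poisoning(stream, visits))  # {'alice'}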

But as the season changes, it’s their work tracking the influenza virus that’s most eye-opening. Google Flu Trends has similarly sought to use Google searches to track the movement of flu, but that model greatly overestimated last year’s outbreak, perhaps because media coverage of flu prompted people to start making flu-related queries. Twitter analysis represents a new dataset with a few qualities—a higher geographic resolution and the ability to capture the movement of a user over time—that could yield better predictions.

To start their flu-tracking project, the SocialHealth researchers looked specifically at New York, collecting around 16 million geotagged public tweets per month from 600,000 users for three months’ time. Below is a time-lapse of one New York Twitter day, with different colors representing different frequencies of tweets at that location (blue and green mean fewer tweets, orange and red mean more):

To make use of all this data, his team developed an algorithm that determines if each tweet represents a report of flu-like symptoms. Previously, other researchers had done this simply by searching for keywords in tweets (“sick,” for example), but his team found that the approach leads to false positives: many more users tweet that they’re sick of homework than that they’re feeling sick.

To account for this, his team’s algorithm looks for three words in a row (instead of one), and considers how often the particular sequence is indicative of an illness, based on a set of tweets they’d manually labelled. The phrase “sick of flu,” for instance, is strongly correlated with illness, whereas “sick and tired” is less so. Some particular words—headache, fever, coughing—are strongly linked with illness no matter what three-word sequence they’re part of.
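To make the three-word idea concrete, here is a minimal sketch in Python. The trigram weights and the symptom-word bonus are invented for illustration (the team's real scores came from their manually labelled tweets), but the logic follows the description above: score each three-word sequence, and let strong symptom words count regardless of context.

# Illustrative trigram weights; a real model would learn these from labelled data.
TRIGRAM_WEIGHTS = {
    ("sick", "of", "flu"): 0.9,     # strongly correlated with illness
    ("sick", "and", "tired"): 0.1,  # mostly an idiom, weak signal
}

# Words linked with illness no matter what sequence they appear in.
SYMPTOM_WORDS = {"headache", "fever", "coughing"}

def flu_score(tweet: str) -> float:
    """Score a tweet by its most illness-indicative three-word sequence."""
    words = tweet.lower().split()
    # Symptom words count regardless of the surrounding trigram.
    score = 0.8 if SYMPTOM_WORDS & set(words) else 0.0
    for i in range(len(words) - 2):
        trigram = tuple(words[i:i + 3])
        score = max(score, TRIGRAM_WEIGHTS.get(trigram, 0.0))
    return score

print(flu_score("so sick of flu season"))       # high: 0.9
print(flu_score("sick and tired of homework"))  # low: 0.1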

Once these millions of tweets were coded, the researchers could do a few intriguing things with them. For starters, they looked at changes in flu-related tweets over time and compared them with levels of flu as reported by the CDC, confirming that the tweets accurately captured the overall trend in flu rates. Unlike CDC data, however, the Twitter signal is available in near real time, rather than a week or two after the fact.

But they also went deeper, looking at the interactions between different users—as represented by two users tweeting from the same location (the GPS resolution is about half a city block) within the same hour—to model how likely it is that a healthy person would become sick after coming into contact with someone with the flu. Obviously, two people tweeting from the same block 40 minutes apart didn’t necessarily meet in person, but the odds of their having met are higher than for two random users.

As a result, when you look at a large enough dataset of interactions, a picture of transmission emerges. They found that if a healthy user encounters 40 other users who report themselves as sick with flu symptoms, his or her odds of getting flu symptoms the next day increase from less than one percent to 20 percent. With 60 interactions, that number rises to 50 percent.

The team also looked at interactions on Twitter itself, isolating pairs of users who follow each other and calling them “friendships.” Even though many Twitter relationships exist only on the Web, some correspond to real-life interactions, and they found that a user with ten friends who report themselves as sick is 28 percent more likely to become sick the next day. In total, using both of these types of interactions, their algorithm was able to predict whether a healthy person would get sick (and tweet about it) with 90 percent accuracy.
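A rough sketch of how such interaction features might be computed: snap each geotagged tweet to a coarse grid cell (standing in for the half-block GPS resolution), count co-located sick users per hour, and map encounter and sick-friend counts to a next-day risk. The grid size, the piecewise risk table echoing the figures above, and the per-friend adjustment are all illustrative assumptions, not the fitted model from the study.

from collections import defaultdict

def grid_cell(lat: float, lon: float, size: float = 0.0005):
    """Snap a GPS point to a grid cell roughly half a city block across."""
    return (round(lat / size), round(lon / size))

def count_encounters(user_tweets, sick_tweets):
    """Count distinct sick users sharing a (cell, hour) bucket with the user."""
    sick_buckets = defaultdict(set)
    for uid, lat, lon, hour in sick_tweets:
        sick_buckets[(grid_cell(lat, lon), hour)].add(uid)
    met = set()
    for lat, lon, hour in user_tweets:
        met |= sick_buckets[(grid_cell(lat, lon), hour)]
    return len(met)

def next_day_risk(encounters: int, sick_friends: int) -> float:
    """Illustrative risk: <1% baseline, ~20% at 40 encounters, ~50% at 60,
    plus a relative bump per sick friend (the article cites +28% for ten)."""
    if encounters >= 60:
        base = 0.50
    elif encounters >= 40:
        base = 0.20
    else:
        base = 0.01
    return min(1.0, base * (1 + 0.028 * sick_friends))

user = [(40.71230, -74.00590, 14)]
sick = [(f"u{i}", 40.71231, -74.00589, 14) for i in range(45)]
print(count_encounters(user, sick))        # 45 co-located sick users
print(next_day_risk(45, sick_friends=10))  # about 0.26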

We’re still in the early stages of this research, and there are plenty of limitations: Most people still don’t use Twitter (yes, really) and even if they do, they might not tweet about getting sick.

But if this sort of system could be developed further, it’s easy to imagine all sorts of applications. Your smartphone could automatically warn you, for instance, if you’d spent too much time in the places occupied by people with the flu, prompting you to go home to stop putting yourself in the path of infection. An entire city’s residents could even be warned if it were on the verge of an outbreak.

Though we are 150 years removed from John Snow’s disease-mapping breakthrough, it’s clear that there are still aspects of disease information we don’t fully understand. Now, as then, mapping the data could help yield the answers.

Top 10 Nation-Building Real Estate Deals

Smithsonian Magazine

Despite the recent unpleasantness in the real estate market, many still hold (or once held, or will hold again) to the axiom of the late millionaire Louis Glickman: “The best investment on earth is earth.” This applies for nations, too. Below are ten deals in which the United States acquired territory, ranked in order of their consequences for the nation. Feel free to make bids of your own. (Just to be clear, these are deals, or agreements; annexations and extralegal encroachments don’t apply.)

1. The Treaty of Paris (1783): Before the United States could start acquiring real estate, it had to become the United States. With this deal, the former 13 colonies received Great Britain’s recognition as a sovereign nation. Included: some 830,000 square miles formerly claimed by the British, the majority of it—about 490,000 square miles—stretching roughly from the western boundaries of the 13 new states to the Mississippi. So the new nation had room to grow—pressure for which was already building.

2. The Treaty of Ghent (1814): No land changed hands under this pact, which ended the Anglo-American War of 1812 (except for the Battle of New Orleans, launched before Andrew Jackson got word that the war was over). But it forced the British to say, in effect: OK, this time we really will leave. Settlement of the former Northwest Territory could proceed apace, leading to statehood for Indiana, Illinois, Michigan, Wisconsin and Minnesota, the eastern part of which was in the territory. (Ohio had become a state in 1803.)

3. The Louisiana Purchase (1803): It doubled the United States’ square mileage, got rid of a foreign power on its western flank and gave the fledgling nation control of the Mississippi. But the magnitude of this deal originated with our counterparty, the French. The Jefferson administration would have paid $10 million just for New Orleans and a bit of land east of the Mississippi. Napoleon asked: What would you pay for all of Louisiana? (“Louisiana” being the heart of North America: from New Orleans north to Canada and from the Mississippi west to the Rockies, excluding Texas.) Jefferson’s men in Paris, James Monroe and Robert Livingston, exceeded their authority in closing a deal for $15 million. The president did not complain.

4. The Alaska Purchase (1867): Russia was a motivated seller: the place was hard to occupy, let alone defend; the prospect of war in Europe loomed; business prospects looked better in China. Secretary of State William H. Seward was a covetous buyer, but he got a bargain: $7.2 million for 586,412 square miles, about 2 cents an acre. Yes, Seward’s alleged folly has been vindicated many times over since Alaska became the gateway to Klondike gold in the 1890s. He may have been visionary, or he may have been just lucky. (His precise motives remain unclear, historian David M. Pletcher writes in The Diplomacy of Involvement: American Economic Expansion Across the Pacific, because “definitive written evidence” is lacking.) The secretary also had his eye on Greenland. But we’re getting ahead of ourselves.
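(A quick back-of-envelope check of that per-acre price, using 640 acres to the square mile:

$586{,}412\ \text{mi}^2 \times 640\ \tfrac{\text{acres}}{\text{mi}^2} \approx 3.75 \times 10^{8}\ \text{acres}, \qquad \frac{\$7{,}200{,}000}{3.75 \times 10^{8}\ \text{acres}} \approx \$0.019\ \text{per acre},$

or just under 2 cents.)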

Image by Newscom. With the Treaty of Paris in 1783, the former 13 colonies received Great Britain's recognition as a sovereign nation along with some 830,000 square miles. (original image)

Image by Bettmann / Corbis. The United States expanded from the original 13 colonies in a series of deals that began in 1783 with the Treaty of Paris. (original image)

Image by Newscom. Although no land changed hands under the Treaty of Ghent in 1814, it forced the British to leave the Northwest Territory to allow settlement. This led to statehood for Indiana, Illinois, Michigan, Wisconsin and Minnesota. (original image)

Image by Bettmann / Corbis. The Louisiana Purchase in 1803 doubled the United States' square mileage, got rid of a foreign power on its western flank and gave the fledgling nation control of the Mississippi. (original image)

Image by Library of Congress. Secretary of State William H. Seward bargained with Russia for the sale of Alaska in 1867. Seward bought 586,412 square miles for $7.2 million, about 2 cents an acre. What was once known as Seward's Folly has proven to be quite valuable with the discovery of gold and oil in the region. (original image)

Image by Bettmann / Corbis. In order to keep the Germans from controlling shipping lanes in the Atlantic and the Caribbean, the Wilson administration signed the Virgin Islands Purchase in 1917. The U.S. paid Denmark $25 million in exchange for St. Thomas, St. Croix and St. John. (original image)

5. The Treaty of Guadalupe Hidalgo (1848): The Polk administration negotiated from strength—it had troops in Mexico City. Thus the Mexican-American War ended with the United States buying, for $15 million, 525,000 square miles in what we now call the Southwest (all of California, Nevada and Utah, and parts of Wyoming, Colorado, Arizona and New Mexico). Mexico, though diminished, remained independent. The United States, now reaching the Pacific, began to realize its Manifest Destiny. On the other hand, the politics of incorporating the new territories into the nation helped push the Americans toward civil war.

6. The Oregon Treaty (1846): A victory for procrastination. The United States and Great Britain had jointly occupied 286,000 square miles between the northern Pacific and the Rockies since 1818, with the notion of sorting things out later. Later came in the early 1840s, as more Americans poured into the area. The 1844 presidential campaign featured the battle cry “Fifty-four forty or fight!” (translation: “We want everything up to the latitude of Alaska’s southern maritime border”), but this treaty fixed the northern U.S. border at the 49th parallel—still enough to bring present-day Oregon, Washington and Idaho and parts of Montana and Wyoming into the fold.

7. The Adams-Onís Treaty (1819): In the mother of all Florida real estate deals, the United States bought 60,000 square miles from Spain for $5 million. The treaty solidified the United States’ hold on the Atlantic and Gulf coasts and pushed Spanish claims in the North American continent to west of the Mississippi (where they evaporated after Mexico won its independence in 1821… and then lost its war with the United States in 1848; see No. 5).

8. The Gadsden Purchase (1853): This time, the United States paid Mexico $10 million for only 30,000-odd square miles of flat desert. The intent was to procure a route for a southern transcontinental railroad; the result was to aggravate (further) North-South tensions over the balance between slave and free states. The railroad wasn’t finished until 1881, and most of it ran north of the Gadsden Purchase (which now forms the southern parts of New Mexico and Arizona).

9. The Virgin Islands Purchase (1917): During World War I, the Wilson administration shuddered to think: If the Germans annex Denmark, they could control shipping lanes in the Atlantic AND the Caribbean. So the Americans struck a deal with the Danes, paying $25 million for St. Thomas, St. Croix and St. John. Shipping continued; mass tourism came later.

10. The Greenland Proffer (1946): The one that got away. The biggest consequence of this deal is that it never happened. At least since Seward’s day (see No. 4), U.S. officials had cast a proprietary eye toward our neighbor to the really far north. After World War II, the United States made it official, offering $100 million to take the island off Denmark’s administrative hands. Why? Defense. (Time magazine, January 27, 1947: “Greenland’s 800,000 square miles would make it the world’s largest island and stationary aircraft carrier.”) “It is not clear,” historian Natalia Loukacheva writes in The Arctic Promise: Legal and Political Autonomy of Greenland and Nunavut, “whether the offer was turned down... or simply ignored.” Greenland achieved home rule in 1979.

The Commoner Who Salvaged a King’s Ransom

Smithsonian Magazine

George Fabian Lawrence, better known as “Stoney Jack,” parlayed his friendships with London navvies into a stunning series of archaeological discoveries between 1895 and 1939.

It was only a small shop in an unfashionable part of London, but it had a most peculiar clientele. From Mondays to Fridays the place stayed locked, and its only visitors were schoolboys who came to gaze through the windows at the marvels crammed inside. But on Saturday afternoons the shop was opened by its owner—a “genial frog” of a man, as one acquaintance called him, small, pouched, wheezy, permanently smiling and with the habit of puffing out his cheeks when he talked. Settling himself behind the counter, the shopkeeper would light a cheap cigar and then wait patiently for laborers to bring him treasure. He waited at the counter many years—from roughly 1895 until his death in 1939—and in that time accumulated such a hoard of valuables that he supplied the museums of London with more than 15,000 ancient artifacts and still had plenty left to stock his premises at 7 West Hill, Wandsworth.

“It is,” the journalist H.V. Morton assured his readers in 1928,

perhaps the strangest shop in London. The shop sign over the door is a weather-worn Ka-figure from an Egyptian tomb, now split and worn by the winds of nearly forty winters. The windows are full of an astonishing jumble of objects. Every historic period rubs shoulders in them. Ancient Egyptian bowls lie next to Japanese sword guards and Elizabethan pots contain Saxon brooches, flint arrowheads or Roman coins…

There are lengths of mummy cloth, blue mummy beads, a perfectly preserved Roman leather sandal found twenty feet beneath a London pavement, and a shrunken black object like a bird’s claw that is a mummified hand… all the objects are genuine and priced at a few shillings each.

H.V. Morton, one of the best-known British journalists of the 1920s and 1930s, often visited Lawrence’s shop as a young man, and wrote a revealing and influential pen-portrait of him.

This higgledy-piggledy collection was the property of George Fabian Lawrence, an antiquary born in the Barbican area of London in 1861—though to say that Lawrence owned it is to stretch a point, for much of his stock was acquired by shadowy means, and on more than one occasion an embarrassed museum had to surrender an item it had bought from him.

For the better part of half a century, however, august institutions from the British Museum down winked at his hazy provenances and his suspect business methods, for the shop on West Hill supplied items that could not be found elsewhere. Among the major museum pieces that Lawrence obtained and sold were the head of an ancient ocean god, which remains a cornerstone of the Roman collection at the Museum of London; a spectacular curse tablet in the British Museum; and the magnificent Cheapside Hoard: a priceless 500-piece collection of gemstones, brooches and rings excavated from a cellar shortly before the First World War. It was the chief triumph of Lawrence’s career that he could salvage the Hoard, which still comprises the greatest trove of Elizabethan and Stuart-era jewelry ever unearthed.

Lawrence’s operating method was simple but ingenious. For several decades, he would haunt London’s building sites each weekday lunch hour, sidling up to the laborers who worked there, buying them drinks and letting them know that he was more than happy to purchase any curios—from ancient coins to fragments of pottery—that they and their mates uncovered in the course of their excavations. According to Morton, who first visited the West Hill shop as a wide-eyed young man around 1912, and soon began to spend most of his Saturday afternoons there, Lawrence was so well known to London’s navvies that he was universally referred to as “Stoney Jack.” A number, Morton added, had been offered “rudimentary archaeological training” by the antiquary, so they knew what to look for.

Lawrence made many of his purchases on the spot; he kept his pockets full of half-crowns (each worth two shillings and sixpence, or around $18.50 today) with which to reward contacts, and he could often be spotted making furtive deals behind sidewalk billboards and in barrooms. His greatest finds, though, were the ones that wended their way to Wandsworth on the weekends, brought there wrapped in handkerchiefs or sacks by navvies spruced up in their Sunday best, for it was only then that laborers could spirit their larger discoveries away from the construction sites and out from under the noses of their foremen and any landlords’ representatives. They took such risks because they liked and trusted Lawrence—and also, as JoAnn Spears explains it, because he “understood networking long before it became a buzzword, and leveraged connections like a latter-day Fagin.”

London navvies (laborers who excavated foundations, built railways and dug tunnels, all by hand) uncovered thousands of valuable artifacts in the British capital each year.

Two more touches of genius ensured that Stoney Jack remained the navvies’ favorite. The first was that he was renowned for his honesty. If ever a find sold for more than he had estimated it was worth, he would track down the discoverer and make certain he received a share of the profits. The second was that Lawrence never turned a visitor away empty-handed. He rewarded even the most worthless discoveries with the price of half a pint of beer, and the workmen’s attitude toward his chief rival—a representative of the City of London’s Guildhall Museum who earned the contemptuous nickname “Old Sixpenny”—is a testament to his generosity.

Lawrence lived at just about the time that archaeology was emerging as a professional discipline, but although he was extremely knowledgeable, and enjoyed a long career as a salaried official—briefly at the Guildhall and for many years as Inspector of Excavations at the newer Museum of London—he was at heart an antiquarian. He had grown up as the son of a pawnbroker and left school at an early age; for all his knowledge and enthusiasm, he was more or less self-taught. He valued objects for themselves and for what they could tell him about some aspect of the past, never, apparently, seeing his discoveries as tiny fragments of some greater whole.

To Lawrence, Morton wrote,

the past appeared to be more real, and infinitely more amusing, than the present. He had an almost clairvoyant attitude to it. He would hold a Roman sandal—for leather is marvelously preserved in the London clay—and, half closing his eyes, with his head on one side, his cheroot obstructing his diction, would speak about the cobbler who had made it ages ago, the shop in which it had been sold, the kind of Roman who had probably brought it and the streets of the long-vanished London it had known.

The whole picture took life and colour as he spoke. I have never met anyone with a more affectionate attitude to the past.

Like Morton, who nursed a love of ancient Egypt, Stoney Jack acquired his interest in ancient history during his boyhood. “For practical purposes,” he told another interviewer, “let us say 1885, when as a youth of 18 I found my first stone implement…. It chanced that one morning I read in the paper of the finding of some stone implements in my neighborhood. I wondered if there were any more to be found. I proceeded to look for them in the afternoon, and was rewarded.”

A Roman “curse tablet”, recovered by Lawrence from an excavation in Telegraph Street, London, is now part of the collection of the British Museum.

Controversial though Lawrence’s motives and his methods may have been, it is hard to avoid the conclusion that he was the right man in the right place to save a good deal of London’s heritage. Between 1890 and 1930 the city underwent redevelopment at a pace unheard of since the Great Fire of 1666; old buildings were demolished and replaced with newer, taller ones that required deeper foundations. In the days before the advent of widespread mechanization in the building trade, much of the necessary digging was done by navvies, who hacked their way down through Georgian, Elizabethan, medieval and finally Saxon and Roman strata that had not been exposed for centuries.

It was a golden age for excavation. The relatively small scale of the work—which was mostly done with picks and shovels—made it possible to spot and salvage minor objects in a way no longer practicable today. Even so, no formal system existed for identifying or protecting artifacts, and without Lawrence’s intervention most if not all of the 12,000 objects he supplied to the Museum of London, and the 300 and more catalogued under his name at the British Museum, would have been tipped into skips and shot into Thames barges to vanish into landfill on the Erith marshes. This was very nearly the fate of the treasure with which Stoney Jack will always be associated: the ancient bucket packed to the brim with a king’s ransom worth of gems and jewelry that was dug out of a cellar in the City of London during the summer of 1912.

It is impossible to say for certain who uncovered what would become known as the Cheapside Hoard, exactly where they found it, or when it came into the antiquary’s possession. According to Francis Sheppard, the date was June 18, 1912, and the spot an excavation on the corner of Friday Street and Cheapside in a district that had long been associated with the jewelry trade. That may or may not be accurate; one of Lawrence’s favorite tricks was to obscure the precise source of his most valued stock so as to prevent suspicious landowners from lodging legal claims.

This dramatic pocket watch, dated to c.1610 and set in a case carved from a single large Colombian emerald, was one of the most valuable of the finds making up the Cheapside Hoard, and led historian Kris Lane to put forward a new theory explaining the Hoard’s origins. Photo: Museum of London.

Whatever the truth, the discovery was a spectacular one whose value was recognized by everyone who saw it—everyone, that is, but the navvies who uncovered the Hoard in the first place. According to Morton, who claimed to have been present as a boy when the find was brought to West Hill by its discoverers one Saturday evening, the workmen who had uncovered it believed that they had “struck a toyshop.” Tipping open a sack, the men disgorged an enormous lump of clay resembling “an iron football,” the journalist recalled, “and they said there was a lot more of it. When they had gone, we went up to the bathroom and turned the water on to the clay. Out fell pearl earrings and pendants and all kinds of crumpled jewellery.”

For the most accurate version of what happened next, it is necessary to turn to the records of the Museum of London, which reveal that the discovery caused so much excitement that a meeting of the museum’s trustees was convened at the House of Commons the next evening, and the whole treasure was assembled for inspection a week later. “By that time,” Sheppard notes, “Lawrence had somehow or other got hold of a few more jewels, and on June 26 [the trustees] sent him a cheque for £90…. Whether this was the full amount paid by the trustees for the hoard is not clear. In August 1913 he was paid £47 for unspecified purchases for the museum.”

Morton—who was 19 at the time of the discovery—offered a more romantic account many years later: “I believe that Lawrence declared this as treasure trove and was awarded a large sum of money, I think a thousand pounds. I well remember that he gave each of the astounded navvies something like a hundred pounds each, and I was told that these men disappeared, and were not seen again for months!”

Whatever the truth, the contents of the navvies’ bucket were certainly astonishing. The hoard consisted of several hundred pieces—some of them loose gems, but most worked pieces of jewelry in a wide variety of styles. They came from all over the world; among the most spectacular pieces were a number of cameos featuring Roman gods, several fantastical jewels from Mughal India, a quantity of superb 17th-century enamelware, and a large hinged watch case carved from a huge emerald.

A finely-worked salamander brooch, typical of the intricate Stuart-era jewelry that made up the Cheapside Hoard. Photo: Museum of London.

The collection was tentatively dated to around 1600-1650, and was rendered particularly valuable by the ostentatious fashions of the time; many of the pieces had bold, complex designs featuring a multiplicity of large gems. It was widely assumed, then and now, that the Cheapside Hoard was the stock-in-trade of some Stuart-era jeweler, buried for safekeeping at some point during the Civil War that shattered England, Ireland and Scotland between 1642 and 1651, eventually resulting in the execution of Charles I and the establishment of Oliver Cromwell’s short-lived puritan republic.

It is easy to imagine some hapless jeweler, impressed into the Parliamentarian army, concealing his valuables in his cellar before marching off to his death on a distant battlefield. More recently, however, an alternative theory has been advanced by Kris Lane, a historian at Tulane whose book The Colour of Paradise: The Emerald in the Age of Gunpowder Empires suggests that the Cheapside Hoard probably had its origins in the great emerald markets of India, and may once have belonged to a Dutch gem merchant named Gerard Polman.

The story that Lane spins goes like this: Testimonies recorded in London in 1641 show that, a decade earlier, Polman had booked passage home from Persia after a lifetime’s trading in the east. He had offered £100 or £200 to the master of the East India Company ship Discovery, then at Gombroon, Persia, to bring him home to Europe, but got no further than the Comoros Islands before dying, possibly poisoned by the ship’s crew for his valuables. Soon afterwards, the carpenter’s mate of the Discovery, one Christopher Adams, appropriated a large black box, stuffed with jewels and silk, that had once belonged to Polman. This treasure, the testimonies state, was astonishingly valuable; according to Adams’s wife, the gems it contained were “so shiny that they thought the cabin was afire” when the box was first opened in the Indian Ocean. “Other deponents who had seen the jewels on board ship,” adds Lane, “said they could read by their brilliance.”

Cheapside, for many years the center of London’s financial district but in Stuart times known for its jewelry stores, photographed c.1900.

It is scarcely surprising, then, that when the Discovery finally hove to off Gravesend, at the mouth of the Thames, at the end of her long voyage, Adams jumped ship and went ashore in a small boat, taking his loot with him. We know from the Parliamentary archive that he made several journeys to London to fence the jewels, selling some to a man named Nicholas Pope who kept a shop off Fleet Street.

Soon, however, word of his treachery reached the directors of the East India Company, and Adams was promptly taken into custody. He spent the next three years in jail. It is the testimony that he gave from prison that may tie Polman’s gems to the Cheapside Hoard.

The booty, Adams admitted, had included “a greene rough stone or emerald three inches long and three inches in compass”—a close match for the jewel carved into the hinged watch case that Stoney Jack recovered in 1912. This jewel, he confessed, “was afterward pawned at Cheapside, but to whom he knoweth not,” and Lane considers it a “likely scenario” that the emerald found its way into the bucket buried in a Cheapside cellar; “many of the other stones and rings,” he adds, “appear tantalizingly similar to those mentioned in the Polman depositions.” If Lane is right, the Cheapside Hoard may have been buried in the 1630s, to keep it from the agents of the East India Company, rather than lost during the chaos of the Civil War.

Whether or not Lane’s scholarly detective work has revealed the origins of the Cheapside Hoard, it seems reasonable to ask whether the good that Stoney Jack Lawrence did was enough to outweigh the less creditable aspects of his long career. His business was, of course, barely legitimate; in theory, his navvies’ finds belonged to the owner of the land they were working on—or, if exceptionally valuable, to the Crown. That the finds had to be smuggled off the building sites, and that Lawrence, when he catalogued and sold them, chose to be vague about exactly where they had been found, are evidence enough of his duplicity.

A selection of the 500 pieces making up the Cheapside Hoard that were recovered from a ball of congealed mud and crushed metalwork resembling an “iron football” uncovered in the summer of 1912. Photo: Museum of London.

Equally disturbing, to the modern scholar, is Lawrence’s willingness to compromise his integrity as a salaried official of several museums by acting as both buyer and seller in hundreds of transactions, not only setting his own price but also authenticating artifacts that he himself supplied. Yet there is remarkably little evidence that any institution Lawrence worked for paid over the odds for his discoveries, and when Stoney Jack died, at age 79, he left an estate totaling little more than £1,000 (about $87,000 now). By encouraging laborers to hack treasures from the ground and smuggle them out to him, the old antiquary also turned his back on the possibility of setting up regulated digs that would almost certainly have turned up additional finds and evidence to set his greatest discoveries in context. On the other hand, there were few regulated digs in those days, and had Lawrence never troubled to make friends with London navvies, most of his finds would have been lost forever.

For H.V. Morton, it was Stoney Jack’s generosity that mattered. “He loved nothing better than a schoolboy who was interested in the past,” Morton wrote. “Many a time I have seen a lad in his shop longingly fingering some trifle that he could not afford to buy. ‘Put it in your pocket,’ Lawrence would cry. ‘I want you to have it, my boy, and–give me threepence!‘”

But perhaps the last word can be left to Sir Mortimer Wheeler, something of a swashbuckler himself, but a pillar of the British archaeological establishment by the time he became keeper of the Museum of London in the 1930s, after Stoney Jack had been forced into retirement for making one illicit purchase too many outside a guarded building site.

“But for Mr Lawrence,” Wheeler conceded,

not a tithe of the objects found during building or dredging operations in the neighborhood of London during the last forty years would have been saved to knowledge. If on occasion a remote landowner may, in the process, theoretically have lost some trifle that was his just due, a higher justice may reasonably recognize that… the representative and, indeed, important prehistoric, Roman, Saxon and medieval collections of the Museum are largely founded upon this work of skillful salvage.

Sources

Anon. “Saved Tudor relics.” St Joseph News-Press (St Joseph, MO), August 3, 1928; Anon. “Stoney Jack’s work for museum.” Straits Times (Singapore), August 1, 1928; Michael Bartholomew. In Search of H.V. Morton. London: Methuen, 2010; Joanna Bird, Hugh Chapman & John Clark. Collectanea Londiniensia: Studies in London Archaeology and History Presented to Ralph Merrifield. London: London & Middlesex Archaeological Society, 1978; Derby Daily Telegraph, November 20, 1930; Exeter & Plymouth Gazette, March 17, 1939; Gloucester Citizen, July 3, 1928; Kris E. Lane. The Colour of Paradise: The Emerald in the Age of Gunpowder Empires. New Haven: Yale University Press, 2010; J. MacDonald. “Stony Jack’s Roman London.” In J. Bird, M. Hassall and Harvey Sheldon, Interpreting Roman London. Oxbow Monograph 58 (1996); Arthur MacGregor. Summary Catalogue of the Continental Archaeological Collections. Oxford: Ashmolean Museum, 1997; H.V. Morton. In Search of London. Boston: Da Capo Press, 2002; Ivor Noël Hume. A Passion for the Past: The Odyssey of a Transatlantic Archaeologist. Charlottesville: University of Virginia Press, 2010; Francis Sheppard. Treasury of London’s Past. London: Stationery Office, 1991; Derek Sherborn. An Inspector Recalls. London: Book Guild, 2003; JoAnn Spears. “The Cheapside Hoard.” On the Tudor Trail, February 23, 2012. Accessed June 4, 2013; Peter Watts. “Stoney Jack and the Cheapside Hoard.” The Great Wen, November 18, 2010. Accessed June 4, 2013.

How Advertising Shaped the First Opioid Epidemic

Smithsonian Magazine

When historians trace back the roots of today’s opioid epidemic, they often find themselves returning to the wave of addiction that swept the U.S. in the late 19th century. That was when physicians first got their hands on morphine: a truly effective treatment for pain, delivered first by tablet and then by the newly invented hypodermic syringe. With no criminal regulations on morphine, opium or heroin, many of these drugs became the "secret ingredient" in readily available, dubiously effective medicines.

In the 19th century, after all, there was no Food and Drug Administration (FDA) to regulate the advertising claims of health products. In such a climate, a popular so-called “patent medicine” market flourished. Manufacturers of these nostrums often made misleading claims and kept their full ingredients list and formulas proprietary, though we now know they often contained cocaine, opium, morphine, alcohol and other intoxicants or toxins.

Products like heroin cough drops and cocaine-laced toothache medicine were sold openly and freely over the counter, using colorful advertisements that can be downright shocking to modern eyes. Take this 1885 print ad for Mrs. Winslow’s Soothing Syrup for Teething Children, for instance, showing a mother and her two children looking suspiciously beatific. The morphine content may have helped.

1885 advertisement for Mrs. Winslow’s Soothing Syrup, a product for teething children that contained morphine. (NIH National Library of Medicine)

19th-century advertisement published by Mumbles Railway Publishing. (NIH National Library of Medicine)

Yet while it’s easy to blame patent medicines and American negligence for the start of the first opioid epidemic, the real story is more complicated. First, it would be a mistake to assume that Victorian-era Americans were simply hunky-dory with giving infants morphine syrup. The problem was, they just didn’t know. It took the work of muckraking journalists such as Samuel Hopkins Adams, whose exposé series “The Great American Fraud” appeared in Collier’s from 1905 to 1906, to pull back the curtain.

But more than that, widespread opiate use in Victorian America didn’t start with the patent medicines. It started with doctors.

The Origins of Addiction

Patent medicines typically contained relatively small quantities of morphine and other drugs, says David Herzberg, a professor of history at the University at Buffalo, SUNY. “It’s pretty well recognized that none of those products produced any addiction,” says Herzberg, who is currently writing a history of legal narcotics in America.

Until the Harrison Narcotics Act of 1914, there were no federal laws regulating drugs such as morphine or cocaine. Moreover, even in those states that had regulations on the sale of narcotics beginning in the 1880s, Herzberg notes that “laws were not part of the criminal code, instead they were part of medical/pharmacy regulations.”

The laws that existed weren't well-enforced. Unlike today, a person addicted to morphine could take the same “tattered old prescription” back to a compliant druggist again and again for a refill, says David Courtwright, a historian of drug use and policy at the University of North Florida.

And for certain ailments, patent medicines could be highly effective, he adds. “Quite apart from the placebo effect, a patent medicine might contain a drug like opium,” says Courtwright, whose book Dark Paradise: A History of Opiate Addiction in America, provides much of the original scholarship in this area. “If buyers took a spoonful because they had, say, a case of the runs, the medicine probably worked.” (After all, he points out, “opium is a constipating agent.”)

Patent medicines may not have been as safe as we would demand today, and they certainly didn’t live up to their claims of being panaceas, but when it came to coughs and diarrhea, they probably got the job done. “Those drugs are really famous, and they do speak to a time where markets were a little bit out of control,” Herzberg says. “But the vast majority of addiction during their heyday was caused by physicians.”

Handbills and pamphlets advertising glyco-heroin, 1900-1920, from the College of Physicians of Philadelphia’s collection of medical trade ephemera. (Historical Medical Library, College of Physicians of Philadelphia)

Marketing to Doctors

For 19th-century physicians, cures were hard to come by. But beginning in 1805, they were handed a way to reliably make patients feel better. That’s the year the German pharmacist Friedrich Sertürner isolated morphine from opium, creating the first “opiate” (the term opioid once referred to purely synthetic morphine-like drugs, Courtwright notes, before becoming a catchall covering even those drugs derived from opium).

Delivered by tablet, topically and, by mid-century, through the newly invented hypodermic syringe, morphine quickly made itself indispensable. Widespread use by soldiers during the Civil War also helped trigger the epidemic, as Erick Trickey reports in Smithsonian.com. By the 1870s, morphine became something of “a magic wand [doctors] could wave to make painful symptoms temporarily go away,” says Courtwright.

Doctors used morphine liberally to treat everything from the pain of war wounds to menstrual cramps. “It’s clear that that was the primary driver of the epidemic,” Courtwright says. And 19th century surveys Courtwright studied showed most opiate addicts to be female, white, middle-aged, and of “respectable social background”—in other words, precisely the kind of people who might seek out physicians with the latest tools.

Industry was quick to make sure physicians knew about the latest tools. Ads for morphine tablets ran in medical trade journals, Courtwright says, and, in a maneuver with echoes today, industry salespeople distributed pamphlets to physicians. The College of Physicians of Philadelphia’s Historical Medical Library has a collection of such “medical trade ephemera” that includes a 1910 pamphlet from The Bayer Company titled “The Substitute for the Opiates.”

The substitute? Heroin hydrochloride, at the time a new drug initially believed to be less addictive than morphine. Pamphlets from the Antikamnia Chemical Company, circa 1895, show an easy cheat-sheet catalog of the company’s wares, from quinine tablets to codeine and heroin tablets.

(College of Physicians of Philadelphia's Historical Medical Library)

Physicians and pharmacists were the key drivers of a threefold increase in America’s per capita consumption of drugs like morphine during the 1870s and ’80s, Courtwright writes in a 2015 paper for the New England Journal of Medicine. But it was also physicians and pharmacists who ultimately helped bring the crisis back under control.

In 1889, Boston physician James Adams estimated that about 150,000 Americans were "medical addicts": those addicted through morphine or some other prescribed opiate rather than through recreational use such as smoking opium. Physicians like Adams began encouraging their colleagues to prescribe “newer, non-opiate analgesics,” drugs that did not lead to depression, constipation and addiction.

“By 1900, doctors had been thoroughly warned and younger, more recently trained doctors were creating fewer addicts than those trained in the mid-nineteenth century,” writes Courtwright.

This was a conversation between doctors, and between doctors and industry. Unlike today, drug makers did not market directly to the public, and they took pride in that contrast with the patent medicine manufacturers, Herzberg says. “They called themselves the ethical drug industry and they would only advertise to physicians.”

But that would begin to change in the early 20th century, driven in part by a backlash to the marketing efforts of the 19th century patent medicine peddlers.

"San Diego lynx bares its fangs vigorously when zoo veterinarian is near cage, vet says it acts this way because it fears his hypodermics," reads the first photo caption for this Librium advertisement. "Tranquil as a tabby," says the second. (LIFE Magazine)

Marketing to the Masses

In 1906, muckraking reports like Samuel Hopkins Adams’s helped drum up support for the Pure Food and Drug Act. That gave rise to what would become the Food and Drug Administration, as well as the notion that food and drug products should be labeled with their ingredients so consumers could make reasoned choices.

That idea shapes federal policy right up until today, says Jeremy Greene, a colleague of Herzberg’s and a professor of the history of medicine at Johns Hopkins University School of Medicine: “That path-dependent story is part of the reason why we are one of the only countries in the world that allows direct-to-consumer advertising," he says.

At the same time, in the 1950s and ’60s, pharmaceutical promotion became more creative, coevolving with the new regulatory landscape, according to Herzberg. As regulators have set out the rules of the game, he says, “Pharma has regularly figured out how to play that game in ways that benefit them.”

Though the tradition of eschewing direct marketing to the public continued, advertising in medical journals increased. So, too, did more unorthodox methods. Companies staged attention-grabbing gimmicks, such as Carter Products commissioning Salvador Dali to make a sculpture promoting its tranquilizer, Miltown, for a conference. Competitor Roche Pharmaceuticals invited reporters to watch as its tranquilizer Librium was used to sedate a wild lynx.

Other companies began taking their messaging straight to the press.

“You would feed one of your friendly journalists the most outlandishly hyped-up promise of what your drug could do,” Greene says. “Then there is no peer review. There is no one checking to see if it’s true; it’s journalism!” Greene and Herzberg have detailed how ostensibly independent freelance science journalists were actually on the industry payroll, penning stories about new wonder drugs for popular magazines long before native advertising became a thing.

One prolific writer, Donald Cooley, wrote articles with headlines such as “Will Wonder Drugs Never Cease!” for magazines like Better Homes and Gardens and Cosmopolitan. “Don’t confuse the new drugs with sedatives, sleeping pills, barbiturates or a cure,” Cooley wrote in an article titled “The New Nerve Pills and Your Health.” “Do realize they help the average person relax.”

As Herzberg and Greene documented in a 2010 article in the American Journal of Public Health, Cooley was actually one of a stable of writers commissioned by the Medical and Pharmaceutical Information Bureau, a public relations firm, working for the industry. In a discovery Herzberg plans to detail in an upcoming book, it turns out there is “a rich history of companies knocking at the door, trying to claim that new narcotics are in fact non-addictive” and running advertisements in medical trade journals that get swatted down by federal authorities.

A 1932 ad in the Montgomery Advertiser, for instance, teases a new “pain relieving drug, five times as potent as morphine, as harmless as water and with no habit forming qualities.” This compound, “di-hydro-mophinone-hydrochlorid” [sic], is better known by the brand name Dilaudid, and it is most definitely habit-forming, according to Dr. Caleb Alexander, co-director of the Center for Drug Safety and Effectiveness at Johns Hopkins.

And while it’s not clear if the manufacturer truly believed it was harmless, Alexander says it illustrates the danger credulity presents when it comes to drug development. “If it sounds too good to be true, it probably is,” he says. “It is this sort of thinking, decades later, that has driven the epidemic."

A selection of contemporary ads for painkillers.

It wasn’t until 1995, when Purdue Pharma introduced OxyContin, that one of these attempts succeeded, says Herzberg. “OxyContin passed because it was claimed to be a new, less-addictive type of drug, but the substance itself had been swatted down repeatedly by authorities since the 1940s,” he says. OxyContin is simply oxycodone, developed in 1917, in a time-release formulation that Purdue argued allowed a single dose to last 12 hours, mitigating the potential for addiction.

Ads targeting physicians bore the tagline, “Remember, effective relief just takes two.”

“If OxyContin had been proposed as a drug in 1957, authorities would have laughed and said no,” Herzberg says.

Captivating the Consumer

In 1997, the FDA changed its advertising guidelines to open the door to direct-to-consumer marketing of drugs by the pharmaceutical industry. There were a number of reasons for this reversal of more than a century of practice, Greene and Herzberg say, from the ongoing ripples of the Reagan-era wave of deregulation, to the advent of the “blockbuster” pharmaceutical, to advocacy by AIDS patients’ rights groups.

The consequences were profound: a surge of industry spending on print and television advertising promoting non-opioid drugs to the public, which peaked at $3.3 billion in 2006. And while ads for opioid drugs were typically not shown on television, Greene says the cultural and political shifts that made direct-to-consumer advertising possible also changed the reception of the industry’s persistent pushing of opioids.

Once again, it was not the public but physicians who were the targets of opioid marketing, and the marketing was often quite aggressive. The advertising campaign for OxyContin, for instance, was in many ways unprecedented.

Purdue Pharma provided physicians with starter coupons that gave patients a free seven- to 30-day supply of the drug. The company’s sales force—which more than doubled in size from 1996 to 2000—handed doctors OxyContin-branded swag, including fishing hats and plush toys. A music CD was distributed with the title “Get in the Swing with OxyContin.” Prescriptions of OxyContin for non-cancer-related pain boomed from 670,000 written in 1997 to 6.2 million in 2002.

But even this aggressive marketing campaign was in many ways just the smoke. The real fire, Alexander argues, was a behind-the-scenes effort to establish a more lax attitude toward prescribing opioid medications generally, one which made regulators and physicians alike more accepting of OxyContin.

“When I was in residency training, we were taught that one needn’t worry about the addictive potential of opioids if a patient had true pain,” he says. Physicians were cultivated to overestimate the effectiveness of opioids for treating chronic, non-cancer pain, while underestimating the risks, and Alexander argues this was no accident.

Purdue Pharma funded more than 20,000 educational programs designed to promote the use of opioids for chronic pain other than cancer pain, and provided financial support for groups such as the American Pain Society. That society, in turn, launched a campaign calling pain “the fifth vital sign,” which helped create the perception of a medical consensus that opioids were underprescribed, not overprescribed.

.....

Are there lessons that can be drawn from all this? Herzberg thinks so, starting with the understanding that “gray area” marketing is more problematic than open advertising. People complain about direct-to-consumer advertising, but if there must be drug marketing, “I say keep those ads and get rid of all the rest," he says, "because at least those ads have to tell the truth, at least so far as we can establish what that is.”

Even better, Herzberg says, would be to ban the marketing of controlled narcotics, stimulants and sedatives altogether. “This could be done administratively with existing drug laws, I believe, based on the DEA’s power to license the manufacturers of controlled substances.” The point, he says, would not be to restrict access to such medications for those who need them, but to subtract “an evangelical effort to expand their use.”

Another lesson from history, Courtwright says, is that physicians can be retrained. If physicians in the late 19th century learned to be judicious with morphine, physicians today can relearn that lesson with the wide array of opioids now available.

That won’t fix everything, he notes, especially given the vast black market that did not exist at the turn of the previous century, but it’s a proven start. As Courtwright puts it: addiction is a highway with a lot of on-ramps, and prescription opioids are one of them. If we remove the billboards advertising that on-ramp, maybe we can reduce, if not eliminate, the number of travelers.

“That’s how things work in public health,” he says. “Reduction is the name of the game.”
