
Lost Footage of One of the Beatles' Last Live Performances Found in Attic

Smithsonian Magazine

More than 50 years after the beginning of Beatlemania, it seems that every recorded moment the Beatles spent together between forming in 1960 and dissolving in 1970 has been archived, restored, remastered and remastered again. But one long-lost Beatles performance recently resurfaced: a 92-second clip that shows the Fab Four playing their song “Paperback Writer” on a 1966 episode of the British TV program “Top of the Pops.”

The Press Association reports that the Beatles’ appearance on the show was believed to be lost to history, since back in the 1960s, the BBC was not as fastidious about recording and archiving its programs. But in the days before on-demand streaming or even VCR recording, music enthusiast David Chandler used his 8-millimeter wind-up camera to record the Beatles’ June 16, 1966 “Top of the Pops” appearance. Chandler gave the film to the television archive organization Kaleidoscope, which is trying to track down lost bits of the U.K.’s broadcast history.

Gianluca Mezzofiore at CNN reports that the film reel had sat in Chandler’s attic for more than 50 years until news broke this spring that a collector in Mexico had found an 11-second clip of the performance.

That find was considered significant: it’s the band’s only live “Top of the Pops” appearance (the show aired pre-recorded songs in previous years). The clip also captured the Beatles as their time on a tour bus came to a close. Later that summer, the Fab Four played their last commercial gig ever at Candlestick Park in San Francisco before becoming a studio band. (They did, however, play a final surprise show on a London rooftop in 1969.)

“[I]f you’re a Beatles fans, it’s the holy grail,” Kaleidoscope C.E.O. Chris Perry told the BBC’s Colin Paterson after the 11-second find. “People thought it was gone forever.”

He’s even more stunned by the longer clip. “Kaleidoscope thought finding 11 seconds of ‘Paperback Writer’ was incredible, but to then be donated 92 seconds—and nine minutes of other 1966 Top of the Pops footage was phenomenal,” he says in a statement.

The raw film Chandler captured is silent. That’s why Kaleidoscope worked to remaster the film, enhance the footage and sync it with audio of the song. The restored clip will debut at Birmingham City University on Saturday during a day-long event celebrating its discovery.

A little over a year ago, Kaleidoscope officially launched a hunt to find the U.K.’s top 100 missing television shows, surveying 1,000 television professionals, academics, journalists and TV nerds to determine what shows they’d most like to see recovered. At the top of the list were lost episodes of “Doctor Who,” while missing performances from “Top of the Pops,” which aired from 1964 until 2006, came in as the second most wanted. So far, the BBC reports, Kaleidoscope has recovered at least 240 musical performances, including Elton John singing “Rocket Man” on “Top of the Pops” in 1972.

“These lost episodes really can end up in the most unusual of places and people might not even know they have them,” Perry said in a statement released when the Kaleidoscope hunt for lost-to-history shows began. In this case, it’s probably best to ignore the Beatles’ advice: If you have vintage film stored somewhere in your attic, don’t let it be.

More Than Half of All Coffee Species Are at Risk of Extinction

Smithsonian Magazine

Most popular coffee blends derive from either the Arabica or Robusta bean, but as Somini Sengupta explains for The New York Times, these strains are just two of the world’s 124 wild coffee species. Although the majority of these varieties are neither cultivated nor consumed, the genetic diversity they represent could be the key to preserving your morning cup of joe—especially as climate change and deforestation threaten to eradicate the beloved source of caffeine.

A pair of papers published in Science Advances and Global Change Biology place the potential coffee crisis in perspective, revealing that 75 of Earth’s wild coffee species, or some 60 percent, are at risk of extinction. The Arabica bean, a native Ethiopian species used to make most high-quality brews, is one such threatened species: According to Helen Briggs of BBC News, the team behind the Global Change Biology study found that Arabica’s population could fall by around 50 percent by 2088.

Arabica beans are at the core of rich, flavorful blends including Javan coffee, Ethiopian Sidamo and Jamaican Blue Mountain. Comparatively, Adam Moolna writes for the Conversation, Robusta has a harsher taste and is most often used in instant blends. Interestingly, Arabica actually originates from Robusta, which crossed with a species known as Coffea eugenioides to create the hybrid bean.

Genetic interbreeding may be the best way to save commercial coffee species. As Helen Chadburn, a species conservation scientist at the Kew Royal Botanic Gardens and co-author of the Science Advances study, tells Popular Mechanics’ John Wenz, wild species carry “genetic traits”—think drought tolerance and pest or disease resistance—“that may be useful for the development … of our cultivated coffees.”

It’s also possible that experimenting with different types of wild coffee could yield tasty new brews. Chadburn adds, “Some other coffee species are naturally low in caffeine, or have an excellent (and unusual) flavor.”

There is a litany of obstacles to coffee conservation. In Madagascar and Tanzania, for example, some species are clustered in small areas, leaving them more vulnerable to a single extinction event. On a larger scale, habitat loss, land degradation, drought and deforestation also pose significant risks.

The main threat facing Arabica crops is climate change, according to Jeremy Hodges, Fabiana Batista and Aine Quinn of Bloomberg. Arabica requires a year-round temperature of 59 to 75 degrees Fahrenheit, as well as distinct rainy and dry seasons, in order to grow properly. When temperatures fall too low, the plants suffer frost damage; when temperatures rise, the quality of the coffee falls and yield per tree declines.

As global warming pushes temperatures upward, coffee farmers are being forced to innovate. Growers across Africa and South America are moving their crops to higher, cooler ground, but as Eli Meixler reports for Time, this may not be enough to save the Arabica bean—particularly in Ethiopia, where up to 60 percent of the area used for coffee cultivation could become unsuitable by century’s end.

Maintaining wild coffee species in seed banks or nationally protected forests could also prove essential to the caffeinated drink’s survival. Unfortunately, The New York Times’ Sengupta notes, the researchers found that just over half of wild coffee species are held in seed banks, while two-thirds grow in national forests. Even if scientists can boost the percentage of coffee seeds stored in seed banks, The Conversation’s Moolna points out that these samples don’t hold up in storage as well as crops such as wheat or maize.

Overall, the two new studies present a dire vision of coffee’s future—or lack thereof. As Aaron Davis, a Kew researcher who co-authored both papers, tells Daily Coffee News’ Nick Brown, in terms of sustainability and conservation efforts, the coffee sector is around 20 to 30 years behind other agricultural industries. As coffee yields shrink, Lauren Kent adds for CNN, consumers may notice their daily caffeine boost becoming both more expensive and less palatable.

Coffee isn’t completely out of the game yet: According to Moolna, conservation focused on maintaining genetic diversity and sustaining species in their native environments, rather than solely in collections such as seed banks, could save the drink from extinction. Still, if you’re a coffee fan, you may want to stock up on your favorite roasts sooner rather than later.

Shark Repellent: It’s Not Just For Batman Anymore

Smithsonian Magazine

Holy sardines! It’s a still from the 1966 film Batman

Every superhero would be wise to heed the lessons of the Caped Crusader, as explored below in the first of our series on shark-related patents and designs.

Today we look at shark repellent, the most famous of which was seen in the exciting opening of the original Batman film –that’s with Adam West not Michael Keaton– when the Caped Crusader is attacked by a shark while trying to intercept a boat with a helicopter – I’m sorry, Batcopter. Pretty typical Batman stuff, really. His first solution? Punch the shark – sorry, Batpunch the shark. The shark doesn’t give up as easily as the average cartoonish henchman, so Batman tries plan B: Bat shark repellent. It works. The shark falls into the ocean and EXPLODES. I honestly didn’t see that coming.

Well, it turns out that shark repellent is real, although I’m not sure it has been bat-weaponized into a convenient aerosol bomb. So unfortunately, it looks less like this:

Thankfully, Batman clearly labels all his Bat Sprays so this image is pretty straightforward. A still from the 1966 film Batman

And more like this:

U.S. patent no. 2,458,540 for “a composition and device for discouraging the predatory intentions of carnivorous fish” aka SHARK REPELLENT (image: google patents)

It probably won’t surprise you to hear that it’s not quite as effective as the explosive bat spray. (Correction: The Joker had rigged the shark to explode, as villains are wont to do.)

Real shark repellent was first developed during World War II in an effort to help save the lives of seamen and pilots who had to await rescue in open water. The patent for “shark repellent” was issued to a team of American chemists –Richard L. Tuve, John M. Fogelberg, Frederic E. Brinnick, and Horace Stewart Spring– in 1949. Typically, these patent applications are pretty dry, but this one introduces the invention with a surprisingly vivid description of the problem faced by soldiers during the war:

“Since the beginning of the war with its submarine and air activity, numerous occasions have arisen in which men have been forced to swim for their lives. Our armed services and merchant marine have been helpful by providing the men with equipment to help them stay afloat. This phase of the problem or, rather, the equipment long ago reached a point of development where remaining afloat for extended periods offered little difficulty. In cold Atlantic waters, the greatest menace has been the cold. However, in the warm Pacific Ocean and the South Atlantic, a different menace arises for the waters are alive with carnivorous fish. The weakened condition of wounded men cast into the water puts them at a distinct disadvantage in trying to fight off sharks and barracuda which are attracted by their blood.”

Their design is a small chemical disk in a waterproof package that can be attached to a life vest. In the event that someone is stranded at sea, the disk can be exposed to seawater, which will activate the chemicals to “cast a protective veil of a chemical material around the swimmer.” Those chemicals consist primarily of copper acetate, which is safe for the swimmer but has proven so distasteful to sharks that they’ll ignore raw meat floating in a pool of the mixture. It approximates the odor of dead shark – the only thing that’s been proven to repel the carnivorous fish.

The inventors had the good of all humanity in mind and specified that the deterrent could be used by any world government without the payment of royalties. While no shark repellent is foolproof, early tests of the 1949 repellent showed that the copper mixture was 72 to 96 percent effective. Later tests showed that maybe it wasn’t so effective. Work continued.

More recently, researchers have been working on a more effective shark repellent that is literally derived from a distilled essence of dead shark and has proven effective on a number of species. In 2001, chemical engineer Eric Stroud started the company Shark Defense to refine an array of chemical and electrochemical shark deterrents, such as shark-resistant sunscreen and fishing hooks, and the company hopes to someday offer shark-repellent fishing nets and other products to protect boats and submarines.

Although advancements have been made, the perfect shark repellent continues to elude scientists. So if you’re planning to watch all of Shark Week in situ, I’d recommend getting to work on a weaponized Bat Spray.

Portrait of Mrs. Karpeles (Frau K.)

Hirshhorn Museum and Sculpture Garden

Why 'Paradise Lost' Is Translated So Much

Smithsonian Magazine

"Paradise Lost," John Milton's 17th-century epic poem about sin and humanity, has been translated more than 300 times into at least 57 languages, academics have found.

“We expected lots of translations of 'Paradise Lost,'" literature scholar Islam Issa tells Alison Flood of the Guardian, "but we didn’t expect so many different languages, and so many which aren’t spoken by millions of people."

Issa is one of the editors of a new book called Milton in Translation. The research effort led by Issa, Angelica Duran and Jonathan R. Olson looks at the global influence of the English poet's massive composition in honor of its 350th anniversary. Published in 1667 after a blind Milton dictated it, "Paradise Lost" follows Satan's corruption of Adam and Eve, painting a parable of revolution and its consequences.

Milton himself knew these concepts intimately—he was an active participant in the English Civil War that toppled and executed King Charles I in favor of Oliver Cromwell's Commonwealth.

These explorations of revolt, Issa tells Flood, are part of what makes "Paradise Lost" maintain its relevance to so many people around the world today. The translators who adapt the epic poem to new languages are also taking part in its revolutionary teachings, Issa notes. One of the best examples is when Yugoslav dissident Milovan Djilas spent years translating "Paradise Lost" painstakingly into Serbo-Croatian on thousands of sheets of toilet paper while he was imprisoned. The government banned the translation, along with the rest of Djilas' writing.

That wasn't the first time a translation was banned—when "Paradise Lost" was first translated into German, it was instantly censored for writing about Biblical events in "too romantic" a manner. Just four years ago, a bookstore in Kuwait was apparently shut down for selling a translation of Milton's work, though according to the owner, copies of “Paradise Lost” remained available at Kuwait University's library.

As the world becomes increasingly globalized, expect Milton's seminal work to continue to spread far and wide. The researchers found that more translations of "Paradise Lost" have been published in the last 30 years than in the 300 years before that.

Massachusetts - Cultural Destinations

Smithsonian Magazine

Isabella Stewart Gardner Museum
This jewel of a museum is housed in a 15th-century Venetian-style palace surrounding a verdant courtyard. Works by Rembrandt, Michelangelo, Degas, Titian and others share the space with the best in decorative and contemporary arts. The museum also features concerts every Sunday, September through May.

Plimoth Plantation
A living museum near present-day Plymouth, Plimoth Plantation interprets the colonial village as it was in 1627, seven years after the Mayflower’s arrival. At the Wampanoag Homesite, learn about the culture of the Wampanoag, who have lived in southeastern New England for more than 12,000 years. Climb aboard the Mayflower II, a full-scale reproduction of the famous ship. And at the Nye Barn, take a gander at heritage breeds of livestock from around the world, including Kerry cattle, and Arapawa Island goats.

Old Sturbridge Village
Experience life in an 1830s New England village at this interpretive outdoor museum in central Massachusetts. Visitors can tour more than 40 original buildings and 200 acres of grounds, all meticulously maintained to recreate early American village life.

Whaling Museum (New Bedford)
"Moby Dick" fans take note. In 1907, the Old Dartmouth Historical Society founded the whaling museum to tell the story of whaling and of New Bedford, once the whaling capital of the world. The museum holds an extensive collection of artifacts and documents of the whaling industry and features contemporary exhibits on whales and human interaction with the sea mammals.

Harvard University and Massachusetts Institute of Technology
These two venerable institutions have shaped the city of Cambridge and together offer a vacation’s worth of sightseeing. Of Harvard’s many respected museums, the Fogg Art Museum, with its collection of European and American paintings, prints and photography, is a popular favorite. And Harvard’s Arnold Arboretum, designed by landscape architect Frederick Law Olmsted, is a wonderful place to spend a sunny morning or afternoon. For the more science- and technology-minded, the MIT Museum offers exhibits on robotics, holography and more.

Kennedy Library and Museum
The presidency of John F. Kennedy lasted only 1,000 days but left an indelible mark on American history and culture. This stunning museum is the official repository for all things Camelot.

Salem
More than 150 people were arrested and imprisoned during the witch-hunt that led to the infamous witch trials of 1692 and 1693. Of them, 29 were convicted and 19 hanged. Others died in prison. Learn about this dramatic moment in American history and enjoy the present charms of this picturesque New England town. To see both Salem and Boston in one day, hop aboard the Nathaniel Bowditch, which offers eight round-trips daily between the two cities.

National Historical Park (Lowell)
The exhibits and grounds here chronicle the shift from farm to factory, the rise of female and immigrant labor, as well as the industrial technology that fueled these changes. Housed in the restored former textile mill of the Boott Manufacturing Company, the park’s Boott Cotton Mills Museum features a 1920s weave room whose 88 power looms generate a deafening clatter (ear plugs provided). Find out what it was like to be a "Mill Girl" at the heart of the United States’ industrial revolution. Nearby is a cluster of lively art museums and galleries, including the New England Quilt Museum and the Revolving Museum.

Lighthouse (Boston)
Built in 1716, it was the first lighthouse in North America and is the only one in the U.S. that has not been automated. The second-oldest lighthouse is on Martha’s Vineyard.

Faneuil Hall
Built as a gift to the city of Boston in 1742 by Peter Faneuil, the city’s richest merchant, the hall served as a central market as well as a platform for political and social change. Colonists first protested the Sugar Act here in 1764, establishing the doctrine of no taxation without representation. Samuel Adams rallied Bostonians to independence from Britain, George Washington celebrated the first birthday of the new nation, and Susan B. Anthony spoke out for civil rights, all at Faneuil Hall. In 1826, the hall was expanded to include Quincy Market. Today, shops and restaurants fill the bustling site, which attracts 18 million visitors a year.

Portrait of Egon Wellesz

Hirshhorn Museum and Sculpture Garden

Five Things to Know About Roger Bannister, the First Person to Break the 4-Minute Mile

Smithsonian Magazine

Roger Bannister, the first person to break the 4-minute mile, died in Oxford on Saturday at age 88, the Associated Press reports.

More than 60 years ago, back on a cinder track at Oxford University's Iffley Road Stadium in 1954, Bannister completed four laps in 3:59.4, a record-breaking performance that many believed was not humanly possible. The image of the exhausted Bannister with his eyes closed and mouth agape appeared on the front page of newspapers around the world, a testament to what humankind could achieve.

“It became a symbol of attempting a challenge in the physical world of something hitherto thought impossible,” Bannister said as the 50th anniversary of the run approached, according to the AP. “I'd like to see it as a metaphor not only for sport, but for life and seeking challenges.”

Here are five things you should know about the iconic athlete and his stunning mid-century run.

He Sought the Record Due to Olympic Failure

Frank Litsky and Bruce Weber at The New York Times report that Bannister began running to avoid bullies and the air raid sirens during the WWII blitz of London.

The tall, lanky blond also happened to be book smart, and he used his intellect to land an athletic scholarship to Oxford University. There, Bannister caught the eye of coaches while serving as a pacemaker for a mile race in 1947. While pacemakers generally drop out before the end of the race, Bannister continued on, reportedly beating the field by 20 yards, AP sportswriter Chris Lehourites recounts.

Though Bannister quickly became one of the U.K.’s most promising track stars, he remained a true student-athlete. History.com reports that he skipped running the 1500 meters at the 1948 London Olympics so he could concentrate on his studies. In 1952, he competed at the Helsinki Olympics, coming in fourth in the 1500 meters. That performance was roundly criticized by the British press. Afterward, he resolved to break the 4-minute mile, which several other runners were chasing. Thanks to insights he gleaned from medical school, he created a specially tailored training regimen to prepare himself for his barrier-breaking run on May 6, 1954.

Track singlet worn by Englishman Roger Bannister (b. 1929) at the 1954 Commonwealth Games, Vancouver, Canada. Bannister barely beat Landy, finishing at 3:58.8, less than a second ahead of Landy at 3:59.6. (National Museum of American History)

Breaking the Record Wasn’t His Most Famous Run

As it so happens, Bannister’s record lasted only 46 days before Australian runner John Landy shaved 1.5 seconds off his time at a meet in Turku, Finland. Michael McGowan at The Guardian reports that the back-to-back record-breaking performances set the stage for one of running’s most incredible showdowns when, in August of 1954, Bannister and Landy faced off at the British Empire and Commonwealth Games at the Vancouver Exhibition (renamed the Pacific National Exhibition in 1946).

During the race, Landy led with Bannister at his heels. At the final turn, however, Landy turned and looked over his left shoulder to find out where Bannister was. At that moment Bannister surpassed Landy on the right, winning the race. Both men finished what came to be known as the Miracle Mile in under 4 minutes, the first time that had ever happened.

Vancouver sculptor Jack Harman created a statue of the runners at that moment in the race, which still stands outside the exhibition grounds. In the work, Landy is looking over his shoulder at Bannister. McGowan reports that Landy joked that while Lot’s wife in the Bible was turned into a pillar of salt for looking back, “I am probably the only one ever turned into bronze for looking back.”

Bannister Retired From Running Soon After Setting the Record

Though he was chosen as Sports Illustrated’s first "Sportsman of the Year" and could have continued on with a professional running career, Bannister shocked the world by retiring from running at the end of that summer after winning the 1500 meters at the European Championships in Bern, Switzerland, reports McGowan.

“As soon as I ceased to be a student, I always knew I would stop being an athlete,” he once said, as Adam Addicott at The Sportsman recounts. That fall he began his rounds as a doctor.

Bannister went on to have a long career as a neurologist, serving for many years as the director of the National Hospital for Nervous Diseases in London.

He Fought Against Drugs in Sports

Bannister, who became Sir Roger Bannister after being knighted in 1975, never lost his interest in athletics. Between 1971 and 1974, he served as the chairman of the British Sports Council, and between 1976 and 1983, he served as the president of the International Council of Sport Science and Physical Recreation.

But most significantly, Addicott reports, as chair of the Sports Council he gathered together a group of researchers to develop the first test for anabolic steroids, a substance that Bannister and many others believed the Soviet Union and Eastern Bloc nations were using to juice their athletes. “I foresaw the problems in the 1970s and arranged for the group of chemists to detect the first radioimmunoassay test for anabolic steroids,” he told Mike Wise at The Washington Post in 2014. “The only problem was it took a long time for the Olympic and other authorities to introduce it on a random basis. I foresaw it being necessary.”

Addicott reports that in recent years Bannister remained a vocal anti-doping advocate and expressed "extreme sadness" at its prevalence in sport today.

"I hope that Wada (World Anti-Doping Agency) and Usada (US Anti-Doping Angency) will be successful in bringing this to an end," he said in an interview with ITV just last month.

Bannister’s Record Is Long Gone

Kevin J. Delaney at Quartz reports that Bannister’s record did not live much past the summer of 1954. Since then, 500 American men alone have broken the 4-minute mark, including 21 who have done so since the beginning of this year.

The current record to beat is 3:43.13, which was set by 24-year-old Moroccan runner Hicham El Guerrouj in 1999. Delaney reports that with the right body type and training, some models predict a 3:39 mile is theoretically achievable in the future.

For women, no athlete has broken the 4-minute mile...yet. Russian Svetlana Masterkova currently holds the world record in the race, ripping out a time of 4:12.56 at the Weltklasse Grand Prix track-and-field meet in Zurich, Switzerland, in 1996.

Top 10 Nation-Building Real Estate Deals

Smithsonian Magazine

Despite the recent unpleasantness in the real estate market, many still hold (or once held, or will hold again) to the axiom of the late millionaire Louis Glickman: “The best investment on earth is earth.” This applies for nations, too. Below are ten deals in which the United States acquired territory, ranked in order of their consequences for the nation. Feel free to make bids of your own. (Just to be clear, these are deals, or agreements; annexations and extralegal encroachments don’t apply.)

1. The Treaty of Paris (1783): Before the United States could start acquiring real estate, it had to become the United States. With this deal, the former 13 colonies received Great Britain’s recognition as a sovereign nation. Included: some 830,000 square miles formerly claimed by the British, the majority of it—about 490,000 square miles—stretching roughly from the western boundaries of the 13 new states to the Mississippi. So the new nation had room to grow—pressure for which was already building.

2. The Treaty of Ghent (1814): No land changed hands under this pact, which ended the Anglo-American War of 1812 (except for the Battle of New Orleans, launched before Andrew Jackson got word that the war was over). But it forced the British to say, in effect: OK, this time we really will leave. Settlement of the former Northwest Territory could proceed apace, leading to statehood for Indiana, Illinois, Michigan, Wisconsin and Minnesota, the eastern part of which was in the territory. (Ohio had become a state in 1803.)

3. The Louisiana Purchase (1803): It doubled the United States’ square mileage, got rid of a foreign power on its western flank and gave the fledgling nation control of the Mississippi. But the magnitude of this deal originated with our counterparty, the French. The Jefferson administration would have paid $10 million just for New Orleans and a bit of land east of the Mississippi. Napoleon asked: What would you pay for all of Louisiana? (“Louisiana” being the heart of North America: from New Orleans north to Canada and from the Mississippi west to the Rockies, excluding Texas.) Jefferson’s men in Paris, James Monroe and Robert Livingston, exceeded their authority in closing a deal for $15 million. The president did not complain.

4. The Alaska Purchase (1867): Russia was a motivated seller: the place was hard to occupy, let alone defend; the prospect of war in Europe loomed; business prospects looked better in China. Secretary of State William H. Seward was a covetous buyer, but he got a bargain: $7.2 million for 586,412 square miles, about 2 cents an acre. Yes, Seward’s alleged folly has been vindicated many times over since Alaska became the gateway to Klondike gold in the 1890s. He may have been visionary, or he may have been just lucky. (His precise motives remain unclear, historian David M. Pletcher writes in The Diplomacy of Involvement: American Economic Expansion Across the Pacific, because “definitive written evidence” is lacking.) The secretary also had his eye on Greenland. But we’re getting ahead of ourselves.

Image by Newscom. With the Treaty of Paris in 1783, the former 13 colonies received Great Britain's recognition as a sovereign nation along with some 830,000 square miles.

Image by Bettmann / Corbis. The United States expanded from the original 13 colonies in a series of deals that began in 1783 with the Treaty of Paris.

Image by Newscom. Although no land changed hands under the Treaty of Ghent in 1814, it forced the British to leave the Northwest Territory to allow settlement. This led to statehood for Indiana, Illinois, Michigan, Wisconsin and Minnesota.

Image by Bettmann / Corbis. The Louisiana Purchase in 1803 doubled the United States' square mileage, got rid of a foreign power on its western flank and gave the fledgling nation control of the Mississippi.

Image by Library of Congress. Secretary of State William H. Seward bargained with Russia for the sale of Alaska in 1867. Seward bought 586,412 square miles for $7.2 million, about 2 cents an acre. What was once known as Seward's Folly has proven to be quite valuable with the discovery of gold and oil in the region.

Image by Bettmann / Corbis. In order to keep the Germans from controlling shipping lanes in the Atlantic and the Caribbean, the Wilson administration signed the Virgin Islands Purchase in 1917. The U.S. paid Denmark $25 million in exchange for St. Thomas, St. Croix and St. John.

5. The Treaty of Guadalupe Hidalgo (1848): The Polk administration negotiated from strength—it had troops in Mexico City. Thus the Mexican-American War ended with the United States buying, for $15 million, 525,000 square miles in what we now call the Southwest (all of California, Nevada and Utah, and parts of Wyoming, Colorado, Arizona and New Mexico). Mexico, though diminished, remained independent. The United States, now reaching the Pacific, began to realize its Manifest Destiny. On the other hand, the politics of incorporating the new territories into the nation helped push the Americans toward civil war.

6. The Oregon Treaty (1846): A victory for procrastination. The United States and Great Britain had jointly occupied 286,000 square miles between the northern Pacific and the Rockies since 1818, with the notion of sorting things out later. Later came in the early 1840s, as more Americans poured into the area. The 1844 presidential campaign featured the battle cry “Fifty-four forty or fight!” (translation: “We want everything up to the latitude of Alaska’s southern maritime border”), but this treaty fixed the northern U.S. border at the 49th parallel—still enough to bring present-day Oregon, Washington and Idaho and parts of Montana and Wyoming into the fold.

7. The Adams-Onís Treaty (1819): In the mother of all Florida real estate deals, the United States bought 60,000 square miles from Spain for $5 million. The treaty solidified the United States’ hold on the Atlantic and Gulf coasts and pushed Spanish claims in the North American continent to west of the Mississippi (where they evaporated after Mexico won its independence in 1821… and then lost its war with the United States in 1848; see No. 5).

8. The Gadsden Purchase (1853): This time, the United States paid Mexico $10 million for only 30,000-odd square miles of flat desert. The intent was to procure a route for a southern transcontinental railroad; the result was to aggravate (further) North-South tensions over the balance between slave and free states. The railroad wasn’t finished until 1881, and most of it ran north of the Gadsden Purchase (which now forms the southern parts of New Mexico and Arizona).

9. The Virgin Islands Purchase (1917): During World War I, the Wilson administration shuddered to think: If the Germans annex Denmark, they could control shipping lanes in the Atlantic AND the Caribbean. So the Americans struck a deal with the Danes, paying $25 million for St. Thomas, St. Croix and St. John. Shipping continued; mass tourism came later.

10. The Greenland Proffer (1946): The one that got away. The biggest consequence of this deal is that it never happened. At least since Seward’s day (see No. 4), U.S. officials had cast a proprietary eye toward our neighbor to the really far north. After World War II, the United States made it official, offering $100 million to take the island off Denmark’s administrative hands. Why? Defense. (Time magazine, January 27, 1947: “Greenland’s 800,000 square miles would make it the world’s largest island and stationary aircraft carrier.”) “It is not clear,” historian Natalia Loukacheva writes in The Arctic Promise: Legal and Political Autonomy of Greenland and Nunavut, “whether the offer was turned down... or simply ignored.” Greenland achieved home rule in 1979.

Your Tweets Can Predict When You’ll Get the Flu

Smithsonian Magazine

Simply by looking at geo-tagged tweets, an algorithm can track the spread of flu and predict which users are going to get sick. Image via Adam Sadilek, University of Rochester

In 1854, in response to a devastating cholera epidemic that was sweeping through London, British doctor John Snow introduced an idea that would revolutionize the field of public health: the epidemiological map. By recording instances of cholera in different neighborhoods of the city and plotting them on a map based on patients’ residences, he discovered that a single contaminated water pump was responsible for a great deal of the infections.

The map persuaded him—and, eventually, the public authorities—that the miasma theory of disease (which claimed that diseases spread via noxious gases) was false, and that the germ theory (which correctly claimed that microorganisms were to blame) was true. They put a lock on the handle of the pump responsible for the outbreak, signaling a paradigm shift that permanently changed how we deal with infectious diseases and thus sanitation.

The mapping technology is quite different, as is the disease, but there’s a certain similarity between Snow’s map and a new project conducted by a group of researchers led by Henry Kautz of the University of Rochester. By creating algorithms that can spot flu trends and make predictions based on keywords in publicly available geotagged tweets, they’re taking a new approach to studying the transmission of disease—one that could change the way we study and track the movement of diseases in society.

“We can think of people as sensors that are looking at the world around them and then reporting what they are seeing and experiencing on social media,” Kautz explains. “This allows us to do detailed measurements on a population scale, and doesn’t require active user participation.”

In other words, when we tweet that we’ve just been laid low by a painful cough and a fever, we’re unwittingly providing rich data for an enormous public health experiment, information that researchers can use to track the movement of diseases like flu in high resolution and real time.

Kautz’ project, called SocialHealth, has made use of tweets and other sorts of social media to track a range of public health issues—recently, they began using tweets to monitor instances of food poisoning at New York City restaurants by logging everyone who had posted geotagged tweets from a restaurant, then following their tweets for the next 72 hours, checking for mentions of vomiting, diarrhea, abdominal pain, fever or chills. In doing so, they detected 480 likely instances of food poisoning.

But as the season changes, it’s their work tracking the influenza virus that’s most eye-opening. Google Flu Trends has similarly sought to use Google searches to track the movement of flu, but the model greatly overestimated last year’s outbreak, perhaps because media coverage of flu prompted people to start making flu-related queries. Twitter analysis represents a new dataset with a few useful qualities—a higher geographic resolution and the ability to capture the movement of a user over time—that could yield better predictions.

To start their flu-tracking project, the SocialHealth researchers looked specifically at New York, collecting around 16 million geotagged public tweets per month from 600,000 users for three months’ time. Below is a time-lapse of one New York Twitter day, with different colors representing different frequencies of tweets at that location (blue and green mean fewer tweets, orange and red mean more):

To make use of all this data, his team developed an algorithm that determines if each tweet represents a report of flu-like symptoms. Previously, other researchers had simply done this by searching for keywords in tweets (“sick,” for example), but his team found that the approach leads to false positives: Many more users tweet that they’re sick of homework than that they’re actually feeling sick.

To account for this, his team’s algorithm looks for three words in a row (instead of one), and considers how often the particular sequence is indicative of an illness, based on a set of tweets they’d manually labelled. The phrase “sick of flu,” for instance, is strongly correlated with illness, whereas “sick and tired” is less so. Some particular words—headache, fever, coughing—are strongly linked with illness no matter what three-word sequence they’re part of.
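
To make the approach concrete, here is a minimal sketch in Python of how such a three-word-sequence filter might look. It is an illustration only, not the SocialHealth team's actual code: the trigram weights and symptom words below are invented stand-ins for the values the researchers learned from their hand-labelled tweets.

```python
# A rough sketch (not the SocialHealth team's code) of the trigram idea described
# above: a tweet is flagged as a flu report if it contains a strong symptom word,
# or if one of its three-word sequences carries a high "illness" weight.
# The weights and word lists here are invented for illustration only.

# Hypothetical trigram weights that would, in practice, be learned from manually
# labelled tweets: near 1.0 means a genuine illness report, near 0.0 means
# figurative use like "sick of homework."
TRIGRAM_WEIGHTS = {
    ("sick", "of", "flu"): 0.9,
    ("home", "with", "fever"): 0.8,
    ("sick", "and", "tired"): 0.1,
}

# Words so strongly tied to illness that they count in any context.
STRONG_SYMPTOM_WORDS = {"fever", "coughing", "headache"}


def trigrams(tokens):
    """Yield every consecutive three-word window in a token list."""
    for i in range(len(tokens) - 2):
        yield tuple(tokens[i:i + 3])


def reports_flu(tweet, threshold=0.5):
    """Return True if the tweet likely reports flu-like symptoms."""
    tokens = tweet.lower().split()
    if any(word in STRONG_SYMPTOM_WORDS for word in tokens):
        return True
    scores = [TRIGRAM_WEIGHTS.get(t, 0.0) for t in trigrams(tokens)]
    return max(scores, default=0.0) >= threshold


print(reports_flu("so sick of flu this week"))    # True
print(reports_flu("sick and tired of homework"))  # False
```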

Once these millions of tweets were coded, the researchers could do a few intriguing things with them. For starters, they looked at changes in flu-related tweets over time, and compared them with levels of flu as reported by the CDC, confirming that the tweets accurately captured the overall trend in flu rates. However, unlike CDC data, it’s available in nearly real-time, rather than a week or two after the fact.

But they also went deeper, looking at the interactions between different users—as represented by two users tweeting from the same location (the GPS resolution is about half a city block) within the same hour—to model how likely it is that a healthy person would become sick after coming into contact with someone with the flu. Obviously, two people tweeting from the same block 40 minutes apart didn’t necessarily meet in person, but the odds of them having met are slightly higher than two random users.

As a result, when you look at a large enough dataset of interactions, a picture of transmission emerges. They found that if a healthy user encounters 40 other users who report themselves as sick with flu symptoms, his or her odds of getting flu symptoms the next day increases from less than one percent to 20 percent. With 60 interactions, that number rises to 50 percent.
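
The two ingredients described above, counting co-located encounters with sick users and converting that count into a next-day risk, can be sketched roughly as follows. This is again an illustration under assumed data structures, not the researchers' code; the grid-cell size and the straight-line interpolation between the article's reported figures (under 1 percent at baseline, about 20 percent after 40 encounters, 50 percent after 60) are assumptions made for the example.

```python
# A rough sketch of the co-location idea: two users "interact" if they tweet from
# roughly the same half-block grid cell within the same hour. The risk anchors are
# the figures reported in the article; the interpolation between them is assumed.

from datetime import datetime, timedelta


def grid_cell(lat, lon, cell_deg=0.0005):
    """Snap coordinates to a coarse grid cell (on the order of half a city block)."""
    return (round(lat / cell_deg), round(lon / cell_deg))


def count_sick_encounters(user_tweets, sick_tweets, window=timedelta(hours=1)):
    """Count sick-user tweets posted from the same cell within the time window.

    Each tweet is a (timestamp, latitude, longitude) tuple; sick_tweets come
    from users whose posts a classifier flagged as flu reports.
    """
    encounters = 0
    for t_user, lat_u, lon_u in user_tweets:
        for t_sick, lat_s, lon_s in sick_tweets:
            same_place = grid_cell(lat_u, lon_u) == grid_cell(lat_s, lon_s)
            same_hour = abs(t_user - t_sick) <= window
            if same_place and same_hour:
                encounters += 1
    return encounters


def next_day_risk(encounters):
    """Interpolate next-day symptom risk between the article's reported figures."""
    anchors = [(0, 0.01), (40, 0.20), (60, 0.50)]  # (encounters, probability)
    if encounters >= anchors[-1][0]:
        return anchors[-1][1]
    for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
        if x0 <= encounters <= x1:
            return y0 + (y1 - y0) * (encounters - x0) / (x1 - x0)


me = [(datetime(2013, 1, 15, 12, 30), 40.7580, -73.9855)]
nearby_sick = [(datetime(2013, 1, 15, 12, 50), 40.7581, -73.9853)]
print(next_day_risk(count_sick_encounters(me, nearby_sick)))  # roughly 1.5 percent
```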

The team also looked at interactions on Twitter itself, isolating pairs of users who follow each other and calling them “friendships.” Even though many Twitter relationships exist only on the Web, some correspond to real-life interactions, and they found that a user who has ten friends who report themselves as sick is 28 percent more likely to become sick the next day. In total, using both of these types of interactions, their algorithm was able to predict whether a healthy person would get sick (and tweet about it) with 90 percent accuracy.

We’re still in the early stages of this research, and there are plenty of limitations: Most people still don’t use Twitter (yes, really) and even if they do, they might not tweet about getting sick.

But if this sort of system could be developed further, it’s easy to imagine all sorts of applications. Your smartphone could automatically warn you, for instance, if you’d spent too much time in the places occupied by people with the flu, prompting you to go home to stop putting yourself in the path of infection. An entire city’s residents could even be warned if it were on the verge of an outbreak.

Though we’re more than 150 years removed from John Snow’s disease-mapping breakthrough, it’s clear that there are still aspects of disease transmission we don’t fully understand. Now, as then, mapping the data could help yield the answers.

Five Books on World War I

Smithsonian Magazine

On the 11th hour of the 11th day of the 11th month of 1918, an armistice between Allied forces and Germany put an end to the fighting of what was then referred to as the Great War. President Woodrow Wilson declared November 11 of the following year Armistice Day. In 1938, an act of Congress made the day a legal holiday, and in 1954 that act was amended to create Veterans Day, to honor American veterans of all wars.

Journalist Adam Hochschild, author of To End All Wars (2011), an account of World War I from the perspective of both hawks and doves in Great Britain, provides his picks of books to read to better understand the conflict.

Hell’s Foundations (1992), by Geoffrey Moorhouse

Of the 84 British regiments that fought in the Gallipoli campaign in Turkey in 1915 and 1916, the Lancashire Fusiliers from Bury, in northern England, suffered the most casualties. The regiment lost 13,642 men in the war—1,816 in Gallipoli alone.

For journalist Geoffrey Moorhouse, the subject hit close to home. He grew up in the small mill town of Bury, and his grandfather had survived Gallipoli. In Hell’s Foundations, Moorhouse describes the town, its residents’ attitudes toward the war and the continued suffering of the soldiers who survived.

From Hochschild: A fascinating and unusual look at the war in microcosm, by showing its effects on one English town.

Testament of Youth (1933), by Vera Brittain

In 1915, Vera Brittain, then a student at the University of Oxford, enlisted as a nurse in the British Army’s Voluntary Aid Detachment. She saw the horrors of war firsthand while stationed in England, Malta and France. Wanting to write about her experiences, she initially set to work on a novel, but was discouraged by the form. She then considered publishing her actual diaries. Ultimately, however, she wrote cathartically about her life between the years 1900 and 1925 in a memoir, Testament of Youth. The memoir has been called the best-known book of a woman’s World War I experience, and is a significant work for the feminist movement and the development of autobiography as a genre.

From Hochschild: Brittain lost her brother, her fiancé and a close friend to the war, while working as a nurse herself.

Regeneration Trilogy, by Pat Barker

In the 1990s, British author Pat Barker penned three novels: Regeneration (1991), The Eye in the Door (1993) and The Ghost Road (1995). Though fictional, the series, about shell-shocked officers in the British army, is based, in part, on true-life stories. Barker’s character Siegfried Sassoon, for instance, was closely based on the real Siegfried Sassoon, a poet and soldier in the war, and Dr. W.H.R. Rivers was based on the actual neurologist of that name, who treated patients, including Sassoon, at the Craiglockhart War Hospital in Scotland. The New York Times once called the trilogy a “fierce meditation on the horrors of war and its psychological aftermath.”

From Hochschild: The finest account of the war in recent fiction, written with searing eloquence and a wide angle of vision that ranges from the madness of the front lines to the fate of war resisters in prison.

The Great War and Modern Memory (1975), by Paul Fussell

After serving as an infantry officer in World War II, Paul Fussell felt a kinship to soldiers of the First World War. Yet he wondered just how much he had in common with their experiences. “What did the war feel like to those whose world was the trenches? How did they get through this bizarre experience? And finally, how did they transform their feelings into language and literary form?” he writes in the afterword to the 25th anniversary edition of his monumental book The Great War and Modern Memory.

To answer these questions, Fussell went directly to firsthand accounts of World War I written by 20 or 30 British men who fought in it. It was from this literary perspective that he wrote The Great War and Modern Memory, about life in the trenches. Military historian John Keegan once called the book “an encapsulation of a collective European experience.”

From Hochschild: A subtle, superb examination of the literature and mythology of the war, by a scholar who was himself a wounded veteran of World War II.

The First World War (1998), by John Keegan

The title is simple and straightforward, and yet in and of itself poses an enormous challenge to its writer: to tell the full story of World War I. Keegan’s account of the war is, no doubt, panoramic. Its most commended elements include the historian’s dissections of military tactics, both geographical and technological, used in specific battles and his reflections on the thought processes of the world leaders involved.

From Hochschild: This enormous cataclysm is hard to contain in a single one-volume overview, but Keegan’s is probably the best attempt to do so.

The Blasphemous Geologist Who Rocked Our Understanding of Earth's Age

Smithsonian Magazine

On a June afternoon in 1788, James Hutton stood before a rock outcropping on Scotland’s western coast named Siccar Point. There, before a couple of other members of the Scottish Enlightenment, he staked his claim as the father of modern geology.

As Hutton told the skeptics who accompanied him there by boat, Siccar Point illustrated a blasphemous truth: the Earth was old, almost beyond comprehension.

Three years earlier, he’d unveiled two papers, together called "Theory of the Earth," at a pair of meetings of the Royal Society of Edinburgh. Hutton proposed that the Earth constantly cycled through disrepair and renewal. Exposed rocks and soil were eroded, and formed new sediments that were buried and turned into rock by heat and pressure. That rock eventually uplifted and eroded again, a cycle that continued uninterrupted.

“The result, therefore, of this physical enquiry,” Hutton concluded, “is that we find no vestige of a beginning, no prospect of an end.”

His ideas were startling at a time when most natural philosophers—the term scientist had not yet been coined—believed that the Earth had been created by God roughly 6,000 years earlier. The popular notion was that the world had been in a continual decline ever since the perfection of Eden. Therefore, it had to be young. The King James Bible even set a date: October 23, 4004 BC.

At Siccar Point, Hutton pointed to proof of his theory: the junction of two types of rock created at different times and by different forces. Gray layers of metamorphic rock rose vertically, like weathered boards stuck in the ground, and jutted into horizontal beds of red sandstone that had been deposited on top of them far later. The gray rock, Hutton explained, had originally been laid down long ago as horizontal layers of sediment, accumulating perhaps an inch a year. Over time, subterranean heat and pressure transformed the sediment into rock, and then some force caused the strata to buckle, fold and become vertical.

Here, he added, was irrefutable proof the Earth was far older than the prevailing belief of the time.

John Playfair, a mathematician who would go on to become Hutton's biographer with his 1805 book, Life of Dr. Hutton, accompanied him that day. “The mind seemed to grow giddy by looking so far back into the abyss of time; and whilst we listened with earnestness and admiration to the philosopher who was now unfolding to us the order and series of these wonderful events, we became sensible how much further reason may sometimes go than imagination may venture to follow,” he later wrote.

Hutton, born in 1726, never became famous for his theories during his life. It would take a generation before the geologist Charles Lyell and the biologist Charles Darwin would grasp the importance of his work. But his influence endures today.

An illustration of Hutton doing fieldwork, by artist John Kay. (Library of Congress)

"A lot of what is still in practice today in terms of how we think about geology came from Hutton," says Stephen Marshak, a geology professor at the University of Illinois who has made the pilgrimage to Siccar Point twice. To Marshak, Hutton is the father of geology.

Authors like Stephen Jay Gould and Jack Repcheck—who wrote a biography of Hutton titled The Man Who Found Time—credit him with freeing science from religious orthodoxy and laying the foundation for Charles Darwin’s theory of evolution.

"He burst the boundaries of time, thereby establishing geology's most distinctive and transforming contribution to human thought—Deep Time," Gould wrote in 1977.

Hutton developed his theory over 25 years, first while running a farm in eastern Scotland near the border with England and later in an Edinburgh house he built in 1770. There, one visitor wrote that "his study is so full of fossils and chemical apparatus of various kinds that there is barely room to sit down."

He was spared financial worries thanks to income from the farm and other ventures, and had no dependent family members, because he never married. Thus freed of most earthly burdens, he spent his days working in the study and reading. He traveled through Scotland, Wales and England, collecting rocks and surveying the geology. Through chemistry, he determined that rocks could not have precipitated from a catastrophe like Noah’s Flood, the prevailing view of previous centuries, otherwise they would be dissolved by water. Heat and pressure, he realized, formed rocks.

That discovery came with help from Joseph Black, a physician, chemist and the discoverer of carbon dioxide. When Hutton moved to Edinburgh, Black shared his love of chemistry, a key tool to understanding the effect of heat on rock. He deduced the existence of latent heat and the importance of pressure on heated substances. Water, for instance, stays liquid under pressure even when heated to a temperature that normally would transform it to steam. Those ideas about heat and pressure would become key to Hutton’s theory about how buried sediments became rock.

Black and Hutton were among the leading lights of the Royal Society of Edinburgh, along with Adam Smith, the economist and author of The Wealth of Nations, David Hume, the philosopher, Robert Burns, the poet, and James Watt, the inventor of the two-cylinder steam engine that paved the way for the Industrial Revolution.

Hutton's principle of uniformitarianism—that the present is the key to the past—has been a guiding principle in geology and all sciences since. Marshak notes that despite his insight, Hutton didn’t grasp all the foundations of geology. He thought, for example, that everything happened at a similar rate, something that does not account for catastrophic actions like mountain building or volcanic eruptions, which have shaped the Earth.

Unlike many of his contemporaries, Hutton never found fame during his life. But his portrait of an ever-changing planet had a profound effect. Playfair's book found favor with Charles Lyell, who was born in 1797, the year that Hutton died. Lyell's first volume of "Principles of Geology" was published in 1830, using Hutton and Playfair as starting points.

Charles Darwin brought a copy aboard the Beagle in 1832 and later became a close friend of Lyell after completing his voyages in 1836. Darwin’s On the Origin of Species owes a debt to Hutton’s concept of deep time and rejection of religious orthodoxy.

"The concept of Deep Time is essential. Now, we take for granted the Earth is 4.5 billion years old. Hutton had no way of knowing it was that kind of age. But he did speculate that the Earth must be very, very old," Marshak says. "That idea ultimately led Darwin to come up with his phrasing of the theory of evolution. Because only by realizing there could be an immense amount of time could evolution produce the diversity of species and also the record of species found in fossils."

"The genealogy of these ideas," he adds, "goes from Hutton to Playfair to Lyell to Darwin."

Reality plus drama equals "EMERGENCY!"

National Museum of American History

The pre-reality television show EMERGENCY! premiered on January 27, 1972. Health- and medical-themed programs such as the radio and television drama Dr. Kildare had long been popular, but EMERGENCY! broke new ground. Set in Los Angeles, EMERGENCY! paid great attention to detail as it told the stories of fictional paramedics and doctors as they went about their jobs saving lives. The show didn't just look real, it was actually quite close to the real thing. An important but little-known part of the story involves the equipment used by the series' actors.

Black and white photo of actors on set, mid-scene

In pitching the premise of the show, coproducer Jack Webb collaborated with the Los Angeles County Fire Department. The close connection between the production staff and emergency personnel became a hallmark of the show. Webb had portrayed Sergeant Joe Friday on Dragnet and produced Adam-12, police shows that strove to convey a sense of reality. Technical advisers included firefighters and paramedics who enhanced the reality of the show. An additional boost to authenticity came with the casting of actor Mike Stoker, who drove Engine 51. Stoker was a firefighter in Los Angeles before joining the cast, and he continued to work in that profession while the series aired and after it ended.

Photo of defibrillator in orange case

In 2000 the National Museum of American History received a donation of materials relating to EMERGENCY! from the Project 51 Committee, a group formed to preserve the legacy of this important program; the committee took its name from the show's Station 51. Some of these objects (helmets, shirts, and coats) are housed with other television costumes in our Culture and the Arts division. Medical-related objects came to the Medicine and Science division.

Two of the objects in the Medicine and Science division were used by actors Kevin Tighe and Randolph Mantooth who portrayed paramedics Roy De Soto and John Gage, respectively, on the show. One is a defibrillator, an electrical device used to shock a patient's heart back into a regular beating pattern (often after a heart attack). The other is a biophone, a portable radio and data transmitter used by paramedics to talk to doctors in the hospital and transmit information, such as electrocardiograms. Although these two units are non-operative, both objects were manufactured by companies that provided operable equipment to real paramedics.

Photo of Biophone in case

Photo of label

These objects illustrate how producers Webb and Robert Cinader aimed to make a program where the lines between reality and drama intersected. Their goal was not simply to entertain, but also to educate the public about life-saving measures. Although the stories presented in the episodes were scripted, they depicted real dangers faced by firefighters and paramedics. The series motivated many people to embark upon careers in the emergency medical field. The Atlanta Constitution reported that after the series premiere, Los Angeles County increased its paramedic units from three to fifteen and credited the show for that increase. One of our colleagues here at the Museum became an emergency medical technician (EMT) because she watched EMERGENCY! It would be interesting to learn how many others made the same career choice due to the influence of Roy De Soto and John Gage.

Connie Holland is a project assistant in Medicine and Science. She has also blogged about radio programming from 1928.

Author(s): 
Connie Holland
Posted Date: 
Tuesday, September 8, 2015 - 08:00

The Commoner Who Salvaged a King’s Ransom

Smithsonian Magazine

George Fabian Lawrence, better known as “Stoney Jack,” parlayed his friendships with London navvies into a stunning series of archaeological discoveries between 1895 and 1939.

It was only a small shop in an unfashionable part of London, but it had a most peculiar clientele. From Mondays to Fridays the place stayed locked, and its only visitors were schoolboys who came to gaze through the windows at the marvels crammed inside. But on Saturday afternoons the shop was opened by its owner—a “genial frog” of a man, as one acquaintance called him, small, pouched, wheezy, permanently smiling and with the habit of puffing out his cheeks when he talked. Settling himself behind the counter, the shopkeeper would light a cheap cigar and then wait patiently for laborers to bring him treasure. He waited at the counter many years—from roughly 1895 until his death in 1939—and in that time accumulated such a hoard of valuables that he supplied the museums of London with more than 15,000 ancient artifacts and still had plenty left to stock his premises at 7 West Hill, Wandsworth.

“It is,” the journalist H.V. Morton assured his readers in 1928,

perhaps the strangest shop in London. The shop sign over the door is a weather-worn Ka-figure from an Egyptian tomb, now split and worn by the winds of nearly forty winters. The windows are full of an astonishing jumble of objects. Every historic period rubs shoulders in them. Ancient Egyptian bowls lie next to Japanese sword guards and Elizabethan pots contain Saxon brooches, flint arrowheads or Roman coins…

There are lengths of mummy cloth, blue mummy beads, a perfectly preserved Roman leather sandal found twenty feet beneath a London pavement, and a shrunken black object like a bird’s claw that is a mummified hand… all the objects are genuine and priced at a few shillings each.

H.V. Morton, one of the best-known British journalists of the 1920s and 1930s, often visited Lawrence’s shop as a young man, and wrote a revealing and influential pen-portrait of him.

This higgledy-piggledy collection was the property of George Fabian Lawrence, an antiquary born in the Barbican area of London in 1861—though to say that Lawrence owned it is to stretch a point, for much of his stock was acquired by shadowy means, and on more than one occasion an embarrassed museum had to surrender an item it had bought from him.

For the better part of half a century, however, august institutions from the British Museum down winked at his hazy provenances and his suspect business methods, for the shop on West Hill supplied items that could not be found elsewhere. Among the major museum pieces that Lawrence obtained and sold were the head of an ancient ocean god, which remains a cornerstone of the Roman collection at the Museum of London; a spectacular curse tablet in the British Museum; and the magnificent Cheapside Hoard: a priceless 500-piece collection of gemstones, brooches and rings excavated from a cellar shortly before the First World War. It was the chief triumph of Lawrence’s career that he salvaged the Hoard, which still comprises the greatest trove of Elizabethan and Stuart-era jewelry ever unearthed.

Lawrence’s operating method was simple but ingenious. For several decades, he would haunt London’s building sites each weekday lunch hour, sidling up to the laborers who worked there, buying them drinks and letting them know that he was more than happy to purchase any curios—from ancient coins to fragments of pottery—that they and their mates uncovered in the course of their excavations. According to Morton, who first visited the West Hill shop as a wide-eyed young man around 1912, and soon began to spend most of his Saturday afternoons there, Lawrence was so well known to London’s navvies that he was universally referred to as “Stoney Jack.” A number, Morton added, had been offered “rudimentary archaeological training” by the antiquary, so they knew what to look for.

Lawrence made many of his purchases on the spot; he kept his pockets full of half-crowns (each worth two shillings and sixpence, or around $18.50 today) with which to reward contacts, and he could often be spotted making furtive deals behind sidewalk billboards and in barrooms. His greatest finds, though, were the ones that wended their way to Wandsworth on the weekends, brought there wrapped in handkerchiefs or sacks by navvies spruced up in their Sunday best, for it was only then that laborers could spirit their larger discoveries away from the construction sites and out from under the noses of their foremen and any landlords’ representatives. They took such risks because they liked and trusted Lawrence—and also, as JoAnn Spears explains it, because he “understood networking long before it became a buzzword, and leveraged connections like a latter-day Fagin.”

London navvies–laborers who excavated foundations, built railways and dug tunnels, all by hand–uncovered thousands of valuable artifacts in the British capital each year.

Two more touches of genius ensured that Stoney Jack remained the navvies’ favorite. The first was that he was renowned for his honesty. If ever a find sold for more than he had estimated it was worth, he would track down the discoverer and make certain he received a share of the profits. The second was that Lawrence never turned a visitor away empty-handed. He rewarded even the most worthless discoveries with the price of half a pint of beer, and the workmen’s attitude toward his chief rival—a representative of the City of London’s Guildhall Museum who earned the contemptuous nickname “Old Sixpenny”—is a testament to his generosity.

Lawrence lived at just about the time that archaeology was emerging as a professional discipline, but although he was extremely knowledgeable, and enjoyed a long career as a salaried official—briefly at the Guildhall and for many years as Inspector of Excavations at the newer Museum of London—he was at heart an antiquarian. He had grown up as the son of a pawnbroker and left school at an early age; for all his knowledge and enthusiasm, he was more or less self-taught. He valued objects for themselves and for what they could tell him about some aspect of the past, never, apparently, seeing his discoveries as tiny fragments of some greater whole.

To Lawrence, Morton wrote,

the past appeared to be more real, and infinitely more amusing, than the present. He had an almost clairvoyant attitude to it. He would hold a Roman sandal—for leather is marvelously preserved in the London clay—and, half closing his eyes, with his head on one side, his cheroot obstructing his diction, would speak about the cobbler who had made it ages ago, the shop in which it had been sold, the kind of Roman who had probably bought it and the streets of the long-vanished London it had known.

The whole picture took life and colour as he spoke. I have never met anyone with a more affectionate attitude to the past.

Like Morton, who nursed a love of ancient Egypt, Stoney Jack acquired his interest in ancient history during his boyhood. “For practical purposes,” he told another interviewer, “let us say 1885, when as a youth of 18 I found my first stone implement…. It chanced that one morning I read in the paper of the finding of some stone implements in my neighborhood. I wondered if there were any more to be found. I proceeded to look for them in the afternoon, and was rewarded.”

A Roman “curse tablet”, recovered by Lawrence from an excavation in Telegraph Street, London, is now part of the collection of the British Museum.

Controversial though Lawrence’s motives and his methods may have been, it is hard to avoid the conclusion that he was the right man in the right place to save a good deal of London’s heritage. Between 1890 and 1930 the city underwent redevelopment at a pace unheard of since the Great Fire of 1666; old buildings were demolished and replaced with newer, taller ones that required deeper foundations. In the days before the advent of widespread mechanization in the building trade, much of the necessary digging was done by navvies, who hacked their way down through Georgian, Elizabethan, medieval and finally Saxon and Roman strata that had not been exposed for centuries.

It was a golden age for excavation. The relatively small scale of the work—which was mostly done with picks and shovels—made it possible to spot and salvage minor objects in a way no longer practicable today. Even so, no formal system existed for identifying or protecting artifacts, and without Lawrence’s intervention most if not all of the 12,000 objects he supplied to the Museum of London, and the 300 and more catalogued under his name at the British Museum, would have been tipped into skips and shot into Thames barges to vanish into landfill on the Erith marshes. This was very nearly the fate of the treasure with which Stoney Jack will always be associated: the ancient bucket packed to the brim with a king’s ransom in gems and jewelry that was dug out of a cellar in the City of London during the summer of 1912.

It is impossible to say for certain who uncovered what would become known as the Cheapside Hoard, exactly where they found it, or when it came into the antiquary’s possession. According to Francis Sheppard, the date was June 18, 1912, and the spot an excavation on the corner of Friday Street and Cheapside in a district that had long been associated with the jewelry trade. That may or may not be accurate; one of Lawrence’s favorite tricks was to obscure the precise source of his most valued stock so as to prevent suspicious landowners from lodging legal claims.

This dramatic pocket watch, dated to c.1610 and set in a case carved from a single large Colombian emerald, was one of the most valuable of the finds making up the Cheapside Hoard–and led historian Kris Lane to put forward a new theory explaining the Hoard’s origins. Photo: Museum of London.

Whatever the truth, the discovery was a spectacular one whose value was recognized by everyone who saw it—everyone, that is, but the navvies who uncovered the Hoard in the first place. According to Morton, who claimed to have been present as a boy when the find was brought to West Hill by its discoverers one Saturday evening, the workmen who had uncovered it believed that they had “struck a toyshop.” Tipping open a sack, the men disgorged an enormous lump of clay resembling “an iron football,” the journalist recalled, “and they said there was a lot more of it. When they had gone, we went up to the bathroom and turned the water on to the clay. Out fell pearl earrings and pendants and all kinds of crumpled jewellery.”

For the most accurate version of what happened next, it is necessary to turn to the records of the Museum of London, which reveal that the discovery caused so much excitement that a meeting of the museum’s trustees was convened at the House of Commons the next evening, and the whole treasure was assembled for inspection a week later. “By that time,” Sheppard notes, “Lawrence had somehow or other got hold of a few more jewels, and on June 26 [the museum] sent him a cheque for £90…. Whether this was the full amount paid by the trustees for the hoard is not clear. In August 1913 he was paid £47 for unspecified purchases for the museum.”

Morton—who was 19 at the time of the discovery—offered a more romantic account many years later: “I believe that Lawrence declared this as treasure trove and was awarded a large sum of money, I think a thousand pounds. I well remember that he gave each of the astounded navvies something like a hundred pounds each, and I was told that these men disappeared, and were not seen again for months!”

Whatever the truth, the contents of the navvies’ bucket were certainly astonishing. The hoard consisted of several hundred pieces—some of them gems, but most worked pieces of jewelry in a wide variety of styles. They came from all over the world; among the most spectacular pieces were a number of cameos featuring Roman gods, several fantastical jewels from Mughal India, a quantity of superb 17th-century enamelware, and a large hinged watch case carved from a huge emerald.

A finely-worked salamander brooch, typical of the intricate Stuart-era jewelry that made up the Cheapside Hoard. Photo: Museum of London.

The collection was tentatively dated to around 1600-1650, and was rendered particularly valuable by the ostentatious fashions of the time; many of the pieces had bold, complex designs that featured a multiplicity of large gems. It was widely assumed, then and now, that the Cheapside Hoard was the stock-in-trade of some Stuart-era jeweler that had been buried for safekeeping some time during the Civil War that shattered England, Ireland and Scotland between 1642 and 1651, eventually resulting in the execution of Charles I and the establishment of Oliver Cromwell’s short-lived puritan republic.

It is easy to imagine some hapless jeweler, impressed into the Parliamentarian army, concealing his valuables in his cellar before marching off to his death on a distant battlefield. More recently, however, an alternative theory has been advanced by Kris Lane, an historian at Tulane whose book The Color of Paradise: The Emerald in the Age of Gunpowder Empires suggests that the Cheapside Hoard probably had its origins in the great emerald markets of India, and may once have belonged to a Dutch gem merchant named Gerard Polman.

The story that Lane spins goes like this: Testimonies recorded in London in 1641 show that, a decade earlier, Polman had booked passage home from Persia after a lifetime’s trading in the east. He had offered £100 or £200 to the master of the East India Company ship Discovery in Gombroon, Persia, to bring him home to Europe, but got no further than the Comoros Islands before dying–possibly poisoned by the ship’s crew for his valuables. Soon afterwards, the carpenter’s mate of the Discovery, one Christopher Adams, appropriated a large black box, stuffed with jewels and silk, that had once belonged to Polman. This treasure, the testimonies state, was astonishingly valuable; according to Adams’s wife, the gems it contained were “so shiny that they thought the cabin was afire” when the box had first been opened in the Indian Ocean. “Other deponents who had seen the jewels on board ship,” adds Lane, “said they could read by their brilliance.”

Cheapside–for many years the center of London’s financial district, but in Stuart times known for its jewelry stores–photographed in c.1900.

It is scarcely surprising, then, that when the Discovery finally hove to off Gravesend, at the mouth of the Thames, at the end of her long voyage, Adams jumped ship and went ashore in a small boat, taking his loot with him. We know from the Parliamentary archive that he made several journeys to London to fence the jewels, selling some to a man named Nicholas Pope who kept a shop off Fleet Street.

Soon, however, word of his treachery reached the directors of the East India Company, and Adams was promptly taken into custody. He spent the next three years in jail. It is the testimony that he gave from prison that may tie Polman’s gems to the Cheapside Hoard.

The booty, Adams admitted, had included “a greene rough stone or emerald three inches long and three inches in compass”—a close match for the jewel carved into a hinged watch case that Stoney Jack recovered in 1912. This jewel, he confessed, “was afterward pawned at Cheapside, but to whom he knoweth not”, and Lane considers it a “likely scenario” that the emerald found its way into the bucket buried in a Cheapside cellar; “many of the other stones and rings,” he adds, “appear tantalizingly similar to those mentioned in the Polman depositions.” If Lane is right, the Cheapside Hoard may have been buried in the 1630s, to avoid the agents of the East India Company, rather than lost during the chaos of the Civil War.

Whether or not Lane’s scholarly detective work has revealed the origins of the Cheapside Hoard, it seems reasonable to ask whether the good that Stoney Jack Lawrence did was enough to outweigh the less creditable aspects of his long career. His business was, of course, barely legitimate, and, in theory, his navvies’ finds belonged to the owner of the land that they were working on—or, if exceptionally valuable, to the Crown. That they had to be smuggled off the building sites, and that Lawrence, when he catalogued and sold them, chose to be vague about exactly where they had been found, is evidence enough of his duplicity.

A selection of the 500 pieces making up the Cheapside Hoard that were recovered from a ball of congealed mud and crushed metalwork resembling an “iron football” uncovered in the summer of 1912. Photo: Museum of London.

Equally disturbing, to the modern scholar, is Lawrence’s willingness to compromise his integrity as a salaried official of several museums by acting as both buyer and seller in hundreds of transactions, not only setting his own price, but also authenticating artifacts that he himself supplied. Yet there is remarkably little evidence that any institution Lawrence worked for paid over the odds for his discoveries, and when Stoney Jack died, at age 79, he left an estate totaling little more than £1,000 (about $87,000 now). By encouraging laborers to hack treasures from the ground and smuggle them out to him, the old antiquary also turned his back on the possibility of setting up regulated digs that would almost certainly have turned up additional finds and evidence to set his greatest discoveries in context. On the other hand, there were few regulated digs in those days, and had Lawrence never troubled to make friends with London navvies, most of his finds would have been lost forever.

For H.V. Morton, it was Stoney Jack’s generosity that mattered. “He loved nothing better than a schoolboy who was interested in the past,” Morton wrote. “Many a time I have seen a lad in his shop longingly fingering some trifle that he could not afford to buy. ‘Put it in your pocket,’ Lawrence would cry. ‘I want you to have it, my boy, and–give me threepence!‘”

But perhaps the last word can be left to Sir Mortimer Wheeler, something of a swashbuckler himself, but by the time he became keeper of the Museum of London in the 1930s–after Stoney Jack had been forced into retirement for making one illicit purchase too many outside a guarded building site–a pillar of the British archaeological establishment.

“But for Mr Lawrence,” Wheeler conceded,

not a tithe of the objects found during building or dredging operations in the neighborhood of London during the last forty years would have been saved to knowledge. If on occasion a remote landowner may, in the process, theoretically have lost some trifle that was his just due, a higher justice may reasonably recognize that… the representative and, indeed, important prehistoric, Roman, Saxon and medieval collections of the Museum are largely founded upon this work of skillful salvage.

Sources

Anon. “Saved Tudor relics.” St Joseph News-Press (St Joseph, MO), August 3, 1928; Anon. “Stoney Jack’s work for museum.” Straits Times (Singapore), August 1, 1928; Michael Bartholomew. In Search of H.V. Morton. London: Methuen, 2010; Joanna Bird, Hugh Chapman & John Clark. Collectanea Londiniensia: Studies in London Archaeology and History Presented to Ralph Merrifield. London: London & Middlesex Archaeological Society, 1978; Derby Daily Telegraph, November 20, 1930; Exeter & Plymouth Gazette, March 17, 1939; Gloucester Citizen, July 3, 1928; Kris E. Lane. The Colour of Paradise: The Emerald in the Age of Gunpowder Empires. New Haven: Yale University Press, 2010; J. MacDonald. “Stony Jack’s Roman London.” In J. Bird, M. Hassall and Harvey Sheldon, Interpreting Roman London. Oxbow Monograph 58 (1996); Ivor Noël Hume. A Passion for the Past: The Odyssey of a Transatlantic Archaeologist. Charlottesville: University of Virginia Press, 2010; Arthur MacGregor. Summary Catalogue of the Continental Archaeological Collections. Oxford: Ashmolean Museum, 1997; Francis Sheppard. Treasury of London’s Past. London: Stationery Office, 1991; H.V. Morton. In Search of London. Boston: Da Capo Press, 2002; Derek Sherborn. An Inspector Recalls. London: Book Guild, 2003; JoAnn Spears. “The Cheapside Hoard.” On the Tudor Trail, February 23, 2012. Accessed June 4, 2013; Peter Watts. “Stoney Jack and the Cheapside Hoard.” The Great Wen, November 18, 2010. Accessed June 4, 2013.

How Advertising Shaped the First Opioid Epidemic

Smithsonian Magazine

When historians trace back the roots of today’s opioid epidemic, they often find themselves returning to the wave of addiction that swept the U.S. in the late 19th century. That was when physicians first got their hands on morphine: a truly effective treatment for pain, delivered first by tablet and then by the newly invented hypodermic syringe. With no criminal regulations on morphine, opium or heroin, many of these drugs became the "secret ingredient" in readily available, dubiously effective medicines.

In the 19th century, after all, there was no Food and Drug Administration (FDA) to regulate the advertising claims of health products. In such a climate, a popular so-called “patent medicine” market flourished. Manufacturers of these nostrums often made misleading claims and kept their full ingredients list and formulas proprietary, though we now know they often contained cocaine, opium, morphine, alcohol and other intoxicants or toxins.

Products like heroin cough drops and cocaine-laced toothache medicine were sold openly and freely over the counter, using colorful advertisements that can be downright shocking to modern eyes. Take this 1885 print ad for Mrs. Winslow’s Soothing Syrup for Teething Children, for instance, showing a mother and her two children looking suspiciously beatific. The morphine content may have helped.

Image by NIH National Library of Medicine. 1885 advertisement for Mrs. Winslow's Soothing Syrup. This product was for teething children and contained morphine.

Image by NIH National Library of Medicine. Published in Mumbles Railway Publishing, 19th century.

Yet while it’s easy to blame patent medicines and American negligence for the start of the first opioid epidemic, the real story is more complicated. First, it would be a mistake to assume that Victorian-era Americans were just hunky-dory with giving infants morphine syrup. The problem was, they just didn’t know. It took the work of muckraking journalists such as Samuel Hopkins Adams, whose exposé series “The Great American Fraud” appeared in Collier’s from 1905 to 1906, to pull back the curtain.

But more than that, widespread opiate use in Victorian America didn’t start with the patent medicines. It started with doctors.

The Origins of Addiction

Patent medicines typically contained relatively small quantities of morphine and other drugs, says David Herzberg, a professor of history at the University at Buffalo, SUNY. “It’s pretty well recognized that none of those products produced any addiction,” says Herzberg, who is currently writing a history of legal narcotics in America.

Until the Harrison Narcotics Act of 1914, there were no federal laws regulating drugs such as morphine or cocaine. Moreover, even in those states that had regulations on the sale of narcotics beginning in the 1880s, Herzberg notes that “laws were not part of the criminal code, instead they were part of medical/pharmacy regulations.”

The laws that existed weren't well-enforced. Unlike today, a person addicted to morphine could take the same “tattered old prescription” back to a compliant druggist again and again for a refill, says David Courtwright, a historian of drug use and policy at the University of North Florida.

And for certain ailments, patent medicines could be highly effective, he adds. “Quite apart from the placebo effect, a patent medicine might contain a drug like opium,” says Courtwright, whose book Dark Paradise: A History of Opiate Addiction in America, provides much of the original scholarship in this area. “If buyers took a spoonful because they had, say, a case of the runs, the medicine probably worked.” (After all, he points out, “opium is a constipating agent.”)

Patent medicines may not have been as safe as we would demand today, nor did they live up to their claims of being a panacea, but when it came to coughs and diarrhea, they probably got the job done. “Those drugs are really famous, and they do speak to a time where markets were a little bit out of control,” Herzberg says. "But the vast majority of addiction during their heyday was caused by physicians.”

From handbills and pamphlets advertising glyco-heroin 1900-1920, from the College of Physicians of Philadelphia's collection of medical trade ephemera. (Historical Medical Library, College of Physicians of Philadelphia)

Marketing to Doctors

For 19th century physicians, cures were hard to come by. But beginning in 1805, they were handed a way to reliably make patients feel better. That’s the year German pharmacist Friedrich Sertürner isolated morphine from opium, the first “opiate” (the term opioid once referred to purely synthetic morphine-like drugs, Courtwright notes, before becoming a catchall covering even those drugs derived from opium).

Delivered by tablet, topically and, by mid-century, through the newly invented hypodermic syringe, morphine quickly made itself indispensable. Widespread use by soldiers during the Civil War also helped trigger the epidemic, as Erick Trickey reports in Smithsonian.com. By the 1870s, morphine became something of “a magic wand [doctors] could wave to make painful symptoms temporarily go away,” says Courtwright.

Doctors used morphine liberally to treat everything from the pain of war wounds to menstrual cramps. “It’s clear that that was the primary driver of the epidemic,” Courtwright says. And 19th century surveys Courtwright studied showed most opiate addicts to be female, white, middle-aged, and of “respectable social background”—in other words, precisely the kind of people who might seek out physicians with the latest tools.

Industry was quick to make sure physicians knew about the latest tools. Ads for morphine tablets ran in medical trade journals, Courtwright says, and, in a maneuver with echoes today, industry sales people distributed pamphlets to physicians. The College of Physicians of Philadelphia Historical Medical Library has a collection of such “medical trade ephemera” that includes a 1910 pamphlet from The Bayer Company titled, “The Substitute for the Opiates.”

The substitute? Heroin hydrochloride, at the time a new drug initially believed to be less addictive than morphine. Pamphlets from the Antikamnia Chemical Company, circa 1895, show an easy cheat sheet catalog of the company’s wares, from quinine tablets to codeine and heroin tablets.

(College of Physicians of Philadelphia's Historical Medical Library)

Physicians and pharmacists were the key drivers in increasing America's per capita consumption of drugs like morphine by threefold in the 1870s and 80s, Courtwright writes in a 2015 paper for the New England Journal of Medicine. But it was also physicians and pharmacists who ultimately helped bring the crisis back under control.

In 1889, Boston physician James Adams estimated that about 150,000 Americans were "medical addicts": those addicted through morphine or some other prescribed opiate rather than through recreational use such as smoking opium. Physicians like Adams began encouraging their colleagues to prescribe “newer, non-opiate analgesics,” drugs that did not lead to depression, constipation and addiction.

“By 1900, doctors had been thoroughly warned and younger, more recently trained doctors were creating fewer addicts than those trained in the mid-nineteenth century,” writes Courtwright.

This was a conversation between doctors, and between doctors and industry. Unlike today, drug makers did not market directly to the public and took pride in that contrast with the patent medicine manufacturers, Herzberg says. “They called themselves the ethical drug industry and they would only advertise to physicians.”

But that would begin to change in the early 20th century, driven in part by a backlash to the marketing efforts of the 19th century patent medicine peddlers.

"San Diego lynx bares its fangs vigorously when zoo veterinarian is near cage, vet says it acts this way because it fears his hypodermics," reads the first photo caption for this Librium advertisement. "Tranquil as a tabby," says the second. (LIFE Magazine)

Marketing to the Masses

In 1906, reporting like Adams’ helped drum up support for the Pure Food and Drug Act. That gave rise to what would become the Food and Drug Administration, as well as the notion that food and drug products should be labeled with their ingredients so consumers could make reasoned choices.

That idea shapes federal policy right up until today, says Jeremy Greene, a colleague of Herzberg’s and a professor of the history of medicine at Johns Hopkins University School of Medicine: “That path-dependent story is part of the reason why we are one of the only countries in the world that allows direct-to-consumer advertising," he says.

At the same time, in the 1950s and 60s, pharmaceutical promotion became more creative, coevolving with the new regulatory landscape, according to Herzberg. As regulators have set out the game, he says, “Pharma has regularly figured out how to play that game in ways that benefit them.”

Though the tradition of eschewing direct marketing to the public continued, advertising in medical journals increased. So, too, did more unorthodox methods. Companies staged attention-grabbing gimmicks, such as Carter Products commissioning Salvador Dali to make a sculpture promoting its tranquilizer, Miltown, for a conference. Competitor Roche Pharmaceuticals invited reporters to watch as its tranquilizer Librium was used to sedate a wild lynx.

Alternatively, some began taking their messaging straight to the press.

“You would feed one of your friendly journalists the most outlandishly hyped-up promise of what your drug could do,” Greene says. “Then there is no peer review. There is no one checking to see if it’s true; it’s journalism!” In their article, Greene and Herzberg detail how ostensibly independent freelance science journalists were actually on the industry payroll, penning stories about new wonder drugs for popular magazines long before native advertising became a thing.

One prolific writer, Donald Cooley, wrote articles with headlines such as “Will Wonder Drugs Never Cease!” for magazines like Better Homes and Gardens and Cosmopolitan. “Don’t confuse the new drugs with sedatives, sleeping pills, barbiturates or a cure,” Cooley wrote in an article titled “The New Nerve Pills and Your Health.” “Do realize they help the average person relax.”

As Herzberg and Greene documented in a 2010 article in the American Journal of Public Health, Cooley was actually one of a stable of writers commissioned by the Medical and Pharmaceutical Information Bureau, a public relations firm, working for the industry. In a discovery Herzberg plans to detail in an upcoming book, it turns out there is “a rich history of companies knocking at the door, trying to claim that new narcotics are in fact non-addictive” and running advertisements in medical trade journals that get swatted down by federal authorities.

A 1932 ad in the Montgomery Advertiser, for instance, teases a new “pain relieving drug, five times as potent as morphine, as harmless as water and with no habit forming qualities.” This compound, “di-hydro-morphinone-hydrochlorid” is better known by the brand name Dilaudid, and is most definitely habit forming, according to Dr. Caleb Alexander, co-director of the Center for Drug Safety and Effectiveness at Johns Hopkins.

And while it’s not clear if the manufacturer truly believed it was harmless, Alexander says it illustrates the danger credulity presents when it comes to drug development. “If it sounds too good to be true, it probably is,” he says. “It is this sort of thinking, decades later, that has driven the epidemic."

A selection of contemporary ads for painkillers.

It wasn’t until 1995, when Purdue Pharma introduced OxyContin, that one of these attempts succeeded, says Herzberg. “OxyContin passed because it was claimed to be a new, less-addictive type of drug, but the substance itself had been swatted down repeatedly by authorities since the 1940s,” he says. OxyContin is simply oxycodone, developed in 1917, in a time-release formulation that Purdue argued allowed a single dose to last 12 hours, mitigating the potential for addiction.

Ads targeting physicians bore the tagline, “Remember, effective relief just takes two.”

“If OxyContin had been proposed as a drug in 1957 authorities would have laughed and said no,” Herzberg says.

Captivating the Consumer

In 1997, the FDA changed its advertising guidelines to open the door to direct-to-consumer marketing of drugs by the pharmaceutical industry. There were a number of reasons for this reversal of more than a century of practice, Greene and Herzberg say, from the ongoing ripples of the Reagan-era wave of deregulation, to the advent of the “blockbuster” pharmaceutical, to advocacy by AIDS patients rights groups.

The consequences were profound: a surge of industry spending on print and television advertising describing non-opioid drugs to the public that hit a peak of $3.3 billion in 2006. And while ads for opioid drugs were typically not shown on television, Greene says the cultural and political shifts that made direct-to-consumer advertising possible also changed the reception to the persistent pushing of opioids by industry.

Once again, it was not the public but physicians who were the targets of opioid marketing, and that marketing was often quite aggressive. The advertising campaign for OxyContin, for instance, was in many ways unprecedented.

Purdue Pharma provided physicians with starter coupons that gave patients a free seven- to 30-day supply of the drug. The company's sales force—which more than doubled in size from 1996 to 2000—handed doctors OxyContin-branded swag including fishing hats and plush toys. A music CD was distributed with the title “Get in the Swing with OxyContin.” Prescriptions of OxyContin for non-cancer-related pain boomed from 670,000 written in 1997 to 6.2 million in 2002.

But even this aggressive marketing campaign was in many ways just the smoke. The real fire, Alexander argues, was a behind-the-scenes effort to establish a more lax attitude toward prescribing opioid medications generally, one which made regulators and physicians alike more accepting of OxyContin.

“When I was in residency training, we were taught that one needn’t worry about the addictive potential of opioids if a patient had true pain,” he says. Physicians were cultivated to overestimate the effectiveness of opioids for treating chronic, non-cancer pain, while underestimating the risks, and Alexander argues this was no accident.

Purdue Pharma funded more than 20,000 educational programs designed to promote the use of opioids for chronic pain other than cancer, and provided financial support for groups such as the American Pain Society. That society, in turn, launched a campaign calling pain “the fifth vital sign,” which helped contribute to the perception there was a medical consensus that opioids were under, not over-prescribed.

.....

Are there lessons that can be drawn from all this? Herzberg thinks so, starting with the understanding that “gray area” marketing is more problematic than open advertising. People complain about direct-to-consumer advertising, but if there must be drug marketing, “I say keep those ads and get rid of all the rest," he says, "because at least those ads have to tell the truth, at least so far as we can establish what that is.”

Even better, Herzberg says, would be to ban the marketing of controlled narcotics, stimulants and sedatives altogether. “This could be done administratively with existing drug laws, I believe, based on the DEA’s power to license the manufacturers of controlled substances.” The point, he says, would not be to restrict access to such medications for those who need them, but to subtract “an evangelical effort to expand their use.”

Another lesson from history, Courtwright says, is that physicians can be retrained. If physicians in the late 19th century learned to be judicious with morphine, physicians today can relearn that lesson with the wide array of opioids now available.

That won’t fix everything, he notes, especially given the vast black market that did not exist at the turn of the previous century, but it’s a proven start. As Courtwright puts it: Addiction is a highway with a lot of on-ramps, and prescription opioids are one of them. If we remove the billboards advertising the exit, maybe we can reduce, if not eliminate, the number of travelers.

“That’s how things work in public health,” he says. “Reduction is the name of the game.”

Poetry Matters: In Baseball, No Poet Has Yet Done the Game Justice

Smithsonian Magazine

Baseball is a game of unpredictable actions occurring within strictly defined guidelines—innings, strikes and outs. It should be perfect for poetry. But there has yet to be a truly great poem about baseball. The desire to be serious is what kills most baseball poems—they’re all metaphor and have none of the spontaneous joy that went into, say, John Fogerty’s pop song “Centerfield.”

Put me in, coach, I’m ready to play.

“April is the cruelest month” is one of the most famous lines in poetry, but it is one that only makes sense in the post-apocalyptic world of T.S. Eliot’s “The Waste Land.” For the rest of us, clinging to hope, warm weather and the eternal prospect of new beginnings, April is not cruel at all, but welcomed. And in America, it’s welcomed because of baseball. Indeed, with baseball and spring, the meaning of one spills into the other in a mutually reinforcing bond of associations between the game and rebirth. It is the time when the white chill of snow is replaced by the diamond’s green growth of grass.

But this renewal is specific, even nationalistic, and uniquely American. Baseball speaks to our country’s character and experience. In particular, the sport is rooted in the special connection that Americans have with the land; an encounter with nature formed a particular type of person—and a particular type of democracy and culture.

This baseball was used in the 1937 Negro League East-West All-Star Game, played on August 8, 1937 at Comiskey Park in Chicago, Illinois. Buck Leonard (1907-1997), first baseman for the Homestead Grays, hit a home run to help the East win 7-2, keeping this baseball as a souvenir. (Image courtesy of the American History Museum)

The founding myth about baseball—that General Abner Doubleday “invented” the game in and around Cooperstown, New York, as an activity for his troops—is historically inaccurate, but satisfying nonetheless. Where better for baseball to have been created than in the sylvan woodlands of upstate New York, home of James Fenimore Cooper’s frontier hero Natty Bumppo, the Leatherstocking? If Cooperstown is a myth, it is one that endures because the idea of America’s game being born out of the land confirms the specialness, not just of the game, but of the people the game represents. Yet it is impossible to disentangle baseball from its myths, and it seems uncanny that the first officially recorded baseball game actually occurred in urban Hoboken, New Jersey, at a place called “Elysian Fields.” Uncanny, because in Greek mythology, these are the fields where the gods and the virtuous disported after they had passed on. Is this heaven?

Recall a certain magical ballfield built in an Iowa cornfield, where the old-time gods of baseball came out to play? The 1982 novel Shoeless Joe by W.P. Kinsella, later adapted into the 1989 film Field of Dreams, starring Kevin Costner, certainly paid homage to that Greek myth.

The virtuous and heroic in baseball is the subject of much non-fiction journalism of course, from beat writing to one of the greatest essays ever penned, John Updike’s eulogy to Ted Williams, “the best old hitter of the century.” Inevitably it is also the subject of both literary fiction and poetry. Poetry is especially suited to expressing the mythic attractions of the game. And back when poetry was more a part of regular conversation, sportswriters and newspapermen used verse to comment on the game. In 1910, Franklin P. Adams penned his famous tribute to the Cubs’ double play combination, “Tinker to Evers to Chance/A trio of bear cubs fleeter than birds.” And probably the single most well-known poem is Ernest Thayer’s comic 1888 ballad of mighty “Casey at the Bat.” Fiction inevitably requires the author to get down and dirty in the rough and tumble of a difficult sport played (mostly) by young men, full of aggression and testosterone–not always a pretty sight.

But poetry creates just the right tone to convey the larger meaning of the game, if not always the game itself. There are not many poems from the participant’s point of view. With a poem comes the almost automatic assumption that the poet will see through the baseball game to something else, frequently the restoration of some lost unity or state of grace. Poetic baseball creates an elegy in which something lost can be either regained or at least properly mourned.

In 1910 the great sportswriter Grantland Rice made just that point in “Game Called”: as the players and the crowd exit the stadium, “But through the night there shines the light/home beyond the silent hill.”

Carl Yastrzemski of the Boston Red Sox wore this batting helmet around 1970. “Yaz” played 23 seasons and 3,308 games for Boston, racking up more than 3,000 hits and 400 home runs. He cut away the right earpiece to hear more clearly. (Image courtesy of the American History Museum)

In his comic riff on sports, the comedian George Carlin croons that in baseball “you go home.” There are a lot of poems in which families re-connect, sometimes successfully, by watching baseball or by having fathers teach sons how to play.

Modernist poets—the heirs to Eliot—generally ignored baseball because it was too associated with a romantic, or even sentimental, view of life. Modernism was nothing if not hard-headed, and it was difficult to find a place for games. William Carlos Williams, in his 1923 poem “The Crowd at the Ball Game,” delights in the game precisely because it’s a time out from the hum-drum grind of daily work.

The crowd at the ball game
is moved uniformly
by a spirit of uselessness
which delights them

And this purposelessness has a point, “all to no end save beauty/the eternal.” Williams is mostly after the relationship between crowd and individual; the game is not really the thing.

The great Marianne Moore got something of a reputation in the popular press for actually being a fan of baseball, and in 1968 threw out the first pitch at Yankee Stadium. In fact she was often seen in the stands, taking in a game, and some of her poems reference bats and balls. She talked about creativity more expansively in “Baseball and Writing”:

Fanaticism? No. Writing is exciting
and baseball is like writing.
You can never tell with either
how it will go
or what you will do;
generating excitement

This gets closer to the flow experience of the game itself rather than just describing it, but the poem then breaks down into a not-very-good roll call of Yankee players from the early ’60s. Still, baseball crops up often enough in verse to make it interesting to see how poets have used it. May Swenson turned baseball into an amusing puzzle and word-play game based on romance and courtship:

Bat waits
for ball
to mate.
Ball hates
to take bat’s
bait. Ball
flirts, bat’s
late, don’t
keep the date.

And at the end, inevitably, everyone heads home. The Beat poet Gregory Corso has a typically hallucinatory encounter with Ted Williams in “Dream of a Baseball Star,” in which Williams unaccountably is unable to hit a single pitch and “The umpire dressed in strange attire/thundered his judgment: YOU’RE OUT!”

Fellow Beat poet Lawrence Ferlinghetti invoked baseball to make a civil rights point.

Watching baseball, sitting in the sun, eating popcorn,
reading Ezra Pound,
and wishing that Juan Marichal would hit a hole right through the
Anglo-Saxon tradition in the first Canto
and demolish the barbarian invaders

You can sense, in the shift from the game to Ezra Pound, the poet’s uneasiness with the game itself and his eagerness to move from the physical to the intellectual. When the body appears in a baseball poem it is the body of the aging poet, as in Donald Hall’s extended, very well done, but extremely depressing linkage of innings going by with aging—and death. Maybe baseball poems will always be troubled with an excess of seriousness; perhaps we’ve become too rooted in the mythology of baseball and character to treat it on its own terms. Alternate takes by African Americans, like Quincy Troupe’s “Poem for My Father,” about the impact of the Negro leagues and the prowess of such players as Cool Papa Bell, give another angle on the tradition. Further such outsider views, especially from the point of view of women who are neither adoring spectators nor “Baseball Annies,” would be welcome as well.

As a new season begins, hope springs eternal, not just for the season itself but for the chance that someday some poet will give baseball the kind of relaxed attention that does the sport justice. It really is remarkable that baseball, which occupies such a large part of our culture and history, remains, in the view of this critic, so inadequately treated by our writers and poets.

Image courtesy of the National Portrait Gallery. Babe Ruth (1895-1948), also of the Yankees, in a photograph by Nickolas Muray. © Courtesy Nickolas Muray Photo Archives © Family of Babe Ruth & Babe Ruth Baseball League, Inc. by CMG Worldwide

Image courtesy of the National Portrait Gallery. Josh Gibson (c.1911-1947), who played for the Homestead Grays and Pittsburgh Crawfords, in a photograph by Charles “Teenie” Harris. © Estate of Charles “Teenie” Harris

Image courtesy of the National Portrait Gallery. Roger Maris (1934-1985) of the New York Yankees by Robert Vickrey. Gift of Scott Vickrey

When the Olympics Gave Out Medals for Art

Smithsonian Magazine

At the 1912 Summer Olympics in Stockholm, American Walter Winans took the podium and waved proudly to the crowd. He had already won two Olympic medals—a gold for sharpshooting at the 1908 London Games, as well as a silver for the same event in 1912—but the gold he won at Stockholm wasn’t for shooting, or running, or anything particularly athletic at all. It was instead awarded for a small piece of bronze he had cast earlier that year: a 20-inch-tall horse pulling a small chariot. For his work, An American Trotter, Winans won the first ever Olympic gold medal for sculpture.

For the first four decades of competition, the Olympics awarded official medals for painting, sculpture, architecture, literature and music, alongside those for the athletic competitions. From 1912 to 1952, juries awarded a total of 151 medals to original works in the fine arts inspired by athletic endeavors. Now, on the eve of the 100th anniversary of the first artistic competition, even Olympics fanatics are unaware that arts, along with athletics, were a part of the modern Games nearly from the start.

“Everyone that I’ve ever spoken to about it has been surprised,” says Richard Stanton, author of The Forgotten Olympic Art Competitions. “I first found out about it reading a history book, when I came across a little comment about Olympic art competitions, and I just said, ‘what competitions?’” Propelled by curiosity, he wrote the first—and still the only—English-language book ever published on the subject.

To learn about the overlooked topic, Stanton had to dig through crumbling boxes of often-illegible files from the International Olympic Committee archives in Switzerland—many of which hadn’t seen the light of day since they were packed away decades ago. He discovered that the story went all the way back to the Baron Pierre de Coubertin, the founder of the IOC and the modern Games, who saw art competitions as integral to his vision of the Olympics. “He was raised and educated classically, and he was particularly impressed with the idea of what it meant to be a true Olympian—someone who was not only athletic, but skilled in music and literature,” Stanton says. “He felt that in order to recreate the events in modern times, it would be incomplete to not include some aspect of the arts.”

At the turn of the century, as the baron struggled to build the modern Olympics from scratch, he was unable to convince overextended local organizers of the first few Games in Athens, St. Louis and Paris that arts competitions were necessary. But he remained adamant. “There is only one difference between our Olympiads and plain sporting championships, and it is precisely the contests of art as they existed in the Olympiads of Ancient Greece, where sport exhibitions walked in equality with artistic exhibitions,” he declared.

Finally, in time for the 1912 Stockholm Games, he was able to secure a place for the arts. Submissions were solicited in the categories of architecture, music, painting, sculpture and literature, with a caveat—every work had to be somehow inspired by the concept of sport. Some 33 (mostly European) artists submitted works, and a gold medal was awarded in each category. In addition to Winans’ chariot, other winners included a modern stadium building plan (architecture), an “Olympic Triumphal March” (music), friezes depicting winter sports (painting) and Ode to Sport (literature).  The baron himself was among the winners. Fearing that the competitions wouldn’t draw enough entrants, he penned the winning ode under the pseudonyms George Hohrod and Martin Eschbach, leaving the medal jury unaware of the true author.

Collection: Olympic Museum Lausanne. The bronze medals awarded during the 1924 Olympic art competitions in Paris in the "Sculpture" category.

Collection: Olympic Museum Lausanne. Jean Jacoby's Corner, left, and Rugby. At the 1928 Olympic Art Competitions in Amsterdam, Jacoby won a gold medal for Rugby.

Collection: Idrottsmuseet i Malmö. Walter Winans' An American Trotter won the gold medal in the "Sculpture" category at the first Olympic Art Competitions in 1912 in Stockholm.

Collection: Norbert Mueller. Anniversary of the Reintroduction of the Olympic Games, 1914, Edouard Elzingre.

Collection: Deutsches Sport & Olympia Museum, Cologne. Carlo Pellegrini's series of winter sport graphic artworks won an Olympic gold medal.

Collection: Norbert Mueller. The original program of the presentation of prizes in May 1911 in the Court of Honor of the Sorbonne in Paris.

Collection: Carl and Liselott Diem-Archiv. A letter from Pierre de Coubertin that aimed to motivate the IOC Art Congress in 1906 to artistically enhance sports festivals and inspire them to hold music and literature competitions in association with sporting events.

Collection: Deutsches Sport & Olympia Museum, Cologne. Ode to Sport won the gold medal in "Literature" at the first Olympic Art Competitions in 1912.

Over the next few decades, as the Olympics exploded into a premier international event, the fine arts competitions remained an overlooked sideshow. To satisfy the sport-inspired requirement, many paintings and sculptures were dramatic depictions of wrestling or boxing matches; the majority of the architecture plans were for stadiums and arenas. The format of the competitions was inconsistent and occasionally chaotic: a category might garner a silver medal, but no gold, or the jury might be so disappointed in the submissions that it awarded no medals at all. At the 1928 Amsterdam Games, the literature category was split into lyric, dramatic and epic subcategories, then reunited as one for 1932, and then split again in 1936.

Many art world insiders viewed the competitions with distrust. “Some people were enthusiastic about it, but quite a few were standoffish,” Stanton says. “They didn't want to have to compete, because it might damage their own reputations.” The fact that the events had been initiated by art outsiders, rather than artists, musicians or writers—and the fact that all entries had to be sport-themed—also led many of the most prominent potential entrants to decide the competitions were not worth their time.

Still, local audiences enjoyed the artworks—during the 1932 Games, nearly 400,000 people visited the Los Angeles Museum of History, Science and Art to see the works entered—and some big names did enter the competitions. John Russell Pope, the architect of the Jefferson Memorial, won a silver at the 1932 Los Angeles Games for his design of the Payne Whitney Gymnasium, constructed at Yale University. Italian sculptor Rembrandt Bugatti, American illustrator Percy Crosby, Irish author Oliver St. John Gogarty and Dutch painter Isaac Israëls were other prominent entrants.

In 1940 and 1944, the Olympics were put on hold as nearly all participating countries became embroiled in the violence and destruction of World War II. When they returned, the art competitions faced a bigger problem: the new IOC president’s obsession with absolute amateurism. “American Avery Brundage became the president of the IOC, and he was a rigid supporter of amateur athletics,” Stanton says. “He wanted the Olympics to be completely pure, not to be swayed by the weight of money.” Because artists inherently rely on selling their work for their livelihood—and because winning an Olympic medal could theoretically serve as a sort of advertisement for the quality of an artist’s work—Brundage took aim at the art competitions, insisting they represented an unwelcome incursion of professionalism. Although Brundage himself had once entered a piece of literature in the 1932 Games’ competitions and earned an honorable mention, he stridently led a campaign against the arts following the 1948 Games.

After heated debate, it was eventually decided that the art competitions would be scrapped. They were replaced by a noncompetitive exhibition to occur during the Games, which eventually became known as the Cultural Olympiad. John Copley of Britain won one of the final medals awarded, a silver in 1948 for his engraving, Polo Players. He was 73 years old at the time, and would be the oldest medalist in Olympic history if his victory still counted. The 151 medals that had been awarded were officially stricken from the Olympic record, though, and do not count toward countries’ medal totals.

Still, half a century later, the concept behind the art competitions lingers. Starting in 2004, the IOC has held an official Sport and Art Contest leading up to each summer Games. For the 2012 contest, entrants sent sculptures and graphic works on the theme of “Sport and the Olympic values of excellence, friendship and respect.” Though no medals are at stake, winners will receive cash prizes, and the best works will be selected and displayed in London during the Games. Somewhere, the Baron Pierre de Coubertin might be smiling.

Why Did a Venomous Fish Evolve a Glowing Eye Spike?

Smithsonian Magazine

In 2003, Leo Smith was dissecting a velvetfish. Smith, an evolutionary biologist at the University of Kansas, was trying to figure out the relationships between mail-cheeked fishes, an order that includes velvetfishes, as well as waspfishes, stonefishes and the infamous lionfish. As he worked his way to the velvetfish's upper jaw, though, he realized something strange—he was having trouble removing the lachrymal bone.

“On a normal fish, there's a little bit of connective tissue and you can work a scalpel blade between the upper jaw and this bone,” recalls Smith, whose work centers on the evolution of fish venom and bioluminescence. “I was having just a horrible time trying to separate it. When I finally got it separated, I noticed there was this thing that's all lumpy and bumpy … it was then that it hit me that it had to be some sort of locking mechanism.”

To be fair, most velvetfishes already resemble thorny, blobby mutants, so an extra skewer isn’t really that unusual. But given that Smith has spent years studying mail-cheeked fishes (Scorpaeniformes)—an order that gets its common name from the bone plates found on each cheek—you’d think he would have noticed a massive, locking eye spike before. He hadn’t. He and his colleagues would dub this strange new discovery the “lachrymal saber.”

(FYI: Lachrymal comes from the Latin word for “tear.” While fish can’t cry, it’s still the technical name for the bone forming the eye socket.)

Smith and his co-authors describe this unlikely eye spike for the first time in Copeia, the journal of the American Society of Ichthyologists and Herpetologists—and even report on one that glows fluorescent green, a little eye lightsaber. The authors can’t yet say exactly what the appendage is for. But they do claim that it has the potential to profoundly rearrange the Scorpaeniformes evolutionary tree, changing what we know about these highly venomous fish.

The finding also raises the question: How the heck did a glowing, locking, sword-like appendage go unnoticed for so long?

A species of stonefish, the Spotted Ghoul (Inimicus sinensis), buried in the gravel. (Leo Smith)

It’s easy to miss a stonefish. True to their name, they closely resemble rocks, with cobble-covered exteriors that mirror underwater rubble or chunks of coral. But step on one, and you’ll never forget it.

There are more venomous fish in the seas than venomous snakes on land—or indeed, than all other venomous vertebrates combined—and the stonefish is one of the most venomous on the planet. Getting pricked by one of these marine monsters can feel, as an unlucky victim once put it, like “hitting your toe with a hammer and then rubbing over it again and again with a nail file.” While it’s uncommon, divers have even died after such an encounter.

Stonefish and their cousins are also marvelous at camouflage. Some grow algae and hydroid gardens on their backs, others can change color at will, and one, the decoy scorpionfish, has a lure on its dorsal fin that resembles a tiny, swimming fish. Found mainly in the tropical waters throughout the Indo-Pacific, these remarkable creatures use their disguises to both ambush prey and avoid becoming lunch themselves.

But the lachrymal saber, a unique aspect of these fish, had somehow gone overlooked. And while it’s not a Star Wars lightsaber or a blade from Lord of the Rings, this saber might be something even more impressive. Picture a complex spine under the fish’s eye that operates like a ratchet and pawl, laterally locking into place like two sharp arms. “They don't actually even move the saber itself,” says Smith. “They move the underlying bone that's connected to it through the locking mechanism and then that rotation is what locks it out.”

In at least one species—Centropogon australis, a breed of waspfish—the saber glows a biofluorescent lime green, while the rest of the fish glows orange-red under certain light.

Adam Summers, a biomechanist and fish specialist at the University of Washington, is currently trying to CT scan all 40,000 species of fish. Summers, who was not involved in the recent study, has already scanned 3,052 species and 6,077 specimens and has studied mail-cheeked fishes for years. And he’s never noticed the saber.

“Erectile defenses in fishes are really common,” says Summers, who was also a scientific consultant on Pixar’s Finding Nemo and Finding Dory. He isn’t referring to fish penises, but anatomical defenses that pop up when certain species are stressed or threatened. “If you’ve ever caught a fish and tried to pull it off the hook, you know the dorsal spines erect and they can poke the living crap out of you,” he says, “but that we missed one that was under the eye—sort of an eye saber—is pretty insane.”

To confirm that these fish really are related beyond the shared saber, the researchers in the new study turned to DNA sequencing. Looking at 5,280 aligned nucleotides and using 12 outgroups as controls, they built a phylogenetic, or evolutionary, tree. Once you have the tree, Smith explains, methods called ancestral character state reconstruction allow researchers to trace when characters evolved. And that may help biologists unify a group of fishes that was previously thought to comprise separate families.

“The taxonomy of Scorpaeniformes is historically muddled,” Smith explains. “The scorpionfish and stonefish relationships have been really problematic, and there have been a lot of family-level names attached to this group that are dramatically cleaned up when these groups are treated as the two main lineages rather than the 10 traditional families. It is much cleaner now and the presence of a lachrymal saber can separate the two revised families completely.”

An Ocellated Waspfish (Apistus carinatus) being skeletonized by flesh-eating beetles at the Field Museum. (Leo Smith)

When he was first dissecting the velvetfish, Smith didn’t understand what he was looking at. “I just thought they were kind of spinier or lumpier,” he says. “These fish have a lot of spines and bumps on their head. So I was like, ‘Oh, these [lachrymal] ones are kind of more interesting.’”

Smith spent years examining fish skeletons and live fish to determine how widespread this saber was. Fortunately, as a curator at the Biodiversity Institute at the University of Kansas, he has access to one of the largest libraries of fish specimens in the world.

Many of these specimens were prepared using a method called “clearing and staining,” in which scientists use a mix of liquid formaldehyde and a stomach enzyme called trypsin to dissolve muscle and other soft tissue. The result is a clear skeleton with red-tinted bones and blue-colored cartilage, like stained glass. This technique makes it easy to study the skeletal structures of vertebrates.

“People who study fishes closely often work with dead preserved fish and these kinds of really cool things don't work in an animal that isn’t mobile,” says Summers. Still, “to find this and then to realize that it’s a uniting character for a whole group of fishes is very, very cool.”

Smith isn’t sure why the fish evolved this trait. The obvious assumption is that it’s defensive, given that the projected spines expand the width of the head, making the fish harder to swallow and more likely to puncture a would-be predator. Similar defensive measures exist: the deep-sea lanternshark, for example, has glowing “lightsabers” on its dorsal spine that are believed to defend against predators. But Smith hasn’t seen the lachrymal saber used defensively, except in photographs of mail-cheeked fishes getting eaten.

“I went into this assuming it was an anti-predator, complex anatomical thing that grew that way and now as every day goes on, I start questioning that more and more,” Smith says. “Part of it is I can never get the stupid things to do it … I mean you would think if it was just anti-predator, if I bumped the tank they would immediately get them out.” The other option, he says, is that it might be for attracting mates, though he points out that both sexes appear to have the sabers.

In other words, for now, the eye spike is still a mystery.

In 2006, with Ward Wheeler, Smith found that more than 1,200 species of fish are venomous, compared to previous estimates of 200. He updated that number a decade later to between 2,386 and 2,962. He also worked on a PLOS One paper with noted ichthyologists Matt Davis and John Sparks to show that bioluminescence evolved 27 separate times in marine fish lineages. He even revised the taxonomy of butterflyfishes.

With this new finding, Smith may have disrupted the way we think about fish relationships yet again, says Sarah Gibson, an adjunct professor of biology at St. Cloud State University in Minnesota who studies Triassic fish. “I think it's a pretty important, big study,” she says. “Knowing the evolutionary relationships of a group can really impact our understanding of the evolutionary history of fishes in general.” (Gibson worked with Smith when she was doing her dissertation, but was not part of the recent study.)

Understanding the evolution of stonefish is key to their conservation, adds Summers. “You can't conserve something unless you know who it is,” he says. The mystery of the lachrymal saber “is an interesting question that's worth addressing and I’m still blown away that we missed it.”

In the end, this discovery also underscores something Smith once told The New York Times: Despite centuries of research and exploration, “we really don’t know anything about fish.”

When Did Girls Start Wearing Pink?

Smithsonian Magazine

Little Franklin Delano Roosevelt sits primly on a stool, his white skirt spread smoothly over his lap, his hands clasping a hat trimmed with a marabou feather. Shoulder-length hair and patent leather party shoes complete the ensemble.

We find the look unsettling today, yet social convention of 1884, when FDR was photographed at age 2 1/2, dictated that boys wore dresses until age 6 or 7, also the time of their first haircut. Franklin’s outfit was considered gender-neutral.

But nowadays people just have to know the sex of a baby or young child at first glance, says Jo B. Paoletti, a historian at the University of Maryland and author of Pink and Blue: Telling the Girls From the Boys in America, to be published later this year. Thus we see, for example, a pink headband encircling the bald head of an infant girl.

Why have young children’s clothing styles changed so dramatically? How did we end up with two “teams”—boys in blue and girls in pink?

“It’s really a story of what happened to neutral clothing,” says Paoletti, who has explored the meaning of children’s clothing for 30 years. For centuries, she says, children wore dainty white dresses up to age 6. “What was once a matter of practicality—you dress your baby in white dresses and diapers; white cotton can be bleached—became a matter of ‘Oh my God, if I dress my baby in the wrong thing, they’ll grow up perverted,’ ” Paoletti says.

The march toward gender-specific clothes was neither linear nor rapid. Pink and blue arrived, along with other pastels, as colors for babies in the mid-19th century, yet the two colors were not promoted as gender signifiers until just before World War I—and even then, it took time for popular culture to sort things out.

For example, a June 1918 article from the trade publication Earnshaw's Infants' Department said, “The generally accepted rule is pink for the boys, and blue for the girls. The reason is that pink, being a more decided and stronger color, is more suitable for the boy, while blue, which is more delicate and dainty, is prettier for the girl.” Other sources said blue was flattering for blonds, pink for brunettes; or blue was for blue-eyed babies, pink for brown-eyed babies, according to Paoletti.

In 1927, Time magazine printed a chart showing sex-appropriate colors for girls and boys according to leading U.S. stores. In Boston, Filene’s told parents to dress boys in pink. So did Best & Co. in New York City, Halle’s in Cleveland and Marshall Field in Chicago.

Today’s color dictate wasn’t established until the 1940s, as a result of Americans’ preferences as interpreted by manufacturers and retailers. “It could have gone the other way,” Paoletti says.

So the baby boomers were raised in gender-specific clothing. Boys dressed like their fathers, girls like their mothers. Girls had to wear dresses to school, though unadorned styles and tomboy play clothes were acceptable.

Image by Bettmann / Corbis. Like other young boys of his era, Franklin Roosevelt wears a dress. This studio portrait was likely taken in New York in 1884.

Image by TongRo Image Stock / Corbis. Pink and blue arrived as colors for babies in the mid-19th century, yet the two colors were not promoted as gender signifiers until just before World War I.

Image by Winterthur Museum and Library. In 1920, the paper doll Baby Bobby has a pink dress in his wardrobe, as well as lace-trimmed collars and underclothes.

Image by University of Maryland Costume and Textile Collection. In the Victorian era, a boy (photographed in 1870) wears a pleated skirt and high button baby boots and poses with ornate millinery.

Image by University of Maryland Costume and Textile Collection. A boy’s T-shirt from 2007 announces why he would don pink. “When boys or men wear pink, it’s not just a color but is used to make a statement—in this case, the statement is spelled out,” says the University of Maryland’s Jo Paoletti.

Image by University of Maryland Costume and Textile Collection. Sister and brother, circa 1905, wear traditional white dresses in lengths appropriate to their ages. “What was once a matter of practicality—you dress your baby in white dresses and diapers, white cotton can be bleached—became a matter of ‘Oh my God, if I dress my babies in the wrong thing, they’ll grow up perverted,’ ” says Paoletti.

Image by Ladies’ Home Journal, 1905. In 1905, the girls and boys are indistinguishable in a Mellin’s baby food advertisement. When the company sponsored a contest to guess the children’s gender, no one got all the correct answers. Notice the boys’ fussy collars, which today we consider feminine.

Image by University of Maryland Costume and Textile Collection. Rompers made from a 1960 sewing pattern would be passed down to younger siblings. Play clothes at this time could be gender neutral. An example from Hollywood is the young actress Mary Badham wearing overalls as Scout in the 1962 movie To Kill a Mockingbird.

Image by Winterthur Museum and Library. The wardrobe of the boy paper doll Percy (1910) included picture hats, skirts, tunics with knickers, knickers and long overalls.

Image by Simplicity Creative Group. A Simplicity sewing pattern from 1970, when the unisex look was all the rage. “One of the ways [feminists] thought that girls were kind of lured into subservient roles as women is through clothing,” says Paoletti. “ ‘If we dress our girls more like boys and less like frilly little girls . . . they are going to have more options and feel freer to be active.’ ”

Image by Don Berkemeyer. Paoletti is a historian at the University of Maryland and author of Pink and Blue: Telling the Girls From the Boys in America, to be published later this year.

When the women’s liberation movement arrived in the mid-1960s, with its anti-feminine, anti-fashion message, the unisex look became the rage—but completely reversed from the time of young Franklin Roosevelt. Now young girls were dressing in masculine—or at least unfeminine—styles, devoid of gender hints. Paoletti found that in the 1970s, the Sears, Roebuck catalog pictured no pink toddler clothing for two years.

“One of the ways [feminists] thought that girls were kind of lured into subservient roles as women is through clothing,” says Paoletti. “ ‘If we dress our girls more like boys and less like frilly little girls . . . they are going to have more options and feel freer to be active.’ ”

John Money, a sexual identity researcher at Johns Hopkins Hospital in Baltimore, argued that gender was primarily learned through social and environmental cues. “This was one of the drivers back in the ’70s of the argument that it’s ‘nurture not nature,’ ” Paoletti says.

Gender-neutral clothing remained popular until about 1985. Paoletti remembers that year distinctly because it was between the births of her children, a girl in ’82 and a boy in ’86. “All of a sudden it wasn’t just a blue overall; it was a blue overall with a teddy bear holding a football,” she says. Disposable diapers were manufactured in pink and blue.

Prenatal testing was a big reason for the change. Expectant parents learned the sex of their unborn baby and then went shopping for “girl” or “boy” merchandise. (“The more you individualize clothing, the more you can sell,” Paoletti says.) The pink fad spread from sleepers and crib sheets to big-ticket items such as strollers, car seats and riding toys. Affluent parents could conceivably decorate for baby No. 1, a girl, and start all over when the next child was a boy.

Some young mothers who grew up in the 1980s deprived of pinks, lace, long hair and Barbies, Paoletti suggests, rejected the unisex look for their own daughters. “Even if they are still feminists, they are perceiving those things in a different light than the baby boomer feminists did,” she says. “They think even if they want their girl to be a surgeon, there’s nothing wrong if she is a very feminine surgeon.”

Another important factor has been the rise of consumerism among children in recent decades. According to child development experts, children are just becoming conscious of their gender between ages 3 and 4, and they do not realize it’s permanent until age 6 or 7. At the same time, however, they are the subjects of sophisticated and pervasive advertising that tends to reinforce social conventions. “So they think, for example, that what makes someone female is having long hair and a dress,’’ says Paoletti. “They are so interested—and they are so adamant in their likes and dislikes.”

In researching and writing her book, Paoletti says, she kept thinking about the parents of children who don’t conform to gender roles: Should they dress their children to conform, or allow them to express themselves in their dress? “One thing I can say now is that I’m not real keen on the gender binary—the idea that you have very masculine and very feminine things. The loss of neutral clothing is something that people should think more about. And there is a growing demand for neutral clothing for babies and toddlers now, too.”

“There is a whole community out there of parents and kids who are struggling with ‘My son really doesn’t want to wear boy clothes, prefers to wear girl clothes.’ ” She hopes one audience for her book will be people who study gender clinically. The fashion world may have divided children into pink and blue, but in the world of real individuals, not all is black and white.

Correction: An earlier version of this story misattributed the 1918 quotation about pink and blue clothes to the Ladies’ Home Journal. It appeared in the June 1918 issue of Earnshaw's Infants’ Department, a trade publication.

Lunar Bat-men, the Planet Vulcan and Martian Canals

Smithsonian Magazine

Bat-Men On The Moon!
One August morning in 1835, readers of the New York Sun were astonished to learn that the Moon was inhabited. Three-quarters of the newspaper's front page was devoted to the story, the first in a series entitled "Great Astronomical Discoveries Lately Made by Sir John Herschel, L.L.D, F.R.S, &c At The Cape of Good Hope." Herschel, a well-known British astronomer, was able "by means of a telescope of vast dimensions and an entirely new principle," the paper reported, to view objects on the Moon as though they were "at the distance of a hundred yards." Each new story in the six-part series reported discoveries more fantastic than the last.

Herschel's telescope revealed lunar forests, lakes and seas, "monstrous amethysts" almost a hundred feet high, red hills and enormous chasms. Populating this surreal landscape were animals resembling bison, goats, pelicans, sheep—even unicorns. Beavers without tails walked on two legs and built fires in their huts. A ball-shaped amphibian moved around by rolling. There were moose, horned bears and miniature zebras. But the biggest surprise of all was reserved for the fourth article in the series. Herschel and his team of astronomers had spotted humanoids: bipedal bat-winged creatures four feet tall with faces that were "a slight improvement" on the orangutan's. Dubbed Vespertilio-homo (or, informally, the bat-man), these creatures were observed to be "innocent," but they occasionally conducted themselves in a manner that the author thought might not be fit for publication.

The Sun also described massive temples, though the newspaper cautioned that it was unclear whether the bat-men had built them or the structures were the remnants of a once-great civilization. Certain sculptural details—a globe surrounded by flames—led the Sun's writer to wonder whether they referred to some calamity that had befallen the bat-men or were a warning about the future.

Reaction to the series—an effort to boost circulation, which it did—ranged from amazed belief to incredulity. Herschel himself was annoyed. In a letter to his aunt Caroline Herschel, also an astronomer, he wrote, "I have been pestered from all quarters with that ridiculous hoax about the Moon—in English French Italian & German!!" The author of the piece was most likely Richard Adams Locke, a Sun reporter. The newspaper never admitted it concocted the story. It's tempting to think that we're immune to such outlandish hoaxes today, and perhaps we are. But a passage from the series reminds us that we're not as different from our forebears of almost 200 years ago as we might think. When Herschel made his supposed optic breakthrough, the Sun reported, a colleague leapt into the air and exclaimed: "Thou art the man!"

Planet Vulcan Found!
Vulcan is best known today as the fictional birthplace of the stoic Mr. Spock on "Star Trek," but for more than half a century it was considered a real planet that orbited between Mercury and the Sun. More than one respectable astronomer claimed to have observed it.

Astronomers had noticed several discrepancies in Mercury's orbit. In 1860, French mathematician Urbain Le Verrier speculated that an undetected planet exerting a gravitational pull on Mercury could account for the odd orbit. He named it Vulcan.

An astronomer named Edmond Lescarbault said he had spotted the planet the previous year. Other astronomers pored over reports of previous sightings of objects crossing in front of the Sun. Occasional sightings of planet-like objects were announced, each prompting astronomers to recalculate Vulcan's orbit. After the solar eclipse of 1878, which gave astronomers a rare opportunity to see objects normally obscured by the Sun's glare, two astronomers reported they had seen Vulcan or other objects inside Mercury's orbit.

Le Verrier was awarded the Légion d'honneur for predicting the location of a real planet: Neptune. He died in 1877 still believing he had also discovered Vulcan. It took improved photography and the acceptance of Einstein's general theory of relativity, which in 1915 explained Mercury's orbital discrepancies, for the idea to be laid to rest. The observations of the phantom planet were either wishful thinking or sunspots.

Martians Build Canals!
Percival Lowell peered through a telescope on an Arizona hilltop and saw the ruddy surface of Mars crisscrossed with canals. Hundreds of miles long, they extended in single and double lines from the polar ice caps. Bringing water to the thirsty inhabitants of an aging planet that was drying up, the canals were seen as a spectacular feat of engineering, a desperate effort by the Martians to save their world.

Lowell was an influential astronomer, and the canals, which he mapped with elaborate precision, were a topic of scientific debate during the early 20th century. We know now that the canals didn't exist, but how did this misperception begin?

In 1877, Giovanni Schiaparelli, an Italian astronomer, reported seeing canali on the surface of Mars. When his report was translated into English, canali, which in Italian means channels, was rendered as canals, which are by definition man-made.

Lowell's imagination was ignited by Schiaparelli's findings. In 1894, Lowell built an observatory in Flagstaff, Arizona, and focused on Mars. Other astronomers had noticed that some areas of the planet's surface seemed to change with the seasons—blue-green in the summer and reddish-ocher in the winter. These changes seemed to correspond with the growing and shrinking of the polar ice caps. Lowell believed that the melting caps in summer filled the canals with water that fed large areas of vegetation. He filled notebook after notebook with observations and sketches and created globes showing the vast network of waterways built by Martians.

The intricacy of Lowell's canal system is all the more mystifying because it doesn't seem to correspond to any actual features on the planet—yet he apparently saw the same canals in exactly the same places time after time. Even in Lowell's day, most other astronomers failed to see what he saw, and his theory fell into disrepute among most of the scientific community (though the public continued to embrace the notion). To this day, no one knows whether Lowell's maps were the result of fatigue, optical illusions or, perhaps, the pattern of blood vessels in his eye.

Like any romantic idea, belief in Martian canals proved hard to abandon. The possibility of life on the planet closest to ours has fascinated us for centuries and continues to do so. Lowell's canals inspired science fiction writers including H.G. Wells and Ray Bradbury. It took the Mariner missions to Mars of the 1960s and 1970s to prove that there are no canals on the Red Planet.

The Earth Is Hollow!
(and we might live on the inside)

Imagine the earth as a hollow ball with an opening at each pole. On its inner surface are continents and oceans, just like on the outer surface. That's the Earth envisioned by Capt. John Cleves Symmes, an American veteran of the War of 1812. He toured the country in the 1820s, lecturing on the hollow Earth and urging Congress to fund an expedition to the polar openings. His hope was that Earth's inner surface would be explored and that trade would be established with its inhabitants.

The hollow Earth theory wasn't entirely new—the idea of open spaces inside Earth had been suggested by ancient thinkers including Aristotle, Plato and Seneca. Caves and volcanoes gave the concept plausibility, and legends and folktales abound with hidden civilizations deep below the crust.

In 1691, to explain variations in Earth's magnetic poles, the astronomer Edmond Halley, better known for recognizing the schedule of a brilliant comet, proposed a hollow Earth consisting of four concentric spheres. The interior must be lit and inhabited, he said; the idea of the Creator failing to populate the land and provide its populace with life-giving light seemed inconceivable. Halley proposed a luminous substance that filled the cavity, and he attributed the aurora borealis to its escape through the crust at the poles.

To make a weird idea even weirder, Cyrus Teed, a 19th-century physician, alchemist and experimenter with electricity, concluded that the world was not only hollow but also that human beings were living on its inner surface. He got the idea in 1869, when an angelic vision announced (after Teed had been shocked into unconsciousness by one of his experiments) that Teed was the messiah. According to the angel, the Sun and other celestial bodies rose and set within the hollow Earth due to an atmosphere that bent light in extreme arcs. The entire cosmos, he claimed, was contained inside the sphere, which was 8,000 miles in diameter. Teed changed his name to Koresh (the Hebrew form of "Cyrus"), founded his own cult (Koreshanity) and eventually built a compound for his followers, who numbered 250, in southwestern Florida. The compound is now preserved by the state of Florida as the Koreshan State Historic Site and draws tens of thousands of visitors every year.

Venus Attacks!
In 1950, Immanuel Velikovsky published Worlds in Collision, a book that claimed cataclysmic historical events were caused by an errant comet. A psychoanalyst by training, Velikovsky cited the Old Testament book of Joshua, which relates how God stopped the Sun from moving in the sky. Moses' parting of the Red Sea, Velikovsky claimed, could be explained by the comet's gravitational pull. He theorized that in 1500 B.C., Jupiter spewed out a mass of planetary material that took the form of a comet before becoming the planet Venus.

Velikovsky was one in a long line of catastrophists, adherents of the theory that sudden, often planet-wide cataclysms account for things like mass extinctions or the formation of geological features. His book is remarkable not so much for its theories—which are unexceptional by catastrophist standards—but for its popularity and longevity. A New York Times best seller for 11 weeks, it can be found on the science shelves of bookstores to this day and enjoys glowing reviews on some Web sites.

Worlds in Collision was met with derision from scientists. Among other problems, the compositions of Venus and Jupiter are quite different, and the energy required for ejecting so much material would have vaporized the nascent planet. At a 1974 debate sponsored by the American Association for the Advancement of Science, Carl Sagan, the popular astronomer, was among the panelists opposing Velikovsky. But the attacks may have strengthened Velikovsky's standing; he struck some people as an underdog fighting the scientific establishment.

Velikovsky's ideas seemed radical a half century ago—most astronomers assumed that planetary change occurred at a slow, constant rate. His remaining adherents point to the asteroid impact that killed most of the dinosaurs 65 million years ago as evidence he was ahead of his time.

Erik Washam is the associate art director for Smithsonian.

I'll have an order of desegregation, please

National Museum of American History

We often remember the civil rights movement as a few iconic events that took place at famous landmarks—the Edmund Pettus Bridge, the National Mall. Programming intern Alex Kamins learned that it took place all over the country, including a small roadside eatery in the middle of Maryland.

I recently drove about 40 miles to go to a diner. There's nothing wrong with the diners closer to home here in Washington, D.C., but I knew the Double T Diner in Catonsville, Maryland, had a story to tell—along with some solid diner fare.

A contemporary photograph of the outside of the Double T Diner. It has a traditional chrome exterior. Several cars are parked in the parking lot in front of it.

Located on Route 40, once the main connection between Washington, D.C., and New York, the Double T offers a retro setting designed to evoke the 1950s, complete with neon colors, chrome design, and a jukebox on every table. The clientele tends to be locals looking for a bite to eat and perhaps a friendly chat. I managed to go the day before the Fourth of July and it was packed. People were lining up in droves in order to get a table. As I was by myself, I managed to sneak in under the radar and find a spot at the counter.

As popular as it is, I don't think most of the people squeezing into the Double T know that 55 years ago, this diner refused to serve African American patrons. Back in 1961, a demonstration took place there in which students took days off from classes, risked their lives, and stood outside and picketed—probably feeling fear and apprehension every minute they were there. They were fighting for one simple cause: that African Americans would be treated as equals and that the diner would drop its Jim Crow-era segregationist policy.

With the election of John F. Kennedy as president in 1960, many people (particularly African Americans) hoped that he might lead the country away from segregation. But progress was slow and Kennedy's focus seemed to be on the Soviet Union and the threat of communism spreading around the globe.

A booklet concerning 1962 nuclear fallout plans. It is yellow with red and black text. A mushroom cloud decorates the right-hand side.

However, a few events in the first couple of years of his term forced him to acknowledge that something had to be done about civil rights in America. Starting in 1960, the United States saw an influx of diplomatic representatives from Africa. Multiple African diplomats, in particular William Fitzjohn of Sierra Leone and Adam Malik Sow of Chad, were harassed and beaten at a number of establishments as they made their way along Route 40 between Washington, D.C., and the United Nations headquarters in New York. In an August 1960 article for the Washington Post titled "D.C. is a Hardship Post for Negro Diplomats," reporter Milton Viorst conveyed to his readers the state of affairs for these diplomats as they made their way to D.C.: "[The diplomat] has learned to live in 'colored' hotels, eat in 'colored' restaurants, and spend his evenings in 'colored' movies. When asked how he accepts it, he shrugs and calls it a hazard of his profession."

While Kennedy publicly apologized to Fitzjohn and Sow for what happened, he saw these incidents as a thorn in his side, a distraction from his larger agenda. According to Nick Bryant's The Bystander: John F. Kennedy and the Struggle for Black Equality, he even chastised the African diplomats (not directly, but in a private phone call with one of his advisors) for taking Route 40, saying, "Can't you tell those African ambassadors not to drive on Route 40? It's a hell of a road—I used to drive it years ago, but why would anyone want to drive it today when you can fly? Tell these ambassadors I wouldn't think of driving from New York to Washington. Tell them to fly!"

Kennedy created the Special Protocol Service Section at the State Department and installed Pedro Sanjuan as its head. Sanjuan personally visited every establishment along Route 40 and pleaded with their owners to cease segregation, presenting himself as the president's representative, complete with a letter from Kennedy. As a result of his visits, more than half of the 78 restaurants along Route 40 voluntarily complied with his request.

According to Raymond Arsenault's Freedom Riders: 1961 and the Struggle for Racial Justice, Sanjuan pleaded his case to Maryland lawmakers by saying, "when an American citizen humiliates a foreign representative or another American citizen for racial reasons, the results can be just as damaging to his country as the passing of secret information to the enemy." Kennedy, for his part, hadn't seen this crisis coming. According to Nicholas Murray Vachon's The Junction: The Cold War, Civil Rights, and the African Diplomats of Maryland's Route 40, Kennedy aide Harris Wofford remembers that when it came to civil rights, the president made decisions "hurriedly, at the last minute, in response to Southern political pressures without careful consideration of an overall strategy." He felt as though his involvement in this crisis would make America look weak in the eyes of the rest of the world and that the Soviet Union would take advantage of the situation, which it eventually did. Bryant describes in The Bystander how Soviet officials in New York "offered to sign leases on behalf of at least three United Nations-based African diplomats who had been rebuffed by white landlords."

Four students stand in the foreground of this photo depicting a three-story building with a clock tower in the back and campus grounds. Other students mill around.

Around this time, African American students saw a strategic opportunity to increase the visibility of the movement. Vachon describes how the students donned dashikis, robes, and fake accents and proceeded to go to a number of establishments along Route 40. The response they received was mixed but often cordial. One place demanded to see their credentials while others either served them in another room or treated them as regular customers.

Bryant continues his narrative by describing how Sanjuan, frustrated that "the administration was unwilling to enact new legislation," began to turn to civil rights groups such as CORE to organize and picket the remaining segregated institutions along Route 40. CORE (short for the Congress of Racial Equality) decided to initiate a Freedom Motorcade that was to be scheduled for November 11, but just a few days before it was to take place, "the majority of restaurant owners along U.S. #40 agreed to desegregate," as stated on CORE's protest flyer "End Racial Discrimination along U.S. 40 between the Delaware Memorial Bridge and Baltimore."

However, some institutions refused to comply and CORE printed up flyers for anyone who was interested. The call on those flyers was to "Help us finish the job!" and the aim was to orchestrate a number of sit-ins in the segregated establishments. On the flyers, the demonstrators outlined a step-by-step process of what would happen to the volunteers. They would be verbally abused and under the constant threat of violence. Some were arrested, but the focal point (as outlined in the flyer) was to "be courteous and stay non-violent throughout, no matter what the provocation."

A photo from the museum of the installed portion of the counter from the Greensboro diner. It includes the countertop, four chairs, and part of the back wall with a mirror on it.

Every day for the next few months, 300 to 400 students made their way into the remaining segregated establishments and would remain there until they were read the trespass law by the owner in the presence of police, as was required by Maryland law. These sit-ins were so successful that by June 1962, all of the establishments along Route 40 were completely desegregated, thanks to the Public Accommodations Law passed by the Baltimore City Council.

As for the Double T Diner…

Though I was at the diner only long enough to enjoy a burger and fries, it seemed almost surreal to picture what happened there 55 years ago. Both black and white students, as well as ordinary volunteers and prominent civil rights activists, sat at this very counter silently protesting the treatment of African Americans. I kept turning my head in many directions, looking for any glimpse of the dramatic events of the past. Diners were talking about their plans for the Fourth of July and the score of the previous night's Orioles game, and were unlikely to be thinking about the events of the past. There's no plaque or newspaper article taped to the wall to inform diners about what happened here.

As I left the diner, I kept thinking to myself how, even though we tend to think of the civil rights movement as a few seminal events, the reality was it took place all over the country, where thousands of people risked their lives every day for a more humane nation. We can still think about the history surrounding a location even as we enjoy a cheeseburger.

A photograph of the meal the author ordered; there is a hamburger with fixings and fries, along with condiments, a glass of water and eating utensils.

Alex Kamins completed a programming internship working with the Office of Programs and Strategic Initiatives. He is a graduate student at New York University's Graduate School of Arts and Sciences and is set to graduate next May.

Author(s): 
intern Alex Kamins
Posted Date: 
Monday, December 5, 2016 - 08:00

The Prussian Nobleman Who Helped Save the American Revolution

Smithsonian Magazine

The baron wore an eight-pointed silver star on his chest, etched with the word Fidelitas. “Squad, halt!” he shouted—some of the few English words he knew. He walked among the 100 men in formation at Valley Forge, adjusting their muskets. He showed them how to march at 75 steps a minute, then 120. When their discipline broke down, he swore at them in German and French, and with his only English curse: “Goddamn!”

It was March 19, 1778, almost three years into the Revolutionary War. The Continental Army had just endured a punishing winter at Valley Forge. And a stranger—former Prussian army officer Baron Friedrich Wilhelm von Steuben—was on the scene to restore morale, introduce discipline and whip the tattered soldiers into fighting shape.

To one awestruck 16-year-old private, the tall, portly baron in the long blue cloak was as intimidating as the Roman god of war. “He seemed to me the perfect personification of Mars,” recalled Ashbel Green years later. “The trappings of his horse, the enormous holsters of his pistols, his large size, and his strikingly martial aspect, all seemed to favor the idea.”

Some of the baron’s aura was artifice. Von Steuben had never been a general, despite the claim of the supporters who recommended him. A decade past his service as a captain in the Prussian army, von Steuben, 47, filled his letters home with tall tales about his glorious reception in America. But the baron’s skills were real. His keen military mind and charismatic leadership led George Washington to name him the Continental Army’s acting inspector general soon after his arrival at its camp in Valley Forge, Pennsylvania. In less than two months in spring 1778, von Steuben rallied the battered, ill-clothed, near-starving army.

“They went from a ragtag collection of militias to a professional force,” says Larrie Ferreiro, whose recent book, Brothers at Arms, tells the story of foreign support for the American Revolution. Ferreiro considers von Steuben the most important of all the volunteers from overseas who flocked to America to join the Revolution. “[It was] Steuben’s ability to bring this army the kind of training and understanding of tactics that made them able to stand toe to toe with the British,” he says.

Born into a military family in 1730—at first, his last name was the non-noble Steuben—he was 14 when he watched his father direct Prussian engineers in the 1744 siege of Prague. Enlisting around age 16, von Steuben rose to the rank of lieutenant and learned the discipline that made the Prussian army the best in Europe. “Its greatness came from its professionalism, its hardiness, and the machine-like precision with which it could maneuver on the battlefield,” wrote Paul Lockhart in his 2008 biography of von Steuben, The Drillmaster of Valley Forge.

Von Steuben spent 17 years in the Prussian army, fought in battles against Austria and Russia during the Seven Years’ War, became a captain, and attended Prussian king Frederick the Great’s elite staff school. But a vindictive rival schemed against him, and he was dismissed from the army during a 1763 peacetime downsizing. Forced to reinvent himself, von Steuben spent 11 years as court chamberlain in Hohenzollern-Hechingen, a tiny German principality. In 1769, the prince of nearby Baden named him to the chivalric Order of Fidelity. Membership came with a title: Freiherr, meaning “free lord,” or baron.

In 1775, as the American Revolution broke out, von Steuben’s boss, the Hechingen prince, ran out of money. Von Steuben, his salary slashed, started looking for a new military job. But Europe’s great armies, mostly at peace, didn’t hire him. In 1777, he tried to join the army in Baden, but the opportunity fell through in the worst way possible. An unknown person there lodged a complaint that von Steuben had “taken liberties with young boys” in his previous job, writes Lockhart. The never-proven, anonymously reported rumor destroyed von Steuben’s reputation in Germany. So he turned to his next-best prospect: America.

In September 1777, the disgraced baron sailed from France to volunteer for the Continental Army, bankrolled by a loan from his friend, French playwright Pierre-Augustin Caron de Beaumarchais. A letter from America’s diplomats in Paris, Benjamin Franklin and Silas Deane, vouched for him and reported that France’s minister of war and foreign minister had done so too.

But Deane and Franklin’s letter also falsely claimed that von Steuben was a lieutenant general and exaggerated his closeness to Frederick the Great—“the greatest public deception ever perpetrated in a good cause,” wrote Thomas Fleming in Washington’s Secret War: The Hidden History of Valley Forge. Why? Only the highest recommendation would make an impression back home. Congress, desperate for volunteers earlier in the war, had been overwhelmed by unemployed Europeans eager for military jobs, and the number of officers from overseas had begun to stir resentment among American-born officers. “Congress had sternly warned they wanted no more foreigners arriving in America with contracts for brigadier and major generalships in their trunks,” Fleming wrote. Though von Steuben didn’t exaggerate his accomplishments to Franklin and Deane, he went along with the story once he got to America—and added some flourishes of his own. At one point, he even claimed he’d turned down paid positions with the Holy Roman Empire to serve in the United States.  

Von Steuben landed at Portsmouth, New Hampshire, on December 1, 1777, with four French aides to translate for him and a large dog named Azor. His exaggerated reputation spread fast. In Boston, he met John Hancock, who hosted a dinner for him, and chatted up Samuel Adams about politics and military affairs. Next, von Steuben headed to York, Pennsylvania, the temporary American capital while the British occupied Philadelphia. Aware that the Continental Congress had soured on foreign volunteers, von Steuben offered to serve under Washington and asked to be paid only if America won the war. They took the deal and sent von Steuben to Valley Forge.

“Baron Steuben has arrived at camp,” Washington wrote soon after. “He appears to be much of a gentleman, and as far as I have had an opportunity of judging, a man of military knowledge and acquainted with the world.” Washington’s confidence in von Steuben grew quickly. Within two weeks, he made the baron acting inspector general and asked him to examine the Continental Army’s condition.

“What [Steuben] discovered was nothing less than appalling,” wrote Fleming in Washington’s Secret War. “He was confronting a wrecked army. A less courageous (or less bankrupt) man would have quit on the spot.” Unlike the American forces in New York, who had beaten the British at Saratoga in fall 1777, the army in Pennsylvania had suffered a series of defeats. When they lost the Battle of Brandywine in September 1777, the British had seized Philadelphia. Now—following common military practice of the era—they had camped for the winter. But Valley Forge, their winter quarters, was nearly as punishing as battle: hastily built huts, cruel temperatures, scarce food.

The baron found soldiers without uniforms, rusted muskets without bayonets, companies with men missing and unaccounted for. Short enlistments meant constant turnover and little order. Regiment sizes varied wildly. Different officers used different military drill manuals, leading to chaos when their units tried to work together. If the army had to fight on short notice, von Steuben warned Washington, he might find himself commanding one-third of the men he thought he had. The army had to get into better shape before fighting resumed in the spring.

So, von Steuben put the entire army through Prussian-style drills, starting with a model company of 100 men. He taught them how to reload their muskets quickly after firing, charge with a bayonet and march in compact columns instead of miles-long lines. Meanwhile, he wrote detailed lists of officers’ duties, giving them more responsibility than in English systems.

Soldiers gaped at the sight of a German nobleman, in a French-style black beaver hat, drilling poorly clothed troops. Though von Steuben raged and cursed in a garbled mixture of French, English, and German, his instructions and presence began to build morale. “If anything, the curses contributed to Steuben’s reputation as an exotic character who was good for a laugh now and then,” wrote Fleming.

And though the baron was appalled at the condition of the army he was tasked with making over, he soon developed an appreciation for its soldiers. “The genius of this nation is not in the least to be compared with that of the Prussian, Austrians, or French,” von Steuben wrote to a Prussian friend. “You say to your soldier ‘Do this and he doeth it’; but I am obliged to say [to the American soldier]: ‘This is the reason why you ought to do that: and then he does it.’”

Off the drilling field, von Steuben befriended the troops. A lifelong bachelor, he threw dinner parties rather than dine alone. One night, the guests pooled their rations to give von Steuben’s manservant the ingredients for a dinner of beefsteak and potatoes with hickory nuts. They also drank “salamanders”—cheap whiskey set on fire.

As von Steuben’s work progressed, news of the United States’ treaties of alliance with France reached Valley Forge. Washington declared May 6, 1778 a day of celebration. He asked von Steuben to ready the army for a ceremonial review.

At 9 a.m. on May 6, 7,000 soldiers lined up on the parade ground. “Rank by rank, with not a single straying step, the battalions swung past General Washington and deployed into a double line of battle with the ease and swiftness of veterans,” Fleming wrote. Then the soldiers performed the feu de joie, a ceremonial rifle salute in which each soldier in a line fires in sequence—proof of the army’s new discipline. “The plan as formed by Baron von Steuben succeeded in every particular,” wrote John Laurens, an aide to Washington.

The baron’s lessons didn’t just make the American troops look impressive in parades—under his tutelage, they became a formidable battlefield force. Two weeks after the celebration, the Marquis de Lafayette led a reconnaissance force of 2,200 to observe the British evacuation from Philadelphia. When a surprise British attack forced Lafayette to retreat, von Steuben’s compact column formation enabled the entire force to make a swift, narrow escape. At the Battle of Monmouth on June 28, the Revolution’s last major battle in the northern states, American troops showed a new discipline. They stood their ground during ferocious fire and bayonet attacks and forced the British to retreat. “Monmouth vindicated Steuben as an organizer,” wrote Lockhart. The Continental Army’s new strength as a fighting force, combined with the arrival of the French fleet off the coast of New York in July 1778, turned the tide of the war.

Von Steuben served in the Continental Army for the rest of the Revolutionary War. In 1779, he codified his lessons into the Army’s Blue Book. Officially the Regulations for the Order and Discipline of the Troops of the United States, it remained the Army training manual for decades. The Army still uses some portions of it in training manuals today, including von Steuben’s instructions on drill and ceremonies.

After the war, the governor of New York granted von Steuben a huge wilderness estate in the Mohawk Valley as a reward for his service. Von Steuben died there in November 1794 at age 64. His importance to the Revolution is evident in Washington’s last act as commanding general. In December 1783, just before retiring to Mount Vernon, he wrote von Steuben a letter of thanks for his “great Zeal, Attention and Abilities” and his “faithful and Meritorious Services.” Though his name is little known among Americans today, every U.S. soldier is indebted to von Steuben—he created America’s professional army.

Remembering Forrest Mars Jr.

National Museum of American History
Photograph of a copper chocolate pot and wooden whisk, 1740s-1760s

Hearing that Forrest Mars Jr. had passed away on July 26, 2016, put me in a sad but reflective mood. One of the giants of the chocolate world, Forrest, along with his brother John and sister Jacqueline, owned and led the $35 billion food company Mars Inc. Many people think of Forrest Mars as a businessman famous for managerial skills, but I knew him as a devoted historian of early American chocolate and a donor of important chocolate consumption objects to the Smithsonian.

Photograph of Howard Shapiro delivering a presentation before a museum audience.
Photographs of museum visitors at a hands-on station during the 2009 Smithsonian Chocolate Symposium.
In 2008, my colleagues at the museum and I first began working with Forrest when he encouraged us to host a symposium on the history of chocolate. Held at the museum in February 2009, the meeting was wildly successful. People clamored for a ticket to hear scholars reveal a fascinating and largely unknown history of international colonial trade and cultural diffusion. We quickly realized that the business history of chocolate was an outstanding prism for understanding what made the early American economy tick.
Josiah Webb & Co. advertisement, about 1860. Horses pull a wagon carrying packed goods away from the factory building, which bears the company's name.
M&M point of sale display, 1942. The display contains several tubes of wrapped M&M containers and reads "Chocolate...flavor sealed in."
Forrest’s interest in and commitment to history were strong. As Howard Shapiro, Global Director of Plant Science and External Research for Mars, explained, Forrest believed that chocolate was “inextricably linked to the soul of America.” Forrest was adamant that the story of early American chocolate should be recorded and preserved – not just in popular magazine stories and trade books, but in serious scholarly works. Along with our symposium, Mars sponsored the major opus Chocolate: History, Culture, and Heritage (several chapters of which are available on our museum’s website) as well as museum installations and demonstrations around the United States and Canada. The Mars family was so strongly committed to history that they became the naming sponsor of American Enterprise, an exhibition chronicling the business history of America, which opened just a year ago.
Screenshot of the Colonial Drinks interactive included in the American Enterprise exhibition.
But history is more than beautiful objects, archival photographs, and interpretive words. History, if you talked to Forrest Mars, was also about taste and smell. Committed to preserving all aspects of chocolate history, Forrest encouraged his company to commercially produce historic American chocolate so that everyone could smell and taste what the nation's founders would have consumed. I for one was pleasantly surprised to experience the peppery and complicated taste that defined chocolate in the late 1700s. 
Photograph of museum visitors and staff interacting during the Business of Chocolate public program.
Forrest Mars was the epitome of a high-impact, low-profile leader. He was a business manager who, with his sister and brother, expanded their father’s company into a truly global brand. Fiercely committed to the company’s five founding principles—Quality, Responsibility, Mutuality, Efficiency, and Freedom—Mars did more than watch the bottom line. Constantly looking long-range, the family made decisions that were often about the planet and people, not simply profit. They chose to fund projects like the sequencing of the cacao genome, not because it made sense on the accounting books, but because it made a huge difference to the thousands of smallholder farmers raising cacao around the world.
Photograph of curator Peter Liebhold posing in front of a Dove chocolate display in China alongside a sales agent.
I would see Forrest at least once a year at the annual meeting of the Colonial Chocolate Society. Modest and unassuming, he would sit quietly in a back corner munching on a small bag of M&Ms, listening intently to the scholarly presentations. I’m pretty sure most of the conference attendees didn’t recognize him and that would have made him happy.
 
Peter Liebhold is a co-curator of the American Enterprise exhibition and a curator in the Work and Industry Division at the National Museum of American History. 
Posted Date: 
Thursday, July 28, 2016 - 18:45

Crashing Alexander Hamilton's Birthday Weekend

Smithsonian Magazine

It’s a birthday card the recipient will never see, given the year he’s celebrating: His 258th. But were he to glimpse the missive — signed with ink, quills and looping cursive handwriting — he might blush from the posthumous attention.

“Today is Alexander Hamilton’s birthday, right?” asks an excited guest on the morning of Saturday, January 10. She’s just entered Hamilton Grange National Memorial, a preserved historical house in Harlem where Hamilton lived for two years with his wife, Elizabeth Schuyler, and seven children. The woman is off by one day — Hamilton was born on January 11 — but it hardly matters: It’s the Founding Father’s birthday weekend, and the festivities stretch across three days.

Even though Hamilton’s visage is printed on the $10 bill, the canny wunderkind statesman is often overshadowed by the likes of Jefferson, Washington and Adams. Never elected president, Hamilton rose no higher in the executive branch than Secretary of the Treasury under George Washington. He played paramount roles at the Constitutional Convention and in crafting the Federalist Papers, but — other than his currency placement — may be best known for having been killed by then-Vice President Aaron Burr in a duel in 1804. Except, that is, this past weekend, among this small but passionate band of Hamilton devotees, who gather annually to fete the late thinker, honor his legacy, and trek around New York City to his various haunts, houses and stomping grounds.

“[We want] to make it easier, quicker for people to get to the essence of Alexander Hamilton’s greatness,” says Rand Scholet, the founder of the Alexander Hamilton Awareness Society (AHA), an organization that trumpets Hamilton’s achievements and for three years has organized the annual birthday crawl. It’s in New York City that Hamilton attended school (King’s College, today’s Columbia University), practiced law, and built his home.

The weekend’s traditions are equal parts solemn and quirky: a cake cutting at the Museum of American Finance on Wall Street, where a permanent exhibition spotlights Hamilton’s economic acumen; a long-distance call dialed from the museum to Nevis, the Caribbean island where Hamilton was born; and a blessing at Trinity Church in lower Manhattan, where Hamilton is buried. Each time the group sings “Happy Birthday,” an unwritten rule is in effect: no one agrees beforehand on how to address Hamilton. As a result, the final verse is always more cacophony than song. Revelers call him “Alexander,” “Major General Hamilton,” and — if they’re feeling particularly playful — “Hammy.”

The Morris-Jumel Mansion in New York City is the last surviving headquarters of George Washington’s revolutionary army in Manhattan and one of the stops on the Alexander Hamilton birthday tour. (Trish Mayo/Morris Jumel Mansion)

On Saturday morning, Scholet dons a colorful Continental Congress-themed tie and an AHA-emblazoned sport coat, shepherding fellow fans around and eagerly rattling off Hamilton’s unheralded accomplishments: creating a blueprint for the nation’s economy; establishing the Coast Guard; and serving as Washington’s loyal aide-de-camp throughout the Revolutionary War.

“Alexander Hamilton was George Washington’s indispensable partner in war and peace for over 22 years,” Scholet says excitedly in a creaky, third-story room of Hamilton Grange.

Downstairs, a team of historians reads Hamilton’s love letters aloud. A particularly steamy passage causes one attendee to smirk and waggle his eyebrows suggestively.

Hamilton Grange serves as the weekend’s hub, a gathering place for admirers to swap anecdotes, recount favorite stories, and debate apocrypha. (No, Martha Washington probably didn’t have a pet cat named Hamilton.) Alice and Ed Magdziak — Hamilton enthusiasts from New Jersey — share an analogy.

“Hamilton is the George Harrison of the Founding Fathers,” Ed says, alluding to the talented Beatle who never quite got the same acclaim as bandmates John Lennon and Paul McCartney. Like Harrison, Hamilton might not be as well-known as his colleagues — but he has all their zeal and passion, if not more, Ed adds.

Nearby are Ian and Hartley Connett, a father-and-adult-son duo from Dobbs Ferry, New York. This is Ian’s third Hamilton birthday weekend. This year, the younger Connett managed to sell Hamilton to his father and friends, and the cadre crawls the city to celebrate.

“For me, Hamilton represents the epitome of what it means to be an American,” Hartley says, referencing Hamilton’s success despite a modest upbringing and lowly pedigree.

The Connett party’s itinerary parallels AHA’s for a time, and then veers off. They’ll have drinks at Fraunces Tavern, that iconic Manhattan watering hole that dates back to the 18th century. They’ll also venture to the Weehawken, New Jersey, site where political rival Burr killed Hamilton in a duel in 1804.

Members of the US Coast Guard, Sector New York place the traditional wreath sponsored by the Museum of American Finance next to Alexander Hamilton's grave following a blessing led by Trinity Church. (Nicole Scholet)

Burr makes some Hamilton fans bristle — “No comment,” says one of Connett’s friends brusquely when asked his thoughts — but AHA is eager to make peace. “Aaron Burr is not a villain,” Scholet says. “He actually has a very similar background to Hamilton,” he continues, noting both men lost their parents early in life. The National Park Service, which maintains Hamilton Grange, seems eager to sow peace, too. One of the docents at the site is Elizabeth Reese, a fifth great-grandniece of Burr. Her volunteering at the site is penance, she jokes, for a deadly duel two centuries ago.

When the Connetts depart for New Jersey, a different passel of Hamilton disciples migrates about 20 blocks north to the Morris-Jumel Mansion in Washington Heights, a Washington headquarters during the war that’s now a historic landmark and museum. Here, Hamilton devotees pack into a cozy parlor to hear lawyer Pooja Nair speak about Hamilton’s career in law — and that strange time he teamed up with Burr to defend a client.

“This is a legal dream team,” Nair says breathlessly. The case — dubbed the Manhattan Well Murder — was a consummate media frenzy, Nair notes, and placed Hamilton’s legal prowess in the national spotlight. Nair’s audience is rapt, and varied: Hamilton fans are young and old, male and female, and — perhaps — even Federalists and anti-Federalists.

The weekend’s events conclude at Trinity Church early Sunday afternoon, where a group of two dozen gathers at Hamilton’s grave. His tomb, a faded marble obelisk, is adorned with gifts: wreaths, flags, bows, and — in a clever nod to the first-ever Secretary of the Treasury — various denominations of American currency. It’s here that two clergy members lead a blessing, bringing the birthday weekend to a close.

“Do we have any Hamilton descendants here?” asks the rector.

“In spirit,” quips one woman, earnestly. Those around her nod in agreement.
