Antietam is my favorite battlefield because it is still largely unspoiled—it doesn’t have the huge number of memorials that dot Gettysburg and it is more pristine than Chancellorsville and the Wilderness, where roads, shopping malls and housing developments encroach on the sites. The landscape and the buildings here recall the 19th century—if you can ignore the automobiles—and a visitor is left to contemplate what happened on this otherwise peaceful, cultivated landscape on September 17, 1862—still known as America’s bloodiest day, when nearly 23,000 soldiers were wounded or lost their lives.
Occasionally as the land is worked or eroded by water, a corpse surfaces on the battlefield as it did one day in 1989, making headlines in the local press. The macabre story prompted me to write the poem: “On a Recently Discovered Casualty of the Battle of Antietam,” which was published in the Kentucky Poetry Review. It’s not a very good poem—verbally clunky—but I like the opening lines:
“Farm land, plowed land, shot plowed,/Now plowed again to uncover a biography.”
I’ve gone on to have modest success as a poet, but after that first Antietam work I have not written more than one or two “history” poems. I think my unconscious decision was that poetry is another part of my life, separate from my job as an historian. Recently, though, I started writing poetry about the Civil War as I worked on the upcoming exhibition for the National Portrait Gallery, “Dark Fields of the Republic. Alexander Gardner’s Photographs, 1859-1872.”
An 1862 photograph by Alexander Gardner depicts the dead on the field after the Battle of Antietam. (Collection of Bob Zeller)
Gardner was one of the pioneering figures in creating documentary photography. Not only an excellent technician, he made his name by taking pictures of the Antietam battlefield soon after the fighting ended, and he left a cache of indelible images of the dead and the blasted landscape. When the photographs were displayed to the public at a gallery in Manhattan, the New York Times wrote that they had “a terrible distinctness” and that the images brought the reality of modern war into the parlors and streets of the home front. It was a devastating moment for Americans as they saw the costs of war pictured so graphically and distinctly in the pitiless gaze of the camera.
BRADY’S STUDIO: “The Dead at Antietam”
Photographs of the battle
dead had a “terrible distinctness,”
horror fused in the clarity
of the new imagery
the gallery crowds
scarred yet flocking to it
unable to look away
the reality of war
the camera caught KIA
with pockets turned out
looted, shoes and socks stripped off
(We regret . . .your son
Maryland campaign. . .painlessly
. . .he didn’t suffer, at peace,
Sincerely, Col. . . . )
the old proprieties
dissolving in the acid of the new
the modern arriving, click of a shutter,
It was “the birth of the new,” not just for photography, but in the culture and society at large. The photographs contributed to the huge sea change in America with the onset of modernism in everything from manufacturing to literature. And the photographs influenced the course of the War itself. A year after Antietam, Gardner went to Gettysburg where he again documented the cost of battle.
BURIAL DETAIL, Gettysburg July 7, 1863
—more than 3,000 horses and mules were killed at the Battle of Gettysburg
it wasn’t the men
somehow you got numb to the bodies
blown apart, befouled and twisted
black like metal work
no, it was the horses
bloated in their caisson or wagon
traces, a dying struggle to get up
dead on their haunches
uncomprehending eyes frozen
bulging bewildered at what had fallen
on them shrieking
from a cloud of steel
no, it was the horses
that the Iron Brigade’s farm boy
veterans wept over as they pyred
them into a torch of smoke
Abraham Lincoln by Alexander Gardner, 1861 (National Portrait Gallery, Smithsonian Institution)
Gardner was Lincoln’s favorite photographer, and the president must have seen the photographs of Gettysburg when he visited Gardner’s Washington studio in early November 1863, just before he went to the battlefield to help dedicate the cemetery. It is my supposition that the rhetoric of the Gettysburg Address was shaped in part by Lincoln’s encounter with the photographs of the battle dead. It is there in the chasteness of Lincoln’s language as well as in the appeal that “. . .we cannot consecrate—we cannot hallow—this ground. The brave men, living and dead, who struggled here, have consecrated it, far above our poor power to add or detract.”
WORD CLOUD OVER GETTYSBURG
The crowd, vaguely gathered
about the podium, what was next?
the President suddenly
doffing his tall hat, taking
a small paper from it, rising,
or preliminary throat clearing,
the crowd distracted
barely noticing that tall figure
or hearing that reedy tenor,
the flat midwestern vowels, the words
and sentences cadenced,
cast out above them
promissory, floating up and into
then past the grey November sky,
arcing out above the earth bound
hearing only fragments, incomplete:
“cannot hallow. . .”, “last full
measure. . .,” “new birth. . .”
“of the. . .,” “. . people,”
“ by the. . . ,” “shall not perish,” “earth.”
Words uttered, flying, the President
suddenly sitting, proceedings
resumed, while unnoticed
far out and high, the words regathered
meaning, force, and fell back
to earth, seeding the dark fields.
It is this sense of hallowed ground that motivates my work on the first major retrospective of Alexander Gardner’s photography. Details of biography and history aside, the exhibition is called “Dark Fields of the Republic” because I want Gardner’s photographs to evoke for a modern audience what they did for 19th century Americans, including Lincoln, who saw them for the first time.
Gardner’s photographs are a record of the sacrifice and loss that occurred in the great national struggle over the Union and for American freedom. They are a graphic, documentary record of how heroism in history is equally mixed with tragedy–and of how all change entails loss along with the gains. In the ceaseless workings of American democracy, the sacrifice that Lincoln noted is indelibly imprinted not just in his words, but in the photographs of Alexander Gardner: “That from these honored dead we take increased devotion to that cause for which they gave the last full measure of devotion—that we here highly resolve that these dead shall not have died in vain.” The battlefield exerts its gravitational pull both on myself and, whether knowingly or not, on all Americans and our history.
“Dark Fields of the Republic. Alexander Gardner’s Photographs” opens at the National Portrait Gallery on September 17, 2015–the 153rd anniversary of the battle of Antietam, the battle that permitted Abraham Lincoln to issue the Emancipation Proclamation and so change the nature and consequences of the Civil War.
Some Christmas traditions feel as worn as that Rudolph sweater you pull out of the closet year after year, but there are many unique ways to ring in the holiday. These samplings (both edible and otherwise) of how others around the globe commemorate Christmas can help spice up your season.
While the Feast of the Seven Fishes, or Festa dei sette pesci, is a big Christmas Eve tradition in Italian-American families, it's not actually part of the traditional festivities in Italy itself. However, in southern Italy the consumption of fish on Christmas Eve is indeed a local custom (the number of actual dishes is irrelevant). Here it's less about the opulence of the feast and more about the simplicity of the dish, which is generally associated with fasting. Common Christmas Eve dishes include baccalà (salted cod), either pan-fried or baked with potatoes, and zuppa di pesce, a delicious shellfish and fish soup. They're accompanied, of course, by plenty of red wine, ample amounts of olive oil (the more generous the pour on everything and anything, the better) and, if you're lucky, Altamura bread, quite possibly the best bread in Italy.
In Italy, zuppa di pesce typically combines fish, mussels, shrimp, scallops and other underwater delicacies. (Courtesy of Flickr user Smabs Sputzer)
On Christmas Day the festivities continue with a lunchtime meal that begins with pasta such as orecchiette, a tiny, ear-shaped pasta typical of the region, flavored with tomatoes and ricotta forte cheese, followed by either more fish or a meat dish like lamb. The region's go-to dessert is panettone, a spongy loaf of sweet bread filled with golden raisins and candied orange zest. After eating, the family stays gathered around the table, perhaps with a bottle of sweet wine or sambuca and a bowl of mandarins, playing card games like sette e mezzo, scopa and tressette.
Both Italians and Peruvians look to panettone bread as a go-to Christmas dessert. (Courtesy of Flickr user N i c o l a)
While culinary traditions in France vary from region to region, one that's particularly popular in the country's north is le Reveillon. This lavish, multi-course feast typically begins after midnight mass on Christmas Eve and lasts well after the sun comes up, complete with dancing and plenty of wine. Although many French also host a Reveillon on New Year's Eve, for most this is one night of the year when culinary decadence abounds. Dishes include oysters grilled or served on the half shell, smoked salmon with crème fraîche, escargot, the iconic scallops in cream sauce dish known as coquilles St. Jacques and even roasted goose. In southern France's Provence region, a version of the meal concludes with lei tretze dessèrts, a series of 13 desserts that historically represent Jesus and the apostles. Although it sounds like a lot of sweets, these treats are small dishes ranging from dried plums to grapes to nougat that are left on the table for three days, until December 27. However, one that's almost always included is the bûche de Noël, or Yule log, a chocolate-filled cake that symbolizes the logs families gathered around for warmth as part of the Christmas tradition.
The French typically host a Reveillon on Christmas Eve, and the feast includes dishes like the classic coquilles St. Jacques, pictured above. (Courtesy of Flickr user Mark H. Anbinder)
Canada's Quebec Province (as well as New Orleans) also boasts its own version of le Reveillon, which almost always includes tourtière, a meat pie with potatoes, onions and spices. As you can imagine, Christmas Day is reserved for rest.
The main holiday meal in the Czech Republic is typically served in the evening on Christmas Eve, and for many, it's the first meal of the day. Traditional dishes include fish soup and fried carp served with potato salad, best when prepared a day in advance. Sweets are another mainstay: goodies include gingerbread cookies, apple strudel and vánočka, a buttery type of twist bread. Tables are always set for an even number of guests to ward off bad luck.
Fried carp is a staple at the Czech Christmas dinner table. (Courtesy of Flickr user elPadawan)
One of the biggest differences between Christmas in the Czech Republic and in countries like the U.S. and Canada is that the primary gift-giver for kids is Ježíšek, or 'Little Jesus,' who slips in through a window while the family is in another room. Once he arrives (usually signaled by one of the parents ringing a bell), the family moves from the dinner table to the Christmas tree to open gifts.
Many Norwegians begin their official Christmas celebrations on December 23 with 'Little Christmas,' which includes decorating the tree and snacking on risengrynsgrøt, a creamy rice pudding served hot with butter and cinnamon. Families often hide an almond in the pudding, and whoever gets it wins a marzipan pig. When the five o'clock bells ring on Christmas Eve throughout Norway, it's time for the official holiday meal to begin. Popular dishes include roasted pork belly, or ribbe, served with a side of pork sausage, potatoes and sauerkraut, and boiled cod, which is especially popular in southern coastal towns like Bergen and Stavanger. Pickled herring is usually on hand as an accompaniment for aquavit, the country's herb-infused national spirit. There's also plenty of gløgg—Norway's take on mulled wine—and dark beer. For dessert, many families make kransekake, a multitiered cake resembling a Christmas tree. It's made with almonds, confectioner's sugar and egg whites and often decorated with Norwegian flag toothpicks.
Ribbe, pictured above, is a Norwegian Christmas classic. (Courtesy of Flickr user Tomas Ekeli)
Despite Santa residing in nearby Lapland, Norwegians have their own gift-giver: Julenisse, a similar character who wears a red stocking cap, sports a long white beard and lives in the Norwegian forest. He typically shows up on Christmas Eve—sometimes knocking at the door, other times in secret, but always with plenty of gifts in hand.
Christmas day typically involves a late brunch or early dinner, and the rest of the week is spent visiting with family and friends. Tip: If you really go Norwegian with your festivities, use only white lights—not colored—for your holiday décor.
In Peru, Christmas Eve is known as Nochebuena, or Good Night, and it's when families gather together around the nativity scene (Christmas trees aren't so prevalent in most South American countries), exchange gifts and open up presents left at the manger by Santa, except in the Andean regions, where gifts are traditionally exchanged on January 6, the Feast of the Epiphany. Nochebuena is also the night of the main Christmas meal, which typically takes place after the misa de gallo, or 10 p.m. Rooster Mass. Roast turkey is usually the main dish, accompanied by side dishes such as tamales, garlic-seasoned rice and applesauce. As in southern Italy, panettone is a dessert favorite, served with a cup of steaming, spicy Peruvian hot chocolate. Champagne is also a staple, especially for toasting the birth of the baby Jesus, whose figure is placed in the nativity scene after mass. Once the kids open their gifts they head to bed, leaving the adults to continue imbibing and celebrating until the early morning hours.
At Christmas, Peruvians enjoy a slice of panettone bread with their traditional spicy hot chocolate. (Courtesy of Flickr user David Zhou)
Christmas is one of the Philippines' most anticipated and revered holidays, with Christmas Eve being the night of their traditional Nochebuena feast. Here this means dishes like queso de bola, or Edam cheese; puto bumbong, a purple-colored glutinous rice steamed in bamboo tubes and served with butter and a mixture of sugar and coconut; a sweet bread known as ensaymada; and cured ham. This family meal is served on a table decorated with brightly colored fruits and typically takes place after midnight mass, sometimes lasting until sunrise. Christmas morning is a time for visiting with extended family, notably older relatives, with lunch to follow.
At Nochebuena, Filipinos serve puto bumbong, steamed purple rice with sugar and coconut, as well as a sweet bread known as ensaymada. (Courtesy of Flickr user Anton Diaz)
Europeans first brought their Christmas traditions to Zimbabwe, and today they're part of the local fabric, including things like Santa Claus and singing Christmas carols. The holiday here is a summer affair, meaning foods such as fruits, sadza—a cornmeal staple—and roasted meat that is often cooked whole over an open fire and shared outdoors with the entire village. This can be any game meat from goat to ox to warthog. However, a particularly special Christmas treat for Zimbabweans is chicken with rice. This typically expensive meal is the main dish on Christmas Day.
In Zimbabwe, Christmastime meals often entail meat cooked over an open fire and sadza. (Courtesy of Flickr user Dan Mason)
"Do you like chocolate?" That's one of the first questions I ask museum visitors during a chocolate program I lead in the museum's Wallace H. Coulter Performance Plaza. Nearly every time, the response is unanimous: "Of course!" Most of us don't see chocolate as more than a delicious (and often addictive) candy we love to eat, especially around Halloween and Valentine’s Day. Many people are surprised, then, when I show them how chocolate has had many other uses besides being a confection.
Our interactive program on chocolate history, The Business of Chocolate: From Bean to Drink, helps visitors understand how people made chocolate in the 18th century as well as chocolate's historical roles in American business and society. In particular, I love talking with visitors about chocolate's use in military rations, both because it's a story I first heard from my grandfather and because it's a topic I was able to research during my internship.
My grandfather, Harlan Thomas Kennedy, a veteran of World War II, used to share memories of eating chocolate on the battlefront. Growing up in a poor mining family in western Kentucky during the Great Depression, my grandfather rarely had chocolate until his time in the army. While in the 82nd Airborne Division fighting in Belgium and the Netherlands during Operation Market Garden, he received field rations, including the compact K-ration. These rations each included a chocolate bar!
My grandfather's fondness for the rationed chocolate bars was so great that he would even trade cigarettes for more chocolate. These chocolate bars certainly served as a morale boost when on the front, where resources had to be limited.
Of all foods, why chocolate? Because of its caffeine and high calorie content, it was a reliable source of energy for soldiers on the front. Chocolate consumption among Americans dates back to colonial times—George Washington and the Continental Army during the Revolutionary War would have consumed chocolate as a hot beverage, for example. By World War II chocolate had become a staple of military rations.
In fact, the U.S. War Department collaborated with chocolate manufacturers to produce Ration D bars, especially suitable for extreme temperatures sometimes encountered on the front. A mixture of chocolate, sugar, powdered milk, oat flour, and vitamins provided 600 calories per serving and made a very effective survival food.
It surprises me that my grandpa enjoyed the bars so much, as they were designed more for sustenance than for taste. They were to be eaten slowly to supply maximum energy. The Ration D bars were intended to "taste a little better than a boiled potato," according to U.S. Army Quartermaster Paul Logan in a 1937 correspondence with Hershey's, and many other soldiers apparently disliked the bar since it was mostly bitter and extremely dense. I suppose if the bars were too tasty, they would've been eaten too quickly!
Due to the amount of chocolate the War Department and the Red Cross sent to soldiers abroad, chocolate on the home front was very limited. Many magazine advertisements asked civilians for wartime cooperation and understanding, as chocolate became an integral part in the war effort. Could you imagine if chocolate were rationed today?
As with many other products, chocolate's wartime production helped it develop into a mass consumer food in the decades after the war. If you are interested in learning more about chocolate's military legacy, you should check out the M&M's story, currently on display in the American Enterprise exhibition. M&M's were first introduced to World War II soldiers as a sugar-coated chocolate candy that didn't melt in your hands.
Listening to my grandfather's wartime memories of chocolate helped me realize how small things in our lives connect to bigger movements, ideas, and events. Have you talked with your family members or friends about the role of special foods like chocolate in their lives?
Sean Jacobson completed an internship in the Department of Visitor Services. He is a recent graduate of Western Kentucky University, where he majored in History and Broadcasting. The Business of Chocolate: From Bean to Drink is a free daytime program. Check our calendar for upcoming dates.
To read Laura Ingalls Wilder’s Little House books is to step out of one’s own world and into hers. For all their relentless nostalgia, it’s hard to criticize the rich detail of their luscious descriptions of life on the prairie.
Wilder has achieved folk hero status thanks to eight books she wrote and published between 1932 and 1943, and a ninth published posthumously. Based on her family’s travels as settlers in Wisconsin, Minnesota and South Dakota from the 1860s through the 1880s, the novels are considered to be semi-autobiographical, even with Wilder’s tweaking of dates, people and events.
Reading the books, though, it’s hard to resist treating the stories as a true historical account. So rich is Wilder’s detail that you’re on the prairies with her, bundled in furs during winter, or roasting in the summer sun in a full-sleeve dress. Readers don’t just get a window into her life; they walk by her side.
For this reason, her biggest fans hold the LauraPalooza conference every two years to celebrate their heroine’s life and works. But like a Russian nesting doll, within every subculture is yet another subculture, and one unexpected element of the conference: hard scientific study.
Wilder’s reflections on her life experiences have spurred some scientists to use remarkable research techniques to clarify details from the books that seem a little too incredible: the site of a schoolhouse where she taught, which hasn’t existed for decades; a terrible winter of blizzards pounding the Ingalls’ small town day after day—for months; Laura’s sister blinded by a fever that shouldn’t normally cause that kind of damage.
“Scientists are a bit like detectives,” said Barb Mayes Boustead, a presenter and co-organizer of this year’s conference, held in July at South Dakota State University. “We see something that isn’t explained, and we want to find the evidence that will help explain it. There is no shortage of aspects of Laura’s life and writings to investigate.”
From an early age, Jim Hicks had a special empathy for Laura: they both grew up on the prairie. Reading Wilder’s books next to a hearth in his small elementary school in Woodstock, Illinois, snow chipping away at the windows, he developed an interest in visiting the places Laura described in her books.
A retired high school physics teacher, Hicks strived to have his students understand physics in real-world terms. He turned his own classroom techniques on himself when trying to find the site of the Brewster school, where Laura went to teach as a mere teenager:
The Brewster settlement was still miles ahead. It was twelve miles from town. … At last she saw a house ahead. Very small at first, it grew larger as they came nearer to it. Half a mile away there was another, smaller one, and far beyond it, another. Then still another appeared. Four houses; that was all. They were far apart and small on the white prairie. Pa pulled up the horses. Mr. Brewster's house looked like two claim shanties put together to make a peaked roof. –These Happy Golden Years (1943)
Hicks knew that Laura traveled to the school in a horse cart. Thinking of horse legs as compound pendulums, swinging back and forth with a constant time period, Hicks measured the leg of his wife’s horse from knee to hoof to figure out the time of one oscillation. Then by measuring the stride length for a casual walk, Hicks could estimate the rate of travel, in this case around 3 miles per hour.
Frances B. Hicks, Jim's wife, takes measurements to calculate travel time via a horse. (Courtesy of Jim Hicks)
In These Happy Golden Years, Laura describes the drive as occurring just after the family’s noon meal in December. To get back before dark, Hicks estimated Laura’s driver, her father, had five hours of daylight to make the round trip, so one leg would take 2 ½ hours. At a horse speed of 3 miles per hour, a one-way trip would be between 7 and 8 miles, not the 12 that Laura estimated in the excerpt above.
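Hicks's back-of-the-envelope reasoning can be sketched in a few lines. The leg length and stride length below are hypothetical stand-ins for his actual measurements, and the leg is modeled as a uniform rod swinging from one end (a simple compound-pendulum approximation):

```python
import math

# Hypothetical measurements standing in for Hicks's data
g = 9.81           # gravitational acceleration, m/s^2
leg_length = 0.75  # knee-to-hoof length of the horse's leg, m (assumed)
stride = 1.0       # ground covered per half-swing of the leg, m (assumed)

# A uniform rod pivoting at one end has period T = 2*pi*sqrt(2L/3g)
period = 2 * math.pi * math.sqrt(2 * leg_length / (3 * g))

# One stride per half-swing gives the casual walking speed
speed_ms = stride / (period / 2)
speed_mph = speed_ms * 2.23694

# Five hours of winter daylight for the round trip: 2.5 hours each way
one_way_miles = speed_mph * 2.5

print(f"walking speed ≈ {speed_mph:.1f} mph")
print(f"one-way distance ≈ {one_way_miles:.1f} miles")
```

With these assumed numbers the sketch lands near 3 miles per hour and a one-way distance of just under 8 miles, consistent with Hicks's estimate.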
Finding an old map Laura drew of DeSmet, South Dakota, which showed the Brewster school in a southwesterly direction, Hicks drew a seven-to-eight mile arc on a map of DeSmet. With the help of homestead land claim records and Laura’s description that she could see the light of the setting sun glinting off the windows of a nearby shanty, Hicks predicted the most likely location of the Brewster school site, to the west of a homestead settled by the Bouchie family, the “Brewsters” of Laura’s books. Further research confirmed another book detail: Louis and Oliv Bouchie homesteaded on separate but adjoining parcels, and to satisfy homestead requirements, built the separate halves of their mutual home right on the dividing line.
The result: Laura’s peak-roofed shanty.
“Art, physics and all the liberal arts and sciences are an invention of the human spirit, to try and find answers for causes,” says Hicks. “For a true depth of understanding, to be able to think on your feet with a balanced worldview, you need both parts.”
When she’s not helping organize LauraPalooza, Barb Boustead spends her hours as a meteorologist in the National Weather Service’s Omaha office. An impassioned weather educator, she writes about the science of weather, its impacts, and how people can prepare for inclement weather on her blog, Wilder Weather.
At the end of a recent winter, Boustead revisited a Wilder book from her youth, The Long Winter, centered on the Ingalls’ trials during an exceptionally harsh South Dakota winter.
"There's women and children that haven't had a square meal since before Christmas," Almanzo put it to him. "They've got to get something to eat or they'll starve to death before spring." – The Long Winter (1940)
Boustead said she found herself wondering whether the back-to-back blizzards Laura wrote about had been as bad as she described. Boustead realized that as a meteorologist, she had the tools not only to find out, but to quantify that winter’s severity.
The winter of 1880-81 was relatively well documented for the time. Compiling records on temperature, precipitation and snow depth from 1950 through 2013, she developed a tool to assign a relative “badness” score to the weather recorded at one or more stations in a geographic area. The Accumulated Winter Season Severity Index (AWSSI, rhymes with “bossy”) assigns an absolute severity grade for how the weather compares with the entire country, and a relative severity grade for comparing regional weather. It can also track year-over-year trends.
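The accumulation idea behind an index like the AWSSI can be illustrated with a toy version. The thresholds and point values below are invented purely for illustration; the real index uses Boustead's published scoring tables for temperature, snowfall and snow depth:

```python
# Toy winter-severity index: each day earns "badness" points, and the
# season's score is the running total. All thresholds here are made up.

def daily_points(high_f, low_f, snowfall_in, snow_depth_in):
    """Score one day's winter severity (hypothetical thresholds)."""
    pts = 0
    if high_f <= 32:
        pts += 1   # subfreezing daytime high
    if low_f <= 0:
        pts += 3   # bitterly cold night
    if snowfall_in >= 1:
        pts += 2   # measurable snow fell
    if snow_depth_in >= 12:
        pts += 2   # deep snowpack on the ground
    return pts

def season_severity(days):
    """Accumulate daily points; higher totals mean a harsher winter."""
    return sum(daily_points(*d) for d in days)

# Two toy 90-day seasons: a mild one and a blizzard-after-blizzard one
mild = [(38, 25, 0, 2)] * 90
harsh = [(15, -10, 3, 20)] * 90

print(season_severity(mild), season_severity(harsh))   # prints 0 720
```

Because points only ever accumulate, a winter like 1880-81, with storm after storm and no thaws, climbs the scale relentlessly, which is what pushes it into the "extreme" category.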
Boustead applied the tool to records at weather stations from the 1800s. Every site Boustead investigated in Laura’s region in that year falls into the “extreme” category rating on the AWSSI scale, marking it as a record year for snowfall and temperature lows. The season covered in The Long Winter still ranks in the top 10 worst winters on record for South Dakota, as well as other regions of the country.
Boustead said she has found that people pay more attention to the science of weather when a good story is involved. “Scientists are told to give facts and information, and not tell a ‘story,’ since that becomes associated with fiction—but it’s not fiction,” Boustead said.
During a meeting in 2000 between medical students and an attending physician at the Albert Einstein College of Medicine in New York City, the subject of scarlet fever came up.
Beth Tarini, now an assistant professor of pediatrics at the University of Michigan, but at the time a third-year medical student on her pediatrics rotation, piped up. “You can go blind from that, can’t you?”
The attending physician said no, but hesitated when Tarini insisted, citing it as the cause of Mary Ingalls’ blindness, as recounted by her sister Laura in By the Shores of Silver Lake.
Beth Tarini, an assistant professor of pediatrics at the University of Michigan, with her collection of Wilder books. (Courtesy of Beth Tarini)
Motivated, Tarini started digging through med school books and references from the 19th century to see if she could find even a hint of verification that scarlet fever could truly be the cause of Mary’s loss of vision. Picking up the project after a decade-long hiatus, Tarini and an assistant, Sarah Allexan, broadened the search, seeking evidence of an epidemic that might have caused a spate of blindness in children.
They found something better: an actual account of Mary’s fever, facial paralysis and month-long descent into blindness in a local paper from the Minnesota town where the Ingalls family lived.
They also dug into letters between Laura and her daughter Rose, which eventually became part of Laura’s autobiography:
She was suddenly taken sick with a pain in her head and grew worse quickly. She was delirious with an awful fever. We feared for several days that she would not get well. … One morning when I looked at her I saw one side of her face drawn out of shape. Ma said Mary had had a stroke. –Pioneer Girl (Published posthumously in 2014)
Using the newspaper’s reports along with those letters, Tarini guessed Mary had been laid low by either meningitis or encephalitis. A main clue was Laura’s description of Mary’s affliction as a “spinal sickness.”
She narrowed down the likely cause to viral meningoencephalitis, an inflammation of the brain and of the membranes covering the brain and spinal cord, not only because of the prolonged headache and fever, but because of the time it took for Mary to go blind. Losing her vision progressively was more indicative of nerve damage from chronic inflammation following an infection. Laura had probably described Mary’s illness as scarlet fever because it commonly plagued children in that time, and readers would have been familiar with it as a terrible illness.
“The newspaper reports brought home the fact that Mary was a real person and her suffering was witnessed and recorded by her community,” Tarini said. “That reinforced our sense that we were getting close to truth.”
Viral encephalitis does not have a cure. Like other virus-caused illnesses, it simply must run its course. But chances are, if Mary Ingalls were similarly stricken today, her blue eyes would still see after she recovered. Hospitalized immediately for a spinal tap and full bloodwork, she would be well fed and kept hydrated, treated for seizures if they occurred, and given steroids for any vision-threatening inflammation. Tissue and fluid samples might be sent to the Centers for Disease Control to help confirm a diagnosis of viral or bacterial meningitis or encephalitis.
“It’s the ultimate differential diagnostic challenge,” Tarini said. “I don’t have the patient there to give me the history or to examine. I had to assemble the clues that history left me.”
All summer long, North Korea has tested one weapon after another, the most recent being a ballistic missile this Friday. And with each new act of belligerence, experts and the media have scrambled to make sense of what comes next. “What is North Korea Trying to Hit?” asked the Washington Post, while Bloomberg went straight for the gut-punch with “Scared About North Korea? You Aren’t Scared Enough.” For the more levelheaded readers (like Alaskans, the Americans who live within closest range of a North Korean missile, but are more concerned about bears and moose), the real question might be, why do North Koreans hate us so much? After all, the Korean War—as horrifically destructive as it was—ended more than 60 years ago. The United States hasn’t attacked North Korea once since that armistice was signed, but the little country has remained a belligerent—and since 2006, nuclear-armed—thorn in the world’s side.
Part of this perpetual aggression has to do with the personal experiences of North Korea’s founding father, dictator Kim Il-sung. Born in Japanese-occupied Korea in 1912, Kim Il-sung spent most of his childhood in China, eventually joining the Chinese Communist Party and leading a renowned band of guerrilla fighters that took on Japanese forces in northeast China (a region then called Manchuria) and in Korea. But when other members of the Chinese Communist Party accused Kim of conspiring with the Japanese, he learned that loyalty wasn’t always returned. In the 1930s, Kim also knew the Soviet Union was deporting ethnic Koreans from the Soviet Far East back to Korea, because the Soviets, too, feared Koreans would support Japan in the latter’s expansion across Asia. Even the countries that should have ostensibly been Kim’s allies from the start of his military career didn’t seem to have his home nation’s best interests at heart.
From there, things only got worse. Having joined the Soviet Red Army in 1940, Kim Il-sung was perfectly positioned for a fortuitous appointment—Stalin made him the head of the North Korean Temporary People’s Committee in 1946, and when North Korea officially became a country in 1948, Kim was declared its prime minister. (By that point the Soviet Union and the U.S. had defeated Japan and divided the Korean peninsula into two occupation zones, with the border drawn so that the U.S. would administer Seoul.)
In 1950, Kim Il-sung convinced Soviet Premier Josef Stalin to provide tanks for a war that would reunify North and South Korea. Kim nearly succeeded, advancing his troops down to the southern edge of the peninsula to take almost the entirety of South Korea. But then American forces led by General Douglas MacArthur pushed the North Koreans all the way back up to their shared border with China. When Kim begged Stalin for help, the Soviet dictator said no. And Chairman Mao Zedong of China waited two days before agreeing to assist the North Koreans.
“Imagine how one would feel knowing that you lost your country for those two days,” says James Person, director of the Center for Korean History and Public Policy at the Wilson Center. “The historical experience and Kim’s own personal experience shaped the way that the Korean leadership saw the world”—as a hostile place with no reliable allies.
After three years of fighting, the war ended in 1953. Even then only an armistice was signed—not a formal peace agreement. A new border was drawn that gave South Korea slightly more territory and created the demilitarized zone, or DMZ, between the two nations. The U.S. continued assisting South Korea in its development, and China and the Soviet Union remained nominal allies of North Korea.
North Korea’s idiosyncratic foreign policy since then can be traced in the history of three words: juche, songun and byungjin. Each has taken its turn as a central tenet for every new Kim in the North Korean dynasty. Each has colored the totalitarian regime’s reaction to the rest of the world—and especially its relationship to the U.S.
Juche (Going It Alone)
In 1972, North Korea’s socialist constitution adopted “juche—a creative application of Marxism-Leninism—as the guideline for state activities,” according to Understanding North Korea, a publication of the South Korean government. Although the word is often translated as “self-reliance,” North Korea expert Jonathan Pollack, who works with the Brookings Institution, says that doesn’t capture the whole of it. “Juche is more what I would call ‘self-determination.’ It basically says you can beg, borrow and steal from anyone in the world, but you can still tell them to go f*** themselves,” Pollack says. “There’s a level at which they’ve been so audacious through all their history—don’t get me wrong—but you kind of have to admire it.”
For Kim Il-sung, juche was the result of not trusting either of North Korea’s nominal allies, the Soviet Union and China. He already felt betrayed by their lack of support during the Korean War, and his opinion didn’t improve during the Cold War. North Korea perceived the Soviets as having capitulated to the U.S. during the Cuban Missile Crisis in 1962, Person says, and Kim’s experiences in China made him wary of fully trusting Mao Zedong. So beginning in the early 1960s, the country poured an enormous amount of resources into developing its military. By 1965, North Korea’s spending on national defense had risen to nearly 30 percent of its GDP, up from just 4.3 percent of GDP nine years earlier, reports Atsuhito Isozaki.
Kim Il-sung continued to squeeze China, the Soviet Union and Eastern European Communist countries for all he could get, all the while keeping them at arm’s length. “No foreign country has retained a major presence in the North, other than in an advisory capacity,” Pollack says. But that mistrust of other countries and determination to forge their own path backfired when the Soviet Union collapsed at the end of the 20th century, and North Korea’s go-it-alone mentality was tested by a sudden decline in foreign aid. Shortly after that, in 1994, Kim Il-sung died, and the torch of leadership passed on to his son, Kim Jong-il.
Songun (Maintaining Power With Military Might)
Kim Jong-il inherited a country—but also a devastating economic recession and famine. Without the Soviet Union providing food aid and acting as a willing trading partner, North Korea’s economy contracted by a quarter, Pollack says. Several million people died of starvation, though the exact number is unknown because the country is so secretive. But rather than invest in agricultural development, Kim Jong-il doubled down on his father’s policy of increased military spending, creating a new national ethos called songun, or “military first.”
“The military is not just an institution designed to perform the function of defending the country from external hostility,” writes researcher Han S. Park for the Korea Economic Institute of America. “Instead, it provides all of the other institutions of the government with legitimacy. [Under songun], no problem is too big or too small for the military to solve.”
In a country of only 24 million people, more than 1 million are active members of the military, and the institution has a compulsory 10-year service requirement. Not only do military personnel test weapons and train for battle, they’re also assigned more menial duties like carrying groceries for civilians and repairing plumbing. With the U.S. conducting annual military drills in South Korea to demonstrate its continued support for the South’s existence, Kim Jong-il’s military focus served to reinforce his false narrative: The country needed the military not only to survive the famine, but also to protect itself against the external threat of an aggressive U.S.
“They have a vested interest in maintaining the idea of an implacable American adversary,” Pollack says. “It enables him to explain why they’re backward: if it were not for the evil Americans, we would be x, y, and z economically advanced.”
Byungjin (Parallel Paths to Butter and Bombs)
After Kim Jong-il died in 2011, his son, Kim Jong-un, assumed office and quickly developed a new vision for the future of the country—byungjin, or “parallel paths.” The idea built on what had been established by his grandfather at the country’s origins, incorporating the ideas of both juche and songun. Introduced in 2013 as a major policy, it directed that North Korea’s economy would focus on manufacturing consumer goods and developing a nuclear deterrent.
“It’s not just about trying to get attention,” Person says of North Korea’s nascent nuclear program. “They are trying to demonstrate that they’re able to defend themselves, and they’re resisting regime change.” Kim Jong-un only needed to look at the outside world for examples of what happens when a country either stops pursuing or doesn’t fully develop a nuclear weapon program: Saddam Hussein was toppled in Iraq in 2003, and Muammar Qaddafi was killed in 2011. It doesn’t matter that North Korea isn’t entirely analogous to those countries, Person says; focusing on nuclear weapons continues to legitimize Kim Jong-un’s rule.
The manufacturing prong of byungjin indicates that unlike his father, Kim Jong-un may have also recognized that a nation of people can’t live on nuclear weapons alone. “[The isolationism] can’t go on forever,” Pollack says. “Unless North Korean leaders are content with remaining isolated and backward, there will be pressures that will erode the loyalty of central elites.”
But because North Korea has long defined its national policy in relation to the existential threat of external foes, when that happens is anyone’s guess. “They’ve had almost a 70-year history and they’re still standing,” Pollack adds. “I’m not going to hazard a prediction or presume they’re going to end soon.”
On August 21, 1915, the Conklin family departed Huntington, New York on a cross-country camping trip in a vehicle called the “Gypsy Van.” Visually arresting and cleverly designed, the 25-foot, 8-ton conveyance had been custom-built by Roland Conklin’s Gas-Electric Motor Bus Company to provide a maximum of comfort while roughing it on the road to San Francisco. The New York Times gushed that had the “Commander of the Faithful” ordered the “Jinns… to produce out of thin air… a vehicle which should have the power of motion and yet be a dwelling place fit for a Caliph, the result would have fallen far short of the actual house upon wheels which [just] left New York.”
For the next two months, the Conklins and the Gypsy Van were observed and admired by thousands along their westward route, ultimately becoming the subjects of nationwide coverage in the media of the day. Luxuriously equipped with an electrical generator and incandescent lighting, a full kitchen, Pullman-style sleeping berths, a folding table and desk, a concealed bookcase, a phonograph, convertible sofas with throw pillows, a variety of small appliances, and even a “roof garden,” this transport was a marvel of technology and chutzpah.
For many Americans, the Conklins’ Gypsy Van was their introduction to recreational vehicles, or simply RVs. Ubiquitous today, our streamlined motorhomes and camping trailers alike can trace their origins to the time between 1915 and 1930, when Americans’ urge to relax by roughing it and their desire for a host of modern comforts first aligned with a motor camping industry that had the capacity to deliver both.
The Conklins did not become famous simply because they were camping their way to California. Camping for fun was not novel in 1915: It had been around since 1869, when William H.H. Murray published his wildly successful Adventures in the Wilderness; Or, Camp-Life in the Adirondacks, America’s first “how-to” camp guidebook.
Ever since Murray, camping literature has emphasized the idea that one can find relief from the noise, smoke, crowds, and regulations that make urban life tiresome and alienating by making a pilgrimage to nature. All one needed to do was head out of town, camp in a natural place for a while, and then return home restored in spirit, health and sense of belonging. While in the wild, a camper—like any other pilgrim—had to undergo challenges not found at home, which is why camping has long been called “roughing it.” Challenges were necessary because, since Murray’s day, camping has been a recapitulation of the “pioneer” experience on the pre-modern “frontier” where the individual and family were central and the American nation was born.
Camping’s popularity grew slowly, but got more sophisticated when John B. Bachelder offered alternatives to Murray’s vision of traveling around the Adirondacks by canoe in his 1875 book Popular Resorts and How to Reach Them. Bachelder identified three modes of camping: on foot (what we call “backpacking”); on horseback, which allowed for more gear and supplies; and with a horse and wagon. This last was the most convenient, allowing for the inclusion of more gear and supplies as well as campers who were unprepared for the rigors of the other two modes. However, horse-and-wagon camping was also the most costly and geographically limited because of the era’s poor roads. In short order, Americans across the country embraced all three manners of camping, but their total number remained relatively small because only the upper middle classes had several weeks’ vacation time and the money to afford a horse and wagon.
Over the next 30 years, camping slowly modernized. In a paradoxical twist, this anti-modern, back-to-nature activity has long been technologically sophisticated. As far back as the 1870s, when a new piece of camping gear appeared, it was often produced with recently developed materials or manufacturing techniques to improve comfort and convenience. Camping enthusiasts, promoters, and manufacturers tended to emphasize the positive consequences of roughing it, but, they added, one didn’t have to suffer through every discomfort to have an authentic and satisfying experience. Instead, a camper could “smooth” some particularly distressing roughness by using a piece of gear that provided enhanced reliability, reduced bulk, and dependable outcomes.
Around 1910, the pace of camping’s modernization increased when inexpensive automobiles began appearing. With incomes rising, car sales exploded. At the same time, vacations became more widespread—soon Bachelder’s horses became motor vehicles, and all the middle classes started to embrace camping. The first RV was hand-built onto an automobile in 1904. This proto-motorhome slept four adults on bunks, was lit by incandescent lights and included an icebox and a radio. Over the course of the next decade, well-off tinkerers continued to adapt a variety of automobile and truck chassis to create even more spacious and comfortable vehicles, but a bridge was crossed in 1915 when Roland and Mary Conklin launched their Gypsy Van.
Unlike their predecessors, the wealthy Conklins modified a bus into a fully furnished, double-deck motorhome. The New York Times, which published several articles about the Conklins, was not sure what to make of their vehicle, suggesting that it was a “sublimated English caravan, land-yacht, or what you will,” but they were certain that it had “all the conveniences of a country house, plus the advantages of unrestricted mobility and independence of schedule.” The family’s journey was so widely publicized that their invention became the general template for generations of motorhomes.
The appeal of motorhomes like the Conklins’ was simple and clear for any camper who sought to smooth some roughness. A car camper had to erect a tent, prepare bedding, unpack clothes, and establish a kitchen and dining area, which could take hours. The motorhome camper could avoid much of this effort. According to one 1920s observer, a motorhome enthusiast simply “let down the back steps and the thing was done.” Departure was just as simple.
When the Conklin family traveled from New York to San Francisco in their luxury van, the press covered their travels avidly. (Courtesy of the George Grantham Bain Collection at the Library of Congress)
By the middle of the 1920s, many Americans of more modest means were tinkering together motorhomes, many along the lines made popular by the Conklins, and with the economy booming, several automobile and truck manufacturers also offered a limited number of complete motorhomes, including REO’s “speed wagon bungalow” and Hudson-Essex’s “Pullman Coach.”
In spite of their comforts, motorhomes had two distinct limitations, which ultimately led to the creation of the RV’s understudy: the trailer. A camper could not disconnect the house portion and drive the automobile part alone. (The Conklins had carried a motorcycle.) In addition, many motorhomes were large and limited to traveling only on automobile-friendly roads, making wilder landscapes unreachable. As a consequence of these limitations and their relatively high cost, motorhomes remained a marginal choice among RV campers until the 1960s. Trailers, by contrast, became the choice of people of average means.
The earliest auto camping trailers appeared during the early 1910s but they were spartan affairs: a plain device for carrying tents, sleeping bags, coolers, and other camping equipment. Soon, motivated tinkerers began to attach tent canvas on a collapsible frame, adding cots for sleeping and cupboards for cooking equipment and creating the first “tent trailers.” By mid-decade, it was possible to purchase a fully equipped, manufactured one. In 1923’s Motor Camping, J.C. Long and John D. Long declared that urban Americans were “possessed of the desire to be somewhere else” and the solution was evident—trailer camping. Tent trailering also charmed campers because of its convenience and ease. “Your camping trip will be made doubly enjoyable by using a BRINTNALL CONVERTIBLE CAMPING TRAILER,” blared an advertisement by the Los Angeles Trailer Company. The trailer was “light,” incorporated “comfortable exclusive folding bed features,” and had a “roomy” storage compartment for luggage, which left the car free to be “used for passengers.”
Tent trailering, however, had some drawbacks that became clear to Arthur G. Sherman in 1928 when he and his family headed north from their Detroit home on a modest camping trip. A bacteriologist and the president of a pharmaceutical company, Sherman departed with a newly purchased tent trailer that the manufacturer claimed could be opened into a waterproof cabin in five minutes. Unfortunately, as he and his family went to set it up for the first time, a thunderstorm erupted, and, Sherman claimed, they “couldn’t master it after an hour’s wrestling.” Everyone got soaked. The experience so disgusted Sherman that he decided to create something better.
The initial design for Sherman’s new camping trailer was a Masonite body six feet wide by nine feet long and no taller than the family’s car. On each side was a small window for ventilation, with two more up front. Inside, Sherman placed cupboards, an icebox, a stove, and built-in furniture and storage on either side of a narrow central aisle. By today’s standards, the trailer was small, boxy and unattractive, but it was solid and waterproof, and required no folding. Sherman had a carpenter build it for him for about $500, and the family took their new “Covered Wagon” (named by the children) camping the following summer of 1929. It had some problems—principally, it was too low inside—but the trailer aroused interest among many campers, some of whom offered to buy it from him. Sherman sensed an opportunity.
That fall, Sherman built two additional Covered Wagons. One was for a friend, but the other one he displayed at the Detroit Auto Show in January 1930. He set the price at $400, which was expensive, and although few people came by the display, Sherman reported that they were “fanatically interested.” By the end of the show, he had sold 118 units, the Covered Wagon Company was born, and the shape of an RV industry was set.
Over the next decade the company grew rapidly and to meet demand, trailers were built on an assembly line modeled on the auto industry. In 1936, Covered Wagon was the largest trailer producer in an expanding American industry, selling approximately 6,000 units, with gross sales of $3 million. By the end of the 1930s, the solid-body industry was producing more than 20,000 units per year and tent trailers had more or less disappeared.
Arthur Sherman’s solid-body trailer quickly gained acceptance for two principal reasons. First, Sherman was in the right place, at the right time, with the right idea. Detroit was at the center of the Great Lakes states, which at that time contained the country’s greatest concentration of campers. Furthermore, southern Michigan was the hub of the automobile industry, so a wide range of parts and skills were available, especially once the Depression dampened demand for new automobiles. And, a solid-body trailer took another step along the path of modernization by providing a more convenient space that was usable at any time.
Today’s 34-foot Class A motorhome with multiple TVs, two bathrooms, and a king bed is a version of the Conklins’ “Gypsy Van”; fifth-wheel toy haulers with popouts are the descendants of Arthur Sherman’s “Covered Wagon”; and these, in turn, are modernized versions of Bachelder’s horse-and-wagon camping. Between 1915 and 1930, Americans’ desire to escape modern life’s pressures by traveling into nature intersected with their yearning to enjoy the comforts of modern life while there. This contradiction might have produced only frustration, but tinkering, creativity, and a love of autos instead gave us recreational vehicles.
Source: Smithsonian Arctic Studies Center, Alaska Native Collections: Sharing Knowledge website, by Aron Crowell; entry on this artifact at http://alaska.si.edu/record.asp?id=196, retrieved August 28, 2012. Hunting hat, Sugpiaq (Alutiiq), Koniag; caguyaq, “bentwood hunting hat” (language: Koniag Sugpiaq, Alaska Peninsula dialect). Designs painted on bentwood hats worn by whalers and sea otter hunters summoned helping spirits that included the killer whale/wolf, raven, and giant eagle. This hat depicts a wolf’s face with down-turned mouth, long snout and nostrils, and crescent-shaped eyes. Ornaments of colored yarn and thread dangle from the eyes, and pointed ivory ears stick out from the back of the hat. The ivory side panels may represent a bird’s wings. Sea lion whiskers attached in back are said to represent a tally of the whales the owner had killed.
This object is on loan to the Anchorage Museum at Rasmuson Center, from 2010 through 2022.
If you’ve been to the Italian coast south of Rome you probably want to return. Picturesque scenery, mild weather, fertile soil and the teeming sea provide a banquet for the senses, and the easy pace of life leaves plenty of time for reverie and romance. The ancient Greeks founded the colony of Neapolis (Naples) along this stretch of Mediterranean coast around 600 B.C.; half a millennium later, the colony was absorbed by the Roman Empire. By the first century B.C., the Bay of Naples, a single day’s sail from the hustling imperial capital, had become the favorite vacation spot of the Roman elite. The entire region from Puteoli (modern Pozzuoli) in the north to Surrentum (Sorrento) in the south, embracing towns such as Pompeii and Herculaneum, was dotted with richly adorned villas of extraordinary splendor. The great Roman orator and statesman Cicero dubbed the Bay “the crater of all delights.”
The lifestyle that wealthy Romans enjoyed in their second homes is the subject of “Pompeii and the Roman Villa: Art and Culture Around the Bay of Naples,” an exhibition on view at the National Gallery of Art in Washington, D.C. through March 22. The show, which will also travel to the Los Angeles County Museum of Art (May 3-October 4), includes 150 objects, mainly from the National Archaeological Museum in Naples, but also on loan from site museums at Pompeii, Boscoreale, Torre Annunziata and Baia, as well as from museums and private collections in the United States and Europe. A number of items, including recently discovered murals and artifacts, have never been exhibited in the United States before.
Strolling among the marble busts, bronze statues, mosaics, silver tableware and colorful wall paintings, one cannot help but feel awed by the sophisticated taste and sumptuous décor that the imperial family and members of the aristocracy brought to the creation of their country houses. It’s almost enough to make one forget that it all came to an end with the devastating eruption of Mount Vesuvius in A.D. 79.
We do not know how many of the estimated 20,000 residents of Pompeii and more than 4,000 inhabitants of Herculaneum perished, but we do know a great deal about how they lived.
In their maritime pleasure palaces the elite partook of opulence and relaxation as a respite from the business in which they engaged in the city. These retreats had everything one could desire to exercise the body, mind and spirit: gymnasia and swimming pools; columned courtyards with gardens watered by an aqueduct built by the emperor Augustus; baths warmed by fire or cooled with snow from the peak of Vesuvius; libraries in which to read and write; picture galleries and extravagantly painted dining rooms in which to entertain; loggias and terraces with sweeping vistas of the lush countryside and the resplendent sea.
High-ranking Romans followed the lead of Julius Caesar and the emperors Caligula, Claudius and Nero, all of whom owned houses in Baiae (modern Baia). Augustus vacationed in Surrentum and Pausilypon (Posillipo), and bought the island of Capreae (Capri); his adopted son Tiberius built a dozen villas on the island and ruled the empire from there for the last decade of his life. Cicero had several homes around the bay (he retreated there to write), and the poet Virgil and the naturalist Pliny also had residences in the area.
The show begins with images of the owners of the villas—marble or bronze busts of emperors, members of their families and private individuals such as Gaius Cornelius Rufus, whose sculpted likeness was found in the atrium of his family’s house at Pompeii. A fresco of a seated woman lost in thought is believed to portray the matron of Villa Arianna at Stabiae, about three miles east of Pompeii. Another woman is shown admiring herself in a hand mirror that resembles one on view in an adjacent case. The back of the mirror on display is adorned with a relief of cupids fishing (perhaps to remind its user of love as she applied her makeup and donned gold jewelry similar to the bracelets and earrings that are also on view). Nearby are furnishings and accouterments such as silver wine cups embellished with hunting and mythological scenes; elaborate bronze oil lamps; figurines of muscular male deities; frescoes of opulent seaside villas; and representations of delicacies harvested from the sea—all reflecting the owners’ taste for luxury.
The next section of the exhibition is devoted to the Roman villas’ colonnaded courtyards and gardens. Frescoes depict lushly planted scenes populated by peacocks, doves, golden orioles and other birds and punctuated with stone statuary, birdbaths and fountains, examples of which are also on display. Many of these frescoes and carvings reference the fecundity of nature through depictions of wild animals (a life-size bronze boar attacked by two dogs, for instance) and of Dionysus, the god of wine, accompanied by his lascivious companions, the satyrs and maenads. Other garden decorations allude to more cerebral pursuits, such as a mosaic of Plato’s Academy convening in a sacred grove.
Image gallery (photography ⓒ Luciano Pedicini, courtesy of the Soprintendenza Speciale per i Beni Archeologici di Napoli e Pompei, except where noted):
Pompeii, Two seaside villas, probably 1st century A.D.
Pompeii, House of the Golden Bracelet, Garden Scene, 1st century B.C.–1st century A.D.
Moregine, Triclinium A, central wall, Apollo with the muses Clio and Euterpe, 1st century A.D. (photography by Alfredo and Pio Foglia)
Pompeii, House of the Gilded Cupids, Mask of Silenos, 1st century B.C.–1st century A.D.
Pompeii, Villa of T. Siminius Stephanus, Plato’s Academy, 1st century B.C.–1st century A.D.
Herculaneum, Villa dei Papiri, Bust of a kouros (youth) or Apollo, 1st century B.C.
Vesuvian region/Herculaneum, Dionysos with kantharos and maenad, 1st century A.D.
Rione Terra at Puteoli (Pozzuoli), Gaius (Caligula), 1st century A.D.
Sir Lawrence Alma-Tadema (British, 1836–1912), A Sculpture Gallery, 1874 (Hood Museum of Art, Dartmouth College, gift of Arthur M. Loew, Class of 1921A)
Pompeii, House of the Silversmith, or from Herculaneum, Skyphos entwined with ivy leaves, 1st century B.C.–1st century A.D.
Rione Terra at Puteoli (Pozzuoli), Head of the Athena Lemnia, probably early 1st century A.D.
Pompeii, House of Pansa, Lampstand, first half of 1st century A.D.
One of the highlights of the show is the frescoed walls of a dining room (triclinium) from Moregine, south of Pompeii. The frescoes were removed from the site in 1999–2001 to save them from damage due to flooding. In a coup de théâtre, three walls form a U-shaped reconstruction that allows visitors to be surrounded by murals showing Apollo, the Greek god of the arts, prophecy and medicine, and the Muses. The depiction of Apollo is an example of the most crucial theme of the exhibition: the Romans’ abiding taste for Greek culture. “They were lovers of what was to them—as it is to us—‘ancient’ Greece,” explains Carol Mattusch, an art history professor at George Mason University and guest curator of the exhibition. “They read Homeric poetry, they loved the comedies of Menander, they were followers of the philosopher Epicurus, and they collected art in the Greek style,” she says. Sometimes they even spoke and wrote Greek rather than Latin.
Cultivated Romans commissioned replicas of “old master” Greek statues, portraits of Greek poets, playwrights and philosophers, and frescoes picturing scenes from Greek literature and mythology. One of the frescoes in the exhibition depicts the classic group of Greek goddesses known as the Three Graces, and a beautifully rendered painting on marble shows a Greek battling a centaur. Also on view are a life-size marble statue of Aphrodite that emulates Greek art of the fifth century B.C. and a head of Athena that is a copy of a work by Phidias, the sculptor of the Parthenon. These expressions of Hellenic aesthetics and thought help explain why some say that the Romans conquered Greece, but Greek culture conquered Rome.
And alas, a volcano and the passage of time nearly conquered all. The cataclysmic eruption of Vesuvius entombed Herculaneum in a flow of lava and mud and spewed forth a mushroom-like cloud of debris that buried Pompeii in pumice stones and volcanic ash. Pliny the Younger wrote an eyewitness account of the eruption from across the bay in Misenum: “buildings were now shaking with violent shocks… darkness, blacker and denser than any night” and the sea “receded from the shore so that quantities of sea creatures were left stranded on dry sand” as flames burst from the volcanic cloud. His uncle Pliny the Elder, admiral of the imperial fleet based at Misenum and a naturalist, took a boat to get a closer look and died on the beach at Stabiae, reportedly asphyxiated by toxic fumes.
The final section of the exhibition is devoted to the volcano, its subsequent eruptions through the 17th century, and to the impact of the rediscovery and excavation of Pompeii and Herculaneum. The Bourbon kings who ruled Naples in the 18th century enlisted treasure hunters to tunnel into the ruins in search of statuary, ceramics, frescoes and metalwork. Their success led to later archaeological excavations that revealed nearly the entire town of Pompeii as well as the remains of Herculaneum and of country villas in the surrounding area.
The discoveries drew sightseers to the region and spawned an industry for reproductions of antiquities, along with a Pompeiian revival style in the arts. An 1856 watercolor by the Italian artist Constantino Brumidi shows his design for the Pompeiian-style frescoes that grace a conference room in the United States Capitol. And an imaginary scene painted in 1874 by the British artist Sir Lawrence Alma-Tadema, depicting a sculpture gallery from antiquity, pictures actual objects found in the excavations of Pompeii and Herculaneum, some of which are on view in the exhibition, including the spectacular carved marble table supports from Pompeii that served as models for desks in the National Post Office in Washington, D.C. Such objects epitomize the artistic excellence and fine craftsmanship the Romans demanded in the furnishing and adornment of their villas around the Bay of Naples. Leaving the exhibition, one’s thoughts turn inevitably to planning a trip to visit the archaeological sites near the Bay and experience first-hand the Mediterranean coast that has beckoned for millennia.
Jason Edward Kaufman is the Chief U.S. Correspondent for The Art Newspaper.
It may be one of the most famous dinosaurs of all time. The trouble is that shortly after being discovered, the Jurassic creature fell into an identity crisis. The name for the long-necked, heavy-bodied herbivore Brontosaurus excelsus—the great “thunder lizard”—was tossed into the scientific wastebasket when it was discovered that the dinosaur wasn't different enough from other specimens to deserve its own distinct genus.
But now, in a paleontological twist, Brontosaurus just might be back. A new analysis of dinosaur skeletons across multiple related species suggests that the original thunder lizard is actually unique enough to resurrect the beloved moniker, according to researchers in the U.K. and Portugal.
“We didn’t expect this at all at the beginning,” says study co-author Emmanuel Tschopp of the Universidade Nova de Lisboa. At first, Tschopp had been working only with Octávio Mateus of the Museu da Lourinhã to update the family tree of diplodocid dinosaurs.
But when it started looking like Brontosaurus might be real after all, they asked Roger Benson at the University of Oxford to join their team and run a statistical analysis on their findings. “Roger’s calculations gave the same results,” Tschopp says. “Brontosaurus should be valid.”
The name Brontosaurus excelsus was coined by Yale paleontologist Othniel Charles Marsh, who described the species in an 1879 paper with the mundane title “Notice of New Jurassic Reptiles.” His description was based on an enormous partial skeleton exhumed from the 150-million-year-old rock of Como Bluff, Wyoming. This “monster” of a dinosaur added to Marsh’s rapidly growing fossil collection, which already included similar species. Just two years earlier, Marsh had named Apatosaurus ajax—the “deceptive lizard”—from a partial skeleton found in the Jurassic rock of Colorado.
Brontosaurus quickly gained fame because it was among the first dinosaurs the public encountered. An illustration of its skeleton “was the first dinosaur restoration to get a wide circulation,” points out North Carolina Museum of Natural Sciences historian Paul Brinkman. This “helped spread the popularity of Brontosaurus in an era before dinosaurs proliferated widely in natural history museums.” And once museums started to put up skeletons of Brontosaurus—the first was assembled in New York City in 1905—the dinosaur’s popularity only grew.
But as anyone who has strolled through an up-to-date museum hall knows, the name Brontosaurus was eventually abandoned. In 1903, paleontologist Elmer Riggs found that most of the traits that seemed to distinguish Marsh's two specimens had to do with differences in growth, and it was more likely that the skeletons belonged to the same genus. Since it was named first, Apatosaurus had priority over Brontosaurus. Despite the extreme similarity between Marsh’s skeletons, Riggs recognized that they differed just enough to be regarded as different species. Therefore Apatosaurus ajax would remain in place, and Brontosaurus was changed to Apatosaurus excelsus. It took a while for museums to follow suit, but by the 1970s everyone finally got on board with the shift.
Bringing Brontosaurus back from scientific obsolescence would be the equivalent of restoring Pluto to the status of planet. And much like the drawn-out debate over the extraterrestrial body, the status of Brontosaurus relies on definitions and the philosophy of how scientists go about making divisions in a messy natural world.
To navigate the ever-growing number of dinosaur species, paleontologists look to a discipline called cladistics. In short, scientists pore over dinosaur skeletons to score a set of subtle characteristics, such as the way a flange of bone is oriented. Computer programs sort through those traits to create a family tree based upon who shares which characteristics. However, different researchers might pick different characteristics and score them in different ways, so any single result is a hypothesis that requires verification from other researchers independently generating the same results.
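To make the scoring idea concrete, here is a toy sketch, not the study’s actual 477-character matrix: the taxa names are real, but the trait scores below are invented purely for illustration. Each taxon is coded 0/1 for a handful of characters, and the pair sharing the most derived states groups together first.

```python
from itertools import combinations

# Toy character matrix: 1 = derived state present, 0 = absent.
# These scores are hypothetical, chosen only to illustrate the method.
MATRIX = {
    "Apatosaurus":  (1, 1, 0, 1, 0),
    "Brontosaurus": (1, 1, 1, 1, 0),
    "Diplodocus":   (0, 1, 1, 0, 1),
}

def shared_derived(a, b):
    """Count characters where both taxa show the derived state."""
    return sum(x == y == 1 for x, y in zip(MATRIX[a], MATRIX[b]))

# Score every pair; the highest-scoring pair forms the first grouping.
pairs = {frozenset(p): shared_derived(*p) for p in combinations(MATRIX, 2)}
closest = max(pairs, key=pairs.get)
print(sorted(closest))  # ['Apatosaurus', 'Brontosaurus']
```

Real cladistic software searches over entire candidate trees rather than making greedy pairings like this sketch, which is exactly why different researchers choosing and scoring characters differently can arrive at different trees.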
Here’s where Brontosaurus stomps in. Tschopp and colleagues had set out to create a revised family tree of diplodocid dinosaurs—huge sauropods found from the western United States to Portugal—with a special emphasis on sorting out how many species of Diplodocus and Apatosaurus there were. The researchers scored 477 anatomical landmarks across 81 individual dinosaurs. While the general shape of the tree supported what other paleontologists had previously proposed, there was a surprise in store: The bones Marsh originally called Brontosaurus seem to stand apart from the two Apatosaurus species, the team reports today in PeerJ.

An infographic traces the history of Brontosaurus and Apatosaurus. (StudioAM, CC BY 4.0)
Most of the differences the researchers identified were subtle anatomical features, but there are some broader traits, Tschopp says. “The most obvious and visual feature would be that Apatosaurus has a wider neck than Brontosaurus,” he says, adding that despite the title “thunder lizard,” Brontosaurus was not quite as robust as Apatosaurus.
These results came from two Brontosaurus skeletons: the one Marsh used to coin the name, and a second that could confidently be referred to as the same species. There are more possible Brontosaurus bones out there, and Tschopp studied many of them in preparation for the current study. But because the skeletons were incomplete, the bones popped up in various positions on the family tree. Now, with the new diplodocid tree in hand, Tschopp says he plans to take a second look at these bones to see whether they truly group with Brontosaurus or something else.
What remains unclear is whether Brontosaurus is here to stay. Southern Methodist University paleontologist Louis Jacobs praises the new study. “Numerous new sauropods have been discovered and named in the last couple of decades, new techniques have been developed, and we simply have a more sophisticated understanding of sauropods now,” he says. The potential resurrection comes out of this burgeoning understanding. In short, Jacobs says, “good for them, and bully for Brontosaurus!”
John Whitlock of Mount Aloysius College is more reserved. “For me the issue is how you want to define genera and species in dinosaur paleontology,” Whitlock says. Some researchers will look at this study and conclude that Brontosaurus should still be an Apatosaurus because of their close relationship, forming what paleontologists call a monophyletic group, while others will emphasize the diversity. There’s no standard rule for how such decisions should be made. “I think we are going to start seeing discussion about not only how much change is enough to split a monophyletic group but, more importantly, how do we compare characters and character states?” Whitlock says. “That's going to be a fun debate to be a part of, and I'm excited about it.”
The fate of Brontosaurus now relies upon whether other paleontologists will be able to replicate the results, as well as what those researchers think about the threshold for when dinosaurs merit different names.
Other dinosaurs are held in the same taxonomic tension. While some researchers recognize the slender tyrannosaur Gorgosaurus libratus as a unique genus, for example, others see it as a species of Albertosaurus. But the battle for Brontosaurus stands apart. The name has become a totem of the extinct creatures that continue to ignite our imaginations with scenes of Jurassic titans ambling over fern-carpeted floodplains. We’ve kept the name Brontosaurus alive because the hefty herbivore is an emissary to a past we can never visit, but that we can still connect to through the dinosaur’s magnificent bones. Protocol will ultimately dictate the dinosaur’s title, but in spirit if not in science, those old bones will always be Brontosaurus.
Antibodies are always looking out for us, and this week we're taking a closer look at them. This is the fourth post in our Antibodies Week series. Read our other posts on pregnancy tests, an-tee-bodies t-shirts, and the plague.
On June 19, 1875, William Emerson Baker invited some 2,500 guests to his farm in Needham, Massachusetts, to celebrate the launching of his "sanitary piggery." As a "good-cheer souvenir" of the event, he created the "Porcineograph." This delightful lithograph, preserved in the collections of the Library of Congress, includes an unusual pig-shaped map of the United States (a geHOGgraphy!) with pigs cavorting around the border and banners proclaiming the pork-based dishes enjoyed in every state in the nation.
Baker, having made his fortune manufacturing sewing machines, was free to indulge his passion for public health and to promote his ideas on farming reform. His sanitary piggery, though decidedly eccentric in some respects, was based on the sound idea that clean, pure food would promote human health. Raising pigs in a sanitary environment would help keep them disease-free, and healthy pigs provide wholesome meat.
Control of disease in livestock, especially pigs, was a very real concern across much of the nation. A particularly virulent infectious disease had been decimating hogs in the Midwest as early as the 1830s. By the time of Baker's party, the disease had spread to 35 states and was causing millions of dollars in losses a year. The disease was known as hog cholera, swine fever, or sometimes swine plague. In 1884 the federal government responded to the growing threats to livestock with the establishment of the Bureau of Animal Industry (BAI) within the U.S. Department of Agriculture.
The bureau's investigations into livestock health were grounded in laboratory science and the emerging field of bacteriology. The period between Baker's party and the founding of the BAI had been a particularly fruitful time for medical science—the germ theory of disease had been established, and research into the diseases of animals had paved the way. In France, Louis Pasteur developed vaccines for chicken cholera (1879) and anthrax (1881), and in 1885, demonstrations of his rabies vaccine became front-page news.
In order to develop a vaccine for hog cholera, researchers at the bureau first had to identify the cause of the disease. In 1885 they thought they had the answer when they discovered a new genus of bacteria in the infected pigs. The bacteria, named Salmonella choleraesuis, turned out to be a red herring, but investigators continued to follow this lead for nearly 20 more years.
In 1903 BAI researchers Emil A. de Schweinitz and Marion Dorset discovered the real cause of hog cholera—a virus. Scientists at this time had no reliable tools for seeing viruses, which are 10 to 100 times smaller than bacteria. Instead, they used a very fine-pored porcelain filter to remove bacteria from samples, and then tested whether or not the samples were still infectious. They found that when blood from a pig with hog cholera was filtered, the bacteria-free filtrate was still capable of causing hog cholera in healthy pigs. This proved that hog cholera resulted from something smaller than bacteria—they called this something a "filterable virus."
With the identity of this new infectious agent confirmed, scientists could now work on a vaccine to defeat it. Injections of blood serum, drawn from hyper-immune pigs whose blood was rich with antibodies to hog cholera, proved to be an effective treatment, but failed to result in lasting immunity. However, when an injection of the antibody serum was followed by a dose of the live hog cholera virus, a long-lasting immunity was conferred. In 1906 Marion Dorset patented the method for manufacturing hog cholera serum, including instructions for the serum-virus technique of vaccination, and declared his invention free for use to all persons in the country.
The serum-virus vaccine was used on U.S. farms for about 50 years. A serious drawback to this method of vaccination was that it continuously reintroduced the live virus into the environment. As long as this technique continued, eradication of the disease was impossible. A better vaccine was needed, and researchers pursued two directions in vaccine development: a killed-virus vaccine, in which the virus has been completely inactivated, and a modified, or weakened, live-virus vaccine. Either approach can be successful, if the vaccine confers immunity without causing disease. During the 1940s researchers at the Lederle pharmaceutical firm developed a modified live-virus vaccine that was ready for the market by 1951.
Although the new vaccine was more effective than both the serum-virus method and the killed-virus vaccine, outbreaks of hog cholera continued to be a significant problem. In the 1950s a consortium of interests—the government, pork producers, veterinarians, and the animal vaccine industry—opted for a different strategy: they proposed a program of eradication. Authorized by an act of Congress signed by President John F. Kennedy in September 1961, the program entailed strict surveillance, quick diagnosis, quarantine, and eventually destruction of infected animals. In January 1978 the USDA was finally able to declare the country hog cholera-free. A little over one hundred years after Baker launched his sanitary piggery, the geHOGgraphy was a healthier place for pigs.
Diane Wendt is a curator in the Division of Medicine and Science at the National Museum of American History.
Explore the Antibody Initiative website to see the museum's rich collections, which span the entire history of antibody-based therapies and diagnostics.
The Antibody Initiative was made possible through the generous support of Genentech.
Major technological innovations such as gunpowder, GPS and freeze dried ice cream are more likely to be credited to military research than to women’s undergarments, but one humble pair of lady’s stockings in the Smithsonian collections represents nothing less than the dawn of a new age—the age of synthetics.
Woven of a completely new material, the experimental stockings held in the collections of the National Museum of American History were made in 1937 to test the viability of the first man-made fiber developed entirely in a laboratory. Nylon was touted as having the strength of steel and the sheerness of cobwebs. Not that women were jonesing for the feel of steel or cobwebs around their legs, but the properties of nylon promised a replacement for luxurious but oh-so-delicate silk, which was prone to snag and run.
An essential part of every woman’s wardrobe, stockings provided the perfect vehicle for DuPont, the company responsible for the invention of nylon, to introduce their new product with glamorous aplomb. Nylon stockings made their grand debut in a splashy display at the 1939 World’s Fair in New York. By the time the stockings were released for sale to the public on May 15, 1940 demand was so high that women flocked to stores by the thousands. Four million pairs sold out in four days.
In her book Nylon: The Story of a Fashion Revolution, Susannah Handley writes: “Nylon became a household word in less than a year and in all the history of textiles, no other product has enjoyed the immediate, overwhelming public acceptance of DuPont nylon.”
The name may have become synonymous with stockings, but hosiery was merely the market of choice for nylon’s introduction. According to the American Chemical Society, it was a well-calculated decision. The society states on its website:
The decision to focus on hosiery was crucial. It was a limited, premium market. "When you want to develop a new fiber for fabrics you need thousands of pounds," said Crawford Greenewalt, a research supervisor during nylon development who later became company president and CEO. "All we needed to make was a few grams at a time, enough to knit one stocking."
The experimental stockings were manufactured by the Union Hosiery Company for DuPont, with a cotton seam and a silk welt and toe. They were black because scientists hadn’t yet figured out how to get the material to take flesh-colored dye. Another hurdle to be overcome was the fact that nylon distorted when exposed to heat. Developers eventually learned to use that property to their advantage by stretching newly sewn stockings over leg-shaped forms and steaming them. The result was silky smooth, form-fitting hosiery that never needed ironing.
Nylon’s impact on fashion was immediate, but the revolution sparked by the invention of what was originally called fiber-66 rapidly extended its tendrils down through every facet of society. It has given rise to a world of plastics that renders our lives nearly unrecognizable from civilizations of a century ago.
“It had a huge impact,” says Matt Hermes, an associate professor in the bioengineering department at Clemson University. A former chemist for DuPont, Hermes worked with some of the early developers of synthetics and wrote a biography of nylon’s inventor, Wallace Carothers. “There’s a whole series of synthetic materials that indeed came from the base idea that chemists can design and develop a series of materials that had certain properties, and the ability to do it from the most basic molecules.”
Therein lies the true revolution of nylon. Synthetic materials were not completely new, but until the breakthrough of nylon, no useful fiber had ever been synthesized entirely in the laboratory. Semi-synthetics such as rayon and cellophane were derived from a chemical process that required wood pulp as a basic ingredient, so manufacturers were stuck with the natural properties the plant material brought to the table. Rayon, for instance, was too stiff, ill-fitting and shiny to be embraced as a replacement for real silk, which is, of course, itself just plant matter chemically processed in the belly of a silkworm rather than in a test tube. Nylon, on the other hand, not only made great stockings but was manufactured through human manipulation of nothing more than “coal, air and water”—a mantra often repeated by its promoters.
The process involves heating a specific solution of carbon, oxygen, nitrogen and hydrogen compounds to a very high temperature until the molecules begin to hook together into what’s called a long-chain polymer, which can be drawn from a beaker on the tip of a stir stick like a string of pearls.
The completely unnatural features of nylon may not play as well in the marketplace today, but in 1940, on the heels of the Great Depression, the ability to dominate the elements through chemistry energized a nation weary of economic and agricultural uncertainty. “One of the largest impacts was not only the generation of the synthetic material era,” says Hermes, “but also the idea that the nation could recover from the economic doldrums that went on year after year during the depression. When new materials began to surface, these were hopeful signs.”
It was a time when industrial chemistry promised to lead humankind into a brighter future. “All around us are the products of modern chemistry,” boasted one promotional film from 1941. “Window shades, draperies, upholstery and furniture, all are made of, or covered with, something that came from a test-tube. . . in this new world of industrial chemistry the horizon is unlimited.”
The modern miracle of that first pair of nylon stockings represented the epitome of human superiority over nature, American ingenuity and a luxurious lifestyle. Perhaps more important, however, is that the new material being woven into hosiery promised to release the nation from reliance on Japan for 90 percent of its silk at a time when animosity was reaching a boiling point. In the late 1930s, the U.S. imported four-fifths of the world’s silk. Of that, 75 to 80 percent went into the making of women’s stockings—a $400,000 annual industry (about $6 million in today's dollars). The invention of nylon promised to turn the tables.
By 1942, the significance of that promise was felt in force with the outbreak of World War II. The new and improved stockings women had quickly taken to were wrenched away as nylon was diverted to the making of parachutes (previously made of silk). Nylon was eventually used to make glider tow ropes, aircraft fuel tanks, flak jackets, shoelaces, mosquito netting and hammocks. It was essential to the war effort, and it has been called “the fiber that won the war.”
Suddenly, the only stockings available were those sold before the war or bought on the black market. Women took to wearing “leg make-up” and painting seams down the backs of their legs to give the appearance of wearing proper stockings. According to the Chemical Heritage Foundation, one entrepreneur made $100,000 off of stockings produced from a diverted nylon shipment.
After the war the re-introduction of nylon stockings unleashed consumer madness that would make the Tickle-Me-Elmo craze of the 90s look tame by comparison. During the “nylon riots” of 1945 and ’46 women stood in mile-long lines in hopes of snagging a single pair. In her book Handley writes: “On the occasion when 40,000 people queued up to compete for 13,000 pairs of stockings, the Pittsburgh newspaper reported ‘a good old fashioned hair-pulling, face-scratching fight broke out in the line.’”
Nylon stockings remained the standard in women’s hosiery until 1959 when version 2.0 hit the shelves. Pantyhose—panties and stockings all in one—did away with cumbersome garter belts and allowed the transition to ever higher hemlines. But by the 1980s the glam was wearing off. By the 90s, women looking for comfort and freedom began to go au-natural, leaving their legs bare as often as not. In 2006, the New York Times referred to the hosiery industry as “An Industry that Lost its Footing.”
In the last 30 years, sheer pantyhose have done a complete 180, devolving into a fashion no-no except for sheer black, or in offices where dress codes prohibit bare legs. The mere mention of pantyhose ruffles some women’s feathers. In 2011, Forbes writer Meghan Casserly blogged that they were “oppressive,” “sexist,” “tacky” and “just plain ugly.” She was striking out against one pantyhose manufacturer’s campaign to re-invigorate the market among younger women.
Robin Givhan, fashion editor for the Washington Post, takes a more subdued stance. “I wouldn’t say they’re tacky. They’re just not a part of the conversation; they’re a non-issue in fashion.”
Even at formal affairs, Givhan says bare legs are now the norm. “I think there’s a certain generation of women that feel they’re not properly dressed in a polished way unless they’re wearing them, but I think they’re going the way of the dodo bird,” she says. “I don’t think there is even the slightest chance that they’re coming back.”
No matter, they’ve made their point. Nylon has become an indispensable part of our lives, found in everything from luggage and furniture to computers and engine parts. Chemistry and human ambition have transformed the world in which we live.
Pasta is a staple in most of our kitchens. According to a Zagat survey, about half of the American population eats pasta one to two times a week, and almost a quarter eats it three to four times a week. Needless to say, we love pasta. Seriously, who wouldn’t want a big bowl of spaghetti and meatballs or bucatini all’Amatriciana?
The popularity of pasta in America dates back to Thomas Jefferson, who had a pasta machine sent to Philadelphia in the late 18th century after he fell in love with the fashionable food while dining in Paris. He was so enamored of pasta that he even designed his own pasta machine while on a trip to Italy. The pasta dish he made famous in the United States is something we like to call macaroni and cheese. But America’s true love affair with pasta didn’t heat up until the 20th century, with a boom in immigrants hailing from Italy. When the first Italians arrived, one of the only pasta varieties available in the United States was spaghetti, which is why it is so iconic to Italian American cuisine. Of course, it is hard to find a grocery store today that doesn’t have at least half an aisle dedicated to different pasta varieties. For a sense of just how many there are, check out Pop Chart Lab’s chart of 250 shapes of pasta, The Plethora of Pasta Permutations.
Over the past few decades, pasta has been given a bad reputation by many low-carb fad diets such as the original Atkins diet. On the flip side, the much-touted Mediterranean diet includes pasta as a staple. Part of the confusion over pasta’s merits draws from the conflation of durum wheat, from which pasta is traditionally made, with the wheat used for baking bread. Durum pasta has a low glycemic index (GI) of about 25 to 45. To compare, white bread has a high GI of about 75, and potatoes have a GI of about 80, as do many breakfast cereals. According to the American Journal of Clinical Nutrition, eating foods with a low GI has been associated with higher HDL-cholesterol concentrations (the “good” cholesterol) and a decreased risk of developing diabetes and cardiovascular disease. Case-control studies have also shown positive associations between dietary glycemic index and the risk of colon and breast cancers. Pasta made with even healthier grains, such as whole grain and spelt, does add additional nutrients but does not necessarily lower the GI.
The way pasta is cooked also affects its healthiness. The healthiest and tastiest approach is to cook the pasta al dente, which means “to the tooth” or “to the bite.” If overcooked, the GI will rise; pasta cooked al dente is digested and absorbed more slowly than overcooked, mushy pasta. So to make your pasta healthy and delicious, follow the tips below.
Use a large pot: Size matters. The pasta should be swimming in a sea of water because it will expand while cooking. If there is not enough water, then the pasta will get mushy and sticky. The average pasta pot holds between 6 and 8 quarts, and it should be filled about three-quarters of the way, or with about 4 to 5 quarts of water, for 1 pound of pasta.
Fill the pot with cold water: This goes for cooking anything with water. Hot water dissolves pollutants more quickly than cold, and some pipes contain lead that can leak into the water. Just to be safe, always use cold water from the tap and run the water for a little before using.
Heavily salt the water: Adding salt to the water is strictly for flavor. You want to salt the water as it comes to a boil. While the pasta cooks, it absorbs the salt, adding just that extra touch to the overall meal. Do as Mario Batali does and salt the water until it “tastes like the sea.” To get that saltiness, Mark Ladner, executive chef at Del Posto, advises using about 1 tablespoon of salt per quart of water.
There is an old wives’ tale that says salt will also make the pasta water boil faster. This is not quite the case. Adding salt to water does elevate the boiling point, but to raise the boiling point of 1 quart of water by 1 degree Fahrenheit you would need about 3 tablespoons of salt, and that is far too much salt for anyone’s taste buds.
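For the curious, that claim can be sanity-checked with the textbook boiling-point-elevation formula, ΔT = i · Kb · m. The sketch below uses standard reference constants; converting the answer into tablespoons is an assumption that depends on the salt’s grain size.

```python
# Back-of-envelope check of "about 3 tablespoons of salt to raise
# 1 quart of water by 1 degree F." Constants are standard textbook
# values; the tablespoon conversion depends on grain size.
KB_WATER = 0.512        # ebullioscopic constant of water, degC*kg/mol
I_NACL = 2.0            # van 't Hoff factor for NaCl (ideal dissociation)
NACL_G_PER_MOL = 58.44  # molar mass of NaCl, g/mol
QUART_KG = 0.946        # mass of 1 US quart of water, kg

delta_t_c = 1.0 / 1.8                         # a 1 degF rise, in degC
molality = delta_t_c / (I_NACL * KB_WATER)    # mol NaCl needed per kg water
grams = molality * QUART_KG * NACL_G_PER_MOL  # grams of salt per quart
print(f"{grams:.0f} g of salt per quart")     # about 30 g
```

Roughly 30 grams is on the order of 2 tablespoons of fine table salt, or closer to 3 tablespoons of a coarser kosher salt, so the figure above is in the right ballpark, and either way far saltier than anyone would want.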
Skip the olive oil: Olive oil is said to prevent the pot from boiling over and to keep the pasta from sticking together. But the general consensus is that it does more harm than good: it can prevent the sauce from sticking to the pasta. Since oil is less dense than water and is composed of hydrophobic molecules, it creates a layer across the top of the water. When the pasta is drained, it is poured through this oiled layer and leaves a fresh coat of oil on the pasta.
However, if you are not using a sauce or are using an olive oil base, then the oil has little effect.
Make sure the water is boiling: For all the impatient cooks out there, just wait that extra minute until the water is boiling with big bubbles. The boiling temperature is what prevents the pasta from getting mushy. That first plunge into the boiling water is critical to the texture of the final product. It will also help you time the pasta better.
Stir: Do not forget to stir. It may sound obvious, but this simple step can easily be forgotten through everyday distractions and the rush of cooking dinner. Without stirring, the pasta will for sure stick together and cook unevenly.
Take the lid off: Once you add the pasta, wait for the water to come back to a rolling boil and then remove the lid. This is just so you don’t have that white foam exploding over the edges of your pot like Mt. Vesuvius. An alternative tip from Lidia Bastianich is to leave the lid on but keep it propped open with a wooden spoon.
Cook, Time & Test: Yes, you can follow the timing on the box or package of pasta. But, the best timer is your mouth. Chef and cookbook author Jacob Kenedy says in his book The Geometry of Pasta to “start tasting the pasta at 15-20 second intervals, from a minute or two before you think the pasta might be ready.”
If serving the pasta with a sauce, Chef Michael Chiarello recommends taking the pasta out at about 4 minutes before the package time. Then add it to the sauce and let it finish cooking for a minute or two until it is al dente. This method should be used with only a proportionate amount of sauce. You do not want to have a huge pot of sauce for a pound or less of pasta. It is a great idea to make extra sauce, especially to put some in the freezer for another day or to serve on the side.
For a completely different take on cooking pasta, follow this rule from Mary Ann Esposito:
“My rule for cooking dry store bought pasta is to bring the water to a rapid boil; stir in the pasta and bring the water back to a boil. Put on the lid and turn the heat off. Set the timer for 7 minutes. Works beautifully for cuts like spaghetti, ziti, rigatoni and other short cuts of pasta.”
Don’t drain all of the pasta water: Pasta water is a great addition to the sauce. Add about 1/4 to 1/2 cup, or a ladleful, of the pasta water to your sauce before adding the pasta. The salty, starchy water not only adds flavor but helps glue the pasta and sauce together; it will also help thicken the sauce.
The way you drain the pasta can also affect the flavor and texture. If cooking long pasta such as linguine or spaghetti, try using tongs or a pasta fork to transfer the pasta from the water to the sauce. You want to marry the sauce and the pasta as quickly as possible. With short pasta, it is ideal to have a pasta pot with a built-in strainer, or to use a colander in the sink. Just make sure you don’t let the pasta sit too long or it will stick together.
Don’t rinse cooked pasta: Adding oil to pasta is not the only culprit in preventing the sauce and pasta from harmoniously mixing. Rinsing the cooked pasta under water does just the same. According to Giada De Laurentiis in her cookbook Everyday Pasta, “the starch on the surface contributes flavor and helps the sauce adhere.” If you rinse the pasta, you rinse away the starch.
Do you have any secrets to cooking the perfect pasta?
Valentine’s Day is known as a time for people to send love notes, including anonymous ones signed “your secret admirer.” But during the Victorian era and the early 20th century, February 14 was also a day on which unlucky victims could receive “vinegar valentines” from their secret haters.
Sold in the United States and Britain, these cards featured an illustration and a short line or poem that, rather than offering messages of love and affection, insulted the recipient. They were used as an anonymous medium for saying mean things that their senders wouldn’t dare say to someone’s face—a concept that may sound familiar to today’s readers. Scholar Annebella Pollen, who has written an academic paper on vinegar valentines, says that people often ask her whether these cards were an early form of “trolling.”
“We like to think that we’re living in these terrible times,” she says. “But actually if you look at intimate history, things weren’t always so rosy.”
In the 1910s, an anonymous postcard might berate a couple if the woman were perceived as dominating the man. The same sort of arguments were made against women’s suffrage. (Collectors Weekly)
People sent vinegar valentines as far back as at least 1840. Back then, they were called “mocking,” “insulting,” or “comic” valentines—“vinegar” seems to be a modern description. They were especially popular during the mid-19th century, when both the U.S. and Britain caught Valentine’s Day fever, a time talked about as “a Valentine’s craze or Valentine’s mania,” Pollen says. “The press was always talking about this phenomena … These were new, kind of mind-boggling quantities, these millions and millions of cards,” both sweet and sour.
Printers mass-produced Valentine cards that ranged from the expensive, ornate, and sentimental kind to the vinegar variety, which were cheap. “They were designed to expand this holiday into something that could include a whole range of different people and a whole range of different emotions,” she says.
Before these mass-produced cards hit the market, people had made their own valentines by hand, both sentimental and vinegar (so far, the historical examples of nicer valentines predate the meaner ones). Pollen argues that although manufacturers didn’t invent vinegar valentines, they expanded upon them. In Barry Shank’s book on greeting cards and American business culture, he writes that vinegar valentines “were a part of the valentine craze from the earliest years of its commercialization.”
Vinegar valentines could be lightly teasing or truly nasty—such as those that suggested the reader commit suicide. And many of them were written as though these negative thoughts were popular opinion. One, for example, told the reader that “Everyone thinks you an ignorant lout.”
Some warded off unwanted suitors, while others made fun of people for drinking too much, putting on airs, or engaging in excessive public displays of affection. There were cards telling women they were too aggressive or accusing men of being too submissive, and cards that insulted any profession you could think of—artist, surgeon, saleslady, etc.
So specialized were these cards, particularly those sold in the U.S., Shank writes, that they actually “documented the changing shape of the middle classes.” Throughout the 19th and early 20th centuries, their subjects shifted “from sailor, carpenter, and tailor to policeman, clerk, and secretary.”
And who could blame them? Just as card makers today sell valentines targeted for siblings, in-laws, grandparents, or pets, manufacturers during Valentine’s Day’s heyday saw these insulting messages as a way to make money, and it’s clear that consumers liked what they were selling. According to the writer Ruth Webb Lee, by the mid-19th century, vinegar valentines represented about half of all valentine sales in the U.S.
Image by Royal Pavilion & Museums, Brighton & Hove. Vinegar valentine's card, c. 1875. Shows a young woman throwing a bucket of water at a man. Bears message: 'Here's a pretty cool reception, At least you'll say there's no deception, It says as plain as it can say, Old fellow you'd best stop away.'
Image by Royal Pavilion & Museums, Brighton & Hove. Vinegar valentine's card, c. 1875. Shows a man bearing a picture of a heart struck by arrows and the title 'Pity a Poor Wounded Heart'. Bears message: 'Tis said you share your love with many. But I believe you have not any At least enough to give away. You keep it for yourself they say.'
Image by Royal Pavilion & Museums, Brighton & Hove. Vinegar valentine's card, c. 1875. Shows a drunken man holding on to a lamp post. Bears message: 'The kiss of the bottle is your heart's delight, And fuddled you reel home to bed every night, What care you for damsels, no matter how fair! Apart from your liquor, you've no love to spare.'
Image by Royal Pavilion & Museums, Brighton & Hove. Vinegar valentine's card, c. 1875. Shows a miserable woman holding several books: 'Pray do you ever mend your clothes, Or comb your hair? Well, I suppose You've got no time, for people, say, You're reading novels all the day.'
Image by Royal Pavilion & Museums, Brighton & Hove. Write You Down An Ass? Tis Done Sir. Vinegar valentine's card, 19th century. Shows a man in black holding a picture of another man. Bears message: 'Oh what a pretty Valentine, And so like you, friend of mine For every one says you're an ass, And other donkeys quite surpass.'
Image by Royal Pavilion & Museums, Brighton & Hove. Vinegar valentine's card, c. 1875. Shows a middle aged woman looking at a drawing of a cat in female dress. Bears message: 'Why do they call you a nasty old cat, And say many things a deal ruder than that, 'Tis from envy perhaps of your manifold graces, How would it not please you to claw in their faces'.
Image by Royal Pavilion & Museums, Brighton & Hove. Must Settle Down Sometime, But Won't Throw Himself Away Too Early. Vinegar valentine's card, 19th century. Shows a haggard whiskered man leaning against a bar, smoking.
Image by Royal Pavilion & Museums, Brighton & Hove. Where Ignorance is Bliss, 'Tis Folly to be Wise. Vinegar valentine's card, 19th century. Shows a shy woman in black. Bears message: 'Why maiden why, are you so very shy? Pray don't imagine for a moment I, Am on the point of making love to you, For you are much mistaken if you do.'
Yet not everyone was a fan of these mean valentines. In 1857, The Newcastle Weekly Courant complained that “the stationers’ shop windows are full, not of pretty love-tokens, but of vile, ugly, misshapen caricatures of men and women, designed for the special benefit of those who by some chance render themselves unpopular in the humbler circles of life.”
Although scholars don’t know how many of them were sent as a joke—the someecards of their day—or how many were meant to harm, it is clear that some people took their message seriously. In 1885, London’s Pall Mall Gazette reported that a husband shot his estranged wife in the neck after receiving a vinegar valentine that he could tell was from her. Pollen also says there was a report of someone committing suicide after receiving an insulting valentine—not completely surprising, considering that’s exactly what some of them suggested.
“We see on Twitter and on other kinds of social media platforms what happens when people are allowed to say what they like without fear of retribution,” she says. “Anonymous forms [of communication] do facilitate particular kinds of behavior. They don’t create them, but they create opportunities.”
Compared to other period cards, there aren’t very many surviving specimens of vinegar valentines. Pollen attributes this to the fact that people probably didn’t save nasty cards that they got in the mail. They were more likely to preserve sentimental valentines like the ones people exchange today.
These cards are a good reminder that no matter how much people complain that the holiday makes them feel either too pressured to buy the perfect gift or too sad about being single, it could be worse. You could get a message about how everyone thinks you’re an ass.
In the best tradition of skulduggery, claim and counterclaim, foosball (or table football), that simple game of bouncing little wooden soccer players back and forth on springy metal bars across something that looks like a mini pool table, has origins mired in confusion.
Some say that in a sort of spontaneous combustion of ideas, the game erupted in various parts of Europe simultaneously sometime during the 1880s or ’90s as a parlor game. Others say that it was the brainchild of Lucien Rosengart, a dabbler in the inventive and engineering arts who had various patents, including ones for railway parts, bicycle parts, the seat belt and a rocket that allowed artillery shells to be exploded while airborne. Rosengart claimed to have come up with the game toward the end of the 1930s to keep his grandchildren entertained during the winter. Eventually his children’s pastime appeared in cafés throughout France, where the miniature players wore red, white and blue to remind everyone that this was the result of the inventiveness of the superior French mind.
Then again, Alexandre de Finisterre has many followers, who claim that he came up with the idea while bored in a hospital in the Basque region of Spain, recovering from injuries sustained in a bombing raid during the Spanish Civil War. He talked a local carpenter, Francisco Javier Altuna, into building the first table, inspired by the concept of table tennis. Finisterre patented his design for futbolín in 1937, the story goes, but the paperwork was lost in a storm when he had to do a runner to France after General Franco's fascist coup d'état. (Finisterre would also become a notable footnote in history as one of the first airplane hijackers.)
While it’s debatable whether Señor Finisterre actually did invent table football, the indisputable fact is that the first-ever patent for a game using little men on poles was granted in Britain, to Harold Searles Thornton, an indefatigable Tottenham Hotspur supporter, on November 1, 1923. His uncle, Louis P. Thornton, a resident of Portland, Oregon, visited Harold, brought the idea back to the United States and patented it there in 1927. But Louis had little success with table football; the patent expired and the game descended into obscurity, no one realising the dizzying heights it would scale decades later.
The world would have been a much quieter place if the game had stayed as just a children’s plaything, but it spread like a prairie fire. The first league was established in 1950 by the Belgians, and in 1976, the European Table Soccer Union was formed. Although how they called it a ‘union’ when the tables were different sizes, the figures had different shapes, none of the handles were the same design and even the balls were made of different compositions is a valid question. Not a unified item amongst them.
The game still doesn’t even have a single set of rules – or one name. You’ve got lagirt in Turkey, jouer au baby-foot in France, csocso in Hungary, cadureguel-schulchan in Israel, plain old table football in the UK, and a world encyclopedia of ridiculous names elsewhere around the globe. The American “foosball” (where a player is called a “fooser”) borrowed its name from the German version, “Fußball,” whence it arrived in the United States. (And, really, you can’t not love a game where they have a table with two teams made up only of Barbie dolls, or that is played in tournaments with such wonderful names as the 10th Annual $12,000 Bart O’Hearn Celebration Foosball Tournament, held in Austin, Texas, in 2009.)
Foosball re-arrived on American shores thanks to Lawrence Patterson, who was stationed in West Germany with the U.S. military in the early 1960s. Seeing that table football was very popular in Europe, Patterson seized the opportunity and contracted a manufacturer in Bavaria to construct a machine to his specification to export to the US. The first table landed on American soil in 1962, and Patterson immediately trademarked the name “Foosball” in America and Canada, giving the name “Foosball Match” to his table.
Patterson originally marketed his machines through the “coin” industry, where they would be used mainly as arcade games. Foosball became outrageously popular, and by the late ’60s, Patterson was selling franchises, which allowed partners to buy the machines and pay a monthly fee to be guaranteed a specific geographical area where only they could place them in bars and other locations. Patterson sold his Foosball Match table through full-page ads in such prestigious national publications as Life, Esquire and the Wall Street Journal, where they would appear alongside other booming franchise-based businesses such as Kentucky Fried Chicken. But it wasn’t until 1970 that the U.S. had its own home-grown table, when two Bobs, Hayes and Furr, got together to design and build the first all-American-made foosball table.
From the perspective of the second decade of the third millennium, with ever more sophisticated video games, digital technology and plasma televisions, it’s difficult to imagine the impact that foosball had on the American psyche. During the 1970s, the game became a national phenomenon.
Sports Illustrated and “60 Minutes” covered tournaments where avid and addicted players, both amateur and professional, traveled the length and breadth of America following big bucks prizes, with the occasional Porsche or Corvette thrown in as an added incentive. One of the biggest was the Quarter-Million Dollar Professional Foosball Tour, created by bar owner and foosball enthusiast E. Lee Peppard of Missoula, Montana. Peppard promoted his own brand of table, the Tournament Soccer Table, and hosted events in 32 cities nationwide with prizes of up to $20,000. The International Tournament Soccer Championships (ITSC), with a final held on Labor Day weekend in Denver, reached the peak of prize money in 1978, with $1 million as the glimmering star for America’s top professionals to reach out for.
The crash of American foosball was even more rapid than its rise. Pac-Man, that snappy little cartoon character, along with other early arcade games, was instrumental in the demise of the foosball phenomenon. Sales of an estimated 1,000 tables a month around the end of the ’70s crashed to 100, and in 1981, the ITSC filed for bankruptcy. But the game didn’t die altogether; in 2003, the U.S. became part of the International Table Soccer Federation, which hosts the Multi-Table World Championships each January in Nantes, France.
But it’s still nice to know that even in a globalized world of ever more uniformity, table football, foosball, csocso, lagirt or whatever you want to call it still has no absolutely fixed idea of what really does constitute the core of the game. The American/Texas Style is called “Hard Court” and is known for its speed and power style of play. It combines a hard man with a hard rolling ball and a hard, flat surface. The European/French Style, “Clay Court,” is exactly the opposite of the American style. It features heavy (non-balanced) men and a very light and soft cork ball. Add to that a soft linoleum surface and you have a feel best described as sticky. In the middle is the European/German Style, “Grass Court,” characterized by its “enhanced ball control achieved by softening of components that make up the important man/ball/surface interaction.” And even the World Championships use five different styles of table, with another 11 distinct styles being used in various other international competitions.
Until recently this dilettante approach to the tables and rulebooks also applied to the competitions. Up until a few years ago, Punta Umbría in Huelva, Spain, hosted the World Table Football Cup Championship in August each year. Well, sort of. It was played on a Spanish-style table and, according to Kathy Brainard, co-author with Johnny Loft of The Complete Book of Foosball and past president of the United States Table Soccer Federation, “If the tournament is run on a Spanish-made table and has the best players from wherever that table can be found, then it could honestly be called the World Championship of Foosball, on that specific table.” A bit of diplomatic looking down the nose there.
Brainard went on to say that the real championship, called the World Championship of Table Soccer, was played in Dallas on a U.S.-made table and offered $130,000 in prize money. Although, admittedly, that was before 2003, at which time the American associations had to accept the ignominy of being part of a truly international World Championship, and not simply hold their own table football version of the baseball World Series.
In the general roly-poly of life, table football is mainly something that people play for fun in a smoky bar—at least they did before cigarettes were banned.
While British “foosers” might not be able to look forward to winning such large prizes as American players, they still take the game seriously. Oxford University is one of the top table football venues in England, with many highly regarded players on the national scene. Thirty college teams and one pub team play regularly on Garlando brand tables against other top pub and university sides.
Dave Trease, captain of Catz I (St. Catherine’s College, Oxford), says his position as captain hangs on the fact that he has the only “brush shot” in the university.
“A brush shot is where you have the ball stationary and then you have to flick it very hard at an angle. To be honest, I think it’s more luck than anything, but it looks good when it works.” And he admits that his skills on the Garlando don’t travel.
“I’m rubbish on anything else! I’ve found something I’m good at, where I can have a laugh and not take it all too seriously. And you don’t get any table football hooligans either, although you’ve got to keep an eye on people greasing the ball or jamming the table.”
Ruth Eastwood, captain of Catz II, beat all her female opponents (all five of them anyway) to win the women’s event, ranking her fourth nationally. But having won the tournament, does she see big contracts being offered?
“I don’t think it’s likely, particularly when you take into account that my prize money was only £15 and the prizes for the whole competition were only £300. I don’t think we’re in the same league as the World Championships, but at least I can say I was women’s champion, even if there were only five other women!”
It's probably stretching the imagination just that bit too far to think that table football will ever become an Olympic sport, but they probably thought the same about beach volleyball at one time. Sadly, the small figures that populate the field during playing time won't be able to collect the medals themselves. That will have to be left to the flick-wristed humans who control their every move.
Jurassic World is a real "Indominus rex" at the box office, breaking several records on its opening weekend and continuing to draw audiences worldwide. The star of the show may be a human-engineered hybrid dinosaur, but the movie also features real fossil species, from massive plant-eaters to flying reptiles. For anyone who can't get enough #prattkeeping, feather debating and genetically modified rampaging, here are 14 fun facts about the actual ancient animals featured in the film.

The Mosasaurus isn't shy at all during its feeding time in the film. (Animated gif via Giphy/Jurassic World/Universal Pictures)
1. Mosasaurs Were Patient Predators
The terrifying Mosasaurus was not a dinosaur but a colossal marine lizard. While it possessed a fearsome maw featuring two rows of teeth, the Mosasaurus is thought to have had poor depth perception and a weak sense of smell. Scientists think that one of its main hunting techniques was lying in wait for prey near the water’s surface and attacking when animals came up for air. In 2013, one mosasaur fossil found in Angola held the remains of three other mosasaurs in its stomach, providing evidence that the aquatic beasts might also have been cannibals.
2. Blame It on the Brontosaurus
The peaceable, long-necked Apatosaurus—controversially also known as Brontosaurus—was an herbivore that feasted on low-lying plants and tree leaves. Fossils of its bones have previously confused scientists, because they can resemble those of the formidable Mosasaurus, given the immense size and length of both creatures. Based on scientists’ calculations, the giant Apatosaurus is among the sauropods that may have produced enough methane gas to contribute to a warming climate during the Mesozoic era.

Ankylosaurus had spiky armor and a clubbed tail that made it a "living tank." (Naz-3D/iStock Photo)
3. Ankylosaurus Was a “Living Tank”
With its arched back and curved tail, the Ankylosaurus resembles the dinosaur version of a super-sized and much spikier armadillo. Thanks to the sharp, bony plates that line its back, along with a tail shaped like a club, Ankylosaurus has been given the nickname “living tank.” Its main Achilles’ heel was its soft, exposed underbelly, but predators would have had to flip the armored dinosaur over to get to this weak spot.
4. Velociraptors, aka Prehistoric Chickens
While the Hollywood version may seem sleek and graceful, the Velociraptor seen in the film is closer in form to a much larger raptor called Deinonychus. Real Velociraptors were smaller, often loners and likely had feathers, leading some to describe them as “prehistoric chickens.” Still, raptors as a whole were likely among the smartest of dinosaurs, due to the larger size of their brains relative to their bodies–the second highest brain-body weight ratio after the Troödon. This degree of intelligence is consistent with that of modern-day ostriches.
5. Triceratops Horns Existed Mainly For Looks
The horns of the Triceratops have long fueled debate among scientists about their purpose. The latest research suggests that they likely served as identification and ornamentation. However, previous findings also uncovered Tyrannosaurus rex bite marks on Triceratops horns, indicating that the features could have been used for defense in certain cases.

Stegosaurus had big spikes but a tiny brain. (fabiofoto/iStock Photo)
6. Stegosaurus Was No Brainiac
While it had a large body and several spiky plates that served as protection, the Stegosaurus had an exceptionally small brain for its body size—its brain has been compared to a walnut or lime. For some time, scientists believed the dinosaur had an ancillary group of nerves in a cavity above its rear end that helped to supplement its tiny noggin, but this hypothesis was later disproved.
7. Getting Attacked By a T. rex Really Bites
The original King of the Dinosaurs, Tyrannosaurus rex holds the real-life claim to fame of having the strongest bite of any land animal, living or extinct. Using a model that simulated the impact of its bite, scientists estimate that the force of a T. rex chomp could have been 3.5 times more powerful than that of an Australian saltwater crocodile, which holds the record among animals still alive today.
8. Pterosaurs Had Weak Feet
One of the two main species to escape from Jurassic World's Aviary, the Pteranodon had a wingspan of up to 18 feet. Its diet typically consisted of fish, and some species of pterosaurs had pouches like those of pelicans to hold their prey. It was likely able to dive as well as fly to obtain food. However, as one paleontologist notes in Forbes, the feet of a Pteranodon were probably too weak to carry the weight of a human, as the creatures are shown doing in the movie.

Pteranodons and Dimorphodons populate the Aviary and later terrorize park guests. (Animated gif via Giphy/Jurassic World/Universal Pictures)
9. Dimorphodon Had Multipurpose Teeth
The Dimorphodon is the other flying reptile seen in the film, with a wingspan of about eight feet. Its name translates to “two-form tooth” and refers to the differences between its upper and lower sets of teeth. The upper set is sharper and longer and was likely used for snatching prey from the water. A second set of tinier teeth in the bottom jaw appears to have been for gripping prey in transit.
10. The “Cows of the Cretaceous” Were Into Roaming
The Edmontosaurus was a medium-size duck-billed dinosaur that dined on fruits and veggies. Nicknamed the “cow of the Cretaceous,” these dinosaurs moved in herds of thousands that may have traversed thousands of miles during a single migration.
11. The Dinosaur That Ate Pebbles
Among several dinosaurs that share traits with ostriches, Gallimimus may have employed an interesting feeding strategy. Because it was unable to physically chew the plants it consumed, Gallimimus also ingested pebbles, which would mash up the food internally during the digestion process.

These are the tiniest dinosaurs seen in the film. (Microceratus, via Wikimedia Commons)
12. Diminutive Dinosaurs Lost Out to a Wasp
The smallest dinosaurs in the film, Microceratus, were ten inches tall on average and roughly two and a half feet long. The miniature herbivores were initially called Microceratops, but paleontologists were forced to change the title after it was revealed that a genus of wasp had already claimed the moniker.
13. Parasaurolophus Had a Noisy Crest
Parasaurolophus are known for the distinctive crests that adorned their heads, which have since been modeled by paleontologists. Based on these simulations, scientists discovered that the crest could emit a loud sound when air flows through it, indicating that it helped these dinosaurs communicate.
14. The Baryonyx Went Spear Fishing
The Baryonyx, a fish eater, has a name meaning “heavy claw” in Greek because of the large, sharp claw that extended from the thumb of each hand. Paleontologists think the dinosaur used these claws like spears to catch fish. This carnivorous dinosaur also had serrated teeth similar to those of modern-day crocodiles for chomping on prey.
The artifact is neither sexy nor delicate, as Mallory Warner will tell you. Warner, who works in the division of medicine and science at the Smithsonian’s National Museum of American History, helps to curate a large archive of items that have, in some way, changed the course of science. She points to a DNA analyzer used by scientists in the Human Genome Project (the landmark effort that yielded the first complete blueprint of a human’s genetic material) and photographic film from a 1970s attempt to build a synthetic insulin gene. Many of the pieces related to genetic research, she says, are “hulking, refrigerator-size scientific things.”
The Roche 454 GS FLX + DNA gene sequencer, which was produced from 2005 to 2015, is actually a bit shorter than a refrigerator, though it weighs more than 500 pounds, according to official product specifications. The Roche machine is also unique: it was the first next-generation gene sequencer to be sold commercially. It used a then-new technology known as sequencing-by-synthesis to tease apart the sequence of bases that comprise genetic code.
Even the tiniest organism—too small to be seen with the naked eye—contains hundreds of genes that work together to determine everything from its appearance to the way it reacts to disease. These genes are made up of long sequences of chemical bases. By reading those sequences—a process known as gene sequencing—scientists can learn a great deal about how an organism operates.
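To make the idea concrete: a gene can be thought of as a long string over the four-letter alphabet A, C, G and T, and "reading" it means recovering that string; once two sequences are known, comparing them is simple string work. The sketch below is purely illustrative—real sequencers read bases chemically, in short fragments, and the helper functions here are invented for the example:

```python
# Toy illustration of gene sequences as strings over the alphabet A, C, G, T.
# This is NOT how a sequencer works; it only shows why a readable sequence
# is useful once you have it.

def base_composition(sequence):
    """Count how often each of the four DNA bases appears in a sequence."""
    counts = {base: 0 for base in "ACGT"}
    for base in sequence.upper():
        if base in counts:
            counts[base] += 1
    return counts

def point_differences(seq_a, seq_b):
    """Positions where two equal-length sequences disagree (e.g. a mutation)."""
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

reference = "ACGTACGTAC"
sample = "ACGTACCTAC"  # differs from the reference at one position

print(base_composition(reference))           # {'A': 3, 'C': 3, 'G': 2, 'T': 2}
print(point_differences(reference, sample))  # [6]
```

Finding the single changed position in the example is trivial; the hard part, which machines like the Roche 454 made fast and affordable, is producing the strings in the first place.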
Next-generation sequencers dramatically reduced the cost and time required for gene sequencing. Although that may seem like an esoteric credential, consider that the Human Genome Project took about 13 years and an estimated $3 billion to sequence the entire human genome, largely relying on a method known as Sanger sequencing. A next-generation Roche 454 machine could do that task in ten days, according to the company, making it possible for small teams to stitch together enormous amounts of genetic data in significantly less time.
The Roche 454 sequencers have been used to unravel the genetic mysteries of strawberries, bacteria and Neanderthals; they have produced data that have helped scientists understand disease resistance in the developing world; and, in one memorable case, diagnosed a young American boy whose condition stumped doctors for years.
But one of the most interesting things a Roche 454 has done is possibly help secure the future of chocolate.
About 25 years ago, many people became deeply concerned about the world’s chocolate supply. Chocolate as we know it—in its sweet, delicious form—is made from cacao beans, which are the product of the Theobroma cacao tree.
T. cacao is native to Central and South America, and people have been harvesting its beans for centuries. Europeans first came across the cacao tree in early trips to what they called the New World. The natural product of cacao beans is bitter, so Europeans started mixing chocolate with sugar, and a craze began that has yet to end. Chocolate is a multi-billion dollar business today with growing demand coming from countries like China, India, Russia and Brazil.

Harvesting cacao in Ghana: the pods are cut open and the seeds and pulp are scooped out. (Smithsonian Archives Center, National Museum of American History)
But emerging demand comes up against ancient problems. To expand production, cacao trees were transplanted to West Africa, where they could grow comfortably in the tropical climate. However, cacao trees take several years to mature, and they’re not very productive: a single tree produces roughly enough pods to make one pound of chocolate every year.
Still, the most pressing problem seems to be that these trees are highly susceptible to disease. In the late 1980s, a devastating blight with a fanciful name—witches’ broom fungus—began to blossom on cacao trees in the Brazilian region of Bahia. Witches’ broom gets its name from the tiny, broom-shaped clusters of branches that form on infected trees. In just a decade, Bahia’s chocolate production dropped by more than half. Scientists and candy makers became terrified that witches’ broom—or frosty pod, another devastating fungus that infects cacao trees—would reach farms in the West African countries of Ghana, the Ivory Coast and Nigeria, home to many of the world’s top cocoa bean exporters.
“Our issue was that we needed to be able to breed trees that are resistant to frosty pod and witches’ broom before those diseases make it to West Africa,” says David Kuhn, a research molecular biologist for the USDA in Miami. “Because if [that] happens, your candy bar will be $35.”
If a $35 candy bar does not seem like a catastrophe, consider that an estimated 6.5 million farmers depend on chocolate for their livelihoods and an abrupt change in the market could result in devastating effects.
Scientists in Miami were looking at breeding disease-resistant trees, but it was slow going. Kuhn explains that “tree breeding by its nature is a very slow process. You have to make a cross, hand-pollinate the trees, get the pods, take the seeds, plant them, and then you wait three to five years for those trees to flower and then you’ll be able to evaluate them.” In other words, it takes three to five years before scientists can figure out whether a particular crop of trees has been successfully bred to yield disease-resistant beans.

Howard Shapiro of the Mars company assembled and directed a worldwide team of scientists to sequence the cacao genome. (Smithsonian Archives Center, National Museum of American History)
In 2008, inspired by the rise of sequencing technology, candy company Mars, Inc., under the direction of Howard Shapiro, agreed to contribute $10 million to fund a multinational project to sequence the entire T. cacao genome. A complete copy could speed up the breeding process by allowing scientists and breeders to more quickly pinpoint which specific genes guard against disease. Because the tree is tropical, a multinational consortium evolved to work on the cacao genome project. A team in Costa Rica sampled a local T. cacao tree. Kuhn’s lab in Miami helped extract the plant’s genetic material, and then sent that material on to labs where the genetic material was processed and sequenced.
T. cacao was the “first large plant that we had ever done,” says Keithanne Mockaitis, the former sequencing director at Indiana University. She had been working with the Roche 454 and other next-generation sequencers for a couple of years, but the size and detail of the T. cacao work made it one of the lab's most ambitious projects yet.
She says Mars helped by introducing scientists, breeders and farmers from around the world to each other. “We would have conferences and sometimes they would actually invite the African cacao breeders, and that was wonderful because I was able to meet them and understand what they know,” says Mockaitis.
The contacts with the farmers were invaluable, in part because the project’s data would be open source. That means the scientists’ findings would be made available on a website, for free, to anyone who wanted to access them.
The first public website went up in 2010, with a complete set of results. For another three years, the team worked on adding data and generating a fuller genome, and they released a paper in 2013. Although challenges remain for chocolate, Mockaitis says the genome is a positive first step.
Six years ago, Peter Liebhold, chair of the museum’s division of work and industry, came across the cacao genome project while researching potential artifacts for a large exhibition on the history of American businesses. He was drawn to the open source project because it represented a novel and successful approach to the research and development process.
“In thinking about R&D, we wanted to say that it was important and accomplished in very different ways,” says Liebhold. He floated the idea of acquiring Indiana University’s Roche 454 sequencer, which could be credited with helping save chocolate.
Although the machine was fading from use and had been replaced by newer technology—it was scheduled to be discontinued by the manufacturer in 2015—asking for a full gene sequencer was bold. During their heyday, sequencers cost around $700,000 (now that the product line is winding down, you can buy one on eBay for much less). “The joy of working at the Smithsonian is that you can make unreasonable requests of people,” says Liebhold.
Mockaitis, a Virginia native who cites her teenage trips to the Smithsonian as one of the reasons she became a scientist, was thrilled to hear about the request. Roche agreed to pay for Indiana University to donate their machine, ship it and service it. Mockaitis had moved to a new lab, but she supplemented the donation with sample tubes and testing plates from her lab.
One of the plates donated by Mockaitis—called a picotiter testing plate—appears in the exhibition, alongside a photo of cacao farmers and a replica of a cacao pod. In a photo, the sequencer gleams against a dark background, its neat surfaces appearing to hum with function. Above the photo is a long, blunt knife that a cacao farmer might use in a harvest. The gap between the two sets of instruments is vast, but, as the exhibit attempts to demonstrate, the gap can be bridged.
“This [story] is particularly nice because it’s such a global story,” says Warner. “We have scientists all over North America, and the work is going to benefit farmers in other parts of the world.”
As for the sequencer itself, it’s currently living in a box in the museum’s storage. It was too big for the exhibition, says Warner, but she’ll show it to whoever asks, including—recently—a visiting Roche executive. The technology, Liebhold admits, is “no longer the cutting edge.” The sequencer was critical to the tale, but it has already moved into history.
In late April, small trucks filled with orange-red clay lined up near Roland Garros, a large tennis complex in the western outskirts of Paris. Throughout the grounds, workers were moving from court to court, meticulously laying down the clay, a mixture of crushed tile and brick, and chalking lines.
They were preparing the signature look for this month’s French Open. At nearly 120 years old, the Open is a venerable institution with a rich history, but its longevity pales in comparison with the game of tennis that’s being played in the city’s 16th arrondissement, about three miles northeast.
At 74 rue Lauriston, a staid Haussmannian building like others in the quarter, a sign made of two metal racquets hangs inconspicuously over the sidewalk. A bronze plaque on the massive wooden front doors reads: Société du Jeu de Paume et Racquets. Inside the club, up two flights of stairs, is what the uninitiated would call a tennis court, but the rubber floor’s reddish hue is really the only similarity to those famed courts at Roland Garros.
Four two-story-high black slate walls, three of which have a sloped roof running along them, surround the rectangular court. There’s a net, but it sags heavily in the middle.
Two white-clad men are on opposite sides of the net, hitting a green felt-covered ball back and forth with wooden racquets. The racquet heads are the size of a small skillet, slightly teardrop-shaped and strung tightly. The ball sounds heavy coming off the racquet and skids constantly. Often the men play shots off one of the lengthwise walls and occasionally aim for large openings in the walls, under which a series of evenly spaced white lines, resembling football yardage markers, extend out across the floor.
They’re playing jeu de paume, a relic of a bygone era in Paris.
Known in English as real tennis or court tennis, jeu de paume, meaning “game of the palm,” is the ancestor of modern lawn tennis, which wasn’t developed until the late 1800s.
Popularized by monks and villagers in southern France during the 11th and 12th centuries (who played with their bare hands, hence the name), paume was one of the country’s favorite pastimes from the 14th to the 17th centuries. At the dawn of the 17th century, there were over 500 courts, from Pau to Chinon.
The sport’s mecca was Paris, where over 7,000 citizens — kings, aristocrats and commoners alike — played at nearly 250 courts throughout the city and suburbs.
Today, it’s quite a different story. The bulk of the world’s 8,000 or so players live in England, Australia and the United States. Here in France, there are just three playable courts in the entire country, two of which are in the Paris metro area: Société Sportive, the only one within city limits, and Fontainebleau, the former château of King Henri IV and later Napoleon, situated in a leafy suburb 40 miles to the southeast.
And though a few other remnants of the game’s glorious past still stand in Paris —including two courts built by Napoleon III in the Tuileries, now museums, a hotel on Île Saint-Louis, and the famous salle at Louis XIV’s Versailles where the French Revolution started— jeu de paume has largely faded from the city’s collective memory.
But for the approximately 170 Parisian members of Comité Français de Jeu de Paume, the sport’s national governing body, it’s still the 17th century. Driven by a passion for their unique sport, this small but dedicated group is keeping the game alive.
“What’s interesting for me about paume is that there are so many possibilities for each shot,” explains Gil Kressmann, the former president of Société Sportive. Kressmann, a well-built, graying man in his mid-60s, was introduced to the game as a youngster in Bordeaux. “Each stroke, as a function of your position and that of your opponent, there are almost an infinite amount of solutions and you have to choose the best in a matter of seconds.”
Image by Jonathan Brand. Entrance to the Société Sportive de Jeu de Paume at 74 rue Lauriston in Paris’ 16th arrondissement, the last jeu de paume court within the city limits. In the 17th century, at the height of the game’s popularity in France, Paris had over 250 courts and 7,000 players.
Image by Jonathan Brand. The current game of jeu de paume evolved from a game played by southern French villagers and monks in the 11th century.
Image by Jonathan Brand. Société Sportive professional Rod McNaughtan hits a ball during a lesson. The wooden racquets are reinforced with graphite at the head, but that’s one of the only technological advances in equipment in the last few hundred years. Note the slightly offset head, meant to replicate an open palm. Jeu de paume in French means “game of the palm.”
Image by Jonathan Brand. The Fontainebleau court, which is longer than the one in Paris. No two courts are exactly alike, giving a decided advantage to home court players.
Image by Jonathan Brand. A court tennis ball looks like a lawn tennis ball, but in reality has more in common with a baseball. The centers are made of cork, wrapped in cloth and then finished with a green felt cover. The balls are dense and skid off the floor rather than bounce. They are hand-sewn every week by professionals at each of the clubs around the world.
Image courtesy of Château de Versailles. It was here, at the famous jeu de paume court at Versailles in 1789, that the Third Estate signed the Serment de Jeu de Paume, or the Oath of the Tennis Court, and started the French Revolution.
Paume, the saying goes, is to chess what lawn tennis is to checkers. At a glance, the game does resemble lawn tennis — there’s a service, a return, the same scoring terminology (love, 15, 30, 40, advantage) and a full match is the best of three six-game sets.
But with 13 walls, including a buttress called the tambour on the receiving end, over 50 different styles of serve and complex rules like the chase, in which the ball can bounce twice on your side without your losing the point, it quickly becomes clear that jeu de paume is much more nuanced; it’s a game of precision and wits.
“In lawn tennis, the guys who hit the ball the hardest have the advantage, but in paume, it’s not essential,” Kressmann says.
No two courts are alike. At Fontainebleau, the floor is a few meters longer than its counterpart in Paris, and the walls respond differently as well. This is because the game, originally played outdoors in medieval marketplaces, moved indoors in the 14th century as cities became more populated and courts had to be built wherever there was room.
Thus, home court advantage and experience triumph over sheer athleticism. And because of the multitude of shot options each time you prepare to strike the ball, the more court time you have logged the better, regardless of fitness level.
“Up until recently, most of the world champions were over 30 years old,” notes Ivan Ronaldson, a former professional at Fontainebleau and now at Prince’s Court in Washington, D.C., one of nine courts in the United States.
The equipment is another of the game’s many idiosyncratic attractions. The heavy wooden racquets, with offset heads meant to replicate an open palm, have evolved little since their introduction in the 14th century.
The same can be said for the balls, which look like their lawn tennis counterparts but in reality have more in common with baseballs. Made with cork centers and felt covers, the balls have little bounce and wear out easily. The professionals, or paumiers, hand sew the balls each week, just like their ancestors did under Henri IV, who created the game’s first association of teaching pros.
“All the history like that which is behind us is really fabulous as well,” Kressmann says. “It’s an essential part of the game.”
In Paris especially, protecting the sport’s rich history in the city — from King Charles V’s construction of one of the first courts, at the Louvre in 1368, to the destruction of many former courts during Haussmann’s 19th-century modernization of Paris — is just as important to many players as picking up a racquet.
Yves Carlier, the chief curator at Château Fontainebleau and a member of the paume club, has written extensive histories of the game in book form and for the Comité’s website. And in 2008, the Société Sportive commissioned Parisian historian Hubert Demory to publish a short book on the game and the club’s origins for its centennial.
Much of what has been chronicled has helped to debunk myths that others have tried to propagate about the game in Paris, most often the notion that jeu de paume was traditionally an aristocratic game.
Some cite the Oath of the Tennis Court, or Serment de Jeu de Paume, which took place on Versailles’ jeu de paume court and launched the French Revolution, as proof of the game’s noble roots.
It is a common source of frustration for some current players like Guy Durand, the treasurer at the Fontainebleau club. “Jeu de paume has been called the game of kings, but it was not,” he says. “And the Revolution had nothing to do with the decline of the game; by that time many courts had become theaters or exhibition halls.”
Indeed, even by 1657 the number of courts in Paris had fallen to about 114, according to Demory’s book. By the time of the Revolution in 1789, he notes, there were just 12 places to play.
Durand’s curiosity extends beyond the history books. Like many fellow players, he is constantly on the lookout for former paume sites around France. Traveling through the Loire Valley recently, he came across a car garage that clearly had been a paume court. He noticed the tambour, still intact, as he drove by.
Durand, a restaurateur in Fontainebleau, made an appointment with the mayor to discuss buying and renovating the court for use, but the price was overwhelming.
“To build a court from scratch it’s like one million Euro to make it nice,” he says. “And to renovate an existing structure, well, let’s just say it’s even more.”
The enormous cost of creating new structures is just one of the obstacles to a rosier future for the game. Access to existing courts, public awareness and the steep learning curve of the game also prove to be limiting factors. But there are a few bright signs: the Comité receives limited funding from the French government and there are agreements now in place between every club, including the one in Bordeaux, and local schools to train younger players.
And earlier this year, 17-year-old Mathieu Sarlangue, a top player at the Société Sportive, won the Racquette D’Or, the French national amateur championship, and breathed some fresh air into the game.
“If newcomers arrive to find a good young player like Mathieu,” Kressmann joked to me in March, “it’s even better because they won’t think it’s all old guys like me.”
But unless Roger Federer suddenly decides to hang up his lawn tennis racquet for paume, the reality is that this sport will continue to live on for years as it has here in Paris and the rest of the world, toeing the fine line between past and present.
The author has been a Comité-sanctioned player in Paris since February and estimates he ranks somewhere between 169 and 170.
Richard III may have died an unloved king, humiliated in death, tossed naked into a tiny grave and battered by history. But with two British cities trying to claim the last Plantagenet king’s remains 500 years after his death, maybe his reputation is finally turning a corner.
The discovery of his remains last fall (and the confirmation of the results this week) was the culmination of a four-year search instigated by Philippa Langley of the Richard III Society. Both the search and the discovery were unprecedented: “We don’t normally lose our kings,” says Langley.
But it’s perhaps not too surprising that Richard’s bones were misplaced. Richard gained and lost the crown of England during the tumultuous Wars of the Roses period (1455-1487). It is a notoriously difficult period to keep straight: The country lurched from civil war to civil war in a series of wrestling matches between two branches of the Plantagenet house, the Yorks and the Lancasters.
Richard was the Duke of Gloucester and a York; his brother, Edward IV, had taken the throne from the Lancastrian king, Henry VI. When Edward died in 1483, he left Richard in charge as regent to his 12-year-old son, to be Edward V. But in June 1483, just before the boy’s intended coronation, Richard snatched the crown off his nephew’s head by claiming that the child was illegitimate. The boy and his younger brother were both packed off to the Tower of London—and were never seen again.
In the meantime, Richard III had his own usurpers to deal with. The Lancasters were out of the picture, but there was another upstart claimant on the scene, Henry Tudor. Two years and two months after he was anointed king, Richard faced Henry Tudor’s forces at the Battle of Bosworth on August 22, 1485. He lost and was killed, only 32 years old. The Wars of the Roses were over, the Plantagenet house was swept aside, and the Tudors were on the throne. Richard’s battered body was brought back to nearby Leicester, where it was handed over to the Franciscan friars and quickly dumped into a small grave at the Greyfriars Church.
Given that they could barely keep a king on the throne in all this, keeping track of him after he was dead was probably even more difficult—especially since the new regime didn’t want to keep track of him. Henry Tudor, now Henry VII, feared that Richard’s burial site would become a rallying point for anti-Tudorists, so its location was kept quiet. When Henry VIII created the Anglican Church in the mid-16th century, breaking off from the Vatican, England’s monasteries were dissolved; the friary was taken apart stone by stone and Richard’s grave was lost with it. Rumors even spread that his bones were dug up and thrown into a river.
The man too would have been forgotten, if not for the Bard himself. William Shakespeare, who always turned to history for a good plot, turned Richard III into one of the most sinister villains ever in his The Tragedy of Richard III.
It wasn’t hard: Richard III already had a bad reputation, especially according to the Tudor historians. His ignominious end and hurried burial were thought fitting for a villain who allegedly murdered his two young nephews to steal the crown; killed his wife to marry his niece; had his own brother drowned in a barrel of wine; and murdered all and sundry who dared challenge him.
In Richard III, Shakespeare further embellished the tale, doing nothing for Richard’s reputation. He opens his play by having Richard III himself claim that he was so ugly, dogs barked at him, and declaring: “And therefore, since I cannot prove a lover… I am determined to be a villain.”
Before the first act is over, he’s killed his brother and Henry VI, and goes on to murder the two young princes. Shakespeare also turned Richard’s scoliosis-curved spine into a hunchback, furnishing him with a limp that he might not have had and a withered arm that he definitely didn’t have, just to reinforce the point. Of course, Shakespeare’s depiction of Richard III is about as historically accurate as any period film Hollywood ever produced—dramatized to a point just past recognition. But on the other side, there are the Ricardians, who see the much-maligned king as a victim of Tudor propaganda.
The Richard III Society was founded in 1924 to “strip away the spin, the unfair innuendo, Tudor artistic shaping and the lazy acquiescence of later ages, and get at the truth.” He didn’t kill his nephews, or his brother or Henry VI, and he didn’t kill his wife—that’s all the stuff that historians in the pay of the Tudors wanted everyone to believe. Moreover, according to the society, wise Richard III instituted a number of important legal reforms, including the system of bail and, rather ironically, the presumption of innocence before guilt; he was also a great champion of the printing press.
So finding his bones, for the Richard III Society, was in part about reclaiming the king from history’s rubbish pile. Langley, armed with historical research and an “intuition” that his remains hadn’t been destroyed, determined that what was now a parking lot owned by the Leicester Council was in fact the site of the lost church and grave. In August 2012, digging began—with permission and help from Leicester—and a cross-disciplinary team of experts from the University of Leicester spent days painstakingly excavating the area.
What they found, in just three weeks, was the body of a man they believed to be Richard III. And on February 4, the university confirmed that the skeleton was indeed the last Plantagenet king. Not only did he fit the physical description depicted in historical sources—the famously curved spine, the product of the onset of scoliosis at age 10; slim, almost feminine—but his DNA matched that of two living relatives of the king as well.
Their findings also confirmed that Richard III was killed rather gruesomely—he was felled by one of two vicious blows to the head, including one from a sword that nearly sliced the back of his skull off. The team found 10 wounds to his body in total, including a “humiliation” stab wound to his right buttock and several to his trunk that were likely inflicted after his death; there was also evidence that his hands had been bound.
This fits with the traditional story that after the king was killed, he was stripped naked and slung over a horse to be brought to Leicester. Though he was buried in a place of honor at Greyfriars, in the choir, he was dumped unceremoniously in a quickly dug and too small grave, with no coffin or even a shroud—a deficiency that both the cities of Leicester and York would now like to redress.
Leicester, the city of his death, has the trump card. In order to dig up the car park, the University of Leicester had to take out a license with Britain’s Ministry of Justice, basically a permit that detailed what they would have to do if they found any human remains. The exhumation license dictates that they must bury the bones as close to where they found them as possible, and do so by August 2014; this license was upheld Tuesday by the Ministry of Justice.
Leicester Cathedral is a handy stone’s throw away from the car park and it’s been designated as the new burial site. It has been the home of a memorial to Richard since 1980. Canon David Monteith of Leicester Cathedral is still a bit in shock over the discovery and the flurry of interest in it. “It’s the stuff of history books, not the stuff of today,” he says, laughing, adding that they only found out the body was Richard’s the day before the world did. Though a spring 2014 burial is possible, it will be some time, he said, before plans to inter the king are firmed up: “Lots of things have to happen.”
Among those things will be finding an appropriate place to put him: The cathedral is small, but busy, and Monteith is aware that the king’s bones will become a tourist attraction. (Henry Tudor’s fears were apparently well-founded.) Another issue will be what kind of service (Richard’s already had a funeral) an Anglican church should give to a Catholic king who died before the formation of the Church of England. And finally, there’s the question of who will pay for the burial and improvements.
But while the Cathedral makes its plans, the northern England city of York is putting in its own claim for the king’s remains. On Wednesday, York sent letters, signed by the Lord Mayor, city councilors, and civic leaders, and backed by academics and descendants of Richard III, to the Ministry of Justice and the Crown. It’s unclear how long the process might take; again, this is all pretty unprecedented.
The York complainants pointed out that Richard grew up just north of York, became Lord President of the Council of the North there, spent a lot of time and money in the city, and granted favors to the city while he was king. York also claims that Richard wanted to be buried in York Minster Cathedral, where he was building a chantry for 100 priests.
“The city is very keen to have the man have his living wish fulfilled,” says Megan Rule, spokeswoman for the city, adding that York loved Richard III even as forces converged to remove him from power. “York people were loyal to him then and remain so.”
Leicester, however, dismisses York’s claims. City Mayor Peter Soulsby says, “York’s claim no doubt will fill a few column inches in the Yorkshire Post, but beyond that, it’s not something that anybody is taking seriously. The license was very specific, that any interment would be at Leicester Cathedral… It’s a done deal.”
Moreover, the city of Leicester is already planning a multi-million-pound educational center around the king’s car park grave: In December, the City purchased a former school building adjacent to the site for £800,000 to turn into a museum detailing the history of Leicester, with a big focus on Richard’s part in it. The center is expected to be complete by 2014, handily in time for Richard’s reburial.
It’s also easy to dismiss the fight over his remains as two cities wrestling over tourists. Leicester has already debuted a hastily put together exhibition on the king and the discovery. But the debate has tumbled into a minefield of regional loyalties—though this is ancient history, it can feel very current. As Professor Lin Foxhall, head of University of Leicester’s archeology department, notes, “You get these old guys here who are still fighting the Wars of the Roses.”
The Richard III Society’s Philippa Langley is staying out of the debate about where Richard’s remains should go—though she can understand why Leicester and York both want him. “They’re not fighting over the bones of a child killer—for them he was an honorable man,” Langley says. “This guy did so much for us that people don’t know about. They’re actually fighting for someone who the real man wants to be known, that’s why they want him.”
Others, however, are more skeptical about this whitewashed version of Richard and about what impact the discovery will have on his reputation. “What possible difference is the discovery and identification of this skeleton going to make to anything? … Hardly changes our view of Richard or his reign, let alone anything else,” grumbled Neville Morley, a University of Bristol classics professor, on his blog.
“Bah, and humbug,” wrote Peter Lay, editor of History Today, in an op-ed for The Guardian on Monday, declaring that the claim that the discovery rewrites history is overblown, and that the jury is still out on Richard’s real character—at the very least, he probably did kill the princes. And historian Mary Beard prompted a fierce 140-character debate on Twitter this week after she tweeted, “Gt fun & a mystery solved that we've found Richard 3. But does it have any HISTORICAL significance? (Uni of Leics overpromoting itself?))”.
Langley, however, is still confident that this discovery will have an impact. “I think there’s going to be a major shift in how Richard is viewed,” she says. “It’s very satisfying, it’s been a long time coming.”
The Dutch city of ’s-Hertogenbosch, colloquially referred to as “Den Bosch,” retains a layout remarkably similar to that of the medieval city. Similar enough, says mayor Ton Rombouts, that the city’s celebrated native son, the painter Hieronymus Bosch, if somehow revived, could still find his way blindfolded through the streets.
This year, timed to coincide with the 500th anniversary of Bosch’s death, Den Bosch is hosting the largest-ever retrospective of the renowned and fanciful eschatological painter who borrowed from his hometown’s name to create a new one for himself. The exhibition, “Hieronymus Bosch: Visions of Genius,” held at Den Bosch’s Het Noordbrabants Museum, gathers 19 of 24 known paintings and some 20 drawings by the master (c. 1450-1516). Several dozen works by Bosch’s workshop, his followers and other contemporaries provide further context in the exhibit.
What makes this exhibit even more extraordinary is that none of Bosch’s works reside permanently in Den Bosch. In the run-up to the exhibit, the Bosch Research and Conservation Project engaged in a multi-year, careful study of as much of the Bosch repertoire as it could get its hands on. In news that made headlines in the art world, the researchers revealed that “The Temptation of St. Anthony,” a painting in the collection of Kansas City’s Nelson-Atkins Museum of Art -- believed not to be an actual Bosch -- was painted by Bosch himself, and that several works at the Museo del Prado in Spain were actually painted by his workshop (his students).
Bosch’s art is known for its fantastical demons and hybrids and he’s often discussed anachronistically in Surrealist terms, even though he died nearly 400 years before Salvador Dalí was born. In his “Haywain Triptych” (1510-16), a fish-headed creature with human feet clad in pointed black boots swallows another figure with a snake twisted around her leg. Elsewhere, in “The Last Judgement” (c. 1530-40) by a Bosch follower, a figure with a human head, four feet and peacock feathers narrowly avoids the spear of a bird-headed, fish-tailed demon dressed in armor and wearing a sword.
The Haywain Triptych (London: National Portrait Gallery via Wikicommons)
Bosch’s is a world in which figures are likely to wear boats as clothing or to emerge from snail shells; one of the greatest dangers is getting eaten alive by demons; and, eerily, owls proliferate. Most bizarre, perhaps, is a drawing by Bosch and workshop titled “Singers in an egg and two sketches of monsters,” in which a musical troupe (one member has an owl perched on his head) practices its craft from inside an egg.
Beyond the exhibit itself, the city is obsessed with Bosch. Cropped figures from Bosch’s works appear throughout Den Bosch, plastered to storefront windows, and toys shaped like Bosch’s demons are available for sale in museum gift shops. Other events include a boat tour of the city’s canals (with Bosch-styled sculptures punctuating the canal edges and hellfire projections under bridges), a nighttime light show projected on buildings in city center (which was inspired by a family trip the mayor took to Nancy, France), and much more.
“This city is the world of Bosch. Here, he must have gotten all of his inspiration through what happened in the city and what he saw in the churches and in the monasteries,” Rombouts says in an interview with Smithsonian.com. “This was little Rome in those days.”
When one projects back 500 years, though, it’s hard to dig up more specific connections between Bosch and his city due to the lack of a surviving paper trail.
The Last Judgment is thought to have been created by a Bosch follower. (Academy of Fine Arts Vienna via Wikicommons)
Late last year, researchers at the Rijksmuseum were able to identify the exact location of the street scene in Johannes Vermeer’s “The Little Street”, thanks to 17th-century tax records. But there is no such archive for Bosch, who kept few records that survive today. There is no indication that he ever left the city of Den Bosch, and yet no depictions of Den Bosch, from which he drew his name, seem to surface in any of his paintings or drawings.
The town does know, however, in which houses the artist, who was born either Joen or Jeroen van Aken into a family of painters, lived and worked, and where his studio stood. The latter is now a shoe store; the former is a shop whose proprietors had long refused to sell but, nearing retirement age, have slated the house for sale to the city, which plans to turn it into a museum, the mayor says.
Asked if Den Bosch will be able to purchase any works by Bosch, Rombouts says the city had hoped to do so, but price tags are prohibitive. “If we would have been more clever, we could have said to [the Kansas City museum], ‘May we have it on loan for eternity?’ And then said that it is a Bosch,” he says. “But we would have to be honest.”
While those at the Nelson-Atkins were surely elated to learn about the upgrade, curators at other museums, who saw works they had considered authentic Boschs downgraded, were none too happy, said Jos Koldeweij, chairman of the Bosch Research and Conservation Project’s scientific committee.
“Sometimes it’s very emotional; sometimes it’s very academic,” he says. “At the end, it should be very academic, because museums are not art dealers. So the value in money isn’t what is the most important thing. What’s most important is what everything is.” Still, some conversations “got touchy,” he says.
In addition to the Prado works, the committee declared two double-sided panels depicting the flood and Noah’s ark at Rotterdam’s Museum Boijmans Van Beuningen to be from the workshop, dating them to c. 1510-20. The museum, however, identifies both as Bosch’s own and dates them to 1515, the year before his death.
“This is a process of consensus, and discussions about the originality of a work will continue until everyone agrees,” says Sjarel Ex, the Boijmans’ director.
“We think that it is very necessary,” Ex says of the investigation, noting the importance in particular of Bosch’s drawings. “What do we know about the time over 500 years ago?” he adds. Just 700 drawings created before the year 1500 survive in all of Western culture. “That’s how rare it is,” he says.
The star of Bosch’s repertoire, the Prado’s “The Garden of Earthly Delights,” is not part of the exhibit, although that’s unsurprising. “It’s huge and too fragile,” Koldeweij says. “Nobody reckoned it would come. It’s impossible. There are a number of artworks that never travel. So [Rembrandt’s] ‘Night Watch’ doesn’t go to Japan, and the ‘Garden’ doesn’t come here.”
Death and the Miser (National Gallery of Art via Wikicommons)
“Death and the Miser” from Washington’s National Gallery of Art (c. 1485-90 in the gallery’s estimation, and c. 1500-10 in the exhibit’s tally) appears early in the exhibit and powerfully reflects the religious worldview that would have been ubiquitous in 16th-century Den Bosch.
In what is perhaps a double portrait, a man—the titular “miser,” a label associated with greed and selfishness—lies on his deathbed as a skeleton opens the door and points an arrow at him. An angel at the man’s side guides his gaze upward toward a crucifix hanging in the window, as demons do their mischief. One looks down from atop the bed’s canopy; another hands the man a bag of coins (designed to tempt him with earthly possessions and to distract him from salvation); and still others attend what is perhaps another depiction of the miser (rosary beads in hand) in the foreground as he hoards coins in a chest.
That choice between heaven and hell, eternal life and perpetual damnation, and greed and lust on the one hand and purity on the other—which surfaces so often in Bosch’s work—takes on an even more fascinating role in this particular work. Analysis of the underdrawing reveals that Bosch originally placed the bag of coins in the bedridden man’s grasp, while the final painting has the demon tempting the man with the money. The miser, in the final work, has yet to make his choice.
“Responsibility for the decision lies with the man himself; it is he, after all, who will have to bear the consequences: will it be heaven or hell?” states the exhibition catalog.
The same lady-or-the-tiger scenario surfaces in the “Wayfarer Triptych” (c. 1500-10) on loan from the Boijmans. A journeyer, likely an Everyman, looks over his shoulder as he walks away from a brothel. Underwear hangs in a window of the decrepit house; a man pees in a corner; and a couple canoodles in the doorway. As if matters weren’t sufficiently dour, pigs drink at a trough—no doubt a reference to the Prodigal Son—in front of the house.
The Wayfarer (or The Pedlar) (Museum Boijmans Van Beuningen via Wikicommons)
The man has left the house behind, but his longing gaze, as well as the closed gate and cow obstructing his path forward, question the degree to which he’s prepared to actually carry on along the straight and narrow path, rather than regressing. And his tattered clothes, apparent leg injury, and several other bizarre accessories on his person further cloud matters.
Turning on the television or watching any number of movies today, one is liable to come across special-effects-heavy depictions of nightmarish sequences that evoke Bosch’s demons and hell-scapes. In this regard, Bosch was doubtless ahead of his time.
But his works are also incredibly timeless, particularly his depictions of people struggling with basic life decisions: to do good, or to do evil. The costumes and the religious sensibilities and a million other aspects are decidedly medieval, but at their core, the decisions and the question of what defines humanity are very modern indeed.
Paleo-artist Ray Troll’s obsession began way back in 1993, when he spotted what he calls a “strange doorstop” in the basement of the Los Angeles County Natural History Museum. “It was a beautiful whorl… I thought it was a big snail,” he says now, recollecting the moment when he visited the museum for a book he was working on.
In reality, his guide explained, the fossilized spiral was the jaw of an ancient shark.
Little did Troll know, this rocky jaw would consume his mind over the next 20 years, just as it had done with scientists before him. The strange tooth “whorl” belonged to the Helicoprion genus, the “buzz sharks” (a moniker Troll introduced in 2012). The bizarre beasts swam Earth’s waters some 270 million years ago, persisting for about 10 million years.
Russian geologist Alexander Karpinsky discovered the first Helicoprion in 1899 in Russia—he imagined the whorl as a fused-together coil of teeth that curled up over the shark’s snout. Throughout the early 1900s, an American geologist, Charles Rochester Eastman, made the case that it was instead a defense structure on the creature’s back.
Since these early postulations, no one has been able to perfectly position the more than two-foot-wide spiral of knife-like tips. Smithsonian scientists were even pretty sure that the whorl belonged deep in the shark’s throat. The thought of this century-old fossil enigma was too alluring for the artist to ignore—instantly, Troll was hooked.
About a week after his museum visit, he cold-called the world’s authority at the time on Paleozoic sharks, Rainer Zangerl. Sporting an MFA in studio arts from Washington State University, Troll, now 61, most likely seemed a poor candidate for interpreting paleontological discoveries. But since his first sketch of a dinosaur (“crayons were my first medium”), Troll has demonstrated an irresistible affinity for the extinct and the living, particularly fish.
Starting in the 1970s, he began blending his own flavor of surrealism with humor and biology. One 1984 drawing depicts a cluster of fish nearly nipping a bare-bunned human from below. The caption reads: “Bottom Fish.” Another piece portrays two golden orange fish hovering above the ocean, staring at each other in the moonlight: “Snappers In Love.” Perhaps the most popular design, “Spawn Til You Die,” pictures two belly-up salmon and crossbones.
By 1995, his first major touring museum exhibit—“Dancing to the Fossil Record”—was working its way across the country, featuring drawings, fish tanks, fossils and a soundtrack and dance floor. “I just made a career out of shedding light on these animals,” Troll says.
When Troll met with Zangerl, the scientist was “very patient and he mentored me,” Troll recalls. Zangerl introduced him to all sorts of ancient shark species and directed Troll to another expert: Danish scientist Svend Erik Bendix-Almgreen, who had studied Helicoprion extensively and hypothesized decades earlier that the whorl belonged along the beast’s bottom jaw.
Throughout the late ’90s and into the 21st century, Troll’s drawings slowly shifted from a diversity of salmon, snappers and rockfish (printed in magazines, books, t-shirts and as murals commissioned by NOAA and California’s Monterey Bay Aquarium) to a lot of sharks in both natural and surreal settings. “My interest in Paleozoic sharks was at a peak,” he says.
Image by Ray Troll, www.trollart.com. "A Man, a Shark and Twenty Years, 2013," part of the touring exhibit “The Buzz Sharks of Long Ago” now at the Museum of Natural and Cultural History at the University of Oregon.
Image by Hall Anderson. Artist Ray Troll stands in front of a mural he and fellow artist Memo Jauergui painted for the buzz shark exhibit in Idaho.
The first time Troll put a Helicoprion to paper was for a book he was working on called Planet Ocean. Thanks to his newfound shark knowledge from “The Helicoprion Masters,” as he refers to Zangerl and Almgreen, Troll became the first person to draw a believable buzz shark. His depiction led to his 1998 appearance on the Discovery Channel’s “Prehistoric Sharks” segment featuring paleontologist Richard Lund.
Troll kept in touch with Almgreen for reference help and by 2001 he was publishing a kid’s alphabet book, Sharkabet, which also turned into a traveling exhibit. It featured a full swath of drawings of the beasts past and present. Helicoprion, of course, was in all of its circular-saw glory, pursuing a thin fish and accompanying the letter “H.”
By 2007, Troll had moved on to fantastical map making with his book Cruisin’ The Fossil Freeway (also a touring exhibit) with author Kirk Johnson, currently the director of the Smithsonian’s National Museum of Natural History. Recounting and mapping their 5,000-mile road trip, the book strings together the layered fossil history of the American West and within it, the “ever-elusive fossilized tooth whorls of Helicoprion,” paleo-blogger (and Smithsonian.com contributor) Brian Switek wrote in his review of the book.
Sure, “there’s a whole host of beasties and creatures that I’m enamored with,” Troll says, “but Helicoprion became one of my favorite characters in the story of my life.”
Twenty years after his introduction to the fossil, Troll has reviewed the “literally hundreds of drawings” of Helicoprion and turned them into a traveling exhibit of his madness. The show began in 2013 in Idaho, a state rich with Helicoprion fossils, as these sharks once swam in the Paleozoic ocean waters that covered much of the Northern Hemisphere.
“Unraveling the Mystery of the Buzz Sharks of Idaho” became “The Summer of Sharks” in Alaska and “The Buzz Sharks of Long Ago” in Washington. Its current home lies within the Museum of Natural and Cultural History on the University of Oregon’s campus. The exhibit touts jaw replicas and Troll’s own whimsical whorl depictions, like big yellow spirals that resemble tribal symbols of the sun with scribbled numbers above each tooth. Up to 180 teeth can exist in one whorl, Troll says. His more recent pieces depict a single human silhouette, himself no doubt, tumbling through a skyful of multicolored whorls.
Troll’s passion, however, has served a purpose far beyond the aesthetic charm of a framed picture—it has shaped the scientific community’s knowledge of Helicoprion itself. Back in the mid-1990s, when he wrote and spoke with Almgreen, Troll discovered that the scientist had published his hypothesis about the buzz shark’s physiology in an obscure paper in 1966. This knowledge remained hidden, lost to memory even to prominent paleontologists, until 2010, when an undergraduate student working as an intern at the Idaho Museum of Natural History got in touch with Troll.
Jesse Pruitt had come across the museum’s Helicoprion collection during an introductory tour, and he recognized the fossil from a “Shark Week” episode that had aired on the Discovery Channel a few months before. He asked the collections manager about the whorls. She recalled that Troll had loaned a couple out from the museum for an exhibit “and suggested that I should contact him,” Pruitt says. Right away, “[Troll] told me to find the Almgreen paper and look for Idaho #4, the name of a fossil in the museum’s collections.” At this point, Pruitt’s advisor, paleontologist Leif Tapanila, became interested as well.
“I hadn't seen [the] original paper before that,” Tapanila says. Idaho #4, the very fossil that Almgreen used to make his own hypothesis, would be integral, Troll assured the duo, “if one wanted new insights and finally establish that the whorl was in the lower jaw.”
Publishing their findings in a landmark 2013 Biology Letters paper, Tapanila’s team used CT scans of Idaho #4 to reveal a view that Almgreen couldn’t see in the ’60s. Inside this fossil, they discovered all the parts of Helicoprion’s upper and lower jaw, which led to their reconstruction of the whorl that “partly confirms” Almgreen’s original hunch, Tapanila writes in the 2013 paper. “Idaho #4 became the Rosetta stone of sorts for deciphering these sharks,” Pruitt says. Indeed, the whorl was located on the lower jaw, just as Almgreen had suggested. But what Almgreen could not see, Tapanila says, is that it was attached to the full length of the shark’s jaw. These teeth “filled up its entire mouth.”
One of the paper’s more astounding findings shows that buzz sharks are not sharks at all. The scans revealed that they actually belong to the closely related ratfish family, ironic considering that one of Troll’s many sea life obsessions over the years happens to be with ratfish. He has one tattooed upon his upper bicep, and the fish inspired the name of his band, “The Ratfish Wranglers.” There's even a ratfish species, Hydrolagus trolli, that was named after him in 2002.
Troll’s comic-like depictions of the long-debunked Helicoprion hypotheses and his best take based on the new research are printed in the paper alongside Tapanila’s study. Since day one, “Troll was part of the science team,” Tapanila says. “He puts the pieces together.”
The most recent illustration shows Helicoprion with its mouth packed full of spiral-sawed teeth, reflecting the 2013 finding, which Tapanila says he’s pretty sure is spot on—“as sure as a scientist is ever willing to say that they’re sure.”
Though he’s played a true role in science, Troll remains unabashedly an artist. Scientists work within strict confines, he says. “They have to be cautious.” They know where Helicoprion fits in the family tree now, but they still need to learn what this ratfish looked like. “No one’s ever seen the body—all we have are the whorls,” Troll says, “and that’s where I come in.”
Troll’s “Buzz Sharks of Long Ago” will be on exhibit at the New Mexico Museum of Natural History for the summer of 2016 and at The Museum of The Earth in Ithaca, New York, the following year.
Editor's Note: The article has been updated to reflect the fact that "Dancing to the Fossil Record" was not Troll's first art exhibit.
Soon after I enrolled as a graduate student at Cambridge University in 1964, I encountered a fellow student, two years ahead of me in his studies, who was unsteady on his feet and spoke with great difficulty. This was Stephen Hawking. He had recently been diagnosed with a degenerative disease, and it was thought that he might not survive long enough even to finish his PhD. But he lived to the age of 76, passing away on March 14, 2018.
It really was astonishing. Astronomers are used to large numbers. But few numbers could be as large as the odds I’d have given against witnessing this lifetime of achievement back then. Even mere survival would have been a medical marvel, but of course he didn’t just survive. He became one of the most famous scientists in the world—acclaimed as a world-leading researcher in mathematical physics, for his best-selling books and for his astonishing triumph over adversity.
Perhaps surprisingly, Hawking was rather laid back as an undergraduate student at Oxford University. Yet his brilliance earned him a first class degree in physics, and he went on to pursue a research career at the University of Cambridge. Within a few years of the onset of his disease, he was wheelchair-bound, and his speech was an indistinct croak that could only be interpreted by those who knew him. In other respects, fortune had favored him. He married a family friend, Jane Wilde, who provided a supportive home life for him and their three children.
The 1960s were an exciting period in astronomy and cosmology. This was the decade when evidence began to emerge for black holes and the Big Bang. In Cambridge, Hawking focused on the new mathematical concepts being developed by the mathematical physicist Roger Penrose, then at University College London, which were initiating a renaissance in the study of Einstein’s theory of general relativity.
Using these techniques, Hawking worked out that the universe must have emerged from a “singularity”—a point at which all laws of physics break down. He also realised that the area of a black hole’s event horizon—the boundary from which nothing can escape—could never decrease. In the subsequent decades, the observational support for these ideas has strengthened—most spectacularly with the 2016 announcement of the detection of gravitational waves from colliding black holes.
Hawking at the University of Cambridge (Lwp Kommunikáció/Flickr, CC BY-SA)
Hawking was elected to the Royal Society, Britain’s main scientific academy, at the exceptionally early age of 32. He was by then so frail that most of us suspected that he could scale no further heights. But, for Hawking, this was still just the beginning.
He worked in the same building as I did. I would often push his wheelchair into his office, and he would ask me to open an abstruse book on quantum theory—the science of atoms, not a subject that had hitherto much interested him. He would sit hunched motionless for hours—he couldn’t even turn the pages without help. I remember wondering what was going through his mind, and if his powers were failing. But within a year, he came up with his best ever idea—encapsulated in an equation that he said he wanted on his memorial stone.
The great advances in science generally involve discovering a link between phenomena that seemed hitherto conceptually unconnected. Hawking’s “eureka moment” revealed a profound and unexpected link between gravity and quantum theory: he predicted that black holes would not be completely black, but would radiate energy in a characteristic way.
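The equation Hawking wanted on his memorial stone is not spelled out in this article, but it is the celebrated formula for the temperature of that radiation, remarkable for uniting gravity ($G$), quantum theory ($\hbar$), relativity ($c$) and thermodynamics ($k_B$) in a single line:

```latex
T = \frac{\hbar c^{3}}{8 \pi G M k_{B}}
```

Because the temperature falls as the black hole’s mass $M$ grows, only very small black holes radiate appreciably, which is why the effect has never been observed for stellar-mass objects.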
This radiation is only significant for black holes that are much less massive than stars—and none of these have been found. However, “Hawking radiation” had very deep implications for mathematical physics—indeed one of the main achievements of a theoretical framework for particle physics called string theory has been to corroborate his idea.
Indeed, the string theorist Andrew Strominger from Harvard University (with whom Hawking recently collaborated) said that this paper had caused “more sleepless nights among theoretical physicists than any paper in history.” The key issue is whether information that is seemingly lost when objects fall into a black hole is in principle recoverable from the radiation when it evaporates. If it is not, this violates a deeply believed principle of general physics. Hawking initially thought such information was lost, but later changed his mind.
Hawking continued to seek new links between the very large (the cosmos) and the very small (atoms and quantum theory) and to gain deeper insights into the very beginning of our universe—addressing questions like “was our big bang the only one?” He had a remarkable ability to figure things out in his head. But he also worked with students and colleagues who would write formulas on a blackboard—he would stare at it, say whether he agreed and perhaps suggest what should come next.
He was specially influential in his contributions to “cosmic inflation”—a theory that many believe describes the ultra-early phases of our expanding universe. A key issue is to understand the primordial seeds which eventually develop into galaxies. Hawking proposed (as, independently, did the Russian theorist Viatcheslav Mukhanov) that these were “quantum fluctuations” (temporary changes in the amount of energy in a point in space)—somewhat analogous to those involved in “Hawking radiation” from black holes.
He also made further steps towards linking the two great theories of 20th century physics: the quantum theory of the microworld and Einstein’s theory of gravity and space-time.
In 1987, Hawking contracted pneumonia. He had to undergo a tracheotomy, which removed even the limited powers of speech he then possessed. It had been more than ten years since he could write, or even use a keyboard. Without speech, the only way he could communicate was by directing his eye towards one of the letters of the alphabet on a big board in front of him.
But he was saved by technology. He still had the use of one hand; and a computer, controlled by a single lever, allowed him to spell out sentences. These were then declaimed by a speech synthesiser, with the androidal American accent that thereafter became his trademark.
His lectures were, of course, pre-prepared, but conversation remained a struggle. Each word involved several presses of the lever, so even a sentence took several minutes to construct. He learnt to economize with words. His comments were aphoristic or oracular, but often infused with wit. In his later years, he became too weak to control this machine effectively, even via facial muscles or eye movements, and his communication—to his immense frustration—became even slower.
Hawking in zero gravity (NASA)
At the time of his tracheotomy operation, he had a rough draft of a book, which he’d hoped would describe his ideas to a wide readership and earn something for his two eldest children, who were then of college age. On his recovery from pneumonia, he resumed work with the help of an editor. When the U.S. edition of A Brief History of Time appeared, the printers made some errors (a picture was upside down), and the publishers tried to recall the stock. To their amazement, all copies had already been sold. This was the first inkling that the book was destined for runaway success, reaching millions of people worldwide.
And he quickly became something of a cult figure, featuring on popular TV shows ranging from The Simpsons to The Big Bang Theory. This was probably because the concept of an imprisoned mind roaming the cosmos plainly grabbed people’s imagination. If he had achieved equal distinction in, say, genetics rather than cosmology, his triumph probably wouldn’t have achieved the same resonance with a worldwide public.
As shown in the feature film The Theory of Everything, which tells the human story behind his struggle, Hawking was far from the archetypal unworldly or nerdish scientist. His personality remained amazingly unwarped by his frustrations and handicaps. He had robust common sense, and was ready to express forceful political opinions.
However, a downside of his iconic status was that his comments attracted exaggerated attention even on topics where he had no special expertise—for instance, philosophy, or the dangers from aliens or from intelligent machines. And he was sometimes involved in media events where his “script” was written by the promoters of causes about which he may have been ambivalent.
Ultimately, Hawking’s life was shaped by the tragedy that struck him when he was only 22. He himself said that everything that happened since then was a bonus. And what a triumph his life has been. His name will live in the annals of science and millions have had their cosmic horizons widened by his best-selling books. He has also inspired millions by a unique example of achievement against all the odds—a manifestation of amazing willpower and determination.
Most people think of synchronized swimming, which gained Olympic status in 1984, as a newcomer sport that dates back only as far as Esther Williams' midcentury movies. But the aquatic precursors of synchronized swimming are nearly as old as the Olympics themselves.
Ancient Rome’s gladiatorial contests are well known for their excessive and gruesome displays, but their aquatic spectacles may have been even more over the top. Rulers as early as Julius Caesar commandeered lakes (or dug them) and flooded amphitheaters to stage reenactments of large naval battles— called naumachiae—in which prisoners were forced to fight one another to the death, or drown trying. The naumachiae were such elaborate productions that they were only performed at the command of the emperor, but there is evidence that other—less macabre—types of aquatic performances took place during the Roman era, including an ancient forerunner to modern synchronized swimming.
The first-century A.D. poet Martial wrote a series of epigrams about the early spectacles in the Colosseum, in which he described a group of women who played the role of Nereids, or water nymphs, during an aquatic performance in the flooded amphitheater. They dove, swam and created elaborate formations and nautical shapes in the water, such as the outline or form of a trident, an anchor and a ship with billowing sails. Since the women were portraying water nymphs, they probably performed nude, says Kathleen Coleman, James Loeb Professor of the Classics at Harvard University, who has translated and written commentaries on Martial’s work. Yet, she says, “There was a stigma attached to displaying one’s body in public, so the women performing in these games were likely to have been of lowly status, probably slaves.”
Regardless of their social rank, Martial was clearly impressed with the performance. “Who designed such amazing tricks in the limpid waves?” he asks near the end of the epigram. He concludes that it must have been Thetis herself—the mythological leader of the nymphs—who taught “these feats” to her fellow-Nereids.
Fast forward to the 19th century and naval battle re-enactments appear again, this time at the Sadler’s Wells Theater in England, which featured a 90-by-45 foot tank of water for staging “aqua dramas.” Productions included a dramatization of the late-18th-century Siege of Gibraltar, complete with gunboats and floating batteries, and a play about the sea-god Neptune, who actually rode his seahorse-drawn chariot through a waterfall cascading over the back of the stage. Over the course of the 1800s, a number of circuses in Europe, such as the Nouveau Cirque in Paris and Blackpool Tower Circus in England, added aquatic acts to their programs. These were not tent shows, but elegant, permanent structures, sometimes called the “people’s palaces,” with sinking stages or center rings that could be lined with rubber and filled with enough water to accommodate small boats or a group of swimmers.
Royal Aquarium, Westminster. Agnes Beckwith, c. 1885 (© The British Library Board)
In England, these Victorian swimmers were often part of a performing circuit of professional "natationists" who demonstrated "ornamental" swimming, which involved displays of aquatic stunts, such as somersaults, sculling, treading water and swimming with arms and legs bound. They waltzed and swam in glass tanks at music halls and aquariums, and often opened their acts with underwater parlor tricks like smoking or eating while submerged. Though these acts were first performed by men, female swimmers soon came to be favored by audiences. Manchester (U.K.) Metropolitan University's sports and leisure historian, Dave Day, who has written extensively on the subject, points out that swimming, "packaged as entertainment," gave a small group of young, working-class women the opportunity to make a living, not only as performers, but also as swimming instructors for other women. But as more women in England learned to swim, the novelty of their acts wore off.
Image by hippodrome memories. Water circus at Blackpool.
In the United States, however, the idea of a female aquatic performer still seemed quite avant-garde when Australian champion swimmer Annette Kellerman launched her vaudeville career in New York in 1908. Billed as the "Diving Venus" and often considered the mother of synchronized swimming, Kellerman wove together displays of diving, swimming and dancing, which The New York Times called "art in the making." Kellerman's career—which included starring roles in mermaid and aquatic-themed silent films and lecturing to female audiences about the importance of getting fit and wearing sensible clothing—reached its pinnacle when she, and a supporting cast of 200 mermaids, replaced prima-ballerina Pavlova as the headline act at the New York Hippodrome in 1917.
While Kellerman was promoting swimming as a way to maintain health and beauty, the American Red Cross, which had grown concerned about high drowning rates across the country, turned to water pageants as an innovative way to increase public interest in swimming and water safety. These events, which featured swimming, acting, music, life-saving demonstrations or some combination of these, became increasingly popular during the 1920s. Clubs for water pageantry, water ballet and "rhythmic" swimming—along with clubs for competitive diving and swimming—started popping up in every pocket of America.
Annette Kellerman (1887-1975), Australian professional swimmer, vaudeville and film star in her famous custom swimsuit (Library of Congress via Wikicommons)
One such group, the University of Chicago Tarpon Club, under the direction of Katharine Curtis, had begun experimenting with using music not just as background, but as a way to synchronize swimmers with a beat and with one another. In 1934, the club, under the name Modern Mermaids, performed to the accompaniment of a 12-piece band at the Century of Progress World's Fair in Chicago. It was here that "synchronized swimming" got its name when announcer Norman Ross used the phrase to describe the performance of the 60 swimmers. By the end of the decade, Curtis had overseen the first competition between teams doing this type of swimming and written its first rulebook, effectively turning water ballet into the sport of synchronized swimming.
While Curtis, a physical education instructor, was busy moving aquatic performance in the direction of competitive sport, American impresario Billy Rose saw a golden opportunity to link the already popular Ziegfeld-esque “girl show” with the rising interest in water-based entertainment. In 1937, he produced the Great Lakes Aquacade on the Cleveland waterfront, featuring—according to the souvenir program—"the glamour of diving and swimming mermaids in water ballets of breath-taking beauty and rhythm."
The show was such a success that Rose produced two additional Aquacades in New York and San Francisco, where Esther Williams was his star mermaid. Following the show, Williams became an international swimming sensation through her starring roles in MGM's aquamusicals, featuring water ballets elaborately choreographed by Busby Berkeley.
Though competitive synchronized swimming—which gained momentum during the middle of the century—began to look less and less like Williams' water ballets, her movies did help spread interest in the sport. Since its 1984 Olympic induction, synchronized swimming has moved farther from its entertainment past, becoming ever "faster, higher, and stronger," and has proven itself to be a serious athletic event.
But regardless of its roots, and regardless of how it has evolved, the fact that synchronized swimming remains a spectator favorite—it was one of the first sporting events to sell out in Rio—just goes to show that audiences still haven't lost that ancient appetite for aquatic spectacle.
How to watch synchronized swimming
If synchronized swimming looks easy, the athletes are doing their jobs. Though it is a grueling sport that requires tremendous strength, flexibility, and endurance—all delivered with absolute precision while upside down and in the deep end—synchronized swimmers are expected to maintain "an illusion of ease," according to the rulebook issued by FINA, the governing body of swimming, diving, water polo, synchronized swimming and open water swimming.
Olympic synchronized swimming includes both duet and team events, with scores from technical and free routines combined to calculate a final rank. Routines are scored for execution, difficulty and artistic impression, with judges watching not only for perfect synchronization and execution, both above and below the surface, but also for swimmers' bodies to be high above the water, for constant movement across the pool, for teams to swim in sharp but quickly changing formations, and for the choreography to express the mood of the music.
The United States and Canada were the sport's early leaders, but Russia—with its rich traditions in dance and acrobatics, combined with its stringent athletic discipline—has risen to dominance in recent years, winning every Olympic gold medal of the 21st century and contributing to the ever-changing look of the sport. Russia, followed by China, remains the team to watch in Rio this year, while the U.S. is hoping for a win from American duet pair Mariya Koroleva and Anita Alvarez.
There have been some interesting creatures popping up in the Arctic. Canadian hunters have found white bears with brown tints—a cross between Ursus maritimus, the polar bear, and Ursus arctos horribilis, the grizzly. A couple of decades ago, off the coast of Greenland, something that appeared to be half-narwhal, half-beluga surfaced, and much more recently, Dall’s porpoise and harbor porpoise mixes have been swimming near British Columbia.
In “The Arctic Melting Pot,” a study published in the journal Nature in December 2010, Brendan Kelly, Andrew Whiteley and David Tallmon claim, “These are just the first of many hybridizations that will threaten polar diversity.” The biologists identified a total of 34 possible hybridizations.
Arctic sea ice is melting, and fast—at a rate of 30,000 square miles per year, according to NASA. And some scientists predict that the region will be ice-free within about 40 years. “Polar bears are spending more time in the same areas as grizzlies; seals and whales currently isolated by sea ice will soon be likely to share the same waters,” say Kelly and his colleagues in the study. Naturally, there will be some interbreeding.
Such mixed offspring are hard to find. But, thanks to technology and the creative mind of artist Nickolay Lamm, they’re not hard to envision.
Say a harp seal (Phoca groenlandica) mates with a hooded seal (Cystophora cristata), or a bowhead whale (Balaena mysticetus) breeds with a right whale (Eubalaena spp.). What would the offspring look like? Dina Spector, an editor at Business Insider, was curious and posed the question to Lamm.
This past spring, Lamm, who creates forward-looking illustrations from scientific research, produced scenes depicting the effect of sea level rise on coastal U.S. cities over the next few centuries, based on data reported by Climate Central, for the news outlet. Now, building off Spector’s question, he has created a series of digitally manipulated photographs—his visions of several supposed Arctic hybrids.
“In that Nature report, it was just a huge list of species which could cross breed with one another. I feel that images speak a lot more,” says Lamm. “With these, we can actually see the consequences of climate change.”
Lamm first selected several of the hybridizations listed in the study for visual examination. He then picked a stock photo of one of the two parent species (shown on the left in each pairing), then digitally manipulated it to reflect the shape, features and coloring of the other species (on the right). Blending these, he derived a third photograph of their potential young.
To inform his edits in Photoshop, the artist looked at any existing photographs of the crossbred species. “There are very, very few of them,” he notes. He also referred to any written descriptions of the hybrids and, enlisting the help of wildlife biologist Elin Pierce, took into account the dominant features of each original species. In some cases, Lamm took some artistic license. He chose to illustrate the narwhal-beluga mix, for example, with no tusk, even though Pierce suggested that the animal may or may not have a very short tooth protruding from its mouth.
Biologists are concerned about the increasing likelihood of this crossbreeding. “As more isolated populations and species come into contact, they will mate, hybrids will form and rare species are likely to go extinct,” reports Nature.
Many critics of Lamm’s series have argued that these hybrids may just be a product of evolution. But, to that, Lamm says, “Climate change is a result of us humans and not just some natural evolution that would happen without us.”
About the project itself, he adds, “I am personally concerned about the environment, and this is just my way of expressing my worry about climate change.”
Even if you don’t know it, you have probably been surrounded by house sparrows your entire life. Passer domesticus is one of the most common animals in the world. It is found throughout Northern Africa, Europe, the Americas and much of Asia and is almost certainly more abundant than humans. The birds follow us wherever we go. House sparrows have been seen feeding on the 80th floor of the Empire State Building. They have been spotted breeding nearly 2,000 feet underground in a mine in Yorkshire, England. If asked to describe a house sparrow, many bird biologists would describe it as a small, ubiquitous brown bird, originally native to Europe and then introduced to the Americas and elsewhere around the world, where it became a pest of humans, a kind of brown-winged rat. None of this is precisely wrong, but none of it is precisely right, either.
Part of the difficulty of telling the story of house sparrows is their commonness. We tend to regard common species poorly, if at all. Gold is precious, fool’s gold a curse. Being common is, if not quite a sin, a kind of vulgarity from which we would rather look away. Common species are, almost by definition, a bother, damaging and in their sheer numbers, ugly. Even scientists tend to ignore common species, choosing instead to study the far away and rare. More biologists study the species of the remote Galapagos Islands than the common species of, say, Manhattan. The other problem with sparrows is that the story of their marriage with humanity is ancient and so, like our own story, only partially known.
Many field guides call the house sparrow the European house sparrow or the English sparrow and describe it as being native to Europe, but it is not native to Europe, not really. For one thing, the house sparrow depends on humans to such an extent it might be more reasonable to say it is native to humanity rather than to some particular region. Our geography defines its fate more than any specific requirements of climate or habitat. For another, the first evidence of the house sparrow does not come from Europe.
The clan of the house sparrow, Passer, appears to have arisen in Africa. The first hint of the house sparrow itself is based on two jawbones found in a layer of sediment more than 100,000 years old in a cave in Israel. The bird to which the bones belonged was Passer predomesticus, or the predomestic sparrow, although it has been speculated that even this bird might have associated with early humans, whose remains have been found in the same cave. The fossil record is then quiet until 10,000 or 20,000 years ago, when birds very similar to the modern house sparrow begin to appear in the fossil record in Israel. These sparrows differed from the predomestic sparrow in subtle features of their mandible, having a crest of bone where there was just a groove before.
Once house sparrows began to live among humans, they spread to Europe with the spread of agriculture and, as they did, evolved differences in size, shape, color and behavior in different regions. Yet all of the house sparrows around the world appear to have descended from a single, human-dependent lineage, one story that began thousands of years ago. From that single lineage, house sparrows have evolved as we have taken them to new, colder, hotter and otherwise challenging environments, so much so that scientists have begun to consider these birds different subspecies and, in one case, a different species. In parts of Italy, as house sparrows spread, they met the Spanish sparrow (P. hispaniolensis). The two hybridized, resulting in a new species called the Italian sparrow (P. italiae).
As for how the relationship between house sparrows and humans began, one can imagine many first meetings, many first moments of temptation to which some sparrows gave in. Perhaps the small sparrows ran—though “sparrowed” should be the verb for their delicate prance—quickly into our early dwellings to steal untended food. Perhaps they flew, like sea gulls, after children with baskets of grain. What is clear is that eventually sparrows became associated with human settlements and agriculture. Eventually, the house sparrow came to depend on the food we grew so much that it no longer needed to migrate. The house sparrow, like humans, settled. The birds began to nest in our habitat, in the buildings we built, and to eat what we produce (whether our food or our pests).
Meanwhile, although I said all house sparrows come from one human-loving lineage, there is one exception. A new study from the University of Oslo has revealed a lineage of house sparrows different from all the others. These birds migrate. They live in the wildest remaining grasslands of the Middle East and do not depend on humans. They are genetically distinct from all the other house sparrows, the ones that do depend on humans. These are wild ones, hunter-gatherers that find everything they need in natural places. But theirs has proven to be a far less successful lifestyle than settling down.
Maybe we would be better off without the sparrow, an animal that thrives by robbing from our antlike industriousness. If that is what you are feeling, you are not the first. In Europe, in the 1700s, local governments called for the extermination of house sparrows and other animals associated with agriculture, including, of all things, hamsters. In parts of Russia, your taxes would be lowered in proportion to the number of sparrow heads you turned in. Two hundred years later came Chairman Mao Zedong.
Image by Dorling Kindersley / Getty Images. The house sparrow, like humans, settled. They began to nest in our habitat, in buildings we built, and to eat what we produce.
Image by David Courtenay / Getty Images. Passer domesticus is one of the most common animals in the world. It is found throughout Northern Africa, Europe, the Americas and much of Asia and is almost certainly more abundant than humans.
Image courtesy of The Fat Finch. Chairman Mao Zedong commanded people all over China to come out of their houses to bang pots and make the sparrows fly, which, in March of 1958, they did, pictured. The sparrows flew until exhausted, then they died, mid-air, and fell to the ground.
Mao was a man in control of his world, but not, at least in the beginning, of the sparrows. He viewed sparrows as one of the four “great” pests of his regime (along with rats, mosquitoes and flies). The sparrows in China are tree sparrows, which, like house sparrows, began to associate with humans around the time that agriculture was invented. Although they are descendants of distinct lineages of sparrows, tree sparrows and house sparrows share a common story. At the moment at which Mao decided to kill the sparrows, there were hundreds of millions of them in China (some estimates run as high as several billion), but there were also hundreds of millions of people. Mao commanded people all over the country to come out of their houses to bang pots and make the sparrows fly, which, in March of 1958, they did. The sparrows flew until exhausted, then they died, mid-air, and fell to the ground, their bodies still warm with exertion. Sparrows were also caught in nets, poisoned and killed, adults and eggs alike, any way they could be. By some estimates, a billion birds were killed. These were the dead birds of the Great Leap Forward, the dead birds out of which prosperity would rise.
Of course, moral stories are complex, and ecological stories are too. When the sparrows were killed, crop production increased, at least according to some reports, at least initially. But with time, something else happened. Pests of rice and other staple foods erupted in densities never seen before. The crops were mowed down and, partly as a consequence of starvation due to crop failure, 35 million Chinese people died. The Great Leap Forward leapt backward, which is when a few scientists in China began to notice a paper published by a Chinese ornithologist before the sparrows were killed. The ornithologist had found that while adult tree sparrows mostly eat grains, their babies, like those of house sparrows, tend to be fed insects. In killing the sparrows, Mao and the Chinese had saved the crops from the sparrows, but appear to have left them to the insects. And so Mao, in 1960, ordered sparrows to be conserved (replacing them on the list of four pests with bedbugs). It is sometimes only when a species is removed that we see clearly its value. When sparrows are rare, we often see their benefits; when they are common, we see their curse.
When Europeans first arrived in the Americas, there were Native American cities, but none of the species Europeans had come to expect in cities: no pigeons, no sparrows, not even any Norway rats. Even once European-style cities began to emerge, they seemed empty of birds and other large animals. In the late 1800s, a variety of young visionaries, chief among them Nicholas Pike, imagined that what was missing were the birds that live with humans and, he thought, eat our pests. Pike, about whom little is known, introduced about 16 birds into Brooklyn. They rose from his hands and took off and prospered. Every single house sparrow in North America may be descended from those birds. The house sparrows were looked upon favorably for a while until they became abundant and began to spread from California to the New York Islands, or vice versa anyway. In 1889, just 49 years after the introduction of the birds, a survey was sent to roughly 5,000 Americans to ask them what they thought of the house sparrows. Three thousand people responded and the sentiment was nearly universal: The birds were pests. This land became their land too, and that is when we began to hate them.
Because they are an introduced species, now regarded as invasive pests, house sparrows are among the few bird species in the United States that can be killed essentially anywhere, any time, for any reason. House sparrows are often blamed for declines in the abundance of native birds, such as bluebirds, though the data linking sparrow abundance to bluebird decline are sparse. The bigger issue is that we have replaced bluebird habitats with the urban habitats house sparrows favor. So go ahead and bang your pots, but remember, you were the one who, in building your house, constructed a house sparrow habitat, as we have been doing for tens of thousands of years.
As for what might happen if house sparrows became more rare, one scenario has emerged in Europe, where the birds have declined for the first time in thousands of years. In the United Kingdom, for example, numbers of house sparrows have fallen by 60 percent in cities. As the birds became rare, people began to miss them again. In some countries the house sparrow is now considered a species of conservation concern. Newspapers ran series on the birds’ benefits. One newspaper offered a reward for anyone who could find out “what was killing our sparrows.” Was it pesticides, some asked? Global warming? Cellphones? Then just this year a plausible (though probably incomplete) answer seems to have emerged. The Eurasian sparrowhawk (Accipiter nisus), a hawk that feeds almost exclusively on sparrows, has become common in cities across Europe and is eating the sparrows. Some people have begun to hate the hawk.
In the end, I can’t tell you whether sparrows are good or bad. I can tell you that when sparrows are rare, we tend to like them, and when they are common, we tend to hate them. Our fondness is fickle and predictable and says far more about us than them. They are just sparrows. They are neither lovely nor terrible, but instead just birds searching for sustenance and finding it again and again where we live. Now, as I watch a sparrow at the feeder behind my own house, I try to forget for a moment whether I am supposed to like it or not. I just watch as it grabs onto a plastic perch with its thin feet. It hangs there and flutters a little to keep its balance as the feeder spins. Once full, it fumbles for a second and then flaps its small wings and flies. It could go anywhere from here, or at least anywhere it finds what it needs, which appears to be us.
Rob Dunn is a biologist at North Carolina State University and the author of The Wild Life of Our Bodies. He has written for Smithsonian about our ancestors’ predators, singing mice and the discovery of the hamster.