Bathed in butter or lightly spritzed with fresh lemon juice, lobster is the king of seafood—a royal crustacean with an untraceable lineage whose journey from seafloor to table can be fraught with political and ecological uncertainty. With consumer demand for responsibly harvested seafood rising, companies such as Red Lobster, Chicken of the Sea and Seattle Fish Co. have pledged to do a better job of tracing the source of the lobster they import. Following through with their promise, however, remains difficult because there’s no effective way to identify where a lobster was caught once it hits the docks. That’s why Stephen Box and Nathan Truelove, researchers from Smithsonian Marine Station in Fort Pierce, Florida, are searching the lobster’s genetic code for a better traceability tool.
Most of the lobster tails consumed in the United States come from the Caribbean; exactly where is nearly impossible to say with current technology. But that information is critically important because illegal, unregulated and unreported lobster fishing costs some countries millions of dollars in lost revenue annually. It also reduces the number of lobsters in marine sanctuaries intended as safe habitats where animals can breed and grow without fishing pressure.
If, however, a lobster’s home territory is written into its genetic code as Box and Truelove suspect, it just may be possible to distinguish a legally captured lobster from one with a shady background—maybe even after it’s made it to the dinner plate.
Caribbean lobster, also known as spiny lobster, supports one of the largest and most economically important fisheries in the Caribbean. The U.S. is the largest consumer of that resource. According to Jimmy Andino, a researcher and lobster fisheries specialist at the Center for Marine Studies in Honduras, his country alone exports $40 million worth of lobster to the U.S. market. He says intensive lobster fishing throughout the Caribbean is causing a steady decline in both the number and size of lobster available to satisfy that market. The incentive to fish outside legal boundaries is strong.
Lobsters spend their first few months of life as tiny swimming larvae that can be carried far and wide by currents. As a result, their genes have been homogenized throughout the Caribbean. “There’s very little genetic differentiation amongst lobsters in the Caribbean,” says Box. “But what we suspect is that once a lobster settles into an area, its environment starts shaping how it will function in that specific location. We are all influenced by our environment, and we start expressing genes to respond to environmental conditions.”
In Himalayan rabbits, for example, warm conditions turn off genes that tell the animal’s cells to produce melanin. With no melanin, the rabbit’s fur turns white. Under cold conditions melanin genes turn on and the fur turns black. In the case of lobster, environmental factors such as salinity, water depth and turbidity may cause certain changes in the animal’s genetic code that turn specific genes on or off.
This summer, Box and Truelove will be gathering tissue samples from lobsters in five geographically distinct areas of the Caribbean to see if they can find specific bits of DNA that are expressed in predictable ways based on their location. The scientists don’t even need to know what those genes do, just whether or not they’re turned on or off.
“If we can identify that, we can say ‘if you’re expressing that set of genes, or that specific signature of genes, you must be living in this area,’” Box says, “because you wouldn’t be expressing them if you lived in a different area.”
Such a tool would be a huge improvement over current tracking methods, which rely on resource-intensive patrol boats, self-reporting by fishers when they off-load their catch, and GPS units installed on fishing vessels, which tell where a boat has been but not where a lobster has been caught.
Searching for environmentally sensitive DNA in any organism is a relatively new field, and applying these concepts to fisheries management is uncharted territory. “If it can be done, it’s going to be very, very useful,” says Nancy Daves from NOAA Fisheries Office of International Affairs. “We know that there is a significant amount [of poaching and illegal fishing] in the Caribbean, where it’s like a basin with countries all around it. They’re all stealing from each other.”
In Jamaica, for instance, the government reports that poachers robbed $130 million in lobster from that country’s waters between 2006 and 2011. “They actually build in a factor of 10 percent in their management plan to allow for illegal take,” says Daves. “They acknowledge this as a fact of life in the Caribbean.”
It is a fact of life that the U.S. plays a hand in, and could conceivably change if traceability improves and importers and distributors refuse to purchase lobster from illegal, unregulated and unreported (IUU) sources. The lobster pledge some have already signed is intended to stem the import of lobster caught using dangerous scuba diving methods that have been outlawed in most Caribbean countries. Despite the laws, some fishers are still using scuba, and as more and more lobsters are taken out of the sea, they are diving ever deeper to find them.
Box says that along the impoverished Miskito coast of Honduras and Nicaragua, decompression sickness from diving too deep and staying too long kills roughly 20 divers every year and cripples many more. A genetic tool that identifies the depth of a lobster’s range would help signatories to the lobster pledge follow through on their promise. Similarly, finding a genetic signature that identifies the geographic region a lobster comes from will help curtail poaching across international borders. “Lobster is not part of the Honduran diet,” Andino says, “but it’s part of our industry for exportation. The genetic work will help us to be sure that the lobster that is caught in Honduras belongs to Honduras. That it’s not going to illegal and unreported fishing.”
According to Box, as important as their economic impact is, poachers can also make it difficult to gauge the ecological sustainability of the fishery they poach from as well as the one they claim to fish. “If you’re trying to manage a fishery for a specific area,” says Box, “you really want to know how much production is coming out of that area. If you’re actually stealing it from somewhere else, it can be very difficult to know how many lobster you really have.”
The genetic method Box and Truelove are exploring would help natural resource managers get a better handle on their lobster populations, and they’re working with Andino to collect samples of lobster throughout Honduran waters in hopes of better understanding the country’s lobster stock.
The technology they are developing may also be applicable to other fisheries. “I think it is theoretically something that can and will be used,” says John Henderschedt, Director of NOAA’s Seafood Inspection Program. “What is less clear, at least in the near term, is the extent to which it can be used in various fisheries.” Genetic testing technology is expensive compared to some other methods. In addition, some environmental conditions change from year to year, so the genetic signature for a given region may need to be identified on an annual basis. Henderschedt says it’s not likely to be worth the cost in every circumstance, but it could be very valuable if used in areas where IUU fishing poses the greatest risk for environmental or economic losses.
According to Truelove, those are questions to be addressed down the road. Right now, he and Box are focused on step one. “There have been no genetic studies on this species,” he says. “We’re basically building this up from scratch.” Even the techniques for gathering DNA in the field are new. Using liquid nitrogen to preserve very high quality DNA, Box says they’ll have to “baby” tissue samples from throughout the Caribbean all the way back to their lab in Florida.
To find what they’re looking for, they need to sequence as much of the genetic code as they can. Once they identify genes that respond to environmental conditions specific to each region, they won’t need such careful collecting methods. At that point, the scientists should be able to locate those genetic markers in meat from lobster at the fish market or even in samples taken from lobster that is frozen and packaged for export to the U.S.
Truelove won’t take a guess yet as to how much detail they will glean from this work. “One of the big unknowns we want to try to figure out with this technique is: how much can we really zoom in? Will we be able to distinguish Honduran lobster from Nicaraguan lobster, or can we continue zooming until we can distinguish lobsters caught using casitas (shallow water shelters built to attract lobster) from those caught off shore in deeper water that would identify them as being caught using scuba at dangerous depths?”
According to Box, that would be the epitome of success, as good as a lobster delivered to the dock with a return address label glued to its forehead.
The plot could not be simpler: A young bunny says goodnight to the objects and creatures in a green-walled bedroom, drifting gradually to sleep as the lights dim and the moon glows in a big picture window. Goodnight Moon has sold more than 48 million copies since it was published in 1947. It has been translated into at least a dozen languages, from Spanish to Hmong, and countless parents around the world have read it to their sleepy children.
Author Margaret Wise Brown, subject of a new biography, based Goodnight Moon on her own childhood ritual of saying goodnight to the toys and other objects in the nursery she shared with her sister Roberta, a memory that came back to her in a vivid dream as an adult. The text she jotted down upon waking is at once both cozy and unsettling, mimicking and inducing the unmoored feeling that comes with drifting away to sleep. Unlike so many children’s books, with their pat plots and clumsy didactics, it’s also one that parents can stand rereading—and not only for its soporific effect on their sons and daughters.
Reviewers have described the book as less a story than “an incantation,” and writers on the craft of writing have labored to tease out the strands of its genius. This exercise feels dangerous, since a close reading may raise more questions than answers (when was the bunny planning to eat that mush, anyway?). But while the book’s relationship to reality may be slightly askew, it also feels true to childhood, a period when, as Brown was quick to note, the world adults take for granted seems every bit as strange as a fairy story, and the pleasure of language lies less in what it communicates than in its sound and rhythm.
She may not be a household name like Beatrix Potter or Dr. Seuss, but with her innovative insights into what the very young really want to read about, Margaret Wise Brown (1910-1952) revolutionized children’s literature. The new book, In the Great Green Room, is by author Amy Gary, who bases her account of Brown’s “brilliant and bold life” partly on a trove of unpublished manuscripts, journals and notes that she discovered in Roberta’s hayloft in 1990. Over more than 25 years, as Gary pored over reams and reams of fragile onionskin that had been left untouched since Brown’s sudden death at age 42, the biography gradually took shape—and the woman who emerged was no less charming and strange than her most famous work.
Born into a wealthy family and raised on Long Island, Brown came to children’s literature in a roundabout way. In college, she admired Modernist writers like Virginia Woolf and Gertrude Stein, although she devoted more energy to the equestrian team than to academics. After breaking off an engagement with a well-bred beau (she overheard him laughing with her father over how to control her), she moved to Manhattan to pursue a vague literary ambition, living primarily on an allowance from her parents.
Brown loved the hustle and bustle of city life, but the short stories she wrote for adults failed to interest publishers. Feeling pressure from her father to either marry or start supporting herself, she eventually decided to enroll at the Bureau of Educational Experiments’ Cooperative School for Student Teachers—more usually known as Bank Street, for its Greenwich Village location. There, the school’s founder Lucy Sprague Mitchell recruited her to collaborate on a series of textbooks in a style Mitchell called “Here-and-Now.”
At the time, children’s literature still consisted largely of fairy tales and fables. Mitchell, basing her ideas on the relatively new science of psychology and on observations of how children themselves told stories, believed that preschoolers were primarily interested in their own small worlds, and that fantasy actually confused and alienated them. “It is only the blind eye of the adult that finds the familiar uninteresting,” Mitchell wrote. “The attempt to amuse children by presenting them with the strange, the bizarre, the unreal, is the unhappy result of this adult blindness.”
Under Mitchell’s mentorship, Brown wrote about the familiar—animals, vehicles, bedtime rituals, the sounds of city and country—testing her stories on classrooms of young children. It was important not to talk down to them, she realized, and yet still to speak to them in their own language. That would mean summoning her own keen, childlike senses to observe the world as a child does—which is how one chilly November she found herself spending the night in a friend’s barn, listening to the rumbling of cows’ bellies and the purring of farm cats.
Maintaining a childlike perspective was the key to her work, but throughout her life, Brown worried that she had failed to grow up—even as she approached 40, she was painting glow-in-the-dark stars over the bed in her New York apartment. But like the wandering protagonist of one of her other classics, Home for a Bunny, she often felt out of place. “I am stuck in my childhood,” she told a friend, “and that raises the devil when one wants to move on.” The whimsical quality she interpreted as immaturity appealed to most of her friends, but it was a constant source of stress in her longest intimate relationship.
Brown met Michael Strange (born Blanche Oelrichs) at the home of a married man with whom they were each having an affair. Brown’s love life had always been complicated, and as she watched friends settle down with husbands and families, it was a fate she both yearned for and feared. But Strange, a poet who had been married to the actor John Barrymore, seemed to offer both the coziness of family life and the adventure Brown craved. Despite the era’s strong taboo around same-sex relationships, the women moved into apartments next door to one another and lived as a couple, on and off, through most of the 1940s.
Michael Strange, who was married to John Barrymore at the time this photo was taken (Bain News Service, retrieved from the Library of Congress)
"The Only House" (pictured here, today) was Brown's island getaway in Vinalhaven, Maine. (Courtesy of the author)
Margaret with quill pen, her preferred writing instrument (Photo by Consuelo Kanaga, courtesy of the Brooklyn Museum)
Margaret (right) and her sister, Roberta. The family menagerie included a squirrel, rabbits, a guinea pig, and a dog that shared their father's name, Bruce. (Courtesy of the Westerly Public Library)
Strange—alluring but also mercurial and narcissistic—was not an easy person to love. But even as she dismissed her partner’s “baby stories,” Brown was becoming a major force in the world of children’s publishing. Publishing dozens of titles a year under multiple names at seven publishers, she cultivated many of the best illustrators in the business and ensured that their work, an integral part of her books, was given its due at the printers. One of these was Goodnight Moon, for which she recruited her close friend Clement Hurd to provide the color-saturated paintings that have since become iconic. When it went on sale for $1.75 in the fall of 1947, the New York Times praised the combination of art and language, urging parents that the book “should prove very effective in the case of a too wide-awake youngster.”
Although she gave some of her earliest stories away for a pittance, Brown became a tough negotiator, once going so far as to mail her editor a set of dueling pistols. And as she matured, her stories grew past the simple “Here-and-Now” she had learned under Mitchell, becoming more dreamlike and evocative. “The first great wonder at the world is big in me,” she wrote to Strange. “That is the real reason that I write.”
Though she was grief-stricken after Strange died of leukemia in 1950, it was then that Brown fully came into her own, reconciling her disappointment at never being able to write “serious” work for adults with her success in the growing children’s publishing field (the Baby Boom had made baby books big business). Her new self-confidence led to a (thoroughly veiled) autobiography in picture book form, Mister Dog, about a pipe-smoking terrier who “belonged to himself” and “went wherever he wanted to go.”
“She was comfortable in her solitude,” Gary writes. “She belonged to herself and only herself.”
Soon after reconciling herself to life as a successful, independent woman, Brown met and fell in love with the man with whom she believed she would spend the rest of her life. James Stillman Rockefeller Jr., a handsome great-nephew of J.D. Rockefeller who was known to his friends as “Pebble,” asked her to marry him. For their honeymoon, the couple planned to sail around the world.
Before they could begin their grand adventure, Brown had to take a business trip to France, where she developed appendicitis. Her emergency surgery was successful, but the French doctor prescribed strict bed rest as she recovered. On the day scheduled for her release, a nurse asked how she felt. “Grand!” Brown declared, kicking up her feet—and dislodging a blood clot in her leg, which traveled to her brain and killed her within hours. She was 42.
Although he went on to find love and raise a family with another woman, Rockefeller never quite got over Brown. Gary, who relied on the now-elderly Pebble’s recollections for the last chapters of her biography, also persuaded him to write a moving prologue about their brief time together. “It has been sixty years since those days,” he writes, “but over half a century later, her light is burning ever brighter.”
It’s a sentiment with which any Goodnight Moon family is likely to agree.
Music is so much a part of black America, it pops up all over the vast new National Museum of African American History and Culture. From Harriet Tubman’s modest hymnal of spirituals to Sly Stone’s signed Fender Rhodes keyboard and Public Enemy’s boom box that helps close the 20th-century cultural history, there’s no separating the importance of music from the history on hand.
But when one arrives at the entry to the fourth floor “Musical Crossroads” exhibition, heralded by the sparkly red finish on Chuck Berry’s Cadillac, the futuristic fantasy of the Parliament-Funkadelic mothership replica, and Michael Jackson’s Victory Tour fedora, it is as if entering its own inclusive African-American Music History Museum.
And inclusive it is—with displays on African music imported by the enslaved to this country, devotional music that helped bind black communities against all odds, gospel, minstrel music, ragtime, jazz, blues, rhythm & blues, rock ’n’ roll, hip-hop and EDM. Yes, and some country stars of color as well.
One of the challenges of opening the Smithsonian’s newest major museum was acquiring its contents from scratch. Sure, the nearby National Museum of American History already had a lot of artifacts, from Scott Joplin sheet music to Dizzy Gillespie’s B-flat trumpet.
But it was important not to raid other museums; those artifacts were part of the American story.
It was up to Musical Crossroads curator Dwandalyn Reece to amass the objects that would fill the 6,200-square-foot space.
Other American music museums had a significant head start on major artifacts—from Cleveland’s Rock & Roll Hall of Fame to the Experience Music Project in Seattle. And that’s not to mention all of the historical items in all of the Hard Rock Cafes around the world.
In the more than 20 years since she started her career, Reece says, “the whole concept of music as memorabilia has flourished.”
Still, there was something about the prestige of the Smithsonian that convinced many to donate cherished and long-held heirlooms that were not previously seen or available.
One of the most impressive things about the museum is that relics like Little Richard’s flashy jacket or Chuck Berry’s car were donated directly from the artists themselves. Others, like Bo Diddley’s signature square guitar and porkpie hat, were given by their estates.
Some families donated items that were not previously known to have existed at all, such as the ensemble that the celebrated opera singer Marian Anderson wore as she sang on the steps of the Lincoln Memorial in 1939. The historic concert before a crowd of more than 75,000 people and millions more on radio had been organized with the help of First Lady Eleanor Roosevelt after the Daughters of the American Revolution refused to allow Anderson to sing to an integrated audience at its Constitution Hall.
“That’s a tremendous event in the history of the United States and in music,” Reece says. Her outfit that day “would have been a wished-for item if I’d known it existed. But I didn’t know it existed.”
While researching another object however, she says, “we were put in contact with the family and they let us know that they still had the outfit and they were willing to donate it to the museum.”
Flashy as it is, the shiny red 1973 Cadillac Eldorado convertible at the Musical Crossroads entrance may not seem to have anything to do with Chuck Berry, other than simple ownership. He started pioneering Rock ’n’ Roll by mixing country and R&B two decades earlier.
But, Reece says, “the car has its own symbolism.”
It was driven on stage for the big superstar tribute concert for Berry captured in the 1987 movie Hail! Hail! Rock ’n’ Roll.
“It’s more than just a shiny object that’s standing in the center of the museum,” she says. “It’s also a symbolic element of Chuck Berry’s own personal story and career, tied to his relationship, growing up in St. Louis, Missouri, and not being allowed to go to the Fox Theatre as a child, because of his race. And then you have this moment where he’s driving a car across the stage at this same theater 40 years later. Everything represented by that—the freedom and liberation and sense of achievement of an African-American man who is one of the architects of America’s greatest exports, Rock ’n’ Roll, and what that says about music from that standpoint. Where does music function as a tool of liberation and protest and individuality in American culture and African-American culture.”
A Chuck Berry guitar that he nicknamed “Maybellene” is also part of the display—one of more than a dozen guitars in the exhibition.
But there are other items tied to individual artists that helped define their place in music and the American imagination—from the wire rim glasses of Curtis Mayfield to the eyepatch of Slick Rick; from the cape (and signed shoes) of James Brown to the star-shaped guitar and outfit of Bootsy Collins. And there are the tiny tap shoes once worn by a 3-year-old Sammy Davis Jr.
One never knows what particular item will provide that instant connection to the artist it represents, but it can come in artifacts big and small—from the elaborate dresser kit of Lena Horne to the singular metal cigarette lighter of bluesman Josh White.
A 1946 Selmer trumpet played by Louis Armstrong represents that jazz great; Miles Davis’ legacy is marked by a stylish jacket he wore in the 1960s. The formidable dress of Ella Fitzgerald, and M.C. Hammer’s parachute pants are also under glass (as if to say, “Can’t Touch This”).
One ensemble does double duty—a costume from Lady Sings the Blues calls to mind both the singer who wore it, Diana Ross, and the character she portrayed, Billie Holiday, who is otherwise represented by an oversized acetate of a 1953 10-inch studio album, “An Evening with Billie Holiday.”
Along the way, there are artists represented who will likely be unfamiliar to wide audiences, from 19th century composer Francis Johnson to early prodigy Blind Tom Wiggins (whose flute is on display). Visitors will learn of both “sacred” steel guitar player Felton Williams and early ’70s Detroit punk band Death.
Some artists may seem shortchanged. Sam Cooke is represented by a contract signature; the Jackson 5 by Jermaine’s costume (with the Gary, Indiana, musician representing Detroit), Janet Jackson by a cassette of “Control.” Frankie Beverly’s cap is there, but there does not seem to be anything from Al Green.
Hundreds of albums are on display in a record store flip format, but the covers are affixed to durable materials and fastened to their crates so as to withstand the expected crush of visitors. “We didn’t want album covers all over the floor, or tossing them around,” Reece says.
One area will allow visitors to sit in a producer’s or engineer’s seat to create a track. Another interactive area shows the relationships of songs to regions and other genres.
When asked to divulge her favorite object, Reece can’t ignore the triangular Parliament/Funkadelic mothership. “The thing that resonates the most for me is not only that George Clinton donated it, but it was the public reaction to the acquisition,” she says. “For some reason it touched a positive nerve in people, in people seeing the Smithsonian as their place, as being interested in their history.”
Sometimes, people think of a national museum as elite and apart from regular people, Reece says. “But this resonated with people,” she says. “And I’m so proud of that.”
The inaugural exhibition Musical Crossroads is on view in the National Museum of African American History and Culture. All free timed entry passes to visit the museum have currently been distributed through the month of December. Passes for 2017 are available beginning Monday, October 3, at 9 a.m. A limited number of same-day timed entry passes are offered each day at the museum and are distributed on a first-come, first-served basis beginning at 9:15 a.m.
What are Food Fridays like?
Every week, the Food Fridays team answers this question: What's the history behind the food we eat? The kitchen is part of an interactive learning space for dynamic, sensory-learning experiences, and we're looking forward to sharing our discoveries about food history with the public (and, in the future, via our website!). Each week, we'll fire up the gas range on our movable kitchen island, bring out our shiny new pots and pans, and invite a guest chef, home cook, or food innovator to join us as we cook and talk about food—and American history.
Why build a demonstration kitchen in a history museum?
We've learned that as our visitors peek inside Julia Child's kitchen, or explore our exhibition FOOD: Transforming the American Table 1950-2000, they discover that food is an essential expression of American culture, innovation, and history, but there's always something missing when you can't exhibit the food itself. Certainly our objects tell powerful parts of that story—when you can see Julia's whisk and copper bowl right next to her stand mixer, you are seeing the story of dramatic changes in food technology, and the place for both old and new tools in the modern kitchen. When we decide to display a drive-thru burger menu just a few feet away from Alice Waters' bouillabaisse cauldron, visitors can see how food has been made both fast and slow, for the on-the-go diner and the gourmand. We place wagons that transported the first machine-harvested lettuce adjacent to the first microwave oven for the home, to show how food gets from the field, to the market, and to our dinner tables.
But where, in all of this, is the food itself? By building a demonstration kitchen into the heart of the first floor of the museum's West Wing, we're able to tell a live, sizzling story about food's place in American history, and tell the story of the people and places that have shaped the way we experience food. This kitchen is a new space for special guests from the food world—chefs, home cooks, farmers, authors, and food innovators—to share their knowledge about food history with our visitors, as well as during ticketed after-hours programs.
So what's cooking?
Our focus in July is summertime cooking traditions in America. All around the country, summer's warm weather and sunshine-filled days mean food-centric social gatherings. Backyard parties centered on the family grill, church picnics, state fairs, and seafood cookouts on the beach have become signature culinary events of summertime in America, and different parts of the country have their own signature summer foods.
But each celebration has a story to tell, and prompts lots of questions that we want to answer. For example, how did potato salad, a German dish made with a vegetable that originated in Peru, become an American favorite? How did suburbs created after World War II lead to the backyard barbecue, and why was meat such an important part of that celebration? How does a Maine lobsterman trap the crustaceans that become the key ingredient in the state's signature roll? The questions that we're exploring here aren't just about what to eat—they're about where the food comes from, how it becomes part of the American diet, and why what we eat today reflects shared memory and history.
How do Food Fridays connect with the museum's collections and exhibitions?
As we discover and share these stories, we'll bring out objects from the museum's collections that reflect our changing experiences of food. Each demonstration will inspire visitors to explore all the museum's exhibitions in search of food knowledge—to seek out the model of the Dauntless fishing schooners in On the Water, to learn the story of home canning in Within These Walls, and of course to see the major changes over the last half century in FOOD: Transforming the American Table, 1950-2000. One bite of the Food Fridays program, and soon you'll be devouring many other parts of American history at the Smithsonian.
Each week we'll be cooking up something different, and we'll later share online the recipes for the dishes we prepared on stage, so you too can cook your way through American history. We hope you'll join us to discover the new demonstration kitchen, and to visit during one of our upcoming Food Fridays programs!
For more information on the series, including individual program descriptions, check out our schedule. Jessica Carbone is the host of the upcoming Food Fridays program, and a project associate in the Division of Work and Industry, Food History Project. She's particularly looking forward to learning about Jamaican jerk chicken on July 24 and cold summertime dishes on July 31.
On a small island north of Concord, New Hampshire, stands a 25-foot-tall granite statue of Hannah Duston, an English colonist taken captive by Native Americans in 1697, during King William’s War. Erected in 1874, the statue bears close resemblance to contemporary depictions of Columbia, the popular “goddess of liberty” and female allegorical symbol of the nation, except for what she holds in her hands: in one, a tomahawk; in the other, a fistful of human scalps.
Though she’s all but forgotten today, Hannah Duston was probably the first American woman to be memorialized in a public monument, and this statue is one of three built in her honor between 1861 and 1879. The mystery of why Americans came to see patriotic “heroism” in Duston’s extreme—even gruesome—violence, and why she became popular more than 100 years after her death, helps explain how the United States sees itself in world conflicts today.
Born in 1657, Hannah Emerson Duston lived in Haverhill, Massachusetts, at a time when disputes among English colonists, the French in Canada, and various Native American nations resulted in a series of wars in the region. King Philip’s War (1675-1676), for example, decimated southern New England Indian nations, which lost between 60 and 80 percent of their population as well as their political independence. Many were sold into slavery. By the late 1680s and the start of King William’s War, fragments of those southern tribes had joined the Abenaki and other northern New England Indian nations allied with the French to fight the continuing expansion of the English colonists to the north and west. Native men conducted raids on frontier English settlements, burning property, killing or injuring some colonists, and taking others captive, either to ransom them back to their families, or to adopt them as replacements for their own lost family members.
Such was the context in which one group, most of whom were likely Abenaki, attacked the town of Haverhill on March 15, 1697—and encountered 40-year-old Hannah Duston at home with her neighbor Mary Neff. The Indians captured the women, along with some of their neighbors, and started on foot toward Canada. Duston had given birth about a week before. The captors are said to have killed her child early in the journey.
The group traveled for about two weeks, and then left Duston and Neff with a Native American family—two men, three women, and seven children—and another English captive, a boy who had been abducted a year and a half earlier from Worcester, Massachusetts. Fourteen-year-old Samuel Leonardson may have been adopted by the family; he certainly had their trust. At Duston’s request, he asked one of the men the proper way to kill someone with a tomahawk, and was promptly shown how.
One night when the Indian family was sleeping, Duston, Neff, and Leonardson—who were not guarded or locked up—armed themselves with tomahawks and killed and scalped 10 of the Indians, including six children. They wounded an older woman, who escaped. A small boy managed to run away. Duston and her fellow captives then left in a canoe, taking themselves and the scalps down the Merrimack River to Massachusetts, where they presented them to the General Assembly of Massachusetts and received a reward of 50 pounds.

This statue of Hannah Duston was the second one erected in Haverhill, Massachusetts. In other statues, she holds scalps, but here she points her finger accusingly. (Gregory Rodriguez)
Hannah Duston never wrote down her story. Most of what we know about her comes from the influential Puritan minister Cotton Mather, who published three versions of her tale between 1697 and 1702, embedded in his larger works on New England history. Mather frequently portrayed Indian people as instruments used by the devil to thwart the Puritan mission. He described Duston as a righteous ringleader who had every reason to convince the other captives to act. He stressed the “savagery” of her Indian captors, providing a horrific description of the murder of her child (“they dash’d out the Brains of the Infant, against a Tree.”). We will never know the full truth of Duston’s ordeal—was her baby murdered or did it die?—but Mather’s version of the death highlighted Indian violence to justify Duston’s gruesome vengeance.
Mather asserted that Duston and Neff never meant to kill the small boy who escaped; he was “designedly spared” so they could bring him home with them, if he hadn’t run away. At the same time, Mather was apparently unconcerned that six of the “wretches” the captives scalped were children. He compared Duston to the biblical heroine Jael, who saved her people by driving a spike through Sisera’s head while he slept. Cotton Mather understood the wars between New England Puritans and Indians as battles between good and evil, and this clearly shaped the way he told Duston’s story. She was a heroine saving her people from “savage” outsiders, fighting a justified war.
After 1702, Americans forgot about Hannah Duston until the 1820s, when there was a half-century-long revival of interest in her story, stoked by the nation’s expansion westward into Indian lands. The nation’s foremost literary figures, including Nathaniel Hawthorne, Henry David Thoreau, and John Greenleaf Whittier, all wrote about her. Virtually all histories of the United States from that time contained a version of the story, as did numerous magazines, children’s books, biographies of famous Americans, and guidebooks. A mountain in northern New Hampshire was named “Mt. Dustan” in her honor—and of course, communities erected the three monuments.
It is no coincidence that Americans renewed their interest in the Duston story during this time. From the 1820s, when Georgia began pressing for the forced removal of native people, through the Battle of Wounded Knee in 1890, the so-called “Indian problem” was almost always in the news. 19th-century white Americans were well aware of the moral issues that Indian removal raised, and engaged in heated national debates. As an 1829 “Circular: Addressed to Benevolent Ladies of the United States” put it, “The present crisis in the affairs of Indian Nations in the United States, demands the immediate and interested attention of all who make any claims to benevolence or humanity.” The circular described Indians as “free and noble” yet “helpless,” and “prey of the avaricious and the unprincipled” who wanted to steal their land, not caring that Indians would “perish” if removed.
Women, excluded from formal politics at this time, were active in the anti-removal campaign. They justified their involvement in a political issue by framing Indian removal as a moral question. In the 1820s, virtue was central to American national identity, and embodied in women. This is why Columbia became such a popular symbol of the nation—and why some turned to the story of Hannah Duston as ammunition in the debate over Indian removal.
How could a virtuous democratic nation evict Native Americans from their homelands, and wage war against them when they refused to give up those lands? It was possible only if those Indians were “bloodthirsty savages” who attacked innocent white Americans. Because female virtue was linked to the nation’s virtue, what violent act could be more innocent than that of a grief-stricken mother who had just witnessed the murder of her newborn child?
Accordingly, like Cotton Mather’s accounts, 19th-century versions of the Duston story depicted Native Americans as excessively violent. In a popular 1823 history textbook by Charles Goodrich, the Indians who took Duston captive burned “with savage animosity” and “delighted” “in the infliction of torment.” Goodrich claimed that “[w]omen, soon expecting to become mothers, were generally ripped up” by Indian captors and that some captives were even “roasted alive.”
But one problem remained: How could an “innocent” wronged mother murder someone else’s children herself? Tellingly, the fact that the “innocent” Duston killed six children was increasingly erased from accounts of her actions from the 1830s on. She thus became an American heroine.
Efforts to commemorate Duston began in earnest with the acceleration of western expansion in the 1850s. The first monument, built in Haverhill in 1861, was a marble column. On its base was a shield, surrounded by a musket, bow, arrows, tomahawk, and scalping knife. Engravings on its sides told the story of the “barbarous” murder of Duston’s baby and her “remarkable exploit;” the column was topped by an eagle, symbol of the American nation. The monument’s builders, however, never fully paid for it, and in August 1865 it was stripped and resold to another town as a Civil War memorial.
The second monument was the 1874 New Hampshire scalp-wielding statue. Located on the island where it was thought Duston had killed the Native American family, it was unveiled on June 17th, the anniversary of the Battle of Bunker Hill, making the link between Duston, her violent acts, and American patriotism explicit. Haverhill built the last monument in 1879, as a replacement for the repossessed column. This time around, Duston, in long flowing hair and a gown, held a tomahawk in one hand and pointed the other outward in accusation, both highlighting her violence and suggesting that responsibility for it lay elsewhere. The scalps were gone. At its installation, the philanthropist who donated money for the statue emphasized its patriotism, stating that the purpose of the monument was to remember Duston’s “valor” and to “animate our hearts with noble ideas and patriotic feelings.”
As long as the so-called “Indian problem” continued, Duston remained an important historical figure, her story presented as moral justification for American expansionism onto Indian lands and into Mexico. But by 1890 officials had pronounced the “frontier” closed. The Indian population had reached a historic low, and the U.S. government confined virtually all Natives who remained in the West to reservations; the “Indian problem” was over. The nation reassessed its attitudes toward Native Americans, and public interest in Duston’s story plummeted correspondingly. The tale disappeared from textbooks and popular culture.
Still, the powerful dynamic the story helped to establish remains with us today. The idea of a feminized, always-innocent America has become the principle by which the United States has structured many interactions with enemy others. In international wars as on frontiers past, it has portrayed itself as the righteous, innocent, mother-goddess-of-liberty patriotically defending herself against its “savage” enemies.
Working with museum collections, I am often reminded that the names we attach to objects can reflect powerful social and political forces. A case in point is the small instrument of ancient Asian tradition that, in popular parlance, became known as the "opium scale" in the late 19th century, as the use of opium for recreational purposes increased, and relations between white Americans and Chinese immigrants worsened. While scales of this sort were used for all sorts of purposes, the new term served to demonize all people of Chinese heritage.
Early European voyagers to the Far East often noted that the dotchin—that was the common Anglicized version of the Chinese name—had long been used in China, and probably elsewhere in Asia as well, to weigh small treasures and substantial commodities. They described it as an unequal arm balance similar to the one known in English as a steelyard or Roman Statera. And they brought some examples home with them. "A China STATERA, in the form of a Steel-Yard" appeared in the 1681 list of objects in the museum of the Royal Society of London, along with the comment that "The Chineses [sic] carry it about them, to weigh their Gems, and the like." Don Saltero's Coffee House and Tavern, a popular London venue, had a Chinese dotchin and its case in its cabinet of curiosities.
Following the discovery of gold in California, Chinese men flocked to the United States, and some brought their dotchins with them. Thus, an American physician who visited a Chinese pharmacy in Sacramento saw scales which showed "the great antiquity of the people," adding that the Chinese "still disdain to use other than those which have been in use for centuries." The physician went on to say that these "have but a single plate and a long beam, the weight sliding on this last, similar to the old-fashioned steelyard. Many, however, are of fine workmanship, and in the hands of a skillful person prove very accurate." Carl Hinrichs, a professor of chemistry at the Medical School of St. Louis University who visited the International Exhibition of 1904, wrote a report on "China, Its Druggists, Medicines and Chemical Manufactures." After noting with approval that the Chinese pharmacists sell drugs and medicines rather than cigars and soda water, Hinrichs mentioned that they used the common equal-arm balance for weighing drugs and coins, and the Chinese steelyard for "less accurate and commercial work."
The scarcity of early connections between dotchins and opium reflects the fact that opium was seldom used or abused in China. That situation changed in the late 18th century when, seeking a commodity to strengthen their balance of payments, the British East India Company began smuggling vast quantities of the narcotic into the Celestial Empire. American merchants soon followed suit. The Chinese emperor objected, but to no avail. Thus, in the 1840s, an Anglican missionary describing an "opium den" in Amoy (the city now known as Xiamen), noted that the proprietor stood in the principal room, with delicate steelyards, weighing out the prepared drug.
European physicians had long recognized the value of opium for pain relief, and their ideas found purchase in the American colonies. Benjamin Franklin, for instance, turned to the opium-alcohol preparation known as laudanum to alleviate the pain of kidney stones. In the 19th century, opium-based medicines were used to calm teething babies, relax women with menstrual pains, and soothe wounded soldiers on Civil War battlefields.
The origin of opium-smoking in the United States, at least on a large scale, probably dates from the 1850s. The first published accounts of the habit didn't appear until the 1870s when, in the face of economic downturns, white Americans became increasingly hostile to their Asian neighbors. In his article on "Opium Smoking Among the Celestials," a Philadelphia pharmacist noted the "immense traffic in an article which is seemingly part of the very life necessities of this curious people," and described shops in San Francisco's Chinatown where merchants weighed carefully "a small portion of the much coveted drug."
Another observer described a San Francisco shop in which the customer "places upon the counter a tiny round box of horn, his opium-box, and lays a dime or nickel beside it; thereupon the shop-keeper gravely places the box in the scale pan of his diminutive steelyard, which, like all we shall see, either small or large, is of wood, and sliding the 'cash,' or Chinese coin, which, hanging by a loop of thread, serves as a weight along the bar determines the 'tare.'" He then reaches into a drawer and dips out a small amount of "black, viscid stuff, which is just sufficiently fluid to drop slowly into the box. This is opium prepared for smoking."
In his Opium-Smoking in America and China (New York, 1882), Harry Hubbell Kane described an "opium den" in which stood "a pleasant-faced Chinaman who, with scales in hand, was weighing out some opium."
In 1875, having found that many white men and women patronized the local "opium dens," the San Francisco Board of Supervisors attempted to check the "growing evil." To that end, they imposed hefty fines on any Chinese proprietor who let a white person smoke in his "den," as well as on any white person found in such an establishment. This was the first American anti-drug law, and information about it appeared across the country. Police raids became common as other towns followed suit, with accounts of "opium scales" being found on the premises. In Hawaii, where it was illegal to sell opium except for medicinal purposes, the attorney general suggested that the law be amended "so that where persons are arrested, charged with selling, and in whose possession opium scales or weights are found, the finding of such opium scales should be taken as prima faciae evidence of their having been guilty as charged."
But still the practice persisted. The Washington Post proclaimed in 1897: "Wherever the Chinese are found there also traffic in opium is carried out in utter defiance of all laws," adding that the "Chinese opium fiends, with their peculiarities and cunning, are far harder to handle than the drunkard." Scientific American mentioned the scores of "opium dens" in the Chinese quarter of every large city, where "the Chinaman can buy his pipe and smoke in peace." It also noted that "white people" had places of their own, which were well known to the police, and "the vice is ever spreading and increasing."
As opium addiction gained widespread attention, Congress finally decided to act, imposing hefty import taxes on opium and morphine in 1890. The Pure Food and Drug Act of 1906 required that drug packages provide accurate information about their contents. And the Opium Exclusion Act of 1909 banned the importation, possession, and smoking of opium altogether.
This brief history shows some of the challenges of naming things. These instruments might be called balances, steelyards, or scales, but the technically correct terms obscure the cultural implications. "Opium scale" is imprecise—since the same instrument might be used for many purposes—and it is culturally insensitive. Dotchin comes from the Chinese who designed these instruments many centuries ago and have used them ever since, but may be unknown to most visitors. Such is the curatorial dilemma.
Deborah Warner is a curator in the Division of Medicine and Science who blogs about connections between science and culture. For this project she worked with Priscilla Wegars, Ph.D., volunteer curator of the Asian American Comparative Collection at the University of Idaho, in Moscow, Idaho.
This St. Patrick’s Day, shamrocks will be everywhere: on clothing, shot glasses, beer mugs, funny hats and other sometimes questionable fashion accessories. It’s easy to think of those three bright green leaves as inviolably Irish, an icon of the Emerald Isle since the beginning of time. According to Irish folklore, the shamrock is so entirely Irish it won’t even grow on foreign soil. And in America, only the three-leaved image of the shamrock persists, having been associated with Irish immigrant communities for more than 100 years—it’s just as important on St. Patrick’s Day as wearing green clothing and drinking emerald-hued libations. The catch, however, is that shamrocks, at least as a term of scientific nomenclature, don’t really exist.
The “shamrock” is a mythical plant, a symbol, something that exists as an idea, shape and color rather than a scientific species. Its relationship to the plant world is a bit like the association between cartoon hearts we draw and the anatomical ones inside our bodies. The word "shamrock" first appears in plays and poetry in the 1500s, but the first person to link it to a recognizable plant was the English herbalist John Gerard, who in 1596 wrote that common meadow trefoil, also known as clover, was "called in Irish Shamrockes." Botanists have been trying to match the idea of the shamrock with a particular species for centuries, so far without unanimous success. Although the plant is assumed to be a type of clover—the term “shamrock” comes from the Gaelic seamrog, or "little clover"—the clover genus (Trifolium) includes hundreds of species. Other herbs, such as wood sorrel, have also been labeled and sold as “shamrock” over the years. The confusion stems in part from the time of year when St. Patrick’s Day approaches on the calendar: In Ireland, the holiday comes along in spring, when plants are at their most nascent stages and many species are just sprouting leaves. When fully grown, white clovers bloom white flowers and red clovers bloom reddish flowers (naturally), but most laypeople won’t be able to tell the difference when pinning just the baby clover leaves on a jacket.
Of course, attempts to pinpoint the shamrock’s species aren’t exactly of earth-shaking significance. No wars have been fought over its true nature, no fortunes ruined, no reputations destroyed. At most, the question has caused 19th-century botanists writing in natural history journals to get a little flushed in the face.
In 1830, James Ebenezer Bicheno, a London botanist and colonial official stationed in Ireland, claimed that the true shamrock was Oxalis acetosella, or wood sorrel. He based his claim in part on selections from Irish literature and traveller reports that described the Irish eating shamrocks in times of war and disaster, arguing the “sharp” taste reported in those descriptions matched wood sorrel better than clover. Bicheno also claimed, falsely, that clover wasn’t native to Ireland, and that it was a relatively recent addition to the countryside, while wood sorrel would have been more plentiful in days of yore. In 1878, English botanists James Britten and Robert Holland addressed the "vexed question" of the true shamrock by saying the Trifolium minus (yellow clover) was the species most often sold as shamrock in Covent Garden on St. Patrick’s Day, although they noted that Medicago lupulina (black medick) occasionally took its place, and was more often sold in Dublin.
About ten years later, Nathaniel Colgan, a young police clerk and amateur botanist in Dublin, decided to make matters more scientific. Writing in an 1892 edition of The Irish Naturalist, Colgan noted “the species of the Shamrock had never been seriously studied by any competent botanist … perhaps because any attempt to go into it exhaustively may have been checked at the outset by the thought that the Irishman was content to wear, as the national badge, any well-marked trifoliate leaf. Such a thought, however, could only have entered the mind of an alien. Every Irishman … well knows that the Irish peasant displays great care in the selection of his Shamrock. There is for him one true Shamrock and only one.”
Seeking to find a scientific answer to the question of the “one true Shamrock,” Colgan asked correspondents in 11 Irish counties to collect, around the time of St. Patrick's Day, samples of shamrocks they considered to be the real deal. After potting them and allowing them to flower, Colgan discovered that eight were Trifolium minus (yellow clover) and five Trifolium repens (white clover). He repeated the study the following year, after contacting clergymen in parishes around the country to send more samples. This time, out of a total of 35 specimens, 19 were white clover, 12 yellow clover, 2 red clover and 2 black medick. The results varied by county, with many parts of Ireland evenly split between yellow and white, while the counties of Cork and Dublin favored the black medick. (Colgan’s initial experiment had avoided Dublin and its environs, where he felt "the corrosive rationalism of cities" would blunt "the fine instinct which guides the Irish Celt in the discrimination of the real Shamrock.")
Almost a century later, in 1988, E. Charles Nelson, then horticultural taxonomist in Ireland's National Botanic Gardens, decided to repeat the study to see if anything had changed. Nelson made an appeal in the national press asking Irish people to send examples of plants they considered the “real shamrock” to the Botanic Gardens. This time, he found that yellow clover accounted for 46 percent of the 243 samples, followed by white clover at 35 percent, black medick at 7 percent, wood sorrel at 5 percent and red clover at 4 percent. The results were very similar to Colgan’s study, showing that Irish ideas of the “real” shamrock had held steady. The experiments “also demonstrated that there is no single, uniquely Irish species that can be equated with shamrock,” as Nelson wrote.
According to Dublin-based writer and tour guide Mary Mulvihill, it was 20th-century international commerce that forced the need to settle on a single species, at least for export. “When the Department of Agriculture had to nominate an ‘official’ one for commercial licenses to companies that export shamrock, it chose the most popular species, yellow clover (T. dubium),” she writes. Today, T. dubium is the species sold most often as shamrock by commercial growers in Ireland, and it’s the most likely seed to be in packets labeled “true” shamrock, which are mostly sold to gullible tourists, according to Nelson.
But what makes the search for the true shamrock so loaded with meaning? It goes back to the day, and the man, most closely related to the symbol. Legend has it that St. Patrick, patron saint of Ireland, used the three-leaf clover to explain the concept of the Holy Trinity (Father, Son and Holy Ghost) in the fifth century A.D. while converting the Irish to Christianity. (St. Patrick, by the way, is the one who was supposed to have driven all the snakes out of Ireland, although scholars today say the serpents were a metaphor for paganism.) But the story of St. Patrick and the shamrock, as we know it, is just that: There’s no mention of the shamrock in the saint’s writings, and the first written reference to the idea of St. Patrick using the plant to explain the Trinity is in the early 18th century, more than a thousand years after his supposed lessons. That reference appears in the first book ever published about Irish plants, written by Caleb Threlkeld, a British minister and doctor. In his Synopsis Stirpium Hibernicarum, Threlkeld writes of white clover:
"This plant is worn by the people in their hats on the 17th day of March yearly, which is called St Patrick’s day. It being the current tradition that by this 3-leafed grass [Patrick] emblematically set forth the mystery to them of the Holy Trinity.”
He added, judgmentally: “However that be, when they wet their Seamar-oge [shamrock], they often commit Excess in Liquor … generally leading to debauchery.”
These days, few believe St. Patrick actually used the shamrock. “If he did use a three-leaved plant to explain the Trinity, he probably wouldn’t have chosen something as tiny as the shamrock,” says Mulvihill. “He probably would have used bog bean or something with bigger leaves—something you could see at the back of the hall.”
But aside from its connection to St. Patrick’s Day, the shamrock is firmly rooted in Irish history. At some point in the Middle Ages, shamrocks started showing up in the floral emblems of Britain and Ireland, appearing alongside English roses, Scottish thistles and Welsh leeks, according to Nelson, who is also author of Shamrock: Botany and History of an Irish Myth. The earliest reference to the wearing of shamrocks is in 1681, and by the 1720s the plants were worn on hats. In the beginning of the 1800s, they started showing up as a popular decorative motif carved into churches, splashed across fashion and jewelry, and festooning books and postcards. By the 1820s almost anything meant to have an Irish connection had a shamrock on it, Nelson says. Over time, wearing the shamrock would alternate between being a charged nationalist symbol and a more innocent display of Irish pride.
In the end, the species of the “true shamrock” may not matter. Attempts to translate the cultural world into the scientific one can be fraught (witness the debate over what to call the symbol of this year’s Chinese New Year). But if the shamrock provides a cultural touchstone, a way to transmit the idea of Irishness all across the world, that’s likely what’s most important. And besides, yellow clover, wood sorrel and black medick all probably taste the same drowned in whiskey.
This article originally referred to Charles Nelson as the onetime director of the Irish Botanical Gardens. He was actually a horticultural taxonomist at the National Botanic Gardens, which the text now indicates.
Americans consume 5,062,500 gallons of jellied cranberry sauce—Ocean Spray’s official name for the traditional Thanksgiving side dish we know and love that holds the shape of the can it comes in—every holiday season. That’s four million pounds of cranberries—200 berries in each can—that reach a gel-like consistency from pectin, a natural setting agent found in the food. If you’re part of the 26 percent of Americans who make homemade sauce during the holidays, consider that only about five percent of America’s total cranberry crop is sold as fresh fruit. Also consider that 100 years ago, cranberries were only available fresh for a mere two months out of the year (they are usually harvested mid-September until around mid-November in North America, making them the perfect Thanksgiving side). In 1912, one savvy businessman devised a way to change the cranberry industry forever.
Marcus L. Urann was a lawyer with big plans. At the turn of the 20th century, he left his legal career to buy a cranberry bog. “I felt I could do something for New England. You know, everything in life is what you do for others,” Urann said in an interview published in the Spokane Daily Chronicle in 1959, decades after his inspired career change. His altruistic motives aside, Urann was a savvy businessman who knew how to work a market. After he set up cooking facilities at a packinghouse in Hanson, Massachusetts, he began to consider ways to extend the short selling season of the berries. Canning them, in particular, he knew would make the berry a year-round product.
“Cranberries are picked during a six-week period,” Robert Cox, coauthor of Massachusetts Cranberry Culture: A History from Bog to Table says. “Before canning technology, the product had to be consumed immediately and the rest of the year there was almost no market. Urann’s canned cranberry sauce and juice are revolutionary innovations because they produced a product with a shelf life of months and months instead of just days.”
Native Americans were the first to cultivate the cranberry in North America, but the berries weren’t marketed and sold commercially until the early 19th century. Revolutionary war veteran Henry Hall is often credited with planting the first-known commercial cranberry bed in Dennis, Massachusetts in 1816, but Cox says Sir Joseph Banks, one of the most important figures of his time in British science, was harvesting cranberries in Britain a decade earlier from seeds that were sent over from the states—Banks just never marketed them. By the mid-19th century, what we know as the modern cranberry industry was in full swing and the competition among bog growers was fierce.
The business model worked on a small scale at first: families and members of the community harvested wild cranberries and then sold them locally or to a middleman before retail. As the market expanded to larger cities like Boston, Providence and New York, growers relied on cheap labor from migrant workers. Farmers competed to unload their surpluses fast—what was once a small, local venture became a boom-or-bust business.
What kept the cranberry market from really exploding was a combination of geography and economics. The berries require a very particular environment for a successful crop, and are localized to areas like Massachusetts and Wisconsin. Last year, I investigated where various items on the Thanksgiving menu were grown: “Cranberries are picky when it comes to growing conditions… Because they are traditionally grown in natural wetlands, they need a lot of water. During the long, cold winter months, they also require a period of dormancy which rules out any southern region of the U.S. as an option for cranberry farming.”
Urann’s idea to can and juice cranberries in 1912 created a market that cranberry growers had never seen before. But his business sense went even further.
“He had the savvy, the finances, the connections and the innovative spirit to make change happen. He wasn’t the only one to cook cranberry sauce, he wasn’t the only one to develop new products, but he was the first to come up with the idea,” says Cox. His innovative ideas were helped by a change in how cranberries were harvested.
In the 1930s, techniques transitioned from “dry” to “wet”—a confusing distinction, says Sharon Newcomb, brand communication specialist with Ocean Spray. Cranberries grow on vines and can be harvested either by picking them individually by hand (dry) or by flooding the bog at the time of harvest (wet), as seen in many Ocean Spray commercials. Today about 90 percent of cranberries are picked using wet harvesting techniques. “Cranberries are a hardy plant; they grow in acidic, sandy soil,” Newcomb says. “A lot of people, when they see our commercials, think cranberries grow in water.”
The water helps to separate the berry from the vine, and small air pockets in the berries allow them to float to the surface. Rather than taking a week, a harvest could be finished in an afternoon; instead of a team of 20 or 30, bogs now need a crew of only four or five. After the wet harvesting option was introduced in the mid-to-late 1900s, growers looked to new methods of using their crop, including canning, freezing, drying and juicing the berries, Cox says.
Urann also helped develop a number of novel cranberry products, like the cranberry juice cocktail in 1933, for example, and six years later, he came up with a syrup for mixed drinks. The famous (or infamous) cranberry sauce “log” we know today became available nationwide in 1941.
Urann had tackled the challenge of harvesting a crop prone to glut and seesawing prices, but federal regulations stood in the way of his cornering the market. He had seen other industries fall under scrutiny for violating antitrust laws; in 1890, Congress passed the Sherman Anti-Trust Act, which was followed by additional legislation, including the Clayton Act of 1914 and the Federal Trade Commission Act of 1914.
In 1930, Urann convinced his competitors John C. Makepeace of the AD Makepeace company—the nation’s largest grower at the time—and Elizabeth F. Lee of the New Jersey-based Cranberry Products Company to join forces under the cooperative Cranberry Canners, Inc. His creation, a cooperative that minimized the risks from the crop’s price and volume instability, would have been illegal had attorney John Quarles not found an exemption for agricultural cooperatives in the Capper-Volstead Act of 1922, which gave “associations” making agricultural products limited exemptions from antitrust laws.
After World War II, in 1946, the cooperative became the National Cranberry Association, and by 1957 it had changed its name to Ocean Spray. (Fun fact: Urann at first “borrowed” the Ocean Spray name, along with the image of the breaking wave and the cranberry vines, from a Washington State fish company from which he later bought the rights.) Later, Urann would tell the Associated Press why he believed the cooperative structure worked: “grower control (which) means ‘self control’ to maintain the lowest possible price to consumers.” In theory, the cooperative would keep the competition among growers at bay. Cox explains:
From the beginning, the relationship between the three was fraught with mistrust, but on the principle that one should keep one’s enemies closer than one’s friends, the cooperative pursued a canned version of the ACE’s [American Cranberry Exchange’s] fresh strategy, rationalizing production, distribution, quality control, marketing and pricing.
Ocean Spray is still a cooperative of 600 independent growers across the United States who work together to set prices and standards.
We can’t thank Urann in person for his contribution to our yearly cranberry intake (he died in 1963), but we can at least visualize this: if you laid all the cans of sauce consumed in a year end to end, they would stretch 3,835 miles—the length of 67,500 football fields. To those of you ready to crack open your can of jellied cranberry sauce this fall, cheers.
The striking, nearly monochromatic works of Romaine Brooks are receiving a fourth major showing at the Smithsonian American Art Museum in Washington, D.C., which owns about half the known output of the American expatriate who lived in Paris.
But the new exhibition, “The Art of Romaine Brooks” on view this summer, speaks most frankly about her sexual identity—her work is almost exclusively about women, and her own self-portraits show her in men’s clothing and a top hat.
The exhibition includes the 18 paintings and 32 drawings in the museum's collections—works we’ve seen before—but Joe Lucchesi, the contributing curator, says “the thing that is profoundly different about this show is the framing around the artist’s life itself and the issues of gender and sexuality that are really at the core of the work.”
The last Smithsonian showing of Brooks, in 1986, came at a time when feminist scholarship was just beginning, says Lucchesi, an associate professor of art history and the Women, Gender and Sexuality Studies program coordinator at St. Mary’s College of Maryland.
“There’s a profound cultural change that’s happened between the 1980s and now,” he says. “It’s actually quite interesting to me to think about that show and the one that’s up now as being on opposite sides of a huge culture shift that’s occurred over the last 30 years.”
The shift results in a higher profile for an artist who should be recognized as a leading cultural figure of the 20th century, according to biographer Cassandra Langer, author of Romaine Brooks, A Life, who recently spoke at a Smithsonian symposium on Brooks. “She stands alongside Virginia Woolf and Gertrude Stein as a major participant in the intellectual and artistic life of her times and beyond,” Langer says.
The American artist was born in Rome in 1874 as Beatrice Romaine Goddard and became heiress to a mining fortune following a troubled childhood in which her father left the family, her mother became emotionally abusive and her brother was mentally ill.
"Brooks had a Gothic childhood replete with a mad cousin in the attic, an abusive and cruel mother, a conservative and cold sister and an insane brother,” Langer says. “As a child she was beaten and humiliated.”
Even living in a mansion, she often had to fend for herself. “It’s a little Tale of Two Cities,” Lucchesi says. “She’s a super rich girl, living like a street urchin. And nobody believes she’s a rich girl.”
She became a poor art student in Italy and France before she inherited the windfall that allowed her independence and a new way of depicting her world.
“She was one of the first modern artists to depict women’s resistance to patriarchal representations of the female in art,” Langer says. “She understood that women in art had been treated as objects rather than subjects. She made it her mission to change all that.”
That put her ahead of her time.
“Sexuality, gender and identity are now at the cutting edges of the current arts scene,” Langer says. Brooks (who got that name from a marriage that lasted less than a year) “started this conversation long before it became fashionable to do so.”
Her early nude, Azalées Blanches from 1910, was an unusual subject for a woman. “I grasped every occasion no matter how small, to assert my independence of views,” Brooks said in her unpublished memoir. Its provocative pose led to comparisons to the figure in Édouard Manet’s Olympia.
Brooks turned to performance artist Ida Rubinstein, whom Langer calls “the Lady Gaga of her day,” as a model for one of her best-known paintings, that of a Red Cross relief worker outside a burning French city in the 1914 La France Croisée.
That Brooks was in love with Rubinstein was not as well known but certainly not hidden.
“Some of the critics at the time danced around some of the sexual identity issues, but they always understood it as a little bit of boundary pushing, and almost always characterized it as something very inventive, very forward thinking,” Lucchesi says.
Reproductions of the image exhibited at the Bernheim Gallery in Paris in 1915 raised money for the Red Cross, and as a result Brooks won a Cross of the Legion of Honor from the French government in 1920.
Brooks was proud enough of the medal to include it, as one of the few spots of color in her celebrated, typically gray 1923 Self Portrait, in which she devised a proudly androgynous mask for herself as carefully as did an artist much later in the century, Langer says. “Like David Bowie, she became very good at projecting her confected self. But this was just a cover for the very vulnerable and needy child she still remained.”
Because of her sexuality, Brooks “has been marginalized,” according to Langer, “most significantly due to the homophobic misunderstandings of her domesticity.”
But her chosen artistic style was also at odds with the increasingly fashionable cubist abstractions of the era. At a time when Stein’s nearby salon was celebrating the work of Picasso, Brooks’ moodier representational works were more comparable to those of Whistler.
Brooks retreated from painting for decades, concentrating on fascinating, psychological drawings that Lucchesi says are of equal interest (and also on display).
She stayed true to her vision throughout, although by the time she died in Paris in 1970 at the age of 96, she had been largely forgotten. (Her own defiant epitaph was: “Here remains Romaine, who Romaine remains.”)
“It’s very difficult for female artists historically to garner a lot of attention, and then you add the sexual identity issues—I think all of those things kept her out of the mainstream,” Lucchesi says.
For her part, Langer says, “I always considered her queerness paradoxically essential and beside the point. The simple truth is she was a great artist whose work has been misinterpreted and overlooked.”
More and more people are aware of Brooks, thanks in part to a 2000 show at the National Museum of Women in the Arts, a few blocks away from the American Art Museum, also curated by Lucchesi.
But in the last big Smithsonian show in 1986, her sexual identity issues were “pretty coded,” he says. The American expatriate writer “Natalie Barney barely shows up in that catalog even though they were basically together for 50 years,” he says.
It wasn’t the Institution that was conservative, “it’s kind of the way the world was.”
But taking in the work now, “what you’re seeing is an LGBT subculture in the active process of trying to define itself,” Lucchesi says. “And that’s really exciting to me.”
In her paintings, he says, “she’s participating in an effort to shape a visible image of what it means to be a lesbian in that era. And I think that’s very significant."
In 2016, “I think there’s a lot of interest in her work because there’s a bit of a recognition with things that are going on now with, for example, trans identities or more gender-fluid identities, and it’s very interesting to look back at someone 100 years ago who was also navigating things that weren’t so clear and developing a language really for the first time.”
That the show of 18 paintings and 32 drawings opened days after an LGBT-targeted massacre in Orlando makes the exhibition bittersweet. And yet its portraits in grays and black reflect the somber mood of the community after that tragedy.
“There’s a kind of quietness about her work, there’s a kind of heaviness to it, a seriousness to it that I think suddenly was very apparent in that moment of mourning,” Lucchesi says. “I hate that it became interesting for that reason. But there is real opportunity to have the show participate in some of the conversations that are happening right now.”
“The Art of Romaine Brooks” continues through October 2, 2016, at the Smithsonian American Art Museum in Washington, D.C.
Colored Pigments and Complex Tools Suggest Humans Were Trading 100,000 Years Earlier Than Previously Believed
What the heck are these? thought Rick Potts. The Smithsonian paleoanthropologist was looking at a small, round, charcoal-colored lump. The stubby rock was accompanied by 85 others, all excavated from the Olorgesailie Basin site in southern Kenya.
Over the past decade, the site had revealed a bevy of finds to Potts and his team of researchers from the Smithsonian and the National Museums of Kenya, including thousands of hominin-made tools, fossilized mammal remains and sediment samples spanning hundreds of thousands of years. But the lumps were a mystery.
Back at the lab, researchers analyzed them to find that they were black pigments: The oldest paleo-crayons ever discovered, dating back to around 300,000 years ago.
That was only the beginning of the intrigue. Having long studied this site and this period in human evolution, Potts knew that early humans generally sourced their food and materials locally. These “crayons,” however, were clearly imported. They’d formed in a briny lake, but the closest body of water that fit that description was some 18 miles away. That was much farther than most inhabitants likely would’ve traveled on a regular basis, given the uneven terrain. So what was going on?
The pigments, Potts and his co-authors now believe, were part of a prehistoric trade network—one that existed 100,000 years earlier than scientists previously thought.
At the Olorgesailie Basin site, Smithsonian researchers found evidence of long-distance trade, the use of color pigments and sophisticated tools, all dating back tens of thousands of years earlier than previously believed. Researchers believe the environment during this crucial time was notably changeable, with high mammal turnover and unreliable resources. (Smithsonian / Human Origins Program)
In addition to the pigment lumps, the researchers point to the transformation in stone tool technology as proof of this claim. At the same site, they found thousands of newer tools made from materials that had been transported over long distances. They report these findings in a series of three related papers out today in Science; in addition to Potts, lead authors include Alan Deino, a geochronologist at the University of California at Berkeley, and paleoanthropologist Alison Brooks of George Washington University.
“The earliest evidence for Homo sapiens in eastern Africa is about 200,000 years ago, so this Middle Stone Age evidence we’re finding is significantly before that,” says Potts, who is the director of the National Museum of Natural History's Human Origins Program and has been leading research in Olorgesailie for more than 30 years. “[Early humans] were rare in their environment based on the fossil record itself, but left these durable calling cards behind, these stone tools. So we know a lot more about the transition in behavior than we do the timing or who actually made these tools.”
These complex behavioral changes signal a major shift in cognition, which may have given modern humans an edge over other hominin lineages out there. The researchers even offer a possible explanation for the shift: Environmental instability. By examining markers of change in the surrounding environment, researchers find that this profound cognitive leap happened at the same time as dramatic transformations in climate and landscape.
The transition in question at Olorgesailie stretches from 500,000 years ago to 300,000 years ago. At the start of that shift, the dominant hominin was Homo erectus, the oldest known early humans who first appeared around 1.8 million years ago and spread across the world. The “upright man” is often accompanied by the handaxe, a stone tool that has been discovered at sites in Africa, Asia and Europe. The pear-shaped stone tools belong to a tool technology tradition known as the Acheulean, which lasted more than a million years.
But sometime around 500,000 years ago, these handaxes started looking a bit more refined, says cognitive archaeologist Derek Hodgson of the University of York, who was not involved in the new research. “You get three-dimensional symmetry in the handaxes, as if the hominins are able to rotate the object in the mind’s eye, which is a very complex skill to achieve,” Hodgson says. “These tools seem too refined, and some are far too large for functional needs.” In other words, these later tools might have been used to indicate social status, or for aesthetic purposes.
Potts and his team at Olorgesailie also observed this evolution in handaxes. What began as strictly functional tools made from local stone was gradually infiltrated by the occasional smaller tool and transported material. By 300,000 years ago, the transition at Olorgesailie was complete. Handaxes had essentially disappeared, giving way to a new technological era called the Middle Stone Age—and a new kind of hominin wielding those smaller tools.
Older handaxes used by early humans in Kenya, prior to 320,000 years ago. (Smithsonian / Human Origins Program)
When and why this change happened, and who was behind it, has been debated for years. The challenge in the past has been the lack of record. “The attempt to pin down the timing and circumstances of this process suffers from a number of conceptual and practical difficulties,” write archaeologists Sally McBrearty and Christian Tryon in a 2006 paper. Namely, archaeologists have never been able to find archaeological sites with continuous sediment layers spanning that transition, likely because the Rift Valley was undergoing such enormous tectonic disruptions.
The Olorgesailie Basin sediments suffer from the same missing gap, which stretches from 499,000 years ago to 320,000 years ago. What happened in those mysterious years is still up for debate. But what emerged on the other side at Olorgesailie is something never before seen at such an early date: Humans who had the social and cognitive skills to create refined tools; long-distance trade networks to obtain optimum tool-making materials, like obsidian; and the adaptability to survive in an environment that included earthquakes, volcanoes and wildly fluctuating wet-dry cycles.
So did the tools themselves spur the neurological change, or did the larger brains of Homo heidelbergensis—who usurped the hominin throne from Homo erectus and is believed to be the shared ancestor of Homo sapiens and Neanderthals—allow for the creation of these new tools? It’s a query the physical remains can’t quite answer. “It’s kind of like, were humans really smart before there were computers?” Potts says. “That’s a major invention, and yet obviously we’re the same people before computers as after."
Hodgson agrees that neural networks probably had to be in place for the creation of new tools, and maybe those neural networks were also related to new social behaviors like trade alliances and pigment use. But understanding the relationship between humans and their unpredictable environment is still a crucial piece of the puzzle.
Potts surveys an assortment of Early Stone Age handaxes in the Olorgesailie Basin. (Smithsonian / Human Origins Program)
The Great Rift Valley is named for its location atop an intra-continental ridge system that has been tectonically active for millions of years. During the transition the team was studying, there was also a move to a drier environment with intervals of humidity. Animals, plants and landscapes were shifting: By examining the faunal fossil record, Potts and his team found that 85 percent of mammal species experienced local extinction during that transition between the Acheulean and the Middle Stone Age.
These environmental challenges may have pushed humans toward greater cooperation and exploration. “If it was every hominin for itself, that would have been a disaster, and that could’ve been one of the reasons why the Acheulean way of life disappeared,” Potts says. Maybe that’s why the Middle Stone Age peoples in Olorgesailie got 50 to 60 percent of their tool-making materials from far away—they used trade as a means of survival.
It’s an intriguing narrative, but researchers still need to fill in the gaps in the geologic record to verify it. Which is exactly what's on the horizon for Potts, and for paleogeologists like Andrew Cohen, a professor of geosciences, ecology and evolutionary biology at the University of Arizona who has worked with Potts in the past. Cohen leads the Hominin Sites and Paleolakes Drilling Project and has submitted work based on core samples from the region, which will further elucidate our knowledge of local climate fluctuations.
“The finding of a fairly continuous record of the late Acheulean into the Middle Stone Age is a spectacular find,” Cohen says of Potts’ work. “Trying to narrow down the timing of the transition is a pretty big step forward.” He hopes to take the next step by adding much more detailed climatic records for the same time and place.
This kind of research does more than help us understand where we came from. Studying these milestones in humanity's past, says Cohen, could help us prepare for a future in which Earth's climate is once again unpredictable. “We’ve got 10 or perhaps more species of hominids out there, and they all went extinct for reasons that we don’t understand,” Cohen says. “I think it’s imperative that we try to understand them. It’s not just an evolutionary event—it’s also extinction events.”
The cover of this year’s Sports Illustrated swimsuit issue, featuring a honey-haired model tugging at the bottom of her snake-print string bikini, generated swift reaction. The steamy glimpse of her pelvis prompted howls of outrage—risque, racy, inappropriate, pornographic, declared the magazine's detractors. “It’s shocking, and it’s meant to be,” wrote novelist Jennifer Weiner in the New York Times.
But when French automobile engineer-cum-swimsuit designer Louis Réard launched the first modern bikini in 1946, that seemingly skimpy suit was equally shocking. The Vatican formally decreed the design sinful, and several U.S. states banned its public use. Réard’s take on the two-piece—European sunbathers had worn more ample versions that covered all but a strip of torso since the 1930s—was so flesh-baring that swimsuit models were unwilling to wear it. Instead, he hired nude dancer Micheline Bernardini to debut his creation at a resort-side beauty pageant on July 5, 1946. There, Réard dubbed the “four triangles of nothing” a “Bikini,” named after the Pacific Island atoll that the United States targeted just four days earlier for the well-publicized “Operation Crossroads,” the nuclear experiments that left several coral islands uninhabitable and produced higher-than-predicted radiation levels.
Réard, who had taken over his mother’s lingerie business in 1940, was competing with fellow French designer Jacques Heim. Three weeks earlier, Heim had named a scaled-down (but still navel-shielding) two-piece ensemble the Atome, and hired a skywriter to declare it “the world’s smallest bathing suit.”
Réard’s innovation was to expose the bellybutton. Réard—who hired his own skywriter to advertise the new bikini as smaller than the world’s smallest bathing suit—purportedly claimed his version was sure to be as explosive as the U.S. military tests. A bathing suit qualified as a bikini, said Réard, only if it could be pulled through a wedding ring. He packaged the mere thirty square inches of fabric inside a matchbox. Though Heim’s high-waisted version was embraced immediately and worn on international beaches, Réard’s bikini would be the one to endure.
A bikini designed by the California swimsuit company Mabs of Hollywood is held in the Smithsonian collections. (National Museum of American History)
Beyond Europe, reception for Réard’s teenie, weenie bikini was as lukewarm as the St. Tropez shores that inspired the all-but-bare-bottomed design. U.S. acceptance of the suit would require bikini-clad appearances on the silver screen not only by Brigitte Bardot, but also by Disney’s wholesome mouseketeer Annette Funicello. A later version of the bellybutton-baring bikini is held in the collections of the Smithsonian’s National Museum of American History in Washington, D.C. Designed by Mabs of Hollywood, it dates to the 1960s and is quite modest compared with Réard’s initial conception.
World War II rationing of fabric set the stage for the bikini’s success. A U.S. federal law enacted in 1943 required that the synthetics used for bathing-suit production be reserved for parachutes and other frontline necessities. So the thriftier two-piece suit was deemed patriotic–but of course, the design modestly hid the bellybutton, not unlike the halter-topped “retro” swimsuits famously favored today by pop superstar Taylor Swift. In the meantime, Mabs of Hollywood, the designer of the shiny black Smithsonian suit, gained its reputation making those modest two-pieces during World War II, when American fashion mavens were limited to stateside designers.
The "Baker" atomic bomb explosion at Bikini Atoll on July 25, 1946—the last of three American tests—blasted a water column 5,000 feet into the air. (Corbis)
The competition between swimsuit designers in 1946, laced with language related to the new weapons of mass destruction, was not just a curious fluke. Historians of the Cold War era, such as the authors of Atomic Culture: How We Learned to Stop Worrying and Love the Bomb, have noted that advertisers capitalized on the public’s lurid fascination with, as well as its fear of, nuclear annihilation.
One of the hot stories of the summer in 1946 was the naming of the first Operation Crossroads bomb after actress Rita Hayworth. All summer, international news reports buzzed with details of the Pacific Island nuclear tests designed to study the effects of atomic weapons on warships, and the homage to the leggy star was no exception.
Actor Orson Welles, who happened to be married to Hayworth at the time, broadcast a radio show on the eve of the first bomb’s release near the Bikini Atoll. He added a “footnote on Bikini. I don’t even know what this means or even if it has meaning, but I can’t resist mention of the fact that this much can be revealed concerning the appearance of tonight’s atom bomb: it will be decorated with a photograph of sizeable likeness of the young lady named Rita Hayworth.” An image of the star was stenciled onto the bomb below Gilda, her character’s name in the current film of the same name, whose trailer used the tagline: “Beautiful, Deadly. . .Using all a woman’s weapons.”
In that same radio show, Welles mentioned a new garishly red “Atom Lipstick” as an example of “the cosmetic being fashioned according to the popular conceptions of the original war-engine.” That very week, Réard would offer the bikini as yet another, more enduring example of the same.
Equating military conquest and romantic pursuits is nothing new—we’ve all heard that “all’s fair in love and war.” But this trope got considerably sexed up during the war between the Axis and the Allies. Pin-up girls pasted on the noses of WWII bombers (“nose art”) kept American soldiers company on long tours, and the sexy songstresses who entertained troops were dubbed “bombshells.” But an even weirder tone crept into the innuendoes once nuclear weaponry appeared. Women’s bodies, more readily on display than ever before, became dangerous and tempting in magazine advertisements, even weaponized in contests like the 1957 Miss Atomic Bomb pageant. The scandalously scant bikini was simply an early example of this postwar phenomenon.
Designer Louis Réard, seen here in 1974, invented the modern bikini in 1946, naming it for the location of the testing site for the atomic bomb. (Bettmann/CORBIS)
Allusions to nuclear destruction multiplied after Russia developed its A-bomb in 1949 and the Cold War escalated. In the battle between capitalism and communism, economic growth took top billing. Tensions between the U.S. and Russia included debates over which system provided the best “stuff” for their citizens—like the famous 1959 “Kitchen Debates” between then vice-president Richard Nixon and Soviet Premier Nikita Khrushchev over which country’s “housewives” had better home conveniences. Technological resources and consumer satisfaction became a popular measure of Cold War American success.
As Cold War anxieties grew, Americans bought more consumer goods, and a greater variety of them, than ever before. Mad Men-style advertisers and product designers eager to capture valuable consumer attention played to the public’s fixation with nuclear disaster—and its growing interest in sex. Hit songs like “Atomic Baby” (1950) and “Radioactive Mama” (1960) paired physical allure and plutonium effects, while Bill Haley and the Comets’ 1954 hit “Thirteen Women” turned the fear of nuclear catastrophe into a fantasy of masculine control and privilege. All in all, a startling number of the songs in Conelrad’s collection of Cold War music link love, sex and atomic disaster.
Brigitte Bardot, playing the role of Javotte Lemoine, waves from the shore in a scene from the 1952 French comedy Le Trou Normand. (Bettmann/CORBIS)
We all know sex sells. In 1953—the same year Senator Joseph McCarthy’s widely publicized communist witch hunt peaked and the Korean War reached its dissatisfying denouement—Hugh Hefner upped the ante with his first, Marilyn Monroe-festooned issue of Playboy. The 1950s Playboy magazines did not just sell male heterosexual fantasies; they also promoted the ideal male consumer, exemplified by the martini-drinking, city-loft-living gentleman rabbit featured on the June 1954 cover. The bikini, like lipstick, girly mags, blockbuster films and pop music, was something to buy, one of many products available in capitalist countries.
Clearly, plenty of American women chose to expose their tummies without feeling like dupes of Cold War politics. Women’s own preferences had a firm hand in shaping most 20th-century fashion trends—female sunbathers at St. Tropez reportedly inspired Réard’s trim two piece because they rolled down their high-waisted suits to tan. But if the 2015 Sports Illustrated swimsuit issue controversy is any indication, the bikini is still all about getting an explosive reaction. The barely-there beachwear’s combative reputation, it seems, has a half-life not unlike plutonium. So maybe, given the bikini’s atomic origins and the continuing shock-waves of its initial detonation, pacifism (along with Brazilian waxes and punishing ab routines) gives women another reason to cover up this summer—a one-piece for peace?
Most people think of synchronized swimming, which gained Olympic status in 1984, as a newcomer sport that dates back only as far as Esther Williams' midcentury movies. But the aquatic precursors of synchronized swimming are nearly as old as the Olympics themselves.
Ancient Rome’s gladiatorial contests are well known for their excessive and gruesome displays, but their aquatic spectacles may have been even more over the top. Rulers as early as Julius Caesar commandeered lakes (or dug them) and flooded amphitheaters to stage reenactments of large naval battles—called naumachiae—in which prisoners were forced to fight one another to the death, or drown trying. The naumachiae were such elaborate productions that they were performed only at the command of the emperor, but there is evidence that other—less macabre—types of aquatic performances took place during the Roman era, including an ancient forerunner to modern synchronized swimming.
The first-century A.D. poet Martial wrote a series of epigrams about the early spectacles in the Colosseum, in which he described a group of women who played the role of Nereids, or water nymphs, during an aquatic performance in the flooded amphitheater. They dove, swam and created elaborate formations and nautical shapes in the water, such as the outline or form of a trident, an anchor and a ship with billowing sails. Since the women were portraying water nymphs, they probably performed nude, says Kathleen Coleman, James Loeb Professor of the Classics at Harvard University, who has translated and written commentaries on Martial’s work. Yet, she says, “There was a stigma attached to displaying one’s body in public, so the women performing in these games were likely to have been of lowly status, probably slaves.”
Regardless of their social rank, Martial was clearly impressed with the performance. “Who designed such amazing tricks in the limpid waves?” he asks near the end of the epigram. He concludes that it must have been Thetis herself—the mythological leader of the nymphs—who taught “these feats” to her fellow-Nereids.
Fast forward to the 19th century and naval battle re-enactments appear again, this time at the Sadler’s Wells Theater in England, which featured a 90-by-45-foot tank of water for staging “aqua dramas.” Productions included a dramatization of the late-18th-century Siege of Gibraltar, complete with gunboats and floating batteries, and a play about the sea-god Neptune, who actually rode his seahorse-drawn chariot through a waterfall cascading over the back of the stage. Over the course of the 1800s, a number of circuses in Europe, such as the Nouveau Cirque in Paris and Blackpool Tower Circus in England, added aquatic acts to their programs. These were not tent shows, but elegant, permanent structures, sometimes called the “people’s palaces,” with sinking stages or center rings that could be lined with rubber and filled with enough water to accommodate small boats or a group of swimmers.
Royal Aquarium, Westminster. Agnes Beckwith, c. 1885 (© The British Library Board)
In England, these Victorian swimmers were often part of a performing circuit of professional "natationists" who demonstrated "ornamental" swimming, which involved displays of aquatic stunts, such as somersaults, sculling, treading water and swimming with arms and legs bound. They waltzed and swam in glass tanks at music halls and aquariums, and often opened their acts with underwater parlor tricks like smoking or eating while submerged. Though these acts were first performed by men, female swimmers soon came to be favored by audiences. Manchester (U.K.) Metropolitan University's sports and leisure historian, Dave Day, who has written extensively on the subject, points out that swimming, "packaged as entertainment," gave a small group of young, working-class women the opportunity to make a living, not only as performers, but also as swimming instructors for other women. But as more women in England learned to swim, the novelty of their acts wore off.
Water circus at Blackpool (image: hippodrome memories)
In the United States, however, the idea of a female aquatic performer still seemed quite avant-garde when Australian champion swimmer Annette Kellerman launched her vaudeville career in New York in 1908. Billed as the "Diving Venus" and often considered the mother of synchronized swimming, Kellerman wove together displays of diving, swimming and dancing, which The New York Times called "art in the making." Kellerman's career—which included starring roles in mermaid and aquatic-themed silent films and lecturing to female audiences about the importance of getting fit and wearing sensible clothing—reached its pinnacle when she, and a supporting cast of 200 mermaids, replaced prima-ballerina Pavlova as the headline act at the New York Hippodrome in 1917.
While Kellerman was promoting swimming as a way to maintain health and beauty, the American Red Cross, which had grown concerned about high drowning rates across the country, turned to water pageants as an innovative way to increase public interest in swimming and water safety. These events, which featured swimming, acting, music, life-saving demonstrations or some combination of these, became increasingly popular during the 1920s. Clubs for water pageantry, water ballet and "rhythmic" swimming—along with clubs for competitive diving and swimming—started popping up in every pocket of America.
Annette Kellerman (1887-1975), Australian professional swimmer, vaudeville and film star in her famous custom swimsuit (Library of Congress via Wikicommons)
One such group, the University of Chicago Tarpon Club, under the direction of Katharine Curtis, had begun experimenting with using music not just as background, but as a way to synchronize swimmers with a beat and with one another. In 1934, the club, under the name Modern Mermaids, performed to the accompaniment of a 12-piece band at the Century of Progress World's Fair in Chicago. It was here that "synchronized swimming" got its name when announcer Norman Ross used the phrase to describe the performance of the 60 swimmers. By the end of the decade, Curtis had overseen the first competition between teams doing this type of swimming and written its first rulebook, effectively turning water ballet into the sport of synchronized swimming.
While Curtis, a physical education instructor, was busy moving aquatic performance in the direction of competitive sport, American impresario Billy Rose saw a golden opportunity to link the already popular Ziegfeld-esque “girl show” with the rising interest in water-based entertainment. In 1937, he produced the Great Lakes Aquacade on the Cleveland waterfront, featuring—according to the souvenir program—"the glamour of diving and swimming mermaids in water ballets of breath-taking beauty and rhythm."
The show was such a success that Rose produced two additional Aquacades in New York and San Francisco, where Esther Williams was his star mermaid. Following the show, Williams became an international swimming sensation through her starring roles in MGM's aquamusicals, featuring water ballets elaborately choreographed by Busby Berkeley.
Though competitive synchronized swimming—which gained momentum during the middle of the century—began to look less and less like Williams' water ballets, her movies did help spread interest in the sport. Since its 1984 Olympic induction, synchronized swimming has moved farther from its entertainment past, becoming ever "faster, higher, and stronger," and has proven itself to be a serious athletic event.
But regardless of its roots, and regardless of how it has evolved, the fact that synchronized swimming remains a spectator favorite—it was one of the first sporting events to sell out in Rio—just goes to show that audiences still haven't lost that ancient appetite for aquatic spectacle.
How to watch synchronized swimming
If synchronized swimming looks easy, the athletes are doing their jobs. Though it is a grueling sport that requires tremendous strength, flexibility, and endurance—all delivered with absolute precision while upside down and in the deep end—synchronized swimmers are expected to maintain "an illusion of ease," according to the rulebook issued by FINA, the governing body of swimming, diving, water polo, synchronized swimming and open water swimming.
Olympic synchronized swimming includes both duet and team events, with scores from technical and free routines combined to calculate a final rank. Routines are scored for execution, difficulty and artistic impression, with judges watching not only for perfect synchronization and execution, both above and below the surface, but also for swimmers' bodies to be high above the water, for constant movement across the pool, for teams to swim in sharp but quickly changing formations, and for the choreography to express the mood of the music.
The United States and Canada were the sport's early leaders, but Russia—with its rich traditions in dance and acrobatics, combined with its stringent athletic discipline—has risen to dominance in recent years, winning every Olympic gold medal of the 21st century and contributing to the ever-changing look of the sport. Russia, followed by China, remains the team to watch in Rio this year, while the U.S. is hoping for a win from American duet pair Mariya Koroleva and Anita Alvarez.
When scientists first suggested in the early 1980s that volcanic activity had wiped out most dinosaurs 66 million years ago, Paul Olsen wasn’t having any of it. He wasn’t even convinced there had been a mass extinction.
Olsen, a paleontologist and geologist at Columbia University, eventually came to accept the idea of mass extinctions. He also acknowledged that volcanoes played a role in certain extinction events. But even then, he wasn’t entirely convinced about the cause of these extinctions.
The leading hypothesis holds that massive eruptions blasted carbon dioxide into Earth's atmosphere, cranking up global temperatures within a relatively short period of time. Such a sudden change, the theory goes, would have killed off terrestrial species like the huge ancestors of crocodiles and large tropical amphibians and opened the door for dinosaurs to evolve.
Olsen, who discovered his first dinosaur footprint in the 1960s as a teenager in New Jersey and still uses the state’s geological formations to inform his work, wondered whether something else may have been at work—such as sudden cooling events after some of these eruptions, rather than warming.
It's an idea that's been around in some form for decades, but the 63-year-old Olsen is the first to strongly argue that sulfate aerosols in the atmosphere could have been responsible for the cooling. A sudden chill would explain the selective nature of the extinctions, which affected some groups strongly and others not at all.
His willingness to revive an old debate and look at it from a fresh angle has earned Olsen a reputation as an important voice in the field of earth sciences.
Olsen thinks that the wavy band of rock near the bottom of this image—composed of tangled, cylindrical strands that could be tree roots or other debris—may be the remains of a sudden mass extinction. It could line up with a well-dated giant meteorite that hit what is now southern Canada 215.5 million years ago. (Columbia University Earth Institute)
From the moment Olsen abandoned dreams of becoming a marine biologist as a scrawny teenager and fell in love with dinosaurs, he courted controversy and earned a reputation for making breathtaking discoveries.
Olsen’s first breakthrough came as a young teen, when he, his friend Tony Lessa and several other dinosaur enthusiasts discovered thousands of fossilized footprints at a quarry near his house in Rosemount, New Jersey. They were the remnants of carnivorous dinosaurs and tiny crocodile relatives that dated back to the Jurassic, 201 million years ago. The teens' efforts to successfully designate the quarry as a dinosaur park inspired a 1970 Life magazine article.
Olsen even sent a letter to President Richard Nixon urging his support for the park, and followed that with a cast of a dinosaur footprint. "It is a miracle that nature has given us this gift, this relic of the ages, so near to our culturally starved metropolitan area," the young Olsen wrote in a later letter to Nixon. "A great find like this cannot go unprotected and it must be preserved for all humanity to see." (Olsen eventually received a response from the deputy director of the Interior Department's Mesozoic Fossil Sites Division.)
Olsen shook things up again as an undergraduate student at Yale. In this case, he and Peter Galton published a 1977 paper in Science that questioned whether the end-Triassic mass extinction had even happened, based on what he called incorrect dating of the fossils. Subsequent fossil discoveries showed that Olsen was wrong, which he readily acknowledged.
In the 1980s, Olsen demonstrated that Earth’s orbital cycles—the orientation of our planet on its axis and the shape of its path around the sun—influenced tropical climates and caused lakes to come and go as far back as 200 million years ago. It was a controversial idea at the time, and even today has its doubters.
More recently, Olsen and colleagues dated the Central Atlantic Magmatic Province—large igneous rock deposits that were the result of massive volcanic eruptions—to 201 million years ago. That meant the eruptions played a role in the end-Triassic mass extinction. They published their results in a 2013 study in the journal Science.
But it is his latest project—reexamining the causes of mass extinctions—that could be his most controversial yet.
Researchers generally recognize five mass extinction events over the past 500 million years, Olsen explains. We may be in the middle of a sixth event right now, which started tens of thousands of years ago with the extinction of animals like the mastodon.
Determining the causes and timing of these extinctions is incredibly difficult. Regardless of cause, however, these events can pave the way for whole new groups of organisms. In fact, the disappearance of nearly all synapsids—a group that includes mammals and their relatives—in the Triassic may have allowed for the evolution of dinosaurs about 230 million years ago.
The accepted theory for the end-Triassic extinction states that gases from enormous volcanic eruptions led to a spike in carbon dioxide levels, which in turn increased global temperatures by as much as 11 degrees F. Terrestrial species, like the huge ancestors of crocodiles and large tropical amphibians, would have perished because they couldn't adapt to the new climate.
The remains of the Triassic are "interesting because [they give] us a different kind of world to look at, to try and understand how earth's systems work," says Olsen. "But it's not so different that it's beyond the boundaries of what we see going on today." (Columbia University Earth Institute)
However, this explanation never sat well with Olsen. “If we are back in the time of the Triassic and the dominant life forms on land are these crocodile relatives, why would a three degree [Celsius] increase in temperature do anything?” asks Olsen, sitting in his office on the campus of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York.
Some inland tropical areas would have become lethally hot, Olsen says, surrounded by fossils, dinosaur memorabilia and a Nixon commendation on the wall. But the mountains and coastlines would still be bearable. "It’s hard to imagine the temperature increase would be a big deal,” he says.
Three years ago, Olsen began looking at the fossil record of species that survived other mass extinctions, like the Cretaceous-Tertiary (K-T) event 66 million years ago and the Permian event roughly 250 million years ago. What he saw suggested a completely different story: Earth's climate during and after these volcanic eruptions or asteroid impacts turned briefly but intensely cold, not hot, as volcanic ash and droplets of sulfate aerosols obscured the sun.
Scientists generally agree that the reduced sunlight would have disrupted photosynthesis, which plants need to survive. During the K-T extinction event, plant losses would have left many herbivorous dinosaurs, and their predators, with little to eat.
In this case, size became the determining factor in whether a species went extinct. Large animals need more food than smaller animals to survive, Olsen explains.
With his fluffy white mustache and hearty laugh, Olsen is hard to miss at paleontology meetings. He's not afraid to insert himself into mass extinction debates, but is quick to point out that he counts even his most ardent critics among his friends.
Supporters praise his creativity, persistence and willingness to consider the big unanswered questions in paleontology that, if solved, would alter our understanding of important events like mass extinctions.
“Among academics, you see two types. You see the parachutists and you see the truffle hunters, and Paul is a parachutist,” says Hans Sues, chairman of the department of paleobiology at the Smithsonian National Museum of Natural History. “The parachutist is the one who helps build the big frame in which other people operate.” Sues and Olsen, who have pieced together fossils in the past, have known each other for 30 years.
Olsen's latest project—the volcanic winter theory—has him looking for ancient ash deposits from the United States to Morocco to the United Kingdom. He hopes to find the fingerprints of certain sulfur isotopes and metals that could indicate that sulfur-rich super-eruptions occurred. They would also pinpoint the timing of the eruptions relative to the extinctions, Olsen explains.
Evidence of ancient ice would also bolster his case. For those clues, Olsen must look to mud flats laid down in what would have been the tropics—some of which are in areas in New Jersey, where he searched for dinosaurs as a teenager. “If you find these little crystals on mud flats, you know it froze in the tropics," Olsen says.
Sues is among those who believe Olsen’s hypothesis has merit, partly because Olsen is focused on the sulfate aerosols from eruptions. In the recent past, massive volcanic eruptions—like Mount Pinatubo in 1991—belched the sulfate aerosols into the atmosphere, which reduced global temperatures. The trick is finding evidence of extreme cold in rocks, Sues says.
But other scientists, like Spencer G. Lucas, curator of paleontology at the New Mexico Museum of Natural History and Science, have their doubts.
As someone who has long sparred with Olsen on mass extinctions, Lucas agrees that volcanism played a role in extinctions and isn’t ruling out cooling as the cause. But chemical evidence of that cooling will be difficult, if not impossible, to find in the rocks or preserved ash, he says.
Searching for those clues isn't a waste of time though, says Lucas. He wants someone who cares about the problem, like Olsen, to collect the evidence and make a convincing case for the Earth either cooling or warming during these extinctions.
“Paul is sort of the Don Quixote of extinctions,” Lucas says. “He is tilting at a windmill in my mind. But I’m glad he’s doing it because he knows he has got the background, the smarts and the opportunity. If anybody can figure this out, he will.”
As the weather heats up, some of the Smithsonian’s exhibits are preparing to cool down. To make way for future shows, a dozen current ones at various museums will close their doors by summer’s end, so don’t miss a chance to see some of these historic, unique, beautiful, innovative and thought-provoking exhibits. Here is a list of all exhibits closing before September 15.
Thomas Day was a black man living in North Carolina before the Civil War. An expert cabinetmaker with his own business and more success than many white plantation owners, he was a freedman whose craftsmanship earned him both respect and brisk sales. His style was classified as “exuberant” and was adapted from the French Antique tradition. Step back in time to the Victorian South and view Day’s ornate cabinetry work on display. Ends July 28. Renwick Gallery.
The Madrid-based artist group DEMOCRACIA created a video featuring the art of movement in a socio-political context. The film features practitioners of “parkour,” a kind of urban street sport with virtually no rules or equipment and where participants move quickly and efficiently through space by running, jumping, swinging, rolling, climbing and flipping. The actors are filmed practicing parkour in a Madrid cemetery, providing a spooky backdrop for their amazing acrobatics and interspersed with symbols of the working class, internationalism, anarchy, secret societies and revolution that pop up throughout the film. Ends August 4. Hirshhorn Museum.
The Edo period (1603-1868) marked a peaceful and stable time in Japan, but in the world of art, culture and literature, it was a prolific era. These companion exhibitions showcase great works of the Edo period that depict natural beauty as well as challenge the old social order. “Edo Aviary” features paintings of birds during that period, which reflected a shift toward natural history and science and away from religious and spiritual influence in art. “Poetic License: Making Old Words New” showcases works demonstrating how the domain of art and literature transitioned from wealthy aristocrats to one more inclusive of artisans and merchants. Ends August 4. Freer Gallery.
This exhibit, held at the American Indian Museum’s Gustav Heye Center in New York City, explores the significant contributions of Native Americans to contemporary music. From Jimi Hendrix (he’s part Cherokee) to Russell “Big Chief” Moore of the Gila River Indian Community to Rita Coolidge, a Cherokee, and Buffy Sainte-Marie, a Cree, Native Americans have had a hand in creating and influencing popular jazz, rock, folk, blues and country music. Don’t miss your chance to see the influence of Native Americans in mainstream music and pop culture. Ends August 11. American Indian Museum in New York.
The exhibition featuring works by the innovative Korean-American artist Nam June Paik, whose bright television screens and various electronic devices helped to bring modern art into the technological age during the 1960s, features 67 pieces of artwork and 140 other items from the artist’s archives. Ends August 11. American Art Museum.
Come to the Sackler Gallery and learn about the Japanese precursor to today’s electronic mass media: the woodblock-printed books of the Edo period. The books brought art and literature to the masses in compact and entertaining volumes that circulated Japan, passed around much like today’s Internet memes. The mixing of art with mass consumption helped to bridge the gap between the upper and lower classes in Japan, a characteristic of the progression during the Edo period. The exhibit features books in a variety of genres, from the action-packed to the tranquil, including sketches from Manga, not related to the Japanese art phenomenon of today, by the famous woodblock printer Hokusai. Ends August 11. Sackler Gallery.
In this seventh installation of the “Portraiture Now” series, view contemporary portraits by artists Mequitta Ahuja, Mary Borgman, Adam Chapman, Ben Durham, Till Freiwald and Rob Matthews, each exploring different ways to create such personal works of art. From charcoal drawings and acrylic paints to video and computer technology, these artists use their own style in preserving a face and bringing it alive for viewers. Ends August 18. National Portrait Gallery.
Celebrate Asian Pacific American history at the American History Museum and view posters depicting Asian American history in the United States ranging from the pre-Columbian years to the present day. The exhibit explores the role of Asian Americans in this country, from Filipino fishing villages in New Orleans in the 1760s to Asian-American involvement in the Civil War and later in the Civil Rights Movement. The name of the exhibit comes from the famed Filipino American poet Carlos Bulosan, who wrote, “Before the brave, before the proud builders and workers, / I say I want the wide American earth / For all the free . . .” Ends August 25. American History Museum.
This exhibit features a collection of eight portraits of influential women in American history, but you may not know all their names. They came long before the Women’s Rights Movement and questioned their status in a newly freed America by fighting for equal rights and career opportunities. Come see the portraits of these forward-thinking pioneers—Judith Sargent Murray, Abigail Smith Adams, Elizabeth Seton and Phillis Wheatley. Ends September 2. National Portrait Gallery.
Take a peek into the creative world of Chinese artist Xu Bing in this exhibition showcasing materials Bing used to create his massive sculpture Phoenix Project, which all came from construction sites in Beijing. The two-part installation, weighing 12 tons and extending nearly 100 feet long, features the traditional Chinese symbol of the phoenix, but the construction materials add a more modern message about Chinese economic development. While Phoenix Project resides at the Massachusetts Museum of Contemporary Art, the Sackler’s companion exhibition displays drawings, scale models and reconfigured construction fragments. Ends September 2. Sackler Gallery.
Stroll through the London of the 1800s in this exhibit featuring works by painter James McNeill Whistler, who lived in and documented the transformation of the Chelsea neighborhood. Whistler witnessed the destruction of historic, decaying buildings that made way for mansions and a new riverbank, followed by a wave of the elite. With artistic domination of the neighborhood throughout the transition, Whistler documented an important part of London’s history. The exhibit features small etchings and watercolor and oil paintings of scenes in Chelsea during the 1880s. Ends September 8. Freer Gallery.
From Picasso to Man Ray to present-day sculptor Doris Salcedo, many of the most innovative and prolific modern artists have set aside paint brush and canvas to embrace mixed media. View works by artists from all over the world during the last century and see the evolution of the collage and assemblage throughout the years. Featured in this exhibit is a tiny Joseph Stella collage made with scraps of paper and Ann Hamilton’s room-sized installation made of newsprint, beeswax tablets and snails, among other things. Ends September 8. Hirshhorn Museum.
When Guy Riefler pursued a bachelor’s degree in environmental engineering at Cornell University in 1991, it was with the intention that he would spend his career cleaning up pollution. So, after earning advanced degrees and completing his post-doctoral work at the University of Connecticut, he landed a position as a professor at Ohio University, and made acid mine drainage (pdf)—the environmental bane of the area in and around Athens, Ohio—a major focus of his research.
In the state of Ohio, Riefler explains, there are hundreds of square miles of underground coal mines, all abandoned sometime before the Surface Mining Control and Reclamation Act of 1977 was passed. Operators of the mines simply picked up and left, since, prior to the act, they had no legal obligation to restore the land to its previous condition. They turned off pumps and, as a result, the water table rose and flooded the underground passageways. The water became acidic, as the oxygen in it reacted with sulfide minerals in the rock, and picked up high concentrations of iron and aluminum.
“When this water hits streams, it lowers the pH and kills fish,” says Riefler. “The iron precipitates form an orange slimy sludge that coats the sediments and destroys habitat.”
To tackle this problem, Riefler, an associate professor of environmental engineering, and his students started to flesh out an idea: they would take this slimy, metal-laden runoff from coal mines and turn it into paint. Beginning in 2007, some undergraduate students explored the possibility. Then, in 2011, Riefler received funding to look into the process in greater detail and devote a group of graduate students to the effort.
Toxic runoff from coal mines and commercial red and yellow paints, you see, have a common ingredient—ferric oxyhydroxides. Once the acidic ground water hits the air, the metals in it oxidize and the once-clear water turns yellow, orange, red or brown. To make paints of these colors, international companies basically mimic this reaction, adding chemicals to water tanks containing scrap metals.
After more than half a decade of dabbling in making pigments, Riefler and his team have a practiced method for producing paints. They start by collecting water directly from the seep in the ground; the water sample is still fairly clear because it just barely has made contact with the air. The scientists then take the sample to their laboratory, where they raise its pH using sodium hydroxide and expose it to oxygen at a certain rate, bubbling air through the water to oxidize the iron. While this is going on, the metal components, invisible up until this point, blossom into rich colors.
The particles within the water settle, and the researchers collect the iron sludge. Riefler dries the sludge and then mills it into a fine powder. The powder can then be added to alkali refined linseed oil, a traditional binder, to create an oil paint.
Riefler acknowledges one rather critical shortfall. ”I understood the chemistry and the process engineering, but didn’t have a clue how to tell a good pigment from a bad pigment,” he says.
Luckily, Riefler didn’t have to look far to find an eager partner in the art world. John Sabraw, an associate professor of art at Ohio University, uses sustainable materials in his own artwork and encourages his students to think about how they too can be sustainable in their practice. In fact, one of his courses, which students have dubbed “The Save the World Class,” brings together undergraduate students from a variety of disciplines—business, political science and art majors, for example—and asks that they collaborate to design and execute a sustainable solution to an environmental issue in their local community.
Sabraw has also studied the history of pigments and taught classes on making paints from scratch. He was already familiar with acid mine drainage when Riefler approached him. On a visit to some affected streams nearby with a group from the university, he had actually been tempted to collect some of the colored sludge.
“They tapped me to see if I could be a tester for the pigments, to test whether they would be a viable paint product,” says Sabraw.
For a little over a year now, Sabraw has been using acrylic and oil paints made from the dried pigments in his paintings. He has been impressed with the range of colors that can be made with the iron oxides. “You can get anything from a mustardy yellow all the way to an incredibly rich, deep, deep almost-black brown out of it,” he says. Like any brand of paint, this one has a consistency and other qualities that any artist has to adjust to, but Sabraw says it’s comparable to other paints on the market, and he enjoys working with it.
Riefler’s plan is to continue tweaking different variables in the process—things like temperature and pH—to perfect his paint product over the next year. In this research and development phase, he is being mindful to create something that is economically viable and that meets industry standards. Sabraw reports that the paints are safe to both produce and use.
Riefler will be sending the product to pigment vendors. Ultimately, the plan is to sell the paint commercially, with the proceeds going to cleaning up polluted streams in Ohio.
“Our latest estimate is that one highly productive AMD seep near us would produce over 1 ton of dry pigment per day that could generate sales of $1,100 per day,” says Riefler. Costs are still being calculated, so it is unclear at this point whether or not the venture will turn a profit. “Even if we just break even, that would be a success, because we would be cleaning up a devastated stream for free and creating a few local jobs,” he adds.
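Riefler's figures above imply a price of roughly $1,100 per ton of dry pigment. As a rough illustration of the break-even question he raises, the arithmetic can be sketched as follows; the production cost per ton is a purely hypothetical placeholder, since the article notes that costs were still being calculated.

```python
# Back-of-the-envelope economics for one acid-mine-drainage (AMD) pigment seep,
# using the article's estimate: ~1 ton of dry pigment per day at ~$1,100/day.
PIGMENT_TONS_PER_DAY = 1.0
PRICE_PER_TON = 1100.0   # dollars; from the article's sales estimate
DAYS_PER_YEAR = 365

annual_revenue = PIGMENT_TONS_PER_DAY * PRICE_PER_TON * DAYS_PER_YEAR

# Hypothetical production cost per ton -- illustrative only, not from the article.
cost_per_ton = 900.0
annual_cost = PIGMENT_TONS_PER_DAY * cost_per_ton * DAYS_PER_YEAR

print(f"annual revenue: ${annual_revenue:,.0f}")                # $401,500
print(f"annual margin:  ${annual_revenue - annual_cost:,.0f}")
```

Even under the break-even scenario Riefler describes, the stream cleanup itself would effectively come free, which is the point of the venture.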
The project is certainly a clever model for stream remediation, and both Riefler and Sabraw are driven to bring their product to the market, so that they can have a positive impact on the environment. Here, something that is nasty—acid mine drainage—is turned into something useful—paint—and beautiful—Sabraw’s paintings, with organic shapes reminiscent of trees, streams and landforms.
“What we are doing is trying to make the streams viable. We want life back in the streams,” says Sabraw. “It is certainly possible, and what we are doing is enabling that to happen.”
John Sabraw’s exhibition “Emanate” is on display at Kathryn Markel Fine Arts in Bridehampton, New York, from July 27 to August 10, 2013. He also has a show, “Luminous,” which opens at the Richard M. Ross Art Museum at Ohio Wesleyan University on August 22 and runs through October 6, 2013. Both exhibitions feature works made with the paints.
What do Supreme Court Justice Sonia Sotomayor, legendary baseball player Roberto Clemente, salsa icon Tito Puente, actor Raúl Juliá, and award-winning performer Rita Moreno have in common? From politics to sports to entertainment, these renowned Puerto Ricans have defined part of American history and culture.
Puerto Rico and the United States have a long history. Puerto Ricans have been U.S. citizens since 1917, when the U.S. Congress passed the Jones-Shafroth Act. Signed by President Woodrow Wilson on March 2, 1917, the law was followed two months later by the Selective Service Act of 1917, which allowed the United States to draft soldiers, including Puerto Ricans. America entered World War I on April 6, 1917.
With the upcoming centennial of the Jones Act, my fellowship project was to identify objects in the National Museum of American History that can represent the legacy of this legal document. However, because Puerto Rican history, politics, society, and culture have been a part of the United States for so long, it is hard to imagine a single object or archival item that can embody the Jones Act as a whole. As a Puerto Rican myself, I took the task very personally and was excited to find some materials in the museum's collections through which I could contextualize the political, cultural, and historical contributions by famous and not-so-famous Puerto Ricans.
The 65th Infantry Regiment, nicknamed the Borinqueneers after Borinquén, the indigenous Taíno name for Puerto Rico, was the first segregated Hispanic group of soldiers in U.S. history. While Puerto Ricans cannot vote for the president of the United States and have representation but no vote in Congress, a sizable share of the island's population has served in the U.S. Armed Forces since 1917.
The Borinqueneers fought during World War I, World War II, and in the Korean War, for which they received the Congressional Gold Medal in 2016. This object means a lot to me because it represents the thousands of Puerto Ricans who have joined, fought, and even died with U.S. troops in the Armed Forces.
The Manuel Quiles films are a series of homemade videos that document the lives of a group of Puerto Ricans living in the Bronx, New York City, during the 1940s. The Puerto Ricans depicted in these films migrated to the U.S. mainland in the 1930s, shortly after the Jones Act was signed.
These videos were donated to the museum's Archives Center by Priscilla Wood (1942–2014), daughter of Manuel Quiles. Wood herself had an interesting career at the National Museum of American History from 1990 to 2005. She worked with and researched 20th-century costumes in the collection, specializing in Latino and high-end fashion designers. Like Wood, many Puerto Ricans work at various institutions and organizations of significance to the United States, such as the Smithsonian Institution.
Semana del Emigrante Poster from DIVEDCO
DIVEDCO or Division of Community Education (División de Educación de la Comunidad in Spanish) was a governmental program implemented by the first democratically elected Puerto Rican governor, Luis Muñoz Marín, who took office in 1949. It was intended to educate the public—particularly rural communities—about issues of education, health, and economy, among others. One of the topics on which Puerto Ricans received information: opportunities in the United States. The Semana del Emigrante or Emigrants' Week during the 1950s was intended to orient Puerto Ricans on the island who were considering migrating to the U.S. mainland. This initiative informed local residents about job opportunities and migration services available to them. (See posters from other DIVEDCO programs.)
After World War II, thousands of Puerto Ricans left the island and settled in the U.S. mainland, mostly in cities like New York City or Chicago. An average of about 47,400 Puerto Ricans migrated to various cities in the U.S. mainland per year throughout the 1950s. This number has only recently been surpassed when 83,000 Puerto Ricans migrated in 2014, in search of better economic and professional opportunities.
Teodoro Vidal Collection of Puerto Rican History
With more than 3,200 objects, the museum's Vidal collection allows for exploration of Puerto Rican history through the lenses of everyday life, religion, carnival, music, economics, and family life. Puerto Rican collector Teodoro Vidal Santoni, an amateur historian and folklorist, amassed the incredible collection, which he donated to the museum in 1997. Browse the collection online and enjoy the colorful carnival costumes and masks, decorative tiles, figures of santos and more.
The Puerto Rico quarter
This quarter was issued in 2009, as part of the District of Columbia and United States Territories Program. The program also included Washington, D.C., Guam, American Samoa, the U.S. Virgin Islands, and the Northern Mariana Islands. Puerto Rico has been a U.S. territory since 1898, when the United States and Spain signed the Treaty of Paris, ending the Spanish-American War. With this treaty, Spain ceded some of its colonies including Puerto Rico, Guam, and the Philippines to the United States. This event changed the course of Puerto Rico's history and tied it to that of the United States. In the decades that followed, Puerto Rico adopted the dollar as its official currency and English became its second language (Spanish being its official language). One of the quarters is in our National Numismatic Collection.
The archival materials and objects found in the museum can spark discussions around the legacy of the Jones Act in both Puerto Rican and U.S. histories. Since 1917, the mobility of Puerto Ricans between the U.S. mainland and the island has made possible the contributions of people such as Sonia Sotomayor and Tito Puente, who were both born in New York City to Puerto Rican parents. Others, such as Roberto Clemente, Raúl Juliá, and Rita Moreno, were born in Puerto Rico and moved to the U.S. mainland later in their lives. From these famous figures to the not-so-famous soldiers, staff members of the Smithsonian Institution, and countless other community members, Puerto Ricans have made and continue to make historical, political, and cultural contributions to the American fabric. On the centennial of the Jones Act, we remember and celebrate them.
Verónica Rivera-Negrón was a Latino Studies Fellow in residence at the National Museum of American History in summer 2016.
The chirping of cicadas is deafening, my clothes are sticky and heavy with heat and sweat, my right hand is swollen from ant bites, I am panting, almost passing out from exhaustion – and I have a big grin on my face. At last I’ve reached my goal, Rajah Brooke’s cottage, at the top of Bukit Peninjau, a hill in the middle of Borneo’s jungle.
This is where, in February 1855, naturalist Alfred Russel Wallace wrote his hugely influential “Sarawak Law” paper. It’s as crucial to Wallace’s own thinking in disentangling the mechanisms of evolution as the Galápagos Islands famously were to his contemporary, Charles Darwin.
Three years later, in 1858, two papers that would change our understanding of our place in the natural world were read before the Linnean Society of London. Their authors: Charles Darwin and Alfred Russel Wallace. A year after that, Charles Darwin would publish “On the Origin of Species by Means of Natural Selection,” squarely positioning him as the father of evolution. Whether Darwin or Wallace should justly be credited with the discovery of the mechanisms of evolution has stirred controversy pretty much ever since.
Comparatively little has been written about Wallace’s seminal work, published four years earlier. In what’s commonly known as his “Sarawak Law” paper, Wallace pondered the unique distribution of related species, which he could only explain by means of gradual changes. This insight would ultimately mature into a fully formed theory of evolution by natural selection – the same theory Charles Darwin arrived at independently years before, but had not yet published.
I am an evolutionary biologist who has always been fascinated by the mechanisms of evolution as well as the history of my own field, and it’s like visiting hallowed ground for me to trace Wallace’s footsteps through the jungle where he puzzled through the mechanics of how evolution works.

An 1874 map of the Malay Archipelago, tracing Wallace’s travels. (Trustees of the Natural History Museum, 2018, CC BY-ND)
Forgotten founder of evolutionary theory
Alfred Russel Wallace, originally a land surveyor from a modest background, was a naturalist at heart and an adventurer. He left England to collect biological specimens in South America to finance his quest: to understand the great laws that shape life. But his trip back home was marred by terrible weather that sank his ship; all his specimens were lost, and Wallace himself barely survived.
In order to make back the money he’d lost in the shipwreck, he headed to the Malay Archipelago, a region to which few Europeans had ever ventured. Wallace spent time in Singapore, Indonesia, Borneo and the Moluccas.
There he wrote a succinct, yet brilliant, paper, which he sent to Charles Darwin. In it, he described how organisms produce more offspring than necessary, and natural selection only favors the most fit. The ideas he’d arrived at on his own were revolutionary – and closely mirrored what Darwin had been mulling over himself.
Receiving Wallace’s paper – and realizing that he might be scientifically “scooped” by this unknown naturalist – prompted Darwin to rush his own writings, resulting in the presentation to the Linnean Society in 1858. Wallace’s paper, now known as the “Ternate paper,” was an elaboration of his thinking, based on an earlier, first foray into the realm of evolutionary biology.

Portrait of Alfred Russel Wallace taken in Singapore in 1862. (James Marchant)
A few years earlier, when in Singapore, Wallace had met James Brooke, a British adventurer, who through incredible circumstances became the rajah of Sarawak, a large state on the island of Borneo. James Brooke would create a dynasty of Sarawak rulers, known as the white rajahs.
Upon their encounter, Brooke and Wallace became friends. Wallace fell in love with Sarawak and realized that it was a perfect collecting ground, mostly for insects, but also for the much-sought-after orangutans. He stayed in the area a total of 14 months, his longest stay anywhere in the archipelago. Toward the end of his sojourn, Wallace was invited by Brooke to visit his cottage, a place up on the Bukit Peninjau that was pleasantly cool, surrounded by a lush and promising forest.
A waterfall in Sarawak. Hugh Low, 'Sarawak; its inhabitants and productions; being notes during a residence in that country with the Rajah Brooke.' (Public Domain)
“This is a very steep pyramidal mountain of crystalline basaltic rock, about a thousand feet high, and covered with luxuriant forest. There are three Dyak villages upon it, and on a little platform near the summit is the rude wooden lodge where the English Rajah was accustomed to go for relaxation and cool fresh air…. The road up the mountain is a succession of ladders on the face of precipices, bamboo bridges over gullies and chasms, and slippery paths over rocks and tree-trunks and huge boulders as big as houses.”
The jungle surrounding the cottage was full of collecting possibilities – it was particularly good for moths. Wallace would sit in the cottage’s main room with the lights on at night, working, sometimes furiously fast, at pinning hundreds of specimens. In just three evening sessions, Wallace would write his “Sarawak Law” paper in this remote setting.
Whether consciously or not, Wallace was laying the foundation for understanding the processes of evolution. Working things through in this out-of-the-way cottage, he started to synthesize a new evolutionary theory that he’d fully develop in his Ternate paper.

The birdwing butterfly Trogonoptera brookiana was named by Wallace for Sir James Brooke, the rajah of Sarawak. (Lyn, CC BY-ND)
Following in Wallace’s Sarawak footsteps
I’ve been teaching evolution to college students for over two decades and have always been fascinated by the story of the “Sarawak Law” paper. On a recent trip to Borneo, I decided to try to retrace Wallace’s steps up to the cottage to see for myself where this pioneering paper was written.
Tracking down information about the exact location of Bukit Peninjau turned out to be a challenge in itself, but after a few mistakes and contradictory directions obtained from local villagers, my 16-year-old son Alessio and I found the trailhead.
The moment we started, it was obvious we had ventured off the beaten path. The trail is narrow, steep, slippery and at times barely recognizable as a path. The very steep incline, combined with the heat and humidity, makes it difficult to negotiate.

The author with an Amorphophallus flower. (Alessio Bernardi, CC BY-ND)
While much has disappeared since Wallace’s time, a huge diversity of lifeforms is still visible. In the thick of the jungle along the lower part of the trail, we spotted several stands of the tallest flower in the world, the aptly named Amorphophallus. Hundreds of butterflies were everywhere, along with other peculiar arthropods including giant ants and giant pill millipedes.
In some stretches, the trail is so steep that we had to rely on the knotted ropes that have been installed to help with the climb. Apparently red ants love those ropes as well – and our grasping hands just as much.

The author on the former site of the Brooke cottage. Locals sprayed the area with weed-killer to reclaim the clearing from the jungle. (Alessio Bernardi, CC BY-ND)
Eventually, after about an hour and a half of climbing and struggling, we reached a somewhat flat portion of the trail, not more than 30 feet long. On the right, a small path led up to a clearing, the former site of the cottage. It’s hard not to imagine Alfred Russel Wallace, thousands of miles from home, in complete scientific isolation, pondering the meaning of biological diversity. I was at a loss for words, though my teenage son was puzzled by the emotional meaning of the moment for me.
I walked around the cleared space where the cottage used to be, imagining the rooms, the jars, the nets, the moths and the notebooks. It’s an incredible feeling to share that space.
We walked down a slope to the huge overhanging rock where Brooke and Wallace found “refreshing baths and delicious drinking water.” The pools are gone now, filled in with natural debris, but the cave is still a welcome shelter from the sun.

The author in the spot where Wallace described ‘a cool spring under an overhanging rock just below the cottage.’ (Alessio Bernardi, CC BY-ND)
We decided to climb to the top of the hill. Thirty minutes and buckets of sweat later, we arrived at a viewpoint where we could take in a view of the entire valley, unobstructed by the jungle. We saw oil palm farms, houses and roads. But my focus was on the river in the distance, used by Wallace to reach this place. I imagined what the primary forest, full of orangutans, birdwing butterflies and hornbills, must have looked like 160 years ago.
In the midst of this gorgeous but very harsh environment, Wallace was able to keep a clear head, think deeply about what it all meant, put it down on paper and send it to the most prominent biologist of the time, Charles Darwin.
Like many other evolution aficionados, I’ve visited the Galápagos Islands and retraced Darwin’s footsteps. But it’s in this remote jungle, far from anyone and anything – perhaps because of the physical difficulties of reaching Rajah Brooke’s cottage combined with the raw beauty of the surroundings – that I felt a deeper connection with that long-ago time, when evolution was discovered.
Englishman James Smithson is best known for leaving his personal fortune to the United States government for the creation of the Smithsonian Institution. But Smithson, who died in 1829, was more than just a wealthy philanthropist. He was an accomplished scientist who published research papers on many subjects, including how to make the best cup of coffee.
Smithson published his paper on coffee in 1823 in a monthly publication called Thomson's Annals of Philosophy, which was sort of a combination between a scientific journal and a modern popular science magazine. Smithson "enters chemistry when it's just beginning," says Pamela Henson, director of the Institutional History Division of the Smithsonian Institution Archives. "And they have no idea of all the things they are going to be able to do. For example, science is much more generalized back then. You were looking at everything in the world. You don't have the broken down disciplines like you do now."
In an age before automatic drip coffee machines, Smithson was trying to solve several problems at once on his way to the perfect cup of coffee. He wanted the coffee to be properly hot and the grounds economically used; above all, he was striving for “the preservation of the fragrant matter.”
He had probably noticed the same thing that generations of later coffee drinkers would figure out: the better the smell of coffee brewing, the less flavor the coffee will have. When aromatic compounds are driven out of the coffee during brewing, less flavor remains for the coffee drinker. Smithson wanted to find a way of keeping those aromatic compounds in the coffee.
Smithson instructed the reader to put coffee grounds in a glass bottle, pour cold water over the grounds and place a cork loosely in the mouth of the bottle before setting the bottle in a pot of boiling water. When the coffee is done, the bottle is removed from the boiling water and allowed to cool without removing the cork. This gives those aromatic compounds time to condense from their gaseous form and seep back into the liquid of the coffee. Next, Smithson's method called for pouring the coffee grounds and liquid through a filter, then quickly reheating the sieved coffee to drink it.
Would this brewing system work? Was Smithson really keeping any extra flavor in his coffee? And would this same idea make beer any better, as he also suggested? To find out, I recreated and taste-tested Smithson's long-forgotten idea. But first I had to fill in some gaps.

Turns out the founder of the Smithsonian, James Smithson, a scientist by training, figured out how to brew a pretty good cup of coffee. (National Portrait Gallery)
Most recipes written before the 20th century are short on details and exact measurements. Smithson doesn't say what volume of water to use, how much ground coffee to add, or what shape and volume of glass vessel to select. Not very scientific. But people's taste in coffee probably varied as much in 1823 as it does today.
Some prefer a strong brew and others like something weaker. How many cups do you intend to drink at once? There was no point in getting specific about the recipe. Smithson was offering a method that he knew everyone would adapt to their own taste.
I selected a clear wine bottle to brew in because a tinted glass would make it difficult to judge when the coffee was ready. Out of concern that the bottle could explode under pressure, I decided to leave about a third of its volume empty so that a small amount of steam could build up.
While a bottle of Smithson coffee was warming up on one burner, I heated an identical volume of water on another burner in order to prepare my control group. I needed to compare Smithson's system to something, so I chose the popular pour-over method using a Chemex. (The Chemex-style pour-over method was not popular during the early 19th century, but I chose it for the control group because it is the favored method of most modern coffee connoisseurs.)
In Smithson's era, he was comparing his method against two types of coffee preparation that are no longer common in either his native England or the United States.
The most common method was to heat a pot of water over a fire and toss coffee grounds into the pot. When the grounds sank to the bottom, the coffee would be poured into cups and served. Beginning around 1800, there was also a preparation known as percolation, which was not the same thing as the tall, cylindrical percolators that were popular in the U.S. until the late 1970s, when drip coffee makers became state-of-the-art. Percolation of Smithson's era involved pressing coffee grounds into a short, even cylinder and pouring boiling water through a metal filter.
The idea of approaching coffee as a subject of serious scientific inquiry began with the 1813 publication of an essay entitled “Of the Excellent Qualities of Coffee” by Sir Benjamin Thompson, Count Rumford. Thompson also designed Munich's famous English Garden, devised a furnace for producing quicklime and invented thermal underwear.
In his essay, he outlined the problems with making the perfect cup of coffee and offered an early method of percolation to counter them (Thompson is very precise in his recipes, measurements and instructions for making novel coffee roasting and brewing equipment. Any reader interested in diving deeper into the recreation of coffee history should start there).
Thompson identified the most aromatic chemical part of the coffee that he believed was lost through boiling. “. . .This aromatic substance, which is supposed to be an oil, is extremely volatile,” wrote Thompson, “and is so feebly united to the water that it escapes from it into the air with great facility.”
Preventing the loss of this aromatic oil was a focus of both Thompson and Smithson's research into coffee. Smithson's paper was almost certainly intended in part as a belated response to Thompson's essay.
That lack of clear lines between disciplines was why scientists of the early 19th century were able to move between subjects as far-ranging as Smithson's coffee experiments and his better-known work on chemistry and geology.
"There isn't the distinction between academic science and practical science back then," says Henson. "So it's not that unusual for him to be interested in the coffee. At that time coffee is a very precious substance. So you wanted to get the maximum effect from whatever coffee beans you had. By doing it with that closed vessel, you got the maximum effect and it didn't just go up in the air through steam."
Smithson's best-known scientific work was on the subject of a group of minerals called calamines. Calamines contain varying amounts of zinc, a valuable metal. Miners "would go after these veins of calamine not knowing how much zinc they were going to get out of it," Henson says. But often the effort would be wasted when they later found that a particular deposit of calamine was low in zinc. "He came up with this method for finding out how much zinc was in there before they began mining. So you see all of those zinc rooftops in Paris, Smithson really enabled that."
As my bottle sat in the boiling water for eight minutes, I was surprised to find that the water within it never came to a boil, so the cork was never in danger of being blown off. I removed the bottle from the pan of water when the color looked suitably dark.
Four cups of coffee from each method were prepared in identical glasses marked only with a number. Number one was made in the Chemex and number two used Smithson's method. The tasters had no idea which they were about to drink.
“Number one is more robust,” said Dale Cohen, one of my taste-testers. “Number two is smoother, lighter.”
“It's a very stark difference to me,” said Stefan Friedman, another taster. “I want to say there is less bitterness and acidity in number two.”
There was no question that each type of coffee tasted different. But including myself, half of my taste-testing subjects preferred the modern pour-over method and the other half preferred Smithson's coffee.
Sitting among a group of colleagues discussing scientific ideas over coffee, as we did while experimenting with Smithson's method, would have been a very recognizable scenario to James Smithson.
"He is a part of what is called coffee house culture," says Henson. "Very early on, he's at Oxford, he's hanging out with [British scientist] Henry Cavendish and people like that. And he's hanging out in these coffee houses and this is where you discuss your scientific ideas. He's the youngest member of the Royal Society. . . He has this focus on practicalities."
During the following month, I experimented more with Smithson's method. Leaving the bottle in boiling water for 15 minutes instead of eight minutes yielded better results. I noticed more flavor in the coffee. When I was in a hurry, I tended to use the pour-over method. But if I had plenty of time to wait for the coffee to cool before removing the cork, I found myself gravitating towards using Smithson's method.
One more line in Smithson's paper intrigued me as my experiments came to a close.
“Perhaps [this method] may also be employed advantageously in the boiling of hops, during which, I understand, that a material portion of their aroma is dissipated,” Smithson wrote.
As a life-long homebrewer, I decided to apply Smithson's corked bottle method to the brewing of beer. At C'Ville-ian Brewing Company in my home town of Charlottesville, Virginia, I talked the manager into allowing me to appropriate his brewing system in order to make an experimental 30-gallon batch of 1820's styled India pale ale at the brewery.
In a dozen glass bottles, I placed all of the boiling hops that are used to make beer bitter. In place of the plain water used in the coffee experiment, I used a mixture of water and malt in the bottles (some of the desirable chemicals in hops are not fully soluble in water that does not also contain malt). My hope was that the aromatic compounds that are usually driven off during the 90-minute boiling process would be retained in the beer, making it more flavorful. After the dozen bottles had been heated for 90 minutes in their water baths, I decanted them into the fermentation vessel along with the rest of the beer.
The result is an interesting beer that is worth drinking but does not resemble what would have been recognized as an India pale ale either in the 1820s or today. I had hoped that this would produce some sort of super-IPA, but the beer tastes lighter and less bitter than a conventional IPA.
If I were going to try this experiment again, I would use Smithson's method for the finishing hops towards the end of the boil rather than for the boiling hops. But regardless of the outcome, I like to think that James Smithson would appreciate the effort that a reader had made to finally test his ideas, 193 years later.
At Crater of Diamonds State Park in Arkansas, visitors can pay a $7 admission fee, grab a shovel and try their hand at diamond prospecting. The rule is "finders keepers." Over the past three years, annual visitation has tripled to 170,000, and in 2007 tourists pulled more than 1,000 precious stones from the ground. Some visitors use a special screen known as a seruca to wash and separate the heavier diamonds from the lighter debris. Others just get down on their hands and knees, squinting for jewels in the furrows. The 800-acre park holds out the hope, however slim, that just about anyone can strike it rich. Unfortunately, the park may also hold out a temptation for mineralogical mischief.
Eric Blake, a 33-year-old carpenter, has been coming to Crater of Diamonds two or three times a year ever since his grandfather first took him there when he was a teenager. In October 2007, his hard work finally paid off with the discovery of a whopping 3.9-carat stone—nearly the size of the site's Kahn Canary diamond that Hillary Clinton borrowed for her Arkansas-born husband's presidential inaugural galas. It's the kind of rare find that's spectacular enough to attract national attention. Blake reportedly spotted the elongate, white diamond along a trail just as he was plunking down a 70-pound bucket of mud and gravel he planned to sort through.
His lucky stone could be worth as much as $8,000—if he can prove it came from Arkansas soil. In the year since his discovery, fellow collectors, park officials and law enforcement officers have started wondering how Blake and his family uncovered an unprecedented 32 diamonds in less than a week.
"We have a concern of maintaining the integrity of not only the park, but the state of Arkansas," says park superintendent Tom Stolarz, who caught a glimpse of the diamond as Blake was packing to leave the park. Although Stolarz is not a geologist, he has been at the park for 26 years and has handled more than 10,000 diamonds, paying special attention to large stones. Blake's rough-hewn gem was certainly a diamond to Stolarz's eyes, but was it an American diamond?
The answer is more important than one might think. Diamonds are merely crystallized carbon and today they can be created economically in a lab. But the stones fascinate people; the National Museum of Natural History's diamond exhibit, featuring the Hope Diamond, is one of the most popular destinations in the Smithsonian Institution. For many diamond buyers, history buffs and a quirky subculture of dedicated diamond hunters, provenance is everything.
Diamonds were discovered in Arkansas in August 1906, when a farmer named John Wesley Huddleston found a "glittering pebble" on his property. The next year the New York Times described "Diamond John's" treasure in epic terms: "The story of the discovery of diamond fields in one of the poorest counties of the not over-rich State of Arkansas reads like a chapter of Sinbad's adventures."
More than 10,000 dreamers flocked to nearby Murfreesboro, filling up the ramshackle Conway Hotel and striking up a tent city between town and the diamond field. It was not an easy life, says Mike Howard of the Arkansas Geological Survey. "Many people came, few people found," he says. "Most were gone within a couple of years." The majority of Arkansas diamonds, then as now, come in at under ten points, or about 1/10th of a carat. But in 1924, one lucky miner pulled a 40-carat monster out of the ground. Christened Uncle Sam, it remains the largest diamond ever discovered in the United States and a twinkle in every miner's eye.
A lot of funny business has gone on around the diamond field over the past century. After failing to gain full control of the area in 1910, the London-based Diamond Syndicate allegedly set up a sham operation to downplay the mine's potential and sabotage production, according to a Justice Department investigation. In 1919, two rival processing plants burned to the ground on the same January night, fueling rumors that someone was out to destroy the mine's profitability. In the late 1920s, Henry Ford was set to buy Arkansas industrial diamonds for his assembly lines, but the Diamond Syndicate and De Beers bribed the mine's owner to keep it out of commission. Shenanigans continued into the 1950s, when, for instance, an entrepreneur trucked some gravel from the diamond field to his own five acres north of town and plunked down a sign claiming he had a diamond mine. Locals found him beaten up in a ditch the next morning, according to a story one Arkansas geologist has told over the years.
The state of Arkansas purchased Huddleston's former property in 1972 and established Crater of Diamonds State Park, but that wasn't enough to ensure the site's integrity. According to the book Glitter & Greed by Janine Roberts, mining companies tried, and failed, to get legislation passed to open the park up for commercial exploration. By the mid-1980s, several companies were running aerial magnetic surveys to hunt for undiscovered pipes of diamond-rich rock outside the park's boundaries. "It was something else," says Howard, who recalls seeing their helicopters in motel parking lots. They identified one new pipe, but it was far too small to be worth exploiting.
In 1987, then-governor Bill Clinton put together a controversial task force to explore the Crater's commercial mining prospects. One diamond executive estimated it could hold diamonds worth $5 billion. The Sierra Club, the Arkansas Wildlife Federation and Friends of Crater of Diamonds State Park fought unsuccessfully in federal court to halt the plans. By 1992, exploratory drilling was approved—with environmental caveats—and geologist Howard was assigned to keep abreast of the work being conducted by four mining companies. If the drilling had been successful, tourists would have been barred from the main pipe itself, although rock and debris would have been set aside for them to root through, and they could have toured the processing plant. Some locals were miffed; others looked forward to the estimated 800 jobs mining could bring to the economically depressed region.
Image by Arkansas Department of Parks and Tourism. Denis Tyrell holding a 4.42 ct. diamond. It took Tyrell ten days to find his first diamond when he arrived at the park in June 2006. (original image)
Image by Arkansas Department of Parks and Tourism. Diamond demonstration at Crater of Diamonds State Park in Arkansas. Some visitors use a special screen known as a seruca to wash and separate the heavier diamonds from the lighter debris. (original image)
Image by Arkansas Department of Parks and Tourism. Over the past three years, annual visitation has tripled to 170,000 at Crater of Diamonds State Park in Arkansas. (original image)
But after they processed 8,000 tons of rock, diamonds proved too rare to make the scheme profitable. The miners packed up their processing plant and shipped it to Canada. Their drilling cores, however, provided geologists with the first extensive maps of the diamond-bearing cone of lamproite rock. "Being a scientist, I wanted to have that information," says Howard. The surface area of the diamond field is 83 acres, and the cone funnels to a point some 700 feet below, making it the tenth-largest cone known in the world. Howard says it's shaped like a martini glass.
Arkansas diamonds originally formed more than three billion years ago under intense heat and pressure some 60 to 100 miles below the earth's surface. Then, about 100 million years ago, a giant gas bubble formed in the earth's roiling magma and shot up to the surface at 60 to 80 miles per hour, pulling diamonds and other material with it before launching into the air and raining debris back down. Some 60 to 80 percent of the diamonds forced to the surface were probably destroyed during this violent process. The park contains the largest cone, but five others—covering just a few acres each—are also in the area.
Though the diamonds could not support a commercial operation, there is still room for profit. Arkansas diamonds fetch about ten times more per carat than comparable stones, largely because collectors value the diamonds' American provenance and unique character. Many of the stones are smooth and rounded like a drop of glass, and they are among the hardest in the world. They come in three colors: white, yellow and brown. There's practically no other major mine in the world with stones that could pass for Arkansas natives, except maybe the Panna mines in India. (The similarity between the two sites' stones is likely to be skin-deep, says Howard, although no one has documented the trace elements that could be used to fingerprint Arkansas diamonds.) If Blake's 3.9-carat stone were an import, it wouldn't net more than several hundred dollars. The rest of his stones would fetch far less.
When park superintendent Stolarz saw Blake's diamond, he suggested Blake show it to Howard at the Arkansas Geological Survey. Howard was on vacation but made a special trip to his Little Rock office when he got the call about the big diamond. But Blake, who was driving back to Wisconsin with his fiancée and her daughter and sister, never showed up. Howard called Blake's cell phone again and again to no avail. He reached Blake a few days later, and Blake explained that he "had a flat tire and didn't have time to come by," Howard recalls.
A few weeks later, photographs of Blake's stones popped up on eBay and Blake's own Web site, Arkansas Diamond Jewelry.
When word of Blake's finds reached the Murfreesboro Miner's Camp, a trailer park and campground that hosts a population of good-natured diamond hunters, people were a tinge jealous. And suspicious. "I was like 'Jeez!'" says Denis Tyrell, a 49-year-old licensed handyman who says he has made a living digging diamonds for the last 18 months. "You don't just come here, pick a spot, find 40 diamonds, and say 'I'll see you next year!'" It took Tyrell ten days to find his first diamond when he arrived at the park in June 2006. His personal best rate has been 38 diamonds in 31 days, a record he achieved in October 2008.
For all their suspicions, there was no evidence of wrongdoing. Then a fossil and mineral dealer named Yinan Wang noticed something strange. In September 2007, he had purchased one of Blake's smaller diamonds for $200. That December, Wang was interested in doing business with an Indian dealer named Malay Hirani. Wang asked Hirani to share a copy of a recent Kimberley Process Certificate, which would ensure that his rough diamonds were not the so-called blood diamonds traded by warlords in Africa and would verify that Hirani had previously done business in the United States. By chance, the certificate Hirani copied for Wang had come from an order Hirani had sent to Blake. Wang—simply sizing up his potential business partner—decided to ask Blake if Hirani was trustworthy. To his surprise, Blake denied the connection: All our diamonds are from the U.S., he said, according to Wang.
Wang didn't think much about the incident until March 2008. He was chatting with Hirani about sources for rough diamonds, and Wang mentioned Blake's Web site. The dealer looked at it and immediately thought he recognized some of Blake's jewels as his own. "I realized I had stumbled on something relatively big," Wang says. Hirani shared his receipts, shipping confirmation numbers, and photographs with Wang, and the duo later tracked the 3.9-carat diamond to another source, they say: a Belgian dealer named Philippe Klapholz. Tipped off by Wang, who was operating under the alias "Hal Guyot," the Web site Fakeminerals.com spelled out the alleged fraud.
If Blake really did plant foreign diamonds in Arkansas soil, was it a crime? Pike County Sheriff Preston Glenn is investigating Blake and expects to complete his work in early 2009, but says it would be up to the prosecuting attorney to determine what charges, if any, to pursue. In the meantime, officials say that Blake has agreed not to return to Crater of Diamonds State Park.
Blake says he has done nothing wrong and simply posted the wrong photos on his site. "A couple of diamonds were in question, but nobody has proven anything," he says.
One Friday afternoon this past August, the diamond hunter Tyrell finally had his own lucky strike—he pulled a 4.42-carat stone out of the ground. For a while, it seemed, Blake's alleged chicanery was no longer the talk of Murfreesboro. It was Tyrell's big day, and no one around there doubts that Tyrell's stone is legitimate. Stolarz sees him out in the park nearly every day, sorting through pebbles and taking samples home to examine come nightfall.
Author Bio: Brendan Borrell wrote about cassowaries, the world's most dangerous bird, in the October 2008 issue of Smithsonian magazine.
On a sunny day this spring Josh Chase, an archaeologist for the Bureau of Land Management, stood on the bluff above Montana’s Milk River and watched as flames raced through one of the most distinctive archaeological sites on the northern Plains. But instead of worrying about the fate of smoldering teepee rings or stone tools, Chase was excited. He had planned the controlled burn, and even the firefighters on scene could see the fire instantly uncovering a rich record of the bison hunters who lived there 700 to 1,000 years ago.
By burning the 600-acre stretch of grassland in northeastern Montana named after one-time landowner Henry Smith, Chase gained perspective that would have been nearly impossible to achieve with traditional archaeological techniques. A research aircraft later flew over to image the freshly exposed artifacts, including the remains of rock structures used to corral and kill bison, stone vision quest structures where people fasted and prayed and stones arranged in human and animal shapes.
“Before the fire, if we were looking at the site through a door, we were just looking through the peephole,” says Chase. “Now that we’ve burned it and recorded it, we’ve opened the door so we can see everything there.”
As far as Chase knows, it’s the first time an archaeologist has intentionally set a cultural site ablaze. It’s much more common for archaeologists in the Western U.S. to worry about wildfires, or fire-fighting efforts, damaging a site. But since grasslands are adapted to natural fire cycles, Chase had a rare opportunity to use fire as an archaeological tool, and so far the results have been surprisingly successful. Chase is still analyzing the flight data from this year’s 400-acre burn, but an initial burn last spring revealed 2,400 new stone features, about one every three to five feet.
When Chase began working on the Henry Smith site in 2010, he realized it was going to be too large to map by hand. Plus, vegetation obscured much of it. He knew grass fires to be a natural part of the plains ecosystem, and most of the artifacts there are durable quartzite stones. To Chase, a former wildland firefighter, a controlled burn seemed like a sensible way to expose any artifacts on the surface without harming them.
Since much of the data about fire’s impacts on archaeological sites comes from studying high-intensity forest fires, Chase wanted to be sure that a low-intensity grass fire wouldn’t harm the archaeological record, especially fragile animal bones. So for last year's 300-acre burn, Chase selected a location with only stone artifacts. Within that burn, a crew from the U.S. Forest Service's Missoula Fire Science Laboratory fitted mock stone and bone artifacts with heat sensors and burned test plots in different vegetation types. The fire raced over them for only 30 seconds and left the artifacts unscathed. That gave him confidence that this year’s blaze wouldn’t harm the sensitive bison bone fragments in the Henry Smith site.
Archaeologists have known about the existence of a buffalo kill site there since the 1930s. Arrowheads found at Henry Smith identify it as part of the Avonlea Period, when northern Plains bison hunters first started using bows and arrows. But no one studied it systematically until the 1980s, when a researcher identified two spiritually significant stone effigies, and excavated a buffalo jump. To harvest bison, hunting groups built miles-long lines of rock piles, called drivelines. The drivelines helped the hunters herd the running bison towards a rocky bluff where the animals “jumped” into a ravine by tripping and stumbling.
Henry Smith’s overwhelming density of features, including vision quest sites, four more effigies and additional drivelines, didn’t come into focus until last year’s test burn. This year’s burn revealed stone tools and teepee rings indicating the site was used for day-to-day living in addition to spiritual and hunting purposes. Chase says that it’s very unusual to find all of those features in one location.
While the site is within the traditional territories of multiple American Indian tribes, archaeologists and tribal members have not yet linked it to a specific one, and the area is no longer used by native groups. Chase notified 64 tribes throughout the U.S. before the burn and had face-to-face meetings with Montana tribes to gather feedback on the burn technique. No one had a problem with it, according to Chase.
This summer, Chase will have more meetings with the region's tribes to get their perspectives on interpreting the site. He'll also be doing fieldwork to confirm that he is correctly interpreting the aerial images, and he’s now developing hypotheses about the Henry Smith site’s significance.
“I would speculate that it probably started as a very good place to get and process bison, and due to that fact it turned into a spiritual place,” he says. “Now we’re looking at that snapshot in time with all those features from all those years of activity laying on top of one another.”
Image by Great Falls Tribune/Rion Sanders. Stones arranged in a circle form a vision quest site, a place where people fasted and prayed. Until a controlled burn swept the area, this site had been hidden by vegetation for hundreds of years. (original image)
Image by Great Falls Tribune/Rion Sanders. Bison teeth found at the foot of a buffalo jump, a site where Native Americans herded bison into a ravine. (original image)
Image by Great Falls Tribune/Rion Sanders. Stone tools are part of the features at an archeological site near Malta, in northeastern Montana. (original image)
Fire has also influenced how Larry Todd, an emeritus anthropology professor at Colorado State University, interpreted the archaeology of Wyoming’s wilderness. Instead of excavating deep into a small area, he surveys the surface for artifacts that provide a big-picture view while making minimal impact on the land. Todd had spent five years mapping a site in the Absaroka Mountains just southeast of Yellowstone National Park when the Little Venus wildfire burned through in 2006. In the aftermath, he realized that he had been studying a severely watered down version of the archaeological record.
The fire increased the artifacts visible on the surface by 1,600 percent. The vegetation had also hidden high-quality artifacts. There were many more bone fragments, fire pits, trade beads and ceramic figurines – the kinds of objects that contain a lot of information for archaeologists.
That changed Todd’s interpretation of the site. He now thinks that Native Americans used Wyoming’s mountains much more intensively and for more of the year than his earlier work showed. “The most amazing thing that the fire has exposed is our ignorance,” he says.
For Todd though, the increased knowledge comes with a cost. Fires expose artifacts to looting, erosion, weathering, and the hooves of free-ranging cattle that “take that beautiful crisp picture of what life was like in the past and make it look like it went through a Cuisinart.”
It pains Todd that he can’t get to every site in time. “When a fire burns through an area, and they are literally some of the most spectacular archeological sites you have ever seen, it’s a real mix of emotion,” he says. “You’re sort of saying, ‘Oh my God this is going be gone, and I don’t have the time, and I don’t have the people, and I don’t have the funding to record it properly.’ It’s thrilling, but depressing at the same time.”
Chase avoided those tradeoffs at Henry Smith because many of its artifacts aren’t fire-sensitive, the site is protected from looters by the private ranches surrounding it, and he had the luxury of planning for a controlled burn. His work will be important to understanding not only the people who lived and hunted there, but also how to protect and study grassland cultural sites after future wildfires or prescribed burns.
For a test burn in 2015, BLM archaeologists placed temperature sensors within mock cultural sites. (Bureau of Land Management)
Ana Steffen, an archaeologist working at New Mexico’s Valles Caldera National Preserve, has seen some of the worst of what fire can do. In 2011, the Las Conchas fire burned 156,000 acres in the Jemez Mountains and set a new record for the state’s largest fire at the time. The fast-moving conflagration spread at the rate of about two football fields per second, denuding much of the forest.
“What we realized was Las Conchas was the worst-case scenario by every measure for archaeology,” says Steffen. “Not only did it burn a huge area, it burned large areas really, really badly with severe direct effects, and with terrible indirect effects later on.”
In the end, the Las Conchas fire affected more than 2,500 archaeological sites. After withstanding centuries of more moderate fires, Ancestral Puebloan dwellings crumbled, pottery disintegrated, and flint and obsidian artifacts shattered. Then flash floods ripped through the bare soils, carrying away 25-acre obsidian quarries used by Archaic period hunter-gatherers.
Steffen is now part of a team trying to make the most out of the Las Conchas fire. Researchers are doing controlled lab experiments to model how archaeological materials respond to a variety of fire conditions. That will help archaeologists and fire managers figure out when it’s safe to do prescribed burns, and how to protect features from wildfire. It will also help archaeologists understand past fire severity when they are looking at a site.
A history of suppressing low-intensity wildfires helped contribute to the Las Conchas fire’s severity, so Steffen applauds using prescribed fire as an archaeological tool. “Being able to return fire to the landscape is a wonderful way of humans interacting with the environment,” she says. “I find it to be very, very healthy. So mobilizing a case study such as this one where you can get archaeologists out on the landscape, where you can see what’s happening after the fire, that’s just smart science.”
There's still a lot to learn by studying how fires affect cultural sites, and researchers have ample opportunity to do that work. For example, on the Shoshone National Forest where Todd works, fires have been getting larger and more frequent over the last 20 years. During one field season the ashes of an active wildfire fell on him as he examined the aftermath of an old one. “There’s a whole suite of really complex interactions going on that are probably going to make fire archaeology something we’re going to see more of in the future,” he says.
From the Sphinx in Egypt to the Statue of Liberty in the United States, the world’s largest monuments are typically the ones that get the most recognition, filling up people’s Instagram feeds and topping many travelers’ bucket lists. But for every massive monolith that gets its time in the spotlight, there’s a smaller yet equally interesting monument that is harder to spot—but worth hunting for. Here are six of the world’s smallest monuments worthy of a visit.
Chizhik-Pyzhik, Saint Petersburg, Russia
(Dmitry Alexeenko)
Tiny monuments are easy to overlook. Most tourists passing over the First Engineer Bridge where the Fontanka and Moyka rivers meet miss the four-inch statue perched on a small ledge on the stonework below. This statue, called Chizhik-Pyzhik, is a miniature bronze sculpture of a siskin (chizhik in Russian), a bird related to the finch.
Georgian sculptor Rezo Gabriadze created the piece in 1994 as a tribute to the often rowdy students that attended the Imperial Legal Academy that once occupied the same site. The figure is a nod to the students' green and yellow uniforms, which mimicked the color pattern of the bird. The school, founded in 1835 under the approval of Tsar Nicholas I, taught jurisprudence to the children of Russia's nobility for over 80 years. Although alcohol was forbidden at the school, the students' covert social activities were memorialized in a popular folk song known throughout Russia: “Chizhik Pyzhik, where've you been? Drank vodka on the Fontanka. Took a shot, took another, got dizzy.” The school was closed in 1918, following the Bolshevik Revolution.
One of the problems with having a mini monument is that thieves often see it as a free souvenir. Over the years, the sculpture has been the victim of theft on numerous occasions, so in 2002 the staff of the Museum of Urban Sculpture had several copies made, just to be safe.
If you do spot the small sculpture, toss a coin: it’s believed that a coin landing on the ledge brings good luck.
Dwarfs, Wrocław, Poland
Image by Krugli/iStock . Bronze statues at Wroclaw Market square near Old Town hall. (original image)
Image by Photon-Photos/iStock. A dwarf statuette climbs a lamp post on Świdnicka Street. (original image)
Image by Alexabelov/iStock. A dwarf statuette perched on a bridge rail. (original image)
Image by Klearchos Kapoutsis - Flickr/Creative Commons. Statuettes of two dwarfs on Świdnicka Street. (original image)
Since 2001, more than 300 miniature bronze statues of dwarfs have sprouted up throughout the city of Wrocław, lurking in the alleyways or standing in plain sight outside of businesses. But while they may be cute to look at, they have an unusual history tied to resistance to Communism.
The dwarfs are a nod to the Orange Alternative, an underground anti-Communism group that often used graffiti, particularly drawings of dwarfs, to get their message across. The dwarfs originally started popping up in the early 1980s when protest artists started adding arms and legs to the "blobs" that resulted when more overt anti-government slogans were painted over. These dwarf figures caught on, becoming the symbol of the movement. On June 1, 1987, the coalition held a massive rally where thousands of demonstrators donned red hats and marched through the city.
To commemorate the Orange Alternative’s contribution to the fall of Communism in central Europe, the city commissioned local artists to create bronze sculptures of dwarfs. Today, the annual Wrocław Festival of Dwarfs draws crowds every September.
The Two Mice Eating Cheese, London
"The Two Mice Eating Cheese" is considered the smallest statue in London. (Matt Brown - Flickr/Creative Commons)
You have to crane your neck to spot London’s smallest statue, a carving of two mice battling over a hunk of cheese, located on the upper façade of a building at the intersection of Philpot Lane and Eastcheap in London. “The Two Mice Eating Cheese” is in remembrance of two men who died during the construction of the Monument to the Great Fire of London, a stone column built in 1677 in memory of those who perished in a devastating citywide fire that had occurred in 1666. Although details of the incident are murky at best, the legend is that the men fell to their deaths after a fight broke out after one of them accused the other of eating his cheese sandwich. It was later learned that the real culprit was a mouse.
Frog Traveler, Tomsk, Russia
Located in Tomsk, Russia, the "Frog Traveler" is known as the smallest monument in the world, standing 1.7 inches in height. (Tomsk Hotel)
If you blink, you might miss the “Frog Traveler,” considered the smallest public monument in the world. Located outside Hotel Tomsk in Russia, the barely two-inch bronze statue, created in 2013, is the work of sculptor Oleg Kislitsky. In a statement, the artist says that his goal was to create the world’s smallest monument while also giving a nod to the travelers of the world. He based the idea for the piece on a popular Russian children’s book called The Frog Went Travelling, by author Vsevolod Garshin, which tells the tale of a traveling amphibian and the creatures he meets along the way.
Miniature Washington Monument, Washington, D.C.
Hidden under a manhole cover, this 12-foot-tall replica of the Washington Monument is easy to miss. (Dru Smith)
By far, one of the most recognizable structures in Washington, D.C., is the Washington Monument—but it’s what’s underfoot that deserves a second look. Located underneath a manhole cover nearby sits a 12-foot replica of the towering obelisk that commemorates George Washington. Known as Bench Mark A, the replica is actually a geodetic control point used by surveyors when working on government maps. It’s just one of approximately one million such control points spread throughout the country, though most are less interestingly shaped. Although this one technically belongs to the National Park Service, the National Geodetic Survey uses it when it’s surveying the Washington Monument and the National Mall. (For example, the NGS used it in 2011 after an earthquake took place in Virginia.) It dates back to the 1880s, and it's obvious that its creators had a sense of humor. Just make sure to talk to a park ranger before attempting to open the manhole.
Mini-Europe, Brussels, Belgium
Mini-Europe is an amusement park in Brussels, Belgium, dedicated to the continent's many monuments. (Miguel Discart - Flickr/Creative Commons)
From Big Ben in the United Kingdom to the Leaning Tower of Pisa in Italy, Europe is home to some of the world’s most recognizable monuments. The only problem is that it may require multiple trips to see them all. An alternative option would be to spend the day at Mini-Europe, an amusement park in Brussels, Belgium, where you can behold all the great sites before suppertime.
Opened in 1989, Mini-Europe re-creates each structure on a scale of 1 to 25. So expect to see a 43-foot tall Eiffel Tower (the real one is 984 feet in height) and a 13-foot Big Ben (the actual size is 315 feet) all down to the tiniest of details—meaning the Mount Vesuvius here actually erupts. In total, the park encompasses 350 monuments from roughly 80 cities. With Brexit on the horizon, the fate of the park’s UK display remains to be decided.
(Correction: The story previously incorrectly stated that the Monument to the Great Fire of London was constructed in 1841. Construction began in 1671 and was completed in 1677.)
The atomic clock comes in many varieties. Some are chip-sized electronics, developed for the military but available commercially now, while bigger and more accurate atomic clocks keep track of time on GPS satellites. But all atomic clocks work on the same principle. Pure atoms—some clocks use cesium, others use elements like rubidium—have a certain number of valence electrons, or electrons in the outer shell of each atom. When the atoms are hit with a specific frequency of electromagnetic radiation (waves of light or microwaves, for example), the valence electrons transition between two energy states.
In the 1960s, scientists turned away from measuring time based on the orbits and rotations of celestial bodies and began using these clocks based on the principles of quantum mechanics. It may seem like a strange way to measure time, but the duration of a specific number of oscillations, or “ticks,” in a wave of electromagnetic radiation is the official method by which scientists define the second. Specifically, a second is the duration of 9,192,631,770 oscillations of a microwave laser that will cause cesium atoms to transition.
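The arithmetic behind that definition is simple enough to sketch in a few lines of Python (an illustration only, not part of the official standard):

```python
# The SI second: 9,192,631,770 oscillations of the radiation that
# drives the cesium-133 hyperfine transition add up to one second.
CS_FREQ_HZ = 9_192_631_770  # defined exactly by the SI

one_tick = 1 / CS_FREQ_HZ            # duration of a single oscillation, in seconds
one_second = CS_FREQ_HZ * one_tick   # the ticks reassemble exactly one second

print(f"one cesium tick lasts about {one_tick:.4e} s")
```

Each "tick" lasts only about a ten-billionth of a second, which is why counting them gives such a fine-grained ruler for time.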
But we have even better atomic clocks than the ones that measure cesium.
“If our two ytterbium clocks had been started at the beginning of the universe, at this point in time they would disagree with each other by less than one second,” says William McGrew, a physicist at the National Institute of Standards and Technology (NIST), in an email.
NIST's ultra-stable ytterbium lattice atomic clock. Ytterbium atoms are generated in an oven (large metal cylinder on the left) and sent to a vacuum chamber in the center of the photo to be manipulated and probed by lasers. Laser light is transported to the clock by five fibers (such as the yellow fiber in the lower center of the photo). (James Burrus/NIST)
The ytterbium clocks at NIST, Yb-1 and Yb-2, are a unique type of atomic clock known as an optical lattice clock. Essentially, the clocks use electromagnetic radiation in the optical frequency, or lasers, to trap thousands of ytterbium atoms and then cause their outer electrons to transition between a ground energy state and an excited energy state. Compared to cesium, a higher frequency of electromagnetic radiation is required to cause ytterbium to transition.
All electromagnetic waves, from radio waves to gamma rays, and all the visible light in between, are the same type of waves made up of photons—the difference is simply that waves with higher frequencies oscillate more rapidly. Microwaves, which are used to transition cesium, are stretched into longer wavelengths and lower frequencies than visible light. Using atoms that transition at higher frequencies is key to building a better clock. While a second is currently about 9 billion oscillations of a microwave, the same duration of time would be represented by closer to 500 trillion oscillations of a wave of visible light, enhancing scientists’ ability to precisely measure time.
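To put a number on that comparison, here is a back-of-the-envelope calculation in Python (the ytterbium transition frequency used here is the figure quoted later in the article):

```python
# Compare "ticks per second" for microwave vs. optical clocks.
CS_FREQ_HZ = 9_192_631_770           # cesium microwave transition (defines the second)
YB_FREQ_HZ = 518_295_836_590_863.6   # ytterbium optical transition

ratio = YB_FREQ_HZ / CS_FREQ_HZ
print(f"optical ticks per second: {YB_FREQ_HZ:.2e}")  # roughly 500 trillion
print(f"finer-grained than cesium by a factor of about {ratio:,.0f}")
```

The optical clock subdivides each second tens of thousands of times more finely than the microwave clock, which is the basic reason it can be so much more precise.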
If the measurement laser on an ytterbium clock is dialed in to exactly the right frequency, the ytterbium atoms will jump up to the excited energy state. This occurs when the laser is at a frequency of exactly 518,295,836,590,863.6 Hertz—the number of “ticks” in one second.
“This corresponds to a wavelength of 578 nanometers, which appears yellow to the eye,” McGrew says.
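The frequency-to-wavelength conversion McGrew describes is a one-line calculation; this quick check (illustrative only) confirms that the quoted frequency does land near 578 nanometers, in the yellow part of the visible spectrum:

```python
C_M_PER_S = 299_792_458              # speed of light in vacuum, m/s (exact)
YB_FREQ_HZ = 518_295_836_590_863.6   # ytterbium clock transition frequency

# wavelength = c / f, converted from meters to nanometers
wavelength_nm = C_M_PER_S / YB_FREQ_HZ * 1e9
print(f"wavelength: {wavelength_nm:.1f} nm")  # about 578 nm, yellow to the eye
```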
New measurements with Yb-1 and Yb-2, led by McGrew’s team at NIST, have achieved new records in three key areas of measurement precision, producing, in some respects, the best measurements of the second ever achieved. Specifically, the clocks set new records for systematic uncertainty, stability and reproducibility. The new measurements are detailed in a paper published today in Nature.
The ytterbium optical clocks are even more precise in these aspects than the cesium fountain clocks that are used to determine the definition of a second. The ytterbium clocks are technically not more accurate than the cesium clocks, as accuracy is specifically how close a measurement is to the official definition, and nothing can be more accurate than the cesium clocks that the definition is based on. Even so, the key metric here is systematic uncertainty—a measure of how closely the clock realizes the true, unperturbed, natural oscillation of the ytterbium atoms (the exact frequency that causes them to transition).
The new measurements match the natural frequency within an error of 1.4 parts in 10^18, or about one billionth of a billionth. The cesium clocks have only achieved a systematic uncertainty of about one part in 10^16. So compared to the cesium clocks, the new ytterbium measurements “would be 100 times better,” says Andrew Ludlow, a NIST physicist and co-author of the paper.
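Those fractional uncertainties translate directly into McGrew's age-of-the-universe claim. A sketch of the arithmetic (the 13.8-billion-year age is the standard cosmological estimate, not a figure from the paper):

```python
# How far apart would two clocks drift if they had run since the Big Bang?
SECONDS_PER_YEAR = 365.25 * 24 * 3600
AGE_OF_UNIVERSE_S = 13.8e9 * SECONDS_PER_YEAR   # about 4.4e17 seconds

drift_yb = 1.4e-18 * AGE_OF_UNIVERSE_S  # ytterbium systematic uncertainty
drift_cs = 1.0e-16 * AGE_OF_UNIVERSE_S  # cesium fountain, for comparison

print(f"ytterbium disagreement since the Big Bang: {drift_yb:.2f} s")  # under one second
print(f"cesium disagreement since the Big Bang: {drift_cs:.0f} s")
```

The ytterbium figure comes out to roughly six-tenths of a second over the entire history of the universe, consistent with McGrew's "less than one second."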
The challenge with these types of measurements is dealing with external factors that can affect the natural frequency of the ytterbium atoms—and because these are some of the most sensitive measurements ever achieved, every physical effect of the universe is a factor. “Almost anything that we could arbitrarily think of right now eventually has some effect on the oscillation frequency of the atom,” Ludlow says.
The external effects that shift the natural frequency of the clocks include blackbody radiation, gravity, electrical fields, and slight collisions of the atoms. “We spend a lot of our time trying to carefully go through and … understand exactly all of the effects that are relevant for messing up the ticking rate of the clock—that transition frequency—and going in and making measurements of those on the actual atoms to characterize them and help us figure out how well we can really control and measure these effects.”
To reduce the effects of these natural physical factors, the ytterbium atoms, which occur naturally in some minerals, are first heated to a gaseous state. Then laser cooling is used to reduce the temperature of the atoms from hundreds of degrees kelvin to a few thousandths of a degree, and then further cooled to temperatures of about 10 microkelvin, or 10 millionths of a degree above absolute zero. The atoms are then loaded into a vacuum chamber and thermal shielding environment. The measurement laser is beamed through the atoms and reflected back on itself, creating the “lattice” that traps the atoms in high energy parts of a standing wave of light, rather than a running wave, such as a typical laser pointer.
Improving the “stability” and “reproducibility” of the measurements, which the ytterbium clocks also set new records for, helps to further account for any outside forces affecting the clocks. The stability of the clocks is essentially a measure of how much the frequency changes over time, which has been measured for Yb-1 and Yb-2 at 3.2 parts in 10¹⁹ over the course of a day. Reproducibility is a measure of how closely the two clocks match one another, and through 10 comparisons the frequency difference between Yb-1 and Yb-2 has been determined to be less than a billionth of a billionth.
“It is crucial to have two clocks,” McGrew says. “Uncertainty is characterized by examining every shift that could change the transition frequency. However, there is always the possibility of ‘unknown unknowns,’ shifts which are not yet understood. By having two systems, it is possible to check your characterization of uncertainty by seeing if the two independent systems agree with each other.”
Such precision in measuring time is already used by scientists, but the practical applications of improved measurements of the second include advances in navigation and communications. Though no one could have known it at the time, the early work with atomic clocks in the mid-20th century would ultimately enable the Global Positioning System and every industry and technology that relies on it.
“I don’t think I could predict completely what applications in 20 or 50 years will benefit the most from this, but I can say that as I look back in history, some of the most profound impacts of atomic clocks today were not anticipated,” Ludlow says.
The yellow lasers of one of NIST's ytterbium optical lattice clocks. (Nate Phillips/NIST)
The ytterbium clocks could also be used in advanced physics research, such as gravitational field modeling and the possible detection of dark matter or gravitational waves. Essentially, the clocks are so sensitive that any interference due to changing gravity or other physical forces could be detected. If you positioned multiple ytterbium clocks around the world, you could measure the minute changes in gravity (which is stronger closer to sea level as well as closer to the poles), allowing scientists to measure the shape of Earth’s gravitational field with more precision than ever before. Similarly, an interaction with dark matter particles, or even possibly gravitational waves affecting two clocks spread far apart, could be detected.
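The gravity-sensing claim can be checked with the standard weak-field redshift formula, Δf/f = gΔh/c², where Δh is the height difference between two clocks. A back-of-the-envelope sketch (my own illustration, not from the NIST paper):

```python
# Fractional frequency shift between two clocks separated in height by dh,
# from the weak-field gravitational redshift formula: df/f = g * dh / c^2.
G_SURFACE = 9.81     # m/s^2, Earth's surface gravity (approximate)
C = 299_792_458      # m/s, speed of light

def fractional_shift(dh_meters: float) -> float:
    return G_SURFACE * dh_meters / C**2

# Raising one clock by just 1 cm shifts its rate by about 1e-18,
# comparable to the 1.4e-18 uncertainty of the ytterbium clocks,
# which is why they double as gravity sensors.
print(fractional_shift(0.01))
```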
“Scientifically, we use this amazing precision today already for some of these fundamental physics studies—looking for dark matter, looking for variation of the fundamental constants, looking for violations in some of Einstein’s theories and other things. … If we ever do discover any violations [of the laws of physics] by using these incredible measurement tools, that could be a huge game changer in our understanding of the universe, and therefore how science and technology will evolve from there on out.”
In the next 10 years or so, it is possible that the measurement science institutions of the world will decide to redefine the second based on an optical clock rather than a cesium clock. Such a redefinition is likely inevitable, because optical lasers operate at much higher frequencies than microwaves, increasing the number of “ticks” of the clock contained in a second. An ytterbium clock measurement would be a good candidate for a new definition, but optical lattice clocks using mercury and strontium have also produced promising results, and ion optical clocks, which suspend and transition a single atom, present another intriguing possibility for a new definition.
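The "more ticks" argument can be made concrete with rough numbers: the cesium transition that defines the SI second oscillates at exactly 9,192,631,770 Hz, while ytterbium's optical clock transition sits near 518 THz (an approximate figure; the comparison below is mine):

```python
# Oscillation frequencies: microwave (cesium) vs. optical (ytterbium).
CS_HZ = 9_192_631_770  # exact; this frequency defines the SI second
YB_HZ = 518e12         # approximate; ytterbium clock transition ~518 THz

# An optical clock slices the second into tens of thousands of times
# more "ticks," the core argument for redefining the second.
ratio = YB_HZ / CS_HZ
print(f"~{ratio:,.0f}x more oscillations per second")
```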
These measurements of atomic phenomena are getting more and more precise, and where our evolving understanding of time will take us, it is impossible to know.
In a quiet neighborhood on the outskirts of Panama City, David Roubik, one of the world's top bee experts, led me into a cramped workshop at the back of his one-story, red-roofed house, pried open a wooden chest filled with bees, and told me to stick my hand in.
The chest held a hive of Melipona triplaridis, a beefy black- and yellow-striped bee with sleek wings and a tan coat of hairs around its thorax. As Roubik does with many hives, he had brought this one home by sawing its cavernous, amber-hued wax layers out of a tree somewhere in Panama's tropical forests. He had just used a pocketknife to slice open a pea-sized pod on the hive's surface and revealed a tiny pool of gold.
“That's some of the best honey in the world,” he said. “Have a taste.”
With more than 40 years of experience as a staff scientist at the Smithsonian Tropical Research Institute, Roubik is one of the closest things on earth to a walking bee encyclopedia. (Photograph by Paul Bisceglio)
It's easy to trust Roubik. He looks a bit like Santa Claus and always is on the verge of a chuckle, and as a staff scientist at Smithsonian's Tropical Research Institute (STRI) in Panama City for 35 years, he is one of the closest things on earth to a walking bee encyclopedia. During his tenure, he has revolutionized the study of bees in the tropics, and established himself as a renowned authority on bee varieties including the Meliponini tribe, orchid bees and the invasive Africanized honeybee. He has been stung, without exaggeration, thousands of times in his life—his personal record is 50 times in a day—but he assured me as I lowered my hand into the chest of bees that Melipona triplaridis actually can't sting; the species is one of roughly 550 tropical honey-making members of a tribe named Meliponini, commonly referred to as “stingless bees.”
Roubik now uses his expertise to combat the world's general ignorance about bees. Some scientific evidence suggests bees' numbers are dwindling as factors like climate change and deforestation disrupt ecological balances around the globe. Honey-producing bees, especially, have frequented the news in recent years due to worries of colony collapse disorder, the precise causes and actual prevalence of which are hotly debated. Honey bees are the world's primary pollinators, used commercially to grow hundreds of billions of dollars of crops each year, so a major loss would be economically catastrophic. But Roubik says there is much to be understood about bees' lives and our influence on them before we start to panic.
“I'm electrified by bees,” he told me once I had poked my finger through the scampering crowd in front of me and sampled their hard-earned honey. It was tangy, soft and delicious as promised. I followed him to another wooden box, this one home to a hive of metallic green orchid bees named Euglossa imperialis. “Bees go everywhere and do everything," he added. "I love watching them interact with their environments and each other, discovering the amazing things they do by direct observation.”
Euglossa imperialis is a metallic green orchid bee. Red and blue bee species exist, too. (Photograph by Paul Bisceglio)
Roubik's patience and inventiveness as a bee observer, in fact, are largely what distinguish him among experts. Bee research often takes place in apiaries or labs, but Roubik prefers to study bees in the wild, having spent years, if not decades, hiking the forests in Panama, where he can sample and monitor bees in their natural environments, and gather otherwise unobtainable data on details like the flowers they prefer, their foraging habits and how they get along with other species.
“I study nature, where it exists,” he told me. “Bees have basically nothing to do with apiaries or labs. Their artificial congregation there leads to problems and behaviors that do not exist in a normal ecological or evolutionary setting.”
A lauded taxonomist, Roubik collects specimens as he goes about his field studies, often bringing a chainsaw on his drives deep into the forest and hiking around until he finds the trees bees live in. To identify new species—he has discovered more than 30—he spends hours over a microscope examining details as minute as the lengths of bees' hairs and the shapes of small, jagged teeth along their mandibles.
“David is basically a pioneer,” says James Nieh, head of a prominent bee research lab at the University of California-San Diego, who remembers being amazed by the dedication required to gather even the most basic information about tropical bees the first time he collaborated with Roubik at STRI. (Researchers of western honeybees, by contrast, can order their bees by mail, he notes.) “If we think back to people who founded this area [of tropical bee biology], in a modern sense, David is in that group of illustrious people who have posed a lot of very interesting questions: How do these bees live? What is their basic biology? How do they find food? These are all fascinating types of things that he has studied, which other scientists will carry into the future.”
Roubik has no problem allowing the stingless Melipona triplaridis bees to dance around his hand. Just don't crush any, he cautioned; they release chemicals that send their nest mates into a biting frenzy when injured. (Photograph by Paul Bisceglio)
As the future of bees increasingly becomes a concern, however, Roubik has focused his energy more and more on being a public voice of reason. The scientist now jokingly likes to call himself a "consultant," because he spends less time researching and more time sharing his expertise in workshops and planning committees around the world to devise best practices for managing bees. (In our e-mail correspondence following my visit, almost every message he sent arrived from a different country.) His goal is to spread good information about the insects, not to sensationalize; while the possibility of worldwide spontaneous colony failure is worth looking into, he told me, the colony disappearances grabbing headlines are frequently caused by natural fluctuations or human error, not a pandemic.
“One benefit of doing long-term studies is that I see what happens when an El Niño year comes in the tropics, which causes sustained and super-productive flowering and feeds a lot more bees than the normal,” he said. “This makes populations go up and then go down—they're supposed to do that. After a year or two of big decline people will start saying Henny Penny the sky is falling, but you can't predict anything on the basis of one or two years of study. Stability is not the norm, not here or anywhere else.”
In the tropical forest, Roubik saws hives out of trees, then fits them into wooden boxes at home. (Photograph by Paul Bisceglio)
He shared anecdote after anecdote of what he referred to as the "stupidity of people” as he introduced me to a few more hives around the back of his house: things like major beekeepers being mystified by their bees' falling numbers as they fed them nutrient-deficient high-fructose corn syrup, and farmers exclusively planting clones of a self-sterile apple tree then worrying all the bees in their region had died off when the apples weren't pollinated. Recently, he flew down to the Yucatán Peninsula to advise farmers who reported alarming hive losses, only to discover they simply were failing to replace aging colonies.
“Things may be obvious to me, but other people aren't looking at the same things I am. This is totally obscure for most people,” he said, noting that he was one of only two people in the world who had the field data to show individual Yucatán colonies could only last about 20 years. “I've always felt a sense of obligation. I know I can help in certain areas, and I also know I'm often about the only person who can.”
Tetragonisca angustula, a.k.a. "Angel bees," one of the species Roubik has at his home. Some bees are huge, others are almost microscopic. (Photograph by Paul Bisceglio)
After I had met his various bees, Roubik walked me to the front of his house and we settled onto a shaded bench, one of many wooden things around the place that he has handcrafted from fallen trees collected during his forest ramblings. Reflecting on his frustration with how little is known about bees, he admitted that ignorance is also part of the fun; there are around 21,000 known bee species in the world and thousands more to be named, and scientists are “still discovering new things bees do that we didn't have any idea they were doing,” he said. Only recently did scientists realize some bees forage at night, for instance. Some bees use smaller bugs to make honey for them. And there even are a few species that feed on flesh, which Roubik himself discovered in the '80s when he tossed a Thanksgiving turkey carcass into his backyard.
“That's the beauty of the research,” he said. “Because we're still short on info, everything's worth knowing about.”
Advertising photography is more than a thousand words: Al Rendon remembers a photography session with Selena
Advertising agencies have relied on images to engage consumers since the late 19th century. Images convey both information and emotion in a glance; images can tell us how to feel about a product.
Photography has been critical to modern advertising’s success. Ad agencies regularly work with professional, freelance photographers who seamlessly blend art and commerce in order to craft just the right image. This is the story of one such photographer: Al Rendon.
Rendon, a professional photographer in San Antonio, Texas, has been photographing Tejano music and culture in that city since the late 1970s. When we collected materials from the advertising agency of Sosa, Bromley, Aguilar & Associates in 2015, Rendon’s photos of Tejano music star Selena stood out. These photos, taken for a Coca-Cola advertising campaign, showed an energetic, beautiful young woman who embodied the idea of the "all-American" girl, but with a mix of glamour and sex appeal that Selena mastered. The photos let Selena's natural sparkle bubble up and illuminate the product. Who wouldn't want to share a Coke with Selena?
We were so interested in the story behind the photographs that we asked Rendon to tell us about his work and the process of photographing Selena. Amelia Thompson interviewed him in September. The following is an excerpt of that conversation. The full transcript is also available.
Why have you focused on the Tejano or Mexican American experience?
I had started my business back around 1979-1980. I had been mostly doing simple public relations, black and white photography, back then, and running a photo lab. In 1985 I got an opportunity to be the photographer for the Guadalupe Cultural Arts Center, which is a Latino arts organization here on the west side of San Antonio, where it's a predominately Mexican American community. I rediscovered my Hispanic roots…. Before I knew it, that's what I wanted to do. I wanted to document [Hispanic culture].
Had you worked with Sosa, Bromley, Aguilar & Associates prior to the Selena/Coke campaign?
I already had a working relationship with them. Back in the '80s, Hispanic advertising was coming into its own. Sosa, Bromley, Aguilar was one of the largest Hispanic ad agencies, not just in San Antonio, but in the country. When these large corporations hired them, they also wanted them to use Hispanic talent. Being one of the few Mexican American commercial photographers in San Antonio, I got to work with them on some of their projects…. By the time they were doing the Coca-Cola account and got Selena involved, I had already been doing work for Selena. They recognized that I had a good rapport with her.
How did that rapport shape your approach to this particular photo shoot?
This particular shoot came about kind of quickly. They had signed her up for a special promotion where they were going to do life-size cutouts and point-of-purchase posters and all kinds of different materials to promote Coke. As part of that promotion, they had a contest where people could enter to win a trip to one of her concerts and get to meet her backstage and be photographed with her. Apparently, the ad agency had used another photographer to take some pictures for this promotion, and Selena and her family were not happy with the photos and so they needed a reshoot. The family, Selena particularly, made it pretty clear to the ad agency they wanted them to use me.
[We] got some direction from the art director from the ad agency. There was a representative there from Coca-Cola. We all put our heads together and decided what we were going to do and what order we were going to do things in.
Selena looks like the "all-American" girl in the Coke photos. Can you talk about how you tried to capture a certain image of her for Coca-Cola?
I think they were actually going for that look. We didn't want her to look like a glamour shot and we didn't want her in clothing that looked like something she had just stepped off stage. We wanted her to look more "everyday" so that the consumer could relate to her better. That's why in the life-size cutout she's wearing jeans with just a simple white top and a jean vest.…
At this point they were kind of relying on me more than anything because they had already gone through one shoot. They wanted to let her and me make a lot of those artistic decisions, so that she would be happy with the finished product. All through the process, we were taking Polaroids and looking at them and dissecting them and trying different things. There are some things we tried that we didn’t even put on film because when we looked at the Polaroids it was obvious it wasn't working. Selena had very, very good taste. She was always very conscious of her image and the image she was projecting.
What are your memories of working with Selena?
I remember her being very much the opposite of a diva. She was very humble. She was very easy to work with, very friendly. She just came in and lit up the room.
The Coke images and the photo you took that is now at the National Portrait Gallery capture two very different aspects of her as a popular icon. Can you talk about the formal portrait and what you hoped to show in that image?
The portrait was part of a photo session I had done a year prior to the Coca-Cola shoot. It was for a live album that she was recording in Corpus…. I wanted a couple of serious pictures. I knew that's not a shot that the record company would probably use for anything; I took that picture more for me because that was kind of my image of how I saw her. To me, she was a very serious artist.
The entire interview with Al Rendon is also available online.
Kathleen Franz is a co-curator of the American Enterprise exhibition and chair and curator of the Work and Industry Division at the National Museum of American History. Amelia Thompson worked in the museum’s Office of Communications and Marketing.