What is butter? Is it a condiment? A baking fat? The scientific product of an intense process involving matters like temperature, friction, and fat content? Or something more like a miracle? It’s all of the above, argues food writer Elaine Khosrova in her new book, Butter: A Rich History.
Butter was discovered, most likely, by accident. Khosrova suggests it was probably the result of a Stone Age herdsman storing his milk inside an animal skin, and a bumpy ride agitating that milk into greatness. Khosrova writes that early people couldn’t understand the seemingly magical process by which liquid cream was whipped and beaten into a rich, golden, semi-solid state. Nor could they always reliably make it happen. This unpredictability cloaked butter in an aura of mystery and reverence. In preindustrial Europe, Khosrova writes, the dairymaid occupied a vaunted status, “at once a paragon of domestic virtue and a hidden dairy cabalist.”
Butter’s history is spread worldwide, meaning Khosrova’s journey to study its origins had her hopscotching the globe. Along the way, she traveled to Bhutan to witness the making of yak butter in an ancient style, toured the Butter Museum in Ireland, and watched the ritual sculpting of the Iowa State Fair butter cow. She also studied 20th century health fears about butter, which have, in recent decades, begun to turn around. (American butter consumption recently hit a 40-year high, with each of us eating an average of 5.6 pounds a year.)
We interviewed Khosrova about humankind’s long, torrid, not-always-smooth affair with a food many of us love but take for granted. You’ll never look at this golden toast-topper the same way again.
How did you get interested in this, er, rich topic?
I’ve been a food writer since 1990. Before that I was a pastry chef. For most of my career, I was deeply involved in food—whether it was hands on or writing about it—but I didn’t give butter much thought at all. It’s an incredible irony that I sort of woke up one day to how unique and almost mysterious it is.
About nine years ago, I was working for a restaurant trade magazine and I had to do a lot of product tastings and write them up as product reviews. One day I had 12 or 15 different butters on the table. I wasn’t thinking much about it—it’s so elemental, butter; how different could they be? I had tasted really nice French butter, I knew the difference [in taste of] a higher fat butter, but I hadn’t really given it much attention.
That one day was a little bit of an epiphany because I had them all in front of me. The textures were impressively different, from kind of slack and greasy to an almost fudgy, waxy rich texture. And the flavors evolved from something kind of sweet and milky to something quite tangy, and others were pretty salty. I was like, “Huh, I can’t really explain how this happens.” I went to get a book on butter—and there wasn’t one.
I’ve always loved butter. So now I really wanted to understand it better. I got to understand how dynamic the butter-making world is, where you have three things that come together: man, land and beast. If you look at this globally, there’s so much variety among those three elements.
You write that “since many large dairies succeed or fail based on the volume of their milk production, Holstein (cows) continue to be the most common cow in our national farmscape. They’ve been bred to ably top the charts of milk output … but not necessarily make the most or best butter.” Can you tell me more about what breeds of cow make the best butter?
Holsteins produce a ton of milk. It’s perfectly good milk. But when you get into butter makers and cheese makers who are really concerned about the protein content, the solids, the butterfat content, they’re looking at other breeds. Jerseys are really popular; Guernseys make fabulous cream; Brown Swiss is another good breed.
Much depends on what that animal is fed, how old the animal is, and the period of lactation. There’s an incredible number of variables. But, in general, if I was going to go out and make butter tomorrow, I would love to get my hands on Guernsey cream.
We know that butter-making goes way back. Neolithic communities used animal skin pouches that they filled with milk, hung and rocked for butter making, while you write that “Sumerians of 2500 BCE used special terra-cotta jugs for holding the milk and a plunger-type tool” for churning. By the first century CE, you write that butter was common in much of the developing world, although olive oil was more popular in the Mediterranean. Tell me about some of the more unusual historical uses for butter.
The Greeks and Romans didn’t consider butter really food. They didn’t like it; it wasn’t part of their cuisine at all. But it was in their medicine chest. They used it to make different ointments, and they had weird remedies using butter applied to various orifices on the body.
It was considered a mystical, magical compound and many early cultures really did feel that way because they couldn’t explain how it happened—how is it that you have milk and hidden within that milk is this substance we get when we churn it, although sometimes we don’t get it when we churn it? They didn’t have the science to understand how butter happened, they just knew it was kind of this magical thing, like rainbows, and pearls in oysters. So butter always had that quality and that mystique about it. That’s why you found so much butter used as a ritual tool in early civilizations—from the Sumerians to the Vedic Aryans to the Druids. And certainly the Tibetans with their tormas, their butter carvings, that are still being done today.
You write that in pre-industrial Europe, butter was often adulterated. I’m curious what it was adulterated with.
Usually anything that would add weight, because they were selling it by the pound. So you could get rocks in the butter, old turnips, things that were dense. There was also a lot of coloring added. You had “May” butter, a beautiful golden butter, which was natural, because cows were on fresh grass getting more beta carotene, so that made their butter this gorgeous yellow color. But people figured out that they could dye the butter and get more money for it. There were all kinds of shysters in the butter world.
You say that “milk fat is a complicated mistress.” Can you elaborate?
I’ve made butter myself a lot. You need headspace [room for air in the churn], and the right temperature, and the right proportion of fat. But also, in the industrial world, it’s complicated because they’re going for a really velvety, cohesive, beautiful texture, and the way that you get that is by tempering the cream—it’s called physical ripening. The process changes depending on the season of the milking. The temperature goes up, and then they bring it down, and then it goes up a little bit, over 12 to 14 hours. The aim of tempering is to get this ideal ratio of liquid and crystalline fats. If you have a lot of liquid fat you end up with a greasy butter, and if you have a lot of hard fat, you end up with one that’s more brittle, that doesn’t spread nicely.
[The home chef] could sometimes get lucky and end up with a cream that just naturally has the right proportions, and I’ve had a couple of butter batches that I was very pleased with, but the amateur can’t really control that very much.
Is butter always the best fat for a baked good in your opinion?
Certainly for flavor, you can’t beat butter. As far as getting great texture, you can get great textures from margarine products. But it won’t have the same mouthfeel, it won’t dissolve in your mouth the same way. Butter can trap air and make things lighter. It makes things richer and lighter. I do like lard in pastry crust, it’s really great to work with, but most people are put off by lard. It can have a slightly meaty quality.
If you put in oil, you would find that [baked goods] are heavy. If you want the texture of a carrot cake or a dense muffin, oil is great. But if you want a fluffy tender buttermilk cake or a lovely layer cake, you can’t beat butter.
For decades there’s been a huge butter versus margarine debate. You really looked into this—what’s the latest science on the health differences between them?
They’ve taken the trans fats out of margarine, so we can’t make that an indictment anymore against margarine. [Editor’s note: According to the Food and Drug Administration, “various studies have consistently linked trans fat consumption to heart disease.”] However, vegetable oils [that go into margarines] for the most part are a highly synthetic food. They go through a 20-step process involving a lot of chemicals and bleaching agents and things that vacuum off any flavors and change the color. So it’s a very unnatural product.
Looking at the big picture, when we started to have more heart disease after the Second World War, we were essentially increasingly doing everything that was bad for our hearts. We were eating more processed foods, we had the trans fats from margarine, we were more sedentary, we were eating more sugar, we smoked more, we had more stress—all of these things were on the rise. And we’re blaming butter for heart disease when butter’s been around for thousands of years! We’re so eager to have one demon that we can slay, and it’s mostly fallen on butter.
You suggest in the book, though, that there is such a thing as eating too much butter. Why is that?
It’s a very rich food, and unless you’re a lumberjack, you can’t use that much caloric download every day. I study food and nutrition and I keep coming back to the same old not-sexy message of moderation. I wouldn’t tell people to eat a stick of butter a day. But they should certainly enjoy a nice big piece on their mashed potatoes or cook their fish in it with some fresh herbs. You don’t need a lot of butter. A little goes a long way.
This interview has been edited for length and clarity.
When H.R. Haldeman agreed to be what incoming president Richard Nixon called his head “son of a bitch,” he knew what he was getting into. The job would require absolute authority over the rest of the White House staff. He would need an organized structure for transferring information. And above all else, Haldeman wanted to avoid end-running: private meetings between an agenda-driven individual and the president.
“That is the principal occupation of 98 percent of the people in the bureaucracy,” he ordered. “Do not permit anyone to end-run you or any of the rest of us. Don't become a source of end-running yourself, or we'll miss you at the White House.”
Those orders were more than an annoyed attempt to keep the president’s schedule clear. Haldeman may not have known it, but as head S.O.B. he would make history, essentially creating the modern chief of staff. Part gatekeeper, part taskmaster, a chief of staff is the White House’s most put-upon power broker—an employee who must juggle the demands of all branches of government and report to the chief executive.
“When government works, it is usually because the chief [of staff] understands the fabric of power, threading the needle where policy and politics converge,” writes Chris Whipple in the opening pages of his new book, The Gatekeepers: How the White House Chiefs of Staff Define Every Presidency. From Richard Nixon to Barack Obama, Whipple explores the relationship between president and chief of staff and how those relationships have shaped the country over the past 50 years.
The role is an enormously taxing one, with an average tenure of just over 18 months. But when filled by competent people, it can make all the difference.
“Looking at the presidency through the prism of these 17 living White House chiefs who make the difference between success and disaster changed my understanding of the presidency,” Whipple says. “It was eye opening.”
To learn more about how the position came into existence, how it has changed over time, and what it means for the country today, Smithsonian.com spoke with Whipple about his research.
Why did you decide to cover this topic?
This whole journey began with a phone call out of the blue with a filmmaker named Jules Naudet. [He and his brother] wanted to know if I would partner with them on a White House chiefs documentary for Discovery. Even though it was four hours, I thought it barely scratched the surface of this incredible untold story about the men who really made the difference between success and disaster. After the documentary aired, I started to dig much deeper, went back for follow up interviews, talked to the chiefs’ colleagues, their staffers, two presidents and CIA directors, national security advisors. The result was the book.
When did this model of empowered chiefs of staff begin?
Presidents going all the way back to Washington had confidants. But the modern White House chief of staff began with Eisenhower and Sherman Adams, who was so famously gruff and tough they called him the Abominable No-man.
Haldeman created the template for the modern empowered White House chief of staff. Nixon and Haldeman were obsessed with this. Nixon wanted a powerful chief of staff who would create time and space for him to think. It’s a model that presidents have strayed from at their peril ever since.
It’s hard to overstate the importance of the position. He’s not only the president’s closest confidant, but the president’s gatekeeper. He’s the honest broker who makes sure every decision is teed up with information and only the tough decisions get into the Oval Office. He’s what Donald Rumsfeld called “the heat shield,” the person who takes fire so the president doesn’t have to. He’s the one who tells the president what others can’t afford to tell him themselves. And at the end of the day, he’s the person who executes the president’s policies.
What has happened when presidents have abandoned that model?
Every president who tried a different model has paid the price. Jimmy Carter really tried to run the White House by himself and he found himself overwhelmed. Two-and-a-half years into his presidency, he realized he had to appoint a chief of staff. Bill Clinton tried to run the White House much as he ran his campaign, without empowering the chief of staff to take charge. Mack McLarty was his friend, but he wasn’t given enough authority. Leon Panetta replaced McLarty and turned it around. Every president learns, often the hard way, that you cannot govern effectively unless the White House chief of staff is first among equals. That’s a lesson our current president has yet to learn.
Why did we need a new model for the modern political system?
When it comes to the White House, the team of rivals [model] is so 19th-century; it doesn’t work in the modern era. Gerald Ford tried to govern according to a model called “spokes of the wheel,” with five or six advisors of equal authority coming to him. It was a disaster. As someone put it, he was learning by fire hose.
You can’t imagine the demands of the office and how impossible it is to try and govern without an effective gatekeeper, who makes sure you get only the toughest decisions and are not drowning in minutiae. That’s the difference between governing in the modern era and governing in the 19th century.
How important is the decision about who to appoint as chief of staff?
That choice of chief makes all the difference. Reagan was famously called an amiable dunce, and that was unfair, but Reagan understood something [his predecessor] Carter did not. An outsider president needs a consummate insider to get things done. Reagan intuited this with help from Nancy Reagan and other advisers. He knew he needed somebody who could really get his agenda done, who knew Capitol Hill and how the White House worked. And James Baker was a 50-year-old smooth-as-silk Texas lawyer who wasn’t afraid to walk into the Oval Office and tell Reagan what he didn’t want to hear.
What role does personality play in the success of the chief of staff?
I think [a steady] temperament is an underrated attribute that means a lot. James Baker had it. Leon Panetta had it. He was Clinton’s second chief of staff and really turned the White House around. He was a guy who’d been around the block. He was comfortable in his own skin, could walk into the Oval Office and tell Bill Clinton hard truths. It takes somebody who is grounded and comfortable in their skin.
No president can govern by himself. It’s important to have a chief of staff who complements his weaknesses, who is strong where the president may be weak. I think having a friend in that job is risky because friends have a hard time telling the president what they don’t want to hear. As Nancy Reagan famously said, the most important word in the title is 'staff' not 'chief.'
How has technology changed the role of the chief of staff?
Technology has obviously exploded, and there’s no such thing as a news cycle anymore. The news cycle is 24/7, and there are more platforms than ever. I do think it makes it more challenging for the president to govern and the chief of staff to execute policy, but it makes it all the more important that you have a chief of staff who understands the nexus between policy and communications. You have to be able to manage the administration’s message and make sure everyone is on the same page.
At the beginning of the book you recount the time when numerous chiefs of staff gathered together to help President Obama's first chief, Rahm Emanuel, get started. How do chiefs of staff build on each other’s legacies?
One of the extraordinary things I discovered is that no matter how fiercely partisan they may be, at the end of the day they care about the country, how the White House functions, and about the position of chief of staff, which is so little understood. I think that’s why they came together that day, December 5, 2008, that really bleak morning when it looked as though the country was on the verge of a great depression, the auto industry was about to go belly-up, and there were two wars in a stalemate. As Vice President Cheney put it, they were there to show Rahm the keys to the men’s room.
As the quote from Cheney suggests, there have been no women chiefs of staff. Can you talk about that?
I think there will be, there definitely will be. Maybe not under this administration, but there almost was under Obama. There was one woman in contention. How many female presidents have we had? How many female campaign managers have we had? Up to this point it’s been a boys’ club. I think that’s going to change.
Does Reince Priebus face any unique challenges as the current chief of staff?
Absolutely. At the end of the day, the problem, the challenge is fundamentally Donald Trump’s. If he heeds the obvious lessons of recent presidential history he will realize that he has to empower a White House chief of staff as first among equals if he wants to be able to govern.
Back in December, ten [former chiefs of staff] went to see Reince Priebus at the invitation of Denis McDonough [Obama’s last chief of staff] to give him advice, much the way they did for Rahm back in 2008. They all had the same message. This is not going to work unless you are first among equals. But [the success of the chief of staff] really all depends on the president at the end of the day. There’s almost nothing a chief of staff can do unless he’s empowered to do it.
After handing them their suicide capsules, Norwegian Royal Army Colonel Leif Tronstad informed his soldiers, “I cannot tell you why this mission is so important, but if you succeed, it will live in Norway’s memory for a hundred years.”
These commandos did know, however, that an earlier attempt at the same mission by British soldiers had been a complete failure. Two gliders transporting the men had both crashed while en route to their target. The survivors were quickly captured by German soldiers, tortured and executed. If similarly captured, these Norwegians could expect the same fate as their British counterparts, hence the suicide pills.
Feb. 28 marks the 75th anniversary of Operation Gunnerside, and though it hasn’t yet been 100 years, the memory of this successful Norwegian mission remains strong both within Norway and beyond. Memorialized in movies, books and TV mini-series, the winter sabotage of the Vemork chemical plant in Telemark County of Nazi-occupied Norway was one of the most dramatic and important military missions of World War II. It put the German nuclear scientists months behind and allowed the United States to overtake the Germans in the quest to produce the first atomic bomb.
While people tend to associate the United States’ atomic bomb efforts with Japan and the war in the Pacific, the Manhattan Project – the American program to produce an atomic bomb – was actually undertaken in reaction to Allied suspicions that the Germans were actively pursuing such a weapon. Yet the fighting in Europe ended before either side had a working atomic bomb. In fact, a rehearsal for Trinity – America’s first atomic bomb test detonation – was conducted on May 7, 1945, the very day that Germany surrendered.
So the U.S. atomic bomb arrived weeks too late for use against Germany. Nevertheless, had the Germans developed their own bomb just a few months earlier, the outcome of the war in Europe might have been completely different. The months of setback caused by the Norwegians’ sabotage of the Vemork chemical plant may very well have prevented a German victory.
The Norwegian saboteurs’ target (Jac Brun, CC BY)
Nazi bomb effort relied on heavy water
What Colonel Tronstad, himself a prewar chemistry professor, was able to tell his men was that the Vemork chemical plant made “heavy water,” an important ingredient for the Germans’ weapons research. Beyond that, the Norwegian troops knew nothing of atomic bombs or how the heavy water was used. Even today, when many people have at least a rudimentary understanding of atomic bombs and know that the source of their vast energy is the splitting of atoms, few have any idea what heavy water is or its role in splitting those atoms. Still fewer know why the German nuclear scientists needed it, while the Americans didn’t.
Normal hydrogen, left, has just a proton; deuterium, the heavy form of hydrogen, right, has a proton and a neutron. (Nicolae Coman, CC BY-SA)
“Heavy water” is just that: water with a molecular weight of 20 rather than the normal 18 atomic mass units, or amu. It’s heavier than normal because each of the two hydrogen atoms in heavy H2O weighs two rather than one amu. (The one oxygen atom in H2O weighs 16 amu.) While the nucleus of a normal hydrogen atom has a single subatomic particle called a proton, the nuclei of the hydrogen atoms in heavy water have both a proton and a neutron – another type of subatomic particle that weighs the same as a proton. Water molecules with heavy hydrogen atoms are extremely rare in nature (less than one in a billion natural water molecules are heavy), so the Germans had to artificially produce all the heavy water that they needed.
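The arithmetic behind those molecular weights can be sketched in a few lines (an illustrative toy calculation using the approximate integer masses given above, not anything from the original article):

```python
# Approximate atomic masses in amu: protons and neutrons each weigh ~1.
OXYGEN = 16           # one oxygen atom
normal_hydrogen = 1   # 1 proton
deuterium = 2         # 1 proton + 1 neutron (the "heavy" hydrogen)

# A water molecule is two hydrogens plus one oxygen.
normal_water = 2 * normal_hydrogen + OXYGEN  # H2O: 2*1 + 16 = 18 amu
heavy_water = 2 * deuterium + OXYGEN         # D2O: 2*2 + 16 = 20 amu

print(normal_water, heavy_water)  # 18 20
```

The roughly 10 percent difference in mass (20 vs. 18 amu) is what makes heavy water denser than normal water, which is why heavy-water ice cubes sink.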
In terms of their chemistries, heavy water and normal water behave very similarly, and you wouldn’t detect any differences in your own cooking, drinking or bathing if heavy water were to suddenly start coming out of your tap. But you would notice that ice cubes made from heavy water sink rather than float when you put them in a glass of normal drinking water, because of their increased density.
Those differences are subtle, but there is something heavy water does that normal water can’t. When fast neutrons released by the splitting of atoms (that is, nuclear fission) pass through heavy water, interactions with the heavy water molecules cause those neutrons to slow down, or moderate. This is important because slowly moving neutrons are more efficient at splitting uranium atoms than fast moving neutrons. Since neutrons traveling through heavy water split atoms more efficiently, less uranium should be needed to achieve a critical mass; that’s the minimum amount of uranium required to start a spontaneous chain reaction of atoms splitting in rapid succession. It is this chain reaction, within the critical mass, that releases the explosive energy of the bomb. That’s why the Germans needed the heavy water; their strategy for producing an atomic explosion depended upon it.
The American scientists, in contrast, had chosen a different approach to achieve a critical mass. As I explain in my book, “Strange Glow: The Story of Radiation,” the U.S. atomic bomb effort used enriched uranium – uranium that has an increased concentration of the easily split uranium-235 – while the Germans used unenriched uranium. And the Americans chose to slow the neutrons emitted from their enriched uranium with more readily available graphite, rather than heavy water. Each approach had its technological trade-offs, but the U.S. approach did not rely on having to synthesize the extremely scarce heavy water. Its rarity made heavy water the Achilles’ heel of the German nuclear bomb program.
Stealthy approach by the Norwegians
Rather than repeating the British strategy of sending dozens of men in gliders, flying with heavy weapons and equipment (including bicycles!) to traverse the snow-covered roads, and making a direct assault at the plant’s front gates, the Norwegians would rely on an alternate strategy. They’d parachute a small group of expert skiers into the wilderness that surrounded the plant. The lightly armed skiers would then quickly ski their way to the plant, and use stealth rather than force to gain entry to the heavy water production room in order to destroy it with explosives.
Six Norwegian soldiers were dropped in to meet up with four others already on location. (The four had parachuted in weeks earlier to set up a lighted runway on a lake for the British gliders that never arrived.) On the ground, they were joined by a Norwegian spy. The 11-man group was initially slowed by severe weather conditions, but once the weather finally cleared, the men made rapid progress toward their target across the snow-covered countryside.
Bridge into the Vemork site (martin_vmorris, CC BY-SA)
The Vemork plant clung to a steep hillside. Upon arriving at the ravine that served as a kind of protective moat, the soldiers could see that attempting to cross the heavily guarded bridge would be futile. So under the cover of darkness they descended to the bottom of the ravine, crossed the frozen stream, and climbed up the steep cliffs to the plant, thus completely bypassing the bridge. The Germans had thought the ravine impassable, so hadn’t guarded against such an approach.
The Norwegians were then able to sneak past sentries and find their way to the heavy water production room, relying on maps of the plant provided by Norwegian resistance workers. Upon entering the heavy water room, they quickly set their timed explosives and left. They escaped the scene during the chaotic aftermath of the explosion. No lives were lost, and not a single shot was fired by either side.
Outside the plant, the men backtracked through the ravine and then split into small groups that independently skied eastward toward the safety of neutral Sweden. Eventually, each made his way back to his Norwegian unit stationed in Britain.
The Germans were later able to rebuild their plant and resume making heavy water. Subsequent Allied bomber raids on the plant were not effective in stopping production due to the plant’s heavy walls. But the damage had already been done. The German atomic bomb effort had been slowed to the point that it would never be finished in time to influence the outcome of the war.
Today, we don’t hear much about heavy water. Modern nuclear bomb technology has taken other routes. But it was once one of the most rare and dangerous substances in the world, and brave soldiers – both British and Norwegian – fought courageously to stop its production.
Last winter, salt farmer Ben Jacobsen opened a saltworks on the grounds of an old oyster farm situated on a lonely stretch of the northwest Oregon coast. Jacobsen’s delicate, crunchy flake salt has quickly and quietly become the essential mineral underpinning some of the best cooking in America, beloved by the likes of Thomas Keller and April Bloomfield. (Or perhaps not so quietly: recently, Bloomfield sang its praises while preparing peas on toast for Jimmy Fallon on late-night television). Though he is little known outside the rarefied world of top chefs, Jacobsen is intent on bringing high-end American salt to the home table.
“Ben’s salt is all about the story, our connection to where the food comes from, which I respect,” the salt expert Mark Bitterman told Portland Monthly earlier this year. He carries Jacobsen flake salt at both the New York and Portland locations of The Meadow, his high-end salt boutique. “But he is a guy who has been playing with salt for a few years; he could never come close to a Frenchman following a hundred-year-old tradition for making fleur de sel.”
The slight stung. But as it happened, Jacobsen’s attempt at making America’s first-ever fleur de sel was already underway. Despite the fact that the United States is the second-largest industrial producer of salt in the world, behind China, very little of it is used for cooking; chefs have always looked elsewhere for their salts. The labor-intensive process of making fleur de sel, the most prized of the sea salts, traditionally involves harvesting by hand from the salt ponds of Guérande, Brittany, on the coast of France, when the weather is warm and the seas still (between June and September).
Paludiers, trained for years in the art of salt harvesting, carefully rake and collect the top layer of crystals (the “flower,” which only holds its shape in calm conditions). The salt is valued by chefs for its high moisture content — it maintains its integrity when finishing hot dishes like steak or fish — and for the mineral richness that imparts a sense of place. Flake salt, on the other hand, has flat, large crystals and a brighter, cleaner taste; it’s recommended for use on salads, vegetables, and baked goods. Ancestral salt fields have been found everywhere from Peru and the Philippines to Portugal, and the best fleur de sel today is still carefully picked in those places.
“It’s so peculiar that we haven’t had a fleur de sel to call our own,” Jacobsen said recently. Hanging out with Jacobsen in his Portland neighborhood shows him to be a surprisingly appropriate ambassador for the humble-yet-essential role of salt in cooking: he’s an unassuming, amiable guy in a plaid shirt and denim trucker hat who’s liked by all, and you don’t notice that he’s everywhere until you actually start looking around. (His flake salt is used in the city’s top restaurants, and carried in boutiques from here to the Atlantic coast.) Jacobsen is earnest when he says he thinks it’s about time for a great American salt, given that the country is surrounded by salt water. “As chefs and home cooks,” he observes, “we’ve forgotten about our resources.”
It turns out that the Oregon coast has a salt-making pedigree of its own, hosting an operation during the winter of 1805-1806, when five men on the Lewis and Clark expedition were dispatched to the sea to gather salt for elk meat that was already spoiling. For two months, they camped a hundred paces from the ocean and kept five brass kettles of seawater boiling around the clock, eventually producing three and a half bushels of salt for the return journey across the continent. Lewis called the product “excellent, fine, strong, & white.”
At the modern-day operations of Jacobsen Salt Co., not much has changed with regard to the science: it still involves boiling seawater down to make salt. But with regard to rigor, the process is a great deal more stringent (in scaling up, Jacobsen has hired a chemist to help streamline production with precision). To make his flake salt, Jacobsen pipes seawater up from pristine Netarts Bay, a protected conservation estuary; filters it through seven different systems; and boils it down to remove calcium and magnesium (the minerals give salt a bitter aftertaste, and also interrupt crystal formation). Once the desired salinity is achieved, Jacobsen evaporates the rest in custom stainless-steel pans kept at a constant temperature, so that salt crystals form on the surface. On a recent visit, I watched as a series of crystals grew to completion and fell to the bottom of the pan, one by one, drifting like snowflakes.
Making fleur de sel — though laborious in its own way — involves even more waiting. At the time of this writing, Jacobsen is patiently evaporating the first batch of fleur de sel in a hoop house outside the main facility, using just the sun. Unlike flake salt, fleur de sel is made from unfiltered seawater, so that the natural minerality comes through. Each batch can take anywhere from two to twelve weeks, depending on the weather, and each pond can produce 100 pounds of salt. As the water evaporates, Jacobsen uses a pond skimmer to carefully collect the crystals. He is wrapping up plans to farm an acre of fleur de sel at a new location on the coast, with a facility dedicated to the specialty salt (with the use of greenhouses, he expects to be able to extend the traditional fleur de sel “season” by a month or two on either end).
According to Jacobsen, the quality of Netarts Bay seawater is among the best in the world, and it’s validated by the chefs who buy his flake salt every week. So it only follows that fleur de sel made from that water would have an excellent flavor profile that’s uniquely representative of this part of the Pacific coast.
Despite the care put into each jar of product, the salts are meant to be used, and not in a precious way. The fetishizing of artisanal food products, Jacobsen says, has made it difficult for the average American consumer to feel comfortable buying and using really good salt. “People will spend $150 for a bottle of wine for a two-hour dinner,” he told me. “But good salt is one of those things you can spend less than $10 on, and it will last a household for two months. It elevates everything, and it’s a luxury you can have at your table.”
You’ll be able to buy his fresh-off-farm fleur de sel for your table on October 3 from Jacobsen’s website and various retail outlets.
Good Salt for Your Kitchen
We asked Jason French — chef at the Portland restaurant Ned Ludd, and fan of Jacobsen Salt — to give us an easy home recipe that highlights what a good salt like fleur de sel can do. Here’s what he came up with.
Salt-and-spice-cured trout and arugula salad with capers and lemon cream
Serves four as an appetizer, or two as a main course
For the trout:
2 boneless skin-on trout fillets
6 thinly sliced lemons
For the cure:
2 T. Jacobsen fleur de sel
3 T. sugar
1 heaping T. garam masala (a traditional North Indian spice mix easily found in any supermarket)
For the salad:
1 large bunch arugula, washed, soaked in ice water, and spun dry
3 T. brined small capers, rinsed
1/2 c. parsley leaves
1 T. lemon juice
2 T. extra virgin olive oil
Jacobsen fleur de sel
For the lemon cream:
1 shallot, peeled and minced
Zest and juice of 1 lemon
1/2 cup heavy cream
Jacobsen fleur de sel
1. Lightly toast the spices in a pan until aromatic. Cool and mix with the fleur de sel and sugar. Place the trout on a small sheet pan lined with plastic wrap. Coat the flesh of the trout fillet well with the cure and lay three slices of lemon to cover. Place a sheet of plastic wrap over the trout and cover with another sheet pan and weight with some canned items from your pantry. Place in the refrigerator for 4 hours.
2. Make the lemon cream by macerating the shallots in the lemon juice and zest for 20-30 minutes. Season with a pinch of fleur de sel. In a separate bowl whisk the cream until just starting to thicken and mix with the shallots. Continue to whisk until lightly thickened. This should be made just before the salad is served.
3. For the salad, chop the capers and parsley together. Add the lemon juice and olive oil and whisk lightly. Season with a pinch of salt. Toss with the arugula.
4. Divide the arugula between the plates. Rinse and dry the trout fillet and slice thinly at an angle using broad strokes, peeling the flesh away from the skin with each slice. Divide among the plates. Drizzle the lemon cream over the trout and arugula and serve. (Note: the trout may be done ahead of time, but make sure to rinse and dry it so it doesn’t over-cure.)
Bonnie Tsui writes frequently for The New York Times, and is a contributing writer for The Atlantic.
In August, a total solar eclipse will traverse America for the first time in nearly a century. So many tourists are expected to flood states along the eclipse’s path that authorities are concerned about illegal camping, wildfire risks and even devastating porta-potty shortages. There’s a reason for all this eclipse mania. A total solar eclipse—when the moon passes between the sun and the Earth—is a stunning natural event. For a few breathtaking minutes, day turns to night; the skies darken; the air chills. Stars may even appear.
As awe-inspiring as an eclipse can be, it can also evoke a peculiar fear and unease. It doesn’t seem to matter that science has reassured us that eclipses present no real dangers (aside from looking straight into the sun, of course): When that familiar, fiery orb suddenly winks out, leaving you in an eerie mid-day darkness, apprehension begins to creep in.
So it’s perhaps not surprising that there’s a long history of cultures thinking of eclipses as omens that portend significant, usually bad happenings. The hair-raising sense that something is “off” during these natural events has inspired a wealth of myths and rituals intended to protect people from supposed evils. At the same time, eclipse anxiety has also contributed to a deeper scientific understanding of the intricate workings of the universe—and even laid the foundation for modern astronomy.
A clay tablet inscribed in Babylonian with a ritual for the observances of eclipses. Part of the translated text reads: "That catastrophe, murder, rebellion, and the eclipse approach not... (the people of the land) shall cry aloud; for a lamentation they shall send up their cry." (Mesopotamia, third-first century B.C. Record ID: 215816. The Morgan Library & Museum)
The idea of eclipses as omens stems from a belief that the heavens and the Earth are intimately connected. An eclipse falls outside of the daily rhythms of the sky, which has long been seen as a sign that the universe is swinging out of balance. “When anything extraordinary happens in nature ... it stimulates a discussion about instability in the universe,” says astronomer and anthropologist Anthony Aveni, author of In the Shadow of the Moon: The Science, Magic, and Mystery of Solar Eclipses. Even the biblical story of Jesus connects Christ’s birth and death with celestial events: the first by the appearance of a star, the second by a solar eclipse.
Because eclipses were considered by ancient civilizations to be of such grave significance, it was of utmost importance to learn how to predict them accurately. That meant avidly monitoring the movements of the sun, moon and stars, keeping track of unusual celestial events and using them to craft and refine calendars. From these records, many groups—the Babylonians, the Greeks, the Chinese, the Maya and others—began to tease out patterns that could be used to foretell when these events would occur.
The Babylonians were among the first to reliably predict when an eclipse would take place. By the eighth century B.C., Babylonian astronomers had a firm grasp of the pattern later dubbed the Saros cycle: a period of 6,585.3 days (18 years, 11 days, 8 hours) in which sets of eclipses repeat. While the cycle applies to both lunar and solar eclipses, notes John Dvorak, author of the book Mask of the Sun: The Science, History and Forgotten Lore of Eclipses, it’s likely they could only reliably predict lunar eclipses, which are visible to half of the planet each time they occur. Solar eclipses, by contrast, cast a narrow shadow, making it much rarer to see the event multiple times at any one place.
Babylonians believed that an eclipse foretold the death of their ruler, leading them to use these predictions to put kingly protections in place. During the period of time that lunar or solar eclipses might strike, the king would be replaced with a substitute. This faux ruler would be dressed and fed like royalty—but only for a brief time. According to ancient Babylonian astronomers’ inscriptions on cuneiform tablets, “the man who was given as the king’s substitute shall die and … the bad omens will not affect that [ki]ng.”
The Babylonian predictions, though accurate, were all based purely on observations, says Dvorak; as far as scholars know, they never understood or sought to understand the mechanism behind planetary motions. “It was all done on the basis of cycles,” he says. It wasn’t until 1687, when Isaac Newton published the theory of universal gravitation—which drew heavily on insights from Greek astronomers—that scientists began to truly grasp the idea of planetary motion.
This Chinese oracle bone dates from around 1300 to 1050 B.C. Bones like this were used to predict a range of natural happenings, including solar and lunar eclipses. (Freer Gallery of Art and Arthur M. Sackler Gallery)
Surviving records from the ancient Chinese make up the longest continuous account of celestial happenings. Beginning around the 16th century B.C., Chinese star-gazers attempted to read the skies and foretell natural events using oracle bones. Ancient diviners would carve questions on these fragments of tortoise shell or oxen bone, and then heat them till they cracked. Similar to the tradition of reading tea leaves, they would then seek divine answers among the spidery network of fractures.
These methods may not have been scientific, but they did have cultural value. The sun was one of the imperial symbols representing the emperor, so a solar eclipse was seen as a warning. When an eclipse was foretold to be approaching, the emperor would prepare himself by eating vegetarian meals and performing sun-rescuing rituals, while the Chinese people would bang pots and drums to scare off the celestial dragon that was said to devour the sun. This long-lived ritual is still part of Chinese lore today.
As for accurate astronomical prediction, it would be centuries until Chinese forecasts improved. By the first century A.D. they were predicting eclipses with fair accuracy using what is known as the Tritos cycle: a period of eclipse repetition that falls one month short of 11 years. Historians debate how exactly each culture developed its own system of eclipse prediction, says Dvorak, but the similarities in their systems suggest that Babylonian knowledge may have contributed to the development of others. As he writes in Mask of the Sun, “what the Babylonians knew about eclipses was diffused widely. It moved into India and China and then into Japan.”
In ancient India, legend had it that a mythical demon named Swarbhanu once attempted to outsmart the gods, and obtain an elixir to make himself immortal. Everything was going to plan, but after Swarbhanu had already received several drops of the brew, the sun and moon gods recognized the trick and told the supreme god Vishnu, who had taken the form of a beautiful maiden Mohini. Enraged, she beheaded Swarbhanu. But since the beast had already become immortal, its head lived on as Rahu and its torso as Ketu.
Today, according to the legend, Rahu and Ketu continue to chase the Sun and the Moon for revenge and occasionally gulp them down. But because Swarbhanu’s body is no longer whole, the eclipse is only temporary; the moon slides down his throat and resumes its place in the sky.
Eclipses in India were seen as a time when the gods were in trouble, says Dvorak, and to counter these omens land owners donated land to temples and priests. Along with the sun, moon and five brightest planets, they tracked Rahu and Ketu’s movement through the sky. In 499 AD, Indian mathematician and astronomer Aryabhata included these two immortal beings, dubbed “dark planets,” in his accurate description of how eclipses occur. His geometric formulation showed that the beasts actually represent two lunar nodes: positions in the sky in which the paths of sun and moon cross to produce a lunar or solar eclipse.
“They followed the nine wanderers up in the sky, two of them invisible,” says Dvorak. “From that, it was not a big step to predicting lunar eclipses.” By the sixth century A.D.—whether through independent invention, or thanks to help from the Babylonians—the Indians were successfully predicting eclipses.
Eclipse fears aren't just limited to ancient times. Even in the modern era, those seeking signs of Earthly meaning in the movements of the heavens have managed to find them. Astrologists note that Princess Diana’s fatal car crash occurred in the same year as a solar eclipse. An eclipse darkened England two days before the British King Henry I departed for Normandy; he never graced England’s shores again. In 1918, the last time an eclipse swept from coast-to-coast across the United States, an outbreak of influenza killed up to 50 million people worldwide and proved one of the deadliest pandemics in history.
Of course, there is no scientific evidence that the eclipse had anything to do with the outbreak, or with the other events. Thousands of people are born and die every day—and solar and lunar eclipses are far from rare. In any given year, up to four solar and three lunar eclipses darken the surface of the Earth. Because of this, as Dvorak writes, “it would be surprising if there were no examples of monarchs dying on or close to days of eclipses.”
In their time, ancient Babylonians weren’t trying to create the foundation of modern mathematics. But in order to predict celestial events—and thus, from their perspective, better understand earthly happenings—they developed keen mathematical skills and an extensive set of detailed records of the cosmos. These insights were later adopted and expanded upon by the Greeks, who used them to make a lasting mark on geometry and astronomy as we know it. Today, astronomers still use these extensive databases of ancient eclipses from Babylon, China and India to better understand Earth's movements through the ages.
So if you feel a little uneasy when the sun goes dark on August 21st, you’re not alone. Just remember: It was this same unease that helped create modern astronomy as we know it.
For decades before the Civil War, slave markets, pens and jails served as holding cells for enslaved African-Americans who were awaiting sale. These were sites of brutal treatment and unbearable sorrow, as callous and avaricious slave traders tore apart families, separating husbands from wives, and children from their parents. As the Union army moved south during the Civil War, however, federal soldiers captured and repurposed slave markets and jails for new and often ironic functions. The slave pens in Alexandria, Virginia, and St. Louis, Missouri, became prisons for Confederate soldiers and civilians. When one inmate in St. Louis complained about being held in such “a horrible place,” an unsympathetic Unionist replied matter-of-factly, “Yes, it is a slave-pen.” Other slave markets, such as the infamous “Forks of the Road” at Natchez, Mississippi, became contraband camps—gatherings points for black refugees from bondage, sites of freedom from their masters, and sources of protection and assistance by Union soldiers.
Ex-slaves relished seeing these paradoxical uses of the old slave pens. Jermain Wesley Loguen had escaped slavery to New York in 1833 and returned to Nashville in the summer of 1865, where he found his elderly mother and old friends he had not seen for more than 30 years. “The slave-pens, thank God, have changed their inmates,” he wrote. In place of “the poor, innocent and almost heartbroken slaves” who for years had been held captive there as they awaited sale to the Deep South, Loguen found “some of the very fiends in human shape who committed those diabolical outrages.”
Loguen turned his eyes to the heavens. “Their sins have found them out,” he wrote, “and I was constrained to give God the glory, for He has done a great work for our people.”
During and after the war freedmen and women used old slave jails as sites of public worship and education. A black Congregational church met at Lewis Robard’s slave jail in Lexington, Kentucky, while Robert Lumpkin’s notorious brick slave jail in Richmond became the home of a black seminary that is now known as Virginia Union University, a historically black university. “The old slave pen was no longer the ‘devil’s half acre’ but God’s half acre,” wrote one of the seminary’s founders. For slave markets to become centers of black education was an extraordinary development since southern states had prohibited teaching slaves how to read and write.
In December 1864, the local slave market at the corner of St. Julian Street and Market Square in Savannah became a site for black political mobilization and education. A white observer noted the irony of the new use of this place. “I passed up the two flights of stairs down which thousands of slaves had been dragged, chained in coffle, and entered a large hall,” he wrote. “At the farther end was an elevated platform about eight feet square,—the auctioneer’s block. The windows were grated with iron. In an anteroom at the right women had been stripped and exposed to the gaze of brutal men.”
Now, instead of men and women begging unsympathetic buyers and sellers for mercy, a black man was leading a group of the emancipated in prayer, “giving thanks to God for the freedom of his race, and asking for a blessing on their undertaking.” After the prayers, the group broke into song. “How gloriously it sounded now,” wrote the white observer, “sung by five hundred freedmen in the Savannah slave-mart, where some of the singers had been sold in days gone by! It was worth a trip from Boston to Savannah to hear it.”
The next morning, black teachers sat on the auctioneer’s platform in that same room, teaching a school of 100 young black children. “I listened to the recitations, and heard their songs of jubilee,” wrote the witness. “The slave-mart transformed to a school-house! Civilization and Christianity had indeed begun their beneficent work.” Such joy reflected an incredible change at this site, “from which had risen voices of despair instead of accents of love, brutal cursing instead of Christian teaching.”
Image by Library of Congress. Interior view of slave pen in Alexandria, Virginia (original image)
Image by Library of Congress. Exterior view of slave pen in Alexandria, Virginia (original image)
When Union forces entered Charleston, South Carolina, in February 1865, they found the buildings of the business district silent and badly damaged. Prior to the war Charleston had been one of the largest slave markets in the South, and slave traders plied their wares openly and proudly in the city. The slave dealers had set up shop in a slave mart in a “respectable” part of town, near St. Michael’s Church, a seminary library, the courthouse, and other government buildings. The word “MART” was emblazoned in large gilt letters above the heavy iron front gate. Passing through the outer gate, one would enter a hall 60 feet long and 20 feet wide, with tables and benches on either side. At the far end of the hall was a brick wall with a door into the yard. Tall brick buildings surrounded the yard, and a small room to the side of the yard “was the place where women were subjected to the lascivious gaze of brutal men. There were the steps, up which thousands of men, women, and children had walked to their places on the table, to be knocked off to the highest bidder.”
Walking along the streets, northern journalist Charles C. Coffin saw the old guardhouse where “thousands of slaves had been incarcerated there for no crime whatever, except for being out after nine o’clock, or for meeting in some secret chamber to tell God their wrongs, with no white man present.” Now the guardhouse doors “were wide open,” no longer patrolled by a jailor. “The last slave had been immured within its walls, and St. Michael’s curfew was to be sweetest music thenceforth and forever. It shall ring the glad chimes of freedom,—freedom to come, to go, or to tarry by the way; freedom from sad partings of wife and husband, father and son, mother and child.”
While Coffin stood gazing at these sites, imagining innumerable scenes of hopelessness and horror, a black woman named Dinah More walked into the hall and addressed him. “I was sold there upon that table two years ago,” she told him. “You never will be sold again,” Coffin replied; “you are free now and forever!” “Thank God!” replied More. “O the blessed Jesus, he has heard my prayer. I am so glad; only I wish I could see my husband. He was sold at the same time into the country, and has gone I don’t know where.”
Coffin went back to the front of the building and took down a gilt star from the front of the mart and, with the assistance of a freedman, he also removed the letters “M-A-R-T” and the lock from the iron gate. “The key of the French Bastile hangs at Mount Vernon,” wrote Coffin, “and as relics of the American prison-house then being broken up, I secured these.”
Coffin next went to the offices of the slave brokers. The cellar dungeons were complete with bolts, chains and manacles for securing captives to the floors. Books, papers, letters and bills of sale were strewn upon the floor. He picked up some papers and read them. Their callous disregard of human life and feeling was appalling. One stated, “I know of five very likely young negroes for sale. They are held at high prices, but I know the owner is compelled to sell next week, and they may be bought low enough so as to pay. Four of the negroes are young men, about twenty years old, and the other a very likely young woman about twenty-two. I have never stripped them, but they seem to be all right.”
Another offered to “buy some of your fancy girls and other negroes, if I can get them at a discount.” A third spoke of a 22-year-old black woman: “She leaves two children, and her owner will not let her have them. She will run away. I pay for her in notes, $650. She is a house woman, handy with the needle, in fact she does nothing but sew and knit, and attend to house business.”
Taking in these horrors, Coffin thought that perhaps some of the Massachusetts abolitionists, like Governor John A. Andrew, Wendell Phillips, or William Lloyd Garrison, might like to speak from the steps of the slave mart. Within a month, such a scene would take place. Coffin sent the steps northward to Massachusetts, and on March 9, 1865, Garrison gave a rousing speech while standing on them at Music Hall in Boston. Garrison and Coffin stood on the stage, which also featured the large gilt letters, “MART” and the lock from the iron door where black women had been examined for sale. The audience raised “thunders of applause” and waved “hundreds of white handkerchiefs for a considerable interval.”
And Garrison took great pride in the proceedings. “I wish you could have seen me mounted on the Charleston slave auction-block, on Thursday evening of last week, in Music Hall, in the presence of a magnificent audience, carried away with enthusiasm, and giving me their long protracted cheers and plaudits!” Garrison wrote to a friend. A few days later the “slave steps” went to Lowell, Massachusetts, where Garrison, Coffin and others delivered speeches celebrating the end of slavery and the Civil War. The audience applauded wildly as they listened to the speakers at the steps.
In the postwar era, slave markets and jails served as signposts of how far the nation had come since the Civil War. In 1888 a group of Ohio state legislators traveled to New Orleans, where they saw the Planters’ House, which still featured the words “Slaves for sale” painted on the outside wall. Now, however, the house served as “the headquarters for colored men in New Orleans.” Seeing these men “now occupying this former slave market, as men and not as chattels, is one of the pleasing sights that cheer us after an absence of thirty-two years from the city,” wrote Jeremiah A. Brown, a black state legislator traveling with the group. Upon visiting the old slave market in St. Augustine, Florida, in 1916, another African-American man similarly reflected on the meaning of this old “relic of slavery” and “the wonderful progress made.” He concluded, “The Lord hath done great things for us, whereof we are glad.”
Jeremiah A. Brown (Wikimedia Commons)
The open-air market at St. Augustine still stands today in the middle of the city’s historic quarter. In the twentieth century it became a focal point for anti-discrimination protests in the city. In 1964, Martin Luther King, Jr., led nonviolent civil rights marches around the building, but violence broke out there between civil rights marchers and white segregationists on other occasions. In 2011, the city erected monuments to the “foot soldiers”—both white and black—who had marched in St. Augustine for racial equality in the 1960s. The juxtaposition of the market with the monuments to the Civil Rights Movement tells a powerful story of change over time in American history.
Several former slave markets now house museums about African-American history. The old slave mart in Charleston, South Carolina, has been interpreting the history of slavery in that city since 1938. More recently, the Northern Virginia chapter of the Urban League established the Freedom House museum at its headquarters in Alexandria—the old slave pen that had become a prison for Confederates during the Civil War. Further west, the slave pen from Mason County, Kentucky, is now on display at the National Underground Railroad Freedom Center in Cincinnati. Historical markers also commemorate the sites of slave markets throughout the nation, reminding the public that human beings were not only bought and sold in the South. In 2015, New York City mayor Bill de Blasio unveiled a marker about the slave trade in Lower Manhattan. And those slave steps from Charleston? According to the South Carolina museum, they are believed to be in a collection in Boston, but their true location is unclear.
Facade of the Old Slave Mart in Charleston, South Carolina (Wikimedia Commons)
The transformation and commemoration of old slave markets into educational institutions and sites of political mobilization serve as powerful reminders of the massive social change that swept through the United States during the Civil War. Four million enslaved human beings became free between 1861 and 1865, forever escaping the threat of future sale. And nearly 200,000 black men donned the blue uniform of the Union so that they, too, could join in the fight for freedom. The old abolitionist William Lloyd Garrison sensed this transformation when he delivered his address at Music Hall in Boston, while standing on the steps of the Charleston slave mart. “What a revolution!” he exulted.
National conventions, once riveting political theater that held America in suspense for days, have been reduced to a made-for-television political promo for the two parties. Since primary elections now routinely determine the candidates, this quadrennial dog-and-pony show offers a ho-hum pageant, in which windy speeches are delivered, party platforms hammered out and often ignored, and delegates don silly hats and hold up handmade signs extolling the virtues of candidates, causes and home states. Once the scene of bare-knuckle politicking and backroom deals, the modern conventions now provide comforting tableaus—full of sound and fury, but mostly signifying nothing.
That is why the once-trumpeted network “gavel-to-gavel” coverage has gone the way of disco and leisure suits.
The convention had essentially become obsolete by the 1972 Democratic Convention in Miami. Following the party reforms of the early 1970s, state primary elections could provide enough delegates to choose the nominee. Senator George McGovern—who had helped write the Democratic Party’s new nominating rules—garnered a majority of Democratic delegates by the time the convention began. (McGovern was then crushed by Nixon in a landslide.) So we may never again have a repeat of 1924, when the Democrats took 17 days and 103 ballots in the longest convention ever to nominate John W. Davis—who was and remains an obscure congressman from West Virginia.
But once upon a time, conventions mattered. They chose the candidates, often with plenty of intrigue and horse-trading in the notorious “smoke-filled rooms” of yesteryear. And for that reason, some memorable conventions have changed the course of history. Here, in chronological order, are the Ten Most Consequential Conventions, also highlighting a few significant convention “Firsts.”
1. 1831 Anti-Masonic Convention—Why start with one of the most obscure third parties in American history? Because they invented nominating conventions. The Anti-Masons, who feared the growing political and financial power of the secret society of Freemasons, formed in upstate New York; among their members was future president Millard Fillmore.
Before the Anti-Masons met in Baltimore in September 1831, candidates for president were chosen in the Congressional caucuses of two major parties—then the Federalists and the Democratic-Republicans (soon to be the Democratic Party). In December 1831, the short-lived National Republican party followed the Anti-Mason lead and met in Baltimore to nominate Henry Clay, the powerful Kentucky congressman. The Democrats followed suit, also in Baltimore, selecting Andrew Jackson, the ultimate victor, in May 1832.
“King Caucus” was dead. The political convention had been born. And the country never looked back.
2. 1856 Republican Convention—The first national convention of the Republican Party marks the beginning of the two-party system as we know it. Meeting in Philadelphia, the new party chose John C. Frémont—the “Pathfinder” who mapped the way West for a generation of pioneers. A popular hero, Frémont also provided the new party with its slogan: “Free Soil, Free Speech, Free Men, Frémont.” The slavery issue had become America’s undeniable fault line, even if most Republicans, including Abraham Lincoln, sought only to end the extension of slavery, not abolish it outright.
Frémont also ignited the first “birther” controversy. Opponents claimed he was born in Canada–and worse, back then, he was Catholic! (Former president Fillmore, onetime Anti-Mason, was nominated that year by the Know-Nothings, another odd third party which opposed immigration and foreigners.)
Image by Library of Congress. The cradle of the G.O.P. The first Republican convention was held at LaFayette Hall, in Pittsburgh, Pennsylvania, on February 22, 1856. (original image)
Image by Library of Congress. Meeting of the Southern seceders from the Democratic Convention at St. Andrew's Hall, Charleston, South Carolina, April 30, 1860. Illus. in: Harper's Weekly, (1860 May 12). (original image)
Image by Frank H. Taylor, illus. in: Harper's Weekly, Library of Congress. The Republican National Convention at Chicago, 1880. (original image)
Image by © CORBIS. Delegates gathered into a large convention hall in Philadelphia for the 1900 Republican National Convention. (original image)
Image by © Bettmann/CORBIS. Kennedy addressing Democratic National Convention on July 14, 1960. (original image)
Image by Leffler, Warren K, Library of Congress. Illinois delegates at the Democratic National Convention of 1968 react to Senator Ribicoff's nominating speech in which he criticized the tactics of the Chicago police against anti-Vietnam War protesters. (original image)
Image by Library of Congress. President Gerald Ford's supporters at the Republican National Convention, Kansas City, Missouri. (original image)
Image by Courtesy of the publisher. Kenneth C. Davis's book, Don’t Know Much About® the American Presidents, will be published on September 18. (original image)
3. 1860 and its Four Conventions—This was the year of not one but four of the most important conventions, producing four candidates—two of them Democrats. In April, the Democrats met in Charleston, South Carolina, but produced no candidate, the first and only time to date a convention has come up empty. Slavery split the party as southern delegates walked out.
In June, northern Democrats met in Baltimore and chose Stephen Douglas, the powerful Illinois senator who had famously debated Abraham Lincoln in the 1858 Illinois Senate race. The disaffected southern Democrats also met in Baltimore and chose Kentucky’s John C. Breckinridge, demanding federal protection of slavery.
In the meantime, the Republicans met in the Wigwam, a huge building in Chicago, and on the third ballot, chose one-term Illinois Representative Abraham Lincoln. Another splinter group, the Constitutional Union Party, chose former Speaker of the House John Bell.
As all four candidates campaigned, the 1860 election went to Lincoln with about 40 percent of the vote. And the headlong race toward secession and Civil War quickly followed.
4. 1880 Republican Convention—The post-Civil War period produced lively conventions but few fireworks as Republicans dominated presidential politics for a generation. But the GOP meeting in Chicago in 1880 was stuck between two battling wings of the party: the “Stalwarts,” who wanted to maintain the “boss system” in which powerful congressmen made the decisions, and the “Half-Breeds,” who sought civil service reform among other changes. After 35 ballots, Civil War veteran and Ohio congressman James A. Garfield emerged as a surprise “dark horse” compromise, with the vice presidential nod going to Chester A. Arthur as a concession to the Stalwarts. A New York lawyer, Arthur had built his career on patronage jobs. Then an assassin’s bullet made Arthur, the “gentleman boss,” the president.
5. 1900 Republican Convention—With the death of Garret Hobart, William McKinley’s first vice president, in November 1899, the GOP was looking for a replacement for the upcoming election. (At the time, there was no Constitutional mechanism for replacing a vice president who died or succeeded to the presidency, a problem resolved in 1967 by the 25th Amendment.) “Under no circumstances could I or would I accept the nomination for the vice-president,” the young governor of New York announced in February 1900. But in June, Theodore Roosevelt changed his tune.
Powerful New York bosses wanted this reform-minded governor out of the way and pushed him onto the McKinley ticket at the Philadelphia convention, where frenzied delegates rallied to the Rough Riding hero of San Juan Hill. “Don’t any of you realize,” warned McKinley advisor Senator Mark Hanna, “that there is only one life between that madman and the Presidency?”
In September 1901, McKinley was assassinated. Theodore Roosevelt became America’s youngest president.
6. 1912 Republican Convention—After Theodore Roosevelt completed his own full term in 1908, he contemplated another run but opted to uphold the two-term precedent. He turned the reins over to William Howard Taft, whose last name was said to stand for “Take Advice From Theodore.”
But following a four-year hiatus, Roosevelt wanted to return to the White House and challenged his successor, winning several primaries but not a majority of delegates. The party regulars remained steadfast to the incumbent Taft, so Roosevelt bolted the Chicago convention, claiming he had been robbed, and soon formed a third party, the Progressive, or “Bull Moose,” Party. The most successful third-party candidate ever, Roosevelt finished second; he and Taft split the Republican vote, leaving an opening for Democrat Woodrow Wilson to win the presidency.
7. 1932 Democratic Convention—No surprise here. As the Great Depression worsened, Democrats were confident that the GOP’s 12-year hold on the White House would end with Herbert Hoover’s defeat. But who would get the nod? New York Governor Franklin D. Roosevelt and former Governor Al Smith, who lost to Hoover in 1928, were rivals. On the fourth ballot, FDR was anointed, aided by Speaker of the House John Nance Garner of Texas, who became his vice president.
FDR signaled a new era in American politics when he became the first candidate to address the convention, held in Chicago. In his acceptance speech, he promised America a “New Deal.”
In 1940, Eleanor Roosevelt became the first First Lady to address a convention, again in Chicago; that gathering was also notable for giving FDR his third consecutive nomination and an unprecedented third term.
8. 1960 Democratic Convention—There was nothing new about television at the Democratic convention in Los Angeles. The first televised convention had been Philadelphia’s Republican gathering in 1940—but a lot more people had television sets 20 years later. And what they saw was America’s first great made-for-television candidate, John F. Kennedy, delivering an acceptance speech that promised a “New Frontier,” echoing FDR’s “New Deal.” The presidential game would never be the same. A few months later, the first televised debates against Republican Richard Nixon cemented TV’s place in the American political landscape.
9. 1968 Democratic Convention—Television also played a huge role when the Democrats met in Chicago. But it was mostly about what was happening outside the hall. The nation watched the spectacle of anti-war protesters in full battle with Chicago policemen. One Democratic senator told the convention there were “Gestapo tactics in the streets of Chicago.” The convention selected Hubert Humphrey, who lost a close race to Richard Nixon. But the violent debacle in Chicago led to the first wave of primary reforms that chipped away at the power of the convention.
This convention also marked the last time that Chicago, which had hosted more conventions than any other city, would welcome a convention until the Democrats returned in 1996 to nominate Bill Clinton for a second term.
10. 1976 Republican Convention—This may have been the last hurrah for the national convention as a meaningful political battlefield. The incumbent president, Gerald Ford, had succeeded to the office after Richard Nixon’s resignation. The only president never elected president or vice president, Ford faced a furious challenge from the right by former California Governor Ronald Reagan. Ford held onto the nomination in Kansas City, but lost the election to Jimmy Carter. And Ronald Reagan was probably thinking, “You ain’t seen nothing yet.”
Kenneth C. Davis is the author of Don’t Know Much About® History and Don’t Know Much About® the American Presidents, which will be published on September 18. His website is www.dontknowmuch.com
© 2012 Kenneth C. Davis
Editor's note: This story originally mistakenly referred to Garfield's assassin, Charles Guiteau, as an anarchist. This was not the case and we regret the error.
America has long been the land of innovation. More than 13,000 years ago, the Clovis people created what many call the “first American invention” – a stone tool used primarily to hunt large game. This spirit of American creativity has persisted through the millennia, through the first American patent granted in 1641 and on to today.
One group of prolific innovators, however, has been largely ignored by history: black inventors born or forced into American slavery. Though U.S. patent law was created with color-blind language to foster innovation, the patent system consistently excluded these inventors from recognition.
As a law professor and a licensed patent attorney, I understand both the importance of protecting inventions and the negative impact of being unable to use the law to do so. But despite patents being largely out of reach to them throughout early U.S. history, both slaves and free African-Americans did invent and innovate.
Why patents matter
In many countries around the world, innovation is fostered through a patent system. Patents give inventors a monopoly over their invention for a limited time period, allowing them, if they wish, to make money through things like sales and licensing.
Patent Office relief on the Herbert C. Hoover Building (Neutrality)
The patent system has long been the heart of America’s innovation policy. As a way to recoup costs, patents provide strong incentives for inventors, who can spend millions of dollars and a significant amount of time developing an invention.
The history of patents in America is older than the U.S. Constitution, with several colonies granting patents years before the Constitution was created. In 1787, however, members of the Constitutional Convention opened the patent process up to people nationwide by drafting what has come to be known as the Patent and Copyright Clause of the Constitution. It allows Congress:
“To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”
This language gives inventors exclusive rights to their inventions. It forms the foundation for today’s nationwide, federal patent system, which no longer allows states to grant patents.
Though the language itself was race-neutral, like many of the rights set forth in the Constitution, the patent system didn’t apply to black Americans born into slavery. Slaves were not considered American citizens, and laws at the time prevented them from applying for or holding property, including patents. In 1857, the U.S. commissioner of patents officially ruled that slave inventions couldn’t be patented.
Slaves’ inventions exploited by owners
During the 17th and 18th centuries, America was experiencing rapid economic growth. Black inventors were major contributors during this era – even though most did not obtain any of the benefits associated with their inventions since they could not receive patent protection.
Slave owners often took credit for their slaves’ inventions. In one well-documented case, a black inventor named Ned invented an effective, innovative cotton scraper. His slave master, Oscar Stewart, attempted to patent the invention. Because Stewart was not the actual inventor, and because the actual inventor was born into slavery, the application was rejected.
Stewart ultimately began selling the cotton scraper without the benefit of patent protection and made a significant amount of money doing so. In his advertisements, he openly touted that the product was “the invention of a Negro slave – thus giving the lie to the abolition cry that slavery dwarfs the mind of the Negro. When did a free Negro ever invent anything?”
Reaping benefits of own inventions
The answer to this question is that black people – both free and enslaved – invented many things during that time period.
The “Boyd Bedstead” (The Conversation)
One such innovator was Henry Boyd, who was born into slavery in Kentucky in 1802. After purchasing his own freedom in 1826, Boyd invented a corded bed created with wooden rails connected to the headboard and footboard.
The “Boyd Bedstead” was so popular that historian Carter G. Woodson profiled his success in the iconic book “The Mis-education of the Negro,” noting that Boyd’s business ultimately employed 25 white and black employees.
Though Boyd had recently purchased his freedom and should have been allowed a patent for his invention, the racist realities of the time apparently led him to believe that he wouldn’t be able to patent his invention. He ultimately decided to partner with a white craftsman, allowing his partner to apply for and receive a patent for the bed.
Some black inventors achieved financial success but no patent protection, direct or indirect. Benjamin Montgomery, who was born into slavery in 1819, invented a steamboat propeller designed for shallow waters in the 1850s. This invention was of particular value because, during that time, steamboats delivered food and other necessities through often-shallow waterways connecting settlements. If the boats got stuck, life-sustaining supplies would be delayed for days or weeks.
Montgomery tried to apply for a patent. The application was rejected due to his status as a slave. Montgomery’s owners tried to take credit for the propeller invention and patent it themselves, but the patent office also rejected their application because they were not the true inventors.
Even without patent protection, Montgomery amassed significant wealth and became one of the wealthiest planters in Mississippi after the Civil War ended. After his father’s death, his son, Isaiah, purchased more than 800 acres of land and founded the town of Mound Bayou, Mississippi.
A legacy of black innovators
The patent system was ostensibly open to free black people. From Thomas Jennings, the first black patent holder, who invented dry cleaning in 1821, to Norbert Rillieux, a free man who invented a revolutionary sugar-refining process in the 1840s, to Elijah McCoy, who obtained 57 patents over his lifetime, those with access to the patent system invented items that still touch the lives of people today.
This legacy extends through the 21st century. Lonnie Johnson generated more than US$1 billion in sales with his Super Soaker water gun invention, which has consistently been among the world’s top 20 best-selling toys each year since 1991. Johnson now owns more than 80 patents and has since developed different green technologies.
Bishop Curry V, a 10-year-old black inventor from Texas, has already applied for a patent for his invention, which he says will stop accidental deaths of children in hot cars.
Black women are also furthering the legacy of black inventors. Lisa Ascolese, known as “The Inventress,” has received multiple patents and founded the Association for Women Inventors and Entrepreneurs. Janet Emerson Bashen became the first black woman to receive a patent for a software invention in 2006. And Dr. Hadiyah Green recently won a $1 million grant related to an invention that may help treat cancer.
True to the legacy of American innovation, today’s black inventors are following in the footsteps of those who came before them. Now patent law doesn’t actively exclude them from protecting their inventions – and fully contributing to American progress.
[Editor's Note: This story originally included a photo we believed to be Thomas Jennings, the first black holder of a patent, but it was not him. We apologize for the error.]
Two months after Ben Franklin helped draft the Declaration of Independence, a surprise visitor walked into his Philadelphia shop. The young man’s curly brown hair cascaded down toward his shoulders, and his English was so broken he switched to French. Thaddeus Kosciuszko, a 30-year-old Pole just off the boat from Europe via the Caribbean, introduced himself and offered to enlist as an officer in the new American nation’s army.
Franklin, curious, quizzed Kosciuszko about his education: a military academy in Warsaw, studies in Paris in civil engineering, including fort building. Franklin asked him for letters of recommendation. Kosciuszko had none.
Instead, the petitioner asked to take a placement exam in engineering and military architecture. Franklin’s bemused answer revealed the inexperience of the Continental Army. “Who would proctor such an exam,” Franklin asked, “when there is no one here who is even familiar with those subjects?”
On August 30, 1776, armed with Franklin’s recommendation and high marks on a geometry exam, Kosciuszko walked into Independence Hall (then the Pennsylvania State House) and introduced himself to the Continental Congress.
In his native Poland, Kosciuszko is known for leading the Kosciuszko Uprising of 1794, a brave insurrection against foreign rule by Russia and Prussia. But that came before the liberty-loving Pole played a key but overlooked role in the American Revolution. Though not nearly as well known as the Marquis de Lafayette, America’s most celebrated foreign ally of the era, Kosciuszko (pronounced cuz-CHOOSE-co), was in many ways his equal. Both volunteered with an idealistic belief in democracy, both had a major impact on a climactic battle in the Revolution, both returned home to play prominent roles in their own country’s history, and both enjoyed the friendship and high esteem of American Founding Fathers. Kosciuszko did something more: he held his American friends to the highest ideals of equality on the issue of slavery.
Kosciuszko was born in 1746 and grew up in a manor house, where 31 peasant families worked for his father. His early education included the democratic ideals of John Locke and ancient Greeks. Trained at Warsaw’s School of Chivalry, he enrolled in Paris’ Royal Academy of Painting and Sculpture, where his real goal was to learn civil engineering and the strategies of Sébastien Le Prestre de Vauban, Europe’s authority on forts and sieges.
Back in Poland, Kosciuszko was hired to tutor Louise Sosnowska, a wealthy lord’s daughter, and fell in love with her. They tried to elope in the fall of 1775 after Lord Sosnowski refused Kosciuszko’s request to marry her and instead arranged a marriage with a prince. According to the story Kosciuszko told various friends, Sosnowski’s guards overtook their carriage on horseback, dragged it to a stop, knocked Kosciuszko unconscious, and took Louise home by force. Thwarted, heartbroken, nearly broke, and, in some accounts, fearing vengeance from Sosnowski, Kosciuszko embarked on his long years as an expatriate. Back in Paris, he heard that the American colonists needed engineers and set sail across the Atlantic in June 1776. Detoured when his ship wrecked off Martinique, he arrived in Philadelphia two months later.
His Paris studies, though incomplete, quickly made him useful to the Americans. John Hancock appointed him a colonel in the Continental Army in October, and Franklin hired him to design and build forts on the Delaware River to help defend Philadelphia from the British navy. Kosciuszko befriended General Horatio Gates, commander of the Continental Army’s northern division, and in May 1777, Gates sent him north to New York to evaluate Fort Ticonderoga’s defenses. There, Kosciuszko and others advised that a nearby hill needed to be fortified with cannons. Superiors ignored his advice, believing it impossible to move cannons up the steep slope. That July, the British, under the command of General John Burgoyne, arrived from Canada with 8,000 men and sent six cannons up the hill, firing into the fort and forcing the Americans to evacuate. A floating log bridge designed by Kosciuszko helped them escape.
Kosciuszko’s greatest contribution to the American Revolution came later that year in the Battle of Saratoga, when the defenses along the Hudson River helped the Continental Army to victory. The British war plan called for troops from Canada and New York City to seize the Hudson Valley and divide the colonies in two. Kosciuszko identified Bemis Heights, a bluff overlooking a bend in the Hudson and near a thick wood, as the spot for Gates’ troops to build defensive barriers, parapets and trenches.
When Burgoyne’s troops arrived in September, they couldn’t penetrate Kosciuszko’s defenses. So they tried an end run through the woods, where Virginia riflemen picked them off and soldiers commanded by Benedict Arnold aggressively charged, killing and wounding 600 redcoats. Two weeks later, Burgoyne tried to attack even farther west, but the Americans surrounded and beat the British. Historians often describe Burgoyne’s surrender as the turning point of the war, since it convinced France’s King Louis XVI to negotiate to enter the war on the American side. Gates and Arnold got most of the credit, which Gates deflected to Kosciuszko. “The great tacticians of the campaign were hills and forests,” Gates wrote to Dr. Benjamin Rush of Philadelphia, “which a young Polish Engineer was skilful enough to select for my encampment.”
Kosciuszko spent the next three years improving the defense of the Hudson River, taking part in the design of Fort Clinton at West Point. Though he bickered about the fort’s design with Louis de la Radière, a French engineer also serving the Continental Army, the Americans valued his skills. George Washington often praised Kosciuszko in his correspondence and unsuccessfully asked Congress to promote him—despite spelling his name 11 different ways in his letters, including Kosiusko, Koshiosko, and Cosieski. In his failed act of treason, Benedict Arnold attempted to sell the British details about West Point’s defenses, designed by Kosciuszko, Radière, and others.
In 1780, Kosciuszko traveled south to serve as chief engineer of the Americans’ southern army in the Carolinas. There, he twice rescued American forces from British advances by directing the crossing of two rivers. His attempt to undermine the defenses of a British fort in South Carolina with trench-digging failed, and in the ensuing battle, he was bayoneted in the buttocks. In 1782, in the war’s waning days, Kosciuszko finally served as a field commander, spying, stealing cattle and skirmishing during the siege of Charleston. After the war, Washington honored Kosciuszko with gifts of two pistols and a sword.
Kosciuszko then sailed back to Poland, hoping that the American Revolution could serve as a model for his own country to resist foreign domination and achieve democratic reforms. There, King Stanislaw II August Poniatowski was trying to rebuild the nation’s strength despite the menacing influence of Russian czarina Catherine the Great, his former lover and patron. Back home, Kosciuszko resumed his friendship with his love, Louise (now married to a prince), and joined the Polish army.
After Poland’s partition by Russia and Prussia in 1793, which overturned a more democratic 1791 constitution and chopped 115,000 square miles off Poland, Kosciuszko led an uprising against both foreign powers. Assuming the title of commander in chief of Poland, he led the rebels in a valiant seven months of battles in 1794. Catherine the Great put a price on his head and her Cossack troops defeated the rebellion that October, stabbing its leader with pikes during the battle. Kosciuszko spent two years in captivity in Russia, until Catherine’s death in 1796. A month later, her son, Paul, who disagreed with Catherine’s belligerent foreign policy, freed him. He returned to the United States in August 1797.
Kosciuszko lived in a boarding house in the capital, Philadelphia, collecting back pay for the war from Congress, and seeing old friends. By then, Americans had splintered into their first partisan conflict, between the Federalists, who admired the British system of government and feared the French Revolution, and the Republicans, who initially admired the French Revolution and feared a Federalist-led government would come to resemble the British monarchy. Kosciuszko took the side of the Francophile Republicans, resenting England’s support of Russia and seeing the Federalists as Anglophile elitists. So he avoided President John Adams, but developed a close friendship with Vice-President Thomas Jefferson.
“General Kosciuszko, I see him often,” Jefferson wrote Gates. “He is as pure a son of liberty as I have ever known, and of that liberty which is to go to all, and not to the few or rich alone.”
Kosciuszko took liberty so seriously that he was disappointed to see friends like Jefferson and Washington own slaves. During the American and Polish revolutions, Kosciuszko had employed black men as his aides-de-camp: Agrippa Hull in America, Jean Lapierre in Poland. When he returned to Europe in May 1798, hoping to organize another war to liberate Poland, Kosciuszko scribbled out a will. It left his American assets – $18,912 in back pay and 500 acres of land in Ohio, his reward for his war service -- for Jefferson to use to purchase the freedom and provide education for enslaved Africans. Jefferson, revising the draft into better legal English, also rewrote the will so that it would allow Jefferson to free some of his slaves with the bequest. The final draft, which Kosciuszko signed, called on “my friend Thomas Jefferson” to use Kosciuszko’s assets “in purchasing negroes from among his own as [well as] any others,” “giving them liberty in my name,” and “giving them an education in trades and otherwise.”
Though Kosciuszko returned to Paris, hoping to fight Russia and Prussia again, he never did. When Napoleon offered to help liberate Poland, Kosciuszko correctly sized him up, intuiting that his offer was disingenuous. (Later, many Poles in Napoleon’s service died in Haiti when they were ordered to put down Toussaint Louverture’s slave revolt.) Kosciuszko spent most of the remainder of his life in Paris, where he befriended Lafayette and celebrated American independence at Fourth of July parties with him.
One month before his 1817 death, Kosciuszko wrote Jefferson, reminding him of the terms of his will. But Jefferson, struggling with age, finances, and inquiries about the estate from heirs in Europe, appeared in federal court in 1819 and asked a judge to appoint another executor of Kosciuszko’s affairs.
Kosciuszko’s will was never implemented. A year after Jefferson’s 1826 death, most of his slaves were sold at auction. A court-appointed executor squandered most of the estate, and in 1852, the U.S. Supreme Court declared the American will invalid, ruling that Kosciuszko had revoked it in an 1816 will. (Kosciuszko’s 1817 letter to Jefferson proves that was not his intent.)
Today, Kosciuszko is remembered with statues in Washington, Boston, Detroit and other cities, many of them the products of Polish-Americans’ efforts to assert their patriotism during the 1920s backlash against immigration. A 92-year-old foundation in his name awards $1 million annually in college scholarships and grants to Poles and Polish-Americans. There’s even a mustard named for him. Yet as Lafayette’s status as a foreign ally of the American Revolution continues to grow, Kosciuszko remains relatively obscure. Perhaps it’s because he mastered the subtle art of military fortifications; war heroes are made by bold offensives, not fort-making.
“I would say his influence is even more significant than Lafayette,” says Alex Storozynski, author of The Peasant Prince, the definitive modern biography of Kosciuszko. Without Kosciuszko’s contributions to the Battle of Saratoga, Storozynski argues, the Americans might have lost, and France might never have entered the war on the American side.
Larrie Ferriero, whose new book Brothers at Arms examines France and Spain’s role in the Revolution, says that though Kosciuszko’s role in America’s founding is less decisive than Lafayette’s, the abolitionist sentiment behind his will makes him more important as an early voice of conscience.
“He was fighting next to people who believed they were fighting for independence, but not doing it for all,” Ferriero says. “Even before Americans themselves fully came to that understanding, he saw it.”
Hundreds of years ago, a small group of Polynesians rowed their wooden outrigger canoes across vast stretches of open sea, navigating by the evening stars and the day's ocean swells. When and why these people left their native land remains a mystery. But what is clear is that they made a small, uninhabited island with rolling hills and a lush carpet of palm trees their new home, eventually naming their 63 square miles of paradise Rapa Nui—now popularly known as Easter Island.
On this outpost nearly 2,300 miles west of South America and 1,100 miles from the nearest island, the newcomers chiseled away at volcanic stone, carving moai, monolithic statues built to honor their ancestors. They moved the mammoth blocks of stone—on average 13 feet tall and 14 tons—to different ceremonial structures around the island, a feat that required several days and many men.
Eventually the giant palms that the Rapanui depended on dwindled. Many trees had been cut down to make room for agriculture; others had been burned for fire and used to transport statues across the island. The treeless terrain eroded nutrient-rich soil, and, with little wood to use for daily activities, the people turned to grass. "You have to be pretty desperate to take to burning grass," says John Flenley, who with Paul Bahn co-authored The Enigmas of Easter Island. By the time Dutch explorers—the first Europeans to reach the remote island—arrived on Easter day in 1722, the land was nearly barren.
Although these events are generally accepted by scientists, the date of the Polynesians' arrival on the island and why their civilization ultimately collapsed is still being debated. Many experts maintain that the settlers landed around 800 A.D. They believe the culture thrived for hundreds of years, breaking up into settlements and living off the fruitful land. According to this theory, the population grew to several thousand, freeing some of the labor force to work on the moai. But as the trees disappeared and people began to starve, warfare broke out among the tribes.
In his book Collapse, Jared Diamond refers to the Rapanui's environmental degradation as "ecocide" and points to the civilization's demise as a model of what can happen if human appetites go unchecked.
But new findings by archaeologist Terry Hunt of the University of Hawai'i may indicate a different version of events. In 2000, Hunt, archaeologist Carl Lipo of California State University, Long Beach, and their students began excavations at Anakena, a white sandy beach on the island's northern shore. The researchers believed Anakena would have been an attractive area for the Rapanui to land, and therefore may be one of the earliest settlement sites. In the top several layers of their excavation pit, the researchers found clear evidence of human presence: charcoal, tools—even bones, some of which had come from rats. Underneath they found soil that seemed absent of human contact. This point of first human interaction, they figured, would tell them when the first Rapanui had arrived on the island.
Hunt sent the samples from the dig to a lab for radiocarbon dating, expecting to receive a date around 800 A.D., in keeping with what other archaeologists had found. Instead, the samples dated to 1200 A.D. This would mean the Rapanui arrived four centuries later than expected. The deforestation would have happened much faster than originally assumed, and the human impact on the environment was fast and immediate.
Hunt suspected that humans alone could not destroy the forests this quickly. In the sand’s layers, he found a potential culprit—a plethora of rat bones. Scientists have long known that when humans colonized the island, so too did the Polynesian rat, which hitched a ride either as a stowaway or as a source of food. However they got to Easter Island, the rodents found an unlimited food supply in the lush palm trees, believes Hunt, who bases this assertion on an abundance of rat-gnawed palm seeds.
Image by Terry L. Hunt. Two statues sit on the slopes of the Rano Raraku statue quarry. Nearly half of Easter Island's statues remain near this area.
Image by Terry L. Hunt. Hanga Roa Village is one of Easter Island's main settlements.
Image by Terry L. Hunt. The moai at Ahu Tongariki form the island's largest ceremonial platform. A tidal wave in 1960 sent 15 of these statues inland. Some 30 years later, archaeologists finally restored the site.
Image by Terry L. Hunt. Students with the University of Hawai'i Rapa Nui Archaeological Field School inspect the stratification at Anakena Beach in 2005.
Image by Terry L. Hunt. Petroglyphs still remain at the Orongo Ceremonial Village.
Image by Terry L. Hunt. Polynesians chiseled the moai (above, on the lower slopes of the Rano Raraku statue quarry) out of volcanic rock. Carved in honor of ancestors, the statues stood on average 13 feet tall and weighed 14 tons.
Image by Terry L. Hunt. At Anakena Beach, several moai, perched on a four-foot tall stone wall called an "ahu," stand with their backs to the sea.
Image by Terry L. Hunt. Participants in the University of Hawai'i Rapa Nui Archaeological Field School fly a kite at Anakena Beach. The moai of Ahu Nau Nau provide the backdrop.
Under these conditions, he says, "Rats would reach a population of a few million within a couple of years." From there, time would take its toll. "Rats would have an initial impact, eating all of the seeds. With no new regeneration, as the trees die, deforestation can proceed slowly," he says, adding that people cutting down trees and burning them would have only added to the process. Eventually, the degeneration of trees, according to his theory, led to the downfall of the rats and eventually of the humans. The demise of the island, says Hunt, "was a synergy of impacts. But I think it is more rat than we think."
Hunt's findings caused a stir among Easter Island scientists. John Flenley, a pollen analyst at Massey University in New Zealand, accepts that the numerous rats would have had some impact on the island. "Whether they could have deforested the place," he says, "I'm not sure."
Flenley has taken core samples from several lakebeds formed in the island's volcanic craters. In these cores, he has found evidence of charcoal. "Certainly there was burning going on. Sometimes there was a lot of charcoal," he says. "I'm inclined to think that the people burning the vegetation was more destructive [than the rats]."
Adding to the civilization's demise, European explorers brought with them Western diseases like syphilis and smallpox. "I think that the collapse happened shortly before European discovery of the island," Flenley says. "But it could be that the collapse was more of a general affair than we think, and the Europeans had an effect on finishing it off."
Flenley, who initially surveyed Easter Island in 1977, was one of the first scientists to analyze the island's pollen—a key indicator of foresting. The island's volcanic craters, which once housed small lakes, were ideal sites for his research. "The sediment was undisturbed. Each layer was put down on top of the layer before," says Flenley, referring to core samples from one crater's lakebeds. "It's like a history book. You just have to learn to read the pages." The samples showed an abundance of pollen, indicating that the island had once been heavily forested. The pollen rate then dropped off dramatically. "When I dated the deforestation at that site, it came starting at about 800 A.D. and finishing at this particular site as early as 1000 A.D.," a finding in line with other radiocarbon dates on the island. Since this was one of the first settlement sites, Flenley says, it makes sense that deforestation would have occurred even earlier than it did on other parts of the island.
This crater, Flenley believes, would have been one of the only sources of freshwater on the island, and therefore one of the first places the Polynesians would have settled. "It wasn't only a site of freshwater, it was also a very sheltered crater," he says. "It would have been possible to grow tropical crops." Anakena, the beach where Hunt did his research, would have been a good place to keep their canoes and to go fishing, but not a good place to live. Hunt, Flenley says, "has definitely shown a minimum age for people being there, but the actual arrival of people could have been somewhat earlier."
Other scientists who work on the island also remain skeptical of Hunt's later colonization date of 1200 A.D. Jo Anne Van Tilburg, founder of the Easter Island Statue Project and a scientist at the University of California, Los Angeles, is one of the island's leading archaeologists and has studied the moai for nearly 30 years. "It's not logical that they were constructing megalithic sites within a few years of arrival on the island," she says. Van Tilburg and her colleagues have surveyed all 887 of the island's statues. "By 1200 A.D., they were certainly building platforms," she says, referring to the stone walls on which the islanders perched the moai, "and others have described crop intensification at about the same time. It's hard for me to be convinced that his series of excavations can overturn all of this information."
Despite these questions, Hunt remains confident in his findings. Many scientists, he says, "get a date, tell a story, invest a lot in it, and then don't want to give it up. They had a very good environmental message."
Hunt, Lipo, and their students continue to do excavation work on the island. They have recently moved on from Anakena to do work on the northwest coast. They also plan to date the earliest rat-gnawed seeds. "We keep getting a little more evidence," says Hunt, who has published his findings in Science. "Everything looks very consistent."
Scientists may never find a conclusive answer to when the Polynesians colonized the island and why the civilization collapsed so quickly. Whether an invasive species of rodent or humans devastated the environment, Easter Island remains a cautionary tale for the world.
Whitney Dangerfield, a freelance writer in Washington, D.C. whose work has appeared in National Geographic and the Washington Post, is a regular contributor to Smithsonian.com.
In the late 1700s, a large percentage of Europeans feared the tomato.
A nickname for the fruit was the “poison apple” because it was thought that aristocrats got sick and died after eating them, but the truth of the matter was that wealthy Europeans used pewter plates, which were high in lead content. Because tomatoes are so high in acidity, when placed on this particular tableware, the fruit would leach lead from the plate, resulting in many deaths from lead poisoning. No one made this connection between plate and poison at the time; the tomato was picked as the culprit.
Around 1880, with the invention of the pizza in Naples, the tomato grew widespread in popularity in Europe. But there’s a little more to the story behind the misunderstood fruit’s stint of unpopularity in England and America, as Andrew F. Smith details in his The Tomato in America: Early History, Culture, and Cookery. The tomato didn’t get blamed just for what was really lead poisoning. Before the fruit made its way to the table in North America, it was classified as a deadly nightshade, a poisonous family of Solanaceae plants that contain toxins called tropane alkaloids.
One of the earliest-known European references to the food was made by the Italian herbalist Pietro Andrae Matthioli, who first classified the “golden apple” as a nightshade and a mandrake—a category of food known as an aphrodisiac. The mandrake has a history that dates back to the Old Testament; it is referenced twice as the Hebrew word dudaim, which roughly translates to “love apple.” (In Genesis, the mandrake is used as a love potion.) Matthioli’s classification of the tomato as a mandrake had later ramifications. Like similar fruits and vegetables in the Solanaceae family—the eggplant, for example—the tomato garnered a shady reputation for being both poisonous and a source of temptation. (Editor’s note: This sentence has been edited to clarify that it was the mandrake, not the tomato, that is believed to have been referenced in the Old Testament.)
But what really did the tomato in, according to Smith’s research, was John Gerard’s publication of Herball in 1597, which drew heavily from the agricultural works of Dodoens and l’Ecluse (1553). According to Smith, most of the information (which was inaccurate to begin with) was plagiarized by Gerard, a barber-surgeon who misspelled words like Lycoperticum in the collection’s rushed final product. Smith quotes Gerard:
Gerard considered ‘the whole plant’ to be ‘of ranke and stinking savour,’ and the fruit to be corrupt—a judgment he left to every man’s censure. In fact, while the leaves and stalk of the tomato plant are toxic, the fruit is not.
Gerard’s opinion of the tomato, though based on a fallacy, prevailed in Britain and in the British North American colonies for over 200 years.
Around this time it was also believed that tomatoes were best eaten in hotter countries, like the fruit’s place of origin in Mesoamerica. The Aztecs ate the tomato, which they called the “tomatl” (its name in Nahuatl), as early as 700 A.D., but it wasn’t grown in Britain until the 1590s. In the early 16th century, Spanish conquistadors returning from expeditions in Mexico and other parts of Mesoamerica are thought to have first introduced the seeds to southern Europe. Some researchers credit Cortez with bringing the seeds to Europe in 1519 for ornamental purposes. Up until the late 1800s in cooler climates, tomatoes were grown solely as garden ornamentals rather than for eating. Smith continues:
John Parkinson, apothecary to King James I and botanist for King Charles I, proclaimed that while love apples were eaten by the people in the hot countries to ‘coole and quench the heate and thirst of the hot stomaches,’ British gardeners grew them only for curiosity and for the beauty of the fruit.
The first known reference to the tomato in the British North American colonies was published in herbalist William Salmon’s Botanologia, printed in 1710, which places the tomato in the Carolinas. The tomato became an acceptable edible fruit in many regions, but the United States wasn’t so united in the 18th and early 19th centuries. Word of the tomato spread slowly, along with plenty of myths and questions from farmers. Many knew how to grow the fruit, but not how to cook it.
By 1822, hundreds of tomato recipes appeared in local periodicals and newspapers, but fears and rumors of the plant’s potential poison lingered. By the 1830s, when the love apple was cultivated in New York, a new concern emerged. The Green Tomato Worm, measuring three to four inches in length with a horn sticking out of its back, began taking over tomato patches across the state. According to The Illustrated Annual Register of Rural Affairs and Cultivator Almanac (1867), edited by J.J. Thomas, it was believed that a mere brush with such a worm could result in death. The description is chilling:
The tomato in all of our gardens is infested with a very large thick-bodied green worm, with oblique white streaks along its sides, and a curved thorn-like horn at the end of its back.
According to Smith’s research, even Ralph Waldo Emerson feared the presence of the tomato-loving worms: They were “an object of much terror, it being currently regarded as poisonous and imparting a poisonous quality to the fruit if it should chance to crawl upon it.”
Around the same time period, a man by the name of Dr. Fuller in New York was quoted in The Syracuse Standard, saying he had found a five-inch tomato worm in his garden. He captured the worm in a bottle and said it was “poisonous as a rattlesnake” when it would throw spittle at its prey. According to Fuller’s account, once the skin came into contact with the spittle, it swelled immediately. A few hours later, the victim would seize up and die. It was a “new enemy to human existence,” he said. Luckily, an entomologist by the name of Benjamin Walsh argued that the dreaded tomato worm wouldn’t hurt a flea. Thomas continues:
Now that we have become familiarized with it these fears have all vanished, and we have become quite indifferent towards this creature, knowing it to be merely an ugly-looking worm which eats some of the leaves of the tomato…
The fear, it seems, had subsided. With the rise of agricultural societies, farmers began investigating the tomato’s use and experimented with different varieties. According to Smith, back in the 1850s the name tomato was so highly regarded that it was used to sell other plants at market. By 1897, innovator Joseph Campbell figured out that tomatoes keep well when canned and popularized condensed tomato soup.
Today, tomatoes are consumed around the world in countless varieties: heirlooms, romas, cherry tomatoes—to name a few. More than one and a half billion tons of tomatoes are produced commercially every year. In 2009, the United States alone produced 3.32 billion pounds of fresh-market tomatoes. But some of the plant’s night-shady past seems to have followed the tomato in pop culture. In the 1978 musical drama/comedy “Attack of the Killer Tomatoes,” giant red blobs of the fruit terrorize the country. “The nation is in chaos. Can nothing stop this tomato onslaught?”
The year is 1895. Louise Gibson and her bicycle, Sylvia, are at the forefront of the bicycle craze sweeping America in the late 19th century. Louise has just ridden in from the recently established railroad town of Takoma Park to visit the nation's capital and the Smithsonian Institution on the National Mall for the day. To Louise, the bicycle boom represents new opportunities for women like herself.
Our very own Wheelwoman character "Louise" was created for the Patrick F. Taylor Foundation Object Project, a new interactive learning space that features a section on the history of cycling in America, and was made possible through the generous support of the Smithsonian Women's Committee. Louise is portrayed by actor Julie Garner, who has previously played another role at the museum: Mary Pickersgill, the real-life seamstress who sewed The Star-Spangled Banner. Louise rides around the museum on an antique 1898 Reliance Model D bicycle, speaking with visitors about her independent journey to Washington, D.C., and the freedom she feels after learning how to ride the bicycle.
The Wheelwoman character does not represent a specific historical figure; rather, she is an everyday 1890s woman who has learned how to ride a bicycle and is going out on her own for the first time. Unlike the high wheel bicycle, which had one large wheel in the front and a small wheel in the back, the groundbreaking safety bicycle she rides has two wheels of equal size and a drop frame that accommodates a woman's full skirt. Before the 1890s, the bicycle was a dangerous toy for aristocrats and adventurers. With the invention of the safety, everyone could ride, leading Susan B. Anthony to christen it the "freedom machine."
When developing the Wheelwoman character, we hoped to demonstrate to visitors how important the bicycle was in fostering a greater sense of independence for women. Our character's story begins when Louise and her husband, a railway man, move to Takoma Park from an urban area, leaving Louise in need of transportation. Not willing to keep a horse, her husband buys her a bicycle, which she uses to travel freely wherever she pleases. By the late 19th century, the bicycle became a symbol of this newfound freedom and innovation, one that allowed women like Louise to leave their homes and demonstrate their independence and self-sufficiency.
At the age of 35 and with two children, Louise embodies the women who embraced self-discovery and wondered about a woman's place in the modern world. A new sense of freedom and time to devote to oneself allowed women to start contemplating issues such as temperance, child labor, and woman suffrage. If a woman like Louise could ride a machine all on her own across the country, the possibilities were endless for what else she could do!
We drew inspiration for Louise from her contemporaries, such as Annie Londonderry, who in January 1895 had just begun her cross-country bicycle tour; Frances Willard, a leader of the Women’s Christian Temperance Union and author of A Wheel Within a Wheel: How I Learned to Ride the Bicycle; and Maria E. Ward, who wrote guides such as Bicycling for Ladies. These women taught not only etiquette and technique for riding a bicycle, but also the health benefits of riding and the importance of knowing how the machine works. Their texts formed the basis of Louise's bicycling knowledge and helped us create the Wheelwoman character.
Louise subscribed to the principles of the League of American Wheelmen (L.A.W.), the premiere national bicycle club, which established the Good Roads Movement. As bicycle clubs began to emerge throughout the country, small outings turned into large social tours, closing the distance between cities and towns. In order to travel these distances, country roads needed to be smoothed, and the Good Roads Movement hoped to push the government to improve infrastructure in rural areas. After researching the importance of the bicycle in literally paving the way for the automobile and forming the beginning of our modern road system, we realized that the roads would have had a big impact on Louise's six-mile journey from Takoma Park to Washington, D.C. Louise discusses the importance of the Good Roads Movement with visitors, since she was not able to enjoy the smooth paved surfaces we have today!
The program development team (which included educators, curators, the actor who would become Louise, and interns like me) hoped to allude to a special historic meaning when searching for a name for the Wheelwoman. Her surname "Gibson" references the artistic personification of the ideal woman of her time, the "Gibson Girl." The Gibson Girl was popularized by illustrator Charles Dana Gibson and was meant to portray the ideal of feminine beauty. She was a tall and slender, yet curvy, woman dressed in the latest fashions, who was athletic, confident, independent, and focused on self-fulfillment. She could enter the workforce or attend college to find a mate, but she was not the type to participate in radical movements such as woman suffrage.
To some degree, the Gibson Girl provides a contrast to the New Woman, a feminist ideal popularized by American writer Henry James. This modern persona represented the educated, independent career woman who advocated for issues such as women's rights. Louise is a composite of these two types. She is a woman who is still concerned with traditional feminine responsibilities but is also exploring new possible roles for women through her bicycle.
Louise still wears a corset, but it is a sport corset with elastic, designed for comfort during exercise; she is even considering purchasing bloomers to replace her long skirts. The Rational Dress Movement was formed in the late 1800s to reform the Victorian-era dress in favor of more practical and comfortable clothes for women. As women began to engage in physical activities such as bicycling, the large and heavy skirts of the Victorian era became increasingly impractical. For now, Louise is equipped with a skirt lifter to separate her skirts and a skirt guard to make sure that her clothes do not get caught in the wheel.
After hours of role-playing and modifying the character with our actor, the Wheelwoman made her debut on the floors of the museum at the July opening of our new Innovation Wing. Look for her on your next visit. You can also explore bicycling history in our online exhibition and learn about the conservation of a very fancy bike.
Brianna Mayer was a summer 2015 intern with the Office of Public Programs and Strategic Initiatives. She is studying history and anthropology at the University of Michigan.
There was a time when standing desks were a curiosity—used by eccentrics like Hemingway, Dickens and Kierkegaard, but seldom seen inside a regular office setting.
That's changed, in large part due to research showing that the cumulative impact of sitting all day for years is associated with a range of health problems, from obesity to diabetes to cancer. Because the average office worker spends 5 hours and 41 minutes sitting each day at his or her desk, some describe the problem with a pithy new phrase that's undeniably catchy, if somewhat exaggerated: "Sitting is the new smoking."
Much of this research has been spurred by James Levine, an endocrinologist at the Mayo Clinic. "The way we live now is to sit all day, occasionally punctuated by a walk from the parking lot to the office," he recently said during a phone interview, speaking as he strolled around his living room. "The default has become to sit. We need the default to be standing."
All this might sound suspiciously like the latest health fad, and nothing more. But a growing body of research—conducted both by Levine and other scientists—confirms that a sedentary lifestyle appears to be detrimental in the long-term.
The solution, they say, isn't to sit for six hours at work and then head to the gym afterward, because evidence suggests that the negative effects of extended sitting can't be countered by brief bouts of strenuous exercise. The answer is incorporating standing, pacing and other forms of activity into your normal day—and standing at your desk for part of it is the easiest way of doing so. Here's a list of some of the benefits scientists have found so far.
Reduced Risk of Obesity
Levine's research began as an investigation into an age-old health question: why some people gain weight and others don't. He and colleagues recruited a group of office workers who engaged in little routine exercise, put them all on an identical diet that contained about 1,000 more calories than they'd been consuming previously, and forbade them from changing their exercise habits. But despite the standardized diet and exercise regimens, some participants gained weight, while others stayed slim.
Eventually, using underwear stitched with sensors that measure every subtle movement, the researchers discovered the secret: the participants who weren't gaining weight were up and walking around, on average, 2.25 more hours per day, even though all of them worked at (sitting) desks, and no one was going to the gym. "During all of our days, there are opportunities to move around substantially more," Levine says, mentioning things as mundane as walking to a colleague's office rather than emailing them, or taking the stairs instead of the elevator.
Failing to take advantage of these constant movement opportunities, it turns out, is closely associated with obesity. And research suggests that our conventional exercise strategy—sitting all day at work, then hitting the gym or going for a run—"makes scarcely more sense than the notion that you could counter a pack-a-day smoking habit by jogging," as James Vlahos puts it in the New York Times. The key to reducing the risk of obesity is consistent, moderate levels of movement throughout the day.
Scientists are still investigating why this might be the case. The reduced amount of calories burned while sitting (a 2013 study found that standers burn, on average, 50 more calories per hour) is clearly involved, but there may also be metabolic changes at play, such as the body's cells becoming less responsive to insulin, or sedentary muscles releasing lower levels of the enzyme lipoprotein lipase.
Of course, all this specifically points to the danger of sitting too much, which is not exactly the same as the benefit of standing. But Levine believes the two are closely intertwined.
"Step one is get up. Step two is learn to get up more often. Step three is, once you're up, move," he says. "And what we've discovered is that once you're up, you do tend to move." Steps one and two, then, are the most important parts—and a desk that encourages you to stand at least some of the time is one of the most convenient means of doing so.
Reduced Risk of Type 2 Diabetes and Other Metabolic Problems
The detrimental health impacts of sitting—and the benefits of standing—appear to go beyond simple obesity. Some of the same studies by Levine and others have found that sitting for extended periods of time is correlated with reduced effectiveness in regulating levels of glucose in the bloodstream, part of a condition known as metabolic syndrome that dramatically increases the chance of type 2 diabetes.
A 2008 study, for instance, found that people who sat for longer periods during their day had significantly higher levels of fasting blood glucose, indicating that their cells became less responsive to insulin, with the hormone failing to trigger the absorption of glucose from the blood. A 2013 study [PDF] came to similar findings, concluding that for people already at risk of developing type 2 diabetes, the amount of time spent sitting could be a more important risk factor than the amount of time spent vigorously exercising.
Reduced Risk of Cardiovascular Disease
Scientific evidence that sitting is bad for the cardiovascular system goes all the way back to the 1950s, when British researchers compared rates of heart disease in London bus drivers (who sit) and bus conductors (who stand) and found that the former group experienced far more heart attacks and other problems than the latter.
Since then, scientists have found that adults who spend two or more hours per day sitting have a 125 percent increased risk of health problems related to cardiovascular disease, including chest pain and heart attacks. Other work has found that men who spend more than five hours per day sitting outside of work and get limited exercise are at twice the risk of heart failure as those who exercise often and sit fewer than two hours daily outside of the office. Even when the researchers controlled for the amount of exercise, excessive sitters were still 34 percent more likely to develop heart failure than those who were standing or moving.
Reduced Risk of Cancer
A handful of studies have suggested that extended periods of sitting can be linked with a higher risk of many forms of cancer. Breast and colon cancer appear to be most influenced by physical activity (or lack thereof): a 2011 study found that prolonged sitting could be responsible for as many as 49,000 cases of breast cancer and 43,000 cases of colon cancer annually in the U.S. But the same research found that significant numbers of cases of lung cancer (37,200), prostate cancer (30,600), endometrial cancer (12,000) and ovarian cancer (1,800) could also be related to excessive sitting.
The underlying mechanism by which sitting increases cancer risk is still unclear, but scientists have found a number of biomarkers, such as C-reactive protein, that are present in higher levels in people who sit for long periods of time. These may be tied to the development of cancer.
Lower Long-Term Mortality Risk
Because of the reduced chance of obesity, diabetes, cardiovascular disease and cancer, a number of studies have found strong correlations between the amount of time a person spends sitting and his or her chance of dying within a given period of time.
A 2010 Australian study, for instance, found that for each extra hour participants spent sitting daily, their overall risk of dying during the study period (seven years) increased by 11 percent. A 2012 study found that if the average American reduced his or her sitting time to three hours per day, life expectancy would climb by two years.
These projects control for other factors such as diet and exercise—indicating that sitting, in isolation, can lead to a variety of health problems and increase the overall risk of death, even if you try to get exercise while you're not sitting and eat a healthy diet. And though there are many situations besides the office in which we sit for extended periods (driving and watching TV, for instance, are at the top of the list), spending some of your time at work at a standing desk is one of the most direct solutions.
If you're going to start doing so, most experts recommend splitting your time between standing and sitting, because standing all day can lead to back, knee or foot problems. The easiest ways of accomplishing this are using a desk that can be raised upward or keeping a tall chair that you can pull up to your desk when you do need to sit. It's also important to ease into it, they say, by standing for just a few hours a day at first while your body becomes used to the strain, and to move around a bit by shifting your position, pacing, or even dancing as you work.
When scientists first suggested in the early 1980s that volcanic activity had wiped out most dinosaurs 66 million years ago, Paul Olsen wasn’t having any of it. He wasn’t even convinced there had been a mass extinction.
Olsen, a paleontologist and geologist at Columbia University, eventually came to accept the idea of mass extinctions. He also acknowledged that volcanoes played a role in certain extinction events. But even then, he wasn’t entirely convinced about the cause of these extinctions.
The leading hypothesis holds that massive eruptions blasted carbon dioxide into Earth's atmosphere, cranking up global temperatures within a relatively short period of time. Such a sudden change, the theory goes, would have killed off terrestrial species like the huge ancestors of crocodiles and large tropical amphibians, opening the door for dinosaurs to evolve.
Olsen, who discovered his first dinosaur footprint in the 1960s as a teenager in New Jersey and still uses the state’s geological formations to inform his work, wondered whether something else may have been at work—such as sudden cooling events after some of these eruptions, rather than warming.
It's an idea that's been around in some form for decades, but the 63-year-old Olsen is the first to strongly argue that sulfate aerosols in the atmosphere could have been responsible for the cooling. A sudden chill would explain the selective nature of the extinctions, which affected some groups strongly and others not at all.
His willingness to revive an old debate and look at it from a fresh angle has earned Olsen a reputation as an important voice in the field of earth sciences.

Olsen thinks that the wavy band of rock near the bottom of this image—composed of tangled, cylindrical strands that could be tree roots or other debris—may be the remains of a sudden mass extinction. It could line up with a well-dated giant meteorite that hit what is now southern Canada 215.5 million years ago. (Columbia University Earth Institute)
From the moment Olsen abandoned dreams of becoming a marine biologist as a scrawny teenager and fell in love with dinosaurs, he courted controversy and earned a reputation for making breathtaking discoveries.
Olsen’s first breakthrough came as a young teen, when he, his friend Tony Lessa and several other dinosaur enthusiasts discovered thousands of fossilized footprints at a quarry near his house in Roseland, New Jersey. They were the remnants of carnivorous dinosaurs and tiny crocodile relatives dating back to the Jurassic, 201 million years ago. The teens' successful campaign to have the quarry designated a dinosaur park inspired a 1970 Life magazine article.
Olsen even sent a letter to President Richard Nixon urging his support for the park, and followed that with a cast of a dinosaur footprint. "It is a miracle that nature has given us this gift, this relic of the ages, so near to our culturally starved metropolitan area," the young Olsen wrote in a later letter to Nixon. "A great find like this cannot go unprotected and it must be preserved for all humanity to see." (Olsen eventually received a response from the deputy director of the Interior Department's Mesozoic Fossil Sites Division.)
Olsen shook things up again as an undergraduate student at Yale. In this case, he and Peter Galton published a 1977 paper in Science that questioned whether the end-Triassic mass extinction had even happened, based on what he called incorrect dating of the fossils. Subsequent fossil discoveries showed that Olsen was wrong, which he readily acknowledged.
In the 1980s, Olsen demonstrated that Earth’s orbital cycles—the orientation of our planet on its axis and the shape of its path around the sun—influenced tropical climates and caused lakes to come and go as far back as 200 million years ago. It was a controversial idea at the time, and even today has its doubters.
More recently, Olsen and colleagues dated the Central Atlantic Magmatic Province—large igneous rock deposits that were the result of massive volcanic eruptions—to 201 million years ago. That meant the eruptions played a role in the end-Triassic mass extinction. They published their results in a 2013 study in the journal Science.
But it is his latest project—reexamining the causes of mass extinctions—that could be his most controversial yet.
Researchers generally recognize five mass extinction events over the past 500 million years, Olsen explains. We may be in the middle of a sixth event right now, which started tens of thousands of years ago with the extinction of animals like the mastodon.
Determining the causes and timing of these extinctions is incredibly difficult. Regardless of cause, however, these events can pave the way for whole new groups of organisms. In fact, the disappearance of nearly all synapsids—a group that includes mammals and their relatives—in the Triassic may have allowed for the evolution of dinosaurs about 230 million years ago.
The accepted theory for the end-Triassic extinction states that gases from enormous volcanic eruptions led to a spike in carbon dioxide levels, which in turn increased global temperatures by as much as 11 degrees F. Terrestrial species, like the huge ancestors of crocodiles and large tropical amphibians, would have perished because they couldn't adapt to the new climate.
The remains of the Triassic are "interesting because [they give] us a different kind of world to look at, to try and understand how earth's systems work," says Olsen. "But it's not so different that it's beyond the boundaries of what we see going on today." (Columbia University Earth Institute)
However, this explanation never sat well with Olsen. “If we are back in the time of the Triassic and the dominant life forms on land are these crocodile relatives, why would a three degree [Celsius] increase in temperature do anything?” asks Olsen, sitting in his office on the campus of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York.
Some inland tropical areas would have become lethally hot, Olsen says, surrounded by fossils, dinosaur memorabilia and a Nixon commendation on the wall. But the mountains and coastlines would still be bearable. "It’s hard to imagine the temperature increase would be a big deal,” he says.
Three years ago, Olsen began looking at the fossil record of species that survived other mass extinctions, like the Cretaceous-Tertiary (K-T) event 66 million years ago and the Permian event roughly 250 million years ago. What he saw suggested a completely different story: Earth's climate during and after these volcanic eruptions or asteroid impacts turned briefly but intensely cold, not hot, as volcanic ash and droplets of sulfate aerosols obscured the sun.
Scientists generally agree that the reduced sunlight would have disrupted photosynthesis, which plants need to survive. During the K-T extinction event, plant losses would have left many herbivorous dinosaurs, and their predators, with little to eat.
In this case, size became the determining factor in whether a species went extinct. Large animals need more food than smaller animals to survive, Olsen explains.
With his fluffy white mustache and hearty laugh, Olsen is hard to miss at paleontology meetings. He's not afraid to insert himself into mass extinction debates, but is quick to point out that he counts even his most ardent critics among his friends.
Supporters praise his creativity, persistence and willingness to consider the big unanswered questions in paleontology that, if solved, would alter our understanding of important events like mass extinctions.
“Among academics, you see two types. You see the parachutists and you see the truffle hunters, and Paul is a parachutist,” says Hans Sues, chairman of the department of paleobiology at the Smithsonian National Museum of Natural History. “The parachutist is the one who helps build the big frame in which other people operate.” Sues and Olsen, who have pieced together fossils in the past, have known each other for 30 years.
Olsen's latest project—the volcanic winter theory—has him looking for ancient ash deposits from the United States to Morocco to the United Kingdom. He hopes to find the fingerprints of certain sulfur isotopes and metals that could indicate that sulfur-rich super-eruptions occurred. They would also pinpoint the timing of the eruptions relative to the extinctions, Olsen explains.
Evidence of ancient ice would also bolster his case. For those clues, Olsen must look to mud flats laid down in what would have been the tropics—some of which are in areas in New Jersey, where he searched for dinosaurs as a teenager. “If you find these little crystals on mud flats, you know it froze in the tropics," Olsen says.
Sues is among those who believe Olsen’s hypothesis has merit, partly because Olsen is focused on the sulfate aerosols from eruptions. In the recent past, massive volcanic eruptions—like Mount Pinatubo in 1991—belched the sulfate aerosols into the atmosphere, which reduced global temperatures. The trick is finding evidence of extreme cold in rocks, Sues says.
But other scientists, like Spencer G. Lucas, curator of paleontology at the New Mexico Museum of Natural History and Science, have their doubts.
As someone who has long sparred with Olsen on mass extinctions, Lucas agrees that volcanism played a role in extinctions and isn’t ruling out cooling as the cause. But chemical evidence of that will be difficult, if not impossible, to find in the rocks or preserved ash, he says.
Searching for those clues isn't a waste of time though, says Lucas. He wants someone who cares about the problem, like Olsen, to collect the evidence and make a convincing case for the Earth either cooling or warming during these extinctions.
“Paul is sort of the Don Quixote of extinctions,” Lucas says. “He is tilting at a windmill in my mind. But I’m glad he’s doing it because he knows he has got the background, the smarts and the opportunity. If anybody can figure this out, he will.”
In the best tradition of skulduggery, claim and counterclaim, foosball (or table football), that simple game of bouncing little wooden soccer players back and forth on springy metal bars across something that looks like a mini pool table, has the roots of its conception mired in confusion.
Some say that in a sort of spontaneous combustion of ideas, the game erupted in various parts of Europe simultaneously sometime during the 1880s or ’90s as a parlor game. Others say that it was the brainchild of Lucien Rosengart, a dabbler in the inventive and engineering arts who had various patents, including ones for railway parts, bicycle parts, the seat belt and a rocket that allowed artillery shells to be exploded while airborne. Rosengart claimed to have come up with the game toward the end of the 1930s to keep his grandchildren entertained during the winter. Eventually his children’s pastime appeared in cafés throughout France, where the miniature players wore red, white and blue to remind everyone that this was the result of the inventiveness of the superior French mind.
There again, though, Alexandre de Finisterre has many followers, who claim that he came up with the idea while bored in a hospital in the Basque region of Spain, recovering from injuries sustained in a bombing raid during the Spanish Civil War. He talked a local carpenter, Francisco Javier Altuna, into building the first table, inspired by the concept of table tennis. Alexandre patented his design for fútbolin in 1937, the story goes, but the paperwork was lost during a storm when he had to do a runner to France after the fascist coup d'état of General Franco. (Finisterre would also become a notable footnote in history as one of the first airplane hijackers ever.)
While it’s debatable whether Señor Finisterre actually did invent table football, the indisputable fact is the first-ever patent for a game using little men on poles was granted in Britain, to Harold Searles Thornton, an indefatigable Tottenham Hotspur supporter, on November 1, 1923. His uncle, Louis P. Thornton, a resident of Portland, Oregon, visited Harold and brought the idea back to the United States and patented it in 1927. But Louis had little success with table football; the patent expired and the game descended into obscurity, no one ever realising the dizzying heights it would scale decades later.
The world would have been a much quieter place if the game had stayed as just a children’s plaything, but it spread like a prairie fire. The first league was established in 1950 by the Belgians, and in 1976, the European Table Soccer Union was formed. Although how they could call it a ‘union’ when the tables were different sizes, the figures had different shapes, none of the handles were the same design and even the balls were made of different compositions is a valid question. Not a unified item amongst them.
The game still doesn’t even have a single set of rules – or one name. You’ve got lagirt in Turkey, jouer au baby-foot in France, csocso in Hungary, cadureguel-schulchan in Israel, plain old table football in the UK, and a world encyclopedia of ridiculous names elsewhere around the globe. The American “foosball” (where a player is called a “fooser”) borrowed its name from the German version, “fußball”, whence it arrived in the United States. (And, really, you can’t not love a game where they have a table with two teams made up only of Barbie dolls, or that is played in tournaments with such wonderful names as the 10th Annual $12,000 Bart O’Hearn Celebration Foosball Tournament, held in Austin, Texas, in 2009.)
Foosball re-arrived on American shores thanks to Lawrence Patterson, who was stationed in West Germany with the U.S. military in the early 1960s. Seeing that table football was very popular in Europe, Patterson seized the opportunity and contracted a manufacturer in Bavaria to construct a machine to his specification to export to the US. The first table landed on American soil in 1962, and Patterson immediately trademarked the name “Foosball” in America and Canada, giving the name “Foosball Match” to his table.
Patterson originally marketed his machines through the “coin” industry, where they would be used mainly as arcade games. Foosball became outrageously popular, and by the late ’60s, Patterson was selling franchises, which allowed partners to buy the machines and pay a monthly fee to be guaranteed a specific geographical area where only they could place them in bars and other locations. Patterson sold his Foosball Match table through full-page ads in such prestigious national publications as Life, Esquire and the Wall Street Journal, where they would appear alongside other booming franchise-based businesses such as Kentucky Fried Chicken. But it wasn’t until 1970 that the U.S. had its own home-grown table, when two Bobs, Hayes and Furr, got together to design and build the first all-American-made foosball table.
From the perspective of the second decade of the third millennium, with ever more sophisticated video games, digital technology and plasma televisions, it’s difficult to imagine the impact that foosball had on the American psyche. During the 1970s, the game became a national phenomenon.
Sports Illustrated and “60 Minutes” covered tournaments where avid and addicted players, both amateur and professional, traveled the length and breadth of America following big bucks prizes, with the occasional Porsche or Corvette thrown in as an added incentive. One of the biggest was the Quarter-Million Dollar Professional Foosball Tour, created by bar owner and foosball enthusiast E. Lee Peppard of Missoula, Montana. Peppard promoted his own brand of table, the Tournament Soccer Table, and hosted events in 32 cities nationwide with prizes of up to $20,000. The International Tournament Soccer Championships (ITSC), with a final held on Labor Day weekend in Denver, reached the peak of prize money in 1978, with $1 million as the glimmering star for America’s top professionals to reach out for.
The crash of American foosball was even more rapid than its rise. Pac-Man, that snappy little cartoon character, along with other early arcade games, was instrumental in the demise of the foosball phenomenon. The estimated 1,000 tables a month that were selling around the end of the ’70s crashed to 100, and in 1981, the ITSC filed for bankruptcy. But the game didn’t die altogether; in 2003, the U.S. became part of the International Table Soccer Federation, which hosts the Multi-Table World Championships each January in Nantes, France.
But it’s still nice to know that even in a globalized world of ever more uniformity, table football, foosball, csocso, lagirt or whatever you want to call it still has no absolutely fixed idea of what really does constitute the core of the game. The American/Texas Style is called “Hard Court” and is known for its speed and power style of play. It combines a hard man with a hard rolling ball and a hard, flat surface. The European/French Style, “Clay Court,” is exactly the opposite of the American style. It features heavy (non-balanced) men, and a very light and soft cork ball. Add to that a soft linoleum surface and you have a feel best described as sticky. In the middle is European/German Style, “Grass Court,” characterized by its “enhanced ball control achieved by softening of components that make up the important man/ball/surface interaction.” And even the World Championships use five different styles of table, with another 11 distinct styles being used in various other international competitions.
Until recently this dilettante approach to the tables and rulebooks also applied to the competitions. Up until a few years ago, Punta Umbría in Huelva, Spain, hosted the World Table Football Cup Championship in August each year. Well, sort of. It was played on a Spanish-style table and, according to Kathy Brainard, co-author with Johnny Loft of The Complete Book of Foosball and past president of the United States Table Soccer Federation, “If the tournament is run on a Spanish-made table and has the best players from wherever that table can be found, then it could honestly be called the World Championship of Foosball, on that specific table.” A bit of diplomatic looking down the nose there.
Brainard went on to say that the real championship, called the World Championship of Table Soccer, was played in Dallas on a U.S.-made table and offered $130,000 in prize money. Although, admittedly, that was before 2003, at which time the American associations had to accept the ignominy of being part of a truly international World Championship, and not simply be able to hold their own table football version of the baseball World Series.
In the general roly-poly of life, table football is mainly something that people play for fun in a smoky bar—at least they did before cigarettes were banned.
While British “foosers” might not be able to look forward to winning such large prizes as American players, they still take the game seriously. Oxford University is one of the top table football venues in England, with many highly regarded players on the national scene. Thirty college teams and one pub team play regularly on Garlando brand tables against other top pub and university sides.
Dave Trease, captain of Catz I (St. Catherine’s College, Oxford), says his position as captain hangs on the fact that he has the only “brush shot” in the university.
“A brush shot is where you have the ball stationary and then you have to flick it very hard at an angle. To be honest, I think it’s more luck than anything, but it looks good when it works.” And he admits that his skills on the Garlando don’t travel.
“I’m rubbish on anything else! I’ve found something I’m good at, where I can have a laugh and not take it all too seriously. And you don’t get any table football hooligans either, although you’ve got to keep an eye on people greasing the ball or jamming the table.”
Ruth Eastwood, captain of Catz II, beat all her female opponents (all five of them anyway) to win the women’s event, ranking her fourth nationally. But having won the tournament, does she see big contracts being offered?
“I don’t think it’s likely, particularly when you take into account that my prize money was only £15 and the prizes for the whole competition were only £300. I don’t think we’re in the same league as the World Championships, but at least I can say I was women’s champion, even if there were only five other women!”
It's probably stretching the imagination just that bit too far to think that table football will ever become an Olympic sport, but they probably thought the same about beach volleyball at one time. Sadly, the small figures that populate the field during playing time won't be able to collect the medals themselves. That will have to be left to the flick-wristed humans who control their every move.
Gordon Goody is the type of gentleman criminal celebrated by George Clooney’s Ocean’s trilogy. In the early 1960s, Goody was a dashing, well-dressed, seasoned thief who knew how to manipulate authority. At the height of his criminal game, he helped to plan and execute a 15-man heist that resulted in the largest cash theft in international history. Scotland Yard’s ensuing investigation turned the thieves into celebrities for a British public stuck in a post-war recession funk. Authorities apprehended Goody and his team members, but they failed to uncover one important identity: that of the operation’s mastermind, a postal service insider. Nicknamed “The Ulsterman” because of his Irish accent, the informant has gone unnamed for 51 years.
“It was a caper, an absolute caper,” says Chris Long, the director of the upcoming documentary A Tale of Two Thieves. In the film, Gordon Goody, now 84 and living in Spain, reconstructs the crime. He is the only one of three living gang members to know “The Ulsterman’s” name. At the end of the film, Goody confirms this identity – but he does so with hesitation and aplomb, aware that his affirmation betrays a gentleman’s agreement honored for five decades.
At 3 a.m. on Thursday, August 8, 1963, a British mail train heading from Glasgow to London slowed for a red signal near the village of Cheddington, about 36 miles northwest of its destination. When co-engineer David Whitby left the lead car to investigate the delay, he saw that an old leather glove covered the light on the signal gantry. Someone had wired it to a cluster of 6-volt batteries and a hand lamp that could activate a light change.
An arm grabbed Whitby from behind.
“If you shout, I will kill you,” a voice said.
Several men wearing knit masks accompanied Whitby onto the conductor’s car, where head engineer Jack Mills put up a fight. An assailant’s crowbar knocked him to the ground. The criminals then detached the first two of the 12 cars on the train, instructing Mills, whose head bled heavily, to drive half a mile further down the track. In the ten cars left behind, 75 postal employees worked, unaware of any problem but a delay.
The bandits handcuffed Whitby and Mills together on the ground.
“For God’s sake,” one told the bound engineers, “don’t speak, because there are some right bastards here.”
In the second car, four postal workers guarded over £2 million in small notes. Because of a bank holiday weekend in Scotland, consumer demand had resulted in a record amount of cash flow; this train carried older bills that were headed out of circulation and into the furnace. Besides the unarmed guards, the only security precaution separating the criminals from the money was a sealed door, accessible only from the inside. The thieves hacked through it with iron tools. Overwhelming the postal workers, they threw 120 mail sacks down an embankment where two Land Rovers and an old military truck awaited.
Fifteen minutes after stopping the train, 15 thieves had escaped with £2.6 million ($7 million then, over $40 million today).
Image by © Bettmann/CORBIS. The train after the initial police investigation in Cheddington, Buckinghamshire.
Image by Bettmann/CORBIS. Detectives at Cheddington Station inspect one of the cars of the traveling post office.
Image by Bettmann/CORBIS. Interior of one of the train's ransacked mail cars.
Image by © Bettmann/CORBIS. Leatherslade Farm served as a hideout for the bandits after the robbery, as evidenced by the empty mailbags and getaway vehicles found by Scotland Yard on the premises.
Image by Gary Ede/Corbis. Seven of the Great Train Robbers in 1979. From left: Buster Edwards, Tom Wisbey, Jim White, Bruce Reynolds, Roger Cordrey, Charlie Wilson, and Jim Hussey.
Image by © Lee Thomas/Demotix/Corbis. Members of the Hells Angels led the procession for Ronnie Biggs's funeral on January 3, 2014.
Within the hour, a guard from the back of the train scouted the delay and rushed to the closest station with news of foul play. Alarms rang throughout Cheddington. The police spent a day canvassing farms and houses before contacting Scotland Yard. The metropolitan bureau searched for suspects through a criminal index of files that categorized 4.5 million felons by their crimes, methodologies and physical characteristics. It also dispatched to Cheddington its “Flying Squad,” a team of elite robbery investigators familiar with the criminal underground. Papers reported that in the city and its northern suburbs, “carloads of detectives combed streets and houses,” focusing on the homes of those “named by underworld informants” and also on “the girlfriends of London crooks.”
The New York Times called the crime a “British Western” and compared it to the exploits of the Jesse James and the Dalton Brothers gangs. British papers criticized the absence of a national police force, saying that a lack of communication between departments fostered an easier getaway for the lawbreakers. Journalists also balked at the lack of postal security, and suggested that the postal service put armed guards on mail trains.
“The last thing we want is shooting matches on British railways,” said the Postmaster General.
The police knew that the crime required the assistance of an insider with a detailed working knowledge of postal and train operations: someone who would have anticipated the lack of security measures, the amount of money, the location of the car carrying the money, and the right place to stop the train.
The postal service had recently added alarms to a few of its mail cars, but these particular carriages weren’t in service during the robbery. Detective Superintendent G. E. McArthur said the robbers would have known this. “We are fighting here a gang that has obviously been well organized.”
All 15 of the robbers would be arrested, but the insider would remain free. For his role in planning the robbery, the Ulsterman received a cut (the thieves split the majority of the money equally) and remained anonymous to all but three people for decades. Only one of those three is still alive.
Director Chris Long says that Gordon Goody has a “1950s view of crime” that makes talking to him “like warming your hands by a fire.” Goody describes himself at the start of the film as “just an ordinary thief.” He recounts the details of his criminal past – including his mistakes – with a grandfatherly matter-of-factness. “Characters like him don’t exist anymore,” says Long. “You’re looking at walking history.” While his fellow train gang members Bruce Reynolds and Ronnie Biggs later looked to profit from their criminal histories by writing autobiographies, Gordon Goody moved to Spain to live a quiet life and “shunned the public,” in Long’s words.
The more the producers worked with Goody, the more they trusted his information. But they also recognized that their documentary centered on a con artist’s narrative. Simple research could verify most of Goody’s details, but not the Ulsterman’s real name; it was so common in Ireland that Long and Howley hired two private investigators to search through post office archives and the histories of hundreds of Irishmen who shared a similar age and name.
Scotland Yard reached a breakthrough in their case on August 13, 1963, when a herdsman told police to investigate Leatherslade Farm, a property about 20 miles away from the crime. The man had grown suspicious over increased traffic around the farmhouse. When police arrived, they found 20 empty mailbags on the ground near a 3-foot hole and a shovel. The getaway vehicles were covered nearby. Inside the house, food filled kitchen shelves. The robbers had wiped away many fingerprints, but police lifted some from a Monopoly game board and a ketchup bottle. One week later, police apprehended a florist named Roger Cordrey in Bournemouth. Over the next two weeks, tips led to the arrests of Cordrey’s accomplices.
By January of 1964, authorities had enough evidence to try 12 of the criminals. Justice Edmund Davies charged the all-male jury to ignore the notoriety that the robbers had garnered in the press.
“Let us clear out of the way any romantic notions of daredevilry,” he said. “This is nothing less than a sordid crime of violence inspired by vast greed.”
On March 26, the jury convicted the men on charges ranging from robbery and conspiracy to obstruction of justice. The judge delivered his sentence a few weeks later. “It would be an affront if you were to be at liberty in the near future to enjoy these ill-gotten gains,” he said. Eleven of the 12 received harsh sentences of 20 to 30 years. The prisoners immediately started the appeals process.
Within five years of the crime, authorities had incarcerated the three men who had evaded arrest during the initial investigation – Bruce Reynolds, Ronald “Buster” Edwards, and James White. But by the time the last of these fugitives arrived in jail, two of the robbers had escaped. Police had anticipated one of these prison breaks. They had considered Charles F. Wilson, a bookmaker dubbed “the silent man,” a security risk after learning that the London underground had formed “an escape committee” to free him. In August of 1964, Wilson’s associates helped him break out of the Winson Green Prison near Birmingham and flee to Canada, where Scotland Yard located and re-arrested him four years later.
Ronnie Biggs became the criminal face of the operation after escaping from a London prison in 1965. One July night, he made his getaway by scaling a wall and jumping into a hole cut into the top of a furniture truck. Biggs fled to Paris, then Australia before arriving in Brazil in the early 1970s. He lived there until 2001, when he returned to Britain to seek medical treatment for poor health. Authorities arrested him, but after Biggs caught pneumonia and suffered strokes in jail, he received “compassionate leave” in 2009. He died at the age of 84 this past December.
Police recovered approximately 10% of the money, although by 1971, when decimalisation led to a change in UK currency, most of the cash that the robbers had stolen was no longer legal tender.
Last year marked the 50th anniversary of the Great Train Robbery, inviting the type of publicity that Gordon Goody chose to spend his life avoiding. One reason that he shares his story now, says Chris Long, is that he has become “sick of hearing preposterous things about the crime.” In addition to recounting his narrative, Goody agreed to give the filmmakers the Ulsterman’s name because he assumed the informant had died; the man had appeared middle-aged in 1963.
At the end of A Tale of Two Thieves, Goody is presented with the Ulsterman’s picture and basic information about his life (he died years ago). Asked if he is looking at the mastermind of the Great Train Robbery, Goody stares at the photo, winces, and shifts in his seat. There is a look of disbelief on his face, as if he is trying to understand how he himself got caught in the act.
Goody shakes his head. “I’ve lived with the guy very vaguely in my head for 50 years.”
The face doesn’t look unfamiliar. Gordon Goody’s struggle to confirm the identity reveals his discomfort with the concrete evidence before him, and perhaps with his effort to reconcile his commitment to the project with a promise he made to himself decades ago. Goody could either keep “The Ulsterman” in the abstract as a legendary disappearing act, or give him a name, and thereby identify a one-time accomplice.
He says yes.
A Dozen Indigenous Craftsmen From Peru Will Weave Grass into a 60-Foot Suspension Bridge in Washington, D.C.
As much as maize, or mountains, or llamas, woven bridges defined pre-Columbian Peru. Braided over raging rivers and yawning chasms, these skeins of grass helped connect the spectacular geography of the Inca empire: its plains and high peaks, rainforests and beaches, and—most importantly—its dozens of distinct human cultures.
Now a traditional Inca suspension bridge will connect Washington, DC to the Andean highlands. As part of the Smithsonian’s upcoming Folklife Festival, which focuses on Peru this year, a dozen indigenous craftsmen will weave together grass ropes into a 60-foot span. It will be strung on the National Mall parallel to 4th Street Southwest, between Jefferson and Madison Avenues, where it will hang from several decorated containers (in lieu of vertical cliff faces) and hover—at its ends—16 feet above the ground. It should be able to hold the weight of ten people.
“One of the major achievements of the Andean world was the ability to connect itself,” says Roger Valencia, a festival research coordinator. “How better to symbolize ideological, cultural and stylistic integration than by building a bridge?” The ropes are now ready: the mountain grass was harvested last November, before the Peruvian rainy season, then braided into dozens of bales of rope and finally airlifted from Peru to America.
The finished bridge will become part of the National Museum of the American Indian’s collections. One section will be featured in a new exhibition, “The Great Inka Road: Engineering an Empire,” while another length of bridge will travel to the museum’s New York City location in time for the fall 2016 opening of the children’s imagiNATIONS Activity Center.
For native Peruvians, traditional bridge-building is an important tie not only to new people and places, but also to the pre-colonial past.
“I learned it from my father and grandfather,” says Victoriano Arisapana, who is believed to be among the last living bridge masters, or chakacamayocs, and who will be supervising the folklife project. “I lead by birthright and as the heir to that knowledge.”
His own son is now learning the techniques from him, the latest in an unbroken bloodline of chakacamayocs that Arisapana says stretches all the way back to the Incas, like a hand-twisted rope.
The Incas—who, at the height of their influence in the 15th century, ruled much of what is now Peru, Ecuador, Argentina, Bolivia and Chile as well as parts of Colombia—were the only pre-industrial American culture to invent long-span suspension bridges. (Worldwide, a few other peoples, in similarly rugged regions like the Himalayas, developed suspension bridges of their own, but Europeans didn’t have the know-how until several centuries after the Inca empire fell.) The Inca likely rigged up 200 or more of the bridges across gorges and other previously impassable barriers, according to analysis by John Ochsendorf, an architecture scholar at the Massachusetts Institute of Technology. Though anchored by permanent stone abutments, the bridges themselves had to be replaced roughly every year. Some of them were at least 150 feet long and could reportedly accommodate men marching three abreast.
Ochsendorf believes that Inca bridges may have first been developed in the 13th century. The engineering breakthrough coincided with—and likely enabled—the rise of the empire, which maintained a sprawling road network (the subject of “The Great Inka Road” exhibition) that united previously isolated cultures under Inca rule.
The bridges allowed for many Inca military victories: Inca commanders would send their strongest swimmers across a river so building could begin from both sides. But the exquisite structures apparently so dazzled some neighboring tribes that they became vassals without any bloodshed. “Many tribes are reduced voluntarily to submission by the fame of the bridge,” wrote Garcilaso de la Vega, a 16th-century historian of Inca culture. “The marvelous new work seemed only possible for men come down from heaven.”
The invading Spanish were similarly amazed. The Andean spans were far longer than anything that they’d seen in 16th-century Spain, where the longest bridge stretched only 95 feet. The Incas’ building materials must have seemed almost miraculous. European bridge-building techniques derived from stone-based Roman technology, a far cry from these floating webs of grass. No wonder some of the bravest conquistadors were said to have inched across on hands and knees.
“The use of lightweight materials in tension to create long-span structures represented a new technology to the Spanish,” Ochsendorf writes, “and it was the exact opposite of the 16th-century European concept of a bridge.”
Ultimately, the bridges—and indeed, the whole meticulously maintained Inca roadway system—facilitated the Spanish conquest, especially when it became clear that the bridges were strong enough to bear the weight of horses and even cannons.
Despite the Inca bridges’ utility, the Spanish were determined to introduce more familiar technology to the Andes landscape. (Perhaps they weren’t keen to swap out each woven overpass every year or two, as the Inca carefully did.) In the late 1500s, the foreigners embarked on an effort to replace the grass suspension bridge over Peru’s Apurimac River with a European-style stone compression bridge, which depended on a masonry arch. But “to construct a timber arch of sufficient strength to support the weight of stone over the rushing river was simply beyond the capacity of colonial Peru,” writes Ochsendorf. “The bridge construction was abandoned after great loss of life and money.”
The colonists wouldn’t be able to match the Inca technology until the Industrial Revolution two hundred years later, with the invention of steel cable bridges. Some of the traditional grass bridges remained in use until the 19th century.
An Inca rope bridge still hangs over a canyon near the highlands community of Huinchiri, Peru, more than a four-hour drive from the capital city of Cusco. It is one of just a handful remaining. This is the bridge that Arisapana’s family has overseen for five centuries, and it’s similar to the one to be built on the National Mall.
“The bridge is known worldwide,” Arisapana says. “Twenty people could cross it together carrying a large bundle.”
The old bridge stands near a modern long-span steel bridge, built in the late 1960s and typical of the sort that eventually made the Inca bridges obsolete. Unlike a handmade grass bridge, which must be rewoven every year as the elements take their toll and last year’s masterpiece is discarded, it requires no annual replacement.
Yet Arisapana says his community will build a new grass bridge every June.
“For us, the bridge is the soul and spirit of our Inca (ancestors), that touches and caresses us like the wind,” he says. “If we stop preserving it, it would be like if we die. We wouldn’t be anything. Therefore, we cannot allow our bridge to disappear.”
Raw materials probably varied according to the local flora across the Inca empire, but Arisapana’s community still uses ichu, a spiky mountain grass with blades about two feet long. The grass is harvested just before the wet season, when the fiber is strongest. It is kept damp to prevent breakage and pounded with stone, then braided into ropes of varying thickness. Some of these, for the longest Inca bridges, would have been “as thick as a man’s body,” Garcilaso claims in his history. According to Ochsendorf’s testing, individual cables can support thousands of pounds. Sometimes, to test the ropes on site, workers will see whether one can hoist a hog-tied llama, Valencia says.
To do everything by himself would take Arisapana several years, but divided among community members the work takes only a few days.
“We have a general meeting beforehand,” he says, “and I remind (the people) of each person, family and community’s obligations, but they already know what their obligations are.” The bridge-raising becomes a time for celebration. “The young people, the children, and even the grandchildren are very happy…they are the ones that talk and tell the story of how the bridge was built by our Inca ancestors, and then they sing and play.”
The old Inca bridge style differs from more recent versions. In modern suspension bridges, the walkway hangs from cables. In Inca bridges, however, the main cables are the walkway. These large ropes are called duros and they are made of three grass braids each. The handrails are called makis. Shorter vertical ropes called sirphas join the cables to the railings and the floor of the bridge consists of durable branches.
The bridge on the National Mall will be made of hundreds of ropes of varying thicknesses. The math involved is formidable.
“It’s like calculus,” Valencia says. “It’s knowing how many ropes, and the thickness of the ropes, and just how much they will support. They test the strength of the rope, every piece has to go through quality control, and everything is handmade.”
Even for those fully confident in the math, crossing an Inca rope bridge requires a certain courage. “You feel it swaying in the wind,” Valencia recalls, “and then all of a sudden you get used to it.”
“Our bridge…can call the wind whenever he wants to,” Arisapana says. Traditionally those who cross the dizzying Andes spans first make an offering, of coca, corn, or “sullu,” a llama fetus. “When we don’t comply…or maybe we forget to demonstrate our reverence, (the bridge) punishes us,” he says. “We could suffer an accident. That’s why, to do something on the bridge or to cross on it, first one must pay respects and offer it a plate.”
Even tourists from other countries visiting his remote village know not to approach the bridge empty-handed. “We ask our visitors to ask permission and give an offering…at least a coca—that way they can cross and come back without any problems.”
Visitors will not be permitted to cross the Folklife Festival’s bridge, but perhaps an offering can’t hurt.
The bridge builders—who are accustomed to receiving curious visitors back home, but who have never traveled to the United States—are pleased that their ancient craft is carrying them to new lands.
“All of them are very excited,” says Valencia. “They are going to a different world, but their own symbol of continuation and tradition, the bridge, is the link that connects us.
“The bridge is an instrument, a textile, a trail, and it’s all about where it takes you.”
The annual Smithsonian Folklife Festival featuring Perú: Pachamama will be held on June 24–28 and July 1–5 on the National Mall in Washington, D.C. “The Great Inka Road: Engineering an Empire” will be on view at the Smithsonian's National Museum of the American Indian through June 1, 2018.
An 1896 women's safety bicycle, currently on view in the Patrick F. Taylor Foundation Object Project, has proven to be one of the museum's more glamorous but mysterious objects. I spoke with conservator Diana Galante, who cleaned and restored the intricately ornamented bicycle over the course of 200 hours. She uncovered some interesting physical clues about the object that may lead to future research. This blog is the first part of a series that explores the bicycle's history; the second installment will be a Q&A with road transportation curator Roger White, who is delving into the story behind its owner, how she may have used it, and how it came to be at the museum.
Diana, before we get started on this really interesting (and shiny) bicycle, could you tell me a little about what your job as a conservator entails?
Conservation includes the preservation, restoration, and technical study of artifacts. I am an objects conservator, which means that I treat three-dimensional art and artifacts. Objects conservation can cover pretty much anything that's not paper, textiles, or paintings, but there's overlap between the specialties.
I work on metals, organic materials, plastic, ceramics, glass, stone, and mixed media. I have studied art history, fine arts, history and chemistry, and trained through graduate study, self-driven exploration, and apprenticeships.
So, this bicycle that's currently on display in the Taylor Foundation Object Project—what were your first impressions when it came into the lab?
We had several people in the Objects Lab working on exhibits for the new innovation wing that opened on July 1, and we rotated choosing which objects we wanted to work on. I saw the bicycle come in and said, "I want that one!" Even though it needed a lot of treatment, I could tell it was going to be an artifact that really packed a punch.
Based on initial research, we know this is an 1896 women's safety bicycle that was manufactured by Columbia with decorations added by Tiffany & Co. It was owned by a woman named Mary Noble Wiley of Montgomery, Alabama. What sort of background information did you have before you started working on the bicycle?
I talked to the curator responsible for this object, Roger White, in advance. The bicycle was donated to the museum in 1950 by the son of Mrs. Wiley. Recently, a letter that the son wrote in 1930 came to light, and the curator shared it with me. Mrs. Wiley's son had written to Tiffany & Co. asking them for more information about the bicycle. For us, that letter helped summarize what he knew—or thought he knew—about the bicycle, and the questions that remained. The curator focuses on researching the documented and oral history of the bicycle, while I look to the physical object for clues. Those pieces of information work together, and Roger and I learned from each other during this project.
What does your observation involve? What are the initial steps?
First, as with every conservation treatment, I document what I know about an artifact through observation, research, and scientific analysis. I photograph the artifact and write a condition report that details the current states of the structure and surface.
At first glance, I could tell the bicycle had corrosion as well as dust and grime. When it arrived in the lab, it had been in storage for many years and it had some condition issues that detracted from its luxurious composition. The gilt silver had been polished regularly during its years in use, and over time the gilding had been partially abraded to reveal the silver below. Silver reacts with sulfur and moisture in the air to tarnish, and this covered some of the gilt surface. It's a process that is not unexpected, especially near the rubber tires that contain sulfur. The rubber had distorted and cracked over time, the result of a self-degradation process that is practically inevitable.
It was clear to me that this object was supposed to be bright and shiny—I mean, it was so elaborately embellished by Tiffany. It was like the Rolls Royce of bicycles, meant to distinguish its owner from all the other women on their run-of-the-mill bicycles. There are intricate gilt sterling silver plaques, ivory handles, bird's eye maple wheel rims, and diamonds and emeralds in Mrs. Wiley's gold monogram on the front. I love the beautifully woven twine over the chain guard and from the back fender to the wheel hub, safety mechanisms to prevent the rider's skirt from catching.
During my exploration of the object, I used an X-ray fluorescence spectrometer to do elemental analysis. It's nondestructive; you hold the instrument to the surface and a spectrum is produced. Like a fingerprint, there are distinct peaks in the spectrum for each element. So, through that process, I determined that the bicycle frame is nickel-plated steel and the decorative elements were cast in silver that is covered in a thin layer of gold.
How did you come up with a plan for treating the bicycle?
As a conservator, I always think about the fact that after something is treated, the artifact is changed. Most of the time it's for the better, but there's always a chance that the outcome won't be what you hoped or expected. First, I did a general cleaning to remove dust and dirt and aged wax. Beneath that, there was still corrosion—which I use to describe any sort of oxidation to the metal, including silver tarnish. I polished a test area for the curator to see what it would look like when I reduced the corrosion, using gentle abrasives. I always start out with a cleaning agent that I predict will do the least amount of work, instead of starting with a powerful chemical that can rush through the process with little control. I showed the curator a little flower that I test-treated and talked to him about what I anticipated for the outcome for the bicycle as a whole. We realized it actually could look really spectacular.
I'm intrigued by the decorations and the level of detail, and so is pretty much everyone who sees the bicycle! What did you learn about them?
There are different motifs at different locations on the bicycle. There are rosettes and other organic Art Nouveau-inspired motifs repeated throughout the frame, and then the handlebars have a very different motif—dogwood flowers with leaves. After the bicycle was polished, the gilded sections were coated with a resin to protect them from getting tarnished while on exhibition.
Where did the bicycle go next, after the conservation treatment?
It was sent to the museum photographer for its glamour shots, to show the object in its best light for publication or use on the web. I also photographed it, but in a way that's more like your middle school yearbook photo. Your acne is out there, and I need to see that.
I documented every step of the treatment to record the "before" and "after" and see how it has changed. Then it went to the mount maker who created a structure to suspend the bicycle above the floor so the aged rubber wheels wouldn't have more strain.
What's something you want visitors to think about while looking at the bicycle?
The way this piece was initially made, it would have been like a diamond necklace. It would completely dazzle, with every point of the gilding reflecting light. If the owner went out on a sunny day, you saw that bicycle. You have to have a bit of imagination about that now that it is confined to its exhibit case. Something that I love about historical objects in this museum is that you can think about actual people using them. You just have to think, she rode this, probably wearing a full skirt and fancy hat. She was stylin'.
Caitlin Kearney is a new media assistant for the Taylor Foundation Object Project. She is a student in the Museum Studies program at The George Washington University. Previously, she has blogged about the magic scrapbook also on display.
Image by Special Collections, National Agricultural Library. State poster, Pennsylvania, 1917. (original image)
Image by Special Collections, National Agricultural Library. State poster, Kansas 1917. (original image)
Image by Special Collections, National Agricultural Library. State poster, Connecticut, 1917. (original image)
Image by Special Collections, National Agricultural Library. State poster, Texas, 1917. (original image)
Image by Special Collections, National Agricultural Library. State poster, Louisiana, c. 1917. (original image)
Image by Special Collections, National Agricultural Library. Bureau of Education poster, 1917. (original image)
Image by Special Collections, National Agricultural Library. U.S. Food Administration poster, 1917. (original image)
Image by Special Collections, National Agricultural Library. U.S. Office of War Information poster featuring art by Norman Rockwell, 1943. (original image)
Image by Special Collections, National Agricultural Library. U.S. Office of War Information Poster, c. 1944. (original image)
Image by Special Collections, National Agricultural Library. Posters from both World War I and II aimed messages about homefront food conservation at women. Left: National Food Emergency Food Garden Commission poster, c.1917. Right: U.S. Office of War Information Poster, 1943. (original image)
Image by Special Collections, National Agricultural Library. U.S. Women’s Land Army recruitment poster, 1944. (original image)
Image by Special Collections, National Agricultural Library. Posters from World War I (left) were often stern and text-heavy, while by World War II they had begun to reflect the more colorful, upbeat style of commercial advertising. Left: Pennsylvania poster, c. 1917. Right: U.S. Office of War Information poster, 1944. (original image)
Cory Bernat is the creator of an intriguing online exhibit of American food posters related to World Wars I and II, culled from the National Agricultural Library's collection. Blogger Amanda Bensen recently spoke with her about the project.
What kind of messages about food was the government sending to the American public through these posters?
Bernat: Actually, as a professor pointed out to me, most of them are not really about food—they're about behavior modification. Both times, with both wars, the government needed the public to modify their behavior for the national good. (And today, that’s exactly what Michelle Obama is trying to get people to do: change their behavior to curb childhood obesity.) As the Food Administration's publications director put it to state officials back in 1917: “All you gentlemen have to do is induce the American people to change their ways of living!” He’s saying it with irony, of course, because that's a very hard task.
Talk about what some of the specific posters mean. Any favorites?
I have a preference in general for the World War I posters because they're just more informative. Look at the one called "Bread: The Nation's Loaf and How We Used It in 1916." This is a really impressive infographic, and it’s only a state poster, from Kansas. Not only is the text informative—it tells you how many bushels of wheat per person are consumed in the U.S.—but they've used true imagery. And on top of that, there are the strong messages: "Economy of food is patriotism," and "Without it democracy is doomed; personal sacrifice must supplant previous extravagance." What incredible statements! I like to wonder what people would make of this today.
I also like the one after it in the online gallery. The saluting potato alone would be enough, but the information is good, too. And that “Be Loyal to Connecticut” line is basically telling people to eat locally—this was almost 100 years ago!
Then there's one from Arizona called "Good Eats" that urges people to preserve and eat more "perishables" than "staples", and it says this will bring both savings and "fewer doctor bills." That’s a really prescient poster, and it strikes me as a good message for a contemporary audience. We’re rarely encouraged anymore to make the connections between diet and health and expense.
I notice there are also some posters from the years between the two World Wars. What issues did those address?
Well, take the one that says "America Has Plenty of Food," from the 1930s. That's at a time when the FDR administration was trying to achieve some parity between the price of food and the price that farmers were paid for that food. Increased production during World War I had put farmers into debt, buying land and equipment—and then there was a depression after the war, and farmers were in this terrible position of not being able to sell what they were growing.
So FDR began paying farmers to not grow things, and this poster was a way to reassure everyone that his policies were working—yes, we are paying farmers to not grow, but don't worry, there’s still enough food for everyone. See that flag in the background? It's from the "Ever-Normal Granary." That's a nice touch.
There are a lot of posters with the theme of reducing food waste, eating scraps and even saving "used fats" for the war effort. It’s kind of amazing how quickly things have changed.
Yes, one of the interesting questions this could lead to is, why is there no similar communal effort or awareness today, when we are technically at war? Even soldiers, I’ve heard, find that a little disheartening. I would almost call these messages subversive now.
Putting these posters in chronological order showed me how the government's methodology changed over the years, and how they borrowed from professional advertising and were influenced by what was going on in the private sector. It also really shows the shift to an industrialized food system. You look at the WWII posters and think—where are the agriculture ones? Well, there aren’t any. It’s suddenly about consumers, not farmers.
Was anything consistent?
One thing that remained consistent was the use of women. Women are all over the food ads, still, today. And canning was very consistently popular as a topic because it was comforting. It was a way to show abundance instead of sacrifice, and these very typical, homey kitchen scenes with a woman in an apron. That’s not Rosie the Riveter.
How did you become interested in these posters? Did you know the Ag Library had such a collection?
Basically, it was a lucky find. I started this project in 2007 as a paper in a museum studies class, and it evolved into my thesis for a master’s degree. A history professor who heard I was interested in food history suggested that I check out the agricultural library up the road. When I went to look, what I found was a pile of unprocessed posters. The library didn’t even know what they had. But that was good for me, because it forced me to really study them. It allowed me to combine my research interests with my background in graphic design. And it helped that I had the structure of grad school to force me to propose some sort of project.
I took little snapshots of all the posters I thought I might want to study, and I had them all spread out on my floor, trying to figure out where they all belong in relation to each other. My professor wanted to know: What are you going to say about them? And I didn’t know at first, which was kind of unusual. Most historians begin with text and find visual material to illustrate it—I was doing the flip.
I tried to view this as real curatorial work, looking at them in historical context and telling the story in a way that means something to today's audience, but also explains how they would have been viewed at the time.
I’ve been working on it, donating my time, for about two years, and it has gone through several iterations. I ended up covering an unusually large time period for just a master’s thesis, but I’m glad I did! I’m pleased with the result. I’m still learning things.
Were your professors pleased, too, I hope?
(Laughs). Yes, I got an A, and I graduated in December with a master's in cultural history and museum studies. Now, in my day job I’m a project archivist at the National Park Service, but I’d like to work in exhibit design.
Well, you've done a great job with this online exhibit. Will it ever become a physical exhibit, too?
It goes on display from June 21st through August 30th at the National Agricultural Library in Beltsville (MD), and will eventually move to the USDA building in downtown DC. The originals can't be shown, because they are too light-sensitive. But I was actually glad when I heard that, because I don’t think these posters should be shown in a conventional, framed way. I want to show them as the mass-produced objects that they were, so I'll be pasting them on fence panels.
“In this place, there are sometimes 100 salmon at a time,” says Luis Menendez to me as we stand side by side on a bridge over a deep green pool on the Cares River in Niserias, a five-building cluster of old bars and a hotel, just across from a famed fish ladder and only miles downstream of the huge summits and canyons of the Picos de Europa. Menendez is a local lifelong fisherman and a professional fly fishing guide. Born in the nearby cider-making town of Nava, Menendez knows the sight of a stream full of 10-, 15- and 20-pound salmon. But on this drizzling afternoon, we see none—and it’s a safe bet that there are no salmon in the pool at all, for this spring’s return of fish has been a poor one compared to historical returns. We take a drive along the river, canyon walls to either side, and pass through the thriving mountaineers’ and hikers’ town of Las Arenas. Menendez rolls down the window to call over a friend. He asks if he has heard of any salmon recently caught.
“None,” the man says.
That, Menendez says as we drive on, is one of the best fishermen in the area and was once one of the best known professionals, on whom local restaurants could often depend for a fresh salmon before the government banned the sale of river-caught fish about 10 years ago. Now, about the only way to taste Spanish salmon is to buy a fishing license and catch one.
The Cares River isn’t the only salmon stream of Spain. Another dozen or so rivers that run into the sea along the northern Spanish coast support native runs of Atlantic salmon, or Salmo salar. The species also spawns in rivers on the East Coast of North America and in Northern Europe. It is most commonly encountered as the product of aquatic factory farms in Scotland, Norway and Canada, but—surprising as it may be to the uninitiated—it is also a famed resident of Asturias, Cantabria and Galicia. Local lore tells of the days when General Francisco Franco vacationed here, waded these streams and pulled out three-footers. Photos can be found, too, showing the general with trophies bound for the grill. Other black-and-white images show fishermen in the early 20th century with an afternoon’s catch of more salmon than most Spanish anglers today could hope to catch in a lifetime.
Today, the salmon’s numbers are declining, and Menendez is concerned about the future of the fish. Menendez advocates catch and release—”pesca sin muerte”—and requires his clients to put their salmon back, but catching fish at all this season hasn’t been easy. The health of the fishery is gauged largely by the mandatory reports to the local fisheries office from anglers who catch, and keep, a salmon. As of June 16, anglers had reported only 245 salmon from the Sella River, the most important salmon stream in Spain, and just 208 salmon from the Narcea. Though a jump from recent poor years, these numbers are still way down from historical figures. Jaime de Diego, head warden of the forests and streams of Asturias, met with me at his family’s riverside hotel, La Salmonera, and told me that in 1959 fishermen took 2,781 salmon from the Sella. In 1968, 2,090 salmon were taken and in 1970, 1,800.
2010 was a disaster, with the Asturias total topping out at 247 salmon caught and killed. This year, as of June 16, in every salmon stream in Asturias (there are a handful), 748 salmon had been caught, kept and reported (released salmon are not reported).
Menendez says there are several reasons for the decline. For one, he tells me, cormorants have expanded their range in the last decade, their population responding to the artificial food supply produced by the salmon farming operations of Norway. The birds have moved into northern Spain, he says, where they find salmon juveniles to be easy prey in the small and shallow rivers.
Cheese production is another issue, especially in the Cares-Deva drainage. In the green alpine hills above the fishing pools where the fishermen tiptoe over the boulders, herds of goats, sheep and cows graze the slopes. They wade in the streams, Menendez explains, crushing beds of fertilized fish eggs and dousing them with the toxins of their excrement. (We are all the while nibbling and praising a strong and faintly-veined blue cheese, produced by these salmon-stomping grazers.)
Another cause of the decline is the catch of adult salmon at sea by commercial fishermen, locals tell me. De Diego says Japanese fleets are the main culprits—but another fishing guide, George Luis Chang of Pesca Travel, a company that leads fishing trips throughout Spain, says commercial fishermen have been made a scapegoat for Spain’s salmon decline. Chang says he recognizes that catch-and-kill sport fishing itself has an effect on fish populations—but not all sport fishermen are willing to accept such a viewpoint, Chang says. When the Asturias government decided to limit anglers to three salmon in a season after the 2010 return, many local anglers were outraged, he says (Chang was in full support). Then, following a turnover in local government offices in 2011, the new three-fish limit was scrapped—and boosted to 35.
And so, Chang says, “most salmon fishermen in Asturias are happy again, but they probably don’t realize that all the salmon caught and killed this season are just hastening the decline of salmon fishing in Asturias for the years to come.” He says stocks are so low that just a few hundred salmon killed will heavily dent the genetic stock of the local runs. Chang, like Menendez, wants salmon sport fishing to continue here, but the killing to stop. So does another experienced guide, Jose Carlos Rodriguez, who lives in the coastal town of Gijon. He says most fishermen in Asturias—especially older ones—are opposed to mandatory catch-and-release policies. Traditional practice is to catch and eat, and old customs die hard among the veterans of the local river fishing culture. Rodriguez says the tourists he guides from abroad—British, French, Scandinavian and American—have largely adopted catch-and-release ethics, but until the local populace does so, it will mean a death rate in the local salmon populations that may be unsustainable.
“It is very difficult to make the older fishermen understand this,” Rodriguez says. “But the future of fishing here, and in other parts of the world, depends on catching and releasing.”
Menendez and I drive further along the Cares River, upstream of its confluence with the Deva, and we see cars are parked along the highway.
“Pescadores,” Menendez says. It’s a Saturday, and the anglers are out in force—all pursuing a handful of salmon. It’s a precariously top-heavy predator-prey balance. Just 98 salmon had been reported from the Cares-Deva system as of June 16, and surely hundreds of fishermen are working the waters each week. I would spend several days riding my bike along the rivers of the area. In one pool in the Sella, by the Salmonera Hotel, I saw just three adult salmon—and that was it.
Meanwhile, scientists are on the case to understand, and hopefully solve, the problems in Spain’s salmon streams. Franco, in fact, was a conservationist and scientist who implemented a monitoring program of salmon caught in the Ason River of Cantabria. Comparing today’s data with records from Franco’s time, scientists have observed that returning adults (which don’t die after spawning as do the five main Pacific salmon species) are on average smaller than in the past. De Diego believes that the reason for the size decline is that the fish are younger on average today, and instead of returning a half dozen times—larger and heavier at each reappearance—they now can manage only two or three spawning runs, then die, killed by the pollutants in the rivers.
But unfurling dramas in other European salmon rivers indicate that there is hope for the salmon of Spain. Atlantic salmon stopped returning to the Seine about a hundred years ago—but they’re back, returning in annual droves past the Eiffel Tower and under the famed bridges, in waters that for decades were too putrid for nearly any fish to live in. Hundreds of salmon have been returning each of the past several years. A similar rebound has occurred in the Rhine of Germany, reminding us that salmon are among the simplest of nature’s miracles; give them a clean river, keep out the cows and hold back the goats, and the fish will come back.
Fishing guide Luis Menendez can be contacted by email at firstname.lastname@example.org.
Fishing guide Jose Carlos Rodriguez can be contacted on the web.
By the time he was 20 years old, colonial American Benjamin Franklin had already spent two years working as a printer in London. He returned to Philadelphia in 1726. During the sea voyage home, he kept a journal that included many of his observations of the natural world. Franklin was inquisitive, articulate and interested in mastering the universe.
During one afternoon calm on September 14, Franklin wrote:
“...as we sat playing Draughts upon deck, we were surprised with a sudden and unusual darkness of the sun, which as we could perceive was only covered with a small thin cloud: when that was passed by, we discovered that that glorious luminary laboured under a very great eclipse. At least ten parts out of twelve of him were hid from our eyes, and we were apprehensive he would have been totally darkened.”
Total solar eclipses are not rare phenomena; every 18 months on average one occurs somewhere on Earth. Franklin and his shipmates likely had seen eclipses before. What was different for Franklin and his generation was a new understanding of the causes of eclipses and the possibility of accurately predicting them.
Earlier generations in Europe relied on magical thinking, interpreting such celestial events through the lens of the occult, as if the universe were sending a message from heaven. By contrast, Franklin came of age at a time when supernatural readings were held in suspicion. He would go on to spread modern scientific views of astronomical events through his popular almanac—and attempt to free people from the realm of the occult and astrological prophecy.

Ptolemy’s Earth-centered universe with the moon, Mercury, Venus, the sun, Mars, Jupiter and Saturn orbiting our planet. (Andreas Cellarius, CC BY)
Ancient people conceived of the heavens as built around human beings. For centuries, people subscribed to the Ptolemaic belief about the solar system: The planets and the sun revolved around the stationary Earth.
The idea that God drove the heavens is very old. Because people thought that their god (or gods) guided all heavenly occurrences, it’s not surprising that many people—ancient Chinese, for example, and Egyptians and Europeans—believed that what they witnessed in the skies above provided signs of future events.
For this reason, solar eclipses were for many centuries understood to be harbingers of good or evil for humankind. They were ascribed magical or mysterious predictive qualities that could influence human lives. During the first century A.D., people—including astrologers, magicians, alchemists and mystics—who claimed to have mastery over supernatural phenomena held sway over kings, religious leaders and whole populations.
Nicholas Copernicus, whose life straddled the 15th and 16th centuries, used scientific methods to devise a more accurate understanding of the solar system. In his famous book, “On the Revolutions of the Celestial Spheres” (published in 1543), Copernicus showed that the planets revolved around the sun. He didn’t get it all right, though: He thought planetary bodies had circular orbits, because the Christian God would have designed perfect circles in the cosmos. That planetary motion is elliptical is a later discovery.
By the time Benjamin Franklin grew up in New England (about 150 years later), few people still believed in the Ptolemaic system. Most had learned from living in an increasingly enlightened culture that the Copernican system was more reliable. Franklin, like many in his generation, believed that knowledge about the scientific causes for changes in the environment could work to reduce human fears about what the skies might portend.

By measuring the height of celestial objects with an astrolabe, a user could predict the position of stars, planets and the sun. (Pom², CC BY-SA)
It was an age of wonder, still, but wonder was harnessed to technological advances that could help people understand better the world they lived in. Accurate instruments, such as the astrolabe, allowed people to measure the motion of the planets and thus predict movements in the heavens, particularly phenomena like solar and lunar eclipses and the motions of planets like Venus.
In his earliest printed articles, Franklin criticized the idea that education belonged solely to the elite. He hoped to bring knowledge to common people, so they could rely on expertise outside of what they might hear in churches. Franklin opted to use his own almanacs—along with his satirical pen—to help readers distinguish between astronomical events and astrological predictions.
Printing was a major technological innovation during the 16th, 17th and 18th centuries that helped foster information-sharing, particularly via almanacs.
These amazing compilations included all kinds of useful information and were relied on by farmers, merchants, traders and general readers in much the same way we rely on smartphones today. Colonial American almanacs provided the estimated times of sunrises and sunsets, high and low tides, periods of the moon and sun, the rise and fall of constellations, solar and lunar eclipses, and the transit of planets in the night skies. More expensive almanacs included local information such as court dates, dates of markets and fairs, and roadway distances between places. Most almanacs also offered standard reference information, including lists of the reigns of monarchs of England and Europe, along with a chronology of important dates in the Christian Era.
Almanac culture dominated New England life when Franklin was a youth. Almanacs were the most purchased items American printers offered, and many a printer made his chief livelihood by printing them.
Almanacs were money-makers, so Franklin developed his own version shortly after he opened his own shop in Philadelphia. The city already had almanac-makers – Titan Leeds and John Jerman, among others – but Franklin aimed to gain the major share of the almanac trade.
Franklin considered astrological prediction foolish, especially in light of new scientific discoveries being made about the universe. He thought almanacs should not prognosticate on future events, as if people were still living in the dark ages. So he found a way to make fun of his competitors who continued to pretend they could legitimately use eclipses, for instance, to predict future events.

Franklin dispensed many aphorisms in the guise of ‘Poor Richard,’ such as ‘Love your Enemies, for they tell you your Faults.’ (Oliver Pelton, CC BY)
In addition to the usual fare, Franklin’s almanac provided stories, aphorisms and poems, all ostensibly curated by a homespun character he created: Richard Saunders, the fictional “author” of Franklin’s “Poor Richard’s Almanac.”
The “Poor Richard” Saunders persona allowed Franklin to satirize almanac makers who still wrote about eclipses as occult phenomena. Satire works because it closely reproduces the object being made fun of, with a slight difference. We’re familiar with this method today from watching skits on “Saturday Night Live” and other parody programs.

Title page of Franklin’s first ‘Poor Richard’ almanac, for 1733 (‘Poor Richard’ Almanac)
Franklin’s voice was close enough to his satirical target that “Poor Richard” stole the market. For instance, Poor Richard began his career by predicting the death of Titan Leeds, his competitor. He later would do the same thing to John Jerman. Franklin was determined to mock almanac-makers who pretended to possess occult knowledge. Nobody knows when a person might die, and only astrologers would pretend to think a solar or lunar eclipse might mean something for humans.
Franklin included a wonderfully funny section in his almanac for 1735, making light of his competitors who did offer astrological prognostications. As “Poor Richard,” he wrote:
“I shall not say much of the Signification of the Eclipses this Year, for in truth they do not signifie much; only I may observe by the way, that the first Eclipse of the Moon being celebrated in Libra or the Ballance, foreshews a Failure of Justice, where People judge in their own Cases. But in the following Year 1736, there will be six Eclipses, four of the Sun, and two of the Moon, which two Eclipses of the Moon will be both total, and portend great Revolutions in Europe, particularly in Germany….”
Richard Saunders is clear in the opening remark that “Eclipses … do not signifie much.” He nonetheless goes on to base amazing predictions for 1736 on them, in effect lampooning anyone who would rely on the stars to foretell human events. Great revolutions were taking place in Europe, but no one needed to read eclipses in order to figure that out; they needed only to read the day’s newspapers.
The next year, Franklin decided to go a step further than just satirizing these occult prognostications. He had Richard Saunders explain his understanding of some of the science behind eclipses. He characterized the “Difference between Eclipses of the Moon and of the Sun” by reporting that:
“All Lunar Eclipses are universal, i.e. visible in all Parts of the Globe which have the Moon above their Horizon, and are every where of the same Magnitude: But Eclipses of the Sun do not appear the same in all Parts of the Earth where they are seen; being when total in some Places, only partial in others; and in other Places not seen at all, tho’ neither Clouds nor Horizon prevent the Sight of the Sun it self.”
The goal of an explanation like this? To eclipse occult belief. He hoped people would become more confident about the universe and everything in it and would learn to rely on scientifically validated knowledge rather than an almanac-maker’s fictions.
This story originally appeared on Travel + Leisure.
Intrepid travelers know that when you’re exhausted from exploring historical sites, when you can’t stand the thought of visiting one more museum, and you’ve trudged through every open-air market, there’s only one thing left to do—head underwater.
While scuba divers have the most freedom to explore underwater, snorkeling is easy enough for children, and exciting enough for even the most jaded traveler. Whether you’re taking your budding marine biologist to explore an underwater ecosystem or simply want to get up close and personal with a friendly shark, snorkeling is an opportunity to truly immerse yourself in nature.
To help plan your next adventure, we’ve pulled together 10 of the best places to snorkel around the world. The list ranges from U.S. National Parks to once-in-a-lifetime vacation destinations like the Maldives or Komodo Island. Whichever one you end up visiting, you’ll see underwater sights that would make your jaw drop—if you weren’t breathing through a snorkel, of course.
The underwater scenery in these islands, atolls, cayes, and reefs is unmatched, but sadly climate change is endangering the watery wonderland. Coral bleaching is already affecting many of the world’s reefs, coral is disappearing across the globe, and some scientists expect it could die out entirely as soon as 2050. All the more reason to start planning that snorkeling trip you’ve been dreaming about.
Ambergris Caye, Belize
Images: Hol Chan Marine Reserve (PNiesen/iStock); Ambergris Caye (Diegograndi/iStock); a nurse shark in Ambergris Caye (Argiope/iStock)
Home to the largest barrier reef outside of Australia (185 miles!), Belize has many opportunities to get up close and personal with eels, rays, and all kinds of brightly colored fish. Hundreds of cayes and atolls dot the Caribbean coastline, filled with colorful coral sunken beneath the turquoise waters. Some of the best options for divers and snorkelers are found off Ambergris Caye, including the Hol Chan Marine Reserve and the aptly named Shark Ray Alley, teeming with nurse sharks happy to let you live out your swimming-with-the-sharks fantasies.
Ilha Grande, Brazil
Image: Gustavoferretti/iStock
Off the coast of Brazil, halfway between São Paolo and Rio de Janeiro, sits the wilderness wonderland of Ilha Grande. There are hotels on the island, but it manages to feel largely untouched with monkey-filled jungles surrounded by brilliant blue waters teeming with brilliantly colored fish. Dive into the warm waters of the Blue Lagoon (Lagoa Azul) to swim with seahorses, ogle the underwater coves, and follow a turtle or angelfish through a sunken jungle. The waters off of Ilha Grande are also home to dozens of shipwrecks—remnants of the battles between pirates and the Portuguese.
The Big Island, Hawaii
Image: Spinner dolphins, Kealakekua Bay (HeatherLeaPoole/iStock)
The entire Hawaiian archipelago is surrounded by incredible snorkeling spots, but the Big Island—with more square footage than all the other islands combined—has the most to offer. The underwater state park at Kealakekua Bay not only has technicolor coral and colorful fish, but a good dose of history, too: it marks the spot where Captain James Cook landed on the island. Hit the water near the Captain Cook Monument to see dolphins, turtles and more. For more underwater adventures, head to the crystal waters of Honaunau Bay to explore its coral gardens alongside dolphins and tropical fish.
Palawan, The Philippines
Images: Whale shark (Mihtiander/iStock); Clownfish (Fototrav/iStock)
While the Philippines may not seem like the most obvious snorkeling destination, the waters surrounding the archipelago’s 7,000 islands make up a diverse ecosystem filled with breathtaking wildlife. There is no shortage of snorkeling opportunities, from diving into the Bay of Donsol for the chance to swim with whale sharks to visiting the coral reefs outside Noa Noa Island. The stunning island of Palawan, though, offers something for every underwater explorer. Visit the island’s fish-filled lagoons, dive into Honda Bay, explore Tubbataha Reef, and plan a day trip to meet the underwater inhabitants of Starfish and Cowrie Island.
Buck Island, St. Croix, USVI
Image: Buck Island (Grandriver/iStock)
Visits to national parks tend to conjure up visions of majestic mountains and roaming buffalo, but on Buck Island in the U.S. Virgin Islands you’re more likely to run into a friendly octopus than a picnic-basket-stealing bear. Snorkel among the elkhorn coral barrier reefs beneath Buck Island’s brilliant blue waters as you follow a colorful parrotfish along an underwater trail through this sunken national treasure. Three species of sea turtles nest at the park, brain coral abounds, and both endangered brown pelicans and threatened least terns call it home. The shallow, gentle waters are ideal for beginning snorkelers.
Komodo Island, Indonesia
Images: Leather coral, Komodo National Park (Ifish/iStock); Midnight snappers, Komodo National Park (Ifish/iStock); Komodo dragons (USO/iStock); Strmko/iStock; AndamanSE/iStock
While the giant lizards that call this island home get most of the attention from visitors, Komodo has some fascinating inhabitants under the water, too. Head to Pink Beach to swim with rays, schools of groupers, and hawksbill turtles in the undersea garden that grows there. Alternatively, visit the sea surrounding Komodo National Park, which offers unmatched underwater exploration with over 1,000 species of fish, 260 types of coral, and 14 types of endangered whales, dolphins, and giant turtles. If that’s not enough to make you strap on a snorkel, there are also rays, sharks, and a flourishing coral reef to make for a mesmerizing journey.
The Maldives

Images: Sea goldies (Inusuke/iStock); Powder blue surgeonfish (Cinoby/iStock); Convict surgeonfish in the Maldives (iStock); A mimic octopus (Oksanavg/iStock)
The Maldives are one of the most beautiful destinations in the world, but some of the islands’ greatest sights lie beneath the waves. The tiny islands that make up the archipelago are surrounded by aquamarine water that is home to some 700 species of fish, including tuna, wahoo, and butterflyfish. The water holds a multitude of other marine wonders, too, like sharks, turtles, anemones, coral, and perhaps a friendly octopus or two. And if someone in your party doesn’t like to snorkel, they can still enjoy the undersea gardens and wildlife, thanks to the islands’ crystal clear water.
Eil Malk Island, Palau
Images: Global_Pics/iStock; Evenfh/iStock
Only one of the marine lakes that dot Palau is open to snorkeling, but it’s definitely worth the trip. Jellyfish Lake on the uninhabited island of Eil Malk lives up to its name, filled with millions of golden jellyfish that have thrived in the isolated lake for hundreds, if not thousands, of years. For a truly otherworldly experience, visitors can snorkel among the floating, gelatinous creatures. While jellyfish are known for their stings, these eat algae—not other animals—and their sting is reportedly so mild that humans who take the plunge into their waters can hardly feel it.
Great Barrier Reef, Australia
Images: Clownfish and anemone (Homeworker/iStock); Heart Reef in the Great Barrier Reef (Byrneck/iStock); Juvenile emperor angelfish (Tane-Mahuta/iStock); Coral colony and soldierfish (PNiesen/iStock); Ocellaris clownfish (Wrangel/iStock); Aussiesnakes/iStock
It’s impossible to talk about the world’s best snorkeling spots without mentioning the largest coral reef ecosystem in the world—Australia’s Great Barrier Reef. The reef is actually made up of 2,900 individual reefs that stretch over 1,400 miles off the Australian shoreline. Eye-popping coral, brilliant marine life, barracuda, manta rays, and the bones of ships that crashed on the reef all make the Great Barrier Reef a must-visit destination for ocean aficionados. For an easy place to start your exploration, head to the Whitsunday Islands right off the shore of Queensland.
Galapagos Islands, Ecuador
Images: Galapagos sea lions (MakingSauce/iStock; PeskyMonkey/iStock); Black tip reef shark (pkphotoscom/iStock); LFPuntel/iStock
The 19 volcanic islands that form the Galapagos offer a glimpse into the natural world of finches, iguanas, and tortoises that inspired Charles Darwin, but beneath the waves that surround those islands lies an equally fascinating natural treasure trove. The various islands are home to diverse marine life—sea turtles, dolphins, orcas, humpback whales, Galapagos penguins, fur seals, and sea lions. Brave souls can swim in Devil’s Crown, the sunken cone of a volcano near Floreana Island, to see brilliantly colored fish, moray eels, and more.
The Basque country’s salt cod dish, known as bakailaoa pil-pilean, is perhaps one of the most ancient of traditions, and has its origins in a deep maritime history that required the preservation of food for the long journey to distant fishing stocks. But today’s chefs are incorporating new ideas to prepare the creamy yellow-white sauce that is the hallmark of a dish that some say is finer and better tasting than even the freshest of cod pulled directly from the sea.
The recipe with just three ingredients—salt cod with skin, olive oil and garlic—is not easy to make. In the txokoak, or Basque gastronomy clubs, chefs compete to make the tastiest of bakailaoa pil-pilean.
The main ingredient salt cod with skin is difficult to find in the United States, but the sauce can’t be made without the natural gelatin found in its skin. This summer as chefs Igor Ozamiz Goiriena and Igor Cantabrana prepared for their journey to Washington D.C. for the Smithsonian Folklife Festival, which honored the music, craft and artisanal traditions of Basque Country on the National Mall in June and July, the pair found that the easiest way to assure that the main ingredient would be available for their cooking demonstrations was to pack 18 pounds of dried cod into their suitcases with their clothing.
The two chefs told us that the first thing we must do is to “eat with our eyes,” and the dishes they prepared at the Festival’s demonstration tent, the Ostatua Kitchen, were certainly a treat to see. Visitors were treated to a sensory experience of the sights, sounds, and smells of some of the world’s best cuisine; and the chefs shared their history and showed how Basque gastronomy is intimately tied to tradition and innovation. Bakailaoa pil-pilean, or cod in pil-pil sauce, is no exception.
Bakailaoa pil-pilean has a long history tied to Basque whaling expeditions of the 15th century. While there is no archeological evidence, it is possible that the Basques crossed the Atlantic even earlier, at least 100 years before Columbus. The Basque first embarked to distant ports in Greenland and Newfoundland, aided in their hunt for whales by sustaining themselves on the large schools of Atlantic cod, caught and salted to preserve aboard ship. Later as whales were overhunted and became scarce, cod fishing supplanted it, and whalers returned with their bounty to the Basque country to trade for salt and wine, and other goods.
The Basque were among the first people to preserve cod, using salt from the natural springs of the Añana valley, where the water is 300 times saltier than the ocean and people have been producing salt for more than 6,000 years. This made the fish longer lasting and easier to trade.
As early as the 11th century, Basque salt cod was sold on an international market.
Chef Igor Ozamiz Goiriena recalled the rich personal stories of his grandfather, a captain of a cod ship that sailed to St. John’s, Newfoundland. His grandfather used to boast of so many cod in the sea that “you could walk on water.” His traditional recipe for bakailaoa pil-pilean came from his grandmother.
There is a science to the tradition of creating bakailaoa pil-pilean. The seemingly simple dish requires the emulsion of olive oil with the gelatin of the cod—two substances that are not normally soluble.

The main ingredient, salt cod with skin, is difficult to find in the United States, but the sauce can’t be made without the natural gelatin found in its skin. (Ralph Rinzler Folklife Archives)
To make pil-pil sauce, Ozamiz Goiriena said, you need the right balance of four things—water, fat, air and an emulsifying agent. With the perfect balance, a rich, creamy, yellow-white sauce swirls to perfection.
Atlantic cod is a cold-water fish, meaning its gelatin is high in both fat and amino acids. The gelatin easily provides the fat required to make pil-pil sauce while also serving doubly as an emulsifying agent. Gelatin works well as an emulsifier because it has elements that are both water soluble and fat soluble, allowing it to form a barrier between fat droplets and the liquid in which they are dispersed. The amino acids form a tangled net that traps fat droplets, further dispersing the fat and stabilizing the mixture. The olive oil brings in more fats and water.
With just gelatin, olive oil and air, the fish has all four main requirements for an emulsion. However, emulsions do not form on their own, as they are rather unstable. You need to give energy and air to the emulsion by shaking and stirring the gelatin as you mix in olive oil. This turns the fat into droplets that can then be trapped by the gelatin.
Chef Gorka Mota explained that in one Basque legend, pil-pil sauce was first created by the rocking motion of fishing ships at sea. Another legend suggests that the dish has roots in the First Carlist War: a merchant’s request for salt cod was misinterpreted by the telegraph operator, and he ended up ordering a million salt cod. It was a fortunate error, as Bilbao was soon under siege, and the only food available was salt cod, olive oil, garlic and dried pepper.

Chef Igor Ozamiz Goiriena demonstrates the separated oil and gelatin. (Ralph Rinzler Folklife Archives)
To survive, the people of Bilbao ate cod cooked in olive oil. By the end of the century, Basques discovered that if the cod was cooked in an earthen casserole dish and moved in a circular motion, the sauce would become creamy and white.
In his own cooking, Igor Ozamiz Goiriena uses a mesh colander to create pil-pil sauce, a technique he learned in culinary school.
Using the bottom of colander as a stirring tool, he slowly pours the olive oil through it and mixes it with the gelatin in the pan. This technique is highly effective at adding air and causing it to emulsify. The innovative use of the colander, he jokes, would have infuriated his grandmother. She did not know, as chefs do today, that pil-pil sauce is an emulsion. However, she knew from experience and tradition that if she rotated the pot in circles, the pil-pil sauce would become creamy.
These new techniques have been adopted by chefs as Basque cooking incorporates more knowledge from chemistry and other sciences into the ancient traditions.

In his own cooking, Chef Ozamiz Goiriena uses a mesh colander to create pil-pil sauce, a technique he learned in culinary school. (Ralph Rinzler Folklife Archives)
Recipe: Bakailaoa pil-pilean (Cod in Pil-Pil Sauce)
Makes two servings
8 oz. (1 loin) salted and dried cod with the skin attached
1-2 cups extra virgin olive oil
3-4 garlic cloves
1. Soak the dried salt cod in cold water for 48 hours to rehydrate. Change the water every 8 hours. Once done, cut into 2 even chunks about 2 inches wide.
2. In a 3-quart saucepan, add a layer of olive oil (~1/2 cup). Add 2 tablespoons of chopped garlic. Place the pot on low heat (~158 degrees Fahrenheit). Let the garlic perfume the oil for 2 to 3 minutes. Strain the oil to remove the garlic. Put the oil back in the pot.
Note: The olive oil should not change color. If it does, the heat is too high, and the garlic is frying.
3. Add the slices of cod to the pot. Add more olive oil to cover the fish (~1 cup). Keeping the heat low (158-176 degrees), bring the oil to a gentle simmer. Let the fish poach slowly so that it releases its gelatin. The gelatin will come out as bubbles, separate from the oil, and settle at the bottom of the pot.
Note: Place the fish cuts close to one another to conserve oil. You will use a lot of oil but it is needed to poach the fish. Make sure to keep the heat low or the gelatin will evaporate.
4. After poaching for around 20 minutes, remove the fish. It is done when the meat comes off in petals and has a white coloring.
5. Stir the remaining oil and gelatin in small circles to further separate. Pour out the oil and keep it for later.
6. Put the gelatin in a 10-inch sauté pan at room temperature. Stir the gelatin with the bottom of a mesh colander or tea strainer to solidify it. Using the colander, slowly add the oil back while continuing to stir the gelatin. Add oil until it becomes a thick sauce. It will be yellow-white and creamy.
7. Place the fish in the pan with the finished pil-pil sauce. On low heat, reheat the cod and pil-pil sauce. Use a spoon to baste the fish in the sauce for 1 to 2 minutes. Remove the fish, and place it in a serving dish. Stir the sauce a few times, and then add it to the serving dish so that it lightly covers the fish.
8. Optional: As a finish, add roasted garlic to the top of the fish.
Shanna Killeen is currently working on a master’s in English at Oregon State University. A version of this article was previously published on the Smithsonian Folklife Blog.
That males are naturally promiscuous while females are coy and choosy is a widely held belief. Even many scientists—including some biologists, psychologists and anthropologists—tout this notion when interviewed by the media about almost any aspect of male-female differences, including in human beings. In fact, certain human behaviors such as rape, marital infidelity and some forms of domestic abuse have been portrayed as adaptive traits that evolved because males are promiscuous while females are sexually reluctant.
These ideas, which are pervasive in Western culture, also have served as the cornerstone for the evolutionary study of sexual selection, sex differences and sex roles among animals. Only recently have some scientists—fortified with modern data—begun to question their underlying assumptions and the resulting paradigm.
It all comes down to sperm and eggs?
These simple assumptions are based, in part, on the differences in size and presumed energy cost of producing sperm versus eggs—a contrast that we biologists call anisogamy. Charles Darwin was the first to allude to anisogamy as a possible explanation for male-female differences in sexual behavior.
His brief mention was ultimately expanded by others into the idea that because males produce millions of cheap sperm, they can mate with many different females without incurring a biological cost. Conversely, females produce relatively few “expensive,” nutrient-containing eggs; they should be highly selective and mate only with one “best male.” He, of course, would provide more than enough sperm to fertilize all a female’s eggs.
In 1948, Angus Bateman—a botanist who never again published in this area—was the first to test Darwin’s predictions about sexual selection and male-female sexual behavior. He set up a series of breeding experiments using several inbred strains of fruit flies with different mutations as markers. He placed equal numbers of males and females in laboratory flasks and allowed them to mate for several days. Then he counted their adult offspring, using inherited mutation markers to infer how many individuals each fly had mated with and how much variation there was in mating success.
One of Bateman’s most important conclusions was that male reproductive success—as measured by offspring produced—increases linearly with his number of mates. But female reproductive success peaks after she mates with only one male. Moreover, Bateman alleged this was a near-universal characteristic of all sexually reproducing species.
In 1972, theoretical biologist Robert Trivers highlighted Bateman’s work when he formulated the theory of “parental investment.” He argued that sperm are so cheap (low investment) that males evolved to abandon their mate and indiscriminately seek other females for mating. Female investment is so much greater (expensive eggs) that females guardedly mate monogamously and stay behind to take care of the young.
In other words, females evolved to choose males prudently and mate with only one superior male; males evolved to mate indiscriminately with as many females as possible. Trivers believed that this pattern is true for the great majority of sexual species.
The problem is, modern data simply don’t support most of Bateman’s and Trivers’ predictions and assumptions. But that didn’t stop “Bateman’s Principle” from influencing evolutionary thought for decades.

A single sperm versus a single egg isn’t an apt comparison. (Gametes image via www.shutterstock.com)
In reality, it makes little sense to compare the cost of one egg to one sperm. As comparative psychologist Don Dewsbury pointed out, a male produces millions of sperm to fertilize even one egg. The relevant comparison is the cost of millions of sperm versus that of one egg.
In addition, males produce semen, which in most species contains critical bioactive compounds that are presumably very expensive to produce. As is now also well-documented, sperm production is limited and males can run out of sperm—what researchers term “sperm depletion.”
Consequently, we now know males may allocate more or less sperm to any given female, depending on her age, health or previous mated status. Such differential treatment among preferred and nonpreferred females is a form of male mate choice. In some species, males may even refuse to copulate with certain females. Indeed, male mate choice is now a particularly active field of study.
If sperm were as inexpensive and unlimited as Bateman and Trivers proposed, one would not expect sperm depletion, sperm allocation or male mate choice.
Birds have played a critical role in dispelling the myth that females evolved to mate with a single male. In the 1980s, approximately 90 percent of all songbird species were believed to be “monogamous”—that is, one male and one female mated exclusively with one another and raised their young together. At present, only about 7 percent are classified as monogamous.
Modern molecular techniques that allow for paternity analysis revealed both males and females often mate and produce offspring with multiple partners. That is, they engage in what researchers call “extra-pair copulations” (EPCs) and “extra-pair fertilizations” (EPFs).
Because of the assumption that reluctant females mate with only one male, many scientists initially assumed promiscuous males coerced reluctant females into engaging in sexual activity outside their home territory. But behavioral observations quickly determined that females play an active role in searching for nonpair males and soliciting extra-pair copulations.
Rates of EPCs and EPFs vary greatly from species to species, but the superb fairy wren is one socially monogamous bird that provides an extreme example: 95 percent of clutches contain young sired by extra-pair males and 75 percent of young have extra-pair fathers.
This situation is not limited to birds—across the animal kingdom, females frequently mate with multiple males and produce broods with multiple fathers. In fact, Tim Birkhead, a well-known behavioral ecologist, concluded in his 2000 book “Promiscuity: An Evolutionary History of Sperm Competition,” “Generations of reproductive biologists assumed females to be sexually monogamous but it is now clear that this is wrong.”
Ironically, Bateman’s own study demonstrated that the idea that female reproductive success peaks after mating with only one male is not correct. When Bateman presented his data, he did so in two different graphs; only one graph (which represented fewer experiments) led to the conclusion that female reproductive success peaks after one mating. The other graph—largely ignored in subsequent treatises—showed that the number of offspring produced by a female increases with the number of males she mates with. That finding runs directly counter to the theory that there is no benefit for a “promiscuous” female.
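The quantity at stake in Bateman’s two graphs—how steeply offspring count rises with number of mates—is what later researchers came to call the Bateman gradient: simply the slope of a least-squares regression line through mates-versus-offspring data. As a minimal sketch, the hypothetical numbers below (invented for illustration, not Bateman’s actual data) show how a steep slope corresponds to the classic “promiscuous male” pattern and a shallow slope to the claimed “coy female” pattern:

```python
# Hypothetical illustration of a "Bateman gradient": the ordinary
# least-squares slope of offspring count regressed on number of mates.
# All data points below are invented for demonstration purposes.

def bateman_gradient(mates, offspring):
    """Return the OLS slope of offspring on number of mates."""
    n = len(mates)
    mean_m = sum(mates) / n
    mean_o = sum(offspring) / n
    # covariance (numerator) and variance of mate number (denominator)
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(mates, offspring))
    var = sum((m - mean_m) ** 2 for m in mates)
    return cov / var

# Invented data: each extra mate adds many offspring for males (steep
# gradient), while in the "classic" reading females gain almost nothing
# after their first mating (shallow gradient).
male_mates,   male_offspring   = [1, 2, 3, 4], [10, 20, 30, 40]
female_mates, female_offspring = [1, 2, 3, 4], [10, 11, 11, 12]

print(bateman_gradient(male_mates, male_offspring))      # steep: 10.0
print(bateman_gradient(female_mates, female_offspring))  # shallow: 0.6
```

A flat or shallow female gradient is exactly what Bateman’s favored graph suggested; his other graph, like the modern studies discussed below, implies a positive female slope as well.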
Modern studies have demonstrated this is true in a broad range of species—females that mate with more than one male produce more young.
So if closer observation would have disproved this promiscuous male/sexually coy female myth, in the animal world at least, why didn’t scientists see what was in front of their eyes?
Bateman’s and Trivers’ ideas had their origins in Darwin’s writings, which were greatly influenced by the cultural beliefs of the Victorian era. Victorian social attitudes and science were closely intertwined. The common belief was that males and females were radically different. Moreover, attitudes about Victorian women influenced beliefs about nonhuman females. Males were considered to be active, combative, more variable, and more evolved and complex. Females were deemed passive, nurturing, less variable, and developmentally arrested at a level equivalent to that of a child. “True women” were expected to be pure, submissive to men, sexually restrained and uninterested in sex—and this representation was also seamlessly applied to female animals.
Although these ideas may now seem quaint, most scholars of the time embraced them as scientific truths. These stereotypes of men and women survived through the 20th century and influenced research on male-female sexual differences in animal behavior.
Unconscious biases and expectations can influence the questions scientists ask and also their interpretations of data. Behavioral biologist Marcy Lawton and colleagues describe a fascinating example. In 1992, eminent male scientists studying a species of bird wrote an excellent book on the species—but were mystified by the lack of aggression in males. They did report violent and frequent clashes among females, but dismissed their importance. These scientists expected males to be combative and females to be passive—when observations failed to meet their expectations, they were unable to envision alternative possibilities, or realize the potential significance of what they were seeing.
The same likely happened with regard to sexual behavior: Many scientists saw promiscuity in males and coyness in females because that is what they expected to see and what theory—and societal attitudes—told them they should see.
In fairness, prior to the advent of molecular paternity analysis, it was extremely difficult to accurately ascertain how many mates an individual actually had. Likewise, only in modern times has it been possible to accurately measure sperm counts, which led to the realization that sperm competition, sperm allocation and sperm depletion are important phenomena in nature. Thus, these modern techniques also contributed to overturning stereotypes of male and female sexual behavior that had been accepted for more than a century.
Besides the data summarized above, there is the question of whether Bateman’s experiments are replicable. Given that replication is an essential criterion of science, and that Bateman’s ideas became an unquestioned tenet of behavioral and evolutionary science, it is shocking that more than 50 years passed before an attempt to replicate the study was published.
Behavioral ecologist Patricia Gowaty and collaborators had found numerous methodological and statistical problems with Bateman’s experiments; when they reanalyzed his data, they were unable to support his conclusions. Subsequently, they reran Bateman’s critical experiments, using the exact same fly strains and methodology—and couldn’t replicate his results or conclusions.
Counterevidence, evolving social attitudes, recognition of flaws in the studies that started it all—Bateman’s Principle, with its widely accepted preconception about male-female sexual behavior, is currently undergoing serious scientific debate. The scientific study of sexual behavior may be experiencing a paradigm shift. Facile explanations and assertions about male-female sexual behaviors and roles just don’t hold up.