Carl Van Vechten, a familiar figure among New York City’s literary and artistic circles in the early 20th century, tried his hand as a novelist, critic and journalist, with varying results, before picking up a camera in 1932. He proved a natural photographer. But perhaps more importantly, he had built relationships (in some cases decades-long) with many of the brightest artistic lights of the era, who were happy to pose for him: James Baldwin, W.E.B. Du Bois, Ella Fitzgerald, Lena Horne and dozens of others.
Visitors to the Smithsonian American Art Museum in Washington, D.C., have a rare opportunity to see a selection of his images—39 photographs, many of which are on view for the first time since they were acquired in 1983. The works span three decades and include some of the most striking portraits made of the groundbreaking writers, athletes, politicians and musicians of the Harlem Renaissance. Yet the man behind the camera is remembered more as a socialite and writer than as a photographer. The museum’s exhibition “Harlem Heroes: Photographs by Carl Van Vechten” aims to change that.
“Carl Van Vechten had a relatively natural style,” explains John Jacob, the museum’s curator of photography and the curator of this show. “His portraits are posed, but they’re close-up and direct, focusing on the facial and bodily expressions of his subjects. They’re formal, but they have the familiar qualities of a snapshot.”
This natural approach—and the fact that Van Vechten was perceived as either a polymath or a dilettante, depending on your point of view—partly explains why his photography has not received more consideration.
Studio photographers such as James Van Der Zee and James Latimer Allen lived in the area and captured their community on film. Others, like Henri Cartier-Bresson, came as reporters. But Van Vechten’s motives were different than theirs.
“Van Vechten the photographer didn’t plan his portrait of Harlem. African Americans were among the social milieu in which he circulated, and their inclusion in it, at a time when exclusion was the norm, makes his project unique,” says Jacob.
While other photographers of the era saw themselves as creating art, Van Vechten saw himself creating a catalog—first of his friends and fellow artists and, after several years, increasingly of African-American artists and people of prominence.
“He wanted to capture the breadth of American artistic culture, including the African-American community,” says Jacob. More so than perhaps any other individual, he succeeded in this mission, leaving behind thousands of photographs, spread throughout the archives of the Smithsonian American Art Museum, Yale University, the Library of Congress, and elsewhere.
The 39 portraits included in this exhibition are delicate 35 mm nitrate negatives, restored by photographer Richard Benson for the art book publisher Eakins Press Foundation. They were part of two collections Van Vechten had created: Heroes of Harlem (a portfolio of 30 portraits of African-American men) and Noble Black Women (a collection of 19 portraits of African-American women). While the Eakins Press Foundation would eventually combine both portfolios into the collection O, Write My Name: American Portraits, Harlem Heroes, the current exhibition displays the portraits from these prototype portfolios in their entirety, organized chronologically by exposure date (when the photograph was made).
“Visitors to the exhibition will see that Carl Van Vechten’s portraiture shaped an inclusive catalog of the era in which he lived and worked,” says Jacob. “That era, and the Harlem Renaissance within it, was a defining moment in our history that reverberates to this day in American culture.”
Collecting was Van Vechten’s focus.
“He tried to capture every important figure of the [Harlem Renaissance],” says Emily Bernard, professor of English and ALANA U.S. Ethnic Studies at the University of Vermont, as well as author of the 2012 Van Vechten biography Carl Van Vechten and the Harlem Renaissance. “He was interested in knowing people and collecting people and creating bonds for others—understanding how people could help each other.”
Bernard describes him as an “under-considered figure in African-American cultural history,” and attributes this in part to the fact that the photographer was white, but also to the fact that he seemed restless in his artistic pursuits, jumping from one interest to another throughout his life.
A pioneering dance and music critic, Van Vechten was also a novelist who published a book set within the Harlem nightlife scene—one that included a startling racial epithet in its title. The novel’s depiction of African Americans, along with its offensive title, drew wide derision (and patches of praise) in the Harlem community. Historian David Levering Lewis would famously dub it a “colossal fraud.” After this book, Van Vechten published another novel and a book of essays, but then stopped writing altogether, apart from his letters.
“That’s just who he was—‘I’m done with that,’” says Bernard.
But if there is one effort that consumed Van Vechten throughout his life, it was meeting the creative figures of his era, placing himself in the center of any social circle.
Bernard is also the editor of Remember Me to Harlem (2001), a collection of letters between Van Vechten and Langston Hughes over their long and lively friendship. In addition to Hughes, Van Vechten corresponded with dozens of Harlem writers, musicians and intellectuals, saving all the letters and even making notes, such as “met,” next to each name. He painstakingly catalogued and preserved these letters, as well as hundreds of slides, which he donated to Yale University’s Beinecke Rare Book and Manuscript Library.
Van Vechten saw it as a badge of accomplishment to meet a prominent person—or introduce two important people to one another.
“It’s inarguable that he was a megalomaniac,” says Bernard. “He understood his place in the culture—that he was at the vortex, that he was the person who brought Gertrude Stein together with so many Harlem Renaissance figures that she would never have met.”
But he was not selfish in his sociability. Bernard sees both Van Vechten’s archive and his photography as “another arm of his work to connect people. He created the archives so people could understand the totality of the culture and what was happening in the early 20s through the 30s and 40s, so writers and readers could make a connection with this time.” She adds that, “He really wanted to educate from beyond the grave, ‘here’s what was happening in the culture.’”
Instead of seeing his photographs as reflective of his own art, he saw them as a way of preserving the world and the figures he was observing, saving them for posterity.
“His photography is unapologetically about the subject,” says Bernard. “He had a really precise sense that those photos were going to be archived. That was part of the artistic process for him.”
To help with this educational mission, he would even introduce props into his work, such as the flowers surrounding Altonell Hines or a guitar for Josh White, and used the setting or backdrop to convey something about the person—such as a boxing ring for Joe Louis or a landscape backdrop for Bessie Smith.
Collectively, these photographs try to make sense of the exciting and fast-changing culture of the time and “capture the essence of his subjects,” as Bernard puts it. “When you read about them you sense there is a whole matrix, not just individual subjects, but a whole world—and Van Vechten is the insider to that world; there was no one who was more important.”
She emphasizes that looking at these images today, a viewer will see how well Van Vechten knew his subjects, and that he wants to share this knowledge.
“He really was concerned about the viewer—he did this for you,” says Bernard. “He wanted the audience to know them as he knew them.”
"Harlem Heroes: Photographs by Carl Van Vechten" is on view at the Smithsonian American Art Museum in Washington, D.C. through March 29, 2017.
Picture a lion: The male has a luxuriant mane, the female doesn’t. This is a classic example of what biologists call sexual dimorphism—the two sexes of the same species exhibit differences in form or behavior. Male and female lions pretty much share the same genetic information, but look quite different.
We’re used to thinking of genes as responsible for the traits an organism develops. But different forms of a trait—mane or no mane—can arise from practically identical genetic information. Further, traits are not all equally sexually dimorphic. While the tails of peacocks and peahens are extremely different, their feet, for example, are pretty much the same.
Understanding how this variation of form—what geneticists call phenotypic variation—arises is crucial to answering several scientific questions, including how novel traits appear during evolution and how complex diseases emerge during a lifetime.
So researchers have taken a closer look at the genome, looking for the genes responsible for differences between sexes and between traits within one sex. The key to these sexually dimorphic traits appears to be a kind of protein called a transcription factor, whose job it is to turn genes “on” and “off.”
In our own work with dung beetles, my colleagues and I are untangling how these transcription factors actually lead to the different traits we see in males and females. A lot of it has to do with something called “alternative gene splicing”—a phenomenon that allows a single gene to encode for different proteins, depending on how the building blocks are joined together.

The gene doublesex produces visually obvious sexual dimorphism in the butterfly Papilio polytes, the common Mormon. Female (top), male (bottom). (Jeevan Jose, Kerala, India, CC BY-SA)
Over the years, different groups of scientists independently worked with various animals to identify genes that shape sexual identity; they realized that many of these genes share a specific region. This gene region was found in both the worm gene mab-3 and the insect gene doublesex, so they named similar genes containing this region DMRT genes, for “doublesex and mab-3 related transcription factors.”
These genes code for DMRT proteins that turn on or off the reading, or expression, of other genes. To do this, they seek out genes in DNA, bind to those genes, and make it either easier or harder to access the genetic information. By controlling what parts of the genome are expressed, DMRT proteins lead to products characteristic of maleness or femaleness. They match the expression of genes to the right sex and trait.
DMRTs almost always confer maleness. For instance, without DMRT, testicular tissue in male mice deteriorates. When DMRT is experimentally produced in female mice, they develop testicular tissue. This job of promoting testis development is common to most animals, from fish and birds to worms and clams.
DMRTs even confer maleness in animals where individuals develop both testes and ovaries. In fish that exhibit sequential hermaphroditism—where gonads change from female to male, or vice versa, within the same individual—the waxing and waning of DMRT expression results in the appearance and regression of testicular tissue, respectively. Likewise, in turtles that become male or female based on temperatures experienced in the egg, DMRT is produced in the genital tissue of embryos exposed to male-promoting temperatures.
The situation is a little different in insects. First, the role of DMRT (doublesex) in generating sexual dimorphism has extended beyond gonads to other parts of the body, including mouthparts, wingspots and mating bristles aptly named “sex combs.”

Depending on how the pieces are put together, one gene can result in a number of different proteins. (Cris Ledón-Rettig, CC BY-ND)
Secondly, male and female insects generate their own versions of the doublesex protein through what’s called “alternative gene splicing.” This is a way for a single gene to code for multiple proteins. Before genes are turned into proteins, they must be turned “on”; that is, transcribed into instructions for how to build the protein.
But the instructions contain both useful and extraneous regions of information, so the useful parts must be stitched together to create the final protein instructions. By combining the useful regions in different ways, a single gene can produce multiple proteins. In male and female insects, it’s this alternative gene splicing that results in the doublesex proteins behaving differently in each sex.
So in a female, instructions from the doublesex gene might include sections 1, 2 and 3, while in a male the same instruction might include only 2 and 3. The different resulting proteins would each have their own effect on what parts of the genetic code are turned on or off—leading to a male with huge mouthparts and a female without, for instance.
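The exon arithmetic described above can be sketched in a few lines of Python. This is a toy illustration only; the exon names are hypothetical and are not the real doublesex sequence.

```python
# Toy model of alternative splicing: one gene, two sex-specific transcripts.
# Exon names are invented for illustration, not real doublesex exons.

GENE = ["exon1", "exon2", "exon3"]  # all building blocks carried by the gene

def splice(gene, keep):
    """Stitch together only the retained exons, preserving gene order."""
    return "-".join(exon for exon in gene if exon in keep)

# As in the example above: females retain exons 1-3, males only 2 and 3.
female_isoform = splice(GENE, {"exon1", "exon2", "exon3"})
male_isoform = splice(GENE, {"exon2", "exon3"})

print(female_isoform)  # exon1-exon2-exon3
print(male_isoform)    # exon2-exon3
```

The two resulting strings stand in for the two distinct proteins: same gene, different products, hence different downstream effects in each sex.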
How do male and female forms of doublesex regulate genes to produce male and female traits? Our research group answered this question using dung beetles, which are exceptionally numerous in species (over 2,000), widespread (inhabiting every continent except Antarctica), versatile (consuming about every type of dung) and show amazing diversity in a sexually dimorphic trait: horns.

Thanks to the doublesex gene, in the stag beetle Cyclommatus metallifer, mandibles of males (right) are much larger than those of females (left). (http://dx.doi.org/10.1371/journal.pgen.1004098)
We focused on the bull-headed dung beetle Onthophagus taurus, a species in which males produce large, bull-like head horns but females remain hornless. We found that doublesex proteins can regulate genes in two ways.
In most traits, it regulates different genes in each sex. Here, doublesex is not acting as a “switch” between two possible sexual outcomes, but instead bestowing maleness and femaleness to each sex independently. Put another way, these traits don’t face a binary decision between becoming male or female, they are simply asexual and poised for further instruction.
The story is different for the dung beetles’ head horns. In this case, doublesex acts more like a switch, regulating the same genes in both sexes but in opposite directions. The female protein suppressed genes in females that would otherwise be promoted by the male protein in males. Why would there be an evolutionary incentive to do this?
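The contrast between the two regulatory modes can be summarized in a minimal sketch. The gene names here are invented placeholders, not real beetle loci; the point is only the logic: switch-like regulation pushes the same genes in opposite directions per sex, while independent regulation gives each sex its own target set.

```python
# Hypothetical sketch of doublesex's two regulatory modes described above.
# Gene names are placeholders for illustration, not real beetle genes.

def regulate(trait, sex):
    """Return the direction doublesex pushes each downstream gene."""
    if trait == "horns":
        # Switch-like mode: the SAME genes, opposite directions per sex.
        direction = "promote" if sex == "male" else "suppress"
        return {"horn_growth_a": direction, "horn_growth_b": direction}
    # Independent mode: each sex gets its OWN set of regulated genes.
    targets = {"male": ["male_trait_gene"], "female": ["female_trait_gene"]}
    return {gene: "promote" for gene in targets[sex]}

print(regulate("horns", "male"))    # horn genes promoted in males
print(regulate("horns", "female"))  # the same horn genes suppressed in females
```

In the switch-like case the female protein actively represses what the male protein activates, which is exactly the behavior observed for the beetles' head horns.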
Our data hinted that the female doublesex protein does this to avoid what is known as “sexual antagonism.” In nature, fitness is sculpted by both natural and sexual selection. Natural selection favors traits increasing survival, whereas sexual selection favors traits increasing access to mates.
Sometimes these forces are in agreement, but not always. The large head horns of male O. taurus increase their access to mates, but the same horns would be a hassle for females who have to tunnel underground to raise their offspring. This creates a tension between the sexes, or sexual antagonism, that limits the overall fitness of the species. However, if the female doublesex protein turns “off” genes that in males are responsible for horn growth, the whole species does better.
Our ongoing research is addressing how doublesex has evolved to generate the vast diversity in sexual dimorphism in dung beetles. Across species, horns are found in different body regions, grow differently in response to different quality diets, and can even occur in females rather than males.
In Onthophagus sagittarius, for instance, it’s the female that grows substantial horns while males remain hornless. This species is only five million years diverged from O. taurus, a mere drop of time in the evolutionary bucket for insects. For perspective, beetles diverged from flies about 225 million years ago. This suggests that doublesex can evolve quickly to acquire, switch, or modify the regulation of genes underlying horn development.
How will understanding the role of doublesex in sexually dimorphic insect traits help us understand phenotypic variation in other animals, even humans?
Despite the fact that DMRTs are spliced as only one form in mammals and act primarily in males, the majority of other human genes are alternatively spliced; just like insects’ doublesex gene, most human genes have various regions that can be spliced together in different orders with varying results. Alternatively spliced genes can have distinct or opposing effects based on which sex or trait they’re expressed in. Understanding how proteins that are produced by alternatively spliced genes behave in different tissues, sexes and environments will reveal how one genome can produce a multitude of forms depending on context.
In the end, the humble dung beetle’s horns can give us a peek into the mechanisms underlying the vast complexity of animal forms, humans included.
In an interview in January 2010, President Obama told Diane Sawyer of ABC News, “I’d rather be a really good one-term president than a mediocre two-term president.”
The comment didn’t sit well with Robert W. Merry, an acclaimed biographer of James Polk, who served as president from 1845 to 1849. Polk is ranked as a “near great” president in polls by scholars, but he is an exception. “History has not smiled upon one-term presidents,” wrote Merry in an editorial in the New York Times. “The typical one-term president generally falls into the ‘average’ category, occasionally the ‘above average.’ ”
In his new book, Where They Stand, Merry opens up the rating game beyond historians, to include what voters and contemporaries said in their own times. The editor of the National Interest, a foreign policy publication, argues that while historians’ views are important, presidential greatness is best seen through the eyes of voters of the president’s time. The greatest of the “greats,” in other words, have the election records to show it. They earned the trust of Americans in their first terms, won second terms and, in some cases, paved the way for their party to maintain control of the White House for the next four years.
Historians and others take joy in ranking the presidents, and debating these ranks. To you, what’s the fun in this?
It is the same fun that we have in trying to determine who is the greatest first baseman of all time. Most people would say Lou Gehrig, but there is plenty of room for debate. Who is the greatest American singer of the postwar period? But the presidents really have the national destiny in their hands. It is a much more significant pursuit than these others, which are more in the realm of trivia. Who was great? Who wasn’t so great? And, why were they great? Ranking presidents is a way we bring order to our thinking about our history.
What factors, do you think, need to be considered when assessing presidential greatness?
Greatness is as greatness does. It is really a question of what a president has accomplished with the country. Reagan’s question, “Are you better off than you were four years ago?” is very apt. Put another way, is the country better off? How is the country different? Are those differences good or are they not so good?
The great presidents all did something that changed the political landscape of America and set the country on a new course. That’s not easy to do. That is really the key to presidential greatness.
In your book, your big claim is that we should listen to the electorate at the time of the president’s term, and not just historians. Why do you put such emphasis on the voters?
Presidential politics is like retailing. The customer is always right. In our system, we put faith in the voters, because that is at the bedrock of how we think we should order our affairs politically. If you don’t believe that, then it is kind of hard to believe very strongly in American democracy.
The whole idea is that the voters emerge with a collective judgment, maybe even occasionally a collective wisdom. I happen to buy that. Therefore, I felt that the polls of historians were significant. I didn’t debunk them or toss them aside. But I thought they were incomplete, because they didn’t always take into account what the voters were saying, thinking or doing with regard to their presidents contemporaneously. I wanted to sort of crank that into the discussion.
There are six presidents that you refer to as “Leaders of Destiny.” What makes a president deserving of this title?
The six, in order, are Washington, Jefferson, Jackson, Lincoln, Teddy Roosevelt and Franklin Roosevelt. I happen to believe that Reagan will get into that circle, but right now, the polls of historians don’t quite have him there, although his standing is rising rather dramatically.
The six leaders of destiny pass a three-part test. They are consistently hailed among the greats or near greats by the historians. They are two-term presidents succeeded by their own party, meaning that the voters liked them both times that they served. And then, as I described earlier, they transformed the political landscape of the country and set it on a new course.
What were the major traits that these presidents shared? They all understood the nature of their time—what was really going on in the country, what the country needed, what the voters collectively were hungry for. There are a lot of presidents who don’t understand their time; they think they do, but they don’t. You have to have a vision. All of these leaders of destiny were elected at a time when the country needed tremendous leadership, and these presidents are the ones who stepped up and gave it. They also had political adroitness, the ability to get their hands on the levers of power in America and manipulate those levers in a way that gets the country moving effectively in the direction of that vision.
In your opinion, FDR and Ronald Reagan are the two greatest presidents of the 20th century.
The voters hailed them both at the time. What is interesting, in my view, is that Roosevelt was probably the most liberal president of the 20th century, and Reagan was probably the most conservative president of the 20th century. It indicates that the country is not particularly ideological. It is looking for the right solutions to the problems of the moment. The country is willing to turn left or to turn right.
What is the difference between good and great?
We have had a lot of good presidents. I’ll give you a good example of a good president, Bill Clinton. Clinton was elected because the country wasn’t quite satisfied with George H.W. Bush. They didn’t think he was a terrible president, but he didn’t quite lead the country in a way that made him eligible for rehire. The country gets Bill Clinton, and he tries to govern in his first two years as if his aim is to repeal Reaganism. The result was that the American people basically slapped him down very, very decisively in the midterm elections of 1994, at which point Bill Clinton did an about-face and said, “The era of big government is over.” He crafted a center left mode of governing that was very effective. He had significant economic growth. He wiped out the deficit. We didn’t have major problems overseas. There was no agitation in the streets that led to violence or anything of that nature. He gets credit for being a good president.
Once he righted his mode of government and moved the country solidly forward, he was beginning to build up some significant political capital, and he never really felt the need or desire to invest that capital into anything very bold. So, he governed effectively as a status quo president and ended eight years as a very good steward of American polity, but not a great president. To be a great president, you have to take risks and make changes.
Just as we can learn from the successes, there are lessons to be learned from the failures. What can you say about character traits that do not bode well for a successful presidency?
Scandal harms you tremendously. But I would say that the real failures are people like James Buchanan, who faced a huge crisis—the debate over slavery that was descending upon America—and simply didn’t want to deal with it. He wasn’t willing to put himself out in any kind of politically risky way in order to address it. The result was that it festered and got worse.
Occasionally, a president will make a comeback in historians’ minds. What would you say is the most reputation-altering presidential biography?
Grover Cleveland is the only president we have who actually is a two-time, one-term president. He is the only president who served two nonconsecutive terms. Each time he served four years, the voters said, “I’ve had enough. I’m going to turn away to either another person in the party or another candidate.”
Meanwhile, however, the first poll by Arthur Schlesinger Sr. in 1948 had Grover Cleveland at Number 8. That ranking came a few years after the great historian Allan Nevins wrote a two-volume biography of Grover Cleveland, in which he hailed him as a man of destiny and a man of character. I am sure that biography had a significant impact.
So, you describe a manner of assessing the greatest of past presidents. But, it is an election year. How do you suggest we evaluate current presidential candidates?
I don’t think the American people need a lot of instruction from me or anyone else in terms of how to make an assessment on the presidents when they come up for reelection. Presidential elections are largely referendums on the incumbent. The American people don’t pay a lot of attention to the challenger. They basically make their judgment collectively, based on the performance of the incumbent or the incumbent party. They pretty much screen out the trivia and the nonsense—a lot of the stuff that we in the political journalistic fraternity (and I’ve been a part of it for a long, long time) tend to take very seriously—and make their assessment based on sound judgments on how the president has fared, how well he has led the country and whether the country is in better shape than it was before. I am pretty confident that the American people know what they are doing.
Do you have any comment, then, on what qualities we might look for in a candidate, so that we maximize our chances of electing a leader of destiny?
One thing that we know from history is that the great presidents are never predicted as being great. They are elected in a political crucible. While supporters are convinced he is going to be great—or she; someday we will have a woman—his detractors and opponents will be absolutely convinced that he is going to be a total and utter disaster. Even after he is succeeding, they are going to say he is a disaster.
You can never really predict what a president is going to do or how effective he is going to be. Lincoln was considered a total country bumpkin from out there in rural Illinois. Oliver Wendell Holmes famously judged Franklin Roosevelt to have a second-class intellect but a first-class temperament. Ronald Reagan was viewed as a failed movie actor who just read his lines from 3-by-5 cards. And all three were great presidents.
What idea are you turning to next?
I wrote a history of the James Polk presidency [A Country of Vast Designs] and how the country moved west and gained all of that western and southwestern territory, Washington, Oregon, Idaho and then California to Texas. I am fascinated now by the subsequent time in our history when we busted out of our continental confines and went out into the world in the Spanish-American War. I am looking at the presidency of William McKinley and the frothy optimism of the country at that time when we decided to become something of an imperial power.
This interview series focuses on big thinkers. Without knowing whom I will interview next, only that he or she will be a big thinker in their field, what question do you have for my next interview subject?
I guess a big question I would have in terms of the state of the country is, why is the country in such a deadlock? And how in the world are we going to get out of the crisis that is a result of that deadlock?
From my last interviewee, Frank Partnoy, a University of San Diego professor and author of Wait: The Art and Science of Delay: How do you know what you know? What is it about your research and experience and background that leads you to a degree of certainty about your views? With what degree of confidence do you hold that idea?
I am not a young man. I have been around a long time. I had certainty when I was young, but I have had a lot of my certitudes shaken over the years. But, if you have enough of that, you tend to accumulate at least a few observations about the world that seem pretty solid and grounded. So, you go with them.
You have to take it on faith that you have seen enough and you know enough and you have certain principal perceptions of how things work and how events unfold and how the thesis-antithesis leads to synthesis in politics or government or history. And, so you pull it together as best you can. Ultimately, the critics will determine how successful you were.
Earth has no shortage of stunning landforms: Mount Everest rises majestically above the clouds; the Grand Canyon rends deep into desert rock layers; the mountains that make up the Ethiopian Highlands, aka the Roof of Africa, tower above the rest of the continent. But all of these natural icons pale in comparison to the dramatic formations that lie beneath the ocean. Next to the deep sea's mountains and gorges, the Grand Canyon is a mere dimple, Mount Everest a bunny slope and the Highlands an anthill on the horn of Africa.
The shape of the ocean floor helps determine weather patterns, when and where tsunamis will strike, and how to manage the fisheries that feed millions. And yet we’ve barely begun to understand it. To borrow an analogy from oceanographer Robert Ballard, best known for discovering the wreck of the Titanic: With only 5 percent of the ocean floor mapped, our knowledge of what’s beneath is about as detailed as a set dinner table with a wet blanket thrown over it. You can see the outlines, but how do you tell the candelabra from the turkey?
Fortunately, we’re about to whip the blanket off and reveal this aquatic meal in exquisite detail. In June, an international team of oceanographers launched the first effort to create a comprehensive map of all the world’s oceans. To map some 140 million square miles of sea floor, the Seabed 2030 project is currently recruiting around 100 ships that will crisscross the globe for 13 years. The team, united under the non-profit group General Bathymetric Chart of the Oceans (GEBCO), recently announced it had received $18.5 million from the Nippon Foundation for its efforts.
Many oceanographers hail the project as a long-overdue illumination of a geological and biological world. It could also be lifesaving: even today, the lack of a detailed map can be deadly, as was the case when the USS San Francisco crashed into an uncharted undersea mountain in 2005. “People have been excited about going to different planets,” says Martin Jakobsson, professor of marine geology and geophysics at Stockholm University, but “we haven’t been able to bring the attention to our own Earth in the same way as Mars. It hasn’t been easy to rally the whole world behind us.”
Yet at the same time, some ecologists fear that such a map will also aid mining industries who seek profit in the previously unattainable depths of the Earth.
It’s a common sentiment among Earth scientists—often a lament—that we know more about other planets in the solar system than we do our own. Indeed, astronomers have a more complete topographical understanding of the moon, Mars, ex-planet Pluto and the dwarf planet Ceres than we do of the seabed. This is shocking, because the topography of the seafloor plays such a huge role in keeping the planet habitable—a role we need to fully understand in order to predict what the future of our climate holds.
The reason we have no comprehensive map is dumbfoundingly simple, considering that we’ve traversed and charted our solar system: “It’s not so easy to map the ocean, because the water is in the way,” says Jakobsson. The ocean is big, deep and impermeable to the laser altimeter that made mapping our less watery neighbor planets possible. To complete a map of Earth’s ocean floor, you’ve got to take to the high seas by boat.
We've come a long way in ocean exploration since the days of the HMS Challenger, launched in 1858. (The Report of the Scientific Results of the Exploring Voyage of HMS Challenger during the years 1873–1876)
The first oceanographic researchers—like those onboard the H.M.S. Challenger expedition—built sea floor maps by “sounding” with weighted lines lowered to reach the sediment below. Compiled one data point at a time, this painstaking yet critical undertaking aided navigation and prevented ships from running aground. At the same time, it helped satisfy simple scientific curiosity about the depths of the ocean.
Thankfully, the technology used today has advanced beyond dangling plumb lines over the side of the ship. Modern ships like those that will be employed by Seabed 2030 are outfitted with multibeam bathymetry systems. These sensors ping large swaths of ocean floor with sound waves; the returning echoes are analyzed by computers on deck. One ship can now provide thousands of square kilometers' worth of high-resolution maps during an expedition. Still, it would take a lone ship approximately 200 years to chart all 139.7 million square miles of ocean.
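That century-scale estimate can be sanity-checked with back-of-the-envelope arithmetic. The survey parameters below (mean depth, swath width as a multiple of depth, survey speed) are illustrative assumptions for the sketch, not figures from the Seabed 2030 project:

```python
# Rough sanity check of the "one ship, ~200 years" figure.
# All survey parameters are illustrative assumptions.

SQ_MILE_TO_KM2 = 2.58999
ocean_area_km2 = 139.7e6 * SQ_MILE_TO_KM2   # ~362 million square kilometers

avg_depth_km = 3.7            # approximate mean ocean depth
swath_km = 4 * avg_depth_km   # multibeam swath width ~4x water depth (assumption)
speed_km_h = 18.5             # ~10 knots survey speed (assumption)

# Area covered per year if the ship surveyed around the clock, all year
coverage_km2_per_year = swath_km * speed_km_h * 24 * 365
years = ocean_area_km2 / coverage_km2_per_year

print(f"~{years:.0f} years for one ship surveying nonstop")
```

Under these idealized assumptions the answer comes out around 150 years; since no real ship surveys nonstop (transit, port calls, weather, narrower swaths in shallow water), the practical figure stretches toward the article's two centuries.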
That's where Seabed 2030 comes in. It will facilitate the collection of multibeam measurements on a coalition of ships charting previously unexplored territory, while also serving as a repository of existing map data. “When you look at a world map it seems like we’ve got it all figured out,” Jakobsson says. But those maps are just rough, artistic estimations of what the seafloor looks like. “I foresee a lot of new discoveries,” he says of the mapping project. After all, “our major discoveries have been because of mapping”—and there’s a lot more to be found.
The discoveries lying in wait beneath the waves aren’t only of interest to oceanographers. Hidden in the subsea mountains and valleys are vast pools of resources like precious metals, rare earth elements and even diamonds. “It’s like the old Klondike [Gold Rush], but the streams lead to the ocean,” says Steven Scott, professor of geology at the University of Toronto and consultant to the marine mining industry. “There’s mining for diamonds off of Southern Africa, tin deposits off of Indonesia, gold off Alaska.”
Currently, seafloor mining takes place only in these relatively shallow, near-shore locations, rather than in deep international waters. That’s partly because prospectors can’t target mining operations without accurate maps of the deep sea floor, but also because international laws make it challenging to exploit resources in international waters.
“Seabed minerals and areas beyond national jurisdiction are part of the Common Heritage of Mankind,” says Kristina Gjerde, the high seas policy advisor for the International Union for Conservation of Nature. In 1982 the United Nations adopted the Convention on the Law of the Sea, which laid down rules to govern the use of the ocean’s resources. The law states that deep-sea life must be protected, and that revenue made from mining in the deep sea must be shared with the international community.
“We know so little about potential environmental impacts” of ocean mining, Gjerde says. “Some are starting to question if we know enough to authorize mining to proceed. We really need a better understanding of the deep sea before we start to do any irremediable harm.” Gjerde is co-author on a recent editorial in the journal Nature Geoscience arguing that while deep-sea mining might fuel economic development, the industry should increase its efforts to protect marine habitats.
This, say Gjerde and other concerned biologists, is the catch-22 of generating a comprehensive topography of the seafloor: It will undoubtedly help scientists better understand the rich and crucial geology of our planet. But it could also serve as a treasure map for the mining industry.
Scott agrees that habitats around mining operations will be impacted. Still, based on his experience, he says, “I think [the effects] will be less substantial” than mining on land, which is known to have catastrophic environmental consequences ranging from acid mine drainage that pollutes water to toxic clouds of dust. “None of those things will be a problem in the ocean,” Scott says.
There won’t be any holes because the targeted resources are near the surface of the seabed, he points out. Dust isn’t a factor in a liquid medium, and alkaline seawater would quickly neutralize any acidic byproducts. Proponents of ocean prospecting also point out that we simply need the resources that are out there.
“Mines on land are soon going to run out,” Scott says. “Every electronic device in the world has rare earth [metals] in it ... we need raw resources.” And what happens when we eventually run out of things to mine from the ocean? Scott says, “We start mining asteroids, or Mars.” Well, at least we’ve already got the maps for those.
But back to the sea floor. As Ballard said last year at the Forum for Future Ocean Floor Mapping: “They tell children that their generation is going to explore more of Earth than all previous generations combined. As soon as we finish that map, the explorers are right behind.” The question of just what kind of explorers those will be—those searching for knowledge or riches, seeking to preserve or extract—remains to be seen.
Soon after I enrolled as a graduate student at Cambridge University in 1964, I encountered a fellow student, two years ahead of me in his studies, who was unsteady on his feet and spoke with great difficulty. This was Stephen Hawking. He had recently been diagnosed with a degenerative disease, and it was thought that he might not survive long enough even to finish his PhD. But he lived to the age of 76, passing away on March 14, 2018.
It really was astonishing. Astronomers are used to large numbers. But few numbers could be as large as the odds I’d have given against witnessing this lifetime of achievement back then. Even mere survival would have been a medical marvel, but of course he didn’t just survive. He became one of the most famous scientists in the world—acclaimed as a world-leading researcher in mathematical physics, for his best-selling books and for his astonishing triumph over adversity.
Perhaps surprisingly, Hawking was rather laid back as an undergraduate student at Oxford University. Yet his brilliance earned him a first class degree in physics, and he went on to pursue a research career at the University of Cambridge. Within a few years of the onset of his disease, he was wheelchair-bound, and his speech was an indistinct croak that could only be interpreted by those who knew him. In other respects, fortune had favored him. He married a family friend, Jane Wilde, who provided a supportive home life for him and their three children.
The 1960s were an exciting period in astronomy and cosmology. This was the decade when evidence began to emerge for black holes and the Big Bang. In Cambridge, Hawking focused on the new mathematical concepts being developed by the mathematical physicist Roger Penrose, then at University College London, which were initiating a renaissance in the study of Einstein’s theory of general relativity.
Using these techniques, Hawking worked out that the universe must have emerged from a “singularity”—a point at which all laws of physics break down. He also realised that the area of a black hole’s event horizon—a boundary beyond which nothing can escape—could never decrease. In the subsequent decades, the observational support for these ideas has strengthened—most spectacularly with the 2016 announcement of the detection of gravitational waves from colliding black holes.
Hawking at the University of Cambridge (Lwp Kommunikáció/Flickr, CC BY-SA)
Hawking was elected to the Royal Society, Britain’s main scientific academy, at the exceptionally early age of 32. He was by then so frail that most of us suspected that he could scale no further heights. But, for Hawking, this was still just the beginning.
He worked in the same building as I did. I would often push his wheelchair into his office, and he would ask me to open an abstruse book on quantum theory—the science of atoms, not a subject that had hitherto much interested him. He would sit hunched motionless for hours—he couldn’t even turn the pages without help. I remember wondering what was going through his mind, and if his powers were failing. But within a year, he came up with his best ever idea—encapsulated in an equation that he said he wanted on his memorial stone.
The great advances in science generally involve discovering a link between phenomena that seemed hitherto conceptually unconnected. Hawking’s “eureka moment” revealed a profound and unexpected link between gravity and quantum theory: he predicted that black holes would not be completely black, but would radiate energy in a characteristic way.
This radiation is only significant for black holes that are much less massive than stars—and none of these have been found. However, “Hawking radiation” had very deep implications for mathematical physics—indeed one of the main achievements of a theoretical framework for particle physics called string theory has been to corroborate his idea.
Indeed, the string theorist Andrew Strominger from Harvard University (with whom Hawking recently collaborated) said that this paper had caused “more sleepless nights among theoretical physicists than any paper in history.” The key issue is whether information that is seemingly lost when objects fall into a black hole is in principle recoverable from the radiation when it evaporates. If it is not, this violates a deeply believed principle of general physics. Hawking initially thought such information was lost, but later changed his mind.
Hawking continued to seek new links between the very large (the cosmos) and the very small (atoms and quantum theory) and to gain deeper insights into the very beginning of our universe—addressing questions like “was our big bang the only one?” He had a remarkable ability to figure things out in his head. But he also worked with students and colleagues who would write formulas on a blackboard—he would stare at it, say whether he agreed and perhaps suggest what should come next.
He was especially influential in his contributions to “cosmic inflation”—a theory that many believe describes the ultra-early phases of our expanding universe. A key issue is to understand the primordial seeds which eventually develop into galaxies. Hawking proposed (as, independently, did the Russian theorist Viatcheslav Mukhanov) that these were “quantum fluctuations” (temporary changes in the amount of energy in a point in space)—somewhat analogous to those involved in “Hawking radiation” from black holes.
He also made further steps towards linking the two great theories of 20th century physics: the quantum theory of the microworld and Einstein’s theory of gravity and space-time.
In 1987, Hawking contracted pneumonia. He had to undergo a tracheotomy, which removed even the limited powers of speech he then possessed. It had been more than ten years since he could write, or even use a keyboard. Without speech, the only way he could communicate was by directing his eye towards one of the letters of the alphabet on a big board in front of him.
But he was saved by technology. He still had the use of one hand; and a computer, controlled by a single lever, allowed him to spell out sentences. These were then declaimed by a speech synthesiser, with the androidal American accent that thereafter became his trademark.
His lectures were, of course, pre-prepared, but conversation remained a struggle. Each word involved several presses of the lever, so even a sentence took several minutes to construct. He learnt to economize with words. His comments were aphoristic or oracular, but often infused with wit. In his later years, he became too weak to control this machine effectively, even via facial muscles or eye movements, and his communication—to his immense frustration—became even slower.
Hawking in zero gravity (NASA)
At the time of his tracheotomy operation, he had a rough draft of a book, which he’d hoped would describe his ideas to a wide readership and earn something for his two eldest children, who were then of college age. On his recovery from pneumonia, he resumed work with the help of an editor. When the U.S. edition of A Brief History of Time appeared, the printers made some errors (a picture was upside down), and the publishers tried to recall the stock. To their amazement, all copies had already been sold. This was the first inkling that the book was destined for runaway success, reaching millions of people worldwide.
And he quickly became something of a cult figure, featuring on popular TV shows ranging from The Simpsons to The Big Bang Theory. This was probably because the concept of an imprisoned mind roaming the cosmos plainly grabbed people’s imagination. If he had achieved equal distinction in, say, genetics rather than cosmology, his triumph probably wouldn’t have achieved the same resonance with a worldwide public.
As shown in the feature film The Theory of Everything, which tells the human story behind his struggle, Hawking was far from being the archetypal unworldly or nerdish scientist. His personality remained amazingly unwarped by his frustrations and handicaps. He had robust common sense, and was ready to express forceful political opinions.
However, a downside of his iconic status was that his comments attracted exaggerated attention even on topics where he had no special expertise—for instance, philosophy, or the dangers from aliens or from intelligent machines. And he was sometimes involved in media events where his “script” was written by the promoters of causes about which he may have been ambivalent.
Ultimately, Hawking’s life was shaped by the tragedy that struck him when he was only 22. He himself said that everything that happened since then was a bonus. And what a triumph his life has been. His name will live in the annals of science and millions have had their cosmic horizons widened by his best-selling books. He has also inspired millions by a unique example of achievement against all the odds—a manifestation of amazing willpower and determination.
While walking in the shadow of their people’s sacred volcano, Maasai villagers in 2006 stumbled across a set of curious footprints. Clearly made by human feet, but set in stone, they appeared to be the enigmatic traces of some long-forgotten journey.
Now scientists have teased out some of the story behind those ancient prints and the people who, with some help from the volcano, left them behind. It begins while they were walking through the same area as the Maasai—separated by a span of perhaps 10,000 years.
“It’s kind of amazing to walk alongside these footprints and say, ‘Wow, thousands of years ago somebody walked here. What were they doing? What were they looking for? Where were they going?’” says Briana Pobiner, a paleoanthropologist at the Smithsonian’s National Museum of Natural History with the Human Origins Program. Pobiner is one of the scientists who has studied the prints at Engare Sero in Tanzania during the 14 years since their initial discovery.
An in-depth footprint analysis has now produced an intriguing theory to explain what the walkers were doing on the day when impressions of their toes and soles were preserved on a mudflat. Pobiner and her colleagues, in a study recently published in Scientific Reports, suggest that a large collection of the tracks, moving in the same direction at the same pace, were made by a primarily female group that was foraging on or near what was then a lakeshore. This practice of sexually divided gathering behavior is still seen among living hunter-gatherer peoples, but no bone or tool would ever be able to reveal whether it was practiced by their predecessors so long ago.
Footprints, however, allow us to quite literally retrace their steps.
When Kevin Hatala, the lead author of the study, and his colleagues began working the site in 2009 they found 56 visible footprints that had been exposed by the forces of erosion over the centuries. But they soon realized that the bulk of the site remained hidden from view. Between 2009 and 2012 the researchers excavated what has turned out to be the largest array of modern human fossil footprints yet found in Africa, 408 definitively human prints in total. It’s most likely that the prints were made between 10,000 and 12,000 years ago, but the study’s conservative dating range stretches from as early as 19,000 to as recently as 5,760 years ago.
A previous analysis, involving some of the same authors, determined that as these people walked, their feet squished into an ashy mudflat produced by an eruption of Ol Doinyo Lengai volcano, which even today is still active and looms over the site of the footprints.
Deposits from the volcano were washed down into the mudflat. After the human group walked across and over the area, creating so many prints that scientists have nicknamed one heavily-trod area “the dance floor,” the ashy mud hardened in a matter of days or even hours. Then it was buried by a subsequent sediment flow which preserved it until the actions of erosion brought dozens of prints to light—and the excavations of the team unearthed hundreds more.
Fossil footprints capture behavior in a way that bones and stones cannot. The process of preservation happens over a short period of time. So while bones around a hearth don’t necessarily mean that their owners circled the fire at exactly the same time, fossilized footprints can reveal those kinds of immediate interactions.
“It’s a snapshot of life at a moment in time, the interaction of individuals, the interaction of humans with animals that’s preserved in no other way. So it’s a real boon to behavioral ecology,” says Matthew Bennett, an expert on ancient footprints at Bournemouth University. Bennett, who wasn’t involved in the study, has visited the Engare Sero site.
Fossil footprints are analyzed by size and shape, by the orientation of the foot as it created the print, and by the distances between the prints which, combined with other aspects, can be used to estimate how fast the individual walked or ran. One of the ancient travelers who left a trackway heading in a different direction than the larger group appears to have been passing through the area in a hurry, running at better than six miles per hour.
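One widely used way to turn trackway measurements into a speed estimate is Alexander's (1976) empirical formula, which relates stride length and hip height to velocity. Whether the Engare Sero study used this exact formula is not stated here, and the input values below are purely illustrative:

```python
import math

def speed_m_s(stride_m: float, hip_height_m: float) -> float:
    """Alexander's (1976) empirical estimate: v = 0.25 * g^0.5 * SL^1.67 * h^-1.17,
    where SL is stride length and h is hip height, both in meters."""
    g = 9.81  # gravitational acceleration, m/s^2
    return 0.25 * math.sqrt(g) * stride_m**1.67 * hip_height_m**-1.17

# Illustrative values: a ~2.0 m stride and ~0.9 m hip height
v = speed_m_s(2.0, 0.9)
print(f"{v:.2f} m/s (~{v * 2.23694:.1f} mph)")
```

With these assumed inputs the formula yields roughly 2.8 m/s, or a bit over six miles per hour, consistent in scale with the running trackmaker described above; longer strides relative to hip height indicate faster travel.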
The main group, heading to the southwest, moved at a more leisurely pace. The team’s footprint analysis suggests it most likely consisted of 14 adult females accompanied, intermittently at least, by two adult males and a juvenile male.
“I think it looks like it’s a good reflection of what we see in some modern hunter-gatherers with groups of women foraging together,” says Pobiner. Tanzania’s Hadza and Paraguay’s Aché peoples still tackle these tasks in a similar manner. “Oftentimes there is basically gender foraging, where women will forage together and men will forage together. There are sometimes mixed groups, but we often see this kind of sexual division of labor in terms of food gathering,” Pobiner says. “It doesn’t mean that these 14 women always foraged together,” she adds. “But at least on this one day or this one instance, this is what we see in this group.”
While no animals appear to have been traveling with the group, there are prints nearby of zebra and buffalo. The humans and the animals were apparently sharing a landscape that even today isn’t far from the southern shoreline of Lake Natron. Depending on exactly when the prints were made, the water may have been much closer to the current site.
“It’s possible that these were just people and animals kind of wandering along the lakeshore all looking for something to eat,” Pobiner says. Other sets of footprints, like those made in northwestern Kenya, capture just this sort of behavior among ancient hominins like Homo erectus.
“They did a very nice study on a very nice set of footprints. It’s well executed and they have come up with some really interesting conclusions,” Matthew Bennett says of the research, adding that it’s a welcome addition to a rapidly growing body of scientific literature on the subject of ancient trackways.
Fossilized footprints were once thought to be extremely rare, “freaks of geological preservation,” Bennett notes. An explosion of fossil footprint discoveries over the past decade suggests they aren’t so rare after all, but surprisingly common wherever our ancient relatives put one foot in front of the other, from Africa to New Mexico.
“If you think about it there’s something like 206 bones in the body, so maybe 206 chances that a body fossil will be preserved,” Bennett says. “But in an average modern lifetime you’ll make millions and millions of footprints, a colossal number. Most won’t be preserved, but we shouldn’t be surprised that they aren’t actually so rare in the geological record.”
A famous set of prints from nearby Laetoli, Tanzania dates to some 3.6 million years ago and was likely made by Australopithecus afarensis. At New Mexico's White Sands National Monument, ancient footprints of human and beast may be evidence of an ancient sloth hunt.
Study co-author Vince Rossi, supervisor of the 3D program at the Smithsonian Digitization Program Office, aims to give these particular fossil footprints even wider distribution. His team created 3D images of the site that initially supported scientific research and analysis efforts. Today they are extending the footprints’ journey from a Tanzanian mudflat to the farthest corners of the globe.
“How many people can travel to this part of Tanzania to actually see these footprints? We’re able to give a level of accessibility to everyone,” he says. Rossi’s team has made the 3D footprints available online, and the data from a selection of prints can even be downloaded to a 3D printer so that users can replicate their favorite Engare Sero footprints.
Because 3D images capture the footprints as they appeared at a specific moment in time they’ve also become a valuable tool for preservation. The study employed two sets of images, Rossi’s 2010 array and a suite of 3D images taken by an Appalachian State University team in 2017. Comparing those images reveals visible degradation of the exposed prints during that relatively short time, and highlights the urgency of protecting them now that they’ve been stripped of the overlying layers that protected them for thousands of years.
Finding ways to preserve the footprints is a key prerequisite for uncovering more, which seems likely because the tracks heading northward lead directly under sediment layers that haven’t been excavated. Future finds would add to a paleoanthropological line of investigation that is delivering different kinds of results than traditional digs of tools or fossils.
“Footprints give us information about anatomy and group dynamics that you just can’t get from bones,” Pobiner says. “And I love the idea that there are different and creative ways for us to interpret behaviors of the past.”
On August 21, 1915, the Conklin family departed Huntington, New York on a cross-country camping trip in a vehicle called the “Gypsy Van.” Visually arresting and cleverly designed, the 25-foot, 8-ton conveyance had been custom-built by Roland Conklin’s Gas-Electric Motor Bus Company to provide a maximum of comfort while roughing it on the road to San Francisco. The New York Times gushed that had the “Commander of the Faithful” ordered the “Jinns… to produce out of thin air… a vehicle which should have the power of motion and yet be a dwelling place fit for a Caliph, the result would have fallen far short of the actual house upon wheels which [just] left New York.”
For the next two months, the Conklins and the Gypsy Van were observed and admired by thousands along their westward route, ultimately becoming the subjects of nationwide coverage in the media of the day. Luxuriously equipped with an electrical generator and incandescent lighting, a full kitchen, Pullman-style sleeping berths, a folding table and desk, a concealed bookcase, a phonograph, convertible sofas with throw pillows, a variety of small appliances, and even a “roof garden,” this transport was a marvel of technology and chutzpah.
For many Americans, the Conklins’ Gypsy Van was their introduction to recreational vehicles, or simply RVs. Ubiquitous today, our streamlined motorhomes and camping trailers alike can trace their origins to the time between 1915 and 1930, when Americans’ urge to relax by roughing it and their desire for a host of modern comforts first aligned with a motor camping industry that had the capacity to deliver both.
The Conklins did not become famous simply because they were camping their way to California. Camping for fun was not novel in 1915: It had been around since 1869, when William H.H. Murray published his wildly successful Adventures in the Wilderness; Or, Camp-Life in the Adirondacks, America’s first “how-to” camp guidebook.
Ever since Murray, camping literature has emphasized the idea that one can find relief from the noise, smoke, crowds, and regulations that make urban life tiresome and alienating by making a pilgrimage to nature. All one needed to do was head out of town, camp in a natural place for a while, and then return home restored in spirit, health and sense of belonging. While in the wild, a camper—like any other pilgrim—had to undergo challenges not found at home, which is why camping has long been called “roughing it.” Challenges were necessary because, since Murray’s day, camping has been a recapitulation of the “pioneer” experience on the pre-modern “frontier” where the individual and family were central and the American nation was born.
Camping’s popularity grew slowly, but got more sophisticated when John B. Bachelder offered alternatives to Murray’s vision of traveling around the Adirondacks by canoe in his 1875 book Popular Resorts and How to Reach Them. Bachelder identified three modes of camping: on foot (what we call “backpacking”); on horseback, which allowed for more gear and supplies; and with a horse and wagon. This last was most convenient, allowing for the inclusion of more gear and supplies as well as campers who were unprepared for the rigors of the other two modes. However, horse-and-wagon camping was also the most costly and geographically limited because of the era’s poor roads. In short order, Americans across the country embraced all three manners of camping, but their total number remained relatively small because only the upper middle classes had several weeks’ vacation time and the money to afford a horse and wagon.
Over the next 30 years, camping slowly modernized. In a paradoxical twist, this anti-modern, back-to-nature activity has long been technologically sophisticated. As far back as the 1870s, when a new piece of camping gear appeared, it was often produced with recently developed materials or manufacturing techniques to improve comfort and convenience. Camping enthusiasts, promoters, and manufacturers tended to emphasize the positive consequences of roughing it, but, they added, one didn’t have to suffer through every discomfort to have an authentic and satisfying experience. Instead, a camper could “smooth” some particularly distressing roughness by using a piece of gear that provided enhanced reliability, reduced bulk, and dependable outcomes.
Around 1910 the pace of camping’s modernization increased when inexpensive automobiles began appearing. With incomes rising, car sales exploded. At the same time, vacations became more widespread—soon Bachelder’s horses became motor vehicles, and all the middle classes started to embrace camping. The first RV was hand-built onto an automobile in 1904. This proto-motorhome slept four adults on bunks, was lit by incandescent lights and included an icebox and a radio. Over the course of the next decade, well-off tinkerers continued to adapt a variety of automobile and truck chassis to create even more spacious and comfortable vehicles, but a bridge was crossed in 1915 when Roland and Mary Conklin launched their Gypsy Van.
Unlike their predecessors, the wealthy Conklins modified a bus into a fully furnished, double-deck motorhome. The New York Times, which published several articles about the Conklins, was not sure what to make of their vehicle, suggesting that it was a “sublimated English caravan, land-yacht, or what you will,” but they were certain that it had “all the conveniences of a country house, plus the advantages of unrestricted mobility and independence of schedule.” The family’s journey was so widely publicized that their invention became the general template for generations of motorhomes.
The appeal of motorhomes like the Conklins’ was simple and clear for any camper who sought to smooth some roughness. A car camper had to erect a tent, prepare bedding, unpack clothes, and establish a kitchen and dining area, which could take hours. The motorhome camper could avoid much of this effort. According to one 1920s observer, a motorhome enthusiast simply “let down the back steps and the thing was done.” Departure was just as simple.
When the Conklin family traveled from New York to San Francisco in their luxury van, the press covered their travels avidly. (Courtesy of the George Grantham Bain Collection at the Library of Congress)
By the middle of the 1920s, many Americans of somewhat more average means were tinkering together motorhomes, many along the lines made popular by the Conklins, and with the economy booming, several automobile and truck manufacturers also offered a limited number of complete motorhomes, including REO’s “speed wagon bungalow” and Hudson-Essex’s “Pullman Coach.”
In spite of their comforts, motorhomes had two distinct limitations, which ultimately led to the creation of the RV’s understudy: the trailer. A camper could not disconnect the house portion and drive the automobile part alone. (The Conklins had carried a motorcycle.) In addition, many motorhomes were large and limited to traveling only on automobile-friendly roads, making wilder landscapes unreachable. As a consequence of these limitations and their relatively high cost, motorhomes remained a marginal choice among RV campers until the 1960s. Trailers, by contrast, became the choice of people of average means.
The earliest auto camping trailers appeared during the early 1910s but they were spartan affairs: a plain device for carrying tents, sleeping bags, coolers, and other camping equipment. Soon, motivated tinkerers began to attach tent canvas on a collapsible frame, adding cots for sleeping and cupboards for cooking equipment and creating the first “tent trailers.” By mid-decade, it was possible to purchase a fully equipped, manufactured one. In 1923’s Motor Camping, J.C. Long and John D. Long declared that urban Americans were “possessed of the desire to be somewhere else” and the solution was evident—trailer camping. Tent trailering also charmed campers because of its convenience and ease. “Your camping trip will be made doubly enjoyable by using a BRINTNALL CONVERTIBLE CAMPING TRAILER,” blared an advertisement by the Los Angeles Trailer Company. The trailer was “light,” incorporated “comfortable exclusive folding bed features,” and had a “roomy” storage compartment for luggage, which left the car free to be “used for passengers.”
Tent trailering, however, had some drawbacks that became clear to Arthur G. Sherman in 1928 when he and his family headed north from their Detroit home on a modest camping trip. A bacteriologist and the president of a pharmaceutical company, Sherman departed with a newly purchased tent trailer that the manufacturer claimed could be opened into a waterproof cabin in five minutes. Unfortunately, as he and his family went to set it up for the first time, a thunderstorm erupted, and, as Sherman claimed, they “couldn’t master it after an hour’s wrestling.” Everyone got soaked. The experience so disgusted Sherman that he decided to create something better.
The initial design for Sherman’s new camping trailer was a Masonite body six feet wide by nine feet long and no taller than the family’s car. On each side was a small window for ventilation, with two more up front. Inside, Sherman placed cupboards, an icebox, a stove, built-in furniture and storage on either side of a narrow central aisle. By today’s standards, the trailer was small, boxy and unattractive, but it was solid and waterproof, and it required no folding. Sherman had a carpenter build it for him for about $500, and the family took their new “Covered Wagon” (named by the children) camping the following summer, in 1929. It had some problems—principally, it was too low inside—but the trailer aroused interest among many campers, some of whom offered to buy it from him. Sherman sensed an opportunity.
That fall, Sherman built two additional Covered Wagons. One was for a friend; the other he displayed at the Detroit Auto Show in January 1930. He set the price at $400, which was expensive, and although few people came by the display, Sherman reported that those who did were “fanatically interested.” By the end of the show, he had sold 118 units, the Covered Wagon Company was born, and the shape of an RV industry was set.
Over the next decade the company grew rapidly and to meet demand, trailers were built on an assembly line modeled on the auto industry. In 1936, Covered Wagon was the largest trailer producer in an expanding American industry, selling approximately 6,000 units, with gross sales of $3 million. By the end of the 1930s, the solid-body industry was producing more than 20,000 units per year and tent trailers had more or less disappeared.
Arthur Sherman’s solid-body trailer quickly gained acceptance for two principal reasons. First, Sherman was in the right place, at the right time, with the right idea. Detroit was at the center of the Great Lakes states, which at that time contained the country’s greatest concentration of campers. Furthermore, southern Michigan was the hub of the automobile industry, so a wide range of parts and skills were available, especially once the Depression dampened demand for new automobiles. Second, a solid-body trailer took another step along the path of modernization by providing a more convenient space that was usable at any time.
Today’s 34-foot Class A motorhome with multiple TVs, two bathrooms, and a king bed is a version of the Conklins’ “Gypsy Van,” and fifth-wheel toy haulers with popouts are the descendants of Arthur Sherman’s “Covered Wagon”; these, in turn, are modernized versions of Bachelder’s horse-and-wagon camping. Between 1915 and 1930, Americans’ desire to escape modern life’s pressures by traveling into nature intersected with their yearning to enjoy the comforts of modern life while there. This contradiction might have produced only frustration, but tinkering, creativity, and a love of autos instead gave us recreational vehicles.
The play and film Fiddler on the Roof is steeped in tradition. Indeed, when Tevye, the Jewish dairyman and protagonist of this much-beloved musical, begins his jubilant eight-minute tribute to tradition in song and dance, there are few among us who don’t unconsciously mouth the words alongside him: “Without our traditions, our lives would be as shaky as a fiddler on the roof.”
So it is most noteworthy when the new hit revival of Fiddler on the Roof—which opened December 20, 2015, at New York City’s Broadway Theatre—deliberately breaks with tradition in its opening and closing scenes.
Instead of portraying Tevye wearing his familiar turn-of-the-20th-century cap, work clothes and prayer shawl in his Russian village, the new version introduces him bareheaded, wearing a modern red parka, standing in front of a ghostly, weathered sign reading Anatevka. As Tevye begins reciting the familiar words about keeping one’s balance with tradition, the villagers gradually gather on stage.
Similarly, when Anatevka’s Jews are forced to leave their homes by order of Russian authorities, ca. 1906, Tevye reappears once again wearing his red parka and silently joins the group of migrants being displaced.
“You see him enter the line of refugees, making sure we place ourselves in the line of refugees, as it reflects our past and affects our present,” Bartlett Sher, the show’s director, told the New York Times. “I’m not trying to make a statement about it, but art can help us imagine it, and I would love it if families left the theater debating it.”

A 1964 pen and ink drawing by Al Hirschfeld of Zero Mostel in his role as Tevye in Fiddler on the Roof (© Al Hirschfeld, National Portrait Gallery)
Popular musicals on Broadway are often regarded as escapist, but the worldwide issue of migration and displacement is inescapable. “Wars, conflict and persecution have forced more people than at any other time since records began to flee their homes and seek refuge and safety elsewhere,” according to a June 2015 report from the Office of the United Nations High Commissioner for Refugees.
With worldwide displacement at the highest level ever recorded, the UNHCR reported “a staggering 59.5 million compared to 51.2 million a year earlier and 37.5 million a decade ago.” It was the highest increase in a single year and the report cautioned that the “situation was likely to worsen still further.”
Migration and displacement were central to Fiddler on the Roof’s story long before the musical made its Broadway debut on September 22, 1964, and then ran for 3,242 performances until July 2, 1972—a record that was not eclipsed until 1980, when Grease ended its run of 3,388 performances.
The stories of Tevye and Jewish life in the Pale of Settlement within the Russian Empire were created by humorist Shalom Rabinovitz (1859–1916), whose Yiddish pen name Sholem Aleichem literally translates as “Peace be unto you,” but which may also mean more colloquially “How do you do?”
Although successful as a writer, Rabinovitz continually had difficulty managing his earnings. When he went bankrupt in 1890, he and his family were forced to move from a fancy apartment in Kiev to more modest accommodations in Odessa. Following the 1905 pogroms—the same anti-Semitic activities that displaced the fictional Jews of Anatevka from their homes—Rabinovitz left the Russian Empire for Geneva, London, New York, and then back to Geneva. He knew firsthand the travails of migration and dislocation.
Rabinovitz’s personal travails shape his best-known book, Tevye the Dairyman, a collection of nine stories that were published over a period of 21 years: the first story, “Tevye Strikes It Rich,” appeared in 1895, though Rabinovitz wrote it in 1894, not imagining that it would be the first of a series; the final story, “Slippery,” was published in 1916.
Numerous adaptations appeared, including several stage plays and a 1939 Yiddish-language film, Tevye, before the team of Jerry Bock (music), Sheldon Harnick (lyrics), Jerome Robbins (choreography and direction), and Joseph Stein (book) adapted several of the Tevye stories to create Fiddler on the Roof for Broadway, taking their title not from Rabinovitz, but from one of Marc Chagall’s paintings.
Going back to the original stories reveals a Tevye who suffers much more than the joyful, singing character seen on Broadway in 1964 and played by the Israeli actor Topol in the 1971 film version.
The riches that Tevye strikes in the first of the published stories are lost completely in the second. The hopes that Tevye holds in finding rich husbands for five of his daughters are dashed again and again. Tsaytl marries a poor tailor; Hodel marries a poor revolutionary, who is exiled to Siberia; Chava marries a non-Jew, causing Tevye to disown her; Shprintze drowns herself when rejected by a wealthy man; and Beylke’s husband deserts her when his business goes bankrupt. Tevye’s wife Golde dies, and he laments, “I have become a wanderer, one day here, another there. . . . I have been on the move and know no place of rest.”
A Broadway musical like Fiddler on the Roof needed a less bleak ending for Tevye, but it still managed to convey some of the pain of forced migration and dislocation. In “Anatevka,” for instance, members of the chorus solemnly sing, “Soon I'll be a stranger in a strange new place, searching for an old familiar face.” The song concludes with one character lamenting, “Our forefathers have been forced out of many, many places at a moment’s notice”—to which another character adds in jest, “Maybe that’s why we always wear our hats.”
When Fiddler first appeared on stage in 1964, several critics noted how the musical was able to raise serious issues alongside both the jesting and the schmaltz. Howard Taubman’s review in the New York Times observed, “It touches honestly on the customs of the Jewish community in such a Russian village [at the turn of the century]. Indeed, it goes beyond local color and lays bare in quick, moving strokes the sorrow of a people subject to sudden tempests of vandalism and, in the end, to eviction and exile from a place that had been home.”
Fiddler on the Roof has been revived on Broadway four times previously—in 1976, 1981, 1990, and 2004—and it is worth noting that when Broadway shows like Fiddler or Death of a Salesman (1949) or A Raisin in the Sun (1959) return to the stage, we call them revivals.
A revival brings something back to life, but a remake suggests something much more mechanical, as if we’re simply giving an old film like Psycho (1960) a new look in color. The current revival of Fiddler not only brings the old show back to life; it also invests it with something more meaningful and enduring—and not at all shaky, like a fiddler on the roof.
It has been almost 15 years since artist Todd McGrain embarked on his Lost Bird Project. It all began with a bronze sculpture of a Labrador duck, a sea bird found along the Atlantic coast until the 1870s. Then, he created likenesses of a Carolina parakeet, the great auk, a heath hen and the passenger pigeon. All five species once lived in North America, but are now extinct, as a result of human impact on their populations and habitats.
McGrain's idea was simple. He would memorialize these birds in bronze and place each sculpture in the location where the species was last spotted. The sculptor consulted with biologists, ornithologists and curators at natural history museums to determine where the birds were last seen. The journal of an early explorer and egg collector pointed him toward parts of Central Florida as the last-known whereabouts of the Carolina parakeet. He followed the tags from Labrador duck specimens at the American Museum of Natural History to the Jersey shore, Chesapeake Bay, Long Island and ultimately to the town of Elmira, New York. And, solid records of the last flock of heath hens directed him to Martha's Vineyard.
In 2010, McGrain and his brother-in-law took to the road to scout out these locations—a rollicking road trip captured in a documentary called The Lost Bird Project—and negotiated with town officials, as well as state and national parks, to install the sculptures. His great auk is now on Joe Batt's Point on Fogo Island in Newfoundland; the Labrador duck is in Brand Park in Elmira; the heath hen is in Manuel F. Correllus State Forest in Martha's Vineyard; the passenger pigeon is at the Grange Audubon Center in Columbus, Ohio; and the Carolina parakeet is at Kissimmee Prairie Preserve State Park in Okeechobee, Florida.
McGrain is no stranger to the intersection of art and science. Before focusing on sculpture at the University of Wisconsin, Madison, he studied geology. "I've always thought that my early education in geology was actually my first education in what it means to be a sculptor. You look at the Grand Canyon and what you see there is time and process and material. Time and process and material have remained the three most important components in my creative life," he says. The Guggenheim fellow is currently an artist-in-residence at Cornell University's Lab of Ornithology. He says that while he has always had an interest in natural history and the physical sciences, these passions have never coalesced into a single effort the way they have with the Lost Bird Project.
Since deploying his original sculptures throughout the country, McGrain has cast identical ones that travel for various exhibitions. These versions are now on display in Smithsonian gardens. Four are located in the Enid A. Haupt Garden, near the Smithsonian Castle, and the fifth, of the passenger pigeon, is in the Urban Habitat Garden on the grounds of the National Museum of Natural History, where they will stay until March 15, 2015.
The sculpture series comes to the National Mall just ahead of "Once There Were Billions: Vanished Birds of North America," a Smithsonian Libraries exhibition opening at the Natural History Museum on June 24, 2014. The show, commemorating the 100th anniversary of the death of Martha the passenger pigeon, the last individual of the species, will feature Martha and other specimens and illustrations of these extinct birds. The Smithsonian Libraries plans to screen McGrain's film, The Lost Bird Project, and host him for a lecture and signing of his forthcoming book at the Natural History Museum on November 20, 2014.
Image courtesy of The Lost Bird Project. McGrain used natural history specimens, drawings and, in some cases, photographs as references when sculpting his birds.
Image courtesy of Jonathan Kavalier. Farmers frustrated by the birds eating their crops, along with feather hunters and dealers who sold the birds as pets, contributed to the decline of North America's once-booming population of Carolina parakeets.
Image courtesy of Jonathan Kavalier. The great auk, a penguin-like bird, was hunted for its meat and feathers. It has been extinct since the 1840s.
Image courtesy of Jonathan Kavalier. In the 19th century, heath hens were hunted and consumed regularly. A last flock lived on Martha's Vineyard until the 1920s.
Image courtesy of Jonathan Kavalier. The last Labrador duck was shot in Elmira, New York, on December 12, 1878. Diminishing numbers of mollusks, the bird's prey, likely led to the population's demise.
Image courtesy of James Gagliardi. Martha, the very last passenger pigeon, died at the Cincinnati Zoo a century ago.
What were your motivations? What inspired you to take on the Lost Bird Project?
As a sculptor, most everything I do starts with materials and an urge to make something. I was working on the form of a duck, which I intended to develop into a kind of an abstraction, when Chris Cokinos’ book entitled, Hope is the Thing With Feathers, sort of landed in my hands. That book is a chronicle of his efforts to come to grips with modern extinction, particularly birds. I was really moved. The thing in there that really struck me was that the Labrador duck had been driven to extinction and was last seen in Elmira, New York, in a place called Brand Park. Elmira is a place I had visited often as a child, and I had been to that park. I had no idea that that bird was last seen there. I had actually never even heard of the bird. I thought, well, as a sculptor that is something that I can address. That clay study in my studio that had started as an inspiration for an abstraction soon became the Labrador duck, with the intention of placing it in Elmira to act as a memorial to that last sighting.
How did you decide on the four other species you’d sculpt?
They are species that have all been driven to extinction by us, by human impact on environmental habitat. I picked birds that were driven to extinction long enough ago that no one alive really has experienced these birds, but not so far back that their extinction is caused by other factors. I didn’t want the project to become about whose fault it is that these are extinct. It is, of course, all of our faults. Driving other species to extinction is a societal problem.
I picked the five because they had dramatically different habitats. There is the prairie hen; the swampy Carolina parakeet; the Labrador duck from someplace like the Chesapeake Bay; the Great Auk, a sort of North American penguin; and the passenger pigeon, which was such a phenomenon. They are very different in where they lived, very different in their behaviors, and they also touch on the primary ways in which human impact has caused extinction.
How did you go about making each one?
I start with clay. I model them close to life-size in clay, based on specimens from natural history museums, drawings and, in some cases, photographs. There are photographs of a few Carolina parakeets and a few heath hens. I then progressively enlarge a model until I get to a full-size clay. For me, full-size means a size that we can relate to physically. The scale of these sculptures has nothing to do with the size of the bird; it has to do with coming up with a form that we meet as equals. It is too big of a form to possess, but it’s not so big as to dominate, the way that some large-scale sculptures can. From that full-scale clay, basically, I cast a wax, and through the process of lost wax bronze casting, I transform that original wax into bronze.
In lost wax casting, you make your original in wax, that wax gets covered in a ceramic material and put into an oven, the wax burns away, and in that void where the wax once was you pour the molten metal. These sculptures are actually hollow, but the bronze is about a half an inch thick.
Why did you choose bronze?
It is a medium I have worked in for a long time. The reason I chose it for these is that, no matter how hard we work on materials engineering, bronze is still just this remarkable material. It doesn’t rust. It is affected by the environment in its surface color, but that doesn’t affect its structural integrity at all. So, in a place like Newfoundland, where the air is very salty, the sculpture is green and blue, like a copper roof of an old church. But, in Washington, those sculptures will stay black forever. I like that it is a living material.
What impact did placing the original sculptures in the locations where the species were last spotted have on viewers, do you think?
I think what would draw someone to these sculptures is their contour and soft appealing shape. Then, once that initial appreciation of their sculptural form captures their imagination, I would hope that people would reflect on what memorials are supposed to do, which is [to] bring the past to the present in some meaningful way. In this way, I would think the sculpture's first step is to help you recognize that where you are standing with this memorial is a place that has a significance in the natural history of this country and then ultimately ask the viewer to give some thought to the preciousness of the resources that we still have.
Has ornithology always been an interest of yours?
I am around too many ornithologists to apply that label to myself. I would say I’m a bird lover. Yeah, I think birds are absolutely fantastic. It is the combination that really captures my imagination; it is the beautiful form of the animals; and then it is the narrative of these lost species that is really captivating.
In a scene from the classic film A Christmas Story (1983), the arrival of a lamp in the shape of a woman's leg throws the Parker home into discord. Young Ralphie (Peter Billingsley) can't keep his eyes (or his hands) off the thing; his mother (Melinda Dillon) looks on in pure horror. She can't stop her husband (Darren McGavin) from displaying his “major award” in their front window, but she knows just how to divert her son's attention elsewhere. All she has to do is remind him that he's missing his “favorite radio program,” Little Orphan Annie.
Ralphie immediately plops himself down and stares up at the family radio the way later generations would gaze unblinkingly at the TV. “Only one thing in the world could've dragged me away from the soft glow of electric sex gleaming in the window,” Ralphie's older self, voiced by the humorist Jean Shepherd (upon whose book the movie is based), says in narration.
This scene perfectly captures the powerful hold that radio in general, and Little Orphan Annie in particular, had on young minds in the 1930s and 1940s, when A Christmas Story is set. The exploits of the redheaded comic-strip heroine and her dog Sandy—who battled gangsters, pirates, and other scoundrels on air from 1931 to 1942—had a surprisingly wide listenership. “All people during that period, budding delinquents, safecrackers, stock market manipulators, or whatever, listened to Little Orphan Annie,” wrote Richard Gehman in the Saturday Review in 1969.
Because radio’s “theater of the mind” requires a fertile imagination, it has always had a special appeal for children. The same lively imagination Ralphie uses to picture himself defending his family with a Red Ryder BB gun, or reduced to a blind beggar by the effects of Lifebuoy soap, brought Annie's adventures to life more vividly than a television ever could.
This imaginative power is precisely why some parents and reformers saw the radio in much the same way Ralphie's mother saw the leg lamp: as a seductive villain, sneaking into their homes to harm the minds and corrupt the morals of their children. They saw the intense excitement Annie and other shows inspired in children and quickly concluded that such excitement was dangerous and unhealthy. One father, in a letter to The New York Times in 1933, described the effects on his child of the “all-too-hair-raising adventures” broadcast during radio’s “Children’s Hours.” “My son has never known fear,” he wrote. “He now imagines footsteps in the dark, kidnappers lurking in every corner and ghosts appearing and disappearing everywhere and emitting their blood-curdling noises, all in true radio fashion.”
Many claims about the harm allegedly caused by violent video games, movies, and other media today—that they turn kids into violent criminals, rob them of sleep, and wreak havoc with their nervous systems—were lobbed just as strongly at radio in the 1930s. “These broadcasts are dealing exclusively with mystery and murder,” wrote a Brooklyn mother to the Times in 1935. “They result in an unhealthy excitement, unnecessary nervousness, irritability and restless sleep.”
The year before, noted educator Sidonie Gruenberg told the Times “that children pick as favorites the very programs which parents as a whole view with special concern—the thriller, the mystery, the low comedy and the melodramatic adventure.” She asked, rhetorically: “Why is it that the children seem to get their greatest pleasure from the very things which the parents most deplore?”
Among the programs most adored by kids but deplored by parents was Ralphie’s favorite: Little Orphan Annie. In March 1933, Time reported that a group of concerned mothers in Scarsdale, New York, got together to protest radio shows that “shatter nerves, stimulate emotions of horror, and teach bad grammar.” They singled out Little Orphan Annie as “Very Poor,” because of the protagonist’s “bad emotional effect and unnatural voice.” That same year, wrote H. B. Summers in his 1939 book Radio Censorship, “a Minneapolis branch of the American Association of University Women, and the Board of Managers of the Iowa Congress of Parents and Teachers adopted resolutions condemning the ‘unnatural overstimulation and thrill’ of children’s serials—principally the ‘Orphan Annie’ and ‘Skippy’ serials.” (Skippy was based on a comic strip about a “streetwise” city boy that served as a major influence on Charles Schulz’s Peanuts.)
These days, when Annie is known mainly as the little girl who sang brightly about “Tomorrow,” it may be hard to picture her radio series as the Grand Theft Auto of its day. But the radio show had a much closer relationship to its source material—a “frequently downbeat, even grim comic” created in 1924 by Harold Gray—than the relentlessly optimistic (and very loosely adapted) Broadway musical. The comic-strip Annie’s most defining and admired trait—her self-reliance—came from the fact that she existed in “a comfortless world, vaguely sinister,” surrounded by violence, where few could be trusted and no one could be counted on. “Annie is tougher than hell, with a heart of gold and a fast left, who can take care of herself because she has to,” Gray once explained. “She’s controversial, there’s no question about that. But I keep her on the side of motherhood, honesty, and decency.”
The radio series softened some of the strip’s sharp edges, most especially by dropping its virulently anti-Roosevelt politics. But the unceasing undercurrent of danger remained, heightened by the cliffhanger at the end of each episode. Those cliffhangers were key to the show’s success—and the element that most disturbed parents. Frank Dahm, who wrote the scripts for the series, discovered this very quickly after having Annie kidnapped at the end of one early episode. “The announcer had scarcely had time to sign off the program when the telephones began to ring,” Dahm told Radio Guide in 1935. “Frantic mothers unable to pacify their children all but blasphemed me for so jeopardizing their favorite.” Dahm dutifully put kidnapping on the list of the show’s “mustn’ts,” which soon grew to include other plot points that drew complaints.
The producers of Little Orphan Annie had to walk a very fine line, indulging their audience’s appetite for thrills while not offending adults. The adults, after all, held the purchasing power. The companies that sponsored Annie and other shows aimed at children knew, as Francis Chase, Jr., observed in his 1942 book Sound and Fury, that “kids love action. … And because kids like murder and excitement, such programs proved good merchandising mechanisms.” Annie, as A Christmas Story accurately depicted, was sponsored by “rich, chocolaty Ovaltine”—a malted powder added to milk. As much as a third of every fifteen-minute episode was devoted to having the announcer sing Ovaltine’s praises, telling kids it would give them added “pep” and imploring them to “do a favor” for Annie and tell their mothers about it.
Such advertising, as psychologists Hadley Cantril and Gordon Allport noted in their 1935 book The Psychology of Radio, was devilishly effective. They wrote about a 7-year-old boy named Andrew, whose favorite radio show (unnamed, but with a “little heroine” who is almost certainly Annie) was sponsored by “chocolate flavoring to be added to milk” (unquestionably Ovaltine). Andrew “insists that his mother buy it,” even after his mother reads up on the product and discovers that it has “no significant advantage over cocoa prepared with milk in the home” and isn’t worth the price. “In vain does she suggest that Andrew derive his pep from ordinary cocoa, or at least from one of the less expensive preparations,” write Cantril and Allport. “Andrew wins his point by refusing to drink milk at all without the costly addition!”
Ovaltine had another marketing strategy that was even more effective—the giveaway. Week after week, Annie announcer Pierre André instructed kids to send in a dime “wrapped in a metal foil seal from under the lid of a can of Ovaltine” so they could get the latest in a series of premiums: mugs, buttons, booklets, badges, masks, and on and on. Many other radio shows offered “free” items in exchange for wrappers or box tops, but, as Bruce Smith observed in his History of Little Orphan Annie, Ovaltine gave away more items than anyone else.
By far the most coveted items Ovaltine had to offer were the “secret decoder pins” awarded to members of “Annie’s Secret Circle,” so they could decipher the “secret message” read at the end of each episode. In A Christmas Story, Ralphie acquires one such pin after “weeks of drinking gallons of Ovaltine,” and memorably uses it to decipher a message reminding him to “BE SURE TO DRINK YOUR OVALTINE.” In real life, such messages were never so blatantly commercial. Brief references to the plot of next week’s show, such as “S-E-N-D H-E-L-P” or “S-A-N-D-Y I-S S-A-F-E,” were more typical. But Ralphie’s fervent desire for a decoder pin, and his excitement (admittedly short-lived) at finally being a member of the “Secret Circle,” is absolutely true to life.
Many parents resented having to battle their children over the grocery list week after week, as a growing list of giveaways threatened to break the bank. (“If a weak-willed mother should buy all these prize ‘box tops,’” wrote News-Week in December 1934, “her grocery budget…would swell at least $2 a week”—or about $35.50 today.) But they also knew that the show’s dependence on its advertiser gave them leverage. By threatening to boycott Ovaltine, or any company that sponsored a show they found objectionable, they could (and did) influence its content. Broadcasters listened to these complaints and tightened their standards for children’s programming.
By the end of the 1930s, Annie’s cliffhangers had been toned down, and this may have hastened its end. Ovaltine stopped sponsoring the show in 1940, and the series went off the air not long after—making Ralphie, who uses a decoder ring clearly marked “1940,” one of the last members of the “Secret Circle.” The cultural winds had shifted; in the early 1940s, writes Chase, parents clearly stated their preference for more “educational” children’s programming. But the style of advertising used on Annie remained, and—despite the occasional controversy—has never gone away.
There’s a certain irony here. Ralphie’s trusty decoder pin teaches him an important lesson—one that his “Old Man,” delighted at receiving his “major award” of a leg lamp, apparently never learned. Holed up in the family bathroom, Ralphie discovers that the “message from Annie herself” is nothing but “a crummy commercial”—an ad for the very stuff he had to drink by the gallon in order to get the decoder pin in the first place. “I went out to face the world again—wiser,” he says in narration. He’s learned a thing or two about the rules of commerce, and about the true cost of a “free” giveaway.
What could be more educational than that?
In June 2015 Secretary of the Treasury Jacob Lew announced that the 10 dollar note will be redesigned to feature a historic woman, marking the first major change in the appearance of U.S. paper money in nearly a century. This landmark announcement not only stimulated an unprecedented national discussion around the design of U.S. paper money, but it also provoked an extraordinary national conversation about the significant roles that women have played in the making of the nation.
To mark this historic moment, the museum will open a new display titled "Women on Money" on March 18, 2016, within the Stories on Money exhibition. This vibrant display will place the redesign of U.S. paper money into a global context and demonstrate that women have appeared on money from ancient times to the present day. These depictions commemorate women's contributions to national and world history and convey national ideals and ideas. Thus the display is organized around three themes: women on international money, women on American money, and female figures on money.
Women on international money
One of the first historic women to appear on money was Arsinoe II, a Ptolemaic queen of Egypt, in the 3rd century BCE. Since then, many national currencies have depicted women either during their lifetimes or posthumously. Female political leaders have appeared on money with the greatest frequency. Powerful women, like Pharaoh Cleopatra VII, Queen Elizabeth I, and Empress Maria Theresa, each issued coins with their portraits, helping them assert their influence over nations and empires. Modern female politicians, such as First Lady of Argentina Eva Perón and Prime Minister of India Indira Gandhi, have appeared on national currencies posthumously, commemorating their political leadership and helping to cement their places in their national histories.
In recent years, some governments have begun to reflect on the contributions women have made outside the political sphere and have chosen to honor women's achievements in the arts and sciences. For example, Poland has depicted the Nobel Prize-winning chemist Marie Curie on the 20 zloty note; Italy has honored Maria Montessori, an innovator in early childhood education, on the 1,000 lira note; and Turkey has honored Fatma Aliye Topuz, a novelist and women's rights activist, on its 50 lira note. Moreover, national communities are increasingly recognizing the role of female social activists as catalysts for change. Images of suffragettes, such as New Zealand's Kate Sheppard, have appeared on money as reminders of the importance of equality in a democratic society.
Women on American money
Although women have regularly appeared on money around the world, historic women have rarely been included on money issued by the U.S. government. Since the federal government began issuing paper money in 1861, male historic figures have almost exclusively enjoyed this honor. Pocahontas and Martha Washington are the exceptions, with both appearing on U.S. paper money for a relatively short period in the 19th century. Susan B. Anthony and Sacagawea made similar appearances on the one dollar coin in the late 20th century, but their coins are not as widely used as the one dollar note depicting President George Washington.
In 2015 a social movement called Women on 20s brought the limited appearance of women on American money to the attention of the public. This grassroots organization helped to initiate a national conversation about the role of women in American history, helping to galvanize public support for the Treasury's redesign of U.S. paper money.
Female figures on money
More often than historic women, idealized, allegorical, and mythological female figures have appeared on money as symbols of ideals, values, and beliefs. They are effective tools of communication for governments because they can convey meaningful, and sometimes complicated, concepts on small and familiar objects. As an alternative to depicting potentially divisive political leaders, allegorical figures can promote a sense of national unity around a shared idea, such as freedom. Thus many nations depict the idea of freedom as the female figure Lady Liberty. She has appeared on American coinage, in a variety of poses and styles, since the U.S. Mint produced its first coins in 1792. Justice, victory, and peace are also conveyed through the female form on notes and coins. Many nations also include images of idealized women on their money, communicating national ideals and conveying the essential roles that women play in the marketplace, home, and community. Some countries even personify the nation itself as a woman. For example, Great Britain uses the figure Britannia as the allegorical representation of the British nation. She is typically depicted as a protective figure with a trident and shield and appears on both British coins and notes. In addition, some nations, both in ancient history and the present, depict female mythological and religious figures in an effort to encourage a particular set of religious beliefs or to promote a feeling of shared cultural heritage.
When America's new notes enter circulation, the woman or women they depict will take their place alongside the many women that have appeared on money over the last two millennia. The new notes will not only commemorate their contributions to the nation, but also serve as evidence of the historic national conversation in 2015 and 2016 about the role of American women in U.S. and world history—and a reminder of the many women that are still deserving of such an honor.
Ellen Feingold is the curator of the National Numismatic Collection. She has also blogged about curating "The Value of Money" exhibition and collecting contemporary monetary objects.
People have been fascinated and terrified by sharks for thousands of years, so you would think that we know a fair bit about the roughly 400 named species that roam the ocean. But we have little sense of how many sharks are out there, how many species there are, and where they swim, let alone how many existed before the advent of shark fishing for shark fin soup, fish and chips, and other foods.
But we are making progress. In honor of Shark Week, here’s an overview of what we have learned about these majestic citizens of the sea in the past year:
1. Sharks mostly come in shades of gray, and it’s likely that they see the world that way as well. Now, that knowledge is being put to use to protect surfers and swimmers offshore. In 2011, researchers from the University of Western Australia found that, of 17 shark species tested, ten had no color-sensing cells in their eyes, while the other seven had only one type. This likely means that sharks hunt by looking for patterns of black, white and gray rather than by noticing brilliant colors. To protect swimmers, whose bodies often look like a tasty seal from below, the researchers are working with a company to design two wetsuits with bold, disruptive patterns. One suit will alert sharks that they aren’t looking at their next meal, and a second suit will help camouflage swimmers and surfers in the water.
2. The thresher shark has a long, scythe-shaped tail fin that scientists long suspected was used for hunting, but they didn’t know how. This year, they finally filmed how the thresher shark uses it to “tail slap” fish, killing them on impact. It herds and traps schooling fish by swimming in increasingly smaller circles before striking the group with its tail. This strike usually comes from above instead of sideways, an unusual technique that allows the shark to stun multiple fish at once—up to seven, the study found. Most carnivorous sharks kill only one fish at a time and so are comparatively less efficient.
3. How many sharks do people kill each year? A new study published in July 2013 used available shark catch information to estimate the global number—a staggering 100 million sharks killed every year. Although the data are incomplete and often do not include those sharks whose fins are removed and bodies are thrown back to sea, this is the most accurate estimate to date. Slow growth and low birth rates of sharks mean that they are not able to repopulate fast enough to catch up with the loss.
4. The 50-foot giant megalodon shark is a staple of shark week, reigning as the great white’s larger and even more terrifying ancestor. But a new fossil discovered in November turns that supposition on its head: it looks like the megalodon isn’t a great white shark ancestor after all, but is more closely related to the fish-munching mako sharks. The teeth of the new fossil look more like great white and ancient mako shark teeth than megalodon teeth, which also suggests that great whites are more closely related to mako sharks than previously thought.
5. Sharks are worth more alive in the water than dead on the plate (or bowl). In May, researchers found that shark ecotourism ventures—such as swimming with whale sharks and coral reef snorkeling—bring in 314 million U.S. dollars globally every year. What’s more, projections show that this number will double in the next 20 years. In contrast, the value of fished sharks is estimated at 630 million U.S. dollars and has been declining for the past decade. While dead sharks’ value terminates after they are killed and consumed, live sharks provide value year after year: in Palau, an individual shark can bring up to 2 million dollars in benefits over its lifetime from the tourist dollars that pour in just so that people can view the shark up close. One citizen science endeavor even has snorkeling travelers snapping photos of whale sharks in an effort to help researchers. Protecting sharks for future ecotourism endeavors just makes the most financial sense.
6. Bioluminescence isn’t just for jellyfish and anglers: even some sharks are able to light up to confuse predators and prey alike. Lanternsharks are named for this ability. It’s been long known that their bellies light up to blend in with sunlight shining down from above, an adaptation known as countershading. But in February, researchers reported that lanternsharks also have “lightsabers” on their backs. Their sharp, quill-like spines are lined with thin lights that look like Star Wars weaponry and send a message to predators that, “if you take a bite of me, you might get hurt!”
7. What can an old sword tell us about sharks? Far more than you might expect—especially when those swords are made of shark teeth. The swords, along with tridents and spears collected by Field Museum anthropologists in the mid-1800s from people living in the Pacific’s Gilbert Islands, are lined with hundreds of shark teeth. The teeth, it turns out, come from a total of eight shark species—and, shockingly, two of these species had never been recorded around the islands before. The swords give a glimpse into how many more species once lived on the reef, and how easy it is for human memory to lose track of history, a phenomenon known as “shifting baselines.”
8. Sharks know some pretty neat tricks even before they’re born. Bamboo shark embryos develop in egg cases that float on the high seas, where they are vulnerable to being eaten by all manner of predators. Even as developing embryos, they can sense electric fields in the water given off by a predator—just like adults. If they sense this danger nearby they can hold still, even stopping their breathing, so they won’t be noticed in their egg cases. But for sand tiger shark embryos, which develop inside the mother, their siblings can pose the biggest threat—the first embryos to hatch from eggs, at roughly 100 millimeters long, will attack and devour their younger siblings.
9. Shark fin soup has been a delicacy in China for hundreds of years, and its popularity has only increased in the last several decades with the country’s growing population. This rising demand has driven up the number of sharks killed every year, but the expensive dish may be losing some fans.
Even before last year’s Shark Week, the Chinese government banned the serving of shark fin soup at official state banquets—and the conversation hasn’t died down since. Countries and states banning the trade of shark fins and regulating the practice of shark finning have made headlines this year. And just a few weeks ago, New York Governor Andrew Cuomo signed a ban of the possession and sale of shark fins in the state that will go into effect in 2014.
10. Shark fin bans aren’t the only method of protecting sharks. The island nations of French Polynesia and the Cook Islands created the largest shark sanctuary in December of 2012—protecting sharks from being fished in an area of over 2.5 million square miles in the south Pacific Ocean. And member countries of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) voted to place export restrictions on five species of sharks in March 2013. Does this mean that the general perception of sharks is changing for the better and that the public image of sharks is veering away from its “Jaws” persona? That, in essence, is up to you!
–Emily Frost, Hannah Waters and Caty Fairclough co-wrote this post
A green synthetic fiber flight suit worn by Maj. Gen. Charles F. Bolden Jr. The suit is a single piece with a zipper extending from the crotch to the neck. There are two diagonal zippered pockets across the chest of the jump suit, with an additional pocket on the PL upper arm. There are multiple zippered pockets on each leg, which have zippered leg cuffs. There are velcro fixtures with adjustment tabs at the waist and wrists. On the PR breast is a red diamond shaped patch with the insignia of the 3rd Marine Aircraft Wing in yellow and black. On the PL breast is a black leather name patch. The Marine aviator wings emblem is embossed in gold on the top half of the badge with the lower half reading [CHARLES F. BOLDEN, JR./MAJ GEN USMC (RET)]. At the nape of the neck is a manufacturer tag with care instructions.
2014.243.31: Flight Jacket
Pilot's flight jacket in camo green, size medium. Jacket has elastic, knit cuffs and waist. Jacket has two (2) cargo pockets at front that fasten closed with Velcro. On the front upper PL torso is a rectangular patch of Velcro. On the front upper PR torso is a diamond patch of Velcro. On the upper PL sleeve is a pocket with a zipper closure. On the top of the pocket are four (4) smaller pockets for holding pens or pencils. Pen and pencil (TR2013-340.31.2 and TR2013-340.31.3) found in pencil pockets. Coin (TR2013-340.31.4) found in PL front cargo pocket. On the inside of the jacket at the nape of the neck is a white clothing tag with manufacturer information and size.
2014.243.32: Anti-G pants
Green pilot's CSU-13B/P anti-gravitational garment worn by Maj. Gen. Charles F. Bolden Jr. The garment is designed to cover the body from the ankles up to the abdomen. The garment does not cover the whole lower body, with gaps at the crotch and rear as well as in front of and behind the knees. There is one main zipper on the PR side of the abdominal section and each leg has a zipper on its interior. On the PL side of the abdominal section is a hose with a metal tube coated in black and orange plastic which feeds the interior air bladders. When hanging it falls to just above the knee. On the back of the abdominal section of the garment is a vertical yellow rectangle of fabric with [BOLDEN] written in black ink. On the interior back of the abdominal section are three white manufacturer tags detailing care and specifications of the garment. The middle tag has [MGen Bolden] handwritten on it as well. The legs of the garment have several zippered and buttoned pockets on the exterior, as well as a rectangle of velcro on the PR thigh.
2014.243.33: Parachute Harness
A green MA-2 style integrated torso harness worn by Maj. Gen. Charles F. Bolden Jr. The harness extends from the shoulders to the crotch and has a large zipper in the middle of the chest. The harness has two main quick release buckles, one at each shoulder. The harness also has several other attachment and adjustment buckles and straps at the waist, thigh, and chest. The strap across the abdomen has [11/DOM/7-00/00] handwritten in black ink. At the top of the harness at the back of the neck is a rectangular yellow label with [BOLDEN] handwritten in black ink. There is a manufacturer's tag at the nape of the neck on the interior of the harness.
2014.243.34.1-.10: Survival Vest & Components
An SV-2B pilot survival vest (2014.243.34.1) owned by Charles F. Bolden. The vest includes several inner pockets, clips, velcro sections, and compartments for holding gear. Two flashlights (2014.243.34.3 & 2014.243.34.5), an emergency beacon (2014.243.34.2), hook blade (2014.243.34.4), pocket knife (2014.243.34.7), compass watch (2014.243.34.8), signal mirror (2014.243.34.9), whistle (2014.243.34.10), and plastic bottle (2014.243.34.6) are currently attached. The vest is zippered together in the front and has shoulder straps that criss-cross the back. There are also straps with metal hooks meant to clip the vest to other clothing or equipment hanging from the waist and chest. There is a large metal fastener on the PL front of the vest which is printed with manufacturer specifications. The back of the harness has four elastic straps and the waist has adjustable strapped bands. The manufacturer specifications are printed on the interior of the vest.
2014.243.35.1: Equipment Bag
Flight bag in camo green that zippers closed at the top. In the center of the top, on the front and back, is a handle. Below the handle on the front is a black metal carabiner-like clasp. On the front are two (2) large pockets that take up the whole front of the bag. Bag padded and lined inside. On the interior PR and PL sides are smaller pockets. On the back, inside, is a white tag with the number  stamped on it, left of the text [BAG, FLYER'S HELMET/SP0100-98-F-EC43/8415-00-782-2989/UNICOR]. Bag holds a pilot's helmet (TR2013-340.35.2ab), pair of flight gloves (TR2013-340.35.3ab) in the interior PR pocket, and notepad (TR2013-340.35.4ab) in the exterior PR pocket.
2014.243.35.2ab: Flight Helmet & O2 Mask
A pilot's HGU-68/P TACAIR flight helmet (2014.243.35.2a) and MBU-12/P oxygen mask (2014.243.35.2b) owned by Maj. Gen. Charles F. Bolden Jr. The helmet is a graphite and nylon composite shell with foam interior and a camo patterned canvas cover. At the back of the helmet just above the neck is a white label with printed black text reading [MAJ. GEN. BOLDEN]. The helmet has a tinted visor that is covered with a detachable gray visor, which has [BOLDEN] written in black ink along its lower edge. The oxygen mask locks into the front of the helmet with several buckles. The oxygen mask is a hard gray polysulfone shell with a flexible gray silicone rubber air hose. The mask comes with several communications wires and accessories attached. On either side of the mask [Bolden / CB5603] is handwritten in black ink.
2014.243.35.3ab: Flight Gloves
Pair of pilot flight gloves in olive green, size 8. Top of the gloves made of a stretchy knit material. Gloves have leather undersides. Strip of knit fabric on side of the gloves running up the side of the thumb to the top of the thumb. Gloves cinched with elastic at the inside wrists. Gloves have a wide cuff that extends beyond the wrist. Inside the gloves is printed, white text listing the size, materials, care instructions, and manufacturer information [8/GLOVES, FLYERS, SUMMER/TYPE SS/FRP-D SAGE GREEN/MIL-G-81188B/AMERICAN GLOVE CORP./SP0100-00-0-4005/8415-01-029-0111/THE CLOTH PORTION OF THE GLOVE IS/FABRICATED FROM AN INHERENTLY FIRE/RESISTANT MATERIAL (NOMEX) THAT/DOES NOT MELT OR DRIP. CAN BE/LAUNDERED WITHOUT LOSING ITS FIRE/RESISTANT PROPERTIES AND NO/RETREATMENT IS NECESSARY.].
2014.243.35.4: Knee Board
Pilot kneeboard of black plastic and metal. Kneeboard has large clips at the top and bottom to accommodate notepad. Top clip has a plastic light attached. Two (2) wires come out of the bottom of the clip and go into the interior body of the board. Attached to the top of the clip is a metal spring. On the PR side of the board near the top is a white sticker with text [PILOT IDENTIFICATION/ALL BATTERIES MUST BE LOADED IN THE SAME DIRECTION]. On the PL side of the board near the bottom is a long plastic tube for storing a pencil (TR2013-340.35.2b). Above the pencil holder is a plastic knob that turns on the light. On the knob in the center is raised text [PPSP]. At the top of the PL side is a pencil sharpener inserted into the body of the board. The bottom clip is metal with engraved text [U.S. PROPERTY/TYPE MXU-163/P CLIPBOARD PILOTS/CONTRACT NO. DLA-100-92-C-4263/MFR'S. MODEL ACB-1000/P/PRECISION POLYMER MFG. INC.]. In the middle of the body is an adjustable elastic strap that clips closed. The back is curved with two (2) strips of vertically oriented camo green colored foam. Notepad attached to board has white sheets. Top sheet is a flight navigation sheet with calculations in pen and pencil. Included is a pencil in yellow. Imprinted text on the side [BONDED U.S.A. SKILCRAFT MED/2]. Pencil has pink eraser.
2014.243.36ab: Flight boots
Black leather boots that lace up to the top with black shoelaces. Boots come to mid-calf. Boot soles are black with heavy tread. Inside the tongue of the shoes is a white clothing tag with black text [ANSI/Z41-1983/75]. On the inside of the boot shaft on the outer leg is black text [ADDISON SHOE COMP./DLA100-90-0-4178/30. MARCH 1990 DPSC/9 1/2R 75083].
This object is on loan to the Anchorage Museum at Rasmuson Center, from 2010 through 2022.
Source of the information below: Smithsonian Arctic Studies Center Alaska Native Collections: Sharing Knowledge website, by Aron Crowell, entry on this artifact http://alaska.si.edu/record.asp?id=520, retrieved 3-31-2012: Rattle, Tsimshian. Frogs appear often on shamanic art because they were imagined as primordial, partly human creatures that retained supernatural power from early times. They lived in the dark before Raven brought the sun, and they made fun of the great trickster; in anger he caused the North Wind to blow the frogs away and freeze them onto rocks. This shaman’s rattle shows frogs that appear with the rain, springing from the eyes of South Wind, who brings rain and desires the world to be green as in spring. The back of the rattle shows the wind’s arms, legs, and body. “He is showing this look, like a trance; the eyes are underneath the lids, rolled back. Having these frogs come out, too – frogs were the shaman’s messengers.” - David Boxley (Tsimshian), 2009.
We live in a golden age of online dating, where complex algorithms and innovative apps promise to pinpoint your perfect romantic match in no time. And yet, dating remains as tedious and painful as ever. A seemingly unlimited supply of swipes and likes has resulted not in effortless pairings, but in chronic dating-app fatigue. Nor does online dating seem to be shortening the time we spend looking for mates; Tinder reports that its users spend up to 90 minutes swiping per day.
But what if there was a way to analyze your DNA and match you to your ideal genetic partner—allowing you to cut the line of endless left-swipes and awkward first dates? That’s the promise of Pheramor, a Houston-based startup founded by three scientists that aims to disrupt dating by using your biology. The app, which launches later this month, gives users a simple DNA test in order to match them to genetically compatible mates.
The concept comes at a time when the personalized genetics business is booming. “Companies such as 23andMe and Ancestry.com have really primed the market for personalized genetics,” says Asma Mizra, CEO and co-founder of Pheramor. “It's just becoming something that people are more familiar with.”
Here’s how it works: For $15.99, Pheramor sends users a kit to swab their saliva, which they then send back for sequencing. Pheramor analyzes the spit to identify 11 genes that relate to the immune system. The company then matches you with people who are suitably genetically different from you. The assumption is that people prefer to date those whose DNA is different enough from their own that a coupling would produce more genetically diverse offspring, which are likelier to survive. (The way we can sense that DNA diversity is through scent.)
Pheramor does not just look at genetic diversity, though. Like some dating apps, it also pulls metadata from your social media footprint to identify common interests. As you swipe through the app, each dating card will include percent matches for compatibility based on an algorithm that takes into account both genetic differences and shared common interests. To encourage their users to consider percentages above selfies, prospective matches’ photographs remain blurred until you click into their profiles.
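The company has not published its algorithm, but the blend it describes, genetic difference plus shared interests rolled into one percentage, can be illustrated with a short sketch. Everything here is assumed for illustration: the function names, the equal weighting, and the use of simple allele mismatch and Jaccard overlap as the two signals.

```python
# Hypothetical sketch of a Pheramor-style compatibility score.
# All names, weights, and scoring choices are assumptions, not
# the company's actual method.

def genetic_dissimilarity(alleles_a, alleles_b):
    """Fraction of loci (e.g., the 11 MHC genes) at which two users differ."""
    assert len(alleles_a) == len(alleles_b)
    diffs = sum(1 for a, b in zip(alleles_a, alleles_b) if a != b)
    return diffs / len(alleles_a)

def interest_overlap(interests_a, interests_b):
    """Jaccard similarity of two users' interest sets from social media."""
    union = interests_a | interests_b
    if not union:
        return 0.0
    return len(interests_a & interests_b) / len(union)

def compatibility(alleles_a, alleles_b, interests_a, interests_b,
                  genetic_weight=0.5):
    """Blend the two signals into a 0-100 percent match."""
    score = (genetic_weight * genetic_dissimilarity(alleles_a, alleles_b)
             + (1 - genetic_weight) * interest_overlap(interests_a, interests_b))
    return round(100 * score)
```

Under this toy scoring, two users with completely different alleles and identical interests would score 100 percent; the `genetic_weight` knob is just a stand-in for however the real algorithm balances biology against shared interests.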
“I've always been motivated to bring personalized genetics to everyday people,” says Brittany Barreto, Chief Security Officer and co-founder of Pheramor. “We don't want to be gatekeepers of the scientific community. We want people to be able to engage in science, everyday people. And realize that it is something that you can use to make more informed decisions and have that agency to make those decisions. So we're saying, you're not going to find your soulmate but you're probably going to go on a better first date.”
But can the science of attraction really solve your dating woes?
The Genetics of Love
Pheramor claims to “use your attraction genes to determine who you are attracted to and who is attracted to you.” That's not entirely true; there are no "attraction genes." (Or if there are, we haven't found them yet.) What Pheramor is actually comparing are 11 genes of the major histocompatibility complex (MHC), which code for proteins on the surface of cells that help the immune system recognize invaders.
The idea of linking immune system genes to attraction stems from a 1976 study published in the Journal of Experimental Medicine, in which scientists found that male mice tended to select female mice with dissimilar MHC genes. The mice detected those genes through scent. Researchers hypothesized reasons for this selection ranging from the prevention of inbreeding to promoting offspring with greater diversity of dominant and recessive genes. In 1995, a Swiss study applied the concept to humans for the first time through the famous "sweaty T-shirt study." The research showed that, like the mice, the women who sniffed the sweaty garments were more likely to select the shirts of men with greater genetic difference.
But experts caution that the science behind matching you with someone who has different immune system genes remains theoretical. One such expert is Tristram D. Wyatt, a researcher at Oxford who authored a 2015 paper on the search for human pheromones published in the Proceedings of the Royal Society. As an example, Wyatt cites the International HapMap Project, which mapped patterns in genetic sequence variants from people from all around the world and recorded their marital data.
“You might expect that if this was a really strong effect, that people really were choosing their partners on the basis of genetic difference of the immune system genes, that you would get that ... out of the data," he says. “And it didn’t work out that way. One research group found, yes, people were more different than you’d expect by chance. And another research group using the same data but slightly different assumptions and statistics said the opposite. In other words: there was no effect."
Pheramor isn’t the first dating app to look to genetics for dating. Back in 2008, GenePartner launched with the tagline “Love is no coincidence,” and also calculated partner preference based on two people's diversity of MHC genes. In 2014, Instant Chemistry entered the market with a tailored concept to show people already in relationships how “compatible” they were based on their MHC diversity. That same year, SingldOut (which now redirects to DNA Romance) promised to use both DNA testing and social networking information from LinkedIn.
Unfortunately, the science behind all of these companies’ claims stems from that same mouse research done back in the 1970s. “It’s a lovely idea,” says Wyatt, “but whether it actually is what people or for that matter other animals are doing when they choose a mate is up in the air.” In other words: No, you still can’t reduce love to genetics.
The Problem with Human Pheromones
On its website, Pheramor claims that these 11 “attraction” genes create pheromones, or chemical signals, that make you more or less attractive to a potential mate. The site’s science section explains that the “science of pheromones has been around for decades” and that they “are proven to play a role in attraction all the way from insects to animals to humans.” It continues: “if pheromones tickle our brain just the right way, we call that love at first sight.”
None of this is true. “Pheromone is a sexy word and has been since it was invented,” says Wyatt. But the science of pheromones—specifically human pheromones—is still cloudy at best.
First identified in 1959, pheromones are invisible chemical signals that trigger certain behaviors, and are used for communication in animals from moths to mice to rabbits. Since then, companies have claimed to use pheromones in everything from soap to perfume to help humans attract a mate. (Fun fact: If you’ve used a product that claims to contain pheromones, most likely they were pig pheromones; pig sweat shares chemicals in common with human sweat, but we have no idea if they have any effect on us, reports Scientific American.) In 2010, headlines began reporting on Brooklyn’s “Pheromone Parties,” a trend that seized on this idea by having people sniff each other’s t-shirts to supposedly detect genetic diversity.
In fact, we’ve never found pheromones in humans. Scientists are still searching for the fabled “sex pheromone,” but so far they’re nowhere close. In their defense, there are several challenges: For one, you have to isolate the right chemical compound. For another, there’s the chicken-and-the-egg problem: if a chemical does create a behavioral response, is that an innate response, or is it something learned over time through culture?
Pheramor points to that famous “sweaty T-shirt study,” as supporting evidence for pheromones. However, later attempts to isolate and test alleged pheromones—such as steroids in male sweat and semen or in female urine—have failed. And in 2015, a review on the scientific literature on pheromones found that most research on the topic was subject to major design flaws.
Right now, Wyatt thinks our best bet for hunting down the first human pheromone is in maternal milk. Infants seem to use scent to find and latch on to their mother’s nipples, and some researchers believe a pheromone could be responsible. Looking at babies rather than adults has the added benefit of getting rid of the acculturation problem, since newborns haven’t yet been shaped by culture.
But until we find it, the idea of a human pheromone remains wishful hypothesizing.
In short, whether it’s worth it to swab for love is something the scientific community is not yet ready to assert. “You’d need a lot more research, much more than you have at the moment,” says Wyatt. However, Pheramor could actually help expand that research—by increasing the data available for future research on MHC-associated partner choice.
The team has established a partnership with the Kinsey Institute at Indiana University, a leader in studying human attraction and sexuality, which plans to hire a dedicated postdoc to look at the data Pheramor collects and publish papers on attraction. Justin Garcia, a research scientist at the Kinsey Institute, says that the data Pheramor is amassing (both biological and self-reported) will offer new insight into how shared interests and genetics intersect. “That is a pretty ambitious research question but one I think they in collaboration with scientists here and elsewhere are positioned to answer,” he says.
One area they want to expand on is the research on genetic-based matching in non-heterosexual couples. So far, research on MHC-associated partner choice has only been done in couples of opposite sexes—but Pheramor is open to all sexual preferences, meaning that researchers can collect new data. “We let [users] know, right from the get go that the research has been done in heterosexual couples. So the percentage that you see may not be completely accurate,” she says. “But your activity on this platform will help us to publish research papers on what the attraction profiles in people who identify as LGBTQ are.”
Beyond adding data to the research, Pheramor could also help address the lack of diversity on dating apps. Statistically speaking, Mizra points out, women of color are the most “swiped left on” and “passed” in dating apps. As a Pakistani-American who is also Muslim, she knows personally how frustrating that kind of discrimination can be.
“So how do we change that perspective if we truly believe that we're bringing a more authentic and genuine connection?” she says. “One of the things that we're doing is we're saying, ‘You know what? Let the genetics and let the data kind of speak for itself.’ So, if you have a 98 percent compatibility with someone that you probably wouldn't think you'd get along with, why don't you give it a try?”
For now, the team is focused on getting their app, currently in beta testing, ready for roll out. They’re hoping to launch with 3,000 members in Houston, after which they want to expand to other U.S. cities. “Our app is really novel, it's really new and I don't think it's for everybody,” says Barreto. “It's for people who understand which direction the future is heading and which direction technology is heading and how quickly it moves. And I think over time people will become more comfortable with it and realize the value in that.”
In the end, swabbing your DNA probably won’t get you any closer to love. On the other hand, none of those other fancy dating algorithms will, either. So swab away: what do you have to lose?
Well, another season of Timeless has ended with a bang. Some predictable twists, some less so. As always, these writeups contain not just history but major plot spoilers, so read with caution.
The last two episodes of the season take us to Civil War-era South Carolina and San Francisco's Chinatown, circa 1888. As they aired together, we'll tackle them together.
First, South Carolina, June 1, 1863. In real history, this is the day a ragtag group of soldiers pulls off one of the most audacious operations of the entire war: sailing gunboats into the heart of enemy territory, burning Southern plantations and freeing hundreds of enslaved people. Their leader? Harriet Tubman.
Born into slavery, Tubman suffered under various cruel masters. She escaped to Pennsylvania, and freedom, in 1849, at about age 27 (her birth year is contested), then returned to Maryland to rescue her family on the Underground Railroad. She would ultimately make 19 trips into slaveholding states to rescue enslaved people; conservative accounts say she rescued 70 people, other accounts say up to 300. When the Civil War broke out, she worked for the Union as a cook, nurse and spy.
Which brings us to June 1, 1863. On this night and into the pre-dawn hours of the following day, Tubman, under the command of Union Colonel James Montgomery, took between 150 and 300 black Union soldiers (the 2nd South Carolina Volunteer Infantry (African Descent)) up the Combahee River. The river was filled with mines, but Tubman had gathered intel on where they were. Three ships traveled up the river under cover of darkness. By dawn, they had reached the first plantation. Enslaved families ran for the boats, and the soldiers burned everything else.
As the podcast UnCivil reports, the Union freed 700 enslaved people that night. Most of the men of fighting age immediately enlisted in the Union Army.
This is one of those arresting stories that really ought to be taught in history class. (In a nod to the story's unfortunate relative obscurity, even Lucy needs a refresher, provided by Rufus.) In "Timeless," which gets the story mostly right, the mission seems fated for failure because Emma (BOO! HISS!) has given the Rittenhouse sleeper, a fictional Confederate colonel, a modern-day military history of the Civil War that gives him a roadmap to victory, including exactly where Montgomery’s Union troops are camping out. The Rebs massacre members of the 2nd South Carolina, Montgomery flees, and the raid looks to be doomed before it even starts.
The Time Team meets up with Tubman, who insists on raiding the plantations, troops or no troops; Rufus is starstruck. Lucy, realizing that the raid will be a disaster for the Union without the extra manpower, goes with Flynn to convince Montgomery to return. Meanwhile, Rufus and Wyatt go incognito at a nearby plantation to find the sleeper and destroy the Confederate version of Gray’s Sports Almanac. Spoiler: they do both.
Then—at the end of episode nine—a twist: Jessica is a Rittenhouse agent. Curse your sudden but inevitable betrayal! She swipes Wyatt’s gun, forces Jiya into the Lifeboat and vanishes just in time for Wyatt to realize what an incredible idiot he's been. Whoops.
The season finale picks up where we left off. A slightly sedated Jiya escapes Rittenhouse HQ thanks to some fancy fighting (where'd you learn that, Jiya?) and zooms off in the Lifeboat with Jessica shooting behind her. Still drugged and piloting a Lifeboat damaged by one of Jessica’s bullets, Jiya’s unable to bring the time machine back to the bunker. Instead, she has jumped in time and space. But where? And when? Knowing that Jiya will try to change the historical record to communicate with the present, Lucy hits the stacks and finds a photo of Jiya in San Francisco's Chinatown, circa 1888. There's also a message in the photo (written in Klingon, natch): GPS coordinates to where the Lifeboat has been hidden and two words: "DON'T COME."
Of course the team ignores the message. After they've fixed up the Lifeboat, which has been hiding under some bushes since Jiya hid it there 130 years ago, they immediately jump to late-19th century San Francisco and soon find her.
She's spent the last three years working in a seedy saloon and refuses to leave, explaining that her vision shows Rufus dying if and when she attempts to go back to the future. (The gold rush miners filling the bar here aren't exactly cowboys, per her doomsday vision, but they do all have bad teeth and spurs on their boots, so close enough.) Finally convinced to return after Lucy speechifies about friendship and family, events play out almost exactly as Jiya foresaw them. Acting quickly, Jiya prevents her vision—of a Rittenhouse sleeper stabbing Rufus in the back—from coming true, but she can't, tragically, save him from what's waiting across the street: Emma’s gun.
The Time Team returns down a man, everyone in shocked disbelief. This would be a depressing note to end on; the "rules" of time travel say that the team can never go back to a place they already were, and it'd take too long to train a new pilot to mount a rescue mission. But just as all hope is lost, what appears but another version of the Lifeboat. Out step older, more badass versions of Wyatt and Lucy. Just before the show closes, Lucy says to a stunned audience (and probably a not-insignificant number of gleeful #Lyatt shippers): "You guys wanna get Rufus back, or what?"
More of note:
NBC has yet to announce whether or not “Timeless” will be renewed for a third season, leaving quite a compelling cliffhanger for the rabid “clockblockers” out there.
Should there be a new season, there’s been a reshuffling in the House of Rittenhouse. While in San Francisco, Emma murders Lucy’s mom and “big bad” Nicholas Keynes in cold blood, accurately sensing that she was being pushed out of the organization. She and Jessica are now the matriarchs of Rittenhouse.
Episode 9 takes us deep into metaphysical sci-fi territory, with Jiya learning more about her visions from a mentally unstable pilot who's also been seeing visions. He tells her that he's been spending weeks inside his visions, "time traveling" inside his own head. He says he believes the visions are a gift, just like the gifts given to, he says, Joan of Arc, Florence Nightingale, and Kirk Cameron. (Joan did say she spoke to God; your guesses are as good as ours on the other two people.) Another character who we learn is seeing visions? Harriet Tubman herself, who says that God told her to expect Wyatt and Rufus (and showed her a vision of them "stepping out of a giant metal ball," aka the Lifeboat). Are we to believe there were other famous people through history who traveled (at least in their heads) through time, and explained those visions through whatever lens made sense to them? Sure seems that way.
As in the show, Harriet Tubman did actually report having blackouts, seizures and visions. Historians believe they began when an overseer threw a heavy object at another enslaved person but hit Tubman in the head instead. A devout Christian, Tubman attributed the visions to God speaking to her. She remained deeply religious her whole life. (Her hymnal is now in the collection of the Smithsonian National Museum of African American History and Culture.)
It seemed remarkably easy for Lucy to convince Montgomery to return—all she had to say was that there were 750 potential soldiers on the plantations. This number may have been exaggerated; again, historians believe that the total number of people freed, including women and children, was closer to 700, putting the population of men of fighting age a bit lower. But, by 1863, the Union Army was in poor shape. After the Battle of Fredericksburg late the prior year, morale was low. Some historians believe that the Union was seeing 100 desertions a day. So perhaps Montgomery would have been thrilled to get a couple hundred replacement soldiers.
When Tubman first meets up with the Time Team, Wyatt says that General McClellan had sent them from the North to help. Actually, George McClellan had been sent back to his home in New Jersey months earlier after he failed to land a decisive victory against the Confederates after the Battle of Antietam in September 1862. Some historians believe he was doing the best he could; others argue that his own caution and incompetence caused the battle to end in more of a tie than a rout. McClellan's troops had, by '63, been transferred to Maj. Gen. Ambrose Burnside.
In the Chinatown episode, Lucy said she knew to look in a book about San Francisco—one she co-wrote with her mother, confusingly—because Jiya had been obsessed with it ever since she had her first vision of the Golden Gate Bridge under construction. (This happened at the end of season 1, as you may remember.) The Golden Gate, however, was far, far in the future in 1888. Construction began in 1933. The engineers primarily involved in its design, Joseph Strauss and Charles Ellis, were teenagers at the time.
San Francisco's Chinatown arguably was founded in the mid-1840s, when the first Chinese immigrants arrived. By 1880, the 12 square blocks of Chinatown were home to an estimated 22,000 people, and white San Franciscans were getting anxious. By that time, California and San Francisco had passed eight anti-Chinese laws, banning gongs, fining laundry operators and requiring men who wore their hair in a queue to have it cut, among other indignities. (Some of these laws were later repealed or declared unconstitutional.) The racist lawmakers were just getting started, though: 1882 saw the passage of the Chinese Exclusion Act, the first U.S. law to ban immigration on the basis of race. And in 1890, two years after our story is set, San Francisco passed a law that banned Chinese people, including citizens of Chinese descent, from living or working outside of “a portion set apart for...the Chinese.” (This law was mercifully declared unconstitutional the same year.)
That’s it for our writeups for now, unless NBC decides to renew this fan-favorite show for a third season. But we’re not quite done yet. Look out for our Q&A with co-creator Shawn Ryan to publish tomorrow.
Browse the aisles of any grocery store today, and you’re likely to find chocolate, and lots of it. Pastries, cakes, Hershey’s kisses and artisanal bars offer an array of choices sure to provide just the right Valentine’s Day fix.
The human love affair with chocolate extends back thousands of years, but the options for consuming chocolate weren’t always so abundant. When the Spanish first introduced the treat to Western Europe in the 17th century, there was really only one: hot chocolate. It was prepared in its very own vessel, the chocolatière, or chocolate pot.
At that time—centuries before the advent of pulverization, emulsification or any of the other industrial processes that would make chocolate widely available in its current forms— drinking hot chocolate was the easiest and tastiest way to indulge in this luxury import.
“I think that chocolate—particularly when mixed with sugar—was very readily appealing to almost any taste,” says Sarah Coffin, curator and head of the product design and decorative arts department at the Cooper Hewitt, Smithsonian Design Museum. “I suspect that tea and coffee people acquired tastes for but perhaps were a little less easy to immediately embrace.”
Preparing hot chocolate entailed a process distinct from those of the other beverages popular at the time. Rather than infusing hot water with coffee grounds or tea leaves and then filtering out the sediment, hot chocolate required melting ground cacao beans in hot water, adding sugar, milk and spices and then frothing the mixture with a stirring stick called a molinet.
When Louis XIII married Anne of Austria in 1615, the queen’s enthusiasm for chocolate spread to the French aristocracy. During that early modern period, the French had refined the dining experience to the point of extravagance. In that spirit, they crafted the chocolatière, a vessel uniquely suited to preparing chocolate.
In reality, the origins of the chocolate pot date back to Mesoamerica, where traces of theobromine—the chemical stimulant found in chocolate—have been found on Mayan ceramic vessels dating back to 1400 B.C. But the chocolate pot that set the standard for Europe looked nothing like the earthenware of the Americas. It sat perched on three feet, with a tall, slender body, and an ornate handle at 90 degrees from the spout. Most important was the lid, which had a delicate hinged finial, or cap, that formed a small opening for the molinet.
“It was inserted to keep the chocolate frothed and well-blended,” says Coffin of the utensil. “Because unlike the coffee I think that the chocolate tended to settle more. It was harder to get it to dissolve in the pot. So you’d need to regularly turn this swizzle stick.”
It was this hinged finial that came to define the form. “You can always tell a chocolate pot and the way you can tell is because it has a hole in the top,” says Frank Clark, master of historic foodways at the Colonial Williamsburg Foundation, who makes colonial-style chocolate—and sometimes, hot chocolate—for guests.
In the 17th and 18th centuries, chocolate pots were mostly made of silver or porcelain, the two most valuable materials of the time. “Chocolate was considered exotic and expensive,” says Coffin. “It was a rare commodity and so it was associated with luxury objects such as silver, and of course in the early days, porcelain.”
As chocolate spread throughout western Europe, each country interpreted the vessel according to its own tastes. Vienna became known for its elegant chocolate and coffee sets. Many German chocolate pots, including several in the Cooper Hewitt’s collection from the mid-to-late 18th century, featured gilded, Chinese-inspired designs known as Chinoiserie.
Image by Gift of Mrs. Edward Luckemeyer, 1912-13-1-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Matt Flynn. A mid-18th-century enamel and glazed porcelain chocolate pot and lid manufactured by the Meissen Porcelain Factory; Meissen, Saxony, Germany.
Image by Bequest of Erskine Hewitt, 1938-57-633, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A chocolate pot attributed to Meissen, in Saxony, Germany, ca. 1735. Gilt and glazed hardpaste porcelain.
Image by Bequest of Erskine Hewitt, 1938-57-307-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Matt Flynn. A stoneware chocolate pot manufactured by Wedgwood, Staffordshire, England, from the late 18th century. Molded, thrown and polished stoneware (Black Basaltware).
Image by Bequest of Erskine Hewitt, 1938-57-650-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A gilt and glazed porcelain chocolate pot, manufactured by the Berlin Porcelain Factory, Berlin, Prussia, Germany, ca. 1770.
Image by Bequest of Erskine Hewitt, 1938-57-665-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A porcelain chocolate pot, ca. 1740, manufactured by the Meissen Porcelain Factory; Meissen, Saxony, Germany. Underglazed enameled, glazed and gilt hardpaste porcelain; gilt brass.
Image by Bequest of Erskine Hewitt, 1938-57-676-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A gilt and glazed hardpaste porcelain chocolate pot, manufactured by the Fürstenburg Porcelain Factory in Lower Saxony, Germany, 1780–1800.
Image by Gift of Elizabeth Taylor, 1991-11-3-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Matt Flynn. This gilt porcelain "Healy Gold" chocolate pot was manufactured by Chryso Ceramics in Washington, D.C., ca. 1900.
“They suddenly had this new beverage and took it back to their courts. So then things were made in the different courts, so you get things made in Austrian porcelain or German porcelain and French ceramics and silver and so forth,” says Coffin.
Americans, too, had a thirst for chocolate, which they began drinking in the 1660s, soon after England acquired its own chocolate pipeline, Jamaica, in 1655. Chocolate pots weren’t often produced in the United States, but Coffin says the European imports were of extremely high quality because the wealthy people who purchased them wanted to keep up with the latest continental fashions.
In Europe and the United States, drinking hot chocolate became both a public and a private practice. Around the end of the 17th century, chocolate and coffee houses cropped up that served as meeting spots for lawyers, businessmen and politicos well into the 18th century. In New England, Clark says, those in charge of setting the price of tobacco and other important commodities were known to gather at a chocolate or coffee house to do so.
In private, chocolate was associated with the bedroom, as it was popular to drink first thing in the morning as well as in the evening before bed. A painting by French artist Jean-Baptiste Le Prince from 1769 depicts a woman lying in bed, reaching out for her departed lover, the morning light illuminating her figure. A chocolate pot and cups sit by her bedside. According to the book Chocolate: History, Culture, and Heritage by Louis E. Gravetti and Howard-Yana Shapiro, such images led to chocolate being associated with a leisurely lifestyle. This imbued the beverage with an added air of luxury.
With the arrival of the Industrial Revolution, that started to change. Chocolate makers developed methods of using hydraulic and steam chocolate mills to process chocolate faster and at lower cost. In 1828, Coenraad Johannes Van Houten invented the cocoa press, which removed the fat from cacao beans to make cocoa powder, the basis for most chocolate products today. Chocolate prices fell, and soon chocolate became a sweet that almost everyone could afford.
The chocolate pot also evolved. Chocolate powder decreased the importance of the molinet, and chocolate pots began cropping up with finials that were fixed in place.
By the early 20th century, the golden age of hot chocolate had come and gone, but chocolate pots still enjoyed some popularity. In the late 19th and early 20th centuries, the Japanese had considerable success exporting porcelain chocolate pots and other wares to North America.
One example in the collections of the Freer and Sackler Galleries is a Satsuma style porcelain chocolate pot, fired with clear glaze and decorated with a colorful array of three-dimensional, enamel dots depicting a Buddhist scholar with his attendants. Ceramics curator Louise Cort says the scene is one of a few stock images commonly used at that time to cater to Western perceptions of Japanese culture.
Mineralogist A.E. Seaman purchased the piece at the 1904 World’s Fair in St. Louis. According to notes from his daughter, the family used the pot for tea rather than hot chocolate. This is not surprising; tea was growing more popular by then, and aside from the shape of the vessel, there is no removable finial that would indicate the pot should be used exclusively for hot chocolate. It could easily have been used to prepare other beverages.
By the 1950s, chocolate pot production had died down. Very few, if any, are still made today, but one can still find virtually any style of chocolate pot online or in auction houses. Vessels ranging from pristine 17th-century French silver pots to Japanese Satsuma-style ware sell regularly on eBay for anywhere from $20 to $20,000.
People like Clark at Colonial Williamsburg are managing to preserve the old chocolate tradition. In his demonstrations, he roasts the actual cacao beans, separates out the hard shell, and grinds the beans into a liquid paste. When he does prepare the actual beverage, he dissolves the chocolate in a traditional chocolate pot and adds sugar and spices.
“It really represents the way chocolate was made in colonial times for the very wealthy,” Clark says.
Those interested in imbibing true hot chocolate this Valentine’s Day can easily do so. It’s not hard to find an antique chocolate set and molinet for under $100, and many stores now sell cacao nibs, bits of roasted cacao beans that have been removed from their shells. Grind the nibs in a bowl or on a chocolate stone, and melt the paste in hot water, and you’ll be sipping hot chocolate in no time. (A few documented recipes are also available online from the hot chocolate heyday.)
As far as chocolate’s aphrodisiac powers go, research suggests that there’s very little validity to the lore. But all is not lost; Cort says hot chocolate would have been a worthy tool of seduction purely for the taste itself. “I suspect that… if you thought it had this [aphrodisiac] power and it was in any case sweet if you mixed a lot of sugar and vanilla with it, this would be a wonderful way to try and seduce somebody.”
Recent news of state and local elected officials stepping up their campaigns for the November 2014 elections brings to mind the fascinating array of campaign material housed in the museum's Division of Political History. My research for a display on the early 1960s for the museum's 50th anniversary this year required delving into the division's collection of memorabilia from the 1960 and 1964 presidential campaigns.
Kennedy campaign sticker
For years, political parties have used parades and rallies, slogans, songs and signs not only to promote their favorite candidates but to disparage their opponents as well. An abundance of campaign material—buttons, stickers, hats, postcards, playing cards, coasters, matchbooks and more—was, and continues to be, produced.
During the 1960 presidential campaign, the young Democratic senator from Massachusetts, John F. Kennedy, was pitted against the experienced Republican vice president, Richard M. Nixon. Kennedy pledged to "get the country moving again." Emphasis was on finding new ways to deal with domestic problems of poverty and inequality and focusing on new challenges such as space exploration.
Hats like the one pictured below were worn by delegates and supporters of the Kennedy/Johnson presidential ticket at the 1960 Democratic convention in Los Angeles.
A woman's Kennedy/Johnson campaign hat
Richard Nixon campaigned as the more responsible and experienced candidate in both domestic and foreign policies and promised to continue the peace and prosperity of the previous eight years of the Eisenhower administration in which he played a part as Vice-President.
Nixon campaign bumper sticker
Nixon campaign sheet music
Defining moments of the 1960 campaign were the debates between the nominees, which were televised for the first time in history and watched by millions of viewers.
First televised Kennedy-Nixon debate
Evidence of the popularity of these debates is this handmade community sign, with political buttons of the candidates attached, urging citizens to gather together to watch the fourth and last Kennedy-Nixon televised debate.
Handmade sign from 1960 urging voters to watch the Nixon-Kennedy debate
The election was very close: JFK barely edged Nixon in the popular vote, though the electoral vote gave him a more comfortable margin. John F. Kennedy was on his way to the White House as the nation's youngest elected president and the first Catholic to hold the office.
Kennedy campaign sticker
In contrast to the narrow margin of victory in the 1960 presidential election, the 1964 election was a landslide. Much had occurred during the previous four years. In November 1963, before he could complete his dream of a "New Frontier," President Kennedy was assassinated and Vice President Lyndon B. Johnson was sworn in as president. Johnson carried on the policies and goals of JFK under the slogan "Great Society." His proposals included civil rights legislation, education aid, and medical care for the elderly.
Comic book featuring Lyndon Johnson and the "Great Society"
However, many Southerners, including some elected Democratic officials, were not pleased when President Johnson signed the Civil Rights Act, and it threatened to split the party. Just four weeks before the election, first lady Claudia "Lady Bird" Johnson, born and raised in the South, embarked on a four-day whistle-stop tour through rural areas of the South to gather support for her husband's campaign and defend the idea of civil rights. To publicize the event, postcards like the one below were sent from aboard the "Lady Bird Special" as the first lady traveled to 47 towns, making 47 speeches from a platform on the back of the train.
Postcard for the Lyndon Johnson campaign's Lady Bird Special train
Johnson's opponent in the 1964 campaign was Arizona Senator Barry Goldwater. Conservative Goldwater was outspoken and often controversial in his views. He proposed limiting Federal Government involvement in activities such as welfare and medical care and was a strong advocate against communism. At one point in the campaign, he suggested using nuclear weapons as a means of dealing with the conflict in Vietnam. In his acceptance speech at the Republican convention he stated, "I would remind you that extremism in the defense of liberty is no vice."
Both sides of a fan supporting the Goldwater campaign
The campaign was heated as Johnson and Goldwater clashed over every issue. The Republicans focused on Johnson's overspending and recklessness with the economy, while Goldwater claimed the country was in "moral decay" with "violence in the streets" under Johnson's administration. The Democrats called Goldwater irresponsible and extreme in his views, especially on the use of nuclear weapons. Though Goldwater's supporters officially coined the slogan "In Your Heart You Know He's Right," the unofficial slogan of his opponents became "In Your Guts You Know He's Nuts."
Political parties have often distributed satirical material to discredit their opponents, such as this "Bettor Deal Certificate" mocking Johnson and the Democratic Party's policies, and this cartoon book, which the Democratic National Committee used to decorate its office, portraying Goldwater as a radical, buffoonish character.
Johnson "Funny Money" certificate
Goldwater cartoon book
The candidates toned down their rhetoric later in the campaign but it was "LBJ All the Way" and Johnson went on to win the election with 486 of the 538 electoral votes and by a margin of more than 16 million popular votes.
Johnson campaign sticker
Bayonet: In the early 17th century, sportsmen in France and Spain adopted the practice of attaching knives to their muskets when hunting dangerous game, such as wild boar. The hunters particularly favored knives that were made in Bayonne—a small French town near the Spanish border long renowned for its quality cutlery.
The French were the first to adopt the “bayonet” for military use in 1671—and the weapon became standard issue for infantry throughout Europe by the turn of the 18th century. Previously, military units had relied on pikemen to defend musketeers from attack while they reloaded. With the introduction of the bayonet, each soldier could be both pikeman and musketeer.
Even as modern weaponry rendered bayonets increasingly obsolete, they endured into the 20th century—in part because they were deemed effective as psychological weapons. As one British officer noted, regiments “charging with the bayonet never meet and struggle hand to hand and foot to foot; and this for the best possible reason—that one side turns and runs away as soon as the other comes close enough to do mischief.”
Barbed Wire: Invented in the late 19th century as a means to contain cattle in the American West, barbed wire soon found military applications—notably during the Second Anglo-Boer War (1899-1902) in what is now South Africa. As the conflict escalated, the British Army adopted increasingly severe measures to suppress the insurgency led by Dutch settlers.
One such measure was constructing a network of fortified blockhouses connected by barbed wire, which limited the movement of the Boers in the veldt. When British forces initiated a scorched-earth campaign—destroying farms to deny the guerrillas a means of support—barbed wire facilitated the construction of what was then termed “concentration camps,” in which British forces confined women and children.
More than a decade later, barbed wire would span the battlefields of World War I as a countermeasure against advancing infantry. A U.S. Army College pamphlet published in 1917 succinctly summarized the advantages of a barbed-wire entanglement:
“1. It is easily and quickly made.
2. It is difficult to destroy.
3. It is difficult to get through.
4. It offers no obstruction to the view and fire of the defense.”
Steamship: “The employment of steam as a motive power in the warlike navies of all maritime nations, is a vast and sudden change in the means of engaging in action on the seas, which must produce an entire revolution in naval warfare,” wrote British Gen. Sir Howard Douglas in an 1858 military treatise.
He was correct, although this revolution in naval warfare was preceded by a gradual evolution. The early commercial steamships were propelled by paddle wheels mounted on both sides of the vessel—which reduced the number of cannons a warship could deploy and exposed the engine to enemy fire. And a steamship would need to pull into port every few hundred miles to replenish its supply of coal.
Still, steamships offered significant advantages: They were not dependent upon the wind for propulsion. They were fast. And they were more maneuverable than sailing ships, particularly along coastlines, where they could bombard forts and cities.
Arguably the most important enabler of steam-powered warships was the 1836 invention of the screw propeller, which replaced the paddle wheel. The next major breakthrough was the invention of the modern steam turbine engine in 1884, which was smaller, more powerful and easier to maintain than the old piston-and-cylinder design.
Locomotive: Justus Scheibert, an officer in the Royal Prussian Engineers, spent seven months with the Confederate Army observing military campaigns during the Civil War. “Railroads counted in both sides’ strategies,” he quickly concluded. “Trains delivered provisions until the final moments. Therefore the Confederacy spared nothing to rebuild tracks as fast as the enemy destroyed them.”
Although railroads had been occasionally used during the Crimean War (1853-1856), the Civil War was the first conflict where the locomotive demonstrated its pivotal role in rapidly deploying troops and matériel. Mules and horses could do the work, though far less efficiently; a contingent of 100,000 men would require 40,000 draft animals.
Civil War historians David and Jeanne Heidler write that “Had the war broken out ten years before it did, the South’s chances of winning would have been markedly better because the inequality between its region’s railroads and those of the North would not have been as great.”
But, by the time war did break out, the North had laid more than 21,000 miles of railroad tracks—the South had only about a third of that amount.
Telegraph: The Civil War was the first conflict in which the telegraph played a major role. Private telegraph companies had been in operation since the 1840s—a network of more than 50,000 miles of telegraph wire connected cities and towns across the United States when war erupted.
Although some 90 percent of telegraph services were located in the North, the Confederates were also able to put the device to good use. Field commanders issued orders to rapidly concentrate forces to confront Union advances—a tactic that led to victory in the First Battle of Bull Run, in 1861.
Arguably the most revolutionary aspect of the device was how it transformed the relationship between the executive branch and the military. Before, important battlefield decisions were left to the discretion of field generals. Now, however, the president could fully exercise his prerogative as commander in chief.
“Lincoln used the telegraph to put starch in the spine of his often all too timid generals and to propel his leadership vision to the front,” writes historian Tom Wheeler, author of Mr. Lincoln’s T-Mails. “[He] applied its dots and dashes as an essential tool for winning the Civil War.”
Image by Bettmann / Corbis. DDT proved to be so effective at relieving insect-borne diseases that some historians believe World War II was the first conflict where more soldiers died in combat than from disease.
Image by Bettmann / Corbis. Invented in the late 19th century as a means to contain cattle in the American West, barbed wire soon found military applications.
Image by Corbis. The French were the first to adopt the bayonet for military use in 1671—and the weapon became standard issue for infantry throughout Europe by the turn of the 18th century.
Image by Medford Historical Society Collection / Corbis. Although railroads had been occasionally used during the Crimean War, the Civil War was the first conflict where the locomotive demonstrated its pivotal role in rapidly deploying troops and matériel.
Caterpillar tractor: During World War I, engineers sought to design a war machine robust enough to crush barbed wire and withstand enemy fire, yet agile enough to traverse the trench-filled terrain of no man’s land. The inspiration for this armored behemoth was the American tractor.
Or, more specifically, the caterpillar tractor invented in 1904 by Benjamin Holt. Since the 1880s, Holt’s company, based in Stockton, California, had manufactured massive, steam-powered grain harvesters. To allow the heavy machines to traverse the steep, muddy inclines of fertile river deltas, Holt instructed his mechanics to replace the drive wheels with “track shoes” made from wooden planks.
Later, Holt sought to sell his invention to government agencies in the United States and Europe as a reliable means for transporting artillery and supplies to the front lines during wartime.
One person who saw the tractor in action was a friend of Col. E. D. Swinton of the Engineering Corps of the British Army, who wrote a letter to Swinton in July 1914 describing “a Yankee machine” that “climbs like hell.” Less than a year later, Swinton drafted specifications for a tank—with a rhomboid shape and caterpillar treads—designed to cross wide trenches. It later became known as “Big Willie.” The tanks made their combat debut during the Battle of the Somme on September 15, 1916.
As historian Reynold Wik has noted, “the first military tanks had no American parts, neither motors, tracks, nor armament. However. . . the technological innovation which occurred in Stockton in November 1904 had proved that heavy machines could be moved over difficult terrain with the use of track-type treads.”
Camera: Aerial photographic reconnaissance came of age in World War I, thanks to higher-flying planes and better cameras. Initially, planes were deployed to help target artillery fire more accurately. Later, they were used to produce detailed maps of enemy trenches and defenses, assess damage after attacks and even scout “rear echelon” activities to glean insights into enemy battle plans. Baron Manfred von Richthofen—“the Red Baron”—said that one photoreconnaissance plane was often more valuable than an entire fighter squadron.
The opposing armies took measures to thwart photographic reconnaissance. Potential ground targets were disguised with painted camouflage patterns. (The French, naturellement, enlisted the help of Cubist artists.)
Of course, the most effective countermeasure was to mount guns on planes and shoot down the observation aircraft. To provide protection, fighter planes escorted reconnaissance craft on their missions. The era of the “dogfight” began—and with it the transformation of the airplane into a weapon of warfare.
Chlorine: Historians generally agree that the first instance of modern chemical warfare occurred on April 22, 1915—when German soldiers opened 5,730 canisters of poisonous chlorine gas on the battlefield at Ypres, Belgium. British records indicate there were 7,000 casualties, 350 of them fatal.
German chemist Fritz Haber recognized that the characteristics of chlorine—an inexpensive chemical used by the German dye industry—made it an ideal battlefield weapon. Chlorine would remain in its gaseous form even in winter temperatures well below zero degrees Fahrenheit and, because chlorine is 2.5 times heavier than air, it would sink into enemy trenches. When inhaled, chlorine attacks the lungs, causing them to fill with fluid so that the victim literally drowns.
In response, all sides sought even more lethal gases throughout the remainder of the conflict. Chlorine was an essential ingredient in manufacturing some of those gases—including the nearly odorless phosgene, which was responsible for an estimated 80 percent of all gas-related deaths in World War I.
DDT: In the late 1930s, with war on the horizon, the U.S. military undertook preparations to defend soldiers against one of the most lethal enemies on the battlefield: insect-borne diseases. During World War I, typhus—a bacterial disease spread by lice—had killed 2.5 million people (military and civilian) on the Eastern Front alone. Health specialists also worried about the prospect of mosquito-borne diseases, such as yellow fever and malaria, in the tropics.
The military needed an insecticide that could be safely applied as a powder to clothes and blankets. Initially synthesized by an Austrian student in 1873, DDT (dichlorodiphenyltrichloroethane) remained a laboratory oddity until 1939, when Swiss chemist Paul Müller discovered its insecticidal properties while researching ways to mothproof wool clothing. After the military screened thousands of chemical compounds, DDT eventually emerged as the insecticide of choice: it worked at low dosages, it worked immediately and it kept working.
DDT proved to be so effective that some historians believe World War II was the first conflict where more soldiers died in combat than from disease. Yet, even before the war ended, entomologists and medical researchers warned that the insecticide could have long-term, dangerous effects on public health and the environment. The United States banned DDT in 1972.
Tide-Predicting Machine: As the Allies planned their invasion of Europe in 1944, they faced a dilemma: Should they land on the beaches of Normandy at high tide or low tide?
The argument in favor of high tide was that troops would have less terrain to cross as they were subjected to enemy fire. However, German Gen. Erwin Rommel had spent months overseeing the construction of obstacles and booby traps—which he called a “devil’s garden”—to thwart a potential Allied landing. During high tide, the devil’s garden would be submerged and virtually invisible; but during low tide it would be exposed.
Ultimately, military planners concluded that the best conditions for an invasion would be a day with an early-morning (but steadily rising) low tide. That way, landing craft could avoid the German obstacles, and Army engineers could begin clearing them away for subsequent landings.
To complicate matters, the Allies also wanted a date when, prior to the dawn invasion, there would be sufficient moonlight to aid pilots in landing paratroopers.
So the Allies consulted meteorologists and other experts to calculate the dates when the tides and the moon would meet the ideal conditions. Among those experts was Arthur Thomas Doodson, a British mathematician who had constructed one of the world’s most precise tide-predicting machines—which reduced the risk of ships running aground when entering a harbor. Doodson’s machine was essentially a primitive computer that produced calculations using dozens of pulley wheels. Doodson himself calculated the ideal dates for the D-Day invasion—a narrow set of options that included June 5-7, 1944. The Allied invasion of Europe commenced on June 6.
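Doodson's machine summed tidal harmonics mechanically: the predicted height at any moment is a mean sea level plus a sum of cosine "constituents," each with its own amplitude, angular speed and phase. The sketch below shows the same idea numerically in Python; the constituent speeds are standard astronomical values, but the amplitudes, phases and mean level are purely illustrative, not real data for any port.

```python
import math

# Each constituent: (name, amplitude in meters, speed in degrees/hour,
# phase in degrees). The speeds are the standard tidal values; the
# amplitudes and phases here are invented for illustration only.
CONSTITUENTS = [
    ("M2", 1.20, 28.984, 45.0),   # principal lunar semidiurnal
    ("S2", 0.40, 30.000, 80.0),   # principal solar semidiurnal
    ("K1", 0.15, 15.041, 120.0),  # lunisolar diurnal
    ("O1", 0.10, 13.943, 200.0),  # lunar diurnal
]

MEAN_LEVEL = 2.0  # meters above chart datum (illustrative)

def tide_height(hours):
    """Predicted tide height (meters) at `hours` after the epoch."""
    total = MEAN_LEVEL
    for _name, amp, speed_deg, phase_deg in CONSTITUENTS:
        # Each pulley wheel in the machine traced one of these cosines;
        # a wire summed their displacements. Here we just add the terms.
        total += amp * math.cos(math.radians(speed_deg * hours - phase_deg))
    return total

def next_low_tide(start, end, step=0.01):
    """Scan [start, end] hours in `step` increments; return (time, height)
    of the lowest predicted water level in that window."""
    times = (start + i * step for i in range(int((end - start) / step) + 1))
    best_t = min(times, key=tide_height)
    return best_t, tide_height(best_t)

if __name__ == "__main__":
    t_low, h_low = next_low_tide(0.0, 24.0)
    print(f"Lowest tide in first 24 h: {h_low:.2f} m at t = {t_low:.2f} h")
```

Planners could run such a calculation forward day by day to find a date pairing an early-morning rising low tide with a bright moon the night before—the same search Doodson performed for the June 1944 window.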
On January 21, 1976, two of what many aviation enthusiasts consider the most beautiful man-made objects ever to fly took off simultaneously from Heathrow Airport near London and Orly Airport near Paris with their first paying passengers. Those two airplanes, called Concorde, would fly faster than the speed of sound from London to Bahrain and from Paris to Rio de Janeiro, elegant harbingers of a brave new era in commercial air travel.
One of the three Concordes on public view in the United States stands regally in the hangar of the Steven F. Udvar-Hazy Center of the Smithsonian’s National Air and Space Museum in Chantilly, Virginia, the red, white, and blue colors of Air France emblazoned on its vertical stabilizer. (The other two are at the Intrepid Museum in New York City and the Museum of Flight in Seattle.)
The performance of Concorde—airline pilot and author Patrick Smith tells me that one does not put a “the” in front of the plane’s name—was spectacular. Able to cruise at a near-stratospheric altitude of 60,000 feet at 1,350 miles per hour, the plane cut travel times on its routes in half. But speed and altitude were not the only factors that made Concorde so remarkable. The plane was a beauty.
Since back when flight was only a dream, there has been an aesthetic element in imagined flying machines. It’s easy to imagine Daedalus fixing feathers onto the arms of his doomed son Icarus in a visually appealing, bird-like pattern. Leonardo da Vinci envisioned the symmetrical shape of a bat wing in his drawings of possible airplanes. Some of this aesthetic is still carried over (ironically perhaps) in military fighter jets, but in commercial aviation, where profit demands more and more passengers, aircraft designers have swapped beauty for capacity.
The workhorse 747, for instance, looks like a plane sculpted by Botero. At a time when airliners are called buses, Concorde, designed by Bill Strang and Lucien Servanty, was the dream of Daedalus come true. It seemed to embody the miracle of flight, long after that miracle was taken for granted. In my book on elegant industrial designs, the graceful creature occupies a two-page spread.
Concorde was one competitor in a three-team international race. In the U.S., Boeing won a design face-off with Lockheed for a supersonic airliner, but, according to Bob van der Linden, curator of air transportation and special purpose aircraft at the Air and Space Museum, Wall Street never invested in the U.S. version, and Congress turned down the funding necessary to build the plane for a combination of budget and environmental reasons.
Russia also entered the fray and produced the Tu-144, a plane that looked somewhat similar to Concorde and beat the Anglo-French plane into the air by a few months, in December 1968. The ill-fated Russian SST crashed during a demonstration flight at the Paris Air Show in 1973, and the program never recovered.
Concorde began test flights early in 1969 and—with pilots and crews specially trained and engineering honed—began carrying paying passengers in 1976. (And pay they did, with a first class ticket costing around $12,000.)
Smith, author of the blog “Ask the Pilot” and of the book Cockpit Confidential, told me that the sleek supersonic transport (SST) was “a difficult plane to engineer, and just as difficult to fly.” But, he continued, Concorde was an engineering triumph, a formidably complex machine “all done with slide rules.” Despite the cost of tickets, the plane was not luxurious inside, seating only about 100, with a single aisle in constant use by the aircrew needing to serve meals in half the usual time. A story, possibly apocryphal, tells of a passenger who was asked by the captain on debarkation how she liked Concorde: “It’s so ordinary,” she complained. An SST engineer, hearing this, responded: “That was the hardest part.”
Between 14 and 16 of the French and British Concordes made an average of two flights a day for several years. Smith says the plane’s stellar safety record was “more the work of probability than engineering. It’s possible that with a significantly larger number of Concordes on the roster of the world’s carriers, there would have been an altogether different safety record.”

British Airways advertising poster, c. 1996 (National Air and Space Museum)
That safety record came to a terrible end on July 25, 2000. On takeoff from Paris, a tail of flame followed Flight 4590 into the air, and seconds later the Air France Concorde crashed, killing all 109 passengers and crew members aboard and four people on the ground. Initial reports blamed a piece of metal that had fallen off a Continental DC-10 taking off just ahead of Concorde; the debris shredded a tire, and pieces of the blown tire pierced the fuel tank.
Later investigations told a more complicated story, one that involved a cascade of human errors. The plane was over its recommended takeoff weight, and a last-minute addition of baggage shifted the center of gravity farther back than normal, both of which changed the takeoff characteristics.
Many experts speculate that if it hadn’t been for the additional weight, Flight 4590 would have been in the air before reaching the damaging metal debris. After the tire was damaged, the plane skidded toward the edge of the runway, and the pilot, wanting to avoid losing control on the ground, lifted off at too slow a speed.
There is also a prevailing opinion that the engine fire that looks so disastrous in photos taken from an airliner next to the runway would have blown out once the plane was in the air. But apparently the flight engineer shut down another engine in an unnecessary abundance of caution, making the plane unflyable.
Perhaps because an unlikely coincidence of factors caused the crash, Concorde continued in service after modifications to the fuel tanks. But both countries permanently grounded the fleet in 2003.
In the end, the problem was not mechanical but financial. Concorde was a gorgeous glutton, burning twice as much fuel as other airliners, and was expensive to maintain.
According to curator Van der Linden, for a trans-Atlantic flight, the plane used one ton of fuel for each passenger seat. He also points out that many of the plane’s passengers didn’t pay in full for their seats, instead using mileage upgrades. Just as Wall Street had failed to invest in the plane, other airlines never ordered more Concordes, meaning that the governments of Britain and France were footing all the bills, and losing money despite the burnishing of national pride.
“The plane was a technological masterpiece,” says the curator, “but an economic black hole.”
In 1989, on the bicentennial of the French Revolution, when French officials came to the States to present the U.S. with a copy of the Declaration of the Rights of Man, an agreement was struck with the Smithsonian to present the Institution with one of the Concordes when the planes were finally phased out.
“We figured that wouldn’t be for many years,” says Van der Linden, who has edited a soon-to-be-released book called Milestones of Flight. “But in April of 2003, we got a call that our airplane would be coming. Luckily, it was just when the Udvar-Hazy Center was opening, and we managed to find room on the hangar floor. There was some initial worry that such a long aircraft would block access to other exhibits, but the plane stands so high that we could drive a truck under the nose.”
On June 12, 2003, the Smithsonian Concorde left Paris for Washington, D.C. Van der Linden happened to be in Paris on other business at the time, and was invited to fly gratis along with 50 VIPs. “We flew at between 55,000 and 60,000 feet, and at that altitude the sky, seen through the hand-size window, was a wonderful dark purple. One other great thing about the flight was that U.S. taxpayers didn’t have to pay for my trip home.”
Two months later, with the help of Boeing crews, the extraordinary plane was towed into place, and now commands the southern end of the building. Though first built more than four decades ago, Concorde still looks like the future. As Patrick Smith told me, “Concorde evoked a lot of things—a bird, a woman’s body, an origami mantis—but it never looked old. And had it remained in service that would still be true today. ‘Timeless’ is such an overused word, but very few things in the world of industrial design can still appear modern 50 years after their blueprints were first drawn up.”
In what is perhaps an inevitable postscript to the commercial SST era, a group that calls itself Club Concorde has come up with the nostalgic dream of buying one of the mothballed SSTs and putting it into service again for those who consider time money, and have plenty of money to spare.
According to newspaper reports in England, the club has so far raised $200 million to restore former glory aloft, and has approached current owner Airbus to buy one of that company’s planes.
The suggestion has met with a “talk to the hand” response. French officials have compared Concorde to the Mona Lisa (an apt da Vinci reference) as a national treasure, not to be sold off. And the expense and difficulty of resurrecting the plane, even if it could be purchased, are formidable obstacles.
David Kaminsky-Morrow, the air transport editor of Flightglobal.com, points out that “Concorde is an immensely complex supersonic aircraft and [civil aviation authorities] will not entrust the safe upkeep of its airframe to a group of enthusiasts without this technical support in place.”
So all those who missed the boat (or rather, the bird) when Concordes were still flying can still go to the Udvar-Hazy Center to exercise their right to gawk admiringly at a true milestone of flight.
Concorde is on display in the Boeing Aviation Hangar at the Smithsonian's Steven F. Udvar-Hazy Center, Chantilly, Virginia.
Headlines this summer have announced 2012 as America’s hottest year on record, with particularly brutal heat waves striking the Northeast, and stunning temperature highs all but cooking Death Valley and other Southwest desert hotspots.
What many papers have not pointed out, however, is that 2012 is shaping up to be among the warmest on record worldwide. In June, across the planet, the average land temperature was the highest since such record-keeping began in 1880. And factoring in ocean temperatures, the month was the fourth-hottest June since 1880. The same data source, from the National Oceanic and Atmospheric Administration, shows that May 2012 was comparably scorching in the Northern Hemisphere. The global report for July is not yet available, but the national analysis is in—and the month was hotter than any July before it. The lower 48 states’ 31-day temperature average of 77.6 degrees Fahrenheit made July 2012 the warmest single month recorded in America since national record-keeping began in 1895. Also during July, fires across America burned more than two million acres. Now, it’s August, and while we’re eagerly awaiting the next monthly summary, we don’t need a government climatologist to tell us that it’s broiling out there. Fires are sweeping the country, and farmers are grumbling about a drought. Global warming? It feels that way.
Following are a few of the hottest of hotspots where recent weather extremes are making 2012 a summer to write home about.
Spain. I was there, pedaling a bicycle through the Spanish interior in late June, and I almost cooked. The land was erupting in flames. Distant plumes of smoke marked brush and forest fires while helicopters came and went in response. Nights were balmy and comfortable, and mornings weren’t intolerable—but by noon each day the mercury edged past 100, and from 3 p.m. until about 7, the heat made riding a bike impossible. For four days I baked, spending one miserable afternoon on La Ruta de Don Quixote, a pathetic gravel trail through the scrub and desert, and itself the subject of a feeble tourism marketing campaign. Signage was poor and of water there was none. Windmills towered above me on a low ridge—but there was not a shade tree to be found. Relief came two days later, on the 26th, when, at last, I rolled into the air-conditioned terminal of Madrid-Barajas International Airport. June 2012 in Spain would clock out as the fourth-hottest Spanish June since 1960. The day I got out of that oven, temperatures peaked, reaching 111 degrees Fahrenheit in Cordoba.
Death Valley. On July 11, the temperature hit 128 degrees Fahrenheit in Death Valley. Through the night, the mercury crashed more than 20 degrees to 107, which tied the world’s record for the warmest daily low, and the 24-hour average for the same day was a world record 117.5 degrees. Just four days later, scores of ultramarathoners embarked on the annual 135-mile Badwater foot race, which leads from 282 feet below sea level, where asphalt can get hot enough to melt rubber, to 8,360 feet above, at Whitney Portal. And while the race is considered one of the most brutal competitions in the world, climbing almost two miles straight up from the aptly named Furnace Creek, starting point of the race, may be about the surest way to beat—or simply escape—the heat of Death Valley.
Austria. This year the country recorded its sixth-hottest June since record-keeping began there in 1767. On June 30, temperatures maxed out at 99.9 degrees Fahrenheit both in the capital city of Vienna and in Bad Deutsch-Altenburg.
Canary Islands. Recent soaring temperatures, preceded by one of the driest Spanish winters in seven decades, have sparked raging fires on the islands of Tenerife and La Gomera. Four thousand residents have been evacuated and British tourists have been asked to report to the Foreign Office as firefighters struggle to control the flames. Eight fires were recently burning on Tenerife and ten on La Gomera, where the inferno has threatened Garajonay National Park, a Unesco World Heritage site containing prehistoric woodland dating back 11 million years. Authorities report that the La Gomera blazes may be the result of arson.
The Arctic. If it looks freezing, and it feels freezing, it still might be warmer than ever—and in the high Arctic this summer, the sea ice has shrunk to historic lows. Though July’s ice cap cover was up slightly from last year, it was the second lowest recorded by NASA’s satellite monitoring program for polar ice extent. But the ice has been melting in the past 30 days, and now the square mileage of sea ice—2.52 million—is the lowest ever recorded for the month of August.
Lassen Volcanic National Park. A fire that broke out on July 29 in the California park has since scorched 24,000 acres of forest. A recent article predicted that the fire might be contained by the final days of August. The main highway through the park and over the mountain—a living volcano and no stranger to heat and fire—has been closed, and numerous homes around the park are threatened. Elsewhere throughout California, Idaho, Oregon and Washington, fires have burned half a million acres of countryside, all of it parched by summer heat. In Redding, California, for instance, at the north end of the Sacramento Valley, summer started early, with the temperature reaching 102 on the last day of May. Twelve days in July were hotter than 100 degrees, and only four days in August so far have stayed below triple digits. On August 12, the temperature reached 112.
In Related News:
Bearing the Heat. Across the United States, hungry black bears, facing a heat-induced food shortage, have resorted to breaking and entering to meet their daily caloric demands. With berries and other food forage shriveled by high temperatures, the animals have been raiding trash bins, cars and cabins with unprecedented frequency. In New York State, one black bear reportedly broke into a minivan stashed with goodies. When the door closed behind it, the bear became trapped and, in its efforts to escape, shredded the interior of the vehicle. And in June in Aspen, where searing heat has dried up the chokecherry and serviceberry crops, a female black bear with three cubs broke into at least a dozen cars in a guerrilla quest for calories.
Climate Change a Boon to English Tourism. While the subtropics burn, the higher latitudes are starting to feel just right for summer travelers. English officials expect the heat of continental Europe to be a great boon to tourism at U.K. beach towns. A document (PDF) produced by the University of Wales Swansea reports that erratic heat waves are expected to occur with increasing frequency in Europe. And whereas summers under the Greek, Spanish, Majorcan, Corsican and Tuscan suns have historically been hailed as idyllic icons of high-season tourism, replete with vineyards and wine tasting and so many pleasures Mediterranean, experts believe that Britons will increasingly stay home during the high season as southern Europe bakes under hotter and more unpleasant summers.
Global Warming at Work? Maybe. Because federal government data like this is darn hard to argue with: “June 2012 also marks the 36th consecutive June and 328th consecutive month with a global temperature above the 20th century average.”
British Winemakers Say “Cheers” to Climate Change. The unfurling story of Southern England’s new and growing wine industry also seems to leave little doubt that global warming is real. More than 400 wineries are now producing good whites and reds in what scientists assure is a steadily warming region—one which they say warmed by 3 degrees Fahrenheit from 1961 to 2006. Don’t believe them? Then just look at the vines, which are thriving where 30 years ago winemakers say they couldn’t produce decent fruit. Sure: Data can get goofed—but grapes don’t lie.
He was known as “the Great Dissenter,” and he was the lone justice to dissent in one of the Supreme Court’s most notorious and damaging opinions, in Plessy v. Ferguson in 1896. In arguing against his colleagues’ approval of the doctrine of “separate but equal,” John Marshall Harlan delivered what would become one of the most cited dissents in the court’s history.
Then again, Harlan was remarkably out of place among his fellow justices. He was the only one to have graduated from law school. On a court packed with what one historian describes as “privileged Northerners,” Harlan was not only a former slave owner, but also a former opponent of the Reconstruction Amendments, which abolished slavery, established due process for all citizens and banned racial discrimination in voting. During a run for governor of his home state of Kentucky, Harlan had defended a Ku Klux Klan member for his alleged role in several lynchings. He acknowledged that he took the case for money and out of his friendship with the accused’s father. He also reasoned that most people in the county did not believe the accused was guilty. “Altogether my position is embarrassing politically,” he wrote at the time, “but I cannot help it.”
One other thing set Harlan apart from his colleagues on the bench: He grew up in a household with a light-skinned, blue-eyed slave who was treated much like a family member. Later, John’s wife would say she was somewhat surprised by “the close sympathy existing between the slaves and their Master or Mistress.” In fact, the slave, Robert Harlan, was believed to be John’s older half-brother. Even John’s father, James Harlan, believed that Robert was his son. Raised and educated in the same home, John and Robert remained close even after their ambitions put thousands of miles between them. Both lives were shaped by the love of their father, a lawyer and politician whom both boys loved in return. And both became extraordinarily successful in starkly separate lives.
Robert Harlan was born in 1816 at the family home in Harrodsburg, Kentucky. With no schools available for black students, he was tutored by two older half-brothers. While he was still in his teens, Robert displayed a taste for business, opening a barbershop in town and then a grocery store in nearby Lexington. He earned a fair amount of cash—enough that on September 18, 1848, he appeared at the Franklin County Courthouse with his father and a $500 bond. At the age of 32, the slave, described as “six feet high yellow big straight black hair Blue Gray eyes a Scar on his right wrist about the size of a dime and Also a small Scar on the upper lip,” was officially freed.
Robert Harlan went west, to California, and amassed a small fortune during the Gold Rush. Some reports had him returning east with more than $90,000 in gold, while others said he’d made a quick killing through gambling. What is known is that he returned east to Cincinnati in 1850 with enough money to invest in real estate, open a photography business, and dabble quite successfully in the race horse business. He married a white woman, and although he was capable of “passing” as white himself, Robert chose to live openly as a Negro. His financial acumen in the ensuing years enabled him to join the Northern black elite, live in Europe for a time, and finally return to the United States to become one of the most important black men in his adopted home state of Ohio. In fact, John’s brother James sometimes went to Robert for financial help, and family letters show that Robert neither requested nor expected anything in return.
By 1870, Robert Harlan caught the attention of the Republican Party after he gave a rousing speech in support of the 15th Amendment, which guarantees the right to vote “regardless of race, color or previous condition of servitude.” He was elected a delegate to the Republican National Convention, and President Chester A. Arthur appointed him a special agent to the U.S. Treasury Department. He continued to work in Ohio, fighting to repeal laws that discriminated on the basis of race, and in 1886 he was elected as a state representative. By any measure, he succeeded in prohibitive circumstances.
John Harlan’s history is a little more complicated. Before the Civil War, he had been a rising star in the Whig Party and then the Know Nothings; during the war, he served with the 10th Kentucky Infantry and fought for the Union in the Western theater. But when his father died, in 1863, John was forced to resign and return home to manage the Harlan estate, which included a dozen slaves. Just weeks after his return, he was nominated to become attorney general of Kentucky. Like Robert, John became a Republican, and he was instrumental in the eventual victory of the party’s presidential candidate in 1876, Rutherford B. Hayes. Hayes was quick to show his appreciation by nominating Harlan to the Supreme Court the following year. Harlan’s confirmation was slowed by his past support for discriminatory measures.
Robert and John Harlan remained in contact throughout John’s tenure on the court—1877 to 1911, years in which the justices heard many race-based cases and time and again proved unwilling to interfere with the South’s resistance to civil rights for former slaves. But Harlan, the man who had once opposed the Reconstruction Amendments, began to change his views. Again and again, as when the Court ruled the Civil Rights Act of 1875 unconstitutional, Harlan was a vocal dissenter, often pounding the desk and shaking his finger at his fellow justices in eloquent harangues.
“Have we become so inoculated with prejudice of race,” Harlan asked, when the court upheld a ban on integration in private schools in Kentucky, “that an American Government, professedly based on the principles of freedom, and charged with the protection of all citizens alike, can make distinctions between such citizens in the matter of their voluntary meeting for innocent purposes simply because of their respective races?”
His critics labeled him a “weather vane” and a “chameleon” for his about-faces in instances where he’d once argued that the federal government had no right to interfere with its citizens’ rightfully owned property, be it land or Negroes. But Harlan had an answer for his critics: “I’d rather be right than consistent.”
Wealthy and accomplished, Robert Harlan died in 1897, one year after his brother made his “Great Dissent” in Plessy v. Ferguson. The former slave lived to be 81 years old at a time when the average life expectancy for black men was 32. There were no records of correspondence between the two brothers, only confirmations from their respective children of introductions to each other’s families and acknowledgments that the two brothers had stayed in contact and had become Republican allies over the years. In Plessy, the Supreme Court upheld the constitutionality of Louisiana’s right to segregate public railroad cars by race, but what John Harlan wrote in his dissent reached across generations and color lines.
The white race deems itself to be the dominant race in this country. And so it is, in prestige, in achievements, in education, in wealth, and in power. So, I doubt not, it will continue to be for all time if it remains true to its great heritage and holds fast to the principles of constitutional liberty. But in view of the Constitution, in the eye of the law, there is in this country no superior, dominant, ruling class of citizens. There is no caste here. Our Constitution is colorblind and neither knows nor tolerates classes among citizens.
In respect of civil rights, all citizens are equal before the law. The humblest is the peer of the most powerful. The law regards man as man and takes no account of his surroundings or of his color when his civil rights as guaranteed by the supreme law of the land are involved. It is therefore to be regretted that this high tribunal, the final expositor of the fundamental law of the land, has reached the conclusion that it is competent for a state to regulate the enjoyment by citizens of their civil rights solely upon the basis of race.
The doctrine of “separate but equal” persisted until 1954, when the court invalidated it in Brown v. Board of Education; during that half-century, Jim Crow laws blocked racial justice for generations. But John Harlan’s dissent in Plessy gave Americans hope. One of those Americans was Thurgood Marshall, the lawyer who argued Brown; he called it a “bible” and kept it nearby so he could turn to it in uncertain times. “No opinion buoyed Marshall more in his pre-Brown days,” said NAACP attorney Constance Baker Motley.
Books: Loren P. Beth, John Marshall Harlan: The Last Whig Justice, University Press of Kentucky, 1992. Malvina Shanklin Harlan, Some Memories of a Long Life, 1854–1911 (unpublished, 1915), Harlan Papers, University of Louisville.
Articles: A’Lelia Robinson Henry, “Perpetuating Inequality: Plessy v. Ferguson and the Dilemma of Black Access to Public and Higher Education,” Journal of Law & Education, January 1998. Goodwin Liu, “The First Justice Harlan,” California Law Review, Vol. 96, 2008. Alan F. Westin, “John Marshall Harlan and the Constitutional Rights of Negroes,” Yale Law Journal, Vol. 66, 1957. Kerima M. Lewis, “Plessy v. Ferguson and Segregation,” Encyclopedia of African American History, 1896 to the Present: From the Age of Segregation to the Twenty-First Century, Vol. 1, Oxford University Press, 2009. James W. Gordon, “Did the First Justice Harlan Have a Black Brother?” Western New England Law Review, Vol. 15, 1993. Charles Thompson, “Plessy v. Ferguson: Harlan’s Great Dissent,” Kentucky Humanities, No. 1, 1996. Louis R. Harlan, “The Harlan Family in America: A Brief History,” http://www.harlanfamily.org/book.htm. Amelia Newcomb, “A Seminal Supreme Court Race Case Reverberates a Century Later,” Christian Science Monitor, July 9, 1996. Molly Townes O’Brien, “Justice John Marshall Harlan as Prophet: The Plessy Dissenter’s Color-Blind Constitution,” William & Mary Bill of Rights Journal, Vol. 6, Issue 3, 1998.
You love wildlife. You have absolutely no interest in football. Yet, due to the idiosyncrasies of American culture, you're inevitably forced to watch exactly one football game per year: the Super Bowl.
Take heart. This year's game features two teams with animal mascots. Two rather charismatic animals, in fact. We've got you covered with 14 fun facts scientists have learned about these animals. Feel free to toss them out during a lull in the game's action.
1. There's no such thing as a "seahawk."
The Seattle franchise might spell it as one word, but biologists don't. In fact, they don't even use the term to refer to one particular species.
You could use the name sea hawk to refer to an osprey (pictured above) or a skua (itself a term that covers a group of seven related species of seabirds). Both groups share a number of characteristics, including a fish-based diet.
The Seattle Seahawks' mascot is actually an augur hawk (pictured above), not a sea hawk. (Photo by Matt Edmonds)
2. The Seattle Seahawks' "seahawk" isn't actually a sea hawk.
Before every home game, the team releases a trained bird named Taima to fly out of the tunnel before the players, lead them onto the field and get the crowd jazzed up for the game. But the nine-year-old bird is an augur hawk (also known as an augur buzzard), native to Africa, not a seafaring species that can properly be called a sea hawk.
David Knutson, the falconer who trained Taima, originally wanted an osprey for authenticity's sake, but the U.S. Fish and Wildlife Service prohibited him from using a native bird for commercial purposes. Instead, he ordered an augur hawk hatchling—which has markings roughly similar to an osprey's—from St. Louis' World Bird Sanctuary and trained it to deal with the noise and chaos of a raucous football game.
The range of the main osprey species (Pandion haliaetus), shown in blue, covers every continent except Antarctica. A different species, the eastern osprey, lives in Australia. (Image via Wikimedia Commons)
3. Ospreys live on every continent besides Antarctica.
Although they hunt over water, ospreys generally nest on land, within a few miles of either the ocean or a body of fresh water. Unlike most bird species, they are remarkably widespread, and even more surprising, nearly all of these widely dispersed ospreys (with the exception of the eastern osprey, native to Australia) belong to a single species.
Ospreys that live at temperate latitudes migrate to the tropics for the winter, before heading back to their home area for the summer breeding season. Other ospreys live in the tropics year-round, but also return to the specific nesting grounds (the same ones where they were born) each summer for breeding.
(Image via USGS)
4. Ospreys have reversible toes.
Most hawks and falcons have their talons arranged in a static pattern: three toes in the front, and one angled toward the back, as shown in the illustration on the left. But ospreys, like owls, have a flexible configuration that lets them slide an outer toe back and forth, creating a two-and-two arrangement (shown as #2). This helps them more firmly grip slippery, tube-shaped fish as they fly through the air. They also frequently turn a captured fish parallel to their direction of flight, for aerodynamic purposes.
5. Ospreys have closable nostrils.
The predatory birds typically fly between 50 and 100 feet above the water before spotting a shallow-swimming fish (such as pike, carp or trout) and diving in for the kill. To avoid getting water up their noses, they have long, slit-shaped nostrils that they can close voluntarily—one of the adaptations that allows them to consume a diet made up of 99 percent fish.
6. Ospreys usually mate for life.
After a male osprey reaches the age of three, upon returning to his natal nesting area for the summer breeding season in May, he stakes claim to a spot and begins performing an elaborate flight ritual overhead—often flying in a wave pattern while clutching a fish or nesting material in his talons—to attract a mate.
A female responds to his flight by landing at the nesting spot and eating the fish he supplies to her. Afterward, they begin building a nest together out of sticks, twigs, seaweed and other materials. Once bonded, the pair reunites every mating season for the rest of their lives (on average, they live about 30 years), only searching out other mates if one of the birds dies.
7. The osprey species is at least 11 million years old.
Fossils found in southern California show that ospreys were around in the Mid-Miocene, 15 to 11 million years ago. Although the particular species found have since gone extinct, they were recognizably osprey-like and are assigned to the same genus as modern ospreys.
8. In the Middle Ages, people believed ospreys had magical powers.
It was widely thought that if a fish looked up at an osprey, it would somehow be mesmerized by the sight of it. This would cause the fish to give itself up to the predator—a belief referenced in Act IV of Shakespeare's Coriolanus: "I think he'll be to Rome/As is the osprey to the fish, who takes it/By sovereignty of nature."
A pomarine skua, frequently called a sea hawk. (Photo by Patrick Coin)
9. Skuas steal much of their food.
Unlike ospreys, skuas (the other birds often called "sea hawks") obtain much of their fish diet through a less noble strategy: kleptoparasitism. This means that a skua will wait until a gull, tern or other bird catches a fish, then chase after it and attack it, forcing it to eventually drop its catch so the skua can steal it. They're rather brazen in their extortion attempts—in some cases, they'll successfully steal from a bird three times their weight. During the winter, as much as 95 percent of a skua's diet can be obtained through theft.
10. Some skuas kill other birds, including penguins.
Although fish makes up the majority of their diet, some skuas use their aggressiveness not only to steal the catch away from other birds, but occasionally to kill them. South Polar skuas, in particular, are notorious for attacking penguin nesting sites, snapping up penguin chicks and eating them whole.
11. Skuas will attack anything that comes near their nests, including humans.
The birds are extremely aggressive in defending their young (perhaps from seeing firsthand what happens to less protective parents, like penguins) and will dive at the head of any animal that approaches their nest. This even applies to humans, with skuas occasionally injuring people in the act of defending their chicks.
12. Sometimes, skuas will fake injuries to distract predators.
In especially desperate situations, the birds will sometimes resort to a remarkably ingenious tactic: a distraction display, in which an adult bird lures a predator away from a nest full of vulnerable skua chicks, generally by faking an injury. The predator (often a larger gull, hawk or eagle) follows the seemingly debilitated skua away from the nest, intent on obtaining a larger meal, and then the skua miraculously flies away at full strength, having saved its offspring along with itself.
13. Skuas are attentive parents.
All this aggressiveness has a reasonable justification. Skuas (which mate for life, like ospreys) are attentive parents, guarding their chicks through a 57-day fledging process each year. Fathers, in particular, take on most of the responsibility, obtaining food for the chicks daily (whether by theft or honest hunting) during the entire period.
14. Some skuas migrate from the poles to the equator each year.
Among the most remarkable of all skua behaviors is the fact that pomarine skuas, which spend the summer nesting on the Arctic tundra of northern Russia and Canada, fly all the way down to the tropical waters off Africa and Central America each winter, a journey of several thousand miles. Next time you're judging the birds for their piratical ways, remember that they're fueling up for one of the longest journeys in the animal kingdom.
Graphic designer Jessica Helfand has been fascinated with visual biography since her days as a graduate student in the late 1980s, poring over Ezra Pound’s letters and photographs in Yale’s rare book library. But the “incendiary moment,” as she calls it, which really sparked her interest in scrapbooks came in 2005, when she wrote critically of the hobby on her blog Design Observer. Helfand derided contemporary scrapbookers as “people whose concept of innovation is measured by novel ways to tie bows,” among other things, and was vilified by the craft’s enthusiasts. “I hit a nerve,” she says.
Spurred on by the rise of scrapbooking as the fastest growing American hobby, Helfand set out to study the medium, collecting, from antique stores and eBay auctions, over 200 scrapbooks dating from the beginning of the nineteenth century to the present. In the collages of fabric swatches, locks of hair, calling cards and even cigarette butts pasted on their pages, she found real artistry. Helfand’s latest book, Scrapbooks: An American History, tells the story of how personal histories, as told through the scrapbooks of civilians and celebrities, including writers Zelda Fitzgerald, Lillian Hellman, Anne Sexton and Hilda Doolittle, combine to tell American history.
What types of scrapbooks do you find the most interesting?
The more eclectic. The more insane. Scrapbooks that are pictures of just babies and cherubs or just clippings from the newspaper tend to interest me less. I like when they’re chaotic the way life is.
What are some of the strangest things you’ve seen saved in them?
Apparently it was the custom in the Victorian age for people to keep scrapbooks just of obituaries. And they are weird obituaries, like one in which a woman watches in horror as a streetcar claims the lives of her six children. Incredibly macabre, gruesome things. We have one of these books from 1894 in Ohio, and in it there is every weird obituary. “Woman lives with remains of daughter for two weeks in a farmhouse before she’s discovered.” Just one after another, and it’s pasted onto the pages of a geometry textbook.
You often see in books by college and high school girls these bizarre juxtapositions, like a picture of Rudolph Valentino next to a church prayer card, or a box of Barnum’s animal crackers pasted right next to some steamy, embracing Hollywood couple from some movie that had just come out. You could see the tension in trying to figure out who they were and what their identities were vis-à-vis these emblems of religious and popular culture. I’m a kid, but I really want to be a grownup. There’s something so dear about it.
What do you think goes through people’s minds as they paste things?
In the years just after the Civil War, there was this kind of carpe diem quality that pervaded American life. I have my own theory that one of the reasons the rise in scrapbooking has been so meteoric since 9/11 is precisely that. People keep scrapbooks and diaries more during wartime and after wartime, and famine and disease and fear. When you feel an increased sense of vulnerability, what can you do to steel yourself against the inevitable tide of human suffering but to paste something in a book? It seems silly, but on the other hand, it’s quite logical.
Scrapbooks, like diaries, can get pretty personal. Did you ever feel like you were snooping?
I took pains not to be prurient. These people aren’t here to speak for themselves anymore. It was very humbling to me to think about the people who made these things in the moments that they made them, what they were thinking, their fears and trepidations. The Lindbergh kidnapping, the Hindenburg, all these things were happening, and they were trying to make sense of it. You fall in love with these people. You can’t have emotional distance. I wanted to have some analytical distance in terms of the composition of the books, but certainly when it comes to the emotional truths that these people were living with day by day, the best I could do was just be an ambassador for their stories.
How do scrapbooks of famous and non-famous people slip through the cracks and not end up with their families?
The reason scrapbooks become separated from their families is that there are typically no children to keep them. Or it’s because the kids didn’t care. They’re old, falling apart. To a lot of people, they’re really forgettable. To me, they’re treasures.
But the other thing is the more curatorial, scholarly angle. There tends to be a very scientific, quantitative view of gathering evidence and then telling the story chronologically. These things just fly in the face of that logic. People picked them up, put them down, started over, ripped out pages. They’re so unwieldy. Typically historians are more methodical and meticulous in their research and in their compilation of stories. These things are the opposite, and so they were relegated to the bottom of the pile. They would be referenced anecdotally, but certainly not held up as really reliable historical documents. My editor tells me that there’s a greater open-mindedness toward that kind of first-person history today, so I may have written this book at a time when it can be accepted on some scholarly level in a way that it couldn’t have been 20 years before.
Image by Scrapbooks: An American History / Yale University Press. Wooden Spoon. Enloe Scrapbook, 1922.
Image by Scrapbooks: An American History / Yale University Press. Delineator. June 1931.
Image by Scrapbooks: An American History / Yale University Press. The Hair Book. Natchitoches, La., 1733.
Image by Scrapbooks: An American History / Yale University Press. Blanchard Scrapbook. Natchitoches, La., 1922.
What was it like paging through poet Anne Sexton’s scrapbook for the first time, seeing the key to the hotel room where she spent her wedding night?
It’s the most adorable, clumsy, newlywed, young, silly thing. It’s just not what you associate with her. Those kinds of moments were certainly exciting for me in terms of finding something I didn’t expect to find that was so out of sync with what the record books tell us. It was sort of like finding a little treasure, like you were going through your grandmother’s drawers and you found a stack of love letters from a man who wasn’t your grandfather. It had that kind of quality of discovery. I loved, for example, the little firecrackers from a Fourth of July party and the apology note from the first marital spat she had with her husband, the goofy handwriting, the Campbell soup recipes, things that were very much a part of 1949 to 1951. They become such portals into social, economic and material culture history.
In your book, you describe how scrapbooking has evolved. Preformatted memory books, like baby and wedding books, were more about documenting. And scrapbooking today is more about purchasing materials than using vestigial ones. Why the shift?
It shows that there is an economic incentive. If you see that there is a trend that something is happening you want to jump on the bandwagon and be part of it. My guess is that some very savvy publishers in the 1930s, ‘40s and ‘50s said they were going to make memory books that told you what to remember. That to me is very interesting because it shaped the way we started to value certain memories over others. It was good and bad; they were doing what Facebook does for us now. Facebook will change the way we think about sharing pictures and stories about our mundane lives the same way those publishers made those books and told you to save the fingerprints of your babies.
You’ve been quite vocal and critical about contemporary scrapbooking, and yet you haven’t called it “crapbooking,” as other graphic designers have. Where do you stand?
What I’ve been trying to advocate is that it’s an extremely authentic form of storytelling. You just save something, reflect on it, put it next to something else and suddenly there’s a story, instead of the story being sanctioned by pink ribbons and matching paper. I don’t say don’t go to the store and buy pretty stuff. But my fear is that a certain monotony will come out of our reliance on merchandise. How is it possible that all our scrapbooks will be beautiful because they look like Martha Stewart’s, when our lives are all so incredibly different? With so much reliance on the “stuff,” a certain authenticity is lost. I kept seeing this expression of “getting it right,” women wanting to “get it right.” Everybody made scrapbooks a hundred years ago, and people didn’t worry about getting it right. They just made things, and they were messy, incomplete and inconsistent. To me, the real therapeutic act is being who you are. You stop and you think, what was my day? I planted seeds. I went to the store. Maybe it’s really mundane, but that’s who you are, and maybe if you think about it, save it and look at it, you’ll find some truth in that that’s actually very rewarding. It’s a very forgiving canvas, the scrapbook.
As journalists, we’re all wondering whether the print newspaper and magazine will survive the digital age. Do you think the tangible scrapbook will survive the advent of digital cameras, blogs and Facebook?
I hope they won’t disappear. I personally think there is nothing that replaces the tactile—the way they smell, the way they look, the dried flowers. There’s just something really amazing about seeing a fabric sample from 1921 in a book when you haven’t ever seen a piece of fabric that color before. There’s a certain recognition about yourself and about your world when you see something that no longer exists. When it’s on the screen, it’s a little less of that immersive experience. At the same time, if there is a way to keep scrapbooking relevant, move it forward, make it be a satellite of its former self and move into some new zone and become something else, then that’s a progressive way of thinking about it moving into the next generation.