

Motown Turns 50

Smithsonian Magazine

Editor’s note: It’s been 50 years since Berry Gordy founded Motown, a record company that launched scores of careers, created a signature sound in popular music and even helped bridge the racial divide. This article first appeared in the October 1994 issue of Smithsonian; it has been edited and updated in honor of the anniversary.

It was nearly 3 A.M. but Berry Gordy couldn’t sleep. That recording kept echoing in his head, and every time he heard it he winced. The tempo dragged, the vocals weren’t perky enough, it just didn’t have the edge. Finally, he got out of bed and went downstairs to the homemade studio of his struggling record company. He grabbed the phone and rang his protégé Smokey Robinson, who had written the lyrics and sang lead with a little-known group called the Miracles: “Look, man, we’ve got to do this song again . . . now . . . tonight!” Robinson protested, reminding Gordy that the record had been distributed to stores and was being played on the radio. Gordy persisted, and soon he had rounded up the singers and the band, all except the pianist. Determined to go ahead with the session, he played the piano himself.

Under Gordy’s direction, the musicians picked up the tempo, and Robinson pepped up his delivery of the lyrics, which recounted a mother’s advice to her son on finding a loving bride: “Try to get yourself a bargain son, don’t be sold on the very first one . . . . ” The improved version of “Shop Around” was what Gordy wanted—bouncy and irresistibly danceable. Released in December 1960, it soared to No. 2 on Billboard’s pop chart and sold more than a million copies to become the company’s first gold record. “Shop Around” was the opening salvo in a barrage of smash hits in the 1960s that turned Gordy’s humble studio into a multimillion-dollar corporation and added a dynamic new word to the lexicon of American music: “Motown.”

Gordy, a Detroit native, started the company in 1959, deriving its name from the familiar moniker “Motor City.” Motown combined elements of blues, gospel, swing, and pop with a thumping backbeat for a new dance music that was instantly recognizable. Competing for teen attention primarily against records by the Beatles, who were at the height of their popularity, Motown radically altered the public’s perception of black music, which for years had been kept out of the mainstream.

White youths as well as black were captivated by the rhythmic new sound, though the musicians who produced it were black and many of the performers were teenagers from Detroit’s housing projects and rundown neighborhoods. Prodding and grooming those raw talents, Gordy transformed them into a roster of dazzling artists who stunned the pop music world. The Supremes, Mary Wells, the Temptations, the Miracles, the Contours, Stevie Wonder, the Marvelettes, Diana Ross, Marvin Gaye, Martha and the Vandellas, the Four Tops, Gladys Knight and the Pips, Michael Jackson—those were just some of the performers who had people singing and dancing all over the world.

In 1963, when I was in junior high school and completely infatuated with Motown music, I persuaded my dad to drive me past Hitsville U.S.A., which is what Gordy called the little house where he did his recording. We had just moved to Detroit from the East Coast, and the possibility of seeing some of the music makers was the only thing that soothed the pain of relocation. I was disappointed to find not one star lolling about the yard, as was rumored to happen, but a few months later my dream came true at the Motown Christmas show in downtown Detroit. A girlfriend and I queued up at the Fox Theater for an hour one chilly morning and paid $2.50 to see the revue. We rocked our shoulders, snapped our fingers, danced in our seats and sang along as act after act lit up the stage. I grew hoarse from screaming for the fancy footwork of the Temptations and the romantic crooning of Smokey Robinson. Today I still burst into song whenever I hear a Motown tune.

No longer star-struck but still awed by the company’s unparalleled success, I recently visited Gordy at his hilltop mansion in Bel-Air, an opulent enclave of Los Angeles. We settled into a stately sitting room furnished with a plump damask sofa and large armchairs. An array of black-and-white photographs of family, Motown celebrities and other stars adorned the walls. Gordy was dressed casually in an olive-green sweatsuit. His 1950s processed pompadour has given way to a graying, thinning close-cut, but he remains exuberant and passionate about his music.

Twice during our conversation he steered me to the photographs, once to point out a youthful Berry with singer Billie Holiday at a Detroit nightclub, and again to show himself with Doris Day. Brash and irrepressible, he had sent Day a copy of the very first song he had written, almost 50 years ago, certain she would record it. She did not, but Gordy still remembers the lyrics, and, without any prodding from me, rendered the ballad in his trilling tenor voice. His bearded face erupted into an impish grin as he finished. “With me you might get anything,” he chuckled. “You never know.”

He talked about his life and the music and the people of Motown, his reminiscences burbling forth—stories animated with humor, snatches of songs and imitations of instruments. He told how he shirked piano practice as a child, preferring instead to compose boogie-woogie riffs by ear, and consequently never learned to read music. He recalled how 18-year-old Mary Wells badgered him at a nightclub one evening about a song she had written. After hearing her husky voice, Gordy persuaded her to record it herself, launching Wells on a course that made her Motown’s first female star.

A music lover since his tender years, Gordy didn’t set out to build a record company. He dropped out of high school when he was a junior and spent a decade finding his niche. Born in 1929, the seventh of eight children, he inherited an entrepreneurial instinct from his father. Gordy senior ran a plastering and carpentry business and owned the Booker T. Washington Grocery Store. The family lived above the store, and as soon as the kids could see over the counter, they went to work serving customers. Young Berry hawked watermelons from his father’s truck in the summer and shined shoes on downtown streets after school. On Christmas Eve, he and his brothers would huddle around an oil-can fire selling trees until late in the evening.

After quitting school, Gordy stepped into the boxing ring, hoping to pummel his way to fame and fortune like Detroit’s Joe Louis, every black boy’s hero in the 1940s. Short and scrappy, Gordy put in a tenacious but ultimately unrewarding few years before being drafted. When he returned from the Army, where he earned his high school equivalency diploma, he opened a record store specializing in jazz. Set on attracting an urbane audience, he eschewed the earthy, foot-stomping music of singers like John Lee Hooker and Fats Domino. Ironically, it was just what his customers wanted, but Gordy was slow to catch on, and his store failed.

He found work on the Ford Motor Company assembly line, earning about $85 a week attaching chrome strips to Lincolns and Mercurys. To relieve the tedium of the job, he made up songs and melodies as the cars rolled by. In the late ’50s Gordy frequented Detroit’s black nightclubs, establishing his presence, peddling his songs and mentoring other songwriters. His big break came when he met Jackie Wilson, a flamboyant singer with matinee-idol looks who had just launched a solo career. Gordy wrote several hit songs for Wilson, including “Reet Petite,” “Lonely Teardrops” and “That’s Why (I Love You So).” It was during this time that he also met William (Smokey) Robinson, a handsome, green-eyed teenager with a mellow falsetto voice and a notebook full of songs.

Gordy helped Robinson’s group, the Miracles, and other local wannabes find gigs and studios to cut records, which they sold or leased to big companies for distribution. There wasn’t much money in it, however, because the industry regularly exploited struggling musicians and songwriters. It was Robinson who persuaded Gordy to set up his own company.

Such a venture was a major step. Ever since the dawn of the recording industry at the turn of the century, small companies, and especially black-owned companies, had found it almost impossible to compete in a business dominated by a few giants who could afford better promotion and distribution. Another frustration was the industry’s policy of designating everything recorded by blacks as “race” music and marketing it only to black communities.

By the mid-’50s the phrase “rhythm and blues” was being used to refer to black music, and “covers” of R&B music began flooding the mainstream. Essentially a remake of an original recording, the cover version was sung, in this instance, by a white performer. Marketed to a large white audience as popular, or “pop,” music, the cover often outsold the original, which had been distributed only to blacks. Elvis Presley rose to prominence on such covers as “Hound Dog” and “Shake, Rattle and Roll”; Pat Boone “covered” several R&B artists, including Fats Domino. Covers and skewed marketing for R&B music posed formidable challenges for black recording artists. To make big money, Gordy’s records would have to attract white buyers; he had to break out of the R&B market and cross over to the more lucrative pop charts.

Gordy founded Motown with $800 that he borrowed from his family’s savings club. He bought a two-story house on West Grand Boulevard, then an integrated street of middle-class residences and a sprinkling of small businesses. He lived upstairs and worked downstairs, moving in some used recording equipment and giving the house a new coat of white paint. Remembering his days on the assembly line, he envisioned a “hit factory.” “I wanted an artist to go in one door as an unknown and come out another a star,” he told me. He christened the house “Hitsville U.S.A.,” spelled out in large blue letters across the front.

Gordy didn’t start out with a magic formula for hit records, but early on a distinct sound did evolve. Influenced by many types of African-American music—jazz, gospel, blues, R&B, doo-wop harmonies—Motown musicians cultivated a pounding backbeat, an infectious rhythm that kept teenagers gyrating on the dance floor. To pianist Joe Hunter, the music had “a beat you could feel and could hum in the shower. You couldn’t hum Charlie Parker, but you could hum Berry Gordy.”

Hunter was one of many Detroit jazzmen Gordy lured to Motown. Typically, the untrained Gordy would play a few chords on the piano to give the musicians a hint of what was in his head; then they would flesh it out. Eventually, a group of those jazz players became Motown’s in-house band, the Funk Brothers. It was their innovative fingerwork on bass, piano, drums and saxophone, backed up by handclaps and the steady jangling of tambourines, that became the core of the “Motown Sound.”

Image by Michael Ochs Archives / Corbis. Famous for Motown hits like “My Girl” and “Get Ready,” the Temptations spin and glide through their polished choreography at the Apollo Theater in New York City in 1964.

Image by Associated Press. With his gift for identifying, nurturing and marketing talented musicians, Berry Gordy, a former auto assembly-line worker, turned an $800 loan into a multimillion-dollar company.

Image by Bettmann / Corbis. Though early recordings lingered at the bottom of the charts, the Supremes produced a breakout number-one hit in 1964 called “Where Did Our Love Go,” a danceable song full of foot stomps and handclaps.

Image by Apis / Sygma / Corbis. Blind from birth, singer Stevie Wonder (performing in 1963 at age 13) played drums, piano and harmonica, which featured prominently on his first hit “Fingertips (Part 2).” A winner of more than 20 Grammy awards, he still records on the Motown label.

Image by Michael Ochs Archives / Corbis. In 1960 Smokey Robinson and the Miracles recorded “Shop Around,” one of the early Motown songs that would rise to the top of record charts and help launch the young company.

Image by Bettmann / Motown. Entrants in a rural Michigan high school talent show in 1961, the Marvelettes within months had delivered Motown its first number-one single, “Please Mr. Postman.”

Adding words to the mix fell to the company’s stable of producers and writers, who were adroit at penning squeaky-clean lyrics about young love—yearning for it, celebrating it, losing it, getting it back. Smokey Robinson and the team of Lamont Dozier and brothers Eddie and Brian Holland, known as HDH, were especially prolific, churning out hit after hit chock-full of rhyme and hyperbole. The Temptations sang about “sunshine on a cloudy day” and a girl’s “smile so bright” she “could’ve been a candle.” The Supremes would watch a lover “walk down the street, knowing another love you’d meet.”

Spontaneity and creative wackiness were standard at Motown. The Hitsville house, open round the clock, became a hangout. If one group needed more backup voices or more tambourines during a recording session, someone was always available. Before the Supremes ever scored a hit, they were often summoned to provide the insistent handclapping heard on many Motown records. No gimmick was off limits. The loud thumping at the beginning of the Supremes’ “Where Did Our Love Go” is literally the footwork of Motown extras stomping on wooden planks. The tinkling lead notes on one Temptations record came from a toy piano. Little bells, heavy chains, maracas and just about anything that would shake or rattle were employed to boost the rhythm.

An echo chamber was rigged up in an upstairs room, but occasionally the microphone picked up an unintended sound effect: noisy plumbing from the adjacent bathroom. In her memoirs, Diana Ross recalls “singing my heart out beside the toilet bowl” when her microphone was put in it to achieve an echo effect. “It looked like chaos, but the music came out wonderful,” Motown saxophonist Thomas (Beans) Bowles mused recently.

Integrating symphonic strings with the rhythm band was another technique that helped Motown cross over from R&B to pop. When Gordy first hired string players, members of the Detroit Symphony Orchestra, they balked at requests to play odd or dissonant arrangements. “This is wrong, this is never done,” they’d say. “But that’s what I like, I want to hear that,” Gordy insisted. “I don’t care about the rules because I don’t know what they are.” Some musicians stalked out. “But when we started getting hits with strings, they loved it.”

The people who built Motown recall Hitsville in the early years as a “home away from home,” in the words of the Supremes’ Mary Wilson. It was “more like being adopted by a big loving family than being hired by a company,” the Temptations’ Otis Williams wrote. Gordy, a decade or so older than many of the performers, was the patriarch of the whole rambunctious bunch. When the music makers weren’t working they loafed on the front porch or played Ping-Pong, poker or a game of catch. They cooked lunch at the house—chili or spaghetti or anything that could be stretched. Meetings ended with a rousing chorus of the company song, written by Smokey Robinson: “Oh, we have a very swinging company / working hard from day to day / nowhere will you find more unity / than at Hitsville U.S.A.”

Motown was not just a recording studio; it was a music publisher, a talent agency, a record manufacturer and even a finishing school. Some performers dubbed it “Motown U.” While one group recorded in the studio, another might be working with the voice coach; while a choreographer led the Temptations through some flashy steps for a drop-dead stage routine, writers and arrangers might be banging out a melody on the baby grand. When not refining their acts, the performers attended the etiquette-and-grooming class taught by Mrs. Maxine Powell, an exacting charm school mistress. A chagrined tour manager had insisted the singers polish up their show-biz manners after witnessing one of the Marvelettes chomping a wad of gum while onstage.

Most of the performers took Mrs. Powell’s class seriously; they knew it was a necessary rung on the ladder to success. They learned everything from how to sit in and rise gracefully from a chair, to what to say during an interview, to how to behave at a formal dinner. Grimacing onstage, chewing gum, slouching and wearing brassy makeup were forbidden; at one time, gloves were mandatory for the young women. Even 30 years later, Mrs. Powell’s graduates still praise her. “I was a little rough,” Martha Reeves told me recently, “a little loud and a little undone. She taught us class and how to walk with the grace and charm of queens.”

When it came to striving for perfection, no one was tougher on the Motown crew than Gordy. He cajoled, pressured and harangued. He held contests to challenge the writers to come up with hit songs. It was nothing for him to require two dozen takes during a single recording session. He would insist on last-minute changes in stage routines; during shows, he took notes on a legal pad and went backstage with a list of complaints. Diana Ross called him “my surrogate father . . . Controller and slave driver.” He was like a tough high school teacher, Mary Wilson says today. “But you learned more from that teacher, you respected that teacher, in fact you liked that teacher.”

Gordy instituted the quality-control concept at Motown, again borrowing an idea from the auto assembly line. Once a week, new records were played, discussed and voted on by sales people, writers and producers. During the week, tension and long hours mounted as everyone hustled to create a product for the meeting. Usually, the winning tune was released, but occasionally Gordy, trusting his intuition, vetoed the staff’s choice. Sometimes when he and Robinson disagreed over a selection, they invited teenagers in to break the impasse.

In 1962, thirty-five eager music makers squeezed into a noisy old bus for Motown’s first road tour, a grueling itinerary of some 30 one-nighters up and down the East Coast. Several shows were in the South, where many of the young people had their first encounters with segregation, often being denied service at restaurants or directed to back doors. As they were boarding the bus late one night after a concert in Birmingham, Alabama, shots rang out. No one was hurt, but the bus was peppered with bullet holes. At another stop, in Florida, the group disembarked and headed for the motel pool. “When we started jumping in, everyone else started jumping out,” Mary Wilson recalls, now laughing. After discovering that the intruders were Motown singers, some of the other guests drifted back to ask for autographs. At moments like that, or when, in the frenzy of a show, black and white teenagers danced together in the aisles, the music helped bridge the racial divide.

Though Motown was a black-owned company, a few whites recorded there and several held key executive positions. Barney Ales, the white manager of Motown’s record sales and marketing, was dogged in his efforts to move the music into the mainstream—this at a time when some stores in the country would not even stock an album with African-Americans on the cover. Instead of a photograph of the Marvelettes, a rural mailbox adorns their “Please Mr. Postman” album. In 1961, the single became Motown’s first song to occupy the number-one spot on the Billboard Hot 100.

Notwithstanding Ales’ success, it was three black teenage girls from a Detroit housing project who made Motown a crossover phenomenon. Mary Wilson, Diana Ross and Florence Ballard auditioned for Gordy in 1960, but he showed them the door because they were still in school. The girls then began dropping by the studio, honoring all requests to sing background and clap on recordings. Several months later they signed a contract and started calling themselves “the Supremes.”

Over the next few years, they recorded several songs, but most withered at the bottom of the charts. Then HDH merged plaintive singsong lyrics with a chorus of “baby, baby” and a driving beat, and called it “Where Did Our Love Go.” The record catapulted the Supremes to No. 1 on the pop charts and set off a chain reaction of five No. 1 hits in 1964 and ’65, all HDH compositions.

The young women continued to live in the projects for nearly a year, but otherwise their whole world changed. A summer tour with Dick Clark and an appearance on The Ed Sullivan Show were followed by other TV spots, nightclub performances, international tours, magazine and newspaper articles, even product endorsements. They soon traded their homemade stage dresses for glamorous sequined gowns, the dusty tour bus for a stretch limousine.

With the Supremes’ slicked-up sound leading the way, Motown proceeded to blaze a trail to the top of the pop charts, keeping pace with the Beatles, the Rolling Stones and the Beach Boys. Never mind that some fans complained that the Supremes’ music was too commercial and lacked soul. Motown sold more 45 rpm records in the mid-’60s than any other company in the nation.

Capitalizing on that momentum, Gordy pushed to broaden his market, getting Motown acts into upscale supper clubs, such as New York’s Copacabana, and glitzy Las Vegas hotels. The artists learned to sing “Put On a Happy Face” and “Somewhere,” and to strut and sashay with straw hats and canes. At first they were not entirely comfortable doing the material. Ross was crushed when a Manchester, England, audience started fidgeting while the Supremes sang “You’re Nobody ‘til Somebody Loves You.” Smokey Robinson called the middle-of-the-road standards “cornball.” Others were on unfamiliar territory, as well. Ed Sullivan once introduced Smokey and the Miracles thusly: “Let’s have a warm welcome for… Smokey and the Little Smokeys!”

By 1968 Motown had exceeded all expectations and was still growing. That was the year the company set up headquarters in a ten-story building on the edge of downtown Detroit. Four years later Motown’s first movie, Lady Sings the Blues, debuted. A biopic of Billie Holiday, played by Diana Ross, the film received five Academy Award nominations. Intent on further expansion into the film industry, Gordy moved the company to Los Angeles. Robinson had tried to dissuade him with a stack of books about the San Andreas Fault, to no avail. Gordy hungered to work his magic in Hollywood.

But the move to Los Angeles was the beginning of the end of Motown music’s golden era. “It became just another big company instead of the little company that thought it could,” Janie Bradford said recently. She started as a Motown receptionist, stayed with the company 22 years and even helped Gordy write one of his early hits, “Money (That’s What I Want).” After relocating, Gordy found little time for creating music or screening records. So much was changing. Lead singers left their groups for solo careers. Some wanted more creative and financial control. Gone were the house band and the cadre of young producers. Many of the performers, now famous, were being wooed away by other recording companies; some were disgruntled about old contracts and earnings, and complained that Motown had cheated them. Lawsuits ensued. Gossip and rumor would pursue Gordy for decades as what had once been the most successful black-owned company in the country began a downward spiral.

Epilogue:

In 1988 Gordy sold Motown’s record division to MCA Records for $61 million. A few years later it was sold again to PolyGram Records. Eventually Motown merged with Universal Records and today is known as Universal Motown. Among the company’s recording artists are Busta Rhymes, Erykah Badu and Stevie Wonder.

The old Hitsville U.S.A. house in Detroit is now a museum and a popular tourist destination.

Paleoartist Brings Human Evolution to Life

Smithsonian Magazine

A smiling 3.2-million-year-old face greets visitors to the anthropology hall of the National Museum of Anthropology and History in Mexico City. This reconstruction of the famous Australopithecus afarensis specimen dubbed “Lucy” stands a mere 4 feet tall, is covered in dark hair, and displays a pleasant gaze.

She’s no ordinary mannequin: Her skin looks like it could get goose bumps, and her frozen pose and expression make you wonder if she’ll start walking and talking at any moment.

This hyper-realistic depiction of Lucy comes from the Atelier Daynès studio in Paris, home of French sculptor and painter Elisabeth Daynès. Her 20-year career is a study in human evolution—in addition to Lucy, she’s recreated Sahelanthropus tchadensis, Paranthropus boisei, Homo erectus, and Homo floresiensis, to name a few. Her works appear in museums across the globe, and in 2010, Daynès won the prestigious J. Lanzendorf PaleoArt Prize for her reconstructions.

Daynès got her start in the make-up department of a theater company, where she developed an early interest in depicting realistic facial anatomy and skin in theatrical masks. When she opened her Paris studio, she began developing relationships with scientific labs. This interest put her on the radar of the Thot Museum in Montignac, France, and in 1988, they tapped Daynès to reconstruct a mammoth and a group of people from the Magdalenian culture who lived around 11,000 years ago.

Through this initial project, Daynès found her calling. “I knew it straight away after [my] first contact with this field, when I understood how infinite [scientific] research and creativity could be,” she says.

Although her sculpting techniques continue to evolve, she still follows the same basic steps. No matter the reconstruction, Daynès always starts with a close examination of the ancient human’s skull—a defining feature for many hominid fossil groups.

Computer modeling of 18 craniometric data points across a skull specimen gives her estimates of musculature and the shape of the nose, chin, and forehead. These points guide Daynès as she molds clay to form muscles, skin and facial features across a cast of the skull. Additional bones and teeth provide more clues to body shape and stature.

The cast of an 18,000-year-old Homo floresiensis skull with cranial measurements marked with toothpicks. Using cranial measurements, the artist adds layers of clay to form muscles and skin. (Photo: © P.Plailly/E.Daynès – Reconstruction Atelier Daynès Paris)

Next, Daynès makes a silicone cast of the sculpture, a skin-like canvas on which she’ll paint complexion, beauty spots and veins. For hair, she typically uses human hair in members of the Homo genus, mixing in yak hair for a thicker effect in older hominids. Dental and eye prosthetics complete the sculpture’s form.

For hair and eye color decisions, Daynès gets inspiration from the scientific literature: for example, genetic evidence suggests that Neanderthals had red hair. She also consults with scientific experts on the fossil group at each stage of the reconstruction process.

Her first collaboration with a scientist on a reconstruction came in 1998, when she teamed up with longtime friend Jean-Noël Vignal, a paleoanthropologist and former head of the Police Forensic Research Institute in Paris, to reconstruct a Neanderthal from France’s La Ferrassie cave site. Vignal had developed the computer modeling programs used to estimate muscle and skin thickness.

Forensic sleuthing, she says, is the perfect guide: She approaches a reconstruction like an investigator profiling a murder victim. The skull, other bone remains and flora and fauna found in the excavation all help develop a picture of the individual: her age, what she ate, what hominid group she belonged to, any medical conditions she may have suffered from, and where and when she lived. More complete remains yield more accurate reconstructions. “Lucy” proved an exceptionally difficult reconstruction, spanning eight months.

Photo: © P.Plailly/E.Daynès – Reconstruction Atelier Daynès Paris. The clay model of Daynès’ reconstruction of “Toumaï,” a Sahelanthropus tchadensis skull found in Chad in 2001. One of the earliest known human ancestors, “Toumaï” lived 6 to 7 million years ago.

Photo: © E.Daynès – Reconstruction Atelier Daynès Paris. The artist’s reconstruction of Lucy, a 3.2-million-year-old female Australopithecus afarensis discovered in 1974 in Hadar, Ethiopia. Because only fragments of Lucy’s cranium were found, Daynès had to draw from the skull of another A. afarensis female (AL 417).

Photo: © P.Plailly/E.Daynès – Reconstruction Atelier Daynès Paris. A Homo habilis reconstruction by Daynès at the CosmoCaixa museum in Barcelona.

Photo: © E.Daynès – Reconstruction Atelier Daynès Paris. A reconstruction of a Paranthropus boisei made directly on the cast of a 1.8-million-year-old skull, discovered in 1959 at Olduvai Gorge in Tanzania.

Photo: © P.Plailly/E.Daynès – Reconstruction Atelier Daynès Paris. Setting up a museum exhibit, Daynès carries a hyper-realistic reconstruction of Homo georgicus. The sculpture is based on a skull (D2280) unearthed in Georgia. Scientists still debate whether Homo georgicus is a distinct species or an early form of Homo erectus.

Photo: © S. Entressangle/E.Daynès – Reconstruction Atelier Daynès Paris. A reconstruction of a male Homo erectus based on the skull Sangiran 17, the most complete Homo erectus skull found in East Asia. This hominid lived in Indonesia 1.3 to 1.0 million years ago.

Photo: © E.Daynès – Reconstruction Atelier Daynès Paris. Daynès’ reconstruction of the Sangiran 17 Homo erectus skull at an earlier stage of the artistic process.

Photo: © S. Entressangle/E.Daynès – Reconstruction Atelier Daynès Paris. Hyper-realistic reconstruction of a Homo floresiensis female based on the cast of the skull LB1, discovered in 2003 in the Liang Bua cave on the Indonesian island of Flores. This female stood about 1.06 meters high and lived around 18,000 years ago.

Photo: © S. Entressangle/E.Daynès – Reconstruction Atelier Daynès Paris. A reconstruction of a Neanderthal woman from the Saint Césaire site in France.

Photo: © S. Entressangle/E.Daynès – Reconstruction Atelier Daynès Paris. A reconstruction of an early modern human child for an exhibition on the culture behind the Lascaux cave paintings, which date to 17,300 years ago.

Photo: © P.Plailly/E.Daynès – Reconstruction Atelier Daynès Paris. Daynès puts the finishing touches on a reconstruction at her studio in Paris.

Photo: © P.Plailly/E.Daynès – Reconstruction Atelier Daynès Paris. Daynès’ studio in Paris is filled with casts for reconstructions.

Daynès synthesizes all of the scientific data about that point in hominid evolution into one sculpture, presenting a hypothesis of what the individual looked like. But the full reconstruction “is both an artistic and scientific challenge,” she says. “Reaching an emotional impact and transmitting life requires important artistic work unlike a conventional reconstruction that would be realized in a forensic laboratory,” explains Daynès.

There’s no scientific method to predict what anger or wonder or love might have looked like on the face of Homo erectus, for example. So for facial expressions, Daynès goes with artistic intuition, based on the hominid family, exhibition design, and any inspiration conjured by the skull itself.

She also turns to the expressions of modern humans: “I cut out different looks from recent photos in magazines that hit me and that I think can apply to a specific individual.” For example, Daynès modeled a Neanderthal man looking powerlessly at his companion, wounded in a hunting accident, for the CosmoCaixa Museum of Barcelona, on a Life magazine photo of two American soldiers in Vietnam.

Through these expressions and the realistic feel of the sculptures, Daynès also tries to dispel stereotypes of ancient hominids being violent, brutish, stupid, or inhuman. “I am proud to know that they will shake up common preconceptions,” Daynès says. “When this happens, the satisfaction is great—this is the promise that visitors will wonder about their origins.”

Daynès has several upcoming exhibitions at museums around the world. At the Montreal Science Centre, four of Daynès’ reconstructions of Magdalenian painters are on view through September 2014. In Pori, Finland, the Satakunta Museum features Daynès’ reconstructions of Neanderthals in an exhibition focused on the world they inhabited. Two additional exhibitions will launch later this year in Bordeaux, France, and in Chile.

The Centuries-Old History of Venice's Jewish Ghetto

Smithsonian Magazine

In March 2016 the Jewish Ghetto in Venice will celebrate its 500th anniversary with exhibitions, lectures, and the first ever production of Shakespeare’s Merchant of Venice in the Ghetto’s main square. Shaul Bassi, a Venetian Jewish scholar and writer, is one of the driving forces behind VeniceGhetto500, a joint project between the Jewish community and the city of Venice. Speaking from the island of Crete, he explains how the world’s first “skyscrapers” were built in the Ghetto; how a young Jewish poetess presided over one of the first literary salons; and why he dreams of a multicultural future that would restore the Ghetto to the heart of Venetian life again.

Venice’s Jewish Ghetto was one of the first in the world. Tell us about its history and how the geography of the city shaped its architecture.

The first Jewish ghetto was in Frankfurt, Germany. But the Venetian Ghetto was so unique in its urban shape that it became the model for all subsequent Jewish quarters. The word “ghetto” actually originated in Venice, from the copper foundry that existed here before the arrival of the Jews, which was known as the ghèto.

The Jews had been working in the city for centuries, but it was the first time that they were allowed to have their own quarter. By the standards of the time it was a significant concession, and it was negotiated by the Jews themselves. After a heated debate, on March 29, 1516, the Senate proclaimed this area the site of the Ghetto. The decision had nothing to do with modern notions of tolerance. Up until then, individual [Jewish] merchants were allowed to operate in the city, but they could not have their permanent residence there. But by ghettoizing them, Venice simultaneously included and excluded the Jews. In order to distinguish them from the Christians, they had to wear certain insignia, typically a yellow hat or a yellow badge, the exception being Jewish doctors, who were in high demand and were allowed to wear black hats. At night the gates to the Ghetto were closed, so it would become a kind of prison. But the Jews felt stable enough that, 12 years into the existence of the place, they started establishing their synagogues and congregations. The area was so small, though, that when the community started growing, the only space was upward. You could call it the world’s first vertical city.

The Jews who settled in the Ghetto came from all over Europe: Germany, Italy, Spain, Portugal. So it became a very cosmopolitan community. That mixture, and the interaction with other communities and intellectuals in Venice, made the Ghetto a cultural hub. Nearly one-third of all Hebrew books printed in Europe before 1650 were made in Venice.

Tell us about the poetess Sara Copio Sullam and the role the Ghetto in Venice played in European literature.

Sara Copio Sullam was the daughter of a wealthy Sephardic merchant. At a very young age, she became a published poet. She also started a literary salon, where she hosted Christians and Jews. This amazing woman was then silenced in the most terrible way: She was accused of denying the immortality of the soul, which was a heretical view for both Jews and Christians. The one published book we have by her is a manifesto where she denies these accusations. She had a very sad life. She was robbed by her servants and marginalized socially. She was hundreds of years ahead of her time. So one of the things we are doing next year is celebrating her achievements by inviting poets to respond to her life and works.

We can’t talk about Venice and Jewish history without mentioning the name Shylock. What are the plans for staging The Merchant of Venice in the Ghetto next year?

We’re trying to bring Shylock back by organizing the first ever performance of The Merchant of Venice in the Ghetto next year. Shylock is the most notorious Venetian Jew. But he never existed. He is a kind of ghost that haunts the place. So we’re trying to explore the myth of Shylock and the reality of the Ghetto. I don’t think that Shakespeare ever visited Venice or the Ghetto before the publication of the play in the First Quarto, in 1600. But news of the place must have reached him. The relationship between Shylock and the other characters is clearly based on a very intimate understanding of the new social configurations created by the Ghetto.

As a city of merchants and dealmakers, was Venice less hostile, less anti-Semitic to Jewish moneylending than other European cities?

The fact that Venice accepted the Jews, even if it was by ghettoizing them, made it, by definition, more open and less anti-Semitic than many other countries. England, for example, would not allow Jews on its territory at the time. Venice had a very pragmatic approach that allowed it to prosper by accepting, within certain limits, merchants from all over the world, even including Turks from the Ottoman Empire, which was Venice’s enemy. This eventually created mutual understanding and tolerance. In that sense, Venice was a multiethnic city ahead of London and many others.

Image by Ziyah Gafić. Literary scholar Shaul Bassi is leading ambitious plans to restore the vibrant cultural life of the Ghetto’s streets and canals beyond the quiet contemplation found in front of the Holocaust memorial.

Image by © Sarah Quill, Bridgeman. During World War II, around 250 Venetian Jews were deported to death camps. In 1979, Lithuanian-Jewish sculptor Arbit Blatas installed seven bas-reliefs in the Ghetto in memory of the deported.

Image by Ziyah Gafić. Despite strict rules imposed by the City Council, the Ghetto became a hub of cultural activity by the 17th century. Of roughly 4,000 Hebrew books printed in Europe up to 1650, nearly one-third were printed in Venice.

Image by Bridgeman Images. In 1434, a foundry referred to as the ghèto became too small for the military demands of the Venetian Republic and was turned into a residential area, acquiring the name Ghèto Novo.

Image by © Tarker, Bridgeman. Around 1600 Shakespeare’s The Merchant of Venice is published. There is no record of the bard having visited the city.

Image by Ziyah Gafić. A Lubavitch Jewish boy naps in a Ghetto store window. This Hasidic sect arrived 25 years ago and does missionary work.

One of the most interesting descriptions of the Ghetto was by the 19th-century American traveler William Dean Howells. What light does it shed on the changing face of the Ghetto and non-Jewish perceptions?

The first English travelers to Venice in the 17th century made a point of visiting the Ghetto. But when the grand tour becomes popular, in the late 18th century, the Ghetto completely disappears from view. Famous writers, like Henry James or John Ruskin, don’t even mention it. The one exception is Howells, who writes about the Ghetto in his book Venetian Life. He comes here when the Ghetto has already been dismantled. Napoleon has burned the gates; the Jews have been set free. The more affluent Jews cannot wait to get away from the Ghetto and buy the abandoned palazzi that the Venetian aristocracy can no longer afford. The people who remain are poor, working-class Jews. So the place Howells sees is anything but interesting.

How did the Holocaust affect the Ghetto—and the identity of Italy’s Jewish population?

When people visit the Ghetto today, they see two Holocaust memorials. Some people even think the Ghetto was created during the Second World War! The Holocaust did have a huge impact on the Jewish population. Unlike in other places, the Jews in Italy felt totally integrated into the fabric of Italian society. In 1938, when the Fascist Party, which some of them had even joined, declared them a different race, they were devastated. In 1943, the Fascists and Nazis started rounding up and deporting the Jews. But the people they found were either the very elderly, the sick, or very poor Jews who had no means of escaping. Almost 250 people were deported to Auschwitz. Eight of them returned.

Today the Ghetto is a popular tourist site. But, as you say, “its success is in inverse proportion to the … decline of the Jewish community.” Explain this paradox.

Venice has never had so many tourists and so few residents. In the past 30 years, the monopoly of mass tourism as the prime economic force in the city has pushed out half the population. In that sense the Jews are no different from others. Today the Ghetto is one of the most popular tourist destinations, with nearly a hundred thousand admissions to the synagogue and Jewish Museum per year. But it is the community that makes the Ghetto a living space, not a dead space. Fewer than 500 people actually live here, including the ultra-Orthodox Lubavitchers. They market themselves as the real Jews of Venice. But they only arrived 25 years ago. Mostly from Brooklyn! [Laughs]

You are at the center of the 500th-anniversary celebrations of the Ghetto, which will take place next year. Give us a sneak preview.

There will be events throughout the year, starting with the opening ceremony on the 29th of March 2016, at the famous Teatro La Fenice Opera House. From April to November, there will be concerts and lectures, and from June a major historical exhibition at the Doge’s Palace: “Venice, the Jews and Europe: 1516-2016.” Then, on the 26th of July, we will have the premiere of The Merchant of Venice, an English-language production with an international cast—a truly interesting experiment with the play being performed not in the theater but in the Ghetto’s main square itself.

You write that “instead of a mass tourism basking in melancholic fantasies of dead Jews, I dream of a new cultural traffic.” What is your vision for the future of Venice’s Ghetto?

“Ghetto” is a word with very negative connotations. There is a risk that Jewish visitors will see it primarily as an example of one of the many places in Europe where Jewish civilization was almost annihilated. I may sound harsh, but it could be said that people like the Jews when they are dead, but not when they are alive. The antidote, in my humble opinion, is to not only observe the past but to celebrate our culture in the present. This could be religious culture but also Jewish art and literature. Why could the Ghetto not become the site of an international center for Jewish culture? We also need more interaction between visitors and locals, so that people who come to the Ghetto experience a more authentic type of tourism. I think that is the secret to rethinking this highly symbolic space. The anniversary is not a point of arrival. It’s a point of departure.

Read more from the Venice Issue of the Smithsonian Journeys Travel Quarterly.

Nine Delicious Holiday Drinks From Around the World

Smithsonian Magazine

In the United States, the winter holidays might conjure the image of a crackling fire, wrapping paper, lit candles and the taste of warm cider, eggnog or piping-hot chocolate. These libations—iced, boozy or once-a-year delicacies—reflect the culinary traditions, weather, religion and agriculture of the places they originated. Here are nine beverages that will be served at special occasions around the globe this holiday season.

Coquito – Puerto Rico

Rum-spiked Puerto Rican coquito. (bhofack2 / iStock)

“If I go through a Christmas and I haven’t tasted coquito, it’s not Christmas,” says Roberto Berdecia, co-founder of the San Juan bars La Factoria, JungleBird and Caneca. Coquito, a cold, coconutty cousin to eggnog, is a fridge staple throughout the island’s long holiday season, which Berdecia explains starts essentially the day after Halloween and lasts until San Sebastián Street Festival fills its namesake street with art and revelry in mid-January. Most families have a passed-through-the-generations recipe, but basic ingredients include coconut cream, three types of milk (evaporated, condensed, coconut), rum (Berdecia prefers gold rum, but the drink can be made with white rum or whatever’s on hand), and cinnamon and nutmeg for flavor. At Puerto Rican holiday gatherings with family and friends, the “little coconut” drink gets raised up for toasts—¡Salud!—and served cold, either on the rocks or sans ice.

Here’s a recipe published in the Washington Post and developed by Alejandra Ramos, who runs a food blog called “Always Order Dessert.”

Kompot – Ukraine, Russia, Poland, other Slavic countries

Fruity kompot being poured in Russia. (Valery Matytsin / TASS via Getty Images)

Think jam, but drinkable: kompot, an Eastern European drink, comes from boiling fresh or dried fruits (depending on seasonal availability) with water and sugar until the fruits’ flavor suffuses the drink. “Kompot is essentially a non-carbonated and non-alcoholic juice made with real fruit,” explains Natasha Kravchuk, a Boise-based food blogger who immigrated to the U.S. at age four from Ukraine and shares recipes on her website, “Natasha’s Kitchen.” The exact taste, Kravchuk says, changes depending on the types of fruit used and how heavy-handed the cook is with the sugar, and the fruity beverage can be ladled up cold or warm, depending on whether the weather’s frosty or scorching.

Natasha’s kompot recipe strains the fruit out, but others, like this one from Kachka: A Return to Russian Cooking author Bonnie Frumkin Morales, keep the boiled fruit in. In Poland, kompot has a place among the twelve dishes traditionally served for Wigilia, the Christmas Eve dinner.

Sorrel – Jamaica

Sorrel, a hibiscus-based Christmas staple in Jamaica, has other names in other regions. (alpaksoy / iStock)

This deep-red drink comes in slightly different forms—bissap in Senegal (the drink’s roots lie in West Africa), for instance, and agua de Jamaica in Spanish-speaking countries in and near the Caribbean. In Jamaica, sorrel punch became a Christmas drink because it was during the last months of the year when hibiscus, the signature ingredient of the drink, grew, as Andrea Y. Henderson reports for NPR. Served cold, sorrel punch has notes of cinnamon, sometimes a kick from rum or wine, and other times hints of ginger or mint. One crucial ingredient for sorrel, however, is time; the flavor intensifies the longer it sits. NPR has sisters Suzanne and Michelle Rousseau’s sorrel recipe, excerpted from their cookbook Provisions: The Roots of Caribbean Cooking.

Tusu Wine – China

The emperor Qianlong, who reigned over China in the 18th century, drank tusu wine out of this gold chalice. (Larry Koester under CC BY 2.0)

This medicinal rice wine has had a place in Chinese customs since at least the fourth and fifth centuries C.E., according to the National Palace Museum in Taiwan. The name tusu is said to reference the drink’s ability to protect the drinker from ghosts. Traditionally, on New Year’s Day in China (Chinese New Year, not January 1), a family will drink tusu, imbibing in order of age, youngest to oldest, as a way to jointly wish for their relatives’ health in the coming year. This ritual departs from typical Chinese drinking customs, as a family’s eldest members usually take the first sips of a beverage. Janet Wang, author of The Chinese Wine Renaissance: A Wine Lover’s Companion, tells Smithsonian that the preparation of tusu wine is similar to mulled wine; the base rice wine is simmered with spices. The herbal blend for tusu varies regionally, Wang explains, but frequently includes pepper, cinnamon, atractylodes (a sunflower relative), Chinese bellflower, rhubarb and dried ginger. The tusu-maker would place the herbs in a red pouch for luck, soak them in a well overnight, cook the herbs with the wine and serve the resulting tusu still steaming. But you won’t have much luck finding tusu wine at a market, even in China—it “is really a historical tradition that is still preserved only in small local pockets.” In Japan, the drink is called o-toso, says Wang, adding that “tusu wine” is now a catch-all term for any old wine enjoyed for the Chinese New Year.

Palm Wine – Nigeria, Western Africa and Other Regions

Anthony Ozioko taps a 50-foot palm tree in southeastern Nigeria. (Pius Utomi Ekpei / AFP via Getty Images)

In Western Africa, being a palm tree tapper is a full-time job. Palm wine, extracted from various species of palm trees by cutting into the tree and letting its sap drip and accumulate, has long been a celebratory drink of choice in Nigeria. The “milky and powerfully sweet” beverage, as Atlas Obscura’s Anne Ewbank describes it, ferments quite rapidly thanks to naturally occurring yeast. Within hours of tapping, it reaches four percent alcohol content—the tipsy-making potential of a light beer. Soon after that, it’s fermented to the point of becoming vinegar. Palm wine goes by many names, among them emu, tombo and palmy, and often plays a role in Igbo and Yoruba weddings. “Since Christmas is an adopted holiday,” Nigerian chef Michael Adé Elégbèdé, who trained at the Culinary Institute of America and runs a test kitchen called ÌTÀN in Lagos, tells Smithsonian, “we don’t have specific food traditions affiliated to it other than the same dishes and drinks people would have generally for celebratory purposes.” Palm wine, he offers up, is a year-round festive delicacy. Because of palm wine’s blink-and-you’ll-miss-it shelf life, in-store varieties can be hard to come by on the other side of the Atlantic, but here’s a recipe for another popular Nigerian adult beverage, the sangria-esque Chapman.

Sujeonggwa – Korea

Korean cinnamon punch, known as sujeonggwa, is made using dried persimmons. (Topic Images Inc. via Getty Images)

Another fruit-based beverage, sujeonggwa gets a kick from the cinnamon, fresh ginger and dried persimmons with which it’s brewed. The drink has been around for about a millennium, and for the last century or so, it’s been linked to the New Year, according to the Encyclopedia of Korean Seasonal Customs. Koreans serve this booze-free “cinnamon punch” at the end of a meal, sprinkled with pine nuts and sometimes other touches like citrus peel or lotus petals. Here’s a recipe from YouTube Korean cooking guru Maangchi.

Salep – Turkey

Powdered orchid tubers give salep its creamy consistency. (alpaksoy / iStock)

Over 100 species of orchid grow in Turkey, and a large portion of those species can be transformed into the principal ingredient for salep. When harvested, boiled and ground up, the orchid’s tubers turn into a flour that thickens a milk-and-spice (often cinnamon, rosewater and pistachios, per Atlas Obscura) brew. You can buy the toasty drink from stands in the streets of Istanbul, at least for now—environmentalists warn that orchid harvesting poses a major threat to wild orchid populations.

Genuine salep powder might prove tricky to track down outside of Turkey, but glutinous rice flour or other starch can stand in while whipping up a batch. Özlem Warren, author of Özlem’s Turkish Table, shares her recipe here.

Cola de Mono – Chile

Cola de mono, or colemono, is a coffee-and-cinnamon-laced spiked refreshment Chileans drink for the end-of-year holidays. (LarisaBlinova / iStock)

Hailing from the northern stretches of Chile, this drink incorporates the flavors of cinnamon, cloves, vanilla, coffee and sometimes citrus into its milky base. A Chilean spirit called aguardiente made from grape residue (for those outside of South America, substitute pisco, brandy or rum) adds an alcoholic zip. The drink traditionally gets prepared the day before it’s served, chilled, to ward off the December heat in the Southern hemisphere. The story behind the spiked coffee drink’s name remains somewhat murky, but the most common version involves Pedro Montt, who served as president of Chile in the early 20th century. According to two variants of the origin story related by folklorist Oreste Plath, cola de mono—“tail of the monkey” in Spanish—comes from Montt’s nickname among friends (“El Mono”) and, depending on which tale you subscribe to, either an inventive ice cream shop owner whose concoction comforted Montt after an electoral defeat or a late-night party where Montt brought along his Colt revolver.

Chef and cultural anthropologist Maricel Presilla gave Food Network her recipe, which uses pisco and both lemon and orange peel.

Poppy Seed Milk – Lithuania

In Lithuania, Christmas Eve steals the show. Families feast on 12 dishes—12 for the number of Jesus’s apostles and the number of months in a year—that avoid using meat, dairy or alcohol. (The dietary restrictions stem from the bygone tradition of pre-Christmas fasting, as Lithuania is majority Catholic.) Along with herring and mushrooms, aguonų pienas, or poppy seed milk, has a place at that night-before-Christmas table, where empty dishes are set out for recently departed relatives. To make poppy seed milk, says Karile Vaitkute, who immigrated to the U.S. from Lithuania 25 years ago and now edits the Lithuanian Museum Review, one first takes poppy seeds (a garden bounty in her home country) and scalds them in close-to-boiling water. Then the cook pulverizes the poppy seeds using a mortar and pestle, meat grinder or other tool. “It starts giving you this whitish water, and that’s why it’s called milk,” Vaitkute explains. Sugar or honey lends the unstrained drink some sweetness. The lactose-free “milk” often accompanies crispy Christmas poppy seed biscuits known as kūčiukai. Here are recipes for both the milk and cookies from Draugas News.

This Tiny, Uninhabitable Islet in the North Atlantic Has Attracted Fishermen and Adventurers for Decades

Smithsonian Magazine

After a week at sea, fishermen round Drumanoo Head before dropping anchor in Killybegs Harbour, Ireland. Carefully, they unload their catch onto the quay, box after box of mackerel, haddock, monkfish, and squid; spindly tentacles and scaled bodies packed tightly under ice. These trawlermen have come back from the North Atlantic, where conditions are treacherous. High waves and powerful gales range across those waters even in the summer months. Protection only comes with the return to Killybegs, sheltered as it is from the worst of the storms up its narrow bay.

This geographic advantage has helped make Killybegs the largest fishing port in Ireland. Last year, its trawlermen landed almost 200,000 tonnes of fish, helping to feed a burgeoning national export market for seafood. A large part of this catch is found around 420 kilometers north in the Rockall Trough, a remote stretch of the Atlantic between Ireland, Scotland, and Iceland. Here, the fish gather in vast schools, especially near the region’s namesake pinnacle: Rockall, a tiny, uninhabited, jet-black outcrop of granite crowned by a pointillist splattering of guano.

This unassuming speck on the map was thrust into the spotlight this past summer when the Scottish government accused Irish trawlermen of overfishing in its territorial waters, before announcing that its coast guard would board any Irish fishing boat venturing into a 19-kilometer zone around the islet of Rockall. Trawlermen from the town of Killybegs, who have been casting their nets in those waters since the late 1980s, were dumbfounded.

“They find it incredible,” says Sean O’Donoghue, chief executive of the Killybegs Fishermen’s Organisation. “The attitude, certainly among my members, [has been], we are not going to take this. They can come and arrest us and we’ll fight this all the way.”

Map data by OpenStreetMap via ArcGIS

For fishermen in Killybegs—where economic activity is concentrated overwhelmingly in the harbor—any exclusion from the water around Rockall could prove economically disastrous. O’Donoghue estimates that up to a third of the town’s herring and blue whiting catch comes from the 19-kilometer area around the outcrop. What’s more, he argues, Scotland has no right to prevent Irish fishermen from plying these waters. British claim of ownership over Rockall has never been legitimate, he says. “We … as an industry, and as an Irish government, have never recognized that.”

As Edinburgh and Dublin clash in distant boardrooms, Irish trawlermen continue to drop nets around Rockall, now under the watchful eye of Scottish enforcement vessels. For the moment, the outcrop’s status remains uncertain. But with Brexit threatening to cut off access to these waters to European Union trawlermen, Killybegs’s fishing community is set to be the first casualty in a maritime legal dispute decades in the making.

***

Rockall is at least 52 million years old, the battered remnant of an extinct volcano. As high as a four-story building and slightly wider than a city bus, the seamount only began appearing on navigational charts in 1606. Early descriptions portray a familiar, if unusual, sight for mariners crossing the Atlantic. “Rokel [sic] is a solitary island … not unlike [Sule] Stack, but higher and bigger, and white from the same cause,” wrote Captain William Coats of the Hudson’s Bay Company in 1745.

That cause—namely, the abundance of guano deposited by resting gannets and guillemots—along with Rockall’s almost vertical cliffs must have put off most sailors from landing, because no one set foot on the islet until 1811, when Lieutenant Basil Hall of the HMS Endymion led a small crew in two longboats to its summit. After mistaking the islet for a ship under sail, Hall mounted the expedition because, as he later wrote, “we had nothing better on our hands.”

The trip was a waking nightmare. First came the difficult landing and ascent, complicated by a high swell and Rockall’s slippery cliffs: one false step, Hall wrote, “might have sent the explorer to investigate the secrets of the deep.” By some miracle, the crew clambered up to the summit, only for a dense fog to descend. Fearing they would lose their ship, Hall and his men hopped back onto their boats as fast as the rising swell would allow. After several hours rowing through dense mist, they made it back to the Endymion.

The Royal Navy wouldn’t return in force until 1955—this time with a helicopter, four marines, and a plaque declaring Rockall British territory to prevent it from being used as a base for the Soviet Union to spy on the United Kingdom’s missile tests. The annexation briefly fixed the islet at the forefront of British cultural imagination. Many found the episode faintly ridiculous. Satirists Michael Flanders and Donald Swann captured the public’s bemusement in a loving ditty:

We sped across the planet
To find this lump of granite
One rather startled gannet
In fact, we found Rockall.

Lord of the Flies author William Golding used the islet as a convenient, if unlikely, metaphor for the human condition. In his 1956 novel Pincher Martin, Golding’s protagonist is stranded after his ship is torpedoed, only to slowly realize that he is dead and Rockall is his purgatory.

The United Kingdom’s annexation also provoked a spree of visits from a cavalcade of nationalists and adventurers who considered the rock their personal ultima Thule. In 1975, the Dublin rock climber Willie Dick almost drowned attempting to plant the Irish tricolor on the summit, an act that grew out of the simmering outrage among Irish nationalists at Rockall’s incorporation into Inverness-shire, Scotland, three years earlier. A decade later, British Special Air Service (SAS) veteran Tom McClean sought to reaffirm British sovereignty over Rockall by becoming the first man to live on the rock. He spent 40 days huddled in a plywood box.

McClean was followed by activists from Greenpeace in 1997, who rechristened Rockall the Republic of Waveland in protest of oil and gas exploration in its surrounding waters; a group of Belgian ham radio operators in 2011 who became so violently seasick during their trip to the island that they had to return to Scotland the next day; and Englishman Nick Hancock, who holds the world occupation record of 45 days for his stay on the islet.

As the founder of the Rockall Club, membership in which is extended to anyone who has successfully landed on the islet, Hancock is probably the world’s leading expert on its history and morphology. Hancock spent most of his stay sitting and sleeping inside an adapted water tank hauled up on the islet’s flattest ledge. He remembers a windswept, barren place that stank of dead fish. Legacies from past landings, he says, were easy to find.

“There’s a couple of plaques left by the Royal Navy, and one commemorating Tom McClean’s stay,” says Hancock. On the summit lie the remnants of a light beacon installed by British military engineers in 1972, which, from the sea, resembles a subterranean hatch, and, on the side of the main ledge, a piece of half-carved graffiti left by the SAS veteran. “He got as far as Tom McCl——”

Hancock worried about getting lonely on the rock and vowed to keep himself busy. Sometimes that meant making friends with passing birds, including two pigeons and a starling. For the most part, though, Hancock spent his time reading, learning the harmonica, and conducting a series of scientific experiments, including a new measurement of Rockall’s height, which turned out to be 0.85 meters lower than previously thought. Aside from the fierce storm that cut his expedition to just three days over the existing record, the memories that stick most in his mind are of sitting under crystal blue skies, “watching gannets diving and minke whales surfacing around the rock. And you were the only person there.”

***

For fishermen like O’Donoghue, however, Rockall is less important for its natural beauty than its capacity to block access to vital fishing grounds. Their concerns are shared by the Irish government, which has never recognized the United Kingdom’s claim to Rockall.

On the face of it, Dublin’s position is supported by international maritime law, as defined by the United Nations Convention on the Law of the Sea (UNCLOS). This agreement, signed by the vast majority of the world’s governments, lays out the rules for deciding a country’s maritime territory, stating that rocks that “cannot sustain human habitation or economic life of their own shall have no exclusive economic zone or continental shelf.” However, it does permit the creation of territorial waters around such an outcrop if a country stakes a valid claim to it.

The Irish government, however, refuses to recognize the United Kingdom’s title over Rockall. This means, in turn, that the waters around Rockall are not British territory at all, but just the far reaches of the United Kingdom’s exclusive economic zone (EEZ). Since both nations are currently members of the European Union, Irish trawlermen are entitled to fish in the United Kingdom’s EEZ under the European Union’s Common Fisheries Policy. In Dublin’s eyes, therefore, Rockall should have as much bearing on fishing rights as an iceberg or a shipwreck.

The United Kingdom, of course, believes otherwise. It considers Rockall and the water around it to be British territory, and therefore exempt from the Common Fisheries Policy. It has continuously reinforced this claim through symbolic acts, from the plaques the Royal Navy fixed on the outcrop proclaiming British sovereignty to the islet’s legal incorporation into Inverness-shire in 1972.

Vintage engraving from 1862 showing Rockall, a small, uninhabited, remote rocky islet in the North Atlantic Ocean. (duncan1890/Getty Images)

Though this may not seem like much, a “symbolic act on a tiny, uninhabitable speck of land is very significant in terms of getting international ownership,” explains Clive Symmons, a professor of maritime law at Trinity College Dublin. What is actually more unusual, Symmons says, is that though the United Kingdom maintains Rockall is its territory, it has given up using the islet to further its EEZ into the North Atlantic. Typically, a country’s EEZ is calculated to extend 200 nautical miles (370 kilometers) from its claimed territory. In 1997, however, the United Kingdom unilaterally decided to pull back the starting point for this calculation from Rockall to St. Kilda, an archipelago around 180 kilometers off the Scottish mainland.

Ironically, it is the last remnant of Britain’s hold over Rockall that is proving to be the most troublesome. And with Brexit looming, this situation could deteriorate even further.

The latest version of the Political Declaration on withdrawal seeks to preserve the status quo of fishing rights until a new agreement on access is reached between London and Brussels by July 2020, a deal the Scottish Fishermen’s Federation has endorsed provided no further concessions are made in permitting its European Union rivals to fish in British waters. However, because the federation’s members consider Rockall’s waters United Kingdom territory, and therefore never subject to the Common Fisheries Policy, access to the outcrop is likely to become an object of intense negotiation.

The situation will become much simpler if Britain leaves the European Union without a deal. Since Rockall lies at the westernmost edge of the United Kingdom’s EEZ, the British government would be within its rights to throw all Irish (and all European Union) fishermen out of these waters. The inverse is also true for British fishermen in European Union waters, says O’Donoghue. “They’re not going to accept that we can put them out.”

The Killybegs association chief executive suspects the Scottish government’s newfound belligerence is an attempt to whip up support for its own nationalist party against the Conservative Party in the country’s fishing communities, a claim disputed by officials in Edinburgh. As the United Kingdom’s withdrawal date nears, few can predict whether Brexit will lead to new opportunities for British trawlermen no longer bound by the Common Fisheries Policy or clashes with their European counterparts over who can fish where, as occurred last summer when French boats rammed their British counterparts in a row over scallop stocks in the Baie de la Seine.

Other observers, however, are keen to see how the British claim to the islet might evolve under these pressures.

“The Japanese are particularly interested in Rockall,” Symmons says. Japan, too, claims ownership of an isolated rocky outcrop hundreds of kilometers from shore. While the largest islet in the Okinotorishima reef is no larger than a double bed, the Japanese government has spent an estimated US $600 million literally shoring up its island status with concrete barriers and titanium netting. Unlike the United Kingdom, though, Japan continues to claim a 200-nautical-mile EEZ around the formation, “much to the displeasure of the Chinese, [who] of course cite the UNCLOS convention,” says Symmons.

Any future horse-trading over the territoriality of a granite outcrop in the North Atlantic could, therefore, set a valuable precedent in the ongoing tussle over an artificially sheltered atoll in the western Pacific.

The capacity for Rockall to create such mischief would have been unimaginable to its first visitors in 1811. To Hall and his crew, it was nothing but a curiosity that broke the endless monotony of the North Atlantic. “The smallest point of a pencil could scarcely give it a place on any map, which should not exaggerate its proportion to the rest of the islands in that stormy ocean,” he wrote. A moot point now, perhaps.

A Fleet of Taxis Did Not Really Save Paris From the Germans During World War I

Smithsonian Magazine

On the night of September 6, 1914, as the fate of France was hanging in the balance, a fleet of taxis drove under cover of darkness from Paris to the front lines of what would become known as the Battle of the Marne. Carrying reinforcements that turned the tide of battle against the Germans, the taxi drivers saved the city and demonstrated the sacred unity of the French people.

At least, that’s the story.

Still, as we know from our own past, heroic stories about critical historic moments such as these can have but a grain of truth and tons of staying power. Think Paul Revere, who was just one of three riders dispatched the night of April 18, 1775, who never made it all the way to Concord and who never said, “The British are coming!”

Yet, his legend endures, just as it does, a century later, with the Taxis of the Marne—which really did roll to the rescue, but weren’t remotely close to being a decisive factor in the battle. That doesn’t seem to matter in terms of their popularity, even today.

“When we welcome school children to the museum, they don’t know anything about the First World War, but they know the Taxis of the Marne,” says Stephane Jonard, a cultural interpreter at the Musée de la Grande Guerre, France’s superb World War I museum, located on the Marne battlefield near Meaux, about 25 miles east of Paris.

One of the actual taxis is on exhibit in the museum, and in the animated wall map that shows the movements of troops, the arrival of reinforcements from Paris is represented by a taxi icon.

For Americans, understanding why the taxis are still fondly remembered a century later requires a better grasp of the pace of events that roiled Europe a century ago. Consider this: the event generally considered the match that ignited the already bone-dry timber of European conflict—the assassination of Austria’s Archduke Franz Ferdinand in Sarajevo—took place on June 28, 1914. A flurry of declarations of war and a domino-like series of military mobilizations followed so quickly that less than eight weeks later, German armies were already rolling through Belgium and into France, in what the German high command hoped would be a lightning strike that would capture Paris and end the war quickly.

“The Germans gambled all on a brilliant operational concept,” wrote historian Holger H. Herwig in his 2009 book, The Marne, 1914. “It was a single roll of the dice. There was no fallback, no Plan B.”

***

This early phase of the conflict that would eventually engulf much of the world was what some historians call “The War of Movement” and it was nothing like the trench-bound stalemate that we typically envision when we think of World War I.

Yet even in these more mobile operations, losses were staggering. The clash between the world’s greatest industrial and military powers at the time was fought on the cusp of different eras. Cavalry and airplanes, sword-wielding officers and long-range artillery, fife and drums and machine guns, all mixed anachronistically in 1914. “Masses of men advanced against devastatingly powerful modern armaments in the same fashion as warriors since ancient times,” writes Max Hastings in his acclaimed 2013 book Catastrophe 1914: Europe Goes To War. “The consequences were unsurprising, save to some generals.”

On August 22, 27,000 French soldiers were killed in just one day of fighting near the Belgian and French borders in what has become known as the Battle of the Frontiers. That’s more than any nation had ever lost in a single day of battle (even more infamous engagements later in World War I, such as the Battle of the Somme, never saw a one-day death tally that high).

The Battle of the Marne took place two weeks after the Battle of the Frontiers, with most of the same armies involved. At that point the Germans seemed unstoppable, and Parisians were terrified over the very real prospect of a siege of the city; their fears were hardly assuaged by the appearance of a German monoplane over the city on August 29 that lobbed a few bombs. The government decamped for Bordeaux and about a million refugees (including the writer Marcel Proust) followed. As Hastings relates in his book, a British diplomat, before burning his papers and exiting the city himself, fired off a dispatch warning that “the Germans seem sure to succeed in occupying Paris.”

Is it any wonder that the shocked, grieving and terrified citizens of France needed an uplifting story? A morale boost?

Enter Gen. Joseph Gallieni, one of France’s most distinguished military men, who had been called from retirement to oversee the defense of Paris. The 65-year-old took command with energy and enthusiasm, shoring up defenses and preparing the city for a possible siege.

 “Gallieni’s physical appearance alone commanded respect,” wrote Herwig. “Straight as an arrow and always immaculate in full-dress uniform, he had a rugged, chiseled face with piercing eyes, a white droopy mustache and a pince-nez clamped on the bridge of his nose.”

Image by © adoc-photos/Corbis. French soldiers survey their German enemies from a trench in Marne circa 1915.

Image by © Corbis. Gallieni served as the governor of French Sudan and Madagascar, in addition to serving as military governor of Paris during World War I.

Image by © Bettmann/Corbis. One of the Parisian taxis sent to reinforce the Marne sector.

Image by © Bettmann/Corbis. Villages of the Marne region were left in ruin.

An old colleague of the French commander-in-chief General Joseph Joffre, Gallieni knew what was unfolding out in the expansive farmlands around Meaux. By September 5, the German armies had reached the area, hell-bent for Paris, only 30 miles away. They were following a script developed by the German high command before the war that called for a rapid encirclement of the city and the Allied armies.

Gallieni knew that Joffre needed all the men he could get. Trains and trucks were commandeered to rush reinforcements to the front. So were taxis, which, even as early in the automobile’s history as 1914, were a ubiquitous part of Parisian life. However, of the estimated 10,000 taxis that served the city at that time, 7,000 were unavailable, in large part because most of the drivers were already in the army. Still, those that could respond, did. In some cases, whether they liked it or not: “In every street in the capital,” wrote Henri Isselin in his 1966 book The Battle of the Marne, “police had stopped taxis during working hours, turned out the passengers, and directed the vehicles towards the Military College, where they were assembled.”

While the taxis were being commandeered, an epic battle was developing east of Paris. Today, the wide open farm fields around Meaux, itself a charming medieval city, are much the way they were in 1914. Bicyclists whizz down the roads that bisect the fields and small villages, often passing memorials, mass graves and ancient stone walls still pockmarked with bullet holes. One hundred years ago, there would have been nothing bucolic or peaceful here. What was then the largest battle in history was about to be fought on this land.

***

On the night of September 6, the first group of taxis assembled on the Place des Invalides—next to the military compound in Paris’s 7th arrondissement. Many were from the G-7 cab company, which still exists today. The taxis of 1914 were Renault AG1 Landaulets. They could seat five men per vehicle, but averaged a speed of only about 20 to 25 miles per hour. With orders from the French command, the first convoy of about 250 left the plaza and headed out of the city on National Road 2. Chugging along single-file, the taxi armada crept towards the fighting, their mission still secret. They were soon joined by another fleet of cabs.

“The drivers were far from happy,” wrote Isselin. “What was the point of the nocturnal sortie? What was going to happen to them?” At first, the whole exercise seemed pointless. On September 7, the officers directing the convoy couldn’t find the troops they were supposed to transport. Somewhere outside of Paris, Hastings notes, “they sat in the sun and waited hour after hour, watching cavalry and bicycle units pass en route to the front, and giving occasional encouraging cries. ‘Vive les dragons! Vive les cyclistes!’”

Finally that night, with the rumble of artillery audible in the distance, they found their passengers: three battalions of soldiers. Yet another convoy picked up two more battalions. The troops, for the most part, were delighted to find that they would be taxied to the front. “Most had never ridden in such luxury in their lives,” Hastings writes.

Although estimates vary on the final count, by the morning of September 8, the taxis had transported about 5,000 men to areas near the front lines where troops were being assembled. But 5,000 men mattered little in a battle involving more than one million combatants. And as it turned out, most of the troops carried by taxi were held in reserve.

Meanwhile, a stunning turn of events had changed the shape of the battle.

What happened, essentially, is that one of the German generals, Alexander von Kluck, had decided to improvise from the high command’s plan. He had opted to pursue the retreating French armies, who he (and most of his fellow commanders) believed were a shattered, spent force. In doing so, he exposed his flank, while opening up a wide gap between his and the nearest German army. The white-haired, imperturbable Joffre—known to his troops as Papa—sprang into action to exploit von Kluck’s move. He counterattacked, sending his troops smashing into von Kluck’s exposed flank.

Still, the battle swung back and forth, and the French commander needed help. In a famous scene often recounted in histories of the Marne, Joffre lumbered over to the headquarters of his reluctant British allies—represented at that point in the war by a relatively small force—and personally pleaded with them to join him, reminding them, with uncharacteristic passion, that the survival of France was at stake. His eyes tearing, the usually petulant British Field Marshal Sir John French agreed. The British Expeditionary Force joined the counter-offensive.

The German high command was taken by surprise.

“It dawned on (them) at long last that the Allies had not been defeated, that they had not been routed, that they were not in disarray,” wrote Lyn MacDonald in her 1987 book on the first year of the war, 1914.

Instead, aided by reinforcements rushed to the front (although most of the ones that were engaged in the fighting came by train), Joffre and his British allies repulsed the German advance in what is now remembered as “The Miracle of the Marne.” Miraculous, perhaps, because the Allies themselves seemed surprised at their success against the German juggernaut.

“Victory, victory,” wrote one British officer. “When we were so far from expecting it!”

It came at the cost of 263,000 Allied casualties. It’s estimated that the German losses were similar.

The Taxis almost instantly became part of the Miracle—even if they didn’t contribute directly to it. “Unique in its scale and speed,” writes Arnaud Berthonnet, a historian at the Sorbonne University in Paris, “[the taxis episode] had a real effect upon the morale of both the troops and the civilian population, as well as upon the German command. More marginal and psychological than operational and militaristic in importance, this ‘Taxis of the Marne’ epic came to symbolize French unity and solidarity.”

It didn’t even seem to matter that some of the cab drivers had complained about being pressed into service; or that when the cabs returned to Paris, their meters were read and the military was sent a bill. Somehow, the image of those stately Renaults rolling resolutely towards the fighting, playing their role in the defense of Paris and the survival of their republic, filled the French with pride.

While Paris was saved, the Battle of the Marne marked the beginning of the end of the War of Movement. By the end of 1914, both sides had dug in along a front that would eventually extend from the Swiss border to the North Sea. The nightmare of trench warfare commenced, and would continue for four more years. (It would end, in part, after what is often called the Second Battle of the Marne in 1918, fought in the same region, in which American Doughboys played an important role in a decisive counter-offensive that finally broke the back of the German armies).

The memory of the Marne, and particularly its taxis, lived on. In 1957, a French writer named Jean Dutourd published a book called The Taxis of the Marne that became a best-seller in France, and was widely read in the United States as well. Dutourd’s book, however, was not really about the taxis, the battle or even World War I. It was, rather, a lament about French failings in the Second World War and a perceived loss of the spirit of solidarity that had seemed to bond civilians and soldiers in 1914. Dutourd—who, as a 20-year-old soldier, had been captured by the Nazis as they overran France in 1940—was aiming to provoke. He called the Taxis of the Marne “the greatest event of the 20th century. ... The infantry of Joffre, in the taxis of Gallieni, arrived on the Marne ... and they transformed it into a new Great Wall of China.”

Hardly, but historical accuracy wasn’t the point of this polemic. And the facts of the episode don’t seem to get in the way of the cabs’ enduring symbolic value.

So much so that school children still know about it. But at the Great War Museum, Stephane Jonard and his colleagues are quick to explain to them the truth of the taxis’ role. “What’s important,” he says, “is that, at the moment we tell them about the real impact of the taxis, we also explain to them what a symbol is.”

And a century later, there are few symbols more enduring or important in France than the Taxis of the Marne.

For information on France’s World War I museum, in Meaux: http://www.museedelagrandeguerre.eu/en

For information on tourism to Seine et Marne and Meaux: http://www.tourism77.co.uk/

Two Museum Directors Say It’s Time to Tell the Unvarnished History of the U.S.

Smithsonian Magazine

“History matters because it has contemporary consequence,” declared historian Jennifer Guiliano, explaining to an audience how stereotypes affect children of all races. “In fact, what psychological studies have found, is when you take a small child out to a game and let them look at racist images for two hours at a time they then begin to have racist thoughts.”

The assistant professor affiliated with American Indian Programs at Indiana University-Purdue University Indianapolis went on to explain what that means to parents who have taken their kid for a family-oriented excursion to a sporting event with a racist mascot.

“We’re taking children who are very young, exposing them to racist symbology and then saying ‘But don’t be a racist when you grow up,’” Guiliano says. “This is the irony of sort of how we train and educate children. When we think about these issues of bringing children up, of thinking about the impact of these things, this is why history matters.”

Guiliano was among the speakers at a day-long symposium, “Mascots, Myths, Monuments and Memory,” examining racist mascots, the fate of Confederate statues and the politics of memory. The program was held in Washington, D.C. at the Smithsonian’s National Museum of African American History and Culture in partnership with the National Museum of the American Indian.

Lonnie Bunch, the founding director of the African American History museum, says this all came about after a conversation with his counterpart Kevin Gover at the American Indian museum. Bunch says he learned that the creation of Confederate monuments and the rise of racist Indian mascots in sporting events occurred during the same period in American history, between the 1890s and 1915. This gathering was one way to help people understand the how and why behind that overlap.

“It’s all about white supremacy and racism. The notion of people, that you’re concerned about African-American and Native people, reducing them so they are no longer human,” Bunch explains. “So for African-Americans these monuments were really created as examples of white supremacy—to remind people of that status where African-Americans should be—not where African-Americans wanted to be. For Native people, rather than see them as humans to grapple with, reduce them to mascots, so therefore you can make them caricatures and they fall outside of the narrative of history.”

American Indian museum director Kevin Gover took the audience on a riveting trip through several 19th-century monuments, including four by Daniel Chester French that adorn the exterior of the 1907 Alexander Hamilton U.S. Custom House, now home to the National Museum of the American Indian in New York City. French’s sculptures, female figures representing the four continents and entitled America, Asia, Europe and Africa, send disturbing messages to the public, says Gover.

Image by David Sundberg/ESTO. Four sculptures by Daniel Chester French on the exterior of the 1907 Alexander Hamilton U.S. Custom House, now home to the National Museum of the American Indian in New York City, send disturbing messages to the public.

Image by SAAM, A. B. Bogart negative acquired by Peter A. Juley & Son. Model for The Continents: Africa by Daniel Chester French.

Image by SAAM, A. B. Bogart negative acquired by Peter A. Juley & Son. Model for The Continents: America by Daniel Chester French.

Image by SAAM, A. B. Bogart negative acquired by Peter A. Juley & Son. Model for The Continents: Asia by Daniel Chester French.

Image by SAAM, A. B. Bogart negative acquired by Peter A. Juley & Son. Model for The Continents: Europe by Daniel Chester French.

“You can see that America is rising from her chair, leaning forward, looking far into the distance. The very symbol of progress. Bold. Surging. Productive. . . . Behind America is this depiction of an Indian.  . . . . But here, what we really see is this Indian being led to civilization,” he says.

Gover describes the Europe figure as regal and confident, with an arm resting on the globe that she conquered. The figure representing Asia, he explains, is depicted as inscrutable and dangerous, resting on a throne of skulls from those murdered throughout the Asian empire. Then, there’s the female figure representing Africa.

“As you can see, Africa is asleep. It’s unclear whether she is exhausted or merely lazy. The lion to her left is also asleep. To the right is the Sphinx, which is of course in decay, indicating that Africa’s best days were behind her,” Gover says, adding that the sculptor was racist, but no more so than the rest of the American culture at that time that agreed with these stereotypes. Near the end of his career, French designed the statue of Abraham Lincoln that sits within the Lincoln Memorial, just a short walk from where the symposium was held.

Such public monuments were created in the same period that mascots came into being, such as the Cleveland Indians baseball team, which got its name in 1915. Gover notes that it is one of the few mascots that became more racist over time, culminating in the insanely grinning, red-faced Chief Wahoo. Major League Baseball has announced that, beginning next year, the team will stop using what many find to be an offensive logo on its uniforms, saying the popular symbol is no longer appropriate for use on the field.

“Racism and bigotry are not simply expressions of hate and animosity. They are instruments of broad political power,” says Ray Halbritter. (Leah L. Jones, NMAAHC)

Most universities have stopped using Native American team names, including the University of North Dakota, which changed its name from the Fighting Sioux to the Fighting Hawks in 2015.

But many other teams, including the N.F.L.’s team in Washington D.C., have resisted increasing pressure to do so. Gover has been vocal in his opposition.

Team owner Daniel Snyder has vowed never to change the name, claiming it is actually a tribute, despite a suggestion from President Barack Obama that he do so. In fact, a 2016 Washington Post poll found that nine out of ten Native Americans were not bothered by the name activists refer to as the R-word. Ray Halbritter, whose Oneida Indian Nation is the driving force behind the Change the Mascot campaign, explains why he finds the term offensive.

“Racism and bigotry are not simply expressions of hate and animosity. They are instruments of broad political power. Those with political power understand that dehumanizing different groups is a way to marginalize them, disenfranchise them, and keep them down,” Halbritter says, adding that the name originated with one of the team's previous owners, George Preston Marshall, who held segregationist views. He notes that the team was the very last to sign African-American players, and that its name remains offensive to many, but particularly to Native Americans.

“This team’s name was an epithet screamed at Native American people as they were dragged at gunpoint off their lands,” Halbritter explains. “The name was not given to the team to honor us. It was given to the team as a way to denigrate us.”

Ibram X. Kendi described what it was like arriving in Manassas, Virginia, as an African-American high school sophomore and seeing Civil War reenactors swarming to Manassas National Battlefield Park to recreate Confederate victories. (Leah L. Jones, NMAAHC)

Historian Guiliano pointed out that at the start, before 1920, colleges and universities as well as sports teams began taking on names ranging from the “Indians” to the “Warriors.” But she says they didn’t become tied to a physical mascot, performing and dancing, until the late 1920s and early 1930s.

“When you look across the nation, there’s sort of this groundswell beginning in 1926, and really by the early 1950s it proliferates everywhere,” Guiliano explains. “When those images are getting created. . . they’re doing it to create fans, to bring students to games, to get donors. But they’re drawing on a lot older imagery. . . . You can literally take one of these Indian-head images we use as mascots and you can find newspaper advertisements from the early 1800s when they’re using those symbols as advertisements for the bounties the federal government put on Indian people.”

She says the federal government had a program that offered rewards for the scalps of men, women and children, and the Indian-head symbols were signs that you could turn in a scalp there and be paid.

The movement to take down Confederate monuments is obviously mired in the pain of the memory and lingering effects of slavery, and has become more urgent as of late. Such was the case when white supremacists gathered in Charlottesville, Virginia, to protest the removal of an equestrian statue of Confederate General Robert E. Lee, a gathering that led to clashes with anti-racist protestors and the death of a woman when a white supremacist drove his car into a crowd.

The symposium’s keynote speaker, Ibram X. Kendi, an American University professor and director of the university’s Antiracist Research and Policy Center, described what it was like moving from Queens, New York, to Manassas, Virginia, as an African-American high school sophomore. He remembers tourists swarming to Manassas National Battlefield Park to relive Confederate victories. Appropriately, Kendi titled his keynote “The Unloaded Guns of Racial Violence.”

“I started to feel unsettled when people who despised my existence walked around me with unloaded guns. I knew these guns could not kill me,” Kendi explains. “But my historical memory of how many people like me these guns had killed sapped my comfort, injected me with anxiety, which sometimes went away. But most times it turned into fear of racial violence.”

He says he thought about what it felt like to be surrounded by so many Confederate monuments, and what it felt like to literally watch people cheer for mascots that are a desecration of their people. He also considered the relationship between racist ideas and racist policies.

“I found . . . that powerful people have instituted racist policies typically out of cultural, political and economic self interest. And then those policies then led to the creation of racist ideas to defend those policies,” Kendi says. “Historically, when racist ideas won’t subdue black people racial violence is oftentimes next. . . .  So those who adore Confederate monuments, those who cheer for the mascot are effectively cheering for racial violence.”

“History matters because it has contemporary consequence,” declared historian Jennifer Guiliano. (Leah L. Jones, NMAAHC)

Some at the symposium wondered whether Confederate monuments should be removed or covered, as they have been in some of the nation’s cities. But the African-American museum’s director Bunch isn’t sure that is the way to handle the controversy.

“I think as a historian of black America whose history has been erased I don’t ever want to erase history. I think you can prune history. However, I think the notion of taking down some of the sculptures is absolutely right. . . .  I also think it is important to say some of these monuments need to stand, but they need to be reinterpreted,” Bunch says. “They need to be contextualized. They need people to understand that these monuments tell us less about a Civil War and more about an uncivil peace.”

One way to do that, Bunch said, would be to place them in a park, as Budapest did after the fall of the Soviet Union. Gover doesn’t think that’s the way to go about it. But he thinks events like this one are part of a growing movement, in which such institutions take a more active role in understanding the nation’s history differently.

Asked if the symposium represented a new path forward for museums to be more involved in hot-button topics of the day, Gover agreed that museums have much to share on these issues.

“The obvious thing to me was that when you have a platform like a Smithsonian museum dedicated to the interest of Native Americans you are to use it to their advantage and tell stories in ways that are advantageous to them. I know you know Lonnie (Bunch) feels the same way about the African American museum,” Gover says. “This notion that museums and scholars and experts of all types are objective, that’s nonsense. None of us is objective and it’s just nice that now some of these institutions are able to produce excellent scholarship that tells a vastly different story from what most Americans learn.”

Gover says some museums have to live under the demand of telling a pretty story. But he thinks institutions that aren’t associated with a particular ethnic group, including the Smithsonian American Art Museum and the National Portrait Gallery, will now start moving in the same direction as the Native American and African American institutions.

“When you’ve created an American Indian and an African American museum,” Gover says with a laugh, “what Congress was really saying is, ‘Okay. Look. Tell us the truth.’”

Volume 2, appointment diary

Archives of American Art
Appointment Book : 1 v. : handwritten ; 20 x 14 cm.

One of ten appointment books ranging in date from 1946 to 1966. The books document aspects of Lovet-Lorski's daily life, such as medical appointments, dates with friends and other typical activities.

A Brief History of the Chocolate Pot

Smithsonian Magazine

Browse the aisles of any grocery store today, and you’re likely to find chocolate, and lots of it. Pastries, cakes, Hershey’s kisses and artisanal bars offer an array of choices sure to provide just the right Valentine’s Day fix.

The human love affair with chocolate extends back thousands of years, but the options for consuming chocolate weren’t always so abundant. When the Spanish first introduced the treat to Western Europe in the 17th century, there was really only one: hot chocolate. It was prepared in its very own vessel, the chocolatière, or chocolate pot.

At that time—centuries before the advent of pulverization, emulsification or any of the other industrial processes that would make chocolate widely available in its current forms—drinking hot chocolate was the easiest and tastiest way to indulge in this luxury import.

“I think that chocolate—particularly when mixed with sugar—was very readily appealing to almost any taste,” says Sarah Coffin, curator and head of the product design and decorative arts department at the Cooper Hewitt, Smithsonian Design Museum. “I suspect that tea and coffee people acquired tastes for but perhaps were a little less easy to immediately embrace.”

Preparing hot chocolate entailed a process distinctive from the other beverages popular at the time. Rather than infusing hot water with coffee grounds or tea leaves and then filtering out the sediment, hot chocolate required melting ground cacao beans in hot water, adding sugar, milk and spices and then frothing the mixture with a stirring stick called a molinet.

When Louis XIII married Anne of Austria in 1615, the queen’s enthusiasm for chocolate spread to the French aristocracy. During that early modern period, the French had refined the dining experience to the point of extravagance. In that spirit, they crafted the chocolatière, a vessel uniquely suited to preparing chocolate.

In reality, the origins of the chocolate pot date back to Mesoamerica, where traces of theobromine—the chemical stimulant found in chocolate—have been found on Mayan ceramic vessels dating back to 1400 B.C. But the chocolate pot that set the standard for Europe looked nothing like the earthenware of the Americas. It sat perched on three feet, with a tall, slender body and an ornate handle at 90 degrees from the spout. Most important was the lid, which had a delicate hinged finial, or cap, that formed a small opening for the molinet.

“It was inserted to keep the chocolate frothed and well-blended,” says Coffin of the utensil. “Because unlike the coffee I think that the chocolate tended to settle more. It was harder to get it to dissolve in the pot. So you’d need to regularly turn this swizzle stick.”

It was this hinged finial that came to define the form. “You can always tell a chocolate pot and the way you can tell is because it has a hole in the top,” says Frank Clark, master of historic foodways at the Colonial Williamsburg Foundation, who makes colonial-style chocolate—and sometimes, hot chocolate—for guests.

In the 17th and 18th centuries, chocolate pots were mostly made of silver or porcelain, the two most valuable materials of the time. “Chocolate was considered exotic and expensive,” says Coffin. “It was a rare commodity and so it was associated with luxury objects such as silver, and of course in the early days, porcelain.”

As chocolate spread throughout western Europe, each country interpreted the vessel according to its own tastes. Vienna became known for its elegant chocolate and coffee sets. Many German chocolate pots, including several in the Cooper Hewitt’s collection from the mid-to-late 18th century, featured gilded, Chinese-inspired designs known as Chinoiserie.

Image by Gift of Mrs. Edward Luckemeyer, 1912-13-1-a,b. Cooper Hewitt, Smithsonian Design Museum; Photo by Matt Flynn. A mid-18th century, enamel and glazed porcelain chocolate pot and lid manufactured by the Meissen Porcelain Factory; Meissen, Saxony, Germany.

Image by Bequest of Erskine Hewitt, 1938-57-633, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A chocolate pot attributed to Meissen, in Saxony, Germany, ca. 1735. Gilt and glazed hardpaste porcelain.

Image by Bequest of Erskine Hewitt, 1938-57-307-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Matt Flynn. A stoneware chocolate pot manufactured by Wedgwood, Staffordshire, England from the late 18th century. Molded, thrown and polished stoneware (Black Basaltware).

Image by Bequest of Erskine Hewitt, 1938-57-650-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A gilt and glazed porcelain chocolate pot, manufactured by Berlin Porcelain Factory, Berlin, Prussia, Germany, dates to around 1770.

Image by Bequest of Erskine Hewitt, 1938-57-665-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A porcelain chocolate pot, c. 1740, manufactured by the Meissen Porcelain Factory; Meissen, Saxony, Germany. Underglazed enameled, glazed and gilt hardpaste porcelain; gilt brass.

Image by Bequest of Erskine Hewitt, 1938-57-676-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Ellen McDermott. A gilt and glazed hardpaste porcelain chocolate pot manufactured by the Fürstenberg Porcelain Factory, in Lower Saxony, Germany, dates to 1780–1800.

Image by Gift of Elizabeth Taylor, 1991-11-3-a,b, Cooper Hewitt, Smithsonian Design Museum; Photo by Matt Flynn. This gilt porcelain "Healy Gold" chocolate pot was manufactured by Chryso Ceramics in Washington, D.C., ca. 1900.

“They suddenly had this new beverage and took it back to their courts. So then things were made in the different courts, so you get things made in Austrian porcelain or German porcelain and French ceramics and silver and so forth,” says Coffin.

Americans, too, had a thirst for chocolate, which they began drinking in the 1660s, soon after England acquired its own chocolate pipeline, Jamaica, in 1655. Chocolate pots weren’t often produced in the United States, but Coffin says the European imports were of extremely high quality because the wealthy people who purchased them wanted to keep up with the latest continental fashions.

In Europe and the United States, drinking hot chocolate became both a public and private practice. Around the end of the 17th century, chocolate and coffee houses cropped up that served as a meeting spot for lawyers, businessmen, and politicos well into the 18th century. In New England, Clark says those in charge of setting the price of tobacco and other important commodities were known to gather at a chocolate/coffee house to do so.

In private, chocolate was associated with the bedroom, as it was popular to drink first thing in the morning as well as in the evening before bed. A painting by French artist Jean-Baptiste Le Prince from 1769 depicts a woman lying in bed, reaching out for her departed lover, the morning light illuminating her figure. A chocolate pot and cups sit by her bedside. According to the book Chocolate: History, Culture, and Heritage by Louis E. Grivetti and Howard-Yana Shapiro, such images led to chocolate being associated with a leisurely lifestyle. This imbued the beverage with an added air of luxury.

That started to change with the Industrial Revolution. Chocolate makers developed a method of using hydraulic and steam chocolate mills to process chocolate faster and at a lower cost. In 1828, Coenraad Johannes van Houten invented the cocoa press, which removed the fat from cacao beans to make cocoa powder, the basis for most chocolate products today. Chocolate prices fell, and soon chocolate became a sweet that most everyone could afford.

The chocolate pot also evolved. Chocolate powder decreased the importance of the molinet, and chocolate pots began cropping up with finials fixed in place.

By the early 20th century, the golden age of hot chocolate had come and gone, but chocolate pots still enjoyed some popularity. In the late 19th and early 20th centuries, the Japanese had considerable success exporting porcelain chocolate pots and other wares to North America.

One example in the collections of the Freer and Sackler Galleries is a Satsuma-style porcelain chocolate pot, fired with clear glaze and decorated with a colorful array of three-dimensional enamel dots depicting a Buddhist scholar with his attendants. Ceramics curator Louise Cort says the scene is one of a few stock images commonly used at that time to cater to Western perceptions of Japanese culture.

Mineralogist A.E. Seaman purchased the piece at the 1904 World’s Fair in St. Louis. According to notes from his daughter, the family used the pot for tea rather than hot chocolate. This is not surprising; tea was growing more popular by then, and aside from the shape of the vessel, the pot has no removable finial or other feature to indicate it should be used exclusively for hot chocolate. It could easily have been used to prepare other beverages.

By the 1950s, chocolate pot production had died down. Very few, if any, are still made today, but one can still find virtually any style of chocolate pot online or in auction houses. Vessels ranging from pristine 17th-century French silver pots to Japanese Satsuma-style ware sell regularly on eBay for anywhere from $20 to $20,000.

People like Clark at Colonial Williamsburg are managing to preserve the old chocolate tradition. In his demonstrations, he roasts the actual cacao beans, separates out the hard shell, and grinds the beans into a liquid paste. When he does prepare the actual beverage, he dissolves the chocolate in a traditional chocolate pot and adds sugar and spices.

“It really represents the way chocolate was made in colonial times for the very wealthy,” Clark says.

Those interested in imbibing true hot chocolate this Valentine’s Day can easily do so. It’s not hard to find an antique chocolate set and molinet for under $100, and many stores now sell cacao nibs, bits of roasted cacao beans that have been removed from their shells. Grind the nibs in a bowl or on a chocolate stone, melt the paste in hot water, and you’ll be sipping hot chocolate in no time. (A few documented recipes from the hot chocolate heyday are also available online.)

As far as chocolate’s aphrodisiac powers go, research suggests that there’s very little validity to the lore. But all is not lost; Cort says hot chocolate would have been a worthy tool of seduction purely for the taste itself. “I suspect that… if you thought it had this [aphrodisiac] power and it was in any case sweet if you mixed a lot of sugar and vanilla with it, this would be a wonderful way to try and seduce somebody.”

How Eclipse Anxiety Helped Lay the Foundation For Modern Astronomy

Smithsonian Magazine

In August, a total solar eclipse will traverse America for the first time in nearly a century. So many tourists are expected to flood states along the eclipse’s path that authorities are concerned about illegal camping, wildfire risks and even devastating porta-potty shortages. There’s a reason for all this eclipse mania. A total solar eclipse—when the moon passes between the sun and the Earth—is a stunning natural event. For a few breathtaking minutes, day turns to night; the skies darken; the air chills. Stars may even appear.

As awe-inspiring as an eclipse can be, it can also evoke a peculiar fear and unease. It doesn’t seem to matter that science has reassured us that eclipses present no real dangers (aside from looking straight into the sun, of course): When that familiar, fiery orb suddenly winks out, leaving you in an eerie mid-day darkness, apprehension begins to creep in.

So it’s perhaps not surprising that there’s a long history of cultures thinking of eclipses as omens that portend significant, usually bad happenings. The hair-raising sense that something is “off” during these natural events has inspired a wealth of myths and rituals intended to protect people from supposed evils. At the same time, eclipse anxiety has also contributed to a deeper scientific understanding of the intricate workings of the universe—and even laid the foundation for modern astronomy.

A clay tablet inscribed in Babylonian with a ritual for the observances of eclipses. Part of the translated text reads: "That catastrophe, murder, rebellion, and the eclipse approach not... (the people of the land) shall cry aloud; for a lamentation they shall send up their cry." (Mesopotamia, third-first century B.C. Record ID: 215816. The Morgan Library & Museum)

The idea of eclipses as omens stems from a belief that the heavens and the Earth are intimately connected. An eclipse falls outside of the daily rhythms of the sky, which has long been seen as a sign that the universe is swinging out of balance. “When anything extraordinary happens in nature ... it stimulates a discussion about instability in the universe,” says astronomer and anthropologist Anthony Aveni, author of In the Shadow of the Moon: The Science, Magic, and Mystery of Solar Eclipses. Even the biblical story of Jesus connects Christ’s birth and death with celestial events: the first by the appearance of a star, the second by a solar eclipse. 

Because eclipses were considered by ancient civilizations to be of such grave significance, it was of utmost importance to learn how to predict them accurately. That meant avidly monitoring the movements of the sun, moon and stars, keeping track of unusual celestial events and using them to craft and refine calendars. From these records, many groups—the Babylonians, the Greeks, the Chinese, the Maya and others—began to tease out patterns that could be used to foretell when these events occurred.

The Babylonians were among the first to reliably predict when an eclipse would take place. By the eighth century B.C., Babylonian astronomers had a firm grasp of the pattern later dubbed the Saros cycle: a period of 6,585.3 days (18 years, 11 days, 8 hours) in which sets of eclipses repeat. While the cycle applies to both lunar and solar eclipses, notes John Dvorak, author of the book Mask of the Sun: The Science, History and Forgotten Lore of Eclipses, it’s likely they could only reliably predict lunar eclipses, which are visible to half of the planet each time they occur. Solar eclipses, by contrast, cast a narrow shadow, making it much rarer to see the event multiple times at any one place.
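The arithmetic behind the cycle is simple enough to sketch in a few lines of code. The snippet below is a minimal illustration in Python; the function name and example dates are ours rather than anything from the historical record, though the solar eclipses of August 11, 1999 and August 21, 2017 do belong to the same Saros series, one period apart. Real eclipse prediction, needless to say, involves far more than adding a constant.

```python
from datetime import datetime, timedelta

# One Saros cycle, as described above: 6,585.3 days,
# i.e., 6,585 days plus roughly 8 hours.
SAROS = timedelta(days=6585, hours=8)

def next_in_series(eclipse: datetime, steps: int = 1) -> datetime:
    """Return the approximate date of the eclipse `steps` Saros periods later."""
    return eclipse + steps * SAROS

# Example: step one Saros period forward from the August 11, 1999 eclipse.
print(next_in_series(datetime(1999, 8, 11)).date())  # 2017-08-21
```

The leftover eight hours are also why the cycle served the Babylonians for lunar but not solar eclipses: each successive eclipse in a series falls a third of a day later, shifting the narrow solar-eclipse shadow roughly 120 degrees of longitude westward and out of sight of any single observer.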

Babylonians believed that an eclipse foretold the death of their ruler, leading them to use these predictions to put kingly protections in place. During the period of time that lunar or solar eclipses might strike, the king would be replaced with a substitute. This faux ruler would be dressed and fed like royalty—but only for a brief time. According to ancient Babylonian astronomers’ inscriptions on cuneiform tablets, “the man who was given as the king’s substitute shall die and … the bad omens will not affect that [ki]ng.”

The Babylonian predictions, though accurate, were all based purely on observations, says Dvorak; as far as scholars know, they never understood or sought to understand the mechanism behind planetary motions. “It was all done on the basis of cycles,” he says. It wasn’t until 1687, when Isaac Newton published the theory of universal gravitation—which drew heavily on insights from Greek astronomers—that scientists began to truly grasp the idea of planetary motion.

This Chinese oracle bone dates from around 1300 to 1050 B.C. Bones like this were used to predict a range of natural happenings, including solar and lunar eclipses. (Freer Gallery of Art and Arthur M. Sackler Gallery)

Surviving records from the ancient Chinese make up the longest continuous account of celestial happenings. Beginning around the 16th century B.C., Chinese star-gazers attempted to read the skies and foretell natural events using oracle bones. Ancient diviners would carve questions on these fragments of tortoise shell or oxen bone, and then heat them till they cracked. Similar to the tradition of reading tea leaves, they would then seek divine answers among the spidery network of fractures.

These methods may not have been scientific, but they did have cultural value. The sun was one of the imperial symbols representing the emperor, so a solar eclipse was seen as a warning. When an eclipse was foretold to be approaching, the emperor would prepare himself by eating vegetarian meals and performing sun-rescuing rituals, while the Chinese people would bang pots and drums to scare off the celestial dragon that was said to devour the sun. This long-lived ritual is still part of Chinese lore today.

As far as accurate astronomical prediction goes, it would be centuries until Chinese methods improved. By the first century A.D., they were predicting eclipses with fair accuracy using what is known as the Tritos cycle: a period of eclipse repetition that falls one month short of 11 years. Historians debate how exactly each culture developed its own system of eclipse prediction, says Dvorak, but the similarities in their systems suggest that Babylonian knowledge may have contributed to the development of others. As he writes in Mask of the Sun, “what the Babylonians knew about eclipses was diffused widely. It moved into India and China and then into Japan.”

In ancient India, legend had it that a mythical demon named Swarbhanu once attempted to outsmart the gods and obtain an elixir that would make him immortal. Everything was going to plan, but after Swarbhanu had already received several drops of the brew, the sun and moon gods recognized the trick and told the supreme god Vishnu, who had taken the form of the beautiful maiden Mohini. Enraged, she beheaded Swarbhanu. But since the beast had already become immortal, its head lived on as Rahu and its torso as Ketu.

Today, according to the legend, Rahu and Ketu continue to chase the Sun and the Moon for revenge and occasionally gulp them down. But because Swarbhanu’s body is no longer whole, the eclipse is only temporary; the moon slides down his throat and resumes its place in the sky.

Eclipses in India were seen as a time when the gods were in trouble, says Dvorak, and to counter these omens landowners donated land to temples and priests. Along with the sun, moon and five brightest planets, Indian astronomers tracked Rahu and Ketu’s movement through the sky. In 499 A.D., the mathematician and astronomer Aryabhata included these two immortal beings, dubbed “dark planets,” in his accurate description of how eclipses occur. His geometric formulation showed that the two beasts actually represent the lunar nodes: the points in the sky where the paths of the sun and moon cross to produce a lunar or solar eclipse.

“They followed the nine wanderers up in the sky, two of them invisible,” says Dvorak. “From that, it was not a big step to predicting lunar eclipses.” By the sixth century A.D.—whether through independent invention, or thanks to help from the Babylonians—the Indians were successfully predicting eclipses.

...

Eclipse fears aren't limited to ancient times. Even in the modern era, those seeking signs of earthly meaning in the movements of the heavens have managed to find them. Astrologers note that Princess Diana’s fatal car crash occurred in the same year as a solar eclipse. An eclipse darkened England two days before King Henry I departed for Normandy; he never graced England’s shores again. In 1918, the last time an eclipse swept from coast to coast across the United States, an outbreak of influenza killed up to 50 million people worldwide and proved one of the deadliest pandemics in history.

Of course, there is no scientific evidence that the eclipse had anything to do with the outbreak, or with any of the other events. Thousands of people are born and die every day—and solar and lunar eclipses are far from rare. In any given year, up to four solar and three lunar eclipses darken the surface of the Earth. Because of this, as Dvorak writes, “it would be surprising if there were no examples of monarchs dying on or close to days of eclipses.”

In their time, ancient Babylonians weren’t trying to create the foundation of modern mathematics. But in order to predict celestial events—and thus, from their perspective, better understand earthly happenings—they developed keen mathematical skills and an extensive set of detailed records of the cosmos. These insights were later adopted and expanded upon by the Greeks, who used them to make a lasting mark on geometry and astronomy as we know it. Today, astronomers still use these extensive databases of ancient eclipses from Babylon, China and India to better understand Earth's movements through the ages.

So if you feel a little uneasy when the sun goes dark on August 21st, you’re not alone. Just remember: It was this same unease that helped create modern astronomy as we know it.

Hurry In! These Smithsonian Exhibitions Won’t Be Here Much Longer

Smithsonian Magazine

This gold and pearl hair ornament from the days of China’s Qing Dynasty shows the symbolic significance of the phoenix in Chinese culture. Come see an exhibit at the Sackler Gallery showcasing materials from the creation of Chinese artist Xu Bing’s Phoenix Project, on display until September 2.

As the weather heats up, some of the Smithsonian’s exhibits are preparing to cool down. To make way for future shows, a dozen current ones at various museums will close their doors by summer’s end, so don’t miss a chance to see some of these historic, unique, beautiful, innovative and thought-provoking exhibits. Here is a list of all exhibits closing before September 15.

Thomas Day: Master Craftsman and Free Man of Color

Thomas Day was a black man living in North Carolina before the Civil War. An expert cabinetmaker with his own business and more success than many white plantation owners, he was a freedman whose craftsmanship earned him both respect and brisk sales. His style, often described as “exuberant,” was adapted from the French Antique tradition. Step back in time to the Victorian South and view Day’s ornate cabinetry work on display. Ends July 28. Renwick Gallery.

Black Box: DEMOCRACIA

The Madrid-based artist group DEMOCRACIA created a video exploring the art of movement in a socio-political context. The film features practitioners of “parkour,” an urban street sport with virtually no rules or equipment, in which participants move quickly and efficiently through space by running, jumping, swinging, rolling, climbing and flipping. The performers are filmed practicing parkour in a Madrid cemetery, a spooky backdrop for their amazing acrobatics, which are interspersed with symbols of the working class, internationalism, anarchy, secret societies and revolution that pop up throughout the film. Ends August 4. Hirshhorn Museum.

Arts of Japan: Edo Aviary and Poetic License: Making Old Words New

The Edo period (1603-1868) marked a peaceful and stable time in Japan, but in the world of art, culture and literature, it was a prolific era. These companion exhibitions showcase great works of the Edo period that depict natural beauty as well as challenge the old social order. “Edo Aviary” features paintings of birds during that period, which reflected a shift toward natural history and science and away from religious and spiritual influence in art. “Poetic License: Making Old Words New” showcases works demonstrating how the domain of art and literature transitioned from wealthy aristocrats to one more inclusive of artisans and merchants. Ends August 4. Freer Gallery.

Up Where We Belong: Native Musicians in Popular Culture

This exhibit, held at the American Indian Museum’s Gustav Heye Center in New York City, explores the significant contributions of Native Americans to contemporary music. From Jimi Hendrix (he’s part Cherokee) to Russell “Big Chief” Moore of the Gila River Indian Community to Rita Coolidge, a Cherokee, and Buffy Sainte-Marie, a Cree, Native Americans have had a hand in creating and influencing popular jazz, rock, folk, blues and country music. Don’t miss your chance to see the influence of Native Americans in mainstream music and pop culture. Ends August 11. American Indian Museum in New York.

Nam June Paik: Global Visionary

This exhibition of works by the innovative Korean-American artist Nam June Paik, whose bright television screens and electronic devices helped bring modern art into the technological age during the 1960s, includes 67 pieces of artwork and 140 other items from the artist’s archives. Ends August 11. American Art Museum.

Hand-held: Gerhard Pulverer’s Japanese Illustrated Books

Come to the Sackler Gallery and learn about the Japanese precursor to today’s electronic mass media: the woodblock-printed books of the Edo period. The books brought art and literature to the masses in compact and entertaining volumes that circulated throughout Japan, passed around much like today’s Internet memes. The mixing of art with mass consumption helped to bridge the gap between the upper and lower classes in Japan, a hallmark of the Edo period’s progress. The exhibit features books in a variety of genres, from the action-packed to the tranquil, including sketches from the famous woodblock printer Hokusai’s Manga (no relation to the Japanese comics phenomenon of today). Ends August 11. Sackler Gallery.

Portraiture Now: Drawing on the Edge

In this seventh installation of the “Portraiture Now” series, view contemporary portraits by artists Mequitta Ahuja, Mary Borgman, Adam Chapman, Ben Durham, Till Freiwald and Rob Matthews, each exploring different ways to create such personal works of art. From charcoal drawings and acrylic paints to video and computer technology, these artists each use their own style to preserve a face and bring it to life for viewers. Ends August 18. National Portrait Gallery.

I Want the Wide American Earth: An Asian Pacific American Story

Celebrate Asian Pacific American history at the American History Museum and view posters depicting Asian American history in the United States ranging from the pre-Columbian years to the present day. The exhibit explores the role of Asian Americans in this country, from Filipino fishing villages in New Orleans in the 1760s to Asian-American involvement in the Civil War and later in the Civil Rights Movement. The name of the exhibit comes from the famed Filipino American poet Carlos Bulosan, who wrote, “Before the brave, before the proud builders and workers, / I say I want the wide American earth / For all the free . . .” Ends August 25. American History Museum.

A Will of Their Own: Judith Sargent Murray and Women of Achievement in the Early Republic

This exhibit features a collection of eight portraits of influential women in American history, though you may not know all their names. Long before the Women’s Rights Movement, they questioned their status in a newly freed America by fighting for equal rights and career opportunities. Come see the portraits of these forward-thinking pioneers, among them Judith Sargent Murray, Abigail Smith Adams, Elizabeth Seton and Phillis Wheatley. Ends September 2. National Portrait Gallery.

Nine Deaths, Two Births: Xu Bing’s Phoenix Project

Take a peek into the creative world of Chinese artist Xu Bing in this exhibition showcasing materials Bing used to create his massive sculpture Phoenix Project, which all came from construction sites in Beijing. The two-part installation, weighing 12 tons and extending nearly 100 feet long, features the traditional Chinese symbol of the phoenix, but the construction materials add a more modern message about Chinese economic development. While Phoenix Project resides at the Massachusetts Museum of Contemporary Art, the Sackler’s companion exhibition displays drawings, scale models and reconfigured construction fragments. Ends September 2. Sackler Gallery.

Whistler’s Neighborhood: Impressions of a Changing London

Stroll through the London of the 1800s in this exhibit featuring works by painter James McNeill Whistler, who lived in the Chelsea neighborhood and documented its transformation. Whistler witnessed the destruction of historic, decaying buildings to make way for mansions, a new riverbank and a wave of elite residents. By depicting the neighborhood throughout the transition, Whistler recorded an important part of London’s history. The exhibit features small etchings, watercolors and oil paintings of scenes in Chelsea during the 1880s. Ends September 8. Freer Gallery.

Over, Under, Next: Experiments in Mixed Media, 1913 to the Present

From Picasso to Man Ray to present-day sculptor Doris Salcedo, many of the most innovative and prolific modern artists have set aside paintbrush and canvas to embrace mixed media. View works by artists from all over the world and trace the evolution of collage and assemblage over the last century. Featured in this exhibit are a tiny Joseph Stella collage made with scraps of paper and Ann Hamilton’s room-sized installation made of newsprint, beeswax tablets and snails, among other things. Ends September 8. Hirshhorn Museum.

This charcoal portrait of Merwin (Merf) Shaw by Mary Borgman hangs in the National Portrait Gallery as part of the “Portraiture Now: Drawing on the Edge” series. The exhibit features portraits created in a variety of media, exploring different ways to make such personal works of art. Image courtesy of Mary Borgman

How His'n'Her Ponchos Became A Thing: A History Of Unisex Fashion

Smithsonian Magazine

At this year’s Coachella music festival 16-year-old Jaden Smith, the scion of Hollywood royalty Will Smith and Jada Pinkett-Smith, wore a floral-print tunic and a rose-flower crown. The pairing is so standard it's a festival cliché, yet Jaden's outfit made waves online. First, because he's a celebrity in his own right, and second, because he's a boy. "Coolest of cool teens Jaden Smith sails far beyond gender norms," gushed Racked. "Who Wore it Better? Jaden Smith vs. Paris Hilton" quipped TMZ.

There was a time when such an ensemble wouldn’t have turned so many heads. Between 1965 and 1975, gender bending infiltrated American life as part of a movement called "unisex." As Jo Paoletti writes in a new book, Sex and Unisex: Fashion, Feminism, and the Sexual Revolution, the term was first used in the mid-'60s to describe salons catering to girls and guys who wanted similar haircuts—long and unkempt. By the mid '70s, it was a social phenomenon, creeping up in debates about childrearing, the workplace, military conscription and yes, bathrooms.

Fashion is what got it there. The New York Times first used the word “unisex” in a 1968 story about chunky “Monster” shoes, and it came up five more times before the year was over. Department stores and catalogs created new sections of his’n’her clothing, advertised by couples in matching lace bell-bottoms and burnt-orange button downs. In 1968, one Chicago Tribune columnist described a common predicament in the “age of unisex”: "'Is it a boy or a girl?' Are you inquiring about a new born child? You are not. You are asking your wife to declare the sex of the unidentified object passing a few feet in front of you. She doesn't know either."


Unisex wasn't just about confusing old people, though. As Paoletti explains, it came to act as a catch-all for various movements that broke with traditional feminine and masculine styles. For instance, during the "peacock revolution" of the late '60s, men wore Edwardian shirts and tight pants in flamboyant patterns and colors. Also in that decade, designer Rudi Gernreich created futuristic, androgynous styles like a topless bathing suit for women and “No-Bra Bras” without underwire or padding. In the '70s, unisex clothing took the form of matching patchwork denim sets and fleece "loungewear" for the whole family.

Look at old catalog photos of happy families in coordinated separates, and you'll start to understand how unisex made the leap from fashion to childrearing debates. In the early '70s, ungendered parenting became a hot topic among progressive families. Abandoning pink and blue, many thought, could quash sexism in children before it took hold. "X: A Fabulous Child's Story," published in Ms. in 1972, tells of a baby whose parents keep its sex a secret from the world. As X grows up and attends school, rather than becoming an outcast, it becomes a role model: “Susie, who sat next to X in class, suddenly refused to wear pink dresses to school anymore... Jim, the class football nut, started wheeling his little sister's doll carriage around the football field.”

Image by © GoldenEye / London Entertainment/Splash News/Corbis. Jaden Smith, son of actor Will Smith, wore a dress during the second week of the Coachella Music Festival in Indio, California, this year.

Image by © Bettmann/CORBIS. Unisex clothing became a fashion trend in the late '60s and early '70s. An example of the trend is this shirt combo from sportswear designer Sir Bonser. Both models are fashioned in a bright floral print—Rome, July 1969.

Image by © Bettmann/CORBIS. Fashion designer Rudi Gernreich poses with two models dressed in his futuristic, unisex designs—Los Angeles, January 1970.

Image by © dpa/dpa/Corbis. Matching clothes for him and her in 1970s Germany: The shirt and dress on these models are made from the same material.

Image by © Hulton-Deutsch Collection/CORBIS. A couple sports unisex, fawn-colored, worsted flannel, hot pants and braces worn with pink, woolen, roll neck jumpers—London, March 1971.

Ultimately, Paoletti interprets unisex fashion as a reflection of political and social upheaval. As the feminist movement gained steam and women fought for equal rights, their clothing became more androgynous. Men, meanwhile, discarded grey flannel suits—and the restrictive version of masculinity that came with them—by appropriating feminine garments. Both genders, she argues, were questioning the idea of gender as fixed. This didn't unfold without controversy. The era saw a litany of lawsuits around institutional dress codes, including 73 on the issue of long hair on boys between 1965 and 1978. In liberal states like Vermont, courts tended to rule in favor of students, while in states like Alabama and Texas, they sided with schools. For Paoletti, this is evidence that the questions raised by the sexual revolution and the feminist movement were never resolved, ensuring that the debates around transgender identity, contraception, and gay marriage would still be active today.

Unisex fashion waned in the mid-to-late '70s. Workers struggling to land jobs in a weak economy sought a more conservative style, Paoletti argues, bringing back suits for men and inspiring Diane Von Furstenberg wrap dresses for women. Certain unisex elements lingered—pants for women, for instance. In other areas, like children's clothing, dressing became gendered to the extreme. In Paoletti's opinion, rigidly gendered clothes box us into categories that might not fit our true selves. "In an exercise in aspirational dressing, consider the possibilities if our wardrobes reflected the full range of choices available to each of us," she writes in the book’s last chapter. "Imagine that we dressed to express our inner selves and our locations not as fixed but as flexible."

What's ironic is that Paoletti herself analyzes fashion not as individual expression, but as collective political speech. At one point, she quotes journalist Clara Pierre, who commented wistfully (and prematurely) in 1976 that "clothes no longer have to perform the duty of [sexual] differentiation and can relax into just being clothes." Paoletti claims to share Pierre’s hope, yet her book never allows clothes to “relax” in such a manner. Rather, they are reflections of or rebellions against gender binaries. At times, Paoletti seems frightened of the prospect of clothes without subtext. "The fashion industry has spent billions of dollars convincing us that fashion is frivolous," she writes in the introduction. "Yes, fashion is fun, but clothing is also bound up with the most serious business we do as humans: expressing ourselves as we understand ourselves."

In reality, clothing communicates information not just about gender, but also about race, class, age, workplace, personality, sense of humor, social media habits, or musical tastes. Used in combination, its messages—serious and frivolous—lead to creative, original style. Of course, it would be impossible for a single book to consider the myriad identities expressed through dress. Paoletti acknowledges that her book bypasses, for instance, the influence of race on ‘60s and ‘70s fashion, when the Black Power movement helped popularize natural hairstyles. For reasons of clarity, she says, she limited her approach to gender—specifically, gender as expressed through middle-class, mainstream style.

Paoletti's scope, while restrictive, is also refreshing. To study fashion via the masses is rare. Much of fashion scholarship and criticism focuses on luxury designers, or else subcultural groups like punk, rave, or, most recently, normcore. Fashion isn't just a byproduct of mass social movements, as Paoletti analyzes it—but neither is it the confection of a few aesthetic geniuses, as it's often portrayed.

Of course, it's possible to dress originally and make a statement about gender. Which brings us back to Jaden Smith. In the weeks before Coachella, he posted this Instagram caption: "Went To Top Shop To Buy Some Girl Clothes, I Mean 'Clothes.'" Unisex is alive and well, it seems. If only Willow, Jada, and Will would don matching tunics and flower crowns for a family portrait, it would be an all-out revival.

14 Fun Facts About Piranhas

Smithsonian Magazine

Biting has played an unusually dominant role in this year’s World Cup conversations. But Luis Suarez is hardly the most feared biter in South America. The continent is home to the ultimate biters: piranhas.

Piranhas have never had the most darling of reputations. Just look at the 1978 cult film Piranha, in which a pack of piranhas escape a military experiment gone wrong and feast on unsuspecting lake-swimmers. Or the 2010 remake, where prehistoric piranhas devour humans in 3D detail.  

Then or now, Hollywood certainly hasn’t done the piranha any favors. But are these freshwater fish the vicious river monsters they’re made out to be? Not exactly.

Piranhas do indeed have sharp teeth, and many are carnivorous. But there’s a lot of diet variation among species—one reason piranhas have proved hard to classify taxonomically. Piranha species are also hard to tell apart by diet, coloration, teeth and even geographic range. This lack of knowledge adds a bit of dark mystery to the creatures.

Sure, they're not cute and cuddly. But they may be misunderstood, and scientists are rewriting the piranha’s fearsome stereotype. Here are 14 fun facts about the freshwater fish:

1. Piranhas’ bad reputation is at least partially Teddy Roosevelt’s fault

When Theodore Roosevelt journeyed to South America in 1913, he encountered, among other exotic creatures, several different species of piranha. Here’s what he had to say about them in his bestseller, Through the Brazilian Wilderness:

“They are the most ferocious fish in the world. Even the most formidable fish, the sharks or the barracudas, usually attack things smaller than themselves. But the piranhas habitually attack things much larger than themselves. They will snap a finger off a hand incautiously trailed in the water; they mutilate swimmers—in every river town in Paraguay there are men who have been thus mutilated; they will rend and devour alive any wounded man or beast; for blood in the water excites them to madness. They will tear wounded wild fowl to pieces; and bite off the tails of big fish as they grow exhausted when fighting after being hooked.”

Roosevelt went on to recount a tale of a pack of piranhas devouring an entire cow. According to Mental Floss, locals put on a bit of a show for Roosevelt, extending a net across the river to catch piranhas before he arrived. After storing the fish in a tank without food, they tossed a dead cow into the river and released the fish, which naturally devoured the carcass.

A fish that can eat a cow makes for a great story. Given that Roosevelt was widely read, it’s easy to see how the piranha’s supervillain image spread. 

Scientists and explorers had knowledge of piranhas dating back to the 16th century, but Roosevelt’s tale is largely credited with dispersing the myth. Dated 1856, this sketch by French explorer Francis de Castelnau depicts a red-bellied piranha. (Wikimedia Commons/Francis de Castelnau)

2. Piranhas have lived in South America for millions of years

Today, piranhas inhabit the freshwaters of South America from the Orinoco River Basin in Venezuela down to the Paraná River in Argentina. Though estimates vary, around 30 species live in the continent’s lakes and rivers. Fossil evidence puts piranha ancestors in those rivers 25 million years ago, but modern piranha genera may have only been around for 1.8 million years.

A 2007 study suggests that modern species diverged from a common ancestor around 9 million years ago. Around 5 million years ago, the Atlantic Ocean rose, expanding into the floodplains of the Amazon and other South American rivers. That high-salt environment would have been inhospitable to freshwater fish like piranhas, but some likely escaped upriver to higher altitudes. Genetic analysis suggests that piranhas living above 100 meters in the Amazon have only been around for 3 million years.

3. Piranhas found outside South America are usually pets on the lam

Piranhas attract a certain type of pet lover, and sometimes, when the fish gets too large for its aquarium, said pet lover decides it’s much better off in the local lake. In this manner, piranhas have shown up in waterways around the globe, from Great Britain to China to Texas. It’s legal to own a piranha in some areas, but it’s never a good idea to release one into the wild, where the species could become invasive.

4. Piranha teeth are pretty intense but replaceable

Piranhas are known for their razor-sharp teeth and relentless bite. (The word piranha literally translates to “tooth fish” in the Brazilian language Tupí.) Adults have a single row of interlocking teeth lining the jaw. True piranhas have tricuspid teeth, with a more pronounced middle cuspid or crown, about 4 millimeters tall.

The shape of a piranha’s tooth is frequently compared to that of a blade and is clearly adapted to suit their meat-eating diet. The actual tooth enamel structure is similar to that of sharks.

It’s not uncommon for piranhas to lose teeth throughout their lifetime. But, while sharks replace their teeth individually, piranhas replace teeth in quarters multiple times throughout their lifespan, which reaches up to eight years in captivity. A piranha with half of its lower jaw chompers missing isn’t out of the ordinary.

The jaw bone of a red-bellied piranha (Pygocentrus nattereri) specimen. (Wikimedia Commons/Sarefo)

5. A strong bite runs in the family

Though they are hardly as menacing as fiction suggests, piranhas do bite with quite a bit of force. In a 2012 study in Scientific Reports, researchers found that black (or redeye) piranhas (Serrasalmus rhombeus)—the largest of modern species—bite with a maximum force of 72 pounds (that’s three times their own body weight).

Using a tooth fossil model, they found that piranhas' 10-million-year-old extinct ancestor, Megapiranha paranensis, had a jaw-tip bite force—the force that the jaw muscles can exert through the very tip of the jaw—as high as 1,068 pounds. For reference, M. paranensis weighed only about 10 kilograms (22 pounds) when alive, so that’s roughly 50 times the animal’s body weight.

Science notes that T. rex’s estimated bite force is three times higher than that of this ancient piranha—but the king of the dinosaurs also weighed a lot more. M. paranensis also had two rows of teeth, while modern piranhas have just one. It’s not clear exactly what this ancient fish ate, but whatever it was, it must have required some serious chomps.
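
For the curious, the ratio quoted above is a one-line calculation from the article's own figures:

```python
# Megapiranha paranensis: jaw-tip bite force versus body weight.
bite_force_lb = 1068      # estimated jaw-tip bite force, in pounds
body_weight_lb = 22       # estimated body weight (about 10 kilograms)
print(bite_force_lb / body_weight_lb)  # ~48.5, i.e. roughly 50 times body weight
```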

6. Humans and capybaras are only part of the piranha diet if the prey is already dead or dying

The idea that a piranha could rip a human to shreds is probably more legend than fact, too. For the curious, Popular Science spoke to some experts who estimate that stripping the flesh from a 180-pound human in 5 minutes would require approximately 300 to 500 piranhas. Cases of heart attack and epilepsy that ended with the afflicted drowning in a South American river do show evidence of piranha nibbles, but in those instances, the victim was already deceased when piranhas got involved.

While the myth of the man-eating piranha belongs to movie theaters, the Internet has a wealth of mysterious footage of piranha packs taking down capybaras. Some piranhas do occasionally eat small mammals, but as with humans, it’s usually when the unfortunate animal is already dead or gravely injured.

This would pretty much never happen in real life. (Video: Piranha 3D/Dimension Films)

7. Some piranhas are cannibals

A typical piranha diet consists of insects, fish, crustaceans, worms, carrion, seeds and other plant material. A red-bellied piranha (Pygocentrus nattereri), for example, eats about 2.46 grams per day—about one-eighth of its average body mass. Crustaceans, bugs and scavenged scraps make up the largest chunk of its meals, but the balance of this diet can shift depending on the fish’s age and the food sources available.

So occasionally when resources are low and competition for food is high, piranhas have been known to take a chunk out of a fellow piranha, living or dead. Even weirder, wimple piranhas (Catoprion mento) feed on fish scales, which contain a protein mucus layer that’s surprisingly nutritious.

8. And some are vegetarians

Despite their flesh-eating reputation, some piranhas are omnivorous, eating more seeds than meat, and some even subsist on plants alone. For example, in the Amazonian rapids of the Trombetas basin in Pará, Brazil, scientists discovered that Tometes camunani lives solely off of riverweeds.

A Tometes camunani specimen. (© WWF/Tommaso Giarrizzo)

The piranha’s closest relative, the pacu or tambaqui fish (Colossoma macropomum), also lives on a mostly meat-free diet. Pacus closely resemble some piranha species in size and coloration and thus are often sold at fish markets as “vegetarian piranhas”, among other, less flattering nicknames.

9. When hunting prey, piranhas go for the tail and eyes

A 1972 study of red-bellied piranhas found that, in a lab setting, the fish most frequently attacked goldfish starting with the prey’s tail and/or eyes. The researchers concluded that such an attack strategy would effectively immobilize the piranhas’ opponents and prove useful for survival.

10. Piranhas bark

From anecdotes and observational research, scientists have long known that red-bellied piranhas make bark-like noises when caught by fishermen. Upon further examination, a team of Belgian scientists found that the fish make three distinct types of vocalization in different situations.

In a visual staring contest with another fish, they start making quick calls that sound similar to barks, meant as a warning along the lines of, “Don’t mess with me, buddy.” In the act of actually circling or fighting another fish, piranhas emit low grunts or thud sounds, which researchers believe communicates more of a direct threat to the other fish.

The fish makes these two sounds using its swimbladder, a gas-containing organ that keeps fish afloat. Piranhas contract and relax muscles around the swimbladder to make noises of different frequencies.

The third vocalization? Should the opposing fish not back down, the piranha will gnash its teeth together and chase its rival. 

Here are all three sounds back to back:

11. Piranhas run in packs for safety, not strength

Part of piranhas’ fierce reputation stems from the fact that they often swim in packs or shoals. Red-bellied piranhas are particularly known as pack hunters. Though it might seem an advantageous hunting technique—more fish could theoretically take down a larger foe—the behavior actually stems from fear.

A shoal of piranhas (Serrasalmus sp.). Scary, right? (© Science Photo Library/Corbis)

Piranhas aren’t apex predators—they’re prey to caimans, birds, river dolphins, and other large pescatarian fish. So traveling in shoals has the effect of protecting the inner fish from attack. Further, shoals tend to have a hierarchy of larger, older fish towards the center and younger fish on the outer edges, suggesting that safety might be the true motivation.

In 2005, researchers looked at shoal formation in captive red-bellied piranhas and found that the fish both breathed easier in larger shoals and responded more calmly to simulated predator attacks. The researchers also observed wild piranhas forming larger shoals in shallow waters where they might be more vulnerable.

A spectacled caiman (Caiman crocodilus) eating fresh piranha in Venezuela. (© W. Perry Conway/CORBIS)

12. They’ll only attack you if you mess with them (or their eggs)

Though piranhas have a reputation for attacking, there’s not much evidence to support the legend. Like grizzly bears, wolves, sharks, and pretty much any large scary thing with teeth, piranhas will leave you alone if you leave them alone.

Black piranhas and red-bellied piranhas are considered the most dangerous and aggressive toward humans. Nonetheless, South American swimmers typically emerge from piranha-infested waters without loss of flesh. For swimmers, the danger comes when the water level is low, when prey is scarce, or when you disturb the fish’s spawn buried in the riverbed—basically, situations where the fish either feel really threatened or really hungry, and thus become more aggressive.

For fishermen, untangling a piranha from a net or a hook is where things get dicey. In most cases, if they bite you, they only bite you once—and they usually go for the toes or feet.

13. Piranhas seem to be attracted to noise, splashing, and blood

A 2007 study linked noise, splashing, and spilling food, fish or blood into the river with three instances of piranha attacks on humans in Suriname. Piranhas might be naturally attuned to the sound of fruits and nuts falling from trees and hitting the water, and thus mistake splashing children for the noise associated with food.

As for blood, it likely does not render a piranha senseless as the movies would suggest, but piranhas can smell a drop of blood in 200 liters of water. So, if you are a bleeding, rambunctious child, a dip in the Amazon might not be the best idea.

14. They’re great grilled or in soup

In some parts of the Amazon, eating piranha is considered taboo—a common cultural perception for predatory fish—while in others it’s considered an aphrodisiac. Piranha soup is popular in the Pantanal region of Brazil, but many choose to serve the fish grilled on a banana leaf with tomatoes and limes for garnish.

Perhaps it’s time to put the myth of evil piranhas to bed, and instead enjoy a nice bowl of piranha soup.

The Classy Rise of the Trench Coat

Smithsonian Magazine

The trench coat wasn’t exactly invented for use during the war that gave it its name, a war spent mired in muddy, bloody trenches across Europe. But it was during the First World War that this now iconic garment took the shape that we recognize today, a form that remains startlingly current despite being more than 100 years old.

The trench coat is, in some ways, emblematic of the unique moment in history that World War I occupies, when everything – from rigidly held social structures to military organization to fashion – was in upheaval; it is both a product of this time as well as a symbol of it. “It’s the result of the scientific innovation, technology, mass production… The story of the trench coat is a very modern story,” says Dr. Jane Tynan, lecturer in design history at Central Saint Martins, University of the Arts London and author of British Army Uniform and the First World War: Men in Khaki. 

Even so, the story of the trench coat starts roughly 100 years before the outbreak of World War I in 1914. As early as 1823, rubberized cotton was being used in weatherproof outerwear for both civilian and military use. These “macks”, named for their inventor Charles Macintosh, were great at keeping rain out, but equally – and unfortunately – great at keeping sweat in. They also had a distinctive and unpleasant smell of their own, and a propensity to melt in the sun. Nevertheless, Macintosh’s outerwear, including rubberized riding jackets, was used by British military officers and soldiers throughout the 19th century. 

Inspired by the market the macks created – and the fabric’s initial shortcomings – clothiers continued to develop better, more breathable waterproofed textiles. In 1853, Mayfair gentlemen’s clothier John Emary developed and patented a more appealing (read: less stinky) water-repellent fabric, later renaming his company “Aquascutum” – from the Latin “aqua”, meaning “water”, and “scutum”, meaning “shield” – to reflect its focus on designing wet weather gear for the gentry. His “Wrappers” were soon necessities for the well-dressed man who wanted to remain well-dressed in inclement weather. 

Image by Burberry. Burberry had invented a breathable waterproof twill called gabardine that made its clothing useful for military uniforms.

Image by Burberry. Burberry swiftly transformed its sports coat into military wear.

Image by Burberry. Ads depicted the different functionalities of the Burberry trench coat.

Image by Aquascutum. Trench coats were known for their versatility and adaptability.

Image by Art of Manliness. Higher-ranked military officers wore trench coats and were responsible for outfitting themselves.

Image by Wikimedia Commons Australian War Memorial. Fighting in the trenches was wet and slippery - waterproof coats helped to combat some of these elements.

Image by Wikimedia Commons The War Pictorial. "The trench coat was a very, very useful garment."

Thomas Burberry, a 21-year-old draper from Basingstoke, Hampshire, founded his eponymous menswear business in 1856; in 1879, inspired by the lanolin-coated waterproof smocks worn by Hampshire shepherds, he invented “gabardine”, a breathable yet weatherproofed twill made by coating individual strands of cotton or wool fiber rather than the whole fabric. Burberry’s gabardine outerwear, like Aquascutum’s, proved popular with upper class, sporty types, and with aviators, explorers and adventurers: When Sir Ernest Shackleton went to Antarctica in 1907, he and his crew wore Burberry’s gabardine coats and sheltered in tents made from the same material. 

“[Lightweight waterproof fabric is] a technological development, like the Gore-Tex of that period, making a material that would be fit for purpose,” explains Peter Doyle, military historian and author of The First World War in 100 Objects (the trench coat is number 26). With the fabric, the factories, and the primary players – Burberry, Aquascutum and, to some degree, Macintosh – in place, it was only a matter of time before the trench coat took shape. What drove the design were changes in how the British military outfitted itself and, to a large degree, in how war was now being waged.

**********

Warfare through the 1860s was Napoleonic, typically conducted in large fields where two armies faced off and fired or hacked at one another until one fell. In these scenarios, brightly colored uniforms helped commanders identify their infantry troops even through the smoke of battle. But with the technological advancements in long-range arms in place even by the Crimean War in the 1850s, this kind of warfare had become deeply impractical, not to mention deadly; bright, garish uniforms simply made soldiers easier targets. 

Military tactics needed to adapt to this new reality and so too did uniforms. The color khaki, which came to dominate British military uniforms, was the result of lessons learned in India; the word “khaki” means “dust” in Hindi. The first experiments at dyeing uniforms to blend in with the landscape began in 1840; during the Indian Rebellion of 1857, several British regiments dyed their uniforms drab colors. 

By the 1890s, khaki and camouflage had spread to the rest of the British military; in the Boer War in 1899, the utility of khaki uniforms had proven itself by allowing soldiers dealing with guerilla warfare to blend more easily with their surroundings. The British military was in some ways slow to change – bizarrely, mustaches for officers were compulsory until 1916 – but by World War I, there was an increasing recognition that uniforms needed to disappear into the landscape, allow for fluid, unencumbered movement, be adaptable to the fighting terrain, and be easily produced in mass quantities.

Trench coats offered utility during war and later, style for civilians. (Wikimedia Commons/Imperial War Museums)

The terrain that British military outfitters were designing for even early in the war was, essentially, a disgusting hole in the ground. Trenches were networks of narrow, deep ditches, open to the elements; they smelled, of both the unwashed living bodies crammed in there and the dead ones buried close by. They were muddy and filthy, and often flooded with either rain or, when the latrines overflowed, something worse. They were infested with rats, many grown to enormous size, and lice that fed off the close-quartered soldiers. Life in the trench, where soldiers would typically spend several days at a stretch, was periods of intense boredom without even sleep to assuage it, punctuated by moments of extreme and frantic action that required the ability to move quickly. 

It was to deal with these conditions that the trench coat was designed. “This was really the modernizing of military dress. It was becoming utilitarian, functional, camouflaged … it’s a very modern approach to warfare,” says Tynan. 

In past wars, British officers and soldiers alike wore greatcoats, long overcoats of serge, a thick woolen material, that were heavy even when dry; they were warm, but unwieldy. But in the trenches, these were a liability: Too long, they were often caked with mud, making them even heavier, and, even without the soldiers’ standard equipment, were difficult to maneuver in. Soldiers in the trenches needed something shorter, lighter, more flexible, warm but ventilated, and still weatherproof. The trench coat, as it soon came to be known, fit the bill perfectly.

But let’s be clear: Regular rank and file soldiers, who were issued their (now khaki) uniforms, did not wear trench coats. They had to make do with the old greatcoats, sometimes cutting the bottoms off to allow greater ease of movement. Soldiers’ clothing was a source of discomfort for them – coarse material, ill-fitting cuts, poorly made, and teeming with lice.

Uniforms for those with higher ranks, however, were a very different story. While their dress was dictated by War Office mandates, officers were tasked with doing the actual outfitting themselves. Up until 1914, officers in the regular army were even asked to buy the clothes themselves, often at considerable cost, rather than simply being given the money to spend as they saw fit: In 1894, one tailor estimated that a British officer’s dress could cost anywhere from £40 to £200. From the start of the war in 1914, British officers were provided a £50 allowance to outfit themselves, a nod to the fact that dressing like a proper British military officer didn’t come cheaply. 

Having officers outfit themselves also helped reinforce the social hierarchy of the military. Soldiers tended to be drawn from the British working classes, while the officers were almost exclusively plucked from the upper, gentlemanly class, the “Downton Abbey” swanks. Dress was (and still is, of course) an important marker of social distinction, so allowing officers to buy their own active service kit from their preferred tailors and outfitters set them apart, fortifying their social supremacy. It also meant that though there were parameters for what an officer had to wear, they could, as Doyle says, “cut a dash”: “The latitude for creating their own style was enormous.”

Burberry and Aquascutum both take credit for inventing the first trench coats. (Aquascutum)

The officers called on firms like Burberry, Aquascutum and a handful of others that marketed themselves as military outfitters; notably, these also tended to be the firms that made active, sporting wear for the very same aristocratic gentlemen (Aquascutum, for example, enjoyed no less a patron than the Prince of Wales, later King Edward VII; he wore their overcoats and issued them their first royal warrant in 1897). This marriage of sporting wear and military gear was longstanding. Burberry, for example, designed the field uniform for the standing British army in 1902 and noted in promotional materials that it was based on one of their sportswear suits; Aquascutum was selling overcoats and hunting gear to aristocratic gentlemen and outfitting British officers with weatherproofed wool coats as far back as the Crimean War in 1853. Burberry and Aquascutum both created designs informed by their own lines of well-made, nicely tailored clothing for wealthy people who liked to fish, shoot, ride and golf. This dovetailed nicely with the image the British military wanted to convey: War was hell, but it was also a sporty, masculine, outdoorsy pursuit, a pleasure and a duty. 

**********

Both Burberry and Aquascutum take credit for the trench coat, and it’s unclear which really came first; both companies had strong ties to the British military establishment, and both already made weatherproof outerwear similar to the trench coat. Burberry may have the stronger claim: Khaki-colored Burberry “weatherproofs”, mackintosh-style raincoats in Burberry gabardine, were part of officers’ kit during the Boer War, and in 1912 Burberry patented a knee-length, weatherproofed coat very like the trench coat, called the “Tielocken”, which featured a belt at the waist and broad lapels. But in truth, no one really knows. 

“Burberry and Aquascutum were very clever in adapting to military requirements,” says Tynan, especially as “what you’re talking about is a sport coat being adapted for military use.” The adaptation appears to have largely taken place within the first two years of war: Regardless of who really was the first, British officers had certainly adopted them by 1916, as this drawing of soldiers loading a cannon while being supervised by a trench coat-wearing officer attests. The first instance of the term “trench coat” in print also came in 1916, in a tailoring trade journal accompanied by three patterns for making the increasingly popular weatherproof coats. By this time, the coats’ form had coalesced into essentially the same thing sold by luxury “heritage” brands and cheap and cheerful retailers today. So what made a coat a “trench coat”? 

Before, during and after World War I, Burberry was one of the signature manufacturers of trench coats. (Burberry)

Firstly, it was a coat worn by officers in trenches. A blindingly obvious statement, sure, but it deserves some unpacking – because each part of the trench coat had a function specific to where and how it was used and who used it. Trench coats were double-breasted and tailored to the waist, in keeping with the style of officers’ uniform. At the belted waist, it flared into a kind of knee-length skirt; this was short enough that it wouldn’t trail in the mud and wide enough to allow ease of movement, but still covered a significant portion of the body. The belt, reminiscent of the Sam Browne belt, would have come with D-rings to hook on accessories, such as binoculars, map cases, a sword, or a pistol. 

At the back, a small cape crosses the shoulders – an innovation taken from existing military-issue waterproof capes – encouraging water to slough off; at the front, there is a gun or storm flap at the shoulder, allowing for ventilation. The pockets are large and deep, useful for maps and other necessities. The straps at the cuffs of the raglan sleeves tighten, offering greater protection from the weather. The collar buttons at the neck, and this was for both protection from bad weather and poison gas, which was first used on a large scale in April 1915; gas masks could be tucked into the collar to make them more airtight. Many of the coats also came with a warm, removable liner, some of which could be used as emergency bedding if the need arose. At the shoulders, straps bore epaulettes that indicated the rank of the wearer. 

In short, as Tynan notes, “The trench coat was a very, very useful garment.” 

But there was a tragic unintended consequence of officers’ distinctive dress, including the trench coat: It made them easier targets for snipers, especially as they led the charge over the top of the trench. By Christmas 1914, officers were dying at a higher rate than soldiers (by the end of the war, 17 percent of the officer class had been killed, compared with 12 percent of the ranks), and this precipitated a major shift in the make-up of the British Army. The mass pre-war recruitment drives had already relaxed requirements for officers; the new citizen army was headed by civilian gentlemen. But now, necessity demanded that the army relax traditions further and take officers from the soldiering ranks and the middle class. For the rest of the war, more than half of the officers would come from non-traditional sources. These newly created officers were often referred to by the uncomfortable epithet “temporary gentleman”, a term that reinforced both the fact that officers were supposed to be gentlemen and that these new officers were not. 

To bridge that gap, the newly made officers hoped that clothes would indeed make the man. “Quite a lot of men who had no money, no standing, no basis for working and living in that social arena were suddenly walking down the street with insignia on their shoulder,” says Doyle. “If they could cut a dash with all these affectations with their uniforms, the very thing that would have gotten them picked off the front line by snipers, that was very aspirational.” Doyle explains that one of the other elements that pushed the trench coat to the fore was the commercial competition built up to outfit this new and growing civilian army. “Up and down London, Oxford Street, Bond Street, there would be military outfitters who would be offering the solution to all the problems of the British military soldier – ‘Right, we can outfit you in a week.’ … Officers would say, ‘I’ve got some money, I don’t know what to do, I’ll buy all that’. There came this incredible competition to supply the best possible kit.” 

Interestingly, adverts from the time show that even as the actual make-up of the officer class was changing, its ideal member was still an active, vaguely aristocratic gentleman. This gentleman officer, comfortable on the battlefield in his tailored outfit, remained the dominant image for much of the war – newspaper illustrations even imagined scenes of officers at leisure at the front, relaxing with pipes and gramophones and tea – although this leisure class lifestyle was as far removed from the bloody reality of the trenches as the grand English country house was from the Western Front.

For the temporary gentleman, this ideal image would have been entrancing. And very much a part of this image was, by the middle of the war at least, the trench coat. It embodied the panache and style of the ideal officer while at the same time being actually useful, rendering it a perfectly aspirational garment for the middle class. New officers happily and frequently shelled out the £3 or £4 for a good quality trench coat (for example, this Burberry model); a sizeable sum when you consider that the average rank-and-file soldier made just one shilling a day, and there were 20 shillings to a pound – a £4 coat thus represented 80 days of a private’s pay. (Doyle pointed out that given the very real possibility of dying, maybe even while wearing the trench coat, newly made officers didn’t often balk at spending a lot of money on things.) And, of course, if one couldn’t afford a good quality trench coat, there were dozens of retailers willing to outfit a new officer more or less on the cheap, lending to the increasing ubiquity of the trench coat. (This isn’t to say, however, that the cheaper coats carried the same social currency, and in that way, it’s no different than now: As Valerie Steele, director of the Museum at the Fashion Institute of Technology in New York, puts it, “I wouldn’t underestimate people’s ability to read the differences between a Burberry trench and an H&M trench.”)

Image by Hulton-Deutsch Collection/CORBIS. Models wearing fashionable Burberry trench coats, which remain a staple today, 1973.

Image by Mirrorpix/Corbis. Flying nurses of the USAAF Ninth Troop Carrier Command, wearing special hooded trench coats in England during World War II, 1944.

Image by Corbis. Humphrey Bogart in a trench coat and fedora, 1940s.

Image by Sunset Boulevard/Corbis. American actor Humphrey Bogart and Swedish actress Ingrid Bergman on the set of Casablanca, 1942.

Image by Kirn Vintage Stock/Corbis. Four businessmen wearing trench coats as part of their work uniform, 1940.

Image by Alain Dejean/Sygma/Corbis. A model wears a trench coat as part of an outfit designed by Ted Lapidus, 1972.

Image by Paramount Pictures/Sunset Boulevard/Corbis. German actress and singer Marlene Dietrich sporting a trench coat on the set of A Foreign Affair, 1948.

Image by Imaginechina/Corbis. Burberry trench coats are still popular today, now available in many different patterns and styles.

Ubiquity is one measure of success and by that measure alone, the trench coat was a winner. By August 1917, the New York Times was reporting that even in America, the British import was “in demand” among “recently-commissioned officers”, and that a version of the coat was expected to be a part of soldiers’ regular kit at the front.

But it wasn’t only Allied officers who were adopting the coat in droves – even in the midst of the war, civilians of both sexes also bought the coats. On one level, civilians wearing a military coat was an act of patriotism, or perhaps more accurately, a way of showing solidarity with the war effort. As World War I ground on, savvy marketers began plastering the word “trench” on virtually anything, from cook stoves to jewelry. Doyle said that people at the time were desperate to connect with their loved ones at the front, sometimes by sending them well-meaning but often impractical gifts, but also by adopting and using these “trench” items themselves. “If it’s labeled ‘trench’ you get the sense that they’re being bought patriotically. There’s a slight hint of exploitation by the [manufacturers], but then they’re supplying what the market wanted and I think the trench coat fit into all that,” he says. “Certainly people were realizing that to make it worthwhile, you needed to have this magical word on it, ‘trench’.” For women in particular, there was a sense that too-flashy dress was somehow unpatriotic. “How are you going to create a new look? By falling into line with your soldier boys,” says Doyle. 

On another level, however, the war also had a kind of glamour that often eclipsed its stark, stinking reality. As the advertisements for trench coats at the time reinforced, the officer was the face of this glamour: “If you look at adverts, it’s very dashing … it’s very much giving a sense that if you’re wearing one of these, you’re at the height of fashion,” explains Doyle, adding that during the war, the most fashionable person in the U.K. was the trench coat-clad “gad about town” officer. And on a pragmatic level, Tynan pointed out, what made the coats so popular with officers – their practical functionality married to a flattering cut – was also what resonated with civilians.

**********

After the war, battle wounds scabbed over and hardened into scars – but the popularity of the trench coat remained. In part, it was buoyed by former officers’ tendency to keep the coats: “The officers realized they were no longer men of status and had to go back to being clerks or whatever, their temporary gentleman status was revoked… probably the echo into the 1920s was a remembrance of this kind of status by wearing this coat,” theorized Doyle.  

At the same time, the glamour attached to the coat during the war was transmuted into a different kind of romantic image, in which the dashing officer is replaced by the equally alluring world-weary returning officer. “The war-worn look was most attractive, not the fresh faced recruit with his spanking new uniform, but the guy who comes back. He’s got his hat at a jaunty angle... the idea was that he had been transformed, he looked like the picture of experience,” Tynan says. “I think that would certainly have given [the trench coat] a caché, an officer returning with that sort of war-worn look and the trench coat is certainly part of that image.”

The trench coat remained part of the public consciousness in the period between the wars, until the Second World War again put trench coats into military action (Aquascutum was the big outfitter of Allied military personnel this time). At the same time, the trench coat got another boost – this time from the golden age of Hollywood. “A key element to its continued success has to do with its appearance as costume in various films,” says Valerie Steele. And specifically, who was wearing them in those films: hard-bitten detectives, gangsters, men of the world and femme fatales. For example, in 1941’s The Maltese Falcon, Humphrey Bogart wore an Aquascutum Kingsway trench as Sam Spade tangling with the duplicitous Brigid O’Shaughnessy; when he said goodbye to Ingrid Bergman on that foggy tarmac in Casablanca in 1942, he wore the trench; and he wore it again in 1946 as private eye Philip Marlowe in The Big Sleep.

“It’s not a question of power coming from an authority like the state. They’re private detectives or spies, they rely on themselves and their wits,” said Steele, noting that the trench coat reinforced that image. “[The trench coat] does have a sense of kind of world-weariness, like it’s seen all kinds of things. If you were asked ‘trench coat: naïve or knowing?’ You’d go ‘knowing’ of course.” (Which makes Peter Sellers wearing the trench coat as the bumbling Inspector Clouseau in The Pink Panther series all the funnier.)

Even as it became the preferred outerwear of lone wolves, it continued to be an essential part of the wardrobe of the social elite – a fascinating dynamic that meant the trench coat was as appropriate on the shoulders of Charles, Prince of Wales and heir to the British throne, as on those of Rick Deckard, hard-bitten bounty hunter of Ridley Scott’s 1982 future noir Blade Runner. “It’s nostalgic… it’s a fashion classic. It’s like blue jeans, it’s just one of the items that has become part of our vocabulary of clothing because it’s a very functional item that is also stylish,” says Tynan. “It just works.”

It’s also endlessly updatable. “Because it’s so iconic, it means that avant-garde designers can play with elements of it,” says Steele. Even Burberry, which consciously recentered its brand around its trench coat history in the middle of the last decade, understands this – the company now offers dozens of variations on the trench, in bright colors and prints, with python-skin sleeves, in lace, suede, and satin.

But as the trench coat has become a fashion staple, on every fashion blogger’s must-have list, its World War I origins are almost forgotten. Case in point: Doyle said that in the 1990s, he passed the Burberry flagship windows on London’s major fashion thoroughfare, Regent Street. There, in huge lettering, were the words “Trench Fever”. In the modern context, “trench fever” was about selling luxury trench coats. But in the original context, the context out of which the coats were born, “trench fever” was a disease transmitted by lice in the close, fetid quarters of the trenches. 

“I thought it astounding,” said Doyle. “The millions of people who walked down the street, would they have made that connection with the trenches? I doubt that.”

The Unlikely Medical History of Chocolate Syrup

Smithsonian Magazine

At first glance, nothing seems particularly odd about the December 1896 edition of The Druggists Circular and Chemical Gazette, a catalog of products that any self-respecting pharmacy ought to carry. But look closer: Hiding among medical necessities like McElroy's glass syringes and Hirsh Frank & Co's lab coats, you’ll spot some curiosities—including Hershey's cocoa powder.

"Perfectly soluble,” boasts the ad in bold, capital lettering. “Warranted absolutely pure.” It reads as if it was peddling medicine—and in fact, it sort of was.

Druggists of the day often used the dark powder to whip up a syrup sweet enough to mask the flavor of objectionable remedies, explains Stella Parks, a pastry chef with the food and cooking website Serious Eats. Parks happened upon these vintage advertisements while she was researching her new book, BraveTart: Iconic American Desserts, which features lesser-known histories of our favorite sweet treats.

The Hershey's ad intrigued her. "What in the world are these guys doing advertising to druggists?" she recalls wondering at the time. By digging into the history and tracking down more pharmaceutical circulars and magazines, she discovered the rich history of chocolate syrup, which began not with ice cream and flavored milk—but with medicine.

(The Druggists' Circular and Chemical Gazette, Volume 40, 1896)

Our love of chocolate goes back over 3,000 years, with traces of cacao appearing as early as 1500 B.C. in the pots of the Olmecs of Mexico. Yet for most of its early history, it was consumed as a drink made from fermented, roasted, and ground beans. This drink was a far cry from the sweetened, milky stuff we call hot chocolate today: It was rarely sweetened, and likely very bitter.

Still, the roughly football-sized pods that cradled the beans were held in high esteem; the Aztecs even traded cacao as currency. Chocolate didn’t become popular overseas, however, until Europeans ventured into the Americas at the end of the 15th century. By the 1700s, the ground beans were avidly consumed throughout Europe and the American colonies as a sweetened, hot drink vaguely reminiscent of today’s hot cocoa.

At the time, chocolate was touted for its medicinal properties and prescribed as treatment for a range of diseases, says Deanna Pucciarelli, a professor of nutrition and dietetics at Ball State University who researches the medicinal history of chocolate. It was often prescribed for people suffering from wasting disease: The extra calories assisted in weight gain, and the caffeine-like compounds helped perk patients up. "It didn't treat the actual illness, but it treated the symptoms," she explains.

Yet for pharmacists, it wasn’t only the supposed health benefits but also the rich, velvety flavor that held such appeal. "One thing about medicines, even going way back, is that they are really bitter," says Diane Wendt, associate curator of the division of medicine and science at Smithsonian's National Museum of American History. Many medications were originally derived from plants and fall into a class of compounds known as alkaloids, which have an acrid, mouth-puckering flavor. The first of these alkaloids, isolated by a German chemist in the early 1800s, was none other than morphine.

Chocolate, it turns out, effectively covered the toe-curling taste of these foul flavors. "Few substances are so eagerly taken by children or invalids, and fewer still are better than [chocolate] for masking the taste of bitter or nauseous medicinal substances," according to the 1899 text, The Pharmaceutical Era.

It's unclear exactly when pharmacists first combined cocoa powder and sugar to brew the sticky syrup. But its popularity was likely helped along by the invention of cocoa powder. In 1828, Dutch chemist Coenraad J. Van Houten patented a press that successfully removed some of chocolate's natural fats, reducing its bitter flavor and making it easier to dissolve with water. Still, the result wasn’t exactly the "same kind of smooth mellow chocolate we have now," says Parks; to make it palatable, pharmacists would mix cocoa powder with at least eight times more sugar than chocolate.

The popularity of chocolate syrup exploded in the second half of the 19th century, coinciding with the golden age of so-called patent medicines. These are named after the "letters patent" the English crown awarded to inventors of supposedly curative formulas. The first English medicine patent was awarded in the late 1600s, but the name later came to refer to any over-the-counter drugs. American “patent medicines” went by the same name, but were not typically patented under this system.

Patent medicines emerged at a time when public need for treatments and cures outpaced medical knowledge. Many of these "cures" did more harm than good. Often marketed as cure-alls, the concoctions could contain anything from pulverized fruits and veggies to alcohol and opioids. At the time, the common use of these addictive substances in remedies was legal; regulation didn't come about until the 1914 passage of the Harrison Narcotic Act.

One popular remedy featuring tincture of opium as its active ingredient was Stickney and Poor's Paregoric. This syrup was marketed as a treatment for many ills, and given to colicky infants as young as five days old. “Remedies” like this weren’t completely ineffective. The inclusion of narcotics and alcohol in the cures did indeed give customers temporary relief from illness—and, more sinisterly, their addictive nature kept them coming back for more.

Vintage Hershey's ad showing chocolate syrup as a "stepping stone to health." (Hershey's Company)

The boom of factory mass production in the 1900s brought with it the rise of easy-to-swallow medical pills. But before that, "pill making by hand is pretty labor intensive," says Wendt. "To actually make a pill of a certain dose—to mix it up and cut the pills, and roll the pills, and dry the pills, and coat the pills—that's a pretty lengthy process." That’s why, during this time, medications were mostly served up in liquid or powder form, says Wendt.

Druggists would mix each liquid remedy with a base of sugary flavored syrup, like chocolate, to be taken either by the spoonful or stirred into a beverage, says Wendt. Alternatively, powders could be directly poured into your refreshment of choice. The base for these medicinal drinks could be anything from plain water to tea to a couple fingers of whiskey. But over the course of the 1800s, one particular drink was gaining popularity as a medicine masker: carbonated water.

Not unlike chocolate, soda water was initially considered a health drink in its own right. The carbonated beverage mimicked the mineral-rich waters bubbling up in natural springs that had become known for their curative and healing powers. Soda became a truly widespread phenomenon in America around the turn of the century thanks to the pharmacist Jacob Baur, who invented the process necessary to sell tanks of pressurized carbon dioxide.

Part health drink, part delicious treat, sweetened carbonated water began spreading like wildfire in the form of soda fountains, Darcy O'Neil writes in his book Fix the Pumps.

Syrups became ever more popular to keep pace with the soda craze. Many of these flavors are still common today: vanilla, ginger, lemon and, of course, chocolate. By the late 1800s hardly a pharmacists' publication went without some mention of chocolate syrup, Parks writes in BraveTart. And hardly a drug store went without a soda shop: Soda fountains served as a lucrative side business for druggists and pharmacists who commonly struggled to make ends meet, says Parks.

At the time, carbonated concoctions were largely still seen as cures. "Soda is an excellent medium for taking many medicines," according to the 1897 book, The Standard Manual of Soda and Other Beverages. "For example, the best method of administering castor oil is to draw a glass of sarsaparilla soda in the usual manner and pour in the requisite amount of oil." (Sarsaparilla, a flavor derived from the root of a tropical vine, is still used today in some root beer variants.)

One example still very much available today is Coca-Cola: Originally mixed with cocaine, the fizzy drink was touted as a healthful stimulant to revive the brain and body.

At the turn of the century, however, chocolate syrup began to shift from treatment to treat. "It just seemed to naturally segue into all the ice cream [desserts] that pharmacists had to keep on hand just to stay afloat," says Parks.

A fortuitous mix of events helped elevate chocolate from cure to commercial confection. First, in the early 20th century, concerns over false health claims and downright dangerous cures helped lead to the passage of the 1906 Pure Food and Drug Act, which required druggists to disclose remedy ingredients with clear and accurate labels. Similarly, a clampdown on American patent medicines may have further driven the chocolatey transition.

At the same time, other forms of chocolate were gaining traction as confections in their own right. As the industrial revolution ushered in machinery that took over the time-intensive process of turning cacao to cocoa, prices began to fall, explains Pucciarelli. "It all comes together," she says. "The price of manufacturing drops, the price of sugar drops, and then you have [chocolate] bars."

In 1926, Hershey's began marketing pre-mixed chocolate syrup in both single and double strength varieties for commercial businesses. The cans were shelf stable, meaning druggists (and soda jerks) didn't need to continually mix up new batches. By 1930, both Hershey's and other chocolate companies like Bosco's had begun marketing chocolate syrup for home use.

The rest is sweet, sweet history. These days, despite many modern claims of health benefits—some founded and some unfounded—chocolate is considered more confection than cure. Chocolate accounts for the "vast majority" of the $35 billion confection market in the United States, according to the National Confectioners Association.

Yet the use of a sweet cover for medications isn’t completely dead. You can find sweetness masking medicine in many forms, from cherry cough syrup to bubblegum-flavored amoxicillin. It seems Mary Poppins was right: A spoonful of sugar—or in this case, chocolate—really does help the medicine go down.

The Hunt for a Bottle of Asturias Cider and the Stories of More Drinks From Northern Spain

Smithsonian Magazine

Manuel Martinez, bartender at the family-operated La Figar Bar in Nava, pours a glass of cider. He stood by to provide pour after pour until the bottle was finished. Photo by Alastair Bland.

If you have lemons, you make lemonade, and if you have honey, you make mead, and if you have Semillon and Sauvignon Blanc grapevines in soil so rich and sweet that you could almost eat it with a little salt, you make Chateau d’Yquem.

And if you have apples, you make cider—and so the people do in Asturias, in northern Spain. Apple trees grow prolifically on the rolling green hills here, many as stubby as shrubs, others as large and ragged as oaks. Many grow randomly, as scattered as the sheep and cows, while other property owners tend checkerboard orchards. Just about every household has several, and behind many a roadside bar—usually sub-headed as a “sidreria”—grow trees used to make the house apple cider, which is often served from the spigot of a barrel.

Cider is a thirst quencher here, and it’s a way of life. In the fall, thousands of people participate in the harvest, sending the fruit to about two dozen local commercial producers (many other unregistered sellers bottle cider at home) where the fruit is crushed, the juice fermented and the drink eventually released in wine-sized bottles. Essentially every bar and restaurant in the area serves cider, and here is where one must go to experience cider as Asturians do—and to experience what a lot of fuss Asturian bartenders and patrons put up with for a bottle of some local farmhouse tipple. The bartender makes a grand show of popping the cork and pouring the cider from overhead into a glass held at waist level. The first splashes generally miss and hit the floor before he finds the stream. He fills the glass only about a quarter full, and the recipient must be standing by to drink immediately, to enjoy the bubbles created by the aeration (the cider here is not carbonated). The customary fashion is then to dump out the last splash, a gesture that supposedly freshens the glass for the next person (the presumption is that people are sharing glasses). Want more cider? Somebody, if not the bartender, must go through the pomp and circumstance again, often in a designated corner of the bar, and by the end of a 750-milliliter bottle, about a third has been spilled. I can only presume Asturian bartenders don’t wear their best shoes to work. Relax over a beer, then get back to work with another splash-dance of cider.

Asturias cider is protected by a Denomination of Origin status, the European Union system of guidelines that lays out laws for the making of regional products like cheese, wine, beer and breads. For cider to wear the proud name of Asturias on its bottle, it can be made using only 22 specific varieties of apples, though more than 250 grow in the region. Most producers use an unspecified mélange of apples, generally five or six varieties, and the wide range of possibilities allows for a great diversity in Asturian cider—though to some degree it’s all roughly the same: usually dry and a bit tart, about 6 percent alcohol by volume, with smells and flavors suggestive of hay and barnyard. Called sidra natural, it’s still as a swamp, and about as green and cloudy, too. It’s also delicious.

Sidra natural, as it’s called in Spanish, is simply apple juice, fermented, barrel aged and bottled without carbonation. This particular bottle pulled the author through an especially rigorous day of cycling over Puerto de Tarna. Photo by Alastair Bland.

In the town of Panes, I loafed around the streets for several hours, looking at the displays of ciders carried in every bakery, butchery, grocery store and gift shop—but nowhere, unfortunately, was there a place to taste through a lineup of ciders by the pour; that is, you’ve got to buy the whole bottle and be ready to get your feet wet. I visited the fish market—Huly Pescaderia, it’s called—and talked with the owner, named Julian. Our conversation diverged quickly from farmed Norwegian salmon to cider, for Julian said he makes his own. He invited me, in fact, to a cider party that night in his home, but I had other obligations. Julian doesn’t sell his cider but still abides by E.U. guidelines in making proper Asturian cider. His cider includes (he wrote these names for me) Francesca, Berdalona, Solalina and De La Ruega apples—and it takes about seven pounds of fruit for a liter of juice. Julian said he even ages some cider and has tasted some phenomenal stuff stashed, forgotten then found more than a decade after the cork went in.

But cider is generally had fresh, with the first bottles opened the May after the fall harvest—meaning the 2011 vintage is just hitting the floorboards—and things are about to get crazy. Because each July, the Nava Cider Festival draws thousands of people to the small town of Nava, just west of the Picos de Europa. This year, from July 6 to July 8, the population of 3,000 will boom for a weekend in the main plaza (where a large mural depicts a man pouring cider from overhead), with lectures and talks and demonstrations preceding the free tasting on Saturday. Sunday always includes a pouring competition, in which competitors show their skills in pouring cider from great height, with as little as possible splashed to the floor. I visited Nava, and stopped at La Figar Bar, a dark but cozy wood place with an old bar, animal mounts on the wall and cider festival memorabilia occupying almost every available surface. Bartender Manuel Martinez opened me a bottle of Asturias Foncueva Sidra and showed me the way the pouring is done—and with no annoyance that he had to get his shoes sticky on my account. He took me to the rear parlor, too, to show me the barrel in the wall, containing cider in bulk (no need for a whole bottle) and served me a portion from the spigot, glass held five feet from the tap (he admitted that the barrel is “falso,” fed by a tube from a keg behind the wall).

The mural above the festival plaza in Nava depicts the magnificent image of a champion cider server in action. Photo by Alastair Bland.

The next day was Huelga General, June 18, when everyone in Asturias does no work at all and instead stands on the streets in the drizzle, celebrating the holiday with their feet to the curb and watching the traffic go by. Not even the cafes were open, and I pedaled on empty the fastest way out of the province that there was—over a mile-high pass called Puerto de Tarna. Every restaurant was closed along the way, and I was nearly crazed with hunger by 2 p.m., when, halfway up the climb, I pounded on the door to a tavern and talked my way into buying a bottle of cider. I found a nearby bench and fueled up. It was gold and spritzy and would have done well with a blue cheese—but what I would really have almost killed for is a fig tree. The cider, with 6 calories per gram of alcohol and a few more in the residual apple sugar, pulled me through, over the pass and into the region of Castilla y Leon, where the towns were operating and the stores open. Now about 3,000 calories in the hole, I found a shop in Riano, 20 miles below that horrible pass. It was 6 p.m. I had gone all day without food, thanks to that strange Asturian holiday on which tourists are left to starve. I bought walnuts, beets, an avocado and a beautiful melon—and I asked for a bottle of “sidra natural.” The lady shrugged and said sorry.

“For cider,” she advised, “you should really go to Asturias.”

What Else to Drink in Northern Spain

Txakoli. The white wine of the Basques, txakoli (say cha-kho-lee), or txakolina, is spritzy and greenish, with an herbal grassiness and easy-drinking flow that earns it a reputation among some as a simple wine, not to be regarded as seriously as the stodgy old bottles of Bordeaux or other highbrow districts. Others revere it, handle the bottles like small babies, and charge 8 Euros or more for a bottle. Yikes. I have sampled several. I enjoyed each, especially the Santarba Txakolina, of 11 percent alcohol, with a mint-lime flavor and a cool aftertaste of spearmint, and very refreshing in the horse pasture where I drank it before bed.

Rioja. Grown below the southwest slope of the Spanish Pyrenees, Rioja is often red and made largely of Tempranillo grapes. It tends to be heavyset, forceful and fruity, with powerful tones of raspberry and cherry. I’ve been seeking out the 2005 vintage for no reason other than that 2005 was a good, and eventful, year for me. I was in Spain that fall, watching the grape harvest. It was also one of the driest years in history on the Iberian peninsula, which was interesting. Goats, I recall, were ravenously sifting through the gravel in search of grass sprigs and chasing me in the hope of eating my dirty laundry. And that was also the trip which ended in a crash, dumping me on the asphalt near Valencia with a broken wrist and my splintered collarbone sticking through the skin of my neck. Wine is an experience of time and place, and the 2005 Rioja takes me back to a good one.

Say what? Never mind. Just drink. The language is Basque, and the wine is txakoli, the main white wine of the Basque country of northern Spain and southern France. Photo by Alastair Bland.

World’s Oldest-Known Figurative Paintings Discovered in Borneo Cave

Smithsonian Magazine

Hidden in a remote cave buried in the inaccessible rainforests of Indonesian Borneo, a series of rock art paintings is helping archaeologists and anthropologists rewrite the history of artistic expression. There, scientists have found, enterprising painters may have been among the very first humans to decorate stone walls with images of the ancient world they inhabited.

The oldest painting in Lubang Jeriji Saléh cave on Borneo, the third-largest island in the world, is of a large wild cattle-like beast whose relatives may still roam the local forests. The figure has been dated to at least 40,000 years old, and it may be older still: it could have been created as much as 51,800 years ago.

These estimates, recently calculated using radiometric dating, may make the painting the oldest-known example of figurative cave art—images that depict objects from the real world as opposed to abstract designs. The figures also provide more evidence that an artistic flowering occurred among our ancestors, simultaneously, on opposite ends of the vast Eurasian continent.

Hundreds of ancient images, from abstract designs and hand stencils to animals and human figures, have been documented in Indonesian Borneo’s remote caves since scientists became aware of them in the mid-1990s. But like other signs of ancient human habitation in this part of the world, they are infrequently seen or studied. Borneo’s Sangkulirang–Mangkalihat Peninsula is a land of soaring limestone towers and cliffs, riddled with caves below and blanketed with thick tropical forests above that make travel arduous and have hidden local secrets for thousands of years.

Limestone karst of East Kalimantan, Indonesian Borneo. (Pindi Setiawan)

Maxime Aubert, an archaeologist and geochemist at Griffith University, Gold Coast, Australia, says the effort to study the cave paintings was well worth it, not least because of the unique connection one feels here to the distant past.

“When we do archaeological digs, we’re lucky if we can find some pieces of bone or stone tools, and usually you find what people have chucked out,” says Aubert, lead author of a new study detailing the Borneo paintings. “When you look at the rock art, it’s really an intimate thing. It’s a window into the past, and you can see their lives that they depicted. It’s really like they are talking to us from 40,000 years ago.”

The dating of this ancient southeast Asian cave art pens a new chapter in the evolving story of where and when our ancestors started painting their impressions of the outside world. A painted rhino in France’s Chauvet Cave had until recently been the oldest-known example of figurative cave art, dated to roughly 35,000 to 39,000 years old. Chauvet and a few other sites led scientists to believe that the birth of such advanced painting had occurred in Europe. But in 2014, Aubert and colleagues announced that cave art depicting stenciled handprints and a large pig-like animal from the same time period had been found on the other side of the world on the Indonesian island of Sulawesi.

“The 2014 paper on Sulawesi made a very big splash, as it showed that cave art was practiced both in Europe and in southeast Asia at about the same time,” Paleolithic archaeologist Wil Roebroeks says in an email. Roebroeks, of Leiden University in the Netherlands, added that Aubert’s team’s research, “killed Eurocentric views on early rock art.”

The Borneo find complements this earlier work and expands an increasingly broad and intriguing worldview of ancient art—one with as many new questions as answers.

Aubert and colleagues were able to determine when Borneo’s ancient artists plied their trade by dating calcite crusts, known as “cave popcorn,” that seeping water slowly created over the top of the art. The team dated these deposits by measuring the amount of uranium and thorium in the samples. Because uranium decays into thorium at a known rate, uranium series analysis can be used to calculate a sample’s age. And because the paintings lie underneath these crusts, the researchers conclude they must be older than the calcite deposits. Indonesia’s National Research Centre for Archaeology (ARKENAS) and the Bandung Institute of Technology (ITB) also contributed to the study published today in Nature.
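
In simplified form, the arithmetic behind the method runs as follows (this is only a sketch: real uranium-series work also measures the uranium isotope ratios and corrects for any thorium present when the crust formed). Freshly deposited calcite takes up uranium but essentially no thorium, so the $^{230}\mathrm{Th}/^{234}\mathrm{U}$ activity ratio $R$ climbs from zero toward equilibrium at a pace set by thorium-230's roughly 75,700-year half-life:

$$R = 1 - e^{-\lambda t} \quad\Rightarrow\quad t = -\frac{\ln(1 - R)}{\lambda}, \qquad \lambda = \frac{\ln 2}{75{,}700~\mathrm{yr}}.$$

A measured ratio of about 0.31, for instance, would put a crust at roughly 40,000 years old, and any painting sealed beneath it must be at least that old.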

The world's oldest figurative artwork from Borneo dated to a minimum of 40,000 years. (Luc-Henri Fage)

Even though the uranium dating suggests these figures are the oldest-known example of such art in the world, Aubert is even more interested in the striking similarities between the Borneo cave art styles and those found across Europe. In fact, two styles of painting found in Indonesia’s Lubang Jeriji Saléh cave—which were superimposed over one another by peoples who frequented the same cave perhaps 20,000 years apart—also appear at roughly the same times more than 7,000 miles away in Western Europe.

The first style, which began between 52,000 and 40,000 years ago, uses red and orange hues and includes hand stencils and paintings of large animals that lived in the surrounding area. A second distinct style appeared around 20,000 years ago. It uses purple or mulberry colors, and its hand stencils, sometimes linked together by branch-like lines, feature internal decorations.

By 13,600 years ago, the Borneo cave art had undergone another significant evolution—it began depicting the human world. “We see small human figures. They are wearing head dresses, sometimes dancing or hunting, and it’s just amazing,” Aubert says.

Human figures from East Kalimantan, Indonesian Borneo. This style is dated to at least 13,600 years ago but could possibly date to the height of the Last Glacial Maximum, 20,000 years ago. (Pindi Setiawan)

“It’s more about a pattern that we can see now. We have really old paintings in Europe and southeast Asia, and not only did they appear at the same time on opposite sides of the world, but it seems that they are evolving at the same time on opposite sides of the world,” Aubert says. “The second distinct style appeared around the time of the last glacial maximum, so it could even be related to the climate. We just don’t know.”

Rock art painting might have developed simultaneously in more than one place, Roebroeks suggests. Alternatively, as he wrote in a 2014 Nature essay, rock art may have been “an integral part of the cultural repertoire of colonizing modern humans, from western Europe to southeast Asia and beyond.”

“We can only speculate about the more or less contemporaneous ‘emergence’ of rock art in western Eurasia and on the other extreme of the distribution of modern humans, Insular South East Asia,” Roebroeks says.

The idea that rock art was an “integral part” of modern human culture from the beginning seems most likely to Durham University archaeologist Paul Pettitt, who says a wide range of evidence supports the interpretation that non-figurative art had evolved in Africa by 75,000 years ago or earlier.

“This could have originated as a way to decorate the body with specific meanings,” he says in an email, “and included shell jewelry known from the north and south of the continent as early as 100,000 years ago.” The artistic expressions “had developed to include the use of red ochre and engraved signs on ochre lumps and stone by 75,000 [years ago] and decoration on ostrich eggshell water containers by 65,000. If we assume this repertoire left Africa with some of the earliest dispersals of Homo sapiens, perhaps on their bodies, it might explain the persistence of a form of art which, by at least 40,000 years ago had come to be extended off of the body, and things closely associated with it, to cave and rock shelter walls,” he says.

Composition of mulberry-coloured hand stencils superimposed over older reddish/orange hand stencils. The two styles are separated in time by at least 20,000 years. (Kinez Riza)

But even if we could understand the entire story of early human art, we might still be missing an even bigger picture.

A 2018 study describes Spanish rock art so old that it would have been created more than 20,000 years before modern humans arrived in the region—meaning the artists must have been Neanderthals. Though the dots, lines and hand stencils aren’t the same type of figurative art found in Borneo or Chauvet, the images suggest that artistic expression was part of the Neanderthal toolkit at least 64,000 years ago.

Roebroeks cautions that scientists should hesitate to infer that certain times or places are key to the emergence of a particular cultural behavior, simply because evidence for them is lacking in other eras or locales. As evidenced by the surprisingly old dates recently assigned to the Neanderthal rock art, or the emergence of Pleistocene rock art outside of Europe in Indonesia, these assumptions are often based on the absence of comparable phenomena in neighboring locales or time periods.

Just because we haven’t found them, however, doesn’t mean that they don’t exist. “One of the lessons we can learn from the studies by Aubert and colleagues on rock art from Sulawesi and now Borneo is that such ways of reasoning can be severely flawed.”

Prehistoric art may have been created in the distant past, but the future is likely to bring surprising discoveries that further transform our view of human artistic expression tens of thousands of years after the paint has dried.

When Hollywood Glamour Was Sold at the Local Department Store

Smithsonian Magazine

If a woman was in search of an evening gown in 1932, there’s a good chance she considered a particular dress. The floor-length, white organdy showstopper had voluminous pom-pom sleeves with a flouncy, ruffled hem, and was the “it” dress for years to come, sending shockwaves through the fashion world. Inspired by a look worn by movie star Joan Crawford in MGM’s smash-hit Letty Lynton, the gown was the brainchild of costume designer Adrian Greenberg. Its silhouette was so unprecedented that it inspired women to flock to department stores like Macy’s for one of their own.

But what seemed like a fashion fad was really a harbinger of things to come. Though it’s not clear exactly how many Letty Lynton gowns were manufactured and sold, the look was so popular that it has since gained an almost mythical status in the world of costume design and cinema-inspired fashion. That single gown marked a moment in American fashion—one in which costume designers in Hollywood, not couture houses in Paris, started telling American women what to wear. It was the beginning of an era of film-inspired apparel that brought silver-screen looks into the closets of ordinary women. 

It took 21 years from the time of the first Academy Awards for the Academy of Motion Picture Arts and Sciences to honor costume design, even though film costumes have captivated audiences ever since the first movies were screened. What few realize, though, is that costume design had a major impact on the global fashion industry.

The early 1930s, during the Great Depression, were Hollywood’s Golden Age and movies offered an exhilarating and accessible form of escape. As film captured America’s collective imagination, what was worn on screen became sensationalized. A new market emerged—and with it, a whole wardrobe's worth of ways to develop and sell products inspired by cinema costumes.

The race was on to capitalize on this new, largely female, consumer group. Heading up the effort were film studios including Paramount, Warner Brothers, 20th Century Fox and RKO. Since studios had creative control over every aspect of film production and distribution—from directors to actors to costume design—they pioneered new ways to publicize themselves, turning their lucrative movies into even more commercial gold.

Cinema-styled fashion provided more than just an element of intrigue and a clothing choice that differed from what was regularly sold in shops. It all came down to the magic of movies: the fantasy introduced through films’ various plotlines, eras and settings entered people’s homes through their personal wardrobes. These commercial adaptations (sometimes knockoffs, sometimes officially licensed) were sold to a mass market of moviegoers. Manufactured at a low cost with less tailoring and cheaper fabrics, the dresses were sold at an affordable retail price.

One of the first such endeavors came from Hollywood Fashion Associates, a group of fashion manufacturers and wholesalers who got the copyrights to popular Hollywood styles and sold them in exclusive stores in Los Angeles in the late 1920s. Similarly, in 1928, The Country Club Manufacturing Company relied on proprietary styles modeled by recognizable film stars to entice buyers.

Fashionable Americans had been taking their cues from French haute couture designers like Coco Chanel, Paul Poiret, Jeanne Lanvin and Madeleine Vionnet for years. These looks were of course reflected in glamorous Hollywood productions, but with this new merchandising brainchild, movie studios could capitalize on their own in-house designers instead. “The studios were determined never again to be at the mercy of a small group of French designers," wrote Edith Head, herself one of Hollywood's most famous costumers. "If stars were hot on the social circuit, studio designers were asked to fashion personal wardrobes for them too." 

Studios partnered with stores nationwide, producing themed shops with names such as Warner Brothers Studio Styles, Hollywood Fashions and Macy’s Cinema Fashion Shops. They worked with popular magazines to promote their movies as the place to discover fashionable trends.

Studios and retailers publicized the new looks alongside the movie release in fan publications similar to tabloids, including Hollywood Picture Play, Mirror Mirror, and Shadow Play, among others. Esteemed fashion magazines like Vogue also included advertisements for cinema fashion. This outlet turned costume designers into trendsetters. Often these magazines showcased or simply mentioned the contracted studio stars, as it had become apparent that they had a major influence on consumer behavior. In Crawford films like Letty Lynton, writes historian Howard Gutner, the focus on fashion “would become overwhelming, to the point where almost everything in films, including the direction, would take a backseat.”

Image by Library of Congress National Audio Visual Conservation Center. RKO Radio Pictures wrote about who was involved “in cooperation” with the designs copied from their 1935 film, Roberta. The Film Daily (p.16), January-March 1935.

Image by Library of Congress, Motion Picture, Broadcasting and Recorded Sound Division. Left: Photograph of a dress commercially sold by Warner Brothers Studio Styles, designed by Orry-Kelly and inspired by a costume from the WB 1933 film Anthony Adverse. Right: This dress may not have been an Orry-Kelly design, but it was copied by WB for their Studio Styles brand as well. Modern Screen (p.74) Dec 1935 - Nov 1936.

Image by Library of Congress Motion Picture, Broadcasting and Recorded Sound Division. The Warner Bros. strategy to elevate cinema fashion to buyers. Hollywood Magazine, January-November, 1935.

Image by Library of Congress National Audio Visual Conservation Center. Riding on the Lynton success, MGM creates a stir with new film Today We Live. The New Movie Magazine (p.53), January-June 1933.

Image by Vogue. This Studio Styles advertisement lists Warner Bros. shop locations situated inside the area’s bigger retail stores. Vogue, September 15, 1935.

Image by Photoplay. An example of how cinema dress was displayed at The Carl Co. Cinema Fashions, published in Photoplay (p.54), December 1934.

Image by Courtesy Ulanda Blair, ACMI. A letter from Warner Bros. Assistant Secretary Roy Obringer to Publicist Morris Ebenstein about Studio Styles. Orry-Kelly was resistant to WB using his name on the Studio Styles line.

In 1930, producer Samuel Goldwyn took a reverse route by bringing Coco Chanel, one of the world’s most famous designers, to the U.S. to design costumes for his films in a short-lived collaboration. In the same year, Macy’s became the first department store to carry film-inspired fashion, selling evening to casual wear at price points in today’s moderate-to-better fashion range of $200 to $500.

The mainstream fashion industry leveraged formal couture showcases and print publications to spread trends. So did film fashions. Cinema-inspired clothing coincided with film debuts rather than seasonal fashion shows. Marketing in trade publications and on the radio created a sense of timely excitement. Fans could buy a ticket to see the desirable looks, or go to the shop to catch them before they disappeared.

Studios led the way in fashion trends, too, sharing their plans for upcoming films, as early as a year in advance, with Bernard Waldman’s Modern Merchandising Bureau (MMB), a large-scale clothing producer. The result was that when a film premiered, the new fashions would, too—and in turn, the apparel served as an ad for the movie and its studio.

Now, women of all walks of life and in all parts of the country could access cutting-edge fashion without traveling to Paris. But Waldman wasn’t done yet. He franchised more than 400 Cinema Fashion Shops nationwide, and another 1,400 stores sold star-endorsed styles. He had competition, though, from Warner Brothers’ Studio Styles. Established in 1934, this highly lucrative product line featured licensed designs inspired by the studio’s leading costume designers. When not featuring actresses in promotions, Warner Brothers publicized its star designer, Orry-Kelly, making him a sought-after crossover from costume design to fashion design—similar to Adrian Greenberg.

Adrian—now famous enough to be known by his first name alone—had designed costumes for stars like Joan Crawford, Greta Garbo and Norma Shearer. He got in on the licensing action, too. Macy’s created a line based on Adrian’s costumes for MGM’s 17th-century drama Queen Christina (1933) starring Garbo. Eventually, he used his success to launch a fashion career, leaving Hollywood to start his own fashion house in the 1940s.

But, just as fashion trends come and go, so too did the commercialization of film-inspired fashion. Eventually, the power of the studio system waned, weakening their centralized marketing machine. And as the Golden Age of Hollywood faded, the movie industry was no longer seen as fashion-forward. In 1947, Christian Dior’s “new look” redefined the silhouette for modern women—and put French designers at the forefront of women’s fashion once more.

What became of the dresses that dictated a major shift in the entire fashion industry? Regrettably, early Hollywood costumes weren’t valued, preserved and exhibited as carefully as they are today. Over the years, costumes were rented out, refashioned, or simply lost. Similarly, relatively little evidence of cinema-inspired fashion survives. Through insider correspondence and 1930s fan magazines, though, we can see what was produced and sold in stores across the United States.

Many of the dresses that captured the American imagination through a bit of movie magic are treasures, stowed away in homes across the country. While not originals, retail replicas serve as an invaluable fashion reference, helping fill the gap left by original costumes worn in beloved films before they were deemed of sufficient value to collect. 

The evolution of Phyllis Diller's career in 7 objects

National Museum of American History

Phyllis Diller is widely considered the first female stand-up comic to perform as a solo act. While she is mainly known for her career in comedy, there were many other dimensions to her life. Not only could she tell jokes, but she could sing, act, paint, play piano, and more. Follow along as I trace the many talents of Phyllis Diller through time using objects in the museum's collection. You can help us learn even more about Diller's work by helping to transcribe jokes from her gag file with the Smithsonian Transcription Center.

1. The gag file

A view from the front of a large, medium brown file cabinet comprising many smaller drawers with labels on the outside. The bust of a woman with short hair sits on top and is flanked by two vinyl album covers portraying a woman with light-colored hair.

Diller began performing stand-up in 1955. As her act grew, she began to use a filing cabinet to store and organize the plethora of jokes she used on stage. Diller's gag file consists of a steel cabinet with 48 drawers, along with a three-drawer expansion, containing 52,569 three-by-five index cards that each hold a typewritten joke or gag. These index cards are each dated and organized alphabetically by subject, ranging from accessories to world affairs and covering almost everything in between. Throughout her career Diller would often be asked to perform a short comedy set on one particular topic. This gag file would help her quickly gather individual jokes to create a set about a specific theme. Diller continued to add to the file and edit jokes through the 1990s.

2. Pink dress from The Pruitts of Southampton

A satiny fuchsia dress with a high neck, long sleeves, and a large skirt that looks like it is covered at the ends with petals, which also appear on the ends of the sleeves.

Diller's first television show, The Pruitts of Southampton, debuted in September 1966. She wore this dress during the title sequence for the show. The show focused on the main character, Phyllis Pruitt, and her family living in their large mansion and attempting to keep up appearances after losing all of their money. The show struggled to gain popularity and changed its title to The Phyllis Diller Show in January 1967 before airing its last episode in April of that year. While The Pruitts of Southampton was not a success, Diller starred in another television show in 1968 titled The Beautiful Phyllis Diller Show, as well as many other TV specials and guest appearances throughout her career.

3. 1966 USO Christmas Tour dress

A mannequin wears a blonde, spiky wig and a lacquered shift dress with green and gold daisies. The bottom hem, neckline, and sleeves are ruffled with teal, wrist-length gloves and a cigarette holder. Her shoes are green ankle-high booties with low heels and gold details.

After they met early in Diller's career, she and Bob Hope became lifelong friends. She starred in three movies with him and appeared in many of his television specials. She also joined him for two of his USO Christmas tours. On the 1966 USO Christmas tour she wore this ensemble as she toured Vietnam, Thailand, Guam, Wake Island, and the Philippines entertaining troops. Diller also joined Hope on his USO tour of the Persian Gulf in 1987; in 1978 she had been awarded the USO Liberty Bell Award "for demonstrating concern for the welfare and morale of America's armed forces."

4. Hello, Dolly! costume

A red dress with a bold design that looks sewn on running down the front with a short fringe. It is sleeveless with a sweetheart neckline.

In 1970 Diller starred as Dolly Gallagher Levi in Hello, Dolly! for three months at the St. James Theatre on Broadway. Diller followed Carol Channing, Ginger Rogers, Martha Raye, Pearl Bailey (in a version with an all-black cast) and Betty Grable in the role and was replaced by Ethel Merman, who closed out the show in December 1970.

5. "Miss Fun Fishing 1973" trophy

A standard trophy with a golden fish leaping out of the top, creating splashes of water after it. Diller's name is inscribed on the front.

In 1973 Diller was named "Miss Fun Fishing 1973" after posing nude for Field and Stream magazine. The award was presented "on the occasion of her selection as the first centerfold in the magazine's 78 year history." The trophy goes on to explain that "fishing and humor are two universally popular sources of entertainment for people of all ages." The centerfold in the June 1973 issue of Field and Stream showed Diller wearing fishing waders that covered her entire body except for her shoulders and arms.

6. Beaded dress from The Symphonic Phyllis Diller

A cream-colored full-length gown with silver detailing near the hem with sequins and crystals, and leaves, flowers and starbursts up the front. The mannequin wearing it also has on white gloves and a spiky blonde wig.

While known for her stand-up comedy, Diller was originally trained as a classical pianist. She returned to the piano in the early 1970s, and from 1971 to 1982 she performed with over 100 symphony orchestras across the United States and Canada in a show titled The Symphonic Phyllis Diller. She wore this dress during the performances when she would seriously play pieces by Beethoven, Bach, and others as a solo pianist with an orchestra while integrating comedic elements.

7. Self-portrait

A portrait in a gold frame with cream matting. The portrait is on a robin's-egg blue background. Coral smiling lips and bright turquoise eyes are surrounded by vigorous brush strokes that make a loose but representational portrait. "diller" is inscribed near the portrait's neck.

Diller began painting for pleasure in the mid-1980s. During this time she was staying in a large suite at Harrah's in Reno, Nevada, where she had enough space to set up several easels and canvases. She describes her technique as painting quickly and without too much thought about each individual painting. This quick style allowed her to complete anywhere from ten to 25 paintings per day. She mainly painted faces, including this self-portrait.

Although Diller was most known for her comedy success, she was able to explore her numerous other talents throughout life. Diller continued to perform stand-up comedy routines until she retired in 2002 at the age of 85. But along the way she honed her other skills to create an incredible, multifaceted career.

Hanna BredenbeckCorp is a project assistant in the Division of Culture and the Arts.

Join the transcription project over at the Smithsonian Transcription Center and join the conversation on Twitter with #DillerFile

The digitization of Phyllis Diller's index card collection was generously supported by Mike Wilkins and Sheila Duignan. 

Posted Date: 
Monday, March 13, 2017 - 08:00

T-Minus an Hour and a Half

National Air and Space Museum
T-Minus an Hour and a Half. A crowd of people are sitting and standing, thus forming a low "V" shaped composition. The people on the left are sitting on bleachers, one man is standing with a camera on the right, and many are wearing sunglasses. A platform with a loudspeaker on its base is behind the people on the left. A launch pad is visible on the horizon to the right. Writing in the lower right corner reads: "T-minus an hour & half or so. SA-7 - Kennedy Air Force Eastern Test Range Site -2."

In March 1962, James Webb, Administrator of the National Aeronautics and Space Administration, suggested that artists be enlisted to document the historic effort to send the first human beings to the moon. John Walker, director of the National Gallery of Art, was among those who applauded the idea, urging that artists be encouraged "…not only to record the physical appearance of the strange new world which space technology is creating, but to edit, select and probe for the inner meaning and emotional impact of events which may change the destiny of our race."

Working together, James Dean, a young artist employed by the NASA Public Affairs office, and Dr. H. Lester Cooke, curator of paintings at the National Gallery of Art, created a program that dispatched artists to NASA facilities with an invitation to paint whatever interested them. The result was an extraordinary collection of works of art proving, as one observer noted, "that America produced not only scientists and engineers capable of shaping the destiny of our age, but also artists worthy to keep them company." Transferred to the National Air and Space Museum in 1975, the NASA art collection remains one of the most important elements of what has become perhaps the world's finest collection of aerospace themed art.

The spring of 1962 was a busy time for the men and women of the National Aeronautics and Space Administration. On February 20, John H. Glenn became the first American to orbit the earth. For the first time since the launch of Sputnik 1 on October 4, 1957, the U.S. was positioned to match and exceed Soviet achievements in space. NASA was an agency with a mission -- to meet President John F. Kennedy's challenge of sending human beings to the moon and returning them safely to earth by the end of the decade. Within a year, three more Mercury astronauts would fly into orbit. Plans were falling into place for a follow-on series of two-man Gemini missions that would set the stage for the Apollo voyages to the moon.

In early March 1962, artist Bruce Stevenson brought his large portrait of Alan Shepard, the first American to fly in space, to NASA headquarters.(1) James E. Webb, the administrator of NASA, assumed that the artist was interested in painting a similar portrait of all seven of the Mercury astronauts. Instead, Webb voiced his preference for a group portrait that would emphasize "…the team effort and the togetherness that has characterized the first group of astronauts to be trained by this nation." More important, the episode convinced the administrator that "…we should consider in a deliberate way just what NASA should do in the field of fine arts to commemorate the …historic events" of the American space program.(2)

In addition to portraits, Webb wanted to encourage artists to capture the excitement and deeper meaning of space flight. He imagined "a nighttime scene showing the great amount of activity involved in the preparation of and countdown for launching," as well as paintings that portrayed activities in space. "The important thing," he concluded, "is to develop a policy on how we intend to treat this matter now and in the next several years and then to get down to the specifics of how we intend to implement this policy…." The first step, he suggested, was to consult with experts in the field, including the director of the National Gallery of Art, and the members of the Fine Arts Commission, the arbiters of architectural and artistic taste who passed judgment on the appearance of official buildings and monuments in the nation's capital.

Webb's memo of March 16, 1962 was the birth certificate of the NASA art program. Shelby Thompson, the director of the agency's Office of Educational Programs and Services, assigned James Dean, a young artist working as a special assistant in his office, to the project. On June 19, 1962 Thompson met with the Fine Arts Commission, requesting advice as to how "…NASA should develop a basis for use of paintings and sculptures to depict significant historical events and other activities in our program."(3)

David E. Finley, the chairman and former director of the National Gallery of Art, applauded the idea, and suggested that the agency should study the experience of the U.S. Air Force, which had amassed some 800 paintings since establishing an art program in 1954. He also introduced Thompson to Hereward Lester Cooke, curator of paintings at the National Gallery of Art.

An imposing bear of a man standing over six feet tall, Lester Cooke was a graduate of Yale and Oxford, with a Princeton PhD. The son of a physics professor and a veteran of the U.S. Army Air Forces, he was fascinated by science and felt a personal connection to flight. On a professional level, Cooke had directed American participation in international art competitions and produced articles and illustrations for the National Geographic Magazine. He jumped at the chance to advise NASA on its art program.

While initially cautious with regard to the time the project might require of one of his chief curators, John Walker, director of the National Gallery, quickly became one of the most vocal supporters of the NASA art initiative. Certain that "the present space exploration effort by the United States will probably rank among the more important events in the history of mankind," Walker believed that "every possible method of documentation …be used." Artists should be expected "…not only to record the physical appearance of the strange new world which space technology is creating, but to edit, select and probe for the inner meaning and emotional impact of events which may change the destiny of our race." He urged quick action so that "the full flavor of the achievement …not be lost," and hoped that "the past held captive" in any paintings resulting from the effort "will prove to future generations that America produced not only scientists and engineers capable of shaping the destiny of our age, but also artists worthy to keep them company."(4)

Gordon Cooper, the last Mercury astronaut to fly, was scheduled to ride an Atlas rocket into orbit on May 15, 1963. That event would provide the ideal occasion for a test run of the plan Cooke and Dean devised to launch the art program. In mid-February, Cooke provided Thompson with a list of the artists who should be invited to travel to Cape Canaveral to record their impressions of the event. Andrew Wyeth, whom the curator identified as "the top artist in the U.S. today," headed the list. When the time came, however, Andrew Wyeth did not go to the Cape for the Cooper launch, but his son Jamie would participate in the program during the Gemini and Apollo years.

The list of invited artists also included Peter Hurd, Andrew Wyeth's brother-in-law, who had served as a wartime artist with the Army Air Force; George Weymouth, whom Wyeth regarded as "the best of his pupils"; and John McCoy, another Wyeth associate. Cooke regarded the next man on the list, Robert McCall, who had been running the Air Force art program, as "America's top aero-space illustrator." Paul Calle and Robert Shore had both painted for the Air Force program. Mitchell Jamieson, who had run a unit of the Navy art program during WW II, rounded out the roster. Alfred Blaustein was the only artist to turn down the invitation.

The procedures that would remain in place for more than a decade were given a trial run in the spring of 1963. The artists received an $800 commission, which had to cover any expenses incurred while visiting a NASA facility where they could paint whatever interested them. In return, they would present their finished pieces, and all of their sketches, to the space agency. The experiment was a success, and what might have been a one-time effort to dispatch artists to witness and record the Gordon Cooper flight provided the basis for an on-going, if small-scale, program. By the end of 1970, Jim Dean and Lester Cooke had dispatched 38 artists to Mercury, Gemini and Apollo launches and to other NASA facilities.

The art program became everything that Jim Webb had hoped it would be. NASA artists produced stunning works of art that documented the agency's step-by-step progress on the way to the moon. The early fruits of the program were presented in a lavishly illustrated book, Eyewitness to Space (New York: Abrams, 1971). Works from the collection illustrated NASA publications and served as the basis for educational filmstrips aimed at schoolchildren. In 1965 and again in 1969, the National Gallery of Art mounted major exhibitions of work from the NASA collection. The USIA sent a selection of NASA paintings overseas, while the Smithsonian Institution Traveling Exhibition Service created two exhibitions of NASA art that toured the nation.

"Since we …began," Dean noted in a reflection on the tenth anniversary of the program, the art initiative had resulted in a long string of positive "press interviews and reports, congressional inquiries, columns in the Congressional Record, [and] White House reports." The NASA effort, he continued, had directly inspired other government art programs. "The Department of the Interior (at least two programs), the Environmental Protection Agency, the Department of the Army and even the Veterans Administration have, or are starting, art programs." While he could not take all of the credit, Dean insisted that "our success has encouraged other agencies to get involved and they have succeeded, too."(5)

For all of that, he noted, it was still necessary to "defend" the role of art in the space agency. Dean, with the assistance of Lester Cooke, had been a one-man show, handling the complex logistics of the program, receiving and cataloguing works of art, hanging them himself in museums or on office walls, and struggling to find adequate storage space. In January 1974, a NASA supervisor went so far as to comment that "Mr. Dean is far too valuable in other areas to spend his time on the relatively menial …jobs he is often burdened with in connection with the art program."(6) Dean placed a much higher value on the art collection, and immediately recommended that NASA officials either devote additional resources to the program or get out of the art business and turn the existing collection over to the National Air and Space Museum, "where it can be properly cared for."(7)

In January 1974 a new building for the National Air and Space Museum (NASM) was taking shape right across the street from NASA headquarters. Discussions regarding areas of cooperation were already underway between NASA officials and museum director Michael Collins, who had flown to the moon as a member of the Apollo 11 crew. Before the end of the year, the space agency had transferred its art collection to the NASM. Mike Collins succeeded in luring Jim Dean to the museum, as well.

The museum already maintained a small art collection, including portraits of aerospace heroes, 18th- and 19th-century prints illustrating the early history of the balloon, an eclectic assortment of works portraying aspects of the history of aviation and a few recent prizes, including several Norman Rockwell paintings of NASA activity. With the acquisition of the NASA art, the museum came into possession of one of the world's great collections of art exploring aerospace themes. Jim Dean continued to build the NASM collection as the museum's first curator of art, and after his retirement in 1980, other curators continued to strengthen the role of art at the museum. More than three decades after its arrival, the NASA art accession of 2,091 works still constitutes almost half of the NASM art collection.

(1) Stevenson's portrait is now in the collection of the National Air and Space Museum (1981-627).

(2) James E. Webb to Hiden Cox, March 16, 1962, memorandum in the NASA art historical collection, Aeronautics Division, National Air and Space Museum. Webb's preference for a group portrait of the astronauts was apparently not heeded. In the end, Stevenson painted an individual portrait of John Glenn, which is also in the NASM collection (1963-398).

(3) Shelby Thompson, memorandum for the record, July 6, 1962, NASA art historical collection, Aeronautics Division, NASM.

(4) John Walker, draft of a talk, March 5, 1965, copy in the NASA art historical collection, Aeronautics Division, NASM.

(5) James Dean, memorandum for the record, August 6, 1973, NASA art historical collection, Aeronautics Division, NASM.

(6) Director of Planning and Media Development to Assistant Administrator for Public Affairs, January 24, 1974, NASA art historical collection, Aeronautics Division, NASM.

(7) James Dean to the Assistant Administrator for Public Affairs, January 24, 1974, copy in the NASA art historical collection, Aeronautics Division, NASM.

Tom D. Crouch

Senior Curator, Aeronautics

National Air and Space Museum

Smithsonian Institution

July 26, 2007

The Bold Accomplishments of Women of Color Need to Be a Bigger Part of Suffrage History

Smithsonian Magazine

The history of women gaining the right to vote in the United States makes for riveting material, notes Kim Sajet, director of the Smithsonian's National Portrait Gallery, in the catalog for the museum's upcoming exhibition, "Votes for Women: A Portrait of Persistence," curated by historian Kate Clarke Lemay. "It is not a feel-good story about hard-fought, victorious battles for female equality," Sajet writes of the show, which delves into the "past with all its biases and complexities" and pays close attention to women of color working on all fronts of a movement that took place in churches and hospitals, in statehouses and on college campuses. With portraiture as its vehicle, representing that story proved challenging in the search for and gathering of images; the Portrait Gallery's own collection is historically biased, with just 18 percent of its images representing women.

In this conversation, Lemay and Martha S. Jones, Johns Hopkins University’s Society of Black Alumni presidential professor and author of All Bound Up Together, reflect on the diverse experiences of the “radical women” who built an enduring social movement.

Many Americans know the names Susan B. Anthony or Elizabeth Cady Stanton, but the fight for suffrage encompassed a much wider range of women than we might have studied in history class. What “hidden stories” about the movement does this exhibition uncover?

Lemay: Putting together this exhibition revealed how much American women have contributed to history, and how little attention we have paid them.

For example, when you think of African-American women activists, many people know about Rosa Parks or Ida B. Wells. But I didn’t know about Sarah Remond, a free African-American who in 1853 was forcibly ejected from her seat at the opera in Boston. She was an abolitionist and was used to fighting for citizenship rights. When she was ejected, she sued and was awarded $500. I hadn’t heard this story before, but I was really moved by her courage and her activism, which didn’t stop—it just kept growing.

The exhibition starts in 1832 with a section called “Radical Women,” which traces women’s early activism. You don’t think of women in these very buttoned-up, conservative dresses as “radical” but they were—they were completely breaking from convention.

Jones: Some of these stories have been hiding in plain sight. In the section on “Radical Women,” visitors are re-introduced to a figure like Sojourner Truth. She is someone whose life is often shrouded in myth, both in her own lifetime and in our own time. Here, we have the opportunity to situate her as a historical figure rather than a mythical figure and set her alongside peers like Lucy Stone, who we more ordinarily associate with the history of women’s suffrage.

Zitkála-Šá by Joseph T. Keiley, 1898. Image courtesy of the National Portrait Gallery.

Frances Ellen Watkins Harper, unidentified artist, 1895. Image courtesy of the Stuart A. Rose Manuscript, Archives and Rare Book Library, Emory University.

Anna Julia Haywood (Cooper) by H. M. Platt, 1884. Image courtesy of Oberlin College Archives.

Ida A. Gibbs Hunt by H. M. Platt, 1884. Image courtesy of Oberlin College Archives.

Mary McLeod Bethune by William Ludlow Coursen, 1910 or 1911. Image courtesy of the State Archives of Florida, Collection M95-2, Florida Memory Image #PROO755.

Mary E. Church Terrell by H. M. Platt, 1884. Image courtesy of Oberlin College Archives.

Lucretia Coffin Mott, unidentified artist, c. 1865. Image courtesy of the National Portrait Gallery, gift of Frederick M. Rock.

Ida B. Wells-Barnett by Sallie E. Garrity, c. 1893. Image courtesy of the National Portrait Gallery.

The exhibition introduces us to more than 60 suffragists primarily through their portraits. How does this particular medium bring the suffrage movement to life?

Lemay: It's interesting to see how formal, conventional portraits were used by these "radical women" to demonstrate their respectability. For example, in a portrait taken in 1870, Sojourner Truth made sure not to be portrayed as a formerly enslaved person, even though such an image would have been considered more "collectible" and would have garnered her more profit. Instead, she manifested dignity in the way that she dressed and posed . . . she insisted on portraying herself as a free woman.

We see a strong element of self-awareness in these portraits. Lucretia Coffin Mott, a great abolitionist, dressed in Quaker clothing that she often made herself. She was specific about where she sourced her clothing as well, conveying the message that it wasn’t made as a result of forced labor.

On the exhibition catalogue cover, we see Mary McLeod Bethune, beautifully dressed in satin and lace. The exhibition presents the use of photography as a great equalizer; it afforded portraiture to more than just the wealthy elite.

Jones: The other context for African-American portraits, outside the bounds of this exhibition, is the world of caricature and ridicule that African-American women were subjected to in their daily lives. We can view these portraits as “self-fashioning,” but it is a fashioning that is in dialog with, and opposition to, cruel, racist images that are being produced of these women at the same time.

I see these images as political acts, both for making claims about womanhood but also making claims for black womanhood. Sojourner Truth’s garb is an interesting mix of Quaker self-fashioning and finely crafted, elegant fabrics. The middle-class trappings behind her are worth noticing. This is a contrast to later images of someone like Ida B. Wells, who is much more mindful of crafting herself in the fashion of the day.

African-American suffragists were excluded from many leading suffrage organizations of the late 19th and early 20th centuries due to discrimination. How did they make their voices heard in the movement?

Jones: I’m not sure African-American women thought there was only one movement. They came out of many movements: the anti-slavery movement, their own church communities, self-created clubs.

African-American women were oftentimes at odds with their white counterparts in some of the mainstream organizations, so they continued to use their church communities as an organizing base to develop ideas about women's rights. The club movement, begun to help African-American women see one another as political beings, became another foundation.

By the end of the 19th century, many of these women joined the Republican Party. In cities like Chicago, African-American women embraced party politics and allied themselves with party operatives. They used their influence and ability to vote at the state level, even before 1920, to affect the question of women’s suffrage nationally.

Lemay: The idea that there were multiple movements is at the forefront of “Votes for Women.” Suffrage, writ large, involves women’s activism for issues including education and financial independence. For example, two African-American women in the exhibition, Anna Julia Cooper and Mary McLeod Bethune, made great strides advocating for college preparatory schools for black students. It’s remarkable to see what they and other African-American women accomplished despite society’s constraints on them.

The 19th Amendment, ratified in 1920, did not resolve the issue of suffrage for many women of color and immigrant women, who continued to battle for voting rights for decades. Might we consider the Voting Rights Act of 1965 part of the 19th Amendment’s legacy?

Jones: Yes and no. I can’t say that the intention of the 19th Amendment was to guarantee to African-American women the right to vote. I think the story of the 19th Amendment is a concession to the ongoing disenfranchisement of African-Americans.

We could draw a line from African-Americans who mobilized for ratification of the 19th Amendment to the Voting Rights Act of 1965, but we’d have to acknowledge that is a very lonely journey for black Americans.

Black Americans might have offered a view that the purpose of the 19th Amendment was not to secure for women the right to vote, but to secure the vote so that women could use it to continue the work of social justice.

Of course, there was much work to be done on the question of women and voting rights subsequent to the 19th Amendment. The Voting Rights Act of 1965 was the point at which black men and women were put much closer to equal footing when it comes to voting rights in this country.

Is there one particular suffragist in “Votes for Women” who stood out for her persistence, perhaps serving as a guidepost for activists today?

Lemay: All of the suffragists showed persistence, but two that come to mind are Zitkála-Šá and Susette LaFlesche Tibbles—both remarkable Native-American women leaders. Their activism for voting rights ultimately helped to achieve the Indian Citizenship Act of 1924, which granted citizenship to all Native-Americans born in the United States. But their legacy stretched well beyond 1924. In fact, some states excluded Native-Americans from voting rights through the early 1960s, and even today, North Dakota disenfranchises Native-Americans by insisting that they have a physical address rather than a P.O. box. More than a century ago, these two women started a movement that remains essential.

Jones: My favorite figure in the exhibition is Frances Ellen Watkins Harper. Here’s a woman born before the Civil War in a slave-holding state who was orphaned at a young age. She emerges onto the public stage as a poet. She goes on to be an Underground Railroad and anti-slavery activist. She is present at the Women’s Convention of 1866 and joins the movement for suffrage.

The arc of her life is remarkable, but, in her many embodiments, she tells us a story that women’s lives aren’t only one thing. And she tells us that the purpose of women’s rights is to raise up all of humanity, men and women. She persists in advocating for a set of values that reflect the principles of human rights today.

On March 29, the Smithsonian’s National Portrait Gallery opens its major exhibition on the history of women’s suffrage—“Votes for Women: A Portrait of Persistence,” curated by Kate Clarke Lemay. The exhibition details the more than 80-year struggle for suffrage through portraits of women who represent different races, ages, abilities and fields of endeavor.

A version of this article was published by the American Women’s History Initiative.

Forget Bees: This Bird Has the Sweetest Deal With Honey-Seeking Humans

Smithsonian Magazine

Brrrr-Hm!

Cutting through the crushing morning heat of the African bush, that sound is the trill of the Yao honey hunters of Mozambique. The call, passed down over generations of Yao, draws an unusual ally: the palm-sized Indicator indicator bird, also known as the greater honeyguide.

These feathery creatures do just what their name suggests: lead their human compatriots to the sweet stuff. Mobilized by the human voice, they tree-hop through the African bush, sporting brown, tan and white plumage that blends into the dry landscape.

This remarkable bird-human relationship has been around for hundreds, maybe even hundreds of thousands, of years. And yet until now, no one had investigated exactly how effective the call is. A new study, published today in the journal Science, demonstrates just how powerful this local call is in guaranteeing a successful expedition.

The honeyguide collaboration is a striking example of mutualism, or an evolutionary relationship that benefits both parties involved. In this case, birds rely on humans to subdue the bees and chop down the hive, while humans rely on birds to lead them to the nests, which are often tucked away in trees high up and out of sight.

“There's an exchange of information for skills,” says Claire Spottiswoode, an evolutionary biologist at the University of Cambridge and lead author of the study. Neither species could accomplish the task alone. Cooperation begets a worthwhile reward for both: The humans gain access to the honey, while the honeyguides get to chow down on the nutritious beeswax.

The partnership can be traced back to at least 1588, when the Portuguese missionary João dos Santos took note of a small bird soaring into his room to nibble on a candle, and described how this wax-loving avian led men to honey. “When the birds find a beehive they go to the roads in search of men and lead them to the hives, by flying on before them, flapping their wings actively as they go from branch to branch, and giving their harsh cries,” wrote dos Santos (translated from Italian).

But it wasn’t until the 1980s that scientists got in on the game. Ornithologist Hussein Isack first studied the behavior among the Boran people of Kenya, armed with only a watch and compass. Isack elegantly demonstrated that honeyguides provide honey-seeking humans with reliable directional information. But it still remained unclear whether the flow of information was one-sided. Could humans also signal their desire for sweets to their feathered friends?

To answer this question, Spottiswoode and her colleagues recorded the trill-grunt call of Yao honey-hunters living in the Niassa National Reserve in northern Mozambique. For comparison, they also recorded the calls of local animals and the honey-hunters shouting Yao words. With GPS and speakers in hand, Spottiswoode and her colleagues set out with the Yao honey-hunters into the African bush. On each expedition, they played back a different recording, noting the honeyguides' response.

The researchers repeated the trips over and over, walking more than 60 miles in total. But it was worth it: they found that the Brrrr-Hm call effectively attracts and holds a honeyguide’s attention, more than tripling the chance that a honeyguide will lead humans to a bees’ nest compared to the other recorded sounds, says Spottiswoode.

“They're not just eavesdropping on human sounds,” says Spottiswoode. Rather, the Yao honey-hunting call serves as a message to the honeyguides that the human hunters are ready to search for honey, just as picking up a leash signals to your dog that it’s time for a walk. What’s remarkable in this case is that honeyguides, unlike dogs, are not trained and domesticated pets but wild animals.

"This is an important paper which experimentally verifies what Yao honey hunters say is true: that honeyguides are attracted by the specialized calls honey-hunters use," Brian Wood, an anthropologist at Yale University, said in an e-mail. Wood works with the Hadza people of Tanzania, who have formed similar relationships with the honeyguides. He notes that across Africa, local people have developed a range of different honeyguide calls, including spoken or shouted words and whistles.

A male greater honeyguide shows off his plumage in the Niassa National Reserve, Mozambique. Photo by Claire N. Spottiswoode.

A Yao honey-hunter eating part of the honey harvest from a wild bees' nest in the Niassa National Reserve, Mozambique. Photo by Claire N. Spottiswoode.

Yao honey-hunter Orlando Yassene hoists a bundle of burning dry sticks and green leaves up to a wild bees' nest in the Niassa National Reserve to subdue the bees before harvesting their honey. Photo by Claire N. Spottiswoode.

Yao honey-hunter Orlando Yassene holds a wild greater honeyguide female in the Niassa National Reserve, Mozambique. Photo by Claire N. Spottiswoode.

Yao honey-hunter Orlando Yassene harvests honeycombs from a wild bees' nest in the Niassa National Reserve, Mozambique. Photo by Claire N. Spottiswoode.

Researcher Claire Spottiswoode holds a wild greater honeyguide male that was temporarily captured for research. Photo by Romina Gaona.

Yao honey-hunter Orlando Yassene harvests honeycombs from a wild bees' nest in the Niassa National Reserve. This bee colony was particularly aggressive and, even with the help of fire, could only be harvested at night when the bees are calmer. Photo by Claire N. Spottiswoode.

Yao honey-hunter Musaji Muamedi gathers wax on a bed of green leaves, to reward the honeyguide that showed him a bees' nest. Photo by Claire N. Spottiswoode.

Honeyguides are brood parasites as well as mutualists. The pink chick—a greater honeyguide—stands over the corpses of three adopted bee-eater siblings that it killed using its sharp bill hooks. Photo by Claire N. Spottiswoode.

The female honeyguide, shown here, has slightly duller colors and a darker bill, and lacks the black throat of the male. Photo by Claire N. Spottiswoode.

In the past, cooperation between humans and wild animals may have been common as our ancestors domesticated various creatures, such as the wolf. But those creatures were "specifically taught to cooperate," Spottiswoode notes. In today's age of modern technology and globalization of trade, such interactions are increasingly rare. One modern example that the researchers cite in the paper is collaborative fishing between humans and dolphins in Laguna, Brazil. But most current human-wildlife interactions are one-sided, such as the human scavenging of carnivore kills, says Terrie Williams, a marine biologist at the University of California, Santa Cruz, who has studied the Laguna dolphins.

Indeed, as African cities grow and attain greater access to other forms of sugar, the honeyguide tradition is slowly dying out, Spottiswoode says. This makes it even more important to document the intricacies of such relationships while they still persist. "[The decline] really underlines the importance of areas like the Niassa Reserve where humans and wildlife co-exist, and these wonderful human-wildlife relationships can still thrive," she says.

Before you start seeking out your own honeyguide, you should know that these birds aren’t always so sweet-natured. Honeyguides are brood parasites, meaning that parents lay their eggs in the nest of another bird species. Once the chick hatches, the newborn pecks its adopted siblings to death in a violent effort to steal its new parents’ attentions and resources. “They're real Jekyll-and-Hyde characters,” says Spottiswoode, adding: “It's all instinctive, of course. [I’m] placing no moral judgement.”

The birds' parasitic nature makes it all the more mysterious how they learn these calls, since they clearly can't learn them from mom and dad. So now, Wood and Spottiswoode are teaming up to explore another option: that honeyguides might learn the calls socially, both within and between species. The researchers hope to study other honeyguide-hunter relationships to gain a better understanding of a collaboration that has endured throughout the ages.

Here's hoping it sticks around.

Everything You Ever Wanted to Know About Earth’s Past Climates

Smithsonian Magazine

In Silent Spring, Rachel Carson considers the Western sagebrush. “For here the natural landscape is eloquent of the interplay of forces that have created it,” she writes. “It is spread before us like the pages of an open book in which we can read why the land is what it is, and why we should preserve its integrity. But the pages lie unread.” She is lamenting the disappearance of a threatened landscape, but she may just as well be talking about markers of paleoclimate.

To know where you’re going, you have to know where you’ve been. That’s particularly true for climate scientists, who need to understand the full range of the planet’s shifts in order to chart the course of our future. But without a time machine, how do they get this kind of data?

Like Carson, they have to read the pages of the Earth. Fortunately, the Earth has kept diaries. Anything that puts down yearly layers—ocean corals, cave stalagmites, long-lived trees, tiny shelled sea creatures—faithfully records the conditions of the past. To go further back, scientists pull sediment cores from the ocean bottom and drill ice cores at the icy poles; both write their own memoirs in bursts of ash and dust and bubbles of long-trapped gas.

In a sense, then, we do have time machines: Each of these proxies tells a slightly different story, which scientists can weave together to form a more complete understanding of Earth’s past.

In March, the Smithsonian Institution’s National Museum of Natural History held a three-day Earth’s Temperature History Symposium that brought teachers, journalists, researchers and the public together to enhance their understanding of paleoclimate. During an evening lecture, Gavin Schmidt, climate modeler and director of NASA’s Goddard Institute for Space Studies, and Richard Alley, a world-famous geologist at Pennsylvania State University, explained how scientists use Earth’s past climates to improve the climate models we use to predict our future.

Here is your guide to Earth’s climate pasts—not just what we know, but how we know it.

How do we look into Earth’s past climate?

It takes a little creativity to reconstruct Earth's past incarnations. Fortunately, scientists know the main natural factors that shape climate. They include volcanic eruptions whose ash blocks the sun, changes in Earth's orbit that shift sunlight to different latitudes, circulation of oceans and sea ice, the layout of the continents, the size of the ozone hole, blasts of cosmic rays, and deforestation. Most important of all, though, are the greenhouse gases that trap the sun's heat, particularly carbon dioxide and methane.

As Carson noted, Earth records these changes in its landscapes: in geologic layers, fossil trees, fossil shells, even crystallized rat pee—basically anything really old that gets preserved. Scientists can open up these diary pages and ask them what was going on at that time. Tree rings are particularly diligent record-keepers, recording rainfall in their annual rings; ice cores can keep exquisitely detailed accounts of seasonal conditions going back nearly a million years.

Ice cores reveal annual layers of snowfall, volcanic ash and even remnants of long-dead civilizations. (NASA's Goddard / Ludovic Brucker)

What else can an ice core tell us?

“Wow, there’s so much,” says Alley, who spent five field seasons coring ice from the Greenland ice sheet. Consider what an ice core actually is: a cross-section of layers of snowfall going back millennia.

When snow blankets the ground, it contains small air spaces filled with atmospheric gases. At the poles, older layers become buried and compressed into ice, turning these spaces into bubbles of past air, as researchers Caitlin Keating-Bitonti and Lucy Chang write in Smithsonian.com. Scientists use the chemical composition of the ice itself (the ratio of the heavy and light isotopes of oxygen in H2O) to estimate temperature. In Greenland and Antarctica, scientists like Alley extract inconceivably long ice cores—some more than two miles long!
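The article doesn't spell out that ratio, but the standard bookkeeping geochemists use is "delta notation." As a sketch of the convention (an addition here, not a formula from the article):

\[
\delta^{18}\mathrm{O} = \left(\frac{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{sample}}}{(^{18}\mathrm{O}/^{16}\mathrm{O})_{\mathrm{standard}}} - 1\right) \times 1000
\]

The result is expressed in parts per thousand (‰); in polar ice, more negative values generally point to colder conditions, because the heavier isotope tends to rain out of moist air before it reaches the poles.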

Ice cores tell us how much snow fell during a particular year. But they also reveal dust, sea salt, ash from faraway volcanic explosions, even the pollution left by Roman plumbing. “If it’s in the air it’s in the ice,” says Alley. In the best cases, we can date ice cores to their exact season and year, counting up their annual layers like tree rings. And ice cores preserve these exquisite details going back hundreds of thousands of years, making them what Alley calls “the gold standard” of paleoclimate proxies.

Wait, but isn’t Earth’s history much longer than that?

Yes, that's right. Paleoclimate scientists need to go back millions of years—and for that we need things even older than ice cores. Fortunately, life has a long record. The fossil record of complex life reaches back somewhere around 600 million years. That means we have definite proxies for changes in climate going back approximately that far. Among the most important are the teeth of conodonts—extinct, eel-like creatures—which go back 520 million years.

But some of the most common climate proxies at this timescale are even more minuscule. Foraminifera (known as "forams") and diatoms are unicellular beings that tend to live on the seafloor, often no bigger than the period at the end of this sentence. Because they are scattered all across the Earth and have been around since the Jurassic, they've left a robust fossil record for scientists to probe past temperatures. Using oxygen isotopes in their shells, we can reconstruct ocean temperatures going back more than 100 million years.
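To give a flavor of how a shell becomes a thermometer, here is one classic carbonate calibration, in the spirit of Epstein and colleagues' 1953 equation; the exact coefficients vary from study to study and are an assumption here, not something given in the article:

\[
T(^{\circ}\mathrm{C}) \approx 16.5 - 4.3\,(\delta_c - \delta_w) + 0.14\,(\delta_c - \delta_w)^2
\]

where \(\delta_c\) is the oxygen-isotope value measured in the shell carbonate and \(\delta_w\) is that of the seawater the creature grew in.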

“In every outthrust headland, in every curving beach, in every grain of sand there is a story of the earth,” Carson once wrote. Those stories, it turns out, are also hiding in the waters that created those beaches, and in creatures smaller than a grain of sand.

Foraminifera. (Ernst Haeckel)

How much certainty do we have for the deep past?

For paleoclimate scientists, life is crucial: if you have indicators of life on Earth, you can interpret temperature based on the distribution of organisms.

But when we've gone back so far that there are no longer even any conodont teeth, we've lost our main indicator. Beyond that, we have to rely on the distribution of sediments and markers of past glaciers, which we can extrapolate to roughly indicate climate patterns. So the further back we go, the fewer proxies we have, and the less granular our understanding becomes. "It just gets foggier and foggier," says Brian Huber, a Smithsonian paleobiologist who helped organize the symposium along with Scott Wing, a fellow paleobiologist, research scientist and curator.

How does paleoclimate show us the importance of greenhouse gases?

Greenhouse gases, as their name suggests, work by trapping heat. Essentially, they form an insulating blanket for the Earth. If you look at a graph of past Ice Ages, you can see that CO2 levels and Ice Ages (or global temperature) align. More CO2 equals warmer temperatures and less ice, and vice versa. "And we do know the direction of causation here," Alley notes. "It is primarily from CO2 to (less) ice. Not the other way around."
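For readers who want a number to hang onto, a widely cited back-of-the-envelope formula for CO2's heat-trapping effect is the logarithmic approximation of Myhre and colleagues (1998). The short Python sketch below illustrates that textbook relationship; it is not a calculation from the article, and the helper name co2_forcing is ours:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate extra heat trapped by CO2, in watts per square meter,
    relative to a pre-industrial baseline of about 280 ppm.
    Uses the logarithmic fit of Myhre et al. (1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"{co2_forcing(415):.1f} W/m^2")  # roughly modern CO2: ~2.1 W/m^2
print(f"{co2_forcing(560):.1f} W/m^2")  # doubled CO2: ~3.7 W/m^2
```

In this approximation, doubling CO2 adds roughly 3.7 watts of trapped heat to every square meter of the planet's surface.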

We can also look back at specific snapshots in time to see how Earth responds to past CO2 spikes. For instance, during a period of extreme warming in Earth's Cenozoic era about 55.9 million years ago, enough carbon was released to roughly double the amount of CO2 in the atmosphere. The resulting hot conditions wreaked havoc, causing massive migrations and extinctions; pretty much everything that lived either moved or went extinct. Plants wilted. Oceans acidified and heated up to the temperature of bathtubs.

Unfortunately, this might be a harbinger for where we’re going. “This is what’s scary to climate modelers,” says Huber. “At the rate we’re going, we’re kind of winding back time to these periods of extreme warmth.” That’s why understanding carbon dioxide’s role in past climate change helps us forecast future climate change.

That sounds pretty bad.

Yep.

I’m really impressed by how much paleoclimate data we have. But how does a climate model work?

Great question! In science, you can’t make a model unless you understand the basic principles underlying the system. So the mere fact that we’re able to make good models means that we understand how this all works. A model is essentially a simplified version of reality, based on what we know about the laws of physics and chemistry. Engineers use mathematical models to build structures that millions of people rely on, from airplanes to bridges.
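In that spirit, here is a toy "climate model" in a few lines of Python: a zero-dimensional energy balance that sets absorbed sunlight against outgoing thermal radiation. This is a sketch for intuition only, orders of magnitude simpler than the models Schmidt's team runs:

```python
# Toy zero-dimensional energy-balance model: balance absorbed sunlight
# against outgoing thermal radiation (Stefan-Boltzmann law) to estimate
# a global average surface temperature.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant at Earth's orbit, W m^-2
ALBEDO = 0.3      # fraction of sunlight reflected straight back to space

absorbed = S0 * (1 - ALBEDO) / 4          # averaged over the whole sphere
bare_rock = (absorbed / SIGMA) ** 0.25    # no greenhouse layer: ~255 K
one_layer = 2 ** 0.25 * bare_rock         # one fully absorbing layer: ~303 K

print(f"Without a greenhouse layer: {bare_rock:.0f} K")
print(f"With one absorbing layer: {one_layer:.0f} K")
```

Even this crude sketch lands near the right answer (Earth's observed average is about 288 K, between the two cases); closing the remaining gap requires exactly the extra physics (clouds, convection, partial absorption) that full models carry.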

Our models are based on a framework of data, much of which comes from the paleoclimate proxies scientists have collected from every corner of the world. That’s why it’s so important for data and models to be in conversation with each other. Scientists test their predictions on data from the distant past, and try to fix any discrepancies that arise. “We can go back in time and evaluate and validate the results of these models to make better predictions for what’s going to happen in the future,” says Schmidt.

Here's a model:

It's pretty. I hear the models aren’t very accurate, though.

By their very nature, models are always wrong. Think of them as an approximation, our best guess.

But ask yourself: do these guesses give us more information than we had previously? Do they provide useful predictions we wouldn’t otherwise have? Do they allow us to ask new, better questions? “As we put all of these bits together we end up with something that looks very much like the planet,” says Schmidt. “We know it’s incomplete. We know there are things that we haven’t included, we know that we’ve put in things that are a little bit wrong. But the basic patterns we see in these models are recognizable … as the patterns that we see in satellites all the time.”

So we should trust them to predict the future?

The models faithfully reproduce the patterns we see in Earth's past, present—and in some cases, future. We are now at the point where we can compare early climate models—those of the late 1980s and 1990s that Schmidt's team at NASA worked on—to reality. "When I was a student, the early models told us how it would warm," says Alley. "That is happening. The models are successfully predictive as well as explanatory: they work." Depending on where you stand, that might make you say "Oh goody! We were right!" or "Oh no! We were right."

To check models’ accuracy, researchers go right back to the paleoclimate data that Alley and others have collected. They run models into the distant past, and compare them to the data that they actually have.
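Conceptually, that check can be as simple as scoring a hindcast against a proxy reconstruction. A minimal sketch, with made-up numbers standing in for real model output and proxy data:

```python
import numpy as np

# Hypothetical temperature anomalies (deg C) for five past intervals.
# These are synthetic values for illustration only, not real data.
proxy_reconstruction = np.array([-4.8, -2.1, 0.0, 1.2, 3.5])
model_hindcast = np.array([-4.5, -2.4, 0.1, 1.0, 3.9])

# Root-mean-square error: one common score for how well a model
# "recovers the past"; smaller is better.
rmse = np.sqrt(np.mean((model_hindcast - proxy_reconstruction) ** 2))
print(f"Hindcast RMSE: {rmse:.2f} C")
```

Real evaluations are far richer, comparing spatial patterns, seasonal cycles and more, but the principle is the same: run the model into the past, then measure the mismatch against what the proxies say actually happened.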

“If we can reproduce ancient past climates where we know what happened, that tells us that those models are a really good tool for us to know what’s going to happen in the future,” says Linda Ivany, a paleoclimate scientist at Syracuse University. Ivany’s research proxies are ancient clams, whose shells record not only yearly conditions but individual winters and summers going back 300 million years—making them a valuable way to check models. “The better the models get at recovering the past,” she says, “the better they’re going to be at predicting the future.”

Paleoclimate shows us that Earth’s climate has changed dramatically. Doesn’t that mean that, in a relative sense, today’s changes aren’t a big deal?

When Richard Alley tries to explain the gravity of manmade climate change, he often invokes a particular annual phenomenon: the wildfires that blaze in the hills of Los Angeles every year. These fires are predictable, cyclical, natural. But it’d be crazy to say that, since fires are the norm, it’s fine to let arsonists set fires too. Similarly, the fact that climate has changed over millions of years doesn’t mean that manmade greenhouse gases aren’t a serious global threat.

"Our civilization is predicated on stable climate and sea level," says Wing, "and everything we know from the past says that when you put a lot of carbon in the atmosphere, climate and sea level change radically."

Since the Industrial Revolution, human activities have helped warm the globe 2 degrees F, one-quarter of what Schmidt deems an “Ice Age Unit”—the temperature change that the Earth goes through between an Ice Age and a non-Ice Age. Today’s models predict another 2 to 6 degrees Celsius of warming by 2100—at least 20 times faster than past bouts of warming over the past 2 million years.
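Putting the article's own figures together (a back-of-the-envelope conversion, not a quote from Schmidt): if 2 degrees Fahrenheit is one-quarter of an Ice Age Unit, then

\[
1\ \text{Ice Age Unit} \approx 4 \times 2\,^{\circ}\mathrm{F} = 8\,^{\circ}\mathrm{F} \approx 4.4\,^{\circ}\mathrm{C},
\]

so the projected 2 to 6 degrees Celsius of further warming by 2100 amounts to roughly half of, up to well over, a full Ice Age Unit of change packed into a single century.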

Of course there are uncertainties: “We could have a debate about whether we’re being a little too optimistic or not,” says Alley. “But not much debate about whether we’re being too scary or not.” Considering how right we were before, we should ignore history at our own peril.

From Sea to Shining Sea: Great Ways to Explore Canada

Smithsonian Magazine

Lake Louise, one of the world’s most beautiful compositions of water, rock and ice, belongs to Canada. The small lake attracts throngs of tourists while serving as a stepping stone to surrounding wilderness areas of the Rocky Mountains. Photo courtesy of Flickr user biberfan.

Americans love Canada. Year after year, Americans polled by Gallup indicate that they have a strong affinity toward Britain, Germany, Japan, France and India. But Canada consistently scores higher than any other place. In 2013, 90 percent of Americans polled said they have a “favorable” impression of our neighbor to the north. Only 6 percent gave an “unfavorable” rating. Americans’ love of Canada may be easy to explain: Canada is friendly, safe, familiar and mostly English-speaking. Its cities are sophisticated and modern—especially Vancouver, at the edge of both mountain and sea, and Montreal, known largely for its 17th-century architecture. Though many travelers are true adventurers with an appetite for the strange and foreign, it may be Canada’s very lack of the exotic that so appeals to the majority of Americans.

But perhaps Canada’s greatest virtue is its wilderness—some of the finest, most unspoiled land anywhere. The wild Canadian Rockies resemble their counterpart peaks to the south, but they are less trammeled, less cut by highways and more extensive, running as far north as the lonesome Yukon. In the rivers of western British Columbia, salmon still teem, as lower-48 Americans can only imagine from black-and-white photos from a century ago. Far to the east, the cod-fishing communities of Newfoundland and Nova Scotia are quaint and cozy, with an irresistible Scandinavian charm. Canada’s wildlife, too, trumps America’s. Between grizzly bears, black bears, cougars and wolves, large predators roam virtually every acre of the nation, whereas the lower 48 states have been hacked into a fragile patchwork of preserved places. There are elk, caribou, bison and moose throughout Canada. Indeed, the nation’s wild creatures and places embody the Wild West that America conquered—and that’s before we consider the polar bears, all 15,000 or more of them living along Canada’s Arctic coast and Hudson Bay. Indeed, Canada’s far north is like no other place. Tundra studded by thousands of lakes and drained by long and wild rivers makes for a canoer’s and fisherman’s paradise.

Here are a few adventure travel ideas to bring you into the best of Canada’s wild country:

The brook trout is one of the most beautiful of salmonids and an iconic game fish in eastern Canada. This brookie, held by angler Bill Spicer, weighs about eight pounds and was caught and released in Osprey Lake, in Labrador. Photo courtesy of Colin McKeown and JenCor Entertainment Inc.

Fly Fishing for the Labrador Brook Trout. Many American anglers know the brook trout as a dainty sliver of fish, speckled beautifully with blue-and-red spots and worm-like vermiculations. It's a fish as pretty as it is little, happy to bite a fly, and often grossly overpopulated in the waters to which it has been introduced throughout America. But in eastern Canada, the brook trout—actually a species of char—is comfortably at home—and big. The species originated in the streams and lakes here, and nowhere else do brookies grow so huge. Brook trout as large as 15 pounds have been caught throughout eastern Canada, but Labrador is especially famous for its consistently bulky specimens. The Churchill River system—both above and below the 245-foot Churchill Falls—boasts large brook trout, and lots of them. So does the smaller Eagle River system, among other drainages. Local lodges and guide services offer packaged trips based around river fly fishing, in case you need a soft pillow and someone to cook you dinner each night. More rewarding, if more challenging, is to strike out on your own. Other species to expect while pursuing big brookies include northern pike, lake trout, Arctic char and, in some river systems, wild Atlantic salmon. As you hike, watch for bears, moose, eagles and other iconic creatures of the American wilderness. Canadian, that is.

From the heights of Gros Morne National Park, visitors find knee-buckling, jaw-dropping vistas of Newfoundland’s glacial lakes and fjords. Traveling by bicycle is an excellent way to see Canada’s easternmost island. Photo courtesy of Flickr user dugspr-Home for Good.

Cycle Touring Newfoundland. Rocky shorelines, small winding roads, villages hundreds of years old, mountains, cliffs, clear waters and fjords: Such features make up the eastern island of Newfoundland, one of Canada's most beautiful islands. With its international airport, the capital city of St. John's makes an ideal starting point for a cycling tour of the Avalon Peninsula. Though just a small promontory on Newfoundland's south side, the Avalon Peninsula features a great deal of shoreline and enough scenery and culture to keep one occupied for weeks. Place names like Chance Cove, Random Island, Come by Chance, Witless Bay and Portugal Cove embody the rugged geography's happenstance, blown-by-the-wind spirit. However early North American explorers may have felt about landing upon these blustery shores, for travelers of today, the area is a renowned gem. On the main body of the island of Newfoundland, cyclists also find magnificent exploration opportunities along the north-central coast—a land of deep inlets and rocky islands for hundreds of miles. Another touring option takes travelers from Deer Lake, near the western coast, northward through Gros Morne National Park, the Long Range Mountains, and all the way to the north end of the island, at L'Anse aux Meadows, the site of an excavated Viking dwelling. Camping in the wild is easy in Newfoundland's open, windswept country—and even easier in the wooded interior. But note that distances between grocery stores may be great, so pack food accordingly. Also note that the folks here are reputedly friendly, which—in Newfoundland—can translate into moose dinners in the homes of strangers. Pack wine or beer as a gift in return. Not a cyclist? Then get wet. The coast of the island offers a lifetime's worth of kayak exploration. Want to get really wet? Then don a wetsuit and go snorkeling. The waters are clear and teeming with sea life and shipwrecks.

Clear blue waters make the coastal coves and reefs of eastern Canada prime SCUBA diving or snorkeling destinations. Photo by Matt Kadey.

Hiking in the Canadian Rockies. Though the mountains are rocky, the trout streams clear and the woods populated by elk, wolves and bears—you aren't in Montana anymore. The Canadian Rockies are much like the same mountain range to the south—but they're arguably better. Fewer roads mean less noise, fewer people and more wildlife. A great deal of the Canadian Rockies is preserved within numerous wilderness areas, as well as the famed Jasper and Banff national parks. Cycling is one way to access the vast reaches of wild country here—but no means of motion is so liberating in this rough country as walking. So tie your boot laces at Lake Louise, often considered the queen attraction of the region, or in the town of Banff itself, then fill a pack with all the gear and food of a self-sufficient backpacker and hike upward and outward into some of the most wonderful alpine country of Alberta, and the whole of North America.

Canoeing the South Nahanni River. This tributary of the great Arctic-bound Mackenzie River system is considered the iconic wilderness canoeing experience of Canada and one of the most epic places to paddle on our planet. The South Nahanni runs 336 miles from the Mackenzie Mountains, through the Selwyn Mountains and into the Liard River, which in turn empties into the mighty Mackenzie. The South Nahanni flows for much of its length through the Nahanni National Park Reserve, a Unesco World Heritage site, and has carved some spectacular canyons through the ages, making for cathedral-like scenery as spirit-stirring as Yosemite. The region is practically roadless, and while hikers may find their way through the mountains and tundra of the South Nahanni drainage, the most comfortable and efficient means of exploring the area is probably by canoe. Most paddlers here either begin or end their trips at the enormous Virginia Falls, a spectacular cascade that includes a free-fall of 295 feet and a total vertical plunge of 315 feet—twice the height of Niagara Falls. Others portage around the falls on full-river excursions that can last three weeks. Serious yet navigable whitewater sections can be expected, though most of these rapids occur in the first 60 miles of the river, before the South Nahanni levels out en route to the Arctic Ocean. Not a single dam blocks the way, and wilderness enthusiasts have the rare option of continuing down many hundreds of miles of virgin river, all the way to the sea.

Not too close for comfort: Nowhere in the world can tourists get so close to polar bears while remaining so perfectly secure as in Churchill, Manitoba, where polar bears verily swarm the shoreline each fall waiting for the ice to freeze. Photo courtesy of Flickr user cell-gfx.

Seeing Churchill’s Polar Bears. Americans killed off most of their own big bears—namely the grizzly—as they pushed through the frontier and settled the West. In Churchill, however, locals have learned to live in a remarkably intimate relationship with the greatest bear of all. Polar bears gather along the coast of Hudson Bay in great numbers each autumn as the days shorten and temperatures drop. As long as the sea remains unfrozen, the bears stay around, and sometimes within, the town of 800 people. The animals wrestle, fight, climb over their mothers, roll on their backs and soak in the low-hanging sun, and tourists love it. Thousands come every year to see Churchill’s bears. If you do, don’t go hiking. The bears are wild animals and may be the most dangerous of all bear species. Instead, book in advance and join a tour in one of the bear-proof vehicles called “tundra buggies” that venture from Churchill onto the barren Canadian moors, rolling on monster tires as paying clients lean from the windows with cameras. The bears often approach the vehicles and even stand up against the sides to greet the awed passengers. Long lenses may never leave the camera bag, and wildlife photography rarely gets easier than in the town rightly dubbed the “Polar Bear Capital of the World.”

Taste Wine and Pick Peaches in the Okanagan Valley. Between so much adventuring through field, mountain and stream, wine tasting may be a welcomed diversion—and, yes, they make good wine in Canada. The Okanagan Valley of British Columbia is the chief producing region. A sliver of fertile farm country about 130 miles north to south, the Okanagan Valley lies just west of the Rockies and about four hours' drive east of Vancouver. Crisp white wines—like Pinot Blanc, Gewurztraminer and Riesling—are the Okanagan Valley's claim to fame, while many wineries produce reds like Syrah, Cabernet Franc and Pinot Noir. The valley is also famous for its roadside fruit stands, where heaps of apples, pears, apricots, peaches and cherries may prove irresistible to those pedaling bicycles. Many farms offer "U-Pick" deals—the best way to get the freshest fruit. But what sets this wine-and-fruit valley apart is how the vineyards are planted smack in the midst of some of the continent's most tremendous and wild mountains—a juxtaposition of elegant epicurean delights and classic North American wilderness that, perhaps, only Canada could offer.

A rack of Canadian Cabernet Sauvignon proves the Okanagan Valley’s capacity to produce bold, burly red wines. Photo courtesy of Flickr user iwona_kellie.
