The genus of spiders known as Ariamnes has always had a few tricks up its sleeve. Measuring a maximum of just two centimeters, it can hide from predators by camouflaging itself to look like a stick in the forest. Now, scientists have discovered that these stick spiders also have a peculiar evolutionary history.
As Elizabeth Pennisi reports for Science, new research by Rosemary Gillespie, an evolutionary ecologist at the University of California, Berkeley, and her colleagues demonstrates parallel evolution in the genus — that is, spiders on different Hawaiian islands independently evolved species of three different colors: gold, black, and white.
They also found that since arriving in Hawaii, the original group of spiders has diversified, spreading out across five of the islands. That one species formed 14 more over the years, George Dvorsky reports for Gizmodo, creating a total of 15 different stick spider species still alive in Hawaii—four of which the researchers identified in this latest study. Their research was published Thursday in the journal Current Biology.
For the study, the researchers examined the genetics of the spiders on five of the Hawaiian islands where gold and dark species of the spider live. Two of the islands are also home to a white version. Using this data, they constructed a family tree to study the relationships between the creatures.
While it would make sense that spiders of the same color would be the most closely related, Gillespie’s team found that wasn't the case. Residents of a single island were most closely related, while like-colored critters of other islands were distant relatives. This suggests that each island was likely colonized by one spider that then evolved into different-colored species.
“It’s one of the coolest hidden [examples] of animals evolving new species,” Robert Fleischer, a conservation genomicist at the Smithsonian Conservation Biology Institute, tells Pennisi.
Gillespie and her team think the first Ariamnes was dark or gold. It likely landed on the islands about 2 to 3 million years ago, inhabiting the oldest island of Kauai. In Hawaii, there weren’t many webs to steal food from, according to UC Berkeley's website. So the creatures started to trap and eat other spiders, as they still do today.
They each developed different colors where they could be effectively camouflaged in their own niches, Ed Yong reports for The Atlantic. One species became brown, living on rocks. Others became gold to hide on the undersides of leaves. Still others became a matte-white color, similar to lichen. They slowly hopped to the younger islands. And as they did, they would evolve.
What’s interesting is that throughout their history, the spiders would evolve over and over again in a predictable manner. While the different species of spiders have a similar body type, each has distinct physical traits. And their different colors allowed them to camouflage themselves in their environments.
“There are only a certain number of good ways to be a spider in these ecosystems, and evolution repeatedly finds those ways,” Catherine Wagner, an evolutionary biologist at the University of Wyoming, who was not involved in the new study, tells Yong.
This diversification of species into different environments is not unexpected. Most famously, Charles Darwin witnessed this phenomenon with finches on the Galapagos Islands. But with the spiders, there isn’t any additional differentiation during each bout of diversification. "They don't evolve to be orange or striped," Gillespie says in a press release. She believes that this suggests the spiders are preprogrammed to evolve quickly to be successful, but how that works exactly isn’t clear.
The study’s conclusions echo Gillespie's earlier work on Hawaii’s Tetragnatha spiders. The "spiny leg" Tetragnatha spiders developed colors tailored to their hunting habits. But in another group of Tetragnatha spiders, the evolution did not repeat itself. As Pennisi reports, the researchers believe that’s because this other group consists of web-building spiders that don’t have to find a place to hide from birds during the day.
Gillespie says in the statement that she hopes the research will provide insight into the predictable elements of evolution and the conditions that set the stage for such changes to occur.
No matter how you feel about them, cockroaches are something special. Cut a few legs off a nymph, and they grow back. Leave a few cookie crumbs in the carpet, and the critters seem to instantly zero in on them. Expose them to fecal material, bacteria and other pathogens, and homemade antibiotics will keep them healthy. On top of it all, they can eat just about anything, live in brutal conditions and laugh in the face of the toughest insecticides.
So what gives them these seeming superpowers? As Maggie Fox at NBC News reports, a new study suggests the answer is in their genes. Researchers at the Chinese Academy of Sciences in Shanghai sequenced the genome of the American cockroach, Periplaneta americana, revealing a Swiss army knife-like set of genes that makes the insects uber-adaptable.
It turns out that cockroaches have a massive genome; of the insects studied so far, the cockroach is second only to the locust. The American cockroach—which isn’t really American: it was likely transported to the Americas from Africa as early as 1625—is more closely related to termites than to the German cockroach, another major house pest that had its genome sequenced earlier this year. That’s not surprising, since termites turn out to be “eusocial cockroaches” and were moved into the same order as roaches earlier this year.
GenomeWeb reports that 60 percent of the cockroach’s genome contains repetitive elements. But it also includes 21,336 protein-encoding genes, 95 percent of which actually produce proteins. Many of those genes give cockroaches the tools to survive in urban environments. For instance, the cockroach has over 1,000 genes that code for chemical receptors that help them navigate the environment, including 154 olfactory receptors—twice as many as the other creepy-crawlies in its insect order—that allow them to pinpoint the burrito bits you dropped. It also has 522 gustatory receptors, with many of them able to detect bitterness, which may help them tolerate potentially toxic foods. The bugs’ genomes also encode certain enzymes that can help them resist pesticides and survive extreme environments.
Fox reports that during its nymph stage, the cockroach is also capable of regeneration. This special skill is the main reason behind its Chinese nickname, xiao qiang, which means little mighty one, the researchers write in their paper in the journal Nature Communications. “It’s a tiny pest, but has very strong vitality,” lead author Sheng Li tells Steph Yin at The New York Times.
The research has several applications. For instance, knowing how the roaches are able to cope with insecticides could lead to better roach control. “The harm of American cockroaches is becoming more serious with the threat of global warming,” the authors write. “Our study may shed light on both controlling and making use of this insect.”
Li also tells Yin that his team hopes to look at the potential healing properties of cockroaches. In traditional Chinese medicine, cockroaches are ground up and used for all manner of treatments thanks to their regenerative powers. The team hopes to uncover the “growth factor” that makes that particular trick possible. “We’ve uncovered the secret of why people call it ‘xiao qiang,’” Li says. “Now we want to know the secrets of Chinese medicine.”
In recent years, researchers have uncovered a lot we wish we didn’t know about cockroaches. For instance, researchers have found that gut bacteria infuse roach poo with chemical cues that cause them to congregate. In other words, they like the smell of each other’s poop. Another study has shown that cockroaches have evolved to avoid the sugary baits that used to work on the pests in the 1980s and 1990s. Hopefully this latest study and future work will help scientists find a new remedy for—or another use of—the often despised but always durable cockroach.
Floridian Formica archboldi ants have eclectic interior decorating tastes, to say the least: Whereas most ant species are content to cozy up in sand- or soil-filled mounds, F. archboldi prefer to litter their underground nests with the dismembered limbs and decapitated heads of hapless prey.
This behavioral tic has baffled scientists since the species’ discovery in 1958, but as Hannah Osborne reports for Newsweek, a new study published in Insectes Sociaux reveals exactly how the deceptively deadly F. archboldi—which is typically not known for preying on other ants—targets a specific species of trap-jaw ant, or Odontomachus.
Researchers led by Adrian Smith of North Carolina State University and the North Carolina Museum of Natural Sciences have found that the key to these skull-collecting ants’ success is formic acid. F. archboldi spray their trap-jaw prey with the immobilizing chemical, then drag their kills back to the nest for dismemberment.
But trap-jaw ants are far from easy prey, Gemma Tarlach writes for Discover. Thanks to a set of spring-loaded mandibles capable of striking enemies at more than 41 meters per second, the trap-jaw ant is actually the more likely predator of the two species. In fact, Cosmos’ Nick Carne notes, scientists have previously posited that F. archboldi is either a highly specialized predator or a moocher of sorts, simply moving into abandoned trap-jaw nesting sites.
To better understand the relationship between F. archboldi and the trap-jaw ant, Smith and his team created a miniature test arena and pitted either an F. archboldi or Formica pallidefulva ant—a related species that has no known connection with Odontomachus—against a trap-jaw. Over the course of 10 trials, F. pallidefulva partially immobilized the trap-jaw just one time. Comparatively, F. archboldi bested the trap-jaw 10 out of 10 times. Seven out of 10 contests resulted in the trap-jaw’s complete immobilization.
The process of spraying victims with formic acid is known as chemical mimicry, according to Inverse’s Sarah Sloat. Trap-jaws are capable of producing the same formic acid as F. archboldi, but the latter happen to be more effective sprayers. Typically, chemical mimicry occurs amongst parasitic species that invade and overtake their prey. But, Smith tells Sloat, there is no evidence that F. archboldi is parasitic. Instead, the researchers suggest the ants’ deployment of formic acid is a defense mechanism designed to provide camouflage and ward off stronger predators.
In addition to observing interactions between Formica and trap-jaw ants, the team recorded high-speed footage of attacks and time-lapse footage of attack aftermaths.
“You could see the Formica ants pull in a trap-jaw ant from where they get their food and bring it into the nest,” Smith says in an interview with The Verge’s Rachel Becker. “And they’d start licking it, biting it, moving it around on the ground like they would with food. And then all of a sudden, 18 hours later, you’d see the head start to pop off of the trap-jaw ant. They would pull it apart, and start to dismember it.”
The new report offers insights on how these skull-gathering creatures trap their prey, but the exact reasoning behind the process remains unclear. As Smith tells Newsweek, he thinks the F. archboldi feed on the trap-jaws and leave behind their hollow head casings in a manner similar to humans casting off chicken bones after eating a pile of wings. Still, this explanation doesn’t fully account for the ant’s use of chemical mimicry, nor the long evolutionary history hinted at by the unusual predator-prey relationship.
“Formica archboldi is the most chemically diverse ant species we know of,” Smith says in a statement. “Before this work, it was just a species with a weird head-collecting habit. Now we have what might be a model species for understanding the evolution of chemical diversification and mimicry.”
Something strange is happening to the birds of Gilbert, Minnesota. In recent days, they have been flying perilously close to windows and cars, and generally seem to be discombobulated. According to CBS affiliate WCCO 4, the cause of this erratic behavior is one that might be familiar to some humans: The birds are drunk.
Gilbert police, who have been fielding reports of close encounters with bumbling birds, took to Facebook to explain that the animals have been feasting on fermented berries, which in turn is making them “a little more ‘tipsy’ than normal.” An early frost this year sped up the fermentation process, police said. As James Owen explained in a 2014 piece for National Geographic, freezing prompts berries to convert starches into sugars, and when the berries subsequently thaw, “it [is] possible for yeast to get in” and expedite fermentation.
It’s not unusual for berries to turn boozy in late winter and early spring, but because the Minnesota frost occurred before birds in the area migrated south, their drunken antics have been particularly noticeable to Gilbert residents.
“Oh my!” a commenter exclaimed on the Facebook post. “That explains all the birds bouncing off my window lately!”
Species like robins, thrushes and waxwings, which tend to eat more berries than their other feathered friends, are most likely to get buzzed as they gorge themselves on alcoholic fruits while trying to bulk up for winter. Young birds may be particularly susceptible to drunkenness because their livers have not grown large enough to handle the unintentional imbibing.
“They just get sloppy and clumsy,” Matthew Dodder, a bird expert from California, tells Antonia Noori Farzan of the Washington Post. “They have actually fallen out of trees on occasion.”
In their statement, Gilbert police assure residents that there is “no need to contact law enforcement about these birds as they should sober up within a short period of time.” Police did note, however, that witnesses should be in touch if they see “Big bird operating a motor vehicle in an unsafe manner” or “Woodstock pushing Snoopy off the doghouse for no apparent reason.”
While the authorities are clearly having a giggle, there is reason to be concerned on the birds’ behalf. If they can’t fly properly or keep their balance, intoxicated birds are at risk of crashing into hard surfaces. In 2012, for instance, researchers performed necropsies on several flocks of cedar waxwings that had collided fatally with solid objects like windows and fences in California; they discovered that the unfortunate animals had stuffed themselves with overripe berries of the Brazilian Pepper Tree, and concluded that the birds’ deaths were the result of “flying under the influence of ethanol.”
Similar finds were made in 2011 in Cumbria, England, after the bodies of 12 common blackbirds appeared near a primary school. Police initially suspected, ahem, fowl play, but post-mortem examinations found berries in the birds’ gastrointestinal tracts and high alcohol levels in a liver sample, according to National Geographic’s Owen, suggesting that fermented fruits played a role in their deaths.
Putting decals on large reflective surfaces can help tipsy birdies avoid fatal crashes, according to the National Audubon Society, which also recommends that birds be left alone if they collide with windows and survive. If the bird may be in danger from cats or other predators, it’s best to pick it up gently with a towel and put it in a ventilated box in a dark and quiet spot. If the bird doesn’t recover within a few hours, the society says, call a local wildlife rehabilitation center.
Wildlife officials in the Yukon, Canada, had their hands full several years ago, when Bohemian waxwings, drunk on fermented mountain ash berries, kept hitting windows and walls. The territory’s animal health unit would find them with juice-stained beaks, pop them into hamster cages (or “drunk tanks”), give them a few hours to sober up and then set them free.
As Dodder told The Post, “Sometimes, they just need a bit of time in a quiet setting to recover.”
Because as anyone recovering from a gnarly bender regrettably knows, time is often the only true remedy for a bad hangover—whether tequila or fermented berry-induced.
Since researchers discovered the first exoplanets in the 1990s, astronomers have gotten pretty good at finding planets orbiting distant stars, cataloguing 4,000 planets in more than 3,000 planetary systems. Now, researchers are interested in learning how these planets form, and a new technique may help them find hard-to-locate baby planets.
Young stars often have a disk of gas and dust swirling around them. Planets typically coalesce from this material, and eventually grow large enough to clear a path through these protoplanetary disks. But researchers aren’t certain that all of the gaps they’ve found actually come from young planets. That’s why a team recently looked at these disks in a new way, as they describe in a new study published in the journal Nature.
Astrophysicist Richard Teague, who conducted the study at the University of Michigan, and his team examined new high-resolution data from the Atacama Large Millimeter Array (ALMA), a radio observatory in Chile. In particular, they were able to observe the velocity of carbon monoxide gas moving within the protoplanetary disk around a young star called HD 163296. While hydrogen makes up the majority of the gas in the disk, carbon monoxide emits the brightest wavelengths, giving researchers the most detailed picture of how gas moves within the disk.
“With the high fidelity data from this program, we were able to measure the gas’s velocity in three directions instead of just one,” Teague, who is now a research fellow at Harvard-Smithsonian Center for Astrophysics, says in a statement. “For the first time, we measured the motion of the gas rotating around the star, towards or away from the star, and up- or downwards in the disk.”
When the data was processed with computer modelling, it revealed three areas where gas from the surface of the disk flows toward the middle layers, like a waterfall. The findings line up with previous studies that suggested three giant planets—one half the size of Jupiter, one Jupiter-sized and one twice the size of Jupiter—are forming in the disk.
“What most likely happens is that a planet in orbit around the star pushes the gas and dust aside, opening a gap,” Teague says in a statement. “The gas above the gap then collapses into it like a waterfall, causing a rotational flow of gas in the disk.”
Erika K. Carlson at Astronomy reports that the findings also suggest that the movement of gases within these protoplanetary disks is pretty complex. “There’s a lot more going on than we previously thought,” Teague tells Carlson. “We thought it was just rotating in a rather smooth manner.”
Since researchers have not directly observed the young planets forming in the disk, it’s possible HD 163296’s magnetic field is causing the anomalies in the disk. But co-author Jaehan Bae of the Carnegie Institution for Science, who ran the computer simulations, says planet formation is the most likely cause.
“Right now, only a direct observation of the planets could rule out the other options,” Bae says in a statement. “But the patterns of these gas flows are unique and it is very likely that they can only be caused by planets.”
Carlson reports that the team hopes to look at HD 163296 using other wavelengths to see if they can get data on gas movements deeper within the protoplanetary disk. And after that, the hope is that such observations will be confirmed visually when a new class of telescopes comes online in the early part of the next decade, including the James Webb Space Telescope scheduled for launch in early 2021.
While most of us were sleeping last Sunday morning, Earth had a close call with an asteroid that had been detected a mere 21 hours before it zipped past.
As Live Science’s Elizabeth Howell writes, the asteroid, officially known as 2018 GE3, was about the size of a football field, measuring between 157 and 361 feet in diameter. At its closest point to Earth, it passed by some 119,500 miles away—about half the distance between the Earth and the moon.
First observed at Catalina Sky Survey in Arizona on Saturday, April 14, it flew closest to Earth in the wee hours of the following morning at 2:41 A.M. E.D.T. The asteroid was zipping along at 66,174 miles per hour, Eddie Irizarry reports for Earth Sky.
The asteroid is much larger than many of the other space rocks that have passed by or exploded over Earth, prompting questions about how it went undetected for so long. After all, asteroids do have the potential to wreak havoc on Earth.
A meteor that blew up over Chelyabinsk, Russia, in 2013, for example, resulted in nearly 1,500 injuries and thousands of buildings damaged. But the fragments of space rock didn’t directly hit anyone. Rather, as Katherine Hignett reports for Newsweek, experts believe the explosion caused a shockwave, and this resulted in shattered windows and damaged buildings.
Asteroid 2018 GE3 is actually three to six times the size of the Chelyabinsk meteor and roughly equal to the size of the space rock that exploded over Tunguska in 1908.
In fact, 2018 GE3 is one of the largest asteroids ever to come in such close proximity to Earth, Eric Mack reports for CNET. Larger asteroids flew by in closer proximity in 2001 and 2002, according to NASA’s near-Earth object observation database. But this is pretty rare. Asteroids of this size approach Earth only once or twice a year.
So how did astronomers miss the asteroid until hours before flyby?
As Howell explains, asteroids are difficult to spot and track, since most are dark and generally much smaller than 2018 GE3. This means that they may not reflect enough light for telescopes to detect them easily. “A telescope needs to be looking in just the right area, at the right time, to catch it,” she writes.
Many telescopes would have to be on the lookout at once to spot incoming space rocks. Though NASA does track potential threats in this manner, their current focus is tracking the most dangerous of these asteroids: space rocks at least 460 feet wide that will come within 4.65 million miles of Earth. 2018 GE3 is around 75 percent that size.
As Mack reports, another asteroid, 99942 Apophis, is set to pass by in 2029. As Smithsonian.com previously reported, this asteroid will become the closest flyby of its size. It will come as close as 19,400 miles from Earth.
But don’t worry: The chances of it actually hitting Earth are slim. And scientists have been working toward better preparation for such a disaster. Last month, researchers announced plans for a spacecraft called HAMMER that would collide with incoming asteroids to knock them in another direction, or simply blow them up into tiny pieces, Space.com reported.
This, however, would require early detection.
A big change is coming to food packaging in New York, the city where takeout reigns supreme—among some more than others. As Nikita Richardson reports for Grub Street, a citywide ban on single-use plastic foam containers went into effect on Tuesday, and food establishments have until the end of June to start complying with the new prohibition.
The ban targets single-service products made of expanded polystyrene, which resembles, but is often erroneously referred to as, Styrofoam—a distinct brand of the Dow Chemical Company that has never been used in food and beverage containers. New York stores and restaurants will no longer be permitted to sell or possess spongy foam items like takeout clamshells, cups, plates, bowls and trays. Packing peanuts are also banned.
Exceptions will be made for food items that were packaged before they reached New York’s stores and restaurants, for foam containers used to store raw meat, seafood or poultry, and for small business owners who can demonstrate that buying alternative, non-foam products will “create financial hardship.” But all other establishments have until June 30 to use up their polystyrene stock; after that point, they will be charged up to $1,000 per offense.
New York is cracking down on expanded polystyrene (or EPS) containers because, according to the city, they “cannot be recycled in a manner that is economically feasible, environmentally effective, and safe for employees as part of the City’s curbside recycling program.” The products are made by steaming beads of the polymer polystyrene until they expand to 50 times their original size, according to the BBC. And this process makes EPS products difficult to recycle. Each time an EPS bowl or plate is made, “[w]hat you need are virgin polystyrene beads,” Joe Biernacki, professor of chemical engineering at Tennessee Tech University, told the BBC in 2015.
Also problematic is the fact that polystyrene often ends up in marine environments, where it gets gobbled up by animals, causing blocked digestive systems and, ultimately, starvation. Additionally, some experts worry about the health implications for humans who eat fish and other sea creatures that have ingested bits of expanded polystyrene and other microplastics.
New York’s new ban comes after a years-long effort to outlaw foam containers. According to the New York Times’ Michael Gold, the prohibition was first proposed by former Mayor Michael Bloomberg in 2013, and put into effect by Mayor Bill de Blasio in 2015. A coalition of restaurant owners, manufacturers and recyclers promptly sued the city, and a judge ruled that city officials had not proffered enough evidence to show that polystyrene containers cannot be recycled. The coalition sued again when the city attempted to implement the ban once more in 2017—with the support of a new report—but this time, a judge ruled in favor of the city.
New York now joins a number of cities that have banned plastic foam products, among them Chicago, Honolulu, Boston and Washington, D.C., which this week became the second major U.S. city to prohibit restaurants and other businesses from using plastic straws—another product that has been a focus of activists hoping to cut back on single-use items that have a harmful impact on the environment.
Since the first stars started flickering about 100 million years after the Big Bang, our universe has produced roughly one trillion trillion stars, each pumping starlight out into the cosmos. That’s a mind-boggling amount of energy, but for scientists at the Fermi Large Area Telescope Collaboration it presented a challenge. Hannah Devlin at The Guardian reports that the astronomers and astrophysicists took on the monumental task of calculating how much starlight has been emitted since the universe began 13.7 billion years ago.
So, how much starlight is there? According to the paper in the journal Science, 4×10^84 photons worth of starlight have been produced in our universe, or 4,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 photons.
To get to that stupendously ginormous number, the team analyzed a decade’s worth of data from the Fermi Gamma-ray Space Telescope, a NASA project that collects data on star formation. The team looked specifically at data from the extragalactic background light (EBL), a cosmic fog permeating the universe where 90 percent of the ultraviolet, infrared and visible radiation emitted from stars ends up. The team examined 739 blazars, a type of galaxy with a supermassive black hole at its center that shoots streams of gamma-ray photons directly toward Earth at nearly the speed of light. The objects are so bright that even extremely distant blazars can be seen from Earth. These photons from the luminous galaxies collide with the EBL, which absorbs some of them, leaving an imprint the researchers can study.
Looking at blazars ranging in age from 2 million to 11.6 billion years old allowed the researchers to use the sensitive instruments on the Fermi telescope to analyze their light, measuring how much radiation it lost as it moved through the EBL. This let them create an accurate measure of the density or thickness of the EBL over time, essentially creating a history of starlight in the universe since, in deep space, distance and time are almost the same thing.
“By using blazars at different distances from us, we measured the total starlight at different time periods,” co-author Vaidehi Paliya of Clemson University says in a press release. “We measured the total starlight of each epoch – one billion years ago, two billion years ago, six billion years ago, etc. – all the way back to when stars were first formed. This allowed us to reconstruct the EBL and determine the star-formation history of the universe in a more effective manner than had been achieved before.”
Researchers have tried to measure the EBL in the past, but were unable to get past the localized dust and starlight close to Earth, making it almost impossible to collect good data on the EBL. The Fermi telescope, however, finally allowed the team to minimize that interference by looking at gamma rays. The data they collected is in line with previous estimates for the density of the EBL.
The study shows that the peak of star formation in the universe took place about 11 billion years ago. Star formation has slowed drastically since, but stars are still forming, with about seven new stars lighting up in the Milky Way alone every year.
The study was not just an exercise in smashing the zero key either. Ryan F. Mandelbaum at Gizmodo reports that the measurement gives scientists an upper limit to the number of galaxies that were floating around 12 billion years ago during the Epoch of Reionization, the period when dark matter, hydrogen and helium first coalesced into stars and ordinary matter. It’s also possible that the EBL measurement could help develop new ways to look for unknown particle types.
Clemson astrophysicist and lead author Marco Ajello says in the release that the study is also a good step toward understanding the universe’s earliest days.
“The first billion years of our universe’s history are a very interesting epoch that has not yet been probed by current satellites,” he says. “Our measurement allows us to peek inside it. Perhaps one day we will find a way to look all the way back to the big bang. This is our ultimate goal.”
Thanks to the tiny star-shaped leaves that radiate from the momiji, the maple indigenous to eastern Asia, autumn in Japan is exhilarating. Walking through Kiyosumi Gardens in Tokyo on a recent visit, I glanced up at a constellation of red, orange, yellow and green leaves that interlocked to form a scrim. As the sun shone through, my world was bathed in kaleidoscopic color.
That evening, I went to Rikugi-en — like Kiyosumi, a classical Edo-period strolling garden. Stage lights illuminated the momiji, so that their bright bodies flexed against the night like lanterns. Fog machines generated mist, obscuring the ground. Both Rikugi-en and Kiyosumi are part of the Autumn Leaves Stamp Rally, an annual event during which ecstatic pilgrims visit all nine of Tokyo’s main gardens, receiving a stamp in a booklet for each one.
The Japanese, ever attuned to the seasons, love the cherry blossom. But kōyō, or fall color, is cherished with nearly the same ardor. Beginning in the 17th century, Japanese gardeners, in typically exacting manner, arranged more than 300 varieties of maple around temples, inns and residences in pleasure-giving color configurations. Momiji leaves are thin but taut, like sheets of crystallized honey, and can refract and filter light, like natural stained glass. Japan is full of unusually red trees, and in the sunlight the leaves glow like rubies.
In recent years, media attention and foreign enthusiasm, particularly from the Chinese, have raised the passion for Japanese leaf-chasing to a kind of fervor. From mid-October until early December, websites track the changing of the leaves from northeast to southwest. There are colorful trees all over the country, but most visitors cluster around the major cities, where hotels print daily foliage updates for guests. Such obsessiveness adds to the frenzied quality of the pursuit. But a chance to see the leaves at full wattage is a lesson in savoring the moment before the startlingly vivid colors fade.
Because Kyoto was not bombed during World War II, its trees and temples are generally older than Tokyo’s and are particularly prized. The Zen temple Enrian is open only five weeks a year for connoisseurs to see its famed 350-year-old tree, bred so its leaves turn blood-red. Visiting Rurikōin, I saw a crowd of fiery maples, whose predominant color, orange, was projected through a window onto a black lacquered floor.
Ruriko-in Temple in the suburbs of Kyoto. (Magicflute002 / iStock)
Founded in 778, Kiyomizu Temple is perched atop a 43-foot cliff. It looks like the biblical ark suspended on an amber ocean of maple leaves. Young women dressed in cream, teal, and camel lingered over the view of the hills and vermilion pagodas sprouting from the scarlet forests. I gazed out at the horizon, to a landscape pulsing with color, and my heart throbbed with happiness.
The Katsura River in the autumn. (Pat138241 / iStock)
The southeast Asian jumping spider, or Toxeus magnus, is unusual in more ways than one. Not only does it bear a striking resemblance to a long-legged ant, but it also appears to be the only arachnid known to “milk” its young—an unprecedented behavior newly published in the journal Science.
In this case, “milk” is worth writing in quotation marks because the sugar-, fat- and protein-filled droplets produced by jumping spider mothers don’t technically meet the parameters of the word—containing lactose produced by mammary glands—as it’s used in relation to mammals. Still, Ben Guarino writes for The Washington Post, the fluid fulfills the basic purpose of milk: offering nourishment to offspring via what Sasha Dall, a University of Exeter biologist who was not involved in the research, describes as “some aspect of yourself.”
Lead author Zhanqi Chen of the Chinese Academy of Sciences launched the study after noticing the jumping spider’s odd communal tendencies. Most spiders are solitary creatures, The Atlantic’s Ed Yong notes, but T. magnus cluster in family groups, with young spiderlings staying in their mothers’ nests for an extended period of time.
To better understand this unusual behavior, Chen and his colleagues reared jumping spiders in the lab and tracked how long it took babies to leave the nest. Surprisingly, neither newborns nor mothers ventured beyond the nest in search of food for 20 days, leading the scientists to wonder how the vulnerable young arachnids managed to not only survive, but significantly grow in size.
Upon closer inspection, the team observed the mother transferring drops of a sustaining liquid (later revealed to contain four times the protein of cow’s milk) from her abdominal epigastric furrow to the nest during the first week post-birth. Once the one-week mark passed, spiderlings drank fluid directly from the mother’s body, crowding around in a manner eerily similar to suckling puppies.
According to The New York Times’ Douglas Quenqua, T. magnus moms even produced the milk-like fluid after their roughly 20-day-old offspring began leaving the nest to forage for food. Suckling only stopped when the babies reached 40 days old, at which point they gained a bit of independence but still returned to the nest for the night.
The baby spiders crowd around their mother in a manner similar to that of suckling puppies (Rui-Chang Quan)
Interestingly enough, Jason G. Goldman reports for National Geographic, only females were permitted to continue nursing beyond sexual maturity. Males received the short end of the stick; Motherboard’s Becky Ferreira says the mothers actually attacked their adult sons and chased them out of the nest, perhaps to prevent inbreeding between brothers and sisters. Given their newfound ability to forage for food, this exclusion didn’t necessarily doom them to an early death.
The scientists ran through multiple scenarios to better assess the importance of jumping spider milk production, alternately blocking the mothers’ epigastric furrows by covering them with Wite-Out and preventing mothers from nursing beyond day 20.
Spiders that only received milk for the first 20 days of their lives—but still benefitted from the presence of a maternal figure beyond this point—emerged with fewer parasites than those who lost both milk and mothers at the 20-day mark.
Of 187 spiderlings spread out across 19 nests, those that enjoyed both maternal care and a consistent diet of milk exhibited a survival rate of 76 percent. Survival amongst those who lost their mothers after 20 days dropped to about 50 percent.
Jumping spiders are far from the only non-mammals known to produce a milk-like nutritious substance. As Ryan F. Mandelbaum explains for Gizmodo, cockroaches, pigeons, tsetse flies and earwigs have all been observed engaging in the mammalian practice. The key difference, according to The Post’s Guarino, is that mammals possess a specialized organ designed for lactation. So far, researchers have not identified an equivalent gland in non-mammals.
Chen tells The Atlantic’s Yong that he and his colleagues “have no idea” why the unusual practice evolved amongst jumping spiders specifically. He proposes, however, that the sustenance boost equips the tiny arachnids, which measure just a millimeter long, for life in a competitive, predator-filled environment.
Some scientists still have questions surrounding the discovery: Joshua Benoit of the University of Cincinnati was not involved in the study, but he tells Gizmodo that it’s unclear whether jumping spiders would return to their mothers beyond the 20-day mark in the wild. Nathan Morehouse, another Cincinnati scientist not involved in the study, adds that the new research doesn’t account for why the spiders nurse for so long or why other arachnid species don’t produce milk.
For now, these queries remain unanswered. But given the revelatory nature of the study, it’s likely that follow-up research will join the mix soon.
As Chen concludes in a statement, "We anticipate that our findings will encourage a reevaluation of the evolution of lactation and extended parental care and their occurrences across the animal kingdom."
The poem “Der Rosendorn” or “The Rose Thorn” is known from two manuscript copies dating to around 1500. But a new fragment of the poem discovered in the library of Melk Abbey in Austria’s Wachau Valley dates from 200 years before that, meaning that someone was writing about a talking vulva much earlier in the Middle Ages than previously believed.
Yes, reports Kate Connolly at The Guardian, the poem is actually a dialogue between a woman and her vulva, discussing which of them men are more attracted to.
The fragment is a long thin strip of parchment on which a few letters per line are visible, according to a press release from the Austrian Academy of Sciences. When researchers tried to identify the letters, they found they corresponded with the text of “The Rose Thorn.” Previously, copies of the poem were found in the Dresden and Karlsruhe Codices and were dated to around 1500.
The parchment on which the poem was written was cut up and reused as binding in a Latin theological text. Whether the poem was sacrificed because of its content is hard to say; we can “really only guess,” says Christine Glaßner of the Academy of Sciences’ Institute of Medieval Research.
The earlier date for the poem helps push back the timeline for Medieval erotic poetry and suggests openness about sexuality appeared in the German-speaking world earlier than previously thought.
The tale of the loquacious genitalia begins with a male narrator who shares how he first came across a young woman arguing with her vulva during her daily soak in rosewater. The dialogue between the two is witty, and the woman contends that men are primarily attracted to her looks. The vulva counters that the young woman puts too much emphasis on her appearance. The two decide to go their separate ways, to disastrous result. Ultimately, they realize they must reunite. The narrator then steps in to offer his aid, and—in a moment that in 2019 reads as decidedly creepy—pushes the two back together in a less-than-chivalrous manner.
Glaßner says the poem is more than just an erotic Medieval fantasy. “[A]t its core is an incredibly clever story, because of the very fact that it demonstrates that you cannot separate a person from their sex,” she tells Connolly.
While this may be the German language’s earliest talking vulva, it’s not the only one in literature: The French tale Le Chevalier qui faisoit parler les cons et les culs employs talking vulvas. French philosopher Denis Diderot’s 1748 novel, Les bijoux indiscrets, is about a magic ring that makes vulvas talk. The premise even shows up in modern times, appearing, for instance, in the cult 1977 movie Chatterbox, or Virginia and the Talking Vagina.
The morning of March 14, 1968, began just like any other day in the rural, snow-covered hills of Skull Valley, Utah. But for Tooele County Sheriff Fay Gillette, the carnage of the day would be forever seared in his mind, and for the rest of the country, it would come to be a flashpoint for a national debate about the use of chemical weapons.
“I’ve never seen such a sight in my life,” Gillette later told investigative reporter Seymour Hersh about the thousands of dead livestock splayed across the landscape. “It was like a movie version of ‘death and destruction’—you know, like after the bomb goes off. Sheep laying all over. All of them down—patches of white as far as you could see.”
Had all those sheep eaten a poisonous plant? Had they come into contact with foliage sprayed with pesticides? Or maybe there was an even more alarming culprit: the Dugway Proving Ground, the Army’s largest base for chemical and biological weapons testing, located only 80 miles from Salt Lake City and a mere 27 miles from the stricken animals.
As more sheep sickened and died, spokespeople for the Dugway facility denied testing any weapons in the days before the die-off. But on March 21, U.S. Senator Frank Moss, a Democrat representing Utah, released a Pentagon document that proved otherwise: On March 13, the day before Sheriff Gillette came across the macabre scene, a high-speed jet had sprayed 320 gallons of the nerve gas VX across the Dugway grounds in a weapons test. The odorless, tasteless chemical is so deadly that less than 10 milligrams is enough to kill a human by asphyxiation, via paralysis of the respiratory muscles.
In the weeks and months that followed, local veterinarians and health officials investigated the matter. Their findings: the jet that sprayed VX gas had experienced a malfunction in its delivery tanks and had accidentally released the gas at a much higher altitude than intended, allowing it to be blown far from the testing grounds. The ill-fated sheep had been grazing on grass covered in the chemical. Some died within 24 hours while others remained ill for weeks before succumbing, “generally act[ing] dazed, [with] their heads tilted down and off to the side, walk[ing] in a stilted, uncoordinated manner,” reported Philip Boffey for the journal Science. It was exactly the suite of symptoms scientists would expect to accompany poisoning by VX nerve gas.
But the most damning report came from the National Communicable Disease Center in Atlanta, which tested the water and forage food from the area, as well as the blood and livers of dead sheep. Their tests “prove beyond doubt that these responses are in fact identical and can only be attributed to the same chemical” as the Army provided for comparison, stated the report.
Despite the widespread coverage of the incident, locally and nationally, few people in the region expressed real alarm in the immediate aftermath. This was in part due to the fact that the military was the largest employer in the state. “Concern, from the highest level of state officialdom on down, was that too much investigating or talking about the incident might make the Army move its base from Dugway,” reported Seymour Hersh.
Although the Army never released a full, detailed report, they paid $376,685 to rancher Alvin Hatch, whose sheep accounted for 90 percent of those afflicted. The military also lent bulldozers for the mass burial of the dead sheep, and initiated a review of the safety protocol at Dugway.
But even with the sheep buried and settlements paid, the Army couldn’t make the incident disappear: the deaths of the sheep were only the starting point of what became a years-long battle over chemical weapons in the context of the Cold War and American military action in Vietnam. It’s all because Richard McCarthy, a Democratic congressman from New York, happened to see an NBC documentary about the incident in February 1969.
“Chemical and biological weapons were another side of the nuclear arms race, but they were a much more secret and hidden aspect of it,” says science historian Roger Eardley-Pryor. “They were much less known until Richard McCarthy made this a national issue.”
Before that point, chemical weapons were largely believed to be banned from use by international agreement. After World War I, in which every major power deployed chemical weapons—resulting in 1 million casualties and more than 90,000 deaths—Western nations signed the 1925 Geneva Protocol. The agreement prohibited the use of chemical and biological weapons, and for a time it seemed as if it would be obeyed.
But the United States never ratified the agreement. Between 1961 and 1969 alone, the U.S. military spent $2 billion on its chemical weapons stockpile, writes science historian Simone Müller in Historical Social Research. Over that same period, the military dumped hundreds of thousands of tons of old chemical weapons directly into the ocean, without bothering to keep records of precisely where or how many weapons were disposed of. The military also discovered multiple instances of chemicals leaking out of their containers, including 21,000 leaky bomb clusters discovered in the Rocky Mountain Arsenal in Denver.
Yet the American public was almost entirely unaware of any of the stockpiles, or the danger of testing, storing and transporting them. The only synthetic chemicals being discussed in the public sphere, Eardley-Pryor says, were pesticides harmful to the environment like DDT (Rachel Carson’s landmark research on the topic, Silent Spring, was published in 1962) and so-called “nonlethal” chemicals used in Vietnam, like the defoliating herbicide Agent Orange, and tear gas. (The defoliant would later be discovered to be carcinogenic, resulting in a multitude of health problems for Vietnam veterans and residents of the country.)
After McCarthy saw the NBC piece on the Dugway sheep kill, he was determined to learn more—and expose the chemical weapons complex to the rest of America. Beginning in May 1969, McCarthy instigated congressional hearings that revealed the extent of the U.S. chemical weapons program and uncovered a disposal program with a distasteful acronym: CHASE. It stood for the method by which toxic waste, moved onto ships and sent to sea, was disposed of: Cut Holes And Sink ’Em.
A little more than a year after the Dugway incident, in July of 1969, a small leak developed in a nerve gas weapon on the U.S. military base on Okinawa; 24 people were injured, though none fatally. The press and the public quickly drew a line between Okinawa and the Utah sheep. More incidents came to light. “The Pentagon admitted that, besides Dugway Proving Ground in Utah… Edgewood Arsenal, Md. and Fort McClellan, Ala., have also been the sites of open-air testing of Tabun, Sarin, Soman, VX and mustard gas,” reported Science.
Military officials argued that tear gas, at least, had an important place in the Vietnam War: it could protect U.S. soldiers by flushing Viet Cong soldiers out of hiding without killing innocent Vietnamese citizens. But after years of growing steadily more unpopular, even the argument for the humane use of tear gas in Vietnam lost its power. In 1975, Congress approved the protocol and President Gerald Ford ratified it. The U.S. would no longer use chemical weapons—lethal or nonlethal—in warfare. Ironically, tear gas has continued to be used as a weapon of pacification domestically; law enforcement from local police officers to the National Guard have continued to use tear gas to quell riots and prevent property damage.
But chemical weapons, which scientists of the 1960s and 70s described as emerging from Pandora’s box, continue to haunt us. From their deadly use by dictator Bashar al-Assad on his own people in Syria, to Russia’s apparent use of a nerve agent on former intelligence officials in the U.K., it’s clear that the use and legacy of synthetic chemicals is far from over.
While there is no definitive solution to preventing the use and spread of such weapons, Eardley-Pryor does add that it’s rare for countries to actually use them. “I’m very thankful, if surprised, that other nations have agreed to say this is a terrible thing, we’re not going to use it,” he says.
And in the U.S., at least, we may have the sheep to thank for it.
With its spice-infused, creamy orange filling and crisp crust, there’s nothing quite like pumpkin pie to herald the arrival of the Thanksgiving holiday (though some might argue in favor of its other forms, from pumpkin bread to pumpkin ale). The pumpkin features uniquely in this fall holiday and the autumn weeks generally, remaining absent from other celebrations like the Fourth of July or Christmas. But at one point, the squash was as ubiquitous as bread—and sometimes even more so, as American colonists would rely on it to make bread when their harvest of wheat fell short. How did the pumpkin go from everyday produce to seasonal treat? It’s a story more than 10,000 years in the making.
To understand the surprising trajectory of the orange pumpkin, it’s important to know something of its life history. The cheerful pumpkin is known by the species name Cucurbita pepo—a species that also includes acorn squash, ornamental gourds and even zucchini. All these different forms of Cucurbita pepo are cultivars, varieties of the same species that are selected in certain forms by human farmers. And yes, they are technically fruits, though many refer to them colloquially as vegetables.
Before humans arrived in the Americas, wild forms of these squashes grew in natural abundance around floodplains and other disrupted habitats, with the help of enormous mammalian herbivores. Creatures like giant ground sloths, mastodons and gomphotheres (elephant-like animals) created the perfect environment for wild squashes, and when humans arrived and hunted the massive herbivores to extinction, many of the wild squashes and gourds went extinct as well. Those that survived managed to do so because humans continued growing them, making squashes (including in the pumpkin form) the first domesticated plant in the Americas. Archaeologists unearthed the oldest example of orange field pumpkin seeds in Oaxaca, Mexico, and dated them to an astonishing 10,000 years ago—millennia before the appearance of domesticated corn or beans.
Initially, indigenous people used the squashes for their seeds and as containers, but by 2500 B.C. Native Americans in the Southwest were cultivating corn, beans and squash on farms. The crop spread across the Americas, with communities from the Haudenosaunee in the northeast (also known as the Iroquois Confederacy) to the Cherokee of the southeast planting and sometimes venerating the squash.
When Europeans arrived, they encountered the endemic crop everywhere. “Columbus mentioned them on his first voyage, Jacques Cartier records their growing in Canada in the 1530s, Cabeza de Vaca saw them in Florida in the 1540s, as did Hernando de Soto in the 1550s,” writes historian Mary Miley Theobald. Native Americans cooked the squashes in all manner of ways: roasting them in the fire, cutting them into stews, pounding the dried flesh into a powder, or drying strips of it into something like vegetable jerky. (At one point George Washington had his farm manager attempt the same preparation with Mount Vernon pumpkins, only for the man to report, “I tried the mode you directed of slicing and drying them, but it did not appear to lengthen their preservation.”)
For these colonists, the squashes provided an abundant source of nutrition, and they rarely distinguished one form of Cucurbita pepo from another. “Through the colonial era they used the words interchangeably for pumpkin or squash,” says Cindy Ott, the author of Pumpkin: The Curious History of an American Icon. As to whether the Pilgrims ate pumpkin at their iconic meal with Native Americans, Ott says there’s no mention of it in the written records, but people “probably ate it that day, the day before, and the day after.”
It wasn’t until the early-19th century that Americans began to distinguish between the different forms of Cucurbita pepo, when masses of people moved from the rural countryside to urban areas during the Industrial Revolution. Zucchini and other summer squashes were sold as cultivars in city markets; the pumpkin, however, remained on farms, used as livestock feed. City-dwellers, meanwhile, ached with nostalgia for their connection to the land, Ott says. By the middle of the century, popular songs pined for happy childhoods spent on the farm. The pumpkin served as a symbol of that farming tradition, even for people who no longer actually worked on farms. “The pumpkin has no economic value in this new industrial economy,” Ott says. “The other squashes are associated with daily life, but the pumpkin represents abundance and pure agrarian ideals.”
Pumpkin pie first appeared as a recipe in the 1796 cookbook American Cookery, published by New England writer Amelia Simmons, and was sold mainly in that region. When the dessert gained popularity, it was billed as a New England specialty. That connection to the North translated to the pumpkin being appropriated by abolitionists leading up to and during the Civil War, Ott says. Women who championed the anti-slavery cause also wrote poetry and short stories about pumpkins, praising them as a symbol of the resilient, northern family farmer. The status of the squash rose to national prominence in 1863, when President Lincoln, at the behest of numerous women abolitionists, named the last Thursday in November as a national holiday.
“The women who [helped create] Thanksgiving as a holiday were strong abolitionists, so they associated pumpkin farms with northern virtue and very consciously compared it to Southern immoral plantation life,” Ott says. “That feeds into how Thanksgiving became a national holiday in the midst of the Civil War, when the pumpkin was a pivotal player in the northern harvest.”
The link between Thanksgiving and pumpkin pie has continued to this day, with American farmers growing more than a billion pounds of pumpkin annually, the vast majority for Halloween and Thanksgiving. Urbanites travel out to family farms to buy their jack-o’-lantern pumpkins, and visit the grocery store for canned pumpkin before the big holiday. For Ott, learning the history of the pumpkin was a lesson in how everyday objects can tell deeper stories.
“These very romantic ideas are about farm life and how Americans like to imagine themselves, because farming is hard work and most people wanted to leave the farm as soon as they could,” Ott says. “But [the pumpkin shows] how we think about nature, ourselves and our past. A humble vegetable can tell all these stories.”
Researchers are hoping that a new technology will help them to begin reading charred scrolls dating back 2,000 years. If successful, the technique could help decipher other charred, faded or damaged scrolls and documents from the ancient world.
These particular scrolls were unearthed in 1752 in the ruins of Herculaneum, which was covered in ash by Mount Vesuvius in 79 A.D. They were discovered, specifically, in the library of a grand villa, believed to belong to Julius Caesar’s father-in-law, Lucius Calpurnius Piso Caesoninus. As Nicola Davis at The Guardian reports, the documents were a major find, since the site, which became known as the Villa of the Papyri, is the only known intact library from the ancient world. Most of the documents, however, were charred into rolled up logs, rendering the texts more or less useless.
“Although you can see on every flake of papyrus that there is writing, to open it up would require that papyrus to be really limber and flexible – and it is not anymore,” Brent Seales, director of the Digital Restoration Initiative at the University of Kentucky, tells Davis.
That hasn’t stopped researchers from trying to access the writings, most of which, it’s believed, were lost to history. Attempts have been made to unroll about half the scrolls using various methods, leading to their destruction or causing the ink to fade.
Seales and his team are now seeking to read the text using the Diamond Light Source facility, a synchrotron based in Oxfordshire in the U.K. that produces light that can be billions of times brighter than the sun. They will test out the method on two intact scrolls and four smaller fragments from L'institut de France.
“We... shine very intense light through (the scroll) and then detect on the other side a number of two-dimensional images. From that we reconstruct a three-dimensional volume of the object... to actually read the text in a non-destructive manner,” Laurent Chapon, physical science director of Diamond Light Source, tells George Sargent at Reuters.
Machine-learning algorithms will then attempt to use that data to decipher what was on the scrolls. “We do not expect to immediately see the text from the upcoming scans, but they will provide the crucial building blocks for enabling that visualization,” Seales says in a press release. Eventually, if the technique works, the team hopes to use it on 900 other Herculaneum scrolls from the villa. “The tool can then be deployed on data from the still-rolled scrolls, identify the hidden ink, and make it more prominently visible to any reader,” Seales says.
This isn’t the first time he’s unrolled ancient scrolls. As Jo Marchant reported for Smithsonian magazine in 2018, Seales began researching techniques for creating 3D images of ancient documents and deciphering faded or damaged scrolls back in 2000. In 2005, he first saw the Herculaneum scrolls, most of which are housed in a museum in Naples, and decided he’d focus his technical attention on the documents. “I realized that there were many dozens, probably hundreds, of these intact scrolls, and nobody had the first idea about what the text might be,” he says. “We were looking at manuscripts that represent the biggest mysteries that I can imagine.”
Since then, advancing technology has helped him dig deeper into the documents. In 2016, his team made news when they were able to use micro-CT scans to read a charred scroll found in an ark near the Dead Sea at En Gedi. Because the ink used metals, Seales was able to detect the writing. He then used his advanced software to digitally unroll the scroll and piece it back together to learn that the 1,500-year-old document was a snippet from the Book of Leviticus.
But the Herculaneum scrolls pose a different problem: The Romans did not use heavy metals in their carbon-based inks, though some of their inks do contain lead. That makes the contrast between the ink and papyrus not very strong. That’s where the machine learning comes in. Davis reports the team is training its algorithms using bits of charred scrolls where the writing is still visible. The hope is that the software will learn the microscopic differences between parchment where ink once was and wasn’t.
The team has already collected the high-energy X-ray data from the scrolls and is now training its algorithms, hoping to perfect the process in the next few months.
Most of the writings in open scrolls from the Villa of the Papyri have been philosophical works in Greek on Epicureanism. But there’s a chance that some of the charred scrolls contain Latin texts. It’s also possible that more scrolls remain undiscovered in parts of the Villa that have yet to be excavated. “A new historical work by Seneca the Elder was discovered among the unidentified Herculaneum papyri only last year, thus showing what uncontemplated rarities remain to be discovered there,” as Oxford classicist Dirk Obbink points out to Davis.
If and when the scrolls are revealed, it will be a windfall for historians, classicists and archaeologists alike. “It’s ironic, and somewhat poetic that the scrolls sacrificed during the past era of disastrous physical methods will serve as the key to retrieving the text from those that survive but are unreadable,” Seales says in the press release. “And by digitally restoring and reading these texts, which are arguably the most challenging and prestigious to decipher, we will forge a pathway for revealing any type of ink on any type of substrate in any type of damaged cultural artifact.”
After 130 years, do we finally know the identity of Jack the Ripper? Unfortunately, no. After releasing test results of a controversial silk shawl stained with blood and, possibly, semen, supposedly found at the scene of one of the Ripper killings, forensic scientists are pointing the finger at Aaron Kosminski, a 23-year-old Polish barber in London who was one of the first suspects identified by London police in the Ripper case. But like all elements in the Jack the Ripper saga, the evidence they’re offering is not able to close the book on the string of murders that terrorized the London streets of 1888.
The case for the barber’s unmasking is tied to the shawl alleged to have been found next to Catherine Eddowes, the Ripper’s fourth victim. As David Adam at Science reports, the cloth was acquired by Ripper enthusiast Russell Edwards in 2007, who had it DNA tested. While Edwards published the results in his 2014 book, Naming Jack the Ripper, he kept the DNA results and methods under wraps, making it impossible to assess or verify the claims of Kosminski as Ripper. Now, the biochemists who ran those tests, Jari Louhelainen of John Moores University in Liverpool and David Miller of the University of Leeds, have published the data in the Journal of Forensic Sciences.
There, the researchers explain they subjected the shawl to infrared imagery and spectrophotometry testing. They also inspected the stains using a microscope to determine what made them. Under ultraviolet light, they found that one stain was possibly produced by semen.
The researchers then vacuumed up what DNA fragments they could from the shawl, finding little modern contamination and many degraded short fragments, consistent with DNA of that age. They compared mitochondrial DNA in the sample, which is passed down from mother to child, to that of a descendant of Eddowes, finding a match. The team also found a match to a descendant of Kosminski in other bits of mitochondrial DNA.
“All the data collected support the hypothesis that the shawl contains biological material from Catherine Eddowes and that the mtDNA sequences obtained from semen stains match the sequences of one of the main police suspects, Aaron Kosminski,” they write in the study.
But as Adam at Science reports, this more detailed data still doesn’t say enough. As Hansi Weissensteiner, a mitochondrial DNA expert, points out, mitochondrial DNA can’t be used to positively ID a suspect; it can only rule one out, since thousands of other people could have had the same mitochondrial DNA. Additionally, experts have critiqued the way the results were published, as some of the data is shown as graphs instead of the actual results. Forensic scientist Walther Parson says the authors should publish the mitochondrial DNA sequences. “Otherwise the reader cannot judge the result,” Parson says.
Beyond the results, there’s an even bigger obstacle: the provenance of the shawl. For The Conversation, Mick Reed explains the shawl’s origin story is full of problems. Was a shawl even picked up by Metropolitan Police officer Amos Simpson at the crime scene that night? Even if that were true, whether this scarf is the authentic one is up for debate; the cloth was previously dated to the Edwardian period, from 1901 to 1910, as well as to the early 1800s, and could come from anywhere in Europe.
Historian Hallie Rubenhold, author of the new book The Five: The Untold Lives of the Women Killed by Jack the Ripper, has been among the Ripper experts to criticize the conclusions. “[T]here is no historical evidence, no documentation that links this shawl at all to Kate Eddowes. This is history at its worst,” she wrote on Twitter in response to a headline that claimed the newly published research "proved" Jack the Ripper had been identified.
While it seems there's no way we'll ever know for certain who the murderer was, Rubenhold makes the case that it doesn't matter all that much. What she prioritizes are the identities of the women he murdered, whose names we have record of. As Meilan Solly recently reported for Smithsonian.com, Rubenhold’s research "dedicates little space to the man who killed her subjects and the gory manner in which he did so." Instead, it shifts the focus of the Jack the Ripper narrative to lives—not deaths—of his victims.
If you came down with frenzy, love sickness, venereal disease or any other manner of ailment in 17th-century England, you might opt to pay a visit to Simon Forman, a self-taught astrologer and physician who claimed to diagnose and treat illnesses through consultation with celestial bodies. Even 400 years ago, the medical establishment regarded Forman’s brand of medicine with hostility and suspicion. But he was hugely popular among patients, as evidenced by the 80,000-odd case notes that he and his protégé, Richard Napier, left behind.
Now, as the BBC reports, Cambridge historians have transcribed and digitized 500 of their favorite case notes, offering a fascinating glimpse into what Lauren Kassell, a professor of history of science and medicine at the university, calls “the grubby and enigmatic world of seventeenth-century medicine, magic and the occult.”
Under Kassell’s leadership, researchers have spent the past 10 years editing and digitizing Forman and Napier’s notes. The images of complete casebooks can be found here.
Sorting through the thousands of pages of notes has been no easy task. The documents are, for one, covered in cryptic astral symbols. The authors’ style of writing has posed another problem.
“Napier produced the bulk of preserved cases, but his penmanship was atrocious and his records [were] super messy,” Kassell explains. “Forman’s writing is strangely archaic, like he’d read too many medieval manuscripts. These are notes only intended to be understood by their authors.”
But thanks to the researchers’ perseverance, lay readers can now peruse a hefty selection of transcribed texts, which have been tweaked with modern spellings and punctuation to make them more accessible. The website where the digitized notes have been posted divides the cases into categories—among them “dreams, visions, voices;” “bad marriages;” “chastity diseases.” One section is devoted to Napier’s consultations with angels, who did not mince words with their diagnoses. “He will die shortly,” the angel Michael said of one patient, according to the physician’s reports.
It is hard not to be bemused by some of the complaints the physicians dealt with—take, for instance, one John Wilkingson, who slept with married women and contracted the “French disease” (syphilis, that is). Not only had poor John lost his hair to the illness, but he had also been “thrust with a rapier in his privy parts.” Then there was Edward Cleaver, who paid a visit to the healers because he had been having “ill” thoughts—like “kisse myne arse.”
The treatments that Forman and Napier prescribed are equally fascinating and, at times, rather horrifying. Most often, they recommended bloodletting, fortifying brews and purges induced by “potent” concoctions, Kassell explains. But they were also known to prescribe the touch of a dead man’s hand and “pigeon slippers”—“a pigon slitt & applied to the sole of each foote.”
Sometimes, the physicians offered predictions instead of prescriptions. One 31-year-old Anne Tymock paid a visit to find out if she would be able to have a child. Her astrological chart, according to the case notes, indicated that she would—but “by some other man and not by her husband.”
While they make for a lively read, the cases also testify to the often-brutal hardships of life in 17th-century Europe. Entries on birth and other women’s health concerns are littered with references to children who did not survive. “[C]hild was pulled from her dead,” details one account. The notes refer to the execution of purported witches who were blamed for various ailments. And those who struggled with mental illness were not treated gently. One 60-year-old woman was “bound in her bed with cords at night & at daytime is chained at a post.”
For centuries, these illuminating documents were kept in 66 calf-bound volumes at Oxford's Bodleian Library. With the digitization and transcription projects, the records have become increasingly accessible—though Kassell cautions that they are a “rabbit hole.”
“The cases of Forman and Napier,” she says, “may well suck you in."
For almost 400 years, the Taj Mahal, just south of the Indian city of Agra, has stood as a gleaming white monument to love; the iconic mausoleum was built at the command of Mughal emperor Shah Jahan to commemorate his favorite wife, Mumtaz Mahal, who died during childbirth. But lately the tomb has lost some of its shine—bug poop and industrial pollution have begun to turn its white marble green, black, brown and yellow, and state caretakers have struggled to keep the building clean. Now, reports Gareth Harris at The Art Newspaper, the Supreme Court of India has handed down an ultimatum—“Either you demolish [the Taj Mahal] or you restore it.”
The BBC reports this is not the first time the court has weighed in on the state of the Taj. In May, the court instructed the state of Uttar Pradesh, where the Unesco World Heritage Site is located, to seek out foreign experts to help stop the “worrying change in color” of the monument since it appeared state experts were unable or unwilling to save the monument. Since that order, however, the federal and state governments had not filed any sort of action plan or follow-up, prompting the court to accuse them of “lethargy” and to issue the hyperbolic mandate that they might as well demolish the site if they weren’t going to take care of it.
The once-gleaming Taj Mahal faces several threats, most of them manmade. In another article, the BBC reports that an insect called Chironomus calligraphus has invaded the monument, leaving patches of green-black frass in many parts of the structure. While the bug is native to the Yamuna River, which flows past the Taj, its population has exploded in recent years due to pollution of the waterway. “Fifty-two drains are pouring waste directly into the river and just behind the monument, Yamuna has become so stagnant that fish that earlier kept insect populations in check are dying. This allows pests to proliferate in the river,” environmental activist DK Joshi tells the BBC.
The bug poo can be scrubbed away, but frequent scrubbing of the marble is labor intensive and dulls its shine.
Industrial pollution is also taking its toll. Nearby oil refineries, a 200-year-old wood-burning crematorium, and other factories have caused the marble to start turning yellow. Though the government has closed dozens of nearby factories, it has not stopped the yellowing of the Taj. While conservators use a special type of mud plastered to the walls to pull out the pollutants every few years, the pollution stains keep returning.
The threat to demolish the iconic landmark is certainly a bluff, but one the federal government is not planning to call. Today, Dipak K. Dasha and Vishwa Mohan of The Times of India report that the government is preparing to file an affidavit with the court including a 100-year plan for the Taj in response to the Supreme Court’s admonishment. The plan includes closing down more industries near the Taj, cleaning up and preventing pollution discharge into the Yamuna, establishing a green mass transit system in Agra, improving the area’s sewage treatment plants and establishing a rubber dam to maintain the flow of water in the river, which can help in conservation efforts.
“We’ll take all possible measures on a war footing in a time bound manner to conserve the Taj Mahal and protect it from all kinds of pollution, be it air or water,” water resources minister Nitin Gadkari tells The Times. “We are sad over the Supreme Court’s observations. We, perhaps, couldn’t tell the court as to what all we have already done and what all we have been doing. We’ll inform the court all this in our affidavit.”
Any investment to preserve the Taj Mahal is probably worth it. The nation’s top tourist attraction draws up to 70,000 visitors per day, and all the dollars that go along with that. Of course, tourism is a double-edged sword, too: All that foot traffic is impacting the foundations of the aging structure and the touch of oily human hands and moist breath is discoloring the interior. That’s why earlier this year the Archaeological Survey of India proposed capping the number of Indian visitors to the site at 40,000 per day. And in March the Survey implemented a 3-hour limit to visits, also an attempt to keep crowd sizes down.
The story of France’s Louis IX, known as Saint Louis to Catholics, is that the pious monarch died of plague while leading the Eighth Crusade, an attempt to shore up control of the Holy Land in the name of Christianity. But a new study of Louis’ jawbone suggests it wasn’t plague that took the king down in the summer of 1270 A.D. but a stubborn refusal to eat the local food in Tunisia during his long journey.
Agence France-Presse reports that an international collaboration of researchers came to that conclusion after taking a look at Louis’ jawbone, which is kept in Notre Dame Cathedral. Using radiocarbon dating, the team first established that the jaw appeared to be about 50 years too old to belong to the warrior-king. But after adjusting for the fact that Louis is known to have subsisted mostly on a diet of fish, which would have skewed the carbon ratios in his bones, they concluded it's reasonable to believe the bones are from the right time period. They also compared the jaw shape to sculptures of the king, finding that it appeared to match.
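The diet adjustment relies on the "reservoir effect": carbon that passed through aquatic food chains appears older than contemporary terrestrial carbon, so a fish-heavy diet inflates a bone's apparent radiocarbon age. A toy linear-mixing sketch (the ~400-year offset and the mixing model are standard rough approximations, not figures from this study) shows how a modest aquatic share of the diet can account for a roughly 50-year discrepancy:

```python
# Toy model of the dietary reservoir effect on radiocarbon dates.
# The ~400-year offset is a commonly cited rough average for marine
# carbon; the linear mixing assumption is an illustration, not the
# study's actual calibration.

RESERVOIR_OFFSET_YEARS = 400  # approximate apparent-age excess of aquatic carbon

def apparent_age_shift(aquatic_fraction: float) -> float:
    """Extra apparent radiocarbon age (years) from a partly aquatic diet."""
    if not 0.0 <= aquatic_fraction <= 1.0:
        raise ValueError("aquatic_fraction must be between 0 and 1")
    return aquatic_fraction * RESERVOIR_OFFSET_YEARS

# A ~50-year discrepancy, like the one reported for the jawbone, would
# correspond to roughly a 12.5% aquatic share under this toy model:
print(apparent_age_shift(0.125))  # → 50.0
```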
Looking at the jaw, the team saw very strong signs that Louis suffered from a bad case of scurvy, a disease caused by a lack of vitamin C in the diet that attacks the gums and bones. The research appears in the Journal of Stomatology, Oral and Maxillofacial Surgery.
The historical record supports their diagnosis. The researchers say that contemporary accounts of Louis’s demise recount the king spitting out bits of gum and teeth, consistent with what was found in the mandible and signs of late-stage scurvy.
The real head-scratcher is why the king would suffer from such a disease when it’s likely plenty of fresh fruit and vegetables, which could have saved him, were available in the Tunisian countryside.
French forensic pathologist and study co-author Philippe Charlier tells AFP that it was likely a combo of poor logistics and excess piety that sealed the king’s fate. “His diet wasn’t very balanced,” he says of the king. “He put himself through all manner of penances, and fasting. Nor was the crusade as well prepared as it should have been. They did not take water with them or fruit and vegetables.”
And, it appears, his army did not supplement its rations with local produce. It wasn’t just Louis who suffered: while laying siege to the city of Tunis, up to one sixth of the Crusader army died, including Louis’ son John Tristan, who may also have died of the disease.
Rafi Letzter at LiveScience reports that Jean de Joinville, who chronicled the crusade, described the crusaders' gory ordeal. “Our army suffered from gum necrosis [dead gums],” he wrote, "and the barbers [doctors] had to cut the necrotizing tissue in order to allow the men to chew the meat and swallow. And it was a pity to hear the soldiers shouting and crying like women in labor when their gums were cut.”
Scurvy wasn’t the only disease they suffered from. Both armies during the battle were struck with trench fever, a louse-borne disease that also plagued armies during World War I and World War II.
Scurvy may not have been the primary cause of Louis’ death, but it likely weakened him enough to allow another pathogen to finish him off. There are some reports that Louis also suffered from dysentery around the time of his death.
The researchers doubt the king’s death was caused by plague. “Tradition has conserved a cause of death as plague but this could be related to a bad translation of the ancient word ‘pestilence,’" the authors write in the paper.
“That he died of the plague is still there in the history books,” Charlier tells AFP, “and modern science is there to rectify that.”
Going forward, the team hopes to definitively answer what bug killed the king off by examining parts of his stomach, which was cut up and boiled in wine to preserve it before it was shipped back to Paris with the rest of his remains.
While Louis’ piety and ministrations to the poor and lepers earned him sainthood, his reputation as a military leader is decidedly mixed. In 1242, he repulsed an English incursion into France by Henry III, though it was less battle, more standoff.
In 1244, after suffering from a bout of malaria, the young king decided to lead the Seventh Crusade to the Holy Lands to lend support to Christian Kingdoms established by previous crusades, which had recently fallen to Egyptian Mamluk armies.
He set out with a fleet of 100 ships, carrying 35,000 soldiers to fight in 1248. The idea was to attack Egypt, then trade captive Egyptian cities for those in the Holy Land. But after an auspicious beginning in which they captured various strongholds on the way to Cairo, the exhausted army was struck by plague at Mansourah. As they retreated back down the river, the Egyptians caught up, taking Louis and many high nobles into captivity.
Louis was ransomed and the original plan had to be abandoned. But instead of returning home, he went to the Crusader kingdom of Acre, in present-day Israel, where he arranged alliances and fortified Christian positions in the area for four years before returning to France.
Sixteen years later, the Crusader States were once again threatened, this time by Mongols coming from the east. Louis decided the time was right to strike, and planned to cross the Mediterranean and capture Tunis, which he would then be able to use as a base to attack Egypt and secure the Christian states as part of the Eighth Crusade. But everything fell apart on the first leg of the venture; Louis died, and the armies returned to Europe after negotiating a deal with the Emir of Tunis. In 1291, the city of Acre finally fell, ending the brief, turbulent history of Crusader states in the Near East.
In 1991, shortly after the fall of the Berlin Wall, aerial photographers identified a so-called “German Stonehenge” southwest of Berlin. Now, reports Michael Price at Science, a new study of the site, known as the Pömmelte enclosure, suggests it shares similarities with its famed cousin in Britain, and that its builders performed many of the same rituals, though they added a grim twist: human sacrifice.
The henge-like enclosures at Pömmelte consist of seven concentric rings made of ditches and banks, the largest stretching about 380 feet in diameter. Between 2005 and 2008, excavations took place, revealing post holes where wooden poles would have been placed, earning the site the nickname “Woodhenge.”
In the new study published in the journal Antiquity, researchers looked at items collected from 29 shafts also found at Woodhenge during the estimated 300 years it was in consecutive use. What they found was that the site went through several periods of use by different cultures. In the oldest layers, from around 2321 to 2211 BCE, they discovered broken pots, stone axes and animal bones, all smashed to pieces, suggesting they were placed there as part of a ritual by the Bell-Beaker Culture, who lived throughout much of Europe at the time.
They also found something unexpected: the dismembered bodies of 10 children, juveniles and women, found in positions that suggested they were tossed into the shafts. Four of the women exhibited skull and rib fractures suffered before death. Study leader André Spatzier tells Laura Geggel at LiveScience that one of the skeletons, a teenager, had their hands bound before being tossed in the pit. “It remains unclear whether these individuals were ritually killed or if their death resulted from intergroup conflict, such as raiding,” the team writes in the study.
That find stands in contrast to the discovery of the graves of 13 men on the east side of the rings, who were buried in a dignified manner with no signs of trauma. The orientation of these bodies suggests an association with death and sunrise, the team writes in the study, which could signify the culture that buried them had ideas of reincarnation or an afterlife.
The reason behind the disconnect between the burials can’t be known for certain, but the press release states that “the gender-specific nature of the adult victims and the ritual nature of the other deposits make [ritual sacrifice] a likely scenario.”
Geggel reports that researchers had a hard time even finding the site because it was more or less decommissioned by the people who used it. “It looks like at the end of the main occupation, around [2050 BCE], they extracted the posts, put offerings into the postholes and probably burned all the wood and back-shoveled it into the ditch," Spatzier explains. “So, they closed all the features. It was still visible above ground, but only as a shallow depression.”
The ritual use of the site and its dates connect it to Stonehenge and other Neolithic circles in Britain, like the country’s own Woodhenge. It raises the possibility that building of circular henges was not limited to the British Isles, but may have spread across Europe before crossing the English Channel. “I would say it is certainly appropriate to reconsider the idea that Britain at this time was entirely a special case,” archaeologist Daniela Hofmann of the University of Hamburg tells Price.
But there are differences. Unlike the Pömmelte enclosure, there is currently no evidence that human sacrifice took place at Stonehenge, at least by its original builders, though there is one male skeleton that may show signs of ritual death. And Stonehenge was significant enough to draw people from far away to its rituals. Researchers have found that people — and food — from all over Britain and the farthest reaches of Scotland came to the site, and the remains of a man who came from the Alps, as well as trade goods from France, central Europe and even Turkey, have been found at the henge.
It sounds like a riddle: How did the moai, the giant stone carvings on Easter Island, get their hats?
In fact, it’s a legitimate conundrum. Somehow, the native Rapa Nui people cut stone from a quarry and moved the massive blocks as far as 11 miles across the island. In total, they created 887 of these statues, with some weighing over 80 tons. Each of these moai was adorned with a 13-ton hat made of a different type of stone that came from a separate quarry.
Now, reports Kat Eschner at Popular Science, researchers think they’ve figured out just how the Easter Islanders got those massive toppers, called pukao, up there.
The new study, which appears in the Journal of Archeological Science, came about because a team of anthropologists and physicists wanted to ground their hypothesis in the archeological record.
“Lots of people have come up with ideas, but we are the first to come up with an idea that uses archaeological evidence," Sean W. Hixon, a graduate student in anthropology at Penn State and lead author of the study, says in a press release.
Hixon and his team worked under the assumption that the hats were all produced in a similar manner and placed on the moai using the same technique. So they looked for common features in the hats, creating detailed 3-D scans of 50 of the pukao found across the island as well as 13 cylinders of the red scoria rock found at the quarry where the hats were cut. What they found is that, besides their round shape, all the hats also include an indentation where they fit on the head and all the statues sit on similarly shaped bases.
Using this information, the team believes that the hats were rolled from the quarry to the site of the moai. Instead of being put on top of the head while the statue was lying down, as some researchers have proposed, they hypothesize that a ramp made of dirt and rocks was built to the top of the statue, which was tilted forward at a roughly 17-degree angle. Two teams of people would then pull the hat up the ramp using a technique called parbuckling, which allows the heavy stone to be rolled up the ramp without rolling back down.
George Dvorsky at Gizmodo reports that the technique would allow a group of as few as 10 or 15 people to move the pukao, which was then further modified at the top of the ramp, something evidenced by shards of the red scoria found at the base of some moai. The hat was then turned 90 degrees and levered onto the statue’s head, and the ramp was removed, its material forming wings on either side of the moai that still exist. In the final step, the base of the statue was carved flat, causing it to sit upright with the fetching hat on top of its head.
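A back-of-the-envelope statics sketch suggests why parbuckling helps. The numbers below are illustrative assumptions, not figures from the study; in particular, the article's 17-degree figure describes the statue's tilt, and it is borrowed here as a ramp slope for the sake of the example.

```python
import math

# Toy statics estimate: rolling a cylinder up a ramp requires a force
# along the slope of roughly m * g * sin(theta), ignoring rolling
# resistance. Parbuckling wraps the rope around the cylinder like a
# movable pulley, so the tension the pullers must supply is about half.

def parbuckle_tension(mass_kg: float, ramp_deg: float) -> float:
    """Approximate rope tension (N) to parbuckle a cylinder up a ramp."""
    g = 9.81  # m/s^2
    slope_force = mass_kg * g * math.sin(math.radians(ramp_deg))
    return slope_force / 2  # 2:1 mechanical advantage of parbuckling

# A 13-tonne pukao on a hypothetical 17-degree ramp:
tension = parbuckle_tension(13_000, 17)
print(f"{tension / 1000:.1f} kN")  # roughly 18.6 kN of total pulling force
```

The key point is the factor of two: wrapping the rope around the cylinder lets the pullers supply roughly half the force a straight drag up the same slope would require, at the cost of hauling twice the length of rope.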
Figuring out just how people created such monumental stone works before the advent of cranes and modern machinery is interesting in its own right, but the study also challenges current assumptions about the ultimate fate of the Rapa Nui people. In recent years some historians have suggested that the inhabitants of the island were in such a fever to create the stone statues to their gods and ancestors that they used up all their resources, cutting down the palm forests that once covered the island to transport the stones, leading to resource depletion, starvation, civil war and cannibalism.
But a previous study in 2012 by the same group of researchers found that it’s likely the giant statues were engineered to be moved by rocking them back and forth. That technique does not require vast amounts of timber and uses relatively few people. That, along with the new research on the hats, depicts a tradition that undoubtedly took some effort and planning, but wasn’t so overwhelming that it destroyed a society.
“Easter Island is often treated as a place where prehistoric people acted irrationally, and that this behavior led to a catastrophic ecological collapse,” anthropologist Carl Lipo of the University of Binghamton says in another press release. “The archaeological evidence, however, shows us that this picture is deeply flawed and badly misrepresents what people did on the island, and how they were able to succeed on a tiny and remote place for over 500 years…While the social systems of Rapa Nui do not look much like the way our contemporary society functions, these were quite sophisticated people who were well-tuned to the requirements of living on this island and used their resources wisely to maximize their achievements and provide long-term stability.”
So what actually happened to Easter Island and its inhabitants? Catrine Jarman of the University of Bristol writes at The Conversation that the colonizers of the island, likely Polynesian sailors, brought with them Polynesian rats, which ate seeds and palm saplings, preventing the forests from sprouting back after sections were cut. And there is no evidence of a population crash before European contact. Instead, she writes, disease as well as several centuries of the slave trade reduced the island’s population from thousands to just 111 people by 1877.
At the start of 1844, James Buchanan’s presidential aspirations were about to enter a world of trouble. A recent spat in the Washington Daily Globe had stirred his political rivals into full froth—Aaron Venable Brown of Tennessee was especially enraged. In a “confidential” letter to future first lady Sarah Polk, Brown savaged Buchanan and “his better half,” writing: “Mr. Buchanan looks gloomy & dissatisfied & so did his better half until a little private flattery & a certain newspaper puff which you doubtless noticed, excited hopes that by getting a divorce she might set up again in the world to some tolerable advantage.”
The problem, of course, is that James Buchanan, our nation’s only bachelor president, had no woman to call his “better half.” But, as Brown’s letter implies, there was a man who fit the bill.
Google James Buchanan and you inevitably discover the assertion that he was our nation’s first gay president. It doesn’t take much longer to discover that this popular understanding derives from his relationship with one man in particular: William Rufus DeVane King of Alabama. The premise raises many questions: What was the real nature of their relationship? Was each man “gay,” or something else? And why do Americans seem fixated on making Buchanan our first gay president?
My new book, Bosom Friends: The Intimate World of James Buchanan and William Rufus King, aims to answer these questions and set the record straight, so to speak, about the pair. My research led me to archives in 21 states, the District of Columbia, and even the British Library in London. My findings suggest that theirs was an intimate male friendship of the kind common in 19th-century America. A generation of scholarship has uncovered numerous such intimate and mostly platonic friendships among men (though some of these friendships certainly included an erotic element as well). In the years before the Civil War, friendships among politicians provided an especially important way to bridge the chasm between the North and the South. Simply put, friendships provided the political glue that bound together a nation on the precipice of secession.
This understanding of male friendship pays close attention to the historical context of the time, an exercise that requires one to read the sources judiciously. In the rush to make new meaning of the past, I have come to understand why today it has become de rigueur to consider Buchanan our first gay president. Simply put, the characterization underscores a powerful force at work in historical scholarship: the search for a usable queer past.
The year was 1834, and Buchanan and King were serving in the United States Senate. They came from different parts of the country: Buchanan was a lifelong Pennsylvanian, and King was a North Carolina transplant who helped found the city of Selma, Alabama. They came by their politics differently. Buchanan started out as a pro-bank, pro-tariff, and anti-war Federalist, and held onto these views well after the party had run its course. King was a Jeffersonian Democrat, or Democratic-Republican, who held a lifelong disdain for the national bank, was opposed to tariffs, and supported the War of 1812. By the 1830s, both men had been pulled into the political orbit of Andrew Jackson and the Democratic Party.
They soon shared similar views on slavery, the most divisive issue of the day. Although he came from the North, Buchanan saw that the viability of the Democratic Party depended on the continuance of the South’s slave-driven economy. From King, he learned the political value of allowing the “peculiar institution” to grow unchecked. Both men equally detested abolitionists. Critics labeled Buchanan a “doughface” (a northern man with southern principles), but he pressed onward, quietly building support across the country in the hopes of one day rising to the presidency. By the time of his election to that office in 1856, Buchanan was a staunch conservative, committed to what he saw as upholding the Constitution and unwilling to quash southern secession during the winter of 1860 to 1861. He had become the consummate northern doughface.
King, for his part, was first elected to the U.S. House of Representatives in 1810. He believed in states’ rights, greater access to public lands, and making a profit planting cotton. His commitment to the racial hierarchy of the slaveholding South was total. At the same time, King supported the continuation of the Union and resisted talk of secession by radical Southerners, marking him as a political moderate in the Deep South. For his lifelong loyalty to the party and to balance the ticket, he was selected as the vice-presidential running mate under Franklin Pierce in 1852.
Buchanan and King shared one other essential quality in addition to their political identification. Both were bachelors, having never married. Born on the Pennsylvania frontier, Buchanan attended Dickinson College and studied law in the bustling city of Lancaster. His practice prospered nicely. In 1819, when he was considered to be the city’s most eligible bachelor, Buchanan became engaged to Ann Coleman, the 23-year-old daughter of a wealthy iron magnate. But when the strain of work caused Buchanan to neglect his betrothed, Coleman broke off the engagement, and she died shortly thereafter of what her physician described as “hysterical convulsions.” All the same, rumors that she had committed suicide have persisted. For Buchanan’s part, he later claimed that he entered politics as “a distraction from my great grief.”
The love life of William Rufus DeVane King, or “Colonel King” as he was often addressed, is a different story. Unlike Buchanan, King was never known to pursue a woman seriously. But—critically—he could also tell a story of a love lost. In 1817, while serving as secretary to the American mission to Russia, he supposedly fell in love with Princess Charlotte of Prussia, who was just then to marry Czar Nicholas Alexander, heir to the Russian imperial throne. As the King family tradition has it, he passionately kissed the hand of the czarina, a risky move that could have landed him in serious jeopardy. The contretemps proved fleeting, as a kind note the next day revealed that all was forgiven. Still, he spent the remainder of his days bemoaning a “wayward heart” that could not love again.
Each of these two middle-aged bachelor Democrats, Buchanan and King, had what the other lacked. King exuded social polish and congeniality. He was noted for being “brave and chivalrous” by contemporaries. His mannerisms could at times be bizarre, and some thought him effeminate. Buchanan, by contrast, was liked by almost everyone. He was witty and enjoyed tippling, especially glasses of fine Madeira, with fellow congressmen. Whereas King could be reserved, Buchanan was boisterous and outgoing. Together, they made for something of an odd couple out and about the capital.
While in Washington, they lived together in a communal boardinghouse, or mess. To start, their boardinghouse included other congressmen, most of whom were also unmarried, yielding a friendly moniker for their home: the “Bachelor’s Mess.” Over time, as other members of the group lost their seats in Congress, the mess dwindled in size from four to three to just two—Buchanan and King. Washington society began to take notice, too. “Mr. Buchanan and his Wife,” one tongue wagged. They were each called “Aunt Nancy” or “Aunt Fancy.” Years later, Julia Gardiner Tyler, the much younger wife of President John Tyler, remembered them as “the Siamese twins,” after the famous conjoined twins, Chang and Eng Bunker.
Certainly, they cherished their friendship with one another, as did members of their immediate families. At Wheatland, Buchanan’s country estate near Lancaster, he hung portraits of both William Rufus King and King’s niece Catherine Margaret Ellis. After Buchanan’s death in 1868, his niece, Harriet Lane Johnston, who played the part of first lady in Buchanan’s White House, corresponded with Ellis about retrieving their uncles’ correspondence from Alabama.
More than 60 personal letters still survive, including several that contain expressions of the most intimate kind. Unfortunately, we can read only one side of the correspondence (letters from King to Buchanan). One popular misconception holds that their nieces destroyed their uncles’ letters by pre-arrangement, but the real reasons for the mismatch stem from multiple factors: for one, the King family plantation was raided during the Battle of Selma in 1865, and for another, flooding of the Alabama River likely destroyed portions of King’s papers prior to their deposit at the Alabama Department of Archives and History. Finally, King dutifully followed Buchanan’s instructions and destroyed numerous letters marked “private” or “confidential.” The end result is that relatively few letters of any kind survive in the various papers of William Rufus King, and even fewer have ever been prepared for publication.
By contrast, Buchanan kept nearly every letter he ever received, carefully docketing the date of his response on the back of his correspondence. After his death, Johnston took charge of her uncle’s papers and supported the publication of a two-volume set in the 1880s and another, more extensive 12-volume edition in the early 1900s. Such private efforts were vital to securing the historical legacy of U.S. presidents in the era before they received official library designation from the National Archives.
Still, almost nothing written by Buchanan about King remains available to historians. An important exception is a singular letter from Buchanan written to Cornelia Van Ness Roosevelt, wife of former congressman John J. Roosevelt of New York City. Weeks earlier, King had left Washington for New York, staying with the Roosevelts, to prepare for a trip overseas. In the letter, Buchanan writes of his desire to be with the Roosevelts and with King:
I envy Colonel King the pleasure of meeting you & would give any thing in reason to be of the party for a single week. I am now “solitary & alone,” having no companion in the house with me. I have gone a wooing to several gentlemen, but have not succeeded with any one of them. I feel that it is not good for man to be alone; and should not be astonished to find myself married to some old maid who can nurse me when I am sick, provide good dinners for me when I am well & not expect from me any very ardent or romantic affection.
Historians and biographers have interpreted this passage, along with other select lines of their correspondence, to imply a sexual relationship between the two men. The earliest biographers of James Buchanan, writing in the staid Victorian era, said very little about his sexuality. Later Buchanan biographers, from the 1920s to the 1960s, following the contemporary gossip in private letters, noted that the pair were referred to as “the Siamese twins.”
But by then, an understanding of homosexuality as a sexual identity and orientation had begun to take hold among the general public. In the 1980s, historians rediscovered the Buchanan-King relationship and, for the first time, explicitly argued that it may have contained a sexual element. The media soon caught wind of the idea that we may have had a “gay president.” In the November 1987 issue of Penthouse Magazine, New York gossip columnist Sharon Churcher noted the finding in an article headlined “Our First Gay President, Out of the Closet, Finally.” The famous author—and Lancaster native—John Updike pushed back somewhat in his novel Memories of the Ford Administration (1992). Updike creatively imagined the boardinghouse life of Buchanan and King, but he admitted to finding few “traces of homosexual passion.” Updike’s conclusion has not stopped a veritable torrent of historical speculation in the years since.
This leaves us today with the popular conception of James Buchanan as our first gay president. On the one hand, it’s not so bad a thing. Centuries of repression of homosexuality in the United States have erased countless Americans from the story of LGBT history. The dearth of clearly identifiable LGBT political leaders from the past, moreover, has yielded a necessary rethinking of the historical record and has inspired historians to ask important, searing questions. In the process, past political leaders who for one reason or another don’t fit into a normative pattern of heterosexual marriage have become, almost reflexively, queer. More than anything else, this impulse explains why Americans have transformed James Buchanan into our first gay president.
Certainly, the quest for a usable queer past has yielded much good. Yet the specifics of this case actually obscure a more interesting, and perhaps more significant, historical truth: an intimate male friendship between bachelor Democrats shaped the course of the party, and by extension, the nation. Worse still, moving Buchanan and King from friends to lovers blocks the way for a person today to assume the proper mantle of becoming our first gay president. Until that inevitable day comes to pass, these two bachelors from the antebellum past may be the next closest thing.
It was late on Christmas night, 1951, but Harry and Harriette Moore had yet to open any gifts. Instead they had delayed the festivities in anticipation of the arrival of their younger daughter, Evangeline, who was taking a train home from Washington, D.C. to celebrate along with her sister and grandmother. The Moores had another cause for celebration: the day marked their 25th wedding anniversary, a testament to their unshakeable partnership. But that night in their quiet home on a citrus grove in rural Mims, Florida, the African American couple fell victim to a horrific terrorist attack at the hands of those who wanted to silence them.
At 10:20 p.m., a blast ripped apart their bedroom, splintering the floorboards, ceiling and front porch. The explosion was so powerful that witnesses reported hearing it several miles away. Pamphlets pushing for voting rights floated out of the house and onto the street, remnants of a long fight for justice. Harry Moore had spent much of the last two decades earning the enmity of Florida’s white supremacists as he organized for equal pay, voter registration, and justice for murdered African Americans. And yet despite his immense sacrifice and the nation’s initial shock at his assassination, Moore’s name soon faded from the pantheon of Civil Rights martyrs.
After the attack, Moore’s mother and daughter knew they would be unable to get an ambulance willing to transport a black victim, so nearby relatives drove the wounded Harry and Harriette to the town of Sanford, which was more than 30 miles away on a dark, two-lane road bracketed by dense foliage. Harry died shortly after arriving at the hospital; Harriette would die a little more than a week later. When Evangeline arrived at the train station the next day, “She didn’t see her mother and father, but she saw her aunts and uncles and family members. She knew something was wrong,” says Sonya Mallard, coordinator for the Harry T. and Harriette V. Moore Cultural Complex, who knew Evangeline before her death in 2015. Her uncle broke the news on the drive to the hospital, and “her world was never the same again. Never.”
In the years before his death, Harry Moore was increasingly a marked man—and he knew it. But he had begun charting this course in the 1930s, when he worked tirelessly to register black voters. He later expanded his efforts into fighting injustice in lynching cases (Florida had more lynchings per capita than any other state at the time), putting him in the crosshairs of Florida’s most violent and virulent racists.
“Harry T. Moore understood that we had to make a better way, we had to change what was going on here in the state of Florida,” says Mallard. As Moore traveled around the state on roads where it was too dangerous even to use a public restroom, his mother, Rosa, worried he’d be killed, “but he kept on going because he knew it was bigger than him,” says Mallard.
Moore was born in 1905 in the panhandle town of Houston, Florida. His father, Johnny, owned a small shop and worked for the railroad; he died when Harry was just 9 years old. After trying to support her son as a single parent, Rosa sent Harry to live with his aunts in Jacksonville, a hub for African American business and culture that would prove to be influential on the young Moore. After graduating from Florida Memorial College, as today’s university was then known, Moore likely could have made a relatively comfortable life in Jacksonville.
However, the climate in Florida as a whole was hostile to African Americans. His formative years were ones of pervasive racial violence often unchecked by officials. Before the 1920 election, displaying the impunity enjoyed by white supremacists, the Ku Klux Klan “marched in downtown Orlando specifically to intimidate black voters,” says Ben Brotemarkle, executive director of the Florida Historical Society. When a man named July Perry came to Orlando from nearby Ocoee to vote, he was beaten, shot and hung from a light post; the primarily African American town was then burned in a mob rampage that killed dozens. For decades after, Ocoee had no black residents and was known as a “sundown town”; today the city of 46,000 is 21 percent African American.

A portrait of Harry T. and Harriette V. Moore (Courtesy of the Moore Cultural Complex)
In 1925, Moore began teaching at a school for black students in Cocoa, Florida, a few miles south of Mims, and later assumed the role of principal at the Titusville Colored School. His first year in Cocoa, Harry met Harriette Simms, three years his senior, at a party. She later became a teacher after the birth of their first daughter, Annie Rosalea, known as Peaches. Evangeline was born in 1930.
His civic activism flowed from his educational activism. “He would bring his own materials and educate students about black history, but what he also did was bring in ballots and he taught his students how to vote. He taught his students the importance of the candidates and making a decision to vote for people who took your interests seriously,” says Brotemarkle.
In 1934, Moore joined the National Association for the Advancement of Colored People (NAACP), an indication of his growing interest in civic matters. In 1937, Moore pushed for a lawsuit challenging the chasm between black and white teachers’ salaries in his local Brevard County, with fellow educator John Gilbert as the plaintiff. Moore enlisted the support of NAACP lawyer (and later Supreme Court Justice) Thurgood Marshall, the start of their professional collaboration. The lawsuit was defeated in both the Circuit Court and the Florida Supreme Court. For his efforts, the Moores later lost their teaching jobs—as did Gilbert.
In the early 1940s, Moore organized the Florida State Conference of the NAACP and significantly increased membership (he would later become its first paid executive secretary). He also formed the Progressive Voters’ League in Florida in 1944. “He understood the significance of the power of the vote. He understood the significance of the power of the pen. And he wrote letters and typed letters to anyone and everyone that would listen. And he knew that [African Americans] had to have a voice and we had to have it by voting,” says Mallard. In 1947, building on the U.S. Supreme Court case in which Marshall successfully argued against Texas’ “white primary” that excluded minority voters, Moore organized a letter-writing campaign to help rebuff bills proposed in the Florida legislature that would effectively perpetuate white primaries. (As the Tampa Bay Times notes, Florida was “a leading innovator of discriminatory barriers to voting.”)
Before his death, Moore’s efforts in the state helped increase the number of black voters by more than 100,000, according to the Moore Cultural Complex, a figure sure to catch the attention of influential politicians.
But success was a risky proposition. “Moore was coming into a situation in Central Florida where there was a lot of Klan activity, there were a lot of Klansmen who had positions in government, and it was a very tenuous time for civil rights,” says Brotemarkle. “People were openly being intimidated and kept away from the polls, and Moore worked diligently to fight that.”
Moore was willing to risk much more than his job. He first became involved in anti-lynching efforts after three white men kidnapped 15-year-old Willie James Howard, bound him with ropes and drowned him in a river for the “crime” of passing a note to a white girl in 1944. The perpetual inaction in cases like Howard’s, in which no one was arrested, tried, or convicted, spurred Moore to effect change. In a 1947 letter to Florida’s congressional delegation, Moore wrote “We cannot afford to wait until the several states get ‘trained’ or ‘educated’ to the point where they can take effective action in such cases. Human life is too valuable for more experimenting of this kind. The Federal Government must be empowered to take the necessary action for the protection of its citizens.”
Moore’s letters show a polite, but persistent, push for change. His scholarly nature obscured the profound courage it took to stand up to the hostile forces around him in Florida. Those who knew him recall a quiet, soft-spoken man. “The fiery, from-the-pulpit speech? That was not Harry T. Moore. He was much more behind the scenes, but no less aggressive. You can see from his letters that he was every bit as brave,” says Brotemarkle.
Two years before his death, Moore placed himself in harm’s way in the most prominent manner yet with his involvement in the Groveland Four incident. The men had been accused of raping a white woman; a mob went to drag them from jail and, not finding them there, burned and shot into nearby black residents’ homes. After their arrest, conviction by an all-white jury was practically a foregone conclusion, despite attorneys’ assertions that the defendants’ confessions were physically coerced. The case also pitted Moore against Sheriff Willis McCall, who was investigated numerous times in his career for misconduct related to race.
While transporting two of the suspects, McCall shot them, killing one. McCall claimed he had been attacked, but the shootings elicited furious protest. All this took place against the backdrop of the ongoing legal battle—eventually, the U.S. Supreme Court ordered a re-trial, which again ended in the conviction of the surviving suspect, who was represented by Thurgood Marshall. (In recent years, Florida has posthumously pardoned and apologized to all four of the accused).
Moore wrote repeatedly to Governor Fuller Warren, methodically dismantling McCall’s claims. He admonished Warren that “Florida is on trial before the rest of the world,” calling on him to remove the officers involved in the shooting. He closed with a reminder that “Florida Negro citizens are still mindful of the fact that our votes proved to be your margin of victory in the [runoff election in] 1948. We seek no special favors; but certainly we have a right to expect justice and equal protection of the laws even for the humblest Negro. Shall we be disappointed again?”
Compounding Moore’s woes, just weeks after the shooting of the Groveland suspects and weeks before his own death, he lost his job at the NAACP. Moore had clashed with the organization’s national leadership over his outspoken political involvement and disagreements over fundraising. It was a severe blow, but he continued his commitment to the work—albeit now on an unpaid basis.
During the fall of 1951, Florida saw a rash of religious and racial violence. Over a three-month period, multiple bombs had hit Carver Village, a housing complex in Miami leasing to black tenants, in what was likely the work of the KKK; a synagogue and Catholic church were also menaced. “A dark shadow of violence has drifted across sunny Florida—cast by terrorists who blast and kill in the night,” the Associated Press reported days after the Christmas bombing. If lesser-known black residents were being targeted, Moore’s prominence made his situation especially perilous.
“Moore ruffled a lot of feathers, and there was a large population of Florida that didn’t want to see the type of change that he was part of,” says Brotemarkle.
“I tried to get him to quit the N.A.A.C.P., thinking something might happen to him some day,” Rosa Moore told a reporter after the bombing. “But he told me, ‘I’m trying to do what I can to elevate the Negro race. Every advancement comes by the way of sacrifice, and if I sacrifice my life or health I still think it is my duty for my race.’”
News of Moore’s Christmas night death made headlines across the country. Former First Lady Eleanor Roosevelt expressed her sadness. Governor Warren called for a full investigation but clashed with NAACP executive secretary Walter White, who accused the governor of not doing enough. Warren said White “has come to Florida to try to stir up strife” and called him a “hired Harlem hatemonger.”
While Moore may have been out of favor with the NAACP’s national leadership shortly before his death, he was venerated soon after. In March of 1952, the NAACP held a fundraising gala in New York City, featuring the “Ballad of Harry T. Moore,” written by poet Langston Hughes. His name was a rallying cry at numerous events.
“The Moore bombings set off the most intense civil rights uproar in a decade,” writes Ben Green in Before His Time: The Untold Story of Harry T. Moore, America’s First Civil Rights Martyr. “There had been more violent racial incidents…but the Moore bombing was so personal, so singular – a man and his wife blown up in their home on Christmas Day – that it became a magnifying glass to focus the nation’s revulsion.”
While the publicity helped galvanize awareness for civil rights on a national level, the assassination soon had a chilling effect on voter registration in Florida. “People were petrified, they were scared,” says Mallard. The KKK “terrorized you, they killed you, they lynched you, they scared you. They did all that to shut you up.”
Meanwhile Harriette Moore remained hospitalized for nine days, dying from her injuries one day after her husband’s funeral. “There isn’t much left to fight for. My home is wrecked. My children are grown up. They don’t need me. Others can carry on,” she had told a reporter in a bedside interview. Harriette’s discouragement was palpable after years of facing the same threats side by side with Harry. “She adored her husband,” says Mallard.
The crime has never been definitively solved, despite commitments from notorious FBI chief J. Edgar Hoover in the bombing’s aftermath and from Florida Governor Charlie Crist in the mid-2000s. After almost 70 years, the identity of the killer or killers may never be pinpointed, but those who have studied Moore’s life and the multiple investigations of the case are confident it was the work of the KKK.
“As the movement’s ranks swelled and the battle was carried to Birmingham, Nashville, Tallahassee, Little Rock, Greensboro and beyond, the unsolved murders of Harry and Harriette Moore, still hanging in limbo, were forgotten,” Green writes. “For Evangeline and Peaches Moore, the pain and heartache never ceased. The murderers of their parents still walked the streets, and no one seemed to care.”

A quote by Harry Moore adorns a fountain outside the Moore Cultural Complex (Francine Uenuma)
Moore’s life and death underscore that not all heroes become legends. Today cities like Selma, Montgomery and Memphis—not Mims—evoke images of the Civil Rights struggle. Moore worked for almost two decades without the weight of national outrage behind him. No television cameras documented the brutal violence or produced the images needed to appall Americans in other states. The Maya Lin-designed Civil Rights Memorial situated across the street from the Southern Poverty Law Center’s office in Montgomery, Alabama, recognizes martyrs from 1955 until Martin Luther King Jr.’s death in 1968. That was 17 years after the Moores were killed.
“When you talk about the contemporary civil rights movement, [people] look at the Brown v. Board of Education decision in 1954 as kind of the starting place for the timeline, and while that can be seen as true in a lot of ways, it overlooks a lot of activity that led up to that,” says Brotemarkle.
Nonetheless Moore’s work and legacy helped lay the groundwork for the expansion of civil rights onto the national platform, and Moore has received some belated recognition in recent decades. The Moore Cultural Complex in Mims welcomes visitors to a replica of their home, rebuilt on the original property. Several of their personal effects are on display at the Smithsonian’s National Museum of African American History & Culture in Washington, D.C.
In looking back at Moore’s life and work, it is abundantly clear he was never motivated by name recognition in the first place. Moore’s goal was singular—his daughter would later remember him saying before his death that “I have endeavored to help the Negro race and laid my life on the altar.”