
The Perfect Way to Ripen Fruit and Other Ingenious Inventions Recognized by the Dyson Awards

Smithsonian Magazine

Credit: James Dyson Foundation

If there were ever a Michael Jordan of the inventor’s world, it would be Sir James Dyson. The billionaire founder of Dyson Industries, best known as the father of the Dyson bagless vacuum cleaner, has also over the years introduced a 10-second instant hand dryer and a bladeless fan. In many ways, he brings a sleek and innovative Steve Jobs-esque design sensibility to common appliances.

Not too long ago, Sir James started the annual Dyson Awards, an international competition that “celebrates, encourages and inspires the next generation of design engineers.” Alongside a smaller national-level competition in Britain, aspiring inventors can submit entries to the international contest for a chance to win nearly $48,000. The winner will be announced on November 7, 2013.

Here are a few notable ideas that have been shortlisted as finalists for this year’s honors:

Credit: TitanArm.com

Titan Arm (USA)

This entry from the United States will appeal to fans of Iron Man. The Titan Arm is the end result of an impressive effort by students at the University of Pennsylvania to piece together motors, cables, sensors and other inexpensive parts into an upper-body exoskeleton that enables the wearer to lift an extra 40 pounds beyond what natural strength can achieve. The team hopes the device can be used to prevent injuries to workers required to do heavy lifting as well as to assist those undergoing physical therapy. Titan Arm has already claimed top prize in the Cornell Cup USA engineering competition, sponsored by Intel.

Credit: James Dyson Awards

OLTU Fruit Ripening Unit (Spain)

Sure, you have your banana hangers, but the art of ripening fruit takes a lot more ingenuity to perfect. That’s where the OLTU comes in. The ripening storage unit siphons power from your refrigerator to help create the ideal atmospheric conditions for various fruits and vegetables to ripen uniformly. The container features four sections with different settings, such as cold dry, cold wet, fresh wet and dry warm, each tailored to specific varieties.

Credit: James Dyson Awards

SONO (Austria)

So you can’t stand waking up to the roar of your neighbor’s lawnmower but would still appreciate hearing the song of a chirping bird in the early morning? The SONO is a simple device that attaches to windows and works as a lounge bouncer of sorts for sounds that pass through from outside. The ring design enables the system to detect the tone of various kinds of sounds and, using Wi-Fi, lets users set the SONO to block certain frequencies while allowing others.

Credit: James Dyson Awards

Stack Printer (Switzerland)

With productivity devices these days, portable and mobile have become the way to go. Meanwhile, printers seem to be stuck at the office. Mugi Yamamoto doesn’t think this necessarily needs to be the case and has taken the minimalist approach as far as it can go in developing the Stack printer. The industrial designer’s slimmed-down inkjet removes the standard plastic paper tray and keeps the product to its bare essentials: the ink cartridge, the print head and a frame for alignment. It works simply: place it on top of a stack of paper and let it run its course. Judging by the latest prototype, the Stack still wouldn’t fit into a briefcase. A backpack, though? Now we’re talking.

Credit: James Dyson Awards

Xarius (Germany)

The Xarius can aptly be described as wind power that fits in your pocket. Fittingly, it’s designed to recharge and power portable devices such as smartphones and tablets. The internal power generator relies on a cleverly designed three-winged mini wind turbine that captures energy in remote, off-the-grid places such as camping grounds; it’s also perfect for coastal getaways. The generator is efficient enough to capture energy even at low wind speeds.

Check out the complete list of finalists!

Are We Re-Entering a Golden Age of American Bartending?

Smithsonian Magazine

It's a great time to be a bartender—and drinker—in America. Interest in, and sales of, premium distilled spirits have been on the rise for decades, spurred by a renewed interest in American heritage spirits, classic cocktail flavors and small, craft distillers. There might only be one other time in history in which a spirit-lover would have similarly thrived: the period between 1850 and Prohibition.

"The first great immigration from western Europe was in the 1840s, and it's no accident that the craft cocktail golden age, as it were, began around 1850," says Philip Dobard, director of the New Orleans-based Museum of the American Cocktail, adding that until then, the country was "largely a pioneer state." 

Before 1850, Americans certainly didn't want for alcohol. The first colonial settlers in Jamestown and Plymouth most likely brewed beer, while rum and later whiskey dominated the landscape of American distilling. But individually crafted drinks weren't the norm: most people drank punch, created in communal bowls, or straight spirits,  considered safer than water. 

The European immigrants brought with them a new arsenal of flavors and ingredients, along with a love for fine dining and haute cuisine. In the 1850s, America's first great restaurants, catering to a new clientele with new tastes, first opened their kitchens. Alongside these restaurants, Dobard explains, grew new bars, creating individual drinks with fresh ingredients like juices and bitters. 

Though the word cocktail first appeared in 1806—defined as "a stimulating liquor composed of any kind of sugar, water and bitters"—the first cocktail, the Sazerac, was invented in New Orleans in 1838. The word "mixologist" first arrived in print in 1856. And by 1862, the first book of cocktail recipes had been published, written by a San Francisco bartender named Jerry Thomas. In the book's preface, Thomas wrote that the cocktail was a beacon of American innovation:

This is an Age of Progress; new ideas and new appliances follow each other in rapid succession. Inventive genius is taxed to the uttermost in devising new inventions, not alone for articles of utility or necessity, but to meet the ever-increasing demands for novelties which administer to creature-comfort, and afford gratification tc (sic) fastidious tastes.

A new beverage is the pride of the Bartender, and its appreciation and adoption his crowning glory.

Alongside the Sazerac, classic cocktails like the Manhattan, Old Fashioned and Jack Rose trace their birth to this great moment in early bartending. But as cocktail innovation reached its peak, another movement caused its swift demise: the passage of the Volstead Act in 1919, which made Prohibition law throughout the land.

"Prohibition happened and it killed the craft," Dobard says. "A lot of American bartenders went abroad to work, others went into other professions." Breweries and distilleries were forced to shutter—those that managed to remain open had to change their business models entirely (Dobard describes one winery in Los Angeles that was able to stay open during Prohibition by making sacramental wine for religious services).

Even with the repeal of the Volstead Act in 1933, the craft cocktail movement languished. The United States faced the Great Depression, and World War II diverted domestic industry into war production. As the war ended, factories accustomed to mass-producing wartime goods found new life mass-producing food items, spurring the industrialization of the food system—and the drinking world, with the rise of mass-produced sour mixes and juices.

In the 1960s, however, social upheaval inspired Americans to turn a critical eye to their food and drink. "It became clear to a critical mass of people that what we were consuming, as diners and imbibers, belonged to an agricultural industrial complex," Dobard says. "We were consuming whatever they handed to us. In questioning that, people learned how much more there was."

That reawakening coincided with an expansion in leisure travel, with more Americans being exposed to the tastes and flavors of foreign locales. Much like immigrants inspired the first cocktail revolution with their unique flavors, Americans in the 1960s and '70s came home desiring a taste of their travels in their local bar.

Inspired by a growing demand for cocktails made the traditional way, with an eye toward history and superior ingredients, a few bartenders began to revolutionize the American bar—by modeling it after its own past. Pioneering bartender Dale Degroff was largely responsible for leading the movement at its beginning in the 1980s, turning out historically inspired cocktails in New York City's Rainbow Room.

"It's only been recently that the cocktail has really come back," says James Rodewald, author of American Spirit: An Exploration of the Craft Distilling Revolution, who recently spoke at a spirited event at the National Museum of American History's After Hours series on the topic."I do agree that this is the best it's ever been if you like a mixed drink. The variety, the ingredients, the techniques—all top notch, at least at the best places."

Today, craft cocktails continue to be a growing trend in America, as bartenders and drinkers continue to discover that sometimes, the best revolution is one that looks back. "They're the best drinks," Rodewald says. "There is nothing better than a well-made Manhattan."

James Rodewald joined D.C.-based bartender Derek Brown and craft distiller Michael Lowe (of New Columbia Distillers) for a discussion of craft distilling in America. The talk was part of the National Museum of American History's American History After Hours series, which explores topics in American history through food. Upcoming topics include Julia Child on March 16, How Chicken Became America's Meat of Choice on April 8 and Sushi on May 13, among others.

A Short Talk With a Legend of Rock

Smithsonian Magazine

El Capitan, as seen here from the floor of Yosemite Valley, was once considered almost unclimbable. Photo courtesy of Flickr user Xavier de Jauréguiberry.

Until 1958, no person in known history had climbed the face of what may be the world’s most famous cliff, Yosemite’s El Capitan.

In the 54 years since climbing greats Warren Harding, George Whitmore and Wayne Merry made the first ascent, “El Cap” has been scaled thousands of times. Many individuals have climbed the 3,000-foot wall by numerous routes, and today dozens of climbers may be on the face of the cliff at any given time, nearly every month of the year. Scraps of dropped camping debris litter the valley floor, including bags of human waste, though “poop tubes” are now required of multi-day climbers. Today, just going up is hardly even an achievement in the climbing community, and so climbers bent on setting records or gaining praise must attempt such stunts as solo climbing and speed climbing. It’s been the same story for many of the great walls around the world: Once unclimbed, they are now mostly old news. Pitons scar many of them from base to top, and chalk smudges indicate clearly where a thousand climbers before have anchored their fingertips. For each successive person who goes up—each taking advantage of advances in knowledge, technology and gear—the challenge of the climb loses another trace of its old glory.

But Yvon Chouinard remembers the early years of the sport. He was among the pioneers of modern rock climbing and has climbed El Cap six times, two of which were first ascents of unmarked routes. Chouinard, who lives in Ventura County, began climbing as a kid in the 1950s, when he and several friends began making their first trips to Yosemite. At the time, campsites in the national park were always plentiful—though climbing gear was not.

“We were stealing hemp ropes from the telephone company,” he recalled with a laugh as he spoke to me by phone recently. “We had to learn on our own. There were no schools back then.”

Common practice of the era was to pound bolts into the rock; climbers secured their ropes—and their lives—to these bolts in case of a fall. But Chouinard was among the first people to consider the adverse effects this was having. So he designed his own form of removable pitons and began selling them to others in the small but growing circle of climbers. Eventually he invented gear that could be wedged into cracks, then removed again, leaving the rock unmarked. Later still, Chouinard began making clothing suited for the rigors of scaling cliffs, and in 1972 he founded a little company called Patagonia. It would grow into one of the best-known names in outdoor apparel.

In the 1950s, Chouinard says, there were fewer than 300 climbers in America. Most routes, whether climbed previously or not, were still un-scarred by either chalk or metal, and Chouinard grew high on the challenge and the danger of ascending routes while feeling the rock with his free hand, reaching, sometimes straining, looking for that next hold.

Yvon Chouinard, American climbing pioneer and founder of Patagonia, works a route on the West Face of Sentinel Rock in Yosemite in the 1960s. Photo by Tom Frost.

Today, hundreds of thousands of climbers scale walls around the world. I asked Chouinard if this—the growing popularity of climbing—is good for the world, good for people and maybe even good for the rock.

“It would be good because it’s getting people outdoors and into natural places,” he said—except that, inevitably, the Earth’s great walls have suffered. “Today, you go up a route that people climbed in the 1920s using hemp ropes and pitons, and there’ll be a bolt every 15 feet—and next to a crack. It’s really unfortunate.”

Modern climbing has become commercialized, too, and increasingly competitive. Sponsorships and financial motivation to break records or just gain glory may push climbers beyond their own limits. “And that,” Chouinard said, “can kill you.”

Long ago, Chouinard and his contemporaries committed themselves to an unofficial set of climbing ethics, which foremost mandate that a cliff be left as nature made it; for the next climber, so went the idea, there should be no evidence of a prior climber’s passage. “If you’re going up a route that’s been climbed without gear a thousand times and you’re putting bolts into the rock, you’re ruining the whole experience for the next person,” Chouinard explained. He cites what he calls the “manifest destiny idea, especially in Europe,” about “conquering the mountain and making it easier for the next person.” By such a process, Chouinard says, the magic is all but lost as cabins and cable cars are built on its slopes.

"Clean climbing," with wedges that can be removed after use, leaves no scars on cliffs like this one in Sweden - but faint chalk marks still lead the way. Photo courtesy of Evan Riley.

In Yosemite, where the cliffs remain mostly as they always were, simply the crowds of people clamoring to get their hands on some rock may have diminished the experience. The park service estimates that climbers log between 25,000 and 50,000 “climber-days” per year. Chouinard rarely visits the park anymore simply because of the difficulty in reserving a campsite. He feels the cables that lead up the back side of Half Dome should be removed, leaving this granite cathedral to the skilled and the impassioned—or no one at all.

Today, the popularity of rock climbing has spurred the proliferation of urban climbing gyms. But whether these facilities of synthetic rock, shredded rubber floors and fluorescent lighting are the modern climber’s answer to the urge to go up is questionable. Chouinard thinks that gyms simply don’t replicate the real spirit of rock climbing. “Climbing without risk isn’t climbing,” he says. “And in gyms, there’s no risk. You aren’t leading, and you’re not using your head. You’re just following the chalk marks to the top.”

So if gyms don’t cut it, and if even Yosemite—the Mecca of great walls and sacred rock—has lost its excitement, where on Earth can a modern climber go to find what Chouinard, Harding, Tom Frost and other Golden Age rock legends enjoyed five decades ago? Chouinard says that sub-Saharan Africa, the Himalayas and Antarctica each offer pristine climbing opportunities. In the United States, he says, Alaska still offers untouched cliffs. That’s all the hints we’ll give; we’ll leave the thrill of discovery to you. And remember: If you follow the chalk marks, you’ll get to the top—but are you really climbing?

Could You Crash Into a Black Hole?

Smithsonian Magazine

By their very name, black holes exude mystery. They’re unobservable, uncontrollable and—for more than 50 years after their first prediction in 1916—undiscovered. Astronomers have since found evidence of black holes in our universe, including a supermassive one at the center of our own Milky Way. Yet much remains unknown about these cosmic enigmas, including what exactly happens to the stuff that they suck up with their titanic gravity.

Fifty years ago, physicist John Wheeler helped popularize the term "black hole" as a description for the collapsed remnants of supermassive stars. According to Wheeler, who coined and popularized several other famous astronomy terms such as "wormholes," the suggestion came from an audience member at an astronomy conference where he was speaking, after he had repeatedly used the phrase "gravitationally collapsed objects" to describe the cosmic giants.

“Well, after I used that phrase four or five times, somebody in the audience said, ‘Why don’t you call it a black hole.’ So I adopted that,” Wheeler told science writer Marcia Bartusiak.

Wheeler was giving a name to an idea first explored by Albert Einstein 50 years earlier, in his influential theory of general relativity. Einstein's theory showed that gravity is a result of the distortion of space and time by the mass of objects. While Einstein himself resisted ever acknowledging the possibility of black holes, other physicists used his groundwork to flesh out the galactic monsters. Physicist J. Robert Oppenheimer, of atomic bomb fame, dubbed these bodies "frozen stars" in reference to a key feature outlined by physicist Karl Schwarzschild soon after Einstein published his theory.  

That feature was the "event horizon": the line surrounding a black hole at which it becomes impossible to escape. Such a horizon exists because, at a certain distance, the speed required for any atom to break away from the black hole's gravity becomes higher than the speed of light—the universe's speed limit. After you cross the event horizon, it is thought, all of the matter that comprises you is shredded apart violently by intense gravitational forces and eventually crushed into the point of infinite density at the center of the black hole, which is called a singularity. Not exactly a pleasant way to go.

This detailed explanation of death via black hole, however, is theoretical. The intense gravity of black holes distorts the passage of time so much that, to observers outside the black hole, objects falling into one appear to slow down and "freeze" near the event horizon before simply fading away. (Which sounds a lot nicer.)

In other words, despite the importance of this event horizon, scientists have never actually directly proven its existence. And because of the difficulty of even finding black holes (because light cannot escape them, they are invisible to most telescopes), much less observing them, there haven't been many chances to try. In the absence of convincing proof, some astrophysicists have theorized that some of the objects we call black holes might be dramatically different than what we’ve come to believe, with no singularity and no event horizon. Instead, they could be cold, dark, dense objects with hard surfaces.

This black hole skepticism began attracting its own skepticism, however, as telescopes finally captured black holes in the act of something extraordinary. In the last seven years, "people started seeing stars falling into black holes," says Pawan Kumar, an astrophysicist at the University of Texas at Austin, where, incidentally, Wheeler taught theoretical physics for a decade. "These are very very bright things that can be seen from billions of light years away."

More of these bright, relatively quick star swallowings have since been observed. Last year Kumar decided that these light emissions would make a good test for proving the existence of the event horizon. "Most people in the community assumed there is no hard surface," Kumar says. However, he stresses, "in science, one needs to be careful. You need proof."

So in 2016, Kumar and his collaborator Ramesh Narayan, of the Harvard-Smithsonian Center for Astrophysics, worked to calculate what kind of effects you would expect to see if a star being swallowed by a black hole was really colliding with a hard surface. It would be akin to smashing an object against a rock, Kumar says, creating intense kinetic energy that would be emitted as heat and light for months—or even years.

Yet a scan of telescope data over three and a half years found no instances of the light signatures that he and Narayan calculated would be released if stars struck a hard-surface black hole. Based on probability, the researchers had predicted that they should have found at least 10 examples over that time period.

Kumar calls this research, published this year in the journal Monthly Notices of the Royal Astronomical Society, a "good-sized step" toward proving the event horizon's existence. But it's still not quite proof. A hard-surface black hole could theoretically still exist within his study's calculations. But the radius of that surface would have to be within about a millimeter of the black hole's Schwarzschild radius, the distance at which the speed necessary to escape the object's gravity equals the speed of light. (Note that the Schwarzschild radius is not always the same as an event horizon, since other stellar objects have gravity, too.)
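
For scale, a standard back-of-the-envelope relation (a textbook result, not part of Kumar and Narayan's analysis) recovers the Schwarzschild radius by setting the Newtonian escape velocity equal to the speed of light:

$$v_{\mathrm{esc}} = \sqrt{\frac{2GM}{r}} = c \quad\Longrightarrow\quad r_s = \frac{2GM}{c^{2}}$$

For an object of one solar mass, this works out to roughly 3 kilometers; for a supermassive black hole of about four million solar masses, like the one at the center of the Milky Way, it is on the order of 12 million kilometers.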

"The limits this paper places on the radius of a possible solid surface—4 thousandths of a percent outside the Schwarzschild radius for a supermassive compact object—is impressive," says Bernard Kelly, a NASA astrophysicist who wasn't involved in this research.

Kumar already has research in the pipeline to narrow that limit even further, to the point where it would be almost certain that no hard-surface black holes could possibly exist. That, for him, would be reliable proof that traditional black holes are the only kind of black holes that occupy our universe. "If it is completed, it will pretty much in my view close the field," Kumar says. "We will have firm evidence that Einstein's theory is right."

The Ability to Pronounce "F" and "V" Sounds Might Have Evolved Along With Diet

Smithsonian Magazine

“French fries” might not be on the menu if not for ancient farmers, not because we can now grow plenty of potatoes, but because it would be harder to enunciate the f sounds needed to order them. The ability to make labiodental sounds—sounds that require you to put your lower lip on your upper teeth, such as f and v—may not have fully developed until agriculture introduced softer foods to the human diet, changing our jaws, according to an intriguing and controversial study published today in Science.

Orthodontists know that overbite, and the human jaw’s horizontal overlap called overjet, are common among people all over the world. But the study’s authors assert that such jaw structures were rarer in the Paleolithic Period, when hunter-gatherers’ rough diets demanded more force from teeth that met edge to edge. Agriculture softened our ancestors’ diets with processed gruels, stews and yogurts, and this fare gradually shrank the lower jaw, producing today’s overcrowded mouths. This diet-driven evolution of the human bite over the last 10,000 years might have shaped some of the sounds we use to communicate today.

University of Zurich linguist Balthasar Bickel hypothesizes that less wear and stress on teeth and jaws allowed overbite to persist more often, creating a close proximity between the upper teeth and lower lip that made it a bit easier to utter f and v sounds. (Try making a “fuh” sound, first with your upper and lower teeth aligned edge to edge and then, probably more successfully, with your bottom jaw pulled back so your lower lip can more easily touch your upper teeth.)

“One of the take-home messages is really that the landscape of sounds that we have is fundamentally affected by the biology of our speech apparatus,” Bickel said at a press conference this week. “It’s not just cultural evolution.”

The difference between a Paleolithic edge-to-edge bite (left) and a modern overbite/overjet bite (right). (Tímea Bodogán)

Each time ancient humans spoke, there was only a small chance of their slowly changing jaw configurations producing labiodental sounds, but like a genetic mutation, it could have caught on over time. “Every utterance that you make is a single trial. And if you think of this as going on for generations over generations, you have thousands and thousands of trials—with always this probability of changing—and that leaves the statistical signal we find in the end,” Bickel said.
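
A toy simulation (every number below is invented for illustration; this is not the authors' model) makes that statistical intuition concrete: if each utterance is treated as an independent trial, even a tiny per-trial increase in the chance that a labiodental variant catches on leaves a clear signal once the trials number in the thousands.

```python
# Toy illustration of the "thousands of trials" argument.
# The probabilities and trial count are hypothetical, chosen only to show
# how a tiny per-utterance bias accumulates into a visible difference.
import random


def successful_innovations(trials: int, p_adopt: float, seed: int) -> int:
    """Count trials in which a new labiodental variant catches on."""
    rng = random.Random(seed)
    return sum(rng.random() < p_adopt for _ in range(trials))


TRIALS = 100_000            # utterances across many speakers and generations
P_EDGE_TO_EDGE = 0.0010     # baseline chance a variant sticks (invented)
P_OVERBITE = 0.0013         # slightly easier articulation, slightly higher chance

print("edge-to-edge bite:", successful_innovations(TRIALS, P_EDGE_TO_EDGE, seed=1))
print("overbite/overjet: ", successful_innovations(TRIALS, P_OVERBITE, seed=2))
```

Across repeated runs, the overbite condition accumulates roughly 30 percent more successful innovations, even though the per-utterance difference is only three hundredths of a percentage point.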

Bickel and colleagues tested the idea that overbite helped produce labiodentals by building biomechanical models and making them talk. Their data suggest that making f and v sounds takes 29 percent less muscular effort when the speaker has an overbite/overjet configuration. The researchers then searched for real-world evidence of where labiodental sounds became more common over time.

“We looked into the distribution of labiodental sounds across thousands of languages and their relation to the characteristic sources of food of the people speaking those languages,” Damián Blasi, also of the University of Zurich, said at the press conference. The survey showed that languages spoken by modern hunter-gatherers use only about one-fourth as many labiodental sounds as other languages.

Tecumseh Fitch, an expert on bioacoustics and language evolution at the University of Vienna who was not involved in the new study, says the interdisciplinary approach of biomechanics, bioacoustics, comparative and historical linguistics came to him as a surprise. “This is probably the most convincing study yet showing how biological constraints on language change could themselves change over time due to cultural changes,” he says via email. “The study relies, inevitably, on various assumptions and reconstructions of unknown factors (especially bite structure in current and ancient populations), but I think the authors build a very plausible case that will open the door to future detailed research.”

Still, the evolutionary process remains far from clear. Although the modern dental configuration is now nearly universal, about half of the world’s roughly 7,000 languages never started to regularly use labiodental sounds at all. And the correlation of the sounds with softer foods doesn’t always hold up. Cooking has been around for hundreds of thousands of years, easing the stress on human teeth and jaws. Ancient Chinese agriculture produced easy-chewing rice, yet f and v sounds aren’t as common in Chinese as they are in Germanic or Romance languages.

Bickel, Blasi and colleagues argue that the evolution of overbite simply meant labiodentals would be produced more often. “That doesn’t mean that labiodentals will emerge within all languages. It does mean that the probability of producing labiodentals increases slightly over time, and that means that some languages are likely to acquire them but not all languages will,” says co-author Steven Moran.

Not everyone is convinced that diet reshaped our tooth alignment in the first place, however. “They haven’t established even that a soft diet would give you an overbite,” says Philip Lieberman, a cognitive scientist at Brown University. “To relate that to diet it has to be epigenetic,” meaning chemical compounds that become attached to genes can change gene activity without altering the DNA sequence. “There has to be some sort of regulatory mechanism that is triggered directly from the environment or diet, and I don’t know of any data on an epigenetic effect restructuring [tooth and jaw position].” Even such a link wouldn’t convince Lieberman that the change prompted the rise of f and v sounds. “We can produce these sounds whether we have overbite or not,” he says. “There’s arbitrariness in language. People have different words for the same things, and I don’t think we can relate any of it to changes in teeth.”

Biomechanical model of producing an f sound with an overbite/overjet (left) vs. an edge-to-edge bite (right). (Scott Moisik)

Evolutionary biologist Mark Pagel at the University of Reading found some of the authors’ suggestions more plausible. “If their argument that having that overbite or overjet has become more prominent in recent fossils is actually true, if you get a developmental change actually changing the shape of our mouths, then there’s a real plausibility to it,” he says, adding that sounds tend to develop via the path of least resistance. “We make more readily the sounds that are easier to make. We’re constantly introducing tiny little variants. And if the shape of your mouth means you are more likely to introduce some kind of variant … then they are just a bit more likely to catch on.”

Despite the correlation between mouth shape and sounds, paleoanthropologist Rick Potts of Smithsonian’s Human Origins Program has reservations about the study’s conclusion that changing diets caused a rise of labiodentals. “In my view they don’t provide sufficient reasons for us embracing diet as the reason for producing [more] v and f sounds because they don’t deal at all with the anatomy of producing those sounds.”

Making v and f sounds, Potts says, requires only very slight retraction of the temporal muscle on the side of the head, which draws the jaw backward with a very subtle movement. “How does a harder diet limit the retraction of the jaw?” he asks. “That’s the essence of being able to make the v and f sounds. They do not in any way demonstrate how a bite-to-bite configuration of the teeth inhibits or makes it more expensive to make these sounds. I can’t see anything in the way teeth are oriented toward one another that would limit the retraction of the jaw.”

Potts says the study identifies some intriguing correlations but falls short in demonstrating likely causation. As an example, he says that if researchers found that the color red was favored by equatorial peoples like the Masai, and they also found that such people had a lower density of light receptors in their retinas than Arctic people, they might conclude that lack of light receptors was a biological cause for preferring the color red.

“But how would you possibly discount the fact that it’s just cultural history why the Masai wear red whereas Arctic people tend not to?” he asks. “It’s just the way people distinguish themselves and it becomes passed on in ways that are geographically oriented. I’m just concerned that [the study] hasn’t given enough credit to the idea of the accidents of cultural history and identity being part of why v and f sounds are less frequent in certain groups of people worldwide than others.”

Balthasar Bickel, on the other hand, says that language has been too often regarded as a purely cultural or intellectual phenomenon, and he hopes his group’s work will help to open new lines of scientific inquiry. “I believe there is a huge potential out there for studying language as part of the biological system it really is embedded in.”

Huge Wine Cellar Unearthed at a Biblical-Era Palace in Israel

Smithsonian Magazine

The wine is robust but sweet, with herbal notes and maybe a hint of cinnamon. Carefully stored in a room near the banquet hall, dozens of large jugs filled with the latest vintage sit waiting for the next holiday feast or visiting politician. Then, disaster strikes. An earthquake crumbles walls and shatters jars, spilling waves of red fluid across the floor and leaving the grand wine cellar in ruins.

This isn’t a vineyard villa in Napa—it’s one possible explanation for recent discoveries in the Canaanite palace of Tel Kabri, in the northwestern part of modern-day Israel. The remains of 40 large jugs found at the site show traces of wine infused with herbs and resins, an international team reports today in the journal PLOS ONE. If their interpretation holds up, the room where the vessels were found may be the largest and oldest personal wine cellar known in the Middle East.

“What’s fascinating about what we have here is that it is part of a household economy,” says lead author Andrew Koh, an archaeologist at Brandeis University. “This was the patriarch’s personal wine cellar. The wine was not meant to be given away as part of a system of providing for the community. It was for his own enjoyment and the support of his authority.”

Various teams have been excavating Tel Kabri since the late 1980s, slowly revealing new insights into life during the Middle Bronze Age, generally considered to be between 2000 and 1550 B.C. The palace ruins cover about 1.5 acres and include evidence of monumental architecture, food surpluses and complex crafts.

“Having a Middle Bronze Age palace isn’t all that unusual,” says Koh. “But this palace was destroyed toward 1600 B.C.—possibly by an earthquake—and then it goes unoccupied.” Other palaces in the region that date to around the same time had new structures built on top of the originals, clouding the historical picture. “We would argue that Kabri is the number-one place to excavate a palace, because it has been preserved,” says Koh. “Nothing else is happening over top that makes it difficult to be that archaeological detective.”

Image by Andrew Koh. A LIDAR image of the wine cellar at Tel Kabri.

Image by Andrew Koh. Zooming in on a LIDAR image shows details of the storage jars at Tel Kabri.

Image by Andrew Koh. A LIDAR image of a storage vessel at Tel Kabri.

Image by Andrew Koh. The uncovered wine cellar at Tel Kabri.

Image by Andrew Koh. Archaeologists take samples from ancient wine jars at the Tel Kabri site.

The team unearthed the wine cellar during excavations in 2013 and described their initial analysis at a conference this past November. In the new paper, Koh and his colleagues outline their methods and offer some context to help back up the claim.

The room holds the remains of 40 large, narrow-necked vessels that could have held a combined total of 528 gallons of liquid—enough to fill 3,000 modern bottles of wine. There is a service entrance and an exit connected to a banquet hall. The team says that samples of 32 jars brought back to the lab in Massachusetts all contained traces of tartaric acid, one of the main acids found in wine. All but three of the jars also had syringic acid, a compound associated with red wine specifically.

Residue in the jars also showed signs of various additives, including herbs, berries, tree resins and possibly honey. This would fit with records of wine additives from ancient Greek and Egyptian texts, the team says. Some of these ingredients would have been used for preservation or to give the wine psychotropic effects. “This is a relatively sophisticated drink,” says Koh. “Somebody was sitting there with years if not generations of experience saying this is what best preserves the wine and makes it taste better.”

However, finding tartaric and syringic acids doesn’t definitively mean you’ve found wine, says Patrick McGovern, a biomolecular archaeologist at the University of Pennsylvania and an expert in ancient alcohol. Both acids are also found naturally in other plants or can be produced by soil microbes. “It’s good they did a soil sample, because microorganisms do produce tartaric acid in small amounts, and they did not see [it] in the soil,” McGovern says.

He also expressed some concern that the team’s traces from the ancient jars are not a perfect match for the modern reference samples used in the study. A few extra steps in the chemistry could verify the link between the acids and wine grapes, he says. Still, assuming the residue tests stand up, the results fit well with other evidence for wine making in the Middle East, he says. Previous discoveries suggest that wine grapes were first grown in the neighboring mountains and moved south into the region around Tel Kabri by the middle of the 4th millennium B.C. Records from the time show that by the Middle Bronze Age, Jordan Valley wine had become so celebrated that it was being exported to the Egyptian pharaohs.

So what would modern-day oenophiles make of the Tel Kabri wine? It may be an acquired taste. “All wine samples from different parts of the Near East have tree resin added, because it helps keep the wine from going to vinegar,” notes McGovern. “In Greece, they still make a wine called Retsina that has pine resin added to it. It tastes really good once you start drinking it. You get to like it, similar to liking oak in wine.” And McGovern has had some commercial success bringing back ancient beers—“Midas Touch” is an award-winning re-creation of beer from a 2700-year-old tomb found in Turkey.

If Koh and his team have their way, a Tel Kabri label could also make it to store shelves. “We’ve talked to a couple vineyards to try and reconstruct the wine,” says Koh. “It may not be a huge seller, but it would be fun to do in the spirit of things.” The scientists are even hoping they will be able to recover grape DNA from future samples of the jars, which might bring them closer to a faithful reconstruction of the ancient wine.

“Celebrated wines used to come from this region, but local wine making was wiped out with the arrival of Muslim cultures [in the 7th century A.D.],” says Koh. “Most grape varieties growing in Israel today were brought there by [French philanthropist Edmond James] de Rothschild in the 19th century.” Grape DNA from Tel Kabri could help the team track down any feral grapes growing in the region that are related to the Bronze Age fruit, or perhaps figure out which modern varietals in Europe are closest to the ancient beverage.

*This article has been updated to correct the area of the palace ruins.

Five Ways to Eat Seaweed

Smithsonian Magazine

There's a small seaside village in Okinawa, Japan, where more than 100 of its 3,221 residents are over 90 years old. This isn't some retirement community—Ogimi Village is actually renowned for its longevity. It's a quality locals attribute in large part to their diet, which includes locally grown vegetables and freshly foraged foods such as seaweed. This wild algae that's prevalent along our shores and always getting tangled in our fishing hooks is a dietary staple throughout much of Asia. Now it's finally gaining mainstream recognition in the States. Chefs like Michael Cimarusti at Providence in L.A. and Trevor Moran at Nashville's The Catbird Seat have been incorporating this superfood—a source of vitamins A and C as well as both calcium and iodine, and loaded with antioxidants—into everything from broth to garnishes, and it's also seeing play as a star ingredient.

While not all seaweed is edible (experts specifically recommend steering clear of hijiki, which contains high levels of inorganic arsenic), the varieties that are show a surprising diversity. These include red algae such as nori, which is best known as a sushi wrap; brown algae like wakame or kelp; and green algae, or 'sea lettuce.' Seaweed can be bought dried or fresh and is readily available at natural and health food stores. Here are five suggestions for preparing it in your culinary endeavors.

1. In Soup: Kombu is a type of kelp and a flavor enhancer for dashi, the cooking stock that forms the base of miso soup. To make dashi, simply heat 1/2 ounce of dried kombu strips in 4.5 cups of water, let it simmer for about 10 minutes and then add katsuobushi—dried bonito flakes. Turn off the heat, let the mixture sit for another 10 minutes or so, then strain. This strained, golden-hued liquid is dashi.

Seaweed also works well as a main ingredient, like in this recipe for Shiitake Mushroom Miso Soup, which incorporates both kombu and wakame. Korean Seaweed Soup is another good one. It uses dried seaweed, and often includes sirloin, clams, or mussels.

(© Lawton/photocuisine/Corbis)

2. In Salad: Seaweed is a great substitute for spinach or lettuce. Try whole leaves of purple dulse as a base and toss them with crisp apples and red cabbage, or mix toasted wild nori with tofu, shallots, pecans and wild rice. Soak dried wakame in warm water until it expands, and you have the base for a traditional Japanese seaweed salad, the kind often served in sushi restaurants. You can also sprinkle crispy bits of seaweed on a traditional salad, or grind up bits of dried seaweed to use instead of salt.

(© Topic Photo Agency/Corbis)

3. As a Snack: Being low in calories and sporting endless health benefits, seaweed makes an ideal between-meal treat. Spread seaweed tapenade onto toast points for the perfect hiking snack, or bake up some seaweed muffins to take with you on the go. For something a little lighter with a bit of added spice, try these nori crisps with wasabi. Seaweed snacks are also popping up in markets like Whole Foods and Trader Joe's.

(© Langot/photocuisine/Corbis)

4. As Dessert: Sure, it doesn't sound as appetizing as, say, a warm chocolate cake, but there are many ways to incorporate seaweed into your post-meal offerings that'll leave you satisfied and even hankering for more. Seaweed pudding, typically made with Irish moss, is a popular dessert in Europe's Atlantic coastal towns. Spice it up by adding berries, unsweetened chocolate, or even a splash of Irish cream like Bailey's. For something even richer, try this recipe for seaweed brownies. It includes tiny strips of dulse seaweed for an extra boost of nutrition. One author and former SF Bay Area pastry chef even came up with a recipe for sugar cookies flavored with seaweed salt.

(© Leser/SoFood/Corbis)

5. Drink It: Still feel like the idea of incorporating sea algae into your cuisine is a little too out there? Why not skip the edible part and sip your seaweed instead? For years the Japanese have been drinking kombu tea, made with either kombu powder, available at natural foods stores, or thinly sliced strips of the kelp. Irish Moss is a popular drink across the Caribbean; along with its namesake seaweed, it includes sweetened milk, spices and rum. At the soon-to-open restaurant Dirty Habit in San Francisco, CA, bar manager Brian Means will be offering his Dram at Mount Tam, a cocktail that mixes two ounces of St. George Terroir Gin with one ounce of kombu-infused Vya vermouth, a half ounce of kale cordial and four dashes of celery bitters. It's served strained over ice in an Old Fashioned glass with a lime twist—an easy drink to make at home.

A Postmortem of the Most Famous Brain in Neuroscience History

Smithsonian Magazine

On August 25, 1953, a 27-year-old Connecticut native named Henry Molaison underwent brain surgery to treat the seizures he chronically suffered from as a result of epilepsy. Hartford Hospital neurosurgeon William Beecher Scoville, who had previously determined the brain regions where Molaison's seizures originated, removed a fist-sized chunk of brain tissue that included parts of both his left and right medial temporal lobes.

When Molaison awoke after the surgery, his epilepsy was largely cured. But removing so much brain tissue—and, in particular, a structure called the hippocampus—led to an entirely new problem for H.M., as he'd soon be called in the scientific literature to protect his privacy.

From that moment on, he was unable to create memories of any new events, names, people, places or experiences. He also lost most of the memories he'd formed in the years leading up to surgery. In the most fundamental sense possible, H.M. lived entirely in the moment.

"At this moment, everything looks clear to me, but what happened just before?" he once said. "That’s what worries me. It’s like waking from a dream. I just don’t remember." Although he interacted with the same nurses and doctors day after day, each time he saw them he had no idea he'd ever met them before. He remained a perfectly intelligent, perceptive person, but was unable to hold down a job or live on his own. Without the connective tissue of long-term memory, his life was reduced to a series of incoherent, isolated moments.

Out of this tragic misfortune came an unintended benefit. For decades, neuroscientists closely studied H.M., making groundbreaking discoveries about memory formation based on his condition. He voluntarily participated in testing almost continually, and by the end, he was widely known as the most important patient in neuroscience history.

When he died in 2008, researchers led by Jacopo Annese of UC San Diego froze his brain in gelatin and cut it into 2,401 ultra-thin slices for further research. Now, in a paper published today in Nature Communications, they've announced the results of their analysis. By using the slices to create a microscopically detailed 3D model of H.M.'s brain, they've identified a previously unknown lesion caused by the surgery, a finding that could shed further light on the anatomical structures responsible for memory.
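
A minimal sketch of the basic idea behind such a reconstruction follows (illustrative only, not the Annese lab's actual pipeline, and it skips the crucial step of registering neighboring sections to one another): photographs of the serial sections are stacked into a 3D volume, which can then be "resliced" along planes that were never physically cut. The file names and image sizes here are hypothetical.

```python
# Stack photographs of serial sections into a 3D volume and take a virtual cut.
# Assumes the photos are already aligned and share the same dimensions.
import glob

import numpy as np
from PIL import Image

slice_files = sorted(glob.glob("slices/slice_*.png"))   # one photo per section
volume = np.stack(
    [np.asarray(Image.open(f).convert("L")) for f in slice_files],
    axis=0,
)  # shape: (number_of_slices, height, width)

# A virtual cut orthogonal to the physical sections, through the middle of the volume:
orthogonal_view = volume[:, :, volume.shape[2] // 2]
print("volume:", volume.shape, "virtual slice:", orthogonal_view.shape)
```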

A rendering of the UC San Diego team's 3D model of H.M.'s brain. Added in red are the areas removed during his 1953 surgery (Video by Brain Observatory/UC San Diego)

In the decades after H.M.'s surgery, researchers such as Brenda Milner and Suzanne Corkin studied H.M.'s memory limitations and used them to pioneer the nascent field of memory study. With records of the 1953 procedure, they were even able to link particular anatomical areas that H.M. was missing with memory functions. 

Previously, many had believed that it was impossible to assign functions to physical structures in this way, but H.M.'s unique case opened up new possibilities. He was incapable of storing new information in his explicit memory—the type of memory that allows us to consciously remember experiences and pieces of new information—but could remember pieces of information over a very short time period (up to about 20 seconds), evidence that his short-term memory was somewhat intact. He could also learn and retain new skills, even if he didn't remember the actual act of learning them.

These fine distinctions led scientists to distinguish between procedural memory—the unconscious memory that allows us to perform motor activities, like driving—and explicit memory. Additionally, that H.M. couldn't form new explicit memories but had undamaged childhood memories highlighted the difference between memory encoding and memory retrieval (he could still perform the latter, but not the former). Perhaps most importantly, the fact that he was missing his hippocampus suggested that the structure was crucially involved in the encoding of long-term explicit memories, but wasn't necessary for short-term or procedural memory.

A high-resolution photo of a slice of H.M.'s brain, zoomable down to the microscopic level and available online. (Image via Brain Observatory/UC San Diego)

H.M.'s brain was imaged while he was alive using MRI and other techniques, but the new high-resolution model—created with data taken from photographs of the thousands of thin slices—has allowed the researchers to delve deeper into the brain's anatomy and make these sorts of observations on a finer scale.

They've discovered that some parts of the brain that were believed to have been left intact after the surgery were actually removed. The left orbitofrontal cortex, for instance, contained a small lesion, likely caused during the surgery. Additionally, they found that some portions of the left and right hippocampi were actually undamaged, a finding that could cause researchers to re-examine previous beliefs about the role of the hippocampus in different sorts of memory.

The UC San Diego team also plans to publish a free online "atlas" of the brain, made up of the high-resolution images taken of its slices, viewable on a zoomable Google Maps-like platform (one photo has already been published). Given that the original dissection of the brain was broadcast live on the web and attracted an estimated 400,000 viewers, it seems likely that in death, as well as life, H.M.'s extraordinary condition will captivate many.

Smithsonian’s Kirk Johnson Steps Up to Be the Rock Star of Geology

Smithsonian Magazine

What is the director of the Smithsonian’s National Museum of Natural History doing, dangling over a cliff at the Grand Canyon from a rope?

Demonstrating in a most graphic way the stratification of the geological wonder. And sure enough, in one of the early segments of “Making North America,” a three-part special starting November 4 on the PBS science series “Nova,” the different shelves of the canyon slide out beneath him, courtesy of computer animation.

“The hope is that we get people excited about reading the planet,” Kirk Johnson says, safe in his fourth floor museum office. Since he was named Sant Director of the Smithsonian’s most popular museum in 2012, the boyish 55-year-old has popped up from time to time as an expert on a number of geology specials, including the Smithsonian Channel’s “Mass Extinction: Life at the Brink” and Discovery Channel’s upcoming “Racing Extinction.”

But for the three-hour “Making North America,” Johnson is the face of the program, traveling to 17 states, Canada and the Bahamas for the flashy series, becoming for geology what Neil deGrasse Tyson is for astrophysics.

For the online part of the project, they’ve even made a Kirk Johnson bobblehead.

“I’ve been on lots of TV—about 25 or 30 shows. But usually it’s just spot talent,” Johnson says. “I’ve never presented before.”

And by presenting, that doesn’t just mean narrating after the fact or sitting behind his desk (though that would be pretty cool—the Washington Monument is framed in a window just behind his right shoulder).

“It was a huge learning experience, put it that way,” Johnson says. “It was great fun. But I thought I had done TV and then I realize I hadn’t done TV before.” So he hung off that rope at the Grand Canyon. He went to a deep mine near Thunder Bay, Ontario, to look at 2.7-billion-year-old rocks. He traveled to Gooseberry Falls in northeastern Minnesota to see its surface exposure of huge lava flows. He took the crew to his main research area in southwestern North Dakota, where he’s found thousands of artifacts. And on the coast of Alaska, he says, “we made this amazing discovery of a fossilized palm frond in the winter tidal deposit right on camera.”

On the coast of Alaska, Johnson says, “we made this amazing discovery of a fossilized palm frond in the winter tidal deposit right on camera.” (WGBH)

A lot of such films have shown geologists who appear to be discovering amazing things with the camera right there. But this, he promises, actually happened in the moment.

“I’m a purist,” he said a few weeks earlier, promoting the show at the TV Critics Association summer press tour. “We did a show with Nova a couple years ago in Snowmass Village where we had this discovery of mammoths and mastodons. They were saying, ‘Can you show us how you would find a mammoth bone?’ I said, ‘No, I absolutely will not do that. I will find one for you on camera.’

“If you know how to dig and you know where you’re digging, you will find stuff in a matter of minutes,” Johnson said. “So it’s way better for the thing to be real than a recreation. I abhor recreations.”

Indeed, one takeaway from “Making North America” is that anybody can find plant fossils or even dinosaurs anywhere in the country, if they just look. 

“Dinosaurs were around for over 150 million years, and they’re in lots of rocks,” Johnson says. “And paleontologists have only been around for 150 years, and there’s not many of us. So the volume of rock that have dinosaurs versus the volume of paleontologists, it’s a huge ratio.”

Roxborough State Park near Denver, Colorado (WGBH)

Johnson says he was out in North Dakota just this summer and found 20 dinosaurs. 

“If I go look for them, I will find them, because there’s tons of them out there,” he says. “It’s this sort of myth that they’re rare. It’s just that people don’t know how to look for them, and that, by the way, is pretty trivial too. It’s a pretty simple skill set.” 

Instead of looking up as you walk around, you look down.  “Look at the ground when you’re walking is what you do to find a dinosaur.”

And they’re everywhere—not just in faraway fields in the Dakotas. “We find dinosaurs in New Jersey and in New York,” Johnson says. “There’s a dinosaur from Washington, D.C. called Capitalsaurus, believe it or not. And there are dinosaurs in the L.A. Basin. There are fossils everywhere, and our continent has this very curious practice of burying its dead by the natural processes of sedimentation.”

So if you know how to look, the stories are everywhere.

“And that’s what’s cool about the show,” he says. “Anybody who’s watching will be compelled to ask what’s happening beneath my feet? It doesn’t matter where you are, there’s a story there.”

Part of the reason is that the landscape has changed so much and continues to do so. The other point of “Making North America” is that the continent is still being made. “The landscape is always changing,” he says, “whether it’s by hurricanes, or the impacts of earthquakes, tornadoes or landslides, just simple erosion, or mountain-building processes.”

In fact, things are changing so much that his crew had to schedule a last minute filming trip to cover the hubbub over a summer New Yorker article predicting an imminent earthquake in the Pacific Northwest.

“It’s one of the few times when a geology story has had a huge impact on a bigger population,” Johnson says. “It really caught people of the Pacific Northwest by surprise. A show about the geology of North America that didn’t have it would be kind of remiss.”

Johnson knows the area: He became interested in geology when accompanying his father on rock climbs in the Pacific Northwest, where he grew up. “As a kid you need your little superpowers and mine was finding stuff—fossils or arrowheads or cool rocks and stuff,” he says. For a while that was enough.

“Then I thought: Wait a minute, these are all telling me something. There’s a story. Every one of these little things is a message from the past. Some little story is in there.”

He started going to the Burke Museum at the University of Washington and other museums that had fossils. “I’ve always been in this mode of finding stuff and taking them to museums and communicating the excitement of finding things,” he says.

Johnson studied geology at Amherst College, got a doctorate at Yale and before coming to the Smithsonian, was vice president and chief curator at the Denver Museum of Nature and Science.

Doing the show has been great, Johnson says, “because it helps me show my enthusiasm for this incredible planet we live on.”

The three-part series, “Nova: Making North America,” premieres November 4 at 9 p.m. and continues on November 11 and November 18 on PBS; check local listings. 

The Lac Des Iles Mine near Thunder Bay, Ontario (WGBH)

Macabre school supplies: 19th century dissection sets

National Museum of American History

A 19th-century medical student brought to school a number of things, including scientific texts and a hope to one day relieve the suffering of others. The student's most important school supply, however, was a dissecting set. The era's medical curricula emphasized the importance of human dissection in the training of America's young physicians because it allowed doctors to approach the body with greater scientific understanding. With a small wooden box of ivory-handled tools, an aspiring physician hoped to learn the essence of the human body beyond what text alone could teach.

A wooden box filled with various tools that include blades and scissors

As the Revolutionary War drew to a close in 1783, the country looked to mirror its newfound political independence in homegrown medical education. Before the proliferation of American medical schools, aspiring physicians had to travel to Britain or France to learn their profession. America's new interest in training her own physicians coincided with a growing European appreciation for hands-on dissection. Before the late 1700s, medical students rarely performed dissection themselves. Rather, they watched an anatomy professor demonstrate on a cadaver at the front of a lecture hall. In contrast, American medical education came of age in a world that required each student to perform dissection. Physicians of the time felt that dissection was the foundation of medical knowledge. According to surgeon Robert Liston, "it is only when we have acquired dexterity on the dead subject, that we can be justified in interfering with the living."

A wooden box with red lining laying open. The tools it contains are positioned around it, including scalpels and scissors

A 19th-century dissection set usually contained scissors, forceps, hooks, several scalpels, a heavy cartilage knife, and a blowpipe. Each instrument was designed to reveal the internal structures of the human body. For example, dissectors used scalpels and forceps to peel back the skin and blowpipes to inflate structures like the colon to make them easier to see. While surgical instruments of the same period had constructions that allowed for careful and specialized work, dissecting instruments were fewer in number and used more generally. Many instruments of the surgical craft were unnecessary in dissection. The medical student's few instruments enabled cutting through tough tissue and peeling away layers of the body to take stock of its parts.

A black and white photograph of a group of young men standing behind a table on which is a human body that has been dissected

For much of the 19th century, surgeons performed operations on patients without any kind of anesthesia. Because of the pain patients had to endure, surgeons were trained to work as quickly as possible. On the other hand, anatomical professors encouraged their students to dissect slowly and methodically. Anatomy professor J. P. Judkins described dissection as the most captivating subject in medical study. According to him, dissection "rivits [sic] the attention, and excites the curiosity to know what wonders are contained with in us." The dissecting set allowed a medical student to expose these wonders.

But the dissecting set was not the anatomical student's only school supply. A cadaver was also necessary to perform dissection. However, the public saw dissection as a desecration of the dead and feared it might hinder a spirit's resurrection on Judgment Day. Because of this opposition, medical schools found they had a shortage of cadavers to use in the classroom. Many physicians either resorted to stealing bodies from graves or paying hired "resurrectionists" to do the same. But, if discovered, this theft could potentially result in a riot against the nearest medical institution. For this reason, body-snatchers preferred the graves of the poor and those of enslaved and free blacks. The white middle and upper class seldom objected to the violation of these populations.

A painting of a man touching the arm of a cadaver that lies on a slab. The man looks off as he reaches for a tool on the counter next to him.

In this context, the dissecting set took on multiple meanings. It was a tool through which America built a scientifically respectable medical infrastructure. In the hands of curious young medical students, it represented a gateway into the medical profession. To those outside the medical community, the dissection set meant a threat of violation, both of religious beliefs and bodily integrity. Ultimately, the impact of this macabre school supply reached far outside the classroom.

Jenna McCampbell is an intern in the Division of Medicine and Science working with curator Katherine Ott. She is a junior at Smith College majoring in history and computer science.

Author(s): 
intern Jenna McCampbell
Posted Date: 
Friday, February 17, 2017 - 08:00

Six Ways Electrical Brain Stimulation Could Be Used in the Future

Smithsonian Magazine

Deep brain stimulation, the delivery of mild electrical shocks to targeted parts of the brain to affect how they function, provides a whole new approach to dealing with neurological conditions that have long challenged doctors.

For years now, it’s been used to treat Parkinson’s disease. By sending small shocks to regions of the brain that control movement, this stimulation can help keep the tremors of Parkinson’s under control.

It works like a kind of pacemaker for the brain. A surgeon inserts a probe into a specific area of the brain, and it’s connected by wires under the skin to a battery pack implanted near the top of a person’s chest. A doctor sets the strength of the electrical pulse, how long it lasts and how often it occurs. One type of stimulation can boost brain cell activity, another can slow it down.

Not surprisingly, as the technology and treatment have become more refined, scientists have started to look at other ways brain stimulation might be used, and not just to treat neurological damage from Alzheimer’s or traumatic brain injuries, but also to address mental health disorders and even, to some degree, behavior. There are studies underway, for instance, to see if it can help people stop smoking.

Here’s a sampling of the latest brain stimulation research: 

Slowing down Alzheimer’s: Alzheimer’s disease remains one of medicine’s most daunting challenges, both in terms of pinpointing a cause and developing a truly effective treatment. Now a team led by scientists at Johns Hopkins University is conducting a clinical trial to see if deep brain stimulation can slow memory loss and cognitive decline.

The research involves placing implants into the brains of about 40 patients with mild Alzheimer’s disease and measuring progression of the disease over an 18-month period. Specifically, the devices are being implanted in the patients’ fornix—a bundle of nerve fibers connecting the left and right sides of the hippocampus. That’s the region of the brain associated with memory. The theory is that brain stimulation in this area could slow the rate of damage to the fornix and even create new brain cells in the hippocampus. 

The most recent phase of the trial, expected to last four years, focused on the safety of the implants in Alzheimer’s patients. So far, it has found no serious adverse effects.

So long to the big queasy?:  Researchers at Imperial College London say that brain stimulation could even be used to ease motion sickness. Scientists aren’t absolutely certain what causes the queasy sensation, but they believe it has to do with the brain trying to process conflicting signals from our ears and our eyes when we’re in motion. Previous research has determined that a well-functioning vestibular system—that’s the part of the inner ear that senses movement—makes it more likely that someone will feel that nauseating sensation.

So, the researchers wondered what would happen if they used an electric current to mute the signals from the vestibular system to the brain. They worked with 10 men and 10 women volunteers who agreed to wear a cap fitted with electrodes and then, for 10 minutes, receive a mild electrical current designed to inhibit brain cell activity. That was followed by a ride on a chair that rotated and tilted at different speeds to make them feel sick.

It turned out that those who received stimulation that reduced brain cell activity were less likely to feel nauseous and recovered more quickly than people whose brains were stimulated to boost cell activity. Now, the researchers are talking with potential partners about developing a portable anti-nausea stimulation device you could pick up at the drug store. 

Memories are made of this: At a conference earlier this month, DARPA, the research arm of the Defense Department, announced that as part of a study it has funded, patients who were given brain implants scored better on memory tests. Traumatic brain injuries are a big issue for the U.S. military—almost 300,000 members of the service have been treated for one since 2000. So DARPA is leading research efforts into how electrical stimulation might be used to help people with damaged brains create and retrieve memories.

Scientists have been working with brain surgery patients who have volunteered to be part of the memory project. The goal is to more clearly identify the process for how the brain forms and recalls memories and then use mild shocks from implants to recreate that process. It’s only a year into the project, but DARPA says that based on the results so far, it appears possible to map and interpret the neural signals coming from a brain as it encodes or retrieves a memory, and then actually improve that recall by electrically stimulating targeted sections of the brain.

Put down that cigarette: Another project in the early stages is looking at how brain stimulation might help people fight cravings for cigarettes or unhealthy food. Caryn Lerman, senior deputy director of the University of Pennsylvania’s Abramson Cancer Center, recently received a grant to investigate whether applying electrical shocks to sections of the prefrontal cortex behind the forehead—a brain region tied to self-control—can help people resist urges to engage in unhealthy behavior.

The idea is that if targeted correctly, this stimulation could strengthen pathways being used to fight the desire to light up. Preliminary results of an experiment involving 25 smokers found that after a 20-minute session with electrical stimulators strapped to their foreheads, people were able to wait longer before they reached for a cigarette than those who received a placebo treatment.  They also smoked fewer cigarettes.

Stroke recovery: Scientists at the Cleveland Clinic have applied to the Food and Drug Administration (FDA) for permission to begin testing deep brain stimulation on human stroke victims. The treatment seemed to have worked with rats—it appeared to promote the growth of new neurons in the brain.

Not that anyone thinks that this can provide a cure for strokes. When they occur, the blood supply to the brain is cut off and some areas just shut down, while communication to other regions is disrupted. Electrical stimulation can’t bring dead neurons back to life. But it could help create new neural connections, particularly in the cerebellum, the part of the brain that controls voluntary movements. The hope is that parts of the brain that are still healthy would then be better able to compensate for damaged ones.

About 800,000 people in the U.S. suffer strokes every year. And, according to the National Stroke Association, about half of those who survive become severely debilitated.

Boosting empathy: But what about using brain stimulation to change how people feel?  Researchers at Harvard University and Vanderbilt University have ventured into that territory with an experiment related to the doling out of justice.

They presented 66 volunteers with stories about a fictitious person named John—specifically they related a range of crimes he had committed and his mental condition when he had committed them. Beforehand, some of the participants were given a form of brain stimulation that could disrupt activity in the prefrontal cortex, which plays a key role not just in self-control, but also decision-making. For others, the stimulation device was attached, but never turned on.

The volunteers were asked to decide how blameworthy John was and also to determine, on a scale of 0 to 9, how extreme his punishment should be. What the researchers found is that the people whose brain activity was disrupted chose less severe punishments. 

Which raises perhaps the biggest question about brain stimulation: When does fixing the brain turn into changing the brain?

Herbert Hoover's Hidden Economic Acumen

Smithsonian Magazine

From our nation’s inception, Americans have been a forward-looking people—youthful, optimistic, even revolutionary. Progress has been our byword, and the past has often been dismissed as stodgy, if not rudimentary. Few phrases are so thoroughly dismissive as to pronounce that a person, a trend, or an idea is "history."

This inclination is rooted in a sense of optimism, and the confidence that we learn as we go. But it can also reflect a degree of hubris, and the mistaken notion that those who came before were not so clever as we today. When that happens it can blind us to the obvious truth that our forebears possessed wisdom as well as ignorance, and can lead us to repeating mistakes that might well be avoided.

Take the case of Herbert Hoover, America’s 31st president, remembered today as an exemplar of economic mismanagement for his futile response to the onset of the Great Depression, which arrived to the fanfare of the famous stock market collapse of 1929.

Prior to my undertaking a study of Hoover’s single term in office, I shared that view of Hoover. I still see Hoover as a failed president, unable or unwilling to cultivate the personal bond with the electorate that is the ultimate source of power and influence for any elected official. The more I learned of Hoover’s policies, however, the more impressed I became with his insight, vision, and courage—particularly when it came to managing an economy turned hostile. I found, too, that time has done little to discredit his trepidation over the consequences of mounting debt.

When the crash hit the stock market, it set off not only a collapse in the value of financial instruments like stocks, but also a global slump in commodity prices, trade, and, soon after, employment. In the White House, Hoover responded in what was for him typical fashion: a brief, terse statement of confidence, asserting “the fundamental business of the country… is on a very sound basis.” At the same time, but quietly, Hoover pressed the members of his cabinet to ramp up federal spending to provide work for the wave of unemployed workers he privately predicted. Finally, he convened a series of “conferences” with business leaders urging them to maintain wages and employment through the months to come.

These conferences were derided at the time, and more sharply later, as indicative of Hoover’s subservience to the capitalist class, but that is unfair. Hoover’s overriding commitment in all his years in government was to prize cooperation over coercion, and jawboning corporate leaders was part of that commitment. In any event, the wages of American workers were among the last casualties of the Depression, a reversal of practice from the economic downturns of the past.

More telling was the evolution of Hoover’s response as the Depression progressed, spreading from a market crash to the worldwide economic disaster that it became. Peoples and leaders across the globe took the failure of markets, currencies, and policies to mark the death rattle of capitalism per se, and turned to systemic, centralized solutions ranging from communism, exemplified by Soviet Russia, to fascism.

Hoover never accepted the notion that capitalism was dead, or that central planning was the answer. He insisted on private enterprise as the mainspring of development and social progress, and capitalism as the one “ism” that would preserve individual liberty and initiative. It appeared as establishmentarian cant to many of Hoover’s contemporaries, but Hoover’s instincts look like insight today.

More than that, Hoover recognized what appeared a failure of the capitalist system for what it was: a crisis of credit. With asset values in collapse and large parts of their loan portfolios in default, banks stopped lending to farmers, businesses, and builders, stalling recovery, stifling consumer spending and throwing more people out of work. It was a vicious cycle, soon exacerbated by the failure of thousands of rural banks that only added pressure on the financial system.

Hoover’s answer was to stage an unprecedented government foray into the nation’s credit markets. He conceived of a new Federal Home Loan Bank system that would offer affordable loans at a time when mortgages generally covered only half the cost of home building, and ran for terms of just three to five years. Such a novel proposal naturally bogged down in Congress, and it took most of Hoover’s term to get an agency up and running; in the meantime, Hoover fostered similar moves in agriculture, channeling more funds to the existing Federal Land Bank System. In 1932, for instance, Hoover's agriculture secretary supervised $40 million in small loans—$400 and under—that helped 200,000 farmers get their crops in the ground.

As the crisis deepened, Hoover turned his attention to the banking system itself. First he called to a secret conference a clutch of the nation’s most powerful bankers and browbeat them into creating a “voluntary” credit pool to backstop the balance sheets of the more fragile institutions; when that effort failed, the president launched a new federal agency to make direct loans to ailing banks, railroads, and other major corporations. Authorized to issue up to $2 billion in credit—more than half the federal budget at the time—the Reconstruction Finance Corp. marked the first time the federal government took direct, systemic action to shore up the country’s private finance markets. It anticipated TARP, the Troubled Asset Relief Program, by roughly 80 years.

Hoover broke ground on still another financial front, and that was monetary policy. Venturing onto the turf of the Federal Reserve, Hoover pressed to expand the money supply by increasing the kinds of financial paper that would qualify for Fed reserves, thus increasing the amount of funds available to lend, and by advocating Fed purchase of large quantities of debt. Such purchases are termed "open market operations" and are a means of expanding the money supply, thereby (theoretically) lowering interest rates and easing credit. Carried out on large scale they are what today we call "quantitative easing."

Here, however, Hoover ran up against one of his core beliefs—that the currency should be convertible to gold. He felt that maintaining easy convertibility for the dollar, based on the gold standard, was critical to trade and business confidence, and so opposed every measure that might be considered inflationary. At the same time, he understood that low interest rates and easy credit markets could foster investment and recovery.

Torn between his allegiance to sound money and his insights into the state of the economy, Hoover was unable to push his credit plans to the hilt. That is, he backed off from mass bond purchases before the credit markets had a chance to respond, and set too high the collateral requirements for the Reconstruction Finance Corp. loans for banks.

Hoover wanted high collateral requirements because he did not want to assist insolvent banks, just those with liquidity problems. Banks needed to show that, in the end, they could cover the loans. Hoover was also pressured on the same grounds by lawmakers on his left and his right to make sure he wasn't throwing good (public) money after bad (private) money. It’s worth noting that none of those in government at the time had seen lending to private parties—let alone banks—on such a scale before. So they adopted a very conservative approach, which they loosened after gaining some experience, and after a new president had entered the White House.

Indeed, it was left for Franklin Roosevelt to pick up where Hoover left off. That is not to say that FDR did not represent a change in course for the country; his New Deal was a distinct point of departure. But it’s also true, as FDR advisor Rex Tugwell put it later, that “practically the whole New Deal was extrapolated from programs Hoover started.”

That Hoover failed in the White House is a matter of accepted wisdom, and in certain fundamental ways true beyond doubt. Far less known are the nuances of what he did right—his insights into capitalism, what makes it work, and how to answer its setbacks. But in a larger sense Americans are living with Hoover's legacy. For better or worse we remain the global citadel of capitalism, the leader in economic growth and income disparity. To those wondering how we got to this point, some measure of credit has to go to Hoover, an unpopular president who followed his core beliefs at a time when many abandoned theirs.

Charles Rappleye is the author of Herbert Hoover in the White House: The Ordeal of the Presidency (2016). 

Commemorating 100 Years of the RV

Smithsonian Magazine

Every December 15, Kevin Ewert and Angie Kaphan celebrate a “nomadiversary,” the anniversary of wedding their lives to their wanderlust. They sit down at home, wherever they are, and decide whether to spend another year motoring in their 40-foot recreational vehicle.

Their romance with the road began six years ago, when they bought an RV to go to Burning Man, the annual temporary community of alternative culture in the Nevada desert. They soon started taking weekend trips and, after trading up to a bigger RV, motored from San Jose to Denver and then up to Mount Rushmore, Deadwood, Sturgis, Devil’s Tower and through Yellowstone. They loved the adventure, and Ewert, who builds web applications, was able to maintain regular work hours, just as he’d done at home in San Jose.

So they sold everything, including their home in San Jose, where they’d met, bought an even bigger RV, and hit the road full time, modern-day nomads in a high-tech covered wagon. “What we’re doing with the RV is blazing our own trail and getting out there and seeing all these places,” Ewert says. “I think it’s a very iconic American thing.”

The recreational vehicle turns 100 years old this year. According to the Recreational Vehicle Industry Association, about 8.2 million households now own RVs. They travel an average of 26 days and 4,500 miles annually, according to a 2005 University of Michigan study, which estimates about 450,000 of them are full-time RVers like Ewert and Kaphan.

Drivers began making camping alterations to cars almost as soon as they were introduced. The first RV was Pierce-Arrow’s Touring Landau, which debuted at Madison Square Garden in 1910. The Landau had a back seat that folded into a bed, a chamber pot toilet and a sink that folded down from the back of the seat of the chauffeur, who was connected to his passengers via telephone. Camping trailers made by Los Angeles Trailer Works and Auto-Kamp Trailers also rolled off the assembly line beginning in 1910. Soon, dozens of manufacturers were producing what were then called auto campers, according to Al Hesselbart, the historian at the RV Museum and Hall of Fame in Elkhart, Indiana, a city that produces 60 percent of the RVs manufactured in the United States today.

As automobiles became more reliable, people traveled more and more. The rise in popularity of the national parks attracted travelers who demanded more campsites. David Woodworth—a former Baptist preacher who once owned 50 RVs built between 1914 and 1937, but sold many of them to the RV Museum—says in 1922 you could visit a campground in Denver that had 800 campsites, a nine-hole golf course, a hair salon and a movie theater.

The Tin Can Tourists, named because they heated tin cans of food on gasoline stoves by the roadside, formed the first camping club in the United States, holding their inaugural rally in Florida in 1919 and growing to 150,000 members by the mid-1930s. They had an initiation; an official song, “The More We Get Together;” and a secret handshake.

Another group, the self-styled Vagabonds—Thomas Edison, Henry Ford, Harvey Firestone and naturalist John Burroughs—caravanned in cars on annual camping trips from 1913 to 1924, bringing along a custom Lincoln truck outfitted as a camp kitchen. Though they slept in tents, their widely chronicled adventures drew national attention and stirred in others a desire to go car camping (regular folks certainly didn’t have their means), helping to promote the RV lifestyle. Later, CBS News correspondent Charles Kuralt captured the romance of life on the road with reports that started in 1967, wearing out motor homes by covering more than a million miles over the next 25 years in his “On the Road” series. “There’s just something about taking your home with you, stopping wherever you want to and being in the comfort of your own home, being able to cook your own meals, that has really appealed to people,” Woodworth says.

Overland Park Trailer Camp, circa 1925 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

An RV travels through Yellowstone National Park (© Jill Fromer)

Adams Motor Bungalo, 1917 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

Sportsman Trailer, 1932 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

Airstream, 1933 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

Airstream Clipper, 1936 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

Hunt Housecar, 1937 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

Frank Motorhome, 1961 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

Winnebago Motorhome, circa 1966 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

Newell motorhome, 1978 (Photograph from the collections of Al Hesselbart and the RV/MH Hall of Fame and Museum)

The crash of 1929 and the Depression dampened the popularity of RVs, although some people used travel trailers, which could be purchased for $500 to $1,000, as inexpensive homes. Rationing during World War II stopped production of RVs for consumer use, although some companies converted to wartime manufacturing, making units that served as mobile hospitals, prisoner transports and morgues.

After the war, the returning GIs and their young families craved inexpensive ways to vacation. The burgeoning interstate highway system offered a way to go far fast and that combination spurred a second RV boom that lasted through the 1960s.

Motorized RVs started to become popular in the late 1950s, but they were expensive luxury items that were far less popular than trailers. That changed in 1967 when Winnebago began mass-producing what it advertised as “America’s first family of motor homes,” five models from 16 to 27 feet long, which sold for as little as $5,000. By then, refrigeration was a staple of RVs, according to Hesselbart, who wrote The Dumb Things Sold Just Like That, a history of the RV industry.

“The evolution of the RV has pretty much followed technology,” Woodworth says. “RVs have always been as comfortable as they can be for the time period.”

As RVs became more sophisticated, Hesselbart says, they attracted a new breed of enthusiasts interested less in camping and more in destinations, like Disney World and Branson, Missouri. Today, it seems that only your budget limits the comforts of an RV. Modern motor homes have convection ovens, microwaves, garbage disposals, washers and dryers, king-size beds, heated baths and showers and, of course, satellite dishes.

“RVs have changed, but the reason people RV has been constant the whole time,” Woodworth says. “You can stop right where you are and be at home.”

Ewert chose an RV that features an office. It’s a simple life, he says. Everything they own travels with them. They consume less and use fewer resources than they did living in a house, even though the gas guzzlers get only eight miles a gallon. They have a strict flip-flops and shorts dress code. They’ve fallen in love with places like Moab and discovered the joys of southern California after being northern California snobs for so long. And they don’t miss having a house somewhere to anchor them. They may not be able to afford a house in Malibu down the street from Cher’s place, but they can afford to camp there with a million-dollar view out their windows. They’ve developed a network of friends on the road and created NuRvers.com, a Web site for younger RV full-timers (Ewert is 47; Kaphan is 38).

Asked about their discussion on the next December 15, Ewert says he expects they’ll make the same choice they have made the past three years—to stay on the road. “We’re both just really happy with what we’re doing,” he says. “We’re evangelical about this lifestyle because it offers so many new and exciting things.”

New York Says Goodbye to Plastic Bags

Smithsonian Magazine

In an ambitious effort to reduce litter and waste, the state of New York has implemented a controversial ban on the distribution of single-use plastic bags—once a ubiquitous feature of grocery stores, shops and bodegas.

The law, which was passed last year and went into effect on Sunday, prohibits many stores from handing out plastic bags to customers. New York’s Department of Environmental Conservation has launched a campaign—#BYOBagNY—that seeks to encourage shoppers to bring their own bags, preferably reusable ones, with them when shopping.

“Plastic bag usage affects both our communities and environment,” says the department on its website. “Plastic bags can be seen stuck in trees, as litter in our neighborhoods, and floating in our waterways. … Using reusable bags makes sense and is the right thing to do.”

As Anne Barnard reports for the New York Times, New York governor Andrew Cuomo has said that the goal of the initiative is “not to be punitive,” but instead to educate consumers and businesses about environmentally friendly practices. The state will wait until April 1 to start penalizing stores that violate the law, according to NBC New York. Businesses that do not comply will first receive a warning, but could pay $250 for a subsequent violation and a $500 fine for another violation within the same year.

Exemptions to the rule include plastic bags used for takeout food, uncooked meat or fish, bulk produce, and prescription drugs. Newspaper bags, garbage and recycling bags, and garment bags are exempt, too.

Retailers will be allowed to provide single-use paper bags, and local governments have the option of imposing a five-cent fee for each bag a customer uses. Per the Times, two of these cents will be allocated to “programs aimed at distributing reusable bags.” The remaining three cents will be given to New York’s Environmental Protection Fund.

With its new law, New York becomes the third state to ban single-use plastic bags, following in the footsteps of California and Oregon. Hawaii is said to have a “de facto ban,” since all of its local governments prohibit plastic bags.

Officials say that New Yorkers use 23 billion plastic bags each year, contributing to a major global pollution problem. Single-use plastic bags are as destructive as they are convenient. They often end up in oceans, where they entangle marine animals or clog their stomachs. Most plastic bags do not biodegrade (even ones marketed as biodegradable may not live up to their name), instead breaking down into smaller and smaller pieces that can be ingested by various organisms and accumulate in the food chain. As they decompose, plastic bags also emit greenhouse gases, thus contributing to global warming.

When New York’s plastic bag ban was first passed, some advocates criticized the government for stopping short of mandating a paper bag fee, potentially paving the way for consumers to simply use paper rather than reusable bags. As Ben Adler points out for Wired, paper bags may actually have a higher carbon footprint than plastic, largely because it takes more energy to produce and transport them. One study by the government of Denmark also found that if you look at the products’ entire life cycle from factory to landfill, certain types of reusable bags would have to be reused thousands of times to make them a more sustainable option than plastic bags.

Still, explains Jennifer Clapp, Canada research chair in global food security and sustainability at the University of Waterloo, to Ula Chrobak of Popular Science, such broad assessments are not “always that helpful.”

“Many of the life cycle assessment studies are basically looking at embodied energy and climate change,” she says, “and that doesn’t address these questions of permanence, toxicity, and hazards.”

The ban has also come under fire from store owners who worry about how the law will impact business. Jim Calvin, president of the New York Association of Convenience Stores, tells CNN’s Bre’Anna Grant and Evan Simko-Bednarski that the “biggest problem right now” is the shortage and rising cost of paper bags available to retailers.

Without paper bags on site, “[t]he only choices for a customer who forgot a cloth bag will be to buy a reusable bag on site, which might cost $1 or more,” notes Calvin, “or carry out their purchases in their arms, which makes a convenience store an inconvenience store.”

Proponents of the ban cite the importance of training shoppers to stop expecting that plastic bags will simply be handed to them at check-out.

“Right now, the bag is just so automatic for both you and the clerk,” Peter Iwanowicz, a member of New York State’s Climate Action Council, tells the Times. “You accept the bag handed to you even though you didn’t need it for that one greeting card.”

The ban, adds Iwanowicz, “is the first really big push back against disposable culture.”

Astronomers Prepare a Mission Concept to Explore the Ice Giant Planets

Smithsonian Magazine

If you could design your dream mission to Uranus or Neptune, what would it look like?

Would you explore the funky terrain on Uranus’s moon Miranda? Or Neptune’s oddly clumpy rings? What about each planet’s strange interactions with the solar wind?

Why pick just one, when you could do it all?

Planetary scientists recently designed a hypothetical mission to one of the ice giant planets in our solar system. They explored what that dream spacecraft to Uranus could look like if it incorporated the newest innovations and cutting-edge technologies.

“We wanted to think of technologies that we really thought, ‘Well, they’re pushing the envelope,’” said Mark Hofstadter, a senior scientist at the Jet Propulsion Laboratory (JPL) and California Institute of Technology in Pasadena. “It’s not crazy to think they’d be available to fly 10 years from now.” Hofstadter is an author of the internal JPL study, which he discussed at AGU’s Fall Meeting 2019 on 11 December.

Some of the innovations are natural iterations of existing technology, Hofstadter said, like using smaller and lighter hardware and computer chips. Using the most up-to-date systems can shave off weight and save room on board the spacecraft. “A rocket can launch a certain amount of mass,” he said, “so every kilogram less of spacecraft structure that you need, that’s an extra kilogram you could put to science instruments.”

Nuclear-Powered Ion Engine

The dream spacecraft combines two space-proven technologies into one brand-new engine, called radioisotope electric propulsion (REP).

A spacecraft works much like any other vehicle. A battery provides the energy to run the onboard systems and start the engine. The power moves fuel through the engine, where it undergoes a chemical change and provides thrust to move the vehicle forward.

(JoAnna Wendel)

In the dream spacecraft, the battery gets its energy from the radioactive decay of plutonium, which is the preferred energy source for traveling the outer solar system where sunlight is scarce. Voyager 1, Voyager 2, Cassini, and New Horizons all used a radioisotope power source but used hydrazine fuel in a chemical engine that quickly flung them to the far reaches of the solar system.

The dream spacecraft’s ion engine uses xenon gas as fuel: The xenon is ionized, a nuclear-powered electric field accelerates the xenon ions, and the xenon exits the craft as exhaust. The Deep Space 1 and Dawn missions used this type of engine but were powered by large solar panels that work best in the inner solar system where those missions operated.

Xenon gas is very stable. A craft can carry a large amount in a compressed canister, which lengthens the fuel lifetime of the mission. REP “lets us explore all areas of an ice giant system: the rings, the satellites, and even the magnetosphere all around it,” Hofstadter said. “We can go wherever we want. We can spend as much time as we want there….It gives us this beautiful flexibility.”
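To make that flexibility concrete, here is a minimal sketch of the Tsiolkovsky rocket equation in Python, using roughly representative specific impulses for a hydrazine-class chemical thruster and a xenon ion engine; the spacecraft and propellant masses are hypothetical and are not taken from the JPL study.

```python
import math

def delta_v(isp_s, dry_mass_kg, propellant_kg, g0=9.80665):
    """Tsiolkovsky rocket equation: total velocity change, in m/s."""
    return isp_s * g0 * math.log((dry_mass_kg + propellant_kg) / dry_mass_kg)

# Hypothetical masses for illustration only: a 2,000 kg orbiter carrying 400 kg of propellant.
chemical = delta_v(isp_s=230, dry_mass_kg=2000, propellant_kg=400)   # hydrazine-class specific impulse
ion = delta_v(isp_s=3000, dry_mass_kg=2000, propellant_kg=400)       # xenon ion-class specific impulse

print(f"chemical thruster: ~{chemical:,.0f} m/s of maneuvering capability")
print(f"ion engine:        ~{ion:,.0f} m/s of maneuvering capability")
```

With the same propellant load, the ion engine's much higher specific impulse yields roughly an order of magnitude more total velocity change, which is why a radioisotope electric propulsion craft can keep maneuvering among rings, moons and the magnetosphere rather than spending its fuel in a few large burns.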

A Self-Driving Spacecraft

With REP, the dream spacecraft could fly past rings, moons, and the planet itself about 10 times slower than a craft with a traditional chemical combustion engine. Moving at a slow speed, the craft could take stable, long-exposure, high-resolution images. But to really make the most of the ion engine, the craft needs onboard autonomous navigation.

“We don’t know precisely where the moon or a satellite of Uranus is, or the spacecraft [relative to the moon],” Hofstadter said. Most of Uranus’s satellites have been seen only from afar, and details about their size and exact orbits remain unclear. “And so because of that uncertainty, you always want to keep a healthy distance between your spacecraft and the thing you’re looking at just so you don’t crash into it.”

“But if you trust the spacecraft to use its own camera to see where the satellite is and adjust its orbit so that it can get close but still miss the satellite,” he said, “you can get much closer than you can when you’re preparing flybys from Earth” at the mercy of a more than 5-hour communications delay.

(JoAnna Wendel)

That level of onboard autonomous navigation hasn’t been attempted before on a spacecraft. NASA’s Curiosity rover has some limited ability to plot a path between destinations, and the Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) will be able to detect hazards and abort its sample retrieval attempt.

The dream spacecraft would be more like a self-driving car. It would know that it needs to do a flyby of Ophelia, for example. It would then plot its own low-altitude path over the surface that visits points of interest like chaos terrain. It would also navigate around unexpected hazards like jagged cliffs. If the craft misses something interesting, well, there’s always enough fuel for another pass.
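The gain from trusting onboard navigation can be sketched with a toy calculation: the closest flyby you can responsibly plan scales with how well you know the spacecraft's position relative to the target, so shrinking that uncertainty shrinks the planned miss distance. The numbers below are invented for illustration and do not come from the study.

```python
def min_planned_altitude_km(position_uncertainty_km, margin_sigma=3.0, floor_km=10.0):
    """Lowest flyby altitude a planner would accept: keep a multiple of the
    relative-position uncertainty as margin, never dropping below a hard floor."""
    return max(margin_sigma * position_uncertainty_km, floor_km)

# Invented, illustrative uncertainties in the craft's position relative to the moon:
planned_from_earth = min_planned_altitude_km(position_uncertainty_km=100.0)  # flyby designed on the ground
onboard_optical_nav = min_planned_altitude_km(position_uncertainty_km=5.0)   # craft tracks the moon with its own camera

print(f"planned from Earth: fly no lower than ~{planned_from_earth:.0f} km")
print(f"onboard optical navigation: fly no lower than ~{onboard_optical_nav:.0f} km")
```

The same safety logic that keeps a ground-planned flyby at a "healthy distance" allows a much lower pass once the craft can measure and correct its own trajectory.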

A Trio of Landers

With extra room on board from sleeker electronics, plus low-and-slow flybys from the REP and autonomous navigation, the dream spacecraft could carry landers to Uranus’s moons and easily drop them onto the surface.

(JoAnna Wendel)

“We designed a mission to carry three small landers that we could drop on any of the satellites,” Hofstadter said. The size, shape, and capabilities of the landers could be anything from simple cameras to a full suite of instruments to measure gravity, composition, or even seismicity.

The dream spacecraft could survey all 27 of Uranus’s satellites, from its largest, Titania, to its smallest, Cupid, only 18 kilometers across. The mission team could then decide the best way to deploy the landers.

“We don’t have to decide in advance which satellites we put them on,” he said. “We can wait until we get there. We might decide to put all the landers on one satellite to make a little seismic network to look for moonquakes and study the interior. Or maybe when we get there we’ll decide we’d rather put a lander on three different satellites.”

“Ice”-ing on a Cake

The scientists who compiled the internal study acknowledged that it’s probably unrealistic to incorporate all of these innovative technologies into one mission. Doing so would involve a lot of risk and a lot of cost, Hofstadter said. Moreover, existing space-tested technology that has flown on Cassini, New Horizons, and Juno can certainly deliver exciting ice giant science, he said. These innovations could augment such a spacecraft.

At the moment, there is no NASA mission under consideration to explore either Uranus or Neptune. In 2017, Hofstadter and his team spoke with urgency about the need for a mission to one of the ice giant planets and now hope that these technologies of the future might inspire a mission proposal.

“It’s almost like icing on the cake,” he said. “We were saying, If you adopted new technologies, what new things could you hope to do that would enhance the scientific return of this mission?”

This article was originally published on Eos, an Earth and space science news publication.

The Amazon Has Lost More Than Ten Million Football Fields of Forest in a Decade

Smithsonian Magazine

This year, I was on the judging panel for the Royal Statistical Society’s International Statistic of the Decade.

Much like Oxford English Dictionary’s “Word of the Year” competition, the international statistic is meant to capture the zeitgeist of this decade. The judging panel accepted nominations from the statistical community and the public at large for a statistic that shines a light on the decade’s most pressing issues.

On Dec. 23, we announced the winner: the 8.4 million soccer fields of land deforested in the Amazon over the past decade. That’s 24,000 square miles, or about 10.3 million American football fields.

(Chart: The Conversation, CC-BY-ND Source: Mongabay)

This statistic, while giving only a snapshot of the issue, provides insight into the dramatic change to this landscape over the last 10 years. Since 2010, mile upon mile of rainforest has been replaced with a wide range of commercial developments, including cattle ranching, logging and the palm oil industry.

This calculation by the committee is based on deforestation monitoring results from Brazil’s National Institute for Space Research, as well as FIFA’s regulations on soccer pitch dimensions.
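As a rough illustration of that conversion, the arithmetic looks like the sketch below; it assumes FIFA's recommended international pitch of 105 by 68 meters, and since the committee's exact inputs and rounding are not reproduced here, the result lands a little under the rounded figure quoted above.

```python
# Rough sketch of the unit conversion behind the statistic.
PITCH_M2 = 105 * 68            # FIFA's recommended international pitch, about 7,140 square meters
SQ_MILE_M2 = 1609.344 ** 2     # square meters in one square mile

pitches_lost = 8.4e6           # deforestation over the decade, expressed in soccer pitches
area_m2 = pitches_lost * PITCH_M2

print(f"{area_m2 / SQ_MILE_M2:,.0f} square miles")  # a bit over 23,000 with this pitch size
```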

Calculating the cost

There are a number of reasons why this deforestation matters – financial, environmental and social.

First of all, 20 million to 30 million people live in the Amazon rainforest and depend on it for survival. It’s also the home to thousands of species of plants and animals, many at risk of extinction.

Second, one-fifth of the world’s fresh water is in the Amazon Basin, supplying water to the world by releasing water vapor into the atmosphere that can travel thousands of miles. But unprecedented droughts have plagued Brazil this decade, attributed to the deforestation of the Amazon.

During the droughts, in Sao Paulo state, some farmers say they lost over one-third of their crops due to the water shortage. The government promised the coffee industry almost US$300 million to help with their losses.

Third, the Amazon rainforest alone stores over 180 billion tons of carbon. When trees are cleared or burned, that carbon is released back into the atmosphere. Studies show that the social cost of carbon emissions is about $417 per ton.

Finally, as a November 2018 study shows, the Amazon could generate over $8 billion each year if just left alone, from sustainable industries including nut farming and rubber, as well as the environmental effects.

Financial gain?

Some might argue that there has been a financial gain from deforestation and that it really isn’t a bad thing. Brazil’s president, Jair Bolsonaro, went so far as to say that saving the Amazon is an impediment to economic growth and that “where there is indigenous land, there is wealth underneath it.”

To weigh that argument on its own terms, let’s take a look at the numbers. Assume each acre of rainforest converted into farmland is worth about $1,000, which is about what U.S. farmers have paid to buy productive farmland in Brazil. Then, over the past decade, that farmland amounts to about $1 billion.

The deforested land mainly contributes to cattle raising for slaughter and sale. There are a little over 200 million cattle in Brazil. Assuming two cows per acre, the extra land means a gain of about $20 billion for Brazil.

Chump change compared to the economic loss from deforestation. The farmers, commercial interest groups and others looking for cheap land all have a clear vested interest in deforestation going ahead, but any possible short-term gain is clearly outweighed by long-term loss.

Rebounding

Right now, every minute, over three football fields of Amazon rainforest are being lost.

What if someone wanted to replant the lost rainforest? Many charity organizations are raising money to do just that.

At the cost of over $2,000 per acre – and that is the cheapest I could find – it isn’t cheap, totaling over $30 billion to replace what the Amazon lost this decade.

Still, the studies that I’ve seen and my calculations suggest that trillions have been lost due to deforestation over the past decade alone.

A 3.8-Million-Year-Old Skull Puts a New Face on a Little-Known Human Ancestor

Smithsonian Magazine

Spotting the intact Australopithecus skull in the Ethiopian dirt caused paleoanthropologist Yohannes Haile-Selassie to literally jump for joy. “It was something that I’ve never seen before, and I’ve seen a lot of cranial fossils,” he says.

The chance discovery by Haile-Selassie and an Ethiopian shepherd has created a captivating portrait of a 3.8-million-year-old face, providing an unprecedented look at a hominin species from a key stage of human evolution. Experts say the extraordinary fossil can help redefine the branches of humans’ evolutionary tree during a time when our ancestors had just evolved efficient ways to walk upright.

“This cranium looks set to become another celebrated icon of human evolution,” Fred Spoor, a human evolution researcher at the Natural History Museum in London, writes in a News & Views article that accompanied Haile-Selassie and colleagues’ new study in the journal Nature.

The amazingly complete skull surfaced at Woranso-Mille, in Ethiopia’s Afar region, back in 2016. But it has taken three and a half years of hard work to answer the first question that arose—just what kind of skull is it?

Composite image of human hands holding “MRD” by Jennifer Taylor. (Photography by Dale Omori and Liz Russell / Cleveland Museum of Natural History)

Haile-Selassie and colleagues compared the skull (dubbed MRD after part of its collection ID number) with a wide variety of hominin fossils from across Africa. They sized up different morphological features to see what species the cranium represents and where it fits in the interconnected lineages of our family tree. The results identify the skull as belonging to a male Australopithecus anamensis. The hominin species is theorized to have vanished a bit earlier than 3.8 million years ago after giving rise to a later lineage, Australopithecus afarensis, to which the famed fossil Lucy belongs. A. anamensis has traits of both apes (climbing arms and wrists) and humans (changes in the ankles and knee joints to facilitate walking on two feet).

Most previous fossil specimens of A. anamensis are limited to small bits of bone, such as a tooth, partial jaw, or fragment of arm or shin. The opportunity to study a nearly complete braincase and face confirms the “southern ape” as a unique species and shines light on the differences between two of our most ancient hominin ancestors, A. anamensis and A. afarensis.

“Most of A. anamensis’ own traits are quite primitive,” Haile-Selassie says, noting the individual’s small brain, protruding face and large canine teeth. “There are a few features exclusively shared with A. afarensis, like the orbital region in the frontal area. But everything else is really primitive. If you look at it from the back, it looks like an ape. This is something that I never expected to see in a species that is hypothesized to be the ancestor of A. afarensis. So it changed the whole gamut of ideas in terms of the relationship between those two.”

The skull also casts doubt on prevailing ideas that the older lineage directly gave rise to the younger, instead suggesting that the two lived together, coexisting for at least 100,000 years. But the study authors stress that it’s still quite possible that early populations of A. anamensis gave rise to A. afarensis perhaps 4 million years ago—they just didn’t die out immediately afterwards.

“Probably a small population of A. anamensis isolated itself from the main population, underwent major changes, and over time distinguished itself from the parent species of A. anamensis. That’s probably how A. afarensis appeared,” Haile-Selassie says.

A reconstruction of the facial morphology of the 3.8 million-year-old 'MRD' specimen of Australopithecus anamensis. (Photograph by Matt Crow / Facial reconstruction by John Gurche made possible through generous contribution by Susan and George Klein / Cleveland Museum of Natural History)

The research team argues that the relationship between the two ancient hominin species, believed to be ancestors to our own genus Homo, may be a prime example of a nonlinear evolutionary scenario common in other non-human species. Anagenesis, when one species evolves so completely into another species that the progenitor disappears, is not the primary way the branches on our family tree diverged.

“Just because one species gave rise to another, it doesn’t mean that the source species (ancestor) disappeared,” Rick Potts, head of the Smithsonian’s Human Origins Program who was not involved in the new study, says via email from a dig in Kenya. “We’ve known for some time that the human family tree is branching and diverse, like the evolutionary trees of almost all other species. The new cranium is significant because it illustrates this pattern of biodiversity in a poorly known period of hominin evolution, just as our ancestors evolved a stronger and stronger commitment to walking on two legs.”

Paleoanthropologist Meave Leakey and colleagues reported in 1995 that A. anamensis was the first known species to evolve an expanded knee joint that allowed each of its legs to briefly bear all of its body weight during bipedal walking. Bipedalism set our ancestors apart from the apes, enabling ancient hominins to take advantage of a wider range of habitats than those available to tree climbers.

A second, related study helped to more precisely date the cranium fossil by investigating minerals and volcanic layers where it was found. The work also helped describe the long-vanished world in which A. anamensis and his kin lived.

The 3.8 million-year-old cranium of the 'MRD' specimen of Australopithecus anamensis. (Dale Omori / Cleveland Museum of Natural History)

The skull was buried in sand that was deposited in a river delta on the shores of an ancient lake. The sediment deposits also held botanical remains, revealing that the environment around the ancient lake was predominantly dry shrubland, but there was a mixture of other local ecosystems as well.

“There were forests around the shores of the lake and along the river that flowed into it, but the surrounding area was dry with few trees,” Beverly Saylor, a geologist at Case Western Reserve University and lead author of the second study, said at a press conference. The evidence suggests that, like contemporaries from other sites, the male hominin likely dined on a tough, ape-like diet of seeds, grasses and similar fare.

Haile-Selassie and colleagues have been working in the area of Woranso-Mille, Ethiopia, for 15 years. When a local shepherd showed up in camp to announce the find of some intriguing fossils, Haile-Selassie was skeptical, especially because locals had often dragged him to visit supposed fossil sites simply because they needed a ride somewhere. He asked Habib Wogris, the local chief who organizes fieldwork in the region each year, to take an hour-long walk with the shepherd to visit the site of his find.

“The chief has seen a lot of teeth of hominins from the site and he realized that this tooth looked like a hominin tooth,” Haile-Selassie says. “As soon as he returned and opened his hand and I saw the tooth, I said, ‘Where did you find it?’ They said, ‘let’s go and we'll show you.’”

The fossil site was in the region’s high ground, where the shepherd had moved his flock to escape seasonal flooding in lower areas. “He’s been living there like three months with his goats, and he saw the fossil when he was digging a hole for his newborn goats to make a protection for them from jackals and hyenas,” Haile-Selassie says.

Yohannes Haile-Selassie with “MRD” cranium. (Cleveland Museum of Natural History)

On site, the shepherd showed him where the tooth had been lying, and Haile-Selassie surveyed the surroundings looking for other fragments.

“Three meters from where I was standing there was this round thing, just like a rock, and I said oh my goodness,” Haile-Selassie recalls. His reaction, literally jumping up and down with excitement, made the shepherd remark that the doctor had gone crazy. “I speak their language, and I said no, the doctor is not going crazy. He’s just excited,” Haile-Selassie laughs.

With the rare fossil’s formal unveiling today, the excitement of the initial find three years ago has spread throughout the community of scientists looking to put a human, or hominin, face on our distant ancestors.

Camera, Motion Picture, Cine-Kodak Special, 16mm

National Air and Space Museum
With associated accessories.

Art Scholl was a highly regarded aerial cinematographer, stunt pilot, and aerobatic champion. Using this variety of cameras, Scholl shot aerial scenes for the 1970s television series Baa Baa Black Sheep and such movies as Capricorn I, The Great Waldo Pepper, Top Gun, and the IMAX film Flyers.

Scholl's de Havilland DHC-1A Chipmunk is on display in the Udvar-Hazy Center, and master copies of several of his films reside in the Museum's motion picture collection. The cameras are of 1940s to 1960s vintage. Many of these models were used by both U.S. and international military and commercial aerial photographers.

The de Havilland Chipmunk was originally designed as a post World War II primary trainer, a replacement for the venerable de Havilland Tiger Moth training biplane used by the air forces of the British Commonwealth throughout World War II. Among the tens of thousands of pilots who trained in or flew the Chipmunk for pleasure was veteran aerobatic and movie pilot Art Scholl. He flew his Pennzoil Special at airshows around the country throughout the 1970s and early 1980s, thrilling audiences with skill and showmanship, and proving that the design itself was a top-notch aerobatic aircraft.

The Chipmunk was designed, initially built, and flown by de Havilland's Canadian subsidiary, hence the very Canadian "woods country" sounding name of Chipmunk, which complemented the company's other aircraft: the Beaver, Otter, and Caribou. The prototype first flew on May 22, 1946, in Toronto. De Havilland of Canada produced 218 Chipmunks, and de Havilland in England produced 1,000 airplanes for training at various Royal Air Force and University Air Squadrons during the late 1940s and into the 1950s. OGMA in Portugal produced 60 for the Portuguese Air Force. (One source says 66.) In 1952, His Royal Highness the Duke of Edinburgh took his initial flight training in a Chipmunk. It was also used in other roles, such as light communications flights in Germany and for internal security duties on the island of Cyprus.

The Chipmunk was an all-metal, low wing, tandem two-place, single engine airplane with a conventional tail wheel landing gear. It had fabric-covered control surfaces and a clear plastic canopy covering the pilot and passenger/student positions. The production versions of the airplane were powered by a 145 hp in-line de Havilland Gipsy Major "8" engine.

Art Scholl purchased two Canadian-built Chipmunks from the surplus market after they became available in the late 1950s and early 1960s. He purchased the two-place DHC-1A, N114V, first and it now resides in the Experimental Aircraft Association's museum in Oshkosh, Wisconsin. In 1968, Scholl bought another DHC-1A and began extensive modifications that resulted in almost a completely new aircraft. He covered over one cockpit to reconfigure the aircraft into a single-place aircraft and installed a (fuel injected) 260 hp Lycoming GO-435 flat-opposed 6-cylinder engine. He removed 20 inches from each wingtip and changed the airfoil section of the tip area. The reduction in span led to the need to lengthen the ailerons inboard to retain control effectiveness. This in turn reduced the flaps to where they became somewhat ineffective, and, since the flaps really were not required for the normal show and aerobatic routines, he removed them as a weight saving measure. These modifications improved the low speed tip stall characteristics and improved roll performance during aerobatic maneuvers.

The vertical fin and rudder acquired a 25% increase in area and an increased rudder throw to manage the effects of increased engine torque and for better directional control during slow-speed aerobatic routines. The standard fixed landing gear was replaced with a retractable gear from a Bellanca airplane. The landing gear was subsequently damaged during a belly landing and resulted in a permanent wheel toe-in that was never repaired. This caused a tire drag during takeoffs and landings that led to the need for tire replacement after about 10 takeoffs and landings. Other idiosyncrasies were the pitot static tube being fashioned from a golf club shaft and a 3-inch extension added to the cockpit control stick to ease the control loads during the more severe aerobatic routines. Scholl also installed rear-view mirrors on both sides of the cowling just forward of the windscreen. He placed an RAF placard on the instrument panel as a memorial to some Vulcan bomber crew members who were his personal friends. He installed three smoke generators with red, white, and blue smoke for his show routines that included the Lomcevak tumbling/tailslide maneuver.

Scholl referred to this airplane as a Super Chipmunk, a term also used for a few other highly modified Chipmunks. Technically, however, it remained a Chipmunk, operated in the Experimental category under an FAA special airworthiness certificate.

Scholl designed most of these modifications himself, drawing upon his Ph.D. and his 18 years as a university professor in aeronautics. He held all pilot ratings, and was a licensed aircraft and powerplant (A&P) mechanic and an authorized FAA Inspector. He was also a three-time member of the U.S. Aerobatic Team, an air racer (placing several times at the National Air Races at Reno), an airshow pilot, and a fixed base operator with a school of international aerobatics. In 1959, Scholl began working for legendary Hollywood pilots Frank Tallman and Paul Mantz at Tallmantz Aviation and then later formed his own movie production company, producing and performing aerial photography and stunts for many movies and television shows. At airshows, Scholl often flew with his dog Aileron, who rode the wing as Scholl taxied on the runway or sat on his shoulder in the aircraft.

Art Scholl was killed in 1985 while filming in a Pitts Special for the movie Top Gun. Art Scholl's estate donated the Pennzoil Special, N13Y, serial number 23, and his staff delivered it to the Garber Facility in Suitland, Maryland on August 18, 1987. It is currently on display at the Museum's Stephen F. Udvar-Hazy Center at Washington Dulles International Airport in Chantilly, Virginia.

Can This Trash Can Turn Food Waste Into Garden Treasure?

Smithsonian Magazine

There are many parts of produce that consumers don’t usually eat—apple cores, orange peels, carrot tops, cucumber butts. That’s not to say that inventive chefs haven’t found ways to use these commonly trashed edibles. But in most developed nations, people waste a lot of food.

To put this into perspective: Every year, roughly one third, or 1.3 billion tonnes, of the food produced worldwide for human consumption is wasted, according to the United Nations Food and Agriculture Organization. While similar amounts of food are wasted in industrialized and developing countries, in the former a whopping 40 percent of the waste can be pinned on consumers and retailers.

And that’s a big problem.

Aside from the many people this waste could feed, billions of pounds of food head to landfills each year, where they sit decomposing and producing methane, a potent greenhouse gas. But an innovative composting device, the Zera Food Recycler, hopes to take a bite out of this mounting food waste.

The Zera recycler is the brainchild of WLabs, Whirlpool’s innovation incubator. First conceived in 2012, the device is a little larger than a standard kitchen trash can and, with the help of an additive, can turn food scraps into something resembling fertilizer.

Throughout the week you can put all of your food waste (from fruit to veggies to meat to dairy, minus any large pits or bones) into the device and close the lid. (WLabs of Whirlpool Corporation)

If executed correctly, composting is a win for the environment. No matter how you slice it, cucumber butts are always going to be an issue. But tossing them into landfill-bound trash can have more of an impact than most imagine. The piled trash has to be trucked to the nearest landfill (sometimes across state lines), where it produces enormous amounts of methane.

“If you put all the food waste into one country, it would be the world’s third largest greenhouse gas emitter,” Brian Lipinski, an associate in the World Resources Institute’s Food Program, told Smithsonian.com in 2015.

Heaps upon heaps of trash are piled up in landfills, left to decompose with little aeration or stirring. This means that the trash undergoes what’s called anaerobic degradation—a process that emits methane, which traps up to 86 times more heat than its greenhouse gas cousin, carbon dioxide. This type of anaerobic degradation can even happen in poorly tended compost piles that are not turned or otherwise aerated on a regular basis.

Even when done right, traditional composting can take months, requiring intense and prolonged microbial action to convert food to the earthy-smelling brown stuff you can liberally apply to lawns and gardens. And tending to the heap of degrading food—aerating the pile, adjusting the acidity, optimizing the carbon to nitrogen ratio—can only speed up the process so much.

So how does Zera deal with these limits? “It’s a really easy answer,” says Tony Gates, the project lead for Zera. “We’re doing no microbial breakdown at all.”

Zera relies on heating the material to start the decomposition—or rather liquefaction—process. According to the company’s website, throughout the week you can put all of your food waste (from fruit to veggies to meat to dairy, minus any large pits or bones) into the device and close the lid. When the machine is full, just drop in the additive pack—essentially, a combination of coconut husk and baking soda, says Gates. With a push of a button, the machine takes over, heating the soon-to-be food goo to a toasty 158 degrees Fahrenheit. A central auger slowly turns to agitate and aerate the mix and fans continually run to dry it down.

The squashed food transforms over the course of this process, which takes up to 24 hours—from liquefied food to what’s known as the “peanut butter phase” to the solid phase to the loose fertilizer phase, says Gates.
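For readers who want a concrete picture of that cycle, here is a minimal Python sketch of the heat-agitate-dry sequence Gates describes. It is a toy model only, not Whirlpool's control logic: the phase thresholds and the hourly drying rate are made-up values chosen so the example runs.

```python
# A minimal sketch (not Whirlpool's control logic) of the cycle described in the
# article: hold the contents at roughly 158 F, agitate with an auger, run drying
# fans, and step through the phases Gates names over up to 24 hours.
# The phase boundaries and moisture-loss rate below are illustrative guesses.

TARGET_TEMP_F = 158          # holding temperature reported in the article
MAX_CYCLE_HOURS = 24         # "up to 24 hours"

# Illustrative mapping from remaining moisture fraction to the described phases.
PHASES = [
    (0.70, "liquefied food"),
    (0.45, "peanut butter phase"),
    (0.25, "solid phase"),
    (0.00, "loose fertilizer phase"),
]

def phase_for(moisture: float) -> str:
    """Return the descriptive phase label for a given moisture fraction."""
    for threshold, name in PHASES:
        if moisture >= threshold:
            return name
    return PHASES[-1][1]

def run_cycle(start_moisture: float = 0.85, dry_rate_per_hour: float = 0.035) -> None:
    """Simulate one cycle hour by hour until dry or the 24-hour cap is hit."""
    moisture = start_moisture
    for hour in range(1, MAX_CYCLE_HOURS + 1):
        moisture = max(0.0, moisture - dry_rate_per_hour)  # fans + heat dry the mix
        print(f"hour {hour:2d}: {TARGET_TEMP_F} F, auger on, "
              f"moisture {moisture:.2f} -> {phase_for(moisture)}")
        if moisture == 0.0:
            break

if __name__ == "__main__":
    run_cycle()
```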

The problem is what to do next, says Jean Bonhotal, director of the Cornell Waste Management Institute in Soil and Crop Sciences. “People have been working on these processes for a long time, and I love the idea of the process,” she says. “But [the resulting material] does have to be managed further.”  

What comes out of the sleek device isn’t mature fertilizer, she explains. If you go out and sprinkle a hefty layer on your garden, not only would it start to smell as microbes get to work digesting the food, but it could also have negative effects on the health of your plants since the carbon and nitrogen are not yet in a form that greenery can gobble.

Gates agrees, but says that their tests suggest that a light sprinkling (with emphasis on light) of the material over plants can actually have positive effects after two to three weeks, as microbes mull over the rich material and release the nutrients into the soil.

“We’re letting nature do [the composting] after the fact,” says Gates. “But what we’ve done is we’ve sped up the process of decomposition to a point where nature can take what we give it and do it much quicker.”

But there are still a couple of concerns with this process, Bonhotal points out. First is the volume of material that will be produced. “You’re not adding stuff to your plants 365 days a year,” she says. And the lightness of the sprinkle necessary to prevent stink and plant death means the pre-compost product will build up over time.

Though the material can be stored in airtight containers for a year or more, says Gates, this is one of the kinks he and his team are still working out. One possibility, he says, is to use the material as a starter or fodder for a backyard or community compost pile.

The second concern is the machine’s energy requirements, says Bonhotal. Heating the material and running the auger and fans both consume energy. But, according to Gates, the company has done everything it can to make production and operation environmentally sound—right down to the limited use of Styrofoam in the packaging.

“From the beginning we wanted to make sure it was very clear that there's a distinct advantage of doing this process over sending the waste to the landfill,” he says. But without lifecycle analyses of the device, it is difficult to tell if the product breaks even with emissions.

Though Zera Food Recycler still has some kinks to iron out, this sleek, $1,199 device could help in the bid to limit landfill-bound waste. So if you aren’t into carrot-top soup or misshapen-beet ketchup, Zera is an option. Just make sure you are ready to roll up your sleeves and tend to all that pulverized food.

The Roots of Computer Code Lie in Telegraph Code

Smithsonian Magazine

Famously, the first long-distance message Samuel Morse sent on the telegraph was “What hath God wrought?” When it comes to digital progress, it’s a question that’s still being answered.

The telegraph was a revolutionary means of communication in itself, but it’s also connected to the development of modern computer languages. Like any new technology, its creation had a ripple effect, provoking a wide range of other innovations. Engineer Jean-Maurice-Émile Baudot, born on this day in 1845, was an important telegraph innovator whose telegraph system helped lay the groundwork for modern computers.

Baudot had been a telegraph operator since 1869, write Fritz E. Froehlich and Allen Kent in The Froehlich/Kent Encyclopedia of Telecommunications. When he was training, he learned how to operate Samuel Morse’s original telegraph, but he also learned to use other telegraph models. He practiced on the Hughes telegraph, an early printing telegraph that had a keyboard like a piano, and the Meyer telegraph, which was the first to use paper tape with holes in it to record telegraph signals, according to author Anton A. Huurdeman. Baudot built on these innovations, adding his own touch.  

Baudot Code

Baudot Code's biggest advantage over Morse Code, first used in the 1840s, and other earlier codes was its speed. Earlier systems sent each character as signals of varying length, separated by short gaps (the “dits” and “dahs” of the Morse system). “Baudot’s code sent characters in a synchronized stream,” writes author Robin Boast, “as each character code was exactly the same length and had exactly the same number of elements.” Although some of the ideas he used had been pioneered before, Baudot was the first to connect them all in a system, Boast writes. He goes on to explain, “most significant for us is that Baudot was the first to recognize the importance of a simple five-bit binary code–a digital code.” Baudot’s fixed-length binary code is a direct predecessor of some of the digital codes used today.
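To see why a fixed-length code speeds things up, consider a toy Python sketch: when every character is exactly five bits long, a receiver can recover the message simply by slicing the synchronized stream every five bits, with no gaps needed between characters. The bit patterns below are arbitrary placeholders chosen for illustration, not Baudot's actual assignments.

```python
# Toy illustration of a fixed-length five-bit character code. The bit patterns
# here are arbitrary placeholders, not Baudot's real alphabet; the point is that
# every character occupies exactly five bits, so the receiver needs no gaps
# between characters to tell where one ends and the next begins.

FIVE_BIT = {
    "A": "00001",
    "B": "00010",
    "D": "00011",
    "O": "00100",
    "T": "00101",
    "U": "00110",
}
DECODE = {bits: letter for letter, bits in FIVE_BIT.items()}

def encode(word: str) -> str:
    """Concatenate the fixed-length codes; no separator symbols are needed."""
    return "".join(FIVE_BIT[ch] for ch in word)

def decode(stream: str) -> str:
    """Recover the characters by slicing the synchronized stream every 5 bits."""
    return "".join(DECODE[stream[i:i + 5]] for i in range(0, len(stream), 5))

if __name__ == "__main__":
    stream = encode("BAUDOT")
    print(stream)          # 30 bits: six characters times five bits each
    print(decode(stream))  # -> BAUDOT
```

Morse, by contrast, needs those inter-character gaps precisely because its characters vary in length.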

ASCII, the most widely accepted code for translating computer information into the words you see on your screen, is based on Baudot code, which itself went through several permutations after Baudot's original innovation. But more importantly, Baudot's code itself "laid the first brick in the road to our digital universe," writes James Draney for Review 31. "Baudot's Printing Telegraph was an encoding system that ran off five-bit binary code. It was not the first binary code, of course, but it was the first to be properly considered digital, and its essence still exists in our computers, tablets and mobiles today."

The baud, a unit of transmission speed used today for modems among other things, is named after Baudot. (Wikimedia Commons)

Printing on paper tape

Having already patented his printing telegraph in France, England and Germany, Baudot secured an American patent for it on August 21, 1888. The inventor wasn’t the first to use a paper-punch system to record telegraph signals, but because Baudot Code and his custom-built telegraph machines were much faster than previous telegraphs and widely embraced, they helped keep the system alive. His printing telegraph was a predecessor to computers because, once the data (codes) were input, it ran without human intervention, presenting the information to the receiver in a readable form: paper tape with coded holes in it.

Baudot’s teletype machine, also called a teletypewriter, used a five-key keyboard, write Froehlich and Kent. “Borrowing from Meyer, Baudot developed a distributor that allowed five instruments to share the same wire,” they write. His prototype was tested in the late 1870s and widely adopted in France: “by 1892,” the pair write, “France had 101 Baudot-printing multiple telegraphs in operation.”

Digital printing using perforated paper was still used in the twentieth century, Boast writes, and it was “one of the first recording media used for electronic computers in the 1940s and ‘50s.” Think punch cards and ticker tape.

J.M.E. Baudot's Printing Telegraph, Patented Aug. 21, 1888 (U.S. Pat. No. 388,244)

Scientists Can Tell What Fish Live Where Based On DNA in the Water

Smithsonian Magazine

Ocean life is largely hidden from view. Monitoring what lives where is costly – typically requiring big boats, big nets, skilled personnel and plenty of time. An emerging technology using what’s called environmental DNA gets around some of those limitations, providing a quick, affordable way to figure out what’s present beneath the water’s surface.

Fish and other animals shed DNA into the water, in the form of cells, secretions or excreta. About 10 years ago, researchers in Europe first demonstrated that small volumes of pond water contained enough free-floating DNA to detect resident animals.

Researchers have subsequently looked for aquatic eDNA in multiple freshwater systems, and more recently in vastly larger and more complex marine environments. While the principle of aquatic eDNA is well-established, we’re just beginning to explore its potential for detecting fish and their abundance in particular marine settings. The technology promises many practical and scientific applications, from helping set sustainable fish quotas and evaluating protections for endangered species to assessing the impacts of offshore wind farms.

Who’s in the Hudson, when?

In our new study, my colleagues and I tested how well aquatic eDNA could detect fish in the Hudson River estuary surrounding New York City. Despite being the most heavily urbanized estuary in North America, the Hudson's water quality has improved dramatically over the past decades, and the estuary has partly recovered its role as essential habitat for many fish species. The improved health of local waters is highlighted by the now regular fall appearance of humpback whales feeding on large schools of Atlantic menhaden at the borders of New York harbor, within sight of the Empire State Building.

Preparing to hurl the collecting bucket into the river. (Mark Stoeckle, CC BY-ND)

Our study is the first to record the spring migration of ocean fish by conducting DNA tests on water samples. We collected one-liter (about a quart) water samples weekly at two city sites from January to July 2016. Because the Manhattan shoreline is armored and elevated, we tossed a bucket on a rope into the water. Wintertime samples had little or no fish eDNA. Beginning in April there was a steady increase in fish detected, with about 10 to 15 species per sample by early summer. The eDNA findings largely matched our existing knowledge of fish movements, hard won from decades of traditional seining surveys.

Our results demonstrate the “Goldilocks” quality of aquatic eDNA – it seems to last just the right amount of time to be useful. If it disappeared too quickly, we wouldn’t be able to detect it. If it lasted for too long, we wouldn’t detect seasonal differences and would likely find DNAs of many freshwater and open ocean species as well as those of local estuary fish. Research suggests DNA decays over hours to days, depending on temperature, currents and so on.

Altogether, we obtained eDNAs matching 42 local marine fish species, including most (80 percent) of the locally abundant or common species. In addition, among the species we detected, abundant or common species turned up more frequently than locally uncommon ones. That the eDNA detections matched traditional observations of locally common fish in terms of abundance is good news for the method – it supports eDNA as an index of fish numbers. We expect we’ll eventually be able to detect all local species – by collecting larger volumes, at additional sites in the estuary and at different depths.

Fish identified via eDNA in one day’s sample from New York City’s East River. (New York State Department of Environmental Conservation: alewife (herring species), striped bass, American eel, mummichog; Massachusetts Department of Fish and Game: black sea bass, bluefish, Atlantic silverside; New Jersey Scuba Diving Association: oyste)

In addition to local marine species, we also found locally rare or absent species in a few samples. Most were fish we eat – Nile tilapia, Atlantic salmon, European sea bass (“branzino”). We speculate these came from wastewater – even though the Hudson is cleaner, sewage contamination persists. If that is how the DNA got into the estuary in this case, then it might be possible to determine if a community is consuming protected species by testing its wastewater. The remaining exotics we found were freshwater species, surprisingly few given the large, daily freshwater inflows into the saltwater estuary from the Hudson watershed.

Filtering the estuary water back in the lab. (Mark Stoeckle, CC BY-ND)

Analyzing the naked DNA

Our protocol uses methods and equipment standard in a molecular biology laboratory, and follows the same procedures used to analyze human microbiomes, for example.

After collection, we run water samples through a small pore size (0.45 micron) filter that traps suspended material, including cells and cell fragments. We extract DNA from the filter, and amplify it using polymerase chain reaction (PCR). PCR is like “xeroxing” a particular DNA sequence, producing enough copies so that it can easily be analyzed.

We targeted mitochondrial DNA – the genetic material within the mitochondria, the organelle that generates the cell’s energy. Mitochondrial DNA is present in much higher concentrations than nuclear DNA, and so easier to detect. It also has regions that are the same in all vertebrates, which makes it easier for us to amplify multiple species.

eDNA and other debris left on the filter after the estuary water passed through. (Mark Stoeckle, CC BY-ND)

We tagged each amplified sample, pooled the samples and sent them for next-generation sequencing. Rockefeller University scientist and co-author Zachary Charlop-Powers created the bioinformatic pipeline that assesses sequence quality and generates a list of the unique sequences and “read numbers” in each sample. That’s how many times we detected each unique sequence.
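As a rough illustration of that tallying step, the short Python sketch below collapses a file of sequencing reads into unique sequences and counts each one's "read number." It is a simplification, not the authors' actual pipeline; the file name and the plain FASTA format are assumptions, and real workflows also trim primers, filter by quality and handle sequencing errors.

```python
# Simplified sketch of the tallying step described above: collapse a batch of
# reads into unique sequences and count how many times each appears (its "read
# number"). Not the authors' pipeline; a toy under the stated assumptions.

from collections import Counter

def read_fasta(path: str):
    """Yield sequences from a simple FASTA file (header lines start with '>')."""
    seq_parts = []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if seq_parts:
                    yield "".join(seq_parts)
                    seq_parts = []
            elif line:
                seq_parts.append(line.upper())
        if seq_parts:
            yield "".join(seq_parts)

def read_numbers(path: str) -> Counter:
    """Return read counts per unique sequence."""
    return Counter(read_fasta(path))

if __name__ == "__main__":
    # 'sample.fasta' is a placeholder filename for one water sample's reads.
    for seq, count in read_numbers("sample.fasta").most_common(10):
        print(f"{count:6d} reads  {seq[:40]}...")
```

Each unique sequence in that tally is then what gets compared against a reference database to name the species, as described next.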

To identify species, each unique sequence is compared to those in the public database GenBank. Our results are consistent with read number being proportional to fish numbers, but more work is needed on the precise relationship of eDNA and fish abundance. For example, some fish may shed more DNA than others. The effects of fish mortality, water temperature, eggs and larval fish versus adult forms could also be at play.

Just like in television crime shows, eDNA identification relies on a comprehensive and accurate database. In a pilot study, we identified local species that were missing from the GenBank database, or had incomplete or mismatched sequences. To improve identifications, we sequenced 31 specimens representing 18 species from scientific collections at Monmouth University, and from bait stores and fish markets. This work was largely done by student researcher and co-author Lyubov Soboleva, a senior at John Bowne High School in New York City. We deposited these new sequences in GenBank, boosting the database’s coverage to about 80 percent of our local species.

Study’s collection sites in Manhattan. (Mark Stoeckle, CC BY-ND)

We focused on fish and other vertebrates. Other research groups have applied an aquatic eDNA approach to invertebrates. In principle, the technique could assess the diversity of all animal, plant and microbial life in a particular habitat. In addition to detecting aquatic animals, eDNA reflects terrestrial animals in nearby watersheds. In our study, the commonest wild animal detected in New York City waters was the brown rat, a common urban denizen.

Future studies might employ autonomous vehicles to routinely sample remote and deep sites, helping us to better understand and manage the diversity of ocean life.

Kyrgyzstan’s Otherworldly Cities of the Dead

Smithsonian Magazine

In the summer of 2006, Margaret Morton found herself in Kyrgyzstan, accompanying a friend who was conducting grant research on Kyrgyz culture for a theatrical performance. One day, as they were traveling by car through lonely, mountainous terrain, she noticed what appeared to be a city in the distance.

Approaching the structure, however, she realized that it was desolate and overgrown with weeds. This was not a city of the living, but a city of the dead—a Kyrgyz ancestral cemetery. Captivated by the site, and the others that she saw on her trip, Morton extended her stay. While her attraction was aesthetic at the start, she soon learned that the cemeteries were veritable fossils of Kyrgyzstan's multicultural past and returned for two more summers to study and document the sites. Morton’s new book Cities of the Dead: The Ancestral Cemeteries of Kyrgyzstan exhibits both the beauty and structural uniqueness of these burial grounds. I spoke with Morton, who is a professor of photography at The Cooper Union, about the project.

When you returned to Kyrgyzstan after your first trip, what were you looking to find?

I wanted to see in the different regions of Kyrgyzstan how [the cemeteries] varied, which they did dramatically.

How so?

On the Uzbekistan-Tajikistan border, they’re quite different. The images in the book with the animal horns and the yak tails—those were on the remote border regions. The one with the deer horns was actually on the north shore of Lake Issyk Kul—that area was originally settled by a tribe called the deer people.

The very grand cemeteries that I saw initially were on the south shore of Lake Issyk Kul. If they’re high up in the mountains, they’re very different. I had this theory that if the mountains are rounded and soft, the monuments have more rounded tops. I couldn’t help thinking it was just an innate response. That is often the case where people who build their own buildings are responding very directly to the landscape, because it’s a larger part of their lives than it is for us who live in cities.

And how did you go about finding the burial sites?

That proved more difficult than I had thought because of the roads. Kyrgyzstan is [mostly] mountains, so there aren’t a lot of roads to get to places, and there aren’t a lot of paved roads—many haven’t been repaired since Soviet times—and there are a lot of mountain roads with hairpin turns. So I realized it was going to take two more summers to do what I wanted to do and to visit every region.

What elements or combination of elements in these cemeteries did you find most striking?

Certainly the fact that they looked like cities and that they were in this dramatic landscape. I was initially really more compelled by that response and not thinking about it as much as a burial tradition. As I learned more and more about it … the fascinating aspect was the fact that you could have nomadic references and Islamic references and Soviet references—all this could coexist in the cemetery architecture, and nobody had ever tried to change that or destroy that. That was really fascinating to me because, during the Soviet era, a lot of the important mosques were destroyed in Kyrgyzstan. But the cemeteries were never touched.

Do you think there is anything quite like this?

It seems that it’s quite unique. I did speak to artists and art historians from Kazakhstan and Tajikistan. I haven’t been to those countries, but I know a lot of people that either live there or have traveled there. They say that sometimes the cemeteries aren’t as elaborate, which is ironic because those countries do have more elaborate architecture than Kyrgyzstan. The metal structures that replicate the yurt—they said that it is unique to Kyrgyzstan. Elmira Kochumkulova, who wrote the book's introduction, had seen yak tails right on the Kyrgyz border in Tajikistan, but then she reminded me that those borders were Soviet-made borders.

Is anyone working to preserve the cemeteries?

The Kyrgyz don’t preserve them. They think it’s fine that they return to the earth. A lot of [monuments] are made just from dried clay with a thin stucco, a thin clay coating, over them, and you can see some of them look very soft and rounded now. They wouldn’t have been when they were built; they would have had more pointed tops.

Your past four books have focused on environments of the homeless in New York. Did those projects inform this one in any way?

Absolutely. The four previous projects, even though they were centered in Manhattan and about homeless communities, were about the housing that homeless people made for themselves. [It's] this idea of people making their housing—in this case it’s housing their dead, and it’s a dramatic landscape that I was being exposed to for the first time ... what attracted me to it was the same.

Was there a reason why you chose to publish these photos in black and white?

The first summer I was photographing in black and white for my own projects. Then the second summer, I did film and then also color digital because I knew the country so much better. The color is just this pale, brown clay, usually—it’s very monochromatic. The architectural forms definitely come through better in black and white.

Do you have any projects coming up?

I’m photographing an abandoned space in Manhattan again. What will become of it I don’t know. I wanted to stay very focused on this book. I put so much energy into the project—I don’t want to let it go now that it’s finding its life in the world.

Gene Editing in Human Embryos Ignites Controversy

Smithsonian Magazine

Scientists in China recently reported that they have edited the genetic code of human embryos. The work relied on new technology that has been heralded as one of the most exciting developments in genetics in decades. But for some researchers, these experiments stepped over an ethical line. Before the Chinese scientists had even published their work, rumors of their research had prompted an outcry urging a moratorium on such experiments. A letter last month in the journal Nature stated:

In our view, genome editing in human embryos using current technologies could have unpredictable effects on future generations. This makes it dangerous and ethically unacceptable. Such research could be exploited for non-therapeutic modifications. We are concerned that a public outcry about such an ethical breach could hinder a promising area of therapeutic development, namely making genetic changes that cannot be inherited.

At this early stage, scientists should agree not to modify the DNA of human reproductive cells. Should a truly compelling case ever arise for the therapeutic benefit of germline modification, we encourage an open discussion around the appropriate course of action.

The research team, led by researcher Junjiu Huang at Sun Yat-sen University in Guangzhou, used a technique called CRISPR/Cas9 to try to edit the gene that causes a potentially fatal blood disorder in human embryos, report David Cyranoski and Sara Reardon, who broke the story at Nature News. The CRISPR system works like cut and paste at the DNA level. Using the system, scientists can snip out targeted spots of genetic code and insert new sequences. The tool can turn disease-causing genes off or repair mutations with working copies of genes, as the Chinese team attempted to do. Already the tool has been used to engineer lab animals, such as monkeys, with specific gene changes and to tweak adult human cells.
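As a loose analogy for that cut-and-paste idea, the toy Python sketch below finds a target stretch in a DNA string, cuts it out and pastes in a replacement. The sequences and names are hypothetical, and real CRISPR/Cas9 editing depends on a guide RNA, a PAM site and the cell's own repair machinery, none of which is modeled here.

```python
# Deliberately oversimplified sketch of the "cut and paste" idea: locate a
# target sequence in a DNA string, remove it, and splice in a repair template.
# This models none of the biology (guide RNA, PAM sites, repair pathways).

def edit(genome: str, target: str, replacement: str) -> str:
    """Replace the first occurrence of `target` with `replacement`, if present."""
    cut_site = genome.find(target)
    if cut_site == -1:
        return genome  # no match found: nothing is edited
    return genome[:cut_site] + replacement + genome[cut_site + len(target):]

if __name__ == "__main__":
    # Hypothetical toy sequences, not real gene sequences.
    genome = "ATGCCGTTAGCAATGACGT"
    mutant_site = "TTAGCA"    # stand-in for a disease-associated stretch
    healthy_copy = "TTCGCA"   # stand-in for the working version
    print(edit(genome, mutant_site, healthy_copy))
```

As the embryo results below show, the hard part is not the string replacement but making the cut land only where intended, in every cell.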

The Chinese researchers targeted the gene responsible for a blood disorder called beta-thalassemia. At National Geographic’s Phenomena blog, Carl Zimmer reports that the researchers tried this technique on 86 embryos. Most of them, 71, survived long enough for observations. The CRISPR system cleaved and spliced genes in 28 embryos. One of the big concerns for gene editing is the possibility that the wrong genes will be cut, and indeed only a small fraction of those 28 were successfully spliced. The others were either partially repaired by the cells’ gene-preservation mechanisms or cleaved in the wrong place entirely. Misplaced mutations can cause other diseases, such as cancer. Even the four spliced embryos weren’t a success: Only some of the cells in the embryos were edited, creating genetic mosaics.

The researchers published their results in the journal Protein & Cell. They write: "Because the edited embryos are genetically mosaic, it would be impossible to predict gene editing outcomes" using the genetic techniques that diagnose IVF embryos before they are implanted in the womb. They add, "our study underscores the challenges facing clinical applications of CRISPR/Cas9."

Reardon and Cyranoski of Nature News also report that Huang and his colleagues plan to continue the work, looking for ways to reduce the number of off-target gene edits, but using adult human cells or animals. However, the reporters write that at least four other groups in China are also working on editing human embryos.

The Guangzhou team tried to allay some of the concerns about the ethics of their work by using only embryos from fertility clinics that had an extra set of chromosomes because an egg had been fertilized by two sperm. Live births would never have come from those embryos, though such zygotes do go through the first stages of development. “We wanted to show our data to the world so people know what really happened with this model, rather than just talking about what would happen without data,” Huang told Cyranoski and Reardon.

But still, the response in the research community has been immediate.

"No researcher should have the moral warrant to flout the globally widespread policy agreement against modifying the human germline," wrote Marcy Darnovsky of the Center for Genetics and Society, a watchdog group, in an email to Rob Stein writing for NPR’s "Shots" blog. "This paper demonstrates the enormous safety risks that any such attempt would entail, and underlines the urgency of working to forestall other such efforts. The social dangers of creating genetically modified human beings cannot be overstated."

Whether further work proceeds or is halted, the study will likely be recognized as pivotal in the history of medicine. Zimmer provides some historical context of changing the genes of humans in his blog post and writes:

Just because this experiment came out poorly doesn’t mean that future experiments will. There’s nothing in this study that’s a conceptual deal-breaker for CRISPR. It’s worth recalling the early days of cloning research. Cloned embryos often failed to develop, and animals that were born successfully often ended up with serious health problems. Cloning is much better now, and it’s even getting to be a business in the world of livestock and pets. We still don’t clone people, though–not because we can’t, but because we choose not to. We may need to make the same choice about editing embryos before too long.

George Daley, a stem cell researcher at Harvard Medical School, told Cyranoski and Reardon at Nature News that the study was "a landmark, as well as a cautionary tale. Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes."

At NPR, Daley added, "We should brace for a wave of these papers, and I worry that if one is published with a more positive spin, it might prompt some IVF clinics to start practicing it, which in my opinion would be grossly premature and dangerous."

The Top Ten Weirdest Dinosaur Extinction Ideas

Smithsonian Magazine

What happened to the dinosaurs? For over a century, paleontologists have been puzzling over the fate of our favorite prehistoric oddities. The non-avian dinosaurs dominated the planet for an inconceivably long period of time, and their evolutionary success only heightens the mystery of their downfall.

Our understanding of dinosaurian demise has changed a great deal since 19th-century naturalists started studying the long lost animals. Today, paleontologists have discerned that most dinosaur lineages disappeared by about 66 million years ago after intense volcanic activity, climate change and a catastrophic asteroid impact triggered one of the worst mass extinctions in our planet’s history. Many forms of life disappeared. Among our favorite prehistoric celebrities, only the avian dinosaurs—birds—were left to carry on the legacy of Velociraptor and kin.

But before our current view came together, the extinction of the non-avian dinosaurs was an open-ended question. Here is a list of some of the stranger—now discarded—theories explaining the loss of our dear departed dinosaurs:

Egg-eating

George Wieland, an early 20th-century paleontologist, argued that the dinosaurs ate themselves into extinction. The ancestors of the fearsome Tyrannosaurus, he said, probably “got their first impulse toward gigantism on a diet of sauropod eggs.” Even the most caring of dinosaur mothers couldn’t stop the near-constant depredations of egg-hungry carnivores. Wieland conceded that monitor lizards and snakes may have consumed their share of embryonic dinosaurs, too, but the Yale researcher ultimately concluded that, “The potent feeders on dinosaur eggs and young must be sought for amongst the dinosaurians themselves.” In the years since Wieland’s 1925 hypothesis, fossil evidence has confirmed that dinosaurs, snakes and even mammals preyed on dinosaur eggs and infants, but never at a rate that could have caused mass extinction.

Pathological Shells

Invertebrate fossil expert H.K. Erben and colleagues thought that eggs led to the dinosaurian downfall in a different way. In a 1979 paper, the researchers reported that fossilized dinosaur eggshell fragments found in southern France and the Spanish Pyrenees showed two sorts of disorders—some had multiple shell layers, while others were pathologically thin. Either situation was lethal. Multi-layered eggs could have suffocated developing dinosaurs, while thin eggs easily broke or dehydrated the embryos. Some sort of climate change spurred hormonal changes in dinosaur mothers, the researchers suggested. But this explanation didn’t fit for other dinosaurs around the globe at the time. The deformed eggshells seem to have been a local phenomenon.

Overactive Glands

Baron Franz Nopcsa von Felső-Szilvás, a Hungarian-born aristocrat and a spy, was one of the most peculiar characters in the field of paleontology—and his extinction theories were just as unusual. Early in the 20th century, Nopcsa suggested that a shortage of food, a “low power of resistance” and even diminished sex drive contributed to the demise of the dinosaurs. His favorite theory, though, was death by overactive glands. He believed that dinosaurs grew to their tremendous size thanks to secretions from their pituitary gland. Eventually, he argued, the gland drove the growth of dinosaurs to such excess that the animals became pathologically huge and grotesque. Nopcsa tried to tie human pathologies to the conundrum of dinosaur extinction, but there’s no indication that the pituitary had anything to do with immense dinosaur sizes or their disappearance.

Evolutionary self-destruct

In a slight to some of the most wonderful creatures of all time, “going the way of the dinosaur” means falling into obsolescence by becoming too sluggish, stupid or oversized to survive. For a time, that’s what paleontologists believed happened to the dinosaurs. During the early 1900s—when Darwin’s theory of natural selection was still not entirely accepted within the scientific community—many paleontologists believed that organisms evolved along confined pathways. According to this thoroughly debunked notion, dinosaurs possessed a kind of evolutionary inertia that caused them to keep getting bigger and weirder. Some researchers even proposed that dinosaurs were dumb (compared to mammals) because they invested too much of their internal energies in growing huge and fierce. Yet, as even fossil experts of the time realized, this notion couldn’t explain why some of the biggest and strangest specimens—such as Stegosaurus and Brachiosaurus—thrived throughout the dinosaurs’ reign on Earth.

Too many males 

Within the past decade, infertility specialist Sherman Silber has repeatedly asserted that dinosaurs perished because they couldn’t find mates. Silber has speculated that—much like present-day alligators and crocodiles—changes in external temperature could determine the sex of dinosaur embryos developing in their eggs. In this case, he has argued, climate change caused by volcanic activity and asteroid impact could have skewed the global thermostat so that only one sex was produced. But beyond the fact that we really don’t know whether dinosaur sexes were determined by temperature or genetics alone, the idea doesn’t explain why reptiles that probably did have temperature-determined sexes, such as crocodylians, survived while non-avian dinosaurs died out. Silber’s proposal contradicts itself.

Caterpillars

In a fight, a caterpillar would hardly seem to be a match for a Triceratops. But in a 1962 paper based on his observations of the devastation caterpillars could cause among crops, entomologist Stanley Flanders proposed that the larvae of the first moths and butterflies would have quickly and totally denuded the Cretaceous landscape of vegetation. Herbivorous dinosaurs would have starved, Flanders argued, and predatory dinosaurs would soon be left with nothing to eat but each other. But not only did butterflies and moths coexist with dinosaurs for millions of years, there is no sign of such a disastrous caterpillar spike in the fossil record.

Cataracts

Explanations for dinosaur extinctions often reflected the expertise and perspective of the people who proposed them. No surprise, then, that in 1982 ophthalmologist L.R. Croft suggested that bad eyesight undid the dinosaurs. Since exposure to heat can make cataracts form more quickly, Croft surmised that dinosaurs with weird horns or crests developed these bizarre ornaments to shield their eyes from the relentless Mesozoic sun. In a world warmed by harsh sunlight, though, Croft expected that even these attempts to shade dinosaur eyes failed and that the creatures started to go blind before they hit sexual maturity. Croft’s idea, however, totally fails to explain the mass extinction of species other than the non-avian dinosaurs 66 million years ago.

Supernova

Before the asteroid impact hypothesis gained widespread credibility, in 1971 physicist Wallace Tucker and paleontologist Dale Russell suggested another kind of death from above. Although the researchers lacked any direct evidence for their idea, they proposed that a nearby supernova could have had catastrophic consequences for life at the end of the Cretaceous. The explosion of a neighboring star, Tucker and Russell proposed, would bombard the upper atmosphere with X-rays and other forms of radiation that would quickly alter the climate, causing temperatures on Earth to plummet. No evidence of such a nearby event 66 million years ago has ever been uncovered.

Aliens

A display at Utah State University’s Prehistoric Museum points out that aliens could not have wiped out the dinosaurs, not least of all because, “There is no evidence of aliens or their garbage in the fossil record.” That hasn’t stopped some more imaginative folks from suggesting such sci-fi scenarios (most recently given a nod in the monster mash blockbuster Pacific Rim). Last year, the basic cable program “Ancient Aliens” devoted an entire episode to the idea, borrowing misunderstandings and outright fabrications from creationists to help make their case that extra-terrestrials eliminated the dinosaurs to make room for humanity. Apatosaurus only ever faced down aliens in comic books and movies.


Dinosaur Farts

Much like death by aliens, the idea that dinosaurs farted themselves into extinction was never a scientific hypothesis. The notion was a misconstrued conclusion drawn from some recent dinosaur research. Last year, paleontologist David Wilkinson and collaborators tried to calculate how much gas the long-necked, hefty sauropod dinosaurs could have produced. The researchers speculated that the dinosaurs’ annual output of methane gas would have been enough to influence the global climate, but they said nothing about extinction. After all, a variety of sauropods existed for tens of millions of years without showing any sign of gassing themselves out of existence. Ignoring the actual research by Wilkinson and colleagues, various news sites jumped on the study to suggest that dinosaurs gassed themselves into oblivion. Such sites were only blowing hot air.
