Like a perverse turtle, Rob Greenfield wears his trash on his back: Sandwiched between heavy-duty plastic sheeting is every wrapper, bag, tissue and twisty tie the environmental activist has accumulated over the past few weeks. His unusual garb is part of an attention-grabbing demonstration: Since September 19, Greenfield has been shuffling down the streets of New York City ensconced in his own debris to raise awareness of how much waste the average American produces in a month.
This is not Greenfield’s first sustainability-related stunt. In the past, the 30-year-old has lived off the grid, shunning traditional showers for more than two years to bring attention to water use; he's also gone dumpster diving with a television reporter to highlight urban food waste. In this case, “the focus is waste in general,” says Greenfield, by which he means food waste like orange peels and apple cores as well as manmade waste products. “It’s all the waste that we're sending to a landfill as individuals.”
Right now, Greenfield is creating about 3 lbs of trash per day. That’s significantly less than the average American, who creates about 4.5 lbs of trash per day—or about 130 lbs of trash per month—according to the Environmental Protection Agency. Greenfield attributes the discrepancy to the length of his project: Over a longer period of time, the average person would typically be replacing broken electronics or buying a new couch, which contributes to the 4.5 lb tally.
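The figures above are easy to sanity-check. A minimal sketch (the constants below come from the EPA estimate and Greenfield's own rate as cited in this article; the 30-day month is a simplifying assumption):

```python
# Back-of-envelope check on the per-person trash figures cited above.
# The EPA's ~4.5 lb/day average scales to roughly 135 lb over a 30-day
# month, consistent with the rounded ~130 lb/month figure and with the
# 135 lb capacity the suit was designed to hold.

EPA_LBS_PER_DAY = 4.5          # average American (EPA estimate)
GREENFIELD_LBS_PER_DAY = 3.0   # Greenfield's rate during the project
DAYS_PER_MONTH = 30            # simplifying assumption

epa_monthly = EPA_LBS_PER_DAY * DAYS_PER_MONTH
greenfield_monthly = GREENFIELD_LBS_PER_DAY * DAYS_PER_MONTH

print(f"Average American: {epa_monthly:.0f} lb/month")   # 135 lb
print(f"Greenfield:       {greenfield_monthly:.0f} lb/month")  # 90 lb
```

The gap between 90 and 135 pounds is the "discrepancy" Greenfield attributes to big, infrequent purchases that a one-month snapshot misses.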
All that trash adds up to a sobering reality: In 2013, Americans generated about 254 million tons of trash. The global rate of trash production—which is currently dominated by the U.S., with China following close behind—is on track to triple by 2100. Those striking statistics are what propelled Greenfield to walk the streets covered in his own personal trash, including paper coffee cups, Target bags and McDonald’s wrappers.
“My goal … is to always find ways to get people excited about environmental issues,” he says. “There's so many reasons to feel that utter doom and gloom but I don't feel that's necessarily the best way to get people involved. That's why I try to keep things positive, fun and interesting.”
But Greenfield couldn’t have executed this vision without another key player: Nancy Judd, founder of a sustainable art and fashion company called Recycle Runway, is the creator of the meticulously designed suit Greenfield is wearing. You could call their partnership a match made in trash heaven; Judd, who made her first “trashion” in 1998, has a long history of combining art and recycled products, dating back to an event she co-founded called the Recycle Santa Fe Art Market and Trash Fashion Show.
“We have such a disregard for the materials that pass through our hands, the resources that were used to create them and the pollution that was caused in their creation,” says Judd. “Everything we touch has a story, and the stories get lost so easily in this society where we throw things away without even thinking about it.”

Judd had less than a month to design and create a suit that could hold up to 135 lbs of trash. (Courtesy Nancy Judd)
In August, Greenfield’s video producer, Chris Temple, discovered Judd and her recycled fashion through a fortuitous Google search. Her aesthetic and philosophy merged perfectly with their environmental ideals, so he reached out via email. Judd immediately agreed to be a part of the project. “I was intrigued right away,” she says.
Greenfield describes their collaboration as “kismet,” or fate: Both shared the goal of creating environmental awareness through education. “I don't know what would've happened if I hadn't found Nancy,” he says. “One of the challenges has always been how am I going to hold onto all of this trash. Not only is it bulky, but you have to have something designed that can hold 135 lbs of trash.” While Greenfield admits that there are days he dreads putting on his suit, thanks to Judd’s design, the trash load is fairly balanced.
In fact, trash has played a weighty role throughout Judd’s life. “It actually all started quite unexpectedly at art school, when the administration put in a soda pop machine,” she recalls. “I watched the garbage fill up with cans and asked the school if I could start a recycling program.” She would go on to have a 20-year career in waste, first as the recycling coordinator for the city of Santa Fe, and next as the executive director of the New Mexico Recycling Coalition, where her role was “to get people to think differently about trash and to utilize our recycling program more and create less waste.”
Yet outside of her day job, Judd was a passionate photographer. Her interests in recycled materials and her involvement with local artists came together when she helped launch the Recycle Santa Fe Art Festival, which has since become one of Santa Fe’s renowned art events. “My interest in conservation and my life as an artist collided in that moment and I created a piece of recycled fashion to promote our trash fashion show,” she says.
Several years—and countless trash couture creations—later, Judd decided it was time to leave her day job and fully embrace art for a living. In 2007, she founded Recycle Runway, which brings in revenue through sculpture commissions, exhibit sponsorships, speaking engagements and workshops. With her new business, Judd began to focus less on entertainment and more on education, moving from fashion shows to high-traffic public exhibitions.
Her choice of where to display her art, for instance, is intentional. She usually hosts exhibits not in high-class galleries, but in airports. “It [is] a perfect place where my work could reach a high number of people who weren't necessarily environmentally-minded,” she explains. Many of her pieces are commissioned by corporations like Delta Air Lines, Toyota, Target and Coca-Cola.

A match made in trash heaven. (Recycle Runway)
Judd thinks of herself as more of a sculptor than a fashion designer. While her pieces are wearable, the intention behind them is more educational than functional, she says. One of her creations, known as the “Obamanos Coat”—a purple-and-silver winter coat she created using door hangers from the 2008 Obama presidential campaign—is currently on display at the National Museum of African American History and Culture and is part of the Smithsonian Institution’s permanent collection.
Nearly all of Judd’s creations are made from trash she has gathered herself, either by dumpster diving or through various collections or donations. If it’s a work commissioned by a corporation, the trash often comes from the company itself. A typical piece can take her anywhere from 100 to 650 hours to execute, depending on the type of material used and how complex the design is. But for Greenfield's trash suit she was crunched for time: she had only about 25 days to design, source and construct the piece.
As a result, some of the suit’s components ended up coming from second-hand stores rather than directly from the garbage can. “If I'd had more time I could've sourced the strapping as well as the base coat and pants,” says Judd, noting that the strapping came from used backpacks, while she found the coat and pants from an army surplus store. “The only reused material is the clear plastic.”
The final product ended up taking her 125 hours from start to finish. “I didn't realize how big of a job this would be, and neither did she,” says Greenfield, who’s nearing the end of his demonstration. Fortunately, all that time and care won’t be going to waste (so to speak): Greenfield plans to travel across the country with the suit in 2017, using it as a dramatic visual aid that will drive home his point of how much trash each person makes. In 2018, Judd will exhibit the suit along with 19 other pieces at the Atlanta International Airport.
As of Thursday, Greenfield weighed in at 68 lbs of trash.
Every bartender knows the way to clear the room at the end of a long night is to turn up the volume on a less-inviting track. “My go-tos are Ween’s 'Mourning Glory' and Slayer’s 'Angel of Death,'” says Prashant Patel, a veteran bartender at the Eighth Street Taproom, a popular watering hole in the college town of Lawrence, Kansas. “Those high-pitched guitar solos jar people out of their seats and out the door.”
Science backs this up. Sound alters both our physical and mental state—from our breathing and heart rate to perceptions of smell and taste. What we hear while chewing, slurping or even twisting open a bottle builds our expectations about what we consume. Sound “influences everything,” wrote University of Oxford researchers Charles Spence and Maya Shankar in the Journal of Sensory Studies in 2010, “from what we choose to eat to the total amount and the rate at which we eat it.” Sounds can make chocolate and coffee seem sweeter, airplane food more savory and stale chips fresher. But when it comes to alcohol, the impacts of sound aren't always so innocuous.
New research on how soundscapes affect our perception of beer taste and alcohol content shows that sounds can change our perceptions of the alcoholic strength of beers—and influence the rate at which we consume them. For researchers, the finding was a surprise: a study recently published in the journal Food Quality and Preference was originally designed to explore the ways in which specific soundtracks changed perceptions of sweetness, bitterness and sourness in beers (you can listen to them and do your own experimenting here). But the researchers found that sound affects more than just taste.
“When we developed the study, we weren’t aiming to explore the influence on alcohol strength,” explains lead researcher Felipe Carvalho of Vrije Universiteit Brussel. “We considered these findings quite curious.” To test their hypothesis, researchers served identical beers to 340 participants while playing two different taste-inducing soundtracks. Not only did the soundtracks change perceptions of taste, they found, but they also by extension influenced perceptions of alcoholic strength.
The team used Belgian beers because of their “higher perceived quality and range of flavor experiences.” The perceived alcohol content of the tripel and two Belgian pale ales were positively correlated with both sour and bitter tastes, and negatively correlated with sweet tastes. In other words, the beers that were perceived to be sour and/or bitter were also perceived to be more alcoholic than their sweet counterparts—even if they didn’t actually contain more alcohol.
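The correlations described above can be illustrated with a small sketch. The ratings below are entirely hypothetical (the study's actual data and scales are not reproduced here); the point is only to show what a positive and a negative Pearson correlation between taste ratings and perceived strength look like:

```python
import math

# Hypothetical 1-9 ratings from five tasters, for illustration only:
bitterness        = [7, 6, 8, 5, 7]
sweetness         = [2, 4, 1, 5, 3]
perceived_alcohol = [8, 6, 9, 5, 7]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A coefficient near +1 means the ratings rise together (bitter beers
# rated stronger); near -1 means they move in opposite directions
# (sweeter beers rated weaker), as the study reports.
print(f"bitterness vs perceived strength: {pearson(bitterness, perceived_alcohol):+.2f}")
print(f"sweetness  vs perceived strength: {pearson(sweetness, perceived_alcohol):+.2f}")
```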
“What we learned is that people rely on dominant attributes to rate the strength of beer,” Carvalho said. “One possible explanation is that people are generally poor at estimating alcohol content of beers by means of taste cues. Therefore, high-impact flavor (such as hoppiness/bitterness in the case of beer) might have been used as proxies for alcohol content,” he and his coauthors write in the study.
These findings build upon a 2011 study led by Lorenzo Stafford and social scientists at the University of Portsmouth in the U.K. on the effects of noise and distraction on alcohol perception. “We knew that loud music in bars leads to faster and greater alcohol consumption," says Stafford, citing a 2004 study, “but we wanted to find out the impacts of sound interference.”
That research team mixed vodka with cranberry juice, orange juice and tonic water and served it to 80 university students under four sets of conditions: in silence, with music, with a news segment they were asked to explain (known as a “shadow task”), and while they listened to both music and the news story. They found that perceptions of sweetness in alcohol were significantly higher when participants listened to music compared to the other conditions, and hypothesized that these heightened perceptions of sweetness led to higher consumption because of humans’ “innate preference for sweet foods.”
This might explain that crazy night of cocktail-fueled clubbing. “There can be a potential for overconsumption when drinks are too sweet or the music is quite fast,” Stafford explains, “because the human brain is wired to seek pleasure.”
Sound is an experience that happens in the brain. It starts as movements in the world around us—fluctuations in the density of air molecules known as sound waves. These waves travel from the outer world toward our ear where they pass into the ear canal, funnel through the middle ear and pool in the cochlea. In the inner chamber, rows of microscopic hair cells are bathed in a potassium-rich fluid that helps transform vibrations into the nerve impulses that shoot up the auditory nerve to the brain. There, they finally become what we interpret as sound.
But “we” should be singular, because hearing—like smell and taste—manifests through responses that are specific to each and every one of us. This specificity makes some people more vulnerable to alcohol than others, and can change how sound affects their drinking habits. “Alcoholism and other addictions are chronic diseases of the brain, not an issue of willpower,” says Marvin Ventrell, executive director of the National Association of Addiction Treatment Providers. “The choice mechanisms that enable a healthy brain are not operational for someone who suffers from addiction.”
In light of growing research on how music and other sounds impact alcohol consumption, Ventrell adds: “It doesn’t come as a surprise to me that we can correlate, and even see causation, between sound and alcohol consumption. Environments such as bars and clubs are created to induce those addictive behaviors, and music is a piece of that—those bass, throbbing tones that are the soundtrack of nightclubs.”
Ventrell isn’t saying music shouldn’t be enjoyed and appreciated. “It’s not a bad thing,” he stresses. “The last thing I would want to do is discourage people from listening. But I would suggest that people steer clear of any music that might trigger addictive behaviors.”
Because sounds can influence a wide range of behaviors, researchers are looking into other ways they can be used to affect decision-making processes. “Now that we have these results, we want to customize sounds based on this information,” says Carvalho. “Imagine that sound could eventually allow you to enjoy a beer with low levels of alcohol, without losing the pleasure of perceiving such beer as a strong-flavored one. Belgians, for example, are used to drinking beers with lots of body and alcoholic strength. Perhaps sounds would allow them to drink less strong beers, without losing the quality of their experience.”
The potential, Carvalho adds, is “not just with music but all kinds of soundscapes, such as the sound of nature. We want to see how they can also trigger decision-making processes. Imagine if they could help you choose healthier types of food.” Or, different ways to drink.
Members of the Homo genus have been making stone tools for at least 2.6 million years, a new study published in the Proceedings of the National Academy of Sciences suggests. The findings, based on the discovery of a collection of sharp-edged stone artifacts at the Bokol Dora 1 site in Ethiopia’s Afar Basin, push the origins of early human tool-making back some 10,000 years earlier than previously believed. Additionally, the research suggests that multiple groups of prehistoric humans invented stone tools on separate occasions, adapting increasingly complex techniques in order to best extract resources from their environment.
Although 3.3 million-year-old stone instruments known as "Lomekwian" tools predate the newly described trove, these were likely made by members of early hominin groups such as Australopithecus afarensis rather than members of the Homo genus. Until now, the oldest known Homo tools—dubbed “Oldowan” in honor of the Olduvai Gorge in Tanzania where the first examples of such artifacts were found—dated to between 2.55 and 2.58 million years ago. Excavated in Gona, Ethiopia, these sharpened stones are technologically distinct from the decidedly more rudimentary Lomekwian tools, which were first catalogued by researchers conducting fieldwork in West Turkana, Kenya, in 2015.
The Bokol Dora trove, also known as the Ledi-Geraru collection, consists of 327 stone tools likely crafted by striking two rocks together to create sharp edges capable of carving up animals, as Phoebe Weston reports for the Independent. The ancient artifacts were found three miles away from the site where the oldest known Homo fossil, a 2.8 million-year-old jawbone, was unearthed in 2013, pointing toward the tools’ connection with early members of the Homo genus rather than ape-like hominins belonging to the Australopithecus genus.
“This is the first time we see people chipping off bits of stone to make tools with an end in mind,” study co-author Kaye Reed, an anthropologist at Arizona State University, tells Weston. “They only took two or three flakes off, and some you can tell weren’t taken off quite right. The latest tools seem slightly different in the way they’re made from other examples.”
Compared with the Gona tools and other Oldowan artifacts, the latest finds are actually rather crude. The instruments have “significantly lower numbers of actual pieces chipped off a cobble than we see in any other assemblage later on,” lead author David Braun of George Washington University explains to New Scientist’s Michael Marshall, adding that it’s possible the humans making them were less skilled than their later counterparts or simply didn’t have a need for extremely sharp tools. Still, the Ledi-Geraru artifacts are distinct enough from the older Lomekwian tools to warrant their classification as Oldowan.
The 2.6 million-year-old implements “have nothing to do whatsoever with what we see later on,” Braun tells Marshall. “It’s possible there are multiple independent inventions of stone as a tool.”

The 2.6 million-year-old stone tools are technologically distinct from more primitive 3.3 million-year-old tools likely used by members of the Australopithecus genus. (David R. Braun)
According to Cosmos’ Dyani Lewis, Lomekwian tools are roughly on par with the primitive instruments fashioned by modern primates such as capuchin monkeys. Oldowan tools, on the other hand, reveal a basic understanding of what Braun calls the “physics of where to strike something, and how hard to hit it, and what angles to select.”
“Something changed by 2.6 million years ago, and our ancestors became more accurate and skilled at striking the edge of stones to make tools,” study co-author Will Archer of the Max Planck Institute for Evolutionary Anthropology and the University of Cape Town notes in a press release. “The artifacts at BD 1 capture this shift.”
Given the fact that the Ledi-Geraru tools were found alongside the bones of animals, including gazelles and giraffes, the team argues that early humans’ shift toward skilled stone tool-making coincided with a rise in scavenging opportunities. As Science News’ Bruce Bower points out, Homo individuals inhabited open grassland expanses, whereas their earlier Australopithecus ancestors had to contend with dense tree coverage that limited hunting prospects.
Interestingly, the Independent’s Weston writes, the shift from Lomekwian to Oldowan tools appears to be associated with a change in early humans’ teeth. In the statement, Archer explains that processing food with the help of stone tools led to a reduction in the size of our ancestors’ teeth, offering a striking example of how “our technology and biology were intimately intertwined even as early as 2.6 million years ago.”
To date the Ledi-Geraru trove—likely dropped by early humans at the edge of a body of water and subsequently buried for millions of years—researchers drew on volcanic ash found several feet below the excavation site, as well as the magnetic signature of various sediment samples.
But as Bower notes, some scientists have expressed skepticism regarding these dating methods. Paleontologist Manuel Domínguez-Rodrigo of Madrid’s Complutense University says a detailed analysis of sediment formation is needed to verify the artifacts’ age, and Yonatan Sahle, an archeologist at Germany’s University of Tübingen, calls it “simply unwarranted” to deem the tools the oldest known Oldowan specimens without conducting further testing.
For now, Braun says, the team must focus on finding additional evidence of stone tools made between 2.6 and 3.3 million years ago. He concludes, “If our hypothesis is correct then we would expect to find some type of continuity in artefact form after 2.6 million years ago, but not prior to this time period. We need to find more sites.”
Over the past 15 years, a strange thing has happened. On one hand, carbon dioxide concentrations have kept on shooting up thanks to humans burning fossil fuels—in May, we passed 400 parts per million for the first time in human history.
On the other hand, despite certain regions experiencing drastically warmer weather, global average temperatures have stopped increasing. Climate change deniers have seized upon this fact to argue that, contrary to the conclusions reached by major science academies (PDF) around the world, greenhouse gas emissions do not cause global warming.
As it turns out, the truth is much grimmer. A pair of scientists from Scripps Institution of Oceanography have determined that the underlying process of global warming has merely been masked by natural decade-scale variations in the temperature of Pacific Ocean surface waters, related to the El Niño/La Niña cycle. Once that’s finished, our planet’s warming will march onward as usual.
Climate scientists have speculated about the possibility that ENSO (the El Niño-Southern Oscillation, the proper term for the cycle) was behind the apparent hiatus in warming for some time, but the scientists behind the new study—Yu Kosaka and Shang-Ping Xie—are the first to take a quantitative look at the role of Pacific surface temperatures in pausing global warming as a whole. Their paper, published today in Nature, uses climate models to show that the abnormally cool surface waters observed over the Pacific since 1998 can account for the lack of recent warming entirely.
Why has the Pacific been abnormally cool for the past 15 years? Naturally, as part of ENSO, a large swath of the ocean off the western coast of South America becomes notably warmer some years (called El Niño events) and cooler in others (La Niña events). Scientists still don’t fully understand why this occurs, but they do know that the warmer years are related to the formation of high air pressures over the Indian Ocean and Australia, and lower pressures over the eastern part of the Pacific.
Because winds move from areas of high pressure to low pressure, this causes the region’s normal trade winds to reverse in direction and move from west to east. As they move, they bring warm water with them, causing the El Niño events; roughly the reverse of this process happens in other years, bringing about La Niña. As it happens, colder surface temperatures in the Pacific—either official La Niña events or abnormally cool years that don’t quite qualify for that designation—have outweighed warm years since 1998.
That, say Kosaka and Xie, is the reason for the surprising lack of increase in global average temperatures. To come to this conclusion, they developed a climate model that, along with factors like the concentration of greenhouse gases over time and natural variations in the solar cycle, specifically takes the ENSO-related cycle of Pacific surface temperatures into account.
Typically, climate models mainly use radiative forcing—the difference between the amount of energy absorbed by the planet and the amount sent back out to space, which is affected by greenhouse gas emissions—as a data input, but they found that when their model did so, it predicted that global average temperatures would increase much more over the past 15 years than they actually have. However, when the abnormally cool waters present in the eastern Pacific were taken into account, the temperatures predicted by the model matched up with observed temperatures nicely.
In models, the presence of these cooler waters over a huge area (a region within the Pacific that makes up about 8.2% of the Earth’s surface) serves to absorb heat from the atmosphere and thus slow down the underlying warming process. If the phenomenon is representative of reality, the team’s calculations show that it has caused the planet’s overall average temperature to dip by about 0.27°F over the past decade, combating the effects of rising carbon dioxide emissions and causing the apparent pause in warming.
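To put those figures in perspective, a quick sketch of the scale involved (Earth's total surface area of roughly 510 million km² is a well-known value used here as an assumption; it does not come from the study itself):

```python
# Rough scale of the cool Pacific region cited above: 8.2% of Earth's
# total surface area of ~510 million km^2.
EARTH_SURFACE_KM2 = 510e6   # assumption: standard figure for Earth
cool_patch_km2 = 0.082 * EARTH_SURFACE_KM2
print(f"Cool Pacific patch: ~{cool_patch_km2 / 1e6:.0f} million km^2")  # ~42

# The reported ~0.27 F dip in global average temperature, in Celsius:
dip_f = 0.27
dip_c = dip_f * 5 / 9
print(f"Temperature dip: ~{dip_c:.2f} C")  # ~0.15 C
```

A patch of ocean spanning roughly 42 million square kilometers shifting the global average by about 0.15 °C gives a sense of how much heat that region can soak up.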
This isn’t the first localized climate-related event to have effects on the progression of climate change as a whole. Last week, other researchers determined that in 2010 and 2011, massive floods in Australia slowed down the global rise in sea level that would have been expected from observed rates of glacier melting and the thermal expansion of sea water. In many cases, it seems, the subtle and complex dynamics of the planet’s climate systems can camouflage the background trend of warming, caused by human activity.
But that trend is continuing regardless, and so the most obvious impact of this new finding is a disconcerting one: the Pacific will eventually return to normal temperatures, and as a result, global warming will continue. The scientists don’t know exactly when this will happen, but records indicate that the Pacific goes through this longer-term cycle every decade or so, meaning that the era of an abnormally cool Pacific will probably soon be over.
Perhaps most distressing, the study implies that the extreme warming experienced in recent years in some areas—including much of the U.S.—is actually less warming than would be expected given the amount of carbon dioxide we’ve released. Other regions that haven’t seen much warming yet, meanwhile, are likely in line for some higher temperatures soon.
A trio of tiny rat statuettes stands sentinel in the center of Gregory Glass’s desk. The shelves above are stuffed with rat necropsy records and block-by-block population analyses. Huge, humming freezers in the lab across the hall are chockfull of rodent odds and ends.
Now Glass, a professor at the Johns Hopkins Bloomberg School of Public Health, leads me out of his building and into the streets of Baltimore for a bit of impromptu fieldwork. He asks that I leave my jewelry and purse behind; after all these years of tramping the alleys in the rougher parts of town, the disease ecologist still gets nervous around sunset. Yet mostly he enjoys observing the “urban ecosystem,” which, he says, is just as worthy of study as wilder areas, and maybe even more so: as savannas and rainforests shrink, cities grow, becoming a dominant habitat.
“This is what the natural environment looks like for most people,” Glass says, as we enter a narrow passage behind a block of row houses. Some backyards are orderly and clean, others are heaped with garbage. I promptly step in something mushy. Glass frowns down at my flimsy shoes.
Luckily we don’t have to walk far to find what we’re looking for.
“Right at the base of that plywood door? There’s your rat hole,” Glass says, pointing at a neatly gnawed archway. “You couldn’t draw a cartoon better than that. And they’ll graze on this grass right here.”
Glass has been following the secret lives of wild Norway rats (otherwise known as brown rats, wharf rats or, most evocatively, sewer rats) for more than two decades now, but Baltimore has been a national hotspot for rat studies for well over half a century. The research push began during World War II, when thousands of troops in the South Pacific came down with the rat-carried tsutsugamushi disease, and the Allies also feared that the Germans and Japanese would release rats to spread the plague. Rats were wreaking havoc on the home front, too, as Christine Keiner notes in her 2005 article in the academic journal Endeavor. Rats can chew through wire and even steel, obliterating infrastructure. Rodent-related damage cost the country an estimated $200 million in 1942 alone. Rat bites were reaching record highs in some areas.
Worst of all, one of the only tried-and-true rat poisons, an extract from the bulb of the Mediterranean red squill plant, was suddenly unavailable, because the Axis powers had blockaded the Mediterranean. Scientists scrambled to find a chemical substitute.
At that point, relatively little was known about the habits of Norway rats, which are beefy (they can reach the length of a house cat), blunt-faced, foul-smelling but surprisingly smart creatures that carry a plethora of nasty bacteria, viruses and parasites. They are native to Southeast Asia, but smuggled themselves aboard ships bound for North America and practically everywhere else, subsisting, in large part, on our garbage. They thrived in aging East Coast cities like New York and Baltimore.
Despite the critters’ ubiquity, Curt Richter, a Hopkins neurological researcher who was one of the first scientists to become interested in the problem, had to solicit rat-stalking tips from a city sanitation worker. (Richter later recounted these trials in a memoir, “Experiences of a Reluctant Rat-Catcher.”) He soon realized that wild rats were craftier and generally harder to kill than their tame counterparts. By 1942, though, he had a squad of Boy Scouts dropping poisoned baits around East Baltimore, in the blocks near the School of Public Health. The new rodenticide, alpha-naphthylthiourea (ANTU), proved effective: city workers once recovered 367 rat casualties from a single block. Unfortunately, the poison was not as harmless to other animals as Richter professed: domestic dogs and cats died and several local children had their stomachs pumped.
But the Rodent Ecology Project, as it eventually came to be called, thrived in spite of these setbacks, nurturing all manner of provocative ideas. Famed psychologist John Calhoun, whose rat colonies at the National Institute of Mental Health inspired the children’s classic “Mrs. Frisby and the Rats of NIMH,” got his start in the alleys of Baltimore. (Interested in issues of crowding and social interaction, he eventually erected a quarter-acre rat corral behind his suburban home.)
Other project scientists began to map the basics of rat population dynamics, concepts that, Glass says, inform the way we manage endangered species today. Researchers noticed, for instance, that wiped-out blocks took time to repopulate, even though there were rats aplenty in all the surrounding blocks. Eventually, though, the rats almost always bounced back to their original numbers, the “carrying capacity” for that block.
Scientists even pinpointed rats’ absolute favorite foods; they relish macaroni and cheese and scrambled eggs and detest celery and raw beets. Their tastes are, in fact, eerily similar to ours.
Glass – who started off studying cotton rats in the Midwest – traps the animals with peanut butter baits and monitors the diseases they carry. (Hantavirus, once known as Korean hemorrhagic fever, and leptospirosis – which can cause liver and kidney failure – are of particular concern.) Lately he’s been interested in cat-rat interactions. Cats, he and his colleagues have noticed, are rather ineffectual rat assassins: they catch mainly medium-sized rodents, when they catch any at all. This predation pattern may actually have adverse effects on human health: some of the deceased mid-sized rats are already immune to harmful diseases, while the bumper crops of babies that replace them are all vulnerable to infection. Thus a higher proportion of the population ends up actively carrying the diseases at any given time.
Rats still infest Baltimore and most other cities. A few years ago a city garbage truck was marooned in the very alley we were touring, Glass says: rats had burrowed underneath until the surface caved in, sinking the truck to its axles. The rodents soon overran it, and its fetid load furnished quite a feast.
Even the poshest neighborhoods are afflicted: rats, Glass says, gravitate to fancy vegetable gardens, leaving gaping wounds in tomatoes. (Celery crops, one assumes, would be safer.) Recent surveys suggest that the rat populations of Baltimore neighborhoods haven’t changed much since the Hopkins studies began in the 1940s.
Yet we hadn’t glimpsed a single one on our stroll. Glass stopped suddenly in front of a junked-up yard and listened. “I didn’t see a rat, but I heard one,” he whispered. Rats – though adept at scurrying furtively – are actually quite vocal: they squeak, shriek and hiss. They also emit a series of high-pitched chirps inaudible to humans, which scientists believe may be the equivalent of laughter.
When Ken Hall first knocked on his neighbors’ door, it wasn’t to borrow a cup of sugar or an egg. He came to ask for the remains of their rotting decks—he needed cedar wood and lots of it.
They asked, “What’s it for?”
His answer came as a surprise: “I’m going to build a whale!”
Hall found himself in the midst of this story because of a shift in direction. After 15 years of building 3D worlds for computer games, he wanted to build something that could be touched and seen without a screen. But what?
The Canadian artist knew he was going to create something big. Hall was drawn to large animals, especially those with community and social structures similar to our own. He found a story that caught his attention—that of Hope, an orca that beached and died on the coast of Washington State in 2002. A necropsy found that the female animal contained the highest level of contaminants ever recorded in an orca, along with signs of significant bone loss and a bacterial infection. The Port Townsend Marine Science Center (PTMSC) led the effort to remove Hope from the shore and conduct the subsequent necropsy.
In 2011, the Idaho Virtualization Laboratory created a 3D scan of the skeleton, before it was put on display at PTMSC. Hall began to build prototype pieces based on the 3D data. He chose wood, and specifically cedar, as his medium. The cedar, Hall says, is an “homage to totem carving, and its role in passing knowledge to future generations,” honoring the traditional use of cedar by First Nations for totem poles in the Pacific Northwest. For it to go on display at various museums, the piece—which he named Legacy—would have to be made travel-ready, meaning it could be put up and taken down in a relatively short period of time, and displayed in a variety of ways based on the available space. Hall’s background in mechanical engineering came in handy at this point—“It was like a giant jigsaw puzzle” he says.
Image by Ken Hall. "It was like a giant jigsaw puzzle," says Ken Hall. His background in mechanical engineering came in handy.
Image by Ken Hall. The artist chose wood, specifically cedar, as his medium.
Image by Ken Hall. Hall chose the material to honor the traditional use of cedar by First Nations for totem poles in the Pacific Northwest.
Image by Ken Hall. Over two million visitors have experienced Legacy to date.
Image by Ken Hall. “Legacy is a stunning example of how science and nature can influence art and how art can expand the appreciation of science,” said Mary Jane Conboy, the director of science content and design at the Ontario Science Centre.
Once 11 of the 46 vertebrae were carved, Hall realized just how big a project this would be—it took him six months of full-time fabrication to make all the pieces (more than 200 bones make up the sculpture). The sculpture was completed and put on display at the Dufferin County Museum & Archives in Ontario. Sometimes accompanied by projection lights that provide a water-like effect and orca vocalizations playing in the background, the finished piece gives visitors a feeling of being underwater.
That feeling is what Hall wanted to provide people walking through the exhibit—one that highlights our connections as humans to the Earth and our ecosystems, like Hope and her community in the Pacific.
In the case of Hope, a transient (also called Bigg’s killer whale), researchers weren’t able to point to one specific cause of death; however, contamination is certainly an issue for all orcas in the region. There are three distinct orca ecotypes, or populations, documented off the U.S. North Pacific coast—transient, resident and offshore. All three overlap in parts of their home range but have distinctive physical traits, behaviors and even genes. According to the National Oceanic and Atmospheric Administration (NOAA), the sub-population of Southern Resident killer whales is “among the most contaminated marine mammals in the world” and is listed as endangered—only 78 individuals were counted in the population in 2014.
Contamination comes from a variety of sources ranging from legacy chemicals that are no longer used, but persist in the environment (like DDT and PCBs), to chemicals that make up flame-retardants, found in things like carpets and furniture. Southern Resident Killer Whales are one of eight “highly at-risk species” that NOAA is drawing attention to in its “Species in the Spotlight” series. Lynn Barre, who leads the Seattle branch of NOAA’s Office of Protected Resources, is encouraged to hear about the art piece—“Even [orca] bones or skeleton as an art piece can inspire people to be [environmental] stewards.”
After its inaugural showing, Legacy has moved to other venues in Ontario and is scheduled to be on display at the Ontario Science Centre beginning in 2017 before embarking on an international tour. Over two million visitors have experienced Legacy to date.
“Legacy is a stunning example of how science and nature can influence art and how art can expand the appreciation of science,” said Mary Jane Conboy, the director of science content and design at the Ontario Science Centre. “As Canada celebrates its 150 years in 2017, displaying Legacy at the Ontario Science Centre is particularly timely. This visually compelling piece asks our visitors to reflect on our current environmental practices and the changes we want to inspire for the future.”
Hall hopes someday to take the immersive exhibit to another level by incorporating his gaming background into the onsite experience. 3D virtual reality could evolve the sculptural art piece into an interactive installation: panning over the skeleton would allow visitors to see what the full animal looked like, not just an articulated skeleton. Zooming in to an area could answer questions, like “how do whales breathe,” “what are the impacts of underwater noise on whales,” and “what is it like to 'see' with sonar?”
Hall’s environmentally focused pieces tell a story. He wants visitors to gain a better understanding of how humans can live in harmony with nature. “I want to try to make thinking and understanding cool again,” he says, and he intends to keep his focus on our connection to the world around us in the hope that we all become more empathetically aware of our surroundings.
How did humans get to be so smart, and when did this happen? To untangle this question, we need to know more about the intelligence of our human ancestors who lived 1.8 million years ago. It was at this point in time that a new type of stone tool hit the scene and the human brain nearly doubled in size.
Some researchers have suggested that this more advanced technology, coupled with a bigger brain, implies a higher degree of intelligence and perhaps even the first signs of language. But all that remains from these ancient humans are fossils and stone tools. Without access to a time machine, it’s difficult to know just what cognitive features these early humans possessed, or if they were capable of language.
Difficult—but not impossible.
Now, thanks to cutting-edge brain imaging technology, my interdisciplinary research team is learning just how intelligent our early tool-making ancestors were. By scanning the brains of modern humans today as they make the same kinds of tools that our very distant ancestors did, we are zeroing in on what kind of brainpower is necessary to complete these tool-making tasks.
A leap forward in stone tool technology
The stone tools that have survived in the archaeological record can tell us something about the intelligence of the people who made them. Even our earliest human ancestors were no dummies; there is evidence for stone tools as early as 3.3 million years ago, though they were probably making tools from perishable items even earlier.
As early as 2.6 million years ago, some small-bodied and small-brained human ancestors chipped small flakes off of larger stones to use their sharp cutting edges. These types of stone tools belong to what is known as the Oldowan industry, named after Olduvai Gorge in Tanzania, where remains of some of the earliest humans and their stone implements have been found.
The more basic Oldowan chopper (left) and the more advanced Acheulian handaxe (right) (Shelby S. Putt, courtesy of the Stone Age Institute, CC BY-ND)
Around 1.8 million years ago, also in East Africa, a new type of human emerged, one with a larger body, a larger brain and a new toolkit. This toolkit, called the Acheulian industry, consisted of shaped core tools that were made by removing flakes from stones in a more systematic manner, leading to a flat handaxe with sharp edges all the way around the tool.
Why was this novel Acheulian technology so important for our ancestors? At a time when the environment and food resources were somewhat unpredictable, early humans probably began to rely on technology more often to access food items that were more difficult to obtain than, say, low-hanging fruits. Meat, underground tubers, grubs and nuts may all have been on the menu. Those individuals with the better tools gained access to these energy-dense foods, and they and their offspring reaped the benefits.
One group of researchers has suggested that human language may have evolved by piggybacking on a preexisting brain network that was already being used for this kind of complex tool manufacture.
So were the Acheulian toolmakers smarter than any human relative that lived prior to 1.8 million years ago, and is this potentially the point in human evolution when language emerged? We used a neuroarchaeological approach to answer these questions.
Imaging brain activity now to reconstruct brain activity in the past
My research team, which consists of paleoanthropologists at the Stone Age Institute and the University of Iowa and neuroscientists at the University of East Anglia, recruited modern human beings—all we have at our disposal these days—whose brains we could image while they made Oldowan and Acheulian stone tools. Our volunteers were recreating the behaviors of early humans to make the same types of tools they made so long ago; we can assume that the areas of their modern human brains that light up when making these tools are the same areas that were activated in the distant past.
We used a brain imaging technology called functional near-infrared spectroscopy (fNIRS). It is unique among brain imaging techniques because it allows the person whose brain is being imaged to sit up and move her arms, unlike other techniques that do not allow any movement at all.
Participants in the study made stone tools while their brain activity was measured with fNIRS. (Shelby S. Putt, CC BY-ND)
Each of the subjects who participated in this study attended multiple training sessions to learn how to make Oldowan and Acheulian tools before going in for the final test—making tools while hooked up to the fNIRS system.
We needed to control for language in the design of our experiment to test the idea that language and tool-making share a common circuit in the brain. So we divided the participants into two groups: One learned to make stone tools via video with language instructions; the other group learned via the same videos, but with the audio muted, so without language.
If language and tool-making truly share a co-evolutionary relationship, then even those participants who were placed in the nonverbal group should still use language areas of the brain while making a stone tool. This is the result we should expect if language processing and stone tool production require the same neural circuitry in the brain.
During the neuroimaging session, we had the participants complete three tasks: a motor baseline task during which they struck two round stones together without attempting to make flakes; an Oldowan task that involved making simple flakes without trying to shape the core; and an Acheulian task where they attempted to shape the core into a handaxe through a more advanced flake removal procedure.
The evolution of human-like cognition
What we found was that only the participants who learned to make stone tools with language instruction used language processing areas of the brain. This probably means that they were recalling verbal instructions they’d heard during their training sessions. That explains why earlier studies that did not control for language instruction in their experiment design found that stone tool production activates language processing areas of the brain. Those language areas lit up not because of anything intrinsic to making stone tools, but because while participants worked on the tools they also were likely playing back in their minds the language-based instruction they’d received.
Our study showed that people could make stone tools without activating language-related brain circuits. That means, then, that we can’t confidently state at this point that stone tool manufacture played a major role in the evolution of language. When exactly language made its appearance is therefore still a mystery to be solved.
We also discovered that Oldowan tool-making mainly activates brain areas involved in visual inspection and hand movement. More advanced Acheulian tool-making recruits a higher-order cognitive network that spans across a large portion of the cerebral cortex. This Acheulian cognitive network is involved in higher-level motor planning and holding in mind multi-sensory information using working memory.
Areas of the brain that form the Acheulian cognitive network that are also active when trained pianists play the piano. (Shelby S. Putt, CC BY-ND)
It turns out that this Acheulian cognitive network is the same one that comes online when a trained pianist plays the piano. This does not necessarily mean that early humans could play Chopin. But our result may mean that the brain networks we rely on today to complete complex tasks involving multiple forms of information, such as playing a musical instrument, were likely evolving around 1.8 million years ago so that our ancestors could make relatively complex tools to exploit energy-dense foods.
One million Americans suffer from the tremors, stiffness and slurred speech of Parkinson’s disease. Major depression affects some 16 million US adults a year, and nearly 30 million Americans deal with the pain of migraine headaches, while about 1 in 1,000 endure the agony of even more painful cluster headaches. Medications are usually the first-line treatment for these and other neurological conditions, with deep brain stimulation surgery—where a surgeon cracks a patient’s skull and places tiny electrodes in the brain tissue as a sort of “brain pacemaker”—sometimes used as a last resort.
What if, instead of side effect-ridden drug regimens or invasive surgeries, these conditions could be treated by painlessly stimulating the brain from outside the skull?
“What if there’s a way to do this noninvasively?” Ian Graham, a biomedical engineering graduate student at Johns Hopkins University, wondered after witnessing a multi-hour deep brain stimulation surgery for depression.
Transcranial stimulation, or stimulating the brain from outside the skull, has become one of the hottest areas in biomedical engineering. The method is usually done in one of two ways. One technique, called transcranial direct current stimulation, uses electrodes placed on the scalp to send electrical signals to the brain. The other, called transcranial magnetic stimulation, uses a magnetic coil on the scalp to produce electrical activity in the brain. Different locations of the brain are stimulated at different intensities and frequencies based on the condition being treated. While no one is sure precisely how brain stimulation improves Parkinson’s and other conditions, it’s understood that the stimulation can affect how neurons fire and can regulate neurotransmitters like serotonin and dopamine.
Graham and other biomedical engineers at Johns Hopkins invented a headpiece that uses electrodes to stimulate the brains of Parkinson’s patients. The STIMband device, which will begin clinical trials later this year or early next year, is meant to be used at home, which sets it apart from other transcranial stimulation devices. The students hope it will help deal with some of the more debilitating symptoms of Parkinson’s, including tremor and balance issues. Earlier this month, the STIMband won a $5,000 second-place prize in VentureWell’s BMEidea national design contest for biomedical and bioengineering students.
With STIMband, the students place the electrodes in locations known from computer modeling to stimulate parts of the brain affected by Parkinson’s. They observed patients participating in Johns Hopkins studies on transcranial direct current stimulation and were impressed by the results.
“I’ve seen a patient come in, and after treatment he had to sign his name,” says Graham. “He said he hadn’t been able to write like that in years.”
The students met with patients in the hospital’s Parkinson’s clinic over many months to gather data about what people really needed in an at-home device. Eventually, they came up with a battery-powered design roughly based on a baseball cap, which can be easily slipped on and controlled with a large button.
STIMband treatment would start in the neurologist's office, where the device would be fitted to the patient. The patient would then take the STIMband home and use it for 20 minutes a day, every day. Treatment might eventually be modified based on individual results, but Graham says the patients would likely use the STIMband indefinitely, as long as they're seeing positive results.
"Since PD [Parkinson's Disease] is degenerative, and the STIMband acts differently than the medication, it should also prove beneficial for a longer period of time," says Graham. "Unfortunately that period of time is still unknown."
If STIMband trials prove successful, the group hopes to achieve FDA approval. The device would likely cost between $600 and $1,000, depending on material choices.
Transcranial stimulation is currently being studied by researchers as a treatment for neurological and neuropsychiatric conditions, including epilepsy, stroke, Tourette’s syndrome, depression and mania, migraine, schizophrenia, eating disorders, dystonia (painful involuntary muscle contractions) and chronic pain. But the FDA has only approved transcranial magnetic stimulation for medication-resistant depression.
“This is not like placing refrigerator magnets on people’s heads,” says neurologist David Brock, the medical director of Neuronetics, the company that produces NeuroStar, a transcranial magnetic stimulation device for depression. NeuroStar treatment is given in a doctor’s office. For a period of four to six weeks, patients come in five days a week for 45-minute sessions. They sit in a chair reading or listening to music while the device, placed over the left side of their forehead, stimulates their left prefrontal cortex.
People often mistakenly consider transcranial stimulation to be an alternative treatment, Brock says, but it’s actually backed up by clinical data. Studies show about 30 to 40 percent of treatment-resistant depression patients go into remission after using NeuroStar, while more have some improvement of symptoms.
Nor is transcranial stimulation like electroconvulsive therapy (ECT), or “shock treatment,” the stigmatized but often highly effective depression treatment that uses electricity to induce a seizure. Unlike ECT, transcranial stimulation doesn’t induce a seizure and doesn’t necessitate general anesthesia or a hospital stay. It’s also free from ECT’s more notorious side effects, including memory loss and confusion.
Brock says transcranial stimulation will almost certainly become an approved treatment for other conditions in coming years, once researchers pin down the right location and intensity to treat the issue at hand. “[Transcranial stimulation] is a lot like a Swiss Army knife,” he says. “We’ve figured out how to use the blade, but we haven’t figured out how to use all the other tools yet.”
Nearly 50 years ago, a computer engineer named Paul Baran peered into the future of American media and didn't like what he saw.
"With the diversity of information channels available, there is a growing ease of creating groups having access to distinctly differing models of reality, without overlap," wrote Baran, a co-founder of the California-based Institute for the Future and a pioneer of the early Internet. "Will members of such groups ever again be able to talk meaningfully to one another? Will they ever obtain at least some information through the same filters so that their images of reality will overlap to some degree?"
This was 1969. Baran was lamenting how the rise of television would cleave the political public. But his warnings may be more prescient today than ever: New findings based on an extensive survey of American book-buying habits find that readers on different sides of the political aisle are not only deeply polarized over scientific issues—they also read completely different scientific books.
"It's really a consumption divide," says James Evans, a sociologist at the University of Chicago and lead author of the study, which was published this week in the journal Nature Human Behaviour. "It's very difficult to imagine consumers of science in this environment appealing to a shared body of claims and facts and theories and arguments because they're really looking at different things."
Evans has long studied the history of science, and how scientists collaborate with industry. But recently, a conversation with Cornell University computational social scientist Michael Macy left him wondering whether the U.S's increasingly polarized politics would be reflected in how people view and read about science. The pair decided to team up to measure this polarization in a unique way: through the books they buy.
Unlike the more commonly used method of surveys, book-buying data is potentially more useful because it allows for much larger sample sizes, Evans says. Plus, it’s more anonymous than a survey: The books are purchased privately online and shipped in nondescript boxes to people's homes, meaning there's no fear of judgment from a pollster (a factor that may have helped skew polls before the 2016 U.S. presidential election).
Finally, purchasing a book requires a financial investment that makes it more likely that people are really committed to the view of that book, Evans says. As he puts it: "Talk is cheap. But if they're putting their money on the line ... this says they have a certain level of interest."
Evans and his collaborators drew on data from book giants Amazon.com and Barnes and Noble, which together have access to more than half of the world's book-buying market. The researchers didn’t collaborate with either company, meaning they didn’t have access to the buyers themselves. However, they were able to take advantage of a feature both websites offer: book suggestions.
When a customer buys a book from either site, a list of books that other people who bought that book tend to purchase will pop up. These suggestions "allowed us to build an entire network representation of that book-buying space," Evans says, linking hundreds of thousands of scientific books to each other in a web, along with more than 1,000 conservative and liberal books. All told, the team sorted through metadata for some 1.3 million books.
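A co-purchase network of this kind can be sketched as a breadth-first crawl over "customers also bought" links. The sketch below is purely illustrative: the `get_suggestions` callback and the toy catalog are invented stand-ins for the retailers' recommendation data, not the study's actual pipeline.

```python
from collections import deque

def build_copurchase_graph(seeds, get_suggestions, max_books=1000):
    """Breadth-first crawl over 'also bought' links, collecting edges.

    get_suggestions(book) must return the list of titles the store
    recommends alongside `book` (here, a mocked lookup).
    """
    graph = {}
    queue = deque(seeds)
    while queue and len(graph) < max_books:
        book = queue.popleft()
        if book in graph:          # already crawled this title
            continue
        neighbors = get_suggestions(book)
        graph[book] = set(neighbors)
        queue.extend(neighbors)    # follow each suggestion in turn
    return graph

# Toy catalog standing in for the real recommendation lists.
catalog = {
    "liberal_politics": ["climatology_a", "anthropology_b"],
    "conservative_politics": ["medicine_c"],
    "climatology_a": ["anthropology_b"],
    "anthropology_b": [],
    "medicine_c": [],
}

graph = build_copurchase_graph(
    ["liberal_politics", "conservative_politics"],
    lambda b: catalog.get(b, []),
)
```

Starting from political "seed" books and repeatedly following suggestions is what lets a web of hundreds of thousands of science titles be assembled without ever observing an individual buyer.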
Researchers looked at that web to see what books about science are most often purchased by people who buy other books with liberal or conservative political slants (for example, a book by Rachel Maddow versus one by Ann Coulter). What they found was a stark divide in the kinds of science these two groups like to read about. Liberal readers more often picked books about basic science disciplines, such as anthropology, while conservative book purchasers tended toward applied science books, such as medicine.
"It's not just that they purchased different books, they purchased very different books from different regions of the scientific space," Evans says.
There may yet be hope for some measure of bipartisan unity. A few disciplines appeared to attract relatively equal interest from both sides of the political spectrum—namely, veterinary medicine, archaeology and paleontology. "Apparently we can all agree that dinosaurs are awesome," says Evans.
For science lovers dismayed by recent restrictions on the use of science at government agencies, there is another silver lining to the results: Political book purchasers of both persuasions were more likely to purchase books about science than topics like art or sports. "There's a really broad acceptance of the value of science,” Evans says, “by liberals and conservatives.”
The scientific fields that appeared most polarized among liberal- and conservative-leaning book buyers may not surprise you: climatology, environmental science, social science and economics, among others. (By "polarized," the authors mean that there was very little overlap between the climate science books liberals bought and the ones that conservatives bought.)
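The paper's actual polarization metric is more involved, but the underlying idea, how little the two groups' reading lists overlap, can be illustrated with a simple Jaccard similarity. The book sets below are invented for the example:

```python
def jaccard(a, b):
    """Share of titles common to two reading lists: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical climate-science purchases by each group of political readers.
liberal_buys = {"climate_book_1", "climate_book_2", "climate_book_3"}
conservative_buys = {"climate_book_3", "climate_book_4", "climate_book_5"}

overlap = jaccard(liberal_buys, conservative_buys)  # 1 shared title of 5 total
```

Values near zero, as the study reports for fields like climatology, mean the two audiences are effectively reading disjoint literatures; a value of 1.0 would mean identical reading lists.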
Evans worries that in the long-term, this polarization could not only influence how the public views science, but could shape science itself for the worse. "The concern is that this kind of polarization could end up shaping the production of science in those fields," Evans says—for example, leading scientists to design narrower studies that unconsciously seek to confirm results that align with their biases.
In an opinion piece published alongside the study, Georgia State University political scientist Toby Bolsen writes that the results underscore a growing concern about Americans associating themselves more with people and media with whom they share opinions on science and politics—which often leads to those opinions being strengthened. "This can impede science’s ability to enhance the quality of political debates," writes Bolsen, who wasn't involved in the research.
He cautions, however, that this study did not draw on a random sample of conservative and liberal books—they were picked by the researchers based on Amazon’s categorization of them. Nor does it address the motivations that drive an individual to buy or read a certain scientific book.
James Druckman, a political scientist at Northwestern University who studies how people form political preferences, says Evans' research "clearly is a critical advance in what we know." Druckman, who also wasn't involved in this study, says the work "gives a much more nuanced and likely accurate view of partisanship and science." At the same time, he adds, "it avoids simplistic portraits of partisans."
This is far from the first effort to analyze so-called “information silos” using data. In 2014, when waves of violence were rocking Israel, data analyst Gilad Lotan published an analysis of the social media and news coverage of an attack at a school in the Gaza Strip. In a series of stunning maps, Lotan detailed the wide gap between the kinds of news outlets, posts and articles shared by those considered to be "pro-Israeli" and "pro-Palestinian" on Facebook and Twitter.
“A healthy democracy is contingent on having a healthy media ecosystem," Lotan wrote. "We need to be more thoughtful about adding and maintaining bridges across information silos online.”
In the future, Evans hopes to be able to work with online book publishers to collect specific data about buyers and their preferences. In the meantime, though, he hopes to see more work to bridge this scientific gap. For instance: scrutinizing book-recommendation algorithms to make sure that they don't box people into certain viewpoints, getting scientists to better communicate when there is consensus opinion in their fields, and creating more forums for people of different political views to discuss science.
"Doing that could allow us to make science a shared resource," Evans says. "I think the onus is on us as a society to grapple with this."
My least favorite day of the year has arrived. Yesterday we completed one last hike prospecting the badland hills north of Worland where rocks deposited during the PETM are exposed. The search was fruitless—we found no new plant fossil sites, no last-day-of-the-field-season wonders. Today is the day we break camp, pack everything back into the little red shed at the Bureau of Land Management yard, and leave.
Breaking camp, striking camp, any way you put it, taking down the tents we have lived in for the past month always makes me feel sad. It is strange, but I think most people feel it—one becomes emotionally attached to a spot of ground very quickly. We arrived here just a month ago. This was, and soon again will be, a bare patch of relatively flat ground dotted with sagebrush and cactus. We set up a tent for cooking, a few more tents for sleeping. Each day we woke here, breakfasted here, left for work from here, returned here in the evening, ate again, and sat here and talked as the sky overhead of this spot darkened and broke out in stars. Our only commitments to this place are our temporary use of it, the temporary structures we brought with us, and a ring of stones we made to contain the occasional campfire. Yet through some trick of the human psyche it feels like home. Taking down the tents and packing them into Dino destroys the home we have made our own simply through living in it and enjoying it for a few weeks. No wonder the term is “breaking camp.”
Of course there are other reasons to feel a little melancholy as we pull the tent stakes, fold the tarps, pack the bins of dishes, and empty the coolers of their last blocks of ice. We are all giving up the fellowship that grows among any small group that lives and works together in a challenging environment, even for a short time. I have seen this happen, field season after field season, for nearly 40 years now. Some groups mesh exceptionally well, with others there is more friction, but always people learn to help one another to some degree. They come to feel a common purpose. And almost always they feel a connection to this harsh landscape, even a little sense of owning the place by virtue of living in it.
We will also miss the relative independence that comes with fieldwork—we have had stretches of several days when we were unplugged from the world, with no phone or email. Until about 10 years ago our only non-emergency contact with the rest of the world was via snail mail and weekly phone calls that could be placed from a public pay phone in Worland. Now, improved cell-phone coverage has turned the hill behind camp into the “phone booth,” and it takes a conscious decision to separate from the rest of the world. The reward of separating is to be, temporarily, master of your own schedule and captain of your activities, able to focus entire days on the rocks and fossils in front of you without even the shadow of distraction by the outside world. It seems a radical act, and it is almost as addictive as collecting fossils.
Image by Scott Wing. A flat patch of ground in the badlands in Wyoming. (original image)
Image by Scott Wing. The badlands north of Worland, Wyoming, shown here, expose sediments deposited during the Paleocene-Eocene Thermal Maximum. (original image)
Finally and most importantly, although fieldwork is physically hard and frequently monotonous, it also holds the possibility of great finds. In leaving I am giving up the chance that tomorrow I might walk around a nameless badland hill and find a spectacular new fossil site. The gambler in me wants to throw the dice a few more times. That is my main motivation for returning to the Bighorn Basin every summer. Some 20 years ago my colleague Bill DiMichele came to visit one of my field areas in the Bighorn Basin—I think curious that I continued to come back here year after year. One evening after dinner we walked to the top of a high butte near my camp and looked out onto an area of badlands called The Honeycombs, maybe 10 square miles of sharply weathered badland hills, each isolated from the next by ravines 50 to 100 feet deep, and each exposing on its sides rocks deposited in the last part of the Paleocene. Bill said what we were both thinking: “My God, you’ll never look at all that, it’s an endless labyrinth of outcrop just in this little area.” He was certainly right, but it remains fun to try.
We started packing not long after dawn so that we could complete the hardest work before it got hot, and by 10 a.m. our home is entirely packed and loaded into Dino. My poor old field vehicle is once again bulging at the doors. We take a last tour around our campsite, picking up the occasional small pieces of paper or plastic that have blown into the surrounding sage during summer windstorms. We all want to leave it as we found it, even if we don’t want to leave it at all. When we finish, the site is a barren, dusty, sage-spotted flat looking pretty much as it did when we got here. The fire ring and a few smooth spots where tents were pitched are the only marks we have left.
Dino’s creaks and groans are louder than ever as I negotiate the camp road for a final time. Topping the first low hill outside of camp there is a large buck pronghorn standing by the two-track, grazing placidly. He looks up with mild interest as we pass, far more blasé than the usual pronghorn as we rattle by about 40 feet away. I like to imagine that he is patiently waiting for the “summer people” to leave and return the badlands to their regular state of sun-stunned, midday quietude. With any luck, though, we will be back in his territory next year. Who knows what we might find then?
Scott Wing is a research scientist and curator in the Smithsonian Institution’s Department of Paleobiology.
If you think Australia is full of weird creatures now, you should have seen it at the end of the last Ice Age. There were wombats the size of Volkswagens, koala cousins that resembled the mythical Drop Bear and enormous, venomous lizards larger than today’s Komodo dragons. But why did these fantastic beasts disappear? After a decade of debating this question, a new study is helping to revive a hypothesis that had previously been pushed aside.
What happened in Australia is just one part of a global story in the decline of the world’s massive mammals. From that island continent through Asia, Europe, Africa and the Americas, the close of the Ice Age 12,000 years ago saw the worldwide downfall of many large, charismatic creatures from the giant ground sloth to the beloved woolly mammoth. In every case, both humans and a warming climate have been implicated as major suspects, fueling a debate over how the extinction played out and what—or who—was responsible.
As far as Australia goes, humans have been put forward as the prime culprits. Not only would early-arriving aboriginals have hunted megafauna, the argument goes, but they would have changed the landscape by using fire to clear large swaths of grassland. Some experts point to Australia’s megafauna crash after human arrival, around 50,000 years ago, as a sure sign of such a human-induced blitzkrieg.
For example, a region called the Sahul—which included Australia, Tasmania and New Guinea during the Ice Age—lost 88 species of animal that weighed over 220 pounds. These included oversized kangaroos that strutted rather than hopped, real-life ninja turtles with tail clubs and flightless birds twice the size of today’s emus.
The problem is, there’s no hard evidence that humans were primarily to blame for the disaster that befell these giants. Judith Field, an archaeologist at the University of New South Wales who focuses on megafauna and indigenous communities in Australia and New Guinea, says the hunting hypothesis has hung on because of its appealing simplicity. “It’s a good sound bite” and “a seductive argument to blame humans for the extinctions” given how simple of a morality fable it is, she says. But when it comes to hard evidence, Field says, the role of humans has not been substantiated.
So what really happened? The picture is far from complete, but a paper by Vanderbilt University paleontologist Larisa DeSantis, Field and colleagues published today in the journal Paleobiology argues that the creeping onset of a warmer, drier climate could have dramatically changed Australia’s wildlife before humans even set foot on the continent. And while this event was natural, it is a frightening portent of what may happen to our modern wildlife if we do nothing to stop the scourge of today's human-caused climate change.
The researchers focused on a spot in southeastern Australia known as Cuddie Springs, which turned out to be an ideal place to interrogate the fate of the continent’s megafauna. Initial scientific forays focused on searching for fossil pollen to reconstruct ancient environments, Field says. But in the process, researchers also found fossils and archaeological artifacts that indicated megafauna and humans lived alongside each other there for 10,000 years or more.
“The combination of the fossil bone, the pollen record and the archaeology make this a really unique opportunity to investigate the relationship between the three,” Field says.
Even better, DeSantis says, Cuddie Springs boasts older beds of fossils deposited long before human arrival. This provided an opportunity to document changes over a longer span of time, “and assess dietary responses to long-term shifts in climate,” she says. To that end, the paleontologists focused on fossils laid out in two horizons—one 570,000-350,000 years old and the other between 40,000 and 30,000 years old. Drawing on chemical clues about diet and microscopic damage to marsupial teeth found in those layers, the researchers were able to document who was around and what they were eating at each layer.
If you were able to take a time machine between the two time periods, you’d be forgiven for thinking that you had moved through space as well as time. “Cuddie Springs, around 400,000 years ago, was wetter,” DeSantis says, and there was enough greenery for the various herbivores to become somewhat specialized in their diets. Kangaroos, wombats and giant herbivores called diprotodontids browsed on a variety of shrubby plants, including saltbush. By 40,000 years ago, a warmer, drying climate had transformed the landscape and the diets of the mammals on it.
By late in the Ice Age, the plant-eating marsupials were all eating more or less the same thing, and the sorts of plants that were better at holding water for these mammals were much rarer. Saltbush, for example, became less palatable because, DeSantis says, “if you haven’t been able to find water for days, the last thing you are going to eat is salty food that requires you to drink more water.” The desert became drier, resources became scarce, and competition for the same food ramped up.
Altogether, DeSantis says, this suggests “climate change stressed megafauna and contributed to their eventual extinction.”
Knowing how climate change impacted Australia’s mammals thousands of years ago isn’t just ancient history. NASA recently reported that we’ve just gone through the hottest year on record in an ongoing string of exceptionally warm years. The only difference is that now, our species is driving climate change. “Australia is projected to experience more extreme droughts and intense precipitation events,” DeSantis says, including a projected temperature increase of around 1-3 degrees Celsius by 2050, thanks to Homo sapiens and our forest-razing, fossil-fuel-burning, factory-farm-dependent lifestyles.
Looking to the past may help us get ready for what’s coming. “Data from Cuddie Springs suggest that there is likely a tipping point beyond which many animals will go extinct,” DeSantis says. We’re on track to play out such a catastrophe again—and if today’s changing climate can’t be halted or reversed, the least our species can do is prepare for it. “I always learned in school that the importance of studying history is to make sure that history doesn’t repeat itself,” DeSantis says.
Looking at the ghosts of climate change past gives us a preview of what’s coming—and what we might lose if we do not act.
In 2009, automotive designers at Japanese carmaker Nissan were scratching their heads over how to build the ultimate anti-collision vehicle. Inspiration came from an unlikely source: schools of fish, which move synchronously by sticking close together while simultaneously staying a safe stopping distance apart. Nissan took the aquatic concept and swam with it, creating safety features in Nissan cars like Intelligent Brake Assist and Forward Collision Warning that are still used today.
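The two fish-school rules Nissan borrowed—stay close to your neighbors, but never closer than a safe stopping distance—can be expressed as a simple steering calculation. The sketch below is purely illustrative (it is not Nissan's actual algorithm, and the function name, gains and distances are invented for this example):

```python
# A minimal, hypothetical sketch of the fish-school rules described above:
# drift toward neighbors, but back away from any neighbor inside a safe
# stopping distance. One-dimensional positions keep the idea visible.

def steer(position, neighbors, safe_gap=2.0, attract_gain=0.1, repel_gain=0.5):
    """Return a velocity adjustment: toward the group when spacing is
    comfortable, away from any neighbor inside the safe stopping distance."""
    adjustment = 0.0
    for other in neighbors:
        gap = other - position
        if abs(gap) < safe_gap:
            # Too close: push away, harder the closer we are.
            adjustment -= repel_gain * (safe_gap - abs(gap)) * (1 if gap > 0 else -1)
        else:
            # Comfortably spaced: drift gently toward the neighbor.
            adjustment += attract_gain * gap
    return adjustment

# A car just 0.5 units behind its neighbor backs off (negative adjustment):
print(steer(0.0, [0.5]))
# A car far from the group drifts toward it (positive adjustment):
print(steer(0.0, [10.0]))
```

Run every time step for every vehicle, this pair of opposing pulls produces the schooling behavior the designers were after: the group stays tight without any member ever closing inside its stopping distance.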
Biomimicry—an approach to design that looks for solutions in nature—is by now so widespread that you may not even recognize the real-life inspiration behind your favorite technology. From flipper-like turbines to leaf-inspired solar cells to UV-reflective glass with spider web-like properties, biomimicry offers designers efficient, practical, and often economical solutions that nature has been developing over billions of years. But combine biomimicry with sports cars? Now you're in for a wild ride.
From the Jaguar to the Chevrolet Impala, automotive designers have a long tradition of naming their cars after creatures that evoke power and style. Carmakers like Nissan even go so far as to study animals in their natural environments to advance automotive innovation. Here are a few of the most famous classic cars—commercial and concept—that owe their inspiration to the deep blue sea.
A Bubble of One’s Own
McLaren P1 Supercar (Axion23 via Wikicommons)
While automotive designer Frank Stephenson was on vacation in the Caribbean, a sailfish mounted on the wall of his hotel made him do a double take. The fish's owner was especially proud of his catch, he told Stephenson, because sailfish are so fast that they are notoriously difficult to capture. Reaching speeds of 68 miles per hour, the sailfish is one of the fastest animals in the ocean (close competitors include its cousins the swordfish and marlin, all of which belong to the billfish family).
His curiosity hooked, Stephenson returned to his job at the headquarters of British automotive giant McLaren eager to learn more about what makes the sailfish the fastest in the sea. He discovered that the fish’s scales generate tiny vortices that produce a bubble layer around its body, significantly reducing drag as it swims.
Stephenson went on to design a supercar in the fish’s image: The P1 hypercar needs generous air circulation to maintain combustion and engine cooling for high performance. McLaren’s designers applied the fish scale blueprint to the inside of the ducts that channel air into the engine of the P1, boosting airflow by an incredible 17 percent and increasing the efficiency and power of the vehicle.
The Road Shark
Image by Tino Rossini / iStock. Corvette Mako Shark (original image)
Image by CoreyFord / iStock. Mako Shark Side Profile (original image)
Image by Arpad Benedek / iStock. Corvette Stingray (original image)
Image by Chris Sauerwald / iStock. Corvette Manta Ray (original image)
Out of all the ocean-inspired sports cars, the Corvette Stingray is perhaps the most famous. Colloquially named “The Road Shark,” the Stingray is still produced and sold today. It isn’t the only car in a suite of shark- and ray-inspired 'Vettes, however. There's also the Mako Shark, the Mako Shark II and the Manta Ray, although none of these have enjoyed the longevity of the Stingray. America’s love affair with the American-built Stingray continues today: it remains a race-ready sports car for not a whole lot of money.
Corvette’s aquatic renaissance stemmed partly from one man’s fishing trip. General Motors design head Bill Mitchell, an avid deep-sea fisher and nature-lover, returned from a trip to Florida with a Mako shark—a pointy-nosed apex predator with a metallic blue back—which he later mounted in his GM office. Mitchell was reportedly captivated by the vibrant gradation of colors along the underbelly of the shark, and worked tirelessly with designer Larry Shimoda to translate this coloration to the new concept car, the Mako Shark.
Although the car never went on the market, the prototype alone gained iconic status. But the concept didn’t disappear entirely. Instead, after acquiring a few upgrades, the Mako evolved into the Manta Ray after Mitchell was inspired by the movement of a manta powerfully gliding through the ocean.
A Little More Bite
Plymouth Barracuda (crwpitman / iStock)
This iconic fastback almost had an entirely different namesake when Plymouth’s executives lobbied to call the car "Panda." Unsurprisingly, the name was unpopular with its designers, who were looking for something with a little more…bite. They settled on "Barracuda," a title more befitting of the muscle car’s snarling, toothy grin.
Serpentine in appearance, barracudas in the wild attack with short bursts of speed. They reach up to 27 miles per hour, and have been observed overtaking prey larger than themselves using their rows of razor-sharp teeth. Highly competitive animals, barracudas will sometimes challenge animals two to three times their size for the same prey.
The Plymouth Barracuda was hastily brought to market in 1964 to beat the release of its direct competitor, the Ford Mustang. The muscle car’s debut was rocky, but it returned in 1970 with an unapologetically fierce body design and a V8 motor. Sleek yet muscly, the Barracuda lives up to its name—a wickedly fast classic car with a predatory instinct.
Misguided by a Boxfish
Mercedes-Benz Bionic (NatiSythen via Wikicommons)
Despite its goofy-looking exterior, the boxfish represents an amazing feat of bioengineering. Its box-shaped, lightweight, bony shell makes the small fish agile and maneuverable, as well as purportedly aerodynamic and self-stabilizing. Such attributes made it an ideal inspiration for a commuter car, which is why Mercedes-Benz unveiled the Bionic in 2005—a concept car that took technical and even cosmetic inspiration from the spotted yellow fish.
Sadly, the Bionic never made it to market after further scientific analysis largely debunked the boxfish’s purported “self-stabilizing” properties. More research revealed that, over the course of its evolution, the boxfish had actually traded speed and power for an assortment of defensive tools and unparalleled agility. Bad news for the Bionic—but a biomimicry lesson for the books.
An ethereal sound, with a smooth, rangy melody that shuffles through keys, and a soft tap for a beat, fills a lab at Toronto’s Holland Bloorview Kids Rehabilitation Hospital. Made possible by wearable sensors on a child’s fingertips and chest that track pulse, breathing, temperature and sweat, and an algorithm that interprets that data as sound, the electronic output isn’t really danceable. But the changes in tempo, melody and other musical elements instead provide insight into the child’s emotions.
This is biomusic, an emotional interface that tracks physiological signals associated with emotional states and translates them into music. Invented by a team at Holland Bloorview, led by biomedical engineers Stefanie Blain-Moraes and Elaine Biddiss, the intent is to offer an additional means of communication to people who may not express their emotional state easily, including but not limited to children with autism spectrum disorder or with profound intellectual and multiple disabilities. In a 2016 study in Frontiers in Neuroscience, Biddiss and her coauthors recorded the biomusic of 15 kids around the age of 10 — both kids with autism spectrum disorder and typically developing kids — in anxiety-inducing and non-anxiety-inducing situations and played it back to adults to see if they could tell the difference. They could. (At the bottom of the study, you can download and listen to the biomusic.)
“These are children who may not be able to communicate through traditional pathways, which makes things a little bit difficult for their caregivers,” says Stephanie Cheung, a PhD candidate in Biddiss’ lab and lead author of the study. “The idea is to use this as a way for caregivers to listen to how those signals are changing, and in that way to kind of determine the feeling of the person they’re communicating with.”
While Biddiss’ studies employed that atmospheric sound, it need not be a particular type of music, points out Blain-Moraes, an assistant professor of physical and occupational therapy who runs the Biosignal Interaction and Personhood Technology Lab at McGill University. A former graduate student with Biddiss at Holland Bloorview who helped invent the original system, Blain-Moraes is working to further develop the technology. Among her modifications is the option to use different “sound skins” that apply noise that the user finds pleasant. The goal is not to design a technology for a single group.
“We look a lot for what we call resonant design,” she says. “We’re not trying to design for a condition, we’re looking to design for a need, and often those needs resonate across conditions.” This could be a caregiver who wants more information from her patient, or a mother who wants an alternative way to monitor a baby in another room. It could apply to an individual who wants to track his own emotional state, or someone with an aging parent who has become less able to express him or herself.
In the original state, the technology featured a fingertip sensor that tracked heart rate, skin temperature and electrodermal activity (perspiration). These were expressed, respectively, in the beat, key and melody of the music. An additional chest strap tracked chest expansion, which was integrated into the music as a sort of whooshing sound. Each of these physiological features is subject to change when a person is feeling anxious: Perspiration, heart rate and respiration all increase, while the blood vessels contract, making the skin temperature decrease.
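The signal-to-music mapping described above can be sketched in a few lines. The sketch below is hypothetical: the function name, key list, and numeric ranges are invented for illustration and are not the Holland Bloorview team's actual implementation — only the pairings (heart rate to beat, skin temperature to key, electrodermal activity to melody) come from the article:

```python
# Hypothetical sketch of the biomusic mapping: heart rate drives the beat,
# skin temperature selects the key, and electrodermal activity (perspiration)
# widens the melody's range. Ranges and keys are illustrative only.

KEYS = ["C minor", "D minor", "E minor", "F major", "G major", "A major"]

def to_music(heart_rate_bpm, skin_temp_c, eda_microsiemens):
    # Beat tempo tracks heart rate directly.
    tempo = heart_rate_bpm
    # Cooler skin (a sign of anxiety, as blood vessels contract) maps to
    # lower, darker keys. Assume a plausible 28-36 C fingertip range.
    idx = int((skin_temp_c - 28.0) / (36.0 - 28.0) * (len(KEYS) - 1))
    key = KEYS[max(0, min(idx, len(KEYS) - 1))]
    # Higher perspiration widens the melody's span, capped at two octaves.
    melody_span = min(1 + int(eda_microsiemens), 24)
    return {"tempo": tempo, "key": key, "melody_span": melody_span}

calm = to_music(heart_rate_bpm=70, skin_temp_c=34.0, eda_microsiemens=2.0)
anxious = to_music(heart_rate_bpm=110, skin_temp_c=30.0, eda_microsiemens=8.0)
print(calm)
print(anxious)
```

Even in this toy form, the anxious reading comes out faster, darker, and more jagged than the calm one, which is exactly the contrast the adult listeners in the 2016 study were able to pick up.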
But there are still a lot of hurdles to overcome, technological and otherwise. Ideally, the system would be less obtrusive. Blain-Moraes implemented a method to estimate breathing based on the amount of blood in the finger, to replace the chest strap, and placed other sensors in a Fitbit-like wristband. Fitting it all into a consumer product like an Apple Watch, while not inconceivable, will require smaller, better sensors than we have available now.
“There’s an important distinction that you need to make between changes in your body that happen to maintain homeostasis and changes in your body that are specific to emotional and mental states,” says Blain-Moraes. “You need sensors that are sensitive enough to be able to pick up these changes — and they tend to be a lot smaller scale and faster — that are related to physiological, mental and emotional states.”
Then there are the scientific challenges. Detecting anxiety seemed to work, when compared to a relaxed state. But how would the technology fare when comparing anxiety to excitement, two states that feature many of the same physiological signals, let alone complex and overlapping emotions? Using the context of the situation may help, but the process is further complicated by the users — kids with autism spectrum disorder don’t always show the same physiological signals, sometimes exhibiting increased heart rate in non-anxiety states, showing a narrower range of electrodermal activity and differing skin temperature responses.
"Biomusic and sonification technologies are an interesting approach to communicating emotional states," says Miriam Lense, a clinical psychologist and research instructor at Vanderbilt University Medical Center in the Program for Music, Mind and Society. "It remains to be seen how well this technology can distinguish states that have overlapping physiological output—for example, both excitement and anxiety involve heightened arousal—as well as mixed and fluctuating states. In different populations and for different individuals, there may be differences in how states are manifested physiologically."
Finally, and most problematically, there are ethical dilemmas. What biomusic is doing is broadcasting very personal information — one’s emotional state — publicly. In many of the use cases, the people in question don’t have the ability to communicate consent. And when a person is unable to verify the accuracy of that information — say, that they are in fact feeling anxious — that person may not be able to correct a misunderstanding.
“It’s like with many ethical issues, there isn’t a right or there isn’t a wrong,” says Biddiss. “It could equally be considered wrong to deny a person a communication pathway with their loved ones.”
In a worst-case scenario, this could play out in a feedback loop of embarrassing biomusic. Once, during a lecture, Blain-Moraes wore a biomusic system. When she was asked a difficult question, the biomusic intensified, causing everyone to laugh, which made her embarrassed, so it intensified further, and everyone laughed more — and so on.
Despite these issues, biomusic is progressing as a technology. It’s simple to interpret and doesn’t require undivided, visual attention. Blain-Moraes’ team at McGill is working toward an app, with companion sensors. They’re in the research and design stages, she says, sharing prototypes with caregivers and patients with dementia or autism to ensure that it’s a participatory process. In a previous study in Augmented and Alternative Communication by Blain-Moraes, Biddiss, and several others, parents and caregivers viewed biomusic as a powerful and positive tool, calling it refreshing and humanizing.
“This is really meant to be a ubiquitous tool, that can be used to make people more aware of their emotions,” Blain-Moraes says.
Last January, during a livestream on YouTube and Twitch, professional StarCraft II player Grzegorz “MaNa” Komincz from Poland struck a blow for humankind when he defeated a multi-million-dollar artificial intelligence agent known as AlphaStar, designed specifically to pummel human players in the popular real-time strategy game.
The public loss in front of tens of thousands of eSports fans was a blow for Google parent company Alphabet’s London-based artificial intelligence subsidiary, DeepMind, which developed AlphaStar. But even if the A.I. lost the battle, it had already won the war; a previous iteration had already defeated Komincz five times in a row and wiped the floor with his teammate, Dario “TLO” Wünsch, showing that AlphaStar had sufficiently mastered the video game, which machine learning researchers have chosen as a benchmark of A.I. progress.
In the months since, AlphaStar has only grown stronger and is now able to defeat 99.8 percent of StarCraft II players online, achieving Grandmaster rank in the game on the official site Battle.net, a feat described today in a new paper in the journal Nature.
David Silver, principal research scientist at DeepMind, at a demo of AlphaStar in January. (DeepMind)
Back in 1992, IBM first developed a rudimentary A.I. that learned to become a better backgammon player through trial and error. Since then, new A.I. agents have slowly but surely dominated the world of games, and the ability to master beloved human strategy games has become one of the chief ways artificial intelligence is assessed.
In 1997, IBM’s Deep Blue beat Garry Kasparov, the world’s best chess player, launching the era of digital chess supremacy. More recently, in 2016, DeepMind’s AlphaGo beat the best human players of the Chinese game Go, a complex board game with thousands of possible moves each turn that some believed A.I. would not crack for another century. Late last year, AlphaZero, the next iteration of the A.I., not only taught itself to become the best chess player in the world in just four hours, it also mastered the chess-like Japanese game Shogi in two hours as well as Go in just days.
While machines could probably dominate in games like Monopoly or Settlers of Catan, A.I. research is now moving away from classic board games to video games, which, with their combination of physical dexterity, strategy and randomness can be much harder for machines to master.
“The history of progress in artificial intelligence has been marked by milestone achievements in games. Ever since computers cracked Go, chess and poker, StarCraft has emerged by consensus as the next grand challenge,” David Silver, principal research scientist at DeepMind says in a statement. “The game’s complexity is much greater than chess, because players control hundreds of units; more complex than Go, because there are 10²⁶ possible choices for every move; and players have less information about their opponents than in poker.”
David Churchill, a computer scientist at the Memorial University of Newfoundland who has run an annual StarCraft A.I. tournament for the last decade and served as a reviewer for the new paper, says a game like chess plays into an A.I.’s strengths. Each player takes a turn and each one has as long as possible to consider the next move. Each move opens up a set of new moves. And each player is in command of all the information on the board—they can see what their opponent is doing and anticipate their next moves.
“StarCraft completely flips all of that. Instead of alternate move, it’s simultaneous move,” Churchill says. “And there’s a ‘fog of war’ over the map. There’s a lot going on at your opponent’s base that you can’t see until you have scouted a location. There’s a lot of strategy that goes into thinking about what your opponent could have, what they couldn’t have and what you should do to counteract that when you can’t actually see what's happening.”
AlphaStar (Zerg, in red) defending an early aggression where the opponent built part of the base near AlphaStar's base, showcasing robustness. (DeepMind)
Add to that the fact that there can be 200 individual units on the field at any given time in StarCraft II, each with hundreds of possible actions, and the variables become astronomical. “It’s a way more complex game,” Churchill says. “It’s almost like playing chess while playing soccer.”
Over the years, Churchill has seen A.I. programs that could master one or two elements of StarCraft fairly well, but nothing could really pull it all together. The most impressive part of AlphaStar, he says, isn’t that it can beat humans; it’s that it can tackle the game as a whole.
So how did DeepMind’s A.I. go from knocking over knights and rooks to mastering soccer-chess with laser guns? Earlier A.I. agents, including DeepMind’s FTW algorithm, which earlier this year studied teamwork while playing the video game Quake III Arena, learned to master games by playing against versions of themselves. However, the two machine opponents were equally matched and equally aggressive algorithms. Because of that, the A.I. only learned a few styles of gameplay. It was like matching Babe Ruth against Babe Ruth; the A.I. learned how to handle home runs, but had less success against singles, pop flies and bunts.
The DeepMind team decided that for AlphaStar, instead of simply learning by playing against high-powered versions of itself, it would train against a group of A.I. systems they dubbed the League. While some of the opponents in the League were hell-bent on winning the game, others were more willing to take a walloping to help expose weaknesses in AlphaStar’s strategies, like a practice squad helping a quarterback work out plays.
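The league idea—training against a diverse pool of opponents rather than a mirror image of yourself—can be sketched as a toy loop. Everything below is a stand-in invented for illustration: AlphaStar's real league trains deep reinforcement-learning agents, not the single scalar "strength" used here, and the function names and learning rates are arbitrary:

```python
# A toy sketch of league-style training: the learner faces a pool of
# opponents of varied strength, including a strong "exploiter" kept around
# to expose weaknesses. Losses teach more than wins, nudging the learner
# to patch the holes in its play.

import random

def play(strength_a, strength_b):
    """Noisy match: the stronger side usually wins. True if A wins."""
    return random.random() < strength_a / (strength_a + strength_b)

def train_in_league(learner=1.0, league=None, rounds=500, lr=0.05):
    if league is None:
        # Main opponents of varied strength, plus a strong exploiter (1.5).
        league = [0.8, 1.0, 1.2, 1.5]
    for _ in range(rounds):
        opponent = random.choice(league)
        if play(learner, opponent):
            learner += lr * 0.1   # small gain from confirming a strength
        else:
            learner += lr * 0.5   # larger gain from a exposed weakness
    return learner

random.seed(0)
final = train_in_league()
print(final)  # learner strength after facing the whole league
```

The point of the diversity is visible even in this caricature: because the opponents are drawn from a varied pool rather than a single mirror image, the learner is repeatedly forced out of whatever narrow style it has settled into.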
That strategy, combined with other A.I. research techniques like imitation learning, in which AlphaStar analyzed tens of thousands of previous matches, appears to work, at least when it comes to video games.
Eventually, DeepMind believes this type of A.I. learning could be used for projects like robotics, medicine and in self-driving cars. “AlphaStar advances our understanding of A.I. in several key ways: multi-agent training in a competitive league can lead to great performance in highly complex environments, and imitation learning alone can achieve better results than we’d previously supposed,” Oriol Vinyals, DeepMind research scientist and lead author of the new paper says in a statement. “I’m excited to begin exploring ways we can apply these techniques to real-world challenges.”
While AlphaStar is an incredible advance in AI, Churchill thinks it still has room for improvement. For one thing, he thinks there are still humans out there that could beat the AlphaStar program, especially since the A.I. needs to train on any new maps added to the game, something he says human players can adapt to much more quickly. “They’re at the point where they’ve beaten sort of low-tier professional human players. They’re essentially beating benchwarmers in the NBA,” he says. “They have a long way to go before they’re ready to take on the LeBron James of StarCraft.”
Time will tell if DeepMind will develop more techniques that make AlphaStar even better at blasting digital aliens. In the meantime, the company’s various machine learning projects have been challenging themselves against more earthly problems like figuring out how to fold proteins, decipher ancient Greek texts, and learning how to diagnose eye diseases as well or better than doctors.
On "60 Minutes" the other night, Amazon founder Jeff Bezos made drones fun again. They're usually associated with clandestine warfare, but Bezos showed interviewer Charlie Rose--along with the millions of others watching--how the unmanned aircraft can be cool little gizmos that become a part of our daily lives--in this case by delivering stuff you ordered from Amazon right to your doorstep.
Bezos used the program to reveal the wonders of Amazon's "octocopter," a mini-drone with the capability of achieving the Holy Grail of e-commerce--deliveries within 30 minutes. This is still years away, as Bezos acknowledged, but it's clear he thinks drones will one day be as ubiquitous as Domino's drivers.
Bezos' demo had the desired effect--his octocopter was all over the Internet on Cyber Monday, burnishing Amazon's reputation as a company gliding along the cutting edge of customer service. Some derided the whole thing as little more than a beautifully orchestrated publicity stunt, given the not insignificant hurdles commercial drones still need to clear. Other websites, such as The Telegraph in the U.K., piled on. It produced a list of nine things that could go "horribly wrong"--from drone hackers to long weather delays to packages falling from the sky.
The truth is, we won't really know all that can go wrong--or right--with commercial drones until closer to 2020, at least in the U.S. It could happen sooner, but the Federal Aviation Administration (FAA) has been moving slowly and cautiously, not surprising, considering that we're talking about tens of thousands of pilotless vehicles buzzing around in public airspace. Extensive drone testing at six still-to-be-named locations won't begin until next year, almost a year and a half behind the schedule set by Congress.
Me, my drone and I
But let's step back for a minute and forget about messy things like political and legal realities. If Bezos is right, more personal drones are inevitable. Many, no doubt, will be used to make deliveries. (That already appears to be happening in China.) But what else will they be able to do?
Plenty, if you believe some of the ideas that have been floated. And those little flying machines could become a lot more personal than most of us would have imagined.
Consider the possibilities:
1) I'm ready for my selfie: Not long ago, a group of designers from a product strategy firm named frog staged a workshop with the purpose of imagining ways that drones could become a much bigger part of our lives. One idea was an aircraft called the Paparazzi, and, true to its name, it would be all about following you around and recording your life in photos and videos. It would then feed everything directly to your Facebook page. Yes, it sounds ridiculously self-indulgent, but then again, who could have imagined our obsession with self-portraits on phones?
2) Cut to the chase: Here's another idea from the frog workshop, a drone they named the Guardian Angel. Described as the "ultimate accessory for serious runners," it would act as a trainer or exercise companion by flying ahead and setting the pace. It could conceivably tap into data from a heart monitor a runner is wearing and push him or her harder to get pulse rate up. Or it could use data from a previous run and let a person race against himself. In short, these drones would be like wearable tech that you don't actually wear.
3) Take that, Siri: Researchers at M.I.T., meanwhile, have developed a personal drone app they've named Skycall, which serves as a personal tour guide. Sure, you can listen to your smartphone give you directions, but this app/drone combo would actually show you the way. It works like this: You tell the app on your phone where you want to go; it then identifies and contacts the nearest unmanned aircraft, which shows up, like a flying cab, and leads you to your destination.
4) Allow me to revel in my greatness: A British drone maker has designed one that's a variation of the Paparazzi mentioned above, although his is geared more to outdoor types, such as mountain bikers, snowboarders and surfers. It tracks a person through a smartphone and, from overhead, takes a steady stream of photos and videos to capture his or her awesomeness for posterity.
5) An idea whose time has already come: Finally, Dan Farber, writing for CNET the other day, raised the prospect of what he called a "Kindle Drone." He sees it as a device about the size of a baseball, loaded with sensors and a camera, that would serve as a guard and personal assistant. On one hand, it could roam your house gathering data and generally making sure everything's in order. On the other, you could direct it to go find your phone.
Now that has potential.
Video bonus: Here's a drone in action in China, delivering a cake from the air.
Video bonus bonus: It's safe to say this is the only engagement ring delivered by drone.
Video bonus plus: Need to map the Matterhorn? No problem, drones at your service.
Over the course of 1700 miles, they sampled the water for small pieces of plastic more than 100 times. Every single time, they found a high concentration of tiny plastic particles. “It doesn’t look like a garbage dump. It looks like beautiful ocean,” Miriam Goldstein, the chief scientist of the vessel sent by Scripps Institution of Oceanography, said afterward. “But then when you put the nets in the water, you see all the little pieces.”
In the years since, a lot of public attention has been justifiably paid to the physical effects of this debris on animals’ bodies. Nearly all of the dead albatrosses sampled on Midway island, for instance, were found to have stomachs filled with plastic objects that likely killed them.
But surprisingly little attention has been paid to the more insidious chemical consequences of this plastic on food webs—including our own. “We’d look over the bow of the boat and try to count how many visible pieces of plastic were there, but eventually, we got to the point that there were so many pieces that we simply couldn’t count them,” says Chelsea Rochman, who was aboard the expedition’s Scripps vessel and is now a PhD student at San Diego State University. “And one time, I was standing there and thinking about how they’re small enough that many organisms can eat them, and the toxins in them, and at that point I suddenly got goosebumps and had to sit down.”
“This problem is completely different from how it’s portrayed,” she remembers thinking. “And, from my perspective, potentially much worse.”
In the years since, Rochman has shown how plastics can absorb dangerous water-borne toxins, such as industrial byproducts like PCB (a coolant) and PBDE (a flame retardant). Consequently, even plastics that contain no toxic substances themselves, such as polyethylene—the most widely used plastic, found in packaging and tons of other products—can serve as a medium for poisons to coalesce from the marine environment.
But what happens to these toxin-saturated plastics when they're eaten by small fish? In a study published today in Scientific Reports, Rochman and colleagues fill in the picture, showing that the toxins readily transfer to small fish through the plastics they ingest and cause liver stress. This is an unsettling development, given that we already know such pollutants concentrate further the more you move up the food chain, from these fish to the larger predatory fish that we eat on a regular basis.
In the study, researchers soaked small pellets of polyethylene in the waters of San Diego Bay for three months, then tested them and discovered that they’d absorbed toxins leached into the water from nearby industrial and military activities. Next, they put the pollution-soaked pellets in tanks (at concentrations lower than those found in the Great Pacific garbage patch) with a small, roughly one-inch-long species called Japanese rice fish. As a control, they also exposed some of the fish to virgin plastic pellets that hadn’t marinated in the Bay, and a third group of fish got no plastic in their tanks at all.
Researchers still aren’t sure why, but many small fish species will eat these sort of small plastic particles—perhaps because, when covered in bacteria, they resemble food, or perhaps because the fish simply aren’t very selective about what they put in their mouths. In either case, over the course of two months, the fish in the experiment consumed many plastic particles, and their health suffered as a result.
“We saw significantly greater concentrations of many toxic chemicals in the fish that were fed the plastic that had been in the ocean, compared to the fish that got either clean plastic or no plastic at all,” Rochman says. “So, is plastic a vector for these chemicals to transfer to fish or to our food chain? We’re now fairly confident that the answer is yes.”
These chemicals, of course, directly affected the fishes’ health. When the researchers examined the tiny creatures’ livers (which filter out toxins in the blood) they found that the animals exposed to the San Diego Bay-soaked plastic had significantly more indications of physiological stress: 74 percent showed severe depletion of glycogen, an energy store (compared to 46 percent of fish who’d eaten virgin plastic and zero percent of those not exposed to plastic), and 11 percent exhibited widespread death of individual liver cells. By contrast, the fish in the other treatments showed no widespread death of liver cells. One particular plastic-fed fish had even developed a liver tumor during the experimental period.
All this is bad news for the entire food web that rests upon these small fish, which includes us. "If these small fish are eating the plastic directly and getting exposed to these chemicals, and then a bigger fish comes up and eats five of them, they're getting five times the dose, and then the next fish—say, a tuna—eats five of those and they have twenty-five times the dose," Rochman explains. "This is called biomagnification, and it's very well-known and well-understood."
This is the same reason why the EPA advises people to limit their consumption of large predatory fish like tuna. Plastic pollution, whether found in high concentrations in the Great Pacific garbage patch or in the waters surrounding any coastal city, appears to be central to the problem, serving as a vehicle that carries toxins into the food chain in the first place.
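Rochman's five-times, twenty-five-times arithmetic can be sketched as a tiny calculation. This is a hypothetical illustration only, assuming each predator eats the same number of prey and retains their full toxin load; real biomagnification factors vary by chemical and species:

```python
# Hypothetical sketch of biomagnification: if each predator eats
# `prey_per_meal` animals from the level below and retains their
# toxin load, the relative dose multiplies at every trophic step.
def biomagnified_dose(base_dose: float, prey_per_meal: int, levels: int) -> float:
    """Relative toxin dose after `levels` steps up the food chain."""
    return base_dose * prey_per_meal ** levels

# A plastic-eating small fish carries a dose of 1; a fish that eats
# five of them carries 5x; a tuna that eats five of those carries 25x.
print(biomagnified_dose(1, 5, 1))  # 5
print(biomagnified_dose(1, 5, 2))  # 25
```

The exponential growth in the last line is why advisories single out large, long-lived predators like tuna rather than the small fish at the bottom of the chain.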
If you were stung by a bark scorpion, the most venomous scorpion in North America, you’d feel something like the intense, painful jolt of being electrocuted. Moments after the creature flips its tail and injects venom into your skin, the intense pain would be joined by a numbness or tingling in the body part that was stung, and you might experience a shortness of breath. The effect of this venom on some people—small children, the elderly or adults with compromised immune systems—can even trigger frothing at the mouth, seizure-like symptoms, paralysis and potentially death.
Based solely on its body size, the four-inch-long furry grasshopper mouse should die within minutes of being stung: the scorpion's venom causes temporary paralysis, and the muscles that allow the mouse to breathe should shut down, leading to asphyxiation. So you'd think the rodent would avoid scorpions at all costs. But if you put a mouse and a scorpion in the same place, the rodent's reaction is strikingly brazen.
If stung, the rodent might jump back for a moment in surprise. Then, after a brief pause, it'll go in for the kill and devour the scorpion piece by piece:
This predatory behavior isn’t the result of remarkable toughness. As scientists recently discovered, the mouse has evolved a particularly useful adaptation: It’s immune to both the pain and paralytic effects that make the scorpion’s venom so toxic.
Although scientists long knew that the mouse, native to the deserts of the American Southwest, preys upon a range of non-toxic scorpions, “no one had ever really asked whether they attack and kill really toxic scorpions,” says Ashlee Rowe of Michigan State University, who led the new study published today in Science.
To investigate, Rowe visited the desert near Arizona's Santa Rita Mountains and collected a number of mice and scorpions. Back at her lab, when she and colleagues put the two animals together in the same tank, they saw that the mice devoured the scorpions with gusto and were seemingly impervious to their toxic stings, showing no signs of inflammation or paralysis afterward. They even directly injected the venom into other mouse specimens to further confirm that it didn't affect them physiologically.
The question remained, though, whether the mice were merely immune to the venom’s paralyzing effects, or were also unable to feel pain as a result of a sting. “I’d see the mice get stung, and they’d just groom a little bit and blow it off,” Rowe says. After she talked to people who’d been stung and heard how badly it hurt, she hypothesized that the mild reaction in the mice indicated that they were resistant to the pain itself.
Working with Yucheng Xiao and Theodore Cummins of Indiana University, she closely examined the physical structures that connect the sensory neurons (which convey external stimuli, such as pain) to the central nervous system (where pain is experienced). “There are big, long neurons that extend from the hands and feet all the way to the spinal cord, and they’re responsible for taking information from the environment and sending it to the brain,” she says.
Incredibly, the nerve cells associated with the interface between these two systems can continue functioning normally when they’ve been removed from the mice, if they’ve been properly preserved and cultured in a medium. As a result, her team was able to look at the mechanisms that control the flow of signals between the sensory neurons and the spinal cord—structures known as ion channels—and see if those present in grasshopper mice functioned differently than those in house mice when exposed to scorpion venom.
They found that, in house mice, the venom caused a channel known as Nav1.7 to pass along a signal, causing the perception of pain. In grasshopper mice, though, something unexpected happened: The arrival of venom caused no change in the activity of Nav1.7, because proteins produced by a different ion channel, known as Nav1.8, bound to the venom molecules and neutralized them. In fact, this reaction produced an overall numbing effect on the mouse's entire pain transmission system, leaving the animals temporarily incapable of feeling all sorts of pain, including pain unrelated to scorpion venom.
The researchers also looked at the underlying genetics, sequencing the genes that correspond to these alternatively-structured ion channels, which will allow them to investigate the specific evolutionary background of this remarkable adaptation. In theory, the incentives for the mouse species evolving an immunity to scorpion toxins seem obvious: The nocturnal rodent feeds on all sorts of scorpions, so unless it can visually distinguish between those that are benign and toxic, it will face severe consequences if it’s sensitive to the venom. “Death, after all, is a pretty strong selection pressure,” Rowe notes.
But on the other hand, pain serves a crucial evolutionary role, informing an organism when it's in danger. Some other species have been known to evolve resistance to particular toxins (garter snakes, for instance, are resistant to the toxin produced by rough-skinned newts), but these examples all involve toxins that can kill yet don't actually cause pain.
So the fact that grasshopper mice have evolved resistance to pain itself is novel—and likely a result of a very specific set of evolutionary circumstances. One important aspect is that bark scorpions make up a significant proportion of the mouse diet, leading to frequent interactions between the two organisms. Additionally, says Rowe, "the mechanism is specific to the venom itself, so it doesn't compromise the mouse's overall pain pathways." As a result, the mouse is still able to detect other sources of pain (just not right after it's been stung by the scorpion), and thus will know when it's faced with unrelated painful perils.
About three years ago, South Los Angeles resident Ron Finley got fed up with having to drive more than half an hour to find a ripe, pesticide-free tomato. So he decided to plant a vegetable garden in the space between the sidewalk and street outside of his home, located in the working-class neighborhood where he grew up, surrounded by fast food restaurants, liquor stores and other not-so-healthy options.
When the City of Los Angeles told him to stop, based on the old laws that said just trees and lawn could be planted on those skinny strips of urban land, Finley, who is a fashion designer and Blaxploitation memorabilia collector by day, quickly rose to fame as southern California's "guerrilla gardener." By founding a nonprofit called L.A. Green Grounds, whose monthly "dig-ins" feature hundreds of volunteers turning overlooked pieces of urban land into forests of food, Finley became the face of a public campaign against the city, which owns roughly 26 square miles of vacant lots that he believes could fit nearly one billion tomato plants. The city listened, and is now in the final stages of changing the rules to allow fruits and veggies to be planted along sidewalks.
“I'm pretty proud of that,” said Finley, who recently answered a few more questions for Smithsonian.com.
You've called South Los Angeles a “food desert,” a term I've started hearing all over the place. Can you tell me more about what that means?
I call them food prisons, because you're basically captured with your food system. There is no healthy food to be found. Food, if you want to call it that, is literally killing us very slowly. It's all sprayed and genetically modified and pressed and formed and processed. These areas are devoid of any kind of organic, healthy, nutritious food. There's not even a sit-down restaurant where you can have a nice meal prepared. That's what a food desert is. You can go for miles without having anything healthy to eat.
Is this a new phenomenon?
It's nothing new. It's been going on for years. It's just that now we have this proliferation of cancers and asthma and chronic illness. And then you have all these other people who can attest to food being their salvation. We have never heard of half these cancers, and a lot of it has to do with what we put into our bodies. It's like soil to a plant—if you don't have nutrients in that soil, the plant is going to get sick and die.
Why did you confront this issue by planting gardens along sidewalks?
My thing is like, “Flip the script.” Let's start something new. Let's create a new model. Why are we growing grass? What's the purpose of that, when you need to eat? When you have water shortages, why would you water grass? It's more labor intensive, you mow it, and you throw it away. You could be using less energy and growing food and developing an ecosystem that attracts beneficial butterflies, and bees, and hummingbirds. You're creating an ecosystem where everything is linked. Why do I do this? Because we are nature. Everyone tries to separate us from nature. People think nature is over there, that you go drive to nature. Nah, we're organic matter too, just like leaves.
Did your background as a fashion designer give you any special talents to tackle this issue?
I'm a human being. That's my background! I need to eat healthy food. If it's not there, you put it there, you build it. It was an inconvenience for me to get healthy food, so what better way to make it convenient than to grow it myself? In that, there is a multitude of learning possibilities, from meditation to learning systems to understanding that you can't just go from A to M. There's a system you have to follow, and gardening teaches that. Gardening is a metaphor for everything that happens in life. We're all gardeners. Some of us just forgot about it. It was the first job ever.
Why was the City of Los Angeles initially opposed to the sidewalk gardens?
Because of archaic laws. It happened because the system was not able to adapt fast enough to the current situations. But how long have these neighborhoods gone without triage? The neighborhood must do triage on itself. You don't wait for the saviors to come in. You are the guys and gals on the white horse. You've got to fix it yourself.
Have they come around?
The law in L.A. has been amended, due in large part to some people who championed what I'm doing, and the city seeing that this needs to happen. The ordinance is basically done; they're just fine-tuning what edibles you can plant.
Do the neighbors respect the sidewalk gardens? I would worry about people stealing food or trashing them.
The bottom line is that if it's on the street, like if you leave something on the curb, you are basically giving it away. So that's what happens. But you can't eat all the food you grow. It's impossible. You'd be eating all day and all night.
As far as people respecting them, most do. You have some haters, but haters make you famous. That's why you're talking to me.
Usually when people see one of my gardens, it engages them. They say that they don't see hummingbirds in their neighborhood, that they don't see butterflies. If you build it, they will come. It turns out to be a sanctuary.
I'd imagine some folks don't even recognize vegetables, because we are so removed from food farming.
They don't, especially the way I plant. I don't plant in rows. My gardens are more for aesthetics as far as look and appeal. I want beauty. I want color pops. I want different kinds of flowers, different smells and textures. A lot of people don't see it as a vegetable garden, but I think vegetable gardens are for the most part not attractive. Nothing in nature is straight.
You are also working on a new project?
It's a container café concept, with a café [called The Ron Finley Project] attached to a garden. I am putting the first one up on property that I have in South L.A., and then will scale them out for global domination. I am bringing healthy food to the community and showing people how to grow it and cook it. It will be a cafe where people can come to have lessons, to eat, to rent garden plots.
And people seem to be into your message too.
It's needed, and it's happening around the world, from North Africa to Newfoundland to Australia to England to South Florida. It's happening everywhere, in every place, and in between. People want their food system back. People want to touch the soil. They want to get back to nature. This society, with computers and cell phones and LinkedIn and Facebook, it's gotten us so far away from the food system that the system was hijacked. But food shouldn't kill you, it should heal.
The Beech King Air is the world's most popular turboprop aircraft. Beech Aircraft Corporation developed the King Air in 1964 as a compromise between piston-engine and jet aircraft and the design quickly found success. The King Air can fly farther and higher than piston-engine aircraft, and, unlike many jets, it can land on the short runways of most small airports. With the three different models, including the C90B, still in production in 2001, this aircraft remains the primary business aircraft for small to mid-size companies, and it is an integral part of the flight inventories of many larger corporations.
Since its incorporation in 1932, Beech Aircraft had been a successful builder of civil and military aircraft. After Walter Beech's death in 1950, his wife and co-founder, Olive Ann, became president and chairman of Beech, and she continued the profitable aircraft production lines while also diversifying into other aerospace endeavors. In 1959, Beech Aircraft introduced the Model 65 Queen Air to fill the gap between the six-seat Twin Bonanza, a derivative of the single-engine Bonanza introduced in 1947, and the Super 18, a deluxe version of the classic Beech 18. The Queen Air featured the low-wing, all-metal, tricycle design typical of Beech's post-war aircraft, carried seven to nine passengers, and featured two horizontally-opposed 340-hp Lycoming engines. Subsequent improvements included a swept tail and a pressurized fuselage, but when turboprop engines were added to a Queen Air 88 in 1964, it was re-designated the King Air 90.
In August 1963, Beech Aircraft announced the King Air design to meet the requirements of executive and corporate business travel for six to nine passengers, using turboprop engines to bridge the gap between piston-power and jet aircraft. The first King Air, powered by Pratt & Whitney Canada PT6A-6 engines, flew on January 20, 1964 and, after the prototype completed a 230-hour test program, the design received its type certificate on May 27, 1964. The first production aircraft deliveries began in late 1964.
The design was a low-wing cantilever monoplane of aluminum construction with retractable tricycle landing gear. To improve its utility and safety in changing flight conditions, standard equipment that had been optional on the Queen Air included de-icing boots on the leading edges of the wings, fin, and tailplane. Flight instruments allowed for all-weather capabilities, and various communications and navigation packages included autopilot, radio, and radar systems. The Model 90 had two seats in the cockpit and four reclining passenger seats facing each other in the cabin, with options for a two- or three-place couch for passengers. Air conditioning and soundproofing also improved passenger comfort in the cabin. Two 500-hp P&W Canada PT6A-6 turboprop engines with three-blade Hartzell propellers gave the King Air a top ceiling of 27,400 feet and a range of 1,565 miles at 270 mph. Piston-powered aircraft could not match this performance, while emerging jet aircraft of the 1960s used turbojet engines that were high-priced, noisy, and had high fuel consumption.
Rather than investing in a completely new and expensive technology, Beech built a vastly improved and marketable business aircraft from its existing production line. After the King Air's initial success, Beech concentrated on continuous upgrades to appeal to a range of executive and corporate needs. Sophisticated electronics packages, increased cabin space, and finer interior amenities in Models 90, 100, 200, 300, and 350 provided comfortable working and transport environments for business travelers. Newer models are longer and sport T-tails, but the basic configuration remains the same and continues to appeal as a new or previously owned medium-range aircraft. In addition to the airframe, the Pratt & Whitney Canada PT6A turboprop engine family consistently provides a high level of performance and reliability. Nearly 40 years since its introduction, the King Air series is still the king of the turboprops and fills a significant niche in the business aviation marketplace.
The Museum's aircraft, LJ-34, is an early model with both the Queen Air and King Air designations, 65-90, meaning it was a Queen Air 88 design upgraded with P&W Canada PT6A-6 engines. One hundred and twelve of these models were built. The aircraft received its airworthiness certificate on April 16, 1965 and was registered as N1920H. The exterior was white with Morning Glory Blue and Jubilee Gold trim and the interior was Frontier Green, Black and Arctic Beige. On April 23, 1965, the Aviation Department of Helmerich and Payne, Inc., of Tulsa, Oklahoma, purchased the aircraft and the company flew the aircraft almost every other day for many years, mostly around the Central Plains states. The aircraft's registration changed to N10LE on October 8, 1973 and it flew in California, until May 20, 1976, when the registration was changed to N275DP. I FLY SOUTH of Wilmington, Delaware, owned the aircraft by 1994 and it then returned to Raytheon's Beech Aircraft Corporation in March 1998. Stevens Aviation, of Vandalia, Ohio, performed general maintenance and painted it white with red and gray trim before Raytheon flew it to National Airport near Washington, DC, in April 1998. There, its flight career ended with 7,164.5 hours on the airframe. It was disassembled and trucked across the Potomac River to the Museum for installation in the Business Wings exhibit. In July of 2000, when the exhibit closed, Raytheon donated the aircraft to the Museum.
Climate change can be seen when a mountainside’s trees turn brown thanks to the burrowing of bark beetles, an insect population that explodes during drought, or when an iconic species is pushed closer to extinction. But some of its effects are obvious only to those who look for them. From decades’ worth of data, scientists build narratives about how the oceans are acidifying, the average temperatures are warming and the precipitation is becoming more extreme.
Jill Pelto, a recent graduate from the University of Maine, has made it her mission to communicate these changes. The 22-year-old artist paints vivid watercolors of mountains, glaciers, waves and animals that, on closer inspection, reveal jagged line graphs more commonly seen in the pages of a scientific journal than on a gallery's walls. Pelto incorporates real scientific data into her art. In one piece, the silver bodies of Coho salmon dance over blue, rippled water filling a space under a falling graph line. The line connects data points that document the decline of snow and glacier melt that feed the rivers the fish inhabit. Another combines data that describe the rising of sea levels, the climbing demand for fossil fuels, the decline of glaciers and the soaring average temperatures. All of those line graphs lay one over another to create a landscape telling the story of climate change.
Mauri Pelto, Jill’s father, is a glaciologist and professor at Nichols College in Dudley, Massachusetts. When she was 16, Jill joined him in the mountains of Washington for a field season, measuring the depths of crevasses in the glaciers they tracked, recording the extent of snow and ice, and looking for other changes. The experience was life changing. She hiked up the North Cascades for six more field seasons and, in that time, witnessed the slow deaths of the mountains’ glaciers. Around the world, once intimidating bodies of ice and snow are ceasing their centuries-old movement and becoming static remnants of their former selves, pocked with melt-water pools and riddled with caves in the summer.
Now that she has earned her undergraduate degree in studio art and earth science, Pelto has plans to pursue a Master’s degree in climate science at the University of Maine next fall.
“I think the science evolved more from my love of the outdoors and caring about the environment, but the art was always supposed to be a part of my life,” she says. “I’ve always considered myself an artist first.”
I spoke with Pelto about her inspiration, her process and her desire to communicate the threats of climate change in a way that emotionally resonates with people.
Can you describe one of the most memorable experiences you had out in the field?
Everything about this past field season [late summer 2015] was striking. It was nothing like any of the others in many ways, due to climate change, due to the drought out West. Everything was different. There was virtually no snow left on the glacier, which was really odd to see. It was just all ice, which melts a lot faster. All the little ponds up there were really small, the reservoirs were depleted, but there were also more forming under the glaciers. I saw a huge lake forming there for the first time and that was really bizarre. It's weird, and sad.
Do you carry your art materials with you to the glaciers?
I take small stuff. I usually take a little watercolor sketchbook, a set of watercolors, some pencils. Fieldwork is usually in the morning, so in the late afternoon or early evening, I'll have time to do a watercolor and capture the different aspects of the landscape. During the summer, the sun doesn't set until pretty late.
Pelto features in her own work in Measuring Crevasse Depth. She says: “I received funding from the Center for Undergraduate Research to purchase equipment that helps me measure crevasse dimensions. In the watercolor, I am using a cam-line measuring tape, designed to find the depth of a crevasse. These measurements have allowed me to study the variance in crevasse size across a glacier, and analyze their changes over time.” (Jill Pelto)
When did you start including the graphs of climate data in your work?
I started doing that after this last trip to Washington, this past September. I've been struggling for a long time with how to have an environmental message in my artwork. I've done sketches, but those are more just landscapes and memories for me. So they don't really tell a story.
I realized that people who are interested in science pay attention to graphs. I think they are a really good visual, but other people don't really pay attention to them. That was my first thought when I looked at a graph that my dad made of the decline in glaciers—it is a really good visual of how rapidly the volume of these glaciers has declined. I saw how I could use that as a profile of a glacier, incorporating a graph but giving an artistic quality to it. People can learn from the image because you are seeing actual information, but hopefully they are also emotionally affected by it.
Where do you find the data?
Sometimes I'll be reading something and I'll see a graph that I think will be good for a piece. Often, I'll have a particular topic and I'll want to create something about it, so I'll look for visuals. I'll research different scientific papers, but also different sites like NOAA or NASA, or sites that have climate news—reliable sites where I can find different graphs and decide which one I think represents and best communicates what's going on.
Do you have a favorite piece?
I like the piece on glacier mass balance, which was one of the three in the series I created after this most recent trip to Washington. It's my favorite just because I feel a very personal connection to those glaciers after working on them for seven years.
Why is it important to you to use art to help communicate science?
I think that art is something that people universally enjoy and feel an emotional response to. People across so many disciplines and backgrounds look at and appreciate it, and so in that sense art is a good universal language. My target audience is in many ways people who aren't going to be informed about important topics, especially scientific ones.
What do you hope viewers take away from your work?
I hope to have both intellectual and emotional content in my artwork. I also hope to inspire people to make a difference about these topics. I haven't quite figured out how to do that yet. People have been responding to [these pieces], but I think they are more likely people who already think these topics are important. So I want to find some way to challenge people to do something with my art and make it more of an activist endeavor.
I have a lot of plans. Right now, I have a piece in progress about caribou populations. Another thing I'm trying to do is collaborate with other scientists. They can tell me what they are working on, what the data is and what it might mean for the future.
In our pursuit of life beyond Earth, we've spent countless hours and billions of dollars scanning for radio signals from distant exoplanets and probing the dry riverbeds of Mars for signs of ancient fossils. But what if something is alive right now on a world you can see through a backyard telescope?
Today NASA took the first small step in a mission to explore Jupiter's icy moon Europa, one of the most likely places in our solar system for alien life to exist. The space agency has announced nine scientific instruments that will ride on a Europa-bound probe, which will repeatedly fly past the moon. NASA has yet to approve the actual spacecraft design or set a launch date, saying only that the craft could be ready to launch sometime in the 2020s. But the instruments alone are tantalizing, because they are designed to help answer one of the hottest questions in science today: are we alone in the universe?
"Europa is one of those critical areas where we believe the environment is perfect for the potential development of life," Jim Green, director of NASA's planetary science division, said today in a press briefing. "If we do find life or indications of life, that would be an enormous step forward in our understanding of our place in the universe. If life exists in our solar system, and in Europa in particular, then it must be everywhere in our galaxy."
At first glance, Jupiter's moon Europa doesn't look very inviting. It's small, frozen, airless and bathed in a constant haze of lethal radiation from nearby Jupiter. Ask anyone working in planetary science, though, and they will tell you that Europa is perhaps the most provocative destination on NASA's agenda. That's because if anything is essential to life as we know it, it's water, and Europa has bucketfuls.
Early hints of a hidden ocean on Europa prompted Arthur C. Clarke to pen a sequel to 2001: A Space Odyssey in which advanced aliens help protect primitive Europan life from human meddling. Then, in the 1990s, the Galileo spacecraft shocked the scientific establishment when it confirmed that Europa almost certainly has briny depths. Its ocean lies anywhere from a few thousand feet to 6 miles below the ice, and it contains about twice as much water as all of Earth's seas combined.
As on Earth, the salty ocean of Europa is sitting on top of a rocky seabed, which could be spewing heat and nutrients into the water. One of Europa's neighboring moons, Io, is the most volcanically active body in the solar system, and according to Green, the Europan seafloor probably looks a lot like the churning, pockmarked surface of Io.
"Hydrothermal vents must represent the volcanoes we see on Io, if indeed Europa has an ocean straddling the entire body," he says. Evidence for these hidden hot spots comes from so-called chaos terrain, disturbed regions on the surface that are covered in brownish gunk. Models suggest these spots are where heat from volcanic vents circulates upward through the water and melts sections of the ice above, allowing some nutrients and organic compounds—the building blocks of life—to escape and coat the surface.
Like Earth's shifting tectonic plates, Europa's icy exterior also seems to be diving back into the liquid layer below in a process called subduction, possibly helping such material cycle through its seas. And most recently, the Hubble Space Telescope caught signs that Europa is sending massive plumes of water into space, akin to the explosive geysers found around Earth's geothermal regions.
An artist's rendering of a Europa flyby mission. (NASA/JPL-Caltech)
It seems the more we look at it, the more Europa resembles a frozen mini-Earth, with all the right ingredients to support organisms in its seas. That has scientists champing at the bit to send out a space probe and try to meet the aliens next door. Support in Congress has added the right dose of political clout, and NASA's 2016 budget includes $30 million for formulating a mission.
All nine instruments will be able to fly on whatever spacecraft NASA selects, Curt Niebur, NASA's Europa program scientist, said during the briefing. The probe will be solar powered and will sweep past Europa at least 45 times, sometimes dipping as low as 16 miles from the surface to collect data. Once in place near the Jovian moon, the mission should last for three years.
The agency received 33 proposals from universities and research institutions across the country for the mission's science instruments, which it has narrowed down to these final selections:
- Plasma Instrument for Magnetic Sounding (PIMS), for determining Europa’s ice shell thickness, ocean depth and salinity.
- Interior Characterization of Europa using Magnetometry (ICEMAG), for measuring the magnetic field near Europa and inferring the location, thickness and salinity of the subsurface ocean.
- Mapping Imaging Spectrometer for Europa (MISE), for identifying and mapping the distribution of organics, salts and other materials to determine habitability.
- Europa Imaging System (EIS), for mapping at least 90 percent of Europa at 164-foot resolution.
- Radar for Europa Assessment and Sounding: Ocean to Near-surface (REASON), an ice-penetrating radar designed to characterize Europa’s icy crust and reveal its hidden structure.
- Europa Thermal Emission Imaging System (E-THEMIS), a “heat detector” designed to help detect active sites, such as potential vents where water plumes are erupting into space.
- MAss SPectrometer for Planetary EXploration/Europa (MASPEX), for measuring Europa’s extremely tenuous atmosphere and any surface material ejected into space.
- SUrface Dust Mass Analyzer (SUDA), for measuring the composition of small, solid particles ejected from Europa and providing the opportunity to directly sample the surface and potential plumes on low-altitude flybys.
- Ultraviolet Spectrograph/Europa (UVS), for detecting small plumes and measuring the composition and dynamics of the moon’s rarefied atmosphere.
These instruments "could find indications of life, but they are not life detectors," Niebur stressed. Planetary experts have been debating the issue, he said, and "what became clear is that we don't have a life detector, because we don't have a consensus on the thing that would tell everybody looking at it, this is alive." But the suite of experiments will help NASA directly sample the icy moon for the first time and better understand its icy crust, its internal composition and the true nature of its elusive plumes. "This payload will help us answer all of these questions," said Niebur, "and take great strides forward in understanding the habitability of Europa."
Evening picnics in a park, sunset beers by a lake and warm nights with the windows open are just some of the delights of midsummer. But as dusk falls, one of the most infuriating creatures on the planet stirs: the mosquito. Outdoor activities are abandoned in an ankle-scratching frenzy and sleep is disturbed as we haplessly swat at the whining source of our torment.
Of course, all these discomforts are nothing compared to the damage mosquitoes do as transmitters of diseases such as malaria, dengue or yellow fever. According to the World Health Organization, mosquito-borne yellow fever alone causes more than 30,000 deaths annually.
But now, in the ongoing battle between human and mosquito, we might just have gained the upper hand. Scientists at Texas A&M University believe they have found a way to outsmart the bloodsuckers by tricking them into deciding not to bite us, and their main allies in this ruse are the billions of bacteria that live on our skin.
Bacteria "talk" to one another using a chemical system called quorum sensing. This cell-to-cell communication is used to control or prevent particular behaviors within a community, such as swarming or producing biofilm, like the formation of plaque on our teeth. To start a conversation, bacteria produce compounds that contain specific biochemical messages. The more of these compounds that are produced, the more concentrated the message becomes, until it reaches a threshold that causes a group response. Behaviors are more likely to occur as the message gets "louder"—and that makes it easy for other organisms to eavesdrop on the bacterial chatter.
“Even people respond to quorum-sensing molecules," says Jeffery K. Tomberlin, a behavioral ecologist at Texas A&M. "For example, if something is decomposing, there are quorum-sensing molecules that are released in that process that tell us it is not a good environment.”
Enter the mosquito. Previous work suggests that factors such as the volume of carbon dioxide we exhale, body temperature, body odor and even the color of our clothes may influence how attractive we are to the bloodthirsty insects. According to Tomberlin, mosquitoes can also hack into bacterial communication systems using chemoreceptors on their antennae, rather like World War II code-breakers intercepting an encrypted transmission: “Their radar system is extremely sensitive and can pick up these messages that are occurring. And they have the equipment that allows them to interrupt those messages,” he says.
Evolutionarily speaking, quorum sensing has always occurred in nature, and mosquitoes have evolved the ability to perceive these communications pathways via natural selection. Mosquitoes benefit from this hacking by gleaning information about the quality of a blood host and being selective about who they target. But the bacterial communication pathways continue to evolve, resulting in a race between competing organisms—on one side, bacteria are producing messages, and on the other, mosquitoes are trying to interpret them.
“Your opponent is always changing the encryption of their code. You have to break that code, and your survivorship depends on it,” says Tomberlin. Knowing that microbial communication can affect mosquito attraction, Tomberlin and his colleagues at Texas A&M—including Craig Coates, Tawni Crippen and graduate researcher Xinyang Zhang—have now shown that humans may be able to hack the hackers and influence whether mosquitoes decide to bite us.
Staphylococcus epidermidis is one among more than a thousand bacterial species commonly occurring on human skin. The team used a mutant form of S. epidermidis, in which they deleted the genetic mechanism that encodes its quorum sensing system. With the bacteria's biochemical pathways disrupted, the mosquitoes' "surveillance equipment" could no longer eavesdrop.
A microscope view of the common skin bacterium Staphylococcus epidermidis. (David Scharf/Corbis)
The team then carried out a series of experiments using blood feeders, which were covered in sterile cloth treated with either the silenced mutants or unmodified wild-type bacteria. The team compared the feeders' attractiveness to the female Aedes aegypti mosquito, the main transmitting agent for yellow fever.
The blood feeders consisted of a culture flask sealed with a paraffin film that the mosquitoes could penetrate. A millimeter of rabbit blood was injected between the film and the culture flask, and warm water was pumped through the flask to keep the blood at average body temperature. The team placed feeders inside transparent plastic cages containing 50 mosquitoes and left them in the cages for 15 minutes. They recorded the insects' behavior on video, allowing them to count the number of feeding mosquitoes at each minute.
The team tested different scenarios, such as placing blood feeders treated with either wild-type or mutant bacteria in separate cages, then putting both types of bacteria in the same cage at the same time. When given a choice, “twice as many mosquitoes were attracted to the wild type on the blood feeder rather than the mutant on a blood feeder,” Tomberlin says.
Based on these findings, which are currently being prepared for submission to PLOS One, the team believes that inhibiting bacterial communications could lead to new methods for deterring mosquitoes that would be safer than harsh chemical repellents such as DEET. This could have important implications for reducing the spread of mosquito-borne diseases such as yellow fever. “Bacteria are our first line of defence, and we want to encourage their proliferation. However, we may be able to produce natural repellents that will allow us to lie to mosquitoes," says Tomberlin. "We might want to modify the messages that are being released that would tell a mosquito that we are not a good host, instead of developing chemicals that can be harmful to our bacteria on our skin, or to our skin itself.”
Tomberlin notes that manipulating bacterial conversations may have many other applications, and that these are being actively studied in other institutions. In terms of health applications, blocking communication between bacteria in the lungs of patients with cystic fibrosis could lead to new treatments for the disease. And in the energy industry, inhibiting quorum sensing could reduce oil pipeline corrosion caused by microbes.
Researchers such as Thomas K. Wood of Pennsylvania State University, Rodolfo García-Contreras of the Universidad Nacional Autónoma de Mexico and Toshinari Maeda of the Kyushu Institute of Technology are leaders in quorum sensing research. According to Wood, efforts to manipulate bacterial communication need to account for the microbes' sophisticated counter-espionage techniques: “We are also trying to understand how bacteria evolve resistance to the new types of compounds designed to stop bacteria from talking,” he says.
So now, for mosquitoes and for science, the code-breaking race is on.
Former Vice President Al Gore is back in the news with his documentary film An Inconvenient Truth, in which he travels the world presenting a slide show about global climate change. He also wrote a companion book of the same title (Rodale). Gore spoke with SMITHSONIAN about global warming, glacial melting and Russell Crowe.
Are you happy with the way the film has been received?
I couldn't be happier with the fact that it’s been extremely well reviewed, and I'm happy because it improves the chance for the movie to find its audience and to reach more people in a shorter period of time. [But] when a respected scientist writes a technical review saying "he got the science right"—that’s what thrills me.
What did you do to make sure you got the science right?
For 30 years now, one of the roles that I have played is to talk extensively with the scientific experts and gain their trust and confidence to the point where they're willing to spend the time to get me as up to speed as a layperson can get up to speed and then allow me to ask them questions such as, "Forget what you think you can get through the scientific publication process in the next two years. Tell me what your gut feeling is." I translate those gut feelings into plain English and take it back to them and let them privately vet it...[to] get it both communicable to the average person like me and to retain the integrity of the scientific analysis.
Some critics are skeptical of the 20-foot rise in sea levels that you predict. Is this just the worst-case scenario?
Not at all. The worst-case scenario is 140 feet, although that would be far, far into the future. There are two wild cards: one is Greenland, the other is West Antarctica. Greenland is the wilder of the two wild cards.... It's undergoing a radical discontinuity, it seems, both with a rapid increase in the [glacial] melt rate and with other developments that are quite concerning. For example, they have for the last 10 or 15 years been following the emergence of these icequakes. Icequakes are like earthquakes. They're now being picked up by seismometers all over the world, and in 1993 I believe there were 7. In 1999 that doubled to—if I'm not mistaken—14. Last year there were 30. And with these icequakes doubling twice in little more than a decade, there is growing concern. Here's the other thing: [the collapse of Antarctica's Larsen B ice shelf] was quite a significant event because the scientists that specialize in such things were genuinely forced to go back and examine what it was about their models that led them to radically [overestimate] the amount of time it would take an ice shelf like that to break up. They retrofitted into their models one new understanding that came out of that event, and that is what happens when you have surface melting resulting in pooling on the top of a large, thick ice shelf. The prior understanding had been that the water sinks down into the mass of the ice and refreezes. In this case they found that instead of refreezing it tunneled and left the ice like Swiss cheese, metaphorically, and vulnerable to a sudden breakup. It broke up in 35 days, and in fact the majority broke up in only two days. Now they see the same tunneling phenomena on Greenland. When I ask off the record, "Give me some time frames here, how realistic is it that we could see a catastrophic breakup and melting in Greenland in this century?" they cannot rule that out and privately will not.
Are the scientists being overly cautious?
No. They just do what scientists do and are very circumspect. If you have a curve of possibilities and the evidence points toward the more extreme end of the curve, if you're a scientist you're going to want extra levels of confidence before you go out and say, "This is more likely than I thought." I do not say in either the movie or the book what time frame ought to be placed on [glacial melting]. But it is not impossible that that could happen in a much shorter time frame than they are now saying. And I've excluded from my presentation a lot of more extreme predictions.
Has the media moved beyond the idea of global warming as a controversial theory?
I think for the time being that's past us. There is now a brand-new focus on the science. But I have seen periods similar to this, when there was a flurry of concern and focus and then it dissipated. It's partly due to the nature of the crisis. The time scale during which it unfolds is shockingly swift in geological time, and even in the context of a single life span, but in the six-hour news cycle it could still be displaced by other earthshaking events, such as Russell Crowe throwing a telephone at a hotel concierge or Britney Spears having a baby.
How do you keep the issue alive?
Tipper and I are devoting 100 percent of [our] profits from the movie and the book to a new bipartisan educational campaign that will run advertising and will be a presence in the mass media, to continue lifting this urgent crisis up for people to see and focus on.
People still think of you as the former Democratic presidential candidate—how do you get away from the idea of global warming as a liberal issue?
It is for that reason that I am not even on the board of this new group. It's co-chaired by Ted Roosevelt IV, a Republican investment banker and a prominent Republican environmental leader, and Larry Schweiger, who is head of the National Wildlife Federation. His group is the most bipartisan in its membership—lots of hunters and fishermen for example. People on the board include [members of the Reagan and the first Bush administrations]. The Alliance for Climate Protection is determinedly bipartisan and nonpartisan, and its founding principles preclude any endorsement of specific legislation or candidates—it's focused purely and simply on public education and awareness.