How does one get stuck studying frog tongues? Our study into the sticky, slimy world of frogs all began with a humorous video of a real African bullfrog lunging at fake insects in a mobile game. This frog was clearly an expert at gaming; the speed and accuracy of its tongue could rival the thumbs of texting teenagers.
The versatile frog tongue can grab wet, hairy and slippery surfaces with equal ease. It does a lot better than our engineered adhesives—not even household tapes can firmly stick to wet or dusty surfaces. What makes this tongue even more impressive is its speed: Over 4,000 species of frog and toad snag prey faster than a human can blink.
What makes the frog tongue so uniquely sticky? Our group aimed to find out.
Early modern scientific attention to frog tongues came in 1849, when biologist Augustus Waller published the first documented frog tongue study on nerves and papillae—the surface microstructures found on the tongue. Waller was fascinated with the soft, sticky nature of the frog tongue and what he called “the peculiar advantages possessed by the tongue of the living frog…the extreme elasticity and transparency of this organ induced me to submit it to the microscope.”
Fast-forward 165 years, to 2014, when biomechanics researchers Kleinteich and Gorb became the first to measure tongue forces, in the horned frog Ceratophrys cranwelli. They found that frog adhesion forces can reach up to 1.4 times the frog’s body weight, meaning the sticky tongue is strong enough to hold loads heavier than the frog itself. They postulated that the tongue acts like sticky tape or a pressure-sensitive adhesive—a permanently tacky surface that adheres to substrates under light pressure.
Frog tongue holding up a petri dish just with its stickiness. (Alexis Noel/Georgia Tech, CC BY-ND)
To begin our own study on sticky frog tongues, we filmed various frogs and toads eating insects using high-speed videography. We found that the frog’s tongue is able to capture an insect in under 0.07 seconds, five times faster than a human eye blink. In addition, insect acceleration toward the frog’s mouth during capture can reach 12 times the acceleration of gravity. For comparison, astronauts normally experience around three times the acceleration of gravity during a rocket launch.
Thoroughly intrigued, we wanted to understand how the sticky tongue holds onto prey so well at high accelerations. We first had to gather some frog tongues. Here at Georgia Tech, we tracked down an on-campus biology dissection class that used northern leopard frogs on a regular basis.
The plan was this: Poke the tongue tissue to determine softness, and spin the frog saliva between two plates to determine viscosity. Softness and viscosity are common metrics for comparing solid and fluid materials, respectively. Softness describes tongue deformation when a stretching force is applied, and viscosity describes saliva’s resistance to movement.
Determining the softness of frog tongue tissue was no easy task. We had to create our own indentation tools, since the tongue softness was beyond the capabilities of the traditional materials-testing equipment on campus. We decided to use an indentation machine, which pokes biological materials and measures forces. The force-displacement relationship can then describe softness based on the indentation head shape, such as a cylinder or sphere.
When the indentation head pulls away from the tongue, it adheres and stretches. (Alexis Noel/Georgia Tech, CC BY-ND)
However, typical heads for indentation machines can cost $500 or more. Not wanting to spend the money or wait on shipping, we decided to make our own spherical and flat-head indenters from stainless steel earrings. After our tests, we found frog tongues are about as soft as brain tissue and 10 times softer than the human tongue. Yes, we tested brain and human tongue tissue (post mortem) in the lab for comparison.
For testing saliva properties, we ran into a problem: The machine that would spin frog saliva required about one-fifth of a teaspoon of fluid to run the test. Sounds small, but not in the context of collecting frog spit. Amphibians are unique in that they secrete saliva through glands located on their tongue. So, one night we spent a few hours scraping 15 dead frog tongues to get a saliva sample large enough for the testing equipment.
How do you get saliva off a frog tongue? Easy. First, you pull the tongue out of the mouth. Second, you rub the tongue on a plastic sheet until a (tiny) saliva globule is formed. Globules form due to the long-chain mucus proteins that exist in the frog saliva, much like human saliva; these proteins tangle like pasta when swirled. Then you quickly grab the globule using tweezers and place it in an airtight container to reduce evaporation.
After testing, we were surprised to find that the saliva is a two-phase viscoelastic fluid. The two phases depend on how quickly the saliva is sheared between parallel plates. At low shear rates, the saliva is very thick and viscous; at high shear rates, it becomes thin and liquidy. This is similar to paint, which is easily spread by a brush yet remains firmly adhered to the wall. It’s these two phases that give the saliva its reversibility in prey capture, for adhering to and releasing an insect.
How do soft tissue and a two-phase saliva help the frog tongue stick to an insect? Let’s walk through a prey-capture scenario, which begins with a frog tongue zooming out of the mouth and slamming into an insect.
During this impact phase, the tongue deforms and wraps around the insect, increasing contact area. The saliva becomes liquidy, penetrating the insect cracks. As the frog pulls its tongue back into the mouth, the tissue stretches like a spring, reducing forces on the insect (similar to how a bungee cord reduces forces on your ankle). The saliva returns to its thick, viscous state, maintaining high grip on the insect. Once the insect is inside the mouth, the eyeballs push the insect down the throat, causing the saliva to once again become thin and liquidy.
It’s possible that untangling the adhesion secrets of frog tongues could have future applications for things like high-speed adhesive mechanisms for conveyor belts, or fast grabbing mechanisms in soft robotics.
Most importantly, this work provides valuable insight into the biology and function of amphibians—40 percent of which are in catastrophic decline or already extinct. Working with conservation organization The Amphibian Foundation, we had access to live and preserved species of frog. The results of our research provide us with a greater understanding of this imperiled group. The knowledge gathered on unique functions of frog and toad species can inform conservation decisions for managing populations in dynamic and declining ecosystems.
While it’s not easy being green, a frog may find comfort in the fact that its tongue is one amazing adhesive.
At first glance, portraits of the Belamy family seem to exemplify life in the upper echelons of French society. The haughty features of patriarch Le Comte De Belamy are framed by a voluminous white powdered wig, while the dynastic matriarch, La Comtesse, oozes wealth in her colorful silk attire. Skipping ahead several generations, you’ll encounter Madame De Belamy, whose tightly coiffed hair is tucked inside a blue hat rendered in Impressionistic strokes, and her son Edmond, a comparatively dour-looking young man clad almost entirely in black.
But there’s a catch to this story of generational greatness: In addition to being wholly fictional, the Belamy family hovers in that amorphous space between artificial intelligence and art. Although its members’ names and places in the family tree were assigned by Obvious, a Paris-based art collective, their likenesses are the brainchild of Generative Adversarial Networks, a machine learning algorithm better known by the acronym GAN.
Now, Naomi Rea writes for Artnet News, the youngest member of the family—as depicted in “Portrait of Edmond Belamy”—is set to make history as the subject of the first AI-produced artwork sold by an auction house.
A canvas print of Obvious’ (and GAN’s) creation will be included in Christie’s late October auction of Prints and Multiples, the New York-based auction house reports. It remains to be seen how bidders will react to the AI work, but Obvious remains optimistic, citing an estimated sale price of €7,000 to €10,000, or roughly $8,000 to $11,500.
Hugo Caselles-Dupré, one of Obvious’ three co-founders, tells Christie’s Jonathan Bastable that GAN consists of two parts: the Generator, which produces images based on a data set of 15,000 portraits painted between the 14th and 20th centuries, and the Discriminator, which attempts to differentiate manmade and AI-generated works.
“The aim is to fool the Discriminator into thinking that the new images are real-life portraits,” Caselles-Dupré says. “Then we have a result.”
The painting is one of 11 portraits depicting the fictional Belamy family, which was named in honor of GAN creator Ian Goodfellow. (Courtesy of Obvious)
According to an essay posted on Obvious’ Medium page, GAN analyzes thousands of images to learn the basic features of portraiture. The subsequent AI-generated portraits are similar to the images in the original data set, yet distinct from any one of them. A different image is rendered with every execution of the algorithm.
“This reflects a human creativity feature: We will never create twice the same thing,” Obvious writes.
Obvious, a three-man team made up of Caselles-Dupré, Pierre Fautrel and Gauthier Vernier, owes much to American AI researcher Ian Goodfellow, who developed the GAN algorithm in 2014. As Time’s Ciara Nugent notes, the rough French translation of “Goodfellow”—bel ami—provided inspiration for the fictional family’s name.
The Belamy portraits are painted in a semi-realistic style, their blurred details creating an overarching impression of motion. In the bottom right corner of the canvases, the artist’s signature is replaced by an intimidating mathematical equation: the loss function at the heart of the GAN algorithm.
Such proclamations of authorship are a central concern in the art world’s AI debate. Skeptics of the new technology doubt that machines can produce art, which has long been viewed as a uniquely human activity. If an AI researcher designs and executes an algorithm, who is the end product’s true creator: human artist or machine? And, most importantly, if robots can create art, where does that leave humans?
There are no easy answers to these questions, but as Rose Eveleth, host of the future-centric podcast Flash Forward, argued in a recent episode, this isn’t the first time humans have felt threatened—or entranced—by machine-made art.
Swiss-born watchmaker Pierre Jaquet-Droz launched the golden age of automata, or kinetic sculptures designed to mimic human movement, with “The Writer.” The 1770s doll was made of 6,000 moving parts that allowed it to scribble out an array of messages, dip a quill into an inkwell, and blink with unseeing eyes.
At the time, philosophers were engaged in a heated battle over what it meant to be alive, Eveleth notes. While we don't think of today's AI as a living organism, modern technology does continue to raise existential questions about what it means to be human. Consider a more recent innovation: the camera. It also posed some philosophical problems, Caselles-Dupré tells Time’s Nugent.
“Back then, people were saying that photography is not real art and people who take pictures are like machines,” he says. “And now we can all agree that photography has become a real branch of art.”
By including “Portrait of Edmond Belamy” in its fall sale, Christie’s isn’t offering a final ruling on the value of AI art. Still, the decision is sure to attract ire, elation and, if the sale is successful, newfound faith in the burgeoning medium.
“I’ve tended to think human authorship was quite important—that link with someone on the other side,” Richard Lloyd, head of Christie’s Prints and Multiples department, tells Nugent. “But you could also say art is in the eye of the beholder. If people find it emotionally charged and inspiring, then it is. If it waddles and it quacks, it’s a duck.”
Imagine coming down for breakfast and, instead of popping a piece of toast in the toaster and boiling an egg, you stick a cartridge in a printer. A minute or two later, you’ve got a freshly printed banana and flaxseed muffin.
Thanks to a new kind of 3D food printer, the printed breakfast is several steps closer to reality for the average consumer.
"Food printing may be the 'killer app' of 3D printing," says Hod Lipson, who’s led the creation of the new printer. "It's completely uncharted territory."
Lipson, a professor of mechanical engineering at Columbia University, has been studying 3D printing for nearly 20 years, working on printing things like plastics, metals, electronics and biomaterials. His work on 3D food printing came out of his research on printing complete 3D robots that could, in theory, “walk off the printer.”
To achieve something like this, a printer must be able to print with many materials at the same time. While experimenting with making multi-material printers, Lipson noticed the students in his lab were beginning to use food as a test material.
“They were using cookie dough, cheese, chocolate, all kinds of food materials you might find around an engineering lab,” he says. “In the beginning, it was sort of a frivolous thing. But when people came to the lab and looked at it, they actually got really excited by the food printing.”
So Lipson and his team began to take a more serious look at just what they could do with food. There are two basic approaches to 3D food printing, Lipson explains. The first involves using powders, which are bound together during the printing process with a liquid such as water. The second—the approach used by Lipson’s lab—is extrusion-based, using syringes that deposit gels or pastes in specific locations determined by the software’s “recipe.”
Lipson’s prototype involves an infrared cooking element, which cooks various parts of the printed product at specific times.
“We’ve used all kinds of materials, with different levels of success,” Lipson says. “Sometimes the materials are conventional—eggs, flour, cookie dough, cheese, pesto, jam. Cream cheese is something students like to work with a lot.”
The printer prototype (Timothy Lee Photographers, Columbia University)
They’ve also recently collaborated with a New York culinary school, letting chefs play around with the prototype to see what they’d come up with.
“They kind of broke the machine by really pushing it to its limits,” Lipson says. “One thing we’ve learned is printing in cream cheese is very easy, but printing in polenta and beets is very hard. It has these granules in it, so from an engineering standpoint it’s much more challenging.”
It’s also difficult to predict how different foods will fare when combined. It’s easy enough to create recipes based on single items like chocolate, whose properties are well-established. But when you start to mix things together—mixing, of course, being fundamental to cooking—the mixtures may have much more complex behaviors. Another challenge is figuring out when to cook what during the printing process. If you’re printing a pyramid of salmon and mashed potatoes, the salmon and the potatoes will need very different cooking times and temperatures. The team is tackling this problem with software design, working with computer scientists to create software that will predict what the final product will look like after cooking.
The printer Lipson's team has made is not the only food printer to be developed in recent years. But while products like Hershey’s chocolate-printing CocoJet or the Magic Candy Factory’s 3D gummy printer are single-ingredient, limiting their use for the general public, Lipson’s printer is unique for being able to handle many ingredients at once, and cook them as it goes.
Lipson sees the printer as having two main uses for consumers. First, it could be a specialty appliance for cooking novel foods difficult to achieve by any other process. You could print, say, a complex pastry designed by someone in Japan, a recipe you’d never have the expertise or equipment to make by hand. Lipson says he could imagine digital recipes going viral, spreading across the globe.
The second use is about health and targeted nutrition. People are already increasingly interested in personal biometrics, tracking their blood pressure, pulse, calorie burn and more using cell phones and computers. In the future, it may be possible to track your own health in much greater detail—your blood sugar, your calcium needs or your current vitamin D level. The printer could then respond to those details with a customized meal, produced from a cartridge of ingredients.
“Imagine a world where the breakfast that you eat has exactly what you need that day,” Lipson says. “Your muffin has, say, a little less sugar, a little more calcium.”
As for when the printer might be available to consumers, Lipson says it’s more a business challenge than a technology one.
“How do you get FDA approval? How do you sell the cartridges? Who owns the recipe? How do you make money off this?” he says. “It’s a completely new way of thinking about food. It’s very radical.”
A recent redesign of the prototype may bring the product closer to being something the average consumer would accept. Previous versions of the printer were very high-tech, full of tubes and sticking-out nozzles. People had a hard time imagining it on their kitchen counters.
Then one of Lipson’s students, Drim Stokhuijzen, an industrial designer, completely redesigned the machine, giving it the sleek look of a high-end coffee maker.
“His design is so beautiful people are saying for the first time, ‘oh, I can see the appeal of food printing, this is something I might actually use,’” Lipson says.
Although Lipson doesn’t think 3D food printing will replace other cooking techniques, he does think it will revolutionize the kitchen.
“For millennia we’ve been cooking the same way,” he says. “Cooking is one of the things that hasn’t changed for eternity. We still cook over an open flame like cavemen. Software has permeated almost every aspect of our lives except cooking. The moment software enters any field—from manufacturing to communications to music, you name it—it takes off and usually transforms it. I think that food printing is one of the ways software is going to enter our kitchen.”
There’s no good reason to expect that alien life, should any prove detectable, will be created in humanity’s image, as Hollywood films tend to model it, said Seth Shostak, director of the Search for Extraterrestrial Intelligence (SETI), speaking Sunday at Smithsonian magazine’s “Future is Here” festival in Washington, D.C. Shostak, by the way, consults with film companies on alien depictions.
“Hollywood usually resorts to little gray guys with big eyeballs, no hair, no sense of humor and no clothes, because it saves a whole lot of backstory,” he said. “We’ve been rather anthropocentric. We assume that they’re somewhat like we are. That may be fundamentally wrong.” In response to an audience member’s question, he added, “Our data set on alien sociology is sparse.”
Extraterrestrial life is likely to be more computer-like than human in nature. Just as humans are building artificial intelligence, aliens may do the same, Shostak said, and instead of finding the kinds of aliens that show up in movies, humans could be more likely to encounter the robots or computer systems those aliens created. So humans who hope to find extraterrestrial life ought to look in places different from those we’ve imagined to date. Further-evolved alien life probably doesn’t require planets with water and oxygen, as people do, Shostak said.
Seth Shostak, director of SETI, spoke about the search for extraterrestrial life. (Richard Greenhouse Photography)
Shostak's critique of popular culture's take on aliens' appearance was one of many critiques raised at the festival, which played host to scientists, philosophers, authors and engineers. While there, they envisioned a future where science meets science fiction. Sunday’s lineup of speakers, supported in part by the John Templeton Foundation, included Frans de Waal, a professor of primate behavior at Emory University; Marco Tempest, a “cyber illusionist”; Rebecca Newberger Goldstein, a philosopher and author; Sara Seager, a planetary scientist and astrophysicist; and several NASA scientists and engineers.
As varied as they were, the talks had one common thread: Human narcissism can be rather misleading and unproductive at times, while at others, it may hold great scientific promise.
If aliens are too often imagined in human terms, animals suffer the opposite problem: their ingenuity is underappreciated because it is measured against human intelligence. That sells dolphins, apes, elephants, magpies, octopuses and others short, said de Waal, a primatologist. He’d rather scientists allow for more elasticity, adopting an anthropomorphic vocabulary and set of concepts to consider certain animals as rather more like humans.
Frans de Waal, a primatologist, talked about animal cognition at the festival. (Richard Greenhouse Photography)
De Waal showed a video of a bonobo carrying a heavy rock on its back for half a kilometer until it arrived at the hardest surface in the sanctuary, where it used the rock to crack open some nuts. “That means she picked up her tool 15 minutes before she had the nuts,” de Waal said. “The whole idea that animals live only in the present has been abandoned.”
He showed a video of a chimp and another of an elephant each recognizing itself in a mirror, opening wide to gain an otherwise inaccessible view of the insides of their mouths. “If your dog did this, you’re going to call me,” he said.
All animal cognition, clearly, isn’t created equal, but de Waal stressed that for the animals that do exhibit cognition, it’s hardly a sin to use anthropomorphic terms to describe, say, a chimp laughing when tickled. It certainly looks and functions like a human laugh, he said.
The focus first on yet-unknown, and perhaps not-even-existent, alien life, and then on very familiar creatures, with which we share the planet, served as a microcosm of the broader scope of the day’s agenda. Laying the groundwork for the notion that the future has arrived already, Michael Caruso, editor-in-chief of Smithsonian magazine, told the audience to consider itself as a group of time machines.
“Your eyes are actually lenses of a time machine,” he said, noting that the further into space we look, the more of the past we see. “The light from the moon above us last night came to us a second and a half old. The light from the sun outside today is eight minutes and 19 seconds in the past. The light that we see from the stars at the center of the Milky Way is actually from the time of our last ice age, 25,000 years ago. Even the words I’m speaking right now, by the time you hear them exist a nanosecond in the past.”
While everything surrounding attendees represents the past, they themselves are the future. The key, he said, is to share knowledge, compare notes and overlap what we all know.
“That’s what we do here at the festival,” Caruso said.
Sara Seager, a planetary scientist and astrophysicist, studies exoplanets. (Richard Greenhouse Photography)
Other speakers picked up where Shostak and de Waal left off. In the search for extraterrestrial life, scientists are studying exoplanets, or planets that orbit stars other than the sun. Some of these, said Seager, an MIT professor of planetary science and physics, display ripe conditions to support life. “We know that small planets are out there waiting to be found,” she said. Though that doesn’t mean it’s easy hunting. “I liken it to winning the lottery—a few times,” she said.
Philosopher and writer Rebecca Newberger Goldstein, meanwhile, turned the lens not on planets many light years away, but instead on the human condition domestically. She discussed what she called the “mattering map,” a spectrum upon which individuals weigh and evaluate the degree to which they matter. “We are endowed with a mattering instinct,” she said. Or put another way: Everyone has an address on the mattering map, “an address of your soul.”
So much psychic power is embedded in the notion of mattering, she added, that people often give up their lives to secure the opportunity to matter, or if they feel they no longer matter. This is particularly relevant in the age of social media, and selfies, she said, when there is a temptation to measure how much one matters based on others’ approval.
“Who doesn’t like it when their Twitter following grows?” she asked.
Other speakers filled in more holes in the broader conversation about the future colliding with the present. “What was once magic is now reality,” said Marco Tempest, a “cyber illusionist” whose magic performance was enhanced by digital elements. He performed a card trick while wearing a digital headset, and the audience saw, presumably, what he saw projected on a screen. The projection overlaid digital information atop the cards, sometimes animating certain elements and other times adding additional information. Magicians and hackers are alike, Tempest said, in that they don’t take what surrounds them at face value. They see material as something to be played with, examined and questioned, rather than taken for granted.
NASA engineer Adam Steltzner talked about the Mars 2020 project. (Richard Greenhouse Photography)
A variety of National Aeronautics and Space Administration representatives, including Dava Newman, NASA’s deputy administrator, discussed everything from Hollywood depictions of space exploration to augmented and virtual reality. NASA’s mission is “off the Earth, for the Earth,” Newman said. She stressed that everything NASA does, particularly in areas quite far from Earth, relates back to what is best for people on Earth. Jim Green, who directs NASA’s planetary science division, spoke highly of art’s capacity to impact the real-life space program. “Science fiction is so important to our culture, because it allows us to dream,” he said.
That melding of dreaming and reality, of searching for what humanity has never encountered, such as extraterrestrial life and new planets, is a vital mix that helps keep things grounded, said Seager, the astrophysicist, in an interview after her talk.
“We do have our ultimate goal, like the Holy Grail. I don’t want to say we may never find it [extraterrestrial life], but that thought is always kind of there,” she said. “At least we will be finding other stuff along the way.”
At the heart of Paris, in a former monastery dating back to the Middle Ages, lives an unusual institution full of surprises whose name in French—le Musée des Arts et Métiers—defies translation.
The English version, the Museum of Arts and Crafts, hardly does justice to a rich, eclectic and often beautiful collection of tools, instruments and machines that documents the extraordinary spirit of human inventiveness over five centuries—from an intricate Renaissance astrolabe (an ancient astronomical computer) to Europe’s earliest cyclotron, made in 1937; to Blaise Pascal’s 17th-century adding machine and Louis Blériot’s airplane, the first ever to cross the English Channel (in 1909).
Many describe the musée, which was founded in 1794, during the French Revolution, as the world’s first museum of science and technology. But that doesn’t capture the spirit either of the original Conservatoire des Arts et Métiers, created to offer scientists, inventors and craftsmen a technical education as well as access to the works of their peers.
Its founder, the Abbé Henri Grégoire, then president of the revolution’s governing National Convention, characterized its purpose as enlightening “ignorance that does not know, and poverty which does not have the means to know.” In the infectious spirit of égalité and fraternité, he dedicated the conservatoire to the “artisan who has seen only his own workshop.”
In 1800, the conservatoire moved into the former Saint-Martin-des-Champs, a church and Benedictine monastery that had been “donated” to the newly founded republic not long before its last three monks lost their heads to the guillotine. Intriguing traces of its past life still lie in plain view: fragments of a 15th-century fresco on a church wall and rail tracks used to roll out machines in the 19th century.
What began as a repository for existing collections, nationalized in the name of the republic, has expanded to 80,000 objects, plus 20,000 drawings, and morphed into a cross between the early cabinets de curiosités (without their fascination for Nature’s perversities) and a more modern tribute to human ingenuity.
“It is a museum with a collection that has evolved over time, with acquisitions and donations that reflected the tastes and technical priorities of each era,” explained Alain Mercier, the museum’s resident historian. He said the focus shifted from science in the 18th century to other disciplines in the 19th: agriculture, then industrial arts, then decorative arts. “It was not rigorously logical,” he added.
Mostly French but not exclusively, the approximately 3,000 objects now on view are divided into seven sections, beginning with scientific instruments and materials, and then on to mechanics, communications, construction, transport, and energy. There are displays of manufacturing techniques (machines that make wheels, set type, thread needles, and drill vertical bores) and then exhibits of the products of those techniques: finely etched glassware, elaborately decorated porcelains, cigar cases made of chased aluminum, all objects that could easily claim a place in a decorative arts museum.
The surprising juxtaposition of artful design and technical innovation pops up throughout the museum’s high-ceilinged galleries—from the ornate, ingenious machines of 18th-century master watchmakers and a fanciful 18th-century file-notching machine, shaped to look like a flying boat, to the solid metal creations of the industrial revolution and the elegantly simple form of a late 19th-century chainless bicycle.
Few other museums, here or abroad, so gracefully celebrate both the beautiful and the functional—as well as the very French combination of the two. This emphasis on aesthetics, particularly evident in the early collections, comes from the aristocratic and royal patrons of pre-revolution France who placed great stock in the beauty of their newly invented acquisitions. During this era, said Mercier, “people wanted to possess machines that surprised both the mind and the eye.”
Image by © SONNET Sylvain/Hemis/Corbis. Clement Ader's steam-powered airplane, the Ader Avion No. 3, hangs from the ceiling of the Arts et Métiers museum.
Image by © SONNET Sylvain/Hemis/Corbis. Peering into the museum's mechanical room
Image by © SONNET Sylvain/Hemis/Corbis. The communication room
Image by © SONNET Sylvain/Hemis/Corbis. View of the airplanes and automobiles hall
Image by © SONNET Sylvain/Hemis/Corbis. The museum collection includes the original model of the Statue of Liberty by Frédéric Auguste Bartholdi.
Image by © Christophe LEHENAFF/Photononstop/Corbis. A student draws in a room filled with scientific instruments.
From this period come such splendid objects as chronometers built by the royal clockmaker Ferdinand Berthoud; timepieces by the Swiss watchmaker Abraham-Louis Breguet; a finely crafted microscope from the Duc de Chaulnes’s collection; a pneumatic machine by the Abbé Jean-Antoine Nollet, a great 18th-century popularizer of science; and a marvelous aeolipile, or bladeless radial steam turbine, which belonged to the cabinet of Jacques Alexandre César Charles, the French scientist and inventor who launched the first hydrogen-filled balloon, in 1783.
Christine Blondel, a researcher in the history of technology at the National Center of Scientific Research, noted that even before the revolution, new scientific inventions appeared on display at fairs or in theaters. “The sciences were really part of the culture of the period,” she said. “They were attractions, part of the spectacle.”
This explains some of the collection’s more unusual pieces, such as the set of mechanical toys, including a miniature, elaborately dressed doll strumming Marie Antoinette’s favorite music on a dulcimer; or the famous courtesan Madame de Pompadour’s “moving picture” from 1759, in which tiny figures perform tasks, all powered by equally small bellows working behind a painted landscape.
Mercier, a dapper 61-year-old who knows the collection by heart and greets its guards by name, particularly enjoys pointing out objects that exist solely to prove their creator’s prowess, such as the delicately turned spheres-within-spheres, crafted out of ivory and wood, which inhabit their own glass case in the mechanics section. Asked what purpose these eccentric objects served, Mercier smiles. “Just pleasure,” he responds.
A threshold moment occurred in the decades leading up to the revolution, notes Mercier, when French machines began to shed embellishment and become purely functional. A prime example, he says, is a radically new lathe—a starkly handsome metal rectangle—invented by engineer Jacques Vaucanson in 1751 to give silk a moiré effect. That same year Denis Diderot and Jean-Baptiste le Rond d’Alembert first published their Encyclopedia, a key factor in the Enlightenment, which among many other things celebrated the “nobility of the mechanical arts.” The French Revolution further accelerated the movement toward utility by standardizing metric weights and measures, many examples of which are found in the museum.
When the industrial revolution set in, France began to lose its leading position in mechanical innovation, as British and American entrepreneurial spirit fueled advances. The museum honors these foreign contributions too, with a French model of James Watt’s double-acting steam engine, a 1929 model of the American Isaac Merritt Singer’s sewing machine (machines like it had fascinated visitors to London’s Universal Exhibition of 1851) and an Alexander Graham Bell telephone.
Even so, France continued to hold its own in the march of industrial progress, contributing inventions such as Hippolyte Auguste Marinoni’s rotary printing press, an 1886 machine studded with metal wheels; the Lumière brothers’ groundbreaking cinematograph of 1895; and, in aviation, Clément Ader’s giant, batlike airplane.
Although the museum contains models of the European Space Agency’s Ariane 5 rocket and a French nuclear power station, the collection thins out after World War II, with most of France’s 20th-century science and technology material on display at Paris’s Cité des Sciences et de l’Industrie.
Few sights can top the Arts et Métiers’ main exhibit hall, located in the former church: Léon Foucault’s pendulum swings from a high point in the choir, while metal scaffolding built along one side of the nave offers visitors an intriguing multistoried view of the world’s earliest automobiles. Suspended dramatically in midair hang two airplanes that staked out France’s leading role in early aviation.
For all its unexpected attractions, the Musée des Arts et Métiers remains largely overlooked, receiving not quite 300,000 visitors in 2013, a fraction of the attendance at other Paris museums. That, perhaps, is one of its charms.
Parisians know it largely because of popular temporary exhibits, such as “And Man Created the Robot,” which ran in 2012-13. These shows have helped boost attendance by more than 40 percent since 2008. But the museum’s best advertisement may be the stop on Métro Line 11 that bears its name. Its walls feature sheets of copper riveted together to resemble the Nautilus submarine in Jules Verne’s Twenty Thousand Leagues Under the Sea, complete with portholes.
For anyone looking for an unusual Paris experience, the station—and the museum on its doorstep—is a good place to start.
Six Exhibits Not to Miss
In 1974, just a couple years after the launch of the first Landsat satellite, scientists noticed something odd in the Weddell Sea near Antarctica. There was a large ice-free area, called a polynya, in the middle of the ice pack. The polynya, which covered an area as large as New Zealand, reappeared in the winters of 1975 and 1976 but has not been seen since.
Scientists interpreted the polynya’s disappearance as a sign that its formation was a naturally rare event. But researchers reporting in Nature Climate Change disagree, saying that the polynya’s appearance used to be far more common and that climate change is now suppressing its formation.
What’s more, the polynya’s absence could have implications for the vast conveyor belt of ocean currents that move heat around the globe.
Satellite imagery allowed scientists to find an ice-free area in the Weddell Sea (upper left quadrant) in the Antarctic winters of 1974 through 1976. (Credit: Claire Parkinson (NASA GSFC))
Surface seawater around the poles tends to be relatively fresh, thanks to precipitation and to the sea ice that melts into it; it is also very cold. Below this surface layer sits slightly warmer, more saline water that melting ice and precipitation do not reach. Its higher salinity makes it denser than the water at the surface.
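The density contrast behind this layering can be sketched with a toy linear equation of state. This is a simplification, not the study's method: real oceanographic work uses the full nonlinear TEOS-10 formulas, and the coefficients below are rough textbook values assumed for illustration.

```python
# Toy linear equation of state for seawater (illustrative values only):
# rho = RHO0 * (1 - ALPHA*(T - T0) + BETA*(S - S0))
RHO0 = 1027.0        # reference density, kg/m^3
ALPHA = 2e-4         # thermal expansion coefficient, 1/degC
BETA = 8e-4          # haline contraction coefficient, 1/(g/kg)
T0, S0 = 10.0, 35.0  # reference temperature (degC) and salinity (g/kg)

def density(temp_c, salinity):
    """Linearized seawater density in kg/m^3."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

# A cold-but-fresh surface layer over a warmer-but-saltier subsurface layer:
surface = density(-1.5, 34.0)
subsurface = density(0.5, 34.7)
print(surface < subsurface)  # True: the fresher surface water is less dense
```

Even though the surface water is colder, the salinity term dominates, so the stratification is stable until the surface layer becomes dense enough to sink.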
Scientists think that the Weddell polynya can form when ocean currents push these denser subsurface waters against an underwater mountain chain known as the Maud Rise. This forces the water up to the surface, where it mixes with and warms colder surface waters. While it doesn’t warm the top layer of water enough for a person to comfortably bathe in, it's enough to prevent ice from forming. But it comes at a cost: the heat from the upwelling subsurface water dissipates into the atmosphere soon after it reaches the surface. This loss of heat forces the now-cool but still dense water to sink some 3,000 meters to feed a huge, super-cold underwater current known as Antarctic Bottom Water.
Antarctic Bottom Water spreads across the global oceans at depths of 3,000 meters and more, delivering oxygen into these deep places. It’s also one of the drivers of global thermohaline circulation, the great ocean conveyor belt that moves heat from the equator towards the poles.
A network of surface and deep-ocean currents moves water and heat around the world. (Credit: NASA/Map by Robert Simmon, adapted from the IPCC 2001 and Rahmstorf 2002)
But for the mixing to occur in the Weddell Sea, the top layer of ocean water must become denser than the layer below it so that the waters can sink.
To find out what has been going on in the Weddell Sea, Casimir de Lavergne of McGill University in Montreal and colleagues began by analyzing temperature and salinity measurements collected by ships and robotic floats in this region since 1956—tens of thousands of data points. The researchers could see that the surface layer of water at the site of the Weddell polynya has been getting less salty since the 1950s. Freshwater is less dense than saltwater, and it acts as a lid on the Weddell system, trapping the subsurface warm waters and preventing them from reaching the surface. That in turn, stops the mixing that produces Antarctic Bottom Water at that site.
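The freshening trend the team extracted is, in spirit, a linear fit through decades of salinity measurements. Here is a minimal sketch of that kind of analysis on synthetic, made-up numbers, not the study's dataset; the trend magnitude and noise level are invented for illustration.

```python
# Hypothetical trend analysis on synthetic surface-salinity data
# (not the study's data or method details).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1956, 2015)
# Fabricated salinity series with a slight freshening trend plus noise:
salinity = 34.6 - 0.002 * (years - 1956) + rng.normal(0, 0.05, years.size)

# Least-squares linear fit: the slope estimates the freshening rate.
slope, intercept = np.polyfit(years, salinity, 1)
print(f"trend: {slope:.4f} g/kg per year")  # negative => surface freshening
```

A negative slope here corresponds to the "freshwater lid" strengthening over time; the real analysis drew on tens of thousands of ship and float measurements rather than a single tidy series.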
That increase in freshwater is coming from two sources: Climate change has amplified the global water cycle, increasing both evaporation and precipitation. And Antarctic glaciers have been calving and melting at a greater rate. Both of these sources end up contributing more freshwater to the Weddell Sea than what the area experienced in the past, the researchers note.
To look at what the future might hold for this system, de Lavergne and colleagues turned to a set of 36 climate models. Those models, which predict that dry places of the world generally get drier and wet places get wetter, show that this area of the Southern Ocean should see even more precipitation in the future. The models don’t include melting glaciers, but those are expected to add more freshwater, which could make the lid on the system even stronger, according to the researchers.
A weakening of the mixing of water in the Weddell Sea could explain, at least in part, a shrinking in Antarctic Bottom Waters reported in 2012. “Reduced convection would reduce the rate of Antarctic Bottom Water formation,” says de Lavergne. That “could cause a weakening in the lower branch of the thermohaline circulation.”
That lower branch is the cousin to a similar process of convection happening in the Labrador Sea of the North Atlantic, where cold water from the Arctic sinks and drives deep currents south. If this source of deep water were shut off, perhaps because of an influx of freshwater, scientists have said that the results could be disastrous, particularly for Europe, which is kept warm by this movement of heat and water. Climate researchers consider this scenario highly unlikely but not out of the realm of possibility. And even a weakened system can have effects on climate and weather around the world.
More immediately, though, a weakening of the mixing in the Weddell Sea could be contributing to some of the climate trends observed in Antarctica and the Southern Ocean. By keeping warmer ocean waters trapped, the weakening may explain a slowdown in surface warming and an expansion of sea ice, the researchers note.
The weakening of the Weddell Sea mixing has also kept heat and carbon trapped in those deeper layers of ocean water. If another giant polynya were to form, which is unlikely but possible, the researchers warn, it could release a pulse of warming on the planet.
The two paralysis patients were up and walking on treadmills in no time. This impressive feat was made possible by an unprecedented new surgery, in which researchers implanted wireless devices in the patients’ brains that recorded their brain activity. The technology allowed the brain to communicate with the legs—bypassing the broken spinal cord pathways—so that the patient could once again regain control.
These patients, it turns out, were monkeys. But this small step for monkeys could lead to a giant leap for millions of paralyzed humans: The same equipment has already been approved for use in humans, and clinical studies are underway in Switzerland to test the therapeutic effectiveness of the spinal cord stimulation method in humans (minus the brain implant). Now that researchers have a proof-of-concept, this kind of wireless neurotechnology could change the future of paralysis recovery.
Instead of trying to repair the damaged spinal cord pathways that usually deliver brain signals to the limbs, scientists tried an innovative approach to reverse paralysis: Bypassing the injury bottleneck altogether. The implant worked as a bridge between the brain and the legs, directing leg motion and stimulating muscle movement in real time, says Tomislav Milekovic, a researcher at Switzerland's École Polytechnique Fédérale de Lausanne (EPFL). Milekovic and co-authors report their findings in a new paper published Wednesday in the journal Nature.
When the brain's neural network processes information, it produces distinctive signals—which scientists have learned to interpret. Those that drive walking in primates originate in the dime-sized region known as the motor cortex. In a healthy individual, the signals travel down the spinal cord to the lumbar region, where they direct the activation of leg muscles to enable walking.
If a traumatic injury severs this connection, a subject is paralyzed. Although the brain is still able to produce the proper signals, and the leg's muscle-activating neural networks are intact, those signals never reach the legs. The researchers managed to reestablish the connection through real-time, wireless technology—an unprecedented feat.
How does the system work? The team's artificial interface begins with an array of almost 100 electrodes implanted in the brain's motor cortex. It's connected to a recording device that measures the spiking of electrical activities in the brain that control leg movements. The device sends these signals to a computer that decodes and translates these instructions to another array of electrodes implanted in the lower spinal cord, below the injury. When the second group of electrodes receives the instructions, it activates the appropriate muscle groups in the legs.
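One cycle of that record-decode-stimulate bridge can be sketched in simplified Python. Every name, threshold and signal below is an illustrative assumption, not the actual EPFL system, which decodes gait states with statistical models trained on each animal's recorded cortical activity.

```python
# Highly simplified sketch of a brain-spine interface cycle
# (illustrative stand-ins only; not the published system).
import random

def read_spike_counts(n_electrodes=96):
    """Stand-in for the ~100-electrode motor cortex recording."""
    return [random.randint(0, 20) for _ in range(n_electrodes)]

def decode_gait_phase(spike_counts, threshold=960):
    """Toy decoder: map total firing to an intended gait phase.
    (The real decoder is a trained statistical model.)"""
    return "swing" if sum(spike_counts) > threshold else "stance"

def stimulate_spinal_cord(phase):
    """Stand-in for the lumbar electrode array: choose a stimulation
    protocol that activates the matching leg muscle groups."""
    protocol = {"swing": "flexor burst", "stance": "extensor burst"}
    return protocol[phase]

# One pass through the wireless bridge: record -> decode -> stimulate.
counts = read_spike_counts()
phase = decode_gait_phase(counts)
print(stimulate_spinal_cord(phase))
```

The key design point the sketch preserves is that nothing travels through the injured spinal cord: the recording, decoding and stimulation form an external loop that runs continuously, in real time.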
For the study, two rhesus macaque monkeys were given spinal cord injuries in the lab. After their surgeries, they spent a few days recovering while the system collected and calibrated the necessary data on their condition. But just six days after injury, one monkey was walking on a treadmill. The other was up and walking on post-injury day 16.
The success of the brain implant demonstrates for the first time how neurotechnology and spinal cord stimulation can restore a primate's ability to walk. “The system restored locomotor movements immediately, without any training or re-learning,” Milekovic, who engineers data-driven neuroprosthetic systems, told Smithsonian.com.
“The first time we turned the brain-spine interface on was a moment that I’ll never forget,” added EPFL researcher Marc Capogrosso in a statement.
A new brain implant wirelessly sends signals to the legs' muscle groups. (Illustration by Jemere Ruby)
The technique of "hacking" the brain's neural networks has produced remarkable feats, such as helping to create touch-sensitive prosthetics that allow wearers to perform delicate tasks like cracking an egg. But many of these efforts use cable connections between the brain and recording devices, meaning the subjects aren't able to move freely. “Neural control of hand and arm movements was investigated in great detail, while less focus has been given to the neuronal control of leg movements, which required animals to move freely and naturally,” Milekovic says.
Christian Ethier, a neuroscientist at Quebec's Université Laval who was not involved in the research, called the work a “major step forward in the development of neuroprosthetic systems.” He added: “I believe this demonstration is going to accelerate the translation of invasive brain-computer interfaces toward human applications.”
In an accompanying News & Views piece in Nature, neuroscientist Andrew Jackson agrees, pointing out how quickly advances in this field have moved from monkeys to people. A 2008 paper, for instance, demonstrated that paralyzed monkeys could control a robotic arm with just their brain; four years later a paralyzed woman did the same. Earlier this year, brain-controlled muscle stimulation enabled a quadriplegic person to grasp items, among other practical hand skills, after the same feat was achieved in monkeys in 2012.
Jackson concludes from this history that “it's not unreasonable to speculate that we could see the first clinical demonstrations of interfaces between the brain and spinal cord by the end of the decade.”
The Blackrock electrode array implanted in the monkeys' brains has been used for 12 years to successfully record brain activity in the BrainGate clinical trials; numerous studies have demonstrated that this signal can accurately control complex neuroprosthetic devices. “While it does require surgery, the array is an order of magnitude smaller than the surgically implanted deep brain stimulators already used by more than 130,000 people with Parkinson's disease or other movement disorders,” Milekovic adds.
While this test was limited to just a few phases of brain activity related to walking gait, Ethier suggests that it could potentially enable a greater range of movement in the future. “Using these same brain implants, it is possible to decode movement intent in a lot more detail, similar to what we have done to restore grasp function. ... I expect that future developments will go beyond and perhaps include other abilities like compensating for obstacles and adjusting walking speed.”
Ethier notes another intriguing possibility: The wireless system might actually help the body heal itself. “By re-synchronizing the activity in the brain and spinal motor centers, they could promote what is called ‘activity-dependent neuroplasticity,’ and consolidate any spared connections linking the brain to the muscles,” he says. “This could have long-term therapeutic effects and promote the natural recovery of function beyond what is possible with conventional rehabilitation therapies.”
This phenomenon is not well understood, and the possibility remains speculative at this point, he stresses. But the tangible achievement this research demonstrates—helping the paralyzed walk again with their brains—is already a huge step.
When I reach Hunter Hoffman, director of the Virtual Reality Research Center at the University of Washington, he’s in Galveston, Texas, visiting Shriners Hospital for Children. Shriners is one of the most highly regarded pediatric burn centers in America. They treat children from around the country suffering from some of the most horrific burns possible—burns on 70 percent of their bodies, burns covering their faces. Burn recovery is notoriously painful, necessitating torturous daily removal of dead skin.
“Their pain levels are just astronomically high despite the use of strong pain medications,” Hoffman says.
Hoffman, a cognitive psychologist, is here to offer the children a different kind of pain relief: virtual reality. Using a special pair of virtual reality goggles held near the children’s faces with a robotic arm (head burns make traditional virtual reality headsets unfeasible), the children enter a magic world designed by Hoffman and his collaborator David Patterson. In “SnowCanyon,” the children float through a snowy canyon filled with snowmen, igloos and woolly mammoths. They throw snowballs at targets as they float along, Paul Simon music playing in the background. They’re so distracted, they pay far less attention to what’s happening in the real world: nurses cleaning their wounds.
“The logic behind how it works is that humans have a limited amount of attention available and pain requires a lot of attention,” Hoffman says. “So there’s less room for the brain to process the pain signals.”
Virtual reality reduces pain levels by up to 50 percent, Hoffman says, as good as or better than many conventional painkillers.
The idea of using virtual reality (VR) to distract patients from pain is gaining traction in the medical community. And as it turns out, that’s only the tip of the iceberg when it comes to the emerging field of virtual reality medicine.
Perhaps the most established use of virtual reality medicine is in psychiatry, where it’s been used for treating phobias, PTSD and other psychological issues for at least 20 years. A patient with a fear of flying might sit in a chair (or even a mock airplane seat) while inside a VR headset they’re experiencing a simulation of takeoff, cruising and landing, complete with engine noises and flight attendant chatter. This kind of treatment is a subset of the more traditional exposure therapy, where patients are slowly exposed to the object of their phobia until they stop having a fear reaction. Traditional exposure therapy is easier to do when the phobia is of something common and easily accessible. A person afraid of dogs can visit a neighbor’s dog. An agoraphobic can slowly venture outside for short periods of time. But treating phobias like fear of flying or fear of sharks with traditional exposure therapy may be expensive or impractical in real life. That’s where VR has a major advantage. Treating PTSD with VR works similarly, exposing patients to a simulation of a feared situation (a battle in Iraq, for example), and appears to be just as effective.
Hoffman and his collaborators have done pioneering work in using VR for phobias and PTSD. Back in the late 1990s, they designed a program to deal with spider phobia, having a test patient see increasingly close and graphic images of a spider, eventually while also touching a spider toy. The patient was so spider phobic she rarely left the house during the day and taped her doors shut at night. By the end of her VR treatment she comfortably held a live tarantula in her bare hands. Hoffman has also created programs for dealing with PTSD, notably a September 11 simulation for victims of the attacks.
Scientists are quickly learning that VR has many other psychiatric applications. Studies suggest that VR exposure can help patients with paranoia, a common symptom of various psychiatric disorders such as schizophrenia. In a recent study published in the British Journal of Psychiatry, patients with “persecutory delusions” were put in virtual reality simulations of fearful social situations. Compared to traditional exposure therapy, the VR-treated patients showed a larger decrease in delusions and paranoia. Other studies suggest VR is helpful for children with autism and in patients with brain damage-related memory impairment. Some of Hoffman’s current research deals with patients with borderline personality disorder, an infamously hard-to-treat illness involving unstable moods and a difficulty in maintaining relationships. For these patients, Hoffman has designed a program using virtual reality to enhance mindfulness, which is known to decrease levels of anxiety and distress.
VR has also been shown to be a boon to amputees suffering phantom limb pain—the sensation that the removed limb is still there, and hurting. Phantom limb pain sufferers typically use “mirror therapy” to relieve their distress. This involves putting their remaining limb in a mirrored box that makes it look like they have two arms or legs again. For reasons not entirely clear, seeing the amputated limb appear healthy and mobile seems to diminish pain and cramping sensations. But this type of therapy has limitations, especially for patients missing both legs or both arms. A recent case study in Frontiers in Neuroscience discussed an amputee with phantom cramping in his missing arm that was resistant to mirror treatment and was so painful it woke him up at night. The patient was treated with a VR program that used the myoelectric activity of his arm stump to move a virtual arm. After 10 weeks of treatment, he began to experience pain-free periods for the first time in decades.
VR also stands to revolutionize the field of imaging. Instead of looking at an MRI or CT scan image, doctors are now beginning to use VR to interact with 3D images of body parts and systems. In one Stanford trial, doctors used VR imaging to evaluate infants born with a condition called pulmonary atresia, a heart defect that blocks blood from flowing from babies’ hearts to their lungs. Before lifesaving surgery can be performed, doctors must map the babies’ tiny blood vessels, a difficult task since every person is slightly different. Using a technology from the VR company EchoPixel, the doctors used a special 3D stereoscopic system, where they could inspect and manipulate holograms of the babies’ anatomy. They concluded that the VR system was just as accurate as using traditional forms of imaging, but was faster to interpret, potentially saving valuable time.
As virtual reality devices become higher quality and more affordable—in the past, medical virtual reality devices cost hundreds of thousands of dollars, while an Oculus Rift headset is just over $700—their use in medicine will likely become more widespread.
“There’s really a growing interest right now,” Hoffman says. “There’s basically a revolution in virtual reality being used in the public sector. We’ve been using these expensive, basically military virtual reality systems that were designed for training pilots and now, with cell phones, there are a number of companies that have figured out how to get them to work as displays for VR goggles, so the VR system has just dropped to like 1/30th of the cost it used to be.”
So next time you go to the doctor with a migraine or back pain or a twisted ankle, perhaps, instead of being prescribed a painkiller, you’ll be offered a session inside a virtual reality headset.
While the Nobel Prizes are 115 years old, rewards for scientific achievement have been around much longer. As early as the 17th century, at the very origins of modern experimental science, promoters of science realized the need for some system of recognition and reward that would provide incentive for advances in the field.
Before the prize, it was the gift that reigned in science. Precursors to modern scientists – the early astronomers, philosophers, physicians, alchemists and engineers – offered wonderful achievements, discoveries, inventions and works of literature or art as gifts to powerful patrons, often royalty. Authors prefaced their publications with extravagant letters of dedication; they might, or they might not, be rewarded with a gift in return. Many of these practitioners worked outside of academe; even those who enjoyed a modest academic salary lacked today’s large institutional funders, beyond the Catholic Church. Gifts from patrons offered a crucial means of support, yet they came with many strings attached.
Eventually, different kinds of incentives, including prizes and awards, as well as new, salaried academic positions, became more common and the favor of particular wealthy patrons diminished in importance. But at the height of the Renaissance, scientific precursors relied on gifts from powerful princes to compensate and advertise their efforts.
With courtiers all vying for a patron’s attention, gifts had to be presented with drama and flair. Galileo Galilei (1564-1642) presented his newly discovered moons of Jupiter to the Medici dukes as a “gift” that was literally out of this world. In return, Prince Cosimo “ennobled” Galileo with the title and position of court philosopher and mathematician.
If a gift succeeded, the gift-giver might, like Galileo in this case, be fortunate enough to receive a gift in return. Gift-givers could not, however, predict what form it would take, and they might find themselves burdened with offers they couldn’t refuse. Tycho Brahe (1546-1601), the great Danish Renaissance astronomer, received everything from cash to chemical secrets, exotic animals and islands in return for his discoveries.
Regifting was to be expected. Once a patron had received a work, he or she was quick to use the new knowledge and technology in their own gift-giving power plays, to impress and overwhelm rivals. King James I of England planned to sail a shipful of delightful automata (essentially early robots) to India to “court” and “please” royalty there, and to offer the Mughal Emperor Jahangir the art of “cooling and refreshing” the air in his palace, a technique recently developed by James’ court engineer Cornelis Drebbel (1572-1633). Drebbel had won his own position years earlier by showing up unannounced at court, falling to his knees, and presenting the king with a marvelous automaton.
A version of Drebbel’s automaton sits on the table by the window in this scene of a collection. (Hieronymous Francken II and Brueghel the Elder)
Gifts were unpredictable and sometimes undesired. They could go terribly wrong, especially across cultural divides. And they required the giver to inflate the dramatic aspects of their work, not unlike the modern critique that journals favor the most surprising or flashy research, leaving negative results to molder. With personal tastes and honor at stake, the gift could easily go awry.
Scientific promoters already realized in the early 17th century that gift-giving was ill-suited to encouraging experimental science. Experimentation required many individuals to collect data in many places across long periods of time. Gifts emphasized competitive individualism at a time when scientific collaboration and the often humdrum work of empirical observation was paramount.
While some competitive rivalry could help inspire and advance science, too much could lead to the ostentation and secrecy that too often plagued courtly gift-giving. Most of all, scientific reformers feared an individual would not tackle a problem that couldn’t be finished and presented to a patron in his or her lifetime—or even if they did, their incomplete discoveries might die with them.
For these reasons, promoters of experimental science saw the reform of rewards as integral to radical changes in the pace and scale of scientific discovery. For example, Sir Francis Bacon (1561-1626), lord chancellor of England and an influential booster of experimental science, emphasized the importance even of “approximations” or incomplete attempts at reaching a particular goal. Instead of dissipating their efforts attempting to appease patrons, many researchers, he hoped, could be stimulated to work toward the same ends via a well-publicized research wish list.
Bacon coined the term “desiderata,” still used by researchers today to denote widespread research goals. Bacon also suggested many ingenious ways to advance discovery by stimulating the human hunger for fame; a row of statues celebrating famous inventors of the past, for example, could be paired with a row of empty plinths upon which researchers might imagine their own busts one day resting.
Bacon’s techniques inspired one of his chief admirers, the reformer Samuel Hartlib (circa 1600-1662), to collect many schemes for reforming the system of recognition. One urged that rewards should go not only “to such as exactly hit the marke, but even to those that probably misse it,” because their errors would stimulate others and make “active braines to beate about for New Inventions.” Hartlib planned a centralized office systematizing rewards for those who “expect Rewards for Services done to the King or State, and know not where to pitch and what to desire.”
Galileo presents an experiment to a Medici patron. (Giuseppe Bezzuoli)
Collaborative scientific societies, beginning in the mid-17th century, distanced rewards from the whims and demands of individual patrons. The periodicals that many new scientific societies started publishing offered a new medium that allowed authors to tackle ambitious research problems that might not individually produce a complete publication pleasing to a dedicatee.
For example, artificial sources of luminescence were exciting chemical discoveries of the 17th century that made pleasing gifts. A lawyer who pursued alchemy in his spare time, Christian Adolph Balduin (1632-1682), presented the particular glowing chemicals he discovered in spectacular forms, such as an imperial orb that shone with the name “Leopold” for the Habsburg emperor.
Many were not satisfied, however, with Balduin’s explanations of why these chemicals glowed. The journals of the period feature many attempts to experiment upon or question the causes of such luminescence. They provided an outlet for more workaday investigations into how these showy displays actually worked.
The societies themselves saw their journals as a means to entice discovery by offering credit. Today’s Leopoldina, the German national scientific society, founded its journal in 1670. According to its official bylaws, those who might not otherwise publish their findings could see them “exhibited to the world in the journal to their credit and with the praiseworthy mention of their name,” an important step on the way to standardizing scientific citation and norms of establishing priority.
Beyond the satisfaction of seeing one’s name in print, academies also began offering essay prizes upon particular topics, a practice which continues to this day. Historian Jeremy Caradonna estimates 15,000 participants in such competitions in France between 1670, when the Royal Academy of Sciences began awarding prizes, and 1794. These were often funded by many of the same individuals, such as royalty and nobility, who in former times would have functioned as direct patrons, but now did so through the intermediary of the society.
States might also offer rewards for solutions to desired problems, most famously in the case of the prizes offered by the English Board of Longitude beginning in 1714 for figuring out how to determine longitude at sea. Some in the 17th century likened this long-sought discovery to the philosophers’ stone. The idea of using a prize to focus attention on a particular problem is alive and well today. In fact, some contemporary scientific prizes, such as the Simons Foundation’s “Cracking the Glass Problem,” set forth specific questions to resolve that were already frequent topics of research in the 17th century.
The shift from gift-giving to prize-giving transformed the rules of engagement in scientific discovery. Of course, the need for monetary support hasn’t gone away. The scramble for funding can still be a sizable part of what it takes to get science done today. Succeeding in grant competitions might seem mystifying, and winning a career-changing Nobel might feel like a bolt out of the blue. But researchers can take comfort that they no longer have to present their innovations on bended knee as wondrous gifts to satisfy the whims of individual patrons.
For every hero in American history, there must be a hundred scoundrels—con men, Ponzi schemers, cat burglars, greedy gigolos, jewel thieves, loan sharks, phony doctors, phony charities, phony preachers, body snatchers, bootleggers, blackmailers, cattle rustlers, money launderers, smash-and-grabbers, forgers, swindlers, pickpockets, flimflam artists, stickup specialists and at least one goat-gland purveyor, not to mention all the high-tech varieties made possible by the internet.
Most of these miscreants have been specialists who stuck to a single line of skullduggery until they got caught, retired or died. Some liked to brag to admirers about their enterprises, and a tiny few dared to write and publish books about them; Willie Sutton, for example, the Tommy Gun-wielding "Slick Willie" who heisted some $2 million robbing banks back in the first half of the last century (when that was a lot of money), wrote Where the Money Was: The Memoirs of a Bank Robber in 1976. There was Xaviera Hollander, the Park Avenue madam whose memoir, The Happy Hooker, inspired a series of Hollywood movies and helped encourage the sexual frankness of recent decades.
Occasionally, one of these memoirists tells of diversifying, spreading out, trying this dodge if that one doesn’t work. Sutton's lesser-known contemporary, Frank Abagnale, who was portrayed in the movie Catch Me If You Can, wrote of bilking wealthy innocents of some $2.5 million by posing as a lawyer, teacher, doctor and airline pilot before going straight. Other such confessors are hiding in the archives.
But there has been only one Stephen Burroughs, a poseur whose life would make a fabulous movie if today’s audiences were as interested in early American history as in robotic space monsters. His exploits began during the Revolutionary War when he ran off to join—then depart—the Continental Army three times at the age of 14. By the time he was 33, he had lived and misbehaved vigorously enough to make up the first version of his autobiography. So far, Memoirs of the Notorious Stephen Burroughs has been published with slightly differing titles in more than 30 editions over a span of more than two centuries.
The New England poet Robert Frost wrote that Burroughs's book should stand on the shelf beside the autobiography of Benjamin Franklin. To Frost, Franklin's volume was "a reminder of what we have been as a young nation," while Burroughs "comes in reassuringly when there is a question of our not unprincipled wickedness…sophisticated wickedness, the kind that knows its grounds and can twinkle…Could we have been expected to produce so fine a flower in a pioneer state?"
“Sophisticated wickedness that can twinkle” sounds like a review of one of Shakespeare’s greatest hits, his sublime caricatures of English nobility. But in Burroughs we find no nobility, only 378 or so flowing pages by the only son of a harsh Presbyterian preacher in a colonial New England village; a memoirist who lived his adventures before he wrote about them with such jolly sophistication. Or at least he said he did.
Stephen Burroughs was born in 1765 in Connecticut, and moved as a child to Hanover, New Hampshire. At home and briefly away at school, he earned and proudly wore a reputation as an incorrigible child, stealing watermelons, upsetting outhouses, restlessly looking for trouble.
He explained his boyhood thus: “My thirst for amusement was insatiable…I sought it in pestering others…I became the terror of the people where I lived, and all were very unanimous in declaring that Stephen Burroughs was the worst boy in town; and those who could get him whipt were most worthy of esteem…however, the repeated application of this birchen medicine never cured my pursuit of fun.”
Indeed, that attitude explained most of Burroughs’s imaginative career.
When he was 16, his father enrolled him at nearby Dartmouth College, but that didn’t last long—after another prank involving watermelons, he was sent home. Young Burroughs proved that schooling was not necessary for a quick-witted young man zipping between gullible New England communities so nimbly that primitive communications couldn’t keep up with him.
At 17, he decided to go to sea. Venturing to Newburyport, Massachusetts, he went aboard a privateer, a private vessel authorized to prey on enemy shipping. Having no pertinent skills, he picked the brain of an elderly medicine man before talking himself aboard as the ship’s doctor. This produced a dramatic account of surgery amid storms, battling a British gunship and later being jailed for improperly issuing wine to the crew, a series of adventures that would strain even Horatio Hornblower.
The historian Larry Cebula recalls two unacquainted travelers sharing a coach in 1790 New England when one of them, a Boston lawyer, discoursed about a famed confidence man named Burroughs. This Burroughs, he said, had “led a course of the most barefaced and horrid crimes of any man living, including stealing, counterfeiting, robbing and adultery, escaping prison, burning the prison and killing guards.” He did not realize that the fellow listening quietly to all this was Stephen Burroughs himself, who by then, at the age of 25, had a log of misdeeds stretching well beyond the lawyer’s account.
A hundred years after Burroughs first tried to become a boy soldier, Harper’s Magazine described him as “a gentleman who at times came in somewhat violent contact with the laws of his country.” Yes: after his seafaring adventure, he snitched some of his father’s sermons and headed out pretending to be a preacher; he got away with it until the congregation caught on and chased him out of town. Skipping from village to village, he briefly occupied pulpit after pulpit.
When that career dwindled, he branched into counterfeiting. Printing phony money was a popular crime in those days, before a common currency was established, and Burroughs was a master. The National Museum of American History, in its new exhibition American Enterprise, displays a prime example of his art—a $1 certificate on the Union Bank of Boston, dated 1807, signed by Burroughs as cashier, and later stamped COUNTERFEIT.
Artful but not quite perfect, he was caught and jailed, but broke out and moved on, becoming a schoolteacher. Convicted of seducing a teenage student, he was sentenced to the public whipping post. He escaped again and took his tutorial talents to Long Island, where he helped organize one of the nation’s first public libraries. After failing at land speculation in Georgia, he returned north and settled across the border in Quebec, nominally a farmer but still counterfeiting till he was caught and convicted yet again. But there he settled down, converting to Catholicism and living as a mostly respectable citizen until he died in 1840.
This race through some of the high/low spots of Burroughs’s life can barely hint at the richness of his memoirs, which scholars accept as mostly, or at least partly, true. Whatever their factual percentage, they remain an affectionate, sometimes hilarious, extremely readable meander voyage through provincial life in the brand-new republic.
The permanent exhibition “American Enterprise” opened on July 1 at the Smithsonian’s National Museum of American History in Washington, D.C. and traces the development of the United States from a small dependent agricultural nation to one of the world's largest economies.
In 2009, automotive designers at Japanese carmaker Nissan were scratching their heads over how to build the ultimate anti-collision vehicle. Inspiration came from an unlikely source: schools of fish, which move synchronously by sticking close together while simultaneously staying a safe stopping distance apart. Nissan took the aquatic concept and swam with it, creating safety features in Nissan cars like Intelligent Brake Assist and Forward Collision Warning that are still used today.
Biomimicry—an approach to design that looks for solutions in nature—is by now so widespread that you may not even recognize the real-life inspiration behind your favorite technology. From flipper-like turbines to leaf-inspired solar cells to UV-reflective glass with spider web-like properties, biomimicry offers designers efficient, practical, and often economical solutions that nature has been developing over billions of years. But combine biomimicry with sports cars? Now you're in for a wild ride.
From the Jaguar to the Chevrolet Impala, automotive designers have a long tradition of naming their cars after creatures that evoke power and style. Carmakers like Nissan even go so far as to study animals in their natural environments to advance automotive innovation. Here are a few of the most famous classic cars—commercial and concept—that owe their inspiration to the deep blue sea.
A Bubble of One’s Own

McLaren P1 Supercar (Axion23 via Wikicommons)
While automotive designer Frank Stephenson was on vacation in the Caribbean, a sailfish mounted on the wall of his hotel made him do a double take. The fish's owner was especially proud of his catch, he told Stephenson, because sailfish are coveted precisely for being too fast to capture easily. Reaching speeds of 68 miles per hour, the sailfish is one of the fastest animals in the ocean (close competitors include its cousins the swordfish and marlin, all of which belong to the billfish family).
His curiosity hooked, Stephenson returned to his job at the headquarters of British automotive giant McLaren eager to learn more about what makes the sailfish the fastest in the sea. He discovered that the fish’s scales generate tiny vortices that produce a bubble layer around its body, significantly reducing drag as it swims.
Stephenson went on to design a supercar in the fish’s image. The P1 hypercar needs generous air circulation to sustain combustion and cool the engine at high performance, so McLaren’s designers applied the fish-scale blueprint to the inside of the ducts that channel air into the engine, boosting airflow by an incredible 17 percent and increasing the efficiency and power of the vehicle.
The Road Shark
Corvette Mako Shark (Tino Rossini / iStock)
Mako Shark Side Profile (CoreyFord / iStock)
Corvette Stingray (Arpad Benedek / iStock)
Corvette Manta Ray (Chris Sauerwald / iStock)
Out of all the ocean-inspired sports cars, the Corvette Stingray is perhaps the most famous. Colloquially named “The Road Shark,” the Stingray is still produced and sold today. It isn’t the only car in the suite of shark- and ray-inspired 'Vettes, however: there were also the Mako Shark, the Mako Shark II and the Manta Ray, although none of these enjoyed the longevity of the Stingray. America’s love affair with the American-built Stingray continues today; it remains a race-ready sports car for not a whole lot of money.
Corvette’s aquatic renaissance stemmed partly from one man’s fishing trip. General Motors design head Bill Mitchell, an avid deep-sea fisher and nature-lover, returned from a trip to Florida with a Mako shark—a pointy-nosed apex predator with a metallic blue back—which he later mounted in his GM office. Mitchell was reportedly captivated by the vibrant gradation of colors along the underbelly of the shark, and worked tirelessly with designer Larry Shimoda to translate this coloration to the new concept car, the Mako Shark.
Although the car never went on the market, the prototype alone gained iconic status. But the concept didn’t disappear entirely. Instead, after acquiring a few upgrades, the Mako evolved into the Manta Ray after Mitchell was inspired by the movement of a manta powerfully gliding through the ocean.
A Little More Bite

Plymouth Barracuda (crwpitman / iStock)
This iconic fastback almost had an entirely different namesake when Plymouth’s executives lobbied to call the car "Panda." Unsurprisingly, the name was unpopular with its designers, who were looking for something with a little more…bite. They settled on "Barracuda," a title more befitting of the muscle car’s snarling, toothy grin.
Serpentine in appearance, barracudas in the wild attack with short bursts of speed. They reach up to 27 miles per hour, and have been observed overtaking prey larger than themselves using their rows of razor-sharp teeth. Highly competitive animals, barracudas will sometimes challenge animals two to three times their size for the same prey.
The Plymouth Barracuda was hastily brought to market in 1964 to beat the release of its direct competitor, the Ford Mustang. The muscle car’s debut was rocky, but it returned in 1970 with an unapologetically fierce body design and a V8 motor. Sleek yet muscular, the Barracuda lives up to its name—a wickedly fast classic car with a predatory instinct.
Misguided by a Boxfish

Mercedes-Benz Bionic (NatiSythen via Wikicommons)
Despite its goofy-looking exterior, the boxfish represents an amazing feat of bioengineering. Its box-shaped, lightweight, bony shell makes the small fish agile and maneuverable, as well as purportedly aerodynamic and self-stabilizing. Such attributes made it an ideal inspiration for a commuter car, which is why Mercedes-Benz unveiled the Bionic in 2005—a concept car that took technical and even cosmetic inspiration from the spotted yellow fish.
Sadly, the Bionic never made it to market after the boxfish’s purported “self-stabilizing” properties were largely debunked by further scientific analysis. More research revealed that, over the course of its evolution, the boxfish had actually traded speed and power for an assortment of defensive tools and unparalleled agility. Bad news for the Bionic—but a biomimicry lesson for the books.
A purple-haired sorceress holding a fireball. A three-headed dragon wrapping its claws around the world. A great raptor emerging from the flames.
No, these are not characters from a Magic: The Gathering deck. They are avatars depicted on the official mission patches made for the National Reconnaissance Office (NRO). Just as NASA creates specially designed patches for each mission into space, NRO follows that tradition for its spy satellite launches. But while NASA patches tend to feature space ships and American flags, NRO prefers wizards, Vikings, teddy bears and the all-seeing eye. With these outlandish designs, a civilian would be justified in wondering if NRO is trolling.
Unfortunately, given the agency's extreme secrecy, it’s impossible to answer that question for sure. But based on information that has been leaked about some of the patches, it seems there may be a method to the artistic madness.
Forged in Secret
Understanding the patches requires a trip back to the 1960s and the early days of the human space program, explains Robert Pearlman, a space historian and the founder of collectSPACE. At the time, NASA allowed its astronauts to name their spacecraft. John Glenn chose Friendship 7, for example, for the Mercury space capsule he piloted when he became the first US astronaut to orbit Earth. Gordon Cooper went with Faith 7 for his spacecraft during the final mission of the Mercury program.
When it came time to launch the Gemini program, however, NASA decided to take away the naming privilege. The astronauts, understandably, were disappointed. So Gemini pilot Cooper asked NASA if they’d be willing to compromise and—in the tradition of military squadrons—allow the crew to design a patch instead. NASA agreed, and since then patches have become a staple for both crewed and robotic NASA flights.
NRO arrived on the space launch scene around the same time that NASA’s first patches were being designed. In 1960, then-President Dwight D. Eisenhower established the agency as a central authority for organizing the nation’s reconnaissance operations, and oversight of reconnaissance imaging satellites—spy satellites, in popular parlance—was a big part of that mission. Right from the start, NRO operations were all very cloak-and-dagger. The public didn’t even learn about the agency’s existence until 1971, and its first reconnaissance satellite program, Corona, wasn’t declassified until 1995. “The reconnaissance satellites have been a factor of the space program since the very beginning,” Pearlman says. “But they are indeed classified, and their capabilities are classified.”
Today NRO launches about four to six satellites per year, including the NROL-35 mission, with the patch seen above, slated to fly this Thursday. The public still doesn't know exactly what each satellite is doing, but for a couple decades now the agency has advertised the date and time of its launches—probably because, as Pearlman points out, “it’s hard to hide a rocket.” In response, a subculture of fervent hobbyists has become committed to watching the skies at night, piecing together the satellites’ orbits. At some point, those hobbyists discovered that—just like NASA—NRO also issues mission patches. The agency didn’t seem to care if the patches were leaked, and eventually it even started publishing depictions of the patches along with launch announcements. Even so, for years knowledge of the patches largely remained confined to enthusiasts, especially in the days prior to widespread social media.
Image by National Reconnaissance Office. Some enthusiasts muse that the five beams shooting out of the winged warrior's hand represent five pre-existing satellites in the Quasar communications system, since a Quasar satellite was the presumed payload of NROL-33 in May. The two wolves facing west and one facing east could indicate three new positions in this system. Finally, the setting sun may symbolize that this will be the final Quasar launch. (original image)
Image by National Reconnaissance Office. Edward Snowden’s NSA leaks broke shortly after the release of the patch for NROL-39, leading many to speculate that the octopus represented the tentacles of the government reaching out to control the world. After using the Freedom of Information Act, however, a journalist found a more mundane explanation: the octopus represents a failed instrument (nicknamed an octopus) that the team had to contend with while preparing the satellite for its December 2013 launch. (original image)
Image by National Reconnaissance Office. The presumed payload for NROL-38, launched in June 2012, is a type of satellite that functions with two others, creating a constellation. If that is true, the three-headed dragons might represent that satellite trio, and the positions of their heads around the Earth could hint at their real-world locations. (original image)
Image by National Reconnaissance Office. The launch number for NROL-66, which lifted off in February 2011, inspired the Route 66 reference. Some speculated that the bull is a reference to the devil, because of 66’s affinity with 666. The red bull could also be a nod to the type of rocket used for the launch, called a Minotaur. NROL-66 was not actually a spy satellite mission, but a classified device launched to demonstrate new technology. (original image)
Image by National Reconnaissance Office. The bird on this patch for NROL-49 could be an eagle to represent the US, and the flames might stand for the fireball produced by the Delta IV Heavy rocket used for the January 2011 launch. The feathered form could also be a phoenix—there is speculation that this satellite took the place of another that was discontinued. The Latin motto reads: "Better the devil you know." (original image)
Image by National Reconnaissance Office. Little is known about the patch for NROL-16, launched in April 2005. The pelican could refer to a location where those birds live, and the gorilla could be America asserting its dominance. (original image)
Image by National Reconnaissance Office. The rocket on the patch for NROL-1 represents the Atlas rocket used in the August 2004 launch, and the geometric shape in the middle might represent the Pentagon or the Department of Defense. “I don’t know about the hearts,” Pearlman says. (original image)
Image by National Reconnaissance Office. This is the NROL-11 patch design that amateur satellite trackers cracked in 2000. Its design inadvertently revealed the mission and location of its payload (see main text). (original image)
Image by National Reconnaissance Office. This patch for NROL-10, launched in December 2000, is a mystery. (original image)
Image by National Reconnaissance Office. The tiger is circling the globe, just like the satellite launched on NROL-9 in May 1999. Why a tiger—or the choice for a mission motto—nobody knows. (original image)
The patches’ relative obscurity changed in 2000, with the launch of a payload known as NROL-11. The mission patch depicted what appeared to be owl eyes peering down at the Earth, where four arrow-shaped vectors, two per orbit, made their way across Africa. Three of the vectors were white, and one was dark. Based solely on studying the design, civilian satellite watcher Ted Molczan hypothesized that the patch showed a failed satellite (the dark vector), and that the newly launched satellite would take its place.
Sure enough, after the launch a new satellite appeared just where Molczan predicted. Pearlman, who reported on the story at the time, says that NRO at first told him “no comment” when he contacted them. About 30 minutes later they called him back and asked him not to publish the story. Pearlman told them no dice, and in the end, the NRO spokesman told him that the patches were just morale-builders for those who work on the launches.
Whether or not NRO admits it, NROL-11’s patch seemed to have inadvertently revealed classified details about its payload’s whereabouts, and when the story broke, the patches suddenly appeared on the public’s radar. Although the patches were under more scrutiny than ever before, the agency didn’t flinch. Rather than classify them or discontinue the tradition, NRO ramped up its game. Subsequent designs became even more ridiculous, featuring patriotic gorillas or 16th-century ships, for example. The public ate it up. Some—like the 2013 mission heralded by a giant Earth-eating octopus—sparked their own media frenzies, and rip-offs of the most popular designs popped up for sale online. NRO’s new motto seems to be “better to have a more outlandish design than show actual details about the flight,” Pearlman says.
As for their motivations, Pearlman doesn’t think they’re in it just for the lolz. “No, I don’t think they’re playing us,” he says. “If anything, it’s an internal gag. Like, how far can you take it without being reprimanded? Or maybe the patches represent jokes that cropped up in the processing of the satellites, which we’ll never know unless they’re declassified—and maybe not even then.”
Trying to update stairs is like reinventing the wheel: Improving on a simple design that has withstood the test of time is no easy task. For people with injuries or limited mobility, however, climbing a staircase is no easy task either. By hacking the physics of stair navigation, a team of biomechanical researchers has invented a prototype that could help propel stairs, and their users, to new heights.
“My mom likes to complain that she’s really active and she can walk over long distances, but every time there’s stairs she has difficulty climbing them,” says Karen Liu, a professor of computer science at the Georgia Institute of Technology.
Liu, also the corresponding author on a study in PLOS ONE that describes the new stair prototype, recognized that climbing stairs is ranked as one of the most difficult activities for the elderly or injured. Even when individuals are otherwise physically healthy and mobile, losing the ability to navigate stairs is often the deciding factor that forces people out of their homes and into assisted living communities.
The new stair prototype harnesses energy when a person descends the stairs, and recycles it on the way back up to give users a boost. It works because, counter-intuitively, people waste more energy going down stairs than up.
When climbing the stairs, all the energy you put into hefting your legs gets turned into potential energy—when you get to the top you’re higher than you were before. Descending the stairs is another story. With every step your body is essentially in a controlled fall, and the energy your muscles expend to prevent a nasty tumble to the bottom is wasted. Liu thought that wasted energy could be captured and returned on the upward climb.
“There was no way I could create a device like that,” Liu said. As a computer scientist, she had a good idea but no way to actually build a prototype. She turned to Lena Ting, professor of biomedical engineering at Emory University and an expert in human kinetics. “No one knows movement better than Lena,” she says. Together with Ting’s then post-doctoral researcher Yun Seong Song, they put their heads together and refined a design.
To their surprise, reinventing the wheel was a piece of cake. “It wasn’t as difficult as I thought it would be,” Song says. “We aimed for simplicity—we weren’t going for a fancy robot that can move and talk and make good decisions.” They were just trying to hack into the mechanics that already happen every time you go up or down a flight of stairs.
Each tread of their prototype energy-recycling assistive stairs is mounted on springs. On the way down, pressure from each footstep pushes a moveable platform down, compressing the springs until an electromagnetic lock holds the tread in place. The compressed springs capture and hold the energy that would otherwise be dissipated. On the way back up, a pressure gauge on each tread senses the foot, releases the lock, and the springs return that stored energy to help propel the climb.
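The arithmetic behind this energy banking is easy to sketch. The following back-of-the-envelope calculation uses assumed values for body mass, riser height and tread travel, not figures reported by the research team:

```python
# Back-of-the-envelope sketch of the energy a spring-loaded tread
# could capture per step. All numbers are illustrative assumptions,
# not measurements from the study.

G = 9.81             # gravitational acceleration, m/s^2
MASS = 70.0          # walker's mass, kg (assumed)
STEP_HEIGHT = 0.17   # typical stair riser height, m (assumed)
TREAD_TRAVEL = 0.05  # how far the tread sinks underfoot, m (assumed)

# Potential energy released (and normally dissipated by the muscles'
# "controlled fall") with each downward step: E = m * g * h
energy_per_step = MASS * G * STEP_HEIGHT  # joules

# A linear spring stores E = 0.5 * k * x^2 when deflected by x.
# Solving for the stiffness needed to bank one step's energy:
stiffness = 2 * energy_per_step / TREAD_TRAVEL**2  # N/m

print(f"Energy per step: {energy_per_step:.0f} J")
print(f"Required spring stiffness: {stiffness / 1000:.0f} kN/m")
```

Allowing the tread to sink farther underfoot lets a softer spring store the same energy, which trades stiffness against how spongy the stairs feel; this is presumably part of why the team iterated through several spring designs.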
“I thought, there’s no way this is going to work,” recalls Ting. Spring-loaded steps brought to mind visions of catapults or slingshots. That’s fine if you’re an Olympic vaulter, but not if you’re an elderly or injured person just trying to make your way upstairs.
They went through several spring designs and settled on a gentle extension spring. If you stand still on the new stairs, you don’t have to worry about being catapulted to the next landing. It’s not until you’re in motion that the spring-loaded platforms gently assist your foot up to the next level. “It can only help you as you’re actively going up or down, it can’t push you around,” Ting says.
Walking down their stairs “feels like walking down a hill with very soft soil,” Song says. “It’s like you have a cushion at every step, and as you walk down you’re squishing them. You feel low gravity.” That sensation was easy to master for the test subjects they brought into the lab.
Going up, on the other hand, “It’s like someone is actually lifting your foot,” Song says. Everyone has squished through soggy grass in the rain, but feeling ghostly assistance as you walk up the steps takes some getting used to. Still, after more than 300 test runs, they reported no safety issues. They also used sensors to measure joint activity, and showed that ascending the assistive stairs required significantly less work than conventional stairs demand. What’s more, their prototype comes together for a fraction of the price of installing an elevator or stairlift.
With these promising results, the team is hopeful they’ve got a product that could be marketed to make life easier for the elderly or injured. “Energy recycling stairs are a great idea,” says Steve Collins, professor of mechanical engineering at Carnegie Mellon University who was not involved with the study.
“Our muscles are amazing things,” Collins says. They grow stronger to suit our needs, they run off fuel we supply them from the environment, and they heal themselves. “Any engineer would love to be able to do some of those things,” he says, but for all their amazing capabilities, one thing muscles are bad at is recycling energy. With this new staircase, Collins says, the inventors have hacked the physics to do what our muscles can’t. “They efficiently capture and return the energy to you.”
Because the stairs are easy and inexpensive to install, Collins thinks they could allow older people or those with limited mobility to stay in their homes a bit longer. “It could really make a difference for a lot of people,” he says. “With some little tweaks...I think this could be a good product.”
Liu and her team have a provisional patent on their assistive stairs, but marketing the invention will depend on interest. They have plenty of ideas, and could even implement a suite of new features, such as storing the captured energy for use in other applications. Rather than boosting a climber’s foot, for example, a jerry-rigged mechanism could shuttle that energy into charging a cell phone.
Next on their docket, however, is developing a full set of stairs that builds upon the existing prototype. At the end of the day, their goal is to help those with impaired mobility stay in their homes. Liu recounted showing her mom the prototype stairs. “Her comment was, ‘Well, you had better hurry.’"
Watch this video in the original article
I’m sitting in a boat, anchored in a secluded cove of the Panama Canal, waiting for the sun to set. Occasionally, the mild aftershock of a freighter passing through the center of the canal rocks the boat. But for the most part, the muddy water is calm.
My hosts, bat expert Elisabeth Kalko and Ben Feit, a graduate student studying under her tutelage, are setting up their sound equipment in the last remaining light. “The transition between day and night happens so fast,” says Kalko. She waxes poetic—on the cutout-like quality of the silhouetted trees and the rattling cicada orchestra. Her fine-tuned ear isolates the croaks of frogs and the croons of other creatures, and she mimics them for my untrained ear. Hear that? I imagine she can almost tell time by the rhythm of the forest’s pulsing soundtrack, she knows it so well.
Since 2000, Kalko, who is jointly appointed as the head of the experimental ecology department at the University of Ulm in Germany and a staff scientist at the Smithsonian Tropical Research Institute (STRI), has been making two trips a year, usually for a month each time, to Panama’s Barro Colorado Island (BCI). The six-square-mile island, where STRI has a field station, is about a 40-minute ferry ride from Gamboa, a small canal town north of Panama City. A hotbed of biodiversity, the island is home to close to half of Panama’s 220 mammal species, which live and reproduce there.
The bats are what draw Kalko. Around 120 bat species—a tenth of the species found worldwide—live in Panama, and of those, 74 can be found on BCI. Kalko has worked closely with a quarter of them and estimates she has observed about 60 in an effort to better understand the various behaviors that have allowed so many species to coexist.
She has taken me to “Bat Cove,” just a five-minute boat ride from BCI’s docks, to get a glimpse of her work. Just inside the forest, I’m told, is a 65-foot-tall hollowed tree with a rotting pile of guano, scales and fish bones at its base—the roost of Noctilio leporinus. The greater bulldog bat, as it’s more commonly known, is the only bat on the island with fish as its primary diet. Using echolocation to locate swimming fish making ripples in the water’s surface, it swoops down over the water, drags its long talons and snatches its prey. In flight, it curls its head down to grab the fish, then chews it and fills its cheek pouches like a hamster.
Kalko holds a bat detector above her head. The device picks up the high frequency echolocation calls of nearby bats and runs them through a buffer to make them audible. Slowed down, the calls sound like the chirps of birds. Feit watches as sonograms of the sounds appear on his laptop. Kalko has compiled a library of these calls and, from their frequencies and patterns, can identify the species of the caller. As we are sitting, listening, she differentiates between aerial insectivores above the canopy, fruit-eating bats in the forests and fishing bats over the water. She can even determine their stage of foraging—whether they are searching or plunging in for a kill—from the cadence of the calls. Her deep passion for bats is contagious, and it puts me at ease, given the situation. When the chirps come in loud on the detector, her assistant casts his headlamp across the surface of the water. Greater bulldog bats often have reddish fur and can have a wingspan measuring over two feet, but their fluttering wings are the only things visible as they fish. “Wah,” Kalko exclaims each time a bat flits by the boat.
Image by Christian Ziegler. Out in “Bat Cove,” Elisabeth Kalko uses a bat detector to make the high frequency echolocation calls of nearby bats audible. She watches as sonograms of the sounds appear on her laptop. (original image)
Image by Christian Ziegler. After dark, greater bulldog bats leave their roosts to forage for fish. Kalko can determine the stage of a bat’s foraging, meaning if it is searching or plunging in for a kill, from the cadence of its call. (original image)
Image by Christian Ziegler. Noctilio leporinus, or the greater bulldog bat, is the only bat on Barro Colorado Island with fish as its primary diet. Most bats eat insects or fruit. (original image)
Image by Christian Ziegler. Fishing bats use echolocation to detect ripples in the water’s surface, then swoop down and snatch up their prey. (original image)
Image by Christian Ziegler. Noctilio leporinus sweeps its long talons across the water’s surface to collect its prey. (original image)
Image by Christian Ziegler. Greater bulldog bats can often be spotted by their reddish-orange fur and enormous wingspan. From wingtip to wingtip, they can measure over two feet. (original image)
Image by Christian Ziegler. In flight, Noctilio leporinus curls its head down to bite into the fish. (original image)
Image by Christian Ziegler. A greater bulldog bat might eat a dozen fish in a single night. (original image)
Image by Christian Ziegler. Once Noctilio leporinus catches a fish, the bat chews it and fills its cheek pouches like a hamster. (original image)
Image by Christian Ziegler. Bat expert Elisabeth Kalko catches bats in mist nets. She is then able to observe the bats’ behavior more closely in a flight cage, back at Barro Colorado Island’s field station. (original image)
Image by Christian Ziegler. Several Lophostoma silvicolum huddle inside a termite nest. Kalko suspects that the bats release some chemical that acts as a termite repellant. (original image)
Image by Christian Ziegler. A hotbed of biodiversity, Barro Colorado Island—a six-square-mile research island in the middle of the Panama Canal—is home to close to half of Panama’s 220 mammal species. (original image)
Her shrieks are in awe, not fear. Kalko attributes bats’ historically bad reputation to people’s tendency to misinterpret encounters with them as attacks. She calls to mind popular images of a panicky bat accidentally trapped indoors and the cartoonish scene of a bat landing in a woman’s hair. Imaginations really run wild with the carnivorous, blood-sucking vampire bat, as well. But it is her hope that people come to see the beneficial roles bats play, first and foremost as pollinators and mosquito eaters. “Research pays off,” says Kalko. Scientists, for example, are finding that a chemical in vampire bat saliva that acts as an anticoagulant could potentially dissolve blood clots in humans with fewer side effects than other medications.
Kalko’s greatest discoveries are often made when she catches bats in mist nets—fine mesh nets, strung like volleyball nets, that safely trap an animal in flight—and studies them in a controlled environment. She sets up experiments in flight cages at BCI’s field station and captures their movements with an infrared camera. One of her latest endeavors has been to team up with engineers from around the world on the ChiRoPing project, which aims to use what is known about sonar in bats to engineer robotic systems that can be used where vision isn’t feasible.
In her research, Kalko has found bats that live in termite nests; fishing bats off the coast of Baja, Mexico, that forage miles into the ocean; and bats that, unlike most, use echolocation to find stationary prey, like dragonflies perched on leaves. And her mind is always spinning, asking new questions and imagining how her findings can be applied in some constructive way to everyday life. If bats and ants can coexist with termites, do they produce something that is termite repellant? And if so, can humans use it to stop termites from destroying their houses and decks? Fruit-eating bats essentially soak their teeth in sugar all the time and yet they don’t have cavities. Could an enzyme in their saliva be used to fight plaque in humans?
Early in the night, several bats circle the area. Kalko recalls a feeding frenzy of small insectivores called molossus bats she once witnessed in Venezuela, when she was “surrounded by wings.” This is far from that, mainly because it is just a day or two after the full moon, when bats and insects are considerably less active. As the night wears on, we see less and less. Kalko emphasizes the need for patience in this type of fieldwork, and jokes that when she is in Panama, she gets a moon burn.
“So many billions of people in the world are doing the same thing, day in and day out,” she says, perched on the bow of the boat, as we motor back to the field station. “But we three are the only people out here, looking for fishing bats.”
Curator and FOOD: Transforming the American Table, 1950-2000 exhibition project director Paula Johnson recalls a memorable visit with Chuck Williams.
Chuck Williams, the founder of Williams-Sonoma—the kitchenware emporium that, beginning in 1956, introduced Americans to distinctive tools and cookware from different parts of the world—died on December 5. Upon hearing the news, we thought back to a sunny December day in 2011, when we took a field trip to Mr. Williams's San Francisco offices on behalf of the exhibition project, FOOD: Transforming the American Table, 1950-2000.
Curator Rayna Green, Associate Director Maggie Webster, and I were visiting several California donors to the exhibition and our first stop was at Williams-Sonoma headquarters. At 96, Mr. Williams was still coming to work regularly, and, dapper in coat and tie, he welcomed us warmly into his office overlooking the San Francisco Bay. The room itself was rather like a Williams-Sonoma store—open wooden shelves held an array of objects, artfully placed, that subtly beckoned to us, urging us to come a bit closer: a brilliant red KitchenAid stand mixer, a white ceramic creamer shaped like a playful cow, a painted water pitcher in the form of a chicken, and ceramic tea pots and decorated bowls arranged just so. We realized that this was the same design aesthetic that set Mr. Williams's stores apart from other purveyors of kitchen equipment—at the time, mostly hardware stores, where stacks of pots, pans, and tools were the norm. When we settled in for a conversation, he remarked on how his sense of design informed the look and layout of his stores from the very start: "That was one of the things I did right at the beginning. . . Not just putting the pots on the shelf without thinking about it. Putting it so the handle was partly out in front of the shelf and it welcomed the customer to pick it up to look at it."
Much of the conversation that day had to do with Mr. Williams's role in what we were calling the "good food movement" in the exhibition.
With roots in northern California, the movement was largely a reaction against the fast, processed, and packaged foods that had become so popular in households across America in the 1950s and 1960s. The California devotees of fresh, local, and organic foods were also interested in trying new cuisines and learning to cook beyond just the basics. While Julia Child guided these intrepid home cooks through unfamiliar techniques and recipes, Chuck Williams supplied them with previously unavailable cookware from France and Italy to help them achieve results. When asked about particular items, he said, "I think the most popular one was the soufflé dish. Just a plain, white soufflé dish. There wasn’t anything like that available in this country." We decided then and there to include one of Julia Child's white soufflé dishes made by one of Mr. Williams's favorite sources, the French company Pillivuyt, in the exhibition.
During our visit, Mr. Williams also talked about the early 1970s and the debut of the Cuisinart food processor, and what a difference it made to home cooks. He recalled offering the Cuisinart for sale almost immediately in his stores and how he, too, began using one in his own kitchen. As we talked about Julia's early adoption of the food processor, Mr. Williams offered to donate his first Cuisinart for the museum's collections and for the exhibition.
We note Chuck Williams's passing with sadness, but also with gratitude for his generosity to the museum. By sharing his memories of the "good food" movement, he helped us shape a section of the exhibition and provided insight into the types of objects that would most accurately represent that important story in American culinary history.
Paula Johnson is a curator in the Division of Work and Industry. She has also blogged about cooking with Julia Child in Washington, D.C.
It was not yet dawn on the U.S. West Coast when the Cassini spacecraft sent its final message to Earth and began its suicide plunge into Saturn. At NASA's Jet Propulsion Laboratory in Pasadena, California, scientists and engineers crowded into a packed mission control room, while others watched the signal unfold down the road on the campus of the California Institute of Technology. At just after 4:55 a.m. local time, September 15, 2017, the tiny orbiter ended its 20-year mission.
"I liken it to an undefeated boxer, or a baseball player that retires at the end of the season," said Brent Buffington, an aerospace engineer at JPL who helped plot Cassini’s path over the past six and a half years. "They went out on their terms."
Still, Cassini managed to wring out the last drop of science possible as it met its end in Saturn's dense clouds. Even as it hurtled toward oblivion, it was also investigating the planet's atmosphere for the first time. This was characteristic of the orbiter, which had been uncovering troves of incredible insights about Saturn and its moons ever since it arrived at the ringed planet in 2004. The mission's lifetime was extended not once but twice to give the craft more time to probe Saturn’s mysteries.
Cassini didn't stop at Saturn, either: The spacecraft pierced the thick smog of Saturn's largest moon, Titan, to discover lakes of methane and ethane—the only stable bodies of surface liquid known to exist on a world other than Earth. It unveiled strange landforms, from dunes to labyrinths to possible ice volcanoes. Cassini also captured incredible images of geysers spouting from the southern pole of the icy moon Enceladus, and unmasked a liquid ocean hidden beneath the moon's icy crust.
These and other observations have helped make the case that our solar system is filled with ocean worlds—and that life may be able to evolve and even thrive far from the sun.
Saturn's northern hemisphere in May 2017, observed by the Cassini mission. (NASA/JPL)
Ultimately, it was NASA's concern for Enceladus and Titan that mandated Cassini's death. Both worlds are ripe for life to evolve on their own, and scientists hope to hunt for possible signs on future missions. One very real worry is the possibility of contaminating these kinds of worlds with our microbes (to the point that we have an entire Planetary Protection Office devoted to preventing that from happening).
"The last thing we want to do is pollute these pristine bodies with the Earth microbes that could be on our spacecraft," Buffington said. So he and the navigational team sat down to figure out how to maximize how much science they could get out of Cassini, while keeping these potentially habitable worlds clear of contamination.
The navigational team pursued several potential orbits for Cassini once its fuel tank was on empty, Buffington said. They could park the spacecraft in permanent orbit around Saturn, sending back information about the system for years to come. They could smash it into the rings to see how they would react, a collision that could also provide insights. They could crash it into one of the many Saturn moons. Or it could leave the system entirely, traveling to another giant planet or the strange asteroids of the outer solar system.
Each possibility was presented to the science team, who looked for the best way to make the most of the spacecraft's final days. The selection process was, Buffington says, "Darwinism at its finest.”
Smashing into the rings was swiftly ruled out. Trying to prove that none of the resulting pieces would end up falling onto—and potentially contaminating—Titan or Enceladus was all but impossible. Exploring another world was also rejected, given how many outstanding questions about Saturn remained.
And while an eternal orbit around Saturn sounded good, there was one big problem: Titan, one of the worlds they hoped to preserve, had the potential to cause chaos, and could one day send Cassini spiraling into one of the habitable moons.
So the team decided to put Titan's power to good work. Buffington, who left the mission in 2012 but returned to JPL to witness Cassini’s grand finale, said that one of the major breakthroughs was the realization that the massive moon could be used as a workhorse. That is, engineers could take advantage of the fact that, when a small body passes by a larger moving body, the small body’s path is altered in a way that scientists can calculate and predict.
"A single Titan gravity assist could be used to jump the entire main ring system," allowing the spacecraft to skirt the danger zone and travel between the planet and its rings, he said.
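The intuition behind a gravity assist can be sketched with a toy calculation. In the moon's own rest frame a flyby conserves the spacecraft's speed, but because the moon is itself moving around the planet, turning the spacecraft around relative to the moon changes its speed in the planet's frame. The idealized head-on, 180-degree case below is illustrative only—the function name and all numbers are assumptions for this sketch, not actual Cassini mission figures.

```python
# Idealized 1-D gravity assist. In the moon's rest frame the spacecraft's
# speed is unchanged by the flyby; in the planet's frame, a full 180-degree
# turn around a moon moving at u_moon adds 2 * u_moon to the spacecraft's speed.

def headon_assist_exit_speed(v_in: float, u_moon: float) -> float:
    """Spacecraft approaches at v_in (planet frame, km/s) head-on toward a
    moon orbiting at u_moon. Relative speed v_in + u_moon is preserved in
    the moon's frame; after a 180-degree turn the planet-frame speed is
    v_in + 2 * u_moon."""
    return v_in + 2.0 * u_moon

# Titan orbits Saturn at roughly 5.6 km/s, so even a modest flyby can
# change a spacecraft's speed by several km/s without burning fuel.
print(headon_assist_exit_speed(5.0, 5.6))  # about 16.2 km/s
```

Real flybys bend the trajectory by much less than 180 degrees, so the actual boost is smaller, but the principle—borrowing momentum from the moon—is the same one the Cassini navigators exploited.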
After the navigation team mapped out Cassini's final orbits a full half-decade before its demise, they sent the plans to the Cassini flight controllers. Every 10 weeks, the controllers sent a packet of navigational commands to the spacecraft. They didn't chart the course, but they were the ones who made sure Cassini received it.
"They hand us the reference trajectory and then we fly it," said David Doody, head of JPL's Realtime Flight Operations department. Doody and his team of seven "Aces" (the official name for engineers who talk to the spacecraft in real time) input the small maneuvers that put the spacecraft where it needs to be. But while they helped nudge Cassini onto the correct path, it was Titan and its vast gravity that did the heavy lifting.
"Titan is our big engine," Doody said. If Cassini were traveling down the freeway, he continued, the Aces would be responsible for keeping it in the correct lane. But the massive moon exerts the most control. "Titan is our offramp," he said.
In April, death by Saturn became inevitable. That was when the gravitational effects of a Titan flyby led to the last shift in a series of changes that pointed Cassini straight at Saturn, with no possibility of escape. Even if the mission planners had somehow changed their minds, the small thrusters, designed only for minor course corrections, wouldn't have been powerful enough to get the spacecraft off the crash course Titan had set it on.
At 3:53 a.m. on September 13, Cassini mission operations engineer Michael Staab uploaded the last tweak to the spacecraft. Two days before the end of the Grand Finale, Staab was at the console to send the last packet Cassini would ever receive: the final nudge from the thrusters that would put it on a precise path toward its demise. Although the spacecraft's course was already set, this last series of commands sealed its fate.
Did he feel any regret?
"I'm a heartless engineer," he laughed, sitting at the Ace console hours before the spacecraft met its fate. Unlike many of the scientists, who refer to Cassini as 'she,' Staab reminded us that Cassini is a robot, doing what it's designed to do.
For Doody, it was not his first time sounding the death knell for a beloved spacecraft. In 1994, he sent the final command to NASA's Magellan spacecraft that told it to duck into the clouds of Venus. But while Magellan required a single, specific command to meet its demise, Cassini's final road was a series of incremental changes charted half a decade in advance. "This time, it's so elegant," Doody said.
As Cassini hurtled into Saturn's atmosphere, Doody stood in mission control at the Jet Propulsion Laboratory in California. The telescopes of the Deep Space Network in Australia, Spain and California connect mission control with satellites plunging through the depths of space. A plaque in the floor next to the Cassini Ace console identifies mission control as "the center of the universe."
After working on the mission from the very beginning, Doody says, the conclusion feels both exhilarating and final. "This is the end of a 20-year commitment," he said. "It's been blood, sweat and tears the whole time, and now that it's over, it's like jumping off a cliff."
Staab was standing in mission control as well, working 27 hours straight and serving as the backup Ace for the Grand Finale. "I'm sad to see it gone," he said. "But I'm very proud of what we accomplished."
Buffington was also at JPL, though not inside mission control. Like Staab, he says he didn't become overly emotional about the spacecraft, instead saving his admiration for the scientists and engineers who made this mission possible.
"If there's any emotion involved, it's just thanking the people for the amazing job they did engineering and building the spacecraft before I was even old enough to write my name," he said.
Cassini met its fiery fate with the help of flight controllers, engineers and Titan, but its legacy will continue in the years to come. The information it provided about the Saturn system, including its final measurements of the planet's atmosphere, will spur more than a decade of research.
"Cassini is inspiring all of us, young and old, to keep looking up and wondering what's out there," Buffington said.
Sending supplies into space is time-consuming and costly, so most proposals for long-term settlements on Mars or the moon include plans to grow plants in local greenhouses. When those greenhouses might actually be needed is still unclear, but Japan, Russia and China have all expressed interest in sending humans to the moon in the next few decades. NASA says its Asteroid Redirect Mission, which aims to capture and steer a small asteroid into lunar orbit, will help develop technologies for a human Mars mission in the 2030s. SpaceX CEO Elon Musk has pledged to land humans on Mars by 2026 using his company's own rockets. And the Netherlands-based group Mars One claims it will send colonists on a one-way trip to the Red Planet as early as 2024.
In the meantime, scientists have made some interesting headway on the science of crops in space, so we’ll be ready to feed any eventual interplanetary pioneers:
Choose the best soil
Will the local soil allow a Mars garden to grow? To find out, a team led by ecologist Wieger Wamelink at Wageningen University in the Netherlands attempted to grow plants—including carrots, tomatoes and rye—in simulated lunar and Martian soil. The fake regolith was manufactured by NASA using volcanic Earth soils and designed to replicate not only the size of the soil particles but also the geochemical composition. As a control, the team used a nutrient-poor soil collected from a site near the Rhine River in Europe. They did not vary light or atmospheric conditions in their trials, as they reasoned that plants on Mars and the moon would be grown in greenhouses, where such variables could be controlled.
So how might carrots do in alien soil? Surprisingly well, according to the team’s experiments, which were detailed in a recent issue of the journal PLOS ONE.
Thirteen of the 14 plant species managed to grow in all three soil conditions, and many survived the entire 50-day trial period. However, only field mustard and garden cress were able to produce new seeds, and then only in the artificial Martian soil. In general, imitation Mars proved much better for plant growth than fake moon—and it even slightly outperformed the Rhine River soil.
Test your space food safety
On smaller Mars, the gravity is about 38 percent of what we feel on Earth. NASA has been sending plants into space for decades to test whether reduced gravity affects germination, root growth and overall yield, and that information could be crucial to growing food on the Red Planet. But so far, NASA astronauts have not been able to eat what they sow, as mission managers have been worried about the safety of space-bred crops.
The “Veggie” experiment is the first NASA project that hopes to not only show that space plants are safe to eat, but to also start producing fresh food for astronaut consumption. The plant growth chamber was installed on the International Space Station and activated in May. Inside, astronauts placed special “pillows” containing a soil-like growth medium and seeds of red romaine lettuce. The plants were carefully watered and nurtured by the glow of LED lights. After 28 days, the lettuce was harvested, frozen and stored for return to Earth, so scientists can compare the space lettuce with counterparts grown in a Veggie chamber on Earth.
A 28-day-old "Outredgeous" red romaine lettuce plant grows in a Veggie prototype pillow. (NASA/Gioia Massa)
Make sure you can breathe easy
Hot on the heels of the popular Curiosity mission, NASA’s next Mars rover is slated to land on the Red Planet sometime around 2021. Initial proposals for science instruments to fly on board the rover included the Mars Plant Experiment (MPX). This device would employ a clear “CubeSat” box attached to the outside of the rover, which would hold Earth air and about 200 seeds of a small flowering plant called Arabidopsis. Part of its goal was to see if plants could grow and thrive in the harsh radiation on the Martian surface.
Sadly, MPX was not selected for the rover’s science mission. But the robotic explorer will carry the Mars Oxygen ISRU Experiment (MOXIE), which will try to produce oxygen from the abundant carbon dioxide in Mars’s atmosphere. While not intended to be useful for plants, a system for converting CO2 to oxygen would certainly benefit future Mars gardeners as they tend their crops.
Microwave Mars for water
Evidence from Martian satellites and rovers strongly suggests that liquid water once flowed freely on Mars. Most of that water is now thought to be locked away as ice at the poles and beneath the planet’s surface, so future Martian farmers will need to find ways to access this vital resource.
As a first step in its colonization scheme, Mars One plans to send an unmanned lander to Mars in 2018 that will carry an experiment to demonstrate that water extraction is possible. And scientists at the Colorado School of Mines are investigating ways to avoid the painstaking work of drilling for water ice and then melting it. They think microwaves could be used to “cook” the relatively accessible Martian surface soil, and vaporized water could then be collected as condensation using a chilled plate.
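A back-of-envelope calculation shows why microwaving near-surface ice is attractive compared with drilling. The sketch below is entirely illustrative—the ice fraction, starting temperature and material constants are assumptions for this estimate, not figures from the Colorado School of Mines work.

```python
# Rough energy budget for "cooking" water ice out of cold regolith.
# Assumed inputs (not mission data): soil at -60 C containing 3% water
# ice by mass; condenser losses are ignored.

CP_SOIL = 800.0          # J/(kg*K), typical heat capacity of dry regolith
CP_ICE = 2100.0          # J/(kg*K), heat capacity of water ice
L_SUBLIMATION = 2.83e6   # J/kg, latent heat to turn ice into vapor

def energy_per_kg_soil(ice_fraction=0.03, t_start_c=-60.0):
    """Joules needed to warm 1 kg of icy soil from t_start_c to 0 C
    and sublimate all of its ice."""
    dT = 0.0 - t_start_c
    heat_soil = (1.0 - ice_fraction) * CP_SOIL * dT
    heat_ice = ice_fraction * (CP_ICE * dT + L_SUBLIMATION)
    return heat_soil + heat_ice

e = energy_per_kg_soil()                 # ~135 kJ per kg of soil
water_per_kwh = 3.6e6 / e * 0.03         # kg of water recovered per kWh
print(round(e / 1000), "kJ/kg soil;", round(water_per_kwh, 2), "kg water per kWh")
```

Under these assumptions, each kilowatt-hour of microwave energy yields a bit under a kilogram of water—most of the energy goes into warming the dry soil, which is why heating only the shallow, relatively accessible surface layer matters.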
A stuffed Dragon returns to Earth, Hubble spots a smiley face in the sky, a cosmic rose blooms in X-rays and more in our picks for this week's best space images.
Using a robotic arm on the International Space Station, astronauts reached out and plucked SpaceX's Dragon capsule from orbit on January 12. About a month later, on February 10, the unpiloted spacecraft headed back home, splashing down in the Pacific Ocean at 4:44 p.m. local time. The successful trip marked SpaceX's fourth cargo run to the ISS as part of a contract with NASA. Other companies and national space agencies can deliver goods to the orbiting lab, but Dragon is currently the only uncrewed cargo craft from any country that can also come back with supplies and science experiments. The rest are designed to burn up on reentry.
Galactic Smiley Face (NASA/ESA)
The Hubble Space Telescope took a look at galaxy cluster SDSS J1038+4849—and the cosmic object smiled back. The unusual effect is caused by gravitational lensing, when a massive object bends and magnifies the light from things behind it. In this case, the hefty galaxy cluster has created what's known as an Einstein ring, a rare sight that requires a precise alignment between the light source, the lens and the observer. Inside the ring, the two bright eyes are actually luminous galaxies.
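The scale of an Einstein ring can be estimated from the lens mass and the distances involved. The sketch below uses the standard point-mass lens formula; the function name, the cluster mass of roughly 10^14 solar masses and the round-number distances are illustrative assumptions, not measurements of SDSS J1038+4849.

```python
import math

# Einstein radius for an idealized point-mass lens:
#   theta_E = sqrt( (4 G M / c^2) * D_ls / (D_l * D_s) )
# where D_l, D_s are distances to lens and source and D_ls is between them.

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
MPC = 3.086e22      # one megaparsec, m

def einstein_radius_arcsec(mass_kg, d_lens_m, d_source_m):
    """Angular Einstein radius in arcseconds (flat-space distance
    subtraction is an approximation, fine for this rough estimate)."""
    d_ls = d_source_m - d_lens_m
    theta_rad = math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens_m * d_source_m))
    return math.degrees(theta_rad) * 3600

# A ~1e14 solar-mass cluster lensing a source twice as distant:
print(einstein_radius_arcsec(1e14 * M_SUN, 1000 * MPC, 2000 * MPC))
```

The answer comes out at tens of arcseconds, which is why massive clusters are the lenses that produce rings large enough for Hubble to resolve, while a single star's ring would be a thousand times smaller.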
While Hubble saw smiles, NASA's Solar Dynamics Observatory caught sight of a much more serious "face" in the sky. This image of the sun captured on February 10 shows a long, dark filament snaking across the lower part of the solar disk. The filament is actually a cloud of relatively cold material that is hovering in the sun's hot upper atmosphere, or corona. The structure, which is more than 533,000 miles long, gives the sun an eerie countenance, but it's nothing to stress over. Filaments typically drift peacefully in the corona for a few days and then vanish.
Star Blossom (X-ray: NASA/CXC/U.Texas/S.Post et al, Infrared: 2MASS/UMass/IPAC-Caltech/NASA/NSF)
Looking like a rainbow-hued rose, the supernova remnant G299.2-2.9 shines against a garden of stars in this composite image made with X-ray and infrared data. The object is an expanding shell of debris created when a very massive star exploded. It is particularly interesting because it probably resulted from a type Ia supernova, a class of uniform and highly symmetric explosions that astronomers use to measure distances across the cosmos. Weirdly, the X-ray data from NASA's Chandra satellite show some asymmetries in G299.2-2.9—hinting that we have lots more to learn about how these important events happen.
Lopsided Twins (Robert Gendler and Josch Hambsch 2005)
Our sun is unusual in part because it is single—across the Milky Way, stars often come in pairs. Astronomers see that the stars in most of these binary systems are pretty evenly matched, with one star having roughly the same mass as its companion. But now researchers at the Harvard-Smithsonian Center for Astrophysics have found 18 extremely mismatched binary pairs. In all the cases, one star is fully grown, while the other is still in its infancy. The 18 oddities were found in a neighboring galaxy called the Large Magellanic Cloud, seen above, and they may offer some clues to the way stars across the cosmos are born.
"GoreSat" Away (Courtesy of Flickr user Steve Jurvetson)
It took 17 years, but a space weather satellite proposed by Al Gore during his vice presidential term has at last lifted off. The Deep Space Climate Observatory (DSCOVR) was formerly known as Triana, a satellite that Gore wanted to fly to provide a nearly constant view of the entire planet. But funding issues and political opposition put the project on hold until 2013, when NASA cleared a revamped version of the satellite to fly. Launched on February 11, DSCOVR is now a joint NASA-NOAA spacecraft that is headed for Lagrange Point 1, or L1. This is the spot almost a million miles away where Earth's gravity partially cancels out the sun's, essentially allowing a spacecraft to stay parked in between them. From this unique vantage point, DSCOVR will study how the solar wind affects the planet and provide an early warning of incoming solar storms.
Spreading Deltas (NASA Earth Observatory/Jesse Allen, with Landsat data from the U.S. Geological Survey)
Coastal erosion is a huge issue around the Gulf of Mexico—but in some places nature is still doing its best to rebuild. The images above show the emergence of new land at the mouths of the Wax Lake Outlet and the Atchafalaya River in Louisiana. As seen by Landsat satellites, mud flats around these outlets of the Mississippi River have grown dramatically between 1984 (left) and 2014. Harry Roberts, director of the Coastal Studies Institute at Louisiana State University, says that the deltas could serve as models for restoring and preserving the state's coastal marshlands.
A computer program called Pluribus has bested poker pros in a series of six-player no-limit Texas Hold’em games, reaching a milestone in artificial intelligence research. It is the first bot to beat humans in a complex multiplayer competition.
As researchers from Facebook’s A.I. lab and Carnegie Mellon University report in the journal Science, Pluribus emerged victorious in both human- and algorithm-dominated matches. Initially, Merrit Kennedy writes for NPR, five versions of the bot faced off against one professional poker player; in the next round of experiments, one bot played versus five humans. Per a Facebook blog post, the A.I. won an average of around $5 per hand, or $1,000 per hour, when playing against five human opponents. This rate is considered a “decisive margin of victory” among poker professionals.
Speaking with Kennedy, four-time World Poker Tour champion Darren Elias explains that he helped train Pluribus by competing against four tables of bot rivals and alerting scientists when the A.I. made a mistake. Soon, the bot “was improving very rapidly, [going] from being a mediocre player to basically a world-class-level poker player in a matter of days and weeks.” The experience, Elias says, was “pretty scary.”
According to the Verge’s James Vincent, Pluribus—a surprisingly low-cost A.I. trained with less than $150 worth of cloud computing resources—further mastered poker strategy by playing against copies of itself and learning through trial and error. As Jennifer Ouellette notes for Ars Technica, the bot quickly realized its best course of action was to mix solid fundamentals with unpredictable moves.
Most human pros avoid “donk betting,” which finds a player ending one round with a call and starting the next with a bet, but Pluribus readily embraced the unpopular strategy. At the same time, Ouellette reports, the A.I. also offered up unusual bet sizes and exhibited better randomization than opponents.
“Its major strength is its ability to use mixed strategies,” Elias said, according to a CMU statement. “That's the same thing that humans try to do. It's a matter of execution for humans—to do this in a perfectly random way and to do so consistently. Most people just can't.”
Pluribus isn’t the first poker-playing A.I. to defeat human professionals. In 2017, the bot’s creators, Noam Brown and Tuomas Sandholm, developed an earlier iteration of the program called Libratus. That A.I. decisively defeated four poker pros across 120,000 hands of two-player Texas Hold’em but, as the Facebook blog post explains, it was limited by the fact that it only faced one opponent at a time.
According to the MIT Technology Review’s Will Knight, poker poses a challenge to A.I. because it involves multiple players and a plethora of hidden information. Comparatively, games such as chess and Go involve just two participants, and players’ positions are visible to all.
To overcome these obstacles, Brown and Sandholm created an algorithm engineered to predict opponents’ next two or three moves rather than gauge their steps through the end of the game. Although this strategy may seem to prioritize short-term gain over long-term winnings, the Verge’s Vincent writes that “short-term incisiveness is really all you need.”
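The general idea of trading a full-game solve for a short lookahead plus a value estimate can be sketched in a few lines. This is a toy illustration only (a depth-limited negamax over a hypothetical game tree); Pluribus's actual search is far more involved, reasoning over hidden information and opponents' continuation strategies:

```python
def depth_limited_value(state, depth, children, evaluate):
    """Toy depth-limited lookahead: instead of solving to the end of
    the game, search only `depth` plies ahead and fall back on a
    heuristic value at the frontier (illustrative sketch, not
    Pluribus's real algorithm)."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    # Negamax: the player to act picks the move that is worst for
    # the opponent, so each child's value is negated.
    return max(-depth_limited_value(k, depth - 1, children, evaluate)
               for k in kids)

# Tiny hypothetical game tree: from A the mover chooses B or C, both
# terminal; payoffs are from the perspective of the player to move
# at each state.
tree = {"A": ["B", "C"], "B": [], "C": []}
payoff = {"A": 0, "B": 1, "C": -2}
print(depth_limited_value("A", 2, tree.get, payoff.get))  # -> 2
```

Capping the search depth keeps the tree tractable in games far too large to solve outright, which is why the "short-term incisiveness" framing above works in practice.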
Moving forward, multiplayer programs like Pluribus could be used to design drugs capable of fighting antibiotic-resistant bacteria, as well as improve cybersecurity and military robotic systems. As Ars Technica’s Ouellette notes, other potential applications include overseeing multi-party negotiations, pricing products and brainstorming auction bidding strategies.
For now, Brown tells Knight, the algorithm will remain largely under wraps—mainly to protect the online poker industry from incurring devastating financial losses.
The researcher concludes, “It could be very dangerous for the poker community.”
Real, hard science, it turns out, draws huge crowds. Especially when it’s explaining the truth behind today's biggest pop culture phenomena—and what’s on tap for the very near future.
At Awesome Con, Washington D.C.’s annual comics/pop culture convention, attendees waited in line to get into panel talks on the real science of their favorite sci-fi and fantasy books, comics and movies. A crowd groaned when informed that all 200 seats inside a session on the genetics of the world of Harry Potter had been filled. Around the corner, outside a much-larger room, dozens more waited for the chance to listen to how nanotechnology might make space elevators and targeted cancer therapy a reality.
Presented in partnership with Awesome Con, Smithsonian magazine’s Future Con showcased dozens of sessions on bleeding-edge science, technology, engineering and space exploration. Science panels covered space lasers, faster-than-light travel, artificial intelligence, cyborgs—a gamut of subjects that were once only fever dreams of creators like Ray Bradbury and Gene Roddenberry.
“Our fans obviously love Star Wars, Star Trek and Doctor Who, and we know they care deeply about real-world scientific advances in the same way they’re fascinated with science fiction,” said Awesome Con founder Ben Penrod, in a release. “Future Con makes Awesome Con a space not just to entertain, but to inspire and educate. We hope we can play a small part in creating the inventors, engineers, educators and astronauts of tomorrow.”
From June 16 to 18, an estimated 60,000 attendees took breaks from relishing each others’ costumes and eagerly standing in celebrity autograph lines to pop into more than 30 Future Con sessions with presenters from NASA, the National Science Foundation, universities, the Science Channel, museums and industry researchers.
The weekend kicked off with a special presentation of StarTalk Live!, a podcast progeny of Neil deGrasse Tyson’s popular radio show, where guest host and former International Space Station commander Colonel Chris Hadfield set the tone by asking his guests probing questions about what will be needed for human exploration of space in the very near future.
“It’s the 500-year anniversary of Magellan’s circumnavigation of the globe, and now we’re starting to look toward colonizing off-planet,” Hadfield said. “We’ll need the same as all explorers from history: better vehicles, better engines, better human interfaces.”
StarTalk guest Katherine Pratt, a neurosecurity researcher with the University of Washington, spoke about the potential usefulness of a remote-operated surgical robot her lab developed. And Suveen Mathaudhu discussed how his work in ultra-lightweight metals and novel materials at the University of California will help humanity embark on its next big voyage.
“The old explorers took some tools, but then used the resources they found when they got to their destination,” Mathaudhu told Hadfield. “Our whole universe is made up of a few basic things—iron, silicon, nickel—we just need to be able to take what we find and convert it to be able to stay where we go.”
Other requirements, for Mars colonization or anywhere else, show guests suggested, include controlled gravity, high-density power sources, radiation protection, and “potatoes that don’t require poop to grow,” chimed in cohost and Big Hero 6 actor Scott Adsit. “Netflix!” added Irish comedian Maeve Higgins.
Mathaudhu and Pratt went into more depth on their work during a separate session on augmenting human abilities through technology, such as research underway on brain-computer interfaces. One project at Pratt’s home institution, for example, uses brain stimulation to let subjects “feel” sensation from a prosthetic limb.
“I’m interested in how signals get to and from a device to the brain, like Geordi [La Forge]’s visor in "Star Trek," or Furiosa’s arm in Mad Max: Fury Road,” Pratt said. “We can do it now, but it’s clunky and hard to train. There’s a lot of research that’s going into touch—how to figure out surface friction, how much grip you need to pick something up. A lot more needs to be done, but we have a good start.”

Future Con offered a chance to see StarTalk Live! with guest host Chris Hadfield (center). Also pictured: co-host Scott Adsit, Katherine Pratt, Suveen Mathaudhu, Maeve Higgins. (Courtesy of Awesome Con)
Separate sessions delved deeper. One particularly popular panel was about space lasers. While the Death Star isn’t on the near horizon, lasers, according to NASA outreach specialist Kate Ramsayer, are currently starring in missions to map Earth and the moon in chiseled detail.
They’re also on the cusp of revolutionizing communications. A 2013 laser communication demonstration from LADEE, NASA’s Lunar Atmosphere and Dust Environment Explorer, beamed a high-definition video down to Earth at 622 megabits per second with a half-watt laser. It took only a few seconds to transmit the video, compared to the two hours it typically takes to send that much data from the moon. The experiment was an important step towards realizing broadband-like speeds for deep-space communication as well as here on Earth.
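The speedup comes down to simple link arithmetic. In the sketch below, the clip size is a stand-in assumption (the actual file size wasn't reported); only the 622 megabits per second comes from the demonstration:

```python
def transfer_seconds(size_bytes, rate_bits_per_s):
    """Idealized transfer time at a given link rate, ignoring
    protocol overhead and retransmissions."""
    return size_bytes * 8 / rate_bits_per_s

# A hypothetical ~400 MB high-definition clip over the 622 Mbit/s
# laser downlink finishes in roughly five seconds:
print(transfer_seconds(400e6, 622e6))  # -> about 5.14
```

At conventional radio-frequency rates hundreds of times slower, the same payload would indeed stretch into hours, which is the contrast the mission team highlighted.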
“The amount of data we were able to downlink from the moon is astounding,” said Jennifer Sager, a NASA engineer and LADEE mission lead. “If we’d used our regular radio-frequency system, it would have taken us two hours. You will see capabilities in your home improve based on these advancements in laser communications.”
Cryospheric scientist Brooke Medley also explained why the lasers on ICESat-2 that will be measuring Antarctic topography after its launch in 2018 are so important: to gain a clearer view of what happens to all that ice as seas warm.
“Antarctica is two times the size of the continental U.S.,” Medley said. “We can’t possibly measure the sheets from the ground or even a plane. You wouldn’t go to San Diego and think that because it’s sunny here, it must be sunny in New York as well—it’s the same thing with the ice in Antarctica. The ice is changing differently according to different forces, so we must measure it with satellites.”
ICESat-2 will provide data on Earth’s polar and temperate regions for ice scientists, forest ecologists and atmospheric scientists to analyze. Though the satellite is designed for a three-year lifespan, it will continue to transmit data as long as it’s working properly, Ramsayer added.
Thomas Bicknell, 14, of Haymarket, Virginia, attended the session with his mother, Arwen, for the reason many people gave when asked what drew their interest: it looked cool.
“I do subscribe to a YouTube channel by a guy who makes lasers and shows how much energy they each use,” Bicknell said. “The panel just seemed interesting.”
“It’s lasers in space,” his mother added. “How can you go wrong?”
Elsewhere, visitors cheered as former "Doctor Who" star David Tennant took the main stage for a chat with scientists about his character’s fictional travels through space and time and what we know about the real edges of our galaxy and universe. In two other jam-packed sessions, astrophysicist Erin Macdonald explored similar themes, describing how multiverses, artificial gravity, holes in spacetime and time travel may or may not be possible based on current observations, theories and mathematical models.
Macdonald, a former researcher at the Laser Interferometer Gravitational-Wave Observatory (LIGO)—before it announced last year that gravitational waves had been detected for the first time—cracked "Futurama" jokes and played snippets from popular video games like Mass Effect to help even the youngest members of her audience wrap their minds around the tough stuff.
“There’s such a passion for the science fiction fandoms themselves that people like to learn whatever they can about them,” Macdonald said of the popularity of the science sessions at a sci-fi/pop culture convention. “And parents… might not be able to answer questions their kids have or want to spend a Thursday night at a university lecture on physics. If you’re here and you have an hour to kill,” it’s an easy way to learn something new, she added.
Books, television, video games, movies and comic books will continue to play an important role in exposing science to a whole new generation of thinkers and tinkerers, said Ann Merchant, deputy director of communications at the National Academy of Sciences’ Science and Entertainment Exchange. The office connects Hollywood directors and producers with the scientific community, which offers advice and guidance on how to increase the use of science in movies while making it more interesting and authentic.
And, added Jim Green, director of NASA’s planetary science division, all these different forms of media—along with the hidden science they may carry—also often lead to something intrinsically necessary for progress.
“You never know how inspiration comes to people,” Green said. “It could be from a movie, or from talking to a teacher—or an astronaut. If it’s a movie that sparks an interest in finding out more about the Higgs Boson particle, that’s the start of a journey. It gives us an opportunity to dream, and without dreams, you’ll never be able to live them. Dreaming to go to Mars will become a reality.”
Christmas is the busiest time of year for both Santa and the United States Postal Service. But while Santa has magic on his side, the USPS must rely on technology to make its deliveries. The service expects to distribute about 15.5 billion pieces of mail during the 2015 holiday season, which is more than twice the number of people on Earth.
With so much mail zipping around the country, odds are some of it will never reach its final destination (fingers crossed that'll include Aunt Gale's ugly Christmas sweater). That's because the service uses computers to route the mail, and about two percent of the time (about 40 million pieces of Christmastime mail), the address on a package is illegible. Bad handwriting, water damage, archaic fonts and those plastic windows on letters all cause trouble for the computers.
That's where Karen Heath and her staff at the Remote Encoding Center in Salt Lake City step in.
"It's the handwriting, like your grandmother's, so unique that the computer has a hard time deciphering it," says Heath, manager at the center.
The U.S. Postal Service has a massive 78,000-square-foot branch, tucked away in the Utah capital, that deciphers illegible addresses. On a normal day, about 5 million pieces of mail are funneled through this branch, but as it creeps closer to December the number can be as high as 11 million, says Heath.
With just under 1,700 employees, the center tackles all of the United States' illegible addresses across 33 different shifts that operate 24/7. And, according to Heath, they have a high success rate.
"We're getting [illegible addresses] from facilities from Hawaii to Puerto Rico and all the way across," Heath says. “Trying to identify what the sender has written is like a puzzle and our [employees] are putting the pieces together.”
When mail enters a regular postal service processing facility, large, powerful machines read the address on the envelope and compare it with a master database. Once there's a match made, the computers print a barcode onto the piece of mail.
If the computer can't read the address because of water damage or your grandmother's ornate script, it sends a picture of the address to a computer at the Remote Encoding Center.
For employees of the center, that means looking at thousands of addresses every day. Even the slowest (and usually the newest) “data conversion operators” can identify about 750 addresses per hour, whereas more experienced employees generally average about 1,600 per hour. "We have to walk a fine line of focusing on accuracy and not speed," Heath says.
That doesn't mean they don't have employees who are lightning fast; the center's quickest employee can decipher 1,869 images per hour. New hires must go through a 55-hour training test that Heath likens to a “Star Trek” exam.
"The training that a new employee gets, it's very intense," she adds. "It makes them fail over and over again. It feels impassable."
These operators don’t guess. The training gives them the expertise to accurately type in addresses that are then checked against the USPS database. Most of the time, there’s a match. When they don’t succeed—the water damage is too severe, the text too illegible or the information too incomplete—the mail goes to the department’s “dead letter” office, officially called the Mail Recovery Center. This is the postal service’s last resort, where employees make one final effort to find addresses by opening mail and examining its contents for clues.
After that, packages that can't be delivered or returned are sold at an online auction, where you can find GoPros, laptops, watches and robotic kits. “Some lots come with unexpected surprises, like $5,000 worth of marijuana hidden in a painting or human cremains mixed in with a collection of tableware,” according to the podcast 99 Percent Invisible.
Any money gets sent to the U.S. Department of Treasury and letters may be recycled into paper, says Lynn Heidelbaugh, a curator at the Postal Museum.
Heath has been working at the center since 1994, when the postal service opened its first illegible mail processing facility in Utah. Before the advent of computer programs, letters were sent to the “dead letter office” where employees investigated each piece of mail in a slow, painstaking process. USPS expanded its operations, peaking at 55 facilities like the one in Utah.
But by 1998, computer technology produced by the likes of Siemens and Lockheed Martin had surpassed human capabilities for speed, and, today, all but the Utah facility have shuttered. Engineers for these companies have been updating this technology constantly over the past few decades, fulfilling government contracts worth hundreds of millions of dollars in some cases.
"The number of items that [are illegible] has been diminishing over the years because the machines have gotten better at reading and matching [addresses]," says Nancy Pope, a curator at the Smithsonian National Postal Museum. Eventually, even the Remote Encoding Center could close.
If you're concerned about getting mail to your loved ones, the postal service recommends addressing all post with a sans-serif font, point size 10-12. But if you're set on writing all your mail by hand, don't worry, Heath's team has got your back.
"It's fun to know that you're getting somebody's package to them," Heath says. "There's a piece of mail that's not going to get to where it needs to go unless [we] invest something of [ourselves] in making sure that happens."
Think back to a favorite picture book, the one where the edges of the cover grew worn and a few pages loosened from the binding after so many readings. Perhaps it was the unfolding story that enthralled a young you, perhaps the luminous illustrations. Most likely it was the view the book offered into a different world.
"Picture books are some of the first memories I have for looking at and understanding the world around me," says J.D. Talasek, the director of the Cultural Programs of the National Academy of Sciences. But one doesn't have to be a child to find delight and wonder in images from children's books. That's the premise behind a new exhibition, "Igniting the Imagination," which opened this week at the National Academy of Sciences (NAS) in Washington D.C.
The exhibition features 29 artworks from the collection of children's book illustrations at the Mazza Museum, located at the University of Findlay in Ohio. Each illustration explores the worlds of science, engineering or medicine. In one, a bespectacled older gentleman and his companion, a young boy in a red t-shirt, lean to the side as they feel the centrifugal force of a rollercoaster's curve. The man's hat floats above and behind him, pushed off by the wind of his motion. In another, sea turtles appear to take off like a flock of sea-green-colored birds from a tower of pink, branching coral. A third juxtaposes the size of a Volkswagen Beetle driven by a poofy-haired woman with a stegosaurus sporting the same pale violet coloration as the vehicle.
The illustrations come from books that span the past half-century: The oldest is from Project Boy by Lois Lenski, published in 1954, and shows a group of children building a fort out of "junk." The subjects traipse from the magic of math to the biology of a decaying log to the engineering of a skyscraper.
"The exhibit is framed through these disciplines, but it uses the power of art to help make broader connections to how inventions, practices and discoveries frame our experiences," Talasek says.
Each image was selected to grab the viewer's attention through color, composition or the presence of something unusual and unexpected. "There is a kind of preconceived notion that art from children's books is simple, but you will see that the technical skill is astounding," says Dan Chudzinski, the curator of the Mazza Museum. "They would be at home in any art gallery."
The museum's collection was born in 1982, as part of a celebration of the 100th anniversary of Findlay College, the university's predecessor institution. Jerry Mallett, a professor of education at the time, spearheaded the establishment of the children's book illustration collection. What began as four pieces then has grown to more than 10,500 now through donations and acquisitions. The artworks include a diversity of styles and media.
An image from the book City Beats, illustrated by Jeanette Canyon, shows three pigeons perched on a twisted metal cable, overlooking a construction site. A reader holding the book itself might marvel at the plumpness of the pigeons, the weighty thickness of the cable and the stylish pebbled appearance of the sky and cityscape background. In person, the illustration proves to be a three-dimensional relief sculpture molded out of polymer clay. The sculpture was photographed for the book.
Other selections have similar surprises in store. Illustrator Robin Brickman crafted the ecosystem that springs up around a decaying log in A Log's Life from meticulously-cut pieces of paper. Gennady Spirin's scene of a cabin boy aboard a ship in To the Edge of the World, illustrated in a style reminiscent of a renaissance painting, is packed with details to reward the patient viewer—a map of the Gulf of Mexico replete with the approximations of early cartography and an old-style compass that Portuguese explorer Ferdinand Magellan might have used.
"The whole point is to pique curiosity," Chudzinski says. "We want the art to be a catalyst to get someone to pick up the book and then learn science along the way." To aid that mission, copies of the books and comfy chairs for visitors to curl up in and read accompany the artworks on view at the NAS.
The exhibition sprang from an experience Jay Labov, the senior advisor for education and communication for the National Academies of Sciences, Engineering and Medicine, had as a visiting scholar at the University of Findlay.
Labov travels the country giving talks about STEM (Science, Technology, Engineering and Math) education. "One of the talks is about the importance of science as a liberal art in the 21st century," he says. "Too often we see, particularly in higher education, science being divorced from humanities." When he visited the Mazza Museum and gazed at paintings and drawings, the intricate design of a fold-out book caught his eye. A placard explained that engineers had helped design the pop-up constructions.
"It occurred to me that the illustrations in children's books were showing us interesting ways to understand science," he says.
"I know Jay has the heart of a child, playfulness and curiosity," Talasek says. "But he also just had a grandchild at that point, so the exhibition is a very personal recommendation for Jay."
"I did end up buying a lot of the books for my granddaughter," Labov says.
Adults visiting the exhibition may find themselves remembering the wonder they felt learning about science as a child. But children may glean something more. At least, that's what the organizers hope.
Talasek explains exactly what that "more" may be with an anecdote. One of the illustrations comes from You Are the First Kid on Mars by Patrick O'Brien. In it, three space-suited figures stride across rusty soil to approach the glistening, solar-paneled back of a robotic rover on the surface of the Red Planet.
The book itself inspired an astronaut to write the author with compliments: "This is the kind of book that I dreamed of as a kid, and the reason I became a physicist and astronomer. This is the first time since the 1970s that I have seen the excitement of space travel conveyed in a way that is both inspiring and plausible."
"Igniting the Imagination: Selections from the Collection of the Mazza Museum" is on view through August 7, 2017, at the NAS Building, 2101 Constitution Ave., N.W., Washington D.C. Visitors get in for free, but a photo ID is required.
Most textbooks teach that our galaxy, the Milky Way, resembles a flat spiral, with several prominent arms spinning out from the center. But a new, detailed 3-D map of the galaxy puts a twist in that image, literally. It turns out that the galaxy is not a flat pancake but warped with the edges curling above and below the galactic plane.
Getting an actual look at our own galaxy is basically impossible. So far, our most distant space probes have barely left our own solar system and will likely never leave the galaxy to capture an image from a distance. So astronomers have to rely on modeling to figure things out using the telescopes and instruments we have. That’s difficult because Earth is parked in a small spiral arm about 26,000 light-years from the galactic center, making it hard to take in the big picture.
Elizabeth Gibney at Nature reports that prior to this study, the best maps of the Milky Way, which is about 120,000 light years in diameter, used indirect measurements, like counting stars and extrapolating information from other nearby spiral galaxies that we can see. But for this study, researchers from the University of Warsaw used the Optical Gravitational Lensing Experiment telescope at Las Campanas Observatory in Chile to analyze the Cepheids, a group of stars that brighten and dim on a predictable cycle, directly measuring their distances.
Over the course of six years, the team catalogued 2,431 Cepheids stretching across the galaxy, taking 206,726 images of the stars. Observing stars from Earth, it’s sometimes hard to know how bright they really are. A super-bright star that is very far away may appear dim. But researchers know that the slower a Cepheid star pulses, the brighter it really is, which allows them to calculate its true, or intrinsic, brightness. By comparing the intrinsic brightness of each star with its apparent brightness from Earth, the researchers were able to determine the distance and three-dimensional position of each Cepheid with more than 95 percent accuracy. Using these data points, they plotted the position of the Cepheids throughout the galaxy, creating a structural map. The study appears in the journal Science.

(Image: J. Skowron / OGLE / Astronomical Observatory, University of Warsaw)
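That brightness-to-distance step follows from the Cepheid period-luminosity relation combined with the astronomer's distance modulus. A minimal sketch, using an approximate V-band calibration (the coefficients are illustrative, not the ones from the study, and interstellar dust extinction is ignored):

```python
import math

def cepheid_distance_pc(period_days, apparent_mag):
    # Approximate period-luminosity relation for classical Cepheids
    # (illustrative coefficients): M = -2.43 * (log10(P) - 1) - 4.05
    absolute_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5 * log10(d / 10 pc), solved for d
    return 10 ** ((apparent_mag - absolute_mag + 5.0) / 5.0)

# A Cepheid pulsing every 10 days that appears at magnitude 10 sits
# roughly 6,500 parsecs (about 21,000 light-years) away:
print(cepheid_distance_pc(10.0, 10.0))
```

Pair the distance with the star's position on the sky and you have a point in three dimensions, which is how the team built its structural map.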
Researchers using other techniques have hypothesized that the Milky Way is warped and that the galaxy actually flares at the edges. Close to the galactic center, the disk is about 500 light-years thick; at the edges, it’s about 3,000 light-years thick. This new visualization confirms that warp and flare and shows that they’re pretty significant.
“If we could see our galaxy from the side, we would clearly see its warp,” study leader Dorota Skowron tells George Dvorsky at Gizmodo. “Stars that are 60,000 light-years away from the Milky Way’s center are as far as 5,000 light-years above or below the Galactic plane. This is a big percentage.”
So why is our galaxy kind of twisted? Nadia Drake at National Geographic reports that warped spiral galaxies are not unusual and astronomers have catalogued many, including the Milky Way’s twin sister galaxy Andromeda. Nicola Davis at The Guardian reports that as many as half the galaxies in the universe have some degree of warping, but the Milky Way’s twists are larger than average.
It’s not completely clear what curled our edges, but researchers suspect it has to do with interactions between the galaxies in the local group, several dozen galaxies and dwarf galaxies clustered within 10 million light-years of the Milky Way. “We think the warp may have been caused by interactions with satellite galaxies,” Skowron tells Drake. “Other ideas point to interactions with intergalactic gas or dark matter.”
The new data may also provide some insight into how the galaxy evolved. The researchers identified three patches of Cepheids that are only 20 million to 260 million years old, mere babies compared to the oldest stars in the galaxy, which are 10 to 13 billion years old. The Guardian’s Davis reports that the youngest stars are closer to the galactic center while the older ones are farther out in the spiral arms. It’s possible that interaction with a passing dwarf galaxy could have caused them to pop into existence. Computer simulations show that to create the pattern they are found in, some sort of star forming events had to occur 64 million, 113 million and 175 million years ago.
Xiaodian Chen from the National Astronomical Observatories at the Chinese Academy of Sciences was part of a similar study published in February that also used a group of Cepheids to map the Milky Way’s 3-D structure. He believes this map is solid. “They essentially confirmed our earlier conclusions regarding the 3-D shape of the Milky Way’s disk, including its flaring in the outer regions,” Chen says. “A good thing about their confirmation of our work is that they used a different data set, covering 2,431 Cepheids compared to [our] 2,330, observed with a different telescope and through different filters. Yet they found pretty much the same result, which is comforting!”
While this new map is the most accurate in terms of revealing the galaxy’s overall structure, it’s by no means the most detailed look at our galaxy. Last year, the European Space Agency’s Gaia star mapper released the position and brightness of 1.7 billion stars in our immediate neighborhood in the Milky Way and detailed data on 2 million of those stars.
I’ve reached the age when hearing conversations in crowded bars or restaurants is about as likely as finagling free drinks from the bartender: I simply can’t hear anything.
For me, it’s all one big din. But clearly I'm not alone. According to the National Institutes of Health, about one out of three Americans between the ages of 65 and 74 has suffered from some hearing loss. Over the age of 75, the rate jumps to one out of two. In most cases, it’s due to a combination of aging that causes changes in the inner ear and exposure to loud noises during our lives that has damaged sensory hair cells.
But now a California outfit called Soundhawk says it has just the thing for people like me: a device that aids hearing without actually being a hearing aid. Instead of simply raising or lowering the volume of the noise around the wearer, this gadget, called Scoop, offers more nuanced options. Soundhawk claims the device lets users create a different sound experience for each environment—for instance, one with more muted background noise in a bar. The earpiece is controlled by a mobile app; by moving your finger across a smartphone screen, you can adjust the sound levels to hear what you want to hear without being distracted by ambient noise.
The key, says Soundhawk, is the use of algorithms that turn the device into a mini-mixer, one that’s meant to give users much more ability to manipulate what they hear.
The Scoop also comes with a small wireless microphone that can be placed up to 33 feet away from the device, say in front of a television you're watching or a bit closer to that particularly funny person across the table.
The 3.3 centimeter device is essentially a microphone in your ear. Its battery holds a charge for up to eight hours, but Scoop also comes with a portable charging case that can recharge it for as long as 24 hours before needing to plug in again.
The people at Soundhawk are quick to point out that their device isn’t meant for people with serious hearing problems. In fact, the company’s website clearly states that the Scoop is not a diagnostic or treatment tool, which means it’s not subject to FDA regulation.
But Soundhawk is betting on aging Baby Boomers when Scoop debuts later this summer. The hope is that plenty of people are frustrated enough by their diminished hearing that they’ll spend just under $300 for a device that allows them to be their own personal sound engineers.
The system’s mobile app offers four audio settings: indoor, outdoor, dining and driving. Each can be set to complement a specific user’s hearing abilities, then adjusted to particular situations—at a concert, for instance, you'd want to hear performers far away; while dining, you probably want to listen only to the people seated at your table.
Soundhawk CEO Mike Kisch draws an analogy between his device and getting eyeglasses. Just as an optometrist asks you to compare the effectiveness of different lenses ("is it better like this—or like this?"), the Scoop, he says, is designed to let people find their best sound settings.
To date, the company has raised more than $11 million in venture capital, including $5.5 million in a second round of funding it announced last week. It also said it has reached an agreement with Foxconn, the Chinese manufacturer best known in the U.S. for making Apple products. The device can be preordered at a price of $279.
From a business standpoint, the Scoop is ripe with potential. But ultimately its success or failure will come down to whether people who can still hear most of the time will be willing to wear a device that could help them hear all the time.
One thing that could help its chances—it will come in nude.
You wear it well
Here’s other recent news about wearable tech designed to help people monitor their health:
· Google will see you now: Last week Google announced a new service called Google Fit, which it says will enable Android users to aggregate all their data from fitness trackers and health apps in one place. In short, all of that information collected by different devices would be pulled together into one app. This comes only a few weeks after Apple rolled out HealthKit, software also designed to be a hub for all of a person’s fitness and health data.
· But it still cannot give you a hug: Ford is looking at ways to have its vehicles respond to data gathered by devices worn by drivers. For instance, according to a recent interview with Ford engineers by ZDNet, if a person’s wearable indicates that he or she is very stressed, the car could send any incoming calls directly to voicemail and it might even switch the music to something particularly soothing.
· Talk about old school: Soon fitness wristbands may be replaced by devices that actually look like wristwatches--and fashionable watches at that. The French company Withings is rolling out the Activité, a stylish wearable with analog instead of digital dials. But don't let that fool you--it tracks steps taken and hours slept.
· Coming soon: Wearable robots: The FDA last week approved marketing of the first motorized device designed to act as an exoskeleton for people with lower body paralysis. Called the ReWalk, it’s worn over the legs and part of the upper body and can help a paralyzed person sit, stand and walk.
· Just breathe: And added to the growing collection of fitness trackers is one called Spire that measures a person’s stress levels and, through a smartphone app, can suggest that he or she take 10 deep breaths. The device, which would be attached to a waistband or bra strap, is expected to be out on the market in September—just in time for people starting to lose their vacation chill.