While GPS features in smartphones have made them easier to track down when lost, wallets and other items are still challenging to recover. F-stop, a St. Louis company that sells camera gear, has come up with a comprehensive solution.
KitSentry is a three-part "ecosystem." It includes a Bluetooth and WiFi-enabled field device that can be placed into any bag, NFC ID tags that can be stuck on key items and the Sentry app. Once the field device is activated and hooked up to the app, the user can monitor the bag at all times, see its location and also track its contents, to know whether any have been removed. The maker of KitSentry recently raised $27,039 on Kickstarter.
Here are five other quirky ideas that were funded this week:
The party hasn't really started until there's a dachshund tearing up the dance floor. It was only a matter of time before wearable tech reached puppies—in this case, a smartphone-controlled velcro vest studded with 256 colorful LED lights. The aptly named Disco Dog displays brilliant rainbows and dazzling patterns of sparkles and stripes. Produced by Party New York, the flashy outfit is customizable for dogs of all sizes. The lights can even spell out messages, such as "Lost Dog," in the unlikely event that an owner loses a pup wearing this ridiculous getup.
There are many devices available for us to track ourselves, but what about the world around us? PocketLab is a rectangular, portable sensor that adheres to soccer balls, backyard rockets, you name it, and gathers a variety of data—acceleration, angular velocity, altitude, temperature, magnetic field, pressure and force—during homespun experiments. Invented by a mechanical engineer at Stanford, PocketLab aims to shift perceptions about science by making data collection and analysis fun and easy. The sensor connects wirelessly to a smartphone or tablet, and all the information it collects is uploaded to an app in real time and seamlessly converted to Google Doc or Microsoft Office files.
Some high school students from Stamford, Connecticut, were surprised when their class advisor told them that drones are primarily used for photography, and wondered if they could enable the devices to do more. Inspired to apply them in a new way, the intrepid group devised Project Ryptide, a lightweight, 3D-printed plastic add-on for any drone that carries a self-inflating life preserver. If a swimmer needs help, the drone can quickly fly over and drop the rescue equipment, assisting any lifeguards that may be on duty. The product's name calls attention to its main focus, which is to address the threat of dangerous riptides that quickly drag thousands of unsuspecting swimmers out to sea each year in the United States.
While bands like Fitbit and Jawbone offer helpful counts of calories burned and steps taken, a small new wearable device that hooks onto runners' waistbands aims to provide one definitive number that captures the intensity of training. Stryd, the brainchild of a team of Princeton engineers at the Boulder, Colorado startup Athlete Architect, calculates a runner's power, taking terrain into account, and sends that stat to a sports watch or smartphone. The engineers don't exactly divulge how they calculate "power," but the measure, reported in watts, is supposed to reflect the degree to which runners push themselves during workouts. (Jogging, they say, takes about 150 watts, while a hard workout requires double that or more.) Until now, heart rate has been the metric used to measure this effort.
A landing page for monitoring personal health records. (Kickstarter)
Obtaining and deciphering medical records can be overwhelming and complicated. The California company Zobreus Medical Corporation is developing an app called POEM, short for patient-oriented electronic medical record, that aims to streamline this process, making it easy for providers and patients to quickly stay up-to-date with their information. Much like Mint brings together different banking and investment accounts in one place, POEM retrieves information from different providers and organizes it simply and cleanly on one platform. It enables patients to track previous prescriptions, doctor's notes and procedures. Users can also create "care circles," where they can stay in the loop on the medical developments of family and friends.
Drones may ultimately help change the face of agriculture, as we saw in action at the AgBot Challenge in Indiana last month, but it's not just commercial farming that could benefit from autonomous robots. Case in point: FarmBot, whose autonomous Genesis kit will be available for pre-order this week, simply wants to oversee your home garden.
Its ambitions might be smaller than the contraptions that can remotely plant miles of seeds, but Genesis looks incredibly impressive. Developed by a team of three from California, the kit is an autonomous machine that’s installed atop and around a small garden—in your backyard, on a rooftop, or inside a greenhouse or lab. Once built, Genesis performs nearly the entire gardening process prior to harvesting, including planting the seeds, watering each plant precisely and on a set schedule, monitoring conditions, and pulverizing pesky weeds. Check out how it works:
As the trailer shows, Genesis slides along tracks installed alongside the garden box, with the main arm also shifting left and right and popping down into the soil to perform its various functions. Once given instructions, FarmBot can be left to its own devices to follow the planting and watering schedules you picked until the veggies are ready to harvest.
While it’s a pretty high-tech contraption, the interface is very simple. The Internet-connected FarmBot is controlled via a web app that uses a Farmville-esque visual grid, letting you drag and drop the kind of plants you want into your digital garden. Genesis has 33 common crops loaded into its software so far (artichokes, chard, potatoes, peas, squash, etc) and it automatically spaces the varying plants appropriately, taking the guesswork out of having a diverse garden. And the app can be accessed from a computer, phone, or tablet, so you can tweak your plan from anywhere and send it to your backyard ‘bot.
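As a rough illustration of what automatic spacing involves, here is a toy grid-placement sketch in Python. The bed dimensions follow the Genesis spec (2.9 × 1.4 meters), but the crop names and spacing radii are invented for the example; this is not FarmBot's actual data or algorithm.

```python
# Hypothetical sketch of auto-spacing crops in a FarmBot-style bed.
BED_W, BED_H = 2.9, 1.4   # bed size in meters (Genesis spec)
# Per-plant clearance radius in meters -- illustrative guesses, not FarmBot data.
SPACING = {"radish": 0.10, "chard": 0.25, "squash": 0.45}

def place(crop, count):
    """Lay out up to `count` plants of one crop on a simple square grid."""
    r = SPACING[crop]
    pitch = 2 * r                       # center-to-center distance
    cols = int(BED_W // pitch)          # how many plants fit per row
    spots = []
    for i in range(count):
        row, col = divmod(i, cols)
        x, y = r + col * pitch, r + row * pitch
        if y + r > BED_H:               # bed is full
            break
        spots.append((round(x, 2), round(y, 2)))
    return spots

print(place("squash", 4))   # only three squash fit before the bed runs out of room
```

A real planner would also mix crops and respect each pair's larger spacing requirement, but the grid idea is the core of taking "the guesswork out of having a diverse garden."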
What’s surprising is that Genesis is a fully open-source project. That means that the creators have released the source code for the software and the blueprints for all of the hardware pieces, so coders and engineers can easily modify Genesis and build their own parts. Many components can be made using 3D printers, and the software can be tweaked to add features—or improve those implemented by the company.
That open approach and focus on expandability also means that you can personalize Genesis for your garden layout and needs. For example, you can hook up a solar panel to power the ‘bot, or use a rain barrel to irrigate rather than connect a hose. Genesis is also something of a meteorologist: It monitors real-time weather conditions to better manage your garden.
Genesis is the first commercial version of this autonomous gardening idea, allowing for planting spaces up to 2.9 meters × 1.4 meters, with a maximum plant height of 0.5 meters. It’s an all-in-one kit with nearly everything you need to get started, including all the metal and 3D-printed pieces—the nozzles, motors, belts, and pulleys—a Raspberry Pi 3 computer, and plenty more. You’ll need to build your own planter bed following the specifications, as well as provide the water, electricity, and Internet connections. Programming or engineering know-how is not required: The kit comes with a step-by-step guide. If you can get through an IKEA furniture setup, you should be able to put together Genesis (fingers crossed). But if you’re a techie, you can do much more with it if you want.
The Genesis kit will begin pre-orders this Friday, July 1, although it’s unclear when FarmBot will start shipping—or exactly how much the kit will cost. A blog post on their site last week suggests that the all-in starting expense for Genesis will be about $3,500, but that includes things like shipping, infrastructure, soil, and other setup expenses. Meanwhile, a report from New Times SLO suggests that the kit itself will be sold at about $2,900, but creator Rory Aronson says they hope to get the cost closer to $1,000 down the line.
It might be a pricey buy-in for now, but the Genesis kit is for early adopters who want the entire thing ready to install—and don’t mind riding the early wave of untested technology. Given the open-source approach, don’t be surprised if you can eventually buy different kinds of kits and supplement them with your own parts, expand upon the core kit with your own extra hardware, or even build your own FarmBot from scratch.
FarmBot’s documentation hints at ambitions for larger-scale farming ‘bots (imagine this technology on acreage!), so the Genesis kit could be just the beginning for this high-tech farming revolution.
In a way, the intimate relationship between man and man's best friend is unjustly lopsided. For their part, dogs are able to understand us very well. In fact, researchers believe a border collie named Chaser has demonstrated a vocabulary of more than 1,000 words, along with the ability to comprehend more complex language elements such as grammar and sentences. Meanwhile humans, despite even the most, er, dogged scientific efforts, have yet to decode the literal meaning behind a canine’s bark (if there is any).
But a Swedish design lab that calls itself the Nordic Society for Invention and Discovery thinks that animal behaviorists have been going about it the wrong way. What its developers are proposing instead is the development of a device that can infer what an animal is thinking or feeling by analyzing, in real-time, changes in the brain. The concept they've imagined, dubbed No More Woof, would be sold as a lightweight headset lined with electroencephalogram (EEG) sensors, which record brain wave activity.
When combined with a low-cost Raspberry Pi microcomputer, the inventors surmise that the electrode-filled device, which rests atop a dog's head, could match a wide range of signals to distinct thought patterns. A specialized software known as a brain-computer interface (BCI) would then translate the data into phrases to communicate. The phrases, played through a loudspeaker, may range from "I'm tired" to "I'm curious what that is."
In December, the development team launched a crowdfunding campaign on Indiegogo.com in hopes of raising enough money to at least further explore the feasibility of such an idea (the BCI, for instance, is just an experiment at the moment). With a $65 donation, supporters of the project had an opportunity to reserve beta versions of the gadget, programmed to distinguish among two to three thought patterns, such as tiredness, hunger and curiosity, and communicate them in English. Those who pledged as much as $600 would receive a higher-end model capable of translating more than four distinct thoughts and suitable for a number of different breeds, something the group concedes has proven to be quite difficult.
"The challenge is to make a device that fits different dogs and measures in the right place," says Per Cromwell, the product's creator. "If it gets displaced it can lose the signal. We are struggling with these topics and would rather describe the devices we are working on as working prototypes rather than mass-produced products."
While developers more than doubled their initial goal—raising $22,664—you may not want to get your credit card out quite yet.
Since the Indiegogo launch, neuroimaging experts have come out to debunk claims made on the product's website, saying the science doesn't add up.
"What I saw in their video can't work," Bruce Luber, a Duke University professor who specializes in brain stimulation and neurophysiology, tells Popular Science.
Luber points out, for instance, that since EEG is designed to measure neural activity near the surface area of the brain, it won't be able to determine if an animal (or human) is feeling hungry; that feeling originates in the hypothalamus, which is located deep in the center of the brain. And while devices are being developed to allow users to move prosthetic limbs, steer a car or even play music, reliably identifying specific emotions and thoughts has thus far been beyond the scope of even the most sophisticated technology.
To be fair, Cromwell admits that the concept is being treated more or less as an experiment, or an exploration. There's also a disclaimer from the developers on Indiegogo that flatly states that No More Woof is still a work-in-progress and contributions do not guarantee a working product.
"When we started out we had no idea if it would work or not," he says in an email. "And to some extent we're still trying to make it work. So I think it would be more correct to describe the work as a couple of curious persons than being based on existing research.”
It's worth noting that this is the same oddball band of inventors to pursue other wacky ideas—from an indoor cloud to a flying lamp and a magic carpet for pets—but never deliver on them. Cromwell does claim to have made some progress, nonetheless, in pinpointing certain patterns he believes indicate, if not thoughts, at least a narrowed sense of what mood the dog is in.
The testing process, which he described in an email, involves using a video camera along with an EEG device to simultaneously record a dog's brain activity and physical response as it's exposed to a variety of stimuli, such as an unknown person, a ball, food or the smell of a treat.
“What we're focusing on in these early stages is measuring the amount of activity,” Cromwell explains. “Curiosity and agitation showed a significant increase in brain activity, and we're interpreting this as the dog being either curious and asking 'What is that?' or saying 'I want to play.' Conversely, when the dog is bored or tired, brain activity decreases and we translate this as 'Leave me alone' and 'I want to sleep.'"
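Taken literally, the activity-level rule Cromwell describes can be mocked up in a few lines. Everything here (the baseline, the thresholds, the sample values) is invented for illustration; the actual device's signal processing has not been published.

```python
# Toy sketch of the mapping Cromwell describes: overall EEG activity level
# -> canned phrase. All numbers are invented for illustration.
import statistics

BASELINE = 1.0  # assumed resting activity level (arbitrary units)

def translate(eeg_samples):
    """Map a window of EEG samples to a phrase via mean absolute activity."""
    level = statistics.mean(abs(s) for s in eeg_samples)
    if level > 1.5 * BASELINE:
        return "What is that?"     # jump in activity -> curiosity/agitation
    if level < 0.5 * BASELINE:
        return "Leave me alone."   # drop in activity -> bored/tired
    return "..."                   # no confident reading

print(translate([2.1, 1.8, 2.4]))  # elevated activity
print(translate([0.2, 0.3, 0.1]))  # low activity
```

As the skeptics quoted below this point out, real emotion decoding would need far more than a single activity number; this sketch only mirrors the coarse increase/decrease interpretation described in the quote.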
Whether or not you find his method of translating dogspeak into intelligible words to be a stretch, Cromwell contends that it's an approach that should eventually lead to more accurate interpretation, as the team's research progresses. Currently, the only language option is English. “We know it's our translation and not an exact translation," he says. “But we are confident that more research will help us find and decipher more patterns.”
Will we ever see a machine that would allow human and pet to engage in actual conversation? If society wants it badly enough, it's totally possible, Luber tells Popular Science, particularly "if you get DARPA to put about $100 million toward it and get all of us working on it."
At first glance, portraits of the Belamy family seem to exemplify life in the upper echelons of French society. The haughty features of patriarch Le Comte De Belamy are framed by a voluminous white powdered wig, while the dynastic matriarch, La Comtesse, oozes wealth in her colorful silk attire. Skipping ahead several generations, you’ll encounter Madame De Belamy, whose tightly coiffed hair is tucked inside a blue hat rendered in Impressionistic strokes, and her son Edmond, a comparatively dour-looking young man clad almost entirely in black.
But there’s a catch to this story of generational greatness: In addition to being wholly fictional, the Belamy family hovers in that amorphous space between artificial intelligence and art. Although its members’ names and places in the family tree were assigned by Obvious, a Paris-based art collective, their likenesses are the brainchild of Generative Adversarial Networks, a machine learning algorithm better known by the acronym GAN.
Now, Naomi Rea writes for Artnet News, the youngest member of the family—as depicted in “Portrait of Edmond Belamy”—is set to make history as the subject of the first AI-produced artwork sold by an auction house.
A canvas print of Obvious’ (and GAN’s) creation will be included in Christie’s late October auction of Prints and Multiples, the New York-based auction house reports. It remains to be seen how bidders will react to the AI work, but Obvious remains optimistic, citing an estimated sale price of €7,000 to €10,000, or roughly $8,000 to $11,500.
Hugo Caselles-Dupré, one of Obvious’ three co-founders, tells Christie’s Jonathan Bastable that GAN consists of two parts: the Generator, which produces images based on a data set of 15,000 portraits painted between the 14th and 20th centuries, and the Discriminator, which attempts to differentiate between man-made and AI-generated works.
“The aim is to fool the Discriminator into thinking that the new images are real-life portraits,” Caselles-Dupré says. “Then we have a result.”
The painting is one of 11 portraits depicting the fictional Belamy family, which was named in honor of GAN creator Ian Goodfellow. (Courtesy of Obvious)
According to an essay posted on Obvious’ Medium page, GAN analyzes thousands of images to learn the basic features of portraiture. The AI-generated portraits that result both resemble the images in the original data set and remain unique: a different image is rendered with every execution of the algorithm.
“This reflects a human creativity feature: We will never create twice the same thing,” Obvious writes.
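The Generator-Discriminator tug-of-war that Caselles-Dupré describes can be sketched in miniature. The toy example below trains a GAN on one-dimensional samples from a normal distribution instead of portraits, with each network reduced to two parameters; it is a simplified illustration of the adversarial idea, not Obvious' actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda t: 0.5 * (1.0 + np.tanh(t / 2.0))  # numerically stable sigmoid

# "Real" data: a 1-D normal distribution standing in for the portrait dataset.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # Generator:     G(z) = a*z + b
w, c = 0.1, 0.0   # Discriminator: D(x) = sigmoid(w*x + c)
lr, n = 0.01, 64

for step in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x, z = real_batch(n), rng.normal(0.0, 1.0, n)
    g = a * z + b
    d_real, d_fake = sig(w * x + c), sig(w * g + c)
    w -= lr * (np.mean((d_real - 1) * x) + np.mean(d_fake * g))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: fool the Discriminator into calling fakes real.
    z = rng.normal(0.0, 1.0, n)
    g = a * z + b
    upstream = (sig(w * g + c) - 1) * w   # gradient of -log D(G(z)) w.r.t. g
    a -= lr * np.mean(upstream * z)
    b -= lr * np.mean(upstream)

print(round(b, 2))  # generated mean: starts at 0, drifts toward the real mean of 4
```

Scaling the same loop up to deep convolutional networks and 15,000 portrait images is, in outline, how a GAN arrives at images like "Portrait of Edmond Belamy."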
Obvious, a three-man team made up of Caselles-Dupré, Pierre Fautrel and Gauthier Vernier, owes much to American AI researcher Ian Goodfellow, who developed the GAN algorithm in 2014. As Time’s Ciara Nugent notes, the rough French translation of “Goodfellow”—bel ami—provided inspiration for the fictional family’s name.
The Belamy portraits are painted in a semi-realistic style, their blurred details creating an overarching impression of motion. In the bottom right corner of the canvases, the artist’s signature is replaced by an intimidating mathematical equation: min_G max_D E_x[log(D(x))] + E_z[log(1 − D(G(z)))], the loss function at the heart of the GAN algorithm.
Such proclamations of authorship are a central concern in the art world’s AI debate. Skeptics of the new technology doubt that machines can produce art, which has long been viewed as a uniquely human activity. If an AI researcher designs and executes an algorithm, who is the end product’s true creator: human artist or machine? And, most importantly, if robots can create art, where does that leave humans?
There are no easy answers to these questions, but as Rose Eveleth, host of the future-centric podcast Flash Forward, argued in a recent episode, this isn’t the first time humans have felt threatened—or entranced—by machine-made art.
Swiss-born watchmaker Pierre Jaquet-Droz launched the golden age of automata, or kinetic sculptures designed to mimic human movement, with “The Writer.” The 1770s doll was made of 6,000 moving parts that allowed it to scribble out an array of messages, dip a quill into an inkwell, and blink with unseeing eyes.
At the time, philosophers were engaged in a heated battle over what it meant to be alive, Eveleth notes. While we don't think of today's AI as a living organism, modern technology does continue to raise existential questions about what it means to be human. Consider a more recent innovation: the camera. It also posed some philosophical problems, Caselles-Dupré tells Time’s Nugent.
“Back then, people were saying that photography is not real art and people who take pictures are like machines,” he says. “And now we can all agree that photography has become a real branch of art.”
By including “Portrait of Edmond Belamy” in its fall sale, Christie’s isn’t offering a final ruling on the value of AI art. Still, the decision is sure to attract ire, elation and, if the sale is successful, newfound faith in the burgeoning medium.
“I’ve tended to think human authorship was quite important—that link with someone on the other side,” Richard Lloyd, head of Christie’s Prints and Multiples department, tells Nugent. “But you could also say art is in the eye of the beholder. If people find it emotionally charged and inspiring, then it is. If it waddles and it quacks, it’s a duck.”
They came for the vintage video game arcade, the sprawling art fair, and rare photo ops with their favorite celebrities. But they also came to learn.
Since its 2013 launch, the annual Washington, D.C. pop culture fest known as Awesome Con has blossomed into a national beacon of proud nerddom. Last weekend, some 60,000 enthusiasts from around the country descended on the Walter E. Washington Convention Center for a three-day celebration of all things nerd culture. Brandishing homemade lightsabers and Tardises, and donning costumes inspired by franchises as diverse as Teen Titans, Spirited Away and The Last of Us, these dedicated fans had no trouble repping their sometimes-obscure passions.
But within Awesome Con is a series of lectures and panels that skew even more geeky than the rest of the conference: an educational series called Future Con that ties real-world concepts and cutting-edge scientific research in with the fiction. Run jointly by Awesome Con and Smithsonian Magazine, this series enlists NASA astrophysicists, university biologists and entertainment industry engineers to bring scientific expertise to bear on an assortment of intellectual properties ranging from Black Panther to Mass Effect.
Kicking off the lineup of Future Con presentations was a panel talk from NASA, held Friday afternoon, titled “NASA Science at Earth’s Extremes.” Experts delved into a selection of NASA’s current Earth science campaigns, showing the audience that NASA doesn’t just look outward to the stars—but also inward toward Earth.
Following presentations from glaciologist Kelly Brunt on Antarctic sledding expeditions and geologist Jacob Richardson on volcano recon in Hawaii and Iceland, environmental scientist Lola Fatoyinbo spoke on the carbon-rich equatorial mangrove ecosystems of Central Africa, and the importance of wedding on-the-ground fieldwork with observations from planes and orbiters. NASA is preparing to launch a pioneering mission called the Global Ecosystem Dynamics Investigation (GEDI—pronounced “Jedi,” of course) that will survey the verticality and dynamism of terrestrial forests with a LIDAR-equipped satellite. “May the forest be with you,” she concluded with a smile.
Soon after this panel came a live recording of Smithsonian’s AirSpace podcast, in which personnel from the Air and Space museum talked space stations with special guest René Auberjonois of Star Trek: Deep Space Nine, who felt the show accurately captured what living on a space station “would do to you at a psychological level.”
Air and Space Museum researcher Emily Martin posited that space stations are likely to play an increasingly large role as we push humanity beyond Earth. “We’re going to need to have these sorts of bus stops” for our astronauts, she said. Equipped with modern tech, she thinks modern spacefarers could make discoveries their forebears could only dream of. “Could you imagine an Apollo astronaut with a smartphone? Think of what they could do!”
A Future Con panel discusses the science and social dynamics at play beneath the surface of Black Panther. (Lacey Gilleran)
Building on this theme of space exploration was a discussion on the mysteries of black holes, and one in particular located deep within our own galaxy. “There’s a four-million-solar-mass black hole sitting right in the middle of the Milky Way,” said NASA astronomer Jane Turner. She estimates that it sucks up the equivalent of an entire star each Earth year. A global alliance of scientists is on the verge of observing this black hole with an array of earthbound telescopes in an exciting and unprecedented project called the Event Horizon Telescope.
After this deep dive into the unknown, Future Con turned back toward the familiar and fun, putting on a widely attended panel talk on the science depicted in Marvel’s critically acclaimed blockbuster Black Panther. Panelists discussed the empowering message of Afrofuturism as well as particular real-life analogs to some of the wondrous “vibranium” technologies seen onscreen.
Lockheed Martin engineer Lynnette Drake argued that “graphene is very similar to vibranium in terms of what we use in the science world,” and her colleague Charles Johnson-Bey pointed out that absorptive nanofibers—like those in protagonist T’Challa’s panther suit—have a firm basis in reality. “We have nanomaterials we use to make materials lighter,” Johnson-Bey said. Some of them are even employed to diffuse lightning strikes on moving watercraft, in much the same way T’Challa’s armor absorbs and protects him from incoming energy.
Saturday’s lineup featured Future Con events on two more evergreen cultural phenomena: Harry Potter and Star Wars.
Duke biology professor Eric Spana walked a rapt crowd of Potterheads through the workings of heredity in Rowling’s books, concluding via a thorough analysis of salient—but fictional—case studies that sensitivity to magic must be an autosomal dominant trait. Where do Muggle-born witches and wizards come from, then? Spana had an answer for that too: thanks to spontaneously occurring germline mutations, he showed that it is perfectly reasonable to expect a teensy percentage of Muggle-born yet magic-sensitive kids to arise in any given population.
Spana puts the odds of being born magic-sensitive to Muggle parents at one in 740,000: “Powerball odds.” In other words, don’t hold your breath.
The Awesome Con experience offered informative panels and personal engagement with artists, celebrities, and fellow nerds. (Lacey Gilleran)
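Spana's figure is easy to put in perspective with a back-of-envelope calculation. The UK population number here is my own rough assumption for illustration, not a figure from the talk.

```python
# Back-of-envelope check on Spana's "Powerball odds" figure.
odds = 1 / 740_000          # chance a Muggle-born child is magic-sensitive
uk_population = 67_000_000  # assumed rough UK population, all ages

expected = uk_population * odds
print(round(expected))      # roughly 90 Muggle-born witches and wizards
```

Rare per individual, but across a whole country the model still predicts a steady trickle of Muggle-born Hogwarts letters.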
Later in the afternoon, two of the designers who brought to life the widely adored Star Wars droid BB-8 talked about their prototyping process. Star Wars electronics engineer Matt Denton, who had started out in laboratory robotics but decided academia wasn’t for him, revealed that a host of BB-8 models were ultimately made for the screen, each with its own strengths and weaknesses. These included trike-mounted models, a lightweight model, a puppet model (for up-close emotional moments) and even a stunt model. The so-called “red carpet model,” a fully autonomous droid that Denton’s coworker Josh Lee called “a whole new type of BB-8,” rolled out onstage to surprise and delight the fans.
Next were two thoughtful panels on increasing diversity in science and pop culture. In “Brave New Girls,” female scientists, science educators and science communicators discussed their experiences in the world of professional science, recounting stories of inspiration, hurdles overcome and successes achieved. Later, a second panel looked at trends in STEAM and diversity in comics and movies, stressing the importance of onscreen representation and the transformative effect of seeing someone who looks like you pursue dreams akin to yours.
Panelist Renetta Tull said that “Seeing Lieutenant Uhura in Star Trek was a big deal for me” as an African-American scientist and educator at UMBC. Some of her first major work in academia, on 3D imaging techniques, was inspired by the holodeck technology built into the Enterprise.
One of the most powerful sessions of the day was a screening of Stephen Hawking’s final film, Leaving Earth: Or How to Colonize a Planet. In the film, the legendary astrophysicist—who passed away this March—suggests that it’s time to start thinking seriously about a means of escaping Earth. “We can and must use our curiosity to look to the stars” for refuge, he says—Earth could be wiped out in any number of ways in the relatively near future.
The nearest potentially suitable destination for humankind is a planet slightly larger than ours orbiting the red dwarf Proxima Centauri. In order to reach this world, called Proxima B, we’d need to traverse an intimidating 4.2 light years of space. The solution, perhaps, will rely on the principle of solar sails. In time, a massive array of earthbound laser stations could fire simultaneously at a sail-equipped spacecraft, sending it hurtling into the black at a significant fraction of light speed. To protect voyagers from cosmic rays en route, biologists believe we might need to put them in a state of bear-like hibernation. Strangely enough, bears are effectively immune to radiation damage for the duration of their winter snooze.
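The distances involved are easy to check with one line of arithmetic. The cruise speed of 20 percent of light speed below is an assumption in line with proposed laser-sail concepts, not a figure from the film.

```python
# Quick check on the Proxima B travel numbers.
distance_ly = 4.2    # distance to Proxima B, in light-years
cruise_frac = 0.20   # assumed cruise speed as a fraction of light speed

travel_time_years = distance_ly / cruise_frac
print(round(travel_time_years))  # about 21 years one way, ignoring speed-up and slow-down
```

A two-decade voyage is what makes the hibernation question above more than a thought experiment.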
The convention came to a close on Sunday, with a last smattering of Future Con topics addressing science in video games (Mass Effect got high marks for planetary dynamics, while Assassin’s Creed was chided for sketchy epigenetics), the many incarnations of the Batmobile (the panelists’ favorite was the 1989 model from Burton’s Batman, now on view at the National Museum of American History), and heady explorations of the deep universe and gravitational waves. Then, armed with heady visions of the future and a little more knowledge about the world around them, Awesome Con attendees compressed their lightsabers, bagged their d20s, and filed out into the cool March evening.
This event was made possible by Future Con sponsors Boeing, Netflix, and X, the moonshot company.
How does one get stuck studying frog tongues? Our study into the sticky, slimy world of frogs all began with a humorous video of a real African bullfrog lunging at fake insects in a mobile game. This frog was clearly an expert at gaming; the speed and accuracy of its tongue could rival the thumbs of texting teenagers.
The versatile frog tongue can grab wet, hairy and slippery surfaces with equal ease. It does a lot better than our engineered adhesives—not even household tapes can firmly stick to wet or dusty surfaces. What makes this tongue even more impressive is its speed: Over 4,000 species of frog and toad snag prey faster than a human can blink.
What makes the frog tongue so uniquely sticky? Our group aimed to find out.
Early modern scientific attention to frog tongues came in 1849, when biologist Augustus Waller published the first documented frog tongue study on nerves and papillae—the surface microstructures found on the tongue. Waller was fascinated with the soft, sticky nature of the frog tongue and what he called “the peculiar advantages possessed by the tongue of the living frog…the extreme elasticity and transparency of this organ induced me to submit it to the microscope.”
Fast-forward 165 years, when biomechanics researchers Kleinteich and Gorb were the first to measure tongue forces in the horned frog Ceratophrys cranwelli. They found in 2014 that frog adhesion forces can reach up to 1.4 times the body weight. That means the sticky frog tongue is strong enough to lift nearly twice its own weight. They postulated that the tongue acts like sticky tape or a pressure-sensitive adhesive—a permanently tacky surface that adheres to substrates under light pressure.
Frog tongue holding up a petri dish just with its stickiness. (Alexis Noel/Georgia Tech, CC BY-ND)
To begin our own study on sticky frog tongues, we filmed various frogs and toads eating insects using high-speed videography. We found that the frog’s tongue is able to capture an insect in under 0.07 seconds, five times faster than a human eye blink. In addition, insect acceleration toward the frog’s mouth during capture can reach 12 times the acceleration of gravity. For comparison, astronauts normally experience around three times the acceleration of gravity during a rocket launch.
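The numbers in that paragraph are worth a quick sanity check. The blink duration below is an assumed typical value (about 0.35 seconds); the capture time and 12 g figure come from the text.

```python
# Sanity-checking the tongue-capture numbers.
g = 9.81              # standard gravity, m/s^2
capture_time = 0.07   # s, tongue capture time from the study
blink_time = 0.35     # s, assumed typical human blink duration

print(round(blink_time / capture_time))  # capture is ~5x faster than a blink
print(round(12 * g, 1))                  # 12 g on the insect is ~117.7 m/s^2
```

For comparison, the roughly 3 g of a rocket launch works out to only about 29 m/s², a quarter of what the insect experiences.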
Thoroughly intrigued, we wanted to understand how the sticky tongue holds onto prey so well at high accelerations. We first had to gather some frog tongues. Here at Georgia Tech, we tracked down an on-campus biology dissection class, which used northern leopard frogs on a regular basis.
The plan was this: Poke the tongue tissue to determine softness, and spin the frog saliva between two plates to determine viscosity. Softness and viscosity are common metrics for comparing solid and fluid materials, respectively. Softness describes tongue deformation when a stretching force is applied, and viscosity describes saliva’s resistance to movement.
Determining the softness of frog tongue tissue was no easy task. We had to create our own indentation tools since the tongue softness was beyond the capabilities of the traditional materials-testing equipment on campus. We decided to use an indentation machine, which pokes biological materials and measures forces. The force-displacement relationship can then describe softness based on the indentation head shape, such as a cylinder or sphere.
When the indentation head pulls away from the tongue, it adheres and stretches. (Alexis Noel/Georgia Tech, CC BY-ND)
However, typical heads for indentation machines can cost $500 or more. Not wanting to spend the money or wait on shipping, we decided to make our own spherical and flat-head indenters from stainless steel earrings. After our tests, we found frog tongues are about as soft as brain tissue and 10 times softer than the human tongue. Yes, we tested brain and human tongue tissue (post mortem) in the lab for comparison.
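For a spherical indenter, the textbook Hertz contact model is one standard way to turn force-displacement data into a stiffness number. The sketch below is a minimal illustration of that model with made-up readings, not the actual values or analysis from this study:

```python
import math

def hertz_modulus(force_n, depth_m, radius_m):
    """Effective elastic modulus E* (Pa) from the Hertz model for a
    spherical indenter: F = (4/3) * E* * sqrt(R) * d**1.5.
    A small-strain idealization; real soft-tissue measurements need
    corrections for adhesion and large deformations."""
    return 3.0 * force_n / (4.0 * math.sqrt(radius_m) * depth_m ** 1.5)

# Hypothetical reading: a 1 mm radius sphere pressed 0.5 mm into the
# tissue with 0.2 mN of force.
E_star = hertz_modulus(force_n=2e-4, depth_m=5e-4, radius_m=1e-3)
print(f"E* = {E_star:.0f} Pa")  # hundreds of pascals: extremely soft tissue
```

Moduli in the hundreds-of-pascals to low-kilopascal range are what make materials feel brain-soft, which is why ordinary materials-testing rigs struggle with them.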
For testing saliva properties, we ran into a problem: The machine that would spin frog saliva required about one-fifth of a teaspoon of fluid to run the test. Sounds small, but not in the context of collecting frog spit. Amphibians are unique in that they secrete saliva through glands located on their tongue. So, one night we spent a few hours scraping 15 dead frog tongues to get a saliva sample large enough for the testing equipment.
How do you get saliva off a frog tongue? Easy. First, you pull the tongue out of the mouth. Second, you rub the tongue on a plastic sheet until a (tiny) saliva globule is formed. Globules form due to the long-chain mucus proteins that exist in the frog saliva, much like human saliva; these proteins tangle like pasta when swirled. Then you quickly grab the globule using tweezers and place it in an airtight container to reduce evaporation.
After testing, we were surprised to find that the saliva is a two-phase viscoelastic fluid. Which phase appears depends on how quickly the saliva is sheared between parallel plates. At low shear rates, the saliva is very thick and viscous; at high shear rates, it becomes thin and liquidy. This is similar to paint, which spreads easily under a brush yet stays firmly adhered to the wall. It’s these two phases that give the saliva its reversibility in prey capture, adhering to and then releasing an insect.
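A common way to describe this kind of shear-thinning behavior is the power-law (Ostwald-de Waele) model, in which apparent viscosity drops as the shear rate climbs. The parameters below are illustrative placeholders, not the measured frog-saliva values:

```python
def apparent_viscosity(shear_rate, K=5.0, n=0.2):
    """Apparent viscosity (Pa*s) under the power-law model
    eta = K * shear_rate**(n - 1); n < 1 means shear-thinning.
    K and n here are made-up illustrative values."""
    return K * shear_rate ** (n - 1)

# Slow shearing (prey held at rest) vs fast shearing (tongue impact):
for rate in (0.1, 1.0, 100.0):
    print(f"shear rate {rate:>6} 1/s -> viscosity {apparent_viscosity(rate):8.2f} Pa*s")
```

With n below 1, the same fluid is thick when barely moving and runny when sheared hard, which is the reversibility the passage describes.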
How do soft tissue and a two-phase saliva help the frog tongue stick to an insect? Let’s walk through a prey-capture scenario, which begins with a frog tongue zooming out of the mouth and slamming into an insect.
During this impact phase, the tongue deforms and wraps around the insect, increasing contact area. The saliva becomes liquidy, penetrating the insect cracks. As the frog pulls its tongue back into the mouth, the tissue stretches like a spring, reducing forces on the insect (similar to how a bungee cord reduces forces on your ankle). The saliva returns to its thick, viscous state, maintaining high grip on the insect. Once the insect is inside the mouth, the eyeballs push the insect down the throat, causing the saliva to once again become thin and liquidy.
It’s possible that untangling the adhesion secrets of frog tongues could have future applications for things like high-speed adhesive mechanisms for conveyor belts, or fast grabbing mechanisms in soft robotics.
Most importantly, this work provides valuable insight into the biology and function of amphibians—40 percent of which are in catastrophic decline or already extinct. Working with conservation organization The Amphibian Foundation, we had access to live and preserved species of frog. The results of our research provide us with a greater understanding of this imperiled group. The knowledge gathered on unique functions of frog and toad species can inform conservation decisions for managing populations in dynamic and declining ecosystems.
While it’s not easy being green, a frog may find comfort in the fact that its tongue is one amazing adhesive.
In August, marine biologists Johnny Gaskell and Peter Mumby and a team of researchers boarded a boat headed into unknown waters off the coast of Australia. For 14 long hours, they ploughed over 200 nautical miles, a Google Maps cache as their only guide. Just before dawn, they arrived at their destination: a previously uncharted blue hole—a cavernous opening descending through the seafloor.
After the rough night, Mumby was rewarded with something he hadn’t seen in his 30-year career. The reef surrounding the blue hole had nearly 100 percent healthy coral cover. Such a find is rare in the Great Barrier Reef, where coral bleaching events in 2016 and 2017 led to headlines proclaiming the reef “dead.”
“It made me think, ‘this is the story that people need to hear,’” Mumby says.
The expedition from Daydream Island off the coast of Queensland was a pilot program to test the methodology for the Great Reef Census, a citizen science project headed by Andy Ridley, founder of the annual conservation event Earth Hour. His latest organization, Citizens of the Great Barrier Reef, has set the ambitious goal of surveying the entire 1,400-mile-long reef system in 2020.
“We’re trying to gain a broader understanding on the status of the reef—what’s been damaged, where the high value corals are, what’s recovering and what’s not,” Ridley says.
While considered one of the best managed reef systems in the world, much of the Great Barrier Reef remains un-surveyed, mainly owing to its sheer size. Currently, data (much of it outdated) only exists on about 1,000 of the Great Barrier’s estimated 3,000 individual reefs, while a mere 100 reefs are actively monitored.
Researchers instead rely on models, which has left gaps in knowledge. In the last two years, our understanding of how ocean currents dictate the reef’s ability to survive has improved. According to Mumby, spawn from as few as three percent of sites provides new life to over half of the reef. Those key reefs, however, still need to be identified.
“You can’t prevent bleaching or cyclones, but you can protect critically important sources of larvae,” he says. An accurate survey will help to manage coral-hungry Crown-of-thorns starfish, as well as inform future restoration project sites.
The majority of individual reefs that make up the Great Barrier Reef have not been directly surveyed. (Damian Bennett)
The Great Reef Census is not the first attempt to use citizen science to survey the reef. One such program, Reef Check, has been relying on citizens for 18 years—but it only monitors 40 key sites. Eye on the Reef, an app from the Great Barrier Reef Marine Park Authority, encourages users to upload significant sightings, such as bleaching events, Crown-of-thorns starfish and mass spawning events. But the new census will mark the first attempt to survey the entire reef system.
But the ambitious research program hinges on laypeople, meaning the data gathered could be of questionable scientific value. Citizen science is notoriously problematic, owing to deviations from standard procedures and biases in recording. For example, contributors to Eye on the Reef are more likely to record the spectacular (whale sharks, dugongs and humpback whales) than the common (starfish).
In 1992, Mumby’s first research project was analyzing reef survey data from citizen scientists in Belize. The results, he admits, were less than brilliant. “There are many citizen programs where the pathway between the data collected and the actual usage by management can be somewhat opaque,” he says.
Yet, Mumby believes that the Great Reef Census is different. The program has a clear connection to both research and policy, he says. Unlike other citizen science efforts, unskilled volunteers won’t be asked to estimate or monitor coral cover. Participants will do the simplest of grunt work: uploading 10 representative photos of their diving or snorkelling site with a corresponding GPS tag. This basic field data will then be used by the University of Queensland, which is already using high-resolution satellite images and geomorphic modelling to map the reef and predict the types of local ecosystems present.
National Oceanic and Atmospheric Administration diver Kelly Gleason injects a crown-of-thorns starfish with ox bile, a natural substance that kills the creature but does not harm the reef. (Greg McFall, NOAA Dive Program)
The project is critically important to understanding the reef, but it comes with limitations, says David Kline, a coral reef ecologist at the Smithsonian Tropical Research Institute. According to Kline, satellite imaging is only capable of penetrating to depths of about 5 meters, although some satellite mapping has achieved about 20 meters in ideal conditions. This leaves the deep-water mesophotic reefs—which are less likely to suffer from bleaching and may be critical for reef recovery—under-studied. Some are located as deep as 2,000 meters underwater.
“To really [survey] the entire Great Barrier Reef in a meaningful way, you need AUVs [autonomous underwater vehicles], drones, airplanes with multi-spectral imagery, and high-resolution satellites—and you need to be able to link the data between these different levels,” Kline says.
Kline is currently working with the University of Sydney’s Australian Centre for Field Robotics, where engineers are training AUVs to gather high-resolution imagery of the reefs, including mesophotic reefs. This information can then be used to train machine learning algorithms to map the entire system.
However, Kline says it will likely be another 5 to 10 years before a fleet of AUVs is ready to efficiently map large areas such as the Great Barrier Reef. “Until then, we need ambitious projects to start making progress toward that goal,” he says. The Great Barrier Reef Census and the satellite mapping from the University of Queensland are a good start.
But even if the census’s methodology leads to stronger scientific data than previous efforts, the reef’s prognosis is still bleak. If global greenhouse emissions continue to rise at their current rate, it’s predicted that mass bleaching events, which have occurred four times in the past 20 years, will occur annually from 2044 onward.
If successful, the Great Barrier Reef Census will be the world’s largest collaborative scientific survey. And Ridley thinks if reports of the reef’s alleged death didn’t propel people to action, maybe reports of its ability to survive in the face of adversity will.
“We want the citizens to be helpful from a science perspective—but we also want people to give a shit,” Ridley says. “The world’s not moving fast enough toward net-zero emissions. Can the Great Barrier Reef be a point of inspiration, rather than a point of doom? I don’t know. But we’re giving it a bloody shot.”
Over 2,000 years ago, the churning ocean below the cliffs of the Greek island Antikythera swallowed a massive ship loaded with a trove of luxuries—fine glassware, marble statues and, famously, a complex geared device thought to be the earliest computer.
Discovered by Greek sponge divers in 1900, the shipwreck has since yielded some of the most impressive antiquities to date. And while severe weather has hampered recent dives, earlier this month a team of explorers recovered more than 50 stunning new items, including a bone or ivory flute, delicate glassware fragments, ceramic jugs, parts of the ship itself and a bronze armrest from what was possibly a throne.
“Every single dive on the wreck delivers something interesting; something beautiful,” marvels Brendan Foley, a marine archeologist at the Woods Hole Oceanographic Institution and co-director of the project. “It’s like a tractor-trailer truck wrecked on the way to Christie’s auction house for fine art—it’s just amazing.”
The wreck of the Antikythera ship hides beneath a few feet of sand and scattered shards of ceramic fragments at a depth of about 180 feet. Following an initial excavation funded by the Greek government, explorer Jacques Cousteau returned to the wreck in 1976 to mine the seemingly endless bounty, recovering hundreds of items.
But with even more modern advances in diving and scientific equipment, scientists believed the Antikythera wreckage had more secrets to reveal.
In 2014, an international team of archaeologists, divers, engineers, filmmakers and technicians embarked on the first excavation of this site in 40 years, using detailed and meticulous scientific techniques to not only find new treasures but also to try and reconstruct the ship's history.
The team used autonomous robots to produce hyper-precise maps of the site in partnership with the University of Sydney in Australia, says Foley. These maps—accurate down to about a tenth of an inch—were pivotal for both planning dives and mapping discoveries.
The team also carefully scanned the site with metal detectors, mapping out the extent of the wreckage and deciding where to excavate. Using waterproofed iPads, the divers could mark each new artifact on the map in real time.
For the latest round of dives, a ten-person team logged over 40 hours underwater, surfacing with the fresh haul. Analyzing the artifacts should provide the team with a wealth of information, says Foley.
The Antikythera shipwreck is spread across two different sites separated by about the length of a football field, he says. Analytic tools, like comparing the stamps on amphora handles from each site, will help scientists determine whether the wreck represents one or two ships.
If it was two ships, “that opens up a whole series of questions,” says Foley. “Were they sailing together? Did one try to help the other?”
Still, the large size of objects recovered at the primary wreckage site suggests that at least one ship was massive, akin to an ancient grain ship. One such item recently recovered as part of the latest haul was a lead salvage ring about 15.7 inches wide, used to straighten tangled anchor lines.
Image by Brett Seymour EUA/ARGO. In their latest expedition, divers recovered over 50 artifacts, which hint at the history of the massive ship. (original image)
Image by Brett Seymour EUA/ARGO. The wreck of the Antikythera ship is buried under several feet of sand and scattered shards of ceramic fragments at a depth of about 180 feet. (original image)
Image by Phillip Short ARGO. An autonomous underwater vehicle surveys the wreck, creating a three-dimensional map of the site. (original image)
Image by Brett Seymour EUA/ARGO. During the latest round of dives, the team logged over 40 hours underwater. (original image)
Image by Brett Seymour EUA/ARGO. Divers carefully clear away sand and rubble to recover the often delicate artifacts. (original image)
Image by Brett Seymour EUA/ARGO. A diver displays his find. The shipwreck has yielded some of the most impressive antiquities to date. (original image)
Image by Brett Seymour EUA/ARGO. Scientists will study each artifact recovered in great detail, with hopes to reconstruct the history of the ship and its precious cargo. (original image)
Scientists hope to learn more about the origin of the ship—or ships—by analyzing the isotopic composition of lead artifacts similar to this ring, which will yield information about where the vessel itself was made.
For the ceramic artifacts, the team plans to look closely at any residues preserved inside the container walls. “Not only are [the ceramics] beautiful in their own right, but we can extract DNA from them,” says Foley. That could give information about ancient medicines, cosmetics and perfumes.
The team currently has plans to head back out to the site in May, but the future of the project is open-ended. With so much information to glean from the current set of artifacts, Foley says that they could let the site sit for another generation. With the rapid advance of technology, future expeditions may have even better techniques and be able to discover even more about the wreckage.
“What will be available a generation from now, we can’t even guess,” he says.
While the Nobel Prizes are 115 years old, rewards for scientific achievement have been around much longer. As early as the 17th century, at the very origins of modern experimental science, promoters of science realized the need for some system of recognition and reward that would provide incentive for advances in the field.
Before the prize, it was the gift that reigned in science. Precursors to modern scientists – the early astronomers, philosophers, physicians, alchemists and engineers – offered wonderful achievements, discoveries, inventions and works of literature or art as gifts to powerful patrons, often royalty. Authors prefaced their publications with extravagant letters of dedication; they might, or they might not, be rewarded with a gift in return. Many of these practitioners worked outside of academe; even those who enjoyed a modest academic salary lacked today’s large institutional funders, beyond the Catholic Church. Gifts from patrons offered a crucial means of support, yet they came with many strings attached.
Eventually, different kinds of incentives, including prizes and awards, as well as new, salaried academic positions, became more common and the favor of particular wealthy patrons diminished in importance. But at the height of the Renaissance, scientific precursors relied on gifts from powerful princes to compensate and advertise their efforts.
With courtiers all vying for a patron’s attention, gifts had to be presented with drama and flair. Galileo Galilei (1564-1642) presented his newly discovered moons of Jupiter to the Medici dukes as a “gift” that was literally out of this world. In return, Prince Cosimo “ennobled” Galileo with the title and position of court philosopher and mathematician.
If a gift succeeded, the gift-giver might, like Galileo in this case, be fortunate enough to receive a gift in return. Gift-givers could not, however, predict what form it would take, and they might find themselves burdened with offers they couldn’t refuse. Tycho Brahe (1546-1601), the great Danish Renaissance astronomer, received everything from cash to chemical secrets, exotic animals and islands in return for his discoveries.
Regifting was to be expected. Once a patron had received a work he or she was quick to use the new knowledge and technology in their own gift-giving power plays, to impress and overwhelm rivals. King James I of England planned to sail a shipful of delightful automata (essentially early robots) to India to “court” and “please” royalty there, and to offer the Mughal Emperor Jahangir the art of “cooling and refreshing” the air in his palace, a technique recently developed by James’ court engineer Cornelis Drebbel (1572-1633). Drebbel had won his own position years earlier by showing up unannounced at court, falling to his knees, and presenting the king with a marvelous automaton.
A version of Drebbel’s automaton sits on the table by the window in this scene of a collection. (Hieronymous Francken II and Brueghel the Elder)
Gifts were unpredictable and sometimes undesired. They could go terribly wrong, especially across cultural divides. And they required the giver to inflate the dramatic aspects of their work, not unlike the modern critique that journals favor the most surprising or flashy research, leaving negative results to molder. With personal tastes and honor at stake, the gift could easily go awry.
Scientific promoters already realized in the early 17th century that gift-giving was ill-suited to encouraging experimental science. Experimentation required many individuals to collect data in many places across long periods of time. Gifts emphasized competitive individualism at a time when scientific collaboration and the often humdrum work of empirical observation was paramount.
While some competitive rivalry could help inspire and advance science, too much could lead to the ostentation and secrecy that too often plagued courtly gift-giving. Most of all, scientific reformers feared an individual would not tackle a problem that couldn’t be finished and presented to a patron in his or her lifetime—or even if they did, their incomplete discoveries might die with them.
For these reasons, promoters of experimental science saw the reform of rewards as integral to radical changes in the pace and scale of scientific discovery. For example, Sir Francis Bacon (1561-1626), lord chancellor of England and an influential booster of experimental science, emphasized the importance even of “approximations” or incomplete attempts at reaching a particular goal. Instead of dissipating their efforts attempting to appease patrons, many researchers, he hoped, could be stimulated to work toward the same ends via a well-publicized research wish list.
Bacon coined the term “desiderata,” still used by researchers today to denote widespread research goals. Bacon also suggested many ingenious ways to advance discovery by stimulating the human hunger for fame; a row of statues celebrating famous inventors of the past, for example, could be paired with a row of empty plinths upon which researchers might imagine their own busts one day resting.
Bacon’s techniques inspired one of his chief admirers, the reformer Samuel Hartlib (circa 1600-1662) to collect many schemes for reforming the system of recognition. One urged that rewards should go not only “to such as exactly hit the marke, but even to those that probably misse it,” because their errors would stimulate others and make “active braines to beate about for New Inventions.” Hartlib planned a centralized office systematizing rewards for those who “expect Rewards for Services done to the King or State, and know not where to pitch and what to desire.”
Galileo presents an experiment to a Medici patron. (Giuseppe Bezzuoli)
Collaborative scientific societies, beginning in the mid-17th century, distanced rewards from the whims and demands of individual patrons. The periodicals that many new scientific societies started publishing offered a new medium that allowed authors to tackle ambitious research problems that might not individually produce a complete publication pleasing to a dedicatee.
For example, artificial sources of luminescence were exciting chemical discoveries of the 17th century that made pleasing gifts. A lawyer who pursued alchemy in his spare time, Christian Adolph Balduin (1632-1682), presented the particular glowing chemicals he discovered in spectacular forms, such as an imperial orb that shone with the name “Leopold” for the Habsburg emperor.
Many were not satisfied, however, with Balduin’s explanations of why these chemicals glowed. The journals of the period feature many attempts to experiment upon or question the causes of such luminescence. They provided an outlet for more workaday investigations into how these showy displays actually worked.
The societies themselves saw their journals as a means to entice discovery by offering credit. Today’s Leopoldina, the German national scientific society, founded its journal in 1670. According to its official bylaws, those who might not otherwise publish their findings could see them “exhibited to the world in the journal to their credit and with the praiseworthy mention of their name,” an important step on the way to standardizing scientific citation and norms of establishing priority.
Beyond the satisfaction of seeing one’s name in print, academies also began offering essay prizes upon particular topics, a practice which continues to this day. Historian Jeremy Caradonna estimates 15,000 participants in such competitions in France between 1670, when the Royal Academy of Sciences began awarding prizes, and 1794. These were often funded by many of the same individuals, such as royalty and nobility, who in former times would have functioned as direct patrons, but now did so through the intermediary of the society.
States might also offer rewards for solutions to desired problems, most famously in the case of the prizes offered by the English Board of Longitude beginning in 1714 for figuring out how to determine longitude at sea. Some in the 17th century likened this long-sought discovery to the philosophers’ stone. The idea of using a prize to focus attention on a particular problem is alive and well today. In fact, some contemporary scientific prizes, such as the Simons Foundation’s “Cracking the Glass Problem,” set forth specific questions to resolve that were already frequent topics of research in the 17th century.
The shift from gift-giving to prize-giving transformed the rules of engagement in scientific discovery. Of course, the need for monetary support hasn’t gone away. The scramble for funding can still be a sizable part of what it takes to get science done today. Succeeding in grant competitions might seem mystifying, and winning a career-changing Nobel might feel like a bolt out of the blue. But researchers can take comfort that they no longer have to present their innovations on bended knee as wondrous gifts to satisfy the whims of individual patrons.
A purple-haired sorceress holding a fireball. A three-headed dragon wrapping its claws around the world. A great raptor emerging from the flames.
No, these are not characters from a Magic: The Gathering deck. They are avatars depicted on the official mission patches made for the National Reconnaissance Office (NRO). Just as NASA creates specially designed patches for each mission into space, NRO follows that tradition for its spy satellite launches. But while NASA patches tend to feature space ships and American flags, NRO prefers wizards, Vikings, teddy bears and the all-seeing eye. With these outlandish designs, a civilian would be justified in wondering if NRO is trolling.
Unfortunately, given the agency's extreme secrecy, it’s impossible to answer that question for sure. But based on information that has been leaked about some of the patches, it seems there may be a method to the artistic madness.
Forged in Secret
Understanding the patches requires a trip back to the 1960s and the early days of the human space program, explains Robert Pearlman, a space historian and the founder of collectSPACE. At the time, NASA allowed its astronauts to name their spacecraft. John Glenn chose Friendship 7, for example, for the Mercury space capsule he piloted when he became the first US astronaut to orbit Earth. Gordon Cooper went with Faith 7 for his spacecraft during the final mission of the Mercury program.
When it came time to launch the Gemini program, however, NASA decided to take away the naming privilege. The astronauts, understandably, were disappointed. So Gemini pilot Cooper asked NASA if they’d be willing to compromise and—in the tradition of military squadrons—allow the crew to design a patch instead. NASA agreed, and since then patches have become a staple for both crewed and robotic NASA flights.
NRO arrived on the space launch scene around the same time that NASA’s first patches were being designed. In 1960, President Dwight D. Eisenhower established the agency as a central authority for organizing the nation’s reconnaissance operations, and oversight of reconnaissance imaging satellites—spy satellites, in popular parlance—was a big part of that mission. Right from the start, NRO operations were all very cloak-and-dagger. The public didn’t even learn about the agency’s existence until 1971, and its first reconnaissance satellite program, Corona, wasn’t declassified until 1995. “The reconnaissance satellites have been a factor of the space program since the very beginning,” Pearlman says. “But they are indeed classified, and their capabilities are classified.”
Today NRO launches about four to six satellites per year, including the NROL-35 mission, with the patch seen above, slated to fly this Thursday. The public still doesn't know exactly what each satellite is doing, but for a couple decades now the agency has advertised the date and time of its launches—probably because, as Pearlman points out, “it’s hard to hide a rocket.” In response, a subculture of fervent hobbyists has become committed to watching the skies at night, piecing together the satellites’ orbits. At some point, those hobbyists discovered that—just like NASA—NRO also issues mission patches. The agency didn’t seem to care if the patches were leaked, and eventually it even started publishing depictions of the patches along with launch announcements. Even so, for years knowledge of the patches largely remained confined to enthusiasts, especially in the days prior to widespread social media.
Image by National Reconnaissance Office. Some enthusiasts muse that the five beams shooting out of the winged warrior's hand represent five pre-existing satellites in the Quasar communications system, since a Quasar satellite was the presumed payload of NROL-33 in May. The two wolves facing west and one facing east could indicate three new positions in this system. Finally, the setting sun may symbolize that this will be the final Quasar launch. (original image)
Image by National Reconnaissance Office. Edward Snowden’s NSA leaks broke shortly after the release of the patch for NROL-39, leading many to speculate that the octopus represented the tentacles of the government reaching out to control the world. After using the Freedom of Information Act, however, a journalist found a more mundane explanation: the octopus represents a failed instrument (nicknamed an octopus) that the team had to contend with while preparing the satellite for its December 2013 launch. (original image)
Image by National Reconnaissance Office. The presumed payload for NROL-38, launched in June 2012, is a type of satellite that functions with two others, creating a constellation. If that is true, the three-headed dragons might represent that satellite trio, and the positions of their heads around the Earth could hint at their real-world locations. (original image)
Image by National Reconnaissance Office. The launch number for NROL-66, which lifted off in February 2011, inspired the Route 66 reference. Some speculated that the bull is a reference to the devil, because of 66’s affinity with 666. The red bull could also be a nod to the type of rocket used for the launch, called a Minotaur. NROL-66 was not actually a spy satellite mission, but a classified device launched to demonstrate new technology. (original image)
Image by National Reconnaissance Office. The bird on this patch for NROL-49 could be an eagle to represent the US, and the flames might stand for the fireball produced by the Delta IV Heavy rocket used for the January 2011 launch. The feathered form could also be a phoenix—there is speculation that this satellite took the place of another that was discontinued. The Latin motto reads: "Better the devil you know." (original image)
Image by National Reconnaissance Office. Little is known about the patch for NROL-16, launched in April 2005. The pelican could refer to a location where those birds live, and the gorilla could be America asserting its dominance. (original image)
Image by National Reconnaissance Office. The rocket on the patch for NROL-1 represents the Atlas rocket used in the August 2004 launch, and the geometric shape in the middle might represent the Pentagon or the Department of Defense. “I don’t know about the hearts,” Pearlman says. (original image)
Image by National Reconnaissance Office. This is the NROL-11 patch design that amateur satellite trackers cracked in 2000. Its design inadvertently revealed the mission and location of its payload (see main text). (original image)
Image by National Reconnaissance Office. This patch for NROL-10, launched in December 2000, is a mystery. (original image)
Image by National Reconnaissance Office. The tiger is circling the globe, just like the satellite launched on NROL-9 in May 1999. Why a tiger—or the choice for a mission motto—nobody knows. (original image)
The patches’ relative obscurity changed in 2000, with the launch of a payload known as NROL-11. The mission patch depicted what appeared to be owl eyes peering down at the Earth, where four arrow-shaped vectors, two per orbit, made their way across Africa. Three of the vectors were white, and one was dark. Based solely on studying the design, civilian satellite watcher Ted Molczan hypothesized that the patch showed a failed satellite (the dark vector), and that the newly launched satellite would take its place.
Sure enough, after the launch a new satellite appeared just where Molczan predicted. Pearlman, who reported on the story at the time, says that NRO at first told him “no comment” when he contacted them. About 30 minutes later they called him back and asked him not to publish the story. Pearlman told them no dice, and in the end, the NRO spokesman told him that the patches were just morale-builders for those who work on the launches.
Whether or not NRO admits it, NROL-11’s patch seemed to have inadvertently revealed classified details about its payload’s whereabouts, and when the story broke, the patches suddenly appeared on the public’s radar. Although the patches were under more scrutiny than ever before, the agency didn’t flinch. Rather than classify them or discontinue the tradition, NRO ramped up its game. Subsequent designs became even more ridiculous, featuring patriotic gorillas or 16th-century ships, for example. The public ate it up. Some—like the 2013 mission heralded by a giant Earth-eating octopus—sparked their own media frenzies, and rip-offs of the most popular designs popped up for sale online. NRO’s new motto seems to be “better to have a more outlandish design than show actual details about the flight,” Pearlman says.
As for their motivations, Pearlman doesn’t think they’re in it just for the lolz. “No, I don’t think they’re playing us,” he says. “If anything, it’s an internal gag. Like, how far can you take it without being reprimanded? Or maybe the patches represent jokes that cropped up in the processing of the satellites, which we’ll never know unless they’re declassified—and maybe not even then.”
In 1974, just a couple years after the launch of the first Landsat satellite, scientists noticed something odd in the Weddell Sea near Antarctica. There was a large ice-free area, called a polynya, in the middle of the ice pack. The polynya, which covered an area as large as New Zealand, reappeared in the winters of 1975 and 1976 but has not been seen since.
Scientists interpreted the polynya’s disappearance as a sign that its formation was a naturally rare event. But researchers reporting in Nature Climate Change disagree, saying that the polynya’s appearance used to be far more common and that climate change is now suppressing its formation.
What’s more, the polynya’s absence could have implications for the vast conveyor belt of ocean currents that move heat around the globe.

Satellite imagery allowed scientists to find an ice-free area in the Weddell Sea (upper left quadrant) in the Antarctic winters of 1974 through 1976. (Credit: Claire Parkinson (NASA GSFC))
Surface seawater around the poles tends to be both very cold and relatively fresh, thanks to precipitation and the sea ice that melts into it. Below the surface lies a layer of slightly warmer, more saline water that isn’t diluted by meltwater and precipitation. That higher salinity makes it denser than the water at the surface.
Scientists think that the Weddell polynya can form when ocean currents push these denser subsurface waters against an underwater mountain chain known as the Maud Rise. This forces the water up to the surface, where it mixes with and warms colder surface waters. While it doesn’t warm the top layer of water enough for a person to comfortably bathe in, it's enough to prevent ice from forming. But at a cost—the heat from the upwelling subsurface water dissipates into the atmosphere soon after it reaches the surface. This loss of heat forces the now-cool but still dense water to sink some 3,000 meters to feed a huge, super-cold deep ocean current known as Antarctic Bottom Water.
Antarctic Bottom Water spreads across the global oceans at depths of 3,000 meters and more, delivering oxygen into these deep places. It’s also one of the drivers of global thermohaline circulation, the great ocean conveyor belt that moves heat from the equator towards the poles.

A network of surface and deep-ocean currents moves water and heat around the world. (Credit: NASA/Map by Robert Simmon, adapted from the IPCC 2001 and Rahmstorf 2002)
But for the mixing to occur in the Weddell Sea, the top layer of ocean water must become denser than the layer below it so that the waters can sink.
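The density argument in the preceding paragraphs can be made concrete with a toy calculation. This sketch uses a simplified linear equation of state with illustrative coefficients (not the full TEOS-10 seawater standard), and the temperature and salinity values are plausible but invented:

```python
# Toy seawater density model: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
# Coefficients are illustrative approximations, not the TEOS-10 standard.
RHO0, T0, S0 = 1027.0, 0.0, 34.5  # reference density (kg/m^3), temp (C), salinity (g/kg)
ALPHA = 5e-5    # thermal expansion coefficient, 1/K (small for near-freezing water)
BETA = 7.8e-4   # haline contraction coefficient, 1/(g/kg)

def density(temp_c, salinity):
    """Approximate seawater density in kg/m^3."""
    return RHO0 * (1 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

# Cold, freshened surface layer vs. slightly warmer but saltier subsurface layer
surface = density(-1.8, 34.0)    # near-freezing, diluted by ice melt and precipitation
subsurface = density(0.5, 34.7)  # warmer, but undiluted and saltier

print(f"surface: {surface:.2f} kg/m^3, subsurface: {subsurface:.2f} kg/m^3")
# The subsurface water comes out denser: the salinity difference outweighs
# the temperature difference, so the column stays stably stratified and
# convection (the mixing described above) is suppressed.
```

The point of the sketch is that near freezing, salinity dominates density; making the surface layer even slightly fresher strengthens the stratification, which is the "lid" the researchers describe.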
To find out what has been going on in the Weddell Sea, Casimir de Lavergne of McGill University in Montreal and colleagues began by analyzing temperature and salinity measurements collected by ships and robotic floats in this region since 1956—tens of thousands of data points. The researchers could see that the surface layer of water at the site of the Weddell polynya has been getting less salty since the 1950s. Freshwater is less dense than saltwater, and it acts as a lid on the Weddell system, trapping the subsurface warm waters and preventing them from reaching the surface. That, in turn, stops the mixing that produces Antarctic Bottom Water at that site.
That increase in freshwater is coming from two sources: Climate change has amplified the global water cycle, increasing both evaporation and precipitation. And Antarctic glaciers have been calving and melting at a greater rate. Both of these sources end up contributing more freshwater to the Weddell Sea than what the area experienced in the past, the researchers note.
To look at what the future might hold for this system, de Lavergne and colleagues turned to a set of 36 climate models. Those models, which predict that dry places of the world generally get drier and wet places get wetter, show that this area of the Southern Ocean should see even more precipitation in the future. The models don’t include melting glaciers, but those are expected to add more freshwater, which could make the lid on the system even stronger, according to the researchers.
A weakening of the mixing of water in the Weddell Sea could explain, at least in part, a shrinking in Antarctic Bottom Waters reported in 2012. “Reduced convection would reduce the rate of Antarctic Bottom Water formation,” says de Lavergne. That “could cause a weakening in the lower branch of the thermohaline circulation.”
That lower branch is the cousin to a similar process of convection happening in the Labrador Sea of the North Atlantic, where cold water from the Arctic sinks and drives deep currents south. If this source of deep water were shut off, perhaps because of an influx of freshwater, scientists have said that the results could be disastrous, particularly for Europe, which is kept warm by this movement of heat and water. Climate researchers consider this scenario highly unlikely but not out of the realm of possibility. And even a weakened system can have effects on climate and weather around the world.
More immediately, though, a weakening of the mixing in the Weddell Sea could be contributing to some of the climate trends observed in Antarctica and the Southern Ocean. By keeping warmer ocean waters trapped, the weakening may explain a slowdown in surface warming and an expansion in sea ice, the researchers note.
The weakening of the Weddell Sea mixing has also kept heat and carbon trapped in those deeper layers of ocean water. If another giant polynya were to form, which is unlikely but possible, the researchers warn, it could release a pulse of warming on the planet.
It is no small irony that a quote from one of the grimmest writers of the 20th century has become an inspirational mantra for high achievers ranging from jaunty entrepreneur Richard Branson to rising Swiss tennis star Stanislas Wawrinka, who recently beat Rafael Nadal to win the Australian Open.
The phrase has even been used in a commercial starring Liam Neeson to motivate the entire country of Ireland.
Call it a hunch, but this is not likely what Samuel Beckett, the great purveyor of pessimism, had in mind when he wrote in his 1983 novella Worstward Ho: "Ever tried. Ever failed. No matter. Try again. Fail again. Fail better."
That said, Beckett's alma mater, Trinity College in Dublin, Ireland, is now providing a fresh take on the concept of failing better. Earlier this month, the Science Gallery there opened an exhibition which explores failing as part of the process of finding solutions.
Learning from failure
The show, however, offers a nuanced view of failure, not simply as a stumble on the way to victory. Sure, there's space in the "Fail Better" exhibit given to James Dyson's tale of how his company went through 2,000 prototypes to create its latest cutting-edge vacuum. But attention is also paid to inventions and ideas that merit a full-throated "What were they thinking?"—from lobotomies done with ice picks to a device upon which a pregnant woman would be strapped down and spun with the idea that centrifugal force would make it easier for her to have her baby.
More than anything, says curator Jane ni Dhulchaointigh, the show is about giving failure its due, asking questions about how it’s perceived and its role in innovation. Is failure the opposite of success? Or is it integral to it? Is failure grossly undervalued? Can it be a good thing?
That last question is addressed in “Fail Better” through displays celebrating the common fuse, a device whose failure protects a larger system, and the K1 syringe, designed to fail after one use so it can't be shared and spread disease.
Still, failure is rarely acknowledged, notes Ni Dhulchaointigh, even in fields such as science where it serves such a critical purpose. “For example,” she says, “in scientific journals there is a bias towards publication of ‘successful’ experiments. Is this happening elsewhere? Will this dangerous trend mean that we will be increasingly unlikely to learn from each other’s mistakes?”
The Science Gallery's founding director, Michael John Gorman, had a desire to talk honestly about failure, particularly to young visitors to the museum. He approached Ni Dhulchaointigh last summer and gauged her interest in working with him to create an exhibition that takes a closer look at the yin-yang relationship of success and failure.
Gorman saw Ni Dhulchaointigh as particularly well suited for the role, given the circuitous route she took to her own invention, a multipurpose silicone rubber that can be shaped like Play-Doh and sticks like superglue. She named it Sugru, from the Gaelic word for play.
Ni Dhulchaointigh produced her first batch of the malleable rubber back in 2003, then spent the next five years refining it, all the while thinking big as she looked for multinational partners. But no deal materialized and with money running low, she took to heart a friend’s advice to “Start small and make it good.”
She and her original partners decided to go it alone and, with a boost from a private investor, gave themselves six months to make Sugru happen. In late 2009, after a rave review in London's Daily Telegraph, their sticky, bendable rubber went viral. They sold 1,000 packages in six hours.
Since then it’s been pretty much one upward spiral for Sugru, which was selected as one of Time's top 50 inventions of 2010 (ahead of the iPad, no less). Ni Dhulchaointigh was named Design Entrepreneur of the Year at the London Design Festival in 2012. But she takes the most pleasure in the feedback she gets from the Sugru community, people from all over the world who send in pictures of how they’ve used it to fix things.
“In my experience, when things fail, a space opens up where imaginative solutions can be found,” says Ni Dhulchaointigh. “And the act of solving problems creatively has so much to offer—even on the smallest, humblest, everyday level, like fixing something that breaks.”
For Ni Dhulchaointigh, the appeal of a show like “Fail Better” goes beyond the stories of failure to the people who tell them. She reached out to leaders in different fields and landed the likes of famous explorer Ranulph Fiennes, who donated a pair of boots and the story of how they caused him to fail to summit Everest; innovation expert Ken Robinson, who shared the tale of how the accidental discovery of the color mauve led to the birth of the synthetic dye business; and renowned astrophysicist Jocelyn Bell Burnell, who provided the sad case of the Mars Climate Orbiter, which was lost in the Martian atmosphere because different teams of engineers had used different units of measure.
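The Mars Climate Orbiter failure came down to a single missing unit conversion: one team's software reported thruster impulse in pound-force seconds, while another system interpreted those numbers as newton-seconds. A toy sketch of the mismatch (the impulse value here is invented; only the conversion factor is real):

```python
# Illustration of the Mars Climate Orbiter unit mismatch.
# 1 pound-force second = 4.44822 newton-seconds.
LBF_S_TO_N_S = 4.44822

reported_impulse = 10.0             # emitted by one system, in lbf*s (invented value)
assumed_impulse = reported_impulse  # misread downstream as N*s; no conversion applied
actual_impulse = reported_impulse * LBF_S_TO_N_S  # the physical reality, in N*s

# Every thruster firing's effect was underestimated by the same factor,
# and the small trajectory errors accumulated over the cruise to Mars.
error_factor = actual_impulse / assumed_impulse
print(f"Thrust effect underestimated by a factor of {error_factor:.2f}")
```

A factor-of-4.45 error per maneuver is small enough to hide in routine tracking data yet large enough, compounded over months, to put the spacecraft fatally low in the Martian atmosphere.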
But the most poignant display in the exhibition is simply titled “Superman’s Wheelchair.” It's the first wheelchair used by actor Christopher Reeve after his horse riding accident left him a quadriplegic. It was presented by Mark Pollock, a blind endurance racer and rower who himself was paralyzed when he fell from a second-floor window in 2010.
Pollock says he was moved by Reeve’s commitment to finding a cure for spinal cord injuries, and while Reeve died before he succeeded, Pollock has taken on the challenge, engaging in aggressive physical therapy and learning to walk with the help of robotic legs. There’s clearly no guarantee of success, but it remains his objective. As Pollock puts it, “We know that in our pursuit of a wildly ambitious goal, the potential for failure travels with us. If there is no risk of failure, it probably is not worth pursuing.”
Video bonus: Watch this video about the "Fail Better" exhibit, including gallery visitors sharing personal "fails."
"Fail Better" is on display at Trinity College Dublin's Science Gallery through April 27, 2014.
The two paralysis patients were up and walking on treadmills in no time. This impressive feat was made possible by an unprecedented new surgery, in which researchers implanted wireless devices in the patients’ brains that recorded their brain activity. The technology allowed the brain to communicate with the legs—bypassing the broken spinal cord pathways—so that the patient could once again regain control.
These patients, it turns out, were monkeys. But this small step for monkeys could lead to a giant leap for millions of paralyzed humans: The same equipment has already been approved for use in humans, and clinical studies are underway in Switzerland to test the therapeutic effectiveness of the spinal cord stimulation method in humans (minus the brain implant). Now that researchers have a proof-of-concept, this kind of wireless neurotechnology could change the future of paralysis recovery.
Instead of trying to repair the damaged spinal cord pathways that usually deliver brain signals to the limbs, scientists tried an innovative approach to reverse paralysis: Bypassing the injury bottleneck altogether. The implant worked as a bridge between the brain and the legs, directing leg motion and stimulating muscle movement in real time, says Tomislav Milekovic, a researcher at Switzerland's École Polytechnique Fédérale de Lausanne (EPFL). Milekovic and co-authors report their findings in a new paper published Wednesday in the journal Nature.
When the brain's neural network processes information, it produces distinctive signals—which scientists have learned to interpret. Those that drive walking in primates originate in the dime-sized region known as the motor cortex. In a healthy individual, the signals travel down the spinal cord to the lumbar region, where they direct the activation of leg muscles to enable walking.
If a traumatic injury severs this connection, a subject is paralyzed. Although the brain is still able to produce the proper signals, and the leg's muscle-activating neural networks are intact, those signals never reach the legs. The researchers managed to reestablish the connection through real-time, wireless technology—an unprecedented feat.
How does the system work? The team's artificial interface begins with an array of almost 100 electrodes implanted in the brain's motor cortex. It's connected to a recording device that measures the spiking electrical activity controlling leg movements. The device sends these signals to a computer that decodes them and translates the instructions to another array of electrodes implanted in the lower spinal cord, below the injury. When the second group of electrodes receives the instructions, it activates the appropriate muscle groups in the legs.
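The record-decode-stimulate loop can be caricatured in a few lines of code. Everything below—the function names, the crude threshold "decoder," the electrode-group labels—is a deliberate simplification for illustration, not the statistical decoder the study actually used:

```python
# Schematic sketch of a brain-spine interface loop: record motor-cortex
# activity, decode the gait phase, then stimulate the matching lumbar
# electrode group. Purely illustrative; real decoders are trained
# statistical models calibrated per subject.

def decode_gait_phase(spike_counts):
    """Map motor-cortex spike counts to a gait phase (toy threshold rule)."""
    total = sum(spike_counts)
    return "swing" if total > 50 else "stance"

def stimulation_pattern(phase):
    """Choose which spinal electrode group to activate for a given phase."""
    # Flexor muscles lift the leg during swing; extensors bear weight in stance.
    return {"swing": "flexor_electrodes", "stance": "extensor_electrodes"}[phase]

# One pass through the loop: record -> decode -> stimulate, in real time.
sample = [2, 0, 5, 1, 3] * 20   # pretend spike counts from ~100 electrodes
phase = decode_gait_phase(sample)
print(phase, "->", stimulation_pattern(phase))
```

The essential design point survives the simplification: because decoding and stimulation happen continuously in real time, the animal's own motor intent drives the legs, with no joystick or pre-programmed gait sequence.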
For the study, the two Rhesus macaque monkeys were given spinal cord injuries in the lab. After their surgeries, they had to spend a few days recovering and waiting for the system to collect and calibrate necessary data on their condition. But just six days after injury, one monkey was walking on a treadmill. The other was up and walking on post-injury day 16.
The success of the brain implant demonstrates for the first time how neurotechnology and spinal cord stimulation can restore a primate's ability to walk. “The system restored locomotor movements immediately, without any training or re-learning,” Milekovic, who engineers data-driven neuroprosthetic systems, told Smithsonian.com.
“The first time we turned the brain-spine interface on was a moment that I’ll never forget,” added EPFL researcher Marc Capogrosso in a statement.

A new brain implant wirelessly sends signals to the legs' muscle groups. (Illustration by Jemere Ruby)
The technique of "hacking" the brain's neural networks has produced remarkable feats, such as helping to create touch-sensitive prosthetics that allow wearers to perform delicate tasks like cracking an egg. But many of these efforts use cable connections between the brain and recording devices, meaning the subjects aren't able to move freely. “Neural control of hand and arm movements was investigated in great detail, while less focus has been given to the neuronal control of leg movements, which required animals to move freely and naturally,” Milekovic says.
Christian Ethier, a neuroscientist at Quebec's Université Laval who was not involved in the research, called the work a “major step forward in the development of neuroprosthetic systems." He added: “I believe this demonstration is going to accelerate the translation of invasive brain-computer interfaces toward human applications.”
In an accompanying News & Views piece in Nature, neuroscientist Andrew Jackson agrees, pointing out how quickly advances in this field have moved from monkeys to people. A 2008 paper, for instance, demonstrated that paralyzed monkeys could control a robotic arm with just their brain; four years later a paralyzed woman did the same. Earlier this year, brain-controlled muscle stimulation enabled a quadriplegic person to grasp items, among other practical hand skills, after the same feat was achieved in monkeys in 2012.
Jackson concludes from this history that “it's not unreasonable to speculate that we could see the first clinical demonstrations of interfaces between the brain and spinal cord by the end of the decade.”
The Blackrock electrode array implanted in the monkeys' brains has been used for 12 years to successfully record brain activity in the BrainGate clinical trials; numerous studies have demonstrated that this signal can accurately control complex neuroprosthetic devices. “While it does require surgery, the array is an order of magnitude smaller than the surgically implanted deep brain stimulators already used by more than 130,000 people with Parkinson's disease or other movement disorders,” Milekovic adds.
While this test was limited to just a few phases of brain activity related to walking gait, Ethier suggests that it could potentially enable a greater range of movement in the future. “Using these same brain implants, it is possible to decode movement intent in a lot more detail, similar to what we have done to restore grasp function. ... I expect that future developments will go beyond and perhaps include other abilities like compensating for obstacles and adjusting walking speed.”
Ethier notes another intriguing possibility: The wireless system might actually help the body heal itself. “By re-synchronizing the activity in the brain and spinal motor centers, they could promote what is called ‘activity-dependent neuroplasticity,’ and consolidate any spared connections linking the brain to the muscles,” he says. “This could have long-term therapeutic effects and promote the natural recovery of function beyond what is possible with conventional rehabilitation therapies.”
This phenomenon is not well understood, and the possibility remains speculative at this point, he stresses. But the tangible achievement this research demonstrates—helping the paralyzed walk again with their brains—is already a huge step.
Beth Ripley ran down the hallway toward the cardiologist with a fresh heart in her hands.
When she showed it to the cardiologist, he took it and began to turn it over, inspecting and probing it. He recognized immediately that months of planning had to be cast aside. Back to the drawing board.
The heart in question was a full-sized 3D model of the patient’s actual ticker, hot off the presses at Brigham and Women’s Hospital in Boston, Massachusetts. Ripley, a radiologist, along with attending radiologist Mike Steigner, had created the model for the cardiology team after digital models had proved ineffective for visualizing the surgical approach. Once the cardiologist got his hands on a mockup based on the data from the CT scans, the problem was clear as day.
Just by looking at the model, he realized that the approach to the procedure would probably have to change from a minimally invasive catheterization to a full-blown open heart surgery. In effect, his team had dodged a potential complication that was unforeseeable without the physical model.
Ripley and Tatiana Kelil, another Brigham and Women’s radiologist, are part of a new effort called 3D Print For Health, started only five months ago. It’s a labor of love, conducted in their spare time in an effort to stimulate discussions within the biomedical 3D printing community. They are also working with multiple surgeons and radiologists at Brigham and Women’s Hospital, studying how detailed 3D models of patients’ real anatomies can help reduce complications from surgery and treatment, and also improve patients’ ability to be their own best advocates.
“We wanted to build a place for patients and researchers to share ideas about how we can best use 3D printing in medicine,” Ripley says. “In the hands of the right people, it can be an extremely powerful tool.”
Image by 3D Print For Health. 3D scan of a heart (original image)
Image by 3D Print For Health. 3D scan of a stroke in the brain (original image)
Image by 3D Print For Health. 3D scan of a kidney (original image)
The team is motivated by their patients, and the desire to make a real difference for them. Sometimes that means helping the patient better understand their disease or pathology; sometimes it’s helping a surgeon develop a tightly choreographed play-by-play for an upcoming procedure.
“We asked surgeons what kept them up at night,” Kelil says. “Did they need help visualizing a patient’s anatomy, or communicating a procedure to a patient? We don’t want to print a model just because it’s printable—it has to have utility.”
Brigham and Women’s isn’t the first medical institution to use 3D printing in this way. Medical supply companies are using 3D printed anatomical models to design better prototypes of devices, including heart valves and prosthetics; the National Institutes of Health maintains a print exchange where models are freely available for download. What makes the Brigham and Women’s Hospital efforts different is that they’re designing and running studies of how pre-procedure printed models make a difference in reducing surgery time and complications.
The team is focusing on two procedures in conjunction with other physicians at Brigham and Women's Hospital—a minimally invasive aortic valve replacement and a robotic-assisted kidney tumor resection where every second counts after the vessels are clamped. Having a 3D model of a patient’s aorta prior to surgery allows doctors to choose a valve that fits exactly; for a kidney, the model gives surgeons better visualization of tumor location, minimizing tissue damage from reduced blood flow to the organ during surgery.
“With minimally invasive valve replacement, interventionalists don’t get to open your chest and physically measure the valve to make sure it fits,” Ripley says. “Currently, the only way to measure that is with a 2D image, but even with the best images it’s not always easy.”
Working closely with James Weaver and Ahmed Hosny, experts in high-resolution 3D printing at Harvard University’s Wyss Institute for Biologically Inspired Engineering, the team is investigating how accurately the digital data translates into physical models, as well as how to make the best use of existing scans to reduce patients’ exposure to unnecessary additional radiation.
Dentists have been doing it for years. When you lose the crown of a tooth, they fabricate a replacement; anything less than a perfect fit can damage the surrounding teeth and underlying bone. With 3D printing, the team sees personalized medicine taking off in the mainstream.
“We’re really interested in creating patient-specific treatments,” Hosny says. “We want to create the most appropriate solution for you, and ideally that means taking measurements, sending them off to a medical device manufacturer and getting back something that’s an exact fit for each patient.”
And, the group thinks that medical 3D models have applications in common ailments, not just for rare or complicated procedures.
Image by 3D Print For Health. Beth Ripley (left) and Tatiana Kelil (right) explain the process of 3D printing anatomical models to attendees at the National Maker Faire last weekend in Washington, D.C. (original image)
Image by 3D Print For Health. (original image)
Image by 3D Print For Health. (original image)
Image by 3D Print For Health. (original image)
Image by 3D Print For Health. (original image)
Image by 3D Print For Health. (original image)
Image by 3D Print For Health. (original image)
Image by 3D Print For Health. Kelil (far left) and Ripley (second from left) were joined by teammates James Weaver (second from right) and Ahmed Hosny (far right), from Harvard University's Wyss Institute for Biologically Inspired Engineering. (original image)
They’ve created an array of models showing effects of the “Top 10 Killers” to demonstrate how 3D printing would be useful for approaching cardiovascular disease, cancer, COPD, trauma, stroke, Alzheimer’s, diabetes, pneumonia and flu, kidney disease and suicide. At the recent National Maker Faire in Washington, D.C., attendees milled around the display table, picking up brains and feet and hearts, while Ripley, Kelil, Hosny and Weaver took turns explaining the process of producing models and their potential benefits for healthcare.
The team hopes their efforts will lead to patient-specific treatment. As a case in point, they refer to Steven Keating, a researcher at MIT, who in 2014 was diagnosed with a large brain tumor. Keating was active in working with Weaver and Hosny to visualize the tumor in 3D, while his surgeon, Ennio Chiocca, asked them to print a specially textured replica.
The 3D models helped Keating better understand the scope of the tumor and provided powerful visual aids for communicating his diagnosis to his family, friends and fellow scientists. His experience has also helped raise public awareness of the educational power of biomedical 3D printing.
Ideally, the group envisions a patient being able to take scan data to a doctor and have a model made of that organ or tissue—for any procedure. At the moment, most insurance plans don’t cover the cost of producing models, but Kelil says that if they continue to prove the models' utility in diagnosis, treatment and cost reduction, that might change in the near future. The heart Ripley produced cost about $200 in materials and labor.
At a minimum, if a patient is interested in obtaining a 3D printed model, they should ask for the digital images right away, Ripley advises. Those images may come in handy down the road.
“Patients should have access to their own data,” Kelil says. “It’s their own anatomy.”
Joseph Madsen, associate professor of neurosurgery at Harvard Medical School and the director of the epilepsy surgery program at Boston Children’s Hospital, also recently took advantage of a physical model of a patient’s brain he would operate on. He was able to do a dry run of the surgery, which was a complicated follow-up procedure, on the model.
“We’re not quite there yet for routine use in surgery, but we have to keep working on it every day,” Madsen says. He thinks it will take some time for the practice to mature.
Madsen has a special understanding of the 3D modeling world: as a high schooler, his very first lab advisor was a computer science graduate student in Utah who was experimenting with computer animation. At the time, in the early 1970s, this was a profound new use of computing power; computers were still slow behemoths. Nearly two decades later, Madsen’s advisor, Ed Catmull, co-founded Pixar Animation Studios.
“Ed had the vision of 3D objects that could be used in entertainment, and it still took 20 years between that and the production of Toy Story,” Madsen says. “What’s important is the vision for how the application [of 3D printing] is going to be made. It’s what you do with it, how you manipulate it. I’m very much in favor of the extension of the technology, but it’ll require a lot of really thoughtful evaluation and utility from surgeons.”
In 2009, automotive designers at Japanese carmaker Nissan were scratching their heads over how to build the ultimate anti-collision vehicle. Inspiration came from an unlikely source: schools of fish, which move synchronously by sticking close together while simultaneously staying a safe stopping distance apart. Nissan took the aquatic concept and swam with it, creating safety features in Nissan cars like Intelligent Brake Assist and Forward Collision Warning that are still used today.
Biomimicry—an approach to design that looks for solutions in nature—is by now so widespread that you may not even recognize the real-life inspiration behind your favorite technology. From flipper-like turbines to leaf-inspired solar cells to UV-reflective glass with spider web-like properties, biomimicry offers designers efficient, practical, and often economical solutions that nature has been developing over billions of years. But combine biomimicry with sports cars? Now you're in for a wild ride.
From the Jaguar to the Chevrolet Impala, automotive designers have a long tradition of naming their cars after creatures that evoke power and style. Carmakers like Nissan even go so far as to study animals in their natural environments to advance automotive innovation. Here are a few of the most famous classic cars—commercial and concept—that owe their inspiration to the deep blue sea.
A Bubble of One’s Own

McLaren P1 Supercar (Axion23 via Wikicommons)
While automotive designer Frank Stephenson was on vacation in the Caribbean, a sailfish mounted on the wall of his hotel made him do a double take. The fish's owner was especially proud of his catch, he told Stephenson, because sailfish are prized for being too fast to easily capture. Reaching speeds of 68 miles per hour, the sailfish is one of the fastest animals in the ocean (close competitors include its cousins the swordfish and marlin, all of which belong to the billfish family).
His curiosity hooked, Stephenson returned to his job at the headquarters of British automotive giant McLaren eager to learn more about what makes the sailfish the fastest in the sea. He discovered that the fish’s scales generate tiny vortices that produce a bubble layer around its body, significantly reducing drag as it swims.
Stephenson went on to design a supercar in the fish’s image. The P1 hypercar needs generous air circulation to sustain combustion and keep its engine cool at high performance, so McLaren’s designers applied the fish-scale blueprint to the inside of the ducts that channel air into the P1’s engine, boosting airflow by 17 percent and increasing the efficiency and power of the vehicle.
The Road Shark
Corvette Mako Shark (Tino Rossini / iStock)
Mako Shark Side Profile (CoreyFord / iStock)
Corvette Stingray (Arpad Benedek / iStock)
Corvette Manta Ray (Chris Sauerwald / iStock)
Out of all the ocean-inspired sports cars, the Corvette Stingray is perhaps the most famous. Colloquially named “The Road Shark,” the Stingray is still produced and sold today. It is, however, just one of a suite of shark- and ray-inspired 'Vettes that also includes the Mako Shark, the Mako Shark II and the Manta Ray, though none of those enjoyed the Stingray’s longevity. America’s love affair with the American-built Stingray continues today: it remains a race-ready sports car for not a whole lot of money.
Corvette’s aquatic renaissance stemmed partly from one man’s fishing trip. General Motors design head Bill Mitchell, an avid deep-sea fisher and nature-lover, returned from a trip to Florida with a Mako shark—a pointy-nosed apex predator with a metallic blue back—which he later mounted in his GM office. Mitchell was reportedly captivated by the vibrant gradation of colors along the underbelly of the shark, and worked tirelessly with designer Larry Shinoda to translate this coloration to the new concept car, the Mako Shark.
Although the car never went on the market, the prototype alone gained iconic status. But the concept didn’t disappear entirely. Instead, after acquiring a few upgrades, the Mako evolved into the Manta Ray after Mitchell was inspired by the movement of a manta powerfully gliding through the ocean.
A Little More Bite
Plymouth Barracuda (crwpitman / iStock)
This iconic fastback almost had an entirely different namesake when Plymouth’s executives lobbied to call the car "Panda." Unsurprisingly, the name was unpopular with its designers, who were looking for something with a little more…bite. They settled on "Barracuda," a title more befitting of the muscle car’s snarling, toothy grin.
Serpentine in appearance, barracudas in the wild attack with short bursts of speed. They reach up to 27 miles per hour, and have been observed overtaking prey larger than themselves using their rows of razor-sharp teeth. Highly competitive animals, barracudas will sometimes challenge animals two to three times their size for the same prey.
The Plymouth Barracuda was rushed to market in 1964 to beat the release of its direct competitor, the Ford Mustang. The muscle car’s debut was rocky, but it returned in 1970 with an unapologetically fierce body design and a V8 engine. Sleek yet muscular, the Barracuda lives up to its name—a wickedly fast classic car with a predatory instinct.
Misguided by a Boxfish
Mercedes-Benz Bionic (NatiSythen via Wikicommons)
Despite its goofy-looking exterior, the boxfish represents an amazing feat of bioengineering. Its box-shaped, lightweight, bony shell makes the small fish agile and maneuverable, as well as purportedly aerodynamic and self-stabilizing. Such attributes made it an ideal inspiration for a commuter car, which is why Mercedes-Benz unveiled the Bionic in 2005—a concept car that took technical and even cosmetic inspiration from the spotted yellow fish.
Sadly, the Bionic never made it to market after further scientific analysis largely debunked the boxfish’s purported “self-stabilizing” properties. That research revealed that, over the course of its evolution, the boxfish had traded speed and power for an assortment of defensive tools and unparalleled agility. Bad news for the Bionic—but a biomimicry lesson for the books.
I remember the first time I saw Eddie Van Halen on MTV, the way he played two hands on the fingerboard during his short “Jump” guitar solo. I loved his cool “Frankenstein” guitar, so named because he cobbled together a variety of guitar parts and decorated his creation with colored tape and paint. Even as a 13-year-old who grew up primarily listening to, and playing, classical music, I felt compelled to run out and buy his band’s “1984” LP at my local Tower Records store.
Rock 'n' Roll is an industry that’s continually pushing musical, social and cultural boundaries, and the electric guitar is its iconic instrument. The acoustic version has been around since at least the 16th century. So when I first started working with co-curator Gary Sturm on an exhibition about the invention of the electric guitar at the Smithsonian’s National Museum of American History, our driving question was: Why electrify this centuries-old instrument? The simplest answer: Guitarists wanted more volume.
Through the 19th century, guitars were part of a musical ensemble. As performance spaces increased in size, stringed instruments like guitars were hard to hear over other instruments, especially horns. As a result, the traditional Spanish-style acoustic guitar—wooden with a flat top, a symmetrical hollow body, a sound hole in the center, and gut strings—began to change in size, shape and construction. For example, in the late 1890s, Orville Gibson, founder of the Gibson Mandolin-Guitar Manufacturing Company, designed a guitar with an arched (or curved) top that was stronger and louder than the earlier flat-top design.
During the first three decades of the 20th century, with the rising popularity of Hawaiian and big band music in America, guitar makers built larger-bodied instruments, using steel instead of gut strings, and metal instead of wood for the guitar body. Around 1925, John Dopyera designed a guitar with metal resonating cones built into the top that amplified the instrument’s sound. That suited twangy Hawaiian and blues music but not other genres. Then, in the 1920s, innovations in microphones and speakers, radio broadcasting, and the infant recording industry made electronic amplification for guitars possible. The volume was suddenly able to go up, way up.
The electric guitar was essentially born in 1929—long before the advent of Rock 'n' Roll music. The first commercially advertised electric guitar was offered that year by the Stromberg-Voisinet company of Chicago, though it was not a smash hit. The first commercially successful electric, Rickenbacker’s “Frying Pan” guitar, didn’t kick off Rock 'n' Roll yet either, but it did inspire competitors to jump into the electric guitar market. Invented in 1931, the Frying Pan had an electromagnetic pickup made out of a pair of horseshoe magnets placed end-to-end to create an oval around the guitar’s strings, with a coil placed underneath the strings. The pickup, a device that converts the strings’ vibrations into electrical signals that can be amplified, was bulky and unattractive. But it worked. The commercial version of the Frying Pan was a hollow cast-aluminum lap-steel guitar, and wasn’t an immediate hit beyond some Hawaiian, country and blues musicians. It differs from the traditional Spanish-style guitar in that it is played horizontally, on a stand or in the player’s lap, and has a sliding steel bar that can be moved along the frets for a gliding effect.
Spanish-style electrics, which you could sling in front of you while standing and singing, proved to be much more versatile for many different musical genres. Gibson’s 1936 ES-150 (E for Electric and S for Spanish) had a sleek bar-shaped electronic pickup that was mounted into the guitar’s hollow body for a more streamlined look. The pickup earned the nickname “The Charlie Christian” thanks to the jazz virtuoso who is generally credited with introducing the electric guitar solo. In 1939, Christian stepped out in front of Benny Goodman’s band and performed long, complicated passages imitating the style of horn playing. He explained, “Guitar players have long needed a champion, someone to explain to the world that a guitarist is something more than a robot pluckin’ on a gadget to keep the rhythm going.”
There was a lot of tinkering with the Spanish-style electric guitar in the 1930s and 40s since the electronics in a hollow-body instrument caused distortion, overtone and feedback—especially problematic for recording sessions. Historians and guitar enthusiasts enjoy debating over who really developed the first solid-body Spanish-style guitar to resolve these sound issues. The American History Museum owns a rare Slingerland Songster made in or before 1939. This model is possibly the earliest commercially marketed solid-body Spanish-style electric guitar.
Regardless of the invention debate, it is clear that former radio repairman Leo Fender was the first to mass-produce and sell a successful solid-body Spanish-style electric guitar. His company’s simply constructed 1950 Fender Broadcaster (renamed Telecaster as the result of a trademark dispute), with its flat body and a neck bolted onto it, was initially derided by competitors as too simple and lacking in craftsmanship. Gibson’s president Ted McCarty dismissed it as a “plank guitar.” Yet everything about its patented, practical design was optimal for mass-producing an inexpensive solid-body guitar, earning Fender the moniker “the Henry Ford of the electric guitar.”
A rivalry sprang up between Fender and Gibson, creating some of the solid-body electrics most coveted by musicians and collectors, including the 1952 Gibson “Les Paul” model with a curved top and a combination bridge-tailpiece (the guitar was designed primarily by McCarty, with input by the famous guitarist who endorsed it), the 1954 Fender Stratocaster, and a 1958 version of the Gibson Les Paul with a new “humbucking” pickup that transmitted less background interference from electrical equipment.
The Fender Stratocaster may be the most widely recognizable electric guitar and the one most associated with the rise of Rock 'n' Roll music. It featured a distinctive double-cutaway design that allowed musicians to play higher notes by reaching higher on the fingerboard, three pickups (which allowed a greater range of sounds than previous guitars, which had at most two pickups), and a patented tremolo system that allowed players to raise or lower the pitch of the strings. In the hands of guitarists like Buddy Holly, Eric Clapton, Bonnie Raitt and many others, the Stratocaster became an icon of American Rock 'n' Roll that took the world by storm. The Stratocaster, the Gibson Les Paul and other solid-body electrics were nothing if not versatile, and rock guitarists were obsessed with versatility. Guitarists could not only change the tone, volume and pitch, but could also manipulate the sound by playing close to the amplifier, grinding the strings against objects and using special-effects accessories like the wah-wah pedal. Jimi Hendrix was this instrument’s master of manipulation, influencing generations of guitarists to experiment creatively with their playing techniques and equipment.
In the 1970s and ’80s the sound of the electric guitar was stretched in heavy metal music. As one of its leading practitioners, Van Halen pushed his self-built “Frankenstein” (based on a Stratocaster but with a mish-mash of other guitar parts) to the limit, experimenting, for instance, with “dive-bombing,” which uses the tremolo arm to drive the guitar’s lowest note ever lower. Hendrix had done this but forced the guitar out of tune as a result. However, by the mid-1980s, inventor Floyd Rose had improved the tremolo system, allowing players like Van Halen to dive-bomb repeatedly. The guitar sound was now not only loud but also really raucous, flashy and a bit dirty—just the way musicians, and their fans, wanted it.
It’s ironic that Leo Fender, the creator of the most influential instrument in rock music, wasn’t actually a fan of Rock 'n' Roll; he preferred country and western. But it goes to show you that once something new is out there, you can’t stop makers and players from reinventing it, adapting it for new purposes, taking it apart and putting it back together in new ways. The electric guitar is a prime example of unintended consequences. Initially, it just wanted to be a bit louder, but it ended up taking over and reinventing popular music and culture. Will we even recognize the sound of the electric guitar 10 or 20 years from now? I, for one, hope not.
Monica M. Smith is a historian and the exhibition program manager at the Smithsonian’s Lemelson Center for the Study of Invention and Innovation at the Smithsonian’s National Museum of American History. She wrote this for What It Means to Be American, a national conversation hosted by the Smithsonian and Zócalo Public Square.
Gallbladder removal is a very common procedure, accounting for more than 700,000 surgeries in the United States each year, at substantial cost to health care providers. Traditionally, the procedure has required numerous incisions, which make for a long and painful recovery. Even as the need for multiple incisions, or ports, has decreased, surgeons have sought methods for better visualization during surgery.
Levita Magnetics, a San Mateo, California-based medical device company, has spent over a decade developing a magnetic surgical system to ease some of the challenges associated with common procedures, starting with gallbladder removal through a single incision. By using magnets through the abdominal wall to maneuver tools during surgery, surgeons can benefit from a better view of the operative field. Fewer incision points can lead to less post-operative pain and scarring and a shorter recovery period. The U.S. Food and Drug Administration approved the company’s system, which includes a grasper device and detachable tip, in 2016.
When it was time to begin offering the system to surgeons in the field, the company went straight to some of the nation’s foremost surgeons. Matthew Kroh, the director of surgical endoscopy at the Cleveland Clinic, was the first to use the technology. Since then, major surgery centers at Stanford and Duke Universities have also partnered with Levita.
Levita Magnetics founder and CEO Alberto Rodriguez-Navarro spoke with Smithsonian.com about his first-of-its-kind system.
How did the idea for the company come about?
I’m a surgeon and spent 10 years working in a public hospital in the poorest area in Santiago, Chile, where I’m from. One of the biggest issues with surgery is avoiding pain. In surgery, pain is related to incisions, so the more incisions, the more pain a patient will have. When we reduce the number of incisions, a patient has less pain.
My father is a mechanical engineer, and he was thinking about this problem on his own. We began playing around with magnetics. You know those fish aquariums that you can clean without changing the water? Our system works on the same concept, but applied to surgery. Instead of the glass of the tank separating the two areas, it’s the abdominal wall. We developed our first prototype in Chile more than 10 years ago. We filed our first patent in Chile and used our company to develop the idea, but we were pretty relaxed about it.
How did you advance the idea from there?
I did not expect this would change my life. But an important thing to note is that the Chilean government is trying to be a hub of health care in Latin America. There is a lot of effort being directed at helping entrepreneurs develop new things. In Chile, we proved our system successful for more advanced procedures. We also got commercial approval for Europe. But we chose to focus on the U.S. first.
The Chilean government sponsored some of our research and development, as well as my entrepreneurship training at SRI International (formerly the Stanford Research Institute). The chance of developing this further in Chile was small, so I stopped clinical practice in Chile, and we moved to the Bay Area in early 2013.
We finalized our clinical product in early 2014, completed clinical trials to earn a CE Mark for consumer sales in Europe in 2015, and the FDA approved our new technology in 2015. The FDA has been very supportive and created a new classification for our technology, “Magnetic surgical instrument system.”
How does your magnetic surgery system work?
A magnetic grasper device delivers and retrieves a detachable tip that clamps onto the gallbladder and can be repositioned. The magnetic grasper fits through a single entry point, such as the navel. Then a magnetic controller positioned outside the abdominal wall is used to maneuver the tip into the desired position. It was designed to look and be simple.
Levita Magnetics is named for how our detachable tip can sort of levitate inside the abdomen.
Grasper with magnetically-controlled positioning (U.S. Pat. No. 9,339,285)
What are some of the most obvious benefits?
Laparoscopic surgery can require four or five multi-port incisions. Surgeons end up lacking triangulation when they move from multi-port to a reduced port model. This can lead to instrumentation clashing and poor visualization, which leads to increased difficulty in the operating room and overall increased risk in performing surgery. One port limits movement.
With our external magnet, a surgeon can let go, so that mobility is not limited. Additionally, single-port visibility is not limited once a surgeon lets go. It’s a little like driving. If you can see well, you can go fast, securely. If you have to go slowly, that costs more resources.
How has adoption been in the field?
Surgeons can be very conservative—I say as a surgeon and as someone who knows surgeons—and they often do what they know. That means adoption among surgeons can be much slower than in other fields, and our task was to develop convincing scientific evidence. The technology itself is very manageable. Surgeons at Duke University and the Cleveland Clinic and several other institutions already use our system. Once surgeons adopt it, they really stick with it.
Why start with gallbladders? What’s next for Levita Magnetics?
Gallbladder surgery is the simplest abdominal surgery and one of the most common. But we see many other opportunities to eventually expand to thorax, bariatric, colorectal, and urological and gynecological surgeries.
We’re also moving into working with robotics to give more tools to the surgeons. We want to offer a system with more than one magnet on the field to provide a complete view. This would be especially advantageous in operating rooms where there are not two surgeons present, where there might be one surgeon and one medical student or assistant. Offering a surgeon a better option is also better for patients. It reduces invasiveness, increases safety, and is also a better use of human resources.
We have 14 issued or pending patents, including three patents [U.S. Patent Numbers 8,790,245, 8,764,769 and 9,339,285] granted in the United States. We also have an article coming out in the highly prestigious medical journal Annals of Surgery this spring. This is a good sign that we’re on the right track.
Big data is getting so big, it’s slipping the surly bonds of Earth.
A startup called Orbital Insight, which recently raised nearly $9 million in funding, is using satellite imagery and cutting-edge computing techniques to estimate global oil surplus, predict crop shortfalls before harvest time and spot retail trends by keeping track of the number of cars in big-box parking lots. It should also be possible to train the software to spot illegal deforestation early and better track climate change.
The company uses machine learning techniques and computing networks that mimic the human brain to spot patterns in massive amounts of visual data. Facebook uses similar techniques to recognize faces in uploaded images and auto-tag you and your friends. But instead of searching for faces, Orbital Insight is taking advantage of the growing abundance of satellite imagery, thanks to the rise of small, low-cost satellites, and teaching their networks to automatically recognize things like vehicles, the rate of construction in China and the shadows cast by floating-lid oil containers, which change depending on how full they are.
It would be impossible, of course, for humans to sift through regularly updated global satellite imagery. But with massively parallel computers and advanced pattern-recognition techniques, Orbital Insight aims to deliver types of data that haven’t been available before. Current global oil estimates, for instance, are already six weeks old when they’re published. With Orbital, analysis of crop yields could be delivered mid-season—important information to have, whether you’re a high-level United Nations worker trying to get ahead of a food crisis, or a commodities trader working for a hedge fund.
Orbital Insight hasn’t been around long—it was founded in late 2013 and only came out of “stealth mode” late last year. But the company’s founder, James Crawford, has plenty of experience in compatible fields. A former autonomy and robotics head at NASA’s Ames Research Center, he also spent two years as engineering director at Google Books, turning archived printed pages into searchable text.
Several companies, like Spire and Inmarsat, and even Tesla’s Elon Musk, are working on hardware—designing and launching new networks of satellites—but Crawford says Orbital Insight is instead focusing purely on software.
“In some ways I see what we’re doing here in the impetus of this company,” says Crawford, “is taking a lot of the learning [at Google] about how to do big data, how to apply [artificial intelligence], how to apply machine learning to these pipelines of images, and apply that to the satellite space.”
Crawford’s company may be one of the few applying emerging software techniques such as artificial neural networks and machine learning to satellite imagery. But the technique he's using, also known as deep learning, is exploding in the technology space at the moment. Established companies like Facebook, Google and Microsoft use deep learning for things like automatic image tagging and improved speech recognition and translation. IBM also recently acquired a deep learning company, AlchemyAPI, to enhance its Watson computer system.
With deep learning, powerful computers and multiple layers of concurrently running pattern recognition (hence the "deep" in deep learning) mimic the neural networks of the human brain. The aim is to get a computer to “learn” to recognize patterns or perform tasks that would be too complex and time-consuming to “teach” using traditional software.
The details of deep learning are technical, but at the very basic level, it’s surprisingly simple. When it comes to measuring retail trends with parking lot activity, Crawford says the company first has employees manually mark cars in a few hundred parking lots with red dots. “Then, you feed each individual car into the neural network, and it generalizes the patterns of light and dark, the pattern of pixels of a car,” says Crawford. “And when [the computer] looks at a new image, what it’s essentially doing is fairly sophisticated, but still basically a pattern match.”
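What Crawford describes is ordinary supervised learning: hand-labeled examples go in, and a learned pattern-matcher comes out. The following is a deliberately tiny sketch of that workflow, not Orbital Insight's actual pipeline; the synthetic 8x8 "parking lot" patches and the single-layer logistic model are invented stand-ins for real imagery and a real deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(has_car):
    """Synthetic 8x8 grayscale patch: a 'car' is a bright 2x4 block on dark asphalt."""
    patch = rng.normal(0.2, 0.05, (8, 8))       # dark lot surface plus noise
    if has_car:
        r, c = rng.integers(0, 6), rng.integers(0, 4)
        patch[r:r+2, c:c+4] += 0.6               # bright car-shaped blob
    return patch.ravel()

# "Manually marked" training set: alternating car / no-car patches
X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# A single-layer logistic model learns the light/dark pixel pattern of a car
w, b = np.zeros(64), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(X @ w + b)))           # predicted car probability
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)             # gradient-descent updates
    b -= 0.1 * grad.mean()

# Scoring a "new image" is essentially a learned pattern match
new_patches = np.array([make_patch(True), make_patch(False)])
scores = 1 / (1 + np.exp(-(new_patches @ w + b)))
# the trained model should score the car patch higher than the empty one
```

A production system would use a multi-layer convolutional network and real labeled imagery, but the shape of the process is the same: labeled examples, iterative weight updates, then scoring of fresh images.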
When estimating retail activity, Crawford says his company is much better at inferring how a chain is doing on a national level, by measuring how full parking lots are over time and comparing that to how full the same lots were in previous quarters using older images, than gauging the health of an individual store.
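That chain-level inference boils down to simple arithmetic over per-lot counts: compare each lot to its own past, then average across the chain. A toy illustration, with invented lot names and numbers:

```python
# Hypothetical quarterly car counts per lot (all values invented for illustration)
counts = {
    "lot_a": {"2014Q4": 210, "2015Q4": 241},
    "lot_b": {"2014Q4": 180, "2015Q4": 171},
    "lot_c": {"2014Q4": 95,  "2015Q4": 112},
}

def yoy_index(counts, prev, curr):
    """Chain-level year-over-year occupancy change, averaged across lots."""
    changes = [(c[curr] - c[prev]) / c[prev] for c in counts.values()]
    return sum(changes) / len(changes)

print(f"{yoy_index(counts, '2014Q4', '2015Q4'):+.1%}")  # prints +9.2% for these numbers
```

Averaging each lot against its own history also sidesteps differences between lots (size, camera angle, region), which is part of why the national-level signal is more reliable than any single-store reading.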
He admits that many retailers already have ways of tracking this data for their own stores, but they would be happy to know how their competitors are doing months before financial results are released. The same is true of hedge funds, which Crawford says are among the company’s earliest customers. It’s easy to see how this kind of data could give investors a leg up. And because the satellite imagery is already available, with Orbital Insight simply parsing it, it’s unlikely to spark insider trading concerns.
If the network makes an occasional mistake, say confusing a dumpster for a car, it’s not much of a problem, Crawford explains, because the mistakes tend to cancel each other out on a large scale. For things like oil estimates, even if they’re off by several percentage points, it’s still better than waiting up to six weeks for more concrete data.
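The cancellation Crawford describes is easy to demonstrate: if individual counting errors are unbiased, averaging across thousands of lots shrinks the aggregate error dramatically. A quick simulation (all numbers invented):

```python
import random

random.seed(42)

TRUE_CARS_PER_LOT = 100
N_LOTS = 5000

# Each lot count is off by a random, unbiased amount (a dumpster mistaken
# for a car here, a car hidden in shadow there).
noisy = [TRUE_CARS_PER_LOT + random.randint(-10, 10) for _ in range(N_LOTS)]

per_lot_error = max(abs(n - TRUE_CARS_PER_LOT) for n in noisy) / TRUE_CARS_PER_LOT
aggregate_error = abs(sum(noisy) / N_LOTS - TRUE_CARS_PER_LOT) / TRUE_CARS_PER_LOT

print(f"worst single-lot error: {per_lot_error:.1%}")   # up to 10%
print(f"aggregate error:        {aggregate_error:.2%}")  # a small fraction of that
```

The catch, of course, is the word "unbiased": a systematic error (every dumpster counted as a car) would not cancel, which is why ground-truth labeling still matters.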
While the startup seems focused on providing data to market investors first, what the company does could be put to more altruistic uses as well. “We’re curious in the future about using this to detect deforestation, and to detect things like road building that could be a precursor to deforestation,” says Crawford. “There’s also really interesting things that can be done around looking at snow pack, water and other aspects for climate change.” He also says they’re looking into third-world agriculture, and says multi-spectral imagery is a good way to tell how healthy plants are, to predict crop failures.
Of course, any use of big data that incorporates satellite imagery raises privacy issues. But Orbital Insight isn’t taking the photos; it’s accessing and analyzing images that are already available. And as Crawford points out, current U.S. regulations for commercial imaging satellites stipulate that resolution can’t go below 20 cm per pixel. At that resolution, the average person would show up as a few dots, so it would be tough to distinguish individual people at all, let alone a person’s identity or even gender.
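A back-of-the-envelope calculation supports that "few dots" claim; the top-down body footprint below is a rough assumption, not a figure from the article:

```python
resolution_cm = 20       # regulatory floor cited in the article (cm per pixel)
footprint_cm = (45, 25)  # assumed top-down footprint of a standing adult

# Number of pixels a person would occupy in a nadir (straight-down) image
pixels = (footprint_cm[0] / resolution_cm) * (footprint_cm[1] / resolution_cm)
print(f"{pixels:.1f} pixels")  # roughly 2-3 pixels: a few dots, as described
```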
Crawford says that much of the short-term advances in deep learning techniques in general will involve simplifying and automating the tweaks to the algorithms (meaning less manually tagging cars or corn fields), so that companies can more quickly apply machine learning to new areas.
As for the future of Orbital Insight specifically, the company's founder definitely isn’t talking small. He likens what the company is doing to creating a “macroscope” that could impact the world to a similar degree that the microscope transformed biology.
“A lot of what we’re seeing about the Earth, whether it’s corn yield or deforestation, or oil inventory, are so big that you can’t see them with the human eye because you’d have to process a million images at once,” says Crawford. “It will ultimately change the way we view the Earth, change the way we think about it, and change the way we think about managing it.”
For the Connecticut-based dance troupe Pilobolus, innovation means never repeating itself.
“It’s not about throwing away what we’ve learned, but really taking what we’ve learned and twisting it, and saying and doing new things with it,” says the troupe’s co-associate artistic director Renee Jaworski. Some dance companies get stuck in the past: once a company masters one thing, audiences come to expect it to repeat that thing ad infinitum. There is also great financial incentive to be known for one particular act or approach.
“We’ve got to fight against that, because the world is going to go nowhere if everyone relies on the tried and true,” she says.
Pilobolus was founded by Dartmouth College students in 1971 and named for a fungus associated with grazing animals that the father of one of the founders was researching. Pilobolus, which has performed in nearly 65 countries, also has a reputation for being widely collaborative—with Penn & Teller, Art Spiegelman, and Maurice Sendak, for example. It has performed at the Academy Awards (2007) and on shows ranging from "Oprah" and "Late Night with Conan O’Brien" to "60 Minutes," and its contortionist acts appear in commercials, as when silhouetted dancers formed a car for a Hyundai ad.
One Friday evening in May, the troupe collaborated with video portrait artist Bo Gehring at an event at the Smithsonian’s Kogod Courtyard, a magnificent central plaza and salon inside the 19th-century building that houses both the National Portrait Gallery and the Smithsonian American Art Museum in Washington, D.C. The task for the dance group and for Gehring that night was to challenge visitors to explore and expand the traditional boundaries of portraiture.
Many assume that the tradition of portraiture requires “old white men with the wigs on,” says Kim Sajet, who has directed the National Portrait Gallery since 2013, and was delighted to kick off the Smithsonian Institution’s three-part summer series “America Now,” designed to probe the many intersections between art and innovation.
The work of portraitist Bo Gehring—who won the museum’s 2013 Outwin Boochever Portrait Competition—is anything but traditional. His video portraits achieve a rare, intimate vantage point with their close-up, scrolling views of his sitters, such as his portrait of jazz singer Esperanza Spalding, currently on view in the Portrait Gallery’s exhibit “Eye Pop: The Celebrity Gaze.”
Even Jaworski’s co-associate artistic director Matt Kent had trouble with the concept. New themes for portraiture take some getting used to. Without mincing words, Kent initially proclaimed the idea of a camera slowly panning across a reclining model “terrible” and “boring.”
But when Kent first saw the simplicity of Gehring’s work last fall, he changed his mind completely. At the May event, Gehring took video portraits of audience members, and Kent and Jaworski’s Pilobolus group led dance workshops and performed on a makeshift stage.
“People have a space around them that depending on your culture, you don’t get to. But [Gehring’s work] sneaks past that. It goes inside the bubble,” Kent says. “You don’t usually have that sensation of someone that you don’t know, who is not a lover, family or a Pilobolus dancer.”
Pilobolus allows its dancers to express their individuality where other troupes often stress homogeneity, and it emphasizes both innovation and intimacy, Jaworski says. “None of them look the same. We let them be themselves up there,” she adds. “It is a portrait of what’s going on, because each individual has added to the process in a way that only they can.”
Pilobolus performs "On the Nature of Things" at “America Now: Pilobolus and Portraiture,” the first of a three-museum collaboration with the Smithsonian's National Portrait Gallery, National Museum of American History, and Smithsonian American Art Museum sponsored by the Robert and Arlene Kogod Family Foundation at the National Portrait Gallery on Friday, May 22, 2015 in Washington, D.C. (Paul Morigi/AP Images for National Portrait Gallery)
In the performance at the Kogod Courtyard, Pilobolus dancers impersonated robots, mimicked swimmers and toyed with their reflections in mirrors and video projections, inviting comparisons ranging from Cirque du Soleil’s psychedelic arrangements to the kaleidoscopic scene transitions of television’s "That '70s Show."
In between dances, Gehring made portraits of Pilobolus dancers projected on two screens on either side of the stage creating what Jaworski calls a “live program.” (There were no paper programs distributed at the event.) In the videos, dancers held placards identifying each act: “All Is Not Lost,” “On the Nature of Things,” “Automation.” The videos, though placeholders, underscore the vulnerability of the dancer’s close-up.
“You can think of a dance piece as a portrait of whatever is going on in the studio at the time we are making the piece,” says Jaworski. “They put themselves in intimate positions with each other, but we are also inviting our audience to get to know the people on stage in a way that is very intimate.”
That intimacy, at the Portrait Gallery, was somewhat offset by a “New England fair” feeling that the gallery wanted, where visitors could have their portraits taken by Gehring in one corner, find drinks and food in another, and visit the stage at the other end of the large room. That’s different from the captive audiences Pilobolus typically performs for.
“We sort of trap our audiences and turn out the lights. We bring them into this world,” Jaworski says, herself experiencing a new paradigm. “This guy could get up in the middle of the piece and look at some portraits and come back and get back into it. That’s new. … You’re not trapped in a theater. You can view this as a museum piece.”
That flexibility, and the lack of a cover charge, drew Heather Whyte and her 9-year-old daughter Cassidy, who summed up the evening: “It was funny, weird and artistic.”
Even if pushing boundaries does get weird, the works remain respectfully situated in historical context, says museum staffer Bethany Bentley.
The museum, adds Sajet, has long honored ingenious thinkers, especially those who “made things happen,” from George Washington to Rosa Parks. “Innovation really comes down to human thought; it’s people thinking outside the box,” she says.
“No one is throwing out old portraits,” says Bentley. “What we are trying to help people see is that, yes, there is very representational portraiture, and that’s wonderful, and what much of our tradition is based on. [But] what we also want to think about is what does portraiture mean.”
Kent sees both promise and potential pitfalls in that kind of expansive thinking. “I don’t know when I’m having a really great, innovative idea or a really stupid idea,” he says. “They both feel the same. You have to just do it.”
For most of the NASA robots on and around Mars, March 8, 2015 was just another Sunday. As the red planet continued its slow march around the sun, a burst of solar material buffeted the atmosphere. No big deal—such changes in solar weather are pretty common.
But for one orbiting probe, March 8 was a day of Martian history in the making.
NASA’s Mars Atmosphere and Volatile Evolution (MAVEN) mission was watching closely as the solar outburst stripped away some of the planet's already thin atmosphere. Its observations back up scientists' suspicions that solar activity is a major player in shaping Mars's atmosphere, a finding that is even more exciting when viewed with an extremely patient eye.
That’s because the young sun is thought to have been much more active billions of years ago, spewing out solar storms more often and with more intensity than it does now. Given this new understanding of how the sun affects Mars, it seems likely that a stormy adolescent sun could be the reason Mars went from warm and wet to the chilly, barren world we see today.
During the March solar storm, MAVEN saw how charged particles in the red planet’s atmosphere got sucked up and swirled away. Planetary ions spewed out into space, bound into tendril-like magnetic “flux ropes” over 3,000 miles in length. Material from the atmosphere escaped at much higher speeds than normal during this event.
The solar outburst dramatically altered the red planet's weak magnetic environment and affected its upper atmosphere as well. Given the magnitude of the sun's impact on Mars, it seems likely that such flares have been a significant—even dominant—contributor to climate change on the red planet.
On Earth, life thrives in part because it is kept warm and cozy under a relatively dense blanket of atmosphere containing a mix of heat-trapping gases. Mars' modern atmosphere mostly contains carbon dioxide, a potent greenhouse gas, but it is substantially thinner, leaving the surface too cold to support large bodies of water, thought to be a key ingredient for life.
Considering the flood of evidence for liquid water on ancient Mars, astronomers suspect that the planet must have had a thicker atmosphere at some point in the past. The key question is whether the time frame for this warm, wet period, as defined by data from surface experiments, matches the time frame for a friendlier atmosphere.
In addition, scientists need to know whether an atmosphere that could sustain the right proportion of light, temperature and water was stable long enough for life to take hold, says David Brain, a co-investigator on the MAVEN team.
It’s most likely that the bulk of the planet’s atmospheric loss took place in the first billion or billion and a half years of its existence, Brain says. The new MAVEN data should help scientists figure out variations in the atmospheric escape rate and how that might have changed over time. Then they can work backwards and better pinpoint the timeframe for when Mars had a thicker atmosphere.
Image by NASA/JPL-Caltech/MSSS. NASA's Mars rover Curiosity took a selfie at one of its drilling sites inside Gale Crater, presented here as a "little planet" projection that shows the horizon as a circle. (original image)
Image by NASA/JPL-Caltech/MSSS. Rock strata in the foreground of this image from the Mars rover Curiosity dip toward the base of Mount Sharp, an 18,000-foot-high mountain inside Gale Crater. The strata indicate the flow of liquid water toward a basin—evidence that the crater once hosted a large lake. (original image)
Image by NASA/JPL-Caltech/University of Arizona/Texas A&M University. NASA's Phoenix mission landed near the north polar cap in 2008. These two images show a trench the lander dug in June of that year that exposed lumps of subsurface ice, visible in the shadowy bottom-left corner in the shot to the left. The ice sublimated when exposed to air and had totally vanished four days later. (original image)
Image by NASA/JPL-Caltech/Cornell/USGS. The Mars Exploration Rover Opportunity snapped this image of iron-rich mineral concretions nicknamed blueberries in Fram Crater. The spherules provided early evidence that water may have flowed on ancient Mars, as scientists think they are mineral deposits that formed as water trickled through rocks. (original image)
Image by NASA/JPL-Caltech/Univ. of Arizona. Dark, narrow streaks flow downhill on the walls of Horowitz Crater in this image from the Mars Reconnaissance Orbiter. These streaks are most likely caused by seasonal flows of cold, salty water on modern-day Mars. (original image)
Image by NASA/JPL-Caltech/Univ. of Arizona. Carbon dioxide frost decorates feather-like gullies in Mars' northern plains in this shot from the Mars Reconnaissance Orbiter. (original image)
Image by NASA/Univ. of Colorado. A graphic based on data from MAVEN shows what Mars' atmosphere would have looked like in ultraviolet during an October 2014 close encounter with comet C/2013 A1 Siding Spring. The comet sparked a meteor shower on Mars that ionized magnesium in the atmosphere. (original image)
Image by NASA/JPL-Caltech/Univ. of Arizona. The Mars Reconnaissance Orbiter snapped this image of sedimentary rock layers and windblown sand in Valles Marineris. (original image)
A better understanding of Mars’ atmosphere could lead to revelations about Earth and other planets, too.
“What’s exciting to me is the idea of Mars as a laboratory,” says Brain. “Once our models are really trustworthy, we can apply them in new situations.”
For instance, such improved models could lead to new insights about Venus, which has a similarly weak magnetic field. They could also offer clues to how Earth interacts with the sun during flips in its magnetic field. And instead of only looking at how the sun affects Mars, scientists plan to ask what their observations in turn reveal about the sun.
Discoveries about the March solar storm are just the tip of the iceberg—the study is being released along with three other results about Mars' atmosphere in Science and 44 additional papers in Geophysical Research Letters.
One study investigated the newly discovered Northern Lights-style aurora on the red planet—a diffuse phenomenon that appears to be driven by the scant magnetic field near the planet’s crust. Another paper shows results from MAVEN's flirtation with the upper atmosphere of Mars, which yielded data that's helping scientists understand the physics that keep particles inside the atmosphere.
A fourth study analyzes dust at various altitudes, suggesting that dust particles trapped high in the Martian atmosphere are actually interplanetary in origin.
And the discoveries could keep on coming: the MAVEN mission has been extended through September 2016, and scientists still have plenty more data from the initial observing campaign to analyze. For Brain and his colleagues, the information they’re seeing is nothing short of thrilling.
“Each individual data set is among the best or the best I’ve ever seen for any planet,” says Brain, who is regularly told by Earth scientists that they wish they had similar observations for our own planet.
And even with the massive amount of information released this week, the data suggest that there are plenty more Martian mysteries to solve, says Bruce Jakosky, MAVEN’s principal investigator. “This is a recognition that the Mars environment is a very complex one,” he says. “We think there’s an awful lot still to learn.”
University of Houston engineer Jose Contreras-Vidal does some futuristic, stranger-than-science-fiction research. He’s developed a “brain-machine interface” to interpret brain signals and turn them into movement. With this interface, he has created a bionic hand and a computer avatar that are controlled by the user's mind.
But the centerpiece of his work is a thought-controlled exoskeleton to help paralyzed people walk. For the past several years, Contreras-Vidal has been working with the REX lower body exoskeleton, developed by New Zealand-based REX Bionics. The exoskeleton is made to be controlled with a joystick. But Contreras-Vidal and his team have retrofitted a version to be used with their brain-machine interface. The user of the exoskeleton wears an electrode cap, with sensors on the scalp that read electrical activity in the brain. An algorithm developed by Contreras-Vidal and his team “interprets” the brain information and translates it into movement of the exoskeleton. In other words, the wearer thinks “move, left knee,” and the algorithm turns it into action. The system can respond relatively quickly because the brain's signals precede the movement itself: even in non-injured people, it takes a split second for information to travel from the brain to the body.
“Any time we plan a movement, the information is there before we’re actually seeing the movement,” Contreras-Vidal says.
A number of researchers over the years have helped paralyzed people move using electrodes implanted in their brains. Contreras-Vidal’s patent-pending system is different because it is noninvasive—users take the electrode cap on and off at will. This is particularly useful in the case of patients who will only need the exoskeleton temporarily, such as stroke victims who might use the exoskeleton to regain walking ability, then learn to walk unaided. (A Brazilian-led team developed a noninvasive brain-controlled exoskeleton to allow a paraplegic to kick off the 2014 World Cup; the suit, however, didn't allow the user to walk unaided).
The thought-controlled exoskeleton is the result of years of work on decoding the language of the brain. At the University of Houston, Contreras-Vidal directs the Laboratory for Non-invasive Brain-Machine Interface Systems, which employs a team of engineers, neuroscientists, doctors, computer experts and even artists. Before Houston, he directed the Laboratory of Neural Engineering and Smart Prosthetics at the University of Maryland, where he worked on developing brain-controlled prosthetics for amputees. The algorithms used to translate thoughts into movement are constantly being improved, Contreras-Vidal says, in what he describes as a "creative process."
Right now, his lab is working on several projects using the brain-machine interface. One project looks at neuro-motor development in children using the electrode cap; the team hopes a better understanding of this process may eventually help children with neurological developmental disorders, such as autism. Another seeks to understand what happens in the brain when people experience art, fitting museum-goers with an electrode cap as they look at an art installation.
The brain-controlled exoskeleton is currently undergoing trials. It's already been used in a number of real-life scenarios; a British quadriplegic man recently walked using the exoskeleton at a conference in Italy.
Contreras-Vidal and several of his students will be demonstrating the exoskeleton at the Smithsonian’s upcoming Innovation Festival. The festival, a collaboration between the Smithsonian Institution and the U.S. Patent and Trademark Office, is happening September 26 and 27 at the National Museum of American History.
“We’re very excited about going to the Smithsonian, because I think scientists need to talk to the public, particularly to children,” Contreras-Vidal says. “They need to be exposed to this type of technology to see it’s really about creating and innovating.”
As if the robot-like exoskeleton were not impressive enough for kids and other festival-goers, Contreras-Vidal and his team will allow visitors to view their own brainwaves on a screen by donning electrode caps. Contreras-Vidal describes tuning into a person’s brainwaves as “listening to the neurosymphony.”
“I like to see the brain as a symphony, where all the major areas are part of the ensemble and each player in this ensemble is responsible for some aspect of their behavior,” he says. “To play this music they need to coordinate.”
The overlap of art and science is an important part of Contreras-Vidal's work. In the past, he's wired up artists to peer into their brains' creative processes. More recently he's been working with dancers. In a project called Your Brain on Dance, he's fitted dancers with electrode caps and displayed the resulting brain waves on a screen as they perform. He believes that ultimately this kind of inquiry into the neural basis of movement could lead to a new understanding of Parkinson's and other brain diseases.
At the Innovation Festival, visitors will be treated to such a dance performance.
“Scientists can learn a lot from art and vice versa,” Contreras-Vidal says. “I’m hoping that this will capture the imagination of people, children especially.”
The Innovation Festival will be held at the National Museum of American History on September 26 and 27, between 10 a.m. and 5 p.m. The event, organized by the Smithsonian Institution and the U.S. Patent and Trademark Office, will feature examples of American ingenuity developed by independent inventors, academic institutions, corporations and government agencies.
At the heart of Paris, in a former monastery dating back to the Middle Ages, lives an unusual institution full of surprises whose name in French—le Musée des Arts et Métiers—defies translation.
The English version, the Museum of Arts and Crafts, hardly does justice to a rich, eclectic and often beautiful collection of tools, instruments and machines that documents the extraordinary spirit of human inventiveness over five centuries—from an intricate Renaissance astrolabe (an early astronomical computer), to Europe’s earliest cyclotron, made in 1937, to Blaise Pascal’s 17th-century adding machine and Louis Blériot’s airplane, the first ever to cross the English Channel (in 1909).
Many describe the musée, which was founded in 1794, during the French Revolution, as the world’s first museum of science and technology. But that doesn’t capture the spirit either of the original Conservatoire des Arts et Métiers, created to offer scientists, inventors and craftsmen a technical education as well as access to the works of their peers.
Its founder, the Abbé Henri Grégoire, then president of the revolution’s governing National Convention, characterized its purpose as enlightening “ignorance that does not know, and poverty which does not have the means to know.” In the infectious spirit of égalité and fraternité, he dedicated the conservatoire to the “artisan who has seen only his own workshop.”
In 1800, the conservatoire moved into the former Saint-Martin-des-Champs, a church and Benedictine monastery that had been “donated” to the newly founded republic not long before its last three monks lost their heads to the guillotine. Intriguing traces of its past life still lie in plain view: fragments of a 15th-century fresco on a church wall and rail tracks used to roll out machines in the 19th century.
What began as a repository for existing collections, nationalized in the name of the republic, has expanded to 80,000 objects, plus 20,000 drawings, and morphed into a cross between the early cabinets de curiosités (without their fascination for Nature’s perversities) and a more modern tribute to human ingenuity.
“It is a museum with a collection that has evolved over time, with acquisitions and donations that reflected the tastes and technical priorities of each era,” explained Alain Mercier, the museum’s resident historian. He said the focus shifted from science in the 18th century to other disciplines in the 19th: agriculture, then industrial arts, then decorative arts. “It was not rigorously logical,” he added.
Mostly French but not exclusively, the approximately 3,000 objects now on view are divided into seven sections, beginning with scientific instruments and materials, and then on to mechanics, communications, construction, transport, and energy. There are displays of manufacturing techniques (machines that make wheels, set type, thread needles, and drill vertical bores) and then exhibits of the products of those techniques: finely etched glassware, elaborately decorated porcelains, cigar cases made of chased aluminum, all objects that could easily claim a place in a decorative arts museum.
The surprising juxtaposition of artful design and technical innovation pops up throughout the museum’s high-ceilinged galleries—from the ornate, ingenious machines of 18th-century master watchmakers and a fanciful 18th-century file-notching machine, shaped to look like a flying boat, to the solid metal creations of the industrial revolution and the elegantly simple form of a late 19th-century chainless bicycle.
Few other museums, here or abroad, so gracefully celebrate both the beautiful and the functional—as well as the very French combination of the two. This emphasis on aesthetics, particularly evident in the early collections, comes from the aristocratic and royal patrons of pre-revolution France who placed great stock in the beauty of their newly invented acquisitions. During this era, said Mercier, “people wanted to possess machines that surprised both the mind and the eye.”
Image by © SONNET Sylvain/Hemis/Corbis. Clement Ader's steam-powered airplane, the Ader Avion No. 3, hangs from the ceiling of the Arts et Métiers museum. (original image)
Image by © SONNET Sylvain/Hemis/Corbis. Peering into the museum's mechanical room (original image)
Image by © SONNET Sylvain/Hemis/Corbis. The communication room (original image)
Image by © SONNET Sylvain/Hemis/Corbis. View of the airplanes and automobiles hall (original image)
Image by © SONNET Sylvain/Hemis/Corbis. The museum collection includes the original model of the Statue of Liberty by Frédéric Auguste Bartholdi. (original image)
Image by © Christophe LEHENAFF/Photononstop/Corbis. A student draws in a room filled with scientific instruments. (original image)
Image by © Christophe LEHENAFF/Photononstop/Corbis. (original image)
From this period come such splendid objects as chronometers built by the royal clockmaker Ferdinand Berthoud; timepieces by the Swiss watchmaker Abraham-Louis Breguet; a finely crafted microscope from the Duc de Chaulnes’s collection; a pneumatic machine by the Abbé Jean-Antoine Nollet, a great 18th-century popularizer of science; and a marvelous aeolipile, or bladeless radial steam turbine, which belonged to the cabinet of Jacques Alexandre César Charles, the French scientist and inventor who launched the first hydrogen-filled balloon, in 1783.
Christine Blondel, a researcher in the history of technology at the National Center of Scientific Research, noted that even before the revolution, new scientific inventions appeared on display at fairs or in theaters. “The sciences were really part of the culture of the period,” she said. “They were attractions, part of the spectacle.”
This explains some of the collection’s more unusual pieces, such as the set of mechanical toys, including a miniature, elaborately dressed doll strumming Marie Antoinette’s favorite music on a dulcimer; or the famous courtesan Madame de Pompadour’s “moving picture” from 1759, in which tiny figures perform tasks, all powered by equally small bellows working behind a painted landscape.
Mercier, a dapper 61-year-old who knows the collection by heart and greets its guards by name, particularly enjoys pointing out objects that exist solely to prove their creator’s prowess, such as the delicately turned spheres-within-spheres, crafted out of ivory and wood, which inhabit their own glass case in the mechanics section. Asked what purpose these eccentric objects served, Mercier smiles. “Just pleasure,” he responds.
A threshold moment occurred in the decades leading up to the revolution, notes Mercier, when French machines began to shed embellishment and become purely functional. A prime example, he says, is a radically new lathe—a starkly handsome metal rectangle—invented by engineer Jacques Vaucanson in 1751 to give silk a moiré effect. That same year Denis Diderot and Jean-Baptiste le Rond d’Alembert first published their Encyclopedia, a key factor in the Enlightenment, which among many other things celebrated the “nobility of the mechanical arts.” The French Revolution further accelerated the movement toward utility by standardizing metric weights and measures, many examples of which are found in the museum.
When the industrial revolution set in, France began to lose its leading position in mechanical innovation, as British and American entrepreneurial spirit fueled advances. The museum honors these foreign contributions too, with a French model of James Watt’s double-acting steam engine, a 1929 model of the American Isaac Merritt Singer’s sewing machine (a design that had fascinated visitors to London’s Great Exhibition in 1851) and an Alexander Graham Bell telephone.
Even so, France continued to hold its own in the march of industrial progress, contributing inventions such as Hippolyte Auguste Marinoni’s rotary printing press, an 1886 machine studded with metal wheels; the Lumière brothers’ groundbreaking cinematograph of 1895; and, in aviation, Clément Ader’s giant, batlike airplane.
Although the museum contains models of the European Space Agency’s Ariane 5 rocket and a French nuclear power station, the collection thins out after World War II, with most of France’s 20th-century science and technology material on display at Paris’s Cité des Sciences et de l’Industrie.
Few sights can top the Arts et Métiers’ main exhibit hall, located in the former church: Léon Foucault’s pendulum swings from a high point in the choir, while metal scaffolding built along one side of the nave offers visitors an intriguing multistoried view of the world’s earliest automobiles. Hanging dramatically in midair are two airplanes that staked out France’s leading role in early aviation.
For all its unexpected attractions, the Musée des Arts et Métiers remains largely overlooked, receiving not quite 300,000 visitors in 2013, a fraction of the attendance at other Paris museums. That, perhaps, is one of its charms.
Parisians know it largely because of popular temporary exhibits, such as “And Man Created the Robot,” which ran in 2012-13. These shows have helped boost attendance by more than 40 percent since 2008. But the museum’s best advertisement may be the stop on Métro Line 11 that bears its name. Its walls feature sheets of copper riveted together to resemble the Nautilus submarine in Jules Verne’s Twenty Thousand Leagues Under the Sea, complete with portholes.
For anyone looking for an unusual Paris experience, the station—and the museum on its doorstep—is a good place to start.
Six Exhibits Not to Miss
Master woodworker, furniture maker, and artist Wendell Castle died on January 20, at the age of 85. During his long and distinguished career, he helped define and redefine craft furniture in America.
I first came to Castle's work through what has become one of the museum's iconic pieces. Countless visitors to SAAM's Renwick Gallery have been puzzled, fooled, and delighted by his Ghost Clock. What appears to be a grandfather clock draped in white cloth is actually a trompe l'oeil work of art constructed from laminated and bleached Honduras mahogany.
In her catalogue, Craft for a Modern World, Nora Atkinson, the Lloyd Herman Curator of Craft, recounted a story that was a favorite of Michael W. Monroe, a former director of the Renwick. In 1990, an irate woman demanded to see Monroe, and was summoned by a volunteer on duty. Monroe met her in front of the recently acquired Ghost Clock. As Atkinson describes, "[She] explained to [Monroe] that this was her ninth visit to the museum since the clock had arrived, and she had come a long way to see it, yet here it was, still unceremoniously covered by a sheet, cinched neatly around the middle with a piece of twine, as it had been the last time, and the time before, and the time before that. It was disgraceful that it was still covered, standing in the galleries as it was. She just had to see it before she left, she explained."
After calming the woman, Monroe dashed to his office then returned with two pairs of white cotton gloves. He gave one pair to the woman, and then invited her to gently lift the "cloth" with her gloved hand. At last, she thought, the clock would be revealed. When she realized that the cloth was wood and the clock she was expecting underneath did not exist, she returned the gloves to Monroe, and proceeded to make her way down the Renwick's grand staircase until she reached the exit.
I have to tell you, she's not alone. A friend of mine just visited with her father and she had to stop him from reaching out and touching the "cloth." If you stand in the gallery near Ghost Clock you will see the repeated faces of astonishment and amazement. And, as Atkinson adds, "...It is a gentle reminder to look closer, that things are not always as they seem, for the visual language does not always so neatly reveal its intentions."
Castle invented his own visual language. In reference to the beginning of his career he said, "...The only way I could make what I wanted to make was to make it myself." In 2016, he participated in the museum's Maloof Symposium on "Furniture and the Future" and spoke about his embrace of evolving digital technologies and his acquisition of a robot named Mr. Chips to help assemble the larger pieces. His inventive work spans the second half of the twentieth century and the first part of the twenty-first.
Ghost Clock is a meditation on time that has, in its own way, become timeless.
In addition to Ghost Clock on view at the Renwick through February 19, Castle's Music Stand can be seen at SAAM's Luce Foundation Center.
Related Blog Post
Knock Wood: The Future of Furniture is In Their Hands
Sam Maloof is America's best known contemporary furniture craftsman. While he is self-taught as a woodworker, both his name and his work are instantly recognizable. No other twentieth-century studio furniture maker has received as many awards for design and craftsmanship.
Paula Johnson, curator and project director of the exhibition FOOD: Transforming the American Table, 1950-2000, recalls a memorable visit with Chuck Williams.
Chuck Williams, the founder of Williams-Sonoma—the kitchenware emporium that, beginning in 1956, introduced Americans to distinctive tools and cookware from different parts of the world—died on December 5. Upon hearing the news, we thought back to a sunny December day in 2011, when we took a field trip to Mr. Williams's San Francisco offices on behalf of the exhibition project, FOOD: Transforming the American Table, 1950-2000.
Curator Rayna Green, Associate Director Maggie Webster, and I were visiting several California donors to the exhibition and our first stop was at Williams-Sonoma headquarters. At 96, Mr. Williams was still coming to work regularly, and, dapper in coat and tie, he welcomed us warmly into his office overlooking the San Francisco Bay. The room itself was rather like a Williams-Sonoma store—open wooden shelves held an array of objects, artfully placed, that subtly beckoned to us, urging us to come a bit closer: a brilliant red KitchenAid stand mixer, a white ceramic creamer shaped like a playful cow, a painted water pitcher in the form of a chicken, and ceramic tea pots and decorated bowls arranged just so. We realized that this was the same design aesthetic that set Mr. Williams's stores apart from other purveyors of kitchen equipment—at the time, mostly hardware stores, where stacks of pots, pans, and tools were the norm. When we settled in for a conversation, he remarked on how his sense of design informed the look and layout of his stores from the very start: "That was one of the things I did right at the beginning. . . Not just putting the pots on the shelf without thinking about it. Putting it so the handle was partly out in front of the shelf and it welcomed the customer to pick it up to look at it."
Much of the conversation that day had to do with Mr. Williams's role in what we were calling the "good food movement" in the exhibition.
With roots in northern California, the movement was largely a reaction against the fast, processed, and packaged foods that had become so popular in households across America in the 1950s and 1960s. The California devotees of fresh, local, and organic foods were also interested in trying new cuisines and learning to cook beyond just the basics. While Julia Child guided these intrepid home cooks through unfamiliar techniques and recipes, Chuck Williams supplied them with previously unavailable cookware from France and Italy to help them achieve results. When asked about particular items, he said, "I think the most popular one was the soufflé dish. Just a plain, white soufflé dish. There wasn’t anything like that available in this country." We decided then and there to include one of Julia Child's white soufflé dishes made by one of Mr. Williams's favorite sources, the French company Pillivuyt, in the exhibition.
During our visit, Mr. Williams also talked about the early 1970s and the debut of the Cuisinart food processor, and what a difference it made to home cooks. He recalled offering the Cuisinart for sale almost immediately in his stores and how he, too, began using one in his own kitchen. As we talked about Julia's early adoption of the food processor, Mr. Williams offered to donate his first Cuisinart for the museum's collections and for the exhibition.
We note Chuck Williams's passing with sadness, but also with gratitude for his generosity to the museum. By sharing his memories of the "good food" movement, he helped us shape a section of the exhibition and provided insight into the types of objects that would most accurately represent that important story in American culinary history.
Paula Johnson is a curator in the Division of Work and Industry. She has also blogged about cooking with Julia Child in Washington, D.C.