The age of naval battles between huge ships on the high seas seems to have passed into distant memory. Instead, some of the most devastating attacks on giant vessels in recent years have been executed by boats small enough to get through the larger ships’ defenses.
But now, governments around the world are working on technology designed to stop these attacks. In the U.K., researchers are working on a remote monitoring system—called the MATRiX system—that resituates the traditional responsibilities of a lookout to land-bound control rooms. The system has a connected network of anti-pirate deterrents attached to the outside of the ship. If a threat is detected, the deterrent system releases two relatively simple tools—nets that will catch in the propellers of attacking boats and a fog of capsaicin, the active ingredient in pepper spray (and bear repellent).
While merchant vessels have problems with pirates, military vessels face a different set of challenges, including terrorist attacks like the one on the USS Cole fourteen years ago. In that attack, a small boat filled with explosives managed to get close to the Cole and blow a hole in the side of the ship.
In order to deal with the threat of small boats without putting sailors in harm’s way, the Navy has developed a system that can convert any boat into a fully automated ship, capable of confronting an enemy vessel without risking the lives of military personnel. The automated boats can work in tandem, swarming a target vessel, earning the system the moniker "swarmboats." The swarmboat system was tested in August on the James River.
The software that directs the vessels is called CARACaS (Control Architecture for Robotic Agent Command and Sensing), and was originally developed by NASA for Mars rover missions. But as advanced as the swarmboat system is, humans are still involved, as Wired reports:
The ships in August’s test didn’t open fire, but the Navy is getting there, though it says robots will not decide when or whom to attack. “If there is any kind of designation, any kind of targeting,” says Rear Adm. Matthew Klunder, Chief of Naval Research, “there is always a human in the loop.” If a boat loses communication with its human captain, who may be halfway around the world, it goes dead in the water.
When her high school in Grosse Ile, Michigan, started a co-ed robotics team dubbed The Wired Devils, Maya Pandya thought she’d give it a try. The 17-year-old already excelled in math and science, and had considered going into engineering as a career. But while the team was part of a larger initiative meant to “inspire young people’s interest and participation in science and technology,” her first interactions with other team members left her frustrated.
“When I first walked in, the guys on the team acted like I didn’t really want to do engineering,” says Maya, who will be a senior next year. “It felt like they assumed things automatically. Once I pushed people out of that mindset, they accepted me and started listening to my ideas.”
It wasn’t until the last few weeks of the team’s 6-week build session, when students came together to construct a robot for an upcoming competition, that things seemed to click. Maya recalls working on her team’s robot one day, and realized that hours had passed. “I was enjoying it so much that time just flew by,” she says. “It was that moment that I realized I could actually go into robotics.”
Maya is part of a growing number of girls who are trying out robotics—through school clubs or regional organizations, and in co-ed or all-girls teams—and finding out that they have a knack for it. FIRST (For Inspiration & Recognition of Science & Technology), the nonprofit that helped spark the girls-in-robotics movement and is behind The Wired Devils, now boasts more than 3,100 teams nationwide and over 78,000 student participants.
Robotics advocates say these programs provide a way for school-age girls to get exposure to the field while also discovering their passion for STEM-based careers—a priority that’s been on the national agenda for the past several years, in part thanks to President Obama’s push for increased participation by women and minorities in STEM careers.
“There's a push overall for kids to be into robotics because, from a talent pool standpoint, the U.S. isn't putting out enough people to stay ahead in math, science, or any of the STEM fields,” says Jenny Young, founder of the Brooklyn Robot Foundry, a robot-based after-school program that strives “to empower kids through building.” “Girls are half the population, and there really isn't any reason why girls shouldn't see how fun and exciting and rewarding engineering can be.”
Others say the rise of girls in robotics reflects a natural transition as the gender divide begins to narrow. “I have seen a shift in society over the last year of basically ‘girl power’ and the removal of gender barriers,” says Sarah Brooks, program manager for the National Robotics League, a student robot-building program run by the National Tooling & Machining Association. “It has allowed more girls to feel confident in these types of roles—and it has allowed the boys to be confident that the girls are there.”
Keena, shown shaking hands with Michigan Gov. Rick Snyder at a 2016 state robotics competition, and her sister Maya, to her right. (Keena Pandya)
Of course, robotics isn’t just about STEM training. It's also a lot of fun. “Robotics are amazing,” says Maya’s younger sister Keena, 15, who has also been bitten by the robotics bug. “At first I only joined the club because my sister was involved. But once I got into it and I started seeing the design process, the build process, the programming and how everything came together, I found out that this is a field that I could possibly go into.”
Arushi Bandi, an incoming senior at Pine-Richland High School, says that robotics programs helped her get key mentorship from other girls. Bandi, who is 16, is a member of Girls of Steel, a girls-only high school robotics team run by Carnegie Mellon University. Thanks to advice from older team members, Bandi realized that she was interested in majoring in computer science—a marriage of subjects and interests she was already drawn to—when she attends college. Before, she hadn’t even known the field existed.
Yet while the raw numbers of girls (and boys) participating in robotics is increasing, a gaping gender disparity is still apparent. In Michigan there has been an “uptick” in female robotics participation, but the percentages are less than inspiring. During the 2012-2013 school year, 528 of the 3,851 students enrolled in these programs were female (14 percent), while in 2014-2015, 812 out of 5,361 were female (15 percent), according to statistics compiled by the Michigan Department of Education.
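The Michigan percentages are simple ratios of the enrollment figures quoted above. A quick sketch in Python (the function name is ours, not the state's) confirms the rounding:

```python
def female_share(female, total):
    """Return female participation as a whole-number percentage."""
    return round(100 * female / total)

# Michigan Department of Education robotics enrollment figures
print(female_share(528, 3851))  # 2012-2013 school year: prints 14
print(female_share(812, 5361))  # 2014-2015 school year: prints 15
```

The raw count of girls grew by more than half over those two years, yet the share barely moved, which is the disparity the paragraph describes.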
With the White House STEM push and programs like FIRST, there isn't necessarily the same lack of opportunities for young women to get into robotics and STEM careers as there once was. The problem, it appears, is often the lack of suitable role models. “I think the challenge is getting women into those fields,” says Bandi. “And, after that, future generations will naturally transition into them.”
Terah Lyons, a policy advisor in the White House Office of Science and Technology Policy, agrees. Lyons points to the striking decline in the number of undergraduate degrees earned by women in engineering, math/statistics and—most dramatically—computer science over the past few years. Degrees earned by women have dropped from 28 percent in 2000 to just 18 percent in 2012, the National Science Foundation reported in its 2014 Science and Engineering Indicators Report.
“It’s tough to envision yourself as a leader in a field if you don’t see leaders that resemble you,” says Lyons. “The fact that there aren’t sufficient female role models is a catch-22 death spiral in a way, because it discourages women to go into these STEM fields and, further, women in future generations aren’t encouraged to study the subjects and the decline sort of happens from there.”
Another Foundry creation. (Brooklyn Robot Foundry)
As Maya's experience shows, girls interested in entering robotics still face cultural barriers—which the girls themselves are often very aware of. “In our society, a lot of the toys for boys are more focused on building,” says Maya. “Girls really don't have that. When girls join robotics, they get exposed to all of these things.”
Young, a mechanical engineer, says making robots fun will help pull more kids into the fold, particularly young girls who may not be engaged the same way as their male peers. She strives to counter the societal stereotype that “robots are just for boys” by teaching simple circuits to build basic robots, but letting the kids decide what to do next. Some of her students build fuzzy pink kitties that “wiggle and wobble,” while others craft more boxy, classically shaped robots—it’s up to them.
This fall, young girls around the country will watch as our country’s first female presidential nominee campaigns for the highest position in the United States. But the numbers show that overcoming the gender hurdle and encouraging women to go into science and math will still require time and dramatic societal reprogramming. “We need to tell the younger girls who are interested in these fields that they’re good at it,” says Young. “If girls and robotics could be mainstream, that would be the sweetest day ever.”
"You can flip it upside down, or right side up, and it still does the job!" Issa Nesnas, an engineer for the Jet Propulsion Laboratory, discusses and demonstrates prototypes of land rovers.
In the movies, you never hear robots say “Huh?”
For all his anxiety, C-3PO of "Star Wars" was never befuddled. Sonny, the pivotal non-human in "I, Robot," may have been confused about what he was, but he didn’t seem to have any trouble understanding Will Smith.
In real life, though, machines still struggle mightily with human language. Sure, Siri can answer questions if it recognizes enough of the words in a given query. But asking a robot to do something that it hasn’t been programmed, step-by-step, to do? Well, good luck with that.
Part of the problem is that we as humans aren’t very precise in how we speak; when we’re talking to one another, we usually don’t need to be. But ask a robot to “heat up some water” and the appropriate response would be “What?”—unless it had learned how to process the long string of questions related to that seemingly simple act. Among them: What is water? Where do you get it? What can you put it in? What does ‘heat up’ mean? What other object do you need to do this? Is the source in this room?
Now, however, researchers at Cornell University have taken on the challenge of training a robot to interpret what’s not said—or, the ambiguity of what is said. They call the project Tell Me Dave, a nod to HAL, the computer with the soothing voice and paranoid tendencies in the movie "2001: A Space Odyssey."
Their robot, equipped with a 3D camera, has been programmed to associate objects with their capabilities. For instance, it knows that a cup is something you can use to hold water, for drinking, or as a way to pour water into something else; a stove is something that can heat things, but also something upon which you can place things. Computer scientists call the training technique grounding—helping robots connect words to objects and actions in the real world.
“Words do not mean anything to a robot unless they are grounded into actions,” explains Ashutosh Saxena, head of the Tell Me Dave team. The project's robot, he says, has learned to map different phrases, such as “pick it up” or “lift it” to the same action.
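At its simplest, the grounding Saxena describes can be pictured as a many-to-one mapping from surface phrases to canonical actions. The sketch below is purely illustrative—the table, names and action labels are our assumptions, not the actual Tell Me Dave code:

```python
# Illustrative grounding table: many phrasings, one canonical action.
# The phrases and action names here are hypothetical examples.
GROUNDING = {
    "pick it up": "grasp_object",
    "lift it": "grasp_object",
    "grab that": "grasp_object",
    "take the pot to the stove": "place_pot_on_stove",
    "put the pot on the stove": "place_pot_on_stove",
}

def ground(phrase):
    """Map a spoken phrase to a canonical robot action, if known."""
    return GROUNDING.get(phrase.lower().strip())

print(ground("Lift it"))       # prints grasp_object
print(ground("do a backflip")) # prints None (unknown phrase)
```

The real system goes much further, scoring the probability of a match against learned examples rather than requiring an exact lookup, but the underlying idea—different words, same grounded action—is the same.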
That’s a big step forward in human-robot communication, given how many different ways we can describe a simple task.
“All robots, such as those in industrial manufacturing, self-driving cars, or assistive robots, need to interact with humans and interpret their imprecise language,” he says. “Being able to figure out the meaning of words from their environmental context would be useful for all of these robots immediately.”
A Group Effort
Saxena, along with graduate students Dipendra Misra and Jaeyong Sung, has also turned to crowdsourcing to collect as many different variants of the English language as possible.
Visitors to the Tell Me Dave website are asked to direct a virtual robot to complete a certain task, such as “Make ramen.” Because most people tend to give different commands as they lead the robot through the process, the team has been able to collect a large vocabulary related to the same step in the process.
Those commands, recorded in different accents, are associated with stored video simulations of different tasks. So even if the phrases are different—“take the pot to the stove” as opposed to “put the pot on the stove”—the Tell Me Dave machine can calculate the probability of a match with something it has heard before.
At this point, the Tell Me Dave robot completes requested tasks almost two-thirds of the time. That includes cases in which objects are moved to different places in the room, or the robot is working in a different room altogether. Sometimes, however, the robot is still clueless: when it was told to wait until ice cream became soft, “it couldn’t figure out what to do,” Saxena says.
Still, it has become much better at filling in unspecified steps. For instance, when told to “heat the water in the pot,” the robot realized that it first needed to carry the pot over to the tap and fill it with water. It also knows that when instructed to heat something, it can use either the stove or the microwave, depending on which is available.
Saxena says the Tell Me Dave robot training must improve before it can be used in real-life settings; being able to follow directions 64 percent of the time isn’t good enough, he says, particularly since humans understand what they’re told 90 percent of the time.
Saxena and his team will present their algorithms for training robots, and show how they’ve expanded the process through crowdsourcing, next week at the Robotics Science and Systems Conference at the University of California, Berkeley; similar research is being done at the University of Washington.
Here’s more recent news about research into communicating with and through robots:
- What’s the gesture for “make sure my seat is warm”?: Mercedes-Benz wants to be the first major car company to start selling driverless cars, perhaps as soon as 2020, and its engineers have started working with robotics experts to develop ways for people to communicate with their vehicles. One method getting a lot of attention is the use of hand signals that a car’s sensors could comprehend. Experts say that with the right gesture, you could hail your parked car to come pick you up.
- Finally, helper robots for mechanics: At Audi, robot helpers will soon be shipped to the company's mechanics around the world. The robots will be equipped with 3D cameras controlled by an off-site specialist, who can guide the people actually working on the cars through tricky repairs.
- Making Siri smarter: According to a report in Wired, Apple has started hiring top speech recognition experts as it begins focusing on the concept of neural networks, having machines learn words by building connections and mimicking the way neurons function in the human brain.
- Robot needs ride to art show: Later this month, a robot will begin hitchhiking across Canada. Called HitchBOT, it’s been described as a combination art project-social experiment. The goal is to see if HitchBOT can make it from Halifax to a gallery across the country in British Columbia. It won’t be able to move on its own, but it will be equipped with a microphone and camera that will allow it to detect motion and speech. It also will be able to answer questions using a Wikipedia-sourced database.
Planning the menu for a dinner party in a tiny apartment can be far easier than making sure guests have a place to sit: Many apartment dwellers simply don’t have the luxury of a full dining set and a comfy couch for movie nights.
But a new system in development at the Biorobotics Laboratory (BioRob) at the Swiss Federal Institute of Technology could mean people in cramped living spaces no longer need to make those compromises.
The project, called Roombots, is a system of modular robots that can assemble (and reassemble) themselves into various pieces of furniture. A full-size couch or bench, for example, could break apart into six dining chairs; a squat coffee table could build itself up into a proper dining table.
It’s not the first time researchers have built robots that can put themselves together. Engineers have been experimenting with modular robots for years. Most recently, a team at MIT showcased M-Blocks, a system of magnetic cubes aimed at turning robots from unitaskers into adaptive multitaskers.
Unlike previous attempts, though, the Roombots system can use passive, non-robotic parts. The passive components can be large, stationary pieces of furniture, such as a wall or a tabletop, or four-inch cubes; the active modules, which measure about 8.5 inches across, are what make the bots come to life.
The modules consist of two articulating spheres, containing three motors, a battery and a wireless radio. The spheres have sets of claw-like connectors on the outside of the modules, which allow them to grab onto one another or link up with passive parts.
The most recent Roombots demonstrations showcase base-level functionality. A set of four modules can move a small table across a room, while a set of three can reconfigure cubes from a tripod to a snake. The connectors, however, are limited by the amount of weight they can handle without bending.
Though it hasn’t been said explicitly, it appears that existing furniture can also be retrofitted with Roombots connectors. The table in the demonstration, for instance, looks like a simple Ikea side table.
Currently, the motors inside active Roombots modules allow them to roll across the floor individually or as a group. A string of modules, for instance, can induce a spin that helps them roll, as a unit, across the floor like a log.
While there’s plenty of flash in the Roombots demo to tempt apartment dwellers of the future, the researchers’ goal is far more altruistic. They want the system to be used primarily as a way to assist the elderly or disabled. A Roombots table, for instance, could move a person’s glass of water closer to where he or she is sitting; a dining chair could pull itself out and push itself back in. Roombots might also configure themselves to help those with limited mobility sit, stand up or lie down.
But the researchers have a lot to figure out before that’s possible. First, they must refine the algorithms that govern how Roombots move (at the moment, their movements are somewhat limited). The team is also exploring other forms of locomotion that will allow modules to climb on and around one another, allowing for faster transitions between configurations. While the current motors are capable of such tasks, the team must refine the bots' logic to allow them to decide how to re-arrange on their own.
There’s also the matter of control. Researchers currently manipulate Roombots via a Bluetooth connection. They’ve also mocked up a system that lets researchers control structures with gestures using a Microsoft Kinect. Ideally, the system would pair wirelessly with a tablet application, but that software is still in development.
Despite all the challenges ahead, Roombots have already captured the imagination of many robotics fans across the web (the Terminator comparisons appear to be unavoidable). The team will continue to refine the system over the coming months and years—and it could be up to 20 years before the system is refined enough for consumer use—but ultimately, researchers hope Roombots will become the LEGO bricks of smart furniture.
For now, the concept offers a glimpse of what it could look like for anyone—from artists to designers—to develop their own take on hyper-functional furniture.
Each year, U.S. farmers lose up to 12 percent of their crops to disease and another 12 percent to pests. Sickly plants account for billions of dollars in lost profit in the agriculture industry.
The trouble is, plant diseases are hard to spot before it’s too late. By the time there are visible symptoms—wilted leaves, for instance, or discoloration—the disease has often advanced too far to be treated.
But plants do let their surroundings know when they’re sick or under attack, just not in a way we’ve ever been able to understand in real time—until now. A team of engineers led by Gary McMurray, division head for food processing technology at the Georgia Institute of Technology (Georgia Tech), has developed a method to monitor and decode those messages to allow farmers to identify and treat diseases before they take root. His vision: robotic arms that capture and identify plants' natural disease signals on the fly.
All plants produce natural signals, in the form of volatile organic compounds (VOCs), to alert the surroundings of what’s attacking them. McMurray’s system captures those compounds using a miniaturized version of a gas chromatograph (called a micro GC). The devices, which were originally developed around the turn of the 20th century, are used to separate chemicals within complex samples—in McMurray's case, the gases a plant emits. As the gas is heated and passed through a column, an electronic sensor detects the compounds the sample contains.
There’s no pinching or plucking of samples and running them back to the lab for analysis—a process that can take days or weeks to complete. All the information the micro GC needs is in the air, which means treating diseases, or saving plants from them, could get a whole lot faster.
The applications for gas chromatography stretch well beyond crops. The Department of Homeland Security, for instance, uses chromatographs to detect certain types of gases and hazardous chemicals. And researchers are also using them to screen for certain types of digestive disease in humans.
Traditionally, chromatographs have been large devices anywhere from 3 to 10 meters long; advances in nanomaterials and manufacturing have allowed McMurray to create one that’s the size of an 8-volt battery.
“We could have never built this; even five years ago it wouldn’t have been possible,” he says.
McMurray's micro GC could be mounted to a robotic arm on existing farm equipment, such as a tractor or plow. The arm would hold the device above the plant leaves and gather air samples. A small computer, no more powerful than a basic laptop or iPhone, could then process data from the samples to identify any pathogens; a quick flush of helium prepares the sensor to evaluate its next sample.
Because they're so small, "[we] can build multiple micro GCs onto one robot," McMurray says. “I can have 10, 20, even 100 of them on a single tractor.”
That means one tractor could gather samples from stalks, roots and buds simultaneously.
The micro GC system could also be used to screen all of our food—from fruits to vegetables and grains—for disease.
“Fresh produce that’s being shipped around [the globe] could be carrying various pests or diseases. If you have some sort of field-deployable or mobile sensor, you could detect the VOCs that come off of the plants,” McMurray postulates. “It could solve some very big problems.”
The team is currently finishing lab testing for the micro GC to make sure its results are consistent with those from larger chromatographs. In August or September, they will run their first field test, in which a researcher will walk a micro GC through peach fields to test for Peachtree Root Rot.
While that initial test will focus on a specific disease, gas chromatography can screen for dozens of pathogens at once, which also distinguishes it from other approaches, McMurray says.
Even with the technology, plant pathologists will still have to map the VOC emissions associated with certain plants and certain diseases.
Plants whose VOC emissions are already well documented will have a leg-up when micro GC screening gets underway; others will require more time and research to diagnose and treat. “The pathogen we chose, no one knows anything about,” McMurray says, “but, for example, there is a lot known about certain fungi.”
But the Georgia Tech micro GC is an important turning point in usability and scalability, McMurray says.
“What we have, we think, is unique,” McMurray says. “These new manufacturing processes are opening up a whole new era of sensors.”
Robots are taking over.
Starting this month, robots will invade Seoul’s Incheon International Airport. The robots will drive themselves around the airport, assisting passengers and picking up litter.
Troika, as one robot is called, stands 4.5 feet tall and responds to its name when travelers need help, according to the Associated Press.
(Courtesy of LG Electronics)
Passengers traveling through the airport can scan their boarding pass and Troika will take them directly to their gate. (Theoretically, Troika is not programmed with spite, so the robot will not lead rude passengers on an aimless route through the airport.) If passengers start to lag behind the robot, Troika will say “Please stay closer so I can see you.”
The robot will be able to speak English, Korean, Chinese, and Japanese by the end of the month. It can tell passengers the weather in their final destination, information about flights or display a map of the airport. When it speaks, Troika’s screen shows eyes that blink and smile.
Another robot will assist maintenance teams around the airport, picking up and collecting any debris that it encounters on its rounds. Incheon Airport said in a statement that it does not expect the robots to replace humans, only add extra assistance during overnight shifts or particularly busy days.
This is only the latest example in a series of airport robot takeovers. At Geneva Airport, there’s a robot named Leo that checks passengers in and takes their checked bags to the baggage handling area. And meanwhile in Amsterdam, there’s a robot named Spencer who can recognize emotion and help passengers make connecting flights.
Other articles from Travel + Leisure:
Sometimes, experiments in mass transit don’t pan out. The push out to the suburbs after World War II killed the dreams of many a city planner, and their legacy lies in (very photogenic) ruins. In Paris, the Petite Ceinture, an old steam line, is a favorite of urban explorers. In Cincinnati, the abandoned remains of a never-used subway system are opened once a year for tours. In Rochester, officials are still trying to figure out what to do with the remains of a rapid transit system, built in 1927 and then abandoned within just thirty years.
Right now, Rochester's abandoned rail system provides a canvas for graffiti artists and a gloomy backdrop for photographers:
But some Rochester locals think it could be much more. From The Atlantic Cities:
The covered portion downtown can still easily be explored...For now, it serves as a popular destination for urban explorers and graffiti artists. As more cities find ways to reuse their above-ground railways, Rochester sits on a unique underground asset. "All I need to do is to take a walk along the Highline," says Governale, "and it becomes painfully clear to me that we're missing out on something."
New York City’s High Line is one of the most notable success stories in rehabilitating an abandoned transit line. The 1.45 mile long park snakes over the streets of Manhattan drawing both tourists and locals. And now it seems like every other city wants one—Chicago, London, Mexico City. Even New York wants another High Line–like success: one enterprising group is currently working to create the Lowline, an underground park that would be located in an old trolley terminal on New York’s Lower East Side.
Modern critics would probably hail the up-and-coming rock artists that once inhabited Indonesia. About a hundred caves outside Maros, a town in the tropical forests of Sulawesi, were once lined with hand stencils and vibrant murals of abstract pigs and dwarf buffalo. Today only fragments of the artwork remain, and the mysterious artists are long gone.
For now, all we know is when the caves were painted—or at least ballpark dates—and the finding suggests that the practice of lining cave walls with pictures of natural life was common 40,000 years ago. A study published today in Nature suggests that paintings in the Maros-Pangkep caves range from 17,400 to 39,900 years old, close to the age of similar artwork found on the walls of caves in Europe.
“It provides a new view about modern human origins, about when we became cognitively modern,” says Maxime Aubert, an archaeologist at Griffith University in Australia. “It changes the when and the where of our species becoming self-aware and starting to think abstractly, to paint and to carve figurines.”
Swiss naturalists Fritz and Paul Sarasin returned from a scientific expedition to Indonesia between 1905 and 1906 with tales of ancient rock shelters, artifacts and cave paintings, but few specifics. Dutch archaeologist H. R. van Heekeren first described the cave paintings around Maros in 1950, and though Indonesian researchers have done significant work in the caves, little has been published on them since.
Work by local scientists describes more recent charcoal drawings that depict domesticated animals and geometric patterns. It also mentions patches of potentially older art in a red, berry-colored paint—probably a form of iron-rich ochre—that adorns cave chamber entrances, ceilings and deep, less accessible rooms. Previous estimates put the Maros cave art at no more than 10,000 years old. “People didn’t believe that cave paintings would last for that long in caves in a tropical environment,” says Aubert.
Image by Kinez Riza. A hand stencil design on the wall of a cave in Sulawesi, Indonesia.
Image by Kinez Riza. Hand stencils, like the one pictured above from a cave in Sulawesi, are common in prehistoric art.
Image by Kinez Riza. A cave wall with a babirusa painting and hand stencil shows the range, from simple to sophisticated, of the artwork found in the Maros-Pangkep caves.
Dating cave paintings can prove extremely difficult. Radiocarbon dating can be destructive to the artwork and can only be used to date carbon-containing pigment—usually charcoal. The method also gives the age of the felled tree that made the charcoal, rather than the age of the painting itself. Bacteria, limestone and other organic material can further skew the dating results. “We often see wildly varying radiocarbon dates from the same painting,” says Alistair Pike, an archaeologist at the University of Southampton who was not affiliated with the study.
While excavating archaeological remains in the caves, Adam Brumm, a co-author and archaeologist at the University of Wollongong in Australia, noticed “cave popcorn” on some of the artwork. This layer of bumpy calcite would eventually become stalactites and stalagmites millennia down the road, but most importantly it contains uranium—a radioactive substance that can be used to estimate a painting’s age.
Aubert and his colleagues collected 19 samples taken from the edges of 14 works of art across seven cave sites. The images ranged from simple hand stencils to more complex animal depictions. In the lab, they estimated the age of the paintings based on uranium isotopes in the samples. In some cases, calcite layers were found above or beneath the art. “If I have a sample on top, it’s a minimum age, and if it’s on the bottom of the painting, then it’s a maximum age,” explains Aubert.
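The bracketing logic Aubert describes is simple in principle: calcite that formed on top of the pigment must be younger than the painting, so its date is a minimum age; calcite underneath must be older, so its date is a maximum. A minimal sketch of that reasoning, using hypothetical sample data rather than the study's measurements:

```python
def age_bracket(samples):
    """Combine uranium-series dates on calcite into (min_age, max_age)
    bounds for a painting, in years.

    Each sample is (position, age): calcite 'above' the pigment formed
    after the painting (a minimum age); calcite 'below' formed before
    it (a maximum age). The tightest bounds are the oldest overlying
    sample and the youngest underlying one.
    """
    minimum = maximum = None
    for position, age in samples:
        if position == "above":
            minimum = age if minimum is None else max(minimum, age)
        elif position == "below":
            maximum = age if maximum is None else min(maximum, age)
    return minimum, maximum

# Hypothetical samples bracketing one hand stencil
print(age_bracket([("above", 39_900), ("below", 43_000)]))  # prints (39900, 43000)
```

When only overlying calcite is available, as for the 39,900-year-old stencil, the method yields a minimum age alone: the painting could be older, but not younger.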
Most of the artwork is around 25,000 years old, which puts it among the oldest artwork in Southeast Asia. But some turned out to be significantly older than expected. “It was a bit of a shock,” says Aubert with a chuckle. One hand stencil dates to at least 39,900 years ago, making it the oldest example of hand stenciling in the world. Some of the animal artwork sets records as well: a painting of a female babirusa, or “pig-deer”, is at least 35,400 years old.
These dates are within spitting distance of some of Europe’s oldest rock art and sculptures. Using uranium dating, Pike’s team previously identified hand stencils and geometric paintings in Spain’s El Castillo cave as the oldest on record: at least 40,800 years old. More complex naturalistic images of animals at the famous Lascaux caves in France are around 20,000 years old, while those in Chauvet, France, date to around 32,000 years ago, though some dispute that date. Animal sculptures found in caves in Germany date to a similar period as well.
Image by Pedro Saura. The red dots (above) in El Castillo cave's Corredor de los Puntos have been dated to 34,000 to 36,000 years ago. Elsewhere in the cave, a similar dot is estimated to be 40,800 years old, again based on uranium dating. (original image)
Image by © Javier Trueba Rodriguez/Science Photo Library/Corbis. Artwork of fighting rhinoceroses painted on the wall of Chauvet cave in France. Based on radiocarbon dating of charcoal pigment used to create the paintings, the oldest animal image in Chauvet cave is estimated to be 32,000 years old. (original image)
Image by © Corbis. A painting of a bison in Altamira cave, Spain. Uranium dating suggests that the artwork at Altamira was produced over a span of roughly 20,000 years, between 35,000 and 15,200 years ago. (original image)
Image by H. Jensen/University of Tübingen. During excavations in 2008, a female figurine dubbed "Venus of Hohle Fels" was discovered in Hohle Fels cave in southwestern Germany. Scientists estimate that this figurine is at least 35,000 years old. (original image)
Scientists traditionally thought that humans began creating art once they reached Europe from Africa, and that artistic traditions spread to the far reaches of the globe from there. “It’s a pretty Euro-centric view of the world,” says Aubert. “But now we can move away from that.” The study provides compelling evidence that artists in Asia were painting at the same time as their European counterparts. Not only that, they were drawing recognizable animals that they probably hunted.
“This raises several interesting possibilities,” says Pike. Rock art may have emerged separately in these disparate locales. Given that simple hand stencils show up all over the world, he points out, that wouldn’t be too surprising. Then there’s the possibility that upon leaving Africa, around 70,000 years ago, modern humans had already developed artistic know-how, which they brought with them as they settled Europe and Asia. If that is true, there’s even more ancient cave art waiting to be discovered between Europe and Indonesia. Aubert has a hunch that’s the case: “It’s just that we haven’t found them or dated them yet. I think it’s only a matter of time.”
In the near future, Tasmania, Australia's island state, will house the world's first rock lobster hatchery, and possibly launch a new multimillion-dollar industry.
Unlike the Maine lobster—the popular U.S. variety that comes from the Atlantic Ocean—the rock lobster, or “spiny lobster,” as it’s also known, lives in warm waters like the Caribbean Sea and Pacific Ocean. It should be noted that “rock lobster” isn’t just one kind of crustacean (or simply the title of a B-52s song, for that matter) but a general term for a number of different, related species. In many places around the world, rock lobster of one sort or another is the go-to crustacean at dinner time, especially down under.
People love rock lobster. A lot. So much so that over the years their numbers dwindled in the wild, prompting countries like Australia to enforce a quota system that caps how many fishermen can take. In the case of the Australian rock lobster, producing commercial quantities in a hatchery has, until now, been nearly impossible. The creatures are notoriously hard to grow from eggs because of their complex life cycles—one of the longest larval developments of any marine creature—which require slightly different growing conditions at the various early stages of their lives.
But researchers at the University of Tasmania’s Institute for Marine and Antarctic Studies (IMAS), located in Hobart, have figured out how to grow the creatures in special tanks, using a particular diet and hygiene practices that took more than 15 years to perfect, according to the Mercury newspaper. The details of the technology are being held close to the vest by the researchers, but we do know it uses a closed-loop system involving 10,000-liter tanks that recirculate and purify the water, that it’s shortened the time the lobsters spend in their larval stage, and that no antibiotics are used in the process.
Unlike commercial production of rock lobster in Indonesia and Vietnam, which uses young, wild-caught lobsters as stock, the Australian venture will be the first in the world to start from eggs, which means it won’t diminish wild supplies—rock lobsters can produce as many as half a million eggs at a go (though in the wild, of course, only a fraction survive to adulthood). Although the Maine lobster and its close relative, the European lobster, aren’t farmed, per se, there are some hatcheries in the U.S. and Europe that grow them from larvae and release the juveniles into the wild, where they are then caught once they reach maturity.
PFG Group, a Tasmanian maritime equipment manufacturer, has invested $10 million (about $8 million U.S.) in a spin-off company from the university and plans to build a commercial hatchery scheduled to be up and running by 2021, according to news.com.au. The young rock lobsters could then be transferred to facilities around the world, where they can be grown to market size.
“I definitely think it could be a multimillion-dollar industry in Australia—land-based lobster production to the tens if not hundreds of millions of dollars,” PFG chief executive Michael Sylvester told the Australian newspaper recently. “There is a huge export opportunity, multiple additional jobs in Australia, and high-value science.”
China is a huge market for rock lobster (about 95 percent of the catch from the U.S. West Coast heads there), so the Aussies are hoping they’ll be able to take advantage of the continued demand.
More stories from Modern Farmer: