A Japanese company is planning to send some robotic personality to the International Space Station, in the hopes of providing at least one astronaut with some much-needed entertainment. Here’s the Guardian reporting on Kirobo, the “world’s first humanoid talking space robot”:
Its name comes from the Japanese words for hope and robot, and its task is momentous for a kilo of superbly engineered plastic and a bundle of plug leads: nothing less than to supply emotional warmth and companionship.
The robot has just been launched into the abyss and is scheduled to arrive at the ISS this Friday. It has been programmed to visually recognize the face of Koichi Wakata, an astronaut who will join the ISS crew in November.
Although Kirobo stands just 34cm tall, weighs slightly less than a kilo, and is modelled on a beloved Japanese cartoon figure, Astro Boy, it would be quite wrong, indeed grossly offensive, to describe it as a toy. It will also relay messages and commands from the control centre to Wakata, and keep records of all their conversations.
Though having your private conversations with a robot recorded seems a bit invasive, Kirobo’s creator, Tomotaka Takahashi, says the robot will provide “a kind of ‘listening’ conversation.” Rather than just answer specific questions with specific answers, Kirobo strives to actively participate in conversations.
Plus, what could go wrong? Kirobo, the Guardian says, “has an Earth-bound twin called Mirata which can monitor any problems in space” and, at a press conference, he told reporters that he “hoped to create a future where humans and robots live together and get along.” Which should be reassuring, but…even the homicidal HAL 9000 had a double on earth. And he once told a reporter: “I enjoy working with people.” As the Guardian puts it: “Generally robots in space have had a bad press.” Maybe Kirobo can turn that reputation around.
More from Smithsonian.com:
Feeling forsaken? For many people in Japan, loneliness is a daily reality. More than six million elderly people live alone at last count, and by 2030, one study projects that nearly 40 percent of Japanese people will live by themselves. But help is on the way in the form of an adorable new robot, reports Tribune News Services.
The robot is called the Kirobo Mini, and it’s aimed at making people feel less alone. It was developed as part of the Toyota Heart Project, an initiative to help create artificial intelligence to improve the world of the future. Named after the Japanese word for “hope,” the roughly four-inch-tall robot can talk, gesture, and respond to its owner’s emotions using artificial intelligence and a camera that lets it look at its surroundings.
Kirobo Mini is so tiny it can fit into a car’s cup holder in a special, baby seat-like container. And the resemblance to a baby doesn’t end there: Toyota characterizes it as “a cuddly companion always on hand for heart-touching communication.” It can turn its head toward people, laugh and talk to them, but as the Tribune reports, it can’t recognize individuals.
That might not matter to companionship-starved people seeking love and human connection with a robot. Take Aibo, for example: The Sony-produced dog of the late 1990s sold over 150,000 units despite a $2,000 price tag and, as The New York Times reports, is still considered to be a family member by the few owners that haven’t broken them yet. Jibo, a yet-to-be-released robot servant dubbed a “social robot,” has already racked up nearly $4 million in presales alone. And Pepper, a humanoid robot that sold out in mere seconds after its launch in 2015, can now be found in banks and airports throughout Japan.
The idea behind all these gadgets is fairly simple: By providing stimulation and company, companion robots could take the place of humans or fill in when friendship is scarce. And it turns out there’s something to the concept. A 2013 study found that a group of people in nursing homes reported less loneliness when they regularly interacted with a therapeutic interactive seal bot called Paro.
Of course, robots have a ways to go before they’re able to, say, sense when you’re mad at them or give you the world’s best hug. But Toyota thinks that Kirobo Mini is a good start—despite the fact that, as engineers admit to Tribune News Services, it's essentially a talking box. So how much will it cost to purchase your new, slightly dumb BFF? Once it’s available in the United States, it will cost you a cool $390. Friendship certainly doesn't come cheap these days.
The three reactors that melted down at Fukushima Daiichi’s nuclear power plant are still quite dangerous. Radiation levels remain too high for humans to enter. But the reactors need to be fully decommissioned and repaired. So robots are rolling in to give experts some eyes on the inside. For Popular Science, Mary Beth Griggs reports on the latest mechanical helper, which should make its foray into the plant at the end of August.
Griggs writes about the scorpion-shaped robot developed by Toshiba:
The small robot is narrow enough to fit through a pipe, and can flatten itself out into a rigid line to cross gaps or squeeze through tight spaces. But it can also raise its back like a scorpion’s tail, lifting up a camera that can help investigators get an exact location on the fuel.
Earlier this year, the plant’s owner, Tokyo Electric Power Co. (TEPCO), sent a snake-like robot to inspect reactor number 1’s primary containment vessel, which holds the nuclear fuel. The shape-changing robot found radiation doses at about a tenth of the levels expected, reports Martyn Williams for PC World. But the bot died just three hours into its mission.
The new robot’s ability to raise its camera will give its operators a better view. It’s also able to pick itself back up if it tumbles. The scorpion-like bot should operate for about 10 hours if all goes well. Here it is in action:
Robots are capable of doing a lot of things these days, and now, researchers in Canada are looking to add yet another skill to the list: hitchhiking.
HitchBOT is part art project and social experiment. The genderless robot, cobbled together from a beer bucket and pool noodles, will attempt to make its way from Halifax to Victoria this summer, relying entirely on kind drivers to get it to its ultimate destination.
But this robot isn’t just a hunk of metal. Just like any good hitchhiker, HitchBOT is designed to interact with the people who pick it up.
The robot will have voice recognition and processing abilities that will allow it to make small talk with its voice. It will even be able to draw on Wikipedia for conversation topics. It will also have an LED screen so it can message humans using text, and can make some facial expressions. And it will be able to hold text conversations with multiple people at the same time over the internet.
HitchBOT will be powered with solar panels covering the beer cooler bucket that makes up its torso, and can also be recharged from car cigarette lighters or a regular outlet. But if HitchBOT's power runs out as it is waiting for its next ride, written instructions on its body will tell people how to strap it into the car and plug it in, and direct people to a help website.
Hitchhiking itself seems to be coming back into vogue, and not just for robots. John Waters recently wrote a book about his own journey hitchhiking from coast to coast. Will people be as hospitable to a robot as they were to a Hollywood actor? We’ll find out on July 27, when HitchBOT’s journey is set to begin.
Robots may not be the most traditional means of spreading Buddhist teachings, but one Chinese temple is giving it a go. By working with engineers and artificial intelligence experts from some of China’s top universities, a Buddhist monk who lives just outside of Beijing has developed a little robot monk who can hold simple conversations and recite traditional chants in hopes of sharing ancient teachings through modern technology.
With bright yellow robes and a shaved head, the two-foot-tall robot pronounced “Xian’er,” (in Chinese, "贤二"), looks like a toy caricature of a Buddhist monk. However, the little robot has the ability to respond to voice commands, answer simple questions about Buddhist teachings and a monk’s day-to-day life, and even recite some mantras, Didi Kirsten Tatlow reports for the New York Times.
At first glance, technology and Buddhism may seem incompatible. After all, Buddhist teachings often center around rejecting materialism and worldly sentiments. However, Master Xianfan, the Buddhist monk behind Xian’er’s creation, sees the little robot simply as a more modern tool for spreading the religion’s teachings in a world where billions of people are constantly connected through smartphones and the internet.
"Science and Buddhism are not opposing nor contradicting, and can be combined and mutually compatible," Xianfan tells Joseph Campbell for Reuters.
Xian’er began as a sketch Xianfan drew in 2011 soon after he first joined the Longquan temple outside of Beijing, Harriet Sherwood reports for The Guardian. Since then, the temple has used the character as a means for spreading its teachings as China’s ruling Communist Party has relaxed laws regarding religion in the country. For several years, the temple has produced cartoons and comic books starring Xian’er. Now, Xianfan hopes that by stepping off of the page, his cartoon creation might help draw new converts to Buddhism in a fast-paced, technology-heavy world.
"Buddhism is something that attaches much importance to inner heart, and pays attention to the individual's spiritual world," Xianfan tells Campbell. "It is a kind of elevated culture. Speaking from this perspective, I think it can satisfy the needs of many people."
Since his debut last October, Xian’er has become a minor celebrity at the temple, with news of the robot drawing visitors to the temple in hopes of catching a glimpse of the mechanical monk. However, not everyone is as enthusiastic about the robot as Xianfan, Tatlow reports.
“It relies on permutations and combinations of words to solve problems, but whether it can really deal with deep personal issues, I’m not sure,” Zhang Ping, a woman visiting the temple, tells Tatlow. “Everyone is different. For some, those may be about family, for others, about work.”
Xian’er’s repertoire may be somewhat limited to certain phrases and questions at the moment, but Xianfan hopes that will soon change. Just months after Xian’er’s debut, the monk is back at work with programmers and engineers on creating a new version of Xian’er, which will have a broader range of responses and functions, Campbell reports. But don’t expect the cute little robot to show up on store shelves any time soon.
“We’re not doing this for commerce, but just because we want to use more modern ways to spread Buddhist teachings,” Xianfan tells Beijing News.

"Xian'er" makes its debut at the Guangzhou Animation Festival in October 2015. (Xinhua/Xinhua Press/Corbis)
NASA’s next space robot doesn't look much like the wheeled or four-legged or humanoid robots that generations of science fiction have dreamed up. More than anything else, it looks like an abstract, geometric structure. But this amalgam of lines could explore new planets, expanding, contracting and lurching along new terrain. It’s called the Super Ball Bot, and it has its origins in a humble baby’s toy.
Wired reports that two engineers from NASA’s Innovative Advanced Concepts Program were tossing around a baby toy made of wire and rods when they noticed it absorbed impact well when it hit the floor. Though they jokingly compared it to a landing robot at first, Vytas SunSpiral and Adrian Agogino soon realized that there was an intriguing principle at play—a concept known as tensegrity.
The term was coined by Buckminster Fuller, who studied the relationship between tension and structural integrity. (Most famously, perhaps, with his geodesic domes.) The concept can be found in nature, too, from cellular structures to spider webs. NASA calls tensegrity structures “counter-intuitive”—built with seemingly-fragile components, they are able to distribute tension and compression across the entire structure.
Inspired by tensegrity, SunSpiral and Agogino developed a robot that’s easily manipulated and that can squeeze into tight places. Unlike other space-exploring robots, the Super Ball Bot is lightweight. What it lacks in rigidity, SunSpiral tells Wired, it makes up for in terms of flexibility.
“We’re accustomed to building rigid and linearly connected systems,” SunSpiral explains. “And we don’t have as many computational tools to develop tensegrity systems. [Super Ball Bot] breaks so many rules of conventional engineering.”
The engineers will present their concept at the IEEE International Conference on Robotics and Automation this spring. But the breakthrough won’t be the first time science has been furthered by a bit of play—from Legos to balloons and kites, scientific discoverers have long found inspiration in toys.
If the brain is a collection of electrical signals, then, if you could catalog all those signals digitally, you might be able to upload your brain into a computer, thus achieving digital immortality.
While the plausibility—and ethics—of this upload for humans can be debated, some people are forging ahead in the field of whole-brain emulation. There are massive efforts to map the connectome—all the connections in the brain—and to understand how we think. Simulating brains could lead us to better robots and artificial intelligence, but the first steps need to be simple.
So, one group of scientists started with the roundworm Caenorhabditis elegans, a critter whose genes and simple nervous system we know intimately.
The OpenWorm project has mapped the connections between the worm’s 302 neurons and simulated them in software. (The project’s ultimate goal is to completely simulate C. elegans as a virtual organism.) Recently, they put that software program in a simple Lego robot.
The worm’s body parts and neural networks now have LegoBot equivalents: The worm’s nose neurons were replaced by a sonar sensor on the robot. The motor neurons running down both sides of the worm now correspond to motors on the left and right of the robot, explains Lucy Black for I Programmer. She writes:
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.
Timothy Busbice, a founder of the OpenWorm project, posted a video of the Lego-Worm-Bot stopping and backing:
The simulation isn’t exact—the program has some simplifications on the thresholds needed to trigger a "neuron" firing, for example. But the behavior is impressive considering that no instructions were programmed into this robot. All it has is a network of connections mimicking those in the brain of a worm.
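The principle behind the demo is simple enough to sketch in a few lines of code. Below is a toy, rate-based network with invented neuron names and weights (the real OpenWorm model simulates all 302 neurons and is far more detailed): sensor activity flows through a connection-weight matrix, and whatever reaches the “motor neurons” drives the wheels. Stimulating the nose inhibits the motors, mimicking the stop-on-touch reflex described above.

```python
import numpy as np

# A toy sketch of the idea, NOT OpenWorm's actual wiring: sensor activity
# is pushed through a hypothetical connection-weight matrix, and whatever
# reaches the motor "neurons" drives the robot's wheels.
neurons = ["nose_sensor", "interneuron", "motor_left", "motor_right"]

# W[i][j]: influence of neuron i on neuron j
# (negative = inhibitory, like the nose reflex that stops forward motion)
W = np.array([
    [0.0,  1.0,  0.0,  0.0],   # nose sensor excites the interneuron
    [0.0,  0.0, -1.0, -1.0],   # interneuron inhibits both motors
    [0.0,  0.0,  0.0,  0.0],   # motors have no outgoing connections
    [0.0,  0.0,  0.0,  0.0],
])
baseline = np.array([0.0, 0.0, 0.8, 0.8])  # motors idle forward

def motor_output(sensor_activity):
    """Propagate activity two synaptic hops; clip negative rates at zero."""
    activity = np.array([sensor_activity, 0.0, 0.0, 0.0])
    activity = np.maximum(activity @ W, 0.0)   # sensor -> interneuron
    motors = baseline + activity @ W           # interneuron -> motors
    return np.maximum(motors[2:], 0.0)         # left/right wheel drive

print(motor_output(0.0))  # both motors at baseline: robot drives forward
print(motor_output(1.0))  # nose "touched": inhibition stops both motors
```

No if-statements encode the stop behavior; it emerges entirely from the signs of the connection weights, which is the same reason the Lego bot needed no explicit instructions.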
Of course, the goal of uploading our brains assumes that we aren’t already living in a computer simulation. Hear out the logic: Technologically advanced civilizations will eventually make simulations that are indistinguishable from reality. If that can happen, odds are it has. And if it has, there are probably billions of simulations making their own simulations. Work out that math, and "the odds are nearly infinity to one that we are all living in a computer simulation," writes Ed Grabianowski for io9.
Is your mind spinning yet?
Americans are, increasingly, working from home. Whether small business owners, freelance workers or regular old telecommuters, more and more of us are working from home offices, coffee shops or even bars. And although many of these workers usually need only a laptop and smartphone, every once in a while, it's necessary—and nice—to print out contracts, leases or other formal documents.
Over on Kickstarter, a company called ZUtA Labs is raising money (they're nearly at their goal) to create the last piece of a portable office—a little robot printer, a Roomba-esque thing that drives around on a sheet of paper and prints as it goes.
“Print machines now-a-days are essentially a printhead running left and right on a moving piece of paper. We asked ourselves, why not get rid of the entire device, just put the printhead on a set of small wheels and let it run across a piece of paper. By doing so, we allow the printer to really be as little as possible,” the team writes.
Unlike some projects that take to Kickstarter once they're more or less finished and just looking for a marketing bump, the robot printer still has some kinks to be worked out, says Wired UK. The team hasn't figured out, for instance, how to teach the printer to reorient itself if it gets moved mid-way through a job.
At $200 a pop the little robot won't exactly be the most economical solution for your once-in-a-while printing needs. But, it's an innovative solution to an increasingly common problem, and, just maybe, a foreshadowing of the demise of the oft-hated office printer.
It sounds like a plot from Black Mirror: Students are asked to identify matching objects, but when a robot chimes in with an obviously wrong answer, some kids repeat what the bot says verbatim instead of tapping into their own smarts. But this isn't science fiction—a new study published in Science Robotics suggests kids easily succumb to peer pressure from robots.
Discover’s Bill Andrews reports that a team of German and British researchers recruited 43 children aged between 7 and 9 to participate in the Asch experiment, a social conformity test masquerading as a vision exam. The experiment, which was first developed during the 1950s, asks participants to compare four lines and identify the two matching in length. There is an obviously correct answer, as the lines are typically of wildly varying lengths, and when the children were tested individually, they provided the right response 87 percent of the time.
Once robots arrived on the scene, however, scores dropped to 75 percent.
“When the kids were alone in the room, they were quite good at the task, but when the robots took part and gave wrong answers, they just followed the robots,” study co-author Tony Belpaeme, a roboticist at the University of Plymouth in the United Kingdom, tells The Verge’s James Vincent.

Pictured here are the robot used, the setup of the experiment and the “vision test” shown to participants. (Vollmer et al.)
In the new testing environment, one volunteer at a time was seated alongside three humanoid robots. Although the lines requiring assessment remained highly distinguishable, child participants doubted themselves and looked to their robot counterparts for guidance. Of the incorrect answers that the children provided, 74 percent matched those provided by the robots word for word.
Alan Wagner, an aerospace engineer at Pennsylvania State University who was not involved in the new study, tells The Washington Post’s Carolyn Y. Johnson that the implicit faith humans often place in machines is known as “automation bias.”
“People tend to believe these machines know more than they do, have greater awareness than they actually do,” Wagner notes. “They imbue them with all these amazing and fanciful properties.”
The Verge’s Vincent writes that the researchers conducted the same test on a group of 60 adults. Unlike the children, these older participants stuck with their answers, refusing to follow in the robots’ (incorrect) footsteps.
The robots’ demure appearance may have influenced adult participants’ lack of faith in them, Belpaeme explains.
“[They] don’t have enough presence to be influential,” he tells Vincent. “They’re too small, too toylike.”
Participants questioned at the conclusion of the exam verified the researchers’ theory, stating that they assumed the robots were malfunctioning or not advanced enough to provide the correct answer. It’s possible, Belpaeme notes, that if the study were repeated with more authoritative-looking robots, adults would prove just as susceptible as children.
According to a press release, the team’s findings have far-reaching implications for the future of the robotics industry. As “autonomous social robots” become increasingly common in the education and child counseling fields, the researchers warn that protective measures should be taken to “minimise the risk to children during social child-robot interaction.”
Last week in Pomona, California, 23 robots competed for $3.5 million in prizes. One robot emerged as the victor, and the herald of a future where humans and robots work together (hopefully not against each other). But many failed, spectacularly.
The DARPA Robotics Challenge (DRC) was inspired after the Fukushima nuclear disaster made clear the need to develop more robust, dexterous robotic rescuers. The challenge itself involved navigating through a simulated disaster environment and performing tasks such as turning a valve, driving a vehicle and clambering over debris. For IEEE Spectrum, Erico Guizzo and Evan Ackerman write:
Lots of robots fell over, and a bunch of robots fell over multiple times. As much as nobody wanted to see a robot fall, everybody wanted to see a robot fall, and the possibility of falls (and reality of falls) kept everyone watching on the edge of our seats.
A compilation of all the falls from IEEE Spectrum provides the opportunity to simultaneously grimace and giggle for those who didn’t make last week’s event in person.
"These robots are big and made of lots of metal and you might assume people seeing them would be filled with fear and anxiety," says the event’s organizer, Gill Pratt, in a statement. "But we heard groans of sympathy when those robots fell. And what did people do every time a robot scored a point? They cheered! It's an extraordinary thing, and I think this is one of the biggest lessons from DRC — the potential for robots not only to perform technical tasks for us, but to help connect people to one another."
The robots here don’t make many decisions for themselves. Instead, they scan and measure spaces before passing that information to their operator teams, a quarter of a mile away. In the end, human judgment is still needed. But the idea is that the robots can go where humans cannot. Still, the tottering progress and moments spent pondering the next move all add up until, as Mona Lalwani notes for Engadget, "it seems they need humans more than humans need them."
The winning robot, DRC Hubo, came from South Korea's Team KAIST. It completed the course and beat the challengers in 44 minutes and 28 seconds. DRC Hubo’s success comes in part from its ability to stand on two legs like a human, but also kneel on wheels built into its knees to move about with greater stability. Guizzo and Ackerman note how the others fared in IEEE Spectrum:
Other teams also performed well in the competition, but setbacks made their robots lose time. These included Tartan Rescue’s CHIMP, a robot with legs and tank-like tracks that was the only robot to get back up after a fall; the University of Bonn’s Momaro, an elegantly simple wheeled machine with a spinning head and two arms; NASA Jet Propulsion Laboratory’s RoboSimian, a four-legged robot that seemed to perform yoga moves; IHMC’s ATLAS, a large hydraulic-electric humanoid made by Boston Dynamics (and used by other DRC teams).
Here’s the winning bot in action during the door opening task, no stumbles in sight:
Since the 2011 meltdown at a nuclear power plant in Fukushima, Japanese authorities have been working to decontaminate the area. A crucial step in the clean up process is locating the nuclear fuel that melted during the disaster—a task easier said than done. Humans can’t safely go near the site, and robots sent to probe the highly toxic reactors have sputtered and died.
But as Kyle Swenson reports for the Washington Post, experts recently made a breakthrough: an underwater robot photographed what appears to be solidified nuclear fuel at the site of the disaster.
The robot, nicknamed “Little Sunfish,” documented icicle-like clusters, clumps and layers of the suspected nuclear material in one of the three reactors that was submerged in water when Japan was hit by a massive earthquake and tsunami six years ago. Some layers are more than three feet thick. According to the Associated Press, the formations were found “inside a main structure called the pedestal that sits underneath the core inside the primary containment vessel of Fukushima’s Unit 3 reactor.”
Takahiro Kimoto, a spokesperson for the Tokyo Electric Power Company (TEPCO), tells Kazuaki Nagata of the Japan Times that “it is possible that the melted objects found this time are melted fuel debris.”
“From the pictures taken today, it is obvious that some melted objects came out of the reactor,” he explains. “This means something of high temperature melted some structural objects and came out. So it is natural to think that melted fuel rods are mixed with them.”
The lava-like mixture of nuclear fuel rods and other structural materials is known as corium, and finding its location is vital for decontamination efforts. As Lake Barrett, a former official at the U.S. Nuclear Regulatory Commission, tells Nagata, “[i]t is important to know the exact locations and the physical, chemical, radiological forms of the corium to develop the necessary engineering defueling plans for the safe removal of the radioactive materials.”
The possible identification of corium at Fukushima is a promising first step, but there is a long road ahead. Further analysis is needed to confirm that the substance is indeed melted fuel. Then authorities will need to figure out a way to remove it from the area. The process of decommissioning the reactors is expected to take 40 years, and cost about $72 billion, according to an estimate from the Japanese government.
It's not all bad news. With Little Sunfish, scientists may have finally developed a robot that can withstand the highly radioactive bowels of Fukushima’s nuclear reactors, which will help them conduct further investigations of the site.
Much attention is paid to the cargo passengers carry in and out of airports. Suitcases and trunks are tagged, x-rayed, even searched. Yet, the same level of scrutiny is not often applied to other means of travel.
“It’s really hard to maintain security at ports,” explains Sampriti Bhattacharyya, a mechanical engineering graduate student at the Massachusetts Institute of Technology. “How do you get on your hands and knees and check everything?” Inspectors would need to peer inside every cabin and cabinet and under floorboards to be sure there's nothing to hide.
In early September, she and her advisor, engineering professor Harry Asada, presented their solution at the International Conference of Intelligent Robots and Systems. Their Ellipsoidal Vehicle for Inspection and Exploration (EVIE, for short) is a football-sized robot that swims along ships’ hulls, using ultrasound to sniff out potential contraband.
Smugglers often conceal goods in secret compartments in ship hulls. Many of these craft are small, and port security may not have the resources or time to search every one. Ultrasound will allow EVIE to spot hollow areas in a hull, where wares are likely to be stashed.
EVIE spans about eight inches across, and its plastic body is divided into two distinct hemispheres. The upper hemisphere contains a propulsion system of six water jets, which can push EVIE forward at about 2 miles per hour. The lower hemisphere is watertight and houses all the electronics, including a battery, motion sensors, central processor, wireless radio and a camera; the team flattened the bottom, so that EVIE could press flush against surfaces. For now, the robot is wirelessly remote controlled, but the researchers think it could one day be programmed to work autonomously.
Originally designed to assess the condition of water tanks in nuclear reactors, EVIE is meant to peek into places that are either unsafe or inaccessible to humans. Its stealthy propulsion system, however, makes the remote-controlled robot ideal for surreptitious searches. Instead of propellers, which create a visible wake, the team opted for the six internal water jets. EVIE’s 3D-printed upper chamber fills with water, which the jets expel to propel and steer the craft. “You cannot see the jets in the water; you can hide it in a bunch of bushes [or seaweed] and let it go,” explains Bhattacharyya.

The current version of EVIE consists of a watertight base for its electronics and a 3D-printed upper containing the jet-based propulsion system. (Courtesy Sampriti Bhattacharyya & Harry Asada)
The control scheme is very sensitive, which is both a blessing and a curse. A high degree of maneuverability will allow pilots to skim as close to hulls as possible, but can also make it difficult to maintain a precise distance and a straight line. Before the team can make ultrasound work, they’ll have to improve its control mechanism and figure out how to navigate rough surfaces, such as hulls that are uneven or covered in barnacles. Ultrasound requires either direct contact with a surface or a consistent distance from it.
The team is currently conducting still-water tests to figure out how to help EVIE hover at a prescribed distance. Using a hydrodynamic buffer, or pre-determined fixed gap between the robot and boat, Bhattacharyya explains, could be a way for the device to quickly identify areas that require a closer look. “If the surface is rough, and I have a time crunch and want to scan really fast, I can stay at distance and then stop whenever I see something,” she says.
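As a rough illustration of what hovering at a prescribed distance involves, here is a minimal proportional-derivative (PD) feedback loop of the kind commonly used for standoff control. The gains, target gap, and function name are all invented for illustration; the team's actual control scheme isn't described in detail.

```python
# Minimal PD controller sketch for holding a fixed standoff from a hull.
# All constants are assumed values, not EVIE's real parameters.

TARGET_GAP_M = 0.05   # desired hydrodynamic buffer: 5 cm
KP, KD = 8.0, 2.0     # proportional and derivative gains (assumed)

def thrust_command(gap_m, gap_rate_m_s):
    """Return a jet-thrust correction from the measured gap and its rate
    of change: positive pushes the robot toward the hull, negative pushes
    it away. The derivative term damps oscillation around the setpoint."""
    error = gap_m - TARGET_GAP_M
    return KP * error + KD * gap_rate_m_s

# Too far from the hull and closing slowly -> gentle push toward it
print(thrust_command(0.08, -0.01))
# Too close and still approaching -> push away
print(thrust_command(0.02, -0.02))
```

In practice the loop would run many times per second against sonar or pressure-sensor readings, and a rough hull (barnacles, uneven plating) shows up as noise in `gap_m` that the gains have to be tuned against, which is exactly the control difficulty the team describes.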
The prototype has already piqued the military's interest. “I am particularly interested to see if this type of technology could find use in domestic maritime operations, ranging from the detection of smuggled nuclear, biological, or chemical agents to drug interdiction, discovery of stress fractures in submerged structures and hulls, or even faster processing and routing of maritime traffic,” Nathan Betcher, a special-tactics officer in the U.S. Air Force, told MIT News.
The current device's lithium-ion battery can power the craft for about 40 minutes, enough time to screen several hulls. Bhattacharyya plans to increase battery life to 100 minutes with the next generation. She imagines a future where fleets of EVIEs monitor ports; they will rotate, with some reporting to scanning duty as others return to their charging stations. But, full-scale commercialization is still years away, says Bhattacharyya.