MIT experts develop drone that can be navigated by gestures
The "Conduct-A-Bot" can make the interaction between people and robotic equipment more instinctive, natural, and safe
Europost
Albert Einstein famously postulated that “the only real valuable thing is intuition,” arguably one of the most important keys to understanding intention and communication. But intuition is hard to teach, especially to a machine. Looking to improve this, a team from the Massachusetts Institute of Technology (MIT)’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a method that moves us closer to seamless human-robot collaboration. The system, called “Conduct-A-Bot,” uses human muscle signals from wearable sensors to pilot a robot’s movement.
To enable seamless teamwork between people and machines, the muscle-control feature requires electromyography (EMG) and motion sensors worn on the biceps, triceps, and forearms to measure muscle signals and movement. Algorithms then process those signals to detect gestures in real time, without any offline calibration or per-user training data. The system uses just two or three wearable sensors and nothing in the environment, greatly lowering the barrier for casual users interacting with robots.
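Detecting activation without per-user calibration can be done by comparing each incoming sensor sample against a continuously updated resting baseline. The following is a minimal illustrative sketch of that idea, not the actual Conduct-A-Bot algorithm; the class name, window size, and threshold ratio are all assumptions.

```python
from collections import deque


class GestureDetector:
    """Flag muscle activation from a streamed EMG envelope without
    offline calibration, using a rolling resting baseline.
    Illustrative sketch only; parameters are assumed, not from the paper."""

    def __init__(self, window=200, ratio=3.0):
        self.baseline = deque(maxlen=window)  # recent resting samples
        self.ratio = ratio                    # activation threshold multiplier

    def update(self, sample):
        # Collect a few samples before making any decision.
        if len(self.baseline) < 10:
            self.baseline.append(sample)
            return False
        mean = sum(self.baseline) / len(self.baseline)
        # "Activated" when the sample exceeds a multiple of the running
        # baseline mean; otherwise treat it as rest and refine the baseline.
        if sample > self.ratio * max(mean, 1e-6):
            return True
        self.baseline.append(sample)
        return False
```

Because the baseline adapts while the muscle is at rest, such a detector needs no stored training data for a new user, which is the property the paragraph above highlights.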
By detecting actions like rotational gestures, clenched fists, tensed arms, and activated forearms, Conduct-A-Bot can move the drone left, right, up, down, and forward, as well as rotate it and stop it. The idea mirrors everyday communication: if you gestured toward the right to a friend, they would likely understand that they should move in that direction. Similarly, if you waved your hand to the left, the drone would follow suit and make a left turn.
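The gesture vocabulary above amounts to a lookup from recognized gestures to drone commands. A sketch of that mapping might look as follows; the label and command names are hypothetical, not the actual Conduct-A-Bot API.

```python
# Hypothetical mapping from detected gesture labels to drone commands,
# mirroring the vocabulary described above (names are illustrative).
GESTURE_TO_COMMAND = {
    "wave_left": "turn_left",
    "wave_right": "turn_right",
    "arm_up": "move_up",
    "arm_down": "move_down",
    "forearm_activated": "move_forward",
    "rotate_cw": "rotate_clockwise",
    "rotate_ccw": "rotate_counterclockwise",
    "clenched_fist": "stop",
}


def command_for(gesture):
    # Anything unrecognized defaults to hovering in place,
    # a conservative choice for a safety-oriented system.
    return GESTURE_TO_COMMAND.get(gesture, "hover")
```

Defaulting to a hover on unrecognized input is one way such a system could stay safe when a gesture is ambiguous or misclassified.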
In tests, the drone correctly responded to 82% of over 1,500 human gestures when it was remotely controlled to fly through hoops. The system also correctly identified approximately 94% of cued gestures when the drone was not being controlled.
The new feature is impressive not only because it uses biofeedback instead of other kinds of gesture recognition to control the drone, but also because the controls open up a range of potential applications. This type of system could eventually serve human-robot collaboration in remote exploration, assistive personal robots, and manufacturing tasks like delivering objects or lifting materials.
"Cobotics," the industry that focuses on creating robots that can safely work alongside and in close collaboration with humans, would greatly benefit from the advances made by MIT's research team.
“Understanding our gestures could help robots interpret more of the nonverbal cues that we naturally use in everyday life,” says Joseph DelPreto, lead author on the new paper. “This type of system could help make interacting with a robot more similar to interacting with another person, and make it easier for someone to start using robots without prior experience or external sensors.”
“Furthermore, this system moves one step closer to letting us work seamlessly with robots so they can become more effective and intelligent tools for everyday tasks,” says DelPreto. “As such collaborations continue to become more accessible and pervasive, the possibilities for synergistic benefit continue to deepen.”