Researchers have created a robot capable of peeling a banana without harming the fruit, an impressive achievement considering how effortlessly humans perform the task. The device features two arms tipped with prong-like "hands": one picks up the banana while the other gently pulls back the peel, stripping the skin away piece by piece until peeling is complete in under three minutes. The researchers hope their technology could also be applied to other fine motor tasks, such as handling glasses of water or stirring bowls of soup.
University of Tokyo scientists led by Heecheol Kim trained their robot to peel bananas using deep imitation learning: a human demonstrated the task hundreds of times while the robot observed, and the robot then replicated what it saw. The result was a robot with a 57% success rate that didn't squash or damage the fruit, NBC News reports.
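Deep imitation learning of this kind is commonly framed as learning a policy from recorded (observation, action) pairs. The sketch below is a generic, much-simplified illustration of that observe-then-replicate idea using a nearest-neighbor lookup; the observation format, action labels, and demonstration data are assumptions for illustration, not the authors' model.

```python
# A toy imitation "policy": replay the action from the most similar
# demonstrated observation. Real deep imitation learning fits a neural
# network to thousands of such pairs; this nearest-neighbor version
# only illustrates the underlying learn-from-demonstration idea.

# Hypothetical (observation, action) pairs recorded from demonstrations.
# Observations here are 2-D gripper positions; actions are labels.
DEMOS = [
    ((0.0, 0.0), "reach"),
    ((0.5, 0.2), "grasp"),
    ((0.9, 0.8), "peel"),
]

def policy(observation):
    """Return the demonstrated action whose observation is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(DEMOS, key=lambda demo: sq_dist(demo[0], observation))
    return nearest[1]

print(policy((0.1, 0.05)))  # a state near the start of the task -> "reach"
```

A neural-network policy generalizes far better than this lookup, but both share the same structure: demonstrations in, state-conditioned actions out.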
Unlike other robotic devices, the banana-peeling robot doesn't rely on sensors to read the banana's surface; instead, it uses its two arms and their "fingers." One arm picks up the fruit while the other moves in to carefully grab the tip before peeling begins, with the two arms alternating roles as the peel comes off.
This multi-step process is difficult for machines, which typically rely on sensor data to grasp objects, yet the robot managed to peel a banana with an impressive 57% success rate. It is not yet capable of tasks like handling metal parts or serving meals to restaurant patrons.
To make the banana-peeling robot successful, its designers developed a set of rules governing how its arms move and hold the fruit, with the aim of reaching and peeling the banana without mashing or bruising it. They achieved this by breaking the peeling process into subtasks such as GraspBanana, GraspTip, ReachRight, Reposition, PeelTip, ReachLeft and PeelRight, some of which, like the peeling motions, require high-precision manipulation.
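The subtask decomposition can be pictured as a simple sequential executor that aborts as soon as any step fails. The subtask names below come from the article, but their ordering and the executor itself are illustrative assumptions, not the published control architecture.

```python
# Subtask names taken from the article; the ordering shown here is a
# guess for illustration only.
SUBTASKS = [
    "GraspBanana",  # one arm picks up the fruit
    "GraspTip",     # the other arm pinches the tip of the peel
    "PeelTip",      # high-precision: start the peel without bruising
    "ReachRight",
    "PeelRight",    # high-precision peel on one side
    "Reposition",   # adjust the grip before the next strip
    "ReachLeft",
]

def run_pipeline(execute):
    """Run each subtask in order; stop at the first failure.

    `execute` maps a subtask name to True (success) or False (failure).
    Returns (completed_subtasks, overall_success).
    """
    completed = []
    for task in SUBTASKS:
        if not execute(task):
            return completed, False
        completed.append(task)
    return completed, True

# With a perfectly reliable executor the whole sequence completes.
done, ok = run_pipeline(lambda task: True)
print(ok, len(done))  # True 7
```

Aborting at the first failed subtask mirrors why the overall success rate (57%) is lower than any individual step's: every stage must succeed in sequence.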
Demonstrations involve a human guiding the robot through the subtasks in sequence. In each demonstration, the robot reaches out and touches its end-effector to a banana, using foveated images to identify the fruit and a reactive global action to reach it (see Figure 13).
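Foveated vision here means attending to a small high-resolution window around a gaze point rather than processing the whole image. A minimal sketch of such a crop follows, with an assumed gaze point and window size (neither is specified in the article):

```python
def foveate(image, gaze_row, gaze_col, size=32):
    """Crop a size x size window centred on the gaze point.

    `image` is a 2-D list of pixel values; the crop is clamped to the
    top-left when the gaze point lies near an image edge.
    """
    half = size // 2
    top = max(0, gaze_row - half)
    left = max(0, gaze_col - half)
    return [row[left:left + size] for row in image[top:top + size]]

# A dummy 100 x 100 image with the gaze fixed on its centre.
image = [[0] * 100 for _ in range(100)]
crop = foveate(image, 50, 50)
print(len(crop), len(crop[0]))  # 32 32
```

Feeding only this small crop to the fine-manipulation policy keeps precise actions focused on the banana tip, while the coarse global action works from the full view.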
Scientists recently used the same approach to teach a robot cheese fondue dipping skills. After each step was demonstrated, the machine replicated it without human input, producing results very similar to how a person would perform the task. See the video below for the results of the study, published online in Robotics & Automation Letters by co-authors Yoshiyuki Ohmura and Yasuo Kuniyoshi of the University of Tokyo.