Robots will use the latest computer-vision and machine-learning algorithms to try to perform the work done by humans in vast fulfillment centers.
Packets of Oreos, boxes of crayons, and squeaky dog toys will test the limits of robot vision and manipulation in a competition this May. Amazon is organizing the event to spur the development of more nimble-fingered product-packing machines.
Participating robots will earn points by locating products sitting somewhere on a stack of shelves, retrieving them safely, and then packing them into cardboard shipping boxes. Robots that accidentally crush a cookie or drop a toy will have points deducted. The people whose robots earn the most points will win $25,000.
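A scoring rule of this shape is easy to sketch in code. The point values below are purely hypothetical; the article specifies only that successful picks earn points, that crushed or dropped items cost points, and that the top total wins $25,000.

```python
# Hypothetical sketch of the contest's scoring logic; Amazon's actual
# point values are not given in the article, so these are invented.

POINTS_PER_PICK = 10   # assumed reward for a successful pick-and-pack
PENALTY_DAMAGED = -5   # assumed deduction for crushing an item
PENALTY_DROPPED = -3   # assumed deduction for dropping an item

def score_attempt(picked: bool, damaged: bool, dropped: bool) -> int:
    """Return the points for one pick attempt under the assumed rules."""
    points = POINTS_PER_PICK if picked else 0
    if damaged:
        points += PENALTY_DAMAGED
    if dropped:
        points += PENALTY_DROPPED
    return points

# Example: one clean pick, then a pick that crushes a packet of cookies.
total = score_attempt(True, False, False) + score_attempt(True, True, False)
print(total)  # 10 + (10 - 5) = 15
```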
Amazon has already automated some of the work done in its vast fulfillment centers. Robots in a few locations send shelves laden with products over to human workers, who then grab and package them. These mobile robots, made by Kiva Systems, a company that Amazon bought in 2012 for $775 million, reduce the distance human workers have to walk in order to find products. However, no robot can yet pick and pack products with the speed and reliability of a human. Industrial robots, already widespread in several industries, are limited to extremely precise, repetitive work in highly controlled environments.
Pete Wurman, chief technology officer of Kiva Systems, says that about 30 teams from academic departments around the world will take part in the challenge, which will be held at the International Conference on Robotics and Automation (ICRA 2015) in Seattle. In each round, robots will be told to pick and pack one of 25 different items from a stack of shelves resembling those found in Amazon’s warehouses. Some teams are developing their own robots, while others are adapting commercially available systems with their own grippers and software.
The challenge facing the robots in Amazon’s contest will be considerable. Humans have a remarkable ability to identify objects, figure out how to manipulate them, and then grasp them with just the right amount of force. This is especially hard for machines to do if an object is unfamiliar, awkwardly shaped, or sitting on a dark shelf with a bunch of other items. In the Amazon contest, the robots will have to work without any remote guidance from their creators.
“We tried to pick out a variety of different products that were representative of our catalogue and that pose different kinds of grasping challenges,” says Wurman. “Like plastic wrap; difficult-to-grab little dog toys; things you don’t want to crush, like the Oreos.”
The video below shows the approach taken by a team at the University of Colorado. The team is using off-the-shelf software and building a robot arm specialized for the task, says Dave Coleman, a PhD student involved in the project.
The contest could offer a way to judge the progress made in the past few years, as cheaper, safer, and more adaptable robots have emerged thanks to advances in the technologies underlying machine dexterity (see “How Technology Is Destroying Jobs”). New types of robot manipulators are making machines less ham-handed at picking up fiddly or awkward objects, for example. Several startups are developing robot hands that seek to copy the flexibility and sense of touch found in human digits. Progress in machine learning could help robots perform far more sophisticated object manipulation in coming years.
A key breakthrough in this area came in 2006, when a group of researchers led by Andrew Ng, then at Stanford and now at Baidu, devised a way for robots to work out how to manipulate unfamiliar objects. Instead of writing rules for how to grasp a specific object or shape, the researchers enabled their robot to study thousands of 3-D images and learn to recognize which types of grip would work for different shapes. This allowed it to figure out suitable grips for new objects.
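The general recipe can be sketched in a few lines. What follows is an illustrative toy, not Ng’s actual system: a classifier is trained on labeled shape features (synthetic stand-ins here for features a real system would extract from 3-D images) and then predicts whether a grip will work on an object it has never seen. The feature set and the success rule are invented for the example.

```python
# Toy sketch of learning-based grasping: learn which grips work from
# labeled examples instead of hand-writing a rule per object shape.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in shape features (all synthetic): width, height, depth, curvature.
X = rng.uniform(0.0, 1.0, size=(2000, 4))

# Invented label: assume a pinch grip succeeds only on objects narrow
# enough for the gripper and not too strongly curved.
y = ((X[:, 0] < 0.5) & (X[:, 3] < 0.6)).astype(int)

# Learn the mapping from shape features to grasp success.
clf = LogisticRegression().fit(X, y)

# A "new" object the classifier never saw: narrow and fairly flat.
new_object = np.array([[0.3, 0.8, 0.4, 0.2]])
print(clf.predict_proba(new_object)[0, 1])  # predicted chance the grip works
```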
In recent years, robotics researchers have increasingly used a powerful machine-learning approach known as deep learning to improve these capabilities (see “10 Breakthrough Technologies 2013: Deep Learning”). Ashutosh Saxena, a member of Ng’s team at Stanford and now an assistant professor at Cornell University, is using deep learning to train a robot that will take part in the Amazon challenge. He is working with one of his students, Ian Lenz.
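In broad strokes, that approach replaces the hand-picked features with a neural network that scores candidate grasps. The sketch below is illustrative rather than Lenz and Saxena’s actual system: a small feed-forward network (here in PyTorch, on synthetic data) learns to score candidates, and the robot executes the highest-scoring one. The architecture, features, and labels are all assumptions.

```python
# Illustrative sketch of deep learning for grasp detection: a small
# network scores candidate grasps; the best-scoring grasp is executed.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Each candidate grasp is a flattened patch of (synthetic) depth values
# around a proposed gripper pose; 1 = the grasp held, 0 = it failed.
X = torch.rand(1024, 64)
y = (X.mean(dim=1) > 0.5).float().unsqueeze(1)  # synthetic success labels

# Two hidden layers, sigmoid output: a success score in [0, 1] per candidate.
net = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCELoss()

for _ in range(200):  # brief training loop on the toy data
    opt.zero_grad()
    loss = loss_fn(net(X), y)
    loss.backward()
    opt.step()

# At pick time, score a batch of candidate grasps and take the best one.
candidates = torch.rand(10, 64)
best = torch.argmax(net(candidates))
print(f"execute candidate grasp {best.item()}")
```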
While the Amazon challenge might seem simple, Saxena believes it could quickly make an impact in the real world. “If robots are able to handle even the light types of grasping tasks the contest proposes,” he says, “we could actually start to see a lot of robots helping people with different tasks.”
