Predicting Grasping Order in Cluttered Environments Using Both Color Images and Point Clouds
    Grasp selection for an individual object has been studied for many years, and many feasible approaches have been proposed. However, for a robot serving in daily life this capability is far from sufficient, because everyday scenes are usually cluttered: objects influence one another, occlude each other front to back, and are stacked on top of one another. In this paper, we focus on grasp planning and selection in such cluttered scenes. The first step is to segment the objects to be manipulated from the input image. We use the Faster R-CNN model to detect objects in the color image, since this model performs well at multi-object detection. For planning the correct grasping order in a cluttered scene, however, color information alone is not enough, so we combine it with the geometric information of the point cloud. Specifically, we use the camera extrinsic parameters to transfer the bounding boxes of objects detected in the color image into the point cloud, and then compute the grasping order from the geometric information of the point cloud. In our experiments, we demonstrate the whole pipeline and grasp common objects in a cluttered environment with a KINOVA arm and an underactuated hand of our own design.
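
The abstract does not spell out how the bounding boxes are lifted into 3D or what criterion fixes the grasping order, so the following is only a minimal sketch of one plausible realization: the function names (box_to_points, grasping_order), the pinhole back-projection through intrinsics K and extrinsics T, and the top-down "highest object first" heuristic are all assumptions, not the paper's actual method.

```python
import numpy as np

def box_to_points(depth, box, K, T_cam_to_world):
    """Back-project the depth pixels inside a 2D bounding box into a
    world-frame point cloud, using intrinsics K (3x3) and the camera
    extrinsics T_cam_to_world (4x4). All names here are illustrative."""
    u0, v0, u1, v1 = box
    vs, us = np.mgrid[v0:v1, u0:u1]
    z = depth[v0:v1, u0:u1].ravel()
    valid = z > 0                                  # drop missing depth readings
    us, vs, z = us.ravel()[valid], vs.ravel()[valid], z[valid]
    # Pinhole back-projection: X = (u - cx) * z / fx, Y = (v - cy) * z / fy
    x = (us - K[0, 2]) * z / K[0, 0]
    y = (vs - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)])  # 4 x N homogeneous points
    return (T_cam_to_world @ pts_cam)[:3].T         # N x 3 world-frame points

def grasping_order(detections, depth, K, T_cam_to_world):
    """Rank detected objects for grasping. As a stand-in for the paper's
    geometric reasoning, this sketch simply picks the highest object first,
    so items on top of a stack are removed before those beneath them."""
    scored = []
    for label, box in detections:                   # detections from Faster R-CNN
        pts = box_to_points(depth, box, K, T_cam_to_world)
        scored.append((pts[:, 2].mean(), label))    # mean height of the object
    return [label for _, label in sorted(scored, reverse=True)]
```

In practice the ordering criterion would also need to account for front-to-back occlusion, not just stacking height; the sketch only illustrates how 2D detections and point-cloud geometry can be combined in one pipeline.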