Abstract

In this paper, we address the robot grasping problem for parallel grippers using image data, and propose and implement an end-to-end approach. To detect good grasping poses for a parallel gripper from RGB images, we employ transfer learning on a Convolutional Neural Network (CNN) based object detection architecture. Our results show that the adapted network either outperforms or is on par with state-of-the-art methods on a benchmark dataset. We also performed grasping experiments on a real robot platform to evaluate the method's real-world performance.
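The abstract describes the approach only at a high level. As an illustration of what transfer learning on a CNN object detector for grasp detection can look like, the following is a minimal PyTorch/torchvision sketch. The choice of Faster R-CNN with a ResNet-50 FPN backbone, the single hypothetical "graspable region" class, and the dummy training step are assumptions for illustration only, not the paper's exact architecture or training setup; grasp orientation, for example, is often handled by discretizing the angle into additional classes and is omitted here.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Assumption: a COCO-pretrained detector is repurposed for grasp detection.
# This is an illustrative setup, not necessarily the paper's architecture.
num_classes = 2  # hypothetical: background + "graspable region"

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the classification/box-regression head so it predicts grasp regions.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Typical transfer-learning setup: fine-tune with a small learning rate.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9, weight_decay=0.0005,
)

# One dummy training step on a single RGB image with one grasp box.
model.train()
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 220.0, 180.0]]),  # x1, y1, x2, y2
    "labels": torch.tensor([1]),
}]
loss_dict = model(images, targets)
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```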

Keywords

Grippers, Artificial intelligence, GRASP, Computer science, Object detection, Convolutional neural network, Benchmark (surveying), Robot, Computer vision, Focus (optics), Task (project management), Object (grammar), Deep learning, Transfer of learning, RGB color model, Cognitive neuroscience of visual object recognition, Pattern recognition (psychology), Engineering

Publication Info

Year: 2019
Type: Article
Pages: 4953-4959
Citations: 122
Access: Closed

Cite This

Hakan Karaoğuz, Patric Jensfelt (2019). Object Detection Approach for Robot Grasp Detection. 2019 International Conference on Robotics and Automation (ICRA), 4953-4959. https://doi.org/10.1109/icra.2019.8793751

Identifiers

DOI
10.1109/icra.2019.8793751