Güler, Püren and Bekiroglu, Yasemin and Pauwels, Karl and Kragic, Danica

IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, Illinois, 2014

Robots operating in household environments need to interact with food containers of different types. Whether a container is filled with milk, juice, yogurt, or coffee may affect the way robots grasp and manipulate it. In this paper, we concentrate on the problem of identifying what kind of content a container holds by using tactile and/or visual feedback in combination with grasping. In particular, we investigate the benefits of using uni-modal (visual or tactile) versus bi-modal (visual-tactile) sensory data for this purpose. We direct our study toward cardboard containers that hold liquid or solid content, or are empty. The motivation for using grasping rather than shaking is that we want to identify the content before applying manipulation actions to a container. We employ and compare different learning methods: k-means, k-nearest-neighbor, and Quadratic Discriminant Analysis. Our results show that we generally achieve comparable classification rates with uni-modal data, but also that the visual and tactile data are complementary. Thus, integrating visual and tactile data can improve the overall classification performance.
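To illustrate the kind of comparison the abstract describes, here is a minimal sketch using scikit-learn, with synthetic stand-in features. The feature dimensions, the feature-level concatenation used for fusion, and the synthetic data are illustrative assumptions, not the paper's actual pipeline or dataset:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature dimensions: tactile (e.g., pressure readings during a
# grasp) and visual (e.g., deformation descriptors). Not from the paper.
n_samples, d_tactile, d_visual = 300, 16, 32
labels = rng.integers(0, 3, n_samples)  # 0: liquid, 1: solid, 2: empty

# Synthetic class-dependent Gaussian features stand in for real sensor data.
X_tactile = rng.normal(labels[:, None], 1.0, (n_samples, d_tactile))
X_visual = rng.normal(labels[:, None], 1.5, (n_samples, d_visual))
X_bimodal = np.hstack([X_tactile, X_visual])  # simple feature-level fusion

# Compare uni-modal vs. bi-modal inputs across two of the classifiers named
# in the abstract, using 5-fold cross-validated accuracy.
for name, X in [("tactile", X_tactile), ("visual", X_visual),
                ("visual-tactile", X_bimodal)]:
    for clf in (KNeighborsClassifier(n_neighbors=5),
                QuadraticDiscriminantAnalysis()):
        score = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"{name:15s} {type(clf).__name__:30s} {score:.2f}")
```

On real data, whether the concatenated visual-tactile features outperform either modality alone would depend on how complementary the two streams actually are, which is the question the paper studies.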