Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/38967
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Hong, Yuanlin | - |
dc.contributor.author | CHEN, Junhong | - |
dc.contributor.author | Cheng, Yu | - |
dc.contributor.author | Han, Yishi | - |
dc.contributor.author | VAN REETH, Frank | - |
dc.contributor.author | CLAESEN, Luc | - |
dc.contributor.author | Liu, Wenyin | - |
dc.date.accessioned | 2022-12-02T14:00:49Z | - |
dc.date.available | 2022-12-02T14:00:49Z | - |
dc.date.issued | 2022 | - |
dc.date.submitted | 2022-11-14T19:05:53Z | - |
dc.identifier.citation | Frontiers in Neurorobotics, 16 (Art N° 1041702) | - |
dc.identifier.issn | 1662-5218 | - |
dc.identifier.uri | http://hdl.handle.net/1942/38967 | - |
dc.description.abstract | Obtaining accurate depth information is key to robot grasping tasks. However, RGB-D cameras have difficulty perceiving transparent objects owing to their refraction and reflection properties, which makes it difficult for humanoid robots to perceive and grasp everyday transparent objects. To remedy this, existing studies usually remove transparent object areas using a model that learns patterns from the remaining opaque areas so that depth estimation can be completed. Notably, this frequently leads to deviations from the ground truth. In this study, we propose a new depth completion method [i.e., ClueDepth Grasp (CDGrasp)] that works more effectively with transparent objects in RGB-D images. Specifically, we propose a ClueDepth module, which leverages a geometry method to filter out refractive and reflective points while preserving the correct depths, consequently providing crucial positional clues for object location. To acquire sufficient features to complete the depth map, we design a DenseFormer network that integrates DenseNet to extract local features and swin-transformer blocks to obtain the required global information. Furthermore, to fully utilize the information obtained from multi-modal visual maps, we devise a Multi-Modal U-Net Module to capture multiscale features. Extensive experiments conducted on the ClearGrasp dataset show that our method achieves state-of-the-art performance in terms of accuracy and generalization of depth completion for transparent objects, and successful grasping by a humanoid robot verifies the efficacy of the proposed method. (An illustrative sketch of the depth-clue filtering step appears after the metadata table below.) | - |
dc.description.sponsorship | This work was supported by the National Natural Science Foundation of China (No. 91748107, No. 62076073, No. 61902077), the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515010616), the Science and Technology Program of Guangzhou (No. 202102020524), the Guangdong Innovative Research Team Program (No. 2014ZT05G157), the Special Funds for the Cultivation of Guangdong College Students’ Scientific and Technological Innovation (pdjh2020a0173), the Key-Area Research and Development Program of Guangdong Province (2019B010136001), and the Science and Technology Planning Project of Guangdong Province (LZC0023). | - |
dc.language.iso | en | - |
dc.publisher | Frontiers Media SA | - |
dc.rights | © 2022 Hong, Chen, Cheng, Han, Reeth, Claesen and Liu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. | - |
dc.subject.other | depth completion | - |
dc.subject.other | transparent objects | - |
dc.subject.other | grasping | - |
dc.subject.other | deep learning | - |
dc.subject.other | robot | - |
dc.title | ClueDepth Grasp: Leveraging positional clues of depth for completing depth of transparent objects | - |
dc.type | Journal Contribution | - |
dc.identifier.volume | 16 | - |
local.format.pages | 13 | - |
local.bibliographicCitation.jcat | A1 | - |
local.publisher.place | AVENUE DU TRIBUNAL FEDERAL 34, LAUSANNE, CH-1015, SWITZERLAND | - |
local.type.refereed | Refereed | - |
local.type.specified | Article | - |
local.bibliographicCitation.artnr | 1041702 | - |
dc.identifier.doi | 10.3389/fnbot.2022.1041702 | - |
dc.identifier.pmid | 36425928 | - |
dc.identifier.isi | 000889800400001 | - |
local.provider.type | CrossRef | - |
local.uhasselt.international | yes | - |
item.validation | ecoom 2023 | - |
item.contributor | Hong, Yuanlin | - |
item.contributor | CHEN, Junhong | - |
item.contributor | Cheng, Yu | - |
item.contributor | Han, Yishi | - |
item.contributor | VAN REETH, Frank | - |
item.contributor | CLAESEN, Luc | - |
item.contributor | Liu, Wenyin | - |
item.fullcitation | Hong, Yuanlin; CHEN, Junhong; Cheng, Yu; Han, Yishi; VAN REETH, Frank; CLAESEN, Luc & Liu, Wenyin (2022) ClueDepth Grasp: Leveraging positional clues of depth for completing depth of transparent objects. In: Frontiers in Neurorobotics, 16 (Art N° 1041702). | - |
item.fulltext | With Fulltext | - |
item.accessRights | Open Access | - |
crisitem.journal.issn | 1662-5218 | - |
crisitem.journal.eissn | 1662-5218 | - |
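The abstract above outlines a geometry-based filter (the ClueDepth module) that discards refraction- and reflection-corrupted depth readings on transparent objects while keeping trustworthy readings as positional clues for a downstream completion network. As a rough illustration only (this is not the authors' code; the neighborhood-median consistency test, the `max_jump` threshold, and all names are assumptions), such a filtering step might look like:

```python
import numpy as np
from scipy.ndimage import median_filter

def filter_depth_clues(depth, trans_mask, max_jump=0.05):
    """Hypothetical sketch of a ClueDepth-style clue filter (not the paper's code).

    Inside the transparent-object mask, keep only depth readings that agree
    with their local neighborhood; refracted/reflected points tend to appear
    as large, isolated depth jumps. Filtered-out pixels are zeroed so a
    completion network can re-estimate them.

    depth      : (H, W) float array of metric depth; 0 marks a missing reading
    trans_mask : (H, W) bool array, True on transparent-object pixels
    max_jump   : assumed depth-consistency tolerance in metres
    """
    # Median of each 3x3 neighborhood serves as a cheap local-geometry reference.
    local_ref = median_filter(depth, size=3)
    # Flag points inside the mask that deviate strongly from their neighborhood.
    unreliable = trans_mask & (np.abs(depth - local_ref) > max_jump)
    filtered = depth.copy()
    filtered[unreliable] = 0.0  # zero depth = "to be completed"
    return filtered
```

Under these assumptions, the surviving points play the role of the positional clues in the paper's title, and the zeroed pixels are what the DenseFormer and Multi-Modal U-Net stages would then complete from the RGB image and the remaining depth.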
Appears in Collections: | Research publications |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
fnbot-16-1041702.pdf | Published version | 2.71 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.