This method takes as input an RGB image of a scene and estimates the 6D pose (3D position and 3D orientation) of a target object in the image. The technology is applicable to any scenario where the 6D pose of a target rigid object needs to be recovered.
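As an illustration, the sketch below shows how such a 6D pose output is commonly represented in code: a 3x3 rotation matrix for orientation and a 3-vector translation for position, packed into a 4x4 rigid transform. This is a minimal, hypothetical example of the output format, not the Centre's actual implementation; the function name `pose_to_matrix` and the sample values are assumptions for illustration only.

```python
# Minimal sketch (assumed representation, not the Centre's implementation):
# a 6D pose is an orientation (3x3 rotation matrix) plus a position
# (3-vector translation), often combined into a 4x4 homogeneous transform.
import numpy as np

def pose_to_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical example: an object rotated 90 degrees about the camera's
# z-axis and located 0.5 m in front of the camera.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.0, 0.5])
print(pose_to_matrix(R, t))
```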
6D pose estimation of rigid objects is key to enabling human-robot collaboration in automated industrial processes. Estimating the 6D pose of a rigid object from RGB images is challenging, especially for textureless objects with strong symmetry: such objects offer only sparse visual features for the task, and their symmetry introduces pose ambiguity.
The Centre for Transformative Garment Production (TransGP) was established through the collaborative effort of The University of Hong Kong and Tohoku University. The Centre aims to provide solutions for the needs of a future society in which labour shortages arise from population ageing and an increasing share of the population lives in megacities. The Centre also aims to drive a paradigm shift in reindustrializing selected sectors, namely the garment industry, which still relies on labour-intensive operations yet has clearly identified processes for transformation. A number of goals are expected to be achieved through the Centre’s research programmes, such as leveraging proprietary AI and robotics technology to shorten development cycles, improve engineering efficiency, prevent faults, and increase safety by automating risky activities.