The frame-by-frame tracking framework in Figure 4 relies on a prior visual segmentation of the object in the image, obtained with a graph-cut-based segmentation technique that uses color cues. The corresponding segmented point cloud is first registered rigidly with a classical iterative closest point (ICP) method; the known mesh of the object is then fitted to the point cloud. The basic idea is to derive external forces exerted by the point cloud on the mesh and to integrate them into the finite-element method (FEM) model that deforms the mesh.

Figure 4. An overview of the developed approach for deformable object tracking: the RGB image and depth map are segmented, sampled, and backprojected into a point cloud; the mesh from the previous state is rigidly registered to this cloud using ICP and then deformably registered using closest-point correspondences and the FEM model. ICP: iterative closest point.
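As a rough illustration of the two registration stages described above, the sketch below implements point-to-point ICP over closest-point correspondences and then derives per-vertex external forces from the residuals toward the point cloud. It is not the authors' implementation: the function and parameter names (rigid_icp, closest_point_forces, stiffness) are illustrative, and the coupling of the resulting forces into the FEM deformation model is left out.

```python
# Minimal sketch, assuming NumPy/SciPy; names and parameters are illustrative.
import numpy as np
from scipy.spatial import cKDTree


def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src points to dst points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t


def rigid_icp(mesh_vertices, cloud, n_iters=30):
    """Classical point-to-point ICP: alternate closest-point matching and
    rigid alignment of the mesh vertices onto the segmented point cloud."""
    tree = cKDTree(cloud)
    verts = mesh_vertices.copy()
    for _ in range(n_iters):
        _, idx = tree.query(verts)                 # closest cloud point per vertex
        R, t = best_rigid_transform(verts, cloud[idx])
        verts = verts @ R.T + t
    return verts


def closest_point_forces(mesh_vertices, cloud, stiffness=1.0):
    """External force on each mesh vertex, proportional to the residual toward
    its closest point in the cloud; these forces would then drive the FEM model
    that deforms the mesh (the FEM step itself is not implemented here)."""
    tree = cKDTree(cloud)
    _, idx = tree.query(mesh_vertices)
    return stiffness * (cloud[idx] - mesh_vertices)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(500, 3))                    # stand-in segmented point cloud
    mesh = cloud[:200] + np.array([0.3, -0.1, 0.2])      # displaced mesh vertices
    mesh = rigid_icp(mesh, cloud)                        # rigid registration
    forces = closest_point_forces(mesh, cloud)           # per-vertex external forces
    print(forces.shape)
```

In this sketch the external forces simply pull each vertex toward its nearest observed point; in the pipeline of Figure 4 they would be injected into the FEM model so that the mesh deforms in a physically consistent way rather than vertex by vertex.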