create the grid line on the floor, which allows better depth perception for the user wearing the AR headset. The ARROW marker is used to visualize the motion intent of the robot. The arrow emanates from the robot model, with its tip located at the goal point the robot is moving toward. The arrow is continually updated by the ROS application as the robot moves closer to the goal point.

The AR system was validated in a study with 15 participants, whose experiences were surveyed with a questionnaire. The results are summarized in Figure 8(c). A lower rating in perceived collision risk (2.3 with AR compared to 4.1 without AR) and a higher rating in task efficiency (5.6 with AR versus 4.4 without AR) suggest that the AR system improves the experience of working in a shared workspace with the mobile robot in both respects.

Visualizing Robotic Arm's Goal Pose During Object's Handover

In the work by Newbury et al. [13], ARviz was used to visualize the motion intent of the Franka Emika Panda robotic arm when performing handover tasks with a human collaborator. ARviz was also used to communicate to the robot when to initiate the handover task. This application [shown in Figure 9(b)] used two Display Plugins and one Tool Plugin. The VisualizationMarkerArray Display is used to visualize a wireframe of the object at its detected location during the handover. The wireframe is created with a collection of LINE markers; the position and orientation of the lines are defined and updated at a fixed rate by the ROS system. The grasp pose of the robotic gripper is visualized with the Stamped Pose Display. A preloaded mesh of the Franka Emika Panda's end effector is used to customize the Stamped Pose Display Plugin, rendered with low opacity so that the virtual gripper does not occlude the user's view of the object during the handover. The Tool Plugin used by this AR application is the Voice Command Tool Plugin.
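The ARROW marker described above carries its geometry in two points: the tail at the robot's current position and the tip at the goal. The sketch below shows how such a marker's fields could be filled; it uses plain Python dictionaries as a stand-in for the ROS visualization_msgs/Marker message so it runs without a ROS installation. The field names and the ARROW type constant follow the real message definition, but the helper function, namespace, and scale/color values are illustrative assumptions, not the authors' code.

```python
def make_intent_arrow(robot_pos, goal_pos, frame_id="map"):
    """Build the fields of a visualization_msgs/Marker of type ARROW.

    robot_pos, goal_pos: (x, y, z) tuples. When a Marker's `points`
    list holds exactly two points, the arrow is drawn from the first
    point (tail) to the second (tip).
    """
    ARROW = 0  # visualization_msgs/Marker type constant for ARROW
    return {
        "header": {"frame_id": frame_id},
        "ns": "motion_intent",             # hypothetical namespace
        "id": 0,
        "type": ARROW,
        "points": [robot_pos, goal_pos],   # tail at robot, tip at goal
        "scale": (0.05, 0.1, 0.1),         # shaft dia., head dia., head length (m)
        "color": (0.0, 1.0, 1.0, 1.0),     # cyan, fully opaque (RGBA)
    }

# As the robot moves, the application republishes the marker with the
# updated robot position while the tip stays fixed at the goal point.
marker = make_intent_arrow((1.0, 2.0, 0.0), (4.0, 2.0, 0.0))
```

Republishing the same namespace and id replaces the previous arrow rather than accumulating markers, which is what allows the continual update as the robot approaches the goal.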
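A wireframe built from LINE markers amounts to drawing the edges of the detected object's bounding geometry as line segments. A minimal sketch of the geometry side, assuming an axis-aligned box of known dimensions (the function and the box sizes are illustrative, not taken from the cited system):

```python
from itertools import product

def box_wireframe_edges(size_x, size_y, size_z):
    """Return the 12 edges of an axis-aligned box centered at the
    origin, as (start, end) point pairs suitable for a line-segment
    marker that draws one segment per pair of points."""
    hx, hy, hz = size_x / 2, size_y / 2, size_z / 2
    corners = list(product((-hx, hx), (-hy, hy), (-hz, hz)))
    edges = []
    for i, a in enumerate(corners):
        for b in corners[i + 1:]:
            # Two corners share an edge when they differ in exactly
            # one coordinate.
            if sum(ai != bi for ai, bi in zip(a, b)) == 1:
                edges.append((a, b))
    return edges

edges = box_wireframe_edges(0.2, 0.1, 0.05)  # 12 edges of a small box
```

The endpoint pairs would populate the marker's points list, with the object's detected pose carried in the marker header, and the whole MarkerArray republished at a fixed rate as the detection updates.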
With this Tool Plugin, a

Figure 8. ARviz use case 2: visualizing a mobile robot and communicating its motion intent. (a) The mobile robot is not visible to the user at a T-junction. (b) A representation of what the user sees through the AR headset: the robot model is visualized, enabling the user to see the robot through the wall; the cyan arrow emanating from the robot model shows its direction of motion; the floor is shown as a grid to improve depth perception. (c) User study survey results showing the benefit of the ARviz implementation in reducing collision risk and increasing task efficiency. Q1, collision risk: 4.1 (σ = 1.7) without AR versus 2.3 (σ = 1) with AR, p = 0.0002. Q2, task efficiency: 4.4 (σ = 1.8) without AR versus 5.6 (σ = 1.2) with AR, p = 0.01. (Source: [12].)

Figure 9. ARviz use case 3: visualizing a robotic arm's goal pose during object's handover. (a) The robot picking up an object from the user's hand; the user is wearing an AR headset. (b) The detected pose of the object and an AR visualization of how the robot is planning to grasp the object. (c) Survey results from the user study, rated on 1–5 scales for task fluency, trust, and safety, showing that the ARviz-supported interaction improved all three compared to the no-AR condition. (Source: [13].)

MARCH 2022 • IEEE ROBOTICS & AUTOMATION MAGAZINE • 65