[Figure 8(a): robot and car trajectories plotted in x (m) versus y (m), with time stamps (s); Figure 8(b): measured and commanded vx (m/s), vy (m/s), and ωz (rad/s) versus time (s).]

Figure 8. The performance of the integrated system during the time-critical factory task. (a) A typical robot trajectory for the test scenario, with the assembly task performed only on the left side of a moving car (goal A, as depicted in Figure 2). The gray area denotes the dynamic surface, the yellow polygon indicates the robot, and the blue rectangle represents the moving car on the conveyor belt. The orange arrow indicates the direction of robot motion. Time stamps (in seconds) for the robot and car are provided in the figure. Once the robot is positioned next to the car (goal A), it remains stationary relative to the car. This is observed as the linear motion of the robot along the assembly line. (b) The robot's absolute velocity for the trial depicted in (a). The commanded velocity is autonomously generated by the robot's motion planner. Note that the commanded velocity is zero when the robot is positioned at goal A. The measured robot velocity is obtained by postprocessing the estimate of the robot's pose from the localization algorithm. The system successfully tracked the velocity commands during the demonstration.

[Figure 9: robot trajectory plotted in x (m) versus y (m), with time stamps (s).]

Figure 9. The robot trajectory for the test scenario during which the assembly task was performed on both sides of the car (goal A and goal B, as depicted in Figure 2). The orange arrow denotes the direction of robot motion. This task required the robot to travel farther and spend more time on the dynamic surface than the single-side task.

maintainability. In addition, techniques for predicting human motion and conveying robot intent will be required to achieve anticipatory robot behavior and realize the benefits in task performance.

The modular design of our control and sensing system will enable the development and testing of algorithms for human-robot collaboration without additional consideration of dynamic surfaces and will facilitate the integration of these