Where is the humerus? A tale of two reference frames...
In my first Ph.D. project, I robotically replicated the motion of the humerus as recorded via motion capture while subjects performed activities typical of an active amputee. The first task of this project was to program the position and orientation of the humerus onto the robot. In this post, I describe my method for accomplishing this task. To me, this is an interesting topic because it uses the same concepts as my previous post on establishing the position and orientation of a rigid body, but here the pen and paper are replaced by a robot and a motion-tracking system.
At first glance, this seemed like a straightforward undertaking, but I had to piece together several steps to accomplish it. Along the way, I:
- Designed and 3D-printed a hemisphere rigid body to house the diodes of the motion-tracking system.
- Wrote C# code to start and track the status of motion programs on the robot.
- Wrote C++ code to initiate and save recordings from the motion-tracking system.
- Wrote MATLAB code to perform plane and sphere fits.
Ascertaining the position and orientation of the humerus from the perspective of the motion-tracking system was the easy step. I utilized the digitizing probe of the motion-tracking system (Optotrak Certus) to digitize points around the humeral head, shaft, and epicondyles. By performing a sphere fit of the humeral head points, I calculated the humeral head center. The humeral head center, together with the epicondyles, allowed me to define the position and orientation of the humerus according to standards set by the International Society of Biomechanics. Below is an interactive 3D plot, generated using Plotly.js, demonstrating the points I digitized on the 3D-printed humerus.
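For the curious, a sphere fit like this can be posed as a linear least-squares problem. Below is a minimal MATLAB sketch of that idea; the function and variable names are illustrative, not my actual code.

```matlab
% Algebraic least-squares sphere fit: find the center (a, b, c) and radius r
% such that (x - a)^2 + (y - b)^2 + (z - c)^2 = r^2 for the digitized points.
% pts is an N-by-3 matrix of digitized points (e.g., around the humeral head).
function [center, radius] = sphereFit(pts)
    A = [2 * pts, ones(size(pts, 1), 1)];   % linear system A * m = b
    b = sum(pts.^2, 2);
    m = A \ b;                               % least-squares solution
    center = m(1:3)';
    radius = sqrt(m(4) + sum(center.^2));
end
```

The fitted center is what I used as the humeral head center above.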
Once I measured the position and orientation of the humerus from the perspective of the motion-tracking system, I needed to relay this information to the robot. Put another way, it was necessary to interpret the position and orientation of the humerus from the perspective of the robot, rather than that of the motion-tracking system. This is typically referred to as spatial synchronization, and a variety of solutions exist. For several reasons, including speed and ease, I settled on the procedure outlined below.
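In terms of homogeneous transforms (notation introduced here just for clarity), the idea is a change of reference frame: if \(\boldsymbol{T}^{O}_{R}\) is the robot reference frame expressed in the motion-tracking (Optotrak) frame and \(\boldsymbol{T}^{O}_{H}\) is the humerus expressed in that same frame, then the humerus expressed in the robot frame is

\[ \boldsymbol{T}^{R}_{H} = \left(\boldsymbol{T}^{O}_{R}\right)^{-1} \boldsymbol{T}^{O}_{H}. \]

The procedure below is all in service of measuring \(\boldsymbol{T}^{O}_{R}\).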
First, I 3D-printed a hemisphere, evenly covered it in the diodes that our motion-tracking system utilized, and attached it to the last link of the robot (see video below). I chose a hemisphere design because I thought that it would allow each diode to smoothly glide in and out of the view of the camera. At this point, I could measure the position and orientation of the 3D-printed hemisphere using our motion-tracking system, although at this stage only the position is necessary. Next, I programmed the robot to perform the following motions (as seen in the video below):
- Rotate its 1st joint through a ~180° sweep.
- Rotate its 2nd joint through a ~70° sweep.
- Rotate its 4th joint through a ~90° sweep.
- Rotate around its origin (actually the robot's default tool center point, but that detail is irrelevant for this post).
Recording the hemisphere position while the robot performed the motions above provided the data necessary to compute the robot reference frame from the perspective of the motion-tracking system. As seen in the interactive plot below, the trajectories traced by the hemisphere as the robot rotates its 1st, 2nd, and 4th joints define the Z, Y, and X axes of the robot, respectively. In each instance, the coordinate axis is defined as the normal of a plane fit to the hemisphere trajectory. The origin of the robot was computed as the center of a sphere fit to the point cloud generated by the hemisphere as the robot rotated around its origin. The X, Y, and Z axes and the origin completely define the reference frame of the robot. Once I established the reference frame of the robot with respect to the motion-tracking system, expressing the position and orientation of the humerus with respect to the robot was a simple matter of applying a coordinate system transformation (sketched below).
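Here is a minimal MATLAB sketch of that computation, under the assumption that traj1, traj2, and traj4 hold the N-by-3 hemisphere positions recorded during the joint 1, 2, and 4 sweeps, originPts holds the positions recorded while rotating about the origin, and T_opto_humerus is the 4x4 humerus pose from the digitization step. These names are illustrative, not my actual code.

```matlab
% Build the robot reference frame, as seen by the motion-tracking system,
% from the hemisphere recordings, then re-express the humerus pose in it.

z = planeNormal(traj1);                 % joint 1 sweep -> Z axis
y = planeNormal(traj2);                 % joint 2 sweep -> Y axis
x = planeNormal(traj4);                 % joint 4 sweep -> X axis
origin = sphereFit(originPts);          % sphere fit, as in the earlier sketch

% Robot axes in the motion-tracking frame. In practice this matrix is only
% approximately orthonormal and gets orthogonalized via the SVD (see the
% note at the end of the post).
R = [x(:), y(:), z(:)];

T_opto_robot = [R, origin(:); 0 0 0 1];            % robot frame in Optotrak frame
T_robot_humerus = T_opto_robot \ T_opto_humerus;   % humerus pose in robot frame

function n = planeNormal(pts)
    % Plane fit via SVD: the unit normal is the right singular vector
    % associated with the smallest singular value of the centered points.
    centered = pts - mean(pts, 1);
    [~, ~, V] = svd(centered, 'econ');
    n = V(:, 3);
end
```

One piece of housekeeping I've omitted: the sign of each fitted normal is ambiguous, so in practice it has to be chosen (e.g., from the direction of the joint sweep) so that X, Y, and Z form a right-handed set.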
An accurate estimate of the position and orientation of the humerus with respect to the robot allowed me to precisely control the motion of the humerus. The video recording below demonstrates this accuracy: I instructed the robot to rotate the humerus while keeping the humeral head still (i.e., rotation with no translation). In subsequent work, I replicated the motion of the humerus during jumping jacks, jogging, lifting a gallon jug, and internal rotation.
I have left out many details that would make this post too lengthy. But, in case you are wondering, I utilized the Singular Value Decomposition (SVD) to orthogonalize the matrix that specifies the robot reference frame: \(\boldsymbol{R} = \begin{bmatrix} X & Y & Z \end{bmatrix}\). Of course, I also utilized the SVD for each plane fit. And I've written previously about utilizing the SVD to compute the position and orientation of a body segment from skin marker motion capture. It's no wonder the SVD has been called "one of science's superheroes in the fight against monstrous data". Finally, I should mention that performing a plane fit to determine an axis in space is, on its own, a poor choice due to numerical sensitivity concerns. However, combining 3 plane fits with the SVD seems to nullify these concerns. The precision (mean deviation) of the above procedure was 0.3 mm for position and 0.3° for orientation across 67 iterations.
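For completeness, here is a minimal sketch of that orthogonalization step in MATLAB, assuming R holds the measured (not quite orthonormal) axes as columns:

```matlab
% Closest rotation matrix (in the Frobenius-norm sense) to the measured axes
% R = [X Y Z], obtained by replacing R's singular values with ones.
[U, ~, V] = svd(R);
R_ortho = U * V';
if det(R_ortho) < 0        % guard against a reflection
    U(:, 3) = -U(:, 3);
    R_ortho = U * V';
end
```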