Automatic location & direction tracking, and physical/virtual video!

Our basic goal is to build “cyber-physical systems” that combine physical parts, like robots, with virtual parts, like simulations.  This weekend, I finally got one working!

For robot localization, I’m now using the gradient-based bullseye detection algorithm described in my last post (or try my OpenCV bullseye detection code), but with a slightly blockier bulls-eye to make it easier to see.  For direction, I look at the color balance around the bullseye: I compare the center of mass of the red pixels (front of the robot) with the center of mass of the green pixels (back of the robot).  This gives me the robot’s direction reliably, and it only bounces around by a few degrees!  The direction finding code is here; the magic is cv::moments for finding the centers of mass.
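Here’s a minimal sketch of that direction finder, assuming you already have binary masks of the red and green pixels around the detected bullseye (the mask extraction and color thresholds aren’t shown, and the function name is just for illustration):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Heading in degrees from the red (front) and green (back) color blobs.
// redMask and greenMask are 8-bit binary images of the area around the bullseye.
double headingFromColors(const cv::Mat &redMask, const cv::Mat &greenMask)
{
    cv::Moments mr = cv::moments(redMask, /*binaryImage=*/true);
    cv::Moments mg = cv::moments(greenMask, /*binaryImage=*/true);
    if (mr.m00 == 0 || mg.m00 == 0) return NAN; // one color not visible

    // Center of mass of each blob: (m10/m00, m01/m00)
    cv::Point2d red(mr.m10 / mr.m00, mr.m01 / mr.m00);
    cv::Point2d green(mg.m10 / mg.m00, mg.m01 / mg.m00);

    // The robot points from the green blob (back) toward the red blob (front).
    return atan2(red.y - green.y, red.x - green.x) * 180.0 / CV_PI;
}
```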

Small mobile robot with tank treads and a colored pattern on top.
Rovoduino, with a multi-color coarse bulls-eye on top. The red half of the bulls-eye is in front, and the green in back, so drawing a line from green to red gives the robot’s direction.  That position and direction is enough to make sense of the readings from the two ultrasonic range sensors on the front.

I’m using a coarse pattern so I can see the robot reliably from the ceiling, with my 120 degree wide-angle webcam.  The wide angle gives my robot more room to drive around before it runs off the screen!  There is some motion blur when the robot is moving, which is another reason a coarser bullseye works better.

Top-down view of room, with robot in center, and pilot to the side.
View from top-down webcam–the robot’s colors and pattern are clearly visible. This is actually only half of the full frame: even though the camera is only 7 feet off the ground, its very wide field of view captures much more than what’s shown here.

The next step was writing a little MSL 2d graphical display program on my laptop, which combines the robot’s location and direction from the webcam with the ultrasonic sensor readings the robot reports over XBee radio.  Here’s a screenshot:

Onscreen display of robot's path, with sensor data drawn in shades of gray
The robot is drawn at its true position and orientation. The current ultrasonic distance sensor readings are drawn in red. The robot’s understanding of the room is in shades of gray.
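The core of that combination step is just a coordinate transform: take the robot’s position and heading from the webcam, and push each sonar reading out along the sensor’s direction to get a point in room coordinates.  A rough sketch, where the struct name and the sensor mounting angle are my own placeholders rather than the real code:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

struct RobotPose {
    cv::Point2d position;  // room coordinates (cm), from the bullseye detector
    double headingDeg;     // degrees, from the red/green direction finder
};

// Convert one sonar reading (cm), from a sensor mounted at sensorAngleDeg
// relative to the robot's forward axis, into a room-coordinate point.
cv::Point2d sonarHitWorld(const RobotPose &pose, double sensorAngleDeg,
                          double rangeCm)
{
    double a = (pose.headingDeg + sensorAngleDeg) * CV_PI / 180.0;
    return cv::Point2d(pose.position.x + rangeCm * cos(a),
                       pose.position.y + rangeCm * sin(a));
}
```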

The main thing a mobile robot needs to understand is where it can safely drive.  Places the robot has already driven are drawn in black–definitely driveable.  Places the robot has seen to be clear are in dark gray–probably driveable.  Detected obstacles are shown in white–you can’t drive there.
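One simple way to represent this is a grid of per-cell driveability states.  A tiny sketch of such a map, with the grid and cell sizes made up for illustration:

```cpp
#include <vector>

enum CellState {
    UNKNOWN = 0,  // never driven or sensed
    SEEN_CLEAR,   // sonar passed through here: probably driveable (dark gray)
    DRIVEN,       // robot has actually been here: definitely driveable (black)
    OBSTACLE      // sonar hit here: you can't drive there (white)
};

struct DriveMap {
    static const int W = 128, H = 96;  // assumed grid dimensions
    static const int cellCm = 5;       // assumed cell size, in centimeters

    std::vector<CellState> cells;
    DriveMap() : cells(W * H, UNKNOWN) {}

    bool inside(int x, int y) const { return x >= 0 && y >= 0 && x < W && y < H; }
    CellState &at(int x, int y) { return cells[y * W + x]; }
};
```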

The end result?  I can combine physical locations and sensor readings with a cyber model.  The fact that physical and virtual models match up in realtime is really cool to watch!

P.S.  It’s pretty tricky to reconstruct true obstacle positions when all you get are distance readings–if the sensor reads a distance of 50cm, that tells you there’s nothing closer than 50cm, and that something is at 50cm somewhere along the sensor’s viewable arc, but not where along that arc it is!

In the simulation above, I’m treating “nothing detected here” as slightly higher priority than “something detected near here”, an idea which works quite well in simulation but isn’t perfect in practice.
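Here’s how that priority rule might look as a grid update, reusing the RobotPose and DriveMap sketches from above (the 30-degree beam width and the step sizes are assumptions): every cell closer than the reading gets marked seen-clear, even overwriting an earlier obstacle guess, and the cells at the reading distance only become obstacles where nothing better is already known.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Apply one sonar reading to the map, with "nothing detected here" taking
// slightly higher priority than "something detected near here".
void applySonarReading(DriveMap &map, const RobotPose &pose,
                       double sensorAngleDeg, double rangeCm)
{
    const double halfBeamDeg = 15.0;  // assumed half-angle of the sonar cone
    for (double d = -halfBeamDeg; d <= halfBeamDeg; d += 1.0) {
        double a = (pose.headingDeg + sensorAngleDeg + d) * CV_PI / 180.0;

        // Everything between the sensor and the reading is seen-clear:
        // free space outranks a previous obstacle guess, but never
        // downgrades a cell the robot has actually driven over.
        for (double r = 0; r < rangeCm; r += DriveMap::cellCm) {
            int x = int((pose.position.x + r * cos(a)) / DriveMap::cellCm);
            int y = int((pose.position.y + r * sin(a)) / DriveMap::cellCm);
            if (!map.inside(x, y)) break;
            if (map.at(x, y) != DRIVEN) map.at(x, y) = SEEN_CLEAR;
        }

        // Something is at the reading distance, somewhere along this arc,
        // so only claim an obstacle where we know nothing better yet.
        int x = int((pose.position.x + rangeCm * cos(a)) / DriveMap::cellCm);
        int y = int((pose.position.y + rangeCm * sin(a)) / DriveMap::cellCm);
        if (map.inside(x, y) && map.at(x, y) == UNKNOWN)
            map.at(x, y) = OBSTACLE;
    }
}
```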