Our basic goal is to build “cyber-physical systems”: systems that combine physical parts, like robots, with virtual parts, like simulations. This weekend, I finally got one working!
For robot localization, I’m now using the gradient-based bullseye detection algorithm described in my last post (or try my OpenCV bullseye detection code), but I’m using a slightly blockier bullseye to make it easier to see. For direction, I look at the color balance around the bullseye: I compare the center of mass of red pixels (front of the robot) versus the center of mass of green pixels (back of the robot). This gives me a reliable robot direction, and it only bounces around by a few degrees! The direction finding code is here; the magic is cv::moments for finding the centers of mass.
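Here’s the idea in a small Python sketch. The real code uses cv::moments on binary color masks; the plain-list “masks” and helper names below are just my illustration of the same math (m10/m00 and m01/m00 are the centroid coordinates):

```python
import math

def center_of_mass(pixels):
    """Mean (x, y) of a list of pixel coordinates -- the same thing
    cv::moments gives you via m10/m00 and m01/m00 for a binary mask."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def marker_direction(red_pixels, green_pixels):
    """Heading in degrees, pointing from the green patch (back of robot)
    toward the red patch (front).  Image y grows downward, so this is an
    angle in screen coordinates."""
    rx, ry = center_of_mass(red_pixels)
    gx, gy = center_of_mass(green_pixels)
    return math.degrees(math.atan2(ry - gy, rx - gx))
```

Because each center of mass averages over many pixels, a few misclassified pixels barely move the result, which is why the direction estimate is so stable.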
I’m using a coarse pattern so I can see the robot reliably from the ceiling with my 120-degree wide-angle webcam. The wide angle gives my robot more room to drive around before it runs off the screen! There is some motion blur when the robot is moving, which is another reason a coarser bullseye works better.
The next step was writing a little MSL 2d graphical display program on my laptop, which combines the robot’s location and direction from the webcam with ultrasonic sensor readings reported from the robot over XBee radio. Here’s a screenshot:
The main thing a mobile robot needs to understand is where it can safely drive. Places the robot has already driven are drawn in black–definitely driveable. Places the robot has seen to be clear are drawn in dark gray–probably driveable. Detected obstacles are drawn in white–you can’t drive there.
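That three-shade map is really a priority scheme: driving through a spot is stronger evidence than a clear sonar return, which is stronger than an obstacle return. A tiny sketch of that paint rule (the gray levels, names, and priorities here are my illustration, not the actual MSL display code):

```python
# Hypothetical gray levels: 0 = black ... 255 = white
DRIVEN, SEEN_CLEAR, UNKNOWN, OBSTACLE = 0, 64, 128, 255

# Higher number = more trusted evidence.
PRIORITY = {DRIVEN: 3, SEEN_CLEAR: 2, OBSTACLE: 1, UNKNOWN: 0}

def paint(grid, y, x, value):
    """Write a cell only if the new evidence outranks what's already there."""
    if PRIORITY[value] >= PRIORITY[grid[y][x]]:
        grid[y][x] = value

grid = [[UNKNOWN] * 60 for _ in range(40)]  # starts all mid-gray
paint(grid, 10, 10, OBSTACLE)    # sonar hit: paints white
paint(grid, 10, 10, SEEN_CLEAR)  # a later clear reading overrides it
paint(grid, 10, 10, OBSTACLE)    # but an obstacle can't override "seen clear"
```

With this rule the map only ever gets darker (more certain) at each cell, so one noisy sonar reading can’t erase a place the robot has actually driven.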
The end result? I can combine physical locations and sensor readings with a cyber model. The fact that physical and virtual models match up in realtime is really cool to watch!
P.S. It’s pretty tricky to reconstruct true obstacle positions when all you get are distance readings–when the sensor reads a distance of 50cm, that tells you there’s nothing closer than 50cm, and that something is at 50cm somewhere along the sensor’s viewable arc–but not where it is along that arc!
In the simulation above, I’m treating “nothing detected here” as slightly higher priority than “something detected near here”, an idea which works quite well in simulation but isn’t perfect in practice.
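The arc bookkeeping might look something like this sketch, using a sparse dict as the grid. Every parameter and name here is made up for illustration: the beam is clear out to the reading distance, and *something* sits at that distance somewhere along the arc, with clear evidence winning over obstacle evidence:

```python
import math

FREE, OBSTACLE = 0, 1

def sonar_update(grid, sx, sy, heading_deg, dist, beam_deg=30.0, step=1.0):
    """Fold one ultrasonic reading into a sparse occupancy map.

    grid: dict keyed by integer (x, y) cell coordinates.
    A reading of `dist` means nothing is closer than `dist` anywhere in
    the beam, and something sits at `dist` along the arc -- but we don't
    know where along the arc, so every arc cell is a candidate obstacle.
    """
    half = math.radians(beam_deg) / 2.0
    heading = math.radians(heading_deg)
    n = 15  # rays sampled across the beam width
    for i in range(n):
        a = heading - half + (2.0 * half) * i / (n - 1)
        # Cells along the ray, short of the reading, are clear.
        r = step
        while r < dist:
            cell = (round(sx + r * math.cos(a)), round(sy + r * math.sin(a)))
            grid[cell] = FREE  # "nothing detected here" always wins
            r += step
        # A candidate obstacle at the reading distance, unless known clear.
        cell = (round(sx + dist * math.cos(a)), round(sy + dist * math.sin(a)))
        if grid.get(cell) != FREE:
            grid[cell] = OBSTACLE
    return grid
```

As more readings arrive from different robot positions, the clear-beats-obstacle rule lets fresh clear sweeps erode the false obstacle candidates, leaving only the arc cells that really do block the beam.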