All posts by olawlor

UAV tracking with Bullseye!

To control the UAV from a computer, we need to know where the UAV is located, so we can send it piloting commands to make it go wherever we want.  Hopefully we can use the bullseye detector we built, which uses a webcam image.  Here’s Mike’s little UAV with a cardboard bullseye duct taped on top:

The bullseye tracks really well!

The next step is building an autopilot system for closed-loop UAV control, rather than the R/C controller Mike is using for manual control above.  We’re playing with the XBee right now, trying to set up a telemetry system for the computer to feed piloting commands to the UAV.

Automatic location & direction tracking, and physical/virtual video!

Our basic goal is to build “cyber-physical systems” that combine physical parts, like robots, with virtual parts, like simulations.  This weekend, I finally got one working!

For robot localization, I’m now using the gradient-based bullseye detection algorithm described in my last post (or try my OpenCV bullseye detection code), but I’m using a slightly blockier bulls-eye to make it easier to see.  For direction, I’m looking at the color balance: around the bullseye, I compare the center of mass of red pixels (front of the robot) versus the center of mass of green pixels (back of the robot).  This gives me a robot direction reliably, and it only bounces around by a few degrees!  The direction-finding code is here; the magic is cv::moments for finding the center of mass.

Small mobile robot with tank treads and a colored pattern on top.
Rovoduino, with a multi-color coarse bulls-eye on top. The red half of the bulls-eye is in front, and the green in back, so drawing a line from green to red gives the robot’s direction.  This is enough to process the output of the two ultrasonic range sensors on the front.
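
Curious how the red-versus-green direction trick looks in code?  Here is a minimal sketch of the idea in OpenCV; the HSV thresholds and the search radius are illustrative placeholders, not the values from the direction-finding code linked above.

#include <opencv2/opencv.hpp>
#include <cmath>

// Given the camera frame and the detected bullseye center, estimate the
// robot heading by comparing the centers of mass of red-ish and green-ish
// pixels around the bullseye.
double robotHeading(const cv::Mat &bgr, cv::Point center, int radius)
{
    cv::Rect roi(center.x - radius, center.y - radius, 2*radius, 2*radius);
    roi &= cv::Rect(0, 0, bgr.cols, bgr.rows);          // clip to the image
    cv::Mat hsv;
    cv::cvtColor(bgr(roi), hsv, cv::COLOR_BGR2HSV);

    cv::Mat red, red2, green;
    cv::inRange(hsv, cv::Scalar(0,100,100),   cv::Scalar(10,255,255),  red);   // low-hue red
    cv::inRange(hsv, cv::Scalar(170,100,100), cv::Scalar(180,255,255), red2);  // wrap-around red
    red |= red2;
    cv::inRange(hsv, cv::Scalar(40,100,100),  cv::Scalar(80,255,255),  green);

    cv::Moments mr = cv::moments(red, true);    // binary-image moments
    cv::Moments mg = cv::moments(green, true);
    if (mr.m00 < 1 || mg.m00 < 1) return NAN;   // one of the colors isn't visible

    double rx = mr.m10/mr.m00, ry = mr.m01/mr.m00;   // red center of mass (front)
    double gx = mg.m10/mg.m00, gy = mg.m01/mg.m00;   // green center of mass (back)
    return atan2(ry - gy, rx - gx);   // direction of the green-to-red line, in image coordinates
}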

I’m using a coarse pattern so I can see the robot reliably from the ceiling, with my 120 degree wide-angle webcam.  The wide angle gives my robot more room to drive around before it runs off the screen!  There is some motion blur when the robot is moving, which is another reason a coarser bullseye works better.

Top-down view of room, with robot in center, and pilot to the side.
View from top-down webcam–the robot’s colors and pattern are clearly visible. This is actually only half the full frame, even though the camera is only 7 feet off the ground, since this camera has a very wide field of view.

The next step was writing a little MSL 2d graphical display program on my laptop, which combines the robot’s location and direction from the webcam, with ultrasonic sensor readings from the robot reported over XBee radio.  Here’s a screenshot:

Onscreen display of robot's path, with sensor data drawn in shades of gray
The robot is drawn at its true position and orientation. The current ultrasonic distance sensor readings are drawn in red. The robot’s understanding of the room is in shades of gray.

The main thing a mobile robot needs to understand is where it can safely drive.  Places the robot has already driven are drawn in black–definitely driveable.  Places the robot has seen to be clear are drawn in dark gray–probably driveable.  Detected obstacles are shown in white–you can’t drive there.

The end result?  I can combine physical locations and sensor readings with a cyber model.  The fact that physical and virtual models match up in realtime is really cool to watch!

P.S.  It’s pretty tricky to reconstruct true obstacle positions when all you get are distance readings–when the sensor is reading a distance of 50cm, this tells you there’s nothing closer than 50cm, and you know something is at 50cm somewhere along the sensor’s viewable arc, but not where it is along that arc!

In the simulation above, I’m treating “nothing detected here” as slightly higher priority than “something detected near here”, an idea that works quite well in simulation but isn’t perfect in practice.
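
For the curious, here is a rough sketch of that priority rule written as an occupancy-grid update.  The grid resolution, the gray-level values, and the sensor's beam width are assumptions for illustration, not the values from my display program.

#include <opencv2/opencv.hpp>
#include <cmath>

enum { DRIVEN = 0, SEEN_CLEAR = 64, UNKNOWN = 128, OBSTACLE = 255 };  // gray levels, as drawn

// Fold one ultrasonic reading into the grid: everything closer than the
// reading along the sensor's arc is probably clear; something sits at the
// reading distance, somewhere on that arc.
void addSonarReading(cv::Mat &grid,         // CV_8UC1 occupancy image, 1 cell = 1 cm (assumed)
                     cv::Point2f sensor,    // sensor position, in grid cells
                     float heading,         // sensor direction, radians
                     float range_cm,        // reported distance
                     float fov = 0.5f)      // ~30 degree beam width (assumed)
{
    cv::Rect inside(0, 0, grid.cols, grid.rows);
    for (float a = heading - fov/2; a <= heading + fov/2; a += 0.02f) {
        for (float r = 0; r < range_cm; r += 1.0f) {   // cells closer than the reading
            cv::Point cell((int)(sensor.x + r*cos(a)), (int)(sensor.y + r*sin(a)));
            if (inside.contains(cell)) {
                unsigned char &c = grid.at<unsigned char>(cell);
                if (c != DRIVEN) c = SEEN_CLEAR;       // "nothing detected here" wins
            }
        }
        cv::Point hit((int)(sensor.x + range_cm*cos(a)), (int)(sensor.y + range_cm*sin(a)));
        if (inside.contains(hit)) {
            unsigned char &c = grid.at<unsigned char>(hit);
            if (c == UNKNOWN) c = OBSTACLE;   // only mark obstacles in still-unknown space
        }
    }
}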


Robot Tracking with a Bulls-eye

One of our big needs has been a computer vision robot tracking system–once we know where the robots are, we can start working on wheel odometry, sensor fusion, multi-robot path planning, and the rest of our search and rescue setup.  The hard part here has been teaching the computer to recognize the robot and distinguish it from the background–this is something humans do really well, but it’s a hard problem for a computer.

Charlie had the good idea of tracking a bulls-eye image, which is nice because it has rotational symmetry.  Here’s an image where we might want to find the blue bulls-eye symbol in the center.  (Sadly, the computer doesn’t understand “Look for the bulls-eye” until we teach it what that means!)

High contrast concentric circles target in blue against an office background.

There’s a cool little trick using “spatial brightness gradients” to detect circles.  A gradient points in the direction of greatest change, like dark to light, and it’s actually pretty easy to compute for an image.

The direction of the black shape’s gradient is shown with the gray bars. The gradient follows the outline of the shape, but is perpendicular to it, like railroad ties.

This afternoon I had the realization that the spatial brightness gradients in a bulls-eye are all lined up with the center of the circle (more formally, the gradient at any point on an arc, extended as a line, passes through the center of the arc).

Here’s an image where I’ve extended the brightness gradients out from each edge, by drawing a dim line through each point oriented along the gradient.  (Gradients are computed in OpenCV using cv::Sobel to get the X and Y components, but to draw dim lines I needed to rasterize them myself.  It’s a little faster if you skip low-magnitude gradients, which probably aren’t part of our high-contrast target.)

Black image with thin white gradient lines scattered around.
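
In code, the voting step looks roughly like the sketch below; the magnitude cutoff and line length here are illustrative guesses, and the real code (downloadable below) uses tuned values.

#include <opencv2/opencv.hpp>
#include <cmath>

// gray: single-channel 8-bit camera image.  Returns a float image where each
// pixel counts how many extended gradient lines passed through it.
cv::Mat gradientVotes(const cv::Mat &gray)
{
    cv::Mat gx, gy;
    cv::Sobel(gray, gx, CV_32F, 1, 0);   // d(brightness)/dx
    cv::Sobel(gray, gy, CV_32F, 0, 1);   // d(brightness)/dy

    cv::Mat votes = cv::Mat::zeros(gray.size(), CV_32F);
    const float minMag = 50.0f;   // skip weak gradients (probably not our high-contrast target)
    const int lineLen = 80;       // how far to extend each gradient line, in pixels

    for (int y = 0; y < gray.rows; y++)
    for (int x = 0; x < gray.cols; x++) {
        float dx = gx.at<float>(y,x), dy = gy.at<float>(y,x);
        float mag = sqrtf(dx*dx + dy*dy);
        if (mag < minMag) continue;
        dx /= mag; dy /= mag;   // unit vector pointing along the gradient
        // Walk both directions along the gradient, adding one "vote" per pixel.
        for (int t = -lineLen; t <= lineLen; t++) {
            int px = x + (int)(t*dx), py = y + (int)(t*dy);
            if (px >= 0 && px < votes.cols && py >= 0 && py < votes.rows)
                votes.at<float>(py,px) += 1.0f;
        }
    }
    return votes;   // bullseye centers show up as bright peaks in this image
}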

Now that we’ve drawn dim white lines oriented along the gradients, they all intersect at the center of the bulls-eye.  And this isn’t just for debugging: I can efficiently count intersecting lines by rasterizing them all.  So now I have a much easier computer vision problem: find the bright spot!

Finding the single brightest spot would be easy, but I wanted to support multiple bulls-eyes for multiple robots, and the program should be smart enough to realize it’s not seeing anything, so I needed to be a little smarter.  Finding all points brighter than some threshold would work too, except a big clear bulls-eye might have lots of points near the center that are all above the threshold.  So what I do is find all points brighter than a threshold (currently 100 crossing gradients) that are also brighter than any nearby point.
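
In code, “brighter than a threshold, and brighter than any nearby point” can be done with a max filter.  Here is a minimal sketch: the 100-vote threshold is the one mentioned above, but the neighborhood radius is an assumption.

#include <opencv2/opencv.hpp>
#include <vector>

// votes: the CV_32F vote image from the gradient-line step.
std::vector<cv::Point> findBullseyes(const cv::Mat &votes)
{
    const float threshold = 100.0f;   // minimum number of crossing gradients
    const int radius = 15;            // how far away "nearby" points can be, in pixels

    cv::Mat localMax;
    cv::dilate(votes, localMax, cv::Mat(), cv::Point(-1,-1), radius);   // per-pixel neighborhood maximum

    std::vector<cv::Point> centers;
    for (int y = 0; y < votes.rows; y++)
    for (int x = 0; x < votes.cols; x++) {
        float v = votes.at<float>(y,x);
        if (v >= threshold && v >= localMax.at<float>(y,x))   // a local peak above threshold
            centers.push_back(cv::Point(x,y));
    }
    return centers;   // one entry per detected bulls-eye (empty if nothing is visible)
}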

Here’s the final result, where I’ve automatically identified the bulls-eye, and figured out how big it is and how it’s oriented.  The computer can actually keep up with video, doing this 30 times per second, and can reliably track a dozen bulls-eyes at once!

To summarize, the code (1) finds strong brightness gradients, (2) extends lines along those gradients, and (3) finds where many of those lines cross.  Each bulls-eye center has tons of crossing gradients, one per pixel in the edges of the bulls-eye.  This works, and it’s very hard to fool because of all the ‘votes’ for a true bulls-eye–very few other objects have so many gradients that all converge at a single point.  We did manage to get a white Rover5 wheel to count as a fake bullseye against a black monitor background, but it wasn’t easy.

How accurate is it?  Really accurate!  We can track a 3-inch bulls-eye image from 8 feet away with sub-pixel precision–the random location standard deviation is under 0.05 pixels; worst-case location error is only ±0.2 pixels, or under 1 millimeter!  I’m also estimating orientation by taking one quadrant out of the bulls-eye, and comparing the gradient-derived center with the center of mass; this orientation estimate is less reliable but still has a variance of only 2-3 degrees, and ±10 degrees worst case.

There are still problems; in particular if you tilt the bulls-eye more than about 45 degrees, the circles become ellipses and the gradients don’t line up.  If the shape moves too fast, the blurring destroys the gradients.  Also, if the bulls-eye is too close to the camera, my little gradient lines are too short to reach the center and the program fails.

If you’ve got a webcam and a command line, you can get the code and try it like this:

   sudo apt-get install git-core libopencv-dev build-essential
   git clone http://projects.cs.uaf.edu/cyberalaska
   cd cyberalaska/vision/bullseye
(print out images/bullseye.pdf)
   make
   ./track

Once this setup is working reliably, we’ll package up an easy-to-use robot tracking server for you!

OpenCV for multi-color tracking and localization

Localization, or “where is my robot?”, is really important, since you can’t tell the robot where to go unless you know where you’re starting.  It’s also a hard problem to solve reliably, especially indoors where you don’t have GPS.  For the 2013 CPS challenge, we used a Kinect to find our flying UAV, but we’d like to support ground vehicle localization too, and that’s not easy in a depth image.

I’ve done webcam color matching for decades, but I’ve always used my own homebrew image access and processing libraries, which makes it hard to use, port, and modify my stuff–even for me!  This month, I’ve been finally learning a standard, portable video and image analysis library: OpenCV.  It’s even got a simple GUI library built in, so you can add sliders and such.

Here’s the program in action:

The red and green blobs to the right of my head are bright pink and green squares of paper, detected and highlighted in the image.  Note the bad matches on my red chair.

The basic process is to find all the pixels that match a given color, estimate their center of mass, then draw a smaller box around that center of mass for a second pass.  This produces a “trimmed mean” position, which is less sensitive to outliers.
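
Here is a minimal sketch of that two-pass “trimmed mean”; the tolerances, minimum pixel count, and box size are placeholders (the real program exposes the tolerances as sliders).

#include <opencv2/opencv.hpp>

// Return the center of mass of pixels near targetHSV, refined by a second
// pass restricted to a box around the first estimate.
cv::Point2d trimmedMeanCenter(const cv::Mat &bgr, cv::Scalar targetHSV,
                              cv::Scalar tol = cv::Scalar(10,60,60), int boxRadius = 40)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Pass 1: all matching pixels in the whole frame (note: ignores hue wrap-around for reds).
    cv::inRange(hsv, targetHSV - tol, targetHSV + tol, mask);
    cv::Moments m = cv::moments(mask, true);
    if (m.m00 < 30) return cv::Point2d(-1,-1);   // too few matching pixels: target not visible
    cv::Point2d c(m.m10/m.m00, m.m01/m.m00);

    // Pass 2: only pixels inside a box around the first center, which trims
    // outliers (like the stray matches on my red chair).
    cv::Rect box((int)c.x - boxRadius, (int)c.y - boxRadius, 2*boxRadius, 2*boxRadius);
    box &= cv::Rect(0, 0, mask.cols, mask.rows);
    cv::Moments m2 = cv::moments(mask(box), true);
    if (m2.m00 < 30) return c;
    return cv::Point2d(box.x + m2.m10/m2.m00, box.y + m2.m01/m2.m00);
}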

The output of the program is the average (X,Y) coordinates of the center of mass of each color, and the number of pixels that matched.  If not enough pixels match, the target object probably isn’t visible.  The program has sliders so you can adjust the matching tolerances, and you can click to get a new hue, saturation, value (HSV) colorspace target color.

If you have the target color set exactly right, and your webcam is working well, you can get accuracy better than 0.1 pixel!  But if the colors are dancing around, you might get 2-3 pixel variance in the location.  And if you have the colors wrong, or something else is the same color, you can get entirely the wrong location.

Because the apparent color of a reflective object like a sheet of paper depends on the ambient lighting, I need to allow loose color tolerances or else tweak the target colors to get a good match.  We should try using some diffused LEDs or color lasers to produce inherent emissive colors; back in 2008 David Krnavek used an LED in a white ping-pong ball for color tracking, with good results.

Latency seems good, and I get about 20 frames per second even on my low-end backup laptop and random webcam.  However, the OpenCV graphical output is fairly expensive, so I don’t do it every frame.

Download the color tracker code at my GitHub.  The plan is to build up something like this into a reliable, web-accessible multi-robot tracking system!

Aero Balance Beam: Control Theory Demo

At our workshop this week, we did some hands-on testing of our control theory knowledge with an aerodynamically-driven balance beam–basically just an electrically controlled fan that blows to lift up a weighted board.  The idea was to capture the difficulties of UAV control in a simpler and slightly less dangerous system.  Little did we know what we were signing up for!  (Arduino code and data after the break.)

Continue reading Aero Balance Beam: Control Theory Demo

Two Kinects: surprisingly functional!

The Kinect is a really handy sensor for robotics.  We’ve been talking about having one Kinect mounted on the wall, to impose a simple global coordinate system; and separate obstacle detection Kinects mounted on each robot.

Surprisingly, multiple Kinects don’t seem to interfere with each other too badly–the Primesense dot detection is smart enough to preferentially recognize its own dots over the dots from the other Kinect.

Image from a Kinect, with a second Kinect active on the left side of the frame. Despite the bright glare from the second Kinect’s IR emitter, there is almost no interference in the depth image.

Rover 5 platform test run

Mike just ordered a SparkFun Rover 5 chassis ($60), and Steven decided to test drive it at 12V using his new quad-bridge driver board, in turn hooked to an Arduino Mega (for the interrupt lines) and an XBee for wireless serial communication.

http://www.youtube.com/watch?v=ZD30zdlazpc

It’s a nice little platform, big enough to put a laptop and some actuators on, and it moves authoritatively when driven at 12V.  It tracks quite straight even without using the onboard encoders.  Mark, from Mount Edgecumbe, suggested putting on Mecanum wheels, so the chassis can translate sideways too.

We’ll work on a decent mounting system, so all the electronics don’t fall out when the thing tips over!

Arduino Serial Latency & Bandwidth vs. Message Size

We’ve been using Arduinos for all our projects, and it was time I got around to benchmarking the serial communication performance.  It’s actually not very fast; even at the maximum baud rate of 115200 bits per second, delivered performance is only a little over 10KB/second each way, and it only hits this bandwidth when sending over 100 bytes at a time.

Arduino Uno to PC roundtrip serial communication bandwidth, as a function of message size.

The problem for small messages seems to be a 4 millisecond minimum roundtrip latency.  Messages over about 40 bytes seem to take several such latencies, so there’s a stair step pattern to the latency.  Paul Stoffregen says this is due to the Uno firmware’s 4.1 millisecond transmit timeout.

Arduino Uno to PC roundtrip serial communication latency, measured in milliseconds, for various message sizes.
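
For reference, the Arduino side of this kind of round-trip test can be as simple as the echo sketch below (a minimal sketch, not necessarily the exact benchmark code); the PC sends an N-byte message, the Arduino echoes it straight back, and the PC times the round trip and averages over many repetitions.

void setup() {
  Serial.begin(115200);          // maximum standard baud rate on the Uno
}

void loop() {
  while (Serial.available()) {   // echo every incoming byte straight back
    Serial.write(Serial.read());
  }
}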

Evidently, the Teensy (with USB direct to the chip, not an onboard USB-to-serial converter) gets about 1ms serial latency.  The same page reported the Duemilanove at 16ms minimum (62Hz!).

Overall, this means you’re only going to get a 250Hz control rate if you’re shipping sensor data from an Arduino Uno up to a PC, and then sending actuator signals back down to the Arduino.  But 250Hz is enough for most hardware projects we’ve been thinking about.

The other annoying problem?  After opening the serial port, the Arduino Uno takes 1.7 seconds to boot before it responds to serial commands.  Anything you send from the PC before that time seems to be lost, not buffered.  The fix is probably to have the Uno send the PC one byte at startup, so the PC knows the Uno is ready.
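
A minimal sketch of that probable fix, with an arbitrary ‘!’ chosen as the ready marker: the PC just blocks on reading this one byte before sending anything.

void setup() {
  Serial.begin(115200);
  Serial.write('!');   // "I'm awake": sent once the boot delay is over
}

void loop() {
  // ... normal command handling goes here ...
}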

Using Kinect + libfreenect on Modern Linux

Here’s how I got libfreenect working on my Ubuntu 12.04 machine, running Linux kernel 3.2.   Generally, I like libfreenect because it’s pretty small and simple, about 8MB fully installed, and gives you a live depth image without much hassle.  The only thing it doesn’t do is human skeleton recognition; for that you need the much bigger and more complicated OpenNI library (howto here).

Step 0.) Install the needed software:
sudo apt-get install freeglut3-dev libxmu-dev libxi-dev build-essential cmake usbutils libusb-1.0-0-dev git-core
git clone git://github.com/OpenKinect/libfreenect.git
cd libfreenect
cmake CMakeLists.txt
make
sudo make install

Step 1.) Plug in the Kinect, both into the wall and into your USB port. Both the front LED and the power adapter plug LED should be green (sometimes you need to plug and unplug several times for this). lsusb should show the device is connected:
lsusb
… lots of other devices …
Bus 001 Device 058: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 001 Device 059: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 001 Device 060: ID 045e:02ae Microsoft Corp. Xbox NUI Camera

Step 2.) Run the libfreenect “glview” example code:
cd libfreenect/bin
./glview
Press “f” to cycle through the video formats: lo res color, hi res color, and infrared. The IR cam is very interesting!

The source code for this example is in libfreenect/examples/glview.c.  It’s a decent place to start for your own more complex depth recognition programs: equations to convert depth to 3D points here!
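
If you just want the gist, the conversion is the standard pinhole back-projection.  Here is a sketch using approximate Kinect intrinsics of the sort commonly quoted online; calibrate your own camera (and see the linked equations) for real accuracy.

struct Point3 { float x, y, z; };

// u,v: pixel coordinates in the 640x480 depth image; depth_m: depth in meters.
Point3 depthTo3D(int u, int v, float depth_m)
{
    const float fx = 594.0f, fy = 591.0f;   // focal lengths, in pixels (approximate)
    const float cx = 339.5f, cy = 242.7f;   // optical center (approximate)
    Point3 p;
    p.z = depth_m;
    p.x = (u - cx) * depth_m / fx;
    p.y = (v - cy) * depth_m / fy;
    return p;
}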

———— Debugging Kinect connection ——————-

Number of devices found: 1
Could not open motor: -3
Could not open device
-> Permissions problem.
Temporary fix:
sudo chmod 777 /dev/bus/usb/001/*

Permanent fix:
sudo nano /etc/udev/rules.d/66-kinect.rules

Add this text to the file:
 # ATTR{product}=="Xbox NUI Motor"
 SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
 # ATTR{product}=="Xbox NUI Audio"
 SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
 # ATTR{product}=="Xbox NUI Camera"
 SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"

sudo /etc/init.d/udev restart
—————–
Number of devices found: 1
Could not claim interface on camera: -6
Could not open device
-> The problem: modern kernels have a video driver for Kinect.
Temporary fix:
sudo modprobe -r gspca_kinect

Permanent fix:
sudo nano /etc/modprobe.d/blacklist.conf
Add this line anywhere:
blacklist gspca_kinect
——————-
If lsusb only shows the motor, but not the audio or camera devices:
Bus 001 Device 036: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor

This means the Kinect is not powered via the 12V line!

Front green light blinking: Kinect is plugged into USB.
AC plug cable green light: 12V power is connected.

Solution: plug in the power cable!  If it is plugged in, unplug and replug it.


Kinect + Parrot Drone = ?

We’d really like to do interesting things with indoor UAVs, like the fun little Parrot AR.Drone 2.0.  Outdoors, you have GPS (see ArduPilot), but indoors GPS isn’t accurate enough, even if you could get good reception.

Enter the Kinect.  It returns a live 2D + depth image that can be converted to a true 3D point cloud quite easily, and at 30 frames per second.  If you have a beefy enough indoor UAV, you can mount the Kinect on the UAV, but the Kinect’s weight, power, and processing requirements make this expensive.  I’m a fan of mounting the Kinect on the wall, off to the side, where it can see most of the room (it’s easy to rotate the 3D points into any coordinate system you like).
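
For example, undoing a wall-mounted Kinect’s downward tilt is one small rotation plus an offset; in the sketch below, the tilt angle and the sensor’s room position are whatever you measure for your own mounting (the math assumes a tilt about the sensor’s x axis).

#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate a Kinect-frame point to undo a downward tilt about the sensor's
// x axis, then shift by the sensor's position in the room.
Vec3 kinectToRoom(Vec3 p, float tilt, Vec3 sensorPos)
{
    Vec3 r;
    r.x = p.x                             + sensorPos.x;
    r.y = cosf(tilt)*p.y - sinf(tilt)*p.z + sensorPos.y;
    r.z = sinf(tilt)*p.y + cosf(tilt)*p.z + sensorPos.z;
    return r;
}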

Together, you have a sub-$500 UAV with a 3D position sensor.  The only problem?  The UAV is thin and black, so the Kinect can’t see it beyond about 10 feet, even with the big indoor hull attached:

Parrot AR.Drone 2.0 seen in Kinect field of view.
I’ve circled a Parrot AR.Drone flying in the Kinect depth image. No valid depth samples were returned, so it shows up as black (unknown depth).

One solution is to attach a small reflector, which can be as simple as a piece of paper.  Here, I’ve got a small 8×4 inch pink piece of paper attached to the front of the drone, to give the Kinect something to see.

With an 8×4 inch pink sheet of paper attached, the drone is clearly visible, with valid depth, in the Kinect image.

This works quite reliably, although the paper does mess up the vehicle dynamics somewhat.  Anybody tried this?  Would painting the indoor hull help the Kinect see the UAV?

“TriloBYTE” two-wheeled robot chassis

I fabricated a quick two-wheeled robot chassis tonight, which I’m calling TriloBYTE because of the rounded front.  It’d look more like a Cambrian-era Trilobite if I had the USB cable trailing out the back, but the scuttling motion is definitely Trilobitian.

Red plate with wires and wheels
The TriloBYTE chassis

The chassis fits in a 7″ diameter circle, and is made from 1/2″ thick red HDPE cutting board, scroll saw cut following my chassis template (PDF or SVG).  I always draw up a template in Inkscape and print and glue it down before cutting, drilling, or milling–it’s so much easier to build stuff if you already know where everything is going!

It drives nicely when powered by two tiny Pololu 1:100 gearmotors and 42x19mm wheels. The motors drop into milled slots in the chassis, and are held down with 1/2″ pan head self tapping screws (with 3/32″ holes predrilled).

The motor mount for TriloBYTE. It's just a milled slot in the HDPE, with two pan-head screws holding the motor down.

The motor controller is a SparkFun Ardumoto shield, which seems to work fine when powered only by the Arduino’s 5V line.  With bigger motors I’m sure I’d be stressing the USB connection, but these tiny motors work fine–the stall current is only 200mA or so.
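
For the record, open-loop driving through the Ardumoto is only a few lines.  The sketch below assumes the classic Ardumoto pin mapping (PWM on pins 3 and 11, direction on 12 and 13), so check your board revision before trusting it.

const int PWM_L = 3,  DIR_L = 12;   // left motor (assumed Ardumoto pin mapping)
const int PWM_R = 11, DIR_R = 13;   // right motor

void setup() {
  pinMode(DIR_L, OUTPUT);
  pinMode(DIR_R, OUTPUT);
}

// left and right range from -255 (full reverse) to +255 (full forward)
void drive(int left, int right) {
  digitalWrite(DIR_L, left  >= 0 ? HIGH : LOW);
  digitalWrite(DIR_R, right >= 0 ? HIGH : LOW);
  analogWrite(PWM_L, abs(left));
  analogWrite(PWM_R, abs(right));
}

void loop() {
  drive(150, 150);   // scuttle forward
  delay(1000);
  drive(0, 0);       // stop
  delay(1000);
}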

I definitely need mechanical strain relief on the main USB cable, because I’m finding myself putting a lot of stress on it just bumping around. You could definitely argue for putting the Arduino in backwards, so the USB cord trails behind. Surprisingly, the USB cable doesn’t seem to drag the robot around too badly, at least until it tries to climb its own cable. The motors definitely have enough power to hop the cable, but not with the open-loop PWM setup I have.

The cat seems fascinated by it!

My cat sniffing at the TriloBYTE robot. No cats or robots were harmed in the making of this photo. I'm hoping I won't find the robot clawed to shreds tomorrow morning!

Next up: some sensors, and real control software.  Using Firmata to test out new hardware is sure easy!


Meet the Team: Dr. Lawlor

Hi! I’m Dr. Orion Lawlor, and I love robots!  I’m an assistant professor of Computer Science at the University of Alaska Fairbanks, and have helped Dr. Bogosyan set up the CYBER-Alaska project since the basic idea back in 2009.

Facts about Dr. Lawlor:

  • Grew up in Glennallen, Alaska eating moosemeat and salmon, and donating blood to mosquitos.
  • In a construction frenzy during the summer of 2009, built a 2,000 square foot shop with help from his dad Tom.
  • Can weld with stick, MIG, or gas; cast aluminum, zinc, or urethane; cut steel on a manual mill or CNC lathe; and drive heavy equipment… all without leaving his driveway!

I’ve worked on a lot of different computer hardware and robotics projects over the years:

  • From 2009 through today, I’ve worked with a really broad array of students building a solar-powered robot to drive around Greenland.  We’re still waiting on NSF funding for the full build, but work continues on this project in my spare time.
  • In 2007, 2008, and 2010, I worked with some great students building underwater robots for the worldwide MATE ROV contest.   I used ever more complex PIC microcontrollers for this project: in 2007 I used a network of 16F676 chips running software serial code, in 2008 I upgraded to the PICkit 2 programmer and 16F690 chip for more pins and hardware serial communication, and in 2010 I used the 18F2455 for direct onboard USB.
  • In the early 2000s, I wrote usb_pickit, the first open-source driver for the PICkit 1 programmer.  I learned a lot about hardware, including how to laser print and iron on printed circuit boards.
  • Back in 1998, I soldered together an ISA card by hand, to collect analog data at a higher rate than I could get over a serial line.  The card actually still works, although the ISA slot is so old, I can only plug it into an equally ancient machine!

My “day job” is writing software and building high performance simulations, so I’m excited to combine my fabrication, electronics, and software experience to build cutting-edge systems!

Howto: Arduino + Firmata = Easy Hardware!

I’ve been playing with microcontrollers for over a decade now, and have hands-on experience with PIC, ARM, 68HC11, and MSP430 devices.  Usually, it’s a long slow road where you (1) pick a processor to buy, (2) pick a device programmer, (3) pick a compiler, (4) read the processor documentation, (5) find/fix example code until it compiles, (6) program the chip, (7) figure out why the lights aren’t blinking.  It’s usually very painful, takes a few days at the very least, and every new task or chip is just more work.

But this is easy!

  1. Buy an Arduino Uno R3 for about $30.  Lots of places have the hardware, including Amazon (and cable) or SparkFun (and cable).  The cable is an ordinary USB A to B cable, with the thick square end like on most USB printers.
  2. Download and unzip the Arduino 1.0 IDE.  (On Linux, you’ll also need to “sudo apt-get install gcc-avr avr-libc” to get the compiler.)
  3. Open the IDE’s “Drivers” folder.  Right click “Arduino UNO R3.inf” and hit “Install”. Plug in the Arduino, and it should show up as a serial port (something like COM3 or /dev/ttyUSB0.)
    1. The very similar Arduino Duemilanove 2009 board doesn’t need drivers; it shows up as an ordinary FTDI USB to serial device.
  4. Start up the IDE by double clicking Arduino.exe, and:
    1. Choose File -> Examples -> Firmata -> StandardFirmata.
    2. Hit File -> Upload.  The TX and RX LEDs will flicker as the device is programmed.
    3. The Arduino now responds to the serial Firmata command protocol.
  5. Download the Firmata Test Program, run it, and choose the “Port” from the menu (like COM3 or ttyUSB0.)
  6. Click pin 13 on and off, and watch the LED blink!  You can set any pin to input (reading low or high voltage) or output (producing low or high voltage), and many pins have other functions available like analog input, PWM (pulse width modulation), or servo control.  Just click and drag to interact with all the pins!
    Pin descriptions for Firmata connected to Arduino.
    Firmata Test showing all the pins for a live connected Arduino.

The beautiful thing about this is you don’t have to figure out how to enable analog inputs, initiate ADC conversions, correctly set the PWM control registers, or set interrupt modes–it’s pure plug and play!
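
And if you later outgrow the test GUI and want your own program to talk to the board, the Firmata protocol itself is just a few bytes.  Here is a minimal Linux-side sketch that blinks pin 13 by writing raw Firmata messages; it assumes StandardFirmata’s default 57600 baud, the classic port-based digital message format, and a serial port name you will need to adjust for your machine.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char *port = "/dev/ttyUSB0";   // adjust for your machine (e.g. /dev/ttyACM0)
    int fd = open(port, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                     // raw bytes, no line editing
    cfsetispeed(&tio, B57600);           // StandardFirmata's default baud rate
    cfsetospeed(&tio, B57600);
    tcsetattr(fd, TCSANOW, &tio);

    sleep(2);                            // wait out the Uno's post-open reboot

    unsigned char pinMode[] = {0xF4, 13, 1};   // SET_PIN_MODE: pin 13 -> OUTPUT
    write(fd, pinMode, sizeof(pinMode));

    for (int i = 0; i < 10; i++) {
        // DIGITAL_MESSAGE for port 1 (pins 8-13); pin 13 is bit 5 of that port.
        unsigned char on[]  = {0x91, 0x20, 0x00};
        unsigned char off[] = {0x91, 0x00, 0x00};
        write(fd, (i % 2) ? on : off, sizeof(on));
        usleep(500 * 1000);              // toggle twice a second
    }
    close(fd);
    return 0;
}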