
UAV tracking with Bullseye!

To control the UAV from a computer, we need to know where the UAV is located, so we can send it piloting commands to make it go wherever we want.  Hopefully we can use the bullseye detector we built, which uses a webcam image.  Here’s Mike’s little UAV with a cardboard bullseye duct taped on top:

The bullseye tracks really well!

The next step is building an autopilot system for closed-loop UAV control, rather than the R/C controller Mike is using for manual control above.  We’re playing with the XBee right now, trying to set up a telemetry system for the computer to feed piloting commands to the UAV.

Automatic location & direction tracking, and physical/virtual video!

Our basic goal is to build “cyber-physical systems” that combine physical parts, like robots, with virtual parts, like simulations.  This weekend, I finally got one working!

For robot localization, I’m now using the gradient-based bullseye detection algorithm described in my last post (or try my OpenCV bullseye detection code), but I’m using a slightly blockier bulls-eye to make it easier to see.  For direction, I’m looking at the color balance: around the bullseye, I compare the center of mass of red pixels (front of the robot) versus the center of mass of green pixels (back of the robot).  This gives me a reliable robot direction, and it only bounces around by a few degrees!  The direction-finding code is here; the magic is cv::moments, which finds the center of mass.
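
In case it helps to see the idea, here’s a minimal sketch of that direction estimate in OpenCV.  The function name, color thresholds, and region size are my own assumptions for illustration, not the actual project code:

   // Sketch: robot heading from the red (front) vs. green (back) halves of the
   // marker, given the bullseye center and radius from the detector.
   #include <opencv2/opencv.hpp>
   #include <cmath>
   #include <vector>

   double robotHeadingDegrees(const cv::Mat &bgr, cv::Point center, int radius)
   {
       // Only look in a box around the detected bullseye.
       cv::Rect roi(center.x - radius, center.y - radius, 2 * radius, 2 * radius);
       roi &= cv::Rect(0, 0, bgr.cols, bgr.rows);
       std::vector<cv::Mat> ch;
       cv::split(bgr(roi), ch);                    // ch[0]=B, ch[1]=G, ch[2]=R

       // Crude "mostly red" and "mostly green" masks (thresholds are guesses).
       cv::Mat redMask   = (ch[2] > 150) & (ch[1] < 100);
       cv::Mat greenMask = (ch[1] > 150) & (ch[2] < 100);

       // cv::moments on a binary mask gives the center of mass of its pixels.
       cv::Moments mr = cv::moments(redMask, true);
       cv::Moments mg = cv::moments(greenMask, true);
       if (mr.m00 == 0 || mg.m00 == 0) return NAN; // didn't see both colors

       cv::Point2d red  (mr.m10 / mr.m00, mr.m01 / mr.m00);
       cv::Point2d green(mg.m10 / mg.m00, mg.m01 / mg.m00);

       // Heading is the green-to-red vector (back of the robot to the front).
       return std::atan2(red.y - green.y, red.x - green.x) * 180.0 / CV_PI;
   }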

Small mobile robot with tank treads and a colored pattern on top.
Rovoduino, with a multi-color coarse bulls-eye on top. The red half of the bulls-eye is in front, and the green in back, so drawing a line from green to red gives the robot’s direction.  This is enough to process the output of the two ultrasonic range sensors on the front.

I’m using a coarse pattern so I can see the robot reliably from the ceiling, with my 120 degree wide-angle webcam.  The wide angle gives my robot more room to drive around before it runs off the screen!  There is some motion blur when the robot is moving, which is another reason a coarser bullseye works better.

Top-down view of room, with robot in center, and pilot to the side.
View from top-down webcam–the robot’s colors and pattern are clearly visible. This is actually only half the full frame, even though the camera is only 7 feet off the ground, since this camera has a very wide field of view.

The next step was writing a little MSL 2d graphical display program on my laptop, which combines the robot’s location and direction from the webcam, with ultrasonic sensor readings from the robot reported over XBee radio.  Here’s a screenshot:

Onscreen display of robot's path, with sensor data drawn in shades of gray
The robot is drawn at its true position and orientation. The current ultrasonic distance sensor readings are drawn in red. The robot’s understanding of the room is in shades of gray.

The main thing a mobile robot needs to understand is where it can safely drive.  Places the robot has already driven are drawn in black–definitely driveable.  Places the robot has seen to be clear are drawn in dark gray–probably driveable.  Detected obstacles are shown in white–you can’t drive there.

The end result?  I can combine physical locations and sensor readings with a cyber model.  The fact that physical and virtual models match up in realtime is really cool to watch!

P.S.  It’s pretty tricky to reconstruct true obstacle positions when all you get are distance readings–when the sensor is reading a distance of 50cm, this tells you there’s nothing closer than 50cm, and you know something is at 50cm somewhere along the sensor’s viewable arc, but not where it is along that arc!

In the simulation above, I’m treating “nothing detected here” as slightly higher priority than “something detected near here”, an idea that works quite well in simulation but isn’t perfect in practice.
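
For the curious, here’s a rough sketch of how one ultrasonic reading could be folded into that kind of grid, with “clear” outranking a nearby “obstacle” mark.  The grid layout, cell values, and beam width are assumptions of mine, not the actual simulation code:

   // Sketch: fold one ultrasonic range reading into a 2D drivability grid.
   #include <vector>
   #include <cmath>

   enum Cell { UNKNOWN = 0, OBSTACLE, CLEAR, DRIVEN };

   struct Grid {
       int w, h;                    // size in cells
       double cellSize;             // meters per cell
       std::vector<Cell> cells;     // w*h cells, row-major
       Cell &at(int x, int y) { return cells[y * w + x]; }
       bool inside(int x, int y) const { return x >= 0 && y >= 0 && x < w && y < h; }
   };

   // rx,ry = robot position (m); beamDir = sensor direction (radians);
   // range = reported distance (m); halfAngle = half the beam width in radians
   // (about 0.26 rad, or 15 degrees, for a cheap ultrasonic sensor).
   void applySonar(Grid &g, double rx, double ry, double beamDir,
                   double range, double halfAngle)
   {
       for (double a = beamDir - halfAngle; a <= beamDir + halfAngle; a += 0.02) {
           // March along one ray of the beam, marking cells as clear.
           for (double d = 0.0; d < range; d += 0.5 * g.cellSize) {
               int cx = int((rx + d * std::cos(a)) / g.cellSize);
               int cy = int((ry + d * std::sin(a)) / g.cellSize);
               if (!g.inside(cx, cy)) break;
               // "Nothing detected here" outranks an earlier obstacle mark.
               if (g.at(cx, cy) != DRIVEN) g.at(cx, cy) = CLEAR;
           }
           // Something sits at distance 'range', somewhere along this arc.
           int ox = int((rx + range * std::cos(a)) / g.cellSize);
           int oy = int((ry + range * std::sin(a)) / g.cellSize);
           if (g.inside(ox, oy) && g.at(ox, oy) == UNKNOWN)
               g.at(ox, oy) = OBSTACLE;   // clear/driven cells keep priority
       }
   }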

 

Robot Tracking with a Bulls-eye

One of our big needs has been a computer vision robot tracking system–once we know where the robots are, we can start working on wheel odometry, sensor fusion, multi-robot path planning, and the rest of our search and rescue setup.  The hard part has been teaching the computer to recognize the robot and distinguish it from the background–something humans do really well, but a surprisingly hard problem for a computer.

Charlie had the good idea of tracking a bulls-eye image, which is nice because it has rotational symmetry.  Here’s an image where we might want to find the blue bulls-eye symbol in the center.  (Sadly, the computer doesn’t understand “Look for the bulls-eye” until we teach it what that means!)

High contrast concentric circles target in blue against an office background.

There’s a cool little trick using “spatial brightness gradients” to detect circles.  A gradient points in the direction of greatest change, like dark to light, and it’s actually pretty easy to compute for an image.

The direction of the black shape’s gradient is shown with the gray bars. The gradient follows the outline of the shape, but is perpendicular to it, like railroad ties.

This afternoon I had the realization that the spatial brightness gradients in a bulls-eye are all lined up with the center of the circle (more formally, the gradient along an arc intersects the center of the arc).

Here’s an image where I’ve extended the brightness gradients out from each edge, by drawing a dim line through each point oriented along the gradient.  (Gradients are computed in OpenCV using cv::Sobel to get the X and Y components, but to draw dim lines I needed to rasterize them myself.  It’s a little faster if you skip low-magnitude gradients, which probably aren’t part of our high-contrast target.)

Black image with thin white gradient lines scattered around.

Now that we’ve drawn dim white lines oriented along the gradients, they all intersect at the center of the bulls-eye.  And this isn’t just for debugging: I can efficiently count intersecting lines by rasterizing them all.  So now I have a much easier computer vision problem: find the bright spot!
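
Here’s a minimal sketch of that vote accumulation in OpenCV, under my own choices of gradient threshold and line length (the real code is linked above):

   // Sketch: accumulate "votes" by drawing a dim line through every strong-
   // gradient pixel, oriented along its gradient, into a float image.
   #include <opencv2/opencv.hpp>
   #include <cmath>

   cv::Mat gradientVotes(const cv::Mat &gray, float minMag = 60.0f, int halfLen = 80)
   {
       cv::Mat gx, gy;
       cv::Sobel(gray, gx, CV_32F, 1, 0);   // X component of the gradient
       cv::Sobel(gray, gy, CV_32F, 0, 1);   // Y component of the gradient

       cv::Mat votes = cv::Mat::zeros(gray.size(), CV_32F);
       for (int y = 0; y < gray.rows; y++)
         for (int x = 0; x < gray.cols; x++) {
           float dx = gx.at<float>(y, x), dy = gy.at<float>(y, x);
           float mag = std::sqrt(dx * dx + dy * dy);
           if (mag < minMag) continue;      // skip weak gradients (noise, and faster)
           dx /= mag;  dy /= mag;
           // Rasterize a dim line through (x,y) along the gradient direction.
           cv::Point a(x - int(dx * halfLen), y - int(dy * halfLen));
           cv::Point b(x + int(dx * halfLen), y + int(dy * halfLen));
           cv::LineIterator it(votes, a, b);
           for (int i = 0; i < it.count; i++, ++it)
               *(float *)*it += 1.0f;       // one vote per crossing line
         }
       return votes;   // bulls-eye centers show up as bright peaks
   }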

Finding the single brightest spot would be easy, but I wanted to support multiple bulls-eyes for multiple robots, and the program should be smart enough to realize it’s not seeing anything, so I needed to be a little smarter.  Finding all points brighter than some threshold would work too, except a big clear bulls-eye might have lots of points near the center that are all above the threshold.  So what I do is find all points brighter than a threshold (currently 100 crossing gradients) that are also brighter than any nearby point.
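
One way to implement that test is a dilation (a sliding-window maximum), then keeping only pixels that match their neighborhood maximum and clear the threshold.  The 100-vote threshold is from the post; the neighborhood size is a guess:

   // Sketch: find points brighter than a threshold that are also at least as
   // bright as everything in their neighborhood.
   #include <opencv2/opencv.hpp>
   #include <vector>

   std::vector<cv::Point> findBullseyes(const cv::Mat &votes,      // CV_32F vote image
                                        float threshold = 100.0f,
                                        int neighborhood = 15)
   {
       cv::Mat localMax;
       cv::dilate(votes, localMax,
                  cv::getStructuringElement(cv::MORPH_RECT,
                                            cv::Size(neighborhood, neighborhood)));
       std::vector<cv::Point> centers;
       for (int y = 0; y < votes.rows; y++)
           for (int x = 0; x < votes.cols; x++) {
               float v = votes.at<float>(y, x);
               if (v >= threshold && v >= localMax.at<float>(y, x))
                   centers.push_back(cv::Point(x, y));   // a local peak: one bulls-eye
           }
       return centers;
   }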

Here’s the final result, where I’ve automatically identified the bulls-eye, and figured out how big it is and how it’s oriented.  The computer can actually keep up with video, doing this 30 times per second, and can reliably track a dozen bulls-eyes at once!

To summarize, the code (1) finds strong brightness gradients, (2) extends lines along those gradients, and (3) finds where many of those lines cross.  Each bulls-eye center has tons of crossing gradients, one per pixel in the edges of the bulls-eye.  This works, and it’s very hard to fool because of all the ‘votes’ for a true bulls-eye–very few other objects have so many gradients that all converge at a single point.  We did manage to get a white Rover5 wheel to count as a fake bullseye against a black monitor background, but it wasn’t easy.

How accurate is it?  Really accurate!  We can track a 3-inch bulls-eye image from 8 feet away with sub-pixel precision–the random location standard deviation is under 0.05 pixels; worst-case location error is only +-0.2 pixels, or under 1 millimeter!  I’m also estimating orientation by taking one quadrant out of the bulls-eye, and comparing the gradient-derived center with the center of mass; this orientation estimate is less reliable but still has a variance of only 2-3 degrees, and +-10 degrees worst case.

There are still problems; in particular if you tilt the bulls-eye more than about 45 degrees, the circles become ellipses and the gradients don’t line up.  If the shape moves too fast, the blurring destroys the gradients.  Also, if the bulls-eye is too close to the camera, my little gradient lines are too short to reach the center and the program fails.

If you’ve got a webcam and a command line, you can get the code and try it like this:

   sudo apt-get install git-core libopencv-dev build-essential
   git clone http://projects.cs.uaf.edu/cyberalaska
   cd cyberalaska/vision/bullseye
   # print out images/bullseye.pdf to use as the tracking target
   make
   ./track

Once this setup is working reliably, we’ll package up an easy-to-use robot tracking server for you!

OrionDev-GK12 Edition

Here’s a small zip file that contains everything you need to create 2D graphics in C++.  It comes pre-packaged with OpenGL, GLU, GLUT, GLEW, GLUI, and SOIL.  Several examples are included.

LINK

Instructions for installation:

  1. Download the zip file.
  2. Unzip it.
  3. Open the oriondev-gk12 folder and run editor.bat.
  4. Open an example file to get started!

This zip is for Windows, but if anyone wants to use this on a Unix system feel free to use our libraries:  msl (Mike’s Standard Library) and osl (Orion’s Standard Library).  The libraries are located in:  oriondev-gk12/compiler/include

Two Kinects: surprisingly functional!

The Kinect is a really handy sensor for robotics.  We’ve been talking about having one Kinect mounted on the wall, to impose a simple global coordinate system; and separate obstacle detection Kinects mounted on each robot.

Surprisingly, multiple Kinects don’t seem to interfere with each other too badly–the PrimeSense dot detection is smart enough to preferentially recognize its own dots over the dots from the other Kinect.

Image from a Kinect, with a second Kinect active on the left side of the frame. Despite the bright glare from the second Kinect’s IR emitter, there is almost no interference in the depth image.

Rover 5 platform test run

Mike just ordered a Sparkfun Rover 5 chassis ($60), and Steven decided to test drive it at 12V using his new quad-bridge driver board, in turn hooked to an Arduino Mega (for the interrupt lines) and an XBee for wireless serial communication.

http://www.youtube.com/watch?v=ZD30zdlazpc

It’s a nice little platform, big enough to put a laptop and some actuators on, and it moves authoritatively when driven at 12V.  It tracks quite straight even without using the onboard encoders.  Mark, from Mount Edgecumbe, suggested putting on Mecanum wheels, so the chassis can translate sideways too.

We’ll work on a decent mounting system, so all the electronics don’t fall out when the thing tips over!

Using Kinect + libfreenect on Modern Linux

Here’s how I got libfreenect working on my Ubuntu 12.04 machine, running Linux kernel 3.2.   Generally, I like libfreenect because it’s pretty small and simple, about 8MB fully installed, and gives you a live depth image without much hassle.  The only thing it doesn’t do is human skeleton recognition; for that you need the much bigger and more complicated OpenNI library (howto here).

Step 0.) Install the needed software:
sudo apt-get install freeglut3-dev libxmu-dev libxi-dev build-essential cmake usbutils libusb-1.0-0-dev git-core
git clone git://github.com/OpenKinect/libfreenect.git
cd libfreenect
cmake CMakeLists.txt
make
sudo make install

Step 1.) Plug in the Kinect, both into the wall and into your USB port. Both the front LED and the power adapter plug LED should be green (sometimes you need to unplug and replug several times for this). lsusb should show the device is connected:
lsusb
… lots of other devices …
Bus 001 Device 058: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor
Bus 001 Device 059: ID 045e:02ad Microsoft Corp. Xbox NUI Audio
Bus 001 Device 060: ID 045e:02ae Microsoft Corp. Xbox NUI Camera

Step 2.) Run the libfreenect “glview” example code:
cd libfreenect/bin
./glview
Press “f” to cycle through the video formats: low-res color, high-res color, and infrared. The IR cam is very interesting!

The source code for this example is in libfreenect/examples/glview.c.  It’s a decent place to start for your own more complex depth recognition programs: equations to convert depth to 3D points here!
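
If you just want the geometry, here’s the usual pinhole back-projection from a depth pixel to a 3D point in the camera’s coordinate system.  The focal length and image center below are rough, typical values for the Kinect depth camera, not a real calibration:

   // Sketch: convert one depth pixel (u,v, depth in meters) to a 3D point.
   struct Point3 { double x, y, z; };

   Point3 depthPixelTo3D(int u, int v, double depthMeters)
   {
       const double fx = 580.0, fy = 580.0;   // approximate focal lengths, in pixels
       const double cx = 320.0, cy = 240.0;   // approximate center of the 640x480 image
       Point3 p;
       p.z = depthMeters;                     // depth is distance along the optical axis
       p.x = (u - cx) * depthMeters / fx;
       p.y = (v - cy) * depthMeters / fy;
       return p;
   }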

———— Debugging Kinect connection ——————-

Number of devices found: 1
Could not open motor: -3
Could not open device
-> Permissions problem.
Temporary fix:
sudo chmod 777 /dev/bus/usb/001/*

Permanent fix:
sudo nano /etc/udev/rules.d/66-kinect.rules

Add this text to the file:
 # ATTR{product}=="Xbox NUI Motor"
 SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02b0", MODE="0666"
 # ATTR{product}=="Xbox NUI Audio"
 SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ad", MODE="0666"
 # ATTR{product}=="Xbox NUI Camera"
 SUBSYSTEM=="usb", ATTR{idVendor}=="045e", ATTR{idProduct}=="02ae", MODE="0666"

sudo /etc/init.d/udev restart
—————–
Number of devices found: 1
Could not claim interface on camera: -6
Could not open device
-> The problem: modern kernels have a video driver for Kinect.
Temporary fix:
sudo modprobe -r gspca_kinect

Permanent fix:
sudo nano /etc/modprobe.d/blacklist.conf
Add this line anywhere:
blacklist gspca_kinect
——————-
If lsusb only shows the motor, not the audio or the camera:
Bus 001 Device 036: ID 045e:02b0 Microsoft Corp. Xbox NUI Motor

This means the Kinect is NOT powered via the 12V line!

Front green light blinking: Kinect is plugged into USB.
AC plug cable green light: 12V power is connected.

Solution: plug in the power cable!  If it is plugged in, unplug and replug it.

 

Kinect + Parrot Drone = ?

We’d really like to do interesting things with indoor UAVs, like the fun little Parrot AR.Drone 2.0.  Outdoors, you have GPS (see ArduPilot), but indoors GPS isn’t accurate enough, even if you could get good reception.

Enter the Kinect.  It returns a live 2D + depth image that can be converted to a true 3D point cloud quite easily, at 30 frames per second.  If you have a beefy enough indoor UAV, you can mount the Kinect on the UAV, but the Kinect’s weight, power, and processing requirements make this expensive.  I’m a fan of mounting the Kinect on the wall, off to the side, where it can see most of the room (it’s easy to rotate the 3D points into any coordinate system you like).
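
For example, once you know the Kinect’s orientation and position on the wall, moving every point into room coordinates is just one rotation and one translation per point.  This is only a sketch; getting R and the camera position (by measuring the mount, or fitting known floor points) is the real work:

   // Sketch: transform a Kinect-frame point into room coordinates.
   struct Vec3 { double x, y, z; };

   Vec3 kinectToRoom(const Vec3 &p, const double R[3][3], const Vec3 &camPos)
   {
       Vec3 r;   // r = R * p + camPos
       r.x = R[0][0] * p.x + R[0][1] * p.y + R[0][2] * p.z + camPos.x;
       r.y = R[1][0] * p.x + R[1][1] * p.y + R[1][2] * p.z + camPos.y;
       r.z = R[2][0] * p.x + R[2][1] * p.y + R[2][2] * p.z + camPos.z;
       return r;
   }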

Together, you have a sub-$500 UAV with a 3D position sensor.  The only problem?  The UAV is thin and black, so the Kinect can’t see it beyond about 10 feet, even with the big indoor hull attached:

Parrot AR.Drone 2.0 seen in Kinect field of view.
I’ve circled a Parrot AR.Drone flying in the Kinect depth image. No valid depth samples were returned, so it shows up as black (unknown depth).

One solution is to attach a small reflector, which can be as simple as a piece of paper.  Here, I’ve got a small 8×4 inch pink piece of paper attached to the front of the drone, to give the Kinect something to see.

With an 8 by 4 inch pink sheet of paper attached, the drone is clearly visible with depth in the Kinect image.

This works quite reliably, although the paper does mess up the vehicle dynamics somewhat.  Anybody tried this?  Would painting the indoor hull help the Kinect see the UAV?

The Arduino-Based Dual Relay Control Board

This project allows us to use an Arduino to control any AC-powered device, including lamps and small heaters.  It’s limited to 20A, but a lot can be done with 20A.  The picture below shows it running two 12V PC case fans.  The Arduino Uno controlling this device is located underneath the board (out of view in the picture).

A video of this device in operation may be found here:

http://www.youtube.com/watch?v=7rWQjZVRCU8

Datasheet, parts list, and Eagle files for this device can be found here.  The GNU General Public License applies to this device and its files.  Please let us hear your comments or questions on this device!

DataSheet:

Relay Board DataSheet

Parts List:

Parts List for Relay Board

Eagle Files:

Relay Board V3

 

Meet the Team: Steven

I am a Computer Engineering Graduate Student, working toward my PhD.  My focus is in embedded systems, wireless sensor networks, robotics, and networked robots.  My Master’s thesis covered the construction and programming of a small network of tiny robots that could communicate with each other and explore a model building in a cooperative, coordinated manner.

 A video of a successful exploration using 3 robots can be seen here:

http://www.youtube.com/watch?v=GipvllwpOpU

Before I built networks of these robots, I had built just one robot and used it for a college competition known as Micromouse.  Micromouse is a competition in which a robot must explore, map, solve, and then race through a small 16 cell x 16 cell maze.  The robot knows only the size of the maze and the location of the start cell and stop cell when it begins its run.  It starts by exploring and mapping the maze to find routes from start to finish, then races along the route that gives it the best time.  The robot with the best time wins the competition.  My team and I took 1st place in 5 of the last 6 years at the NW/NE area competitions, and 2nd place the other year.  A video of our winning run from year 5 can be found here:

http://www.youtube.com/watch?v=a11M-kZXASA&feature=relmfu

The exciting speed run occurs at time 3:40, so zoom ahead to that point if you want to skip the slower mapping run.

For years I’ve been fascinated with the idea of functional art: devices that perform useful tasks yet look good while they do it, or devices that are mostly artistic in nature yet also have some function.  A few years ago I did my first functional art design: the Digilog clock, a clock with an analog design that uses LEDs to represent the hands of the clock.

Lately I’ve also been growing increasingly fascinated with UAVs, an outgrowth of my years as a private pilot and ultralight flight instructor.  I’ve been informally trained to operate the Aeryon Scout, a nimble and easy-to-fly quadcopter, and have been experimenting with open source hardware and software solutions, specifically ArduPilot, the Arduino-based open source autopilot system.  I look forward to working more with these systems in the years ahead.

Meet the Team: Mike

I’m Mike, and I too have a soft spot for robots!  I’m a student seeking a degree in Computer Science at the University of Alaska Fairbanks.

Background

  • Born and raised in Fairbanks, AK.
  • Got my first computer when I was 4.
  • Took my first computer apart when I was 5.
  • Started programming when I was 8.
  • Started building robots when I was 15.

Looking forward to building some new hardware and software!