Using OpenCV with Robots?

Hi, I’ve recently downloaded and used a library called OpenCV, which does some really neat things with webcams. What I’m hoping to do is mount a webcam either on the robot or above it, and use what the webcam sees to give the robot instructions. I was curious if anybody has done this before, or if anybody has any ideas for interfacing it with a computer.

I guess the biggest problems with this are:

  • How to get the VEX Cortex to transmit data over serial (I can’t seem to find any way of doing it)
  • Is there any way to transmit webcam data wirelessly? (I could just buy a really long USB extension cord.)

For reference, I do know how to program with OpenCV already.

EDIT: When I say “robots”, I do mean the VEX Cortex. Also, when I say “webcam”, I mean a Logitech one, not the VEX camera, since it only has composite output.

Thanks

The Cortex is probably going to be too locked down and underpowered for vision processing. However, the ARM9 controller should be fine.

I’ve been investigating object tracking using the C++ OpenCV library and a Raspberry Pi as a side project. My conclusion right now is that the Raspberry Pi (which is more powerful than an ARM9) is only just powerful enough to pull off HLS-based image filtering, though my tests are still early days and not very optimized.
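
For anyone curious what I mean by HLS-based filtering, here’s a minimal sketch of the idea (the inRange threshold values are placeholders you’d tune for your target colour):

```cpp
// Minimal HLS colour-filtering sketch with OpenCV in C++.
// The inRange bounds below are made-up values - tune them for your object.
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);                 // open the default webcam
    if (!cap.isOpened()) return 1;

    cv::Mat frame, hls, mask;
    while (cap.read(frame)) {
        cv::cvtColor(frame, hls, cv::COLOR_BGR2HLS);   // BGR -> HLS colour space
        // keep only the pixels whose hue/lightness/saturation fall in range
        cv::inRange(hls, cv::Scalar(20, 60, 80), cv::Scalar(35, 200, 255), mask);
        cv::imshow("mask", mask);
        if (cv::waitKey(1) == 27) break;     // Esc to quit
    }
    return 0;
}
```

The colour-space conversion is the expensive step here, which is why the infrared approach below (no conversion at all) is so much faster.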

I’ve seen interesting, simpler implementations of object tracking that use infrared lasers and no filtering. It does appear to be a much faster approach (as no conversion to HLS colour space takes place), but this isn’t really an option for Sack Attack.

Either way, this isn’t something that can be done at the high school level, but it is absolutely possible in the college competition.

A shameless plug, but AURA is working with a National Instruments CompactRIO team, who are implementing computer vision for their upcoming competition. The cRIO is significantly more powerful (and more expensive!) than other microcontrollers, but this is a good example of what computer vision can achieve: http://www.youtube.com/watch?v=U4BD0iHdtXk&feature=plcp

To answer your actual questions: you can use the UART ports on the Cortex to transmit serial data. We have done so successfully using an Arduino, and it should be possible to use the GPIO pins on the rPi to transmit directly to the Cortex. As a side note, if you are using the competition template, some serial data (I believe for the VEX LCDs) gets sent over UART2.
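
On the rPi side, transmitting over the GPIO UART is just writing bytes to a serial device. A rough sketch of that (the device path, baud rate, and packet layout here are all assumptions; match whatever your wiring and your Cortex code actually expect):

```cpp
// Rough sketch: send a small packet out of a Linux serial port (e.g. the
// Raspberry Pi GPIO UART) towards the Cortex. /dev/ttyAMA0 and 115200 baud
// are assumptions - use whatever your setup actually expects.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("/dev/ttyAMA0", O_WRONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                // raw mode: no line editing or translation
    cfsetospeed(&tio, B115200);     // output baud rate
    tcsetattr(fd, TCSANOW, &tio);

    // e.g. a header byte followed by x and y coordinates packed one byte each
    unsigned char packet[3] = {0xFF, 120, 95};
    write(fd, packet, sizeof(packet));
    close(fd);
    return 0;
}
```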

We are processing images on board, but there are some wireless cameras available that send video over wireless LAN, which is what our cRIO team uses (you can see in the video that the camera, and the whole robot, are actually still tethered, but it does have 802.11 Wi-Fi support).

~Matt.

First of all, some of the stuff you have just said is gibberish to me, so excuse me if I am a bit inaccurate here. However, I know that Marty from Massey made image tracking work on a Cortex running ROBOTC. Here’s the thread, btw. I am very likely missing some key point, though, that makes a big difference…

~George

I was planning on computing everything on a laptop, as I don’t have the money/resources to buy a more powerful processor such as the ARM9. I’ve just noticed that there seems to be OpenCV for Android, and I happen to have an old Droid and an Archos 5 lying around, so I might have some luck with those more powerful processors. Also, the Droid has a camera built in. Hopefully I can figure something out with those.

Ohh, alright. My team is just getting a Cortex this upcoming season, so I’ve never actually used one before. I have a Seeeduino and an Arduino Nano lying around; however, I don’t have any XBee modules, which tend to be on the more expensive side (once again, I can just use a very long USB cable or connect it to an Android device).

I dunno if you meant for programming/hardware or something else, but if it’s programming/hardware, I take that as a challenge :wink:

That’s pretty cool, I’ll have to take a deeper look into that.

Since I’m pretty new to object tracking, tell me if I’m wrong, but in the first video, does the processor take each pixel and check whether it contains at least a certain percentage of the target colour? If it does, it averages that pixel’s position with the other pixels that passed the threshold. Then whatever average you get after going through all the pixels is roughly where the ball is centered, and you PID the motors to that point.
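
If I understand my own description right, that averaging step is basically an image centroid, which OpenCV can compute directly. A sketch (assuming you already have a binary mask of the matching pixels, e.g. from cv::inRange):

```cpp
#include <opencv2/opencv.hpp>

// Find the centroid of all pixels that passed the colour threshold.
// 'mask' is a binary image, e.g. the output of cv::inRange.
cv::Point centroidOfMask(const cv::Mat& mask) {
    // Image moments hold the sums we need: m00 is the count of white pixels,
    // m10 and m01 are the sums of their x and y coordinates.
    cv::Moments m = cv::moments(mask, /*binaryImage=*/true);
    if (m.m00 == 0) return cv::Point(-1, -1);   // nothing matched
    return cv::Point(int(m.m10 / m.m00), int(m.m01 / m.m00));
}
```

The difference between that point and the centre of the frame would then be the error term fed into the PID loop.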

Thanks for the comments!

I meant in the rules for HS Sack Attack; I’m not doubting you :stuck_out_tongue:

XBees would be a great way of transmitting the data. If you are not restricted to having everything on board (i.e. for competition), off-board processing is probably best, as much more processing power is usually available.

Good luck!

~Matt.

Oh alright, my bad XD, I wasn’t aware that you could use things like that in the college version of Sack Attack.

Alright thanks!

The thread that you linked to is about object tracking by color. I wouldn’t call it easy, but as far as vision processing goes, it falls on the simpler side of the spectrum. OpenCV is usually used for much more complicated visual analysis. Even if you could interface the Cortex with OpenCV, I doubt it would be able to handle anything much more difficult than color tracking.

Thanks for explaining :smiley: it makes much more sense now.

~George

OpenCV has a tutorial project called ColorBlobDetect. Part of that example creates an array of regions that enclose a colour you can specify. Find the centres of those regions, and the x,y co-ords of a centre can be passed to a Cortex etc. via an RS232 link. The Cortex can then take actions to reduce the ‘offset’ of the x,y co-ords to zero, the offset being the difference between the desired and measured co-ords. The trick is to decide which region to follow, and OpenCV has functions to help you with this.
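
ColorBlobDetect itself is an Android (Java) sample, I believe, but the same flow is easy to sketch in C++. The thresholds and the ‘follow the biggest region’ rule below are just placeholder choices:

```cpp
#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

// Threshold the frame, find the regions that enclose the target colour,
// pick one (the biggest here), and return how far its centre sits from the
// middle of the image. The Cortex then steers to drive this offset to zero.
cv::Point2f trackOffset(const cv::Mat& frame) {
    cv::Mat hls, mask;
    cv::cvtColor(frame, hls, cv::COLOR_BGR2HLS);
    cv::inRange(hls, cv::Scalar(20, 60, 80), cv::Scalar(35, 200, 255), mask);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return cv::Point2f(0.f, 0.f);   // nothing to follow

    // the 'which region to follow' decision - a simple rule: take the largest
    auto biggest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });

    cv::Rect box = cv::boundingRect(*biggest);
    cv::Point2f centre(box.x + box.width / 2.f, box.y + box.height / 2.f);
    // offset = measured centre minus desired centre (the middle of the frame)
    return centre - cv::Point2f(frame.cols / 2.f, frame.rows / 2.f);
}
```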

I feel like I should clear up what I said in my previous post. While OpenCV can do blob detection, as you point out, I think using it on the Cortex for that reason alone is like pulling in a whole framework for no reason. It’s like installing Java to write a program that multiplies two numbers together: you can do it, but it probably makes more sense to write a short program in assembly (speaking in terms of processor resources, obviously). OpenCV is just a library though, and it’s open source and free, so there’s no reason you can’t go into the code and use their algorithms.