Vision sensor for aiming at goal

I coach a middle school team and my son is the programmer. He loves it and is always trying to figure out new things. Our robot has a color sensor for the rollers and an inertial sensor for programming turns. He asked me to buy him a vision sensor. He thinks he can program it to aim at the goal for Programming Skills, and possibly in match play from close to the goal. Are any teams doing this successfully? Are there any resources or example code?


Yes, this is something many teams have thought of and experimented with. It is a viable way to locate the goal, but in a competition scenario it may mistake other objects for the goal. This can be mitigated with careful tuning or by limiting its field of view.


One of the challenges with the vision sensor is that different lighting at competitions can throw it off. It can be done, but an alternative, and in my opinion more reliable, solution is odometry. It requires trigonometry and a bit of more advanced coding, but there are many resources, and it could be an awesome learning experience since it would help with future high school math classes.


Could you clarify what you mean by odometry? My son can do some trig if that is what is needed. Right now he uses the motor encoders to drive in programming and the inertial sensor for turns. Unfortunately, we don't have GPS sensors or the field strips at our school, if that is what you are suggesting. I think he would love it if we could eventually get those.

This should help:


Odometry is just the process of tracking your robot's position across the field; you need other programs to use the data (such as a loop that auto-aims the bot based on the odometry position). It sounds like all you need is three optical shaft encoders, or two optical shaft encoders plus the inertial sensor, to implement odometry. Because of their slop, the built-in motor encoders are not recommended for odometry. You don't need any GPS sensors to do this. You will also need to build tracking wheels: small omni wheels with a shaft encoder, mounted under the robot so they always touch the ground.

It sounds like your son already uses PID, given that he uses the motor encoders and inertial sensor, so he shouldn't have too many issues programming odom, since he has experience with things like pulling data from an encoder and using it. Good luck!
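To give a feel for the math involved, here is a minimal sketch of one odometry update step in plain Python. It assumes a forward tracking wheel, a sideways tracking wheel, and an inertial sensor for heading; all function and variable names are hypothetical, and a real implementation would also correct for each wheel's offset from the robot's center and for arc motion:

```python
import math

def update_position(x, y, d_fwd, d_side, heading_prev, heading_now):
    """One odometry step (simplified, hypothetical sketch).

    d_fwd / d_side: tracking-wheel travel (inches) since the last loop.
    headings: inertial readings in radians (0 = facing +x on the field).
    """
    # Averaging the headings approximates the direction of the arc traveled
    avg_heading = (heading_prev + heading_now) / 2.0
    # Rotate the robot-local displacement into field coordinates
    x += d_fwd * math.cos(avg_heading) - d_side * math.sin(avg_heading)
    y += d_fwd * math.sin(avg_heading) + d_side * math.cos(avg_heading)
    return x, y
```

Running something like this in a fast loop (every 10 ms or so) keeps the (x, y) estimate current; the trig is exactly the kind he would see again in high school math.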


One thing that I'll comment on is that while odometry in itself is a powerful tool, its implementation given VEX hardware can only be so accurate. Sudden shocks like collisions or heavy defense will throw off your odometry and render it useless. After experimenting with it myself earlier in the season, I found that under heavy defense it couldn't return an accurate position of where the robot actually was within 20 seconds or so. Of course, this is anecdotal, and other teams may find more or less success with it.

I’ve only slightly experimented with the vision sensor, but I know teams who use it. After scrimmaging with them and then competing with/against them the following week, I can say vision is a very powerful tool if implemented well, but it suffers in sub-optimal lighting conditions. At our competition yesterday, there were extremely large windows on both sides of the room, which changed the lighting as the day went on. Blue signatures were essentially impossible to track because of the windows, and signatures had to be recalculated on the fly. However, I still think vision is much better than odometry for in-match aim assisting.


Others with more experience can add to this, but our experience with the vision sensor went like this:

  1. hook up the USB cable to the sensor
  2. aim the sensor at a COLORED item you want to identify by COLOR
  3. take a snapshot and store it in the sensor’s own memory
  4. disconnect the USB cable
  5. program the vision sensor to detect that color within certain degrees of view

If it’s not obvious… it’s really made to identify a colored item that you are already VERY close to or have already acquired. The presence of other colored items in the viewable area can and will cause false positives.
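To make the false-positive problem concrete, here is a hypothetical sketch (plain Python, no VEX API calls) of the aiming decision in step 5. One common mitigation is to aim only at the largest matching blob, which rejects small stray objects; the tuple format and all names here are made up for illustration:

```python
# The V5 vision sensor image is roughly 316 px wide; each detected object
# is represented here as a (center_x, width) tuple, standing in for what
# a real snapshot call would return for a trained color signature.
IMAGE_WIDTH = 316
CENTER_X = IMAGE_WIDTH // 2   # 158: horizontal middle of the image
DEADBAND = 10                 # pixels of error we tolerate before turning

def aim_correction(objects):
    """Return None if nothing matched the signature, 0 if on target,
    -1 to turn left, +1 to turn right (toward the biggest blob)."""
    if not objects:
        return None
    # Pick the widest blob: most likely the goal, not a stray colored item
    cx, _width = max(objects, key=lambda o: o[1])
    error = cx - CENTER_X
    if abs(error) <= DEADBAND:
        return 0
    return 1 if error > 0 else -1
```

Even with this filter, a large wrongly-colored object (a shirt, a banner) that matches the signature will still win, which is why tuning and a narrow field of view matter so much.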

The Purdue Sigbots link explains it very well, but it is position tracking using either three tracking wheels with three encoders, or two tracking wheels with two encoders plus one inertial sensor. The main advantage is that you can also sense the distance to the goal, allowing you to regulate the flywheel velocity to hit perfect long-range shots. If it is built right, it can stay on target for a whole match; I speak from the experience of four tournaments this year. We haven’t been off by more than 6 or so inches by the end of the 2 minutes, which from 7 ft away does not make much of a difference. However, I do not doubt that other teams have found more success with the vision sensor, and it also seems like a very viable and reliable solution! Good luck with whichever you go with!
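For the curious, turning the tracked (x, y) pose into an aiming solution is just more trig. This is a hypothetical sketch: the goal coordinates and field frame are made up, and a real program would feed the returned distance into a flywheel-speed lookup or formula:

```python
import math

# Hypothetical goal position in inches, in a made-up field coordinate frame
GOAL = (18.0, 122.0)

def aim_solution(x, y):
    """From the odometry pose, return the heading the robot should face
    (degrees, 0 = +x axis) and the straight-line distance to the goal."""
    dx, dy = GOAL[0] - x, GOAL[1] - y
    target_heading = math.degrees(math.atan2(dy, dx))
    distance = math.hypot(dx, dy)
    return target_heading, distance
```

A turn-to-heading PID (which it sounds like he already has for the inertial sensor) can then chase `target_heading`, while `distance` sets the flywheel speed.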


Here are a couple of matches from yesterday to show the accuracy we got with the auto-aim. It isn’t perfect by any means, but it definitely helps.

We are 20002Y, the big bot in the back shooting long range. I’m a first-year driver and I am not too great at it, so I use the auto-aim as a crutch.


Really cool! I have learned so much from this thread

We had practice today and found out our school has the tracking wheels. My son is going to try it and aim to get it going by January. We already have an inertial sensor. He might not get it to work, but I am sure he will learn a lot trying. He also wants to try a vision sensor, but since we don’t have one, we will try this first.

One note about auto-aiming: if you are good enough at aiming manually (it takes a few hours of practice or scrimmaging), it’s not really worth the effort to code and deal with the annoyances of the vision sensor.


I will second what @JacobM and @AliA said. The vision sensor depends a lot on the lighting in the room, on people moving around the field in bright clothes, and even on doors opening or closing to a brightly lit hallway.

We got it to work pretty well in our lab, but when we went to the competition it failed us completely at the skills field, and we couldn’t figure out why until afterward. It turned out there was a large television behind the field that was messing with the sensor. After they turned it off our program worked, but by then the competition was over.
