So it’s near the end of the season and worlds is approaching. I’m wondering how many teams have been able to get vision working reliably for autonomous and driver control, and how they did it. I have a lot of questions: How well did it work? How much did it benefit you? How did backgrounds/lighting affect it, and how did you cope with varying backgrounds? Are you planning on using it for worlds? Did you use the vision sensor to line up your angle or to adjust your distance? Thanks, and all replies are appreciated.
Personally, I don’t think the vision sensor will be viable for worlds (or any competition) for a couple of reasons. First, it often recognizes objects in the background as field elements, which would really throw things off. At worlds especially, team members very often wear brightly colored shirts, which would confuse the sensor.
Second, the lighting conditions change between competitions, so without doing some extensive testing at worlds, the vision sensor would not be useful. From my (fairly limited) testing, I found that lighting conditions have a huge effect on what the sensor sees.
Just my 2 cents.
tbh normal code seems more consistent than the vision sensor…
Agreed. Vision sensors only seem to make targeting worse, especially for practiced drivers and in auton.
and especially when the lighting conditions make each flag a unique shade of red, even the blue ones.
YES! This exact thing happened at a tournament earlier this season. Thankfully nobody was relying on the sensor (at least not well enough for it to make a difference).
idk why they do stuff like that, it makes it harder for everyone. I actually got headaches while driving.
For what it’s worth, I’m pretty sure exactly nobody utilized a vision sensor at the Arizona high school state championship this past weekend.
And it’s not like we had any obvious background interference either.
It seems to be more of a struggle than it’s worth in practice.
I managed to use a vision sensor quite effectively at multiple tournaments, and I feel that with the right white balance and brightness settings on your vision sensor, it is definitely a viable option for worlds. Of course, that’s just my opinion because it works for my team. Then again, I only use it for match auton and skills auton.
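For reference, here’s roughly what that kind of per-venue setup looks like in a VEXcode V5 C++ project (not R.D.Z.’s actual code; the port, brightness value, and signature numbers are placeholders you’d regenerate with the Vision Utility under the venue’s lighting):

```cpp
#include "vex.h"
using namespace vex;

brain Brain;

// Placeholder signature: regenerate these numbers with the Vision Utility
// under the venue's actual lighting before matches.
vision::signature SIG_RED_FLAG (1, 6823, 8387, 7605, -1323, -677, -1000, 2.5, 0);

// Second argument is the sensor brightness; tune it alongside white balance
// in the Vision Utility so the flag colors separate cleanly from the backdrop.
vision VisionSensor (PORT10, 50, SIG_RED_FLAG);

int main() {
  while (true) {
    // Sanity-check the signature against the venue's backdrop before a match:
    // print the largest match so stray background objects show up immediately.
    VisionSensor.takeSnapshot(SIG_RED_FLAG);
    if (VisionSensor.objectCount > 0) {
      Brain.Screen.printAt(10, 20, "x: %3d  w: %3d",
                           VisionSensor.largestObject.centerX,
                           VisionSensor.largestObject.width);
    }
    task::sleep(50);
  }
}
```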
R.D.Z., you seem to be clearheaded about this in a way nobody else has been. I’ve long wanted sensors in general to be truly viable. Please write a short whitepaper about how you did it and how well it worked. I’d volunteer to help write it, fwiw.
Xenon_27 - glossy surfaces are mirrors. Just because your arena camera sees red doesn’t mean the robot camera will. The arena camera sees the red of the floor because it’s looking downward. Robot cameras will be looking level or upward, so the (hopefully dark) roof and room will be the reflection. True, it might make sense to design field parts without such glossy surfaces, to encourage use of vision. Frankly, the best environment for vision is a white, diffusely illuminated ceiling, not a dark room with downlighting. Torcheres!
I saw a team use it to really good effect in autonomous at UK Nationals at the weekend. I guess its viability does depend a lot on the background, though. It was just black at Nationals.
Team 1200Z Syntax Error from Wisconsin uses the vision sensor very well during both their auton and driver control. Their programmer resets the vision signatures at every tournament so that all the colors are accurate on the given day. During their auton they use normal P loops, and then after turning toward the flags he triggers his vision-sensor horizontal alignment code to make sure the robot is still lined up in case it bumped something. It works really well and even helped them win the Excellence Award at states.
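That post-turn correction might look something like this sketch (not 1200Z’s actual code; VEXcode V5 C++ assumed, and the ports, signature values, gain, and deadband are all placeholders):

```cpp
#include "vex.h"
using namespace vex;

motor LeftDrive (PORT1);
motor RightDrive (PORT2, true);

// Placeholder signature; re-tune it at each tournament as described above.
vision::signature SIG_BLUE_FLAG (1, -3543, -2887, -3215, 8231, 9027, 8629, 2.5, 0);
vision VisionSensor (PORT10, 50, SIG_BLUE_FLAG);

// The vision sensor image is 316 px wide, so ~158 px is dead center.
const int IMAGE_CENTER_X = 158;

// Called right after the encoder/gyro turn toward the flags: nudge the drive
// until the largest flag blob is centered, in case the turn drifted or the
// robot got bumped. Simple proportional correction with a pixel deadband.
void alignToFlag() {
  const double kP = 0.25;      // placeholder gain, tune on the real robot
  const int deadbandPx = 5;

  for (int i = 0; i < 100; i++) {             // give up after ~2 seconds
    VisionSensor.takeSnapshot(SIG_BLUE_FLAG);
    if (VisionSensor.objectCount == 0) break; // lost the flag, bail out

    int error = VisionSensor.largestObject.centerX - IMAGE_CENTER_X;
    int magnitude = (error < 0) ? -error : error;
    if (magnitude <= deadbandPx) break;       // close enough, stop correcting

    double turnPower = kP * error;
    LeftDrive.spin(directionType::fwd,  turnPower, velocityUnits::pct);
    RightDrive.spin(directionType::fwd, -turnPower, velocityUnits::pct);
    task::sleep(20);
  }
  LeftDrive.stop();
  RightDrive.stop();
}
```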
Thanks. I hate it.
It would be nice if VEX released guidelines on the types of lights allowed to light the fields. “Standard white lights of any color temperature are allowed. No colored lights may be used to illuminate the fields during regulation matches,” or something to that effect. Is that so hard?
Last night, the programmer for our team made a button-activated, horizontal-aiming PD controller with autofire and tuned it in an hour (roughly the shape of the sketch below). It was very rough and dirty and hasn’t been optimized at all beyond tuning the k’s. There is so much more you can do with the objects the sensor reports: as long as you retune for the current lighting conditions, the vision sensor could serve as an aimbot, line follower/position tracker, ball tracker, cap color sensor, and so much more.
Also, implementing multiple sensors could lead to some rudimentary lidar.
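Here’s roughly what that button-activated PD aim + autofire loop could look like (a sketch only, assuming VEXcode V5 C++; the drive/indexer motor layout, ports, signature, fire window, and gains are all placeholders, not our actual tuning):

```cpp
#include "vex.h"
using namespace vex;

controller Controller1;
motor LeftDrive (PORT1);
motor RightDrive (PORT2, true);
motor Indexer (PORT3);                 // feeds balls into the flywheel

vision::signature SIG_FLAG (1, 6823, 8387, 7605, -1323, -677, -1000, 2.5, 0);
vision VisionSensor (PORT10, 50, SIG_FLAG);

const int IMAGE_CENTER_X = 158;        // vision image is 316 px wide

int main() {
  // Placeholder gains; as noted above, the k's need re-tuning for the
  // current lighting and the specific robot.
  const double kP = 0.3, kD = 1.2;
  const int fireWindowPx = 4;
  double lastError = 0;

  while (true) {
    // Hold L1 to hand aiming over to the vision sensor.
    if (Controller1.ButtonL1.pressing()) {
      VisionSensor.takeSnapshot(SIG_FLAG);
      if (VisionSensor.objectCount > 0) {
        double error = VisionSensor.largestObject.centerX - IMAGE_CENTER_X;
        double turnPower = kP * error + kD * (error - lastError);
        lastError = error;

        LeftDrive.spin(directionType::fwd,  turnPower, velocityUnits::pct);
        RightDrive.spin(directionType::fwd, -turnPower, velocityUnits::pct);

        // Autofire: once the flag is centered within a few pixels, feed a ball.
        if (error > -fireWindowPx && error < fireWindowPx) {
          Indexer.spin(directionType::fwd, 100, velocityUnits::pct);
        } else {
          Indexer.stop();
        }
      } else {
        // No flag in view: stop correcting rather than chase noise.
        LeftDrive.stop();
        RightDrive.stop();
        Indexer.stop();
      }
    }
    task::sleep(20);
  }
}
```

In a real driver program this would sit alongside the normal tank/arcade code that runs when the button isn’t held.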
or just wait for the new lidar sensor that was announced here at the bottom of the page
Takes time to perfect it. We give our programmer an abundant amount of time to work on code, and he made our vision sensor extremely consistent. Wisconsin doesn’t do weird lighting on the field for its matches, so we don’t worry about it. As long as the lighting is consistent, it works well.
Also, after we perfected the vision sensor code, our programmer got bored and made some self-writing autonomous code lol