It’s not good for current skills or autonomous
change my mind
We have a middle school team who uses their vision sensor in the 15-second auton to align their shots on the flags. They also use it to navigate to push the low flag during their one-minute skills auton, along with high-flag alignment. USEFUL!
In programming skills we use the vision sensor to line up all of our shots and currently have the 4th highest score in the world. It helps make our programming skills much more consistent than it otherwise would be.
We use it in auton and driver control. IT'S VERY USEFUL. It takes an extra half second to line up and consistently hits flags. We seem to be one of the only teams in Wisconsin to successfully use it in practical situations. There is a use for them, but they have to be programmed to align with the flags quickly.
My programmer was trying to do this in PROS. Is there any chance that you could share that code with me, even if it isn't PROS?
They started with this post on VEX Forum, and then they went from there.
We use the vision sensor to see if we have a ball in our shooter. It's a relatively simple task, but it has sped up our shooting, allowing us to put up a score over 30 currently and hopefully over 40 at state.
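This isn't the poster's actual code, but a minimal sketch of the idea: with a color signature trained on the ball and the sensor aimed into the shooter, you can treat "the largest matching object is big enough" as "ball loaded." The threshold values here are made-up numbers you would tune for your own mounting distance.

```cpp
#include <cassert>

// Hypothetical ball-detection sketch (not this team's actual code).
// The width/height come from the bounding box of the largest object
// matching the ball's color signature, as reported by the vision sensor.
constexpr int MIN_BALL_WIDTH  = 40;  // pixels; tune for your sensor placement
constexpr int MIN_BALL_HEIGHT = 40;  // pixels; tune for your sensor placement

// Returns true when the detected object is large enough to be a loaded ball
// rather than signature noise from the background.
bool ballPresent(int width, int height) {
    return width >= MIN_BALL_WIDTH && height >= MIN_BALL_HEIGHT;
}
```

A loop in driver control can then gate the flywheel or indexer on `ballPresent(...)` instead of firing blind.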
Wouldn’t a limit switch be simpler, cheaper, and easier for the same purpose?
Maybe if the ball were always in the same spot. Using the vision sensor allows for much more flexibility in ball positioning in an intake. Not to mention it leaves a legacy port open for one of the sensors that can't use the new smart ports.
We had teams that were able to use it effectively from the front half of the field. However, at our state competition, there were banners behind the fields containing both red and blue. The organizers were asked if they could remove them prior to the start of the matches, to no avail. One of our teams volunteered to go out and purchase sheets to block the banners… this offer was not accepted. As a result, many teams had to back out their vision system code on the fly. All that time and effort…
I was at Wisco States, team 14888D, and I can confirm that team 1200Z has absolutely amazing vision sensor code for shooting flags. It is super accurate every time, too. The key thing with the vision sensor is that you just need to get the code working really well and then practice using it.
It is for this exact reason that my team chose not to use vision sensor.
Great job on that skills score! I haven’t really worked with the vision sensor, but no one in my region can get it to improve shot accuracy. Could you describe the algorithm you’re using?
Thanks! We're just tracking the green on the flags and using a P loop to aim the robot. It's simple but very effective.
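For anyone curious what that looks like, here's a minimal sketch of a P loop on the vision sensor's x-coordinate (not 1200Z's actual code; the gain and power cap are placeholder values you would tune on your own robot):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical P-loop aiming sketch (not 1200Z's actual code).
// The V5 vision sensor's image is 316 pixels wide, so 158 is the
// horizontal midpoint; the sensor reports each object's center x.
constexpr int    VISION_CENTER_X = 158;
constexpr double KP        = 0.6;   // proportional gain; tune on the robot
constexpr double MAX_POWER = 60.0;  // cap turn power to avoid overshooting

// Given the x-coordinate of the largest green-signature object,
// return the turn power to apply (+ = turn right, - = turn left).
// Feed this to the left drive and its negation to the right drive
// each loop iteration until the error is near zero.
double aimPower(int objectX) {
    int error = objectX - VISION_CENTER_X;  // pixels off-center
    double power = KP * error;
    return std::clamp(power, -MAX_POWER, MAX_POWER);
}
```

The loop converges because the turn power shrinks as the flag approaches the image center, which is why it only costs about half a second before the shot.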
Yeah, detecting the green makes a lot more sense. Thank you so much for your help!
You’re welcome! I’m glad I could help.
In California some high schools have red or blue bleachers behind the fields. I had to scrap my vision sensor auto-aim. It is relatively useless unless the RECF has the game use some sort of tarp or something to block outside interference (or reflective tape on the targets, like in FRC).
Anyways, another big thing is that you need to calibrate it. Wisco State was broken up into two days: the first 5 matches Friday night, then the other 5 and eliminations on Saturday. The lighting was different on the two days, so we had to calibrate it for each day. Also, the skills field was upstairs, and the lighting up there was completely different. We are going to the US Open and the World Championship, and we aren't too fearful because the lighting seems consistent in both of those, unless we make it to the Dome.
Also, a funny story about our vision sensor. In one of our qual matches on Friday, we were on the red alliance and were detecting blue flags for autonomous. 5509A (who we would eventually pair up with and make it to the finals with) had blue shirts on and were near the field. Our vision sensor picked them up instead of the flag, so it aimed for them, and we ended up losing that auton period. We did end up winning the match regardless, so we didn't sweat it too much.
At Worlds, vision sensors can be an issue because there is a lot of red. Also, we like to calibrate our sensor on the field; so far we have had no issues doing that, but with how fast-paced Worlds is, sometimes they might not make time for it. I would suggest bringing your own flag and calibrating against it under the lighting of the room you're in. We also plan on making vision sensor code that senses the green along with the color of the flag; that should filter out distractions from other red and blue things surrounding the fields.
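The green-plus-color idea above can be sketched as a simple proximity check: only accept a red or blue detection as a flag if a green-signature object sits right next to it, since the flags pair the alliance color with green while bleachers and banners don't. This is a hypothetical illustration of the idea in the post, not their actual code; `MAX_GAP` is a made-up tolerance to tune.

```cpp
#include <cassert>
#include <cstdlib>

// Bounding box of a detected object (center coordinates, as the
// vision sensor reports them).
struct Box { int x; int y; int w; int h; };

constexpr int MAX_GAP = 30;  // pixels of allowed separation; tune for distance

// Hypothetical filter: a red/blue object only counts as a flag if a
// green object is adjacent to it, i.e. the gap between box centers is
// no more than half their combined size plus a small tolerance.
bool looksLikeFlag(const Box& colored, const Box& green) {
    int dx = std::abs(colored.x - green.x);
    int dy = std::abs(colored.y - green.y);
    return dx <= MAX_GAP + (colored.w + green.w) / 2 &&
           dy <= MAX_GAP + (colored.h + green.h) / 2;
}
```

With this check, a blue shirt in the stands gets rejected because there is no green object next to it, while a real flag passes.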