I’m on the fence about whether I should buy a vision sensor for my team to compete in Tower Takeover and potential future games. On paper, the vision sensor sounds amazing for autonomous and I’m really tempted to buy it, but most of the time when I see it brought up here on the forum, the overall consensus seems quite negative. What’s your opinion on this?
If you do classes, it’s a good learning tool. And if you can get it to work at competitions, it’s amazing. However, getting it to work in competition is extremely difficult due to the wild variation in tournament quality (lots of different lighting, and there may not be time to calibrate) combined with poor decisions by the GDC in game element design (shiny plastic pieces are harder to track, especially since the vision sensor is limited in that you cannot run custom post-processing on it).
Personally, I’d say it depends on how experienced your kids are. If they are more experienced, go for it; otherwise I wouldn’t.
(I have never tried/tested the vision sensor myself.) From what I hear, the only way to use it is through its built-in color detection algorithm. It can only detect 7 color signatures, and people say that different lighting can completely mess things up. So, yeah, very unimpressive.
Unless VEX allows access to the raw image data from the camera on the V5 (so that you can do the image processing yourself), I wouldn’t want to use it. The V5 could easily handle a program that does image processing far better than what currently runs on the camera. In my opinion you shouldn’t buy one, but I really hope VEX will unlock the vision sensor’s full potential; then I would buy one.
If VEX unlocked the full potential of the vision sensor...
you could train an AI to detect cubes, towers, other robots and which alliance they are on, various markings on the floor, the field perimeter, and much more, under almost any lighting conditions. Stuff like this already exists in the real world and could run on the V5…
Imagine if you had something like an RPi Zero for offloading the computation; that would be amazing.
@JamsG My one comment is that I would recommend getting at least one. It’s a great little teaching tool; the team may or may not use it, but they’ll have it in their toolkit, and $70 is not bad for the educational value, IMO.
I agree with @vexvoltage.
I think we have 2 or 3 in the lab. So far the teams have not decided to use it for competitions, but it’s always good to have around and to get the teams familiar with how to use it.
You never know… maybe one of these seasons it will become an essential item to win the game.
From my experience, the vision sensor is actually pretty decent when looking at objects on the field. Yes, lighting changes mess up the colors, but you can recalibrate easily, and it generally doesn’t break between fields at the same tournament. I have briefly tested with the cubes in different lighting, and it seems like it’ll work acceptably.
Anything without the grey background of the field, however, is probably not going to work consistently. (This mostly comes from my experience with flags last year; I guess it could be different this year, but I wouldn’t bet on it.)
tl;dr it’ll work pretty well for finding cubes on the field, but if you’re looking anywhere else you’re gonna have a bad time
Last year we used the vision sensor to decide whether to shoot flags on the field in autonomous. It is a difficult sensor to use, but with patience and a lot of effort it can be put to pretty good use. I have also seen it used as a rough distance sensor and for course correction.
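As a rough illustration of the distance-sensor idea: an object’s apparent width in pixels shrinks roughly in proportion to its distance, so you can invert that relationship. Here’s a minimal sketch in VEXcode V5 C++ - the SIG_CUBE signature, the port, and the calibration constant are all hypothetical, and you’d have to measure the constant for your own setup:

```cpp
#include "vex.h"
using namespace vex;

// Placeholder signature values -- the real numbers come from the
// vision configuration utility.
vision::signature SIG_CUBE = vision::signature(1, 0, 0, 0, 0, 0, 0, 3.0, 0);
vision Vision1 = vision(PORT10, 50, SIG_CUBE);

// Pinhole-camera approximation: pixel width is inversely proportional
// to distance, so distance ~= K / pixelWidth. Find K by measuring the
// object's pixel width once at a known distance.
double estimateDistanceInches() {
  Vision1.takeSnapshot(SIG_CUBE);
  if (!Vision1.largestObject.exists || Vision1.largestObject.width == 0) {
    return -1.0; // nothing visible
  }
  const double K = 600.0; // hypothetical calibration constant
  return K / Vision1.largestObject.width;
}
```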
Our team spent a ton of time developing an auto-aim function for Turning Point using the vision sensor, only to have it fail at almost every comp because of factors outside our control (a dimly lit room, reflections on the flags, a referee wearing a blue shirt, etc.). However, we did have success using it for close-range work: we mounted one in our intake and programmed it to look for big yellow objects passing by so that we could detect when we had a ball. I could see doing a similar thing in Tower Takeover to see what color cubes are in the robot’s lift, but other than that, I don’t think there’s much practical use for it.
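A sketch of that close-range possession check in VEXcode V5 C++ (not our actual code - the signature, port, and size thresholds here are hypothetical and would need tuning on a real robot):

```cpp
#include "vex.h"
using namespace vex;

// Placeholder signature values -- use the vision utility to get real ones.
vision::signature SIG_YELLOW = vision::signature(1, 0, 0, 0, 0, 0, 0, 3.0, 0);
vision IntakeVision = vision(PORT11, 50, SIG_YELLOW);

// The vision frame is 316 x 212 px, so at close range the ball should
// fill most of it. Demanding a large blob filters out small patches of
// yellow elsewhere in the background.
bool ballInIntake() {
  IntakeVision.takeSnapshot(SIG_YELLOW);
  return IntakeVision.largestObject.exists &&
         IntakeVision.largestObject.width  > 150 &&  // hypothetical thresholds;
         IntakeVision.largestObject.height > 100;    // tune on the robot
}
```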
You could use it to see if there is a cube in a tower, and if so, what color (in auton). Then you could program your robot to do different things depending on the outcome. Say there was a green cube in the tower; your robot would then collect more green cubes to stack. You could also have it de-score the cube if it is the same color as the opposing alliance’s preload.
I think this would be a really cool idea, but because there are external factors (e.g. people, robots, etc.) in the background, identifying a cube will be hard to do. Although, if you figured out a way to limit the external factors, it would allow for much more complicated autonomous routines. Wait, never mind, don’t tell my team or else I’ll have to code it.
If the vision sensor is close enough to the cube that the cube covers most of the picture, it shouldn’t be too unreliable. Just look for a green/purple/orange object that’s at least 200 px by 200 px. Unreliability comes from looking for small objects: differentiating between a flag and another object 4’ away is hard, but differentiating between a cube and another object when the cube is 4" away is easy.
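Putting that together with the tower idea above, here’s a minimal sketch in VEXcode V5 C++. The port and signature values are hypothetical; the 200 px width threshold is the one suggested above, but since the frame is only 316 x 212 px, the height threshold is relaxed a bit:

```cpp
#include "vex.h"
using namespace vex;

// Placeholder signature values -- the real numbers come from the
// vision configuration utility.
vision::signature SIG_GREEN  = vision::signature(1, 0, 0, 0, 0, 0, 0, 3.0, 0);
vision::signature SIG_PURPLE = vision::signature(2, 0, 0, 0, 0, 0, 0, 3.0, 0);
vision::signature SIG_ORANGE = vision::signature(3, 0, 0, 0, 0, 0, 0, 3.0, 0);
vision TowerVision = vision(PORT12, 50, SIG_GREEN, SIG_PURPLE, SIG_ORANGE);

enum class CubeColor { None, Green, Purple, Orange };

// Take one snapshot per signature and report the first color whose
// largest blob fills most of the frame.
CubeColor cubeInTower() {
  vision::signature* sigs[] = { &SIG_GREEN, &SIG_PURPLE, &SIG_ORANGE };
  CubeColor          hits[] = { CubeColor::Green, CubeColor::Purple,
                                CubeColor::Orange };
  for (int i = 0; i < 3; i++) {
    TowerVision.takeSnapshot(*sigs[i]);
    if (TowerVision.largestObject.exists &&
        TowerVision.largestObject.width  >= 200 &&
        TowerVision.largestObject.height >= 180) { // frame is only 212 px tall
      return hits[i];
    }
  }
  return CubeColor::None;
}
```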
Would it be legal to place a color filter in front of the vision sensor, allowing it to only detect one color? You could use this to identify cubes while blocking out interference.
I’m pretty sure you can’t - you’re only allowed to use a color filter for the light sensor, not the vision sensor.
But this wouldn’t really be that helpful anyway - the vision sensor is already great at filtering by hue. If, say, you’re trying to look for green cubes and someone’s red shirt is messing up your vision code, you’re doing something very wrong.
The issue arises when the vision sensor confuses objects of the same color, such as mistaking a blue-shirt-wearing referee for a blue flag in Turning Point. Color filtering wouldn’t help with this, since the objects are the same color.
I’m part of a VEX team competing in the 2019-2020 season, so I know how much experience our team has; we’ve been at it for 2-3 years now. I agree with PortalStorm4000 that you should only buy the vision sensor if your team is experienced enough. If this is your team’s second year, you might be ready to get it, but you will definitely not get the most out of the sensor. Either way, if your team thinks they can handle it, do some research and see if they are comfortable with using it.
Personally, I feel that the vision sensor is definitely a worthwhile investment. It makes auton a lot more accurate, since you can center your robot on certain objects and track objects of a specific size. If you configure and program it properly, it will help with consistency during auton. Just make sure the values for your signatures are right, because that is often what goes wrong with the vision sensor. Adjusting brightness and white balance also helps vision sensor programming quite a bit. We used it all season and at Worlds, and we would have made it further in our division (Math) if it weren’t for a complete blunder of a call (or so I think).
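For the centering part specifically, the usual pattern is a simple proportional loop on the object’s horizontal offset. A sketch in VEXcode V5 C++, with hypothetical ports, signature, gain, and deadband:

```cpp
#include "vex.h"
using namespace vex;

motor LeftDrive  = motor(PORT1);
motor RightDrive = motor(PORT2, true);
// Placeholder signature values -- configure the real ones with the
// vision utility.
vision::signature SIG_CUBE = vision::signature(1, 0, 0, 0, 0, 0, 0, 3.0, 0);
vision Vision1 = vision(PORT10, 50, SIG_CUBE);

// Turn toward the largest matching object until it is centered.
void centerOnCube() {
  const int FRAME_CENTER_X = 158; // vision frame is 316 px wide
  while (true) {
    Vision1.takeSnapshot(SIG_CUBE);
    if (!Vision1.largestObject.exists) {
      break; // lost the object; give up rather than spin forever
    }
    int error = Vision1.largestObject.centerX - FRAME_CENTER_X;
    if (error > -5 && error < 5) {
      break; // within the deadband: close enough to centered
    }
    double turn = 0.25 * error; // hypothetical P gain
    LeftDrive.spin(forward, turn, percent);
    RightDrive.spin(forward, -turn, percent);
    wait(20, msec);
  }
  LeftDrive.stop();
  RightDrive.stop();
}
```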
Do you know if there is any programming interface that can read object positions on your computer, or is all you get a live stream?
Yep! When writing up how to configure the Vision Sensor in Robot Mesh Studio’s C++ for VEX IQ, I took some screen captures of the configuration tool in use where you can see the object info.
As far as I know, this is (roughly) the same interface that all the programming tools use for Vision Sensor configuration.