I am not too familiar with the vision sensor so I’m asking for your thoughts. Would it be worth trying to come up with a way to have the vision sensor sense the color order of the balls scored in goals as you approach? The idea is that the robot would then know exactly how to sort the goal automatically. Is this a viable option or is the vision sensor’s view too limited to try this?
A vision sensor could handle it, but you might be better off not relying on it. My team used vision in Turning Point and then ditched it for Tower Takeover because it wasn't reliable, and we happened to lose ports wherever the sensor was plugged in.
Yes, technically it could. You'll just have to have a failsafe in case the vision sensor detects the wrong thing.
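One way to build that failsafe is to only trust a detection when it's big enough and stable across a few frames. This is just a sketch of the idea, not VEX API code; `MIN_WIDTH_PX`, `CONFIRM_FRAMES`, and the `(color, width)` reading format are all assumptions you'd tune for your own sensor setup.

```python
# Hypothetical failsafe: only trust a vision reading if the detected object
# is large enough (probably a real ball, not noise) and the same color was
# seen on several consecutive frames.
MIN_WIDTH_PX = 30      # assumed minimum bounding-box width for a real ball
CONFIRM_FRAMES = 3     # how many frames in a row must agree

def confirmed_color(frames):
    """frames: list of (color, width_px) readings, newest last.
    Returns the color if the last CONFIRM_FRAMES readings agree and every
    object is big enough; otherwise None (fall back to driver control)."""
    recent = frames[-CONFIRM_FRAMES:]
    if len(recent) < CONFIRM_FRAMES:
        return None
    colors = {color for color, width in recent}
    if len(colors) == 1 and all(width >= MIN_WIDTH_PX for _, width in recent):
        return colors.pop()
    return None  # readings disagreed or object too small: don't trust it
```

If `confirmed_color` returns `None`, the robot just doesn't auto-sort that goal, which is a much safer failure mode than sorting it wrong.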
I encourage you to try. It’s a very cool idea, and you have more or less a clear view of a goal.
There are several ways of going about this, but I suggest filtering and pattern matching.
You can filter the input and match to patterns you’ve determined previously.
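To make the pattern-matching idea concrete, here's a rough sketch. It assumes the sensor gives you a color and a vertical position for each ball in the goal (lower y = higher in the frame), and the pattern table and action names (`descore_top`, etc.) are made up for illustration; you'd fill in the patterns that matter for your alliance.

```python
# Hypothetical pattern matcher: sort detected objects top-to-bottom and
# look up the resulting color order in a table of patterns determined
# beforehand.
PATTERNS = {
    ("blue", "red", "red"):  "descore_top",     # opponent ball on top
    ("red", "red", "blue"):  "descore_bottom",  # opponent ball on bottom
    ("red", "red", "red"):   "leave_alone",     # goal already ours
}

def classify_goal(objects):
    """objects: list of (color, center_y) detections for one goal.
    Returns the action for that color order, or 'unknown' if the order
    doesn't match any pattern (a natural hook for a failsafe)."""
    order = tuple(color for color, y in sorted(objects, key=lambda o: o[1]))
    return PATTERNS.get(order, "unknown")
```

The nice part of a lookup table is that an unexpected order falls through to `"unknown"` instead of triggering a wrong sort, which ties in with the failsafe point above.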
Exactly. Give it a go and you'll learn about a lot of cool concepts. The Mk. 1 Eyeball is probably a better sensor for this job, but it's worth experimenting with the vision sensor too, IMO.
Good point: either way, you can only learn if you try.
Sometimes I think the Vex forum tends to be a little on the pessimistic side.
Give it a try; it would be useful for programming. Good thinking!
I doubt a vision sensor will help much since there are only 3 balls, and eyeballing should be enough. IIRC, the vision sensor takes some time to detect objects, making eyeballing more efficient.
It's always good to learn how to program vision sensors, though, just in case you need it some day.