My team and I have recently been trying to use the V5 Vision Sensor to recognize the colors of the flags and then hit them during autonomous (flywheel bot). Is this viable? Any suggestions or other helpful implementations?
It can be done. It would take a significant chunk of very good code, though.
Our team considered this, but decided against it. During autonomous, you need to keep track of your robot’s position (so that you know where to go next). If the autonomous is just a series of large motions (drive forward, turn left, etc.), it’s pretty easy to know where the robot is. But if vision sensor tracking is used, the robot will constantly be making small adjustments to its heading in order to line up with the flag. After those small adjustments, you can no longer count on knowing where the robot is, so you wouldn’t know where to drive it next! If shooting the flag is the last thing your autonomous program does, it could be fine, but otherwise I’d strongly advise against it.
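To make the "small adjustments" concrete: the usual approach is a proportional correction that turns the robot until the flag's blob sits at the center of the camera image. Here's a minimal sketch of just that math, kept hardware-agnostic; the constant names and the `aimCorrection` function are hypothetical, not a specific VEX API, and the gain would need tuning on a real robot.

```cpp
#include <cmath>

// Hypothetical constants for a proportional "aim at flag" correction.
constexpr int IMAGE_WIDTH = 316;   // V5 Vision Sensor horizontal resolution in pixels
constexpr double KP = 0.5;         // proportional gain (tune on the robot)
constexpr int DEADBAND_PX = 5;     // "close enough" window so the robot stops twitching

// Given the detected flag's horizontal pixel center, return a signed turn
// command (e.g. a motor percentage): positive turns right, negative left.
double aimCorrection(int flagCenterX) {
    int error = flagCenterX - IMAGE_WIDTH / 2;  // + means flag is right of center
    if (std::abs(error) <= DEADBAND_PX) {
        return 0.0;                             // aligned; stop turning
    }
    return KP * error;                          // turn toward the flag
}
```

Note that every nonzero correction changes the robot's heading, which is exactly the problem described above: after a few of these corrections you'd need odometry (tracking wheels or integrated encoder math) to recover your position on the field.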