Vision sensor

I don’t know much about the vision sensor right now. Does anyone know what all it can do and what its limitations are? For example, could you have it turn your robot so that you are pointed at the flag and the correct distance away?

Yes. I’m actually writing a bit of a guide to using it in Robot Mesh Studio right now, so I will be sure to link that here when it’s all done.

As for capabilities, much of it comes down to how you use it. The data the camera reports is bounding boxes of color blobs (signatures), or bounding boxes plus an orientation angle for linear color codes built from defined signatures (codes). For alignment, you can use the x positions of those boxes to orient the robot. For rangefinding, you can use an object's y position or its apparent size as indicators. For codes, the orientation angle can provide additional information. How accurate each of these methods is depends mostly on the work you put into calibration and data collection. You could also do rangefinding by comparing the positions of the same signature as seen through more than one camera.
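To make the alignment and size-based rangefinding ideas concrete, here is a minimal Python sketch of the math. It assumes the sensor reports bounding boxes in a 316x212 pixel frame with x measured from the left edge; the flag width and focal-length constants are hypothetical calibration values you would measure yourself, not real sensor parameters.

```python
# Alignment + size-based rangefinding math from a single blob report.
# FRAME_WIDTH_PX matches the Vision Sensor's reported frame width;
# FLAG_WIDTH_MM and FOCAL_PX are placeholder calibration constants.

FRAME_WIDTH_PX = 316          # sensor frame width in pixels
FLAG_WIDTH_MM = 150.0         # assumed real-world width of the target
FOCAL_PX = 290.0              # assumed focal length in pixels (from calibration)

def turn_error_px(blob_center_x):
    """Horizontal offset from frame center; the sign gives turn direction."""
    return blob_center_x - FRAME_WIDTH_PX / 2

def distance_mm(blob_width_px):
    """Pinhole-camera range estimate: Z = f * W / w."""
    return FOCAL_PX * FLAG_WIDTH_MM / blob_width_px

# Example: a blob centered at x=200 and 58 px wide
err = turn_error_px(200)      # 42 px right of center -> turn right
dist = distance_mm(58)        # ~750 mm with these made-up constants
```

You would feed `turn_error_px` into a proportional turn loop until it is near zero, then drive until `distance_mm` reaches your target range; how well it works depends entirely on how carefully you calibrate those constants.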

For limitations, the sensor is very sensitive to lighting changes, though with careful calibration and some clever programming you can likely work around that. The only other drawback is the limited frame size: 316x212 pixels. That one can be overcome with multiple cameras, a movable mount, or by remembering previously seen objects as the chassis moves.
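The two-camera rangefinding idea mentioned earlier can be sketched with the standard stereo disparity formula. This is a hypothetical setup: two sensors mounted side by side seeing the same signature, with the baseline and focal-length constants being placeholder calibration values.

```python
# Two-camera rangefinding via stereo disparity: Z = f * B / (xL - xR).
# BASELINE_MM and FOCAL_PX are assumed calibration values, not sensor specs.

BASELINE_MM = 100.0   # assumed spacing between the two camera lenses
FOCAL_PX = 290.0      # assumed focal length in pixels (from calibration)

def stereo_distance_mm(left_x, right_x):
    """Depth from the horizontal disparity between the two views."""
    disparity = left_x - right_x
    if disparity <= 0:
        # With this geometry the object must sit further right in the
        # left camera's frame than in the right camera's frame.
        raise ValueError("non-positive disparity; check camera geometry")
    return FOCAL_PX * BASELINE_MM / disparity

# Same flag seen at x=190 in the left frame and x=161 in the right frame:
# a 29 px disparity gives ~1000 mm with these made-up constants.
```

The appeal of this approach is that it does not depend on knowing the target's real-world size, at the cost of a second sensor and a careful rigid mount.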