How to use a vision sensor to locate a MOGO

New to VRC and just started playing with a newly acquired vision sensor. I've read all the help and watched all the tutorials, but I still have more questions than answers. I thought it supported actual object recognition, but apparently it just memorizes and matches colors within a certain range? What do the different sliders do in the Vision Utility dialog? When defining a MOGO signature, should you try to exclude the PVC pipe, since its color may match the color of the tiles? Do you capture just the sticker, or the base in general? Is there a way to see at runtime what the camera is seeing and recognizing? Wouldn't you have to re-record the signatures on an actual competition field due to different lighting conditions (which I don't think the rules allow)?

Is this code snippet essentially saying “is the camera seeing an object matching REDBOX signature”?

I believe you would want to take a snapshot of the object you're going for so that the sensor has a reference image. And yes, there is a way to connect your phone or tablet to your vision sensor to stream the footage, so you can see from the bot's perspective. You shouldn't have to re-record the signatures, since it's essentially tracking the same object; as long as you have good lighting you should be fine.

The PVC pipe concerns me too, since you don't want your bot looking at another MOGO and thinking it's the one you're going for. You could either handle that in code or just take a signature of the base of the MOGO. I'd capture the base itself: all MOGOs have that sticker, but it can sometimes be obstructed, so a signature of the base is your best bet.

Also, the vision sensor stores the width and height of a detected object and its origin x and y point. Your code snippet alone isn't going to tell you whether it sees a REDBOX signature; you'll have to print that out to the brain if you want to know whether it sees it or not. I would also not recommend blocks, as they are less versatile and sometimes harder to use.
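To illustrate the "print it out to the brain" suggestion, here is a rough VEXcode V5 C++ sketch. The device and signature names (`Vision1`, `Vision1__REDBOX`) are placeholders for whatever your robot configuration generates, and this only compiles inside a VEXcode project with `vex.h`:

```cpp
#include "vex.h"
using namespace vex;

// Sketch, assuming a vision sensor configured as Vision1 with a color
// signature named Vision1__REDBOX (both names come from your own config).
int main() {
  while (true) {
    // takeSnapshot filters the current frame for objects matching the signature
    Vision1.takeSnapshot(Vision1__REDBOX);

    Brain.Screen.clearScreen();
    Brain.Screen.setCursor(1, 1);
    if (Vision1.objectCount > 0) {
      // largestObject exposes the bounding box the sensor reported
      Brain.Screen.print("REDBOX: x=%d y=%d w=%d h=%d",
                         Vision1.largestObject.originX,
                         Vision1.largestObject.originY,
                         Vision1.largestObject.width,
                         Vision1.largestObject.height);
    } else {
      Brain.Screen.print("REDBOX not seen");
    }
    wait(100, msec);
  }
}
```

The key point is that `takeSnapshot` only populates the object list; you still have to read `objectCount` (or `largestObject.exists`) yourself to know whether anything matched.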


Thanks. How do I interpret this piece of code (it’s straight from the Vision Sensing tutorial)?

I know I can turn on the vision sensor's Wi-Fi on the brain, connect to that, and stream. Is there a way to get this view at runtime, to see things from the camera's perspective? Or at least save still images to verify what the camera is or is not recognizing?
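One rough workaround for a runtime view, short of the Wi-Fi stream, is to redraw the detected bounding boxes on the Brain screen every loop. This is a hedged VEXcode V5 C++ sketch with placeholder names (`Vision1`, `Vision1__MOGO_SIG`), and the scaling assumes the vision sensor's 316x212 frame mapped onto the Brain's wider screen:

```cpp
#include "vex.h"
using namespace vex;

// Sketch: draw each detected object's bounding box on the Brain screen
// every loop, as a crude runtime view of what the sensor is recognizing.
// Assumes a sensor configured as Vision1 with signature Vision1__MOGO_SIG.
int main() {
  while (true) {
    Vision1.takeSnapshot(Vision1__MOGO_SIG);
    Brain.Screen.clearScreen();

    for (int i = 0; i < Vision1.objectCount; i++) {
      // objects[] holds every match from the last snapshot
      // Vision frame is 316x212; scale roughly to the Brain screen
      Brain.Screen.drawRectangle(Vision1.objects[i].originX * 480 / 316,
                                 Vision1.objects[i].originY * 240 / 212,
                                 Vision1.objects[i].width * 480 / 316,
                                 Vision1.objects[i].height * 240 / 212);
    }
    wait(50, msec);
  }
}
```

It won't show you the camera image itself, but watching where the boxes land (and how stable they are) tells you a lot about whether a signature is picking up the MOGO or the tiles.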