The vision sensor works by saving different “signatures”, which are the colors you want it to track. First, plug the vision sensor directly into your computer with a micro USB cable. Open the PROS Vision Tool and draw a box over the color you want it to search for (green, red, and blue are all good choices). If you can't get a steady image, click the pause button to freeze the video stream until you click it again. Then click the set button next to one of the signature slots in the utility. The top slot is signature 1, the second from the top is 2, the third down is 3, and so on. After that, the video stream in the utility should show boxes over the “vision objects” it is detecting. To further tune a signature to detect the color you want, you can change its threshold value. This controls how close a color has to be to the original for it to count as that signature, and you should see the vision objects update accordingly. Finally, exit the utility with the X in the top right. If you exited the utility properly, your signatures were saved during the process.
To reference the vision sensor in our program, we must first make a vision sensor object. I’ll call mine camera and it is on port 13.
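As a minimal sketch, assuming the standard PROS C++ API (swap in whatever smart port your sensor is actually plugged into):

```cpp
#include "main.h"  // pulls in the PROS API, including pros::Vision

// Vision sensor plugged into smart port 13
pros::Vision camera(13);
```

Declaring it at file scope like this lets both `autonomous()` and `opcontrol()` use the same object.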
We then grab the different signatures from the vision sensor object and put what they detect into their own respective vision objects. There are several functions that do this; I will use get_by_sig for this example. You can name the object anything you want. I called mine rtn, as that's what the official examples do.
The first number we pass is which vision object we want to grab, as sometimes there may be many objects in the vision camera's FOV (field of view) at one given time. 0 is the largest, 1 is the second largest, 2 is the third largest, and so on. If you ask for a non-existent vision object (say you asked for the 7th largest and there are only 6), the function will return an error and set errno. I am grabbing the largest object seen in this example.
The second number we pass is which signature we want to detect with. Let's pretend I calibrated signature 2 to be the green on the flags, and that's what I want to detect.
pros::vision_object_s_t rtn = camera.get_by_sig(0, 2);
This vision object holds the data we want (size, coordinates, etc.). I'm going to grab the x coordinate and put it into a variable named xCoord.
int xCoord = rtn.x_middle_coord;
If you want to detect the flags, it's a good idea to use a “color code” instead of just a signature. Color codes work by combining two signatures, which helps prevent false positives since the two signatures (colors) must appear next to each other. For example, there is green on both of the alliance flags, and at many events there is red and blue in the background. We start by making the color code, passing in two signatures in the order we want to detect them. So pretend for this example we are detecting the blue flag with the signature in slot 1, and then the green target with the signature in slot 2.
pros::vision_color_code_t FLAG_CODE = camera.create_color_code(1, 2);
And then we replace our get_by_sig with get_by_code. The first number is still the size index, but now we replace the signature with our new color code.
pros::vision_object_s_t rtn = camera.get_by_code(0, FLAG_CODE);
Then we continue our code as we did before, but note that the object's center and size are a bit different, as they cover the whole flag instead of just one portion. An important thing to remember is to recalibrate the vision sensor's signatures as often as you can at competition. Lighting at events changes throughout the day and will not be the same as in your lab. Good luck.