IIRC the standalone version of vision utility does, but the VEXcode version will have default signatures sent from VEXcode to the utility.
You can just click the small button at the lower right to copy the signature info to the clipboard (though sometimes that doesn't work, for some reason).
They would look like this:
vision::signature SIG_1 (1, 9669, 11299, 10484, -2953, -1597, -2276, 3.000, 0);
vision::signature SIG_2 (2, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_3 (3, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_4 (4, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_5 (5, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_6 (6, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_7 (7, 0, 0, 0, 0, 0, 0, 3.000, 0);
vex::vision vision1 ( vex::PORT1, 50, SIG_1, SIG_2, SIG_3, SIG_4, SIG_5, SIG_6, SIG_7 );
The order is
id, uMin, uMax, uMean, vMin, vMax, vMean, range, type
which is slightly different from the order in the buffer that needs to be sent to the sensor. I think the rgb parameter can be left at 0; it's not part of object detection. I forget exactly what it's used for, perhaps the LED color or something, but it's not important.
Without going into all the messy details: the vision sensor itself can calculate the necessary parameters when given an area of an object, which is the legacy way that signatures are set. We can also perform the same calculations in the vision utility code, and that path is used when the image is frozen, since the frozen frame is local to the vision utility and the sensor probably has a completely different image at that point. In that case we only need the data in VEXcode, so the sensor has no knowledge of the signature.
The parameters define an area of hue in a pseudo-YUV space. For object detection we are not so concerned with luminance (the y component), only the color-difference components (u and v). I could not fully explain the specific details of the algorithm; it was developed as part of the pixycam project and it's complicated. I translated the algorithm from C to TypeScript back in 2017 as a technology demo that was supposed to be integrated into VCS. That never happened, so the technology demo was simplified and became the vision utility.
Here is a peek at the original application that the vision utility was based upon. It had many additional features, one of which was a visualization of signatures in uv space. This shows a red object and how the numbers translate into an area.
Please remember the vision sensor is not officially supported by RobotC; the code I pushed out back in 2019 was just a stopgap measure for users who had not upgraded to VEXcode and wanted to be able to play with it a little. The vision utility originally shipped with VCS (VEX Coding Studio) and may have worked a little differently at that time, I forget; we tried to make the integration with VEXcode somewhat better.