Vision Sensor Support with Robot C

Continuing the discussion from Debugging Vex Code IQ - no print/serial port support:

Thank you James for the information on how to use the vision sensor in Robot C, but I am still not seeing reliable output.

I set up a stationary test: the vision sensor pointed at a white background with a red ring in the center of the frame. The only red object in view is the red ring.

Next I went to VEXcode IQ and calibrated the vision sensor, setting sig_1 to detect the red ring.

(image: ring calibration)

It found the ring at X 139, Y 135, Width 94, Height 78.

Then I disconnected the USB cable, closed VEXcode IQ, went to RobotC, and ran the demo program provided.

I got this output:

found 4
0: 1 2 0 480 240
1: 1 480 240 480 240
2: 1 480 240 480 240
3: 1 480 240 480 240

It found 4 objects; I think it should have found only 1. Only one signature is set, that signature is focused on a red object, and there is only one red object in view. The X, Y, width, and height data also look bogus; I would have expected them to be identical to the calibration screen output, since the setup is completely fixed.

Am I missing a step?

Thank you for all of your help!


No idea. I checked again, this time using the PC vision utility, a fresh download of the demo code from GitHub, and a new vision sensor. It works as before. I was originally checking using the IQ screen, but typical debug-stream output for one red object was:

found vision sensor on port 4
Found vision sensor on port 4
found 2
0:   1 134 133  60  50
1:   1 136 193  12   2
found 1
0:   1 146  97  98  90
found 1
0:   1 136  96 100  93
found 1
0:   1 128  97 100  94
found 1
0:   1 124  93  98  96

Check that you can see objects in the device dashboard. I only have the vision sensor plugged into the IQ, no motors or anything else.


I think the problem is related to the vision sensor losing its signature calibration. If I calibrate the signatures in VEXcode IQ, then exit and reopen VEXcode, the signatures are still there. If I power cycle the sensor, everything is fine. Connecting the brain or not also seems to be fine. But if I download the demo program from RobotC, it doesn’t detect anything, and if I go back to VEXcode IQ the signatures are lost; everything is reset.

Is there any technical info on the vision sensor? Such as the I2C register map? How is the signature information stored?

What can trigger a reset? Is it possible to save off the signature calibration and restore it through the robotC interface?

Thank you!


A VEXcode program stores the signatures with the code. There is communication between the vision utility and VEXcode; when the utility is opened, whatever signatures were previously in the code are sent to the vision sensor. In VEXcode V5 Pro you would see them in robot-config.cpp (or a separate vision header file if you created one). You can also see them in the latest version of VEXcode IQ if the project is converted to text and the config region is expanded in the code.

An example of how they may look.

vision::signature Vision6__SIG_1 = vision::signature(1, 5567, 6905, 6236, -383, 927, 272, 2.5, 0);
vision Vision6 = vision(PORT6, 50, Vision6__SIG_1);

There’s no easy way to visualize what the numbers mean; it’s just data the vision sensor wants.

PROS and MathWorks use a standalone version of the vision utility. VEX does not provide that directly; our partner developers package it, code sign it, etc., and make it available.

Both the VEXcode and standalone versions will also save a signature inside the vision sensor when it is set. This was core functionality in the original product the vision sensor was based on. If the vision sensor is disconnected from USB after the vision utility is closed, it should have saved any signatures that were set. If a VEXcode program is run, the saved signatures may be overwritten temporarily until the sensor is power cycled.

Downloading a RobotC program should not affect the vision sensor in any way; I checked that stored signatures remain after a download and when the program is run.

There are a couple of functions in the RobotC demo library I supplied that can get/set signatures. You could probably take the VEXcode data and reformat it into the correct order for those functions; the structure order is different in the RobotC demo code, IIRC.

There is no public information. The vision sensor is a heavily modified version of the Pixycam; there is technical info available for that, some of which would align with the vision sensor, though most would not.
You should also be able to figure out the register map from the demo code; all the necessary info is in there.


Thank you James. This makes sense. It appears it is important to set the signatures from the code, and there is odd behavior with freeze/unfreeze. It is easy to work around now that the behavior is clear, but to share with others, here is what I am seeing.

Experiment 1:

  • Open VEXcode IQ and then the Vision Utility
  • Freeze
  • Define sig_1
  • Unfreeze
  • Can see it working in real time
  • Close VEXcode IQ
  • Open RobotC and dump signatures
  • Sig 1 isn’t defined

Experiment 2:

  • Open VEXcode IQ and then the Vision Utility
  • Do NOT freeze
  • Define sig_1
  • Can see it working in real time
  • Close VEXcode IQ
  • Open RobotC and dump signatures
  • Sig 1 IS defined

Experiment 3:

  • RobotC: set a signature through the C API visionSignatureSet()
  • Open VEXcode IQ and then the Vision Utility: no signatures shown
  • Close VEXcode IQ
  • Open RobotC and query the sensor: signatures are still there
  • Go to the device dashboard, and objects are found

It appears that the Vision Utility launched from VEXcode IQ doesn’t do a readback of the config in the sensor, and that freeze/unfreeze affects whether the signature is saved.

Experiment 4:

  • Open VEXcode Pro V5
  • Again, appears to be no readback from the vision sensor
  • Signatures can be set and are stored in the code
  • Freeze/unfreeze behavior works the same as VEXcode IQ (need to avoid freezing)
  • From RobotC, only signatures saved while unfrozen work; however, VEXcode Pro V5 stores all the signatures in the code, so when relaunching the vision utility it restores the signatures it knows from the code

It appears the best path forward would be to use VEXcode Pro V5 to collect all of the signatures and then map them to RobotC visionSignatureSet() calls. Unfortunately, the structures are slightly different: there is an RGB parameter in the RobotC version which isn’t exported in the Pro V5 initializer. It is also non-zero coming from the vision utility, so it appears to be used.

Is there any information on what these parameters do? Is the RGB parameter needed for type 0 (normal mode)? Where can I get more info on color mode?

Additionally, it appears that the RobotC object get doesn’t return the same answer in a stationary, fixed environment when multiple signatures are loaded. For example, here are three get-object calls spaced 2 seconds apart.

Call 1
1 x:166 y:138 w: 92 h: 34 a: 0 t: 0
2 x:164 y:176 w: 94 h: 34 a: 0 t: 0

Call 2
1 x:166 y:138 w: 92 h: 35 a: 0 t: 0
2 x:162 y:179 w: 98 h: 31 a: 0 t: 0
3 x:164 y: 65 w: 86 h: 65 a: 0 t: 0
4 x: 0 y: 59 w: 12 h: 61 a: 0 t: 0
4 x:302 y: 59 w: 14 h: 60 a: 0 t: 0

Call 3
4 x:302 y: 59 w: 14 h: 57 a: 0 t: 0
4 x: 0 y: 59 w: 10 h: 62 a: 0 t: 0
5 x:172 y:133 w: 70 h: 4 a: 0 t: 0

Call 1 sees signatures 1 and 2. In call 2, signatures 3 and 4 are also seen. In call 3, signatures 1 and 2 are missing, but nothing changed in the environment.

Is this expected?

Thank you!


Thank you James. To refine my previous post a bit: I am now clear that the signatures are expected to be stored in the code, and the vision utility won’t do a readback from the sensor. If VEXcode IQ/V5 Pro is used, the stored signatures will be pushed into the vision utility; but when using RobotC, it is expected that the signatures will need to be taken and manually put into the code.

To do this, there is a one-parameter difference related to RGB. Can it be set to zero, or how should this parameter be set when pulling a signature from VEXcode, which doesn’t seem to use it?

Additionally, there seems to be something odd when setting signatures in the vision utility while the image is frozen. Many times the signature doesn’t seem to ‘stick’; however, if the signature is set with the image unfrozen, it seems to work fine.

Thank you!


IIRC the standalone version of the vision utility does read back from the sensor, but the VEXcode version will have default signatures sent from VEXcode to the utility.

You can just click the small button at the lower right to copy the signature info to the clipboard (sometimes it doesn’t work, for some reason).

They would look like this

vision::signature SIG_1 (1, 9669, 11299, 10484, -2953, -1597, -2276, 3.000, 0);
vision::signature SIG_2 (2, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_3 (3, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_4 (4, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_5 (5, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_6 (6, 0, 0, 0, 0, 0, 0, 3.000, 0);
vision::signature SIG_7 (7, 0, 0, 0, 0, 0, 0, 3.000, 0);
vex::vision vision1 ( vex::PORT1, 50, SIG_1, SIG_2, SIG_3, SIG_4, SIG_5, SIG_6, SIG_7 );

The order is
id, uMin, uMax, uMean, vMin, vMax, vMean, range, type
which is slightly different from the order in the buffer that needs to be sent to the sensor. I think the rgb parameter can be left at 0; it’s not part of the object detection. I forget exactly what it’s used for, perhaps the LED color or something, but it’s not important.

Without going into all the messy details: the vision sensor can calculate the necessary parameters given an area of an object; that’s the legacy way signatures are set. We can also perform the same calculations in the vision utility code, and that path is used when the image is frozen, because the frozen image is local to the vision utility and the vision sensor probably has a completely different image at that point. In that case we only need the data in VEXcode, so the sensor has no knowledge of the signature.

The parameters define an area of hue in a pseudo-YUV space. For object detection we are not so concerned with luminance (the Y component), only the color-difference components (U and V). The specific details of the algorithm I could not fully explain; it was developed as part of the Pixycam project and it’s complicated. I translated the algorithm from C to TypeScript back in 2017 as a technology demo that was supposed to be integrated into VCS. That never happened, so the technology demo was simplified and became the vision utility.

Here is a peek at the original application that the vision utility was based upon. It had many additional features, one of which was a visualization of signatures in uv space. This shows a red object and how the numbers translate into an area.

Please remember the vision sensor is not officially supported by RobotC; the code I pushed out back in 2019 was just a stopgap measure for users who had not upgraded to VEXcode and wanted to be able to play with it a little. The vision utility originally shipped with VCS (VEX Coding Studio) and may have worked a little differently at that time, I forget; we tried to make integration with VEXcode somewhat better.


Thank you for the details. This makes it clear.

If only u and v are used, does that mean the sensor will be susceptible to lighting changes? Are there any signature tuning recommendations to increase light stability?

I noticed that in the PROS version there are APIs to deal with brightness.



Is there any chance the I2C register info could be made available for these? I would be happy to post the code back to the forum or github.

Thank you!


I added register info and APIs for these to the GitHub repo.
We don’t use white balance in VEXcode; it’s left on automatic.
You will have to experiment with brightness; I forget if the range is 0-100 (which IIRC it is) or 0-255. Above a certain value the sensor will max out and brightness will not increase; it’s really controlling the gain of the camera.


Wonderful! Thank you so much!

Here is a github link if anyone needs it


@jpearman, is there by chance a known issue with the vision sensor’s I2C interface, or is there a way to reset the vision sensor through I2C? It appears the I2C buffer jams and sends old, stale values quite frequently.

For example, here you can see the vision utility and the debug stream side by side, with a simple print of the object location. Notice they are aligned and the signature is working well.

A bit later the ball is removed, but the vision sensor doesn’t notice; it continues to return the same values.

Here is another example where the program started up and returned random values when in reality there was nothing there. Sometimes it finds one imaginary object and other times four imaginary objects.

After a power cycle of the brain, everything comes back and works for a bit, and then the problem happens again. The problem occurs with two different brains and two different vision sensors, so I don’t think it is a bad brain or a bad vision sensor. The vision sensor works correctly through the vision utility, which again confirms the sensor is fine. It appears to be an issue with the interface to the brain.

Another detail to note: after a reset, RobotC frequently doesn’t see the vision sensor. Then, after going to the device dashboard and selecting the port, the dashboard will find the vision sensor, and after this RobotC finds it also.

Additionally, after the stuck-buffer issue mentioned above occurs, the device dashboard also shows stuck values which don’t respond when the ball is moved around.

Thanks so much for your help and support.


Not that I’m aware of, but the majority of the initial development work was done using V5 so there may be issues we missed. I should also mention that the majority of development for the vision sensor was not done internally to VEX, so we have less insight to this sensor than some of the others. I will see if we can reproduce these issues using IQ gen 2 and VEXcode to see if it’s the sensor or something specific to IQ gen 1 and RobotC.

That may be because the sensor takes a long time to boot and the IQ misses that it’s connected. Some programs on IQ will cause an I2C bus re-enumeration; I forget if VEXcode does that, but I do know that RobotC does not. Why is lost to history.

Then that would tend to suggest a sensor issue, but I need to see if that’s the same with IQ2 as it has a far more robust I2C implementation.


I built a simple program in VEXcode to take a snapshot and print the X, Y to the brain screen, and it appears to work fine. I also don’t see the issue of the vision sensor not being found at program start, or of the X, Y results getting stuck.

It would appear the problem isn’t with the Gen1 brain, since it works fine from VEXcode. Any chance some hints are available from the VEXcode I2C handling that could be used to make RobotC work, or, coming full circle, can a debug stream be enabled for VEXcode?

This thread began out of a desire to use the vision sensor and have debug support. The RobotC debugger is excellent but perhaps optional; however, with no debug stream, using only the brain screen for debugging is a severe limitation.



If you are programming using text, there is a debug stream of sorts.

console::write sends formatted output (a simple printf-like format description) to the second serial port the IQ has; you would need an external terminal such as screen (macOS) or PuTTY (Windows) to see the output.

Technically printf can also work, but as the IQ has so little memory, printf usually exhausts resources, so we don’t recommend using it. The IQ2 has more memory; the latest version of VEXcode has a built-in console that can display the output and also expands the print block to allow printing to either screen or console.


Excellent! This works great! Thank you! :smiley:
