Vision Sensor config - objectCount inconsistent

I’m having a hard time setting up my vision sensor. I set up the objects in the Vision Sensor Utility, and in the utility it consistently tracks the flag signatures. However, once I try to use the vision sensor in actual code, the data I get back is way off from what the utility shows. Vision.objectCount for the blue flag signature is usually zero, occasionally jumping to 1 or 2, whereas the vision utility is always seen tracking a steady 3. I tried jpearman’s vision program from a while back (sorry, can’t find the link) that displays the vision sensor data on the screen, after configuring my objects in the utility, and have a similar issue, with it showing as not tracking any signatures at all on the V5 screen. I’m guessing I must be doing something very wrong, because I don’t feel the data from the sensor should be this inconsistent.

Depending on lighting and contrast, object recognition is highly variable. You’ll have to play with the sensitivity of each signature to get recognition as consistent as possible with minimal false positives. You may also have to take multiple samples to confirm valid results.
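
For example, here is a minimal sketch of the multiple-sample idea (assuming a vision sensor named Vision and a signature named SIG_BLUE_FLAG from your configuration; both names are placeholders for whatever your setup uses):

int hits = 0;
for( int i = 0; i < 5; i++ ) {
    Vision.takeSnapshot( SIG_BLUE_FLAG );   // filter to the blue flag signature
    if( Vision.objectCount > 0 )
        hits++;
    vex::task::sleep( 20 );                 // give the sensor time to send a fresh frame
}
bool flagVisible = ( hits >= 3 );           // majority vote over five samples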

Are the detected objects shown as tracked in the Vision Sensor Utility reflective of what the program should detect, or are they just an approximate reference, so that you have to bump up the sensitivity for the vision sensor to work in V5 programs?

Yes, it should be accurate.
IIRC there was a bug with objectCount in VCS; it depended on how takeSnapshot was used. It’s been fixed, but we are stuck using the existing SDK until VCS gets an update.

Can you provide more detail on which ways currently work and which have issues?
Thank you.

jpearman, yes, please do this for the Vision Sensor. We all heard a lot of hype about V5, about all the things it would be able to do, and we’ve found the reality to be something else. Frankly, it’s been a struggle to keep from abandoning ship. Since we are so late in the season, and the kids don’t have time to hack their way through all the not-really-quite-so-ready realities of V5, can we at least have a “heads up” on what might still be glitchy, non-existent, cringeworthy, etc.? Working hours and hours only to stumble over a known “gotcha” is demoralizing.

takeSnapshot can be used in a number of ways. takeSnapshot filters all the objects** that have been received from the vision sensor into an array of objects that you are interested in; you access the filtered objects using the “objects” array. The objects we receive from the vision sensor in the brain have already been sorted by size; the largest object is always received first. When takeSnapshot is used like this


Vision.takeSnapshot(0);

we are asking for all objects (i.e. they matched any signature or color code); the largestObject in this case is just a copy of Vision.objects[0].
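
For example (a sketch, assuming a vision sensor named Vision), after an unfiltered snapshot the two refer to the same object:

Vision.takeSnapshot( 0 );                  // ask for every detected object
if( Vision.objectCount > 0 ) {
    // objects arrive sorted largest first, so with no filtering
    // largestObject and objects[0] describe the same object
    int w = Vision.largestObject.width;    // same as Vision.objects[0].width
}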

If takeSnapshot is used like this,


Vision.takeSnapshot( 1 );

or more likely with your code like this


Vision.takeSnapshot( SIG_1 );

we are asking for objects that match the first signature (1 is the same as SIG_1 in this case).

The returned objects array will contain only those objects that match SIG_1. However, if there are other objects (i.e. other signatures being detected), the largest matching object may not have been the first one we received from the vision sensor. That’s the bug we have: largestObject could have incorrect info copied into it in this case.

So the workaround would be to check objectCount and, if it is greater than 0, use the first element of the objects array rather than the largestObject variable.

Vision.takeSnapshot( SIG_1 );
if( Vision.objectCount > 0 ) {
   // do something with Vision.objects[0], the largest
   // object matching SIG_1 (avoids the largestObject bug)
}
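
To make that concrete, here is the workaround dropped into a simple polling loop that prints the data to the V5 screen, in the same vein as the display demo mentioned above (a sketch, assuming Brain, Vision, and SIG_1 come from your robot configuration):

while( true ) {
    Vision.takeSnapshot( SIG_1 );
    Brain.Screen.clearScreen();
    if( Vision.objectCount > 0 ) {
        // read from objects[0] rather than largestObject to avoid the bug
        Brain.Screen.printAt( 10, 20, "count %d", Vision.objectCount );
        Brain.Screen.printAt( 10, 40, "x %3d y %3d w %3d h %3d",
                              Vision.objects[0].centerX,
                              Vision.objects[0].centerY,
                              Vision.objects[0].width,
                              Vision.objects[0].height );
    } else {
        Brain.Screen.printAt( 10, 20, "no objects" );
    }
    vex::task::sleep( 50 );
}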

The bug was found and fixed back in October. Unfortunately, although the VCS team has the latest SDK, which we continue to update with fixes and features, customers using VCS cannot access it until VCS itself has another release.

Using the vision sensor has a learning curve; it’s not easy. Compared to other sensors like potentiometers or quad encoders, it’s quite a bit more difficult because of all the variables: the lighting, the background, and the exact way objects reflect the light falling on them make setting up the signatures the vision sensor uses a bit tricky. It’s probably on par with tuning a PID loop. However, I have seen students successfully use it with reliable results; just expect to invest time playing with it to understand the limitations it has.

** an object being the coordinates and size of an area that matches a programmed signature.

Very informative, and explains what we saw testing with the largestObject variable yesterday… thank you.

On a separate but related note: we have been considering looking for two adjacent signatures (one of the colored flag and one of the green rectangle) to reduce false positives. VEX events have a lot of banners with red squares and blue monitors behind the nets that can produce false positives. We were going to take two snapshots and verify the flag with an adjacent green rectangle. It sounds like it might be possible to pull both arrays of data from one snapshot, which would reduce issues from movement between the snapshots. Is this possible?

OK, so one thing to understand is that Vision.takeSnapshot is really only a filtering function; it does not cause a request for objects from the vision sensor. The vision sensor is constantly sending the 32 (I think we have the limit set at 32) largest objects back to the V5, no matter what the signature is. We limit that number again in the C++ vision class to the 16 that are passed back in the objects array. If you use takeSnapshot with a 0 parameter, the objects array will contain all the information you need; you can iterate through it and just check the “id” property to see if it matches something you are interested in yourself.
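
So, as a sketch of your single-snapshot idea (assuming the flag color was set up as signature 1 and the green rectangle as signature 2 in the utility; the ids and the 60-pixel adjacency tolerance are placeholders to tune):

Vision.takeSnapshot( 0 );                  // one snapshot, all signatures
int flagIdx  = -1;
int greenIdx = -1;
for( int i = 0; i < Vision.objectCount; i++ ) {
    if( Vision.objects[i].id == 1 && flagIdx < 0 )
        flagIdx = i;                       // largest object matching signature 1
    if( Vision.objects[i].id == 2 && greenIdx < 0 )
        greenIdx = i;                      // largest object matching signature 2
}
bool flagConfirmed = false;
if( flagIdx >= 0 && greenIdx >= 0 ) {
    // only accept the flag if the green rectangle is close to it
    int dx = Vision.objects[flagIdx].centerX - Vision.objects[greenIdx].centerX;
    int dy = Vision.objects[flagIdx].centerY - Vision.objects[greenIdx].centerY;
    flagConfirmed = ( dx > -60 && dx < 60 && dy > -60 && dy < 60 );
}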

Would this issue still apply in VEXcode?

The “largestObject” bug?
No, VEXcode has the latest SDK and that bug is fixed.