AI Vision Sensor Issues - Identifying Competition Elements

Has anyone had success getting the new AI Vision Sensor to recognize the game elements when running in a program? From our experience it seems to identify them fine in the block code program through the Configure Utility, but when running on the Brain we don’t get consistent results.

We’ve tried switching between the Classroom and Competition Classifications with limited success, but still nothing consistent enough to use in actual competition. We’ve also tried changing lighting conditions and a variety of other things, with no success.

Wondering if other teams are having the same experience, and/or if anyone has found a way to use the Competition Classifications consistently on their robots.

Hi! I’ve been using the AI vision sensor for a bit, here are some questions I have about your setup:

When you’re running code that interfaces with the sensor on the Brain, is the sensor mounted in a different physical location? For some objects, like mobile goals, the sensor has to be able to see enough of the base of the object to detect it with the AI model (i.e., minimal obstructions).

Also, have you made sure to enable the AI model detection for the sensor in your code?

And I’m assuming that you’re using the sensor in a loop; at what rate are you polling it (i.e., how fast does your loop run)?

The sensor definitely works great for competition stuff, as I have been using it for mogo alignment and it can see beyond my clamp just fine.

TL;DR: make sure the sensor isn’t obstructed and that the AI model is told to be active in code.

You need to have the competition model loaded. The classroom objects are different: cubes, rings, and buckyballs.

I’m confident that it isn’t an obstructed-view issue, as they’ve tested on different fields, in different lighting, and more on that front.

What do you mean by “telling the AI model to be active in code”? Isn’t that just the “Take Snapshot” block?

Okay, firstly, thank you so much for posting a screenshot of your code! It makes it WAY easier to debug. Also, the ‘Take Snapshot’ block only instructs the sensor to take a picture and process it; it does not (necessarily) instruct the sensor to initialize the AI model.

So, looking into the blocks documentation, there does not appear to be an equivalent to the C++ function aivision::modelDetection(enable). I apologize for referencing it; the lack of that function in blocks suggests to me that it is either configured automatically or simply not configurable there. If you really want to rule this out (although I strongly recommend reading the rest of this post first), you could use a switch block to call the C++ function directly by doing the following:

  1. In the left-hand navigation pane of the block code editor, click the aqua circle that says ‘switch’ at the top of the list.
  2. Drag the switch block that has an indent on the top and a caret on the bottom (the first option; it looks like a regular set drive velocity block but is blank) into your workspace and snap it between the ‘when started’ hat and the first set drive velocity block.
  3. Inside the switch, type: AIVision.modelDetection(true);
    This switch block will allow you to use the C++ function on your AI vision sensor that does not appear to be exposed to block code (although it is very possible that this step is already occurring in the background). See the C++ sketch below for reference.
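For reference, the same thing in a plain C++ project would look roughly like this. This is only a sketch: PORT1 and the single-argument constructor are assumptions based on the standard VEXcode template, and as noted above the model may already be enabled automatically.

```cpp
#include "vex.h"
using namespace vex;

// Example port only; use whatever port your sensor is actually plugged into.
aivision AIVision = aivision(PORT1);

int main() {
  // The call the switch block above would invoke: tell the sensor to run
  // its on-board AI model so snapshots can return game-element detections.
  AIVision.modelDetection(true);

  // ... take snapshots and read detected objects from here on ...
}
```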

However, now seeing your code, it does seem like there is a different issue contributing to the lack of consistency. When the AI vision sensor is instructed to take a snapshot, it can find multiple items at once; it could find rings and mobile goals simultaneously. The object with the largest pixel area is returned as the first object, and that first object is the one whose centerX you are referencing in your conditional statement. I suspect it works when the largest object is the one you want, and fails when the largest object is something else.
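In C++ terms, the pattern you currently have in blocks is essentially the following. This is just a sketch; the AIVision device name, the ALL_AIOBJS selector, and the objectCount/objects fields are my reading of the VEXcode C++ API, so double-check them against the docs.

```cpp
// Only the first (largest-area) detection is ever checked, so if a ring
// happens to fill more of the frame than the mobile goal, this acts on
// the wrong object.
AIVision.takeSnapshot(aivision::ALL_AIOBJS);
if (AIVision.objectCount > 0) {
  int x = AIVision.objects[0].centerX;  // objects[0] is the largest detection
  // ... existing alignment logic using x ...
}
```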

In order to ensure that the object it finds is the one you want, you will need a loop that checks the id value of each found object, so that you only look at the centerX of the largest instance of the object you want. This can be achieved with a for loop that uses the AI vision object count block to determine how many iterations to run and the set AI vision object item block to select which detected object you are looking at. Then you can use the AI classification is block to check whether the object you are currently considering is the correct one; if it is, proceed with your current centerX logic, and if it is not, the loop moves on to the next object. If you need help writing a for loop like this, let me know and I can provide a screenshot of a similar one that you could adapt to your situation.
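In the meantime, here is roughly what that loop looks like in C++. Again, just a sketch: DESIRED_ID is a placeholder for whatever id the competition model assigns to the element you care about, and the API names follow my reading of the VEXcode C++ docs.

```cpp
AIVision.takeSnapshot(aivision::ALL_AIOBJS);

const int DESIRED_ID = 0;  // placeholder: replace with the id of your target element

for (int i = 0; i < AIVision.objectCount; i++) {
  // Skip anything that is not the element we care about.
  if (AIVision.objects[i].id != DESIRED_ID) continue;

  int x = AIVision.objects[i].centerX;
  // ... existing alignment logic using x ...

  // Assuming detections come back largest-first (the largest object is the
  // first one returned), the first match is the biggest instance we want.
  break;
}
```

The same structure maps one-to-one onto blocks: the object count block gives the loop bound, the set object item block plays the role of the index, and the AI classification is block replaces the id comparison.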

As an aside, your programming looks quite advanced, and I am very impressed with what you have going on in blocks. In the future it would definitely be worth considering switching to C++, but blocks can work great for pretty much everything.

TL;DR: a switch block could let you ensure the AI model is enabled, and you should use a for loop to make sure the AI object you act on is actually the one you think it is.