Okay, firstly, thank you so much for posting a screenshot of your code! It makes it WAY easier to debug. Also, the ‘Take Snapshot’ block only instructs the sensor to take a picture and process it; it does not (necessarily) instruct the sensor to initialize the AI model.
So, looking into the blocks documentation, it does not appear that there is an equivalent to the cpp function aivision::modelDetection(enable). I apologize for referencing it; its absence from blocks suggests to me that it is either handled automatically or simply not configurable there. If you really want to rule this out as the issue (although I strongly recommend reading the rest of this post first), you can use a switch block to call the function directly by doing the following:
- In the left-hand navigation pane of the block code editor, click on the aqua circle that says ‘switch’ at the top of the list.
- Drag the switch block that has an indent in the top and a caret on the bottom (the first option, which looks like a regular set drive velocity block but is blank) into your workspace and snap it between the ‘when started’ hat and the first set drive velocity block.
- Inside the switch block, type: `AIVision.modelDetection(true);`
This switch block lets you call a cpp function on your AI Vision Sensor that does not appear to be exposed to block code (although it is very possible that this step is already occurring in the background).
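For reference, here is a minimal sketch of where that call would live if this were a cpp program (assuming your sensor is named AIVision, the default device name):

```cpp
#include "vex.h"
using namespace vex;

// AIVision is assumed to be configured in the Devices window
// (it lives in robot-config in a generated VEXcode project).

int main() {
  // Enable the onboard AI model once at startup, before any snapshots.
  // This is the same call the switch block above would type out for you.
  AIVision.modelDetection(true);

  // ... the rest of your program: take snapshots, drive logic, etc. ...
}
```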
However, now seeing your code, it does seem like a different issue is contributing to your robot’s inconsistency. When the AI Vision Sensor is instructed to take a snapshot, it can find multiple objects at once, meaning it could detect rings and mobile goals simultaneously. The detected object with the largest pixel area is returned as the first object, and that first object is the one whose centerX you are referencing in your conditional statement. I suspect your code works when the largest detected object happens to be the one you want, and fails when it is not.
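To illustrate in cpp terms, your current logic is effectively doing something like this (just a sketch; the 160 threshold is a placeholder for whatever value you actually compare against):

```cpp
// Snapshot detects ALL AI model objects: rings and mobile goals alike.
AIVision.takeSnapshot(aivision::ALL_AIOBJS);

// objects[0] is simply whatever had the largest pixel area in the frame,
// which may or may not be the game element you actually care about.
// (The image is 320 pixels wide, so 160 is the horizontal center.)
if (AIVision.objects[0].centerX > 160) {
  // ... your drive logic ...
}
```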
In order to make sure the object you react to is the one you want, you will need to loop through each detected object and check its id, so that you only use the centerX of the largest instance of the classification you care about. In blocks, this can be achieved with a for loop that uses the AI vision object count block to determine how many iterations to run and the set AI vision object item block to change which detected object you are looking at. Inside the loop, use the AI classification is block to check whether the current object is the correct one: if it is, proceed with your current centerX logic; if it is not, the loop simply moves on to the next object. If you need help writing a for loop like this, let me know and I can provide a screenshot of a similar one that you could adapt to your situation.
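In the meantime, here is a rough cpp sketch of that loop so you can see the shape of the logic. Note that TARGET_ID is a placeholder I made up; check the AI Vision Sensor documentation for the actual id your game element reports:

```cpp
#include "vex.h"
using namespace vex;

// Placeholder: substitute the real id the AI model reports for the
// element you want (see the AI Vision Sensor docs for your game).
const int TARGET_ID = 0;

void chaseLargestTarget() {
  // Detect all AI model objects in one snapshot.
  AIVision.takeSnapshot(aivision::ALL_AIOBJS);

  // Objects come back sorted largest-first, so the first id match is the
  // largest instance of the classification we actually want.
  for (int i = 0; i < AIVision.objectCount; i++) {
    if (AIVision.objects[i].id == TARGET_ID) {
      int cx = AIVision.objects[i].centerX;
      // ... your existing centerX steering logic goes here, using cx ...
      break; // stop after the largest matching object
    }
  }
}
```

The break is important: without it, later (smaller) matches would overwrite the one you want.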
As an aside, your programming looks quite advanced, and I am very impressed with what you have going on in blocks. In the future it would be worth considering a switch to cpp, but blocks can definitely work great for pretty much everything.
TL;DR: a switch block can let you make sure the AI model is enabled, and a for loop over the detected objects will make sure the object you react to is actually the one you think it is.