Hello people, I’m here to talk about the latest sensor VEX has released. First of all, I’m not an expert or an engineer, so I’m not fully standing behind everything I might say in this topic. The AI Vision is a good sensor, but if I’m not wrong there is a major problem with it: because it runs at 30 fps, we get a data flow of 30 Hz (which means 30 updates per second).
When we are in color detect mode and detecting a specific color, the color is present in every one of those 30 frames. Whether a frame is blurred or not, the color is still there, so we are able to capture it every single time. But in AI classification mode the sensor checks every frame, including the blurred ones, and the algorithm is probably unable to recognize the blue ring, the red ring, or the mobile goal every time, because they are blurred in some images. Where I’m going with this is: when using the sensor with PID we need an accurate data flow from the sensor, and if the sensor cannot detect the object reliably we cannot get good results with PID. It only works if we run at lower speeds, but that’s not good. Waiting for replies, especially from @jpearman.
You’ve made a great observation about the VEX AI Vision sensor’s limitations! The 30fps frame rate combined with motion blur can indeed affect object classification reliability, especially at higher speeds.
For optimal PID control with the sensor, try filtering/smoothing the data, or combine multiple sensor inputs for redundancy.
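To make the filtering idea concrete, here’s a minimal C++ sketch (not official VEX code: `readObjectCenterX()` is a hypothetical stand-in for whatever your sensor wrapper returns, and the image width and gains are placeholder values to tune on the robot):

```cpp
#include <chrono>
#include <thread>

// Hypothetical sensor read -- replace with your AI Vision wrapper.
// Returns true and fills x (pixel column of the object) when a target is seen.
bool readObjectCenterX(double &x) {
  // Stub for illustration; wire this to your snapshot/objects calls.
  x = 160.0;
  return true;
}

int main() {
  const double imageCenter = 160.0;  // half of an assumed 320 px wide image
  const double alpha = 0.3;          // EMA weight: lower = smoother, more lag
  double filteredX = imageCenter;

  while (true) {
    double rawX;
    if (readObjectCenterX(rawX)) {
      // Exponential moving average: blend the new reading with history so a
      // single blurred or missed frame doesn't jerk the controller.
      filteredX = alpha * rawX + (1.0 - alpha) * filteredX;
    }
    // On a missed frame we simply hold the last filtered value instead of
    // letting the control loop see a dropout.

    double error = filteredX - imageCenter;  // pixels off center
    double turnCmd = 0.5 * error;            // placeholder P gain
    (void)turnCmd;  // feed this to your drivetrain in real code

    // Fresh data arrives at 30 Hz at best, so ~33 ms polling is enough.
    std::this_thread::sleep_for(std::chrono::milliseconds(33));
  }
}
```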
I don’t think we have much control over the sensor’s image; it has a chip inside that handles this task. So the creators should fix it, if that’s even possible.
You’re right! The VEX AI Vision sensor’s image processing is handled by its internal chip, and there’s definitely room for optimization.
We definitely need a reply from @jpearman.
What exactly is the question?
Movement of the aivision sensor is likely to cause motion-blurred images (it will depend on light levels and how the sensor has adjusted exposure). The sensor will not be able to detect objects as well with blurred images.
First of all, thanks for replying. I’m from team Takevians. I wasn’t sure at first, since there are no videos that explain how to use AI Vision. But over time I discovered that it is 30 fps, and at higher speeds this can cause it to miss objects. On the other hand, we have the Vision Sensor too, and when everything is set up correctly it is more capable of capturing objects at higher speed because it is 50 fps. And finally the question: even if I set up everything I can as well as possible, this thing struggles when classifying objects with AI at higher speeds. It is better when detecting by color, but the frame rate is too low to drive a PID loop at high speeds. Since this is a vision sensor and we are planning to use it in matches, what do you say about it? Will it get an update or something to fix that, or will this sensor stay as it is right now? (Sorry if my English sucks.)
The camera module in the aivision sensor runs at 30 fps; not much we can do about that. The AI algorithm actually runs slower than 30 fps; it’s tough with cheap embedded processors to achieve anywhere near real time. So there are no plans for any updates to this sensor.
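One way to see what update rate the classifier actually delivers on your setup is to oversample and count how often the reported data changes. A rough sketch (`readDetection()` is a hypothetical stand-in for your snapshot call, and the change check is crude: two genuinely new frames reporting identical positions would be undercounted):

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

// Hypothetical stand-in for a snapshot call; returns the x position of the
// first detected object, or -1 if nothing was detected this poll.
int readDetection() { return -1; }

int main() {
  using clock = std::chrono::steady_clock;
  int lastX = -1;
  int changes = 0;
  auto windowStart = clock::now();

  while (true) {
    int x = readDetection();
    if (x != lastX) {  // crude: treat any changed value as a fresh frame
      ++changes;
      lastX = x;
    }
    auto now = clock::now();
    if (now - windowStart >= std::chrono::seconds(1)) {
      // For AI classification, expect well under 30 updates per second.
      std::printf("~%d updates/s\n", changes);
      changes = 0;
      windowStart = now;
    }
    std::this_thread::sleep_for(std::chrono::milliseconds(5));  // oversample
  }
}
```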
Okay, since you’ve said that’s how it is, I’m going to make a YouTube video about the sensor where I explain these things. Thanks for all your help!
I had a related question: is there a way of knowing the right rate at which to take snapshots? I poll every 40 ms right now, which is a little slower than 30 FPS, but is there any point in even running it at that rate? Ideally there would be a callback in the API to notify when a new snapshot makes sense.
Thanks,
nick.
No. takeSnapshot is really just a filtering function; it looks at the most recent data and returns just the objects you are interested in.
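So there’s nothing to synchronize with; you just poll, and polling faster than the camera’s frame rate only re-reads the same frame’s data. A minimal sketch of that pattern (`takeSnapshotCount()` is a hypothetical placeholder for your real snapshot call on the sensor object):

```cpp
#include <chrono>
#include <thread>

// Hypothetical placeholder for the real snapshot/filter call, which just
// re-reads the sensor's most recent internal data and counts matches.
int takeSnapshotCount() { return 0; }

int main() {
  while (true) {
    int count = takeSnapshotCount();
    (void)count;  // act on the filtered objects here

    // Polling faster than the camera's 30 fps buys nothing: you'd just get
    // the same frame's data again. ~33-40 ms between polls is plenty.
    std::this_thread::sleep_for(std::chrono::milliseconds(40));
  }
}
```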
@jpearman got it, thanks!
In my personal experience, the AI Vision sensor updates fast enough to create pretty reliable algorithms. The actual “AI” stuff is pretty bad unfortunately, but with auto white balance and the correct color codes, the AI Vision sensor is very fast and reliable. We used it in two different comps without changing color codes and still had a reliable auton, with high-tolerance color codes and a max of about 3 objects.
TL;DR:
- don’t use the “AI Classifications”
- use the color codes
- use the auto white balance feature
- limit max objects (around 3)
- set tolerances for color codes pretty high
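For what it’s worth, here’s roughly what that setup might look like in VEXcode-style C++. Treat it as a sketch: the colordesc constructor arguments and object field names are how I remember the aivision API, so verify them against your VEXcode version, and tune the RGB/tolerance values (and enable auto white balance) in the AI Vision utility:

```cpp
#include "vex.h"
using namespace vex;

// Color descriptions: (index, R, G, B, hue range, saturation range) as I
// recall the constructor -- the wide hue range is the "high tolerance" part.
// The RGB values here are placeholders; capture real ones in the utility.
aivision::colordesc RedRing(1, 200, 40, 40, 18.0, 0.4);
aivision::colordesc BlueRing(2, 40, 60, 200, 18.0, 0.4);
aivision AIVision(PORT10, RedRing, BlueRing);

int main() {
  while (true) {
    AIVision.takeSnapshot(RedRing);

    // Cap how many objects we act on (the "max objects" part of the TL;DR).
    int n = AIVision.objectCount < 3 ? AIVision.objectCount : 3;
    for (int i = 0; i < n; i++) {
      int x = AIVision.objects[i].centerX;  // use for aiming, etc.
      (void)x;
    }

    this_thread::sleep_for(40);  // msec; matches the ~30 fps data rate
  }
}
```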
Well, it might work as you say, but since the native refresh rate is 30 fps, it won’t be capable enough when doing fast movements.