Thank you for the praise and the programming suggestions. I've done backend Python at my previous jobs, so I am not super familiar with more professional C/C++ formatting, but I'll make sure to keep that in mind for next time.
How consistent was the camera latency?
The Python library for the Luxonis cameras provides built-in functions to help you measure the camera's latency (examples on how to use that here). In my early proof of concept in the summer of 2021 I was training the model with YOLO v3 and getting ~50 ms of latency; after switching to YOLO v5 it went down to around 30 ms (both at 1080p, ~30 fps). The 10 microseconds you are getting on your Jetson Nano is extremely impressive and likely would not require any latency compensation for an application like this.
Could you describe the turret's performance increase from applying latency compensation?
Baked into the latency compensation is other 'safety' logic that uses the robot's measured position from odometry. More important than the latency compensation itself, this helped account for lost frames caused by vibration in the robot (I intentionally isolated the camera on thin polycarb to act as a dampener for flywheel vibration, but shaking from overall robot movement still caused issues), as well as motion blur and game objects blocking the camera's view of the target over small intervals of time.
I can tell you 100% that with the setup I am using on this robot, you will not be able to get anything remotely close without latency/frame-loss compensation. I'll go into a high-level overview of how that works, since you have other questions about it as well:
The first important thing to understand is how to calculate the distance from the robot to the target from the image. Because the height of the target is known and constant in this situation, you can use the pinhole camera model to reduce the problem to simple right-angle trigonometry using the camera's resolution and FOV.
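Roughly, the math works out to something like this (just a sketch with placeholder names and numbers, not my actual code or my robot's measurements):

```cpp
#include <cmath>

// Rough sketch of the pinhole-model distance math described above.
// All constants here are placeholders, not values from my robot.
constexpr double kCameraHeightIn  = 12.0;   // lens height off the floor
constexpr double kTargetHeightIn  = 48.0;   // known, constant target height
constexpr double kCameraPitchDeg  = 25.0;   // upward tilt of the camera
constexpr double kVerticalFovDeg  = 40.0;   // vertical field of view
constexpr int    kImageHeightPx   = 1080;   // vertical resolution
constexpr double kDegToRad        = 3.14159265358979 / 180.0;

// targetY: pixel row of the target's center in the image (0 = top row)
double distanceToTarget(int targetY) {
    // Convert the pixel offset from the image center into an angle offset,
    // then add the camera's mounting pitch.
    double pixelsFromCenter = (kImageHeightPx / 2.0) - targetY;
    double degreesPerPixel  = kVerticalFovDeg / kImageHeightPx;
    double angleToTargetDeg = kCameraPitchDeg + pixelsFromCenter * degreesPerPixel;

    // Right-triangle trig: the opposite side is the known height difference.
    return (kTargetHeightIn - kCameraHeightIn)
         / std::tan(angleToTargetDeg * kDegToRad);
}
```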
Latency compensation becomes important because, as you can imagine, the robot has moved between the time the image was taken and the time the image finished processing. So the position of the target calculated with the method above does not reflect the current state of the robot in the real world. (picture below)
Using this, what I do is take a new reference frame of the robot's position every time the camera captures a frame. While that image is being processed, odometry keeps updating the robot's position within that reference frame. Then, when the vector to the target is finally calculated, you can use triangle math (law of sines/law of cosines) to create a third vector from the robot's current position to the target, which is what you should actually be aiming at right now.
Hopefully that makes some sense.
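In code, the idea looks roughly like this (illustrative types and names, not my actual implementation; I've used plain vector subtraction here, which is equivalent to the law-of-sines/cosines triangle described above):

```cpp
#include <cmath>

// Minimal sketch of the reference-frame idea above.
struct Pose { double x, y, headingRad; };            // robot pose from odometry
struct TargetVector { double distance, angleRad; };  // polar vector to the target

// capturePose: odometry pose saved the instant the camera grabbed the frame
// currentPose: odometry pose now, after the image finished processing
// measured:    vector to the target computed from that (old) image
TargetVector latencyCompensate(const Pose& capturePose,
                               const Pose& currentPose,
                               const TargetVector& measured) {
    // Target location in field coordinates, based on where the robot WAS.
    double targetX = capturePose.x +
        measured.distance * std::cos(capturePose.headingRad + measured.angleRad);
    double targetY = capturePose.y +
        measured.distance * std::sin(capturePose.headingRad + measured.angleRad);

    // Third vector: from where the robot is NOW to that same target point.
    double dx = targetX - currentPose.x;
    double dy = targetY - currentPose.y;
    return { std::hypot(dx, dy),
             std::atan2(dy, dx) - currentPose.headingRad };
}
```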
On top of this, it is easy to imagine how you account for lost frames as well. If the object is not detected in the image once it is done being processed, you can use the calculated vector to the goal from a previous frame as a replacement for the actual camera reading. In essence, all this does is make the distance "Delta S" from the above image a longer value.
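The fallback can be as simple as this (again just a sketch, reusing the placeholder types and `latencyCompensate` from the snippet above):

```cpp
// Lost-frame fallback: if the Pi reports no detection, reuse the last good
// measurement. The saved capture pose just gets older, so "Delta S" grows,
// but the compensation math stays exactly the same.
Pose         lastCapturePose;
TargetVector lastMeasurement;
bool         haveMeasurement = false;

TargetVector aimVector(const Pose& currentPose,
                       bool targetFound,
                       const Pose& capturePose,
                       const TargetVector& measured) {
    if (targetFound) {
        lastCapturePose = capturePose;
        lastMeasurement = measured;
        haveMeasurement = true;
    }
    if (!haveMeasurement) return {0.0, 0.0};  // no target seen yet
    // Fresh or stale, compensate from the pose saved at its capture time.
    return latencyCompensate(lastCapturePose, currentPose, lastMeasurement);
}
```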
This creates a robust control system because the odometry is being used as a backbone for the camera, and the camera is used to reduce the distance over which the odometry has to be accurate in order to know exactly where the target is.
Did you consider accounting for other system latencies (computation, actuation, etc.)?
There is a feedforward built into the turret aiming that attempts to adjust for the velocity of the drivetrain, but it is very basic. I would really like to make it able to shoot while driving around, which would require a robust model for handling this, but other problems I'll mention later are really what prevent that from being possible.
One thing that I did not account for was the acceleration of the turret itself. Because of the gyroscopic force of the flywheel and the weight of the turret, accelerating from a standstill was a headache to deal with. At some points I was using piecewise functions to scale the error at lower RPMs in an attempt to make it accelerate faster, but a tuned PID loop ended up working the most consistently across all robot speeds. If I were to rebuild this, though, I would definitely put two motors on the turret just to eliminate the issue altogether.
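For reference, the "tuned PID plus basic drivetrain feedforward" combination looks something like this (gains and the feedforward term are illustrative, not my actual tuning):

```cpp
// Sketch of a PID turret controller with a simple drivetrain feedforward.
struct TurretController {
    double kP = 2.0, kI = 0.0, kD = 0.1;  // PID gains (placeholder values)
    double kDrive = 0.5;                  // feedforward gain on drivetrain yaw rate
    double integral = 0.0, lastError = 0.0;

    // errorRad:   turret angle error from the latency-compensated aim vector
    // driveOmega: drivetrain rotational velocity from odometry (rad/s)
    // dt:         control loop period in seconds
    double update(double errorRad, double driveOmega, double dt) {
        integral += errorRad * dt;
        double derivative = (errorRad - lastError) / dt;
        lastError = errorRad;

        // PID on the aiming error, plus a feedforward that counters the
        // chassis rotating underneath the turret.
        return kP * errorRad + kI * integral + kD * derivative
             - kDrive * driveOmega;
    }
};
```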
What computation (if any) was distributed onto the Pi vs. the V5 Brain?
The Pi handled the image processing exclusively; it would send the coordinates of the object to the V5 Brain, and then all the math/latency correction was done on the Brain along with the other robot functions.
Were you ever able to gauge the communication latency between the Pi and the V5 system to compare it to the pure latency from the camera?
I never actually measured it, but it was never an issue, so I assume it is negligible (I am only sending a 6-character string between the Pi and the Brain).
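Purely for illustration, since I haven't shown my actual message format: if the 6 characters packed two 3-digit pixel coordinates (e.g. "320415" for x = 320, y = 415), parsing on the Brain side could be as small as this:

```cpp
#include <string>

struct PixelCoord { int x, y; bool valid; };

// Hypothetical format: two zero-padded 3-digit numbers, digits only.
PixelCoord parseCoordString(const std::string& msg) {
    if (msg.size() != 6) return {0, 0, false};   // malformed or dropped packet
    int x = std::stoi(msg.substr(0, 3));
    int y = std::stoi(msg.substr(3, 3));
    return {x, y, true};
}
```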
Did you apply that reference trajectory to the flywheel speed control as well?
There is no flywheel speed control; the hood itself is on a curved rack-and-pinion system that adjusts for the change in distance. To answer the question, though, it was baked into all the latency stuff mentioned above. Here is a video of the hood:

Conclusion
The biggest problem I ran into with all of this, which I did assume would be an issue but decided to just ignore, was that I was using the drive motor encoders for odometry instead of tracking wheels or any other form of position tracking. This led to error in the calculations when the wheels slipped during fast accelerations or when pushing against the wall. With tracking wheels I think this would work significantly better, to the point where you could pursue driving and shooting at any speed at the same time, which I would love to see someone do.
Hope this answered all your questions; if you have any more, don't hesitate to ask. Good luck this season!