I'm thinking of using the vision sensor's focal length.
Does anyone know the vision sensor's focal length?
This post has some useful numbers about the vision sensor. Not sure if it's exactly what you're looking for.
What do you need the focal length for? In my experience, you can compute everything you need for images without it. Focal length is usually given as a range of lengths anyway.
But, if you are really curious, using the FOV calculations in the thread linked above, the focal length can be found using similar triangles and some algebra.
The formula for focal length is:
Focal length = (Object distance / ((1 / Magnification) + 1)) * 1000
Where focal length is in mm, object distance is in m (the × 1000 converts the result to mm), and magnification is unit-less
Where magnification is
Magnification = Image size / Object size
Where Image size and object size are both in mm
You can then calculate the image size in mm using the FOV
image size = 2 * focal length * (magnification + 1) * tan(Angle of view / (2 * (180/π)))
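The formulas above can be sketched in Python. The numbers in the test values are hypothetical placeholders, not real vision sensor specs; the unit conventions follow the post (object distance in m, sizes and focal length in mm, angle of view in degrees):

```python
import math

def focal_length_mm(object_distance_m: float, magnification: float) -> float:
    # f = d_o / (1/m + 1); the * 1000 converts meters to millimeters
    return object_distance_m / ((1.0 / magnification) + 1.0) * 1000.0

def magnification(image_size_mm: float, object_size_mm: float) -> float:
    # magnification is unit-less: image size over object size
    return image_size_mm / object_size_mm

def image_size_mm(focal_len_mm: float, mag: float, aov_deg: float) -> float:
    # image size = 2 * f * (m + 1) * tan(AoV / 2), with AoV converted
    # from degrees to radians before taking the tangent
    return 2.0 * focal_len_mm * (mag + 1.0) * math.tan(math.radians(aov_deg) / 2.0)
```

For example, an object 1 m away imaged at 1:1 magnification gives `focal_length_mm(1.0, 1.0)`, which is 500 mm by the thin-lens relation.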
good luck have fun
Thank you!
I'll check it out.
Thank you for the answer!
I think the focal length is always the same, because the vision sensor doesn't zoom like a camera.
And if I know the focal length and the value of the vision sensor's center_x, I can use a trigonometric function to find the angle the robot needs to rotate until it faces the middle of the mobile goal.
Based on your advice, I'll give it one more try.
Thank you!
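For reference, that trigonometric idea can be sketched like this. `IMAGE_WIDTH_PX` and `FOCAL_PX` are assumed placeholder values, not real vision sensor specs; the focal length here is expressed in pixel units so it can be compared directly with `center_x`:

```python
import math

IMAGE_WIDTH_PX = 316   # assumed image width in pixels (placeholder)
FOCAL_PX = 280.0       # assumed focal length in pixel units (placeholder)

def angle_to_target_deg(center_x: float) -> float:
    # Horizontal pixel offset of the goal from the optical axis;
    # positive result means the target is right of center.
    dx = center_x - IMAGE_WIDTH_PX / 2.0
    return math.degrees(math.atan2(dx, FOCAL_PX))
```

With these placeholder constants, a blob centered in the image (`center_x = 158`) gives an angle of 0, and the angle grows with the pixel offset via arctangent rather than linearly.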
It may be hard to believe, but you don’t need to know focal length at all to do this.
(The voiceover was made for our prof so don’t worry about the audio. The ability to track without worrying about focal length at the end is what I’m trying to help demonstrate. You can skip to around the 2:10 mark)
This is a demo of my final project for a robotics course I'm taking at WPI. We used what are known as AprilTags and an OpenMV camera to track a robot. I'm not sure how much of what I learned there carries back to VEX, but the way we implemented it was with a state machine handled over WiFi that tracks the pixel center of the AprilTag using PID.
In your case with a mogo, you would just calculate the pixel center of whatever color goal it is, and add PID where the error is the pixel distance of the center of the blob of color to the pixel center of the camera.
In my implementation, we were able to calculate the distance to the other robots using only the AprilTags and the camera, but I'm not sure that would work as well in this case since the shape of the mogo is so irregular. However, you should be able to do something similar with a vision and distance sensor combo.
OH MY GOD!
You certainly don’t need a focal length.
Try it without focal length.
Thanks so much!!!