I've been trying to use the vision sensor to find distance, but I can't figure it out. If someone has already figured out how to do it and could share the file, I would be very appreciative. Or if you could just show me how to start getting a vision sensor to find distance, that would work too.
It isn’t very practical. The basic idea would be to compare the apparent height of the object to its known height to find the distance. The problem is that the vision sensor very often sees only part of an object, meaning the distance you calculate would show the object as being further away than it actually is.
What do you want to use it for? There are probably better options.
I'm trying to use it to determine the distance to an object like a flag, a cap, or another kind of object.
This could probably be done better with a combination of a vision sensor and an ultrasonic sensor. My team didn't run a ball bot this year, so I'm not sure of the specifics.
You know where the center of each flag is, so you can calculate the number of pixels between them to figure out the distance. Once you know one side and one angle of a right triangle, you can find the other side.
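As a rough sketch of that triangle math, assuming a pinhole camera model. The resolution, field of view, and flag spacing below are assumed placeholder values, so substitute your sensor's datasheet numbers and the spacing you measure on the field:

```python
import math

# Assumed sensor specs -- check your vision sensor's datasheet.
IMAGE_WIDTH_PX = 316       # horizontal resolution in pixels (assumed)
HFOV_DEG = 61.0            # horizontal field of view in degrees (assumed)

# Assumed real-world spacing between two flag centers, in inches.
FLAG_SPACING_IN = 10.5     # hypothetical value; measure it on the field

def pixel_angle(px_offset):
    """Horizontal angle (degrees) of a pixel offset from image center,
    using a simple pinhole-camera model."""
    focal_px = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(HFOV_DEG / 2))
    return math.degrees(math.atan(px_offset / focal_px))

def distance_from_two_flags(x1, x2):
    """Estimate range from the angular separation of two flag centers.
    Assumes both flags are roughly equidistant from the camera."""
    sep_deg = abs(pixel_angle(x1 - IMAGE_WIDTH_PX / 2) -
                  pixel_angle(x2 - IMAGE_WIDTH_PX / 2))
    # Known opposite side (flag spacing) and known angle: for a roughly
    # centered pair, distance = spacing / (2 * tan(separation / 2)).
    return FLAG_SPACING_IN / (2 * math.tan(math.radians(sep_deg / 2)))
```

The closer you are, the more pixels separate the two centers, so the estimate gets more precise at short range and noisier at long range.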
Do you know a formula that could help me with that, or anything else that could assist me?
I have my vision sensor mounted at a vertical angle, 5 inches off the ground.
Is there a way you could come up with a formula that would help me? I've hit multiple roadblocks trying to figure it out.
If you are using the vision sensor to track distance, I would suggest tilting it 45 degrees up from directly forward so that more pixels are allocated to determining the distance. I haven't tested the vision sensor yet, though, so I can't say whether it actually helps.
You need several pieces of information:
- How wide is the camera's field of view? This will let you turn x-positions into horizontal angle measurements.
- How tall is the camera's field of view? This will let you turn y-positions into vertical angle measurements.
- How much is the camera tilted in pitch, yaw, and roll compared to the robot's frame of reference? This will let you translate camera-based angles into robot-based angles.
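A minimal sketch of those conversions, assuming a pinhole camera model. The resolution, field-of-view, and mount-pitch numbers are placeholders to replace with your own measurements, and I've assumed the camera is only pitched (no yaw or roll offset):

```python
import math

# Assumed sensor specs -- substitute your camera's actual values.
IMG_W, IMG_H = 316, 212        # resolution in pixels (assumed)
HFOV, VFOV = 61.0, 41.0        # fields of view in degrees (assumed)
CAMERA_PITCH = 20.0            # degrees the camera is tilted up (assumed mount)

def camera_angles(x_px, y_px):
    """Convert a pixel position to (yaw, pitch) angles in the camera frame.
    +yaw is right of center, +pitch is above center; pixel y grows downward."""
    fx = (IMG_W / 2) / math.tan(math.radians(HFOV / 2))
    fy = (IMG_H / 2) / math.tan(math.radians(VFOV / 2))
    yaw = math.degrees(math.atan((x_px - IMG_W / 2) / fx))
    pitch = math.degrees(math.atan((IMG_H / 2 - y_px) / fy))
    return yaw, pitch

def robot_angles(x_px, y_px):
    """Rotate camera-frame angles into the robot frame, assuming the only
    mounting offset is a fixed pitch tilt."""
    yaw, pitch = camera_angles(x_px, y_px)
    return yaw, pitch + CAMERA_PITCH
```

Adding the mount pitch directly to the camera pitch is only exact for targets near the vertical centerline; if the camera also has yaw or roll offsets, or the target is far off-axis, a full 3D rotation matrix is the cleaner way to do the frame change.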
From these, it is possible to determine the direction of a signature from the camera in the robot's frame of reference. Additional information is needed to determine the range and position of the signature. Knowing the target's vertical height is one way: it fixes the target on a horizontal plane, and a straight line (the direction vector) intersects a plane at exactly one point. Working out that intersection determines the signature's exact coordinates.
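Here's what that line/plane intersection looks like in the simplest case, where you've already converted the target's y-position into a robot-frame pitch angle. The 5-inch camera height comes from earlier in the thread; the target height is an assumed placeholder:

```python
import math

CAMERA_HEIGHT_IN = 5.0   # camera lens height off the ground (from the post above)
TARGET_HEIGHT_IN = 18.0  # assumed known height of the target's center; measure yours

def ground_distance(pitch_deg):
    """Horizontal distance to a target of known height, given the robot-frame
    pitch angle (degrees above horizontal) at which it appears.
    This is the line/plane intersection: the sight line must rise
    (TARGET_HEIGHT - CAMERA_HEIGHT) inches over its horizontal run."""
    rise = TARGET_HEIGHT_IN - CAMERA_HEIGHT_IN
    return rise / math.tan(math.radians(pitch_deg))
```

Combined with the horizontal yaw angle, this gives the target's full position relative to the robot, not just its range.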
If you don’t know the height of the target, it’s somewhat harder. You can make sure you know the height by arranging your camera so that it can only see the bottommost flags. Barring that, you can attempt to use area (width × height) as a rough approximation of range. If you can see three flags in a vertical (or near-vertical) line, you can also use that information to identify the height of each one. If you can see all nine flags, you can determine exactly where your robot is on the field and precisely which direction it’s facing.
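A sketch of the area-based approximation, using the usual model that apparent area falls off with the square of distance, so range scales with 1/sqrt(area). The single-point calibration is an assumption of mine; in practice you'd calibrate at several known distances and check how well the model fits:

```python
import math

def calibrate(known_range, measured_area):
    """Fit the constant k in range ~ k / sqrt(area) from one measurement
    taken at a known distance."""
    return known_range * math.sqrt(measured_area)

def range_from_area(k, area):
    """Rough range estimate from the signature's bounding-box area
    (width * height). Only a coarse approximation: a partially visible
    object reports a smaller area, which inflates the range estimate."""
    return k / math.sqrt(area)
```

This is exactly the failure mode mentioned at the top of the thread, so treat area-based range as a fallback, not a primary measurement.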
Break the trig down into steps: translate the camera-frame data to robot-frame data, then use that to determine where the robot is in the world.
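The last step might look like this: a hypothetical helper that backs out the robot's field position from a sighting of a flag at a known field location, assuming you already know your heading (e.g. from a gyro). The sign and heading conventions here are my assumptions:

```python
import math

def robot_position(flag_x, flag_y, robot_heading_deg, yaw_deg, dist):
    """Back out the robot's field position from one sighting of a flag at a
    known field location (flag_x, flag_y), seen at robot-frame yaw `yaw_deg`
    (degrees, positive to the right) and horizontal distance `dist`.
    Assumed heading convention: 0 degrees = +x axis, counterclockwise positive."""
    # World-frame bearing from robot to flag: heading minus rightward yaw.
    bearing = math.radians(robot_heading_deg - yaw_deg)
    return (flag_x - dist * math.cos(bearing),
            flag_y - dist * math.sin(bearing))
```

With two or more flags sighted at once, you can also solve for the heading itself instead of relying on a gyro.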
I wrote a post showing how to do this, along with attached diagrams. It may take me a little while to find it, though. The original post, which I think has been linked elsewhere as well, was made before the vision sensor had been made generally available. So I suspect I posted it July-ish.
Antichamber made one a while back with the beta vision sensor that was surprisingly accurate.
I've seen that video, and that's what I've been trying to replicate, but I've been working on it for about 3 months and I still can't figure it out.
Maybe I could provide some insight.
Wanna PM and talk about it?
Sure, I sent you a message.