We’ve been experimenting with the V5 vision sensor to better understand how it works and what its limitations are. Theo, our programmer, has so far been able to make it track and follow a ball using just a P loop. We used an old NBN ball because our game objects hadn’t arrived yet.
We saw that while we were there. Very cool!
Can you post the code for it?
For the pongbot, at least, the source is here: https://www.robotmesh.com/studio/248918
A bit later in the season, once everything is working smoothly, we might release some code. Theo and I have been working hard to learn C++ in order to use PROS 3 with the V5, since we figure it will give us more programming freedom than the other options out there.
I think this will be the season of insane autonomous routines, both during auto and in the driver-controlled portion. Miss a ball? Let the robot recalculate its position. Or let the robot base its position off the flag. Dang.
I’m assuming there are two P loops, or at least two processes going on. The first is the robot rotating to keep the object it’s tracking (i.e., the orange NBN ball) centered on the X axis. The second is the robot moving toward the object until it’s within a certain range, and I’m certain this second part is what you were describing as the P loop.
Nice!
Yeah, pretty much. There’s one P loop that uses the X-axis deviation to control the direction, another that uses the Y-axis deviation to move the arm the vision sensor sits on so it can always see the ball, and a third that controls the robot’s speed based on how far away the ball is, which is estimated from the object’s pixel width.
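If it helps to picture it, here’s roughly what that looks like in PROS 3. Everything below is a placeholder, not our actual code — the ports, the ball’s signature ID, the gains, and the target width are all made up:

```cpp
#include "main.h"

// Hypothetical constants -- not our real values.
#define BALL_SIG     1    // vision signature trained on the ball
#define TARGET_WIDTH 120  // object width (in sensor units) at the desired standoff

void opcontrol() {
  pros::Vision vision(10);                                  // hypothetical port
  pros::Motor left_drive(1), right_drive(2, true), arm(3);  // hypothetical ports

  const double kP_turn = 0.4, kP_arm = 0.3, kP_dist = 0.5;  // untuned gains

  while (true) {
    // Largest object matching the ball's signature (size_id 0 = biggest)
    pros::vision_object_s_t ball = vision.get_by_sig(0, BALL_SIG);

    if (ball.signature == BALL_SIG) {  // skip the frame if nothing was found
      // P loop 1: X deviation from frame center steers the drive
      double turn = kP_turn * (ball.x_middle_coord - VISION_FOV_WIDTH / 2);

      // P loop 2: Y deviation tilts the camera arm to keep the ball in frame
      // (sign depends on your arm's geometry)
      double arm_power = kP_arm * (ball.y_middle_coord - VISION_FOV_HEIGHT / 2);

      // P loop 3: pixel width as a distance proxy -- wider means closer,
      // so drive until the ball fills TARGET_WIDTH units of the frame
      double forward = kP_dist * (TARGET_WIDTH - ball.width);

      left_drive.move(forward + turn);   // clamp to +/-127 in a real program
      right_drive.move(forward - turn);
      arm.move(arm_power);
    }
    pros::delay(20);
  }
}
```

The delay at the bottom is just standard PROS practice so the loop doesn’t starve other tasks.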
@Jacob Walter I just noticed that the arm moves! It’s actually a little satisfying to watch… Anyway, like most people I’m planning to use a vision sensor on my robot, but unlike yours, which moves on an arm, mine will probably be stationary. How well does it work when the sensor is stationary?
It has a surprisingly large field of view. A stationary vision sensor would work just fine.
I would really love to know the FOV of the vision sensor. Is it given on the sensor or in any other documentation?
I’m not sure if the specs are published anywhere, but the vision sensor is based on the pixycam, which has a horizontal FOV of 75 degrees and a vertical FOV of 47 degrees. http://cmucam.org/projects/cmucam5
Not certain if it’s identical to the vision sensor, but it looks like it is.
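If that 75-degree figure carries over to the V5 sensor, you can turn a reported X coordinate into a rough bearing to the object. A quick sketch — the 75 degrees is an assumption, frame_width is whatever image width your API reports, and the linear mapping is only approximate (pixel position actually scales with the tangent of the angle, but it’s close near the center):

```cpp
// Rough bearing (in degrees) to an object from its reported X coordinate.
// ASSUMPTION: the V5 sensor shares the pixycam's 75-degree horizontal FoV.
// frame_width is whatever image width your API reports for the sensor.
double bearing_deg(double x_coord, double frame_width) {
  const double hfov_deg = 75.0;                 // pixycam spec, assumed for V5
  double offset = x_coord - frame_width / 2.0;  // signed offset from center
  return offset * (hfov_deg / frame_width);     // linear approximation
}
```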
Well, this is probably the coolest thing I’ve seen this week.
Best of three would be cooler.
VEXU getting a V5 to at least start working on our bots would be cooler.
Hello, I’m Theo. The FOV of the Vision Sensor varies depending on your programming software.
In the PROS 3 header files, it’s:

```cpp
#define VISION_FOV_WIDTH 316
#define VISION_FOV_HEIGHT 212
```
It was different before, but I think they changed it to match the recommended values from VCS.
You can think of them as pixels, but I think they are actually arbitrary units.
There are 316 “units” between the left and right edges of the sensor’s FOV. I don’t know what changing that number will do, and I don’t know the maximum hardware resolution of the sensor.
Hope this helps
I think you misunderstand. For a lot of purposes the FoV (the angle the camera sees) is really important to know; without it, knowing how many pixels you’re dealing with may well be insufficient. There are a number of spots where you can find the pixel counts, but though they’re noted as FoV in those #define statements, that’s really resolution, not FoV. We’re still waiting on the other piece. The best figure we’ve heard yet is from CMU’s pixycam, as @Jacob Walter posted and as has been posted before. Ideally, VEX will publish the FoV. Next best, someone with a vision sensor will set the camera up, make physical measurements to determine what the FoV actually is, and let us know.
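The measurement itself is just trigonometry: square the camera up to a wall at a known distance, measure how much wall is visible edge to edge in the image, and the horizontal FoV is 2·atan(width / (2·distance)). A quick sketch, with made-up numbers:

```cpp
#include <cmath>
#include <cstdio>

// FoV from a physical measurement: point the sensor squarely at a wall,
// measure the distance to the wall and the width of wall visible across
// the full image, then apply 2 * atan(width / (2 * distance)).
// Any length unit works as long as both measurements use the same one.
double fov_degrees(double visible_width, double distance) {
  const double pi = std::acos(-1.0);
  return 2.0 * std::atan(visible_width / (2.0 * distance)) * 180.0 / pi;
}

int main() {
  // Made-up example: 46 inches of wall visible from 30 inches away
  // happens to come out near the pixycam's 75-degree spec.
  std::printf("Horizontal FoV: %.1f degrees\n", fov_degrees(46.0, 30.0));
  return 0;
}
```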
Right, I got confused about that. You want the actual angles it can see, not the resolution it outputs.
Once I have some time I can make those measurements.
Thanks