New V5 vision sensor

  1. 5 months ago

    Do you guys think it is possible to use the new V5 vision sensor to auto-correct the robot's position for shooting at the flags? If yes, can you show the code?

  2. AlexM_4478X

    May 16 Monroe, CT 4478X

    I can't speak to the code part of that, but if you look here at another thread
    https://www.vexforum.com/index.php/33535-visual-sensor-and-painted-robot
    you'll find info on how the vision sensor works. It is modeled on the CMU Pixy camera (there is a video description of it in the thread).

  3. leeto_of_troy

    May 16 Arizona 6030J

    From VEX's V5 architecture website:

    The Vision Sensor provides a robot with new capabilities and allows for expanded learning. At its most basic mode, the sensor tells you where a colored object is located. The location's X value gives you the right and left position. When the camera is tilted down, the Y value gives you the distance to the object, with a little basic trigonometry on your part.

    You could definitely use the vision sensor to aid in positioning the robot to shoot at the flags.

    And from PROS V5 API:

      // coordinates of the middle of the object (computed from the values above)
      uint16_t x_middle_coord;
      uint16_t y_middle_coord;

    You could definitely use x_middle_coord as the current-value input for a PID controller to help align the robot with the flag, though I don't have any example code on hand right now.

  4. According to someone I know who does FRC, Robonauts did this for 2017 and got it working really well to accurately shoot from anywhere on the field. Though this will take an absurd amount of testing to make sure that there aren’t some exceptions the program doesn’t catch.

  5. heatblast016

    May 16 Redmond, WA

    Yeah, I'm pretty sure Robonauts did this in 2017, though it wasn't 100% accurate. Having multiple targets may add some complications though, because there isn't just one target that you can stay locked onto.

  6. MayorMonty

    May 16 Greenville, SC 3796B

    For all the talk about the game-changing nature of the V5 Vision Sensor, because the field is pretty static, I'd imagine you'd get similar results merely by having accurate drive and turn functions, and wall squaring at the right time.

  7. heatblast016

    May 16 Redmond, WA

    @MayorMonty Yeah, you could, but that would be affected by a variety of factors, and the vision sensor makes it so that even in the driver-controlled period you can always hit the target without aiming. Look at FRC team 254's 2016 robot as an example: they had vision automatically targeting the goal, meaning their robot could pick up a ball and shoot it in without needing to reposition. Also, sensor error can cause problems with that approach.

  8. sazrocks

    May 17 Arizona 2114V
    Edited 5 months ago by sazrocks

    @MayorMonty For all the talk about the game-changing nature of the V5 Vision Sensor, because the field is pretty static, I'd imagine you'd get similar results merely by having accurate drive and turn functions, and wall squaring at the right time.

    Wouldn’t that require an accurate field position tracking system, something that AFAIK only two teams (1826 and 323Z) have ever made work consistently?

  9. Edited 5 months ago by Impulse Theory

    If your robot shoots from far enough away from the flags, a vision sensor should see all 3 flags vertically (but there is nothing stopping you from mounting your sensor sideways so it sees more vertically; you'd be able to get away with a closer shot). I think it would be pretty easy to implement an aim corrector that positions the robot just right to shoot the flag chosen by the user (maybe a 2-button setup, one button per row for the top 2 rows). It would just choose the column of flags closest to the center of the screen.

  10. dbenderpt

    May 17 Indianapolis, Indiana 6842
    Edited 5 months ago by dbenderpt

    @sazrocks Wouldn’t that require an accurate field position tracking system, something that AFAIK only two teams (1826 and 323Z) have ever made work consistently?

    I played against that 323Z design...didn't work out so well for me ;) Brilliant coding on that robot.

  11. MayorMonty

    May 17 Greenville, SC 3796B

    @sazrocks Wouldn’t that require an accurate field position tracking system, something that AFAIK only two teams (1826 and 323Z) have ever made work consistently?

    My thought is that you'll want to limit the positions that you're shooting from. If you want to adjust your launcher to shoot from anywhere on the field, then it would be a very good idea to use the V5 Vision Sensor.

  12. heatblast016

    May 18 Redmond, WA

    @sazrocks It depends on the vision sensor, because you can use a combination of target size (in pixels), a lookup table, and linear interpolation between points to do a very similar thing.

 
