GPS Questions for Spin Up

Hiya everyone! I have a question about the GPS sensor. How do I make the robot turn to face a specific X and Y coordinate? My team and I have gotten the coordinates for the goals, so we’re trying to make it so that you can just press a button and the entire robot turns towards that goal. Thanks as always!

You are going to need to use trigonometry, more specifically arctangent (atan2), which gives you the angle from one point to another using the difference between the x and y coordinates. Then make the robot face that angle.

5 Likes

To add on to what @7996B is saying, if (x1, y1) is the position of your robot, and (x2, y2) is the position you want to face, you can calculate the angle you want to turn to from the differences x2 - x1 and y2 - y1 using atan2.
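A minimal sketch of that calculation (all names here are illustrative, and it assumes the VEX convention of heading measured in degrees clockwise from the positive Y axis):

#include <cmath>

// Heading (degrees, 0-360, clockwise from +Y) pointing from (x1, y1) to (x2, y2)
double headingToPoint(double x1, double y1, double x2, double y2) {
  double deg = atan2(x2 - x1, y2 - y1) * 180.0 / M_PI; // x and y swapped on purpose for compass-style angles
  if (deg < 0) deg += 360.0; // wrap into [0, 360)
  return deg;
}

// Usage (GPS1, goalX, goalY are assumptions):
// double target = headingToPoint(GPS1.xPosition(mm), GPS1.yPosition(mm), goalX, goalY);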

15 Likes

Just a reminder that sadly GPS strips are not required on any field but the programming skills fields, though they are often on the driver skills fields too.

5 Likes

Except at Worlds, I think. There they’re always there. what a tongue twister

Auto-aim will be difficult during a match without something like odometry, which can itself drift off course (unless you really do some good coding and multitasking, probably with a second controller).

Got to give the EPs and their event-hosting organizations a little time. Starting this season, our organization will be using portafields at the events we host (including the upcoming RiverBots signature event) for all competition fields, skills fields, and at least one practice field (additional practice fields will have metal perimeters with the stick-on/fall-off velcro strips, as we only have 6 portafields so far). I’m sure other EPs are stepping up their game as well, since the requirement for GPS on all fields is inevitable.

7 Likes

Do you want to drive to a coordinate, or face a coordinate? Or do you want to drive to a coordinate and face a certain position?

For the “point in a direction” case, you could just use the inertial sensor or the vision sensor with a PID.
For the inertial sensor, you just need to write a PID on its heading. I will be writing this in C++, and you will need to declare all of the variables I use as doubles before your while loop, should you choose to use this in your main loop. (Note: calibrate the inertial sensor as part of your pre-auton.)

if (Controller1.ButtonX.pressing()) { target = 90; } // or make more of these for each direction you want it to face
err = target - Inertial1.heading(degrees);
while (err > 180) err -= 360; // wrap the error into [-180, 180] so the robot takes the short way around
while (err < -180) err += 360;
speed = err - lasterr; // change in error since the last loop (the D term)
lasterr = err;
autoturn = err * kp + speed * kd;
(Set kp = .1 and kd = 0, tune kp until it is slightly aggressive, then tune kd starting at .1 until it damps the motion without making it sluggish. It is all trial and error, and you can look up PID tuning.)
Lmotor.spin(forward, autoturn * .12, volt); // .12 converts percent to volts (100% -> 12 V)
Rmotor.spin(forward, -autoturn * .12, volt);

If you want to do this during driver control, then when you assign your axis values to the motors, include the autoturn term, like .spin(forward, .12 * (Axis3 + Axis1 + autoturn), volt);
However, to get this to only happen when you press a button, you need something like
if (!Controller1.ButtonUp.pressing()) { autoturn = 0; }

If you want to use a camera, it’s a pretty similar process.

Vision14.takeSnapshot(Vision14__SIG_1);
err = 160 - Vision14.largestObject.centerX; // 160 is roughly the horizontal center of the 316 px wide image
… rinse and repeat.

I put in a condition for my autoturn to also equal zero if the largest object’s width is below a threshold, so that if it doesn’t see anything, it doesn’t wildly turn to the side. There are lots of other auto-aim tricks you can do.
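A minimal sketch of that guard (the 20 px width threshold is illustrative; tune it for your signature):

Vision14.takeSnapshot(Vision14__SIG_1);
// Only aim if the detection looks big enough to plausibly be the goal
if (Vision14.largestObject.exists && Vision14.largestObject.width > 20) {
  err = 160 - Vision14.largestObject.centerX;
} else {
  autoturn = 0; // nothing credible in view, so don't turn
}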

Now if you are talking about driving to a position, and not just turning, it’s much harder.
You basically need:

  • [Block of code to figure out where you are and your heading.]
  • [Block of code to rotate the field around your bot and figure out the required amount forward/back, and the rotation using lots and lots of trig]
  • [Translational PID to go forward and back the required amount]
  • [Rotational PID to turn the required amount]
  • motor.spin(forward, .12*(translationout+/-rotationout), volt), just like above (.12 converts percent to volts)

And then have a strafe amount too for an X drive or mecanum. Lots of code to tweak; a rough sketch of the first two bullets is below.
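Here is that sketch, assuming a GPS (or odometry) pose and the PID loops from above (GPS1, targetX, and targetY are illustrative names):

#include <cmath>

// Vector from the robot to the target point
double dx = targetX - GPS1.xPosition(mm);
double dy = targetY - GPS1.yPosition(mm);

// Distance to drive, and the compass-style heading that points at the target
double dist = sqrt(dx * dx + dy * dy);
double targetHdg = atan2(dx, dy) * 180.0 / M_PI;

// Heading error, wrapped into [-180, 180] so the robot takes the short way around
double hdgErr = targetHdg - GPS1.heading(degrees);
while (hdgErr > 180) hdgErr -= 360;
while (hdgErr < -180) hdgErr += 360;

// Feed dist into the translational PID and hdgErr into the rotational PID, then:
// Lmotor.spin(forward, .12 * (translationout + rotationout), volt);
// Rmotor.spin(forward, .12 * (translationout - rotationout), volt);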

I would start by learning to make an aimbot using a vision sensor. Then you can work your way up to odometry.

3 Likes

Dome fields didn’t have GPS. This year there might be GPS on all Worlds fields.

2 Likes

What about using a vision sensor to detect the goals? I was thinking of using that, and maybe using the size of the goal in the image to determine the distance from it.
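If you try the size-to-distance idea, the usual approximation is a pinhole model where apparent width scales roughly as 1/distance. A hypothetical sketch (the calibration numbers are made up; measure your own):

// Calibrate kCal = (known distance) * (pixel width measured at that distance)
double kCal = 500.0 * 120.0; // e.g. goal measured 120 px wide at 500 mm (made-up numbers)

double estimateDistanceMM(int pixelWidth) {
  if (pixelWidth <= 0) return -1.0; // no detection
  return kCal / pixelWidth;
}

// Usage: double d = estimateDistanceMM(Vision14.largestObject.width);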

Yeah, the problem with that is that VEX hates making good products, so the vision sensor is dogwater, fr. It struggles to lock onto something accurately, especially at a distance. You can probably get it working for aiming if you try (I know a team who did), but for distance? I doubt it. But try anyway if you want.

1 Like

Make sure to test the Vision Sensor’s ability to detect goals in “realistic” situations. You’ll possibly compete in gyms with lots of red or blue backgrounds, and your opponents may wear blue or red shirts.

1 Like

We had an accomplished student working on that last year, but I don’t think his results were better than odom. He did get it partially working, but the error rate was fairly high. Also, keep in mind that your camera angle might have to pivot because of the high goal’s height and your varied distance from it. All this is not to say that it won’t work, and frankly, it makes sense if you can get it to work because you might already be using a vision sensor for targeting. However, adding odometry using, say, tracking wheels, is not especially hard and is (in my experience) more reliable.

Yeah I will, thanks for the advice!

TBH the main aim is to get aiming working; our driver said he’d prefer to manually move the bot closer to/further from the high goal anyway.

There should be rules limiting interference with vision sensors, like placing colored items behind or near the target. My team has discussed this before, but I don’t remember the conclusion. I will edit this with a rulebook link if I find one.

Edit: Well, R12 Sec. d says that game-object-mimicking devices on a robot are illegal. I can’t find anything on game-element-mimicking devices outside the field, but I’ve contacted a VEX legal authority to ask his opinion. I imagine a ref would ask a bystander with such a device to move, but I also know that without a rule to back that up, they may not, and the people on this forum tend to lean towards the ‘not’ side.

Another edit: G1 would allow a ref to ask that a game-element-mimicking device be removed, though they may choose not to and, going by past experience, would most likely overlook it. However…

let’s all take a moment of silence to remember G3.

I see. I might use odometry instead, but until I actually get the sensor, I can’t say for sure. Thanks for the advice though!

No problem. They are absolutely correct about the limitations of the sensor, though. We have also had two of our original three suddenly die, so yeah: low quality, high expense, time-consuming problem solving, and shooting trouble like discs are skeets. It could be fun though…

We have gotten consistent results using the code I posted above.

1 Like

Have multiple blue signatures and compare the performance of several of them. Literally just cycle through snapshots comparing the area of vision.largestObject (width * height) and saving the best-performing one. That way you can make up for exposure. A sketch of this is below.

Use multiple vision sensors.

Use a distance sensor to measure the distance to the net after the aimbot targets it.

Average the aimbot’s image data over multiple frames to reject noise and better identify the target.
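A rough sketch of the signature-cycling idea (SIG_2 and SIG_3 are hypothetical extra blue signatures tuned at different exposures):

vex::vision::signature* sigs[] = { &Vision14__SIG_1, &Vision14__SIG_2, &Vision14__SIG_3 };

int bestArea = 0;
int bestCenterX = 160; // default to image center, i.e. no correction
for (auto* sig : sigs) {
  Vision14.takeSnapshot(*sig); // grab a frame filtered by this signature
  int area = Vision14.largestObject.width * Vision14.largestObject.height;
  if (Vision14.largestObject.exists && area > bestArea) {
    bestArea = area;
    bestCenterX = Vision14.largestObject.centerX;
  }
}
err = 160 - bestCenterX; // feed the best detection into the aiming PID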

4 Likes

… this is an interesting idea that may require testing. I’ll get on it. Thanks!

1 Like