[X-Drive] Finding position on field using encoders

Hi everyone, this is my first post on the forum, so feel free to tell me I’ve done something wrong or put my post in the wrong place.

My team wants to have an auto-aiming launcher for NbN. To do this, I believe (correct me if I’m wrong) it is easiest to make the field a grid, with X and Y coordinates.

Our robot will have a gyroscope and motor encoders on each wheel. We can add sensors if required.

I want to find and update my coordinates based on my encoders, possibly using the gyroscope (if needed).

The problem is that I have no idea how to do this. Starting tips and suggestions would be awesome :smiley:

I have searched around a bit and haven’t found any threads related to this. If there are some that I haven’t discovered, please post a link to them and I will check them out :slight_smile:

Thanks, DarkMatterMatt

p.s. I am a beginning coder, with a basic understanding of the language, so please speak English :stuck_out_tongue:

There we go! Sweet. Now, let’s get started.

Here is a thread that is not only really cool, but also exactly what you are talking about. Unfortunately, they don’t seem to have put up their code, and I don’t quite feel like putting up anything my team has made yet, but I can run you through the basic concept of it.

First, let’s start from the bottom and figure out what our velocity is in X and Y. With an X drive it can be a tad tricky, but all you really need to do is average the motion vectors of all of the wheels, as per this diagram.

Now, if you are not familiar with vectors, here is the first site that popped up on Google. They are basically a way to describe motion and force in more than one dimension.

One funky thing is that unless you put your wheels in a + shape instead of an X (functionally identical, but the X shape is mathematically more complex), you will need to do some trigonometry. Since these vectors don’t point straight along a forward/backward or left/right axis of your bot (with an X-shaped drive), you will need to “decompose” the single “speed” of each wheel into its X and Y components before you can add them all up properly. This takes a bit of trigonometry, but I’m sure you can figure that part out on your own. There you have it! You now have the complete, overall speed and angle (from your gyroscope) of your robot! The next post will cover turning that into a position.
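If it helps, here is a rough sketch of that decomposition for an X drive with the wheels at 45 degrees. The wheel names, sign conventions, and units are assumptions, so flip signs to match your own robot; the point is just that each wheel’s speed gets split into X and Y parts and averaged.

```c
/* Hedged sketch: recover robot-frame velocity (vx, vy) from the four wheel
 * speeds of an X drive.  Wheel names and signs are assumptions; positive
 * means each wheel is rolling "forward" along its own 45-degree diagonal. */
#include <stdio.h>

#define SQRT2_OVER_4 0.35355339  /* sqrt(2)/4, comes from the 45-degree mounting */

typedef struct { double fl, fr, bl, br; } WheelSpeeds; /* e.g. inches/second */
typedef struct { double x, y; } Vec2;

Vec2 robotVelocity(WheelSpeeds w) {
    Vec2 v;
    /* decompose each 45-degree wheel vector into X and Y and average all
     * four; the rotation component cancels between opposite wheels        */
    v.x = (w.fl - w.fr - w.bl + w.br) * SQRT2_OVER_4;
    v.y = (w.fl + w.fr + w.bl + w.br) * SQRT2_OVER_4;
    return v;
}

int main(void) {
    WheelSpeeds w = { 10, 10, 10, 10 };         /* all wheels rolling forward */
    Vec2 v = robotVelocity(w);
    printf("vx = %.2f, vy = %.2f\n", v.x, v.y); /* vx = 0.00, vy = 14.14      */
    return 0;
}
```

(Fun side effect of the 45-degree mounting: each wheel only rolls at about 0.707 times the robot’s translation speed, which is why four wheels at 10 in/s come out to roughly 14.1 in/s of forward motion.)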

Now you know your whole, overall robot “vector.” Unfortunately, without high processing power and expensive tools such as LIDAR scanners (or not-too-expensive tools like the Kinect), the best and only way to actually know your position is to integrate your motion. What does this mean?

It means you know how fast your robot was going for a certain amount of time, so you can calculate how far it went. If you keep adding up every small distance you travel, you will know where you are. This is, of course, not an ideal method, as it does not actually tell you where you are on the field; instead it gives you your position relative to your starting position.
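In code it boils down to “position += velocity * time” every loop. Here is a minimal sketch of that idea; the velocity samples and the 20 ms loop period are made up for illustration.

```c
/* Minimal sketch of integrating motion: each loop, multiply the current
 * velocity by the time since the last loop and add it to a running total.
 * The velocity samples and the 20 ms period are made up for illustration. */
#include <stdio.h>

int main(void) {
    double x = 0.0, y = 0.0;   /* position relative to the starting spot, inches */
    double dt = 0.020;         /* 20 ms loop period (50 Hz)                      */

    /* pretend velocity samples (inches/second), e.g. from the wheel math */
    double vx[] = {  0.0,  5.0, 10.0, 10.0, 10.0 };
    double vy[] = { 14.1, 14.1, 14.1,  7.0,  0.0 };

    for (int i = 0; i < 5; i++) {
        x += vx[i] * dt;       /* distance this loop = speed * time */
        y += vy[i] * dt;
    }
    printf("x = %.3f in, y = %.3f in\n", x, y);
    return 0;
}
```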

Furthermore, this method is prone to something called “drift”. You may have heard this term applied to gyroscopes, since they actually do the exact same thing (gyros don’t detect angle, they detect the rate of change of angle, and that rate is integrated over time). It essentially means that your measurements are not perfect, so over time all of the little errors add up to one giant error.

This hasn’t proven much of an issue for teams doing autonomous movement, as the period only lasts 15 seconds, meaning accrued error doesn’t usually go over an inch. In addition, teams often “re-zero” themselves against a wall or corner to kill the error off.

So run this through several times a second, with the gyroscope showing which way is forwards?

Yup! That’s just it! Great visualization.

For the integration, it would be best to run this at approximately 50 Hz rather than a “few” times a second, which is roughly the fastest loop rate VEX controllers allow if I’m not mistaken. Integration by Euler’s method (which is mathematically what you are doing) is a naturally imprecise process which gains precision as you decrease the period. Less time between measurements means less unmeasured fluctuation between measurements.

In addition, as you are integrating, raw velocity may not be the best source of your “ticks”. When I was doing this I had issues with huge drift, as velocity seems to be measured in “time between ticks” instead of “average ticks per second” to give better instantaneous velocity measurements. I’m not entirely sure how the Cortex handles it, but if you have issues this may be one of the causes. I would stick to actually measuring the difference from the last tick count to the current one to keep precision up, so you don’t “miss ticks”.
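A rough sketch of that “difference since the last reading” loop is below. readTicks() and waitMs() are just stand-ins for whatever your environment provides (e.g. SensorValue[] and wait1Msec() in ROBOTC), and the ticks-per-inch number is made up, so treat all of it as an assumption.

```c
/* Hedged sketch: a ~50 Hz loop that works from encoder tick deltas rather
 * than a reported velocity.  readTicks()/waitMs() are placeholders for your
 * own sensor and timing calls; TICKS_PER_INCH is an example value only. */
#define DT_MS          20       /* ~50 Hz loop                        */
#define TICKS_PER_INCH 28.6     /* measure this for your wheels       */

static long fakeTicks[4];                       /* stand-in "hardware"      */
static long readTicks(int wheel) { return fakeTicks[wheel]; }
static void waitMs(int ms)       { (void)ms; }  /* replace with a real wait */

void odometryLoop(void) {
    long last[4];
    for (int i = 0; i < 4; i++) last[i] = readTicks(i);

    while (1) {
        double deltaInches[4];
        for (int i = 0; i < 4; i++) {
            long now = readTicks(i);
            deltaInches[i] = (now - last[i]) / TICKS_PER_INCH; /* distance this loop */
            last[i] = now;                                     /* never miss a tick  */
        }
        /* feed deltaInches[] into the X-drive math above and add the result
         * straight onto the running position - no separate velocity step   */
        waitMs(DT_MS);
    }
}
```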

Also, as you mentioned, you do need the gyroscope to know which way is forward. The final motion vector needs to be turned into an angle and a speed, rotated by the angle of your robot (as measured by your gyroscope), and converted back into an <x,y> vector so your integration matches up with the field. (<0,100> really means <-100, 0> in relation to the field if your robot has turned 90 degrees to the left.)
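That rotation can also be done directly on the <x,y> components without going through polar form. Here is a small sketch, assuming a counter-clockwise-positive heading in radians; match the sign convention to your gyro.

```c
/* Hedged sketch: rotate a robot-frame displacement into field coordinates
 * using the gyro heading.  Counter-clockwise-positive radians is an
 * assumption - flip the sign to match your gyro. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y; } Vec2;

Vec2 toFieldFrame(Vec2 local, double headingRad) {
    Vec2 field;
    field.x = local.x * cos(headingRad) - local.y * sin(headingRad);
    field.y = local.x * sin(headingRad) + local.y * cos(headingRad);
    return field;
}

int main(void) {
    Vec2 local = { 0.0, 100.0 };                 /* robot thinks it moved <0,100> */
    double quarterTurnLeft = 3.14159265358979 / 2.0;
    Vec2 field = toFieldFrame(local, quarterTurnLeft);
    printf("<%.0f, %.0f>\n", field.x, field.y);  /* prints <-100, 0>, as above    */
    return 0;
}
```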

Lastly, if you’re feeling really clever, you might supplement your gyroscope with the encoders. It’s never quite necessary, but you may notice that a discrepancy between two opposing omni wheels correlates linearly with rotation. (If the upper-right encoder is giving 10 ticks AND the lower-left encoder is ALSO giving 10 ticks, you know the robot has rotated by scaled_encoder_count/radius_of_robot radians.)

(I’m not sure what level of math you know, but just in case you or readers are not familiar with radians, they are an angle measurement based on the radius. One radian is the arc length the radius of a circle would cover if wrapped around the circumference. 360 degrees = 2pi radians.)
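If it helps, here is a hedged sketch of that idea. The ticks-per-inch value, the distance from the robot’s center to a wheel, and the sign convention are all assumptions for your particular robot.

```c
/* Hedged sketch: heading change from a pair of opposite omni wheels, using
 * arc length / radius = angle in radians.  The constants are examples only. */
#include <stdio.h>

#define TICKS_PER_INCH        28.6  /* measure this for your encoders/wheels */
#define CENTER_TO_WHEEL_INCH   7.0  /* robot center to wheel contact point   */

double headingChangeRad(long frontRightDeltaTicks, long backLeftDeltaTicks) {
    /* with the sign convention assumed here, both opposite wheels read the
     * same sign when the robot spins, so average the two                    */
    double arcInches =
        (frontRightDeltaTicks + backLeftDeltaTicks) / 2.0 / TICKS_PER_INCH;
    return arcInches / CENTER_TO_WHEEL_INCH;     /* arc / radius = radians   */
}

int main(void) {
    /* both opposite wheels moved 10 ticks -> a small turn */
    printf("%.4f rad\n", headingChangeRad(10, 10));
    return 0;
}
```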

Cheers and good luck!

WARNING: I do not believe these calculations are entirely valid for mecanum drives, because at full tilt the rollers may not actually be rolling due to their internal friction. In an X drive the rollers HAVE to roll.

Hey Guys,

I’d hate to rain on your parade, but what happens if you get bumped and your wheels slide?

I know with omni-drive “at least one wheel will be turning no matter the direction”, but motor resistance could easily allow the wheels to lose grip.

Any ideas on how to compensate for this?

Note my mention of other tools which are unavailable in the VEX competition due to price and processing power requirements. For now, this is the best we have.

So as for your first question, tough luck; chances are you have bigger issues than shooting if you’re in a pushing fight using a holonomic drive.

For the second question, “motor resistance” isn’t an issue, as the math makes sure your motors are in sync. There’s never a situation where a wheel is being dragged across the ground and the motor isn’t moving.

There ARE, however, already-discussed ways to mitigate the bumping issue. You can essentially add 2 more unpowered “follower” wheels, like in a mouse. You will still lose your orientation if someone bumps you to the point of lifting your wheels, but at least your omnidirectional drive (which apparently is able to produce enough torque to spin your wheels instead of stalling your motors) won’t introduce error in your pushing matches.

This all boils down to minimizing drift. Actually integrating the encoder ticks isn’t all that hard, but your error factor is going to grow at a horrifically high rate.

Holonomic drive wheels are constantly in a state of sliding. This is fine if you know what direction you’re sliding in, but often establishing a definitive heading is tricky.

It can be done with a gyro, but that gyro is also subject to drift. So you rapidly lose confidence in your heading, and that compounds when you use the now lossy heading to integrate encoder ticks.

This is a tricky problem. Don’t expect a simple solution. A Kalman filter with lots of inputs (gyro, accelerometer, encoders, etc.) that’s been well tuned on a really well-built robot may do. But that’s far beyond the scope of something that could be explained here on the forum.

I haven’t seen any issues with gyro drift (save for bad gyros or wiring) in my 4+ years writing code for robots. A properly set up gyro (again, not broken and with no wires passing motors) should at the VERY MOST drift 0.5 deg/min. Generally there’s not enough time even in the 15-60 second autonomous period for that to matter. If a team is missing something, it is more likely that flexibility of the frame or improper sampling is to blame. (Seriously, if you play the entire match the maximum half a degree off and we assume you drive 12 feet total, you lose... drumroll... about 1.2 inches, since 144 inches * sin(0.5 degrees) is roughly 1.26 inches. *cough* subject to encoder precision *cough*)

I don’t imagine many people actually use encoders to supplement their gyroscopes for angle measurement. They generally lack the same precision, and the gyros themselves are accurate enough for the autonomous period. If you REALLY want more than your gyro can give, you may want to pair it with a magnetometer. That has worked for the Oculus Rift at the very least.

Bottom line: Unless you’re planning to play THE ENTIRE GAME without touching a corner, you don’t need anything more than your encoders and a standard gyro, giving separate measurements of distance and angle respectively.

Theoretically the VEX gyro can be this good; Chris wrote a nice blog article about it a couple of years back.
https://vamfun.wordpress.com/2011/05/09/note-vex-gyro/

It is affected by vibration; I did some tests here.
https://vexforum.com/showpost.php?p=346287&postcount=1

Many teams struggle to get good results.

Think I might have been thinking of the compass… Humm, not sure. It’s been too long and I don’t feel like grabbing the code.

Probably not, VEX doesn’t have a compass as a product.

This can be done in theory with a combination of accelerometer and gyro, but it’s hard. What we really need is an IMU (inertial measurement unit): a combination of accelerometer, gyro, and compass with some clever processing that gives us a position and orientation.