I’m not sure what this would actually be called, so sorry if this question has already been brought up and I just didn’t know what to search for. One of the issues with the mecanum drive is that it’s not intuitive like a tank or arcade drive. To fix this, I wanted to be able to move based on where the driver is located instead of the robot’s current orientation (sorry if that doesn’t make sense). For example, if the robot is turned at a 45 degree angle and the driver pushes straight forward, the robot would move straight relative to its starting orientation. I’m also not sure if this would be more intuitive, so if not please let me know. I realized that this probably is confusing, so I provided a picture. I was just wondering if someone could provide some pseudo-code. Thank you for the help.
I believe this is called Field-Centric Drive - searching for that term should help you find what you need. Maybe even look at resources from other competitions such as FRC or FTC.
Hope this helps!
If you know a little trig and have a gyro, it’s not that bad. The problem with a mecanum drive is that the wheels don’t go to the side as fast as they go forward or backward. With an X drive this would be much easier.
For the rotation alone it’s something like:
x’ = x cos(gyro) + y sin(gyro)
y’ = y cos(gyro) - x sin(gyro)
Then you apply the x’ to the strafing and the y’ to the fwd and back, and add in the rotation of the bot from another input at that point.
Yeah I guess that’s not that easy… But not impossible either. Get on the good side of your precalc teacher and they can help you work it all out.
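To make the trig above concrete, here is a rough sketch in C++. Everything here is an assumption you’d adapt to your setup: `gyroDeg` stands in for whatever your gyro reports, and the wheel order and signs in the mecanum mixing depend on how your motors are mounted and wired.

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

struct WheelSpeeds { double fl, fr, bl, br; };

// Rotate field-frame joystick commands (x = strafe, y = forward) into the
// robot frame using the gyro heading, then do standard mecanum mixing.
WheelSpeeds fieldCentricMecanum(double x, double y, double turn, double gyroDeg) {
    double rad = gyroDeg * kPi / 180.0;
    // Rotate the translation vector by the robot's heading.
    double xr = x * std::cos(rad) + y * std::sin(rad);
    double yr = y * std::cos(rad) - x * std::sin(rad);
    // Mix translation and rotation into the four wheel speeds.
    return { yr + xr + turn,    // front-left
             yr - xr - turn,    // front-right
             yr - xr + turn,    // back-left
             yr + xr - turn };  // back-right
}
```

With the robot turned 90 degrees, pushing the stick straight forward comes out as a pure strafe in the robot frame, which is exactly the field-centric behavior being asked about.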
Another route for this is to look at odometry; 5225A has a great document and code release about it. It uses VEX shaft encoders to track the robot’s exact position and angle on the field. This can be useful for both autonomous and driver control. Word of warning about both this and the method above: they are impact sensitive. If the bot gets bumped, it more than likely won’t have a correct “forward” anymore.
Odometry is used for exactly this reason: being able to stay on path despite obstacles, since the robot can tell its orientation and distance relative to the starting point.
Edit: In driver control the contact is far too heavy; in autonomous it will work fine if written well.
If you do this, you probably will have to recalibrate it quite often. It may be impractical this season with heavy defense and bumping into a bunch of cubes and towers and whatnot.
One more quick comment on strafing drives, they shouldn’t be as hard to drive as you think with practice. I used to fly rc helicopters, which uses a similar control scheme to this type of drive. Use your left joystick to go front/back and left/right and use the right left/right axis for turning. This makes it easy to drive and turn simultaneously.
The hardest step is creating a drive system that preserves directionality of the joystick inputs. Before worrying about the field orientation, you have to come up with a system where pushing the joystick at a certain angle from center will make the robot move away from its center at the same angle. And once you have that, adding in a gyro/heading consideration is actually really easy.
The first step to a system that preserves the direction of the joystick input is to, well, measure the direction of the joystick input. This step is really easy. Feed your X and Y joystick values into an inverse tangent function. Presto change-o, you have the heading of the translation inputs. (We’ll ignore the rotation axis for now. It’s less interesting, and can come in later.)
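A small sketch of that first step, assuming stick values in the usual -100..100 range and a convention where 0 degrees is straight ahead and clockwise is positive (pick whatever matches your gyro). Note that `atan2` handles all four quadrants and a zero input, unlike a plain `atan(y / x)`:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Heading of the translation input, in degrees.
// 0 = forward, +90 = right, 180 = backward (one possible convention).
double joystickHeadingDeg(double x, double y) {
    return std::atan2(x, y) * 180.0 / kPi;
}
```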
The next step is where the magic starts. Your joystick inputs have to be agnostic to the current orientation of the robot. On paper, the joysticks can cover any value between -100% and 100% simultaneously, and this is true if you use the vertical axis of one stick and the horizontal axis of another. But if you use both axes on one stick, you will discover that both axes cannot be at their extremes simultaneously. One methodology has a square domain, and the other has a rounder (but not circular!) domain. There are a few ways to deal with the non-square, not-quite-circular domain. What you want in the end, though, is the same: a magnitude value that is 0 at the center and 100% when the stick is moved as far as it can go.
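One simple way to get that magnitude, sketched here under the assumption of a -100..100 input range: take the Euclidean length of the stick vector and clamp it so the odd corners of the domain can never command more than 100%.

```cpp
#include <cmath>
#include <algorithm>

// Magnitude of the stick deflection: 0 at center, capped at 100%
// regardless of the exact shape of the joystick's domain.
double stickMagnitude(double x, double y) {
    double mag = std::hypot(x, y);   // Euclidean length of the input vector
    return std::min(mag, 100.0);     // clamp the corner cases to full power
}
```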
Once you’ve dealt with that, field-orienting your control is as simple as changing the joystick input direction by your robot’s heading (whether read from a gyro or odometry). This is the really easy step, assuming that you’re keeping track of the direction fairly accurately.
But then you have to translate this direction and magnitude back to motor commands that preserve the directionality of the command. This is basically the reverse of the step with the joysticks. If you got this far, this part shouldn’t be too hard.
And then all that’s left is to consider what to do about the rotation input!
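Going from a (direction, magnitude) command back to components is just the polar-to-rectangular conversion, after which the usual drive mixing applies. A minimal sketch, using the same degrees-and-clockwise convention assumed earlier:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

struct Components { double strafe, forward; };

// Convert a heading (degrees, 0 = forward, clockwise positive) and a
// magnitude back into the strafe/forward pair the drive mixing expects.
Components commandToComponents(double headingDeg, double magnitude) {
    double rad = headingDeg * kPi / 180.0;
    return { magnitude * std::sin(rad),    // strafe component
             magnitude * std::cos(rad) };  // forward component
}
```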
One methodology has a square domain, and the other has a rounder (but not circular!) domain. There’s a few ways to deal with the non-square, not-quite-circular domain.
Do you mean that there is a mathematical way to alter the domain from round to square or do you mean that by making sections of the joystick “dead”, you can create a square domain? Also, why is the domain round but not circular?
The domains I was talking about were the possible joystick axis values. Using two different sticks produces a square graph of possible X’s and Y’s, but using just one stick produces something that’s rounder than a square but not-quite a circle. Nothing involving a deadband here. That should be done before you feed a value into your control scheme.
You can check the domain of your joysticks with just a bit of graphics on the screen. Here is a program that I wrote when trying to investigate the joystick ranges that draws a square on the Brain screen for each joystick and displays the current position of each to the handheld controller. As you move the joysticks, it fills in the squares, and the result is the shape of your joysticks’ ranges. You will probably notice that it doesn’t fill the square (unless they’ve updated the firmware and I didn’t notice). The shape won’t be a circle, but it will be round-ish.
As for the mathematical way of stretching circles into squares and vice versa, it’s not too hard if you know polar coordinates. If you consider only a quarter of it at a time, the bounding box of the square in that quadrant is going to be represented by r(theta) = r0 * sec(theta)
or a similar equation. So to convert any magnitude from a square-based magnitude to a circle-based one, you just divide it by the trig term for that section (sec(theta) in the case of theta being between -45 and 45 degrees). And because you only care about the scalar modification, you can actually just use the sec(theta) scaling for everything, so long as you add or subtract 90 degrees to theta until it’s between -45 and 45 first.
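That fold-then-divide idea can be sketched like this (a sketch only; the -45..45 folding and the sec(theta) term are exactly the ones described above):

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Scale factor for converting a square-domain magnitude to a circle-based
// one. Fold theta into -45..45 degrees so one sec(theta) term covers all
// four sides of the square, then return sec(theta) for that section.
double squareToCircleScale(double thetaDeg) {
    double t = std::fmod(thetaDeg, 90.0);
    if (t > 45.0)  t -= 90.0;
    if (t < -45.0) t += 90.0;
    return 1.0 / std::cos(t * kPi / 180.0);  // sec(theta)
}
// Divide a square-based magnitude by this factor to get the circle-based one.
```

Straight pushes (0, 90, 180, 270 degrees) get a factor of 1, and the diagonals get sqrt(2), which is exactly the stretch between a square’s side and its corner.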
The two equations for converting back and forth between polar and rectangular coordinates are x = r * cos(theta)
and y = r * sin(theta)
. Those are also what you’d use to find what trig function to multiply by to stretch a circle into the range that your motors can produce.
An accelerometer could probably provide enough data to know when the bot is bumped. So if the accelerometer detects a large enough change in motion, or a change in motion that doesn’t fit with the other data, it could revert to the “forward” from before the impact.
My team wanted to use field-oriented drive this year. I ported this to PROS and it worked after I accounted for the gyro value being backwards from RobotC.
You may well find a good chunk of what you need from FTC. The big issue you would likely face is that FTC has an IMU readily available to get orientation on request.