dx = targetX - CurrentX;
dy = targetY - CurrentY;
angle = radiansToDegrees(atan2(dy, dx);
this is the bit of code I am using for it
You’ve forgotten to close the parentheses around your radiansToDegrees call. Unfortunately, I can’t find documentation for atan2 anywhere, so I can’t say for sure what you need to get it to work. But apart from the parentheses, that code should work (assuming you have already declared the dx, dy, and angle variables with appropriate types before this part).
I’ve figured it out
If my current x position is positive, dx will be negative, and the same is true for my y position and dy. But to get the appropriate angles for a positive x and y position, atan2 expects to receive a positive value, so all I have to do is feed the atan2 function -dy and -dx, then add 180 to what comes out of the radiansToDegrees function, and I'm good.
May I ask how you determine the position of your robot? Is it through sensors like encoders and a gyro, or is there something else in the code that needs to be considered?
My team is also looking into position tracking, but I am not sure how to approach it.
My team has position tracking nearly done using accelerometers, because the wheels might slip, making encoders inaccurate after the first bump. The key is to figure out how fast you are moving in the x and y directions, taking the angle of the robot into account. Remember: x = x0 + v0*t + 0.5*a*t^2.
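That kinematics formula can be turned into a per-cycle integration step. This is only a sketch, under the assumption that the accelerations have already been rotated into field coordinates using the robot's heading; all names are illustrative:

```c
/* One integration step of x = x0 + v0*t + 0.5*a*t^2, applied per axis.
   ax and ay would come from the accelerometer, already rotated into
   field coordinates using the robot's heading angle. */
typedef struct { double x, y, vx, vy; } State;

void integrateStep(State *s, double ax, double ay, double dt)
{
    /* Position update uses the velocity at the start of the step
       plus the constant-acceleration term for this interval. */
    s->x  += s->vx * dt + 0.5 * ax * dt * dt;
    s->y  += s->vy * dt + 0.5 * ay * dt * dt;

    /* Then advance the velocity for the next cycle. */
    s->vx += ax * dt;
    s->vy += ay * dt;
}
```

Note that any accelerometer bias gets integrated twice, so the position error grows quadratically with time; that is the usual catch with this approach.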
So far we are using an encoder on each wheel and two gyros, and we average the values of the gyros. I am proud to say the tracking code is fully functional. We also have code in place (this is what the atan function was for) that will move the robot autonomously to any position in the XY plane, assuming it has a clear straight path to it. It can also autonomously point itself at the goal.
There is definitely a lot of drift in the system. The real problem is not wheel slip but gyro drift, and the accelerometers do not solve that problem. Even after averaging two gyros and individually tuning their sensor scales, they just don't seem accurate enough; they drift far too much. Until a solution is found for gyro drift, I don't know that we will see a really accurate way to measure robot position. The thing we are now experimenting with is using the difference between the encoders to solve for the angle.
Have you looked into implementing a Kalman filter? Or maybe some high- and low-pass filters? Or maybe a complementary filter (since a full Kalman filter is probably out of reach here)?
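For the complementary-filter option, a minimal sketch, assuming you have a gyro rate reading plus some slower, drift-free heading reference (for example one derived from the encoders); the names and the ALPHA value are illustrative, not from any RobotC library:

```c
/* Complementary-filter sketch: integrate the gyro for short-term
   accuracy (high-pass) and lean on a slower drift-free heading
   reference for the long term (low-pass). */
#define ALPHA 0.98   /* closer to 1.0 = trust the gyro more */

double fuseHeading(double prevHeading, double gyroRate, double dt,
                   double referenceHeading)
{
    /* Short-term estimate: integrate the gyro's rate over this cycle. */
    double gyroHeading = prevHeading + gyroRate * dt;

    /* Blend: mostly gyro, with a small pull toward the reference,
       which slowly bleeds accumulated gyro drift back out. */
    return ALPHA * gyroHeading + (1.0 - ALPHA) * referenceHeading;
}
```

The appeal is that gyro drift cannot accumulate forever: any constant offset decays toward the reference at a rate set by ALPHA.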
It will be interesting to see if someone can actually accomplish effective position tracking with these questionable sensors.
I would love to do this; unfortunately, I don't really know how any of this filter stuff works. It's something I will have to research and look into.
This sounds really impressive, minus the problems with drift. About how long would you say it can accurately aim at the goal before too much drift makes it impossible to do so?
I would say about two rotations of the robot; once the total rotation reaches something around there, the drift is too great.
OK, so I spent the day looking into filters, and I have a question. Something like a Kalman filter assumes you have a sensor which is inherently noisy, so it fluctuates up and down around a given point, and that given point is the correct value. I see how that would help establish a definite reading if the sensor is noisy, but my problem is drift. How does a Kalman filter help account for gyro drift? If the gyro has drifted off the correct value, then the Kalman filter will settle on an incorrect value. Or am I missing something?
The shortest answer to your question is that the filter will not rely exclusively on the gyroscope for heading information. It will try to fuse information from all available sources to produce the best heading estimate and the correction to gyro drift.
The filter will not do any magic to eliminate gyro drift errors, but it will let you decrease them from "unusable position after two full turns" to unusable position after ten turns or something like that. You only need to implement enough of a filter to last through the 2-minute match with about 2 inches over 14 feet of angular accuracy at the end to still hit the high goal.
To minimize the influence of gyro drift on your solution you need to model the expected range of gyro measurements. To do that you need to know the sources of gyro errors.
Essentially, the gyro is a micro-motor rotating at a very high speed. According to Newton's laws you would expect it to keep its rate of rotation in inertial space. If you assume there is no friction in the system, then once spun up the gyro will keep rotating at a constant speed.
When the robot itself rotates, you can think of it as rotating around the gyro rotor while the latter stays put (rotating with constant velocity) in inertial space. When the gyro sensor subtracts the expected gyro turn angle from the actual gyro angle, it can deduce how much the robot itself has rotated.
In the real world there is friction, so you need to constantly pump some energy into a rotating gyro to compensate for any friction losses. This is tricky. High end gyros have very precise temperature and voltage regulators to ensure predictable energy flow through the gyroscope.
The VEX gyro does quite a good job, but still can't compensate for all external influences like its higher-end cousins do. It is subject to temperature and battery-voltage drift, as well as mechanical vibrations. You cannot do anything about the first two, but you can, at least, mount the gyro on some anti-slip foam at the robot's center of gravity to isolate it from undesired accelerations and vibrations.
When you initialize the VEX gyro you need to let the robot stand still for about 1.3 seconds. During that time the RobotC libraries read the physical gyro and learn about its current drift. The library will then assume the gyro keeps drifting at the same rate when returning gyro readings.
During robot operation the gyro may experience a change in drift rate due to temperature or voltage changes. If your robot shakes when you start the motors, the gyro readings may jump as well, adding to the cumulative error.
So, what can you do?
First of all, mount gyro on anti-slip foam to dampen vibration.
Then you may want to maintain your own correction to the gyro drift: monitor the gyro readings every time you know your robot is standing still and not rotating, and if you see them drift, update your drift-rate correction.
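That drift-rate bookkeeping could be sketched like this, assuming you can detect the stationary periods yourself; all names are hypothetical, not RobotC APIs:

```c
/* Running gyro-drift correction: whenever the robot is known to be
   standing still, any change in the raw gyro reading is pure drift. */
typedef struct { double driftRate; } DriftModel;  /* units per second */

/* Call periodically while the robot is provably stationary.
   rawDelta is the change in the raw gyro reading over dt seconds. */
void updateDrift(DriftModel *m, double rawDelta, double dt)
{
    /* Smooth the observed drift rate so one noisy sample can't dominate. */
    m->driftRate = 0.9 * m->driftRate + 0.1 * (rawDelta / dt);
}

/* Drift-corrected heading change for one cycle of normal driving. */
double correctedDelta(const DriftModel *m, double rawDelta, double dt)
{
    return rawDelta - m->driftRate * dt;
}
```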
Also, if you know that the gyro reading jumps every time you start your motors, then either ramp up motor power gradually to avoid jerks, or model and ignore the sudden gyro change during that phase, or both.
If you go all the way to the last suggestion, you may as well implement the full filter that models and corrects errors on all sensor observations, not just the gyros.
To model the gyro with a filter you need to predict the next high-fidelity gyro reading by estimating its current drift rate and bias, correcting them whenever you can cross-check the heading angle using another sensor. You can cross-check one gyro against another gyro, against position estimates from the quad encoders, or against events of line followers crossing the white lines on the field.
Also, note that position sensor readings (quads and gyros) are cumulative, so their error does not center around the true value but is subject to a random walk, especially for the gyros.
I am kind of stuck in another filter thread because capow08 posed some good questions that challenge my assumptions. But once I figure that out it should move along toward the filter implementation.
These are the math functions that ROBOTC implements.
float sin(const float fRadians)
float cos(const float fRadians)
float asin(const float Sine)
float acos(const float Cosine)
float atan(const float Tangent)
float atan2(const float Y, const float X)
float degreesToRadians(const int nDegrees)
short radiansToDegrees(const float fRadians)
float sinDegrees(const float nDegrees)
float cosDegrees(const float nDegrees)
unsigned short abs(const short input)
unsigned long abs(const long input)
float abs(const float input)
float sqrt(const short input)
float sqrt(const long input)
float sqrt(const float input)
short sgn(const short input)
short sgn(const long input)
short sgn(const float input)
float exp(const float input)
float log(const float input)
float log10(const float input)
float ceil(const float input)
float floor(const float input)
long round(const float input)
float pow(const float base, const float exponent)
OK, so this is where I get confused: how would I fuse, for example, a gyro reading with my encoder reading?
Everything I have read on Kalman filters has mentioned doing this, but none of it has allowed me to figure out how it would be done. The best way I can see is to use the difference in the wheels to update a predicted change in angle, and compare that to the change in angle that actually came out of the gyro. But as we have said, wheel slip will cause a problem. Perhaps I can use the change in the two wheels at any one time, as my current GPS code already zeros the encoders every 20 ms. If one wheel travels farther than the other, that causes a turn, and I should be able to use the law of cosines to calculate the degrees of that turn. The problem then becomes: is that any more accurate than my gyros? Because if the gyro is the most reliable form of angle measurement I have, then averaging or filtering in less valid measurements won't help me, right?
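For the encoder-difference idea: rather than the law of cosines, the standard differential-drive (arc-length) relation gives the heading change directly from the two wheel travel distances. A sketch, with an assumed track width as a placeholder:

```c
/* Heading change from the difference in wheel travel: the standard
   differential-drive arc-length relation, a simpler alternative to
   the law-of-cosines approach. deltaLeft and deltaRight are how far
   each side traveled this cycle; TRACK_WIDTH is the wheel-to-wheel
   spacing in the SAME units (12.0 is just a placeholder). */
#define TRACK_WIDTH 12.0

double headingChangeDeg(double deltaLeft, double deltaRight)
{
    double dThetaRad = (deltaRight - deltaLeft) / TRACK_WIDTH;
    return dThetaRad * 180.0 / 3.14159265358979323846;
}
```

Whether this beats the gyro depends on wheel slip, which is exactly why fusing the two (weighting each by how much you trust it) can still win even when one sensor is usually better.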
OK, I have done some more testing: the gyro drift is not caused by rotation at all. After carefully tuning the sensor scale for my two gyros, I did 5 rotations, and because the gyros report 3600 units per rotation, the result should be 18000.
My results for the gyros were 18004 and 17984, averaging to 17994, for a total error of 6 out of 18000, which is well, well below a 1% error. However, I can accumulate up to 10 degrees of error just driving around for 30 seconds and completing no more than one rotation. So somehow the error is tied to the robot physically moving and changing its position, not just a result of rotation or motor vibrations. Any idea why that might be?
Last year we had the cortex and all the MC29s mounted on a c-channel, and the entire assembly was connected to the chassis through anti-slip pads. There was no metal-to-metal contact, even with the screws. This fixed our problem of the VEXnet connection frequently dropping out, which was plaguing us at the beginning of the season.
When we decided to add a gyro, we mounted it to the cortex assembly on top of another set of anti-slip pads. So, without realizing it, we had the low-pass filter jgraber was mentioning above.
Since the thread mentioned MEMS, I looked up the specs for the VEX gyro, and it turned out they use an oscillating element which depends on the Coriolis effect; it will indeed be very sensitive to vibrations at its oscillation frequency.
Collin, since you have two gyros, could you mount them in the same plane but rotated 90 degrees from each other and run the test? If they use an oscillating element, it might be more sensitive to vibrations along a specific direction. Ideally, you would mount the gyros on top of each other at the center of rotation of your robot, which, hopefully, coincides with the center of gravity as well.
I understand your confusion. It is a difficult subject. The fusion we are talking about works kind of backwards.
You do not get sensor readings and then compute your dx and dy using lots of trig functions. Instead you say: here is my current position and velocity; where will I be at the time of my next measurement, and what do I expect each encoder and gyro reading to be? This is your predict step.
Now that you have predicted all sensors you compute the difference between expected and actual readings. You say: ok, they all are a bit off, which one should I trust more based on its past performance? Then you use some sort of least squares calculation to determine weighted trust coefficients to minimize errors. Initially, you could just hardcode trust level for each of your sensors.
Let's say the trust coefficients are: first gyro 30%, second gyro 10%, and the four quad encoders 15% each. The next step is to correct your predicted position. You ask: if the first gyro reading were true, where would my state (position & velocity) have to be in order for the predicted gyro reading to match the measurement? Once computed, you move your state 0.3 of the way toward that considered state.
For example, if after the last measurement cycle you thought you were heading at 30 deg and the gyro reads 33, then you redo your state prediction based on a 31 deg heading angle. Repeat this process for all your sensors and sum up the state corrections.
If your prediction algorithm was a bit off and all your sensors were perfect and pointing in the same direction for the correction, then you would move almost 100% of the way toward the true location. The "almost" comes from the non-linear nature of the differences (predicting a 31 vs. 33 deg arc movement) and other approximations, unless you decide to code for those too.
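The predict/correct cycle described above might be sketched like this for the heading alone (trust values hardcoded, all names illustrative). Using the numbers from the example, a gyro reading of 33 with 30% trust moves a predicted heading of 30 to 30.9, which the post rounds to 31:

```c
/* One predict/correct cycle for the heading: predict where the heading
   should be, compare each sensor's measurement against that prediction,
   and move a fixed fraction (the hardcoded "trust" coefficient) toward
   each implied correction. */
typedef struct {
    double heading;   /* degrees          */
    double turnRate;  /* degrees / second */
} HeadingState;

double fuseCycle(HeadingState *s, double dt,
                 double gyro1, double trust1,   /* measured heading, deg */
                 double gyro2, double trust2)
{
    /* Predict step: expected heading at measurement time. */
    double predicted = s->heading + s->turnRate * dt;

    /* Correct step: weighted residual from each sensor. */
    double correction = trust1 * (gyro1 - predicted)
                      + trust2 * (gyro2 - predicted);

    s->heading = predicted + correction;
    return s->heading;
}
```

A real filter would also update turnRate and the trust weights from the residuals; this only shows the shape of one cycle.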
Another important note is that sometimes geometry of the measurements is such that some state corrections are simply unobservable from certain measurements. For example, dx correction is unobservable by a quad attached to an omni-wheel which is moving along dy.
The trickiest part of implementing the filter is to do what-if state corrections (they call them partial derivatives) and do high fidelity state prediction. As you can see from the other filter thread it may get complicated fast even if the goal is to make it simple.
Let's see what we can do for your robot without making it too complicated.
First of all, you have to have a high-fidelity measurement task. Zeroing out your dx and dy counters every 20 ms cycle introduces too much error. You need to let the cortex firmware run its sensor counters uninterrupted and get timestamps that are as accurate as possible.
Next, don’t bother with considering motor power like the other thread does. Just predict movement based on current position, heading and velocity.
Then try to predict your sensor readings and compare them with the real measurements. Let's see how well they match up, and then we can decide what the next step needs to be.
It could be that you need to estimate velocity based on multiple measurement cycles, or maybe we need to consider motor power levels. Or, maybe, it will be enough to detect and estimate gyro drifts, or, maybe, detecting and ignoring gyro jumps would be enough to get you 10x accuracy…
My team has the position tracking nearly done using accelerometers because the wheels might slip making encoders inaccurate after the first bump. The key is to figure out how fast you are moving in the x and y directions keeping in consideration the angle of the robot. Remember .5at^2+ v0t+x0=x.
How did you do this? I can't figure out how to program the accelerometer sensors.