I am trying to use atan() in RobotC to find the angle between two points, in this case my current location and the origin, and it never works; I never get the correct angle out of the function. It should give me 180 degrees off of the angle I want, because arctan gives the angle from the origin to a point and I want the angle from the point back to the origin. For example, atan for the point (50, 0) should return 0, since this would be atan(0/50), but the angle I am looking for is 180. For the example of (50, 0) the equation works, but nowhere else: at (50, 50) it still returns something close to 0; I think the real value it returns is about .2. I know that atan also doesn't work in all quadrants, so I messed around with atan2(y, x), and again, it works for (50, 0) but not for anything else. I haven't been able to find any pattern in the results I am getting or any clues to help me out.

Also, I know the atan functions return radians, so I have been converting them to degrees.

if anyone can help me with this, it would be great

I did not know RobotC has an atan function. You could create your own arctan by using the first few terms of the infinite series for arctangent.

The infinite series for arctangent approximates the arctangent of a number whose absolute value is less than or equal to 1. It goes: arctan x = x - x^3 / 3 + x^5 / 5 - x^7 / 7 + …

While the arctangent function is limited in which quadrants it is directly applicable to, the function atan2 takes into account the signs (+ or -) of the x and y components and will give you a result covering all four quadrants, from -pi to pi.
Now I haven’t used atan2 in RobotC specifically (nor did I realize it existed, that’s nice). Are you sure you are converting from radians to degrees properly? I think RobotC has a built in function to do it.
Could you post the direct output of the atan2 function before you convert it? The magnitude of your values shouldn't be a problem.

You’ve forgotten to close the parentheses around your radiansToDegrees call. Unfortunately, I can’t find documentation for atan2 anywhere, so I can’t say for sure what you need to get it to work. But apart from the parentheses, that code should work (assuming you have already declared the dx, dy, and angle variables with appropriate types before this part).

I’ve figured it out
If my current x position is positive, dx will be negative, and the same is true for my y position and dy. But to get the appropriate angles for a positive x and y position, atan2 expects to receive positive values, so all I have to do is feed the atan2 function -dy and -dx, then add 180 to what comes out of the radiansToDegrees function, and I'm good.

May I ask how you determine the position of your robot? Is it through sensors like encoders and a gyro, or is there something else in the code that needs to be considered?

My team is also looking into position-tracking, but I am not sure how.

My team has position tracking nearly done using accelerometers, because the wheels might slip, making encoders inaccurate after the first bump. The key is to figure out how fast you are moving in the x and y directions, keeping in consideration the angle of the robot. Remember x = x0 + v0*t + 0.5*a*t^2.

So far we are using an encoder on each wheel and two gyros, and we average the values of the gyros. I am proud to say the tracking code is fully functional. We also have code in place (this is what the atan function was for) that will move the robot autonomously to any position in the XY plane, assuming it has a clear straight path to it. It can also autonomously point itself at the goal.

There is definitely a lot of drift in the system. The real problem is not wheel slip but gyro drift, and the accelerometers do not solve that problem. Even averaging two gyros and individually tuning their sensor scales, they just don't seem to be accurate enough; they drift far too much. Until a solution is found for gyro drift, I don't know that we will see a really accurate way to measure robot position. The thing we are now messing around with is using the difference between the encoders to solve for angle.

This sounds really impressive, minus the problems with drift. About how long would you say it can accurately aim at the goal before too much drift makes it impossible to do so?

OK, so I spent the day looking into filters, and I have a question. Something like a Kalman filter assumes that you have a sensor which is inherently noisy, so it's fluctuating up and down around a given point, and that given point is the correct value. I see how that would help establish a definite reading if the sensor is noisy, but my problem is drift. How does a Kalman filter help to account for drift of the gyro? If the gyro has drifted off of the correct value, then the Kalman filter will determine an incorrect value. Or am I missing something?

The shortest answer to your question is that the filter will not rely exclusively on the gyroscope for heading information. It will try to fuse information from all available sources to produce the best heading estimate and the correction to gyro drift.

The filter will not do any magic eliminating gyro drift errors, but it will let you decrease your errors from "unusable position after two full turns" to unusable position after 10 turns, or something like that. You only need to implement enough filter to last through the 2 min match with about 2 inch / 14 feet angular accuracy at the end to still hit the high goal.

To minimize influence of the gyro drift on your solution you need to model the expected range of gyro measurements. To do that you need to know the sources of gyro errors.

Essentially, the gyro is a micro-motor rotating at a very high speed. According to Newton's laws you would expect it to keep its rate of rotation in inertial space. If you assume that there is no friction in the system, then once spun up the gyro will keep rotating at a constant speed.

When the robot itself rotates, you can think of it as rotating around the gyro rotor while the latter stays put (rotating with constant velocity) in inertial space. When the gyro sensor subtracts the expected gyro turn angle from the actual gyro angle, it can deduce how much the robot itself has rotated.

In the real world there is friction, so you need to constantly pump some energy into a rotating gyro to compensate for any friction losses. This is tricky. High end gyros have very precise temperature and voltage regulators to ensure predictable energy flow through the gyroscope.

The VEX gyro does quite a good job, but still can't compensate for all external influences like its higher-end cousins do. It is subject to temperature and battery voltage drift, as well as mechanical vibrations. You cannot do anything about the first two, but you can, at least, mount the gyro on some anti-slip foam at the robot's center of gravity to isolate it from undesired accelerations and vibrations.

When you initialize the VEX gyro you need to let the robot sit still for about 1.3 sec. During that time the RobotC libraries read the physical gyro and learn about its current drift. Then the library will assume that the gyro keeps drifting at the same rate when returning gyro readings.
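Conceptually, that startup calibration amounts to something like the following sketch (these are not the actual RobotC library internals, just the idea): average the raw rate while the robot is known to be stationary, then subtract that bias from every later reading.

```c
/* Sketch of a startup gyro bias estimate (hypothetical names, not the
   real RobotC internals): average the raw gyro rate while the robot
   sits still, then subtract that bias from subsequent readings. */
double estimateGyroBias(const double *samples, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return sum / n;            /* average rate while stationary */
}

double correctedRate(double rawRate, double bias)
{
    return rawRate - bias;     /* drift-compensated rate */
}
```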

During robot operation, the gyro may experience a change in drift rate due to temperature or voltage changes. If your robot shakes when you start the motors, the gyro readings may jump as well, adding to the cumulative error.

So, what can you do?

First of all, mount the gyro on anti-slip foam to dampen vibration.

Then you may want to maintain your own correction to the gyro drift: monitor gyro readings every time you know your robot is standing still and not rotating. If you see the gyro readings drift, update your drift-rate correction.

Also, if you know that every time you start your motors the gyro reading jumps, then either ramp up motor power gradually to avoid jerks, or model and ignore the sudden gyro change during that phase, or both.
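The gradual ramp-up can be as simple as a slew-rate limit on the commanded power, something like this sketch (the function name and step size are made up):

```c
/* Gradual motor ramp (a sketch): limit how much the commanded power
   may change per control-loop tick, so motor starts don't jerk the
   robot and spike the gyro reading. */
int rampMotorPower(int current, int target, int maxStepPerTick)
{
    int delta = target - current;
    if (delta >  maxStepPerTick) delta =  maxStepPerTick;
    if (delta < -maxStepPerTick) delta = -maxStepPerTick;
    return current + delta;
}
```

Call it once per loop iteration with the driver's requested power as the target.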

If you go all the way to the last suggestion, you may as well just implement the full filter that models and corrects errors on all sensor observations, not just the gyros.

To model the gyro with a filter you need to predict the next high-fidelity gyro reading by estimating its current drift rate and bias, and correct them when you have a cross-checked heading angle from another sensor. You can cross-check one gyro against another gyro, against position estimates from the quad encoders, or against events of line followers crossing white lines on the field.
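One very simple way to do that fusion, short of a full Kalman filter, is a complementary filter: trust the (drifting) gyro for fast changes, and let a noisy-but-non-drifting heading source slowly pull the estimate back. A sketch, with the blend factor alpha as a tuning knob:

```c
/* Complementary filter sketch: fuse an integrated gyro heading with
   an independent heading estimate (e.g. derived from quad encoders).
   alpha near 1 trusts the gyro short-term; the (1 - alpha) share lets
   the other source correct long-term drift. */
double fuseHeading(double prevEst, double gyroDelta,
                   double encoderHeading, double alpha)
{
    double gyroEst = prevEst + gyroDelta;   /* integrate gyro change */
    return alpha * gyroEst + (1.0 - alpha) * encoderHeading;
}
```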

Also, note that position sensor readings (quads and gyros) are cumulative; their error does not center around the true value but is subject to a random walk, especially for gyros.

I am kind of stuck in another filter thread because capow08 posed some good questions that challenge my assumptions. But once I figure that out it should move along toward the filter implementation.

OK, so this is where I get confused: how would I fuse, for example, a gyro reading with my encoder reading?
Everything I have read on Kalman filters has mentioned doing this, but none of it has let me figure out how it would be done. The best way I can see is to use the difference between the wheels to update a predicted change in angle and compare that to the change in angle that actually came out of the gyro. But as we have said, wheel slip will cause a problem. Perhaps I can use the change in the two wheels at any one time, since my current GPS code already zeros the encoders every 20 ms. If one wheel travels farther than the other, that causes a turn, and I should be able to use the law of cosines to calculate the degrees of that turn. The problem then becomes: is that any more accurate than my gyros are? Because if the gyro is the most reliable form of angle measurement I have, then averaging or filtering in less valid measurements won't help me, right?
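For what it's worth, the usual differential-drive formula is even simpler than the law of cosines: if the two wheels travel arc lengths dL and dR over the 20 ms window and are trackWidth apart, the heading changes by (dR - dL) / trackWidth radians. A sketch (the 12-inch track width in the test is an assumed value):

```c
/* Heading change from the difference between the two drive encoders
   over one sample window. dLeft/dRight are wheel travel distances and
   trackWidth is the wheel-to-wheel spacing; any consistent units work.
   Result is in radians. */
double headingDeltaRad(double dLeft, double dRight, double trackWidth)
{
    return (dRight - dLeft) / trackWidth;
}
```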

OK, I have done some more testing: the gyro drift is not caused by rotation at all. After carefully tuning the sensor scale for my two gyros, I did 5 rotations, and because the gyros read 3600 units per rotation, the result should be 18000.
My results for the gyros were 18004 and 17984, averaging to 17994, for a total error of 6 out of 18000, which is well, well below a 1% error. However, I can accumulate up to 10 degrees of error just driving around for 30 seconds and completing no more than one rotation. So somehow the error is tied to the robot physically moving and changing its position, not just a result of rotation or motor vibrations. Any idea why that might be?