Robot position calculation

The velocity of the robot is calculated by subtracting the last encoder value from the current one. In real life this method can cause errors in the yaw of the launcher.
For example, if the robot is travelling at 100 ticks per second and the velocity is calculated 500 times per second, the difference between samples will be 0 most of the time and 1 roughly every 5 samples.
Every time that difference jumps from 0 to 1, the code thinks the robot went from 0 to 500 ticks per second in 1/500th of a second, causing the launcher to jerk back and forth.

An easy solution is to calculate the velocity less often (over a longer interval), but this makes the calculation less accurate.
A better solution is to smooth out the input, so {0,0,0,1,0,0,0,1} turns into a stream of approximately 1/5.
That way it won't jerk as much at any interval.
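Something like this is what I mean, as a rough sketch only: a simple moving average over the raw tick differences. The window size, loop rate, and names are example values, not anything from your code.

```cpp
// Rough sketch of smoothing the raw encoder differences with a moving
// average.  WINDOW and the loop rate are example numbers only.
#include <cstdio>

const int WINDOW = 10;            // number of raw samples to average over
int  history[WINDOW] = {0};       // ring buffer of raw tick differences
int  head       = 0;
long runningSum = 0;

// Call every loop with the newest raw difference (current ticks - last ticks).
// Returns a smoothed ticks-per-sample value instead of the jumpy 0/1 stream;
// multiply by the loop rate to get ticks per second.
double smoothedTicksPerSample(int rawDiff)
{
    runningSum    -= history[head];   // drop the oldest sample
    runningSum    += rawDiff;         // add the newest one
    history[head]  = rawDiff;
    head = (head + 1) % WINDOW;
    return (double)runningSum / WINDOW;
}

int main()
{
    // Feed in the jumpy stream (a 1 every 5th sample): the output settles
    // near 0.2 instead of bouncing between 0 and 1.
    for (int i = 0; i < 50; i++) {
        int raw = (i % 5 == 4) ? 1 : 0;
        printf("%f\n", smoothedTicksPerSample(raw));
    }
    return 0;
}
```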

oh I see thanks!!!

I've noticed the algorithm for the yaw is flawed.

Here is a simulation of shooting balls at different speeds.
The graph shows how far away the ball is from the target.

Fantastic! Now all you need is centimeter-perfect odometry and a lazy susan! Easy peasy, right?

Awesome simulations though. Really like the “to the point” graphics and data visualization.

I think the problem is that you are calculating your launcher yaw with a time of flight computed as if the launcher were aimed straight at the goal. In reality, though, as your robot moves side to side relative to the goal, this assumption breaks down.

The launcher yaw is a function of the flight time, which is in turn a function of the launcher yaw. As such, you will probably need to implement an iterative solver like the Newton-Raphson method that will give you a yaw within an error range you specify.
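To make the circular dependence concrete, you could write it out something like this (p_goal is the goal position relative to the robot, v_robot is the robot's velocity, v_launch is the launcher speed; the trajectory is ignored for now):

```
t   = | p_goal - v_robot * t | / v_launch        (flight time depends on where you aim)
yaw = atan2( p_goal.y - v_robot.y * t,
             p_goal.x - v_robot.x * t )          (where you aim depends on flight time)
```

t appears on both sides of its own equation, which is why a single pass through the calculation gives the wrong answer.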

Here is an example to illustrate the problem. In the figure below the robot is represented by the box and the goal is represented by the triangle (sorry for the poor hand-drawn quality, I was in a rush).
https://vexforum.com/attachment.php?attachmentid=9317&stc=1&d=1431295756
For simplicity, we will assume that the robot is traveling at 1 meter per second and that the launcher shoots at 1 meter per second. Now the obvious solution to this problem would be to fire the ball straight up (90 degrees relative to the x-axis), and it will make it into the goal after 1 second. However, following the code gives you a different result, as I will show next.

Following the code, you first determine how long it will take the ball to reach the goal based on its current position. From Pythagoras the distance to the goal is 1.41 meters, and since the ball travels at 1 meter per second we have a flight time of 1.41 seconds. You then figure out where the goal will be relative to the robot after 1.41 seconds. I have drawn this below.
https://vexforum.com/attachment.php?attachmentid=9318&stc=1&d=1431295760
You can see that after 1.41 seconds the robot will be horizontally past the goal by 0.41 meters. Therefore you aim about 22.5 degrees backwards, past straight up, to make the shot, and you end up missing the goal by 0.41 meters.

Do you see the issue? Launcher speed, robot speed, and robot heading relative to the goal all have a part in the amount of error you are seeing.

If you use an iterative solver, you can specify the maximum amount of error you are willing to have at the cost of computation time. The tighter the error, the more iterations you will need.
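For illustration only, here is a rough sketch of what that could look like on the simplified no-gravity model from the figures, where the ball's ground velocity is the robot velocity plus the launch velocity. All of the names, the tolerance, and the iteration cap are made up; it is not your code. It runs Newton-Raphson on the flight-time equation written out above and then reads the yaw off the result:

```cpp
// Sketch of an iterative yaw solve for the simplified no-gravity model in the
// figures: the ball's ground velocity is the robot velocity plus the launch
// velocity, so the flight time t satisfies
//     launchSpeed * t = | goal - robotVel * t |.
// Newton-Raphson on t, then the yaw is the direction of the remaining
// relative displacement.  Names, tolerance and iteration cap are made up.
#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };

double solveYaw(Vec2 goal, Vec2 robotVel, double launchSpeed,
                double tol, int maxIter)
{
    double t = std::hypot(goal.x, goal.y) / launchSpeed;  // naive first guess
    for (int i = 0; i < maxIter; i++) {
        double rx   = goal.x - robotVel.x * t;   // goal relative to the robot
        double ry   = goal.y - robotVel.y * t;   // after t seconds
        double dist = std::hypot(rx, ry);
        double f    = launchSpeed * t - dist;    // residual to drive to zero
        if (std::fabs(f) < tol) break;
        // d/dt |goal - v*t|  =  -(v . (goal - v*t)) / |goal - v*t|
        double ddist = -(robotVel.x * rx + robotVel.y * ry) / dist;
        t -= f / (launchSpeed - ddist);          // Newton step
    }
    double rx = goal.x - robotVel.x * t;
    double ry = goal.y - robotVel.y * t;
    return std::atan2(ry, rx);                   // launcher yaw in radians
}

int main()
{
    // The hand-drawn example: goal 1 m over and 1 m up, robot moving at
    // 1 m/s along x, launcher speed 1 m/s.  Converges to 90 degrees
    // (straight up) instead of the naive 112.5 degrees.
    Vec2 goal = {1.0, 1.0}, robotVel = {1.0, 0.0};
    double yaw = solveYaw(goal, robotVel, 1.0, 1e-4, 10);
    printf("yaw = %.2f degrees\n", yaw * 180.0 / 3.14159265358979);
    return 0;
}
```

A fixed iteration count in place of the tolerance check would make the worst-case run time predictable, which matters on the Cortex.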

I understand, but right now I'm going to create some real trajectory models before trying to fix it.

What software was this made in? Sorry if you already answered it and I missed it

Just wondering, how do you plan to compensate for potential error in your sensor readings? For instance, what would happen to the accuracy of the encoder values if your opponent rammed into your robot?

It is coded using the Love2D engine.

The error caused by being rammed would depend on how much the wheels slip; an accelerometer and a gyro would help increase accuracy further.

I find this to be really cool because I actually have the exact same idea, but instead of simulating it, I built it. Best part is it works too!! But it was a pain to program. What do you use to detect your X and Y values?

I'll release some C++ code once I test it to make sure it works on my robot (Complexist still has it).


Large accuracy improvement. I have also been working on real 3D trajectories, but those aren't ready yet.

If you don’t mind telling, what did you change? That looks a thousand times better.

Well, the equation requires you to know the XY distance between the robot and the net in order to know where to aim to compensate for the robot's velocity.
That distance is calculated assuming the robot isn't moving, which is no longer true once I apply the equation.
The "correct" equation is recursive, which I obviously can't solve directly,
so I just run the equation on itself a few times, which gives me enough accuracy.
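Roughly like this, just as a sketch of the idea (placeholder names, not the actual robot code, and still ignoring the real trajectory):

```cpp
// Sketch of "running the equation on itself": start from the flight time you
// would get if the robot were standing still, re-estimate where the goal will
// be relative to the robot after that long, and recompute.
#include <cmath>

struct Vec2 { double x, y; };

double yawByIteration(Vec2 goal, Vec2 robotVel, double launchSpeed, int passes)
{
    double t  = std::hypot(goal.x, goal.y) / launchSpeed; // stationary-robot guess
    double rx = goal.x, ry = goal.y;
    for (int i = 0; i < passes; i++) {
        rx = goal.x - robotVel.x * t;            // where the goal will be,
        ry = goal.y - robotVel.y * t;            // relative to the robot
        t  = std::hypot(rx, ry) / launchSpeed;   // feed the result back in
    }
    return std::atan2(ry, rx);                   // aim at that future position
}
```

On the hand-drawn example earlier in the thread, two passes already gets within about 5 degrees of the exact answer.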

Did you have a set error window and run the program until it met this constraint, or did you have it run the same number of times regardless? You could save some computation time if you did the latter which may make a difference on the Cortex when you are trying to make these calculations at 20-50 Hz.

Once you introduce the trajectory equation you are going to have a fun time :slight_smile: It looks like right now you are assuming the balls are shot like a gun at a fixed velocity and they go in the goal when they get there. With a trajectory equation you now need to make the balls reach the goal at a specific height in their trajectory. This boils down to you having to solve for two variables simultaneously (yaw and a launcher parameter, probably velocity). I don't think you will run into any issues with multiple solutions to the problem, but I can't be sure without doing some math.
Because there are two variables in the solution, it probably won't be as simple as running the program a few times until the solution converges. This is because small changes in one of the variables may not affect the other in a linear manner. To account for this, you could use the iterative multivariate Newton-Raphson method. To do this, you would need a function model of your trajectory. While you could derive this theoretically, you would probably be best off collecting experimental data and then fitting a finite polynomial (for ease in calculating the partial derivatives in the Jacobian matrix) that relates the launcher parameter (velocity?) to the shot distance at goal height. This way you would ensure nicely convergent behavior in the fewest number of iterations possible for a complex problem like this.
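To give a flavour of what that might look like, here is a very rough sketch that solves for yaw and launch speed together. In place of a polynomial fitted to real shot data it uses an idealised drag-free projectile with a fixed launch elevation, the Jacobian is estimated with finite differences, and every name and number in it is a placeholder:

```cpp
// Very rough sketch of the two-variable Newton-Raphson idea: solve for yaw
// and launch speed together so the ball crosses goal height on top of the
// goal.  A drag-free projectile with a fixed launch elevation stands in for
// a model fitted to real shot data.  All names and numbers are placeholders.
#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };

const double PI   = 3.14159265358979;
const double G    = 9.81;               // gravity, m/s^2
const double ELEV = 45.0 * PI / 180.0;  // assumed fixed launch elevation

// Horizontal miss (x, y) when the ball comes back down through goal height.
// goal is relative to the launcher, goalHeight is relative to the launch
// point, robotVel is the robot's velocity over the ground.
Vec2 missAtGoalHeight(double yaw, double speed, Vec2 goal, double goalHeight,
                      Vec2 robotVel)
{
    double vx = robotVel.x + speed * std::cos(ELEV) * std::cos(yaw);
    double vy = robotVel.y + speed * std::cos(ELEV) * std::sin(yaw);
    double vz = speed * std::sin(ELEV);
    double disc = vz * vz - 2.0 * G * goalHeight;
    if (disc < 0.0) disc = 0.0;              // ball barely reaches goal height
    double t = (vz + std::sqrt(disc)) / G;   // descending crossing time
    return { vx * t - goal.x, vy * t - goal.y };
}

// Newton-Raphson until the miss is within tol metres or maxIter is hit.
void solveYawAndSpeed(Vec2 goal, double goalHeight, Vec2 robotVel,
                      double &yaw, double &speed, double tol, int maxIter)
{
    const double h = 1e-4;                   // finite-difference step
    for (int i = 0; i < maxIter; i++) {
        Vec2 r = missAtGoalHeight(yaw, speed, goal, goalHeight, robotVel);
        if (std::hypot(r.x, r.y) < tol) break;
        Vec2 rY = missAtGoalHeight(yaw + h, speed, goal, goalHeight, robotVel);
        Vec2 rS = missAtGoalHeight(yaw, speed + h, goal, goalHeight, robotVel);
        double a = (rY.x - r.x) / h, b = (rS.x - r.x) / h;   // Jacobian
        double c = (rY.y - r.y) / h, d = (rS.y - r.y) / h;
        double det = a * d - b * c;
        if (std::fabs(det) < 1e-9) break;    // Jacobian is singular, give up
        yaw   -= ( d * r.x - b * r.y) / det; // Cramer's rule for J * step = r
        speed -= (-c * r.x + a * r.y) / det;
    }
}

int main()
{
    // Goal 3 m away along x and 1 m up, robot drifting at 0.5 m/s toward it.
    Vec2 goal = {3.0, 0.0}, robotVel = {0.5, 0.0};
    double yaw   = std::atan2(goal.y, goal.x);   // start aiming straight at it
    double speed = 8.0;                          // made-up starting guess, m/s
    solveYawAndSpeed(goal, 1.0, robotVel, yaw, speed, 0.01, 20);
    std::printf("yaw = %.2f deg, speed = %.2f m/s\n", yaw * 180.0 / PI, speed);
    return 0;
}
```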

I just iterate twice and it gives me almost perfect accuracy. The small bumps in the graph are actually errors in the way the minimum distance between the goal and the ball is calculated.

I know CPU time is going to be critical because I have other things running in the background, like the smart motor code.


Here it is \o/

WHAAAT, that's awesome! Are you doing it just with encoders? Man, that's so cool. How are you getting the robot to feed info to some outside program?

This one is using a gyro. I tested and found out that encoders are better for finding your angle if you don't calibrate the gyro (the scaling factor for the gyro used in the video was off by about 10%).
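For reference, this is roughly what the two heading sources look like; the wheel size, track width, and scale numbers below are placeholders rather than my actual robot's values:

```cpp
// Sketch of the two heading sources.  All constants are placeholder numbers.
const double TICKS_PER_REV = 360.0;   // encoder counts per wheel revolution
const double WHEEL_CIRC    = 0.32;    // wheel circumference, m
const double TRACK_WIDTH   = 0.30;    // distance between left and right wheels, m

// Heading from the drive encoders: difference in distance travelled by the
// two sides divided by the track width.  No scale factor to calibrate; the
// error mostly comes from wheel slip.
double headingFromEncoders(long leftTicks, long rightTicks)
{
    double left  = leftTicks  / TICKS_PER_REV * WHEEL_CIRC;
    double right = rightTicks / TICKS_PER_REV * WHEEL_CIRC;
    return (right - left) / TRACK_WIDTH;          // radians
}

// Heading from the gyro: integrate the rate reading.  GYRO_SCALE is the
// calibration constant; if it is off by ~10%, every turn adds roughly 10%
// of its size to the heading error.
const double GYRO_SCALE = 1.0;        // degrees per raw unit (needs calibrating)
double gyroHeadingDeg   = 0.0;
void integrateGyro(double rawRate, double dtSec)
{
    gyroHeadingDeg += rawRate * GYRO_SCALE * dtSec;
}
```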

To get this to work I use the old serial programming cable and Linux. The debug stream is dumped to the tty opened by the serial adapter, and I read that with my external program.
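The reading side is roughly like this (just a sketch; the device path and baud rate are whatever your serial adapter actually shows up as, not necessarily these):

```cpp
// Sketch of reading the debug stream from a USB-to-serial adapter on Linux.
// /dev/ttyUSB0 and 115200 baud are placeholders.
#include <cstdio>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main()
{
    const char *device = "/dev/ttyUSB0";
    int fd = open(device, O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 // raw bytes, no line editing or echo
    cfsetispeed(&tio, B115200);      // placeholder baud rate
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    // Read the debug stream and hand it to the visualiser (printed here).
    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
        fflush(stdout);
    }
    close(fd);
    return 0;
}
```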

Would you be willing to post your raw data if you have it (the encoder and gyro values along with timestamps)? I would be extremely interested in trying out some different filtering/tracking techniques and seeing how they all compare but I don’t have any data to work with.