Hello again friends,
I was wondering how I could use a GPS-style system for robot tracking, so that the robot pretty much knows where it is in the arena and can line up to the goal automatically.
I have an X-drive with 4 IME encoders, plus 4 more on the flywheel.
Also, would you use trilateration or triangulation to achieve this, and if so, how?
I have asked this many times before, and you would think I would have the answer by now, but that has not been the case. Any help would be great, as I am also researching this myself.
Feel free to contact me at [email protected]
Many thanks
David_5839A
I will contact your team, but could you explain thinking in vectors in more detail, please?
I think Luke323Z means to keep an x-coordinate variable and a y-coordinate variable updated constantly throughout the match. Converting to inches should be pretty easy with a TickToInch function in the program. The hardest part is incorporating the turning, etc., of an X-drive, but there should be a post somewhere in this forum's history about the math behind it. Then just use the magnitude for your launch distance and the angle to constantly face the goal.
You may also need a z variable when the robot is close to the goal, to change the launch angle.
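The tick-to-inch conversion and magnitude/angle math described above can be sketched like this (the constants and function names are illustrative only, not from any team's actual code; the tick count assumes a 393 motor IME in its high-torque gearing):

```cpp
#include <cmath>

// Hypothetical constants -- tune these for your own wheels and encoders.
const double TICKS_PER_REV = 627.2;   // IME ticks per revolution (393, high torque)
const double WHEEL_DIAM_IN = 4.0;     // wheel diameter in inches

// Convert raw encoder ticks to inches traveled.
double tickToInch(double ticks) {
    return ticks / TICKS_PER_REV * M_PI * WHEEL_DIAM_IN;
}

// Given the robot's (x, y) and the goal's (gx, gy), both in inches,
// return the launch distance (vector magnitude) and the heading to face
// (vector angle, in degrees).
void aimAtGoal(double x, double y, double gx, double gy,
               double *dist, double *headingDeg) {
    double dx = gx - x, dy = gy - y;
    *dist = std::sqrt(dx * dx + dy * dy);
    *headingDeg = std::atan2(dy, dx) * 180.0 / M_PI;
}
```

Feed `*dist` into your flywheel speed lookup and turn to `*headingDeg` to face the goal.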
This thread is a decent start for just position (no rotation).
I have sent you an email with a write-up of how vectors work, but here is the link to the Google Drive file anyway, in case others want it.
This write-up is for mecanum drives, but the vectors should work the same way for an X-drive:
https://drive.google.com/file/d/0B1LLlSCW4Hm5elZNc01XQVQ4a1E/view?usp=sharing
Thanks, Collin.
See you at Dulaney on the 26th.
David
Exactly. However, you shouldn’t even need to convert to inches.
The computer is the only thing that has to work with the numbers, so why not make the field coordinates based on ticks rather than inches? That way you aren't wasting time by constantly cycling calculations between inches and ticks. I don't know for certain (jPearman, correct me?), but I'm guessing that conversion could significantly slow down the possible cycling speed of the calculations, and breaking vectors down into the highest resolution possible is how you keep the whole system accurate.
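The all-ticks idea above can be sketched as follows: store the goal position in ticks, do every calculation in ticks, and convert to inches only when a human needs to read the number (the calibration constant and goal coordinates here are made up for illustration):

```cpp
#include <cmath>

const double TICKS_PER_INCH = 49.9;  // assumed calibration value
// Goal position stored directly in ticks, never converted mid-loop.
const double GOAL_X_TICKS = 70.0 * TICKS_PER_INCH;
const double GOAL_Y_TICKS = 70.0 * TICKS_PER_INCH;

// Distance to the goal, computed and returned entirely in ticks.
double launchDistanceTicks(double xTicks, double yTicks) {
    double dx = GOAL_X_TICKS - xTicks, dy = GOAL_Y_TICKS - yTicks;
    return std::sqrt(dx * dx + dy * dy);
}

// Convert only at the boundary, e.g. for a debug display.
double ticksToInches(double ticks) { return ticks / TICKS_PER_INCH; }
```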
I will hopefully release a tutorial on our YouTube channel about this once we get a Cortex on our base and I can mess around with programming.
Oops. My mind was swirling with easy-to-understand programming, robber barons, and vector problems from last year's classes.
A YouTube video explanation of this would be wonderful.
It gets a bit more difficult once you bring rotation into play, but it is still manageable once you account for all the factors (and then drop a few because the robot is a rigid body).
Luckily, you do not really have to worry about scaling or shear of your object, since it is a rigid body. Read the notes below and you will see all the factors that come into play.
Here is some more detail in these senior-level ME class notes I found:
http://people.cs.clemson.edu/~dhouse/courses/401/notes/affines-matrices.pdf
So yes, it’s hard. Sorry about that.
First, figure out the rotation and back it out relative to the center point of the robot (or wherever, really, but the center of the holonomic drive makes a lot of sense).
Then you will get the translation of your robot to its new position. You should keep track not only of the position but also of the heading of the robot, as I believe you may want to be pointing in a particular direction…
You may also want to figure out where the edges of the robot are and check whether they will bump into anything. But that is more for your path planning.
Lastly, the real world of robots comes into play a bit. Slippage in the wheels is the other item to overcome, so accelerate at a reasonable rate and you should avoid error from spinning wheels. If slip does happen, you can reset your coordinate estimate using line sensors, sonar, or bump switches against the wall.
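The "back out the rotation, then apply the translation" steps above can be sketched with a standard 2-D rotation matrix. This is only an illustrative skeleton (the struct and function names are mine, not from the notes linked above):

```cpp
#include <cmath>

struct Pose { double x, y, thetaRad; };  // field-frame position and heading

// dxLocal/dyLocal: small displacement this cycle, measured in the robot's
// own frame (from the holonomic wheel encoders).
// dTheta: change in heading this cycle (from encoder differences or a gyro).
Pose integrate(Pose p, double dxLocal, double dyLocal, double dTheta) {
    // Rotate the local displacement into the field frame (2-D rotation matrix).
    double c = std::cos(p.thetaRad), s = std::sin(p.thetaRad);
    p.x += c * dxLocal - s * dyLocal;
    p.y += s * dxLocal + c * dyLocal;
    p.thetaRad += dTheta;  // track heading too, so you can face the goal
    return p;
}
```

Calling this every loop iteration with fresh encoder deltas accumulates the robot's field position and heading.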
The “Robot GPS” you are referring to is a problem in robotics usually known as localization. As some pointed out you can attempt to determine the current position of your robot by applying various translations from a starting position – this is known as dead reckoning. Unfortunately, as was also mentioned, error due to slippage and inaccuracies will accumulate over time, eventually making the estimates useless.
Of course in the Vex game you might be able to get away with what is mentioned above (resetting position when certain conditions are met), but in the real world sensor data needs to be constantly integrated. Methods that use algorithms such as the Kalman Filter and the Particle Filter merge sensor data and use probability to choose a “best estimate” for the current position.
A great intro to robotics problems like localization can be found here. Implementing these methods is certainly possible, and it would make for a much more interesting and realistic competition than the current examples of "programming skills," which consist of stationary baseball pitchers.
I'm by no means an expert in the area, but I do know of many resources, so please feel free to ask for clarification.
I am trying to create a localization program so my robot can shoot balls from anywhere on the field. I know the formulas for x, y, and rotation, and I can probably figure out the projectile formulas, but the accelerometer has noise and a slow update rate, and the wheel encoders are inaccurate whenever the wheels slip or there is a collision. Any suggestions?