I teach robotics, and I have some of my programming-minded students building a flowchart for learning odometry in stages rather than all at once. Currently my junior/senior students (new to robotics, but not to C++) are on step 5 or 6. Here are the tasks I have had my programmers complete to expedite understanding autonomous navigation. Please give me feedback, or let me know if you think there are better checkpoints than the ones below.
-
Make a translational PD controller for one motor.
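Since the students already know C++, a checkpoint like this can be sketched in a few lines. This is just a minimal illustration, not tied to any particular VEX API; the gain values and struct layout are my own choices:

```cpp
#include <cmath>

// Minimal PD sketch: error = target - measured encoder position.
struct PDController {
    double kP, kD;
    double prevError = 0.0;

    // Returns a motor power for the current error and loop period dt.
    double update(double error, double dt) {
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kP * error + kD * derivative;
    }
};
```

Calling update() every loop with a roughly fixed dt is the usual pattern; the derivative term damps the approach so the motor slows as it closes on the target.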
-
Make a rotational PID with the IMU.
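Extending the PD idea with an integral term gives the full PID for this step. Again a bare sketch under my own naming; real code would add integral anti-windup and clamp the output:

```cpp
#include <cmath>

// PID sketch for heading control: error = target heading - IMU heading.
struct PID {
    double kP, kI, kD;
    double integral = 0.0, prevError = 0.0;

    double update(double error, double dt) {
        integral += error * dt;                      // accumulate steady-state error
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kP * error + kI * integral + kD * derivative;
    }
};
```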
-
Add an element to the rotational PID that compares the target angle, target - 360, and target + 360 to find which results in the lowest error.
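One way that comparison can look (degrees assumed, as in the step; the function name is mine):

```cpp
#include <cmath>

// Among target, target - 360, and target + 360, return the signed error
// (candidate - heading) with the smallest magnitude, so the robot always
// turns the short way around.
double wrappedError(double target, double heading) {
    double candidates[3] = {target, target - 360.0, target + 360.0};
    double best = candidates[0] - heading;
    for (double c : candidates) {
        double e = c - heading;
        if (std::fabs(e) < std::fabs(best)) best = e;
    }
    return best;
}
```

For example, a target of 350 with a heading of 10 should produce -20 (turn 20 degrees one way) rather than +340.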
-
Sum the rotational and translational outputs to create a drive-straight PID with a constant target angle.
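The summing step might look like this for a differential drive. The sign convention (positive rotational output steers clockwise by adding to the left side) is an assumption and will flip depending on motor directions:

```cpp
// Combine the translational and rotational controller outputs into
// left/right motor powers for a differential drive.
struct DrivePowers { double left, right; };

DrivePowers driveStraight(double translationalOut, double rotationalOut) {
    // The rotational correction is added to one side and subtracted
    // from the other, so heading error nudges the robot back on line
    // without changing its average forward speed.
    return { translationalOut + rotationalOut,
             translationalOut - rotationalOut };
}
```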
-
Track position by updating an array [x, y] based on the instantaneous differential drive, the sideways (perpendicular) center odometry wheel, and the IMU. While driving a curved path, use deltaTheta/2 to approximate the new position, since it is a good approximation of a constant radius of curvature. Use the documents to complete this.
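A sketch of that update, assuming deltaFwd comes from the drive encoders, deltaSide from the perpendicular center wheel, and deltaTheta from the IMU, all in consistent units with angles in radians. Sign conventions differ by setup, and the full constant-curvature chord-length correction is left out, as the step says, theta + deltaTheta/2 is already a good approximation:

```cpp
#include <cmath>

struct Pose { double x, y, theta; };  // theta in radians

// One odometry update: project the local displacement into field
// coordinates at the midpoint heading theta + deltaTheta/2, which
// approximates travel along a constant-radius arc.
Pose odomUpdate(Pose p, double deltaFwd, double deltaSide, double deltaTheta) {
    double mid = p.theta + deltaTheta / 2.0;
    p.x += deltaFwd * std::cos(mid) - deltaSide * std::sin(mid);
    p.y += deltaFwd * std::sin(mid) + deltaSide * std::cos(mid);
    p.theta += deltaTheta;
    return p;
}
```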
-
Identify instantaneous target values for translation and rotation by rotating the target [xf, yf] around the robot's coordinates [xi, yi] by an angle equal to the robot's heading. This gives you [xf', yf'], and you can simply subtract yf' - yi to assign your translational target value, then use arctan to assign your rotational target value. This calculation will need to be performed every loop, with the target values updated each time.
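A sketch of that rotation. The conventions here are assumptions: heading in radians, measured clockwise from the +y axis (compass style), with the robot's forward direction mapping to +y in the rotated frame, matching the yf' - yi subtraction in the step:

```cpp
#include <cmath>

struct Target { double translational, rotational; };

// Rotate the world-frame offset to the goal into the robot's frame,
// then read off the forward distance and the bearing error.
Target carrotTargets(double xi, double yi, double heading,
                     double xf, double yf) {
    double dx = xf - xi, dy = yf - yi;
    double xr = dx * std::cos(heading) - dy * std::sin(heading);
    double yr = dx * std::sin(heading) + dy * std::cos(heading);
    // atan2(x, y) is 0 when the goal is dead ahead.
    return { yr, std::atan2(xr, yr) };
}
```

Run every loop, this is exactly the carrot: the two outputs feed straight into the translational and rotational PIDs from the earlier steps.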
You have now entered carrot-on-a-stick mode.
-
Create a series of waypoints, and upon completing each one either update [xf, yf] or perform tasks. This is basic straight-line navigation, and it can even handle the robot being pushed around. Just make sure that you have two waypoints per destination so that you have a chance to set and orient the robot.
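A minimal waypoint-sequencing sketch (the Point type, tolerance value, and class shape are all illustrative). The two-waypoints-per-destination idea above just means pairing each goal with a lead-in point so the robot arrives already oriented:

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

struct Point { double x, y; };

// Advance to the next waypoint once the robot is within tol of the
// current one; returns true when the whole path is complete.
struct WaypointFollower {
    std::vector<Point> path;
    std::size_t index = 0;

    bool update(double rx, double ry, double tol) {
        if (index >= path.size()) return true;
        double dx = path[index].x - rx, dy = path[index].y - ry;
        if (std::hypot(dx, dy) < tol) ++index;
        return index >= path.size();
    }

    // Current [xf, yf] to feed the carrot calculation each loop.
    Point current() const {
        return path[index < path.size() ? index : path.size() - 1];
    }
};
```

Because current() is re-fed to the carrot calculation every loop, getting shoved off course just changes the instantaneous targets, not the plan.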
-
Use a spline, a Bezier curve, or some other interpolation to blend your waypoints or set desired curves.
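A cubic Bezier is probably the easiest of these to hand-roll. Here is the closed-form evaluation between two waypoints p0 and p3, with p1 and p2 as shaping control points you choose (sampling t from 0 to 1 yields intermediate waypoints for the follower above):

```cpp
struct Pt { double x, y; };

// Cubic Bezier point at parameter t in [0, 1], using the Bernstein
// basis: B(t) = u^3*p0 + 3u^2t*p1 + 3ut^2*p2 + t^3*p3, with u = 1 - t.
Pt cubicBezier(Pt p0, Pt p1, Pt p2, Pt p3, double t) {
    double u = 1.0 - t;
    double b0 = u * u * u, b1 = 3 * u * u * t;
    double b2 = 3 * u * t * t, b3 = t * t * t;
    return { b0 * p0.x + b1 * p1.x + b2 * p2.x + b3 * p3.x,
             b0 * p0.y + b1 * p1.y + b2 * p2.y + b3 * p3.y };
}
```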
https://www.mathworks.com/matlabcentral/mlc-downloads/downloads/submissions/47593/versions/20/previews/documentation/modules/mobile/html/TrajectoryPlanDocumentation.html?access_key=
-
Attempt a Programming Skills run using only these splined waypoints, then try to push the robot off course or introduce barriers. Find the weaknesses.
-
Introduce vision and other sensors to optimize the detection of grasped objects.
-
Use vision and distance sensors for the final docking of a movement, and use this to recalibrate your global [x, y] based on an assumed position of the object.
-
Possibly learn how this works on X-drives. There are resources available, and at this point you are probably ready to learn how to fold each of the four wheel positions into the x and y calculations.
-
Use A* or some other search algorithm to take an array representation of the field with obstacles and have the robot pathfind its way from waypoint to waypoint, avoiding obstacles along the way. This will replace your spline interpolation with a better series of waypoints, and it will prepare you for VEX AI.
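A compact grid A* in the same spirit. The 4-connected occupancy grid, unit step costs, and Manhattan heuristic are assumptions; a real field map needs resolution and obstacle-inflation tuning:

```cpp
#include <vector>
#include <queue>
#include <functional>
#include <algorithm>
#include <cstdlib>
#include <utility>

using Cell = std::pair<int, int>;  // (row, col)

// A* over a 4-connected grid (1 = obstacle). Returns the cell path
// from start to goal, or an empty vector if the goal is unreachable.
std::vector<Cell> aStar(const std::vector<std::vector<int>>& grid,
                        Cell start, Cell goal) {
    int rows = (int)grid.size(), cols = (int)grid[0].size();
    auto h  = [&](int r, int c) {                       // Manhattan heuristic
        return std::abs(r - goal.first) + std::abs(c - goal.second);
    };
    std::vector<int> g(rows * cols, 1 << 29), parent(rows * cols, -1);
    // min-heap of (f = g + h, cell id)
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<>> open;
    int startId = start.first * cols + start.second;
    g[startId] = 0;
    open.push({h(start.first, start.second), startId});
    int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, cur] = open.top(); open.pop();
        int r = cur / cols, c = cur % cols;
        if (f > g[cur] + h(r, c)) continue;             // stale heap entry
        if (Cell{r, c} == goal) break;
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
            if (grid[nr][nc]) continue;                 // blocked
            int nid = nr * cols + nc;
            if (g[cur] + 1 < g[nid]) {
                g[nid] = g[cur] + 1;
                parent[nid] = cur;
                open.push({g[nid] + h(nr, nc), nid});
            }
        }
    }
    std::vector<Cell> path;
    int goalId = goal.first * cols + goal.second;
    if (parent[goalId] == -1 && start != goal) return path;
    for (int cur = goalId; cur != -1; cur = parent[cur])
        path.push_back({cur / cols, cur % cols});
    std::reverse(path.begin(), path.end());
    return path;
}
```

The returned cells become the new waypoint series; smoothing or corner-cutting them is a natural follow-on exercise.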
Past this is stuff I have no experience with, so I can only suppose…
-
Use the vision sensor to build a simple point cloud of object colors and locations, then start building a decision-making algorithm that evaluates positions and auto-pathfinds, grabs, and scores or descores as needed.
-
Have your AI learn to play defense.
-
AI Supremacy.
Here are a few resources I have found along the way.