My students have tried a few different ways to determine where the robot is during the autonomous period.
The first attempt was to use sonars, but that proved too unreliable; we kept getting stray readings and the like.
The second attempt was to use the integrated motor encoders. That was also unreliable, but we are not certain why: just moving forward a set distance and back didn’t return the robot to the same location. Most likely the wheels were slipping. It could be our robot’s design; we have four omni-wheels, each at 45 degrees from the direction of motion, which could contribute to the slippage. Other geometries might be more reliable.
What have other teams found that worked?
For traveling distance, the IMEs are the most accurate, as they have the finest granularity and a good update rate. But they have drawbacks: static discharge, packets not received properly, etc.
The quadrature encoder is the next best for travel distance measurements.
So those are the sensors, but that does not solve your problem per se, because you are exactly right: wheel slippage, as well as slop between the moving parts and your sensor, keeps the measurements from reading as well as you would like.
What can you do about it?
- Ensure a nice tight drive train from wheel to sensor (no sloppy chains or loose wheel to mess with the shaft and tight tolerances)
- Don’t jam it to 127 right away; ramp the speed up smoothly and then ramp it back down. Slew rate and proportional control are two things to look up for managing this. PID does an even better job, but start with P and slew rate.
- Manage the traveled distance along the direction of travel of the wheel, not the robot. Vector math is fun on a holonomic. Figure out the distance on each axis separately and manage to that. Going at 45 degrees (straight ahead from the robot’s view, but 45 degrees off from the wheel’s view) is what you seem to be measuring. Try measuring each component separately.
I’m with Giraffes. Although my team didn’t figure out how to use them in time for the competition this year, IME’s would definitely be the way to go. Also, I am taking note of Giraffes’ suggestions regarding IME’s for next year.
I have found ultrasonics reasonably accurate, +/- 1 inch. IMEs and quad encoders are good, but as Team80_Giraffes points out, wheel slippage can cause errors. The best solution (IMHO) is to use a combination of sensors: use encoders as the primary means to move, but then adjust using ultrasonics and line sensors. For example, during the Sack Attack season, team 8888 built a programming skills robot that needed to drive parallel to the trough. After moving several feet there was often an error (the robot had drifted slightly left or right), so they used an ultrasonic sensor aimed at the field wall to correct for that. Same with the line sensors: if you are driving forwards and know that in, say, 3 feet you should cross a line, then use that additional information to correct any residual error in the encoder count.
What code were you running to do this? If it was done by hand, then I believe the readings should have been the same. This would lead me to assume it’s not a sensor issue, but rather how you get to the sensor value that is causing you problems.
You should check and see how far you are actually traveling compared to how far the code tells the bot to travel. See if you are drifting/coasting past the spot where you should have traveled. Finding why you are not getting the results you want is the first step.
Jpearman has a great suggestion in using a secondary sensor to make corrections.
Thanks for the advice everyone. As for measuring the distance, it is not going as far as we think it should. So I’m guessing slippage is the main problem. We tried to just calibrate – adding an offset or slope – but the slippage must not be consistent.
As far as code goes, my students programmed it in RobotC. We are getting sensor values that are plausible, but just not accurate. And at this stage there’s no time for more tweaking.