Thank you for that information! So do you think there’s something different your team is doing that’s giving you more accurate results?
Yes, our programmer took measurements using a tape measure and created a function to accurately calculate how far the robot has moved. He also created functions to average out the wheel encoder values in order to get more accurate measurements, almost down to a tenth of an inch. If you have any questions just lemme know.
Implementing an extended Kalman filter, double integrating the acceleration, and controlling for gravity are all not easy to do. 3-wheel odometry, on the other hand, is a problem of simple geometry.
Doing full position tracking using acceleration values from the IMU is way harder (if it’s even possible)
Yeah, I think earlier an Event Partner mentioned that the IMU doesn’t give straight-up positioning data.
So it seems like the odometry is a much better option. My team has an x-drive, but as long as we just put in the tracking wheels, that shouldn’t affect much, right? Also, if you can, is it possible for you to show some code (or even pseudocode) to explain how the code works? Thanks.
Again, I have assumed that the question refers to turning, as the V5 inertial sensor is not at all usable for position tracking.
@Drew2158U 3 wheel odometry is not at all “easy” and requires understanding of trigonometry in addition to geometry.
@5278C If you plan on implementing three-wheel odometry, as explained here, do understand that it will be fairly difficult especially if you are not familiar with the mathematics or motion algorithms. Like @Midas3217m mentioned, it’s much easier and mostly as accurate to use the inertial sensor for turning and internal motor encoders to track distance. This is probably your best bet unless you have the knowledge, time, and energy to implement position tracking on your own.
Positional tracking from an IMU is extremely easy and accurate in the short term; however, it drifts really quickly. Since you are accumulating velocity over time, from which you are then accumulating position, error builds up fast and does not cancel itself out as it would with other sensors. Things like a Kalman filter can be used to mitigate this, but tracking wheels would definitely be more accurate over a longer period of time. As for which one to use during the 15-second autonomous, I have no clue, and my team plans on using both. Because autonomous is only 15 seconds, IMU data can probably be used at least for the first part of that time to accurately get position data, but this data will definitely be unreliable after autonomous is over.
Double integration is actually extremely easy: all that needs to be done is to multiply the average of the current value and the last value by the time between these readings, in the correct units. For example, if the IMU returned an acceleration in X of 5 m/s^2 20 ms ago and returns 7 m/s^2 now, you would add 6 m/s^2 * 0.02 s for a 0.12 m/s change to the velocity variable. The same thing is done again for position.
Adding a Kalman filter, though, is much less easy.
I’m fairly certain you do not need to control for gravity as all forces in the Z direction should be balanced and the robot will not be accelerating up or down.
I’m not the programmer on our team, but in our experience the inertial sensors have worked plenty well for odom stuff. We had tracking-wheel-based odom, but the quad encoders were finicky, so we kind of just gave up on that. Our auton was basically perfect and didn’t fail once at states.
How was this done? The odometry relies on the arc length of the wheel travel to update the vector displacement. All the angle math could be substituted with the IMU heading, however (mind you, it’d need to be converted from degrees to radians).
My guess is that @cykaraptor is referring to using the IMU for turning, and the drivetrain’s integrated encoders for positioning. (As I understand it, double integrating the IMU’s acceleration values is not at all accurate after only a few movements.)
Not only that, but the IMU would have to be placed in the exact tracking center of the robot in order to do any sort of motion algorithms (not necessarily, but I’d imagine you’d get results that are off-set by however far off the IMU is from the tracking center).
The robot must be perfectly level with the ground to do any form of odometry with the IMU (unless you implement tri-axial math which would require checking for more than three different cases in the event the robot tips forward/backward even slightly). You don’t need to do this with tracking wheels because they are tensioned into the ground. You’d also be using a completely different type of vector math than what everyone has seen in the Pilons. Not impossible, just highly implausible given the limitations of the IMU.
I’m not super sure about the specifics because I’m not the programmer, but I sort of know what’s going on from reading the code. My main point is that for us the quad encoders were bad and giving us weird values, and skipping the tracking-wheel process gave us better results. Our IMU is in the tracking center of our robot; it is perfectly level and barely touching the ground. From how it sounds, it is possible we got a lucky sensor, because we haven’t had much (or any) drifting trouble, or he found a workaround that I don’t know about. Odometry might not be the right word, but that’s what our programmer calls it. Regardless, it’s 100% position-tracking code that can do curves: you tell it to go somewhere and it goes there (sounds like odom to me). Just saying what worked for us.
It is not a requirement to have the IMU in the center of the robot. You can place it anywhere; it will then report accelerations and angular rates at that mount point. The angular rates will return identical numbers, but the accelerations at that point will differ from those at the center of the robot because of centrifugal forces, so the correct math will be more complicated. So I agree with @mvas8037 — put it at the rotational center of the robot and everything is going to be much simpler.
This was probably due to wheel slippage
Regardless of whether the IMU is level or not, if the robot physically tips forward or backward from momentum, then it will require some form of tri-axial math. I do believe that your autons were spot on; I’ve seen them online. Team 7701, right? They were pretty smooth. I’m just very surprised you guys used the IMU for position tracking given the limitations.
Unless you bypassed VexOS somehow, it is impossible to get a “lucky sensor” simply because the refresh sample rate is limited by the software and cannot be modified by the user. This is why I’m confused about how you guys were able to integrate the accelerometer data. It is possible your IMU was ideally manufactured, though I’m not sure how significantly a manufacturing flaw contributes to drift. For the most part, I’ve heard positive feedback about the IMU’s drift behavior and have not experienced any significant drift issues myself.
Spline generation and position tracking are not inter-related, but you can use position tracking data to easily generate curved paths (just so as not to confuse anyone reading through this). Did you guys use PROS or VEXcode?
we use pros