I’ve been reading up on position tracking algorithms for autonomous and programming skills, and the consensus seems to be that dedicated tracking wheels are far more accurate than integrated encoders due to wheel slippage. Many teams seem to be using 3 tracking wheels to account for robots strafing slightly while moving. However, with tracking wheels requiring an encoder each (taking up 2 ADI ports each!), it seems like a robot with three tracking wheels would be severely limited in terms of available ports.
Some teams also use ultrasonic sensors to “calibrate” their position tracking against a known object (such as the corner of the field). I could see this being a very useful strategy in Tower Takeover. However, it would be impossible to hook up 3 encoders and 2 ultrasonic sensors simultaneously due to the low number of ports, to say nothing of connecting additional potentiometers, limit switches, or other devices for other parts of the robot!
I figure I’ll need at least 1 potentiometer or limit switch for a lift, and probably one more for some other function (perhaps detecting whether a cube is present?).
Which would you say is more important? A third tracking wheel or an ultrasonic sensor? Or should I just wait for the ADI expander to come out? (Although we all know VEX’s track record with these things…)
I would use an ultrasonic sensor only to anticipate a collision with a wall or a game object and slow down preemptively.
Analog sensors like the ultrasonic rangefinder, line follower, and gyro are subject to environmental noise (1) (2) (3).
The most important improvement would be to use some sort of sensor fusion algorithm like a Kalman filter. The V5 has plenty of memory and processing power for such algorithms.
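As a concrete illustration, here is a minimal one-dimensional Kalman filter that fuses a dead-reckoned distance (integrated from odometry) with a noisy ultrasonic reading. The struct name and noise values are hypothetical placeholders, not part of any VEX or OkapiLib API:

```cpp
// Minimal 1-D Kalman filter sketch: fuse a dead-reckoned distance to a wall
// (integrated from tracking-wheel odometry) with noisy ultrasonic readings.
// q and r are guessed noise variances; tune them for the real sensors.
struct Kalman1D {
    double x;  // state estimate: distance to the wall, in mm
    double p;  // variance of the estimate
    double q;  // process noise added per predict step
    double r;  // ultrasonic measurement noise variance

    // Predict: apply the odometry displacement; uncertainty grows.
    void predict(double odomDeltaMM) {
        x += odomDeltaMM;
        p += q;
    }

    // Update: blend in an ultrasonic measurement; uncertainty shrinks.
    void update(double ultrasonicMM) {
        double k = p / (p + r);       // Kalman gain in [0, 1]
        x += k * (ultrasonicMM - x);  // pull the estimate toward the measurement
        p *= (1.0 - k);
    }
};
```

The same structure generalizes to the full pose (x, y, heading) with matrices in place of scalars.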
I would use 2 tracking wheels mounted at a 45° angle and augment them with a line follower for position re-calibration whenever the robot drives over a white line.
This way, with just two tracking wheels, forward/backward motion, left/right motion, and rotation around the robot’s center are all observable (the only unobservable movement would be rotation around the point where the lines continuing along the tracking wheels intersect).
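That unobservable-rotation claim can be checked numerically. The sketch below uses a hypothetical geometry (wheels at (±d, 0), rolling along the 45° directions (1,1)/√2 and (1,−1)/√2, whose rolling lines intersect at (0, −d)); a rotation about that intersection point produces exactly zero reading on both wheels:

```cpp
#include <cmath>

// Reading rate of a tracking wheel at position (rx, ry) with rolling
// direction (ux, uy), for a rigid-body twist (vx, vy, omega):
//   m = (v + omega x r) . u,  where omega x r = (-omega*ry, omega*rx)
double wheelReading(double rx, double ry, double ux, double uy,
                    double vx, double vy, double omega) {
    return (vx - omega * ry) * ux + (vy + omega * rx) * uy;
}
```

Rotation about any other center shows up on at least one wheel, which is what keeps the rest of the pose observable.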
The only problem I foresee with the “45 degree tracking wheel” idea is that it would be impossible to distinguish between left-right movement and rotational movement. (If both wheels are spinning clockwise, is the robot moving right, or rotating counterclockwise?)
I hadn’t thought of using line-following sensors to calibrate the position - that would work great, considering the robot would be constantly crossing a white line when approaching the goal zones. I think OkapiLib supports many filtering algorithms, so that should be pretty easy. In the worst-case scenario, if the lighting is bad, the robot could just position itself based on dead reckoning.
However, connecting multiple sensors to a single port wouldn’t bring much benefit here; the V5 brain requires that encoders and ultrasonics be plugged into two consecutive ports. The line sensor is also an analog sensor, though, so a limit switch could potentially be wired in parallel with it (such that if the value is 0, the limit switch is pressed, and otherwise the readout from the analog sensor is passed to the brain).
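If someone did try that wiring, the software side might look like the sketch below. The wiring itself, the reads-as-0-when-pressed behavior, and the threshold are all assumptions to verify (electrically and against the rules) before relying on this:

```cpp
// Sketch of decoding a hypothetical shared ADI port: a limit switch wired
// in parallel with an analog line sensor, so that pressing the switch pulls
// the signal to 0. A reading near 0 is interpreted as "pressed"; anything
// else is passed through as the line sensor's reflectivity value.
struct SharedPortReading {
    bool pressed;    // limit switch state
    int  lineValue;  // raw analog value; meaningful only when !pressed
};

SharedPortReading decodeSharedPort(int raw) {
    // A real line sensor rarely reads near 0 even over a white line,
    // so a small cutoff (guessed here) separates the two cases.
    const int kPressedThreshold = 50;
    if (raw < kPressedThreshold) {
        return {true, 0};
    }
    return {false, raw};
}
```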
It depends on what you’re using it for: a gyro might be a viable solution if you are trying to measure rotation, and the IMEs aren’t awful if you have a robot with a low center of gravity and a centered mass (less wheel slippage).
Yes, if only the tracking wheel encoders were used to calculate the position of the robot, that would be true.
However, if a sensor fusion algorithm is used that knows about multiple inputs, such as the control commands sent to the motors, readings from the powered wheels’ built-in encoders, any ultrasonic and line-following sensors, etc., then it will be able to tell the difference.
Just using the powered wheel encoders, you can roughly tell where you are; the errors come from slop in the drivetrain and the powered wheels potentially slipping. With tracking wheels added, the filter will be able to detect and mostly resolve those errors.
If the rest of your robot needs more sensors, you can attach only two tracking wheels and invest in studying the math behind the Kalman filter.
Last year we used 2 tracking wheels, with traction wheels on the back of the robot in line with them.
However, even though we had tractions, the tracking wheels would still slide slightly when turning.
Also, it made our turning really weird, especially for autonomous.
The result of this was that I could never get the odometry to work perfectly; it would always drift a bit.
This year I will definitely do 3 tracking wheels.
How do you guys get an accurate reading from the third tracking wheel? I use 2 tracking wheels for position tracking because I can’t get an accurate reading from the third (maybe because it’s an omni wheel).
They’re also useful for correcting your position. If you drive the robot flat against one wall, and measure the distance to an adjacent wall with an ultrasonic sensor, you now know your current position and orientation. This can be used to update/correct your encoder-based position tracking system.
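A sketch of that correction step follows. The sensor placement, the offsets, and the assumption that the robot is squared up flat against the back wall are all hypothetical details, not a prescribed setup:

```cpp
// Pose in field coordinates: x, y in mm, theta in radians.
struct Pose { double x, y, theta; };

// Recompute the absolute pose after squaring up: the robot's back is flat
// against the back wall (so the heading is exactly pi/2, straight
// down-field) and a left-facing ultrasonic measures the distance to the
// left wall. Both offsets depend on where the sensor is mounted.
Pose recalibrate(double ultrasonicMM,
                 double sensorToCenterMM,  // ultrasonic to robot center
                 double wallToCenterMM) {  // back wall to robot center
    Pose p;
    p.x = ultrasonicMM + sensorToCenterMM;  // distance from the left wall
    p.y = wallToCenterMM;                   // pinned by the wall contact
    p.theta = 1.5707963267948966;           // pi/2: squared up, no heading error
    return p;
}
```

The resulting pose can then overwrite (or be fused with) the drifting encoder-based estimate.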
As others have pointed out, this has no way of telling the difference between rotation and horizontal movement; you’d be better off keeping your two wheels parallel and just assuming that you don’t move sideways (of course, having three tracking wheels would still be preferable). (This is assuming you’re trying to use only the two tracking wheels; see below.) If you want a logical argument for why this won’t work: the robot’s position on the field has 3 degrees of freedom (x-coordinate, y-coordinate, and orientation), which means you need three independent measurements to keep track of it.
In principle, yes, that should be possible. I’ve yet to see an attempt at getting a Kalman filter working for tracking, but that would be interesting.
If you haven’t taken a look at the link to your document yet (below), essentially your third wheel should be perpendicular to the other two. By combining data from these three wheels, you should be able to keep track of the position and orientation of the robot.
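A single update step of that three-wheel scheme might look like the sketch below. The geometry values are placeholders, and the full arc-chord correction is omitted for brevity (a midpoint-heading approximation is used instead):

```cpp
#include <cmath>

struct Pose { double x, y, theta; };  // field coordinates, radians

// One odometry step from encoder deltas (all travel in mm):
//   dL, dR: the two parallel tracking wheels
//   dS:     the perpendicular (strafe) tracking wheel
//   trackWidth: spacing between the parallel wheels
//   sOffset:    distance from the tracking center to the strafe wheel
Pose odomStep(Pose p, double dL, double dR, double dS,
              double trackWidth, double sOffset) {
    double dTheta = (dR - dL) / trackWidth;  // heading change, parallel wheels
    double fwd    = (dL + dR) / 2.0;         // local forward displacement
    double strafe = dS - sOffset * dTheta;   // subtract rotation's effect on dS
    // Rotate the local displacement into the field frame at the midpoint heading.
    double mid = p.theta + dTheta / 2.0;
    p.x += fwd * std::cos(mid) - strafe * std::sin(mid);
    p.y += fwd * std::sin(mid) + strafe * std::cos(mid);
    p.theta += dTheta;
    return p;
}
```

The `dS - sOffset * dTheta` term subtracts the travel the strafe wheel would see from rotation alone, leaving only the genuine sideways movement.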
I was trying to use the third tracking wheel to determine the heading of the robot, but what I found is the value of the third encoder highly depends on the center of rotation. I used the value difference between the other two tracking wheels to determine the orientation in the end.
Yeah, as you have found, it is impossible to determine heading from the third wheel unless you turn on a dime.
You can’t differentiate horizontal movement from rotational movement, and the third wheel’s reading is affected by the center of turning.
The only way to get the heading is from the two parallel wheels; the expected movement of the third wheel for that rotation can then be canceled out, leaving the horizontal displacement.