What kind of sensors should we get?

Long story short, our field does not have GPS code strips or "official" tiles (we have foam tiles from HD at a fraction of the price, sans the white strips). The only sensors we have are some three-wire bumper/limit switches and an ultrasonic range sensor. Our entire budget is ~$300. What kind of sensors would you recommend getting so that our auto routine is not entirely odometry-based?

Did you mean to ask what sensors you need in order to do odometry?

Anyway, what sensors you need depends entirely on your robot. If you plan to do odometry, then three of either the new rotation sensors or the old quadrature encoders would work for measuring the rotation of free-spinning encoder wheels. Aside from that, the V5 smart motors themselves have quite capable internal encoders, which can be used for most types of manipulators.
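If it helps, here's a rough sketch (VEXcode V5 C++ style) of reading both kinds of encoders; the ports, device names, and the 2.75 in tracking-wheel diameter are placeholders to adjust for your robot:

```cpp
#include "vex.h"
#include <cmath>
using namespace vex;

// Placeholder devices -- change the ports to match your robot.
motor    ArmMotor      = motor(PORT3);     // V5 smart motor (internal encoder)
rotation TrackingWheel = rotation(PORT4);  // rotation sensor on a free-spinning wheel

// Convert the tracking wheel's rotation into inches of travel.
double trackingWheelInches() {
  const double WHEEL_DIAMETER = 2.75;      // common tracking-wheel size; yours may differ
  return TrackingWheel.position(rev) * WHEEL_DIAMETER * M_PI;
}

int main() {
  while (true) {
    // The motor's internal encoder is read the same way, handy for arms and lifts.
    printf("arm: %.1f deg  wheel: %.2f in\n",
           ArmMotor.position(degrees), trackingWheelInches());
    wait(100, msec);
  }
}
```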

3 Likes

Outside of the normal bumper switches, limit switches, and potentiometers to monitor the robot's internal positions, I would probably go with a gyro sensor to improve the accuracy of turning during autonomous.
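For example, a gyro-assisted turn might look roughly like this (a minimal sketch in VEXcode V5 C++ style; the ports, gain, and tolerance are made-up values you would tune):

```cpp
#include "vex.h"
#include <cmath>
using namespace vex;

// Placeholder devices -- adjust ports and the reversed flag for your drive.
motor    LeftDrive  = motor(PORT1);
motor    RightDrive = motor(PORT2, true);
inertial Imu        = inertial(PORT10);   // remember to calibrate it before autonomous

// Turn in place to an absolute heading, using the gyro as feedback
// instead of relying on time or drive encoders alone.
void turnToHeading(double targetDeg) {
  const double kP = 0.6;                  // proportional gain; tune on your robot
  while (true) {
    double error = targetDeg - Imu.heading(degrees);
    // Wrap the error into [-180, 180] so the robot takes the short way around.
    if (error > 180)  error -= 360;
    if (error < -180) error += 360;
    if (fabs(error) < 1.0) break;         // within 1 degree: close enough

    double speed = kP * error;
    LeftDrive.spin(forward,  speed, percent);
    RightDrive.spin(reverse, speed, percent);
    wait(20, msec);
  }
  LeftDrive.stop();
  RightDrive.stop();
}
```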

Of course, there's always the "poor man's" method of resetting to line things up: drive your bot against the field perimeter wall until it straightens out…

5 Likes

Coming from FLL/VEX IQ, I thought odometry didn't require any sensors (drive forward xx inches…). But yes, I'm trying to figure out how to make the best use of that $300. My inclination is to get the GPS code strips and a GPS sensor, but that would be our entire budget. Would either the vision sensor or the inertial sensor work for finding and accurately locating those mobile goals?

Keep in mind for the GPS sensor that the strip is only guaranteed to be present on skills fields. There is no guarantee this season that they will be present on normal competition fields for your autonomous.

10 Likes

Just driving according to instructions is considered "dead reckoning" because there is no feedback from any sensors to tell you what actually took place. The odometry used in recent seasons provides feedback through non-powered tracking wheels that tell the brain how the robot actually moved, as compared to what was programmed. Personally, on a low budget, I would hold off on the GPS strips unless that's something you're really excited about doing, and wait for GPS to become mainstream.

4 Likes

Get an inertial sensor; you can do a lot with it (for example, you can calculate distance on the field from speed, since velocity = distance / time).

Odometry requires some form of rotary encoder on two parallel wheels (and a perpendicular wheel if your robot can move sideways). The V5 smart motors have built-in encoders which can be used for odometry (the OkapiLib library included with PROS supports this natively). However, when the encoders are mounted to the drive wheels, any slippage caused by too much acceleration or bumping into obstacles will cause significant errors in odometry. To overcome that, many teams use separate free-spinning tracking wheels with external encoders, so there is little to no slippage.

What is typically done (and what I did) in FLL and VEX IQ is not true odometry. Most of the time, teams will just tell the motors to move the robot straight until the encoders in the motors have turned the wheels enough to have traveled the desired distance, then turn a certain amount, then drive straight in some other direction, and so on. That does not keep track of the actual position of the robot between movements, and small tweaks to the robot between movements can add up to significant errors over a whole routine, primarily in heading. True odometry constantly calculates the exact position of the robot on the field from all previous movements, regardless of what movements are made (it's a bit more complicated, but that's the idea). True odometry does still accumulate some error over time, but it's far less than only tracking the robot along the direction each move is supposed to go.

If you could fit separate tracking wheels in your robot with rotation sensors attached (the old-style encoders are only $11; the new style is $40), you could use true odometry without worrying about slippage. I have found that the V5 inertial sensor (IMU), when not miscalibrated from the factory, can be more accurate than tracking wheels at sensing the rotation of the robot. You can get a long way with just the built-in motor encoders for distance and a gyro for rotation.
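As a rough illustration of that last combination (motor encoders for distance, gyro for heading), a pose-tracking loop could look something like this; the wheel diameter, gear ratio, and ports are all placeholder values:

```cpp
#include "vex.h"
#include <cmath>
using namespace vex;

// Placeholder devices and dimensions -- change them to match your drive.
motor    LeftDrive  = motor(PORT1);
motor    RightDrive = motor(PORT2, true);
inertial Imu        = inertial(PORT10);

const double WHEEL_DIAM = 4.0;   // drive wheel diameter in inches
const double GEAR_RATIO = 1.0;   // wheel revolutions per motor revolution

double x = 0, y = 0;             // robot position on the field, in inches
double lastLeft = 0, lastRight = 0;

// Call this in a fast loop (every ~10 ms) to keep the pose up to date.
void updateOdometry() {
  // Total distance each side of the drive has rolled, from the motor encoders.
  double left  = LeftDrive.position(rev)  * GEAR_RATIO * WHEEL_DIAM * M_PI;
  double right = RightDrive.position(rev) * GEAR_RATIO * WHEEL_DIAM * M_PI;

  // How far the robot's center moved since the last update.
  double dCenter = ((left - lastLeft) + (right - lastRight)) / 2.0;
  lastLeft  = left;
  lastRight = right;

  // Use the IMU for direction, per the suggestion above (heading 0 = "up" the field).
  double headingRad = Imu.heading(degrees) * M_PI / 180.0;
  x += dCenter * sin(headingRad);
  y += dCenter * cos(headingRad);
}
```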

If you want a programming challenge, you could try using both the IMU to measure distance (which tends to drift slowly over time) and the robot's wheels. Most of the time you would rely on the wheels, unless the readings from the wheels significantly disagree with the IMU (indicating the wheels slipped), in which case the data from the IMU can fill in the gap.
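A very rough sketch of that idea, with made-up thresholds and assuming the IMU's y axis points forward (which depends entirely on how the sensor is mounted):

```cpp
#include "vex.h"
#include <cmath>
using namespace vex;

// Placeholder devices -- adjust ports for your robot.
motor    LeftDrive  = motor(PORT1);
motor    RightDrive = motor(PORT2, true);
inertial Imu        = inertial(PORT10);

const double WHEEL_CIRC = 4.0 * M_PI;  // drive wheel circumference in inches

double imuVelocity   = 0;   // in/s, integrated from the IMU's accelerometer
double fusedDistance = 0;   // best-guess distance traveled, in inches

// Call in a fast loop with the elapsed time since the last call.
void updateDistance(double dtSec) {
  // Forward velocity according to the wheels (motor rpm -> in/s).
  double wheelRpm = (LeftDrive.velocity(rpm) + RightDrive.velocity(rpm)) / 2.0;
  double wheelVel = wheelRpm * WHEEL_CIRC / 60.0;

  // Forward velocity according to the IMU (acceleration in g -> in/s^2, integrated once).
  imuVelocity += Imu.acceleration(yaxis) * 386.1 * dtSec;

  // If the two disagree badly, assume the wheels slipped and trust the IMU instead.
  bool slipping = fabs(wheelVel - imuVelocity) > 10.0;  // threshold is a guess
  fusedDistance += (slipping ? imuVelocity : wheelVel) * dtSec;

  // While the wheels look trustworthy, rein in the IMU's drift.
  if (!slipping) imuVelocity = wheelVel;
}
```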

9 Likes

I've been playing with a 3-wire range finder with the idea of using it to locate a mogo and position the robot so that all it needs to do is drive forward a fixed distance to pick up the mogo. As a test, I'd position the robot ~2 feet away from a mogo and have the robot turn slowly to the left 30 degrees and then to the right 60 degrees, while polling the range finder and keeping track of the shortest distance it returns. My idea was that the shortest distance would correspond to the position of the robot where the mogo was directly in front of it. The idea failed in practice; my guess is it failed because the range finder emits a sound wave shaped like a cone rather than a narrow beam, and only pings about 4 times a second, neither of which I had realized. Would a V5 distance sensor perform better with the same code?

The V5 distance sensor uses light. I haven't tested it, but it would probably work for finding a mobile goal, though not as well for precise positioning. The main problem is that light can pass through the clear plastic of the field perimeter and the parking platform, which could mess up your autonomous. The ultrasonic range finders don't have that flaw.

The distance sensor uses a laser, so it measures along a single line, not a cone. I would not recommend using it to find exact distances, because it has an annoyingly large error margin (0.6 inches at minimum). That being said, the narrow beam still makes it very useful for this kind of task.
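If you wanted to retry the sweep-and-take-the-minimum idea with the V5 distance sensor, a sketch could look like this (ports, speeds, and the 60-degree sweep are assumptions; it leaves the final turn to a gyro-based turn function like the one earlier in the thread):

```cpp
#include "vex.h"
using namespace vex;

// Placeholder devices -- adjust ports for your robot.
distance DistanceSensor = distance(PORT5);
motor    LeftDrive      = motor(PORT1);
motor    RightDrive     = motor(PORT2, true);
inertial Imu            = inertial(PORT10);

// Sweep roughly 60 degrees to the right while polling the sensor, and remember
// the heading at which it reported the shortest distance. Start with the robot
// already turned ~30 degrees to the left of where you expect the goal.
double findNearestObjectHeading() {
  double startRotation = Imu.rotation(degrees);
  double bestHeading   = startRotation;
  double bestDistance  = 1e9;

  LeftDrive.spin(forward, 10, percent);   // slow clockwise turn
  RightDrive.spin(reverse, 10, percent);
  while (Imu.rotation(degrees) < startRotation + 60) {
    if (DistanceSensor.isObjectDetected()) {
      double d = DistanceSensor.objectDistance(inches);
      if (d < bestDistance) {
        bestDistance = d;
        bestHeading  = Imu.rotation(degrees);
      }
    }
    wait(10, msec);
  }
  LeftDrive.stop();
  RightDrive.stop();
  return bestHeading;   // then turn back to this heading, e.g. with a gyro P-loop
}
```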

It is expensive, but the vision sensor is perfect for this use case (its wide field of view means you don't need to sweep to find the goal), and I'm using it on my bot.

Could you please clarify if the vision sensor could be used to orient the robot directly towards a mogo?

The vision sensor gives you the x/y position of one or more colored regions (as configured by you) within its image. You could program the robot to turn left or right until a particular colored region (e.g., a mobile goal) was in the center of the vision sensor’s view.
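A minimal sketch of that, assuming SIG_MOGO is a color signature you have already tuned with the Vision Utility (the signature values, ports, and gain below are placeholders):

```cpp
#include "vex.h"
#include <cmath>
using namespace vex;

// Placeholder signature -- replace the zeros with values from the Vision Utility.
vision::signature SIG_MOGO(1, 0, 0, 0, 0, 0, 0, 2.5, 0);
vision VisionSensor = vision(PORT6, 50, SIG_MOGO);
motor  LeftDrive    = motor(PORT1);
motor  RightDrive   = motor(PORT2, true);

// Turn until the largest SIG_MOGO blob sits in the middle of the image.
void centerOnGoal() {
  const int IMAGE_CENTER_X = 158;   // the vision image is 316 pixels wide
  const int TOLERANCE      = 10;    // pixels; tune to taste

  while (true) {
    VisionSensor.takeSnapshot(SIG_MOGO);
    if (!VisionSensor.largestObject.exists) {
      // Nothing seen: keep turning slowly to search for the goal.
      LeftDrive.spin(forward, 10, percent);
      RightDrive.spin(reverse, 10, percent);
    } else {
      double error = VisionSensor.largestObject.centerX - IMAGE_CENTER_X;
      if (fabs(error) < TOLERANCE) break;    // goal is centered
      // Turn toward the goal, slowing down as it approaches the center.
      double speed = 0.1 * error;
      LeftDrive.spin(forward,  speed, percent);
      RightDrive.spin(reverse, speed, percent);
    }
    wait(20, msec);
  }
  LeftDrive.stop();
  RightDrive.stop();
}
```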

4 Likes

This is the reason for the colored stickers on the MOGOs.

5 Likes

Oh, is this the sticker you’re referring to?
[image of the MOGO sticker]

So you’re saying we could configure the sensor with images of a red/blue/yellow MOGO, and it’d locate matching objects on the field? Do you suggest setting up the sensor to look for those stickers specifically, or red/blue/yellow MOGO bases in general?

If you set it up to look for the goals from far away and for the stickers up close, you can grab the MOGO from a consistent angle. Additionally, the yellow MOGOs should have the sticker aligned with the poles, which is good if you plan on scoring on the goal or need to rotate it.

5 Likes

You probably won't have very much luck trying to target those stickers with the vision sensor. The sensor isn't very good at differentiating colors in anything less than extreme brightness, so I doubt it's even possible for it to recognize those stickers, especially the non-colored ones on the alliance goals.

You might have some success by targeting the entire base of the goal; they are large, brightly colored objects, after all. In my limited experience with the sensor, though, it has a hard time telling red and blue apart unless the lighting is extremely bright. That might not be a problem, because you're unlikely to encounter alliance goals of the other team's color during autonomous, but in skills it could be an issue.

I prefer not to use the vision sensor at all, because I don't trust it, and I think it's usually more efficient just to use some sort of encoder-based movements.

3 Likes