WPI Lidar explanation

So Team GOAT would like to present some explanation of the Lidar code that ran at the VEXU World Championship. We thought it would be a fun, interesting challenge to make a dynamic autonomous routine (one that automatically locates balls around the field and picks them up) that we had programmed ourselves 100% (I believe it was the first fully autonomous, individually programmed VEX robot ever). This is obviously a difficult challenge in itself, but what made it even more difficult is that we decided to forgo any additional processing power and did it all on the Cortex in ROBOTC. I would do a full code release, but the issue is that we wrote it all in the span of 3 weeks and there are lots of hacked-together and/or bloated aspects that prospective programmers should not be exposed to. (If anyone wants to play around and explore it I can add you to the private git repo.) So instead, here is an explanation of the steps that went into autonomously locating balls.

To ensure accuracy, steps 1-4 were run 3 times, and a ball was only confirmed if it was computed to exist in 2 of the 3 iterations.

1 Parsing
Consists of reading in packets made up of a start byte, a starting index, 4 consecutive distances in mm, and an RPM measurement of the lidar motor. This data is read and saved into an array of shorts beginning at the starting index and ending at the starting index + 3.
http://i.imgur.com/QNj550v.jpg raw data
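
For anyone curious, the parsing looks roughly like the sketch below (a simplified C-style version, not our actual ROBOTC). It assumes the commonly documented XV-11 22-byte packet layout: a 0xFA start byte, an index byte covering 4 degrees, a 2-byte motor speed, then four 4-byte readings; the names and exact bit masks here are illustrative.

short distances[360];        /* one distance in mm per degree */
float motorRpm = 0;

/* pkt points at one complete 22-byte packet pulled off the UART buffer */
void parsePacket(const unsigned char *pkt)
{
    if (pkt[0] != 0xFA) return;                    /* not a start byte */

    int startDeg = (pkt[1] - 0xA0) * 4;            /* index byte -> first degree */
    if (startDeg < 0 || startDeg > 356) return;

    motorRpm = ((pkt[3] << 8) | pkt[2]) / 64.0f;   /* speed reported in 1/64 RPM */

    for (int i = 0; i < 4; i++)
    {
        const unsigned char *r = &pkt[4 + 4 * i];  /* one 4-byte reading */
        if (r[1] & 0x80)                           /* "invalid data" flag set */
            distances[startDeg + i] = 0;
        else                                       /* 14-bit distance in mm */
            distances[startDeg + i] = (short)(((r[1] & 0x3F) << 8) | r[0]);
    }
    /* the 2 checksum bytes at the end of the packet are ignored in this sketch */
}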

2 Chassis filter
The data from predefined ranges representing points blocked by the chassis is eliminated from the list so no computation is wasted on useless data.
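
Conceptually this is just zeroing out hard-coded degree spans, something like the sketch below (the spans and names are made-up examples, not our real values).

typedef struct { int startDeg; int endDeg; } DegRange;

/* example spans only; the real ones were measured from raw data with the robot stationary */
const DegRange chassisBlocked[] = { {40, 65}, {115, 140} };
const int numBlocked = sizeof(chassisBlocked) / sizeof(chassisBlocked[0]);

void applyChassisFilter(short dist[360])
{
    for (int r = 0; r < numBlocked; r++)
        for (int d = chassisBlocked[r].startDeg; d <= chassisBlocked[r].endDeg; d++)
            dist[d] = 0;    /* 0 = "no data", skipped by the later filters */
}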

3 Triangle filter
While still in polar coordinates, the data was searched for every distinct local minimum that occurred within 3 points such that no point is more than 50 mm farther away than the middle point. This test was designed to be extremely cheap computationally, as it had to run on more or less all of the 360 data points. Each point was checked against its 2 neighbours to see if it passed the filter, and keeping everything in polar coordinates kept the cost down. This created a list of possible balls.
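
In C-style pseudocode the filter looks something like this (the array names and candidate-list format are illustrative, not our actual code):

#define MAX_CANDIDATES 64

int candidateDeg[MAX_CANDIDATES];   /* middle degree of each possible ball */
int numCandidates = 0;

void triangleFilter(const short dist[360])
{
    numCandidates = 0;
    for (int d = 0; d < 360; d++)
    {
        short mid   = dist[d];
        short left  = dist[(d + 359) % 360];
        short right = dist[(d + 1) % 360];

        if (mid == 0 || left == 0 || right == 0)
            continue;                               /* filtered or missing data */

        /* middle point is a local minimum and neither neighbour is more
           than 50 mm farther away than the middle point */
        if (left >= mid && right >= mid &&
            left - mid <= 50 && right - mid <= 50 &&
            numCandidates < MAX_CANDIDATES)
        {
            candidateDeg[numCandidates++] = d;
        }
    }
}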

4 Circular filter
This is the first test outside of polar coordinates. The points are converted to Cartesian coordinates in a reference frame with the robot at the origin and the X axis extending out toward each possible ball, which avoids the cost of normalizing the data to the global reference frame. In this reference frame a possible circle was calculated that crossed both the point to the left and the point to the right of the center (remember, any 2 points within 2r of each other are guaranteed to have 2 circles of radius r that intersect them). The middle point was then used to calculate the certainty of a ball based on its distance from the center in relation to a known radius. Any candidate with more than 5% uncertainty was rejected. This test was extremely expensive due to the nature of the math involved, but extremely restrictive on non-ball clusters of points.

Here is half of the general equation for computing a circle based off 2 points, after we did a lot of work optimizing it:
circle.k = (-circle.a * sqrt(-bd2 * (a2 - 2 * circle.a * circle.c + b2 - 2 * circle.b * circle.d + c2 + d2) * (a2 - 2 * circle.a * circle.c + b2 - 2 * circle.b * circle.d + c2 + d2 - 4 * R2)) + circle.c * sqrt(-bd2 * (a2 - 2 * circle.a * circle.c + b2 - 2 * circle.b * circle.d + c2 + d2) * (a2 - 2 * circle.a * circle.c + b2 - 2 * circle.b * circle.d + c2 + d2 - 4 * R2)) + a2 * b2 - a2 * d2 - 2 * circle.a * b2 * circle.c + 2 * circle.a * circle.c * d2 + pow(circle.b, 4) - 2 * pow(circle.b, 3) * circle.d + b2 * c2 + 2 * circle.b * pow(circle.d, 3) - c2 * d2 - pow(circle.d, 4)) / (2 * (circle.b - circle.d) * (a2 - 2 * circle.a * circle.c + b2 - 2 * circle.b * circle.d + c2 + d2));
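
If that is hard to read, the unoptimized idea is much simpler: put the circle's center on the perpendicular bisector of the outer two points at the distance that gives the known radius, then score the middle point against that radius. Below is a readable sketch of that idea (not our optimized code; the ball radius and all names are placeholders).

#include <math.h>

#define BALL_RADIUS 50.0f   /* mm; placeholder for the known ball radius */

typedef struct { float x, y; } Point;

/* left/right are the outer points of the 3-point candidate, mid is the middle
   point; returns 1 if the cluster passes the 5% certainty test */
int circleFit(Point left, Point right, Point mid, float *certainty)
{
    float dx = right.x - left.x, dy = right.y - left.y;
    float d  = sqrtf(dx * dx + dy * dy);             /* chord length */
    if (d <= 0 || d > 2 * BALL_RADIUS)
        return 0;                                    /* no radius-R circle exists */

    /* center = chord midpoint +/- an offset along the perpendicular bisector */
    float h  = sqrtf(BALL_RADIUS * BALL_RADIUS - (d / 2) * (d / 2));
    float mx = (left.x + right.x) / 2, my = (left.y + right.y) / 2;
    Point c1 = { mx - h * dy / d, my + h * dx / d };
    Point c2 = { mx + h * dy / d, my - h * dx / d };

    /* of the 2 possible circles, take the center farther from the robot
       (origin), since we only ever see the near side of a ball */
    Point c = (c1.x * c1.x + c1.y * c1.y > c2.x * c2.x + c2.y * c2.y) ? c1 : c2;

    /* certainty = how far the middle point sits from the ideal radius */
    float rMid = sqrtf((mid.x - c.x) * (mid.x - c.x) + (mid.y - c.y) * (mid.y - c.y));
    *certainty = fabsf(rMid - BALL_RADIUS) / BALL_RADIUS;
    return *certainty <= 0.05f;                      /* reject over 5% uncertainty */
}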

5 Normalize the data so it is centered on the center of the robot

6 Compute global coordinates of each ball (see the sketch below)

7 Blacklist balls in protected or inaccessible zones

8 Compute closest ball from list of allowed confirmed balls
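
For steps 5 and 6, the sketch below shows the general idea: convert the ball from lidar-frame polar coordinates to the robot's center, then rotate and translate by the robot's global pose. The lidar mounting offsets and names are made-up placeholders, not our real values.

#include <math.h>

#define LIDAR_OFFSET_X 0.0f     /* mm from robot center to lidar (made-up mounting) */
#define LIDAR_OFFSET_Y 120.0f

void ballToGlobal(float ballDeg, float ballDistMM,
                  float robotX, float robotY, float robotTheta,   /* pose from odometry */
                  float *globalX, float *globalY)
{
    /* step 5: polar (lidar frame) -> Cartesian centered on the robot */
    float a      = ballDeg * 3.14159265f / 180.0f;
    float localX = ballDistMM * cosf(a) + LIDAR_OFFSET_X;
    float localY = ballDistMM * sinf(a) + LIDAR_OFFSET_Y;

    /* step 6: rotate by the robot's heading and translate by its position */
    *globalX = robotX + localX * cosf(robotTheta) - localY * sinf(robotTheta);
    *globalY = robotY + localX * sinf(robotTheta) + localY * cosf(robotTheta);
}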

Updates coming if there is interest.

Wow that’s a pretty amazing bit of code! How successfully was it used in competition? And do you have any match film so we can see it in action?

Our team attempted to do something similar, but since we’re in VRC we were limited to only ultrasonics. We weren’t able to do any full-field detection, but using a pair of ultrasonics, we could detect balls directly in front of the robot. Didn’t do much with the sensor data but was an interesting possibility for us, cool to see that somebody actually accomplished something similar :slight_smile:

Here is a video of it finding a few balls and shooting them. Given enough time and a full field it could score most of the field into the goal. Right after this test I realized my PID loop allowed the robot to stop driving straight if the error aligned perfectly between 80 and 100 mm away, so ignore the little derp on the first ball. This was also before extensive work speeding everything up. The speed is still limited by handling single balls at a time and by lidar processing speed.

The robot actually accurately distinguished between a robot and a ball, which is why so much filtering was required. The picture of live data was taken when the robot was surrounded by tons and tons of balls, which is kind of a ridiculous example. I will have to see if I have any pictures of raw data that aren't staged.

Nice, thanks for sharing! :slight_smile:

Amazing code. Keep up the good work

What did you use as your rangefinder?

Nicely done and it’s great to see students working on what is actually a pretty tough problem. I did extensive work with Sick LMS221 sensors and TI DSPs to generate real time (25Hz) container handling machinery positioning information at ports. We were scanning with 4 LIDAR sensors over 30m distances and distributing processed data around with a quality tag used to establish best results and avoid blocking situations. Positioning 30 ton machines within 10cm under all weather conditions is a neat trick.
Optimizing to pseudo objects as soon as possible in your processing is the key to success here but constraining it to a robotC VM is perhaps making life harder than it should be. If you post more details I’ll try to find time to comment.

So we had an $80 lidar out of a Neato vacuum cleaner lying around.
http://www.ebay.com/itm/like/282023483964?lpid=82&chn=ps&ul_noapp=true

The data came in over UART and we ran velocity control to maintain roughly 300 RPM. The lidar has 1 degree resolution and returns distances in mm. That is enough precision to detect 3 points on a ball a meter away, but not enough to guarantee that, after rounding, all 3 points lie on the same circle in 2D space. To actually guarantee that mathematically we would have needed perfect accuracy and a perfectly level lidar. Without those, the surest way to calculate a potential circle was off the outer 2 points, with the middle point used to check certainty.

By the way our work on all of this won us the VEXU Design Division Think award.

Does the laser sensor actually send out visible light?

Pretty sure it is IR. It is a pretty common laser used in the hacker community.

https://xv11hacking.wikispaces.com/LIDAR+Sensor

This is awesome, but it doesn’t look like the code could be sped up much… I might be missing something, because you would need to make an array with the current degree and distance of each ball, convert to XY coordinates, kill the inaccessible balls, then use inertial navigation to drive to the balls and score… All of which it looks like you already had, so optimizing the PID loops is the only thing left to speed things up, right?

I’m interested in the XY positioning that you use and how your robot keeps track of its position.
Are you able to explain that?

So for the most part the optimization we performed was on the processing end. We had initially been reading data until we had read every new point, and then running a task in the background to wipe all incoming data while we were processing and computing balls. This led us to waste time waiting for new information when it came time to process the next iteration of balls. So instead we switched to letting the data build up in the UART buffer, so that time could be saved on that end. For the most part our code was pretty optimized, but there were things like hard-coded delay statements. For example, we used to wait 2 seconds after stopping before beginning to process the data, to ensure we were completely stopped. This was largely unnecessary, as the PID loop was tuned to only exit if error was small and velocity was 0 over 50 ms.

Largely, though, the time required for computation could only be lowered by so much. The lidar spun at about 4-5 rps, so it fully rotated roughly every 0.225 seconds, and since we wanted to do 3 full rotations of data calculation before moving to a ball, that meant well over half a second minimum of standing still. In all honesty, thinking about it, I think we could have been better off if the majority of our autonomous code used bang-bang control instead of PID for moving around; the accuracy was unimportant and the position tracking was so good that it could be okay doing full-speed or 3/4-speed maneuvers with no ramp up.

Sure, it was all pretty simple. For the most part we were able to drive by hand for several minutes and only experience theta drift of ±1 degree. Remember, with odometry like this, theta drift of any significant amount will entirely destroy your coordinate system: every timestep's movement gets rotated by theta into the global frame, so if your theta drifts 10 degrees then every subsequent movement in the global X and Y coordinate system will be inaccurate even if your X and Y were perfect up until that point.

The trick I found for really good position tracking isn’t any more cool code. Imagine if you were in a car suspended in the air and you pushed the pedal down for hours.

What do you expect would happen to the odometer?
Also what do you expect to happen if you had barely enough traction to move and sent motors full power on your robot?

Did you use a Vex gyro or another 3rd party sensor? Also, it sounds like you used either an accelerometer or an unpowered wheel with an encoder for (x, y) movement. How did you handle drift, or did you even need to do anything special for it?

Didn’t use anything third party or an accelerometer. Really I think the most important thing you can do is have an unpowered encoder wheel for position tracking. It provides much better data and isn’t likely to be messed up by running into things or trying to accelerate too quickly.

So did you use those unpowered encoder wheels for a theta measurement as well? I looked into doing that, but we never got around to putting that on the Worlds robot and we are currently moving everything into storage for a while. I’d also like to take a look at your code eventually; Vex U seems like a programmer’s paradise, at least to this high schooler. =)

Unpowered wheels were used for theta as well. We only used 2 wheels, so we didn’t have good data side to side. In theory 4 wheels could give you better data on the robot's sideways drift during fast turns or arcs, but we couldn’t fit horizontal wheels into the chassis.
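
For reference, the core of two-tracking-wheel odometry is only a few lines. The sketch below is the general form, not our actual task, and the wheel and track-width constants are placeholders.

#include <math.h>

#define MM_PER_TICK  0.39f     /* wheel circumference / encoder ticks per rev (placeholder) */
#define TRACK_WIDTH  250.0f    /* mm between the 2 tracking wheels (placeholder) */

float robotX = 0, robotY = 0, robotTheta = 0;   /* global pose, theta in radians */

/* call every timestep with the raw counts from the 2 unpowered tracking wheels */
void odometryStep(long leftTicks, long rightTicks)
{
    static long lastLeft = 0, lastRight = 0;

    float dLeft  = (leftTicks  - lastLeft)  * MM_PER_TICK;
    float dRight = (rightTicks - lastRight) * MM_PER_TICK;
    lastLeft  = leftTicks;
    lastRight = rightTicks;

    float dTheta = (dRight - dLeft) / TRACK_WIDTH;   /* heading change this step */
    float dDist  = (dRight + dLeft) / 2;             /* forward travel this step */

    /* integrate at the midpoint heading so heading error compounds more slowly */
    robotTheta += dTheta;
    robotX += dDist * cosf(robotTheta - dTheta / 2);
    robotY += dDist * sinf(robotTheta - dTheta / 2);
}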

Thanks for that, I have been trying to work out xy coordinates for a while now.

I will post the task that is in charge of position tracking, but it’s pretty simple. The other thing we did was every time we got near a wall, we would square up with the wall and drive until we hit it, to reset the odometry to hand-set values. I will explain a little better later when I’m at my computer.
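
The wall reset is conceptually just overwriting the pose with known values once you are squared up, something like the sketch below (the wall position and offset numbers are made up, and it reuses the pose variables from the odometry sketch above).

/* pose variables from the odometry sketch above */
extern float robotX, robotY, robotTheta;

void resetOdometryAtWall()
{
    /* example: squared up against the field wall that runs along y = 0,
       facing it, with the tracking-wheel origin 180 mm back from the wall */
    robotTheta = -3.14159265f / 2.0f;   /* heading is known exactly */
    robotY     = 180.0f;                /* standoff from the wall is known, mm */
    /* robotX is left alone: squaring up only fixes heading and one axis */
}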