How would I go about trying to map out an environment with limited materials?

With only 4 ultrasonic distance sensors (mounted on all four sides of a quad-wheeled robot), our robot is supposed to map out its surroundings in a static environment. My only solution is to find the relationship between wheel diameter and linear movement (to track movement effectively) and then apply those numbers, combined with our front, back, left, and right distance values, to some kind of polar coordinate plane. I also have to find some way to actually plot these coordinates so we know what our robot is seeing in real time. Are there any solutions to this (maybe some pre-existing code, an alternate method for the whole thing, calculations, etc.)?
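For what it's worth, here is a minimal sketch (plain Python, not VEX-specific) of the "project the readings onto a coordinate plane" step: given the robot's pose from your wheel tracking, each ultrasonic reading becomes a point in world coordinates. The function name, sensor angles, and units are just assumptions for illustration.

```python
import math

# Angle of each sensor relative to the robot's heading (radians),
# assuming one sensor per side of the chassis.
SENSOR_ANGLES = {
    "front": 0.0,
    "left": math.pi / 2,
    "back": math.pi,
    "right": -math.pi / 2,
}

def readings_to_points(x, y, heading, readings):
    """Return (x, y) obstacle points in world coordinates.

    x, y     -- robot position from wheel tracking (any consistent unit)
    heading  -- robot heading in radians
    readings -- dict like {"front": 24.0, "left": 10.5, ...} of distances
    """
    points = []
    for side, dist in readings.items():
        if dist is None:  # no echo / out of range
            continue
        angle = heading + SENSOR_ANGLES[side]
        points.append((x + dist * math.cos(angle),
                       y + dist * math.sin(angle)))
    return points
```

Logging these points each loop iteration and plotting them (even just printing them and graphing offline) gives you a rough occupancy map of the static environment.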

I don’t know how feasible this would be with VEX parts, but a cool idea could be to have a spinning sensor attached to one motor, kind of like this video. That would make it easy to know the angle and distance at any one point, which could then be plotted onto polar coordinates. It would be simpler and more accurate than driving the whole robot around.
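A rough sketch of that idea, assuming you can read the motor's angle alongside the distance sensor (names and units are illustrative, not any particular VEX API):

```python
import math

def sweep_to_points(samples):
    """Convert (angle_deg, distance) samples from a spinning sensor into
    (x, y) points relative to the sensor's axis of rotation."""
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for a, d in samples if d is not None]
```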


One of the biggest issues is the mathematical knowledge to knit everything together. You not only need to know the position of every point relative to the robot, but you must also know where the robot itself is. Either you can use advanced college-level math, or you can build an odometry program that uses the rotation of the wheels and their velocities, alongside the robot’s heading.
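The core of such an odometry program is a small dead-reckoning update. A minimal sketch, assuming a differential-style drive where you can measure how far each side has rolled (wheel rotations × π × wheel diameter); the function name and parameters are hypothetical:

```python
import math

def update_pose(x, y, heading, d_left, d_right, track_width):
    """Dead-reckoning pose update from wheel travel since the last loop.

    d_left, d_right -- distance each side has rolled since the last update
    track_width     -- distance between the left and right wheels
    Returns the new (x, y, heading).
    """
    d_center = (d_left + d_right) / 2.0           # forward travel
    d_theta = (d_right - d_left) / track_width    # change in heading
    # Advance along the average heading over this small step
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    heading += d_theta
    return x, y, heading
```

Run this in a fast loop and feed the resulting pose into whatever routine converts sensor readings to map points.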


This is a lot of math. The option I was thinking of would involve a constantly spinning ultrasonic sensor driven by a motor at its base.

The idea works by visualizing an elevated sensor as the top vertex of a right triangle.

So, to sum it up:
With a little calibration, you can find out how far the ultrasonic sensor points by default using the Pythagorean theorem. By calibrating while the sensor points at the ground, you find the base of the triangle: the reading is the hypotenuse and the sensor’s elevation is the vertical leg. When an object interrupts the beam, the new reading lets you find the vertical drop to the hit point; subtracting that from the sensor’s constant elevation tells you how far the object is off the ground.
This detects objects and can locate them in 3D space. A sensor readout that differs from the calibrated floor distance by a significant amount tells you an object is there, and you can find its edges by turning the sensor left and right until the reading returns to the floor value.
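A small sketch of that calibration math, assuming the sensor is mounted at a known elevation and a fixed downward angle (the sine of that angle comes out of the floor calibration and is reused by similar triangles); the function names are illustrative only:

```python
import math

def calibrate(sensor_elevation, floor_reading):
    """On flat ground the reading is the hypotenuse and the elevation is
    the vertical leg, so the horizontal base follows from Pythagoras.
    Requires floor_reading > sensor_elevation."""
    base = math.sqrt(floor_reading**2 - sensor_elevation**2)
    sin_angle = sensor_elevation / floor_reading  # fixed downward angle
    return base, sin_angle

def object_height(sensor_elevation, sin_angle, reading):
    """Vertical drop to the hit point is reading * sin(angle); subtracting
    it from the sensor's elevation gives the hit point's height off the
    ground."""
    return sensor_elevation - reading * sin_angle
```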

If possible, having a second motor to tilt the sensor up and down would allow you to find an object’s depth and use it as a point to rotate on. Calibration of distance for this would again be done with the sensor’s elevation and the ultrasonic sensor data.
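With both a pan motor and a tilt motor, each reading becomes a full 3D point. A minimal sketch under those assumptions (names and conventions are mine, not from any library):

```python
import math

def to_3d_point(pan_deg, tilt_deg, distance, sensor_elevation):
    """Convert a pan angle, a tilt angle below horizontal, and a distance
    reading into an (x, y, z) point relative to the sensor mount."""
    horizontal = distance * math.cos(math.radians(tilt_deg))
    x = horizontal * math.cos(math.radians(pan_deg))
    y = horizontal * math.sin(math.radians(pan_deg))
    z = sensor_elevation - distance * math.sin(math.radians(tilt_deg))
    return x, y, z
```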

This got very, very complicated. If you could describe your environment and its general size, that would be nice.

The sensor MUST be facing downwards, towards the field.
This also assumes that the sensor returns relatively accurate data and that you have a flat surface to work with.