I am trying to follow a line more effectively. It works right now, but the robot drives in a saw-tooth pattern and is almost never going straight. I would think that a PID loop would be the way to go, but to do this I need an analog line position as an input.
An analog line position is where the robot knows exactly where the line is and how much it needs to turn to get back on the line (a reading would range from -100 to +100, where 0 means the robot is centered on the line).
Versus a digital line position, where the robot has only three possibilities, left, middle, right (or 1, 2, 3), and is always overcorrecting itself. (This is the only thing I have ever seen used with a VEX robot.)
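To make that concrete, here is a rough C sketch of the kind of reading I mean. Nothing in it is VEX-specific, the sensor polarity (about 0 over black, about 255 over white) is just an assumption, and the function names are made up; it combines three reflectance readings into one -100 to +100 position with a weighted average, and you can check the math on a PC since it’s plain C.

```c
#include <stdio.h>

/* Rough sketch of the reading I mean: three reflectance sensors (assumed to
   read near 0 over the dark line and near 255 over the white floor) combined
   into one analog position from -100 to +100, where 0 means the line is
   centered under the middle sensor. */
int line_position(int left, int center, int right)
{
    int dl = 255 - left;                 /* "darkness" seen by each sensor */
    int dc = 255 - center;
    int dr = 255 - right;
    int total = dl + dc + dr;

    if (total == 0)
        return 0;                        /* nothing but white: no line in view */

    /* Weighted average: the left sensor pulls toward -100, the right toward +100. */
    return (-100 * dl + 100 * dr) / total;
}

int main(void)
{
    /* Example: line mostly under the left sensor -> negative position. */
    printf("%d\n", line_position(30, 180, 250));   /* prints -72 */
    return 0;
}
```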
Any help you could offer would be very appreciated, Titan 103
1. Have the light sensors closer together. Depending on the line, the contrast, and the robot, that may mean three in a row or in a triangle pattern with the two side sensors next to each other and the center sensor ahead of or behind them. This may allow an outside sensor to detect the line moving beneath it sooner, or, depending on the line, may cause premature adjusting.
2. Bring the thresholds closer to center instead of leaving them at 0 and 255. In short, if you begin to adjust at shades of gray instead of waiting for pure black and white, you’ll adjust more often.
3. A variation of number two, where the amount that your robot adjusts depends directly on the inputs: a little change in gray means a small adjustment, and the more the shade shifts from optimal, the more the robot must adjust (see the sketch below).
Note that all three include risks with their rewards.
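Here is a minimal C sketch of suggestion 3, assuming a single sensor riding the edge of the line and a two-motor drive. read_line_sensor() and set_drive() are placeholder names, not real VEX calls, and the target and gain are numbers you would have to tune on your own robot.

```c
/* Sketch of suggestion 3: steering proportional to how far the reading is
   from a mid-gray target, for one sensor riding the edge of the line.
   read_line_sensor() and set_drive() are placeholders, not real VEX calls. */

static int  read_line_sensor(void)         { return 128; }   /* stand-in */
static void set_drive(int left, int right) { (void)left; (void)right; }

void follow_edge_step(void)          /* call this over and over in your loop */
{
    const int target = 128;          /* mid-gray: half on, half off the line */
    const int base   = 60;           /* cruising power */

    int error = read_line_sensor() - target;   /* sign says which way you drifted */
    int correction = error / 4;                /* gain of 1/4: tune on the robot */

    /* Small shift in gray -> small correction; big shift -> big correction. */
    set_drive(base + correction, base - correction);
}
```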
I was thinking that you could make the line black in the middle and have it gradually become lighter toward the edges. Then you could have two sensors about the line’s width apart, and the robot would know how much it needs to turn.
Yes, that would be good. I wish the lines on the competition field were like that. Most of the work I am doing on line following is in preparation for future competitions.
I always thought it would be interesting to mount the line sensors on a light-weight scanning head driven by a servo. This would let you scan for the line without moving the whole robot (let the head do the saw movement instead of the whole bot).
I think you’d use a PID loop to run the servo to keep the head centered over the line. Then use the servo angle as the input to the steering (possibly a second PID loop?).
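Something like this rough C sketch is what I’m picturing, with just one sensor on the head and a plain proportional term instead of a full PID for now. Every I/O call here is a placeholder name, not a real VEX function, and the constants are guesses.

```c
/* Rough sketch of the scanning-head idea with a single sensor on the head:
   an inner loop nudges the servo to keep the head over the line's edge, and
   the servo angle becomes the steering input. */

static int  read_head_sensor(void)   { return 128; }   /* stand-in */
static void set_servo(int angle)     { (void)angle; }
static void set_drive(int l, int r)  { (void)l; (void)r; }

static int servo_angle = 0;          /* -127..+127, 0 = head straight ahead */

void scan_and_steer_step(void)
{
    const int target = 128;          /* mid-gray, the edge of the line */
    const int base   = 50;

    /* Inner loop: turn the head toward the line edge. */
    int head_error = read_head_sensor() - target;
    servo_angle += head_error / 8;
    if (servo_angle >  127) servo_angle =  127;
    if (servo_angle < -127) servo_angle = -127;
    set_servo(servo_angle);

    /* Outer loop: steer the drive toward wherever the head is pointing. */
    int steer = servo_angle / 2;
    set_drive(base + steer, base - steer);
}
```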
I know this is off topic, but I just wanted to say that I really like the VEX robot in the picture you used for your avatar. Do you have a larger picture of it that you could post in the gallery? It looks really good - is it the robot that you are trying to use the line following on?
All right, I will post a larger one. No, this is not a line-following robot; it was merely my first very successful tank-with-grabber robot. The robot I am currently trying to use for line following is a sick-looking Elevation bot. I plan on eventually making a robot just for line following.
P.S. My avatar robot is where I got my name; it was called the TITAN 103.
If you do it this way, you should only have to use one of the line following sensors on the scanning head. You could just record the servo angle when the line is detected to steer the robot, like you said. I think using all three sensors would make things too complicated and would probably be overkill for the scanning method.
You don’t need to do this because that’s how the sensor works anyway.
The sensor is measuring an area a certain distance in diameter. I’m not exactly sure how much, but I’m guessing it’s about 1/4 of an inch across. As the sensor crosses the line, a greater percentage of that area goes from white to black or vice versa. This change in percentage shows up as a change in voltage, which is reported as a digital value between 0 and 255.
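If you calibrate your own “all white” and “all black” readings, you can turn that percentage into a number. A small sketch (the calibration values in the example are made up):

```c
/* Sketch: given readings you calibrate yourself for "all white" and
   "all black", estimate how much of the sensor's spot is over the line. */
int percent_on_line(int reading, int white_value, int black_value)
{
    if (reading <= black_value) return 100;
    if (reading >= white_value) return 0;
    return 100 * (white_value - reading) / (white_value - black_value);
}

/* e.g. percent_on_line(140, 240, 40) -> 50: the spot is about half on the line */
```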
You can test this yourself by putting some electrical tape on a sheet of white paper, or by using a Sharpie marker to make a wide dark line on a sheet of white paper. Set a motor to turn a wheel in the air based on the input of the sensor. As you move the sensor (in your hand) from the white to the dark sections, you should see the motor speed up and slow down.
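A sketch of what that test might look like in C; the sensor and motor calls are placeholders for whatever your setup provides, and which way the speed goes is just the mapping I picked.

```c
/* Sketch of the tape-and-Sharpie test: spin a free wheel at a speed set by
   the sensor reading so you can watch the analog change by eye. */
static int  read_line_sensor(void) { return 200; }   /* stand-in */
static void set_motor(int power)   { (void)power; }

void hand_test_step(void)
{
    int reading = read_line_sensor();     /* lower over the dark marker line */
    int power   = (255 - reading) / 2;    /* darker -> faster, range 0..127 */
    set_motor(power);
}
```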
The number of sensors you have in a fixed area determines how accurately you can figure out how much you need to adjust your robot. The larger the area you’re covering, the less likely you are to lose the line entirely. The more often your sensors check that area (checks per second), the more accurate you’ll be. The faster you’re moving, the more likely you are to leave the line before you’re able to correct.
Thanks! I am planning on using line following on my Elevation robot too. I just have to get mine finished. I still can’t decide whether to use tank tread conveyor belts like most everyone else or a claw arm. Something like the arm on your tank-with-grabber robot would work great - that’s why I thought that it might be your Elevation bot.
If you use a line like I described, it would read about 0 in the middle and about 255 at the edges, with the value increasing as you go outward. With a normal line, you jump from about 0 to about 255, but if you have a line that varies like the one I described, the robot would be able to sense how far into the line it is and calculate how far it has to move to correct itself. I know it won’t work for Elevation; it was just a thought.
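A tiny sketch of how that would be used, assuming two sensors about a line-width apart over the 0-center / 255-edge shading described above; the sign convention is my own.

```c
/* Sketch of the graded-line idea: with a line that reads ~0 at its black
   center and ~255 at its light edges, the difference between the two
   readings says which way you've drifted and roughly how far, with no
   threshold needed at all. */
int gradient_correction(int left_reading, int right_reading)
{
    /* 0 when centered; positive means the left sensor sees lighter gray,
       i.e. the robot has drifted left and should steer right. */
    return left_reading - right_reading;
}
```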
If you’re jumping from 0 to 255, that means your robot has crossed the entire field of view of one of its sensors in between measurements. This is entirely possible depending on the size of your sensor field, the speed of your robot, and how much you’re making course corrections (swerving).
The solution to this is NOT to blame the environment but to improve the robot. There are a lot of things you can experiment with.
Test your sensors to see how high above the line they can be and still be accurate. If you can increase the height, you’ll increase the area the sensor is getting its input from.
Experiment with the pattern of your sensors. For example, five sensors in the following pattern:
.]]
]]]
provide a lot more feedback than three sensors side by side. It may even be that three sensors patterned
.]
]]
may provide much more overlap and accuracy although I suspect that your robot is moving too fast for such a pattern to be efficient.
Another possible pattern would be:
._]
.
.
]]]
If the front sensor (array) is on the line, your bot can go faster; when the front sensor loses the line, the bot knows it has a turn coming and should slow down and focus its efforts on the back sensors.
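A rough sketch of that speed logic in C, with placeholder names for the front sensor, the back-row position, and the drive:

```c
/* Sketch of the front-sensor-for-speed pattern: run fast while the forward
   sensor still sees line, slow down and lean on the back row when it doesn't. */
static int  front_on_line(void)     { return 1; }   /* 1 = front sensor sees line */
static int  back_row_position(void) { return 0; }   /* -100..+100 from back row */
static void set_drive(int l, int r) { (void)l; (void)r; }

void speed_aware_step(void)
{
    int base  = front_on_line() ? 90 : 45;   /* straightaway vs. turn coming */
    int steer = back_row_position() / 4;     /* corrections come off the back row */
    set_drive(base + steer, base - steer);
}
```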
Or maybe you feel the need for speed. If so:
]]]]]]]
will let you drift far off the line and still correct your direction and speed.
Look at your software. Your controller can only do so many operations per second. The more things the controller has to do, the less often it’s looking at the sensors. If it’s spending time performing operations that it doesn’t need to do while moving, then it’s wasting cycles.
Note that many of these things are at cross purposes, for example, the more sensors you have and the more complicated the routines you have for analyzing them, the less frequently you’re going to poll any one sensor.
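For what it’s worth, a sketch of what a lean loop looks like, with placeholder calls; the point is only that nothing slow happens between sensor checks.

```c
/* Sketch of keeping the control loop lean: read, correct, repeat, and keep
   anything slow (printing, extra math, unused sensors) out of the moving loop. */
static int  read_position(void)     { return 0; }
static void set_drive(int l, int r) { (void)l; (void)r; }

void follow_loop(void)
{
    for (;;)
    {
        int steer = read_position() / 4;   /* only the work that must happen */
        set_drive(60 + steer, 60 - steer);
        /* No printing or debugging here while moving: every extra operation
           is one less sensor check per second. */
    }
}
```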
All of these things involve creating experiments, testing, and learning the strengths and limitations of the equipment far beyond what’s in the PDF files.
Some things people may want to try, just to see what happens:
1. Can you use additional sensors to increase the illumination of an area, but only poll a single sensor mounted higher above the surface?
2. Would it be beneficial to mount your sensor at an angle to change the shape of the area it’s looking at, especially if #1 works?
3. Do you have enough light to use the light sensors instead of the IR line followers?
I support Bons. Because of the width of the sensor, you don’t need a line that has a smooth curve. The finite width of the IR beam should compensate for that.