A thread for Algorithms


Well, I’m not asking for leaks, but it would be cool to see a thread with a bunch of different types of algorithms and how teams have used them, so let’s make it happen. Everyone knows about the infamous PID loop, but what other algorithms have you all used, and for what purpose?



I have tried a technique for the vision sensor: automatically turning the bot towards the flag.

It is a filter that determines which flag in the camera’s view is closest to the center of the view. The reason for this is that I first use an encoder PID to turn approximately towards the desired flags, then use the vision sensor and this filter to determine which flag to track. The result is then passed as the sensor input to a PID on the base, which turns towards the aimed flag precisely.

The filter is very simple, using a comparator with the std::sort function:

bool compareCenter(vision_object_s_t i, vision_object_s_t j) {
    return fabs(i.x_middle_coord * 1.0 - (VISION_FOV_WIDTH * 1.0) / 2) <
           fabs(j.x_middle_coord * 1.0 - (VISION_FOV_WIDTH * 1.0) / 2);
}

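To show roughly how that comparator plugs into std::sort, here is a self-contained sketch. The struct and FOV constant below are simplified stand-ins for the real PROS `vision_object_s_t` and `VISION_FOV_WIDTH`, so it compiles off-robot:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Stand-ins for the PROS vision_object_s_t struct and FOV constant,
// so this sketch is self-contained.
constexpr int VISION_FOV_WIDTH = 316;
struct vision_object_s_t { int x_middle_coord; };

// True if object i is closer to the horizontal center of the camera's
// view than object j.
bool compareCenter(vision_object_s_t i, vision_object_s_t j) {
  double center = VISION_FOV_WIDTH / 2.0;
  return std::fabs(i.x_middle_coord - center) <
         std::fabs(j.x_middle_coord - center);
}

// Sort detected objects so that the flag nearest the center comes first.
void sortByCenter(std::vector<vision_object_s_t>& objects) {
  std::sort(objects.begin(), objects.end(), compareCenter);
}
```

After sorting, the first element is the flag to track, and its x offset from the center can be fed straight into the turning PID.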

Does anyone have any algorithms or mathematical equations they can share to help a bot stay on an imaginary line as closely as possible without a line tracker?



I could try to explain an algorithm involving a PID loop, but I’d probably get it wrong :sweat_smile: so I’ll offer a simple alternative using an ultrasonic sensor pointed off the side. This really only works in an environment with a detectable wall parallel to the robot, but this method isn’t mentioned often, so I’d like to get it out there.

1: Record distance from side wall as variable distance
2: Set drivetrain motor speed
3: IF the ultrasonic sensor value > distance, adjust speeds so the drivetrain is going closer to the wall
4: IF the ultrasonic sensor value < distance, adjust speeds so the drivetrain is going away from the wall
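Steps 3 and 4 can be folded into one proportional correction. The sketch below is just an illustration of the idea, not code from any particular library; the function name, the gain, and the assumption that the sensor faces the right side are all made up:

```cpp
#include <utility>

// Sketch of the wall-following idea above. Compares the current ultrasonic
// reading to the distance recorded at the start, and skews the two drive
// sides proportionally. Assumes the sensor points out the right side of
// the robot; flip the signs if it points left.
std::pair<int, int> wallFollowSpeeds(int targetDistance, int currentReading,
                                     int baseSpeed, double kP) {
  // Positive error means we've drifted away from the wall, so speed up
  // the left side to steer back toward it.
  int correction = static_cast<int>(kP * (currentReading - targetDistance));
  int left = baseSpeed + correction;
  int right = baseSpeed - correction;
  return {left, right};
}
```

Call this each loop iteration with the latest reading and apply the resulting speeds to the two drivetrain sides.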



Oooh, this looks like it could be a fun thread!

Hey, I recognize that username… :smiley:

On a serious note: as I mentioned in the position tracking document, we don’t exactly want to just release the math/code for the algorithms we used last year, as we’d much rather teach people how we came up with those algorithms than hand over a “working” solution that would not at all be optimized to everyone’s needs. Part of the reason we haven’t released anything beyond the initial document is a combination of not having the time to write it, and the difficulty of explaining that kind of thing in a written document alone (I’m aware that the tracking doc isn’t exactly the easiest thing to follow, has some typos, etc.).

@Andrew_Strauss and I have discussed the possibility of starting a YouTube video series on robotics programming, so maybe that could make its way there… but no promises, given that we’re already stretched on time and we’re not even competing in VEX U yet.

Nonetheless, I’ll try to go over a couple simple algorithms that we used last year:

Nearest equivalent angle: A simple mathematical trick, very useful as a building block for motion algorithms on top of a position tracking system. Assuming you’re trying to find the equivalent angle to \theta_{target} that’s closest to \theta_{ref}, both in radians:

\left[ \frac {\theta_{ref}-\theta_{target}} {2\pi} \right] \times 2\pi + \theta_{target}

Where, for lack of better notation, \left[ x \right] denotes the nearest integer to some real number x (we just used ROBOTC’s round function).

How does this work? Well, the rounding part calculates how many rotations off you are from the target (the expression inside is the difference between the target and reference angle values, measured in rotations rather than radians - the conversion is done by dividing by 2\pi). The outside then converts that offset in integer revolutions back to radians, which gives how much you need to change the target by to get to the nearest equivalent angle… and then adds that to the target, thus calculating the result.
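As a direct translation of the formula into code (this is just a sketch, not our exact implementation, and the function name is made up):

```cpp
#include <cmath>

// Find the angle coterminal with `target` that is nearest to `ref`,
// both in radians: round the difference (in revolutions) to a whole
// number of turns, convert back to radians, and shift the target by it.
double nearestEquivalentAngle(double target, double ref) {
  const double TWO_PI = 2.0 * M_PI;
  return std::round((ref - target) / TWO_PI) * TWO_PI + target;
}
```

For example, with a target of 0.1 rad and a reference of 6.2 rad, the result is 0.1 + 2π ≈ 6.38 rad, i.e. the copy of the target angle one full revolution up.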

Ultrasonic filtering: Basically what it says. We used this to ensure that all of our ultrasonic values were actually picking up on the intended object.

  1. Set a target number of “good” readings (for example, 10)
  2. Set a maximum number of readings to take (for example, 100)
  3. Set a period to take readings at (for example, 10 milliseconds)
  4. Set a minimum and maximum expected value for the ultrasonic readings
  5. Take a reading
  6. If it falls in the expected range, add it to the “total” value, and increment the number of “good” readings you’ve taken
  7. Sleep
  8. Repeat 5-7 until either you’ve taken the target number of “good” readings, or you’ve reached the maximum total number of readings
  9. Divide the “total” of the “good” readings by the number of good readings that were taken, to get the average. This is now your “filtered value” for the ultrasonic.

I hope this is a fairly straightforward explanation. By only taking “sane” values from the sensor, you can eliminate noise caused by inconveniently placed field elements and other sources in some scenarios. Further tweaks to this algorithm can be made, for example running it concurrently on two separate ultrasonic sensors that you happen to need the values of at the same time, and falling back to an alternate sensor data source if there weren’t enough “good” readings; we did both of these last year.
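The steps above can be sketched roughly as follows. This is not our actual code: the sensor is injected as a callable so the sketch runs off-robot, and on the V5 you’d read the ultrasonic directly and sleep for the period between readings:

```cpp
#include <functional>
#include <optional>

// Sketch of the ultrasonic filtering loop described above. Takes readings
// until either `goodTarget` in-range readings have been collected or
// `maxReadings` total readings have been taken, then averages the good ones.
// Returns nothing if no good readings were seen, so the caller can fall
// back to an alternate sensor data source.
std::optional<double> filteredUltrasonic(std::function<int()> readSensor,
                                         int goodTarget, int maxReadings,
                                         int minValid, int maxValid) {
  long total = 0;
  int good = 0;
  for (int taken = 0; taken < maxReadings && good < goodTarget; ++taken) {
    int reading = readSensor();
    if (reading >= minValid && reading <= maxValid) {
      total += reading;  // only "sane" values contribute to the average
      ++good;
    }
    // On the robot, sleep for the sampling period here.
  }
  if (good == 0) return std::nullopt;
  return static_cast<double>(total) / good;
}
```

Running two of these concurrently on separate sensors, as mentioned above, is then just a matter of giving each its own reading function.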

Best of luck and feel free to ask if anything here needs clarification.



Well, I’m curious as to what the implementation of the nearest equivalent angle would be used for.

You said that you want to find the equivalent angle, but for what exactly? Is it the orientation of the robot, a position for an arm, etc.? It would be helpful if you could give an example of what theta target and theta reference could be. The ultrasonic filtering is pretty straightforward, I’d say. I noticed that on the PILONS bot last year there were ultrasonic sensors on the sides of the bot, probably for tracking where the walls of the field are relative to the robot. Also, thank you for replying to this thread!



This was used for calculations involving robot orientation; we added it after the robot took the scenic route turning (i.e. doing a 300 degree turn to the left instead of a 60 degree turn to the right) one too many times. Incorporating this into a turning function that works with absolute orientations lets you not worry about what equivalent angle you’re providing, as it will automatically choose the shorter way (if that’s what you want to do).

Sort of. The ultrasonics on the bot (there was another one mounted symmetrically on the other side) pointed out the front corners of the robot at 45 degrees. They were used to reset our position tracking system while depositing mobile goals in the 10 point zone during programming skills.

My pleasure!



I felt like it, so I tried to make a noise filtering class somewhat based on what @nickmertin said. I’m not completely sure it works.

#include <algorithm>
#include <cmath>

template <class vType>
class Noise_Remover {
  vType valueArray[50];
  int p = 0;

  vType GetValue() {
    vType vArrTemp[50];
    std::copy(std::begin(valueArray), std::end(valueArray), std::begin(vArrTemp));
    std::sort(std::begin(vArrTemp), std::end(vArrTemp));
    vType mean = 0;
    vType lo = vArrTemp[0];
    vType hi = vArrTemp[0];
    vType median = .5 * (vArrTemp[24] + vArrTemp[25]);
    for (vType num : vArrTemp) {
      if (num > hi) {
        hi = num;
      } else if (num < lo) {
        lo = num;
      }
    }
    vType range = hi - lo;

    int m = 0;
    for (vType num : vArrTemp) {
      // Filter out outliers
      if (fabs(num - median) < (.25 * range)) {
        mean += num;
        m += 1;
      }
    }
    if (m == 0) {
      return -1;
    }
    mean /= m;
    return mean;
  }

  bool operator==(vType v) {
    return this->GetValue == v;
  }

  void operator+=(vType i) {
    this->valueArray[p] = i;
    p += 1;
    p %= 50;
  }
};

I probably should’ve added more operator overloads.



Looks decent. Though,

  1. If you’re going to abstract that into its own class template anyway, you should probably make it more configurable. (That’s probably more useful than additional operator overloads.)
  2. Just to be clear, your filter only looks at how close together the values are, rather than some preconfigured absolute max and min. This means it wouldn’t handle cases where an ultrasonic is reading a consistent value that’s entirely insensible and should be ignored.
  3. You don’t need to sort the array and loop through to find the max and min. I’d recommend just removing the sort, in which case you can probably just use valueArray directly rather than creating a temporary copy.
  4. You’ll need a way to push data into the filter, unless you’re just going to create a new filter object every time you get data, in which case maybe this should be a function instead of a class.
  5. Syntax: you probably want public: at the beginning, and you need to add the () after GetValue in your operator==.