7842F 2018-19 Robot Showcase - Flywheel Control, Odometry, Engineering Journals & More

Did you go to Worlds last year?

Yes. Try to answer your own questions =).
If you have more questions, please combine them in a single comment so that this thread is not cluttered by single-line discussions.


Hey, could you give a brief description of how your LVGL odometry display will help users tune their odometry?


Like I said, odometry is hard to tune. It takes a lot of iterations of fine-tuning to get it right, and is prone to error and drift. Everything needs to be working perfectly together, or else the whole system crashes. Thus, it is important for the tuning and diagnosing to be as seamless, fast, and efficient as possible.

What most people do is print information to the terminal. While that works, it is quite awkward and hard to visualize: it takes considerable effort to analyze the incoming stream of numbers and figure out what it means. It is just easier to have a visualization.

Having the LVGL visualization makes it faster to get information. You can quickly see if the sensors are working, if the wheels are working, etc. You can drive the robot forward, and directly see if the robot is drifting on the screen. It is useful for tuning constants, because you can move the robot and watch the robot on the screen do so as well, and see if it matches what you expect. If the visualization is not precise enough, you can use the numbers printed on the screen.

TL;DR It provides a faster way of visualizing the odometry and troubleshooting problems, as it readily gives visual feedback that is a hassle to get otherwise.

Does this answer your question?


Yes, and I had another question: are we able to use this with our own odometry system instead of your API?
I have my own way of getting orientation and x-y coordinates.


Yeah, you will probably have to edit odomDebug’s code for it to work with your format.
Soon I’m probably going to change how it’s structured so it can work with more formats.

Right now, it uses OkapiLib v4 beta’s odometry system, which will probably be released soonish.
You can download the beta version of OkapiLib v4 from its GitHub releases.

If you wanted to, you could inherit your tracking system from OdomChassisController, which would make it compatible with my code as-is.


OK cool. Also, could you send some photos of your tracking wheels?


Is the demonstration video not enough? I also have a picture in the post.
If you want more, there may be some images in the engineering journals.
In fact, we even have build instructions on page 82 of the project journal lol :wink: .


Oh OK, thanks, I didn't see that page.


I had a chance to read through all details one more time…

First, I was trying to understand if your flywheel control algorithm is a continuous (textbook definition) PID, where everything is controlled in a smooth manner, without artificial jumps, vs the multi-segmented approach, where if() statements introduce discontinuity.

For example, your 7842F-Alpha-Code-RobotC repository has such an if() statement, but in the V5 code it is essentially a no-op.

Then I got lost trying to decide which of the features I like more:

demonstrated at 3:35 in this video: https://youtu.be/iW4RlnHbDrY?t=215

or the Vision Sensor instant visualization and filtering:

Please, don’t wait until the end of the season to release the code. :slight_smile:


Thanks for the compliments! I know it takes a bit of time to read :slight_smile:

For the flywheel PID, I believe it qualifies as a continuous, textbook PID. The algorithm can theoretically attain any desired velocity; it does not have a designated velocity that it operates best at.

I don’t think the two pieces of code you linked do what you think. They simply bias the D in one direction.
In this case, I wanted the D to react to the flywheel slowing down, but not to the flywheel speeding up. If the flywheel suddenly accelerated for some reason while running at target velocity, the D’s natural response would be to cut the power.

However, a flywheel takes more effort to speed up than to slow down, a flywheel’s natural bias is to slow down, and our flywheel was ratcheted, making it impossible to forcefully slow it down. Because of this, I wanted the PID to react less aggressively and only lower the power a bit if the flywheel overshot, letting the flywheel naturally slow down and settle at the target velocity instead of overshooting the other way and slowing down too much. I also didn’t want the D to interfere with ramping up the flywheel.

For the RobotC version, I scaled down the D when it was below 0, and for the V5 version I cut out the D completely when it was below 0. Apart from that, the algorithm is linear and would produce a smooth, uninterrupted output signal if the target velocity were slowly increased.
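As a rough sketch of that asymmetric D handling (illustrative only: kP, kD, and the output range are placeholder values, not the actual 7842F constants):

```cpp
#include <algorithm>

// Sketch of a velocity PD step where the D term is cut when the flywheel
// speeds up (derivative below 0), as in the V5 version described above.
// kP, kD, and the output range are made-up placeholder values.
double flywheelStep(double targetRPM, double currentRPM) {
  static double prevError = 0.0;
  const double kP = 0.05, kD = 0.01;

  double error = targetRPM - currentRPM; // positive when too slow
  double derivative = error - prevError; // positive when slowing down
  prevError = error;

  // Only react to the flywheel slowing down; ignore sudden speed-ups
  // so the ratcheted flywheel can settle naturally.
  if (derivative < 0) derivative = 0;

  double output = kP * error + kD * derivative;
  return std::clamp(output, 0.0, 1.0); // ratchet: never command reverse
}
```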
I hope this clears things up.

Noted :slight_smile:
I am willing to DM it to people if they are willing to beta test it, give feedback and suggestions, and test it for practical use on their robots.


Hey everyone!
Just an update on odomDebug.
I’ve made a new version and updated the GitHub repository.

Aside from some visual changes:


I’ve also updated the API to work with any kind of odometry. Now you can directly set the position of the robot on the screen, instead of having it read a specific kind of odometry from the backend. I have also improved the code’s formatting and comments.

For a better idea of what changed, check out the repository.

Let me know if you have any questions, enjoy!


Hey, I’m pretty impressed by your work and I skimmed the two journals that you have.

I noticed that you guys spent most of your time building and was wondering how you had the time to perfect all these complex programs. This season, as a programmer, I always felt that I didn’t have enough time to polish what I had, and I’d like some advice.


Yep, that’s the struggle.

We have ~7 hours per week to work on the robot, and since we just have one builder, it takes most of our time just to build.

As for programming, it comes down to how much time you put in at home. I am super passionate about programming, so it is what I do in my free time. At school, at home, and on trips, programming is one of my higher priorities, which I have to balance with school, cross-country skiing, and piano. I have 1.3k commits on GitHub, and have probably spent hundreds of hours on my (awesome and fun) projects.

I develop the programs at home, in preparation for the next meet. I mostly just try to run everything through my head to make sure it works, but for the LVGL stuff I used the LVGL simulator.
When Jacob builds, I am able to test stuff on the robot, but it is sometimes a struggle having the robot for myself.

Not having enough time does affect us, namely that we are rushed before a competition, with work still to do. We often write autonomous programs a day before or at a competition, simply because we did not have enough time to finish building. We were not able to do skills last year, though I’m sure I would be able to make an awesome routine (given enough time). Finally, our driver practice is always rushed and often improvised.

Most of my productive programming is done at home, when I get into the zone with music and stuff. I rarely write programs at our meets; it’s just too distracting there to stay focused. I get all the needed programming done ahead of time, make a testing plan, and then test and tweak while Jacob builds.

I hope this answers your question, basically being focused, deliberate, and prepared allows you to make the most of your limited time with the robot.

Builders: give your programmers enough time, they are more vital to the robot than you think :smile:


I’m literally bussing over this post.
It’s absolutely phenomenal. Your robot, the code, and the website.

oHmYgOd it’s just too much.



I’ve been working on odom for a bit now, and I saw the GIF of your JavaScript simulation of the path calculation. It’s exactly what I was picturing for how the path calculations would work; the only problem is I can’t seem to figure out how to calculate the velocities from the given data.

I (with the help of others) got to something along the lines of a time-versus-position graph, then taking the derivative to get a velocity vector and turning that into motor inputs (that’s probably not explained well or necessarily correct, but you get the general idea), but I felt like that was overthinking it, and there’s most likely a simpler way.

So far I have the robot tracking its X and Y coordinates and its angle on the field, but I haven’t been able to get any motion profiling working besides the horribly inefficient point turn to face the target and then drive forward. I attempted a PID control similar to the one you mentioned and got the exact same spasming around the target point. I felt like there had to be a better way, and then I saw this post, which was perfect. (Thanks for this, it’s really well documented!)


So my understanding is that your robot is able to somewhat reach the target position, but it never quite settles there. Instead, it goes crazy around that spot, constantly turning back and forth to adjust for the change in target angle. I’m not sure how much you understand about the issue you’re having, so I’m going to explain from the basics up for you and for anyone else reading this and having similar problems.

Why the robot is spasming
The calculation for the target angle involves arctan: targetAngle = arctan(yDistance / xDistance), where yDistance = targetY - currentY and xDistance = targetX - currentX. However, the version of arctan that we use takes the signs of yDistance and xDistance into account. I personally use atan2(yDistance, xDistance) from the cmath library in C/C++. Let me know if you want me to fully explain where this formula comes from.

The important part is this:
Assume targetPoint: (10,10); currentPoint: (0,0) -> xDistance = 10; yDistance = 10
Now we know that the robot should travel at a 45° angle to reach its target.

Now flip the targetPoint and currentPoint:
targetPoint: (0,0); currentPoint: (10,10) -> xDistance = -10; yDistance = -10
So the target angle is now 225°: the robot wants to turn completely around to go where it needs to. Now imagine this happening on a very small scale. If you are currently at (10,10) and need to go to (11,11), you know you need to go at 45° for some distance. Now say the robot overshoots and ends up at (12,12); it now calculates a negative xDistance and yDistance, so it wants to turn 180° for such a tiny distance. This will likely happen multiple times in a row, which is where the spasming comes from.
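To make the flip concrete, here is a minimal standalone helper using atan2 from cmath (note that atan2 returns -135°, which points in the same direction as 225°):

```cpp
#include <cmath>

// Angle in degrees from the current position to the target position,
// using atan2 so the signs of both distances are taken into account.
double targetAngleDeg(double curX, double curY, double tgtX, double tgtY) {
  const double pi = std::acos(-1.0);
  return std::atan2(tgtY - curY, tgtX - curX) * 180.0 / pi;
}
```

Heading from (10,10) to (11,11) comes out to 45°; after overshooting to (12,12), the same target gives -135° (equivalent to 225°), a complete reversal for a tiny distance.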

Theo’s Approach to Solving this
Disclaimer: I did not come up with this approach. This is @theol0403’s concept; I just spent time understanding it, and I’m explaining it again here. To understand this approach, consider the following diagram:
The dot in the middle is the robot’s target position. When the robot is within a distance smaller than the green circle, it is considered within tolerance and you should shut off the PID. You need this kind of tolerance because you will never be exactly at the right spot in the real world; you decide the value through experimentation.

The blue circle is the area in which the robot is not allowed to turn at all. You only allow the robot to go forwards or backwards as far as possible, until moving in straight lines will not help anymore. The way I did this is to go forwards/backwards until my desired target angle is 90° from my current angle. I can explain why 90° is the magic number if you would like me to. Once the robot either (1) reaches a distance error of less than the tolerance (green circle) or (2) has a 90° difference between its current and target angle, kill the PID and declare the movement finished.
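Those two exit conditions can be sketched as a simple check (the function and parameter names here are my own illustrative choices, not from anyone’s actual code):

```cpp
#include <cmath>

// Returns true when the movement should be declared finished:
// (1) the distance error is within the settle tolerance (green circle), or
// (2) the target sits 90 degrees or more off our heading, so driving
//     straight can no longer help (a skid-steer can't move sideways).
bool movementFinished(double distErr, double angleErrDeg, double tolerance) {
  bool withinTolerance = std::abs(distErr) < tolerance;
  bool targetSideways = std::abs(angleErrDeg) >= 90.0;
  return withinTolerance || targetSideways;
}
```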

If the robot’s distance from the target is greater than the blue circle, it follows the traditional method of moving where it turns to the target position and then moves forward. It continuously checks its angle and turns if needed when moving forwards.

Let me know if you need clarification on anything. I will say from doing this myself that it can take time to fully grasp the concept. Good luck!


First of all, thanks for the detailed reply!

You guessed right about what was happening, and the way you explained it makes a lot more sense. I’ve been (like you said) taking arctan and didn’t take the sign flip at the target point into account, so that’s going to help a lot with the spasm part of the problem.

My biggest problem, however, is properly calculating motor velocities from the current and target coordinates and angles. I have the position and angle of the robot being tracked; my problem is finding a good way to take the input data (current and target coordinates and angles, and error) and turn it all into motor velocities. (Basically the GIF that Theo sent of his JavaScript simulation.)

I know Theo is working on a response as well, but if you or anyone else grasps how that works then please feel free to explain it, it would help me out a lot.

Edit: This GIF (specifically, the green bars representing motor velocity) is what I’m struggling to reproduce.


I’m not quite sure what you mean by that, but if it ends up working, let me know! Is the idea that if the distance to point becomes smaller, then velocity should be positive, but if the distance becomes greater, then the robot should back up? That’s a very interesting idea, and I have no idea if it would work. I think the problem is that you can’t do PID on it since you don’t have an error, and therefore you can’t do proportional control and slow down before you reach the target.

The best way to convert error to motor velocities is to use PID. You take the distance to the target, apply PID on that, take the angle to the target, apply PID on that, and then combine the two. Further down I explain how I combine the two PIDs.

I’ve seen a few people avoid the spasms when doing PID, using a few methods.
First of all, let’s recognize the limitations of a skid-steer chassis. If for various reasons the robot inevitably slightly misses the target point, you have two options. Either you can back up and try to better align yourself to the point, or you can cut your losses and realize that you can’t move sideways. The question then becomes when do you stop trying to correct angle, and how do you settle.

Given these limitations, one way of settling is to exit the movement when your error is sufficiently small, and hope that you won’t run into a situation where the error is too big but the robot can’t move sideways. I believe that is what this does, which simply exits a certain distance away.

However, a way that I have found works well is to give up on angle correction after a certain distance, but continue with the linear PID. Since we use the Pythagorean theorem to calculate the distance to the target, the error will always be positive, preventing the PID from settling. My solution is to give the distance error an artificial polarity.

Here is how I provide a polarity to the distance error:

  • Calculate distance to point
  • Calculate angle to point
  • Wrap angle to be ±180 degrees
  • If absolute value of angle is > 90 degrees, then driving backwards is needed. The distance to point is negated.

This way, if you are in front of the point, it will know to drive backwards. All you need to do is disable the angle correction at a certain radius of the point, and hopefully the distance PID will properly settle. If it does not, then you need to exit the motion when you get to a certain distance.

If you want to allow the robot to drive backwards (currently it only drives backwards when the angle PID is disabled), you can then rotate the angle error so that it is in the range of ±90 degrees.
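Those four steps could look something like this (wrapDeg180 and the other names are my own illustrative helpers, not Theo's actual API):

```cpp
#include <cmath>

// Wrap an angle in degrees into the range [-180, 180).
double wrapDeg180(double a) {
  a = std::fmod(a + 180.0, 360.0);
  if (a < 0) a += 360.0;
  return a - 180.0;
}

// Signed distance to the target point: negative means "drive backwards".
double signedDistance(double x, double y, double headingDeg,
                      double tgtX, double tgtY) {
  const double pi = std::acos(-1.0);
  double dist = std::hypot(tgtX - x, tgtY - y);                  // 1. distance to point
  double angleTo = std::atan2(tgtY - y, tgtX - x) * 180.0 / pi;  // 2. angle to point
  double angleErr = wrapDeg180(angleTo - headingDeg);            // 3. wrap to +-180
  if (std::abs(angleErr) > 90.0) dist = -dist;                   // 4. point is behind
  return dist;
}
```

With the angle correction disabled inside the settle radius, this signed error lets the distance PID settle from either side of the point.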

Another solution is to heavily bias the angle PID, so that the robot faces the target point as quickly as possible when moving. To do this, I’ve made a custom drive function that reduces forward velocity in favor of angle velocity:

void driveVector(double forwardSpeed, double yaw) {
  // combine the forward speed and rotation together
  double leftOutput = forwardSpeed + yaw;
  double rightOutput = forwardSpeed - yaw;
  // get the maximum absolute velocity
  double maxInputMag = std::max(std::abs(leftOutput), std::abs(rightOutput));
  // if the maximum is over 100%, scale down both velocities
  if (maxInputMag > 1.0) {
    leftOutput /= maxInputMag;
    rightOutput /= maxInputMag;
  }
  // set motors to leftOutput and rightOutput here
}

This way, if the yaw is made to be beyond the constraints of ±1.0, then it will reduce the forward velocity to encourage the robot to rotate more.

Here is one of my simple PID drive algorithms:

// calculate errors
angleErr = angleToPoint(targetPoint); // automatically wraps +-180
distanceErr = distanceToPoint(targetPoint);

// if the point is behind the robot, drive backwards
if (angleErr.abs() > 90_deg) distanceErr = -distanceErr;

// forget about angle inside the settle radius
if (distanceErr.abs() < settleRadius) {
  angleErr = 0_deg;
} else {
  // rotate angle to be +-90, so that the robot can drive backwards
  angleErr = rollAngle90(angleErr);
}

// calculate PID velocities
double angleVel = anglePid->step(angleErr.convert(degree));
double distanceVel = distancePid->step(distanceErr.convert(millimeter));

// send velocities to drive function
// increase turnScale to bias turning
driveVector(distanceVel, angleVel * turnScale);

The custom “closest point on current heading” algorithm I used was very similar to that, except it allowed the distance error to be calculated as 0 if no movement was possible. @Electrobotz’s suggestion of settling once the angle reaches 90° is a very good idea; you could even use the angle error from 90° as the input to your PID :thinking:.

I hope this makes sense, and that you have an idea on how to start to implement your target error -> motor velocity PID algorithm.

Let me know if you have any more questions, both about this post and if you are still struggling with your algorithms.


That summed it up really well! The part about biasing for turns makes a lot of sense; I didn't have that part, so that should really help. Also, I didn't have any logic to drive backwards in case it passed the point, which is why my robot endlessly circled the point, so that problem can be fixed now. That response makes a lot of sense, thanks so much for all the detail! It definitely helps me out quite a bit, I really appreciate it!