Cortex/393 Motion Planning Library

Hi all,

I’ve been working in the background for a few months on a linear motion planning library for ROBOTC. I’ve been posting updates and instructions on an old thread here but I’d like to start a formal thread for this tool.

Full ROBOTC library, including my Gyro filtering, PID library and motionPlanner.c: latest version .zip

Basically, I’ve designed a single .c file (motionPlanner.c) that you can include in your competition program, which provides precise position and velocity control of any motor/sensor pair in a background task in ROBOTC with minimal user effort. This is designed to offer high-end controls code to users/teams that may otherwise not have the ability or time to create it. I hope people get good use out of this (and maybe learn a thing or two). I encourage teams to use this code on their worlds robots this year. I’ll be porting this library to VCS and PROS for V5 sometime soon. I’d also like to add 2D path planning and move execution, but that may take me some time to design and implement (I’d like to use a pure-pursuit algorithm on 2D curved trajectories, but I need to figure out how to universally track a generic robot’s position and orientation accurately using only VRC-legal sensors, and that might take a while).

Feedback/bug reports would be appreciated. This was really hard to implement using ROBOTC, and there may be some funky behavior in some situations as a result. I’ll be updating this over time but the download link above should always be the latest stable release.

It generates trapezoidal/triangular/S-curve profiles for motor output, which is a very efficient way to get a robot from one position to another while minimizing error. Here’s what a planned move’s position/velocity/acceleration set points over time look like:

In order to follow the planned motion curves, cascaded PID loops are used (position PID is fed into velocity PID, which is then output to the motors). There are 14 parameters per motor: 7 gains to tune (3 position PID gains, 3 velocity PID gains, and an acceleration gain), PID integration band bounds for both loops, and 5 profiler configuration options (which I’ll cover how to configure shortly). Once I find the time, I’ll put together a formal guide on how to effectively tune all these settings, but most of the default settings will do the trick unless you need a really high level of precision.
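For anyone curious what “position PID fed into velocity PID” looks like in code, here’s a stripped-down sketch (illustrative names, not the library’s actual implementation):

```c
/* Illustrative cascade sketch -- not the library's internals. The outer
 * position loop produces a velocity correction that is added to the
 * profiled velocity set point; the inner velocity loop turns the result
 * into a motor output. */
typedef struct {
    double kP, kI, kD;
    double errorSum, lastError;
} Pid;

double pidStep(Pid *pid, double error, double dt) {
    pid->errorSum += error * dt;
    double derivative = (error - pid->lastError) / dt;
    pid->lastError = error;
    return pid->kP * error + pid->kI * pid->errorSum + pid->kD * derivative;
}

/* One control cycle: position error -> velocity adjustment -> motor power */
double cascadeStep(Pid *posPid, Pid *velPid,
                   double posTarget, double position,
                   double velTarget, double velocity, double dt) {
    double velAdjust  = pidStep(posPid, posTarget - position, dt);
    double commandVel = velTarget + velAdjust;
    return pidStep(velPid, commandVel - velocity, dt);
}
```

The key property is that both set points (posTarget and velTarget) are resampled from the planned curves every cycle, so even pure P gains on both loops behave well in practice.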

In order to use this code, just include it into your ROBOTC project:

#include "motionPlanner.c"

The profiling is done in a background task in real time, so you can use it for auton/competition without changing the library/configuration. When you call createMotionProfile() it will automatically start the required tasks and be ready to go.

NOTE: Once the motionPlanner tasks are started, setting motor outputs using motor[port] won’t work, as the motionPlanner’s control loops will override those output values every cycle. Use profileSetMotorOutput instead of motor[port].


Here’s an example use case for this tool: let’s say you wanted to set a profile for a four motor turbo drive (ports 2 and 9 on left, ports 3 and 8 on right) using encoders (dgtl1/2 for left, dgtl3/4 for right). Here’s the initial setup for that configuration:


profileSetSensor (port2, dgtl1);
profileSetSensor (port3, dgtl3);

//set max velocity in ticks (sensor units) per second
profileSetMaxVelocity (port2, 1728); //120rpm * 360 ticks per rev / 60 seconds = 1728 ticks per second 
profileSetMaxVelocity (port3, 1728);

//sets motors to follow other motors/profiles. Reversed flag swaps direction of this motor
//<port>, <master to follow>, <reversed>
profileSetMaster (port9, port2, false); 
profileSetMaster (port8, port3, false);

Ports 8 and 9 would be set to follow ports 3 and 2 respectively.

createMotionProfile will also autodetect encoders defined in the ROBOTC Motors and Sensors Setup, but not the sensor type or max velocity. So you can skip profileSetSensor for encoders configured this way, but you would still need to set the maxVelocity parameter.

To tune the control gains and features, there are a handful of functions:

//would cap the speed during a move to 1200 ticks per second, defaults to maxVelocity above
//moves are computed using this speed as the max, and will take longer if this value is lowered
//calling profileSetMaxVelocity resets this speed limit
profileSetSpeedLimit (port2, 1200); 

//a gain that makes the motor try to follow the acceleration curve more aggressively
//higher values mean that more power will be added when speeding up/slowing down to help "stick"
//to the speed curve
profileSetAccelerationGain (port2, 0.15);

//time in milliseconds to ramp up to max speed for a move
//think of this like a "slew rate", or a 0-max speed time
//defaults to 1000, or 1 second from 0 to max speed
profileSetAccelerationTime (port2, 1000);

//sets the rate (in samples per second, Hz) at which the controllers sample and set motor output
//defaults to 50Hz
profileSetSampleRate (port2, 50);

//the position PID controller needs to run slower than the velocity PID controller
//this function sets the number of velocity cycles that run for every position update
//defaults to 4, should be 3 or higher usually.
profileSetPositionSampleTime (port2, 4);

//sets the amount of acceleration "smoothing" that is done. A value of 0 makes the speed curve 
//a sharp trapezoid, a value of 1 makes it a very smooth "S" shape
//defaults to 0.5, can range from 0.0 to 1.0
profileSetJerkRatio (port2, 0.5);

//basically calls SensorValue [sensorPort] = 0;
//may have strange results depending on the type of sensor being used
//when in doubt, just reset the sensor manually if necessary
profileResetPosition (port2);

//sets position PID gains and integral cutoffs
//the integral cutoffs define a range in which the position error is summed and used for the I component.
//loops with an error value outside the defined range leave the errorSum at the previous value 
//<motorPort>, <kP>, <kI>, <kD>, <innerICutoff>, <outerICutoff>
//these are the default position PID settings
profileSetPositionController (port2, 3.0, 0.0, 0.0, 30, 150);

//sets velocity PID gains and cutoffs
//these are the default velocity PID settings, except for kP.
//kP is set to (127.0 / maxVelocity) whenever profileSetMaxVelocity is called, but can be overridden here. 
profileSetVelocityController (port2, 0.0, 0.0, 0.0, 50, 500);
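The integration band behavior described in the comments above can be sketched like this (illustrative code, not the library’s actual implementation):

```c
/* Illustrative sketch of the integration band: the error sum only
 * accumulates while |error| lies between the inner and outer cutoffs;
 * any cycle with the error outside that band leaves the sum untouched. */
double updateErrorSum(double errorSum, double error,
                      double innerCutoff, double outerCutoff) {
    double magnitude = error < 0.0 ? -error : error;
    if (magnitude >= innerCutoff && magnitude <= outerCutoff)
        errorSum += error;   /* inside the band: integrate normally */
    return errorSum;         /* outside the band: hold the previous sum */
}
```

This keeps the integral term from winding up while the error is still large (where P dominates anyway) without it chattering once the error is negligibly small.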

You can plot the output curves in ROBOTC’s datalogger like my above post by calling the function below. You can only log one motor at a time due to there only being so many datalog series.

while (true) {
  profileLog(port2); //port2, or whatever other motor you want
  delay (100); //can be any delay you want
}

There’s an advanced feature that allows users to use a pointer to a variable (int or float) in place of a sensor port, in case you’re doing some sensor processing. Once you set this, you can update the value of the variable you used and the controller will detect the changes automatically.

float processedSensorValue = 0.0;

profileSetSensorPtr (port2, &processedSensorValue);
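Under the hood the idea is simple: the controller stores the pointer and dereferences it every sample, so updates to your variable are seen immediately. A tiny sketch of the mechanism (illustrative names, not the library’s internals):

```c
/* Illustrative sketch of the pointer-sensor mechanism. The controller
 * keeps a pointer to a user-owned variable and reads through it every
 * cycle, so whatever processing the user does is picked up automatically. */
static float processedSensorValue = 0.0f;
static float *sensorPtr = &processedSensorValue;

float controllerReadSensor(void) {
    return *sensorPtr;   /* always the latest user-processed value */
}
```

For example, each loop you might set processedSensorValue to a filtered average of two raw encoder reads; the next controller sample picks it up with no extra plumbing.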

To use the motion profiler to execute moves or control the speed of motors, you call the following functions:

//calculates and executes a move from the current position to the desired position
//two consecutive calls to profileGoTo with the same position value do not stack, it's absolute positioning
//reset the sensor value to 0 in between moves to get relative position control
//<motorPort>, <position>
profileGoTo (port2, 3000);

//sets the target velocity of a motor.
//The motor will try to run at that velocity until another command is executed
profileSetVelocity (port2, 1500);

//sets the raw output of the motor. Equivalent to motor[port2] = value;
//You need to use this function to set the motor value of a profiled motor, else it will be overridden
//This value is linearized using a TrueSpeed lookup table (63 is ~50% speed, 32 is ~25% etc...)
profileSetMotorOutput (port2, 127);
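For context on that linearization step: a TrueSpeed-style table remaps the commanded value to the raw PWM value that actually produces a proportional speed. The table values below are made up purely to show the mechanism; the library ships its own measured table:

```c
/* Illustrative only: a tiny made-up remap table with linear interpolation.
 * A real TrueSpeed table is measured for the actual motors; this just
 * demonstrates how a command in 0..127 gets remapped. */
static const int demoTable[5] = {0, 25, 45, 80, 127}; /* hypothetical values */

int linearizeCommand(int command) {
    if (command <= 0)   return 0;
    if (command >= 127) return 127;
    double index = command / 127.0 * 4.0;   /* position within the table */
    int    lower = (int)index;
    double frac  = index - lower;
    return (int)(demoTable[lower] + frac * (demoTable[lower + 1] - demoTable[lower]) + 0.5);
}
```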

The goTo and setVelocity functions will immediately use the profiler to plan motion curves and then use the built-in PID cascade control to get to the desired position/velocities.

If using the red quadrature encoders (the external ones), be aware that ROBOTC only uses a 16 bit signed integer to store the value of the encoder, so it will overflow after 32,767 ticks. If you want to use a red encoder at higher resolution (by using a gear reduction), you can assign an encoder to a motor in the motors and sensors setup, and then assign the value of getMotorEncoder(nMotor) to a variable and use profileSetSensorPtr() with that value in order to get a 32 bit value (getMotorEncoder() goes all the way to 2,147,483,647). Pro tip: the integrated encoder functions work on red encoders if they’re configured in motors/sensors setup (getMotorEncoder, resetMotorEncoder, etc…).

#pragma config(Sensor, dgtl1,  ,               sensorQuadEncoder)
#pragma config(Sensor, dgtl3,  ,               sensorQuadEncoder)
#pragma config(Motor,  port2,            ,             tmotorVex393TurboSpeed_MC29, openLoop, encoderPort, dgtl1)
#pragma config(Motor,  port3,            ,             tmotorVex393TurboSpeed_MC29, openLoop, encoderPort, dgtl3)
#pragma config(Motor,  port8,            ,             tmotorVex393TurboSpeed_MC29, openLoop)
#pragma config(Motor,  port9,            ,             tmotorVex393TurboSpeed_MC29, openLoop)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//

int encoderLeftValue = 0;
int encoderRightValue = 0;

task updateEncoderVariables () {
  while (1) {
    encoderLeftValue = getMotorEncoder (port2);
    encoderRightValue = getMotorEncoder (port3);
    abortTimeSlice ();
  }
}

task main () {
  startTask (updateEncoderVariables);

  createMotionProfile (port2);
  createMotionProfile (port3);
  profileSetSensorPtr (port2, &encoderLeftValue);
  profileSetSensorPtr (port3, &encoderRightValue);
}

@jmmckinney Nice work. My usual advice applies here, add a license (MIT, GPL etc. ) so everyone knows what they are allowed to do with this code and who they need to give attribution to if they create a modified version.

@jpearman I’ve been doing research on which one makes the most sense for a project like this. I’m concerned about making sure teams don’t copy the code and claim it as their own without giving proper attribution.

Would a team have to give proper attribution if a judge asks them about their code, if I used the MIT license?

You can give teams permission to use the code in pretty much any way you want; there are dozens of licenses out there. (Usual disclaimer here: I’m not a lawyer and this topic is complicated.) Without a license (and I’ll mention copyright in a moment), technically no one can use the code without seeking your permission. A license like BSD, MIT or Apache more or less says I can do what I want with the code, use it in my commercial project etc., without disclosing any changes that I make. GPL basically says that if I change your code and use it for anything that gets distributed, I need to share my changes with everyone.
You own the copyright even if you don’t have a copyright notice in the code, but personally I’m a fan of adding something that says I wrote this code and retain copyright.
So my opinion, use either BSD or MIT, add that small amount of text in the header saying that’s what the license is. Then add something that says others should give credit for the code to you if they use it in their projects. Here’s an example from something I also posted the other day.

That has a lot in the header but essentially the original author laid out the conditions of using his code. I then released under GPL, essentially if someone modified and made potentially a better version it should also be released so everyone can benefit.

Very nice stuff. A couple of points/questions. Looks like both your position controller and velocity controller are defaulted to P controllers; that is 0 gain for I and D. You say the defaults work well. So have you tried tuning in an I and D and decided against it based on the performance of the P controllers, or am I missing something here?

You mention you’d like to do 2-D pathing in the future. Does that mean you don’t do drive-straight correction now, since that’s a type of 2-D motion management?

And a related question: Are encoders on both drive sides needed? Will it work with only one encoder?

There are no licenses I know of which would require a team to give attribution to a competition judge. If you want that kind of attribution, you’ll have to either add that as a separate requirement outside the license as @jpearman says, or create a different license of your own. Which can be difficult, but it’s by no means impossible.

Here’s a nice resource that can help answer some of your questions about existing licenses, even though it’s written from a different point of view. This document is about what you have to do when you are the one using and/or incorporating Open Source Software into your own project. But knowing that answer neatly tells you the requirements for each of the main OSS licenses.

@jpearman @kypyro I knew about the default copyright laws and rights, and was looking at a lot of the very permissive licenses. I didn’t realize I could add my own condition about credit/attribution. I’m going to take a look at those resources you guys linked and I’ll try to move on that ASAP. Thanks!

They work well in their defaults even though they’re both P controllers, since they’re cascaded together. By default it’s like having a PD loop for position, or a feed-forward PI loop for velocity. The thing that makes a system like this effective is the fact that the position/velocity set points move over time, allowing you to track headings and speeds and do more optimization.

No, it will drive straight; the way it handles it is a little different from having the sides compare with each other, though. In theory it should do it and maintain a better end point than a generic drive-straight algorithm. When you call profileGoTo, a motion curve is computed on the fly, which includes moving position and velocity set points (i.e. at time t = 10 I should be at x, moving at y speed). If you set both sides of a drive to the same waypoint, and let’s say one leads and one lags, then they will independently adjust to try and fit themselves to their planned trajectory curves (which are identical to each other), thus trying their best to drive straight. Ideally you’d set a speed limit for a move (like 80% max speed or less) so that there’s room to speed up if a side is lagging.

I’ve wanted to integrate gyro readings into moves to make driving straight even more accurate, but with this paradigm it gets really complicated if you want to maintain a constant end point. If you simply added another gain for yaw, your end point would move proportionally to the sum of the yaw error that you correct for over time. In order to guarantee a proper end position (not necessarily heading), you’d need to implement something like a pure pursuit algorithm.

If you used only one encoder on one side for driving straight, you’d just need to call profileSetMaster on all the other side’s motors. It would work but you’d lose the drive straight correction.

What I mean by 2D is basically curved paths (x + z dimensions). Right now you can use this to drive straight and do point turns (linear or 1D in the sense of planning a trajectory).

EDIT: added MIT license with attribution request. Thanks for the advice!

Offhand, I can think of two other ways to incorporate gyro info aside from changing your planning to a course-capture algorithm.

The first and more complex (and computationally expensive) method is to perform sensor data fusion on your inputs (encoders and gyro) to create a more accurate model of your position and heading. Several methods are available, and the one I’ve done more often is Kalman filtering. Probably not worth the effort and it’s not going to run on the current Cortex anyway.

The second method is to use an encoder for one side (the “master”) and to use the gyro input to synthesize an encoder value for the other “slave” side. You could even do some combined filtering/fusion by averaging the second side encoder with the gyro synthesized encoder input. The gyro synthesized encoder would be your first order approximation, while averaging it with the second side encoder would be your second order approximation. The second order approximation should be more accurate, at least as regards gyro drift.
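One way such a conversion can work (assuming the robot pivots about the center of its track; the geometry and names here are my own illustration, not necessarily the linked post’s exact method):

```c
/* Illustrative sketch: convert a gyro heading change into equivalent
 * encoder ticks for one side of the drive. Assumes the robot pivots
 * about the center of its track; geometry and names are my own. */
#define PI 3.14159265358979323846

double gyroToTicks(double headingChangeDeg, double trackWidth,
                   double wheelDiameter, double ticksPerRev) {
    /* arc swept by one side of the drive during the heading change */
    double arc = (trackWidth / 2.0) * (headingChangeDeg * PI / 180.0);
    /* convert linear travel to encoder ticks via wheel circumference */
    return arc / (PI * wheelDiameter) * ticksPerRev;
}
```

Since the two sides sweep this arc in opposite directions, the left-right encoder difference corresponding to a heading change is twice this per-side value.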

Here’s a post I did a bit ago explaining how to convert gyro info into synthetic encoder info:

For what it’s worth, I did a simple benchmark earlier this year to see if running at least a 2 DOF EKF on the cortex was possible and performant enough to actually be useful. I found that TinyEKF ran fast enough to still meet the 15ms loop times I run my closed-loop controllers at. Has to be PROS, though. ROBOTC adds too much overhead.

Updated to v1.0.2

  • Fixed bug causing consecutive profileGoTo calls to behave incorrectly.
  • Fixed bug which caused profileResetPosition to do nothing.

The download link in original post always points to the latest stable version, so keep using it.