Robot position calculation - using simplified Filter-Smoother

I tried to avoid long formulas with a large number of Greek letters but, I guess, there is no way around it - it has to be done. :slight_smile:

Let us consider our simple robot of mass (m) that drives forward using (n) motors, each set to the same power. An important difference from the previous page is that we now account for the drivetrain components having a combined moment of inertia (Idt), as measured around the wheel axles. To spin them up we need torque (Tdt), which we incorporate into the net force equation as a force (Fdt) in the floor plane by dividing the torque by the drive wheels’ radius (rw). Similarly, the drive wheels’ angular acceleration and velocity are expressed via the robot’s forward velocity and acceleration.

(Tdrive) is the total torque generated by (n) motors. It is proportional to a coefficient (Cdt) which converts the voltage differential between the motors’ input voltage (Vpwm) and (Vemf) into available torque. Note that (Cdt) also accounts for kinetic friction losses inside the motors, which are proportional to the torque provided by the motors.

(Frrf) denotes the friction force loss from rolling resistance in the soft floor tiles and wheel tires. Since (Frrf) is proportional to the robot’s mass via Fnormal=mg, it will be expressed as (Crrf)*(m). Finally, (Cemf) is a coefficient that relates (Vemf) to the wheels’ angular velocity (w).

https://vexforum.com/attachment.php?attachmentid=9663
https://vexforum.com/attachment.php?attachmentid=9664

As you can see, the robot’s acceleration (a), velocity (v), and driven distance (s) can be expressed using three coefficients (Cp), (Cf), and (Ct), which can be found experimentally. Since we know how those coefficients relate to the robot’s mass, number of motors, and driving wheel radius, we can, if necessary, derive the underlying coefficients (Cdt), (Crrf), and (Cemf) as well.

So how do we determine the values of the coefficients (Cp), (Cf), and (Ct)? It turns out to be easy. All you need is to conduct two experiments with your robot.

The first experiment is to place the robot on supports such that the wheels rotate freely in the air and let it run. Then, after the wheels have reached a stable speed, you log quad encoder measurements at several time points.

The second experiment is very similar, but you let the robot run on the floor tiles so that you can observe the rolling resistance coefficient (Cf).
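Here is a minimal sketch of the logging loop that can serve both experiments (leftDrive and leftQuad are placeholder names for whatever your motor/sensor configuration uses):

// Minimal logging sketch for both experiments. Apply a fixed power, then dump
// timestamped encoder counts to the debug stream so the velocity curve and its
// asymptote can be plotted offline.
task main()
{
	SensorValue[leftQuad] = 0;     // reset the encoder
	motor[leftDrive] = 100;        // fixed Vpwm for the whole run
	long t0 = nSysTime;
	int i;
	for( i = 0; i < 100; i++ )     // ~5 seconds of data
	{
		writeDebugStreamLine( "%d,%d", nSysTime - t0, SensorValue[leftQuad] );
		delay( 50 );               // sample every 50 msec
	}
	motor[leftDrive] = 0;
}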

Here is a graphical illustration of those experiments:

https://vexforum.com/attachment.php?attachmentid=9665

The horizontal axis is time and the vertical axis is for velocity (v) and distance (s). The distance plots (blue lines) approach the grey lines, whose slopes correspond to the final wheel velocities in each experiment. The points where those lines intersect the horizontal axis correspond to the (Ct) components due to either (Idt) or the robot mass (m).

Note: the formula on the graph reads s(infinity)=(…)(t-Ct). Here “infinity” really means “large values of t”, where the exponential portion has practically gone to zero.
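For reference, here is the closed-form shape the graph illustrates, written as a sketch with the coefficient names above (Vfinal denotes the final velocity; the attachments carry the full derivation):

	v(t) = Vfinal * (1 - e^(-t/Ct))
	s(t) = Vfinal * (t - Ct * (1 - e^(-t/Ct)))

For t much larger than Ct the exponential dies out and s(t) ≈ Vfinal * (t - Ct): a straight line with slope Vfinal that crosses the time axis at t = Ct, which is exactly how you read (Ct) off the plot.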

In the next example our robot will only rotate in place. We assume either a two-wheeled turtle-bot or a symmetrical x-drive where each wheel is at a distance (rr) from the robot’s center of rotation. It is also assumed that the center of rotation roughly coincides with the center of mass (otherwise the equations become crazy complex).

All (n) motors will be run with the same power (Vpwm) in the same direction to cause robot rotation (i.e. clockwise or anti-clockwise).

The robot’s moment of inertia around its center of rotation is (Ir) and the drivetrain moment of inertia around the wheels’ axles is (Idt).

Assuming there is no wheel slippage, the robot’s body angular acceleration and velocity can be expressed via the wheels’ angular acceleration (alpha) and velocity (w).

First we write the force equation. The robot’s body will resist angular acceleration with torque (Tr), which translates to force (Fr=Tr/rr) at the wheel contact points. Similarly, the drivetrain will resist acceleration with (Fdt=Tdt/rw).

The solutions for (w) and (theta), which are the wheels’ angular velocity and total rotation angle, can easily be converted to the angular velocity and heading of the robot’s body.

https://vexforum.com/attachment.php?attachmentid=9671

The resulting formulas depend on the same (Cp) and (Cf) as for forward motion, but the value of the timing coefficient (Ct2) is different from (Ct), because angular motion depends on the robot’s moment of inertia (Ir). To find (Ct2) you use the same technique of running the robot on the ground (rotating) and then plotting quad encoder measurements vs time.

As I was reviewing my math, solving for both translation and rotation of the robot, a couple of questions came up.

First, the easy question:

Every motor has some viscous friction which is proportional to the rotation speed and reduces available torque by (Tvf = Cvf*w). I don’t mention it explicitly in the above equations because the experimentally measured value of Cemf conveniently accounts for both viscous friction and the electromotive force.

We only talk about kinetic friction, which is proportional to the output torque, and rolling resistance friction, which is proportional to the slope of the indentation the robot makes in the tiles and therefore to its mass. But there must be something else, because when the wheels are in the air neither of those two forces is present, yet when power is turned off the wheels lose their energy and stop.

In a lab setting Cvf would be easy to measure by monitoring the energy flow into a motor that runs without load. However, few of us have access to good test equipment.

So here is an easy puzzle/question: find one or more easy ways to measure the approximate Cvf value of a VEX motor using nothing but common VEX parts.

The second question is more complex. It might be easy for people who do mechanical dynamics for a living, but I am not one of them.

Suppose we have an X-drive robot with omni-wheels. The motors for both wheels in each diagonal pair will be commanded the same power, but each pair may get a different power level (so there is no robot body rotation). Also, assume the kinetic friction in the omni rollers is negligible.

Then it should be possible to apply the forward motion equations (from post#41) to each wheel pair independently. In the robot’s frame of reference one pair will give the solution for dx and the other for dy. This lets you draw the path of the robot as a function of time. Even though the path may be curved due to non-linear acceleration, dx and dy are fully independent because of the X-drive geometry.

Is this correct? I would appreciate it if somebody with experience weighs in.

Your calculations are very nice and verbose. The only thing that strikes me as odd is that the force due to rolling friction doesn’t appear to be a function of the robot’s velocity.

I wouldn’t consider myself experienced, but I can take a shot at it.
Based on how you have described the example, your conclusion should be correct. The principle is similar to the classic projectile motion problem where the kinematics in the x and y directions are completely uncoupled and can be written independent of one another.

I can only think of one caveat. I imagine that the energy lost from the robot due to the interaction between the wheel and the soft foam floor is a function of the wheel’s velocity. Even with friction-less rollers, energy is still lost as the wheels translate sideways across the field and have to push the “wave” of foam around. If this is the case, then the problem turns into something similar to projectile motion with drag and the two equations become coupled. However, this effect probably doesn’t become dominant until the robot gets quite heavy and starts to significantly sink into the field, so I think it would be fair to ignore it and go with the previous conclusion as long as the robot isn’t too heavy.

Just to clarify, the last part was referring to the kinetics of the problem presented (which would come up if one was trying to work through the prediction step equations like you have been). The kinematics remain uncoupled.

My knowledge of rolling resistance is limited to the Wikipedia article and a few other sources I could find on the internet. Most research is about soft inflated tires running on hard surfaces. The dominant factors determining Crr in that case are tire temperature and pressure, with vehicle velocity taking third place; Crr can actually slightly decrease with higher velocities before going up again in some scenarios (see fig 2.23 on pg16).

In cases where tires are firm, the dominant opinion is to use a constant Crr; velocity doesn’t need to be considered at all.

We have a slightly different situation with firm tires running on a somewhat soft surface. The few sources that consider rolling resistance on soft surfaces don’t even talk about velocity in that context. The best source on this subject analyzes aircraft running on grass and soil surfaces and is hardly applicable to our scenario.

So, here is how I understand it after looking through the above links:

https://vexforum.com/attachment.php?attachmentid=9679

When the robot is at rest it makes an indentation in the foam surface with a depth roughly proportional to its weight. The section of the foam-to-wheel contact is depicted by (ℓ).

When the robot moves forward with speed (Vrobot) it starts climbing out of the hole. As it moves toward the edge, the new section of the foam starts compressing and goes down with some speed (Vsink). The faster the robot moves, the closer it gets to the edge, and the smaller the new section of the floor (ℓ) that now supports the robot’s weight. The smaller that section is, the larger the pressure and the faster the foam sinks.

For each value of velocity (Vrobot) there will be a point of equilibrium where the foam sinks down just as fast as the robot advances. If we measure the value of Crr, it will correspond to a slope at an angle (theta) such that the robot advances a distance (delta s) just as fast as the foam sinks to a depth (delta h). Without an analytical model, the simplest way to determine the relation between Vrobot and Vsink is, essentially, to run an experiment measuring the value of Crr at various robot velocities.

If I could persuade technik jr that I really need the chassis finished to run this experiment, and we get the chassis done … then I’ll try it. But it will probably be much longer before the chassis is properly aligned and my measurement task is fully debugged to get enough sensitivity for a conclusive Crr vs velocity plot.

Edit: almost forgot to mention the most important part: the (Cf) coefficient that includes (Crr) is the only coefficient that does not fully depend on the robot, as it might change from field to field. So the smart thing to do is to let the filter estimate this coefficient as part of its state. The robot will start with a hardcoded average value for (Cf), then drive a couple of feet straight forward and estimate a correction for (Cf) on this specific field. By the time the robot starts making turns and needs more precision it will already have a refined value of (Cf). If it is indeed the case that (Crr) depends considerably on speed, then we will see the pattern in the drifting filter estimates of (Cf) at various speeds and will be able to easily fix the equations.
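A minimal sketch of what “let the filter estimate (Cf)” could look like in code (the function, the residual source, and the 0.05 gain are all made-up placeholders; a real filter would set the gain from covariance):

// Hypothetical sketch: treat Cf as an extra filter state and nudge it with
// the residual between measured and model-predicted forward travel.
float updateCf( float Cf, float sMeasured, float sPredicted )
{
	float Yk = sMeasured - sPredicted; // negative residual => more rolling resistance than modeled
	return Cf - 0.05 * Yk;             // small made-up gain; covariance would set this properly
}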

Even though we didn’t have enough progress on our testing rig to post any data, technik jr had a good question about the equations. What would happen if (Vpwm=0)? Will the robot drive backwards because of (-Cf)?

Obviously, it will not. The short answer is that friction is a resistive force. It exists only when an external force is applied, resisting the object’s movement. If there is no external force then the friction force is zero.

https://sp.yimg.com/ib/th?id=OIP.M3d9337db6596c5c6308e9deafb503a26H0&pid=15.1&rs=1&c=1&qlt=95&w=146&h=118

For the longer answer we need to look at the image in the previous post. In the left figure the wheel is at rest. The foam that it touches pushes it up vertically, and in the horizontal direction equal opposing forces push it toward the center from both the left and right sides.

The right figure shows the moving wheel in a state of dynamic equilibrium. Notice that while it interacts with the foam on the right, it is not touching any on the left. The horizontal component of the force is asymmetrical and pushes it only to the left. If you imagine the robot suddenly losing all its forward velocity (and all kinetic energy as a result), it would actually roll back to the left a little bit, while the floor rebounds on the left, sinks on the right, and eventually forms symmetrical support on both sides.

So, in the case of rolling resistance friction, the robot will actually drive backwards while its support is asymmetric. Since the foam of the field rebounds and sinks very fast, this will not take a lot of time.

While I must admit this thread has totally lost me in its advanced state, I have been able to successfully implement a Kalman filter on my robot. It only takes in data from two sources (but from one of them in two different ways).

The goal of this filter is to get a more accurate gyro reading that doesn’t drift as much over time.

The first input is the raw gyro value. Then I use past gyro values to predict angular acceleration and velocity for the next phase, and I use these to calculate where I predict my angle to be for the next phase; the idea here is to prevent drift. I then use a Kalman filter to combine these values.

Next I compare values from the left and right encoders to predict a change in angle during that timestep, and I combine this with my gyro values in a second Kalman filter.

Testing remains to be done to see if the filtered version is more accurate than the unfiltered one, but I hope it is

I think these calculations are very useful, but not very precise at the VEX robotics level.

Sorry to hear that. I thought that when technik jr lost me after the first differential equation, there was still some room to go with the older students. But I understand you - it is a difficult subject.

That’s great! Our initial goal was to use information from various noisy sources to reduce overall error and it looks like it works for you!

Prediction of future sensor measurements is the most important first step towards setting up a filter that is capable of doing sensor fusion. You definitely have a functioning filter; however, I wouldn’t call it a Kalman Filter. The formal definition of a KF requires it to do a specific thing with the state uncertainty (also called covariance; think of it as error bounds). You can still go some way toward a Kalman Filter by initially hardcoding how much you trust each of the sources: the current predicted state, the gyro reading, or the quad encoders on the wheels.

I think that, instead of using two independent filters, having one filter that fuses both sources would be a much better option. Two independent filters just smooth the noise, each in its own channel. A single filter that combines all data sources will be able to take advantage of the cross-correlations. Your second filter sounds like it might be the one, but from your description I couldn’t tell for sure.

Collin, could you share your code? I feel this thread might be better served if we start from something that already works and most readers understand, rather than trying to rewrite a textbook on kinematics of rigid bodies or theory of estimation.

I must admit that I don’t have enough access to the Cortex and parts to conduct my own experiments. I promised technik jr and teammates that this season I will not touch the robot until they have the first fully functioning prototype. They chose to interpret that in the most literal way. :frowning:

The good news is that the robot already drives around and picks up the balls. Even better news is that they have learned how friction affects torque as it travels through the gear-train. They actually used the formulas to design the latest flywheel setup. It used to be that it needed two torque motors to start, but now, after they have done more realistic torque estimates and aligned bearings to reduce friction, a single speed motor can start and spin up a 5” flywheel with a 21:1 external gear ratio! :slight_smile:

The bottom line is: I will really appreciate it if people with “full access” to functioning robots help drive this thread and keep it at an easy-to-understand level.

Maybe I am lost as to what a Kalman filter is, but I used this site as a tutorial and then modified the prediction step to work with prediction of future readings.

http://home.wlu.edu/~levys/kalman_tutorial/

This tutorial says we would use PastState*A to predict the next state, but the state of my gyro would not change based on a constant multiplied by the last state, so I needed a more complex prediction.

Here is the code. I want to preface this by saying “working” means it compiled without error; I have not yet been able to test whether this filter effectively calculates my next state with more accuracy than just the gyro. We have been re-building our robot recently, so I don’t have total access to it right now. I have a little code bot I use that I will bring home tomorrow to test the code on.

task gyroFilter(){
	float gain =  0;
	float R = .016; // published accuracy of the gyro
	float lastAngle = theta; // theta is the value coming from my gyro
	float lastDAngle = 0;    // the change in angle from the last cycle
	float aAngle = 0;        // angular acceleration
	float thetaP = theta;    // prediction for the next state of theta
	float Perr = 0;          // prediction error
	float lastPerr = 0;      // prediction error from last cycle
	float DangleP = 0;       // predicted change in angle for last cycle
	float Dangle = 0;        // change in angle during this cycle
	float A, AE;             // constants for which lastState * A = current state
	float thetaPE = theta;   // predicted next angle using encoder
	float PerrE = 0;         // prediction error from encoder
	float lastPerrE = 0;
	float gainE = 0;         // encoder gain
	float DanglePE = 0;      // predicted change in angle based on encoders
	float ldist, sdist, rad; // larger wheel distance, smaller wheel distance, radius of turn
	float circum;            // circumference of circle robot is moving on
	float Eerr = (8.0/360.0); // published error of encoder (8/360 would be integer division = 0)
	////////////////////////////////////////////////////////////////////////////////////////////
	while(true){
		// PREDICTION PHASE 1 - using Gyro
		///////////////////////////////
		Dangle = theta - lastAngle;
		aAngle = Dangle - lastDAngle;
		DangleP = lastDAngle+aAngle;
		thetaP = lastAngle + DangleP;
		A = thetaP/lastAngle;
		Perr = A*lastPerr*A;
		// PREDICTION PHASE 2 - using encoders (LD, RD are the left/right
		// drive encoder distances, maintained elsewhere in the program)
		/////////////////////////////////////////////
		if(abs(LD) > abs(RD)){
			ldist = -LD;
			sdist =  RD;
		}
		if(abs(RD) > abs(LD)){
			ldist = RD;
			sdist = LD;
		}
		rad = abs(ldist/(sdist+.0001)) * 10;
		circum = 2*rad*PI;
		DanglePE = ldist/(circum+.001);
		thetaPE = lastAngle + DanglePE;
		AE = thetaPE/lastAngle;
		PerrE = AE * lastPerrE*AE;

		//UPDATE PHASE 1 - using Gyro and Predicted Gyro
		/////////////////////////////////////////////
		gain = Perr/(Perr+R);
		thetaP = thetaP + gain*(theta - thetaP);
		Perr = (1-gain)*Perr;
		// thetaP now becomes the accepted value for theta; we will now combine this with a third input,
		// which is the encoders, to gain a third prediction of what theta should be.
		// By combining the three predictions we hope to arrive at a more accurate conclusion.
		////////////////////////////////////

		//UPDATE PHASE 2 - using thetaP and predicted angle based on wheel encoders
		//////////////////////////////////////////////////////////////////////////
		gainE = PerrE/(PerrE + Eerr);
		thetaPE = thetaPE + gainE*(thetaP-thetaPE);
		PerrE = (1-gainE)*PerrE;
		//////////////////////////////////////////////////////////////////

		// Now thetaPE, which is our prediction combining the measured gyro value,
		// the predicted gyro value based on past gyro values,
		// and the predicted gyro value based on the encoders,
		// becomes the accepted value for our current angle.

		//SETTING PHASE - here we set our angle and sensors to the calculated values, and
		// set the recursive variables so that they can be used in the next loop
		SensorValue[gyro1] = SensorValue[gyro] = thetaPE*10;
		lastAngle = thetaPE;
		lastDAngle = Dangle;
		lastPerr = Perr;
		lastPerrE = PerrE;
	}
}

That’s a great link! It could save us lots of posts. Everybody who wants to learn about filters, please read it to the end. It has lots of math symbols when it gets to matrices, but just try to read it - the descriptions are very good.

Now back to your code, Collin. At first I was confused by the variable names, but once I figured out what was going on I was very impressed by how much of the filter you have implemented! There are several problems though.

My best guess is that you followed the tutorial through the scalar parts and correctly implemented the formulas for a scalar filter. You tried to use two scalar filters: the first one to filter out the drift of the gyroscope and the second to combine gyro and encoder measurements.

The first problem is that Mr. Levy introduced process noise (Q) for computing the predicted value of (P) only in part 14, well into the matrix section of the tutorial. As a result you end up with Perr and PerrE both having values of zero. This leads to gainE being zero and, essentially, the resulting thetaPE depending only on the quad encoders and ignoring the gyroscope.

It would be easy to fix by adding some process noise: (Perr = A * Perr * A + Q), where the value of Q is found by trial and error for each type of sensor. However, I am still trying to wrap my mind around how the values of Perr and PerrE would change over time.
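In Collin’s notation the fixed cycle would look something like this (the Q value is made up for illustration):

float Q = 0.01;               // process noise, found by trial and error per sensor
// predict:
Perr = A * lastPerr * A + Q;  // Perr can no longer collapse to zero
// update:
gain = Perr / (Perr + R);     // gain now stays strictly above zero
thetaP = thetaP + gain * (theta - thetaP);
Perr = (1 - gain) * Perr;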

The second problem is sensor fusion. If you somehow get those two independent filters to do the fusion, it would be simply amazing, but most likely painful, and you will end up with code equivalent to a single multi-parameter filter.

It may sound as if, in order to go multi-parameter, you must use matrices and vectors. That is not true. For just three sensors the code may actually end up shorter without them. And if you need to be more advanced, there is an excellent BNSLib implementing vectors, matrices, and lots of other cool stuff (https://github.com/JMarple/BNSLibrary).

Finally, I don’t see any delay() in your loop, so the task must be cycling fast, doing updates as soon as the sensor values change one tick at a time. I am most concerned about the implicit assumptions about the time between measurements. Every time another task interrupts this task (taking some time) you will end up with an error when trying to predict future values based on the deltas.

The fix for that would be to timestamp every measurement and use time explicitly in your predict and update steps.

To help transition to that mode I will include the code for my sensor task. It is not tested yet, but we will improve it and learn how to tie it into the filter code in the following posts.

Disclaimer: this code for high fidelity measurement task is still untested at the time of writing. Once edit window for this post closes, an updated version could be found at https://github.com/technik3k/VexLib3k.

Notes:

According to the RobotC help page on available data types, long is 64 bit, while float is only 32 bit. If we were to keep timestamps in floats with microsecond (10^-6) precision, we would only be able to reference 32-second time periods at most. Going to 64-bit longs has no such problem.

We intend to start the sensor task with kHighPriority and further hogCPU() it while it monitors the sensors (see the RobotC help on task control: http://help.robotc.net/WebHelpVEX/index.htm#Resources/topics/VEX_Cortex/ROBOTC/Task_Control/hogCPU.htm).

// Shared interface with the filter task:
//   hf_sensor_cnt    - number of monitored sensors
//   hf_sensor_id[]   - RobotC sensor index of each
//   hf_sensor_val[]  - latest value read for each sensor
//   hf_sensor_t[]    - timestamp of that value (usec)
//   hf_sensor_nxt[]  - filter's prediction of the next change time (usec)
task sensor_task()
{
	int i, n;
	short curVal[10]; // last seen value for each monitored sensor
	long  curCnt[10]; // sub-tick count at which it changed (-1 = no change yet)
	while( true )
	{
		for( i=0 ; i<hf_sensor_cnt ; i++ )
		{
			curVal[i] = SensorValue[hf_sensor_id[i]];
			curCnt[i] = -1;
		}
		hogCPU(); // obtain exclusive lock on CPU by this task
		long nSysCurr = nSysTime;
		while( nSysCurr == nSysTime ); // spin until timer ticks
		nSysCurr = nSysTime;
		for( n=0 ; 1 ; n++ )
		{
			if( nSysCurr == nSysTime ) // still in the same time tick
			{
				for( i=0 ; i<hf_sensor_cnt ; i++ )
				{
					short s = SensorValue[hf_sensor_id[i]];
					if( curVal[i] != s ) // has sensor changed?
					{
						curVal[i] = s; // remember new value
						curCnt[i] = n; // remember sub-tick count
					}
				}
			}
			else // nSysTime changed
			{
				long min_dt = 101000;
				for( i=0 ; i<hf_sensor_cnt ; i++ )
				{
					if( curCnt[i] >= 0 ) // there was a sensor change during prev tick cycle
					{
						hf_sensor_val[i] = curVal[i];
						hf_sensor_t[i] = nSysCurr*1000 + (curCnt[i]*1000)/n; // interpolate usec
						curCnt[i] = -2; // this sensor has been read in this measurement cycle
					}
					else if( curCnt[i] == -1 ) // for sensors that still haven't changed
					{
						long dt = hf_sensor_nxt[i] - nSysTime*1000; // when would it change?
						if( dt>0 && dt<min_dt )
						{
							min_dt = dt; // remember closest future expected change time
						}
					}
				}
				if( min_dt > 3000 ) // if it is more than 3 msec away - yield to other tasks
				{
					break;
				}
				nSysCurr = nSysTime; // otherwise monitor next clock cycle
				n = 0;
			}
		}
		releaseCPU(); // release exclusive lock on CPU by this task
		delay(1); // yield at least one 1 msec cycle to other tasks
	}
}

In the next post I will try to show how to do filter predict and update steps based on timestamps, but if somebody wants to jump ahead and post it first, please feel free to do so.

Let’s do our first filter example. It is going to be very simple. In fact, Collin’s code is far ahead of it, because he uses covariance (Perr, PerrE). This example will not use covariance and will trust measurements 100%, essentially having gain = 1.

Consider a simple contraption, like jpearman’s gyro test rig:

Alternatively, you may mount the Cortex onto the turntable gear, put a battery on top of it, and attach the gyro to the battery to address the issues discussed in this thread. One way or another, have a gyro that can stay still or be rotated when you choose.

We are going to use the following variables (names are consistent with Dr. Levy’s tutorial):

long Tk1; // timestamp of the gyro measurement (internal unit is usec)
short Zk; // gyro measurement (internal unit is gyro ticks)
long Tk;  // time of the current state estimate (internal unit is usec)
float Xk[4]; // state estimates at a time Tk:
// Xk[0] - gyro bias (how much gyro had drifted from heading) (ticks)
// Xk[1] - gyro drift rate (how much gyro drifts per unit of time)(ticks/usec)
// Xk[2] - heading (of the robot or platform that gyro is mounted on) (ticks)
// Xk[3] - angular velocity (rotation rate of robot or platform) (ticks/usec)

The predict step will move the state Xk at time Tk to the time of the measurement Tk1:


float dt = 1.0*(Tk1-Tk);
Xk[0] = Xk[0] + Xk[1]*dt;   // expected gyro bias
Xk[2] = Xk[2] + Xk[3]*dt;   // expected platform heading
Tk = Tk1;

Next the measurement update step will compute residuals and corrections to state estimates:

float Yk = Zk - (Xk[2] + Xk[0]); // residual = actual reading - expected value

if( motor_power == 0 ) // we are expecting that gyro platform is stationary
{
  Xk[0] = Xk[0] + ( 1.0    ) * Yk;      // updating gyro bias 
  Xk[1] = Xk[1] + ( 1.0/dt ) * Yk;      // and drift estimates
}
else // motor turning the platform
{
  // if( residual_ratio > some_max ) // robot cannot physically turn this fast
  //   do something to handle this abnormal situation
  //   for example, reject this measurement and panic if it's 5th time
 Xk[2] = Xk[2] + ( 1.0    ) * Yk;      // update robot heading
 Xk[3] = Xk[3] + ( 1.0/dt ) * Yk;      // and angular velocity estimates
}

We assume that the previous gyro update was at time Tk and the gyro had been drifting for the period (dt).

Finally, we do some housekeeping and predict next measurement time for the precision sensor_task:


long nextTk=Tk+1.0/(Xk[1]+Xk[3]); // t = s/v = 1 tick/(internal+platform drifts) 

To understand state updates, ask yourself this question: what if actual measurement came (n) ticks over the expected value? What does it mean and what needs to be done?

If the platform is stationary then we need to update our gyro bias and increase the modeled drift rate such that next time the expected and actual readings will match up. (Collin, you essentially do that in your code when you reset SensorValue[gyro] back to zero or thetaPE).

If the platform is in motion (motor power applied) then we assume that the drift stays the same and count gyro changes toward the platform motion. Any drift errors during that time will introduce errors into our final solution, but there is nothing we can do about that with just one sensor. You need to be careful to extend the “powered” filter mode some time beyond the moment the motor power is turned off, since the robot still moves using up its kinetic energy.
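A sketch of that mode decision (the 500 msec coast-down is a made-up constant to tune for your robot):

long powerOffTime = 0;

// Returns true while the filter should stay in "powered" mode, including a
// coast-down period after the motor power is cut.
bool poweredMode( int motor_power )
{
	if( motor_power != 0 )
	{
		powerOffTime = nSysTime;  // remember the last moment power was applied
		return true;
	}
	return ( nSysTime - powerOffTime ) < 500; // still coasting on kinetic energy
}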

Believe it or not, since we have 4 states and just one measurement source, we are already using the matrix form of the equations and partial derivatives. If you look at tutorial part 19 or the EKF page on wikipedia you will see the equation for the Xk measurement update is:

	Xk = Xk + Kk * Yk
	Xk = Xk + Gk (Zk - h(Xk))
Where Xk is a vector of state estimates, Yk is a vector of measurement residuals (Yk = Zk - h(Xk)), and the only confusing part is that the tutorial calls the gain matrix Gk, while wikipedia and everybody else calls it Kk.

In our example Xk is a vector of 4 values, Yk is a vector of 1 measurement, and Gk is the matrix {1.0, 1.0/dt, 0, 0} for stationary mode and {0, 0, 1.0, 1.0/dt} for “powered” mode.

So we have successfully figured out the partials without using any differential equations or anything else equally complex that you would expect from the name.

You would be correct here. The other stuff is still a little lost on me; I am working through it and will hopefully update the code over time. I decided just to start at the first level.

  1. How would I go about finding process noise? I never really understood how they found process noise in the tutorial. I think that the solution to this is not to add process noise, but to set the starting state of Perr and PerrE to something around .5 or maybe 1.

  2. As to how Perr would change, that is totally in relation to A. The resting state of A would be 1, meaning no change in gyro value between steps; that is why setting Perr to .5 might be good, because A would go up and down around one, keeping Perr between 0 and 1. The larger the change in gyro predictions between steps, the more error we expect; that is why Perr changes in relation to A. However, if A is less than one (i.e. the gyro value goes down), why would we want Perr to get smaller?

Finally, a question: a Perr of zero would lead to a gain that favors the prediction, and a Perr of 1 would lead to a gain favoring the measurement. Is this true?

Collin, let me answer your questions in the reverse order:

The formula for gain, as found in part 5 of the tutorial, is G = P/(P+R). So, yes, the first assertion about Perr=0 is correct, but the second one about Perr=1 may or may not be true. Here is the reason why.

The common misconception, which Dr. Levy did not dispel (but inadvertently reinforced), is that the value of (P) falls between 0 and 1. Many people familiar with math in general but having no experience with filters make this subconscious assumption, because most of the time in math (P) stands for some kind of probability.

Covariance values (P) and sensor variance (R) are not confined to the range of 0 to 1 and could be any positive number. The meaning of covariance is how much uncertainty there is in the corresponding state estimate.

For example, you may have a sonic range meter that returns distance in millimeters. The manual may say that when measuring distances around 1 meter its variance (R) is 16 mm^2. It means that they ran a bunch of tests and their standard deviation was 4 mm.

You run a filter and the state estimate (X) is 900 mm with a corresponding covariance (P) of 49 mm^2. This means that your state estimate has larger uncertainty than the sensor accuracy. The gain would be G = 49/(49+16) = 0.754 and it will take most of the measurement residual to correct your state.

On the other hand, if your covariance (P) is 2 mm^2, you are well within your sensor accuracy, and with gain G = 2/(2+16) = 0.11 you will mostly trust your current state estimate (prediction) and make relatively little correction based on the measurement.

You are correct in describing what your code does. What I really wanted to say is that, since you have set up your second filter in a nonstandard way, I wasn’t sure how Perr should behave in order to have stable filtering.

We want covariance (Perr) to be small, but not too small, because then the filter will be ignoring measurements or will approach the true value very slowly.

Usually, you would tweak all three parameters: initial covariance (P), sensor accuracy/noise (R), and process noise (Q) as you calibrate your filter.

First, you develop null cases for your system to isolate one sensor at a time. This lets you figure out good initial values for (P) and confirm that your own sensor meets the manufacturer's specifications for (R).

Then you let the system run for a while and analyse the residuals. If you see that covariance drops and the filter becomes overconfident and starts ignoring measurements, you may want to introduce some process noise (Q). Then, if you see it overcorrecting, with the state estimate having too much noise itself, you may decrease (Q).

Also, if the filter doesn’t keep up with fast changing parameters, increasing (Q) will force it to trust the predicted state less and accept more corrections from the measurements. In general, if you feel that you don’t model all of the physics correctly, then the easy solution is just to pump more process noise into the system. You pay for it by having less stable estimates that keep jumping around with noisy measurements.

Finally, if you encounter an abnormal situation you may want to temporarily inject some process noise, but that may be a tricky call depending on the situation.

The second example still doesn’t use covariance, but has enough filter to demonstrate sensor fusion.

Consider a two-wheeled robot having a gyro and a quad encoder on each of the wheels:


long Tk1[3]; // timestamp of the sensor measurement (internal unit is usec)
long Tpr[3]; // timestamp of the previous sensor measurement (usec)
short Zk[3]; // sensor measurements (internal units are sensor ticks)
// Zk[0] - gyro
// Zk[1] - left quad
// Zk[2] - right quad

long Tk;  // time of the current state estimate (internal unit is usec)
float Xk[8]; // state estimates at a time Tk:
// Xk[0] - gyro bias (how much gyro had drifted from heading) (gyro ticks)
// Xk[1] - gyro drift rate (per unit of time) (gyro ticks/usec)
// Xk[2] - heading of the robot (gyro ticks)
// Xk[3] - angular velocity of robot body (gyro ticks/usec)
// Xk[4] - left wheel total travel (encoder ticks)
// Xk[5] - left wheel velocity (encoder ticks/usec)
// Xk[6] - right wheel total travel (encoder ticks)
// Xk[7] - right wheel velocity (encoder ticks/usec)

Since we are not modeling wheel slippage, the robot’s heading change rate (angular velocity) is locked to the difference between the left and right wheel velocities:

	Xk[3] = Crot * (Xk[7] - Xk[5])

Where Crot is some constant depending on the wheel size, the distance between the wheels, and the conversion factors of gyro and quad ticks to angles. We also assume that the gyro reports a positive angle when you turn to the left.
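To make Crot concrete, here is one way to compute it; every number below is an assumption for illustration (4-inch wheels, 12 inches between the wheels, 360-tick quads, and a gyro reporting 3600 ticks per full circle):

// Illustrative computation of Crot (all numbers are assumptions).
float rw = 2.0;    // wheel radius, inches (4" wheel)
float W  = 12.0;   // distance between the two wheels, inches (= 2*rr)
float Ne = 360.0;  // quad encoder ticks per wheel revolution
float Ng = 3600.0; // gyro ticks per full robot revolution (0.1 deg/tick)

// Wheel speed is Xk[5]*(2*PI*rw/Ne) in/usec, body rate is (vR-vL)/W rad/usec,
// and the gyro reports Ng/(2*PI) ticks per radian, so the 2*PI cancels:
float Crot = rw * Ng / ( Ne * W ); // = 2*3600/(360*12) ~ 1.67 gyro ticks per quad tick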

The predict step is called before we process each measurement and advances the filter state from time Tk to the time of the measurement Tk1:


float dt = 1.0 * ( Tk1[i] - Tk ); // i = index of the sensor being processed
Xk[0] = Xk[0] + Xk[1] * dt; // gyroBias += drift * dt
Xk[2] = Xk[2] + Xk[3] * dt; // heading  += angVel * dt
Xk[4] = Xk[4] + Xk[5] * dt; // leftDist   += leftVel * dt
Xk[6] = Xk[6] + Xk[7] * dt; // rightDist += rightVel * dt
Tk = Tk1;

Measurement Updates are unique for each of the sensors:


float dt = Tk1[1] - Tpr[1]; // time since previous Left quad meas update
float Yk = Zk[1] - Xk[4];   // residual = measurement - expected value

Xk[4] = Xk[4] + (0.25   ) * Yk; // left wheel 
Xk[5] = Xk[5] + (0.25/dt) * Yk; // left velocity 
Xk[2] = Xk[2] - Crot * (0.25   ) * Yk; // left wheel turns robot to the right
Xk[3] = Xk[3] - Crot * (0.25/dt) * Yk; // dXk[3] = - Crot * dXk[5]

float dt = Tk1[2] - Tpr[2]; // time since previous Right quad meas update
float Yk = Zk[2] - Xk[6];   // residual = measurement - expected value

Xk[6] = Xk[6] + (0.25   ) * Yk; // right wheel 
Xk[7] = Xk[7] + (0.25/dt) * Yk; // right velocity 
Xk[2] = Xk[2] + Crot * (0.25   ) * Yk; // right wheel turns robot to the left
Xk[3] = Xk[3] + Crot * (0.25/dt) * Yk; // dXk[3] = + Crot * dXk[7]

float dt = Tk1[0] - Tpr[0]; // time since previous Gyro meas update
float Yk = Zk[0] - (Xk[2] + Xk[0]); // residual = measurement - expected value

Xk[0] = Xk[0] + ( 0.03    ) * Yk;      // gyro bias 
Xk[1] = Xk[1] + ( 0.03/dt ) * Yk;      // gyro drift
Xk[2] = Xk[2] + ( 0.2     ) * Yk;      // robot heading
Xk[3] = Xk[3] + ( 0.2/dt  ) * Yk;      // robot angular velocity
Xk[4] = Xk[4] - ( 0.1/Crot    ) * Yk;
Xk[5] = Xk[5] - ( 0.1/dt/Crot ) * Yk;  // dXk[5] = - dXk[3]/(2*Crot)
Xk[6] = Xk[6] + ( 0.1/Crot    ) * Yk;
Xk[7] = Xk[7] + ( 0.1/dt/Crot ) * Yk;  // dXk[7] = + dXk[3]/(2*Crot)

The key to understanding the above code is observability. You always want to update every state parameter which is observable from your measurement.

For example, if the left quad has changed you can update the left wheel travel distance and velocity, as well as the robot’s heading and rotation rate. But you have no idea what is happening to the right wheel or the gyro drift, so you leave them unchanged.

Similarly, when the gyro reading changes you can update the robot heading as well as the gyro bias and drift. You cannot observe the robot’s forward movement from the gyro, but you can observe the difference between the left and right quad readings. When the gyro changes we push half the Xk[3] change into Xk[5] and Xk[7], because they are locked by the formula above.

Another important point is that if one measurement source corrects (or even overcorrects) a state parameter, the other measurement source will pick up the slack (or correct it back), because the residuals used in state corrections are differences between actual and expected values.

For example, if the robot is turning and a gyro change pushed changes into the left and right wheel distance counters, then subsequent quad measurements will simply add what is missing.

A final note about the constants that you see above: 0.25, 0.03, 0.2. Those are hardcoded values that I just made up, because this example doesn’t use covariance. If it did, those numbers would be automatically updated with the current values of covariance (P) specific to each state, sensor noise (R) specific to each measurement source, and whatever process noise (Q) we choose to inject during the predict step.

A simplified way of thinking about those constants: every time a measurement from a quad encoder is processed, we give the current predicted state 75% confidence and the measurement 25% confidence. After the second measurement from the same source is processed there will be 9/16 of the original state left, after the third one 27/64, and so forth…

As a result we will have slightly delayed, but much smoother (free of noise) state estimates. Such smoothing is important for the path integrator that uses heading and speed information to plot the robot’s position on the field.
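For reference, the path integrator itself can be just a few lines driven by the Xk state from the example above (a sketch; Cv and Ca, the tick-to-inch and tick-to-radian factors, are placeholder assumptions):

float Cv = 0.035;  // encoder ticks -> inches (placeholder, set for your robot)
float Ca = 0.0017; // gyro ticks -> radians (placeholder: 2*PI/3600)
float x = 0, y = 0; // robot position on the field, inches

void integratePath( float dt ) // dt = time since the last step, usec
{
	float v  = Cv * ( Xk[5] + Xk[7] ) / 2.0; // forward speed from wheel velocities
	float th = Ca * Xk[2];                   // heading in radians
	x = x + v * cos( th ) * dt;
	y = y + v * sin( th ) * dt;
}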

The above code may look robot-ready, but unfortunately it is not. In addition to the hardcoded covariance values, there are a number of other unsolved problems.

If the measurement cycle is too short you may introduce a lot of roundoff error. And if it is too long, you get errors due to non-linearity when the robot accelerates and decelerates and velocity changes by a non-trivial amount between measurements. In general, any measurements that change too fast or too slow will confuse the code from the previous post.

For example, consider a robot driving forward at a speed of 5 quad ticks per 10 msec measurement cycle. At some point the right wheel is commanded more power and, as the robot starts turning, the right wheel accelerates to an average speed of 7 ticks / 10 msec cycle over the next 2 cycles. If the next measurement comes from the right quad at 20 msec, the filter will nicely correct the position and velocity estimates from the expected +10 ticks and 5 ticks/cycle to +14 ticks and 7 ticks/cycle respectively:

time:| 0 msec      | 10 msec         | 20 msec
pos: | 0        +5 | +5          +10 | +14
vel: | 5         5 | 5             5 | 7 = 5+(4/2) 

Now consider what happens if, after a long silence, gyro registers a tick somewhere in the middle.

time:| 0 msec      | 10 msec         | 20 msec
pos: | 0        +5 | +6          +11 | +14
vel: | 5         5 | 5             5 | 6.5 = 5+(3/2)

Even if the gyro measurement corrects the position to +6, it will fail to increase the velocity due to a very long (dt) since the gyro last changed. Eventually the filter will catch up to the correct speed of 7 ticks/cycle, but a couple of path integrator steps will be slightly off target. We have a strange situation where having more data points makes the result worse. We need to recognize that some data points are not as good as the others.

Ideally, if we could read all sensors at every cycle, it would work like this:

time:| 0 msec      | 10 msec         | 20 msec
pos: | 0        +5 | +6          +12 | +14
vel: | 5         5 | 6             6 | 7 = 6+(2/2)

The problem is that until a slowly changing sensor ticks, we have no idea what’s going on with that sensor. You cannot assume it still has the old value, because it could tick while you are literally doing your measurement update.

There are several ways to deal with this problem. Among them are increasing the time between measurement update cycles and modelling better physics to avoid non-linearity. However, the best bang for your efforts will come from including control inputs in your prediction logic.

As I showed in the first example at the top of this page, it is very easy. Most of the time you know whether your robot should be turning or not, because you know what motor power was commanded at any time, and you know when it is safe to assume that a sensor tick should not be happening. The rare exception would be if somebody started to push your robot, but then you have bigger problems to worry about.

I am working on two more posts: another filter example that incorporates covariance and process noise, and a post about the unique challenges that RobotC’s virtual machine environment presents for implementing the sensor task.

So, before the subject gets even more complex, let me pose a couple of questions:

If you wanted to use a second gyroscope, how would you modify the code in the above example? What will the second gyro improve? What limitations will still remain? (Hint: think observability)

In the previous post I complained about some measurements being “not as good as the others”. There is a way to code that statement into the filter implementation. Do you have any guesses as to what I am trying to suggest? How specifically would you translate it into C code? (Hint: look a few posts back, there is an answer)

Please, feel free to answer any or all parts, or ask your own questions if you need clarifications.

Since there were no takers for the homework, I’ll give it a try myself. :slight_smile:

In order to fuse measurements from the second gyroscope into the filter solution, we need to define two new states that describe its behaviour. And most of you will probably guess correctly that those need to be:

// Xk[n] - second gyro bias (how much gyro had drifted from heading) (ticks)
// Xk[n+1] - gyro drift rate (per unit of time) (gyro ticks/usec)

Then we define the predict step, similar to the first gyro, as:


Xk[n] = Xk[n] + Xk[n+1] * dt; // gyroBias += drift * dt

Measurement updates from either of the quad encoders will still update robot heading, but will not touch biases and drifts for gyros.

Measurement updates from either gyro will change the robot heading and wheel travel estimates, but each gyro reading will update only its own bias and drift states.

If we were using a single gyro and were computing dynamic covariance and sensor noise levels, we could run into a problem: a correction that is supposed to go into drift and bias could go into robot heading, or vice versa. In the above example, with hardcoded covariance, I picked numbers that would work only assuming that the gyro error drifts much slower than the robot changes its heading (0.03 vs 0.2).

With two gyroscopes we reduce the chances of that by roughly 50%, since both gyros would need to drift in the same direction in order for the error to be unobservable. In reality, if we properly update covariance and process noise, we will do even better, because the chances of both gyros drifting in the same direction with the same rate at the same time are even smaller. If they actually do, then there must be a serious design flaw. For example, if the gyros have poor voltage regulators and always drift as the battery voltage drops.

However, for robots that are outfitted with three quad encoders, such that two of them can observe rotation, you wouldn’t need a second gyroscope. If the robot heading is observable by any other sensor or sensor combination, then the gyro bias and robot heading states will be independent.

If the gyro sensor hasn’t ticked for a while, we don’t really know when it is going to tick, but we know for sure that it cannot be fully trusted, because the uncertainty will introduce some error into the state. So we just associate some temporary noise with this sensor and will correct very little based on its next reading.
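One way to encode that in code (the 50 msec threshold and 10x inflation are made-up tuning values):

// Sketch: temporarily inflate the gyro's sensor noise (R) when it has been
// silent too long; the threshold and factor are made-up tuning values.
float gyroNoiseR( long now, long lastGyroTickTime, float Rgyro )
{
	if( now - lastGyroTickTime > 50 ) // stale: no tick for over 50 msec
		return Rgyro * 10.0;          // trust the next reading much less
	return Rgyro;
}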

Whenever possible, we want to avoid such stale sensors. In the case of a gyro, using the QCC2 method to read the gyro rate directly, instead of relying on RobotC’s built-in gyro integrator, will remove a lot of headache and improve the accuracy.

You will see in the next post that incorporating covariance predicts and updates into our code is surprisingly trivial. If you have read the tutorial and studied Collin’s code, you could easily guess the code for the simple mode of operation, where we don’t keep a full square covariance matrix and stick with scalar covariance values for each state.

The devil will be in the details. You will need to understand your robot and sensors in order to decide when to pump in additional process or sensor noise and how much. This is where you spend most of the time calibrating and fine tuning the filter.

For example, you notice that when you drive omnis faster than a certain speed, the gyro starts having larger noise. You can easily deal with that by increasing the noise variance (R) associated with the gyro sensor. However, RobotC’s built-in integrator would have absolutely no clue and would start reporting junk.

In general, you never want to use somebody else’s filter or integrator output as the input to your sensor fusing filter. Any preprocessing code that uses latent variable states or if()s, or skews the signal’s statistical distribution, and doesn’t report what it has done along with the measurements, is going to cause performance degradation and major headaches.

You always want to get raw data and model all biases and drifts yourself. The only exception could be running-average type filters/smoothers, where you know exactly what is happening to the statistical properties of the data source.

In addition, if your filter knows what control signal has been given to the motors, it will be in a much better position to model more accurate physics and will know when and where to increase or decrease process noise.