Using the gyro in ROBOTC

It looks as if the gyro will be a favorite sensor again this year. Using the gyro in ROBOTC can be a little tricky; it returns values in the range -3599 to 3599 (1/10 degree steps), but the behavior when rolling over from the maximum value to 0 is a little odd. When turning in one direction it will increment to 3599 and then roll over back to 0; when turning the other way it will decrement to -3599 and then roll over back to 0. This can make determining the absolute angle harder than it needs to be.
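
For example, before worrying about drift, the raw reading can at least be folded into a continuous 0 to 359.9 range. A minimal sketch, assuming the gyro is configured on analog port in1 (the library below handles this, and more, for you):

// Minimal sketch: fold the raw -3599..3599 gyro reading into a
// continuous 0..359.9 degree heading. Assumes the gyro is on port in1.
float headingDeg()
{
    float heading = SensorValue[ in1 ] / 10.0;  // raw value is in 1/10 deg

    if( heading < 0 )
        heading += 360.0;                       // e.g. -90.0 becomes 270.0

    return( heading );
}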

A couple of years ago I released a small library that helps with this, as well as adding some code to reduce drift. I have now pushed the same code (with a few small changes) out to GitHub; the gyro angle can now be read as a float in the range 0 to 359.9 degrees, as well as an absolute angle that has no limits. Usage might be as follows.


#include "gyroLib2.c"

task main()
{
    float ga;

    bLCDBacklight = true;

    // Initialize the gyro on the default analog port (in1)
    GyroInit();

    while(1)
        {
        // GyroValidGet() returns true once gyro calibration has finished
        if( GyroValidGet() ) {
            ga = GyroAngleDegGet();

            if( ga > 100 ) {
                // do something useful
            }
        }

        wait1Msec(20);
        }
}

The library is here:
https://github.com/jpearman/RobotCLibs

Read the source for more details.


The gyro would be very important if used for computer-controlled aiming. I was wondering if it is possible, during driver control, to have code that continuously tracks the robot’s x,y location on the field, so that when you press a button the robot automatically turns the shortest way toward the goal until it faces it, without a billion statements like “if this x,y location, then turn this direction”; maybe with an equation? Would this need a gyro, and if encoders could also be used, which would be more accurate? This would save time, increase accuracy, and save you the frustration of turning so many times in a match.
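
For what it’s worth, the “equation” part of this is just trigonometry. A hedged sketch, where goalX/goalY and the odometry values are hypothetical placeholders for whatever position tracking is available:

// Hypothetical sketch: how far to turn to face a goal at (goalX, goalY),
// given an odometry estimate (robotX, robotY) and heading in degrees.
float aimError( float robotX, float robotY, float robotHeading,
                float goalX,  float goalY )
{
    // bearing from robot to goal, converted from radians to degrees
    float bearing = atan2( goalY - robotY, goalX - robotX ) * 180.0 / PI;

    // signed error between current heading and the required bearing
    float error = bearing - robotHeading;

    // wrap into -180..+180 so the robot always turns the shorter way
    while( error >  180.0 ) error -= 360.0;
    while( error < -180.0 ) error += 360.0;

    return( error );
}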

I’ve done this in FRC. Even with a well-made, well-tuned swerve drive (a sample size of 4 valid odometry sources) and a good 9-axis sensor (gyro, accelerometer, and magnetometer; 0.5 degrees of drift over 2 minutes…because that’s how much the Earth spins in 2 minutes), you still end up with bad lateral drift, because any small error in angle is multiplied by how far you’ve gone. You would need a way to completely zero it after a minute, or even half a minute, of operation, given how gyros drift and how encoder errors integrate over time.
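
For scale: a fixed heading error of θ produces roughly d × sin(θ) of lateral offset over a straight run of length d, so even that 0.5° works out to about 144 in × sin(0.5°) ≈ 1.3 inches after 12 feet, and it compounds on every leg of the path.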

Awesome! Thank you for anticipating our needs. I can’t wait for my kids to try this out, once they get a chassis of some sort put together. I would also like them to be a little more systematic in creating a vibration isolator for the gyro this year. Whatever they did last year (which was based on a post you had made about using some sort of rubber screw thing) helped a lot, but there was still significant error by the time the robot drove 6 feet or so. Tuning the isolator is probably key, since putting rubber between the gyro and the robot doesn’t necessarily reduce vibration; it’s conceivable that such an elastic mount could actually create a resonance condition and make things worse. Maybe some high school or VEX U kids could work out a good isolator this year? Hook up a couple of accelerometers and get some serious input/output isolation data?

I’m also curious whether it helps for the gyro to be at the very center of the robot’s rotation, or whether its response is decoupled enough that it could be mounted anywhere on the robot (far from the center of rotation).

I think it takes a little over 1 second to reset the gyro. If kids can use the tape lines or press against a wall to reset their gyro a few times during a match, do you think the time spent would still be worth it for the improved aiming? In other words, given the limitations you noted from your experience with these things, do you think it’s still worth trying to use the gyro to aim automatically?


We learned the hard way that it takes exactly 1118 msec to initialize the gyro. I see that James has it at 500 msec, and that might not be enough.

Instead of putting gyro initialization in pre_auton(), we had it at the beginning of autonomous (don’t ask me why) and would then start to move the robot right away. We were getting garbage readings from the gyro. It took quite a few days before technik jr figured out that wait1Msec(1117) would not work, but wait1Msec(1118) would let the gyro properly initialize.
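
If it helps anyone else avoid the same trap, here is a minimal sketch of the pattern we should have used, assuming the GyroInit()/GyroValidGet() calls from the library above:

// Start calibration in pre_auton() so the wait happens before the match,
// and gate autonomous movement on the gyro actually being ready.
void pre_auton()
{
    GyroInit();     // robot must be stationary while this runs
}

task autonomous()
{
    // don't command any motion until calibration has finished
    while( !GyroValidGet() )
        wait1Msec(10);

    // ... safe to start moving now
}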

For NbN, I was thinking of using the accelerometer to detect collisions, and then either marking the gyro as invalid until there is a chance to derive the true heading from crossing tape lines, or, if the collision is small, increasing the uncertainty on the gyro value and somehow riding it out. But that might require some sort of filter implementation.

There were a number of posts a few years back discussing filtering. Did they lead to anything useful?


Could the gyro itself be used to detect collisions? If the gyro value is seen to jump suddenly, especially when no turn has been commanded, could the code determine that something has gone wrong?

Does anybody know what a collision “looks like” to the Vex accelerometer? I mean, does its value get updated fast enough in the Cortex to see the spike(s) caused by a collision?

Yes, that would be possible, and it would be a type of filtering. Essentially, if you look at the IME software workarounds, the library will “flywheel” over bad IME data by predicting what values are reasonable based on the physics and throwing out any outliers. This would be a single-sensor filter.

We could apply similar code to the gyro. We know the robot cannot suddenly jump 30 degrees, but the sensor reading can if the robot is bumped. We could then count the jump as an error and assume the actual robot is still within a couple of degrees of where it was before the “jump”.
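
A minimal sketch of that jump rejection, with the threshold and sample period as assumed values rather than tuned ones:

// Hypothetical jump rejection: if the heading moves further in one 20 ms
// sample than the robot could physically turn, hold the last good value.
float lastGoodHeading = 0.0;

task gyroMonitor()
{
    float maxDegPerSample = 10.0;   // assumed physical turn limit per 20 ms

    while( true )
        {
        float current = GyroAngleDegGet();
        float delta   = current - lastGoodHeading;

        // wrap the difference into -180..+180
        while( delta >  180.0 ) delta -= 360.0;
        while( delta < -180.0 ) delta += 360.0;

        // magnitude of the change this sample
        if( delta < 0 ) delta = -delta;

        if( delta < maxDegPerSample )
            lastGoodHeading = current;  // plausible reading, accept it
        // otherwise treat it as a collision glitch and keep the old value

        wait1Msec(20);
        }
}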

If we were to use a proper multi-sensor filter, then we could do sensor fusion. When we ride over the tape lines, we could correct the robot position and the biases of the gyro and quad encoders based on the line tracker timings. To do that we would need to model the geometry of the field. It will get complicated, but it should give a superb positioning solution.
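
Even without the full filter, a degenerate version of the tape-line correction is possible: crossing a line whose position is known lets us overwrite one coordinate outright. A hypothetical sketch, where lineSensor, the threshold, and the tape position are placeholders for a real sensor setup and field model:

// Hypothetical snap-to-line correction. Line trackers read low over
// reflective white tape; 1500 is an assumed, uncalibrated threshold.
float robotY = 0.0;             // drifting odometry estimate, inches

void correctOnTape()
{
    if( SensorValue[ lineSensor ] < 1500 )
        robotY = 24.0;          // assumed known Y position of the tape line
}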

I was hoping to find either an existing filter library or an easier method for sensor fusion, before trying to write filter code myself. Maybe it will be possible to get just enough accuracy by treating all sensors independently.


Well, sort of. The theoretical delay is 1224 ms (200 ms of settling plus 1024 samples at 1 ms each), based on the ROBOTC notes about how the gyro is initialized, but that’s beside the point.

I pushed some code to the same git repo that shows how ROBOTC does its internal calculations; you could, with care, use this as an alternative to the internal integration. It would need to run as a high-priority task, and unfortunately it will still be interrupted by ROBOTC background tasks, so it may have small errors. I used an almost exact copy of this in ConVEX as well.

The core of the code looks like this.

    // Delay for 200 milliseconds. This is to allow the gyro to stabilize when it is first powered up. 
    // The datasheet indicates that this may take 50 milliseconds so we'll run it a little longer.
    wait1Msec(200);
    
    // calculate bias
    for(i=0;i<1024;i++)
        {
        GyroBiasAcc = GyroBiasAcc + SensorValue[ gyroAnalogPin ];
        wait1Msec(1);
        }

    GyroBias      = GyroBiasAcc / 1024;
    GyroSmallBias = GyroBiasAcc - (GyroBias * 1024);
    // Ok bias done

    // Run forever
    while(true)
        {
        // Get raw analog value
        GyroRaw   = SensorValue[ gyroAnalogPin ];
        // remove bias
        GyroDelta = GyroRaw - GyroBias;

        // ignore small changes
        if ((GyroDelta < -GyroJitterRange) || (GyroDelta > +GyroJitterRange))
            {
            // integrate angle
            GyroRawFiltered += GyroDelta;

            // compensate for error in bias
            if ((++GyroJitterCycles % 1024) == 0)
                GyroRawFiltered -= GyroSmallBias;
            }

        // calculate angle in deg * 10
        GyroValueTmp = GyroRawFiltered / GyroSensorScale;

        // Clip to +/- full scale
        if( abs(GyroValueTmp) >= GyroFullScale )
            GyroValueTmp = GyroValueTmp - ((GyroValueTmp / GyroFullScale) * GyroFullScale);

        // and store for the user
        GyroValue = GyroValueTmp;
        
        // sleep
        wait1Msec(1);
        }
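
A note on the bias compensation above: GyroBias is an integer division, so the fractional part of the true bias is lost, and GyroSmallBias captures the total residual over the 1024 calibration samples. As an illustrative example, if GyroBiasAcc ended up at 1,900,500, then GyroBias = 1855 and GyroSmallBias = 1,900,500 − 1855 × 1024 = 980, corresponding to a true bias of about 1855.96; subtracting GyroSmallBias from the integrator every 1024 cycles removes the error the truncated bias would otherwise accumulate.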

I was thinking of combining the gyro and accelerometer using a complementary filter to try and get better results. It’s been done before with non-VEX sensors; I’m not sure about ours, though.
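
For reference, the usual single-axis form looks something like this sketch; alpha is an assumed tuning constant, and refAngle stands in for whatever absolute reference is being fused against the integrated gyro rate:

// Generic single-axis complementary filter sketch (not VEX-specific).
// gyroRate is in deg/s, refAngle is an absolute angle estimate in degrees.
float fusedAngle = 0.0;

void complementaryStep( float gyroRate, float refAngle, float dt )
{
    float alpha = 0.98;     // assumed constant: trust the gyro short-term

    // integrate the gyro for responsiveness, bleed toward the absolute
    // reference to cancel long-term drift
    fusedAngle = alpha * ( fusedAngle + gyroRate * dt )
               + ( 1.0 - alpha ) * refAngle;
}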

Also, see this:
using-line-sensor-to-update-gyro-heading

Chris has made many attempts to get the gyro working more accurately; unfortunately, there’s only so much that can be done with the relatively low-resolution data we can get.


I’ve been looking for an excuse to introduce the accelerometer for a while, so I’m looking forward to seeing whatever you’re working on with this. When I google “gyro accelerometer complementary filter”, however, I’m seeing mostly things from the quadcopter world dealing with pitch and roll and so forth, so I’m interested to see how this filter works on a level plane.


With reference to the code that simulates the internal workings of ROBOTC: the gyro actually outputs a voltage proportional to angular velocity, i.e. degrees per second.

So in the integration step I would expect:
Angle = Angle + Angular_velocity*time_in_seconds_since_last_update

Since the loop is running with a 1 ms delay, Delta_T is 0.001 s. I couldn’t figure out where in the main loop the filtered gyro value is being multiplied by Delta_T. Can you clarify? Thanks.

(Also, Delta_T is usually calculated as the current time minus the time of the previous update, to account for scheduling delays.)

It’s folded into the GyroSensorScale constant. I will figure out where the number comes from, but the TL;DR version is that I based this code on some supplied by the ROBOTC team as part of an example, and that’s the number they used.
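
To spell the algebra out: with the fixed 1 ms loop, the angle in tenths of a degree is Angle_deg10 = Σ(delta counts) × r × Δt × 10, where r is the angular rate represented by one ADC count in deg/s. Because Δt never changes, the whole factor can be inverted into a single divisor, GyroSensorScale = 1 / (r × 0.001 × 10); the exact value then depends only on the gyro sensitivity and the ADC reference, which is why the number came from the ROBOTC example.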

You are correct, but this is just demo code; there are no other tasks running other than the ROBOTC background task, and the error introduced is small.


I see: if Delta_T is constant, then one can do the integration as in the ROBOTC code and multiply by Delta_T only once at the end. Thanks for explaining that.