V5 and the Gyro sensor

Hello! To my knowledge, no one had gotten the V5 and the gyro sensor to work; my team went to Worlds, and one of our other teams tried to use the gyro sensor with V5 and failed. So I figured I would take a shot at it, and I think I’ve got it working!

The first thing to note is that when your program starts up, the gyro seems to take about a second to “boot up,” at least in my testing. Any attempt to turn the robot during that time will result in the gyro counting up forever in the direction it was turned.
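
A simple workaround is to just wait before commanding any movement. Here is a minimal sketch, assuming one second of settle time is enough for your sensor (the 1000 ms figure comes from my own testing, so tune it for yours):

```cpp
void autonomous(void)
{
    // Give the gyro time to finish starting up before the robot moves.
    // 1000 ms is an assumption from my testing; adjust it for your sensor.
    vex::task::sleep(1000);

    // ...rest of the autonomous routine, including any gyroTurn() calls
}
```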

The second thing is that, at least in VCS, there is no way that I know of (or that is documented) to reset the gyro value to zero. This led me to develop some interesting code to make gyro-based auto-straightening and turning consistent.

Here is the code I used to control turning with the gyro sensor in VCS:

```cpp
// Turns the robot a set number of degrees using the gyro.
// Usage: gyroTurn(degrees you want to turn, true/false if you are turning right);
// It is important to leave one second at the start of auton to allow
// the gyro to start up, or it will count infinitely.
void gyroTurn(int x, bool turnRight)
{
    int x10 = x * 10; // raw gyro units are tenths of a degree
    if (turnRight)
    {
        x10 = x10 * -1;
    }
    bool lineup = false;
    int gyroValue = Gyro.value(rotationUnits::raw) + x10;
    // wrap the target back into the -3600..3600 (one full turn) range
    if (gyroValue > 3600)
    {
        gyroValue = gyroValue - 3600;
    }
    else if (gyroValue < -3600)
    {
        gyroValue = gyroValue + 3600;
    }
    while (!lineup)
    {
        // keep turning until the reading is within 10 raw units (1 degree) of the target
        if (Gyro.value(rotationUnits::raw) > (gyroValue + 10) ||
            (gyroValue - 10) > Gyro.value(rotationUnits::raw))
        {
            if (turnRight)
            {
                // turn right
                leftDrive(25);
                rightDrive(-25);
            }
            else
            {
                // turn left
                leftDrive(-25);
                rightDrive(25);
            }
        }
        else
        {
            stopHold();
            lineup = true;
        }
    }
}
```

Also, be sure to mount the gyro in a spot where it is isolated from the rest of the electronics so that static electricity does not build up on it. Try not to use it in driver control, since static tends to build up a ton then. Make sure before every match to discharge any static electricity by touching the gyro, then touching the edge of the field.

My team got the gyro to work. I’m pretty sure others did as well. There were some posts here on the forum with sample code and suggestions on how to get it to work. You might want to try a search for those threads.

I did get this to work. The information that no one else was able to do it came from my coach and my other teammate.

Can you do multiple turns with this code? I ask because gyro turns on V5 are relative, not absolute.

Yes, as long as a single command does not go over 720 degrees, since the overflow code can’t handle more than that.

The overflow code:

```cpp
if(gyroValue > 3600)
{
    gyroValue = gyroValue - 3600;
}
else if(gyroValue < -3600)
{
    gyroValue = gyroValue + 3600;
}
```

If I were not testing on a holonomic drive (which has lots of play in the drivebase), the code and gyro should be able to do exact turns.

Many people have managed to use the gyro sensor; there is nothing inherently difficult about it.
However, here is why I think the gyro is not the best solution for V5:

Yes, when the code starts, the gyro goes through a calibration process that requires the sensor to be completely still. That should just happen on startup, and you should not have to worry about it in autonomous unless you create the sensor right before you start autonomous, which is bad structure.

When dealing with sensors, you should never have to reset them. Doing so is a blunt and inefficient way to deal with relative angles.
Instead, the solution is to take into account the current position of the gyro to convert your relative angle into an absolute angle.
For example:

```cpp
//given wanted angle of 90 from current position
int wantedAngle = 90;
//instead of setting gyro to 0 and turning until gyro reads 90,
//find the angle that is 90 away from the current angle
int target = Gyro.value(rotationUnits::degrees) + wantedAngle;
//now you can use that value as your target
```

Also, there is a way to simplify the logic in code such as this. This is just a suggestion, but it helps with neatness.
Instead of writing logic that branches on direction, some simple math reduces the complexity of the code. With this approach you don’t need to specify the direction; a negative angle simply means turning the other way.

```cpp
void gyroTurn(int angle) {
    int target = Gyro.value(rotationUnits::degrees) + angle;

    int error = 0; //difference between current angle and target angle
    //do-loops run at least once; we are using that to calculate error
    do {
        error = Gyro.value(rotationUnits::degrees) - target;
        if(error < 0) {
            //turn right
            leftDrive(25);
            rightDrive(-25);
        } else {
            //turn left
            leftDrive(-25);
            rightDrive(25);
        }
    } while(abs(error) > 10); //exit once the error is within 10 degrees
    stopHold();
}
```

Anyway, just a suggestion to make things neater. If you wanted to do a P (proportional) loop to increase speed, you could just replace the if/else for the direction with

```cpp
leftDrive(-error * constant);
rightDrive(error * constant);
```

which would go faster the further you are from the goal and slower the closer you are. You would tune the constant to provide the relation between distance from goal and motor power.
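
As a rough sketch of how that might look put together (kP here is a made-up placeholder gain you would tune, and the 10-degree exit window is carried over from the code above, so this is not a drop-in implementation):

```cpp
//sketch of a proportional gyro turn; kP is a placeholder gain to tune
void gyroTurnP(int angle) {
    const double kP = 0.5; //motor power per degree of error (tune this)
    int target = Gyro.value(rotationUnits::degrees) + angle;
    int error = 0;
    do {
        error = Gyro.value(rotationUnits::degrees) - target;
        //negative error means turn right, positive means turn left,
        //and the size of the error scales the speed
        leftDrive(-error * kP);
        rightDrive(error * kP);
    } while(abs(error) > 10); //exit once within 10 degrees of the target
    stopHold();
}
```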

Small nitpick: if(turnRight==true) is redundant 🙂
It is cleaner to write if(turnRight); when turnRight is true, the original is basically if(true==true). If you want the branch to run when the value is false, you can write if(!turnRight), which reads as “if not turnRight”.

Finally, when you post code on the forum, please format and wrap your code in little

```cpp

//your code here

```

code tags; it helps with readability.

Hope this post was able to teach someone something.

You could alternatively use an encoder for turning. It will be a lot simpler to integrate into your code, and it doesn’t suffer from errors if you use a tensioned, free-spinning wheel system. The biggest setback is that it takes up 2 ports instead of 1.

Do you mean 4 ports instead of 1?
To be able to measure rotation using encoders, you need 2 of them.

Nope. You can use an encoder placed horizontally at the back end of the chassis (or any part within the middle of the chassis) to measure rotational displacement. Granted, you will need to experiment with it; you can’t use degree values directly like you can with a gyro.

Basically, the encoder will measure a certain value whenever the two sides of the chassis move in opposite directions (this is what the chassis does when turning) and this will enable you to find a certain angle of the robot. I recommend graphing different values of the encoder in relation to the orientation of the robot. This will let you model the behavior of the robot (when turning) with code.
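
If the robot really does spin in place, the function you are graphing toward is just arc length divided by radius. Here is a minimal sketch of that conversion; wheelDiameterInch and offsetInch are hypothetical constants you would measure on your own robot:

```cpp
const double PI = 3.14159265358979;
const double wheelDiameterInch = 2.75; //size of the free-spinning tracking wheel
const double offsetInch = 5.0;         //distance from the turning center to the wheel
const double inchPerTick = (PI * wheelDiameterInch) / 360.0; //quad encoder: 360 ticks per rev

//convert a horizontal encoder reading into a robot angle, assuming the
//robot rotates in place about a fixed center
double encoderToRobotDegrees(int ticks) {
    double arcInch = ticks * inchPerTick;  //how far the wheel has rolled
    double radians = arcInch / offsetInch; //arc length over radius
    return radians * 180.0 / PI;
}
```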

I suppose. It’s just that it feels like it can be quite inaccurate, and it assumes the robot always turns about the exact same point. If you use omni wheels, the turning might be very different depending on acceleration and speed.
Maybe it is possible to get it to work, but it does not feel very robust compared to two vertical encoders or even a gyro.

If you use all omni wheels then the robot will rotate about a single point. I don’t understand what difference using 2 encoders in a perpendicular orientation would make, other than utilizing an extra port. You could easily achieve the same results as those encoders with the integrated IMEs. But I think you’re misunderstanding the geometry of a symmetrical, square robot. The horizontal encoder would work; in fact, it makes odometry possible.

Also, the horizontal encoder measures angular displacement, and the point about which the robot rotates does not affect the angle of its orientation.

I am a little confused.
I did not say two perpendicular wheels, I meant two parallel wheels like this (ignore the back wheel).

(image taken from the Pilons)

What I understand you to be saying is to just use the one horizontal wheel (labeled back wheel).

What I was saying is that using just that one encoder to measure angle must be inaccurate. Depending on how the robot turns, especially if it is not consistent (affected by dynamic speed/weight), that wheel will not always spin in proportion to the angle of the robot. If this is where I am wrong, feel free to correct me.

For example, if for some reason the robot was pushed so that the robot rotated directly around the horizontal wheel, causing there to be no encoder movement, would it not lose all accuracy? If there were two parallel wheels like in the diagram, it would not matter what the center of turning would be. And this will still be much more accurate than integrated sensors due to wheel slip and inexact point of turning (within the width of a large wheel).

Finally, in odometry the horizontal tracking wheel is not used for angle. It is impossible to tell whether movement of the horizontal wheel comes from horizontal displacement or from rotation of the robot. What tracking algorithms do is measure the orientation of the robot using the two vertical wheels with this formula:
```cpp
dRadians = (dLeftInch - dRightInch) / chassisWidthInch;
```
Then, it can calculate how much it expects the horizontal wheel to move given that rotation.
It cancels out that movement from the horizontal wheel, and what remains is the horizontal displacement.
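
Here is a sketch of that cancellation step, using the same names as the formula above (dBackInch and backWheelOffsetInch, the horizontal wheel’s movement and its distance from the tracking center, are names I am making up for illustration):

```cpp
//one odometry iteration: find the rotation from the two vertical wheels,
//then cancel the rotation component out of the horizontal wheel's reading
double sidewaysDisplacement(double dLeftInch, double dRightInch, double dBackInch,
                            double chassisWidthInch, double backWheelOffsetInch) {
    //rotation this iteration, from the two vertical wheels
    double dRadians = (dLeftInch - dRightInch) / chassisWidthInch;
    //how far we expect the horizontal wheel to roll purely from that rotation
    double expectedBackInch = dRadians * backWheelOffsetInch;
    //what remains is the robot's real sideways movement
    return dBackInch - expectedBackInch;
}
```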

If all you had was the horizontal wheel, it would be impossible to differentiate horizontal displacement from rotation. What if you had a tall stack at the front of your robot that made the robot turn in a slight arc around the stack? Then the wheel would spin much more than usual for the same angular rotation of the robot.

If I am missing something important please correct me =)

I think you’re overthinking this way too much. We’re talking about replacing a gyro with an encoder, we’re not talking about traveling in arcs. If you rotate about the center of the robot, you can use the horizontal encoder to correctly orient the robot to a desired target angle.

Like I said, you could do some math to find a function that correctly models the relationship between the encoder and different angles of the robot. Using that, you can simply take an angle as a method parameter, convert it with the function you came up with, and have the robot rotate until it reaches the encoder value that corresponds to that specific angle.

Also, what I meant by perpendicular wheels is that the “vertical” wheels are perpendicular to the horizontal wheel. Also, that wheel will always spin in proportion to the angle of the robot in autonomous. Remember that we aren’t allowed to cross the autonomous line and we can eliminate jerk with slew rate control and PID. This will essentially make motion really smooth and controllable so you can always measure if the encoder changed position or not. Also, the free spinning wheel will be tensioned down to the ground. Now, the only scenario I see this not working in is if your partner drives into you during autonomous and makes you travel about an arc. But as for rotating in place (which can only be done with all omni wheels), you can get a relatively accurate reading with an encoder if you code it properly.

Just visualize the motion of the horizontal free-spinning wheel as the robot turns. It will essentially be moving in a “straight” line because that encoder is tangent to the robot’s rotation. I don’t know if that made sense, but that’s how I visualize it. There is a clear correlation between robot orientation and that encoder value, and you can model it with a function. I understand how the 2 “vertical” wheels with encoders work, but at that point you’re better off using the IMEs.

Alright. I still am not convinced, but I can see how it is feasible to get a reading of the robot’s orientation based on one sensor, if we assume the robot will spin in the same way.
What I meant with arcs is that if the center of turning was at the very front of the robot (due to a dynamic imbalance of weight), the omni back of the robot would travel in a sideways arc and cover more distance when rotating, messing up the conversion between encoder movement and robot angle.

Yeah, it is probably possible to ensure the robot rotates in a consistent way in autonomous and make it good enough; I just see a lot of potential for error. However, I understand how you would go about modeling the rotation.
Anyways, good discussion.

I like discussion; it’s how we learn as a community. The big takeaway here is that there are multiple solutions to the same problem.

It works great in PROS. I used it last year without issues.

Can you explain how you arrived at this formula and what dLeftInch and dRightInch mean? I know it is from the Pilons document, but I didn’t understand how they derived it because they didn’t really explain what ΔL and ΔR mean. They explained the rest of the variables, but not those two.

I would assume ΔL and ΔR are counts reported by the left and right side encoders.

To understand the formula you may want to look at this example:

We have a robot of width “l” that turns around some point which is “r” inches away from it. Let’s say that over the time unit “t” the left wheel encoder reports 3 counts (travel distance) and the right wheel reports 5.

If you know the width between the wheels then you can calculate both the unknown radius “r” and angle “theta”.

The movement of the robot is the sum of translation of center of the robot along some path and the rotation of the robot body around that point.

A special case would be when the robot only rotates around the point between its wheels.
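
To make the subtraction step concrete, here is the example worked through in code (the numbers are illustrative, and I am treating the encoder counts as if they were already converted to inches):

```cpp
//left wheel travels 3 inches, right wheel travels 5, chassis width l = 10
double dLeftInch = 3.0, dRightInch = 5.0, chassisWidthInch = 10.0;

//each wheel traces an arc: dLeft = r * theta and dRight = (r + l) * theta,
//so subtracting the two equations gives dLeft - dRight = -l * theta,
//which rearranges into the formula quoted earlier in the thread
double thetaRadians = (dLeftInch - dRightInch) / chassisWidthInch; //-0.2 rad, a left turn

//back-solve for the turning radius, measured to the left (inner) wheel
double rInch = dLeftInch / -thetaRadians; //15 inches = 1.5 chassis widths
```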

Yeah. I figured out the math at one point, but I forget how I did it.
The math works out such that (left − right) / width gives you the angle of the robot in radians.

@technik3k’s explanation is good. Just a small correction: I think ΔL and ΔR mean the deltas, in inches, that the two wheels have moved, converted from ticks (counts).
Just to clarify, the d prefix means delta, i.e. the change since the last iteration.

You first want to read the new encoder values. Then, you want to convert them into a standard unit, such as inches; what matters is that it is the same unit the chassis is measured in. Then, to find the delta wheel movement in inches, you subtract the old inch reading from the new one.

```cpp
double newLeftInch = leftTicks * ticksToInch;
double newRightInch = rightTicks * ticksToInch;

double dLeftInch = newLeftInch - lastLeftInch;
double dRightInch = newRightInch - lastRightInch;

lastLeftInch = newLeftInch;
lastRightInch = newRightInch;
```

Now that you have the amount the wheels moved in inches since the last iteration, you can calculate how much the robot has rotated in that iteration using the formula:

```cpp
double dAngle = (dLeftInch - dRightInch) / chassisWidthInch;
double newAngle = lastAngle + dAngle;
```

Of course, you can also modify the formula to be

```cpp
double leftInch = leftTicks * ticksToInch;
double rightInch = rightTicks * ticksToInch;
double newAngle = (leftInch - rightInch) / chassisWidthInch;
```

where the wheel readings are measured not as deltas, but instead as relative movements since the program began.

What matters is that if you simply keep track of how far the left and right wheels have moved (in distance or rotations), you can know the robot’s angle at any time since you started measuring.

Does that clear things up?
