Is there a better way to track a MOGO?

This is my first time working with vision, so please don’t be too harsh :slight_smile: Comments, improvements? I am not making use of the angle property from the vision sensor; it always reported 0 when I checked it, and the documentation didn’t explain what it was. Is there a way to make the robot move faster? When I tried increasing the driving and turning speeds, it would overshoot, then overcorrect, and keep doing that forever. Is there a way to follow a MOGO more smoothly, specifically without splitting turning and driving forward into two separate blocks, and without writing a custom drivetrain?

A PID where the sensor reading is the vision sensor’s horizontal position (of the largest object, assuming it’s larger than a threshold) and the desired value is the middle of the screen (whatever number that is).
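That proportional idea can be sketched in plain Python. This is not the VEXcode API: `SCREEN_CENTER_X` assumes the vision sensor’s roughly 316-pixel-wide snapshot, and `TURN_KP` is a made-up tuning constant you would find experimentally.

```python
# Sketch of a P controller on horizontal position (names are illustrative).
SCREEN_CENTER_X = 158   # assumed half of a ~316 px wide snapshot
TURN_KP = 0.5           # placeholder gain; tune experimentally

def turn_effort(object_center_x):
    """Signed turn velocity: negative means turn left, positive turn right."""
    error = object_center_x - SCREEN_CENTER_X
    return TURN_KP * error

print(turn_effort(100))   # mogo left of center -> negative (turn left)
print(turn_effort(158))   # centered -> no turn
```

The sign of the error tells you which way to turn, and its magnitude tells you how hard.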

I’m also wondering if there’s a way to parameterize the input to the “take a snapshot” block, to avoid duplicating the same code for red/blue/yellow MOGOs?

If I understand correctly, followMogo turns left or right until the mogo is centered in the vision sensor’s field of view. I’m not sure what waitForMogo is intended to do… Why would you want to just sit there waiting for a mogo to come into view? And if I understand this correctly, you mean you want an input into your block that will tell it what color mogo to look for. This is certainly a good instinct; copy-and-pasting is one of the taboos of programming, because if you find and fix a bug, you’ll have to re-copy and paste the fix everywhere. There’s no way to do this directly, but you can do it using number inputs. Add a number input to both of your blocks and call it, say, colorCode. Then use some code like this to interpret the number:

define waitForMogo(colorCode) {
  if (colorCode = 1) {
    take a snapshot for red;
  } else if (colorCode = 2) {
    take a snapshot for blue;
  } else if (colorCode = 3) {
    take a snapshot for yellow;
  } else {
    print to Brain("Invalid mogo color code selected");
    wait(15, seconds); // This will give you time to figure out what's wrong.
  }
}
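In ordinary code, that dispatch is just a lookup from the number to a signature. A Python sketch, where the signature names are stand-ins (the real blocks would call “take a snapshot” with the matching signature):

```python
# Hypothetical mapping from colorCode to a vision signature name.
COLOR_SIGNATURES = {1: "RED_MOGO", 2: "BLUE_MOGO", 3: "YELLOW_MOGO"}

def signature_for(color_code):
    try:
        return COLOR_SIGNATURES[color_code]
    except KeyError:
        # Mirrors the "Invalid mogo color code selected" branch above.
        raise ValueError("Invalid mogo color code selected")
```

Adding a fourth color later is then one new dictionary entry instead of another copy-pasted branch.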

define followMogo(colorCode) {
  forever {
    // ... your existing follow code ...
    // Drag the colorCode bubble from the "define followMogo" block into the input bubble.
  }
}

You should also (in my opinion) make targetX, targetWidth, and targetHeight inputs to followMogo, and targetArea should be calculated automatically before the forever loop in define followMogo.
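Deriving targetArea once, before the loop, from the two inputs might look like this (names are illustrative, not VEXcode):

```python
def target_area(target_width, target_height):
    # Computed once before the forever loop instead of being a fourth input,
    # so the caller can never pass an area that disagrees with the dimensions.
    return target_width * target_height
```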

Now, PID. (Be prepared to basically rewrite your entire code if you choose to follow this route.) How I would go about this: start by setting X to (VisionSensorObject centerX - targetX). At the beginning of your when started, have set kP to __. (I don’t know what you need to set it to.) In your followMogo forever loop, do something like this:

set X to (VisionSensorObject centerX - targetX);
set turn velocity to abs(X * turnkP); // Set turnkP at the beginning of your when started
set widthError to (targetWidth - VisionSensorObjectWidth);
set drive velocity to abs(widthError * drivekP); // Set drivekP at the beginning of your when started, too

if (X < 0) {
  turn left;
} else {
  turn right;
}

if (widthError > 0) {
  drive forward;
} else {
  drive backward;
}

if (abs(widthError) < widthErrorMargin and abs(X) < turnErrorMargin) {
  break; // This gets you out of the forever loop.
}
Obviously, this needs to be debugged and stuff, and this is also only a P loop, but it should get you started. Basically, the concept is that if you’re not off by very much, don’t turn very hard or drive very quickly, but if you’re off by a lot, don’t waste time going slowly.
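To see that a P loop like the one above settles instead of oscillating forever, here is a toy one-dimensional simulation in Python. The “physics” (each step reduces X by exactly the commanded turn velocity) and both constants are invented for illustration:

```python
TURN_KP = 0.3          # made-up gain
TURN_ERROR_MARGIN = 2  # made-up tolerance, in pixels

def settle(x, max_steps=100):
    """Toy model: each step, the heading error shrinks by the turn velocity."""
    steps = 0
    while abs(x) >= TURN_ERROR_MARGIN and steps < max_steps:
        turn_velocity = abs(x * TURN_KP)
        # Turn toward the target: move X toward zero.
        x = x - turn_velocity if x > 0 else x + turn_velocity
        steps += 1
    return x, steps

final_x, steps = settle(100)   # big error -> big corrections at first
```

Each iteration multiplies the error by 0.7, so corrections start large and taper off as the robot lines up, which is exactly the overshoot-free behavior you were after.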

If there’s something in there you don’t recognize, it’s probably a variable, but feel free to ask. Variables like kP and errorMargin should be set at the beginning of the program, but variables like targetX and targetWidth should be inputs into the block. Otherwise, look up PID loops (this example only uses the P part) and ask anything you don’t understand.

Ah, thanks!

This code kicks in when a mogo we want to track is in front of the robot (more or less); however, the camera may not “see” the mogo right away, so we end up querying the camera until it starts “seeing” it, and then we proceed to ask about the object’s dimensions and position.
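That polling pattern, sketched in Python (take_snapshot here is a stand-in for the real vision block; assume it returns a list of detected objects):

```python
def wait_for_mogo(take_snapshot, max_tries=1000):
    """Keep querying the camera until it reports at least one object."""
    for _ in range(max_tries):
        objects = take_snapshot()
        if objects:
            return objects[0]   # then ask about its dimensions and position
    return None                 # camera never saw the mogo
```

The max_tries cap is a guess at a sensible safety valve so the robot can’t block forever if the mogo never appears.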

Thanks for explaining the P loop.

We hard-code targetX to 1/2 of the camera’s field of view, and we capture the values for targetWidth/targetHeight when we calibrate the camera (keeping the mogo a known fixed distance away from the camera), and then we initialize the variables to those values in whenStarted.

We seem to have a couple of other issues with the camera or its drivers:

  • The brain regularly throws up a warning claiming the camera is disconnected; we dismiss it and the camera seems to work after that
  • We have to recalibrate the camera frequently. Whenever we go to a tournament, the first thing we do is recalibrate, and that’s understood. However, we also have to recalibrate periodically in our practice room, where we meet at the same time of day with the same lights on and the mogos in the exact same positions.