New Products - June 2021

Back when I was a student, the FRC and VEX PIC-based microcontrollers could barely handle a few interrupts for basic PID on a few motors. Try doing PID on too many motors, try implementing motion profiling across the board, or try adding more than a handful of floating point or trig calculations, and you bogged down the processor. Sometimes you could cheat a bit by multiplying all your values by 100 or 1000 and sticking to ints and trig lookup tables, but otherwise, odometry as we know it now simply wasn’t technically possible yet on that hardware.
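To show what that integer trick looked like, here is a minimal sketch of the idea, not period-accurate PIC code; `SCALE`, `SIN_TABLE`, and `fixed_sin` are made-up names for illustration:

```python
import math

# Sketch of the old fixed-point trick: keep everything in integers by
# scaling values and precomputing a sine lookup table, so the processor
# never touches floating point at runtime.
SCALE = 100  # store sin(x) * 100 as an integer

# sin(d) * SCALE for d = 0..90 degrees, generated once offline
SIN_TABLE = [round(math.sin(math.radians(d)) * SCALE) for d in range(91)]

def fixed_sin(deg):
    """Integer sine: returns sin(deg) * SCALE using only table lookups."""
    deg %= 360
    if deg <= 90:
        return SIN_TABLE[deg]
    if deg <= 180:
        return SIN_TABLE[180 - deg]
    if deg <= 270:
        return -SIN_TABLE[deg - 180]
    return -SIN_TABLE[360 - deg]

# Drive 250 units at a 30-degree heading: dy stays integer throughout
dy = 250 * fixed_sin(30) // SCALE  # -> 125, no floats at runtime
```

The division by `SCALE` happens last so intermediate math stays in integer range, exactly the kind of bookkeeping those old processors forced on you.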

But I’m not here to bore you with “back in my day, we had to walk 15 miles uphill in January” boomer stories like Grandpa Simpson.

I’m here because a very important lesson to learn in life, especially in engineering and technology, is that the cutting-edge work you did while cutting your teeth and learning STEM does not stay cutting-edge forever. This is something important to come to terms with. As computing devices get more powerful, as sensors get better, and as batteries develop more energy density, eventually the really hard stuff we used to struggle with becomes common.

Usually this starts as custom code libraries or features in really high end automobiles or computers. If it’s hardware, prices drop as it starts getting more use in lower trim model cars or cheaper smartphones. Then as it reaches wider adoption, it typically becomes built in as a standardized commodity, because people are tired of having to do X every time they actually want to do the next task Y.

That’s when it happens. X becomes so standard, so basic, so universal, that we stop thinking about it as a problem, and just use it as the next tool in our arsenal, and move on to the next higher problem to solve.

And that’s fine. And normal. And the reason why our standard of life as a society has gotten orders of magnitude better since The Enlightenment. We use our prior tools and achievements to unlock new tools and achievements.

But that doesn’t devalue your prior struggles as you pushed the envelope. You did great work at the time, and you helped push and advance capabilities across the board. That effort made you a better person, a better future engineer, and better prepared to solve the next problem, whether in future VEX competitions or in life.

The only difference is that the next batch of students will solve slightly different, slightly more advanced problems.


But if you don’t buy the new sensor, you could be at a disadvantage, depending on how good it is.

… or you do it the ‘old-fashioned’ way and use encoders, an IMU, and one of the libraries out there.

There is usually more than one way to solve a problem. Some solutions might be better than others, but they’re still solutions nonetheless.


“Huge disadvantage”??? Really? By losing the autonomous bonus? Not likely, unless the team psychologically disadvantages itself by thinking that if they lose autonomous they lose the match. Maybe for programming skills, but really, I could count on one hand the number of teams that do really well in programming skills at local events, and even at states.


I was talking specifically about auton, and I didn’t mean to put it that way. I meant that with the advantages it brings, your team’s programming could greatly improve with this one sensor. But this applies to programming skills more than auton, because of how error can accumulate over time.

This price argument is really bugging me.

In this robotics competition, you will pay:
$150 to register your team
$500 for field elements
$50+ to go to any competition
$hundreds+ for parts, batteries, etc…

I burn $12k+ every year (and that’s on VEX IQ). If $200 is going to break your budget, spend your time figuring out how to remedy the funding situation. Some things are prohibitively expensive; this does not strike me as one of them.

About 10% of teams do programming skills runs. (VERY approximate, 12K+ teams and a look through the skills list.) This just isn’t going to have that much of an impact.

I would like for them to be available at the time of use. So, if you want to use a 3D printed piece, that part needs to already be in a publicly available database. That would be a lot easier to enforce. Teams could post it the night before its first use, but then it’s full steam ahead…


Yay! I would love to see that!!


I am really excited by what it will let teams do in the future. The truth is, a robust-ish positioning system is the beginning, not the end, of cool stuff. The issue with teams trying to do good full-robot motion profiling is that it was hard to “close the loop” (know where they actually were with respect to where they wanted to be, and adjust).

Even without fancy high-level stuff, there is other cool complexity you can add to an autonomous. The first idea that comes to my mind is in VEX games that allow autonomous interaction.

  1. A robot that you place on the field pointing in an arbitrary direction you want to drive
  2. It drives that way for 5 seconds, either getting to a goal, hitting a ball, or hitting a robot
  3. It uses GPS to get its current position and continues on to find a ball and score it

The opponent could then, of course, use GPS to recover after it gets knocked off course. It could sense that the ball it tried to pick up wasn’t actually there and move on to a fallback object.

Using GPS not just to overcome drift from sensors, but to reposition a robot from a completely random position.

What Taran alluded to here is important: no one I know in industry/research actually uses pure wheel odometry for anything. Everyone knows its flaws and builds some absolute positioning system into any robotic system as standard; this is VEX doing the same thing. The existing VEX “problem” of trying to maintain only wheel odometry over long periods of time doesn’t exist in industry; you are not preparing for the future by doing it. What does exist is using position to do something interesting, and filtering multiple sources of information together.

Lastly, I want to talk about this point. Wheel odometry can be made to perform incredibly well. This robot uses wheel+IMU odometry exclusively for 2 minutes.

It just took a ton of work tuning the various parameters. We found the best method was turning 10 full rotations in place to tune the angle values, so any error was exacerbated and the signal-to-noise ratio was kept high.
But in the cost department, we used 3 extra wheels and 3 wheel encoders; these days we probably would have used rotation sensors, which makes the cost start to get pretty comparable to the GPS.
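That tuning method reduces to a one-line correction factor. A minimal, hypothetical sketch (the function name is made up, not from any library) of how 10 in-place rotations turn accumulated error into a multiplier:

```python
def angle_scale_correction(reported_total_deg, full_rotations=10):
    """After spinning the robot in place `full_rotations` times, compare the
    true total rotation against what odometry reported. Spinning many
    rotations exaggerates any per-degree error, keeping the measured
    ratio's signal-to-noise ratio high."""
    true_total_deg = 360.0 * full_rotations
    return true_total_deg / reported_total_deg

# Suppose odometry reported 3571.2 degrees after 10 real rotations:
k = angle_scale_correction(3571.2)
corrected = 123.4 * k  # multiply every future raw heading by k
```

Spinning more rotations makes the ratio less sensitive to the noise of any single reading, which is the whole point of the trick.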


I spent some time digging through some of the details for this. It is pretty much done how I would do it. (Note: I didn’t read every page closely, but I think I got the gist.)

  1. Take Picture

  2. Find the important corners from black to white squares; known algorithms (in OpenCV, etc.) solve this quickly

  3. Take the positions of those corners in the camera image and fit a line through them; this is a line in image space that corresponds to the field perimeter

  4. Given this line and the image, construct the actual signal: which squares were black and which were white

  5. Convert the signal of whites/blacks (1s and 0s) to the position on the wall where those signals are located

  6. At some point, calibrate the camera to account for the intrinsics (lens effects; see Camera resectioning - Wikipedia) and the extrinsics (where the camera is in space)

  7. Then we go into an optimization loop using Newton’s method; for those familiar with Gradient Descent, Newton’s method is Gradient Descent’s big brother (Newton's method - Wikipedia)
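On step 5: the actual VEX encoding isn’t public in this thread, but one common way to make a strip position-decodable is to ensure every fixed-width window of squares is unique (a De Bruijn-style code), so reading any few consecutive squares pinpoints where on the wall you are. A made-up sketch of that idea (`STRIP` and `locate` are hypothetical, and this short string isn’t guaranteed to have all-unique windows):

```python
# Hypothetical wall strip as a bit string: '1' = white square, '0' = black.
STRIP = "0110100110010110100101100110100110010110011010010110100110010110"

def locate(observed_bits, strip=STRIP):
    """Return the index along the strip where the observed window of
    squares matches, or -1 on a bad read. With a code whose windows are
    all unique, this index directly gives a position on the wall."""
    return strip.find(observed_bits)

pos = locate("10011001")  # -> 4: the window starts 4 squares along the wall
```

The longer the window, the more squares must be visible at once, but the more robust the match is against misreads.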

How this optimization works: given any robot pose on the field (a 3D pose has 6 DOF: 3 values for position and 3 for orientation), we can compute what part of the wall we would be able to see.

Loop until the derivative is ≈ 0 and thus “stable”, meaning future iterations won’t help:

  1. Given an arbitrary position P we compute what we expect to see
  2. We subtract what we do see from what we expect to see given position P
  3. We compute the derivative of the function (what_we_expect(P) - what_we_saw) and the second derivative.
  4. Move P in the direction that lowers that error: P += direction * distance. (Cleverly, while gradient descent pretends the error function is a line and moves down it, Newton’s method pretends it’s a quadratic and computes this direction using the second derivative as well: direction = -f’’(x)^-1 * f’(x), for anyone really curious how the derivatives work.)
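The loop above can be sketched in one dimension. A toy illustration (the error function and names here are made up, not VEX’s actual code):

```python
def newton_minimize(d1, d2, p, tol=1e-9, max_iter=50):
    """Minimize an error function given its first derivative d1 and second
    derivative d2, stepping by -f''(p)^-1 * f'(p) until the derivative
    is ~0 (the 'stable' exit condition in the loop above)."""
    for _ in range(max_iter):
        g = d1(p)
        if abs(g) < tol:      # derivative ~= 0: future iterations won't help
            break
        p -= g / d2(p)        # Newton step (1-D version of the update)
    return p

# Toy error: (expected(p) - seen)^2 with expected(p) = 2p and seen = 6,
# so f(p) = (2p - 6)^2, f'(p) = 8p - 24, f''(p) = 8; minimum at p = 3
p_hat = newton_minimize(lambda p: 8 * p - 24, lambda p: 8.0, p=0.0)  # -> 3.0
```

Because this toy error really is quadratic, Newton’s method lands on the answer in a single step; on the real reprojection error it takes a few iterations, which is what the loop is for.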

For anyone curious, a normal computer vision course plus a numerical optimization course would teach you how to do all of this. These courses are probably listed as something like 5000/6000-level in computer science and math, respectively.


I am not sure what expected values that would imply. The way I understand the patent document, box 132 of their algorithm associates raw observations (pixel offsets returned by the camera sensor) of the feature points P1, P2, P3 with their expected (x,y,z) coordinates on a properly set up field.

Assuming there are no accurate analytical formulas to derive robot position from the optical observations, you will need to use numerical methods, and box 138 is there to evaluate the exit condition for stopping the iterations.

I don’t believe it uses the position from the previous time step (as in, the solution computed 30 msec ago from the previous image and propagated with a filter to the present time) as the initial estimate. I think it starts fresh with each captured image and makes a very approximate analytical initial estimate. Then it iterates by predicting what the sensor would see given this position and computing how much that differs from the actual observations. If the combined error drops below some predefined expected accuracy value, it declares success and returns it as the solution.

You can argue that computational time could be reduced if you started from the current position estimate out of the onboard Kalman Filter, which combines IMU and optical sensor observations to produce a continuously evolving position estimate.
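To make the fusion being described concrete, here is a minimal 1-D sketch of one Kalman predict/update cycle; the names and noise values are illustrative assumptions, not VEX’s actual filter:

```python
def kf_step(x, P, u, z, Q=0.05, R=0.1):
    """One predict/update cycle of a 1-D Kalman filter.
    x, P : current position estimate and its variance
    u    : relative motion since last step (IMU/odometry; drifts, noise Q)
    z    : absolute position fix (GPS-style; noisy but drift-free, noise R)"""
    # Predict: dead-reckon forward, uncertainty grows
    x_pred, P_pred = x + u, P + Q
    # Update: blend in the absolute fix, weighted by relative confidence
    K = P_pred / (P_pred + R)            # Kalman gain, between 0 and 1
    x_new = x_pred + K * (z - x_pred)    # pull the estimate toward z
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                      # start uncertain
x, P = kf_step(x, P, u=0.5, z=0.6)   # estimate lands between 0.5 and 0.6
```

Note how the variance shrinks after the update: that tightened estimate is exactly what could seed the next image’s optimization instead of starting fresh.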

It could even be more accurate if it had access to the output of the motor control algorithm to predict any expected accelerations, but I think the bigger source of error is the less-than-perfect field setup.

If the filter had internal states to estimate field deviations and camera or IMU biases over time, then the subsequent observations could be much more accurate, after it “learns” about the sources of error by analyzing inconsistencies between the apparent observations and its internal model of how everything is supposed to be.

However, if the actual source of error is something else, besides what the algorithm was designed to look for, it could start to fail in strange ways. The filter may start pushing corrections into completely unrelated states, and it will be very hard to debug and understand what is going on. For example, if it expects the coded strip to be within +/-5 deg of horizontal and somebody sets up one side of the field with it tilted 8 deg, you may end up with something like the reported robot position jumping 12" away from the real values as the robot turns.

In general, a Kalman Filter is supposed to estimate only values that are observable and change with time. Parameters that don’t change over time, like the field setup deviations, are supposed to be handled by a least-squares type of calibration process and used to initialize the filter.

I am curious to see how robust the V5 GPS algorithm (without a calibration step) is going to be against real-life scenarios of multiple oopsies in the field setup. It may work great out of the box, or it may fail spectacularly when somebody sets up the field in a way that nobody from the VEX engineering team ever expected.

It may be beneficial to have a VEX-provided program that lets you take a look at the field with the V5 GPS camera from multiple points of view. It could then check the solutions for any inconsistencies and print a report on any out-of-spec parameters it detected.

Once again, I would like to ask the VEX engineering team to provide raw optical observations (i.e., apparent angles or pixel offsets of feature points and their expected field coordinates) as well as raw IMU measurements, to let more advanced teams experiment with implementing their own integrators and filters.


We will take a look; we’re probably not going to have that for the initial release, but perhaps as a future improvement.

One of the dashboard displays will show an image with overlay of some of the field strip detection results. This is to help teams get the sensor aligned and seeing the field strip in a good position. Potentially that data can be made available, but it’s something I need to study and discuss internally.

Here’s a preview from a current beta; however, this may change for the final vexos release.



Do you have a sense of how likely it is that this iteration of hardware will persist for more than one season?

Bob Mimlitch just stated on the VAIC stream that the GPS strips will be fitted to skills fields this year.


As @u89djt mentioned, a lot of info was just revealed about the new sensors in a presentation and Q&A with Bob Mimlitch, Levi Pope, Dan Mantz, and Jim Crane at the in-person VEXU/VEXAI competition and can be watched on the stream (VEX U In-Person Championship hosted by the REC Foundation - YouTube). It starts right at about 3 hours into the stream. The Q&A in particular had several questions specifically about VRC Skills use of the GPS sensor and the field positioning strips.

Bob, Dan, and Jim all verified that it will be legal for use in VRC Skills ONLY this coming season, and that the upcoming game manual update will include information that the strips will be required on VRC Skills fields. The question of how to handle fields that are used for both skills and competition matches was brought up, and varying suggestions were offered, such as having strips that can be installed as needed vs. asking teams to unplug their GPS sensor during matches. The strips will be single 12’ rolls made of banner-like material that can be velcroed onto the field wall perimeter. The opaque panels will not be required in order to use the GPS sensors and strips.


Hopefully, the strips will be provided to EPs for their skills fields, if “required”, as part of the VRC EP welcome kit.

We keep our field perimeters assembled in 12’ sections, so it should not be too difficult to dedicate one to skills only, with velcro ready to go.


More Knowledge Base Articles have just been published and added to the respective product pages. Quick links can be found here


Updated Appendix B to introduce the recommendation, and eventual requirement of VEX GPS Code Strips for all Programming Skills Matches

From the game manual update thread.


Now that GPS strips will be required on programming skills fields, would odom using tracking wheels now be useless?

If I’ve never done odometry before, would it still be worth trying it and putting tracking wheels on the bot for the 15 seconds of auton in matches, or would it barely matter because of how short the auton period is?

In this season, where midfield contact will most likely happen often, odom will still be useful in match auton due to its ability to autocorrect.


Doesn’t odometry refer to

Odometry is the process of estimating the chassis configuration from wheel motions, essentially integrating the effect of the wheel velocities. Since wheel-rotation sensing is available on all mobile robots, odometry is convenient. Odometry errors tend to accumulate over time, though, due to slipping or skidding of the wheels and numerical integration error. For this reason, odometry is usually supplemented with estimation techniques using exteroceptive sensors, like cameras, range sensors, and GPS.

as opposed to the V5 GPS, which calculates absolute position on the field, always with about the same error, which doesn’t accumulate over time?
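Right, and the drift the quote describes falls out of the integration itself. A minimal sketch of one differential-drive odometry step (an illustration, not any particular library; `odom_step` is a made-up name):

```python
import math

def odom_step(x, y, theta, d_left, d_right, track_width):
    """Integrate one pair of wheel displacements for a differential drive.
    Each step's small slip/quantization error gets folded into the pose
    and never removed, which is why pure wheel odometry drifts over time
    while an absolute fix (like the V5 GPS) has roughly constant error."""
    d = (d_left + d_right) / 2.0                # distance traveled
    dtheta = (d_right - d_left) / track_width   # heading change (radians)
    x += d * math.cos(theta + dtheta / 2.0)     # midpoint-heading update
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

# Drive straight 10 units: equal wheel travel, heading unchanged
pose = odom_step(0.0, 0.0, 0.0, d_left=10.0, d_right=10.0, track_width=12.0)
```

Any error in `dtheta` rotates every subsequent position update, which is why heading error is the dominant term in odometry drift and why an exteroceptive sensor is the usual fix.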