New Products - June 2021

I will say that, as a professional software engineer, one of our constant decision points is “buy versus build”.

Oftentimes we find that there are existing solutions that provide 80-90% of what we want, some stuff we don’t care about, and some stuff we wish they would have done differently. Do we invest the time “re-inventing the wheel” (but gaining full control over the software), or do we use the labor of others to reduce the time-to-market? Is the functionality something “core” to who we are? Do we think we can implement it “better” than the existing solution? If we build it “better”, does that give us a competitive advantage?

When @jpearman requests prior year code submissions and I see so many teams that use PROS and write their own PID/odometry/etc., I wonder what led them to decide not to use Okapilib (and whether that was even an active decision).

I know my opinion may be outside of the Vex mainline (and I’m not an educator), but I don’t see much value in re-inventing the wheel when it comes to software. Do I think teams should take a crack at writing their own PID or odometry? Sure. It’s a relatively simple concept that can teach some basic coding skills, and maybe allow them to apply what they’ve learned in math. Their implementation is likely going to have problems (straight-up errors in math/logic) and probably won’t incorporate good software design practices. Without seeing better code, I think many roboteers won’t even know that there ARE better ways to program.
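For context, the “write your own PID” exercise in question really is only a few lines. Here is a minimal sketch (the gains and names are illustrative, not taken from any team’s code):

```python
# Minimal PID controller sketch -- the "write your own PID" exercise
# discussed above. Gains and names here are illustrative only.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """Return a motor output for the current error (target - measured)."""
        self.integral += error * dt
        # On the first call there is no previous error to differentiate against.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The common beginner mistakes (no integral windup limit, differentiating a noisy sensor directly) are exactly the “straight-up errors” that reading better code would expose.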


As a member of a team who spent a month and a half last season writing code for odometry and motion algorithms from scratch, I am excited to see this new sensor.

I don’t think I’ll ever understand why some people balk at innovation, and I don’t feel as though my work last season has been somehow invalidated by a new sensor. This sensor gives an opportunity for new and experienced teams to simulate encoder tracking in a more streamlined way (which you can already do, by the way, just use Okapi and plug in your numbers).

@Taran the main point you’ve made is that three-wheel odometry is possible without understanding or applying trigonometry. This is valid.

@Connor the main point you’ve made is that this sensor decreases learning by making three-wheel odometry irrelevant. However, consider the percentage of VEX teams which have used three-wheel odometry over their lifespan. Perhaps 1-2% of teams have ever used odometry. Did you, Connor, during your time in VEX, build a robot which utilized three-wheel odometry?

Anyway, with the implementation of this sensor and the corresponding field setup, the percentage of teams which use some form of odometry will likely increase tenfold. Is it really so bad for more teams to use odometry? :man_shrugging:
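For reference, the trigonometry behind a three-wheel odometry update (in the style of the Pilons document many teams work from) fits in one short function. A sketch, with illustrative names; the convention assumed here is heading 0 along +y, clockwise positive:

```python
import math

def odom_update(x, y, theta, dL, dR, dS, track_width, back_offset):
    """One three-wheel odometry update (Pilons-style sketch).

    dL, dR: change in left/right tracking-wheel travel since last update
    dS: change in the rear (strafe) tracking-wheel travel
    track_width: distance between the left and right tracking wheels
    back_offset: distance from the tracking center to the rear wheel
    """
    dtheta = (dL - dR) / track_width  # change in heading, radians
    if abs(dtheta) < 1e-9:
        # Straight-line case: no arc to integrate.
        local_x, local_y = dS, (dL + dR) / 2
    else:
        # Chord of the arc the tracking center traced.
        r = (dL + dR) / (2 * dtheta)
        local_y = 2 * math.sin(dtheta / 2) * r
        local_x = 2 * math.sin(dtheta / 2) * (dS / dtheta + back_offset)
    # Rotate the local displacement into field coordinates at the average heading.
    avg = theta + dtheta / 2
    x += local_y * math.sin(avg) + local_x * math.cos(avg)
    y += local_y * math.cos(avg) - local_x * math.sin(avg)
    return x, y, theta + dtheta
```

A few lines of trig, but getting the arc-chord step and the average-heading rotation right is exactly where hand-rolled versions tend to go wrong.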


I agree with the many who think this is a really cool piece of functionality, and are looking forward to seeing how well it works.

I suspect the accuracy will be surprisingly good. I also expect that there will be some amount of lag in the data. Remember, the sensor has to do a fair amount of work to get a position fix:

  1. Take a picture of the field
  2. Isolate the field strip from the picture
  3. Identify the position marking within the field strip
  4. Run mathematical transforms to calculate the position of the robot on the field

Based on the fact that the sensor is relatively small and has a modest power budget, I am going to guess there will be a short, but not-trivial lag between capturing the image and returning an accurate fix. This lag can likely be ignored for a robot moving slowly, but will probably become an issue for a robot moving quickly.
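If that lag turns out to be roughly constant, a team could partially correct for it by extrapolating the stale fix forward using a velocity estimate from their encoders. A minimal sketch (the function names and the idea of a fixed latency are my assumptions, not anything from a published spec):

```python
# Compensate for a delayed camera fix by extrapolating it forward with the
# robot's current velocity estimate. Assumes a roughly constant, known latency;
# nothing here is from the sensor's actual specification.

def velocity_from_samples(x0, y0, x1, y1, dt):
    """Estimate (vx, vy) from two consecutive odometry samples dt seconds apart."""
    return (x1 - x0) / dt, (y1 - y0) / dt

def compensate_fix(fix_x, fix_y, vel_x, vel_y, latency_s):
    """Shift a stale (x, y) fix forward by latency_s seconds of motion."""
    return fix_x + vel_x * latency_s, fix_y + vel_y * latency_s
```

For a slow robot the correction is negligible, which is why the lag can be ignored at low speed but matters at high speed.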

As a result of this, motion profiling will become even more important for teams trying to move quickly in autonomous.

Also as others have mentioned, it is only useful if fields are correctly configured with the field strips.

I expect this will enable more advanced autonomous routines going forward. I also expect it to introduce new challenges for the more advanced teams.

In science and engineering, we are all standing on the shoulders of giants. None of us is developing things from scratch.

This new sensor should let many teams reach higher than before. More teams will be able to do position tracking, raising the bar for everyone. At the same time, I am sure we will have the top teams integrating this to reach levels never before seen in VEX. I look forward to seeing what they develop.


Taran, to be honest, you should be the last one talking about whether odometry is useful. You used time-based code.

That being said, I agree with Taran here. The GPS sensor gives you what looking at Pilons’ documentation for 3 days would’ve given you, but more accurately. This allows teams to focus on more complex movement algorithms, which essentially raises the ceiling of the competition. There is a lot more learning to be had if the positioning is already given to the competitors. Sure, teams won’t HAVE to use encoders, but this doesn’t mean that they don’t have to use calculus or trigonometry or other “real-life skills.” Movement algorithms require just as much, if not more, of these skills, with the only difference being that you have to actually make them yourself instead of copying what Pilons and hundreds of others have done for years.
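To make “movement algorithms” concrete: even the simplest motion profile, a trapezoidal velocity profile, exercises the same kinematics. A minimal sketch (all parameter values illustrative):

```python
# Trapezoidal velocity profile: accelerate at max_acc, cruise at max_vel,
# then decelerate, covering the given distance. Falls back to a triangular
# profile when the move is too short to reach max_vel. Values illustrative.

def trapezoid_velocity(t, distance, max_vel, max_acc):
    """Target velocity at time t for a move of the given distance."""
    t_accel = max_vel / max_acc
    d_accel = 0.5 * max_acc * t_accel ** 2
    if 2 * d_accel > distance:
        # Triangle profile: never reaches max_vel.
        t_accel = (distance / max_acc) ** 0.5
        max_vel = max_acc * t_accel
        d_accel = distance / 2
    t_cruise = (distance - 2 * d_accel) / max_vel
    total = 2 * t_accel + t_cruise
    if t < 0 or t > total:
        return 0.0
    if t < t_accel:
        return max_acc * t            # accelerating
    if t < t_accel + t_cruise:
        return max_vel                # cruising
    return max_acc * (total - t)      # decelerating
```

Feeding these velocity targets into a controller, rather than just slamming toward a setpoint, is exactly the kind of work that remains even with position handed to you.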

My only problem with this is the $200 price tag and the lack of information about whether tournaments will be required to have the strips on their fields.


I know it’s already been brought up, but are we 100% sure it’s going to be legal? Sure, the website says it’s legal, but are VEX going to supply everyone with the $40 strips? Or is that more money we’re shelling out on top of a $200 sensor?

This might just be me being blind (and apologies if I am), but is there any documentation for the sensor? The only thing I see on the website is setting up the field strips. How does it work? With 3 other robots, and all the game objects, won’t the sensor be obstructed a good part of the time?

I feel like VEX kinda dropped the ball with the release of this sensor. I get there’s a game manual update coming next week, but I feel that they should have either a) delayed the announcement to coincide with the manual update or b) released a separate mini-update regarding this sensor specifically at the same time as the announcement.


Also of potential interest, as referenced in the AI Status Report from Change Up:

While it seems Vex wants this other (e.g. non-GPS) new sensor to be able to do robot detection, it’s not there yet. So, this GPS sensor may broadcast out the location of this robot to all other robots on the field. That would also potentially lead to interesting decisions / options with the new “Neutral Zone” area of the auton. Do you use this sensor and benefit, while also potentially giving away your location to the opposing alliance?


I would suspect (but again, no confirmation of this yet) that robot location broadcasting will be required in VAIC (where every robot will have to have the GPS sensor anyway) but not in normal VRC play.


Definitely a possibility, and, given the game formats, likely the “right” decision.

That said, that would require Vex’s code (not the team’s code) running on the brain to know what type of competition (VexAI, VRC, etc.) the match was playing and either share or not share that data accordingly.

It could also be interesting if teams using this sensor in LRT matches would broadcast their position to the other team’s robot.

Maybe I’m looking too deep for interesting trade-offs in VRC, lately!


“Vex’s code knowing what type of competition is being played” could just be as simple as a function call in user code to tell vexOS that the robot is playing a VAIC match (so it should broadcast the robot’s position), and VAIC events just have to confirm at inspection that every robot does that.

I don’t think we know anything yet about robot position broadcasting in VAIC other than that it will be a thing – we don’t know if the GPS sensors will talk directly to each other or if the info will go over VEXnet/VEX Link, if there will need to be some sort of “handshake” between the robots in each match before it starts, etc.

  1. We don’t, actually; we have some time-based delays for running into the goal, but for turns and long movements across the field we use custom-made PID controllers with the inertial sensor and the built-in motor encoders in the V5.
  2. Regardless, I’m not our team’s programmer. I take care of the designing and building. I’m more than capable of writing odometry algorithms (because I have).
  3. I think you will find that our autonomous is more than consistent enough :slight_smile:

Please use a correct set of facts before lobbing insults.


This actually counters that side of the argument. The foundation of that argument is that odometry is easy and outdated, so it’s okay to move on. If only a very small percentage of people have ever learned it, then the argument is null. There is a lot more to learn from odometry right now, and this sensor completely destroys the incentive to research it. Heck, maybe the sensor works terribly, but a rookie team looking at options is going to see a GPS tracking system and an odometry system: one that supposedly keeps track of the bot’s position by itself, the other requiring trigonometry and calculus to calculate the position, on top of the fact that you have to design and build the tracking-wheel system.
Which do you think will be chosen? Even if the sensor isn’t as good as advertised, it still discourages teams from researching other tracking and positioning options.
The best engineers can make anything out of nothing, and that’s what VEX was meant to teach: to give us the bare minimum and see the amazing things we can create with it. But if they start giving us tools to cut corners, it will stunt our growth as engineers and hurt us in the long run. That is the ultimate concern.


Honestly, if someone is not confident enough to make simple odometry algorithms from scratch, how are they going to be confident enough to take the GPS sensor input and then ramp up to motion algorithms? This sensor is not going to affect many teams, only the developing teams that were already planning on shifting to odometry. The best will sit with their tested odometry, algorithms, and quality robots unless the sensor is crazy good, and then they will calmly apply their motion algorithms to the new sensor with probably an extra 8 lines of code. This should not affect elite teams at all.

My team will not have the funding for this sensor. Does this put us at a disadvantage? I don’t know yet. Time will tell. What I can say is that with no pneumatics this year and now a sensor we can’t afford, it’s looking like it.


Only Appendix D specifies the Field Position Code Strip and opaque field panels. Unless you’re competing in VAIC, the VEX GPS sensor will do nothing for you aside from burning a $200 hole in your pocket and looking pretty.


Here the rest of you go: Patent Images

“Performance Arena for Robots with Position Location System” Maybe we should call it PLS vs GPS??

Good to know that IFI was able to move the heavy lifting to inside the device.


This is not the fear. The fear is giving rookie teams a false sense of skill because they’re given training wheels, allowing them to compete with teams that are far more advanced.


Obviously a patent is not a substitute for more straightforward documentation. The documentation, along with example code etc., will come.


hopefully sooner than in six weeks :slight_smile:


Why is there fear at all? A good engineer solves today’s problems. The problems an engineer 10 years ago solved are different than the problems an engineer today solves, and will be different from the problems an engineer 10 years in the future solves.

Treating position tracking as some sort of “Achievement Unlocked” that teams must clear before being deemed “skilled” seems like gate-keeping behavior.

I remember similar thoughts when the Vision Sensor came out. Boy, that one sure didn’t live up to the hype, but it does enable teams to play with relevant real-world technology and to understand its limitations and applications. While this particular sensor seems likely to be beneficial to more teams, it too lets students get experience with accepting data (potentially noisy data) from an external system.

At the end of the day, I think Vex wants their participants to be excited about robotics and technology. To play with real-world tech that we see in high tech fields should excite many participants!


I’ll answer a few of the questions that have come up.

The sensor does all calculations for each image that the camera captures. The internal inertial sensor updates the location and heading every 5 ms (a Kalman filter is used internally), and data is sent to the V5 brain at that rate.
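To give a rough sense of what that kind of fusion loop does, here is a one-dimensional sketch with a simple complementary-filter blend standing in for the sensor’s actual Kalman filter; every name and number is illustrative:

```python
# Toy 1-D fusion loop: dead-reckon from the inertial/encoder estimate every
# tick, and pull the estimate toward an absolute camera fix whenever one is
# available. A fixed complementary-filter blend stands in for the sensor's
# real Kalman filter; all values here are illustrative.

def fuse(position, velocity, dt, camera_fix=None, blend=0.2):
    """Advance the position estimate one tick, correcting toward a fix if given."""
    position += velocity * dt  # prediction from inertial/encoder data
    if camera_fix is not None:
        # Move the estimate part of the way toward the absolute fix.
        position += blend * (camera_fix - position)
    return position
```

The point of the design is that the fast (every tick) inertial prediction keeps the estimate smooth between the slower, absolute camera fixes.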

No, the GPS sensor uses a monochrome image sensor, the vision sensor uses a color image sensor. The designs are quite different.

The GPS sensor does not have a radio or any way to directly broadcast its location; a fully configured AI robot will be able to do that, but the GPS on its own cannot.

I think this is important to highlight for those who think this will replace three-wheel odometry. All sensors have their limitations; this particular sensor does rely on seeing the field code strips at least some of the time. The sensor is designed to be another tool available to teams, not necessarily to replace all other techniques for determining the robot’s position.