VEX AI initial thoughts

(This thread was originally called Problems with VEX AI)

VEX AI is a competition where robots are completely driven by code, whether that means AI or hard-coded algorithms. My initial reaction is that you could easily exploit those systems by scattering objects around the field (they would be connected to your robot, like tether bots).

For example, c-channels with screws sticking out, perhaps colored red and blue. Something like this, while easily avoidable by human drivers, would be very hard to deal with using AI or algorithms when you don't know what the objects are until the match starts.

Or, for example, tile-colored ramps: the robot would mostly only see one in its depth sensor, and when it drives onto it, it loses GPS (since the robot goes up slightly) and its wheel odometry is thrown off because the robot went up a ramp.
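To make the ramp issue concrete, here is a minimal sketch of one possible mitigation: using a pitch reading from an inertial sensor to distrust GPS while the robot is tilted. The function, sensor names, and threshold below are all hypothetical illustrations, not part of any VEX API.

```python
# Hedged sketch: fall back to wheel odometry when the robot is tilted,
# since the GPS camera geometry assumes a roughly level robot.
# PITCH_LIMIT_DEG is a made-up threshold for illustration.

PITCH_LIMIT_DEG = 3.0  # beyond this tilt, assume the GPS fix is unreliable

def trusted_position(gps_xy, pitch_deg, odom_xy):
    """Return the GPS position if the robot is level, else the odometry one."""
    if abs(pitch_deg) > PITCH_LIMIT_DEG:
        return odom_xy  # on a ramp: ignore GPS, trust dead reckoning
    return gps_xy
```

This doesn't fix odometry drift accumulated on the ramp, but it at least prevents a bad GPS fix from corrupting the position estimate while elevated.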

Or even just placing random objects next to goals if robots are simply avoiding all opponent robots.

It could be anything that confuses the opponents' robots. I feel like this would be a major problem/strategy in VEX AI, and would be extremely hard to deal with using AI and algorithms. You would have to detect the object, identify it, determine what it is (is it a robot, or an inanimate object?), figure out whether you could drive over it or push or pull it, etc.

What do you guys think? Would this or something similar be a problem? Are there any other problems you think VEX AI will have?

(Also please note: I am not simply hating on VEX AI. I think it is a really cool new thing VEX is doing, but I feel like this issue would be a disruption to their vision of VEX AI.)


Is there a rules doc for AI yet? I haven't seen one.
In normal VRC this would be illegal anyway, so I suspect it would be in AI too. You can't intentionally detach parts or colour parts to confuse a vision sensor in VRC.


They don’t have to be colored, and they wouldn’t detach. They would be like wall bots/tether bots, connected to the robot with string.

Doesn’t the GPS sensor use the QR-code border on the field for position tracking? As for your other point, I’d guess the GDC will make a rule against something like that, as your object recognition would have to be super versatile to recognize a dummy part.


I don’t know about the specifics, but I assume it has to be at the same height as the QR code strip. So if the robot moves up, it would not get proper readings. Nevertheless, the objects would still cause trouble even if they don’t mess up the GPS.

Here is a link to the appendix specifying rules for Vex AI

The AI robots are not allowed to mimic field elements. The AI robots are not allowed to trap.


Not sure I see any issue with that, it’s just part of the game and something teams will need to deal with. A team employing that strategy would have to deal with it too as far as their own second robot is concerned.

Thanks, and I did actually read it yesterday, now that I come to think of it :man_facepalming:t3:

The 45 seconds isolation period does help with this, but there are still 75 seconds where this would be a problem.

Teams would likely be able to deal with their own such objects, but I just feel that dealing with a never-before-encountered obstruction would be too hard to handle autonomously. Although VEX AI is supposed to be hard, and VEX might just leave this in as part of the challenge.


You bring up valid points; even professional engineers have trouble solving problems like the ones presented in the AI challenge. But it seems as though VEX is committed to providing as many resources as possible to make that not the case.

Tracking the robot’s position is a huge problem that only a few teams can say they’ve solved successfully (in EDR, at least), but this problem is already solved for you. Knowing the position of the robot on the field, in tandem with the position of everything around it, makes writing code that responds to the environment much, much easier. I’m curious to see what the function library will be for all the sensors, but that can always be updated.

TL;DR I think it’s too early to say this game is intrinsically flawed (at least that’s how I interpreted your argument) when we haven’t even seen what the sensors are truly capable of, nor have we ever played a game like this before.


While the VEX GPS does give you data on the robot’s position and orientation, it (like the systems that teams have already developed) is not infallible. It will be interrupted by other robots being in the way, by your robot spinning too fast to get a clear camera frame, etc. The thing is, it will fail differently than other systems such as encoder-based odometry or the Inertial sensor. While it does work at a base level right out of the box, that doesn’t mean there’s no room for teams to improve on it, and it certainly doesn’t mean that every team which owns it will suddenly be at the same level as the top teams.
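One way a team could improve on it is simple outlier rejection: compare each GPS fix against odometry and discard fixes that imply a physically impossible jump, which is exactly what a blocked camera or a fast spin tends to produce. A minimal sketch, where the function name and threshold are illustrative assumptions, not any real API:

```python
import math

# Hedged sketch of GPS outlier rejection against odometry.
# MAX_JUMP_M is a made-up per-update bound for illustration.

MAX_JUMP_M = 0.25  # a real robot can't teleport farther than this per update

def accept_gps_fix(gps_xy, odom_xy):
    """Accept a GPS fix only if it roughly agrees with dead reckoning."""
    dx = gps_xy[0] - odom_xy[0]
    dy = gps_xy[1] - odom_xy[1]
    return math.hypot(dx, dy) <= MAX_JUMP_M
```

Rejected fixes would simply be skipped, letting odometry carry the estimate until the camera gets a clean frame again.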

It will also be interesting to see if (and when) it will be legal for use in VRC, as doing so would require having the pattern tape on field perimeters at every VRC tournament, and ensuring that it is done correctly.


I do think this game is interesting compared to other games VEX produces because it’s targeted at a more experienced audience. This is evident in the lack of restrictions on the actual robot, and in the fact that autonomous is hard enough to code for only 15 seconds, and impossibly hard for a full minute even while the robot isn’t being interfered with. I think this will force people to look at autonomous very differently than they did before.

By the way, @nickmertin I know it’s unlikely but I think we should give vex the benefit of the doubt. They provided ways for the robot to reset its position as well, so the problems you described could be mitigated. It seems they were already aware that this could be an issue, or maybe it already is an issue. What fun would the game be if there wasn’t a real challenge anyways?


Ditto, I think it’ll be very interesting to see how teams approach it.

I’m not trying to devalue anything about VEX’s system, just trying to remind everyone that there’s no silver bullet. Three-wheel position tracking has its own set of issues as well; for example, it is not particularly good at handling very rough movement (i.e. with high acceleration rates). VEX GPS will be great, but it will have its limitations as well.

One interesting thing that I think will come from the use of VEX GPS is more teams implementing some form of sensor fusion, i.e. merging data about the same thing (where the robot is) from multiple sources (VEX GPS, tracking wheels, Inertial sensor, etc.). Each system has its own points of failure, so by combining them one can try to cover all their bases to ensure that there is always reasonably accurate position information available to their code.
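As a rough illustration of the idea, a minimal fusion step could be a confidence-weighted average of the position estimates from each source; real implementations would more likely use a Kalman filter, and all names and weights below are assumptions for the sketch:

```python
# Hedged sketch of naive sensor fusion: a confidence-weighted average of
# (x, y) position estimates from multiple sources (GPS, tracking wheels,
# etc.). Weights are illustrative; a real system would derive them from
# each sensor's current reliability.

def fuse(estimates):
    """estimates: list of ((x, y), weight) pairs from the various sensors."""
    total_w = sum(w for _, w in estimates)
    x = sum(p[0] * w for p, w in estimates) / total_w
    y = sum(p[1] * w for p, w in estimates) / total_w
    return (x, y)

# e.g. fuse([(gps_xy, 0.7), (odom_xy, 0.3)]) leans on GPS but lets
# odometry pull the estimate when the two disagree
```

The point of combining sources this way is exactly what the paragraph describes: each sensor fails differently, so a blend degrades gracefully where any single source would fail outright.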

I also think the VEX GPS is great because it provides a bridge for teams to get from basic encoder-only autonomous to the types of advanced custom systems that you see from the top VRC teams, by giving teams a friendlier means of learning the theory and skills needed. I think it’s very likely that the top teams in VAIC will be using custom sensors/systems rather than sticking with the VEX AI suite, but it has a huge place in getting teams to that level.


I’m just worried that there won’t be enough teams in general. I think it’s a super cool idea, but I don’t know if schools are willing to pay an extra 200, and since they have to get sensors and understand programming, it’s super hard to be competitive. This would discourage teams from even starting. I really want to see tournaments with well-done robots, but I’m afraid most tournaments in general will be buggy.


This may be a way for “elite” teams to, literally, put their money where their mouth is. If teams are frustrated that qualification matches include too many claw/push-bots, the higher barrier to entry for VexAI may provide a higher bar. Combine that with the fact that, it appears, high school students and college students would compete together, and it could be very interesting.

That said, just because something is allowed, does not mean it is required. It may be the case that a team could field 2 15-inch pushbots at an AI competition.

My team and I are super excited about the potential for VexAI. I, at least, have concerns about numbers. Would there be enough local-ish competitions and enough competitors to be worth the time, effort, and expense of going this route? How feasible would it be for a team to put out both a VRC robot, as well as a pair of VexAI robots? Hopefully we find out more soon.


For the most part, a lot of what’s learned in VRC can be applied to VAIC and vice versa. So I don’t think it’s going to be like completely learning/inventing new ways to engineer.


True, but the infrastructure to field at least 2, likely 3 robots (15-inch, 18-inch, 24-inch) will price a lot of teams out. Even fielding 2 15-inch robots, one of which would dual-compete in VRC and VAI, could be too much. And it may wind up being the case that the VexAI equipment (new sensors, etc.) is not permitted on a VRC robot, particularly if the special components can only be acquired by registering a VexAI team (as opposed to purchased directly from the VRC section of

Still, very promising and very exciting. First year will probably have a lot of kinks to iron out, but I give Vex the benefit of the doubt on working through those and making continuous improvements. Lord knows, V5 did not roll out well, but it seems to be in a reasonable place now.


It will be a lot more feasible for organizations which currently field 3+ VRC teams to replace one or two of them with a VAIC team. I think at first VAIC will largely be composed of those teams as well as VEX U teams.


Is there a rule against pinning or trapping in the AI challenge?