Idea: Stop coding autonomous for VRC? Automate it!

Hey Guys,
I am Vansh from … High School and … Robotics Team. What are your thoughts on completely removing the hard-coding aspect of skills runs, and possibly even match play, so the robot completes the tasks truly autonomously using a vision sensor, A*, limit switches, and pure pursuit (built on well-developed odometry)? I will be making a video shortly showcasing the theory in action with a virtual robot, but the basic idea is:
Give the robot the field (and its obstacles) and a goal it has to get to; A* will calculate the most efficient way to get there, and pure pursuit (or another controller) will get the robot there accurately. Then use the vision sensor to track toward the goal and a limit switch to know when to clamp on. You could probably automate stacking as well using the vision sensor (use the x coordinate to calculate the optimal distance to place the goal for the platform to balance)…, making autons just strategy and not actually coding?
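To make the "give it the field and a goal" part concrete, here is a minimal sketch of A* on a small occupancy grid. The grid, start, and goal here are made-up placeholders, not real field dimensions, and a real robot would run this over a much finer grid (or a graph of waypoints) before handing the path to a follower:

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 4-connected occupancy grid: 0 = free, 1 = obstacle.
    Returns the list of cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:          # already expanded with a better cost
            continue
        came_from[cell] = parent
        if cell == goal:               # walk parents back to reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None

# Toy field: the middle row is mostly blocked (say, by a goal or platform).
field = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
path = a_star(field, (0, 0), (2, 0))  # route around the blocked row
```

The returned cell list is what a follower like pure pursuit would then track.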
Just an idea, open to discussion and other responses!


How is this different from Vex AI?


Not sure, but our school doesn't have VEX AI, so I never looked into that.

I think this is an awesome thing to do and would certainly be a good introduction to more interesting real-world programming, so I would say go for it to get that experience. Unfortunately, doing A* or D*, using odometry or SLAM, or whatever other cool stuff you want to do is not as efficient when you already know the terrain. Sure, if you had access to LIDAR you could use it to avoid robots or unexpected field elements, but in a high school auton you don't really have enough time to do much with that information, and in skills your robot is the only robot affecting the state of the field. In a competitive environment, especially this year, doing complex calculations in real time will likely lose you more time than any advantage you might get, but that's just my opinion.

A couple of things I do think are worthwhile are odometry and pathfinding; I just think precomputing those paths, or doing only basic computations (not graph searching) in real time, has produced really good, robust autons in VEX.
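As a sketch of the "precompute the path, follow it cheaply in real time" approach, here is one step of a pure pursuit controller. The waypoints and lookahead distance are placeholders, and a real implementation would feed the returned curvature into the drivetrain's velocity commands:

```python
import math

def pure_pursuit_curvature(pose, path, lookahead):
    """One pure pursuit step: pick the first waypoint at least `lookahead`
    away from the robot, transform it into the robot's frame, and return
    the curvature of the arc that drives through it.
    pose = (x, y, heading_radians); path = list of (x, y) waypoints."""
    x, y, theta = pose
    # Choose the lookahead point (fall back to the last waypoint).
    target = path[-1]
    for wx, wy in path:
        if math.hypot(wx - x, wy - y) >= lookahead:
            target = (wx, wy)
            break
    # Rotate the target into robot-local coordinates.
    dx, dy = target[0] - x, target[1] - y
    local_x = math.cos(-theta) * dx - math.sin(-theta) * dy  # forward
    local_y = math.sin(-theta) * dx + math.cos(-theta) * dy  # left(+)/right(-)
    d = math.hypot(local_x, local_y)
    # Standard pure pursuit arc: curvature = 2 * lateral offset / distance^2.
    return 0.0 if d == 0 else 2.0 * local_y / (d * d)
```

Each control tick just re-runs this against the precomputed path, which is exactly the kind of cheap per-loop computation that avoids real-time graph searching.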

Here is the tool I built to make creating autons more streamlined. Unfortunately I haven't had enough time with our robots to implement pure pursuit, but maybe it will give you some inspiration for automating autonomous routines:


This kind of code is extremely hard to accomplish effectively. Last year, our (high school) AI team was only able to get a very basic version of this code running in real time during matches. To my knowledge, we were the only team at the VEX AI Championship that had a semi-functional version of this code. For VRC teams, preprogramming routines is almost always going to be your best bet. The amount of effort required to make effective, truly autonomous code like you're describing makes it impractical for VRC teams.

Here’s a video of my team’s interaction period code during a match, which attempted to use some of the concepts you describe. 7700R Vex AI World Championship - YouTube


An easy way to make it work would be to put a GPS sensor on your robot. Based on the robot's orientation and position, it can figure out which alliance it is on and infer what it should do.
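A minimal sketch of that idea: map the GPS starting position to a preloaded routine. The side/alliance conventions here (negative x = red, positive y = left) are made-up assumptions for illustration, not the official field coordinate spec, and the routine names are hypothetical:

```python
def pick_auton(gps_x_mm, gps_y_mm):
    """Map a GPS starting position (mm from field center) to a routine name.
    ASSUMED convention: negative x half = red alliance, positive y = left side.
    Check your own field setup before trusting signs like these."""
    alliance = "red" if gps_x_mm < 0 else "blue"
    side = "left" if gps_y_mm > 0 else "right"
    return f"{alliance}_{side}_auton"
```

The robot would call this once at the start of auton and dispatch to the matching preprogrammed routine.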

The only issue is that some comps don't have the field strips.


Agreed. Now that I think about it, top teams are getting away with using just PIDs (nothing wrong with that) and breaking records, so I don't think all this stuff is as useful as it sounds, but with access to better sensors it would definitely be interesting…

Or put them on wrong; I've been to a few comps that do that. At the last comp we went to, they were asking teams if they were using GPS, and because most of them weren't, they decided not to put the strips on, although I think it is required.

Also, sick tool for the auton planner.

I love this idea. You should totally do it. If anything, it'd be a learning experience.


I really enjoy seeing people bring this into VRC. I know for FRC we have a lot of systems for this; here's one if you are interested:


I might not be able to implement it soon with state/nationals coming up and then Worlds, but it's definitely an idea I will explore in the meantime while getting the algorithms ready. The long part is tuning.

Nice! Thanks for the reference; FRC seems really cool.

Ok, here is my wishful thinking:
Assuming all sensors work perfectly every time (which they don't, VEX moment), you could have a vision sensor for seeing goals/rings, a GPS sensor for field positioning, an inertial sensor for heading (more positioning capability, not strictly necessary), and then potentiometers/encoders on all the motors. You might be able to put the vision sensor on a spinning motor to see more of the field when searching for mobile goals:
Look for Mobile Goal
Find Mobile Goal
Face Mobile Goal
Drive Forward till Mobile Goal
Grab Mobile Goal
Turn to Platform
Drive to Platform
Balance Mobile Goal
Repeat till match ends.
Theoretically possible, but in reality it would be a nightmare to build a super consistent robot and code this effectively, since all of the code relies on the vision sensor, and the VEX vision sensor can be VERY inconsistent at times.
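The loop above can be sketched as a simple state machine. The boolean inputs here are stand-ins for the real sensor reads (vision sees a goal, the clamp limit switch, odometry says the robot reached the platform), and the grab-to-deliver transition is compressed to one tick for brevity:

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()    # spin the vision sensor, look for a mobile goal
    APPROACH = auto()  # face the goal and drive toward it
    GRAB = auto()      # clamp on once the limit switch triggers
    DELIVER = auto()   # turn to the platform, drive there, balance the goal

def step(state, goal_visible, limit_pressed, at_platform):
    """One tick of the mobile-goal loop; inputs stand in for sensor reads."""
    if state is State.SEARCH and goal_visible:
        return State.APPROACH   # found a goal -> drive toward it
    if state is State.APPROACH and limit_pressed:
        return State.GRAB       # touching the goal -> clamp on
    if state is State.GRAB:
        return State.DELIVER    # clamped -> head for the platform
    if state is State.DELIVER and at_platform:
        return State.SEARCH     # balanced -> repeat until the match ends
    return state                # no trigger -> keep doing the current thing
```

Structuring it this way at least isolates each vision sensor dependency to one transition, so a bad read stalls a state instead of derailing the whole routine.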


Oh, tell me about the inconsistency. But yeah, overall I agree with you.

No sensors work 100% of the time, including those not made by VEX. Dealing with bad sensor readings is a whole discipline unto itself, which you are getting exposure to here.


I would say the only way to make it work is to keep it simple. I know I'm not one to talk about making convoluted code (hell, I have 7 different drive codes; 11 if I hadn't deleted some), but yeah, for something with sensors you need to keep it simplistic.


Yeah, I totally agree with you. In my post I said "assuming all sensors work perfectly every time (which they don't)" since I've had so many issues with potentiometer values shifting, and the vision sensor is just so hard to integrate into code that it's not even worth it.


Idk, I haven't had issues with the potentiometer (using the V2 at least), and the only time the vision sensor messes up for me is when it's unplugged or something… I made some pathfinding code that works, if you want to take a look at it.

I also like this idea. You should totally do it; if anything, it'd be a learning experience.