2025 VEX IQ Indiana State Championship – My Personal Feedback and Suggestions for Future Events

I wanted to take a moment to share some thoughts following the 2025 Indiana VEX IQ State Championship this past weekend. This post is intended as a constructive conversation starter — not to point fingers or stir controversy, but to raise awareness about some areas of concern and open the door for collaboration between Coaches, Event Partners, RECF, and VEX leadership to ensure the best possible experience for students moving forward.

Our program has been involved in VEX for years, running multiple events per season and participating consistently at the State, Signature, and World levels. As Coaches and Event Partners, many of us pour countless hours into creating fair, competitive environments for our students — and the State Championship is the pinnacle of that experience each year.

We know how difficult it is to run an event of that scale, and we appreciate the massive effort by TechPoint, the REC Foundation, and the more than 500 volunteers who helped make it happen. Moving from Lucas Oil to the Indiana State Fairgrounds is not a simple copy/paste move, and the logistics of running 4 entirely separate competitions at the exact same time are incredibly complex to manage. With a couple of small tweaks, I feel the Fairgrounds could be an even better venue for hosting all of the Indiana State Championships.

That said, there were several issues this year that are worth discussing as a community.


Skills Field Consistency

Teams that arrived early and competed in Skills on Friday had to contend with fields that were noticeably uneven due to the combination of the concrete floor and the riser setups. Efforts were made Saturday morning to level the fields (which we appreciated), but teams who completed their Skills runs on Friday did so on the uneven fields and never had the opportunity to benefit from the leveled ones. In events where rankings can come down to a single point, this inconsistency has real consequences. In the future, ensuring fields are properly leveled and verified before Skills opens would go a long way toward maintaining competitive integrity.

I applaud @Steve_Hassenplug for taking charge of the Skills fields and working with me to ensure everything was put together and ready to go before Skills opened Friday night. Our school brought 6 of the fields, extra balls, extra rubber bands for the switches, and several extra 1x20’s for the inevitable cases where the loading station broke from the force of repeated ball loading. I even went through all of the Skills fields with one of our students to make sure they were properly put together, and I’m glad we did, because several of them weren’t. We weren’t made aware of the unevenness until late Friday evening. The next morning, Steve asked me if we had any levels, and coincidentally, one of our parents had some in her car, so we were able to lend those to Steve to level the fields.


Rule Enforcement Consistency

One of the bigger points of confusion this weekend revolved around the “over the wall” loading strategy (often referred to as the “China Load”). While some teams were warned or penalized, others were not — even during Finals matches where the strategy may have conflicted with rule SG4e. That inconsistency left teams, coaches, and spectators unsure of how the rule would be called as the event progressed into the Finals.

Some clarification may also be needed around internal guidance. A referee shared in a public Discord conversation that they had been told to be “very lenient on illegal loads during quals,” and that they could not change their rulings during Finals. If accurate, this highlights the importance of consistent rule enforcement across an event — especially when strategies that may push the boundaries of legality are involved.

We’re happy to share this information directly with RECF, as well as some recommendations, if it’s helpful in future training or communication efforts.


Student Interview Experience

While most of the judging seemed to go smoothly, we did have at least one situation where a student interview left the team feeling discouraged. In that case, the interaction felt dismissive — students weren’t given a chance to introduce themselves, and the tone from the judge was unexpectedly rigid. Thankfully, the issue was escalated appropriately, and thanks to support from Andy at TechPoint and Max, the Judge Advisor, the team was given a second opportunity with a different panel.

This isn’t to say the judging process was broadly flawed — but to highlight how important it is that students feel respected and heard during these moments. Many of them spend weeks preparing for interviews, and those few minutes can have a lasting impression. We want to encourage students to speak up when something feels off, and we’re thankful this team felt empowered to do just that.

This experience serves as a good reminder for all of us: consistent training and clear expectations for judges can help ensure that every team walks away from their interview feeling confident and valued, regardless of the outcome.


The Bigger Picture

At the end of the day, we all want the same thing: to empower students to grow through robotics. We teach them to advocate for themselves, understand the rules, and approach competition with respect. When they see inconsistencies or feel like their hard work didn’t matter, it takes a toll — and in some cases, it discourages them from continuing in STEM altogether. That’s not a reflection of any single decision — it’s a reminder of how important consistency and clarity are across all aspects of the competition experience. That’s something we should all care deeply about.

These values are also reflected in the REC Foundation’s Code of Conduct, which outlines the expectations for all event participants — including acting with integrity, exhibiting professionalism, following the rules, and creating respectful, student-centered environments. The Code encourages us to hold ourselves to high standards not just in how we compete, but in how we support one another and deliver these experiences to students.

This post isn’t meant to dwell on what went wrong — but to start a discussion about how we can improve. If there are opportunities for coaches, EPs, or local leaders to be part of the solution, many of us are more than willing to help.

I’ve had the opportunity to serve as a Head Referee at events in the past, and I’d love to continue volunteering at large-scale competitions. However, because I already help run our own events during the season, I rarely get to attend as just a coach. When it comes to events like State and Worlds, I want to be able to stand behind my team, support them, and celebrate everything they’ve worked so hard for — and I know many other coaches feel the same. We all wear a lot of hats in this community, and finding the right balance can be tough.

Let’s continue this conversation — I welcome any constructive thoughts or feedback from the community, TechPoint, RECF, or VEX leadership.

11 Likes

@Steve_Hassenplug did an amazing job as skills head ref. Even though my team didn’t make finals due to a bad (very bad) schedule, we placed 5th in skills thanks to the leveling of the fields and Mr. Hassenplug helping us get a match replay.

This was my biggest frustration with State. Because many of the teams that scored very high (like WaNee Echo) used China loading, and given the inconsistency in how it was called, especially in Finals, I am not sure they should have placed as high as they did. I would have to watch the livestream to prove it, and I am not discrediting their skill; most of them are great teams and deserved to make Finals. But for teams like ours, seeing that we had the ability to win State and didn’t because of a bad schedule and the uncalled violations of teams ranked above us is incredibly frustrating.

Not trying to point fingers, but I think that guidance was given to make Indiana look “better” at IQ. I keep seeing people say that Indiana is the best VEX region in the world, and that is probably true for V5, but for IQ, regions like California, China, and Taiwan have much higher scores. These rulings may have been made to push the scores up so Indiana seems better.

Do you have a link to the Discord thread where this was said?

I had a similar issue with our interview. The interviewer who did most of the talking had a thick accent, which made him hard to understand at times. It was also hard to cover everything we wanted to because he kept changing subjects. It’s also hard to be interviewed in such a big room with so much noise.

I definitely felt defeated after the bad experience on the IQ side of things, and I have heard some sketchy things about the HS V5 side, but seeing my friends win Skills Champion and Tournament Champion in MS V5 and the Floating Hotdogs triple-crowning ES IQ State made the trip worth it. Being able to see the best teams in the state compete (and almost being able to compete with them; please add alliance selection to IQ, GDC, please) was a great experience.

Also, your teams did great at State, @cookies. We were paired up with 1024G, and even though we didn’t score high, it was a great match.

How did everyone like 100A Jugglenauts’ performance at opening ceremonies?

3 Likes

I just want to point out something I saw during the live stream. I noticed that in some of the qualification matches, the teams didn’t start. I don’t know the exact cause, but it seemed like the matches shared a timer. I would definitely recommend having multiple monitors for match timers during competitions, but I understand if one of them didn’t work or broke. Just something to improve on and hopefully fix next year.

2 Likes

Yeah, we were one of those teams. They were running two matches at once, and the monitors were not in a very convenient spot. With it being so loud, we often didn’t hear the starting buzzer, causing us to not start in time.

3 Likes

It’s hard for me to speak on all 444 qualification matches because it was difficult to get into the stands to watch most of the matches, let alone my own teams’, so I can only talk about what I know. If you go back to the VOD, you can see in Q350 that our team was having issues with their controller connecting to the brain. As soon as the timer started, our driver tried pleading his case to the Head Referee (in the green REC shirt), and it looked like the driver was ignored. Even the TechCats team looked confused about why the timer was started. It seemed to me that the ref gave the thumbs up that they were ready to go but didn’t even look at the drivers to confirm they were ready.

We run our competitions with 4 fields going at once using a single timer. This year was the first time we added individual field timers instead of just the TVs behind the drivers. I’m still able to run all 4 fields at the same time. The key is making sure everyone (the timer operator, the refs, and the drivers) is fully ready to go. If there’s a false start, you let the other matches finish and then quickly rerun the affected match.

Again, at the end of the day, we as the adults are there to make sure the kids have the opportunity to do their best. Mistakes happen, but it’s what we do to correct the situation that matters.

2 Likes

I agree with you!
Some teams didn’t even switch controllers, and I feel the judging wasn’t as strong as it should have been.
At Skills, people were shoving balls into the robot too.

Thanks for some context, @Cookies and @BananaPi. It seems like what happened to those teams is that they were never “confirmed” as ready with the official thumbs-up sign. @BananaPi also said that the monitors were in a hard place to see, which could explain why those matches didn’t start. I did notice one match (I don’t remember which, but it did happen) where the teams clearly didn’t give the thumbs up and the match started anyway. That match is what made me think there was a shared monitor: the other side was moving, but this side was not, and the students were standing there a bit confused. As a robotics student, I have personally never been to a tournament that ran multiple matches at the same time, so I don’t know exactly how they are set up.

Who is to say that clear expectations were not set for the judges? I am sure there was high-quality training for the judges, as all of the EPs and JAs were highly experienced. It is good to hear that you were satisfied with the resolution of the unfortunate situation.

Also, I am sure that The TechPoint Foundation For Youth would be happy to hear your feedback directly via email or in the contact form on their website: Contact Us — TechPoint Foundation for Youth

7 Likes

I did a little bit of a deep dive on this with some of our teams a few years ago. They kept arguing that they didn’t do well because they were paired with “bad” teams while others only got paired with “good” teams. Every team has their strengths and weaknesses. This is why you need to strategize with your alliance to figure out how you can work around each other’s “flaws” and get the best score you can. You cannot blame your performance on the schedule. There is a randomization to the match schedule that you cannot control, but you can control how you utilize your practice time with your alliance(s) before going to the field.

I’d love to talk to the GDC, or whoever makes the decisions, about how to improve match generation. We may never know how the algorithm is set up, but maybe it could be improved to look at a variety of things, like previous competition results and skills scores, to help make matches more equal across the board. Maybe changing it isn’t the right decision…
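To make that concrete, here is a rough sketch in plain Python of what seeding alliances by prior skills scores could look like compared to a purely random draw. To be clear, this is not how Tournament Manager actually builds schedules, and the team names and scores below are made up; it only illustrates how seeding narrows the spread in alliance strength.

```python
# A rough illustration only: hypothetical teams and scores, not TM's algorithm.
import random
import statistics

def random_pairs(teams):
    """Shuffle the teams and pair them off, roughly what a blind random draw does."""
    shuffled = teams[:]
    random.shuffle(shuffled)
    return list(zip(shuffled[::2], shuffled[1::2]))

def seeded_pairs(teams, score):
    """Pair the strongest remaining team with the weakest remaining team."""
    ranked = sorted(teams, key=lambda t: score[t], reverse=True)
    return [(ranked[i], ranked[-(i + 1)]) for i in range(len(ranked) // 2)]

if __name__ == "__main__":
    # Hypothetical skills scores for a 16-team pool.
    score = {f"T{i:02d}": random.randint(10, 90) for i in range(16)}
    teams = list(score)

    for label, pairs in [("random", random_pairs(teams)),
                         ("seeded", seeded_pairs(teams, score))]:
        combined = [score[a] + score[b] for a, b in pairs]
        print(f"{label}: combined alliance scores {sorted(combined)} "
              f"(spread {max(combined) - min(combined)}, "
              f"stdev {statistics.stdev(combined):.1f})")
```

In this toy example, the seeded draw produces a much tighter spread of combined alliance strength than the random one. Whether that kind of seeding is actually desirable for a championship is a separate question, but it’s the sort of thing that could be tested.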

My background for the past 8 years has been esports, hosting both scholastic and amateur competitions, online and in person. You learn how to run better competitions by testing out new things. Not everything has to be changed, but unless you try something, you’ll never know if it’s really a better solution. Small tweaks can make a huge difference. Maybe you make the wrong decision and have to change it back, but that’s part of the learning experience.

Appreciate that! That team is one of our 7th grade teams, and they definitely show a lot of potential and drive to want to be one of the best. I’m sure next year they will be an unstoppable force.

Sadly, I wasn’t able to see it. Trying to get into the stands was a pain point of mine and I was also dealing with the interview situation. The kids got to see it, and they enjoyed it!

4 Likes

What I saw was a rough competition… teams who qualified for Worlds did deserve it in some way; however, it’s also unfair that teams who should’ve gotten a DQ are instead going to Worlds.
I know my team had a chance for Worlds, and I’m fine with others qualifying instead of us, but only if they truly deserved it.
The judges were not consistent at all, and many robots even started wrong.
One of our alliance partners started crashing out on the field because we were “the problem”.

Overall, on the bright side, it was fun to go around the pit areas and collect stickers and wristbands.

1 Like

An email was sent on Sunday to TechPoint, RECF, and the GDC, and we also submitted the event anomaly log form as recommended by our RM. I don’t necessarily expect a direct response, but my goal is simply to ensure that the concerns are heard — not just for our team, but for the good of all teams and the long-term integrity of the program.

The purpose of my post isn’t to say “my team got cheated” — it’s to encourage reflection and improvement so students across the board can continue to have positive and fair experiences.

1 Like

Hi, bystander here, not my circus, not my rodeo, not my event.

I want to say that about 6 years ago, the entire “TM is rigged against me for qualification matches” complaint was at a head. I generated a 32-team event and then generated 100 matches. I then did the same with a 64-team event. I ran statistical analysis on them. They were about as random as you could get at that time with the C function rand() using the current time as the seed. Disclaimer: no idea what TM uses for a seed. So there isn’t really a bias.
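A minimal sketch of that kind of check, in Python rather than C, might look like the following. The team and match counts are arbitrary, TM’s real generator isn’t public, and a real schedule also balances how many matches each team plays, which this naive draw does not; it only illustrates the counting.

```python
# Illustration only: naive random draws, not Tournament Manager's scheduler.
import random
from collections import Counter

def random_schedule(num_teams, num_matches, teams_per_match=2):
    """Draw teams at random for each match (IQ teamwork matches pair 2 teams)."""
    teams = list(range(num_teams))
    return [tuple(sorted(random.sample(teams, teams_per_match)))
            for _ in range(num_matches)]

if __name__ == "__main__":
    schedule = random_schedule(num_teams=32, num_matches=100)

    appearances = Counter(t for match in schedule for t in match)
    pairings = Counter(schedule)

    per_team = [appearances.get(t, 0) for t in range(32)]
    print("matches per team: min={}, max={}".format(min(per_team), max(per_team)))
    print("repeat pairings: {} of {} distinct pairings".format(
        sum(1 for c in pairings.values() if c > 1), len(pairings)))
```

If the draw is fair, the appearance counts cluster around 2 * 100 / 32, roughly 6 per team, and repeat pairings stay rare; that flat, boring distribution is what “no bias” looks like.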

RECF does try to reduce the bias some by splitting teams from the same club/school into different divisions. This helps keep “incest clone designs” (tm Foster) out of the pool.

This isn’t going to help. The issue you want to fix is “Awesome teams being stuck with square bots” and you are missing that a square bot team can rebuild into a Meta+ robot between their last match and the competition. Across the years I’ve seen this happen more than twice. Square bots get dragged to uplevel events by awesome bots and some luck. They get the hint and rebuild.

The only way to fix this is to run enough matches that everyone plays with everyone else. Actually, plays with everyone twice, since mechanical failure may happen. Since the “VEX IQ Indiana State Championship MONTH” isn’t going to become a thing, the problems that come with a low match count aren’t going away.

Foster Opinion: I want to say that I’m a Steve fan. Anyone who would take the time to level fields to within 1/8 of an inch for a game that uses plastic, lumpy, foam-filled, multi-sided balls that vary in diameter by 1" is amazing.

The GDC has some blame here: The entire “throw the ball into an intake and that is OK” should have been killed in week one. I feel like 100’s of roboteers have been to Penn and Teller’s class on how to make objects move. Or watched the Nolan Ryan series on how to throw a fastball. Ball manipulation has been an art since Ogg created the ball in 2034 BC. But for some of you, please come to my master class “Three Card Monte” in Hallway B. Bring a stack of 20’s, nothing smaller for “betting” umm I mean “class fees” :thinking:

Lastly, “Shotgun Starts,” AKA the single timer. Not sure why this was a big deal. I’ve done lots of events where the referee (not the judge) says “We are running multiple matches with the same clock. Listen to me, I will count you down,” and it works. I also want to be on the record saying “THERE WERE 6 PLAYERS AT THE FIELD, YOU CAN ALL COUNT THE TIME.”

6 Likes

I very much appreciate the effort that goes into this competition and the amazing experience our kids get versus most states, where regionals don’t look much different than a local competition. I know the venue was less than ideal in some ways, but I did like the pits being split from the competition fields, and we didn’t mind the walk. Obviously, seating was an issue for coaches.

Re: Skills fields - This was extremely frustrating to my team. I understand adjustments were made Saturday morning to improve them (and apparently some teams that ran Friday were able to rerun - we didn’t know this was an option). My team has had great skills scores all year. They have strongly objected to having wall followers or a long intake, as they enjoy the challenge of programming. They use a proportional controller based on the inertial sensor, but unfortunately, it was not enough to compensate for the inconsistency of the fields. It definitely put a damper on their day. I wish the game committee put more value in autonomous skills at the IQ level, but I do understand that this is challenging for many teams.
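For anyone curious what that approach looks like in practice, here is a minimal sketch of a proportional heading-correction step. This is generic Python, not the actual VEXcode IQ API; the gain, the helper names, and the tiny simulation at the bottom are all stand-ins a team would replace with their own inertial sensor and drivetrain calls.

```python
# Generic illustration of a P controller on heading; not VEXcode IQ API calls.

KP = 0.8  # proportional gain: how hard to steer against heading error

def heading_error(target_deg, current_deg):
    """Signed error in degrees, wrapped into [-180, 180] so crossing the
    0/360 boundary doesn't send the robot spinning the long way around."""
    return (target_deg - current_deg + 180) % 360 - 180

def drive_speeds(target_deg, current_deg, base_speed=50):
    """One control-loop step: return (left, right) drive percentages.
    Flip the +/- if your robot turns the wrong way for your heading sign."""
    correction = KP * heading_error(target_deg, current_deg)
    return base_speed + correction, base_speed - correction

if __name__ == "__main__":
    # Tiny stand-in for the inertial sensor: start 10 degrees off target and
    # watch the correction shrink the error over a few loop iterations.
    heading = 10.0
    for step in range(5):
        left, right = drive_speeds(target_deg=0.0, current_deg=heading)
        print(f"step {step}: heading={heading:6.2f}  left={left:5.1f}  right={right:5.1f}")
        heading -= 0.3 * (right - left) / 2  # crude model of the robot's response
```

Called every few milliseconds on the robot, a loop like this keeps the drivetrain pointed at its target heading. The limitation my team ran into is that it only corrects what the sensor reports: a tilted or uneven field pushes the robot around in ways a heading-only P term can’t fully compensate for.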

Re: Interview - My team had a great experience this year; however, we had a horrible experience the year before. I watched the interview from a distance last year, and one of the judges spent the entire time on his phone and never made eye contact. So while we had great judges this year, I do think it’s really important to emphasize that all judges need to be attentive to every team. Seven minutes is a really short window for a team to show what they have, so it’s important that the judges stay engaged.

Re: Rule following - As a head ref myself for local competitions, I appreciate the challenge of this year’s game and calling it fairly. In my mind, the game committee did not do the head refs any favors in making this game easy to call, and I hope the future game is better in that regard. I know my team was paired with one of the China load teams that had broken the loader multiple times; they were warned they could be DQ’ed and therefore did not China load during our match. So I know there were some discussions.

We did have issues with match starting. There was no effort to ensure drivers were ready. We had a match where our alliance partner was asking the ref a question and they started the match during the clarification, which caused us to lose significant time at the start of the match. I know it’s hard to keep things running on time, but I do think the drivers should have to give a thumbs up to indicate they are ready.

I’m always frustrated walking around the pits and seeing clear violations of student-led design (coaches weren’t necessarily touching the parts, but the conversations were something like “no, you need to move this gear to this hole”). As a coach, this took me some time to get used to. I know some folks are concerned with Worlds not allowing fields in the pits. I personally support that decision, as I do see a lot of boundaries being crossed when teams are practicing in their own pits. I’ll be interested to see how that works at Worlds and whether that is something that should change at the regional level. That said, 6 practice fields was nowhere near enough for an event this big… so the only way to make that happen is to have a far larger practice area.

4 Likes

TP4Y did send out an email link for feedback to teams that attended, so if you (or your coaches) got that, use that form to get them feedback. Personally, I had an 8-paragraph response trying to convey what I noticed.

Personally, I have liked how Lucas Oil has worked for hosting, but I’m sure the cost is prohibitive by comparison to the Fairgrounds. With fewer teams available to qualify to Worlds, I get the point of downsizing the venue.

What I opined in my feedback is that I’d rather see them hold firm on the number of invited IQ teams (200 total), instead of adding another 22 ES and 11 MS, and thus increasing the tightness of space. I think that could have eliminated an entire row, making more room for practice fields or even blocking out the pits (like ours) that had a support post that took out about 20% of our space (see picture).

The tight quarters made for a rough time, exacerbated by teams with huge shop carts and wagons and teams pushing their tables into the aisleway (and apparently not getting called on it as has happened in the past at LO).

As others have said, as a coach, trying to watch the teams was almost impossible, or required running up and down steps. I would have liked to be allowed to watch from the center aisle between VRC and IQ (with my Coach button to let me in), or to have chairs set up close to the field in front of the stands, or something.

All in all, I am grateful for all the work that has to go into pulling off something like that. I hope they listen to feedback and make improvements for next year. I’m also grateful for concessions that have things like BBQ Mac’n’Cheese and elephant ears, without crazy LO lines at lunch time.

3 Likes

Meant to add the second picture that showed a different angle (and the pit decorated) with the pole. This thing seriously made the pit so much more crowded…

1 Like

I agree. We were running autonomous in our pit and it consistently worked, but it never worked on a real field.

I would first like to thank the volunteers of the event. I know it takes a village and I truly appreciate everyone that took time to help and be part of it.

The venue made it much more like a world experience and I know that the kids really enjoyed it.

As for what could make this better moving forward: more matches. As BananaPi points out, his matches were challenging. Only 8 matches per team means some teams get a schedule like that while others get “lucky” and have great matches. I feel that a State event with 111 teams should have 10 or 12 matches per team, run over two days. Make it a true world-class event. That will really separate the good teams.

Another suggestion: in the drivers’ meeting, please mention how you plan to call the event. I know that was done during Full Volume, so why not this time, especially with the questions around the “China load”?

In regards to the event itself: I am truly sorry for every single kid who was there. They were promised an opportunity to compete for a state title, but those dreams were crushed. Through no fault of their own, the referees’ decision was to pass on DQs. That gave a clear advantage to teams playing the wrong way. If you would like proof, I have it. Just watch matches 100 and 416 and tell me the rules were followed. And that doesn’t even bring up Finals; I have only watched a few, but I’m sure there are more. The fact is that teams were in Finals that shouldn’t have been, and teams were left out that should have been in.

This post is not to dwell on the past or reinvent the wheel. But every single kid and coach was affected by the rules not being enforced, rules that go as far back as Game Manual 0.1.

I am sorry your kid did not get a fair chance, and I am sorry your coach didn’t get to celebrate your wins. If the event is held to the highest standards as a Worlds-qualifying event, it should follow the simple rules of the game.

Please don’t get me wrong. I am a huge supporter of robotics, and in my humble opinion, there is no better program the school offers. What I have seen kids problem-solve and come up with is absolutely amazing.

I am not a coach or a competitor. I am simply a parent of a kid that loves robotics.

I want to thank every single alliance partner my son had this year and the past 4 years of robotics. I want to thank the coaches and the parents for all that you do.

During IQ, my son has been really fortunate to be a part of some amazing teams: #96902C and, these last two years, #1024S.

And hopefully he has 10 more matches to go!

5 Likes

This actually is an issue (and it was a lot more than 1/8". On one field, the difference was ~1").

You weren’t there. It was bad. There was about 20-30 feet between some fields and the timers, and the refs didn’t actually make sure all of the drive team members were ready; they just waited for one “thumbs-up”.

4 Likes

I think a good way to minimize the “unfairness” of being paired with good and bad teams is to set a minimum score to qualify for State. Looking through the scores of teams that qualified for State, there were several with combined skills scores below 20 or average teamwork scores below 9, and not surprisingly, those teams didn’t perform well at State. Meanwhile, there were many teams with skills scores of 70+ that were left out. This is unfair both to the kids that were kept out of State and to the teams that made State and were paired with those underperforming teams.

I share your frustration with the violations of student-led design. I know it’s hard to police this aspect, but there are times like you’ve described where the violations are pretty obvious, and other times where there is scuttlebutt about teams running “dad bots” or autonomous skills programs being produced with a lot of hands-on assistance from coaches/parents. I don’t know what the answer is, but I think it’s definitely something that should be addressed.