VEX IQ competition Excellence Award question

I had a middle school Slapshot competition yesterday in Korea. Our team was 30597A, and we were the only team in our group who went to the finals, but the team who got the Excellence Award was 30597B, who had a much lower score than us. The differences were:
Qualification match:
30597B - 84.60, Rank 17
30597A - 130.60, Rank 3
Skills match:
30597B - 34, Rank 22
30597A - 44, Rank 18

Of course, our team did not expect an Excellence Award, since rank 18 out of 25 isn't a good score, but they got it with even lower ranks.

I read the requirements and criteria for winning an Excellence Award, but there were no objective requirements; they were all written as
'~ ranking', meaning there is no actual standard and the final decision is made by the judges?

Both teams had engineering notebooks. Our team was quite confident in our notebook, since we worked the way it is described in the engineering notebook rubric; we thought we fulfilled all the standards at a 4~5 score. Our notebook was also quite a bit thicker than others, although that does not directly mean ours was better (our notebook was around 65 pages; we had no page limit since we used the online notebook provided by VEX). Meanwhile, 30597B themselves claimed that their notebook was written in two days: the day before a local school tournament in December and the day before yesterday's tournament. Even if that is exaggerated, we can at least infer that they did not constantly record their progress.

The last criterion I can think of is the interview. I couldn't watch them having their interview, but for us, it was in much less depth than we expected. We had practiced for days before the competition, following the interview rubric. However, the interviewer did not let us show and explain everything we had practiced. It was so short that I asked the interviewer to hear more as she was moving on to the other team, and explained how we programmed the robot. Indeed, the interviewer did not ask us about the programming part, which she should have, since it makes up a whole standard of the rubric.

So, my question is, how can the judges decide the Excellence Award with these incomplete criteria? Is it just their subjective view of how good teams look?

The judges usually pick the team that can easily explain what their robot does and how it does it and if they have a good notebook. Skills is also important but you don’t have to be first to get it.

I thought the Excellence Award was given to a team that reached a certain rank, like you at least have to make the finals to achieve it. Even if not, rank 17 out of 28 in the qualification matches isn't enough for the Excellence Award, I guess?
I actually asked a man who seemed to be the head of the competition, and he answered that the judges mainly focused on the interview. Is it right to do so?

There is a specific process that judges follow. But long story short, it’s based on the top teams that are in contention for the design award.

Judges will assess both the notebook and the interview. According to the RECF Judge Guide, the judges will look at which teams are at the top of the skills rankings and the teamwork qualification rankings (the guidance is something like top 10% or top 10, whichever is larger). Judges don't take teamwork finals rankings into consideration. Judges will also look at sportsmanship and other factors during the competition. In the end, the judges will debate which teams they felt were the most qualified for the Excellence Award. It's subjective, as the judges look at all these factors and determine which team stood out the most.


Here is the link to the judges' requirements for Excellence:

Here is the important piece of info you are looking for outside of notebook and interview:

Be ranked in the top 10 or top 30% of teams (whichever is greater) at the conclusion of qualifying matches

Be ranked in the top 5 or top 20% of teams (whichever is greater) at the conclusion of the Robot Skills Challenges if Robot Skills Challenges are offered at the event.
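The two cutoffs quoted above can be sketched as a quick calculation. This is a hypothetical helper (the function names and rounding choice are my own, not from the RECF guide); the 27-team count comes from later in this thread:

```python
import math

def qualifying_cutoff(num_teams: int) -> int:
    """Top 10 or top 30% of teams, whichever is greater."""
    return max(10, math.ceil(num_teams * 0.30))

def skills_cutoff(num_teams: int) -> int:
    """Top 5 or top 20% of teams, whichever is greater."""
    return max(5, math.ceil(num_teams * 0.20))

# For a 27-team event, as discussed in this thread:
teams = 27
print(qualifying_cutoff(teams))  # 10 -> rank 17 in qualifying is outside the cutoff
print(skills_cutoff(teams))      # 6  -> rank 22 in skills is outside the cutoff
```

By this reading, neither a rank-17 qualifying finish nor a rank-22 skills finish would meet the stated eligibility cutoffs at a 27-team event.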

There isn't anything listed about what to do when no team meets all four of the criteria, which is what it sounds like happened here. (Maybe there should be a fallback, such as being top in 3 of the 4 categories.) So it could have been left up to their discretion. And most often, teams with great interviews are the ones the judges remember, so maybe the other team's interview was better and that's what persuaded the judges to choose them.

You'll never know what the scoring was, and the event is over, so nothing will be changed. You can message the EP and make sure they are aware of the requirements for next time. That's about all there is.


There was an FAQ on this topic a week ago that is helpful. But the judges may not have been aware of it since it just came out.



Then the award shouldn’t be given out. I don’t know if the spot would just drop or go to the skills list. Given that it was a regional event, I think it would go to the skills list at that event.

Here’s the event:

For context, there were 27 teams at this tournament, so they were very near the bottom of the rankings. So, yes, this seems very odd. Judges' deliberations are confidential. However, I would find it very difficult (even in an extreme hypothetical) to believe that the award would be given to a team with these rankings.

But, it’s done. Congratulate your friends and be a good sport!


The quality of events has certainly been an issue, but it is improving with the requirements for Certified Head Referees, Certified Event Partners, and Certified Judge Advisors. While this won't guarantee that all events follow the game manual, Judge Guide, etc., it certainly helps a lot. You can have your team mentor contact the EP and ask whether there will be a certified Judge Advisor and certified Head Referee (since there currently isn't a way in RobotEvents to find out), and if there isn't, don't go to that event.

As an organization that hosts events, including regional, state, and now a signature event, we meet all the certification requirements for EP, JA, and Head Referee (in fact, we have multiple certified personnel), and, given that our entire organization is volunteer-led and -run, I don't think there is any excuse for not having certified key volunteers. I look forward to a time when an event will not be allowed to be posted until the EP, JA, and Head Refs are all certified and listed by name for every event.

This policy has been in the Judge Guide for at least three years, if not more.


One more thing to think about is that most high-ranking robots seemed to be the same, also known as "clone bots". They had a flywheel as a shooter, a basket containing the disks, etc.

Something similar to this, and our robot was likewise identical to the other robots. However, we had written on the first page of our engineering notebook that the idea was our own and that we did not copy other robots. In addition, 30597B also admitted that their robot was based on other robots they saw on YouTube. It was just that their robot design happened to be rare at the competition.

However, according to sankeydd, there should have been no Excellence Award.

I don't know if you can reach this conclusion and call them "clone bots" just because there are multiple robots with similar mechanisms. You'll find most high-performing robots to be similar in design for various reasons, including the fact that those designs may be the most efficient. Or teams have been to multiple tournaments, seen other teams perform well with them, and identified these as designs they can incorporate. Or they saw it on YouTube. It's good that your team came up with your own design.

You'll find many teams with robots that have flywheels, baskets to hold disks, etc. But their designs are still unique; you can't call them clone bots. However, clone bots do exist. My IQ team was at a tournament in November, and there were two robots from two different organizations that were pretty much exactly the same. I was taking a picture of our team and our robot on the practice field while both of those teams' robots were on the same field. To my surprise, when I downloaded my pictures to my computer, both robots had their touch LEDs, connector pins, expansions, etc. all in the same locations. When two identical robots have the same non-critical or non-functional components in the exact same layout, those are clone bots, and they must have gotten the design somewhere. Teams that see a mechanism and get inspired to build a similar one will build something that functions the same, but their design will be quite different, especially when it comes to the non-functional choices.

I'm not sure if sankeydd was referring to bots with similar designs. According to the Judge Guide, if no team turns in a notebook, or if the judges see other issues that make them determine a team is not deserving of the Excellence Award (bad sportsmanship, a robot that clearly did not reflect the skill level of the kids and it came out in the interview, etc.), then one shouldn't be given. But I don't believe that just because there are similar robots at the competition, they are clones and an Excellence Award should not be given. At the end of the day, the decision about the Excellence Award is a subjective one (based on a qualitative discussion and debate by the judges). There is no black-and-white formula to it.


A word about this: it seems as if you are the one approaching both judges and referees regarding how they are conducting the tournament. Under the student-centered policy, this is not behavior that is encouraged. If the students have an issue, the students should be the ones bringing it up. Parents and/or mentors approaching the volunteers to advocate for their child's team is absolutely "red" behavior and could have contributed to the overall impression of the team.


I apologize if my words came across that way. My team and I did approach the refs, interviewers, etc., but there was no interference from parents or mentors. We treated them politely and cautiously, and they also gave us permission to ask such things.