How much does each category weigh into excellence?


I just got back from a tournament and was surprised by the team that won Excellence. The team that won it had already won a judges award and a design award this season, but they weren’t the best in the other categories.
The event had 22 teams, and the winning team got 3rd in skills and 5th in qualifications. Another team, which had won a judges award, placed 1st in skills and 2nd in qualifications. A third team, which got a design award earlier in the season, placed 2nd in skills and 3rd in qualifications.
My question is, how much does each category weigh into the decision of excellence award?

For the Excellence Award, the notebook tends to be weighted more heavily than performance. Of course, performance still matters a lot, but placing 5th or 6th is still plenty good enough to win Excellence if you have the best notebook and the best interview.


I think you are misunderstanding how the Excellence Award winner is chosen. If you look through the judge guide at the details on how to determine the excellence winner, you will see that there are a few hurdles to jump through before you ever get to ‘how they actually did in the competition’.

1st, they have to turn in a notebook
2nd, that notebook has to score very highly on the rubric
3rd, the teams whose notebooks scored high (ballpark, approximately 5 teams) will then get a Design Award interview, which also needs to score very highly on the rubric
… All of the above is being done while the competition is still going on, before quals are complete and before skills is closed…

At this point, the judges should know the contenders for the Design Award (which is the foundation of the Excellence Award; if you aren’t a Design Award contender you should not be considered for the Excellence Award).

It is only after all of the above steps are completed that the rankings are considered, and even then it is more of an “are they in the top xx segment” (usually top 10 of quals and top 5 of skills) question than “are they the top in skills/quals”. Also considered at this point would be their rank in other judged awards, their robotics program as a whole, their team conduct and professionalism, etc. The judges will then make a qualitative assessment of all of these factors in determining which of those 5-ish teams wins the Excellence Award (and I’ve found that by this point in the process you are often down to maybe 2-3 teams, at least at smaller competitions).

Keep in mind that the Excellence Award is NOT an award given to the team placing highest in Skills or Quals; highest Skills gets Robot Skills Champion, highest Quals is well set up to potentially win Tournament Champion - both of these are performance-based awards. Excellence Award is a judged award and “is presented to a team that exemplifies overall excellence in building a high-quality robotics program. Team is a strong contender in numerous award categories.”

I highly recommend that ALL teams look over the judge guide at the start of each season in order to better understand what awards are available and how they are determined!


I hate to say this but a lot of the time it depends on the judging team. Especially at smaller, local events, almost all judges are just volunteers and many don’t have any experience judging tournaments.

At the end of the day, although there are great guidelines and judges guides, the judges have complete control and all of these awards are subjective.

I’m not saying that it’s rigged or anything (although sometimes it can be biased). I’m just saying that, although there is a specified way to determine the Excellence Award winner, judging is a subjective process and many technicalities are up for interpretation.


It’s totally true that things can ‘go awry’. Judges are just people, and are generally volunteers, and are sometimes not as aware of the rules as one would hope, and sometimes have only skimmed through the judges guide (ggrrrr), and can potentially be prone to bias (generally unintentionally). With over 7 years of being involved as an adult in Vex, I have definitely seen some… questionable… decisions. However, the leap to conclude that ‘the judges were bad/biased/whatever’ is often too quick. At the very least, teams owe it to the people willing to volunteer as judges to fully understand what the criteria ARE before assuming they weren’t met.

Case in point:

We have here 3 teams listed.
Team A - Qual 5, Skills 3, previous design award, previous judges award, currently won Excellence
Team B - Qual 2, Skills 1, previous judges award
Team C - Qual 3, Skills 2, previous design award

From this scenario, my immediate assumption (without even knowing what state this is in, but based on knowledge of the criteria and previous judging experience) is that all was well in the judging world at this event.

  1. Only Team A and C are likely to have been in the running for Excellence. Team B has not previously won a design award and apparently didn’t win it this time; they either don’t have an Eng Notebook or it is not yet developed to the level that the other 2 teams displayed.
  2. Both Team A and C, having previously won design awards (indicating they have a decent Eng NB & Interview, as judged by other events) and being in the top 10 Quals & top 5 Skills, are well situated to be Excellence Award winners.
  3. There are many other factors that would be at play as to which one of those 2 teams would end up with Excellence, and most of those factors are things that other teams (and outside observers such as the forum) wouldn’t be able to determine because they weren’t involved in that specific judging process, but it could easily go either way. That Team C ranked just a little higher in Quals and Skills than Team A would not likely be one of the factors. How their Eng Notebooks compared, how well they did in the interviews, how the team conducted itself during the event, how they ranked in other award categories - all of those things (and more) would be considered, and the judges would make a qualitative decision that could go either way without being a surprise (IMO). The same situation at a different event might go the other way due to minor variables (like maybe a teammate has a bad day and is a bit grouchy during the interview or towards other teams, or maybe one of the teams got good news and is on fire during their interview, or a team has a new innovative design piece they are able to explain well, or any one of a bunch of small differences, including different judges).

To be fair, the competitors never really know if a higher-ranked team bombed their interview or notebook. Ideally, it’s the most well-rounded team, for lack of a better term, but if judges have one team that did great in most categories and failed one, they may choose a team that was good in every category instead. It’s unfortunately rather subjective.


I actually had this same issue at one of our comps, but as a lot of people have said, it mainly depends on who is judging you and where. If it’s mainly volunteers at a smaller event, it’s most likely going to be subjective. However, if you’re at an event with experienced volunteers or engineers who know what to look for, the awards are based purely on the rubrics, with little to no consideration of who’s on the team or what the robot looks like unless there’s a huge tie.

With that being said, what I learned is that it just depends on the judges. Some judges have preference towards all-girls teams, some judges have preferences towards ‘pretty robot’ teams, and the like. I do believe that design qualifications hold the highest weight, then skills, then match rankings, then team conduct, robotics program, etc.

Here’s the link to the rubric, which has the required percentages for rankings and skills:


@TeamTX did a very good job describing judging team workflow.

I would like to add that, if multiple teams score high on the notebook rubric and in the rankings, then the team interview plays an even more prominent role as the tiebreaker.

The closer the top teams score on all other factors, the more challenging the questions they will get during the interview. Ideally, we try to ask each team questions of increasing complexity until we see them stumble and reveal the limits of their knowledge.

During the interview we don’t want to see team members trying to evade hard questions and run out the clock. We want to see them cooperate in working through tough questions close to the edge of their abilities.

If, after that, we still have more than one team running head to head for an award, then another possible tiebreaker is to find out what other teams think about the top contenders.

When I do pit interviews, I always try to ask every team to compare and contrast their robot with the top robots at the competition. This way we informally learn which teams everyone else is looking up to, and also whether those teams are open to helping less experienced teams.

It is always a pleasure to give the Excellence Award to a team that is helping and educating everyone else, rather than taking the guarded attitude of keeping their robot design shrouded in secrecy from the rest of the world.

Finally, if multiple teams are still tied for the top awards at the end of the day, the judges could make a conditional award decision to make sure that each of them goes home with a trophy: whichever team gets Tournament Champion, the other ends up with Excellence.


Many teams just don’t realize just how important a good interview is. At a large event (especially worlds), sometimes the “starting point” for the judges going out to do interviews after looking at all the engineering notebooks, is half a dozen teams with identical scores on the EN rubrics. In this case, the judges’ impressions during the interview (even things like one person does all the talking, another looks bored, or a professional level of politeness) can be what determines an award.


My team learned that our interview was the sole reason we didn’t receive qualifying judged awards. It’s definitely super important, and it shows how well you are able to communicate with your teammates and convey that you work well together.


Yes, first start by reading. It’s two steps. The first step gets you into the pool. The second step is a subjective decision.

Judging Process for the Excellence Award

Step One
Judges complete the rankings for the Design Award following the Design Award Judging Process. The top contenders for the Design Award should be considered candidates for the Excellence Award.

Excellence Award candidates should:

  • be at or near the top of the Design Award rankings;
  • be ranked in the top 10 or top 30% of teams (whichever is larger) in qualifying rounds during the last round of qualification matches played;
  • be ranked in the top 5 or top 20% of teams (whichever is larger) in Robot Skills (does not apply to VAIC-HS or VAIC-U);
  • rank among the top teams in other judged awards;
  • exhibit a high-quality team interview with the Judges;
  • exhibit a high-quality robotics program;
  • be student-centered, show positive team conduct and dynamics, sportsmanship, and professionalism.

Note: A team does not have to be among the Teamwork or Tournament Champions or Finalists to receive the Excellence Award but must be competitive in the qualification and skills rankings (skills rankings does not apply to VAIC-HS or VAIC-U).

Step Two
Judges use their best qualitative judgment based on observations and interactions with the teams to choose the team they believe best exemplifies the best overall robotics program at the event. Judges should ask themselves the following questions:

  • Has the team met the criteria to be considered excellent?
  • Does the team exemplify overall excellence?
  • Would the Judges want the team to be emulated by other teams?
  • Do the Field Note to Judges forms returned by event volunteers reflect the candidate’s overall excellence?

My notes:
At your standard 24-team tournament you need to be top 10 in quals and top 5 in skills. If you’re not focusing on skills, you’re not even going to be considered. They don’t give a cutoff for design, but really you are looking at 1st, 2nd, or 3rd. They usually don’t have to go deeper than that to find a team that knocks it out of the park. (I’m generally the EP, so I don’t know how the points really work out… Is it 60/70? I’m not sure.)
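The “whichever is larger” cutoffs quoted from the Judge Guide above work out like this (a minimal sketch; the function names are my own illustration, and the real process involves far more than these ranking thresholds):

```python
def quals_cutoff(num_teams: int) -> int:
    """Excellence candidates should rank in the top 10 or top 30% of
    teams in qualification matches, whichever is larger."""
    return max(10, round(num_teams * 0.30))

def skills_cutoff(num_teams: int) -> int:
    """Excellence candidates should rank in the top 5 or top 20% of
    teams in Robot Skills, whichever is larger."""
    return max(5, round(num_teams * 0.20))

# At a standard 24-team event: 30% of 24 = 7.2, so the flat top 10
# applies; 20% of 24 = 4.8, so the flat top 5 applies.
print(quals_cutoff(24), skills_cutoff(24))  # -> 10 5

# At a 50-team event the percentages take over: top 15 and top 10.
print(quals_cutoff(50), skills_cutoff(50))  # -> 15 10
```

So at most local-sized events the flat “top 10 / top 5” numbers are the ones that matter; the percentage rule only kicks in at larger events.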

For the design award, the notebook counts more than the interview, but without a good interview you will never get close to the top. It’s too many points for you to neglect. It’s also the best way to make an impression on the judges to get them to consider you on Step Two.

Notebook: 45 points (64%)
Interview: 25 points (36%)