Tournament managers: how do you run your judging of awards?

In IQ, we only interview those teams who indicate that they have a STEM Presentation, as that is a requirement for the Excellence Award. We have two judges who read the Engineering Notebooks and then check the robots and teams to see how well they match what was described in the notebooks. Their ranking is then used with the entire judging panel to help determine the Excellence Award, since ranking for the Design Award is a requirement for the Excellence Award.

As stated above, teams can only win one judged award, so if the top-ranked Design Award candidate becomes the Excellence Award winner, then the 2nd-ranked Design Award candidate becomes the Design Award winner.

I could see where that could cause a lot of resentment among the students if they see teams with bad robots consistently winning Design Awards (and State or World slots) based on little more than excellent marketing brochures. I’ve seen judges dismiss notebooks because they were too tidy and “therefore not a real documentation” of the design struggles. I’ve seen other judges dismiss notebooks because they were not tidy enough: full of sloppy sketches, notes jotted down hastily during experiments, things taped in, handwriting that wasn’t the best, and so on, the kind of thing you see in real-world notebooks. The notebook evaluation process is immensely subjective, so it seems to me that the robot’s performance should play at least some part in the decision process.

You are welcome. This has worked for us for the 11 years we have been running events and is the model that all Wisconsin event partners follow. (Thanks to Marc Couture, our first regional director. I have to give props where props are due.)

It’s not supposed to be a portfolio; it’s supposed to represent the design work that actually happened. Please don’t take offense at this; it’s a serious question: have you examined the Design Award Rubric? It shouldn’t be possible for a “marketing brochure” to score highly against the rubric.

Neat is nice. Messy, so long as it is legible, is okay. Neither is part of the rubric. So if they score to the criteria, it won’t matter.

I agree there is some subjectivity. But there is a rubric with score sheets, criteria with descriptions of the levels, and point scores to award for reaching any particular level in each criterion. Same thing with the interview sheet. Then you total the points.

And yet, for the Design Award, the people who developed the competition tell us it doesn’t.

I was working on a reply, but @kypyro's response is dead on.

I agree 100% with the last post.

I think the Design Rubric is really well adapted to bringing consistency to Design Award expectations for Worlds. Not using it as a tool in judging is a disservice to the teams. As we get better at using it, the next step is to hand the filled-out rubric back to the teams so they too can learn from the experience.

This has not always been the case. And I believe that this is the first year for IQ that engineering notebooks and STEM presentations are required to be considered for Excellence.

I agree that it can be difficult to review the notebooks consistently. I think the Design judges need to be the most carefully selected and best trained. My best results have come when I’ve used past VRC students and actual engineers as Design judges.

From the other perspective, a team's ranking can be greatly affected by the capabilities of their alliance partners and their opponents. At my daughter's competition this past weekend, 4 teams ended up undefeated and won autonomous in all their matches. A couple of weeks prior, some of these same teams ended up with one or two losses and didn't win autonomous in their matches because they ended up paired against each other.

Amen! RECF is finally getting to the point of having good material so that judging can be more consistent, we just need to make sure it is being used. Providing the judging rubrics back to the teams is invaluable for them to improve.

My personal opinion is that teams should be able to win more than one judged award. In past years at my IQ events I’ve had teams that were by far the best candidate for more than one award and I gave them both (the rules allowed that at the time). When there have been multiple strong contenders for multiple awards, I have tried to spread the awards around to as many teams as possible because they were all deserving.

Don’t get me wrong: I’m not saying that the rubrics are useless. But I’m not sure how you go about calibrating the rubrics with a group of judges who might have little or no experience with VEX and who don’t know each other. In other words, one judge might give more points than another for the same accomplishment in the same category, so the fact that you can generate numbers doesn’t eliminate the fact that those numbers are highly subjective.

I know that, no matter what, you can’t get around the subjective factor. But, in my opinion, ignoring the physical reality of the robot’s performance seems to make the subjective/nepotistic/political factors all the more difficult to average out and the judge room debates more difficult to suffer through.

My feeling is that the minor awards (Judges, Think, etc.) should be considered almost completely separately from the major awards, such as Design and Excellence. I strongly encourage my judges to give the minor awards to great teams that are just beginning, or who are overcoming especially difficult obstacles, or who are excelling in some other way but are not in the running for Excellence and Design. My feeling is that the kids going after Design and Excellence on a local level are “out for blood” and really don’t care about receiving a minor award - or might even be embarrassed by it. It’s the psychology of how the awards work, in my opinion.

I can’t speak for IQ as I have only hosted two IQ events, and they were in the past two seasons. It has always been our policy, in the 11 years that we have run robotics competitions, not to give a team more than one judged award. We carried that philosophy over to our IQ events as well. I agree with FullMetalMentor, in that I like to see the lesser judged awards go to teams other than the top 2 or 3 teams. At events I host, we give out Amaze, Build, and Create as the lesser awards. They are targeted to specific areas. Obviously, the best robot at the competition could conceivably win all of these awards, and should be mentioned as a candidate if appropriate; however, if they win the Excellence Award, in my opinion, there is no need for them to win the Create Award, for example. Plus I like spreading the awards around. If I have a 32-team event, I want as many different teams as possible to be recognized for doing great work.

Although the Design Award doesn’t specifically deal with the performance of the robot, I do ask my judges who read the notebooks to go to the pits, look at their top 5 robots to see how well each actual robot fits what they saw in the notebook, and watch at least one match of each as well. There are usually only two judges on that panel at my events, so they are collaborating throughout the process.

We have been using professional working engineers as our judges. At the tournament we are hosting this weekend, we will be doing things a little differently than in the past. The judges will start by evaluating the notebooks. After selecting the top several notebooks, they will seek out those teams in the pits and on the fields, watch them, and interview them in their element. They will, as time permits, visit with other teams as well.

We have a matrix they use that awards points in various categories. After all is said and done, they add up the points and a winner is chosen for the design award. That goes into the matrix for selecting the excellence award winner.
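For illustration only, here is a minimal sketch of that kind of point matrix. The team numbers, categories, and point values are all made up; the actual matrix we use defines its own categories and weights.

```python
# Hypothetical point matrix: made-up team numbers, categories, and points.
design_matrix = {
    "123A": {"notebook": 9, "interview": 8, "robot_matches_notebook": 7},
    "456B": {"notebook": 7, "interview": 9, "robot_matches_notebook": 9},
    "789C": {"notebook": 6, "interview": 6, "robot_matches_notebook": 8},
}

# Add up the points for each team.
design_totals = {team: sum(scores.values()) for team, scores in design_matrix.items()}

# The highest total takes the Design Award.
design_winner = max(design_totals, key=design_totals.get)
print("Design Award:", design_winner, design_totals)

# The Design result then becomes one column in the Excellence Award matrix,
# alongside whatever other factors the judging panel considers.
excellence_matrix = {team: {"design_total": total} for team, total in design_totals.items()}
print(excellence_matrix)
```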

This tournament we will have a very high-level electrical engineer and a mechanical engineer who now works in IT. These two have been at our last couple of tournaments. We may also have an engineering professor who teaches materials engineering.

FYI, based on the Judges guide ( http://www.roboticseducation.org/documents/2014/11/local-judges-guide-vex-robotics-competition-2.pdf ), to be a qualifying event you must use the VEX Design Rubric and not any other rubric for the Design and Excellence awards.

In the Judges Guide it states on the first page:

On page 8:

But does it say anywhere how exactly the results of that rubric must be used? For example, in one scenario the judges use the rubric to shape their own personal opinion of a team, and in the end they cast their vote by raising their hand. In another scenario, judges do not vote by a show of hands but merely turn in their scoring sheets so the judge advisor can add up the points. In the “add up the points” scenario, the judges who award more points in the categories they care about will dominate the totals. And since the point values are so small (1 point for this, 2 points for that), the scoring is somewhat lumpy, or highly granulated as some might say, and not as likely to average out smoothly.
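To make that concrete, here is a small sketch with hypothetical scores (not real rubric data) contrasting the two scenarios: a judge whose scores are spread wider can dominate a raw point sum even when the other judge preferred a different team, while a show-of-hands vote can end in a split that still has to be argued out.

```python
# Hypothetical rubric totals from two judges for three teams.
scores = {
    "judge_a": {"team_1": 28, "team_2": 27, "team_3": 26},  # scores bunched tightly
    "judge_b": {"team_1": 18, "team_2": 24, "team_3": 20},  # scores spread widely
}
teams = ["team_1", "team_2", "team_3"]

# Scenario 1: the judge advisor just adds up the scoresheets.
# Judge B's wider spread effectively outvotes Judge A's preference for team_1.
totals = {t: sum(judge[t] for judge in scores.values()) for t in teams}
print("raw totals:", totals)          # team_2 wins on points

# Scenario 2: each judge casts one vote for their own top-ranked team.
votes = {t: 0 for t in teams}
for judge in scores.values():
    votes[max(teams, key=lambda t: judge[t])] += 1
print("show-of-hands votes:", votes)  # 1-1 split, so the panel still has to talk it out
```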

Plus, in our area, we have multiple teams that “max out” the score. So someone needs to make a subjective decision as to who wins.

Edited to add: This is why judges should not just read off the script for awards. If my team gets a perfect score on the rubric but does not win an award, knowing what set the winning team apart is very useful and gives us something concrete to improve. If nothing is said, it remains a big secret.

Our judges are trained that the rubric is just one tool used to rank the teams. In the case of the Design Award, the two judges who read the Engineering Notebooks start by sorting through them and setting aside the obviously inferior ones. They then split the remaining notebooks and work through them, using the rubric to come up with a top 5 or so within the group each of them read. They then read the other judge's top five and discuss where those 10 rank, ultimately reaching consensus on the top 5 notebooks. We ask them to comment in every notebook, even briefly, with something to help the team improve. They are then asked to seek out those top 5 teams to determine 1) whether the team has a good grasp of what is in their engineering notebook (can the team verbally articulate the design process they followed?) and 2) whether the robot reflects the design as stated. They are not looking for sheer achievement on the field, but whether the robot does what it was designed to do. They then present their top 5 to the overall judging panel.

Yes! Exactly! And so, as a judging advisor, you’re right back where you started from, standing in front of a room of squabbling judges.

Which is another reason why game performance is an important factor, I believe. When a team wins the Design award and you can hear the collective “Huh? Who’s that?” moaning from the audience, you know that, as a group of judges, you’ve got a credibility problem.

Tomorrow I’m going to serve as a judge for the first time, so I’ll know a lot more after that. But from my observation, in our area, you have to be in the top 8 to get an Excellence award. There have been exceptions, but I’d say 90% of the time this is true. The Design award can go to any team regardless of rank. The judges don’t appear to take performance or uniqueness of design into consideration for the Design award (not a judgment - just an observation). Also, judges in our area tend to comment on poise and sportsmanship, how well a team did in skills or autonomous, etc. for the Excellence award. The Design award seems to be awarded more along the rubric, with subjective tie-breakers when needed.