I’ve seen award judging from all sides, and it has been conducted very differently each time. So I would like to ask: how do you run the judging of awards at your tournaments?
For example, I’ve seen tournaments in which the judges are in rooms separated from the noise of the pits, in which kids sign up for specific interview time slots, etc. And I’ve seen others in which the judges wander around giving interviews on the fly, reading lips over the chaos. I’ve seen some where the judges start by looking over notebooks, and others where nobody looks at notebooks until the judges have reduced the number of award candidates to a very small number.
It seems to me there might be really good ways and really bad ways to run the judging of awards, so I was wondering what tournament managers do and why.
In the tournament we hosted (≤24 teams), judges first looked through engineering notebooks in a separate room, and then came to the pits to interview every team. After that, they went back to isolation for deliberations. With that said, our pits weren’t so chaotic that teams and judges couldn’t hear each other.
My perception is that there is simply not enough time for judges to do everything they are supposed to do in the time allowed, which is why I’m looking for ways to make this process more efficient and still fair. Considering how complex notebooks can be for some teams, why does it make sense to go through all the notebooks first, before seeing what the robots actually look like and before seeing how the teams are doing in the tournament? I would think that qualification rankings would, by themselves, cut down the number of teams being considered for awards requiring notebooks, and therefore reduce the amount of time needed to examine notebooks. Is there something about starting with all the notebooks that you have found is essential?
Firstly, if I remember correctly, the way we did it is roughly the same as what the RECF documents suggested.
Secondly, I would argue that a good journal should give judges a good idea of what the robot will be like before they actually get to see it, which would make it easy to eliminate many teams from awards (judges can picture the robot before the interview = good journal).
Additionally, journals can be looked through at any time during the tournament; qualification rankings are horrible indicators of teams’ performance until around midday. And, even then, there are often bad teams that got carried or good teams with really hard schedules that don’t belong in their respective rankings.
Edit: Out of curiosity, what size tournament are you targeting for streamlining judging? For tournaments with 36 or fewer teams, I would argue that, with enough competent judges, there is definitely enough time to do a thorough job of judging with the structure I mentioned before.
We just got through a tournament with over 50 teams. Since then, we’ve been trying to build a crude math model to decide whether things would run faster with, say, 36 teams, but fewer teams also means the tournament can end sooner, which puts more pressure on the judges to crank out their decisions.
It seems to me that the “obvious” solution is to have a lot more judges, but the more judges you have, the more people you need to bring together for a deliberation/vote, which can turn into an even bigger circus.
Ideally, you have all your judges in one room interviewing all the teams, so everyone sees the same teams at the same time. But that isn’t possible with 50-ish teams. Even if you give each team only 10 minutes, that’s over 8 hours of solid interviewing, not counting the debates, the notebook examination, or taking a vote. But once you split your judges into two or more interviewing groups, you somehow have to bring them all together to filter through the teams and make a decision, and that takes time, too.
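To make that concrete, here is a minimal sketch of the kind of crude time model being described - the team count, minutes per interview, and number of parallel judging groups are all just illustrative assumptions:

```python
# Back-of-the-envelope model of total interview time (illustrative numbers only).
def interview_hours(num_teams, minutes_per_interview, parallel_groups=1):
    """Wall-clock hours of interviewing, ignoring transitions, no-shows, and deliberation."""
    return num_teams * minutes_per_interview / parallel_groups / 60

# One judging group seeing all 50 teams at 10 minutes each:
print(interview_hours(50, 10))      # ~8.3 hours -- "over 8 hours of solid interviewing"

# Splitting into 2 groups halves the interviewing time, but adds reconciliation overhead:
print(interview_hours(50, 10, 2))   # ~4.2 hours, before the groups compare notes
```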
I am also aware that a lot of tournaments can’t get very many technically experienced people to serve as judges - very often it’s a group of people who are nice enough to volunteer but don’t necessarily have any knowledge of VEX or of engineering, which is one reason why I think the rankings are important. Too often I’ve seen judges buy a great sales pitch from a team with a great-looking notebook whose robot doesn’t match up with all the dreams presented on paper or in the pitch. That’s why starting with notebooks is a time-risky approach, in my opinion: judges can be very impressed with a team only to find out later that their robot leaves a trail of nuts and screws and burnt wires and broken chains and illegal widgets everywhere it moves (and even where it doesn’t) on the field.
Just out of curiosity: how many minutes does each of your judge teams allow each team during the interview?
It is my very strong impression that an engineering notebook is key to the Design Award, and therefore to the Excellence Award. Consider this, from the Judge Guide:
and this:
and this:
and this:
I think RECF is trying to tell us something here.
All this leads me to believe that evaluating the Engineering Notebook first might well save some time. Based on my reading of the guidance documents, if a team does not submit an Engineering Notebook, they shouldn’t win Design or Excellence. Further, since the notebook is considered key, if a notebook is weak in comparison to the competitors’, it’s difficult to see how you could justify a Design or Excellence Award and still be doing what RECF had in mind.
Tomorrow I can get you more details about how many judges we had and how many teams actually showed up. I do know, however, that we were somewhat lucky in that virtually all of our judges had technical experience, and some had even worked together before our tournament, so separating them into two interviewing groups as we did wasn’t a problem at all.
In terms of interview length, I haven’t actually attended a competition in Arizona with a hard limit. With that said, every tournament this year (except for the upcoming state championship with 48 teams) will have had 36 or fewer teams competing.
For judges without much technical experience, I think the key to judging engineering notebooks is detail. What I said before about being able to visualize the robot before seeing it applies even more to inexperienced judges who don’t frequently work with things like engineering notebooks, as they won’t have much idea about what a good engineering notebook should look like. (In other words, it’s a good-enough way for them to separate good and bad notebooks.)
I did not mean to imply that the notebooks are not important. My comment was aimed more at when in the decision process the notebooks should be evaluated. I’m not sure what a terrific notebook means if the robot cannot function properly and the students can’t answer even basic questions in an interview. Of course, checking whether teams turned in notebooks when registering in the morning is part of the process, but in my opinion actually examining a notebook and reading through it is a very time-consuming process and should be reserved for only those teams that are clearly performing near the top of the game that day.
Having said all of that, I should add that I’m very impressed by those teams that can present an outstanding sales pitch while standing behind a machine that clearly cannot function. They are marketing geniuses and politicians in the making. But I don’t think it’s efficient to spend lots of time flipping through their book-sized sales brochure if they are ranked that close to the bottom that day. I should also add that we give an interview to every team that wants one, of course, because that’s part of the education process and, in my opinion, the most important part of the judging process - letting the kids know that people care about what they do. But too often I see judges running out of time near the end of the day, and then everyone sloppily making decisions because the morning was consumed flipping through piles of notebooks.
I’m coming from the VEX IQ side, but I like the way that we do it. It’s also the way that was suggested by RECF for VEX IQ. Our judges take the Engineering notebooks from check-in and immediately start going through them using the rubric provided. Then, they only interview the top 25%. So, in a tournament with 28 teams, they would review around 20 engineering notebooks. Then, they would interview 7 teams. We like doing the interviews in a quiet room, but that’s our preference.
That’s a lot of notebooks. How many minutes would you say it takes to review each notebook, and how many people review each one? And how do you calibrate the rating scale across judges when using the rubric? Personally, I feel the interview process is most important to the kids - providing encouragement to the teams that are beginning, advice to the teams that are still rising, and kudos to the teams that are clearly at the top. So using interviews only to sort “winners” from “losers” seems to me a lost educational opportunity. But that’s just my opinion.
It took about 3 hours - roughly 9 minutes per notebook. I only had two judges. They each reviewed 10 notebooks and then switched, so each judge saw every notebook. They said it wasn’t that difficult to find the top notebooks: it was obvious which teams had spent time putting them together and which hadn’t really worried about them. Then they averaged their scores to determine the top 25%.
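For anyone checking the throughput, those numbers are self-consistent if both judges work in parallel and each one eventually reads every notebook (my reading of the setup above):

```python
# Sanity check on the notebook-review timing described above.
notebooks = 20            # roughly the number of notebooks reviewed
minutes_per_notebook = 9
# Each judge reads every notebook, and the two judges work in parallel,
# so the wall-clock time equals one judge's total reading time.
wall_clock_hours = notebooks * minutes_per_notebook / 60
print(wall_clock_hours)   # 3.0 hours
```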
I’d like to run a tournament where we were overwhelmed by quality notebooks. A good notebook is really tough for teams to accomplish. It takes a lot of dedication. Sometimes it’s fairly easy to determine the group of top engineering notebooks.
For the tournament we ran just a couple of weeks ago, we had a panel of four judges - probably the best panel I’ve worked with. Here is a close approximation of their schedule.
9am
Judges’ instructions; judging starts at 9:30am.
If you collect all the notebooks at check-in, let’s say there are 30 of them. With 4 judges splitting the notebooks among themselves and scoring them individually, you can spend an average of 8 minutes on each notebook and have a pile of the best notebooks in 1 hour. Spend another half hour reviewing the best notebooks as a group. (The arithmetic behind these numbers is sketched after this schedule.)
11am
Split into pairs of judges and go interview ALL the teams in the pits. Say there are 36 teams at an average of 6 minutes per team interview; with two pairs working in parallel, you can be finished with interviews in 2 hours. Add another hour for a “working” lunch, during which the judges can discuss what they have seen so far.
2pm
Interviews complete; judges can start discussions on the top contenders. The judge liaison can bring the updated skills scores and qualification rankings to the deliberation room. All 4 judges can revisit the top teams together.
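Here is a small sketch of the arithmetic behind that schedule. It assumes the 30 notebooks are divided among the 4 judges (each notebook read by one judge) and that the 4 judges interview in 2 pairs - both are my assumptions about the intended setup, not anything from an RECF document:

```python
# Notebook triage: 30 notebooks split among 4 judges, ~8 minutes per notebook (assumed split).
notebooks, judges, minutes_per_notebook = 30, 4, 8
triage_minutes = notebooks * minutes_per_notebook / judges
print(triage_minutes)      # 60 -> about 1 hour to a shortlist, plus ~30 min of group review

# Pit interviews: 36 teams at ~6 minutes each, with the judges split into 2 pairs.
teams, minutes_per_interview, pairs = 36, 6, 2
interview_minutes = teams * minutes_per_interview / pairs
print(interview_minutes)   # 108 -> roughly 2 hours, before the working lunch
```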
From my experience, interviewing at the pits is also a big problem, especially if the pits are in the same space as the matches - the noise can be crippling. Unless you have scheduled time slots at which the judges have agreed to appear, it’s often hard to find the team - they are sometimes at the practice field, or on the Skills field, or at a match, or their head spokesperson is in the bathroom, etc. Six minutes provides very little buffer for tracking them down or deciding you’re never going to find them. Also, it’s clear to me that about a third to a half of the teams really do not want to be judged at all, so it might be possible to save time by interviewing only those teams that want to be eligible for a judged award. Some teams just want to play the game and don’t want to be bothered with interruptions. And the teams that are truly prepared to compete for these awards - or are required to interview for a class - deserve more time.
We use various scenarios depending on the number of judges we are able to procure.
VRC
For us, the ideal is a judging panel of 2 judges for every 6-8 teams, plus 2 additional judges. If we reach that number of judges, then every team is scheduled for a 10-minute interview in a classroom with the panel to which it is assigned. Assume 32 teams: 2 judges per 8 teams means 4 panels of 2 judges (8 judges), plus the 2 extras for a total of 10. The two extra judges’ sole responsibility is the Design Award; they evaluate the Engineering Notebooks, then check the pits to see whether what they saw in the notebooks matches the actual robots.

The interviews are typically scheduled in 15-minute blocks: 10 minutes for the interview, 5 minutes for transition to the next team. Each panel sees 8 teams over a 2-hour period. Each panel is looking primarily for evidence fitting whatever judged awards the event is offering (other than Design and Excellence) and ranks the teams in its group accordingly. The panels then meet to share their rankings, go out into the pits to cross-check each other’s rankings, and come up with the top teams in each judged category. They meet again as a group to determine their overall rankings in the judged awards (usually around lunchtime). They then wait for the results from the Design judges, the Qualifying Rounds, and Skills to determine the Excellence Award winner.
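A quick check of the staffing and timing in that scenario, using only the numbers given above:

```python
# Staffing and schedule check for the 32-team scenario described above.
teams, teams_per_panel, judges_per_panel = 32, 8, 2
panels = teams // teams_per_panel              # 4 panels
panel_judges = panels * judges_per_panel       # 8 interviewing judges
total_judges = panel_judges + 2                # plus 2 Design Award judges = 10

block_minutes = 15                             # 10 min interview + 5 min transition
panel_hours = teams_per_panel * block_minutes / 60
print(panels, total_judges, panel_hours)       # 4 panels, 10 judges, 2.0 hours of interviews
```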
However, if we don’t have as many judges as the ideal calls for, we dispense with formal interviews. We still try to have a pair of judges take care of the Design Award, and the remaining judges make their determinations of the non-Excellence judged awards by wandering around. They then meet later to determine the Excellence Award winner, once all of the necessary information is in.
For IQ, we send out a survey to all participating teams asking whether they are going to present an engineering notebook and whether they are planning on doing a STEM Presentation. Those doing a STEM Presentation are scheduled into 10-minute time slots (4 minutes of presentation time, 6 minutes of transition time). Otherwise, judging follows the above scenario.
Interesting approach. Are the Design Award judges the only ones who evaluate notebooks? Do they then present their findings to the other judges, who I presume are doing the Excellence award? I’m trying to understand the time sequence of how people run these things.
There are obstacles to overcome when interviewing at the pit tables.
Having teams come to a room to be judged is also fraught with challenges. Teams don’t show, match schedule runs behind, judging schedule runs behind… it’s a trade-off one way or another. Block scheduling the tournament can take care of some of these issues.
How does your state championship handle interviews - at the tables or in a room? In the end, we decided it would be good for the students to experience a judging atmosphere similar to what they will see at the state championship (if they are fortunate enough to make it that far).
We like to give out awards that don’t require a good notebook, so I make sure every team gets interviewed - it’s good practice for the students anyway. I inform the judges which awards rely heavily on the Design Award rubric scores. Judges, Create, Build - those are all awards that some teams might qualify for even if they don’t have the best overall notebook.
I’m only pulling out one line here to make a point, so please read the whole thread for better context. From the Judges guide:
The Design Award should have nothing to do with how the team performs at the event; the key criteria are process- and documentation-focused. The Excellence Award does require an engineering notebook to be turned in, but the Excellence criteria don’t mention it specifically. Excellence takes into account robot/team performance in qualifying and skills, as well as all judged awards for which the team was a top contender. The way I read this, as long as an engineering notebook was turned in, being in contention for the Amaze Award or Build Award counts the same towards Excellence as being in contention for the Design Award.
I have only run IQ events as an event partner (including the Indiana state championship), and we had judges review every engineering notebook and then decide the top group of teams to interview. How many teams to interview, and whether that was in the pits or in a separate room, varied based on the space available and the size of the event. The Design Award would then be decided and the list of top contenders provided to the judges determining the Excellence Award.
One of the biggest problems I’ve had with judging over the years is event partners using the Design Award essentially as a 2nd place Excellence. Back before the state/regional championships were in place I think this was done because many times it meant an invitation to worlds.
This is because teams can only get one judged award. Excellence trumps Design, so if the #1 Design candidate gets Excellence, they cannot get Design as well. My team had to return a Design Award after it was awarded, because the RECF caught that error.
Yes, the Design Award judges are the only ones who see the notebooks. And yes, they present their findings to the other judges. They give their ranking for the Design Award to the group, since their #1 choice might not win Design if that team is also the Excellence Award winner; the Design Award then goes to the next-highest team in their ranking, as no team can win more than one judged award (unlike the performance awards).
We run the formal interviews from approximately 7:30am ’til 9:30am, with the drivers’ meeting at 9:45am; the Qualifying Rounds begin around 10:00am to 10:15am and end around 2:30pm. Assuming 32 teams and running two fields on a 4-minute cycle time, we can get in 8 qualifying rounds (for this game, field reset is fast). We then have alliance selection around 2:45pm and start the Elimination Rounds about 3:15pm. We give out some of the minor awards between the quarterfinals and the semifinals and again between the semis and the finals, and the major awards at the end. Send them home and tear down, go home and collapse. We like to have the judges finish their minor judged awards by the end of the qualification rounds and the major awards by the time the finals begin.
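For what it’s worth, the qualification-round math above works out if “4 minute cycle time” means one match launches roughly every 4 minutes across the two fields (my assumption):

```python
# Qualification schedule check: 32 teams, 8 matches per team, 4 teams per VRC match.
teams, matches_per_team, teams_per_match = 32, 8, 4
total_matches = teams * matches_per_team // teams_per_match   # 64 matches

cycle_minutes = 4    # assumed: one match starts about every 4 minutes, alternating fields
total_hours = total_matches * cycle_minutes / 60
print(total_matches, total_hours)   # 64 matches, ~4.3 hours -- roughly 10:15am to 2:30pm
```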
Thanks. I’ll think about that. It’s an interesting way to maybe speed up the process.
Yes, we do the same thing, trying to get the minor awards out to the teams we feel need encouragement before they pack up and leave (since some of them do not get picked in an alliance).