Return of Design Award Rubrics

At past events I've attended, when asked, the judges told me that they're not supposed to return design award rubrics when they return our notebooks, or give them to us at any time. Is this the correct procedure? If so, what is the rationale behind it? Wouldn't we want teams to know the correct way to write their notebooks?

We have been to tournaments where we got our scored rubrics, and we have been to tournaments where we haven't. I think it varies by tournament and by region. But I don't think there is a rule that expressly prohibits it.

That being said, the best way to learn what makes a good notebook is for someone from your organization to volunteer to judge at a tournament. You could also ask the design award winner if you could see their notebook.

I’m not above stalking the bin where the notebooks are collected just to see the size/form of the notebooks that are turned in.

Given the variance in interpretation of the rubric between events, and even within a judging team, I can see how event partners might feel that returning rubrics wouldn't help the teams. Imagine if different teachers at your school assessed the same assignment: you would clearly get different results. Still, I think the rubric is really useful for teams to understand what the judges are looking for, and it should guide teams in developing their design process.

When I’m the Head Judge at events we leave the score sheets for the teams. We make a huge effort to write a meaningful improvement comment since “Good Job” doesn’t help unless the scores are all top marks.

As an aside to @MayorMonty, you should have the scoring rubrics at the start of the season, and every day look at your notebook and ask, "did we cover all the points in the rubric?"

We don't give back rubric scores. We want the judges to be free to be brutally honest in their notes on the sheet. I want them to be able to remember their honest impressions so they can relay those to other judges. I don't want such blunt feedback to be given to students. Even if a judge intends constructive criticism, written words can be misconstrued and could be discouraging to participants.

On rare occasions teams have asked for feedback; in those cases I have summarized the rubric for them and given them input on their performance.

You also don't want a team that scored higher on the rubric to be upset when the winning team had a lower point score. Scores can vary between judges; the rubric really only identifies which teams rank highly enough to be considered for certain awards.

At every event where I've been a judge (both VEX and FRC, including Worlds), the unofficial rule has been "what goes on in the judging room stays in the judging room." Comments made on a rubric fall under this rule too. That being said, as a mentor I would not be opposed to some method for judges to provide positive and/or constructive comments back to teams, but it would need to be something other than a scoring sheet or rubric. As a judge, though, I would find it burdensome to fill out such a sheet.

The main problem I have with this policy is that it turns the judging process into a black box and makes me somewhat suspicious of award rulings. Oftentimes, due to judge shortages, conflicts of interest arise. For example, at the competition we hosted, one of our mentors had to serve as a judge. Luckily, he proved to be impartial, but this type of situation is more common than some might think. That said, this impartiality is also dependent on the judges' comments "staying in the judges' room," as @kmmohn says. A good compromise would perhaps be a few sentences on why the notebook fell short and what to do differently in the future.

Along with the rubrics and other information, we give our judges a set of Post-it notes for feedback.

Maybe some sort of compromise: all finalists for awards get a sheet of comments. Teams that aren't finalists are encouraged to ask finalist teams to see their notebooks for examples of good practices. It also shows the finalist teams that they did well.

While there’s no award for coming in second (or third), it would be encouraging to a team to at least know they were being considered for an award.

Agree. It is so binary: either you win or you don't. When you don't win, you have no idea whether you were second place or last place! It would be so nice to know when you're close…

That makes sense. It’s hard to judge if you can’t write down what you think. But the unfiltered comments would be too harsh, and ultimately not helpful.

I agree that would be helpful, and something like it is the best idea for the participants. But it takes a lot of time. That was one of my major complaints when judging First Lego League: you're supposed to provide feedback to every participant. At every event I judged, they gave us a set of rubrics for judging, on which we were to write the raw comments. Then, in the judges' room, each judging team was assigned a group of teams for which to create feedback sheets, which we wrote onto clean rubric forms. This took a lot of time. I'm sure it was useful, because we worked very hard to make it useful. But it was an excruciating experience as a judge.

And yes, I know it’s all about the kids. But I was very relieved when my kids moved on to other things, and I no longer was offered up as a judge.

For events I run in VRC and VEX IQ, I give rubrics back. I know my students have found the feedback invaluable. I really like giving back feedback to teams.

I like that about tournaments in Virginia. There’s no way to improve without frank feedback. That’s life.

I don't care about a judge being entirely honest with me, but I absolutely hate the current system, where I can't even know what I did well and what I need to improve. It's very irritating not to be told what was wrong with my notebook. I compare this situation to programming without the ability to inspect values at runtime: you have no idea why your code/notebook isn't working, you just know that it isn't.

I was recently a judge at the Indiana Middle School IQ State Championship. We marked scores on the rubrics in addition to trying to add one or two comments. Comments came primarily in two forms: for low-scoring rubrics we touched on the couple of items we thought needed the most improvement, and for high-scoring rubrics we noted the items that really stood out to us positively. All of the rubrics were given back to teams. In addition to the rubrics, we had another sheet with the list of teams and a place to make notes. This is where we made the notes used when discussing teams with other judges, which would not make it back to the teams.