Judges’ Comments to Teams

There is another thread going on about comments back from the judges to teams.

Currently RECF has said not to return the rubrics to teams, since the point values are very subjective and some teams will come and argue the score with the judges.

In Delmarva for 2018-19, I piloted returning feedback to VIQ teams about their interview and notebook.

The form had on it “These Judges comments are offered as a way that your team can improve.”

The guidance to the judges was to give 2-5 comments back to the team, with more “things that were good” than “things that you can improve on”. The comments are to be in the same order as the rubric and reference the section (e.g. “Brainstorm Solutions”). A comment might be “Good job documenting your five ideas so well; it looked like you spent a good deal of time on them” or “You only had one idea listed; you should try to identify a few more ideas that you looked at”.

Comments were given on both the notebook and the interview. I looked over the comments before they were tucked into the engineering notebook.

I did not have any parents, mentors or coaches complain about the comments. The feedback that I did get said the comments were very helpful.

Your mileage may vary.

I will note two things:

  1. Notebooks vary from really wonderful to just three pages filled in (or not submitted). With a large number of notebooks, there is a cycle to pull out the top notebooks to be reviewed; those are the ones that get comments. The low-level ones (3-4 pages) get a copy of the rubric and a note that this is what the judges are looking for. One of the biggest things would be to have mentors/coaches know about the rubric up front. I’m not sure what the best way to do this is; maybe in the “Welcome kit” with the license plates, they could drop in the Judges Guide that shows all the awards, etc.

  2. Once we narrow the books down, we go over two of them together to level-set what and how to score. That helps keep the scores within similar bands. We then break into teams to review all the books. I ask the judges to track their top two books to make sure they get looked at and included in the overall review.


Thanks for the suggestion. I am liking the idea of a note card with the notebook criteria, with a “+” to indicate one area the team did well in and a “-” to indicate an area for the team to improve on. Keep it simple.


This is a great idea! Having specific comments is a great solution to the problem of private notes on a rubric being seen by teams. I hope the RECF is taking note and will recommend this in some way in future seasons.


I agree. This sounds like a great idea. As a mentor, I would love this kind of feedback for my teams. As a judge I don’t think it would take me much longer than I usually spend, and I wouldn’t mind taking the time. Most people who volunteer to judge are doing so to help the kids learn, so I think they would appreciate a way to give feedback.


I agree as well. I think that judges providing teams with a few general comments/areas for improvement might go a long way towards improving a team’s overall performance in judged awards, particularly in regions like mine where there can be quite a bit of variance in notebook quality.

However, I can definitely understand the reluctance to give numerical rubric scores back to teams. For some perspective, I have judged quite a few FLL events, where every team gets their complete rubric back at the end of the day, including numerical scores and judge comments in each category. One potential problem with this is score inflation. I have seen judges (including myself sometimes) be a little reluctant to give a low rubric score if they know the team will see the rubric.

So I’m OK with keeping rubrics confidential, but I don’t see anything wrong with judges giving some comments back to teams if time allows.


I need to pull something together by the EP meeting to show what I did and how it worked. I had asked Dan and Tarek about doing this, so it is something RECF knows about, and they know I was piloting it.


I think that is a great idea. I would recommend providing a brief (one sentence) comment on each of the two feedback areas.

Highlighting too many areas for improvement is just as likely to demoralize teams as to help them.



Hence my original post of 2-5 things, more good than bad. So:

  - 1 good, 1 bad
  - 2 good, 1 bad
  - 2 good, 2 bad
  - 3 good, 2 bad


I just got a PM about this thread because the REC Foundation has previously said to not provide feedback.

Last April (2018), Foster requested to implement a feedback mechanism for the Design Award, and the REC Foundation approved this pilot for Delmarva only. I communicated this pilot at the VEX Worlds Q&A and the July EP Summit.

Judges’ feedback is one of the questions I am most frequently asked about. I’ll share a quick story. I was a coach / judge / judge adviser prior to ever joining the REC Foundation. One of my complaints to Paul and the RECF was the lack of feedback from the judging process. In fact, this complaint is one of the reasons Paul suggested I join the Board. I even boasted during one of my early meetings with the RECF staff that I was going to get this fixed.

Well, after meeting with the RECF staff and attending my first EP Summit (Louisville 2017) as CEO, I got a different perspective. Most EPs explained how hard this was to actually do, for reasons explained so well in another thread (time, quality, teams complaining, etc.). So I changed my opinion. But obviously Tarek and I are still open to ideas, hence agreeing to the pilot program. I too am excited to hear what Foster learned and look forward to discussing it more.

Note: I still understand all the valid concerns and am not making any judgments / decisions until we all have a chance to reflect and discuss.


(from 35,000 feet)


As a team that has won the Design and Excellence Awards multiple times, I believe this would be really useful, as it provides students with feedback on the least obvious part of this competition.

I do also agree that while feedback would be great, it can’t be done efficiently enough at scale to ever work. If for some reason the judging rubrics get changed, I humbly ask that RECF think about how the rubric/system could be changed so that a team can receive their results, perhaps a system where judges enter a team’s score and other notes into a Google Form, which then gives the team their results back. I am not sure if scores are already being stored on a computer somewhere, and I don’t think my idea would work 100%, but it is the best I’ve got. Any other ideas?


Giving (unfilled) rubrics to teams is a great idea. At all the competitions I’ve ever judged (mostly IQ), the fundamental problem was that students didn’t even know what they were being graded on. This will apply less in VRC where the students are older and more experienced, but I imagine it’s still an issue. A lot of the events here in SC actually include Design Award rubrics in their tournament packet, which I think is very beneficial.

In all the events I’ve ever judged at, both Excellence and Design were decided by less than a 2-point differential. For those teams, it can be incredibly frustrating not to get any feedback from the judging process at all (I’ve experienced this a lot as a competitor). It can’t take more than 5 or 10 minutes to write a sentence or two for these few teams, and I know it would give us competitors peace of mind.

Until the RECF implements some sort of feedback system, I would encourage competitors to talk to the judges after competitions. If you’re following the rubric as you make your notebook, you’re probably only a few points from a perfect score every time, and talking to judges can help you figure out what you need to improve.


Not to say this is practical, but if the judges filled out a single rubric with the best features they saw in any notebook in each category, then printed off a copy for each notebook and took a few minutes to highlight, in green and orange, the specific points that notebook did well or could improve on, that might be a way to trim down the amount of work required while still providing good feedback. It would also clarify what specific judges consider to be good detail. It may be too time-consuming, and I would want to test it before making it a standard, but it is an option to consider. Everyone would get a rubric with the specific things that stood out to the judges as good features, and green and orange marks to indicate where they were successful or unsuccessful.


The points don’t help, and part of the pilot was returning comments that do. A written comment on ways you can get better was received far better than a score. A raw number is essentially meaningless unless you also see the full set of numbers it fell into.

Part of the issue is that there can only be one winner. So judging will go back and do a book-to-book comparison to rescore. And I’ve been on the second interview team, where we re-interview both teams to get a new matched score.

And before people like @Anomaly jump up and go “UNFAIR, you are the EP, you can’t do that”, the answer is simply: “I did not move the judges’ decision. I watched and observed.” I did validate the written comments to make sure we followed the protocol I was piloting and that we were giving feedback that would actually help, versus “Nice” and “Needs Work”.

And I have been the Judge Advocate at other events and we’ve done the same process to make sure we get a good outcome.

The judges room is not an easy place to be working in. Lots on the line and a lot of very thoughtful people are being very thoughtful.


Thanks for explaining.

I’m an IQ Coach, but I have coached EDR in the past and the same frustrations apply. I’ve also been a judge and understand how difficult the job is.

I’m of two minds on this. Constructive feedback would definitely help; as a coach, it would be valuable in helping the team improve. It would also be handy if the feedback happened to cover an area where they didn’t want to listen to the coach’s advice :wink:

On the other hand, I can also see where it can cause issues. Even some feedback could be difficult to take if you think you have a great notebook and don’t even get an interview at an event.


I like this idea, especially with the number of new members we have.

I also like the idea of including the rubric in the welcome kit so everyone starts on a level playing field. I know where to look for it, but judging from the number of questions I’ve answered at competitions about judging, notebooks, what the girls talk about, etc., this would be helpful for younger teams.


I think one of the toughest issues to deal with is that many judges skim through notebooks to quickly eliminate those that are clearly not contenders in order to spend more time with the top notebooks. Although it likely would not be too difficult to write a few comments on notebooks that judges spend more time reviewing, it would be nearly impossible to give everyone comments on their books.

I don’t really think this is an issue. If a notebook is clearly not a contender for awards or further review, then there are likely obvious areas where the team can improve, and no real need for a judge to point out specific areas to focus on.

If teams are close to getting an award, I believe it wouldn’t be too hard for a judge to come by the pit and give some tips. Granted, parents or coaches getting involved is much more likely when comments are given.


My two cents, for what they are worth (probably about $.02)… I have argued repeatedly for feedback. When we started, teams in our state routinely received their rubrics back. I have notebooks with lots of rubrics pasted into them. The teams used those rubrics to improve, and we became consistently better because of them. At the state competition that year, seven awards were given out, and my teams took home five of them. I directly credit this to improving throughout the year based on feedback.

Then the rule came that the rubrics weren’t to be handed back. I fought against it and argued against it at the EP Summit (but the backlash from that was immediate). I created a quick little form: two wows and a wish. It got changed to a wow, a wish, and one more (either a wow or a wish). The judges filled these out for each team based on the rubric. I have never had a judge complain about the work, and the teams loved them. I was told not to even do that this year, so I didn’t.

I know that judging is subjective. I know that the way one judge sees a team is not the same as another. I see these as pros for why feedback should be allowed, not cons. Teams need to know what they can do to improve. I’ve NEVER judged a perfect engineering notebook (and I’ve seen a lot of them). I know that I’m harsh, but that helps the teams.

I asked at VEX Worlds last year, when they announced that a region was going to pilot feedback, if my region could as well. I was told “No”. I would still love to give feedback as an EP, as a judge, and as a coach. (And yes, I wore ALL of those hats this year. Never at the same competition.) Even if it’s as simple as two wows and a wish, at least the teams know how to do better.