I have seen other topics like this in the past, but I have always wondered why RECF stopped allowing feedback on notebooks. This is my 6th and last year doing robotics, and at our state championship our notebook, and others similar to it, were set aside and we did not qualify for an interview. What we do not understand is why this happened. Three of these teams, including my own, have won Excellence multiple times this year. I would not be so upset about it if we were able to learn why we got DQ’d. For a foundation that promotes learning over competition, I do not understand the notion that we cannot see what the judges had to critique on our notebooks. It does not parallel other learning environments. For example:
- When writing a report or paper, many of our teachers allow us to submit drafts or have them check our work and give feedback.
- After a test or any assignment is graded, we are allowed to see it again with the corrections our teachers made, for use as study material for later tests or finals.
There are more examples of this kind of give-and-response learning, and it just baffles me why it was dropped from judging.
TL;DR: why don’t we receive feedback from judges anymore?
You were never supposed to get feedback from judges, with the main reasons being:
- It’s not realistic for judges to do it for all 40 or 50 teams at the event.
- It can lead to questions and people disputing the rulings.
However, I would agree with you that feedback can be incredibly useful. When I judge, I like to mention things we liked and things we think could be improved during the interview.
I’ve been a proponent of giving feedback and have run a three-year pilot on doing this. We don’t return the actual scores; we write up comments (positive and negative) to try to help the roboteers. Books that score very low are returned with the rubric and a note explaining that this is how we judge, to give teams a chance to fix things before their next event.
At most events, the notebooks are all scored and the top N teams are invited for interviews; the remaining teams get pit interviews (for the other awards). At a recent event there were 38 teams and 12 notebooks submitted; the top 5 notebooks were within a 1-point range (we scored in half points). The next highest team was well below that.
We picked this method (comments, not points) to stop the disputes over points that the Mayor has described. The advent of digital books has made it easier to judge more books across the week.
But the bottom line is that it’s extra work for the EP; they are short-staffed, and anything extra is a huge pain and not worth doing.
I wanted to clear something up in the OP. It says, “I would not be so upset about it if we were able to learn why we got DQ’ed.” The OP’s team’s notebook didn’t actually get DQ’ed; a notebook never gets DQ’ed. The process in the judging room is that the judges review all notebooks to see which ones are the top notebooks with a chance of winning the Design Award. Those are the teams that get interviews (unless there are enough judges to interview all teams, which, in my opinion, is best). So it appears that the judges did not feel your notebook was one of the top notebooks. It was in no way DQ’ed.
As to feedback: it sounds like you are on a school team or in a program with multiple teams. Does your coach provide feedback on notebooks? This is your first line of defense, so to speak. Your coach should know what it takes to produce a high-ranking notebook and should be able to provide feedback. Also, pull out the Design Award rubric and do a self-assessment. How do you score on each element? If you score low in some areas, try to improve that in your notebook. Honestly, just by doing those two steps, you should be among the top notebooks at most local events. Once you get to that point, if you still miss out, then it is time to talk to a judge to get some feedback on what it takes to improve enough to win an award.
We were told by the event coordinator that some notebooks were in fact disqualified due to information they could not disclose. All they said was that it was justified, but they never told us why.
We are a program with many different teams, and yes, we ask our coach for feedback. We always try to hear back from judges when we can, but these judges were a separate panel from the ones doing the interviews.
I am not trying to sound boastful or pretentious on this last point, but this year we have been in contention for the Excellence Award multiple times. We have won it three times this year, along with a Design Award and a Judges Award. Even last year, at tournaments like NATM, we qualified for final judging for Design and Excellence. The other two teams that did not get interviewed alongside us are also Excellence Award recipients, and one of them is a team everyone agrees should have won Excellence at this competition. So we do listen to feedback, and this time is an outlier. All I want to know is why, this one time, our notebook did not qualify for an interview given our history in judging.
At higher levels of competition, everyone is getting max (or close to max) notebook scores. Last year at a tournament, we got our notebook rubric back and received a 5 in everything except one category, where we got a 4, yet we still didn’t win an award.
Our sister team and I both had competitive notebooks this year, and we both won multiple judged awards. However, at some tournaments we got awards and our sister team didn’t; at others, our sister team got judged awards and we didn’t. It could have just come down to the day’s interviews, but I would like to know why some judges like certain things that others don’t.
Right. So the following is from the judges guide, and failing it could get a notebook removed from consideration. (Even though the judge at your event said “DQ,” there is no provision in the judges guide to DQ a notebook, so that judge misspoke. They took your team out of consideration; that is not a DQ.)
“All Engineering Notebooks should contain these elements:
● Team number on the cover.
● Errors crossed out using a single line (so errors can be seen).
● Notebook has not been edited.
● All pages intact; no pages or parts of pages removed even if they contained errors.
● Each page numbered and dated in chronological order.
● Each page signed or initialed by student author.
● Team meeting notes as they relate to the design process.
● Pictures, CAD drawings, documents, examples of code, or other material relevant to the design process are glued into the notebook (tape is acceptable, but glue is preferred).”
So it is possible that your notebook did not meet one of these items. Teams can also be removed from consideration because of their behavior at the event; that sounds like a possibility.
It’s hard to say that a team should have won the Excellence Award if you were not in the judges’ room and did not take part in the interviews.
Also, and I know this sucks, but judging is somewhat subjective. What one judge loves, another might not like as much.
This goes back to my original point: I just want to know why we can’t get the rubric back with our score, or the notes the judges took on our notebooks. Any feedback is good feedback.
I’m not sure that will fix what you are hoping it will. Say there are five teams in contention for the Design Award and they all get the exact same rubric score. How will having your rubric tell you why you did not win the award and what you need to do to improve next time? Plus, remember, it’s not all notebook; it is also the interview.
They do this because they don’t want teams questioning the rulings. VEX doesn’t like things being questioned after the fact; this can be seen in things like the no-video-replay rule. IMO, that isn’t justification enough.
More than that, I’ve been involved with multiple threads on this since the forum was started (vBulletin days), and I’ve talked about the pilots that I’ve run. So there is a ton of commentary out there.