I sent out a poll a while back to collect opinions on engineering notebooks in the VRC community, and one popular response was the request for judge feedback that you’d receive at the end of every competition, so you could know from a judge’s perspective what your team’s engineering notebook strengths and weaknesses were. I really want to make that happen!
I’m writing a proposal for the REC Foundation to include judging feedback in their system, so we can all benefit from this amazing resource of judging feedback. Could you give me some of your ideas so I can make sure I’m asking them to implement it in the best/most requested way possible?
I have recently had this conversation with one of my teams. It comes up regularly. I have answered your form, and will provide some of it here for discussion.
Q: Can you think of any possible drawbacks to or obstacles in the way of making Judging Feedback happen?
A: The obstacles are the very reasons the data you are requesting is not provided today. Finding Judge volunteers is difficult. Asking them to evaluate students objectively is difficult. Adding the requirement that their evaluation be digestible by the students is a burden. As students, you are familiar with frank assessments from your instructors. Judges are not commonly in the mindset of offering frank assessments of people’s work. Try to empathize with the position of a peripheral adult wanting to make an objective assessment. Must they also show their work? It is a big ask. Students are in the wheelhouse of being numerically evaluated and red-note critiqued. Forty-something adults left that behind a lifetime ago and are awkward with the process you feel is normal.
Q: Anything else you’d like to mention?
A: I have had this discussion many times over the years. Although I would like more feedback for students, providing it is fraught. My suggestion is that students build a rapport with other teams at an event, such that they share their notebooks at the end of the day and self-assess.
For the record, I would like students to have some insight into what the Judges think about their work, and I understand that students have a thick skin about critique, since they live it on the daily. BUT, think about the Judges. If there can be a system that transfers the assessment without impairing the Judging process (both on the day of the event and, as an Event Partner, in finding Judges at all), I am open.
IMO, it is not as simple as students familiar with receiving multiple grades a day might suspect.
You should do a search on this topic. I ran three seasons of pilot programs. I printed the rubric on legal paper, which leaves about a 5" gap at the bottom. The judges filled out the rubric and also wrote comments at the bottom. The intent was to write no more than six comments, with the number of positive comments less than or equal to the number of comments for improvement.
At the end of the event, the comments were detached from the rubric. Comment slips were inserted back into the notebooks, and the rubrics were destroyed (per the judging instructions). Judges thought it was a great idea and did not think it added much time to the process.
Teams and coaches loved it. At two different Event Partner meetings I presented the results. At the 2022 meeting, the consensus among the Event Partners was that:
a) It was too much work for the Judges; they already had problems getting judges.
b) It left the EP open to criticism about the fairness of the judges and to the arguments the comments would create.
c) It would create problems at events that did not offer commentary versus ones that did.
I do want to say thanks to @DanMantz for letting me run the pilots. It’s the Event Partners that you have to convince, not the roboteers or the mentors/coaches.
I will further answer… give your mentor the judging guide and have them assess your work. That alone is a bit of a lift, but it is feedback you can work with. Ask them to Judge at one of your area events so they can see the process and their feedback will be more informed.
The closed door is a difficulty; you are not wrong to look for other options.
This is unfortunately true. If Judges are tasked with providing positives and critiques to all teams, teams will share and compare: “You got the design award, but I have this positive comment and it’s way better than your negative comment. Not fair.”
If there were a system, I think it would need to be more of a 1-to-5 ranking in each of the rubric categories rather than a narrative critique, which could still lead to upset when Judges have to separate teams that have the same numeric totals.
As an occasional judge, I have often thought about all the work I put into judging notebooks, only for it to be simply thrown away. Yes, let’s avoid asking judges to do more than we already do, but could we at least return the rubrics with the notebooks? Every academic contest I know of returns the scores of the assessment. Debate, speech, math, writing: students get some kind of feedback or score from the judges…
Maybe we encourage the judges to comment in the “Notes” section already provided and let coaches know they get the rubrics “as is,” with no follow-up allowed? Feedback from judges would REALLY enhance the learning opportunity in VEX.
Hmm, this is the root problem of feedback: inconsistency… unlike a coach/mentor/teacher who sees teams regularly and can provide guidance about how to use the interview and notebook rubrics to self-reflect on the team’s growth with regard to applying an engineering design process with authenticity and fidelity.
As for your efforts on tournament days, they are greatly appreciated and not “simply thrown away”… It is not the job of the judges to teach kids “VEX,” but rather to evaluate the team’s process in the context of the other teams present at a particular time.
I do agree with Foster that feedback is great, but it should not come only from volunteer judges on a single day.
I have ARGUED for feedback for years. When we started, we received the rubrics back. I don’t know if we were supposed to, but it was common in our state. We took them, analyzed them, and pasted them in our notebook. And we grew tremendously. Then, explicit information was given saying specifically not to do that.

But, they did say it was okay to give back some feedback. I created a form for the judges to fill out. It had a wow, a wish, and a third spot for either a wow or a wish. We put these in the notebooks for the students. The judges never complained about it, and the students enjoyed knowing what they were doing right and what they were doing wrong. Then, we were told not to do that… But, we had permission to give verbal feedback. So, my judges did. This was probably the least effective. Now, we’re told we can’t even do that.

At a tournament I judged last Saturday, I was the judge advisor. Every one of my judges asked how we gave the teams feedback. I told them we couldn’t. They were ABSOLUTELY appalled. How were the teams going to get better? We HAVE to be able to make this work.

I run the Facebook page for my state. I have offered to judge notebooks and provide feedback through this form, and have asked for others to do the same for ours. I have gotten no response. It’s frustrating.
Thank you so much for your feedback and helpful insights, Foster! I ran a search on the VF for previous threads, and these are some of the ones I found (please lmk if there are any other resources I can look into for this, I’d so appreciate it):
To address some of the issues mentioned in your post (to which I have directed my reply):
Too much work for the judges
In the replies I’ve been getting, most people would be happy with a simple numerical rating out of 5 and some brief feedback on why that rating was given for each of the categories in the rubric. I’m thinking of creating a draft feedback-oriented rubric form where the judges can simply annotate a team’s score in each specific criterion, as well as any additional comments they may have.
Doing this would reduce time for the judges, as they wouldn’t have to fill out anything additional, and they would only do the feedback transfer for the teams who request it. The rubrics used for judged award deliberations could be eliminated per REC Foundation policy, or we could change the REC Foundation policy to allow the rubrics to be returned to teams as-is, with no extra transfer needed.
Criticism from teams/parents directed at EP and judges
An effective feedback management system could solve this issue. By asking parents, teams, and volunteers to fill out a feedback form that could be managed regionally, the feedback could be collected and used to create positive change while saving the EP from undue criticism. Teams could report judging bias, provide suggestions for improvement, and offer lots more useful feedback for the REC Foundation to collect and use.
Competitions with vs. without judging feedback
Could stating whether the event does or does not offer feedback be a possible solution? Additionally, the Judge Guide could include a specific section with examples of good and bad feedback (“You scored a 3 because your notebook lacks detail in programming logs” versus “3 because it needs more detail but good job”). Any complaints could be handled through the solution for item 2, mentioned above.
Further Research I’m Looking Into
I’m looking into other education-oriented systems with judging feedback such as the National Speech and Debate Association to see how they do it. If there are any other organizations such as the NSDA which you think I should look into, please let me know and I’ll check them out as well.
Final Note
I’m really passionate about creating positive change in the VRC program. And, as NReese mentioned,
Given that 98.3% of 58 responders to my poll so far have indicated that judging feedback is desired, the question is not if, but how! I’m confident that we can find a viable solution together as a community.
I agree so much with this. Many times we’ve had experiences where we wished we could have gotten feedback on what we did wrong. Just this last weekend, at a competition, we felt like our interview went horribly, but we were just left with nothing at the end of the meet and were so confused about why it went so poorly.
You need feedback on what you did right, too. That’s why I had my judges write “good” and “improve” comments together.
There is a book called The One Minute Manager, and it suggests that when you are giving feedback, you do the “needs to improve” part first and close with the “this part was great.” It lets the person (in our case the roboteer) walk away with “I have two things to work on, but they really liked these other things we did.” It’s a weird psychology thing, but it does make a difference in the attitude and the way the “needs to improve” part is received.
I support all that has been said here. The issue of “consistency,” both at a given tournament and across tournaments, applies not only to feedback but to judging in general. I have wondered why the rubrics could not be digitized, uploaded to the REC, and evaluated in aggregate. Simple checks could be performed, such as ensuring ALL teams at an event had at least one interview. You could also easily detect anomalies across tournaments and potentially give the REC a list of tournaments that might benefit from some judge room support; a rough sketch of what such checks could look like is below. Perhaps such an effort could also provide a basis for an anonymous feedback system for the students (via comments, as @Foster notes, or some other means). Just some thoughts, but I would love to see more done to give the REC insight into how this process is being carried out so that they can be proactive in ensuring consistency, especially given how much time and effort the students put into their notebooks.
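For illustration only, here is a minimal sketch of those two checks, assuming digitized rubrics were reduced to simple event/team/score records. Nothing here is an actual RECF tool or data format; all field names, team numbers, and thresholds are made up.

```python
# Hypothetical sketch: aggregate checks over digitized rubric records.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Rubric:
    event: str   # event identifier
    team: str    # team number, e.g. "1234A"
    total: int   # total rubric score

def teams_missing_interviews(rubrics, registered):
    """Return (event, team) pairs that were registered but have no rubric on file."""
    seen = {(r.event, r.team) for r in rubrics}
    return [pair for pair in registered if pair not in seen]

def flag_outlier_events(rubrics, z_threshold=2.0):
    """Flag events whose average rubric score sits far from the overall average."""
    by_event = {}
    for r in rubrics:
        by_event.setdefault(r.event, []).append(r.total)
    event_means = {e: mean(scores) for e, scores in by_event.items()}
    if len(event_means) < 2:
        return []
    overall = mean(event_means.values())
    spread = stdev(event_means.values())
    if spread == 0:
        return []
    return [e for e, m in event_means.items() if abs(m - overall) / spread > z_threshold]

# Example: team 5678B was registered at Event A but never interviewed.
rubrics = [Rubric("Event A", "1234A", 28), Rubric("Event A", "9999C", 30)]
registered = [("Event A", "1234A"), ("Event A", "5678B"), ("Event A", "9999C")]
print(teams_missing_interviews(rubrics, registered))   # [('Event A', '5678B')]
```

Checks like these would only be as good as the data entry behind them, but even a coarse pass would surface events where interviews were skipped or scoring drifted far from the norm.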
Basically, I see no issues as an EP or a judge, as long as it is made clear that scores aren’t the end-all-be-all and it isn’t required by the EP.
In the meantime, alternative solutions ARE offered. In the VEX notebooking Discord server we host weekly “open interview” events. A team does a virtual interview with an official judge, and then feedback is given. So far the teams who have done this have loved it, as it is a great learning opportunity for the participants and the watching teams.
We’re just getting ready to host our first judging competition where we will be handing out awards. Maybe I’ll look into asking someone from RECF to watch…
Can you post the link for the Notebook Judging Discord? I think that is a great idea, and I’d like to watch.
The hearts and minds you have to win are those of the rest of the Event Partners. RECF is driven on events by what the Event Partners want to do, since RECF is dependent on the EPs for ALL the events except for Worlds.
As an EP who would get teams coming to our event saying “but at the other event they do it this way…,” I think that the hard part would be “it is made clear”…
I like this community approach to reflective practices. It is formative for the teams before they go through summative evaluation.
That would be a good idea; maybe invite some EPs/Judge Advisors to the mix… or maybe this is something to have Dan Mantz’s EP Advisory Group take a look into and have a discussion about.
Honestly, I would be happy with any sort of feedback. It doesn’t need to be a whole paragraph or anything like that, just a few numbers or words saying what teams have done well and what they need to work on. Also, I did the form, and thanks for making it :]