Does the governing body of the IQ Challenge have any suggestions for giving feedback to teams/coaches after their STEM Research Project? I have had many teams voice their displeasure about how feedback is generally unavailable to them after the competition. I would like to make a suggestion: for next year’s competitions, take the form for the STEM Research rubric and expand it horizontally to allow for a tear-off section at the far right dedicated to team feedback (one for each rubric category). I feel this would be more professional, and more specific, than just handing teams sticky notes or torn scraps of paper.
We went through long discussions about this after Worlds last year. The tear-off strip idea came up and is a great idea. However, RECF rules currently state that there is to be NO feedback from STEM judging. They even ask that all rubrics be disposed of without teams viewing them.
Any ideas why they demand such a “black-box” process?
We all know that this is a small circle and that some coaches can be very close with event sponsors/planners. I am not making accusations of unfair judging, but I am fairly sure that some of those coaches may be able to get feedback while others cannot.
From my post in a very similar thread; I hope this helps explain why it is a “black box” process:
This comes from my experience as an event organizer and judge advisor, both at local events in the past and at Worlds currently:
Also, if I ever caught a judge giving feedback to an organization, that would be one of the rare instances where I would go nuclear: ban that person from participating in any event I was a part of, and make the collusion public so other teams know not to trust that judge in the future. Taking judging materials, or giving teams an advantage by revealing what transpired in the judging room, is a “nuclear” offense for me. It violates the entire process.
The reason feedback or rubrics are not given, in my experience, is that teams will take any rubric or feedback, no matter how rudimentary, and start comparing notes. What if a group of teams from the same school gave very similar STEM presentations but were awarded different marks? What if two teams, as an experiment, hand in identical notebooks to compare how close the judging is? What if they find discrepancies between judges? What if they got a minus mark on something they paid particular attention to? What if a judge felt bad about giving too many minus marks and put in a few positive checks to balance them out, even if those weren’t really plus-worthy?
None of these outcomes is probably what you want, which I presume is to give teams feedback so they can meaningfully invest their time and energy in improving. Many people here come from the world of education, where we give students rubrics so they can improve for the next assignment or project. In a classroom setting this makes sense, and speaking as a teacher, a lot of it works because I am the only person looking at their present and future work for the course. Part of my job is building the relationship and rapport with students so that the results of my rubric, shared openly, can offer a student insight into how to do better next time.
At a competition, judges do not have that same relationship, so feedback generally becomes a point of fixation, frustration, or rationalization. The rubric may minimize direct commentary, but it does not minimize the risk of teams misinterpreting what you give them, and it invites more questions: questions that, at events I have a hand in organizing, I candidly tell teams the judges are not permitted to answer. Giving this kind of feedback isn’t fair to the judges, who now have their thought processes exposed to the very children whose work they needed to judge, and it isn’t really productive or fair to the students either. Ultimately they are far better served by looking at the publicly available rubrics, looking at the award-winning examples, and self-evaluating. This lets students learn and improve at their own pace, and it lets judges continue to deliberate, score, and judge openly without having the individual judgments leading up to their final decision exposed and canvassed after each event.