STEM Judges feedback for teams

I have been thinking about how I would like to do feedback for teams. I agree that the rubrics should not be passed back, for the sake of keeping judges' notes candid. How about this one? I think it would be really easy for the judges and would give the kids an idea of what they should focus on.

STEM Judges Feedback for teams.pdf (174 KB)

It’s not too bad compared to some other ideas, but I still think the following scenarios will happen with this:

  1. Teams (in particular adults/mentors/parents) look at each other’s feedback and get confused about why they didn’t win if they had more +’s than a team that got a - (remember, the rubric is only part of winning a judged award). Teams/parents/mentors demand to see the sheet from the team that won, etc.

  2. This will still add time to the judging process - 30-60 seconds per team is significant at anything larger than a small event. More than once I’ve seen judges take deliberation all the way to the final cutoff just because scores/impressions were so close.

I feel like teams should be able to get a pretty good idea from having parents/volunteers/mentors/teachers/adults they don’t know go through the process with them during PRACTICE. This is something we encourage mentors to make happen BEFORE events.

I think it’s a great start. Would love to see it at events.

I strongly disagree with any notion of providing teams direct feedback.

The best way I’ve found to help teams improve is by directing them to ask the team that did earn the award to share their work - most teams are happy to point out technical details, show engineering notebooks, and the like. By not showing them checkboxes or a rubric, you are giving them an opportunity to observe and reflect - a skill that is very useful as they go on to other endeavors, including their next competition.
Teams will tend to take any rubric, no matter how rudimentary, and start comparing notes. What if a group of teams from the same school gave very similar STEM Presentations but got awarded different marks? What if 2 teams, as an experiment, hand in identical notebooks to compare how close the judging is? What if they find discrepancies between judges? What if they got a minus mark on something they paid particular attention to? What if a judge felt bad about giving too many minus marks and put in a few positive checks to balance them out, even if those weren’t really plus-worthy?

None of these things is likely what you want, which I presume is to give teams feedback so they may meaningfully invest their time and energy in improving. Many people here come from the world of education, where we give students rubrics so they may improve on the next assignment or project. In a classroom setting this makes sense, and speaking as a teacher, a lot of that works because I am the only person looking at their present and future work for the course. Part of my job is building the relationships and rapport with students so that the results of my rubric, shared openly, can offer a student insight into what they should do to do better next time.

At a competition, judges do not have that same relationship, so feedback generally becomes a point of fixation, frustration, or rationalization. The rubric you have provided minimizes the direct commentary, but it does not minimize the risk of teams misinterpreting what you give them, and it furthermore invites more questions - questions that, at events I have a hand in organizing, I candidly tell teams the judges are not permitted to answer. Giving this kind of feedback isn’t fair to the judges, who now have their thought process exposed to the very children whose work they needed to judge, and it isn’t really productive or fair to those students either - ultimately they are far better served by looking at the publicly available rubrics, looking at the award-winning example, and self-evaluating. This lets students learn and improve at their own pace, and it lets judges continue to deliberate, score, and judge openly without having the individual judgments leading up to their ultimate decision exposed and canvassed after each event.

I agree with this. But after an event it’s just not going to happen for the vast majority of teams. I’m trying to bring a great STEM experience to a vast number of students.

I posted a new version on a different thread, limiting the +'s and -'s to 2 each. Is that any better? Perhaps one each? I teach in a middle school classroom, and for most of my students it would be great to have something to fixate, frustrate, and rationalize over. Getting no feedback gives them absolutely nothing to work with. I go over their materials at least twice a month, but something from the outside helps a lot.

And how do you show your students the winning STEM presentation? I have thought about doing recordings and playing the winning one at the end, but am I going to make all those parents and coaches sit there for extra time after the whole thing is over? Would the kids even be able to pay attention at that point? Should I post it on YouTube? Would the winning team ever consent to either?

Bringing a STEM experience and offering direct feedback on judging are 2 entirely different things. If students are interested, they will ask each other questions and seek each other out.

The number of pluses and minuses doesn’t matter - feedback this broad invites comments and follow-up questions in order to become useful, which is the problem.

Perhaps at the end of a season, the teams with winning STEM awards could be convinced to post a YouTube video of their presentation as an example for the next year. Perhaps someone could organize a post-season workshop for those who are interested.

Ultimately teams need to be self-reflective and do the best they can - and self-improvement should be their goal, not winning the award itself. Teams can be encouraged to video themselves giving their presentation at various stages of the season to see how they grow and improve.

I know that as a judge and judge advisor, I would flat out refuse to volunteer if notes from the judging room, even notes as broad as what you suggest, were to be made public.

Ben Mitchell, I’m sorry you feel that way. As a coach and as an event partner, I know that feedback is beyond crucial - it is necessary. Yet here we lock the kids in a room, don’t allow them to see their competition, and then don’t tell them how they did. How does that make any sense? Everyone knows that in order to get better, you have to have feedback. Even the RECF knows that - see the Robotics Education & Competition Foundation’s Facebook post: “Calling all VEX IQ Challenge Teams! Ask your teachers, mentors, friends and parents to watch you present your STEM research project. You may receive different feedback and advice from each person which will help you make improvements. #TipTuesday #omgrobots”. But they don’t want the kids to receive feedback from the most important person, the judge?

Where is it stated that we cannot let them see their competition? There is nothing prohibiting teams from displaying their notebooks, robots, or STEM presentations on whatever medium is appropriate - be it YouTube, a website, or in person. That’s something teams can always elect to do, and many do so.

I think I described the problems with giving students judging feedback rather extensively. Wouldn’t it be better, perhaps, for you to organize a workshop where you can talk about awards and what makes a good submission, without comparing teams to each other? Perhaps the winning teams from the past season would be willing to show their work, explain their process, and give examples? This way the feedback is general, and again, students and coaches can take that general advice and reflect on their own work.

The link to the REC Foundation suggests having teams present to adults or mentors for practice and feedback. This makes sense, as I mentioned in my first post on the subject, because those adults and mentors would presumably have the personal connection that makes feedback constructive. Having judges reveal their judging process is entirely different, since that connection does not exist. Teams would be getting feedback that is impersonal and one-sided (you can’t expect judges to engage every team in a conversation about every submission), and because it arrives immediately after the award ceremony, I fear many teams would fixate on outcomes rather than improvement. “What can we do to improve?” - the core question of constructive feedback - is replaced by “Why didn’t we win?” When every team gets a rubric back, it invites this controversy.

I understand that many coaches would want direct feedback, but again, what works in a classroom does not work at a competition. It isn’t right to put judges in a position where their feedback may inadvertently give offense or cause hurt, or where perceived slights or oversights are given a forum to be aired and perpetuated. And it isn’t fair to students either, since feedback this general is likely of very limited use anyhow, and all it takes is one person to cry foul for the entire process to be needlessly called into question, damaging the experience for many participants.

Lastly, the Judges Guide states that notes and rubrics should not be returned to teams, so doing this would be a violation of the guidelines set forth in that document.

If you want feedback, by all means get some adults who are good with kids to review the rubrics and guides, and see your students’ presentations or notebooks or whatever and give their feedback in the form of a conversation. Asking competition judges for that feedback is neither appropriate nor truly helpful, for the reasons discussed above.

Ben, I agree with you from the point of view from the judges. I don’t like the idea of handing back the rubric as I don’t want the judges to be distracted while they are taking notes. I don’t want them to be worried about offending anybody and I want their remarks to be candid. I also know that any feedback is extra work for the judges, and the “easier” you make it for the judges the less meaningful the feedback is.

I also agree that judges are just human, and five different sets of judges could have selected five different winners after watching the exact same presentations or looking at the exact same notebooks. Their feedback isn’t terribly useful in that sense either, since different judges value different things.

I was a STEM judge for our FL Elementary IQ State competition last year. It was a long, stressful day, and you had better believe I did not want to return those rubrics! It was my first experience on that side of the coin. There were a couple of groups that did so poorly that I was a little upset with their coach for not just telling them not to do it. One of them was actually a little offensive. There was no feedback I could have given that team that would have been helpful.

I do disagree with this model. In my experience it is just not practical. My kids are all middle school students in my classes. I can’t just take them to some other school or team to see notebooks and projects. You see a few videos of STEM presentations, but not many. And a lot of the published notebooks are from teams that win design at Worlds; that’s not exactly the standard I want to throw at my kids. And after events, coaches, parents, and kids want to go home and don’t have time to share what they have with anyone.

I really don’t have much of an answer for you. I will try to do… something. Maybe I could create another volunteer position for “feedback”: someone who is not a judge but watches the presentations only for the sake of providing feedback to the groups. If only we had that many people!

I would like to continue this discussion and I am trying to approach it from all sides. I want to help point these kids in the right direction.

Perhaps on a local level, teams with winning submissions could be asked to record their STEM Presentation for YouTube, or scan selected pages of their notebook to be posted online or put into an email for other teams to view at their convenience?

This would give teams that want to improve a model to compare their performance against, and it doesn’t take up time at the end of the day at a competition. It also provides an example while keeping direct feedback out of the equation. I think the majority of teams would be proud to show off their projects.

Transparency is essential for any competition to work. Not only does this make logical sense, but there is a long history within other sports and competitions to back this statement up. As long as no scoring sheets are returned, judged awards in VEX will always leave participants with a feeling of mistrust, and the results will always fall far short of a consistent and fair outcome. Further, and just as important, as @sankeydd and @daddycrusader aptly point out, if a team is not given feedback about how THEY did, they will never be able to improve. And learning should be the key motivator for this whole program.

Come on guys, let’s get real. This shouldn’t even be a discussion. It’s like taking a class where, at the end, the teacher gives you a grade but is prohibited from telling you why you got it. This is just wrong on so many levels and really needs to change for this program to advance.

Please read my post that explains why the classroom analogy doesn’t work in the case of judged awards.

At some point you need to have faith in the process and have some degree of trust that the people doing the judging and organizing the events are informed and working for the right reasons.

A lack of feedback, you say, causes mistrust and a lack of a fair outcome. What will change when you get a rubric back and dispute the scoring your team received?

Providing direct rubric feedback actually opens up more problems than it solves.

Ben, what I took from your earlier post is that blocking judge feedback is necessary because the judging process may be flawed, so withholding the feedback avoids trouble.

If that is the case, I believe making it open is the first step toward making it better.

And let’s face the facts: the VEX competition community is a small circle, and some “insider” coaches will be able to get feedback from their planner and judge friends anyway. By blocking other teams from receiving feedback, you are creating an unfair playing field for the kids.

In our state this year, I saw a few teams that were significantly better in bot design, scoring strategy, and driving skills than all the other teams. But they NEVER won any design awards, STEM awards, or excellence awards. Those awards always go to teams from a few schools that are VEX veterans and have a tradition of hosting VEX events. I believe it’s not because those kids are better at problem-solving methodology, project management, or documentation skills. Instead, I believe it’s simply because their coaches know the “rules of the game” better and can help tailor their projects and documents toward the judges’ taste.

I talked to the two highest-scoring teams in our state about those judged awards. Both are independent teams. They were both like, “We know we won’t get those awards. We tried before; now we are smarter and won’t waste our effort on that route.” That kind of mindset is what your black-box approach promotes.

It’s not that the judging process is flawed, but that opening up that process and giving teams rubrics or any other feedback will ultimately leave people with an incomplete picture of the process. People will then fill in the gaps with conjecture. Unlike a teacher who gives feedback to a student so they can improve their work on a personal level, judges at these events are ranking teams against each other, and there isn’t the one-on-one relationship that constructive feedback requires.

Perhaps I expressed myself badly - judging necessarily involves some degree of subjectivity. Is it a good idea to put that on display to be second-guessed by the teams who are the subject of the judging? Teams that may have strong emotions from the events of the day? Is this productive, or a recipe for misconstrued conclusions?

I think it is far better that teams see who won an award and then compare their work to the team that won, gaining feedback through self-reflection, than that they simply compare rubric scores and feel cheated that they didn’t earn the points they “should have” earned. Teams also should not be able to see how they rank against one another with regard to these awards - it can only serve to incite bitterness as teams dispute their award scoring and cry “favoritism” toward other teams.

I cannot speak for anything beyond my own province, but at the events I organize in NJ, judges are vetted and generally come from sources outside the host school or program - and from multiple sources - specifically to avoid conflicts of interest. Judges all get very specific instructions on sharing what goes on in the judging rooms, and I personally collect judging materials to destroy them. As far as I can discern, there is no collusion between judges and teams. Some programs tend to earn certain awards not due to any bias, but because they have made those awards part of their institution - some programs consistently have great engineering notebooks, built on a history of good notebooks and a team culture that treats the notebook as a vital part of the process.

We also have teams that tend to do well in robot performance and not on earning awards. Some teams ignore aspects of the competition to focus on the robot. That is their prerogative, but I think they are missing out on the full experience that IQ has to offer.

If you have some kind of issue with the way judging is managed in your area, you should bring it to the attention of your regional support manager. What you are describing is an unethical practice and I’m sure the REC Foundation would want to hear about it.
