I feel like if you go up to the judges after the event and just ask, "Hey, could you give me some feedback on my notebook? How could I do better?" very few judges would say no to you. But I've never tried, so I don't really know.
It takes either more judges or a longer time to do the process.
You need to do some more judge training on how to do it.
Someone (more time / people) needs to check the comments.
The pilots that I ran tried to give an even balance of positive and negative comments, with up to 3 each. At a 24-team event with 5 judges, this added about an extra 45 to 60 minutes to the process.
Lots of EPs are hard pressed to get 3-4 judges. It's also extra work and setup: you need to print the rubric on legal-size paper to give you a strip to write the comments on. We never give back the scores, just the comment strip.
For teams that did zero work on the notebook, I gave them the rubric back with a note that said this is what we use to judge notebooks.
I am another EP who is in the minority and would like to provide feedback to teams. Foster has already done a great job of explaining why things are the way they are. I also volunteer to judge sometimes at other EP’s competitions, and I usually like to be on the team that judges Design.
I think most EP's are afraid to ask their judges to do one more thing since they are already giving a lot of themselves. My experience on the judging side doesn't really support that. Most of the people that I have judged with volunteered their time because they enjoy interacting with the kids and supporting future engineers. They are happy to offer suggestions, and I have met several first-time judges who were very surprised to learn that we don't return the rubrics. I don't actually think it would take much extra time to provide feedback; what Foster said about that seems about right to me. Scoring with the rubric is time-consuming, but writing a few comments on a separate sheet to return to teams would only take a minute or two.
If you are on a team that is working to improve, I highly recommend two things. First, DO talk to the judges and ask them for feedback. Most are very happy to provide it. Second, congratulate the team that won Design and tell them that your team is trying to improve your notebook. At least in our area, most top teams are happy to show you their notebook and answer questions.
I remember when my team was just getting started, we thought we had pretty good notebooks but never won awards. Another local team nearly always won the Design award. One day, after watching them win it yet again, I half-jokingly asked their coach what their secret was. She showed me their notebooks, and my mind was blown. That was the first time that I truly understood what needed to be done to be an award-winning team, and I was able to help my teams to improve so much. We now have won many design awards, and I always try to pay the favor forward. We are always happy to show new teams our notebooks and answer questions. It is very likely that teams in your area are happy to do the same.
Example from Brentwood Halloween 2019 Competition:
The top 10 notebooks, whether or not they won awards, got direct feedback from the judge advisor. To be clear, this guy is really good with notebooks (the team he coached during Turning Point won Worlds Excellence).
Personally, I thought it was really fantastic of them to do this, and I'd be happy to see it at more competitions.
No. Getting feedback about how your documentation process is going during the season is the job of your coach/mentor. Think of coming to a major competition as a summative assessment. Either you are ready or you are not.
Why do I not believe giving the rubric back is of value? Well, typically you will have multiple judges assessing your work, each with different beliefs about how well you met each of the indicators. All that can really be told is which teams bubbled up to the top. It is not an indicator of what a team needs to improve (formative assessment); that role should fall squarely on your teachers/mentors/coaches. If you have not reviewed your design process or documentation with your teachers/mentors/coaches, and are mad that judges are not informing you about ways to improve, then you might be approaching self-assessment the wrong way. The best analogy I can think of for HS teams: do you expect college admissions teams to return their rubrics explaining why you did not make it into their institution? Or, in the real world, do you expect a potential employer's hiring team to return an assessment of why you were not hired? Not likely to happen.
So, instead of expecting judging teams or judge advisor to return feedback, I would instead seek local engineers to work with you to self assess the merits of your documentation and design process. Or better, have discussions with teams at the events that have consistently earned judged awards to discuss how to improve your design process and get better results. I would hope that those teams are ready to share what they have learned with others - that is what being part of this community is about.
I disagree. The point of an employer is to find the best person for the job. The point of the judge shouldn't just be to find the team that wins the Design award. The judge should be there first and foremost to help a team grow and become better at being engineers.
I agree with @lacsap 100%. But I will add a couple more points to his discussion: the engineering notebook rubric is only one part of the judging process. The interview is equally important, if not more so. So not only should mentors/coaches be helping review and provide feedback on their teams' notebooks, but they should be helping teams with interviewing skills as well. Mentors/coaches, one of the best things you can do for your team is to volunteer (or get one of your kids' parents to volunteer) as a judge at a local event to learn how the judging process works.
I partially disagree. Although a judge is not inherently responsible for telling you anything, and judging is more of an assessment like you were saying, they should at least tell you how you did and point out any obvious mistakes so a team doesn't get stuck with a low judging score tourney after tourney. At the same time, your notebook is only part of the equation: our team's notebook last year was average, but we really came alive in the interview, which helped our score a ton.
I think this is expecting/assuming a lot. Your students are fortunate to have you, but not all teams are so lucky. There are many teams that are just getting started, where kids are really passionate about wanting to learn and build robots, and some staff member at their school who knows little to nothing about robots or engineering signs up as their mentor just so that they can have the chance to have a team. That is how I got started as a coach. I knew nothing at all. I just wanted the kids at my school to have the opportunity. I know that I am hardly the only one- many/most new coaches in my area are in the same position.
We can’t ignore the fact that there are teams that are more or less just doing their best basically on their own with little to no help. I think getting feedback in some manner (not necessarily returning rubrics) would go a long way towards making robotics competitions truly the learning opportunities that they are supposed to be.
True… but I know that as a new coach starting out in IQ (many years ago, now!), and then moving into VRC, I had a LOT of trouble giving useful feedback after a certain point - even though as my son’s coach, I was very invested in doing my best and learning what I needed to learn to support the team. There seemed to be very few resources available for an independent team coach to really know how best to help them fully develop into the “Expert” category in this area.
We were fortunate that the kids were able to do as well as they did even fairly early on, and this was largely due to a few specific tips from 2 or 3 very kind, helpful individuals who had done some judging over the years and the extensive on-line research the team did into both Vex notebooks and general better documentation work.
However, we often considered the judging room "the big black box," where notebooks go to either succeed or die (and the only way you know which yours did was if you got an award). The team I coached went to dozens of competitions over many years and often won Design (and occasionally Excellence, later on)... but often didn't, as well, with the same notebook and the same kids - and we often had absolutely no idea what distinguished the win days from the didn't-win days (for Design). That's really not conducive to giving kids a solid game plan on what needs to change for their work to be the best it can be within the guidelines the judges are operating under.
That said - by FAR the best thing that I did to help us all better understand the process and how to improve a notebook to the next level was volunteer as a judge. I truly, honestly, deeply wish that I had done that within the first or second year that I started coaching. If I had known how insanely useful the information I got from judging even 1 event would be, I would have just asked to volunteer at any local event my kids weren’t competing in (so that I could focus fully on the event and not have to worry about anything team-related, while they were all so young).
In fact, the absolute best advice I can give for teams wanting feedback regarding the judging process and tips on how to improve their notebook is to have an adult close to the team go and volunteer for judging at an event.
So, after all that… I guess I totally agree with @lacsap that it should be the mentors giving the feedback, I just think that the mentors should BE judges so that they can give better feedback (or there should be some sort of training camp / class for mentors to attend that is structured in a way to give them that knowledge).
Actually, the same can be said about assuming Judges are the best ones to be coaching teams about design process for a season at one event.
The takeaway I have from the survey and responses: 1) teams feel they are ill prepared to understand the design process on their own, 2) coaches/mentors/teachers feel they are ill prepared to support teams in learning the design process and documentation, and 3) there are insufficient tools/resources for teams/coaches/mentors/teachers to learn about the design process and documentation. These are legitimate concerns for all.
That said, it is not realistic to have EPs or their volunteers fill the void. It is something RECF and VEX have been working on; perhaps a page that collects all these design and documentation resources should be made more easily available to teams (with a grain of salt: even though information about game rules is easy to access, some ignore it no matter how accessible it is).
My concern with judge feedback is consistency - it should be of consistent quality across all events. Judges are volunteers, and they are usually not teachers. They are looking at the product against the same rubric all teams have access to.
Moreover, there are many team-created how-to videos on the design process that are quite good - just look at past online challenges.
Yes, it would be great if RECF provided consistent videos/tutorials that are both simple enough for novices to understand (without getting scared off by the amount of detail) and deep enough for experienced students to find something useful for their future professional careers.
However, our primary goal is student education, and I am not going to stop giving feedback to students who want to learn just because there is no consistent feedback given at competitions in a neighboring state.
Here is an idea: now that many teams are doing electronic versions of the notebooks, we (mentors that hang out on vexforum) could volunteer to do an open EN review for a couple of teams a week. Interested teams could post a copy of their notebook and several of us could grade it and give feedback in the public forum thread.
Everybody will be able to learn something new about the judging process.