At long last I finished the final day!
they feel that if the biggest complaint from the community is that the wrong team won an award, then that's good, because it means the event wasn't a disaster, we aren't losing thousands of teams, and the kids are having a good time. BUT they know people feel passionately about the awards process, and they are making it a big deal to improve it this year
they are going to have more hotel rooms close to the expo center/downtown at worlds this year
they are looking at a new way of doing Q&As outside of the forum
part of the extra revenue from higher fees this year goes to hiring more IT support staff. They provide a lot of benefits to their staff that most nonprofits don't because they want to retain their high-quality staff. They also need to hire more regional support reps and other positions. In general, new hires sound like a big reason for the fee increase. The decision for the fee increase was made by the RECF's board of directors. They felt a one-time increase was better than a gradual increase.
v5 will have a color touch screen
v5 will support vision processing
There was a big discussion about throwing matches and over-involved adults
- they do not get people on the forum who rationalize throwing matches. Jason can see how children might rationalize it, but not adults
- the recf does not want to require the people running events to have to accuse competitors of lying
- the first audience commenter tried to rationalize it - the rest seemed against it, but not all agreed on what to do
- there is a no-awards list for misbehaving teams at worlds
- one solution suggested by the audience was to make a PSA about good sportsmanship/not-overly-involved adults which would be shown at competitions
- another suggested solution was to make a rule prohibiting an alliance from picking a sister team as their third member. This was generally not liked, and it was recognized that sister teams often aren't the problem
- another suggestion was to create a cut-off point where teams below a certain rank are ineligible to be picked for an alliance
- the consensus seems to be to create an ethics code of conduct/honor code that teams have to sign and that would be displayed at events. This would be targeted at both the over-involved-adults issue and the throwing-matches issue
- they cannot say no adults in pits and don’t want to say no adults working on robots or robots must be 100% done by students
- people stressed the need for due process for teams accused of violating the code
- it seems ambiguous what kind of punishments there will be for violating the code, but it sounds like it won’t be anything beyond what they already have
Regarding the part that Tabor brought up earlier (thanks for pointing that out to me well before I would have gotten to it), where they feel the need to address what was said on the forum:
- regarding this comment by robo_eng, "Who made that decision, and do they still feel like they handled it appropriately?" - Paul made the decision, and it was difficult and painful. They stand behind it. They did not want to put out information and then make people wait 8 months to actually have the product
- There were claims on the forum that the recf isn’t transparent but Jason feels they are
- coffee being the number one job of EPs with regards to the judging room was only a joke
- some of the things about the summit on the forum could be examples in a college presentation on taking things out of context
- according to Jason, the claim was made that the majority of EPs want the rubric given back but the panel vetoed that. Jason says this claim is incorrect; only one person raised their hand in favor of giving rubrics back.
- Jason then said he thought there was a consensus to not give them back due to the various pros and cons and everyone there clapped
- Another one taken out of context was "Jason says we don't need feedback on judged awards because we should be able to figure out what areas we need to improve in on our own." Jason only says that to mentors who think they don't know how to tell their teams how to improve without being given the rubrics
- It’s frustrating when things are taken out of context and Jason will defend his staff
- If they made a decision against what we wanted or if we think they aren’t being transparent, we should please tell them
- and they recognize that it’s difficult for those on the webcast to participate
wow that was long
Thank you, RECF, for holding and streaming the summit! I think a lot of good came out of it.
I made this comment.
Not giving feedback is one thing; going so far as requesting that rubrics be destroyed is my concern regarding lack of transparency.
Generally though they are pretty good.
I think I have a good grasp of their concerns, having been a judge for other robotics competitions, including FLL. The raw stuff people wrote down during the judging wasn’t, in most cases, the sort of thing that would help the team. However, the event organizer asked us to write up feedback for each team. I took that seriously, but it was very difficult to get it done in time. I coordinated the responses from the other judges and we coalesced the input into a couple of sentences for each team. This was a very lengthy, very difficult process.
I hope the feedback was helpful to them, but I know it would be very, very difficult to recruit those judges back next time to help. When the event organizer polled the judges asking for honest feedback on the event, including anything that would make it more likely they would judge again, the consensus answer was “get rid of judges feedback.” When he asked later in the process what would prevent them from judging again, the consensus was “requiring judges feedback.”
On the rubric scoring sheets, the negative judges' notes said things like "don't understand sensors," "teacher programmed," "bully can't share," "teacher's pet," "what about gravity?," "all excuses, no reasons." I believe handing that back to a team would do more harm than good.
Great post - this about sums up what many of us who have judged (from local competitions to Worlds) and are now EPs discussed among ourselves during the Summit. It also led to discussion on changing the name of the rubric to “judges worksheet” to alleviate some of the thinking that goes with “rubric.”
We also discussed giving a few sentences of feedback but the sheer amount of time to add that in to an already challenging process should not be a requirement on an international level.
Not to say the EP opinion doesn't matter, but would you agree the EPs are the portion of the community most biased against giving feedback? Simply because it would mean more complaints and more work for the EP?
If you asked the mentors or the students the opinion would be quite different. Not disagreeing, really just pointing something out.
Many EPs are mentors themselves.
Heck, I’ve been in all 3 positions. I can understand not giving back judging rubrics. Especially when they have comments that might not necessarily have been helpful or beneficial to teams.
I can also see teams wanting to get some sort of feedback. Maybe that is in the form of verbal feedback to teams in an interview, or something else. I've seen far too many cases of students and adults getting a rubric back and thinking "we got a perfect score, how'd we lose the Design Award?", or adults harassing volunteers, hounding them about the judging process.
Yeah, no. Forget giving rubrics back.
For an event to occur in one day, with even 24 teams, it's a lot of work for the judges. Providing feedback to all those teams, and getting Design, Excellence, and other awards completed before eliminations, is a tough job.
Ultimately though, while I like the rubric as an effective filter tool for judging, I think a re-emphasis needs to be applied to the fact that it is a filtering tool, and not a final decision maker. That gets lost on a lot of teams, and is a cause for a lot of issues.
Having seen both sides of the argument, I believe we will continue on as we already were. We will not discourage our teams from putting forth effort toward the judged awards if they choose to do so, but we as mentors will continue not to give those awards any sort of priority or value. We will continue teaching good design process and note-taking skills, and we will continue to encourage our students to pursue Tournament Champion and Skills awards, which they have more control over and clear feedback from.
Okay, but at the end of the day, if students aren't learning (receiving feedback), why do it?
Why pay $70 to buy the hardware so that a team that already knows it's good gets a pat on the back, while the teams that don't understand the judging process just get their notebook back without any word as to why they weren't considered?
I’ve applied for a lot of jobs, and just about every time someone says I am no longer being considered for a position, I’ll (politely) ask for some sort of feedback as to why. Generally it’s something along the lines of not having the experience they were looking for, or they had more seasoned applicants. Sometimes they’ll say that something looked odd about my resume, or that my cover letter could use a more personal touch — little critiques that have made my application process pretty stellar. I remember once a recruiter told me I didn’t have experience doing a specific thing, which I had done and forgot to put on my resume or didn’t think was important. Point is, I learned something from their words of wisdom or their perspective.
I understand judges want to do less work, but their entire purpose at an event is to provide a teaching moment, so it's obviously going to require effort.
I'd bet judges are the group least likely to want to prepare feedback. I believe most judges are perfectly willing to give feedback; they just don't have enough time to both judge and prepare feedback.
Maybe we could propose a feedback tool that would make it easier/faster.
Mentors and coaches want the feedback, and have to provide it even when it isn't supplied to them. So you're dead on with this; it would really help them to have it.
I'm cool with this. I'd be okay with the rubric with just the score, without notes. That at least tells us where we need to put more effort.
Have the "judges worksheet" divided into a score section and a notes section, maybe with public and private notes sections. As they go through, if they have quick feedback, they can put it in the public section; otherwise their notes go in the private section. Then tear off and destroy the private section before handing back the rest.
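The split worksheet described above could be sketched as a small data structure. This is just an illustrative sketch, not an actual RECF tool; the class, field names, and sample team/notes are all invented:

```python
# Hypothetical sketch of a split "judges worksheet": scores and public
# notes go back to the team, private notes are discarded. All names
# and sample data here are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class JudgesWorksheet:
    team: str
    scores: dict                                        # line item -> 0..3
    public_notes: list = field(default_factory=list)    # handed back
    private_notes: list = field(default_factory=list)   # torn off, destroyed

    def handout(self):
        """What the team receives: scores and public notes only."""
        return {"team": self.team,
                "scores": dict(self.scores),
                "notes": list(self.public_notes)}

ws = JudgesWorksheet("1234A", {"design process": 3, "testing": 2})
ws.public_notes.append("Clear iteration log; add more test data.")
ws.private_notes.append("Compare with other judges before deciding.")
print(ws.handout())  # private_notes never appear in the handout
```

The point of the design is simply that the private section is structurally separate, so destroying it before hand-back is a mechanical step rather than a judgment call.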
I like the idea of getting some sort of feedback and understand that, logistically, it is difficult. When we were a new team just learning the ropes, one simple thing helped us improve. When announcing who won an award, the announcer would describe some of the attributes of the winning team ("included lots of photos," "detailed description of programming process," etc.). That helped us immensely as a new team.
I judged at a tournament where we strove to give each team personalized feedback. I was all for it and agree that it was a lot more work. Plus, when I give negative feedback, I always want to "open" with positive feedback. Sometimes it was difficult to come up with something positive that wasn't an outright lie. So I can see why there is pushback on doing that.
Plus, with only four levels of scoring (0, 1, 2, 3) - multiple teams at a tournament can get a perfect score - so it only adds to confusion when a team gets a perfect score and doesn’t win an award. To add onto that, some judges are more strict in scoring than others - so it is conceivable that a team with a less than perfect score can win over a team with a perfect score. It’s just a judging variance.
RECF has a tough job. But I wish they could figure out a way to provide some sort of feedback on what it takes to be a winning team.
It seems to me that if the position is that it's too much work to make the judging process more transparent to teams, so that they have a decent idea of what to do to win (and we have already established that the judging criteria, or whatever you want to call them, do not do that, since they are only guidelines for narrowing the contenders), then we must also be okay with teams taking the position that it's too much work to put extra effort toward trying to win something that appears to be a crapshoot anyway.
Especially now that the requirements are changing so that it's less convenient to put together a notebook, which is a key component for the two awards offered at most events.
It seems like since the judges are already doing so much work, a little extra effort to do something that makes teams appreciate the process more would be worthwhile.
You can get a perfect notebook score and bomb an interview.
I can have a perfect Resume and then bomb an interview.
It's basically the same.
It still would be good to avoid the situation where two teams score exactly the same on a rubric/scoresheet, but one wins design and the other gets nothing. You might need more than just the whole-number values, to do that.
At a couple of events I judged, we used the rubric scores as the "going in" numbers. Then we used our notes, together with the scores and our overall impression of all the teams we'd seen, to re-rationalize the numbers. Sometimes a team I thought had maxed out a rubric line was outdone by a team I saw later, and I adjusted the scores to reflect that. If I had given them both a "3," I decided whether one was a 3.5 and the other a 3.4 for that particular rubric line.
In effect, we created a more granular rubric score. The FLL rubric score sheets imply an integer score. I remember how shocked the other judges were when I said “I scored them at 2.5 on that.” One of the other judges said “you can do that? I didn’t know!”
After that, we made a much more fine ranking of each rubric line item. That really helped put some distance between the teams.
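The re-scoring procedure described above amounts to breaking integer-score ties with fractional adjustments. Here's a minimal sketch; the team names and scores are invented, and this is my reading of the process rather than any official method:

```python
# Two teams tied on every rubric line item with integer (0-3) scores.
# A judge who compared them directly nudges one line item by a
# fraction, separating the teams without changing the integer scale.
# All data is made up for illustration.

def total(scores):
    """Sum a team's per-line-item rubric scores."""
    return sum(scores)

# "Going in" integer scores: three rubric line items per team.
initial = {
    "Team A": [3, 3, 2],
    "Team B": [3, 3, 2],   # tied with Team A on every line item
}

# After direct comparison, the judge decides Team A slightly outdid
# Team B on the first line item: the two 3s become 3.5 and 3.4.
adjusted = {
    "Team A": [3.5, 3, 2],
    "Team B": [3.4, 3, 2],
}

ranked = sorted(adjusted, key=lambda t: total(adjusted[t]), reverse=True)
print(ranked)  # Team A now ranks ahead of Team B
```

The integer rubric alone cannot distinguish the two teams (both total 8); the fractional pass produces a strict ordering while keeping the original scores as the starting point.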
Maybe getting feedback like that would be helpful.
[Edit: looks like a couple of you (hi Gear Geeks!) were typing things along the same lines.]
I 100% agree.
this provides a good teaching moment. You can have a perfect project and not convince the right people.
@kypyro I think teams are intelligent enough to realize that a lot of teams at a competition probably get a perfect score, BUT what often happens (and what makes it really hard to provide useful feedback, which you said mentors should be able to do) is that a team that obviously did not get a perfect score will win an award.
I really like the idea of tear-off sheets or the like. I also think that just saying in what ways the award winners excelled is a great idea.
This is a side note that is sort of irrelevant until feedback in general is provided, but I would request that no judge EVER write directly in a student's notebook to give feedback. Sticky notes or a loose piece of paper are appropriate, but no one but the students should be writing among their own designs, programs, data, and notes.
My personal experience comes from lab notebooks in college. There were some pages with lasting value that I kept all four years. It is annoying having "Great Job!" written in purple pen at the top of something you reference for years to come.