So this means that each tournament needs at least four additional computers (one for each judge pair). In an ideal world, this is great, but in a practical sense, this would be very difficult for most EPs.
They would need a phone or tablet to use the TM app (and it would need to be 5 GHz WiFi compatible), or any device with a browser. I feel a mobile device would be way easier, since judges are mobile throughout most tournaments.
I can think of quite a few events, even within my region, that wouldn’t be able to swing that sort of feedback system.
Feedback is just hard to do for many EPs. And many events are already understaffed for judging.
Everyone ready for this… MAKE IT OPTIONAL
If teams complain about it, ask them if they brought enough judges to volunteer; if not, oh well. Also, I think at the very least, if a tool like this is created, it would definitely start at Worlds, then sigs, then states, and maybe trickle down. It will not happen overnight. States like Minnesota have huge trailer infrastructure with the supplies and the advantage of amazing users; even with all of that infrastructure, volunteers are going to be an issue.
I would agree with doing a graduated system like worlds, signature events, states, and then a broader rollout, but I don’t think making it optional at any of those levels would be a good idea.
If some tournaments have digital judge reporting and others don’t, many teams would only attend the events that did have it. This would lead to an inconsistent experience, where some teams would get lots of feedback on their notebooks and others wouldn’t. There’s a reason RECF works so hard to make sure the tournament process is as standardized as possible between tournaments.
I was unable to go to the EP Summit in July. I did create a presentation about our process to give feedback to the roboteer / team and Dan did a very good job of presenting it.
In the pilot, the rule from Tarek at RECF was that I could not change the RECF form. So what I did was shrink it slightly and then print it on legal paper (8.5" × 14") with the RECF form shifted far left on the front and far right on the back. This gave me a 4" margin on one end. I ran a perforation down it to make the strip easy to separate from the main form. Judges wrote comments along the strip; at the end of judging it was removed and placed back into the notebook.
Because it was intended to help teams, there was no “judging”, i.e. “you scored a 2 on this section.” Judges were asked to put in comments that would help the team, whether positive or negative (things that were very good, things that need work).
The audience sound at the Summit was pretty bad. I could hear the speaker (Dan) but could not hear most of the comments clearly. There were people on the chat that were supportive of the comments and what I did. There were people in the meeting that were supportive; there were also some that were not.
On the no-support side, it boils down to time/people. EPs have a lack of judges, and that forces the issue of how much time they can spend commenting. I’ve been very lucky to get a large number of judges, and they all were willing to write comments.
EPs in general (me included) are worried about backlash from mentors/parents around the comments. I did not have any issues with last year’s pilot, but I witnessed snide comments about the judges being biased (*). So it’s a valid concern.
The paper method worked since it was simple and didn’t need any tech other than the box of pens.
Using electronic stuff is problematic for the events that I run: there is no public WiFi and cell coverage is iffy at best. I’m using all my tablets for scoring and all four of the PCs to run the events.
Based on the meeting, it is unlikely that RECF will come up with a way to do feedback.
I’m going to continue my pilot into this year at the events that I run, since I’d like to have a second set of data points.
(* Bias: I had a parent complain that the judges were biased. They were all from the Air Force, and none of them had roboteers at the event. I explained this to the parent; they huffed off saying “they still were not being fair”. Sigh…)
I was at the EP summit and thought your slides were very enlightening. I went in thinking that feedback was very important - but came away with the distinct understanding that many ENs fall short simply because the team did not follow the rubric. If I remember correctly, a significant amount of the feedback you provided to teams was to simply understand and implement the guidance in the rubric. At each event there were perhaps 5 to 10 teams that got guidance above and beyond that (if I remember correctly). Can you talk about that a little?
After seeing your slides, I started to think that coaches are the key. They should be able to read the rubric and provide feedback to their team on where they fall short. Having coaches, parents, and mentors serve as judges themselves is also a great way to get an understanding of the judging process and what makes a passable/good/great EN, and to share that knowledge with their teams.
One key point I remember being made in a prior EP summit is that feedback need not be limited to those at a competition. Coaches and mentors can also provide quality feedback outside the competition.
I’ve had issues at some events where rubrics were returned and judges’ comments were left on them that weren’t quite helpful, or really appropriate to be on the rubric and sent out.
Also, it is a really hard task to give adequate feedback to a decent number of teams while also having to deliberate and decide the other awards.
A digital solution would be nice, and in VA we have a great trailer infrastructure, but not every EP has the know-how, the tech infrastructure in their venue, or the resources to utilize it.
Sure. At any given event there were a number of notebooks that had only 3-4 entries and it was clear that the team didn’t understand.
I would write on the comment strip that this was the way that notebooks were scored. They should use it/follow it. Each entry should try to cover every point on the rubric. The note and the rubric were then placed into the notebook.
In some cases when the notebook came back, there was a huge improvement in the notebook information.
I made a suggestion that the “welcome kit” also contain a copy of the score sheet, as well as a copy of the awards guide. While this is killing trees, it puts the score sheet and awards in the face of someone when the box is opened. I would think that the coach/mentor would at least glance at them.
When I do my first session meeting with teams, EVERYONE gets a copy of the score sheet and we talk about that first, talk about the game, and talk again about the score sheet. Teams that think about the score sheet every meeting are also doing more brainstorming, planning, analysis, etc.
@Gear_Geeks is that what you were looking for?
Edited to add:
- I hate the term rubric. It’s a professional educator buzzword. When you say “rubric” to most people, they think of the cube, not the score sheet.
- At an event, scoring the notebooks takes less time than trying to get the top 10% of the teams in for interviews. At most events only the top notebook teams ALSO get the interview. This is a shame, but with time constraints it’s the way most of us process things.
- There is a thread about how to do a top notebook. The answer is make sure that every day’s entry covers EVERY point in the scoresheet.
Just a thought, as one-to-one feedback will take a lot of time and potentially require more volunteers than are currently available. Would it be beneficial if, at these events, more general feedback were given to all teams? For example, just after finals matches and before the awards are given out, judges could talk about things they liked in some teams’ notebooks and other things that they felt needed to be changed. This would still benefit the teams that struggle to get past the one or two entries by giving them some ideas, as well as those teams with better notebooks who need to make smaller adjustments. I’m hoping something like this would lead to less dispute over the results.
You probably don’t need an internet connection to make this happen. All Windows laptops now support hotspots, so you could create an addition to the Tournament Manager app for phones that would use the laptop’s WiFi just to send data to Tournament Manager, where it would be stored until there is an internet connection. (You don’t need an internet connection to make a hotspot, and instruction manuals exist to help people who don’t have internet connections but need to use the system.) The system is already there for scoring and live scoring, so it shouldn’t be too hard to figure out.
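The store-and-forward idea above is straightforward to sketch. Here is a minimal Python illustration, assuming a hypothetical local queue file and upload callback (none of these names come from Tournament Manager; TM has no such API that I know of): feedback entries are saved locally and only uploaded once a connection exists.

```python
import json
import socket
from pathlib import Path

# Hypothetical local queue: judge feedback is appended here while offline.
QUEUE_FILE = Path("feedback_queue.json")

def save_feedback(team: str, comment: str) -> None:
    """Append one feedback entry to the local queue file."""
    entries = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    entries.append({"team": team, "comment": comment})
    QUEUE_FILE.write_text(json.dumps(entries))

def has_internet(host: str = "8.8.8.8", port: int = 53, timeout: float = 1.0) -> bool:
    """Cheap connectivity check: try opening a TCP socket to a known host."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def flush_queue(upload) -> int:
    """Send all queued entries via the provided upload callable, then clear the queue."""
    if not QUEUE_FILE.exists():
        return 0
    entries = json.loads(QUEUE_FILE.read_text())
    for entry in entries:
        upload(entry)
    QUEUE_FILE.unlink()
    return len(entries)
```

Judges’ devices would call `save_feedback` over the laptop’s hotspot all day, and a background task would call `flush_queue` whenever `has_internet()` turns true, e.g. after the EP gets home.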
I do not believe we should skip doing something just because some tournaments don’t have enough computers or the internet is lacking at the venue. We could always discuss alternatives that would solve these problems.
First, I will say that the lack of feedback for the notebooks has been an issue for me too, for a long time.
That being said, I do understand the issues with providing feedback, the biggest, in my mind, being the subjective nature of the scoring and the very granular scoring. At a large competition with several very experienced teams, you can easily have several teams with perfect scores, and the decision comes down to a certain je ne sais quoi that leads to only one team getting to go home with hardware.
I’d recommend teams coordinating meet ups with other local teams and swapping notebooks, interviewing, scoring and providing feedback and discussion.
I’ve been a judge at all levels, including as a World design judge. At Worlds, all the design award finalists have perfect or near-perfect rubric scores. States is usually similar, and at local events there are usually a couple of teams that stand out. The je ne sais quoi, as you put it, is the evaluation and comparison of the finalists during the interview process. The interview is an area that many teams seem to discount and don’t prepare for, yet it is where the important judges’ decisions are ultimately made.
One of the points cited for using a computer instead of paper was that it would be easier/faster than handwriting. Tablets/phones wouldn’t have that advantage.
I really don’t like this argument. Differences between tournaments are commonplace and will probably continue to exist as long as the current tournament model exists.
- Some tournaments livestream; others don’t.
- Some tournaments upload scores and rankings live; others don’t.
- Some tournaments have well-seasoned volunteers in every role. Others have inadequately trained, inexperienced volunteers. (Most fall between the two extremes.)
- Some tournaments are much more efficient than others.
- The list goes on…
Teams familiar with their region’s tournaments already pick and choose the ones that provide the best experiences to attend. Yet tournaments without the additional quality-of-life improvements continue to exist, because there are not enough opportunities in the best tournaments alone.
Obviously one can argue over the significance of the existing differences versus an optional judging feedback system. However, I do not think such an argument should bear any weight:
Instead of using differences in budget/technology between tournaments as a justification for artificially stymieing an otherwise-welcome system, why not just enable EPs to provide the best experience (and, more importantly, the greatest learning opportunity) they can for their competitors?
Think about it this way:
Who is benefiting from not having some sort of judging feedback system?
I don’t think anyone is, and EPs that think they are, are missing the point of VRC — to provide an educational opportunity.
Yep, the interview is the “proof of the pudding”. I’ve seen teams be so amazing, and I’ve seen other teams implode. The ones that do really well put almost as much practice into the interview as they do into the event itself.
Wouldn’t teams that already have great notebook practices and have won multiple design awards benefit from no feedback, due to the lack of competition? If teams were given feedback, their notebooks could greatly improve, and there would be a larger number of notebooks considered for the design award. This, however, is another reason why feedback would be useful: it would allow teams who don’t know good notebook practices to improve, so it’s not the same 6 or 7 teams in a region who are design award finalists (in competitions such as State Qualifiers and maybe even States).
To briefly recap a much-argued point:
Most people, EPs included, would like to provide feedback.
In order for there to be a VEX competition, EPs organize and hold events. In order to hold events, EPs must recruit, train, manage, and oversee volunteer staff, including judges. In order for an EP to recruit people to commit to giving up 6 to 14 hours of weekend time to judge at an event, the work asked of them has to be reasonable, with a minimal number of complications, controversies, and conflicts. So, to some degree you are correct. By not increasing the work on volunteer judges, it does make it easier for EPs to hold events.
But holding events helps everyone; without them there isn’t an educational opportunity at all.
The problem with using hotspots is that WiFi (usually) isn’t allowed because it can interfere with the signals sent between robots and controllers.
(I have seen tournaments where they had a WiFi network just for the event, but I’m not sure whether they made changes so that it wouldn’t affect the robots. Either way, I remember Worlds didn’t have [public] WiFi.) [Edit: Now that @holbrook mentioned it, I do remember Worlds had a WiFi network, but it wasn’t available to the public.]
I’m not sure how the scoring tablets send data, though. Maybe they’re attached to a wired network, or use a different form of wireless communication.
[Edit: Or, like @holbrook said, they used 5 GHz WiFi. I thought of this, too, but I forgot to mention it in my post, and I figured that if they could use 5 GHz WiFi, they would have made it available to the public at Worlds - I forgot that not all 2.4 GHz WiFi devices can use 5 GHz, and even then, it might not be feasible to use it for public WiFi at Worlds anyway.]
But I imagine referees and emcees could use a wired connection since they are always near one of the fields, while judges could only use it in the judges’ room, and not when they’re interviewing teams.
More properly, WiFi hotspots aren’t allowed because they can interfere with communication between robots and controllers, whether they’re connected to a field or not.
The ‘change’ you referenced is simply to run the network only on the 5 GHz frequency band - VEXNet uses 2.4 GHz, so 5 GHz WiFi isn’t a problem.
There is WiFi at Worlds (for connecting scoring tablets and some other TM devices to the network). What there isn’t is publicly-accessible WiFi, and they prohibit things like hotspots because having a lot of networks in the venue is much more likely to cause robot interference.
Nope, just WiFi. Our solution is to bring a consumer WiFi router with us to the venue, and connect that to a building ethernet port, so that we have a private network which we know TM will behave nicely with. This is also great for us because we run events in lots of different venues, so this way all our scoring tablets and other devices will connect automatically to the network no matter where we are.
But several of our venues also have in-building WiFi, which we make no effort to suppress, and I can’t recall this ever causing any robot connection issues.