When I did my thesis (too many years ago), I got help from our statistician/math professor on analyzing the statistics. That’s the only reason I know about such things…
One more reason to love BO3 (and other things that could make VRC less stressful)
Let’s say there is a “true” probability P, yet unknown to us, of each team being the best team in the world.
The values of all those individual probabilities add up to 1, and we could sort them to get the “true” ranking of all teams.
Qualification rounds are always subject to the random team pairings, game specifics, hardware failures, and some luck.
But with enough matches played, we could say that team rankings at the end of qualifications are very close to the “true” ranking (still unknown to us).
There is very high confidence that the ranking of each individual team is within a few spots of its “true” ranking (there is very little chance that the “best” team will end up ranked 53rd).
The elimination rounds are supposed to give the “best” teams a chance to prove, in a less random environment, that they are indeed the best by playing against opponents of increasingly higher rank and difficulty.
First, we could assume that qualification rounds over multiple seasons are imperfect but are equally good at approximating “true” team rankings.
Then, we could establish a metric: for example, if a team was ranked #2 after qualifications and lost in R16, we place it in the middle of the 16–32 range and compute rank_adjustment = 2 − 24 = −22.
This way we could quantify the BO1 vs BO3 filters by looking at how far individual team rank has moved (upset magnitude) vs imperfect but statistically known benchmark of qualification rankings.
If we correctly average statistics over all BO3 seasons we should be able to remove a lot of season and game specific correlations. Similarly, averaging last two BO1 seasons, we could smooth out some game and control system specifics as well.
My understanding is that the system with fewer rank movements in eliminations is better, but the real trick is how to properly account for the conditional probabilities of round advancements and then to sum individual team results (absolute rank value, range averages, sum of squares, etc.) without making your probability math invalid.
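To make the proposed metric concrete, here is a minimal sketch in Python. The placement ranges per exit round are my own assumption, extrapolated from the single 16–32 example given above; the aggregate (sum of squares) is just one of the options mentioned.

```python
# Sketch of the rank_adjustment idea described above.
# The placement range assumed for each elimination exit round follows
# the example in the post (a #2 seed losing in R16 lands in 16-32).
EXIT_RANGE = {
    "Winner": (1, 1),
    "F":      (2, 4),    # lost in finals (assumed range)
    "SF":     (4, 8),    # lost in semifinals (assumed range)
    "QF":     (8, 16),   # lost in quarterfinals (assumed range)
    "R16":    (16, 32),  # lost in round of 16
}

def rank_adjustment(qual_rank, exit_round):
    """Qualification rank minus the midpoint of the placement range
    implied by where the team exited eliminations."""
    lo, hi = EXIT_RANGE[exit_round]
    return qual_rank - (lo + hi) // 2

def upset_magnitude(results):
    """One possible aggregate: sum of squared rank movements over
    a list of (qual_rank, exit_round) pairs."""
    return sum(rank_adjustment(q, r) ** 2 for q, r in results)

# The example from the post: seed #2 loses in R16 -> 2 - 24 = -22
print(rank_adjustment(2, "R16"))  # -22
```

Comparing this aggregate, averaged over BO1 seasons versus BO3 seasons, would be one way to quantify the “filter quality” of each format.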
Adding some interesting data I found. So in Turning Point, there were 1254 events with 1288 divisions total. Only in 561/1288 of these divisions did the first seed alliance go on to win the tournament (defined here as having received the Tournament Champions or Division Champions award)
As compared to In The Zone
Turning Point: 561/1288 (43.55%)
In The Zone: 577/1229 (46.94%)
Performing a two-proportion z-test, we find that the chance of getting that result or a more extreme one is about 9% (p = 0.08726) if the true proportions were equal, which is not generally considered statistically significant.
From this evidence, it is reasonable to conclude that there is no significant difference between the rate of first-seed tournament wins in In the Zone versus Turning Point.
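For anyone who wants to check the arithmetic, the two-proportion z-test can be reproduced with the standard library alone. This is the textbook pooled-proportion version, using the counts quoted above:

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed tail area
    return z, p_value

# Turning Point: 561/1288 first-seed wins vs. In The Zone: 577/1229
z, p = two_proportion_ztest(561, 1288, 577, 1229)
# z is roughly -1.71 and p roughly 0.087, in line with the p = .08726
# quoted above (small differences depend on pooled vs. unpooled SE)
```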
I verified my script worked on my region, but I bet these numbers are kinda off (for canceled tournaments, tournaments that never uploaded data, tournaments that named their awards something strange, etc.). Take the exact specifics with a grain of salt, but the general shape of the numbers should be accurate
I would recommend also looking at the total number of cases where a white screen caused a change in who earned a specific Worlds-qualifying spot (even if the teams with the white screen qualified anyway).
Sometimes a team may lose one Worlds spot due to a white screen but qualify for Worlds another way, but if they hadn’t had that white-screen, a different team would have earned that team’s second spot through Skills double-qualifications. (This could have happened in Florida if a couple of things had gone differently.)
I would also recommend looking at cases where a team lost due to a white screen issue prior to the round that determined whether they made it to Worlds.
The most accurate figure would be the total number of teams who lost a Worlds spot due to someone’s white screen, but if this is hard to determine, the teams who missed a specific spot and the teams who missed altogether due to their own alliance’s white screen could be analyzed separately.
Edited to add:
Another thing to potentially look at is qualifying matches with disconnects that lowered the rankings of the teams with the disconnects, or raised the ranking of other teams, in such a way that it likely affected alliance selections.
This would not be as important as the elimination results (especially if no one is sure who would have picked whom among the top-ranked teams), but it would be something to look at in addition to the other data.
(As @vexvoltage said in his reply to me, none of this data is available in Robot Events, so it would need to be collected through a survey, probably only looking at a subset of teams, or by counting how many teams have said they were affected and using that as a low estimate.)
How would you suggest this data be pulled? DWAB/RECF does not track that at all…
If this data were to be collected, it would be through a survey of some sort, or by collecting data from teams who have already said somewhere that they lost in the elimination rounds at States (or a Signature Event) due to a disconnect.
Then someone could analyze that team’s situation (including what other teams from their region qualified for Worlds), and see whether they or another team missed a Worlds spot due to this. (From what I have heard, when a team won a spot at a Signature Event and won another spot at States, the second spot rolled down to the next-best team in Skills in their region, so this should be factored in as well by whoever crunches the numbers.)
A survey that included all teams who might have been affected would not be feasible, but someone could collect examples where a team is known to have missed Worlds due to a disconnect, and use that as a low estimate of the number of times it happened. (I’m sure it would be drastically lower than the real number, but it’s a start.)
Edited to add:
And like you said, Tournament Manager does not have any way of specifying whether a disconnect occurred (as far as I know), or what caused it. I imagine this could be mentioned in the match anomaly log, but I’m guessing that’s separate from Tournament Manager and isn’t uploaded to Robot Events.
This is correct as the spots come from different silo allocations.
TM does not communicate with the device other than to say which mode to go into (a very dumbed-down version of what it really does).
In Field control - a technical analysis, @jpearman and @Dave_Flowerday did an amazing job going through what everything does.
As far as I know, no such “anomaly log” goes to RE, and I’m not sure TM even has one…
I wonder if we should make a statistics thread and see what people are interested in getting data on.
I love the math and the statistics about upsets. That said, isn’t that beside the point? Aren’t upsets an exciting part of the game? The idea that “inferior” teams should just be happy to play on the field against “better” teams and accept their fate to lose is really insulting. My teams have been on both ends of an upset, both winning one and losing one, and the excitement and cheers from the crowd are AWESOME! Doesn’t it bring a level of joy when you pull off the upset? One of my teams was doing great last year at states but lost to a lower-seeded opponent during eliminations; the reaction of the crowd was really great as they cheered on the alliance that won. We can’t have sour grapes about what could/should have happened. Were my kids a bit down? Sure, but that is how the game goes, and I really think they were happy for the teams that won (a really nice bunch of kids that worked hard for the win). BO3 does not lead to less stress; what leads to more stress is enabling the myth that the kids are somehow being cheated of something by not having a BO3.
All the math about upsets can’t change this. Once again, upsets make the game worth playing, and sometimes you really don’t see the true measure of a team during the qualification rounds. I have seen really good teams with bad records during qualifications due to their schedule (constantly paired with clawbots against a couple of really good robots… no matter how good you are, winning a 2v1 is never easy), and I have seen some teams that maybe didn’t have a great robot but had a high ranking due to having very good alliance partners against teams that had driver, robot, or other issues. BO1 with the expanded number of teams in the eliminations really brings a level of excitement, and yes, sometimes some upsets. Isn’t that what it’s about?
I’m also somewhat confused about what upsets show in regard to this debate. An upset just means an alliance captain ranked lower but the alliance was able to work together to beat the higher-ranked alliance. If anything, this comes from issues with qualifications leading to rankings not being accurate, or just better performance by a lower-seeded team during the match. If we were able to, the best thing to research would be how many teams (whether higher or lower seeded) lost due to something out of their control: a disconnect, white screen, or ref call. Unfortunately, this is near impossible, but it would show whether BO1 caused an increase in this.
Agreed. Plus, how many of the items that are called “out of their control” actually are in their control? I have seen teams complain about losing connection during matches, but when I look at their robot, the radio is buried in metal or has wiring all around it (and this is definitely within their control). As mentioned previously, by guarding the V5 from static and shock, white screens are less likely (I had two teams that played in 6 tournaments and never had a white screen). Ref issues are more about knowing the rules and not trying to push the boundaries rather than the ref. All that said, I just can’t imagine how BO1 has potentially lessened the experience for the kids or “caused” any upsets.
Yeah, I’m sure there are extra measures that can be taken to hopefully reduce disconnections, etc. Something official from VEX telling teams what to do would be the best way to reach many. I’d agree that it hasn’t caused upsets (in the sense of a lower-ranking team beating a higher-ranked one due to better performance), but losing any match, whether as the higher- or lower-ranked alliance, purely due to the items we’ve mentioned would “lessen the experience” for kids.
I agree with you that losing because your V5 gave you the white screen, or you lost connection (these things, or at least variations also happened with the old Cortex) is frustrating and yes does lessen the experience, as the kids didn’t get a chance to show what their robot could do. I just fail to see how a BO3 would change this. Good placement of electronics and wires, ensuring the battery is charged, and knowing the rules will minimize problems. I have been doing robotics for about 10 years (this will be my 5th year of VEX) and unfortunately electronics fail us, this is nothing new to the V5. When I coached FTC we had similar issues (my favorite was the IR beacon on a camera at a Super Regional competition interfering with the IR sensor on the robot causing problems), we can’t shield the kids from all of these problems (though we do try and minimize them) but we can teach them how to react when these issues arise and to feel pride in their work.
I understand the point you make, and I agree that there are ways to reduce the chances of a disconnection or a white screen. However, these will never be 100% successful; even with all these precautions, things will still go wrong. If we go back to a BO3 system (as is used in FTC), it gives the teams another chance. By having one or two more matches that BO1 doesn’t afford, teams have another opportunity to “show what their robot could do” and perform again instead of their season being over.
What better way to react to these issues than to improve and get ready for the very next match, with another try at progressing further in the competition?
This is a good point, although I suspect some teams wouldn’t be as motivated to fix the rare problems if they had a second chance (especially if they know another good team will pick them in the elimination rounds).
However, the teams who are truly the best will make sure to fix any problem that they notice (especially if it costs them a match), both for their sake and their partners’ sakes, unless they postpone it so they can resolve another issue (whether robot-related, scouting-related, marketing-related, volunteering at an event or to help another team, or something else).
Plus these issues could come back to bite you in the Round Robin, where losing one match can drop you out of contention for the Grand Finals (especially if you lose to another team with the same win-loss record as you).
It is always a good idea to stay well within the rules rather than trying to see how close you can get to the line without going over it. If you go too close to the line and you get DQ’d due to a judgement call, that is your responsibility.
And if you lose because a referee didn’t notice a violation of the rules despite trying their best, then this is outside of your control, but it is something you will need to live with. (You may be able to help avoid it by encouraging the teams around you to study and follow the rules, and by not pushing the limits of the rules yourself.)
However, in some cases, a referee will make the wrong call because they have not studied the rules enough, or because they did not pay enough attention during a match.
There were several cases of this at Worlds. I’ve heard referees in multiple divisions failed to notice pinning that lasted way more than 5 seconds (maybe even most of the match), and that autonomous was often not scored correctly and could not be changed. I heard one match was mis-scored as a loss when it was really a tie. (I can’t prove it, but that was what one of the teams who got the loss told me.)
Referees in at least one division took away the autonomous bonus from teams for accidentally shooting a ball out of the field - in at least one case, it was after their autonomous got misaligned by a cap that was not positioned correctly (possibly even in a way that was outside the tolerances listed in the game manual). The teams weren’t even allowed to show the referee the correct rules in the game manual (although the referee may not have been the person who put this rule in place).
These situations are also beyond your control, and there isn’t much you can do about them.
You can volunteer as a referee yourself (starting out as an assistant referee/scorekeeper), but you can’t referee and compete at the same time (and you may not have the skills to make a good referee anyway).
You can encourage others to volunteer as referees (and in other positions, so existing volunteers can serve as additional referees), but you may not be able to find many people who can do this (especially for Worlds, or States and Signature Events if they’re far away).
If you have enough local events to choose from, you could choose to have your team not compete at events you expect to have poor refereeing, and have the whole team volunteer at these events instead (assuming this would help improve the refereeing), or if not, just stay home or attend as spectators.
If you have multiple State Championships within driving distance, you could referee at one while competing at the other. (Many teams would not be able to afford this, though, unless they are competing at the one further away from them.)
Hopefully the refereeing this year will be better now that every Signature Event will have at least one Certified Head Referee. (I expect them to be present at Worlds, States, and some local events, too).
I still expect some local events to have blatant refereeing errors, but maybe not as much as in past years. (By local I mean State/National/Provincial Qualifiers.)
@B-Kinney I agree that sometimes the referee may miss a call or make a bad call. As you said, this is beyond our control, and the best we can do is know the rules and abide by them. Volunteering to ref, or recruiting volunteers who know the rules, is the best we can do. Referees are volunteers, and I don’t think they miss calls on purpose; as you said, we should help strengthen the volunteer staff. With more judges that know the rules (not to mention more eyes on the field; after all, I have been to events with only 1 ref on the field due to not having enough volunteers), things will improve.
I believe that almost all referees try their best based on the knowledge they have, and I trust that most of them do their best to learn everything they can about the rules before they referee. (How much they learn depends on how much time they have, and how well they are trained.) [Edit: The rest, if they exist, might not try their best, but still put in quite a bit of effort.]
I suppose it’s hypothetically possible that someone who is biased could end up being a referee, but it takes a lot of effort to become a referee - if you’re not willing to be unbiased, it’s much less likely that you would be willing to spend many hours volunteering as a referee. (If you did, you’d probably only do it once, considering how many disagreements you’d get into with students if you made a call that looked biased.)
The people who look like they’re guilty may actually be innocent. And even if there were to be indisputable bias in a refereeing decision, it may be unintentional.
Hypothetical types of unintentional bias
For example, maybe someone has a natural tendency to notice wrongs done against their own teams, but just pays an average amount of attention to wrongs done against other teams. I’m sure many students are the same way.
Or they trust their own teams to follow the rules and thus don’t pay as close attention to them - although I assume many referees would pay closer attention to their own teams than to others.
Or they know their own teams as people, and believe based on the evidence that their team’s rules violation was unintentional, whereas another team who did the same thing could have done it intentionally.
This is all hypothetical, though - I have no evidence that any referees have thought this way; these are just ways that someone could hypothetically show unintentional bias (thus explaining a decision that was obviously biased) while still being innocent of intentional bias.
My own knowledge
Over the past three years, I only know of two or three cases where I’ve had reason to suspect a decision was biased based on the events that happened in the match.
(All three of them occurred at local events and involved a team who had disputed a referee decision earlier in the event, and only two of them involved a team from the referee’s school. For one of them, I have since reviewed a video of it and realized that the decision may have been fair, and was likely to be unbiased after all.)
All three of these incidents may very well have been genuine misinterpretations of the rules. (Or if it was more than this, the person who called the violation may have just been paying extra attention to the teams who they’d recently had an argument with, and thus noticed their rules violation without noticing someone else’s.)
There was also a fourth incident where I am suspicious of the decision as a result of my existing suspicion about the referee and teams involved, but the ruling itself was not suspicious, and the decision may very well have been legitimate.
I assume there are many regions where no one would ever have a genuine reason to suspect bias. (People may think there is bias in those regions, but they wouldn’t think that if they knew how dedicated these referees were to being unbiased.)
When we see a referee, we should always assume they are unbiased. The vast majority of referees would never dream of showing bias in any way.
If we have genuine reason to believe someone may be biased, we should bring this up to a trustworthy adult who has the power to change it (or who can talk to someone who does), and I would say not to mention it to anyone else unless they absolutely need to know.
Even if biased referees were to exist, they would still deserve respect for all the time and energy they put in by being referees. [Edit: Not anywhere near as much respect as the unbiased ones, of course, and only really helpful to the teams who have no other referees available, but still some amount of respect.]
Note: I withdrew this post for a couple of hours, edited it to clarify that I don’t have any proof of referee bias existing (except perhaps a few cases where I examined the evidence I had available to me and concluded that the situation was very suspicious in my opinion, plus the fact that I have heard accusations of judge bias and season-DQ bias already), and re-added it.
There are too many comments in this thread since I last posted for me to respond to every point I disagree with, but I want to try to explain a few points.
First of all, and most importantly, there are some upsets that are totally out of the control of competitors. For example, in US Open Finals match 1 in ITZ, both robots on the alliance that ultimately won DCed for several seconds, and no replay was called. This issue seems to have been caused by an undetected field problem, not by a lack of antistatic spray or poor remote batteries. There are countless other examples of referees not disqualifying a team that did break rules (there’s nothing the losing team can do in this case), motors overheating after last-minute PTC checks at Worlds, and so on. These incidents, if they cause a team to be eliminated from a high-stakes tournament, directly go against the mission statement of the RECF. Rather than getting students excited about and inspired to pursue STEM, they frustrate students in a situation directly tied to STEM. In this way, it seems like Bo3 works to the benefit of the RECF’s mission statement at higher levels of tournaments, where students have more on the line.
The second thing is that the concept of the “better” alliance and the associated discussion have been kind of stupid, IMO. “Better” is pretty clear: in a sample size of infinity, the better alliance wins a majority of their matches. That’s just how it’s defined. In this way, an alliance that has freak mechanical problems 2% of the time and is the fastest robot in the world the other 98% is better than an alliance that has freak mechanical problems 1% of the time and is of average speed. A system that increases the odds that the worse alliance wins (and I’m sorry if you find that offensive, but that’s just how words are defined) is objectively unfair. I feel like this argument is pretty rooted in basic statistics and the definitions of English words, so you may find it “snobby”, but this is about as close to objective fact as these theory arguments can get.
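The “wins a majority in the long run” definition also makes the Bo1 vs. Bo3 difference quantifiable: if the better alliance wins any single match with probability p > 0.5, and matches are assumed independent, its chance of taking a best-of-3 is p²(3 − 2p), which is strictly greater than p. A quick sketch:

```python
def bo3_win_prob(p):
    """Chance of winning a best-of-3 given per-match win probability p,
    assuming matches are independent."""
    # win the first two, or split the first two and win the third
    return p * p + 2 * p * (1 - p) * p  # equals p**2 * (3 - 2*p)

# A 60% alliance wins a Bo1 60% of the time, but a Bo3 more often
print(round(bo3_win_prob(0.6), 3))  # 0.648
```

So under this definition, Bo3 mechanically shifts the odds toward the better alliance for any p above one half, and does nothing either way at p = 0.5.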
There’s also an element of strategy that needs to be considered. When the 2019 soon-to-be world champs were slaughtered in world finals 1 and got the benefit of a DQ, they adapted their cap strategy, only narrowly losing finals 2 and winning finals 3. This kind of strategic evolution in the face of adversity is right up the RECF’s alley, and it’s incredibly exciting to be a part of or even to watch. Bo1 removes the opportunity for teams to adapt to one another.
And the last thing I want to say is winning on an anomaly is often really unnerving. At an early season tournament last year, I was on the 1st seed, facing the 3rd seed in finals. The third seed illegally trapped my alliance for almost 30 seconds, leading to a 4 point loss for my alliance, and it was honestly a really awkward moment. I felt like the tournament had been stolen from us. A few months later, I was at a different local tournament as the 2nd seed in the finals against the 1st. I knew we were worse. (Again meaning in a sample size of infinity, we would lose a majority of the matches.) When my alliance illegally entangled both of the opponent’s robots and I was left an open field to score on uncontested for 30 seconds, and the ref didn’t call a DQ, it again felt really weird. The first seed unequivocally deserved to take home the tournament champion trophy and I felt like I had stolen something from them.
Winning a tournament is a lot of fun, and winning your first tournament is a unique experience, but winning because you deserve it feels fundamentally right in a way that winning on a white screen or unfair DQ just doesn’t. When an upset occurs (and I’m not talking about seeds) it causes mixed feelings on both sides, and those feelings don’t advance the ball in any way. The sixteenth seed can win a tournament, but they’re going to feel a lot better about it if they actually deserved that win.
Dear forum readers, I don’t have mod powers and cannot lock this thread.
However, I would like to put a temporary “soft” lock on the topic of BO1 vs BO3 by asking everyone to voluntarily refrain from posting for about a week. I promise there will be something interesting at the end…
If you have a strong opinion and cannot wait, please, try to channel it through the “vote”, “like”, or “mute this topic” buttons, instead of creating new posts.
- I cannot wait for this thread to reopen, so I could keep bashing BO1!
- I am tired of hearing the same arguments over and over again. This thread must die!
- I will wait patiently and I am open minded to a solution that is an improvement over BO1 but is not BO3.
yeah bo3 would be nice.
we kept losing by one point in important elim matches.
we lost by one point at states due to lucky park, so we didn’t qual for worlds.
at worlds in research semi-finals 1, we lost by one point to china
if it was bo3, the chances of us and 929u going to round robin would have increased because we would have the ability to adjust. but then again i don’t know or mind.
bo1 in elims make it so that you can’t make any mistakes, but it’s whatever
i guess one thing good about bo1 in quals is that it stressed us out so much but ended up helping us go undefeated in research division quals