Tipping Point Team Rankings - Updated 23-JAN-2022

Rankings updated at http://vrc-data-analysis.com/

Another week in the books, again with over 3000 matches played, bringing the season total to nearly 28,000 matches by nearly 6000 teams! Autonomous Win Point success rate continues to hover right around 8-9%.

With the Kalahari Signature Event (more on that later) featuring so many of the Top 200 teams, I expected a bit more movement. However, the top of the leaderboard proved very stable; the notable changes were 7316X (Xcalibur) and 60470S (Semicolon) moving into the Top 10 on the strength of their fantastic performances this weekend, with Xcalibur climbing to 8th (up 6) and Semicolon to 10th (up 5).

I’ve indexed the YouTube livestreams for the Kalahari High School event, and @holbrook provided an index for his tournament. If you have matches you’d like to see here, send me a link along with the Event Number and Match Number. There are now over 2000 matches indexed.

Skills scores continue to improve; we now have over 20 teams that have more than 500 points combined driver and programming. Remember that Skills can offer a path to both Regionals/States as well as Worlds and is entirely within a team’s control - no mismatched opponents, no judges. Pure skill!

I attended the Kalahari event this weekend and was very pleased with the passion, skill, ingenuity, and camaraderie on display. I was less pleased with how many adults (and students) chose to go maskless (granted, the event said masks were optional). RECF requires protective eyewear at all competitions, in spite of many who would prefer not to wear it, and masks should be no different during this pandemic. End of gentle rant.

On a more positive note, I heard that the students at the event organized a mild protest to change the way that the elimination rounds were refereed from the qualification rounds. The students involved were organized, polite, and firm about the request to have multiple certified referees observe the elimination round matches (during qualifications, each match had a single referee officiate followed by scoring referees coming in after the conclusion of the match). While the full extent of the student request didn’t come to pass, I believe once the quarterfinals hit, multiple referees were able to officiate.

EDIT: Forgot to release the actual update…that should conclude shortly

EDIT: As of 6:10pm Central US, site is updated with latest results.


Quite right. Thanks again for the sweet sweet data!


Forgot to add that the Finals match featured the highest-scoring match of the season-to-date:

PiBotics + Freedom Gladiators: 254
Xcalibur + Semicolon: 205

Total match points: 459

I have to say, the 3 Finals matches were very entertaining to watch, including the part where the Red-places-last rule was enforced!

EDIT: Just realized several other high-scores occurred:

2011C + 2011A: 229
2145Z + 99999V: 187
Total match points: 416, which I believe is the 2nd-highest total this season (certainly the highest for a semifinal)

Very high-quality tournament. I’m sure 80% of the teams there will be at Worlds.


My own experience:

At every event we’ve run since resuming in-person tournaments last May (including 3 so far this month), we have had a strict mask requirement for everyone in the venue as part of our health & safety policy. The policy is posted on RobotEvents, there are signs at the entrance to the venue, we mention it at the driver’s meeting, our judges and refs are told to enforce the requirement, etc.

Before we ran our first event with this policy, I had mentally prepared myself to spend half the day arguing with people about masks. But actually the response has way exceeded my expectations – there have been a couple of kids who I have to persistently remind to pull their mask up over their face, but overall, pushback has been pretty minimal and the vast majority of people have followed the policy without complaint. I think it says a lot about the robotics community that when asked, we are willing to take the reasonable steps necessary to keep having in-person events in the current environment.

I hope that other EPs, especially of large events, will view mask requirements as what they are, which is both the easiest and most effective thing they can do to improve the safety of their events.


The Sahara Division Finals Match had a score of 256-192. That may be the second highest.


How in God’s green earth is my team 82nd lol, not complaining tho

If you don’t mind me asking, how exactly are all these numbers calculated?

TL;DR - Pulls data from RobotEvents and feeds it through TrueSkill (basically a squad-based version of Elo, developed by Microsoft for matchmaking in Halo and similar games).
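To make the "squad-based Elo" idea concrete, here's an illustrative sketch only. TrueSkill itself models each team's skill as a Gaussian (mu, sigma) with a Bayesian update; this shows the simpler Elo mechanism it generalizes, applied to 2v2 alliances by averaging teammate ratings. The constants (K=32, starting rating 1500) are conventional Elo defaults, not the site's actual parameters.

```python
K = 32  # update step size (conventional Elo default, assumed here)

def expected_win(r_a: float, r_b: float) -> float:
    """Probability alliance A beats alliance B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_match(red, blue, red_won):
    """Return updated per-team ratings after one match (score margin ignored,
    matching the win/loss-only approach described above)."""
    r_red = sum(red) / len(red)    # alliance strength = average of teammates
    r_blue = sum(blue) / len(blue)
    e_red = expected_win(r_red, r_blue)
    s_red = 1.0 if red_won else 0.0
    delta = K * (s_red - e_red)    # same delta applied to each teammate
    return [r + delta for r in red], [b - delta for b in blue]

red, blue = [1500.0, 1500.0], [1500.0, 1500.0]
red, blue = update_match(red, blue, red_won=True)
print(red, blue)  # winners gain exactly what losers give up
```

Between evenly rated alliances, a win moves each teammate up by K/2 = 16 points; upsets over stronger alliances move ratings more, expected wins move them less.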


Treat it like College Football/Basketball rankings: something fun to talk about, that has some basis in reality, and useful for tiering teams. I’d say that the predictive quality of this is in the 65-75% range.


This is actually really cool. Does the score of the game itself matter more (higher or lower quality of win), or does the quality of the opponents you face contribute more to your overall ranking?

because i cant help but kick myself for mechanical issues that cost us games lol

I don’t factor score into TrueSkill. A 100-point loss is the same as a 1-point loss; especially for non-linear scoring games with some zero-sum components (like the typical game Vex and FRC designs) it doesn’t make sense to use score. The one caveat is that I throw out games where one team scores zero points. Originally, I did this as I thought it was a proxy for DQs, only to realize that only applies in Elimination rounds. But I’ve kept that for consistency, and to punish BM a little.

Assuming you go to multiple events, a bad match here-or-there won’t matter much (and evens out against other teams who have bad matches against you). The benefit of modeling skill as a probability distribution is that it inherently understands that nothing is a “sure-thing”.
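The "skill as a probability distribution" point can be sketched numerically. In TrueSkill, with skills modeled as N(mu, sigma²) and per-match performance noise beta, the chance that one side outperforms the other has a closed form. The beta value below is the library default (25/6); the mus and sigmas are made-up examples, not real team ratings.

```python
import math

BETA = 25.0 / 6.0  # TrueSkill's default performance-noise parameter

def win_probability(mu_a, sigma_a, mu_b, sigma_b):
    """P(A outperforms B) for Gaussian skills with performance noise."""
    denom = math.sqrt(sigma_a**2 + sigma_b**2 + 2 * BETA**2)
    return 0.5 * (1 + math.erf((mu_a - mu_b) / (denom * math.sqrt(2))))

# A higher-rated but still-uncertain team is never a "sure thing":
p = win_probability(30.0, 8.0, 25.0, 8.0)
print(round(p, 2))  # roughly a 2-in-3 favorite, not a lock
```

This is why one bad match washes out: the model already expects the favorite to lose a meaningful fraction of the time, so a single upset shifts the distribution only modestly.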


Question. My team went down 292 ranks even though we won a competition. We ranked 8 better this time than the last competition. And scored more than 100 points in four different matches. How is true skill calculated?

It was an amazing grand finals and tourney as a whole; we hadn’t seen competition on the level of Freedom Gladiators and PiBotics all season. Their strategy was incredible, and we couldn’t have been happier to have been with an experienced team like Semicolon. We’re very happy to have done so well.

Having the record for highest scoring match isn’t so bad either.


I posted above how TrueSkill is calculated. The last competition RobotEvents lists for you was on January 15, 2022. Your TrueSkill value (~15.68) hasn’t changed since then. Other teams, a net of 292 of them apparently, that had a TrueSkill below 15.68 now have a TrueSkill above 15.68, dropping you 292 places.


OK, thank you!
(20 Characters)

I met a bunch of teams I see in the forums and high on the rankings last year at the Space City Showcase Tournament in Houston. If the guys from 580X or railgunawesome are reading this, Cohen and the rest of 33848B says “hi!”, lol.


You are correct. I’ve updated the Fun Stats page to have the Top 20 highest scoring matches: