Since Karthik has now posted raw data to the forum, I’ve spent a little time trying to figure out a quick-and-dirty way to calculate the strength of a team. I’ve picked a simplistic measure for this first pass: I took each team, figured out the average final qualifying ranking of all its alliance partners, then compared it to the ranking of the team itself. For example, if the team finished 10th and the average ranking of all its partners was 50, the team would get a score of +40, meaning it finished 40 places higher in the rankings than the average of its alliance partners in qualifying.
“P Rank” is the average final ranking of the team’s alliance partners.
This does not include the strength of the opposition alliance; I haven’t tackled that one yet. I suppose someone who has done matrix mathematics more recently than I have would get that done faster.
As a worked example: (41+91+57+37+73+48+92+86)/8 = 65.625, which rounds to 66 (I’m rounding 1-4 down, 5-9 up). Now we subtract the team’s own ranking from its “P Rank”: 66 - 15 = 51, the team’s score. This might be a little simple, but it should do the job.
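For concreteness, here is a minimal sketch of that calculation in Python. The data layout (a team’s rank plus a list of its partners’ final ranks) is an assumption about how the raw data might be arranged, not Karthik’s actual format.

```python
import math

def strength_score(team_rank, partner_ranks):
    """P Rank (average partner ranking, rounded half-up per the
    post's 1-4 down / 5-9 up rule) minus the team's own ranking."""
    avg = sum(partner_ranks) / len(partner_ranks)
    p_rank = math.floor(avg + 0.5)  # round half up
    return p_rank - team_rank

# The worked example above: partners averaging 65.625 round to 66,
# and 66 - 15 = 51 for a team that finished 15th.
print(strength_score(15, [41, 91, 57, 37, 73, 48, 92, 86]))  # 51
```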
Assuming this is the correct method, the top three teams for the Math Division, 8192A, 1826, and 7709, have scores of 48, 47, and 24.
There are a few problems with this, though: it does not take into account whether an alliance partner’s robot actually worked or contributed anything. But as simple methods go, this is a reasonable way of estimating a team’s strength.
Jesse Knight has applied some non-trivial analysis to FRC rankings and says his results match reality pretty well. If you would like to hear about his methods (I don’t remember, but they might be copied or derived from professional bookmaking approaches), give him a shout over on Chief Delphi. He is JesseK there.
I think it fails to account for the fact that whenever you win a match carrying a partner whose robot doesn’t work, you raise that partner’s rank; and a partner whose rank you raise will often sit high in the list anyway because of SP banked from earlier matches. A better measure would be the distance a team sits from the bottom of the group of teams with the same WP total. That should more accurately show how big a part the other team played in the match.
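A sketch of that adjustment might look like this; the input format (teams listed in ranking order with their WP totals) is an assumption.

```python
def distance_from_bottom(rankings, team):
    """How far a team sits above the lowest-ranked team sharing its
    WP total (0 = bottom of its WP group)."""
    wp = dict(rankings)[team]
    group = [t for t, w in rankings if w == wp]  # still in rank order
    return len(group) - 1 - group.index(team)

# Hypothetical ranking list as (team, WP) pairs, first place to last:
rankings = [("A", 16), ("B", 14), ("C", 14), ("D", 14), ("E", 12)]
print(distance_from_bottom(rankings, "B"))  # 2: two teams below it at 14 WP
print(distance_from_bottom(rankings, "D"))  # 0: bottom of its WP group
```

A partner at the very bottom of its WP group likely got there by being carried, so wins alongside it say little; a partner near the top of its group earned its place.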
Someone posted that in the Science Division, 721 and 575 had the lowest SP of all the teams. 721 went 7-1 and was the 8th-place alliance captain; 575 went 5-3 and was the first pick of the 1st-place alliance captain (and also had one clean sweep and a 170-2 match on its record). If a good team ignores SP, then SP says nothing about its ability to score. This makes statistical analysis trickier, but it does reward good scouting.
Being a stat nerd, I’m intrigued by the idea of a better method for ranking the teams. Out of curiosity, I summed the wins and losses for our alliances and opponents from all our matches, then calculated the overall win percentage (wins/total matches) for our alliance and for our opponents. Using this data, I calculated a power ranking from those percentages with a simple equation.
The equation penalizes you if your opponents were easy or if your alliances were stronger than average; conversely, it rewards you if you had weak alliances and strong opponents.
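The equation itself isn’t quoted above, so the sketch below is only a guess at its shape: the team’s own win percentage, adjusted up for strong opponents and down for strong alliance partners. The function names and the exact weighting are assumptions, not the poster’s actual formula.

```python
def win_pct(wins, losses):
    # Ties are excluded entirely (see the tie-handling note below).
    total = wins + losses
    return wins / total if total else 0.0

def power_ranking(own_w, own_l, ally_w, ally_l, opp_w, opp_l):
    own = win_pct(own_w, own_l)     # the team's own record
    ally = win_pct(ally_w, ally_l)  # summed record of its alliance partners
    opp = win_pct(opp_w, opp_l)     # summed record of its opponents
    # Reward strong opposition, penalize strong alliance partners.
    return own + (opp - 0.5) - (ally - 0.5)
```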
Just for discussion, it would be interesting to lump all ~400 teams together in a power ranking comparison (based on qualifying matches) to see where your team would rank.
Rick Tyler - If you could make all the match results and team records from Dallas available in spreadsheet format, I would crunch through these numbers and publish them on the forum.
In the future, some teams might consider a different ranking system for selecting their alliances, provided the results were published in a spreadsheet that could be downloaded after each day’s competition for this type of analysis.
Brad, this isn’t meant as an attack on your system (I couldn’t think of a better one myself), but a problem I see with your method is that strong teams can potentially be “over-penalized.”
Say a strong team, ranked 1-5 on an “ideal” ranking system that factors everything in, can clean-sweep any opponent ranked below 30. It gets fewer points for sweeping a rank-80 opponent than a rank-30 opponent. So if it happened to draw opponents ranked 80-87 across an eight-round qualifying schedule, out of dumb luck it gets penalized more than if it had drawn opponents ranked 30-37.
Does your system incorporate ties? If you’re looking only at win percentages, then a tie counts the same as a loss.
Perhaps a heavier weight on a team’s own record could offset the situation where a strong team beats a weak team and gets relatively little credit from the power ranking. The system shown uses an equal weight for each of its four factors.
For this system, ties are ignored and aren’t counted in the total matches. So if a team played eight matches and had six wins, one loss, and one tie, its record would be 6-1 with a win percentage of 0.857.
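A quick check of that arithmetic:

```python
wins, losses, ties = 6, 1, 1
win_pct = wins / (wins + losses)  # the tie is dropped from the denominator
print(f"{wins}-{losses}, win% = {win_pct:.3f}")  # prints "6-1, win% = 0.857"
```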
I would hesitate to use current scores in the ranking system, as too many teams pull back after building a large lead to boost their SP. This skews the data and makes it somewhat irrelevant.
Completely agree. An alliance partner that has no clue how the ranking system works (or chooses to ignore it) can in some cases be almost as devastating to your ranking as a loss. No good to use current data…
So that means the only thing to go on for selection is how many wins each team had, plus watching a ton of matches?
It wouldn’t make much sense to have a position on the team completely devoted to watching matches during tournaments, but it’s better than having no idea at all who to pick.
It makes a lot of sense to have several people on the team do just that! That’s the best way to scout for robots.
While I’m not too familiar with how QP played out this year, if people stopped dumping after a minute of play in order to raise their QP, why not just double the stats you took on them in order to extrapolate?
Since QP doesn’t matter in eliminations, what would be the harm in picking such a team?