2010 VRC Worlds Statistics

Since Karthik has now posted raw data to the forum, I’ve spent a little time trying to figure out a quick-and-dirty way to calculate the strength of a team. I’ve picked a simplistic measure for this first pass: for each team, I figured out the average final qualifying ranking of all of its alliance partners, then compared that to the ranking of the team itself. For example, if the team finished 10th and the average ranking of all its partners was 50, the team would get a score of +40, meaning it finished 40 places higher in the rankings than the average of its qualifying alliance partners.

“P Rank” is the average final ranking of the team’s alliance partners.
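For anyone who wants to reproduce this, here is a minimal Python sketch of the calculation; the team numbers and partner lists are invented stand-ins for Karthik’s raw data:

```python
# Sketch of the score calculation, assuming we have each team's final
# qualifying rank and its list of alliance partners from the raw data.
# The team numbers and partner lists here are made up for illustration.
rank = {"254E": 3, "101A": 40, "202B": 60, "303C": 50}
partners = {"254E": ["101A", "202B", "303C"]}

def score(team):
    """Average final rank of the team's partners ("P Rank") minus its own rank."""
    p_rank = sum(rank[p] for p in partners[team]) / len(partners[team])
    return round(p_rank - rank[team])

print(score("254E"))  # P Rank = (40 + 60 + 50) / 3 = 50, so 50 - 3 = 47
```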

The top 20 teams in the Science Division:


Team	Rank	P Rank	Score
254E	3	50	47
402C	10	55	45
8199	7	51	44
918	2	41	39
24B	13	52	39
44	1	38	37
721	9	46	37
1101B	15	51	36
136M	32	67	35
368E	11	45	34
929Z	29	61	32
1114Z	21	52	31
80	30	60	30
8202	8	37	29
1009	17	43	26
677	5	30	25
211C	12	37	25
1	6	30	24
368	20	44	24
8164A	4	26	22

This does not include the strength of the opposition alliance. I haven’t tackled that one yet. I suppose someone who had done some matrix mathematics much more recently than me would get this done faster. :slight_smile:

What matrix multiplication are you seeking to do? Perhaps I could help. :slight_smile:

Nice job; looking at that shows another side of the rankings.

Out of curiosity, and so I can do this for the Math Division, do you work out the score like this?

MAGS (2908C) - 15 (so I’m going to find the score for my own team)

Alliance partners
Team-Rank
1146E-41
2610B-91
2606F-57
8756-37
8219-73
7701-48
2638A-92
1150A-86

Then (41+91+57+37+73+48+92+86)/8, which gives 65.625 (I’m rounding 1-4 down, 5-9 up). Now we take the “P Rank” and subtract the team’s own ranking: 66 - 15, which gives us a score of 51. This might be a little simple, but it should do the job.
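In Python, the same arithmetic looks like this:

```python
# The same calculation for 2908C, using the partner ranks listed above.
partner_ranks = [41, 91, 57, 37, 73, 48, 92, 86]
p_rank = round(sum(partner_ranks) / len(partner_ranks))  # 65.625 rounds to 66
# (note: Python rounds exact .5 ties to even rather than always up,
# but 65.625 rounds to 66 under either rule)
score = p_rank - 15  # 2908C finished 15th
print(p_rank, score)  # 66 51
```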

Assuming this is the correct method, the top three teams in the Math Division, 8192A, 1826, and 7709, have scores of 48, 47, and 24 respectively.

There are a few problems with this, though: it does not take into account whether an alliance partner’s robot worked at all or contributed anything. But as simple methods go, this is the best way of finding the strength of a team.

  • [98 - average opponent ranking]?

Also, I couldn’t help but notice that the teams with the lowest SP in Science were 575 and 721 (with 305 and 308, respectively).

Technology Division top 24:


Team	Ranking	P Rank	Delta
359A	3	54	51
394	1	49	48
8165A	2	49	47
702C	10	57	47
1034B	6	49	43
10V	8	50	42
1031	23	57	34
960A	12	44	32
656C	11	42	31
674	13	44	31
24C	9	39	30
383N	24	54	30
8224	27	56	29
8208	32	61	29
177	4	32	28
1107C	17	45	28
944B	31	57	26
10Q	7	32	25
383	16	41	25
478	22	46	24
10E	14	37	23
1069	5	27	22
1000B	19	41	22
72B	29	51	22

O = average opponent ALLIANCE rank? Or just robot/team?

so: P - R + (98 - O) = S?

Folks,

Jesse Knight has applied some non-trivial analysis to FRC rankings and says his results match reality pretty well. If you would like to hear about his methods (I don’t remember, but they might be copied or derived from professional book-making approaches), give him a shout over on Chief Delphi. He is JesseK there.

Blake

I searched a little, and it didn’t take long to find this: http://www.team2834.com/documents/Scouting_Database.pdf
It’s not exactly what you are talking about, but it gives a reference to Jesse Knight and many other people.

I tried “P - R + (98 - O) = S” and these are my results for the top 6 on Mr. Rick’s Science division list:

R = Team Rank
O = Average Opposing Teams Rank
P = Average Partner Teams Rank
S = Final Score

Team	R	O	P	S
8199	7	52	51	90
254E	3	58	50	87
44	1	53	38	82
402C	10	62	55	81
24B	13	63	52	74
918	2	66	41	71
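For reference, a quick script that reproduces the S column above, assuming (as the formula implies) that 98 is the number of teams in the division, so the (98 - O) term grows when a team faced highly ranked opposition:

```python
# S = P - R + (98 - O), computed over the table above.
# (98 - O) is larger when the average opposing rank O is numerically
# small, i.e. when a team faced strong opposition.
results = {
    # team: (R, O, P)
    "8199": (7, 52, 51),
    "254E": (3, 58, 50),
    "44":   (1, 53, 38),
    "402C": (10, 62, 55),
    "24B":  (13, 63, 52),
    "918":  (2, 66, 41),
}
for team, (r, o, p) in results.items():
    print(team, p - r + (98 - o))
```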

I think it fails to account for the fact that whenever you win a match with a partner whose robot doesn’t work, you raise that partner’s rank, and that partner will often already sit near the front because they have a ton of SP from previous matches. It should instead go by how far a team is from the bottom of the group of teams with the same WP; that would more accurately show how big a part the partner actually played in the match.

How do you know they will have a ton of SP? What if they are an awesome team that clean sweeps every time and their robot doesn’t work for just one match?

Someone posted that in the Science Division, 721 and 575 had the lowest SP of all the teams. 721 went 7-1 and was the 8th place alliance captain, and 575 was 5-3 and the first pick of the 1st place alliance captain (and also had one clean sweep and a 170-2 match in its record). If a good team ignores SP, then SP says nothing about their ability to score. This makes statistical analysis trickier, but does reward good scouting.

Being a stat nerd, I’m intrigued by the idea of a better method for ranking the teams. Out of curiosity, I summed the wins and losses for our alliances and opponents from all our matches. I then calculated the overall win percentage (wins/total matches) for our alliance and for our opponents. Using this data, I calculated a power ranking with this equation:

power ranking = (your team’s win percentage) + (1 - alliance overall win percentage) + (opponent #1 overall win percentage) + (opponent #2 overall win percentage).

This equation penalizes you if you have easy opponents or if your alliances were stronger than average. Obviously, the converse is also true. You are rewarded if you have weak alliances and strong opponents.
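As a sketch (the percentages below are invented, and how the alliance and opponent figures get aggregated across a team’s matches is up to you):

```python
# Brad's power ranking, taking the aggregate win percentages as inputs.
# Higher is better: it rewards winning, weak partners, and strong opponents.
def power_ranking(team_pct, alliance_pct, opp1_pct, opp2_pct):
    return team_pct + (1 - alliance_pct) + opp1_pct + opp2_pct

# A team that won 75% of its matches, whose alliances won 50% overall,
# and whose opponents won 60% and 40% of their matches overall:
print(power_ranking(0.75, 0.50, 0.60, 0.40))  # 2.25
```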

Just for discussion, it would be interesting to lump all ~400 teams together in a power ranking comparison (based on qualifying matches) to see where your team would rank.

Rick Tyler - If you could make all the match results and team records from Dallas available in a spreadsheet format I would crunch through these numbers and publish them on the forum.

In the future, some teams might consider a different ranking system for selecting their alliances, provided the results were published in a spreadsheet that could be downloaded after each day’s competition for this type of analysis.

Brad

Brad, this isn’t meant as an attack on your system (I wouldn’t be able to think of a better one), but a problem I see with your method is that strong teams can potentially be “over-penalized.”

Suppose a strong team, say one ranked 1-5 in terms of overall quality on an “ideal” ranking system that factors everything in, can clean sweep any opponent ranked 30th or worse. It gets less credit for clean sweeping a rank-80 opponent than a rank-30 opponent, so if it happened to draw opponents ranked 80-87 across an eight-match qualifying schedule, out of dumb luck it would be penalized more than if it had drawn opponents ranked 30-37.

Does your system incorporate ties? Because if you’re only looking at win percentages, then a tie counts the same as a loss.

Quick question, has anyone run a fairly simple Calculated Contribution matrix for every division? That would be fairly interesting to see.
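For anyone curious, the basic calculated-contribution pass is only a few lines: build a matrix with one row per alliance per match, put a 1 in the column of each team on that alliance, and least-squares solve against the alliance scores. A sketch with made-up match data:

```python
# Minimal calculated-contribution sketch: solve A @ x = s in the
# least-squares sense, where each row of A marks the two teams on an
# alliance and s holds that alliance's match score.
import numpy as np

teams = ["44", "918", "254E", "402C"]
idx = {t: i for i, t in enumerate(teams)}

# (alliance members, alliance score); these matches are invented.
alliances = [(["44", "918"], 120), (["254E", "402C"], 95),
             (["44", "254E"], 110), (["918", "402C"], 100)]

A = np.zeros((len(alliances), len(teams)))
s = np.zeros(len(alliances))
for row, (members, pts) in enumerate(alliances):
    for t in members:
        A[row, idx[t]] = 1.0
    s[row] = pts

contribution, *_ = np.linalg.lstsq(A, s, rcond=None)
for t in teams:
    print(t, round(contribution[idx[t]], 1))
```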

You could also try to incorporate the amount of points each team won/lost by in each match.

For example, if your team won by 27 points, you would add 0.135 to its overall score; if your team lost by 54 points, you would add -0.27.

Power ranking = (your team’s win percentage) + (1 - alliance overall win percentage) + (opponent #1 overall win percentage) + (opponent #2 overall win percentage) + ((your score - opponent’s score)/200)
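Dividing the margin by 200 keeps the adjustment on the same 0-to-1 scale as the win percentages:

```python
# Margin adjustment: point differential / 200, so a typical win or
# loss margin lands on roughly the same scale as a win percentage.
def margin_bonus(my_score, opp_score):
    return (my_score - opp_score) / 200

print(margin_bonus(100, 73))  # won by 27 -> +0.135
print(margin_bonus(20, 74))   # lost by 54 -> -0.27
```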

Perhaps a heavier weight on a team’s own record could offset the situation where a strong team beats a weak team and gets relatively little credit from the power ranking system. The system shown weights each of the four factors equally.

For this system, ties are ignored and aren’t counted in the total matches. So if a team played eight matches and had six wins, one loss, and one tie, its record would be 6-1 with a win percentage of 0.857.
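That is, the tie simply drops out of the denominator:

```python
# Win percentage with ties ignored: 8 matches, 6 wins, 1 loss, 1 tie.
wins, losses, ties = 6, 1, 1
win_pct = wins / (wins + losses)  # only decisive matches count
print(round(win_pct, 3))  # 0.857
```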

I would hesitate using current scores in the ranking system as too many teams pull back after having a large lead to boost their SP. This skews the data and makes it somewhat irrelevant.

Brad

Completely agree. Alliance partners that have no clue how the ranking system works (or choose to ignore it) can be almost as devastating to your ranking as a loss, in some cases. It’s no good to use current data…

So that means the only things to go by for alliance selection are how many wins each team had and just watching a ton of matches?

It wouldn’t make much sense to have a position on the team completely devoted to watching matches during tournaments, but it’s better than having no idea at all who to pick.

It makes a lot of sense to have several people on the team do just that! That’s the best way to scout for robots.

While I’m not too familiar with how QP played out this year, if people stopped dumping after a minute of play in order to raise their QP, why not just double the stats you took on them in order to extrapolate?

Since QP doesn’t matter in eliminations, what would be the harm in picking such a team?