This idea kind of came up in the scouting thread, and I decided to flesh my thoughts out a bit more. Bear with me through this wall of text; I hope some of you read the whole thing, but for those who don’t, I will include a TL;DR at the end.
- 25% of the ranking comes from organization coaches: each coach submits a personal 1–25 ranking of the competing teams. The ballots are averaged into a single coach ranking, and the first-place team in that ranking gets 25 points, second place gets 24 points, and so on.
- 25% of the ranking comes from each team’s captain, following the same system as the coach votes.
- 25% of the ranking comes from your final seed at the end of the qualifying matches (top 25), scored the same way as the other polls but determined entirely by how you do in your matches: the top seed gets 25 points, the second seed gets 24 points, and so on down the list.
- 25% of the ranking comes from results in tournament play, with the captain of an alliance getting the most points and the second pick getting the least. The winning alliance’s teams get point values of 25, 24, and 23, and the runner-up alliance gets 22, 21, and 20.
All four categories are worth the same maximum point value, so each makes up an equal quarter of the final score. To keep this metric fair for teams that haven’t attended as many tournaments, your cumulative score is the average of your scores across all of your competitions. There is no rule against voting for your own team in the polling, but a larger pool of voters will hopefully wash out most of that bias if it exists. The polling itself is also only 50% of the overall score, so hopefully some of the popularity contest aspect of a power ranking is removed and the teams actually have to prove themselves. A rough sketch of how this could be computed is below.
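To make the mechanics concrete, here is a minimal sketch of the scoring in Python. Every function and parameter name here is mine, not part of the proposal, and it assumes a 25-team field and 0 elimination points for teams knocked out before the finals (the proposal only defines values for the two finalist alliances).

```python
# Minimal sketch of the proposed power ranking score.
# Assumptions (mine, not part of the proposal): a 25-team field, and
# 0 elimination points for teams eliminated before the finals.

def points_from_place(place, field_size=25):
    """Convert a place to points: 1st -> 25, 2nd -> 24, ..., 25th -> 1."""
    return max(field_size + 1 - place, 0)

def event_score(coach_place, captain_place, qual_place, elim_points):
    """Score for one tournament: four categories, each worth up to 25 points.

    coach_place / captain_place -- team's place in the averaged polls
    qual_place  -- seed at the end of the qualification matches
    elim_points -- 25/24/23 for the winning alliance (captain, first pick,
                   second pick) and 22/21/20 for the runner-up alliance
    """
    return (points_from_place(coach_place)
            + points_from_place(captain_place)
            + points_from_place(qual_place)
            + elim_points)

def season_score(event_scores):
    """Average across competitions so teams with fewer events aren't penalized."""
    return sum(event_scores) / len(event_scores)
```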
Q1. What purpose does this serve?
A1. Right now it serves no purpose other than analysis for the entertainment of teams, but I can see a localized system like this helping teams from outside the area who come to compete get an idea of who the strongest local teams are, as well as being relevant for larger tournaments like US Nationals and potentially even Worlds.
Q2. Don’t teams’ robots change a lot over the year?
A2. While this is true, success in a tournament is often just as much a result of the team members as of the robot. The best teams in the world are not good simply as a product of their robot, but because they have skilled drivers and programmers. Hopefully this concern is also minimized by the polling, since the polls are a measure of the teams themselves rather than any one robot.
Q3. Can I have an example of how this works?
A3. Sure! Team 1234A is really excited for their first tournament, but not too many people have heard of them. They end up being ranked 24th by the coaches and 20th by the team votes. This information isn’t compiled until after the event, of course, so they aren’t worried at all. At the end of the qualifying matches, they are seeded 8th after going 4-2. The third-ranked alliance selects them as their first pick. The tournament plays out, and they end up finishing as the runner-up. So what score do they get? Let’s look at the point values for each portion of the ranking scale. 24th place is pretty low, so they only get 2 points for that. 20th place gives them an additional 6 points. Finishing 8th is pretty good, so they get 18 points for that. Because they are the first pick on the runner-up alliance, they end up getting 21 points. Adding these all up, we get a total of 47 points.
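If you’d rather check that arithmetic in code, here is the same calculation run through the hypothetical helpers sketched earlier:

```python
# Team 1234A: 24th in the coach poll, 20th in the captain poll,
# 8th seed after qualifying, first pick on the runner-up alliance.
score = event_score(coach_place=24, captain_place=20,
                    qual_place=8, elim_points=21)
print(score)  # 2 + 6 + 18 + 21 = 47
```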
Q4. But Yoder, I don’t like this idea!
A4. The only way this can improve is with your criticism, fellow roboteers.
Q5. I’m just here for the summary, give it to me please!
A5. TL;DR: Power rankings can be a fun way to analyze how good teams are on a local scale, and a simple way to share scouting information with others at large tournaments.