@Steve_Hassenplug, I see that you have the live data for middle school up on your blog. Are you going to do the same for elementary school?
Can you share this sheet with me so I can see the entire thing? The blog is hard for me to read. email@example.com
Here’s a link to the spreadsheet. It’s only MS teams.
They are sorted by: (Average Match Score * 2 + World Skills Score) / 4
And ES Data
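The sort formula above can be sketched in Python; the function name and inputs are hypothetical, but the arithmetic matches the stated formula:

```python
def weighted_score(avg_match_score, world_skills_score):
    """Weighted ranking used to sort the teams:
    (Average Match Score * 2 + World Skills Score) / 4."""
    return (avg_match_score * 2 + world_skills_score) / 4

# Example with made-up numbers:
print(weighted_score(100.0, 120.0))  # (200 + 120) / 4 = 80.0
```

Because match score is doubled before dividing by 4, matches count for twice the weight of skills in this ranking.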
Thank you so much!
This is great! Thank you!
My pages have been updated to include four divisions. But they really mean nothing, until the team list is final.
Hey guys, we’re an elementary team (11016B) and new to worlds. Can you explain the relevance of the divisions? Will we be grouped like this in determining pairings? Any insight on pairings is helpful. Thanks!
A couple of additions to the MS page: '16 rankings (where the team ranked after qualifying at Worlds '16) and Awards, which shows Excellence, Teamwork, Skills, Design and STEM awards won during the season. These are not guaranteed to be accurate.
It would be interesting to see some type of strength of schedule based on each team’s actual schedule at Worlds. Would that be difficult?
Sorry I didn’t see this sooner. I have a bunch of alliance data, and I added 4 columns to my above spreadsheet.
AN: How teams finished qualifying in '17
AO: Alliance Rank (ranking of AP)
AP: Average Alliance Score; the average score of this team's alliance partners (10 teams) in their matches with OTHER teams (9 matches each, not including the match with this team)
AQ: Team Average Score minus Average Alliance Score (did the team do better than all of its partners?)
AO is my calculated “Strength of schedule”.
The two most interesting items:
The team with the best alliance pairings finished first.
The team with the 308th-best pairings finished 307th (of 308).
I also added a sheet (“Results”) that shows: [match#]Team#(rank)=Score
(sorry, no new data for ES)
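The AP column described above can be sketched like this; the match data and team names are made up for illustration, and the real spreadsheet presumably works from the full Worlds schedule:

```python
# Hypothetical input: each match is (match_no, (team_a, team_b), score).
matches = [
    (1, ("1A", "2B"), 90),
    (2, ("1A", "3C"), 110),
    (3, ("2B", "3C"), 100),
]

def alliance_average(team, matches):
    """Column AP sketch: average score of this team's alliance
    partners in their matches with OTHER teams, i.e. excluding
    every match that `team` itself played in."""
    # Everyone this team was paired with during qualifying.
    partners = {a if b == team else b
                for _, (a, b), _ in matches if team in (a, b)}
    # Partner scores from matches that do not include this team.
    scores = [s for _, (a, b), s in matches
              if team not in (a, b) and (a in partners or b in partners)]
    return sum(scores) / len(scores) if scores else 0.0
```

Ranking all teams by this value gives the Alliance Rank (column AO), the calculated "strength of schedule".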
Thank you Steve! I intended to review this sooner, but I’m just getting to it now.
It is interesting how this relates to the “Change to Worlds Format” thread. There are underlying assumptions about how teams perform in matches based on how they qualified. Does this data support that assumption?
The other thing I’m interested in is how well the data collected prior to Worlds could predict the actual performance. For instance, how did the actual strength of schedule compare to the predicted strength of schedule? The following spreadsheet has the predicted strength of schedule, based on Weighted Score in your spreadsheet (column U). Some teams had no data prior to Worlds; giving them a weighted score of zero would skew the results, so I removed them from the calculations.
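The zero-score filtering step can be sketched like this; the rows and the mean-absolute-difference comparison are illustrative, not the spreadsheet's actual numbers:

```python
# Hypothetical rows: (team, predicted_sos, actual_sos). A predicted
# value of 0 means the team had no pre-Worlds data, so it is dropped
# rather than allowed to skew the comparison.
rows = [
    ("11016B", 85.0, 90.0),
    ("1234A", 0.0, 75.0),   # no pre-Worlds data -> removed
    ("5678C", 70.0, 65.0),
]

filtered = [(t, p, a) for t, p, a in rows if p != 0.0]

# One simple way to compare predicted vs. actual strength of
# schedule: mean absolute difference over the remaining teams.
mad = sum(abs(p - a) for _, p, a in filtered) / len(filtered)
```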
I’m still working on the How Qualified column. For teams that double qualified I am using the ‘non-judged’ criteria.
I just added the Skills scores from Worlds.
If you’re removing teams with a zero weighted score, you may also want to remove teams with a zero skills or match score.