A Second Spreadsheet from Recent Competition Data

Hello!

You might remember (or not remember, that’s cool too) a post from a few months back where I interviewed all of the teams at a competition about their robots’ capabilities and compiled the results into a spreadsheet. With states coming very soon, I thought it would be a good idea to do the same again, so teams can get a sense of which robots have been winning competitions and which scoring elements seem most worth going after. The spreadsheet can be accessed here.

In the very possible case that my team’s terminology for some elements of the game is different from yours:

Low Flags means the ability to toggle the scoring color of the three bottom flags.

Middle Flags means the ability to toggle the scoring color of the three middle flags.

Top Flags means the ability to toggle the scoring color of the three top flags. I’ve made this category separate from the Middle Flags category since enough teams that I interviewed mentioned that their ball shooter was not powerful enough to either reach or toggle the top flags.

Ground Caps means the ability to flip any of the caps on the field to the other color. Thankfully, the Tilt-Only Caps and Ground Caps categories (from my last spreadsheet) could be merged into one, since no robots fell into the Tilt-Only Caps category this time.

Descore means the ability to remove caps from any of the posts.

Low Posts means the ability to place a cap on any one of the four 23-inch tall posts.

High Posts means the ability to place a cap on any one of the two 34-inch tall posts.

Platforms means being able to park on the alliance platform or the center platform at the end of a match without assistance from another robot.

Finally, Ideal Autonomous Points is the number of points scored by a team’s best autonomous routine, assuming that their alliance partner doesn’t score any points that the team would have otherwise scored themselves.
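
In case it’s useful to anyone building something similar, here’s a minimal sketch (in Python) of how these columns could be represented and tallied. The team numbers and capability values below are made up purely for illustration; the real data is in the linked spreadsheet.

```python
from dataclasses import dataclass, fields

# One row of the survey: a team and whether its robot has each capability.
# (These field names mirror the spreadsheet columns described above.)
@dataclass
class RobotSurvey:
    team: str
    low_flags: bool
    middle_flags: bool
    top_flags: bool
    ground_caps: bool
    descore: bool
    low_posts: bool
    high_posts: bool
    platforms: bool
    ideal_autonomous_points: int

# Made-up example rows, purely for illustration.
rows = [
    RobotSurvey("1234A", True, True, False, True, True, True, False, True, 7),
    RobotSurvey("5678B", True, True, True, True, False, False, False, True, 4),
    RobotSurvey("9012C", True, False, False, True, True, True, True, False, 5),
]

# How many robots report each capability?
for f in fields(RobotSurvey):
    if f.type is bool:
        count = sum(getattr(r, f.name) for r in rows)
        print(f"{f.name}: {count}/{len(rows)} robots")

# Average best-case autonomous score across the surveyed teams.
average = sum(r.ideal_autonomous_points for r in rows) / len(rows)
print(f"average ideal autonomous points: {average:.1f}")
```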

Survey data was taken at the South Florida Melee at McCarthy VRC Competition on February 17, 2019. That was the last competition in Florida before states.

Thanks for reading through this! :slight_smile:

Wow @vanLidth, very cool!

That’s a really cool spreadsheet; our team plans on doing something like that at our next competition. If I were to make it and it didn’t take up too much time, I would add ratings for how well each robot could do each of those functions (e.g., a 1-10 scale), and also a column for V5 (because V5 is pretty OP). I’m currently working on the spreadsheet. After seeing the competition, is there anything else you would want to add if you did this again?

A V5 column is a good idea, I regret not having one of those!

I understand the idea of a 1-10 scale for how well a robot can interact with a certain scoring element. In my opinion, though, it’s difficult to judge reliability, speed, accuracy, and vulnerability objectively and accurately in a single number without taking up too much of the time of whomever you’re interviewing, which I want to avoid. That said, if you could make a spreadsheet where a scale like that works, I would love to see it happen!
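
To illustrate what I mean, here’s a rough sketch of what collapsing sub-ratings into one score would look like. All of the names, weights, and numbers here are hypothetical, and the equal weighting is an arbitrary choice on my part:

```python
def overall_rating(reliability: float, speed: float,
                   accuracy: float, vulnerability: float) -> float:
    """Collapse four 1-10 sub-ratings into a single 1-10 score.

    Vulnerability is inverted so that a more vulnerable robot scores
    lower. The equal weighting is an arbitrary choice -- exactly the
    kind of subjective call that makes one number hard to trust.
    """
    return (reliability + speed + accuracy + (11 - vulnerability)) / 4

# Two very different robots can end up with the same overall score:
print(overall_rating(reliability=9, speed=3, accuracy=9, vulnerability=8))  # 6.0
print(overall_rating(reliability=6, speed=6, accuracy=6, vulnerability=5))  # 6.0
```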

Are you the guy people thought was the design judge???
Awesome data, and thank you for sharing!!

Yes, that’s me, I’m glad someone remembers me!

(Sorry for the repost, I misunderstood the context of what you posted)