Clean Sweep Awards

Do any of you know what awards/challenges they will have this year? The Promote would be a good one to do this year, don’t ya think? :smiley:

We have already been told that the IFI folks are working on it.

I would like to see something like the robot skills, except maybe the sections of your scoring zone marked by the tape are worth different points. Or maybe a challenge (driver or autonomous) where you try to bring as many objects as possible to a bin or something.

I think it would be nice to see different challenges for both the skills and autonomous challenges for next year. This would be a great way to get more people from a team involved because it would be a whole other competition. Plus it would allow for each team to build more robots.

How about a pseudo time-attack driver skills challenge: have half the field set up (with the balls positioned in the middle), and score X medium balls and Y small balls. Fastest time wins. Alternatively, play it with the fastest time scoring Z points (the medium balls’ points might need to be tweaked a bit so that scoring small balls would still be an option).

For the Robot Skills Challenge, what’s wrong with “score the most points possible in one minute?”

It would probably be too easy for the top teams (ending with ties), since there aren’t opponents to block/throw balls back at you. All you need is a good pick-up/drop-off system and no strategy.

Based on watching the Programming Skills Challenges at regionals and at Worlds this year, I suspect you are underestimating the challenge involved. My prediction is that (if the PSC is organized like it was this year) no team will score every ball.

But in Elevation you needed to go around the field and score precisely; in this one it’s just about collecting and dumping quickly enough.

  1. Dumping doesn’t score 3 points on the small balls.
  2. The objects you are collecting this year roll.

So I still see precise scoring for 10 small balls that can resist being gathered easily.

Perhaps this season an autonomous challenge could start with the balls scattered around your side of the field?

But then they’d need to be identically scattered each time to make it fair; if one team’s random scatter is easier to play than another’s, it trivializes the award.

No - If you have a large enough group of balls, any variations fall out “in the wash”.
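A quick Monte Carlo sketch illustrates the "in the wash" point. This is a toy model, not any real scoring system: each ball's contribution is just a random draw, standing in for how easy or hard its scattered position happens to be. The relative spread of the total score shrinks roughly as 1/sqrt(n) as the number of balls grows.

```python
import random
import statistics

def attempt_score(num_balls, rng):
    # Toy assumption: each ball's position-dependent contribution is
    # an independent uniform draw in [0, 1].
    return sum(rng.random() for _ in range(num_balls))

def relative_spread(num_balls, trials=2000, seed=42):
    """Std-dev of total score divided by its mean, over many scatters."""
    rng = random.Random(seed)
    scores = [attempt_score(num_balls, rng) for _ in range(trials)]
    return statistics.stdev(scores) / statistics.mean(scores)

# With more balls, luck-of-the-scatter matters proportionally less.
few = relative_spread(5)    # ~0.26 in this toy model
many = relative_spread(50)  # ~0.08 in this toy model
```

Under these invented numbers, a 50-ball scatter has roughly a third of the relative luck of a 5-ball scatter, which is the sense in which a large enough group of balls washes variation out.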

No - If you let each team perform more than one attempt, then you can use a ranking metric that is insensitive to any single attempt’s initial conditions.
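The multiple-attempts point can be sketched the same way. Again purely a toy: each attempt's score is an invented "true skill" number plus scatter luck, and the ranking metric is best-of-N. The more-skilled team tops the ranking more often when each team gets several attempts than when everything rides on one scatter.

```python
import random

def run_attempt(skill, rng):
    # Toy model: score = underlying team skill plus random scatter luck.
    return skill + rng.gauss(0, 10)

def best_of(n, skill, rng):
    """Ranking metric: keep only the best of n attempts."""
    return max(run_attempt(skill, rng) for _ in range(n))

def ranking_matches_skill(attempts_per_team, trials=2000, seed=7):
    """How often the higher-skill team actually ranks higher."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        strong = best_of(attempts_per_team, 60, rng)
        weak = best_of(attempts_per_team, 50, rng)
        if strong > weak:
            correct += 1
    return correct / trials

single = ranking_matches_skill(1)  # one attempt, one scatter of luck
best3 = ranking_matches_skill(3)   # best-of-3 dilutes any single scatter
```

With these made-up skill and noise numbers, best-of-3 gets the ranking right noticeably more often than a single attempt, which is all "insensitive to any single attempt's initial conditions" needs to mean.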

No - Who said random variation had to be totally excluded from the results; and how would you ever remove all “random” variation or biases from the initial conditions of every team’s every attempt?

No - Differing set-ups would be completely fair in many important senses. “Fair” does not mean identical-in-every-regard. Random scatterings would be Fair in the sense of not favoring any team because of who they are. There would be no advance correlation between the teams and the pattern(s) of balls they were challenged to pick up.

You have to remember that “fair” is not something that can be determined in isolation from other information.

Was it fair that rookie teams had to compete against experienced teams? Was it fair that teams with less money had to compete against teams with more? Was it fair that self-taught teams had to compete against those who had been through extensive classroom training?

Yes, all of these were “fair”, because they were fair in the sense of presenting every team with an identical ranking metric and set of rules; and then determining which team combined skill and luck, and all of the tools and advantages they have at their disposal, to get the best score against that metric.

The rules/metric last year ignored all factors other than bottom line performance on the field. Using who-can-obtain-the-highest-score as the only metric, the competition was fair, even though many things (including luck) other than the cleverness of the students affected the end results. This was the choice the contest designers made.

Knowing it is impossible to create a contest in which every team’s opportunities are identical in every regard, IFI chose to use highest score as their only metric, and with the exception of the limitations in the rules, they ignored all other factors that could affect who attained that high score.

The same could apply to randomly scattered balls this year, if IFI wants to use that approach.

Blake

I agree with Blake, but in a slightly different way.

If the balls were scattered randomly, it would add a new aspect to the skills challenge: instead of being based largely on the robustness and accuracy of sensors and readings, it would be based on the robot’s ability to adapt to the balls through its programming.
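As a rough illustration of adapting to a scatter (hypothetical coordinates, not any real VEX sensor API), an autonomous routine could plan greedily from whatever ball positions it senses instead of following a fixed waypoint list, so the same code copes with any scatter it is handed:

```python
import math

def nearest_ball_route(robot_xy, balls):
    """Greedy plan: repeatedly drive to the closest remaining ball.

    A toy planner for a randomly scattered field. `balls` is a list of
    (x, y) positions the robot has somehow sensed; real code would feed
    this from whatever sensing the team has available.
    """
    remaining = list(balls)
    route = []
    x, y = robot_xy
    while remaining:
        # Pick the closest ball still on the field.
        nxt = min(remaining, key=lambda b: math.hypot(b[0] - x, b[1] - y))
        remaining.remove(nxt)
        route.append(nxt)
        x, y = nxt  # drive there, then plan again from the new spot
    return route
```

The point is that nothing in the routine assumes a particular layout; the programming challenge shifts from replaying a memorized path to reacting to what is actually there.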

I think this would prevent some of the disappointment teams get when they have an autonomous routine that works repeatably at their practice field, but their robot doesn’t perform well at competition due to misalignment, a slight change in the field surface, different lighting, or a myriad of other things.

Hmm…didn’t think of it that way

PS: While randomly scattering the balls around the field is a “fair” method in many senses, arranging them in a pattern (other than the set-up pattern for the regular 2-on-2 game) is another “fair” method. If you have a preference for either of these, or a completely different alternative, let the folks at IFI know. Every opinion can help them either in this season or in a future season.

Blake

Borrowing from the military policy of “train like you fight” I would prefer to see the PSC set up like the competition game. Autonomous already takes so long that having to create completely separate routines for the game and for the PSC is too much work. For Elevation, I know team 575 spent well over 150 person-hours in programming and testing of their Elevation routines. I imagine that Clean Sweep won’t be any easier, even if the starting position is the same for both the game and PSC.

And if the balls are scattered, you would need big, expensive sensors, and most teams can’t afford those (or they’d rather use the money for another starter kit to form a new team).