Award Confusion: Why would a team with a 0 in autonomous win the Think (Programming) award?

Confused by our State Championship awards: Why would a team with a 0 in programming win the Think (for Autonomous Programming) award, which qualified them for Worlds? Please understand that this is not a hit against the team that won, but an important question about consistency and fairness in judged awards. Remember also that the winning team, with its 0 in programming, displaced teams whose programs actually worked and who will now not go to Worlds. I want to make it clear that the team I am associated with was not adversely affected by any of the judged awards.

The language from Vex is:

“The Think Award is presented to a team that has developed and effectively used quality programming as part of their strategy to solve the game challenge.”

At our State Championship I was thoroughly confused and dismayed by the unusual choices for judged awards, all of which were World-qualifying awards. Almost all of the judged awards went to teams with very low-scoring robots that also performed poorly in Teamwork. Worse, they seemed to defy logic in how they were chosen. I know there is leeway in these awards and points aren’t everything, but some consistency and validity is important, especially when an award leads to a World qualification and, by every quantitative measure, many more deserving teams will now have to sit out.

The first award that confused me was the Think Award (Programming) going to a team with a 0 in Autonomous across 3 attempts. For reference, the team finished 27th of 29 teams in both Skills and Teamwork.

This raises the simple question: no matter how well documented and versioned the code is, if the robot doesn’t score any points (in other words, it is not “effective” in solving the problem), why is the team receiving an award, let alone an award that qualifies for Worlds?

Were they the only team that ran autonomous programming skills? The two are separate; lots of teams run driver skills but not autonomous skills.

There are rubrics/descriptions for all the awards; we get our judges to follow all of them and don’t have a problem. You will also find “unstacking,” so the team that wins Excellence may or may not win Design/STEM, and winners of Design may or may not win other judged awards.

I’d suggest talking to the EP of the event; most of us are forthcoming about how the judged awards went down.

Obviously I wouldn’t ask if they were the only team :wink: To answer your question, 25 teams competed in autonomous, and 19 teams scored points ranging from 1 to over 100. I understand the “unstacking,” and it definitely makes sense to spread out the awards. My question is about choosing among teams that do not yet have awards and pairing them with awards that at least make sense. Typically these types of awards are cool to pass out to teams but don’t qualify anyone for anything; in this case they do.

If you could answer my question from your experience, it would be helpful. Given the rubric for the Think Award, is it possible for a team that failed to score any points to win over 19 teams that did score points? This competition is over and nothing about it can change, but I am curious whether this happens frequently because of how the rubric is constructed or how “unstacking” is performed, and what might be done to improve the problem.

The autonomous portion of the skills challenge is not the only area of the game challenge. Many teams build quality programming into their driver control. We have often given the Think Award at our events, and more than once it has had nothing to do with the autonomous portion of the challenge.

If you look at the Think Award criteria it does not even mention autonomous skills.

Thanks, Quarkmine. However (and I hate to pick on this particular case), the programming didn’t do much for their driving scores either, since they finished near the bottom of Skills and 27th out of 29 teams in Teamwork. And since autonomous is at least half of the programming challenge, ignoring it would be highly unfair to the many teams that bothered to solve it.

Maybe I should phrase my question differently. Are judged awards like these more afterthoughts than judgments grounded in actual point values, or is it the rubric itself that creates anomalies allowing a team that failed to solve the programming problem at all to be named the best over a majority of teams that did? Hopefully I am not alone in finding it inappropriate to give a programming award to a team that received a 0 in autonomous and performed poorly in driving as well. I am curious how this happens, how often it happens, and how we can correct it in the future.

As an aside, awards like these should be evaluated more stringently when they qualify teams for Worlds than at everyday events.

Out of curiosity, I looked up the tournaments over the weekend. I think Florida Robot Coach was referring to team #6855C at this past Sunday’s state championship. I have to agree with him; it’s quite confusing to see a team with only 28 points in driving and 0 points in autonomous winning a Think Award, which also qualifies them for Worlds. (Also, can someone explain to me why Florida has so many tickets to Worlds?)

Either the team has a really good STEM project, or the judges really hate some team in the skills rankings that might otherwise have taken that spot.

Thanks, saltshaker. I was beginning to think I was the only one in the world who found this award to be an anomaly. I assume slots are handed out in proportion to the number of teams in each state, but the awards should at least be given out with fairness and consistency. It really confuses the kids when they are handed an award for programming and they know they got a 0 in it.

While you are at it, take a look at some of the other awards. Excellence (“VEX’s highest award”) went to another team from the same school, one that finished 22nd out of 29 in Skills and 23rd out of 29 in Teamwork. Neither team made it to the Finals; luckily they knew to stick around, or that would have been awkward. Anyway, I thought Excellence was supposed to go to a team that scored high in every category. That’s at least what they say. I understand it usually goes to a team that did not win Teamwork so the awards don’t “stack,” but how far down should you go, and were all the other teams really THAT bad? I am hoping someone will shed some light on how these things work so my faith in some sort of reliable award metric is restored.

The tournament he is referring to was on Monday (elementary IQ); I was a ref. The only thing I can figure, judging by their sister team winning Excellence, is that the team that won the Think Award had very well documented software in their notebooks, flowcharts, etc., regardless of its effectiveness. That’s just a guess. The team that won Think for middle school on Sunday had a good score, tied with us for first in auton. I have not seen the rubric, so I don’t know. You would have to email the RECF rep, Matt, to ask, and see if you can get the criteria to shoot for next year.

We had this issue at Worlds in high school last year: one of the divisions gave the Think Award to a team that was in the bottom few percent after quals and only had their auton work 1 out of 10 times. Some people with very good, functional code were not happy.

As for the number of Worlds spots, this is probably due to growth in the region; a large number of teams were added in south FL, so they get bonus spots. Just my guess.

Judging is by far the most time-consuming part of an event when done properly, so I wouldn’t say these awards are afterthoughts. There is a lot more to judging than performance on the field; that is what the performance awards represent. My suggestion would be to check out the judging material available on the REC site and talk to your regional rep. Volunteer as a judge at an event where you don’t have teams competing. Volunteer for judging at Worlds.

Thanks, TriDragon. I really appreciate you shedding some light on my question. That this same thing happens at Worlds is disheartening. I believe you are correct that they probably had nice documentation (both the team at Worlds and this particular team at State), and I know you had no input into the decision, so don’t think I am directing this at you. Reffing was expertly performed at the event, as were many other aspects of it. I am just trying to point out that a number of World-qualifying judged awards did not seem to reflect what the awards were being given for, and I hope someone can work to correct this next time. If it is a system-wide problem because of poorly constructed rubrics, then it is even more important to bring it to light.

Giving the award “regardless of its effectiveness” is contrary to VEX’s own language about what the award is for:

“The Think Award is presented to a team that has developed and effectively used quality programming as part of their strategy to solve the game challenge.”

Furthermore, if it doesn’t work at all (as in a 0), it doesn’t matter how pretty they made their solution look. In business, life, or school, you cannot get the highest marks if you can’t get any of it to work. A 0 is a 0 however you dress it up, and it is highly inappropriate to present an award to a team in that situation. Why bother solving the problem at all if you can take any old code, dress it up well, and win the award?

Quarkmine, I understand that judging is hard. But judging is by definition biased, intentionally or unintentionally, so you do have to rely on performance and actual metrics as well. Surely you can at least agree with me that a team that gets a 0 should not win the award for being the best at the very thing they got a 0 in. This doesn’t seem like splitting hairs to me :wink:

Yes, documentation! When it comes to judged awards, documentation apparently outweighs actual functionality! Yes, that’s a fact! We need more PowerPoint warriors! What matters is a kick-*** presentation that can get investors’ attention and ultimately their money! Actual work? We can always hire some cheap engineers from third-world countries to do the actual work. Way to go, STEM education!

Sorry about the sarcasm. I would rather hope this was the result of some reckless coin toss to distribute Florida’s ample supply of Worlds tickets than believe it was an actual judged decision based on some fine, state-of-the-art design documentation. By the way, the team’s world skills standing shows a highest programming score of 23 and a highest driving score of 35. That tells me their programming is most likely a two-attempt drive-straight bonus-tray release plus some pushing. I wonder how fancy their documentation of that could have been! I would really like my team to learn that documentation skill.

@Florida_Robot_Coach No, I didn’t think you were pointing it at me; no problem. I was a ref, not a judge, and a general volunteer/coach the day before (Sunday). But there were a lot of judges back there working like crazy.
@saltshaker I know the result wasn’t random; I just don’t know what it was, and I was guessing at code documentation. BTW, these issues are RECF issues, not VEX, and the new (this year) head of RECF was at the FL State tournament. I will make sure he gets a look at this thread; he is very approachable and really wants the best for RECF. He knows there are things to fix and wants input for decisions. I think the two areas that always bring up issues and need clarification in the community are ref preparedness/rules and judging consistency.

BTW: I reviewed the rubric, and it has no entry for how well the Think Award candidate actually scored in programming, only for how the programming is documented, so maybe that is the place for improvement. If the judges are heads-down in rubrics and notebooks, then that is their focus. It might be a good change to suggest adding a column for how the robot placed in autonomous skills or the like.

@TriDragon That is exactly the kind of answer I am looking for. If the actual score is not in the rubric, then that is a really big problem that needs to be solved. I picked this particular award because it should have been an obvious flag that something is wrong and needs to change. A team that gets a 0 in programming should never win the programming award. This is not only unfair to those who get passed over, it also sends the wrong message to those who win the award. No one benefits in the long run from this kind of problem.

I know the judges worked hard, and kudos go out to them. VEX is fun, a great vehicle for helping kids learn STEM and robotics, and the people who make it work are giving their time to make it happen. I appreciate your attention to this issue; my goal is simply to make sure that the judging results are consistent with what the award is actually about.

I would love to be in the loop on whether this is being solved for next year, especially if it is being used as a qualifier award for either State or Worlds.

@TriDragon You are right. I think the problem is in the design of the rubric. I feel many of the current awards reward little project managers instead of future engineers. IMHO, a major factor in this country’s STEM crisis is that most parents want their kids to be managers instead of engineers. By putting so much emphasis on planning and documentation, we give the kids the impression that it’s all about presentation, while the actual work of solving the problem is not as important. I hope RECF will consider revising the rubrics to reward more engineers than technical writers.

If VEX is only going to have one award for programming, then I am fine with it being a mix of documentation, understanding, and actual score. We do have to keep in mind that even if the judges were all industry professionals, teachers, or professors unaffiliated with any team, judging is inherently biased, so at a minimum it should be balanced by a quantitative measure. In the case of VEX competitions, most judges (while working hard to do their best) have only a minimal understanding of the subject they are judging. This will change as robotics matures as a subject, of course, but it is the case at the moment, and it has to be taken into account when weighing these awards, especially at the State and World levels. Obviously a team that has no idea how their program works, or whose program scores 0 or very low, should not be a contender for this award.

@Florida_Robot_Coach Remember, it is RECF that runs the tournaments and rules and such. VEX provides hardware and equipment.

@saltshaker Just conjecture, but I would imagine RECF’s answer may be that the Think Award rewards the software process, while Skills and Matches reward the software’s function (the description could be improved). It would be a little easier to see if things were set up as they were a few years back, when Driver and Auton skills were separated, so the best functional software got the Programming Skills award. However, they combined the two for good reason: to encourage more software development (I assume).

This is exactly what I was getting at. In the current system, “performance” and “judged” awards are two different things. I am not making a judgment call on that, just stating the rules that are being used worldwide.

This is the exact wording of the Think Award:

The Think Award is presented to a team that has developed and effectively used quality programming as part of their strategy to solve the game challenge.

Key criteria:
• All programming is cleanly written, well documented, and easy to understand
• Team has explained a clear programming strategy to solve the game challenge
• Team demonstrates their programming management process, including version history
• Students understand and explain how they worked together to develop their robot programming

This does not mention performance in autonomous or on the field at all. As a software engineer, I consider all of these things very important, and it is AWESOME that the REC is giving an award for this aspect of STEM.
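For what it’s worth, here is a minimal sketch of the kind of thing those bullets seem to describe: a short autonomous routine with a version history in the header, named constants, and small, commented steps. The helper functions (drive_forward_mm, turn_degrees, release_tray) and constants are hypothetical stand-ins, not the VEX API; the point is only to illustrate “cleanly written, well documented, and easy to understand” code, not any particular team’s program.

```python
# autonomous.py -- illustrative Skills autonomous routine (hypothetical example)
#
# Version history (mirrored in the engineering notebook):
#   v1  drive straight and release the bonus tray
#   v2  added a turn toward the scoring zone and a second push
#   v3  tuned drive distances after practice-field measurements

# Named constants so the strategy reads directly from the code.
TRAY_APPROACH_MM = 600   # distance from the start tile to the bonus tray
SCORING_TURN_DEG = 90    # turn toward the scoring zone after the release
PUSH_DISTANCE_MM = 400   # distance to push game pieces into the zone


def drive_forward_mm(distance_mm):
    """Hypothetical helper: drive straight for distance_mm millimeters."""
    print(f"driving forward {distance_mm} mm")


def turn_degrees(angle_deg):
    """Hypothetical helper: turn in place by angle_deg degrees."""
    print(f"turning {angle_deg} degrees")


def release_tray():
    """Hypothetical helper: actuate the arm to release the bonus tray."""
    print("releasing bonus tray")


def autonomous():
    """Skills strategy: release the bonus tray, then push pieces to the zone."""
    drive_forward_mm(TRAY_APPROACH_MM)  # step 1: approach the tray
    release_tray()                      # step 2: score the tray bonus
    turn_degrees(SCORING_TURN_DEG)      # step 3: face the scoring zone
    drive_forward_mm(PUSH_DISTANCE_MM)  # step 4: push game pieces in


if __name__ == "__main__":
    autonomous()
```

Documentation like this is easy for a judge to follow in an interview, which may be exactly what the criteria are trying to reward, separate from what the robot scored on the day.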

On a bit of a tangent, VEX IQ involves heavy iteration; just because a program didn’t work at one competition doesn’t really mean very much. I think it is great that the judges get a chance to look at the whole season via the engineering notebook and interviews. It is impossible to even know how many robots a team has built over the course of a season without judging. It is also possible that problems happened at the event to any given team: robots get dropped, gyros fail, etc. Personally, I don’t think adding performance at a single event to the judging criteria is a black-and-white subject.

@Quarkmine, first of all, the main definition states “effectively used quality programming as part of their strategy to solve the game challenge.” If it doesn’t work and scores a 0, it is not effectively used. Further, however the details pan out, if you are rewarding a team that gets a 0 in autonomous for having the best programming, that is clearly a problem and it needs to be solved going forward. Surely you can see this. I am amazed this discussion is still going on.

As for past contests, should they take their high score from other contests as well? No. This is about this contest. Since you are a software engineer, I am sure you also care whether what they are diagramming actually solves the problem, because that is the whole purpose of doing the diagrams; otherwise it is just a useless bureaucratic exercise. And since it would take a lot of effort for a judge to walk through each team’s entire diagramming process and distinguish spit and polish from actually solving the problem at hand, judging based purely on form is completely invalid. I highly doubt you would ever submit a program as finished, no matter how nicely diagrammed and documented, if it didn’t solve the specified problem.

Since the Think Award is the only award specifically addressing programming, and it is advertised as such when presented, it should go to a team that does well on all three aspects: documentation, understanding, and implementation (which in this case can be measured by their score). If a team falls far short on any one of these, or fails one completely, that team should not be the one receiving the award. I think you would agree that if a team obviously has no idea how their program works, they should not get the award even with the highest score; the inverse is equally true.

@TriDragon Sorry about my incorrect reference to VEX instead of RECF; I will make the distinction in the future. You seem like someone who knows the people who can make a difference here. I would love to see this award better reflect what it is about in the future, by clarifying the rubric with a row for whether the program is actually “effective” in solving the problem (which would be the score). I appreciate you forwarding this matter and look forward to its resolution.

@Florida_Robot_Coach No apology necessary. I am just trying to save you a step: if you email VEX about these types of issues, they will tell you to go to RECF. It’s great to discuss it on the forum, but if you want to get the ball rolling on understanding their official stance, it would be good to email your local RECF rep when you have concerns. It’s good for them to hear from the community, and they welcome it. Then report back to the forum if you learn anything new so we all know.

Your RECF rep for FL is Matt Conroy; his email is: [email protected]