I didn’t even think to look for updates, as the guide typically isn’t out until the EP summit. Having a June 15 revision and an Aug 15 revision I think gives them more flexibility.
So there are some changes to the guide to judging…
June 15, 2023
- The Judges Award can optionally be given to two different teams at an event.
- The criteria for the Excellence Award have been modified.
- Excellence Award requires an Autonomous Coding Skills Challenge score.
- Teams must be in the top 30% of teams at the event in all three of the Qualification Rankings, Skills Challenge Rankings, and Autonomous Coding Skills Rankings.
- There is no minimum number of teams to be made eligible for the Excellence Award based on performance metrics.
- An Excellence Award Criteria Checklist is added.
- The Innovate Award description has been changed to be based on a specific aspect within a Team’s Engineering Notebook.
- The criteria for the Innovate Award have been modified.
- The Engineering Notebook Rubric has been modified to include additional criteria.
- Slight changes to other award descriptions and criteria verbiage.
- Changes made to the Engineering Notebook for ease of use and understanding, and to better align with the Award criteria.
Maybe TM will be able to filter the dropdown to only the teams that meet the 30-30-30% criteria. I know they had to put out guidance last year at some point because teams were given the Excellence Award that didn’t meet the percentage criteria in the judge’s guide. At this point, that can only be caught after the fact; if the dropdown in TM filtered out ineligible teams, it could be caught before awards are given out. Perhaps TM could also generate a “judge’s report” with the qualification, driver skills, and autonomous coding rankings listed separately for the judges to see.
To get a ranking at this point I would have to run three reports at the end of the tournament, which I don’t want to do!
I’m assuming it’s the top 30% of all teams at the event, not just the teams participating in each ranking. For example, if 24 teams run qualification matches, 16 run driver skills, and 8 run autonomous coding skills, the top 30% should be the top 24 × 0.3 = 7.2, rounded down to 7 teams on each list.
I don’t really mind the extra criteria, but I don’t think it will result in different teams winning awards. Teams that tune themselves to the rubric do well, and those that don’t just don’t win awards. The finer details of the rubric don’t really matter as long as it enforces the engineering design process.