VEX Robotics and the Design Award (Opinion)

The VEX Robotics Design Award is an immersive and enlightening award for students who wish not only to build competitive robots but also to practice professionalism by following the steps an engineer would, in the simulation that VEX offers. Although I have won Texas States three years in a row, I truly believe that one of those years I did not deserve the award. During In The Zone, I gave the judges an engineering notebook that I had not touched in over half a year, and yet I got the Design Award. Since I double-qualified that year, I did not feel as bad when I went to the World Championship. When I brought this up with two of my closest mentors, even they were shocked that I received the award. Keep in mind, this was two years ago.

In my opinion, the entire structure of the Design Award should be re-evaluated. Sure, students can make a design notebook and turn it in, but once everything is judged, most students just get their notebooks back with no rubric. Absolutely no advice and no criticism to make notebook-devoted students better. This, in my opinion, is why many students in VEX are turning away from working toward the Design Award and focusing only on the tournament bracket itself. The only way for me to get my design notebook evaluated is by talking to other mentors, but many students do not have such a connection outside of their team, whether because of a social or distance barrier. If we wish to put an emphasis on the design notebook, why is there no structure in VRC tournaments that provides critique to those who wish to practice being a professional engineer? Should we do something about this? If so, what could we do? Suggestions?


The Design Award is based on a rubric for the notebook and presentation, right? I think a simple way to improve the award system would be to return these rubrics along with the engineering notebooks, perhaps with any comments or notes the judges have taken.

However, one problem I see with my solution is that judging tends not to be very consistent or specific from competition to competition, so it may be hard to gauge how best to improve based on the returned rubrics. Thoughts?


While the idea behind this is good, the issue comes into play when an overly involved mentor or upset student disagrees with the judges' critique and starts arguing with them about it. The judge can say something along the lines of "This segment needed a more detailed description," the mentor/student can say "It was plenty detailed," and it could escalate out of control. It's the same reason the judges use a private room to judge for awards; even with the judging template, scoring will still vary a little from judge to judge.

As for solutions to giving critique: like you said, you could always talk to mentors, but some people don't have access to that. You could also talk to other teams, but then you end up showing your design when you may not be comfortable with that. One option that I believe could work well is for experienced notebook writers to display their previous seasons' notebooks. You would no longer feel uncomfortable about showing designs, since they are irrelevant to the new season, and they could give good direction to newer teams. I've seen a few notebook tutorials on YouTube, and they helped me quite a bit when I first started on my notebooks, but there were very few good ones. If you make a tutorial explaining your notebook and the judging template, it would really help. A YouTube video gives people a place to ask questions, or you can post it on the Discord or forum to help.
That's my two cents. I'm pretty sure the RECF isn't going to change their position on this, and for pretty good reason in my opinion, but there are definitely ways that critique can still be shared, just maybe not from judges at a competition.


There were a few good discussions on this earlier in the year, with interesting information from both the competitor side and the EP side as to the pros and cons of feedback options.

I found the link to one that would be good to peruse (it includes a note from Dan about this topic): Judges’ Comments to Teams
I can’t seem to find the one referred to inside that post (I remember it also having good insights from different points of view).


This sounds like something that would happen with any normal call by a judge or referee. It is not uncommon, and although many make this argument, I do not believe it is a good and solid reason not to do it. If it were, then even in the tournament bracket there should be no referee-to-student/mentor discussion when a decision is made; would you be happy not being able to voice your opinion there?
Anything could escalate out of control, but the rules and, cough cough, Code of Conduct, cough cough (I don't like how the Code of Conduct is written), should protect students, mentors, and judges from getting out of hand, or else there will be consequences.


The RECF and VEX themselves said, "One should always keep an open mind." If they are not being open-minded about this, wouldn't that be ironic given what they said?


This is very true. Luckily, at my school, the parents of some of the VEX students are actually in a field of engineering, so when we have a scrimmage, they judge our notebooks and give us back our rubrics. They then discuss individually with each team what was great about the notebook and what should be done to improve.


This seems to be the most realistic answer, with minimal effort technically speaking.

I see, but complaining really shouldn't be too much of a problem. Why does time seem to be such a common reason not to do something at a competition, especially for something as important as the design notebook? If VEX really wants to put an emphasis on the design notebook and produce better, more well-suited engineers for the modern world, then time should be irrelevant, especially since it's only a couple of minutes to help improve hundreds of thousands of future engineers. In terms of quality, I am confused as to why this needs to be so heavily considered. If we look at music or dancing competitions, judges give a score and then provide their reasoning for it. If we make it common practice for judges to give advice and responses, it will only be a matter of time before everyone gets used to the changes, and everyone can improve themselves one engineer at a time.
(By the way, these are just my opinions. I am fully interested in hearing inputs and responses even if you don’t agree with me)


I have found another way to improve how you write notebooks: look at your old one(s) and glance through the entire thing. Don't pay too much attention to the exact information, just how you put it all together. Why did you present information that way? Do you still like having it presented that way? How does it compare to your newest notebook (if you have another)? If so, why? If not, why, and how should it be changed to be better? Etc.

This is what I did after my last competition, in addition to researching other notebooks and browsing the forums, and I honestly feel like my notebook is so much better. Also, you could ask sister teams to trade notebooks (if you trust them) and trade ideas about formatting and such.


This was discussed at the EP summit a couple weeks ago, and although it was my first summit, it was obvious that it had been discussed previously as well. RECF definitely knows that this is something that many competitors and mentors want. The reasons mentioned by ZackJo are exactly correct for why this is not being done. Additionally, at competitions it is very common that only the top 10 or so notebooks are actually fully scored because it is a time consuming process. The consensus was that scoring every notebook would make the judges’ job much more difficult and tedious. They are volunteers and finding judges can be difficult as it is.

At the summit we discussed many options for how EPs might someday be able to provide some feedback without taxing the judges too much or creating "discussions" with over-enthusiastic parents or mentors who disagree with the score. Some ideas were feedback cards that list a few positives as well as a few suggestions, or maybe an email to teams after the competition that would allow them to see the rubric scores electronically.

In any case, there are many EPs (myself being one) who would like to be able to provide some design award feedback, and maybe someday it will happen.


I agree. I was very confused and shocked at my first ever competition when we didn’t receive any feedback on our notebook. It is also a little disappointing because the design notebook is used as criteria for multiple awards.

I know that when notebooks are judged, there is only so much time to get through 20-60 notebooks. Because of this, even if feedback were given, it could probably only go to teams under consideration for awards, since the judges spend the majority of their time looking through those specific 5-20%. However, I do know that judges write notes while looking through the notebooks, because at one competition the judges accidentally left their note sheet in my notebook. So even if they don't have time for everybody, the judges could hand back the notes they wrote; that way, many more teams would be able to get feedback and improve, and those teams could in turn help the teams who didn't receive feedback.


What are your thoughts on providing the rubrics to all teams, with the top 10 getting extra notes to further improve their notebooks? The top 10 are likely so close that a number alone is hard to turn into advice.

This sounds like a good idea as well.

I guess we have similar ideas too about this :wink:


In my opinion, that would be an amazing idea. During registration for an event, the mentor could include an email address (isn't that already done?) when they register their team(s). Then rubrics could be scored online, or pictures of the rubrics could be taken and later emailed to the mentor to give to the teams.

I actually mostly agree (although I don't think time can or should be irrelevant, since it is a major resource). One of our biggest complaints about judging as an independent team over the last many years has been the lack of specific feedback on the Design and Excellence awards (the team has won several, so at least we have general feedback saying they're going in the right direction). We don't have real access to the individuals who do the judging, so we don't get the nitty-gritty that would really help take "pretty good" and make it "totally great". I'm hoping to volunteer for some judging this year so that I can get a better feel for this area and get some direct exposure to EP issues such as time constraints.


You have a good point there. I guess I misspoke and probably should have been a bit more realistic with my words. I know that time is limited, but I believe that just a couple of minutes to help teams get better is definitely worth it.

This makes a lot of sense, and I totally understand your point with this one. I agree.


I completely agree that the Design Award system needs re-evaluating.

The VRC spirit has always been about learning from past mistakes and improving. This is integral to both the main tournament and the skills challenge as teams can plainly see what they could do better. If a team’s shooter is unreliable, they know that they need to fix that to get better. And teams can look at other successful teams in order to get a sense of what makes those teams great. I know I’ve watched the 365X skills run video about 20 times so far :smile:

Not providing any feedback on the Design Award seems to fly in the face of this idea. Teams often have no clue how to improve their notebooks. How can the RECF encourage learning and growth when they don’t tell teams how they need to grow?

As for the point about the team arguing a judge comment, that seems like a simple rule like “All judge decisions are considered final” could solve. If anything, increased transparency would help teams be more trusting of the judging process. I’ve heard other teams say things like “the judges were so biased” or whatever, and helping teams understand what exactly they need to improve on would drastically help teams trust the judges more and try to improve their skills instead of just saying “this is rigged, let’s do something else instead.”

It does seem like we need some sort of "compromise," however, as judges are already pretty overworked as is. Perhaps the rubric could be returned (either partially or fully scored, considering that some rubrics might not be fully completed if a team is obviously failing), and teams in the top 5 or 10 could get some associated comments on why they didn't make the cut. Perhaps a standardized "Judge Comment Sheet" would help with this.

I certainly don’t pretend to have all the answers, as I’m sure nobody on this forum does, but I agree some sort of shift is needed. Perhaps a trial period of returning judging documents and seeing how things go would be in order. But the important thing is, some sort of shift is needed to keep kids interested in pursuing a good quality engineering process.


Re-Evaluating a previous post

This sounds like something that could easily be solved through automation: instead of papers, everything could be scored, noted, and documented online, and a team could see responses through RobotEvents. This would also save trees, ink, and printer time, and since everything would be typed (instead of gruesomely handwritten over and over), it would remove many of the concerns stated.
Edit: The fact that this is automated also means that it's one-way. If someone disagrees or there is something seriously wrong, inputs and opinions would likely have to be discussed with the RECF.


You’d have to train the judges to use this software though, which might be difficult considering it’s hard enough to find judges already…

But I certainly agree that in the long run, making the judge rubric digital like the scoring system would be a great idea. Imagine being able to see the rubric on the RobotEvents website (after logging in of course, it might not be smart to make this public) after the event ends, just like you can see the scores and match results online.


I definitely see this as important, but I think that much of this can be solved through trial and error to create a system that is simple and easy-to-use for the judges.


This is just my opinion on getting everything done digitally, but having something like this added into TM (sorry DWAB, you guys already have enough on your plate) might make a really good judging-tool solution. The TM app could distribute team numbers to all your judges; the judges score all of the teams they have been assigned in the system, and then the system surfaces the top 10 in each category (it is all numeric based on the judging guide, IMO) for the judges to re-evaluate after lunch (or whatever time period) to make their final decision. Judges could add notes if they see fit to give feedback or other information, and the scores, notes, and feedback would then be sent to the coach's account. The coach who has access to the account can decide whether to share the information or to teach the teams based on the feedback, so it is not a direct feedback system; instead, it goes through the coaches.
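The "score everything numerically, then shortlist the top 10 for re-evaluation" step described above could be sketched roughly like this. To be clear, this is a hypothetical illustration, not anything TM or RobotEvents actually provides; the `shortlist` function, the tuple format, and the team numbers are all made up for the example.

```python
# Hypothetical sketch of a digital judging workflow: each judge submits a
# numeric rubric total per team, the system averages across judges, and a
# shortlist of top teams is surfaced for final re-evaluation.
from collections import defaultdict
from statistics import mean

def shortlist(scores, top_n=10):
    """scores: list of (team, judge, points) tuples.
    Returns the top_n teams ranked by average score across judges."""
    by_team = defaultdict(list)
    for team, judge, points in scores:
        by_team[team].append(points)
    # Rank teams by their mean rubric score, highest first.
    ranked = sorted(by_team, key=lambda t: mean(by_team[t]), reverse=True)
    return ranked[:top_n]

# Example data (invented team numbers and scores):
scores = [
    ("254A", "judge1", 42), ("254A", "judge2", 45),
    ("1234B", "judge1", 38), ("1234B", "judge2", 30),
    ("99X", "judge1", 47), ("99X", "judge2", 44),
]
print(shortlist(scores, top_n=2))  # → ['99X', '254A']
```

Averaging across judges also softens the judge-to-judge inconsistency mentioned earlier in the thread, since no single judge's scoring style decides the shortlist on its own.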