Should Skills count more than Design?

As of right now, Design qualifies before skills. The problem is that Design is subjective, and isn’t nearly as good a metric of how good a team is as skills. So I want to know what everyone else thinks: should design or skills qualify first?


I believe the reason for this is the RECF wants to reward certain behaviors and skills. Robotics is fun, but the real point is to have students get exposed to STEM, understand the engineering design process and possibly pursue a STEM career. While driving a robot exquisitely and getting a high skills score is great, having an excellent engineering notebook will reap way more benefits in a future STEM career. So RECF is encouraging students to maintain an EN by handsomely rewarding that. There have been many posts that point out that RECF is not really trying to fund the best robot. They are trying to recruit and prepare future STEM leaders.


Being able to document and present designs is something that you will benefit from in both high school and college. I’ll grant you maybe not in a $20 bound notebook, but the skills are something that I use every day.

@gear_geeks has laid the reward part out pretty clearly. RECF rewards things they want you to do, i.e. the Engineering Notebook. A few years ago, skills were meant as a path to allow robots and roboteers to show that their robots were worthy. No more complaining about team 23T messing you up, you have the whole field, use it.

Then skills got to be out of control, and lots of people thought teams were gaming the skills system. Now we are back to the last few slots at a state/regional event will be filled by skills.

There was also “Height wars” in the EN contest, where teams would fill binders and binders full of “information.” It wasn’t unusual for a judge to dig through 6" of notebook. This has been tamed down by the 5 point bonus for a bound book. Most of them are 100 pages, some at 175. So that is also becoming more manageable.

I see this as a positive evolution of the competition: steps to keep the focus on what will be important later on, on lasting skills that we can start building and enhancing.

Like Bo3 for every elimination, skills to worlds will also become a fond memory.


Well said - from my perspective, there is a good balance of judged awards rewarding good engineering, performance on a competitive field with alliance partners and opponents, and team performance in the skills challenge. To be clear, in many cases, especially at large events, team performance in the skills challenge will yield an opportunity to advance to the next level that season.

GDC also has the option of rewarding coding more, by increasing the time for the autonomous portion of matches. We are seeing fewer robots sitting stationary during the autonomous period. We are also seeing the benefit of APs in rankings, making teams spend more time on coding.

All in all, good growth on all fronts.

That said, expect more unexpected tweaks and twists (NO NOT WATER GAME) in the future…


I think that it is a very good metric of how good a team is. I think it should be a higher award based on its difficulty.

If you are a high school student, volunteer at an IQ tournament as a judge. Being part of those discussions will help you see how it works. It’s pretty cool. (And there is always that team that hands in one page filled out…)


When you buy a product that was engineered, how often do you look at the stats and product testing results over the customer reviews?

My question was meant to prompt thinking.

My Answer to My Question

You should do both, but the majority of people would look at the customer reviews, which are subjective.


There are two big problems with the Design award.

The first problem is that once a team or two establish a firm lead in the quality of their notebooks at the beginning of the season, they virtually lock their hold on the Design, and sometimes Excellence, award at every local competition they attend. There is no easy way for other teams to break that hold, because getting notebook feedback on what they need to improve is difficult, and redoing a notebook from scratch is not a good option.

The second problem is that it is really a documentation award: you could have a robot that consistently ranks last but still have a high-scoring notebook and an impressive interview. Some teams have figured out this loophole, and the Design award gets a really bad reputation among the rest of the competitors.

As a judge, I absolutely hate when I have no choice but to give the Design award to a team with a poorly executed copy of the meta, yet an almost perfect notebook score by the rubric.

So, when an outside group of judges comes in and assumes that a good design process must yield an effective robot, I secretly enjoy seeing them pass on teams with high notebook scores and reward the team that used the design process as intended.

The Excellence award makes so much more sense in that it fuses all meaningful metrics to reward a truly excellent team.

When I talk to students on the top performing teams in my region, I get the impression that three quarters of them consider the Design award a joke and choose not to waste their time on an EN, because one of those other teams is virtually guaranteed to get the Design award, which is very sad.

If RECF or its sponsors want to reach those students, or have a more meaningful documentation-only award, then they may want to change it to a merit badge system: once you win the local-level Design award, you step back and let other teams win it, and then compete for the next level of state/regional award, with Signature and Worlds being the topmost levels. This way more teams would actually consider investing their time into documentation.

So, to answer the OP: unless RECF changes the Design award criteria to consider at least some on-field performance, it would be better to have skills qualify first.


I think something like this has been posted in the past about teams with good Engineering Notebooks stepping aside for the rest of the season so that other teams can win a design award. However, the flip side to that is that there are powerhouse teams in many regions that win every tournament they enter. How would they like it if they were told they couldn’t be in future tournaments because they had already won one? That’s the same thing when you tell a team with a great EN to sit it out.

Also, just like in tournament play, ENs change and evolve over the season. Just because a team wins a design award in the early season doesn’t mean they will maintain that momentum throughout the season. A team that starts slow can pick up steam and end up with a great EN. I think it’s just pure laziness that teams cop out. It’s an excuse not to take the time to learn how to document their engineering design process. But, just like every team with an EN won’t win a design award, every team with a robot won’t win a tournament. I mean, the same can be true for the competition itself. There are teams that may never win a tournament, should they just give up and not build a robot? Of course not.

Is this about winning or learning? If it is only about winning tournaments then about 75% of the teams that field robots should stop building now.


I totally agree with you that it is mostly about learning.

Vex Robotics Competition is a process maintained by RECF to facilitate and encourage student education in a number of key areas, and following a proper Engineering Design Process is one of them.

Awards are just the tools to get students motivated, and by adjusting their criteria you could influence where students will direct their energy and how they will prioritize their time.

Like any engineered process, the VRC structure itself is subject to the Design Process. To make it more effective at achieving its stated goals, you need to constantly seek feedback and adjust the settings accordingly.

Students on the top performing teams, who have effective robots and must be following at least an informal iterative design process with good testing and incorporation of feedback, are the target audience for teaching more formal methods and how to properly document them.

Right now, when I gather unofficial feedback from students, I see that the VRC process is less than optimal at reaching the core audience for the “follow the formal Design Process” message.

I asked this very question about making Design award more like a badge system at the open Q&A session with Dan and Paul at the beginning of ITZ Worlds, and they acknowledged that they are aware of the issue and considering making changes.

While they are considering those changes, I have to find a way to (avoid) explaining to my daughter that the hundreds of hours she put into her engineering notebook have next to zero chance of being recognized with an award, because teams A and B that come to every competition already have better entries at the beginning of their ENs that will always give them a few more points.

I hope she gets some learning benefits from working on the EN now, before, like most of the seniors, she recognizes it as a futile exercise not worth spending time on unless you are one of those two top teams.

I am sure there is a better way to do this thing.


Can you explain this a little more? I’m not sure I’m understanding where you are seeing shortfalls. Also, if you had some suggestions on how to improve it, I’d be interested in hearing that, too. Thanks!


At least in our region (California), skills is still very important; in fact, I would argue that skills is probably the most consistent way to get invited to States. In our region, at most VRC tournaments Design is not a qualifying award. The only awards that qualify are Excellence and Tournament Champions. Often, the teams that win these awards win them several times, creating many double qualifications. Those spots are then filled from the World Skills Ranking of non-qualified teams. Therefore, having a good skills score is very important, as it is almost a guaranteed way to get an invite to State if you put in the time and the effort.

The nice thing about skills is that for the most part, you have control over how you do. In a match anything can happen. For skills, you get 3 tries of each type, and if your robot functions well and you put in the time on driver practice and programming you will get at least a decent score. I like that good skills teams do usually get the invites that they deserve.


Well, good / high performing teams follow an iterative design process informally anyway. It is almost a must given the level of competitiveness we see. That’s a good thing, and it happens independently of the Design award.

I did a pit interview with one of the strongest performing teams a few years back and asked why they didn’t submit a notebook. Their answer was literally that they created a decision matrix comparing the pros and cons of investing time into an EN, given that two other teams, A and B, already had top notebooks. Their conclusion was not to invest any time into a handwritten notebook.

One of the stated goals of the Design award, as I understand it, is to introduce those students, who are more likely to work in STEM, to more formal ways of maintaining good project documentation.

However, the structure of this award makes the very students who would benefit most from learning those formal methods likely to decide not to invest time into an EN, unless they are one of the handful of teams that got ahead early.

The reputation of the EN among students in my area is that it is either for suckers who don’t know how to build well, or for the few teams that already know how to build very well and want to reach all the way to the top by competing for Excellence.

The rest of the teams at the top or in the middle, who could benefit the most from learning and following a formal iterative design process, don’t see it as a worthwhile way to spend their time.

I would assume that restructuring the award to reach more teams would be a possible solution to overcome this issue.

As everybody likes to say, this is not about rewarding the best robot or notebook every time; it is about motivating more students to learn skills that will benefit them in the future.


I have seen teams win the design award and actually finish dead last, because their “design” didn’t work. My complaint was: how can they have such a great design if it doesn’t even work? Just because they went over everything in highlighter, they won the award. I recognize it’s more than that, but I still think the design award criteria should include a successful design. If your unique “design” doesn’t even function, how can it be so great?


So one more conundrum with Engineering Notebooks is that they are not out in the open. By this I mean that one can observe a robot on the field, or in reveals, or (usually) in the pits, and see how a team built a component, etc. One knows who the good teams are (they’re typically the ones winning matches and showing up in the Finals), and can emulate (be inspired by or, less charitably, “copy”) the robot.

With the Engineering Notebooks, this is not really possible. There are a few templates posted online (and they are wonderful), but I know my team struggles with what content to put into it, how to organize it, and the time/motivation to work on it. There’s not really any visibility into what teams that regularly win Design Awards at local competitions do in this regard.

I know opening up the Design winner’s EN to the rest of the teams at the competition is fraught - I’m sure lots of teams would say “Mine’s just as good” and complain to the EP about unfair judging, etc. And this is a reasonable rationale for NOT opening winners’ ENs to the rest of the competitors.

So I’m not really sure what the best approach is for improvement in this area.


This all makes sense. I used the analogy above about teams not winning and deciding not to build a robot, compared it to people deciding not to do an EN, and wondered what the difference was. And there is a difference. Even if a team does not win at a tournament, they still know where they stand (finalist, quarterfinals, ranked 20th, etc.). For ENs, there is one winner and no one else knows where they stand. So I get what you’re saying. Now I do wonder if there is a way to let the teams know where they stand (not exactly, but relatively). For example: top 25%, next 25%, bottom 50%. I do agree that some sort of feedback would be nice, but I struggle with what form it should come in.
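As a rough illustration of that banding idea, here is a hypothetical sketch (the function name and score data are made up, not from any real judging software) of how judges’ notebook scores could be mapped to coarse relative bands without publishing exact scores or rankings:

```python
# Hypothetical sketch: bucket judged notebook scores into relative bands
# (top 25%, next 25%, bottom 50%) so every team learns roughly where it
# stands without revealing exact scores or a full ranking.

def feedback_bands(scores):
    """Map each team number to a coarse band based on its notebook score."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    bands = {}
    for i, team in enumerate(ranked):
        if i < n * 0.25:
            bands[team] = "top 25%"
        elif i < n * 0.50:
            bands[team] = "next 25%"
        else:
            bands[team] = "bottom 50%"
    return bands

# Example with made-up team numbers and scores:
print(feedback_bands({"23T": 92, "1010A": 77, "5090X": 85, "9090B": 40}))
```

Each team would only see its own band, which gives the relative feedback discussed above without inviting score-by-score disputes.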


Exactly, and to add on to what @Mentor_355v said, the very first thing that RECF needs to do is to recognize that the activity (notebooking) that the Design award tries to encourage is very different from the other awards.

You can instantly see a robot’s shortcomings as you test it at a competition, and you can almost entirely rebuild and reprogram it for the next one. You get a lot of feedback, and a lot of ideas from seeing other robots - a great way to learn and improve.

With a notebook it is very different. You cannot “rebuild” a notebook; if you do, it will be considered unethical.

Also, it is not obvious how to get feedback on the notebook. When I judge MS, I always tell teams that display a desire to improve that if they want notebook feedback, they can talk to me after the competition, and a number of them do. However, my HS kids didn’t have an easy experience trying to get feedback from HS judges, with only a couple of pleasant exceptions. It is very subjective.

And if the activity is different, then the next step would be to see if there could be another award structure that better fits it, while maximizing the quality of education and the number of students it could influence.


While an EN may not guarantee the Design award, teams should still keep one because it is necessary to win Excellence. This means that even if a terrible robot wins Design, it won’t win Excellence, so if you have a strong EN and robot, chances are you’ll win Excellence.
ENs are also great to look back on later.


Exactly. That was our calculus when we were starting out. Say a tournament sends five teams through to States (Excellence, Champions, Design, Skills). If a team doesn’t have an EN, they have automatically reduced their chances of going through by 2/5, or 40%. Just by not submitting an EN, that team’s chances of going through are 60% of what they would have been.
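The back-of-envelope math above can be sketched out as follows (assuming, as in this example, that exactly two of the five advancement spots require an EN; actual qualifying-award counts vary by event and region):

```python
# Back-of-envelope sketch of the 2/5 argument above. Assumes a tournament
# advances 5 teams and that 2 of those routes (Excellence and Design)
# require an engineering notebook; real counts vary by event.
total_spots = 5
notebook_only_spots = 2                      # Excellence + Design (assumed)
reachable = total_spots - notebook_only_spots

lost_fraction = notebook_only_spots / total_spots
remaining_fraction = reachable / total_spots
print(lost_fraction, remaining_fraction)     # 0.4 0.6, i.e. 40% lost, 60% left
```

Of course this treats every route as equally likely, which they aren’t, but it illustrates why skipping the EN cuts off a meaningful share of advancement paths.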


That’s not a lottery, because the odds are not equal, tho. :wink:

I could imagine there is a small chance of beating a better robot if they whitescreen in Bo1 eliminations. But to have all the judges lose their minds at the same time and give my EN more points than the other team’s, which has been scoring higher all season long…

I must think quickly about strategically deploying those Girl Powered stickers, rainbow highlighters, and some leftover Pixy dust. There’s gotta be some good use for those…

Sadly, still zero chance of beating those good ENs… :crying_cat_face:


In Delmarva, we are in our second year of giving feedback on both the notebook and the interview. Year one was presented at the partner summit. The response was mixed, with lots of EPs saying it was a nice pilot but most saying they were not interested in doing it. They saw it as an extra burden on an already stretched staff, something that would lead to whining by teams (“Well, you said this, why didn’t we win?”), and something that would be uneven from event to event.