Design Award and Original Ideas

I served as a judge for a few events and noticed that more and more teams are adopting certain designs popularized through YouTube and at past events.

However, while reviewing their engineering notebooks, I found that this adoption is usually not well documented, or amounts to something like "we found XXX issues with the current design, so we have decided to change the design; the new design is this (picture attached)."

Most such notebooks show a clear gap in the engineering design process (after all, they are copying mature designs). Some notebooks spend 80% of their pages discussing how to refine a claw bot, and then suddenly there is a new bot that does front and rear lift, has a conveyor ball picker, and a flap door at the back to drop balls on cubes – a design popularized on YouTube by a Chinese team several months ago. The question is, how should we take these kinds of factors into consideration when judging for the Design and Excellence Awards? Should we even consider a team for the Design Award when the design itself is not original?


I would just work my way through the rubric and score it appropriately. I love this year’s rubric because it, to me, seems so much clearer. A notebook like you describe would get minimal scores in most of the categories.

The harder question is: what if your top 4 or 5 robots in the teamwork portion and skills are all clone bots like this, then who do you give Excellence to? The team that killed it in Design but couldn't score as high, or a team that showed they could copy a robot and practice driving it?


I have judged quite a bit this season and completely agree that many teams have not gone through the engineering design and decision process to justify the selection of a traybot. That is very disappointing. Still, I don't think teams should be eliminated from consideration for the Design Award just because they build a traybot. They do, however, need to go through the process to demonstrate what options were considered and why those were rejected in favor of a traybot. If a traybot falls out of that analysis, I'm fine with it. But if I open up an EN and on page one it says, "we're building a traybot," that is not good. Likewise, as you say, if a team is building a different design and then switches over to a traybot with no explanation, that is also not good. Those notebooks would get low ratings on those elements of the rubric.


So, hopefully at least one or two of your top ten teams will also rank in the other areas.


There is no requirement that an event give out an Excellence Award. So, if no team meets the above criteria, then no Excellence Award should be given. The head judge could talk with the teams at the awards ceremony about why the award cannot be given.

That would be a very teachable moment.


I'm just curious to understand more clearly… is it the lack of following a design process on the way to a clonebot that you feel would eliminate them from consideration, or just the use of a clonebot, even if the decision and build were well documented in the EN?

I would be OK, but not super excited as a judge, if a team had a well documented chain of deciding on a clonebot and building it.



The Judge's Guide defines "identical" clone bots as not student-centered. It would be nearly impossible for robots at different schools to be clone bots, unless there are published instructions to build a great robot. Seeing a video online or at a tournament and building a similar bot is not an "identical" bot, as I read it.

I would guess that if a school showed up with 5 identical claw bots, then those would technically fit this definition as well.

RECF still has some work to do in this area to clarify these statements.


I either forgot or didn't realize clonebots were identified as violating the student-centered policy.

I think there’s a terminology issue here: The “clone bots” identified in the judging rubrics refers to team(s) having identical robots – the narrower definition of “clone”. In the earlier discussion in this thread, “clone bot” refers to those bots that copied popular designs – the broader definition of “clone”.

I agree that teams that adopted a traybot with complete documentation of the engineering design process should still be considered for the Design Award. But I think it would be proper to give significantly more credit to those with more original designs. The "originality" metric is not clearly defined in the rubrics, though, hence my original question.


It says teams from the same club/school having identical designs OR clone bots, so I think the rubric covers both.

Just curious, @saltshaker and others in this thread who have been judges at tournaments: how much time do judges spend reading the ENs, especially at tournaments with 30+ teams? Do you read them in their entirety or flip through and read key sections?

I ask because, while we may call these robots "clone bots," I'm not sure that's always the case. I see this as another grey area in VEX, similar to the question of how much involvement adults should have. There are only so many ways to do things, especially when a lot of teams study previous seasons' games and designs (which they should). Robot designs will start gravitating toward certain configurations over the course of a season for various reasons.

When we see a robot that looks similar (maybe conceptually similar) to one shown on YouTube, do we assume it is a clone bot? Or do you dig further into the EN?

The process we've always used is to flip through the notebooks and quickly sort them into three piles: novice, emerging, and excellent, or something like that. Then the excellent ones are gone through in detail. This is done before going to the pits for interviews.

If a team has a clone bot but a very complete notebook, it may not get noticed. If they have a clone bot and a sparse notebook, they simply won't progress for judged awards.

If during the interview the students cannot answer questions about their design process (or just blurt out that they copied it off the internet), they again won't be moving forward.


Why is that? Does this assume robots that look the same are clone bots? Designs tend to converge as the season progresses, and teams should always be encouraged to study past seasons, past robots, and other designs to get ideas.

Sorry, by "not noticed" I meant not recognized as being a clone bot, but rather just assumed to be the team's own design. For example, seeing 3 or 4 brainstormed ideas in the notebook would lead me to believe it was NOT a clone bot.

Let’s look at the judge’s guide again…


The teams that should not win judged awards are teams with IDENTICAL robots. Two robots with a similar design are not identical.