Why is the Google Signature event allowing more than 3 skills runs?

As some of you may know, a Signature Event is currently taking place at Google in California, and we noticed some interesting things happening.

There are currently teams with more than 3 skills runs.

Teams there have said that the event is allowing unlimited skills runs.

According to the qualifying criteria document,

Robot Skills at events must follow these guidelines:

  1. Tournaments and Leagues with 24 or more teams registered, must offer Driving and
    Programming Skills Challenge Matches. Skills Challenge Matches are highly recommended for
    all events.
  2. Any event offering the Robot Skills Challenge must offer exactly three (3) Driving and three (3)
    Programming Skills Challenge attempts to attending teams playing in a Tournament. If in a
    league, then no more than three (3) attempts at each skill may be played in each league session.
  3. Skills Challenge scores for all official qualifying events will be included in the World Skills
    Standings on RobotEvents.com.
  4. Skills-Only Events are not qualifying events and will not have their results uploaded to the World
    Skills Standings. These are for practice only.

Note: From time to time, the REC Foundation may make an exception to one of these criteria to better support a growing region. For more information, please contact your REC Foundation Regional Support Manager.

Is this an exception or a mistake? If it is an exception, how is this an exception? This is completely unfair to all teams, including ones that went to any other signature event that disallowed this.


@DRow Maybe merge this with Google Signature Event Skills Confusion?
It seems we both had the same idea.

From the Google event Discord:

This is not fair, as usually I would have 3 attempts at skills, then wait at least a week in between tournaments, but it seems that teams who have the funding to go to a sig event get a pass for this rule, and can compete in the skills challenge an unlimited number of times. I would hope that this was an accident.


Let’s not get too aggravated about this quite yet. Wait to see whether the RECF actually approved this or it was just a mistake. I suspect the latter.


It appears the Event Partner has responded to some questions on the Google Sig discord server. This is not my screenshot. Please excuse the light theme.

We apologize for the confusion. We are working to resolve this issue and will make an announcement tomorrow morning after consulting with the REC Foundation, in order to ensure fairness for all competing teams.


This is not the first event this has happened at this year. I’ve also seen qualifying events with fewer than the required number of elimination alliances, and events where the Excellence and Design awards were given to the same team. This has led me to believe that some of these things that appear to be rules are actually just guidelines. My hope is that going forward, it’s clearly stated which of these are actual rules, and if they are, that they are enforced by Tournament Manager. That way EPs can’t make these sorts of mistakes, whether intentional or accidental. As an EP, I can see how these mistakes could happen, so it makes sense to use TM as an additional checkpoint.


TM defaults to 3 attempts. The person setting up this event would have had to go into the TM Settings and change that value, ignoring the (IMO extremely explicit) warning in there that says:

REC Foundation qualifying events must offer exactly 3 challenge attempts per challenge, per team. This setting should not be changed for qualifying events.

It remains configurable because some non-qualifying events do choose to run more.



I’m glad you posted this. I just set up TM for an event I’m hosting next week, and remembered TM being very explicit about the number of skills runs allowed (as you show in your post, it really leaves no question about the rule). Also, the EPs who run Signature Events are the most experienced EPs, so it is kind of hard to believe that this would come as a surprise.


I’m not sure exactly what they did, but this is the RobotEvents page as of now.

It looks like they took the top 3 of their skills… 7K didn’t get a 75 prog on the first 3 attempts IIRC…


What would you recommend as a better solution?

For me, this is a no-win scenario.

Let’s wait for the EP to make a statement about the rationale instead of speculating…

Good luck to the teams today!

@LegoMindstormsmaniac suggested this, and I agree:

The only fair way to do it would have been to invalidate all runs after 3, then give teams the option to invalidate up to 3 of their remaining runs.


If teams had 4 scores and the EP is trying to reduce them back down to 3 to fit the guidelines, then taking the best 3 scores does not work as far as I’m concerned. They may as well just leave 4 scores at that point, since only the best score matters for each team anyway. If you’re going to trim the scores back down to 3, the only thing that makes sense IMO is to take the first three scores posted by each team, not the 3 best scores.

This is of course still somewhat unfair, because a team that thought it would get more than 3 attempts may have spent the first 3 tweaking or as practice runs. However, perhaps the teams should have known they were only supposed to be allowed 3 attempts. Regardless, teams should take up any concerns with the EP, since it appears the EP is at fault here (at least that’s how it looks; admittedly we do not have all the information). Perhaps refunds (partial or otherwise) would be in order if the event failed to deliver what it promised, either by failing to deliver more than 3 skills runs or by failing to deliver a proper qualifying event.
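To make the difference between the two trimming policies concrete, here is a minimal sketch. The score lists are invented for illustration; nothing here reflects actual results from the event.

```python
def trim_first_three(runs):
    """Keep only the first 3 attempts, in the order they were played."""
    return runs[:3]

def trim_best_three(runs):
    """Keep the 3 highest-scoring attempts."""
    return sorted(runs, reverse=True)[:3]

# A hypothetical team that treated its early attempts as practice,
# expecting unlimited runs:
runs = [12, 35, 41, 75, 68]

print(trim_first_three(runs))  # [12, 35, 41] -> best counted score is 41
print(trim_best_three(runs))   # [75, 68, 41] -> best counted score is 75

# Since only each team's best score matters for the standings,
# "best 3" changes nothing compared to keeping every run:
print(max(trim_best_three(runs)) == max(runs))  # True
```

The last line is the point made above: trimming to the best 3 is equivalent to not trimming at all, whereas trimming to the first 3 actually restores the outcome the rule would have produced.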


Seems like the best solution

For certain teams, not all…

Why do you believe this is the best solution, and that it is helpful both for all teams at the event and for all teams in the World Skills Standings?

I am not sure what is right in this situation…

Out of curiosity, is there a rule/guideline/policy against giving Excellence and Design to the same team documented somewhere? I looked through the VRC Qualification Criteria document and was not able to find it there.

I can’t think of a better way of handling this situation at this point, but it’s still not particularly fair to teams (especially given the short amount of time left in the tournament for this to be resolved).

That kind of inverts the chain of authority: teams should respect the instructions of the EP and event staff, and expecting teams to constrain themselves in this way is not fair. As a competitor, given the option between “follow the EP’s directions even though they make it easier than the rules specify, and maybe need to argue later that the EP gave you wrong instructions” and “make it harder for yourself, and hope that everyone else’s advantage is retroactively taken away”, I’d feel much more comfortable with the first option.

I hope for the sake of all teams who are at this event (and travelled and paid to be there), as well as everyone else for whom skills scores matter, that a reasonable and fair resolution can be found here. I also hope that this is taken as a sign that VRC needs better oversight of events to prevent this from happening again in the future (for all tournaments, not just signature events).


It’s in the judges guide.



Thank you. Bottom of page 14 if anyone else was curious.

I agree with Karthik that more clarity is needed regarding which documents constitute the actual rules of VRC, and how they are enforced. There are so many different documents (game manual, appendices, qualification criteria, judge guide, student-centered policy, code of conduct…) that it is inevitable that some very important policy will be overlooked by teams, EPs, or both. It is also completely unclear what recourse teams have when an EP makes a qualification-affecting mistake. I really hope that this can be a learning opportunity for everyone involved and that it sparks whatever change is needed to ensure it does not happen again in the future.