Fully Autonomous vs Pre-Set Autonomous

Hey guys,
My team and I were wondering which one is better to have in a match (both the normal game and Programming Skills):

  1. A predefined autonomous routine that uses distance and time to set motor values, picking up balls based on their pre-set positions before the match,

OR

  2. A fully autonomous robot that detects game objects and interacts with/scores them on its own using sensor data (ultrasonic).

Your opinion?

Always go for the one that will be most reliable. Will running based on preset timers be more reliable than using sensor data about the actual layout of the field?

Which one is easier? Whichever one you've planned the most and minimized the most error from.
For autonomous, and really for anything, I want as few variables as possible, and as many things as possible that I can make constant or easily adjustable.

Now the latter, imo, is definitely the more attractive of the two, but it also involves a fair bit more effort.

Whether it is actually better all boils down to implementation.

The only concern we have regarding that is the time it takes to scan and process the information. What is the scan rate of the sensor? How fast is the Cortex CPU?

The VEX sonar sensors only sense in a single straight line, so it is very difficult to sense the whole field layout.
If we had a nice $100,000 LIDAR on the robot I would be all for sensing objects and path-finding our way to them, but for VRC that is hilariously overcomplicating autonomous and way beyond the Cortex. It is possible to use all manner of techniques to ensure that code based on line followers and encoders follows the same path every time, and the objects start in the same place anyway.
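To make that concrete, here's the kind of thing I mean: a rough ROBOTC-style sketch (not tested, and the motor names and numbers are made up) that drives by encoder counts instead of time, so battery level matters far less.

```
// Rough sketch, not tested. Assumes leftDrive/rightDrive motors with IMEs
// set up in the Motors and Sensors Setup, both counting up when driving forward.
void driveTicks(int ticks, int power)
{
  int error;

  nMotorEncoder[leftDrive]  = 0;   // reset the integrated motor encoders
  nMotorEncoder[rightDrive] = 0;

  while ((nMotorEncoder[leftDrive] + nMotorEncoder[rightDrive]) / 2 < ticks)
  {
    // slow whichever side is ahead so the robot tracks straight
    error = nMotorEncoder[leftDrive] - nMotorEncoder[rightDrive];
    motor[leftDrive]  = power - error / 4;
    motor[rightDrive] = power + error / 4;
    wait1Msec(20);
  }
  motor[leftDrive]  = 0;
  motor[rightDrive] = 0;
}
```

Run that with the same tick count every time and the path repeats far better than a plain "full power, wait 1.5 seconds" ever will.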

A nice little analogy.
I put my phone on my desk before I go to bed. When I wake up, should I:
A - search the entire room, assessing where everything is, or
B - go to my desk and grab my phone?

Loving the analogy :smiley:

Why do people always see things in black and white…

This question is: should I do A. or B.?

A. being an entirely predetermined routine
B. being an entirely dynamic routine

My answer (as per usual) is to invent a C. (because screw conventional thinking).

C. being, use the best of A. and B. within the parameters of your time, budget and skill level.

Now really was that so hard?

Now what might C look like? Great question, here’s my take…

Start with a fixed routine, do what you think you can reliably do, but set up triggers that revert the bot into its seek-and-destroy mode. For example, a sharp, unexpected value on the accelerometer might indicate that the robot was hit by something, which will obviously mess up any predetermined routine, so that's a good trigger to revert to the slower scoring mode.

Another trigger might be if a game element that you want magically isn’t there when you try to collect it. Something went wrong, who cares why - go search and destroy now.

A third trigger might look ahead of the robot for opposing robots or unexpected things blocking your path. You could either revert to search-and-destroy mode or try to navigate around the obstacle, which should work since your position is still known and your telemetry is still valid. The path-finding needed for that is advanced stuff, though, and will tax your programmers.
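For what it's worth, the triggers don't have to be fancy. Something like this is all I mean: a very rough ROBOTC-style sketch, not tested, where the sensor names and the driveTicks/runIntake/searchAndScore helpers are placeholders for your own code.

```
// Rough sketch of the "plan C" triggers, not tested.
int accelRest = 0;    // accelerometer reading at rest, sampled at the start

bool hitDetected()
{
  // a sharp, unexpected accelerometer value suggests we were rammed
  return abs(SensorValue[accelX] - accelRest) > 400;   // threshold is a guess
}

task autonomous()
{
  accelRest = SensorValue[accelX];

  driveTicks(1000, 80);                 // step 1 of the fixed routine
  if (hitDetected()) { searchAndScore(); return; }

  runIntake();                          // step 2: grab the ball that should be here
  if (SensorValue[intakeLimit] == 0)    // assuming the switch reads 1 when a ball presses it
  {
    searchAndScore();                   // it magically isn't there, go dynamic
    return;
  }

  // ...carry on with the predetermined routine, checking triggers between steps...
}
```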

As for sensors, I remain convinced that VEX does provide enough good sensors to locate the robot and establish your position on the field. Assessing the condition of the field, however, is tricky.

If you have really good knowledge of where you are relative to where you started (which is also known), then you know exactly where you are on the field, and that can be used to navigate dynamically around the physical parts of the field given a map of sorts. I can't disclose the details, but I created such a map for NAR months ago and we do have plans to use it. It's really quite simple, but powerful. Anyway, the tricky part is seeing the non-static parts of the field.
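In case it helps, the position bookkeeping part is just dead reckoning. A very rough ROBOTC-style sketch (not tested; the sensor names and the ticks-per-inch number are made up, and the gyro drifts, so this only holds up over a short run):

```
// Very rough dead-reckoning sketch, not tested.
float robotX = 0.0;              // position in inches relative to the start
float robotY = 0.0;
float ticksPerInch = 50.0;       // depends on wheels and gearing, measure it

task trackPosition()             // kick this off with startTask(trackPosition)
{
  int lastL, lastR, dL, dR;
  float dist, heading;

  lastL = nMotorEncoder[leftDrive];
  lastR = nMotorEncoder[rightDrive];

  while (true)
  {
    dL = nMotorEncoder[leftDrive]  - lastL;
    dR = nMotorEncoder[rightDrive] - lastR;
    lastL += dL;
    lastR += dR;

    dist    = (dL + dR) / 2.0 / ticksPerInch;               // forward travel
    heading = SensorValue[gyro] / 10.0 * 3.14159 / 180.0;   // gyro reads tenths of a degree

    robotX += dist * cos(heading);
    robotY += dist * sin(heading);
    wait1Msec(20);
  }
}
```

Combine robotX/robotY with a hard-coded map of the static field elements and you can path around them; the moving stuff is the part this can't see.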

The rangefinder is slow and it's only a single point in space, so not quite on par with Google's LIDAR systems. But something like an Xbox Kinect sensor could possibly provide a really nice picture and 3D point cloud of the field, which could be used to detect foreign objects, both of the robot kind and the game object kind. Unfortunately, using that sensor isn't really practical.

First of all it's huge, second of all it takes a lot of processing power to work on the data, and third... ewww, Xbox.

An honestly better approach might be to just allow yourself to hit other robots, look for the impact on the accelerometer, and make some assumptions to try to navigate around them. It's a really special case anyway: you won't see it in skills and you only care about it for the first 20s of a match, so really, why care? For all I care, just fail in this case. Oh well, they devoted an entire robot to stopping your routine; at least it's one robot not scoring traded for another robot not scoring.

And then there's object tracking. Yeah yeah, it CAN be done given a webcam and three hours of YouTube videos on computer vision, but WHY? For a 20s span of time it's not worth it, but for a whole minute like in college? Oh yeah, it's worth it. Idk, my intuition is telling me that HS teams would, in 90% of cases, be better off spending that time building a better robot, making design tweaks, etc. than actually doing object detection, but who knows, I've been very wrong before.

So yeah that’s what I have. Do C, pick and choose your battles wisely, weigh the costs to gains, etc.

-Cody

While I can't tell you numbers off the top of my head (jpearman could tell you), neither the scan rate nor the processing time will be an issue if the sensors are programmed correctly.

Teams that do well will, with almost absolute certainty, have advanced sensors on their robots.

The VEX documentation shows that the sensor has an approximately 45-degree viewing angle:
https://vexforum.com/wiki/images/7/74/Ultrasonic_Range_Finder_Figure_3.jpg

@Cody

Thanks for the input. It seems that C is the ideal plan for the autonomous period of the normal game. As for the details, I will have to think about it.

However, this leaves me with a question. Would a fully sensor-based autonomous program be useful and prove competitive for the Programming Skills challenge?

I will point to my analogy again.

Sensing takes time.

No offense, but evolution
(please don't argue religion right now; it's roughly the same point either way)
took a very, very, very long time to create the human mind.
You won't match the efficiency of the top robot skills runs by trying to sense and react quickly to things going wrong.
The way to match robot skills is going to be raw consistency and the ability to react to things going right extremely quickly.

The top robot skills scorer in the world last year, Jack, didn't just go onto the field and drive toward big clumps of sacks. He mapped out a routine and followed it extremely precisely. Why should we presume programming skills is any different?

My posts get longer the more homework I try to avoid :slight_smile:

Ultrasonic sensors update about 20 times per second. If you have two sensors, they each update 10 times per second; the actual update rate is determined by how close the target is to the sensor. See this:
https://vexforum.com/showpost.php?p=322622&postcount=2

All other sensors update fast enough that it's not a factor (at least in ROBOTC and ConVEX; I'm not so sure about EasyC, whose IME update rate is quite slow).
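Practically, that means there's no point polling the sonar faster than new readings arrive. Something like this (rough ROBOTC-style sketch, not tested, names made up) reads it at roughly that rate and ignores the no-echo value:

```
// Sketch only, not tested. With sensorSONAR_cm the reading is in centimetres,
// and (as I understand it) a negative value means no echo came back.
task watchSonar()
{
  int d;
  int lastGood = -1;

  while (true)
  {
    d = SensorValue[sonar];
    if (d > 0)
      lastGood = d;      // keep the last valid distance

    wait1Msec(50);       // ~20 Hz; polling faster gains nothing
  }
}
```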

What did those sensors help to accomplish?

If this proved to be successful, does this mean that the margin for error and the likelihood of things going wrong is very small in a programming skills environment? Does this margin for error decrease each time you practice/rehearse it? How does this apply to the autonomous period in a normal game? Could you perhaps do scouting and have multiple programs ready to initiate, either through a trigger or downloaded right before the match starts?

No, it's even less of a good case for the dynamic approach, because the field should remain much more static: only your actions alter its state. This is a great case for predetermined logic, as you can carefully track how you alter the field and design code around that.
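By "track how you alter the field" I just mean keeping a few variables up to date as the routine runs, so later steps can branch on what has already happened. A toy sketch (not tested, everything here is a placeholder):

```
// Toy sketch, not tested: a tiny "field model" the routine maintains itself.
int ballsInIntake = 0;
int ballsScored   = 0;

void scoreCarriedBalls()
{
  // ...drive to the goal and outtake (placeholder)...
  ballsScored  += ballsInIntake;
  ballsInIntake = 0;
}

void maybeGrabAnotherLoad()
{
  // skip the extra trip if earlier steps already emptied that corner
  if (ballsScored < 6)
  {
    // ...drive back for more (placeholder)...
    ballsInIntake = 3;
  }
}
```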

Do take care with robot positioning, though; you still have to account for variance in the field surface and robot battery level. Little things add up to major drift.

Also, there is a TED talk out there about how our brains learn the shortest path to perform actions (particularly with the arm); statistically this reduces the amount of possible error. Point being: be direct about your actions. Try to always be moving in a straight line.

-Cody

This is the TED talk, a must watch.

Actually has a lot of robotics in it. Think about it in the context of moving a Vex bot around the field. -Cody

Cody touched on it: the big thing sensors help accomplish is getting rid of the variance between field setups (perhaps a buckyball is off by an inch or something). They also help get rid of the variance from your robot, since different battery voltages lead to different max speeds (your robot running 127 PWM at 7.0 V is very different from a robot running 127 PWM at, say, 5.5 V). It's very difficult, even in ideal situations, to program a robot using time only. Sensors (encoders or potentiometers) are very commonly used; for example, you can program a button to send the lift to a certain height every time you push it (very convenient for drivers). I would say encoders on the drivetrain are the next most common thing, as they allow you to record the number of degrees a wheel has rotated (beyond 360, of course).
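The "button sends the lift to a height" idea is just a potentiometer plus a bit of proportional control. A rough ROBOTC-style sketch (not tested; the sensor/motor names and the target values are made up and would have to be measured on your own robot):

```
// Rough sketch, not tested.
void liftToTarget(int target)
{
  int power;

  // simple proportional control: power shrinks as the lift nears the target
  while (abs(SensorValue[liftPot] - target) > 15)
  {
    power = (target - SensorValue[liftPot]) / 4;
    if (power > 127)  power = 127;
    if (power < -127) power = -127;
    motor[liftMotor] = power;
    wait1Msec(20);
  }
  motor[liftMotor] = 10;   // small holding power (or 0, depending on the lift)
}

task usercontrol()
{
  while (true)
  {
    // one button per preset height (this blocks the drive while the lift moves,
    // which is fine for a sketch)
    if (vexRT[Btn8U] == 1)  liftToTarget(2900);   // scoring height
    if (vexRT[Btn8D] == 1)  liftToTarget(1200);   // intake height
    wait1Msec(20);
  }
}
```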

I’m not sure if every team does this, but we take a battery straight from the charger to the field for each match. They’re all around the same threshold power-wise, and that keeps our routines consistent. The little variance that we do experience is hopefully corrected with sensors.

This year we're using one URF, two IMEs, a gyroscope, a light/colour sensor and an LCD screen. That's all we think we're going to need to develop a routine that is as close to perfect as possible. For a 15-second autonomous period (or even 60 seconds of Programming Skills), you really don't need more than that. With one IME on the base and one on the lift, you'll be able to drive any set distance you want and put the arm at any height. The gyroscope is nice for turning, but not really necessary. We're using the URF to track how many objects are in the intake; that might shave off a few seconds, but it's not necessary. It's just helpful.
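The idea there is simply to watch the sonar reading dip below a threshold and then recover. Something like this sketch, simplified and not tested, with made-up names and numbers:

```
// Simplified sketch, not tested. A sonar looking across the intake path;
// each time the reading dips below a threshold and then recovers, count one object.
int objectsInIntake = 0;

task countIntake()
{
  int  d;
  bool objectPresent = false;

  while (true)
  {
    d = SensorValue[intakeSonar];           // cm; negative means no echo

    if (d > 0 && d < 10 && !objectPresent)  // something just moved in front
    {
      objectsInIntake++;
      objectPresent = true;
    }
    else if ((d >= 10 || d < 0) && objectPresent)
    {
      objectPresent = false;                // gap before the next object
    }

    wait1Msec(50);                          // the sonar only updates ~20 Hz anyway
  }
}
```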

The LCD and light sensor are just for us. We like menu systems and real-time data when we're testing. Using a terminal window and being patient would get you the same results as an LCD screen, but we really like them. They aren't necessary by any means. The same applies to the light sensor. We like programming (to be honest, we're building a robot so that we have something to code on), and we want to see if we can use ambient light and enough logic statements to get the robot to sense what colour of game object it has and what colour of tile it's on without our input. We did the same thing last year with a pair of URFs detecting which side of the field the robot was on. It's completely unnecessary (and easy once you think about it for even a second), but fun.
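The light sensor logic really is just thresholds. Something in this spirit (a simplified sketch, not tested, and the cut-offs would have to be measured over each tile colour under the venue's lighting):

```
// Simplified sketch, not tested. The cut-off values are made up and would
// have to be calibrated for the actual lighting.
int readTileColour()
{
  int light = SensorValue[lightSensor];

  if (light < 800)        return 0;   // one colour of tile
  else if (light < 1600)  return 1;   // another
  else                    return 2;   // the third
}
```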

Sorry if that didn't really contribute to the discussion. Here's my point: do whatever you need to in order to win. If you need a "smart autonomous" mode, write one. If you just need the "X -> Y -> Z" type, do that. Don't add sensors if you don't need them. Don't add code that's pointless. Don't spend your time on projects that don't help you win or learn something. Just do what works.