Guess and Hope, two great enemies

From another thread:

Guess and Hope are the two greatest enemies of a robot or any process control system. All good robots know what’s going on: sensors sense the world and give that feedback to the program(mer) to act on.

Wait times are really bad when used for positioning (“move forward 10 seconds and score”). Friction, wear, battery voltage, etc. are all part of an ever-changing motion equation in which the only thing you control is time. A timed routine may hit 10 out of 10 on the practice field, but on the competition field a slight difference in floor wear can make you miss by that critical 1/4".

The poster said they were the designer/builder, so they should know best how the robot’s environment keeps changing. They need to explain to the programmers how wear happens, how bolts don’t stay tight, etc., so that the programmers know it’s a constantly changing environment.

Sensors can tell you exactly where you are. There are hundreds of postings from people asking, “How can I get my robot to move straight?” Every answer that works is “use a sensor or sensors”.
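To make “use a sensor” concrete, here is a minimal, hypothetical sketch of gyro-based straight driving: a proportional correction computed from the heading error. The gain `kP` and the function names are invented placeholders to tune on a real robot, not any specific VEX API.

```c
#include <assert.h>

/* Hypothetical sketch: drive straight with a gyro by computing a
   proportional correction from the heading error. kP is a made-up gain
   to tune on the real robot; nothing here is a specific VEX API. */

/* Positive heading error yields a positive correction: add it to the
   left side's power and subtract it from the right side's. */
int straight_correction(int heading_error_deg) {
    const int kP = 2;  /* proportional gain, tune experimentally */
    return kP * heading_error_deg;
}

/* Clamp a motor command into the usual -127..127 range. */
int clamp_power(int p) {
    if (p > 127) return 127;
    if (p < -127) return -127;
    return p;
}
```

In the drive loop you would read the gyro, compute `c = straight_correction(target - heading)`, then command `clamp_power(base + c)` on one side and `clamp_power(base - c)` on the other.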

Next time you are writing code and go “and then I hope it’s in position”, remind yourself: “Hope is NOT an engineering strategy”. Go find a sensor that will give you the information you need. The difference between “I guess I’m there” and “I know I’m exactly 3 inches away” is scoring every time.

One of my posts a few weeks ago was about having the right sensor for the right job. A gyro and accelerometer will get you some information, but a distance sensor will let you know exactly. If you followed the SpaceX Dragon trip to the ISS, they have a large number of sensors on Dragon to keep it from crashing into the ISS. They have gyros and accelerometers, but for distance they have a distance sensor. The right tool for the right job.

Well said Foster.

I’m excited to see what the students come up with for this year’s game. Gateway, IMHO, was not sensor friendly: round game objects that did not stay still were hard to detect, the goals did not have easy surfaces to use for detection, and the gyro was only available (in terms of software support) after the season was underway. This year has more opportunities. I see ultrasonics being used much more to track walls and detect the goals, the open field is more conducive to line tracking, and the software for the new IMEs and gyro is available for the whole season. I hope to put some tutorials together this summer, but the best way to learn about sensors is by experimentation, so programmers: get a basic push bot built soon and start programming.

I agree that sensors are completely necessary for a complex routine. For the Sack Attack season, we’re venturing into the world of Motor Encoders, Light Sensors, Ultrasonic Sensors, and possibly more.

Simpler routines, however, don’t require all of the bells and whistles. For Gateway, our autonomous raised the lift, drove to the center 30" goal, and waited there. It was simple, reliable, and very effective. However, it was only viable because it controlled a strategically important part of the game. There’s no obvious central focus point in Sack Attack, so we must adapt.

What if I say “I hope this sensor works like it should”?

I should also mention that sensors are your best friend ALWAYS

Well then you should probably plan for that! :wink:
If a certain “step” of your programmed routine takes 5 seconds to complete, then have a “fail safe” (stop all motors and break from the autonomous loop) that will trip if you haven’t reached your target after 8 seconds…
It’s a fairly sure way to prevent overheating motors/cortex during autonomous, which seems to be the bane of many newcomers to VEX.
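The 5-second step with an 8-second fail-safe described above might look like this small decision function (the timeout constant and names are illustrative, not from any real API):

```c
#include <assert.h>

/* Hedged sketch of the fail-safe above: a step expected to take ~5 s is
   aborted if 8 s elapse without reaching the target. The constant and
   function names are made up for illustration. */

#define STEP_TIMEOUT_MS 8000L

/* Returns 1 when the routine should stop all motors and break out. */
int step_timed_out(long elapsed_ms, int target_reached) {
    return !target_reached && elapsed_ms > STEP_TIMEOUT_MS;
}
```

In use: record the clock when the step starts, then each loop iteration call `step_timed_out(now - start, at_target)`; if it returns 1, zero all the motors and break out of the autonomous loop.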

Ah yes, a segment of code that’ll break from whatever part you’re on so it doesn’t ruin motors or something. Coding for contingencies and slightly more complicated logic can be pretty useful. Especially for college, teams have an entire minute (of true autonomous, woo), so they have to plan for if/when something throws a wrench in their plans.

Speaking of which, is there a way to detect incoming opposing robots (that may be on a collision course), preferably with Vex sensors? The ultrasonics might work, but I’m not sure how reliable those are, especially on robots with weird surface features.

If the “true autonomous” remark is a reference to repositioning, let me remind you that the increased distance between starting tiles, objects, and goals in Sack Attack makes repositioning very impractical for a good autonomous routine (unlike Gateway). I think we will see fewer repositioning autonomous runs at Worlds 2013, and those who can navigate the field without human help will have a serious advantage.

Perhaps, for detecting other robots, you could have extending “tentacles” with bump sensors or limit switches on them.

Here’s one problem our team has been thinking about:
If your robot is following a line, and another robot pushes you completely off the line, how could your robot find the line again?

if the line hasn’t been seen in 5 seconds,
go forwards and backwards until you see the line

Second method:
build a wall around the robot made of plates attached to bumper sensors,
then strafe against whatever is pushing you

What if you accidentally drive into a wall, then spend the whole autonomous pushing back against a wall that isn’t likely to move? :smiley:

Just use an accelerometer to detect the collision.
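A minimal sketch of that idea, assuming you sample the accelerometer each loop and treat a large jump between consecutive readings as a hit (the threshold is a placeholder to tune on real hardware):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical collision check: a sudden change in the accelerometer
   reading between two loop iterations means something hit us. The
   threshold is an illustrative value, not measured on a real robot. */

#define COLLISION_THRESHOLD 200  /* raw sensor units, tune on the robot */

int collision_detected(int prev_accel, int curr_accel) {
    return abs(curr_accel - prev_accel) > COLLISION_THRESHOLD;
}
```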

Oh yeah, good point.
That’s why I run ideas past people before I try them.
I guess you could have it go in the direction it was hit first, but yeah, I see the problem with this.

For line following, have your code remember the last sensor that saw the line and move in the appropriate direction. If you get pushed off to the left, then your right line sensor will be the last to see the line, and vice versa.

Essentially, your code needs another case where if none of the line sensors see anything, then it falls back to the last remembered sensor values.

Can code remember the order in which the sensors saw the line? This sounds like a very complicated command to write.

I suppose you could say if leftsensor = x for 2 seconds, and the middle and right sensors = x for >2 seconds, then strafe left (and vice versa).

Remember it’s not all about the autonomous part of the match. Many teams last year had arms that lifted to drop elements into the different height goals. Many of them had “pushbutton” routines to get the arm to the right height. Rather than have the operator guess, they pushed a button and the robot, using sensors, positioned the arm to the right height.

For Sack Attack, you can do the same thing: position your claw to drop the gold bags on the high goal.
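A sketch of such a pushbutton routine, assuming a potentiometer mounted on the arm: drive toward a preset reading until you are inside a deadband. The deadband and power values are invented placeholders, not from any particular robot.

```c
#include <assert.h>

/* Hedged sketch of a pushbutton arm preset using a potentiometer on the
   arm. Deadband and power values are illustrative, not from a real robot. */

#define ARM_DEADBAND 15  /* acceptable error in pot ticks */

/* Returns the motor power to move toward target_pot: positive to raise,
   negative to lower, 0 when close enough to stop. */
int arm_power_toward(int current_pot, int target_pot) {
    int error = target_pot - current_pot;
    if (error > ARM_DEADBAND)  return 80;
    if (error < -ARM_DEADBAND) return -80;
    return 0;
}
```

On a button press, you would loop calling this with the live pot reading and feed the result to the arm motors until it returns 0.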

If you plan the sensors first, it’s easy to build a robot around them. Fitting sensors onto an already-assembled robot is much harder.

Remember to think outside the box on design. Take a limit switch, attach the 75 MHz receiver tube with a rubber band, and now you have a sensor that can see 12". Maybe use it to sense when the robot passes under the scoring trough?

I was considering a very similar setup to sense when I drove under the trough, but it doesn’t seem useful except maybe for autonomous:
drive until under
drive until not under
This would be good for those wall bots that rush the field, or if you had a type of arm with no forward reach, so you could drive under a little and raise an arm off the back.
What would your opinion be of a reverse six-bar like AURA had, so it had forward reach at max height to score in the opposite high goal but could drive under a little to score in the troughs?
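The “drive until under / drive until not under” idea can be sketched as a tiny state machine stepped once per loop with the trough sensor’s reading (the state names are just for illustration):

```c
#include <assert.h>

/* Hedged sketch of "drive until under the trough, then until clear":
   a small state machine stepped once per loop with the sensor reading.
   Names are illustrative. */

enum TroughState { SEEKING, UNDER, CLEAR };

enum TroughState trough_step(enum TroughState s, int sees_trough) {
    if (s == SEEKING && sees_trough)  return UNDER;  /* just drove under */
    if (s == UNDER   && !sees_trough) return CLEAR;  /* just drove out   */
    return s;  /* otherwise keep driving in the current state */
}
```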

Well, it would probably look like a lot of blocks in Easy C, but it’s not as complicated as it sounds. Here is some pseudo-code:

// We need some global variables to remember our last valid sensor readings:
int last_left_sensor = 0;
int last_center_sensor = 0;
int last_right_sensor = 0;

void Update_Line_Tracking()
{
    // Read the current values of the line sensors:
    int left = Read_Line_Sensor(LEFT);
    int center = Read_Line_Sensor(CENTER);
    int right = Read_Line_Sensor(RIGHT);

    if ((left == 0) && (center == 0) && (right == 0))
    {
        // Oh no, we don't see any line, so use the values from last time we saw it
        left = last_left_sensor;
        center = last_center_sensor;
        right = last_right_sensor;
    }
    else
    {
        // Ok, we see the line, remember these values in case we get pushed!
        last_left_sensor = left;
        last_center_sensor = center;
        last_right_sensor = right;
    }

    // Now, just do your normal logic using left, center, right ...
}


Hopefully that gets the idea across. Keep in mind this is pseudo-code and won’t work as-is (Read_Line_Sensor and the sensor names are placeholders); it’s just to demonstrate the idea.

In addition to macros, having some kind of control loop for a lift can make all the difference. We’ve learned the hard way that having a lift go to a certain height and having it stay there can be two completely different things. Rubber bands help a ton, but too many can prevent the lift from being completely lowered.
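A minimal sketch of such a control loop, assuming a potentiometer on the lift: a proportional term pulls the lift back toward the setpoint, and a constant feedforward offsets gravity so it doesn’t sag. Both constants are invented placeholders to tune.

```c
#include <assert.h>

/* Hedged sketch of a hold loop for a lift: proportional correction plus
   a constant feedforward that roughly cancels gravity. kP and holdFF are
   invented values to tune; the sensor is assumed to be a potentiometer. */

int lift_hold_power(int setpoint, int measured) {
    const int kP = 1;       /* proportional gain */
    const int holdFF = 15;  /* constant power offsetting the lift's weight */
    int power = kP * (setpoint - measured) + holdFF;
    if (power > 127) power = 127;
    if (power < -127) power = -127;
    return power;
}
```

Run this every loop iteration and send the result to the lift motors; the feedforward is what keeps the lift from slowly drooping when the error is near zero.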

Our driver actually preferred manual arm height control, because with all the height variance in scoring, descoring, and balancing, you often needed to go to heights that are not one of the three goal levels. I would say this is especially true for teams that have a driver/operator pair rather than a single driver, because the operator can focus on smooth, accurate arm control without having to worry about the rest of the game.

Yet Guess and Hope are two of the greatest tools of robotics teams.

Hope is the idea that “next year will be better”.
Hope is the great motivator for teams to continue in spite of past difficulties.

Guess, also known as Hypothesis, is an essential part of the scientific method. The ability to Guess at a possible answer and be willing to try it with Hope of success, yet without certain knowledge, is a great productivity aid and a way to work around “analysis paralysis”.

Keep your friends close, and your enemies closer…

I can see both sides of this, though. Back in my college days, one of our robotics classes had us using VEX with ROBOTC to perform a variety of challenges (stick a dart in a dartboard at specific scoring locations, line tracking, pick up a tennis ball and drop it in a bin, etc.). I sat back and let my team decide how to build/code it, and despite my suggestions they decided to go with timer-based coding.

Big mistake. Long story short, the only thing we completed successfully was the line-tracking section; differing battery levels caused a “distance traveled in x seconds” problem everywhere else. Which, ironically, made it the only challenge the team had wisely decided to use sensors on :slight_smile: It was a good lesson to learn, but there it is:

When going autonomous, use sensors, and use them well. +1 for Foster.

I’m the new support guru for ROBOTC, by the way. It’s nice to meet you all. I’ve been reading through many of these threads, and besides a great many smiles (and a few laughs; 13 air tanks? Really? :D), I have also picked up some good ideas for ROBOTC that I’m going to run by the development guys later on.

I’m going to be around a good bit on here and the ROBOTC forums to make sure we are making the best possible software for all of you. If you have any suggestions, ideas, comments, or tech jokes/puns, please let me know. I’m here to help!

-John Watson