Automatic robot navigation - why it will determine the winners of the NbN game

Based on the recent activity on the forum and what I’ve been observing during our team’s brainstorming sessions - everybody is talking launchers, game strategy, and then some more launcher ideas. I must admit that I am just as guilty as everybody else - launching projectiles is much more exciting than figuring out how to program VEX line trackers.

I hope that more experienced teams have already realized, and everybody else will realize once they put their launcher on wheels, that automatic robot navigation and orientation will be the key to success in the NbN game.

If I had to guess one feature that will be universal among the teams that make it to the NbN World’s finals - that will be it. You could design various good launchers and intakes, but if you don’t know, at all times, which direction to shoot at the high goal, you will not make it to the finals.

Luckily, some people have already started the groundwork. I cannot post anything to that thread, so I’ll answer it here.

In my experience, VEX accelerometers are only useful for detecting tilt and collisions with other robots.

Ultrasonic sensors are only reliable in controlled circumstances - such as directly facing a flat surface. They will not work on curved or soft surfaces, or at an angle. I am not going to spill all the secrets from our brainstorming, but we are planning to try ultrasonic sensors during autonomous for ****** (redacted / top secret).

The best way to track your position and orientation would be through quad encoders and a gyro. See the AURA and QCC2 threads for fresh examples.

Then you need to apply some filtering to your position tracking. Every time you cross a white line with a line tracker, you will need to apply a correction to your estimated position.

With VEX game objects not available for a while, I think everybody should let launcher designs rest and spend some time poring over the QCC2 and AURA code, as well as anything related to gyros, quad encoders, and line trackers.

I don’t have experience tracking quad encoders on a mecanum drive, but I am sure somebody else could share code or give some insights.

The red and blue starting tiles are very close this year.
There are no major obstacles between the two sets of tiles.
One line of drive-forward code could easily render your fifty-plus lines of ball-scoring code useless, simply by ramming into your robot in auton, or into the ball pyramids you plan to pick up in your routine.

I am not a great fan of this strategy, but it is a very effective one.

Might need to make use of that accelerometer after all, so you can detect that you have been pile-driven halfway across the field.

Does this qualify as a conflict to resolve? I am kind of old school, so I just call it a problem to solve. :slight_smile:

Pneumatic brake. Design one that has power, requires a lot of air, and will not be used frequently, but can virtually lock you down on the field. Plus a PID velocity-control brake for the base.

Holding position will be crucial, whether in auton or driver control.

Sometimes it is not about opponents crashing into you.
It is also about your opponents knowing your movements beforehand and getting their robots there before you do, or before you can activate your brake.

I believe Torqueative is speaking from experience - anyone remember all those disruptive robots in Round-up?

Thank you for moving this here. I didn’t know that others couldn’t respond. I do agree that automated driving systems will be key this year. If anybody knows how you strafe with mecanum wheels using Integrated Motor Encoders, please do let me know.

It is exactly the same as driving forward with any other wheel, except with a few motors inverted. Mecanum wheels aren’t magical. The code just needs to set the motors and read the IMEs. Any addition of P or PID, or even multistage PID with a gyro running a Kalman filter, is coded exactly the same as it is normally.

Some of the motors should be given the inverse of the goal value rather than the goal itself to ensure they spin the right way, but that’s just a few negative signs.

Yes, but like I said in the explanation, how would I use the motor encoders to know how far my robot has gone sideways? I’m basically making the field a virtual graph. Each encoder tick forward increases the Y value, but how would I manipulate the X value?

The best way is to compute a vector from the encoder changes, but the easiest is just to change X when the strafe function is called and Y when the drive-forward function is called.

Something you could do is write your code so that when your wheels are spinning in a direction that would mean your robot is strafing, you can have your X coordinate change in place of your Y coordinate. However, this may get a little more tricky than you think, since a point turn would change the direction your robot needs to go to travel along the X or Y axis you initially set up.

You should be able to take the core math we use to calculate wheel speed, i.e.

    // Set drive
    drive_l_front = forward + turn + right;
    drive_l_back  = forward + turn - right;

    drive_r_front = forward - turn - right;
    drive_r_back  = forward - turn + right;

and rearrange that to calculate forward, turn, and right based on encoder input. The tricky part is that there is a lot of wheel slippage during strafing, so the theoretical position will not match the actual position; you will need to compensate for that.
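That rearrangement can be sketched as follows. This is a hypothetical illustration, not anyone's competition code - the names mirror the mix above, and the inputs are encoder tick deltas since the last loop pass:

```c
/* Robot-frame motion components recovered from the four wheels. */
typedef struct { double forward, turn, right; } Motion;

/* Invert the standard mecanum mix:
 *   lf = f + t + r      rf = f - t - r
 *   lb = f + t - r      rb = f - t + r
 * Summing and differencing the four equations isolates each component. */
Motion decode_mecanum(double d_lf, double d_lb, double d_rf, double d_rb)
{
    Motion m;
    m.forward = (d_lf + d_lb + d_rf + d_rb) / 4.0;
    m.turn    = (d_lf + d_lb - d_rf - d_rb) / 4.0;
    m.right   = (d_lf - d_lb - d_rf + d_rb) / 4.0;
    /* Roller slip means m.right reads larger than the true sideways
     * distance; a measured scale factor is usually needed. */
    return m;
}
```

For example, deltas of (+1, -1, -1, +1) on (lf, lb, rf, rb) decode to pure rightward strafe with no forward travel or turning.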

Mixing rotation and translation in the X-Y plane makes it trickier to maintain the exact field position of your shooter. (We’re lucky this is only two dimensions.)

If you really want to do both at the same time, the change in rotation should be factored out first so you can get the true displacement of the shooter relative to the target. A change in the robot’s rotational angle produces an arc, based on the radians rotated, that shows up in your encoders along with the translational displacement. That arc needs to be broken into X and Y components and subtracted from the robot’s displacement in those directions. Then you will know your new field position and what you need to do to adjust the shooter.

I encourage our folks to try one set of movements at a time autonomously, as it makes the math so much easier. Do X and Y displacement together, and keep rotation as its own movement.

I’m thinking the best way to do this could be to use sensors not connected to the drive wheels. A gyro (yes, I know it has some issues) for rotation, and quad encoders connected to two non-driven wheels in the center of the robot - think old-school mechanical mouse. One wheel/encoder measures Y, the other X.

Could you replace the gyro with two non-powered wheels/quad encoders to measure rotation?

I completely agree, those are valid points based on very real experience. My thinking goes like this: if you have implemented automatic navigation, then you have a chance; if you didn’t, then you have none. The best teams will go the extra mile to be as competitive as possible, so they will have it.

When our team brainstormed, we discussed a combination of defensive and offensive strategies. It is clear that both need to play a role here. The kids have come up with a number of very good “playground”-type strategies. As for opponents knowing what to expect - we have brainstormed several ideas. For example, if ****** ***** **** ****** *** then ** **** ***** * **** ******** **** *** ******** (redacted/top secret), and good luck dealing with that!

Even though some of those strategies will not be easy to implement, I am sure the programming part will be very educational and the matches will be fun to watch.

In theory it would work, but the gyro was recommended because the only data it acquires is rotational, whereas additional wheels with encoders pick up translation data mixed in with the rotation.

Also, at some point you will take all the weight off the powered wheels, and the robot will just have wheel spin.

I’m new to programming this year, and my team was thinking something similar to the old-school mouse style: two free-spinning wheels on encoders, one for the Y axis and one for X, so that we know our location on the field. The only problem is that whenever we turn, it will mess up the values. I can see how a gyro would tell us how much we turned and how to correct our values, but how would we make the two pieces of data coincide?
Our idea was to use two high-speed spinners to launch the balls, and to lower the motor speed to decrease the launch distance as we got closer to the goal. We were thinking maybe having a button that, using the robot’s knowledge of where it is on the field, would automatically turn it to face the goal and shoot the ball at the proper speed to make it into the goal (of course, we would need the correct trajectory formulas to do so). Basically, in short: does anyone have any idea how to make the two pieces of information (gyro and encoder) coincide?

I guess you are referring to this thread which I started to introduce some TRIZ basics after we hit a wall in this one.

I had taken an introduction to TRIZ as an elective graduate-level class, before the total domination of the computers. The textbook was boring, but the instructor was very good and gave us a ton of great examples.

Today, the only people I personally know who are using anything of this sort are my fellow EEs, who mostly work in software. The only exception is a friend of a friend who works for Boeing; they use it semi-formally at the project-team level.

I use it daily and don’t know what I would do without such a valuable tool. But I have never seen it used at the company level, and I have never met a CS graduate who is aware of any similar methodology, by this or any other name.

I am most curious to know whether similar methods are taught in NZ or other countries. Has anyone else used anything like that before?

As someone who played through Round-up, I have seen first hand how effective this can be. Great auton teams like 1103 (Josh Wade) were just shut down. In VEX-U this should get interesting…

Offense will require good navigation as well. What is the point of having a fast 8-motor drive if you end up pushing into a wall 45 degrees off course half of the time?

Collisions are inevitable. Those who can successfully recover from them will have the best scores.

Everyone who is saying that if you get hit you are ruined is so wrong. A good programmer has code written as a safeguard to make sure outside interference won’t hurt it. Saying that we won’t be able to make accurate code, for whatever reason, just fuels us to make better code to prove you wrong. Just keep thinking what you’re thinking, and I won’t be seeing you at Worlds. :wink: