Computer Controlled Shooting

This year my team had the idea to have the computer control the shooting of our robot, instead of trusting a human to do it.
Last year we developed a program that would track our robot's location on the field in an x,y coordinate grid, using encoders and a gyro. The idea is that if you know where you are, you can calculate the angle and distance to the goal. Then, using the gyro, your robot can aim itself at the goal, and if you know how far you are from the goal, you can calculate how fast you need to launch the ball to land it in the goal.
What do you guys think: is it possible, and is it worth it?

This idea is definitely worth pursuing. It would practically eliminate the user error that results in missed or low-scored balls. My team was thinking about this idea as well, though with a different approach; if you can pull it off, this looks like a much better way to do it than what we had in mind. Thanks for the post.

If I could ask, what idea did your team have for this?

What kind of drift did your team experience last year, as in how accurate was your calculated position by the end of a match?

We were thinking, rather than sensors that read the field environment, we would have button presses that dictate preset heights for the launcher. This would be a good fallback in the event that your idea is not successful.

We never used it for a full match; it was only ever intended for auton. I made a kind of test program last year, the goal of which was to have the robot perform a complex maneuver and wind up where it started. The test involved about 5 turns and 6 movements, and the robot wound up within about 3 inches of where it started, with the angle dead on. The total distance moved was only about 12 feet, but it did involve lots of rapid turns and movements.

This is also being discussed in the following thread:

https://vexforum.com/t/automatic-robot-navigation-why-it-will-determine-winners-of-nbn-game/29393/1

To the starter of this thread:

Would you mind elaborating a bit more on how you made your tracking system? I am curious about what drivetrain you guys used, and how you guys handled the processing of swing turn data. Also do take a look at this thread; some good discussion and ideas going on.

The system operated on two quad encoders on the back wheels and one gyroscope. Here's how it worked:

I have a "GPS" task where the robot keeps track of its location. Each cycle, this task records the encoder values and then resets them, and records the angle from the gyro. Using the encoder values paired with the wheel circumference, it figures out how far the robot has gone; then, using the angle and a little trig, it can solve how the position changed in x,y coordinates.

To navigate, the robot first figures out where it is and which way it is facing. Then I tell the robot "go to this x,y coordinate and face angle Z," and it figures out what angle it has to travel at and how far. The self-navigation has three phases:

1. The robot turns to face the direction of travel.
2. The robot moves in that direction. Because the encoder values are reset every cycle through the "GPS" task, we can't use them to measure distance, so the "move" function runs a loop that is pretty much "while (distance to target > 0)". Every pass through the loop, it pulls the current robot position from the "GPS" task, calculates the direction it needs to face, and sets the motors to a speed proportional to the remaining distance. Then it loops again, adjusts angle and speed, and so on until it gets there.
3. The robot turns to face the final angle.

The hardest part of the whole thing was actually the part most relevant to this year's game: if the robot is facing angle X and needs to be facing angle Y, how does it determine the fastest way to get there? Essentially, it came down to computing the two possible turns and picking the shorter one. This is what would allow the robot to aim itself at the goal.

The best part about this program is that even if the self-navigation screws up, the robot still knows where it is, even if it didn't go where you wanted it to. Knowing where you are is what matters for this program.

As for handling turns: the "GPS" task actually added the two encoders together and divided by two. If you were going straight, this would not change the overall result, but if left was negative and right was positive (i.e., you were turning in place), they would sum to zero and the robot would know it was just turning, not changing position.

Nice explanation. One question though, for the entire community: how do you deal with swing turns? You can't expect drivers not to swing turn in driver control, even though swing turns can be avoided in autonomous programming.

When both the angle and encoder values are changing, you can't simply record the three change values and say the robot swing-turned this way. In actuality, a swing turn is complicated and the center of the turn shifts. The three output values are not a state function of the robot's position: how the robot got there does matter, so how do we know exactly how the robot produced those three values and got where it is now?

I guess what we are looking for is: during a match, tell me where the robot is; then turn, face the goal precisely, spin up my flywheels to a precise velocity, all done autonomously, and I click a button and shoot.

This is by no means easy to do.

Please define "swing turning" and exactly what that would entail.

As to your question of acquiring a precise launch velocity:
The physics equations are truly terrible, but here is the one to figure out the required launch velocity.

Vo = (x / COS(Theta)) * SQRT(16 / (x * TAN(Theta) + LA - y))

Vo = launch velocity, y = goal height, x = horizontal distance to the goal, Theta = launch angle, LA = launch altitude, i.e., how high your shooter is when the ball is released. (All distances in feet, so Vo comes out in ft/s; the 16 is g/2 with g = 32 ft/s².)

Here is a link to the Excel sheet with that math and a graph of it. Sheet two of the document is a trajectory simulator you can mess with as well. It won't work as a Google Sheet; you need to download it.
https://drive.google.com/file/d/0B1L...ew?usp=sharing

Imagine a tank driver, one joystick controlling each side of the base. The robot basically never goes precisely straight; a difference between the two sides' velocities exists constantly.

How would you still keep track of precise position?

When the absolute velocities of the two sides of the base are not the same, meaning the robot is not turning in place or going straight forward or backward, how would you still precisely process the data and track position?

I can see how recording average velocity over short time intervals and integrating the values would work around this without using a gyro, but hey, we hate integrating because error accumulates.

Edit: actually with a gyro. I wasn’t thinking straight.

As of now, this is not a problem I have addressed, as it never really came up in auton except when the robot adjusted position while driving; in that instance the angle was so slight that the averaging and the Cortex's sample rate kept up with it. I have two ideas right now but will think on it more:

  1. Hope that the Cortex samples fast enough that the triangle divisions of the curve are small enough not to affect the error. But, as you say, accumulated error.

  2. This would take some math, or testing, or calculus, or something over my head at the current time, but if the two wheels travel unevenly, you can somehow calculate the arc they take from the amount of offset. This is sounding more and more like a calculus rate-of-change derivative something-or-other, and I have yet to take calculus, so I will have to ask my smarter friends at school about this.

It seems to me that if your wheels are arcing, you are not moving as far forward as if you were going in a straight line, and by averaging the slower inner wheel and the faster outer wheel you would arrive at something like a true forward motion at a given angle. This will require major testing. Thanks a lot for bringing this up; it is not something I had considered before.

I had an idea: use a standard circular arc to approximate the swing-turn movement over a small time interval and work out the math, basically like an integration approximation of curve length or area with a quadratic curve (the concept behind Simpson's rule). But in this case, dealing with swing turns, we use a circle because the calculation is easier.

Thanks for the inspiration. I will work on it and make another thread discussing this, after doing a bunch of research.

It might be better to have the program just rotate the robot to the right angle and then a secondary driver can control the angling of the launcher with a couple of buttons.

I figured it out, and it turns out you don't even need the gyro for this. If you know the distance between the wheels (which you do), you can take the difference in the distances the two wheels travel and use it to find the radius of the circle the robot is turning on. Each wheel forms its own circle, the inner wheel a smaller one and the outer wheel a larger one; since the radius of the larger circle is x inches greater than that of the smaller one, and the outer wheel traveled x more inches than the inner one, you can figure out the radius of the inner circle. Then you can take the arc of one wheel and find how many degrees that wheel went around its circle. This gives you a triangle with two known side lengths (each equal to the radius, running to the two endpoints of the arc). At that point we can use the law of cosines to find the straight-line distance we traveled and the angle we moved, and from there find how we changed in x,y. With this new system we may no longer even need the gyro. However, I will have to consider what happens in normal turns where one wheel goes positive and the other negative.

The problem with this idea is that we plan to vary the velocity of our launch and not the angle, and it is much harder for a human to guesstimate the speed of the shot based on something like a joystick.

Bingo. Integration just means you do this and add up the coordinates every 20 milliseconds or so.

I have not fully figured out the math yet, but I would say you need the gyro to measure the angle? Or maybe you can prove that you can solve for the angle from the two encoder readings? I am not sure yet, but will make further calculations and updates.

Don't assume the center of rotation is at the midpoint of the two wheels on a two-wheel drive (or at the center of a four-wheel drive with omnis and four driven wheels).

However if you are a holonomic drive you can figure out each wheel based upon encoders and the gyro (if the gyro is acting nice).

Here is a good reference on mobile-robot kinematics:

http://web.cecs.pdx.edu/~mperkows/CLASS_479/2013%20lectures/2012-1809.%20%20Kinematics_mobile_robots.ppt