Computer Controlled Shooting

The proposed idea directly estimates the swing-turn track of a robot. I think I would rather determine the radius of rotation by rotating the robot 10 full times and using a formula to calculate the standard center rotation radius.

Plus, this issue can be simplified by using high-traction wheels in the middle of your tank drivetrain.

Is going through all this programming work really worth the results? Looking back at past FRC experience, might it just be better to tune a shooter to have as flat an arc as possible, giving the largest range of shooting positions that will score, and just practice staying within those ranges? With some good building and tuning, I’m pretty sure a shooter that can make it in from almost any distance is feasible. From there the only matter is facing the right direction, which can come from practice. Opinions or thoughts?

Edit-100th post! :slight_smile:

I am sure that you could practice a lot and make sure that your swing turns are perfect circles, but driving usually consists not of circular motions but of a snake-like motion.

To which games are you referring? Looking back at Aerial Assist, I can see where you are coming from: flat arcs were better because you could adjust distance more readily, and the goals were vertical. However, I don’t think that this concept applies this year due to the goals being angled more horizontally. Instead, looking back at Rebound Rumble seems to give a better indication that shooting with more of an arc and adjusting the firing angle/velocity will be better for adjusting the distance of the shot.

In Rebound Rumble, many teams chose to adjust their shots by changing the arc, as you could not shoot directly into the hoops. This year (NbN) it seems as if you might be able to shoot directly at the backboard and have it fall into the goal (this needs to be tested). If that is true, it would make sense to look at Ultimate Ascent for inspiration, as the shot in that game was similar.

I was mainly referring to aerial assist.

That’s exactly what I was thinking. If, when it’s tested, it works fine to just hit the back netting, I could see that being an ideal solution. And it’d certainly be a lot easier than trying to compensate for higher arcs with advanced coding. Then again, coding is part of the fun…

I’ve been thinking about sensor setups for code similar to this since right after the Skyrise release.

For my test, I plan to use two IMEs and treat all movements as extremely small vectors, and use a gyro as a ‘check’ of sorts to track and allow for the difference between the actual change in angle and the change in angle calculated from the IMEs. This should make it possible to see a more accurate picture of how the base has moved on the field.

I wasn’t able to open the Excel file. Do you happen to have it in a different format? Thanks for the assistance and for sharing.

There are a few problems with using IMEs to track a robot’s heading.

The first is that an IME cannot take into account wheel slip due to a large acceleration. This happens if you suddenly stop the motors while the robot is driving. It can be mitigated by implementing a slew-rate controller, but some slip error is unavoidable in a competition setting.
The second issue is that if the wheel the IME is connected to is not horizontally in line with the center of rotation, it will slip when the robot turns. You could fix both of these problems by having a six-wheeled robot where the center two wheels are passive (not driven) with encoders on them.
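A slew-rate controller like the one mentioned is just a cap on how fast the motor command may change per loop; MAX_STEP here is an assumed tuning value you would adjust for your drivetrain:

```c
/* Minimal slew-rate limiter sketch: limit the change in motor command
   per control-loop iteration so the wheels are less likely to break
   traction. MAX_STEP is an assumed tuning constant. */
#define MAX_STEP 10   /* max command change per ~20 ms loop */

int slew(int current, int target) {
    int diff = target - current;
    if (diff >  MAX_STEP) diff =  MAX_STEP;
    if (diff < -MAX_STEP) diff = -MAX_STEP;
    return current + diff;
}
```

Called once per loop with the last commanded value and the desired value, it walks the output toward the target instead of jumping, which is what reduces the slip the encoders can't see.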

You could do some research on sensor fusion, which may help combine the two (gyroscope and encoders). A complementary filter would be the simplest to implement.
Don’t forget to zero the gyro bias. Letting it sit overnight and figuring out the average drift rate should serve as a good general offset; then performing a quick average before the match starts should remove any thermal factors.

If you are really dead set on getting a nice result, a Kalman filter is probably the way to go, where the error in the heading measured by the encoders is a function of the difference in the magnitudes of displacement for each discrete step. I’d have to do some doodling to figure out the details. It would probably prove to be too much of a hassle for the time and benefit in VEX, though.
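Just to give a feel for the shape of it, here is a bare-bones scalar Kalman sketch for heading alone (predict with the gyro rate, correct with the encoder heading). The process and measurement variances Q and R are placeholders you would have to tune, and a real version would make R grow with the left/right displacement mismatch as suggested above:

```c
/* Tiny scalar Kalman sketch for heading. Q (process noise) and
   R (measurement noise) are assumed values, not tuned ones. */
typedef struct { double x, p; } Kf;   /* estimate and its variance */

void kfPredict(Kf *k, double gyroRate, double dt, double q) {
    k->x += gyroRate * dt;   /* integrate gyro rate */
    k->p += q;               /* uncertainty grows each step */
}

void kfUpdate(Kf *k, double encHeading, double r) {
    double gain = k->p / (k->p + r);          /* Kalman gain */
    k->x += gain * (encHeading - k->x);       /* pull toward measurement */
    k->p *= (1.0 - gain);                     /* uncertainty shrinks */
}
```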

Listen to AURA and use the large red encoders. The IME is ideal for a smart motor library and for directly monitoring the condition of the motor (estimated current, instantaneous velocity, PTC temperature, etc.), but because it sits before the compound gear reduction inside the motor, an IME cannot measure actual displacement as accurately as the big red encoder.

When you are doing a cumulative position-tracking calculation, the slack and error the IME picks up every time you rapidly change the direction of the motor might just be too overwhelming for a precise calculation.

But what happens if these two center encoder wheels are not at the actual point of the robot’s rotation?

Like the second point said, if the wheels aren’t in line with the point of rotation you will have errors due to slip. Now, these center encoder wheels don’t have to be physically centered on the robot; you can put them wherever you need to fit this criterion. In general, your point of rotation will be between the front and rear wheels. However, as your COM shifts, so does the point of rotation (this is a problem with non-linear lifts). This shift away from your encoder wheels is another source of error. You can mitigate it by having your encoder wheels not be omniwheels; this will help draw the point of rotation back closer to the encoder wheels if the COM shifts, at the cost of a performance penalty due to friction.

I like the idea of passive traction wheels with red encoders. They seemed to work great in toss up (I was rewatching a reveal of our 6 wheel tank drive from toss up just last night). As soon as I have time to throw together some code and a drive base I’ll post my results.

Thanks for everyone’s input!

Accelerometer + gyro = problem solved?

Doesn’t the accelerometer give you 3-axis acceleration? Integrate acceleration over time for velocity, then again for distance traveled? Then the gyro tells you what your heading is? = position on field?

Tried that in Toss Up. They had trouble figuring out what the integration period for the accelerometer should be, and I had no good answers.

How many milliseconds is that value from the accelerometer good for to properly integrate? You could be really off if you go too infrequently.

So noise there, noise due to wheel slippage on the encoders/IME, weird angles for the sonar, the tape is 3/4" wide so are you on the leading or trailing edge of that, gyro drift, etc. Hmmm. Lots of decisions to make to know your real truth.

Bumping into the wall and line followers are absolute truths. The rest can get murky.

With the location tracking code, it is also important to be able to recover a reading after being rammed by another robot. Otherwise, an opposing alliance could easily mess your entire program up.

The problem with accelerometers is their large level of noise with respect to the measured values. This noise should be relatively Gaussian over a long interval, but when you are integrating over millisecond lengths of steps, this isn’t the case. You can reduce this error by making your integration intervals longer, but what error you lose by doing this, you gain because your integration is becoming less “continuous.”
Accelerometers become much more useful during high acceleration (impact) where the white noise error is dwarfed by the measured value. If you have another sensor to normally track your robot, you could fuse the two together by a weighted average that depends on the accelerometer reading.
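The weighted-average idea above might look something like this; the impact threshold and the two weights are made-up numbers purely for illustration:

```c
#include <math.h>

/* Sketch of fusing an encoder-based position estimate with an
   accelerometer-based one, weighting the accelerometer more heavily
   only when a large acceleration (likely an impact, where the
   encoders slipped) is seen. IMPACT_G and both weights are assumed
   values, not tuned ones. */
#define IMPACT_G 1.5   /* g's; above this, suspect wheel slip */

double fusePosition(double encPos, double accelPos, double accelMag) {
    /* heavy accelerometer weight during impacts, light otherwise */
    double w = (fabs(accelMag) > IMPACT_G) ? 0.8 : 0.05;
    return w * accelPos + (1.0 - w) * encPos;
}
```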
There are so many ways to solve the problem of position tracking. In VEX you are limited by the hardware at hand, however it is still doable. You just have to start simple and try more complex methods until you get one that is “good enough.”

Was the problem that the Excel file was bad or corrupted, or that you couldn’t access it?
Or is the problem that you don’t have Excel on your computer? If that is the case, what do you have?

I will probably ask jpearman or a ROBOTC technician, but how is integration usually performed in any sort of programming environment?

I would guess it is old-fashioned rectangular integration, but can the Cortex handle trapezoidal or Simpson’s quadratic-curve approximation every 20 ms or so? And will this decrease the error of the double integration performed on the accelerometer?
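For what it's worth, the per-sample cost difference between rectangular and trapezoidal integration is tiny (one extra add and a halving), so the Cortex should handle either easily; how much it helps with accelerometer noise is a separate question:

```c
/* Rectangular vs trapezoidal integration of a sampled signal,
   one step per control-loop tick (e.g. dt = 0.02 s). */
double rectStep(double integral, double sample, double dt) {
    return integral + sample * dt;
}

double trapStep(double integral, double prevSample, double sample, double dt) {
    /* average the current and previous samples across the interval */
    return integral + 0.5 * (prevSample + sample) * dt;
}
```

You would call one of these for acceleration to get velocity, then again on velocity to get position, keeping the previous sample around for the trapezoidal version.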

IDK and Jpearman might pop up and answer this question… :slight_smile:

OK, so as I posted earlier, I have figured out the mathematical way to handle a swing turn: you take the difference in encoder values, paired with the known distance between the wheels, to find the radius of the circle you are traveling on and how many degrees you have traveled. You then pair this with the gyro value to find the exact angle of the path you took, and the law of cosines will give you the distance.
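That geometry can be sketched as follows; TRACK_WIDTH is an assumed wheel-to-wheel distance, and this version just returns the swept angle and turn radius (the chord-length/law-of-cosines step would come after):

```c
#include <math.h>

/* Swing-turn geometry sketch: from the two side arc lengths (from
   the encoders) and the known track width, recover the angle swept
   and the radius of the circle the robot's center traveled on.
   TRACK_WIDTH is an assumed value in inches. */
#define TRACK_WIDTH 12.0

/* returns angle swept in radians; writes the turn radius via *radius */
double swingTurn(double leftArc, double rightArc, double *radius) {
    double theta = (rightArc - leftArc) / TRACK_WIDTH;
    if (fabs(theta) < 1e-9) {      /* driving straight: no turn circle */
        *radius = 0.0;
        return 0.0;
    }
    *radius = (leftArc + rightArc) / (2.0 * theta);
    return theta;
}
```

As a sanity check: with a 12" track, arcs of 10" and 22" give a 1-radian sweep on a 16"-radius circle, since the inner wheel then travels (16 − 6) × 1 = 10" and the outer (16 + 6) × 1 = 22".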

The problem I encounter now is this: what happens when one side goes in the negative direction and one goes positive? I don’t know enough about how the robot would behave here to even guess at any math.

For example, what would the robot do if the left half of the drive went full power forward while the right went half power in reverse?