My team’s robot currently shoots differently depending on the battery and its position. A mentor told me it would be good to program our robot to change the flywheel power according to our battery voltage. How do I do this? First thing to know: we have 4 motors on our flywheel and our gear ratio is 8317. Please give me some sample code, or describe in detail how I could do it. Thank you
Instead of going off of the battery, I would recommend just implementing some kind of velocity controller.
We have already tried that, but because our code is running off of something I can’t talk too much about, that won’t work. So if you could provide a way to calculate the motor speed needed to reach the net based on the battery voltage, that would be great. Thanks for the response, though!
You can access the current battery level in RobotC through the “nImmediateBatteryLevel” variable. It reports the value in millivolts (thousandths of a volt), so make sure to divide it by 1000 to get volts. You may then need to test what distances you achieve at different battery levels and build a compensation routine from there. Do with this what you want.
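As a rough sketch of one way to use that reading, here is a plain C function that scales a base motor command by the ratio of nominal to measured battery voltage. The 7.2 V nominal figure and the clamping are my assumptions; in RobotC the millivolt value would come from nImmediateBatteryLevel.

```c
/* Hypothetical compensation: scale a base motor command by the ratio of
   nominal battery voltage to the measured voltage. In RobotC you would
   read millivolts from nImmediateBatteryLevel; here it is a parameter. */
int compensateForBattery(int baseCmd, int batteryMillivolts)
{
    const double nominalVolts = 7.2;            /* fresh VEX 7.2 V pack (assumed) */
    double volts = batteryMillivolts / 1000.0;  /* mV -> V */
    int cmd = (int)(baseCmd * nominalVolts / volts + 0.5);
    if (cmd > 127) cmd = 127;                   /* clamp to the Cortex motor range */
    return cmd;
}
```

So a drained pack gets a proportionally larger command; whether a simple linear scale is accurate enough is something you would have to test.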
What you are asking for is very complicated; I would recommend using a PID velocity controller, which has worked very well for my team.
In terms of calculating the speed you need to shoot based on distance and battery voltage, that’s very difficult.
First, for distance: the speed you need to launch at comes from standard projectile motion:
Vo = (X / cos(LaunchAngle)) * sqrt( g / (2 * (LA + X * tan(LaunchAngle) - Y)) )
where LA is the launch altitude (how high your shooter is from the ground),
Y is the height of the goal, X is the distance to the goal, and g is gravity (32 ft/s² if you work in feet, 9.8 m/s² in meters).
So plug those numbers in and this will tell you how fast to launch the ball.
In order to account for battery voltage, you will need to do some testing.
You can get the battery voltage in your code, as someone else said.
What I would do then is plug a Motor Controller 29 into your Cortex and send it a constant signal, say 127. Then, at different battery voltages, measure the voltage across the outputs of the Motor Controller 29 with a multimeter.
So if at a signal of 127 and full charge you measure some voltage, and you collect data points at other battery charges, you can build an equation for how battery charge affects the voltage reaching the motor. I am not totally sure of the exact relationships here, but a DC motor’s free speed is roughly proportional to the voltage applied to it. So you can say: if at full charge I need to send the motor a signal of 80, then at half charge I need to send it some larger signal X, because with the data you gathered you can predict how fast the motor will spin for a given signal and battery charge. Then, if you need your motor to spin at speed V and you know the battery voltage, you should be able to calculate the signal to send.
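One way to turn that test data into code is a small interpolation table. The calibration points below are invented for illustration; you would replace them with your own measurements of which command hits the target at each battery voltage.

```c
/* Hypothetical calibration table: motor command needed to hit a fixed
   target flywheel speed at various battery voltages, gathered by the
   testing described above. The numbers are made-up placeholders. */
typedef struct { int millivolts; int command; } CalPoint;

static const CalPoint kCal[] = {
    { 6400, 110 },
    { 6800,  98 },
    { 7200,  88 },
    { 7600,  80 },
};
static const int kCalLen = 4;

/* Linearly interpolate the command for the current battery voltage,
   clamping to the table's endpoints outside the measured range. */
int commandForVoltage(int millivolts)
{
    if (millivolts <= kCal[0].millivolts)         return kCal[0].command;
    if (millivolts >= kCal[kCalLen-1].millivolts) return kCal[kCalLen-1].command;
    for (int i = 1; i < kCalLen; i++) {
        if (millivolts <= kCal[i].millivolts) {
            float t = (float)(millivolts - kCal[i-1].millivolts)
                    / (kCal[i].millivolts - kCal[i-1].millivolts);
            return (int)(kCal[i-1].command
                       + t * (kCal[i].command - kCal[i-1].command) + 0.5);
        }
    }
    return kCal[kCalLen-1].command;
}
```

A table like this avoids committing to a particular curve shape: you only interpolate between points you actually measured.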
May I ask what units are used for LA, Y, X, and Vo?
Also, does the lowercase “x” in the equation refer to X, or is it something else?
Also, to determine distance you could use an ultrasonic sensor. But won’t balls on the field affect the reading of the ultrasonic sensor? (Unless there is another way to determine distance without an ultrasonic sensor.)
It doesn’t matter which units you use for the variables, as long as they are consistent and the gravity constant matches: if you use feet with g = 32 ft/s², your answer will be in feet per second; if you use meters with g = 9.8 m/s², it will be in meters per second.
All of the X’s represent the distance to the goal; that one is just lowercase by accident.
In terms of determining distance, that’s difficult.
You can’t really use ultrasonics, because the goal is mesh and messes with them.
You can do one of two things:

1) Just have preset firing locations at known distances, or

2) Have some method of keeping track of your robot’s position on the field; for this you could use encoders and a gyro to track your robot’s displacement from its starting position.
You could just set it so that when the battery voltage is between two preset points, the flywheel motors spin at a preset value. You could find the values by trial and error, or by using the equation mentioned above.
Pseudo code:
// nImmediateBatteryLevel reports millivolts
if (nImmediateBatteryLevel >= 7000 && nImmediateBatteryLevel <= 7200) // 7.0 to 7.2 volts
{
    flyWheelMotorSpeed = 80; // or whatever value you have figured out
}
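Extending that idea to several bands, here is a sketch in plain C. The band edges and commands are made-up placeholders you would find by testing; in RobotC the millivolt reading would come from nImmediateBatteryLevel.

```c
/* Sketch of the preset-band idea: map the measured battery voltage
   (in millivolts) to a flywheel command found by trial and error.
   All band edges and command values are illustrative placeholders. */
int flywheelCommand(int millivolts)
{
    if (millivolts >= 7800) return 75;
    if (millivolts >= 7400) return 78;
    if (millivolts >= 7000) return 80;   /* e.g. the 7.0-7.4 V band */
    if (millivolts >= 6600) return 84;
    return 90;                           /* badly drained pack */
}
```

Bands are cruder than interpolation but very easy to tune on the practice field: just add a band whenever you notice the shot falling short.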
Is it possible to just use encoder readings to replace the gyro reading?
It is possible; I will try to post a more detailed explanation later tonight.
I would imagine you could, but it would be less accurate and subject to drift. Remember, the gyro measures the robot’s rotation directly, while the encoders measure the rotation of the wheels, which can slip. Ideally you would use encoders, a gyro, and an accelerometer.
Aren’t gyros themselves also subject to drift? From what I’ve heard, gyros can be quite notorious for drifting.
Along with that, the ideal solution would have the robot run a Kalman filter for sensor fusion. Several drifting sensors fused together can drift less than any single sensor on its own.
I’ve heard of that, but has anyone actually successfully implemented a Kalman filter in RobotC?
Writing the code for a Kalman filter isn’t the issue; it’s the sensor selection available to non-VEXU teams. When calculating heading or position during the innovation step, there are no driftless sensors available, regardless of noise levels, that have a Gaussian error distribution. Also, when trying to calculate position on the field the problem becomes nonlinear, and the optimal nature of the Kalman filter is lost.
Don’t get me wrong, a Kalman filter is a major step in the right direction for solving the localization issue. However, I don’t think it will provide precise enough results over a 2-minute period of robots crashing together for most launcher range calculations to still be valid. Of course, that depends on how sensitive your launcher is to changes in range.
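For the curious, here is a minimal one-dimensional Kalman filter in plain C that fuses a gyro rate (prediction) with an encoder-derived heading (correction). The noise values are illustrative, not tuned for real VEX sensors, and a 1-D filter like this sidesteps the nonlinear full-position problem discussed above.

```c
/* One-dimensional Kalman filter sketch for heading: predict with the
   gyro rate, correct with a heading derived from drive encoders.
   Q and R are illustrative noise variances, not measured values. */
typedef struct {
    double theta;  /* heading estimate (rad) */
    double P;      /* estimate variance */
    double Q;      /* process (gyro integration) noise added per step */
    double R;      /* measurement (encoder heading) noise */
} Kf1D;

void kfPredict(Kf1D *f, double gyroRate, double dt)
{
    f->theta += gyroRate * dt;  /* integrate the gyro */
    f->P += f->Q;               /* uncertainty grows while predicting */
}

void kfUpdate(Kf1D *f, double encoderHeading)
{
    double K = f->P / (f->P + f->R);              /* Kalman gain */
    f->theta += K * (encoderHeading - f->theta);  /* blend in measurement */
    f->P *= (1.0 - K);                            /* uncertainty shrinks */
}
```

The estimate always lands between the gyro prediction and the encoder measurement, weighted by how much each is currently trusted.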
One thing you can do is use the Pythagorean theorem to determine the distance. Assuming the robot is on the white diagonal line going through the center of the field and your goal, this is the equation:
a = √2 · (12 − s)
s = distance from the robot to the back wall (the wall the driver is next to), in feet.
a = distance to the goal, in feet.
You can also do this for anywhere on the map, but that equation is more complicated, requires 2 ultrasonic measurements, and is more likely to be interfered with.
a = √(x² + (12 − y)²)
x = distance from the robot to the right wall (the wall adjacent to your goal and the opposing alliance’s base).
y = distance from the robot to the back wall (the wall adjacent to both bases).
You could measure y in the opposite direction (from the far wall) and change the equation to √(x² + y²); however, this will result in you trying to read an ultrasonic value off of the goal when you are close to the right wall.
The way I’m thinking of figuring out how far away the goal is: mount 2 sonars at a 90-degree angle, plus a gyroscope, on a motor-controlled axle, with an IME or encoder on the motor. The motor keeps the sonars pointed at two of the walls using the gyroscope’s data; the sonars are mounted 8 inches off the ground so they only read walls, robots, and goals. Your x and y are what the sonars read, and the encoder or IME tells you the direction the robot is facing. The sonars might have a bit of inaccuracy, but drift from 1 second ago doesn’t affect the current value, which would make this a lot more accurate than an accelerometer and encoders at the end of the match. The gyroscope is what I’m mostly worried about. After getting x and y, it’s child’s play to get the distance to the goal.
There is a program (made by moderator jpearman) that can do this automatically. It can be downloaded here. Just make sure to read the included PDF so that it is formatted right.