(Diagram legend from the original post: black = bot, red = vision sensor)
From that distance, could you tune a vision sensor to auto-align the bot to the goal, maybe by making the vision sensor track the red color on the goal and, once it sees it, auto-align the bot to it? Or is the camera in the vision sensor such poor quality that it can't track the goal from that distance?
It is possible. At my last comp there was one team that did this, and they made something like 80% of their shots.
The vision sensor's quality is not the best (316x212 pixels), but that doesn't matter too much because it tracks colors rather than fine detail. The vision sensor has enough range to track the goal from across the field if configured correctly. You should run vision sensor tests for yourself and document your results in your engineering notebook.
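For those tests, a minimal sketch like the one below can help; it assumes a vision sensor named Vision1 with a signature Vision1__SIG_1 configured in the devices menu, and just prints what the sensor sees to the Brain screen.

#include "vex.h"
using namespace vex;

// Test sketch: assumes Vision1 and Vision1__SIG_1 exist in your device configuration.
int main() {
  while (true) {
    Vision1.takeSnapshot(Vision1__SIG_1);        // look for the configured signature
    Brain.Screen.clearScreen();
    if (Vision1.largestObject.exists) {
      Brain.Screen.printAt(10, 40, "cx=%d  w=%d  h=%d",
          Vision1.largestObject.centerX,
          Vision1.largestObject.width,
          Vision1.largestObject.height);
    } else {
      Brain.Screen.printAt(10, 40, "no object");
    }
    wait(100, msec);                             // update about ten times per second
  }
}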
My aimbot is just a PID, which is the word I will use for any closed-loop feedback controller where a sensor is used to drive a motor. However, my aimbot is only a PD controller, in that the integral term is not used. For those new to PID, the terms proportional, integral, and derivative might be confusing, so just think of them as Push (proportional), Increaser/Iterator (integral), and Damper (derivative).
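In equation form, a full PID output is output = Kp*error + Ki*(running sum of error) + Kd*(change in error); a PD controller like this one simply drops the Ki term.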
In my program, the Push term is based on my error, which represents how far the object.centerX is from the middle of my camera image. I set that middle to 160, but you can set it to whatever you need, even changing it on the fly if things get out of alignment.
The Damper term is based on the speed, where speed = change in position / time. However, since my program runs in cycles of 50 ms (with a wait at the end of the loop), I forgo the "/ time" since it never changes, and things work just fine. There is no need for an integral/increasing term (sum += err) or a feed forward (output += 5 or whatever), but you can try them out. I really just wanted to write the shortest possible aimbot, not a good aimbot. So here is the code:
Vision1.takeSnapshot(Vision1__SIG_1); //Take a picture
err=160-Vision1.largestObject.centerX; //160 is my desired value.
speed=err-lasterr;
lasterr=err;
pidout=err*.08+speed*.12; //I directly set my kp and kd without variables for brevity.
if (!Controller1.ButtonX.pressing()||!Vision1.largestObject.exists) {pidout=0;} //zero out my pid if I am not holding the "PID" button (X), or if it doesn't see an object.
drive=Controller1.Axis3.position()*.12;
turn=Controller1.Axis1.position()*.12 - pidout; //add pidout, though I subtract since otherwise the robot would turn the wrong way
leftWheels.spin(forward,drive+turn,volt); //one side adds turn, the other subtracts it; flip the signs if your robot turns the wrong way
rightWheels.spin(forward,drive-turn,volt);
wait(50,msec); //the 50ms loop time mentioned above
You will need to declare lasterr = 0 before your while loop, and make every variable a double, no ints allowed.
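If it helps to see everything in one place, here is a minimal self-contained sketch of the same loop. The device names (Vision1, Controller1, leftWheels, rightWheels) and the signature Vision1__SIG_1 are assumptions that must match your own robot configuration, and the gains will need retuning for your robot.

#include "vex.h"
using namespace vex;

// Assumes Vision1, Controller1, leftWheels, and rightWheels are created in
// your robot configuration; rename them to match your own setup.
int main() {
  double err = 0, lasterr = 0, speed = 0, pidout = 0;   // doubles, no ints
  double drive = 0, turn = 0;

  while (true) {
    Vision1.takeSnapshot(Vision1__SIG_1);               // take a picture
    err = 160 - Vision1.largestObject.centerX;          // 160 = desired centerX
    speed = err - lasterr;                              // change in error per 50 ms loop
    lasterr = err;
    pidout = err * 0.08 + speed * 0.12;                 // kp and kd, tune for your robot

    // only steer from the camera while X is held and an object is visible
    if (!Controller1.ButtonX.pressing() || !Vision1.largestObject.exists) {
      pidout = 0;
    }

    drive = Controller1.Axis3.position() * 0.12;
    turn  = Controller1.Axis1.position() * 0.12 - pidout;

    leftWheels.spin(forward, drive + turn, volt);
    rightWheels.spin(forward, drive - turn, volt);

    wait(50, msec);                                     // 50 ms loop time
  }
}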
So where do you go from here? After implementing and tuning it on your own robot, you can take it to the next level by:
- Incorporating the aimbot into your autonomous.
- Filtering your objects to reject red and blue license plates (a rough sketch of this and of auto-firing follows this list).
- Making your robot automatically fire when the error is small enough.
- Cleaning up your camera image by comparing multiple snapshots to find the one with the best image.
- Using multiple cameras.
- Adjusting parameters on the fly with a secondary controller.
- Combining with distance sensors for range or to count the number of discs.
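As a rough illustration of the filtering and auto-fire ideas, the lines below could replace the zero-out check inside the loop above. The minimum size of 30 pixels, the 5-pixel firing threshold, and the flywheelReady()/fireDisc() helpers are all made-up placeholders you would swap for your own values and code.

// ignore small detections (license plates, noise) by requiring a minimum blob size
bool goalSeen = Vision1.largestObject.exists &&
                Vision1.largestObject.width  > 30 &&    // 30 px is a guess, measure your goal
                Vision1.largestObject.height > 30;

if (!Controller1.ButtonX.pressing() || !goalSeen) { pidout = 0; }

// auto-fire once the robot is pointed close enough at the goal
if (goalSeen && fabs(err) < 5 && flywheelReady()) {     // 5 px threshold and both helpers are hypothetical
  fireDisc();
}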
If any team has made this work, could you please send us your vision sensor configurations?
You could test the default configurations in the devices menu. They are available here: VEX Visual Studio Code Extension - #93 by jpearman
Not sure what code platform you are using, but here's a tutorial for VEXcode Pro:
Blocks: Vision Sensor - Using the Vision Sensor - Blocks-based | VEX Education
Which team was this?
We tried aimbotting with the vision sensor, but the color signature was too hard to make accurate in all lighting conditions.