In the game there will be a lot of uncontrollable factors. You can’t control the color of the lights in the arena,
and even if you do get perfect vision sensor code working, your event could always decide to do this:
Yeah, but I don’t recommend using the vision sensor.
What my team and I are planning on doing is making a series of autons. Obviously we would have one for either side of the field and for the different alliance colors, but also a few written for contingencies depending on our partner’s auton. So if they can only score the goal on their side, we could do one to score our goal and the middle, and so on and so forth. I don’t know if this will work out for us the way we intend it to, but we thought it could be a good idea. It felt a bit reminiscent of my old days in RARC; there is always variability in your auton depending on the other person’s auton, but this game feels a bit more drastic seeing as win points will now be a factor in the overall competition itself.
185A in Turning Point:
Did they use it successfully for more than one competition? It seems after people realized that lighting at competitions was wildly different than lighting in their garage, everyone gave up on the sensor.
Yes, they were pretty successful at worlds, and they even got picked and made it pretty far. I think @Ryan_26982E might have something to say about them…
They are by far my favorite team from that year.
And like I said, USC used it throughout the season and only lost one match, and then two at worlds. I also believe they had the highest programming skills score, though I could be wrong.
While I agree that those teams had very impressive bots and skills runs, I do not think that is thanks to the sensor.
I would be very surprised if their autonomous runs actually relied on the sensor, even though it may have been attached to their bot. If they did use it, I’m sure they went through so much pain on the day of the competition trying to get their sensor to work.
That USC video was filmed in a well-lit room. I wonder how it worked on the day of the competition. If it actually did work, then I’m pretty confident that their autonomous did not rely on the sensor, or their programmer was struggling all day to re-calibrate the sensor.
And of course, the sensor does work, I’m just saying that the amount of effort it takes + the bugginess and inconsistency makes it practically useless.
One thing I was planning on doing was putting the vision sensor on the backing of the ball intake so that I could track color and use a macro for cycling that basically puts the alliance color on top. I’m hoping that since the sensor is inside the robot it will be a little more consistent, since the structure will block out a lot of the ambient light.
Yeah, assuming the sensor works, that is probably one of its best uses: close-range detection with more consistent lighting and background. Note that since the camera’s pinhole lens is so small, if you lose a bit too much light the output will lose most of its contrast.
However, the problems and frustrations associated with the configuration process still exist, so best of luck dealing with that.
It worked very well at worlds; I was there watching, and they got a 25, the second best. I’m a trustworthy source for this because my sister is the USC captain.
Just write an auton that does the home row.
That way you can just have your alliance partner disable their auton, either in code or by simply not starting it.
This definitely isn’t effective. Even if your partner has an auton that just gets one goal, you should then have one that gets two, and then one of the middle three.
Lol, the picture is funny.
(I took it)
If you’re going to put it inside the robot, it’s going to get blocked by some part and it won’t recognize the ball properly.
Or you could explain more clearly where you will put it.
Behind a lexan sheet that will be the backing for the intake, and at the bottom of it, because then if it detects a red ball (assuming I’m on the red alliance) I can push the rest of the balls out the top and outtake the front rollers at the same time.
Pretty bad drawing but this is a side view. Blue is structure + rollers + backing. Purple is the sensor. Red is a ball.
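The sorting logic described above could be sketched roughly like this. This is a hypothetical, hardware-free illustration: the function names and RGB thresholds are made up for the example and are not the VEX vision sensor API, which reports detected signature objects rather than raw RGB.

```python
# Sketch of the intake sorting idea: a color reading taken behind the
# intake backing decides whether the detected ball matches the alliance
# color. Thresholds are placeholders you'd tune experimentally.

ALLIANCE = "red"

def classify_ball(r, g, b):
    """Very rough color classification from an RGB reading."""
    if r > 2 * b and r > g:
        return "red"
    if b > 2 * r and b > g:
        return "blue"
    return "unknown"

def sort_action(r, g, b, alliance=ALLIANCE):
    """Decide what the cycling macro should do with the detected ball."""
    color = classify_ball(r, g, b)
    if color == "unknown":
        return "hold"    # lighting too poor to trust the reading
    if color == alliance:
        return "score"   # alliance ball: keep cycling it to the top
    return "eject"       # opponent ball: outtake through the front rollers
```

On a real robot the `sort_action` decision would drive the intake and roller motors; here it just returns the intent so the logic can be tested off-robot.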
Oh, I know that.
But have you considered that there will be lights on the field? Even if you keep the light from hitting the sensor directly, light hitting the ball will still affect the reading.
I have, and honestly I know that’s going to be a problem, so this is going to be heavily experimental. I was thinking of maybe calibrating three tones and shades of red and blue to create a “range” of sorts for the vision sensor, but that’s just a desperate grab at straws. It would’ve been nice if the flashlight were still legal, because then I could overpower the lighting variation of the field, but that can’t happen. To be honest, I don’t really know. If it works it works; if it doesn’t, oh well.
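The “range” idea above could be sketched like this: sample the same colored object under a few different lighting conditions, then accept any reading that falls inside the min/max envelope of those samples. This is just the math of the idea, not the sensor’s signature-configuration tool, and the sample values are invented for the example.

```python
# Build an accept-range for a color from a handful of sample readings
# taken under different lighting, then test new readings against it.

def build_range(samples):
    """samples: list of (r, g, b) readings of the same colored object."""
    lo = tuple(min(s[i] for s in samples) for i in range(3))
    hi = tuple(max(s[i] for s in samples) for i in range(3))
    return lo, hi

def in_range(reading, lo, hi):
    """True if every channel of the reading sits inside the envelope."""
    return all(lo[i] <= reading[i] <= hi[i] for i in range(3))

# Three hypothetical "red ball" samples under different field lighting
red_samples = [(210, 50, 40), (180, 35, 30), (240, 70, 60)]
red_lo, red_hi = build_range(red_samples)
```

The wider the lighting spread in your samples, the looser the envelope gets, which is the trade-off you’d be fighting at an event.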
In LEGO, my team used to calibrate the color sensors by giving them a sample of black and white from the field under the current lighting. Is this possible with VEX?
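The LEGO-style calibration being asked about is just a rescale: sample black and white on the field under the current lighting, then map raw brightness onto a fixed 0–100 scale. Whether a given VEX sensor exposes raw values you can rescale like this depends on the sensor; the sketch below shows only the math.

```python
# Two-point (black/white) calibration: rescale raw sensor readings so
# that the field's black sample reads 0 and its white sample reads 100,
# regardless of the venue lighting.

def make_calibrator(black_raw, white_raw):
    """Return a function mapping raw readings onto a 0-100 scale."""
    span = white_raw - black_raw
    def calibrated(raw):
        pct = 100.0 * (raw - black_raw) / span
        return max(0.0, min(100.0, pct))  # clamp readings outside the samples
    return calibrated

# Hypothetical raw readings taken at the start of the event
cal = make_calibrator(black_raw=120, white_raw=620)
```

Recalibrating at the venue this way is exactly what compensates for the garage-vs-arena lighting difference discussed earlier in the thread.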