Does the vision sensor have any real use or viability in Tower Takeover? I can’t think of anything it could really do.
You could possibly use it to precisely grab cubes in autonomous, but I don’t think it would make a big difference
Yeah, I’m only using it in auton
In autonomous you already know where each cube is and what color it is. It would only be useful for alignment, or possibly for seeing what color cubes the other teams scored in the towers.
Giant intake that mows through everything and stacks the one color you want.
Yeah, that was the only thing I really thought of, but I feel like a bumper switch or an ultrasonic in the intake would be equally adequate for making sure you have the cube.
My thought with the vision sensor this year was to count how many cubes of each color you have scored, so that towards the end of the match you could print to the controller screen which cubes will be the most valuable in towers.
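The counting idea above boils down to simple logic once you have the tallies: each cube placed in a tower adds a point to every scored cube of the same color, so the most valuable color to tower is whichever one you've scored the most of. A minimal sketch, assuming you keep running per-color counts (the struct and function names here are mine, not from any VEX API):

```cpp
#include <string>

// Hypothetical running tallies of cubes scored per color, updated each
// time the vision sensor confirms a cube passing through the intake.
struct CubeCounts {
    int orange;
    int green;
    int purple;
};

// Each cube in a tower adds one point to every scored cube of that color,
// so towering the color you've scored the most of gives the biggest swing.
// Ties arbitrarily favor orange, then green.
std::string bestTowerColor(const CubeCounts& c) {
    if (c.orange >= c.green && c.orange >= c.purple) return "orange";
    if (c.green >= c.purple) return "green";
    return "purple";
}
```

Near the end of the match you would print the result of `bestTowerColor` to the controller screen; the hard part, as noted below, is getting reliable counts in the first place.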
My issue with that is that it’d be hard to count while in motion, and it sucks to have to, like, turn around for it to count. I think it’d be easier to use a color sensor to count cubes as they come in, but even then you can’t really track which ones didn’t land scored, which ones you towered, etc. I think it might be best to at least partially dedicate a coach to figuring out what colors to take.
EDIT: I’m stupid, color sensor is IQ. Even so though, using a vision sensor for that doesn’t seem especially viable.
I don’t know much about the vision sensor because our team is just getting our V5s now, but how practical would it be to use the vision sensor to detect the cubes in towers and the ones that are stacked in the other alliance’s scoring zone? I was thinking specifically for autonomous: use the vision sensor to look at the towers and the other alliance’s scoring zone, then decide which routine to run based on how much it would help you versus the other team. I also heard that one team was planning to use the vision sensor to display the number of cubes stacked for each team on the controller. Would that be possible?
It doesn’t seem impossible, but I don’t think it makes sense to wait for the other team to place some cubes before your auton starts doing anything. Our plan is to send a team member out beforehand to see if the opponent’s auton targets any specific color, and to choose which auton to run based on that, if it’s even necessary.
Auto line up and intake the cubes seems like a solid plan.
One idea that is probably not viable or practical at all, but sounds very cool, is using a vision sensor (or multiple) to make a robot that can compete 100% autonomously
I did that last year with my bot but the states venue was entirely red and blue :,(
There should really be some rule protecting teams from this
Yeah, they are supposed to face the field away from the crowd. It was the bleachers that were colored, but this comp didn’t because they wanted to show off a crowd on their live stream. It was good at worlds though, and this year they are using strange colors, so it should be fine… gulp
Would it be possible to attach a vision sensor to the end of an arm to easily line up with prestacked cubes during autonomous (i.e. line up height of arm and robot alignment relative to the cubes)?
I’m sure you can use it for autonomous. Perhaps the ultrasonic sensor can tell you the distance from a wall to get a better estimate of where you are on the field.
Honestly, ultrasonic sensors aren’t all that good. They’re nice for detecting if a cube is, for example, in front of a claw, but I don’t think you can effectively use them to measure distance to the wall for a consistent autonomous.
In my experience, the ultrasonic sensor is pretty consistent from around 15–25 in, give or take. Much more or less than that and it becomes inconsistent. As far as finding where you are on the field, I would use bumper sensors on the corners of the robot and run into the wall. Alternatively, you could just drive into it for a specified time limit. Last year this proved very effective in autonomous (especially with V5) because it squared you up and you knew where you were on the field.
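Given the roughly 15–25 in reliable window described above, one simple defensive pattern is to throw out readings that fall outside it before trusting the sensor for wall distance. A sketch, with the window limits taken from the post above and the function name being mine:

```cpp
#include <vector>

// Reliable window for the ultrasonic sensor, per the experience above;
// readings outside it are treated as noise and discarded.
const double MIN_RELIABLE_IN = 15.0;
const double MAX_RELIABLE_IN = 25.0;

// Averages the in-range samples from a burst of readings.
// Returns -1.0 if no sample was usable, so the caller can fall back
// to another localization method (e.g. squaring up on the wall).
double filteredWallDistance(const std::vector<double>& samples) {
    double sum = 0.0;
    int count = 0;
    for (double d : samples) {
        if (d >= MIN_RELIABLE_IN && d <= MAX_RELIABLE_IN) {
            sum += d;
            ++count;
        }
    }
    return count > 0 ? sum / count : -1.0;
}
```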
As far as the vision sensor is concerned, if it is pointed generally downward, I think you could get some good tracking on cubes on the field. I would use it as an automatic alignment assist to take some of the guesswork out of autonomous. The problem with the vision sensor last year for our team was that it had to point at the flags, which caused lighting problems when it saw stadium lights in the background. In addition, the judges had shirts that were a similar color to the green on the edge of the flags, creating false positives.
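One cheap mitigation for false positives like the judges' shirts is to reject detections that are too small to plausibly be a cube at intake range. A sketch of that filter; the pixel thresholds are made-up starting points to tune, not values from any VEX documentation:

```cpp
// Reject small detections (e.g. a matching color in the stands) by
// requiring a minimum bounding-box size before trusting the match.
// Thresholds are assumptions to tune for your mounting height and angle.
const int MIN_OBJECT_WIDTH_PX  = 30;
const int MIN_OBJECT_HEIGHT_PX = 30;

bool isLikelyCube(int widthPx, int heightPx) {
    return widthPx >= MIN_OBJECT_WIDTH_PX && heightPx >= MIN_OBJECT_HEIGHT_PX;
}
```

Pointing the sensor down, as suggested above, helps for the same reason: the background becomes field tiles instead of crowds and stadium lights.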