What do you guys think will be the best sensors to use for this season? My team and I have never really messed with sensors much and want to begin using them this year to see if it will help make us more competitive. Any particular ones that you think would be best, or at least better than others? Also, for those of you who have used a particular sensor and have experience with it, how hard is it to set up?
It can get complicated, but this season is a good one to try position tracking with 3 encoders: see the Team 5225 Introduction to Position Tracking Document.
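To give a rough idea of what that involves, here's a minimal sketch of tracking-wheel odometry in PROS (not the document's exact math - it uses an arc-based correction). All ports and dimensions below are made up; you'd measure your own.

```cpp
#include "main.h"
#include <cmath>

// Hypothetical encoder ports and geometry - measure these on your own robot.
pros::ADIEncoder leftEnc('A', 'B', false), rightEnc('C', 'D', false), backEnc('E', 'F', false);
const double PI = 3.141592653589793;
const double WHEEL_DIAM = 2.75;       // tracking wheel diameter (in)
const double TICKS_PER_REV = 360.0;   // quadrature encoder resolution
const double LEFT_RIGHT_DIST = 10.0;  // distance between left and right tracking wheels (in)
const double BACK_OFFSET = 5.0;       // back wheel offset from the tracking center (in)

double x = 0, y = 0, theta = 0;       // pose estimate (in, in, rad)

double ticksToInches(double ticks) {
  return ticks / TICKS_PER_REV * WHEEL_DIAM * PI;
}

void odometryTask() {
  double prevL = 0, prevR = 0, prevB = 0;
  while (true) {
    double l = ticksToInches(leftEnc.get_value());
    double r = ticksToInches(rightEnc.get_value());
    double b = ticksToInches(backEnc.get_value());
    double dL = l - prevL, dR = r - prevR, dB = b - prevB;
    prevL = l; prevR = r; prevB = b;

    // Heading change from the difference of the side wheels; forward travel from
    // their average; sideways travel from the back wheel, corrected for rotation.
    // Signs depend on how your encoders are oriented.
    double dTheta = (dL - dR) / LEFT_RIGHT_DIST;
    double dFwd   = (dL + dR) / 2.0;
    double dSide  = dB - BACK_OFFSET * dTheta;

    // Rotate the local displacement into field coordinates (simple Euler step)
    double avgTheta = theta + dTheta / 2.0;
    x += dFwd * std::sin(avgTheta) + dSide * std::cos(avgTheta);
    y += dFwd * std::cos(avgTheta) - dSide * std::sin(avgTheta);
    theta += dTheta;

    pros::delay(10);
  }
}
```

You'd start this in a task (e.g. `pros::Task odom(odometryTask);` in `initialize()`) and then use x, y, and theta in your motion code.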
Other than that, you can use limit switches, a potentiometer, or an ultrasonic sensor to automate ball movement. You can try the vision sensor as well, but you may have mixed results.
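For example, automating ball movement with a limit switch can be as simple as running the intake until a ball trips the switch. Ports and values here are hypothetical:

```cpp
#include "main.h"

// Hypothetical ports - adjust to your wiring.
pros::Motor intake(1);
pros::ADIDigitalIn ballSwitch('H');   // limit switch at the top of the indexer

void autoIndex() {
  while (true) {
    if (ballSwitch.get_value()) {
      intake.move(0);      // a ball is indexed - stop and hold it there
    } else {
      intake.move(127);    // keep pulling balls up
    }
    pros::delay(20);
  }
}
```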
Yeah, I’ve seen a lot said about the effectiveness and variability of the vision sensor. I'm probably not gonna try that one anytime soon, but I'll for sure check out that link.
My whole team wants to use vision sensors. Are they hard to program or something?
Rangefinder (ultrasonic) sensors, but only if they’re installed higher than the height of the balls; vision sensors to sense the flags (ish) at the top of the baskets; and perhaps light sensors to sense when a ball is indexed.
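One way to use an ultrasonic mounted above ball height is to stop the drive at a set distance from a goal or wall, since the balls won't be in its beam. A rough PROS sketch, with made-up ports and an unspecified threshold (check what units your setup actually reports):

```cpp
#include "main.h"

// Hypothetical wiring: ultrasonic ping on 'C', echo on 'D'; tank drive on ports 1/2.
pros::ADIUltrasonic goalSonar('C', 'D');
pros::Motor leftDrive(1), rightDrive(2, true);

// Drive forward until the sensor reads that the goal is within stopThreshold.
// get_value() returns 0 when nothing is detected, so keep driving in that case.
void driveToGoal(int stopThreshold) {
  while (goalSonar.get_value() > stopThreshold || goalSonar.get_value() == 0) {
    leftDrive.move(60);
    rightDrive.move(60);
    pros::delay(20);
  }
  leftDrive.move(0);
  rightDrive.move(0);
}
```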
I think the color sensor shown in the AI product reveal could be useful, especially in snailbots. It would make cycling much easier in my opinion: you could see when to stop cycling, which would minimize errors.
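If it does end up legal and supported, the logic could look something like this. This assumes an API along the lines of PROS's `pros::Optical` (hue and proximity readings); the port, thresholds, and alliance color are all placeholders:

```cpp
#include "main.h"

// Hypothetical setup: color sensor on port 10 looking at the top ball position,
// intake on port 1. The hue and proximity thresholds are guesses - tune them.
pros::Optical topSensor(10);
pros::Motor intake(1);

bool ballIsRed() {
  double hue = topSensor.get_hue();          // 0-360 degrees
  return topSensor.get_proximity() > 100 &&  // something is actually there
         (hue < 30 || hue > 330);            // roughly red
}

// Cycle balls through the goal until our own color reaches the top,
// then stop so we don't eject our own ball.
void cycleUntilOwnColor() {
  while (!ballIsRed()) {
    intake.move(127);
    pros::delay(20);
  }
  intake.move(0);
}
```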
They tend to be inconsistent and hard to use. It is extremely hard to get reliable readings, especially under differing lighting.
I know people have tried to filter the readings from the vision sensor to get better results, but without much success. Don’t get me wrong, there have been a few teams that made some use of them, but those teams are few and far between.
I know @2775Josh’s team used one last year. You could probably find good information regarding their usability through him.
I’m planning on using a vision sensor to automate the trapdoor on my snailbot.
Aren’t the sensors from the AI product reveal only legal in the AI competition, since it’s a separate competition? If they’re legal then I’m all for it, but weren’t they not allowed?
The color sensor may or may not be legal for EDR; we don’t know for sure yet (unless I’m mistaken). I hope this sensor and a few of the other ones will be legal for EDR, though. Fingers crossed.
The vision sensor has its issues, and was particularly difficult to use in the year it was released: Turning Point. The flags were well above the floor and translucent - which made room lighting and the background beyond the flags problematic. There was also a desire/need to target the flags from a fairly long distance, which made targets occupy only a tiny number of pixels in the sensor.
I think the vision sensor is better suited to Change Up, as the objects are low to or on the ground and opaque. Objects on the ground are better lit and have fewer issues with background confusion. Scoring looks to be a close-range thing as well. I intend to require my teams to demonstrate why they should NOT use a vision sensor this year.
If I remember my discussion with 2775’s programmer last year (in Tower Takeover) correctly, the only cubes they were able to sense usefully were the orange(?) ones, because the other cubes did not give a consistent reading. I don’t know how well this would carry over to this year’s objects, but there were still difficulties with sensing fairly comparable objects.
Correct. We could not accurately sense purple or green cubes as the base color was dark enough to be confused with the background and each other. Only orange cubes were bright enough to be sensed accurately, and even then only when lit from the sides, rather than from above. We actually had issues with the sensor seeing an orange cube in the reflection of the field perimeter.
However, the green targets are fairly large and bright, and could be usable with a vision sensor.
Yeah, while the sensor is better suited for this year, I think it’s so tricky to use and buggy that it isn’t really worth too much effort working with it. I’ve spent a lot of time in the past programming a filtering system that merges different signatures together, and tried everything, but it never got put on my robot because it’s such a pain to configure.
This year is definitely the best year to use the vision sensor since it was released, but as it stands it really needs to be improved by VEX before it’s practical to use (practical as in time spent trying to get it to work, especially on the day of competition).
Interesting to get your more informed take.
My thinking is that for skills auton the vision sensor can be used for the final alignment to balls before pickup and to goals before scoring, a point at which the object to be detected should occupy a significant portion of the field of view.
My students haven’t progressed to the point of using odometry, and I am hopeful that they can use the vision sensor to help account for small errors in their drive base motion (which will likely be built with only the integrated motor encoders and basic drive functions).
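For what it's worth, that kind of final alignment can be a simple proportional turn toward the detected object's center. A rough PROS sketch, assuming a signature has already been trained for the ball color; the ports, gain, and tolerances are made up:

```cpp
#include "main.h"
#include <algorithm>
#include <cstdlib>

// Hypothetical setup: vision sensor on port 8, signature 1 trained on the ball
// color, tank drive on ports 1/2.
pros::Vision camera(8);
pros::Motor leftDrive(1), rightDrive(2, true);

// Turn toward the largest detected ball until it is centered, as a final
// correction after the encoder-based drive gets the robot close.
void alignToBall() {
  const int CENTER_X = VISION_FOV_WIDTH / 2;   // 316 / 2
  const double kP = 0.5;                       // placeholder gain - tune it

  while (true) {
    pros::vision_object_s_t ball = camera.get_by_sig(0, 1);  // largest sig-1 object
    if (ball.signature == VISION_OBJECT_ERR_SIG) break;      // nothing detected
    int error = ball.x_middle_coord - CENTER_X;
    if (std::abs(error) < 5) break;                          // close enough

    int turn = std::max(-60, std::min(60, (int)(kP * error)));
    leftDrive.move(turn);
    rightDrive.move(-turn);
    pros::delay(20);
  }
  leftDrive.move(0);
  rightDrive.move(0);
}
```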
Yep, that is a good plan, and is within the sensor’s capability. It should be able to decently pick up a target that takes up most of the field of view and that has a neutral background.
The problem is getting to that point. In my experience, it sometimes took hours of trying and trying to get the vision sensor to remember a signature and to have it report information back to the code. It was just so frustrating; it feels like a product they released as soon as it seemed to function.
Then I took a different approach to configuring the vision sensor: I wrote a script that took the signatures injected into the VCS config file and converted them to a format PROS accepted, in hopes that the code would beam the signatures up to the sensor when the program started. I had mixed results, but at that point I was disillusioned with the sensor and decided to give up on the painful task of trying to get it to work until VEX improved it (as I was sure they would). Unfortunately, they haven’t improved anything, and just went on to release new stuff.
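For reference, PROS does let you push a signature to the sensor from code with `Vision::signature_from_utility` and `set_signature`, which is the sort of thing my script was generating. The port and all the u/v numbers below are placeholders, not a working signature:

```cpp
#include "main.h"

pros::Vision camera(8);  // hypothetical port

void configureSignatures() {
  // Placeholder ranges - the real values come from the vision utility
  // (or, in my case, from the numbers VCS injected into its config file).
  pros::vision_signature_s_t redBall = pros::Vision::signature_from_utility(
      1,                   // signature id
      7000, 9000, 8000,    // u min / max / mean
      -2000, -500, -1250,  // v min / max / mean
      3.0,                 // range
      0);                  // type (normal color signature)
  camera.set_signature(1, &redBall);
}
```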
I have developed a whole library of vision filtering, sorting, and merging algorithms that let you manipulate the objects the vision sensor reports. The problem is that it can only improve the data the sensor actually sends, so when the sensor can’t even be persistently configured to detect a large, brightly-colored object on my desk, there isn’t much I can do.
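To show the kind of post-filtering I mean (this is a simplified sketch, not my actual library): read a batch of objects, drop tiny detections that are usually noise, and sort the rest by size. Port and thresholds are made up:

```cpp
#include "main.h"
#include <vector>
#include <algorithm>

pros::Vision camera(8);  // hypothetical port

// Read up to 8 objects matching signature 1, drop tiny detections (usually
// noise), and return the remaining objects sorted largest-first.
std::vector<pros::vision_object_s_t> filteredObjects() {
  pros::vision_object_s_t raw[8];
  std::int32_t count = camera.read_by_sig(0, 1, 8, raw);
  if (count < 0 || count > 8) count = 0;   // read failed - treat as no objects

  std::vector<pros::vision_object_s_t> objs;
  for (std::int32_t i = 0; i < count; i++) {
    if (raw[i].width * raw[i].height > 200) {  // minimum-area filter (tune this)
      objs.push_back(raw[i]);
    }
  }
  std::sort(objs.begin(), objs.end(), [](const auto& a, const auto& b) {
    return a.width * a.height > b.width * b.height;
  });
  return objs;
}
```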
The sensor’s detection is decent and can definitely give good results. It is limited by the hardware and its image-processing algorithms, which VEX probably can’t improve. However, VEX can improve literally everything else, from the configuration experience and the detection workflow to the lack of promised features.
The best sensors will depend on your robot and your strategy.
I would say a vision sensor would be helpful in this application because you could sense the red and blue balls very easily and center on them, which would make autonomous and programming skills a lot more accurate.