Autonomous in Change Up

I think the autonomous period may get awkward, because there are three goals on each side of the field. If two robots both try to score balls in the middle goal, they may block each other from scoring.
So I think we need to write more autonomous programs this time.

3 Likes

So what you are saying is that the two teams on each alliance need to talk more before each match? I think this would be very easy to solve: just have the two partners switch sides. If both are going to go for the middle, one would turn right and the other would turn left, so all you have to do is flip which side of the middle they start on.

3 Likes

Or you could just develop several programs, as @13536G suggested. This way you could work well with your partner regardless of their auton.

2 Likes

Yeah, that’s what I mean. This game is not like Tower Takeover: Tower Takeover only had 2 goals, but Change Up has 3, plus many shared goals.
So we have to write more programs and think through more cases.

2 Likes

You should probably write at least 4…

  1. Left no center
  2. Left and center
  3. Right no center
  4. Right and center

And, yes, talk to your partner ahead of time. :slight_smile:
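
If each routine is its own function, switching between them at the field is easy. Here’s a rough sketch of the idea in VEXcode Pro V5 C++ (the function names and selector are made up, not any standard API):

```cpp
// Keep each routine as a plain function and pick one by index before the match.
#include "vex.h"
using namespace vex;

void leftNoCenter()   { /* score the left home-row goals, skip the center */ }
void leftAndCenter()  { /* score the left goals, then the center goal */ }
void rightNoCenter()  { /* mirror of leftNoCenter */ }
void rightAndCenter() { /* mirror of leftAndCenter */ }

// Table of routines; bump `selected` with a controller button or a
// brain-screen tap while you queue for the match.
void (*routines[])() = { leftNoCenter, leftAndCenter, rightNoCenter, rightAndCenter };
int selected = 0;

// Register this with Competition.autonomous(autonomous) in main(),
// as in the standard VEXcode competition template.
void autonomous() {
  routines[selected](); // run whatever you and your partner agreed on
}
```

Adding a fifth or sixth routine is then just one more entry in the table.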

9 Likes

Yeah, I always had many programs, at least 6 for each side, for exactly this reason; it’s nothing new. For example, SS had this with the center cube.

3 Likes

Well, I would definitely add

  1. Left, center, and right
  2. Right, center, and left

:wink:

9 Likes

I’d also have one that just fills the home row in case your alliance partner does nothing.

13 Likes

To this point, I think TT was an outlier game for autonomous. Almost every game has elements that need to be split between partners. And to add to the list of programs: an auton that does the entire home row will be good in those far too frequent cases where you cannot rely on your partner to complete one goal.

2 Likes

Can’t you guys use the vision sensor?

1 Like

The vision sensor is probably hard to code, I’d imagine. I haven’t even looked at trying to code one. May or may not start trying this year, IDK.

1 Like

the backboards make me think that the GDC wants us to use vision sensors, but tbh it’s probably a lot easier, and just as reliable, to use some sort of passive aligner or even no aligner at all.

3 Likes

yes, that is what I was thinking. The green backboards would really help a vision sensor actually see the goals.

1 Like

I was thinking an inRange function would be sufficient:
https://docs.opencv.org/3.4/da/d97/tutorial_threshold_inRange.html

I have no idea how well this could be made and used with the brain’s software.
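
For anyone curious, here’s roughly what that tutorial’s inRange step looks like in desktop OpenCV (C++). The HSV bounds are guesses for the green backboards, not tuned values, and the V5 brain can’t run OpenCV anyway:

```cpp
#include <opencv2/opencv.hpp>

int main() {
  cv::Mat frame = cv::imread("field.jpg");      // a photo of the field
  if (frame.empty()) return 1;

  cv::Mat hsv, mask;
  cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);  // HSV is easier to threshold than BGR
  cv::inRange(hsv,
              cv::Scalar(40,  80,  80),         // lower H, S, V bound (guess)
              cv::Scalar(80, 255, 255),         // upper H, S, V bound (guess)
              mask);                            // mask is white where the pixel is "green"

  cv::imshow("mask", mask);
  cv::waitKey(0);
  return 0;
}
```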

2 Likes

It’s no different from the low platform in TP. Just need different auton codes.

1 Like

I don’t like the vision sensor; I think it’s useless, maybe.
But you can use it.

This is an example of a robot using a vision sensor: every time it aims at the flags, it autocorrects with the sensor, so it doesn’t rely on it 100%, but it definitely helps.
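
That kind of autocorrect is usually just a proportional loop on the blob’s X position. A rough sketch in VEXcode Pro V5 C++; the ports, gain, and signature values are placeholders (real signature numbers come from the Vision Utility):

```cpp
#include "vex.h"
using namespace vex;

motor leftDrive(PORT1);
motor rightDrive(PORT2, true); // reversed so positive spins drive forward

// Placeholder signature; paste the values the Vision Utility generates.
vision::signature SIG_GOAL(1, 0, 0, 0, 0, 0, 0, 3.0, 0);
vision GoalCam(PORT10, 50, SIG_GOAL);

void aimAtGoal() {
  const int    centerX = 158;  // middle of the sensor's 316-pixel-wide image
  const double kP      = 0.25; // placeholder proportional gain, tune on the robot

  while (true) {
    GoalCam.takeSnapshot(SIG_GOAL);
    if (!GoalCam.largestObject.exists) break;  // lost the target, bail out

    int error = GoalCam.largestObject.centerX - centerX;
    if (abs(error) < 5) break;                 // close enough, stop turning

    // Positive error = target is to the right, so pivot right.
    leftDrive.spin(forward, error * kP, percent);
    rightDrive.spin(reverse, error * kP, percent);
    wait(20, msec);
  }
  leftDrive.stop();
  rightDrive.stop();
}
```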

3 Likes

No. The vision sensor can only be configured to detect color blobs, and then it reports those coordinates back to the brain. You don’t have access to the pixel data, so you can’t do any custom processing.

If VEX wants us to use vision sensors, they are not doing a good job making that obvious. The vision sensor is awful. There have been no updates since the release of V5, and the configuration is tedious, frustrating, and buggy.
If, by some miracle, you manage to properly configure a signature (and the sensor actually remembers it), the detection is very inconsistent and noisy. It does not respond well to changes in lighting.

VEX took a Pixycam, made it worse, and then, instead of making any updates to it, went on to just release new products. There are plenty of ways the sensor could be improved to be quite usable; it’s just that the sensor seems as dead as VCS. If those problems were fixed, though, the targets in this game are likely unique enough to be consistently recognized by the sensor.

Sure, after a lot of time spent wrangling with it, it can be useful (in perfect lighting), and teams have had minor success with it, but it is far more pain than it is worth.

TL;DR: the vision sensor is currently useless, frustrating, and nearly impossible to use. I don’t know anyone who has had a positive experience with it, nor have I seen anyone use it after the initial excitement. Software updates could make it usable, but if VEX is trying to encourage us to use it, it’s definitely not showing.

6 Likes

Agreed. Even as someone who had success with the vision sensor last season, it is almost always necessary to configure the sensor on the field before autonomous or a programming skills run. If your competition doesn’t let you do this, then :man_shrugging:.

For example, there is a certain tournament in TN (I won’t say the name, but fellow TN roboteers will know which one I’m talking about) where the room is dark, with only spotlights above the fields. I haven’t tried to use the vision sensor there just because it would probably be an absolute nightmare. If VEX wanted teams to use vision sensors, they would regulate lighting at competitions, and possibly update the sensor.

6 Likes

I haven’t had any luck with it either; it couldn’t tell green apart from orange on our field without us opening all the windows for extra light.

1 Like