Is there a way to tell the V5 brain what team it's on? I'm messing around with vision, and I'm trying to make it so that if we're on blue, it looks for red things like caps and flags. I looked at some Robot Mesh examples and it seems possible with the block programming, but I'm using C++ Pro on VEX Coding Studio.
Every function in Robot Mesh Studio’s Blockly is also in RM Studio C++. What exactly were you trying to do?
I'm trying to have the code check which team the robot is on so it can run different code based on the team color. For instance, if the robot is on red, it will look for blue caps and flags with the vision sensor.
C++ style, I would probably do the following: Create an object (autonomousSetting) that holds the team color (a boolean isRed or isBlue would do), the starting location (a boolean nearFlags or nearPoles would do), and a method to start up the appropriate autonomous program. Autonomous would essentially just run whatever start-up things are needed and call that method.
C style, I would create those booleans separately and refer to them when starting up the autonomous section.
In both cases, you can set each boolean to true or false during pre-autonomous using the touchscreen. A rough sketch of the C++ version is below.
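For example, a minimal sketch of that object in plain C++ (the routine names flagSideAuton and poleSideAuton are placeholders, not part of any VCS template):

```cpp
// Holds the pre-autonomous selections and dispatches to the matching routine.
class AutonomousSetting {
public:
  bool isRed = true;      // team color, chosen on the touchscreen in pre-auton
  bool nearFlags = true;  // starting tile, chosen on the touchscreen in pre-auton

  void run() {
    if (nearFlags) {
      flagSideAuton(isRed);
    } else {
      poleSideAuton(isRed);
    }
  }

private:
  void flagSideAuton(bool red) { /* flag-side routine, mirrored by color */ }
  void poleSideAuton(bool red) { /* pole-side routine, mirrored by color */ }
};

AutonomousSetting autonomousSetting;

void autonomous() {
  // ...any start-up steps...
  autonomousSetting.run();
}
```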
There's no way to just do that by default; I don't think the competition control system reports alliance color to the brain.
One way you can accomplish that, if you have a vision sensor: at the very start of autonomous, you know that on the blue side the closest flags are flipped to red (and vice versa on the red side), so if those flags match your red flag signature, you know you're on blue.
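Roughly, that check could look like the sketch below, assuming a vision sensor named VisionSensor and a red-flag signature SIG_RED_FLAG that you've already trained with the Vision Utility (both names are just placeholders from a typical configuration):

```cpp
// Take a snapshot filtered to the red flag signature; if any objects show up
// from the starting position, the nearest flags are red, which (per the logic
// above) would mean we're on the blue alliance.
bool detectBlueAlliance() {
  VisionSensor.takeSnapshot(SIG_RED_FLAG);
  return VisionSensor.objectCount > 0;
}
```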
What I would do is create buttons on the touchscreen to select which team you are on during pre-autonomous (really, any time before the autonomous function is called). Just check the X and Y values when the screen is pressed to see which button was clicked, set a variable to either the red or blue color signature depending on that button, then search for the corresponding signature when parsing the vision sensor data and act accordingly. It will be fairly complex, but I think it's the easiest route for handling the vision sensor as well as the different starting positions for the game.
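Here's a rough sketch of the button part, assuming the usual VCS setup where robot-config.h declares a vex::brain named Brain (the teamIsRed variable and the 240-pixel split are just illustrative choices):

```cpp
#include "robot-config.h"  // generated by VCS; declares the Brain object

bool teamIsRed = true;  // read later when choosing which vision signature to track

void pre_auton() {
  // Left half of the screen = red button, right half = blue button.
  Brain.Screen.setFillColor(vex::color::red);
  Brain.Screen.drawRectangle(0, 0, 240, 240);
  Brain.Screen.setFillColor(vex::color::blue);
  Brain.Screen.drawRectangle(240, 0, 240, 240);

  // Poll for a touch, then decide by the X coordinate of the press.
  while (!Brain.Screen.pressing()) {
    vex::task::sleep(20);  // don't hog the CPU while waiting
  }
  teamIsRed = (Brain.Screen.xPosition() < 240);
}
```

Once teamIsRed is set, you can pick the red or blue signature when you take snapshots in autonomous.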
Yeah I don’t think the competition control is that advanced.
The touch button solution mentioned above is probably the best solution there is for now.
What my team did to solve this problem was pretty simple: we wrote two different competition programs and uploaded both to the brain, one for the blue side and one for the red side. Then you just select whichever program you need to run.
That seems like an okay solution, but I can see it becoming difficult to manage once you have four separate programs for each starting position, especially as your code gets more complex. If you change one line of code in the driver portion, a function, a port assignment, etc., you have to make that change in each separate program. And if you start implementing different solutions to problems in each program, the fragmentation is going to be a pain to deal with.
That is a good point. Once we begin to experiment with vision and write a more complex autonomous program, we may decide to use your idea. Thanks for pointing that out!
How would I make the buttons? I used rectangles as the buttons, and I don't know what to do next. I tried using the brain.screen.xposition function, but it didn't work. Do you know how to do this?
See this post (and the rest of the thread).
If you were looking for how to do LCD interaction in Robot Mesh Studio, check out this example for how to do it in Blockly. The Minesweeper project linked in my signature also makes use of LCD touches, but it's rather complicated. I also did versions of that example in Python and C++. Our C++ is very similar to VCS' C++, so it should provide some additional insight into VCS C++ if you choose not to use our stuff.