This is a demonstration of how to use the V5 Vision Sensor to find a vision target object and then orient your robot to "face" directly toward the target. You will need to configure the vision sensor before running the program; a link to instructions for configuring the Vision Sensor is included here.
The Wifi option in settings on the brain should allow the sensor's wifi to be turned off; however, that setting is a bit messed up in vexos 1.0.2, and the next release this coming week will fix it. You can also turn it off from user code (if Vision1 is the instance of the vision class).
If you use the vision sensor configuration utility, you give it some NAME and the config code generator adds the sig_ prefix; you then use the full sig_NAME identifier in your code. The configuration utility doesn't check whether the names you give it are duplicates, so we have it add the prefix to help avoid creating duplicate identifiers when it generates the config code.
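As a hypothetical illustration of that naming scheme (the numbers below are placeholders, not a working signature): if you name a signature RED in the utility, the generated config code looks something like

```
// generated config code: NAME "RED" becomes identifier sig_RED
vex::vision::signature sig_RED =
    vex::vision::signature(1, 0, 0, 0, 0, 0, 0, 3.0, 0);

// your own code then refers to the prefixed identifier, e.g.
// Vision1.takeSnapshot(sig_RED);
```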
@ezra The numbers all have meaning to a camera's eye, but very little meaning to a human eye. Robot Mesh Studio lets you use the camera configuration tool to generate these signature constructors based on what the camera sees and what you select in a GUI. The first, second-to-last, and last arguments are the only ones intended to be human-comprehensible. The first is the id of a particular signature. The second-to-last is the range: how strict the camera should be about matches.
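Annotated, a generated constructor might look like the sketch below. All numeric values are placeholders you would regenerate yourself, and the comments on the middle arguments (color-space bounds) and the last argument (signature type) are my reading of the generated code, not something the tool documents:

```
vex::vision::signature sig_GREEN = vex::vision::signature(
    2,                     // id: which signature this is (human-readable)
    -5000, -3000, -4000,   // camera color-space values (meaningful to the camera)
    -4000, -2000, -3000,   // more color-space values
    3.0,                   // range: match strictness (human-readable)
    0);                    // type: signature kind (human-readable)
```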
As for whether or not those numbers will work for you, they almost certainly won't. Those numbers were generated in my office on a September afternoon. For me, they changed daily depending on when and where in the office I was using the robot. Three months later and half the country away, with different lights, you will need to calibrate a signature for yourself. I did a write-up on how to take signatures in Robot Mesh Studio for VEX IQ, and the steps for making signatures are the same for V5 (the tool just outputs different code depending on whether you use it with VEX IQ or V5).