I’ve been trying to program our Vision Sensor, but with no code to go off of, I’ve had to program blindly from the API and VCS. The camera window for setting the signatures won’t open, and I haven’t been able to get the sensor working with snapshots or reading the color it detects. Does anyone have any example projects they’ve made in VEXcode?
I don’t know, but you could try something you haven’t tried yet.
Try searching the forums or the internet if you haven’t already. I know there’s a help page for all the VEXcode stuff.
Have a look at these Knowledge Base articles.
There are two different ways to configure the sensor, depending on whether you want to use the robot configuration panel or create the vision signature header files yourself.
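If you go the header-file route, the generated file looks roughly like this. This is only a sketch: the seven tuning numbers for a signature are produced by the configuration utility when you freeze a signature on your object, and the placeholder values below won’t match anything you’re actually tracking.

```cpp
// Sketch of a VEXcode V5 vision signature header (the config tool
// normally generates this for you). All numeric values here are
// placeholders; real ones come from tuning against your object.
#include "vex.h"

// signature(id, uMin, uMax, uMean, vMin, vMax, vMean, range, type)
vex::vision::signature SIG_RED (1, 6000, 7500, 6750, -1200, -600, -900, 3.0, 0);

// Vision Sensor on PORT1, 50% brightness, using that signature.
vex::vision Vision1 (vex::PORT1, 50, SIG_RED);
```

Once something like this exists in your project, `Vision1.takeSnapshot(SIG_RED)` fills the object list that the rest of your program reads from.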
Hi @jskinner Not sure if you are using VEXcode V5 Text or Blocks, but we do have a STEM Lab that discusses how to use the Vision Sensor with VEXcode V5 Blocks. You can find it here: https://education.vex.com/parent-wrapper.php?id=vision-sensor-v5
There is also an example project in both VEXcode V5 Blocks and Text that features the Vision Sensor.
Hope this helps.
Brilliant, will do, thanks
Thanks, I’ve got it figured out now. I had trouble configuring the sensor in VEXcode because I couldn’t find the Vision Sensor configuration window. The new update (1.0.1?) gives easy access to that window, so I could set up the signatures the sensor uses, and from there I could program it.
Ok cool. I checked them out to get an understanding of the sensor, and now the update gives access to the configuration window. Thanks!
I am also using the Vision Sensor and am going through the STEM Lab. Is calculating the center X and center Y used to have the robot follow an object with the Vision Sensor? Is there any example code for this using VEXcode V5 Blocks?
Yes, calculating center X and center Y would allow you to track an object. Unfortunately, that isn’t covered in the STEM Lab I shared earlier; it only covers detecting whether an object exists, not tracking one. However, that is on our roadmap to add soon.