Hello! I have recently purchased a VEX IQ Generation 2 education kit. I am currently trying to use VEXcode IQ (using blocks) to code the Optical Sensor to detect a certain colour, and when it does, the robot should drive in reverse. The problem is that the Optical Sensor cannot seem to detect colours accurately. For example, if I code the robot to reverse whenever the Optical Sensor detects green, it never reverses, no matter what shade of green or green object I present to it. In addition, if I go into the devices window on the Brain, open the optical sensor tab, and check the “hue” section, it always displays either 0 with a red circle, or quickly flashes between 24 with a yellow circle and 0. I would like to know whether the optical sensor is broken and, if so, whether I can fix it, or whether I am doing something wrong, and if so, what.
Here is the code I am currently using (Note: This is the text version and I am using blocks):
What happens if you go to the Device Info screen and go to the port with the color sensor? I’m going off memory here (I need to get my brain back from the team I mentor…), but I believe it shows the detected color in the info screen. That would be the first check.
Also, ensure there is good, bright lighting, and see if that helps any.
Edit: Oops, I skipped over the part where you said you went to that screen. Still, have you tried lighting changes?
Yes, try using more light to check that it's working OK. I tested one here quickly and it does seem to be less sensitive than I remember, and the VEXcode team didn't flag anything unusual when they did regression testing on the last version. I will compare it to a V5 sensor later this week and make sure no bugs crept into the vexos 1.0.4 release; IQ2 and V5 should be returning identical values.
I assume you are aware that the code you are using requires both ButtonLUp to be pressing and the optical sensor to detect green? Other than that, usually the best idea is just to ask the brain what it's seeing when telling it to do something under a given condition isn't working.
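To see why that matters, here's a tiny plain-Python sketch (the two booleans are stand-ins for the real block checks, not actual VEX API calls): with an "and" condition, if either part is false, the reverse code never runs.

```python
# Stand-in values for the two block conditions (hypothetical, for illustration)
button_pressing = True   # ButtonLUp is held down
detects_green = False    # optical sensor does not report green

# An "and" condition only fires when BOTH parts are True
if button_pressing and detects_green:
    action = "drive reverse"
else:
    action = "do nothing"

print(action)  # do nothing
```

So even if the button is held the whole time, a sensor that never reports green means the robot never reverses.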
These are the available sensing blocks for the optical sensor:
Here’s some quick code I wrote up that prints whether the optical sensor found an object, the color it sees, the hue it sees, and the brightness it sees.
Try experimenting with the brightness you set the sensor to. You may also want to use if hue is between x and y, rather than if color is green, because we don’t really know what the brain counts as green.
In this particular situation, it’s okay to copy that code provided:
You know what it does.
You either copy it exactly or know why you’re changing something.
Hue between 80 and 140 degrees is considered green.
Blue is 200-240.
Red is 340-20 (wrapping around, with 0 and 360 being the same point).
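A quick sketch of that hue-range logic in plain Python (the function and its name are my own, not part of the VEX API; a real program would pass in the sensor's hue reading), including the wrap-around for red:

```python
def classify_hue(hue):
    """Map a hue in degrees (0-360) to a rough color name.

    Ranges follow the values above: green 80-140, blue 200-240,
    red 340-20 (straddling the 0/360 boundary).
    """
    hue %= 360
    if 80 <= hue <= 140:
        return "green"
    if 200 <= hue <= 240:
        return "blue"
    if hue >= 340 or hue <= 20:  # red wraps through 0/360
        return "red"
    return "unknown"

print(classify_hue(120))  # green
print(classify_hue(350))  # red
print(classify_hue(5))    # red
```

Comparing the raw hue against ranges like this is more transparent than the color-name block, because you can see exactly where a borderline reading falls.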
The color your eye sees and the color the optical sensor sees can vary. We originally optimized this to work well with the colored disks that come with the V5 Workcell under typical classroom lighting (i.e., normal residential LED lights), but you may get different results under daylight or fluorescent lighting.
No, you have it. I should have been clearer: while there are other blocks relevant to the optical sensor, the blocks I mentioned are the major sensing ones, which will be most useful in debugging. The only sensing blocks I didn't mention were OpticalSensor gesture up detected?, which I left out because I didn't see it (it's a 1st-gen thing), and OpticalSensor detects [some color]?, which I ignored because we want an “open-ended question”, so to speak. You could write:
if OpticalSensor detects green?
print "Detecting green!" to Brain
But you would rather know why it’s not seeing green. Is it seeing red? blue? nothing? We don’t know, so it’s easier to diagnose the problem if we just say, “What do you see?”
The other blocks are, as I mentioned, not sensing blocks; you can tell they do something because of their stacking shape. I’m guessing they’re in the “Sensing” category just because they are relevant to a sensor, but nonetheless, they don’t sense, so I largely ignored them. I did mention, however,
It doesn’t matter whether something is in front of it or not. It seems to default automatically to red?
I have tried the Device Info screen on the Brain and it shows accurate color detection; however, it doesn’t work with code. We are trying to run the Testbed Challenge for 2nd Generation with multiple sensors.
Without seeing the code and the configuration, it’s difficult to provide any guidance. If the sensor is operating correctly in the devices menu but not in code, that would lead me to believe it’s a configuration or coding issue.