I have recently encountered a problem running the I2C encoders. At random times our cortex suddenly “bugs out”. The VEXnet light shows that it is still connected, but the robot light shows a “User Microprocessor Error”. When this happens, the motors keep running at whatever power they were set to when it “bugged out”.
I unplugged the encoders from the cortex and it doesn’t “bug out” any more. This is very puzzling. I have replaced the cortex and all of the batteries. I haven’t tried setting the sensor type to “sensorNone” in RobotC, but will try it this afternoon.
Caveat: we are avoiding these I2C sensors until May. Others in our club are rolling the dice, in my opinion, but we were on that early release version of RobotC with I2C support and had our robot spin uncontrollably at a competition, even with the competition switch set to disable. So we have had similar issues with this as well.
If you look at how I2C communication works, the master has to send messages out to poll each device for information. There are some old posts about this from when the I2C sensors first came out, so look them up.
We put some timer delays in our loop to let the I2C sensors get another value, and that helped somewhat, but it seemed more like playing the lottery than a real fix. The idea was to make sure we were not interfering with the polling traffic on the serial line to the I2C sensors (we had 2 in the chain). Unless there are proper semaphore blocks around the sensor values in the underlying RobotC code that is masked from you, could you be reading a value while the firmware is in the middle of writing it, or mid-way through communicating with and parsing the I2C, causing some nastiness? Or it could be a memory-out-of-bounds or other bug in the I2C communication handler. It could be a lot of things.
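For what it’s worth, the delay we added looked roughly like the sketch below. This is RobotC-style illustration only: the port names and the 25 ms figure are placeholders, not our exact code.

```
// RobotC-style sketch (port names and timing are illustrative only)
task main()
{
  while (true)
  {
    // Read the IME counts that the firmware has (hopefully) cached
    int leftCounts  = nMotorEncoder[port2];
    int rightCounts = nMotorEncoder[port3];

    // ... use the counts for drive control here ...

    // Give the background I2C polling time to refresh the values
    // before the next read
    wait1Msec(25);
  }
}
```

The delay reduced how often we hit the problem, but as I said above, it never felt like more than a band-aid.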
So it’s just a thought, and sorry it’s not a definitive fix. Otherwise it might just be some other bug related to the I2C.
Also, doesn’t I2C work pretty reliably for Mindstorms on RobotC? I’ve never used it so I have no idea. What makes VEX different? Cortex hardware/firmware, and SensorValue masking the underlying I2C communication. If it’s successfully masked, it’s wonderful.
Hence we’re back on quad encoders for worlds… On the to do list for May though! Those I2C sensors have huge promise.
I actually tried setting the SensorType to sensorNone. I did this for both encoders and it eliminated the problem. I think what I am going to do now is try initializing only one of the encoders. In theory it should then “bug out” only half as often. Can you confirm?
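For anyone else who wants to try this, disabling the IMEs looked something like the following RobotC-style sketch. The I2C port names are whatever you assigned in the Motors and Sensors Setup, so treat them as placeholders:

```
// RobotC-style sketch: disable the IMEs at the start of user code
// (I2C_1 / I2C_2 are whichever IME ports you have configured)
task main()
{
  SensorType[I2C_1] = sensorNone;
  SensorType[I2C_2] = sensorNone;

  // ... rest of the program, without reading the IMEs ...
}
```

With both set to sensorNone the “bug out” went away entirely, which is what makes me want to test the one-encoder case next.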
If CMU have implemented this correctly, the I2C communication should be going on in the background, placing the encoder count into memory. Reading the encoder from ROBOTC code should access the value in memory and place it in a user-defined variable (or whatever you are doing); it should (hopefully) not initiate any I2C communication directly. I have to admit that I lost interest in using the IMEs when it became clear they would not be ready for the team to use in March; perhaps I should pull them out and see if I can reproduce any of the problems you are seeing. I’ve used I2C in lots of products and there is nothing inherently unreliable about it, as long as the firmware can handle potentially corrupted data. As the CPU generates the clock, it is in some ways easier to deal with than, say, RS-232 asynchronous communication.
As far as I can tell it’s the user processor. My notes show they are using port B bits 8 & 9 on the STM32. This means (I assume) that CMU and Intelitek have implemented this independently, unless VEX gave them the low-level code.
How many IMEs do you have in the chain? How long are the cables between them?
How long is the “random time”, do you have any way that you can cause this to happen under your control or is it just a case of waiting?
Do you know if your code is still running when this happens, is there a code controlled LED or LCD display you can use to help show what may be happening?
Do you know if you can still send values to the motors? You could add an “emergency stop” switch that would disable everything; although not a proper solution, it would help by showing that motor control was still possible from user code.
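As a rough idea, such an emergency stop could be sketched in RobotC like this. The digital port (dgtl12) and the motor ports are placeholders; wire a bump or limit switch to whatever port you have free:

```
// RobotC-style sketch of an "emergency stop" check
// dgtl12 and the motor ports below are placeholders
task main()
{
  while (true)
  {
    if (SensorValue[dgtl12] == 1)   // e-stop switch pressed
    {
      motor[port2] = 0;             // zero every motor we control
      motor[port3] = 0;
    }
    else
    {
      // normal drive code here
    }
    wait1Msec(20);
  }
}
```

If pressing the switch stops the motors during one of these lockups, that tells you the user code loop is still running and still able to command the motors.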
We are using only 2 sensors, with just one cable between them (I think 12 inches). I can’t send any values to the motors; my arm keeps rising even past the software limit. I can try to see if the code is still running by flashing an LED, but I really don’t think that it is.
We are also experiencing a severe version of this problem. We are currently using four of the new I2C-based encoders, with approximately 48" of cable running among them. The problem occurs quite frequently, and when it does, we see the same symptoms described previously. In one instance, after turning the cortex OFF (with the backup battery plugged in), the front motors connected to the Power Expander continued to spin. Disabling the robot does nothing.
I can only imagine this could turn into a problem at worlds with disabled robots spinning or roaming across the field.
Edit: We are using RobotC 3.08 with the world championship firmware, 3.21.
Can somebody confirm whether this problem occurs when using EasyC as well? Could it be a problem with the RobotC firmware itself?
Turning the cortex off with the backup battery plugged in may do this: the whole system is then running from the backup battery, and although cortex motor power will not be present, the power expander will probably continue to work. I don’t remember if the PWM signals are still present, but it sounds like they may be. I will check this tonight and see if it is normal behavior. Disabling the robot should have stopped the motors.
My assumption (unproven) is that EasyC and ROBOTC have different implementations of the I2C driver. I have 2 IMEs, so I will set something up later and see if I can duplicate any of this. How often does your code poll the nMotorEncoder value?
We have experienced the same problem; we use EasyC. Single encoder on a single motor. All the motors seem to “lock” in place when it happens. We have bent shafts because of this, as well as bent robot pieces caused by the runaway motors.
I never thought about it before now, but ever since we replaced the motor that had the IME, we have not had the problem. In other words, we are no longer using the IMEs and we no longer have the problem.
I would be happy to post the code if you believe it would help.
We have encountered this problem on occasion as well. I have come up with a theory (as yet unproven) that the severity of the problem is proportional to the number of encoders you use. Our team uses two 269 encoders, and for us the problem occurs only rarely; I would say about once every 15 or so runs. 254pride’s team, though, is using four encoders (two 269s and two 393s), and they encounter the problem much more often, maybe once every 2 runs. I don’t know if anyone has any data to back this up or disprove it, but it is an idea.
So you had the same problem and lost control of the robot? Did you have the same LED pattern that tutman96 describes? It’s a good theory: as more encoders are added to the chain, the signal integrity may degrade, and perhaps there will be more I2C communication. One test I want to do later is to see what happens if the I2C chain is broken after initialization: does the firmware handle this error gracefully or not?
jchang254 has actually just left our build session this afternoon, but I think I can speak for him. They do experience the same LED pattern that tutman96 described (we on 254B also experience the same pattern). Both 254B and 254E (two of our teams who use the I2C encoders) are running RobotC 3.08.
Since I am in the lab, I went ahead and ran a few quick tests. For the first test I:
1: Set two motors (the intake) to spin continually.
2: Unplugged all four I2C encoders directly from the cortex.
3: Plugged all four I2C encoders back into the cortex.
4: Observed the cortex for an error-code indication.
5: Repeated the process ten times.
No error was observed.
For the second test, rather than unplugging directly from the cortex, I interrupted the chain (i.e. leaving two encoders connected to the cortex and the other two disconnected).
The same process as above was repeated.
On the 2nd and 7th trials, after step 3, the cortex went into the same error state described by tutman96 in his initial post.