Anyone have any info?
Right now, I’m really jealous of the Project Lead the Way teachers who got the sneak peek at the PLTW conference.
I’d also like to say how impressed I am with PLTW. They’re doing an awesome job developing curriculum and programs for teachers to bring STEM into the classroom in a real way. Our school should look into it.
I’ve got my hands on the PLTW kit (270-5826). No documentation whatsoever, and the whole internet is silent. PLTW must be a really secretive organization.
Anyway, what I have gathered so far:
The camera has 3 ports and a single button:
- The normal VEX IQ port (I2C). The brain that came in the kit, running firmware v2.1.0.b1, recognizes it as a Vision Sensor; a brain with regular v2.0.1 doesn’t see anything.
- An RJ11-like 4-pin jack; I’d assume a serial line (for a reason spelled out below).
- A micro-USB port.
On USB, it enumerates as:
[969748.016686] usb 2-1.2: new high-speed USB device number 61 using ehci-pci
[969748.109575] usb 2-1.2: New USB device found, idVendor=2888, idProduct=0507
[969748.109579] usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[969748.109581] usb 2-1.2: Product: Vision Sensor
[969748.109583] usb 2-1.2: Manufacturer: VEX Robotics
[969748.109584] usb 2-1.2: SerialNumber: 276-4850
and the USB interface is proprietary.
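For anyone who wants to poke at it without the Pixy tooling, here’s a minimal libusb-1.0 sketch of my own (nothing official) that just opens the device by the VID/PID from the dmesg output above. The endpoint protocol is the proprietary part; this only proves you can claim the device:

#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void)
{
    libusb_context *ctx;
    libusb_device_handle *h;

    if (libusb_init(&ctx) != 0)
        return 1;

    /* VID/PID as enumerated above: 2888:0507 */
    h = libusb_open_device_with_vid_pid(ctx, 0x2888, 0x0507);
    if (!h) {
        fprintf(stderr, "Vision Sensor not found (or no permission)\n");
        libusb_exit(ctx);
        return 1;
    }
    printf("Opened 2888:0507 (VEX Vision Sensor)\n");

    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}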
Anyway, no one is going to do anything like that in a vacuum, so I placed a bet on CMUcam/Pixy. Stock Pixy utilities didn’t recognize the camera, but when I hardcoded the aforementioned VID/PID,
libpixyusb became happy and tried talking to the camera:
Hello Pixy:
libpixyusb Version: 0.4
Pixy Firmware Version: 1.0.6
Detecting blocks...
frame 0:
frame 1:
frame 2:
frame 3:
frame 4:
frame 5:
Still not helpful, and PixyMon didn’t work either. But the button had some effect, so I tried holding it, and it apparently “trained” the camera, or maybe switched between modes.
After that, both the VEX Brain’s device info screen and hello_pixy over USB started showing coordinates:
frame 1355:
sig: 1 x: 158 y: 105 width: 316 height: 211
frame 1356:
sig: 1 x: 158 y: 105 width: 316 height: 211
frame 1357:
sig: 1 x: 158 y: 58 width: 316 height: 116
sig: 1 x: 223 y: 143 width: 10 height: 7
sig: 6 x: 141 y: 127 width: 30 height: 21
sig: 6 x: 107 y: 143 width: 30 height: 8
So I think I’m getting close to something usable.
Hey, VEX, the cat is out of the bag, can you share more info on the venerable 276-4850?
OK, the I2C protocol is very different from the Pixy’s and follows the I2C communication patterns of other VEX IQ devices, except that it looks like it has a banked set of registers.
The camera reports as sensor type 0x0b (0x02 is motor, then come LED, light sensor, bumper, and gyro, and 0x07 is sonar).
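For reference, that type list as a C enum; only 0x02, 0x07, and 0x0b are values I’ve actually observed, the 0x03–0x06 assignments are just inferred from the ordering, so treat those as guesses:

/* VEX IQ device type IDs as reported over I2C.
 * 0x03-0x06 are inferred from the ordering above, not observed. */
enum viq_device_type {
    VIQ_TYPE_MOTOR  = 0x02,
    VIQ_TYPE_LED    = 0x03,  /* inferred */
    VIQ_TYPE_LIGHT  = 0x04,  /* inferred */
    VIQ_TYPE_BUMPER = 0x05,  /* inferred */
    VIQ_TYPE_GYRO   = 0x06,  /* inferred */
    VIQ_TYPE_SONAR  = 0x07,
    VIQ_TYPE_VISION = 0x0b,
};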
It seems the camera is able to recognize 7 different classes of objects (likely just 7 different colors) that you have trained it on.
Then, it reports all the objects of each class (up to 4? more?). The brain queries each class separately, first switching the register bank. Let’s say my camera is on port 8 and thus gets assigned I2C address 0x22.
The brain first selects the bank (1) with an I2C write to register 0x24:
<start 0x22> 0x24 0x01 0x00 <stop>
Then, it sets up a read from register 0x26 upward and reads (here getting a “no object” report):
<start 0x22> 0x26 <restart 0x23> 0xff 0xff 0xf0 <stop>
It then proceeds through the other banks to query for the other classes. I had the viewport covered by a single green blob, class 6, so:
<start 0x22> 0x24 0x06 0x01 <stop>
<start 0x22> 0x26 <restart 0x23> 0x00 0x00 0x9e 0xd3 0x00 0x00 0xff 0xff 0xf0 <stop>
Here, the report says: object at 0,0, width 158 (0x9e), height 211 (0xd3).
For multiple objects, the report was like:
... <restart 0x23> 55 13 49 4d 00 00 18 bf 44 14 00 00 87 7a 17 1b 00 00 42 a6 06 0b 00 00 ff ff f0
(that was 4 objects; I have no idea what the 5th and 6th bytes of each object report mean…)
With the above, it should be quite easy to implement a camera library using direct I2C messages on regular firmware.
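To make that concrete, here’s a hedged C sketch of the query sequence, written against Linux /dev/i2c as if the sensor’s I2C pins were wired straight to something like a Raspberry Pi (on the brain you’d issue the same bytes through whatever custom-I2C API the firmware exposes). The 0x22/0x23 in the captures are 8-bit write/read addresses, so the 7-bit address is 0x11; the read length and the third bank-select byte are guesses, as commented:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>

#define VISION_ADDR7  0x11   /* 7-bit form of the 0x22/0x23 pair above */
#define REG_BANK_SEL  0x24
#define REG_OBJECTS   0x26
#define MAX_OBJECTS   4
#define REPORT_LEN    (MAX_OBJECTS * 6 + 3)  /* guess: 6 bytes/object + ff ff f0 */

struct vision_obj {
    uint8_t x, y, w, h;   /* as seen in the dumps */
    uint8_t unk5, unk6;   /* 5th/6th bytes: meaning unknown */
};

/* Parse 6-byte object records until the 0xff 0xff terminator. */
static int parse_report(const uint8_t *buf, int len, struct vision_obj *out)
{
    int n = 0;
    for (int i = 0; i + 6 <= len && n < MAX_OBJECTS; i += 6) {
        if (buf[i] == 0xff && buf[i + 1] == 0xff)
            break;  /* end-of-report marker */
        out[n] = (struct vision_obj){ buf[i], buf[i + 1], buf[i + 2],
                                      buf[i + 3], buf[i + 4], buf[i + 5] };
        n++;
    }
    return n;
}

static int query_class(int fd, uint8_t klass, struct vision_obj *out)
{
    /* Bank select: <start 0x22> 0x24 <class> 0x00 <stop>.  The brain sent
     * 0x00 or 0x01 as the third byte; meaning unclear, 0x00 used here. */
    uint8_t sel[3] = { REG_BANK_SEL, klass, 0x00 };
    uint8_t reg = REG_OBJECTS;
    uint8_t buf[REPORT_LEN];
    /* Combined write/restart/read from 0x26, matching the captures.  The
     * brain's read lengths varied, so we read the max and stop at ff ff. */
    struct i2c_msg msgs[2] = {
        { .addr = VISION_ADDR7, .flags = 0,        .len = 1,           .buf = &reg },
        { .addr = VISION_ADDR7, .flags = I2C_M_RD, .len = sizeof(buf), .buf = buf  },
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };

    if (write(fd, sel, sizeof(sel)) != (ssize_t)sizeof(sel))
        return -1;
    if (ioctl(fd, I2C_RDWR, &xfer) < 0)
        return -1;
    return parse_report(buf, sizeof(buf), out);
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0 || ioctl(fd, I2C_SLAVE, VISION_ADDR7) < 0)
        return 1;
    for (uint8_t klass = 1; klass <= 7; klass++) {
        struct vision_obj obj[MAX_OBJECTS];
        int n = query_class(fd, klass, obj);
        for (int i = 0; i < n; i++)
            printf("sig: %u x: %u y: %u width: %u height: %u\n",
                   klass, obj[i].x, obj[i].y, obj[i].w, obj[i].h);
    }
    close(fd);
    return 0;
}

On an actual IQ brain, the address depends on the port (0x22 here because the camera was on port 8), so a real library would derive it from the port number.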
I wish I could get the USB interface decoded as well; PixyMon didn’t work even with the fixed VID/PID…
I’m pretty excited about this. I have a Pixy that I’m using, and I really like that it can do a ton of on-board processing. Thanks Nenik for doing the research on this!
Now that the V5 sensor has been formally announced for VRC, I’d really like to see the announcement for VIQ so we could have support in RobotC etc. Maybe when @Paul_Copioli gets back from CES we’ll see something next week.
Do you think VEX IQ will get an update also? I’m about to buy 20 new kits and don’t want to if they’re going to upgrade the VEX IQ brain… or do you think both platforms will be able to use the V5 brain?
Presumably the PLTW kits are just standard IQ Brains with a firmware update.
IQ getting a V5 update is unlikely. The IQ system works reasonably well and is pretty much on the V5 level feature-wise, so an upgrade would be hard to justify.
Anyway, the vision sensor has both V5 and IQ connectors and is compatible with both systems. The APIs would likely be the same.
I’m waiting for VEX Coding Studio to become available, but at this point, I think I’d be able to write a RobotC library for IQ that makes the vision sensor fully usable without any official firmware or RobotC support (you can already send custom I2C messages to ports from RobotC; it’s supported for custom sensors and whatnot…).
I don’t want the V5 update, I want the V5 sensor. I’m happy with the VIQ brain. I want the sensor and support for it in RobotC. I’ve got a Pixy that I use with other things; it’s a great device.
The V5 Vision camera is IQ compatible, so it will likely get the official support soon. But you applied for V5 beta, didn’t you? Fingers crossed for you!
I did apply, along with 2,000 other teams. So getting just the camera to use with VIQ would also be great.
@Foster The V5 Vision Sensor will be compatible with VEX IQ, but we will not be updating ROBOTC to support the V5 Vision Sensor.
VEX Coding Studio will be compatible with both V5 and VEX IQ at the production launch, which will be timed just prior to VEX Worlds. However, for the Beta, VCS is only compatible with the V5 hardware, as we need our Beta testers to beat up both the hardware and the software.
Additionally, the programming languages that VCS will support will be:
- Modkit (both blocks and text, with a bi-directional block-to-text and text-to-block feature to help students with the transition to text)
- ROBOTC++
- C++
For the Beta, we will only have Modkit and C++ available, as we are still working on the functionality that will be included in ROBOTC++ (including which C++ libraries will be supported in ROBOTC++).
We are trying to make the ROBOTC++ language extremely user-friendly to continue the transition from blocks to simple text to “almost” C++ to C++. We have a grand vision for how the blocks-to-text transition should be taught and managed, and we want to get it right before we put it out to our users.