By far the most fun I’ve had in VEX. This past year has been unforgettable.
If you’d like to learn more about how our autonomous code works, you can find out more in the video description.
On the build side of things, this year raised new challenges for our designs. Fitting into size constraints is challenging, especially when you’re packing on a Clippard air tank, Jetson Nano, battery pack, two Intel Realsense cameras, and a bunch of cables.
Our 24" robot makes use of 14 laser cut and 3D printed parts throughout its build. At its core: an 8 motor X drive. This drive makes no compromise between strength and speed, resulting in a robust and flexible platform for code and experimentation. The wheels are VEXpro omni-wheels to handle the unusual forces of an X drive under such weight. This robot scores rings using a wide intake, which gives the object detection extra margin for error.
Our 15" robot required us to get a little creative to make space. This year, we made use of 3D printing, laser cutting, and CNC milling across both robots to reduce size and increase functionality. The robots feature custom milled gears, low-profile motor mounts, ball bearings, drive sides under 3.2" wide, over 41 total custom parts, and 221 total design revisions. Our CAD can be found at this link!
Feel free to ask any questions about the build / machining process! My discord is mae#1194, and our team can be found in VTOW as well.
This is very cool. I’m curious: are people in VRC able to do object detection with the VEX vision sensor?
I can’t believe someone actually did it. Good, efficient object detection. Good job to your team, you sure as hell earned it. I can only hope our robot is able to work that amazingly.
Yes, but the VEX vision sensor is nothing more than a device that detects blobs of color in a camera image.
The Intel RealSense cameras they used have depth perception and (I haven’t read their code yet, but it’s possible they did this) could be used to detect the actual size and distance of the aforementioned color blobs. That is a whole different level of object detection.
Can programmers get direct access to the camera feed in the vision sensor? It would be very helpful for people who want to do object recognition.
I’m a builder, but I can speak on how the code works. The Intel RealSense cameras have color cameras built in alongside their depth sensors. The Jetson Nano runs object recognition on the color frame using a YOLOv5 model, then uses the bounding box of each detection to sample the corresponding region of the depth frame and find that object’s distance.
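That depth-lookup step can be sketched without any camera hardware: given a depth frame as a 2D array (what a RealSense driver ultimately hands back) and a detection’s bounding box, take a robust statistic over the box’s depth pixels. This is a minimal sketch, not their actual code; the function name `depth_of_box` and the 1 mm-per-unit depth scale are assumptions.

```python
import numpy as np

def depth_of_box(depth_frame, box, depth_scale=0.001):
    """Estimate an object's distance in meters from a depth frame (2D array
    of raw depth units) and a detection bounding box (x1, y1, x2, y2).
    Uses the median of the box's pixels so stray readings don't dominate."""
    x1, y1, x2, y2 = box
    region = depth_frame[y1:y2, x1:x2]
    valid = region[region > 0]          # 0 means "no depth reading"
    if valid.size == 0:
        return None
    return float(np.median(valid)) * depth_scale

# Toy 480x640 depth frame: background at 2000 mm, an "object" at 800 mm.
frame = np.full((480, 640), 2000, dtype=np.uint16)
frame[200:280, 300:400] = 800
print(depth_of_box(frame, (300, 200, 400, 280)))  # roughly 0.8 (meters)
```

The median (rather than the mean) matters in practice because bounding boxes usually include some background pixels and depth cameras return zeros where they have no reading.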
The L515 cameras use LiDAR, and the D435 cameras use stereo depth recognition (similar to how an Xbox Kinect works).
If they could, that would still be great; however, object recognition takes a lot of processing, so the brain probably would not be able to handle it on its own. The VEX AI camera that’s currently in the works, afaik, takes care of that by doing the model processing onboard.
No, unfortunately. The vision sensor is completely self-contained, and the only data it passes back to the brain is what is exposed to you through the VEX API.
You can set up hand/face/posture recognition with the combination of Python, MediaPipe, and OpenCV using any camera plugged into your computer. The VEX vision sensor is just a camera that works with the V5 system.
Yeah, I’ve done that with my webcam before; I just want to implement it on VEX.
We use serial between the Jetson Nano and the brain’s microUSB port to communicate whatever info the Jetson Nano processes back to the brain. Even if you could, if it’s simply to detect regions of color, a Raspberry Pi 4 would probably suffice. Object recognition with a trained model definitely requires a more graphically powerful computer, such as the Jetson Nano we used.
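One practical detail of a serial link like this is framing the data so the brain can parse it reliably. The sketch below shows one possible packet format; the header bytes, field layout, and checksum scheme are all assumptions for illustration, not the team’s actual protocol.

```python
import struct

# Hypothetical framing: two header bytes, then class id (uint8),
# depth in mm (uint16), horizontal pixel offset from image center (int16),
# and a one-byte checksum (sum of payload bytes mod 256).
HEADER = b"\xAA\x55"
FMT = "<BHh"  # little-endian: class id, depth mm, x offset

def encode_detection(class_id, depth_mm, x_offset):
    payload = struct.pack(FMT, class_id, depth_mm, x_offset)
    checksum = sum(payload) % 256
    return HEADER + payload + bytes([checksum])

def decode_detection(packet):
    assert packet[:2] == HEADER, "bad header"
    payload, checksum = packet[2:-1], packet[-1]
    assert sum(payload) % 256 == checksum, "corrupted packet"
    return struct.unpack(FMT, payload)

pkt = encode_detection(1, 800, -25)
print(decode_detection(pkt))  # (1, 800, -25)
```

The same bytes would be written to the serial port on the Jetson side and read and validated on the brain side; the checksum lets the receiver drop packets that arrive garbled mid-stream.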
I haven’t tested this, but I think it would be possible to get similar results to game element detection without a model by finding regions of color. Vex game elements, like discs for example, have bright colors that are pretty distinguishable from their surroundings on camera.
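That model-free approach can be prototyped with plain array operations: threshold an HSV image on hue, saturation, and value, then take the bounding box of the matching pixels. A minimal sketch under stated assumptions; the hue range, thresholds, and function name are illustrative, not tuned for any real game element.

```python
import numpy as np

def find_color_region(hsv, hue_lo, hue_hi, sat_min=100, val_min=80):
    """Return the bounding box (x1, y1, x2, y2) of pixels whose hue lies in
    [hue_lo, hue_hi] and whose saturation/value clear minimum thresholds,
    or None if nothing matches. Expects an HxWx3 HSV array."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = (h >= hue_lo) & (h <= hue_hi) & (s >= sat_min) & (v >= val_min)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return tuple(int(n) for n in (xs.min(), ys.min(), xs.max(), ys.max()))

# Toy 100x100 HSV image: dark background, one bright "disc-colored" patch.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[20:40, 30:50] = (30, 200, 200)  # hue 30 ~ yellow in OpenCV's 0-179 scale
print(find_color_region(frame, 25, 35))  # (30, 20, 49, 39)
```

The weakness of this approach, compared to a trained model, is exactly what the saturation/value thresholds hint at: lighting changes shift all three channels, so the ranges that work at one venue may fail at another.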
Which type of plastic are you using on the x-drive gear assembly?
The plastic used across both robots is 1/8" Delrin (excluding 1/4" Delrin on the lift). Delrin has a very low friction coefficient and is pretty easy to machine.