Hello VAIC Teams and the VEX Community
from Bob Mimlitch, Co-founder of VEX
It has been a long time since we announced the VEX AI Competition (VAIC) Pilot. The global pandemic has slowed many teams' ability to participate in the Pilot. VEX has used this time to accelerate our hardware and software development for the next VAIC season. This status report is intended to inform the VEX community of the potential and future of VAIC. Many of the plans for the future of VAIC were never announced in detail, so I would like to take a minute and do that now.
We developed VAIC for several reasons. The purpose goes beyond wanting to make sure that VEX and RECF have the best robot competitions in the world. We heard from experienced VRC and VEX-U teams that they wanted bigger challenges.
VAIC Goals
- An autonomous-only challenge
- More software engineering: object detection, GPS, communications, and strategy
- More mechanical engineering: 3D printing, machining, new materials, and two robots
- More electrical engineering: custom electronics, circuits, and sensors are allowed
A little background first
VEX has been internally developing AI for use in competitions for some time now. Along the way we realized that without robot position information, the usefulness of object detection was limited, specifically for teams programming an adaptive strategy. So we expanded our efforts to develop an indoor GPS specifically for VEX competitions. We also realized that robots needed to coordinate their efforts, so we added robot-to-robot communications to our growing list of development needs.
We realized along the way that the typical VAIC team might not have the time or experience to train an object detection model. For one, it takes months of effort. For another, we use a custom dataset of hundreds of thousands of tagged images of VEX game and goal objects in a variety of locations, orientations, lighting conditions, and more. Training models takes significant experience with specialized tools. VEX, however, has one distinct advantage: we know the game a year in advance. We can develop, train, test, and fine-tune the object detection model and have it ready at Game Kickoff. This allows teams to make use of the information without requiring every team to duplicate the effort year after year. Teams can still develop their own object detection methods if they desire. We have future plans to make this task easier.
VEX GPS Camera Sensor
The VAIC Pilot implementation of GPS used a separate FLIR camera with processing on the Nvidia Nano, and the resulting position data was sent to the V5 Brain via USB. This method did not allow easy integration into VEXos and VEXcode. We plan to correct this with new hardware and software.
VEX is building a GPS Camera Sensor that captures VEX GPS Field Code images and calculates location. This sensor uses a monochrome camera to improve low-light performance and a global shutter to eliminate vibration from the images. An internal processor does all the position calculations in real time. This sensor has a V5 Smart Port to pass this data directly to the V5 Brain and VEXos, making the data easy to use.
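To give a sense of how that data might be used, here is a minimal sketch of polling a field pose (x, y, heading) from the sensor. The class, method, and field names below are placeholders invented for illustration; they are not the released VEXos or VEXcode API.

```python
# Hypothetical sketch: polling field position from the GPS Camera Sensor.
# The names below are placeholders, not the released VEXos/VEXcode API.

import time
from dataclasses import dataclass

@dataclass
class FieldPose:
    x_mm: float         # position along the field X axis, millimeters
    y_mm: float         # position along the field Y axis, millimeters
    heading_deg: float  # robot heading in degrees

class GpsCameraSensor:
    """Placeholder for a Smart Port GPS Camera Sensor interface."""
    def pose(self) -> FieldPose:
        # A real implementation would return the latest fused estimate
        # from the sensor; here we return a fixed dummy pose.
        return FieldPose(x_mm=600.0, y_mm=-300.0, heading_deg=90.0)

gps = GpsCameraSensor()
for _ in range(5):
    p = gps.pose()
    print(f"x={p.x_mm:.0f} mm  y={p.y_mm:.0f} mm  heading={p.heading_deg:.1f} deg")
    time.sleep(0.02)  # the sensor refreshes its estimate every few milliseconds
```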
The GPS software has also improved significantly since the original release for the Nvidia Nano. The first improvement is that the software can now find its position when viewing cluttered, fragmented scenes with only bits and pieces of the Field Code visible. Because this lets the camera use code segments spanning a wider angle, triangulation and positional accuracy improve. Additional software improvements have reduced the time to compute an optical position fix to 40 milliseconds. The second improvement is made possible by the GPS Camera Sensor's internal 6-axis gyro and accelerometer chip. A Kalman filter combines gyro and accelerometer data with optical position fixes to produce a position estimate every 5 milliseconds. This allows the sensor to continue estimating position even if the GPS Camera Sensor's view of the Field Code is completely blocked.
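For readers unfamiliar with that fusion idea, here is a simplified one-axis sketch: dead-reckon from IMU data every 5 ms and correct with an optical fix when one arrives (roughly every 40 ms). This is illustrative only; the noise values are made up and it is not the filter running inside the sensor.

```python
# Simplified one-axis Kalman filter: predict from IMU displacement every
# tick, correct when an optical fix from the Field Code arrives.

class AxisKalman:
    def __init__(self, q=4.0, r=25.0):
        self.x = 0.0      # position estimate (mm)
        self.p = 1000.0   # estimate variance (mm^2)
        self.q = q        # process noise added per prediction step
        self.r = r        # optical-fix measurement noise (mm^2)

    def predict(self, dx_imu):
        """Advance the estimate by the IMU-derived displacement."""
        self.x += dx_imu
        self.p += self.q

    def correct(self, z_optical):
        """Blend in an optical position fix."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z_optical - self.x)
        self.p *= (1.0 - k)

# Example: the robot moves ~1 mm per 5 ms tick; an optical fix arrives
# every 8th tick (about 40 ms).
kf = AxisKalman()
true_x = 0.0
for tick in range(40):
    true_x += 1.0
    kf.predict(dx_imu=1.0)            # IMU dead-reckoning step
    if tick % 8 == 7:
        kf.correct(z_optical=true_x)  # optical fix from the Field Code
print(f"estimated x = {kf.x:.1f} mm, true x = {true_x:.1f} mm")
```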
This sensor is expected to be available in May of 2021. The sensor kit will come with a new 12-foot-long GPS Field Code strip that is more rugged and easier to install.
VEX AI Stereo Camera Sensor
The VAIC Pilot implementation of AI object detection used an Intel stereo depth camera and processing on an Nvidia Nano. The disadvantages of using the Nano are its bare PCB, which needs a housing; the fan required for cooling; and the need to draw power from the V5 3-wire ports. Additionally, the Nano can only communicate with the V5 Brain via USB, making software more complex than necessary. We plan to correct these issues with new hardware and software.
VEX is building a stand-alone AI Stereo Camera Sensor that integrates two cameras and more powerful AI image processing into a single device. This sensor has a V5 Smart Port to pass this data directly to the V5 Brain and VEXos, making the data easy to use. This sensor also has two USB host/device ports and built in 5.8 GHz WiFi.
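As a rough picture of how teams might consume that data, here is a hypothetical sketch that filters detections by class and confidence and picks the one nearest the robot. The record layout and field names are assumptions for illustration, not the released API.

```python
# Hypothetical detection records from the AI Stereo Camera Sensor.
# The layout (class name, confidence, field coordinates) is an assumption.

from dataclasses import dataclass

@dataclass
class Detection:
    class_name: str    # e.g. "ball" or "goal"
    confidence: float  # 0.0 - 1.0
    x_mm: float        # object position on the field, millimeters
    y_mm: float

def nearest(detections, robot_x, robot_y, class_name, min_conf=0.5):
    """Return the closest detection of the requested class, or None."""
    best, best_d2 = None, float("inf")
    for d in detections:
        if d.class_name != class_name or d.confidence < min_conf:
            continue
        d2 = (d.x_mm - robot_x) ** 2 + (d.y_mm - robot_y) ** 2
        if d2 < best_d2:
            best, best_d2 = d, d2
    return best

detections = [
    Detection("ball", 0.91, 450.0, 120.0),
    Detection("ball", 0.40, 900.0, -50.0),   # low confidence, ignored
    Detection("goal", 0.88, 1700.0, 1700.0),
]
target = nearest(detections, robot_x=0.0, robot_y=0.0, class_name="ball")
print(target)
```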
The software for the AI Camera Sensor is undergoing significant improvements. Our goals for the 2021 Game are improved object detection in low light and at a distance. In addition to those improvements, Kalman filters will be used to reduce flickering in object detection. We will also provide support for object detection issues: if a team is struggling to detect an object, they can send us images of their environment and what their robot sees, and we can create additional training data and push an update. Getting this feedback will help us address more environment scenarios.
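To show what the flicker problem looks like in practice, here is a much simpler stand-in for that Kalman-based smoothing: a per-object presence score with hysteresis, so a single missed frame does not make an object vanish. It is a sketch with made-up thresholds, not the method the sensor will use.

```python
# Presence tracking with hysteresis: a detection that drops out for one or
# two frames is still reported as visible instead of flickering on and off.

class PresenceTracker:
    def __init__(self, rise=0.4, fall=0.15, on=0.6, off=0.3):
        self.score = 0.0
        self.rise, self.fall = rise, fall  # how fast the score reacts
        self.on, self.off = on, off        # hysteresis thresholds
        self.visible = False

    def update(self, detected_this_frame):
        if detected_this_frame:
            self.score = min(1.0, self.score + self.rise)
        else:
            self.score = max(0.0, self.score - self.fall)
        if not self.visible and self.score >= self.on:
            self.visible = True
        elif self.visible and self.score <= self.off:
            self.visible = False
        return self.visible

# A detection stream with two dropped frames does not cause flicker:
tracker = PresenceTracker()
frames = [1, 1, 1, 0, 1, 0, 1, 1]
print([tracker.update(bool(f)) for f in frames])
```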
Robot detection is currently not built into the VEX AI network because we lack sufficient data on VAIC robots and their designs. This is largely due to the machining rules. To alleviate the need to know where all four robots are located, we will share robot locations wirelessly: VEXos will broadcast each robot's GPS location and receive all other robots' locations. We will evaluate the usefulness of this once teams start using the new system.
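A minimal sketch of that idea, assuming a small fixed-size message of robot id, x, y, and heading, is shown below. The message layout and the send/receive hooks are placeholders for illustration, not the VEXos protocol.

```python
# Illustrative robot-to-robot location sharing: pack a pose into a small
# message, broadcast it, and keep the latest pose from every other robot.

import struct
import time

POSE_FORMAT = "<Bfff"  # robot id, x (mm), y (mm), heading (deg)

def pack_pose(robot_id, x_mm, y_mm, heading_deg):
    return struct.pack(POSE_FORMAT, robot_id, x_mm, y_mm, heading_deg)

def unpack_pose(message):
    robot_id, x, y, heading = struct.unpack(POSE_FORMAT, message)
    return robot_id, x, y, heading

# Latest known pose for every robot on the field, keyed by robot id.
field_state = {}

def on_message(message):
    robot_id, x, y, heading = unpack_pose(message)
    field_state[robot_id] = (x, y, heading, time.time())

# Simulate receiving broadcasts from two other robots.
on_message(pack_pose(2, 1200.0, -300.0, 45.0))
on_message(pack_pose(3, -600.0, 900.0, 270.0))
print(field_state)
```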
This sensor is expected to be available in late Q3 of 2021.
Documentation in the Knowledge Base
When the new system is available we will have significantly more documentation to get teams up and running quickly. The topics are listed below.
- Getting Started
- VEX Field and Game Object coordinate system
- VEXcode example code (C++ and Python)
- Software API list and description
- Using object detection data
- Driving to a position (see the sketch after this list)
- Moving a manipulator to an object
- Communication between robots
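As a taste of the "Driving to a position" topic, here is an illustrative calculation that turns a GPS field pose and a target point into a distance and a relative turn, which a drive program could hand to its motion controller. The coordinate convention and the helper itself are assumptions for illustration, not the upcoming documentation's API.

```python
# Turn a field pose and a target point into a turn angle and drive distance.
# Assumes heading is measured in degrees with 0 pointing along field +Y.

import math

def drive_command(pose_x, pose_y, heading_deg, target_x, target_y):
    """Return (distance_mm, turn_deg) needed to face and reach the target."""
    dx, dy = target_x - pose_x, target_y - pose_y
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy))               # 0 deg = field +Y
    turn = (bearing - heading_deg + 180.0) % 360.0 - 180.0   # shortest turn
    return distance, turn

# Robot at (0, 0) facing 90 deg; target at (600, 600).
distance, turn = drive_command(0.0, 0.0, 90.0, 600.0, 600.0)
print(f"turn {turn:.1f} deg, then drive {distance:.0f} mm")
```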
Final Thoughts
VEX and the RECF have put significant thought and effort into VAIC. We will continue to grow and evolve this program with the goal of keeping it the best robot competition in the world. We hope to see you all in person soon. Thank you all. Stay safe.