Reinforcement Learning (Machine Learning) in VEX

After implementing PID, Motion Profiling, and Odometry in VEX Robotics, I wanted to take this to the next level and try to implement Reinforcement Learning, a branch of Machine Learning, as a fun side project to better understand and grasp the ideas of AI through VEX Robotics. I am looking to see if others would like to collaborate with me on this topic. If you are interested in joining, contact me through Discord. :slight_smile:

Username: Robotic Pizza#9607.

9 Likes

LMAO. The number of iterations this is going to take to train seems unreasonable. Sounds like fun though.

3 Likes

There was a somewhat recent discussion on VTOW, and the consensus is (as it always has been) that there is no use for machine learning in VEX, given the brain's limited processing power and the lack of sensor information to process.


Edit:

Here are the relevant quotes:

From Thomas | Hail

On the note about using “AI” in autonomous, I don’t think the V5 Brain can handle any remotely intensive ML application. You should, however, be using sensors in autonomous to make sure your routine is accurate and precise. Decision-making autonomous programs are extremely complicated, and due to the isolated nature of both the VRC and VEXU auton periods, I don’t believe having any decision making would be worthwhile, given the time it would take to code for not the greatest of gains. If you do want to check out robots doing decision making in a VEX setting, the VAIC competition is starting up this season, which will feature fully autonomous bots, most with complex onboard decision making.

From me:

Technically, the V5 brain has enough hardware to make it possible to run an ML model on TensorFlow Lite, but yeah:

  • no decision making (or obstacle avoidance) is needed in VRC/VEXU
  • I have no idea what you would use ML for; the sensors aren’t great and you don’t have a ton of data in the first place
  • the V5 can’t train a model, so you will have to find a way to simulate/get data from the V5 and train on a PC (see the logging sketch after this list)
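
As a minimal sketch of that last point, assuming PROS (which mounts the microSD card at /usd/) and hypothetical port assignments, on-robot data collection for offline training could look something like this:

```cpp
// Minimal sketch, assuming PROS (microSD mounted at /usd/) and
// hypothetical port assignments: log sensor readings to a CSV so a
// model can be trained offline on a PC.
#include "main.h"
#include <cstdio>

void log_training_data() {
    pros::Imu imu(10);                  // hypothetical smart ports
    pros::Distance dist(11);

    FILE* log = fopen("/usd/training_data.csv", "w");
    if (log == nullptr) return;         // no microSD card inserted
    fprintf(log, "heading_deg,distance_mm\n");

    for (int i = 0; i < 3000; ++i) {    // ~60 s of data at 50 Hz
        fprintf(log, "%.2f,%d\n", imu.get_heading(), dist.get());
        pros::delay(20);
    }
    fclose(log);
}
```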
13 Likes

The only way I could see this being done effectively is through the use of some simulator (namely Gazebo) to simulate games, but, as stated above, the number of iterations needed to achieve even a small amount of success pretty much rules this out.
If you are truly interested in machine learning, start a VEX AI team. I personally cannot wait for the upcoming season and like the idea of competing against collegiate students.
I also heard rumors that VEX was considering legalizing Raspberry Pis for VRC, which would definitely make ML viable in VRC (most likely not anytime soon, though).

1 Like

V5 does not have the power or the reason to implement machine learning.

but, if you’re doing that, might I suggest a blockchain as well?

8 Likes

As I understand it, blockchains are used, mostly in cryptocurrency, to increase security, not as a way to store data that isn’t particularly private.

Yes, I understand. I was just making a joke about how everything these days is blockchain machine learning, because those are the buzzwords that sell Kickstarter campaigns.

13 Likes

My team has been thinking about doing this for a while now, and we would be willing to collaborate on a project like this. I think we have a few workarounds for the limited processing power of the VEX brain and would like to discuss them. I friended you on Discord; my username is Shoes#7777.

I would definitely use machine learning for VAIC, but adding reinforcement learning to a fully autonomous robot is something I would do only after everything else worked. There would likely be some performance increase from replacing an existing high-level strategy AI with a learned one, but until you are at the point of having everything else sorted out, there is no point.

4 Likes

@1961Z were you thinking of using only game-legal equipment, or going outside of that? I think it is reasonable to add a Raspberry Pi and a Google Coral to do vision processing. This approach is commonly used in FRC, where vision processing is useful enough that there is a standard Raspberry Pi image for it, and in 2020 an intern made a semi-official ML model to track game elements using AWS SageMaker and the Coral. One thing you would have to figure out is how to facilitate communication between the Raspberry Pi and the V5 brain. The Pi can be connected by USB to the V5, and C++ I/O statements should work for this, but I have not seen a bona fide code example with VEXcode, only one with Robot Mesh.
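
As a hedged sketch of what that could look like, assuming PROS (which maps stdin/stdout to the V5's USB serial connection) and a made-up newline-delimited "x,y" message format from the Pi:

```cpp
// Hedged sketch: PROS routes stdin/stdout over the V5's USB serial link,
// so standard C++ I/O can read messages from the Pi. The "x,y" line
// format here is an illustrative assumption, not a standard protocol.
#include "main.h"
#include <iostream>
#include <sstream>
#include <string>

void read_vision_targets() {
    std::string line;
    while (std::getline(std::cin, line)) {  // blocks until the Pi sends a line
        std::istringstream ss(line);
        double x, y;
        char sep;
        if (ss >> x >> sep >> y && sep == ',') {
            // hand the detected game element's position to the drive code here
            printf("target at (%.1f, %.1f)\n", x, y);
        }
    }
}
```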

2 Likes

Most of the examples for this that are around are actually in PROS, as is the alternative method that uses the smart ports.
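
For the smart-port route, a minimal sketch assuming PROS's generic serial driver (pros::Serial), with the port number and baud rate as placeholders:

```cpp
// Minimal sketch of the smart-port alternative, using PROS's generic
// serial driver; the port number and baud rate are placeholders.
#include "main.h"

void poll_coprocessor() {
    pros::Serial link(1, 115200);            // smart port 1 wired to the Pi
    while (true) {
        while (link.get_read_avail() > 0) {
            std::int32_t b = link.read_byte();  // one byte from the Pi
            // accumulate bytes into complete messages here
            (void)b;
        }
        pros::delay(5);
    }
}
```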

1 Like

This reply largely exists for posterity, considering the members of this thread have likely graduated, but I will leave it for anyone who stumbles onto the thread as I did.

My team has used AI models trained offline in Keras (from TensorFlow) for the past two years to accomplish relatively simple tasks. In Tipping Point, the model took all sensor inputs and output the desired state of the claw pneumatic (grab MOGO/release MOGO). In Spin Up, we took three distance sensor values and had the network output a drive state (full speed back, half back, none, half forward, full forward) to block disks being shot toward the goal in autonomous. Both had success theoretically (96–98% training accuracy) and practically (blocked multiple disks at Worlds).

We started with a simple ANN using Dense layers, or an RNN with LSTMs, built as a sequential model in Keras. Once the network architecture was determined, we added code to export the resulting model to a .model file to be loaded onto a microSD card. This exporting process was accomplished using a slightly modified version of [Keras2Cpp](https://github.com/gosha20777/keras2cpp). Essentially, we cut out lines of code that threw errors until the model exported correctly. Incredibly unprofessional, but hey, I was only a sophomore. Once loaded onto the microSD, the file was opened using Keras2Cpp inside our robot’s code, and inferences were run every loop using an input array of sensor values (repeated every 20–40 ms). There was never any increased latency between controller movements and robot actions, suggesting the brain could handle the calculations just fine.
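
For reference, a hedged sketch of what that inference loop could look like, following Keras2Cpp's documented API (Model::load and Tensor) and the Spin Up setup described above; the file name, sensor ports, and drive-state mapping are illustrative:

```cpp
// Hedged sketch of the on-brain inference loop using Keras2Cpp's
// documented API; file name, ports, and state mapping are illustrative.
#include "main.h"
#include "src/model.h"   // include path depends on how Keras2Cpp is vendored

using keras2cpp::Model;
using keras2cpp::Tensor;

void run_blocker() {
    auto model = Model::load("/usd/blocker.model");  // exported .model file
    pros::Distance d1(1), d2(2), d3(3);              // hypothetical ports

    // Output classes: full back, half back, none, half forward, full forward
    const double speeds[5] = {-1.0, -0.5, 0.0, 0.5, 1.0};

    while (true) {
        Tensor in{3};
        in.data_ = {static_cast<float>(d1.get()),
                    static_cast<float>(d2.get()),
                    static_cast<float>(d3.get())};
        Tensor out = model(in);                      // one forward pass

        // pick the drive state with the highest score
        size_t best = 0;
        for (size_t i = 1; i < out.data_.size(); ++i)
            if (out.data_[i] > out.data_[best]) best = i;
        double drive = speeds[best];
        // send `drive` to the drivetrain motors here
        (void)drive;

        pros::delay(20);                             // matches the 20-40 ms loop
    }
}
```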

However, because of the sketchy nature of using random open-source code that has not been developed for a significant amount of time, we decided to simplify the process of creating these models and created copilot. The repo can be found here. This code creates basic models trained offline in Python and exports the weights in a readable weights.txt file, simplifying debugging and decreasing memory usage. This method offers less customization and fewer advanced techniques, but is a far better starting point.
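
The exact weights.txt layout is defined by that repo; as a generic illustration of the idea, assuming a single dense layer stored as a size header, row-major weights, then biases, the on-brain forward pass reduces to a handful of loops:

```cpp
// Generic illustration of a dense-layer forward pass from plain-text
// weights. The file layout assumed here (rows, cols, row-major weights,
// then biases) is an example, not the copilot repo's actual format.
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<double> dense_forward(const char* path,
                                  const std::vector<double>& input) {
    FILE* f = fopen(path, "r");
    if (f == nullptr) return {};
    int rows = 0, cols = 0;                 // rows = outputs, cols = inputs
    fscanf(f, "%d %d", &rows, &cols);
    std::vector<double> w(rows * cols), b(rows), out(rows);
    for (double& v : w) fscanf(f, "%lf", &v);
    for (double& v : b) fscanf(f, "%lf", &v);
    fclose(f);

    for (int r = 0; r < rows; ++r) {
        double sum = b[r];
        for (int c = 0; c < cols; ++c)
            sum += w[r * cols + c] * input[c];
        out[r] = std::tanh(sum);            // example activation
    }
    return out;
}
```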

I am just realizing after typing all this that it moves the conversation to the top of the forum, so pardon my revival of such an old topic.

7 Likes