I am pleased to announce the release of our open source PROS library copilot, designed to bring machine learning to the masses. With ML and AI booming in the tech industry, I see this library as a great way to introduce people of all experience levels to these technologies. It also provides an opportunity to stimulate growth in the VEX community around implementing ML/AI in more and more game solutions. Please see our repository here to get started.
copilot is a growing project that has only just begun, so while implementing it in your own projects is encouraged, we are always looking for collaborators and continual improvement of the code. We hope that as the seasons progress, more and more teams not only choose to implement this code, but also choose to help add new features for all to use.
And a fair disclaimer, I am no expert, so advice inside and outside of this thread is always appreciated.
Fair objection. I’m not sure changing the name is the best course of action, but I will certainly think it over. I have some doubts this thing will end up popular enough to cause confusion, but it’s definitely something to keep in mind.
Appreciate the advice. Can you explain this a bit more? Are you suggesting pursuing other avenues than machine learning/supervised learning, or suggesting a different path for development? I want to learn as much as I can from those with more experience than myself.
I have been using GitHub Copilot and it does an incredible job, autocompleting roughly 50% of what I expect when coding in PROS, which saves me a lot of unnecessary strain. And for something simple like coding an autonomous with pre-built functions I have made, GitHub Copilot can write about 70% of an entire auton path based on the comments I write.
Given that GitHub Copilot is trained on open source code, I wouldn’t be surprised if it is already fundamentally sound for the majority of PROS C++ programming, as I bet a lot of people have pushed their PROS code to GitHub. Have we verified the effectiveness of GitHub Copilot before jumping down the ML rabbit hole?
Thanks for the advice, it really is appreciated. The TODOs are really just there to give potential users more tools to find creative solutions to challenges. The CNN was planned to be used on time-series data, or on a large image assembled from smaller collections of objects recognized by the vision sensor (that option is very experimental and may never work). The release of copilot comes only after two years of successfully running models exported from Keras on the V5 brain during matches at Worlds. However, I do agree that I may be pushing this at too fast a pace, so slowing down is likely the best solution. Hopefully the project gains enough traction that people far more experienced than I am can help stimulate its growth.
I’m afraid my original post was unclear. copilot is simply a library to create models from data of the user’s choice; technically, it isn’t even V5-specific. The user creates a .csv file of data (likely sensor values) and specifies training parameters in the Python section, trains the model, loads it onto a microSD card, and then runs inference in C++ on the V5 brain. It can be applied to any task the user chooses, but it is meant to solve challenges on the V5 brain, such as controlling motors or pneumatic cylinders. This has nothing to do with generative AI for code completion or anything of the sort.
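To make that workflow concrete, here’s a rough sketch of the same idea in plain Python. This is not the actual copilot API — the CSV column names, the fixed scaling, and the tiny logistic-regression model standing in for the Keras training step are all invented for illustration:

```python
import csv
import io
import math

# Hypothetical CSV of labeled sensor samples: two sensor readings plus the
# desired 0/1 actuator state. Column names are invented for illustration.
CSV_DATA = """distance_mm,lift_deg,label
120,45,1
150,40,1
130,42,1
400,10,0
380,12,0
420,8,0
"""

def scale(distance_mm, lift_deg):
    # Crude fixed scaling so plain gradient descent behaves.
    return [distance_mm / 500.0, lift_deg / 50.0]

def load_csv(text):
    rows = list(csv.DictReader(io.StringIO(text)))
    X = [scale(float(r["distance_mm"]), float(r["lift_deg"])) for r in rows]
    y = [int(r["label"]) for r in rows]
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=500):
    # Tiny logistic regression trained by gradient descent -- a stand-in
    # for the Keras training step on the Python side.
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def infer(w, b, sample):
    # This step is what would run in C++ on the V5 brain, using weights
    # loaded from the microSD card.
    return sigmoid(sum(wj * xj for wj, xj in zip(w, sample)) + b)

X, y = load_csv(CSV_DATA)
w, b = train(X, y)
print(infer(w, b, scale(125, 44)))  # near 1: fire the cylinder
print(infer(w, b, scale(410, 9)))   # near 0: hold
```

The point is the shape of the pipeline (CSV → train → export weights → lightweight inference on the robot), not this particular model.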
I’ve used a lot of AI in my free time. I think it could be really cool to use VEX as a teaching tool for AI, since it is a robotics platform that many schools already have access to through their competition clubs.
On the other hand, I can’t think of anything I would trust AI to do on a robot. I know models can reach very high accuracies, but they cost a lot of compute and are outdone by an extra button press or some sensors programmed to behave a certain way. I just don’t see a use case.
Yeah, that is one of the other use cases for CNNs. It depends on the exact task, but I have used them for this personally. I had an IMU attached to someone’s wrist (think smartwatch), and I was trying to detect how often they had tremors in their hands from Parkinson’s. So I fed a 30-second window of IMU data into a CNN and produced a tremor confidence.
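The windowing step for that kind of time-series CNN looks something like this — the 50 Hz sample rate and the window/stride lengths here are made-up values for illustration, not the ones from my project:

```python
# Slice a stream of IMU samples into fixed-length, overlapping windows
# suitable for feeding a CNN classifier one window at a time.
SAMPLE_RATE_HZ = 50   # assumed sample rate
WINDOW_SEC = 30       # 30-second windows, as in the tremor example
STRIDE_SEC = 15       # 50% overlap between consecutive windows

def make_windows(samples, rate=SAMPLE_RATE_HZ,
                 window_sec=WINDOW_SEC, stride_sec=STRIDE_SEC):
    """Split a flat list of IMU samples into overlapping fixed-size windows."""
    win = rate * window_sec
    stride = rate * stride_sec
    return [samples[i:i + win]
            for i in range(0, len(samples) - win + 1, stride)]

# Two minutes of fake accelerometer readings -> 30 s windows, 15 s apart.
stream = [0.0] * (SAMPLE_RATE_HZ * 120)
windows = make_windows(stream)
print(len(windows), len(windows[0]))  # 7 windows of 1500 samples each
```

Each window then gets one confidence score from the model, rather than classifying sample by sample.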
I can’t quite picture what this means. The vision sensor just tells you “an object of this size was at this pixel location”; it doesn’t actually give you any images. It’s possible you could do something with that, though.
Let’s talk about this. What have you done in the past? I think the obvious use case is learning “dynamics models” of your robot: given motor commands and current velocity, predict the resulting acceleration. Now, I am not sure whether learning this is particularly useful or more accurate than a really simple model, but the data makes it possible.
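For the “really simple model” end of that spectrum, fitting a linear dynamics model by least squares could look like this — the linear form, the coefficients, and the “logged” data are all made up for the sketch:

```python
# Fit accel ~ w_cmd*command + w_vel*velocity + b by ordinary least squares,
# solving the 3x3 normal equations directly.

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_dynamics(commands, velocities, accels):
    # Design-matrix rows [command, velocity, 1]; normal equations X^T X w = X^T y.
    rows = [[c, v, 1.0] for c, v in zip(commands, velocities)]
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Xty = [sum(r[i] * a for r, a in zip(rows, accels)) for i in range(3)]
    return solve(XtX, Xty)

# Synthetic "log": accel = 0.08*command - 0.5*velocity (made-up coefficients).
cmds = [0, 40, 80, 120, 60, 100, 20, 127]
vels = [0.0, 1.0, 3.0, 6.0, 4.0, 5.0, 0.5, 6.5]
accs = [0.08 * c - 0.5 * v for c, v in zip(cmds, vels)]
w_cmd, w_vel, bias = fit_dynamics(cmds, vels, accs)
print(round(w_cmd, 3), round(w_vel, 3))  # recovers 0.08 and -0.5
```

A neural network would replace `fit_dynamics`, but on data this clean the linear fit already recovers the model exactly, which is the open question above: whether the learned version buys you anything.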
This detail wasn’t clear in the original post and is incredibly important. You really should have started with this.
The first year, we took sensor values from an ultrasonic rangefinder, an IMU, and the lift motor’s rotation sensor to determine whether the state of a pneumatic cylinder should be 1 or 0. This automated the grabbing of MOGOs in matches. Pretty useless, but a fun experiment. This past year, we took the values of three different distance sensors to track the movement of an opponent robot and block the discs being shot in auton. The best way to see this in action is the Worlds livestream, team 6210K. It could’ve been accomplished with similar quality using a pure algorithm, but we decided it would be fun to redo the machine learning in a new way.
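For comparison, here’s a toy sketch of the pure-algorithm alternative with three distance sensors — the sensor offsets, maximum range, and readings are all invented, not our actual geometry:

```python
# Three forward-facing distance sensors mounted at known lateral offsets;
# estimate the opponent's lateral position from which sensors see something
# close, using an inverse-distance-weighted average of the offsets.
SENSOR_OFFSETS_MM = (-150, 0, 150)  # left, center, right mounting positions
MAX_RANGE_MM = 2000

def estimate_lateral(readings):
    """Return an estimated lateral position in mm, or None if out of range."""
    weights = [max(MAX_RANGE_MM - r, 0) for r in readings]
    total = sum(weights)
    if total == 0:
        return None  # nothing in range of any sensor
    return sum(o * w for o, w in zip(SENSOR_OFFSETS_MM, weights)) / total

print(estimate_lateral([300, 1800, 2000]))   # negative: opponent to the left
print(estimate_lateral([2000, 2000, 2000]))  # None: nothing detected
```

The learned version effectively replaces the hand-tuned weighting with a model trained on logged sensor data, which is exactly the “similar quality either way” trade-off mentioned above.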
I’ll reiterate: this may not appear to have clear use cases at the moment, but the hope is that it stimulates more innovation in the area rather than providing a complete solution.
My primary worry is that ML would be a distraction for teams, since it is a complex concept for VRC. I’ve tried to use AI to simulate motors, to no avail, due to numerous setbacks. I think concepts such as gradient descent are useful for other aspects of design and coding, but I have never implemented a full network because other methods have been successful. In a recent project, I considered using an ML program to find the optimal placement of rubber bands, but refactoring the code made brute force workable. The point is that NNs and ML are too complex when simpler options exist.