What if you could program your robot to learn the field, then have sensors on it during the game so it knows what's going on? Say it knows that a cone gets picked up and placed on a mobile goal by a teammate or an opposing team member. Then, based on all the data it collects as the game goes on, it continues to pick up, score, and record the cones it scores. Is this possible? If so, I'm no-life-ing this and making it.
There are some existing relevant threads. Here is one. It’s long, but you should read it.
VEX U team WPI1 did something like that last year. It would be hard with the limited VEX sensors, but in VEX U you can use other sensors. Read through this thread: https://vexforum.com/t/wpi1-reveal/41037/1
What would be the benefits of doing this, as opposed to actual driver control?
Or is this more of a challenge or fun project?
Anyway, if you are going to do this, please keep the forum updated; it will be very interesting to see!
Atlantis
I think there is something to be said for style if you walk up to the field, set your controller on the ground, and watch your robot win unassisted. But otherwise, just for fun. Alternatively, if it is programmed with enough flexibility, it could make a large impact in autonomous skills and in the autonomous section of a match. The other team can’t make a program to counter your autonomous if it doesn’t know what it will be doing until the match starts.
Winning this way is so very unlikely, for all kinds of reasons.
In games and other conflict modeling, there are many effective defensive strategies that do not rely on knowing ahead of time what the other side is going to do.
Absolutely true. But it is a possible reason to pursue AI in a VEX robot; that is all.
heyyyyyyy, we beat someone.
So what you're saying is that I can, in fact, do this? If so, I'm not leaving my room for a couple of days.
With the available sensors it is hard. We used an $80 sensor off a vacuum cleaner and a $40 computer.
No, with VEX sensors I daresay it would be impossible to build a complex robot with AI. Starstruck was a much simpler game, and a team of college students with better sensors couldn’t make a highly competitive robot of the sort you are describing (although it was functional, and I am impressed by their efforts). Save yourself some time if you think this will work for VRC.
Now, you could probably do AI for a robot that stays within a tape square or something as a fun project if you really want to get into this. That would be more realistic with VEX sensors.
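To make that concrete, here is a minimal sketch of what such a fun project could look like: tabular Q-learning, where the "robot" learns to stay inside a square by being penalized whenever it crosses the boundary. Everything here is hypothetical and simulated in plain Python; nothing is VEX-specific.

```python
# Hypothetical sketch: tabular Q-learning for "stay inside the tape square".
# The grid, rewards, and actions are invented for illustration; a real robot
# would replace the simulated step() with drive commands and line-sensor reads.
import random

SIZE = 5                                       # 5x5 grid of floor positions
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration

Q = {}  # maps (state, action_index) -> estimated value

def q(state, a):
    return Q.get((state, a), 0.0)

def step(state, action):
    """Simulated move: crossing the tape ends the episode with a penalty."""
    x, y = state[0] + action[0], state[1] + action[1]
    if not (0 <= x < SIZE and 0 <= y < SIZE):
        return state, -10.0, True              # crossed the tape
    return (x, y), 1.0, False                  # still inside: small reward

for episode in range(2000):
    state = (SIZE // 2, SIZE // 2)             # start in the middle
    for t in range(100):                       # cap episode length
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: q(state, i))
        nxt, reward, done = step(state, ACTIONS[a])
        best_next = 0.0 if done else max(q(nxt, i) for i in range(len(ACTIONS)))
        # standard Q-learning update
        Q[(state, a)] = q(state, a) + ALPHA * (reward + GAMMA * best_next - q(state, a))
        state = nxt
        if done:
            break
```

On a real robot, the state would have to come from line trackers and encoders instead of a simulated grid, which is exactly where it gets hard.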
I have some experience with neural nets, and I had this same idea earlier this year, but in my opinion, while you could do it, the return would not be worth it. As many have pointed out, this is not a simulation, so you can't get perfectly accurate data; this can be overcome, but it greatly hampers the net's ability to learn. There is also the fact that, in order to get better results than human selection from several pre-made autonomous programs, you would need thousands of data sets. That is not practical at all, because you would have to test all of them individually. (You may be able to do something with virtual worlds, but then you run into the problem of the simulation being either imperfect or too perfect.) If you do decide to pursue this, please keep us informed, but I would caution against it.
EDIT: All of this assumes the use of a neural net. I don't really know much about other kinds of AI and learning, so ignore me if that is the route you intend to take.
You could try something like TensorFlow but, as alex99 pointed out, the trick to getting these things to work is feeding it a lot of data samples. Figuring out how to format that data - sensors, images, etc. - is one of the big problems. Have a look at how AlphaGo was trained.
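Just to illustrate what "formatting the data" could look like (the sensor layout and action labels below are made up, and this assumes the Keras API bundled with TensorFlow): each training sample might be a flat vector of sensor readings at one instant, labeled with whatever the driver did at that moment.

```python
# Hypothetical sketch: formatting logged sensor readings for a Keras model.
# The sensor layout and action set are invented for illustration.
import numpy as np
import tensorflow as tf

# Suppose each sample is [left encoder, right encoder, gyro heading,
# ultrasonic distance, arm potentiometer] captured during driver control.
NUM_SENSORS = 5
NUM_ACTIONS = 4      # e.g., drive forward, turn left, turn right, run intake

# Stand-in random data; a real dataset would be thousands of logged
# (sensor snapshot, driver action) pairs, which is exactly the hard part.
X = np.random.rand(1000, NUM_SENSORS).astype("float32")
y = np.random.randint(0, NUM_ACTIONS, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(NUM_SENSORS,)),
    tf.keras.layers.Dense(NUM_ACTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32)
```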
And let's not even mention whether the cortex would be able to run any of these solutions (e.g., TensorFlow).
Of course, you could always have the output be a runnable code file, and run the intelligence on a beefy computer…
I was just thinking about the newer, hypothetical cortex that is coming out any minute now… any day now… any year now…
In theory, you could train it on a different, far more powerful computer, then transfer the result to the cortex.
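For a network as tiny as the sketch earlier in the thread, the "transfer" could be as simple as dumping the trained weights into C arrays and hand-coding the forward pass on the cortex side. A hypothetical sketch (it assumes the Keras model from the earlier example; the W/B array names are made up):

```python
# Hypothetical sketch: dump trained Keras weights as C arrays so the forward
# pass can be reimplemented by hand in the cortex-side C code.
def export_weights_as_c(model, path="weights.h"):
    with open(path, "w") as f:
        f.write("/* Auto-generated network weights */\n")
        for i, layer in enumerate(model.layers):
            weights = layer.get_weights()    # [kernel, bias] for Dense layers
            if not weights:
                continue                     # skip layers with no parameters
            kernel, bias = weights
            rows, cols = kernel.shape
            f.write(f"const float W{i}[{rows}][{cols}] = {{\n")
            for row in kernel:
                f.write("  {" + ", ".join(f"{v:.6f}f" for v in row) + "},\n")
            f.write("};\n")
            f.write(f"const float B{i}[{cols}] = {{"
                    + ", ".join(f"{v:.6f}f" for v in bias) + "};\n")

export_weights_as_c(model)
```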
That's how this sort of thing is normally done when the roving computer is resource-challenged. This was described a bit in the thread I referenced above.
You can use RobotBASIC to devise robot control algorithms and also to simulate your robot with sensors and actuators. It provides a graphics display. In addition, there are many examples of using it to solve mazes, detect objects, navigate, and so on.