Even with a perfect mechanical and electrical system, that method would require thousands of runs. You're going to have better luck writing your own algorithms.
Just for fun, you can check out this video. Remember that a video game reacts identically given the same inputs (a robot would not be like that).
@tabor473 is exactly right here (as he usually is). If, instead of training or evolving an AI, you spent the time programming, you'd be far, far ahead. Consider the time it would take to do 10 runs, and don't forget to include field reset times. Now bump that up two or three orders of magnitude (1,000 to 10,000 runs). That's what it takes.
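To put rough numbers on that (assuming a 15-second run plus, say, a minute to reset the field each time): 10 runs is about 12.5 minutes of work, so 1,000 runs is roughly 21 hours and 10,000 runs is more than 8 days of doing nothing but resetting and rerunning.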
And again, in a video game there's no reset time. Also, depending on the hardware, you can run simple video games at multiples of real time. Here IRL we're stuck with a one-to-one mapping from game time to real time.
To optimize your effectiveness, I'd suggest using your time some other way. Since you're talking about 15-second autonomous programs, my advice would be to program two or three and make one executable for each starting tile (see the sketch below). Spend all the rest of your time driving and scrimmaging.
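A minimal sketch of that tile-selection idea; `readSelector()` and the four routine names are hypothetical stand-ins for however you actually pick a routine (LCD buttons, a jumper, a potentiometer position):

```cpp
// Pick one of a few pre-written, hand-tuned 15 s routines per starting tile.
enum StartTile { RED_LEFT, RED_RIGHT, BLUE_LEFT, BLUE_RIGHT };

StartTile readSelector();  // hypothetical: however you choose before the match

void autonRedLeft();       // one routine per tile, written and tuned by hand
void autonRedRight();
void autonBlueLeft();
void autonBlueRight();

void autonomous() {
    switch (readSelector()) {
        case RED_LEFT:   autonRedLeft();   break;
        case RED_RIGHT:  autonRedRight();  break;
        case BLUE_LEFT:  autonBlueLeft();  break;
        case BLUE_RIGHT: autonBlueRight(); break;
    }
}
```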
If you want to investigate evolving or deriving AI, do it in Virtual Worlds. Then, the techniques you develop now (interfacing to the game, evaluating fitness, scoring the results, automatically determining the next strategies to attempt) will all still be useful for next year’s game.
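For what that could look like, here is a toy evolutionary loop of the kind you'd run against a simulator rather than a real field. `simulateRun()` is a hypothetical hook into whatever Virtual Worlds interface you build; it scores one parameter vector (one candidate strategy):

```cpp
#include <algorithm>
#include <random>
#include <utility>
#include <vector>

double simulateRun(const std::vector<double>& params);  // hypothetical sim hook

std::vector<double> evolve(int generations, int popSize, int nParams) {
    std::mt19937 rng{42};
    std::normal_distribution<double> noise{0.0, 0.1};
    std::vector<std::vector<double>> pop(popSize, std::vector<double>(nParams, 0.0));

    for (int g = 0; g < generations; ++g) {
        // Score every candidate: thousands of simulated runs, zero reset time.
        std::vector<std::pair<double, int>> ranked;
        for (int i = 0; i < popSize; ++i)
            ranked.push_back({simulateRun(pop[i]), i});
        std::sort(ranked.rbegin(), ranked.rend());  // best score first

        // Keep the top half, refill with mutated copies of the survivors.
        std::vector<std::vector<double>> next;
        for (int i = 0; i < popSize / 2; ++i)
            next.push_back(pop[ranked[i].second]);
        for (int i = 0; i < popSize - popSize / 2; ++i) {
            auto child = next[i % (popSize / 2)];
            for (double& p : child) p += noise(rng);
            next.push_back(child);
        }
        pop = std::move(next);
    }
    return pop.front();  // best candidate from the last evaluation
}
```

Everything here is cheap to iterate precisely because the fitness evaluation is a simulated run, not a physical one.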
They're trying to tell you that even if you wrote the perfect AI to drive your robot, you would still be wasting your time training it, because of the sheer number of training iterations required.
A team in Hawaii made a robot that, during autonomous, grabbed the back stars and threw them over the fence (knocking down the middle stars in the process), then launched the cube over. After the cube was on the other side, the robot would wait for another cube to come, and it would throw that back too. It sort of matches your situation. I personally don't know how they programmed it, but they are tied for best in my state. When teamed with them, they told us to avoid going anywhere near the two square panels next to them or they would try to lift us.
That's pretty simple. All you need is an Ultrasonic Range Finder. It sends out a sound pulse and counts how long until the echo returns. That duration tells the robot whether there is something immediately in front of it and how far away it is.
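The time-to-distance math is just the speed of sound plus a divide-by-two for the round trip; a minimal sketch:

```cpp
// Convert an ultrasonic echo time to a distance. Sound travels at roughly
// 343 m/s at room temperature, and the pulse covers the distance twice
// (out and back), hence the divide-by-two.
double echoMicrosToCm(double echoUs) {
    const double speedOfSoundCmPerUs = 0.0343;  // 343 m/s = 0.0343 cm/us
    return echoUs * speedOfSoundCmPerUs / 2.0;
}
// e.g. an echo of ~2900 us means the obstacle is about 50 cm away
```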
My take is that it's no issue if the sensor is used on a stationary robot.
But we mounted them on moving robots, and by the time the echo returned to the sensor, the position of the robot could already be way off.
The faster the robot is moving, the greater the difference between its position when the sensor sends out the signal and its position when the echo returns.
I am sure you can overcome this with some sort of PID, or anything that takes this systematic error into account.
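If you want to correct for it explicitly, a rough first-order fix is to shift the reading by how far the robot moved during the echo's flight; a sketch, assuming you can estimate the robot's speed (e.g. from encoders):

```cpp
// First-order correction for motion during the echo: the reading is stale by
// the round-trip flight time, during which the robot kept moving.
double correctedDistanceCm(double measuredCm, double speedTowardObstacleCmPerS) {
    double echoTimeS = 2.0 * (measuredCm / 100.0) / 343.0;  // round trip, seconds
    return measuredCm - speedTowardObstacleCmPerS * echoTimeS;
}
// At 100 cm/s toward a wall 100 cm away the echo takes ~5.8 ms, so the
// correction is under 1 cm; at higher speeds (and with the sensor's slow
// update rate on top of it) the error grows and starts to matter.
```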
It didn't actually use LIDAR. It had a Pixy camera to find balls, and ultrasonics at 45 degrees on the back to detect the distance to the walls when it hit the bar; it then used trigonometry to determine the exact angle the robot should turn to shoot the balls into the goal.
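I don't know exactly how that team did the trig, but one common version of the trick is to take two distance readings to the same flat wall from sensors a known baseline apart, which gives the robot's angle relative to the wall:

```cpp
#include <cmath>

// Angle of the robot relative to a flat wall, from two distance readings
// taken a known baseline apart. (A guess at the general technique, not
// necessarily what that team actually implemented.)
double angleToWallDeg(double leftCm, double rightCm, double baselineCm) {
    return std::atan2(leftCm - rightCm, baselineCm) * 180.0 / M_PI;
}
// left = 40 cm, right = 35 cm, baseline = 30 cm -> tilted ~9.5 degrees
```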
For VEX, I agree with others that you should probably stick to a set of premade behaviors. If you're ambitious, have it change between behaviors in response to some sensor input: for instance, seek, turn, score, detect collision. Give it a small number of "decisions" it can make depending on common occurrences. For instance, if the 15" catapult bot missed a ball (it could tell with line sensors in the tray), it would not go all the way up to the bar to shoot.
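A minimal sketch of that behavior-switching idea as a state machine; `ballInTray()` and `hitSomething()` are hypothetical stand-ins for whatever line sensors or bump switches the robot actually has:

```cpp
enum State { SEEK, TURN, SCORE, RECOVER };

bool ballInTray();    // hypothetical: line sensors in the tray
bool hitSomething();  // hypothetical: bumper switch or current spike

void runBehaviors() {
    State state = SEEK;
    while (true) {  // runs for the whole autonomous period
        switch (state) {
            case SEEK:
                // ...drive a search pattern...
                if (hitSomething())    state = RECOVER;
                else if (ballInTray()) state = TURN;
                break;
            case TURN:
                // ...face the goal...
                state = SCORE;
                break;
            case SCORE:
                if (ballInTray()) {
                    // ...fire...
                }  // if we missed, skip the trip to the bar, as described above
                state = SEEK;
                break;
            case RECOVER:
                // ...back off and resume searching...
                state = SEEK;
                break;
        }
    }
}
```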
If you want something that develops more hands-off by itself, check out the evolutionary robotics course my professor runs on Reddit.
Just be warned that there is an extremely low chance of something like this ever being practical for this competition.
Ultrasonic sensors don't work on fabric. I haven't done tests, but how do they work on cubes? LIDAR isn't all that much better: it can't reliably see the VEX perimeter (the beam goes through polycarbonate and is absorbed by the black steel).
I would agree with the previous posters that, if artificial intelligence is defined as implied in the above posts (autonomous decisions based on extensive prior learning), then implementing it with the limited Cortex processing power and input sensors would be all but impossible.
However, it doesn't have to be that hopeless. I think we could find a way to bring some "artificial intelligence" to the Cortex platform even without the fancy sensors available in VEX-U.
But first we need to understand what we mean when we say that we want to implement "artificial intelligence".
I don't think there is a standard textbook definition of artificial intelligence yet; depending on which textbook you are reading, you will find different definitions.
The “classic” Turing definition is “machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.”
Another popular notion is that if a machine (computer system) is able to learn, and does so to the degree that it can solve complex problems usually solved by humans, then it can be considered artificial intelligence.
If you think about it, you will agree that both of those definitions are somewhat subjective, because it depends on what behaviour you are talking about, and what problems or tasks you consider complex. For some, computing pi to a million digits would be complex, but finding a kitten among images of other similarly furry animals is easy.
Yet we all agree there is nothing remarkable about any cheap modern laptop easily computing a gazillion digits of pi, while any 4-year-old still beats the best supercomputers at recognizing images of cute animals.
Also, I think most will agree that "Siri" and "Google Now" are more knowledgeable than some humans, but I don't think many will seriously argue that those "AI" engines are actually intelligent. Anybody who has ever used them will point to the gaping holes when it comes to figuring out the context of a conversation or reading between the lines. There is clearly something important missing that keeps them from being called intelligent.
I, personally, like the following definition of intelligence:
You could program a computer to parse languages better than humans; you could program a computer to recognize visual patterns better than humans; you could feed it all the encyclopedias and all the historical, fictional, and scientific knowledge we have, then program it to assimilate (learn) any new information it observes, and figure out all the internal connections inside that knowledge base (deep learning); you could even get it to perform ridiculously complex tasks that only a few people could do;
still, you could not call it intelligent if all it does is sit there, waiting for a command to be typed at the prompt, a voice query, or some other event triggered by visual input, a radio signal, or something else. It is still only a large computer with a fancy program, a huge database of facts, and a simulated neural network holding the relationships between them.
Only when it "understands" the necessity of acquiring new knowledge, and starts to actively seek it by probing its external environment, could it be called truly intelligent.
Or, as one of my college professors liked to say: “curiosity is a prerequisite of intelligence.”
Sorry for this long aside. Now, to get back to the autonomous robots, please tell me: how intelligent do you think these robots are?
As far as I know, they run on predefined routes and use differential GPS for precision navigation.
I don’t think the OP meant AI in any formal sense. And I think the cortex is fine for the challenge, and the sensors, though not optimal, are adequate for a problem of this class. I’ve been on projects where we did very complex things with less processing power and fewer sensors.
One simple technique that overcomes much of the Cortex's limitations is to do the analysis and training off-board. You can use a laptop (or a network of supercomputers, for that matter) to analyze the training data, adjust a neural net (if that's the design you're pursuing; many other learning approaches exist), and feed the result back to the Cortex by generating a program, compiling it, and downloading it.
All the heavy lifting is off-board, and the “learning” is embedded into the generated program. The program can be a simple state machine, with a rich set of states and a complex set of switch criteria. That would encode down to a small enough executable to run on the Cortex. Once the program is on the Cortex, you cut the link, and the robot is on its own.
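A laptop-side sketch of that idea: after training, bake the learned numbers into a generated source file that gets compiled into the robot program (the file name and array layout here are made up for illustration):

```cpp
#include <fstream>
#include <vector>

// Emit the learned parameters as a C header. All the heavy lifting (training)
// happened off-board; the robot program just #includes the result.
void emitLearnedConstants(const std::vector<double>& weights) {
    std::ofstream out("learned_constants.h");
    out << "// Auto-generated from off-board training. Do not hand-edit.\n"
        << "static const double LEARNED[] = {\n";
    for (double w : weights) out << "    " << w << ",\n";
    out << "};\n";
}
```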
Many similar approaches are possible.
I’m cutting out a lot of very good stuff. Nice post.
I imagine the OP doesn’t really care if other people would call it AI; he wants some complex reactive behavior that isn’t under his team’s direct control. As a bonus, he’d like it to score some points. And probably to stay on the field.
That’s a great aspirational definition of hard AI. We won’t see that on a VEX field very soon, but it’s not necessary for some pretty cool results.
The routes aren’t preprogrammed, but they have to fulfill a set of criteria. Differential GPS isn’t an absolute necessity, but it is the most standard way to do it, and it makes the task much easier.
@kypyro, yes, what you are saying is very similar to my own thoughts. The reason I wrote the previous post was that it felt like different people meant different things when using the term "artificial intelligence." I wanted to spell out the alternative definitions to make sure we understand each other better when talking about the various levels of artificial intelligence it would be possible to implement for our purposes.
AI must be a hot topic these days. A few weeks ago technik jr decided that we should be doing AI for the next season and started learning about neural networks and stuff like that. Is there something trending on the Internet that I didn't get the memo about?
Let me give a couple of very close-to-home examples that should be easier to relate to.
The first one is from the Nothing But Net season. Anyone who has programmed a flywheel PID must remember spending a lot of time calibrating the PID coefficients for stable flywheel speed control, yet it would still be very sensitive to external factors like frame alignment or friction in the bearings.
We had a PI loop with additional battery-voltage scaling, and the most important piece of telemetry for us was the value of "I", which estimated the amount of power loss or friction resistance in the system. We got an LCD specifically to monitor it, since it indicated the health of our flywheel. If one of the motor screws got loose, or a piece of foam got into the flywheel bearing, we would see it immediately in the changed value of "I".
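A sketch of what such a loop can look like; `getFlywheelRPM()`, `getBatteryVolts()`, and `setMotor()` are hypothetical stand-ins for your platform's actual sensor and motor calls, and 7.2 V is the nominal VEX battery rating:

```cpp
double getFlywheelRPM();   // hypothetical sensor read
double getBatteryVolts();  // hypothetical battery read
void   setMotor(double power);

// PI velocity control with battery-voltage scaling. The integral term soaks
// up steady-state losses, so watching it is free friction telemetry.
void flywheelStep(double targetRPM, double kP, double kI, double& integral) {
    const double nominalVolts = 7.2;
    double error = targetRPM - getFlywheelRPM();
    integral += error;
    double power = (kP * error + kI * integral) * (nominalVolts / getBatteryVolts());
    setMotor(power);
}
```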
Now if we look at our system from the artificial intelligence point of view, then instead of saying "we programmed the system so that 'I' estimates any unexpected power losses," we could say "when we turn the power on, the robot 'wakes up', 'learns' how much friction there is in the flywheel assembly, and then adapts its speed control accordingly." To allow that, we still had to calibrate Ki offline based on multiple runs.
If we had relied only on human drivers to adjust the motor power up and down, it would never have been as accurate as it was. So this is a good example where a little bit of "artificial intelligence" can make a big difference when applied to tasks where the computer has an edge over humans.
Here is another example, from the Starstruck season. Less than a week before our States we figured out how to get our robot to high-hang. The hanging sequence turned out to be more complicated than anyone wanted or anticipated, since it needed the motors to run close to their PTC limits. There was almost no way for a driver to execute the hang without overheating the PTCs. The proper way to fix it would have been to rebuild the robot to be mechanically better suited to hanging. However, there was no time for that.
So the hanging sequence was implemented in software. To make things even more interesting (and complicated), there were two ways our robot could achieve the high hang. The first is fast but risky (the robot fell off the pole twice during practice); the second is safe but slow. So there is a piece of code which, at the beginning of the hang, looks at the battery level and the amount of time left in the match and decides whether to go for the safe or the risky hang. I thought it was a crazy way of doing things, but the kids thought it was cool, invested a lot of their time into programming it, and ended up winning the Innovate Award for "thinking outside of the box."
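The decision itself can be a few lines; a sketch with made-up thresholds (`getBatteryVolts()`, `matchTimeLeftS()`, and the two hang routines are hypothetical names):

```cpp
double getBatteryVolts();  // hypothetical battery read
double matchTimeLeftS();   // hypothetical match timer
void   fastRiskyHang();
void   slowSafeHang();

void hang() {
    if (matchTimeLeftS() > 12.0) {
        slowSafeHang();                // enough time: take the sure points
    } else if (getBatteryVolts() > 7.5) {
        fastRiskyHang();               // short on time, battery can handle the draw
    } else {
        slowSafeHang();                // weak battery: the risky hang may stall
    }
}
```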
From these examples you can see that for some tasks that are easier for computers than for humans, you can relatively easily incorporate elements of "artificial intelligence," such as (limited) learning and autonomous decision making. They can make a lot of difference in gameplay, without the large costs usually associated with full-scale machine learning.
You could head in the "artificial intelligence" direction one step at a time, incorporating more and more sensors where it makes sense, while still relying on human intelligence to program the aspects of the operation where our understanding of the problem is superior. If this automatic cube detection works as suggested, it would be a great example of that:
Essentially, you need to look at the time it takes to train the driver vs the time to program and train (calibrate) the robot: