Why do people always see things in black and white…
The question is: should I do A, or B?
A. being an entirely predetermined routine
B. being an entirely dynamic routine
My answer (as per usual) is to invent a C. (because screw conventional thinking).
C. being, use the best of A. and B. within the parameters of your time, budget and skill level.
Now really was that so hard?
Now what might C look like? Great question, here’s my take…
Start with a fixed routine covering what you think you can reliably do, but set up triggers that revert the bot to its seek-and-destroy mode. For example, a sharp, unexpected value on the accelerometer might indicate that the robot was hit by something, which will obviously wreck any predetermined routine, so that's a good trigger for falling back to the slower, dynamic scoring mode.
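To make that concrete, here's a minimal sketch of the hit trigger in plain C. The `read_accel_g()` stub and the 1.5 g threshold are my assumptions; swap in your platform's actual accelerometer call and tune the number on your own robot.

```c
#include <math.h>

/* Hypothetical stand-in for your platform's accelerometer read
   (e.g. a SensorValue call in RobotC). Returns acceleration in g. */
float read_accel_g(void) { return 0.0f; }

enum mode { MODE_FIXED, MODE_SEEK_DESTROY };

/* Assumed tuning value: anything sharper than this means "we got hit". */
#define HIT_THRESHOLD_G 1.5f

/* Call this every loop iteration of the fixed routine. */
enum mode check_hit_trigger(enum mode current)
{
    if (current == MODE_FIXED && fabsf(read_accel_g()) > HIT_THRESHOLD_G)
        return MODE_SEEK_DESTROY;  /* routine assumptions are now garbage */
    return current;
}
```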
Another trigger might be a game element that magically isn't there when you try to collect it. Something went wrong, who cares why; go seek and destroy now.
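Same pattern for the missing-object trigger. The intake limit switch here is an assumption about how you'd detect "did we actually grab it"; use whatever sensor your intake really has.

```c
#include <stdbool.h>

/* Hypothetical: returns true if the intake sensor (limit switch,
   line tracker, whatever you have) sees a game object. */
bool intake_has_object(void) { return false; }

enum mode { MODE_FIXED, MODE_SEEK_DESTROY };

/* After driving to where the object should be and running the
   intake, check whether we actually got it. */
enum mode check_pickup_trigger(enum mode current)
{
    if (current == MODE_FIXED && !intake_has_object())
        return MODE_SEEK_DESTROY;  /* it wasn't there; don't ask why */
    return current;
}
```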
A third trigger might watch ahead of the robot for opposing robots or unexpected things blocking your path. You could either revert to seek-and-destroy mode or try to navigate around the obstacle, which should work since your position is still known and your telemetry is still valid. The path-finding you'd need is advanced stuff, though, and it will tax your programmers.
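And the third trigger, sketched with a forward-facing ultrasonic rangefinder. The 30 cm clearance is an assumed number, and the detour branch is just a comment because, as I said, real path-finding is the hard part.

```c
#include <stdbool.h>

/* Hypothetical: forward ultrasonic rangefinder reading in cm. */
int read_range_cm(void) { return 999; }

#define MIN_CLEARANCE_CM 30  /* assumed: closer than this means blocked */

enum mode { MODE_FIXED, MODE_SEEK_DESTROY };

enum mode check_path_trigger(enum mode current, bool expect_clear_path)
{
    if (current == MODE_FIXED && expect_clear_path
        && read_range_cm() < MIN_CLEARANCE_CM) {
        /* Option 1: bail out to seek-and-destroy. */
        return MODE_SEEK_DESTROY;
        /* Option 2 (harder): plan a detour using your known position;
           that's the advanced path-finding part. */
    }
    return current;
}
```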
As for sensors, I remain convinced that VEX does provide enough good sensors to establish your position on the field. Assessing the condition of the field, however, is tricky.
If you have really good knowledge of where you are relative to where you started (which is also known), then you know exactly where you are on the field, and that can be used to navigate dynamically around the fixed parts of the field given a map of sorts. I cannot disclose the details, but I created such a map for NAR months ago and we do have plans to use it. It's really quite simple, but powerful. Anyway, the tricky part is seeing the non-static parts of the field.
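The "where am I relative to where I started" part is just dead reckoning off the drive encoders. Here's a bare-bones differential-drive version; the per-side distance functions and the track width are assumptions to wire up to your own drivetrain.

```c
#include <math.h>

/* Hypothetical: distance each side of the drive has traveled since
   the last call, in cm, derived from encoder ticks. */
float left_delta_cm(void)  { return 0.0f; }
float right_delta_cm(void) { return 0.0f; }

#define TRACK_WIDTH_CM 30.0f  /* assumed wheel-to-wheel spacing */

/* Robot pose on the field, relative to the known starting point. */
static float x_cm, y_cm, heading_rad;

/* Call frequently; integrates encoder motion into field position. */
void update_odometry(void)
{
    float dl = left_delta_cm();
    float dr = right_delta_cm();
    float d  = (dl + dr) / 2.0f;                /* distance traveled  */
    heading_rad += (dr - dl) / TRACK_WIDTH_CM;  /* change in heading  */
    x_cm += d * cosf(heading_rad);
    y_cm += d * sinf(heading_rad);
}
```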
The rangefinder is slow and it only measures a single point in space, so it's not quite on par with Google's LIDAR systems. Something like an Xbox Kinect sensor could possibly give you a really nice 3D point cloud of the field, which could be used to detect foreign objects, both of the robot kind and the playing-object kind. Unfortunately, that sensor isn't a practical option.
First of all it's huge, second of all it takes a lot of processing power to work with the data, and third… ewww, Xbox.
A straight-up better approach might be to just allow yourself to hit other robots, look for the impact on the accelerometer, and make some assumptions to navigate around them. It's a special case anyway: you won't see it in skills, and you only care about it for the first 20 seconds of a match, so really, why care? For all I care, just fail in this case. Oh well, they devoted an entire robot to stopping your routine; at least it's a one-robot-not-scoring versus another-robot-not-scoring scenario.
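If you do want the navigate-around-it version instead of just failing, the dumbest thing that could possibly work is a canned reaction: back off, turn, sidestep, rejoin. Every number and helper below is an assumption to tune.

```c
/* Hypothetical drive helpers -- wire these to your motor code. */
void drive_cm(float cm)      { (void)cm; }      /* negative = backward */
void turn_deg(float degrees) { (void)degrees; }

/* Canned "we hit something" reaction: no path-finding, just
   assumed-reasonable motions, then resume (or seek and destroy). */
void bump_detour(void)
{
    drive_cm(-20.0f);  /* back away from whatever we hit */
    turn_deg(45.0f);   /* assumed: guess a direction around it */
    drive_cm(40.0f);   /* sidestep */
    turn_deg(-45.0f);  /* face the original heading again */
    /* Odometry still knows roughly where we are, so the routine
       (or seek-and-destroy mode) can pick up from here. */
}
```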
And then there's object tracking. Yeah yeah, it CAN be done given a webcam and three hours of YouTube videos on computer vision, but WHY? For a 20-second span of time it's not worth it, but for a whole minute like in college? Oh yeah, it's worth it. Idk, my intuition tells me that in 90% of cases HS teams would be better off spending that time building a better robot, making design tweaks, etc., than actually doing object detection, but who knows, I've been very wrong before.
So yeah, that's what I have. Do C: pick your battles wisely, weigh the costs against the gains, etc.
-Cody