My team plans on creating an autonomous routine for our robot soon, and we have some questions about how to do it. I heard the robot can use driver movements to create the autonomous, and we want to know how that is done.
You can pull up the dashboard on the Brain and see the number of rotations you need, then just put that into code. For example, if you move your robot by hand and see it needs 4 rotations, you can put 4 rotations into your autonomous.
Do you want to record someone driving with the handheld controller and play that back as an autonomous program, or do you want to select one of several programs based on human input? I'm not entirely sure which you are referring to.
For the first, it's rather complicated. It will involve reading the input as your driver drives, storing the information in memory, and then writing it to the SD card (V5 only) or the debugging console for copying and pasting into a new program. Then you have to figure out how to use the recording to duplicate that driving. If you want to do it as something cool to talk about, go for it. If you think this will save work in producing a good autonomous routine, it will not. The regular way of producing autonomous routines produces more robust and repeatable results.
For the second meaning, you will want to look at some examples of using the available options for human input via sensors. If you tell us what language and platform you are writing for, we can provide more information on this topic.
And you will spend a lot longer building the infrastructure to record and play back than writing a dozen normal autonomous routines.