One thing I’ve noticed is that there isn’t really a debugger for the V5, which is kind of disappointing after the product has been out for so many years now. For the Cortex, RobotC had a debugger with breakpoints and everything, but there’s none of that for the V5 (at least in VEXcode and PROS). Would it be possible to add a GDB stub or something into a user program, or perhaps the kernel itself? GDB can communicate over serial, so that wouldn’t be a problem, but is there a GDB stub for ARMv7 (or a way to port one), specifically for the VEX V5?
PROS had JINX in progress for a bit, although that project has been dead for a while now.
There is someone who talks in VTOW occasionally about working on a similar project (although I don’t think they’re on VF), not sure how that’s going really.
My understanding, from what JPearman said in a different thread, is that RobotC ran within a virtual machine, which is what allowed the debugger to work. VEXcode and PROS do not work that way, so it would be extremely difficult, if not impossible, to add a debugger.
It was a design decision. A lot of things run in VMs; full Python, Java, etc. all run in a VM or interpreter. People have whinged for years that “the Java VM is sooooooo slow, that’s why real programmers write C++,” when in fact in benchmarks the difference is about 1-2%.
Having a VM or using an interpreter gives you lots of abilities that become much harder on bare metal.
Every good programmer knows the only valid debugger is printf
<sarcasm>I’m sorry, I can’t tell if this is sarcasm.</sarcasm>
It isn’t. Long before the days of interactive debuggers we had printf. Much like using alert() boxes in JavaScript.
I see you can’t seem to find the funny. The joke is that @Mentor_355U included <sarcasm> tags around their post, but seemingly forgot that Discourse treats those like html tags.
Can’t find the funny in things I can’t see. Not sure how you were able to see it.
Discourse magic, clearly.
Lol, Taran is so tough he reads his vexforum posts Raw!
@Skyluker4, I am not sure a remote gdb session will work well, or at all, with the V5.
Even if you could get a gdb stub to work with the V5 architecture, and have debugging symbols generated and kept in sync with the compiled code, the main reason I am skeptical is the limited bandwidth and potentially dropped packets of the VEXnet wireless link, which is where debugging capability is the most useful.
This is the warning they put in the “Debugging remote programs” section of the gdb help:
When using a UDP connection for remote debugging, you should keep in mind that the `U’ stands for “Unreliable”. UDP can silently drop packets on busy or unreliable networks, which will cause havoc with your debugging session.
If you are not ready to build something similar to JINX, your best bet may be to write a single function that your V5 program calls in every loop.
That function would check the serial input and, if nothing was sent over the link, do nothing and return control to the V5 program.
However, you could send command characters over the serial link to make it print the values of some global variables, or pause program execution and let you execute one loop iteration at a time, inspecting those globals after each run.
Then you could get more sophisticated and use the V5 screen and controller to interface with this debugging function.
I believe that in 98% of cases, pausing the program and inspecting variable values is sufficient to debug an issue, without the full set of gdb capabilities like inspecting the stack, setting conditional breakpoints, or modifying arbitrary memory.
There has been some progress in recent years toward making it possible to run your V5 programs as plain C++ programs on your computer.
For example, the graphics library PROS uses (LVGL) supports compiling into standalone GUI programs that emulate all screen activity. This means that if you isolate your entire graphical interface code, you can use a proper C++ debugger to test it.
https://docs.lvgl.io/latest/en/html/get-started/pc-simulator.html
Now there are some problems with that approach, mostly the effort needed to isolate the code and the fact that you aren’t working with live robot information.
The first problem is solvable: all PROS would need is a proper HAL (hardware abstraction layer), which would make the dependency on the VEX SDK layer optional. You could have alternative dummy motor functions that mimic the PROS API but do nothing or return 0, as reasonable. The dummy implementations of VEX functionality could even call a generic USB controller, or emulate it, as necessary.
With this modification (to PROS), you would be able to take any PROS program, compile it, and run it on your computer with zero modifications.
Sadly, the sensor and motor values are much more difficult to dummy up, and your control flow might not work correctly. For example, if you are waiting for an encoder to get close to value X, the dummy motor will probably never reach that point (that would require full physics simulation, which gets more and more complicated the closer to reality you want it to be).
A simple yet reasonable hack would be to record all sensor/joystick/motor values to the SD card while actually running the robot, and have the dummy HAL just play them back. Maybe that is more useful than always returning 0, but it has its own flaws.
For my codebase, I interface with okapilib instead of directly with PROS. This means I can use okapi’s testing framework, complete with abstract types and mock implementations, to remove any dependency on the V5.
This allows me to run my code as a normal C++ project on my computer, with my code in a library that can be compiled for any platform. This is very useful for debugging (I can use gdb), running simulations, running unit tests, and visualizing the output of my algorithms.
This is less useful when designing code that is specific to running on a VEX robot, but when you start to develop generic motion algorithms, it can be very useful to not have to iterate on a real robot.
Also, as mentioned the LVGL graphics framework is cross-compatible so I can use my computer to develop my screen gui as well.
Does documentation for this approach exist?
Separately, I am amazed by the number of teams that use PROS but not okapilib
There is not really any documentation for it, no. The okapilib mocking framework is just used for its internal unit tests and is not exposed to the end user. I had to pull files from the okapilib repository; the code that allows it to run on a computer is not part of the distributed library.
The core of it is not too complicated though. For example, let’s say I write a class that accepts an okapi::ChassisModel, which I then use for the output of an algorithm. The chassis model is made of okapi::AbstractMotors. When running on the V5, the polymorphic motor objects are okapi::Motor, which is a wrapper for pros::Motor. When running on the computer, they are okapi::MockMotor, which just stubs out all the methods to do nothing. Then I can compile and run my generic code without depending on V5 motors.
That’s all there really is to it. My code is designed to use okapilib’s abstract polymorphic classes, which then can be implemented by V5 motors or mock motors. However, most of my code does not even need to use okapilib devices, it’s just generic c++ code that can be run on a computer. When I do need to interface with a robot, I use the abstract classes.
For many HS students, okapilib does not offer a compelling enough reason to learn it. There just isn’t a large advantage for teams who are only trying to write driver control and PID functions. Okapilib does have some parts that let you “cheat” the learning process and use its powerful built-in motion algorithms, but luckily it seems like many teams would rather use their own code. I think okapilib’s value comes from its device abstractions, helper classes (timers, units, etc.), and modular design.
I actually had a stubbed version of the VEX Runtime API (which VEXcode and PROS use) on my computer from when I was messing around with emulating the V5. I went ahead and published it here. It can run code made for VEXcode directly on the computer, but not PROS code so far.
Honestly, I don’t get why the VEX Runtime API isn’t just open source – you still need a physical V5 brain to actually run the code IRL, so it would only help the community.
I also came up with a concept for emulating everything a little while ago, so that you could literally drive your robot with the program that you normally use with a virtualized game. A big project that I’ll never do, but it would be cool to have something like that one day.