How many cores does the VEX brain have?

I know that you can run pretty much unlimited processes on the VEX brain, but if they're all running on the same core it doesn't really matter. I did some research and the closest I got to an answer was that the Cortex-A9 has 1-4 cores.
-thanks

I don’t know about cores, but I think PROS at least swaps out tasks every millisecond, so given the 10 ms motor update command delay that seems like plenty. And since VEXcode is made by VEX (who made the brain), I assume it performs just as well. Have you had issues with multithreading before?

4 Likes

As James mentioned in the linked post, user code runs on a single Cortex-A9 core (called CPU1). VEXcode tasks are managed by a cooperative scheduler on this core, while PROS uses FreeRTOS, which is a more industry-standard preemptive scheduler. On PROS you have to be more careful about thread safety, since the kernel is free to interrupt your code mid-execution to run another task, rather than requiring you to explicitly yield back to the scheduler through something akin to a sleep.

Across the entire brain there are, to my knowledge, four CPU cores in total. The main SoC that runs VEXos and your user code is a dual-core Cortex-A9 chip (the Xilinx Zynq XC7Z010-3CLG400E, I believe). There are also two additional NXP-branded Cortex-M0+ processors on the brain: one handles the onboard three-wire ports, and the other is speculated to handle power management.

3 Likes

Does this mean that running two threads at once has no performance boost?

On a system with one logical core like the V5 brain, you can only execute a single instruction at a time, meaning there isn’t a performance benefit from running two tasks at once if what you are trying to do is bottlenecked by the CPU itself. This is rarely the case unless you’re doing something very computationally heavy, though (I can only think of a handful of teams that have run into this problem).

The actual “performance advantage” of multitasking comes from being able to do other things in the background while waiting on something non-computation heavy to happen. For example, when you wait(500, msec); you aren’t blocking all execution on the CPU for 500 milliseconds. You are just telling the scheduler “okay, go run other tasks for the next 500 milliseconds and ignore this one”. You’re cooperatively sharing CPU time with other tasks by voluntarily yielding execution at points in your code.

Here are some excerpts from some docs I wrote for another project that explain it with (maybe) a better analogy.


[image: excerpt from the linked docs]

8 Likes