Displaying Images on Brain

Is there a way of uploading an image to the V5 Brain to be displayed? For instance, displaying your team logo during teleop, or maybe even playing a GIF.

I made my logo by drawing lines and circles in this thread: https://vexforum.com/t/vex-c-1814d-rainbow-delta-logo-displayed-on-v5-brain/50760/1
Other than that I don’t know if it’s possible to display an image.

I know that reading images is on the list of to-do items for Robot Mesh Studio, but it is not done yet.

LVGL in PROS supports this: you can load and display images from the SD card, or convert an image into a C file using a tool.
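As a minimal sketch of the SD-card route in a PROS project (which bundles LVGL): the file name `logo.bin` and the `usd` folder are assumptions, so adjust the path to match your own card layout.

```c
#include "main.h"  /* PROS projects pull in the bundled LVGL through this header */

void opcontrol() {
    /* Create an image object on the currently active screen. */
    lv_obj_t *img = lv_img_create(lv_scr_act(), NULL);

    /* Load a converted .bin image from the microSD card; in PROS the
       card is exposed to LVGL under the "S:" drive letter. */
    lv_img_set_src(img, "S:/usd/logo.bin");
    lv_obj_align(img, NULL, LV_ALIGN_CENTER, 0, 0);
}
```

The `.bin` file comes from LVGL's online image converter; a C-array image produced by the same tool can be passed to `lv_img_set_src` directly instead of a file path.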


Sort of. VCS has an early, undocumented API for displaying BMP images stored on the SD card. The next SDK, which we can’t release until VCS is updated, improves that API and adds support for PNG images. There will be no native support for GIF; however, you can already read a GIF from the SD card using standard file I/O functions and write your own decoding program, and the next SDK will also allow a raw image buffer (i.e. your decoded GIF) to be displayed.


So I can load images in BMP format directly from an SD card and display them on the Brain?

@John TYler:
In RMS, is there a way to draw hollow/outline rectangles/circles on a V5 Brain (Python)? I see there is a vex.Color.TRANSPARENT property, but it just fills the shape with the default color; I’m not sure how to use it.

I only tested in Mimic, not on a real brain. By the way, the Mimic is the best thing since sliced bread, it is so convenient to be able to test things while not having access to the bot.

Is Mimic a part of Robot Mesh Studio?

Yes, it is pretty neat. It runs in the cloud and has a CAD-style interface that is sometimes annoying to use, but you can test a lot of your code this way.

@pkrish Yes, we have simulated robots for V5, Cortex, and IQ with all of our supported programming languages for each platform. (Open Robot Mesh Studio in Chrome, click “Create New Project”, select one of the Mimic versions as the target platform.)

@roboballer I wrote a little minimally functional test and tried it on both a real robot and a Mimic and got working results. Then I realized you were using Python and tried this test, and found out that this appears to be a Python thing. I’ll add a bug report.

Can you run the display on a V5 Brain as an emulator with Mimic?

Yes. If you try drawing to the screen in a Mimic, it will open a window with an emulated screen on it. You can try running either of the projects I linked before (in Chrome) to see it.

I get this error: “Current environment does not support SharedArrayBuffer, pthreads are not available!” What would fix this? Mimic is so amazing! I can’t wait to build my robot and test things in it…

Like I was saying, make sure you are using the latest Google Chrome. We’re using a browser feature for Mimics that most browsers have off by default: SharedArrayBuffer.

I got LVGL to display images on the Brain, but I want to remove the image after a short time interval because I’m trying to load an image sequence to make an animation. Do you know how to remove images in LVGL?

https://docs.littlevgl.com/ :wink:
Your best bet is probably to destroy the object and create a new one (in a loop), or just hide and show them in sequence. Watch out for overflowing the memory.
Something more advanced programmers might do (I couldn’t help you there) is modify the image source to be a multidimensional array, create a buffer in memory to load all the frames into, and then give LVGL a pointer for each frame in memory.
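A rough sketch of the "swap sources in a loop" idea in a PROS project: rather than deleting and recreating the object each frame, you can reuse one image object and just change its source. The descriptor names `frame0`/`frame1` and the frame delay are assumptions; the frames would come from LVGL's image converter as C arrays.

```c
#include "main.h"  /* PROS header; brings in the bundled LVGL and delay() */

/* Image descriptors generated by the LVGL image converter
   (hypothetical names for two animation frames). */
LV_IMG_DECLARE(frame0);
LV_IMG_DECLARE(frame1);

void opcontrol() {
    lv_obj_t *img = lv_img_create(lv_scr_act(), NULL);
    const lv_img_dsc_t *frames[] = { &frame0, &frame1 };
    int i = 0;
    while (true) {
        /* Reusing one object avoids create/delete churn;
           lv_obj_del(img) each frame would also work. */
        lv_img_set_src(img, frames[i]);
        i = (i + 1) % 2;
        delay(100);  /* ~10 fps; tune to taste */
    }
}
```

Note that every frame held in flash or RAM costs memory, which is why the earlier warning about overflow matters for longer animations.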

Does anyone know the center point of the cortex screen?

The usable area of the screen is 480 pixels wide by 240 pixels tall, so the center point is at (240, 120).

Ok thank you


From there, do you know if it is possible to tell the Brain to load that image in the code for matches?