I’m thrilled to share with you all a project born out of necessity. Like many of you, I’ve faced the frustration of wanting to personalize my robot with custom images on the V5 Brain’s display, only to be hindered by the lack of a micro SD card. That challenge inspired me to build a solution.
I proudly present to you a free open-source web app that generates C++ code for displaying images directly on your Vex V5 Brain’s screen, bypassing the need for a micro SD card. Leveraging my skills in Vue.js, I developed an application that transforms any PNG into a format the V5 Brain can display by encoding the image data into arrays within C++ code.
Here’s how you can start using it:
Prepare Your Image: Create a PNG with the dimensions of 480x272 pixels, the exact display size of the V5 Brain.
Upload and Convert: Visit https://suhjae.github.io/vex-image/ and upload your image. The web app will then work its magic and generate C++ code for your image.
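To give you an idea of how the output works: the app emits a color palette plus run-length data as C++ arrays, along with a drawLogo() function that walks the runs and draws them pixel by pixel. Here is a rough, illustrative sketch of that structure (placeholder values only, not real output; your image produces its own arrays):

#include "vex.h"
using namespace vex;

brain Brain;

// illustrative palette and run-length data (the real arrays come from your image)
const uint32_t imageColors[]  = { 0xFF0000, 0x00FF00, 0x0000FF };   // color lookup table, 0xRRGGBB
const int      imageIndices[] = { 0, 1, -1, 2 };                    // palette index for each run (-1 = skip)
const int      imageCounts[]  = { 120, 60, 40, 260 };               // number of pixels in each run

void drawLogo() {
  int x = 0, y = 0;
  int runs = sizeof(imageCounts) / sizeof(imageCounts[0]);
  for ( int i = 0; i < runs; i++ ) {
    bool visible = imageIndices[i] >= 0;                  // negative index = transparent run
    if ( visible )
      Brain.Screen.setPenColor( color( imageColors[ imageIndices[i] ] ) );
    for ( int j = 0; j < imageCounts[i]; j++ ) {
      if ( visible )
        Brain.Screen.drawPixel( x, y );
      if ( ++x >= 480 ) { x = 0; y++; }                   // wrap to the next row of the screen
    }
  }
}

int main() {
  drawLogo();   // call it once when the program starts
}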
I created this tool with the hope of making it easier for teams to customize their robots and add a personal touch without the hassle. I’m excited for you all to try it out and look forward to your feedback!
I had a look at this out of interest and have a few comments.
First, the area of the screen available to user programs is actually 480 x 240, as the top 32 lines are used for the V5 status bar. This means that the last 32 lines of any image displayed using this code will be cropped.
Next, the code seems to run-length encode the data: a color chosen from a LUT (lookup table) is repeated for a run of pixels. While this will work for simple images (like a team logo), it will not work well for a complex, photo-realistic image (and the web site more or less crashes if given that type of image).
Finally, I wonder why you took this approach rather than just encoding the PNG data into a C array and using drawImageFromBuffer, which will decode the image directly.
/**
* @brief Draws an image on the screen using the contents of the memory buffer.
* @param buffer A pointer to a buffer containing image data in either bmp or png format.
* @param x The x-coordinate at which the left edge of the image will be drawn.
* @param y The y-coordinate at which the top edge of the image will be drawn.
* @param bufferLen The size of the source image buffer in bytes.
* @return Returns true if the image was successfully drawn on the screen.
* @details
* This function draws an image on the screen using the contents of a buffer into which
* either BMP or PNG raw data has already been read. The contents may have come from a
* file on the SD card or have been statically declared in the code. The image should be
* no larger than the V5 Screen, that is, a maximum of 480 pixels wide by 272 pixels high.
* The top/left corner of the image is placed at the coordinates given by x and y, these can
* be negative if desired.
*/
bool drawImageFromBuffer( uint8_t *buffer, int x, int y, int bufferLen );
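For example (just a sketch, with placeholder bytes), the raw PNG file could be dumped into a C array, perhaps with a tool such as xxd -i, and passed straight to this call:

#include "vex.h"
using namespace vex;

brain Brain;

// raw PNG file bytes; a real array would contain the entire file,
// only the 8-byte PNG signature is shown here as a placeholder
static uint8_t logo_png[] = {
  0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A
};

int main() {
  // decode the PNG straight from memory and draw it at the top left of the user area
  Brain.Screen.drawImageFromBuffer( logo_png, 0, 0, sizeof(logo_png) );

  while ( true )
    this_thread::sleep_for( 100 );
}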
edit:
the web site finally finished processing the test photo I tried; it produced rather large data structures, 1,755,601 bytes.
You are indeed correct, and I greatly appreciate your invaluable feedback. First and foremost, I want to thank you for taking the time to review the web app and for your detailed observations.
Regarding the screen dimensions, you’re absolutely right about the V5 screen’s usable area being 480 x 240 due to the status bar occupying the top 32 lines. I will promptly adjust the application to reflect this correct dimension to ensure that images are displayed without being cropped.
The original intention behind the encoding scheme was indeed to optimize for simple, flat images such as team logos, which typically have a limited color palette, rather than complex, photo-realistic images. This approach was chosen to minimize the size of the generated and compiled code, aiming for efficiency in images where color runs are long and repetitive. However, I acknowledge the limitation you pointed out: the application struggles with more complex images due to the current run-length encoding method.
I was aware of the alternative of encoding the PNG data into a C array and using drawImageFromBuffer to decode it directly, and I did experiment with it initially. My motivation for developing a custom encoding scheme was to explore a solution that potentially offered smaller code size for simpler images. However, I understand the advantages of direct image decoding, especially for more complex images, and the significant reduction in processing time it can offer.
Based on your feedback, I am motivated to enhance the web app by adding a feature that allows users to toggle between the current encoding method and the direct buffer approach. This will provide more flexibility and accommodate a broader range of image complexities while addressing the issues with file size and processing time for complex images.
Thank you again for your constructive feedback. It’s insights like yours that drive improvements and innovations. I look forward to implementing these changes and continuing to evolve the app to better serve the Vex community.
RLE encoding is a perfectly valid way to encode simple images. Most of the icons and images we display on the V5 (EXP and IQ2 also) are encoded as 8-bit BMP images with run-length encoding. That format uses a 256-entry color lookup table and a method similar to the one you have chosen.
My career before VEX was related to digital imaging in the entertainment industry. At one time the most popular computers used for post and CGI were made by a company called Silicon Graphics; their preferred file format for images also supported optional run-length encoding.
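As a rough illustration of the principle (a toy encoder, not the actual BMP RLE8 format, which adds escape codes for things like end-of-line): a row of palette indices collapses into (count, index) pairs.

#include <cstdio>
#include <cstdint>
#include <vector>
#include <utility>

// toy run-length encoder over 8-bit palette indices
std::vector< std::pair<uint8_t, uint8_t> > rleEncode( const std::vector<uint8_t> &pixels ) {
  std::vector< std::pair<uint8_t, uint8_t> > runs;     // (count, palette index)
  for ( size_t i = 0; i < pixels.size(); ) {
    size_t j = i;
    while ( j < pixels.size() && pixels[j] == pixels[i] && j - i < 255 )
      j++;
    runs.push_back( { (uint8_t)(j - i), pixels[i] } );
    i = j;
  }
  return runs;
}

int main() {
  // nine pixels that use only two palette entries compress into two runs
  auto runs = rleEncode( { 3, 3, 3, 3, 3, 7, 7, 7, 7 } );
  for ( auto &r : runs )
    printf( "count=%d index=%d\n", r.first, r.second );  // count=5 index=3, count=4 index=7
}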
Anyway, I should have said this before: nice work, and nice web site.
So… I’m a block coder, and I found this and wanted to try it out (I don’t currently have my bot). When you say use the “drawLogo()” method, what does that mean? I have the code copied directly into a C++ file; what exactly do I do with it after that? Or is it going to work just by being there? I know block code is frowned upon, but I’m the only one on our high school team with any code experience at all, so we’re stuck with blocks for now. I really just want to play with imaging on the brain.
I picked up text code pretty quickly (the basic stuff, like the configuration tab and editing the robot configuration, not PIDs).
Plenty of tutorials online to help you
Btw, the OP deserves more likes, he made a web app for you!
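To get you started, here’s a minimal, self-contained sketch of a main.cpp; the stub drawLogo() below is just a placeholder for the code the web app generates:

#include "vex.h"
using namespace vex;

brain Brain;

// 1. paste the generated arrays and drawLogo() function from the web app here
//    (this stub just fills the usable screen area so the sketch compiles on its own)
void drawLogo() {
  Brain.Screen.setPenColor( color::red );
  Brain.Screen.drawRectangle( 0, 0, 480, 240 );
}

int main() {
  // 2. call it once when the program starts
  drawLogo();

  // 3. keep the program running so the image stays on the screen
  while ( true )
    this_thread::sleep_for( 100 );
}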
Right, I know. I’m just playing with it in my free time; our actual comp code most likely won’t have the imaging on the brain. I was just wondering how to set it up in a C++ file.
Hey, this is great, thanks. However, I am totally new to text coding. How do you do step 4, "call the drawLogo() method within the robot’s code"? Can you give an example? Many thanks!
The thing is, C++ uses these { curly brackets/braces } for arrays, while Python uses [ square brackets ] for lists (extremely similar to arrays, if you ask me).
That means you will have to change the curly brackets to square brackets after putting it in.
Lucky for you, @SuhJae put the curly brackets on a different line rather than on the line of all the color values, so it is extremely easy to do this.
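For instance (illustrative values only), a generated C++ array and its Python equivalent look like this:

#include <cstdio>
#include <cstdint>

// C++ array as generated by the web app (placeholder values)
uint32_t imageColors[] =
{
  0x112233, 0x445566, 0x778899
};

// the same data as a Python list: swap { } for [ ] and drop the type, i.e.
//   imageColors = [
//       0x112233, 0x445566, 0x778899
//   ]

int main() {
  printf( "%u colors\n", (unsigned)( sizeof(imageColors) / sizeof(imageColors[0]) ) );
}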
# Python translation of the generated drawLogo() function
from vex import *

brain = Brain()

def drawLogo():
    imageColors = []    # imageColors from C++ (0xRRGGBB values)
    imageIndices = []   # same thing but for imageIndices
    imageCounts = []    # you know the drill..
    x = y = 0
    for i in range(len(imageIndices)):      # one entry per run (5074 in the original example)
        index = int(imageIndices[i])
        count = int(imageCounts[i])
        if index >= 0:
            # visible run: look the color up in the palette and draw it pixel by pixel
            brain.screen.set_pen_color(Color(imageColors[index]))
            for j in range(count):
                brain.screen.draw_pixel(x, y)
                x = x + 1
                if x >= 480:
                    (x, y) = (0, y + 1)
        else:
            # negative index: transparent run, just skip ahead
            x += count
            while x >= 480:
                (x, y) = (x - 480, y + 1)

drawLogo()   # call it once when the program starts
For people who care: I purposely wrote the code in an intuitive way, so you might be able to learn some new tricks here.
This project is not directly usable from block code.
But you could try recreating this logic in blocks, entering the generated values into lists manually.