Cody's BeagleBone Black / Cortex Bridge

I wasn’t going to post this because frankly I’m sick and tired of having projects fall apart and die around me after I tell the world that they’re going to happen. I tend to be an open person; I don’t like secrets, nor do I see why the world is so secretive. But when it comes to projects, this nasty little cycle seems to happen.

I get all excited about something and go off and do a fraction of the work required to complete the project; then life catches up with me, school starts, or I realize I don’t have what I need in terms of knowledge, tools, or just money / time to complete my goal. So the project gets shelved in my head.

Well, I’m running out of shelf space (flat surfaces are at a premium in my life, both physically and mentally), so I think it’s time to actually move one of these ideas along a little further.

A while back NAR posted this:

http://polynomic3d.com/user/smith/NAR/NARPE_forum.png

The team genuinely wants a better on-robot co-processor, because like all good teams we aspire to write better code. But what we really want is more capability. As one of the main programmers on NAR, last year I wrote thousands of lines of code for our robot. That code tried to structure the robot in a way that let the code understand the robot’s makeup; the idea was to write one set of code that could operate (at least drive) any robot the team could reasonably build.

The problem for me was the lack of any OOP support on the Cortex and the difficulty of debugging with a very limited console. As a programmer I’ve spent most of my life writing OOP code, and while I favor a component-based structure for distributing logic, OOP is the best we have at the moment. Yet we in VEX don’t have a means to write such code on the Cortex, and even if we did, the Cortex’s limited RAM would restrict what we could do.

When the Raspberry Pi came out, suddenly putting a full Linux box on a mobile platform didn’t seem as crazy as it used to. Now we could have a whole computer on the robot, with gigabytes of storage, hundreds of megabytes of RAM, USB, Ethernet, WiFi and some GPIO to boot.

Problem is, the rules of VEX U forbid us from removing the Cortex from the equation. The motors and VEXnet must go through the Cortex, so it became necessary to bridge the Cortex and Pi.

This is not a trivial task. And as time went by, I began to realize that the Pi has some shortcomings, so I chose a different platform: the BeagleBone Black. The biggest reasons were that the BBB uses DDR3 RAM, which is much faster than the SDRAM on the Pi, and that the BBB has far more I/O.

http://polynomic3d.com/user/smith/B4C/Cover.png

So I’ve made some progress on this already, which I’ll share here. But I’ll also use this thread as a build / programming log so that others can pitch in, or learn to do what we’re doing.

So let’s get started. First up is the Cortex’s UART.

http://polynomic3d.com/user/smith/B4C/UART-Close-Up.png

I chose to use the UART because it’s a pretty easy protocol, and the Cortex has two UART ports, which leaves an I2C port and a UART port free for other uses. UART is built for two-way communication between exactly two devices.

If you want to learn more about UART, see this Sparkfun guide. Also be sure to read the Cortex pinout to get everything plugged in correctly.

THE FOLLOWING SECTION WAS NOT ACTUALLY NECESSARY!

I mistakenly believed that the Cortex used a 5V logic level that needed to be shifted down to 3.3V. The Cortex does in fact use a 3.3V logic level and does not need to be level shifted.

That being said, some Arduinos DO use a 5V logic level and therefore need to be shifted, so I’ve chosen to leave this section here.

The BBB can be damaged by this higher voltage; luckily, Sparkfun sells a small level-converter board.

http://polynomic3d.com/user/smith/B4C/Sparkfun-Level-Shifter.png

I used right-angle headers because I wanted to create something closer to a wire; you could alternatively use straight headers and set this up on a breadboard, which may be what I do with my second level shifter.

END OF NOT ACTUALLY NECESSARY SECTION!
Thanks James for being James

Now comes the task of actually making these devices talk. Unfortunately I couldn’t get PROS to output anything over UART. I’ve asked them for help, so hopefully the issue can be figured out, but for now the following ROBOTC program serves as a decent test:

#pragma config(UART_Usage, UART1, uartUserControl, baudRate9600, IOPins, None, None)
#pragma config(UART_Usage, UART2, uartUserControl, baudRate9600, IOPins, None, None)
//*!!Code automatically generated by 'ROBOTC' configuration wizard               !!*//

task main() {
  // Repeatedly send "ABCDE\n" out UART1, one character every 250 ms.
  while (true) {
    sendChar(uartOne, 'A');
    delay(250);
    sendChar(uartOne, 'B');
    delay(250);
    sendChar(uartOne, 'C');
    delay(250);
    sendChar(uartOne, 'D');
    delay(250);
    sendChar(uartOne, 'E');
    delay(250);
    sendChar(uartOne, '\n');
    delay(250);
  }
}

All it does is write “ABCDE” over and over. But that’s enough to test the communication link.

Notice the “baudRate9600” up in the config; 9600 is the default baud rate for the BeagleBone Black. I hope to up this to 115200 later.

NOTE: I’ve flashed my BBB with the Debian image, because Debian rocks. You can find the image here and the flashing guide here.

Be sure to read the following (at the bottom):

Don’t be an idiot like me: I didn’t read it, left the microSD card in, and wondered why the BBB shut down every 15 minutes, incurring unnecessary writes to the eMMC the whole time.

On the BBB, you have to enable the UART ports. This isn’t all that simple. This guide explains the device tree and gives some good examples.

The basic process looks something like this:


mkdir /mnt/boot
mount /dev/mmcblk0p1 /mnt/boot
nano /mnt/boot/uEnv.txt

Then you add these lines:

# Enable UARTs 1, 2, 4 and 5
cape_enable=capemgr.enable_partno=BB-UART1,BB-UART2,BB-UART4,BB-UART5

Then reboot; this tells the BBB to enable UARTs 1, 2, 4 and 5.

To verify that the UARTs are enabled:

cd /sys/devices/bone_capemgr.*
cat slots

You should get something like this:

root@beagle:/sys/devices/bone_capemgr.9# cat slots
 0: 54:PF---
 1: 55:PF---
 2: 56:PF---
 3: 57:PF---
 4: ff:P-O-L Bone-LT-eMMC-2G,00A0,Texas Instrument,BB-BONE-EMMC-2G
 7: ff:P-O-L Override Board Name,00A0,Override Manuf,BB-UART1
 8: ff:P-O-L Override Board Name,00A0,Override Manuf,BB-UART2
 9: ff:P-O-L Override Board Name,00A0,Override Manuf,BB-UART4
10: ff:P-O-L Override Board Name,00A0,Override Manuf,BB-UART5

After that, all we have to do is:


cat /dev/ttyO4

http://polynomic3d.com/user/smith/B4C/First-Contect.JPG

And we now have an open channel between the BeagleBone Black and the Cortex.
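If you want something more programmable than cat, here’s a minimal sketch of the BBB side in C, assuming the same /dev/ttyO4 device as above. It’s just the standard termios raw-mode setup, not code from the actual bridge:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <termios.h>

int main(void) {
    int fd = open("/dev/ttyO4", O_RDONLY | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);              /* raw mode: no echo, no line discipline */
    cfsetispeed(&tio, B9600);     /* match the Cortex's baudRate9600 */
    cfsetospeed(&tio, B9600);
    tcsetattr(fd, TCSANOW, &tio);

    char c;
    while (read(fd, &c, 1) == 1)  /* should print ABCDE over and over */
        putchar(c);

    close(fd);
    return 0;
}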

Right now it’s kind of like a marriage: the Cortex does all the talking, repeating itself all the time, and the BBB just listens. The next step will be to establish a more sophisticated communication protocol between the two devices.

I’ll probably use James’s P3, once I actually figure out how it works beyond the basics, or I may derive my own protocol based on it.

Once we have a good communication protocol, we’ll begin to leverage that WiFi adapter and the apache2 server running on the BBB.

In the short run, we’ll use the BBB to do some major real-time data logging.

In the long run, we’ll (hopefully) begin to send RPC calls to the Cortex; basically, we’re going to tell it what to do.

-Cody


I will double-check on this later, but I’m pretty sure the I/O on the Cortex is actually 3.3V. You probably only need GND, Tx and Rx, and can scrap the level shifter. Even if that’s not the case (and again, I will check on this), many microcontrollers have 5V-tolerant I/O pins.

For example, this is for the processor in the Cortex.
[ATTACH]8651[/ATTACH]

I’ve read from several sources that the BBB’s GPIO pins should not be directly interfaced with 5V hardware, or the board will be damaged.

That being said, now that I think about it, I did assume that the Cortex used a 5V logic level for its I/O. If the Cortex does use a 3.3V logic level, then you’re correct, the level shifter wouldn’t be necessary.

I don’t have a scope, but I was able to quickly measure this from GND to TX.

I’m not 100% sure, because I know this crappy multimeter averages over time and this is a signal that is switching between HIGH and LOW, but it does seem like you’re right on this one.

http://polynomic3d.com/user/smith/B4C/3v.png

It’d be great if you verified this properly though. Thanks!

Time to get one of these :slight_smile:
https://www.saleae.com

Here is a scope shot of the UART Tx line (the green wire in your first post). Tx is at 3.3V.

[ATTACH]8653[/ATTACH]

The Rx input would be whatever you send; a VEX LCD is sending back 5V, which is OK as the Cortex is 5V tolerant (and may have series resistors, I don’t know about that).

So you would not use the 5V power from the Cortex (red wire), just GND (black wire), Tx and Rx.

Most STM32Fxxx devices (including the one in the VEX Cortex) run at 3V3 and can withstand 5V inputs perfectly fine (except for the ADC, which will break at 5V).

Everything labeled “FT” in the datasheet is 5V compatible.

[ATTACH]8652[/ATTACH]

In terms of the project, the data-logging aspect is very interesting. Any thoughts about running ROS on the BeagleBone (if that’s even possible)? You could potentially run some pretty interesting simulations in rviz.

I’ve updated the lead post.

WANT much… The site says there’s a two-week lead time; hopefully I can get some work done and buy the 8. Depends on money and stuff.

I have a few jobs lined up, hopefully I can make some dollars in-between classes.

Never heard of it. I’m interested in serving the real-time data off the apache2 web server. I’m aiming to have the data viewable by any device with a browser and WiFi connectivity.

The trick will be in all the craziness involved in getting the data moved around. I’ll have to implement something like P3 to send JSON or Google PBF, then it’ll have to be stored in either a file or a DB on the BBB, and then that’ll have to be read by a PHP script and spoon-fed via AJAX to the client.

EDIT: I’m leaning towards the idea of storing the data in a MySQL DB; it would be the easiest to parse in PHP.

They used to do a student discount. I don’t see it since the website was refreshed, but I would email them and ask; it was something like $100 off the $300 version.

+1! Might have to check this out for my classroom.

That would totally work, not too difficult either. MySQL would make your life easier.

I wouldn’t throw away the idea of ROS though, as it already does what you’re saying and more. What basically happens is that you boot up a roscore “server” and launch nodes, which will be your individual programs. The server allows different nodes to communicate with each other over “topics.” For instance, one node can be your motor controller, another can communicate over UART, etc. Specific to your example, one of those nodes can be rosbridge, which can take in / send out data over a WebSocket. The benefit of ROS is that everything is already done for you; you just have to learn how to use it.

See here: robotwebtools.org
Also, this is what rviz looks like: rviz

Enough with the advertising though, just throwing around different ideas.

:o nice render

Earlier this year I made a bridge between the Cortex and the Raspberry Pi, including a full communication protocol (checksums, resending, pings), allowing me to set motor values using a web server.

Apache is way too slow to use on a Pi though; I ended up using my own Lua web server so I could easily access the UART through the RPi library that I made in C.

I’d rather just rewrite it at this point because it was made in ROBOTC; arrays are much easier to manage in PROS because of malloc.

They still offer a student discount. You have to send in a support ticket saying you want the discount and which model you would want.

That thing looks glorious. I’m not competing with my college team this year, but man, for my own testing purposes and projects, that is something that needs to be on my desk.

I’ve been meaning to continue this story, but work has been slow because of all the college stuff.

http://polynomic3d.com/user/smith/B4C/bsworking.png

So at this stage, I’m working on creating a protocol that fulfills the role of OSI Layer 2, the data link layer.

We kind of get the ability to send bytes on each end done for us; it’s implemented at a low level by the operating systems on both the BBB and the Cortex, under PROS / ConVEX / ROBOTC.

But this is still a physical link, and things go wrong sometimes. While we can send bytes, we have absolutely no guarantee that those bytes will arrive on the other end. It’d be nice if we could confirm not only that something arrived, but that the right something arrived.

In order to do this, we’re going to divide our data up into “packets,” and we’re only going to send one of these packets at a time. Each time a packet is received, we’re going to quickly check it using a checksum and send back an OK or a NOT OK response. When we get an OK, we’ll unlock the system and allow the next packet to be sent; on a NOT OK we’re going to resend the packet, up to three times. After three NOT OKs it’s likely that something is wrong, and it would be silly to keep trying.
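In C, the sending side of that stop-and-wait scheme looks roughly like the sketch below. To be clear, this is illustrative and not my actual code: send_packet and wait_for_response are hypothetical stand-ins for the real UART layer, and the additive checksum is just a placeholder.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define MAX_RETRIES 3

enum response { RESP_OK, RESP_NOT_OK, RESP_TIMEOUT };

/* Illustrative additive checksum; the real protocol could use anything. */
static uint8_t checksum(const uint8_t *data, size_t len) {
    uint8_t sum = 0;
    while (len--) sum += *data++;
    return sum;
}

/* Hypothetical stand-ins for the real UART layer. */
static void send_packet(const uint8_t *data, size_t len, uint8_t sum) {
    (void)data; (void)len; (void)sum;   /* would frame and write to the UART */
}
static enum response wait_for_response(void) {
    return RESP_OK;                     /* would block on the UART, with a timeout */
}

/* Send one packet; resend up to three times on a NOT OK (or timeout). */
bool send_reliably(const uint8_t *data, size_t len) {
    uint8_t sum = checksum(data, len);
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        send_packet(data, len, sum);
        if (wait_for_response() == RESP_OK)
            return true;    /* unlocked; the next packet may be sent */
    }
    return false;           /* something is likely wrong; stop trying */
}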

But before we get there, we need a simple way to determine where a packet starts and stops. Like, dude, what IS a packet anyway…

To answer that question, we need to frame our packets.

http://polynomic3d.com/user/smith/B4C/145590378.jpg

Something like that. Now, a common way to do this is to just pick two bytes that are unlikely to be in your message, call those bytes your frame bytes, and hope that your input doesn’t contain them. There are some clever ways (escaping) to tell the interpreter not to treat message bytes as framing, but I just really don’t like this notion. I’d like any member joining this conversation to be able to definitively KNOW when one packet has stopped and another is starting.

What I’d like to do is use a single 0x00 “zero” byte between packets and ensure that my data never contains such a byte. Problem is, my data does contain zero bytes.

So I did some searching and managed to find a clever way around this called Consistent Overhead Byte Stuffing (COBS).

I still wasn’t happy with this until I found someone else who made a slightly better implementation of the process, which I was quite happy with.

COBS adds a byte to the beginning of the data that points to the first zero byte. Every subsequent zero byte is replaced with the offset between it and the next zero. That way all the zeros go away, except the ones I’m deliberately inserting between packets.

The dark blue arrow above shows the zero offsets. During encoding these are written, and during decoding they are replaced by zeros. So during encoding, in goes data with zeros, and out come offsets guaranteed not to be zero. Then during decoding we turn the offsets back into zeros.

One catch: every 254 bytes of non-zero data, we have to add an extra byte. I cut out the code that handles this in COBS and just limited my packets to 254 bytes.
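Here’s roughly what that looks like in C, with the 254-byte restriction baked in. This is a sketch along the lines of the implementations linked above, not my exact code:

#include <stdint.h>
#include <stddef.h>

/* COBS limited to payloads of at most 254 bytes, so the "insert an
   extra code byte every 254 bytes" case never comes up. The output
   buffer needs room for len + 1 bytes. Returns the encoded length. */
size_t cobs_encode(const uint8_t *in, size_t len, uint8_t *out) {
    size_t code_pos = 0;    /* where the current offset byte will go */
    size_t out_pos = 1;
    uint8_t code = 1;       /* distance to the next zero, counted so far */
    for (size_t i = 0; i < len; i++) {
        if (in[i] == 0) {   /* write the offset, start counting again */
            out[code_pos] = code;
            code_pos = out_pos++;
            code = 1;
        } else {
            out[out_pos++] = in[i];
            code++;
        }
    }
    out[code_pos] = code;   /* final offset points past the packet end */
    return out_pos;
}

/* The inverse: turns the offsets back into zeros. Returns the decoded length. */
size_t cobs_decode(const uint8_t *in, size_t len, uint8_t *out) {
    size_t in_pos = 0, out_pos = 0;
    while (in_pos < len) {
        uint8_t code = in[in_pos++];
        for (uint8_t i = 1; i < code && in_pos < len; i++)
            out[out_pos++] = in[in_pos++];
        if (in_pos < len)   /* the last block has no trailing zero */
            out[out_pos++] = 0;
    }
    return out_pos;
}

For example, the bytes 11 22 00 33 encode to 03 11 22 02 33: the leading 03 says the first zero is three bytes ahead, the 02 that replaced that zero points past the end of the packet, and no zero bytes remain to collide with the frame byte.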

Most of the code for my messaging protocol has been written, about 550 lines with heavy commenting. But I’ve only begun to test the code.

So that’s where I’m at. -Cody

Here you go, check out this one. $99.00 with the student discount.

First successful packet encode / decode.

http://polynomic3d.com/user/smith/B4C/goodpacket.png

There were a few issues that popped up, two actually. I was accidentally checksumming an extra byte during the encoding step, and during the decoding step I goofed the loop that read the data back. It was a minor offset kind of thing, easy to find and fix.

At this point I’m worried mostly about the not-so-obvious mistakes: things like what happens if I send a max payload with no zeros, and problems arising from bad timing, like what happens when we check with only half the packet data in the rx buffer. I still have to write some timeout code for the WAITING_FOR_ACK state so that it doesn’t get stuck there. Things like that.

For those of you wondering, yes I’m testing this in Xcode. I wrote it in PROS but am testing in Xcode because it’s a lot faster to work in. Nothing about this is really specific to PROS or VEX, so Xcode is just fine.

To test this, I’m basically writing to a file, closing it, then opening it again to read from. It’s a crude simplification, but it’s enough to test a fair amount of the mechanics of this protocol.
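For the curious, the shape of that test is something like the following, reusing the cobs_encode / cobs_decode sketches from above. Again, this is illustrative rather than my actual test harness:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* cobs_encode / cobs_decode as sketched earlier in the thread */
size_t cobs_encode(const uint8_t *in, size_t len, uint8_t *out);
size_t cobs_decode(const uint8_t *in, size_t len, uint8_t *out);

int main(void) {
    uint8_t payload[] = { 0x11, 0x22, 0x00, 0x33 };   /* contains a zero */
    uint8_t encoded[260], rx[260], decoded[260];

    size_t enc_len = cobs_encode(payload, sizeof payload, encoded);

    /* No error checking; it's a crude test. The file stands in for the UART. */
    FILE *f = fopen("packet.bin", "wb");
    fwrite(encoded, 1, enc_len, f);
    fputc(0x00, f);                        /* the inter-packet zero byte */
    fclose(f);

    f = fopen("packet.bin", "rb");
    size_t rx_len = 0;
    int c;
    while ((c = fgetc(f)) != EOF && c != 0x00)   /* read up to the frame byte */
        rx[rx_len++] = (uint8_t)c;
    fclose(f);

    size_t dec_len = cobs_decode(rx, rx_len, decoded);
    assert(dec_len == sizeof payload);
    assert(memcmp(decoded, payload, dec_len) == 0);
    puts("round trip OK");
    return 0;
}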

IDK, this is really exciting stuff to me :slight_smile:

-Cody

Thanks for the info! I have a question: can I use a BeagleBone or Raspberry Pi with the Cortex in VEX U competitions?

Yes.

Great to see you dealing with sync issues at the beginning of development. Now don’t get the wrong impression about what follows; it’s not meant to pour water on your fire!
Most embedded software engineers will admit to rolling their own protocol at some stage, and more often than not they regret it later on. I would highly recommend putting some thought into using an existing protocol for this application. Unless the motivation is just experience and ownership, I’d suggest MODBUS or something else sensor/actuator oriented.

Serial comms is actually one of the more difficult things to get right, but you’re on the right track. Dealing with lost or corrupt data takes more effort than the protocol itself. Interestingly, the IME falls into this category too, and that’s been less than perfect.

That’s a really good point, the whole “Don’t reinvent the wheel” mantra.

I’ve talked about this before; I regularly find that this argument doesn’t stand up in the real world. I’ve spent more time trying to learn how something else works, and gone through more frustration trying to bend those off-the-shelf wheels into what I need, than it would have taken to build my own. These days I regularly discard most non-mainstream code.

Even the protocol you mention, which is really neat btw, doesn’t meet my criteria:

I want to be able to push or pull data from the Cortex. At times, I only have data (such as a sensor reading) at the instant of communication; 10 ms later that value has changed, and I don’t want to have to store these values in the Cortex’s limited memory. I want to continuously send (push) this data off to the BBB, and forget the data as soon as the packet-forming function ends.

In reality, yes, there will be some kind of packet queue, so technically I’m saving some data. But w/e.

It’s a fun project, actually, and I’m getting to write neat code.

Too bad I’m knee deep in the tail end of calculus 2 atm.

So yeah, all good stuff.

Just promise not to go inventing new protocols when you’re in the real world :wink:
Sometimes it’s better to fit the application around an existing protocol and re-evaluate the requirements (do you really need multi-master, for example?). MODBUS is easily understood and free to use, and there are plenty of existing tools kicking around, hence my first suggestion; however, there are a few other popular serial protocols. That said, the experience of rolling your own is certainly valuable, much like good calculus grades.