VEXpro - Communicating with external software

I hope this isn’t a really stupid question, but I’m really having a hard time trying to figure this out:

How would a program on the VEXpro communicate with software running on your PC other than TerkIDE?:confused:

The reason that I'm asking is that I would like to add vision to my robot and learn image processing. I found a really easy-to-use vision software package called RoboRealm (http://www.roborealm.com) and an article in the March/April issue of ROBOT Magazine about streaming video to your PC from a webcam with the VEXpro.

I just don't have any idea how to pass the information from RoboRealm back to the VEXpro. I'm assuming that the TerkIDE would have to be open, but I'm not sure. The article mentioned something about the software for the project being able to do that, but I haven't figured out how to run it yet. It's a Java app.

Any help would really be appreciated

Not a stupid question. The VEXpro runs Linux and you have WiFi, so the most common way would be TCP/IP. Start by looking into this - Berkeley sockets.

The VEXpro (or the PC) creates a server and the other side a client; the client talks to the server and they exchange information.
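
Just to give an idea of the scale of this, a bare-bones client in Java is only a few lines - for example, with the PC as the client and the VEXpro as the server. The address, port and message here are just placeholders for the example; on the VEXpro side you would write the matching server in C using the socket calls described at that link.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SimpleClient {
    public static void main(String[] args) throws Exception {
        // connect to the server - replace with your VEXpro's IP address and port
        Socket socket = new Socket("192.168.0.99", 5005);

        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));

        out.println("hello");           // send a line of text to the server
        String reply = in.readLine();   // read one line back
        System.out.println("server said: " + reply);

        socket.close();
    }
}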

Edit.

I took a look at the RoboRealm website; here is an excerpt from some of their documentation. I highlighted some relevant parts.

I’ve found this site to be a good hands-on primer to low-level network programming. The sample source posted there works on the VEXpro, and I’ve used similar code to communicate between the VEXpro and a Mac.

This doesn’t cover the higher levels of protocol (like XML), but it will at least get you a basic stream of data going between a program on the Vex and a program on your PC.

Cheers,

  • Dean

Thank you both again for the help!

The RoboRealm software has a "sockets" interface module built in. I had looked at it along with other ones, trying to see what I could use. I will give it another look. Maybe between it and the PDF file Quazar sent me a link to, I can get it working.

I really wish that I could get the software provided with the article I mentioned to work, though. There were detailed instructions on creating the project for the VEXpro part in the TerkIDE, but as I said, nothing for running the Java app for the PC part. The article did mention the app allowing you to view the streaming video coming from the VEXpro, being able to send it to RoboRealm, and passing information back. I will have to take a close look at the files for the VEXpro too.

I wish I could attach a link to the article so you could look at it, but as far as I know it's not available online. I have included the software in hopes someone might be able to take a look at it, along with this link to the instructions: http://find.botmag.com/031202. There is also a "Readme" file, but it didn't really say too much.

I don’t know if the best thing might be trying to contact the authors for help or if I should forget all about trying to use that software and just focus on trying to write my own. Any thoughts?
VEXpro Webcam.zip (132 KB)

I don't know what you have tried already. I don't have a VEXpro, but I looked at the Java code you attached. The code ran in Eclipse and opened a window which presumably would show the video image. Did you try to run it in the Eclipse IDE? If not, you can download it from here. You need to have a Java virtual machine installed to use it; if you do not have Java, one source for it is here.

I'm attaching an executable jar file that you can run from the Windows command line as follows.

java -jar watchVideo.jar --videoServer 192.168.0.99

Replace the IP address with the address of your VEXpro. You should have the server already running, but I cannot test that for you.

You can see the available options by running:

java -jar watchVideo.jar -h

Hope this helps. I don't use Java at all in my work, but I use the Eclipse IDE for other development purposes.
watchVideo.jar.zip (31.4 KB)

I had tried to open the file several times and I kept getting an error message. I read the article again, checked for Java on my PC, downloaded and installed Java again, checked the internet for help, etc. I figured there was probably something that I was missing, so I decided to ask the forum for help. I didn't even think about trying the Windows command line.

Why did you try running it in the Eclipse IDE, and how did you do that? :confused: I read the "readme" text file included with the software and I think it did say something about running the app in the IDE (TerkIDE for the VEXpro). That didn't make any sense to me, though, and I didn't know how to do that.

I don't know what version of the Eclipse IDE you used, but if you could let me know what you did, I would really appreciate it. I guess it's not really necessary since you included the jar file for me to run from the Windows command line, but I would still like to know.

Thank you for all the help,

did you try running it in the eclipse IDE and how did you do that?:confused: I read the “readme” text file included with the software and I think it did say something about running the app on the IDE (TerkIDE for the VEXpro). That didn’t make any sense to me though and I didn’t know how to do that.

The readme file says the following

So, as I have no other way to compile the Java, I used Eclipse.

The procedure is (more or less):

1. Create a new project
2. Import the client/watchServer directory
3. Add jargs.jar as an external library
4. Run

I downloaded Eclipse for Java, as linked in the previous post. Use the top entry, "Eclipse for Java developers".

Could you run the file I attached earlier? Could you connect to the VEXpro?

I'm sorry, I feel really stupid - I should have re-read the readme file before I asked for help. When the information for the VEXpro was first posted, it said the Eclipse IDE would be used for programming. The VEXpro actually uses the TerkIDE, so when I read that, in my mind that's what I thought they meant. The article never mentioned needing any other software besides what you normally use for programming the VEXpro. When you said you used the Eclipse IDE, I wasn't sure if you meant a version you could use to program the VEXpro with or the one for Java at the top of the list.

I had some time tonight and tried the executable jar file you sent me. After a few tries, I finally got the pop-up window. I didn't try the Eclipse IDE though; I decided to just use the Windows command line instead. I really didn't want to use any more software than I had to.

I haven't tried the files for the VEXpro yet; this was the part I wasn't sure about. The files for the VEXpro seemed pretty straightforward with the instructions provided, and I did get a chance to look them over. Hopefully, I can try everything out soon and it will all work - maybe I will get a chance this weekend!

Well, I'm beginning to think this project was way over my head. I tried getting everything working this weekend, and the server software that's supposed to run on the VEXpro kept displaying this error message in the Console window:

ERROR opening V4L interface
: No such file or directory
Child's result=256…awaiting child's termination…
Child's result=256…starting child
Child's result=256…awaiting child's termination…
ERROR opening V4L interface
: No such file or directory
Child's result=256…starting child
ERROR opening V4L interface
: No such file or directory
Child's result=256…awaiting child's termination…
Child's result=256…awaiting child's termination…

I checked the instructions, which said in case of trouble:

6. If you don't see the "waiting for video client" message, your webcam may need different program options. Launch a terminal, cd /opt/usr/bin, and run ./uvcsrvr -h. Look at the help for different options that may be appropriate. Read the README.txt in the watchVideo folder, which will contain the latest suggestions.

I tried this, but all I got was the following in the terminal window:

root@qwerk:/opt/usr/bin# run ./uvcsrvr -m
-sh: run: not found
root@qwerk:/opt/usr/bin# run ./uvcsrvr -h
-sh: run: not found

I re-read the article and the instructions for creating the project and tried quite a few times - creating the project, deleting it, and re-creating it - but nothing worked. I didn't know if there was a problem with the webcam I bought - a Logitech C210 USB webcam - but the comments in one of the files say that it works with all Logitech UVC-compatible webcams.

I don't have any idea what's wrong or how to get it working. I'm sure there's something that I'm missing or don't understand. I figure the only way to get it working is if someone else tries it and sees what happens, or to try contacting the authors of the article. I figured there might be a glitch or two, but I was really hoping that the software would work without too much trouble.

Any thoughts?

-sh: run: not found
root@qwerk:/opt/usr/bin# run ./uvcsrvr -h
-sh: run: not found

Well, you don't need to type "run" - just type

./uvcsrvr -h

when you are in the directory where that executable is (the ./ means the current directory).

Let me read your post in detail later and try to help.

This is telling you that V4L cannot be initialized. V4L is short for Video for Linux, and it is the mechanism used to interface to your webcam. The home page here implies that your Logitech C210 is a supported camera, so you should be good to go there.

I do not have a VEXpro so it’s hard to debug this for you. However, the first thing we need to do is see if the camera driver is being loaded and a file in the /dev directory being created to access it.

The usage for the server is as follows (I could not run the server, but I copied the usage function from the source into test code to produce this).

Usage is: uvcslicesrvr [options]
Options:
-v Verbose (repeat for more verbosity)
-d V4L2 Device (default: /dev/video0)
-l<logging_port> Logging port (default: 5006)
-p Server port (default: 5005)
-x Image Width (must be supported by device) (>960 activates YUYV capture) (default: 160)
-y Image Height (must be supported by device) (>720 activates YUYV capture) (default: 120)
-c Command to run after each image capture (executed as <output_filename>)
-t Take continuous shots with seconds between them (0 for single shot)
-r Use read instead of mmap for image capture
-w Wait for capture command to finish before starting next capture
-m Toggles capture mode from YUYV to MJPEG capture
Camera Settings:
-B Brightness
-C Contrast
-S Saturation
-G Gain

This line

-d V4L2 Device (default: /dev/video0)

is the line we are interested in. Have a look in the /dev directory and see if there is a file named video0 (be sure the camera is connected). If there is a different file with a similar name, then start the server using that as the device. For example, let's say you find a file named "video12"; then start the server as follows.

Change directory to /opt/usr/bin (or wherever the server is):

cd /opt/usr/bin

then run the server:

./uvcsrvr -d /dev/video12

Try this and let me know what you find out. I’m sure I could have this running fairly quickly but without the VEXpro in front of me it’s hard to know if the drivers are being loaded correctly etc.

Edit:
Here is the code that is causing the error: /dev/video0 cannot be opened, so the child process started from main exits.

  if ((vd->fd = open (vd->videodevice, O_RDWR)) == -1) {
    perror ("ERROR opening V4L interface \n");
    exit (1);
  }

I just looked in the /dev directory and there wasn’t any file named video. I unplugged the camera and plugged it back in, but when I refreshed the window, there still wasn’t anything there.

Sorry, that was the one thing I forgot to check.

I looked on the internet earlier to see if I could find any other information about the Logitech C210 webcam, and I found a post on another website talking about a similar issue. They thought the webcam should work, but couldn't find any information to confirm whether or not it actually would. They found some information that said the Logitech C270 webcam should work and confirmed that it did. Now I'm wondering if I should try it instead. I bought my C210 at Radio Shack last Friday, so I still have time to return or exchange it, and they do have the C270. As a matter of fact, it was one of the webcams I considered buying.

I looked on the internet before I bought the webcam just to see if I could find out which ones would work with Linux, and both the C210 and C270 were on the list. I chose the C210 because it was just a basic webcam, and I figured the simpler it was, the better the chance of it working.

You can check if the webcam is at least being detected on the USB bus by using the lsusb command. This should be available; I took a quick look at the default file list that Quazar posted and I see it in sbin. I also see the video drivers there.

OK, I just checked and the camera is being detected on the USB bus. :slight_smile: Here's a copy from the terminal window:


root@qwerk:~# lsusb
Bus 001 Device 001: ID 1d6b:0001  
Bus 001 Device 002: ID 046d:0819 Logitech, Inc. 
Bus 001 Device 003: ID 148f:2573 Ralink Technology, Corp. 
root@qwerk:~#

There still isn’t any video file in the dev directory. Where should I go from here - any ideas?

Just an update.

The streaming video part finally works! I went back to Radio Shack yesterday and purchased a different webcam (a Microsoft Lifecam HD-3000) and tried it out last night. I checked using "lsusb" and it was listed and there was a video file in the "dev" folder. I received the "waiting for video client connection on port 5005" message in the "console" window of the TerkIDE and ran the Java Jar file.

Now, I just have to make the image bigger and get it working with the RoboRealm software. There is a Java file "RR_API.java" that I was able to look at that I think is the part I need, but I keep getting the same error when I try to run it as when I tried to run the Jar file. I know I need to try the Eclipse IDE for Java that jpearman suggested in an earlier post and somehow make an executable file that I can run.

Any advice on this part besides what's already been mentioned? :confused:

Well, that's good news. Sorry I had not gotten back to you, but I was out of ideas and didn't really know what to try next. When I plugged a webcam into a Fedora Linux install I have (I don't use Linux much), everything worked as advertised.

You should be able to increase the size by using different arguments to the server and client. The default is 160 x 120 so start the server with the following additional args.

./uvcsrvr -x 320 -y240

The usage from the java client source code is

String usage = 
"Usage: \n"+
"  roombacomm.Roborama --videoServer <IP> --videoPortNum <port> -x <image X size> -y <image Y size>[options]\n" +
"where [options] can be one or more of:\n"+
" -X | --debug       -- turn on debug output\n"+
" -x <width> or --width <width>: set width\n" +
" -y <height> or --height <height> -- set height\n" +
" -c | --color -- use color mode, otherwise grayscale\n" +
" --videoserver <IP> : set IP address of video server\n" +
" --videoPortNum <port> : set port number on video server\n" +
" -t | --threshold <threshold> : set quantization threshold\n" +
" -R | --RoboRealm : connect to Roborealm & send image to it. Roborealm requires color\n" +
" -hwhandshake -- use hardware-handshaking, for Windows Bluetooth\n";

so do the same thing there as well

java -jar watchVideo.jar --videoServer 192.168.0.99 -x320 -y240

There is also a --RoboRealm argument for connecting to the RoboRealm software.

That's OK, I didn't know what else to do either except to try a different camera. I don't know why the Logitech camera didn't work, but the Microsoft camera works great!

I have to apologize again for not looking into things more before posting. :o I found out the same things you mentioned above after I played around after dinner. I took some time to actually look at the Java files. I'm just not used to this - I'm used to loading the software and running it with any options being available from a menu.

Anyway, I was able to connect with RoboRealm and display the video from the camera. I just wasn't able to make the image bigger. I was only trying it with the file on my PC and I wasn't sure if I actually needed the "x" and "y" or not. I never went back to try it with them.

Now, I'm trying to figure out how variables get passed back and forth and how I would get one from the Java app into my program to control the robot. I looked at the "RR_API.java" file and I found the code for sending and receiving variables - I'm just not sure how it works. I copied it into a Word file that I attached to this post. That code begins about the middle of page 10. I'm going to keep trying to figure it out, but if you wouldn't mind taking a look at it too, I would appreciate it. Hopefully, you will be able to give me some advice or at least help me understand it.

Thanks again for all of your help so far, I really do appreciate it. All of this is new to me, which is probably why it says the VEXpro is for advanced users. Although it really isn't that hard - it's just knowing what to do and how to do it.
RR_API.zip (42.4 KB)

There are two communication paths:

  1. from the Java client to the server on the VEXpro
  2. from the Java client to RoboRealm

The code you posted is part of the second case; I think what you really want is the first. Take a look at the server code in uvccapture.c.

This is where the server accepts connections and starts a new thread for the client messages. (around line 245)

	// wait for a video client to connect before proceeding
	fprintf (stderr, "waiting for video client connection on port %d\n", port);
	if ((ctrl.videoSocket = wait4client(port)) <= 0) {
		fprintf (stderr, "error connecting to client: %d\n", ctrl.videoSocket);
		exit(-1);
	}
	// start the thread that handles the video client requests
	pthread_create(&videoSocketThread, NULL, (void *)cmdHandler, (void *)&ctrl);

This is the code that is running in the thread.

/* handle commands from RoombaComm client as long as the socket stays open */
void
cmdHandler(ctrlStruct * ctrl)
{
	unsigned char clientCmd[256];
	int rv;
	int netInt;

	while (run) {
		//fprintf (stderr, "waiting for command from client\n");
		rv = read(ctrl->videoSocket, &clientCmd, 10);
		if (rv < 0) {
			perror("socket read: ");
			exit(-1);
		}
		if (verbose > 1) fprintf (stderr, "received command %d\n", clientCmd[0]);

		if (clientCmd[0] == 200)
			outputType = 0;		// grayscale output
		else if (clientCmd[0] == 201)
			outputType = 1;		// rgb output

		ctrl->doCapture = 1;	// tell startvideoserver() to capture a snapshot
		if (verbose > 1) fprintf (stderr, "waiting for capture to complete");

		while (ctrl->doCapture == 1) {
			fprintf (stderr, ".");
			usleep(50000);
		}

		if (verbose > 2) fprintf (stderr, "writing %d image bytes to socket, width %d height %d, pixelCnt %d\n", ctrl->imgLength, ctrl->imgWidth, ctrl->imgHeight, ctrl->pixelCnt);

		// convert size variables to network byte order
		netInt = htonl(ctrl->imgWidth);
		write (ctrl->videoSocket, &netInt, 4);
		netInt = htonl(ctrl->imgHeight);
		write (ctrl->videoSocket, &netInt, 4);
		netInt = htonl(ctrl->imgLength);
		write (ctrl->videoSocket, &netInt, 4);
		write (ctrl->videoSocket, ctrl->imgArray, ctrl->imgLength);
		if (verbose > 2) fprintf (stderr, "wrote image & params\n");
	}
}

You can see it handles two different messages, selected by the byte stored in clientCmd[0]. You probably need to modify this first, and then start working on the matching code at the Java end (from FrameProcessor.java):

public int readFrame(int width, int height, int captureType)
	{
		int bytesPerPixel = 1;
		int readLength = 0;
		int rxImageSize;
		int rgbShiftSize = 16;
		int pixel = 255 << 24;
		
		if (captureType == 1)	// if rgb color, 3 bytes/pixel, otherwise 1 byte/pixel for grayscale
			bytesPerPixel = 3;
		frameSize = width * height * bytesPerPixel;
		
		int maxReadSize = frameSize;
		readBuf = new byte[frameSize];			// analysis form - temp buffer for raw received data
		vidBuf = new byte[frameSize];			// raw image bytes
		vidDispBuf = new int[frameSize];		// display version (alpha set, bytes replicated if needed)
		int vidDispBufIx = 0;
		int vidBufIx = 0;
		
		readBuf[0] = (byte)(200 + captureType);	// send the "capture" command
		byte[] t = new byte[4];
		
		try {
			out.write(readBuf, 0, 1);
			out.flush();
			

None of this is simple; it may be better to start with your own version of the Java client. If I were doing this, I would rewrite the Java code in one of my preferred development tools.
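
To make that a little more concrete, here is a stripped-down sketch of what your own client could look like if it just speaks the protocol that cmdHandler() above implements: send one command byte (200 for grayscale, 201 for RGB), then read back the width, height and image length as 4-byte network-order integers, followed by the image bytes. The class name and the IP address/port are only placeholders, not anything from the project.

import java.io.DataInputStream;
import java.io.OutputStream;
import java.net.Socket;

public class SimpleFrameGrabber {
    public static void main(String[] args) throws Exception {
        // placeholder address/port - use your VEXpro's IP and the server's port
        Socket socket = new Socket("192.168.0.99", 5005);
        OutputStream out = socket.getOutputStream();
        DataInputStream in = new DataInputStream(socket.getInputStream());

        out.write(200);    // 200 = grayscale capture, 201 = RGB (see cmdHandler above)
        out.flush();

        // the server writes each size with htonl(), i.e. big-endian, which is
        // exactly the byte order DataInputStream.readInt() expects
        int width  = in.readInt();
        int height = in.readInt();
        int length = in.readInt();

        byte[] image = new byte[length];
        in.readFully(image);    // read exactly 'length' bytes of image data

        System.out.println("got a " + width + "x" + height + " frame (" + length + " bytes)");
        socket.close();
    }
}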

Hey guys - sorry for being so late to this thread. Blackstag just pointed me to it. Allen - I hope you had some success. I wrote the article you refer to, and the code, and if I can help I will. It’s good to hear someone’s looking at this stuff - I thought all that writing & coding was going into a black hole.

Here's some overview information. I'm sorry the documentation in the article wasn't better - they limit you to about 3000 words, and you have to try & get the central themes over, and give the technical details & build instructions in online docs linked from the article. If I don't reply to this thread (I'm not sure it emails me when someone posts) you can email me at paul.bouchier at gmail.

The video-on-VEXpro app is based on code I've been running on Chumby & VEXpro for a long time. I've integrated the Java code with Chumby & RoboRealm & it works great - there shouldn't be problems with the VEXpro - in fact I'm going to do that for Roborama 2012a in the next couple of weeks. If you look at this video: Roborama 2011a - YouTube, you'll see a mower (the silver/grey one) that uses odometry to get close to the cone, then switches to visual seeking: it captures a frame from the webcam on Chumby and ships it back to roombacommcli, which sends it to RoboRealm and reads back the location of the cone if RoboRealm could find it. roombacommcli then tells the robot what to do. The same scheme should work with the VEXpro - it just uses standard Linux capabilities.

The overall application architecture is a Java app running on my laptop under Eclipse, RoboRealm running on the laptop, and a derivative of uvcserver running on Chumby/VEXpro. I use the TerkIDE to develop for the VEXpro, but it doesn't let you specify arguments on the command line, so when I'm trying stuff out I ssh into the VEXpro & run the binary with command-line arguments, then set them as the default when I want to debug with TerkIDE.

For the robotic motion application, I start RoboRealm first (you must enable the socket interface in RoboRealm so it accepts connections from the Java app). Then I start the daemon on Chumby or VEXpro, which opens the webcam and listens on a socket for requests from roombacommcli to capture a video frame. Then I start a 2nd app on Chumby/VEXpro which listens for motion requests from roombacommcli. The Java app on the PC (roombacommcli) runs the robot to the point where video seeking should start, then stops the robot (so the video is sharp) and requests the daemon on the robot to capture a frame & send it back. When the frame arrives back at the Java app, it sends it to RoboRealm using the Java library RR_API, which formats the request as an XML document & sends it over a socket to RoboRealm. RoboRealm is configured with a series of filters (filter for red, improve image, do shape recognition), and sends the X & Y coordinates of the shape (if found) back over the socket to the Java app (roombacommcli). Depending on whether RoboRealm found the cone, the Java app commands the robot to spin a little and look again, or to seek toward it. Most of the interesting Java code that does this on the PC is in an app called roombacommcli. (Despite its name it will run a Roomba, Mo'bot the mower, and tankbot.) Watchvideo is a limited version of roombacommcli that only displays video - it doesn't run the robot. I haven't actually tried connecting watchvideo to RoboRealm outside of roombacommcli.
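
In rough pseudocode, the seeking loop looks something like this. Every method name below is made up purely to illustrate the flow - none of them are the actual roombacommcli or RR_API calls.

// Sketch of the visual-seek control flow described above (illustrative names only)
public class VisualSeekSketch {
    public static void main(String[] args) {
        driveToApproximateConeLocation();               // odometry gets us close
        while (!closeEnoughToCone()) {
            stopRobot();                                // stop so the video frame is sharp
            byte[] frame = captureFrameFromRobot();     // ask the daemon on the robot for a frame
            int[] coneXY = askImageProcessorForCone(frame);  // e.g. hand it to RoboRealm over a socket
            if (coneXY == null) {
                spinALittle();                          // cone not found - turn and look again
            } else {
                seekToward(coneXY[0], coneXY[1]);       // steer toward the reported X/Y
            }
        }
    }

    // Hypothetical placeholders, only here so the sketch is self-contained.
    static void driveToApproximateConeLocation() {}
    static boolean closeEnoughToCone() { return false; }
    static void stopRobot() {}
    static byte[] captureFrameFromRobot() { return new byte[0]; }
    static int[] askImageProcessorForCone(byte[] frame) { return null; }
    static void spinALittle() {}
    static void seekToward(int x, int y) {}
}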

Looking at the error message you're getting on the VEXpro, I'd say the uvcsrvr app isn't finding the webcam. Try:
% ls /dev/video0
(This is like DIR on Windows.) The app expects your webcam to be at /dev/video0. You can override that with command-line options.

I suggest you attack this incrementally. First, get the uvcsrvr daemon to connect to the webcam successfully. Critical points will be the device name (/dev/video0) and whether it uses YUYV or some other video format. Then get the watchvideo app working on the PC and connecting to uvcsrvr on the VEXpro. You'll see output on the VEXpro as it sends each frame to the PC. The video X & Y size has to match between the two apps. You should get about 3 frames/sec.

regards

Paul

Oops - didn’t realize there was a 2nd page of discussion on this. That really makes me feel good! jpearman and quazar have really dug in & understood this system. Glad you got the video going Allen. Let me know if you need further help. As I said, I haven’t tried connecting the watchvideo system up to Roborealm but it should be straightforward. If it doesn’t “just work”, I can fix it so it does.

Just FYI - the watchvideo system & the roombacommcli system it's part of is an unsatisfactory architecture. The latency over TCP/IP is too long to do good robot control. I'm moving to a subsumption-based architecture. An implementation of subsumption which runs on the VEXpro is available from the DPRG svn server here:
http://svn.dprg.org/repos/dprg/VEXPro/trunk/roombaVEXPro

Here's an excellent and very readable introduction to subsumption, on which my subsumption implementation is based, written by David Anderson - one of the DPRG's preeminent roboticists:
Subsumption for the SR04 and jBot Robots

It runs a Roomba, but doesn't have the video logic or RoboRealm connection you want. It should be readily modifiable for other robots - I'm going to generalize it to run Mo'bot (the lawnmower). But first I'm fixing to tie it into uvcsrvr and have uvcsrvr talk to RoboRealm directly, which is what you're really looking for. Having the robot control running on the robot will give it much better responsiveness to object collisions, better tracking of current location based on odometry, etc., while keeping video processing offloaded to the PC.
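
For anyone who hasn't seen subsumption before, the core idea is just a loop over prioritized behaviors where the highest-priority behavior that wants control wins. Here's a toy sketch in Java, purely to illustrate the pattern - it is not code from the DPRG repository, and all the names are made up.

// Toy illustration of the subsumption pattern: behaviors are checked in
// priority order and the highest-priority one that asks for control wins.
interface Behavior {
    boolean wantsControl();   // does this behavior want to act right now?
    void act();               // issue its motor commands
}

public class SubsumptionLoop {
    public static void main(String[] args) throws InterruptedException {
        Behavior[] behaviors = {
            new Behavior() {   // highest priority: react to collisions
                public boolean wantsControl() { return bumperPressed(); }
                public void act() { /* back up and turn away */ }
            },
            new Behavior() {   // lowest priority: default cruise
                public boolean wantsControl() { return true; }
                public void act() { /* drive forward */ }
            }
        };

        while (true) {
            for (Behavior b : behaviors) {
                if (b.wantsControl()) { b.act(); break; }  // subsume the lower layers
            }
            Thread.sleep(20);   // run the arbiter at a fixed rate
        }
    }

    static boolean bumperPressed() { return false; }  // placeholder sensor read
}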

Paul