I am a teacher, and my students have spent quite some time trying to debug their use of the vision sensor. We have used sample code, and we can get it to detect color and print out verification, but we can't seem to get it to sense anything more than three inches away. I can't find any documentation that says anything beyond "recalibrate the sensor." Does anyone have any help for the sensors beyond the basic documentation? Their goal is to make a smart robot without hard-coding everything step by step.
I'm surprised the vision sensor can sense anything that close. Just to verify: are you using the device linked here?
Yes, and it follows the color as long as it is less than five inches away, as in this video: VEX IQ Vision Sensor Object Follow Tutorial using the V5 Vision Sensor - YouTube. But it won't track anything further out, and if we shut it off and come back the next day it doesn't work at all. The lighting is a controlled environment, so it should still work.
Perhaps post some screenshots showing what the Vision Utility is seeing and the code you are using.
The video you linked shows an IQ robot following a 4 inch ball (or at least, that's what it looks like). If you are using a V5 robot with the vision sensor mounted higher than it would be on an IQ robot, then I would expect all the numbers for ball location and width to need adjusting accordingly. The general idea of the video was to have the robot turn left or right if the center of the object was not near the center of the image (and the numbers 60 and 100 really don't make sense, as the center of the image is at 158), and then drive forwards until the ball width reached a certain value.
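For example, a deadband centered on 158 would look something like this. This is a minimal sketch only: the plus/minus 40 band and the width target of 70 are illustrative values, and Vision1, SIG_1, and drive are assumed to be configured elsewhere.

// sketch of the follow logic with the deadband centered on 158
Vision1.takeSnapshot( SIG_1 );
if( Vision1.objectCount > 0 ) {
    if( Vision1.largestObject.centerX < 158 - 40 )
        drive.turn( left );        // object is left of center
    else if( Vision1.largestObject.centerX > 158 + 40 )
        drive.turn( right );       // object is right of center
    else if( Vision1.largestObject.width < 70 )
        drive.drive( forward );    // centered but still far away
    else
        drive.stop();              // centered and close enough
}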
The vision sensor signatures (the colors) are stored inside the VEXcode program, so running the same code should always produce the same results, assuming all environmental conditions remain the same.
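For reference, in a C++ project those signatures end up in code generated by the Vision Utility, something like the lines below. The seven numbers here are placeholders, not a real calibration.

// hypothetical signature as the Vision Utility would generate it:
// id 1, six tuned color bounds/means (placeholder values here),
// a range of 2.5, and type 0 (a normal color signature)
vex::vision::signature SIG_1( 1, 1000, 1500, 1250, -3000, -2500, -2750, 2.5, 0 );
// the sensor is declared with its port, a brightness value, and the
// signatures it should detect
vex::vision Vision1( vex::PORT1, 50, SIG_1 );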
This is what we have. One time it did follow the ball a bit, but when we came back we couldn't get it to work again. We even tried recalibrating it in case the lighting was different.
We are using VEXcode Pro and planned to convert this to text once we got an understanding of how to make it work. Since nothing we did seemed to work, we reverted to blocks just to reduce the probability of error.
Placement of the vision sensor and its field of view are critical; how you mount it will determine the numbers needed to follow an object. The robot will only find an object if it is within the field of view. For a V5 clawbot, I might place the vision sensor here (I'm running the clawbot backwards, as the claw gets in the way).
So it sits fairly high, looking towards the floor at an angle. If the code has a different goal, then the placement may need to change.
In that position, this is how the vision sensor sees the object.
So with the ball centered in the image, it's about 18 inches away.
The field of view where objects can be detected ranges from about 8 inches (24 inch ruler added for reference) to about 36 inches.
If the ball goes outside of that range then the vision sensor will not detect it.
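As an aside, you can roughly estimate the ball's distance from the reported width using a simple pinhole-camera approximation. The focal constant below is back-calculated from the numbers in this thread (a 4 inch ball reading about 74 pixels wide at roughly the 8 inch minimum), so treat this as a rough sketch, not a spec.

// rough distance estimate from reported object width (pinhole model)
// focal_px is back-calculated from this thread's example:
// 74 px * 8 in / 4 in = ~148
double estimateDistanceInches( int pixelWidth ) {
    const double ballWidthInches = 4.0;
    const double focal_px        = 148.0;
    if( pixelWidth <= 0 )
        return -1.0;    // no valid object detected
    return ballWidthInches * focal_px / pixelWidth;
}

With those numbers, a width reading of 37 pixels would put the ball around 16 inches away.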
As I said in the previous post, the center of the field of view is at 158 (the vision sensor image is 316 pixels wide by 212 pixels high). With the ball at the closest distance it is about 74 pixels wide (see image above). So with those numbers in mind, the code I threw together to control the robot was along these lines (only part of the code is posted):
// if we found an object
if( Vision1.objectCount > 0 ) {
    bFound = true;
    // is object too far to the left
    if( Vision1.largestObject.centerX < 100 ) {
        drive.turn( left );
    }
    // is object too far to the right
    else if( Vision1.largestObject.centerX > 200 ) {
        drive.turn( right );
    }
    else {
        // now object is more or less centered
        // drive forwards until it is large
        if( Vision1.largestObject.width < 70 ) {
            drive.drive( forward );
        }
        else {
            // stop when it is close
            drive.stop();
        }
    }
}
else {
    // no object case, continued in the full listing below
}
I'll post all the code I was using below; it reuses some of an old demo I put together for VCS here.
I'm not including the vision header needed to use this. It also uses an older version of VEXcode, so it will need small changes to run in the latest (vex::brain will need commenting out, etc.), and it did not use graphical configuration, which is why all the devices are declared in main.cpp.
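The main change for a current template should be something like this (an assumption on my part; check your own template files):

// current VEXcode C++ templates already declare a Brain instance in the
// template files, so the global from this old demo would need to be
// removed or commented out to avoid a duplicate definition:
// vex::brain Brain;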
/*----------------------------------------------------------------------------*/
/*                                                                            */
/*    Module:       main.cpp                                                  */
/*    Author:       james                                                     */
/*    Created:      Wed Oct 28 2020                                           */
/*    Description:  V5 project                                                */
/*                                                                            */
/*----------------------------------------------------------------------------*/
#include "vex.h"
#include "vision.h"

using namespace vex;

// A global instance of vex::brain used for printing to the V5 brain screen
vex::brain Brain;

vex::motor      motorLeft( PORT2, ratio36_1 );
vex::motor      motorRight( PORT9, ratio36_1, true );
vex::drivetrain drive( motorLeft, motorRight );

int screen_origin_x = 150;
int screen_origin_y = 20;
int screen_width    = 316;
int screen_height   = 212;

// function to draw a single object
void
drawObject( vision::object &obj, vex::color c ) {
    int labelOffset = 0;

    Brain.Screen.setPenColor( vex::color::yellow );
    Brain.Screen.drawRectangle( screen_origin_x + obj.originX, screen_origin_y + obj.originY, obj.width, obj.height, c );

    Brain.Screen.setFont( vex::fontType::mono12 );
    if( obj.originX > 280 )
        labelOffset = -40;

    if( obj.originY > 20 )
        Brain.Screen.printAt( screen_origin_x + obj.originX + labelOffset, screen_origin_y + obj.originY-3, "Sig %o", obj.id );
    else
        Brain.Screen.printAt( screen_origin_x + obj.originX + labelOffset, screen_origin_y + obj.originY+10, "Sig %o", obj.id );
}

// function to draw all objects found
void
drawObjects( vision &v, vex::color c, bool clearScreen ) {
    if( clearScreen ) {
        Brain.Screen.setPenColor( vex::color::black );
        Brain.Screen.drawRectangle( screen_origin_x, screen_origin_y, screen_width, screen_height, vex::color::black );
    }

    // draw all objects
    // for(int i=0;i<v.objectCount;i++)
    //   drawObject( v.objects[i], c );

    // draw largest object
    if( v.objectCount > 0 ) {
        drawObject( v.objects[0], c );

        Brain.Screen.setFont( vex::fontType::mono20 );
        Brain.Screen.printAt( 10, 20, "X: %3d", v.objects[0].centerX );
        Brain.Screen.printAt( 10, 40, "Y: %3d", v.objects[0].centerY );
        Brain.Screen.printAt( 10, 60, "W: %3d", v.objects[0].width );
        Brain.Screen.printAt( 10, 80, "H: %3d", v.objects[0].height );
    }
    else {
        Brain.Screen.printAt( 10, 20, "X: ----" );
        Brain.Screen.printAt( 10, 40, "Y: ----" );
        Brain.Screen.printAt( 10, 60, "W: ----" );
        Brain.Screen.printAt( 10, 80, "H: ----" );
    }
}

int main() {
    bool bFound = false;

    // Draw an area representing the vision sensor field of view
    Brain.Screen.clearScreen( vex::color::black );
    Brain.Screen.setPenColor( vex::color::green );
    Brain.Screen.drawRectangle( screen_origin_x-1, screen_origin_y-1, screen_width+2, screen_height+2 );

    drive.setTurnVelocity( 20, rpm );
    drive.setDriveVelocity( 20, rpm );

    while(1) {
        // get objects from vision sensor
        Vision1.takeSnapshot( SIG_1 );

        // show on display
        drawObjects( Vision1, vex::red, true );

        // if we found an object
        if( Vision1.objectCount > 0 ) {
            bFound = true;
            // is object too far to the left
            if( Vision1.largestObject.centerX < 100 ) {
                drive.turn( left );
            }
            // is object too far to the right
            else if( Vision1.largestObject.centerX > 200 ) {
                drive.turn( right );
            }
            else {
                // now object is more or less centered
                // drive forwards until it is large
                if( Vision1.largestObject.width < 70 ) {
                    drive.drive( forward );
                }
                else {
                    // stop when it is close
                    drive.stop();
                }
            }
        }
        else {
            // no object
            // did we have one before ?
            if( bFound ) {
                // stop the drive
                drive.stop();
                bFound = false;
                // start a timer
                Brain.Timer.clear();
            }
            // scan for object after 1 second
            if( Brain.Timer.time() > 1000 ) {
                drive.turn( right, 10, rpm );
            }
        }
        // Allow other tasks to run
        this_thread::sleep_for(100);
    }
}
Anyway, hope that helps. The vision sensor does need some time invested in experimentation.
I bet that is our problem. We will move it up so it can look out; we had it mounted under the claw. I can't thank you enough. We will let you know tomorrow.