I thought it would be an interesting computer vision exercise to see if I could score a Starstruck match based on a picture of the field (since only the general position of the objects matters). However, this is extremely difficult with the current videos/pictures out there (I don’t have a field yet). Could anyone with a field take a picture of it from, say, on top of the hanging post? Even better if you could randomly move the stars and cubes around.
Unfortunately the angle isn’t sharp enough. Straight from the top of the post may not be either. An almost completely overhead view would be optimal, like some of the shots from the 2015 Worlds finals.
@Cody has so kindly published his field renders for us. The renders look great, and they may be realistic enough for computer vision. I think there might be an overhead view posted somewhere.
Ehhh, it’s more the brain, but it shouldn’t be that hard. If you do it by color and have one program watch each side, it should be simple.
Count how many orange things are on each side. Count how many yellow things are on each side. Add up the score. Exclude anything whose size is too small or too big if you run into problems.
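Here’s roughly what I mean as a minimal OpenCV sketch. To be clear, the HSV ranges, area limits, point values, and the overhead.png file are all placeholders I made up, so they’d need tuning against real field images (real scoring also cares about zones and hanging, not just counts):

```python
import cv2
import numpy as np

# Placeholder HSV ranges -- tune these on actual images of the field.
STAR_RANGE = ((22, 100, 100), (35, 255, 255))   # "yellow" stars
CUBE_RANGE = ((10, 100, 100), (22, 255, 255))   # "orange" cubes

MIN_AREA, MAX_AREA = 500, 50000  # reject blobs too small/large to be game objects

def count_objects(half, hsv_range):
    """Count color blobs of plausible game-object size in one half of the field."""
    hsv = cv2.cvtColor(half, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_range[0]), np.array(hsv_range[1]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return sum(1 for c in contours if MIN_AREA < cv2.contourArea(c) < MAX_AREA)

def score_half(half):
    stars = count_objects(half, STAR_RANGE)
    cubes = count_objects(half, CUBE_RANGE)
    return stars * 1 + cubes * 2  # placeholder point values, not the actual rules

frame = cv2.imread("overhead.png")        # hypothetical overhead shot
mid = frame.shape[1] // 2
left, right = frame[:, :mid], frame[:, mid:]   # crude split into the two sides
print("one side:", score_half(left), "other side:", score_half(right))
```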
Yeah, I was thinking about how yellow and orange look alike, and how all of the match videos I’ve seen have all the stars clumped together at the bottom. Any yellow or orange that a robot adds to the field can be handled by what I said: filter out anything that isn’t a similar size to a game object. That’s still oversimplifying it, though.
For telling apart individual stars, you could use multiple angles. A side view might only show one star on the fence, but the top and front views clearly show more than one, which tells the program to discard the side view’s information and trust the top and front views (see “Kalman filter”).
As for telling yellow and orange apart, you could probably use the size of the object: a big thing is a cube and a small thing is a star (you probably thought of that already). You could also use the multiple views here, since if it’s the lighting that’s distorting the colors, the other views should be fine.
Just a few ideas. You’ve probably already thought of some of these, but I hope they help anyway.
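For a single frame, the crude version of that idea is just to trust whichever view reports the most objects, since occlusion can only hide things (assuming false positives are already filtered out). A Kalman filter would be the principled way to smooth these estimates over time; the toy sketch below is just the per-frame part:

```python
def fuse_counts(per_view_counts):
    """Take the largest count across views, e.g. {"top": 3, "front": 3, "side": 1} -> 3."""
    return max(per_view_counts.values())

print(fuse_counts({"top": 3, "front": 3, "side": 1}))  # side view is occluded; answer is 3
```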
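The size fallback could be as simple as an area threshold on each detected contour. The cutoff below is a made-up number that would have to be calibrated per camera and view:

```python
import cv2

AREA_THRESHOLD = 8000  # hypothetical pixel-area cutoff; calibrate per camera/view

def classify_blob(contour):
    """Big blob -> cube, small blob -> star, regardless of how the color reads."""
    return "cube" if cv2.contourArea(contour) > AREA_THRESHOLD else "star"
```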
Now I’m imagining some monstrous attachment to the field powered by Raspberry Pis with many camera modules, all working together just to score the match live. It is a beautiful sight.
With what I’m thinking of, there would be two cameras for the side views (one on each side), two cameras for the front and back views, and one camera for the top view. That’s 5 cameras total. Not to mention the top-view camera would need a LONG cable running all the way down to hook into the system. That cable management is going to be a NIGHTMARE.
Edit: Another problem I foresee is that the red of the tiles and hanging bar is very close to the orange of the cubes… The hanging bar can be filtered out by shape, but the red tiles are another story… (one workaround is sketched below)
Edit #2: You could totally look for the black words on the cubes. Since only two opposite sides don’t have the words, it should be guaranteed that at least one of the cameras catches the letters.
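Putting both edits together, something like this might work: keep a narrow orange hue band so the red tiles (hue near 0/180 in OpenCV) don’t match, then confirm a candidate cube by checking for dark lettering inside its bounding box. Every threshold here is a guess and the file name is hypothetical:

```python
import cv2
import numpy as np

def cube_candidates(frame):
    """Find orange-ish blobs while staying clear of the red hue band (near 0/180)."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array((10, 120, 120)), np.array((22, 255, 255)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > 1000]  # guessed minimum area

def has_lettering(frame, contour, dark_fraction=0.02):
    """Check whether a candidate blob contains enough dark pixels to be the lettering."""
    x, y, w, h = cv2.boundingRect(contour)
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    dark = np.count_nonzero(roi < 60)  # "black" pixel threshold is also a guess
    return dark / float(roi.size) > dark_fraction

frame = cv2.imread("front_view.png")  # hypothetical front-view frame
cubes = [c for c in cube_candidates(frame) if has_lettering(frame, c)]
print(len(cubes), "cubes with visible lettering")
```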
When you have that many devices and not much space, Bluetooth is your answer: camera modules on the RPis all streaming pictures to a central computer that processes it all. Sounds delicious!
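The receiving end could be as bare-bones as this. It’s a plain TCP socket sketch with length-prefixed JPEG frames, just to show the shape of the architecture; a Bluetooth PAN would present the same socket interface, and the port number and framing are purely assumptions:

```python
import socket
import struct

HOST, PORT = "0.0.0.0", 5000  # hypothetical address/port on the central computer

def recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("camera disconnected")
        buf += chunk
    return buf

def serve():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()  # one camera here; a real system would thread per Pi
        with conn:
            while True:
                (length,) = struct.unpack("!I", recv_exact(conn, 4))
                jpeg_bytes = recv_exact(conn, length)
                # ...hand jpeg_bytes to the scoring pipeline...

if __name__ == "__main__":
    serve()
```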