Has anyone ever tried using two vision sensors for stereoscopic imaging? Most trays block the vision sensor's view from the center of the robot, but mounting a sensor on either side might compensate for that blind spot. It would be interesting to see how simply you could program it.
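The core math behind two-sensor stereo is just triangulation: the same object appears at slightly different x coordinates in each sensor's image, and that disparity gives you distance. A minimal sketch, with hypothetical numbers (the function name and all values are made up for illustration, not from any VEX API):

```python
# Depth from stereo disparity: a minimal sketch, assuming two vision
# sensors mounted a known baseline apart, both reporting the target's
# x pixel coordinate. All names and numbers here are hypothetical.

def depth_from_disparity(x_left, x_right, focal_px, baseline_mm):
    """Triangulate distance to a target seen by both sensors.

    x_left / x_right: target x coordinate (px) in each sensor's image.
    focal_px: focal length expressed in pixels (from calibration).
    baseline_mm: distance between the two sensor lenses.
    """
    disparity = x_left - x_right      # horizontal shift between the views
    if disparity <= 0:
        return None                   # target too far away, or a mismatch
    return focal_px * baseline_mm / disparity

# Example: 50 px disparity, 200 px focal length, 250 mm baseline
print(depth_from_disparity(180, 130, 200.0, 250.0))  # 1000.0 (mm)
```

The catch in practice is making sure both sensors are locked onto the *same* cube, which is where matching by signature and y coordinate would come in.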
Furthermore, I want to issue a challenge to any teams that end up developing programming-skills code using the vision sensor: no position-based drive commands. Use vision-based PID loops to pick up cubes, sense how many cubes are in the tray, and then use the vision and line sensors to find zones you have not stacked in yet.
Boss Mode: Towers.