One of the most interesting projects that I have seen while in Dr. Kapila’s research group has been the development of a third-person-view control system for a swarm of robots. The system uses the camera of a mobile device (typically a smartphone or tablet) to identify and control robots within the camera’s field of view. This is accomplished with markers, which allow the visual field to be interpreted and the robots identified.
Now, for the software to “see” the robots, it must know what to look for. To that end, each robot carries markers: in our case, neon orange and neon green dots. The software is then calibrated so that it detects only those markers. The video below demonstrates the calibration process.
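The post doesn’t name the vision library the project uses, but as a rough sketch of what marker detection like this can look like, here is a hypothetical color-threshold-and-centroid routine in Python/NumPy. The neon RGB values and the tolerance are illustrative assumptions, not the project’s calibrated values:

```python
import numpy as np

# Illustrative RGB values for the neon markers; the real values
# would come from the calibration step shown in the video.
NEON_ORANGE = np.array([255, 96, 0])
NEON_GREEN = np.array([57, 255, 20])

def marker_centroid(image, target, tol=60):
    """Return the (row, col) centroid of pixels within `tol` of `target`,
    or None if no pixel matches."""
    mask = np.all(np.abs(image.astype(int) - target) <= tol, axis=-1)
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# A tiny synthetic frame: one orange dot on a black background.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[3:5, 6:8] = NEON_ORANGE
print(marker_centroid(frame, NEON_ORANGE))  # -> (3.5, 6.5)
```

Calibration, in these terms, amounts to tuning the target colors and tolerance until only the dots survive the mask.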
The interesting thing is how the robots are controlled, either individually or en masse: through an interface on the mobile device’s touchscreen.
Another remarkable thing about this project is that there are no sensors on the robots themselves. Each robot consists of two servo motors and an Arduino with a Wi-Fi shield. So how does a robot know where it is or whether other robots are nearby? It’s all about the reference robot (which I like to call the “Alpha”).
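With no sensors on board, the mobile device does all the thinking and only wheel commands need to go out over Wi-Fi. The actual command protocol isn’t described here, but as a hypothetical sketch, a standard differential-drive mixer that turns a forward/turn command into left and right servo speeds might look like:

```python
def mix(forward, turn):
    """Map forward/turn commands in [-1, 1] to (left, right) wheel
    speeds in [-1, 1], a common differential-drive mixing scheme."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    left = clamp(forward + turn)
    right = clamp(forward - turn)
    return left, right

print(mix(1.0, 0.0))  # full speed ahead -> (1.0, 1.0)
print(mix(0.0, 1.0))  # spin in place  -> (1.0, -1.0)
```

On the robot side, the Arduino would only need to map each speed to a servo pulse width; no onboard sensing is required.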
Now, the alpha robot is no different from the others; its only uniqueness lies in the fact that all of the other robots in the swarm are associated with it. The alpha robot is the reference point by which the other robots move. This is fascinating because commanding the robots to move does not require that they know their surroundings…only their positions relative to the alpha robot.
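As a sketch of that relative-positioning idea (the pose representation here is my assumption, not necessarily how the project stores it): given the alpha’s position and heading in the camera frame, any other robot’s position can be expressed in the alpha’s frame with a translation and a rotation:

```python
import math

def relative_to_alpha(alpha_pose, robot_pos):
    """Express robot_pos (camera-frame x, y) in the alpha robot's frame.

    alpha_pose is assumed to be (x, y, heading_radians).
    """
    ax, ay, ah = alpha_pose
    dx, dy = robot_pos[0] - ax, robot_pos[1] - ay
    # Rotate the camera-frame offset into the alpha's heading frame.
    cos_h, sin_h = math.cos(ah), math.sin(ah)
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)

# Alpha at (2, 1) facing +x; a robot at (5, 1) is 3 units straight ahead.
print(relative_to_alpha((2.0, 1.0, 0.0), (5.0, 1.0)))  # -> (3.0, 0.0)
```

Everything the controller needs comes from the camera: it tracks the markers, computes each robot’s offset from the alpha, and issues motion commands accordingly.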