This is a bot that does not have wheels but is smart in its own way. TargetBot uses image recognition to find a target and then points a laser at that target.
This project is a true showcase of a variety of great subsystems, so watch for the links below!
A Raspberry Pi 3 is loaded up with a variety of components and software to form a nice system that finds and targets a pattern with a laser and shows info about the target on a display. The project is based on the ROS platform on top of the Ubiquity Robotics Ubuntu 16.04 image.
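The detection pipeline itself isn't spelled out here, but the angles shown on the display can be understood as a simple conversion from where the target lands in the camera image. The sketch below is only an illustration of that math; the function name, image size, and focal-length values are assumptions chosen to roughly reproduce the example angles mentioned further down.

```python
import math

# Hypothetical camera parameters (illustrative assumptions, not from the project):
# a 640x480 image with principal point (cx, cy) and focal lengths fx, fy in pixels.
def pixel_to_angles(px, py, cx=320.0, cy=240.0, fx=500.0, fy=500.0):
    """Convert a target's pixel position to pan/tilt angles in degrees.

    Negative pan means the target is left of image center; positive tilt
    means it is above center (image y grows downward).
    """
    pan_deg = math.degrees(math.atan2(px - cx, fx))
    tilt_deg = math.degrees(math.atan2(cy - py, fy))
    return pan_deg, tilt_deg

# Example: a tag found a bit left of center and above center
pan, tilt = pixel_to_angles(245.0, 171.0)
print("pan %.1f deg, tilt %.1f deg" % (pan, tilt))   # roughly -8.5 and 7.8
```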
The Mark-Toys USB Multi-Port dongle, configured for servo control, has a simple serial port interface, so we control an X and a Z servo from this little board as an easy add-on subsystem for the Pi or, for that matter, ANY computer.
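The dongle's actual command format isn't reproduced here, so the command strings, port name, and baud rate below are purely hypothetical placeholders; only the general idea (open the serial port, write short commands to position each servo) follows from the text. See the Mark-Toys documentation for the real protocol.

```python
import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumption: where the dongle enumerates on the Pi

class ServoDongle(object):
    """Minimal sketch of talking to a serial servo controller."""

    def __init__(self, port=PORT, baud=38400):
        # Baud rate is an assumption; use whatever the dongle documents.
        self.ser = serial.Serial(port, baud, timeout=0.5)

    def set_servo(self, channel, pulse_us):
        # Hypothetical ASCII command: channel number and pulse width in microseconds.
        cmd = "SV %d %d\r" % (channel, pulse_us)
        self.ser.write(cmd.encode("ascii"))

if __name__ == "__main__":
    dongle = ServoDongle()
    dongle.set_servo(0, 1500)   # X (pan) servo to center
    dongle.set_servo(1, 1650)   # Z (tilt) servo slightly up
```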
The Raspberry Pi, housed in a nice thin transparent 3D-printed case, runs ROS and thus hosts other add-ons with minimal effort.
I have a ROS node that drives the very popular 1.3 inch diagonal OLED display from the Raspberry Pi I2C lines. This is not yet published on my GitHub (so much to do, so little time), but when it does appear it will likely be in my GitHub area for ROS stuff here
Notice the display node shows the host name on the top line, the IP address of the bot on line 3, and then app-specific info below; in this case it shows the tag number found (100) and, below that, the angle to the left as -8.5 degrees and the angle up as 7.8 degrees. Fun stuff.
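Since the display node itself isn't published yet, here is a minimal sketch of how that screen layout could be drawn. It assumes an SH1106-class 1.3 inch OLED at the common I2C address 0x3C and uses the luma.oled Python library; the real node may use a different driver, and the tag and angle values below are just the example numbers from the text.

```python
import socket
import subprocess

from luma.core.interface.serial import i2c
from luma.core.render import canvas
from luma.oled.device import sh1106   # many 1.3" OLEDs use SH1106; yours may differ

def get_ip():
    # 'hostname -I' prints the IP address(es) on most Linux systems
    out = subprocess.check_output(["hostname", "-I"]).decode().strip()
    return out.split()[0] if out else "no ip"

# Assumption: display wired to I2C bus 1 at address 0x3C
device = sh1106(i2c(port=1, address=0x3C))

tag_id, angle_left, angle_up = 100, -8.5, 7.8   # example values from the text

with canvas(device) as draw:
    draw.text((0, 0),  socket.gethostname(), fill="white")      # host name on top
    draw.text((0, 16), get_ip(), fill="white")                   # bot IP address
    draw.text((0, 32), "Tag: %d" % tag_id, fill="white")         # tag number found
    draw.text((0, 44), "L %.1f  U %.1f deg" % (angle_left, angle_up), fill="white")
```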
On many of my projects like this one I use my sys_monitor.py code, which I start in Linux from the rc.local file. This allows 'auto start' of the ROS nodes and allows either a shutdown or just a restart from a simple switch on the Raspberry Pi GPIO. It also drives a single LED that gives basic feedback on what the sys_monitor code is doing, such as getting ready to shut down. Please refer to the Mark-Toys sys_monitor.py code and directions here
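For the real code, use the Mark-Toys sys_monitor.py linked above; the sketch below only illustrates the general idea of a GPIO switch plus status LED. The pin numbers, press durations, and the choice of reboot for a short press versus shutdown for a long press are all assumptions for illustration.

```python
import os
import time
import RPi.GPIO as GPIO

SWITCH_PIN = 17   # assumption: momentary switch to ground
LED_PIN = 27      # assumption: status LED

GPIO.setmode(GPIO.BCM)
GPIO.setup(SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT)

def pressed_seconds():
    """Return how long the switch was held down, lighting the LED meanwhile."""
    start = time.time()
    while GPIO.input(SWITCH_PIN) == GPIO.LOW:
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.05)
    GPIO.output(LED_PIN, GPIO.LOW)
    return time.time() - start

try:
    while True:
        if GPIO.input(SWITCH_PIN) == GPIO.LOW:
            held = pressed_seconds()
            if held > 3.0:
                os.system("sudo shutdown -h now")   # long press: clean shutdown
            elif held > 0.2:
                os.system("sudo reboot")            # short press: restart the Pi
        time.sleep(0.1)
finally:
    GPIO.cleanup()
```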