Awakening of a robot

About two months ago I held in my hands something that could barely be called a robot. At that time it was just a shell showing no signs of “life”: just something constructed from components lying on my shelf.

I started working on the hardware and software separately, and connected the two gradually. At the beginning it was just to test some isolated parts of the machine: a servo, the camera, Arduino-to-Raspberry communication. At some point I realized that I was holding something that could actually be called a robot. Well, just an awakening one.

First steps

So what do you do when your friendly neighbourhood robot awakens? Connect to it and start playing, of course. In this case the connection goes over the Raspberry Pi 3's on-board Wi-Fi module. By opening any web browser and typing the robot's IP address, we connect to the web server running inside.
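The post does not show the server side, but as a rough sketch of the idea, here is a minimal Python stdlib HTTP server that answers a (hypothetical) `/modules` request with the list of modules the page displays; the endpoint path and module registry are my assumptions, not the robot's actual API:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical registry; names mirror the modules described in this post.
MODULES = ["Camera0", "ChassisControl", "Servo3", "Servo4"]

class RobotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/modules":
            # Serve the module list as JSON for the browser UI to render.
            body = json.dumps(MODULES).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, fmt, *args):
        pass  # silence per-request console logging

if __name__ == "__main__":
    # Listen on all interfaces so a browser on the LAN can reach the robot.
    HTTPServer(("0.0.0.0", 8080), RobotHandler).serve_forever()
```

Pointing a browser at `http://<robot-ip>:8080/modules` would then return the JSON list that the web page turns into its module menu.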

Hopefully a few seconds will be enough to recover from the initial shock caused by the horrible sight of the user interface (I will make it better at some point, I promise). After that we get the first data from the machine: a list of all available modules on the robot, displayed in the right section of the web page.

The interesting ones are:

  • Camera0 module – This is the module that handles the Raspberry Pi camera. With the current code the system supports about 15 frames per second at 320 x 200 resolution, or 5 frames per second at 640 x 400.
  • ChassisControl – This module controls the movement of the chassis. Basically, it translates commands such as forward, backwards, turn left or turn right into control signals for the continuous-rotation servos that move the wheels.
  • Servo3/Servo4 – These are the modules that control the servos of the Pan/Tilt mechanism.
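The ChassisControl translation step could look something like the sketch below. It assumes typical continuous-rotation servo behaviour (stopped near a 1500 µs pulse, full speed at roughly 1000 µs / 2000 µs) and that the two wheel servos face opposite directions, so "forward" means mirrored pulses; the exact values and command names on the real robot may differ:

```python
# Centre pulse and maximum deviation for a continuous-rotation servo,
# in microseconds (typical values; calibration varies per servo).
STOP, FULL = 1500, 500

COMMANDS = {
    #            (left wheel, right wheel) as fractions of full speed;
    #            right wheel is mirrored because it faces the other way.
    "forward":   (+1.0, -1.0),
    "backwards": (-1.0, +1.0),
    "left":      (-1.0, -1.0),  # spin in place
    "right":     (+1.0, +1.0),
}

def chassis_pulses(command, speed=1.0):
    """Translate a movement command into (left_us, right_us) pulse widths."""
    left, right = COMMANDS[command]
    return (round(STOP + left * speed * FULL),
            round(STOP + right * speed * FULL))

# chassis_pulses("forward") -> (2000, 1000)
```

Keeping the translation in one table like this makes it easy to add half-speed commands or trim a servo whose "stop" point drifts away from 1500 µs.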

One of the best features of the control station is that we can customize its layout. All we need to do is drag the modules from the menu section and drop them into one of the cells of the table. Doing so generates a control interface for the selected module in the target location. The user can then send commands to all controllable values of that module; two examples are "forward" for the chassis controller and "rotate to degrees" for the camera servos. There is also support for read-only data (values read from the robot's sensors), such as the stream from the camera module.
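One way to picture the drop-onto-grid behaviour: each module declares its writable controls and read-only streams, and dropping it into a cell records the assignment and returns the widgets to render there. The module specs and function names below are illustrative guesses, not the control station's actual code:

```python
# Hypothetical module descriptions: "controls" accept commands from the
# user, "streams" are read-only data coming back from the robot.
MODULE_SPECS = {
    "ChassisControl": {"controls": ["forward", "backwards", "left", "right"]},
    "Servo3":         {"controls": ["rotate_to_degrees"]},
    "Servo4":         {"controls": ["rotate_to_degrees"]},
    "Camera0":        {"streams":  ["video"]},
}

layout = {}  # (row, col) table cell -> module name

def drop_module(cell, module):
    """Assign a module to a grid cell and return the widgets to render."""
    spec = MODULE_SPECS[module]
    layout[cell] = module
    widgets = [("button", c) for c in spec.get("controls", [])]
    widgets += [("viewer", s) for s in spec.get("streams", [])]
    return widgets
```

Dropping `Camera0` into a cell would thus yield a single read-only viewer widget, while `ChassisControl` would yield one button per movement command.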

Different control options

Having the eyes of an ex-gamer, I thought it would be a cool feature to add a first-person control mode: the keyboard controls the moving direction of the chassis and the mouse controls the turret, just like in a game. Right now it's a bit of a bumpy ride, since the system has about 100-150 milliseconds of delay before a command reaches its destination. I really hope to improve this in the next versions.
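The mapping itself is simple; a sketch of the idea, with the key bindings, sensitivity value and function names as my own assumptions rather than the robot's actual code:

```python
# First-person control sketch: WASD drives the chassis, mouse deltas
# pan/tilt the camera turret (hobby servos usually accept 0-180 degrees).
KEYMAP = {"w": "forward", "s": "backwards", "a": "left", "d": "right"}

def key_to_command(key):
    """Map a pressed key to a chassis command, or None if unbound."""
    return KEYMAP.get(key.lower())

def mouse_to_angles(dx, dy, pan, tilt, sensitivity=0.1):
    """Turn a mouse delta (pixels) into new pan/tilt angles, clamped 0-180."""
    clamp = lambda a: max(0.0, min(180.0, a))
    return clamp(pan + dx * sensitivity), clamp(tilt + dy * sensitivity)
```

With 100-150 ms of round-trip delay, each of these commands arrives noticeably late, which is exactly what makes the current ride bumpy.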
