Two small robots on stage

Trying to make a YouTube video with my two robots was a difficult experience. Last time I had to synchronize the movements of both robots and four different cameras (one for each robot, one watching from above, and one recording my desktop).

Needless to say, this cost me several hours of failed recording attempts. And just when things were getting better, the batteries of one of my robots died.

Wouldn’t it be great if the bots could record a video by themselves?

This interesting thought kept returning to me every time I planned a new video. Almost a month and a few modifications to my control software later, I finally got it working.

You can watch my first attempt in this video:


The robots move according to a predefined scenario, and my only job was to stay out of sight and point the camera (and drink some beers).

The setup

I connected the robots and my laptop in a cluster over a WiFi network. Every machine knows what functionality is deployed on the others and can access all of the available controls.

I modified the control station so that it sees all of the connected machines as one complex computer. In this concrete setup, the robots run only the clustering module and the motor controllers on their Raspberry Pis.
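As an illustration, here is a minimal sketch of how such a node could announce its modules to the rest of the cluster over UDP broadcast. The port, message format, and module names are my assumptions for the example, not the actual protocol:

```javascript
// Minimal sketch: a cluster node broadcasting its available modules.
const dgram = require('dgram');

const ANNOUNCE_PORT = 41234;           // hypothetical cluster port
const socket = dgram.createSocket('udp4');

const announcement = JSON.stringify({
  host: 'robot-1',                     // hypothetical node name
  modules: ['clustering', 'motor-controller']
});

socket.bind(() => {
  socket.setBroadcast(true);
  // Broadcast the module list every 5 seconds so other nodes can discover us.
  setInterval(() => {
    socket.send(announcement, ANNOUNCE_PORT, '255.255.255.255');
  }, 5000);
});
```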

The cluster node on the laptop has some additional improvements:

They can speak!

MaryTTS is a popular open-source text-to-speech engine written in Java. There are multiple voices available and most of them are free (if you use it, make sure to check the licenses of the voices you select).

In this setup I installed the TTS software on the laptop. The original idea was to install it on the robots themselves, so they could speak without the laptop, but that required more resources than my machines currently have. Instead, I decided it would be a good opportunity to play with a distributed system.
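For the curious, MaryTTS ships with an HTTP server (by default on port 59125), so any node on the cluster can request speech with a plain HTTP call. Here is a rough sketch; saving to a file and playing it back are placeholders for how my setup actually pipes audio to the speakers:

```javascript
// Rough sketch: ask a local MaryTTS server for a WAV rendering of a sentence.
const http = require('http');
const fs = require('fs');

function say(text) {
  const query = new URLSearchParams({
    INPUT_TEXT: text,
    INPUT_TYPE: 'TEXT',
    OUTPUT_TYPE: 'AUDIO',
    AUDIO: 'WAVE',
    LOCALE: 'en_US'
  });

  // The MaryTTS server is assumed to run on the laptop (localhost:59125).
  http.get(`http://localhost:59125/process?${query}`, (res) => {
    // Save the WAV; playback is left to the platform's audio player.
    res.pipe(fs.createWriteStream('speech.wav'));
  });
}

say('Hello, I am a small robot!');
```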

The Boss

The final improvement is a function written in JavaScript that serves as an orchestrator. It is located in the control station and is able to access all of the modules through the clustering protocol. Its main purpose is to run predefined scenarios and make my life easier when recording videos:


Example scenario:

second 1 – Access the first robot and start moving forward
second 4 – Stop the movement of the first robot
second 6 – Turn the camera of the second robot to the left
second 10 – Say something cool
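In code, such a scenario boils down to a list of timed actions. Here is a minimal sketch of how the orchestrator could schedule them; the action functions just log for the example, standing in for real calls over the clustering protocol:

```javascript
// Minimal sketch of the orchestrator: schedule timed actions from a scenario.
// The actions are placeholders for real calls over the clustering protocol.
const scenario = [
  { at: 1,  action: () => console.log('robot 1: move forward') },
  { at: 4,  action: () => console.log('robot 1: stop') },
  { at: 6,  action: () => console.log('robot 2: turn camera left') },
  { at: 10, action: () => console.log('laptop: say something cool') }
];

function playScenario(steps) {
  for (const step of steps) {
    // setTimeout fires each action at its offset (in seconds) from the start.
    setTimeout(step.action, step.at * 1000);
  }
}

playScenario(scenario);
```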


Points of improvement

It may all sound great, but I still needed a few recording attempts before I got a satisfying video. The reason is that my robots don’t have any feedback sensors (yet).

How is this a problem? Well, the control signals are simple commands like “move forward with speed X”. However, without feedback, the robot has no idea what its actual speed is. The wheels might have hit an obstacle, the batteries might have gone dead, and so on.

Even under perfect conditions, there are small variations in the movements. A simple example is executing a rotate command: the robot might turn 91 degrees to the left instead of 90. This might seem insignificant at first, but keep in mind that small errors add up. After ten such turns the heading is already off by 10 degrees, and every subsequent move drifts further, so the robot might end up in a completely different position than expected.

The solution to this problem is adding sensor feedback, so the robot knows its actual movements and can calculate its exact location, which enables it to regulate its movement.
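As a rough illustration, here is the shape a speed feedback loop could take once encoders are in place. This is a simple proportional correction under assumed hardware stubs, not my actual controller:

```javascript
// Rough sketch of a proportional speed controller using encoder feedback.
// Hypothetical hardware stubs: replace with real encoder/motor drivers.
function readEncoderSpeed() { return 0.25; }                       // pretend measurement
function setMotorPower(p) { console.log('motor power:', p.toFixed(2)); }

const KP = 0.5;           // proportional gain (tuned by experiment)
let power = 0;            // current motor power, clamped to 0..1

function regulateSpeed(targetSpeed) {
  const actualSpeed = readEncoderSpeed();     // measured wheel speed
  const error = targetSpeed - actualSpeed;    // how far off we are
  power = Math.min(1, Math.max(0, power + KP * error));
  setMotorPower(power);
}

// Run the control loop ten times per second.
setInterval(() => regulateSpeed(0.3), 100);
```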
