Laboratory One Research Blog

Lets Talk - An open-source robot

April 15, 2018

This open-source robot is an example of how one could build a robot which many users can remotely control over the internet. It seeks to lower the barriers to entry in robotics research. Although the cost of researching robotics is at an all-time low, it can still be difficult to get started. Let's fix this.


Consumer robotics has been in stasis for a number of years. We’ve been stuck with “battlebot”-style robots, and only recently have we graduated to consumer drones. These robots are controlled remotely and don’t do much else. Sure, we can have the robots destroy each other, race, or take photos and videos, but that’s about it. They’ve fallen short of our expectations.

Why? I suspect that our focus on the hardware layer has prevented growth in software applications. In the end, isn’t software where value is created? So the logical conclusion may be to create an accessible robotics framework so that we can focus on the software layer.

Learning Robotics

The multidisciplinary nature of robotics proves challenging for aspiring roboticists. There are several layers to implement when building a remote-controlled robot. Let’s examine a minimal robot stack:

The Robot

  1. Actuators and Sensors: Allows movement and sensing of the environment.
  2. Microcontroller: An interface that is able to connect with actuators and sensors.
  3. Embedded Computer: Interfaces with the microcontroller to allow higher-level control of the robot.

Control and Command Server

  1. Server: The computational unit which allows Control and Command software to run.
  2. Control and Command Software: Takes commands from the user and informs the robot on what actions should be performed.

User Interface

  1. View Rendering: Displays the interface to the user so they can understand what the robot and the Control and Command Server are thinking.
  2. User Input: Allows the user to control the robot.

Higher level robotics

Now that we understand the minimal robot stack, we can examine what is required to implement a “useful” robot. To implement a useful robot, we should be able to:

  1. Design the robot’s form: Using CAD tools to design how the robot will look.
  2. Manufacture the robot: Create the physical components of the robot.
  3. Localization: Programmatically determine the robot’s position and pose, which allows the robot to know where it is and how it’s positioned.
  4. Navigation: Programmatically plan the robot’s path, which allows the robot to plan how it can perform an action.
  5. AI: Building some behaviour into the robot so that friction in user interaction can be reduced.

Robotics Framework

Let’s build a framework that allows the roboticist to move to higher-level robotics sooner. Let’s make it so the roboticist can utilize our framework so that they don’t have to ‘reinvent the wheel’, and can focus on building applications. To do this, I’ve written a barebones framework which takes care of the robot, the Control and Command Server, and the user interface.

In the beginning, this framework allows the user to build a minimal stack inexpensively. As the framework matures, it will encompass more and more of the stack. These are first steps into the world of robotics. We will gradually move toward a ROS-based stack as key components become less expensive.


What’s the plan?

  1. Add CAD Models
  2. Add Bill of Materials
  3. Add PCB


The robot is a two-wheel differential-drive rover with a POV webcam. It is controlled by a Raspberry Pi and communicates with the API over Wi-Fi.

This requires one client to run the drivetrain and communicate with the API, and another to run the webcam. I employ ngrok to create an introspectable tunnel so that the User Interface can render the webcam stream.

The drivetrain is powered by two 5 V DC motors, which are routed into a Pololu DRV8833 dual H-bridge IC. The DRV8833 addresses these motors as AIN and BIN (left side, bird’s-eye view, with the bot pointed away from your viewpoint).

BOUT1 - blue wire // red wire // left side
BOUT2 - green wire // black wire // left side
AOUT2 - orange wire // red wire // right side
AOUT1 - yellow wire // black wire // right side

	// POLOLU DRV8833 Dual H-bridge Configuration
	// (johnny-five motor setup; run inside the board's ready handler)
	const five = require('johnny-five');

	const AIN = new five.Motor({
		pins: {
			pwm: 24, // white wire // AIN2
			dir: 2   // red wire // AIN1
		}
	});

	const BIN = new five.Motor({
		pins: {
			pwm: 26, // brown wire // BIN2
			dir: 7   // black wire // BIN1
		}
	});
NOTE: The Raspberry Pi 2 has only two hardware PWM channels.
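With the two channels configured, driving the rover comes down to mixing a throttle and a steering value into per-side motor speeds. Here is a minimal sketch of that mixing (my own illustration, not part of the framework; the 0-255 range matches johnny-five's PWM speed scale):

```javascript
// Differential-drive mixing: throttle and steering are in [-1, 1].
// Returns signed per-side speeds scaled to the 0-255 PWM range.
function mix(throttle, steering) {
	const clamp = (v) => Math.max(-1, Math.min(1, v));
	return {
		left: Math.round(clamp(throttle + steering) * 255),
		right: Math.round(clamp(throttle - steering) * 255)
	};
}

mix(1, 0); // { left: 255, right: 255 }  -> straight ahead
mix(0, 1); // { left: 255, right: -255 } -> spin in place
```

A positive speed would then map to `motor.forward(speed)` on the corresponding johnny-five Motor, and a negative one to `motor.reverse(-speed)`.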

Robot Breadboard

Robot Schematic

Lets Talk Front

Lets Talk Side

Robot Design

I’m currently working on a design for the body. Right now it looks like a moving cube. CAD Rendering

CAD Rendering - Front

CAD Rendering - Back

CAD Rendering - Bottom

User Interface

The user interface is a React.js web app which displays the robot’s POV video stream and API statuses, and allows the user to control the robot using WASDX controls. A user can watch the stream, then queue up to control the robot for 5 minutes.
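The WASDX control scheme can be reduced to a plain lookup that the React app calls from its keydown handler. A sketch, with illustrative command names (the actual protocol may differ):

```javascript
// Map WASDX keys to drive commands; unmapped keys are ignored.
const KEYMAP = {
	w: 'forward',
	a: 'left',
	s: 'reverse',
	d: 'right',
	x: 'stop'
};

function keyToCommand(key) {
	return KEYMAP[key.toLowerCase()] || null;
}

keyToCommand('W'); // "forward"
keyToCommand('q'); // null
```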

User Interface Home - Desktop

User Interface Home Stream - Desktop

User Interface Home - Mobile

User Interface Home Stream - Mobile

User Interface Race - Desktop

User Interface Race Stream - Desktop

User Interface Race - Mobile

User Interface Race Stream - Mobile


The API is a Node.js middleman between the robot and the user interface. It manages the pilot queue, and proxies commands to the robot. It also serves the User Interface to the user.
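The pilot queue boils down to timed, exclusive turns: one user controls the robot at a time, and only that user's commands are proxied. A minimal sketch of the idea (names and structure are assumptions, not the actual implementation):

```javascript
// Minimal pilot queue with timed turns, defaulting to 5-minute slots.
class PilotQueue {
	constructor(turnMs = 5 * 60 * 1000) {
		this.turnMs = turnMs;
		this.queue = [];
		this.current = null;
	}

	join(userId) {
		this.queue.push(userId);
		if (!this.current) this.next();
	}

	next() {
		clearTimeout(this.timer);
		this.current = this.queue.shift() || null;
		if (this.current) {
			// Hand control to the next pilot when this turn expires.
			this.timer = setTimeout(() => this.next(), this.turnMs);
		}
	}

	// Only the current pilot's commands should be proxied to the robot.
	authorize(userId) {
		return userId === this.current;
	}
}
```

For example, after `q.join('alice'); q.join('bob');`, `q.authorize('alice')` is true and `q.authorize('bob')` is false until alice's turn ends.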


The user interface and API are wrapped in Docker containers, which allows for future scaling and easy deployments. The user can run the programs locally as Node instances or Docker containers, or deploy to a Kubernetes cluster for production. Additionally, we provide environment variables so that the client knows where to get the POV stream and which websocket to connect to.
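On the Node side, picking up those endpoints is a one-liner per value. A sketch, where `STREAM_URL` and `WS_URL` are assumed variable names (not necessarily the framework's actual ones), with local defaults as fallbacks:

```javascript
// Resolve client endpoints from environment variables, falling back
// to local-development defaults when they are unset.
const config = {
	streamUrl: process.env.STREAM_URL || 'http://localhost:8080/stream',
	wsUrl: process.env.WS_URL || 'ws://localhost:3000'
};
```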

The Dockerfile is straightforward:

FROM node:carbon

# Default args

# Default environment variables

# Create ui directory
WORKDIR /usr/src/ui

# Clone UI repository
RUN git clone .

# Install the production node modules
RUN npm install --only=production

# Build the UI into its bundle which will be served by the API
RUN npm run build

# -----------

# Create api directory
WORKDIR ../api

# Clone api repository
RUN git clone .

# Install the production node modules
RUN npm install --only=production


CMD [ "npm", "start" ]

NOTE: The Kubernetes configs don’t include Mongo and Redis containers.

Peter Chau

Written by Peter Chau, a Canadian Software Engineer building AIs, APIs, UIs, and robots.