
CATPOO Progress

It’s been some time since I’ve posted on this blog. Now that I’ve actually been working on projects worth mentioning, the long period of neglect is over.

Sometime in the fall of 2017, I decided to see what kind of robot I could make. Since going down the rabbit hole, these have been the biggest hurdles:

  • Learning how to model in 3D.
  • Electrical engineering in general.
  • Mechanical engineering in general.
  • Explaining why I’m doing this.

I’m no expert in the first three after less than a year of learning. I never expected to be. I only wanted to know enough to accomplish my goal: build a proper robot.

What makes a “proper” robot? I’ve decided upon these criteria:

  • It has to be built entirely from open-source tools and designs.
  • It has to be remotely controllable from a cell phone or tablet.
  • It has to have a high resolution camera and should have night vision.
  • It should have additional sensors to avoid hazards in the environment.
  • It should be powered by an Arduino and a Raspberry Pi together to allow future software updates for expanded functionality.

Now that I’ve accomplished those goals, I’ve moved on to the polishing phase. Part of that phase includes writing things up in such a way that anyone else in the world could create the same robot.

In the meantime, here are some pictures from along the way:

Late 2017 Prototype
Early 2018 prototype
Mid 2018 Prototype with its top off on the pool table
Mid 2018 Prototype in the dark
Mid 2018 Prototype


Video Stabilization

Recently I purchased a kayak. I wanted to take photos with my GoPro mounted to the top. Seemed simple enough. It was, and the result was totally unacceptable.

Let’s make this user story more complicated.

The Story

I wanted to be able to create a time lapse video quickly and send it to anyone who was with me that day. If they wanted an original high resolution image used to create the video, they could give me the time stamp, and I could easily send the dozen or so images that made up that second of video.

Put another way: I was in essence creating a quick way to preview the thousands of photos GoPro can take in a short time.
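As a sketch of that lookup, assuming the GoPro captures photos steadily and the time lapse plays back at 25 frames per second with one photo per frame (both assumptions on my part, not camera specs), a timestamp in the video maps to a range of source image indices:

```shell
#!/bin/bash
# Map a timestamp in the time-lapse video back to the source images.
# Assumes one photo per video frame and a 25 fps playback rate.
PLAYBACK_FPS=25

images_for_second() {
  local t="$1"                       # timestamp in the video, in seconds
  local first=$(( t * PLAYBACK_FPS ))
  local last=$(( first + PLAYBACK_FPS - 1 ))
  echo "images ${first}-${last}"
}

images_for_second 10   # images 250-274
```

So a friend saying “the nice shot is around 0:10” narrows the search to roughly 25 photos instead of thousands.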

Naïve First Try

Seems simple enough:

  1. Hit record on GoPro
  2. Kayak
  3. Download Images
  4. Let GoPro Studio do its thing

That didn’t work.

That shakiness makes it impossible to watch. Ain’t nobody got no time for that.

Second Pass

I came to realize that no filter GoPro could provide would help. Manually rotating and lining up all the frames would take forever. Time to automate.

First I created a script called stabilize:

#!/bin/bash
# Copyright 2016 Tim Doerzbacher <tim at tim-doerzbacher.com>

MELT=melt
# Stabilization options
MELT_FILTERS="-filter vidstab shakiness=8 smoothing=20 optzoom=0 maxangle=0.05"
# Encoding options
MELT_OPTIONS="vcodec=libx264 b=16000k acodec=aac ab=128k tune=film preset=slow"

if [ -z "$1" ] ; then
  echo "Usage: $0 <infile> [outfile]"
  exit 1
fi

SRC="$1"
DEST="$(echo "$1" | sed -r 's/\.(mp4|mpg|mov)/.stabilized.m4v/')"
MLT_FILE="$(echo "$1" | sed -r 's/\.(mp4|mpg|mov)/.stabilized.mlt/')"

if [ ! -z "$2" ] ; then
  DEST="$2"
fi

if [ "$SRC" = "$DEST" ]; then
  echo "Did not recognize that file type."
  exit 1
fi

echo "Stabilizing: ${SRC}"
echo "Destination: ${DEST}"

echo
echo "======== Round #1: analyze and write stabilization data ========"
echo
$MELT "$SRC" $MELT_FILTERS -consumer "xml:${MLT_FILE}" all=1

echo
echo "======== Round #2: render the stabilized video ========"
echo
$MELT "$MLT_FILE" -audio-track "$SRC" -consumer "avformat:${DEST}" $MELT_OPTIONS
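The destination name is derived purely from the source name, so the sed expression can be checked on its own (using GNU sed’s -r extended-regex flag, same as the script):

```shell
# Derive the output name the same way stabilize does:
# replace a .mp4/.mpg/.mov extension with .stabilized.m4v.
src="paddle.mp4"
echo "$src" | sed -r 's/\.(mp4|mpg|mov)/.stabilized.m4v/'   # paddle.stabilized.m4v

# An unrecognized extension passes through unchanged, which is
# how the script detects unsupported input types (SRC == DEST).
echo "clip.avi" | sed -r 's/\.(mp4|mpg|mov)/.stabilized.m4v/'   # clip.avi
```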

So the new order of things was:

  1. Hit record on GoPro
  2. Kayak
  3. Download Images
  4. Let GoPro Studio do its thing
  5. Process resultant video with stabilize

This method left a bit of weirdness at the edges. I tried the auto-zoom functionality but didn’t like the result: since GoPro Studio had already thrown out a lot of the original resolution, zooming in further discards even more information. The end result is far from optimal and looks blurry:

There Must Be a Better Way

There was a better way. I had a large number of sequentially named files and a snazzy Linux server, and I was losing a lot of resolution by stabilizing the video after it had already been scaled down.

Time to make a new script: encode.

#!/bin/bash
FFMPEG=ffmpeg
# Take the image extension as an argument (defaults to JPG) and
# encode every matching image in the current directory into one video.
EXT="${1:-JPG}"
$FFMPEG -pattern_type glob -i "*.${EXT}" -c:v libx264 video.mp4

Now I have an easy way to create one video out of all images taken that day.

$ encode JPG; stabilize video.mp4

At this point, I can take video.stabilized.m4v and import it into GoPro Studio. Now I have full resolution video that I can crop as necessary. The result works out quite a bit better:

Still, I’m having problems smoothing the video out. At this point, I began to wonder how it would work with better data. The water drops take up so much of the field of view that they are probably affecting the stabilization.

Better Data

A few days later I had the opportunity to revisit the lake to do some plein air painting.

After that was done, I recorded and processed another time lapse using the scripts above:

Ultimately, I’m pretty happy with the last few attempts at making a kayak trip bearable. Things are still not perfect. I’m not aware of a way to make the GoPro take more than two frames per second. That’s causing the Flux processing in GoPro Studio to really make a mess of things.
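To put rough numbers on that limitation (the 30-minute trip length and 25 fps playback are assumptions for illustration; the 2 fps capture rate is the camera’s ceiling mentioned above):

```shell
# Back-of-the-envelope numbers for a 30-minute paddle,
# assuming 2 photos/second capture and 25 fps playback.
CAPTURE_FPS=2
PLAYBACK_FPS=25
TRIP_SECONDS=$(( 30 * 60 ))

PHOTOS=$(( TRIP_SECONDS * CAPTURE_FPS ))      # 3600 photos
VIDEO_SECONDS=$(( PHOTOS / PLAYBACK_FPS ))    # 144 seconds of video
echo "${PHOTOS} photos -> ${VIDEO_SECONDS}s of video"
```

With only two real frames per video second of motion, the interpolation has very little to work with, which is why Flux struggles.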

With any luck I’ll have another post soon explaining how to mitigate these issues.


Holonomic in the Hizzouse

Partially assembled top view
Partially assembled view from above

I’ve spent months planning my next CATPOO and it’s all printed out. All requisite parts are either here or on their way in the mail.

Highlights include:

  • Omnidirectional wheels for holonomic goodness
  • Infrared night vision camera
  • 16x High intensity IR LEDs
  • 7″ LCD with touchscreen
  • External USB and network ports
  • Wireless AP support
  • Rear motion sensor for Spidey Sense
  • Forward and rear proximity sensors
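The holonomic part comes down to mixing three desired motions into four wheel speeds. Here is a minimal sketch of the standard mixing for four wheels in an X configuration; the sign conventions are an assumption about wheel orientation, not CATPOO’s actual firmware:

```shell
# Standard four-wheel holonomic mixing (X configuration).
# vx = strafe, vy = forward, w = rotation; output is the signed
# speed for front-left, front-right, rear-left, rear-right.
mix() {
  local vx="$1" vy="$2" w="$3"
  local fl=$(( vy + vx + w ))
  local fr=$(( vy - vx - w ))
  local rl=$(( vy - vx + w ))
  local rr=$(( vy + vx - w ))
  echo "$fl $fr $rl $rr"
}

mix 0 100 0   # straight forward: 100 100 100 100
mix 100 0 0   # strafe right:     100 -100 -100 100
mix 0 0 50    # rotate in place:  50 -50 50 -50
```

Driving in a straight line means all four computed speeds actually have to be realized equally by the motors, which is exactly where my servo worries below come in.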
Front view showing IR LEDs and camera

Once I get the few minor parts remaining from Pololu, I’ll be done with the build in a day. If you think you might need a part, and it’s only a dollar or two, just get it. Waiting three days to finish the build because I’m short on 1″ #4 screws is a terrible experience.

Once it’s built, I’m nervous about figuring out how to coordinate all four servos to move in a straight line. Previously, I ran into trouble getting the forward and reverse speeds to be equal. It could also be related to the servos I’m using. I plan to isolate whether it’s my fault, the motors’ fault, or a combination of both. I’ll have eight servos soon.
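One way to attack unequal forward/reverse speeds is a per-servo trim on the pulse width. This is a hypothetical sketch: the ~1500 µs neutral pulse is typical for hobby continuous-rotation servos, but the trim value below is made up, not measured from my motors:

```shell
# Continuous-rotation servos are driven around a ~1500 us neutral
# pulse. A per-servo trim compensates for motors whose real neutral
# point is slightly off, which otherwise makes forward and reverse
# speeds unequal at the same commanded magnitude.
NEUTRAL=1500

pulse() {
  local trim="$1" speed="$2"   # speed as a us offset, e.g. -200..200
  echo $(( NEUTRAL + trim + speed ))
}

pulse -8 200    # full forward on a servo trimmed by -8: 1692
pulse -8 -200   # full reverse on the same servo: 1292
```

Calibrating one trim constant per wheel would at least tell me whether the asymmetry is in my code or in the motors themselves.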

If I can’t get four functional servos out of that, I’m smashing the servo controller I have with a sledgehammer and switching to stepper motors. I didn’t want to spend the money or effort, but it’s looking like that’s the direction I’ll be going.

It’s Alive!

Finally! After over a year off and on, I finally have all the parts and necessary free time available.

Milestone Highlights

  • The basic robot now maneuvers using its two servos
  • The night vision camera is working (at about half of full brightness)
  • Video lag to the web UI on a smartphone is less than one second in bright areas and about twice that in the dark (longer exposure times delay transmission of the next frame)

Next Up

  • Add in more sensors (highest priority is the proximity sensor for the front)
  • 5″ LCD screen for the top for debugging and status output
  • Integrate the momentary switches with built-in LEDs to perform operations (ex: reset networking, turn on/off IR lights, etc.)