It’s been a long-planned project, promised to my spouse: capturing the garden’s growth and continuous change in a time-lapse movie. I had wanted to do it for over a year, but it took quite a while to get the system up and running. This article documents the stages I went through to get it done. The current version of the camera is available here
Choosing the camera
The aim of a time-lapse is to cover a long time period, so the camera’s battery cannot be changed or charged; it must be remotely powered. Going through all the cameras lying around, it turned out that I don’t have any digital camera that can be remotely powered and controlled. As I happened to have a few web cameras around, I decided to try and go with one of those. This seemed like a really good idea: web cameras are not only cheap enough to risk on a hobby project like this, but they are also well supported by stock Linux and are powered over USB.
Unfortunately, both of the cameras I finally laid my hands on support the standard VGA resolution (640×480) at most. (A new one is already ordered straight from China, and should arrive in … maybe by Christmas.) I performed some routine checks, and even made a short test movie to see how to grab frames from the UVC driver. Unfortunately, the driver as of the time of writing has no support for grabbing still images, either directly or from a frame stream. This rules out higher resolutions for some cameras as well, since still images can usually be shot at a higher resolution than the supported video streams.

There is also an issue with gain, white balance, and even the lens itself. The cameras are designed to be foolproof and have an “intelligent” gain and white balance control; this algorithm is designed to provide maximum clarity with no manual adjustment whatsoever. Unfortunately, when I’m grabbing a frame every minute, the camera has no time to properly adjust to the light conditions, which change throughout the day. I have yet to find a way to disable these functions without losing quality. These cameras are also designed to be used over short distances, 2–3 meters at most, and their optics are either fixed or can only be adjusted within this range. I’m, however, planning to shoot at a relatively wide angle and at a 15–20 m distance.
Wiring it up
I have a file server running 64-bit Debian on a decommissioned low-end Dell server. Finding a USB extension cord long enough to cover the 10 m between the server and the planned location seemed tricky: most of the cables are 5 m long and cost quite a lot for what they are. I found an interesting solution instead. I happen to have a set of RJ45 pliers and CAT5 cable left over from my past — that is, standard Ethernet cabling. You can order USB-to-RJ45 converters for a few dollars at DHGate and put a standard network cable between the camera and the computer. This setup can reliably transfer the USB signal over the required distance, and probably over even more.
All this setup is worthless unless I find a way to capture images at a given time interval and link them all together to form a movie.
After a few simple searches on the Internet and a run through the web camera’s capabilities, I learned that the camera produces MJPEG data streams that are accessible through the Video4Linux UVC driver. I decided that the simplest way to do the capture would be to use avconv (a fork of the better-known ffmpeg). As a source I specified /dev/video0 as a v4l2 device, with a single-frame capture and JPEG output.
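For reference, a single-frame grab along those lines can be sketched like this; the device path, frame size, and output location are assumptions for illustration, not the exact command I ended up with:

```shell
# Grab one frame from the webcam via the v4l2 interface and save it as JPEG.
# /dev/video0, the 640x480 size and the /tmp path are assumptions; adjust
# them to your own setup.
OUT="/tmp/frame-$(date +%Y%m%d-%H%M%S).jpg"
if [ -e /dev/video0 ]; then
    avconv -f video4linux2 -s 640x480 -i /dev/video0 \
           -vframes 1 -y "$OUT"
fi
```

The timestamped filename keeps successive captures from overwriting each other.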
As I thought it would not be sensible to write the entire software from scratch, I looked up a solution that uses a few shell-based capture and render scripts with an extremely simple PHP-based web front-end. Unfortunately I can’t remember where I downloaded the package from, and can’t find who I should be crediting for this solution, but as soon as I find the link I will credit them properly! The shell scripts included are very well written, and can probably be used out of the box — provided you run Ubuntu, you want to use the web server root to access the files, and you don’t mind generating tons of HTML snippets as files.
There are two important scripts to be run periodically: one captures a single frame, the other creates a video from the captures. Both scripts accept a lot of parameters, but it turned out they are quite sensitive to how you set your environment up. The images are captured into a CAMNAME/year/month/day directory structure and are named after the timestamp they were taken at. The capture supports several different capture utilities: gphoto2 (great for capturing images from actual digital cameras), uvccapture (a tool for capturing from UVC-based devices, unfortunately with a bug that results in unreadable images), and capture (which I couldn’t try, as it’s not available as a Debian package). It doesn’t support ffmpeg or avconv, which are what I intended to use.
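The directory layout the scripts expect can be reproduced with a few lines of shell; the base path and camera name below are made up for illustration:

```shell
# Build the CAMNAME/year/month/day directory and a timestamp-based filename,
# mirroring the layout the capture script uses. BASE and CAM are assumptions.
BASE="/tmp/timelapse"
CAM="garden"
DIR="$BASE/$CAM/$(date +%Y/%m/%d)"
mkdir -p "$DIR"
FILE="$DIR/$(date +%s).jpg"    # frames are named after the capture timestamp
```

Naming frames after the Unix timestamp means a plain alphabetical sort already gives chronological order within a day.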
The display looked nice, but it relied on HTML file snippets generated at capture time. I set up the directories so that www-data (the user running the web server) could access them. I placed the PHP files from the pack under /var/www/webcam, and the scripts under /opt/webcam. I tried numerous settings, but it turned out that no matter how it’s set up, unless it’s placed in the web server root I always ended up with some bad paths.
Once I knew my way around the code, I scrapped the path misery by using dynamic PHP instead of pre-generated HTML code. This greatly reduced the number of files produced per day, and also simplified the capture and generation scripts. Two files are produced every minute between 5:00 and 22:00: one “high-res” image and one thumbnail.
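The schedule itself is plain cron; the script names below are assumptions, but the paths match where I put things and the time window matches the capture hours:

```shell
# /etc/cron.d/webcam -- capture a frame every minute between 05:00 and 21:59,
# and rebuild the daily movie once an hour. Script names are assumptions;
# the scripts live under /opt/webcam and run as the www-data user.
* 5-21 * * *  www-data  /opt/webcam/capture.sh garden
7 * * * *     www-data  /opt/webcam/render.sh garden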
The daily movie is produced hourly, as it takes less than a minute to render at this resolution. This results in the movie itself, plus a thumbnail montage of 24 images taken from the video. The monthly image, made up of these thumbnail montages, is also updated. The GUI is very simple and uses file system access to produce overview, monthly, and daily views of the generated files.
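Since the frames are already JPEGs, one way to assemble the daily movie is to concatenate them into an MJPEG stream and let avconv wrap it; this is a sketch with assumed paths, helper name, and frame rate, not the exact render script:

```shell
# Concatenate the day's JPEG frames (back-to-back JPEGs form a valid MJPEG
# stream) and have avconv wrap them into a movie file.
# The function name, frame rate and quality setting are assumptions.
make_movie() {
    dir="$1"; out="$2"
    cat "$dir"/*.jpg | avconv -f mjpeg -r 25 -i - -q:v 3 -y "$out"
}
# e.g. make_movie /var/www/webcam/garden/2013/06/01 /var/www/webcam/day.avi
```

Because the frames are named after their timestamps, the shell glob already feeds them in chronological order.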
After the first few days, two major issues turned up. The first of these is pretty simple: there are black frames when there is not enough light. As I didn’t want to add an astronomical calculation to see when the sun is up or down, I decided to go with a brute-force approach. The images are scaled down after capture, and if their brightness is below 10 out of 255 (about 5%) they are considered dark and are disposed of. This logic is built into the capture itself, so black images will not litter the directories, the movies, or the thumbnails.

The second issue was much more difficult. After a few days of running, the camera simply refused to respond on the daily startup. It took several hours of digging to find that the USB port goes into power-saving mode, and the camera does not always recover from there. A new script was therefore created that removes and restores the USB bridge if it doesn’t respond. It’s a bit brute-force, but it doesn’t happen too often. This is also a separate script and is invoked using sudo; to reduce the security risk, the www-data user, who performs the capture, can only sudo this one command and nothing else. Power saving is also an issue that results in bad images from time to time: these images are only partially captured, and half of them are either blank or discolored. A line added to the udev rules, however, mitigated this problem as well.
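The darkness check needs no astronomy, as described above; here is a sketch using ImageMagick’s convert. The helper name is my own, and the ~5% threshold matches the 10-out-of-255 cutoff:

```shell
# Return success (0) if the frame's mean brightness is below ~5% (10/255).
# Uses ImageMagick's convert; awk does the float comparison, since plain sh
# has no floating-point arithmetic. The function name is an assumption.
is_dark() {
    mean=$(convert "$1" -resize 32x32 -colorspace Gray \
                   -format '%[fx:mean]' info:)
    awk -v m="$mean" 'BEGIN { exit !(m < 0.05) }'
}
# usage in the capture script: is_dark "$FILE" && rm -f "$FILE"
```

Scaling down to 32×32 first keeps the brightness measurement cheap, exactly as the capture script scales images down before the check.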
I’m quite satisfied with the UI: it’s quite ugly, but functional.
The camera is to be replaced with another one with higher resolution.
Automatic gain and white balance are to be disabled in the camera, to produce a more constant stream of images
Interpolation techniques should be tested, to see if they can be used for creating less “jumpy” movie streams
Monthly videos should be based on images with less variation or from a pre-defined time range
The issue with the partially captured images that only occurs in the early afternoon should be investigated
As there are times when something is blocking the camera view, it should be investigated what that is
The source code of the software is to be made public once I can credit the original author, especially because about 50% of his code has not been rewritten from scratch.
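For the gain and white balance item on the list, the UVC driver may already expose the relevant knobs through v4l2-ctl; the control names vary between cameras, so the ones below are only examples to be checked against the actual device:

```shell
# See which controls this particular camera actually exposes
v4l2-ctl -d /dev/video0 --list-ctrls
# Then pin the automatic ones (control names differ per camera; these are
# examples, not guaranteed to exist on mine):
v4l2-ctl -d /dev/video0 --set-ctrl white_balance_temperature_auto=0
v4l2-ctl -d /dev/video0 --set-ctrl exposure_auto=1   # 1 = manual on many UVC cams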