Code Ramblings

Code. Rant. Repeat.

Git: Moving History

The title of this post may make it sound like it’s going to be about how git changed the face of software development and ushered in a new era of coding collaboration. No. This post is about how to copy a subtree from one git repository to another while keeping its history.

Let’s say you’ve been working on a component in an incubator-style repository, and it’s time to move it to the main project’s repository. Simply copying the code over would leave all of its valuable history behind in the incubator, so that should be avoided. There are also lots of other components in the incubator repo that you don’t want to move along with it.

Step 1: Export your component to a temporary repository

You can do this using git filter-branch. Warning: this will reduce your local copy of the incubator repository to just your component. Either clone it locally to a different path or be prepared to clone it again from the remote.

incubator$ git filter-branch --subdirectory-filter components/my-component -- --all

Now your repository has been reduced to just the contents and history of components/my-component.

Step 2: Create a patch for the entire history of your component

Use git format-patch to export all of your commits to a patch file. Props to @rombert for this idea.

incubator$ git format-patch --stdout --root $(git rev-list HEAD | tail -n 1) HEAD > my-component.patch

I feel like this command could use some explaining. First off, we’re telling format-patch to write everything to --stdout; otherwise it would create one patch file per commit, which gets pretty clumsy when there are lots of them. Second, we’re passing the output of git rev-list HEAD | tail -n 1 as the --root parameter. The enclosed command finds the sha1 of the very first commit, and --root tells format-patch to include that commit rather than start after it. Lastly, HEAD is the target ref, which is basically the most recent commit.
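
If the rev-list trick seems opaque, here’s a throwaway repo (temp directory, made-up commit messages) you can poke at to convince yourself it really digs up the root commit:

```shell
# Build a disposable repo with two commits and check which one
# `git rev-list HEAD | tail -n 1` points at.
cd "$(mktemp -d)"
git init -q .
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m 'first'
git commit -q --allow-empty -m 'second'

# rev-list prints newest first, so the last line is the root commit:
root=$(git rev-list HEAD | tail -n 1)
git log --format=%s -1 "$root"   # prints "first"
```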

Step 3: Apply the patch

Now it’s time to add your component to the main project. Of course, you don’t want to add your component to the root of the repository (which is what all the paths in the patch are relative to). Thankfully, git supports prepending a path to all filenames in a patch.

main-project$ git am --directory components/my-component my-component.patch

Now your main project contains your component and all of its history. Some notes:

  1. Only one branch will be copied. If your component’s development happened on multiple branches, the other branches will be lost.
  2. No tags will be copied to the main repository; you will need to tag your commits manually.
  3. Commit IDs will change, so you will not be able to carry your tags over automatically.

Pretty neat.

Video Surveillance… in Bash

My parents recently asked me to install an IP video surveillance camera at their house (for reasons). Since I already have a Linux server there running 24/7, it was only natural to set it up to record the camera’s video feed (mp4 over RTSP). After a bit of googling I found ZoneMinder, which looked like it did everything I needed. However, after wasting half a day fiddling with it, I realized that, although very complex and feature-rich, it is not up to my needs: no matter how you configure it, it converts the video feed into thousands of JPEG files and scatters them across the filesystem. It also uses a MySQL database to store its information (so, extra dependencies), viewing the camera live is very slow (it refreshes a static image every 4-5 seconds), replaying a recording needs the Java plug-in in your browser (which I don’t have on Ubuntu), and you have to “export” a movie clip (i.e. it has to convert all the saved images back into a video). What a waste, though I can see how this approach could be the most compatible with all possible environments (camera types, OSes, etc.). I was also unable to set up authentication or a maximum total file size.

Fed up with ZoneMinder, I ditched it and decided to hack together something of my own. After fiddling with ffmpeg for the other half of the day, I came up with a viable solution, in bash.

bashsurv.sh
#!/bin/bash

OUTPUT_DIR="/var/www/bashsurv"
FFMPEG_INPUT_FLAGS="-rtsp_transport tcp"
FFMPEG_SOURCE="rtsp://192.168.1.123/video.mp4"
FFMPEG_OUTPUT_FLAGS="-r 20 -acodec libspeex"
FFMPEG_OUTPUT_EXT="ogv"
CLIP_LENGTH=600 # seconds
TIMELIMIT=620 # seconds, allows for network timeout over CLIP_LENGTH
KEEP_FILES_FOR=10080 # minutes

while true; do
  # Note: the *_FLAGS variables are deliberately unquoted so they split into words.
  avconv -timelimit $TIMELIMIT $FFMPEG_INPUT_FLAGS -i "$FFMPEG_SOURCE" -t $CLIP_LENGTH $FFMPEG_OUTPUT_FLAGS "$OUTPUT_DIR/$(date +%F.%T).$FFMPEG_OUTPUT_EXT"
  if [ $? -ne 0 ]; then
    # recording died early (camera offline, network drop): back off before retrying
    sleep 1m
  fi
  find "$OUTPUT_DIR/" -type f -mmin +$KEEP_FILES_FOR -delete
done

Unfortunately, I was not able to just copy the codec, which would have been optimal in terms of resource (CPU/memory) usage on my server, but the transcoding doesn’t use up that much either. Now I can point OUTPUT_DIR somewhere under /var/www, set up authentication on it via .htaccess and be done with it. Dad can now easily view the recordings in his browser, and if he really needs the live stream I can just bookmark the RTSP URL in VLC for him or something.
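
The post doesn’t show the .htaccess itself, so here’s a minimal sketch of what it could look like. The password-file path, realm name and user are all made up; it assumes Apache with AllowOverride enabled for the directory. The demo writes into a temp dir so it’s safe to run anywhere:

```shell
# Sketch: basic-auth protection for the recordings directory (illustrative paths).
cd "$(mktemp -d)"   # stand-in for /var/www/bashsurv

cat > .htaccess <<'EOF'
AuthType Basic
AuthName "bashsurv recordings"
AuthUserFile /etc/apache2/bashsurv.htpasswd
Require valid-user
EOF

# The referenced password file would be created (once, as root) with:
#   htpasswd -c /etc/apache2/bashsurv.htpasswd dad
cat .htaccess
```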

Bottom line: K.I.S.S.

How to fuck up with find

Welcome to the tutorial on how to successfully fuck up with find, the command line tool. In case you’re not familiar with the find command, RTFM, it’s freaking beautiful. However, in the hands of a non-RTFM-ing user (like me) it can be quite destructive. Let me demonstrate.

I wanted to use find to delete a bunch of files from a hierarchy, based on a simple name rule. So I thought I should use xargs in combination with find to delete them. After a quick lookup of the -print0 and -name arguments of the find command, I concluded that it would be a good idea to run the following command:

$ find . -print0 -name example | xargs -0 rm

Can you guess what it did? Let me help you: it deleted every file under . (the current directory). The directories are still there, but they’re not much help on their own, are they? Fortunately for me, this was in a git repository that I had just pushed to a remote, so I was able to just clone it again.

In order to understand what happened, I had to actually read the man page carefully and pay attention. Everything that comes after the path (in my case .) on the command line is treated by find as a single expression, evaluated left to right for every file, with consecutive tests and actions implicitly ANDed together (with short-circuiting). The -print0 action always returns true, because it’s only meant to produce output, not to filter. Since it came first, every file got printed (and piped to rm) before my -name test was even consulted.
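
Here’s the mistake reconstructed safely, with -print standing in for the pipe to rm, on a disposable directory tree:

```shell
cd "$(mktemp -d)"
mkdir -p a b
touch a/example a/other b/example b/other

# My broken ordering: -print fires for every file, -name is never consulted.
find . -type f -print -name example | wc -l   # 4

# Filter-first ordering: only the matches get printed.
find . -type f -name example -print | wc -l   # 2
```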

To conclude, the correct command would have been:

$ find . -name example -print0 | xargs -0 rm

This way, the -name test runs first, and only the matching files ever reach -print0. However, because I actually spent time reading the manual this time, I discovered find also has a -delete action, so xargs is not actually needed at all:

$ find . -name example -delete

As I was saying, freaking beautiful.

Beware of the JavaScript reference!

As you (hopefully) know, variables holding objects in JavaScript are actually references. As you (most surely) know, references can bite you in the ass in the most unexpected places, and here’s one of them. Say you have an object that you continuously change (e.g. recursively) and want to see how it “evolves” by calling console.log() on it. You would expect to see “snapshots” of the object logged, right? Wrong. For example, try running this code in a console (note: I have only tried this in Chrome, so I’m not sure how other browsers behave):

var mutant = {"name": "Leela Turanga"};
console.log(mutant);
mutant["characteristic"] = "One eye";

Now, expand the Object and voilà, you’ll see both properties, even though the log call ran before the second one was added. This can be very confusing with more complicated objects.

As a solution, you can clone the object before logging it. Here are a few suggestions for how to do that.
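One cheap way to clone is a JSON round-trip, which is fine for plain data objects like this one (though it drops functions, undefined values and cyclic references). A quick check, driven through Node here (assumes node is on your PATH):

```shell
node -e '
var mutant = {"name": "Leela Turanga"};
var snapshot = JSON.parse(JSON.stringify(mutant)); // detached deep copy
mutant["characteristic"] = "One eye";

// The snapshot keeps only the property it was cloned with:
console.log(Object.keys(snapshot).length); // 1
console.log(Object.keys(mutant).length);   // 2
'
```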

Rails setup with RVM on Ubuntu

Given how developer-friendly Ruby on Rails is, getting a development environment up and running can be surprisingly tricky. I will describe the process that I consider to be “best practice” here, step by step. Although this guide is focused on Ubuntu, the general idea should be the same on other distros as well. Also, please take note of the date of this post: if you’re reading it even a few months after I wrote it, don’t be surprised if your mileage varies.

First off, we’re going to need Ruby, right? So let’s install that, and also throw in curl (we’re going to need it in a bit):

Update: actually, you don’t need Ruby installed on your system at all. Installing curl is enough here:

$ sudo apt-get install curl

Next, we’re going to install RVM. RVM is, as the website puts it, a Ruby Version Manager. Basically, it allows you to install multiple versions of Ruby in your home folder and use them seamlessly (as if they were the default Ruby on the system). This is useful for us because the version of Ruby that Ubuntu installs is 1.8.7 (as of this writing), while the latest stable version of Ruby is 1.9.3. RVM also lets us keep separate collections of gems (called gemsets, for obvious reasons), which is very useful when working with multiple projects that have conflicting dependencies.

Installing RVM is dead simple. Just run the command under Quick Install:

$ curl -L get.rvm.io | bash -s stable

Warning: the above command might not be the latest one. Make sure to double check with the RVM website.

Now, close the current terminal and open a new one. This is so that RVM can load into the new bash session. Now, let’s see what RVM suggests we should install:

$ rvm requirements

Requirements for Linux ( DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=11.10
DISTRIB_CODENAME=oneiric
DISTRIB_DESCRIPTION="Ubuntu 11.10" )

NOTE: 'ruby' represents Matz's Ruby Interpreter (MRI) (1.8.X, 1.9.X)
             This is the *original* / standard Ruby Language Interpreter
      'ree'  represents Ruby Enterprise Edition
      'rbx'  represents Rubinius

bash >= 4.1 required
curl is required
git is required (>= 1.7 for ruby-head)
patch is required (for 1.8 rubies and some ruby-head's).

To install rbx and/or Ruby 1.9 head (MRI) (eg. 1.9.2-head),
then you must install and use rvm 1.8.7 first.

Additional Dependencies:
# For Ruby / Ruby HEAD (MRI, Rubinius, & REE), install the following:
  ruby: /usr/bin/apt-get install build-essential openssl libreadline6 libreadline6-dev curl git-core zlib1g zlib1g-dev libssl-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison subversion

# For JRuby, install the following:
  jruby: /usr/bin/apt-get install curl g++ openjdk-6-jre-headless
  jruby-head: /usr/bin/apt-get install ant openjdk-6-jdk

# For IronRuby, install the following:
  ironruby: /usr/bin/apt-get install curl mono-2.0-devel

Install the dependencies suggested by RVM for Ruby (the ruby: line in the output above; make sure to run the command with sudo).

Good. Now it’s time to install Ruby! Again. The next command will install the latest stable Ruby into your home directory (so the system is not affected). Beware, it will take a long time to complete, as it downloads the Ruby source code and compiles it!

$ rvm install ruby

Now, if the command above finishes with a message similar to:

RVM is not a function, selecting rubies with 'rvm use ...' will not work.
Please visit https://rvm.io/integration/gnome-terminal/ for a solution.

Then do visit that URL and follow the instructions there. Finished? Good. Open a new terminal, again. Now type:

$ rvm use ruby

If all went well, you should see a green message telling you which Ruby is being used. Next, you should create a gemset (we talked about them in the beginning) for your new project, and also switch to it:

$ rvm gemset create myproject && rvm gemset use myproject

You finally have your environment set up. Next step is to install the rails gem:

$ gem install rails --no-rdoc --no-ri

Note: the --no-rdoc --no-ri parameters are passed so that it doesn’t waste time installing docs that are available on the Internet anyway. If you wish to have them installed for some reason, just omit those parameters.

Good, now you’re ready to create your new Rails app! Simply type:

$ rails new myproject

To run the new app:

$ cd myproject/
$ rails server

You just got an error now, didn’t you? Did it sound like this?

in `autodetect': Could not find a JavaScript runtime. See https://github.com/sstephenson/execjs for a list of available runtimes. (ExecJS::RuntimeUnavailable)

What’s happening is that Rails needs a JavaScript runtime (well, duh, it says so right in the error message!). From what I can tell, it needs it for the new CoffeeScript (and LESS?) functionality added to the asset pipeline in Rails 3.1. To fix this, just require a JavaScript runtime gem in your app. We’re going to use one called “therubyracer”. Open up the file called “Gemfile” and add this line to it:

gem 'therubyracer'

Now, in order to actually install that gem, just run:

$ bundle

in your project’s directory. The bundle command basically looks in your Gemfile and installs / updates dependencies based on it. Running the app should now actually finally ultimately work:

$ rails server
=> Booting WEBrick
=> Rails 3.2.3 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[2012-04-17 00:24:51] INFO  WEBrick 1.3.1
[2012-04-17 00:24:51] INFO  ruby 1.9.3 (2012-02-16) [x86_64-linux]
[2012-04-17 00:24:51] INFO  WEBrick::HTTPServer#start: pid=4595 port=3000

Now, every time you open a new terminal and intend to run the project, you will have to:

$ cd myproject/
$ rvm use ruby
$ rvm gemset use myproject
$ rails server

If you want to skip the two rvm commands, you can use an .rvmrc file. That, however, is beyond the scope of this post, and is documented here.
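
For the impatient, a one-line .rvmrc is usually enough. The Ruby version and gemset name below are just examples; match them to your setup (the demo writes into a temp dir standing in for the project directory):

```shell
cd "$(mktemp -d)"   # stand-in for your project directory

# RVM sources this file automatically every time you cd into the project:
echo 'rvm use ruby-1.9.3@myproject --create' > .rvmrc
cat .rvmrc
```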

As I was saying, surprisingly tricky. Sure, you could have used the Ruby and Rails provided by your package manager, but given the rapid development of both, you would be several major versions behind and lacking many features. Also, managing multiple projects is now a breeze. Just install the required Ruby (some projects might rely on older versions of it), create a new gemset for it, install dependencies, run.

Whew!