Web experiments with the Leap Motion

The Leap Motion Controller is a new product that allows users to control software interfaces with their hands in the air above the keyboard. It’s one step closer to the amazing-looking Minority Report UI that we all wanted on our home computers back in 2002.

The controller tracks hand and finger movements above the device in real time and detects gestures made by a user. It is available for pre-order now and is due to start shipping at the end of July.

Understanding that applications that use the controller are as important as the controller itself, Leap Motion have already sent out over 10,000 developer devices for testing and building on. Beta applications using the controller are already appearing, such as Corel Painter Freestyle and a Google Earth integration.

The Leap developer portal opened on 5th July to anyone, whether you have a device or not, and recently HP and Asus announced deals with Leap Motion to embed the technology within their computers, so now seems like a great time to introduce you to the Leap.

[Image: the Leap Motion Controller]

Leap for the web

As a web developer, this sort of hardware extension to our users’ experience doesn’t usually affect me. However, one of the nicest things about the Leap Motion is that user interaction through the controller is available to the browser as well. When a controller is running on a user’s machine, it opens a WebSocket server on localhost port 6437, so we can write JavaScript to listen to the events it sends us. Even better, Leap Motion clearly see JavaScript as a primary platform, as they also provide us with the LeapJS library.

Getting started

I’m not going to go through the installation of the controller, so we’ll start with a fully installed Leap Motion Controller and get it showing up in our browser.

We’ll need the source of LeapJS saved into our project folder. Then let’s set up a basic HTML page to start playing with.

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>Leap JS Example</title>

    <link rel="stylesheet" href="./style.css">
  </head>
  <body>
    <p id="info"></p>

    <script src="./leap.min.js"></script>
    <script src="./app.js"></script>
  </body>
</html>

I have included links to a stylesheet, style.css, for any styles we might require and a JavaScript file, app.js, where we will put our application code.

The LeapJS API

We are ready to write some Leap code, but first we need to know what we are dealing with. LeapJS gives us a few classes that we’ll need to use. The API reference is available online, but here is a quick rundown of what is available.

Controller

The Controller class is our interface to the device. You can create an instance of the Controller and poll it for access to frames, although there is an easier way to do this in the browser which we will see later.

Frame

A Frame is a set of hand and pointable tracking data and any live gestures from a moment in time.

Hand

All the information the frame has about a detected hand. This includes the palm position and velocity, information about a sphere that would fit in the hand and a list of the hand’s fingers. You can get the id of a Hand so that you can refer to the same Hand in a later frame.

Pointable

A Pointable can be a finger or a tool. The Leap Motion Controller tries to detect “tools” by the fact that a pen or pencil is thinner, straighter and longer than a finger. Pointables have an id, the same as Hands do, as well as a direction and the position and velocity of the tip.
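
Since each Pointable carries a boolean tool property, telling fingers and tools apart in code is simple. Here is a small sketch; the splitPointables helper is my own, not part of LeapJS, but the tool flag it reads is the real property described above.

```javascript
// Partition a frame's pointables into fingers and tools using the
// `tool` boolean that every Pointable exposes.
function splitPointables(pointables) {
  var fingers = [], tools = [];
  pointables.forEach(function (p) {
    (p.tool ? tools : fingers).push(p);
  });
  return { fingers: fingers, tools: tools };
}
```

In a Leap.loop callback you would call this as splitPointables(frame.pointables) and then handle the two groups separately.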

Gesture

There are a number of built-in gestures that the device recognises, and Gesture is their superclass. When a gesture is detected, it is added to the frame alongside the hand and pointable data. The subclasses are CircleGesture, SwipeGesture, KeyTapGesture and ScreenTapGesture.
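
In practice, each gesture object in a frame carries a type string matching those subclasses, so a simple dispatcher covers all four. This is a hypothetical sketch of my own; the type values are the ones LeapJS reports.

```javascript
// Route a gesture to a description based on its `type` string.
// The four types correspond to the Gesture subclasses listed above.
function handleGesture(gesture) {
  switch (gesture.type) {
    case 'circle':    return 'drawing a circle';
    case 'swipe':     return 'swiping';
    case 'keyTap':    return 'key tap';
    case 'screenTap': return 'screen tap';
    default:          return 'unknown gesture';
  }
}
```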

Reading the Leap data

Now we have seen what is available, let’s look at how to use the above classes. As I mentioned, instead of instantiating our own Controller object and polling it for frames, there is an easier way. Leap.loop is a method that takes a callback function and uses the browser’s built-in requestAnimationFrame (or a polyfill for it) to provide us with 60 frames per second of motion data.

In an example later, I am going to use the positions of the tips of each detected finger, so right now, let’s just print that to the screen.

var info = document.getElementById('info'),
    data = [],
    i, len;

Leap.loop(function(frame){
  // the frame object is an instance of a Leap.Frame
  for(i=0, len=frame.pointables.length; i < len; i++){
    data.push(frame.pointables[i].tipPosition);
  }

  info.innerHTML = '[' + data.join(', ') + ']';
});

What we see is an array of three-dimensional co-ordinates printed to the screen. When you move your hand around, you can see the numbers fluctuate together, and even when you hold your hand in one position they still change quickly. It is, of course, very difficult to keep your hand completely still.
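
That jitter is easy to tame in code. One common approach, sketched here with a hypothetical helper of my own (not part of LeapJS), is to average the last few tip positions before using them:

```javascript
// Returns a function that smooths successive {x, y, z} positions by
// averaging over a sliding window of the most recent samples.
function makeSmoother(windowSize) {
  var samples = [];
  return function smooth(pos) {
    samples.push(pos);
    if (samples.length > windowSize) {
      samples.shift();
    }
    var sum = samples.reduce(function (acc, p) {
      return { x: acc.x + p.x, y: acc.y + p.y, z: acc.z + p.z };
    }, { x: 0, y: 0, z: 0 });
    return {
      x: sum.x / samples.length,
      y: sum.y / samples.length,
      z: sum.z / samples.length
    };
  };
}
```

A bigger window gives steadier values at the cost of a little lag, so something like a window of five frames is a reasonable starting point at 60 frames per second.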

[Image: fingertip position data printed to the screen]

Making it a bit more visual

Just looking at numbers on a screen isn’t much fun, so let’s hook the data up to a canvas and start to play with it. First we need to update our HTML: all we have to do is add <canvas></canvas> under our <p> (which we’ll keep around for debugging purposes).

Setting up the canvas

We now need to do some work to prepare our canvas element. Let’s set it up to cover our browser window. We also want to move the canvas’s co-ordinate system to make it easier to plot the points from the Leap Motion. Traditionally the (0,0) point of a canvas is at the top left, but looking at the data we got from the last stage, you’ll see that the device’s co-ordinates have their origin in the centre of the device. If you hold your hand to the left of the device, the x co-ordinates of the pointables are negative, if you hold your hand to the right of the device, they are positive.
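
Concretely, once we have translated the context, a Leap position maps to a canvas point by keeping x as-is and negating y, since canvas y grows downwards while the device’s y grows upwards. A tiny hypothetical helper (my own, not part of LeapJS) makes the mapping explicit:

```javascript
// Convert a Leap tipPosition (origin at the centre of the device,
// y pointing up) into our translated canvas space (origin at the
// bottom middle of the screen, y pointing down).
function leapToCanvas(pos) {
  return { x: pos.x, y: -pos.y };
}
```

This is exactly the conversion we will apply inline when plotting points in the next section.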

var canvas = document.getElementsByTagName('canvas')[0],
    ctx = canvas.getContext('2d');

// set the canvas to cover the screen
canvas.width = document.body.clientWidth;
canvas.height = document.body.clientHeight;

// move the context co-ordinates to the bottom middle of the screen
ctx.translate(canvas.width/2, canvas.height);

// set up a fill colour for our drawing later
ctx.fillStyle = "rgba(0,0,0,0.9)";

Painting the fingertips

Now we have our origin set up to match our device, we should draw something onto the canvas. For this example we will take the x and y co-ordinates of each pointable we have and plot them on the canvas. First, we’ll set up a draw function to pass to Leap.loop. The draw function gets passed the current frame.

function draw(frame){
  // set up our variables
  var pos, i, len, radius = 10;

  // loop over the pointables from the current frame
  for(i=0, len=frame.pointables.length; i < len; i++){

    // get the position of the tip of the current pointable as before
    pos = frame.pointables[i].tipPosition;

    // draw a circle centred at the tip
    // (canvas y increases downwards, so we negate pos.y)
    ctx.beginPath();
    ctx.arc(pos.x, -pos.y, radius, 0, 2*Math.PI);
    ctx.fill();
  }
}

And finally, we pass this function to Leap.loop:

Leap.loop(draw);

If we now run this with the device, we’ll start to see circles drawn on the screen where the fingers are pointing, much like the Rorschach test I appear to have drawn in the screenshot.

[Image: circles drawn at the fingertip positions, resembling a Rorschach test]

We can do slightly better than this mess, though, by clearing our canvas on each draw so that only the current frame’s pointables are printed. While playing with this, I went a step further and simply drew over the canvas with a 10% opaque white rectangle, which gave the pointables a trail. You can see the full code for these examples on GitHub.
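
The trail effect can be sketched as a small helper: instead of clearing the canvas outright, paint a mostly transparent white rectangle over the whole translated context at the start of each draw. The function name and parameters here are my own; only the canvas calls are standard.

```javascript
// Fade out previous frames by painting a 10% opaque white rectangle
// over the whole canvas. The context has been translated to the bottom
// middle of the screen, so the rectangle starts at (-width/2, -height).
function fadeCanvas(ctx, width, height) {
  ctx.fillStyle = 'rgba(255, 255, 255, 0.1)';
  ctx.fillRect(-width / 2, -height, width, height);
}
```

Calling fadeCanvas(ctx, canvas.width, canvas.height) at the top of the draw function gives the trails shown below; clearing with ctx.clearRect over the same rectangle gives the single-frame version instead.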

[Image: fingertip trails drawn on the canvas]

Further capabilities

So now we can plot points on the screen, we should try to use some of the other abilities the Leap Motion has to do something a little more interesting. As we found out when looking at the available classes in the API, we can distinguish between fingers and tools, like pens or pencils, and we can recognise some built-in gestures.

So let’s write a very basic drawing application using both of those bits of knowledge. For this part, I’ll show the main bits of code and describe anything else. Again, the full code is available on GitHub.

Drawing with a pencil

We set up the canvas the same as before, but this time we need a couple of different variables, one to save a tool ID and one to save the previous position of the tool. Within a frame, each pointable, hand and gesture has an ID so that you can refer to it in later frames. What we are going to do is keep a reference to one tool and keep looking it up in the current frame. We then draw a line between the current position of the tool and its position in the previous frame. Of course, we need to get hold of a tool in the first place, so, in the case where we do not have a toolId we need to see if there is a new tool to get hold of.

function draw(frame){
  var tool, currentPosition, i, len;
  if(toolId === undefined){
    // we do not have a toolId, so let's go looking for one
    if(frame.tools.length > 0){
      // if the frame has some tools in it, we choose the first one
      tool = frame.tools[0];
      toolId = tool.id;
      lastPosition = tool.tipPosition;
    }
  } else {
    // we have a current toolId, so we should look for it in this frame
    tool = frame.tool(toolId);
    // if the tool is valid, i.e. it is still in the frame
    if(tool.valid){
      // we take the position of its tip
      currentPosition = tool.tipPosition;
      // we draw a line between the current position and the previous one
      ctx.beginPath();
      ctx.moveTo(lastPosition.x, -lastPosition.y);
      ctx.lineTo(currentPosition.x, -currentPosition.y);
      ctx.stroke();
      // finally, we update the last position
      lastPosition = currentPosition;
    }else{
      // the tool is not valid, let's find a new one.
      toolId = undefined;
      lastPosition = undefined;
    }
  }
}

With the above draw function as the argument to Leap.loop, now whenever we point a pencil at the screen over the Leap Motion we will draw a line. That can get messy, as proved by my attempt to write some text:

[Image: my rather poor attempt at handwriting with the pencil tool]

So, we should add some way to clear the screen too. The most natural way for me to do this was by swiping my hand over the Leap Motion to wipe off what had been drawn. The swipe gesture is built into the device, so we just need to inspect the frame’s gestures. Swipe gestures are continuous gestures that go on for more than one frame, so they come with a state of ‘start’, ‘update’ or ‘stop’. For this example, I want to clear the canvas once the swipe is over, which means we can look out for any swipe gestures with a state of ‘stop’. Here’s how:

if(frame.gestures.length > 0){
  // we check each gesture in the frame
  for(i=0, len=frame.gestures.length; i<len; i++){
    // and if one is the end of a swipe, we clear the canvas
    if(frame.gestures[i].type === 'swipe' && frame.gestures[i].state === 'stop'){
      ctx.clearRect(-canvas.width/2,-canvas.height,canvas.width,canvas.height);
    }
  }
}

To avoid clearing the canvas whilst drawing, I placed this conditional in the branch where we are looking for a new tool.

One final thing: to enable gesture recognition, we have to start Leap.loop differently, by passing an options object as the first argument.

Leap.loop({ enableGestures: true }, draw);

Leap Developers, Leap!

So that was my brief introduction to using the Leap Motion Controller in a browser. It wasn’t quite Minority Report, but hopefully you can see where we can go with this.

For further information, the LeapJS repo has advice on using the Leap Motion on the web and points to some interesting third-party examples too. Also, the Leap Motion developer portal is now open to all; within, you will find forums full of other developers who have been building apps since the first developer devices were shipped out.

The Leap Motion is a fun way to use a computer and just as fun to program with. If you do get the chance to play with a device then you should. I’m certainly looking forward to seeing more and more applications appear as the devices start to ship.
