LEAP Motion: A Bittersweet Love Story

Introduction

Current is a wonderful place for a developer: we work hard, but we always get new toys to play with. One of those toys was the Leap Motion. Preloaded with some applications, it was gimmicky: fun to use for five minutes, and then forgotten. Until yours truly discovered the Leap was compatible with Unity. Then playtime started.

I will try to help you set up a project, tell you what worked for me, and describe the different pitfalls I encountered and how I solved them.

Since the Leap SDK documentation is very clear, I will share links to the appropriate pages rather than repeat it here.

Installation

Let’s get you started, shall we? In order to use the Leap, you first:

  • Need a Leap, obviously
  • Need to follow the instructions found HERE, then come back later.

Now, if it doesn’t work at this point, I am sorry, but I don’t think you should keep reading. For the elite that managed to get it working, let’s start coding.

Principles

So the basic idea is, you have a Controller that gets all the data for you, and on each and every frame you get a lot of sweet sweet motion data to play around with.

To get you started on how to get that sweet data to play with, start by reading THIS. Since I am used to the Unity way, I like to use the method described in “Getting Frames with Callbacks”.
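As a rough sketch of what that callback approach looks like (console-only, and assuming the classic Leap C# API: you subclass `Listener` and override `OnFrame`):

```csharp
using System;
using Leap;

// A minimal sketch of "Getting Frames with Callbacks":
// the controller invokes OnFrame whenever new tracking data arrives.
class MotionListener : Listener
{
    public override void OnFrame(Controller controller)
    {
        Frame frame = controller.Frame();
        Console.WriteLine("Hands: " + frame.Hands.Count
                        + ", fingers: " + frame.Fingers.Count);
    }
}

class Program
{
    static void Main()
    {
        Controller controller = new Controller();
        Listener listener = new MotionListener();
        controller.AddListener(listener);

        Console.WriteLine("Press Enter to quit...");
        Console.ReadLine();

        controller.RemoveListener(listener);
        controller.Dispose();
    }
}
```

One caveat worth knowing: the callbacks arrive on a Leap-owned thread, and Unity’s API is not safe to call from other threads, so inside Unity you may prefer polling `controller.Frame()` from `Update()` instead.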

You can now access all of the things described HERE, apart from the Gestures, which I will cover in a later post.

Tracking Your Hands

I have to say, I am truly amazed by how accurately the Leap can detect the position of your hands. Be warned, though: the Vector returned by hand.PalmPosition is in millimetres, which can be quite confusing the first time you assign the transform of a GameObject to it.

Confused as to why my gameobject is so far away
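A minimal sketch of the fix, assuming millimetre Leap coordinates and (by convention) metre-scaled Unity units — the `0.001f` factor is my assumption, so tune it to your scene:

```csharp
using Leap;
using UnityEngine;

// Follow the first detected palm with this GameObject.
// Leap positions are in millimetres; scale them down before
// assigning to a Unity transform, or everything lands very far away.
public class PalmFollower : MonoBehaviour
{
    Controller controller = new Controller();

    void Update()
    {
        Frame frame = controller.Frame();
        if (!frame.Hands.IsEmpty)
        {
            Vector palm = frame.Hands[0].PalmPosition;  // millimetres
            transform.position = new Vector3(palm.x, palm.y, palm.z) * 0.001f;
        }
    }
}
```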

So the first thing I did was to attach a particle emitter to each of my fingertips so I could feel like a wizard. To access your fingertip positions:
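Something along these lines (a sketch, not the exact code I used; `emitters` is a hypothetical array you would fill in the Inspector):

```csharp
using Leap;
using UnityEngine;

// Pin one particle system to each detected fingertip of the first hand.
public class FingertipEmitters : MonoBehaviour
{
    public ParticleSystem[] emitters;  // one per finger, assigned in the Inspector
    Controller controller = new Controller();

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.IsEmpty) return;

        FingerList fingers = frame.Hands[0].Fingers;
        for (int i = 0; i < fingers.Count && i < emitters.Length; i++)
        {
            Vector tip = fingers[i].TipPosition;  // millimetres
            emitters[i].transform.position =
                new Vector3(tip.x, tip.y, tip.z) * 0.001f;  // mm -> metres
        }
    }
}
```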

Which is fun for 5 minutes but kind of leaves you wanting more.

Okay, what if I created a Particle Emitter that simulated a galaxy and let me navigate through it with the slightest movement of my fingertip, like some kind of all-powerful being?

The writer's ego

Hand Motion To Control Your World

Coding for the Leap is fairly easy, their documentation has lots of code examples to lead you step by step and write functional code in a blink.

But the first pitfall I encountered was not how to use the SDK. Noooooo, it wasn’t. It was:

How are you supposed to use your hands to control a scene?

How do you make those hand movements intuitive enough for the user to understand them and pick them up fairly quickly?

At first, I did the following:

If one hand was detected, use the difference between the previous position and the current position to rotate the universe (it is as cool as it sounds).

If two hands were detected, use the distance between them: the closer they got to each other, the closer you got to the center of the galaxy.
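A sketch of those two schemes side by side (the class name, `galaxy` reference, and all the scaling factors are my own hypothetical choices, not from the SDK):

```csharp
using Leap;
using UnityEngine;

// One hand: rotate the galaxy by the palm's frame-to-frame delta.
// Two hands: zoom based on the distance between the two palms.
public class GalaxyController : MonoBehaviour
{
    public Transform galaxy;  // hypothetical root of the particle galaxy
    Controller controller = new Controller();
    Vector lastPalm = Vector.Zero;

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 1)
        {
            Vector palm = frame.Hands[0].PalmPosition;
            Vector delta = palm - lastPalm;  // millimetres per frame
            galaxy.Rotate(delta.y * 0.1f, -delta.x * 0.1f, 0f);  // tune to taste
            lastPalm = palm;
        }
        else if (frame.Hands.Count == 2)
        {
            float separation = frame.Hands[0].PalmPosition
                                   .DistanceTo(frame.Hands[1].PalmPosition);
            // Closer hands -> camera closer to the galaxy center.
            Camera.main.transform.position =
                galaxy.position - Camera.main.transform.forward * separation * 0.01f;
        }
    }
}
```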

Which on paper is cool and all, especially if you wrote the code and know how it works. It looks pretty cool when you show it off. Now, if you don’t explain anything and let someone try it… they just wave their hands quickly once or twice above the LEAP, then stop and say:
“Meh, doesn’t work”

Average user

To try to fix that, I added and removed some features. First, you need something that immediately rewards the user for waving one hand above the LEAP. To do that, you need to take into account the “Woooh” factor. So instead of just rotating the universe, I added the ability to zoom into the center of the universe with one hand only, depending on the height of the hand over the Leap, decreasing the rotation speed the closer you get to the center.
That completely changed the scene: the user would now be inside the galaxy, rotating it with their hand, usually stopping mid-movement, then slowly waving to turn around, which was the expected behaviour.

What usually comes next is trying to do an explosion with the hand (like a wizard casting a spell). To enable that, my immediate thought was:
“Easy! Just count the fingers. If you detect a change from 0 fingers to 5 fingers = BAM, EXPLOSION”

The expected user reaction

Yeah, except that:

The Leap is too precise: what you actually end up with is a per-frame finger count that ramps up gradually (0 fingers, then 1 or 2 on the next frame, … finally 5).

On the other hand (see what I did there?), depending on the hand’s rotation and position, the LEAP simply doesn’t see the fingers. To put it simply, if you place your hand as if you were about to karate chop the LEAP, the fingers won’t be seen. You might see the first one, but the others are hidden behind it.

Also, my thumb was sometimes not being detected for some reason, which led me to believe I had some kind of thumb deformity.
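One way I would sketch around the per-frame count problem (my own approach, not an SDK feature) is to only fire the explosion once the open hand has held steady for a few frames after a closed fist:

```csharp
// Debounce a fist-to-open-hand transition: require the count to sit
// at 5 for several consecutive frames before declaring BAM EXPLOSION.
public class ExplosionDetector
{
    const int RequiredStableFrames = 5;  // tune to your frame rate
    int stableFrames = 0;
    bool wasClosed = false;              // saw a fist (0 fingers) recently

    // Feed the per-frame finger count in; returns true exactly once
    // per fist-to-open transition.
    public bool Update(int fingerCount)
    {
        if (fingerCount == 0)
        {
            wasClosed = true;
            stableFrames = 0;
            return false;
        }
        if (wasClosed && fingerCount == 5)
        {
            stableFrames++;
            if (stableFrames >= RequiredStableFrames)
            {
                wasClosed = false;  // consume the transition
                return true;        // fire!
            }
        }
        else
        {
            stableFrames = 0;       // the ramp-up frames (1, 2, 3...) reset here
        }
        return false;
    }
}
```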

Then there are the dreaded moments when the user starts using two hands… I have yet to figure out a simple and obvious enough control scheme to handle them.

Conclusion

Lots of people were disappointed when the LEAP was finally released. It has its quirks, but nothing a good algorithm can’t fix. I think that, coupled with a good design for handling the different movements, it can be a wonderful tool. I am just getting started with it, and I haven’t even told you what you can do with the SDK’s gesture recognition yet, so stay tuned.

Finally, I am surprised I haven’t mentioned Minority Report at all so here it is.

Ugh, using my arms to code is too painful