The mouse was for the computer. The …….. is for Augmented Reality.

By Nat Martin

The original HoloLens was the most magical piece of technology I had ever experienced. As someone who has a love-hate relationship with their smartphone, experiencing HoloLens was exciting because I glimpsed a future where technology was no longer an obstacle to the real world, but instead a tool to enhance it.

I’ll give a rundown of the current options for XR input here:

Hand tracking

Over the past few years, the resolution of hand tracking has improved considerably. Leap Motion now provides excellent tracking, and the fully articulated hand tracking on the new HoloLens appears to be greatly improved.

However, even if these resolution problems are entirely resolved, I don’t think camera-based hand tracking is the silver bullet for XR input. For hand tracking to work, your hands need to be in view of the camera. And even with the wider fields of view promised by new hardware (e.g. Leap Motion), there’s no escaping the tiresome act of holding your hands up for long periods of time (aka Gorilla Arm Syndrome).

This Minority Report–style input takes up space and looks stupid. I’m aware that social norms around technology change quickly (just look at AirPods), but I really struggle to imagine a future where everyone is waving their hands in front of a headset on the bus to work.

So, what other options are there for the future of XR?

Voice

Voice works great for things that are easily verbalised, e.g. text messaging or selecting categorical data, but not for continuous spatial interactions like scaling an object or turning up the volume.

Eye Tracking

Eye tracking can play a role in XR input too.

The problem with eye tracking, however, is that humans tend to use their eyes for other things besides AR input, like seeing. As a primary input, then, eye tracking can end up feeling a bit like controlling your car stereo with the steering wheel.

Handheld Controllers

There’s also the option of a handheld controller. For those of us who had been using the HoloLens, the Magic Leap controller was a breath of fresh air.

Yes, as Palmer Luckey pointed out, the controller’s electromagnetic tracking is far from perfect (and even further from perfect near anything large made of metal)… but it does make the headset much more usable than the HoloLens v1.

Yet, the biggest problem with the Magic Leap controller is that it undermines the core experience of AR: it stops you interacting directly with the real world. Imagine using it on a construction site, while driving a car, or during your breakfast…

Using a Magic Leap during breakfast (Credit: Magic Leap)

EMG Controllers

Finally, there are the wrist-mounted EMG (electromyography) controllers, e.g. CTRL-labs, Myo, and Pison. These technologies promise to fully track the hand’s movement (or even its intended movement) without the user having to hold anything.

These EMG technologies have the potential to be a good compromise between handheld controllers and camera-based hand tracking. However, there are still unresolved signal processing challenges before EMG is ready to be a reliable input device. (Having said that, I am yet to try CTRL-labs’ latest demos.)
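For a sense of what those signal processing challenges involve, below is a minimal sketch of the textbook first stage of a surface-EMG pipeline: band-pass filtering, rectification, and envelope extraction. This is a generic baseline rather than any of these companies’ actual pipelines, and the sample rate, cutoff frequencies, and threshold are illustrative assumptions.

```python
# Minimal sketch of classic surface-EMG envelope extraction.
# All parameter values are illustrative, not from any vendor.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed sample rate in Hz

def emg_envelope(raw: np.ndarray) -> np.ndarray:
    # 1. Band-pass 20-450 Hz to suppress motion artefacts and noise.
    b, a = butter(4, [20, 450], btype="bandpass", fs=FS)
    signal = filtfilt(b, a, raw)
    # 2. Full-wave rectification.
    signal = np.abs(signal)
    # 3. Low-pass at 6 Hz to get a smooth muscle-activation envelope.
    b, a = butter(4, 6, btype="lowpass", fs=FS)
    return filtfilt(b, a, signal)

def is_active(envelope: np.ndarray, threshold: float) -> np.ndarray:
    # Crude detection: flag samples above a calibrated threshold.
    return envelope > threshold
```

The hard, unresolved part comes after this stage: turning envelopes like these into fine-grained, per-finger intent that holds up across users, sessions, and sensor placements.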

All of the above methods will likely play a role in the future of XR interaction. However, none of them provides a complete, viable, and intuitive input solution for XR creators today.

CTRL-labs demo (Credit: CTRL-labs / VentureBeat)

So, we made LITHO…

LITHO is a wearable controller, complemented by our tracking software and UX toolkit. The controller itself is worn between your first and second fingers. It has a capacitive trackpad on the underside and an array of motion sensors, and it provides haptic feedback. LITHO’s design means you can wear it while doing everyday activities (e.g. typing, driving, drilling).

LITHO’s tracking software outputs the approximate position and rotation of the controller relative to the phone’s or headset’s AR camera, allowing developers to create apps that let you interact with objects in the real world simply by pointing. This is achieved by fusing data from the camera and the controller’s IMU to estimate the controller’s position in space. In practice this gives an intuitive experience somewhere between a typical ‘3DOF’ VR controller and a fully tracked controller like the VIVE.

We have created demo use cases, including AR 3D creation apps that let you design buildings in context and to scale.
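Litho hasn’t published the details of its sensor fusion, but for readers curious how camera and IMU data can be combined at all, here is a generic complementary-filter sketch: integrate the IMU at high rate, then let slower, drift-free camera fixes pull the estimate back. The class name, blend factor, and update rates are illustrative assumptions, not LITHO’s actual algorithm.

```python
# Generic complementary filter fusing high-rate IMU dead reckoning
# with low-rate camera position fixes. Illustrative only.
import numpy as np

class PositionFilter:
    def __init__(self, blend: float = 0.2):
        self.position = np.zeros(3)  # metres, in AR-camera space
        self.velocity = np.zeros(3)  # metres per second
        self.blend = blend           # weight given to camera fixes

    def imu_update(self, accel_world: np.ndarray, dt: float) -> None:
        # Integrate world-frame acceleration (gravity already removed)
        # at IMU rate, e.g. hundreds of Hz. Drifts without correction.
        self.velocity += accel_world * dt
        self.position += self.velocity * dt

    def camera_update(self, cam_position: np.ndarray) -> None:
        # Camera fixes arrive at frame rate (~30-60 Hz): noisy but
        # drift-free, so pull the estimate toward them and bleed off
        # accumulated velocity drift.
        self.position += self.blend * (cam_position - self.position)
        self.velocity *= 1.0 - self.blend
```

A Kalman filter would do this job more rigorously; the point here is only the division of labour between the two sensors.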

What’s more, LITHO can also be used without a visual output. For example, with your phone in your back pocket, an app can let you directly interact with connected objects such as lights. These interactions can be enhanced further through LITHO’s haptic feedback.
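To make the screen-free scenario concrete, here is a hypothetical sketch of pointing at a connected light and dimming it with a trackpad swipe. Every name in it (Controller, Light, pointed_light) is invented for illustration; LITHO’s real SDK is Unity-based and will look different.

```python
# Hypothetical screen-free interaction: point at a smart light,
# swipe the trackpad to change brightness, feel a haptic tick.
import numpy as np

class Light:
    def __init__(self, name: str, position):
        self.name = name
        self.position = np.asarray(position, dtype=float)
        self.brightness = 0.5

class Controller:
    """Stub standing in for the tracked wearable (invented API)."""
    def __init__(self, position, forward):
        self.position = np.asarray(position, dtype=float)
        f = np.asarray(forward, dtype=float)
        self.forward = f / np.linalg.norm(f)  # unit pointing ray

    def pulse(self) -> None:
        print("haptic pulse")  # placeholder for real haptic feedback

def pointed_light(origin, direction, lights, max_angle_deg=10.0):
    """Return the light closest to the pointing ray, if one is in the cone."""
    best, best_angle = None, np.radians(max_angle_deg)
    for light in lights:
        to_light = light.position - origin
        to_light /= np.linalg.norm(to_light)
        angle = np.arccos(np.clip(direction @ to_light, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = light, angle
    return best

def on_trackpad_swipe(controller, lights, delta_y: float) -> None:
    light = pointed_light(controller.position, controller.forward, lights)
    if light is not None:
        light.brightness = float(np.clip(light.brightness + delta_y, 0.0, 1.0))
        controller.pulse()  # confirm the change without any screen

# Example: swipe up by 0.1 while pointing roughly at the lamp.
lamp = Light("desk lamp", [1.0, 0.8, 2.0])
on_trackpad_swipe(Controller([0.0, 1.0, 0.0], [0.4, -0.1, 0.9]), [lamp], 0.1)
```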

LITHO is unique in that it’s cross-platform and output agnostic. This means that developers creating Unity apps for AR headsets can build the same app for iOS and Android too, with one click. This opens these complex AR experiences up to billions of people rather than thousands.

Hope you enjoyed reading our first Medium article. We plan to write articles on the topics below in the coming months:

  • Why input is the barrier to handheld AR.
  • How spatial interfaces can change how we process and understand data.
  • The problems with XR terminology, and the solutions.
  • Rapid prototyping spatial interfaces.

Leave a comment if any of these are of particular interest, or if you have any questions or feedback.