
On Democratizing the Next Generation of User Interfaces

Musings of a dev trying to plan for the future

I remember making a program come alive back in the days of the IBM System 23. It was my first foray into programming. I must have been 7 years old, already using terminals for data entry to help my dad out in his business. I wrote code that highlighted items on the screen in different shades. For the younger me, it was an achievement to be proud of, even though color was not yet available on those monitors.

Fast forward a few decades — yes, only a few — and we’re in a completely different universe. One where 3D gaming with thousands of simultaneous players is possible, in spaces governed by realistic physics and rendered with lifelike graphics and beautiful scenery.

While this is a gigantic leap from the olden days of monochrome monitors and ASCII graphics, from a human’s perspective there’s still a layer of separation between reality and the digital world. But we’re about ready to break through that barrier.

CryEngine scene

On the one hand, virtual reality is only a few steps away from fully immersive experiences, able to fool your senses into thinking you’re inside an alternate universe. On the other, augmented reality is on its way to bringing that alternate universe into our own, rendering it on top of how we perceive the world.

But you may already know this, so why am I stating the obvious? Why does it matter? I happen to think that VR and AR are it: the final visual, audio, and tactile user interface frontier, regardless of how we render them or what hardware we use to experience them. Whether you’re wearing headgear or plugged directly into your cortex, soon we’ll fully experience these worlds.

As a developer who produces user interfaces on a regular basis — and one who dislikes that process — I should sit back and reflect on what this future is going to look like. What are the implications for those of us who have to write the code? How can we plan for it?

The current state of things

While the existing toolset has come a long way, its main purpose is building games, not user interfaces. Even with the work that folks at Unity and Epic Games have done to make it easier, most of the effort is still at a low level. This takes time away from designing better user experiences.

This means we have to start thinking about general-purpose interfaces. While there are several asset libraries available, there’s no reusable standard “widget” library. Nothing that’s designed for generic user interactions, like, say, a button click. As simplistic as that may sound, it’s one of the basic building blocks we’re lacking. If your aim is to build a complex interface, you have to start from scratch every time.

Let’s look at the famous Iron Man interface as an example. As a software developer, how would you build that? How far are we from being able to do so? The second question is easier to answer than the first: we can do it right now with the tech we have.

To answer the first, current tools and supporting software can help create a world in which to spawn such an interface. But it’s not quite a platform for interaction with the rest of your software. It doesn’t apply the standard, intuitive usage metaphors that everyone understands. It also requires skills that go quite a bit beyond writing software.

Immersive designs need a whole other level of artistry, including modeling, animation, storytelling, music composition, and more. To complicate things further, providing fantastic interactions means nothing if we can’t invoke or integrate with the outside world of mundane software.

Imagine a web page that’s able to interact with your 3D browser app. What would you spawn in 3D space to better convey the data that you’d like your user to see? You could go to Medium and find posts that show you animations of the stories they’re telling. What about video streaming with an extra visual component, where the audience can see and interact with each other as if they were sitting in the same room? Even Wikipedia could benefit from spawning a representation of the concept it’s describing and allowing you to interact with it.

Possible improvements

With this focus in mind, let’s step back and look at the environment of engines and assets we have today: the different languages, scripting backends, graphics formats, shading systems, lighting mechanisms, and rendering pipelines. To achieve ubiquity, we need either standard interface methods or extensible abstraction layers. Likely both if we want to make a true impact.

I want a library, usable from any programming language, that lets me communicate with a pre-configured 3D environment. One that allows dynamic positioning of standard assets in 3D space, loads custom assets when needed, and provides hooks into rendering for VR and AR hardware as well as any available gesture recognition metaphors. Something that forms a basis for building common reusable patterns that everyone else can enjoy.
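Purely as a thought experiment, here’s a rough sketch of what the surface of such a library could look like from Python. Every name in it — Scene, Button, spawn, on_click — is hypothetical and invented for illustration; nothing like it exists as a standard today.

```python
# Hypothetical sketch of a general-purpose 3D interface library.
# None of these classes or methods exist; they only illustrate the idea
# of treating a clickable element in 3D space as a reusable widget.

from dataclasses import dataclass


@dataclass
class Vector3:
    x: float
    y: float
    z: float


class Button:
    """A reusable widget: a clickable panel placed somewhere in 3D space."""

    def __init__(self, label: str, position: Vector3):
        self.label = label
        self.position = position
        self._handlers = []

    def on_click(self, handler):
        # Register a callback for when the user activates the widget,
        # whether by controller trigger, gaze, or hand gesture.
        self._handlers.append(handler)
        return handler


class Scene:
    """A pre-configured 3D environment the application talks to."""

    def __init__(self, renderer: str = "any"):
        self.renderer = renderer  # desktop, VR headset, AR passthrough...
        self.widgets = []

    def spawn(self, widget):
        # Position a standard asset in the environment and track it.
        self.widgets.append(widget)
        return widget


# Usage: the same intent a web developer expresses with <button onclick=...>
scene = Scene(renderer="vr")
ok = scene.spawn(Button("OK", Vector3(0, 1.5, -2)))


@ok.on_click
def confirm():
    print("User confirmed the dialog")
```

The point isn’t the specific names; it’s that wiring up a clickable element in 3D space should feel as routine as it does on the web.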

The communications backend I implemented with WebSockets while working on Sofi could be a decent start. We can use it to command game engines (like Unity3D) and receive events, putting logic at the Python layer. Imagine being able to leverage the entire Python ecosystem at the backend with a 3D frontend.
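To make that concrete, here’s a minimal sketch of what the Python side of such a bridge could look like. It uses the third-party websockets package rather than Sofi’s actual code, and the JSON message format — spawn commands going out, click events coming back — is an assumption made up for this example.

```python
# Minimal sketch of a command/event bridge between a Python "logic" layer and
# a game engine client, in the spirit of the WebSocket backend described above.
# This is NOT Sofi's actual API; the message format and names are assumptions.
# Requires a reasonably recent version of the third-party `websockets` package
# (one that accepts handlers taking a single connection argument).

import asyncio
import json

import websockets


async def handle_engine(websocket):
    """Serve one connected engine client: push commands, react to events."""
    # Command the engine to place a standard asset in 3D space.
    await websocket.send(json.dumps({
        "command": "spawn",
        "asset": "button",
        "label": "OK",
        "position": {"x": 0, "y": 1.5, "z": -2},
    }))

    # The engine reports interactions (clicks, gestures) back as JSON events,
    # so application logic stays on the Python side.
    async for message in websocket:
        event = json.loads(message)
        if event.get("type") == "click" and event.get("target") == "button":
            print("Button activated in the 3D scene")


async def main():
    # An engine-side plugin (Unity3D or otherwise) would connect to
    # ws://localhost:8765 and translate these messages into native objects.
    async with websockets.serve(handle_engine, "localhost", 8765):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```

A small engine-side plugin would then turn the spawn command into a prefab instantiation and report interactions back as events, leaving all the application logic in Python.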

The pieces exist, so let’s have a conversation about the type of interfaces we’ll need and the common assets we could provide.

If you have an opinion on the subject or any ideas on where we could take this, feel free to leave it in the comments below, or hop over to Gitter and let’s chat about it: https://gitter.im/try-except-pass/future-interfaces
