XR Development Overview
I’ve seen a lot of questions and misconceptions about XR development over the past years and decided to compile my thoughts and observations in this multi-part post.
The type of development I am referring to is more in the realm of prototypes, serious games, research software, simulations, and indie titles. It is easy to sink endless time, money, and the effort of a large team into any software development project, but I am more curious about what individuals or small teams can achieve. The motivation for this post is my repeated exposure to clinicians, researchers, and managers who are curious about XR, are planning projects and applying for grants, but generally do not know where to start when it comes to scoping out development effort. A lot of this is not exclusive to XR development, but also applies to other software projects. Moreover, I am focusing on the game engine Unity, the most popular game development tool currently available. There are many other tools out there such as Unreal Engine, Amazon Lumberyard, and CryEngine, all of which support building applications for multiple platforms, including VR systems, but I’ll mostly stick to Unity for this post. Don’t expect tremendous differences in how development is approached across these engines.
Let me preface this by saying that XR development is easier and more accessible than ever. Anybody can get started with XR development, or game development more broadly, without much prior knowledge and without a large upfront investment. Most software development tools are free, and video and text tutorials are available in abundance on YouTube or on paid platforms like LinkedIn/Lynda.com, Udemy, Coursera, Pluralsight, and others. Moreover, most of the hardware we use today is compatible with one another, and most devices are directly integrated into development tools such as Unity and Unreal Engine. This stands in stark contrast to just 5 or 10 years ago, when development tools and VR/AR hardware were expensive, unavailable to consumers, and incompatible with each other. This is a point in time of much growth and opportunity - let’s dive in!
Let’s start with making important decisions at the outset of a project:
Assuming we are not building a game, but rather want to address a consumer need or fill an industry gap, it helps to start with that gap and identify suitable technology to fill it. I’ve seen a lot of projects start with a cool gadget or a new developer kit that struggled to have real-world impact because it was not properly matched to a need or gap. This is where the experience of end users like clinicians, educators, or researchers comes in: you know what problems you are facing; now find technology experts who can offer solutions by matching your problem with a suitable technology. VR or AR can solve a lot of problems, but it is important to consider a few factors:
Where will the technology be used? How much space is available? Is internet access available? Are there special requirements for medical or military use, such as user privacy or interference with existing equipment?
Who will use the technology? What user age group? Are there physical or cognitive limitations, e.g. risk of falls or hemiparesis? Is any prior experience with technology expected? Are there user-specific challenges, such as lack of motivation for repeated, long-term use in rehabilitation?
Once an agreement is made on the requirements of the development effort, let’s focus on how difficult it actually is to develop different aspects of VR applications (AR is still somewhat more complex to develop and has its own quirks and requirements - we will focus primarily on VR here, but some of this also applies to AR and MR with caveats).
1. How difficult is it to create a virtual environment for a VR experience?
Of course, it depends on each project’s requirements. Before hiring a team to develop custom art, design levels, and build environments, it is always worth looking at 3D marketplaces such as TurboSquid, the Unity Asset Store, the Unreal Marketplace, or CGTrader. There are high-quality assets available that work really well in VR, ranging from twenty to a few hundred dollars. Tip: Check whether your type of environment exists or whether it is too specialized and needs to be built from scratch.
A few things to consider: how important is it to have a unique art style with assets that are not found in any other games or applications? How many environments or different scenes do you need? It is much harder to find multiple models of the same art style, as you heavily depend on purchasing art models from the same artist for a consistent style. There is also a difference between finding usable individual art assets or purchasing complete, fully-assembled scenes. If you find a collection of appealing scene objects, you might still have to work on the level design, that is, assembling the individual objects to produce a believable, engaging scene for your user.
If you cannot find any usable art assets, it might be worth contacting artists who have produced high-quality content to develop custom art for you. If you don’t require exclusive use, the artists might give you a good deal if they can later sell the art assets on TurboSquid or CGTrader.
Environments need to be optimized to avoid performance issues. Suitable assets are usually labeled as game-ready, VR-ready, or low-poly. There are use cases for highly detailed, complex art assets or CAD models as well. For example, Unity recently started supporting such workflows with PiXYZ (https://unity3d.com/pixyz), and Lumion (https://lumion.com) supports building architectural visualizations from complex 3D models, including VR walkthroughs. Most standard VR experiences will require a lot of optimization, though, including highly optimized art assets.
Other forms of optimization can take anywhere from a few minutes to a few days to set up. Features like occlusion culling, baked lighting, and levels of detail (LOD) for art assets are commonly available in game engines and go a long way toward a smooth experience without stuttering and the associated risk of simulator sickness. The goal is to achieve a consistent 90 frames per second on most VR hardware configurations. This step requires some in-depth knowledge of the technical side of game development and is rather important: an unoptimized application creates an unpleasant experience with the potential to cause simulator sickness whenever lag or stutter is noticeable.
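To make the 90-frames-per-second target concrete, here is a minimal sketch of the per-frame time budget it implies, with a small helper that flags profiler samples exceeding that budget. The function name and the sample values are hypothetical, purely for illustration:

```python
# At 90 Hz, every frame must be produced in roughly 11.1 milliseconds.
TARGET_FPS = 90
FRAME_BUDGET_MS = 1000.0 / TARGET_FPS  # ~11.11 ms per frame

def over_budget(frame_times_ms, budget_ms=FRAME_BUDGET_MS):
    """Return the frame times (in ms) that exceed the budget.

    These are the frames a user may perceive as stutter, which in VR
    can translate into simulator sickness.
    """
    return [t for t in frame_times_ms if t > budget_ms]

# Hypothetical profiler samples in milliseconds.
samples = [9.8, 10.5, 11.0, 16.7, 10.2, 22.4]
print(round(FRAME_BUDGET_MS, 2))  # 11.11
print(over_budget(samples))       # [16.7, 22.4]
```

The arithmetic is the useful part: at 90 Hz there are only about 11 milliseconds to render two eye views, run physics, and update game logic, which is why the optimizations above matter so much.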
Next, virtual environments for VR use need to support VR interactions. Normal games can get away with a lack of realism or a limited number of interactions. As soon as you support VR hand controllers and allow the user to reach out, pick up items, and interact with objects, it feels unnatural if many objects in your environment don’t support such interactions. Take a virtual house - a user would expect to open cupboards, flick light switches, turn on a TV, microwave, radio or other devices, open doors, and really do all the things you would normally do in a house. This requirement is more applicable to environments that look realistic, as opposed to cartoon-like or surreal scenes. In order to support such interactions, the environment needs to consist of individual items that can be interacted with. If a cupboard is modeled as one solid object without separate doors and hinges, you won’t be able to open those doors! This either means a 3D artist needs to spend additional time separating out all the moving parts of an object, or you simply cannot use the model in a VR application. Check these details upfront before you purchase anything!
Speaking of interactions, a realistic physics simulation in virtual environments is almost certainly required. Without one, picking up and dropping objects or pushing things aside becomes impossible. Luckily, several integrations with the game engine Unity exist that provide this functionality out-of-the-box. VRTK and NewtonVR allow for realistic physics and a lot of intuitive user interactions with the environment while not requiring much development effort. Certainly, some developers might want to spend substantial time extending or rewriting these systems, but for the average project, VRTK and NewtonVR go a long way without costing any money. More complex physics, such as fluid simulation, is often “faked” and limited rather than simulated realistically, because the performance cost of fluid solvers can bog down an otherwise optimized VR experience. Placing a lake or river in your environment is not hard, but pouring a glass of water or beer is difficult to get right without a bit of voodoo magic.
Summary: If you find an environment that fits your requirements and that has enough individual items that can be interacted with, simply dragging and dropping it into a game engine is a good start. Connect your headset, enable virtual reality support, press “Play”, and you can already look around and admire your creation in a matter of minutes. Spend a few hours on optimization such as occlusion culling and lightmapping, drop a physics system into your scene, and set up the desired interactions and a teleportation/movement system in a few more hours. This will already get you more than halfway to a decent virtual reality experience without breaking the bank.
Tip: If you are looking at budgeting a somewhat simple virtual environment (e.g. room of a residential house, cafeteria, classroom) with some basic VR interactions, don’t pay tens of thousands of dollars for its development. VR assets are very affordable - most money will go toward hourly rates of a developer setting up your scene, physics, interactions and optimizing the experience.
Next, we will look at a few specific features that might take more time and budget to implement.
2. How difficult and costly is it to integrate virtual characters in a VR experience?
Virtual characters are a tricky topic. On the one hand, they can really contribute to a VR experience feeling engaging and believable; on the other hand, if not done right, they can completely ruin any immersion by looking and feeling awkward. Virtual characters can be quite complex to develop due to the many steps involved in the process. A 3D mesh has to be modeled and sculpted, textured, rigged, and animated. In larger production environments, different people will specialize in one or two of these development steps at a time. As a consequence, you should spend some time deliberating whether you need characters, and if so, how many are needed and how much movement and interaction with the characters is essential for your application. Similar to virtual environments, there are many existing characters available on 3D marketplaces, ranging from tens of dollars to thousands of dollars. Characters are usually sold as either rigged, animated, or ready-posed. If you want any control over your character’s movements in your application, rigged models are needed. There are several 3D character design tools available that allow inexperienced users to assemble 3D characters including hair, clothing items, and accessories, and export these characters in a format that can be used within game engines. Unfortunately, many current character design tools are unnecessarily complicated or very limited, and the arguably best tool currently available, Adobe Fuse, is no longer being actively developed by Adobe. A highly promising tool, iClone Character Creator 3 (standalone), has just been released and should make character design and implementation much more accessible for (VR) projects.
From my experience, the biggest challenges for using virtual characters are the availability of a wide range of clothing items and hairstyles, the availability of children and elderly characters, and the time required to properly animate characters. There are many tools available to add further realism to your characters, including lip-synching, affordable motion capture, or even scanning real-life persons to create realistic, recognizable virtual personas. Traditional motion capture is costly in a studio environment, but tools like iClone can make motion capture more accessible for smaller projects by leveraging more affordable tracking systems like Leap Motion, Perception Neuron, and FaceWare.
If characters are essential, it is worth considering how much these characters need to move and how close they will get to the user. There is a large difference between multiple characters running or fighting each other in a busy, dynamic scene and those same characters sitting in a corner being exhausted or sleeping. Also, it is much easier to get away with a low-quality character if it is standing far away in the background as opposed to right in front of the user, engaged in a direct interaction or conversation.
Every small detail can make or break your VR experience if the character is right in front of you. As an example, I built a simple prototype with a high-quality character. However, I set the character to always stare directly at the user, following the user’s every move. Without exception, every user commented on the character, and most reacted with intimidation or discomfort. Characters matter and should receive careful attention if they are being used in an application.
In summary, virtual characters can be vital for many projects and often add time-intensive development processes that can substantially impact a project’s budget.
3. How difficult and costly is it to collect data from my VR experience?
Data collection per se is not difficult. One can easily save information about the user’s interactions, the items they picked up, or the user’s viewpoint, approximated by the view direction of the head-mounted display.
Writing information to a local text file and loading it later on is dead simple. An online database or a connection to an online hosting provider can add complexity, depending on the project’s requirements. There are many affordable service providers (e.g. PlayFab, Heroic Labs, AWS GameLift, Multiplay) that take care of server hosting and make it easy to connect any application to their servers. Monthly costs are usually based on usage, that is, how many players connect, how often, and how much data is uploaded to or downloaded from the servers. Building a custom backend that connects to cloud services such as Amazon AWS, IBM Cloud, Google Cloud, or Microsoft Azure can be much more costly upfront. Even more costly, and I mean much more costly, is the prospect of dealing with sensitive data that requires either HIPAA-compliant hosting or data storage that safely holds identifying information for education, banking, military, and other domains. If that’s what you are after, be prepared for significant financial investment. As a first step, ask yourself whether it is absolutely necessary to deal with identifying data, or whether it is sufficient to use only anonymous data and provide the means for your end user to manually match your application’s data to their secure identifying information (e.g. an electronic medical record). Just don’t ignore the topic altogether. Collecting sensitive data about your users comes with responsibilities!
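To illustrate the "anonymous data plus manual matching" approach described above, here is a minimal sketch in Python: a salted hash turns an identifying user ID into a stable pseudonym, and events are appended to a local file as timestamped JSON lines. All names (`pseudonymize`, `log_event`, the salt, the file name) are hypothetical, and a real project would manage the salt as a secret rather than hard-coding it:

```python
import hashlib
import json
import time

def pseudonymize(user_id: str, salt: str = "project-secret") -> str:
    """Replace an identifying user ID with a stable, non-reversible pseudonym.

    The same input always maps to the same pseudonym, so the end user can
    still match records back to their own secure systems without the log
    itself containing identifying information.
    """
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def log_event(path: str, user_id: str, event: str, **data) -> dict:
    """Append one timestamped event as a JSON line to a local log file."""
    entry = {"t": time.time(), "user": pseudonymize(user_id), "event": event, **data}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

entry = log_event("session.log", "patient-042", "item_picked_up", item="mug")
```

Note that simple pseudonymization is not full de-identification - whether this is sufficient depends on your domain's regulations - but it is a sensible first step compared to storing raw identifiers.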
Another important topic is the complexity of user data that can be produced or recorded in XR experiences. It does not really matter whether such information is gathered for user testing/playtesting, for scientific or clinical inquiries or as an outcome measure for training or educational XR applications - fact is that XR applications can produce rich datasets of significant value.
Firstly, XR applications are spatial by nature: stimuli and the user’s gaze are spatially distributed, which allows us to record 3D coordinates of all relevant objects in our scene. Think full-scene replays or log files with millions of timestamped entries.
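As a small sketch of what such a spatial log enables, the snippet below reconstructs a recorded head position at an arbitrary point in time by linearly interpolating between timestamped samples - the core of any replay feature. The log format and function name are hypothetical:

```python
# Hypothetical head-position log: a time-sorted list of
# (timestamp_seconds, (x, y, z)) tuples.
def position_at(log, t):
    """Linearly interpolate the recorded 3D position at time t for replay."""
    if t <= log[0][0]:
        return log[0][1]   # before the recording: clamp to first sample
    if t >= log[-1][0]:
        return log[-1][1]  # after the recording: clamp to last sample
    for (t0, p0), (t1, p1) in zip(log, log[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # fraction of the way between samples
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))

log = [(0.0, (0.0, 1.6, 0.0)),  # user standing at the origin, eye height 1.6 m
       (1.0, (1.0, 1.6, 0.0)),  # one second later, one meter to the right
       (2.0, (1.0, 1.6, 2.0))]  # then two meters forward
print(position_at(log, 0.5))    # (0.5, 1.6, 0.0)
```

The same idea scales to every tracked object in the scene, which is how a handful of timestamped streams can reconstruct an entire session for later analysis.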
Secondly, XR applications can achieve a remarkable degree of realism and ecological validity. Since we can simulate incredibly complex scenes, we can also record all the data associated with them. In hindsight, my colleagues and I have spent far more time designing and conceptualizing VR tasks and the data they produce than building the actual environments, characters, and 3D assets. Data is everything! What would you do if everything could be simulated, recorded, and analyzed? Be prepared to spend significant time on data conceptualization and analysis.
I will conclude this first part of our XR development overview with the following:
Don’t break the bank! Build smart, not necessarily from scratch.
Choose your battles wisely, whether that may be 3D characters, data, or web access.
I hope this was helpful, leave comments, feedback, and suggestions below.
Thanks for reading!