06 Nov 2016 VR science apps.
It's 5 months since I put my first VR science app, SphereVR, onto the web. I've learned a lot since then (and SphereVR has improved greatly), and because people, including myself, are still uncertain about VR in general, I wanted to work out my own thoughts on it. And the best way for me to know what I think is to write about it.
In these 5 months the professional VR world has been showing the successes and failures typical of any new technology. Even the professionals, with powerful tools at their disposal, aren't yet fluent in the language of VR. We all know how to navigate 2D/3D worlds with mouse, keyboard and joystick, but what works and what doesn't in VR is still relatively unknown.
To my surprise and dismay I've not found others writing publicly available scientific VR, so I can't see how they cope with the need to navigate through a science space. Sure, there's the fantastic Labster, which is doing great things for science education. And I know that there are some (but surprisingly few) scientists doing classic molecular visualisation within VR. But the sort of ordinary science that interests me still does not have a rich set of VR apps from which we can learn the best language for interacting with data.
- Sphere VR. Hansen Solubility Spheres set within the Milky Way:
- KB-Ternary VR. This shows complex Kirkwood-Buff calculations in ternary diagrams within a world floating in the sky above the sea:
- Emulsions. A basic exploration of the physics of emulsion drops that can join or split when they collide.
- Y-MD. Some basic Molecular Dynamics of atoms forming, say, a zeolite.
- Y-Solvation. A look at how the charges on solvent molecules vary within a solvent depending on the molecular orientations.
- Crystals. An exploration of all the possible crystal forms and what happens when two of them are merged:
- With the help of some geologists I met at a barbecue, I've written an app that lets you explore underground rock strata within a sort of glass elevator that moves in any direction you want. The current limitation is getting the geological data sets into a suitable format, but once we have those data issues sorted out, I'll make the app more accessible. Incidentally, using VR to explore external strata is "trivial" in that any geologist with a 360 video can create a compelling field trip tutorial for others to explore. The teaching of geology will be greatly enhanced by VR, but that sort of VR isn't what I'm trying to do - it requires skills and equipment that I will never have.
So here are the thoughts I've come up with about science VR.
- The Three.js and WebVR community is awesome and generous. Most things I will describe have, one way or another, come from them. I pass on my thanks to these pioneers and geniuses.
- You need proper controllers. I've had the Oculus Rift for many months and simply don't bother to use it other than to check that my XBox controller code works OK. I've ordered the Touch controllers and when they arrive I'm sure I will enjoy using the Rift. But till then there's no contest: the HTC Vive wins, especially as I, like everyone, find room scale such a natural part of the VR experience.
- It's important to be able to fly effortlessly through one's world. Because the rule of VR is "no motion except in the line of sight", my flying is really simple. Look in a direction, pull a trigger and fly at the (variable) speed set by the trigger. It quickly became clear that one trigger should take you forward and the other in reverse. I get sea sick, car sick, everything sick, but I have not once been sick while travelling through my VR worlds when they are working properly. I can also confirm that travelling, say, sideways (I tried it using a trackpad) induces nausea very quickly. And if a bug sends the world spinning, just close your eyes and take off the goggles!
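The gaze-directed flight described above amounts to very little code. Here is a minimal sketch of the idea using plain `{x, y, z}` objects rather than Three.js vectors; the function names and parameters are my own illustration, not the actual app code:

```javascript
// One frame of gaze-directed flight: move along the (normalized) gaze
// direction at a speed proportional to the analogue trigger value.
// trigger is in [0, 1]; sign is +1 for the forward trigger, -1 for reverse.
function flyStep(position, gazeDir, trigger, sign, maxSpeed, dt) {
  const speed = sign * trigger * maxSpeed; // variable speed from trigger pressure
  return {
    x: position.x + gazeDir.x * speed * dt,
    y: position.y + gazeDir.y * speed * dt,
    z: position.z + gazeDir.z * speed * dt,
  };
}
```

Because motion is always along the line of sight, there is no sideways component to provoke nausea; the reverse trigger simply negates the same vector.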
- Teleporting by looking and clicking is really cool when required.
- I'm told that aural and haptic feedback are a good thing. I've added both to one of the apps so that I know how to do it, but neither makes any difference to my experience so I've not bothered to add them to the others. They seem unnecessary distractions to me, but others might value them.
- You need to know where you are looking. The Oculus Rift has a little circle to identify your gaze, which is used for menus. This makes up for its lack of real controllers. I quickly adopted this trick in my own code. The circle is unobtrusive, but always there when you need it.
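Under the hood, knowing what the gaze circle is pointing at is a ray-picking problem. In the real apps a `THREE.Raycaster` does this; the sketch below shows the same idea in plain math for a scene of spheres (names and shapes are my illustrative assumptions):

```javascript
// Return the nearest sphere hit by the gaze ray, or null.
// eye: viewer position; dir: normalized gaze direction;
// spheres: [{x, y, z, r, ...}, ...].
function gazePick(eye, dir, spheres) {
  let best = null, bestT = Infinity;
  for (const s of spheres) {
    const ox = s.x - eye.x, oy = s.y - eye.y, oz = s.z - eye.z;
    const tCenter = ox * dir.x + oy * dir.y + oz * dir.z; // distance along ray to closest approach
    if (tCenter < 0) continue;                            // sphere is behind the viewer
    const d2 = ox * ox + oy * oy + oz * oz - tCenter * tCenter; // squared miss distance
    if (d2 <= s.r * s.r && tCenter < bestT) { bestT = tCenter; best = s; }
  }
  return best;
}
```

The reticle is then drawn at a fixed spot in view space, and whatever `gazePick` returns is the object the menus (or the HUD, below) act on.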
- Although there are warnings that head-up displays (HUDs) are not a good idea for VR, I've found that for conveying small amounts of useful data about specific objects (what atom is this, what is its charge, what are the x,y,z coordinates and value of this datapoint) they are wonderful. You just look at whatever interests you (using the circle to show where you are looking) and the information is just "there" in front of you. There's one problem, though. The HUD must be visually far enough away to require no re-focus. That's fine till you get close to an object of interest, at which point the HUD is "inside" the object and no longer visible. There may be fixes for this, but the advantages of this sort of HUD are huge and the problem is small. I agree that a full-on HUD with lots of whizzy information would not work well.
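The focus constraint above can be made concrete: the HUD is placed a fixed distance along the gaze, far enough that the eyes need no refocus. This sketch shows that placement, plus one possible mitigation for the "HUD inside the object" problem; the clamping idea is my assumption, not something from the actual apps:

```javascript
// Place the HUD a fixed distance (e.g. ~2 m) along the gaze direction.
function hudPosition(eye, gazeDir, hudDistance) {
  return {
    x: eye.x + gazeDir.x * hudDistance,
    y: eye.y + gazeDir.y * hudDistance,
    z: eye.z + gazeDir.z * hudDistance,
  };
}

// Possible mitigation (hypothetical): when the looked-at object is nearer
// than hudDistance, clamp the HUD to sit just in front of it instead of
// inside it, with a small margin and a minimum distance floor.
function clampedHudDistance(hudDistance, objectDistance, margin) {
  return Math.min(hudDistance, Math.max(objectDistance - margin, 0.1));
}
```

Clamping keeps the label visible but reintroduces a (smaller) refocus cost when objects are very close, which is why it is only a partial fix.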
- Providing the main scientific outputs is not a problem. VR is such a natural way to show data that I find it hard trying to do stuff back in 2D. I've had very few problems knowing how to get the core data looking OK. Sure, I could do it much better, but data so easily comes alive in VR that it's no contest. VR and science were made for each other.
- However, scientists need to be able to load datasets, change settings (e.g. scales on a graph, values for inputs) and also get numerical outputs in addition to the dramatic portrayal of the main data. Here I've struggled. I now have a "wall" on which a number of text options are available; using the gaze circle to know which item is being looked at, plus the triggers and trackpads, it's not too hard to have scrolling menus, numerical selection of values, buttons etc. However, I've hit a big problem, to be described in a moment.
- I'm a chemist so I think in terms of molecules and nanoparticles or spherical drops in an emulsion. These things are in constant motion and the way they interact is crucial to understanding formulations. However, it wasn't obvious to me that being in a virtual room with lots of moving blobs would be anything other than awful, confusing or scary. But having a physics engine to hand I decided to give it a go. It's wonderful! I can quite happily sit for ages in a corner of this virtual box of liquid watching the spheres bouncing around doing their science. The possibilities are very exciting and my latest app (not yet ready to go on line) is starting to be quite powerful.
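One piece of the emulsion physics mentioned earlier is easy to show in code: when two drops collide and join, the merged drop must conserve volume (and, for equal-density drops, momentum). This is an illustration of the physics rather than the actual app code:

```javascript
// Merge two emulsion drops {r, vx, vy, vz} into one.
// Volume conservation: rNew^3 = r1^3 + r2^3, so rNew = cbrt(r1^3 + r2^3).
// Velocity is the volume-weighted average (momentum conservation at equal density).
function mergeDrops(a, b) {
  const v1 = a.r ** 3, v2 = b.r ** 3; // volumes, omitting the common 4/3*pi factor
  const v = v1 + v2;
  return {
    r: Math.cbrt(v),
    vx: (a.vx * v1 + b.vx * v2) / v,
    vy: (a.vy * v1 + b.vy * v2) / v,
    vz: (a.vz * v1 + b.vz * v2) / v,
  };
}
```

Splitting is just the inverse bookkeeping: the daughter radii must again satisfy the cube-sum rule, which is why merged drops grow noticeably less than their combined radii would suggest.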
- This app really needs (perhaps surprisingly) a 2D graph on the wall of the virtual room. In my 2D apps I use a standard graphing package, and I tried some tricks to get it to work in VR but failed. In the short term I'm creating the graphs with VR tools and they are good enough for my purposes. But in the longer term, scientists will definitely need to be able to "project" graphs onto some 2D part of their VR worlds.
- My menu system has hit a problem in the world of bouncing spheres. If I allow the menu to be active through the particles in the room, it's all too easy to accidentally change a menu item when you are looking at a particle. If I allow the menu to work only when it is the only thing in the line of sight then you often have to fly over to be close to the menu to be able to change things. You might not, then, be able to see the graph so you have to fly over to that. It's not good!
- Those who've used Tilt Brush from Google know that you can have a multi-purpose menu right on your virtual controller. Those who've used Fantastic Contraption know that you can call your "menu" (actually, a virtual cat) over to you and/or you can put on a helmet to pop out to another room to set various options. Each approach has problems for science VR but they suggest that with some imagination a better, more natural system will emerge for menus, graphs etc. This is going to be vital for scientific VR to flourish. I may have neither the imagination nor the coding skills to do it, but I'll have a go.
- Update 10 Nov. The blog crystallised my thoughts and forced me to think of a new input/output system. After 2 days of trial and error, and starting from the assumption that some sort of hand is going to be required in most higher-level VR experiences, I added a little screen above the hand position of my Vive controller (the trick will work on the Oculus Touch). It has plenty of room for I/O and can be toggled or swiped through various views, in my current case between text I/O and a graph. It's a great improvement.
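The swipe-to-cycle behaviour of that controller screen is essentially a tiny circular state machine. A sketch, with view names that are my own illustration:

```javascript
// A controller-mounted screen that cycles through a fixed list of views
// as the user swipes the trackpad forward (+1) or back (-1).
function makeControllerScreen(views) {
  let index = 0;
  return {
    current: () => views[index],
    swipe: (dir) => {
      index = (index + dir + views.length) % views.length; // wrap in both directions
      return views[index];
    },
  };
}
```

Keeping the screen attached to the controller means it is always one wrist-turn away, which is what solves the "fly over to the menu, then fly over to the graph" problem described above.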
The VR science challenge
I'm just an old guy on my own with rudimentary coding skills operating at the bleeding edge of WebVR. My current VR apps aren't brilliant, but they show me (and everyone who has tried them) that science and VR are made for each other. I will carry on developing my own skills and I hope that my VR language will become more fluent over time. When I look back at my early 2D apps I am embarrassed by them - they are so crude and yet were so difficult to code. But at the time I could not envisage that the appification of the science that interested me would become routine and that I'd have ~150 of them working away pretty well, used by people all over the world who "get" why science apps are so useful. I'm hoping that in a few years' time I will look back with similar embarrassment at my current VR apps.
But there's so much I have to do. Coming real soon now is the Google Daydream challenge. I will soon purchase a Pixel phone and Daydream headset combination so I can start to see the possibilities of VR on smartphones. My current VR requires a powerful games PC (with a big NVidia graphics card) to work, and I'm sometimes near the limit of the power - though usually through inefficient code rather than raw graphics power. What will my apps be like on a smartphone? Well, they'll be useless. So I'll have to learn to make science VR that works on a smartphone. At the same time, I know that my current PC+graphics combination will seem laughably slow in a few years' time, so the sorts of VR science I can do will be extended significantly.
So those are my challenges. Where I've so far failed completely is in persuading anyone else to do VR science stuff. It's so obviously the future, why isn't everyone doing it? But just as I've been able to build on the pioneering work (successes and failures) of others in the nascent WebVR field, I'm hoping that one or two scientists will see my work, see the potential, learn from my mistakes and take science VR to the next level.