My first VR project

2023-06-08



I don't remember exactly when my first virtual reality experience was, but I vaguely remember the feeling. It was heavy, nauseating, and quite tiring after a while. But if I set aside the technical limitations and look beyond the physical discomfort, putting on that headset was pivotal in how I think about game development and the like. Later, during NSAC'22, my team and I had a chance to build our project for a mixed reality port. We never finished it, but I picked up a thing or two about how VR development worked. A few months ago, I had another opportunity to work on a VR project, with my friend Saadman Sakib. It's been a month since we finished it, and I would like to document my recollection while the memory is still vivid.

I thought the project was over-ambitious, at least in spirit, when it was first proposed to us. I don't think I have the authority yet to disclose details about the proponents or how the project came to be. We were asked to oversee the technical implementation. Now, I say over-ambitious not because I thought the project was technically infeasible, although pulling it off was definitely a remarkable feat. Our proponents had a deadline looming over them, and I believe they misunderstood certain aspects of the programming side. We have since cleared up a lot of the confusion and miscommunication, though some of it could have been handled better. They have considered the project a success, so I'll take the win. But I would still like to improve some parts of it.

We had settled on a virtual reconstruction of Panam City, with a touch of interactivity and modernism, which would let a visitor experience firsthand how the city felt in its prime. Though now in ruins, the city remains an impressive archaeological site and one of Bangladesh's many tourist attractions. The plan was to let visitors log in to the site (from their PCs and/or VR headsets), see and communicate with each other (via text or voice channels), and take in the cultural attributes of the era (through visual and auditory means).

What went right

  1. Choosing Unity: I had thought of using Twinmotion or Lumion, and I also had people advising me to use UE. I was heavily biased towards game engines, as the project specifications were much closer to a game than to a simulation. I went with Unity because a) most forums online recommended it and b) I was very much used to it. The second factor might have played a bigger part than I am giving it credit for, but I am glad I made that decision. It meant less hassle with the learning curve, since I was familiar with how Unity handles things. And from what I have seen, VR development support in Unity is much more stable than in its counterparts, so if I were to get stuck, it would be many times easier to debug.

  2. Not re-writing the netcode: I had developed backends for multiplayer games before. But this project needed a good number of concurrent users, and with the added complexity of VR multiplayer, I opted to use Photon Fusion for network handling (a minimal sketch of what a Fusion behaviour looks like follows this list). That meant I had to worry less about lag, servers, and connection issues. I later found that Fusion had voice and text channel support built around the framework, which was a big plus.

  3. Same codebase for both PC and VR builds: Now, this is something I would not recommend in most cases. Ports for different platforms should live in separate codebases, no matter how much cross-platform capability the engine claims. Unity has been defying this theory of mine for almost a decade, but it is one of those rules that has never let me down or held me back: I have always found it easier to debug platform-specific problems by keeping the codebases separate instead of introducing a mess of conditional logic inside my scripts. This project was an obvious exception, though. First, we didn't have enough time to properly maintain two codebases. Second, both ports shared most of their features: the server, the communication channels, almost everything save for some UI changes. It was simply not worth the time and headache. I figured I could write some checks to determine what platform the user was on and make minor adjustments based on that (see the platform-check sketch after this list). This saved a lot of time in both development and the build process, and I was almost 90% certain that whatever worked on PC would also work in VR.
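To give a flavor of what Fusion hands you out of the box, here is a minimal sketch of a networked behaviour. The names (NetInput, NetworkedAvatar) are illustrative, not from our actual codebase; the real Fusion pieces are INetworkInput, NetworkBehaviour, GetInput, and FixedUpdateNetwork. In a real setup, something like Fusion's NetworkTransform component would replicate the resulting movement to other peers.

    using Fusion;
    using UnityEngine;

    // Illustrative input struct; the field name is made up for this sketch.
    public struct NetInput : INetworkInput
    {
        public Vector2 Move;
    }

    public class NetworkedAvatar : NetworkBehaviour
    {
        [SerializeField] private float speed = 2f;

        // Fusion calls this once per network tick instead of Update().
        public override void FixedUpdateNetwork()
        {
            // GetInput hands us the input struct collected for this tick,
            // but only on peers with authority over this object.
            if (GetInput(out NetInput input))
            {
                Vector3 dir = new Vector3(input.Move.x, 0f, input.Move.y);
                transform.position += dir * speed * Runner.DeltaTime;
            }
        }
    }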
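And here is roughly what those platform checks looked like, reconstructed from memory. The class and rig names are placeholders, but XRSettings.isDeviceActive is the stock Unity way to ask whether a headset is actually running:

    using UnityEngine;
    using UnityEngine.XR;

    // Hypothetical bootstrap behaviour: picks the right rig at startup.
    // pcRig and vrRig are placeholder names for our two camera setups.
    public class RigSelector : MonoBehaviour
    {
        [SerializeField] private GameObject pcRig; // regular Unity camera
        [SerializeField] private GameObject vrRig; // OVR Camera Rig

        private void Awake()
        {
            // True when an XR device is initialized and rendering.
            bool vrActive = XRSettings.isDeviceActive;

            vrRig.SetActive(vrActive);
            pcRig.SetActive(!vrActive);
        }
    }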

What went wrong

Now, nothing went disastrously or abysmally wrong, or I would not be writing this from a partially successful point of view. But if I were to redo the project, I would do some parts differently.

  1. Not changing part of the netcode: I would like to declare something: the official Fusion docs are awful. I wish I could write that in a bolder font. I had to scrape through forums, articles, videos, and open-source projects to get an idea of how Fusion handled things, and some of those sources were several years old. Multiple times, I tried an accepted solution only to find out that something had changed under the hood over the years. I should have wrapped or redefined the handful of methods I was using everywhere, so that things wouldn't break every time I wrote something new (I sketch the kind of wrapper I mean after this list). Still, credit where it's due: the scripts that ship with Fusion were commented well enough for me to understand what a script was going to do, even if not what it was not going to do.

  2. Not communicating enough with the other groups: As I said before, my friend and I were only brought in for the programming and Unity-specific shenanigans. All the 3D models, UI, and design were handled by other people. We developed the prototype quickly and were under the impression that we could just plug-and-play the 3D models (an assumption I have found to be dumb over and over for the last decade). Unsurprisingly, that was not the case. When I first opened the scene provided to us (into which we were supposed to integrate our prototype), my laptop's battery went from 80% to 68% in a minute. I know my laptop is not a beast, or even in the upper territory of powerful workstations, but it has been able to handle pretty much anything I have thrown at it. The scene I opened needed 10 gigabytes of RAM. Apparently, every tree was interactive and subject to wind simulation, and there were ~10,000 of them. The site itself is only a few hundred meters long, but trees, rocks, and water planes had been scattered across the entire terrain, including places the player would never be able to reach. Every water plane was simulated too, and there wasn't even any occlusion culling. After I disabled the terrain, the rocks, and all of the water simulation, I could barely navigate the scene in the editor view; I did not even try running the game. And this is not even the worst part, which Sakib pointed out to me. Look at this balcony. Every groove here is part of the mesh. No textures, no normal maps, nothing. The railings alone account for roughly 1 million polygons. For some context, the VR headset we were targeting, the Meta Quest 2, can handle ~750k polygons (a crude triangle-budget check, sketched after this list, would have flagged this long before it reached us). I could go on about resource management, optimization techniques, and all that. But the point is, I take partial blame for this. My peers were evidently not used to game development. I learned that I cannot assume that everyone I work with knows the exact things I happen to know, because it might not be their primary job. I cannot take that for granted.
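For what it's worth, here is the kind of crude editor-side check I mean. This is a sketch, not something we actually ran; the 750k figure is the rough Quest 2 ceiling mentioned above, not a hard limit, and this counts raw mesh data rather than what is actually rendered each frame:

    using UnityEngine;

    // Rough sanity check I wish we had run earlier: sum the triangles
    // of every mesh in the scene and compare against the headset budget.
    public class PolyBudget : MonoBehaviour
    {
        private const long Budget = 750_000;

        [ContextMenu("Count Triangles")]
        private void CountTriangles()
        {
            long tris = 0;
            foreach (MeshFilter mf in FindObjectsOfType<MeshFilter>())
            {
                if (mf.sharedMesh != null)
                    tris += mf.sharedMesh.triangles.Length / 3;
            }
            Debug.Log($"Scene triangles: {tris:N0} (budget: {Budget:N0})");
        }
    }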
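And here is the kind of thin wrapper I wish I had written around Fusion from day one. This is a sketch under my own assumptions, not an official Fusion pattern; the point is simply that if every spawn in the project goes through one place, an API change between Fusion versions breaks one file instead of twenty:

    using Fusion;
    using UnityEngine;

    // Hypothetical wrapper: centralizes our use of Runner.Spawn so that
    // version-to-version changes are contained in a single file.
    public static class Net
    {
        public static NetworkObject Spawn(NetworkRunner runner,
                                          NetworkObject prefab,
                                          Vector3 position,
                                          PlayerRef owner)
        {
            // Isolating the call here contains the damage when the
            // surrounding API shifts under the hood.
            return runner.Spawn(prefab, position, Quaternion.identity, owner);
        }
    }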

Phew. We have gone off on a tangent, but I think this rant was overdue. Let us refocus.

  3. Trying to integrate the VR camera rig into my controller prefab like a regular camera: This reads funny as I type it now. I coded the controller with a swappable camera variable, so that I could use two different camera rigs for the two builds: a regular Unity camera for PC, and the OVR Camera Rig that ships with the Oculus SDK. It worked flawlessly. But what I didn't understand was that the VR camera rig rotates independently of its parent controller. So if the player physically turns 180 degrees and then tries to move forward, they appear to move backward, because the controller's notion of forward never changed. This baffled me, even though Sakib had warned me about it beforehand, because the PC version also took its input relative to the camera's orientation and worked fine; on PC, though, rotating the view rotates the controller transform itself, so the two never diverge. This led to a late-stage bug where the avatar of a VR player would not rotate correctly on a remote player's screen (the eventual fix is sketched below).
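The fix was to derive the movement basis from wherever the player is actually looking, rather than from the controller's own transform. A minimal sketch, where "head" stands in for the OVR centre-eye anchor on VR, or the plain camera on PC:

    using UnityEngine;

    // Minimal sketch of camera-relative movement. "head" is whatever
    // transform faces where the player is looking.
    public class CameraRelativeMovement : MonoBehaviour
    {
        [SerializeField] private Transform head;
        [SerializeField] private float speed = 2f;

        public void Move(Vector2 input)
        {
            // Flatten the head's forward/right onto the ground plane so
            // looking up or down doesn't tilt the walk direction.
            Vector3 fwd = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
            Vector3 right = Vector3.ProjectOnPlane(head.right, Vector3.up).normalized;

            transform.position += (fwd * input.y + right * input.x) * speed * Time.deltaTime;
        }
    }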

That's enough. When I started writing this, I had a few more things in mind that went wrong, but I can't seem to remember them now, so we can leave it at this for today. I will keep an eye out for more VR projects, because despite all the problems, I had fun working on this one. And we did deliver a finished product, so I can't really complain. The source code of the prototype is available at this repository.