Goal and Motivation

We knew that we wanted to work with VR and include physical objects as part of the interactive experience. The physical object ended up being a wooden structure resembling a cannon on a stand. To make the cannon usable while wearing a VR headset, a VIVE Tracker was fitted to it, and the tracking turned out to be surprisingly robust and accurate.

Our overarching goal was to create a graphically appealing interactive experience that embraced full-body movement. We wanted to put effort into the graphics while also making sure that the interaction felt natural and enjoyable. To accomplish this, we chose to get the interaction working as early as possible, which meant building the cannon and setting up the wireless VIVE Tracker. The rest of the time was spent on the graphics, which included water, particle effects, and modeling, among other things.

Explanation and Justification

Graphics

Interaction

We wanted to fully embrace the theme of full-body movement with this project. The decision to build a wooden structure that would work together with a VR environment, without any visible representation of the user's hands, meant that the virtual environment had to be aligned with the physical structure with utmost care. This allowed us to create an interactive and immersive experience in which the user could grab the handle of the physical structure simply by reaching out and grabbing the handle of the virtual cannon.
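Roughly, this alignment can be expressed as a Unity component that drives the virtual cannon from the tracker's pose each frame, with hand-calibrated offsets compensating for where the tracker is mounted on the wooden structure. The sketch below is illustrative rather than our exact code: the class and field names are made up, and it assumes the tracker pose is already exposed on a Transform (for example via the SteamVR plugin).

```csharp
using UnityEngine;

// Minimal sketch: drive the virtual cannon from a VIVE Tracker pose.
// Assumes the tracker pose is exposed on a Transform (for example via
// the SteamVR plugin). The offsets are tuned by hand until the virtual
// handle overlaps the physical one.
public class CannonAlignment : MonoBehaviour
{
    public Transform trackerPose;        // Transform driven by the VIVE Tracker
    public Vector3 positionOffset;       // tracker mount -> cannon origin
    public Vector3 rotationOffsetEuler;  // hand-tuned rotation offset

    void LateUpdate()
    {
        if (trackerPose == null) return;

        // Apply the tracker pose plus the calibration offset so the
        // virtual cannon overlaps the physical wooden structure.
        Quaternion rotationOffset = Quaternion.Euler(rotationOffsetEuler);
        transform.rotation = trackerPose.rotation * rotationOffset;
        transform.position = trackerPose.position + trackerPose.rotation * positionOffset;
    }
}
```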

Obstacles and Lessons learned

When integrating the water shader into the project containing the cannon, ships, and targets, we realised that they had been made for different render pipelines. Since the ocean shader was only compatible with the built-in render pipeline, we had to import everything else into that project.
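A quick way to confirm which pipeline a given project is running is to check GraphicsSettings.currentRenderPipeline, which is null when the built-in pipeline is active. A minimal sketch (the class name is illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: log which render pipeline the project is using.
// currentRenderPipeline is null for the built-in pipeline, which is
// what the ocean shader requires.
public class PipelineCheck : MonoBehaviour
{
    void Start()
    {
        if (GraphicsSettings.currentRenderPipeline == null)
            Debug.Log("Built-in render pipeline (compatible with the ocean shader).");
        else
            Debug.Log("Scriptable render pipeline in use: " +
                      GraphicsSettings.currentRenderPipeline.name);
    }
}
```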

The lack of debugging tools for shader code was a big challenge during development. It was especially hard to find any information on geometry shaders (the type of shader the ocean was built with). Though they are supported by Unity, there is still no official documentation on what they are or how they work.

There was even less information on how to get them working in VR. At first we tried adding macros from forum posts to define inputs and outputs for both eyes, with no progress. After some time we looked at the stereo rendering modes and found that the shader renders to both eyes once you switch from single-pass instanced rendering to multi-pass.
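For reference, this is the stereo rendering mode found under the Player/XR settings; it can also be flipped from an editor script. A minimal sketch, assuming the legacy PlayerSettings API (the menu path and class name are illustrative):

```csharp
#if UNITY_EDITOR
using UnityEditor;

// Minimal sketch: switch the project to multi-pass stereo rendering.
// Single-pass instanced rendering broke the geometry shader, so each
// eye is rendered in its own pass instead.
public static class StereoModeSwitcher
{
    [MenuItem("Tools/Use Multi-Pass Stereo")]
    public static void UseMultiPass()
    {
        PlayerSettings.stereoRenderingPath = StereoRenderingPath.MultiPass;
    }
}
#endif
```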

Media Gallery

Related Work

We found that PhyShare: Sharing Physical Interaction in Virtual Reality by Zhenyi He, Fengyuan Zhu, and Ken Perlin (2017), and Substitutional Reality: Using the Physical Environment to Design Virtual Reality Experiences by L. Beever and N. W. John (2022), were relevant to this project in the way they use interaction between the physical and virtual space.
