I know you! : The importance of accurate self-representation in VR

At the heart of the VR-Together project lies the objective of enabling social VR experiences with strong feelings of immersion and co-presence. To achieve this strong social sense of sharing a space together, photorealistic real-time representations of users are used, rather than the abstract avatars found in offerings such as Facebook Spaces or AltspaceVR. Using state-of-the-art technologies developed by consortium partners and off-the-shelf hardware such as Microsoft Kinect or Intel RealSense sensors, users are scanned in real time, and the captured representations are processed and streamed as point clouds or time-varying meshes (TVMs). These approaches to user representation, combined with HMD removal technology, allow users sharing the virtual space from geographically separate locations to see each other in all three dimensions.
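To make the streaming idea concrete, the Python sketch below packs one captured point cloud frame into a simple wire format and sends it over TCP. This is purely illustrative: the header layout, port and function names are assumptions made for the example, not the consortium's actual protocol.

```python
import socket
import struct

import numpy as np

# Hypothetical wire format: a 12-byte header (frame id, point count),
# followed by packed XYZ positions (float32) and RGB colors (uint8).
HEADER = struct.Struct("<IQ")

def pack_frame(frame_id: int, points: np.ndarray, colors: np.ndarray) -> bytes:
    """Serialize one captured point cloud frame (N x 3 positions and colors)."""
    assert points.shape == (len(points), 3) and colors.shape == (len(points), 3)
    return (HEADER.pack(frame_id, len(points))
            + points.astype("<f4").tobytes()
            + colors.astype(np.uint8).tobytes())

def stream_frames(frames, host="127.0.0.1", port=9766):
    """Send length-prefixed frames to a (hypothetical) rendering endpoint."""
    with socket.create_connection((host, port)) as sock:
        for frame_id, (points, colors) in enumerate(frames):
            payload = pack_frame(frame_id, points, colors)
            sock.sendall(struct.pack("<I", len(payload)) + payload)
```

A renderer on the receiving end would read the length prefix and header, then reconstruct the arrays for display; time-varying meshes would additionally carry connectivity data.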

Early feedback from users of the Pilot 1 demonstrations regarding the ability to see themselves and others has been positive. The question remains, however, whether accurate self-representation has a significant positive impact on the sense of immersion, co-presence and overall quality of experience, both when users see themselves and when they interact with others sharing the same virtual environment.

To answer this question, VR-Together consortium partner Artanim will run an experiment this summer in which users are placed in a virtual environment and embodied by representations of themselves at varying levels of realism and likeness.

User representations will be created at three different levels of accuracy:

  • An abstract, avatar-like representation which does not match the participant
  • A realistic representation of the participant
  • An in-between, more abstract – perhaps cartoon-like – representation of the participant, which is still recognizable but steers clear of undesirable effects such as the “uncanny valley”

To evaluate self-representation, single users will be placed in a virtual environment in which, by means of a virtual mirror, they can observe themselves. The question is whether increased likeness improves the overall VR experience. To evaluate the importance of avatar likeness in the representation of others, pairs of users who know each other (i.e. friends or family) will share a virtual environment, again represented at varying levels of likeness. The goal is to understand the effects on aspects such as immersion, togetherness and quality of interaction.
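As a purely illustrative sketch of the study design, the three conditions listed above could be counterbalanced across participants as follows; the condition labels match the list, but the assignment scheme is an assumption, not the published protocol.

```python
from itertools import permutations

# The three self-representation conditions described above.
CONDITIONS = ["abstract", "cartoon-like", "realistic"]

# All six presentation orders; cycling through them counterbalances
# order effects across participants.
ORDERS = list(permutations(CONDITIONS))

def order_for(participant_id: int) -> tuple:
    """Assign each participant one of the six condition orders."""
    return ORDERS[participant_id % len(ORDERS)]

for pid in range(6):
    print(pid, order_for(pid))
```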

The proposed experiment will help us better understand what scenarios benefit most from realistic and recognizable user representation in Virtual Reality experiences, and to what extent realism is desirable in social VR.

Your body feels good

Do you feel in control of the body that you see? This is an important question in virtual reality (VR), as it strongly impacts the user’s sensation of presence and of embodying an avatar while immersed in a virtual environment. To better understand this aspect, we performed an experiment within the framework of the VR-Together project to assess the relative impact of different levels of body animation fidelity on presence.

In this experiment, users are equipped with a motion capture suit and reflective markers to track their movements in real time with a Vicon optical motion capture system. They also wear Manus VR gloves for finger tracking and an Oculus HMD. In each trial, the avatar’s face (eye gaze and mouth), fingers, and upper and lower body are animated with different degrees of fidelity: no animation, procedural animation or motion capture. Each time, users execute a number of tasks (walking, grabbing an object, speaking in front of a mirror) and evaluate whether they feel in control of their body. Users start with the simplest setting and, according to their judged priorities, improve features of the avatar animation until they are satisfied with the experience of control.
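One way such per-feature fidelity levels could be modeled is sketched below; the three levels match those named above, but the class and feature names are illustrative rather than taken from the actual experiment software.

```python
from dataclasses import dataclass
from enum import Enum

class Fidelity(Enum):
    NONE = 0        # feature is not animated
    PROCEDURAL = 1  # driven by heuristics (e.g. inverse kinematics)
    MOCAP = 2       # driven by live motion capture

@dataclass
class AvatarAnimationConfig:
    """Per-feature animation fidelity for one trial (names are illustrative)."""
    eye_gaze: Fidelity = Fidelity.NONE
    mouth: Fidelity = Fidelity.NONE
    fingers: Fidelity = Fidelity.NONE
    upper_body: Fidelity = Fidelity.NONE
    lower_body: Fidelity = Fidelity.NONE

    def upgrade(self, feature: str) -> None:
        """Raise one feature to the next fidelity level, if not already maxed."""
        current = getattr(self, feature)
        if current is not Fidelity.MOCAP:
            setattr(self, feature, Fidelity(current.value + 1))

# Start from the simplest setting; the participant then chooses which
# feature to improve first, second, and so on.
config = AvatarAnimationConfig()
config.upgrade("upper_body")
```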

Using the order in which users improve the animation features, we can determine which features are most valuable to them. With this experiment, we want to weigh the relative importance of animation features against their costs of adoption (monetary and effort) in order to provide software and usage guidelines for live animation of 3D rigged character meshes based on affordable hardware. This outcome will help us better define what makes a compelling social VR experience.
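As a sketch of how this ordering data could be analyzed, the snippet below ranks features by the average position at which participants chose to upgrade them; the logs are invented for illustration.

```python
from collections import defaultdict

# Hypothetical logs: for each participant, the order in which they
# improved animation features before being satisfied.
upgrade_logs = [
    ["upper_body", "fingers", "eye_gaze"],
    ["upper_body", "lower_body", "fingers", "mouth"],
    ["fingers", "upper_body", "eye_gaze"],
]

positions = defaultdict(list)
for log in upgrade_logs:
    for rank, feature in enumerate(log):
        positions[feature].append(rank)

# A lower mean rank means the feature was requested earlier, i.e. it
# mattered more to users.
ranking = sorted(positions, key=lambda f: sum(positions[f]) / len(positions[f]))
print(ranking)  # e.g. ['upper_body', 'fingers', 'lower_body', ...]
```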

3DIM @ Montreux Jazz Festival

3D In Motion (3DIM) – our experimental setup for real-time capture, visualization and sonification of movements – was presented at the Montreux Jazz Festival on the 12th of July 2014 during a one-hour workshop. For this occasion, a special sound design was developed by Alain Renaud from MINTLab, based on a mapping of audio tracks from different artists recorded during their visits to the festival. The tracks were generously provided by the Montreux Jazz Lab of EPFL.

Moreover, several improvements were made to the 3DIM application. For instance, an iPad app was implemented to easily switch between sound design scenarios, a prompter was added to provide instructions to the user, and an OSC connection between the sonification and graphical applications was programmed so that sound events can drive visual feedback.
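OSC (Open Sound Control) messages are small UDP packets carrying an address pattern and arguments, which makes them well suited for loosely coupling a sound engine to a graphics application. The sketch below shows such a link using the python-osc library; the message address, port and argument semantics are assumptions for illustration, not the actual 3DIM schema.

```python
import sys

from pythonosc import udp_client
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

ADDRESS = "/3dim/sound_event"  # illustrative address, not the real schema
HOST, PORT = "127.0.0.1", 8000

def run_sonification_side():
    """Sonification application: announce a sound event over OSC."""
    client = udp_client.SimpleUDPClient(HOST, PORT)
    client.send_message(ADDRESS, [440.0, 0.8])  # e.g. pitch and amplitude

def run_graphics_side():
    """Graphical application: trigger visual feedback on sound events."""
    def on_sound_event(address, pitch, amplitude):
        print(f"{address}: flash visuals for pitch={pitch}, amp={amplitude}")

    dispatcher = Dispatcher()
    dispatcher.map(ADDRESS, on_sound_event)
    BlockingOSCUDPServer((HOST, PORT), dispatcher).serve_forever()

if __name__ == "__main__":
    # Run the two sides as separate processes:
    #   python osc_demo.py send    /    python osc_demo.py recv
    run_sonification_side() if sys.argv[1:] == ["send"] else run_graphics_side()
```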

Again, the feedback from the public was positive. We look forward to developing a first live performance using this system.

More information about the 3DIM project here.

A demo video of the setup, including the sonification part, will be available soon. Stay tuned!

3DIM @ Electron Festival

In collaboration with Alain Renaud from MINTLab, we gave a workshop at the Electron Festival in Geneva on the 20th of April. On this occasion, we presented 3D In Motion (3DIM), our experimental setup for real-time capture, visualization and sonification of movements. About twenty people participated in the workshop, where they had the opportunity to test the system and learn more about the underlying techniques.

It was the first time 3DIM was officially presented to the public and the feedback was positive. This gave us a good basis for further developing the system both in terms of visualization and musicality. The next demonstration will take place in June at the New Interfaces for Musical Expression (NIME) Conference in London.

More information about the 3DIM project here.

Full body virtual reality with Oculus Rift and Xsens

Yesterday, together with Tobias Baumann, a freelance game designer, we tested the Oculus Rift with our Xsens MVN suit in Unity 3D. The goal was to virtually hit columns of cubes and balloons. Some simple applications to start with, but they resulted in a nice full-body immersive VR experience. The first tests were convincing, and we will definitely continue working on this topic with Tobias. Stay tuned!