Do you feel in control of the body that you see? This is an important question in virtual reality (VR), as it strongly affects the user’s sensation of presence and of embodying an avatar while immersed in a virtual environment. To better understand this aspect, we performed an experiment in the framework of the VR-Together project to assess the relative impact of different levels of body animation fidelity on presence.
In this experiment, the users are equipped with a motion capture suit and reflective markers to track their movements in real time with a Vicon optical motion capture system. They also wear Manus VR gloves for finger tracking and an Oculus HMD. In each trial, the avatar’s face (eye gaze and mouth), fingers, upper body and lower body are animated with different degrees of fidelity: no animation, procedural animation or motion capture. Each time, users have to perform several tasks (walking, grabbing an object, speaking in front of a mirror) and to evaluate whether they feel in control of their body. Users start with the simplest setting and, according to their judged priority, improve features of the avatar animation until they are satisfied with the experience of control.
Using the order in which users improve the movement features, we can determine which animation features are most valuable to users. With this experiment, we want to weigh the relative importance of animation features against their adoption costs (money and effort) to provide software and usage guidelines for live 3D rigged character mesh animation based on affordable hardware. This outcome will be useful to better define what makes a compelling social VR experience.
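As an illustration of the design space such an experiment covers, the sketch below enumerates every combination of per-feature fidelity levels. The feature and level names are illustrative labels, not the study's actual implementation:

```python
from itertools import product

# Hypothetical labels for the animated features and fidelity levels
# described above; the experiment's actual condition names are not specified.
FEATURES = ["face", "fingers", "upper_body", "lower_body"]
LEVELS = ["none", "procedural", "motion_capture"]

def all_conditions():
    """Enumerate every combination of per-feature fidelity levels."""
    for combo in product(LEVELS, repeat=len(FEATURES)):
        yield dict(zip(FEATURES, combo))

conditions = list(all_conditions())
print(len(conditions))  # 3 levels ^ 4 features = 81 combinations
```

With 81 possible combinations, letting users upgrade features incrementally (rather than testing every condition) keeps the session length practical while still revealing which upgrades matter most.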
We will soon start shooting cinematic content to showcase the technology developed by the VR-Together consortium. In this post, we present some of the production work under way at Artanim, which is currently exploring the use of Apple’s iPhone X face tracking technology in its 3D animation production pipeline.
The photos below show the iPhone X holding rig, and an early version of the face tracking recording tool that was developed by Artanim. The tool integrates with full body and hands motion capture technology from Vicon to allow the simultaneous recording of body, hands and face performance from multiple actors.
With the recent surge of consumer virtual reality, interest in motion capture has dramatically increased. The iPhone X and Apple’s ARKit SDK, which integrate depth sensing and facial animation technologies, are a good example of this trend. Apple’s effort to bring advanced face tracking to its mobile devices may be related to its acquisitions of PrimeSense and FaceShift: the former was involved in developing the technology powering the first Kinect back in 2010, while the latter is recognized for its face tracking technology, briefly showcased in the making-of trailer for Star Wars: The Force Awakens. These are exciting times, when advanced motion tracking technologies are becoming ubiquitous in our lives.
Image from the iPhone X keynote presentation from Apple
Artanim participated in the Nuit de la Science on July 9-10, 2016 at the park of La Perle du Lac. Through different interactive and immersive installations, the team showed the public how motion capture technologies can be used to better understand the movement of your joints, immerse yourself in virtual worlds or turn you into an avatar.
Artanim just added a new tool to its motion capture equipment: the Optitrack Insight VCS, a professional virtual camera system. From now on, everyone doing motion capture at Artanim will be able to step into the virtual set, preview or record real camera movement and find the best angles to view the current scene.
The motion data captured by our Vicon system is processed in real time in MotionBuilder and displayed on the camera monitor. The position of MotionBuilder’s virtual camera is updated from the positions of the reflective markers on the camera rig. In addition, the camera operator can control several parameters such as the camera zoom, horizontal panning, etc. The rig itself is very flexible and can be modified to accommodate different shooting styles (shoulder-mounted, hand-held, etc.).
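The idea of driving a virtual camera from rig markers can be sketched as follows. This is a simplified illustration with invented marker names, not the actual Vicon/MotionBuilder solver, which handles full 6-DoF tracking:

```python
import math

def rig_transform(markers):
    """Estimate a virtual camera position and yaw from labelled
    rig marker positions given as (x, y, z) tuples.

    The marker layout ('front', 'back', etc.) is hypothetical;
    a real pipeline fits a full rigid-body transform instead.
    """
    # Camera position: centroid of all rig markers
    xs, ys, zs = zip(*markers.values())
    n = len(markers)
    position = (sum(xs) / n, sum(ys) / n, sum(zs) / n)
    # Yaw: direction from the back marker to the front marker
    fx, fy, _ = markers["front"]
    bx, by, _ = markers["back"]
    yaw = math.degrees(math.atan2(fy - by, fx - bx))
    return position, yaw

pos, yaw = rig_transform({
    "front": (1.0, 0.0, 1.5), "back": (-1.0, 0.0, 1.5),
    "left": (0.0, 1.0, 1.5), "right": (0.0, -1.0, 1.5),
})
print(pos, yaw)  # (0.0, 0.0, 1.5) 0.0
```

In practice the solved transform is streamed into MotionBuilder each frame, so the virtual camera mirrors the physical rig's movement with no perceptible lag.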
We can’t wait to use it in future motion capture sessions and show you some results. Meanwhile, you can have a look at our first tests in the above video.
3D In Motion (3DIM) – our experimental setup for capturing, visualizing and sonifying movements in real time – was presented at the Montreux Jazz Festival on July 12, 2014 during a one-hour workshop. For this occasion, a special sound design was developed by Alain Renaud from MINTLab, mapping audio tracks recorded by different artists during their visits to the festival. The tracks were generously provided by the Montreux Jazz Lab of EPFL.
Moreover, several improvements were made to the 3DIM application. For instance, an iPad app was implemented to easily switch between sound design scenarios, a prompter was added to provide instructions to the user, and an OSC connection between the sonification and graphical applications was programmed to drive visual feedback from sound events.
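An OSC link like the one between the two applications boils down to small UDP packets in a simple wire format. The sketch below hand-encodes a minimal OSC message with float arguments to show the format; the address pattern is an invented example, and a real project would typically use an OSC library rather than encoding by hand:

```python
import struct

def _pad(b: bytes) -> bytes:
    """Null-pad a byte string to the next multiple of 4, as OSC requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message carrying float32 arguments.

    Layout: padded address pattern, padded type tag string
    (',' followed by one 'f' per argument), then the arguments
    as big-endian float32 values.
    """
    tags = "," + "f" * len(floats)
    msg = _pad(address.encode()) + _pad(tags.encode())
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg

# e.g. notify the graphical application that a sound event fired,
# with one intensity argument ("/sound/event" is a hypothetical address)
packet = osc_message("/sound/event", 0.5)
```

Sending `packet` over UDP to the graphical application's listening port is all the "connection" amounts to, which is why OSC is a popular glue layer between audio and visual tools.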
Again, the feedback from the public was positive. We look forward to developing a first live performance using this system.
Very excited to expose our technology at the @IBCShow and to spend these days with other great projects and researchers! Come to see us! #IBC2018 😎 (let's pretend these are VR goggles) https://t.co/ackneFbllS