3D character creation for interactive VR

The third year of VRTogether is in full swing and, despite the recent pandemic scare, our consortium is hard at work creating the technology and content for our third-year pilot. As we have described before, Pilot 3 will involve an interactive scenario in which users take on an active role, trying to solve the mysterious murder of Elena Armova.

Pilot 1 involved a one-take interrogation scene observed by our users, and was created using three different content formats: 360-degree video, a 3D scene with billboard video, and a 3D scene with 3D characters. For this last option, the actors were captured using a custom photogrammetry rig from Artanim with more than 90 cameras, after which their 3D meshes were reconstructed and subsequently rigged and animated using mocap data. The three formats allowed us to evaluate their trade-offs in a VR context. The results indicated that, in general, users preferred the video billboards for their visual quality and natural appearance, but their main shortcoming is that the illusion only holds as long as user movement remains very limited.

Pilot 3 puts the users at the scene of the crime. Standing at and moving between different locations in Elena Armova’s apartment, they observe and interact with the characters in the scene. Given the aforementioned shortcomings of a video billboard solution, this means we will see the return of 3D characters. An additional downside of a billboard solution would have been the difficulty of maintaining seamless continuity across a variety of recordings. The interactivity inherent to the plot will see our characters respond to the users’ actions, wait for their response and progress along several possible paths. To seamlessly blend these various branches of our timeline, the only truly feasible option is to opt for 3D characters driven by full performance capture (motion capture of the full body, including fingers and face).
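To make the branching idea concrete, here is a minimal sketch of how such an interactive scene could be represented: each node plays a performance-capture clip and picks the next node based on the user's action, with branches later rejoining a shared node so the timeline stays continuous. It is purely illustrative; the node names, clip names and input types are invented for this example and are not taken from the actual Pilot 3 scenario.

```swift
// Purely illustrative: node names, clips and input types are invented,
// not taken from the actual Pilot 3 scenario.
enum UserInput {
    case answered(choice: Int)
    case timedOut
}

struct SceneNode {
    let id: String
    let animationClip: String          // performance-capture clip to play
    let next: (UserInput) -> String    // picks the next node from the user's action
}

// The character reacts differently to each answer, and both branches later
// rejoin a shared node so the timeline remains continuous.
let question = SceneNode(id: "question", animationClip: "sarge_asks_alibi", next: { input in
    switch input {
    case .answered(let choice):
        return choice == 0 ? "reaction_calm" : "reaction_suspicious"
    case .timedOut:
        return "prompt_again"
    }
})
```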

Having evaluated the 3D characters of Pilot 1, the consortium wanted to aim for a higher visual quality in Pilot 3. While a commercial 3D photogrammetry solution was considered, both financial and scheduling constraints made it impractical. Luckily, content creation tools have not stopped evolving over the last few years; Reallusion’s Character Creator ecosystem, alongside other tools, will be used to create our 3D characters. Characters created with the tools in this ecosystem are easily animated in real-time 3D engines such as Unity, with motion capture data driving everything from the body and hands to the facial animation.
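As a rough illustration of that last point, the sketch below shows how per-frame facial blendshape weights can drive a character's morph targets at runtime. The pilot itself uses Unity; the example uses Apple's SceneKit simply because its API is compact, and the frame format and morph target names are assumptions for illustration only.

```swift
import SceneKit

// Illustrative only: the pilot uses Unity, but the idea is the same there.
// The frame format and morph target names are assumptions.
struct FaceFrame {
    let time: TimeInterval
    let weights: [String: CGFloat]   // e.g. ["jawOpen": 0.42, "mouthSmileLeft": 0.13]
}

// Apply one captured frame of facial weights to a rigged head node.
func apply(_ frame: FaceFrame, to headNode: SCNNode) {
    guard let morpher = headNode.morpher else { return }
    for (target, weight) in frame.weights {
        // Each captured coefficient drives the morph target of the same name.
        morpher.setWeight(weight, forTargetNamed: target)
    }
}
```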

Besides executing the motion capture, Artanim and its artists have taken on the task of creating our high-quality virtual characters. The first step in the process is collecting as much reference material as possible. The COVID-19 pandemic made taking in-studio headshots impossible, but some of the actors involved had already been photographed as part of Pilot 1, while others supplied as much reference material as they could. This includes high-quality photos of their heads from all angles, full-body shots, and a set of body measurements to make sure their virtual counterparts match their own morphology. Ensuring a close match between the two also simplifies the retargeting stage when post-processing the motion capture.
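As an illustration of how such measurements can feed the pipeline, the sketch below computes per-segment scale factors between a template avatar and an actor. The measurement set and segment names are hypothetical and do not reflect Artanim's actual retargeting setup.

```swift
// Hypothetical measurement set; per-segment ratios between the actor and a
// template avatar give the scale factors used to line the two up.
struct BodyMeasurements {
    let height: Double          // metres
    let armLength: Double
    let legLength: Double
    let shoulderWidth: Double
}

func scaleFactors(actor: BodyMeasurements, template: BodyMeasurements) -> [String: Double] {
    [
        "spine":    actor.height        / template.height,
        "arm":      actor.armLength     / template.armLength,
        "leg":      actor.legLength     / template.legLength,
        "shoulder": actor.shoulderWidth / template.shoulderWidth,
    ]
}

// An actor 5% taller than the template yields a 1.05 spine scale, so the
// retargeted mocap skeleton and the virtual body line up with less cleanup.
```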

Reference headshots of actor Jonathan D. Mellor who plays the character of Sarge.

Modern AI approaches have evolved significantly in recent years, leading to major advances in image processing in general, and in obtaining 3D information from monocular views in particular. These developments find their way into content creation tools such as Character Creator’s Headshot AI plugin, which is used as the basis for our characters’ heads. Such tools do a commendable job from a single image of a single view, but creating a more faithful likeness still involves quite a lot of manual work: taking the generated output as a starting point, the artist carefully modifies the 3D mesh to more closely match each actor’s morphology.

The AI-generated output (left) needs significant artist intervention to get a good likeness (right), in particular for faces with asymmetrical features.

This is then followed by the addition of hair geometry, textures, shaders and materials to obtain a final, real-time, VR-ready result.

After the addition of hair, textures and material setup, Sarge’s head is complete.

Without the use of 3D scans, the creation of the body is still a largely artistic process. Starting from a basic avatar body, it is up to the artist to adjust the base mesh to the actor’s measurements, and to adapt or model 3D clothing and accessories (including meshes, textures and materials) to achieve a high-quality end result.

The end result is a high-quality 3D representation of our actors that can be used in interactive real-time VR scenarios for an exciting immersive experience.

Shooting of the VR-Together Pilot 1

Artanim collaborated with Entropy Studio on the shooting of the first pilot of the VR-Together project. After a short flight from Madrid to Geneva, the actors were 3D scanned with our photogrammetric scanner, which consists of 96 cameras, to obtain the 3D surface of their bodies. Steve Galache (known for his work on The Vampire in the Hole, 2010; El cosmonauta, 2013; and Muertos comunes, 2004), Jonathan David Mellor (known for The Wine of Summer, 2013; Refugiados, 2014; and [Rec]², 2009) and Almudena Calvo played the main characters in this first experience. They were scanned wearing the same costumes as in the shooting scene.

3D body scan

The shooting was split over two days. The first day was dedicated to shooting the actors in costume against a full chroma-key background with a stereo camera. This will allow the creation of photorealistic stereo billboards that will be integrated into a full CG environment. The second day of shooting focused on full performance capture of the actors. Each equipped with 59 retro-reflective markers and a head-mounted iPhone X, the actors successfully performed the investigation plot (an interrogation scene). These data will later be used to animate the 3D models of the actors generated from the 3D scans. These full-CG models will finally be integrated into the same virtual environment.
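As a rough sketch of how the two capture streams could later be brought together, the example below aligns face samples with body frames by nearest timestamp. The frame rates, data structures and the nearest-timestamp approach are assumptions for illustration, not the actual post-processing pipeline.

```swift
import Foundation

// Illustration only: frame rates, data layout and the nearest-timestamp
// approach are assumptions, not the actual post-processing pipeline.
struct BodyFrame { let time: TimeInterval /* joint transforms omitted */ }
struct FaceSample { let time: TimeInterval /* blendshape weights omitted */ }

// For each body frame (e.g. 120 Hz from the optical system), pick the face
// sample (e.g. 60 Hz from the iPhone) closest in time, so face and body
// animation can be replayed together on the same character.
func alignedFaceSamples(body: [BodyFrame], face: [FaceSample]) -> [FaceSample?] {
    guard !face.isEmpty else { return body.map { _ -> FaceSample? in nil } }
    var j = 0
    return body.map { frame -> FaceSample? in
        // Advance while the next face sample is at least as close to this body frame.
        while j + 1 < face.count,
              abs(face[j + 1].time - frame.time) <= abs(face[j].time - frame.time) {
            j += 1
        }
        return face[j]
    }
}
```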

Mocap with iPhone X

This pilot will thus offer two different rendering modalities for real actors (stereo billboards and CG characters). The impact of both techniques will be evaluated through user studies, with a focus on social presence and immersion.

New avenues for capturing facial performance

We will soon start shooting cinematic content to showcase the technology developed by the VR-Together consortium. In this post, we share some of the production work under way at Artanim, which is currently exploring the use of Apple’s iPhone X face tracking technology in its 3D animation production pipeline.

Facial mocap rig

The photos below show the iPhone X holding rig and an early version of the face tracking recording tool developed by Artanim. The tool integrates with Vicon’s full-body and hand motion capture technology to allow the simultaneous recording of body, hand and face performances from multiple actors.
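For readers curious about what such a recording tool involves, below is a minimal sketch of ARKit-based face capture in Swift: it runs a face tracking session and stores timestamped blendshape coefficients for each frame. The recording format is an assumption, and the synchronisation with the Vicon system that Artanim's tool performs is omitted.

```swift
import ARKit

// A minimal sketch, assuming a recording format of timestamped blendshape
// dictionaries; synchronisation with the Vicon system is omitted here.
final class FaceRecorder: NSObject, ARSessionDelegate {
    private let session = ARSession()
    private(set) var frames: [(time: TimeInterval, weights: [String: Float])] = []

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func stop() { session.pause() }

    // Called whenever ARKit updates the tracked face anchor.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first,
              let time = session.currentFrame?.timestamp else { return }
        // Store the ARKit blendshape coefficients (0...1) for this frame.
        let weights = Dictionary(uniqueKeysWithValues:
            face.blendShapes.map { ($0.key.rawValue, $0.value.floatValue) })
        frames.append((time: time, weights: weights))
    }
}
```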

With the recent surge of consumer virtual reality, interest in motion capture has increased dramatically. The iPhone X and ARKit SDK from Apple integrate depth sensing and facial animation technologies, and are a good example of this trend. Apple’s effort to integrate advanced face tracking into its mobile devices may be related to its acquisitions of PrimeSense and FaceShift. The former was involved in developing the technology powering the first Kinect back in 2010; the latter is recognized for its face tracking technology, briefly showcased in the making-of trailer for Star Wars: The Force Awakens. These are exciting times, when advanced motion tracking technologies are becoming ubiquitous in our lives.

Image from Apple’s iPhone X keynote presentation