A New Era For Motion Capture at SIGGRAPH 2023

Vicon debuts its markerless motion capture technology at SIGGRAPH 2023 in collaboration with Artanim, the research arm of leading VR company Dreamscape

Vicon today marks a new era of innovation in the field of motion capture, announcing that it will debut its machine learning (ML)-powered markerless technology at SIGGRAPH 2023 in Los Angeles, in collaboration with the Swiss research institute Artanim and the award-winning VR group Dreamscape Immersive.

The announcement follows nearly three years of research and development focused on the integration of ML and artificial intelligence (AI) into markerless motion capture at Vicon’s renowned R&D facility in Oxford, UK. The project has been undertaken by a new dedicated team, led by CTO Mark Finch, and leverages nearly four decades’ worth of Vicon innovation to enable a seamless, end-to-end markerless infrastructure, from pixel to purpose.

Commenting on the news, Imogen Moorhouse, Vicon CEO, said:

“Today marks the beginning of a new era for motion capture. The ability to capture motion without markers, while maintaining industry-leading accuracy and precision, is an incredibly complex feat. After an initial research phase, we have focused on developing the world-class markerless capture algorithms and the robust real-time tracking, labeling and solving needed to make this innovation a reality. What we are demonstrating at SIGGRAPH is not a one-off concept, or simply a technology demonstrator. It is our first step towards future product launches, which will culminate in a first-of-its-kind platform for markerless motion capture.”

For Dreamscape, markerless motion capture can now provide a more true-to-life adventure than any other immersive VR experience, allowing freer movement and exploration with even less user gear. Commenting on the decision to partner with Vicon, Aaron Grosky, President & COO of Dreamscape, said:

“We have been anxiously awaiting the time when markerless could break from concept into product, when the technology could support the precision required to realize its amazing potential. Vicon’s reputation for delivering the highest standards of motion capture technology for nearly forty years, and Dreamscape’s persistent quest to bring the audience into the experience with full agency and no friction, meant that working together on this was a no-brainer. We’re thrilled with the result. The implications for both quality of experience and ease of operations across all our efforts, from location-based entertainment to transforming the educational journey with Dreamscape Learn, are just game-changing.”

The collaboration between Vicon and Artanim was key to ensuring that the requirements of the VR use case were met.

“Achieving best-in-class virtual body ownership and immersion in VR requires both accurate tracking and very low latency. We spent substantial R&D effort evaluating the computational performance of ML-based tracking algorithms, implementing and fine-tuning the multi-modal tracking solution, and taking the best from full-body markerless motion capture and from VR headset tracking,” said Sylvain Chagué, co-founder and CTO of Artanim and Dreamscape.
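The precise fusion approach is not public, but as a rough illustration of what a multi-modal solution can involve, the sketch below shows one common way to combine a higher-latency full-body solve with the headset’s own low-latency tracking: the body follows the markerless skeleton, while the avatar’s head is pulled toward the most recent HMD sample. All names are hypothetical and the code is purely illustrative, not Vicon’s or Artanim’s implementation.

```python
# Illustrative sketch only: one way to fuse a higher-latency markerless body
# solve with low-latency HMD tracking. Hypothetical, not Vicon/Artanim code.
import numpy as np

def fuse_head_pose(mocap_pos, mocap_quat, hmd_pos, hmd_quat, blend=0.9):
    """Blend the head estimate from the body solve toward the fresh HMD sample.

    mocap_pos/hmd_pos: (3,) positions; mocap_quat/hmd_quat: (4,) unit quaternions,
    all expressed in the same world frame and on a common clock.
    blend: weight given to the HMD sample (1.0 = trust the HMD completely).
    """
    pos = (1.0 - blend) * np.asarray(mocap_pos) + blend * np.asarray(hmd_pos)
    q0 = np.asarray(mocap_quat, dtype=float)
    q1 = np.asarray(hmd_quat, dtype=float)
    if np.dot(q0, q1) < 0.0:                # take the shorter rotation path
        q1 = -q1
    q = (1.0 - blend) * q0 + blend * q1     # nlerp: fine for small angular gaps
    return pos, q / np.linalg.norm(q)
```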

At SIGGRAPH, Vicon and Artanim showcased a six-person, markerless, multi-modal real-time solve, set against Dreamscape’s location-based virtual reality adventure The Clockwork Forest, created in partnership with Audemars Piguet, the Swiss haute horology manufacturer. The experience allows participants to shrink to the size of an ant and explore a mechanical wonderland while working together to repair the Source of Time and restore the rhythm of nature.

With this showcase, Vicon earned a CGW Silver Edge Award for technological innovation. More information about the collaborative R&D project is available here.

 

Photos: Vicon-Artanim team at SIGGRAPH 2023; Vicon booth at SIGGRAPH 2023

Your body feels good

Do you feel in control of the body that you see? This is an important question in virtual reality (VR), as it strongly affects the user’s sense of presence and of embodiment of an avatar while immersed in a virtual environment. To better understand this aspect, we performed an experiment within the framework of the VR-Together project to assess the relative impact of different levels of body animation fidelity on presence.

In this experiment, users are equipped with a motion capture suit and reflective markers so that their movements can be tracked in real time by a Vicon optical motion capture system. They also wear Manus VR gloves for finger tracking and an Oculus HMD. In each trial, the face (eye gaze and mouth), the fingers and the avatar’s upper and lower body are animated with different degrees of fidelity, such as no animation, procedural animation or motion capture. Each time, users have to execute a number of tasks (walk, grab an object, speak in front of a mirror) and evaluate whether they feel in control of their body. Users start with the simplest setting and, in order of judged priority, improve features of the avatar animation until they are satisfied with the experienced sense of control.
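As a concrete way to picture the protocol, the sketch below (hypothetical names, not the project’s actual code) enumerates the fidelity levels per avatar feature and records the order in which a user upgrades them:

```python
# Illustrative sketch of the experimental protocol (not the VR-Together codebase):
# each avatar feature is animated at one of several fidelity levels, and the user
# upgrades one feature at a time until they feel in control of the avatar.

FEATURES = ["face", "fingers", "upper_body", "lower_body"]
LEVELS = ["none", "procedural", "motion_capture"]   # increasing fidelity

def run_trial(pick_next_feature, is_satisfied):
    """Start at the lowest fidelity everywhere and log the upgrade order."""
    state = {f: "none" for f in FEATURES}
    upgrade_order = []
    while not is_satisfied(state):
        upgradable = [f for f in FEATURES if state[f] != LEVELS[-1]]
        if not upgradable:
            break                                # everything already at full fidelity
        feature = pick_next_feature(upgradable)  # the user's judged priority
        state[feature] = LEVELS[LEVELS.index(state[feature]) + 1]
        upgrade_order.append((feature, state[feature]))
    return upgrade_order
```

The two callbacks stand in for the participant’s choices during the trial.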

VR-Together

From the order in which users improve the animation features, we can infer which features are most valuable to them. With this experiment, we want to weigh the relative importance of animation features against their costs of adoption (money and effort), in order to provide software and usage guidelines for live animation of rigged 3D character meshes based on affordable hardware. This outcome will be useful to better define what makes a compelling social VR experience.
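For instance, the upgrade orders collected across users can be aggregated into a single ranking by averaging the position at which each feature was improved; the sketch below (illustrative only, the study’s actual analysis may differ) shows one simple way to do this:

```python
# Illustrative aggregation of per-user upgrade orders into a feature ranking.
from collections import defaultdict

def average_upgrade_rank(sessions):
    """sessions: one upgrade order per user, e.g. ["face", "fingers", ...].
    Returns features sorted by mean upgrade position (earlier = more valued)."""
    positions = defaultdict(list)
    for order in sessions:
        for rank, feature in enumerate(order):
            positions[feature].append(rank)
    return sorted(positions, key=lambda f: sum(positions[f]) / len(positions[f]))

# Example: two users who both chose to improve the upper body first.
print(average_upgrade_rank([["upper_body", "face", "fingers"],
                            ["upper_body", "fingers", "face"]]))
# -> ['upper_body', 'face', 'fingers']
```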

VR-Together

 

Shooting of the VR-Together Pilot 1

Artanim collaborated with Entropy Studio on the shooting of the first pilot of the VR-Together project. After a short flight from Madrid to Geneva, the actors were 3D scanned with our photogrammetric scanner, which consists of 96 cameras, to capture the 3D surface of their bodies. Steve Galache (known for his work on The Vampire in the Hole, 2010; El cosmonauta, 2013; and Muertos comunes, 2004), Jonathan David Mellor (known for The Wine of Summer, 2013; Refugiados, 2014; and [Rec]², 2009) and Almudena Calvo played the main characters of this first experience. For the scans, they were dressed the same way as in the scenes to be shot.

3D body scan

The shooting was split over two days. The first day was dedicated to shooting the costumed actors against a full chroma-key background with a stereo camera. This will allow the creation of photoreal stereo billboards to be integrated into a full CG environment. The second day focused on full performance capture of the actors. Each equipped with 59 retro-reflective markers and a head-mounted iPhone X, the actors successfully performed the investigation plot (an interrogation scene). These data will later be used to animate the 3D models of the actors generated from the 3D scans, and the resulting full-CG models will finally be integrated into the same virtual environment.

Photos: motion capture with a head-mounted iPhone X

This pilot will thus offer two different rendering modalities for real actors (stereo billboards and CG characters). The impact of the two techniques will be assessed through user studies focusing on social presence and immersion.

New avenues for capturing facial performance

We will soon start shooting cinematic content to showcase the technology developed by the VR-Together consortium. In this post, we present some of the production work under way at Artanim, which is currently exploring the use of Apple’s iPhone X face tracking technology in its 3D animation production pipeline.

Facial mocap rig

The photos show the iPhone X holding rig and an early version of the face tracking recording tool developed by Artanim. The tool integrates with Vicon full-body and hand motion capture to allow the simultaneous recording of body, hand and face performances from multiple actors, as sketched below.
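A core problem such a tool has to solve is temporal alignment: the iPhone X delivers face samples at a different rate than the body frames. The sketch below (hypothetical, not Artanim’s actual tool) shows a minimal nearest-timestamp pairing, assuming both streams are already expressed on a common clock:

```python
# Illustrative sketch: pair each body frame with the nearest-in-time face sample.
import bisect

def align_streams(body_frames, face_samples):
    """body_frames: sorted list of (timestamp_s, skeleton), e.g. at 120 Hz.
    face_samples: sorted list of (timestamp_s, blendshape_weights), e.g. at 60 Hz.
    Yields (skeleton, nearest blendshape weights) for each body frame."""
    face_times = [t for t, _ in face_samples]
    for t, skeleton in body_frames:
        i = bisect.bisect_left(face_times, t)
        # step back if the previous face sample is closer in time
        if i > 0 and (i == len(face_times) or t - face_times[i - 1] <= face_times[i] - t):
            i -= 1
        yield skeleton, face_samples[i][1]
```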

With the recent surge of consumer virtual reality, interest in motion capture has dramatically increased. The iPhone X and Apple’s ARKit SDK, which integrate depth sensing and facial animation technologies, are a good example of this trend. Apple’s effort to bring advanced face tracking to its mobile devices may be related to its acquisitions of PrimeSense and FaceShift: the former was involved in developing the technology powering the first Kinect back in 2010, while the latter is recognized for its face tracking technology, briefly showcased in the making-of trailer for Star Wars: The Force Awakens. These are exciting times, when advanced motion tracking technologies are becoming ubiquitous in our lives.

Image from the iPhone X keynote presentation from Apple

Opening of our new offices

Artanim has just moved to new offices in Meyrin, located only minutes away from the city center, with easy access to the highway, the train station and Geneva International Airport. Artanim is now housed in a 273 m² facility with a capture stage twice as big as before. The facility includes offices and a conference room, a sound/post-production studio and a wardrobe/make-up room with a shower.

On December 15th, we inaugurated our new offices with our friends, colleagues and partners. For the occasion, we organized a real-time dance performance using our Xsens suit, in collaboration with Cie Gilles Jobin (dancer: Susana Panadés Diaz).

Artanim opening 2015