A New Era For Motion Capture at SIGGRAPH 2023

Vicon debuts its markerless motion capture technology at SIGGRAPH 2023, in collaboration with Artanim, the research arm of leading VR company Dreamscape

Vicon today marks a new era of innovation in the field of motion capture, announcing that it will debut its machine learning (ML)-powered markerless technology at SIGGRAPH 2023 in Los Angeles, in collaboration with the Swiss research institute Artanim and the award-winning VR group Dreamscape Immersive.

The announcement follows nearly three years of research and development focusing on the integration of ML and artificial intelligence (AI) into markerless motion capture at Vicon’s renowned R&D facility in Oxford, UK. The project has been undertaken by a new dedicated team, led by CTO Mark Finch, and leverages nearly four decades’ worth of Vicon innovation to enable a seamless, end-to-end markerless infrastructure, from pixel to purpose.

Commenting on the news, Imogen Moorhouse, Vicon CEO, said:

“Today marks the beginning of a new era for motion capture. The ability to capture motion without markers, while maintaining industry-leading accuracy and precision, is an incredibly complex feat. After an initial research phase, we have focused on developing the world-class markerless capture algorithms, robust real-time tracking, labeling and solving needed to make this innovation a reality. What we are demonstrating at SIGGRAPH is not a one-off concept, or simply a technology demonstrator. It is our first step towards future product launches, which will culminate in a first-of-its-kind platform for markerless motion capture.”

For Dreamscape, markerless motion capture can now provide a more true-to-life adventure than any other immersive VR experience, allowing for more free-flowing movement and exploration with even less user gear. Commenting on the decision to partner with Vicon, Aaron Grosky, President & COO of Dreamscape, said:

“We have been anxiously awaiting the time when markerless could break from concept into product, when the technology could support the precision required to realize its amazing potential. Vicon’s reputation for delivering the highest standards of motion capture technology for nearly forty years, and Dreamscape’s persistent quest to bring the audience into the experience with full agency and no friction, meant that working together on this was a no-brainer. We’re thrilled with the result. The implications for both quality of experience and ease of operations across all our efforts, from location-based entertainment to transforming the educational journey with Dreamscape Learn, are just game-changing.”

The collaboration between Vicon and Artanim was key to ensuring that the requirements of the VR use case were met.

“Achieving best-in-class virtual body ownership and immersion in VR requires both accurate tracking and very low latency. We spent substantial R&D effort evaluating the computational performance of ML-based tracking algorithms, implementing and fine-tuning the multi-modal tracking solution, and combining the best of full-body markerless motion capture and VR headset tracking capabilities,” said Sylvain Chagué, co-founder and CTO of Artanim and Dreamscape.

At SIGGRAPH, Vicon and Artanim showcased a six-person, markerless, multi-modal real-time solve, set against Dreamscape’s location-based virtual reality adventure The Clockwork Forest, created in partnership with Audemars Piguet, the Swiss haute horlogerie manufacturer. The experience allows participants to shrink to the size of an ant and explore a mechanical wonderland while working together to repair the Source of Time and restore the rhythm of nature.

With this showcase, Vicon earned a CGW Silver Edge Award for technological innovation. More information about the collaborative R&D project is available here.

 

Vicon-Artanim team at SIGGRAPH 2023; Vicon booth at SIGGRAPH 2023

Your body feels good

Do you feel in control of the body that you see? This is an important question in virtual reality (VR), as it strongly affects the user’s sense of presence and of embodiment of an avatar while immersed in a virtual environment. To better understand this aspect, we performed an experiment within the framework of the VR-Together project to assess the relative impact of different levels of body animation fidelity on presence.

In this experiment, users are equipped with a motion capture suit and reflective markers so that their movements can be tracked in real time with a Vicon optical motion capture system. They also wear Manus VR gloves for finger tracking and an Oculus HMD. In each trial, the face (eye gaze and mouth), the fingers and the avatar’s upper and lower body are animated with different degrees of fidelity: no animation, procedural animation or motion capture. Each time, users have to execute a number of tasks (walk, grab an object, speak in front of a mirror) and evaluate whether they feel in control of their body. Users start with the simplest setting and, according to their judged priorities, improve features of the avatar animation until they are satisfied with the experience of control, as sketched below.
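A minimal Python sketch of that trial logic is given below: every avatar feature starts at the lowest fidelity, and the participant repeatedly upgrades whichever feature they judge most important until they report being satisfied. The function names and data structures are illustrative assumptions, not the actual experiment code.

```python
# Hypothetical sketch of the fidelity-upgrade protocol described above.
FIDELITY_LEVELS = ["none", "procedural", "motion_capture"]
FEATURES = ["face", "fingers", "upper_body", "lower_body"]

def run_trial(ask_satisfied, choose_feature):
    """ask_satisfied() -> bool; choose_feature(candidates) -> feature name."""
    state = {f: 0 for f in FEATURES}   # index into FIDELITY_LEVELS per feature
    upgrade_order = []                 # the quantity we actually analyze
    while not ask_satisfied():
        upgradable = [f for f in FEATURES
                      if state[f] < len(FIDELITY_LEVELS) - 1]
        if not upgradable:
            break                      # every feature already at full mocap
        feature = choose_feature(upgradable)   # the user's judged priority
        state[feature] += 1
        upgrade_order.append((feature, FIDELITY_LEVELS[state[feature]]))
    return upgrade_order
```

The recorded upgrade order is the measurement of interest: features chosen earlier are, by construction, the ones the participant valued most.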

VR-Together

From the order in which users improve the movement features, we can infer which animation features are most valuable to them. With this experiment, we want to weigh the relative importance of animation features against their costs of adoption (monetary and effort), in order to provide software and usage guidelines for live 3D rigged character mesh animation based on affordable hardware. This outcome will help to better define what makes a compelling social VR experience.

VR-Together

 

Shooting of the VR-Together Pilot 1

Artanim collaborated with Entropy Studio on the shooting of the first pilot of the VR-Together project. After a short flight from Madrid to Geneva, the actors were 3D scanned with our photogrammetric scanner, consisting of 96 cameras, to obtain the 3D surface of their bodies. Steve Galache (known for his work on The Vampire in the Hole, 2010; El cosmonauta, 2013; and Muertos comunes, 2004), Jonathan David Mellor (known for The Wine of Summer, 2013; Refugiados, 2014; and [Rec]², 2009) and Almudena Calvo played the main characters of this first experience. They wore the same costumes as in the shooting scene.

3D body scan

The shooting was split over two days. The first day was dedicated to shooting the costumed actors against a complete chroma background with a stereo camera. This will allow the creation of photoreal stereo-billboards that will be integrated into a full-CG environment. The second day focused on full performance capture of the actors. Each equipped with 59 retro-reflective markers and a head-mounted iPhone X, the actors successfully performed the investigation plot (an interrogation scene). These data will later be used to animate the 3D models of the actors generated from the scans, and the resulting full-CG models will finally be integrated into the same virtual environment.

Mocap with iPhone X

This pilot project will thus offer two different rendering modalities for real actors: stereo-billboards and CG characters. The impact of both techniques on social presence and immersion will be studied through user studies.

A step into virtual cinematography

Artanim has just added a new tool to its motion capture equipment: the OptiTrack Insight VCS, a professional virtual camera system. From now on, everyone doing motion capture at Artanim will be able to step into the virtual set, preview or record real camera movements, and find the best angles on the current scene.

The OptiTrack Insight VCS, shoulder-mounted

The motion capture data captured by our Vicon system is processed in real time in MotionBuilder and displayed on the camera monitor. The position of MotionBuilder’s virtual camera is updated from the positions of the reflective markers on the camera rig. In addition, the camera operator can control several parameters such as camera zoom, horizontal panning, etc. The rig itself is very flexible and can be reconfigured for different shooting styles (shoulder-mounted, hand-held, etc.).
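To make the mapping concrete, here is a minimal Python sketch of the per-frame update described above: the tracked pose of the marker cluster on the physical rig drives the virtual camera one-to-one, and operator inputs such as zoom and horizontal panning are layered on top. All names and the simplified pose representation are assumptions for illustration; in the actual setup this mapping happens inside MotionBuilder.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple    # (x, y, z) of the rig's marker cluster, studio coordinates
    rotation: tuple    # (yaw, pitch, roll) in degrees, kept simple for the sketch

@dataclass
class VirtualCamera:
    pose: Pose
    focal_length_mm: float = 35.0

def update_virtual_camera(cam: VirtualCamera, rig_pose: Pose,
                          zoom_input: float, pan_offset_deg: float = 0.0) -> None:
    """Apply one frame of tracked rig pose plus operator controls."""
    yaw, pitch, roll = rig_pose.rotation
    # The virtual camera follows the physical rig, with the operator's
    # horizontal panning added as an extra yaw offset.
    cam.pose = Pose(rig_pose.position, (yaw + pan_offset_deg, pitch, roll))
    # Zoom input (1.0 = neutral) scales the focal length, clamped to a
    # plausible lens range.
    cam.focal_length_mm = min(200.0, max(12.0, 35.0 * zoom_input))
```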

We can’t wait to use it in future motion capture sessions and to show you some results. Meanwhile, you can have a look at our first tests in the video above.

Virtual cinematography

Bread mocap – A tribute to Charlie Chaplin

We were recently contacted to perform the motion capture for an upcoming short movie entitled “The Great Imitator”, created by Boris Beer. This short animated movie will be a tribute to Charlie Chaplin. Without giving away too many details, the goal of the shooting was to capture some iconic scenes from Chaplin’s most famous movies.

For example, the first scene we captured was the one from The Great Dictator in which Chaplin plays with an inflatable globe. We also had to capture the famous nut-screwing scene from Modern Times, as well as some scenes from The Kid. Fabrice Bessire (the actor) did a great job reinterpreting Chaplin in those scenes.

Finally, among the selected scenes was the famous “bread roll dance” from The Gold Rush, in which Charlie Chaplin creates a small ballet by giving life to two forks and two bread rolls in order to entertain his friends. As you can see in the pictures, this capture required a very specific and unique bread motion capture setup (patent pending!).

We will talk about this short film again when it is finished. Stay tuned!

The famous bread roll dance from The Gold Rush; bread mocap; Charlie Chaplin mocap

Mocap in the dark

Setup for the project “Motion and unconsciousness”

Last week we tested the motion capture protocol for the research project Motion and unconsciousness. The setup is uncommon: subjects are asked to execute movements in the dark while still being able to see the other participant’s hands thanks to phosphorescent tape. They are also equipped with a respiration sensor and with headphones playing white noise to isolate them from external stimuli.

100 volunteers will take part in the study and will be divided into different groups according to specific criteria. One group will also be captured with simultaneous EEG recording. The goal of the project is to compare the subjective sensation of synchrony with objective measures of motor coordination and synchronization derived from the motion capture data when two people are engaged in joint action tasks.
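As one example of what the objective side of that comparison could look like, the sketch below computes a lag-scanned cross-correlation between two participants’ hand-speed signals, giving both the strength of their coordination and the lag of one mover relative to the other. This is an assumed analysis for illustration, not the project’s actual pipeline.

```python
import numpy as np

def movement_synchrony(speed_a: np.ndarray, speed_b: np.ndarray,
                       fs: float, max_lag_s: float = 0.5):
    """Return (peak correlation, lag in seconds of B relative to A).

    speed_a, speed_b: equal-length hand-speed time series sampled at fs Hz.
    """
    # Z-score both signals so the correlation is scale-invariant.
    a = (speed_a - speed_a.mean()) / speed_a.std()
    b = (speed_b - speed_b.mean()) / speed_b.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # Correlate a[t] with b[t + lag] over the overlapping samples.
    corr = [np.corrcoef(a[max(0, -l): len(a) - max(0, l)],
                        b[max(0, l): len(b) - max(0, -l)])[0, 1]
            for l in lags]
    best = int(np.argmax(corr))
    return corr[best], lags[best] / fs
```

Comparing such a measure against each pair’s reported sensation of synchrony is the kind of subjective-versus-objective analysis the project describes.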