A New Era For Motion Capture at SIGGRAPH 2023

At SIGGRAPH 2023, Vicon debuts its markerless motion capture technology in collaboration with Artanim, the research arm of leading VR company Dreamscape

Vicon today marks a new era of innovation in the field of motion capture, announcing that it will debut its machine learning (ML) powered markerless technology at SIGGRAPH 2023 in Los Angeles, in collaboration with Artanim, the Swiss research institute, and the award-winning VR group Dreamscape Immersive.

The announcement follows nearly three years of research and development focused on the integration of ML and artificial intelligence (AI) into markerless motion capture at Vicon’s renowned R&D facility in Oxford, UK. The project has been undertaken by a new dedicated team, led by CTO Mark Finch, and leverages nearly four decades’ worth of Vicon innovation to enable a seamless, end-to-end markerless infrastructure, from pixel to purpose.

Commenting on the news, Imogen Moorhouse, Vicon CEO, said:

“Today marks the beginning of a new era for motion capture. The ability to capture motion without markers, while maintaining industry-leading accuracy and precision, is an incredibly complex feat. After an initial research phase, we have focused on developing the world-class markerless capture algorithms and the robust real-time tracking, labeling and solving needed to make this innovation a reality. What we are demonstrating at SIGGRAPH is not a one-off concept, or simply a technology demonstrator. It is our first step towards future product launches, which will culminate in a first-of-its-kind platform for markerless motion capture.”

For Dreamscape, markerless motion capture can now provide a more true-to-life adventure than any other immersive VR experience, allowing for more free-flowing movement and exploration with even less user gear. Commenting on the decision to partner with Vicon, Aaron Grosky, President & COO of Dreamscape, said:

“We have been anxiously awaiting the time when markerless could break from concept into product, where the technology could support the precision required to realize its amazing potential. Vicon’s reputation for delivering the highest standards of motion capture technology for nearly forty years, and Dreamscape’s persistent quest to bring the audience into the experience with full agency and no friction, meant that working together on this was a no-brainer. We’re thrilled with the result. The implications for both quality of experience and ease of operations across all our efforts, from location-based entertainment to transforming the educational journey with Dreamscape Learn, are just game-changing.”

The collaboration between Vicon and Artanim was key to ensuring that the requirements of the VR use case were met.

“Achieving best-in-class virtual body ownership and immersion in VR requires both accurate tracking and very low latency. We spent substantial R&D effort evaluating the computational performance of ML-based tracking algorithms, implementing and fine-tuning the multi-modal tracking solution, and combining the best of full-body markerless motion capture and VR headset tracking capabilities,” said Sylvain Chagué, co-founder and CTO of Artanim and Dreamscape.

At SIGGRAPH, Vicon and Artanim showcased a six-person, markerless, multi-modal real-time solve, set against Dreamscape’s location-based virtual reality adventure The Clockwork Forest, created in partnership with Audemars Piguet, the Swiss haute horology manufacturer. The experience allows participants to shrink to the size of an ant and explore a mechanical wonderland while working together to repair the Source of Time and restore the rhythm of nature.

With this showcase, Vicon earned a CGW Silver Edge Award for technological innovation. More information about the collaborative R&D project is available here.

 

Vicon and Artanim team at SIGGRAPH 2023 | Vicon booth at SIGGRAPH 2023

Creating varied virtual populations

Virtual avatars play a key role in many of our activities. They embody the actors in our productions, serve as reliable subjects in our machine learning efforts, or are the virtual bodies we inhabit when going on an adventure in one of our experiences. In cases where an avatar needs to represent a specific character or historical figure, the skilled work of our artists is often sufficient to create the final result.

Sometimes, however, we need a lot of variation, whether to allow users to create avatars that more closely match their own appearance, or to create datasets with an appropriate – and often overlooked – degree of diversity for machine learning efforts. In such cases, the creation of tailor-made avatars is no longer feasible. It is with this in mind that we started what we call our Synthetic Factory efforts.

The basic premise of the Synthetic Factory is a simple idea. Given a neutral default character, and a set of outfits, footwear, and hair styles it can wear, can we create a set of size, shape, gender, and ethnicity changes which can be applied in real-time to create any subject we would like? The potential benefit is clear to see. Artists can simply design an outfit or a hairstyle which applies well onto a base model, and the Synthetic Factory would take care of the variations.

At the core of the Synthetic Factory is a default avatar model. Artists can take this model and deform it to give it a certain characteristic, whether that is a gender-appropriate change in body type, a specific set of facial features, or an overall change in body size. The Synthetic Factory takes these inputs, as well as meta-data characterizing the “deformation”, and encodes them as blend shapes which can be applied on a weighted basis, smoothly blending between the original shape and the blend shape target. At runtime, either a specific set of deformations and weights can be selected, or a set of broad characteristics can be supplied – such as “Female”, “Adult”, “Indian” – after which random deformations and weights are selected which are appropriate for those characteristics.

The base model (center) with two of its 80 blend shapes.
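
To make the mechanism concrete, here is a minimal sketch in Python (not Artanim’s actual code) of how weighted blend shapes can be applied, and of how weights might be drawn at random for a set of broad characteristics. The array shapes, tag sets, and function names are assumptions made for illustration.

```python
import random

import numpy as np

def apply_blend_shapes(base_vertices, targets, weights):
    """Blend smoothly between the neutral mesh and the blend shape targets.

    base_vertices: (V, 3) array holding the neutral avatar mesh.
    targets:       dict name -> (V, 3) per-vertex offsets (target - base).
    weights:       dict name -> float in [0, 1].
    """
    deformed = np.asarray(base_vertices, dtype=float).copy()
    for name, weight in weights.items():
        deformed += weight * np.asarray(targets[name])
    return deformed

def random_weights(target_tags, characteristics, rng=None):
    """Draw a random weight for each deformation whose (hypothetical)
    meta-data tags overlap the requested characteristics."""
    rng = rng or random.Random()
    return {name: rng.uniform(0.0, 1.0)
            for name, tags in target_tags.items()
            if tags & set(characteristics)}
```

With characteristics such as {"Female", "Adult", "Indian"}, only the deformations annotated with one of those tags would receive a non-zero weight.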

Any assets applied on top of this model, be they outfits, footwear, hair styles and the like, need to be able to deform appropriately given any changes made to the body blend shapes. The Synthetic Factory automates this process in a pre-processing step. After applying the asset to the neutral base model, a mapping is created, encoding how the asset fits the model. Once this mapping is complete, the factory runs through all the body blend shapes, and then determines how the asset needs to deform to keep the appropriate mapping in relation to the body.  

Three avatars wearing the same outfit at three different proportions.
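
As an illustration of this pre-processing step, the sketch below binds each asset vertex to its nearest body vertex and then copies that vertex’s offset for every body blend shape. The nearest-vertex binding is a simplifying assumption chosen for brevity, not a description of the factory’s actual mapping.

```python
import numpy as np
from scipy.spatial import cKDTree

def bind_asset_to_body(asset_vertices, body_vertices):
    """Pre-processing: map every asset vertex to its nearest body vertex."""
    _, nearest = cKDTree(np.asarray(body_vertices)).query(np.asarray(asset_vertices))
    return nearest  # (A,) indices into the body mesh

def transfer_blend_shapes(binding, body_blend_shapes):
    """Derive a matching asset blend shape for every body blend shape by
    copying the offset of the bound body vertex."""
    return {name: np.asarray(offsets)[binding]
            for name, offsets in body_blend_shapes.items()}
```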

These deformations are then once again stored as blend shapes, this time as a part of the asset. This precomputation happens once, and at runtime no further heavy computation is required. All assets get annotated with a set of meta-data, making sure they are only ever combined in a way matching the model’s chosen characteristics.
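
The meta-data gate could look something like the following sketch, where the tag names and the subset rule are hypothetical: an asset is only offered when all of its tags are covered by the model’s chosen characteristics.

```python
# Hypothetical asset annotations: asset name -> required characteristics.
ASSET_META = {
    "summer_dress": {"Female", "Adult"},
    "school_uniform": {"Child"},
    "work_boots": {"Adult"},
}

def compatible_assets(characteristics):
    """Return the assets whose tags are all matched by the chosen characteristics."""
    return [name for name, tags in ASSET_META.items()
            if tags <= set(characteristics)]

print(compatible_assets({"Female", "Adult", "Indian"}))
# -> ['summer_dress', 'work_boots']
```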

As a part of its run-time tools, the Synthetic Factory also enables the modification of the skeleton driving the avatar. Either through the specification of exact body measurements matching a specific dataset, or by supplying an overall target height, what we call the “AvatarBuilder” will generate a new skeleton matching the requirements and apply it to the avatar.
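
A rough sketch of the idea behind the “AvatarBuilder”, with made-up bone names and default lengths: exact measurements override the defaults, otherwise the default skeleton is scaled uniformly to the requested height.

```python
from dataclasses import dataclass

# Hypothetical default segment lengths (metres) for a 1.75 m reference avatar.
DEFAULT_BONES = {"spine": 0.50, "upper_leg": 0.45,
                 "lower_leg": 0.42, "upper_arm": 0.30, "lower_arm": 0.27}
DEFAULT_HEIGHT = 1.75

@dataclass
class Skeleton:
    bone_lengths: dict
    height: float

def build_skeleton(target_height=None, measurements=None):
    """Build a skeleton from exact per-bone measurements (e.g. matching a
    dataset) or by uniformly scaling the defaults to a target height."""
    if measurements is not None:
        bones = {**DEFAULT_BONES, **measurements}
        return Skeleton(bones, target_height or DEFAULT_HEIGHT)
    scale = target_height / DEFAULT_HEIGHT
    return Skeleton({name: length * scale for name, length in DEFAULT_BONES.items()},
                    target_height)
```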

Combine these tools with more fine-grained randomizations – appropriate skin tones, as well as color and texture randomizations on outfits – and, from a relatively compact set of basic inputs, you can generate a large variety of diverse avatars fit for any purpose. In the video below, you can see the result for 36 randomly created avatars based on gender, age, and ethnicity inputs.
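
For the fine-grained randomizations mentioned above, a simple sketch might jitter an outfit’s base colour in HSV space and pick a skin tone from a small palette; the palette and ranges below are purely illustrative and would, in practice, be tied to the avatar’s chosen characteristics.

```python
import colorsys
import random

# Purely illustrative skin tone palette (RGB values in [0, 1]).
SKIN_TONE_PALETTE = [(0.55, 0.38, 0.26), (0.72, 0.53, 0.38),
                     (0.85, 0.65, 0.50), (0.94, 0.80, 0.70)]

def jitter_colour(rgb, max_hue_shift=0.05, max_sat_shift=0.15):
    """Randomly shift the hue and saturation of an outfit's base colour."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + random.uniform(-max_hue_shift, max_hue_shift)) % 1.0
    s = min(1.0, max(0.0, s + random.uniform(-max_sat_shift, max_sat_shift)))
    return colorsys.hsv_to_rgb(h, s, v)

def random_skin_tone():
    return random.choice(SKIN_TONE_PALETTE)
```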