Markerless Mocap

Killing the markers

Project Info

Start date:
September 2020

End date:
September 2023

Funding:

Coordinator:
Artanim

Summary

Current motion capture (mocap) solutions used in Location-based Virtual Reality (LBVR) experiences rely on a set of either active or passive infrared markers to track the user’s movements within the experience space. While this offers stable, low-latency tracking ideally suited to VR, equipping users with such markers is cumbersome, and maintaining and cleaning these devices takes time.

The Markerless Mocap project aims to leverage modern Machine Learning (ML) approaches to create a markerless solution for capturing users in LBVR scenarios. The low-latency requirement imposed by this scenario is the primary challenge for the project, with an ideal photon-to-skeleton latency on the order of 50 ms or less. To achieve this goal, the project focuses on pose estimation approaches that strike a good balance between accuracy and speed. The markerless pipeline consists of three stages: processing the raw camera input at 60 frames per second, estimating the 2D poses of the subjects in each view, and assembling the final full 3D skeleton. To handle this computationally heavy task at the desired latency, the overall markerless pipeline leverages the massively parallel compute capabilities of modern CPUs and GPUs, allowing us to optimize every stage of the computations involved.
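To make the three-stage flow concrete, here is a minimal sketch of the last two stages: per-view 2D pose estimation followed by 3D assembly via linear (DLT) triangulation. It assumes calibrated cameras with known 3×4 projection matrices and a hypothetical `estimate_2d_pose` detector; it illustrates the general technique, not Artanim's actual implementation.

```python
# Minimal multi-view pose pipeline sketch. Assumptions: calibrated
# cameras (known 3x4 projection matrices) and a hypothetical 2D pose
# detector passed in as `estimate_2d_pose`.
import numpy as np

def triangulate_joint(proj_mats, points_2d):
    """Triangulate one joint from N views via the linear DLT method.

    proj_mats: list of N (3, 4) camera projection matrices.
    points_2d: N pairs of the joint's pixel coordinates, one per view.
    Returns the joint position as a (3,) array in world coordinates.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the 3D point X:
        # u * (P[2] @ X) = P[0] @ X  and  v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous least squares: X is the right singular vector of A
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def solve_frame(frames, proj_mats, estimate_2d_pose, n_joints=23):
    """One pipeline tick: per-view 2D pose estimation, then 3D assembly."""
    # Stage 2: 2D joint estimates in each camera view (batched on the GPU
    # in a real system; sequential here for clarity).
    poses_2d = [estimate_2d_pose(img) for img in frames]  # each (n_joints, 2)
    # Stage 3: assemble the full 3D skeleton joint by joint.
    return np.stack([
        triangulate_joint(proj_mats, [p[j] for p in poses_2d])
        for j in range(n_joints)
    ])
```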

Another critical aspect is the training stage of any ML approach, which generally requires large amounts of annotated input data. While a variety of public datasets is available in the field of pose estimation, they are often limited to a single view (whereas we require multi-view data for our multi-camera setups), and their annotations are not necessarily precise. Rather than record our own real-life dataset, the project opts to leverage our expertise in avatar creation in what we call our Synthetic Factory. Based on a single avatar model equipped with a variety of blend shapes, we can create a large variety of distinct avatars by specifying broad characteristics such as age, gender and ethnicity. Add in a variety of body proportions, skin tones, outfits, footwear, hairstyles and animations, and you get a virtually unlimited set of subjects that can be recorded from any angle, with their underlying skeleton forming the ground-truth annotation. This dataset forms the basis of all of our training efforts.
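As an illustration of how such a synthetic factory can be organized, the sketch below randomly samples avatar configurations and pairs each rendered view with its exactly known skeleton. All parameter names and ranges are invented, and the `render` and `pose_skeleton` callables are placeholders for a real avatar and rendering pipeline.

```python
# Illustrative synthetic-data sampling sketch. All parameter names,
# ranges and the render/pose_skeleton calls are hypothetical stand-ins
# for a real 3D avatar pipeline.
import random

BODY_PARAMS = {
    "age":        lambda: random.randint(18, 75),
    "gender":     lambda: random.choice(["female", "male"]),
    "height_cm":  lambda: random.uniform(150, 200),
    "skin_tone":  lambda: random.randint(0, 9),    # index into a palette
    "outfit":     lambda: random.randint(0, 49),   # index into a wardrobe
    "hairstyle":  lambda: random.randint(0, 29),
}

def sample_subject():
    """Draw one random avatar configuration from the parameter space."""
    return {name: sample() for name, sample in BODY_PARAMS.items()}

def generate_dataset(n_subjects, views_per_subject, render, pose_skeleton):
    """Yield (image, ground_truth_joints) pairs for training.

    The renderer produces an image of the configured avatar from a given
    camera angle; the skeleton pose is known exactly by construction, so
    the annotation is free and perfectly accurate.
    """
    for _ in range(n_subjects):
        subject = sample_subject()
        for view in range(views_per_subject):
            joints_3d = pose_skeleton(subject)      # ground truth
            image = render(subject, camera_index=view)
            yield image, joints_3d
```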

This project was performed in collaboration with Vicon, the world-leading mocap provider. The first results of this collaboration were presented at SIGGRAPH 2023 on Vicon’s exhibition booth, showcasing a six-person, markerless and multi-modal real-time solve, set in Dreamscape’s LBVR adventure The Clockwork Forest. With this showcase, Vicon earned a CGW Silver Edge Award for technological innovation.

Partners

Artanim
VR requirements definition, ML-based tracking algorithms evaluation, synthetic factory development, multi-modal tracking solution implementation and fine-tuning

Vicon
Hardware development, ML-based tracking algorithms implementation, real-time solve

VR+4CAD

Closing the VR-loop around the human in CAD

Project Info

Start date:
June 2020

End date:
November 2021

Funding:
Innosuisse

Coordinator:
Artanim

Website:
https://vrplus4cad.artanim.ch/

Summary

VR+4CAD aims to tackle the issues limiting the use of VR in manufacturing and design, as highlighted by industry: the lack of full-circle interoperability between VR and CAD, the friction experienced when entering a VR world, and the limited feedback provided about the experience for later analysis.

In this project, we enable CAD-authored designs to be automatically converted and adapted for (virtual) human interaction within a virtual environment. Interaction is made more immediate by means of an experimental, markerless motion capture system that relieves the user from wearing dedicated devices. The data acquired by motion capture during each session is analyzed via activity recognition techniques and transformed into implicit feedback. Explicit and implicit feedback are then merged and sent back to the CAD operator for the next design iteration.
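As a sketch of what that feedback loop could look like in code, the snippet below turns captured motion segments into implicit feedback records and merges them with the user's explicit remarks. The `classify_activity` model and the record fields are assumptions made for illustration; the project's actual data formats are not described here.

```python
# Feedback-loop sketch. `classify_activity` is a hypothetical model that
# labels a mocap segment; the record layout is invented for illustration.
from dataclasses import dataclass

@dataclass
class FeedbackRecord:
    source: str       # "implicit" (derived from mocap) or "explicit" (user)
    activity: str     # e.g. "reach", "crouch", "assemble"
    duration_s: float
    note: str = ""

def implicit_feedback(mocap_segments, classify_activity):
    """Turn captured motion segments into implicit feedback records."""
    records = []
    for segment in mocap_segments:
        label, duration = classify_activity(segment)
        records.append(FeedbackRecord("implicit", label, duration))
    return records

def merge_feedback(implicit, explicit):
    """Combine both channels into one report for the CAD operator,
    ordered so explicit user remarks come first."""
    return sorted(implicit + explicit, key=lambda r: r.source != "explicit")
```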

Partners

Artanim
Markerless motion capture, VR activity annotation

Scuola universitaria professionale della Svizzera italiana (SUPSI)
CAD/VR interoperability, human activity recognition

VR-Together

Social VR experiences

Project Info

Start date:
October 2017

End date:
December 2020

Funding:
EU H2020

Coordinator:
Fundació i2CAT

Website:
http://vrtogether.eu/

Summary

VR-Together offers new ground-breaking virtual reality (VR) experiences based on social, photorealistic immersive content. For this purpose, the project developed and assembled an end-to-end pipeline integrating state-of-the-art technologies and off-the-shelf components. The challenge of VR-Together was to create photorealistic, truly social VR experiences in a cost-effective manner, producing and delivering immersive media through innovative capture, encoding, delivery and rendering technologies. The project demonstrated the scalability of its approach to producing and delivering immersive content across three pilots. It introduced new methods for social VR evaluation and quantitative platform benchmarking for both live and interactive content production, thus providing production and delivery solutions with significant commercial value.

In this project, Artanim worked on content production for the three pilots, combining VR with offline and real-time motion capture, and developed new tools for immersive media production. We also participated in the evaluation of the developed social VR experiences.

Partners

Fundació i2CAT (Spain)

Netherlands Organisation for Applied Scientific Research (The Netherlands)

Centrum Wiskunde & Informatica (The Netherlands)

Centre for Research & Technology Hellas (Greece)

Viaccess-Orca (France)

Entropy Studio (Spain)

Motion Spell (France)

Artanim

Related Publications

Galvan Debarba H, Montagud M, Chagué S, Lajara J, Lacosta I, Fernandez Langa S, Charbonnier C. Content Format and Quality of Experience in Virtual Reality, Multimed Tools Appl, 2022.

Revilla A, Zamarvide S, Lacosta I, Perez F, Lajara J, Kevelham B, Juillard V, Rochat B, Drocco M, Devaud N, Barbeau O, Charbonnier C, de Lange P, Li J, Mei Y, Lawicka K, Jansen J, Reimat N, Subramanyam S, Cesar P. A Collaborative VR Murder Mystery using Photorealistic User Representations, Proc. IEEE Conf. on Virtual Reality and 3D User Interfaces (VRW 2021), IEEE, pp. 766, March 2021.

Chatzitofis A, Saroglou L, Boutis P, Drakoulis P, Zioulis N, Subramanyam S, Kevelham B, Charbonnier C, Cesar P, Zarpalas D, Kollias S, Daras P. HUMAN4D: A Human-Centric Multimodal Dataset for Motions & Immersive Media, IEEE Access, 8:176241-176262, 2020.

Galvan Debarba H, Chagué S, Charbonnier C. On the Plausibility of Virtual Body Animation Features in Virtual Reality, IEEE Trans Vis Comput Graph, In Press, 2020.

De Simone F, Li J, Galvan Debarba H, El Ali A, Gunkel S, Cesar P. Watching videos together in social Virtual Reality: an experimental study on user’s QoE, Proc. 2019 IEEE Virtual Reality, IEEE Comput. Soc, March 2019.

CycloVR

VR rehabilitation device

Project Info

Start date:
April 2019

End date:
September 2019

Funding:
University Hospitals of Geneva

Coordinator:
University Hospitals of Geneva

Summary

Immobilization and prolonged bed rest cause deleterious consequences at the musculoskeletal, pulmonary, cardiac, endocrine and metabolic levels. Physical exercise is an effective way to maintain muscle strength and limit these adverse effects.

The University Hospitals of Geneva developed the “Cyclo” rehabilitation device, which adapts to the patient’s physical abilities irrespective of the nature of his/her pathology (neurological, orthopedic, etc.) and his/her conditions of hospitalization (bed, chair, wheelchair). Several exercise scenarios are proposed according to the type of movement (circular or linear).

In this project, Artanim integrated a virtual reality (VR) module into the device in order to immerse patients, moving actively or passively, in a VR world that lets them forget the clinical context (or even the pain) for the duration of care. For example, a patient can pedal along a river (as in the video above) or take a bike ride in the countryside at his/her own pace from a hospital bed. The goal is also to gamify the rehabilitation session in order to motivate patients to do physical exercises.
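The core of such a VR module is a simple mapping from the measured pedal cadence to movement speed in the virtual scene, so the ride always matches the patient's own pace. The sketch below illustrates one plausible mapping; the gain and clamp values are invented, and the `avatar.advance` call stands in for a real engine API.

```python
# Illustrative cadence-to-speed mapping. Gain and clamp values are made
# up; the device interface is a placeholder for the real cycloergometer.
def virtual_speed(cadence_rpm, gain=0.08, max_speed=6.0):
    """Map pedal cadence (revolutions per minute) to avatar speed (m/s),
    clamped so the ride stays comfortable regardless of patient effort."""
    return min(max(cadence_rpm * gain, 0.0), max_speed)

def update_avatar(avatar, cadence_rpm, dt):
    """Advance the avatar along its path each frame at the patient's pace."""
    avatar.advance(distance=virtual_speed(cadence_rpm) * dt)
```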

Partners

University Hospitals of Geneva – Division of Intensive Care
Development of the cycloergometer, medical supervision

Artanim
VR module, gamification