Char4VR

Interactive characters for VR experiences

Project Info

Start date:
September 2020

End date:
December 2023

Funding:

Coordinator:
Artanim

Summary

Virtual Reality (VR) opens the possibility of developing a new kind of content format: one that is experienced as a rich narrative, just as movies or theater plays are, while also allowing interaction with autonomous virtual characters. The experience would feel like watching autonomous characters unfold a consistent story plot within an interactive VR simulation. Players could choose to participate in the events and affect some parts of the story, or simply watch how the plot unfolds around them.

The main challenge in realizing this vision is that current techniques for interactive character animation are designed for video games, and do not offer the subtle multi-modal interaction that VR users spontaneously expect. The main goal of this project is to explore different techniques for interactive character animation that help create characters able to engage with players in a more compelling way. To achieve this goal, we use a combination of research techniques derived from computer graphics, machine learning and cognitive psychology.

Related Publications

Llobera J, Charbonnier C. Physics-based character animation and human motor control, Phys Life Rev, 46:190-219, 2023.

Llobera J, Jacquat V, Calabrese C, Charbonnier C. Playing the mirror game in virtual reality with an autonomous character, Sci Rep, 12:21329, 2022.
PDF

Llobera J, Charbonnier C. Physics-based character animation for Virtual Reality, Open Access Tools and Libraries for Virtual Reality, IEEE VR Workshop, March 2022. Best Open Source Tool Award.
PDF

Llobera J, Booth J, Charbonnier C. Physics-based character animation controllers for videogame and virtual reality production, 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, November 2021.
PDF

Llobera J, Booth J, Charbonnier C. New Techniques on Interactive Character Animation, SIGGRAPH ’21: short course, ACM, New York, NY, USA, August 2021.

Llobera J, Charbonnier C. Interactive Characters for Virtual Reality Stories, ACM International Conference on Interactive Media Experiences (IMX ’21), ACM, New York, NY, USA, June 2021.
PDF

Markerless Mocap

Killing the markers

Project Info

Start date:
September 2020

End date:
September 2023

Funding:

Coordinator:
Artanim

Summary

Current motion capture (Mocap) solutions used in Location-based Virtual Reality (LBVR) experiences rely on a set of either active or passive infrared markers to track the user’s movements within the experience space. While this offers stable, low-latency tracking ideally suited to a VR scenario, equipping users with such markers is cumbersome, and the maintenance and cleaning of these devices takes time.

The Markerless Mocap project aims to leverage modern Machine Learning (ML) approaches to create a markerless solution for the capture of users in LBVR scenarios. The low-latency requirements imposed by this scenario are the primary challenge for the project, with an ideal photon-to-skeleton latency on the order of 50 ms or less. To achieve this goal, the project focuses on pose-estimation approaches that strike a good balance between accuracy and speed. The markerless pipeline consists of three stages: processing the raw camera input at 60 frames per second, estimating the 2D poses of subjects in each view, and assembling the final full 3D skeleton. To manage this computationally heavy task at the desired latency, the pipeline leverages the massively parallel computation abilities of modern CPUs and GPUs, allowing us to optimize every stage of the computations involved.
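The overall flow can be summarized as a minimal sketch of the three stages and the latency budget described above. All function names are hypothetical placeholders with stubbed inputs, not the actual implementation:

```python
import time
import numpy as np

FRAME_INTERVAL_S = 1.0 / 60.0   # cameras deliver frames at 60 fps
LATENCY_BUDGET_S = 0.050        # target photon-to-skeleton latency (~50 ms)

def grab_frames(n_cameras: int) -> list:
    """Stage 1: pull synchronized raw images from every camera (stubbed)."""
    return [np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(n_cameras)]

def estimate_2d_poses(frames: list) -> list:
    """Stage 2: per-view 2D keypoints from the pose estimator (stubbed)."""
    return [np.zeros((17, 2)) for _ in frames]   # e.g. 17 COCO-style joints

def assemble_skeleton(poses_2d: list) -> np.ndarray:
    """Stage 3: fuse the per-view 2D estimates into one 3D skeleton (stubbed)."""
    return np.zeros((17, 3))

def run_once(n_cameras: int = 8) -> np.ndarray:
    t0 = time.perf_counter()
    skeleton = assemble_skeleton(estimate_2d_poses(grab_frames(n_cameras)))
    latency_ms = (time.perf_counter() - t0) * 1000.0
    if latency_ms > LATENCY_BUDGET_S * 1000.0:
        print(f"over budget: {latency_ms:.1f} ms")
    return skeleton

skeleton = run_once()
```

In a real pipeline the three stages would be overlapped across successive frames, with the GPU-bound 2D estimation running concurrently with frame acquisition, so that throughput keeps pace with the 60 fps cameras while end-to-end latency stays within budget; the sketch runs them sequentially for clarity.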

Another critical aspect is the training stage of any ML approach, which generally requires large amounts of annotated input data. While a variety of public datasets is available in the field of pose estimation, they are often limited to a single view (whereas our multi-camera setups require multi-view data), and their annotations are not necessarily precise. Rather than record our own real-life dataset, the project opts to leverage our expertise in the creation of avatars in what we call our Synthetic Factory. Based on a single avatar model equipped with a variety of blend shapes, we can create a large variety of distinct avatars by providing broad characteristics such as age, gender and ethnicity. Add in a variety of body proportions, skin tones, outfits, footwear, hairstyles, and animations, and you get a virtually unlimited set of subjects you can record from any angle, with their underlying skeleton forming the ground-truth annotation. This dataset then forms the basis of all of our training efforts.
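At its core, such a factory amounts to sampling avatar parameters at scale. The parameter names and value ranges below are illustrative assumptions, not the actual generator; the point is that every random draw yields a distinct, fully annotated subject:

```python
import random

# Illustrative parameter spaces; the real Synthetic Factory exposes broad
# characteristics (age, gender, ethnicity, ...) plus outfits, footwear,
# hairstyles and animation clips.
BLEND_SHAPE_AXES = ["age", "gender", "ethnicity", "body_proportions"]
OUTFITS = ["casual", "sportswear", "business"]
HAIRSTYLES = ["short", "long", "curly", "bald"]

def sample_avatar(rng: random.Random) -> dict:
    """Draw one synthetic subject from the parametric avatar model."""
    return {
        # blend-shape weights in [0, 1] morph the single base mesh
        "blend_shapes": {axis: rng.random() for axis in BLEND_SHAPE_AXES},
        "skin_tone": rng.random(),
        "outfit": rng.choice(OUTFITS),
        "hairstyle": rng.choice(HAIRSTYLES),
        "animation_clip": rng.randrange(1000),      # index into a motion library
        "camera_yaw_deg": rng.uniform(0.0, 360.0),  # record from any angle
    }

rng = random.Random(42)
dataset = [sample_avatar(rng) for _ in range(10_000)]
```

Because each subject is rendered from a rigged avatar, the underlying skeleton is known exactly for every frame, which is precisely the ground truth a real-life recording would lack.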

This project was performed in collaboration with Vicon, the world-leading mocap provider. The first results of this collaboration were presented at SIGGRAPH 2023 on their exhibition booth, showcasing a six-person, markerless and multi-modal real-time solve, set against Dreamscape’s LBVR adventure The Clockwork Forest. With this showcase, Vicon earned a CGW Silver Edge Award for technological innovation.

Partners

Artanim
VR requirements definition, ML-based tracking algorithms evaluation, synthetic factory development, multi-modal tracking solution implementation and fine-tuning

Vicon
Hardware development, ML-based tracking algorithms implementation, real-time solve

Mocap RSA

ROM analysis of RSA

Project Info

Start date:
January 2022

End date:
June 2023

Funding:
La Tour Hospital

Coordinator:
Artanim

Summary

The goal of this project was to motion capture and simulate reverse shoulder prostheses (RSA) to evaluate post-operative ranges of motion during daily living activities. More specifically, we were interested in better understanding the resulting glenohumeral and scapulo-thoracic motions, as well as kinematic changes after RSA.

The most challenging aspect of the project was to accurately reconstruct the post-operative prostheses of the patients. Indeed, the presence of metallic implants in CT images can cause substantial artifacts, which makes the 3D reconstruction difficult to perform. To solve this issue, patients were scanned post-operatively with a dedicated imaging protocol using a cone beam CT with reduced ionizing radiation, and a registration technique was developed to align the patients’ pre-operative 3D reconstructed bony models (scapula, humerus) and the CAD models of the implants with the post-operative images.
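The core of such a registration step is a rigid alignment between two point sets. The sketch below shows a generic least-squares (Kabsch) alignment of the kind that underlies this class of techniques; it is an illustration under simplified assumptions (known point correspondences, synthetic data), not the project’s actual method:

```python
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Least-squares rotation R and translation t mapping source onto target."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy stand-in for a pre-op bone surface and its pose in the post-op scan
rng = np.random.default_rng(0)
preop_points = rng.normal(size=(500, 3))
theta = 0.3
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
postop_points = preop_points @ true_R.T + np.array([5.0, 0.0, -2.0])

R, t = rigid_register(preop_points, postop_points)
aligned = preop_points @ R.T + t                  # now overlays postop_points
```

In practice the correspondences are not known in advance, so a step like this is typically iterated (as in ICP) or driven by image-based similarity, and metal artifacts make the post-operative surfaces far noisier than this toy example.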

In this project, Artanim was responsible for the segmentation and reconstruction of pre-operative and post-operative 3D bony and implants models, as well as for the dynamic simulation of reverse shoulder prostheses from motion capture data. The Haute Ecole de Santé was in charge of the post-operative registration.

Partners

Artanim
Motion capture and simulation of reverse shoulder prostheses

Haute Ecole de Santé (HEdS)
3D registration of RSA models

La Tour Hospital
Clinical tests

MRgHIFU Ablation of Liver Tumors

Minimally invasive intervention

Project Info

Start date:
July 2019

End date:
June 2023

Funding:
Swiss National Science Foundation (SNSF)

Coordinator:
HUG – Division of Radiology

Summary

High intensity focused ultrasound (HIFU) is a precise method to thermally ablate deep-seated tumors in a non-invasive manner. A prerequisite for a safe and effective application of HIFU is image guidance, to plan and control the ablation process. The most suitable imaging modality is MRI, with its high soft tissue contrast and its ability to monitor tissue temperature changes (MR-guided HIFU, MRgHIFU). The therapy of abdominal organs, such as the liver, still poses several problems due to a moving target location caused by breathing, motion-related MR-thermometry artifacts, and near-field obstacles such as the thoracic cage or bowel.

The goal of this project is to treat unresectable liver malignancies with MRgHIFU, using fast volumetric ablation. This research envisages an enlarged acoustic window and an enhanced focusing number for the HIFU applicator, rapid automatic control of volumetric sonication, and self-scanning of the lesion by exploiting the respiratory motion.

In this project, Artanim was in charge of the segmentation of the target region (tumor) and the organs at risk, as well as the computation of the optimal transducer positioning of the robotic assistance system, taking into account the segmented structures and the coverage of the target area.
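As described in the related publication below, this positioning problem can be cast as an optimization solved with Particle Swarm Optimization (PSO). The following sketch shows a generic PSO loop over a toy 2-DOF pose with a stand-in objective; the actual system optimizes the full transducer pose against an acoustic coverage criterion derived from the segmented anatomy:

```python
import numpy as np

def coverage_score(pose: np.ndarray) -> float:
    """Toy objective (higher is better): stand-in for acoustic coverage of the
    target minus penalties for beam paths crossing organs at risk."""
    x, y = pose
    return -(x - 1.0) ** 2 - (y + 0.5) ** 2       # single optimum at (1, -0.5)

def pso(objective, dim=2, n_particles=30, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    best_pos = pos.copy()                          # per-particle best poses
    best_val = np.array([objective(p) for p in pos])
    g = best_val.argmax()                          # index of the swarm best
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = (0.7 * vel
               + 1.5 * r1 * (best_pos - pos)       # cognitive pull
               + 1.5 * r2 * (best_pos[g] - pos))   # social pull
        pos += vel
        vals = np.array([objective(p) for p in pos])
        improved = vals > best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        g = best_val.argmax()
    return best_pos[g], best_val[g]

pose, score = pso(coverage_score)                  # converges near (1.0, -0.5)
```

PSO suits this kind of problem because the objective (covering the target while sparing organs at risk) is non-convex and derivative-free, and the swarm explores many candidate transducer poses in parallel.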

Partners

University Hospitals of Geneva – Division of Radiology
Imaging, data analysis, transducer development, clinical tests and project coordination

Artanim
Segmentation of target region and organs at risk, transducer positioning

University Hospitals of Geneva – Division of Oncology
Clinical tests

Related Publications

M’Rad Y, Charbonnier C, Elias de Oliveira M, Guillemin PC, Crowe LA, Kössler T, Poletti P-A, Boudabbous S, Ricoeur A, Salomir R, Lorton O. Computer-Aided Intra-Operatory Positioning of an MRgHIFU Applicator dedicated to Abdominal Thermal Therapy using Particle Swarm Optimization, IEEE Open J Eng Med Biol, IEEE, 2644-1276:1-11, 2024.
PDF

Lorton O, Guillemin PC, M’Rad Y, Peloso A, Boudabbous S, Charbonnier C, Holman R, Crowe LA, Gui L, Poletti P-A, Ricoeur A, Terraz S, Salomir R. A Novel Concept of a Phased-Array HIFU Transducer Optimized for MR-Guided Hepatic Ablation: Embodiment and First In-Vivo Studies, Front Oncol, 30(12):899440, 2022.
PDF

Aboard the Mayflower

A VR experience

Project Info

Client:
International Museum of the Reformation

Year:
2020

Summary

In November 1620, the passengers and crew of the Mayflower reached the coast of Massachusetts and established a colony there. Among them were Puritan Reformers who had crossed the ocean to found a community in accordance with their aspirations. In the USA, this episode is considered a founding moment of the country.

This VR installation immerses users in the life of these pioneers just before they decide to land. Using a very light set-up based on Oculus Quest, you begin your journey by discovering details of the Reformation Wall in Geneva before being transported inside the Mayflower in 1620 to witness the discussions between the pilgrims. Finally, you discover the bay of Cape Cod from the upper deck of the ship in the cold winter.

The exhibition was open from October 28th, 2020 until August 31st, 2021 at MIR – International Museum of the Reformation.

Credits

Original idea and scientific committee

MIR – Gabriel de Montmollin, Hanna Woodhead

Production, direction, screenplay, 3D content creation, gameplay, photogrammetry and motion capture
Artanim

Actors
Vincent Coppey, José Lillo, Edmond Vuilloux

Sound design
ActMedia – Felicien Fleury, Alain Renaud

Voice over
Sabina Foeth

Dubbing
Tony Clark, Jonathan Cotton, Jan Peter Richter, William Sage, James Smillie, Thomas Stecher