by artanim | Mar 21, 2022
Project Info
Start date:
April 2015
End date:
December 2023
Funding:
–
Coordinator:
Artanim
Summary
Since 2015, Artanim has invented and continued to develop the VR technology driving Dreamscape Immersive. This multi-user immersive platform combines a 3D environment – seen and heard through a VR headset – with a real-life stage set incorporating haptic feedback elements.
The user’s movements are sparsely captured in real time and translated into a full-body animation thanks to a deep understanding of body mechanics. Unlike static-position VR systems, the platform allows up to eight users to feel truly immersed in a VR scene. They are rendered as characters (avatars) inside a computer-generated world where they can move physically, interact with objects and other players, and experience worlds previously accessible only in their imagination. The users’ bodies thus become the interface between the real and virtual worlds.
The platform combines the following features:
- Wireless: complete freedom of movement across large spaces.
- Social: interactive multi-user experiences within a shared environment or across connected pods.
- Accurate: full-body and physical-object tracking with less than one millimeter of deviation.
- Real-time: no discernible lag, eliminating most causes of motion sickness.
- Flexible: an SDK allowing content creators to easily build experiences for the platform.
This platform is leveraged by Dreamscape Immersive through their worldwide location-based VR entertainment centers, as well as through educational and training programs and other enterprise solutions.
The platform was also used to produce VR_I, the first-ever choreographic work in immersive virtual reality, as well as Geneva 1850, a time-travel experience and historical reconstruction of Geneva in 1850.
Related Publications
Chagué S, Charbonnier C.
Real Virtuality: A Multi-User Immersive Platform Connecting Real and Virtual Worlds,
VRIC 2016 Virtual Reality International Conference – Laval Virtual, Laval, France, ACM New York, NY, USA, March 2016.
PDF
Chagué S, Charbonnier C.
Digital Cloning for an Increased Feeling of Presence in Collaborative Virtual Reality Environments,
Proc. of 6th Int. Conf. on 3D Body Scanning Technologies, Lugano, Switzerland, October 2015.
PDF
Charbonnier C, Trouche V. Real Virtuality: Perspective offered by the Combination of Virtual Reality Headsets and Motion Capture, White Paper, August 2015.
PDF
by artanim | May 22, 2021
Project Info
Start date:
September 2020
End date:
August 2023
Funding:
–
Coordinator:
Artanim
Summary
Virtual Reality (VR) opens up the possibility of developing a new kind of content format: one that is experienced as a rich narrative, just as movies or theater plays are, while also allowing interaction with autonomous virtual characters. The experience would feel like watching autonomous characters unfold a consistent story plot within an interactive VR simulation. Players could choose to participate in the events and affect parts of the story, or simply watch the plot unfold around them.
The main challenge in realizing this vision is that current techniques for interactive character animation are designed for video games and do not offer the subtle multi-modal interaction that VR users spontaneously expect. The main goal of this project is to explore interactive character animation techniques that help create characters able to engage with players in a more compelling way. To achieve this goal, we will use a combination of research techniques derived from computer graphics, machine learning and cognitive psychology.
Related Publications
Llobera J, Jacquat V, Calabrese C, Charbonnier C. Playing the mirror game in virtual reality with an autonomous character, Sci Rep, 12:21329, 2022.
PDF
Llobera J, Charbonnier C. Physics-based character animation for Virtual Reality, Open Access Tools and Libraries for Virtual Reality, IEEE VR Workshop, 2022 Best Open Source tool Award, March 2022.
PDF
Llobera J, Booth J, Charbonnier C. Physics-based character animation controllers for videogame and virtual reality production, 14th ACM SIGGRAPH Conference on Motion, Interaction and Games, November 2021.
PDF
Llobera J, Booth J, Charbonnier C. New Techniques on Interactive Character Animation, SIGGRAPH ’21: short course, ACM, New York, NY, USA, August 2021.
Llobera J, Charbonnier C. Interactive Characters for Virtual Reality Stories, ACM International Conference on Interactive Media Experiences (IMX ’21), ACM, New York, NY, USA, June 2021.
PDF
by artanim | Mar 17, 2021
Project Info
Start date:
September 2020
End date:
June 2023
Funding:
–
Coordinator:
Artanim
Summary
Current motion capture (mocap) solutions used in Location-Based Virtual Reality (LBVR) experiences rely on a set of either active or passive infrared markers to track users’ movements within the experience space. While this offers stable, low-latency tracking ideally suited to VR, equipping users with such markers is cumbersome, and maintaining and cleaning these devices takes time.
The Markerless Mocap project aims to leverage modern Machine Learning (ML) approaches to create a markerless solution for capturing users in LBVR scenarios. The low-latency requirements imposed by this scenario are the project’s primary challenge, with an ideal photo-to-skeleton latency on the order of 50 ms or less. To achieve this goal, the project focuses on pose-estimation approaches that strike a good balance between accuracy and speed. The markerless pipeline consists of three stages: processing raw camera input at 60 frames per second, estimating the 2D poses of subjects in each view, and assembling the final full 3D skeleton. To manage this computationally heavy task at the desired latency, the pipeline leverages the massively parallel computation abilities of modern CPUs and GPUs, allowing us to optimize every stage of the computation.
Another critical aspect is the training stage: any ML approach generally requires a large amount of annotated input data. While a variety of public pose-estimation datasets are available, they are often limited to a single view (whereas our multi-camera setups require multi-view data), and their annotations are not necessarily precise. Rather than record our own real-life dataset, however, the project leverages our expertise in avatar creation through what we call our Synthetic Factory. Based on a single avatar model equipped with a variety of blend shapes, we can create a large variety of distinct avatars by providing broad characteristics such as age, gender, and ethnicity. Adding a variety of body proportions, skin tones, outfits, footwear, hairstyles, and animations yields a virtually unlimited set of subjects that can be recorded from any angle, with their underlying skeleton forming the ground-truth annotation. This dataset then forms the basis of all our training efforts.
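The combinatorial sampling behind such a synthetic dataset can be sketched as follows. The attribute lists, ranges, and function names here are invented for illustration and are not the actual Synthetic Factory configuration:

```python
import random

# Hypothetical categorical avatar attributes; real factories would
# expose many more, driven by the avatar model's blend shapes.
ATTRIBUTES = {
    "age_group": ["child", "adult", "senior"],
    "gender": ["female", "male"],
    "outfit": ["casual", "sport", "formal"],
    "hairstyle": ["short", "long", "bald"],
}

def sample_avatar(rng):
    """Draw one avatar spec: categorical traits plus continuous shape."""
    spec = {name: rng.choice(options) for name, options in ATTRIBUTES.items()}
    spec["height_m"] = round(rng.uniform(1.2, 2.0), 2)
    # Continuous body-shape parameters, e.g. blend-shape weights in [0, 1].
    spec["blend_weights"] = [round(rng.random(), 3) for _ in range(4)]
    return spec

def sample_dataset(n, seed=0):
    """Generate n avatar specs reproducibly from a seeded RNG."""
    rng = random.Random(seed)
    return [sample_avatar(rng) for _ in range(n)]
```

Each sampled spec would then be rendered from every camera angle, with the posed skeleton exported alongside the images as ground truth.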