Sometimes, however, we need a great deal of variation, whether to allow users to create avatars that more closely match their own appearance, or to build datasets with an appropriate, and often overlooked, degree of diversity for machine learning efforts. In such cases, creating tailor-made avatars is no longer feasible. It is with this in mind that we started what we call our Synthetic Factory.
The premise of the Synthetic Factory is simple. Given a neutral default character and a set of outfits, footwear, and hair styles it can wear, can we create a set of size, shape, gender, and ethnicity changes that can be applied in real time to create any subject we would like? The potential benefit is clear: artists simply design an outfit or hairstyle that fits well on a base model, and the Synthetic Factory takes care of the variations.
At the basis of the Synthetic Factory is a default avatar model. Artists can take this model and deform it to give it a certain characteristic, whether that is a gender-appropriate change in body type, a specific set of facial features, or an overall change in body size. The Synthetic Factory takes these inputs, as well as meta-data characterizing the “deformation”, and encodes them as blend shapes that can be applied on a weighted basis, smoothly blending between the original shape and the blend shape target. At runtime, either a specific set of deformations and weights can be selected, or a set of broad characteristics can be supplied, such as “Female”, “Adult”, “Indian” for example, after which random deformations and weights are selected which are appropriate for those characteristics.
The base model (center) with two of its 80 blend shapes.
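To make the weighted blending concrete, here is a minimal sketch in Python with NumPy; the function name, the toy mesh, and the shape names are illustrative, not the Synthetic Factory's actual API.

```python
import numpy as np

def apply_blend_shapes(base_vertices, deltas, weights):
    """Blend the neutral mesh towards one or more blend shape targets.

    base_vertices: (V, 3) array of neutral-pose vertex positions.
    deltas:        list of (V, 3) arrays, one per blend shape, each holding
                   target_vertices - base_vertices.
    weights:       one weight in [0, 1] per blend shape; 0 keeps the neutral
                   shape, 1 reaches the target, in-between values blend smoothly.
    """
    result = base_vertices.copy()
    for delta, weight in zip(deltas, weights):
        result += weight * delta  # linear, per-vertex offset
    return result

# Example: 50% of a hypothetical "heavier build" shape plus
# 80% of a hypothetical "broad shoulders" shape.
base = np.zeros((4, 3))                 # toy 4-vertex mesh
heavier = np.random.randn(4, 3) * 0.01  # stand-in deltas
broad = np.random.randn(4, 3) * 0.01
morphed = apply_blend_shapes(base, [heavier, broad], [0.5, 0.8])
```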
Any assets applied on top of this model, be they outfits, footwear, hair styles, and the like, need to deform appropriately given any changes made to the body blend shapes. The Synthetic Factory automates this in a pre-processing step. After applying the asset to the neutral base model, a mapping is created, encoding how the asset fits the model. Once this mapping is complete, the factory runs through all the body blend shapes and determines how the asset needs to deform to keep the appropriate mapping in relation to the body.
Three avatars wearing the same outfit at three different proportions.
These deformations are then once again stored as blend shapes, this time as part of the asset. This precomputation happens once; at runtime, no further heavy computation is required. All assets are annotated with a set of meta-data, ensuring they are only ever combined in ways that match the model’s chosen characteristics.
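As a rough illustration of this pre-processing step (the actual mapping is more involved), an asset can be bound to its nearest body vertices on the neutral pose, after which each body blend shape's deltas are replayed through that binding to derive the matching asset blend shape:

```python
import numpy as np

def bind_asset_to_body(asset_verts, body_verts):
    """For each asset vertex, record the index of the nearest body vertex
    on the neutral pose. (A simplification: production mappings typically
    use closest points on the surface with smooth weighting.)"""
    # (A, 1, 3) - (1, B, 3) -> pairwise distance matrix of shape (A, B)
    dists = np.linalg.norm(asset_verts[:, None, :] - body_verts[None, :, :], axis=2)
    return dists.argmin(axis=1)

def derive_asset_blend_shape(binding, body_delta):
    """Replay one body blend shape through the binding: each asset vertex
    inherits the delta of the body vertex it is bound to."""
    return body_delta[binding]

# Pre-processing, run once per asset: derive one asset blend shape per body
# blend shape. At runtime the asset is simply morphed with the same weights
# as the body, so no further heavy computation is needed.
body = np.random.rand(500, 3)       # toy neutral body
shirt = body[:200] + 0.02           # toy asset resting just above the body
binding = bind_asset_to_body(shirt, body)
body_delta = np.random.randn(500, 3) * 0.01
shirt_delta = derive_asset_blend_shape(binding, body_delta)
```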
As part of its run-time tools, the Synthetic Factory also enables modification of the skeleton driving the avatar. Given either exact body measurements matching a specific dataset or an overall target height, what we call the “AvatarBuilder” generates a new skeleton matching the requirements and applies it to the avatar.
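The AvatarBuilder's internals are not spelled out above, but for the target-height case one can picture something like the following sketch, where the bone names, default lengths, and default height are all invented for illustration:

```python
# Hypothetical default skeleton (bone lengths in meters) and the height of
# the neutral avatar; both are invented for this illustration.
DEFAULT_BONE_LENGTHS = {
    "spine": 0.50, "neck": 0.10, "head": 0.20,
    "upper_leg": 0.45, "lower_leg": 0.45, "foot": 0.05,
}
DEFAULT_HEIGHT = 1.75

def build_skeleton(target_height):
    """Scale every bone uniformly so the skeleton reaches the target height.

    Matching exact per-segment measurements from a dataset would instead
    assign each bone its own scale factor.
    """
    scale = target_height / DEFAULT_HEIGHT
    return {bone: length * scale for bone, length in DEFAULT_BONE_LENGTHS.items()}

petite_skeleton = build_skeleton(1.55)  # skeleton for a 1.55 m avatar
```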
Combine these tools with more fine-grained randomizations, such as appropriate skin tones and color and texture randomizations on outfits, and from a relatively compact set of basic inputs you can generate a large variety of diverse avatars fit for any purpose. In the video below, you can see the result for 36 randomly created avatars based on gender, age, and ethnicity input.
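As a sketch of how such characteristic-driven randomization could look (the tag names, weight range, and data structure are invented for illustration):

```python
import random

# Each blend shape carries meta-data tags; only shapes whose tags are all
# contained in the requested characteristics are eligible for selection.
BLEND_SHAPES = [
    {"name": "female_body_type", "tags": {"Female"}},
    {"name": "male_body_type",   "tags": {"Male"}},
    {"name": "indian_features",  "tags": {"Indian"}},
    {"name": "adult_build",      "tags": {"Adult"}},
]

def random_avatar(characteristics):
    """Assign a random weight to every blend shape matching the request."""
    return {
        shape["name"]: random.uniform(0.3, 1.0)  # arbitrary illustrative range
        for shape in BLEND_SHAPES
        if shape["tags"] <= characteristics      # subset test: all tags match
    }

weights = random_avatar({"Female", "Adult", "Indian"})
# e.g. {'female_body_type': 0.62, 'indian_features': 0.91, 'adult_build': 0.47}
```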
In November 1620, the passengers and crew of the Mayflower reached the coast of Massachusetts and established a colony there. Among them were Puritan Reformists who had crossed the ocean to found a community in accordance with their aspirations, firmly rooted in Protestant and Calvinist values. This was a founding moment for what was to become the United States of America.
Experience first-hand this poignant event and the identity to which it gave rise through a multifaceted exhibition at the International Museum of the Reformation in Geneva.
Aboard the Mayflower is one piece of this exhibition. For five minutes, embark with the first Reformed community in America on the emblematic ship that crossed the Atlantic in 1620 – an unforgettable virtual reality experience created by Artanim.
October 28th, 2020 – February 28th, 2021
International Museum of the Reformation (MIR), Rue du Cloître 4, 1204 Geneva.
Opening hours: Tuesday – Sunday from 10am to 5pm.
Admission is free for visitors holding a ticket to the permanent collection.
At the heart of the VR-Together project lies the objective of enabling social VR experiences with strong feelings of immersion as well as co-presence. To achieve this strong social sense of sharing a space together, photorealistic real-time representations of users are employed, rather than the abstract avatars found in offerings such as Facebook Spaces or AltspaceVR. Using state-of-the-art technologies developed by consortium partners and off-the-shelf hardware such as Microsoft Kinect or Intel RealSense sensors, users are scanned in real time, and the captured representations are processed and streamed as point clouds or time-varying meshes (TVMs). These approaches to user representation, combined with HMD removal technology, allow users sharing the virtual space – while in geographically separate locations – to see each other in all three dimensions.
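As a minimal sketch of what streaming one captured frame involves (the wire format below is invented for illustration; the consortium's actual pipelines use their own codecs and compression):

```python
import struct
import numpy as np

def pack_point_cloud_frame(points, colors, timestamp):
    """Serialize one point-cloud frame: a small header (timestamp and point
    count) followed by xyz positions as float32 and rgb colors as uint8."""
    assert points.shape[0] == colors.shape[0]
    header = struct.pack("<dI", timestamp, points.shape[0])
    return header + points.astype("<f4").tobytes() + colors.astype("u1").tobytes()

# One toy frame of 1,000 points, ready to hand to a network transport.
points = np.random.rand(1000, 3).astype(np.float32)
colors = np.random.randint(0, 256, size=(1000, 3), dtype=np.uint8)
message = pack_point_cloud_frame(points, colors, timestamp=0.033)
```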
Early feedback from users of the Pilot 1 demonstrations regarding the ability to see themselves and others has been positive. The question remains, however, whether accurate self-representation has a significant positive impact on the sense of immersion, co-presence, and overall quality of experience, both when seeing yourself and when interacting with others sharing the same virtual environment.
To answer this question, VR-Together consortium partner Artanim will run an experiment this summer in which users are placed in a virtual environment and represented by a virtual version of themselves at varying levels of realism and likeness.
User representations will be created at three different levels of accuracy:
An abstract, avatar-like representation that does not match the participant
A realistic representation of the participant
An in-between, somewhat more abstract, perhaps cartoon-like representation of the participant, which is still recognizable but steers clear of undesirable effects such as the “Uncanny Valley”.
To evaluate self-representation, single users will be placed in a virtual environment in which, by means of a virtual mirror, they can observe themselves. The question there is whether increased likeness improves the overall VR experience. To evaluate the importance of avatar likeness in the representation of others, pairs of users who know each other (i.e. friends or family) will share a virtual environment, again represented at varying levels of likeness. The goal there is to understand the effects on aspects such as immersion, togetherness, and quality of interaction.
The proposed experiment will help us better understand what scenarios benefit most from realistic and recognizable user representation in Virtual Reality experiences, and to what extent realism is desirable in social VR.
We are proud to announce the opening of the VR experience Geneva 1850: A Revolutionary Journey, co-produced by Artanim and the Museums of Art and History of Geneva. The immersive installation will be open to the public starting April 12th at Maison Tavel, the oldest standing building in the city of Calvin.
Headset strapped on, movement sensors attached to your wrists and feet, you are now all geared up to dive into the Geneva 1850 experience. This VR installation immerses users in the Geneva of a time when the city was on the brink of modernity. In groups of four, embodying avatars clad in period costume, users are given the chance to discover the city as it was in October 1846, just days before its people rose up in insurrection. Featuring a spectacular number of special effects, the experience is not only visual, auditory, and olfactory. It also includes a physical and haptic dimension: users can feel themselves walking around the virtual city and pick up actual objects to interact with their environment, their fellow travellers, and city dwellers.
Based on 3D data collected through the digitization of the Magnin model exhibited at Maison Tavel, this extraordinary experience is an invitation to travel back in time and witness the outbreak of Geneva’s “Fazy” revolution.
April 12th-July 14th, 2019 – Maison Tavel, Rue du Puits-Saint-Pierre 6, 1204 Geneva.
August 1st-September 29th, 2019 – Museum of Art and History, Rue Charles-Galland 2, 1206 Geneva.
Open from 11am to 6pm, closed on Monday.
Reserve your tickets for a journey through time at MAH-GENEVE.CH
Approximate duration: 45 min.
Dreamscape Immersive, the story-oriented VR company leveraging Artanim’s VR technology, is opening its first permanent location in the same upscale Los Angeles mall where it ran a pop-up storefront last February featuring its sci-fi-themed Alien Zoo experience. The company’s backers include movie chain AMC Entertainment and several Hollywood heavyweights (MGM, Warner, 21st Century Fox, Steven Spielberg, Hans Zimmer).
The permanent storefront in the Westfield Century City mall will initially provide dedicated “theaters” and suit-up areas for three different experiences: Alien Zoo, the undersea adventure The Blu: Deep Rescue, and the Indiana Jones-style Lavan’s Magic Projector: The Lost Pearl. Tickets cost $20. The new location is part of a rapid expansion effort by the company, which plans to roll out stand-alone and in-theater venues (in partnership with AMC Theatres) outside of California.
Follow Caecilia Charbonnier and Sylvain Chagué, co-founders and co-CTOs of Dreamscape, during the opening day at Westfield. Below, some pictures of the elegant Dreamscape store.
Do you feel in control of the body that you see? This is an important question in virtual reality (VR), as it strongly impacts the user’s sense of presence and of embodiment of an avatar representation while immersed in a virtual environment. To better understand this aspect, we performed an experiment in the framework of the VR-Together project to assess the relative impact of different levels of body animation fidelity on presence.
In this experiment, users are equipped with a motion capture suit and reflective markers to track their movements in real time with a Vicon optical motion capture system. They also wear Manus VR gloves for finger tracking and an Oculus HMD. In each trial, the avatar’s face (eye gaze and mouth), fingers, upper body, and lower body are animated with different degrees of fidelity: no animation, procedural animation, or motion capture. Each time, users have to execute a number of tasks (walk, grab an object, speak in front of a mirror) and evaluate whether they feel in control of their body. Users start with the simplest setting and, according to their judged priority, improve features of the avatar animation until they are satisfied with the experience of control.
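As an illustrative sketch of the condition space and of the measured upgrade order (the names below are ours, not the experiment software's):

```python
from itertools import product

FEATURES = ["face", "fingers", "upper_body", "lower_body"]
LEVELS = ["none", "procedural", "motion_capture"]  # increasing fidelity

def all_conditions():
    """Enumerate every assignment of a fidelity level to each feature."""
    for combo in product(LEVELS, repeat=len(FEATURES)):
        yield dict(zip(FEATURES, combo))

# Each session starts from the simplest setting; every time the user asks
# for an improvement, the chosen (feature, new_level) pair is logged. The
# resulting upgrade order is the priority signal the experiment measures.
condition = {feature: "none" for feature in FEATURES}
upgrade_log = []

def upgrade(feature, new_level):
    condition[feature] = new_level
    upgrade_log.append((feature, new_level))

upgrade("upper_body", "motion_capture")  # e.g. the user fixes the arms first
```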
From the order in which users improve the movement features, we can determine which animation features are most valuable to them. With this experiment, we want to weigh the relative importance of animation features against their adoption costs (money and effort) in order to provide software and usage guidelines for live animation of rigged 3D character meshes based on affordable hardware. This outcome will be useful to better define what makes a compelling social VR experience.