ABC-Space

Attention, Behavior and Curiosity in Space

Project Info

Start date:
September 2024

End date:
August 2028

Funding:
Swiss National Science Foundation (SNSF)

Coordinator:
Artanim

Summary

The aim of ABC-Space is to better understand how Attention and Curiosity are deployed in Space, and their impact on educational Virtual Reality (VR) experiences that use embodied social virtual characters. ABC-Space will also provide foundational insights into how social signals from interactive automated characters influence the cognitive and motivational processes that guide attention and memory in VR.

Our results will provide new insights into VR science, educational psychology, and cognitive science. These results will be achieved through a combination of novel experimental paradigms probing attention, motivation, and memory in VR. By the end of the project, ABC-Space will have contributed to the understanding of the relation between attention and curiosity, memory and learning, and 3D space representations, as well as their interplay in fully immersive VR. This knowledge will have important implications for VR science as well as for cognitive science and psychology. ABC-Space will thus pave the way for the design of a new kind of educational VR agent, expand current theoretical models of attention and 3D space representation in humans, and demonstrate their educational impact for applied purposes.

In this project, Artanim develops physics-based controllers driven by Deep Reinforcement Learning to control virtual reality characters, produces the VR environments, and contributes to the design of the behavioural experiments that validate their impact on attention and learning.
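
Purely to illustrate the idea, the sketch below shows in simplified form what a physics-based character controller driven by a learned policy might look like: a trained policy maps the character's simulated state and a tracking target to joint torques, which a physics engine integrates each frame. The class, policy, and integrator here are hypothetical stand-ins, not the controllers developed in the project.

```python
# Minimal sketch (hypothetical): a deep RL policy drives a physics-simulated character.
import numpy as np

class PhysicsCharacterController:
    def __init__(self, policy, physics_step, n_joints):
        self.policy = policy              # state -> torque mapping (e.g. a trained neural net)
        self.physics_step = physics_step  # advances the rigid-body simulation by one step
        self.n_joints = n_joints

    def update(self, state, target_pose):
        # Observation: current joint positions/velocities plus the pose to track
        obs = np.concatenate([state["q"], state["qdot"], target_pose])
        torques = self.policy(obs)                  # policy inference
        assert torques.shape == (self.n_joints,)
        return self.physics_step(state, torques)    # next simulated state

# Stand-in components: a random policy and a trivial Euler integrator.
rng = np.random.default_rng(0)
policy = lambda obs: rng.normal(size=12) * 0.1
step = lambda s, tau: {"q": s["q"] + 0.01 * s["qdot"], "qdot": s["qdot"] + 0.01 * tau}
ctrl = PhysicsCharacterController(policy, step, n_joints=12)
state = {"q": np.zeros(12), "qdot": np.zeros(12)}
state = ctrl.update(state, target_pose=np.zeros(12))
```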

Partners

Artanim
Physics-based interactive Virtual Reality characters capable of signaling and social cueing

University of Geneva – Laboratory for Behavioral Neurology and Imaging of Cognition (LABNIC)
Cognitive and affective processes governing human attention in space

University of Geneva – Educational Technologies (TECFA)
Models of space representation used to form cognitive maps, and their impact in educational VR experiences

Markerless Mocap

Killing the markers

Project Info

Start date:
September 2020

End date:
September 2023

Funding:

Coordinator:
Artanim

Summary

Current motion capture (Mocap) solutions used in Location-based Virtual Reality (LBVR) experiences rely on a set of either active or passive infrared markers to track the user’s movements within the experience space. While this offers stable, low-latency tracking ideally suited for a VR scenario, equipping users with such markers is cumbersome, and maintaining and cleaning these devices takes time.

The Markerless Mocap project aims to leverage modern Machine Learning (ML) approaches to create a markerless solution for the capture of users in LBVR scenarios. The low-latency requirements imposed by this scenario are the primary challenge for the project, with an ideal photon-to-skeleton latency on the order of 50 ms or less. To achieve this goal, the project focuses on pose-estimation approaches that strike a good balance between accuracy and speed. The markerless pipeline consists of three stages: processing the raw camera input at 60 frames per second, estimating the 2D pose of the subjects in each view, and assembling the final full 3D skeleton. To handle this computationally heavy task at the desired latency, the pipeline leverages the massively parallel computation abilities of modern CPUs and GPUs, allowing us to optimize every stage of the computations involved.
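
As a rough illustration of the three stages described above, the following sketch wires raw-frame processing, per-view 2D pose estimation, and 3D assembly into a single per-frame call. The function names and placeholder implementations are our own assumptions, not the actual pipeline code.

```python
# Illustrative sketch only: the three pipeline stages, run once per multi-view frame set.
import numpy as np

def preprocess(frame):
    # Stage 1: raw camera processing (debayering/undistortion would live here)
    return frame

def estimate_2d_pose(frame):
    # Stage 2: per-view 2D joint detection; a CNN would run here, ideally on the GPU
    n_joints = 23
    return np.zeros((n_joints, 2))           # (x, y) per joint, placeholder output

def assemble_3d(poses_2d, projection_matrices):
    # Stage 3: fuse the 2D detections from all views into one 3D skeleton
    # (placeholder; a per-joint triangulation is typical)
    n_joints = poses_2d[0].shape[0]
    return np.zeros((n_joints, 3))

def process_frame_set(frames, projection_matrices):
    processed = [preprocess(f) for f in frames]             # stage 1
    poses_2d = [estimate_2d_pose(f) for f in processed]     # stage 2, parallel per view
    return assemble_3d(poses_2d, projection_matrices)       # stage 3

# One synthetic frame set: 8 views of a 1080p sensor at a single 60 fps tick.
frames = [np.zeros((1080, 1920), dtype=np.uint8) for _ in range(8)]
P = [np.eye(3, 4) for _ in range(8)]
skeleton_3d = process_frame_set(frames, P)    # (23, 3) joint positions
```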

Another critical aspect is the training stage of any ML approach, which generally requires a large amount of annotated input data. While a variety of public datasets are available in the field of pose estimation, they are often limited to a single view (whereas we require multi-view data for our multi-camera setups), and their annotations are not necessarily precise. Rather than record our own real-life dataset, the project leverages our expertise in the creation of avatars through what we call our Synthetic Factory. Based on a single avatar model equipped with a variety of blend shapes, we can create a large variety of distinct avatars by specifying broad characteristics such as age, gender, or ethnicity. Add in a variety of body proportions, skin tones, outfits, footwear, hairstyles, and animations, and you get a virtually unlimited set of subjects that can be recorded from any angle, with their underlying skeleton forming the ground-truth annotation. This dataset then forms the basis of all our training efforts.
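
The sketch below conveys the Synthetic Factory idea in miniature: sampling broad characteristics that downstream tooling would map to blend-shape weights, outfits, and animations before rendering from arbitrary camera angles with the skeleton as ground truth. The parameter names and values are illustrative assumptions, not the factory's real configuration.

```python
# Hedged sketch of the Synthetic Factory idea: sample broad avatar characteristics.
import random

CHARACTERISTICS = {
    "age":        ["child", "young adult", "adult", "senior"],
    "gender":     ["female", "male"],
    "body_build": ["slim", "average", "athletic", "heavy"],
    "skin_tone":  ["light", "medium", "dark"],
    "outfit":     ["casual", "sport", "formal"],
    "hairstyle":  ["short", "long", "tied", "none"],
    "animation":  ["walk", "reach", "crouch", "wave"],
}

def sample_avatar(rng):
    # One synthetic subject: a random combination of the broad characteristics
    return {name: rng.choice(options) for name, options in CHARACTERISTICS.items()}

rng = random.Random(42)
dataset_spec = [sample_avatar(rng) for _ in range(10_000)]   # virtually unlimited variety
print(dataset_spec[0])
```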

This project was performed in collaboration with Vicon, the world-leading mocap provider. First results of this collaboration were presented at SIGGRAPH 2023 on their exhibition booth, showcasing a six-person, markerless and multi-modal real-time solve, set against Dreamscape’s LBVR adventure The Clockwork Forest. With this showcase, Vicon earned a CGW Silver Edge Award for technological innovation.

Partners

Artanim
VR requirements definition, ML-based tracking algorithms evaluation, synthetic factory development, multi-modal tracking solution implementation and fine-tuning

Vicon
Hardware development, ML-based tracking algorithms implementation, real-time solve

VR+4CAD

Closing the VR-loop around the human in CAD

Project Info

Start date:
June 2020

End date:
November 2021

Funding:
Innosuisse

Coordinator:
Artanim

Website:
https://vrplus4cad.artanim.ch/

Summary

VR+4CAD aims to tackle the issues limiting the use of VR in manufacturing and design highlighted by industry: the lack of full-circle interoperability between VR and CAD, the friction experienced when entering a VR world, and the limited feedback provided about the experience for later analysis.

In this project, we target the automatic conversion and adaptation of CAD-authored designs for (virtual) human interaction within a virtual environment. Interaction is made more immediate by means of an experimental markerless motion capture system that relieves the user of wearing specific devices. The data acquired by motion capture during each experience is analyzed via activity recognition techniques and transformed into implicit feedback. Explicit and implicit feedback are then merged and sent back to the CAD operator for the next design iteration.
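
To make the loop concrete, here is a schematic sketch of one design iteration: a CAD design is converted for VR, user motion is analyzed into implicit feedback, and both feedback channels return to the CAD operator. All class and function names are illustrative assumptions rather than the project's actual interfaces.

```python
# Schematic sketch of the VR+4CAD feedback loop (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class DesignIteration:
    cad_model: str
    explicit_feedback: list = field(default_factory=list)   # user comments/ratings
    implicit_feedback: list = field(default_factory=list)   # derived from motion analysis

def convert_for_vr(cad_model: str) -> str:
    # CAD -> VR asset conversion (geometry adaptation, interaction tagging, ...)
    return f"vr_scene({cad_model})"

def recognize_activities(motion_capture: list) -> list:
    # Activity recognition over markerless mocap data; trivial stand-in here
    return [f"activity:{m}" for m in motion_capture]

def run_iteration(it: DesignIteration, motion_capture: list, comments: list) -> DesignIteration:
    _scene = convert_for_vr(it.cad_model)            # the user experiences this scene in VR
    it.implicit_feedback = recognize_activities(motion_capture)
    it.explicit_feedback = comments
    return it                                        # merged feedback goes back to CAD

iteration = run_iteration(
    DesignIteration(cad_model="workstation_v3"),
    motion_capture=["reach_shelf", "crouch_panel"],
    comments=["shelf too high"],
)
print(iteration.implicit_feedback, iteration.explicit_feedback)
```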

Partners

Artanim
Markerless motion capture, VR activity annotation

Scuola universitaria professionale della Svizzera italiana (SUPSI)
CAD/VR interoperability, human activity recognition