Archiving Performative Objects | 2017


Puppetry as a practice leaves artifacts, the puppets themselves, which are difficult to demonstrate or showcase in a museum or educational setting. They cannot be transcribed like books or photographed like paintings: their performative qualities are inherent to their use. Yet the normal processes of wear and tear still apply, so museums can only display them physically and, at best, occasionally perform with them for visitors. Digital tools like VR could help here, providing a way for people to use puppets from archives that gives them the experience of what a given puppet is like without damaging the actual artifact.

Technology Unity, C#, VRTK
Role Sole Developer/Designer
Links NEH Grant, Project Website

Marionette puppet in use


I developed and helped design a playable VR interface prototype that was tested in an IRB-approved study with 19 participants, some of whom were professional puppeteers and others amateurs, which informally measured the usability and creative expression afforded by our interface. The prototype itself contained 15 puppets, each of which was scanned, modeled, and implemented in Unity as an interactable version of its real form.


Starting in my first term at Georgia Tech, I worked on this project as a GRA for Michael Nitsche. While I had no prior experience as a puppeteer, I did have a great deal of experience using Unity, a game development tool that supports VR development, so I brought enough technical knowledge to implement the project.

We started by researching the subject matter. Michael already knew people at the Center for Puppetry Arts in Atlanta, so we used artifacts from their archive as our subject matter. We took multiple trips there and took hundreds of photos of the puppets in their archive. This gave me a chance to see how puppeteers handle puppets and gave us physical references for our remediation. We even measured the exact sizes of the puppets and used those measurements to accurately represent the size of the virtual puppets in VR.
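As a minimal sketch of that sizing step, a measured real-world height can be used to rescale an imported model so it appears at true size in VR (Unity units are meters). This is illustrative only; the field and class names are assumptions, not the project's actual code:

```csharp
using UnityEngine;

// Illustrative sketch: scale an imported puppet model so its rendered
// height matches the height measured at the archive. Names and the
// example measurement are assumptions.
public class PuppetScaler : MonoBehaviour
{
    public float measuredHeightMeters = 0.62f; // from archive measurements

    void Start()
    {
        // Combined bounds of all child renderers give the model's current height.
        var renderers = GetComponentsInChildren<Renderer>();
        if (renderers.Length == 0) return;

        Bounds bounds = renderers[0].bounds;
        foreach (var r in renderers) bounds.Encapsulate(r.bounds);

        // Uniform scale factor that makes the model's height match reality.
        float scale = measuredHeightMeters / bounds.size.y;
        transform.localScale *= scale;
    }
}
```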

Image of X

We next moved on to technical prototypes. At the outset, I suspected that accurate representations of strings would be difficult, because an accurate string uses a chain of physics joints to simulate the way it deforms. More accurate representations require more joints, and more work to keep stable. Additionally, these puppets have a large number of strings, and I had rarely seen a game support multiple strings holding up a character's weight from different points. The results of the first tests can be seen below; although I later made headway on this problem in my master's project, for this project I ultimately used a third-party asset to create the strings.
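The joint-chain approach described above can be sketched roughly as follows. This is not the code used in the prototype (a third-party asset handled the strings); the segment counts, masses, and joint choice are assumptions for illustration:

```csharp
using UnityEngine;

// Sketch of a puppet string as a chain of small rigidbodies linked by
// physics joints. More segments deform more smoothly but are harder to
// keep stable, which is the tradeoff discussed above.
public class StringChain : MonoBehaviour
{
    public int segmentCount = 10;       // assumed; more = smoother, less stable
    public float segmentLength = 0.05f; // meters per segment
    public float segmentMass = 0.01f;   // light segments approximate string

    void Start()
    {
        // The object this script sits on (e.g. the control bar) is the anchor.
        Rigidbody previous = GetComponent<Rigidbody>();

        for (int i = 0; i < segmentCount; i++)
        {
            var segment = new GameObject("Segment" + i);
            segment.transform.position =
                transform.position + Vector3.down * segmentLength * (i + 1);

            var body = segment.AddComponent<Rigidbody>();
            body.mass = segmentMass;

            // A CharacterJoint allows swing but resists stretch,
            // a rough approximation of a string segment.
            var joint = segment.AddComponent<CharacterJoint>();
            joint.connectedBody = previous;
            previous = body;
        }
        // The last segment's rigidbody would then be jointed to a puppet limb.
    }
}
```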

Very early technical prototype

During this process of technical prototyping and data gathering, another Georgia Tech student began scanning and recreating the puppets digitally. Our initial idea was to use the 3D scans of the puppets directly as real-time models, hoping this would make it faster to digitally archive objects. However, we quickly realized that 3D scans of physical objects are nearly impossible to use as real-time models: they are extremely high-poly and have unworkable edge flow. We opted instead to have the scans and photos serve as reference for another student to recreate the puppets by hand, which turned out quite well in the end.

Very early technical prototype

Around this time, our biggest design challenge made itself clear. Michael realized that for many of the puppets to be usable, we would need a way for users to grab more than one object at once in VR. This is no small task, as we could not find any VR games or experiences that tried to tackle this interaction challenge. We tried a few approaches, such as clicking and holding the back trigger on the motion controllers and sweeping across objects to grab several at once. This defied too many user expectations about how to grab objects, since every other VR game treats the back trigger as a toggle for grabbing.

We settled on an interaction where users click the trigger to grab objects, and click and hold the trigger to drop everything they are holding. This maintained the interaction users expected, but did not support dropping just one object at a time.
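The core of that interaction can be sketched as a small state machine: a quick trigger click adds the touched object to the held set, while holding the trigger past a threshold releases everything. This is a simplified illustration; the input names, the 0.5-second threshold, and the way `touched` is populated are all assumptions, and the actual prototype was built on VRTK's grab events rather than raw input polling:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the multi-grab interaction: click to grab (accumulating
// objects), hold to drop everything at once. Names and threshold are
// assumptions, not the project's exact values.
public class MultiGrabController : MonoBehaviour
{
    public float holdToDropSeconds = 0.5f; // assumed threshold

    readonly List<GameObject> held = new List<GameObject>();
    float triggerDownTime;
    GameObject touched; // set elsewhere, e.g. by a trigger-collider overlap

    void Update()
    {
        if (Input.GetButtonDown("Grab"))
            triggerDownTime = Time.time;

        // Holding past the threshold drops everything at once.
        if (Input.GetButton("Grab") &&
            Time.time - triggerDownTime > holdToDropSeconds)
        {
            held.Clear();
            return;
        }

        // A quick click (released before the threshold) grabs the touched
        // object, adding it to whatever is already held.
        if (Input.GetButtonUp("Grab") &&
            Time.time - triggerDownTime <= holdToDropSeconds &&
            touched != null && !held.Contains(touched))
        {
            held.Add(touched);
        }
    }
}
```

The design choice here is that grabbing stays a single familiar click, and only the rarer "release" action takes on the unfamiliar hold gesture, which is why it preserved user expectations at the cost of per-object dropping.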

Example of puppet requiring two-object interaction

When we tested this project, we were pleasantly surprised by the reactions of expert puppeteers. They reacted with delight to the detail of the recreated puppets, and bought into the idea of playing with the puppets for their own sake. However, novice participants did not buy into the performative aspect of the puppets as readily, and experimented less with the range of motion available to them. This was part of the inspiration for my master's project, A Bridge Between Bodies.

2018 © All rights reserved. Made by Pierce with ♥