Inner Cadence
An interactive video wall that captures and amplifies people’s characteristic movements, enhancing motion as a form of expression.
Movement is a form of expression that’s inherently linked to our identity
This is a concept I’ve been trying to explore for quite some time. Movement conveys individuality and emotion: we can be identified by our gait, or infer someone’s mood from their pose. I want to maximize this expressive potential and explore ways of enhancing people’s movements. I want to foreground motion by separating it from the body, silhouette, features, or clothes, making people rethink how they build their identity and their relation to others.
Movement capture
To capture movement data, I could rely on different technologies:
- OptiTrack Motion Capture studio
- Microsoft Kinect v2
- Pose detection algorithms
Despite having access to a motion capture studio, and despite it being the most reliable source of motion data, that option would have made it impossible to create an interactive installation: installing the entire system is highly complex, and the studio itself is not a suitable location for the project. For pose detection algorithms, there are several alternatives, such as CMU’s OpenPose and TensorFlow’s PoseNet. These models have become increasingly reliable, can run in real time on a capable computer, and require just a webcam. However, the data they provide is only 2D, the same as the input they receive, which limits the information I could process. To get 3D data with portable equipment, I decided to use a Kinect v2, which provides real-time skeletal tracking and depth perception.
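A minimal capture sketch, assuming the PyKinect2 wrapper around the Kinect v2 SDK (the runtime, joint and orientation attribute names come from that library; the capture length and single-body handling are illustrative, not the project’s actual code):

```python
import time
from pykinect2 import PyKinectV2, PyKinectRuntime

# Open a runtime that only streams body (skeleton) frames.
kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Body)

frames = []  # one entry per captured frame

while len(frames) < 600:  # e.g. roughly 20 s of movement at 30 fps
    if kinect.has_new_body_frame():
        bodies = kinect.get_last_body_frame()
        if bodies is None:
            continue
        for i in range(kinect.max_body_count):
            body = bodies.bodies[i]
            if not body.is_tracked:
                continue
            # For each of the 25 joints, store the orientation quaternion
            # (used for classification) and the 3D position (kept for replay).
            frame = []
            for j in range(PyKinectV2.JointType_Count):
                o = body.joint_orientations[j].Orientation
                p = body.joints[j].Position
                frame.append(((o.x, o.y, o.z, o.w), (p.x, p.y, p.z)))
            frames.append(frame)
            break  # single-user experience: keep only the first tracked body
    time.sleep(0.001)
```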
Movement classification
Now, to classify the data, I first need to normalize it and select the relevant parts. Without normalization, the movement of a user standing closer to or further from the camera would be processed differently, and the person’s height would play an outsized role. To solve this, I only took the orientation quaternion of each joint, using the position data solely for replay purposes.
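A sketch of that normalization, assuming frames in the (quaternion, position) format captured above; the sign fix on w is my own addition to keep equivalent quaternions consistent, not something specified by the project:

```python
import numpy as np

def frame_to_features(frame):
    """Turn one captured frame into a position-independent feature array.

    `frame` is a list of 25 (quaternion, position) pairs; only the
    quaternions are kept, so the result does not depend on where the
    user stands or how tall they are.
    """
    quats = np.array([q for q, _pos in frame], dtype=np.float32)  # (25, 4)
    # q and -q encode the same rotation; fixing the sign of w keeps the
    # representation consistent across frames (an added robustness step).
    quats[quats[:, 3] < 0] *= -1
    return quats

def frame_to_replay_positions(frame):
    """Positions are kept separately, only for replaying recordings."""
    return np.array([pos for _q, pos in frame], dtype=np.float32)  # (25, 3)
```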
The selection of relevant data comes from the need to evaluate and classify different types of movement. For example, to analyze the movement of the arms, I don’t take the legs into account. With this in mind, I created 7 joint sets, of which I ended up using 3: arms, legs and spine (sketched below).
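A sketch of how the joint sets could be defined with the SDK’s JointType constants; the exact membership of each set is my reading of “arms, legs and spine” and may differ from the sets actually used:

```python
from pykinect2 import PyKinectV2 as K

# Illustrative joint sets; the exact membership used in the project may differ.
JOINT_SETS = {
    "arms": [
        K.JointType_ShoulderLeft, K.JointType_ElbowLeft, K.JointType_WristLeft,
        K.JointType_ShoulderRight, K.JointType_ElbowRight, K.JointType_WristRight,
    ],
    "legs": [
        K.JointType_HipLeft, K.JointType_KneeLeft, K.JointType_AnkleLeft,
        K.JointType_HipRight, K.JointType_KneeRight, K.JointType_AnkleRight,
    ],
    "spine": [
        K.JointType_SpineBase, K.JointType_SpineMid,
        K.JointType_SpineShoulder, K.JointType_Neck, K.JointType_Head,
    ],
}

def select_joint_set(quats, set_name):
    """Keep only the quaternions of the joints belonging to one set.

    `quats` is the (25, 4) array produced by frame_to_features above.
    """
    return quats[JOINT_SETS[set_name]]
```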
Finally, I formed batches of 60 frames with the corresponding orientation and angular velocity values for each set of joints. Bundling frames together is crucial for analyzing snippets of movement over time.
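A sketch of that batching step, deriving per-joint angular velocity from consecutive quaternions with SciPy; the 30 fps frame rate, the non-overlapping windows and the exact feature layout are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

FPS = 30      # assumed Kinect v2 body-frame rate
WINDOW = 60   # frames per batch, i.e. about 2 seconds of movement

def angular_velocity(quats_t, quats_t1, fps=FPS):
    """Per-joint angular velocity (rad/s) between two consecutive frames.

    Both inputs have shape (n_joints, 4) in (x, y, z, w) order, which is
    what scipy's Rotation.from_quat expects.
    """
    delta = R.from_quat(quats_t1) * R.from_quat(quats_t).inv()
    return delta.as_rotvec() * fps  # (n_joints, 3)

def make_batches(seq, window=WINDOW):
    """Split a recording into non-overlapping 60-frame feature batches.

    `seq` has shape (n_frames, n_joints, 4): one quaternion per joint per
    frame, already restricted to a joint set. Each batch concatenates the
    orientations and the angular velocities of its frames.
    """
    batches = []
    for start in range(0, len(seq) - window + 1, window):
        chunk = seq[start:start + window]
        vel = np.stack([angular_velocity(chunk[i], chunk[i + 1])
                        for i in range(window - 1)])
        # Drop the last frame's orientation so orientations and velocities align.
        feats = np.concatenate([chunk[:-1].reshape(window - 1, -1),
                                vel.reshape(window - 1, -1)], axis=1)
        batches.append(feats.ravel())
    return np.array(batches)
```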
All of this was analyzed in Python, using k-means to create clusters for each movement type, extracting different characteristics across the recorded individuals. These clusters would later be used for the real-time movement classification.
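A minimal sketch of the clustering step with scikit-learn; the number of clusters is a placeholder, since the value actually used isn’t stated here:

```python
from sklearn.cluster import KMeans

def cluster_movements(batches, n_clusters=8, seed=0):
    """Cluster 60-frame movement batches for one joint set.

    `batches` is the (n_batches, n_features) array produced above, pooled
    across all recorded individuals. Returns the fitted model so it can be
    reused for real-time classification later.
    """
    model = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = model.fit_predict(batches)
    return model, labels

# One model per joint set, e.g.:
# arm_model, arm_labels = cluster_movements(arm_batches)
```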
User tests & prototyping
By putting everything together, I created the first prototype: an interconnected application that consists of two steps, movement classification and movement enhancement.
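A hedged sketch of how the classification half of that prototype could work at runtime, reusing make_batches and JOINT_SETS from the sketches above: keep a sliding buffer of the latest frames, build the same batch features used offline, and ask the fitted k-means model for the nearest cluster. The enhancement step, i.e. how each cluster drives the visuals, is left out.

```python
from collections import deque
import numpy as np

class RealtimeClassifier:
    """Assigns the user's latest movement window to one of the k-means clusters."""

    def __init__(self, model, joint_indices, window=60):
        self.model = model                  # fitted KMeans for one joint set
        self.joint_indices = joint_indices  # e.g. JOINT_SETS["arms"]
        self.buffer = deque(maxlen=window)

    def push_frame(self, quats):
        """Add one frame of 25 joint quaternions; classify once the buffer is full."""
        self.buffer.append(quats[self.joint_indices])
        if len(self.buffer) < self.buffer.maxlen:
            return None
        seq = np.stack(self.buffer)                 # (60, n_joints, 4)
        feats = make_batches(seq, window=len(seq))  # reuse the offline batching
        return int(self.model.predict(feats)[0])    # cluster id drives the visuals
```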
Exhibition & results
Finally, I created a 3-part video wall that analyzes people’s characteristic movements, then classifies and amplifies them, enhancing their expressiveness. Each wall is a single-user experience, and by placing the three walls next to each other, the audience will hopefully reflect on and make connections between the repeating patterns that arise between them. This was exhibited at the Unfolded Realities showcase (organized and curated by me).