Over the past week, I returned to CTH for another meeting with Joanne, where we reviewed the sensors I’ve been building. With the work moving beyond the breadboard stage, my priority was to make each component robust enough for exhibition: no exposed cables, stable connections, and a layout that felt intentional and visually integrated with the table surface.
We discussed the configuration of each sensor: capacitive touch, proximity sensing (Adafruit MPR121), the LDR component, and the stretch sensor. The LDR was embedded into the hand-training grip so that when the user closes their hand, the reduced light exposure becomes measurable data. This simple but effective setup allowed me to detect hand closure without obtrusive hardware. Soldering the power, ground, and data connections, and making sure the resistor bridges were secure, was a careful process—especially knowing these sensors needed to withstand continuous public use.
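To give a sense of how these readings come together in firmware, here is a minimal Arduino-style sketch (C++) using the standard Adafruit_MPR121 library alongside an LDR in a voltage divider. The pin choice, I2C address, and light threshold are illustrative assumptions, not my final exhibition wiring.

```cpp
#include <Wire.h>
#include <Adafruit_MPR121.h>

// Assumed wiring: MPR121 on the default I2C address 0x5A,
// LDR + fixed resistor as a voltage divider feeding analog pin A0.
Adafruit_MPR121 cap = Adafruit_MPR121();

const int LDR_PIN = A0;
const int GRIP_CLOSED_THRESHOLD = 300;  // placeholder: tuned against ambient light

void setup() {
  Serial.begin(9600);
  if (!cap.begin(0x5A)) {
    Serial.println("MPR121 not found - check wiring");
    while (1) delay(10);
  }
}

void loop() {
  // Bitmask of the 12 capacitive channels currently touched.
  uint16_t touched = cap.touched();

  // The LDR reading drops as the hand closes around the grip and blocks light.
  int light = analogRead(LDR_PIN);
  bool handClosed = light < GRIP_CLOSED_THRESHOLD;

  Serial.print("touch:");
  Serial.print(touched, BIN);
  Serial.print(" light:");
  Serial.print(light);
  Serial.print(" closed:");
  Serial.println(handClosed ? 1 : 0);

  delay(50);
}
```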


Making sensors and soldering the wires.
The stretch sensor was the most challenging to engineer. Since it relies on changes in resistance, the material had to be both responsive and durable. Many tests failed—the material would snap under repeated tension. After numerous prototypes, I found a physiotherapy-grade resistance band that could handle high force without tearing. It became the final outer layer, allowing the stretch sensor to survive real interaction while still producing clean data for animation.
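As a rough illustration of how resistance changes become usable animation data, the sketch below assumes the stretch element sits in a simple voltage divider read on analog pin A1. The rest and full-stretch values and the smoothing factor are placeholders, not measurements from the final band.

```cpp
// Assumed wiring: stretch sensor in series with a fixed resistor between
// 5V and GND, with the midpoint read on A1.
const int STRETCH_PIN = A1;

// Illustrative rest / full-stretch readings; in practice these would be
// calibrated against the physiotherapy-grade band itself.
const int REST_VALUE = 200;
const int MAX_STRETCH_VALUE = 800;

float smoothed = 0.0;
const float ALPHA = 0.1;  // exponential smoothing factor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(STRETCH_PIN);

  // An exponential moving average keeps the animation input from jittering.
  smoothed = ALPHA * raw + (1.0 - ALPHA) * smoothed;

  // Normalise to 0.0-1.0 so the animation side only sees "how stretched".
  float t = (smoothed - REST_VALUE) / float(MAX_STRETCH_VALUE - REST_VALUE);
  t = constrain(t, 0.0, 1.0);

  Serial.println(t, 3);
  delay(20);
}
```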
Another layer of complexity was the spatial layout of the sensors on the table. Proximity sensors react to distance, so placing them at the front would have caused accidental triggers if users reached for objects behind them. I redesigned the arrangement so the interaction zones were intentional, with enough separation to avoid cross-activation. I prototyped the entire layout in cardboard, cutting holes to match the table dimensions and routing the wiring discreetly. This allowed the exhibition team to drill precise openings while keeping the final setup clean and mysterious—no visible cables, no obvious interface.
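One way to express that separation in code, alongside the physical spacing, is to give each proximity zone its own enter/exit thresholds with hysteresis, so a hand drifting between zones has to commit before anything triggers. The sketch below is only a rough illustration using the MPR121’s filtered and baseline readings; the electrode numbers and delta thresholds are assumed values that would be tuned on the actual table.

```cpp
#include <Wire.h>
#include <Adafruit_MPR121.h>

Adafruit_MPR121 cap = Adafruit_MPR121();

// Hypothetical zone setup: three electrodes used as separate proximity zones.
const int NUM_ZONES = 3;
const uint8_t ZONES[NUM_ZONES] = {0, 1, 2};

// Hysteresis: a zone must exceed ENTER_DELTA to activate and fall below
// EXIT_DELTA to release, which stops a passing hand from firing two zones.
const int ENTER_DELTA = 12;  // placeholder thresholds, tuned per electrode
const int EXIT_DELTA = 6;

bool zoneActive[NUM_ZONES] = {false, false, false};

void setup() {
  Serial.begin(9600);
  if (!cap.begin(0x5A)) {
    Serial.println("MPR121 not found");
    while (1) delay(10);
  }
}

void loop() {
  for (int i = 0; i < NUM_ZONES; i++) {
    uint8_t ch = ZONES[i];
    // Proximity shows up as the filtered reading dipping below the baseline.
    int delta = cap.baselineData(ch) - cap.filteredData(ch);

    if (!zoneActive[i] && delta > ENTER_DELTA) {
      zoneActive[i] = true;
      Serial.print("enter zone ");
      Serial.println(i);
    } else if (zoneActive[i] && delta < EXIT_DELTA) {
      zoneActive[i] = false;
      Serial.print("leave zone ");
      Serial.println(i);
    }
  }
  delay(30);
}
```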



All of this aligns with the core aim of my project: restoring a sense of physicality often missing in virtual reality. While capacitive sensing projects often rely on sound as feedback, I wanted animation to be the primary response—movement that reflects users’ tactile engagement. The work avoids conventional UI elements like keyboards and mice, instead grounding interaction in the body’s own motility. Phenomenologically, we understand the world through movement—our bodies act, reach, stretch, press, and respond. I wanted to translate this embodied knowledge into the metaverse.
Matthew Ball’s definition of the metaverse emphasizes interconnected, real-time 3D environments, not necessarily tied to head-mounted displays. Yet many VR experiences compromise the user’s full physical expression. With my system—combining touch, proximity, tension, and light—I’m exploring a new interaction model where the user’s body remains central. These sensors provide a richer tactile language, carrying physical actions into a virtual space where animated characters respond accordingly.
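The handoff between the sensors and the animated characters is not detailed here, but as a minimal sketch of one possible bridge, the microcontroller could stream the four normalised channels as a single serial line per frame for the animation engine to parse. The channel names, frame rate, and format below are assumptions for illustration, not my actual pipeline.

```cpp
// Hypothetical bridge: package the four channels (touch, proximity, stretch,
// grip/light) as one comma-separated line per frame so an animation engine
// can parse it and map the values onto character movement.
float touchAmount = 0.0;      // placeholders, updated from the sensors elsewhere
float proximityAmount = 0.0;
float stretchAmount = 0.0;
float gripAmount = 0.0;

void sendFrame() {
  Serial.print(touchAmount, 3);     Serial.print(',');
  Serial.print(proximityAmount, 3); Serial.print(',');
  Serial.print(stretchAmount, 3);   Serial.print(',');
  Serial.println(gripAmount, 3);
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  // ...update the four values from the sensors...
  sendFrame();
  delay(33);  // roughly 30 updates per second for smooth animation
}
```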
This iterative, hands-on process has been demanding but essential. It’s helping me reimagine interaction in the metaverse—not simply as digital input, but as embodied engagement rooted in the physical world.