Categories
Final Major Project

FMP Summary

When Worlds Collide is an interactive installation that spans two interconnected realms:
the physical world, navigated through an Arduino UNO microcontroller mounted beneath a table and connected to a custom circuit, and the virtual world, built in Unreal Engine and running in real time. The transition between these worlds is enabled by sending data from the physical interface into the virtual environment via a serial communication plugin.

The piece offers a gamified experience in which the audience interacts with capacitive touch sensors, proximity sensors, a stretch sensor, and a grip sensor—bringing back the sense of physicality often missing in immersive media accessed through HMDs (Head-Mounted Displays). As Slater highlights in his research on Plausibility and Place Illusion, interaction in VR is often not considered “valid” because users touch virtual objects but feel no tactile feedback. This installation reintroduces physical engagement to address that gap.

In the virtual world, characters inhabit a theatre-like scene. They begin in an idle, subtle breathing state and remain so until user input is detected from the sensors on the table. Once activated, the characters move according to the game logic: Unreal Engine retrieves data from Arduino, checks predefined conditions, and calls the corresponding animation blueprints—specifically, the relevant states within each character’s animation state machine.

The final work is not a rendered video but a packaged Unreal Engine application for Windows. It requires a live Arduino connection using the exact COM port and baud rate specified in the Unreal plugin blueprint. These settings must match the Arduino code uploaded to the microcontroller and the wiring of the custom circuit, both of which are explained in the making-of video.

This is a data-driven, real-time application, where character movement responds directly to user interaction with physical sensors. While the animation states are predefined, Unreal’s blend spaces allow for real-time interpolation based on incoming data. One example—shown on the poster—is a toggle sensor that makes two central characters begin walking. When the toggle remains on, a neighbouring proximity sensor controls the interpolation between walking in place, walking forward, running, and finally speed-running. As the user’s hand gradually approaches the cushion, the character accelerates until full contact triggers a “Naruto-like” sprint.

To achieve this, each sensor had to be calibrated. I measured their output ranges, mapped those ranges in Arduino, cleaned the data, sent it to Unreal, stored it globally, and then retrieved it locally within the blueprints.
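A minimal sketch of that per-sensor pipeline on the Arduino side might look like the following; the pin, measured range, smoothing factor, and send interval are placeholders rather than the values used in the installation.

// Minimal sketch of the per-sensor pipeline: read, clean, map, send.
// Pin numbers, ranges and timing are illustrative placeholders.
const int SENSOR_PIN = A0;
const int RAW_MIN = 120;                  // measured minimum of this sensor
const int RAW_MAX = 700;                  // measured maximum of this sensor
const unsigned long SEND_INTERVAL = 100;  // ms between serial updates

unsigned long lastSend = 0;
float smoothed = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(SENSOR_PIN);

  // Clean: light exponential smoothing to suppress jitter.
  smoothed = 0.8 * smoothed + 0.2 * raw;

  // Map the measured range onto 0-255 and clamp out-of-range readings.
  int mapped = map((int)smoothed, RAW_MIN, RAW_MAX, 0, 255);
  mapped = constrain(mapped, 0, 255);

  // Send at a steady pace so Unreal always reads complete lines.
  unsigned long now = millis();
  if (now - lastSend >= SEND_INTERVAL) {
    lastSend = now;
    Serial.println(mapped);   // Unreal stores this globally, blueprints read it locally
  }
}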

In this sense, the installation functions as data visualisation through digital body motion.

The project explores the role of animation in user experience, focusing on the visual feedback loop generated by user actions. Each action triggers a corresponding animated reaction, reinforcing the user’s sense of agency.

The interaction design is grounded in the motility of the user’s physical body, which is fundamental in VR and essential for enhancing avatar fidelity.
Ultimately, this work proposes a novel interaction model for the metaverse era—one that moves beyond keyboards and mice and instead relies on the user’s body, while still maintaining physical contact with an interface, either directly (touch) or indirectly (proximity).


FMP Artefact

Making Of/ Showreel




Exhibition

High fidelity concept design for the exhibition

Wireframe for the setup.

The end goal delivery.


The work offers a gamified experience in which the audience engages with a bespoke set of tactile sensors arranged on a table. Each sensor is mapped to a specific character or behaviour within the virtual scene. The interface includes:

  1. Silver cushion — Constructed from conductive fabric and foam. Programmed as a toggle (on/off).
    When activated, characters transition from idle to walking; when deactivated, they return to idle.

  2. Copper cushion — Conductive fabric with foam filling. Functions as a proximity sensor.
    When the silver cushion is on, this cushion modulates movement by interpolating between animation states based on hand distance. Full contact triggers the extreme value, causing a speed-run effect.

  3. Oval cushion — Conductive fabric and foam. A quick-touch sensor that triggers an Animation Offset in Unreal, forcing the character to perform a jump over the base animation.
  4. Apple under a glass dome — A real apple connected to the circuit. Serves as a proximity-based trigger.
    When the glass dome is touched, the apple’s signal causes a character to flip after bouncing off the screen edge.

  5. Touch Pad 1 — Made from conductive fabric and thread on a non-conductive base.
    A touch-and-hold sensor that makes the character dance for the duration of contact.

  6. Touch Pad 2 — Conductive thread stitched onto a non-conductive surface.
    Similar to Touch Pad 1, but interpolates from a static pose to an animation through an Unreal blend space Blueprint, and also drives a light effect: the character becomes illuminated or fades out as movement intensifies or slows.

  7. Stretch sensor — A resistive rubber sensor that detects changes in resistance.
    Each stretch event triggers a jump-and-hang animation.

  8. Grip sensor — Built with an LDR inside a grip-exercise tool.
    When fully closed and held for several seconds, it triggers a jump animation for both characters.
    Unlike the oval cushion’s offset jump, this version becomes a looping base animation.


Project management: delivery of the final piece

For the project management of the final setup—including soldering, wiring, consultations, getting the project approved for health and safety, laser cutting, fabrication tutorials, ongoing thesis work, exhibition team deadlines, printing, and more—I used a Miro board to organize and navigate the workload.

Setting up the UI, wiring the Arduino.

Reflection



My motivation for this project began during the course, when I was introduced to the concept of virtual twin technology. I had also worked with this idea during my summer internship and I became fascinated with the transition between the physical and virtual worlds.

The project started as an exploration of that connection through creating a virtual twin of a simple LED light as a proof of concept. Previously, I had successfully connected Arduino with Unity in my third term, but this time I wanted to use Unreal Engine because of its cinematic capabilities and to build on the skills gained during the course. This LED experiment became the starting point that pushed the whole project forward.

Although similar projects existed, most focused on audio or visual effects rather than real-time animation from Arduino input. This led me to develop an application that enables real-time exploration of a virtual twin.

My project was also shaped by my thesis research, which focused on immersive media and the sense of presence. I initially planned for the piece to work in VR, but technical limitations—mainly not having a strong enough GPU—made that impossible at the time.

Two key research sources influenced my direction:

  1. Matthew Ball’s work on the metaverse, which argues that traditional interfaces like the mouse and keyboard are outdated, and that future interactions will rely more on full-body engagement. This encouraged me to bring physical movement into the experience.
  2. Mel Slater’s research on Plausibility and Place Illusion, which explains how immersion breaks when virtual actions don’t produce physical sensations. That insight pushed me to design a more physical interface, leading me to experiment with capacitive touch and proximity sensors to reintroduce real, tactile interaction into the digital space.

A major part of the project was designing a full technical pipeline:

  • capturing sensory information from the physical space
  • processing and cleaning that data
  • sending it to the virtual world
  • triggering meaningful visual feedback based on user actions

Animation became a central part of the user experience—it acts as a feedback system, showing users what action they’ve triggered and reinforcing their sense of agency. I didn’t want this feedback to be too literal or obvious, so I used digital body movement as an abstract form of data visualisation. To make this work, I essentially had to “gamify” the experience inside Unreal Engine, creating rules and conditions that connect physical input to animated responses.

By gamifying, I mean implementing rules and conditions—so if a certain sensor is activated, a specific animation or visual effect happens. I also created dependencies, for example: one sensor must be activated before another one becomes interactive. This adds a learning curve and makes the experience more exploratory for the audience.
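As an illustration of what such a rule with a dependency looks like, the logic can be written out in plain C++; the real version lives in the Unreal Blueprints, and the names and thresholds below are hypothetical.

// Illustrative rule/dependency logic in plain C++ (the actual version is
// built in Unreal Blueprints). Names and thresholds are hypothetical.
struct SensorState {
  bool  toggleOn;      // silver cushion (toggle)
  float proximity;     // copper cushion, normalised 0.0 - 1.0
};

enum class CharacterAnim { Idle, Walk, Run, SpeedRun };

CharacterAnim chooseAnimation(const SensorState& s) {
  if (!s.toggleOn) {
    return CharacterAnim::Idle;          // dependency: the toggle must be on first
  }
  if (s.proximity > 0.95f) return CharacterAnim::SpeedRun;   // full contact
  if (s.proximity > 0.50f) return CharacterAnim::Run;
  return CharacterAnim::Walk;            // toggle on, hand still far away
}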

Parallel to the conceptual work, a huge amount of technical development was happening.

This included:

  • building and rebuilding the circuit multiple times
  • testing different ways of creating capacitive sensors
  • eventually switching to the Adafruit MPR121 for better precision and lower latency
  • constantly recalibrating sensors due to environmental noise
  • designing and sewing custom conductive sensors
  • soldering and securing all wiring safely
  • prototyping the exhibition table, drilling holes, measuring cable lengths, and making sure everything was hidden for both aesthetic and experiential reasons

The physical layout required careful planning because proximity sensors are extremely sensitive. They needed to be spaced correctly so they wouldn’t interfere with each other, and the wiring also had to be placed to avoid accidental noise or cross-triggering.

On the Unreal side, I created the entire virtual world populated with characters: some based on the fully optimised Unreal Manny, others generated with MeshyAI from my own concept art sketches, auto-rigged with Mixamo, and re-painted with new skin weights in Maya. I made extensive use of the campus motion capture lab, using both Vicon and Rokoko systems, then retargeted and cleaned the data in Maya and Cascadeur respectively. This gave me a deep understanding of how different mocap systems behave, what types of noise they produce, and how much manual cleaning is needed.

Inside Unreal Engine, much of my time went into building animation blueprints, state machines, and blend spaces. I learned how to design the logic behind transitions—how the system moves from one animation to another based on the incoming data from the physical world. This was a huge technical learning curve but also essential for creating smooth and immediate feedback, which is crucial for immersion.

I rewrote the Unreal code at least five times to reduce latency and ensure the system responded almost instantly. Without that, the illusion and the whole concept of physical-virtual connection would fall apart.

So overall, the project operated on multiple layers at once:

  • exploring interfaces for the metaverse era
  • integrating body movement and physical interaction
  • building a robust hardware system
  • designing a virtual world
  • creating animations that serve as abstract feedback
  • constructing a new, playful interaction experience that the audience can explore

Categories
Final Major Project

The novel Interface

Over the past week, I returned to CTH for another meeting with Joanne, where we reviewed the sensors I’ve been building. Moving beyond the breadboard stage, my priority was to make each component robust enough for exhibition—avoiding exposed cables, ensuring stable connections, and designing a layout that felt intentional and visually integrated with the table surface.

We discussed the configuration of each sensor: capacitive touch, proximity sensing (Adafruit MPR121), the LDR component, and the stretch sensor. The LDR was embedded into the hand-training grip so that when the user closes their hand, the reduced light exposure becomes measurable data. This simple but effective setup allowed me to detect hand closure without obtrusive hardware. Soldering the power, ground, and data connections, and making sure the resistor bridges were secure, was a careful process—especially knowing these sensors needed to withstand continuous public use.
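A rough sketch of how that LDR reading could be handled on the Arduino, including the held-closed-for-several-seconds check used by the grip sensor, might look like this; the pin, threshold, and hold time are assumptions rather than the calibrated values.

// Sketch of reading the LDR inside the grip tool (pin, threshold and hold
// time are assumptions, not the calibrated exhibition values).
const int LDR_PIN = A1;
const int CLOSED_THRESHOLD = 150;            // low light reading = hand closed
const unsigned long HOLD_TIME = 3000;        // ms the grip must stay closed

unsigned long closedSince = 0;
bool gripTriggered = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int light = analogRead(LDR_PIN);
  bool closed = light < CLOSED_THRESHOLD;

  if (closed && closedSince == 0) {
    closedSince = millis();                  // grip just closed
  } else if (!closed) {
    closedSince = 0;                         // grip released, reset
    gripTriggered = false;
  }

  // Only report the trigger once the grip has been held long enough.
  if (closed && !gripTriggered && millis() - closedSince >= HOLD_TIME) {
    gripTriggered = true;
    Serial.println("GRIP");                  // Unreal reacts with the jump animation
  }
}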


Making sensors and soldering the wires.

The stretch sensor was the most challenging to engineer. Since it relies on changes in resistance, the material had to be both responsive and durable. Many tests failed—the material would snap under repeated tension. After numerous prototypes, I found a physiotherapy-grade resistance band that could handle high force without tearing. It became the final outer layer, allowing the stretch sensor to survive real interaction while still producing clean data for animation.
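In circuit terms, the band's changing resistance can be read as one half of a voltage divider on an analogue pin. The sketch below is a rough illustration of how such a stretch event could be detected; the pin, threshold, and cooldown are assumptions, not the calibrated exhibition values, and the direction of the change depends on how the divider is wired.

// Reading the stretch sensor as one half of a voltage divider on A2.
// Baseline and threshold values are assumptions for illustration.
const int STRETCH_PIN = A2;
const int STRETCH_DELTA = 80;     // how far above the resting baseline counts as a stretch

int baseline = 0;
unsigned long lastEvent = 0;

void setup() {
  Serial.begin(9600);
  baseline = analogRead(STRETCH_PIN);   // calibrate the resting value at startup
}

void loop() {
  int value = analogRead(STRETCH_PIN);

  // A stretch event is a reading well above the resting baseline,
  // with a millis()-based cooldown so one pull triggers one jump.
  if (value - baseline > STRETCH_DELTA && millis() - lastEvent > 500) {
    lastEvent = millis();
    Serial.println("STRETCH");          // Unreal plays the jump-and-hang animation
  }
}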

Another layer of complexity was the spatial layout of the sensors on the table. Proximity sensors react to distance, so placing them at the front would have caused accidental triggers if users reached for objects behind them. I redesigned the arrangement so the interaction zones were intentional, with enough separation to avoid cross-activation. I prototyped the entire layout in cardboard, cutting holes to match the table dimensions and routing the wiring discreetly. This allowed the exhibition team to drill precise openings while keeping the final setup clean and mysterious—no visible cables, no obvious interface.



All of this aligns with the core aim of my project: restoring a sense of physicality often missing in virtual reality. While capacitive sensing projects often rely on sound as feedback, I wanted animation to be the primary response—movement that reflects users’ tactile engagement. The work avoids conventional UI elements like keyboards and mice, instead grounding interaction in the body’s own motility. Phenomenologically, we understand the world through movement—our bodies act, reach, stretch, press, and respond. I wanted to translate this embodied knowledge into the metaverse.

Matthew Ball’s definition of the metaverse emphasizes interconnected, real-time 3D environments, not necessarily tied to head-mounted displays. Yet many VR experiences compromise the user’s full physical expression. With my system—combining touch, proximity, tension, and light—I’m exploring a new interaction model where the user’s body remains central. These sensors provide a richer tactile language, carrying physical actions into a virtual space where animated characters respond accordingly.

This iterative, hands-on process has been demanding but essential. It’s helping me reimagine interaction in the metaverse—not simply as digital input, but as embodied engagement rooted in the physical world.

Categories
Final Major Project

Unreal Project packaging


1. Visual Studio 2022 setup with the following modules/dependencies:

  • .NET desktop development
  • Desktop development with C++
  • .NET Multi-platform App UI development
  • C++ profiling tools
  • C++ AddressSanitizer
  • Windows 10 or 11 SDK (10.0.18362 or newer)
  • Unreal Engine installer




2. Test against a working setup by creating an empty C++ game template, to ensure that the version of Unreal Engine is compatible with the Visual Studio IDE.



3. Convert the Blueprint project into a C++ project by creating an empty C++ actor class; this will automatically open the project solution in the Visual Studio IDE.


4. Unreal Project setup

The correct level must be assigned as the default map for the application.

Customise the project so the build is branded and bespoke.

Shipping – for release builds and for sharing updates.

* Packaging throws errors and fails to compile unless the following are disabled:
“Settings”,
“XRCreative”,
“VirtualScouting”

Development – for builds where debug nodes are needed.

Final packaging of the app:

Sensors need to be calibrated before the final export.

Be sure to set the correct COM port for the Arduino inside the Unreal Blueprint; otherwise the packaged application will not connect to the Arduino.



Categories
Final Major Project

MOCAP data


Motion was captured using the facilities at LCC. I used both of the existing mocap systems: the IMU-based (inertial measurement) system, Rokoko, and the camera-based system, which uses body markers attached to a suit and tracked by cameras. This started as an experimental, practice-based approach to my thesis topic, in which I evaluate the animation fidelity of avatars in the virtual worlds of social VR, but it also became an experimental tool for producing animation at the level of today's production studios.

There’s always a learning curve. It must be said that this is a particularly hands-on experience due to the nature of the technology, and despite the technician being there to supervise you, there are many practical things to learn through your own experience, self-taught through an array of mistakes and wrong assumptions. So here’s my own list:

1. Do not stare at the PC. Your avatar may be catching all your attention, but you will end up with head data that looks constrained to that TV screen.

2. Every time, start your pose in a T-shape. This is very important for retargeting the data inside Maya’s built-in tool, the Character Definition. If you don’t have that T-pose as a reference, your retargeting will output very awkward results: a broken body, twisted and tangled, or just slightly misplaced.

3. Come to mocap with a clear idea in mind. For example, creating a game cycle animation means you have to try to match the starting pose with the ending pose, for a better result and less struggle recreating poses later on.

4. Be mindful of timing. Long recordings add up in data very quickly, which means Maya will take its sweet time importing and processing. Don’t even mention working with the animation graph of long footage at 120 FPS.

5. Camera-based systems are the best mocap. They require more setup to start with, such as getting all the wearables on and the calibration process right; sometimes it must be repeated, so give yourself an hour on top of the anticipated time on the day of recording. This mocap has high precision and accuracy. The DIY hand setup we used before the Vicon gloves arrived, though, was painful to work with, as sweating hands caused the finger markers taped on to fall off.

6. IMU systems like Rokoko are easy and quick (as long as the designated router is working), but they are not precise and they drift over time. There’s significant misalignment between the physical body and the avatar; it almost makes you feel you need to move very slowly to keep your avatar following along.



Workflow for Cleaning and Preparing Mocap Data

SOFTWARE USED: VICON – MAYA – CASCADEUR – UNREAL

Below is a guide I’ve written based on the process I undertook.

Below is the full pipeline I use for processing motion-capture animation before bringing it into Unreal Engine. This workflow combines Maya, Mixamo, and Cascadeur to achieve clean, polished animation.


1. Prepare the Animation Environment in Maya

Set Maya’s animation mode to Spline (or Auto/Legacy Spline) so the curves are smooth from the beginning.


2. Import the Mixamo-Rigged Character

Import your Mixamo-rigged character into Maya.

If you have already created a HumanIK definition for this character, load the preset; otherwise, create a new Character Definition and save it for reuse.


3. Import the Mocap Skeleton

Bring in the raw mocap FBX.

  • Add a root joint and parent the mocap skeleton under it.
  • Rename its Character Definition (e.g., Driver).
  • Assign the appropriate joints in the HumanIK panel.

    Vicon data is missing a root joint, which results in errors in Maya/Unreal. This is solved by creating a joint at the origin (0,0,0) and making it the parent of the rest of the skeleton.

Edit the namespace with a ‘mocap’ prefix so that Maya recognises the mocap skeleton and successfully applies HIK.

4. Characterise the Mocap Skeleton (HumanIK)

Use HumanIK to create a proper rig definition:

  • Use frame 0 (or any frame where the character is in a T-pose).
  • Complete the HIK mapping for the mocap skeleton.
  • This becomes your Driver.


5. Retarget the Mocap to the Character

Set the Source to the mocap rig (Driver) and the Character to your Mixamo rig (Driven).

Check joint alignment in the viewport to ensure the motion transfers correctly.


6. Baking Options

Depending on where you plan to clean the animation:

6a. Clean in Maya

Bake animation onto the Control Rig if you want to refine the movement using IK inside Maya.

6b. Clean in Cascadeur

Bake animation onto the skeleton if you plan to refine the movement in Cascadeur.


7. Export the Animation (Spline Curves Recommended)

When exporting from Maya:

  • Ensure the animation curves are in Spline.
  • Optional: Apply Euler filtering and Butterworth smoothing to reduce mocap jitter and improve visual quality.
  • Export only the animated skeleton.

8. Transfer from Maya to Cascadeur

Use the Maya-to-Cascadeur Bridge (must be installed beforehand).

Send the characterised Mixamo rig to Cascadeur.
Make sure you select Mixamo Spine 2 as the chest controller since Cascadeur’s Auto-Posing relies on that structure.


9. Import Animation into Cascadeur

Drag and drop the exported FBX directly into Cascadeur.

9a. Extract the Mocap Keyframes

  • Switch to Joints Mode.
  • Select all joints and all keys of the mocap animation.
  • Copy the full interval (Ctrl + Shift + C).

9b. Paste onto the Cascadeur Character

  • Open your Mixamo-rigged character scene in Cascadeur.
  • Switch to Joints Mode.
  • Select the character’s joints and timeline.
  • Paste the mocap interval onto the timeline.

Now you can use Cascadeur’s Auto-Posing, Physics Tools, and Tweaking workflow to produce high-quality, natural animation.




Resources:

Retargeting mocap in Maya using a Mixamo character:

https://www.youtube.com/watch?v=sfNlMUMdVyw


Cascadeur bridge with Maya:

https://www.youtube.com/watch?v=tiTGzay7Xto
https://github.com/Nathanieljla/cg3d-maya-casc

Categories
Final Major Project

Custom character creation workflow





For this character, the creation pipeline followed a multi-step process. I began by producing a hand-drawn concept sketch. After photographing the drawing, I brought the image into Photoshop to clean up any unwanted marks, sketch lines, or background artefacts. Next, I ran it through a light pass of KreaAI—only around 1% strength—just enough to smooth out the lines and clarify the artwork without altering its style. This refinement step is important for the stages that follow.

On the left-hand side is the drawing I made and on the right-hand side is the KreaAI output; this is the image comparison.

Once the artwork was properly prepared, I used it as an input for Meshy AI (which I subscribe to). Because the concept art is my own original work, the resulting model remains fully under my licence, as per MeshyAI regulations.

Working with image-based generation in Meshy has a learning curve. The input must be extremely precise, especially when specifying a correct T-pose. Details like hand orientation and finger separation are surprisingly important. When the fingers overlap in the drawing—or when the palms aren’t facing downward—the resulting rig will often have incorrect wrist rotations. These issues don’t always appear obvious at first, but they cause significant deformation later once the model is animated.

Another common challenge with image-based generation is incomplete or distorted topology. The model might look fine at first glance, but closer inspection reveals missing geometry, merged limbs, or strange deformation—such as thighs fused together at the back of the model. These issues show why early stages matter: getting things right at the start saves a lot of cleaning later.

I used MeshyAI’s built-in texturing tool, based on my own concept art.

After generating the mesh, I exported it and moved to Mixamo for auto-rigging. Mixamo allows for quick animation previews, which makes it easy to test how well the mesh deforms. However, Mixamo’s auto-skinning can be imperfect. Problems tend to appear around the elbows, wrists, and spine, where weights are often inaccurate. Because of that, I brought the rigged model into Maya to manually clean the mesh and adjust the skin weights. This part was particularly challenging because the auto-generated models often have high face counts, making precise selections difficult.



Overall, AI tools streamline parts of the workflow—generation, rigging, and iteration become faster—but they aren’t a complete solution. Clear inputs and good technical understanding are still essential, especially when the pipeline spans multiple applications.

For this project, creating custom characters was important for both the artistic direction and the installation’s interactive feedback loop. While I also used default rigs that are fully optimised, it was important to me that the art direction involved my own input. Coding, interface design, and character creation are each complex, so leveraging AI for parts of the workflow allowed me to bring the visual elements to life without compromising quality or originality.

Categories
Final Major Project

Understanding Latency Between Arduino and Unreal Engine: Delay vs. Millis, Data Pacing, and Smoothing

One of the first major challenges I encountered in this project was unexpected latency and jitter in the animation feedback inside Unreal Engine. At the beginning, this was frustrating because the visual behaviour looked like a hiccup—small recursive jumps in the animation, happening regularly. After investigating the issue end-to-end, I now understand much more clearly how timing, data pacing, and sensor noise interact between the physical world (Arduino) and the virtual world (Unreal Engine).

This section describes what caused the latency, why delay() and millis() behave differently, and how I ultimately fixed the issue.


1. Two Different Worlds With Two Different Clocks

The core issue comes from the fact that Arduino and Unreal Engine operate on completely separate update loops:

  • Arduino sends sensor data at whatever pace your sketch defines.
  • Unreal Engine reads the serial port at the pace defined in Blueprints (e.g., using a timer running every X milliseconds).

If these two paces do not match, Unreal will sometimes read:

  • half-written lines,
  • empty strings, or
  • strings that cut off mid-value,

which produces 0, truncated values, or invalid numeric conversions.
This is exactly what caused the visible jitter in animation.


2. Why delay() Causes Problems

In my early implementation, I used Arduino’s built-in delay() function to control the output frequency.

What delay() actually does

delay(100) literally freezes the entire microcontroller for 100 milliseconds.
During that freeze:

  • no sensor updates occur
  • no serial writes occur
  • capacitive updates pause
  • the device cannot respond to anything

This creates blocking behaviour and interrupts the natural flow of data.

When Unreal tries to read the serial port during one of these blocked periods, it often receives an incomplete line or nothing at all — which appears as a value of 0 after parsing.

This is why I was seeing irregular spikes and jitter in animation.


3. Why millis() Works Better

millis() allows us to implement non-blocking timing.
Instead of freezing the Arduino, we check whether the required time interval has passed:

// Declared globally:
unsigned long lastSendTime = 0;
const unsigned long sendInterval = 100;   // ms between serial sends

// Inside loop():
unsigned long now = millis();
if (now - lastSendTime >= sendInterval) {
    lastSendTime = now;
    // read sensors and send one complete line of data
}

The advantages:

  • Arduino keeps reading sensors continuously
  • capacitive updates continue uninterrupted
  • serial communication never freezes
  • Unreal receives consistent packets
  • no backlog or piling up of delayed operations

This is crucial when sending multiple sensor values in one line.


4. Why Unreal Recursion Made the Problem Worse

Originally, I used a recursive function in Unreal to constantly read the serial port.
Inside that recursive call, Unreal Blueprint also contained its own Delay node.

This created a double-latency situation:

  • Arduino was delayed
  • Unreal was delayed
  • Their delays drifted out of sync over time

This explains the “recursive hiccup” behaviour — Unreal repeatedly attempted to read data during moments when Arduino was not sending anything because it was frozen by delay().

The result:
empty strings → parsed as zero → visual jitter in animation.

Fix

I replaced the recursion with Unreal’s:

Set Timer by Function Name
✔ consistent “read every X ms” behaviour
✔ no recursion
✔ no stack buildup
✔ consistent pacing

This matched Arduino’s non-blocking millis() timing and removed the jitter.


5. Why Smoothing the Data Is Essential

Another problem was that analogue sensors (stretch sensors, LDRs, proximity deltas) are naturally noisy.
Unfiltered raw data is unstable:

  • small spikes
  • sudden drops
  • inconsistent readings

Using raw input for animation blending is especially problematic — blend spaces expect gradual transitions, not random jumps.

Solution: Smoothing

On Arduino, we can smooth each sensor individually before sending:

  • Moving average
  • Exponential smoothing
  • Median filtering
  • Sample averaging

This reduces noise drastically and produces stable, predictable curves for animation.
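As one example, a simple moving average can be implemented on the Arduino in a few lines; the window size and pin used here are illustrative, not the tuned values.

// Simple moving average over the last N samples of one analogue sensor.
// N and the pin passed in are illustrative choices, not the tuned values.
const int N = 8;
int samples[N];       // zero-initialised at global scope
int idx = 0;
long total = 0;

int readSmoothed(int pin) {
  total -= samples[idx];           // drop the oldest sample
  samples[idx] = analogRead(pin);  // read a fresh one
  total += samples[idx];
  idx = (idx + 1) % N;
  return total / N;                // average of the last N readings
}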


6. Guaranteeing Correct Data Parsing in Unreal

Because I send 5 sensor values in every line, Unreal must ensure it reads a complete line, not a partial one.

My message format:

touch, toggle, proximity, stretch, LDR

To prevent Unreal from parsing incomplete packets, I check:

✔ ArrayLength == 5

Only if the incoming line splits into exactly 5 values do I allow animation logic to run.
If not, that frame is skipped.

This eliminates crashes and unexpected behaviour caused by truncated serial messages.
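Written out in plain C++ rather than Blueprint nodes, purely as an illustration of the same guard, the validation looks roughly like this:

#include <sstream>
#include <string>
#include <vector>

// Split one serial line on commas and only accept it if it holds exactly
// five values - the same guard the Blueprint applies with ArrayLength == 5.
std::vector<float> parseLine(const std::string& line) {
  std::vector<float> values;
  std::stringstream ss(line);
  std::string token;
  while (std::getline(ss, token, ',')) {
    try {
      values.push_back(std::stof(token));
    } catch (...) {
      values.clear();              // garbled or empty token: reject the whole line
      return values;
    }
  }
  if (values.size() != 5) {
    values.clear();                // incomplete packet: skip this frame
  }
  return values;
}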



Summary

Why the jitter appeared:

  • Arduino froze during delay() → incomplete serial messages
  • Unreal read during these frozen periods → empty strings
  • Unreal recursion amplified timing drift
  • raw analogue data introduced noise

What fixed it:

  • replacing delay() with millis() (non-blocking timing)
  • using Unreal’s timer instead of recursive calls
  • smoothing sensor data before sending
  • verifying ArrayLength == 5 before parsing

Result:

Smooth, stable, responsive interaction between physical sensors and Unreal animation.

Categories
Final Major Project

Research: How immersive is enough?


Cummings and Bailenson’s meta-analysis, How Immersive Is Enough?, challenges the simple idea that “more immersive technology always produces more presence.” Their argument is that although higher technological immersion can increase presence, the relationship is uneven across different system features.

Their findings show that technological immersion has an overall medium-sized effect on presence. Display quality is treated as a technological variable, while presence is the psychological experience of “being there.” Importantly, not all technological features contribute equally. Features such as user tracking, stereoscopic visuals, and a wide field of view have a stronger effect on presence than features like high-resolution imagery or high-quality audio.

Increased presence often intensifies user affect: the more present someone feels, the more their responses to virtual stimuli resemble their responses to real-world equivalents. The goal of immersive technology is to create interactions and sensory cues that approximate real-world counterparts closely enough to evoke those parallel reactions.

A persistent problem in the literature is that presence and immersion are often used interchangeably, which causes conceptual confusion. The distinction only began to solidify in the late 1990s with foundational work in telepresence. Presence in mediated environments is fundamentally a psychological phenomenon, which means it is subjective. When describing presence, it is essential to frame it as something that may be experienced rather than something that is guaranteed.

Presence reflects the degree to which a person experiences a mediated environment as the place where they are consciously located. Slater and Wilbur (1997) note that a system is more likely to be immersive if it offers high-fidelity stimulation across multiple sensory modalities. This echoes Merleau-Ponty’s emphasis on the multisensory nature of cognition: vision, touch, sound, and body movement all contribute to our sense of being in a place.

A general rule of thumb is that the more immersive the system, the greater the likelihood of presence, though individual differences mean not everyone will feel presence to the same degree. Immersion increases the opportunity for presence but does not guarantee it.

Slater describes two conditions needed for spatial presence:

  1. The user must draw on spatial cues to perceive the environment as a plausible space.
  2. The user must experience themselves as located within that space.

The second condition depends on how well the system represents the user’s body and actions, including body tracking and responsiveness. This links to concepts such as body ownership, self-presence, and agency.

Wirth et al. (2007) describe presence as a binary state in which perceived location and action possibilities become tied to the mediated environment. In this state, mental capacities are organized around the virtual space rather than the physical world. This is critical because mental capacities are contained within the boundaries of the mediated environment, and those boundaries are defined by the technology itself. Virtual worlds operate according to the physics of the game engine, the logic of the code, and the constraints chosen by designers. In other words, the structure of the environment shapes the structure of the user’s possible actions and perceptions.

Within my research design, I am focusing on one particular aspect of immersive systems: avatar animation fidelity. This includes the degree to which avatar expressions and movements are possible, expressive, and physically convincing. This involves both visual fidelity (how the avatar looks) and animation fidelity (how well the avatar moves, emotes, and mirrors the user’s physical actions). Since this research identifies avatar fidelity as one of the impactful features of immersive technology, my study centers on how these expressive capacities influence the user’s sense of presence.

Advanced immersive systems rely on faster update rates and higher fidelity, including more detailed avatars and richer facial expressions. Animation fidelity plays a central role in creating a believable correspondence between the user’s actions in the physical world and the avatar’s behavior in the virtual one.


Terminology:

Mediated experience

experience that is interpreted and given meaning through cognitive processes rather than perceived directly. Mediated environments deal with the secondary properties of objects: what the system presents, not what physically exists in front of the user.

Categories
Final Major Project

Capacitive Sensors

Building and Refining a Capacitive Sensor System for Unreal Engine


I began this project by experimenting with DIY capacitive sensors built from conductive materials connected to an Arduino. My goal was to capture both touch and proximity data and use that information to control character animation inside Unreal Engine. The early setup used simple resistors: 1 MΩ for touch sensors (small surface area) and 100 MΩ for proximity sensors (larger surface area). After some tuning, the system worked well enough to stream sensor values into Arduino and pass them to Unreal through a serial connection.

Mapping Sensor Data to Animation

Once Unreal was receiving a comma-separated stream of values, I used the “Parse into Array” function to split the data and feed it into animation logic. One of my first tests was to connect proximity values to a blendspace controlling a character’s transition from walking to running (see blend samples). Higher proximity values meant the user’s hand was closer to the sensor, which increased the character’s running speed.

Before building the blendspace logic, I had to measure the actual sensor range. For example, if the maximum proximity reading was around 700 (in the picture above the reading at the time was 100 from the Arduino, and I opted for 75 for the blendspace), that value became the threshold for the “full running” state in the blendspace. Values between the minimum and maximum produced interpolation between walking and running, visually resulting in a sort of jog. This was an important early step: confirming the sensor range and mapping it cleanly into Unreal’s animation system.
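Conceptually, that mapping is just a clamped linear scale from the measured sensor range onto the blendspace axis. The sketch below writes it out in Arduino-style C++ purely as an illustration; in the project the interpolation itself happens inside the Unreal blendspace, and the minimum value here is an assumption.

// Map a raw proximity reading onto the walk-to-run blendspace axis.
// The 700 ceiling and 75 axis maximum follow the values mentioned above;
// the minimum is an assumption.
const int PROX_MIN = 0;
const int PROX_MAX = 700;     // measured maximum proximity reading
const int AXIS_MAX = 75;      // "full running" sample position in the blendspace

int toBlendAxis(int raw) {
  raw = constrain(raw, PROX_MIN, PROX_MAX);
  return map(raw, PROX_MIN, PROX_MAX, 0, AXIS_MAX);  // in-between values give the jog
}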

A Layered Workflow

The overall workflow quickly became layered:

  1. Hardware
    Building DIY sensors, wiring circuits, and tuning resistor values.
  2. Arduino Programming
    Reading input, filtering noise, and sending reliable data over serial.
  3. Unreal Engine Integration
    Parsing the data, storing it in game components, and distributing it to Blueprints.
  4. Animation
    Designing blendspaces, setting up transitions, and creating smooth visual feedback.

Each layer depended on the previous one. Small issues in hardware or serial timing often showed up later as animation glitches.

Prototyping process so far

To begin testing, I wired a potentiometer and LED on a breadboard. I verified that changing the resistance adjusted the LED brightness. Then I mirrored this behavior in Unreal by sending potentiometer values over serial and mapping them to light intensity. This created a simple “digital twin” of the LED inside Unreal. This was my first complete hardware-to-software loop, and it helped confirm that serial communication and input parsing were working correctly.

Tinkercad wiring: LED with a 220 kΩ resistor

Choosing Capacitive Sensors

My goal was to move beyond conventional interactions such as buttons or sliders. I wanted users to interact through bodily movement and proximity rather than standard UI controls. Capacitive sensors fit this vision because they support touch, toggle, and proximity detection, which creates richer possibilities for movement-based input. Although I initially considered distance sensors, the flexibility of capacitive sensing seemed more appropriate.

Problems with the DIY Sensors

The DIY sensors worked, but they produced unstable behavior when used at high frequency. Unreal was receiving rapid spikes or drops in values, which created harsh jumps in the blendspace. The animations responded too literally to every jitter, so the character would abruptly shift between states, which was visually disruptive.

From an animation perspective, I wanted ease-in and ease-out behavior instead of instant transitions. The jitter not only looked bad, but it also broke the intended user experience, since the system emphasizes animation as feedback for user actions.

Another major issue was latency. At times, there was a noticeable delay between the user touching a sensor and the animation updating in Unreal. This lag broke the illusion of responsiveness and the immersion and made interaction feel unreliable.

Setup with the DIY capacitive controller using 1 MΩ resistors and a single capacitive sensor (conductive copper material + DIY alligator clip)


Capacitive sensing

#include <CapacitiveSensor.h>
Imports the library that lets Arduino measure touch input by timing how long it takes to charge/discharge a pin.

CapacitiveSensor Sensor = CapacitiveSensor(4, 6);
Creates a CapacitiveSensor object.

  • Pin 4 → “send” pin (outputs charge pulses).
  • Pin 6 → “receive” pin (connected to your conductive fabric through a resistor).
    So between pins 4 and 6, you have a resistor (usually 1–10 MΩ).

  • long val; → stores the sensor reading.
  • int pos; → stores the LED’s state (0 = off, 1 = on).
  • #define led 13 → defines the LED pin (the built-in LED on most Arduinos).

Wire a 1–10 MΩ resistor between the send pin (4) and the receive pin (6). Connect your conductive fabric to the receive pin.

Capacitive sensing requires two pins, a send pin and a receive pin, in that order within the constructor argument, connected through a resistor of at least 1 MΩ. The second (receive) pin is the one connected to the material.

Tinkercad wiring: 1 MΩ resistor for the capacitive sensor
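Putting those fragments together, a complete minimal test sketch might look like the following; the touch threshold of 1000 is an assumption and would need calibrating to the actual fabric and resistor.

// Complete minimal sketch assembled from the fragments above. The touch
// threshold is an assumed value for illustration only.
#include <CapacitiveSensor.h>

CapacitiveSensor Sensor = CapacitiveSensor(4, 6);  // send pin 4, receive pin 6

long val;            // sensor reading
int pos = 0;         // LED state: 0 = off, 1 = on
#define led 13       // built-in LED

void setup() {
  Serial.begin(9600);
  pinMode(led, OUTPUT);
}

void loop() {
  val = Sensor.capacitiveSensor(30);   // 30 samples per reading
  Serial.println(val);                 // watch the raw values to pick a threshold

  if (val > 1000) {                    // assumed touch threshold
    pos = 1;
  } else {
    pos = 0;
  }
  digitalWrite(led, pos);

  delay(100);                          // simple pacing for this standalone test only
}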

Switching to the MPR121

During a consultation with Joanne at CTH, I was advised to switch to the Adafruit MPR121 capacitive sensing breakout. Unlike my DIY setup, the MPR121 includes built-in resistors, calibrated touch detection, and support for up to 12 electrodes. It is safer, more stable, and better optimized for consistent proximity readings.

Switching to the MPR121 meant starting over. I had to learn how the sensor worked, rewrite my Arduino code, and adjust my Unreal parsing logic. Because my original Unreal code ignored zeros to avoid noise from the DIY sensors, the new sensor output behaved incorrectly until I removed that filtering. Once updated, the readings became consistent across all electrodes.

One issue I encountered early on was a mismatch between the Arduino’s pace of sending data and Unreal’s pace of reading data. If these two rates do not align, Unreal sometimes reads the serial buffer at a moment when no fresh data has arrived. This produces an empty string, which Unreal interprets as 0. Those unexpected zeros caused visible spikes in the animation, because a single frame at “0” looked like an abrupt drop in proximity.

Delays can occur at any stage of the pipeline, but in my setup they were mainly caused by the Unreal Blueprint logic. I had followed a YouTube tutorial that used a recursive function to continuously read the serial port. Inside that recursive function was a Delay node. Over time, this delay created latency between the actual sensor updates and Unreal’s polling rate. Because of that latency, Unreal would occasionally read the buffer too early, producing empty strings and therefore zeros.

To avoid the sudden drops in values, I filtered out zeros entirely and only processed values above zero. This prevented jitter and kept the animation stable.

To optimize the Blueprint, I stopped using the recursive function approach and instead called a separate function with a Delay node only once at Begin Play. This removed the accumulated latency and made the system more consistent.

However, once I switched to the new MPR121 capacitive sensor, the old filtering no longer worked. The new sensor could legitimately output low values, so ignoring zeros caused incorrect behavior. I had to revise the Unreal code to properly read and handle the MPR121’s data without relying on the previous workaround.

Touch vs Proximity on the MPR121

The MPR121 handles touch and proximity differently:

  • Touch readings are highly stable and binary-like. The sensor reliably reports when an electrode is touched or released.
  • Proximity requires interpreting the raw filtered data rather than relying on simple touch events. Proximity values fluctuate more gradually, which is useful for driving continuous animation parameters such as blendspaces.

This difference helped create much smoother transitions in Unreal and eliminated the jitter issues I had with the DIY approach.

MPR121

Setup with the MPR121: a single capacitive sensor (conductive copper material + alligator clip), no breadboard required, electrode 0 connected via a female-to-male connector

So when working with the capacitive sensors and different fruit, the readings still needed fine-tuning. I had to find a “sweet spot” for the pacing of the sensor data, because the delay inside each loop of the Arduino code directly affects how smoothly the visuals animate in Unreal.

After troubleshooting, I discovered that a 200-millisecond delay in the Arduino loop produced the best results. Anything lower or higher caused the animation to drift out of sync with the sensor cycle, which created visible artefacts or stuttering in the visual output.

With the 200-millisecond timing, the animation in the Unreal visual prototype appeared smooth, stable, and visually pleasing.

Reflection on the Learning Curve

This process involved much more than I expected at the beginning. I had to:

  • Troubleshoot noisy DIY sensors
  • Understand how serial timing affects Unreal’s data processing
  • Rework Blueprints to handle real-time input cleanly
  • Learn a new sensor system (MPR121) from scratch
  • Refactor Unreal parsing logic after switching hardware
  • Rebuild the animation logic to respond smoothly rather than abruptly

The experience highlighted how interconnected physical computing and digital animation are. Each layer—hardware, code, engine integration, and animation—affects the next. Small assumptions in one layer can create major issues downstream.

Despite the challenges, the result is a much more stable and responsive system. The MPR121 significantly improves reliability, and the animation feedback now matches the user’s physical actions more closely, which is important for my project’s focus on embodied interaction.

Categories
Final Major Project

Game instance, casting, inheritance UE5

The final result is shown in the video above. The process is described below.

During the setup for this project, I’ve explored and implemented a basic game logic system in Unreal Engine that integrates real-time data from an Arduino device. The first important concept is the Game Instance.

The Game Instance in Unreal Engine allows to store data that persists across all levels. It effectively acts as a single source of truth, holding all global variables that other Blueprints can access.

In my case, the animation system is driven by real-time data coming from the Arduino. The Arduino sends sensor values to Unreal Engine via a serial communication plugin, which I handle inside a dedicated Serial Communication Blueprint.

Here’s the overall data flow:

  1. Arduino → sends sensor data.
  2. Unreal Engine Serial Communication Blueprint → receives and processes the data.
  3. Game Instance (GL_logic) → stores the processed data globally.
  4. Character Animation Blueprint → retrieves the data to drive animations.

This order follows Unreal’s hierarchy (inheritance) and data-flow principles. It would be incorrect to try to reference the communication Blueprint directly inside the animation or main game logic Blueprints. Instead, Unreal uses a system called Casting to access data or functionality from another object.

In the Serial Communication Blueprint, I perform a cast to the Game Instance (GL_logic) and set a variable called Intense with the mapped Arduino value.

The process looks like this:

  • The Arduino sends data as a string.
  • The Blueprint reads the data (Serial Read Line function) and converts that string to a float.
  • The float is then mapped from the Arduino’s range (0–255, the range of values predefined within the code logic) to a new range (0–100,000, the range used for the light intensity).
  • Finally, this mapped value is stored in the Intense variable inside the Game Instance, which acts as the global data container.

Then, inside the Animation Blueprint’s Event Graph, I use the Get Game Instance node, cast it to my custom Game Instance (GL_logic), and retrieve the value of Intense. That value is then assigned to a local variable called SensorIntensity inside the Animation Blueprint.

This local variable (SensorIntensity) is used as a threshold within the State Machine to determine transitions between animation states.

For example, in my testing setup:

  • If the mapped Arduino value exceeds 70,000 (roughly 70% of the mapped range), the character transitions from the Idle animation to the Walking animation.

This system allows me to achieve real-time, data-driven animation, where the animation is not pre-rendered but responds dynamically to user input and sensor data.

In summary, this setup demonstrates:

  • The use of the Game Instance as a persistent global data store,
  • The use of Casting for cross-Blueprint communication,
  • And how the data flow follows Unreal’s inheritance and logic hierarchy to connect Arduino input with character animation in real time.


Animation – state machine

The Animation Blueprint class is created using a Blueprint that’s assigned to the same skeleton as the character you want to animate. It’s essential that both the character and the Animation Blueprint share the same skeleton; otherwise, the animations won’t work correctly.

Once the Animation Blueprint is saved, it can then be applied to the character in the Animation section of the character’s details panel, under the Animation Class of the character mesh.


Resources:

Animation State Machine: Unreal Engine 5 Tutorial – Animation Blueprint Part 1: State Machines

Main Game Component: How to use the Game Instance In Unreal Engine

Casting: A COMPLETE guide to CASTING in Unreal Engine 5!

Categories
Final Major Project

Multi-input interaction; handling CSV within Unreal

Exploring Multi-Input Interaction Between Arduino and Unreal Engine

After getting the basic serial communication between Arduino and Unreal Engine working, I wanted to push the setup further — not just sending a single stream of data, but handling multiple inputs at the same time. The idea was to move from simple one-sensor control to a system capable of reading different kinds of data and using them to drive changes in real time within a virtual environment.

Phase One: Connecting and Testing with a Single Input

I started by connecting a potentiometer to the Arduino and sending its values to Unreal through the serial port. Once I confirmed the data was arriving correctly, I used it to control a visual parameter inside Unreal — for example, adjusting the brightness of a light in real time.

This first step was crucial for understanding how Unreal’s serial communication works and how data from Arduino could be mapped to environmental changes. It also helped me get familiar with how to handle Blueprints, Unreal’s visual scripting system.

Phase Two: Managing Communication Between Blueprints

Once I had data coming in, I needed a way for different Blueprints to share and use that information.

At first, I tried direct Blueprint referencing, manually linking one Blueprint to another. This works but isn’t scalable — especially when more Blueprints need to access the same data.

So I moved to a better solution: using a Game Instance or Game Mode Blueprint to store data globally. This means that values coming from the serial communication Blueprint can be saved in a shared space and then retrieved anywhere — even across different levels.

In my setup:

  • The Serial Communication Blueprint reads data from the Arduino.
  • The Game Instance Blueprint stores the values globally.
  • The Animation Blueprint retrieves those values to control animation states.

For example, if the sensor value goes above a certain threshold, the animation Blueprint triggers a specific motion.

This made the system modular and ready for more complex interaction.

Phase Three: Moving to Multi-Input (Potentiometer + Capacitive Sensing)

Next, I wanted to add another input — a capacitive sensor.
This meant Arduino would now send two separate values:

  1. The potentiometer reading
  2. The capacitive sensing value

Both needed to be read by Unreal, processed, and used independently.

On the Arduino side, I updated my sketch to send the two readings in one line, separated by a comma (,):

  // --- Send both readings to Unreal Engine ---
  // Format: potentiometer,capacitiveSensor
  Serial.print(mappedValue);
  Serial.print(",");
  Serial.println(capValue);  // Sends e.g. "128,2450\r\n"

Each line contains both sensor values.
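For context, a minimal version of the whole sketch around that send block might look like this; the pin choices, sample count, and pacing are assumptions rather than the exact values I used.

// Minimal two-input sketch around the send block above.
// Pin choices, sample count and pacing are assumptions.
#include <CapacitiveSensor.h>

CapacitiveSensor capSensor = CapacitiveSensor(4, 6);  // send pin 4, receive pin 6
const int POT_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int potRaw = analogRead(POT_PIN);               // 0-1023
  int mappedValue = map(potRaw, 0, 1023, 0, 255); // predefined 0-255 range
  long capValue = capSensor.capacitiveSensor(30);

  // --- Send both readings to Unreal Engine ---
  // Format: potentiometer,capacitiveSensor
  Serial.print(mappedValue);
  Serial.print(",");
  Serial.println(capValue);   // e.g. "128,2450\r\n"

  delay(100);                 // simple pacing used in this early prototype
}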

Inside Unreal, this required some changes in the Blueprint logic. Previously, Unreal expected just one value per line. Now, Unreal had to:

  1. Read the full line as a string.
  2. Split that string using the “Parse Into Array” node.
  3. Use the comma as the delimiter (because that’s how the Arduino sends data).

This function breaks the string into separate elements in an array. For my setup:

  • Index 0 contains the potentiometer value.
  • Index 1 contains the capacitive sensor value.

Phase Four: Parsing and Converting Data in Unreal

After parsing the string into an array, I used two local variables inside the Blueprint:

  • PotValue (for the potentiometer)
  • CapValue (for the capacitive sensor)

From the array, I Get the value at index 0 and index 1.
Since Unreal receives them as strings, I then use String to Float to convert them to numbers.

Finally, I assign these values to the local variables. This gives me two live, numerical data streams from Arduino — both updating in real time inside Unreal.

Reflections: Building Toward a Digital Twin

This multi-input setup opened a door to more complex, multimodal interactions — what could be considered the foundations of a digital twin system. Instead of a single control parameter, multiple sensors can now inform and influence the virtual environment simultaneously.

Whether it’s lighting, animation, or physical simulation, each input can be mapped to different components in Unreal. And because the data is handled globally, it’s easy to expand: adding new sensors or mapping them to other behaviours is just a matter of extending the parsing and assignment logic.

In essence, this phase marks the shift from a simple Arduino-to-Unreal bridge to a flexible, scalable communication system that supports multiple real-world inputs and real-time virtual responses.

Arduino sketch: potentiometer and capacitive sensing as inputs.