
Mixed Reality Project Reflection

Project Title: Venus Flytrap – Mixed Reality Game Experience
Platform: Meta Quest 3
Software: Unity (with Meta XR SDK), Arduino (for physical computing integration)



Overview & Context

This project was developed in response to a design brief that asked us to create an engaging experience using animation and storytelling, with a focus on user interaction and immersive feedback. The brief encouraged experimentation with multiverse concepts — blending physical and virtual worlds — to craft a compelling, thought-provoking experience.

I chose to approach this by designing a mixed reality (MR) game experience for Meta Quest 3 using Unity and physical computing via Arduino. The main focus was on user agency, visual feedback, and the integration of real-world components that react dynamically to virtual actions.


Concept & Gameplay Summary

The game centres around a Venus Flytrap theme and is designed as a seated, single-player experience. The player is surrounded by “flytrap monsters” and must shoot them to rescue a trapped fly, physically represented in the real world via a mechanical flytrap installation.

Core Mechanics:

  • Shoot flytrap monsters (each requiring 3 hits to be defeated)
  • Defeat all enemies to trigger a real-world reward: opening the physical flytrap and “releasing” the fly
  • Fail condition: If monsters reach the player, health depletes, and the game ends

This concept questions the idea that “what happens in VR stays in VR” by making real-world elements react to in-game choices.


Animation & Visual Feedback

A central focus of the project was the role of animation in creating strong audiovisual feedback and reinforcing player agency:

  • State-based animations: Each monster visually reacts to bullet hits via Unity’s Animator Controller and transitions (e.g., three different states reflect remaining health).
  • Color feedback: Monsters transition from vibrant green to desaturated red as they take damage.
  • Stop-motion-inspired effects: When monsters die, animated flies burst from their abdomen — a stylized visual reward.
  • Audio cues: Complement visual states and further reinforce player actions.

This dynamic feedback enhances immersion, turning simple interactions into meaningful, sensory-rich experiences.


Technical Implementation

  • Unity + Meta XR SDK: Developed the core game logic and animation system within Unity using the Meta SDK for mixed reality features like passthrough, hand/controller input, and spatial anchoring.
  • Arduino Integration: Connected the physical flytrap installation to Unity using serial communication. Actions in VR (e.g. defeating monsters) trigger real-world servo movements.
  • Pathfinding with NavMesh Agents: Monsters move toward the player using Unity’s NavMesh system, avoiding terrain obstacles and interacting with the player collider.
  • Procedural Bullet Logic: Shooting was implemented via C# scripts that instantiate projectile prefabs with force and collision detection. Hits are registered only on monsters’ abdominal areas.
  • Game State Management: Score tracking, monster kill count, and health depletion are all managed via a custom script, providing a clear player progression system.

Design Decisions & Constraints

  • Asset Use: Monsters were rigged using Mixamo animations and a model sourced from Unreal Engine assets (Creative Commons licensed), allowing focus on interaction design and technical implementation over custom 3D modeling.
  • Animator Learning Curve: A major portion of the project involved learning Unity’s animation pipeline, particularly in creating clean transitions and state-based logic driven by gameplay.
  • Hardware Limitations: Due to Meta Quest 3 restrictions, the final APK shared for submission runs without Arduino hardware. The version with physical feedback requires an external microcontroller setup, which is documented in the accompanying video.

Physical & Virtual Worlds Integration

This project explores the interplay between digital interaction and real-world feedback, where actions in the virtual world have tangible consequences. It challenges typical game environments by asking: What if virtual success or failure could reshape physical reality — even subtly?

By positioning the player in the center of a game board-like world and creating a reactive environment both inside and outside the headset, I aimed to explore storytelling through presence, consequence, and multisensory feedback.


Key Learnings

  • Deepened understanding of Unity’s animation system (Animator Controller, blend trees, transitions)
  • Gained practical experience with physical computing and VR device integration
  • Improved technical problem-solving through scripting gameplay logic and Arduino communication
  • Recognised the power of animation fidelity in conveying emotion, feedback, and agency

Submission Info

  • Final APK (without Arduino requirement) included
  • Showcase video with documentation of the Arduino setup + physical installation
  • Additional documentation includes technical breakdown, flowcharts, and design process

World Defining

The world-building element is partly there to make the scene more visually engaging, but its main purpose is functional. It’s meant to enhance the gamified experience by adding some challenge, and by introducing objects into the environment that interact with the gameplay.

These new digital “plants” serve two main purposes:
First, they help constrain the user’s field of vision, with long, plant-like forms appearing on the horizon; the idea is to partially obscure distant vision, creating a more immersive and slightly tense experience. Second, they embrace the Venus flytrap aesthetic, reinforcing the overall theme.

I’m keeping this project in mixed reality because it’s important that the user can still see the physical Venus flytrap installation and observe how it responds at the end of the game. That physical interaction is core to the experience.

By introducing these digital plants into the scene, I also managed to constrain the movement of the monsters. Now, not all monsters move directly toward the user in a straight line. Because the mesh surface for movement has been reduced by the placement of these digital obstacles, the monsters must now navigate around them to reach the user. This makes the gameplay more dynamic and adds natural variation to how and when monsters arrive.

Another layer of diversity in movement was achieved by tweaking the NavMesh Agent settings on each monster prefab. By adjusting parameters like speed, acceleration, and angular speed for each monster individually, their behavior became more varied and unpredictable.
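In the project these values were tuned per prefab in the Inspector. Purely as an illustrative sketch, the same kind of variation could also be randomised at spawn time with a small script; the value ranges below are placeholders, not the numbers I actually used.

using UnityEngine;
using UnityEngine.AI;

// Hypothetical helper: randomises NavMeshAgent settings so each
// spawned monster moves slightly differently.
public class AgentVariation : MonoBehaviour
{
    void Start()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();
        agent.speed = Random.Range(1.0f, 2.5f);        // walking speed
        agent.acceleration = Random.Range(4f, 10f);    // how quickly it reaches that speed
        agent.angularSpeed = Random.Range(90f, 240f);  // turning rate in degrees per second
    }
}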

I also added more detailed textures to the floor and plane to better fit the aesthetic, updated the UI to match the theme, and fine-tuned the color palette. All of these design choices help reinforce the atmosphere of a plant-monster, green Venus flytrap world.

So overall, these updates have made the project feel more cohesive, engaging, and visually in line with the original concept.


Venus Fly Trap Physical Computing Installation

Blending Worlds: Building My Venus Flytrap VR Game

Over the past few weeks, I’ve been figuring out how to bridge what I’m doing creatively with the technical setup: finding a way to connect my ideas in Unity with a working space that includes a VR headset, microcontroller, and physical components. After sorting out how to get Unity communicating with the microcontroller, based on the exploration described in the previous blog post, I began working on my more sophisticated idea.

One small but meaningful breakthrough was setting up a button that, when pressed, would light up an LED on a breadboard. It was a simple interaction, but it confirmed that the virtual and physical systems could communicate. It might seem basic, but this helped me break through a technical wall I’d been stuck on. Sometimes the simplest prototypes are the most important steps forward.

From there, I started thinking about how to bring coherence between the physical and virtual elements of the project. The game I’m building revolves around a Venus flytrap, and I wanted the whole aesthetic and gameplay experience to revolve around that concept. In the game, the Venus flytrap acts as both protector and trap. It hides a real-world object inside its petals (in this case, the user) and stays closed. The player’s goal in VR is to defeat all the “fly monsters” surrounding it. Once they’re defeated, the Venus flytrap opens, revealing the trapped player and marking the win.


Repurposing Physical Models

For this, I built a physical model of a Venus flytrap. The petals are made of painted cardboard and 3D-printed components, designed around a gear-based movement system that controls how the trap opens and closes. A DC motor mounted at the back drives the movement, using a cog system with four gears that allow the left and right petals to move in opposite directions. The mechanics are relatively straightforward, but designing the gear system took a fair amount of design thinking and trial-and-error.


Bridging Code and Motion

The movement logic is coded in Arduino, which I then uploaded to the microcontroller. It communicates with Unity through a patch that tracks what’s happening in the game. Specifically, the system monitors how many fly monsters have been “killed” in the virtual world. Once all the fly monsters are defeated, a signal is sent to the motor to open the Venus flytrap in real life—a moment of physical transformation that responds to virtual action.
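As a rough sketch of what the Unity side of this looks like conceptually: the class and counter names below are hypothetical, and the serial details mirror the LED experiment described in my Unity–Arduino communication post (COM6 at 9600 baud).

using System.IO.Ports;
using UnityEngine;

// Hypothetical game-state monitor: once every fly monster is defeated,
// send '1' over serial so the Arduino drives the motor and opens the trap.
public class FlytrapTrigger : MonoBehaviour
{
    SerialPort port = new SerialPort("COM6", 9600); // port name is machine-specific
    public int totalMonsters = 7;
    int monstersKilled = 0;

    void Start() { port.Open(); }

    public void RegisterKill()
    {
        monstersKilled++;
        if (monstersKilled >= totalMonsters)
        {
            port.Write("1"); // the Arduino sketch interprets '1' as "open the flytrap"
        }
    }

    void OnApplicationQuit()
    {
        if (port.IsOpen) port.Close();
    }
}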

This project sits somewhere between VR and physical computing. It’s not just about creating a game, but about exploring what happens when virtual reality meets metaphysical, embodied experiences. I’m fascinated by this transition between worlds—the way something in the virtual space can have a tangible, physical consequence, and how that loop can create a more meaningful sense of interaction and presence.

Setting up this system—both technically and conceptually—helped me shape the final scenario. It’s about merging playful game mechanics with thoughtful digital-physical storytelling, and I’m excited to keep exploring how far this kind of hybrid setup can go.


Animation States based on user actions

Basic functionality


Getting the basic functionality implemented involved placing a box collider in the scene. This box has a trigger collider and a tag, which allows it to respond to collisions with a bullet object. It forms the foundation for shooting bullets from the gun and hitting the abdomen of the monster. Sorry, I hope this ain’t too cruel.

In my C# script, I’m using OnTriggerEnter to detect when a bullet collides with this box. By using tags, I can define different behaviours depending on what type of object within the scene gets hit by the bullet. For example, I assign the tag "Monster" to this collider, and the script handles the case where the bullet hits an object with that tag; consequently, only monsters can be hit, and only in the abdomen, otherwise they are bulletproof.
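A minimal sketch of that bullet-side check, with the hand-off to the monster’s own logic simplified (the TakeHit method name matches the one described later in this post series):

using UnityEngine;

// Simplified bullet script: it only reacts when the trigger it enters
// is the abdomen collider tagged "Monster".
public class Bullet : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Monster"))
        {
            // Forward the hit to the monster's own script, somewhere up the hierarchy.
            other.SendMessageUpwards("TakeHit", SendMessageOptions.DontRequireReceiver);
            Destroy(gameObject); // remove the bullet after a registered hit
        }
    }
}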

Now, to make interactions more precise and tied to the character’s body, I’ve attached this collider box as a child of one of the bones of the 3D model. The model (and its animations) are imported from Mixamo, and by placing the box under a bone like the spine or hips, it follows the character’s movements during animation. The spine worked better visually than the hips, as there is more movement at the spine than at the hips relative to the centre of gravity (COG).

This setup ensures that the hitbox stays aligned with the monster’s belly even as different animations play — for example, idle, walk, or death. Since it’s parented to a bone, it moves in local space relative to the skeleton, not just world space.

Set of animations to represent different states:

When importing multiple FBX animation files from Mixamo (like idle, hit, walk, death), I ran into a problem where animations didn’t play correctly. The model would appear static or reset to a default pose.

After some troubleshooting (and watching tutorials), I realised the problem was due to avatar mismatch.

Here’s the fix:

  • First, import the base T-pose or idle FBX with the correct avatar settings.
  • Then, for every additional animation FBX (e.g. walk, hit, die), go into the Rig tab in the import settings and set the Avatar Definition to “Copy From Other Avatar”.
  • Assign the avatar from your base model.

This ensures all animations share a consistent skeletal rig. Without it, Unity won’t apply the animation data properly, and your character may stay static without any visible errors.

Summary of Logic:

  • A trigger box is placed inside the monster’s belly.
  • It’s parented to a bone, so it follows the animation.
  • When the bullet collides with it, the script triggers a hit reaction.
  • The Animator (assigned to the monster GameObject) switches states based on the trigger condition, such as transitioning into a “Hit” or “Die” animation.

This setup gives you a dynamic and accurate way to detect hits on specific body parts during animations.

Monster prefab and game concept

My idea is to create a seated VR game, where the player remains in a rotating chair. The player won’t move around, but can rotate freely to face threats coming from all directions. In this project, I explored user interaction and AI behavior through an MR game prototype built in Unity. The setup revolves around a central camera, with the user seated on a rotating chair and tracked via a head object positioned under the camera centre. This configuration allows the user to rotate and move their body freely, which is crucial to how the game reads orientation and interaction.

The gameplay involves monsters (zombies) approaching the player from various positions in the environment. I plan to define the origin point (0,0,0) in Unity as the player’s position, and the monsters will spawn at random positions within a certain radius and walk toward that centre point.
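A sketch of how that planned spawn logic could look. The prefab reference, count, and radius are placeholder values, and the spawn positions are assumed to lie on the baked NavMesh.

using UnityEngine;
using UnityEngine.AI;

// Hypothetical spawner: places monsters at random points on a circle
// around the player (assumed to sit at the origin) and sends them inwards.
public class MonsterSpawner : MonoBehaviour
{
    public GameObject monsterPrefab;
    public int count = 7;
    public float radius = 8f;

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            // Pick a random direction on the horizontal plane.
            Vector2 dir = Random.insideUnitCircle.normalized;
            Vector3 spawnPos = new Vector3(dir.x, 0f, dir.y) * radius;

            GameObject monster = Instantiate(monsterPrefab, spawnPos, Quaternion.LookRotation(-spawnPos));
            NavMeshAgent agent = monster.GetComponent<NavMeshAgent>();
            if (agent != null) agent.SetDestination(Vector3.zero); // walk toward the player at (0,0,0)
        }
    }
}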


Monster Behaviour and Setup:

  • I want to design a single monster prefab that contains all the core functionality: walking, taking damage, reacting to hits, and dying.
  • Each monster will require three successful hits from the player to be fully killed.
  • The monster’s Animator will have multiple states:
    • Walking toward the player.
    • A reaction animation for the first hit.
    • A different reaction for the second hit.
    • A death animation (falling forward or backwards) on the third hit.

I also want the walking animation to be affected by the hit reactions. For example, after being hit, the walker could pause briefly or slow down before resuming.


Key Goals:

  • Spawn Logic: Monsters will be spawned from various angles and distances (e.g. within a radius) and will always walk toward the centre (player).
  • Modular Prefab: All behaviour should be contained in one reusable prefab, so I can later swap out models or visuals without rewriting logic.
  • Focus on Functionality First: My current priority is to get everything working in terms of logic and animations. Once that’s stable, I can improve visuals, variety, and polish.


To detect proximity events between the user and approaching creatures, I attached a spherical collider under the head object (as can be seen in the screenshot above), located centrally, which represents the user’s origin point. For Unity’s physics system to register collision events correctly using OnTriggerEnter, at least one of the colliding objects needs to have a Rigidbody component; this was a key technical insight. I attached a Rigidbody to the user’s collider to ensure that the system could properly detect incoming threats.

The core mechanic involves enemy creatures that move toward the user from various directions. For this, I manually spawned seven creatures around the user, each set as a prefab, to simulate threats approaching from different angles. These enemies are animated and follow a basic AI path using Unity’s NavMesh system.

Initially, I struggled with configuring the NavMesh Agents. The creatures weren’t moving as expected, and I discovered the issue stemmed from an excessively large agent radius, which caused them to collide with each other immediately upon spawning. This blocked their movement toward the user until some of them were eliminated. Once I adjusted the agent radius in the component settings, the agents were able to navigate correctly, which was a significant breakthrough in troubleshooting.

Another major learning point was in managing the creature’s state transitions through the Animator component. Each enemy had three lives, and I created a custom C# script to handle hits via a TakeHit() function. After each hit, the enemy would briefly pause and then resume movement—except after the third hit, which would trigger a death animation.

However, I encountered a strange behaviour: even after the death animation was triggered, the creature’s body would continue moving toward the user. This was due to the NavMesh Agent still being active. To resolve this, I had to disable the agent and stop its velocity manually by setting it to Vector3.zero. Additionally, I toggled the agent’s isStopped boolean to true and disabled the movement script to fully freeze the creature in place, allowing it to collapse realistically at the point of death.
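In isolation, that death-freeze fix looks roughly like this; the “Die” trigger name and the movement-script reference are placeholders for my own setup.

using UnityEngine;
using UnityEngine.AI;

// The death-freeze fix in isolation: without these steps the NavMeshAgent
// keeps pushing the body forward while the death animation plays.
public class MonsterDeath : MonoBehaviour
{
    public void Die()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();
        agent.isStopped = true;         // stop path following
        agent.velocity = Vector3.zero;  // cancel any residual movement
        agent.enabled = false;          // disable the agent entirely

        // Also disable the custom movement script (placeholder name):
        // GetComponent<MonsterMovement>().enabled = false;

        GetComponent<Animator>().SetTrigger("Die"); // hypothetical trigger parameter
    }
}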

Overall, this project deepened my understanding of Unity’s physics, AI navigation, and animation systems. It also highlighted the importance of detailed debugging, iterative testing, and thinking holistically about how different systems in Unity interact. What seemed like minor details—such as collider hierarchy, Rigidbody placement, or NavMesh agent size—turned out to be crucial to achieving believable and functional gameplay.


Bridging Metaverse with Metaphysical



Bridging Physical and Virtual Realms with Arduino and Unity, respectively

For this part of the project, I intended to explore the interaction between physical and virtual environments. I aimed to establish a two-way communication channel where actions in the virtual world could produce real-world effects. To achieve this, I used an Arduino microcontroller to receive output signals from a Unity-based VR environment. This allowed me to control a physical LED light in response to a virtual event – shooting bullets from the gun – via the controller’s index finger trigger.


Setup Overview

The physical setup consisted of a single LED connected to an Arduino board. The Arduino was linked to my laptop via a USB-C cable. On the software side, I developed a Unity project using the OVRInput system to trigger virtual shooting events. These events would send a signal through a serial port to the Arduino, prompting the LED to turn on briefly.


Initial Challenges and Troubleshooting

The setup proved to be more challenging than expected, particularly in terms of serial communication and platform compatibility. Below is a breakdown of key issues I encountered and how I addressed them:

1. Arduino Upload Issues

At first, I was unable to upload sketches from the Arduino IDE to the board, despite:

  • The Arduino being detected in Device Manager
  • The correct drivers being installed
  • Successful code compilation

Even though the COM port was visible and correctly selected, the IDE failed to upload the code. After troubleshooting extensively and rechecking the USB connections, I found that a simple system reboot resolved the issue. This was unexpected, but it allowed uploads to proceed normally afterwards.


2. Unity and Arduino Serial Communication; Arduino Sketch and C# description

Unity does not natively support serial communication with external devices like Arduino. To bridge this gap, I relied on the .NET System.IO.Ports namespace, which provides serial communication capabilities.

I wrote a basic Arduino sketch that turns an LED on or off based on a received character ('1' for on, '0' for off). In Unity, I implemented a custom C# script that uses the SerialPort class to send these signals. This script was attached to an empty GameObject and referenced within the RayGun script to trigger LED activation when the player fires the gun.

The communication setup was based on this tutorial: Unity & Arduino Communication


This is a simple Arduino sketch designed to control an LED. In the setup() function, serial communication is initialized at 9600 baud (bits per second), and pin 2 is configured as an output to control the LED. Although a global buffer and a char array (buf[]) are defined with a size of 50, they are not actively used in the final version of the code. I originally experimented with reading multiple characters at once, but I noticed this caused the LED to remain continuously on — which didn’t work well for my intended shooting feedback effect. As a result, I opted to read only one character at a time, which allowed for more responsive and accurate LED control.

In the loop() function, the sketch checks whether any data is available on the serial port. If data is detected, a single character is read and stored in the cmd variable. If this character is '0', the LED is turned off (digitalWrite(2, LOW)); if it’s '1', the LED is turned on (digitalWrite(2, HIGH)). This allows Unity (or any external controller) to send simple serial commands ('0' or '1') to toggle the LED in real time.

I also included a short delay of 200 milliseconds after each loop cycle. This was partly based on recommendations from online tutorials, but also confirmed through testing: it helps synchronize the communication and prevents the Arduino from reading too frequently or reacting too rapidly, which could cause inconsistent behavior. This delay ensures that the LED only responds once per input, making it more suitable for the quick, discrete signals used in a VR shooting mechanic.




In terms of the C# implementation, the script makes use of the System.IO.Ports namespace, which provides access to serial communication via the SerialPort class. This is essential for enabling Unity to communicate with external hardware such as an Arduino.

Within the Start() method, a serial connection is established using COM6, which corresponds to the port associated with my Arduino controller connected to the PC. The communication is initialized at 9600 baud, matching the settings specified in the Arduino sketch (Serial.begin(9600)).

The SendSignal(bool on) method is designed to send simple control signals — either '1' or '0' — to the Arduino. If the on parameter is true, it sends '1', which lights up the LED. If it’s false, it sends '0', turning the LED off. This binary approach allows Unity to provide immediate physical feedback in response to in-game events, such as shooting.

Lastly, the OnApplicationQuit() method ensures that the LED is turned off when the Unity application is closed. It sends a final '0' to the Arduino before closing the serial port. This prevents the LED from remaining on unintentionally after the game ends.
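Putting those pieces together, a minimal sketch of how the described script might look (the port name COM6 is specific to my machine, and the RayGun hook-up is simplified):

using System.IO.Ports;
using UnityEngine;

// Bridge between Unity and the Arduino over a serial port.
public class ArduinoBridge : MonoBehaviour
{
    SerialPort port;

    void Start()
    {
        port = new SerialPort("COM6", 9600); // must match Serial.begin(9600) on the Arduino
        port.Open();
    }

    // Called from the RayGun script when the trigger is pressed/released.
    public void SendSignal(bool on)
    {
        if (port != null && port.IsOpen)
        {
            port.Write(on ? "1" : "0"); // '1' lights the LED, '0' turns it off
        }
    }

    void OnApplicationQuit()
    {
        if (port != null && port.IsOpen)
        {
            SendSignal(false); // make sure the LED is off before exiting
            port.Close();
        }
    }
}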

In summary, this script acts as a bridge between Unity and the Arduino, using serial communication to synchronize digital actions (e.g., pressing a button on the controller) with physical outputs (e.g., turning on an LED). This implementation enables a simple but effective feedback loop between virtual and physical environments.


Key Technical Bottlenecks

Namespace and Class Recognition Errors

A major obstacle was Unity’s failure to recognise the System.IO.Ports namespace and the SerialPort class. The error message stated that the class existed in two assemblies: System.dll and System.IO.Ports.dll, causing a conflict.

To resolve this:

  • I changed the API Compatibility Level in Unity (via Project Settings > Player > Other Settings) to .NET 4.x.
  • I manually downloaded the System.IO.Ports package from Microsoft’s official NuGet repository.
  • I experimented with placing different versions of the DLLs in Unity’s Assets/Plugins folder, but most led to version mismatches or runtime errors.

Ultimately, changing Unity’s compatibility settings resolved the issue, and no additional DLLs were required. Earlier, as per the image below, I had manually located the files and copied them into the Assets folder of my Unity project, since Unity failed to resolve the class from the namespace; this was an approach suggested in an online group discussion. Those copies, however, ended up throwing an issue later on, with Unity reporting that the class was defined in two separate file versions: one had come via a plugin downloaded in VS Code (Visual Studio Code), the other manually from a zip file downloaded from the website. In the end, both were kept on the local machine, but not within the same project. Very confusing haha.


Windows-only Support Confusion

Another issue arose from Unity reporting that System.IO.Ports was “only supported on Windows”, despite the fact that I was working on a Windows machine. This turned out to be a quirk in Unity’s error handling and was resolved by ensuring:

  • The Unity platform target was correctly set to Windows Standalone.

Final Implementation Outcome

After several hours of testing and debugging:

  • Unity successfully sent serial data to the Arduino each time the player pressed the fire button.
  • The Arduino correctly interpreted the '1' signal to light the LED, and '0' to turn it off after a short delay.
  • The interaction was smooth, and the LED reliably responded to gameplay events.

This implementation serves as a foundational example of hardware-software integration, particularly in VR environments.



Designing a Fantasy-Inspired Shooting Game with Meaningful Feedback

Concept: Flies, Monsters, and the Venus Flytrap

Rather than having the player simply destroy enemies, I imagined the monsters as servants of a Venus flytrap Queen, with flies trapped inside their bellies. When the player shoots a monster, the goal is not to harm it, but rather to liberate the flies trapped within.

So, with each successful hit, a swarm of flies bursts from the monster’s belly—visually suggesting that the player is helping, not hurting. This felt like a more thoughtful narrative balance, allowing for action gameplay without glorifying violence.

These monsters serve a queen—the “mother flytrap”—a physical computing plant in the real world. As the player defeats more of these monsters in-game, the number of freed flies is mapped to data used to drive a DC motor that gradually opens the flytrap plant in reality. This concept of connecting digital action to physical consequence is central to my interest in virtual-physical interaction.


Game Mechanics: Health and Feedback

Each monster has three lives, but early testing made it clear that players had no clear visual cue of the monster’s remaining health. I didn’t want to use floating numbers or health bars above their heads—it didn’t suit the aesthetic or the narrative. Instead, I introduced a colour transition system using materials and shaders.

Colour Feedback System: Traffic Lights

  • Monsters start with a vivid green body, representing vitality and their flytrap origin.
  • With each hit, the colour fades:
    • First hit: duller green.
    • Second hit: reddish-green.
    • Third hit (final): fully red.

This gives players immediate visual feedback on the monster’s state without UI clutter, reinforcing the game’s narrative metaphor.


Implementing the Visual Effects

Material Cloning

To change the colour of each monster individually (rather than globally), I cloned their material at runtime:

bodyMaterial = bodyRenderer.material;

This creates a unique material instance per zombie. If I used the shared material, all zombies would change colour together, which I discovered during testing and debugging.

Colour Transition Logic

I wrote a function UpdateBodyColor() that calculates the colour based on the monster’s remaining health. It uses:

float healthPercent = Mathf.Clamp01(health / 3f);
bodyMaterial.color = Color.Lerp(Color.red, Color.green, healthPercent);

This smoothly transitions from red (low health) to green (full health), depending on how many lives the monster has left.


Particle Effects: Flies – Stop motion animation aesthetics

To visualise the flies escaping, I used Unity’s Particle System.

  • I created a sprite sheet with 16 fly frames in a 4×4 grid.
  • I used the Texture Sheet Animation module to play these frames in sequence, creating a stop-motion animation effect.
  • The particle system is parented to the monster’s belly and only plays when the monster is hit.

I made sure to disable looping and prevent it from playing on start. That way, it only triggers when called from code (in the TakeHit() function).
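For reference, the equivalent settings expressed in code would look something like this. In practice I configured the Texture Sheet Animation module and the Looping/Play On Awake flags in the Inspector, so this is just an illustrative sketch.

using UnityEngine;

// Sketch of the fly particle setup in code form.
public class FlyParticleSetup : MonoBehaviour
{
    void Awake()
    {
        ParticleSystem ps = GetComponent<ParticleSystem>();

        var main = ps.main;
        main.loop = false;          // one burst per hit, no looping
        main.playOnAwake = false;   // only play when triggered from TakeHit()

        var sheet = ps.textureSheetAnimation;
        sheet.enabled = true;
        sheet.numTilesX = 4;        // 4x4 grid = 16 fly frames
        sheet.numTilesY = 4;
    }
}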

This was an experimental but rewarding technique, simulating complex animation using a simple sprite sheet. It aligns well with the fantasy aesthetic, even though the flies are 2D in a 3D world.


Unity Implementation Notes

  • Each monster is a prefab. This made it easier to manage changes and maintain consistency across the game.
  • The TakeHit() method:
    • Reduces health.
    • Plays an animation.
    • Plays a sound (roar, hit, or death).
    • Triggers the flies’ particle effect.
    • Calls UpdateBodyColor() to change the visual appearance.
  • Once health reaches 0:
    • The monster “dies”.
    • The NavMesh agent stops.
    • A death animation and sound are played.
    • The object is destroyed after a short delay.
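A condensed sketch of how these pieces fit together in one script. The animator trigger names, audio clips, and destroy delay are placeholders for my own assets, and the structure is simplified compared to the actual project code.

using UnityEngine;
using UnityEngine.AI;

// Condensed monster logic: health, feedback, and death handling.
public class MonsterHealth : MonoBehaviour
{
    public int health = 3;
    public ParticleSystem flyParticles;
    public AudioClip hitSound, deathSound;

    Animator animator;
    AudioSource audioSource;
    Material bodyMaterial;

    void Start()
    {
        animator = GetComponent<Animator>();
        audioSource = GetComponent<AudioSource>();
        bodyMaterial = GetComponentInChildren<Renderer>().material; // cloned per monster
    }

    public void TakeHit()
    {
        health--;
        flyParticles.Play();   // flies burst from the belly
        UpdateBodyColor();     // green -> red as health drops

        if (health > 0)
        {
            animator.SetTrigger("Hit");
            audioSource.PlayOneShot(hitSound);
        }
        else
        {
            animator.SetTrigger("Die");
            audioSource.PlayOneShot(deathSound);

            NavMeshAgent agent = GetComponent<NavMeshAgent>();
            agent.isStopped = true;
            agent.velocity = Vector3.zero;
            agent.enabled = false;

            Destroy(gameObject, 3f); // remove the body after a short delay
        }
    }

    void UpdateBodyColor()
    {
        float healthPercent = Mathf.Clamp01(health / 3f);
        bodyMaterial.color = Color.Lerp(Color.red, Color.green, healthPercent);
    }
}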

Bridging Physical and Digital Worlds

The core concept I’m exploring here is the flow of data between virtual and physical space. I love the idea of the player doing something in the game that directly affects the real world. By linking monster deaths to the opening of the physical flytrap plant via a DC motor, the project becomes more than just a game—it becomes an interactive experience across dimensions.

I see this as a kind of “virtual twin” concept, but not just a visual replica. It’s a data-driven relationship, where progress in the virtual world controls mechanical outcomes in the physical world.

This idea is loosely inspired by Slater’s concept of presence—how sensory data in one space (physical) can be reconstructed or mirrored in another (virtual), and vice versa. It’s this bidirectional flow of meaning and data that fascinates me.



Mastering Transitions in Unity Animator: A Game Logic Essential


When creating dynamic animations in Unity, transitions play a vital role in shifting between different animation states. These transitions are not just aesthetic—they’re often tied directly to your game logic to deliver responsive and immersive feedback to the player.

What Are Transitions?

In Unity’s Animator system, transitions are used to move from one animation state to another. This could mean transitioning from an “idle” state to a “running” state when the player moves, or from a “standing” state to a “hit reaction” when the character takes damage.

Triggering Transitions

Transitions are typically triggered by parameters—these can be:

  • Booleans: Great for simple on/off states (e.g., isJumping = true). However, they should not be used for managing multiple simultaneous conditions, as this can cause errors or inconsistent behavior.
  • Integers or Floats: Useful when you need more nuanced control, such as switching between multiple animation states based on a speed or health value.
  • Triggers: Ideal for one-time events like a shot being fired or a character being hit.

For example, imagine a scenario in a shooter game:
When an object gets hit by a bullet, you can trigger a transition to a “damaged” animation state. This provides instant visual feedback to the player that the hit was registered—crucial for both gameplay and user experience.
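A minimal sketch of how game logic drives these parameters; the parameter names below are examples and must match the parameters defined in your Animator Controller.

using UnityEngine;

// Minimal illustration of driving Animator transitions from game logic.
public class AnimationDriver : MonoBehaviour
{
    Animator animator;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    public void OnDamaged()
    {
        animator.SetTrigger("Hit");             // one-time event: transition to a damaged state
    }

    public void OnMove(float speed, bool jumping)
    {
        animator.SetFloat("Speed", speed);      // float parameter: blend idle/walk/run
        animator.SetBool("IsJumping", jumping); // simple on/off state
    }
}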

Best Practices

  • Use Booleans sparingly, only for simple, binary state changes.
  • Use Triggers or numerical parameters for more complex or multi-condition transitions.
  • Always test transitions thoroughly to avoid animation glitches or logic conflicts.

Final Thoughts

Mastering transitions in Unity isn’t just about getting animations to play—it’s about making your game feel alive and responsive. By tying animations into game logic through intelligent use of transitions and parameters, you enhance both the realism and playability of your game.



Project Progress

For my project, I’m using a 3D model of a Creep Monster, which I downloaded from the FAB platform in FBX format. (Creep Monster | Fab) I then uploaded it to Mixamo, where I generated a skeleton for the model and applied different animations—such as standing idle and dying.

After exporting those animations, I imported them into Unity. One of the main steps is adding components, especially the Animator component. This is essential for handling transitions between animation states. These transitions are not only important for visual feedback but are also critical for managing game logic.

I also attached a custom C# script to the monster object in the scene. This script controls what actions should happen to the monster based on in-game interactions.

A key requirement for this setup is having an Avatar. The downloaded model didn’t come with one, but Unity allows you to generate it. To do this:

  1. Select the model in the Inspector panel.
  2. Go to the Rig tab under the Model settings.
  3. Change the Animation Type to “Humanoid”.
  4. Under the Avatar section, choose “Create From This Model”.
  5. Click Apply.

This process fixed an issue I faced during gameplay where the model wasn’t rendering correctly. The problem stemmed from the missing Avatar, which serves as the rig representation Unity needs for the Animator to work properly.

For interaction and testing, I modified a custom C# script attached to the bullet object. This script checks for OnTriggerEnter() events. When the bullet’s collider detects a collision with an object tagged as “Monster”, it triggers another script. That secondary script acts as a bridge, connecting the collision event to the Animator on the Creep Monster.

As a result, when the player shoots the monster, the Animator transitions from its default state to a “dying” animation. This workflow is how I’m handling enemy reactions and visual feedback for hits and deaths in the game.

It’s been a great way to troubleshoot and understand how colliders, custom C# scripts, and Animator states work together in Unity.

Next, I’ll explore animation layers for better control, especially for complex character behaviours.


Animating Interaction in Immersive Spaces: Exploring Visual Feedback in Gamified VR Experiences

Project Objective

This project is experimental in its own nature and investigates the animation of visual feedback within gamified or interactive VR environments. The work is grounded in the field of Human-Computer Interaction (HCI), especially within 3D immersive spaces where traditional screen-based feedback is replaced by spatial, multisensory experiences.

From Screen to Immersion: A Shift in Feedback Paradigms

In traditional gaming, feedback is predominantly audiovisual, limited to what can be shown on a 2D screen. In VR, the immersive spatial environment introduces new challenges and opportunities. Visual feedback becomes a primary tool for user orientation and understanding, especially when combined with haptic and audio inputs.

As we move away from screen-based interactions into head-mounted display (HMD) experiences, visual animation plays a central role in reinforcing the sensation of presence and in guiding user interaction. Haptic feedback can complement this, but visual cues remain the most immediate and informative component of interaction in VR.

The Role of Animation in Action-Based Feedback

Consider a typical game mechanic like shooting a gun in VR:

  • The user performs an action (e.g. firing a laser).
  • A visual response follows (e.g. a laser beam shoots forward and hits a target).
  • This entire process is animated, and the quality and believability of that animation impact the user’s understanding and experience.

Here, animation serves not just aesthetic purposes, but as a way to translate data into visual understanding: the laser beam visualises the trajectory and the distance it travels, which helps the user read the space and gives them a sense of scale in an environment with an effectively unbounded field of view.



Proxemic Zones in Spatial Design

1. Personal Space

  • Distance: ~0.5 to 1.2 meters (1.5 to 4 feet)
  • Description: This is the space we reserve for close friends, family, or trusted interactions.
  • Use in VR: Personal space violations in VR can feel intense or invasive, making it powerful for emotional or narrative impact.

2. Intimate Space

  • Distance: 0 to 0.5 meters (0 to 1.5 feet)
  • Description: Reserved for very close interactions—hugging, whispering, or personal care.
  • Use in VR: Very rare outside of specific narrative or therapeutic experiences. It can evoke strong emotional responses, including discomfort.

3. Social Space

  • Distance: ~1.2 to 3.5 meters (4 to 12 feet)
  • Description: The typical space for casual or formal social interaction—conversations at a party, business meetings, or classroom settings.
  • Use in VR: Useful for multi-user or NPC (non-player character) interaction zones where you want users to feel present but not crowded.

4. Public Space

  • Distance: 3.5 meters and beyond (12+ feet)
  • Description: Space used for public speaking, performances, or observing others without direct engagement.
  • Use in VR: Great for audience design, environmental storytelling, or large-scale virtual spaces like plazas, arenas, or explorable worlds.


Visual Fidelity vs Animation Fidelity

Two critical concepts are being explored:

  • Visual Fidelity: How objects are represented in terms of texture, rendering quality, lighting, and detail. This relates to how convincing or realistic the environment feels.
  • Animation Fidelity: How smoothly and convincingly motion and interactions are animated. It includes timing, easing, weight, and physicality of motion.

Both forms of fidelity are essential for user immersion. High animation fidelity, in particular, supports believability—users need to feel that the action they performed caused a logical and proportional reaction in the environment.

Procedural Animation and Data-Driven Feedback

One key technique in this research is procedural animation—animations generated in real time based on code, physics, and user input, rather than being pre-authored (as in keyframe animation).

For example:

  • A bullet fired from a gun might follow a trajectory calculated based on angle, speed, and environmental factors.
  • The impact animation (e.g. an explosion) could scale in intensity depending on these values—larger impacts for faster bullets or steeper angles.
  • This helps communicate different degrees of impact through graduated visual responses.
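As a hedged sketch of this idea, an impact script could scale its effect by the bullet's speed at the moment of collision; the prefab reference and thresholds below are placeholders, not values from the actual project.

using UnityEngine;

// Graduated, data-driven impact feedback: faster bullets produce larger effects.
public class BulletImpact : MonoBehaviour
{
    public GameObject impactEffectPrefab;  // e.g. a spark or explosion particle prefab
    public float referenceSpeed = 20f;     // speed that produces a "normal"-sized impact

    void OnCollisionEnter(Collision collision)
    {
        float speed = collision.relativeVelocity.magnitude;
        float intensity = Mathf.Clamp(speed / referenceSpeed, 0.2f, 3f);

        // Spawn the effect at the contact point and scale it with impact intensity.
        ContactPoint contact = collision.GetContact(0);
        GameObject fx = Instantiate(impactEffectPrefab, contact.point, Quaternion.LookRotation(contact.normal));
        fx.transform.localScale *= intensity;

        Destroy(fx, 2f);
        Destroy(gameObject); // remove the bullet itself
    }
}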

Benefits of Procedural Feedback

  • Consistency with physical principles (e.g. gravity, momentum).
  • Dynamic responsiveness to different user actions.
  • Enhanced variation and realism, reducing repetitive feedback.

Designing Feedback for Understanding and Engagement

For visual feedback to be meaningful, variation matters. If every action results in the same animation (e.g., the same explosion no matter how intense the bullet), the user loses the ability to interpret the nuances of their actions.

Therefore, we must design condition-based feedback:

  • A weak hit might produce a small spark.
  • A high-speed, high-impact hit might create a large explosion with particle effects and shockwaves.

This approach informs users of the intensity and outcome of their actions, using animation to bridge interaction and consequence.

Justifying Design Choices

When working with keyframe animation, it’s essential to justify and rationalise aesthetic choices:

  • Why a certain texture?
  • Why a particular particle effect?
  • How do visual effects reflect the mechanics or emotional tone of the interaction?

These decisions must support the internal logic of the virtual environment and the user’s mental model of interaction within it.

Conclusion

This project explores how animation in immersive VR spaces can meaningfully represent user actions and their consequences. Through the lens of visual and animation fidelity, and by leveraging procedural techniques, we can create responsive, engaging, and informative feedback systems that enhance immersion, understanding, and enjoyment in gamified experiences.

Ultimately, thoughtful visual feedback design becomes a powerful language—bridging code, physics, and emotion—in the architecture of immersive digital spaces.






Mixed Reality Game

I’ve been experimenting with the idea of creating a mixed reality game—something I first got excited about during a group project last term. That earlier experience got me thinking about how physical and virtual systems can be connected, but now I want to take it further and really explore in this project what mixed reality can mean as a creative and interactive space.

Mixed reality is often understood as bridging the physical and virtual, where virtual objects are rendered on top of the real world, touching on the idea of augmentation. But what I was more interested in was the idea that experiences and interactions happening within the virtual world, inside the headset, can actually result in physical outcomes. I’m really fascinated by that transition between the two worlds, especially how user actions in VR can influence or change something in the real world.

I began with reading to back this concept up. One book that’s really influenced my thinking so far is Reality+ by David Chalmers. He argues that virtual realities aren’t fake or “less real” than the physical world. Virtual objects and experiences, if they’re consistent, immersive, and meaningful to the user, can be just as “real” as anything in the physical world. That idea stuck with me, especially when thinking about digital objects as things that have structure, effects, and presence, even if they’re built from code instead of atoms.

What I’m interested in now is creating a system where virtual actions have real-world consequences. So instead of just being immersed in a virtual world, the player’s success or failure can actually change something in the physical world. That’s where I started thinking of repurposing my Venus flytrap installation: something physical and mechanical, driven by what happens in the VR game.

The game idea is still forming, but here’s the general concept: the player is in a virtual world where they need to defeat a group of Venus flytrap-like creatures. If they succeed, a real-life Venus flytrap sculpture (which I’m building using cardboard, 3D-printed cogs, and a DC motor) will physically open up in response.




Inspirational Work

Duchamiana by Lillian Hess

One of the pieces that really inspired my thinking for this project is Duchamiana by Lillian Hess. I first saw it at the Digital Body Festival in London in November 2024. After that, I started following the artist’s work more closely on Instagram and noticed how the piece has evolved. It’s been taken to the next level — not just visually, but in terms of interactivity and curation.

Originally, I experienced it as a pre-animated camera sequence that moved through a virtual environment with characters appearing along the way. But in a more recent iteration, it’s been reimagined and enriched through the integration of a treadmill. The user now has to physically walk in order for the camera to move forward. I really love this translation, where physical movement in the real world directly affects how you navigate through the digital space.

Instead of relying on hand controllers or pre-scripted animations, the system tracks the user’s physical steps. That real-world data drives the experience. What stood out to me most was not just the interaction, but the way physical sound — like the noise of someone walking on the treadmill — becomes part of the audio-visual experience. It creates this amazing fusion: a layered, mixed-reality moment where real-world action affects both the sound and visuals of the virtual world.

It’s made me think a lot about how we might go beyond the VR headset — how we can design for transitions between physical and virtual spaces in more embodied, sensory ways. It’s a fascinating direction, and this work really opened up new possibilities in how I’m thinking about my own project.





Mother Bear, Mother Hen by Dr. Victoria Bradbury



Mother Bear Mother Hen Trailer on Vimeo

One project that really caught my attention while researching mixed reality and physical computing in VR is Mother Bear, Mother Hen by Dr. Victoria Bradbury, an assistant professor of New Media at UNC Asheville. What I found especially compelling is how the piece bridges physical and virtual spaces through custom-built wearable interfaces — essentially, two bespoke jackets: one themed as a bear and the other as a chicken.

These wearables act as physical computing interfaces, communicating the user’s real-world movements into the game world via a Circuit Playground microcontroller. The gameplay itself is hosted in VR using an HTC Vive headset. In this setup, the user’s stomping motion — tracked through the microcontroller — controls the movement mechanics within the game, while the HTC Vive controllers are used for actions like picking up objects or making selections.

Both the bear and chicken jackets communicate via the Arduino Uno. A stomping motion controls the movement, and the costumes allow for auditory and visual responsiveness.

What makes this piece even more immersive is how the wearables themselves provide both audio and visual feedback. They incorporate built-in lighting and responsive sound outputs that reflect what’s happening in the game — a brilliant example of reactive media design.

The project was awarded an Epic Games grant, which shows the increasing recognition and support for experimental VR and new media works that fuse physical computing with virtual environments. I found it incredibly inspiring, especially in the context of exploring embodied interaction, wearable technology, and the creative possibilities of mixed reality. It’s a powerful reminder of how far VR can go when it intersects with tactile, real-world interfaces and artistic intention.


Unreal Plant by leia @leiamake

Unreal Plant

Another project that combines Unreal Engine with physical computing is Real Plant by leia @leiamake. It explores the idea of a virtual twin, where a real plant is mirrored in both the physical and digital realms. The setup includes light sensors that monitor the plant’s exposure to light in the real world. That data is then sent to the virtual world, where the digital twin of the plant responds in the same way, lighting up when the physical plant is exposed to light.

This project is a great example of bridging the two worlds and creating an accurate mapping between them. It really embodies the concept of the virtual twin by translating real-world inputs into virtual actions. There’s also some additional interaction, like pressing a button to water the plant, though that part is more conventional and relies on standard inputs like a mouse or keyboard. Still, the core idea—mirroring real-world changes in the virtual space—is what stood out to me.

Virtual Reality and Robotics

A similar concept using physical computing and robotics can be found in real-world applications, such as remote robotic control systems. In these systems, the user operates from a VR control room equipped with multiple sensors. Using Oculus controllers, the user can interact with virtual controls in a shared space—sometimes even space-like environments.

Because Oculus devices support features like hand tracking and pose recognition, users can perform tasks such as gripping, picking up objects, and moving items around. What’s impressive about this setup is that the designers have mapped the human physical space into virtual space, and then further mapped that virtual space into the robotic environment. This creates a strong sense of co-location between the human operator and the robot’s actions.

This setup is a great example of a virtual twin, where there’s a precise and responsive mapping between virtual and physical environments. It also points towards the emerging concept of the industrial metaverse, particularly in training robots and AI systems.

A similar example is the early development of Tesla’s humanoid robots. While these robots are designed to be autonomous, they can also be operated by humans through virtual reality systems. The aim is often to support tasks like healthcare delivery or improving social connections.

This is an example where VR and robotics are used to stock shop shelves remotely, from a distance.

This ties into the idea of the virtual twin—replicating what happens in the virtual world and mapping it onto physical robots. It’s essentially about translating human actions in a digital environment into responses in the physical world, which is a key aspect of bridging virtual and real spaces.

Industrial metaverse.