Categories
Project 2

Animation States based on user actions

Basic functionality


Implementing the basic functionality involved placing a box collider in the scene. The box has a trigger collider and a tag, which allows it to respond to collisions with a bullet object. This is the foundation for shooting bullets from the gun and hitting the monster’s abdomen. Sorry, I hope this ain’t too cruel.

In my C# script, I’m using OnTriggerEnter to detect when a bullet collides with this box. By using tags, I can define different behaviours depending on what type of object within the scene the bullet hits. For example, I assign the tag "Monster" to this collider, and the script handles the case where the bullet hits an object with that tag; consequently, only monsters can get hit, and only in the abdomen — otherwise they are bulletproof.
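
As an illustration only, here is a minimal sketch of how that tag-based check might look, assuming the script sits on the bullet and the belly hitbox carries the "Monster" tag — the class name and log message are placeholders rather than my actual project code:

using UnityEngine;

// Minimal sketch: attached to the bullet prefab (which has a Rigidbody and a trigger collider).
// Assumes the belly hitbox is tagged "Monster"; names here are placeholders.
public class BulletHit : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        // Only react to the belly hitbox – anything without the tag is effectively bulletproof.
        if (other.CompareTag("Monster"))
        {
            Debug.Log("Bullet hit the monster's abdomen");
            Destroy(gameObject); // remove the bullet after the hit
        }
    }
}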

Now, to make interactions more precise and tied to the character’s body, I’ve attached this collider box as a child of one of the bones of the 3D model. The model (and its animations) are imported from Mixamo, and by placing the box under a bone like the spine or hips, it follows the character’s movements during animation. The spine worked out visually better than the hips, as there is more movement at the spine than at the hips relative to the centre of gravity (COG).

This setup ensures that the hitbox stays aligned with the monster’s belly even as different animations play — for example, idle, walk, or death. Since it’s parented to a bone, it moves in local space relative to the skeleton, not just world space.

Set of animations to represent different states:

When importing multiple FBX animation files from Mixamo (like idle, hit, walk, death), I ran into a problem where animations didn’t play correctly. The model would appear static or reset to a default pose.

After some troubleshooting (and watching tutorials), I realised the problem was due to avatar mismatch.

Here’s the fix:

  • First, import the base T-pose or idle FBX with the correct avatar settings.
  • Then, for every additional animation FBX (e.g. walk, hit, die), go into the Rig tab in the import settings and set the Avatar Definition to “Copy From Other Avatar”.
  • Assign the avatar from your base model.

This ensures all animations share a consistent skeletal rig. Without this, Unity won’t apply the animation data properly, and your character may stay static without any visible errors.

Summary of Logic:

  • A trigger box is placed inside the monster’s belly.
  • It’s parented to a bone, so it follows the animation.
  • When the bullet collides with it, the script triggers a hit reaction.
  • The Animator (assigned to the monster GameObject) switches states based on the trigger condition, such as transitioning into a “Hit” or “Die” animation.

This setup gives you a dynamic and accurate way to detect hits on specific body parts during animations.
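
To make the last two points concrete, here is a rough sketch of the monster-side reaction, assuming the Animator Controller exposes “Hit” and “Die” trigger parameters (the parameter names and the three-hit counter are placeholders):

using UnityEngine;

// Sketch of the monster-side reaction; assumes the Animator Controller
// has "Hit" and "Die" trigger parameters (placeholder names).
public class MonsterHitReaction : MonoBehaviour
{
    [SerializeField] private Animator animator;
    private int hitsRemaining = 3;

    // Called by the collision logic when a bullet reaches the belly hitbox.
    public void RegisterHit()
    {
        hitsRemaining--;
        if (hitsRemaining > 0)
        {
            animator.SetTrigger("Hit");   // transition into the hit-reaction state
        }
        else
        {
            animator.SetTrigger("Die");   // transition into the death animation
        }
    }
}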

Monster prefab and game concept

My idea is to create a seated VR game, where the player remains in a rotating chair. The player won’t move around, but can rotate freely to face threats coming from all directions. In this project, I explored user interaction and AI behaviour through an MR game prototype built in Unity. The setup revolves around a central camera, with the user seated on a rotating chair and tracked via a head object positioned under the camera centre. This configuration allows the user to rotate and move their body freely, which is crucial to how the game reads orientation and interaction.

The gameplay involves monsters (zombies) approaching the player from various positions in the environment. I plan to define the origin point (0,0,0) in Unity as the player’s position, and the monsters will spawn at random positions within a certain radius and walk toward that centre point.
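
As a rough sketch of that spawn idea — assuming the player sits at the world origin and using placeholder field names — the logic could look something like this:

using UnityEngine;

// Rough sketch of the planned spawn logic; "monsterPrefab" and "spawnRadius"
// are placeholder fields, not finalised values.
public class MonsterSpawner : MonoBehaviour
{
    [SerializeField] private GameObject monsterPrefab;
    [SerializeField] private float spawnRadius = 10f;

    public void SpawnMonster()
    {
        // Pick a random angle on a circle around the player and place the monster on it.
        float angle = Random.Range(0f, Mathf.PI * 2f);
        Vector3 spawnPos = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * spawnRadius;

        GameObject monster = Instantiate(monsterPrefab, spawnPos, Quaternion.identity);
        monster.transform.LookAt(Vector3.zero); // face the player at the centre
    }
}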


Monster Behaviour and Setup:

  • I want to design a single monster prefab that contains all the core functionality: walking, taking damage, reacting to hits, and dying.
  • Each monster will require three successful hits from the player to be fully killed.
  • The monster’s Animator will have multiple states:
    • Walking toward the player.
    • A reaction animation for the first hit.
    • A different reaction for the second hit.
    • A death animation (falling forward or backwards) on the third hit.

I also want the walking animation to be affected by the hit reactions. For example, after being hit, the monster could pause briefly or slow down before resuming.


Key Goals:

  • Spawn Logic: Monsters will be spawned from various angles and distances (e.g. within a radius) and will always walk toward the centre (player).
  • Modular Prefab: All behaviour should be contained in one reusable prefab, so I can later swap out models or visuals without rewriting logic.
  • Focus on Functionality First: My current priority is to get everything working in terms of logic and animations. Once that’s stable, I can improve visuals, variety, and polish.


To detect proximity events between the user and approaching creatures, I attached a spherical collider under the head object – as can be seen in the screenshot above, located centrally – which represents the user’s origin point. For Unity’s physics system to register collision events correctly using OnTriggerEnter, at least one of the colliding objects needs to have a Rigidbody component — this was a key technical insight. I attached this Rigidbody to the user’s collider to ensure that the system could properly detect incoming threats.
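
A minimal sketch of that head-trigger setup, assuming the creatures are tagged “Monster” (class and tag names are placeholders):

using UnityEngine;

// Sketch of the proximity trigger under the head object. The kinematic Rigidbody
// exists purely so OnTriggerEnter events get registered.
[RequireComponent(typeof(SphereCollider))]
[RequireComponent(typeof(Rigidbody))]
public class PlayerProximityTrigger : MonoBehaviour
{
    private void Awake()
    {
        GetComponent<SphereCollider>().isTrigger = true;
        var rb = GetComponent<Rigidbody>();
        rb.isKinematic = true;  // no physics forces, just event detection
        rb.useGravity = false;
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Monster"))
        {
            Debug.Log("A creature has reached the player");
        }
    }
}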

The core mechanic involves enemy creatures that move toward the user from various directions. For this, I manually spawned seven creatures around the user, each set as a prefab, to simulate threats approaching from different angles. These enemies are animated and follow a basic AI path using Unity’s NavMesh system.

Initially, I struggled with configuring the NavMesh Agents. The creatures weren’t moving as expected, and I discovered the issue stemmed from an excessively large agent radius, which caused them to collide with each other immediately upon spawning. This blocked their movement toward the user until some of them were eliminated. Once I adjusted the agent radius in the component settings, the agents were able to navigate correctly, which was a significant breakthrough in troubleshooting.
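
A simplified sketch of the chase setup, assuming the player sits at the origin; the class name and the radius value are illustrative, not the exact settings I ended up with:

using UnityEngine;
using UnityEngine.AI;

// Sketch of the basic chase behaviour: every creature walks toward the seated player.
[RequireComponent(typeof(NavMeshAgent))]
public class CreepChase : MonoBehaviour
{
    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        // A smaller radius stops freshly spawned agents from blocking each other.
        agent.radius = 0.4f;
        agent.SetDestination(Vector3.zero); // the player sits at the origin
    }
}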

Another major learning point was in managing the creature’s state transitions through the Animator component. Each enemy had three lives, and I created a custom C# script to handle hits via a TakeHit() function. After each hit, the enemy would briefly pause and then resume movement—except after the third hit, which would trigger a death animation.

However, I encountered a strange behaviour: even after the death animation was triggered, the creature’s body would continue moving toward the user. This was due to the NavMesh Agent still being active. To resolve this, I had to disable the agent and stop its velocity manually by setting it to Vector3.zero. Additionally, I toggled the agent’s isStopped boolean to true and disabled the movement script to fully freeze the creature in place, allowing it to collapse realistically at the point of death.
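
The fix can be summarised in a short sketch like the one below (placeholder names; the movement script is referenced generically so any chase script can be plugged in):

using UnityEngine;
using UnityEngine.AI;

// Sketch of the death-freeze fix described above.
public class CreepDeath : MonoBehaviour
{
    [SerializeField] private MonoBehaviour movementScript; // e.g. the chase/movement script

    public void FreezeOnDeath()
    {
        var agent = GetComponent<NavMeshAgent>();
        if (agent != null)
        {
            agent.isStopped = true;        // stop the agent from steering
            agent.velocity = Vector3.zero; // kill any residual sliding
            agent.enabled = false;         // fully disable navigation
        }

        if (movementScript != null)
        {
            movementScript.enabled = false; // freeze the creature in place
        }
    }
}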

Overall, this project deepened my understanding of Unity’s physics, AI navigation, and animation systems. It also highlighted the importance of detailed debugging, iterative testing, and thinking holistically about how different systems in Unity interact. What seemed like minor details—such as collider hierarchy, Rigidbody placement, or NavMesh agent size—turned out to be crucial to achieving believable and functional gameplay.

Categories
Project 1

Acting + LipSync: blocking

PHONEMES: the mouth moves like an accordion. Each letter has a different sound, so the mouth moves differently depending on the letter, and the movement is also influenced by the mood/emotional state of the character.

Each phrase in the story represents a pose in the story.

Animate what the character is thinking, not what they are saying — the listener’s focus is on the eyes/face, not the mouth.

The most important are the 3 poses that establish the shot.

Understanding what makes up a sound helps establish how the corresponding parts of the face are moving.

Talk while resting against the elbow, to help you better understand the movement of the mouth.

Tween Machine and Animbot.

DreamWall Picker for Maya, by owenferny on YT.


4 components of language

For “m” and “b”, the lips have to be closed to make the sound, but the mouth then has to actually open — pressing and releasing the lips.

P and B work the same way.

1. Jaw: how it goes up and down, opening and closing the mouth.

2. Corners: in and out.


The above picture shows preparatory sketching, trying to understand the mouth, jaw and tongue movement, following an example from a YT tutorial. It is a very good exercise for understanding how different sounds are articulated, and how the movement should be built to approximate them convincingly, so that the viewer believes it is actually synced to the provided audio — the audio-visual match.



Just like the jaw, the corners should not keep going in and out just for the sake of moving.
The corners’ movement is driven by the adjacent sound.

E is a great example: in the word “never”, the E keeps the mouth wide open. An exaggerated E means the mouth opens quite wide, but it is also possible to pronounce the E short.

3. Tongue.

The most frequently underestimated and forgotten, yet significantly important. It needs to be taken into consideration.

LET IT GO: the L and T are the best examples — when L is pronounced, the tongue presses hard against the roof of the mouth; the same happens for T, but less forcefully.

4. The Polish

Look up if the character is breathing in between lines.

Do not go for default poses, create interesting shapes that express the emotions.

Be sure to have a little moment that stands out.



Process.


1. Setting up the project. Referencing the rig. Ensuring all the texture files are in the sourceimages folder.

2. Regenerating node inside the hypershade to fit the texture of the rig.

3. Default position. The character is sitting at a table in the restaurant, so a seated pose.



Optimising workflow:

Installing animbot

Installing Studio Library 2.20.2 in order to be able to create custom poses and then save them so these can be used later, throughout an animation.





Setting up the pose library that comes with the rig.
It throws some errors and requires changing the namespace.

Solution:

Go to Windows > General Editors > Namespace Editor

Find your current rig namespace (e.g., character1)

Select it and click “Rename”

Rename it to mars_v01








Some of the lip coordination notes, made while trying to match the sounds and approximate the lip sync.

ssssssss

way -> w

she’s -> shhhhhhhh
shameless ->


eee- from sheee

jaw slightly moves up

the – ffffffffffffffffffff


Approach:

Start with the lipsync to ensure it matches the soundtrack. The soundtrack is very helpful with keeping the timing.




The feedback from the TA:

If the head moves too dynamically, the viewer won’t be able to notice the facial expressions, as the high speed of the head movement will create something like a motion-blur effect. For a sequence with a lot of words, especially when the character speaks very quickly, the movement of the head should not be so dramatic.

Tips: Use animation layers instead of keeping all of the keys on the same layer. It will make the process much more efficient, and you will be able to toggle between the layers and blend them all together.


George’s feedback:

Get your GOLDEN POSES: get the start and the end of the body right, and then work with the middle.

The non-verbal part of the acting — the body language — is equally or even more important than the lip sync. It must showcase the emotional state of the character.

WORKING WITH THE QUICK SELECTION (WAIST, CHEST, NECK, HEAD) ***ORDER OF IMPORTANCE***

1. Set the animation default tangents to Auto (legacy) for working in spline.
2. Create a quick selection set for the body parts: COG + chest + neck + head (note: this will differ depending on the rig and the acting pose; this is true for my seated animation using the mazu rig).
3. Get the golden poses.
4. Start working with the middle part and get your in-betweens. Answer the question: which poses am I favouring? Apply this with the use of moving holds. Middle-mouse-drag over, then hit S to key, or use the Tween Machine.
5. Get the in-between poses. Decide how long each pose is held before moving to the next pose.


WORKING WITH CHEST

WORKING WITH THE NECK

WORKING WITH HEAD

The next step of the process in this method is to decide how the head moves in correspondence with the rest of the body.

In my reference, in the first part the torso moves and then the head follows; in the second part, by contrast, the head leads the movement.

To accentuate this further, the next step is to work with the chest and the neck. Only work with the parts of the body that appear in the shot; there is no need to work with the waist, as only the chest, neck and head are in the shot.



Working with the chest:

translation z:

translation y:

translation x:


The Graph Editor screenshots show work in progress: the one above gives better results than the one higher up. Rotation Z and Y match, which means that as the character moves their head in circles, the head goes back and then left, which makes sense. The one higher up, however, gives a worse, less organic and more jumpy effect.

Working with the neck controller to define the head movement. The order goes: waist, chest, neck, head. The waist is the closest to the COG. The body moves from the bottom upwards.

Reflection:

I’ve approached this the other way around, doing the lip sync first and then moving to the body movement. One could argue that this is not a bad approach, as it sets me off with a foundation of mouth movement; however, I’ll need to go back and reapply it accordingly.

My biggest problem in design thinking is being biased by being a 2D visual artist, and I often struggle to think in 3D. Going through George’s tutorial, I reminded myself that I often merely refer to what’s in the viewport, which is a 2D representation; when moving the chest, the translation does not only happen from side to side and back to front, but in all three dimensions together, which gives much better results.

I kind of ignored what was happening at the waist level. Too bad. The waist isn’t visible in my acting pose, but it’s still crucial. It’s fair to say it doesn’t move as much as the upper body, but whatever happens to the waist affects the rest of the body. That’s the FK (Forward Kinematics) system at work—think chain reaction. It’s also very important to animate the waist intentionally so that you can later emphasise the contrast between the movements of the head and the waist. One of these parts usually leads the motion, and that difference needs to be accentuated in the animation.

In gaming and virtual worlds, animation is crucial for communicating between characters in a non-verbal way.



Blocking updated video for feedback, after applying changes and following the tutorial in class:

Categories
Project 2

Bridging the Metaverse with the Metaphysical



Bridging Physical and Virtual Realms with Arduino and Unity, respectively

For this part of the project, I intended to explore the interaction between physical and virtual environments. I aimed to establish a two-way communication channel where actions in the virtual world could produce real-world effects. To achieve this, I used an Arduino microcontroller to receive output signals from a Unity-based VR environment. This allowed me to control a physical LED light in response to a virtual event – shooting bullets from the gun – via the controller’s index finger trigger.


Setup Overview

The physical setup consisted of a single LED connected to an Arduino board. The Arduino was linked to my laptop via a USB-C cable. On the software side, I developed a Unity project using the OVRInput system to trigger virtual shooting events. These events would send a signal through a serial port to the Arduino, prompting the LED to turn on briefly.


Initial Challenges and Troubleshooting

The setup proved to be more challenging than expected, particularly in terms of serial communication and platform compatibility. Below is a breakdown of key issues I encountered and how I addressed them:

1. Arduino Upload Issues

At first, I was unable to upload sketches from the Arduino IDE to the board, despite:

  • The Arduino being detected in Device Manager
  • The correct drivers being installed
  • The code compiling successfully

Even though the COM port was visible and correctly selected, the IDE failed to upload the code. After troubleshooting extensively and rechecking the USB connections, I found that a simple system reboot resolved the issue. This was unexpected, but it allowed uploads to proceed normally afterwards.


2. Unity and Arduino Serial Communication; Arduino Sketch and C# description

Unity does not natively support serial communication with external devices like Arduino. To bridge this gap, I relied on the .NET System.IO.Ports namespace, which provides serial communication capabilities.

I wrote a basic Arduino sketch that turns an LED on or off based on a received character ('1' for on, '0' for off). In Unity, I implemented a custom C# script that uses the SerialPort class to send these signals. This script was attached to an empty GameObject and referenced within the RayGun script to trigger LED activation when the player fires the gun.

Based on the tutorial for setup communication: Unity & Arduino Communication


This is a simple Arduino sketch designed to control an LED. In the setup() function, serial communication is initialized at 9600 baud (bits per second), and pin 2 is configured as an output to control the LED. Although a global buffer and a char array (buf[]) are defined with a size of 50, they are not actively used in the final version of the code. I originally experimented with reading multiple characters at once, but I noticed this caused the LED to remain continuously on — which didn’t work well for my intended shooting feedback effect. As a result, I opted to read only one character at a time, which allowed for more responsive and accurate LED control.

In the loop() function, the sketch checks whether any data is available on the serial port. If data is detected, a single character is read and stored in the cmd variable. If this character is '0', the LED is turned off (digitalWrite(2, LOW)); if it’s '1', the LED is turned on (digitalWrite(2, HIGH)). This allows Unity (or any external controller) to send simple serial commands ('0' or '1') to toggle the LED in real time.

I also included a short delay of 200 milliseconds after each loop cycle. This was partly based on recommendations from online tutorials, but also confirmed through testing: it helps synchronize the communication and prevents the Arduino from reading too frequently or reacting too rapidly, which could cause inconsistent behavior. This delay ensures that the LED only responds once per input, making it more suitable for the quick, discrete signals used in a VR shooting mechanic.




In terms of the C# implementation, the script makes use of the System.IO.Ports namespace, which provides access to serial communication via the SerialPort class. This is essential for enabling Unity to communicate with external hardware such as an Arduino.

Within the Start() method, a serial connection is established using COM6, which corresponds to the port associated with my Arduino controller connected to the PC. The communication is initialized at 9600 baud, matching the settings specified in the Arduino sketch (Serial.begin(9600)).

The SendSignal(bool on) method is designed to send simple control signals — either '1' or '0' — to the Arduino. If the on parameter is true, it sends '1', which lights up the LED. If it’s false, it sends '0', turning the LED off. This binary approach allows Unity to provide immediate physical feedback in response to in-game events, such as shooting.

Lastly, the OnApplicationQuit() method ensures that the LED is turned off when the Unity application is closed. It sends a final '0' to the Arduino before closing the serial port. This prevents the LED from remaining on unintentionally after the game ends.

In summary, this script acts as a bridge between Unity and the Arduino, using serial communication to synchronize digital actions (e.g., pressing a button on the controller) with physical outputs (e.g., turning on an LED). This implementation enables a simple but effective feedback loop between virtual and physical environments.
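
For reference, here is a condensed sketch of that bridge script, assuming the Arduino shows up on COM6 and that class and method names are placeholders rather than my exact file:

using System.IO.Ports;
using UnityEngine;

// Sketch of the Unity-to-Arduino bridge: sends single '1'/'0' characters at 9600 baud.
public class ArduinoController : MonoBehaviour
{
    private SerialPort serial;

    private void Start()
    {
        serial = new SerialPort("COM6", 9600); // must match the Arduino's Serial.begin(9600)
        serial.Open();
    }

    // Called from the RayGun script when the player fires.
    public void SendSignal(bool on)
    {
        if (serial != null && serial.IsOpen)
        {
            serial.Write(on ? "1" : "0");
        }
    }

    private void OnApplicationQuit()
    {
        if (serial != null && serial.IsOpen)
        {
            SendSignal(false); // make sure the LED is off before closing
            serial.Close();
        }
    }
}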


Key Technical Bottlenecks

Namespace and Class Recognition Errors

A major obstacle was Unity’s failure to recognise the System.IO.Ports namespace and the SerialPort class. The error message stated that the class existed in two assemblies: System.dll and System.IO.Ports.dll, causing a conflict.

To resolve this:

  • I changed the API Compatibility Level in Unity (via Project Settings > Player > Other Settings) to .NET 4.x.
  • I manually downloaded the System.IO.Ports package from Microsoft’s official NuGet repository.
  • I experimented with placing different versions of the DLLs in Unity’s Assets/Plugins folder, but most led to version mismatches or runtime errors.

Ultimately, changing Unity’s compatibility settings resolved the issue, and no additional DLLs were required. As per the image below, I manually located the files and copied them into the Assets folder of my Unity project, as Unity failed to retrieve the class from the namespace. This is something I was advised to do in the online group discussion. These files, however, ended up throwing an issue later on, with Unity reporting that the class was defined in two separate file versions: one came via a plugin downloaded in VSC (Visual Studio Code), the other manually via a zip folder from the website. In the end, both needed to be kept on the local machine, but not under the same project. Very confusing haha.


Windows-only Support Confusion

Another issue arose from Unity reporting that System.IO.Ports was “only supported on Windows”—despite the fact that I was working on a Windows machine. This turned out to be a quirk in Unity’s error handling and was resolved by ensuring:

  • The Unity platform target was correctly set to Windows Standalone.

Final Implementation Outcome

After several hours of testing and debugging:

  • Unity successfully sent serial data to the Arduino each time the player pressed the fire button.
  • The Arduino correctly interpreted the '1' signal to light the LED, and '0' to turn it off after a short delay.
  • The interaction was smooth, and the LED reliably responded to gameplay events.

This implementation serves as a foundational example of hardware-software integration, particularly in VR environments.


Categories
Project 2

Designing a Fantasy-Inspired Shooting Game with Meaningful Feedback

Concept: Flies, Monsters, and the Venus Flytrap

Rather than having the player simply destroy enemies, I imagined the monsters as servants of a Venus flytrap Queen, with flies trapped inside their bellies. When the player shoots a monster, the goal is not to harm it, but rather to liberate the flies trapped within.

So, with each successful hit, a swarm of flies bursts from the monster’s belly—visually suggesting that the player is helping, not hurting. This felt like a more thoughtful narrative balance, allowing for action gameplay without glorifying violence.

These monsters serve a queen—the “mother flytrap”—a physical computing plant in the real world. As the player defeats more of these monsters in-game, the number of freed flies is mapped to data used to drive a DC motor that gradually opens the flytrap plant in reality. This concept of connecting digital action to physical consequence is central to my interest in virtual-physical interaction.


Game Mechanics: Health and Feedback

Each monster has three lives, but early testing made it clear that players had no clear visual cue of the monster’s remaining health. I didn’t want to use floating numbers or health bars above their heads—it didn’t suit the aesthetic or the narrative. Instead, I introduced a colour transition system using materials and shaders.

Colour Feedback System: Traffic Lights

  • Monsters start with a vivid green body, representing vitality and their flytrap origin.
  • With each hit, the colour fades:
    • First hit: duller green.
    • Second hit: reddish-green.
    • Third hit (final): fully red.

This gives players immediate visual feedback on the monster’s state without UI clutter, reinforcing the game’s narrative metaphor.


Implementing the Visual Effects

Material Cloning

To change the colour of each monster individually (rather than globally), I cloned their material at runtime:

bodyMaterial = bodyRenderer.material;

This creates a unique material instance per zombie. If I had used the shared material, all zombies would have changed colour together — something I discovered during testing and debugging.

Colour Transition Logic

I wrote a function UpdateBodyColor() that calculates the colour based on the monster’s remaining health. It uses:

float healthPercent = Mathf.Clamp01(health / 3f);
bodyMaterial.color = Color.Lerp(Color.red, Color.green, healthPercent);

This smoothly transitions from red (low health) to green (full health), depending on how many lives the monster has left.
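
Put together, a minimal sketch of that logic might look like this (assuming health counts down from 3 and the body renderer is assigned in the Inspector; names are placeholders):

using UnityEngine;

// Sketch combining the material cloning and the colour lerp described above.
public class MonsterHealthColour : MonoBehaviour
{
    [SerializeField] private Renderer bodyRenderer;
    private Material bodyMaterial;
    private int health = 3;

    private void Awake()
    {
        // Accessing .material clones it, so each zombie gets its own instance.
        bodyMaterial = bodyRenderer.material;
        UpdateBodyColor();
    }

    public void TakeHit()
    {
        health = Mathf.Max(0, health - 1);
        UpdateBodyColor();
    }

    private void UpdateBodyColor()
    {
        float healthPercent = Mathf.Clamp01(health / 3f);
        // Full health = vivid green, no health = fully red.
        bodyMaterial.color = Color.Lerp(Color.red, Color.green, healthPercent);
    }
}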


Particle Effects: Flies – Stop motion animation aesthetics

To visualise the flies escaping, I used Unity’s Particle System.

  • I created a sprite sheet with 16 fly frames in a 4×4 grid.
  • I used the Texture Sheet Animation module to play these frames in sequence, creating a stop-motion animation effect.
  • The particle system is parented to the monster’s belly and only plays when the monster is hit.

I made sure to disable looping and prevent it from playing on start. That way, it only triggers when called from code (in the TakeHit() function).

This was an experimental but rewarding technique, simulating complex animation using a simple sprite sheet. It aligns well with the fantasy aesthetic, even though the flies are 2D in a 3D world.


Unity Implementation Notes

  • Each monster is a prefab. This made it easier to manage changes and maintain consistency across the game.
  • The TakeHit() method:
    • Reduces health.
    • Plays an animation.
    • Plays a sound (roar, hit, or death).
    • Triggers the flies’ particle effect.
    • Calls UpdateBodyColor() to change the visual appearance.
  • Once health reaches 0:
    • The monster “dies”.
    • The NavMesh agent stops.
    • A death animation and sound are played.
    • The object is destroyed after a short delay.
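
A condensed sketch of how these pieces might be wired together (all field, parameter, and method names here are placeholders rather than my exact scripts):

using UnityEngine;
using UnityEngine.AI;

// Condensed sketch of the TakeHit() flow listed above.
[RequireComponent(typeof(NavMeshAgent))]
public class Monster : MonoBehaviour
{
    [SerializeField] private Animator animator;
    [SerializeField] private AudioSource audioSource;
    [SerializeField] private AudioClip hitClip;
    [SerializeField] private AudioClip deathClip;
    [SerializeField] private ParticleSystem fliesParticles;
    [SerializeField] private Renderer bodyRenderer;

    private Material bodyMaterial;
    private NavMeshAgent agent;
    private int health = 3;

    private void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
        bodyMaterial = bodyRenderer.material; // per-instance clone
    }

    public void TakeHit()
    {
        health--;
        fliesParticles.Play(); // flies burst from the belly
        bodyMaterial.color = Color.Lerp(Color.red, Color.green, Mathf.Clamp01(health / 3f));

        if (health > 0)
        {
            animator.SetTrigger("Hit");
            audioSource.PlayOneShot(hitClip);
        }
        else
        {
            animator.SetTrigger("Die");
            audioSource.PlayOneShot(deathClip);
            agent.isStopped = true;   // stop walking toward the player
            Destroy(gameObject, 3f);  // remove after a short delay
        }
    }
}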

Bridging Physical and Digital Worlds

The core concept I’m exploring here is the flow of data between virtual and physical space. I love the idea of the player doing something in the game that directly affects the real world. By linking monster deaths to the opening of the physical flytrap plant via a DC motor, the project becomes more than just a game—it becomes an interactive experience across dimensions.

I see this as a kind of “virtual twin” concept, but not just a visual replica. It’s a data-driven relationship, where progress in the virtual world controls mechanical outcomes in the physical world.

This idea is loosely inspired by Slater’s concept of presence—how sensory data in one space (physical) can be reconstructed or mirrored in another (virtual), and vice versa. It’s this bidirectional flow of meaning and data that fascinates me.


Categories
Project 1

Hitting the SPLINE!

Spline the COG first, because the COG drives everything else!

Translate Y to decide about the weight:

  • Remove all the keys which are no longer needed.
  • Adjust the timing.
  • Simplify the curve: the goal of cleaning is to get the least amount of keys for the desired effect.


    Settings for spline:



The common problem — that the animation in spline looks bad after moving out of blocking — is the skipped blocking-plus stage. If the ratio of keyed frames in blocking to the entire timeline is small, say only 10% (20 keyed frames over a 200-frame timeline), it means that the majority of the work will be done by the computer. And here’s the point: you are the master of your own tool, not the computer — Maya is an extension of your abilities, but you need to dictate the animation — and this amplifies the importance of cleaning the spline curves. (Notes from Sir Wade Neistadt.)



Timeline: 150 frames
Keyed frames: 23

Roughly 15% of the frames are keyed; the rest will be interpolated by Maya, which means the spline pass is likely to be trouble:

  • the animation will feel very floaty (everything moves all at once, at one speed, in one direction – it’s like the hallucinated movement produced by AI)
  • the gimbal lock issue will occur
  • Moving holds: timing per body part – favouring the pose so it stays in one position longer and then arrives more quickly later

ADD MORE KEYS IN THE BLOCKING STAGE TO LOCK DOWN THE TIMING THAT YOU WANT FROM YOUR CHARACTER ACROSS THE ENTIRE SHOT



A GIMBAL is essentially a set of three ring axes allowing the rotation of an object. Gimbal lock refers to the situation where two of the ring axes overlap, causing the rotation to lock.

The rotation order by default is XYZ, meaning that X is a child of Y, and Y is a child of Z, so Z is the parent of them all.

Switching between object, world and gimbal modes: hold down W + left mouse button in the viewport.

Categories
Project 2

Mastering Transitions in Unity Animator: A Game Logic Essential


When creating dynamic animations in Unity, transitions play a vital role in shifting between different animation states. These transitions are not just aesthetic—they’re often tied directly to your game logic to deliver responsive and immersive feedback to the player.

What Are Transitions?

In Unity’s Animator system, transitions are used to move from one animation state to another. This could mean transitioning from an “idle” state to a “running” state when the player moves, or from a “standing” state to a “hit reaction” when the character takes damage.

Triggering Transitions

Transitions are typically triggered by parameters—these can be:

  • Booleans: Great for simple on/off states (e.g., isJumping = true). However, they should not be used for managing multiple simultaneous conditions, as this can cause errors or inconsistent behavior.
  • Integers or Floats: Useful when you need more nuanced control, such as switching between multiple animation states based on a speed or health value.
  • Triggers: Ideal for one-time events like a shot being fired or a character being hit.

For example, imagine a scenario in a shooter game:
When an object gets hit by a bullet, you can trigger a transition to a “damaged” animation state. This provides instant visual feedback to the player that the hit was registered—crucial for both gameplay and user experience.
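
A tiny sketch of the three parameter types in use (the parameter names here are just examples, not a fixed convention):

using UnityEngine;

// Illustrative sketch: one method per parameter type.
public class AnimatorParameterExamples : MonoBehaviour
{
    [SerializeField] private Animator animator;

    public void OnJumpStateChanged(bool jumping)
    {
        animator.SetBool("isJumping", jumping); // simple on/off state
    }

    public void OnSpeedChanged(float speed)
    {
        animator.SetFloat("Speed", speed);      // nuanced, value-driven transitions
    }

    public void OnBulletHit()
    {
        animator.SetTrigger("Hit");             // one-time event
    }
}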

Best Practices

  • Use Booleans sparingly, only for simple, binary state changes.
  • Use Triggers or numerical parameters for more complex or multi-condition transitions.
  • Always test transitions thoroughly to avoid animation glitches or logic conflicts.

Final Thoughts

Mastering transitions in Unity isn’t just about getting animations to play—it’s about making your game feel alive and responsive. By tying animations into game logic through intelligent use of transitions and parameters, you enhance both the realism and playability of your game.



Project Progress

For my project, I’m using a 3D model of a Creep Monster, which I downloaded from the FAB platform in FBX format. (Creep Monster | Fab) I then uploaded it to Mixamo, where I generated a skeleton for the model and applied different animations—such as standing idle and dying.

After exporting those animations, I imported them into Unity. One of the main steps is adding components, especially the Animator component. This is essential for handling transitions between animation states. These transitions are not only important for visual feedback but are also critical for managing game logic.

I also attached a custom C# script to the monster object in the scene. This script controls what actions should happen to the monster based on in-game interactions.

A key requirement for this setup is having an Avatar. The downloaded model didn’t come with one, but Unity allows you to generate it. To do this:

  1. Select the model in the Inspector panel.
  2. Go to the Rig tab under the Model settings.
  3. Change the Animation Type to “Humanoid”.
  4. Under the Avatar section, choose “Create From This Model”.
  5. Click Apply.

This process fixed an issue I faced during gameplay where the model wasn’t rendering correctly. The problem stemmed from the missing Avatar, which serves as the rig representation Unity needs for the Animator to work properly.

For interaction and testing, I modified a custom C# script attached to the bullet object. This script checks for OnTriggerEnter() events. When the bullet’s collider detects a collision with an object tagged as “Monster”, it triggers another script. That secondary script acts as a bridge, connecting the collision event to the Animator on the Creep Monster.

As a result, when the player shoots the monster, the Animator transitions from its default state to a “dying” animation. This workflow is how I’m handling enemy reactions and visual feedback for hits and deaths in the game.
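
A simplified sketch of that bullet-to-bridge pattern (class and method names are placeholders, and in practice each class would live in its own file):

using UnityEngine;

// Stand-in for the secondary script that talks to the Creep Monster's Animator.
public class MonsterReaction : MonoBehaviour
{
    [SerializeField] private Animator animator;

    public void OnShot()
    {
        animator.SetTrigger("Die"); // transition from the default state into the dying animation
    }
}

// Stand-in for the bullet script that detects the collision.
public class Bullet : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Monster"))
        {
            // Hand the event over to the script on the monster, which drives the Animator.
            var reaction = other.GetComponentInParent<MonsterReaction>();
            if (reaction != null)
            {
                reaction.OnShot();
            }
            Destroy(gameObject);
        }
    }
}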

It’s been a great way to troubleshoot and understand how colliders, custom C# scripts, and Animator states work together in Unity.

Next, I’ll explore animation layers for better control, especially for complex character behaviours.

Categories
Project 1

Body Mechanics: Reshooting Reference


Feedback 02/05/25

  • Lifting up the box: to make the process easier, let the character hold onto the box from below
  • constrain hands to the box -> then animate the box -> then copy animation over to the hands controllers (as per analogy to the in-class example with master controllers)
  • Adjust the timing at the beginning; currently, the implementation has a quick jump
  • The character approaches the desk with the box, then stops before getting closer to the box to lift it up
  • be sure to animate the legs according to where the weight is distributed, and bear in mind which leg is pivoting





Reshooting the reference based on the feedback given





Constrain


1. Create Locators — Create locators and constrain them to the cube object with offset maintained.
2. Constrain Locators to Controllers — Apply constraints from the locators to the corresponding controllers.
3. Assign Box Locators to Controllers — The right, centre, and left locators are constrained to the right arm, chest, and left arm, respectively.

Initially, I constrained the central locator directly to the chest of the character, which was incorrect. On reconsideration, it makes more sense for the central locator to work together with the chest rather than be constrained directly to it, because the character is holding the box with the arms while part of the weight is supported by the chest.


Categories
Project 2

Animating Interaction in Immersive Spaces: Exploring Visual Feedback in Gamified VR Experiences

Project Objective

This project is experimental in its nature and investigates the animation of visual feedback within gamified or interactive VR environments. The work is grounded in the field of Human-Computer Interaction (HCI), especially within 3D immersive spaces where traditional screen-based feedback is replaced by spatial, multisensory experiences.

From Screen to Immersion: A Shift in Feedback Paradigms

In traditional gaming, feedback is predominantly audiovisual, limited to what can be shown on a 2D screen. In VR, the immersive spatial environment introduces new challenges and opportunities. Visual feedback becomes a primary tool for user orientation and understanding, especially when combined with haptic and audio inputs.

As we move away from screen-based interactions into head-mounted display (HMD) experiences, visual animation plays a central role in reinforcing the sensation of presence and in guiding user interaction. Haptic feedback can complement this, but visual cues remain the most immediate and informative component of interaction in VR.

The Role of Animation in Action-Based Feedback

Consider a typical game mechanic like shooting a gun in VR:

  • The user performs an action (e.g. firing a laser).
  • A visual response follows (e.g. a laser beam shoots forward and hits a target).
  • This entire process is animated, and the quality and believability of that animation impact the user’s understanding and experience.

Here, animation serves not just aesthetic purposes, but acts as a way to translate data into visual understanding — the laser beam visualises the trajectory and the distance within the space, which helps the user understand it and gives them a sense of an environment with an effectively unbounded field of view.



1. Personal Space

  • Distance: ~0.5 to 1.2 meters (1.5 to 4 feet)
  • Description: This is the space we reserve for close friends, family, or trusted interactions.
  • Use in VR: Personal space violations in VR can feel intense or invasive, making it powerful for emotional or narrative impact.

2. Intimate Space

  • Distance: 0 to 0.5 meters (0 to 1.5 feet)
  • Description: Reserved for very close interactions—hugging, whispering, or personal care.
  • Use in VR: Very rare outside of specific narrative or therapeutic experiences. It can evoke strong emotional responses, including discomfort.

3. Social Space

  • Distance: ~1.2 to 3.5 meters (4 to 12 feet)
  • Description: The typical space for casual or formal social interaction—conversations at a party, business meetings, or classroom settings.
  • Use in VR: Useful for multi-user or NPC (non-player character) interaction zones where you want users to feel present but not crowded.

4. Public Space

  • Distance: 3.5 meters and beyond (12+ feet)
  • Description: Space used for public speaking, performances, or observing others without direct engagement.
  • Use in VR: Great for audience design, environmental storytelling, or large-scale virtual spaces like plazas, arenas, or explorable worlds.


Visual Fidelity vs Animation Fidelity

Two critical concepts are being explored:

  • Visual Fidelity: How objects are represented in terms of texture, rendering quality, lighting, and detail. This relates to how convincing or realistic the environment feels.
  • Animation Fidelity: How smoothly and convincingly motion and interactions are animated. It includes timing, easing, weight, and physicality of motion.

Both forms of fidelity are essential for user immersion. High animation fidelity, in particular, supports believability—users need to feel that the action they performed caused a logical and proportional reaction in the environment.

Procedural Animation and Data-Driven Feedback

One key technique in this research is procedural animation—animations generated in real time based on code, physics, and user input, rather than being pre-authored (as in keyframe animation).

For example:

  • A bullet fired from a gun might follow a trajectory calculated based on angle, speed, and environmental factors.
  • The impact animation (e.g. an explosion) could scale in intensity depending on these values—larger impacts for faster bullets or steeper angles.
  • This helps communicate different degrees of impact through graduated visual responses.

Benefits of Procedural Feedback

  • Consistency with physical principles (e.g. gravity, momentum).
  • Dynamic responsiveness to different user actions.
  • Enhanced variation and realism, reducing repetitive feedback.

Designing Feedback for Understanding and Engagement

For visual feedback to be meaningful, variation matters. If every action results in the same animation (e.g., the same explosion no matter how intense the bullet), the user loses the ability to interpret the nuances of their actions.

Therefore, we must design condition-based feedback:

  • A weak hit might produce a small spark.
  • A high-speed, high-impact hit might create a large explosion with particle effects and shockwaves.

This approach informs users of the intensity and outcome of their actions, using animation to bridge interaction and consequence.
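
As an illustrative sketch of such condition-based feedback (all names, thresholds, and effects below are assumptions, not something already in this project):

using UnityEngine;

// Sketch: the impact effect scales with how hard the projectile hit.
// Assumes both effect prefabs play on awake when instantiated.
public class ImpactFeedback : MonoBehaviour
{
    [SerializeField] private ParticleSystem sparkEffect;     // weak hits
    [SerializeField] private ParticleSystem explosionEffect; // strong hits
    [SerializeField] private float strongHitSpeed = 20f;     // threshold in m/s

    private void OnCollisionEnter(Collision collision)
    {
        float impactSpeed = collision.relativeVelocity.magnitude;
        Vector3 point = collision.GetContact(0).point;

        if (impactSpeed < strongHitSpeed)
        {
            Instantiate(sparkEffect, point, Quaternion.identity); // small spark
        }
        else
        {
            // Scale the explosion with impact speed so faster hits read as bigger.
            ParticleSystem fx = Instantiate(explosionEffect, point, Quaternion.identity);
            fx.transform.localScale = Vector3.one * Mathf.Clamp(impactSpeed / strongHitSpeed, 1f, 3f);
        }
    }
}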

Justifying Design Choices

When working with keyframe animation, it’s essential to justify and rationalise aesthetic choices:

  • Why a certain texture?
  • Why a particular particle effect?
  • How do visual effects reflect the mechanics or emotional tone of the interaction?

These decisions must support the internal logic of the virtual environment and the user’s mental model of interaction within it.

Conclusion

This project explores how animation in immersive VR spaces can meaningfully represent user actions and their consequences. Through the lens of visual and animation fidelity, and by leveraging procedural techniques, we can create responsive, engaging, and informative feedback systems that enhance immersion, understanding, and enjoyment in gamified experiences.

Ultimately, thoughtful visual feedback design becomes a powerful language—bridging code, physics, and emotion—in the architecture of immersive digital spaces.





Categories
Project 1

Body Mechanics: Blocking Plus


Next Week – Animation Tasks & Focus

Checklist:

  • Organize reference clips
  • Finalize idea and story beats
  • Prep scene file (camera, rig, environment if needed)

Blocking Phase Goals:

Blocking Plus (Blocking with Moving Holds)

  • Include moving holds to keep the animation alive even in still moments
  • Maintain rhythm and weight with subtle in-place motion

Blocking in Steps

  • All major keys placed on the same frame per pose (stepped tangents)
  • Emphasize strong pose-to-pose structure



Weighted tangent

By default, Maya uses unweighted tangents in the Graph Editor, which makes the process of adjusting curves cumbersome — e.g. when adding hang time to a bouncing ball animation and then changing the position of the bounce. To solve this, using weighted tangents is recommended.

With the weighted tangent option you never have to break the tangents; tangents are broken only to change direction.




Animation

  1. Reference the rig.
  2. Perform animation on master controller.
  3. Adjust the graph editor.

  • While working in spline, break the tangent first, before applying the weighted tangent (default is unweighted tangent)


Working with weighted tangents: select all -> change to weighted tangents -> MMB + Shift and drag (Shift, just like in other software, keeps the line straight while extending).

Troubleshooting

If Maya stops showing the rig/model in the viewport: in that view, go to Show and check which items are unchecked.

Working with the weighted tangent. Translation Y, going up and down.

So if the slope is steep to start with, the character will pop up. By contrast, if the slope eases out slowly, the character will smoothly and steadily come to rest.


Here, the slow, steady start gives anticipation. The quick, steep end means the character goes straight down.



This graph describes Translation Y. Before the character goes up, it first dips down — the start of the curve goes into negative values — before rising again.

The tangent gets rotated to get more of the “hang out” time: hanging in the air, reaching equilibrium and losing momentum, before starting to fall back down.

Baking keys & transferring it over to limbs

result:



The elbow constraints are only visible in IK, so be sure to change the IKFK attribute to 1: setting IKFK = 1 switches the arm from FK mode (0) to IK mode (1).

In an IK (Inverse Kinematics) setup, the elbow is usually constrained via a pole vector or some specific control to guide the elbow’s bend. In FK (Forward Kinematics), you manually rotate joints without the system solving the bend, so “elbow constraints” (like pole vectors) don’t really apply in the same way or may not be visible/active.


Workflow:

Select the elbow control group or object (which has the constraint)

Shift-select the elbow locator (driver)

Apply a parent constraint (Maintain Offset OFF)
This way, the control moves exactly with the locator — no offset.

The above shows the correct constraints.

Once all 9 parent constraints have been applied, proceed with baking the keys.





After baking the animation keys onto the locators (selecting all the locators and then Key -> Bake Animation), I selected and deleted all the constraints and the animation made on the main controller; when playing the animation back, the locators kept animating in the viewport.

“You should never animate on the master control, but you can use the master control to your advantage.”
To do so, follow this process:



1. Animate on the master control and adjust the animation to fit your timing.
2. Create locators for each of the 9 controls: foot, knee, arm, elbow (×2, right and left sides respectively) + COG (usually the pelvis).

3. Create parent constraints for each of these locators, with the corresponding control as the parent (driver) — do not maintain offset — and the locator as its child.
4. Bake the animation key onto the locators (after checking that the previous step was done correctly).
5. Delete the constraints.
6. Perform the reverse copying of the animation from the locators onto the controllers (this time with maintain offset). Watch out: the elbow and knee pole vector constraints only allow translation and no rotation, so you won’t be able to copy rotation over, which may yield an error.
7. Select all the controllers via the quick selection tool (no master control included, no constraints) and bake the animation keys onto the controllers now.
8. Delete all the locators afterwards.

This is a wrong selection set; again, no animation on the master control.


The Tangents configuration for the animation in blocking, as per the image above.

1. Update the settings.
2. Open the Graph Editor, select all of these keys and opt for stepped tangents. Essentially, this will allow you to see all the poses.

Moving holds.

Get your poses done, according to the planning you prepared before this exercise.

Let’s say you’re going to pose frames 10 and 16 (since the animation is baked every 3 frames) and leave frame 13 unposed.

1. Select controllers via the quick selection key.

2. Narrow down the keyframes to 10–16, and apply spline tangents to this section within the Graph Editor.
3. Copy the animation from frame 11 (given to you by Maya’s spline interpolation) onto frame 13 via the MMB — this essentially creates the moving hold.
4. Go to frame 13 and slightly readjust the animation, such as the head moving slightly the other way.


31:07 picked up from here


Categories
Project 2

Mixed Reality Game

I’ve been experimenting with the idea of creating a mixed reality game—something I first got excited about during a group project last term. That earlier experience got me thinking about how physical and virtual systems can be connected, but now I want to take it further and really explore in this project what mixed reality can mean as a creative and interactive space.

Mixed reality is often understood as bridging the physical and virtual, where virtual objects are rendered on top of the real world, touching on the idea of augmentation. But what I was more interested in was the idea that experiences and interactions happening within the virtual world, inside the headset, can actually result in physical outcomes. I’m really fascinated by that transition between the two worlds, especially how user actions in VR can influence or change something in the real world.

I’ve begun with reading to back this concept up. One book that’s really influenced my thinking so far is Reality+ by David Chalmers. He argues that virtual realities aren’t fake or “less real” than the physical world. Virtual objects and experiences—if they’re consistent, immersive, and meaningful to the user—can be just as “real” as anything in the physical world. That idea stuck with me, especially when thinking about digital objects as things that have structure, effects, and presence—even if they’re built from code instead of atoms. What I’m interested in now is creating a system where virtual actions have real-world consequences. So instead of just being immersed in a virtual world, the player’s success or failure can actually change something in the physical world. That’s where I started thinking of repurposing my Venus flytrap installation—something physical and mechanical, driven by what happens in the VR game.

The game idea is still forming, but here’s the general concept: the player is in a virtual world where they need to defeat a group of Venus flytrap-like creatures. If they succeed, a real-life Venus flytrap sculpture (which I’m building using cardboard, 3D-printed cogs, and a DC motor) will physically open up in response.




Inspirational Work

Duchamiana by Lillian Hess

One of the pieces that really inspired my thinking for this project is Duchamiana by Lillian Hess. I first saw it at the Digital Body Festival in London in November 2024. After that, I started following the artist’s work more closely on Instagram and noticed how the piece has evolved. It’s been taken to the next level — not just visually, but in terms of interactivity and curation.

Originally, I experienced it as a pre-animated camera sequence that moved through a virtual environment with characters appearing along the way. But in a more recent iteration, it’s been reimagined and enriched through the integration of a treadmill. The user now has to physically walk in order for the camera to move forward. I really love this translation, where physical movement in the real world directly affects how you navigate through the digital space.

Instead of relying on hand controllers or pre-scripted animations, the system tracks the user’s physical steps. That real-world data drives the experience. What stood out to me most was not just the interaction, but the way physical sound — like the noise of someone walking on the treadmill — becomes part of the audio-visual experience. It creates this amazing fusion: a layered, mixed-reality moment where real-world action affects both the sound and visuals of the virtual world.

It’s made me think a lot about how we might go beyond the VR headset — how we can design for transitions between physical and virtual spaces in more embodied, sensory ways. It’s a fascinating direction, and this work really opened up new possibilities in how I’m thinking about my own project.





Mother Bear, Mother Hen by Dr. Victoria Bradbury



Mother Bear Mother Hen Trailer on Vimeo

One project that really caught my attention while researching mixed reality and physical computing in VR is Mother Bear, Mother Hen by Dr. Victoria Bradbury, an assistant professor of New Media at UNC Asheville. What I found especially compelling is how the piece bridges physical and virtual spaces through custom-built wearable interfaces — essentially, two bespoke jackets: one themed as a bear and the other as a chicken.

These wearables act as physical computing interfaces, communicating the user’s real-world movements into the game world via a Circuit Playground microcontroller. The gameplay itself is hosted in VR using an HTC Vive headset. In this setup, the user’s stomping motion — tracked through the microcontroller — controls the movement mechanics within the game, while the HTC Vive controllers are used for actions like picking up objects or making selections.

Both the bear and chicken jackets communicate via an Arduino Uno. A stomping motion controls the movement, and the costumes provide auditory and visual responsiveness.

What makes this piece even more immersive is how the wearables themselves provide both audio and visual feedback. They incorporate built-in lighting and responsive sound outputs that reflect what’s happening in the game — a brilliant example of reactive media design.

The project was awarded an Epic Games grant, which shows the increasing recognition and support for experimental VR and new media works that fuse physical computing with virtual environments. I found it incredibly inspiring, especially in the context of exploring embodied interaction, wearable technology, and the creative possibilities of mixed reality. It’s a powerful reminder of how far VR can go when it intersects with tactile, real-world interfaces and artistic intention.


Unreal Plant by leia @leiamake

Unreal Plant

Another project that combines Unreal Engine with physical computing is Unreal Plant by leia @leiamake. It explores the idea of a virtual twin, where a real plant is mirrored in both the physical and digital realms. The setup includes light sensors that monitor the plant’s exposure to light in the real world. That data is then sent to the virtual world, where the digital twin of the plant responds in the same way—lighting up when the physical plant is exposed to light.

This project is a great example of bridging the two worlds and creating an accurate mapping between them. It really embodies the concept of the virtual twin by translating real-world inputs into virtual actions. There’s also some additional interaction, like pressing a button to water the plant, though that part is more conventional and relies on standard inputs like a mouse or keyboard. Still, the core idea—mirroring real-world changes in the virtual space—is what stood out to me.

Virtual Reality and Robotics

A similar concept using physical computing and robotics can be found in real-world applications, such as remote robotic control systems. In these systems, the user operates from a VR control room equipped with multiple sensors. Using Oculus controllers, the user can interact with virtual controls in a shared space—sometimes even space-like environments.

Because Oculus devices support features like hand tracking and pose recognition, users can perform tasks such as gripping, picking up objects, and moving items around. What’s impressive about this setup is that the designers have mapped the human physical space into virtual space, and then further mapped that virtual space into the robotic environment. This creates a strong sense of co-location between the human operator and the robot’s actions.

This setup is a great example of a virtual twin, where there’s a precise and responsive mapping between virtual and physical environments. It also points towards the emerging concept of the industrial metaverse, particularly in training robots and AI systems.

A similar example is the early development of Tesla’s humanoid robots. While these robots are designed to be autonomous, they can also be operated by humans through virtual reality systems. The aim is often to support tasks like healthcare delivery or improving social connections.

This is an example where VR and robotics are used to stock shop shelves remotely, from a distance.

This ties into the idea of the virtual twin—replicating what happens in the virtual world and mapping it onto physical robots. It’s essentially about translating human actions in a digital environment into responses in the physical world, which is a key aspect of bridging virtual and real spaces.

Industrial metaverse.