Categories
Project 2

Mixed Reality Project Reflection

Project Title: Venus Flytrap – Mixed Reality Game Experience
Platform: Meta Quest 3
Software: Unity (with Meta XR SDK), Arduino (for physical computing integration)



Overview & Context

This project was developed in response to a design brief that asked us to create an engaging experience using animation and storytelling, with a focus on user interaction and immersive feedback. The brief encouraged experimentation with multiverse concepts — blending physical and virtual worlds — to craft a compelling, thought-provoking experience.

I chose to approach this by designing a mixed reality (MR) game experience for Meta Quest 3 using Unity and physical computing via Arduino. The main focus was on user agency, visual feedback, and the integration of real-world components that react dynamically to virtual actions.


Concept & Gameplay Summary

The game centres around a Venus Flytrap theme and is designed as a seated, single-player experience. The player is surrounded by “flytrap monsters” and must shoot them to rescue a trapped fly, physically represented in the real world via a mechanical flytrap installation.

Core Mechanics:

  • Shoot flytrap monsters (each requiring 3 hits to be defeated)
  • Defeat all enemies to trigger a real-world reward: opening the physical flytrap and “releasing” the fly
  • Fail condition: If monsters reach the player, health depletes, and the game ends

This concept questions the idea that “what happens in VR stays in VR” by making real-world elements react to in-game choices.


Animation & Visual Feedback

A central focus of the project was the role of animation in creating strong audiovisual feedback and reinforcing player agency:

  • State-based animations: Each monster visually reacts to bullet hits via Unity’s Animator Controller and transitions (e.g., three different states reflect remaining health).
  • Color feedback: Monsters transition from vibrant green to desaturated red as they take damage.
  • Stop-motion-inspired effects: When monsters die, animated flies burst from their abdomen — a stylized visual reward.
  • Audio cues: Complement visual states and further reinforce player actions.

This dynamic feedback enhances immersion, turning simple interactions into meaningful, sensory-rich experiences.


Technical Implementation

  • Unity + Meta XR SDK: Developed the core game logic and animation system within Unity using the Meta SDK for mixed reality features like passthrough, hand/controller input, and spatial anchoring.
  • Arduino Integration: Connected the physical flytrap installation to Unity using serial communication. Actions in VR (e.g. defeating monsters) trigger real-world servo movements.
  • Pathfinding with NavMesh Agents: Monsters move toward the player using Unity’s NavMesh system, avoiding terrain obstacles and interacting with the player collider.
  • Procedural Bullet Logic: Shooting was implemented via C# scripts that instantiate projectile prefabs with force and collision detection. Hits are registered only on monsters’ abdominal areas.
  • Game State Management: Score tracking, monster kill count, and health depletion are all managed via a custom script, providing a clear player progression system (a minimal sketch follows this list).
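
A minimal sketch of what that game-state script could look like. The class and member names here (GameStateManager, RegisterKill, DamagePlayer) are illustrative rather than the project's actual ones, and the flytrap trigger is only indicated with a log message:

using UnityEngine;

// Hypothetical sketch of the custom game-state script: tracks kills, score and
// player health, and flags the win condition that drives the physical flytrap.
public class GameStateManager : MonoBehaviour
{
    public int totalMonsters = 7;   // monsters surrounding the player
    public int playerHealth = 3;    // depleted when monsters reach the player

    private int killCount;
    private int score;

    // Called by a monster's health script when it dies.
    public void RegisterKill()
    {
        killCount++;
        score += 100;               // arbitrary score value for illustration

        if (killCount >= totalMonsters)
        {
            // Win condition: this is where the serial signal that opens the
            // physical flytrap would be sent in the full setup.
            Debug.Log("All flytrap monsters defeated: release the fly!");
        }
    }

    // Called when a monster reaches the player's collider.
    public void DamagePlayer()
    {
        playerHealth--;
        if (playerHealth <= 0)
        {
            Debug.Log("Health depleted: game over.");
        }
    }
}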

Design Decisions & Constraints

  • Asset Use: Monsters were rigged using Mixamo animations and a model sourced from Unreal Engine assets (Creative Commons licensed), allowing focus on interaction design and technical implementation over custom 3D modeling.
  • Animator Learning Curve: A major portion of the project involved learning Unity’s animation pipeline, particularly in creating clean transitions and state-based logic driven by gameplay.
  • Hardware Limitations: Due to Meta Quest 3 restrictions, the final APK shared for submission runs without Arduino hardware. The version with physical feedback requires an external microcontroller setup, which is documented in the accompanying video.

Physical & Virtual Worlds Integration

This project explores the interplay between digital interaction and real-world feedback, where actions in the virtual world have tangible consequences. It challenges typical game environments by asking: What if virtual success or failure could reshape physical reality — even subtly?

By positioning the player in the center of a game board-like world and creating a reactive environment both inside and outside the headset, I aimed to explore storytelling through presence, consequence, and multisensory feedback.


Key Learnings

  • Deepened understanding of Unity’s animation system (Animator Controller, blend trees, transitions)
  • Gained practical experience with physical computing and VR device integration
  • Improved technical problem-solving through scripting gameplay logic and Arduino communication
  • Recognised the power of animation fidelity in conveying emotion, feedback, and agency

Submission Info

  • Final APK (without Arduino requirement) included
  • Showcase video with documentation of the Arduino setup + physical installation
  • Additional documentation includes technical breakdown, flowcharts, and design process
Categories
Project 2

World Defining

The world-building element is partly there to make the scene more visually engaging, but its main purpose is functional. It’s meant to enhance the gamified experience by adding some challenge, and by introducing objects into the environment that interact with the gameplay.

These new digital “plants” serve two main purposes:
First, they help constrain the user’s field of vision, with long, plant-like forms appearing on the horizon; the idea is to partially obscure distant vision, creating a more immersive and slightly tense experience. Second, they embrace the Venus flytrap aesthetic, reinforcing the overall theme.

I’m keeping this project in mixed reality because it’s important that the user can still see the physical Venus flytrap installation and observe how it responds at the end of the game. That physical interaction is core to the experience.

By introducing these digital plants into the scene, I also managed to constrain the movement of the monsters. Now, not all monsters move directly toward the user in a straight line. Because the mesh surface for movement has been reduced by the placement of these digital obstacles, the monsters must now navigate around them to reach the user. This makes the gameplay more dynamic and adds natural variation to how and when monsters arrive.

Another layer of diversity in movement was achieved by tweaking the NavMesh Agent settings on each monster prefab. By adjusting parameters like speed, acceleration, and angular speed for each monster individually, their behavior became more varied and unpredictable.
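
As a rough sketch of that variation, the same effect could also be randomised in code at spawn time rather than set per prefab by hand; the value ranges below are purely illustrative:

using UnityEngine;
using UnityEngine.AI;

// Gives each spawned monster slightly different movement characteristics.
public class MonsterMovementVariation : MonoBehaviour
{
    void Start()
    {
        NavMeshAgent agent = GetComponent<NavMeshAgent>();

        // Randomise the parameters mentioned above so every monster
        // approaches the player at its own pace and turning rate.
        agent.speed = Random.Range(0.8f, 2.0f);
        agent.acceleration = Random.Range(4f, 10f);
        agent.angularSpeed = Random.Range(90f, 240f);
    }
}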

I also added more detailed textures to the floor and plane to better fit the aesthetic, updated the UI to match the theme, and fine-tuned the color palette. All of these design choices help reinforce the atmosphere of a plant-monster, green Venus flytrap world.

So overall, these updates have made the project feel more cohesive, engaging, and visually in line with the original concept.

Categories
Project 1

Acting + LipSync: refining the blocking

When animating the eyes, make sure not to cut into the pupil; the iris can be slightly cropped, but only in extreme cases—such as when the character is intoxicated—should the pupil be cut.

The whites of the eyes should not be visible at the top or bottom unless the character is expressing intense emotions like fear or excitement. For example, when my character is looking downward, perhaps at someone with disgust—they are neither scared nor excited. Therefore, the eyelids should not be wide open in that pose.

The arc of the organic movement is crucial. The SyncSketch above includes notes on tracking the mark on the nose across frames to define the trajectory of the head movement. Since the nose moves very little, it should be used as a reference point for tracking the head’s motion and its pathway.

In the next steps, I’m going to adjust the head’s positioning—specifically its rotation—to ensure the movement feels more organic.

Right now, there’s too much happening in the neck area, making it feel like the neck is the pivot point. Rework the organic movement of the head first, and that should help correct the neck’s motion.

Remember that this shot is a 2D representation of movement, which will visually flatten things. Make sure to show both the upper and lower teeth. For example, if only the top teeth are visible, and too much gum is shown, the result may look like dentures.

The eyebrows need more animation to support the transition between expressions of surprise and disgust. Avoid holding the default pose for too long.

The mouth movement is currently exaggerated, which is fine—it just needs to be slightly toned down as the animation progresses.

As the head dynamically moves into the expression of disgust, there should be a clear preparation for that beat. The mouth will take on the shape of an inverted triangle, which forms the base for articulating that emotion.

Before: visualising the nose’s trajectory. As per feedback, the arcs still have to be delivered and polished.

After applying the changes from the feedback, the movement arcs are delivered – the red line shows the trajectory of the tip of the nose.

The built-in motion trail animation feature is causing a lot of lag – it’s effectively unusable. A better, more efficient way of achieving the same result is animbot’s motion trail function.

Categories
Project 2

Venus Fly Trap Physical Computing Installation

Blending Worlds: Building My Venus Flytrap VR Game

Over the past few weeks, I’ve been figuring out how to bridge what I’m doing creatively with the technical setup—finding a way to connect my ideas in Unity with a working space that includes a VR headset, microcontroller, and physical components. After sorting out how to get Unity communicating with the microcontroller, based on the exploration described in the previous blog post, I began working on my more sophisticated idea.

One small but meaningful breakthrough was setting up a button that, when pressed, would light up an LED on a breadboard. It was a simple interaction, but it confirmed that the virtual and physical systems could communicate. It might seem basic, but this helped me break through a technical wall I’d been stuck on. Sometimes the simplest prototypes are the most important steps forward.

From there, I started thinking about how to bring coherence between the physical and virtual elements of the project. The game I’m building revolves around a Venus flytrap, and I wanted the whole aesthetic and gameplay experience to revolve around that concept. In the game, the Venus flytrap acts as both protector and trap. It hides a real-world object inside its petals (in this case, the user) and stays closed. The player’s goal in VR is to defeat all the “fly monsters” surrounding it. Once they’re defeated, the Venus flytrap opens, revealing the trapped player and marking the win.


Repurposing Physical Models

For this, I built a physical model of a Venus flytrap. The petals are made of painted cardboard and 3D-printed components, designed around a gear-based movement system that controls how the trap opens and closes. A DC motor mounted at the back drives the movement, using a cog system with four gears that allow the left and right petals to move in opposite directions. The mechanics are relatively straightforward, but designing the gear system took a fair amount of design thinking and trial-and-error.


Bridging Code and Motion

The movement logic is coded in Arduino, which I then uploaded to the microcontroller. It communicates with Unity through a patch that tracks what’s happening in the game. Specifically, the system monitors how many fly monsters have been “killed” in the virtual world. Once all the fly monsters are defeated, a signal is sent to the motor to open the Venus flytrap in real life—a moment of physical transformation that responds to virtual action.

This project sits somewhere between VR and physical computing. It’s not just about creating a game, but about exploring what happens when virtual reality meets metaphysical, embodied experiences. I’m fascinated by this transition between worlds—the way something in the virtual space can have a tangible, physical consequence, and how that loop can create a more meaningful sense of interaction and presence.

Setting up this system—both technically and conceptually—helped me shape the final scenario. It’s about merging playful game mechanics with thoughtful digital-physical storytelling, and I’m excited to keep exploring how far this kind of hybrid setup can go.

Categories
Project 1

Acting Intro

Planning, planning, planning.

Importance of planning: “In order for the audience to be entertained by the characters, the shots need to be appealing in composition, posing, rhythm and phrasing, and contain acting choices that feel fresh.”

Emotional choices, acting.

What happens before the shot, during and after, in order to keep continuity across the shots.


12 rules:

1. The character does not always need to be in motion; in fact, juxtaposition with still poses (moments of stillness) works best.

2. Dialogue doesn’t drive action, but thought does. Establish the emotional state of the character.

What are the main emotional poses? Decide on the timing of the changes between the emotional states.

3. The melody
base = main poses
middle = overlapping actions leading into and out of main poses, and weight shifts to support small gestures
high = non-verbal body movement (head, shoulders, eyes)
moments of stillness = punctuation

“character’s performance like a song—layering rhythms on top of each other to create interesting texture and timing within your animation. “

4. HANG TIME: just like a bouncing ball losing its energy as it bounces back and gets stuck in the air

HANG TIME = the thought-processing moment; time must be allowed for the shift between emotions

5. Create a neutral pose for your character (do not animate on the rig’s default T-pose)

What are the character’s main traits?

6. What’s the style of the movement? The style of movement gives away the character’s personality.

e.g.:
Buzz Lightyear as a Spanish flamenco dancer in Toy Story.
redefining the walk cycle into something non-vanilla (sassy, drunk, confident, etc.)

7. The line of action to create expressions.

face
shoulder lines
hip lines


Contrapposto is an Italian term, used in the visual arts to describe a human figure standing with most of its weight on one foot.

Contraposition: one side of the face is stretched while the other side is squashed


8. MOVE THAT BODY: sincerity of the feelings should be expressed through the entire body movement, not only the face.

exaggerate the TY (Translate Y) value in the hips to connect the lower body more to the performance, which enhances the emotional delivery


9. EXAGGERATION + extreme poses + smear

juxtaposition of opposites (extreme balanced out with subtle) that gives the animation MORE energy and life


10. Thoughts, not words

When a scene does have dialogue, a great trick to figuring out how to animate to a character’s thoughts is to write out the actual dialogue on paper, leaving spaces between the lines. Then, in a different color, write what the character is *thinking* right below what they’re saying.

Categories
Project 2

Animation States based on user actions

Basic functionality


Getting the basic functionality implemented involved placing a box collider in the scene. The box has a trigger collider and a tag, which allows it to respond to collisions with a bullet object. This is the foundation for shooting bullets from the gun and hitting the monster’s abdomen. Sorry, I hope this ain’t too cruel.

In my C# script, I’m using OnTriggerEnter to detect when a bullet collides with this box. By using tags, I can define different behaviours depending on what type of object in the scene the bullet hits. For example, I assign the tag "Monster" to this collider, and the script handles the case where the bullet hits an object with that tag; consequently, only monsters can get hit, and only in the abdomen – everywhere else they are bulletproof.

Now, to make interactions more precise and tied to the character’s body, I’ve attached this collider box as a child to one of the bones of the 3D model. The model (and its animations) are imported from Mixamo, and by placing the box under a bone like the spine or hips, it follows the character’s movements during animation. The spine worked out better visually than the hips, as there is more movement at the spine than at the hips relative to the COG.

This setup ensures that the hitbox stays aligned with the monster’s belly even as different animations play — for example, idle, walk, or death. Since it’s parented to a bone, it moves in local space relative to the skeleton, not just world space.
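
A minimal sketch of the bullet-side check described above, assuming the health script on the monster is called MonsterHealth and exposes the TakeHit() method mentioned later in this post:

using UnityEngine;

// Attached to the bullet prefab. The abdomen hitbox is a trigger box tagged
// "Monster" and parented to a bone, so it follows the animation.
public class Bullet : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // Only react when the bullet enters the abdomen hitbox.
        if (other.CompareTag("Monster"))
        {
            // The hitbox sits on a bone, so look for the health script up the hierarchy.
            MonsterHealth monster = other.GetComponentInParent<MonsterHealth>();
            if (monster != null)
            {
                monster.TakeHit();
            }

            Destroy(gameObject); // the bullet is consumed on impact
        }
    }
}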

Set of animations to represent different states:

When importing multiple FBX animation files from Mixamo (like idle, hit, walk, death), I ran into a problem where animations didn’t play correctly. The model would appear static or reset to a default pose.

After some troubleshooting (and watching tutorials), I realised the problem was due to avatar mismatch.

Here’s the fix:

  • First, import the base T-pose or idle FBX with the correct avatar settings.
  • Then, for every additional animation FBX (e.g. walk, hit, die), go into the Rig tab in the import settings and set the Avatar Definition to “Copy From Other Avatar”.
  • Assign the avatar from your base model.

This ensures all animations share a consistent skeletal rig. Without it, Unity won’t apply the animation data properly, and your character may stay static without any visible errors.

Summary of Logic:

  • A trigger box is placed inside the monster’s belly.
  • It’s parented to a bone, so it follows the animation.
  • When the bullet collides with it, the script triggers a hit reaction.
  • The Animator (assigned to the monster GameObject) switches states based on the trigger condition, such as transitioning into a “Hit” or “Die” animation.

This setup gives you a dynamic and accurate way to detect hits on specific body parts during animations.

Monster prefab and game concept

My idea is to create a seated VR game, where the player remains in a rotating chair. The player won’t move around, but can rotate freely to face threats coming from all directions. In this project, I explored user interaction and AI behavior through an MR game prototype built in Unity. The setup revolves around a central camera, with the user seated in the rotating chair and their head tracked under the camera centre. This configuration allows the user to rotate and move their body freely, which is crucial to how the game reads orientation and interaction.

The gameplay involves monsters (zombies) approaching the player from various positions in the environment. I plan to define the origin point (0,0,0) in Unity as the player’s position, and the monsters will spawn at random positions within a certain radius and walk toward that centre point.
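
A sketch of that planned spawn logic, with placeholder values for the radius and count:

using UnityEngine;
using UnityEngine.AI;

// Spawns monsters at random points on a circle around the player (at the origin)
// and sends each one walking toward the centre.
public class MonsterSpawner : MonoBehaviour
{
    public GameObject monsterPrefab;
    public int monsterCount = 7;
    public float spawnRadius = 6f;

    void Start()
    {
        for (int i = 0; i < monsterCount; i++)
        {
            // Pick a random direction on the horizontal plane.
            float angle = Random.Range(0f, Mathf.PI * 2f);
            Vector3 position = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * spawnRadius;

            GameObject monster = Instantiate(monsterPrefab, position, Quaternion.identity);

            // Walk toward the player's origin point (0, 0, 0).
            NavMeshAgent agent = monster.GetComponent<NavMeshAgent>();
            if (agent != null)
            {
                agent.SetDestination(Vector3.zero);
            }
        }
    }
}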


Monster Behaviour and Setup:

  • I want to design a single monster prefab that contains all the core functionality: walking, taking damage, reacting to hits, and dying.
  • Each monster will require three successful hits from the player to be fully killed.
  • The monster’s Animator will have multiple states:
    • Walking toward the player.
    • A reaction animation for the first hit.
    • A different reaction for the second hit.
    • A death animation (falling forward or backwards) on the third hit.

I also want the walking animation to be affected by the hit reactions. For example, after being hit, the walker could pause briefly or slow down before resuming.


Key Goals:

  • Spawn Logic: Monsters will be spawned from various angles and distances (e.g. within a radius) and will always walk toward the centre (player).
  • Modular Prefab: All behaviour should be contained in one reusable prefab, so I can later swap out models or visuals without rewriting logic.
  • Focus on Functionality First: My current priority is to get everything working in terms of logic and animations. Once that’s stable, I can improve visuals, variety, and polish.


To detect proximity events between the user and approaching creatures, I attached a spherical collider under the head object – as can be seen in the screenshot above, located centrally (representing the user’s origin point). For Unity’s physics system to register collision events correctly using OnTriggerEnter, at least one of the colliding objects needs to have a Rigidbody component – this was a key technical insight. I attached a Rigidbody to the user’s collider to ensure that the system could properly detect incoming threats.
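
A condensed sketch of that proximity check, assuming the approaching creatures carry a "Monster" tag and that health is handled directly on this component rather than in a separate game-state script:

using UnityEngine;

// Sits on the user's head object together with the sphere trigger collider.
[RequireComponent(typeof(Rigidbody))]
public class PlayerProximity : MonoBehaviour
{
    public int health = 3;

    void Start()
    {
        // A kinematic Rigidbody is enough for OnTriggerEnter to fire;
        // the player is not physically simulated.
        GetComponent<Rigidbody>().isKinematic = true;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Monster"))
        {
            health--;
            Debug.Log("A monster reached the player. Health: " + health);

            if (health <= 0)
            {
                Debug.Log("Health depleted: game over.");
            }
        }
    }
}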

The core mechanic involves enemy creatures that move toward the user from various directions. For this, I manually spawned seven creatures around the user, each set as a prefab, to simulate threats approaching from different angles. These enemies are animated and follow a basic AI path using Unity’s NavMesh system.

Initially, I struggled with configuring the NavMesh Agents. The creatures weren’t moving as expected, and I discovered the issue stemmed from an excessively large agent radius, which caused them to collide with each other immediately upon spawning. This blocked their movement toward the user until some of them were eliminated. Once I adjusted the agent radius in the component settings, the agents were able to navigate correctly, which was a significant breakthrough in troubleshooting.

Another major learning point was in managing the creature’s state transitions through the Animator component. Each enemy had three lives, and I created a custom C# script to handle hits via a TakeHit() function. After each hit, the enemy would briefly pause and then resume movement—except after the third hit, which would trigger a death animation.

However, I encountered a strange behaviour: even after the death animation was triggered, the creature’s body would continue moving toward the user. This was due to the NavMesh Agent still being active. To resolve this, I had to disable the agent and stop its velocity manually by setting it to Vector3.zero. Additionally, I toggled the agent’s isStopped boolean to true and disabled the movement script to fully freeze the creature in place, allowing it to collapse realistically at the point of death.
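
Putting that troubleshooting together, a condensed sketch of the hit/death handling could look like the following. The Animator trigger names ("Hit", "Die"), the pause duration, and the class name MonsterHealth are my placeholders:

using System.Collections;
using UnityEngine;
using UnityEngine.AI;

// Three lives per monster, a brief pause after each hit, and a full freeze of
// the NavMesh Agent on death so the body stops sliding toward the user.
public class MonsterHealth : MonoBehaviour
{
    public int health = 3;
    public float pauseAfterHit = 1.0f;

    private NavMeshAgent agent;
    private Animator animator;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
        animator = GetComponent<Animator>();
    }

    public void TakeHit()
    {
        health--;

        if (health > 0)
        {
            animator.SetTrigger("Hit");
            StartCoroutine(PauseThenResume());
        }
        else
        {
            Die();
        }
    }

    IEnumerator PauseThenResume()
    {
        agent.isStopped = true;              // briefly stop walking
        yield return new WaitForSeconds(pauseAfterHit);
        agent.isStopped = false;             // resume moving toward the player
    }

    void Die()
    {
        animator.SetTrigger("Die");

        // Stop and disable the agent so the corpse stays where it fell.
        agent.isStopped = true;
        agent.velocity = Vector3.zero;
        agent.enabled = false;

        Destroy(gameObject, 5f);             // remove the body after a short delay
    }
}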

Overall, this project deepened my understanding of Unity’s physics, AI navigation, and animation systems. It also highlighted the importance of detailed debugging, iterative testing, and thinking holistically about how different systems in Unity interact. What seemed like minor details—such as collider hierarchy, Rigidbody placement, or NavMesh agent size—turned out to be crucial to achieving believable and functional gameplay.

Categories
Project 1

Acting + LipSync: blocking

PHONEMES: the mouth movement is like an accordion. Each letter has a different sound, so the mouth will move differently depending on the letter, and is also influenced by the mood/emotional state of the character.

Each phrase in the story represents a pose in the story.

Animate what the character is thinking, not what they are saying – the listener’s focus is on the eyes/face, not the mouth.

The most important are the 3 poses that establish the shot.

Understanding what makes up a sound helps establish how the corresponding parts of the face are moving.

Talk while resting against the elbow, to help you better understand the movement of the mouth.

Tween Machine and animbot

Dreamwall picker for Maya by owenferny YT


4 components of language

For “m” and “b”, the lips have to close, but the mouth has to actually open to make the sound – pressing and releasing the lips.

P and B work the same way.

  1. How the jaw goes up and down, opening and closing the mouth.

2. Corners: in and out.


The above picture shows preparatory sketching, trying to understand the mouth, jaw and tongue movement, following an example from a YT tutorial. It is a very good exercise for understanding how different sounds are articulated, and how the movement should be built to approximate them – making it convincing to the viewer that the animation is actually synced to the provided audio (the audio-visual link).



Just like the jaw, corners should not be going in and out just for the sake of keeping going.
The corners’ movement is driven by the adjacent sound.

E is a great example: in a word like “never”, the E keeps the mouth wide open. An exaggerated E means the mouth opens quite wide, but it is also possible to pronounce the E short.

3. Tongue.

The most frequently underestimated and forgotten, yet significantly important. It needs to be taken into consideration.

LET IT GO: L and T are the best examples – when L is pronounced, the tongue presses firmly against the roof of the mouth; the same goes for T, but less powerfully.

4. The Polish

Check whether the character is breathing in between lines.

Do not go for default poses, create interesting shapes that express the emotions.

Be sure to have a little moment that stands out.



Process.


1. Setting up the project: referencing the rig and ensuring all the texture files are in the sourceimages folder.

2. Regenerating the node inside the Hypershade to fit the texture of the rig.

3. Default position: the character is sitting at a table in the restaurant, therefore a sitting pose.



Optimising workflow:

Installing animbot

Installing Studio Library 2.20.2 in order to be able to create custom poses and then save them so these can be used later, throughout an animation.





Setting up the pose library that comes with the rig.
It throws some errors and requires changing the namespace.

Solution:

Go to Windows > General Editors > Namespace Editor

Find your current rig namespace (e.g., character1)

Select it and click “Rename”

Rename it to mars_v01








Some of the lip coordination when trying to match the sounds and approximate the lipsync.

ssssssss

way -> w

she’s -> shhhhhhhh
shameles->


eee- from sheee

jaw slightly moves up

the – ffffffffffffffffffff


Approach:

Start with the lipsync to ensure it matches the soundtrack. The soundtrack is very helpful with keeping the timing.




The feedback from the TA:

If the head moves too dynamically, the viewer won’t be able to notice the facial expressions, as the high speed of the head movement creates something like a motion-blur effect. For a sequence with a lot of words, especially when the character speaks very quickly, the movement of the head should not be so dramatic.

Tips: Use animation layers instead of keeping all of the keys on the same layer. It will make the work much more efficient, and you will be able to toggle between the layers and blend them all together.


George’s feedback:

Get your GOLDEN POSES (the start and the end of the body) right, and work with the middle.

The non-verbal part of the acting – the body language – is equally or even more important than the lipsync. It must showcase the emotional state of the character.

WORKING WITH THE QUICK SELECTION (WAIST, CHEST, NECK, HEAD) ***ORDER OF IMPORTANCE***

1. Set the default tangents to Auto (legacy) for working in spline.
2. Create a quick-selection set for the body parts: COG + chest + neck + head (note that this will differ depending on the rig and the acting pose; this is true for my seated animation with the Mazu rig).
3. Get the golden poses.
4. Start working with the middle part and get your in-betweens. Answer the question: which poses am I favouring? Apply this with the use of moving holds. Middle-mouse-drag across the timeline, then hit S to set a key, or use the Tween Machine.
5. Get the in-between poses. Decide how long each pose is held before moving to the next one.


WORKING WITH CHEST

WORKING WITH THE NECK

WORKING WITH HEAD

The next step of the process in this method is to decide how the head moves in relation to the rest of the body.

In my reference, in the first part the torso moves and then the head follows; in the second part, by contrast, the head leads the movement.

To accentuate this further, the next step is to work with the chest and the neck. Only work with the parts of the body that appear in the shot – there’s no need to work on the waist, as only the chest, neck and head are in the shot.



Working with the chest:

translation z:

translation y:

translation x:


The Graph Editor screenshots show the work in progress; the one above gives better results than the one higher up. The rotation Z and Y curves match, which means that as the character moves their head in circles, the head goes back and then left, which makes sense. The one higher up, however, gives a worse, less organic and more jumpy effect.

Working with the neck controller to define the head movement. The order goes waist, chest, neck, head. The waist is the closest to the COG; the body moves from the bottom upwards.

Reflection:

I’ve approached this the other way around, doing the lip sync first and then moving on to the body movement. One could argue that this is not a bad approach, as it gives me a foundation of mouth movement; however, I’ll need to go back and reapply things accordingly.

My biggest problem in design thinking is my bias as a 2D visual artist – I often struggle to think in 3D. Going through George’s tutorial, I reminded myself that I often refer only to what’s in the viewport, which is a 2D representation; when moving the chest, the translation doesn’t just happen from side to side and back to front, but in all three dimensions at once, which gives much better results.

I kind of ignored what was happening at the waist level. Too bad. The waist isn’t visible in my acting pose, but it’s still crucial. It’s fair to say it doesn’t move as much as the upper body, but whatever happens to the waist affects the rest of the body. That’s the FK (Forward Kinematics) system at work—think chain reaction. It’s also very important to animate the waist intentionally so that you can later emphasise the contrast between the movements of the head and the waist. One of these parts usually leads the motion, and that difference needs to be accentuated in the animation.

In gaming and virtual worlds, animation is crucial for communicating between characters in a non-verbal way.



Blocking updated video for feedback, after applying changes and following the tutorial in class:

Categories
Project 2

Bridging Metaverse with Metaphysical



Bridging Physical and Virtual Realms with Arduino and Unity, respectively

For this part of the project, I intended to explore the interaction between physical and virtual environments. I aimed to establish a two-way communication channel where actions in the virtual world could produce real-world effects. To achieve this, I used an Arduino microcontroller to receive output signals from a Unity-based VR environment. This allowed me to control a physical LED light in response to a virtual event – shooting bullets from the gun – via the controller’s index finger trigger.


Setup Overview

The physical setup consisted of a single LED connected to an Arduino board. The Arduino was linked to my laptop via a USB-C cable. On the software side, I developed a Unity project using the OVRInput system to trigger virtual shooting events. These events would send a signal through a serial port to the Arduino, prompting the LED to turn on briefly.


Initial Challenges and Troubleshooting

The setup proved to be more challenging than expected, particularly in terms of serial communication and platform compatibility. Below is a breakdown of key issues I encountered and how I addressed them:

1. Arduino Upload Issues

At first, I was unable to upload sketches from the Arduino IDE to the board, despite:

  • The Arduino being detected in Device Manager
  • The correct drivers being installed
  • Successful code compilation

Even though the COM port was visible and correctly selected, the IDE failed to upload the code. After troubleshooting extensively and rechecking the USB connections, I found that a simple system reboot resolved the issue. This was unexpected, but it allowed uploads to proceed normally afterwards.


2. Unity and Arduino Serial Communication; Arduino Sketch and C# description

Unity does not natively support serial communication with external devices like Arduino. To bridge this gap, I relied on the .NET System.IO.Ports namespace, which provides serial communication capabilities.

I wrote a basic Arduino sketch that turns an LED on or off based on a received character ('1' for on, '0' for off). In Unity, I implemented a custom C# script that uses the SerialPort class to send these signals. This script was attached to an empty GameObject and referenced within the RayGun script to trigger LED activation when the player fires the gun.
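
A hypothetical version of that hookup is sketched below. The bridge class name (ArduinoBridge) and the exact firing logic are mine; only the OVRInput trigger check and the idea of calling SendSignal() from the RayGun script come from the setup described in this post:

using UnityEngine;

// Inside the gun script: when the index trigger fires a shot,
// also flash the LED via the serial bridge on the empty GameObject.
public class RayGun : MonoBehaviour
{
    public ArduinoBridge arduinoBridge;

    void Update()
    {
        // Right-hand index finger trigger on the Quest controller.
        if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger, OVRInput.Controller.RTouch))
        {
            Shoot();                               // existing projectile logic (not shown)
            arduinoBridge.SendSignal(true);        // LED on
            Invoke(nameof(TurnLedOff), 0.2f);      // LED off again shortly after
        }
    }

    void TurnLedOff()
    {
        arduinoBridge.SendSignal(false);
    }

    void Shoot() { /* bullet instantiation omitted for brevity */ }
}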

Based on this tutorial for setting up the communication: Unity & Arduino Communication


This is a simple Arduino sketch designed to control an LED. In the setup() function, serial communication is initialized at 9600 baud (bits per second), and pin 2 is configured as an output to control the LED. Although a global buffer and a char array (buf[]) are defined with a size of 50, they are not actively used in the final version of the code. I originally experimented with reading multiple characters at once, but I noticed this caused the LED to remain continuously on — which didn’t work well for my intended shooting feedback effect. As a result, I opted to read only one character at a time, which allowed for more responsive and accurate LED control.

In the loop() function, the sketch checks whether any data is available on the serial port. If data is detected, a single character is read and stored in the cmd variable. If this character is '0', the LED is turned off (digitalWrite(2, LOW)); if it’s '1', the LED is turned on (digitalWrite(2, HIGH)). This allows Unity (or any external controller) to send simple serial commands ('0' or '1') to toggle the LED in real time.

I also included a short delay of 200 milliseconds after each loop cycle. This was partly based on recommendations from online tutorials, but also confirmed through testing: it helps synchronize the communication and prevents the Arduino from reading too frequently or reacting too rapidly, which could cause inconsistent behavior. This delay ensures that the LED only responds once per input, making it more suitable for the quick, discrete signals used in a VR shooting mechanic.




In terms of the C# implementation, the script makes use of the System.IO.Ports namespace, which provides access to serial communication via the SerialPort class. This is essential for enabling Unity to communicate with external hardware such as an Arduino.

Within the Start() method, a serial connection is established using COM6, which corresponds to the port associated with my Arduino controller connected to the PC. The communication is initialized at 9600 baud, matching the settings specified in the Arduino sketch (Serial.begin(9600)).

The SendSignal(bool on) method is designed to send simple control signals — either '1' or '0' — to the Arduino. If the on parameter is true, it sends '1', which lights up the LED. If it’s false, it sends '0', turning the LED off. This binary approach allows Unity to provide immediate physical feedback in response to in-game events, such as shooting.

Lastly, the OnApplicationQuit() method ensures that the LED is turned off when the Unity application is closed. It sends a final '0' to the Arduino before closing the serial port. This prevents the LED from remaining on unintentionally after the game ends.

In summary, this script acts as a bridge between Unity and the Arduino, using serial communication to synchronize digital actions (e.g., pressing a button on the controller) with physical outputs (e.g., turning on an LED). This implementation enables a simple but effective feedback loop between virtual and physical environments.
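
A reconstruction of roughly what this bridge script looks like, based on the description above. The class name ArduinoBridge is my placeholder; the port, baud rate and method names follow the text:

using System.IO.Ports;
using UnityEngine;

// Opens the serial connection to the Arduino and exposes a simple on/off signal.
public class ArduinoBridge : MonoBehaviour
{
    private SerialPort serialPort;

    void Start()
    {
        // COM6 is the port the Arduino shows up on for my machine; adjust as needed.
        serialPort = new SerialPort("COM6", 9600);
        serialPort.Open();
    }

    public void SendSignal(bool on)
    {
        if (serialPort != null && serialPort.IsOpen)
        {
            // '1' lights the LED, '0' turns it off, matching the Arduino sketch.
            serialPort.Write(on ? "1" : "0");
        }
    }

    void OnApplicationQuit()
    {
        // Make sure the LED does not stay on after the game ends.
        SendSignal(false);
        serialPort.Close();
    }
}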


Key Technical Bottlenecks

Namespace and Class Recognition Errors

A major obstacle was Unity’s failure to recognise the System.IO.Ports namespace and the SerialPort class. The error message stated that the class existed in two assemblies: System.dll and System.IO.Ports.dll, causing a conflict.

To resolve this:

  • I changed the API Compatibility Level in Unity (via Project Settings > Player > Other Settings) to .NET 4.x.
  • I manually downloaded the System.IO.Ports package from Microsoft’s official NuGet repository.
  • I experimented with placing different versions of the DLLs in Unity’s Assets/Plugins folder, but most led to version mismatches or runtime errors.

Ultimately, changing Unity’s compatibility settings resolved the issue, and no additional DLLs were required. As per the image below, I had manually located the files and copied them into the Assets folder of my Unity project, as Unity failed to resolve the class in the namespace – something I was advised to try in an online group discussion. This, however, ended up throwing an issue later on, with Unity reporting that the class was defined in two separate file versions: one came via a plugin downloaded through VSC (Visual Studio Code), the other manually via a zip file from the website. In the end, both had to be kept on the local machine, but not under the same project. Very confusing, haha.


Windows-only Support Confusion

Another issue arose from Unity reporting that System.IO.Ports was “only supported on Windows” – despite my working on a Windows machine. This turned out to be a quirk in Unity’s error handling and was resolved by ensuring:

  • The Unity platform target was correctly set to Windows Standalone.

Final Implementation Outcome

After several hours of testing and debugging:

  • Unity successfully sent serial data to the Arduino each time the player pressed the fire button.
  • The Arduino correctly interpreted the '1' signal to light the LED, and '0' to turn it off after a short delay.
  • The interaction was smooth, and the LED reliably responded to gameplay events.

This implementation serves as a foundational example of hardware-software integration, particularly in VR environments.


Categories
Project 2

Designing a Fantasy-Inspired Shooting Game with Meaningful Feedback

Concept: Flies, Monsters, and the Venus Flytrap

Rather than having the player simply destroy enemies, I imagined the monsters as minions of a Venus flytrap Queen, who had trapped flies inside their bellies. When the player shoots a monster, the goal is not to harm it, but rather to liberate the flies trapped within.

So, with each successful hit, a swarm of flies bursts from the monster’s belly—visually suggesting that the player is helping, not hurting. This felt like a more thoughtful narrative balance, allowing for action gameplay without glorifying violence.

These monsters serve a queen—the “mother flytrap”—a physical computing plant in the real world. As the player defeats more of these monsters in-game, the number of freed flies is mapped to data used to drive a DC motor that gradually opens the flytrap plant in reality. This concept of connecting digital action to physical consequence is central to my interest in virtual-physical interaction.


Game Mechanics: Health and Feedback

Each monster has three lives, but early testing made it clear that players had no clear visual cue of the monster’s remaining health. I didn’t want to use floating numbers or health bars above their heads—it didn’t suit the aesthetic or the narrative. Instead, I introduced a colour transition system using materials and shaders.

Colour Feedback System: Traffic Lights

  • Monsters start with a vivid green body, representing vitality and their flytrap origin.
  • With each hit, the colour fades:
    • First hit: duller green.
    • Second hit: reddish-green.
    • Third hit (final): fully red.

This gives players immediate visual feedback on the monster’s state without UI clutter, reinforcing the game’s narrative metaphor.


Implementing the Visual Effects

Material Cloning

To change the colour of each monster individually (rather than globally), I cloned their material at runtime:

bodyMaterial = bodyRenderer.material;

This creates a unique material instance per zombie. If I had used the shared material, all zombies would have changed colour together – something I discovered during testing and debugging.

Colour Transition Logic

I wrote a function UpdateBodyColor() that calculates the colour based on the monster’s remaining health. It uses:

Mathf.Clamp01(health / 3f);
Color.Lerp(Color.red, Color.green, healthPercent);

This smoothly transitions from red (low health) to green (full health), depending on how many lives the monster has left.
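
Put together, the colour feedback can be sketched as a small component like the one below; the field names are illustrative, and in the actual project this logic lives inside the monster’s hit-handling script:

using UnityEngine;

// Clones the body material at runtime and lerps it from green to red as the monster loses health.
public class MonsterColourFeedback : MonoBehaviour
{
    public Renderer bodyRenderer;   // the monster's body mesh renderer
    public int health = 3;

    private Material bodyMaterial;

    void Start()
    {
        // Accessing .material creates a unique instance, so each monster
        // changes colour independently of the shared asset.
        bodyMaterial = bodyRenderer.material;
        UpdateBodyColor();
    }

    public void UpdateBodyColor()
    {
        // 1.0 at full health (green), 0.0 at zero health (red).
        float healthPercent = Mathf.Clamp01(health / 3f);
        bodyMaterial.color = Color.Lerp(Color.red, Color.green, healthPercent);
    }
}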


Particle Effects: Flies – Stop motion animation aesthetics

To visualise the flies escaping, I used Unity’s Particle System.

  • I created a sprite sheet with 16 fly frames in a 4×4 grid.
  • I used the Texture Sheet Animation module to play these frames in sequence, creating a stop-motion animation effect.
  • The particle system is parented to the monster’s belly and only plays when the monster is hit.

I made sure to disable looping and prevent it from playing on start. That way, it only triggers when called from code (in the TakeHit() function).
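
For documentation purposes, the same particle settings and the trigger call can be expressed in code roughly as follows (in the project they are mostly set in the Inspector):

using UnityEngine;

// Configures the fly burst as a one-shot 4x4 flipbook and exposes a method
// that the monster's TakeHit() can call when it is shot.
public class FlyBurst : MonoBehaviour
{
    public ParticleSystem flies;    // particle system parented to the monster's belly

    void Start()
    {
        var main = flies.main;
        main.loop = false;           // play once per hit
        main.playOnAwake = false;    // only trigger from code

        // 4x4 sprite sheet with 16 fly frames, played as a flipbook.
        var sheet = flies.textureSheetAnimation;
        sheet.enabled = true;
        sheet.numTilesX = 4;
        sheet.numTilesY = 4;
    }

    // Called from TakeHit() when the monster is hit.
    public void PlayBurst()
    {
        flies.Play();
    }
}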

This was an experimental but rewarding technique, simulating complex animation using a simple sprite sheet. It aligns well with the fantasy aesthetic, even though the flies are 2D in a 3D world.


Unity Implementation Notes

  • Each monster is a prefab. This made it easier to manage changes and maintain consistency across the game.
  • The TakeHit() method:
    • Reduces health.
    • Plays an animation.
    • Plays a sound (roar, hit, or death).
    • Triggers the flies’ particle effect.
    • Calls UpdateBodyColor() to change the visual appearance.
  • Once health reaches 0:
    • The monster “dies”.
    • The NavMesh agent stops.
    • A death animation and sound are played.
    • The object is destroyed after a short delay.

Bridging Physical and Digital Worlds

The core concept I’m exploring here is the flow of data between virtual and physical space. I love the idea of the player doing something in the game that directly affects the real world. By linking monster deaths to the opening of the physical flytrap plant via a DC motor, the project becomes more than just a game—it becomes an interactive experience across dimensions.

I see this as a kind of “virtual twin” concept, but not just a visual replica. It’s a data-driven relationship, where progress in the virtual world controls mechanical outcomes in the physical world.

This idea is loosely inspired by Slater’s concept of presence—how sensory data in one space (physical) can be reconstructed or mirrored in another (virtual), and vice versa. It’s this bidirectional flow of meaning and data that fascinates me.


Categories
Project 1

Hitting the SPLINE!

Spline the COG first, because the COG drives everything else!

Use Translate Y to decide the weight:

  • Remove all the keys which are no longer needed.
  • Adjust the timing.
  • Simplify the curve: the goal of cleaning is to get the fewest keys needed for the desired effect.


    Settings for spline:



The common problem – that the animation in spline sucks after moving from blocking – comes from skipping the blocking-plus stage. If the ratio of keyed frames in blocking to the entire timeline is small, say only 10% (20 keyed frames on a 200-frame timeline), it means the majority of the work will be done by the computer. Here’s the point: you’re the master of your own tool, not the computer – Maya is an extension of your abilities, but you need to dictate the animation – which should amplify the meaning behind cleaning the spline curves. (Notes from Sir Wade Neistadt.)



timeline: 150
key frames: 23

That’s roughly 15% of the frames keyed; the rest will be interpolated by Maya, which makes spline likely to be trouble:

  • the animation will feel very floaty (everything moves all at once, at one speed, in one direction – like the hallucinated movement in AI-generated footage)
  • the gimbal lock issue will occur
  • Moving holds: timing per body part – favouring a pose so it stays in one position longer and then arrives more quickly later

ADD MORE KEYS IN THE BLOCKING STAGE TO LOCK DOWN THE TIMING THAT YOU WANT FROM YOUR CHARACTER ACROSS THE ENTIRE SHOT



A GIMBAL is essentially a set of three ring axes allowing an object to rotate. Gimbal lock refers to the situation when two of the ring axes overlap, causing locking.

The rotation order by default is xyz, meaning that x is a child of y, y is a child of z, so z is the parent of them all.

Swithcing between: object, world and gimbla view : hold down W and + LEFT MOUSE BUTTON on the viewport