
Animation States based on user actions

Basic functionality


Implementing the basic functionality involved placing a box collider in the scene. The box is set as a trigger and given a tag, which allows it to respond to collisions with a bullet object. This is the foundation for shooting bullets from the gun and hitting the monster’s abdomen. Sorry, I hope this ain’t too cruel.

In my C# script, I use OnTriggerEnter to detect when a bullet collides with this box. By using tags, I can define different behaviours depending on what type of object the bullet hits. For example, I assign the tag "Monster" to this collider, and the script handles the case where the bullet hits an object with that tag; consequently, only monsters can be hit, and only in the abdomen, while everywhere else they are effectively bulletproof.
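
As a rough sketch, the bullet-side script might look like this (MonsterHealth and TakeHit() are placeholder names for the monster’s hit-handling script, sketched near the end of this post, not the project’s actual code):

```csharp
using UnityEngine;

// Attached to the bullet prefab. A minimal sketch: assumes the belly hitbox
// is a trigger collider tagged "Monster", and that the monster root carries
// a hit-handling component (MonsterHealth here is a placeholder name).
public class Bullet : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        // Ignore everything that isn't the tagged belly hitbox;
        // the rest of the monster stays bulletproof.
        if (!other.CompareTag("Monster")) return;

        // The hitbox is a child of a bone, so search up the hierarchy
        // for the script that actually tracks the monster's hits.
        MonsterHealth health = other.GetComponentInParent<MonsterHealth>();
        if (health != null) health.TakeHit();

        Destroy(gameObject); // one bullet, one hit
    }
}
```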

Now, to make interactions more precise and tied to the character’s body, I’ve attached this collider box as a child of one of the bones of the 3D model. The model (and its animations) are imported from Mixamo, and by placing the box under a bone such as the spine or hips, it follows the character’s movements during animation. The spine worked better visually than the hips, since there is more movement at the spine than at the hips relative to the centre of gravity (COG).

This setup ensures that the hitbox stays aligned with the monster’s belly even as different animations play — for example, idle, walk, or death. Since it’s parented to a bone, it moves in local space relative to the skeleton, not just world space.
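
For reference, the same parenting can also be done from code. This is only a sketch of the scripted equivalent, assuming the Mixamo model is imported with a Humanoid rig; in the project the box was simply dragged under the bone in the Editor:

```csharp
using UnityEngine;

// Sketch of the scripted equivalent of dragging the hitbox under a bone:
// assumes the Mixamo model is imported with a Humanoid rig so the spine
// bone can be looked up through the Animator.
public class HitboxAttacher : MonoBehaviour
{
    [SerializeField] Animator animator;  // the monster's Animator
    [SerializeField] Transform hitbox;   // the belly trigger box

    void Start()
    {
        Transform spine = animator.GetBoneTransform(HumanBodyBones.Spine);

        // With worldPositionStays set to false the box keeps its local
        // offset, so it rides along with the bone during animation.
        hitbox.SetParent(spine, worldPositionStays: false);
    }
}
```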

Set of animations to represent different states:

When importing multiple FBX animation files from Mixamo (like idle, hit, walk, death), I ran into a problem where animations didn’t play correctly. The model would appear static or reset to a default pose.

After some troubleshooting (and watching tutorials), I realised the problem was due to avatar mismatch.

Here’s the fix:

  • First, import the base T-pose or idle FBX with the correct avatar settings.
  • Then, for every additional animation FBX (e.g. walk, hit, die), go into the Rig tab in the import settings and set the Avatar Definition to “Copy From Other Avatar”.
  • Assign the avatar from your base model.

This ensures all animations share a consistent skeletal rig. Without it, Unity won’t apply the animation data properly, and your character may stay static without any visible errors.

Summary of Logic:

  • A trigger box is placed inside the monster’s belly.
  • It’s parented to a bone, so it follows the animation.
  • When the bullet collides with it, the script triggers a hit reaction.
  • The Animator (assigned to the monster GameObject) switches states based on the trigger condition, such as transitioning into a “Hit” or “Die” animation.

This setup gives you a dynamic and accurate way to detect hits on specific body parts during animations.

Monster prefab and game concept

My idea is to create a seated VR game, where the player remains in a rotating chair. The player won’t move around, but can rotate freely to face threats coming from all directions. In this project, I explored user interaction and AI behaviour through an MR game prototype built in Unity. The setup revolves around a central camera, with the user seated on a rotating chair and tracked via a head object positioned under the camera centre. This configuration allows the user to rotate their body freely in place, which is crucial to how the game reads orientation and interaction.

The gameplay involves monsters (zombies) approaching the player from various positions in the environment. I plan to define the origin point (0,0,0) in Unity as the player’s position, and the monsters will spawn at random positions within a certain radius and walk toward that centre point.
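
A minimal sketch of how that spawn logic might look (the prefab reference, counts, and radii are illustrative, and the spawn ring is assumed to lie on the baked NavMesh):

```csharp
using UnityEngine;

// Spawns monsters at random angles on a ring around the origin (the seated
// player) and leaves movement to each monster's own NavMesh agent.
// The prefab reference, count, and radii are placeholder values.
public class MonsterSpawner : MonoBehaviour
{
    [SerializeField] GameObject monsterPrefab;
    [SerializeField] int count = 7;
    [SerializeField] float minRadius = 8f;
    [SerializeField] float maxRadius = 15f;

    void Start()
    {
        for (int i = 0; i < count; i++)
        {
            // Random direction on the horizontal plane, random distance in range.
            float angle = Random.Range(0f, Mathf.PI * 2f);
            float radius = Random.Range(minRadius, maxRadius);
            Vector3 pos = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius;

            // Face the centre (the player at the origin) on spawn.
            Quaternion facing = Quaternion.LookRotation(-pos.normalized);
            Instantiate(monsterPrefab, pos, facing);
        }
    }
}
```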


Monster Behaviour and Setup:

  • I want to design a single monster prefab that contains all the core functionality: walking, taking damage, reacting to hits, and dying.
  • Each monster will require three successful hits from the player to be fully killed.
  • The monster’s Animator will have multiple states:
    • Walking toward the player.
    • A reaction animation for the first hit.
    • A different reaction for the second hit.
    • A death animation (falling forward or backwards) on the third hit.

I also want the walking animation to be affected by the hit reactions. For example, after being hit, the monster could pause briefly or slow down before resuming its walk.


Key Goals:

  • Spawn Logic: Monsters will be spawned from various angles and distances (e.g. within a radius) and will always walk toward the centre (player).
  • Modular Prefab: All behaviour should be contained in one reusable prefab, so I can later swap out models or visuals without rewriting logic.
  • Focus on Functionality First: My current priority is to get everything working in terms of logic and animations. Once that’s stable, I can improve visuals, variety, and polish.


To detect proximity events between the user and approaching creatures, I attached a spherical collider under the head object, located centrally (representing the user’s origin point), as can be seen in the screenshot above. A key technical insight here: for Unity’s physics system to register collision events correctly via OnTriggerEnter, at least one of the colliding objects needs to have a Rigidbody component. I attached this Rigidbody to the user’s collider to ensure that the system could properly detect incoming threats.
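
In script form, the player-side detection is roughly this (the kinematic Rigidbody and the trigger sphere sit on the same head object; the reaction inside the handler is a placeholder):

```csharp
using UnityEngine;

// Lives on the head object together with a sphere collider (isTrigger = true)
// and a Rigidbody with isKinematic = true; without a Rigidbody on at least
// one side of the pair, OnTriggerEnter never fires. The reaction below is
// a placeholder.
public class PlayerProximity : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Monster"))
        {
            Debug.Log($"Monster reached the player: {other.name}");
        }
    }
}
```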

The core mechanic involves enemy creatures that move toward the user from various directions. For this, I manually spawned seven creatures around the user, each an instance of the monster prefab, to simulate threats approaching from different angles. These enemies are animated and follow a basic AI path using Unity’s NavMesh system.
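
The chase behaviour itself can be as simple as repeatedly setting the player as the agent’s destination (a sketch; the player reference and class name are illustrative):

```csharp
using UnityEngine;
using UnityEngine.AI;

// On the monster prefab: steers the NavMeshAgent toward the player every
// frame. Requires a baked NavMesh; the player reference is illustrative.
[RequireComponent(typeof(NavMeshAgent))]
public class MonsterChase : MonoBehaviour
{
    [SerializeField] Transform player; // the user's origin at (0,0,0)

    NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
        // Keep the agent radius small enough in the Inspector that freshly
        // spawned monsters don't block each other (see the radius issue below).
    }

    void Update()
    {
        if (agent.enabled && agent.isOnNavMesh)
            agent.SetDestination(player.position);
    }
}
```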

Initially, I struggled with configuring the NavMesh Agents. The creatures weren’t moving as expected, and I discovered the issue stemmed from an excessively large agent radius, which caused them to collide with each other immediately upon spawning. This blocked their movement toward the user until some of them were eliminated. Once I adjusted the agent radius in the component settings, the agents were able to navigate correctly, which was a significant breakthrough in troubleshooting.

Another major learning point was in managing the creature’s state transitions through the Animator component. Each enemy had three lives, and I created a custom C# script to handle hits via a TakeHit() function. After each hit, the enemy would briefly pause and then resume movement—except after the third hit, which would trigger a death animation.

However, I encountered a strange behaviour: even after the death animation was triggered, the creature’s body would continue moving toward the user. This was because the NavMesh Agent was still active. To resolve it, I had to disable the agent and stop its movement manually by setting its velocity to Vector3.zero. Additionally, I set the agent’s isStopped boolean to true and disabled the movement script to fully freeze the creature in place, allowing it to collapse realistically at the point of death.
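
Putting those fixes together, the hit-handling logic ends up roughly like this (class, trigger, and parameter names are placeholders carried over from the sketches above, not the project’s actual script):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.AI;

// Tracks three hits, pauses the agent briefly after the first two, and
// fully freezes the creature on the third. Trigger names and timings are
// placeholder values.
[RequireComponent(typeof(NavMeshAgent))]
public class MonsterHealth : MonoBehaviour
{
    [SerializeField] Animator animator;
    [SerializeField] float hitPause = 0.5f;

    NavMeshAgent agent;
    int hitsTaken;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    public void TakeHit()
    {
        hitsTaken++;

        if (hitsTaken >= 3)
        {
            Die();
            return;
        }

        // The Animator can branch on the hit count to play a different
        // reaction for the first and second hit.
        animator.SetInteger("Hits", hitsTaken);
        animator.SetTrigger("Hit");

        StartCoroutine(PauseBriefly());
    }

    IEnumerator PauseBriefly()
    {
        agent.isStopped = true;
        yield return new WaitForSeconds(hitPause);
        if (agent.enabled) agent.isStopped = false; // resume unless dead
    }

    void Die()
    {
        animator.SetTrigger("Die");

        // Without these steps the corpse kept sliding toward the player:
        agent.isStopped = true;        // stop path following
        agent.velocity = Vector3.zero; // cancel any residual velocity
        agent.enabled = false;         // fully disable the agent

        // Also disable the movement script so nothing re-drives the agent.
        MonsterChase chase = GetComponent<MonsterChase>();
        if (chase != null) chase.enabled = false;
    }
}
```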

Overall, this project deepened my understanding of Unity’s physics, AI navigation, and animation systems. It also highlighted the importance of detailed debugging, iterative testing, and thinking holistically about how different systems in Unity interact. What seemed like minor details—such as collider hierarchy, Rigidbody placement, or NavMesh agent size—turned out to be crucial to achieving believable and functional gameplay.
