Categories: Project 2

Mastering Transitions in Unity Animator: A Game Logic Essential


When creating dynamic animations in Unity, transitions play a vital role in shifting between different animation states. These transitions are not just aesthetic—they’re often tied directly to your game logic to deliver responsive and immersive feedback to the player.

What Are Transitions?

In Unity’s Animator system, transitions are used to move from one animation state to another. This could mean transitioning from an “idle” state to a “running” state when the player moves, or from a “standing” state to a “hit reaction” when the character takes damage.

Triggering Transitions

Transitions are typically triggered by parameters—these can be:

  • Booleans: Great for simple on/off states (e.g., isJumping = true). However, they should not be used for managing multiple simultaneous conditions, as this can cause errors or inconsistent behavior.
  • Integers or Floats: Useful when you need more nuanced control, such as switching between multiple animation states based on a speed or health value.
  • Triggers: Ideal for one-time events like a shot being fired or a character being hit.

For example, imagine a scenario in a shooter game:
When an object gets hit by a bullet, you can trigger a transition to a “damaged” animation state. This provides instant visual feedback to the player that the hit was registered—crucial for both gameplay and user experience.
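
To make this concrete, here is a minimal C# sketch of how such parameters might be set from game logic. The parameter names ("Speed", "isJumping", "Hit") are placeholders; they must match whatever you define in your Animator Controller.

```csharp
using UnityEngine;

// Minimal sketch: driving Animator parameters from game logic.
// Parameter names ("Speed", "isJumping", "Hit") are placeholders and
// must match the parameters defined in your Animator Controller.
public class CharacterAnimationDriver : MonoBehaviour
{
    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Float parameter: blends between idle/walk/run based on speed.
        animator.SetFloat("Speed", Mathf.Abs(Input.GetAxis("Vertical")));

        // Boolean parameter: a simple on/off state.
        animator.SetBool("isJumping", Input.GetButton("Jump"));
    }

    // Trigger parameter: a one-time event, e.g. called when hit by a bullet.
    public void OnHit()
    {
        animator.SetTrigger("Hit");
    }
}
```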

Best Practices

  • Use Booleans sparingly, only for simple, binary state changes.
  • Use Triggers or numerical parameters for more complex or multi-condition transitions.
  • Always test transitions thoroughly to avoid animation glitches or logic conflicts.

Final Thoughts

Mastering transitions in Unity isn’t just about getting animations to play—it’s about making your game feel alive and responsive. By tying animations into game logic through intelligent use of transitions and parameters, you enhance both the realism and playability of your game.



Project Progress

For my project, I’m using a 3D model of a Creep Monster, which I downloaded from the FAB platform in FBX format. (Creep Monster | Fab) I then uploaded it to Mixamo, where I generated a skeleton for the model and applied different animations—such as standing idle and dying.

After exporting those animations, I imported them into Unity. One of the main steps is adding components, especially the Animator component. This is essential for handling transitions between animation states. These transitions are not only important for visual feedback but are also critical for managing game logic.

I also attached a custom C# script to the monster object in the scene. This script controls what actions should happen to the monster based on in-game interactions.

A key requirement for this setup is having an Avatar. The downloaded model didn’t come with one, but Unity allows you to generate it. To do this:

  1. Select the model asset in the Project window; its import settings appear in the Inspector panel.
  2. Go to the Rig tab under the Model settings.
  3. Change the Animation Type to “Humanoid”.
  4. Under the Avatar section, choose “Create From This Model”.
  5. Click Apply.

This process fixed an issue I faced during gameplay where the model wasn’t rendering correctly. The problem stemmed from the missing Avatar, which serves as the rig representation Unity needs for the Animator to work properly.
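
Incidentally, the same import settings can be applied from an editor script instead of by hand. A minimal sketch, assuming a hypothetical asset path (the ModelImporter API is Unity's; the path and menu name are mine):

```csharp
using UnityEditor;

// Editor-only sketch (place under an Editor folder): mirrors the
// Inspector steps above by switching the rig to Humanoid and creating
// an Avatar from the model. The asset path is a hypothetical example.
public static class CreepMonsterRigSetup
{
    [MenuItem("Tools/Set Creep Monster Rig to Humanoid")]
    public static void SetHumanoidRig()
    {
        const string path = "Assets/Models/CreepMonster.fbx";
        var importer = (ModelImporter)AssetImporter.GetAtPath(path);
        importer.animationType = ModelImporterAnimationType.Human;
        importer.avatarSetup = ModelImporterAvatarSetup.CreateFromThisModel;
        importer.SaveAndReimport();
    }
}
```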

For interaction and testing, I modified a custom C# script attached to the bullet object. This script checks for OnTriggerEnter() events. When the bullet’s collider detects a collision with an object tagged as “Monster”, it triggers another script. That secondary script acts as a bridge, connecting the collision event to the Animator on the Creep Monster.

As a result, when the player shoots the monster, the Animator transitions from its default state to a “dying” animation. This workflow is how I’m handling enemy reactions and visual feedback for hits and deaths in the game.
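
A simplified sketch of how those two scripts might fit together (class, method, and trigger names are my own placeholders, not the exact project code):

```csharp
using UnityEngine;

// Attached to the bullet. OnTriggerEnter requires a trigger Collider,
// plus a Rigidbody on at least one of the two colliding objects.
public class Bullet : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Monster"))
        {
            // Hand the hit over to the bridge script on the monster.
            var reaction = other.GetComponent<MonsterReaction>();
            if (reaction != null)
                reaction.Die();

            Destroy(gameObject); // remove the bullet after impact
        }
    }
}

// Attached to the Creep Monster: bridges the collision event to the
// Animator. "Die" must match a Trigger parameter in the Animator
// Controller, with a transition from the default state to the dying state.
public class MonsterReaction : MonoBehaviour
{
    private Animator animator;

    void Awake()
    {
        animator = GetComponent<Animator>();
    }

    public void Die()
    {
        animator.SetTrigger("Die");
    }
}
```

(In a real project, each class would live in its own file.)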

It’s been a great way to troubleshoot and understand how colliders, custom C# scripts, and Animator states work together in Unity.

Next, I’ll explore animation layers for better control, especially for complex character behaviours.

Categories: Project 1

Body Mechanics: Reshooting Reference


Feedback 02/05/25

  • Lifting up the box: to make the process easier, let the character hold onto the box from below
  • Constrain the hands to the box -> then animate the box -> then copy the animation over to the hand controllers (by analogy to the in-class example with master controllers)
  • Adjust the timing at the beginning; currently, the implementation has a quick jump
  • The character approaches the desk with the box, then stops before getting closer to the box to lift it up
  • Be sure to animate the legs according to where the weight is distributed, and bear in mind which leg is pivoting





Reshooting the reference based on the feedback given





Constrain


  1. Create Locators: create locators and constrain them to the cube object with offset maintained.
  2. Constrain Locators to Controllers: apply constraints from the locators to the corresponding controllers.
  3. Assign Box Locators to Controllers: the right, centre, and left locators are constrained to the right arm, chest, and left arm, respectively.

Initially, I constrained the central locator to the chest of the character, which was incorrect; the constraint should run the other way, with the chest driven by the central locator. This is because the character is holding the box with the arms, but part of the weight is supported by the chest, so the chest needs to follow the box.


Categories: Project 2

Animating Interaction in Immersive Spaces: Exploring Visual Feedback in Gamified VR Experiences

Project Objective

This project is experimental in nature and investigates the animation of visual feedback within gamified or interactive VR environments. The work is grounded in the field of Human-Computer Interaction (HCI), especially within 3D immersive spaces where traditional screen-based feedback is replaced by spatial, multisensory experiences.

From Screen to Immersion: A Shift in Feedback Paradigms

In traditional gaming, feedback is predominantly audiovisual, limited to what can be shown on a 2D screen. In VR, the immersive spatial environment introduces new challenges and opportunities. Visual feedback becomes a primary tool for user orientation and understanding, especially when combined with haptic and audio inputs.

As we move away from screen-based interactions into head-mounted display (HMD) experiences, visual animation plays a central role in reinforcing the sensation of presence and in guiding user interaction. Haptic feedback can complement this, but visual cues remain the most immediate and informative component of interaction in VR.

The Role of Animation in Action-Based Feedback

Consider a typical game mechanic like shooting a gun in VR:

  • The user performs an action (e.g. firing a laser).
  • A visual response follows (e.g. a laser beam shoots forward and hits a target).
  • This entire process is animated, and the quality and believability of that animation impact the user’s understanding and experience.

Here, animation serves not just aesthetic purposes, but acts as a way to translate data into visual understanding: the laser beam visualises the trajectory and distance of the shot, which helps the user read and get a sense of a space that, unlike a screen, offers an effectively unlimited field of view.



Proxemics: Spatial Zones in VR

The following distance zones, drawn from proxemics, shape how interactions at different ranges feel in VR:

1. Personal Space

  • Distance: ~0.5 to 1.2 meters (1.5 to 4 feet)
  • Description: This is the space we reserve for close friends, family, or trusted interactions.
  • Use in VR: Personal space violations in VR can feel intense or invasive, making it powerful for emotional or narrative impact.

2. Intimate Space

  • Distance: 0 to 0.5 meters (0 to 1.5 feet)
  • Description: Reserved for very close interactions—hugging, whispering, or personal care.
  • Use in VR: Very rare outside of specific narrative or therapeutic experiences. It can evoke strong emotional responses, including discomfort.

3. Social Space

  • Distance: ~1.2 to 3.5 meters (4 to 12 feet)
  • Description: The typical space for casual or formal social interaction—conversations at a party, business meetings, or classroom settings.
  • Use in VR: Useful for multi-user or NPC (non-player character) interaction zones where you want users to feel present but not crowded.

4. Public Space

  • Distance: 3.5 meters and beyond (12+ feet)
  • Description: Space used for public speaking, performances, or observing others without direct engagement.
  • Use in VR: Great for audience design, environmental storytelling, or large-scale virtual spaces like plazas, arenas, or explorable worlds.


Visual Fidelity vs Animation Fidelity

Two critical concepts are being explored:

  • Visual Fidelity: How objects are represented in terms of texture, rendering quality, lighting, and detail. This relates to how convincing or realistic the environment feels.
  • Animation Fidelity: How smoothly and convincingly motion and interactions are animated. It includes timing, easing, weight, and physicality of motion.

Both forms of fidelity are essential for user immersion. High animation fidelity, in particular, supports believability—users need to feel that the action they performed caused a logical and proportional reaction in the environment.

Procedural Animation and Data-Driven Feedback

One key technique in this research is procedural animation—animations generated in real time based on code, physics, and user input, rather than being pre-authored (as in keyframe animation).

For example:

  • A bullet fired from a gun might follow a trajectory calculated based on angle, speed, and environmental factors.
  • The impact animation (e.g. an explosion) could scale in intensity depending on these values—larger impacts for faster bullets or steeper angles.
  • This helps communicate different degrees of impact through graduated visual responses.
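
A hedged sketch of what this data-driven scaling could look like in a Unity script (the 50 m/s reference speed and the scale range are illustrative assumptions, not tuned values):

```csharp
using UnityEngine;

// Sketch of procedural, data-driven impact feedback: the spawned effect
// scales with bullet speed and impact angle.
public class ProceduralImpact : MonoBehaviour
{
    public ParticleSystem explosionPrefab;

    public void SpawnImpact(Vector3 point, Vector3 surfaceNormal, Vector3 bulletVelocity)
    {
        // Faster bullets and more head-on angles produce larger impacts.
        float speedFactor = Mathf.Clamp01(bulletVelocity.magnitude / 50f); // 50 m/s = assumed max
        float angleFactor = Mathf.Abs(Vector3.Dot(bulletVelocity.normalized, surfaceNormal));
        float intensity = speedFactor * angleFactor; // 0 = slow/glancing, 1 = fast/head-on

        var fx = Instantiate(explosionPrefab, point, Quaternion.LookRotation(surfaceNormal));
        fx.transform.localScale = Vector3.one * Mathf.Lerp(0.2f, 2f, intensity);
    }
}
```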

Benefits of Procedural Feedback

  • Consistency with physical principles (e.g. gravity, momentum).
  • Dynamic responsiveness to different user actions.
  • Enhanced variation and realism, reducing repetitive feedback.

Designing Feedback for Understanding and Engagement

For visual feedback to be meaningful, variation matters. If every action results in the same animation (e.g., the same explosion no matter how intense the bullet), the user loses the ability to interpret the nuances of their actions.

Therefore, we must design condition-based feedback:

  • A weak hit might produce a small spark.
  • A high-speed, high-impact hit might create a large explosion with particle effects and shockwaves.

This approach informs users of the intensity and outcome of their actions, using animation to bridge interaction and consequence.
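
Continuing the sketch above, condition-based feedback can be as simple as mapping the computed intensity onto qualitatively different effects (the prefab fields and thresholds are, again, assumptions):

```csharp
using UnityEngine;

// Sketch of condition-based feedback: graded responses instead of one
// identical animation per hit. Thresholds are illustrative.
public class ImpactFeedbackSelector : MonoBehaviour
{
    public ParticleSystem sparkPrefab;      // weak hit: small spark
    public ParticleSystem explosionPrefab;  // medium hit
    public ParticleSystem shockwavePrefab;  // high-impact hit with shockwave

    public ParticleSystem ChooseEffect(float intensity)
    {
        if (intensity < 0.3f) return sparkPrefab;
        if (intensity < 0.7f) return explosionPrefab;
        return shockwavePrefab;
    }
}
```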

Justifying Design Choices

When working with keyframe animation, it’s essential to justify and rationalise aesthetic choices:

  • Why a certain texture?
  • Why a particular particle effect?
  • How do visual effects reflect the mechanics or emotional tone of the interaction?

These decisions must support the internal logic of the virtual environment and the user’s mental model of interaction within it.

Conclusion

This project explores how animation in immersive VR spaces can meaningfully represent user actions and their consequences. Through the lens of visual and animation fidelity, and by leveraging procedural techniques, we can create responsive, engaging, and informative feedback systems that enhance immersion, understanding, and enjoyment in gamified experiences.

Ultimately, thoughtful visual feedback design becomes a powerful language—bridging code, physics, and emotion—in the architecture of immersive digital spaces.





Categories: Project 1

Body Mechanics: Blocking Plus


Next Week – Animation Tasks & Focus

Checklist:

  • Organize reference clips
  • Finalize idea and story beats
  • Prep scene file (camera, rig, environment if needed)

Blocking Phase Goals:

Blocking Plus (Blocking with Moving Holds)

  • Include moving holds to keep the animation alive even in still moments
  • Maintain rhythm and weight with subtle in-place motion

Blocking in Steps

  • All major keys placed on the same frame per pose (stepped tangents)
  • Emphasize strong pose-to-pose structure



Weighted tangent

By default, Maya uses unweighted tangents in the Graph Editor, which makes adjusting curves cumbersome, e.g. when adding hang time to a bouncing-ball animation and then changing the position of the bounce. To solve this, using weighted tangents is recommended.

With the weighted tangent option you never have to break the tangents just to adjust the weighting; breaking tangents is reserved for changing the direction of the curve.




Animation

  1. Reference the rig.
  2. Perform animation on master controller.
  3. Adjust the graph editor.

  • While working in spline, break the tangent first, before applying the weighted tangent (default is unweighted tangent)


Working with weighted tangents: select all the keys -> change to weighted tangents -> Shift + MMB drag (as in other software, Shift keeps the line straight while you extend it).

Troubleshooting

If Maya stops showing the rig/model in the viewport: in the view panel, go to Show → Viewport and check what has been unchecked.

Working with weighted tangents on Translate Y (the up-and-down motion):

The slope is steep to start with, which means the character will pop up. By contrast, the slope eases out slowly at the end, which means the character will smoothly and steadily come to rest.


Here, the slow, steady start gives anticipation; the quick, steep end means the character goes straight down.



This graph describes Translate Y. Before the character goes up, it first dips down, so the start of the curve goes into negative values before rising again.

The tangent gets rotated to get more of the “hang out” time: hanging in the air, reaching equilibrium, and losing momentum before starting to fall again.

Baking keys & transferring it over to limbs

result:



The elbow constraints are only visible in IK, so be sure to set the IKFK attribute to 1: setting IKFK = 1 switches the arm from FK mode (0) to IK mode (1).

In an IK (Inverse Kinematics) setup, the elbow is usually constrained via a pole vector or some specific control to guide the elbow’s bend. In FK (Forward Kinematics), you manually rotate joints without the system solving the bend, so “elbow constraints” (like pole vectors) don’t really apply in the same way or may not be visible/active.


Workflow:

1. Select the elbow control group or object (which has the constraint).

2. Shift-select the elbow locator (driver).

3. Apply a parent constraint (Maintain Offset OFF). This way, the control moves exactly with the locator, with no offset.

The above shows the correct constraints.

Once all 9 parent constraints have been applied, proceed with baking the keys.





After baking the animation keys onto the locators (selecting all the locators and then Key -> Bake), I selected all the constraints and deleted them. The animation had been made on the main controller, yet when playing it back, the locators stayed animated in the viewport.

“You never should animate on the master control, but you can use the master control to your advantage”
In order to do so, follow this process:



1. Animate on the master control and adjust the animation to fit your timing.
2. Create nine locators: foot, knee, arm, and elbow (×2, for the right and left sides), plus the COG (usually the pelvis).
3. Create a parent constraint for each of these locators, with the corresponding control as the parent (driver) and the locator as its child (do not maintain offset).
4. Bake the animation keys onto the locators (after checking that the previous step was done correctly).
5. Delete the constraints.
6. Reverse the process, copying the animation from the locators back onto the controllers (this time with Maintain Offset on). Watch out: the elbow and knee pole vector constraints only allow translation, not rotation, so rotation cannot be copied over, which may yield an error.
7. Select all the controllers via the quick selection tool (no master control included, no constraints) and bake the animation keys onto the controllers.
8. Delete all the locators afterwards.

This is a wrong selection set; again, no animation on the master control.


The Tangents configuration for the animation in blocking, as per the image above.

1. Update the settings.
2. Open the Graph Editor, select all the keys, and opt for stepped tangents. Essentially, this will allow you to see all the poses.

Moving holds.

Get your poses done, according to the planning you prepared before this exercise.

Let’s say you’re going to pose frames 10 and 16 (since the animation is baked every 3 frames) and leave frame 13 unposed.

1. Select the controllers via the quick selection key.

2. Narrow the keyframes down to 10–16, and apply spline tangents to this section within the Graph Editor.
3. Copy the animation from frame 11 (the in-between Maya gives you from the spline) onto frame 13 via MMB-drag; this essentially creates the moving hold.
4. Go to frame 13 and slightly readjust the pose, such as the head moving slightly the other way.




Categories: Project 2

Mixed Reality Game

I’ve been experimenting with the idea of creating a mixed reality game—something I first got excited about during a group project last term. That earlier experience got me thinking about how physical and virtual systems can be connected, but now I want to take it further and really explore in this project what mixed reality can mean as a creative and interactive space.

Mixed reality is often understood as bridging the physical and virtual, where virtual objects are rendered on top of the real world, touching on the idea of augmentation. But what I was more interested in was the idea that experiences and interactions happening within the virtual world, inside the headset, can actually result in physical outcomes. I’m really fascinated by that transition between the two worlds, especially how user actions in VR can influence or change something in the real world.

I’ve begun with reading to back this concept up. One book that’s really influenced my thinking so far is Reality+ by David Chalmers. He argues that virtual realities aren’t fake or “less real” than the physical world. Virtual objects and experiences—if they’re consistent, immersive, and meaningful to the user—can be just as “real” as anything in the physical world. That idea stuck with me, especially when thinking about digital objects as things that have structure, effects, and presence—even if they’re built from code instead of atoms.

What I’m interested in now is creating a system where virtual actions have real-world consequences. So instead of just being immersed in a virtual world, the player’s success or failure can actually change something in the physical world. That’s where I started thinking of repurposing my Venus flytrap installation—something physical and mechanical, driven by what happens in the VR game.

The game idea is still forming, but here’s the general concept: the player is in a virtual world where they need to defeat a group of Venus flytrap-like creatures. If they succeed, a real-life Venus flytrap sculpture (which I’m building using cardboard, 3D-printed cogs, and a DC motor) will physically open up in response.
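
As a first sketch of that virtual-to-physical bridge, the game could write a command over serial to the microcontroller driving the DC motor once the win condition is met. Everything here (port name, baud rate, the one-byte protocol) is an assumption, and System.IO.Ports needs the .NET 4.x API compatibility level in Unity:

```csharp
using System.IO.Ports;
using UnityEngine;

// Sketch: when the player defeats the last flytrap creature, Unity
// sends a byte over serial; matching firmware on the microcontroller
// runs the DC motor that opens the physical sculpture.
public class FlytrapBridge : MonoBehaviour
{
    private SerialPort port;

    void Start()
    {
        port = new SerialPort("COM3", 9600); // assumed port and baud rate
        port.Open();
    }

    // Called by the game logic once all creatures are defeated.
    public void OnAllCreaturesDefeated()
    {
        port.Write("O"); // 'O' = "open" command, agreed with the firmware
    }

    void OnDestroy()
    {
        if (port != null && port.IsOpen) port.Close();
    }
}
```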




Inspirational Work

Duchamiana by Lillian Hess

One of the pieces that really inspired my thinking for this project is Duchamiana by Lillian Hess. I first saw it at the Digital Body Festival in London in November 2024. After that, I started following the artist’s work more closely on Instagram and noticed how the piece has evolved. It’s been taken to the next level — not just visually, but in terms of interactivity and curation.

Originally, I experienced it as a pre-animated camera sequence that moved through a virtual environment with characters appearing along the way. But in a more recent iteration, it’s been reimagined and enriched through the integration of a treadmill. The user now has to physically walk in order for the camera to move forward. I really love this translation, where physical movement in the real world directly affects how you navigate through the digital space.

Instead of relying on hand controllers or pre-scripted animations, the system tracks the user’s physical steps. That real-world data drives the experience. What stood out to me most was not just the interaction, but the way physical sound — like the noise of someone walking on the treadmill — becomes part of the audio-visual experience. It creates this amazing fusion: a layered, mixed-reality moment where real-world action affects both the sound and visuals of the virtual world.

It’s made me think a lot about how we might go beyond the VR headset — how we can design for transitions between physical and virtual spaces in more embodied, sensory ways. It’s a fascinating direction, and this work really opened up new possibilities in how I’m thinking about my own project.





Mother Bear, Mother Hen by Dr. Victoria Bradbury



Mother Bear Mother Hen Trailer on Vimeo

One project that really caught my attention while researching mixed reality and physical computing in VR is Mother Bear, Mother Hen by Dr. Victoria Bradbury, an assistant professor of New Media at UNC Asheville. What I found especially compelling is how the piece bridges physical and virtual spaces through custom-built wearable interfaces — essentially, two bespoke jackets: one themed as a bear and the other as a chicken.

These wearables act as physical computing interfaces, communicating the user’s real-world movements into the game world via a Circuit Playground microcontroller. The gameplay itself is hosted in VR using an HTC Vive headset. In this setup, the user’s stomping motion — tracked through the microcontroller — controls the movement mechanics within the game, while the HTC Vive controllers are used for actions like picking up objects or making selections.

Both the bear and chicken jackets communicate via the Arduino Uno. A stomping motion controls the movement, and the costumes allow for auditory and visual responsiveness.

What makes this piece even more immersive is how the wearables themselves provide both audio and visual feedback. They incorporate built-in lighting and responsive sound outputs that reflect what’s happening in the game — a brilliant example of reactive media design.

The project was awarded an Epic Games grant, which shows the increasing recognition and support for experimental VR and new media works that fuse physical computing with virtual environments. I found it incredibly inspiring, especially in the context of exploring embodied interaction, wearable technology, and the creative possibilities of mixed reality. It’s a powerful reminder of how far VR can go when it intersects with tactile, real-world interfaces and artistic intention.


Unreal Plant by leia @leiamake

Unreal Plant

Another project that combines Unreal Engine with physical computing is Unreal Plant by leia @leiamake. It explores the idea of a virtual twin, where a real plant is mirrored in both the physical and digital realms. The setup includes light sensors that monitor the plant’s exposure to light in the real world. That data is then sent to the virtual world, where the digital twin of the plant responds in the same way—lighting up when the physical plant is exposed to light.
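
For comparison, the opposite mapping (physical to virtual) could look something like the following in Unity. This is my own loose reconstruction of the idea, not the artist's actual implementation; the port, message format, and scaling are all assumptions:

```csharp
using System.IO.Ports;
using UnityEngine;

// Sketch of a physical-to-virtual mapping: a light-sensor value arrives
// over serial (assumed one 0-1023 reading per line) and drives the
// digital twin's light intensity.
public class DigitalTwinLight : MonoBehaviour
{
    public Light twinLight;
    private SerialPort port;

    void Start()
    {
        port = new SerialPort("COM3", 9600); // assumed port and baud rate
        port.ReadTimeout = 10;               // don't stall the frame waiting for data
        port.Open();
    }

    void Update()
    {
        try
        {
            string line = port.ReadLine();
            if (int.TryParse(line.Trim(), out int raw))
                twinLight.intensity = Mathf.InverseLerp(0f, 1023f, raw) * 2f;
        }
        catch (System.TimeoutException)
        {
            // No new sensor reading this frame.
        }
    }

    void OnDestroy()
    {
        if (port != null && port.IsOpen) port.Close();
    }
}
```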

This project is a great example of bridging the two worlds and creating an accurate mapping between them. It really embodies the concept of the virtual twin by translating real-world inputs into virtual actions. There’s also some additional interaction, like pressing a button to water the plant, though that part is more conventional and relies on standard inputs like a mouse or keyboard. Still, the core idea—mirroring real-world changes in the virtual space—is what stood out to me.

Virtual Reality and Robotics

A similar concept using physical computing and robotics can be found in real-world applications, such as remote robotic control systems. In these systems, the user operates from a VR control room equipped with multiple sensors. Using Oculus controllers, the user can interact with virtual controls in a shared space—sometimes even space-like environments.

Because Oculus devices support features like hand tracking and pose recognition, users can perform tasks such as gripping, picking up objects, and moving items around. What’s impressive about this setup is that the designers have mapped the human physical space into virtual space, and then further mapped that virtual space into the robotic environment. This creates a strong sense of co-location between the human operator and the robot’s actions.

This setup is a great example of a virtual twin, where there’s a precise and responsive mapping between virtual and physical environments. It also points towards the emerging concept of the industrial metaverse, particularly in training robots and AI systems.

A similar example is the early development of Tesla’s humanoid robots. While these robots are designed to be autonomous, they can also be operated by humans through virtual reality systems. The aim is often to support tasks like healthcare delivery or improving social connections.

This is an example where VR and robotics are used to stock shop shelves remotely, from a distance.

This ties into the idea of the virtual twin—replicating what happens in the virtual world and mapping it onto physical robots. It’s essentially about translating human actions in a digital environment into responses in the physical world, which is a key aspect of bridging virtual and real spaces.

Industrial metaverse.

Categories: Collaborative Unit, Uncategorised

Week 3: 3D Scanning at Tate Modern

A visit to the site of Tate Modern for data-capture purposes: using the mobile phone camera to record videos of the objects exhibited, both with the camera on its own and with the AI-powered app Luma AI.

Artwork that caught my eye

Metamorphosis of Narcissus

1937, Salvador Dalí

I’m familiar with this painting, and yes, Dalí is one of my favourite surrealists. I’m posting it here because it relates to the work I’m currently doing for the advanced unit with George, creating a previs inspired by the same mythological story of Narcissus.

Dalí, about his painting: “A painting shown and explained to Dr Freud. Pedagogical presentation of the myth of narcissism, illustrated by a poem written at the same time. In this poem and this painting, there is death and fossilization of Narcissus.”




The Sculptor


1953, William Gear

The painting, which in its earthy colours and angular structure displays an allegiance to the aesthetic of ‘paysagisme abstrait’, combines considerable formal complexity with a new degree of sharpness. The spiky shapes are reminiscent of the so-called ‘geometry of fear‘ sculptures which had represented Britain at the Venice Biennale in 1952. 

The Painter’s Family

1926, Giorgio de Chirico

Painted in Paris in 1926. ‘The Painter’s Family’ dates from de Chirico’s period of association with the Surrealists when he tended to revive the subjects of his earlier Metaphysical period – in this case the mannequin figures – but in a heavier, more antique manner. The mannequin theme is said to have been inspired by a play Les Chants de la Mi-Mort written by de Chirico’s brother Andrea (Alberto Savinio) and published in Apollinaire’s magazine Les Soirées de Paris for July-August 1914. The drama’s protagonist is a ‘man without voice, without eyes or face’.

Categories
Advanced & Experimental

Week 4: Kicking off with previs of the chosen story in Maya

Initial storyboards were hand-drawn, then scanned and put together in Photoshop as the sequence of events. This was presented for the weekly review and updated as per George’s feedback.

– George suggested that the story should come full circle, with a closing scene of the character waking up from the dream. This justifies all the abstractions incorporated into the animation, such as the character being chased by a swiping left finger, which is dreamlike and not real. So, scene 12 was copied over to the end.

– Abstraction comes into play while recreating the famous painting of Narcissus. George encouraged me to enhance these aesthetics, such as the body of water surrounding the bed, while the character is looking at his own reflection (scene 13).

– Another suggestion is to apply a transition to white between scenes 30 and 31, as the character escapes the


Previs feedback

1. Do not overuse camera movement! Each camera movement serves a purpose and needs to be justified. It’s more than okay to use static shots.

2. Be sure to maintain symmetry when zooming in and out.

The above shot seems clunky, in the way that the symmetry vanishes as the camera zooms out; check the image below.

3. POSING: Be more articulate when defining the actions of the character in specific shots. The character does not need to be animated at this point, but their poses must clearly articulate what they get up to in the scenes presented.

4. RULE OF THIRDS: Be sure to use the rule of thirds to justify your composition.

5. Show off your director skills. The camera movement shall be justified. The following shots were proposed; however, as George suggested, there’s no need to cut between them. After the character falls through his own reflection on the mobile screen into the underground world, it makes sense to follow that with a camera movement along an arc-like trajectory, approaching from the top and zooming in on the phone, which now displays the flower; this can then be used for the dip-to-black transition to the next shot.

George pointed out that no rotation should be used while zooming, and that the phone should be facing in portrait mode, not landscape.

Maya process

  1. In Photoshop, I created a transparent PNG with the rule-of-thirds guides (green) and red guides for symmetry.
  2. Added this transparent PNG as an image plane on the designated camera.
  3. Adjusted the depth of the image plane, so it sits at the very front.
  4. Used the rule_of_thirds layer as a reference, so it does not get in the way when clicking on other objects within the scene.

Categories: Advanced & Experimental

Week 3: World, camera shot and characters

Establishing shot



Establishing shot: an opening shot of a scene that gives the audience an understanding of the world being presented. It allows the audience to understand how the characters are oriented around the scene and how they are related.

It is usually a bird’s-eye view or extreme wide shot, with zooming in and out, or panning top to bottom or right to left. The techniques vary, but the effect often focuses on revealing the world.


a. Start of the story
b. Character changes the space
c. The end of the story


180-degree rule


Analysis of establishing shots from examples of my choice.


The Dark Knight Robbery




Characters and their story: who are the protagonists, antagonists, and side characters?

The list of attributes.

Description of the character: a bio. What the character does before and after the story helps establish the middle ground, the part the story covers. How do the characters move and act, and what are their mannerisms?

Based on the story, the character’s actions can be justified, even if they are not morally acceptable. The character has to be presented in a compelling way, with the bio describing what it takes to make them compelling.

Show, don’t tell: visual communication through the character’s acting.

Be deliberate about the choices and commit to them, to emphasize as much as possible. The character’s introduction should be evocative enough to let the audience understand who the character is, and what their nature is, from the first moment of their appearance.


Camera shots, as well as assets and the scene/environment, matter when conveying the mood and the character.


Compelling: evoking interest, attention, or admiration in a powerfully irresistible way.
Compassionate: feeling or showing sympathy and concern for others.



Reference and inspiration:

Jonathan (@jonathan_djob_nkondo) • Instagram photos and videos

https://www.bing.com/search?q=severance&qs=n&form=QBRE&sp=-1&ghc=1&lq=0&pq=severance&sc=12-9&sk=&cvid=1F3469E73AC64F5197C9814448EB8C73&ghsh=0&ghacc=0&ghpl=






Categories: Uncategorised

Research phase

Ghost stories


Cookbook

I visited the Waterstones Victoria branch, curious about what’s available to buy for people interested in minimising their footprint in the kitchen. I was struck that I managed to find only 2 books within the food section. The implication is that sustainable eating is presented only within the vegan section. There was, however, no book offering recipes that would encourage minimising food waste, only the carbon footprint.

Books:

Categories: Advanced & Experimental

Week 2: World Creation


“The process of constraining the world is originally an imaginary one, sometimes associated with a fictional universe.”

When creating the world of a story, every element—from the sets and props to the overall environment—plays a critical role in shaping the narrative. Understanding how these elements influence the shots and storytelling is key to building a cohesive world. Let’s break it down into the core components that will guide the world-building process:



1. History: The Time Period and Context of the World

The history of the world within your story sets the stage for everything that happens. Establishing the time period and understanding what happened before and after your story provides context for the narrative.

  • What time is your story set in?
  • What major events have shaped the world?
    • Consider the impact of things like natural disasters, pandemics, or significant historical events.
    • Establishing a clear beginning and end will help you develop the middle of the story.

Understanding these elements will help ground your world in reality, even if the world itself is fictional. Contextualizing your story through history gives the narrative emotional depth and relevance.

2. Ecology: The Flora, Fauna, and Environmental Impact



Ecology is about the relationship between living organisms and their environment. In your story, the natural world influences the characters and events.

  • What flora and fauna exist in this world?
  • How does nature affect the world?
  • What is the condition of the environment depicted in the scene?

The key here is to communicate the ecological elements visually—without overwhelming the audience with text or exposition. Visual storytelling is the most effective way to show the impact of nature on the world.

3. Geography: Defining the Location and Culture

Geography refers to the physical location where the story takes place. The environment and location provide context for the narrative and influence everything from architecture to culture.

  • Where is the story set? Is it in a primitive, developing, or advanced nation?
  • How does geography influence the architecture, culture, and people’s way of life?
    • Consider the role of history, religion, and science in shaping the built environment.
    • How does geography affect urbanization, culture, and daily life?
    • What influences people’s clothing, mannerisms, and social behaviour?

These factors will affect the choices made for the sets, props, and characters. The way people dress, speak, and move is heavily influenced by the geography and culture of their environment, so understanding this is essential when planning actor choices and character designs.

4. Sets and Props: Modelling the World

The sets and props are the physical manifestation of your world. These elements help signal to the audience what kind of environment they’re in and contribute to the overall atmosphere.

  • What props and sets will clearly define your world for the audience?
  • How do they reflect the environment, culture, and time period?

In this phase, you are modelling the world—creating spaces that feel authentic to the narrative. Props should be purposeful, serving as visual cues to support the world-building.

5. Basic Composition in Maya

When creating your world in 3D software like Maya, basic composition becomes critical in translating your ideas from concept to reality. Focus on the arrangement of objects, lighting, and camera angles that help reinforce the themes and tone of your story.

6. Mood board: Visual Inspiration

A mood board serves as a visual reference for the world you’re creating. It helps capture the atmosphere, textures, colour palettes, and design elements that define the aesthetic of the world.

You can gather references from various sites, including those specifically dedicated to visual references, to guide your decisions about the sets, props, and overall design.