Modular rigging
Embedded below are the screenshots of my notes taken during class time, as well as during my study and exploration of rigging in Unreal Engine.





Forwards and Backwards Solve: rigging a character with human-like posture

















For Humans:
The centre of gravity (COG) for humans is typically located around the hips or pelvis. This is where the main control resides in character rigs, serving as the foundation for natural movement.
For Hero Characters:
For hero characters, the COG shifts to the chest, reflecting their predominant movements, such as flying or leaping through the air. This adjustment enhances the believability of their superhuman abilities.
Maintaining Balance:
To ensure proper balance in animation, it’s essential to apply correct weight distribution within the positive space of the character’s body mass. This helps achieve a sense of realism and stability in movement.
The following images present the outcome of the exercise: practicing the COG by aligning the bony rig with poses corresponding to the reference images. The red markings are George's feedback.
Posing from a live-action snapshot reference ensures the physical logic is accurate and the weight is distributed with the COG in mind. However, the animator's role is to go one step beyond and make the pose visually appealing through exaggeration and asymmetry.



Kneeling jump breakdown of the action, using the video 3 Exercises To INCREASE YOUR VERTICAL Pt.2 | JUMP HIGHER | The Lost Breed as the reference. The breakdown includes the anticipation (a hand movement that implies preparation for the jump), stretch and squash of the spine (marked with the ball reference under each pose), and the pendulum consideration, where the hands follow the chest and pelvis as they move up and forward.
This example was broken down correctly; however, it is visually uninteresting because the posing throughout the action is symmetrical. Animation should aim for asymmetrical posing, as this allows for more variation in the line of action, making the movement appear more dynamic. Animation has the advantage over real-life action in that movements can be exaggerated. Considering this, the current example is far too dull to justify the time spent animating it.
Switch the default tangent to “stepped” mode to make it easier to block out key poses and focus on timing.
Start with a neutral default pose. Use both the front and left views to ensure accuracy and alignment.
Create a quick selection tool for all the controllers, excluding the root controller. This will streamline the animation process and save time during adjustments.
Introduce anticipation by slightly offsetting the pose in the opposite direction of the intended movement. This creates a sense of tension and preparation before the action begins.
Use the “+” or “−” keys to control the gizmo for incremental adjustments.
Develop around 4 anticipation poses to build a dynamic and fluid movement.
Reset all values to 0 before moving on to the next pose. This ensures clarity and reduces errors.
The contact pose is critical for establishing the weight shift. When the hips take the weight, the body naturally leans toward the supporting side. Add subtle rotation in the direction of the hip movement to enhance realism.
Creating a video reference in front and side views for the body weight shift involved ensuring the camera was stable. Both perspectives needed to match precisely in timing, as two different cameras were used to capture the reference. The footage was then aligned in Premiere Pro, and consecutive frames were screenshotted for breakdown purposes, making it clear how to perform blocking for the action.
The image below shows the collection of screenshots, presenting consecutive frames of the action, which served as a reference for my blocking process.

The image below shows my investigation of the class demo on the body weight shift, as explained to us with consideration of both the front and side views.
Observations/Notes:

My submission on SyncSketch: Body weight shift -> front_weight shift_01
The following presents instructions on performing physics within UE5, embedded as screenshots of the Word document I made as a result of my studies and exploration. When rendering a lot of physics simulation within Unreal, it is recommended to switch to Nanite.













Blocking animation submission on SyncSketch: Tailed ball -> fox jumping01
First of all, I overthought things and got it completely wrong by ignoring everything I had learned about animating the bouncing ball. Keep it simple, stupid!
I came across a blog, a tutorial, and other people’s animations, and I learned that when you jump (like an athlete), the path traced in the Y coordinate (your height over time) is not a perfect parabola, because air resistance acts on your body as it moves through the air. While this is true, I exaggerated it far too much.
In reality, the path still follows a parabola, representing an organic movement with ease-in and ease-out. It may slightly deviate, becoming more pronounced on the left side of the curve, but it still represents the pathway from the floor to the highest point of the jump, where the body momentarily stays in the air before it begins to fall.
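To convince myself of this, I put the physics into a minimal Python sketch (the launch speed and drag coefficient are illustrative assumptions, not values from my animation): gravity alone gives a perfectly symmetric parabola, while adding a simple linear drag term shortens the ascent relative to the descent, which is the slight left-side skew described above.

```python
# Sketch: jump height over time. Under gravity alone the curve is a
# perfect, symmetric parabola; adding simple linear air drag (an
# assumption for illustration) skews it slightly, but it still reads
# as a parabola.

G = 9.81  # gravitational acceleration, m/s^2


def height_no_drag(v0, t):
    """Ideal parabolic height: y(t) = v0*t - g*t^2/2."""
    return v0 * t - 0.5 * G * t * t


def simulate_with_drag(v0, k=0.2, dt=0.001):
    """Euler-integrate a jump with linear drag coefficient k.
    Returns (time_of_peak, total_airtime)."""
    y, v, t = 0.0, v0, 0.0
    t_peak = 0.0
    while y >= 0.0:
        a = -G - k * v      # drag always opposes the velocity
        v += a * dt
        y += v * dt
        t += dt
        if v > 0:
            t_peak = t      # still rising: remember the latest time
    return t_peak, t


if __name__ == "__main__":
    v0 = 4.0                        # illustrative take-off speed, m/s
    t_peak_ideal = v0 / G           # peak of the ideal parabola
    t_peak_drag, t_airtime = simulate_with_drag(v0)
    print("ideal peak at", round(t_peak_ideal, 3), "s (symmetric curve)")
    print("with drag: ascent", round(t_peak_drag, 3),
          "s, descent", round(t_airtime - t_peak_drag, 3), "s")
```

The ideal curve is equal heights either side of the peak; with drag, the ascent is always shorter than the descent, so the curve leans slightly to the left of its apex, exactly as the reference footage suggested.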
The blocking confused me when I initially planned the animation. I sketched out triangular motion trails for the jumps, but it should have been a parabola!
Fair enough—when Maya is set to linear tangent interpolation and the trail option is displayed, it shows these triangular shapes. However, this isn’t mechanical; it’s an organic motion, so the path should still resemble a parabola.
The tail should follow a similar trajectory to the ball, much like the roller coaster example where the ball is the front carriage and the subsequent parts of the tail are the carriages that follow. My first attempt resulted in a stiff tail, which is the opposite of the desired fluid appearance.
To achieve fluidity in the tail's movement and eliminate stiffness, I redid the blocking stage in line with the feedback given. I drew a table breaking one jump down into 10 keyframes to better plan the tail's follow-through, and used it as a reference point to check against while animating.


This is a screenshot of the work-in-progress during the blocking stage. The motion trail in the viewport currently has a triangular shape, which is typical for this stage. Moving forward to the spline animation phase, my role will be to refine the pathways to better approximate organic movement by reshaping them into parabolas. To achieve this, I will add more keyframes, providing Maya with additional data to create smoother parabolic interpolations. This approach addresses the default linear interpolation caused by the limited coordinate space data provided during the blocking stage.

Delivering an animation in spline requires working within the graph editor and the viewport to adjust the interpolation curves, ensuring they approximate organic movement while checking the outcome for both appearance and feel.
The automated spline interpolation function facilitates the transition from linear tangents to spline curves. However, due to the limited sample data provided, this often results in visual artifacts. Therefore, manual corrections are necessary to refine the animation.
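A small Python sketch of that artifact problem, using a Catmull-Rom spline as a stand-in for an auto-spline interpolant (my assumption; Maya's own tangent algorithms differ in the details): linear interpolation stays between its two key values, but a smooth spline fitted through sparse blocked keys can overshoot past a key's value between samples, which is the kind of artifact that needs manual tangent clean-up.

```python
# Sketch: why auto-splining sparse blocked keys can overshoot.
# Catmull-Rom is used here as one representative smooth interpolant.

def lerp(a, b, t):
    """Linear interpolation: never leaves the [a, b] range."""
    return a + (b - a) * t


def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom spline segment between p1 and p2, t in [0, 1]."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)


if __name__ == "__main__":
    # Blocked values for one channel, e.g. translateY: a rise to 5.0
    # that should then hold flat. The spline segment between the two
    # 5.0 keys inherits a non-zero tangent from the rise and bulges
    # above 5.0 before settling.
    samples = [catmull_rom(0.0, 5.0, 5.0, 5.0, i / 10) for i in range(11)]
    print("min:", min(samples), "max:", max(samples))  # max exceeds 5.0
```

Here every key value is 0.0 or 5.0, yet the spline peaks around 5.37 between keys; in the viewport that reads as the ball sinking into or popping off the floor, hence the manual corrections.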
Spline animation submission on SyncSketch: Tailed ball spline -> fox_jumps_spline_1


Unreal Engine 5.4 does not provide built-in functions for adjusting the brightness and contrast of textures. However, these adjustments can be implemented through node-based coding by creating a Master Material. While this requires additional setup, it offers significant benefits, allowing for greater customization of materials. For example, layers such as dirt or surface imperfections can be added, conveying a story about the material—such as rusted metal that has undergone corrosion.
Implementing a Master Material also promotes efficiency and reusability across projects, provided the project structure is well-organized. All textures and functions referenced by the Master Material must be stored in corresponding sub-directories within the main Master Material folder to ensure seamless functionality.
Adjusting brightness can be achieved using the multiplication node, with values customized through a scalar parameter, which takes floating-point numbers. Similarly, contrast adjustments can be made using the power node, with a scalar node providing an adjustable slider.
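The maths behind those two nodes is simple enough to sketch outside the material graph. The Python below mirrors the node behaviour on a single channel value in the 0-1 range (the function names are mine, not UE node names): brightness is a straight multiply, and contrast is a power curve, where exponents above 1 push midtones darker while leaving pure black and white untouched.

```python
# Sketch of the Master Material node math: brightness as a Multiply
# node, contrast as a Power node, on 0-1 texture channel values.

def adjust_brightness(value, brightness):
    """Multiply node: scales the value; clamped back into 0-1."""
    return min(max(value * brightness, 0.0), 1.0)


def adjust_contrast(value, contrast):
    """Power node: exponent > 1 darkens midtones (more contrast),
    exponent < 1 lifts them; 0.0 and 1.0 are unchanged either way."""
    return value ** contrast


if __name__ == "__main__":
    mid_grey = 0.5
    print(adjust_brightness(mid_grey, 1.5))  # 0.75 - brighter
    print(adjust_contrast(mid_grey, 2.0))    # 0.25 - darker midtone
    print(adjust_contrast(1.0, 2.0))         # 1.0  - white unaffected
```

This is also why the contrast exponent works as a slider on a scalar parameter: the endpoints stay pinned and only the midtones move.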
Creating a Master Material also requires defining custom PSR functions (MF_PSR) following naming conventions. PSR stands for position, scale, and rotation, which are tied to UV mapping. This process can be likened to wrapping a 3D object, like a chocolate Easter bunny, in decorative foil: the flat, 2D foil covers the complex 3D shape, ensuring accurate texture mapping.
To create my Master Material, I followed the tutorial Unreal Engine Materials in 6 Levels of Complexity. My implementation differs slightly, as I utilized an ORM file. Based on class discussions and Serra’s advice to memorize the RGB channel functions, I assigned the appropriate colour channels in my Master Material: red for ambient occlusion, green for roughness, and blue for displacement (which I used as metallic in my case, based on research I encountered). In the tutorial, the texture was connected to the specular input of the main node; however, the texture I downloaded from the Fab library lacked a specular layer, so I assigned a scalar parameter to define the specular values manually.
The importance of a specular texture is well-documented: “A specular texture defines parts of the surface that might be dirty, scratched, or darker. These areas react differently to light sources. Without a specular texture, light reflects off the surface like it’s a flat plane.” Based on this insight, I may need to refine my Master Material to better align with its intended purpose, avoiding the flat-plane aesthetic and aiming for a more realistic appearance.
Additionally, I explored using the Lerp function to interpolate between two textures. For instance, to create a material that appears dirty, such as a surface touched by greasy hands, I added a layer representing surface imperfections. By blending this layer with the roughness texture using the Lerp function and promoting the alpha value to a parameter, I achieved adjustable blending in the material.
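The Lerp blend described above reduces to one line of maths per pixel. The sketch below mimics it in Python (the function and map names are illustrative, not UE node names): at alpha 0 the base roughness passes through untouched, at alpha 1 the imperfection layer fully replaces it, and the promoted alpha parameter slides between the two.

```python
# Sketch of the Lerp blend: mixing a base roughness map with a
# surface-imperfection layer, alpha promoted to a parameter.

def lerp(a, b, alpha):
    """Linear interpolate: alpha=0 returns a, alpha=1 returns b."""
    return a * (1.0 - alpha) + b * alpha


def blend_roughness(base, imperfection, dirt_amount):
    """Per-pixel blend of two greyscale maps (lists of 0-1 values)."""
    return [lerp(p, q, dirt_amount) for p, q in zip(base, imperfection)]


if __name__ == "__main__":
    base = [0.2, 0.2, 0.2]          # clean, fairly glossy surface
    imperfection = [0.9, 0.5, 0.7]  # greasy-fingerprint layer
    print(blend_roughness(base, imperfection, 0.0))  # untouched base
    print(blend_roughness(base, imperfection, 1.0))  # full imperfection
    print(blend_roughness(base, imperfection, 0.5))  # halfway blend
```

Feeding a greyscale mask texture in as the alpha instead of a single scalar gives the same effect per region of the surface, which is how the fingerprint smudges end up only where hands would touch.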
tutorial source: Unreal Engine Materials in 6 Levels of Complexity
tutorial source: How to HIDE Texture REPETITION in Unreal Engine – UE4 Tutorial
Imperfection
Adding a surface imperfection layer on top of the roughness map provides a story about the material’s journey and wear.
Dirt and Filth
Adding a dirt layer on top of the albedo map creates the appearance of a dirty surface. The Static Switch Parameter, which acts as a Boolean, allows for toggling the dirt effect on or off.
The process of creating a Master Material has deepened my understanding of material design within Unreal Engine. By customizing textures and parameters, I can achieve a higher level of realism and storytelling in my materials. For instance, adding imperfections or dirt layers provides depth and context to otherwise flat surfaces. While these techniques require a strong foundation in node-based logic and texture mapping, the results are highly rewarding, offering endless possibilities for material customization.
Anticipation is an action that provides visual clues about the main action that is about to occur, often building up internal force leading to the main event.
Anticipation visually conveys the amount of strength and force that goes into a movement. The use of squash and stretch not only enhances the expression of movement but also makes it more convincing to the viewer.
This principle, as described by Bill Tytla, references Newton’s laws of motion.

The tail follows the same curve as the front part of the movement (e.g., a bouncing ball).
The natural pose for the tail is often an S-shape, which reflects the dynamics of motion.
Blocking is the foundation of the animation process. It involves setting the main keys, breakdowns, and using stepped tangents to sketch the motion.

Breakdowns are essential to filling the space between keyframes. They define how the motion transitions, playing a crucial role in creating the illusion of life.
Ghosting mode can be switched on for controllers or objects to visualise the tracking of the movement.


To create an iconograph, I took screenshots of George’s demo in class, which showed the simple up-and-down motion of the ball with the tail. I used these as a guide while working on my animation, so I could always go back, reference the key frames, and check my blocking against George’s.

I opted for a simple animation while applying the theory I’ve learned so far. My animation consists of a small jump, which serves as the anticipation for the bigger jump. This is followed by three smaller jumps, indicating that the initial jump was high in energy. To balance the energy accumulated from the jump, the ball requires several smaller jumps to come to a stop.
The sequence follows the pattern of anticipation > action > reaction:
My submission on SyncSketch: Pendulum -> Pendulum
The pendulum movement is effective, but the achieved velocity is not proportionate or justified relative to the velocity of the base moving from point A to point B before it stops. To address this, the number of frames allocated for the base movement should be reduced to around 23 frames to create a shorter, more rapid motion. (Refer to the graph editor highlighted in the red box.)


I have grasped the concept of C and S shapes well, but I now need to ensure they are accurately approximated so that these shapes do not appear exaggerated. (See the comparison above: an over-the-top S versus a correctly approximated S below.)
Consistency is key when mapping the movement. The S and C shapes should be spaced evenly around a middle line, which could be drawn as a reference through the anchor point of the pendulum at its base.

When the pendulum comes to a stop, the movement must account for both the middle section and the outer part of the pendulum (corresponding to the rotation of the respective joints). The final part to stop should always be the outermost section of the pendulum.

The Avant-Garde movement of the early 1900s rejected traditional representational art, aiming to push the boundaries of artistic expression both visually and intellectually. This led to the development of two distinct forms of abstraction:
Formal abstraction centers on manipulating visual fundamentals, such as colour, shape, line, motion, rhythm, space, and composition, to create a distinct visual and sensory experience. This approach prioritizes movement, aesthetics, and sound design over narrative or messaging.
Example: Kaleidoscope by Len Lye (1935)

The fusion of formal and conceptual abstraction can be seen in works like Max Hattler’s (2005), where geometric shapes, visual patterns, and cultural symbolism converge. For example:
Conceptual abstraction focuses on ideas, thoughts, and narratives, often incorporating geometric shapes bound to real-world objects, thus adding symbolic depth.
Examples:


Both films reflect conceptual abstraction through symbolic visuals and narratives that convey cultural and intellectual themes.
The evolution of abstraction continues with the integration of modern technologies like machine learning and AI. Tools for visual processing expand the possibilities of abstract art in animation.
Example: LATENTSPACE by Charles Sainty (2024)
The theory of the Avant-Garde, as proposed by Renato Poggioli in 1984, aligns with the exploration of abstract representations in animation. This era sought to convey the interaction of the senses and media’s impact on our perception of reality, often engaging with unconscious desires and surrealism.
Avant-garde filmmakers often used cutting techniques driven by sound, creating a more immersive experience for the audience. The combination of sound and image in animation allows for a heightened emotional response, providing a unique experience that is challenging yet engaging.
Synaesthesia, a neurological condition where stimulation of one sense causes an involuntary reaction in another (such as tasting shapes or seeing colors), parallels the experience of avant-garde films. The blending of sound and visual elements in animation and experimental film can evoke a synesthetic experience, connecting senses in a way that challenges conventional perception.
When writing about avant-garde films, it’s important to consider:

Shortcuts and folders noted: Ctrl + A; ShotBased/Cameras and ShotBased/Lighting; Ctrl + X and Ctrl + D; Ctrl + Shift + P.

(image source: Comparison-shot-1024×572.png)
Follow-through refers to the movement that continues after the primary action has taken place. It describes secondary actions that are a natural result of the primary motion, enhancing the realism and fluidity of the animation. This principle ensures that movements feel organic and not abruptly mechanical.
A key technique in follow-through is the successive breaking of joints, where the motion flows like a chain reaction. Each joint moves with a slight delay compared to the previous one, creating a cascading effect. This slight offset prevents the animation from looking rigid, introducing a natural and dynamic quality to the movement.
Drag, or the wave principle, is essential for achieving overlap in motion. It involves a lead-and-follow dynamic, where the leading part of an object drives the movement, and the following parts react to it. This interaction highlights the relationship between primary and secondary actions, adding depth and complexity to the animation.

A classic example of overlapping and follow-through action is pendulum motion. Here, the gravitational energy transitions into kinetic energy, creating smooth arcs and diminishing momentum over time. The pendulum showcases how overlapping motion can add weight, rhythm, and natural flow to animated objects, making them feel grounded in physical principles.
Tutorial source:
Everything is a Bouncing Ball
A video demonstration reveals that the complex movement of a character in animation can be broken down into separate areas, each of which can be depicted as a bouncing ball moving independently. Each joint or limb of the character follows its own unique path, much like an individual bouncing ball, yet when combined with the motion of the rest of the body, they come together to define the cohesive and dynamic movement of the character. This approach highlights how the interplay of these individual elements contributes to the overall flow and believability of the animated motion.
Following the class example, I attempted the “quick and dirty” method in Maya. However, I encountered difficulties when trying to develop the pendulum’s movement after its base stops. At this point, the kinetic energy transfers through the system, creating the characteristic “C” and “S” shapes.
Tutorial source:
How To Animate a TAIL – Animation Exercise
To better understand the logic behind the movement, I started by sketching the motion of grass as demonstrated in the tutorial.
Observations:
Video source:
https://vimeo.com/111841120
To analyse the pendulum motion, I used George’s pendulum animation to closely examine all the shapes discussed in the point above.
Steps taken:


According to The Animator’s Survival Kit, the positions where the pendulum reaches its furthest points are called extremes, which is intuitive. As a rule of thumb, extremes are not circled in animation planning sketches, but keyframes are.
Keyframes, based on the 1940s animation method, are the key drawings circled in the sketches used to plan an animation. Between these keyframes are the intermediate drawings, often referred to as in-betweens or phasing frames.
The hierarchy of animation planning follows this structure:
KEYS → EXTREMES → BREAKDOWNS → IN-BETWEENS
For example, moving from frame A to frame B without any in-between frames would create a “teleportation” effect—an unnaturally quick movement with no sense of transition. To address this, a breakdown key is introduced to highlight the journey between frames. The in-between frames are critical as they define the character of the movement. By adding in-between frames, the motion can be slowed down or made more fluid. Concepts like ease-in and ease-out rely on adding frames strategically to emphasize gradual acceleration or deceleration.
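The spacing idea above can be sketched in a few lines of Python (the pose values and frame count are illustrative). Evenly spaced in-betweens give constant-speed motion, while remapping the parameter through an ease curve (smoothstep here, as one common choice) clusters the in-betweens near the start and end of the move, producing ease-out of A and ease-in to B.

```python
# Sketch: in-betweens from pose A to pose B, linear vs eased spacing.

def inbetweens(a, b, count, ease=False):
    """Generate `count` frames from a to b inclusive."""
    frames = []
    for i in range(count):
        t = i / (count - 1)
        if ease:
            t = t * t * (3.0 - 2.0 * t)  # smoothstep: ease in and out
        frames.append(a + (b - a) * t)
    return frames


if __name__ == "__main__":
    linear = inbetweens(0.0, 10.0, 5)
    eased = inbetweens(0.0, 10.0, 5, ease=True)
    print(linear)  # evenly spaced: constant, mechanical speed
    print(eased)   # small steps near the ends, bigger in the middle
```

With only two frames (A and B) the move "teleports"; adding these in-betweens, and choosing how they are spaced, is precisely what defines the character of the transition.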
In 3D animation, software handles interpolation between frames automatically. However, the animator’s role remains essential for keying the major positions during the blocking stage.
A hybrid approach known as the layered method (discussed in detail in week 1), combining pose-to-pose and straight-ahead techniques, is the most effective. It ensures the animation is both logical and structured (pose-to-pose) while maintaining a fluid and natural quality (straight-ahead).
The keyframes are the most critical as they define the foundation of the animation.
The pendulum loses momentum as it swings, resulting in asymmetrical movement: each successive endpoint of the swing decreases in height (as seen in the orthographic left view). These endpoints of the movement are numbered in the image below, starting with 1.

Submission on syncsketch:
Ball that bounces 🙂 -> bouncing ball


As the base of the pendulum moves, the end of the pendulum gets dragged along behind it. This drag translates into kinetic energy, so when the base comes to a stop, the pendulum keeps swinging until the energy is completely spent. The most important point when delivering the animation is that the end of the pendulum remains in the same position for the first few keyframes, to achieve the feeling of drag.

The base comes to a stop, the energy is transferred, and the pendulum swings in the opposite direction.

The graph describes the X rotation of the pendulum base, which decreases over time as the pendulum swings backwards and forwards, until the energy is completely lost (the flat line at the end).
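That decaying back-and-forth curve is a damped oscillation, which can be written down directly: rotation = amplitude × e^(−damping·t) × cos(ω·t). The Python sketch below uses illustrative constants (not values read off my graph) to show the shrinking swings and the eventual flatline.

```python
import math

# Sketch: the decaying rotateX curve of a settling pendulum as a
# damped oscillation. Amplitude, damping, and frequency are
# illustrative constants, not values from the actual scene.

def pendulum_rotation(t, amplitude=45.0, damping=0.8, omega=6.0):
    """Rotation in degrees at time t (seconds): a cosine swing whose
    envelope shrinks exponentially until the curve flatlines."""
    return amplitude * math.exp(-damping * t) * math.cos(omega * t)


if __name__ == "__main__":
    for t in (0.0, 1.0, 2.0, 4.0, 8.0):
        print(f"t={t:>3}s  rotateX={pendulum_rotation(t):8.3f} deg")
```

Each later sample is bounded by a smaller envelope, matching the graph editor's shrinking peaks; by t = 8 s the value is a fraction of a degree, which is the flat line at the end of the curve.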

Successive breaking of the following joints, as depicted in the screenshot above: the pendulum takes on an awkward S shape as it swings from one side to the other, marking both extremes while forming a reversed letter C and a regular C, respectively.

Offsetting is an easy and quick way to get fast results. For the consecutive joints of the pendulum, the same key values are applied but offset in time, i.e. moved along the timeline by a certain number of frames for each following joint, with the largest offset on the end of the pendulum, as it comes to rest last. The screenshot shows the graph editor, with the X rotation curve for each of the following joints.
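The offset trick above boils down to copying one list of (frame, value) keys and shifting it along the timeline, with a bigger shift the further down the chain the joint sits. A minimal Python sketch (joint names, frame numbers, and rotation values are all illustrative, and this stands in for what the graph editor does rather than calling any Maya API):

```python
# Sketch: the same rotateX keys copied down a joint chain, each copy
# offset later in time so motion arrives last at the tip.

def offset_keys(keys, frames):
    """Shift every (frame, value) key later by `frames` frames."""
    return [(f + frames, v) for f, v in keys]


if __name__ == "__main__":
    # One swing cycle keyed on the root joint of the chain.
    base_keys = [(1, 0.0), (10, 35.0), (20, -20.0), (30, 0.0)]
    # Delay grows toward the tip: the end of the pendulum settles last.
    chain = {"joint1": 0, "joint2": 2, "joint3": 4, "joint_end": 6}
    for joint, delay in chain.items():
        print(joint, offset_keys(base_keys, delay))
```

Identical curves shifted in time are exactly what the stacked rotateX curves in the graph editor screenshot look like, and they produce the successive breaking of joints automatically.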
Animation has evolved tremendously over the years, with technological advancements playing a pivotal role in shaping its techniques and impact. Understanding the historical development of animation, alongside the role of cinematography and technology, allows us to appreciate the innovative progress the medium has undergone. Here’s a look at the key developments starting from the early 1900s, highlighting the technological and artistic shifts that influenced animation.
Early 1900s: The Dawn of Animation
1920s–1930s: The Rise of Hollywood and the Golden Age of Animation
1940s–1960s: The Expansion of Animation Techniques
1970s–1990s: The Digital Revolution and the Emergence of CGI
2000s–Present: The Integration of New Technologies