
For this character, the creation pipeline followed a multi-step process. I began by producing a hand-drawn concept sketch. After photographing the drawing, I brought the image into Photoshop to clean up any unwanted marks, sketch lines, or background artefacts. Next, I ran it through a light pass of KreaAI—only around 1% strength—just enough to smooth out the lines and clarify the artwork without altering its style. This refinement step is important for the stages that follow.

For comparison: the left-hand side is my original drawing, and the right-hand side is the KreaAI output.
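
The cleanup and refinement happened entirely in Photoshop and KreaAI rather than in code, but the same kind of pass can be approximated in a script. Below is a minimal Pillow sketch of the idea; the file names, contrast factor, and threshold are illustrative assumptions, not values from my actual pipeline.

```python
# Illustrative sketch only: approximates the Photoshop cleanup pass in Pillow.
# File names and numeric values are hypothetical.
from PIL import Image, ImageEnhance

# Load the photographed sketch and work in greyscale.
img = Image.open("concept_sketch_photo.png").convert("L")

# Boost contrast so faint construction lines and paper texture separate
# from the intended line work.
img = ImageEnhance.Contrast(img).enhance(2.0)

# Push near-white background pixels to pure white while keeping dark lines.
# The 200 cutoff is an assumed value; tune it per scan.
cleaned = img.point(lambda p: 255 if p > 200 else p)

cleaned.save("concept_sketch_clean.png")
```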

Once the artwork was properly prepared, I used it as the input for Meshy AI (which I subscribe to). Because the concept art is my own original work, the resulting model remains fully under my licence, in line with Meshy AI's terms.
Working with image-based generation in Meshy has a learning curve. The input must be extremely precise, especially when specifying a correct T-pose. Details like hand orientation and finger separation are surprisingly important. When the fingers overlap in the drawing, or when the palms aren't facing downward, the resulting rig will often have incorrect wrist rotations. These issues aren't always obvious at first, but they cause significant deformation once the model is animated; a quick scripted check for this is sketched below.
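
One way to catch this early, once the rigged model reaches Maya, is to inspect the rotation and joint-orient values on the hand joints before animating. This is a minimal sketch; the "*Hand*" name pattern is an assumption about the generated skeleton's naming, not a guarantee.

```python
# Quick check sketch (run inside Maya's Python environment): print rotation
# and orient values on hand joints to spot bad wrist orientations early.
# The "*Hand*" name pattern is an assumption about the rig's joint naming.
import maya.cmds as cmds

for joint in cmds.ls("*Hand*", type="joint"):
    rot = cmds.getAttr(f"{joint}.rotate")[0]
    orient = cmds.getAttr(f"{joint}.jointOrient")[0]
    print(f"{joint}: rotate={rot}, jointOrient={orient}")
```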
Another common challenge with image-based generation is incomplete or distorted topology. The model might look fine at first glance, but closer inspection reveals missing geometry, merged limbs, or strange deformation—such as thighs fused together at the back of the model. These issues show why early stages matter: getting things right at the start saves a lot of cleaning later.
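
Because these flaws can hide behind a clean silhouette, it helps to run a quick scripted sanity check when the mesh first lands in Maya. The following is a minimal sketch using maya.cmds; it assumes the generated mesh is selected and only reports problems, since the actual repair work stayed manual.

```python
# Sanity-check sketch for an imported AI-generated mesh in Maya.
# Assumes the mesh transform is selected; runs in Maya's Python environment.
import maya.cmds as cmds

mesh = cmds.ls(selection=True)[0]

# Report density: very high face counts are common on generated models.
faces = cmds.polyEvaluate(mesh, face=True)
verts = cmds.polyEvaluate(mesh, vertex=True)
print(f"{mesh}: {faces} faces, {verts} vertices")

# Flag non-manifold geometry and lamina faces, the kind of hidden topology
# problems behind merged limbs or fused thighs.
bad_edges = cmds.polyInfo(mesh, nonManifoldEdges=True) or []
bad_verts = cmds.polyInfo(mesh, nonManifoldVertices=True) or []
lamina = cmds.polyInfo(mesh, laminaFaces=True) or []
print(f"non-manifold edges: {len(bad_edges)}, "
      f"non-manifold vertices: {len(bad_verts)}, lamina faces: {len(lamina)}")
```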
For texturing, I used Meshy AI's onboard texturing tool, driven by my own concept art.

After generating the mesh, I exported it and moved to Mixamo for auto-rigging. Mixamo allows for quick animation previews, which makes it easy to test how well the mesh deforms. However, Mixamo’s auto-skinning can be imperfect. Problems tend to appear around the elbows, wrists, and spine, where weights are often inaccurate. Because of that, I brought the rigged model into Maya to manually clean the mesh and adjust the skin weights. This part was particularly challenging because the auto-generated models often have high face counts, making precise selections difficult.
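
Most of the weight fixes were hand-painted, but small scripted passes took some of the pain out of the high face counts. The sketch below, again using maya.cmds, prunes the near-zero influences that auto-skinning tends to scatter across a dense mesh; it assumes the selected mesh carries a single skinCluster, and the 0.01 threshold is an assumed starting value, not a universal setting.

```python
# Helper sketch for tidying auto-generated skin weights in Maya.
# Assumes the rigged mesh is selected and has exactly one skinCluster.
import maya.cmds as cmds

mesh = cmds.ls(selection=True)[0]

# Find the skinCluster in the mesh's construction history.
skin = cmds.ls(cmds.listHistory(mesh), type="skinCluster")[0]

# Prune tiny influence weights (0.01 is an assumed threshold) so hand-painting
# around elbows, wrists, and spine starts from cleaner data.
cmds.skinPercent(skin, f"{mesh}.vtx[*]", pruneWeights=0.01)

# Spot-check a vertex: list its remaining influences and weights.
influences = cmds.skinCluster(skin, query=True, influence=True)
weights = cmds.skinPercent(skin, f"{mesh}.vtx[0]", query=True, value=True)
for joint, w in zip(influences, weights):
    if w > 0.0:
        print(f"{joint}: {w:.3f}")
```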


Overall, AI tools streamline parts of the workflow—generation, rigging, and iteration become faster—but they aren’t a complete solution. Clear inputs and good technical understanding are still essential, especially when the pipeline spans multiple applications.
For this project, creating custom characters was important for both the artistic direction and the installation's interactive feedback loop. While I used default rigs that are fully optimized, it was important to me that the art direction involved my own input. Combining coding, interface design, and character creation is complex, so leveraging AI for parts of the workflow allowed me to bring the visual elements to life without compromising quality or originality.