
AI Creative Computing Process Blog


Overview


This blog includes live updates on progress for SCAD’s AI 201 Course. This course covers utilizing AI systems, code, and tools to support creative processes.


As a VFX artist, I will be undergoing the coursework with the intention of researching and developing tools useful for artists in the VFX and animation industry - tools catered to solving common problems, furthering accessibility, and streamlining artists' productivity, not simply replacing the final output.


Quarter Question: “What happens when generated code and AI-assisted imagery are used to streamline VFX production?”


Below is week-by-week and project-based progress.



Week 2, Session 1 — 03/31/26


What I Did


This week we continued to research different tools and their applications. After developing ideas for AI-generated coding projects in Week 1, I shifted focus to generative models, researching how generative AI could streamline parts of the pipeline, such as:


• Quick concept iteration

• Face-cam character pose live link

• Render and texture pre-visualization

• Relighting

• Replacement / Paint-overs

• Roto


I linked ComfyUI to Claude on my device by adding an MCP server and linking it to an API key. To confirm the link worked, I experimented with generating an image from a ComfyUI model prompted solely through Claude.
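Under the hood, what the MCP link ultimately hands ComfyUI is a workflow graph in its JSON "API format", POSTed to the local server's /prompt endpoint (port 8188 by default). Here is a minimal Python sketch of that payload for a text-to-image graph — the checkpoint filename is a placeholder, and the node IDs/wiring mirror the default txt2img template rather than anything Claude generated for me:

```python
import json

def build_txt2img_workflow(checkpoint, prompt, width=512, height=512, seed=0):
    """Build a minimal ComfyUI text-to-image graph in API (JSON) format.

    Links are [node_id, output_index]; CheckpointLoaderSimple outputs
    MODEL (0), CLIP (1), and VAE (2).
    """
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": checkpoint}},
        "2": {"class_type": "CLIPTextEncode",          # positive prompt
              "inputs": {"clip": ["1", 1], "text": prompt}},
        "3": {"class_type": "CLIPTextEncode",          # negative prompt
              "inputs": {"clip": ["1", 1], "text": ""}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": width, "height": height, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": seed, "steps": 20, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "claude_test"}},
    }

# "z_image.safetensors" is a hypothetical filename for my installed model.
workflow = build_txt2img_workflow("z_image.safetensors",
                                  "a sunset over the ocean")
payload = json.dumps({"prompt": workflow})
```

Posting that payload to http://127.0.0.1:8188/prompt queues the render exactly the way Claude did for my sunset test.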


AI Interactions


Prompt: List the available models in my ComfyUI

Output: Claude listed the files in my root, Checkpoints, LoRAs, Text Encoders, VAE, and Diffusion Models folders.

Decision: This was to test whether Claude on my laptop could connect with my directories and ComfyUI. It proved that it could find the directory and my installed models.

Tag: @shift



Prompt: I have the Z-Image model. The files are in the correct folders. Build me a Z-Image text-to-image workflow and generate a photo of a sunset over the ocean.

Output: Claude successfully connected to ComfyUI and my models, and set up a model and prompt to render. The render in ComfyUI took around an hour and a half.

Decision: This was to test if Claude could connect to ComfyUI and my models and successfully queue up an image to render. It succeeded, but since I am on my laptop it took very long.

Tag: @shift



Prompt: Italy Venice Waterway. 3D render. Make render realistic. Maintain composition and colors. Add boat in the water. Boat in Venice waterway. Brown, small boat. Row boat in Venice waterway. Calm waters. Sunny day.

Output: Runway added the boat without touching any other part of the provided render.

Decision: This was a test to see if Runway (Gen-4 Turbo) could alter a rendered image and add animation to it without compromising its integrity. The provided image is a 3D render I had created in Houdini, completely without the use of generative AI. It successfully followed the task, but not to a level satisfactory for production (simple textures on the boat, poor light interactions, odd movement, composition does not lead the eye, no glass interaction, etc.).

Tag: @shift


What I Learned

These experiences helped me better understand the different models commonly used for VFX in the industry, and how to have them interact with one another. They provided insight into API keys and MCP servers, showed me the visual limits and styles of different models, and helped me understand troubleshooting.



Quarter Question Connection

This connects to the quarter question by clarifying which models may be helpful for which parts of the VFX pipeline, and how to integrate them properly.



What's Next

I will be narrowing down my concept for Project 1. I am leaning towards using AI-generated code at first to create a procedural work of art. At the same time, I will continue to familiarize myself with generative models so that I can later use them to bring a visual effects project to completion through iteration and ideation.





Week 1, Session 1 — Research and Project 1 Kickoff


What I Did

This week we were introduced to the first project: build a project tied to a creative computing concept related to our industry. Provided examples of concepts (filtered to those that apply to VFX) are:


Generative systems: Noise fields, particle systems, growth patterns, rule-based generation.

Algorithmic art: Recursion, fractals, cellular automata, L-systems. Complex visual behavior from simple rules.

Animation and motion: Frame-based rendering, physics simulation, easing, time-based composition.
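To make "complex visual behavior from simple rules" concrete: an L-system is just a string-rewriting loop. A quick Python sketch — the bracketed plant rule is the classic Lindenmayer textbook example, not code from the course:

```python
def lsystem(axiom, rules, iterations):
    """Expand an L-system by rewriting every symbol in parallel each pass."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching-plant grammar: F = draw forward, +/- = turn,
# [ and ] = push/pop the turtle state (where branches fork).
plant = lsystem("F", {"F": "F[+F]F[-F]F"}, 2)
```

One rule, two iterations, and the string already describes a recognizably plant-like branching curve once a turtle interprets it — the same density-from-simplicity that makes these systems appealing for procedural VFX assets.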




COMMON MODELS / ENVIRONMENTS USED FOR VFX


Claude AI

– Coding — Houdini VEX, Maya Python, MEL, pipeline scripting, etc.


Gemini

– Image generation: concept art, plate paint-overs, render mockups, ideation

– Free license with student account makes it more accessible


ChatGPT

– Coding

– Image generation: concept and iteration/ideation
– Less accessible — no current subscription


Midjourney

– Image generation

– Concept art

– Ideation


Runway

– Image and video generation — Roto, background removal, upscale, video generation


ComfyUI

– Node-based

– Main issue is accessibility (heavy models)

– Diverse range: image gen, 3D model gen, relight, text-to-video, plate paint-overs/removals, video replacements, beauty cleanups, composition separating, roto



USES FOR VFX / PROBLEMS IT CAN ADDRESS


Algorithmic Growth / Generation in Houdini

Tools: Claude / ChatGPT / Gemini & Houdini VEX

Use AI to generate code and complex equations for procedural/algorithmic patterns:

• Frost, plants, trees, vines, grass

• River networks

• Cracks (mud, paint, glass) — Pottery? Kintsugi?

• Crystals, snowflakes, lightning

• Veins (leaves, humans)

• Roads, fire paths — generate attributes for Pyro Simulation?
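As a toy version of the branching growth behind cracks, vines, or river networks, here is a Python sketch — the parameter names and values are my own illustrations; in practice the AI-generated version would be VEX operating on Houdini geometry:

```python
import math
import random

def grow_cracks(origin, angle, length, depth, spread=0.6, decay=0.7, rng=None):
    """Recursively grow a 2D crack/branch pattern as line segments.

    Each segment spawns two children with jittered headings and shorter
    lengths; the resulting point pairs could be written out and imported
    into Houdini as curves.
    """
    rng = rng or random.Random(0)  # seeded for repeatable growth
    if depth == 0 or length < 1e-3:
        return []
    end = (origin[0] + length * math.cos(angle),
           origin[1] + length * math.sin(angle))
    segments = [(origin, end)]
    for sign in (-1, 1):  # fork left and right from the segment tip
        child_angle = angle + sign * spread * rng.uniform(0.5, 1.0)
        segments += grow_cracks(end, child_angle, length * decay,
                                depth - 1, spread, decay, rng)
    return segments

segs = grow_cracks((0.0, 0.0), math.pi / 2, 1.0, depth=4)
```

Swapping the spread/decay values (or driving them from attributes) is exactly the kind of knob an artist-facing tool would expose.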


Create customizable controls for artistic direction

AI customizes tool/value input based on mood keywords from user

Example: "Generate a complex, busy section of rivers" → increases number of rivers, decreases width, raises connection frequency and branching
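A hypothetical sketch of that mood-keyword mapping in Python — the keyword list, parameter names, and value shifts are all illustrative, not from any existing tool:

```python
# Baseline river-network parameters (illustrative values).
DEFAULTS = {"river_count": 5, "width": 1.0, "branch_freq": 0.3}

# Each mood keyword nudges specific parameters by a delta.
MOOD_SHIFTS = {
    "complex": {"river_count": +4, "branch_freq": +0.3},
    "busy":    {"river_count": +3, "width": -0.4},
    "calm":    {"river_count": -2, "branch_freq": -0.15},
}

def params_from_prompt(prompt):
    """Start from defaults and apply every matched keyword's shifts."""
    params = dict(DEFAULTS)
    for word, shifts in MOOD_SHIFTS.items():
        if word in prompt.lower():
            for key, delta in shifts.items():
                params[key] = round(params[key] + delta, 3)
    return params

params = params_from_prompt("Generate a complex, busy section of rivers")
```

In the real tool the LLM would do the keyword-to-parameter interpretation itself; the point of the table is that the output stays as ordinary, inspectable slider values the artist can still override.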


Particle Velocity Easy Customization / Initialization

Tools: Claude / ChatGPT / Gemini & Houdini VEX

Generate code with adjustable sliders for velocity, quaternion adjustments, noise, intensities, and grouping.

Example: "Generate an overwhelming and fast explosion of particles from origin" → fast velocity, high noise, outward from 0,0,0
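A hedged Python sketch of that per-particle logic — a stand-in for what the generated VEX would do per point in a wrangle, with parameter names of my own choosing:

```python
import math
import random

def explosion_velocities(positions, speed=10.0, noise=0.2, seed=0):
    """Give each particle an outward-from-origin velocity plus jitter.

    Per point: v = normalize(P) * speed + uniform(-1, 1) * noise * speed.
    Points sitting exactly at the origin get jitter only (no direction).
    """
    rng = random.Random(seed)
    velocities = []
    for p in positions:
        mag = math.sqrt(sum(c * c for c in p)) or 1.0  # avoid divide-by-zero
        v = tuple(c / mag * speed + rng.uniform(-1.0, 1.0) * noise * speed
                  for c in p)
        velocities.append(v)
    return velocities
```

Exposing speed and noise as sliders (or mood keywords, as above) is the artist-facing layer; the math itself stays trivially auditable.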



Matte Painting Basher

Tool: ComfyUI

Provide source images with written instruction to integrate them into a matte painting / set extension


I’m continuing to research and expand this list—if this sparked any thoughts or if you have experience with other tools, I’d love to hear from you. Feel free to reach out at kfayenitti@gmail.com.












