In an L-system, the user has no control beyond specifying the initial state. As a result, they're detached from the growth process.
I wanted to turn the entire growth process into an interactive experience by reimagining L-systems as a procedural illustration tool.
What new stories could be told with living, growing ink?
L-systems are notoriously difficult to author because of their unpredictable nature. There's a theoretical reason for this: working backward from a desired result to an L-system that produces it is an NP-complete problem.
I got around this theoretical boundary with inspiration from Bret Victor's guiding principle that creators need an immediate connection to what they create.
With live visual feedback, users can make incremental adjustments toward their desired ink style.
L.ink places the artist and their procedural ink in a continuous feedback loop.
Unpredictable ink growth affects the artist's hand movement
Artist's hand movement determines ink growth
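For readers who haven't met L-systems before, here is a minimal sketch of the classic rewriting step that everything above builds on (illustrative only, not the L.ink engine): the axiom and rules are fixed up front, and every generation then unfolds with no further input from the user.

```python
# Minimal L-system rewriting: once the axiom and rules are set,
# growth proceeds mechanically, with no input from the user.
def rewrite(axiom: str, rules: dict[str, str], generations: int) -> str:
    """Apply the production rules to every symbol, `generations` times."""
    state = axiom
    for _ in range(generations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

# A standard branching grammar: F = draw forward, +/- = turn, [ ] = push/pop the turtle state.
print(rewrite("F", {"F": "F[+F]F[-F]F"}, 2))
# F[+F]F[-F]F[+F[+F]F[-F]F]F[+F]F[-F]F[-F[+F]F[-F]F]F[+F]F[-F]F
```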
Through L.ink, I explored a strategy for balancing control and surprise within this feedback loop. I published a first-author extended abstract and have a first-author paper in submission based on my work:
L.ink: Illustrating Controllable Surprise with L-System Based Strokes (link)
CHI 2025 Late-Breaking Work
L.ink: Procedural Ink Growth for Controllable Surprise (in submission)
UIST 2025
This project started with the observation that adjacency matrices provide a natural connection from images to graphs. Can the data of an image be encoded within the tensions of a network?
If so, could we create an interactive system where pulling on the network of nodes deforms and contorts the image?
I imagined this as an almost grotesque experience of forcefully pulling apart the meaning of something.
After some experimentation I found that encoding raw pixel values didn't create the effect I had in mind—distortions were local and rectilinear.
I wondered if embedding in the latent space of a neural network would create a more interesting "semantic" distortion. I trained an autoencoder on the (tiny) ImageNet dataset and wrote a Python backend to live-decode the link lengths of a D3.js graph simulation.
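To make the pipeline concrete, here is one plausible way to wire the graph simulation to the decoder (a rough sketch with hypothetical names and shapes, not the actual backend): the amount each link is stretched or compressed perturbs one latent variable, and the decoder turns the resulting latent code back into an image.

```python
import numpy as np
import torch

def decode_link_lengths(link_lengths, rest_lengths, decoder, latent_scale=1.0):
    """Turn the graph's current spring lengths into a latent code and decode it.

    link_lengths -- current distances between connected nodes (from the browser)
    rest_lengths -- link lengths at equilibrium, i.e. the undistorted encoding
    decoder      -- trained decoder network mapping (1, latent_dim) -> (1, C, H, W)
    """
    # Interpret stretching or compressing each link as perturbing one latent variable.
    strain = (np.asarray(link_lengths) - np.asarray(rest_lengths)) / np.asarray(rest_lengths)
    z = latent_scale * torch.tensor(strain, dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        image = decoder(z)                      # (1, C, H, W), values in [0, 1]
    return image.squeeze(0).permute(1, 2, 0).numpy()
```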
I also decided to try using the webcam feed as input rather than a still image, to create a mirror-like experience. Here are the results using two different autoencoders that I trained:
I still wanted to enable a more semantically meaningful distortion of the image—I imagined watching my face morph into a different face, changing its identity.
I retrained on a dataset containing only faces (CelebA). I initially experimented with my prior autoencoder but ended up finding more success with a pre-trained variational autoencoder, which was able to learn a smoother latent space.
I also created a new interface to allow the user to morph their identity. Watch as I change the tunable scale factors below:
latent variables →
← tunable scale
webcam feed →
← live decoded image
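The tunable scales boil down to a simple operation on the latent code, sketched below with hypothetical method names (it assumes a VAE exposing encode/decode, not the exact interface of the pre-trained model I used): each latent variable of the encoded webcam frame is multiplied by a user-controlled factor before decoding.

```python
import torch

def morph(frame, vae, scales):
    """Encode a webcam frame, scale each latent variable, and decode the result.

    frame  -- (1, C, H, W) tensor in [0, 1]
    vae    -- VAE assumed to expose encode() -> (mu, logvar) and decode()
    scales -- (latent_dim,) tensor of per-variable scale factors set by the user
    """
    with torch.no_grad():
        mu, _logvar = vae.encode(frame)   # use the latent mean for a stable image
        z = mu * scales                   # exaggerate or suppress each latent variable
        return vae.decode(z).clamp(0, 1)
```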
In class, we surveyed recent state-of-the-art papers on 3D Gaussian Splatting, and I proposed a project to create a 3D Gaussian VR painting tool.
I saw massive room for improvement over existing VR painting tools, where paint strokes are approximated by triangle meshes that look like awkward tubes and ribbons.
3D Gaussians can approximate a wide range of real-world materials. I imagined a brush that allows you to draw with natural textures like grass and tree bark.
This project involved a lot of moving parts—here's a broad overview:
After running inference on the MVSGaussian model and automatically removing excessively large splats and capture ring artifacts, we were able to paint with these textures:
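The automatic cleanup is conceptually a pair of filters over the predicted Gaussians. The sketch below shows the idea with generic field names and thresholds (assumptions, not MVSGaussian's actual output format): cull splats whose scale is too large, and cull points far outside the captured texture patch.

```python
import numpy as np

def clean_splats(means, scales, max_extent=0.5, max_radius=3.0):
    """Return a boolean mask keeping well-behaved splats.

    means  -- (N, 3) splat centers
    scales -- (N, 3) per-axis Gaussian scales
    """
    small_enough = scales.max(axis=1) < max_extent   # drop oversized, blobby splats
    near_center = np.linalg.norm(means - means.mean(axis=0), axis=1) < max_radius  # drop far-out ring debris
    return small_enough & near_center
```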
To put our system to the test (and just for fun), we created a large-scale scene in the lobby of Brown's CS building!
Systems like P5.js, Processing, and openFrameworks empower us to transform code into drawings. But complexity is hidden within code, regardless of intention. What if we could build a system to reveal the inherent complexity in source code by mapping it to a visual representation?
I wanted to repurpose utility for wonder.
My first idea was to create a kind of concrete poetry that procedurally generates itself: the source code is both the procedural instructions for where to place content AND the content itself. I created some small demos by blurring and thresholding source code text in the color palette of VSCode, and arranging it according to the structure of the original source. One surprising result was the number of cartoonish faces that appeared from this process!
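The blur-and-threshold step is simple image processing; a minimal PIL version looks roughly like this (the font, palette, and thresholds here are placeholders, not the exact demo code):

```python
from PIL import Image, ImageDraw, ImageFilter

def blob_from_source(source: str, size=(600, 400), threshold=96):
    """Render source text, blur it, and threshold it into a solid ink-like shape."""
    canvas = Image.new("L", size, 0)                                   # grayscale, black background
    ImageDraw.Draw(canvas).multiline_text((10, 10), source, fill=255)  # draw the code as text
    blurred = canvas.filter(ImageFilter.GaussianBlur(radius=4))        # smear characters together
    return blurred.point(lambda p: 255 if p > threshold else 0)        # hard silhouette
```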
But I wasn't satisfied: the results felt chaotic rather than structured in the way I wanted.
I wanted to establish a direct connection between a single source code instruction and a single drawing instruction. This led me to ponder lower-level instructions in assembly or bytecode.
I wrote a program to extract bytecode from any Python file, allowing me to map each bytecode instruction to a pixel operation—for example, LOAD_CONST moves the current position and places a pixel, CALL_FUNCTION changes the current drawing color, etc. The parameters of the bytecode instructions become parameters governing how much to change color or how far to move.
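Stripped of details, the mapping looks something like the sketch below, built on Python's dis module (the opcode-to-action table here is a toy version; the actual interpreter covers far more opcodes):

```python
import dis

def trace_drawing(source: str):
    """Compile Python source and turn its bytecode into abstract drawing commands."""
    code = compile(source, "<viz>", "exec")
    x, y, color = 0, 0, 0
    commands = []
    for ins in dis.get_instructions(code):
        arg = ins.arg or 0
        if ins.opname == "LOAD_CONST":
            x, y = x + 1 + arg % 5, y + 1              # move by an amount derived from the argument
            commands.append(("plot", x, y, color))
        elif ins.opname in ("CALL_FUNCTION", "CALL"):  # opcode name differs across Python versions
            color = (color + 1 + arg) % 256            # shift the current drawing color
        elif ins.opname.startswith("JUMP"):
            x, y = 0, y + 2                            # jumps start a new row
    return commands

print(trace_drawing("print(sum([1, 2, 3]))")[:5])
```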
The result on the left is the output of my visual interpreter running a puzzle solver program. The result on the right was generated from the source code of the famous Breakout game.
Our team extended the work of Hertzmann et al. by implementing style transfer from a still reference image onto a dynamic video.
This work was the culmination of a semester learning all manner of image manipulation techniques. See below, from left to right: constructing HDR images from an exposure bracket, Poisson image blending, patch-based texture transfer, and... creating Abe Lincoln as a piece of bread.
My favorite concept from the class was the discrete Fourier transform of an image. I am fascinated by the idea that every image has a dual form that encodes exactly the same information in frequency space. Here is my face in frequency space:
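For the curious, that frequency-space dual form comes from the standard 2D FFT recipe, shown here in NumPy (the textbook computation, not my original course code):

```python
import numpy as np

def log_magnitude_spectrum(gray_image: np.ndarray) -> np.ndarray:
    """Return the log-magnitude of the 2D DFT, with low frequencies centered."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))   # complex frequency coefficients
    return np.log1p(np.abs(spectrum))                      # log scale so detail is visible
```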
Check out the final results of the video analogies project below!
Waterfall in watercolor style
Mountain range timelapse in oil painting style
Our team set out to build an explorable procedural hedge maze, generated with Wilson's algorithm and decorated entirely with procedurally generated plants.
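For reference, Wilson's algorithm carves a maze by running loop-erased random walks until every cell joins the growing tree. A compact sketch of the idea (plain Python, not our Blender integration):

```python
import random

def wilson_maze(width, height, seed=None):
    """Return a set of passages: frozensets of two adjacent (x, y) cells."""
    rng = random.Random(seed)
    cells = [(x, y) for x in range(width) for y in range(height)]
    in_maze = {rng.choice(cells)}                 # the tree starts as one random cell
    passages = set()

    def neighbors(c):
        x, y = c
        steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [(nx, ny) for nx, ny in steps if 0 <= nx < width and 0 <= ny < height]

    for start in cells:
        if start in in_maze:
            continue
        # Random walk: remember only the last direction taken from each cell (this erases loops).
        walk = {}
        cell = start
        while cell not in in_maze:
            nxt = rng.choice(neighbors(cell))
            walk[cell] = nxt
            cell = nxt
        # Retrace the loop-erased path and add it to the maze.
        cell = start
        while cell not in in_maze:
            nxt = walk[cell]
            passages.add(frozenset((cell, nxt)))
            in_maze.add(cell)
            cell = nxt
    return passages
```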
As the resident L-system enthusiast, I wrote a custom L-system engine in Blender to create diverse foliage. The engine generates 3D models of L-systems with configurable branching characteristics, leaf size, stem radius, etc.
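The core of such an engine is a turtle interpreter that walks the expanded L-system string and emits branch segments; a simplified sketch is below (single rotation axis and made-up parameter names for brevity; the real engine handles full 3D orientation and builds Blender geometry from the segments).

```python
import math

def interpret(lstring, step=1.0, angle=math.radians(25), radius=0.05, taper=0.85):
    """Return ((x, y, z), (x, y, z), radius) branch segments from an L-system string."""
    pos, yaw, r = (0.0, 0.0, 0.0), 0.0, radius
    stack, segments = [], []
    for symbol in lstring:
        if symbol == "F":                                   # grow a stem segment
            end = (pos[0] + step * math.sin(yaw), pos[1], pos[2] + step * math.cos(yaw))
            segments.append((pos, end, r))
            pos = end
        elif symbol == "+":
            yaw += angle
        elif symbol == "-":
            yaw -= angle
        elif symbol == "[":                                 # start a child branch
            stack.append((pos, yaw, r))
            r *= taper                                      # child stems get thinner
        elif symbol == "]":                                 # return to the parent branch
            pos, yaw, r = stack.pop()
    return segments
```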
Here are some plants generated with my engine!
Working with a pen plotter challenged (and delighted) me in a new way.
There's something satisfying about seeing a computational work rendered with pen and paper. I tend to work with purely digital media, but this experience motivated me to engage with physical materials and fabrication in the future.
I've trained in Taekwondo alongside my whole family since I was 3 years old. My brother, parents, and I all earned our black belts together.
I specialize in flips, spins, and acrobatic kicks (martial arts tricking) and captained an ATA demonstration team, leading us to win the World Championships and perform live on ESPN.
In undergrad, I was elected president of UCLA's Club Taekwondo team with over 100 members.
me!