Eric Chen
HCI · Creativity Support · Meditations on Computation
Portfolio Slides

L.ink

interactive L-system drawing tool

Brown University HCI Lab | Advisor: Jeff Huang

2024.02—now

Exploring how controllability and surprise impact the creative experience of procedural illustrators

Idea
L.ink
I joined the Brown HCI Lab with an idea inspired by my interest in the generative grammars known as L-systems...

In an L-system, the user has no control beyond specifying the initial state. As a result, they're detached from the growth process.
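To make that concrete, here is a minimal sketch of how an L-system unfolds once the initial state is set; the grammar below is the textbook fractal-plant example, not L.ink's actual rules:

```python
# Minimal L-system expansion: the user supplies only an axiom and
# rewrite rules; every later generation unfolds on its own.
# (Textbook fractal-plant grammar, for illustration only.)

def expand(axiom: str, rules: dict, generations: int) -> str:
    """Rewrite every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
print(expand("X", rules, 3))  # the string a turtle renderer would draw
```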

I wanted to make the entire growth process an interactive experience by making L-systems into a procedural illustration tool

What new stories could be told with living, growing ink?

Implementation
interactive growing ink
L.ink

Implementation
visual ink-editor with live preview
L.ink

L-systems are notoriously difficult to author due to their unpredictable nature. There's a theoretical reason for this: the inverse problem of inferring an L-system that produces a desired structure is NP-complete.



I got around this theoretical boundary with inspiration from Bret Victor's guiding principle that creators need an immediate connection to what they create.



With live visual feedback, users can make incremental adjustments toward their desired ink style.

Outcomes/Future
L.ink

L.ink places the artist and their procedural ink in a continuous feedback loop.

Unpredictable ink growth affects the artist's hand movement

Artist's hand movement determines ink growth

Through L.ink, I explored a strategy for balancing control and surprise within this feedback loop. I published a first-author extended abstract and have a first-author paper in submission based on my work:

L.ink: Illustrating Controllable Surprise with L-System Based Strokes (link)

CHI 2025 Late-Breaking Work


L.ink: Procedural Ink Growth for Controllable Surprise (in submission)

UIST 2025


PULL

live autoencoding with interactive constraints

Rhode Island School of Design: Drawing with Computers + Personal Project

2024.11—2024.12

Exploring the tensions in representations of meaning

Idea
PULL

This project started with the observation that adjacency matrices provide a natural bridge from images to graphs: any image, read as a matrix, defines a weighted network. Can the data of an image be encoded within the tensions of that network?

If so, could we create an interactive system where pulling on the network of nodes deforms and contorts the image?

I imagined this as an almost grotesque experience of forcefully pulling apart the meaning of something.

Implementation
pulling apart visual meaning
PULL

After some experimentation, I found that encoding raw pixel values didn't create the effect I had in mind: distortions were local and rectilinear.


I wondered if embedding in the latent space of a neural network would create a more interesting "semantic" distortion, so I trained an autoencoder on the (tiny) ImageNet dataset and wrote a Python backend that decodes the link lengths of a D3.js graph simulation in real time.
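The decode step is simple in spirit. Here is a minimal sketch, assuming a trained PyTorch decoder; the stand-in architecture, LATENT_DIM, and the normalization constants are illustrative, not my exact implementation:

```python
# Sketch: map the link lengths of the force-directed graph onto a
# latent vector, then decode that vector back into pixels.
import torch
import torch.nn as nn

LATENT_DIM = 64
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 32 * 32), nn.Sigmoid())  # stand-in

def decode_link_lengths(lengths, rest_length=30.0):
    """Each link's stretch relative to its rest length becomes one
    latent coordinate; pulling nodes apart pushes the code around."""
    z = torch.tensor(lengths[:LATENT_DIM]) / rest_length - 1.0
    with torch.no_grad():
        return decoder(z).reshape(32, 32)  # grayscale frame for the client

# lengths would be streamed from the D3.js simulation, e.g. over a websocket
frame = decode_link_lengths([30.0] * LATENT_DIM)
```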


I also decided to try using the webcam feed as input rather than a still image, to create a mirror-like experience. Here are the results using two different autoencoders that I trained:

Implementation
latent identity
PULL

I still wanted to enable a more semantically meaningful distortion of the image—I imagined watching my face morph into a different face, changing its identity.


I retrained on a dataset containing only faces (CelebA). I initially experimented with my prior autoencoder but ended up finding more success with a pre-trained variational autoencoder, which was able to learn a smoother latent space.


I also created a new interface to allow the user to morph their identity. Watch as I change the tunable scale factors below:


[Interface diagram: latent variables with tunable scale factors; webcam feed beside the live decoded image]
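A minimal sketch of the interface's core follows, with stand-in encode/decode functions in place of the pre-trained VAE (the dimension counts and function names are illustrative):

```python
# Each slider multiplies one latent dimension of the encoded webcam
# frame before decoding; scales of 1.0 reproduce the input.
import numpy as np

def vae_encode(frame):                      # stand-in for the real VAE
    return frame.reshape(-1)[:512].astype(np.float32)

def vae_decode(z):                          # stand-in for the real VAE
    return np.resize(z, (64, 64))

def morph(frame, scales):
    """Scale the latent code dimension by dimension, then decode."""
    return vae_decode(vae_encode(frame) * scales)

frame = np.random.rand(64, 64)              # stand-in webcam frame
out = morph(frame, np.ones(512))            # push entries away from 1 to morph
```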

SplatBrush

VR painting with learned Gaussians

Brown: CV for Graphics and Interaction | Team: Eric Chen, Josh Yang, Serena Pulopot, Zihan Zhu

2024.11—2024.12

Enhancing the materiality of VR drawing applications

Idea
SplatBrush

In class, we surveyed recent state-of-the-art papers on 3D Gaussian Splatting—I proposed a project to create a 3D Gaussian VR painting tool.

I saw massive room for improvement over existing VR painting tools, where paint strokes are approximated by triangle meshes that look like awkward tubes and ribbons.

3D Gaussians can approximate a wide range of real-world materials. I imagined a brush that allows you to draw with natural textures like grass and tree bark.

Implementation
SplatBrush

This project involved a lot of moving parts—here's a broad overview:

Outcomes
SplatBrush

After running inference on the MVSGaussian model and automatically removing excessively large splats and capture ring artifacts, we were able to paint with these textures:

To put our system to the test (and just for fun), we created a large-scale scene in the lobby of Brown's CS building!
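The cleanup pass mentioned above is simple in spirit. Here is a minimal sketch, assuming splats stored as arrays of centers and per-axis scales; the array layout and thresholds are illustrative, not our exact values:

```python
# Drop Gaussians whose scale is an outlier, plus those sitting on the
# circular capture path that leaves ring-shaped artifacts.
import numpy as np

def clean_splats(centers, scales, ring_radius, ring_tol=0.05, scale_pct=99.0):
    """Return a boolean mask of splats to keep."""
    max_scale = scales.max(axis=1)
    too_big = max_scale > np.percentile(max_scale, scale_pct)
    dist_xy = np.linalg.norm(centers[:, :2], axis=1)  # ring lies in the XY plane
    on_ring = np.abs(dist_xy - ring_radius) < ring_tol
    return ~(too_big | on_ring)

centers = np.random.randn(1000, 3)           # stand-in splat centers
scales = np.abs(np.random.randn(1000, 3))    # per-axis Gaussian scales
keep = clean_splats(centers, scales, ring_radius=2.0)
```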

Visual Code Studio

a knockoff gallery of bona-fide art

Rhode Island School of Design: Drawing with Computers + Personal Project

2024.10—2024.12

Misinterpreting source code as drawing instructions

Idea/Experiments
VCS

Systems like P5.js, Processing, and openFrameworks empower us to transform code into drawings. But complexity is hidden within code, regardless of intention. What if we could build a system to reveal the inherent complexity in source code by mapping it to a visual representation?

I wanted to repurpose utility for wonder.

My first idea was to create a kind of concrete poetry that procedurally generates itself: the source code is both the procedural instructions for where to place content AND the content itself. I created some small demos by blurring and thresholding source code text in the color palette of VSCode, and arranging it according to the structure of the original source. One surprising result was the number of cartoonish faces that emerged from this process!
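A minimal sketch of those demos using Pillow; the blur radius, threshold, and grayscale palette are stand-ins for the VSCode-colored originals:

```python
# Render source code as text, blur it, then threshold the result.
from PIL import Image, ImageDraw, ImageFilter

source = open(__file__).read()            # the script draws itself

img = Image.new("L", (800, 600), 255)
ImageDraw.Draw(img).multiline_text((10, 10), source, fill=0)
blurred = img.filter(ImageFilter.GaussianBlur(radius=3))
binary = blurred.point(lambda p: 0 if p < 200 else 255)  # threshold
binary.save("code_poem.png")
```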


But I wasn't satisfied because the results seemed chaotic rather than structured as I wanted.

Implementation
VCS

I wanted to establish a direct connection between a single source code instruction and a single drawing instruction. This led me to ponder lower-level instructions in assembly or bytecode.


I wrote a program to extract bytecode from any Python file, allowing me to map each bytecode instruction to a pixel operation—for example, LOAD_CONST moves the current position and places a pixel, CALL_FUNCTION changes the current drawing color, etc. The parameters of the bytecode instructions become parameters governing how much to change color or how far to move.
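Here is a minimal sketch of that pipeline using Python's standard dis module; the opcode-to-pixel mapping below is a simplified stand-in for my full table:

```python
# Walk a file's bytecode and misinterpret each instruction as a pixel
# operation. (Top-level code only; the mapping is a simplified stand-in.)
import dis
from PIL import Image

def draw_bytecode(path, size=256):
    code = compile(open(path).read(), path, "exec")
    img = Image.new("RGB", (size, size), "black")
    x = y = size // 2
    color = (255, 255, 255)
    for ins in dis.get_instructions(code):
        arg = ins.arg or 0
        if ins.opname.startswith("LOAD"):       # move and place a pixel
            x = (x + arg + 1) % size
            img.putpixel((x, y), color)
        elif ins.opname.startswith("CALL"):     # rotate the drawing color
            color = (color[2], color[0], (color[1] + arg) % 256)
        else:                                   # everything else steps down
            y = (y + 1) % size
    return img

draw_bytecode(__file__).save("bytecode_drawing.png")
```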


The result on the left is the output of my visual interpreter running a puzzle-solver program; the result on the right comes from the source code of the famous Breakout game.

Video Analogies

patch-based style transfer

Brown: Computational Photography | Team: Eric Chen, Seik Oh, Ziyan Liu

2023.11—2023.12

Dynamically mapping painterly aesthetics onto real-world scenes

Background
VA

Our team extended the Image Analogies work of Hertzmann et al. by implementing style transfer from a still reference image onto dynamic video.
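At its core, the technique replaces each patch of a frame with a well-matching patch of the style image. This greatly simplified sketch shows only that matching step; the real pipeline adds the coherence terms from the paper and runs frame by frame:

```python
# Greatly simplified patch matching: each patch of the input frame is
# replaced by the candidate style patch with the closest luminance.
import numpy as np

def transfer(frame, style, p=8, n_candidates=200):
    """frame, style: grayscale arrays; frame dims divisible by p."""
    ys = np.random.randint(0, style.shape[0] - p, n_candidates)
    xs = np.random.randint(0, style.shape[1] - p, n_candidates)
    cands = [style[y:y + p, x:x + p] for y, x in zip(ys, xs)]
    out = np.zeros_like(frame)
    for y in range(0, frame.shape[0], p):
        for x in range(0, frame.shape[1], p):
            patch = frame[y:y + p, x:x + p]
            best = min(cands, key=lambda c: np.sum((c - patch) ** 2))
            out[y:y + p, x:x + p] = best
    return out

frame = np.random.rand(64, 64)    # stand-in for one video frame
style = np.random.rand(128, 128)  # stand-in for the reference painting
stylized = transfer(frame, style)
```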


This work was the culmination of a semester learning all manner of image manipulation techniques. See below, from left to right: constructing HDR images from an exposure bracket, Poisson image blending, patch-based texture transfer, and... creating Abe Lincoln as a piece of bread.

My favorite concept from the class was the discrete Fourier transform of an image. I am fascinated by the idea that every image has a dual form that encodes exactly the same information in frequency space.
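That dual form is a couple of numpy calls away; a minimal sketch, with a round trip to check that nothing is lost:

```python
# The frequency-space dual of an image: 2D FFT, shifted so low
# frequencies sit at the center, on a log scale for visibility.
import numpy as np

def spectrum(gray):
    f = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(f))

# The inverse transform recovers the original exactly:
gray = np.random.rand(128, 128)
assert np.allclose(gray, np.fft.ifft2(np.fft.fft2(gray)).real)
```

Here is my face in frequency space: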

Outcomes
VA

Check out the final results of the video analogies project below!


Waterfall in watercolor style


Mountain range timelapse in oil painting style


Hyacinth Labyrinth

procedural hedge maze game

Brown: Computer Graphics | Team: Eric Chen, Faisal Zaghloul, Zach Weacher, Dominic Chui

2023.11—2023.12

Get lost in a winding maze of hedges

Implementation
generating diverse plant geometry
HL

Our team set out to build an explorable hedge maze, generated with Wilson's algorithm and decorated entirely with procedurally generated plants.
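For the curious, here is a minimal sketch of Wilson's algorithm on a grid: loop-erased random walks yield a uniformly random spanning tree, whose edges become the maze's corridors (the grid encoding is illustrative):

```python
# Wilson's algorithm via the last-exit trick: overwriting each cell's
# exit direction during the walk performs the loop erasure.
import random

def wilson_maze(w, h):
    cells = [(x, y) for x in range(w) for y in range(h)]
    in_tree = {random.choice(cells)}
    corridors = set()
    for start in cells:
        if start in in_tree:
            continue
        exit_dir = {}
        cur = start
        while cur not in in_tree:          # walk until the tree is hit
            x, y = cur
            nbrs = [(nx, ny) for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                    if 0 <= nx < w and 0 <= ny < h]
            nxt = random.choice(nbrs)
            exit_dir[cur] = nxt            # later visits overwrite: loop erasure
            cur = nxt
        cur = start                        # retrace the loop-erased path
        while cur not in in_tree:
            in_tree.add(cur)
            corridors.add(frozenset((cur, exit_dir[cur])))
            cur = exit_dir[cur]
    return corridors                       # open walls; the rest stay hedges

corridors = wilson_maze(8, 8)
```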


As the resident L-system enthusiast, I wrote a custom L-system engine in Blender to create diverse foliage. The engine generates 3D models of L-systems with configurable branching characteristics, leaf size, stem radius, etc.
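A stripped-down sketch of the Blender side, interpreting an expanded L-system string as a 3D turtle path and emitting an edge skeleton; the real engine also varies stem radius, leaf size, and so on, and this must run inside Blender's Python:

```python
# Interpret an L-system string as a turtle path and build an edge
# skeleton mesh in Blender. (Simplified: one rotation axis, no radii.)
import math
import bpy
from mathutils import Matrix, Vector

def build_skeleton(s, step=0.2, angle=math.radians(25)):
    pos, heading = Vector((0, 0, 0)), Vector((0, 0, 1))
    verts, edges, stack, idx = [pos.copy()], [], [], 0
    for ch in s:
        if ch == "F":                       # grow one segment forward
            pos = pos + heading * step
            verts.append(pos.copy())
            edges.append((idx, len(verts) - 1))
            idx = len(verts) - 1
        elif ch in "+-":                    # pitch the heading
            sign = 1 if ch == "+" else -1
            heading = Matrix.Rotation(sign * angle, 3, "X") @ heading
        elif ch == "[":                     # push branch state
            stack.append((pos.copy(), heading.copy(), idx))
        elif ch == "]":                     # pop back to the branch point
            pos, heading, idx = stack.pop()
    mesh = bpy.data.meshes.new("plant")
    mesh.from_pydata([tuple(v) for v in verts], edges, [])
    obj = bpy.data.objects.new("plant", mesh)
    bpy.context.collection.objects.link(obj)
    return obj
```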


Here are some plants generated with my engine!

Pen Plotting

experiments with materiality

Rhode Island School of Design: Drawing with Computers + Personal Project

2023.10—2023.12

Bringing computational work back into a physical medium

Experiments
PP

Working with a pen plotter challenged (and delighted) me in a new way.


There's something satisfying about seeing a computational work rendered with pen and paper. I tend to work with purely digital media, but this experience motivated me to engage with physical materials and fabrication in the future.

Martial Arts

freedom of movement

ATA Taekwondo 4th Degree Black Belt, UCLA Taekwondo President, World Champion Demo Team Captain

2003.01—now

Pushing physical limits, expression through action

Experience
MA

I have trained in Taekwondo alongside my whole family since I was 3 years old. My brother, parents, and I all earned our black belts together.


I specialize in flips, spins, and acrobatic kicks (martial arts tricking) and captained an ATA demonstration team, leading us to win the World Championships and perform live on ESPN.


In undergrad, I was elected president of UCLA's Club Taekwondo team with over 100 members.

me!