Eric Chen
HCI Research · Creativity Support · Computational Art
Portfolio Slides

L.ink

 procedural ink growth for controllable surprise

Brown University | Advisor: Jeff Huang

2024.02—2025.04

First-author research published as a UIST 2025 paper and CHI EA 2025 LBW

Motivation
I joined the Brown HCI Lab with an idea inspired by my interest in the formal grammars known as L-systems...

In an L-system, the user has no control beyond specifying the initial state. As a result, they're detached from its growth.
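
For concreteness, here is a minimal sketch of how an L-system grows, using Lindenmayer's classic algae rules (an illustrative example, not L.ink's code):

    # Lindenmayer's classic "algae" L-system: once the axiom and rules
    # are fixed, every rewriting step is fully determined, with no
    # further input from the user.
    rules = {"A": "AB", "B": "A"}
    state = "A"  # the axiom (initial state)
    for _ in range(4):
        state = "".join(rules.get(s, s) for s in state)
        print(state)  # AB, ABA, ABAAB, ABAABABA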

I wanted to make the entire growth process an interactive experience by integrating L-systems into a procedural illustration tool.

I wondered how reacting to unpredictable growth would impact the creative experience of illustrators.

Implementation
interactive growing ink

Implementation
visual ink-editor with live preview

L-systems are notoriously difficult to author. (The inverse problem of finding rules that produce a desired structure is tied to an NP-complete problem.)



I addressed this challenge with inspiration from Bret Victor's guiding principle that creators need an immediate connection to what they create.



With live visual feedback, users can make incremental adjustments toward their desired ink style.

Outcomes

L.ink places the artist and their procedural ink in a continuous feedback loop.

Unpredictable ink growth affects the artist's hand movement

The artist's hand movement determines further ink growth

Through a 12-participant user study spanning novice to expert illustrators, I explored a strategy for balancing control and surprise within this feedback loop. I published a first-author paper and a first-author extended abstract reporting our findings:

L.ink: Procedural Ink Growth for Controllable Surprise (UIST 2025)

Eric Nai-Li Chen, Joshua Kong Yang, Jeff Huang, and Tongyu Zhou



L.ink: Illustrating Controllable Surprise with L-System Based Strokes (CHI EA 2025)

Eric Nai-Li Chen, Tongyu Zhou, Joshua Kong Yang, and Jeff Huang


Semantic Sliders

 towards expressive and controllable AI interfaces

Adobe Research Internship | Advisors: Li-Yi Wei, Rubaiat Habib Kazi

2025.06—2025.09

Designing high-level semantic sliders for intuitive and explorable particle effects customization

Motivation

I started my internship excited to leverage generative AI to enable authoring a broad range of dynamic effects. I initially envisioned an extension of L.ink that allowed artists to draw with elemental or magical brushes, with applications in motion graphics and interactive storytelling. (See my original video proposal here.)


The first challenge was finding a representation expressive enough to enable a diverse range of special effects. Following an exploration of animated shaders, we settled on particle systems, where a large number of simple particles come together to create diverse and compelling effects. One strength of particle systems is that you can edit them interactively—try out a simplified example below!
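
As a rough sketch of what such a demo simulates, here is a minimal particle system with five made-up tunable parameters (illustrative Python, not the internship prototype):

    import math, random

    # Five tunable parameters, as in the toy example described above.
    PARAMS = {"rate": 20, "speed": 2.0, "lifetime": 1.5,
              "gravity": 0.5, "spread": math.pi / 4}

    particles = []  # each particle is [x, y, vx, vy, age]

    def step(dt):
        # emit with probability rate*dt per frame (fine for small dt)
        if random.random() < PARAMS["rate"] * dt:
            a = -math.pi / 2 + random.uniform(-PARAMS["spread"], PARAMS["spread"])
            particles.append([0.0, 0.0,
                              PARAMS["speed"] * math.cos(a),
                              PARAMS["speed"] * math.sin(a),
                              0.0])
        # integrate simple physics, then retire expired particles
        for p in particles:
            p[3] += PARAMS["gravity"] * dt  # gravity accelerates vy
            p[0] += p[2] * dt
            p[1] += p[3] * dt
            p[4] += dt
        particles[:] = [p for p in particles if p[4] < PARAMS["lifetime"]]

Changing any entry of PARAMS between frames immediately changes the behavior of the effect, which is what makes interactive editing feel so direct.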





Motivation

In the toy example above, you have just 5 parameters to play with. But most production particle system engines provide way more tunable controls. (I count 195 parameters in Unity!)


The benefit is that these systems have broad expressive ranges. The drawback is that authoring any specific particle effect becomes very difficult, especially if you don't already know the meanings of confusing parameters like "Lifetime by Emitter Speed" or "Visualize Pivot."


Given this, generative AI could help inexperienced users author particle effects, but at what cost? We didn't want to take away from the meaningful exploration that comes from fiddling with parameters.

Recognizing a core HCI challenge, we focused on the question: how might we enable higher-level particle system editing for novices while maintaining compatibility with realtime exploration?

Design

Towards solving this problem, I designed generative semantic sliders for particle effect customization. Users generate these sliders using text prompts like "heat" or "windiness." An LLM links each semantic slider to a set of dependent low-level parameters. When the user changes a semantic slider, its dependent parameters change instantly to achieve the appropriate effect. Crucially, low-level parameters also remain fully editable, preserving tinkerability while adding new channels for intuitive editing.

Initial sketch of the idea.

Working prototype. Semantic sliders influence red multipliers on low-level parameters.
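
A minimal sketch of that multiplier mechanism, with hypothetical parameter names and a hand-written stand-in for the LLM's output:

    # Base low-level parameters remain fully editable.
    base_params = {"emission_rate": 50.0, "start_speed": 2.0, "start_color_r": 0.5}

    # What the LLM might return for the prompt "heat": for each dependent
    # parameter, a multiplier curve over the slider value t in [0, 1].
    heat_slider = {
        "emission_rate": lambda t: 1.0 + 2.0 * t,  # hotter -> more particles
        "start_speed":   lambda t: 1.0 + 1.5 * t,  # hotter -> faster motion
        "start_color_r": lambda t: 1.0 + 1.0 * t,  # hotter -> redder
    }

    def apply_slider(slider, t):
        # Multipliers layer on top of base values instead of overwriting
        # them, so semantic edits and low-level edits compose.
        return {name: value * slider.get(name, lambda _: 1.0)(t)
                for name, value in base_params.items()}

    print(apply_slider(heat_slider, 0.8))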

Future

This work is an ongoing project, with many unsolved questions still to tackle—here are some raised by our 4-participant formative study:


How can we afford semantic control over multidimensional or vector-valued concepts?


How should semantic sliders map to non-ratio-scale values (with no true zero)?


Does directly prompting for an adjective feel natural to users? What more intuitive interactions might we design?


We plan to submit our work-in-progress as a CHI 2026 poster, and explore these questions more fully in a future paper.

PULL

 live autoencoding with interactive latents

Rhode Island School of Design: Drawing with Computers + Personal Project

2024.11—2024.12

Exploring representations of identity

Idea

This project started with a strange observation: adjacency matrices provide a natural connection between images and graphs. Can the data of an image be encoded within the tensions of a network?

If so, could we create an interactive system where pulling on the network of nodes deforms and contorts the image?

I imagined this as an almost grotesque experience of forcefully pulling apart the meaning of something.

Implementation
pulling apart visual meaning

After some experimentation I found that encoding raw pixel values didn't create the effect I had in mind—distortions were local and rectilinear.


I wondered if embedding the image in the latent space of a neural network would create a more interesting "semantic" distortion, so I trained an autoencoder on the (tiny) ImageNet dataset and wrote a Python backend to live-decode the link lengths of a D3.js graph simulation.
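
Conceptually, the loop looks something like this (encode, decode, and simulate are hypothetical stand-ins for the trained autoencoder and the D3.js simulation):

    import numpy as np

    def to_lengths(z):
        # squash latent values into positive link rest lengths
        return 1.0 + np.tanh(z)

    def from_lengths(lengths):
        # invert the squashing so perturbed lengths map back to latents
        return np.arctanh(np.clip(lengths - 1.0, -0.999, 0.999))

    def frame_step(frame, encode, decode, simulate):
        z = encode(frame)                     # e.g. a 64-dimensional latent
        lengths = simulate(to_lengths(z))     # pulling on nodes perturbs lengths
        return decode(from_lengths(lengths))  # decode the distorted latent

    # identity stand-ins make the sketch runnable end to end
    z = np.zeros(4)
    print(frame_step(z, lambda f: f, lambda z: z, lambda l: l + 0.1))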


I also decided to try using the webcam feed as input rather than a still image, to create a mirror-like experience. Below, see the network tensions change as I wave my hands around. Then, watch as I pull on one of the nodes to distort my own image:

Implementation
latent identity

I still wanted to enable a more semantically meaningful distortion of the image—I imagined watching my face morph into a different face, changing its identity.


I retrained on a dataset containing only faces (CelebA). I initially experimented with my prior autoencoder but ended up finding more success with a pre-trained variational autoencoder, which was able to learn a smoother latent space.


I also created a new interface to allow the user to morph their identity—watch as I change the tunable scale factors below:


Demo interface, left to right: latent variables, tunable scale factors, webcam feed, and the live decoded image.
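
A minimal sketch of the scale-factor morphing, assuming a pre-trained VAE exposing encode and decode methods (all names here are placeholders, not the actual model API):

    import torch

    @torch.no_grad()
    def morph(vae, frame, scales):
        # encode to the posterior mean, scale each latent dimension,
        # and decode; scales != 1 push the face toward a new identity
        mu, _ = vae.encode(frame)
        return vae.decode(mu * scales)

    # e.g. scales = torch.ones(512); scales[:8] = 1.5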

Visual Interpreter

 misinterpreting source code as drawing instructions

Rhode Island School of Design: Drawing with Computers + Personal Project

2024.10—2024.12

Revealing hidden complexity in arbitrary programs

Idea/Experiments

Some code is written specifically for the purpose of generating visual art (P5.js, Processing). Other code is not—but rich complexity is hidden within it just the same. What if we could build a system to reveal the inherent complexity in source code by mapping it to a visual representation?

I wanted to repurpose utility for wonder.

My first idea was to create a kind of concrete poetry that procedurally generates itself—the source code is both the procedural instructions for where to place content and the content itself. I created some small demos by blurring and thresholding source code in the color palette of VSCode, and arranging it according to the structure of the original source. One surprising result was the number of cartoonish faces that appeared from this process!


But I wasn't satisfied because the results were chaotic and didn't reveal the source code's underlying structure as I wanted.

Implementation

I wanted to establish a direct connection between a single source code instruction and a single drawing instruction. This led me to ponder lower-level instructions in assembly or bytecode.


I wrote a program to extract bytecode from Python source and map each bytecode instruction to a pixel operation—for example, LOAD_CONST moves the current drawing position and places a pixel, CALL_FUNCTION changes the current drawing color, etc. The parameters of the bytecode instructions become parameters governing how much to change color or how far to move.
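
A minimal sketch of this approach using Python's dis module (the opname-to-action mapping here is a simplified stand-in for the project's fuller mapping):

    import dis

    def interpret_visually(source):
        x, y, color = 0, 0, 0
        pixels = []  # (x, y, color) triples to render later
        for instr in dis.get_instructions(compile(source, "<src>", "exec")):
            arg = instr.arg or 0
            if instr.opname == "LOAD_CONST":
                # move the pen by an amount derived from the argument
                x, y = (x + 1 + arg) % 256, (y + 7 * arg) % 256
                pixels.append((x, y, color))
            elif instr.opname.startswith("CALL"):
                # function calls shift the current drawing color
                # (CALL in Python 3.11+, CALL_FUNCTION in earlier versions)
                color = (color + 31) % 256
        return pixels

    print(interpret_visually("print(sum(i * i for i in range(10)))")[:5])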


The result on the left is the output of my visual interpreter running a puzzle solver program. The result on the right came from running it on source code for the famous Breakout game.

Video Analogies

 patch-based style transfer

Brown Computational Photography with Prof. James Tompkin | Team: Eric Chen, Seik Oh, Ziyan Liu

2023.11—2023.12

Dynamically mapping painterly aesthetics onto real-world scenes

Background

Our team extended the Image Analogies work of Hertzmann et al. by implementing style transfer from a still reference image onto a dynamic video.


This work was the culmination of a semester learning all manner of image manipulation techniques. See below from left to right: constructing HDR images from an exposure bracket, Poisson image-blending, patch-based texture transfer, and... creating Abe Lincoln as a piece of toast.

My favorite concept from class was the discrete Fourier transform of an image. I am fascinated by the idea that every image has a dual form that encodes exactly the same information in frequency space. Here is my face in frequency space:
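
Computing such a view takes only a few lines of NumPy (a minimal sketch; "face.png" is a placeholder path, not a file from the project):

    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("face.png").convert("L"), dtype=np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(img))  # center the zero frequency
    magnitude = np.log1p(np.abs(spectrum))        # log scale for visibility
    out = (255 * magnitude / magnitude.max()).astype(np.uint8)
    Image.fromarray(out).save("spectrum.png")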

Outcomes

Check out the final results of the video analogies project below!


Waterfall in watercolor style


Mountain range timelapse in oil painting style


Hyacinth Labyrinth

 procedural hedge maze game

Brown Computer Graphics with Prof. Daniel Ritchie | Team: Eric Chen, Faisal Zaghloul, Zach Weacher, Dominic Chui

2023.11—2023.12

Get lost in a winding maze of hedges

Implementation
generating diverse plant geometry

Our team set out to build an explorable procedural hedge maze (generated with Wilson's algorithm) decorated entirely with procedurally-generated plants.
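
For reference, here is a minimal sketch of Wilson's algorithm on a square grid, which builds a uniform spanning tree via loop-erased random walks (illustrative Python, not our course project's code):

    import random

    def wilson_maze(n):
        cells = [(r, c) for r in range(n) for c in range(n)]
        in_tree = {cells[0]}
        passages = set()  # open walls between adjacent cells

        def neighbors(cell):
            r, c = cell
            return [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= r + dr < n and 0 <= c + dc < n]

        for start in cells:
            if start in in_tree:
                continue
            # random walk, remembering only the latest exit (loop erasure)
            exit_dir, cell = {}, start
            while cell not in in_tree:
                exit_dir[cell] = random.choice(neighbors(cell))
                cell = exit_dir[cell]
            # retrace the loop-erased path and graft it onto the tree
            cell = start
            while cell not in in_tree:
                passages.add((cell, exit_dir[cell]))
                in_tree.add(cell)
                cell = exit_dir[cell]
        return passages

    print(len(wilson_maze(5)))  # a spanning tree of 25 cells has 24 passages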


As the resident L-system enthusiast, I wrote a custom L-system engine in Blender to create diverse foliage. The engine generates 3D models of L-systems with configurable branching characteristics, leaf size, stem radius, etc.


Here are some plants generated with my engine!

SplatBrush

 VR painting with learned Gaussians

Brown CV for Graphics and Interaction with Prof. James Tompkin | Team: Eric Chen, Josh Yang, Serena Pulopot, Zihan Zhu

2024.11—2024.12

Enhancing the materiality of VR drawing applications

Motivation

In class, we surveyed recent state-of-the-art papers on 3D Gaussian Splatting—I proposed a project to create a 3D Gaussian VR painting tool.

I saw massive room for improvement over existing VR painting tools, where paint strokes are approximated by triangle meshes that look like awkward tubes and ribbons.

3D Gaussians can approximate a wide range of real-world materials. I imagined a brush that allows you to draw with natural textures like grass and tree bark.
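
As a quick reference, here is a minimal sketch of the data one 3D Gaussian splat carries, following the common 3D Gaussian Splatting parameterization (not SplatBrush's actual code):

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Splat:
        mean: np.ndarray      # (3,) center position
        scale: np.ndarray     # (3,) per-axis standard deviations
        rotation: np.ndarray  # (3, 3) rotation matrix
        color: np.ndarray     # (3,) RGB
        opacity: float

        def covariance(self):
            # Sigma = R S S^T R^T stays positive semi-definite
            S = np.diag(self.scale)
            return self.rotation @ S @ S.T @ self.rotation.T

        def density(self, x):
            # unnormalized Gaussian falloff, weighted by opacity
            d = x - self.mean
            return self.opacity * np.exp(
                -0.5 * d @ np.linalg.inv(self.covariance()) @ d)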

Implementation

This project involved a lot of moving parts—here's a broad overview:

Outcomes

After running inference on the MVSGaussian model and automatically removing excessively large splats and capture ring artifacts, we were able to paint with these textures:
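
The splat cleanup can be as simple as thresholding on scale (a sketch using the hypothetical Splat class above; the real pipeline also targeted capture-ring artifacts):

    def drop_oversized(splats, max_scale=0.1):
        # discard splats whose largest axis exceeds a scene-scale threshold
        return [s for s in splats if float(max(s.scale)) <= max_scale]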

To put our system to the test (and just for fun), we created a large-scale scene in the lobby of Brown's CS building!

Pen Plotting

 experiments with materiality

Rhode Island School of Design: Drawing with Computers + Personal Project

2023.10—2023.12

Bringing computation back to physical media

Experiments

Working with a pen plotter challenged and intrigued me in a new way.


It was really satisfying to see computational work rendered in pen on paper, and these experiments motivated me to explore more physical media and/or fabrication in the future.

Martial Arts

 freedom of movement

ATA Taekwondo 4th Degree Black Belt, UCLA Taekwondo President, World Champion Demo Team Captain

2003.01—now

Pushing physical limits, expression through action

Experience

I have trained in Taekwondo alongside my family since I was 3 years old. My brother, parents, and I all earned our black belts together.


I specialize in flips, spins, and acrobatic kicks (martial arts tricking) and captained an ATA demonstration team, leading us to win the World Championships and perform live on ESPN.


In undergrad, I was elected president of UCLA's Club Taekwondo team with over 100 members.

me!