Ìrísí - A React Library for Building Product Videos in JSX
A few months ago, I needed to create a product demo video for something I was building. Nothing fancy, just a clean walkthrough showing the feature, how it works, what it looks like in action.
I did what most engineers do. Opened a screen recorder, ran through the flow, and hoped the demo gods were watching. They weren't. Three takes and one accidental tab switch later, I had a comically bad recording and was too tired to redo it.
So I tried the other route: I sat with an LLM, described what I wanted scene by scene, and asked it to help me piece together some animation code. It worked, kind of. But it was painful. The model kept hallucinating component APIs that didn't exist, the code it generated needed heavy editing, and the mental overhead of managing keyframes, timing offsets, and animation libraries manually was exhausting. I spent more time debugging the video than building the feature it was supposed to show.
That experience planted a question I couldn't let go of: what if LLMs could generate product videos the same way they generate UI components?
The answer is Ìrísí.
The Name
Ìrísí (ee-REE-see) is a Yoruba word. It means appearance, the way a thing looks, the form it takes, how it presents itself to the world.
I've been naming my developer tools after Yoruba words with real meaning. My AI SDK is called Ajala, after the legendary traveler who cycled the entire world. The name carries weight. Ìrísí felt right for this because that's exactly what the library is about: controlling how things appear, shaping the form of your product's story.
The Vision
There's a version of the future I keep thinking about.
You finish building a feature. You write a short description of what it does. You hand it to an LLM. Two minutes later, you have a polished product video, animated, cinematic, the kind that makes your PM think you have a motion design team. You didn't touch a timeline. You didn't screen-record anything. The AI just... made it.
That's what Ìrísí is designed to enable.
The reason this is possible at all is that JSX is just text. It has a clear grammar, semantic component names, and readable props. When you write <Button variant="primary">Submit</Button>, a model doesn't need to guess what that does.
The problem with existing video tools is they weren't built with this in mind. They're built for humans operating GUIs — timeline panels, keyframe handles, layer stacks. That's a completely different paradigm from "write text that describes what you want."
Ìrísí is built on a different premise. What if a product video was just JSX?
<Presentation theme="dark">
  <Frame duration={5} transition="fade">
    <BackgroundGradient colors={["#1e3a5f", "#0a0a0a"]} />
    <Center>
      <Stack>
        <Eyebrow animate="fadeIn" delay={0.2}>NEW FEATURE</Eyebrow>
        <Title animate="slideUp" delay={0.5} staggerBy="word">
          Instant Working Capital Loans
        </Title>
        <Subtitle animate="fadeIn" delay={1.2}>
          Approved in under 60 seconds.
        </Subtitle>
      </Stack>
    </Center>
  </Frame>
</Presentation>
Five seconds. Animated headline, staggered word reveal, gradient background, fade transition. An LLM can write that. It's just props.
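To make the stagger concrete, here is a sketch of the kind of timing math a prop like staggerBy="word" implies. The function name and the per-word interval are my assumptions for illustration, not Ìrísí's actual API:

```typescript
// Hypothetical: compute the start time of each word in a staggered
// reveal, given the element's base delay and a fixed per-word interval.
function wordStarts(text: string, delay: number, interval = 0.25): number[] {
  // Each word begins `interval` seconds after the previous one.
  return text.split(/\s+/).map((_, i) => delay + i * interval);
}

// The <Title> above, with delay={0.5}:
console.log(wordStarts("Instant Working Capital Loans", 0.5));
// → [0.5, 0.75, 1, 1.25]
```

The point is that a single named prop stands in for four keyframe offsets the author would otherwise compute by hand.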
The LLM-First Design Decision
When I was speccing out Ìrísí, I made one decision that shapes everything: the API has to be something a model can generate reliably without needing the docs open.
That means three things:
Component names read as plain English. <ScrambleText> scrambles text. <MaskReveal> reveals behind a mask. <BackgroundAurora> is an aurora background. No abbreviations, no cleverness.
Prop values are named before they're numeric. animate="slideUp" not animation={2}. easing="bounce" not easing={[0.68,-0.55,0.27,1.55]}. The model should be able to infer what a prop does from its name alone.
Zero required props where possible. A <Frame> with nothing on it still renders. A <Title> with just children looks good out of the box. The 80% case should be one line of JSX.
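The second principle is cheap to implement internally. A minimal sketch of how named easing values might resolve to numeric curves; the table, values, and function here are hypothetical, since Ìrísí's implementation doesn't exist yet:

```typescript
// Hypothetical lookup from readable easing names to cubic-bezier
// control points, so the LLM writes easing="bounce", never the numbers.
type Bezier = [number, number, number, number];

const EASINGS: Record<string, Bezier> = {
  linear: [0, 0, 1, 1],
  easeOut: [0, 0, 0.58, 1],
  bounce: [0.68, -0.55, 0.27, 1.55], // the array a model would otherwise guess at
};

// Unknown names fall back to linear instead of crashing the render.
function resolveEasing(name: string): Bezier {
  return EASINGS[name] ?? EASINGS.linear;
}
```

The fallback matters for the same reason as zero required props: a slightly wrong model output should degrade gracefully, not throw.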
The result: when you hand Ìrísí's component descriptions to Claude and say "make me a product video for this feature", the code it writes should run. Not "pretty close, needs tweaks." Runs.
The Part That Makes It Real
The component I'm most excited about is <UICursor>.
The hardest thing to fake in a product demo is interaction. You want to show a user clicking a button, filling a form, navigating a flow. Today your options are: screen-record the real thing and hope it cooperates, or spend an afternoon in a design tool animating every click manually.
<UICursor> lets you script it:
<UICursor
  moves={[
    { to: { x: 0.3, y: 0.4 }, duration: 0.8 },
    { to: { x: 0.6, y: 0.5 }, duration: 0.6, click: true },
    { type: "Feranmi Adeniji", duration: 1.5 },
    { to: { x: 0.7, y: 0.7 }, duration: 0.4, click: true }
  ]}
/>
The cursor moves, clicks, and types: choreographed, deterministic, zero screen recording. Pair it with <UIInput>, <UIButton>, <UIModal>, and you can build a complete product walkthrough for a feature that doesn't even exist yet.
An LLM can generate this. Describe the user flow in English, it writes the cursor script, the video renders.
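Under the hood, a moves array like that reduces to pure timeline math: at any time t, the cursor's position is fully determined. A minimal sketch of that resolution, assuming linear movement and invented names, since the real engine isn't built yet:

```typescript
// Hypothetical: resolve a UICursor-style moves array to a position
// at time t (seconds). Typing steps have no `to`, so the cursor holds.
type Move = { to?: { x: number; y: number }; duration: number };

function cursorAt(moves: Move[], t: number): { x: number; y: number } {
  let pos = { x: 0, y: 0 }; // assumed starting position
  let elapsed = 0;
  for (const m of moves) {
    const target = m.to ?? pos; // no target: stay put (e.g. a typing step)
    if (t < elapsed + m.duration) {
      const p = (t - elapsed) / m.duration; // linear progress through this move
      return {
        x: pos.x + (target.x - pos.x) * p,
        y: pos.y + (target.y - pos.y) * p,
      };
    }
    pos = target;
    elapsed += m.duration;
  }
  return pos; // past the last move: rest at the final position
}
```

Because the same t always yields the same position, every render of the video is frame-identical, which is exactly what screen recording can't give you.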
How I'm Building It
The spec is done: 170+ components across 17 categories, all documented with props. Text, layout, media, charts, UI mockups, backgrounds, transitions, annotations, audio, scroll controls, the works.
Implementation starts now. And I'm going to lean heavily on Claude Code for most of the actual coding.
There's a certain irony in using an LLM to build a library designed for LLMs to use. I'm fine with that.
Follow Along
The repo is live at https://github.com/spiderocious/irisi. Star it to stay updated, open issues for features you want to see, or reach out if you want to contribute.
I'll be writing about the interesting engineering problems as they come up. The timeline engine is going to be non-trivial.
If you've ever finished a feature and wished the demo just built itself, that's what this is for.
Ìrísí. The way your product appears to the world.