View Model Interfaces are a pattern for designing the boundary between view and business logic. I have leaned on this set of rules, to my benefit, in several projects:

- where performance needed to be very high, meaning reactivity had to be very tightly controlled
- where the complexity that comes from raw React hooks was difficult to follow
- where I had to manage complex UI state involving CSS animations or user input, and I needed to know exactly what would change in the DOM so I wouldn't interrupt a user who was selecting text or typing into an input
- and, equally important, where a part of the application became sufficiently complicated that adhering to these rules at the boundary led to much easier testability, much easier prototyping, and a simpler UI

The controversial thesis is that you should take the time to split view from business logic, and you should adopt a reactivity toolset that does not depend on the UI or rendering context. Use fine-grained reactivity tooling (signals, observables, or the like) so that when you define the interface, it is extremely clear from a type-system point of view what is going to change and where to draw your boundaries.

The audience is somewhat familiar with different patterns for encapsulating UI and probably has a lot of experience with React, Svelte, or Vue. I'm going to focus on React.

The too-long-didn't-read: your view model interface should be entirely interactive outside of the view's lifecycle. However your view lifecycle works (React hook context or otherwise), you shouldn't have to rely on it when designing your view model interface, and following these patterns will lead to more easily testable code, more easily iterated-on code, and better LLM performance on complex UIs.

LLMs perform better because you can define a clear boundary, "here's what the state looks like and how it is reactive, in a type-driven way," and have an AI implement the UI above that. You'll know the AI's implementation of that UI is type-checked and following your plan. These view model interfaces also provide a central point of documentation for what's expected of the view and what the different sections of a UI are, so they give you an opportunity to tell an AI how the view is supposed to be laid out before it gets started, without the AI getting distracted by the actual implementation.

My personal hook: I struggle to hold event management plus styling plus render performance plus contexts plus testing concerns in my head all at once. Early in my career I realized I struggled as ordinary React applications got more complex, and I found myself in a lot of conversations about hooks and how to manage the reactivity of a certain part of an application. I could always step back and ask, "What the fuck is actually going on?" I could kind of look at a few things at once and get an idea, but at the end of the day it was really unclear. Once a couple of useEffects, some useStates, and maybe a context were involved, I lost the feeling that I could understand what was happening.

Hook: as my projects and jobs targeted more complicated, authorable UIs, I found a 2018 talk on the Flutter BLoC pattern from Paolo Soares that described this way of thinking, and I have been leveraging the pattern ever since. Originally I called it MVVM or the BLoC pattern, but the more I implemented it and tried to explain it to people, the more obvious it became that the important part is just the view model interface: having a definition of what a view model interface is. That's what this article is about.

The context: you're trying to reason about hooks and how they interact, tie things together, and communicate to other engineers how something works, all while managing contexts, and you don't have the kind of memory it takes to hold how all those contexts fit together spatially. You probably really enjoyed the container and display component patterns in early React, and you were wondering what the next iteration of those ideas might look like.

The turn: we define everything the UI should look like, even its structure, in a view model: a type and an interface. That interface exposes as little as possible beyond what maps directly onto the UI, so that when we render, we shouldn't have to reach for anything beyond a .map, and we shouldn't have to call any function with anything richer than, say, the input's text. The goal is to make all of the inner workings of your business logic (fetch state, everything) opaque to the UI code. By following this rule we can forego testing the majority of our UI side and skip a lot of complexity by not needing to set up end-to-end testing from React all the way to our business logic. We can test the entire application by interacting directly with the view model, which is much simpler than interacting with the UI directly or trying to interact only with the state machine itself.

Proof: for most of my projects now, the only testing I do up front is on the view model itself, and usually only when the view has sufficiently complicated logic. By breaking the view apart from the business logic, there are far fewer things to test or that can go wrong, because it's a lot easier to reason about.
Anchors
TODO: translate the raw thoughts into initial anchors.

Controversial thesis: You should split view from business logic with a type-first, fine-grained reactive View Model Interface that lives outside the view lifecycle.

- This makes complex UIs testable, predictable, and easier to co-develop (including with LLMs).

Hero visual ideas:

- Running UI, View Model interface, and test cases in three panes
- Could be a simple text input with autocomplete/hints at the bottom
Introduction
⋅
View Model Interfaces are a pattern for designing the boundary between view and business logic.
Let's take a look at a simple example of what a view model interface for the following autocomplete input might look like.
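As a sketch (not a prescribed API), here is what such an interface could look like using Jotai atoms as the reactive container; any signals or observables library would work just as well, and the exact fields are illustrative.

// Hypothetical view model interface for an autocomplete text input.
// The field names are illustrative; the point is that everything the view
// needs is exposed as typed reactive values plus plain methods.
import type { Atom } from "jotai";

export interface AutocompleteInputViewModel {
  /** Current contents of the text input. */
  readonly textAtom: Atom<string>;
  /** Suggestions to render beneath the input, already filtered and ordered. */
  readonly suggestionsAtom: Atom<readonly { id: string; label: string }[]>;
  /** Which suggestion is highlighted, or null when none is. */
  readonly highlightedSuggestionIdAtom: Atom<string | null>;

  /** The view reports raw user intent; the view model decides what changes. */
  setText(text: string): void;
  highlightSuggestion(id: string | null): void;
  acceptSuggestion(id: string): void;
}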
From the interface alone, you should be able to picture what this is and how it behaves. Picture it in your head before you take a look at a demo of what this might look like.
Let's Build Composable Keyboard Navigation Together
⋅
\n
\n
Prerequisites: We'll assume you're comfortable with React and TypeScript. We'll introduce Entity-Component-System (ECS) concepts as we go - no prior game dev experience needed!
\n
Time to read: ~15 minutes
What we'll learn: How to build keyboard navigation using composable plugins that don't know about each other
\n
\n
The Problem We're Solving
⋅
\n
We're building a complex UI, and we need keyboard shortcuts everywhere. Our text editor needs Cmd+B for bold, our cards need arrow keys for navigation, our buttons need Enter to activate.
\n
We could write one giant keyboard handler that knows about every component. But we've been down that road before - it becomes a tangled mess the moment we need context-sensitive shortcuts or want to test things in isolation. Every time we add a component, we're editing that massive switch statement. Every time a shortcut conflicts, we're debugging spaghetti code.
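For contrast, here is roughly what that centralized handler tends to look like. The component kinds and helpers below (getFocusedComponent, toggleBold, and friends) are hypothetical stand-ins, not code from this project:

type Focused =
  | { kind: "editor" }
  | { kind: "card"; id: string }
  | { kind: "button"; id: string };

// Hypothetical app helpers, declared only so the sketch type-checks.
declare function getFocusedComponent(): Focused | null;
declare function toggleBold(): void;
declare function focusNextCard(): void;
declare function deleteCard(id: string): void;
declare function activateButton(id: string): void;

// The anti-pattern: one global handler that knows about every component.
// Every new component or context-sensitive shortcut means editing this switch.
document.addEventListener("keydown", (event) => {
  const focused = getFocusedComponent();
  if (!focused) return;
  switch (focused.kind) {
    case "editor":
      if (event.metaKey && event.key === "b") toggleBold();
      break;
    case "card":
      if (event.key === "ArrowDown") focusNextCard();
      else if (event.key === "x") deleteCard(focused.id);
      break;
    case "button":
      if (event.key === "Enter") activateButton(focused.id);
      break;
    // ...and so on, for every component and every conflict in the app
  }
});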
\n
Let's try something different. We'll build a system where components declare what they need, and a plugin wires everything together automatically. No tight coupling, no spaghetti code, and every piece testable in isolation.
\n
What We're Building
⋅
\n
We'll create three interactive cards, each with different keyboard shortcuts. When we focus a card, its shortcuts become active. Press ↑/↓ to navigate between cards, then try each card's unique actions.
\n
By the end, we'll understand how four small building blocks compose into a working keyboard system - without any of them knowing about the others.
\n
Our Approach: Entities, Components, and Plugins
⋅
\n
We're borrowing a pattern from game development called Entity-Component-System (ECS):
\n
\n
Entity = A unique identifier for a thing in the system, not a class or instance
\n
Component = Data attached to an entity via a component type key
\n
Plugin = Behavior that queries entities with specific component combinations and reacts to changes
\n
\n
The mapping to React:
\n
React                →  ECS
─────────────────────────────────────
Component instance   →  Entity (just a UID)
Props/state shape    →  Component (data attached to UID)
Context + useEffect  →  Plugin (reactive behavior)
useState             →  Atom (Jotai reactive state)
\n
The key insight: Components are just data. Plugins add behavior by querying for that data. Nothing is tightly coupled.
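To make those three ideas concrete, here is a minimal, generic sketch in TypeScript. This is illustrative only; it is not the WorldState API used later in the article:

// Minimal ECS sketch: entities are just IDs, components are data keyed by a
// component type, and a "plugin" is any behavior that queries for that data.
type UID = string;

class ComponentType<T> {
  constructor(public readonly name: string) {}
}

class TinyWorld {
  private components = new Map<ComponentType<any>, Map<UID, unknown>>();

  attach<T>(entity: UID, type: ComponentType<T>, data: T): void {
    if (!this.components.has(type)) this.components.set(type, new Map());
    this.components.get(type)!.set(entity, data);
  }

  get<T>(entity: UID, type: ComponentType<T>): T | undefined {
    return this.components.get(type)?.get(entity) as T | undefined;
  }

  // Plugins don't know about entities ahead of time; they query for them.
  query<T>(type: ComponentType<T>): Array<[UID, T]> {
    const bucket = this.components.get(type) as Map<UID, T> | undefined;
    return bucket ? [...bucket.entries()] : [];
  }
}

// Usage: nothing about the "focusable" data knows about the code that reads it.
const Focusable = new ComponentType<{ order: number }>("focusable");
const world = new TinyWorld();
world.attach("card-1", Focusable, { order: 0 });
for (const [uid, data] of world.query(Focusable)) {
  console.log(uid, data.order);
}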
\n
Let's see this in practice.
\n
Try It Out First
⋅
\n
Before diving into theory, let's play with what we're building:
\n\n\n\n
Watch how the demo responds. Notice that only the focused card's shortcuts work - the others are "dormant" until focused.
\n
Now let's understand how we built this.
\n
Our Four Building Blocks
⋅
\n
Let's break down our keyboard system into four composable pieces.
\n
Block 1: Making Things Focusable
⋅
\n
First, we need to mark which entities can receive focus and track which one currently has it. We'll create a CFocusable component, plus a UCurrentFocus unique that holds the shared focus state:

activeFocusAtom: Holds the UID of whichever entity currently has focus (or null)
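The concrete definitions aren't included in this excerpt, so here is a rough sketch of the shapes involved. Everything beyond activeFocusAtom is an assumption, and WorldState's real component/unique helpers are omitted:

// Hypothetical shapes only; the actual CFocusable / UCurrentFocus definitions
// live in WorldState and aren't shown in this article.
import { atom, type PrimitiveAtom } from "jotai";

type UID = string;

// Per-entity data: "this entity can receive focus".
interface FocusableData {
  order?: number; // assumed ordering hint for ↑/↓ navigation
}

// World-wide ("unique") data: which entity has focus right now.
interface CurrentFocusData {
  activeFocusAtom: PrimitiveAtom<UID | null>;
}

const currentFocus: CurrentFocusData = {
  activeFocusAtom: atom<UID | null>(null),
};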
\n
\n
How it connects:
\n
// When we focus a card:
const focusUnique = world.getUniqueOrThrow(UCurrentFocus);
world.store.set(focusUnique.activeFocusAtom, cardUID);

// Anywhere else in our app:
const focusedEntityUID = world.store.get(focusUnique.activeFocusAtom);
\n
Notice: CFocusable and UCurrentFocus don't import each other. They communicate through atoms. The CFocusable Plugin (which we'll see soon) is what wires them together.
\n
Block 3: Declaring Actions
⋅
\n
Now we need entities to declare what keyboard actions they support:
\n
\n \n
\n
export type AnyAction = {
  label: string;
  defaultKeybinding: DefaultKeyCombo;
  description?: string;
  icon?: any;
  self?: boolean;
  hideFromLastPressed?: boolean;
};
type AnyBindables = Record<string, AnyAction>;

export type ActionEvent = { target: UID };

export namespace CActions {
  export type Bindings<T extends AnyBindables> = {
    [P in keyof T]: Handler<ActionEvent> | Falsey;
  };
}

export type ActionBindings<T extends AnyBindables = AnyBindables> = {
  bindingSource: DevString;
  registryKey: ActionRegistryKey<T>;
  bindingsAtom: Atom<CActions.Bindings<T>>;
};

export class ActionRegistryKey<T extends AnyBindables = AnyBindables> {
  constructor(
    public readonly key: string,
    public readonly meta: { source: DevString; sectionName: string },
    public readonly bindables: T,
  ) {}
}

export class CActions extends World.Component("actions")<CActions, ActionBindings[]>() {
  static bind<T extends AnyBindables>(key: ActionBindings<T>) {
    return CActions.of([key as ActionBindings]);
  }

  static merge(...bindings: Array<ActionBindings | ActionBindings[]>) {
    const out: ActionBindings[] = [];
    for (const b of bindings) {
      if (Array.isArray(b)) {
        out.push(...b);
      } else {
        out.push(b);
      }
    }
    return CActions.of(out);
  }

  static defineActions<T extends AnyBindables>(
    key: string,
    meta: { source: DevString; sectionName: string },
    actions: T,
  ): ActionRegistryKey<T> {
    return new ActionRegistryKey(key, meta, actions);
  }
}
\n
\n
\n
What this gives us:
\n
\n
A way to define available actions (defineActions)
\n
A way to bind handlers to those actions per entity
\n
Actions are just metadata: label, key binding, description

Notice: We defined the action schema once, then bound different handlers per entity. One card might delete, another might archive. Same action definition, different behavior.
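To make that concrete, a binding setup might look roughly like the sketch below. The string key combos and the dev/handled helpers are assumptions borrowed from the surrounding code in this article, not a verified API:

// Hypothetical usage of the CActions API defined above.
// Assumes the dev`...` and handled`...` helpers and a Jotai `atom` import.
const CardActions = CActions.defineActions(
  "card",
  { source: dev`card demo`, sectionName: "Card" },
  {
    delete: { label: "Delete card", defaultKeybinding: "X" },
    archive: { label: "Archive card", defaultKeybinding: "A" },
  },
);

// Card 1: only "delete" is live; "archive" is disabled (Falsey).
const card1Actions = CActions.bind({
  bindingSource: dev`card 1`,
  registryKey: CardActions,
  bindingsAtom: atom({
    delete: () => handled`deleted card 1`,
    archive: false,
  }),
});

// Card 2: same action schema, different behavior.
const card2Actions = CActions.bind({
  bindingSource: dev`card 2`,
  registryKey: CardActions,
  bindingsAtom: atom({
    delete: false,
    archive: () => handled`archived card 2`,
  }),
});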
\n
Block 4: Wiring It All Together - ActionsPlugin
⋅
\n
Here's where the magic happens. We need something that:
\n\n
Listens for keyboard events
\n
Finds the currently focused entity
\n
Matches keys to actions
\n
Executes the handler
\n\n
That's what our ActionsPlugin does:
\n
\n \n
\n
export const ActionsPlugin = World.definePlugin({
  name: dev`ActionsPlugin`,
  setup: (build) => {
    const { store } = build;

    // Track the currently focused entity
    const currentFocusAtom = atom((get) =>
      pipeNonNull(get(build.getUniqueAtom(UCurrentFocus)), (a) => get(a.activeFocusAtom)),
    );

    const rootUIDAtom = atom<UID | null>(null);
    const currentDispatchSpotAtom = atom((get) => get(currentFocusAtom) ?? get(rootUIDAtom));

    const handleOnce = new WeakSet<KeyboardEvent>();
    build.addUnique(UKeydownRootHandler, {
      handler(reason, keyboardEvent) {
        if (handleOnce.has(keyboardEvent)) return Outcome.Passthrough;
        handleOnce.add(keyboardEvent);
        if (keyboardEvent.defaultPrevented) return Outcome.Passthrough;
        const world = store.get(build.worldAtom);
        if (!world) return Outcome.Passthrough;
        const dispatchFromUID = store.get(currentDispatchSpotAtom);
        if (!dispatchFromUID) return Outcome.Passthrough;
        // Walk up the parent chain looking for keydown handlers
        const result = CParent.dispatch(
          dev`keydown from root`.because(reason),
          world,
          dispatchFromUID,
          CKeydownHandler,
          reason,
          keyboardEvent,
        );
        // Prevent default browser behavior when we handle the key
        if (result !== Outcome.Passthrough) {
          keyboardEvent.preventDefault();
        }
        return result;
      },
      rootUIDAtom,
    });
\n
\n
\n
Here's what happens when we press a key:
\n
User presses "X"
  ↓
ActionsPlugin.UKeydownRootHandler receives event
  ↓
Query: Which entity has focus? (from UCurrentFocus)
  ↓
Walk up parent chain: Does this entity have CKeydownHandler?
  ↓
Match key "X" to action "delete"
  ↓
Call the handler we bound earlier
  ↓
preventDefault() so browser doesn't scroll
\n
The beautiful part? None of these components import each other. The plugin queries the world: "Give me the focused entity. Does it have CActions? Great, wire up keyboard handling for it."
\n
\n \n
\n
// Provide keydown handler for entities with CActions
build.onEntityCreated(
  {
    requires: [CActions],
    provides: [CKeydownHandler],
  },
  (uid, { actions }) => {
    const combinedCombosAtom = atom((get) => {
      type ComboData = {
        actionKey: string;
        handler: (reason: DevString, event: ActionEvent) => Outcome;
      } & AnyAction;

      const combosMap = new Map<string, ComboData[]>();

      for (const actionSet of actions) {
        const resolvedBindings = get(actionSet.bindingsAtom);
        for (const [actionKey, maybeHandler] of Object.entries(resolvedBindings)) {
          if (!maybeHandler) continue;
          const bindable = actionSet.registryKey.bindables[actionKey];
          if (!bindable) continue;
          const defaultKey = bindable.defaultKeybinding;
          const combo = normalizedKeyCombo(defaultKey, ENV_KEYBOARD_KIND).normalized;
          const comboData: ComboData = {
            actionKey,
            handler: maybeHandler,
            ...bindable,
          };
          const list = combosMap.get(combo);
          if (!list) {
            combosMap.set(combo, [comboData]);
          } else {
            list.push(comboData);
          }
        }
      }
      return combosMap;
    });

    const keydownHandler = CKeydownHandler.of({
      handler(reason, event) {
        // Omit shift for letter keys so "Shift+X" matches "X"
        const combos = addModifiersToKeyCombo(ENV_KEYBOARD_KIND, event, true);
        if (event.defaultPrevented) return Outcome.Passthrough;
        const combosMap = store.get(combinedCombosAtom);

        for (const combo of combos) {
          const comboDatas = combosMap.get(combo.normalized);
          if (!comboDatas) continue;
          for (const comboData of comboDatas) {
            const outcome = comboData.handler(dev`Key combo pressed: ${combo.normalized}`.because(reason), {
              target: uid,
            });
            if (outcome !== Outcome.Passthrough) {
              return outcome;
            }
          }
        }
        return Outcome.Passthrough;
      },
    });

    return { keydownHandler };
  },
);
\n
\n
\n
This is the actual per-entity handler creation. When we add an entity with CActions, the plugin automatically:
\n
\n
Reads all action definitions
\n
Normalizes key combos (so "X" and "Shift-X" both match)
\n
Creates a CKeydownHandler that matches keys to handlers
\n
Plugs it into the event system
\n
\n
We don't call any of this ourselves. It Just Works™.
\n
What We Learned
⋅
\n
Let's step back and appreciate what we built:
\n
✅ We Can Test Everything In Isolation
⋅
\n
Want to test if "X" triggers delete? No React needed:
\n
const world = createTestWorld();
const cardUID = addCardEntity(world, {
  onDelete: mockFn,
});

// Simulate focus
world.store.set(focusAtom, cardUID);

// Simulate keypress
rootHandler.handler(dev`test`, { key: "x" });

expect(mockFn).toHaveBeenCalled();
\n
✅ Components Are Composable
⋅
\n
A simple button might only have CFocusable. A rich text editor adds CActions with 50 shortcuts. A card adds both plus CSelectable. Mix and match as needed.

For our use case (a complex editor with nested contexts), the composition benefits outweigh the complexity cost. For most apps, a 2KB library like tinykeys is probably the right call.
\n
Tracing a Keypress Together
⋅
\n
Let's walk through exactly what happens when we press "X" to delete a card. This demystifies the "magic":
\n
📍 Step 1: DOM Event (keyboard-demo.entrypoint.tsx:29)
  document.addEventListener('keydown', ...)
  Event fires with event.key = "x"

  ↓

📍 Step 2: Root Handler (ActionsPlugin.ts:33-42)
  UKeydownRootHandler.handler() receives event
  Check currentDispatchSpotAtom: Is anything focused?
  Result: Card 2 has focus

  ↓

📍 Step 3: Parent Walk (ActionsPlugin.ts:44-55)
  CParent.dispatch() walks up the entity tree
  Current: Card 2 → Does it have CKeydownHandler? YES ✓

  ↓

📍 Step 4: Key Normalization (ActionsPlugin.ts:105-107)
  addModifiersToKeyCombo("x", event, omitShift=true)
  "x" → "X" (uppercase)
  No modifiers, final combo: "X"

  ↓

📍 Step 5: Action Lookup (ActionsPlugin.ts:110-115)
  combosMap.get("X") → [{action: "delete", handler: fn}]
  Call handler(dev`Key combo`, {target: card2UID})

  ↓

📍 Step 6: Our Handler (createKeyboardDemo.ts:83)
  onDelete() runs
  alert("Deleted Card 2!")
  Returns: handled`delete`

  ↓

📍 Step 7: Prevent Default (ActionsPlugin.ts:50-52)
  outcome !== Passthrough, so:
  event.preventDefault() ← stops browser scroll
  Return "handled" to stop propagation
\n
Key insight: Notice how information flows through atoms and component queries, never through direct imports or method calls. That's the decoupling in action.
\n
What's Next?
⋅
\n
Now that we understand composable keyboard navigation, we can:
\n
\n
Add spatial navigation (arrow keys navigate a 2D grid)
\n
Build focus trapping for modals
\n
Create a command palette with searchable actions
\n
Support user-customizable keybindings
\n
\n
The pattern scales because we're composing data, not coupling objects.
\n
Reflection
⋅
\n
We started with a problem: keyboard shortcuts without spaghetti code.
\n
We solved it by separating concerns:
\n
\n
CFocusable says "I can receive focus" (data)
\n
CActions says "I have these shortcuts" (data)
\n
ActionsPlugin says "When those exist together, wire them up" (behavior)
\n
\n
No component knows about the others. Add a new shortcut? Update one entity's CActions. Add a new focusable element? Add CFocusable. The plugin handles the rest.
\n
That's the power of Entity-Component-System for UI.
Hi, my name is Cole. I work at Phosphor, where we are tackling extremely dense UI problems. I had experienced very similar things at previous companies, but scaling out the complexity was always a challenge. Over my career I've gone from basic model-view-controller patterns to a form of MVVM (model-view-view-model), and then, at my previous start-up, I ventured into an architecture written in Rust, where the way the borrow checker works pushed us toward a particular way of structuring state.
\n
The easiest way to do things like reactivity there was a pattern called entity-component-systems (ECS), which lets you detach components from entities: entities are themselves just IDs that you use to look up components in a sparse set or a type map.
\n
What this yielded: in files that needed to do some kind of complex logic, instead of importing an entire AST structure, you had the parsed version, type checking, and errors all as separate components. One system could handle raw structure to parsed form. Another could take parsed form to errors, and a separate system could handle parsed form to suggestions. Yet another system says, "Okay, now let's map those suggestions to UI notifications." Each of these systems was able to build on the others in an extremely scoped way, because each system individually only cared about the components between it and its neighbors.
\n
I hadn't used this pattern only because I was in Rust; I grew to love it for the properties it gave me for testing and reasoning. Early on here at Phosphor I had the suspicion that we could leverage this pattern to improve our UI building, because every rectangle has a lot of complexity behind it and you have to figure out how you're going to scale out that complexity.
\n\n
Let me take a step back and see if I can paint a picture of the problem we're experiencing at Phosphor. Start with one input box: there are some hooks, some state management, maybe some persistence involved. Then we add a little bit more. We add validation, and validation reports whether there's an error. Next you're thinking you need system-wide undo and redo, which is going to touch a few files and needs to link things together. Then you say, "Oh shit, I also need to support collaborative editing so multiple people can see what each other are doing." This touches a bunch of files, because now we're talking about how you present things and about your sync layer: your front end may need to be reworked so that the sync layer is actually the hook you're using and not some other persistence mechanism. You refactor that. Okay, cool, normal things. But we start to notice that unless you adopt a full sync-engine tool, say Yjs or Loro + Loro-Mirror, the problem is already very big.
\n
[hole] - example of components
\n
But things don't stop there. Everything we've listed so far is very basic: table-stakes features for any authorable environment (maybe some authorable environments don't need the validation pieces). Let's continue, because now we have to talk about the things that are expensive at Phosphor. On top of all that we add comparisons, where a single value in that input is compared to another value. Where does that other value come from? For us, it comes from version-control state. We maintain a version-controlled graph of all the changes and groups of changes, with proposals and the origin of every change, and we present information about every value's origin in the UI. Right here you start to wonder: if I'm going to present all this information in the UI, how am I going to manage that complexity? I have undo/redo, and I haven't even mentioned key bindings yet, which is another thing to consider. Then we're talking about diff views (before-and-afters), which have their own set of layers we don't have to get into, but it's a lot. We could keep going: keyboard maps and customizable key bindings, selection state where multiple things are selected at once, copy/paste, and so on. A lot ends up going into this, and that's how we found ourselves here.
\n
From here I'm thinking to myself: I know how we scaled the complexity at Story.ai, and that was through this ECS pattern. How would that adapt to my favorite approach to UI state management today, the view model pattern? I had practically maintained our company's ECS crate, and we had a lot of opinions about how to build it up, give it better errors, and make it more usable. So I thought we could adapt a lot of those ideas into a library we could use here at Phosphor, and that is where I started with WorldState. WorldState is a set of tools that lets you build up UI state without any reliance on React, with the decoupled approach I liked from structuring things as view models vs. views and the things I liked from ECS. Right now, WorldState is backed by Jotai atoms, which are just reactive containers of values (signals and so forth). We're considering adapting WorldState to use Livestore as our reactivity graph next.
\n
[S2] ECS solution translated to UI
⋅
\n
So I was thinking to myself: what are the characteristics going to be? Unlike Rust, we have these reactive containers, and we don't have the frame-by-frame computation that most games have. We just have components that contain atoms, and when we want to create the equivalent of a system, we create a plugin. A plugin says: if an entity is created with these components, then I can provide these other components, and there's never a need to add or remove components from the entity afterwards. You simply have this plugin architecture.
\n
\n
Game developers faced this exact problem a decade ago and found an elegant solution: entities, components, and systems. The pattern translates directly to UI engineering.
\n
\n
[hole](\"Example of a plugin\")
\n\n
The key insight: add features by adding plugins and components, not by editing old code. Your existing validation logic never changes when you add collaborative cursors. Your undo system remains untouched when you introduce AI suggestions.
\n
WorldState isn't a game engine—it's an ECS-inspired approach for expressing UI concerns as composable data and behavior.
[Interactive scrollytelling demo showing rectangle gaining features step by step]
\n
Watch a simple rectangle evolve: Base → validation → undo → diffs → presence → focus → commands → AI → inline visualization.
\n
Each step demonstrates additive architecture. The validation plugin doesn't know about undo. The undo plugin doesn't know about presence. The presence plugin doesn't know about AI suggestions. Yet they all compose cleanly because they operate on shared entity state through well-defined component interfaces.
\n
Toggle the layer chips at the bottom to see how features compose. Turn validation off and on—notice how it doesn't break undo or cursors. Enable all layers simultaneously—no conflicts, no special coordination code needed.
\n
Reality check: if your app is a simple contact form, you don't need this. But if your rectangles feel like living systems with multiple interacting concerns, this architecture prevents the coupling nightmare.
[Working demo: grid navigation with context-sensitive actions palette]
\n
This demo shows three complex UI concerns working in harmony: selection, focus, and context-sensitive commands.
\n
Selection (what's highlighted) and focus (where keyboard input goes) are different concepts that can diverge then sync. In this grid, you can select multiple cells with Shift+click while focus remains on a single cell for typing. The selection and focus plugins coordinate through shared component state, not direct coupling.
\n
Navigate with arrow keys—the focus moves, and the actions palette updates to show contextually relevant commands. Hit Spacebar to select the focused cell. Use Shift+arrows to extend selection. Each behavior is handled by a separate plugin responding to component state changes.
\n
The power emerges from composition: KeyboardPlugin + SelectionPlugin + ActionsPlugin working together without knowing about each other's implementation details.
\n
[S5] When to use vs avoid
⋅
\n
This architecture shines for certain types of applications and can be overkill for others.
\n
Great fit if you're building:
\n
\n
Collaborative editors where multiple people edit the same document simultaneously
\n
Dashboards with cross-panel effects where selecting data in one chart filters others
\n
Complex modeling tools with multiple overlapping interaction modes
\n
Professional workflows requiring deep keyboard navigation and shortcuts
\n
\n
Probably overkill if:
\n
\n
You're building a basic CRUD application with standard forms
\n
Your UI is mostly static content with minimal interactivity
\n
You have a small team and simple requirements
\n
\n
Design engineers welcome: The more interaction layers your application has, the more this pattern will help. If you find yourself building "features that affect other features," you're in the sweet spot.
\n
The architecture pays for its complexity by preventing coupling debt. Early investment in entity-component structure pays dividends when you need to add the fifth, sixth, and seventh layers of interaction.
\n
[S6] Community engagement conclusion
⋅
\n
Your rectangle doesn't have to be a hydra.
\n
We're collecting UI engineering war stories and solutions like this one. What's your rectangle horror story? What tools have you discovered that tame coupling nightmares in complex interfaces?
\n
Share your dev tooling discoveries with us—we'll feature the best ones and build a knowledge base of battle-tested patterns for UI engineers.
\n
Send us a DM, or check out our careers page if you're the kind of engineer who sees these patterns and wants to build tools that make them easier to implement.
\n
The future of UI engineering is compositional. Let's build it together.