Duolingo Chess
Designing the Duolingo Chessboard Before Data Binding
When we started building Duolingo Chess, the board didn’t have full data binding or backend integration. There was no canonical engine-connected state layer yet. But the board still needed to feel complete — responsive, physical, and production-ready.
I was a core engineer/designer on the four-person team that brought Chess from concept to launch. My focus in the early phase was the interaction and representation layer: defining how pieces moved, how state was modeled, and how animation could coexist with deterministic logic.
What made this interesting was the constraint. We had to build something that felt polished before the architecture beneath it was finalized.
Deterministic State Before Backend State
Pieces needed to work in two modes simultaneously. They had to respond to direct user input — tap, drag, drop — but they also had to be movable programmatically for testing and eventual engine-driven updates.
That forced us to separate canonical board state from visual transform state very early.
Each piece had a coordinate representation independent of how it was rendered. Movement was never defined by where the UI element happened to be on screen. Instead, UI motion was treated as a visual consequence of state transitions.
That decision ended up shaping the entire architecture.
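As a minimal sketch of that separation (the names here are illustrative, not our production types), canonical position lives in board coordinates, and screen-space transforms are always derived from it, never the other way around:

```typescript
// Hypothetical sketch: canonical coordinates live apart from render transforms.
type Square = { file: number; rank: number }; // 0..7 on each axis

interface PieceState {
  id: string;
  square: Square; // canonical position -- the source of truth
}

interface PieceVisual {
  x: number;
  y: number; // screen-space transform: derived, never authoritative
}

const SQUARE_SIZE = 64; // assumed square size in points

// Rendering derives visuals from state; it never writes back.
function visualFor(piece: PieceState): PieceVisual {
  return {
    x: piece.square.file * SQUARE_SIZE,
    y: (7 - piece.square.rank) * SQUARE_SIZE, // rank 8 at the top
  };
}

// A move mutates canonical state only; animation reads the new target.
function applyMove(piece: PieceState, to: Square): PieceState {
  return { ...piece, square: to };
}
```

Because `visualFor` is a pure function of canonical state, a piece can be moved by a gesture, a test, or an engine update through exactly the same path.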
Separating Intent From Motion
One of the earliest problems was input modeling. Chess interactions become frustrating if they’re too rigid, but ambiguous interactions are worse.
I separated interaction into three layers: selection state, intended move, and visual transform. A drag gesture didn’t directly mutate the board. It updated intent. Only once the move was validated did the canonical state transition occur, which then triggered animation.
That separation allowed us to:
Cancel or revert moves safely
Highlight valid moves independently of animation
Override interaction cleanly with engine-driven updates
Even before full data binding, the board behaved like a small state machine.
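That state machine can be sketched roughly like this (a simplified model with hypothetical names, not our actual implementation): a drop gesture expresses intent, and the board only transitions if validation passes.

```typescript
// Hypothetical sketch of the three-layer interaction model.
type Interaction =
  | { kind: "idle" }
  | { kind: "selected"; from: string }                          // a square like "e2"
  | { kind: "dragging"; from: string; dx: number; dy: number }; // dx/dy are visual only

interface Board {
  pieces: Map<string, string>; // square -> piece, e.g. "e2" -> "P"
  legalMoves(from: string): string[];
}

// A drop expresses *intent*; the board mutates only if validation passes.
function tryCommitMove(board: Board, state: Interaction, to: string): Interaction {
  if (state.kind === "idle") return state;
  if (board.legalMoves(state.from).includes(to)) {
    const piece = board.pieces.get(state.from)!;
    board.pieces.delete(state.from);
    board.pieces.set(to, piece); // canonical transition; animation keys off this change
  }
  return { kind: "idle" };       // interaction always resets, so reverts are safe
}
```

Note that an invalid drop simply returns to `idle` without touching the board, which is what makes cancel and revert trivially safe.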
Simulating Physicality Without Physics
We wanted pieces to feel weighted, but introducing real physics would have destabilized interaction logic and added unnecessary complexity.
Instead, I built a nested transform system. The canonical board position lived in a parent transform. An interactive offset transform handled immediate gesture response. A secondary delayed transform interpolated toward the parent, creating subtle rotational lag.
At rest, the offset transforms settled to zero, so the piece sat exactly at its canonical position. During motion, one layer followed the gesture immediately while the other caught up. The result was a slight sway — enough to imply mass without affecting determinism.
The important part was that animation never defined position. It was layered on top of state.
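The lag mechanism can be approximated with a simple per-frame interpolation, no physics engine required. This is a sketch under assumed names and an assumed exponential-smoothing update, not the shipped code:

```typescript
// Hypothetical sketch: an immediate offset plus a lagging follower.
interface Vec { x: number; y: number }

class LaggedTransform {
  immediate: Vec = { x: 0, y: 0 }; // tracks the gesture exactly
  delayed: Vec = { x: 0, y: 0 };   // interpolates toward the immediate layer each frame

  setTarget(target: Vec): void {
    this.immediate = { ...target };
  }

  // Per-frame update: the delayed layer chases the immediate one.
  tick(dt: number, stiffness = 10): void {
    const t = Math.min(1, stiffness * dt);
    this.delayed.x += (this.immediate.x - this.delayed.x) * t;
    this.delayed.y += (this.immediate.y - this.delayed.y) * t;
  }

  // The visible "sway" is the gap between the two layers; it decays to zero at rest.
  sway(): Vec {
    return {
      x: this.immediate.x - this.delayed.x,
      y: this.immediate.y - this.delayed.y,
    };
  }
}
```

Because the sway is computed from the gap between layers rather than from simulated forces, the canonical position is never perturbed, which is the whole point.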
Prototyping the Board in Code
Before full backend integration, I built a lightweight chess representation layer to validate assumptions.
I implemented FEN parsing to generate deterministic board configurations. From there, I modeled piece identity, position, and movement rules. This wasn’t meant to replace a production engine — it was meant to stress-test the interaction model.
By encoding movement and capture rules directly, I could validate edge cases, ensure highlighting logic was correct, and confirm that animation hooks aligned cleanly with state transitions.
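A minimal piece-placement parser gives a flavor of this layer. The sketch below handles only the first FEN field (piece placement, ranks 8 down to 1) and is illustrative rather than a production engine:

```typescript
// Minimal FEN piece-placement parser. Uppercase = white, lowercase = black;
// digits encode runs of empty squares within a rank.
function parseFEN(fen: string): Map<string, string> {
  const board = new Map<string, string>(); // "e1" -> "K", "e8" -> "k", ...
  const ranks = fen.split(" ")[0].split("/");
  ranks.forEach((row, i) => {
    const rank = 8 - i; // FEN lists rank 8 first
    let file = 0;
    for (const ch of row) {
      if (/\d/.test(ch)) {
        file += Number(ch); // skip empty squares
      } else {
        board.set("abcdefgh"[file] + rank, ch);
        file += 1;
      }
    }
  });
  return board;
}

const start = parseFEN("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1");
```

Feeding arbitrary FEN strings through a function like this made it cheap to drop the board into any mid-game configuration and exercise highlighting and animation hooks against it.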
More importantly, it clarified what “state” actually meant for our system.
Evolving Beyond Distributed Events
Early on, pieces emitted their own movement events. That worked in a small prototype but became fragile as complexity increased.
As we approached production, we centralized updates into a Chess ViewModel and began representing moves as structured events in a log. State transitions became board-owned rather than piece-owned.
That shift simplified reasoning about the system. Instead of asking what each piece was doing, we could inspect the board as a single source of truth. Multi-piece interactions became easier. Debugging became tractable.
Most importantly, animation remained a consequence of state — never the cause.
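In shape, the centralized model looked something like the sketch below (names and structure are assumed for illustration): every transition flows through the board, is appended to a structured log, and is broadcast to observers such as the animation layer.

```typescript
// Hypothetical sketch: board-owned state with moves recorded as structured events.
interface MoveEvent {
  from: string;
  to: string;
  piece: string;
  captured?: string;
}

class ChessViewModel {
  private board = new Map<string, string>(); // square -> piece
  private log: MoveEvent[] = [];
  private listeners: ((e: MoveEvent) => void)[] = [];

  place(square: string, piece: string): void {
    this.board.set(square, piece);
  }

  // All transitions flow through the board -- pieces never mutate themselves.
  applyMove(from: string, to: string): void {
    const piece = this.board.get(from);
    if (!piece) return;
    const captured = this.board.get(to);
    this.board.delete(from);
    this.board.set(to, piece);
    const event: MoveEvent = { from, to, piece };
    if (captured !== undefined) event.captured = captured;
    this.log.push(event);                      // single source of truth for history
    this.listeners.forEach((fn) => fn(event)); // animation reacts to state, never drives it
  }

  onMove(fn: (e: MoveEvent) => void): void { this.listeners.push(fn); }
  history(): readonly MoveEvent[] { return this.log; }
  pieceAt(square: string): string | undefined { return this.board.get(square); }
}
```

With the log in place, debugging a multi-piece interaction means reading an ordered list of events rather than reconstructing what each piece thought it was doing.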
What This Foundation Enabled
By the time we launched v1, the board wasn’t just visually polished. It was architecturally prepared to evolve.
Because canonical state, interaction intent, and visual transforms were clearly separated, we could:
Layer richer animation without destabilizing logic
Integrate backend data binding cleanly
Add features like multi-piece selection
Iterate on “feel” independently from state representation
Building before data binding forced discipline. It made us define boundaries early, which ultimately made the system more scalable.