Think in
Entities
An interactive guide to Warp DB. Learn why entities are the unit of everything — and why your code gets simpler when you stop fighting shared state.
The Problem
You have a users table. 3 services write to it. You add a row-level lock. Things work.
Now 12 services write to it. You add MVCC, connection pooling, retry logic, deadlock detection. The schema has 47 columns.
A deploy goes wrong. 2 services have stale schema. Rows are half-written. The migration rolled back but the data didn't. You're debugging at 2am.
You add distributed transactions. Latency triples. The on-call rotates weekly because nobody wants to own the conflict resolution logic.
What if the thing that owns the data was the unit of everything — storage, concurrency, backup, and deletion?
┌─── One entity
│ ┌─── One actor (process)
│ │ ┌─── One writer
│ │ │
│     No locks. No MVCC. No conflicts. Ever.

Your First Entity
Build a working example in 5 steps. Each step adds one concept. By the end, you'll have writes, reads, and a full audit trail.
import warp
import warp/context.{WarpOptions, ActorMode, SingleNode}
import banking/account
let assert Ok(ctx) = warp.start(WarpOptions(
  ..defaults(),
  context: "banking",
  aggregates: [account_aggregate()],
  shards: 16,
  data_dir: "/data/banking",
  mode: ActorMode,
  cluster: SingleNode,
))

A context is your database instance. It owns the shard files, the actor registry, and the projection writer. You configure it once at startup.
The Single Writer Guarantee
This is the core insight. Everything else in Warp follows from this.
Your entity lives as an OTP process. It holds its state in memory. Writes serialize through its mailbox. No lock manager, no retry loop — the mailbox IS the serialization.
Two requests hit the same entity? They queue. Not with database locks — with message passing. The second caller waits microseconds, not milliseconds. No deadlocks. No timeouts.
There is no concurrent writer to disagree with. No optimistic concurrency. No "row was modified by another transaction." By construction, not by convention.
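A sketch of what this means in code, assuming the `account.append` command helper shown later in the mapping table:

```gleam
// Both commands target the "alice" entity, so they land in the
// same actor's mailbox and run strictly one after the other.
"alice" |> account.append(Debited(100))
"alice" |> account.append(Credited(50))

// A different entity means a different actor — this command runs
// in parallel with the two above, with no coordination at all.
"bob" |> account.append(Credited(25))
```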
Thinking in Entities
The single most important skill in Warp. Get this right and everything else is easy. Get it wrong and you've built a bottleneck.
You're building a chat app with rooms. 1000 users send messages simultaneously. How do you model it?
You're tracking inventory across 5 warehouses, 10,000 SKUs. Each SKU can be restocked or sold from any warehouse. How do you model it?
The Rule: Ask yourself — who writes to this data? If the answer is "many things," you have many entities, not one.
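Applied to the two scenarios above — the `room.append` and `stock.append` helpers are illustrative, following the same pattern as `account.append`:

```gleam
// Chat: many users write to a room, but the room's message log has
// one owner — so the room is the entity. 1000 users across 50 rooms
// means 50 serialized mailboxes, not one contended table.
"room:lobby" |> room.append(MessageSent(from: "ada", text: "hi"))

// Inventory: stock movements for one SKU at one warehouse have one
// writer — so the entity is the (warehouse, SKU) pair. That's 50,000
// small entities, not one giant "inventory" entity.
"wh-3/sku-8841" |> stock.append(Sold(qty: 2))
```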
Event Storming
Before you write code, you need to understand your domain. Event Storming is how you get there — no ERDs, no upfront schema design, just sticky notes and conversations.
Event Storming was invented by Alberto Brandolini in 2012. His insight: the fastest way to understand a business domain is to get domain experts and developers in a room with unlimited sticky notes and ask one question — "what happens in your system?"
No UML. No entity-relationship diagrams. No upfront schema. You start with domain events — things that happened — and work backwards to discover commands, aggregates, policies, and read models. The result is a shared understanding of the domain that maps directly to code.
Entity-relationship diagrams model tables and foreign keys. Warp doesn't have tables or foreign keys. It has entities, events, and projections. Trying to design a Warp system with an ERD is like trying to design a message-passing system with a spreadsheet — the abstraction doesn't fit. Event Storming maps directly to Warp's primitives.
The mapping from an event storm to Warp code is direct:
| Sticky Note | Warp Primitive | Code |
|---|---|---|
| Event | Event type in aggregate | Credited(amount: Int) |
| Command | warp.append() call | alice |> account.append(Credited(5000)) |
| Aggregate | AggregateDefinition | account_aggregate() |
| Policy | Saga or projection trigger | saga.new("xfr") |> saga.step(...) |
| Read Model | Projection to rqlite | p.upsert("balances") |> p.id(...) |
In a traditional event storm, the entity boundaries are the hardest part to discover. With Warp, the question is simple: "who is the single writer?" Each writer is an entity. Group the stickies by writer, and you have your Warp architecture.
Alberto Brandolini designed Event Storming for DDD, and it works with any event-sourced system. Warp makes the mapping trivial because every Warp primitive already has a sticky-note color.
References, Not Joins
In traditional databases, relationships live in the schema. In Warp, they live in event payloads and projections.
-- 3 tables, FK constraints,
-- 3 migration files
CREATE TABLE users (
id UUID PRIMARY KEY,
email TEXT NOT NULL
);
CREATE TABLE orders (
id UUID PRIMARY KEY,
user_id UUID REFERENCES users(id)
ON DELETE CASCADE,
total INT
);
CREATE TABLE items (
id UUID PRIMARY KEY,
order_id UUID REFERENCES orders(id)
ON DELETE CASCADE,
sku TEXT, qty INT
);
-- Add a column? Migration.
-- Rename a table? 4 files change.
-- Delete a user? CASCADE and pray.

// References in event payloads
// No FK constraints, no migrations
pub type OrderEvent {
OrderPlaced(
customer_id: String,
items: List(Item),
total: Int,
)
OrderShipped(tracking: String)
OrderRefunded(reason: String)
}
// Projection denormalizes for queries
p.upsert("orders")
|> p.id(order_id)
|> p.set("customer", p.text(customer_id))
|> p.set("total", p.int(total))
|> p.build
// Add a field? Next event has it.
// Delete a user? warp.delete(user).
// One call. Done.

Key insight: Your data model evolves by appending new event types, not by migrating tables. Old events keep their original shape. New events add new fields. The apply function handles both.
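A sketch of that evolution, with an illustrative `Order` state type and an assumed `apply` signature. Old `OrderPlaced` events decode exactly as they always did; the new event type is just one more branch:

```gleam
pub type Order {
  Order(customer: String, items: List(Item), total: Int)
}

pub type OrderEvent {
  OrderPlaced(customer_id: String, items: List(Item), total: Int)
  OrderShipped(tracking: String)
  OrderRefunded(reason: String)
  // Added later — no migration, old events are untouched:
  OrderDiscounted(amount: Int)
}

fn order_apply(state: Order, event: OrderEvent) -> Order {
  case event {
    OrderPlaced(customer_id, items, total) ->
      Order(customer: customer_id, items: items, total: total)
    OrderShipped(_) -> state
    OrderRefunded(_) -> Order(..state, total: 0)
    // New branch handles the new event type; nothing else changes.
    OrderDiscounted(amount) -> Order(..state, total: state.total - amount)
  }
}
```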
Sagas
Within one entity, writes are strongly consistent. But what about operations that span two entities — like transferring money from Alice to Bob? You can't lock two actors. Sagas orchestrate multi-entity operations with automatic compensation on failure.
import warp/saga.{StepEvent}
let assert Ok(Committed(..)) =
saga.new("xfr-001")
|> saga.step(
entity: "alice",
aggregate: "Account",
schema_vsn: 1,
forward: StepEvent(
"Debited", "{\"amount\":500}"),
compensate: StepEvent(
"Credited", "{\"amount\":500}"),
)
|> saga.step(
entity: "bob",
aggregate: "Account",
schema_vsn: 1,
forward: StepEvent(
"Credited", "{\"amount\":500}"),
compensate: StepEvent(
"Debited", "{\"amount\":500}"),
)
|> saga.commit(ctx)

A commit has three possible outcomes:
- Committed: all forward steps succeeded.
- A step failed: prior steps were reversed automatically.
- Compensation itself failed: requires operator intervention.
Every step gets a deterministic event_id: saga_id/fwd/0, saga_id/fwd/1. Replaying the same saga is idempotent — completed steps are skipped. Crash recovery is automatic via saga.recover_pending(ctx).
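That suggests a startup sequence like this — a sketch, with the return shapes assumed:

```gleam
// Boot: start the context, then sweep for sagas that were mid-flight
// when the process died. Deterministic event_ids make the sweep
// idempotent: already-applied steps are skipped, not re-run.
let assert Ok(ctx) = warp.start(options)
let assert Ok(_) = saga.recover_pending(ctx)
```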
Your Read Model
Events are your source of truth — but you can't SELECT from an event log. Projections automatically build query-friendly tables in rqlite from your events.
pub fn todo_aggregate() -> AggregateDefinition {
AggregateDefinition(
name: "Todo",
initial_state: dynamic.list([]),
apply: todo_apply,
project: todo_project,
migrate: fn(evt, _vsn) { evt },
)
}

Each aggregate has a project function. It receives the entity ID, the current state, and the event — and returns a list of projection commands. These commands describe what to write to rqlite.
Projections are derived. If rqlite crashes, replay all events and rebuild from scratch. Your event log is the source of truth — the projection is just a materialized view.
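A sketch of the two pure functions behind `todo_aggregate()`. The `TodoAdded`/`TodoCompleted` event shapes are illustrative, the projection uses only the combinators shown above, and we assume `project` receives the pre-event state:

```gleam
import gleam/list

fn todo_apply(state, event) {
  case event {
    TodoAdded(text) -> [text, ..state]
    TodoCompleted(text) -> list.filter(state, fn(t) { t != text })
  }
}

fn todo_project(entity_id, state, event) {
  // Keep one rqlite row per todo-list entity, carrying the live
  // count of open todos after this event is applied.
  let remaining = list.length(todo_apply(state, event))
  [
    p.upsert("todo_lists")
    |> p.id(entity_id)
    |> p.set("remaining", p.int(remaining))
    |> p.build,
  ]
}
```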
ActorMode vs ReadThrough
Two modes, same API. Pick based on your read/write ratio. You can always switch later.
| Aspect | ActorMode | ReadThrough |
|---|---|---|
| Read latency | ~ns (memory) | ~100µs (fold) |
| Write latency | ~500µs | ~200µs |
| Memory per entity | KB–10s KB | ~0 |
| Startup cost | Replay events | None |
| Best for | Games, real-time | Audit, PSPs |
What's the Catch?
Every tool has tradeoffs. Here are Warp's — honestly.
If you've spent years thinking in tables, JOINs, and transactions, entity-thinking feels foreign at first. This is normal. The mental model is simpler — but simpler doesn't mean familiar. Give it a week. It clicks.
Within one entity: strong consistency. Across entities: eventual consistency via projections. If you need "read Alice and Bob, both reflecting the saga" — that takes milliseconds, not microseconds. For most applications, this is invisible.
Warp doesn't have SQL JOINs, window functions, or ad-hoc analytics. Your analytical queries go through rqlite projections. For OLAP, export to a data warehouse. For everything OLTP — Warp is faster, simpler, and gives you more mental clarity.
Gotchas & How to Handle Them
Every database has edge cases. Here are the ones you'll actually encounter — and exactly how to handle them.