WARP
Bending the Fabric of Storage v0.1
One entity. One actor. One writer. No conflicts. Ever.
Alice closes her account. All her data must go. Same task, two approaches.
// ── GDPR: Right to Erasure ───
// Alice closes her account.
// All her data must go.
// 1. Data portability (Art. 20)
let assert Ok(export) =
alice |> warp.export()
// Returns every event ever stored
// for this entity. Hand it to Alice.
// 2. Right to erasure (Art. 17)
let assert Ok(Nil) =
alice |> warp.delete()
// What just happened:
// → Actor stopped
// → All events purged from shard
// → Projections cleaned (rqlite)
// → Snapshot removed
// → Backup segments orphaned
// (no entity = no key = backups
// for this entity are dead data)
//
// One entity. One call. Gone.
// No FK cascade. No orphan check.
// No "did we miss a table?" # ── Data portability (Art. 20)
# (assumes `import Ecto.Query`)
def export_user(repo, user_id) do
  user = repo.get!(User, user_id)
  acct_ids =
    repo.all(
      from a in Account,
      where: a.user_id == ^user_id,
      select: a.id
    )
  %{
    user: user,
    accounts: repo.all(
      from a in Account,
      where: a.user_id == ^user_id),
    transactions: repo.all(
      from t in Transaction,
      where: t.account_id in ^acct_ids),
    kyc: repo.all(
      from k in KycDocument,
      where: k.user_id == ^user_id),
    sessions: repo.all(
      from s in Session,
      where: s.user_id == ^user_id),
    audit: repo.all(
      from a in AuditLog,
      where: a.user_id == ^user_id)
  }
end
# ── Right to erasure (Art. 17)
# Order matters (FK constraints)
def delete_user(repo, user_id) do
  acct_ids =
    repo.all(
      from a in Account,
      where: a.user_id == ^user_id,
      select: a.id
    )
  Ecto.Multi.new()
  |> Ecto.Multi.delete_all(
    :intents,
    from(i in TransferIntent,
      where: i.from_acct in ^acct_ids
        or i.to_acct in ^acct_ids))
  |> Ecto.Multi.delete_all(
    :audit,
    from(a in AuditLog,
      where: a.user_id == ^user_id))
  |> Ecto.Multi.delete_all(
    :sessions,
    from(s in Session,
      where: s.user_id == ^user_id))
  |> Ecto.Multi.delete_all(
    :kyc,
    from(k in KycDocument,
      where: k.user_id == ^user_id))
  |> Ecto.Multi.delete_all(
    :txns,
    from(t in Transaction,
      where: t.account_id in ^acct_ids))
  |> Ecto.Multi.delete_all(
    :accounts,
    from(a in Account,
      where: a.user_id == ^user_id))
  |> Ecto.Multi.delete_all(
    :user,
    from(u in User,
      where: u.id == ^user_id))
  |> repo.transaction()
end
# ── Purge from replicas ───────
def purge_replicas(user_id) do
  Enum.each(replica_repos(), fn repo ->
    delete_user(repo, user_id)
  end)
end
# ── Advance replication slots ─
def advance_slots(repo) do
  # query!/3 raises on error and returns
  # the result directly, no {:ok, _} tuple
  %{rows: slots} =
    Ecto.Adapters.SQL.query!(repo,
      "SELECT slot_name
       FROM pg_replication_slots
       WHERE active = true")
  Enum.each(slots, fn [name] ->
    Ecto.Adapters.SQL.query!(repo,
      "SELECT
         pg_logical_slot_get_changes(
           $1, NULL, NULL)",
      [name])
  end)
end
# ── Force WAL rotation ────────
def rotate_wal(repo) do
  Ecto.Adapters.SQL.query!(
    repo, "SELECT pg_switch_wal()")
  Ecto.Adapters.SQL.query!(
    repo, "CHECKPOINT")
end
# ── Log for DPO audit ─────────
def log_deletion(repo, user_id) do
  repo.insert!(%GdprDeletionLog{
    user_id: user_id,
    tables_purged: 7,
    replicas_purged:
      length(replica_repos()),
    wal_rotated: true,
    slots_advanced: true
  })
end
# ── Run it all ────────────────
def gdpr_erase(user_id) do
  export = export_user(EURepo, user_id)
  {:ok, _} = delete_user(EURepo, user_id)
  purge_replicas(user_id)
  advance_slots(EURepo)
  rotate_wal(EURepo)
  log_deletion(EURepo, user_id)
  {:ok, export}
end

One entity. One call. Gone. No FK cascade. No orphan check. No "did we miss a table?" Warp's entity-scoped storage makes right-to-erasure trivial by design.
Every Database Assumes Shared Writers
3 services write to the users table. You add row locks.
12 services, 3 regions. MVCC, connection pools, retry logic, a DBA.
A cross-region transfer fails midway. Distributed transactions, saga tables you maintain yourself, latency triples.
What if the thing that owns the data was the unit of concurrency, storage, backup, and deletion?
Entity
Your data boundary. One entity = one user, one order, one sensor. It owns its events, its state, and its lifecycle. warp.entity(ctx, "alice") gives you a handle to everything Alice.
Actor
Your concurrency model. Each entity is an OTP process. Writes serialize through the mailbox. Reads come from memory. No locks, no MVCC, no conflicts — by construction.
Event
Your source of truth. You don't UPDATE a row — you append a fact. Credited(5000), Debited(1000). The aggregate folds events into current state. Full audit trail, time-travel, replay.
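To make the fold concrete, here is a minimal sketch. The event and state types are assumptions shaped by the Credited/Debited example above; Warp's real aggregate API may differ.

```gleam
import gleam/list

// Illustrative only: assumed event and state types.
pub type AccountEvent {
  Credited(amount: Int)
  Debited(amount: Int)
}

pub type Account {
  Account(balance: Int)
}

// Current state is a left fold over the event history.
fn apply(state: Account, event: AccountEvent) -> Account {
  case event {
    Credited(amount) -> Account(balance: state.balance + amount)
    Debited(amount) -> Account(balance: state.balance - amount)
  }
}

pub fn replay(events: List(AccountEvent)) -> Account {
  list.fold(events, Account(balance: 0), apply)
}
// replay([Credited(5000), Debited(1000)])
// == Account(balance: 4000)
```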
From zero to a working transfer saga in 6 steps.
import warp
import warp/context.{WarpOptions, ActorMode, SingleNode}
let assert Ok(ctx) = warp.start(WarpOptions(
  ..defaults(),
  context: "banking",
  aggregates: [account_aggregate()],
  shards: 16,
  data_dir: "/data/banking",
  mode: ActorMode,
  cluster: SingleNode,
))

A context is your database instance. It owns the shard files, the actor registry, and the projection writer.
Client → Router → Shard → Entity Actor → State / Events / Projections
Nanosecond Reads
State lives in actor memory. No disk round-trip, no query planner, no network hop.
Zero-Conflict Writes
Single writer guarantee per entity. No locks, no MVCC, no retry logic. Conflicts are impossible by construction.
GDPR-Native Deletion
One entity, one call. All events, projections, and snapshots purged. No FK cascade, no orphan check.
Linear Scaling
No Raft, no quorum, no replication factor. Add nodes, entities redistribute. Scale by addition.
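Here is a sketch of what those guarantees look like at the call site. warp.entity and warp.append are the calls shown elsewhere on this page; the read call warp.get_state and the Ok(Nil) return shape are assumptions.

```gleam
let alice = warp.entity(ctx, "alice")

// Write: serialized through Alice's mailbox. No lock to take,
// no competing writer to conflict with.
let assert Ok(Nil) = alice |> warp.append(Credited(5000))

// Read: the folded state already sits in the actor's memory,
// so there is no disk round-trip and no query planner.
let assert Ok(account) = alice |> warp.get_state()
```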
You have a running 3-node cluster. You want to add 2 more nodes. Same task, three databases:
// You have a 3-node cluster.
// Add nodes 4 and 5.
//
// On each NEW node, point it
// at any existing peer:
warp.start(WarpOptions(
  ..defaults(),
  context: "payments",
  aggregates: [account_aggregate()],
  shards: 16,
  data_dir: "/data",
  mode: ActorMode,
  cluster: Clustered(
    seed_nodes: ["warp@10.0.0.1"],
  ),
))
// That's it. The new nodes
// join, the consistent-hash
// ring updates, 1/N of
// entities migrate live.
// Zero downtime, no repair,
// no nodetool, no certs.
# On EACH new node, edit
# /etc/scylla/scylla.yaml:
cluster_name: payments
listen_address: 10.0.0.4
rpc_address: 10.0.0.4
seed_provider:
  - parameters:
      - seeds: "10.0.0.1,10.0.0.2"
auto_bootstrap: true
# Bring node up — ONE AT A TIME
# (parallel = token collision)
systemctl start scylla-server
nodetool status # wait UN
# Repeat for node 5, then:
nodetool cleanup  # on every OLD node
# Faster than Cassandra here,
# but still hours of streaming.
# If RF needs bumping:
# ALTER KEYSPACE payments ...
# nodetool repair -full
# Provision certs for new nodes
cockroach cert create-node \
10.0.0.4 localhost \
--certs-dir=certs \
--ca-key=ca.key
cockroach cert create-node \
10.0.0.5 localhost \
--certs-dir=certs \
--ca-key=ca.key
# Copy certs to each new node,
# then on each:
cockroach start \
--certs-dir=certs \
--advertise-addr=10.0.0.4 \
--join=10.0.0.1,10.0.0.2,10.0.0.3 \
--cache=.25 --max-sql-memory=.25
# Cluster auto-rebalances ranges.
# Watch it for hours:
cockroach node status \
--certs-dir=certs
# Re-check zone configs,
# lease preferences, and pay
# the ~2ms Raft tax per write

Why it's this simple: Warp DB has no Raft, no Paxos, no quorum, no replication factor. Each entity has exactly one writer (its actor), and entities are routed deterministically by consistent hashing. Adding a node means reshuffling 1/N of entities, handled by warp.migrate() with zero downtime. ScyllaDB and CockroachDB both require multi-node minimums, repair cycles, and hours of streaming. There's no consistency level to pick because there's no concurrent writer to disagree with.
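A toy version of the routing idea, shown below, uses plain modulo instead of a real ring, so it demonstrates the determinism but not the 1/N-migration property; none of this is Warp's actual code.

```gleam
import gleam/list
import gleam/string

// Toy deterministic router: hash the entity id, pick one of N
// shards. Every node computes the same answer, so no coordinator
// is needed. Warp's real ring is a consistent hash, which is what
// limits a node join to moving only 1/N of entities.
fn shard_for(entity_id: String, shards: Int) -> Int {
  let hash =
    entity_id
    |> string.to_utf_codepoints
    |> list.fold(0, fn(acc, cp) {
      acc * 31 + string.utf_codepoint_to_int(cp)
    })
  hash % shards
}
// shard_for("alice", 16) is the same on every node, every time.
```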
Warp DB vs ScyllaDB — Docker, Same Hardware
Apple M1 · Docker (5 CPU cores, 10GB RAM) · ScyllaDB --smp 4 --memory 8G · Warp DB native C writer · durable writes
Single-Event Writes
| Concurrent callers | ScyllaDB | Warp DB | Advantage |
|---|---|---|---|
| 16 | 10K ev/s | 139K ev/s | 13.9x |
| 256 | 11K ev/s | 139K ev/s | 12.6x |
| 1,000 | 7K ev/s | 117K ev/s | 16.7x |
Batched Writes (500 events/call)
Why it's this fast: Warp DB writes are in-process (no TCP, no serialization) with zero distributed consensus on the write path. One actor per entity, one SQLite shard writer per partition. The batch path packs 500 events into a single NIF call — 2 Erlang messages per 500 events vs 1,000 messages for individual writes. Same SQLite transaction, same durability, 500x fewer coordination round-trips.
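A sketch of that batch path: warp.append_batch is named in the scaling table below, but its exact signature, and the Credited event, are assumptions.

```gleam
import gleam/list

// Hypothetical batch call: 500 events in one NIF call and one
// SQLite transaction, versus 500 separate mailbox round-trips.
let events = list.repeat(Credited(10), times: 500)

let assert Ok(Nil) =
  warp.entity(ctx, "alice")
  |> warp.append_batch(events)
```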
Throughput Scaling
Single-writer-per-entity means zero coordination on the write path. Scaling is linear — no Raft quorum tax, no ring rebalancing storms.
| Write path | 5 Docker cores | 8 Mac cores | Projected 32 cores |
|---|---|---|---|
| warp.append (single event) | 139K ev/s | 133K ev/s | ~500K ev/s |
| warp.append (auto-batched) | — | 700K ev/s | ~2.5M ev/s |
| warp.append_batch (explicit batch) | 1,553K ev/s | 737K ev/s | ~5M ev/s |
Linux (Docker) outperforms macOS at high throughput, likely due to faster I/O scheduling in the Linux guest. The batch path on Linux hit 1.55M ev/s on just 5 cores, twice the macOS number. warp.append with concurrent callers benefits from automatic write coalescing by the shard writer.
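The concurrent-caller pattern behind the auto-batched row might look like the following; the task usage is standard gleam_otp, while the entity ids and the event are made up.

```gleam
import gleam/int
import gleam/list
import gleam/otp/task

// 256 concurrent callers, each writing to its own entity. The
// shard writer coalesces whatever lands in the same window into
// one SQLite transaction.
list.range(1, 256)
|> list.map(fn(i) {
  task.async(fn() {
    warp.entity(ctx, "acct-" <> int.to_string(i))
    |> warp.append(Credited(1))
  })
})
|> list.map(task.await_forever)
```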
What's the Catch?
It's a Paradigm Shift
Entity-thinking feels foreign if you've spent years in tables and JOINs. The model is simpler — but simpler doesn't mean familiar. Start with one bounded context. The /learn tutorial walks you through it.
Eventually Consistent
Within one entity: strong consistency. Across entities: eventual consistency via projections. For most applications, the ms-level delay is invisible. If you need immediate cross-entity reads, query both entities directly.
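For example, a transfer view that must be current on both sides can read each entity instead of waiting on the projection. warp.get_state is the same assumed read call as in the sketch above.

```gleam
// Strong per-entity reads: each actor answers from its own
// serialized, in-memory state, so both values are exact even if
// the cross-entity projection is a few ms behind.
let assert Ok(alice_acct) =
  warp.entity(ctx, "alice") |> warp.get_state()
let assert Ok(bob_acct) =
  warp.entity(ctx, "bob") |> warp.get_state()
```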
Not a Query Engine
No ad-hoc JOINs or window functions. Analytical queries go through rqlite projections. For OLAP, export to a data warehouse. For OLTP — Warp is faster, simpler, and gives you more mental clarity than any shared-writer database.
What about JOINs?
Can I migrate from Postgres?
Is it production-ready?
What about backups?
Do I need to know Gleam?
How big can entities get?
Stop fighting shared state.
One entity. One actor. One writer. Ship faster.