Vivian Voss

Posted on • Originally published at vivianvoss.net

The Single Writer

By Design — Episode 01

In 2000, D. Richard Hipp built a database for a U.S. Navy destroyer. The problem was rather specific: the existing Informix installation required a DBA. The DBA was not always available. When the database went down, the software went down. Hipp's solution was not to fix the server. It was to eliminate it entirely. 600 KB. One file. Zero configuration.

The Complaint

"SQLite can't handle concurrent writes."

Correct. Only one writer at a time. The others queue. Every Stack Overflow thread about SQLite eventually contains this revelation, delivered with the gravity of someone discovering that water is wet.

The Decision

Hipp did not forget to add concurrent writes. He chose not to.

A single writer means no deadlocks. No lock contention. No transaction ordering puzzles that require a PhD and three cups of coffee to debug. The entire concurrency model fits in your head.

WAL mode (2010) completed the picture: unlimited concurrent readers, one writer, zero interference. Readers never block the writer. The writer never blocks readers. Rather elegant, that.
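The reader/writer independence of WAL mode can be seen directly with Python's built-in `sqlite3` module. This is a minimal sketch; the file path, table name, and `isolation_level=None` (autocommit, so `BEGIN`/`COMMIT` are issued explicitly) are my own choices for illustration:

```python
import sqlite3, tempfile, os

# Throwaway database file, for illustration only.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None puts the sqlite3 module in autocommit mode,
# so the BEGIN/COMMIT below are explicit rather than implicit.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, msg TEXT)")
writer.execute("INSERT INTO events (msg) VALUES ('first')")

reader = sqlite3.connect(path, isolation_level=None)

# Open a write transaction and leave it uncommitted...
writer.execute("BEGIN IMMEDIATE")
writer.execute("INSERT INTO events (msg) VALUES ('second')")

# ...the reader proceeds anyway, seeing the last committed snapshot.
rows = reader.execute("SELECT msg FROM events ORDER BY id").fetchall()
print(rows)  # [('first',)] -- the uncommitted 'second' is invisible

writer.execute("COMMIT")
count = reader.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 2 -- a fresh read sees the committed write
```

The reader is never blocked: it reads the last committed snapshot while the write transaction is open, and picks up the new row on its next read after the commit.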

The Trade-Off

Honest cost: if your application needs hundreds of concurrent write transactions per second, SQLite is the wrong tool. Hipp says so himself. This is not a limitation he is embarrassed about. It is a boundary he drew deliberately.

The trade-off bought him everything else: one file, zero configuration, no daemon, no DBA, no setup procedure. Cross-platform. Copy the file. It works.
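"Copy the file. It works" is literally true: with no connection open, a backup is an ordinary file copy. A small sketch (file names invented for illustration):

```python
import sqlite3, os, shutil, tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "app.db")

con = sqlite3.connect(src)          # no daemon, no credentials, no setup
con.execute("CREATE TABLE notes (body TEXT)")
con.execute("INSERT INTO notes VALUES ('hello')")
con.commit()
con.close()

# With the connection closed, a backup is a plain file copy.
dst = os.path.join(d, "backup.db")
shutil.copyfile(src, dst)

restored = sqlite3.connect(dst).execute("SELECT body FROM notes").fetchone()
print(restored)  # ('hello',)
```

(For copying a database that is still in use, SQLite provides an online backup API instead; the plain copy above is only safe once the writers have closed.)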

The Proof

  • 1 trillion active databases worldwide
  • Every Android device. Every iPhone. Every major browser.
  • Airbus A350 avionics (DO-178B: the kind where lives depend on it)
  • Library of Congress recommended archival format
  • 35% faster than raw filesystem I/O for small reads
  • 156,000 lines of source. 92 million lines of tests.
  • Public domain. Supported until 2050.

The "toy database" runs in more places than every other database engine combined.

The Principle

Every design decision is a trade-off. The mediocre ones try to have everything. The brilliant ones know what to sacrifice.

Hipp sacrificed write concurrency. He gained simplicity so complete that his database became the most deployed software module in history.

One does wonder what we might accomplish if we had the courage to say "no" more often.

Read the full article on vivianvoss.net →


By Vivian Voss — System Architect & Software Developer. Follow me on LinkedIn for daily technical writing.

Top comments (2)

Andre Cytryn

the WAL mode angle is underrated in these discussions. most people hit the single-writer complaint and stop there, without realizing that readers never block the writer in WAL mode. for the vast majority of use cases that complaint is theoretical, not real. what workloads have you found actually push SQLite past its limits in practice?

Vivian Voss

Quite right on WAL mode! The complaint is almost always theoretical because most applications are read-heavy, and WAL turns that into a non-issue entirely.

The workloads where I have seen SQLite genuinely struggle:

  1. High-frequency concurrent inserts from multiple processes (not threads). IoT telemetry ingestion, for instance, where dozens of independent collectors write simultaneously. The SQLITE_BUSY timeout becomes a real bottleneck, not a theoretical one.

  2. Large analytical queries running alongside writes. A reporting query scanning millions of rows holds a read snapshot in WAL, which prevents WAL checkpointing. The WAL file grows unbounded until the reader finishes. On constrained storage (embedded, edge devices), this becomes uncomfortable rather quickly.

  3. Write-heavy web applications with long-running transactions. If your business logic does network calls between BEGIN and COMMIT, you are holding the write lock for the duration of that call. Every other writer queues behind your API latency.
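What "every other writer queues" looks like in practice: a second writer waits up to its busy timeout, then gets SQLITE_BUSY (surfaced in Python as `OperationalError: database is locked`). A small sketch, with a deliberately short 100 ms timeout and invented names:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "busy.db")

a = sqlite3.connect(path, isolation_level=None)
a.execute("PRAGMA journal_mode=WAL")
a.execute("CREATE TABLE t (x)")

# timeout is the busy timeout: how long a blocked writer waits
# before giving up with SQLITE_BUSY.
b = sqlite3.connect(path, isolation_level=None, timeout=0.1)

a.execute("BEGIN IMMEDIATE")        # connection a takes the write lock

try:
    b.execute("BEGIN IMMEDIATE")    # b queues for 100 ms, then gives up
    locked = False
except sqlite3.OperationalError:
    locked = True
print("locked:", locked)            # locked: True

a.execute("COMMIT")                 # release the lock promptly...
b.execute("BEGIN IMMEDIATE")        # ...and b acquires it at once
b.execute("COMMIT")
```

This is why holding a write transaction across a network call is so costly: every queued writer pays your API latency, and any writer whose busy timeout expires first gets the error instead.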

For everything else (and that covers the vast majority of applications), the single-writer model is not a limitation anyone encounters in practice. It is a constraint that exists in documentation, not in production.

The honest boundary: if you need more than roughly 10,000 write transactions per second from concurrent sources, reach for PostgreSQL. Below that, SQLite is not merely adequate. It is superior.