The revenge of SQL: How a 50-year-old language reinvents itself

Prototyping is my favorite part of programming. I like building new stuff and getting things working. It’s no surprise, then, that I am a big fan of MongoDB and NoSQL in general. Don’t get me wrong: I’ve always appreciated SQL for what it is. The intoxicating smoothness of using MongoDB in JavaScript just swept me off my feet.

Led by the dynamic PostgreSQL team, SQL has recently orchestrated an incredible comeback. It’s never stopped being at the heart of enterprise data. But now it is both the traditional choice and on the list of exciting tech to watch. How did that happen?

The making of an SQL comeback

It all started when SQLite, the lightweight relational database, brought SQL into the browser. SQL in the browser enabled a new architecture built on syncing between the client and the back end, where SQL, and not JSON, was the hinge. Language tools played along, making it more comfortable to use SQL from any platform. The well-understood predictability of the relational architecture continued its long game of quietly winning converts, and then PostgreSQL topped it off with the new schemaless jsonb type.

And that’s how it happened: Just when you thought it was dead, SQL became cool again.

The myth of ‘schemaless’

The thing that makes NoSQL in JavaScript so alluring is that you don’t have to leave the language paradigm in order to think about or manage your database structure, the schema. If you want to insert some new type while you are coding, you just do something like this:

await db.collection('cats').insertOne({ name: 'Fluffy', mood: 'Judgmental' });

Even if the cats collection doesn’t exist yet, the store will create it for you. It’s the same with the data “shape” (name and mood). And best of all, you can just shove the JSON object right in there.

This appears to be the holy grail of frictionless data: The database and the code both speak JSON. You don’t have to stop to write a CREATE TABLE statement. You don’t have to run a migration script. You don’t have to think about the data; you just create what you need, on the fly, and the datastore accommodates.

But as our prototypes mature into production systems, we discover an uncomfortable truth: The schema is still there, but now it’s in our code. It’s implicit, and it looks like this:

if (cat && cat.mood && typeof cat.mood === 'string')

Or, if you like:

const mood = cat?.mood ?? 'neutral';

The code now enforces the schema. This is an ongoing, systemic fact of life in the schemaless world. Of course, even with a strict schema, you do this kind of thing for validation (whether in code or with a validation framework), but the authoritative record of consistency lives in the database itself.
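By contrast, a relational schema moves those guarantees into the database itself. A minimal sketch (the table and constraints here are illustrative, not from the original):

```sql
-- The database, not scattered application code, now guarantees
-- that every cat has a name and a string-valued mood.
CREATE TABLE cats (
    id   SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    mood TEXT NOT NULL DEFAULT 'neutral'
);
```

With this in place, an insert that omits name fails at the database boundary instead of producing a row your code has to defend against later.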

The pressure of building out a large system without strong consistency causes real anxiety. What developers really want is strong data integrity with low friction. And now, three trends have converged to make that possible with SQL:

  • SQL on the front end with syncing
  • Better SQL clients
  • SQL with schemaless types (JSONB)

The first is bold and new; the second is plain old engineering; the last is evolutionary adaptation.

Let’s take a closer look.

SQL on the front end

The first solution involves a radical rethinking of where the database lives. For 30 years, the database was a lumbering monster locked in the server room. The browser was just a dumb terminal that begged APIs for data.

But thanks to WebAssembly (WASM), we can now run the actual database engine right inside the browser. Technologies like PGlite (PostgreSQL in WASM) and SQLite (via standardized browser builds) have transformed the database into a client-side technology.

The move to the front end also sparked the rise of serverless SQL for analytics and edge computing. Tools like DuckDB let developers crunch millions of rows of analytical data on the user’s device or at the edge, all without needing a massive cloud warehouse.
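For instance, DuckDB can query a Parquet file directly, in-process, with no warehouse or server involved (the file name and columns here are invented for illustration):

```sql
-- Runs entirely on the user's device or at the edge;
-- 'events.parquet' is a hypothetical local file.
SELECT user_id, count(*) AS clicks
FROM read_parquet('events.parquet')
GROUP BY user_id
ORDER BY clicks DESC
LIMIT 10;
```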

This development by itself would be interesting but not earth-shattering, if not for the introduction of syncing technologies like ElectricSQL. Syncing is an idea that has been around in projects like PouchDB in the NoSQL world, but now it’s catching on with SQL. Syncing lets us use the same datastore (or a portion of it) in the browser and on the server, and the syncing engine automatically handles the negotiation between the two.

Syncing also opens up the potential of a local-first database architecture. Instead of writing complex API endpoints (GET /cats, POST /cats) and loading spinners, your front-end code just talks to its local database.

You INSERT a record locally, and it happens instantly. Then, a background sync engine (like ElectricSQL or Replicache) handles the messy work of getting that data to the server. The API layer is eliminated entirely.
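The shape of that pattern can be sketched in plain JavaScript. Everything here is a toy illustration: a real sync engine like ElectricSQL or Replicache handles conflict resolution, retries, and ordering, which this simple outbox ignores.

```javascript
// Toy local-first write path: writes land in a local store immediately,
// and an outbox queue records what still needs to reach the server.
const localDb = new Map();   // stand-in for the in-browser database
const outbox = [];           // pending changes awaiting background sync

function insertLocal(table, row) {
  const key = `${table}:${row.id}`;
  localDb.set(key, row);                      // instant local write
  outbox.push({ op: 'insert', table, row });  // queued for the sync engine
}

// Stand-in for the background sync engine: drain the outbox to the server.
async function syncToServer(sendFn) {
  while (outbox.length > 0) {
    const change = outbox.shift();
    await sendFn(change); // real engines batch, retry, and resolve conflicts
  }
}

insertLocal('cats', { id: 1, name: 'Fluffy', mood: 'Judgmental' });
// The UI can read the row back immediately, before any network call:
console.log(localDb.get('cats:1').mood); // 'Judgmental'
```

The user never waits on the network; the server catches up whenever `syncToServer` runs.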

Of course, the shift to local-first requires serious mental rejiggering and also has architectural implications. But locating a relational database directly in the browser raises the prospect of SQL as the new universal data language.

Better SQL clients

The second factor comes down to hard engineering work: long years of consistent iteration on database clients.

It turns out that much of SQL’s reputation as a clunky old technology was actually a tooling problem. Regardless of the language used, writing SQL meant concatenating strings or wrestling with heavy, magical ORMs.

Although ORM tools like Hibernate/JPA let developers manage data inside their language of choice (in this case, Java), they abstract the mechanics to the point where it’s hard to grasp what is happening. Reasoning about data flows becomes disorienting, and it’s easier to make mistakes.

But a new generation of ORM-lite tools is working to bridge the gap. Tools like Drizzle (for TypeScript), Exposed (for Kotlin), and jOOQ (for Java) put the focus on developer experience. They map the rigidity of SQL to the idiom of your programming language. As an example, here’s how Drizzle makes querying a table feel like filtering a JSON array in TypeScript, but with full type safety:

import { eq } from 'drizzle-orm';

const grumpyCats = await db
  .select()
  .from(cats)
  .where(eq(cats.mood, 'Judgmental'));

Tools like these mean we no longer need to guess whether our code matches our data. They give us a feel more like MongoDB—where code and data speak the same language—without sacrificing the integrity of the schema.

SQL with schemaless types

The Postgres team asked the question: What if a relational database could speak schemaless JSON? The jsonb type is the answer.

Although PostgreSQL was the pioneer, other databases have followed suit. It was a brilliant strategic move that let developers use schemaless documents when the need was there, but within the context of the relational structure.

This reduced the need for polyglot persistence architectures (the idea, popular in 2015, that you needed PostgreSQL for your transactions and MongoDB for your catalogs).

Instead, JSONB gave us strict ACID compliance for critical data like financial transactions and PII, and flexible JSON blobs for messy data like configs and logs—and did it all in the same row. We realized we didn’t need to abandon SQL for flexibility; instead, SQL just needed to loosen up a bit.
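In PostgreSQL, that hybrid row looks something like this (the table and column names are invented for illustration):

```sql
-- Strict, constrained columns and a free-form jsonb blob in one row.
CREATE TABLE payments (
    id         BIGSERIAL PRIMARY KEY,
    amount     NUMERIC(12, 2) NOT NULL,     -- strict: ACID-critical
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    metadata   JSONB                        -- flexible: configs, logs, extras
);

INSERT INTO payments (amount, metadata)
VALUES (19.99, '{"campaign": "spring", "device": "mobile"}');

-- The ->> operator extracts a JSON field as text.
SELECT amount, metadata->>'campaign' AS campaign FROM payments;
```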

JSONB also supports indexing, meaning you get the performance of indexed tables, even when using hybrid statements that involve both standard fields and JSON.
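A GIN index is the standard way to speed up those hybrid queries. Assuming a hypothetical payments table with a jsonb column named metadata, it might look like this:

```sql
-- Index the whole jsonb column for containment queries.
CREATE INDEX payments_metadata_idx ON payments USING GIN (metadata);

-- A hybrid query mixing a standard column and a JSON containment test (@>),
-- both of which can be served by indexes.
SELECT id, amount
FROM payments
WHERE created_at > now() - interval '7 days'
  AND metadata @> '{"device": "mobile"}';
```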

The promise of using a single datastore is too huge an architectural win to be ignored.

Also see: JSONB in PostgreSQL today and tomorrow.

Friction as a feature

Of course, long experience tells us not to get carried away. The industry isn’t going to deprecate REST APIs anytime soon. (If we were up for that, we’d just use HTMX.) The momentum of the current stack is massive, and for good reason: Decoupling the client from the database is a battle-tested pattern.

SQL also brings its own baggage. You still have to manage connection pools, you still have to write migration scripts (even if tools make them easier), and scaling a relational database is still harder than scaling a document store.

This movement isn’t about SQL replacing everything overnight; it’s more like the pendulum swinging back to the middle. We are realizing that the friction of SQL—the need to define types and relationships—was a feature, not a bug. It forces you to design your system before you build it.

SQL and the Lindy Effect

The Lindy Effect is a concept that says the longer a technology survives, the longer it will probably continue to survive. SQL has survived mainframes, the PC revolution, the web, and mobile, and it is now entering the AI era. It didn’t survive by being stubborn but by being adaptable. So far, SQL has absorbed JSON, resized itself for web browsers, and integrated with modern languages. But SQL’s revenge isn’t based on destroying the alternatives. It’s more about staying focused on what is essential, proving that sometimes the boring way is really just foundational.
