Separation of Concerns: You're Cutting Along the Wrong Line
The principle everyone agrees on. The application everyone gets wrong. What if data and code were never separate concerns?

“Separation of concerns” is one of those rare principles that nobody argues against.
Ask ten developers what makes good architecture, and at least eight will mention it. It shows up in code reviews, design documents, conference talks, and job interviews. It’s so deeply internalized that we rarely stop to examine it. Of course you separate concerns. That’s just good engineering.
But have you ever stopped to ask what the concerns actually are?
Because the way most of us apply this principle, particularly at the boundary between our applications and our databases, might be the most expensive misunderstanding in software engineering. Not because separation is wrong. Because we’re separating the wrong things.
How We Got Here
In 1974, Edsger Dijkstra wrote a short paper called “On the Role of Scientific Thought” (EWD447). In it, he described what he considered characteristic of all intelligent thinking:
Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects.
He called this separation of concerns. Two things are crucial in this passage.
First: “study in depth an aspect… in isolation.” This is a thinking move. When you’re reasoning about correctness, you temporarily set aside performance. When you’re reasoning about the data model, you temporarily set aside the UI. You focus your attention.
Second: “all the time knowing that one is occupying oneself only with one of the aspects.” You know it’s one piece of a whole. The aspects interact. The separation is in your attention, not in the system. Dijkstra described a discipline of thought: think about each aspect on its own terms, but know that they are part of the same system and will be reunited.
Then something happened. The thinking technique became an architecture pattern.
In the 1990s, three-tier architecture formalized the idea: presentation at the top, business logic in the middle, data at the bottom. Each tier was a separate concern. Java EE baked it into the platform. Spring scaffolded it as Controller/Service/Repository. Clean Architecture drew concentric circles with the database exiled to the outermost ring.
Each wave pushed the database further away from the logic it served. The database became a filing cabinet. You put data in, you take data out. All the intelligence, all the reasoning, all the decision-making belonged in the application layer. Anything else was “coupling.”
The cognitive technique became an organizational mandate, and the organizational mandate became an industry.
The Contradictions Nobody Talks About
Once you start looking, the contradictions are everywhere.
If your schema enforces that an order must have a customer, and your application validates that an order must have a customer, are you separating concerns or duplicating them?
Think about it. You have the same business rule expressed twice: once as a foreign key constraint, once as application validation. When the rule changes, you update it in two places. When you forget one, you get data integrity bugs that are hard to trace. The “separation” didn’t reduce complexity. It doubled it.
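To make the duplication concrete, here is a minimal sketch using Python’s standard sqlite3 module (the table and column names are illustrative, not taken from any real system). The same “an order must have a customer” rule appears twice: once as a foreign key constraint the database enforces, and once as an application-level check that mirrors it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when asked
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)  -- the rule, copy #1
    )
""")
conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Ada')")

def validate_order(customer_id, known_customer_ids):
    # The rule, copy #2: the same invariant re-implemented in application code.
    if customer_id not in known_customer_ids:
        raise ValueError("order must have a customer")

# The database enforces the rule on its own, with no application check:
try:
    conn.execute("INSERT INTO orders (customer_id) VALUES (99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True

# A valid order, by contrast, goes through.
conn.execute("INSERT INTO orders (customer_id) VALUES (1)")
```

When the rule changes, both copies must change in lockstep; forget one and the two systems silently disagree.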
If ORMs exist to bridge the gap between your objects and your tables, and Ted Neward called ORM “the Vietnam of computer science,” what does that tell you about the gap itself?
Neward’s analogy was precise. You enter the ORM conflict optimistically. Early wins reinforce your commitment. But the situation gradually becomes a quagmire with “no clear demarcation point, no clear win conditions, and no clear exit strategy.” The entire ORM industry exists to solve a problem created by insisting that objects and relations are separate concerns. Erik Meijer and Gavin Bierman later showed they are mathematical duals of each other: the same structure viewed from opposite directions. The “impedance mismatch” isn’t a law of nature. It’s an artifact of splitting one thing into two representations and then struggling to reconcile them.
Domain-Driven Design tells you that behavior must colocate with data. An anemic domain model, where entities are just data bags and logic lives in services, is an anti-pattern. But DDD also demands “persistence ignorance,” which requires elaborate mapping layers between your domain objects and your database. Which is it? Colocate behavior with data, or pretend your data doesn’t persist?
The tension is real because the underlying assumption is false. DDD treats the database as an implementation detail to be abstracted away. But your database isn’t an implementation detail. It’s a computational system with forty years of engineering behind it. Treating it as a detail you hide behind a repository interface is like treating the operating system as a detail you hide behind a syscall wrapper. Technically possible, practically expensive.
The Forty-Year Reasoning Engine You’re Bypassing
Here’s where the cost becomes concrete.
Your database contains a query optimizer. This is a planning system that has been refined for over forty years across billions of production workloads. It reasons about data access patterns, join ordering, index selection, predicate pushdown, and execution strategies. It considers the physical layout of data on disk, the statistics of value distributions, the available memory, and the cost of different access methods. It produces execution plans that balance all of these factors simultaneously.
When you fetch all rows to your application and filter them in memory, you are handing a problem to a less capable system. Your application code doesn’t know about index availability. It doesn’t know about data locality. It can’t reason about join ordering. It doesn’t maintain statistics about value distributions. You are telling a grandmaster to sit back while you play the game yourself.
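The difference is easy to demonstrate. In this sketch (again with sqlite3, and an invented orders table), the in-memory filter must drag every row across the boundary, while the pushed-down predicate lets the planner find the index on its own, as the query plan shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (status, total) VALUES (?, ?)",
    [("shipped" if i % 10 else "pending", float(i)) for i in range(1000)],
)
conn.execute("CREATE INDEX idx_orders_status ON orders(status)")

# Anti-pattern: fetch all 1000 rows, then filter in application memory.
pending_app = [r for r in conn.execute("SELECT * FROM orders") if r[1] == "pending"]

# Pushing the predicate into the query lets the optimizer use the index.
pending_db = conn.execute(
    "SELECT * FROM orders WHERE status = 'pending'"
).fetchall()

# The plan confirms the index is used; the application-side filter never could be.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE status = 'pending'"
).fetchall()
```

Both paths return the same 100 rows, but only one of them gives the grandmaster the board.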
Consider a typical ORM pattern. You load a list of orders. For each order, you load the customer. For each customer, you load their address. That’s one query for the orders, N queries for the customers, and N more for the addresses. This is the N+1 problem, and it’s not a bug in your ORM. It’s a symptom. The application is making decisions about data access one step at a time, with no visibility into the full picture. The database, if given the full query, would recognize this as a three-way join, pick an optimal join strategy, use indexes efficiently, and return the result in a single pass.
The N+1 problem is a microcosm of a larger issue. When you move data access decisions into your application, you lose the optimizer’s ability to reason holistically. Each query is a local decision. The database never sees the full workload pattern. It can’t suggest that a materialized view would eliminate your most expensive join. It can’t tell you that a composite index would serve three of your query patterns simultaneously. It answers what you ask, one question at a time, without context.
And the optimizer is just the beginning. Constraints, triggers, materialized views, window functions, recursive queries, partial indexes. These aren’t “database features” in the way a checkbox on a feature comparison chart is a feature. They’re computational capabilities. They represent decades of research into efficient data manipulation. Every one of them is something you are paying for (in licensing, in hardware, in operational overhead) but not using, because the architecture told you the database is for storage.
What if Data and Code Are One Concern?
Your schema IS a program. It declares what entities exist, what relationships hold, what invariants must be maintained. Your constraints ARE logic. A foreign key constraint is a business rule expressed in a language the database can enforce continuously and efficiently. Your queries ARE computation. A window function that computes running totals is an algorithm, expressed declaratively, optimized by a planner, and executed close to the data.
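As one small illustration of “queries are computation,” here is a running total expressed as a window function, again in a sqlite3 sketch with an invented payments table (window functions require SQLite 3.25 or later, bundled with recent Python releases). The algorithm is stated declaratively; the planner decides how to execute it, close to the data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO payments (amount) VALUES (?)", [(10.0,), (20.0,), (5.0,)]
)

# A running total: an algorithm expressed declaratively, optimized by the planner.
rows = conn.execute("""
    SELECT id, amount,
           SUM(amount) OVER (ORDER BY id) AS running_total
    FROM payments
""").fetchall()
```

The equivalent application-side loop would first have to ship every row across the boundary before it could add the first two numbers.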
When you look at it this way, the real separation isn’t between data and behavior. Both live in the same domain. An order has both structure (fields, relationships) and rules (must have a customer, total must equal sum of line items, cannot ship until payment clears). These are one concern: the order concern. Splitting them across systems doesn’t separate concerns. It fragments a single concern across two implementations.
The real separation should follow the axes where things genuinely need to vary independently. Sometimes that’s domain boundaries: orders and shipping and billing have different data, different rules, and different lifecycles. Sometimes it’s latency: a real-time fraud check and a monthly report touch the same data but can’t live under the same constraints. Sometimes it’s tenancy or compliance: regulatory walls that aren’t negotiable. Sometimes it’s deployment cadence: the part of the system that ships daily and the part that requires change control.
These are real concerns. They justify real boundaries. “Data versus behavior within the same domain” is not one of them.
Things that change together should live together. When your order validation rules and your order data change together (and they always do), they belong in the same place. When you split them across an application and a database, every change requires coordination across the boundary. Every ORM, every repository pattern, every DTO, every data mapper is a coordination tax paid because two systems are implementing the same concern.
The Question to Take With You
The question was never “should we separate concerns?” Yes. Always. Dijkstra was right about that.
The question is: are we separating the right things?
The next time you reach for an ORM or write a validation layer that mirrors your schema constraints, pause and ask: am I separating concerns, or am I separating two implementations of the same concern?
The next time you move a computation into your application that the database could handle natively, ask: am I protecting a boundary that matters, or am I bypassing a forty-year-old reasoning engine because my architecture diagram told me to?
Separation of concerns is a powerful idea. So powerful that we’ve been applying it reflexively, at the most visible boundary in our stack, without checking whether that boundary aligns with the actual concern boundaries in our domain.
Maybe it’s time to redraw the lines. Not between data and code. Between the things that genuinely don’t need to know about each other.
This is what we’re building at Inferal: a system where your business rules don’t bypass the reasoning engine. They are the reasoning engine. Rules, data, and constraints live together, evaluated continuously, optimized holistically, and traceable end to end. No mapping layers. No coordination tax. If you’re tired of maintaining two implementations of the same concern, let’s talk.