
Code Is the Wrong Abstraction

Temporal, DBOS, Windmill, and Lambda Durable Functions solve state durability. But code itself is the problem: workflows aren't sequences you build. They emerge from conditions. And code can't express emergence.

8 min read
Yurii Rashkovskii, Founder

When code fails, it loses state. Your script was halfway through processing an order, the process restarted or a connection dropped, and now you have no idea what happened. Did the payment go through? Was inventory decremented? Is the customer waiting for a shipment that will never arrive? You’re left digging through logs and querying databases to reconstruct what happened.

This is a real problem. Products like Temporal, DBOS, Windmill, and Lambda Durable Functions solve it elegantly. They persist workflow state at each step. If the process restarts, the workflow resumes exactly where it left off. Payment went through? We know. Inventory decremented? Recorded. The state machine survives failure.

These are genuinely useful tools. If you’re building workflows in code today, you should probably use one of them.

But durable execution solves the wrong problem.

Code Is Still Code

Making workflow code durable doesn’t change the fact that it’s code. You’re still composing a sequence of steps. You’re still encoding assumptions about execution order. You’re still building a flow.

Consider a simple order workflow:

async def process_order(order: Order):
    await validate_inventory(order)
    await process_payment(order)
    await decrement_inventory(order)
    await schedule_shipment(order)
    await notify_customer(order)

This code says: first validate, then payment, then decrement, then ship, then notify. The sequence is the logic. If something unexpected happens (payment succeeds but shipment can’t be scheduled because the warehouse is closed), what happens? You add error handling. Then more error handling. Then compensating transactions. The code grows. The conditional branches multiply. The “simple” workflow becomes a nested maze of try/except blocks and state checks.
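Sketched in the same illustrative style as the snippet above (the compensation helpers and exception types here are hypothetical), the trajectory looks like this:

async def process_order(order: Order):
    await validate_inventory(order)
    await process_payment(order)
    try:
        await decrement_inventory(order)
    except InventoryRaceError:            # hypothetical: someone else took the stock
        await refund_payment(order)       # compensate the payment we already took
        raise
    try:
        await schedule_shipment(order)
    except WarehouseClosedError:          # hypothetical: warehouse unavailable
        await restore_inventory(order)    # undo the decrement...
        await refund_payment(order)       # ...and the payment
        raise
    await notify_customer(order)

Every new failure mode adds another layer, and every compensation has to know exactly which steps preceded it.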

Now consider: what if fraud detection triggers after payment but before shipment? What if the customer cancels mid-process? What if inventory was decremented by another workflow between your check and your decrement? Each edge case requires modifying the workflow code. Each modification risks breaking existing flows.

The durability layer persists your state. It doesn’t help you reason about what state you should be in.

[Diagram: the validate → payment → decrement → ship → notify pipeline. Each step has a failure branch (out of stock, payment declined, fraud, inventory race, warehouse down, invalid address, user cancel), and each failure raises its own compensation question: rollback? refund and restore? undo all?]

The Emergence Insight

I’ve built workflow engines before: interpreters for BPMN and Amazon States Language. Those systems still execute flows as steps in sequence. But building them is what led me to this insight: workflows aren’t something you compose. Workflows emerge.

Think about what a “workflow” actually is. It’s not a recipe someone designed. It’s the observable pattern that emerges when multiple independent conditions interact. An order ships when:

  • Inventory is available
  • Payment is confirmed
  • No fraud flags exist
  • Shipping address is valid
  • Customer hasn’t cancelled
  • Warehouse is operational

These aren’t steps in a sequence. They’re conditions that must all be true simultaneously. The “workflow” is what happens when they align. When they don’t align, different things happen: waiting, cancellation, alerts, manual review.

[Diagram: six independent conditions (Inventory available, Payment confirmed, no Fraud_Flag, Address valid, Order active, Warehouse operational) converging on Shipment ready, which in turn produces a customer Notification and delivery Monitoring.]

In code, you compose these conditions into procedural logic. You decide the order. You handle each failure mode explicitly. You encode a specific flow.

With rules, you declare the conditions. The workflow emerges from their interaction.

rule Ship_Order
  match O = Order where Status == "pending",
        Payment { Order } where Status == "confirmed",
        Inventory where Product == O.Product, Quantity >= O.Quantity,
        no Fraud_Flag { Order },
        Address { Order } where Valid == true,
        Warehouse where Status == "operational"
  infers object Shipment { Order: O, Status: "ready" }

This rule doesn’t say “first check inventory, then check payment.” It says: when all these conditions are true, a shipment is ready. The engine evaluates continuously. When facts change, rules fire. New facts appear. (Inferal calls these facts objects: things that exist and have properties, rather than assertions in a database.) The workflow is the emergent behavior of the rule set.

Rules can also be retroactive. If a condition stops holding true, the consequences can be automatically reverted: if no external systems were touched, changes roll back cleanly. If irreversible changes occurred, mitigation rules can fire instead. In code, you’d need to manually track what happened and write explicit compensation logic. With rules, the system knows what was inferred from what, and can respond accordingly.
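A toy sketch of the underlying idea, a simplified truth-maintenance scheme rather than Inferal's actual engine: every inferred fact records which facts supported it, so retracting a support withdraws everything derived from it.

# Toy illustration, not Inferal's engine. Fact names are illustrative.
facts = set()
support = {}   # inferred fact -> frozenset of facts it was inferred from

def infer(conclusion, *premises):
    if all(p in facts for p in premises):
        facts.add(conclusion)
        support[conclusion] = frozenset(premises)

def retract(fact):
    facts.discard(fact)
    # withdraw any conclusion whose support no longer holds
    for conclusion, premises in list(support.items()):
        if fact in premises and conclusion in facts:
            retract(conclusion)

facts.update({"payment:confirmed", "inventory:reserved"})
infer("shipment:ready", "payment:confirmed", "inventory:reserved")
retract("payment:confirmed")            # payment reversed after the fact
assert "shipment:ready" not in facts   # the inference is withdrawn with it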

What Code Can’t Express

The difference becomes stark when you consider what happens in the real world.

Versioning is nontrivial. Your workflow code changes. But you have running instances mid-execution. How do you migrate the state of the old program to the state of the new program? Temporal calls this “versioning” and provides primitives to handle it, but it’s fundamentally a hard problem: you’re trying to rewrite a running program’s state into a different program’s state space.

I’ve observed teams handle this by splitting large workflows into smaller ones, making each piece independently upgradable. Follow this logic to its conclusion: the most upgradable workflow is one step. At that point, you don’t have a workflow anymore. You have independent units that react to conditions. You have rules.

With rules, there is no version migration. Facts are facts. Rules can be added, modified, or removed. The system continuously evaluates current facts against current rules. No state migration needed. The workflow adapts to the new rules automatically.

Observability is bolted on. Your workflow code pulled some data, made a decision, and moved to the next step. How do you know what data it pulled? How do you know what decision it made? You can log, but logs are after-the-fact reconstruction. The decision path isn’t intrinsically visible.

With rules, every activation leaves a trace. The system knows: this rule fired because these facts matched. These new facts were inferred. You can trace backwards from any outcome to the conditions that caused it. Observability isn’t a logging layer you add. It’s built into how the system works.
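A minimal sketch of what such a trace enables; the record shapes here are assumptions, not Inferal's actual format:

from dataclasses import dataclass

@dataclass
class Activation:
    rule: str
    matched: list[str]    # facts that satisfied the rule's conditions
    inferred: list[str]   # facts the rule produced

trace = [
    Activation("Confirm_Order", ["payment:confirmed", "inventory:reserved"],
               ["order:confirmed"]),
    Activation("Ship_Order", ["order:confirmed", "warehouse:operational"],
               ["shipment:ready"]),
]

def why(fact: str) -> None:
    """Walk backwards from an outcome to the conditions that caused it."""
    for act in trace:
        if fact in act.inferred:
            print(f"{fact} <- rule {act.rule} matched {act.matched}")
            for premise in act.matched:
                why(premise)

why("shipment:ready")
# shipment:ready <- rule Ship_Order matched ['order:confirmed', 'warehouse:operational']
# order:confirmed <- rule Confirm_Order matched ['payment:confirmed', 'inventory:reserved']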

Control flow is hard to reason about. Workflow code with complex conditionals and loops creates a combinatorial explosion of execution paths. Testing every path is impractical. Understanding what the code will do under novel conditions requires simulating execution in your head.

Rules are independent. Each rule can be understood, tested, and verified in isolation. The complexity isn’t in the control flow. It’s in the fact space, which is queryable and inspectable.

Parallelization is constrained. Code says “do this, then that.” You can add explicit parallel blocks, but the developer has to identify what can run concurrently. Get it wrong and you have race conditions.

Rules are naturally parallel. If multiple rules match simultaneously, they can fire concurrently. The engine handles coordination. The developer expresses conditions, not execution strategy.

The Code Bias

Developers see code as “more powerful.” This is understandable. A function can do anything: loops, conditionals, I/O, arbitrary computation. A rule just matches patterns and produces facts. The function seems strictly more capable.

But this confuses “more expressive” with “more powerful for the task.” A function is more expressive than a SQL query. That doesn’t mean you should implement database retrieval with procedural code. The constraint of SQL (declarative, set-based) is what makes it practical.

The same applies to workflows. Yes, you can express any workflow in code. But the very flexibility of code, its ability to express any control flow, is what makes workflow code hard to reason about, hard to version, hard to observe, and hard to parallelize.

A single rule looks like a query: “when these conditions match, do this.” A set of rules is a program. A declarative one, but a program nonetheless. The difference is that the “control flow” arises from condition matching rather than being explicitly encoded. And that difference is what makes the hard problems solvable.

Consider error handling. In code, errors are exceptional conditions that interrupt normal flow. But “payment declined” isn’t really an error. It’s a fact about the world. In a rule-based system, you don’t “handle” payment decline. You write rules that match when payment is declined:

rule Payment_Declined_Notification
  match O = Order where Status == "pending",
        Payment { Order } where Status == "declined"
  infers object Customer_Notification {
      Order: O,
      Message: "Payment was declined",
      Action: "update_payment"
  }

Every state is just another condition. There’s no distinction between “normal flow” and “error handling.” The rules match whatever facts exist.

Sequence and Logic Are Fused

There’s a deeper issue with code-based workflows.

When you write step1(); step2(); step3();, you’re saying two things at once:

  1. These three things need to happen
  2. They happen in this order

The temporal ordering is embedded in the logical structure. You can’t separate them. If you later realize step2 and step3 could run in parallel, you have to restructure the code.
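A small Python illustration of the coupling: the ordering lives in the call site, so exploiting independence means rewriting it.

import asyncio

async def step1(): ...
async def step2(): ...
async def step3(): ...

# Ordering is baked into the call site:
async def sequential():
    await step1()
    await step2()
    await step3()

# Discovering that step2 and step3 are independent means restructuring:
async def restructured():
    await step1()
    await asyncio.gather(step2(), step3())  # parallelism made explicit by hand

asyncio.run(restructured())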

Rules separate what from when. You declare what conditions produce what conclusions. The engine determines when to evaluate. If two rules match simultaneously, they can fire in parallel. If a rule’s conditions aren’t met, it waits without blocking anything else. The timing follows from the facts, not from code structure.

This separation is what makes rules adaptable. New conditions can be added without restructuring existing rules. The system adapts because the timing was never hardcoded in the first place.

Testing the Difference

Testing workflow code means testing execution paths. With complex conditionals, the paths multiply. A workflow with 10 decision points has potentially 2^10 = 1,024 paths. You can’t test them all. You test the important ones and hope.

Testing rules means testing conditions. Each rule is a unit: given these input facts, does it produce the expected output facts? The tests are independent. Add a new rule, add tests for that rule. Existing tests don’t break because existing rules don’t change.
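As a sketch, with illustrative fact names and a rule encoded as a plain function rather than Inferal syntax: a rule is a pure mapping from facts to facts, so its tests enumerate conditions, not paths.

# Illustrative encoding of the Ship_Order rule as a pure function.
REQUIRED = {"order:pending", "payment:confirmed", "inventory:sufficient",
            "address:valid", "warehouse:operational"}

def ship_order(facts: set[str]) -> set[str]:
    if REQUIRED <= facts and "fraud_flag" not in facts:
        return {"shipment:ready"}
    return set()

def test_fires_when_all_conditions_hold():
    assert ship_order(REQUIRED | {"something:unrelated"}) == {"shipment:ready"}

def test_waits_on_fraud_flag():
    assert ship_order(REQUIRED | {"fraud_flag"}) == set()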

The difference is combinatorial vs. additive complexity. Code complexity grows exponentially with decision points. Rule complexity grows linearly with rule count.

The Replay Problem

Temporal and similar tools solve replay through deterministic execution. Every workflow run is recorded. If you need to replay (for debugging, for recovery, for testing), you feed the same inputs and get the same outputs.

But deterministic replay is fragile. What if your workflow calls an external API? You have to mock it. What if it uses the current timestamp? You have to inject it. What if it generates a random ID? You have to record and replay it. Any non-determinism breaks replay.

Rules sidestep this entirely. The facts are the state. To replay, you load the facts and run the rules. No mocking needed. The rules are pure: same facts in, same conclusions out. External interactions become facts that get recorded. The replay problem dissolves.
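A sketch of the idea, with illustrative names: non-deterministic inputs are captured as facts at the moment they happen, so replay is just re-evaluation over the recorded fact set.

# Pure evaluation: conclusions depend only on the facts passed in.
def run_rules(facts: dict[str, str]) -> dict[str, str]:
    derived = {}
    if facts.get("payment") == "confirmed" and facts.get("inventory") == "reserved":
        derived["order"] = "confirmed"
    return derived

recorded = {
    "payment": "confirmed",                   # external gateway's response, as a fact
    "inventory": "reserved",
    "payment_id": "pay_8f3a",                 # generated ID, recorded rather than regenerated
    "confirmed_at": "2025-01-07T12:00:00Z",   # timestamp captured as a fact
}

# Replaying days later needs no mocks, injected clocks, or recorded RNG:
assert run_rules(recorded) == run_rules(recorded)  # same facts in, same conclusions out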

What Emerges

When you build with rules instead of code, something interesting happens: the workflow becomes visible.

In code, the workflow is implicit in the control flow. You have to trace through the code to understand what happens when. The workflow exists in the developer’s head, encoded imperfectly in syntax.

In rules, the workflow is the observable pattern of rule activations. You can literally see it: this fact appeared, this rule fired, these new facts appeared, which triggered these other rules. The workflow isn’t something you design. It’s something you discover by running the system.

This changes how you think about the problem. Instead of asking “what’s the right sequence of steps?”, you ask “what conditions should produce what conclusions?” The sequence takes care of itself.

The order workflow I started with becomes a set of independent rules:

  • When order is placed and inventory exists, reserve inventory
  • When inventory is reserved and payment is requested, process payment
  • When payment succeeds and inventory is reserved, confirm order
  • When order is confirmed and warehouse is operational, create shipment
  • When shipment is created, notify customer
  • When payment fails, release inventory reservation
  • When fraud is detected, cancel order and reverse any completed steps

Each rule is simple. The workflow is their emergent interaction. Change one rule, the system adapts. Add a new rule, existing rules keep working.

Durable execution made workflow code survivable. It didn’t make it tractable. For that, you need a different paradigm. One where workflows emerge from conditions rather than being composed from steps.


At Inferal, we’re building a rule engine designed for exactly this: agent-native workflows where the system continuously evaluates conditions against streaming facts, and the workflow emerges from rule interactions. The control flow takes care of itself.

