Simple vs Easy

17 Apr 2026 · ai, llms, software, engineering, Simple, Easy, Rich Hickey

There’s a talk by Rich Hickey called Simple Made Easy that I keep coming back to. The central idea sounds almost trivial until you sit with it:

Simple and easy are not the same thing and confusing them is the root of most complexity in software.

– Rich Hickey

Easy means near to hand — familiar, quick to pick up, low friction. Easy is relative to you. What’s easy for a senior engineer is not easy for someone new to a codebase.

Simple means one thing. One role, one concept, not interleaved with other concerns. Simple is objective. A thing either has one job or it doesn’t.

The trap Hickey identifies is that we reach for easy things and call the result simple. We defend familiar tools and established patterns as if knowing them makes them clean. Familiarity and simplicity are orthogonal, and over time the difference shows.

I’ve spent the last year building a customer-facing transactional chat system: one that uses LLMs for natural language processing and agents that orchestrate backend systems to carry out what the customer wants to do. It has been one of the most instructive projects I’ve worked on, largely because every major design challenge turned out to be a question about this distinction.

The LLM as a Complecting Force

complecting /kəmˈplektɪŋ/ v.

The act of interweaving two or more distinct concepts, concerns, or responsibilities such that they can no longer be understood, changed, or reasoned about independently.

The word Hickey uses for the opposite of simple is complected. A complected system is one where concerns that should be separate have been tangled together, and you can no longer reason about one without understanding the other.

LLMs are seductive complecting machines.

The easy path in a chat system is to let the LLM do everything: understand the customer’s request, decide what action to take, call the right backend systems, handle errors, and format a response. It can do all of this. Modern models are capable enough that it mostly works in a demo. But you’ve now built something where intent parsing, business logic, orchestration, and presentation are all happening inside one opaque thing. You can’t test them independently. You can’t reason about failures. You can’t audit decisions. When something goes wrong (and in a transactional system, things will go wrong) you have no clean seam to inspect.

The simple path is to use the LLM narrowly for what it’s genuinely good at: converting natural language into structured intent. A customer says “I want to move my appointment to next Thursday.” The LLM’s job is to produce something like:

{ "action": "reschedule", "entity": "appointment", "target": "next Thursday" }

From there, deterministic code takes over, resolving the date, checking availability, calling the backend, deciding what to do if the slot is full.

Each part can now be tested and reasoned about on its own. The LLM component has one job. The orchestration layer has one job. The result is a system where, when something breaks, you know where to look.
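A minimal sketch of that separation, in Python. The field names follow the JSON above; the handler table and its entries are invented for illustration — the point is that everything after the JSON parse is deterministic and testable without a model in the loop:

```python
import json

def parse_intent(llm_output: str) -> dict:
    """The LLM's one job ends here: its text becomes structured intent."""
    intent = json.loads(llm_output)
    for field in ("action", "entity", "target"):
        if field not in intent:
            raise ValueError(f"missing field: {field}")
    return intent

# Deterministic handlers keyed by (action, entity) — each has one role.
HANDLERS = {
    ("reschedule", "appointment"): lambda intent: f"rescheduling to {intent['target']}",
}

def dispatch(intent: dict) -> str:
    """Orchestration is plain code: look up the handler, or fail loudly."""
    handler = HANDLERS.get((intent["action"], intent["entity"]))
    if handler is None:
        raise ValueError("no handler for intent")
    return handler(intent)

raw = '{"action": "reschedule", "entity": "appointment", "target": "next Thursday"}'
result = dispatch(parse_intent(raw))
```

Each seam here is inspectable: a bad intent fails at `parse_intent`, an unsupported request fails at `dispatch`, and neither failure requires reasoning about the model.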

Legacy Integration as a Simplicity Decision

Any system of meaningful scale eventually has to talk to something old. A backend built under different assumptions, with a different design, different conventions, and perhaps unexpected behaviours. The easy path is to call those systems from wherever you need them. An agent needs a customer’s account status? Hit the legacy API inline. Another agent needs order history? Same thing, somewhere else. This works quickly and requires no upfront design.

But now the quirks of those legacy systems are scattered throughout your new codebase. Your agents are complected with someone else’s old decisions. When the legacy system changes, you’re hunting across your entire codebase for callsites. When you want to test an agent, you can’t do it without the legacy system being available.

The simple path is an adapter layer (or what the kids call Model Context Protocol). One thin piece of your system that is the only thing that knows about the legacy systems. It speaks the legacy language inward and a clean interface outward. Your agents don’t know what’s behind it: they ask for an account status and get a consistent shape back.

This is more work upfront. You have to define those interfaces before you fully understand what you need. But the complexity of the legacy systems is now contained. It doesn’t bleed into every corner of the new system, and the rest of the codebase stays clean.
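A sketch of the adapter idea in Python. `LegacyCrmClient` and its payload shape are hypothetical stand-ins for a real legacy API; the point is that only the adapter ever sees those quirks:

```python
from dataclasses import dataclass

@dataclass
class AccountStatus:
    """The clean shape the rest of the system sees."""
    customer_id: str
    active: bool

class LegacyCrmClient:
    """Stand-in for a legacy API with its own awkward conventions."""
    def fetch(self, cust: str) -> dict:
        return {"CUST_ID": cust, "STATUS_FLAG": "Y"}  # legacy payload shape

class AccountAdapter:
    """The only component that speaks the legacy dialect."""
    def __init__(self, client: LegacyCrmClient):
        self._client = client

    def account_status(self, customer_id: str) -> AccountStatus:
        raw = self._client.fetch(customer_id)
        # Legacy quirks are translated here, and only here.
        return AccountStatus(
            customer_id=raw["CUST_ID"],
            active=raw["STATUS_FLAG"] == "Y",
        )

status = AccountAdapter(LegacyCrmClient()).account_status("c-42")
```

Testing an agent now means handing it a fake adapter, not standing up the legacy system.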

Security at the Boundary, Not Everywhere

With LLMs in the path, security requires deliberate thought. The model is processing arbitrary customer input and producing output that drives real actions: rescheduling appointments, updating records, triggering transactions. The attack surface is real: prompt injection, output manipulation, unintended instructions buried in user messages.

The easy response is to add checks everywhere. Validate input before the LLM. Filter output after. Check again before the backend call. Each check feels prudent in isolation. But security logic spread across every layer is hard to audit, and tends to give a false sense of coverage. It also complects security concerns into every part of the system.

The simpler model is to decide where your trust boundaries are and enforce them explicitly in one place. Anything that comes out of an LLM should be treated as untrusted, just like raw user input. There should be a single validation and normalisation layer that structured LLM output passes through before anything acts on it. Inside that boundary, code operates on data that has already been checked. There’s only one place to audit.

This also forces you to think clearly about what the LLM is actually producing. If its output is structured data then that data has a schema, and validating a schema is a solved problem.
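A sketch of that single boundary in Python. The allowed actions and entities here are invented for illustration; the shape of the check is what matters — a closed schema that untrusted LLM output must pass through exactly once:

```python
# Everything an LLM emits is untrusted until it passes validate().
ALLOWED_ACTIONS = {"reschedule", "cancel", "query"}
ALLOWED_ENTITIES = {"appointment", "order"}

def validate(intent: dict) -> dict:
    """Normalise and check untrusted intent against a closed schema."""
    if set(intent) != {"action", "entity", "target"}:
        raise ValueError("unexpected fields in intent")
    action = str(intent["action"]).lower()
    entity = str(intent["entity"]).lower()
    if action not in ALLOWED_ACTIONS or entity not in ALLOWED_ENTITIES:
        raise ValueError("intent outside allowed schema")
    return {"action": action, "entity": entity, "target": str(intent["target"])}

safe = validate({"action": "Reschedule", "entity": "appointment", "target": "next Thursday"})
```

Everything downstream of `validate` operates on data that has already been checked, so there is exactly one place to audit and one place to tighten.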

Designing for Unknown Features

The hardest part of this project wasn’t any of the above; it was that the feature set wasn’t fully defined when we started. The system needed to handle customer journeys we hadn’t mapped yet, integrate with backend capabilities that weren’t ready, and be extended by a team that would grow.

The instinct here is to build for flexibility. Add configuration flags. Build plugin systems. Design extension points for things you might need later. In practice it usually means building complexity you pay the cost of now, in service of requirements that may never arrive.

Hickey’s answer to this is different, and I think it’s right:

A system stays extensible not because it anticipates future features, but because its parts are simple.

If each component has a single, clear role and talks to other components through clean interfaces, then new features arrive as new components, not as modifications to existing ones. You don’t need to predict what’s coming. You need to make sure nothing is entangled with anything else.

In practice this meant resisting the urge to build “smart” agents that handled multiple concerns, keeping the orchestration layer thin and data-driven, and treating each new customer journey as a composition of simple, reusable parts rather than a new bespoke flow. New capability has generally meant new agents, new orchestration logic, and new adapter methods. No changes required to things that already work.
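One way to sketch that “new features arrive as new components” idea in Python, assuming a thin, data-driven orchestrator (the specific journeys are hypothetical):

```python
# A registry-based orchestrator: adding a journey means registering a
# new handler. Nothing that already works is modified.
REGISTRY = {}

def journey(action: str, entity: str):
    """Register a handler for one (action, entity) pair."""
    def register(fn):
        REGISTRY[(action, entity)] = fn
        return fn
    return register

@journey("reschedule", "appointment")
def reschedule_appointment(intent):
    return f"moving appointment to {intent['target']}"

# A later feature lands as a new registration, not an edit to old code:
@journey("cancel", "order")
def cancel_order(intent):
    return "order cancelled"

def run(intent):
    return REGISTRY[(intent["action"], intent["entity"])](intent)
```

The orchestrator never grows special cases; it only reads data that the components themselves provide.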

The Through-line

Every major challenge, be it security, legacy integration, or unknown future scope, had an easy solution that pushed complexity deeper in. Let the LLM handle everything. Call legacy systems directly from wherever. Add flexibility upfront just in case.

The work, each time, was finding the simpler separation:

  • What is the LLM’s actual job?
  • Where does the legacy mess belong?
  • Where is the real trust boundary?

Simple is not the same as easy. Simple is often harder to arrive at. But it’s the thing that, six months later, you’re still grateful you chose.