Jan 18, 2026 · 10 min read · backend · architecture · production

The boring middle layer is where products live or die

Validation, auth, error mapping, observability, idempotency. None of it is feature work. All of it decides whether the API feels solid.

Where products quietly live or die

The frontend gets the demo. The database gets the diagrams. The middle layer (the request handlers, the validation, the auth checks, the error mapping, the logging, the rate limits, the small pieces of glue that turn a database into an API) is what people complain about three months after launch. It is also what determines whether the product feels solid or fragile.

Most teams underinvest in it because it is not a feature. There is no screenshot. There is no demo. There is a vague feeling of "the API is reliable" or "the API is flaky". The boring middle layer is the difference between those two.

What lives there, exactly

When I say middle layer I mean a specific set of concerns:

  • Validation. Where do untrusted inputs become typed, trusted values? Once, at the boundary, with Zod or similar. Never with hand-rolled checks scattered across handlers.
  • Auth and authorization. Are you logged in? Are you allowed to do this thing to this resource? These are two checks, not one, and they belong on every protected route, not in a per-feature memo.
  • Error mapping. Domain errors become HTTP status codes in one place. Stack traces stay on the server. Clients get a stable error shape.
  • Observability. Every request has a trace ID, every log line carries it, every error carries the request context that produced it.
  • Idempotency. The handlers that mutate take an idempotency key, store the result, and return the same response on retry.
  • Limits. Rate limits, body size limits, timeouts. Set globally and overridable per route.

That is the middle layer. None of it is glamorous. All of it is load-bearing.

The validation boundary

Validation is the one I see most often done badly. Either it is everywhere (every handler reaches into req.body and pokes at fields with ifs) or it is nowhere (everyone trusts that the frontend sent the right thing).

The fix is a single pattern, applied without exception:

import { z } from "zod";

const CreatePost = z.object({
  title: z.string().min(1).max(140),
  body: z.string().min(1).max(20_000),
  tags: z.array(z.string()).max(10).default([]),
});

export const createPost = handler(CreatePost, async ({ input, ctx }) => {
  const post = await ctx.db.posts.create({ data: { ...input, authorId: ctx.user.id } });
  return { post };
});

The handler factory parses the body against the schema, attaches input typed exactly as the schema, and rejects with a 400 on parse failure. Inside the handler, every field is already correct. There are no as casts, no defensive if (!input.title) checks, no second-guessing. That is what "trusted values" buys you.

Errors as a typed surface

The other place teams underinvest is errors. Most APIs return something like { error: "Bad request" } and call it a day. Then the frontend reaches into the message string with includes() to decide whether to show a toast or open a modal. That is a contract by accident.

A better shape, and not a complicated one:

type ApiError =
  | { kind: "validation"; fields: Record<string, string[]> }
  | { kind: "auth"; reason: "unauthenticated" | "forbidden" }
  | { kind: "conflict"; resource: string }
  | { kind: "rate_limited"; retryAfterMs: number }
  | { kind: "server" };

Five shapes. Every error in the system maps to one of them. The frontend switches on kind, not on a string. When you add a new domain error, you either add a new kind or fold it into an existing one, and the type system tells you every call site that needs to handle it.
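The "one place" where domain errors become status codes is then a single exhaustive switch. This sketch repeats the ApiError type so it compiles standalone; the statusOf name is an assumption:

```typescript
// The one place domain errors become HTTP status codes.
type ApiError =
  | { kind: "validation"; fields: Record<string, string[]> }
  | { kind: "auth"; reason: "unauthenticated" | "forbidden" }
  | { kind: "conflict"; resource: string }
  | { kind: "rate_limited"; retryAfterMs: number }
  | { kind: "server" };

function statusOf(err: ApiError): number {
  // No default branch: if a new kind is added, this stops compiling,
  // which is exactly the "type system tells you every call site" payoff.
  switch (err.kind) {
    case "validation":   return 400;
    case "auth":         return err.reason === "unauthenticated" ? 401 : 403;
    case "conflict":     return 409;
    case "rate_limited": return 429;
    case "server":       return 500;
  }
}
```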

Observability that survives an incident

The test for whether your observability is real: when an alert fires at 2am, can you find the failing request in under a minute? If the answer is "I'd have to grep", you do not have observability, you have logs.

Three things, all installed at the middle layer, all cheap:

  1. A request-scoped trace ID. Generated on entry, attached to the request, carried into every log line and outbound call. Sent to the client as a header so support tickets can quote it.
  2. Structured logging. No string concatenation. logger.info({ traceId, userId, action: "post.create", durationMs }, "post created") is grep-able forever.
  3. Error capture with context. When a handler throws, the trace ID, the user ID, the input shape (sanitised), and the route are captured together. Sentry or whatever does this for free if you give it the context.

Set this up once, at the middle layer, and every feature inherits it. Try to retrofit it after launch and you will be on call for a year.
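In Node, the "carried into every log line" part falls out of AsyncLocalStorage, which scopes the trace ID to the request without threading it through every function by hand. A minimal sketch; the withTrace and log names are illustrative, not a specific logger's API:

```typescript
// Request-scoped trace IDs via Node's AsyncLocalStorage: anything that
// runs inside withTrace() sees the same traceId without passing it around.
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

const traceStore = new AsyncLocalStorage<{ traceId: string }>();

// Generate one ID on entry; every call inside fn inherits it.
function withTrace<T>(fn: () => T): T {
  return traceStore.run({ traceId: randomUUID() }, fn);
}

// Structured logging: one JSON object per line, grep-able by traceId.
function log(fields: Record<string, unknown>, msg: string) {
  const traceId = traceStore.getStore()?.traceId ?? "no-trace";
  console.log(JSON.stringify({ traceId, msg, ...fields }));
}
```

A real setup would also echo the trace ID back to the client as a response header and attach it to the error-capture context, so a support ticket quoting the ID leads straight to the failing request.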

Idempotency is a one-day investment

The last piece, and the one most likely to be skipped, is idempotency. Mutating endpoints should accept an Idempotency-Key header, store the response keyed by (user, key, route), and return the cached response on repeat. This is the difference between a flaky network producing duplicate purchases and a flaky network producing one purchase and one cached "you already did this".

It is a day's work. It pays for itself the first time a mobile client retries a charge.
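The mechanism is small enough to sketch with an in-memory map. A real deployment would back the store with a database or Redis and expire old entries; the idempotent name and the return shape here are assumptions for illustration:

```typescript
// Responses cached by (user, key, route). In production this map would
// be a durable store with a TTL, not process memory.
const seen = new Map<string, unknown>();

async function idempotent<T>(
  userId: string,
  route: string,
  key: string,
  run: () => Promise<T>,
): Promise<{ cached: boolean; result: T }> {
  const storeKey = `${userId}:${route}:${key}`;
  if (seen.has(storeKey)) {
    // Retry: return the stored response, do not execute the mutation again.
    return { cached: true, result: seen.get(storeKey) as T };
  }
  const result = await run();
  seen.set(storeKey, result);
  return { cached: false, result };
}
```

One caveat the sketch skips: two requests with the same key racing in flight. A durable store handles that by reserving the key (for example with a unique-constraint insert) before running the mutation.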

The "feature work" trap

The reason teams skip all of this is that none of it is feature work. There is no card on the board for "validate inputs at the boundary". The product manager does not see the middle layer until it breaks.

The trick is to bake it into the platform: handler factory, error type, logger, idempotency middleware. Once those exist, every feature gets them by default, and writing a route that bypasses them looks weirder than writing a route that uses them. That is when the middle layer becomes invisible, which is exactly where you wanted it.