
Inline Projections

How inline projections work and why they exist

Standard projections in DeltaBase are async — events get appended, then delivered to webhook endpoints where projections process them into read models. This works well, but it means your read model might be stale when the HTTP response is sent.

Inline projections flip that: they run synchronously within the same request as the event append. After appendToStream or append completes, your read models are already up-to-date.

Here’s the typical async flow:

1. Client sends POST /orders
2. Command handler appends events to DeltaBase
3. Response: { orderId: 'order-123', status: 'created' }
4. Client immediately does GET /orders/order-123
5. ...read model hasn't been updated yet
6. 404 or stale data

The webhook subscription fires milliseconds to seconds later. For many use cases that’s fine. For others (checkout flows, real-time dashboards, user-facing state changes), it’s not.

With inline projections, the flow becomes:

1. Client sends POST /orders
2. Command handler appends events to DeltaBase
3. SDK automatically runs registered projections on the appended events
4. Read model is updated
5. Response: { orderId: 'order-123', status: 'created' }
6. Client does GET /orders/order-123
7. Correct data, every time

The key: projections are registered on the EventStore client, not on individual command handlers. Every appendToStream and append call automatically runs them.

import { DeltaBase } from '@delta-base/server';
import { projections, singleStreamProjection } from '@delta-base/toolkit';

// deltabase is an initialized DeltaBase instance, and orderSummaryProjection
// is a projection built with singleStreamProjection() (definitions elided here)
const eventStore = deltabase.getEventStore('orders', {
  projections: projections.inline([orderSummaryProjection]),
});

// Every append now runs the projection automatically
await eventStore.appendToStream('order-123', [orderCreatedEvent]);
// Read model is consistent as soon as the append resolves

Single-Stream Projections

One read model document per event stream. If your stream is order-123, the projection produces one document for that order.

  • Document tracks _version (the stream position of the last processed event)
  • Catch-up is per-stream: if the document falls behind, it reads only the missing events for that specific stream
  • Best for: order summaries, user profiles, any entity-per-stream pattern
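The core of a single-stream projection is a pure evolve function folded over one stream's events. Here is a minimal, self-contained sketch of that mechanic; the event and document shapes (OrderCreated, ItemAdded, OrderSummary) are illustrative assumptions, not part of the SDK:

```typescript
// Hypothetical event and read-model shapes for an order stream.
type OrderEvent =
  | { type: "OrderCreated"; data: { customerId: string } }
  | { type: "ItemAdded"; data: { sku: string; price: number } };

interface OrderSummary {
  _version: number; // stream position of the last processed event
  customerId: string;
  itemCount: number;
  total: number;
}

// Pure evolve: same state + same event always yields the same state.
function evolve(doc: OrderSummary | null, event: OrderEvent): OrderSummary {
  switch (event.type) {
    case "OrderCreated":
      return { _version: 0, customerId: event.data.customerId, itemCount: 0, total: 0 };
    case "ItemAdded":
      if (doc === null) throw new Error("ItemAdded before OrderCreated");
      return { ...doc, itemCount: doc.itemCount + 1, total: doc.total + event.data.price };
  }
}

// Folding the whole stream rebuilds the document; _version tracks position.
function project(events: OrderEvent[]): OrderSummary | null {
  return events.reduce<OrderSummary | null>(
    (doc, event, position) => ({ ...evolve(doc, event), _version: position }),
    null,
  );
}

const summary = project([
  { type: "OrderCreated", data: { customerId: "cust-1" } },
  { type: "ItemAdded", data: { sku: "widget", price: 25 } },
  { type: "ItemAdded", data: { sku: "gadget", price: 10 } },
]);
// summary: { _version: 2, customerId: "cust-1", itemCount: 2, total: 35 }
```

Because evolve is pure, the same fold also serves as the rebuild path: replaying the stream from position 0 reproduces the document exactly.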

Multi-Stream Projections

One document aggregated from events across multiple streams: a "customer dashboard" that counts orders across all order-* streams, for example.

  • Tracks a global cursor (__cursor:{projectionName}) for the last processed globalPosition
  • Catch-up reads historical events from the entire event store (batched)
  • getDocumentId derives the document key from the event (not the stream ID)
  • Best for: dashboards, cross-entity aggregations, DCB-style projections where tags identify entities
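The multi-stream mechanics can be sketched in isolation: events carry a globalPosition, getDocumentId derives the document key from the event rather than the stream ID, and a single cursor guards against reprocessing. Everything below (OrderPlaced, CustomerDashboard, the in-memory Map) is an illustrative stand-in, not the SDK's actual storage:

```typescript
// Hypothetical cross-stream event: every stored event has a globalPosition.
interface OrderPlaced {
  type: "OrderPlaced";
  globalPosition: number;
  data: { customerId: string; amount: number };
}

interface CustomerDashboard { orderCount: number; totalSpent: number; }

// The key is derived from the event, not from the stream ID.
const getDocumentId = (e: OrderPlaced) => `dashboard:${e.data.customerId}`;

const docs = new Map<string, CustomerDashboard>();
let cursor = -1; // would be persisted as __cursor:customerDashboard

function processBatch(events: OrderPlaced[]): void {
  for (const e of events) {
    if (e.globalPosition <= cursor) continue; // already processed: no-op
    const id = getDocumentId(e);
    const doc = docs.get(id) ?? { orderCount: 0, totalSpent: 0 };
    docs.set(id, {
      orderCount: doc.orderCount + 1,
      totalSpent: doc.totalSpent + e.data.amount,
    });
    cursor = e.globalPosition; // advance the global cursor
  }
}

processBatch([
  { type: "OrderPlaced", globalPosition: 0, data: { customerId: "cust-1", amount: 40 } },
  { type: "OrderPlaced", globalPosition: 1, data: { customerId: "cust-2", amount: 15 } },
  { type: "OrderPlaced", globalPosition: 2, data: { customerId: "cust-1", amount: 5 } },
]);
// docs.get("dashboard:cust-1") → { orderCount: 2, totalSpent: 45 }
```

Note how events from different customers (and therefore different streams) fold into different documents while sharing one cursor.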

You can define inline projections in two ways:

Declarative (recommended for most cases): Use singleStreamProjection() or multiStreamProjection() with a pure evolve function. The SDK handles versioning, catch-up, and idempotency.

Imperative: Implement the InlineProjection<TEvent> interface with a processEvents method and an optional lastProcessedPosition() for catch-up. This gives you full control — write to SQL, use transactions, structure your read model however you want.
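The imperative style can be sketched as below. The interface shape here (ReadEvent fields, method names) is an assumption based on this page's description; check the SDK reference for the exact signatures:

```typescript
// Assumed shapes; the SDK's actual ReadEvent / InlineProjection may differ.
interface ReadEvent<T> {
  type: string;
  data: T;
  streamPosition: number;
  globalPosition: number;
}

interface InlineProjection<TEvent> {
  name: string;
  processEvents(events: ReadEvent<TEvent>[]): Promise<void>;
  lastProcessedPosition?(): Promise<number>; // enables automatic catch-up
}

// An audit projection that appends rows to its own store. Here that store is
// an array, but it could equally be SQL writes inside a transaction.
const auditRows: { at: number; type: string }[] = [];
let lastPosition = -1;

const orderAuditProjection: InlineProjection<unknown> = {
  name: "orderAudit",
  async processEvents(events) {
    for (const e of events) {
      auditRows.push({ at: e.globalPosition, type: e.type });
      lastPosition = e.globalPosition; // the projection owns its own cursor
    }
  },
  async lastProcessedPosition() {
    return lastPosition;
  },
};

await orderAuditProjection.processEvents([
  { type: "OrderCreated", data: {}, streamPosition: 0, globalPosition: 0 },
  { type: "ItemAdded", data: {}, streamPosition: 1, globalPosition: 1 },
]);
```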

Both styles work with projections.inline():

projections.inline([
  orderSummaryProjection, // declarative
  orderAuditProjection,   // imperative
]);

The key difference: declarative projections use an in-memory or KV read model store. Imperative projections can write anywhere — SQLite, Drizzle, Turso, whatever. The SDK calls your processEvents with batches of ReadEvent<T>[] and stays out of your way.

Catch-Up: What Happens When You Deploy a New Projection

When you add a new projection to an event store that already has events, the projection needs to “catch up” on history. How this works depends on the projection type.

Single-stream projections catch up lazily: when an append happens for stream order-123, the projection checks the document’s _version against the appended events. If there’s a gap, it reads the missing events for that specific stream and processes them first.

Multi-stream projections catch up on the first appendToStream or append call. They query the event store from their cursor position and process historical events in batches of 100.

Imperative projections catch up automatically when they define lastProcessedPosition(). The SDK reads events from (lastProcessedPosition + 1) in batches and calls processEvents for each batch. The projection owns its own cursor — the SDK never persists anything.
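The catch-up loop described above can be sketched as follows. readBatch is a hypothetical stand-in for reading events from the store; the names are assumptions, but the control flow (read from cursor + 1, process in batches, stop on an empty batch) matches the description:

```typescript
// Minimal sketch of the batched catch-up loop, under assumed names.
interface StoredEvent { globalPosition: number; type: string; }

const BATCH_SIZE = 100;

async function catchUp(
  readBatch: (from: number, limit: number) => Promise<StoredEvent[]>,
  projection: {
    lastProcessedPosition(): Promise<number>;
    processEvents(events: StoredEvent[]): Promise<void>;
  },
): Promise<void> {
  let from = (await projection.lastProcessedPosition()) + 1;
  while (true) {
    const batch = await readBatch(from, BATCH_SIZE);
    if (batch.length === 0) break;          // fully caught up
    await projection.processEvents(batch);  // projection advances its own cursor
    from = batch[batch.length - 1].globalPosition + 1;
  }
}

// Demo: an in-memory log of 5 events, with positions 0 and 1 already processed.
const log: StoredEvent[] = Array.from({ length: 5 }, (_, i) => ({ globalPosition: i, type: "E" }));
let processed = 0;
let cursor = 1;
await catchUp(
  async (from, limit) => log.filter((e) => e.globalPosition >= from).slice(0, limit),
  {
    lastProcessedPosition: async () => cursor,
    processEvents: async (events) => {
      processed += events.length;
      cursor = events[events.length - 1].globalPosition;
    },
  },
);
// processed === 3 (events at positions 2, 3, 4)
```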

If you need full control over the replay loop (e.g. optimized bulk inserts), implement catchUp(eventStore) instead. The SDK delegates to it entirely.

You can trigger catch-up explicitly during Worker startup:

await eventStore.catchUpProjections();

This avoids slower first-request behavior — useful in production.

When a Projection Fails

A projection failure never loses events; this is a core design principle. Events represent facts: the business decision (via the decider) was valid. If a projection fails:

  1. Events remain in the event store (source of truth)
  2. The projection’s cursor/version does NOT advance
  3. On the next appendToStream call, the projection catches up automatically

A temporarily stale read model is better than lost events. The event store is the source of truth; read models are derived and rebuildable.

Idempotency Under Concurrency

In concurrent environments (multiple Cloudflare Worker isolates calling appendToStream simultaneously), the same event might be processed by a projection more than once. This is safe because:

  1. Declarative projections track _version per document. Processing an event at a position that’s already been processed is a no-op.
  2. The evolve function must be deterministic: same input, same output. No side effects, no external lookups, no randomness.
  3. Redundant work is wasted compute but produces correct state.

The recommendation: always make your evolve functions pure and deterministic. This is the same principle as reducers in Redux or folds in functional programming.
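The _version guard from point 1 can be sketched in a few lines. The document shape is illustrative; the point is that applying an event at an already-processed position is a no-op, so redundant concurrent deliveries cannot corrupt state:

```typescript
// Hypothetical read-model document with a _version watermark.
interface Doc { _version: number; count: number; }

// Apply an event only if its stream position is beyond _version;
// reprocessing an already-seen position returns the document unchanged.
function applyAt(doc: Doc, position: number): Doc {
  if (position <= doc._version) return doc; // duplicate delivery: no-op
  return { _version: position, count: doc.count + 1 };
}

let doc: Doc = { _version: -1, count: 0 };
doc = applyAt(doc, 0);
doc = applyAt(doc, 1);
doc = applyAt(doc, 1); // redundant concurrent delivery
// doc: { _version: 1, count: 2 }
```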

|                | Inline                                         | Async (Webhook)                              |
| -------------- | ---------------------------------------------- | -------------------------------------------- |
| Consistency    | Read-after-write                               | Eventually consistent                        |
| Latency        | Adds to request time                           | Decoupled                                    |
| Infrastructure | No webhook config needed                       | Requires subscription + endpoint             |
| Failure mode   | Projection fails = request fails (configurable) | Retry queue + DLQ                           |
| Cross-service  | Same process only                              | Fan out to other services                    |
| Best for       | User-facing state, checkout flows, dashboards  | Analytics, notifications, cross-service sync |

You can run both on the same event store. Critical projections inline, others via webhooks. This is the recommended hybrid approach.