Floating Promises in Serverless

engineering · serverless · debugging · nextjs

We were seeing database errors in our logs that didn’t make sense. A request would come in, do its work, return successfully - and somewhere in the logs there’d be a database connection error. But the request didn’t make any database calls. Not in the code, not in the logs. The errors were just… there, attached to a request that had nothing to do with them.

How lambdas actually work

The mental model most people have is that each request gets a fresh container. That’s not quite right. After a lambda handles a request, it doesn’t shut down - it freezes. The runtime stays warm, module-level variables persist, and singleton clients (like a database connection pool) survive across invocations. When the next request arrives, the environment is thawed and reused. This is well-understood and it’s why the “initialize your client outside the handler” pattern works.
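That pattern looks like this. The sketch below uses a hypothetical in-memory pool type rather than a real client library (the real thing would be pg, mysql2, etc.); the counter just makes the reuse observable:

```typescript
// Hypothetical pool type for illustration -- not a real client library.
type Pool = { id: number; query(sql: string): Promise<string> };

let poolsCreated = 0;

function createPool(): Pool {
  const id = ++poolsCreated;
  return {
    id,
    async query(sql: string) {
      return `${sql} via pool #${id}`;
    },
  };
}

// Module scope: runs once per cold start, then survives every freeze/thaw.
const pool = createPool();

export async function handler(): Promise<string> {
  // Handler scope: reuses the warm pool instead of paying
  // connection setup on every invocation.
  return pool.query("SELECT 1");
}
```

On a warm lambda, every invocation of `handler` hits the same `pool` instance; `createPool` only runs again after a cold start.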

What’s less obvious is what happens to promises that haven’t settled when the lambda freezes. AWS documents that background processes or callbacks resume when the environment is reused. That includes promises.

The bug

A fire-and-forget pattern had been introduced. After processing a request, an analytics event was kicked off without awaiting it:

export async function POST(request: Request) {
  const result = await processOrder(request);

  // Fire and forget
  trackAnalytics({ event: "order_placed", orderId: result.id });

  return Response.json(result);
}

In a long-lived server process, this is fine. The promise resolves in the background, nobody cares. In a lambda, the sequence plays out differently:

[Diagram: a floating promise from Request A survives the lambda freeze and resolves during Request B's execution context.]
A floating promise outlives the request that created it. When the lambda thaws for a new request, the old promise resolves in the wrong context.
  1. Request A fires off trackAnalytics() without await and returns a response. The promise sent its request to the external service but hasn’t received a response yet.
  2. The lambda has no more work to do. It freezes - but the pending promise is still in memory.
  3. The external service responds while the lambda is frozen. Nobody’s listening.
  4. Request B arrives. The lambda thaws, and the old promise thaws with it.
  5. The stale promise resolves or rejects in Request B’s execution context.

If it resolves, its continuation runs against state it no longer owns. If it rejects (timeout, dead connection), the unhandled rejection surfaces during Request B and can corrupt shared state or show up as an error on a request that never made the call. That’s where our phantom errors were coming from - they belonged to the previous request.

What we saw

A floating promise had sent a query to the database, then the lambda froze. The database responded in milliseconds, but the lambda was frozen - the response went nowhere. When the lambda thawed for a new request, the old promise was still waiting. It eventually timed out, and that timeout fired during the new request’s execution. The errors in our logs weren’t from the current request. They were leftovers from the last one.

Finding it

Once I had the hunch, I wrote a quick test:

let leftover: string | null = null;

export async function GET() {
  const inherited = leftover;

  // Deliberately floating: not awaited, not stored.
  new Promise<void>((resolve) => {
    setTimeout(() => {
      leftover = `set at ${new Date().toISOString()}`;
      resolve();
    }, 100);
  });

  return Response.json({
    leftover,
    message: inherited
      ? "This value was set by a previous request"
      : "No leftover yet - hit this again",
  });
}

First hit: no leftover. Second hit: there’s the value from the first request, written by a promise that executed after the response was already sent. That confirmed it.

From there I went to the AWS Lambda docs, which describe execution environment reuse and background process resumption in detail, and traced the behavior back to our specific fire-and-forget calls.

The fix

Await everything. If there’s a promise, it gets awaited before the handler returns. This is the simplest and most reliable fix, at the cost of adding the background work’s latency to the response.

export async function POST(request: Request) {
  const result = await processOrder(request);
  await trackAnalytics({ event: "order_placed", orderId: result.id });
  return Response.json(result);
}
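With more than one background task, a plain `await` chain means a failed analytics call rejects the whole handler. `Promise.allSettled` never rejects, so every task settles before the freeze and a failure only produces a log line. The stubs below stand in for the app’s real `processOrder`, `trackAnalytics`, and `sendWebhook`, and the return value stands in for `Response.json(result)`:

```typescript
// Stubs standing in for the app's real functions (assumptions for the sketch).
type Order = { id: string };
async function processOrder(_req: unknown): Promise<Order> {
  return { id: "ord_1" };
}
async function trackAnalytics(_e: { event: string; orderId: string }): Promise<void> {
  throw new Error("analytics endpoint timed out"); // simulate a failure
}
async function sendWebhook(_o: Order): Promise<void> {}

export async function POST(request: unknown): Promise<Order> {
  const result = await processOrder(request);

  // Settle every background task before returning. allSettled never
  // rejects, so one failed task can't take down the response.
  const settled = await Promise.allSettled([
    trackAnalytics({ event: "order_placed", orderId: result.id }),
    sendWebhook(result),
  ]);

  for (const s of settled) {
    if (s.status === "rejected") {
      console.error("background task failed:", s.reason);
    }
  }

  return result; // stand-in for Response.json(result)
}
```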

Use after() for work that shouldn’t block the response. Next.js stabilized after() in v15.1 for exactly this. It uses waitUntil() under the hood to extend the lambda’s lifetime, so the work completes before the runtime freezes - without making the user wait.

import { after } from "next/server";

export async function POST(request: Request) {
  const result = await processOrder(request);

  after(async () => {
    await trackAnalytics({ event: "order_placed", orderId: result.id });
    await sendWebhook(result);
  });

  return Response.json(result);
}

The response goes out immediately. The lambda stays alive until after() finishes. No floating promises.
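To make the mechanism concrete, here’s a minimal sketch of what waitUntil-style lifetime extension does conceptually - this is illustrative, not Vercel’s actual implementation: the runtime collects background promises and drains them after the response is sent, before freezing.

```typescript
// Background work registered during the current invocation.
const background: Promise<unknown>[] = [];

// What after()/waitUntil() amount to: register work to finish pre-freeze.
// The catch keeps one failed task from rejecting the whole drain.
function waitUntil(p: Promise<unknown>): void {
  background.push(
    p.catch((err) => console.error("background task failed:", err)),
  );
}

// The platform would call this after the response goes out, before freeze.
async function drainBeforeFreeze(): Promise<void> {
  await Promise.all(background.splice(0));
}
```

The response isn’t blocked on the work, but the environment isn’t frozen until the work settles - which is exactly the gap a floating promise falls into.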

Takeaway

Serverless runtimes reuse execution environments. Any promise that hasn’t settled when the handler returns can survive into the next invocation. The after() API exists precisely because this is such a common problem - fire-and-forget is a natural pattern that works everywhere except lambdas.

This doesn’t match the mental model that serverless platforms present. The whole pitch is isolated, stateless, per-request compute. But under the hood, lambdas share state between invocations by design. If you’re running fire-and-forget patterns on Vercel or any lambda-based platform, audit your async calls. Either await them or use after() to ensure they complete within the current request’s lifecycle.
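The audit doesn’t have to be manual. typescript-eslint’s no-floating-promises rule flags any promise that is neither awaited nor explicitly voided. A flat-config sketch (assuming the typescript-eslint package; the rule needs type information, hence projectService):

```javascript
// eslint.config.mjs
import tseslint from "typescript-eslint";

export default tseslint.config(
  ...tseslint.configs.recommendedTypeChecked,
  {
    languageOptions: {
      parserOptions: { projectService: true },
    },
    rules: {
      // Errors on every unhandled promise, including fire-and-forget calls.
      "@typescript-eslint/no-floating-promises": "error",
    },
  },
);
```

With this in CI, the `trackAnalytics()` call from the original bug would have failed the lint run.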

Further Reading