Reproducing AWS Lambda’s Execution Environment Locally with Docker Compose

Feb 21, 2026 Edited: Feb 22, 2026

From time to time, I find myself debugging Lambda workflows that time out or take longer than expected.

The common traits of these Lambdas are few and very specific: simple, atomic, non-hot-path jobs (async workers, housekeepers, etc.), written in Node.js.

While everything always runs sharp and speedy on my powerful, company-issued dev machine, debugging timeouts becomes difficult once the code moves to a managed environment, where you have very little control over resource distribution.

Step 1: Environment

Docker is a great tool for making sure the environment is “part of the game”. It can also help mimic the Lambda execution environment’s configuration, in ways that are rarely documented.

First of all, AWS publishes official base Docker images for each supported runtime and maintains them in line with the Lambda runtimes’ shared responsibility model.

Read more: https://gallery.ecr.aws/lambda/nodejs

These images serve various purposes, including providing a working base image for custom container-based Lambda deployments.

All of those images bundle the Lambda Runtime Interface Client (RIC) and the Runtime Interface Emulator (RIE), which together implement the Lambda Runtime API in full. This is a crucial point, as it allows us to trigger our code directly with an HTTP request to the container.

Step 2: Setup

Long story short: if you have good tests and coverage, half of the work is already done.

The goal here is to isolate your function’s logic from the test framework overhead, running just enough scaffolding to replicate what Lambda injects at startup, which is exactly part of what you already do when writing test coverage for your implementation.

Test frameworks and harnesses add code (and thus compute) overhead that can skew your performance profiling, but you can usually strip out most of what is not needed for your specific case.

That eventually leads you to write a custom handler entrypoint for your container, making sure your code has all its library configuration and service dependencies in place. The final custom Dockerfile will look something like this:

FROM public.ecr.aws/lambda/nodejs:24
# Set handler entrypoint:
# a file called wrapper.js exposing an exported function named handler.
# The path is relative to the LAMBDA_TASK_ROOT dir (/var/task by default) - see next paragraph
CMD ["myFunction/wrapper.handler"]
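
As a reference, a minimal wrapper.js could look like the sketch below. The ./lib/processRecords module and its return value are hypothetical placeholders; the point is to initialize the same configuration and service clients your deployed handler would, without any test harness around it.

// wrapper.js - minimal handler entrypoint (sketch; module names are placeholders)
// The RIC/RIE will invoke the exported "handler" function.
const { processRecords } = require('./lib/processRecords'); // hypothetical business logic

exports.handler = async (event, context) => {
  // Same values the real runtime exposes - handy to confirm parity with the deployed config
  console.log('Memory (MB):', process.env.AWS_LAMBDA_FUNCTION_MEMORY_SIZE);
  console.log('Remaining time (ms):', context.getRemainingTimeInMillis());

  const result = await processRecords(event);
  return { statusCode: 200, body: JSON.stringify(result) };
};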

A note about node_modules and platforms

These images run on Linux, while your host might differ. Since we are going to mount the source code as a shared volume (and node_modules will be in there too), you need to make sure the dependencies installed on your host are compatible, or add an extra step that copies the package*.json files into the image and runs npm install inside the container at build time (sketched below).
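
If you go the build-time route, a sketch of that extra step could look like this (the paths are illustrative and must match your project layout):

FROM public.ecr.aws/lambda/nodejs:24
# Install Linux-compatible dependencies inside the image instead of reusing host node_modules
COPY myFunction/package*.json ${LAMBDA_TASK_ROOT}/myFunction/
RUN cd ${LAMBDA_TASK_ROOT}/myFunction && npm ci
# Note: if you still bind-mount the source directory, make sure the mount does not shadow
# the node_modules installed above (e.g. mount individual subfolders instead of the whole dir)
CMD ["myFunction/wrapper.handler"]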

Step 3: “Bend it like Lambda”

Docker Compose syntax provides a straightforward way of declaring resource limits for a given service. While this is mostly useful in distributed, load-balanced production deployments, nothing prevents us from using it locally too.

services:
  lambda:
    build: .
    platform: linux/amd64   # Match target Lambda's architecture
    ports:
      - "9000:8080" # The RIE listens internally on port 8080
    environment:
      - AWS_LAMBDA_FUNCTION_MEMORY_SIZE=512 # Matches Lambda memory setting
      - AWS_LAMBDA_FUNCTION_TIMEOUT=120 # Matches Lambda timeout setting, in seconds
      - AWS_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=local
      - AWS_SECRET_ACCESS_KEY=local
    volumes:
      - ./path/to/your/project:/var/task/myFunction:ro
    deploy:
      resources:
        limits:
          memory: 512M       # Matches Lambda memory setting
          cpus: '0.29'       # Magic CPU core number - Memory / 1769 = 512 / 1769

A note about the magic number

Several online sources provide detailed reports on the matter. Basically, AWS Lambda (built on Firecracker microVMs), like pretty much every FaaS, lets you manage only a few aspects of resource allocation.

It makes sense: they manage the environment, so they know how to distribute load, spawn new instances of your function and so on.

Lambda allows you to set, among other things, the memory (RAM) your function needs. CPU availability scales up fractionally and linearly with the memory amount. Since billing is done accordingly (GB-seconds of memory plus invocation count), you would expect the scaling steps to be publicly disclosed by AWS.

Several people have spent a bit of time to “profile” and “guess” how many cores (or fractions of a core) you get for a specific memory amount. The result is: you get one full vCPU for every 1769 MB of RAM.
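
For example, plugging the 512 MB setting from the Compose file above into that ratio gives the cpus value used there (a quick sketch):

// cpu-fraction.js - derive the Docker "cpus" value from the Lambda memory setting
const MB_PER_VCPU = 1769;      // ~1 full vCPU per 1769 MB, per the observations above
const memoryMb = 512;          // your Lambda memory setting

console.log((memoryMb / MB_PER_VCPU).toFixed(2)); // -> "0.29", the value in the compose file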

Step 4: “Call me, maybe”

Running docker compose up (optionally with -d) brings this Lambda up, with the RIE listening on your local port 9000.

So… Call that function

curl -X POST "http://localhost:9000/2015-03-31/functions/function/invocations" \
  -d '{"key": "value"}'

Your code is now being executed locally, in an environment that closely reflects your deployment configuration.
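
If you want to script repeated invocations instead of using curl, for example to compare cold and warm timings, a minimal sketch using Node’s built-in fetch (Node 18+, assuming your handler returns JSON) could be:

// invoke.js - fire a few invocations against the RIE and time them
const url = 'http://localhost:9000/2015-03-31/functions/function/invocations';

(async () => {
  for (let i = 0; i < 5; i++) {
    const start = performance.now();
    const res = await fetch(url, { method: 'POST', body: JSON.stringify({ key: 'value' }) });
    const payload = await res.json();
    console.log(`#${i}: ${Math.round(performance.now() - start)} ms`, payload);
  }
})();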

That’s all - sort of

There are a few differences, of course, but we can get quite close to compute-power and execution-environment parity.

While AWS Lambda’s Firecracker microVM enforces the maximum memory for the instance, the Docker limit will actually kill the container (OOM).

While the linear CPU/memory formula serves its purpose well, it’s not quite 100% spot on and may not reflect the actual resource sharing/gating happening on Firecracker, especially regarding I/O (network, filesystem and other offloaded tasks). Most importantly, cpus: 1.0 in Docker is not the same thing as one vCPU-second of credits per second.

Last but not least, only one function instance is running at a time. There’s no simulated concurrency, unlike real Lambda, which can spin up multiple execution environments in parallel, and there are no concurrency isolation guarantees (we could potentially get there with some Compose scaling, a deeper dive into the RIE internals and a more complex local setup).

Sources

  • AWS ECR Public Gallery, Lambda Node.js base images: https://gallery.ecr.aws/lambda/nodejs

Author note

AI disclosure: parts of this content were developed in conversation with Perplexity AI (Claude Sonnet 4.6) and verified against official documentation. The main concept is still human-developed, and AI has been used to research and find sources, as well as to improve the clarity of this post.

~LBRDan