
Cloudflare Workers vs AWS Lambda@Edge: Edge Compute Compared

Benchmark Cloudflare Workers and AWS Lambda@Edge on cold starts, global latency from 20 locations, pricing for three workloads, ecosystem (KV/D1/R2 vs AWS services), and developer experience. Decision framework for latency-critical vs AWS-integrated edge workloads.

Abhishek Patel · 12 min read



Edge Compute Is the New Default

Running code at the edge means your function executes in a data center close to the user instead of in a single origin region. Two platforms dominate this space: Cloudflare Workers and AWS Lambda@Edge. Both promise low-latency execution at globally distributed locations. The architectures underneath are fundamentally different, and those differences affect cold starts, pricing, ecosystem, and what you can actually build.

I've deployed production workloads on both. Workers are fast and opinionated -- V8 isolates, strict memory limits, a growing but still maturing ecosystem. Lambda@Edge is flexible but slower to start -- full Node.js or Python runtimes, deep AWS integration, and cold starts that can make you question your architecture. This guide benchmarks both, compares their ecosystems, and gives you a decision framework based on real workload characteristics.

What Is Edge Computing?

Definition: Edge computing is a distributed computing paradigm that moves computation and data processing from centralized data centers to locations physically closer to end users. In the context of serverless platforms, edge compute means executing functions at CDN Points of Presence (PoPs) worldwide, reducing round-trip latency from hundreds of milliseconds to single-digit milliseconds for cached or computed responses.

Architecture: V8 Isolates vs Node.js Containers

Cloudflare Workers: V8 Isolates

Workers don't run in containers. They run in V8 isolates -- the same sandboxed execution environment that powers Chrome's JavaScript engine. Multiple isolates share a single OS process, separated by V8's security boundary rather than OS-level containerization. This is why cold starts are sub-millisecond: spinning up an isolate is orders of magnitude faster than booting a container or even a microVM.

```javascript
// Cloudflare Worker: geo-aware routing
export default {
  async fetch(request, env) {
    const country = request.cf?.country || 'US';
    const city = request.cf?.city || 'unknown';

    // Route to nearest origin based on continent
    const origins = {
      NA: 'https://us-east.api.example.com',
      EU: 'https://eu-west.api.example.com',
      AS: 'https://ap-south.api.example.com',
    };

    const continent = request.cf?.continent || 'NA';
    const origin = origins[continent] || origins.NA;

    const response = await fetch(`${origin}${new URL(request.url).pathname}`, {
      headers: request.headers,
    });

    return new Response(response.body, {
      status: response.status,
      headers: {
        ...Object.fromEntries(response.headers),
        'X-Served-From': `${city}, ${country}`,
        'X-Edge-Location': continent,
      },
    });
  },
};
```

Workers deploy to 330+ cities. Every request hits the nearest PoP, and the isolate is either already warm (sub-ms) or spins up fresh in under 5ms. There's no "cold region" problem where a rarely-hit location has high latency -- Cloudflare's architecture keeps isolates warm across the entire network.

Lambda@Edge: Node.js/Python at CloudFront

Lambda@Edge runs standard Node.js (up to 20.x) or Python (up to 3.12) runtimes for requests entering CloudFront's 450+ edge locations. Under the hood, each execution environment is a Firecracker microVM -- the same technology as regular Lambda. One nuance: the function doesn't execute at every PoP. When a request hits a CloudFront distribution with a Lambda@Edge trigger, CloudFront invokes the function in the Regional Edge Cache location closest to that edge location.

```javascript
// Lambda@Edge: viewer request handler
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  // A/B testing at the edge
  const experimentCookie = headers.cookie?.find(
    (c) => c.value.includes('experiment=')
  );

  if (!experimentCookie) {
    // Assign user to variant
    const variant = Math.random() < 0.5 ? 'A' : 'B';
    request.headers['x-experiment-variant'] = [
      { key: 'X-Experiment-Variant', value: variant },
    ];

    // Set cookie on the response via origin
    request.headers['x-set-experiment'] = [
      { key: 'X-Set-Experiment', value: variant },
    ];
  }

  return request;
};
```

The trade-off: Lambda@Edge has 50-500ms cold starts, sometimes exceeding 1 second for larger bundles. AWS replicates functions to edge locations on demand, which means a region that hasn't seen traffic recently will cold-start. Unlike Workers, you don't deploy to all locations simultaneously -- AWS provisions capacity reactively.

Cold Start Benchmarks

Cold starts are the defining performance difference between these platforms. We measured cold start latency by deploying identical workloads (JSON transformation, ~50 KB response) from 20 global locations using synthetic monitoring.

| Metric | Cloudflare Workers | Lambda@Edge |
| --- | --- | --- |
| Median cold start | < 1ms | 180ms |
| P95 cold start | < 3ms | 420ms |
| P99 cold start | < 5ms | 850ms |
| Worst observed | 12ms | 1,400ms |
| Warm execution | 0.5 - 2ms | 5 - 15ms |

Watch out: Lambda@Edge cold starts compound with CloudFront cache misses. If your edge function also fetches from an origin that itself has latency, users can experience 500ms+ total response times on the first request to a new region. Workers' sub-ms cold starts mean the edge function overhead is negligible even on the first request.
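The compounding is simple addition, but it's worth making explicit. A back-of-the-envelope sketch (the function and all input numbers are illustrative -- the cold-start figures come from the table above, the origin round trip is an assumption):

```javascript
// Model of first-request latency to a cold region: edge cold start,
// function execution, and the origin fetch all happen in sequence,
// so the terms add up. All inputs are placeholders to plug into.
function firstRequestMs({ coldStartMs, execMs, originRttMs }) {
  return coldStartMs + execMs + originRttMs;
}

// Lambda@Edge: P95 cold start (420ms) + 10ms exec + 120ms origin round trip
const lambdaFirst = firstRequestMs({ coldStartMs: 420, execMs: 10, originRttMs: 120 });

// Workers: same workload, sub-ms cold start
const workersFirst = firstRequestMs({ coldStartMs: 1, execMs: 2, originRttMs: 120 });
```

With these inputs the platforms differ by over 400ms on the first request, and nearly all of that gap is cold start, not execution or network.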

Global Latency: 20-Location Test

We measured end-to-end response times (including TLS handshake, cold start if applicable, and execution) for a simple JSON API endpoint from 20 cities across six continents. The table shows 10 representative locations.

| Region | Workers (ms) | Lambda@Edge (ms) | Delta |
| --- | --- | --- | --- |
| New York | 8 | 22 | -14ms |
| London | 6 | 18 | -12ms |
| Frankfurt | 7 | 20 | -13ms |
| Tokyo | 9 | 35 | -26ms |
| Sydney | 11 | 42 | -31ms |
| Sao Paulo | 12 | 55 | -43ms |
| Mumbai | 10 | 48 | -38ms |
| Singapore | 8 | 30 | -22ms |
| Johannesburg | 15 | 65 | -50ms |
| Dubai | 9 | 38 | -29ms |

Workers consistently beat Lambda@Edge by 12-50ms across all locations. The gap widens in regions with fewer CloudFront PoPs (Africa, South America) because Lambda@Edge cold starts happen more frequently where traffic volume is lower. Workers' Anycast routing and pre-warmed isolates close that gap.

Pricing Comparison: Three Workloads

Pricing models differ fundamentally. Workers bills per request plus CPU time; Lambda@Edge bills per request plus execution duration scaled by allocated memory (GB-seconds). Let's compare three realistic workloads.

Workload 1: Auth Token Validation (10M requests/month)

| Cost Component | Cloudflare Workers | Lambda@Edge |
| --- | --- | --- |
| Request charges | $5.00 (paid plan) | $6.00 |
| Compute charges | ~$0 (< 1ms CPU) | ~$1.90 (128 MB, 5ms avg) |
| Total | $5.00 | $7.90 |

Workload 2: Image Transformation (1M requests/month)

| Cost Component | Cloudflare Workers | Lambda@Edge |
| --- | --- | --- |
| Request charges | $5.00 (paid plan) | $0.60 |
| Compute charges | ~$4.50 (30ms CPU avg) | ~$6.25 (512 MB, 200ms avg) |
| Total | $9.50 | $6.85 |

Workload 3: API Gateway / Routing (100M requests/month)

| Cost Component | Cloudflare Workers | Lambda@Edge |
| --- | --- | --- |
| Request charges | $50.00 (paid plan) | $60.00 |
| Compute charges | ~$5.00 (< 1ms CPU) | ~$18.80 (128 MB, 5ms avg) |
| Total | $55.00 | $78.80 |

Workers win on lightweight, high-volume workloads because sub-millisecond CPU time is essentially free. Lambda@Edge wins on compute-heavy tasks where you need more memory and longer execution times. For high-volume routing and request manipulation, Workers' flat pricing structure scales more predictably.
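The two billing shapes are easy to encode. A sketch of both formulas -- the rate constants below are assumptions for illustration (published pricing changes; check current rate cards before relying on them), so the outputs won't exactly reproduce the table totals, but the structure of each model is what matters:

```javascript
// Workers-style billing (sketch): flat base fee with included requests
// and CPU-ms, then per-unit overage. All rates are placeholder assumptions.
function workersCost({ requestsM, avgCpuMs }, rates = {
  baseUsd: 5, includedReqM: 10, perExtraReqM: 0.30,
  includedCpuMsM: 30, perExtraCpuMsM: 0.02,
}) {
  const reqCharge = Math.max(0, requestsM - rates.includedReqM) * rates.perExtraReqM;
  const totalCpuMsM = requestsM * avgCpuMs; // million CPU-ms across all requests
  const cpuCharge = Math.max(0, totalCpuMsM - rates.includedCpuMsM) * rates.perExtraCpuMsM;
  return rates.baseUsd + reqCharge + cpuCharge;
}

// Lambda@Edge-style billing (sketch): per-request charge plus
// GB-seconds (duration x allocated memory). Rates are placeholders.
function lambdaEdgeCost({ requestsM, avgMs, memoryMB }, rates = {
  perReqM: 0.60, perGBs: 0.00005001,
}) {
  const reqCharge = requestsM * rates.perReqM;
  const gbSeconds = requestsM * 1e6 * (avgMs / 1000) * (memoryMB / 1024);
  return reqCharge + gbSeconds * rates.perGBs;
}
```

The structural difference shows up immediately: in `workersCost`, `avgCpuMs` is the only compute knob and sub-millisecond values round to almost nothing, while in `lambdaEdgeCost`, memory multiplies every millisecond of duration.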

Ecosystem and Services

Edge compute isn't just about the function runtime. The ecosystem of services you can access from the edge determines what you can actually build.

| Capability | Cloudflare Workers | Lambda@Edge |
| --- | --- | --- |
| Key-value store | Workers KV (eventually consistent, global) | DynamoDB (via origin region) |
| SQL database | D1 (SQLite at the edge) | Aurora / RDS (origin region only) |
| Object storage | R2 (S3-compatible, no egress fees) | S3 (standard pricing) |
| Message queues | Queues (pull-based, Workers-native) | SQS (origin region) |
| Stateful coordination | Durable Objects (single-instance, strong consistency) | No edge-native equivalent |
| Caching | Cache API (per-PoP, programmatic) | CloudFront cache (header-based) |
| AI / ML | Workers AI (inference at the edge) | SageMaker / Bedrock (origin region) |
| Full AWS services | No | Yes (IAM-authenticated access to all AWS APIs) |

Pro tip: Durable Objects are Cloudflare's most unique offering. They provide a single globally-unique instance with strongly consistent storage -- ideal for real-time collaboration, rate limiting with exact counts, or WebSocket coordination. There is no Lambda@Edge equivalent; you'd need a centralized database or ElastiCache to achieve similar coordination.
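The "rate limiting with exact counts" use case can be sketched as a fixed-window counter. In a real Worker this class would be exported as a Durable Object and bound in wrangler.toml; here the persistent `state.storage` API is replaced with plain instance fields so the counting logic stands alone (class and method names are illustrative):

```javascript
// Fixed-window rate limiter in the shape of a Durable Object class.
// Because a Durable Object is a single global instance, the count is
// exact -- there are no cross-PoP races to reconcile.
class RateLimiter {
  constructor(limit = 100, windowMs = 60_000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.window = -1; // index of the current time window
    this.count = 0;
  }

  // Returns true if a request arriving at `now` is allowed.
  check(now = Date.now()) {
    const window = Math.floor(now / this.windowMs);
    if (window !== this.window) {
      // New window: reset the counter. A real Durable Object would
      // persist this via this.state.storage.put() to survive eviction.
      this.window = window;
      this.count = 0;
    }
    return ++this.count <= this.limit;
  }
}
```

In production you'd route each client (API key, IP) to its own Durable Object instance via `idFromName`, so every client gets its own exact counter.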

Runtime Constraints

| Limit | Cloudflare Workers | Lambda@Edge |
| --- | --- | --- |
| Max execution time | 30s CPU (paid), 10ms CPU (free) | 5s (viewer triggers), 30s (origin triggers) |
| Memory | 128 MB | 128 MB (viewer triggers), up to 10,240 MB (origin triggers) |
| Package size | 10 MB (compressed) | 1 MB (viewer triggers), 50 MB (origin triggers, zipped) |
| Language support | JavaScript, TypeScript, Wasm (Rust, C, Go via Wasm) | Node.js, Python |
| File system access | No | Read-only package; writable /tmp (512 MB) |
| Native Node.js APIs | Partial (growing compatibility layer) | Full |
| Environment variables | Secrets + bindings (encrypted at rest) | Not supported (bake config into the bundle or fetch at runtime) |

Workers' 128 MB memory limit is non-negotiable and can be a dealbreaker for workloads that process large payloads in memory. Lambda@Edge gives you up to 10 GB of memory on origin-facing triggers (viewer triggers are capped at 128 MB), making it viable for image processing, PDF generation, or data transformation tasks that Workers can't handle.

Developer Experience

Cloudflare: Wrangler CLI

Wrangler is Cloudflare's CLI for Workers development. It handles local development, testing, and deployment in a single tool.

```bash
# Initialize a new Workers project
npx wrangler init my-worker

# Local development with hot reload
npx wrangler dev

# Deploy to production (all 330+ locations simultaneously)
npx wrangler deploy

# Tail production logs in real time
npx wrangler tail
```

Local dev with wrangler dev runs a local V8 isolate that closely mirrors production. Deployments complete in under 15 seconds globally. The iteration cycle is fast: edit, save, test locally, deploy, verify in production -- all within a minute.

AWS: SAM / CDK / Serverless Framework

Lambda@Edge development involves more moving parts. You define your function, associate it with a CloudFront distribution, and wait for the distribution to deploy (which takes 5-15 minutes per change).

```yaml
# SAM template: Lambda@Edge with CloudFront
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  EdgeFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs20.x
      MemorySize: 128
      Timeout: 5
      AutoPublishAlias: live

  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        DefaultCacheBehavior:
          LambdaFunctionAssociations:
            - EventType: viewer-request
              LambdaFunctionARN: !Ref EdgeFunction.Version
              IncludeBody: false
```

The 5-15 minute CloudFront deployment cycle is the biggest DX pain point. Workers deploy globally in seconds; Lambda@Edge requires a full CloudFront distribution update. Local testing via SAM doesn't accurately simulate the CloudFront event object, so you inevitably discover issues in staging rather than locally.

When to Choose Cloudflare Workers

  • Latency-critical applications -- sub-ms cold starts mean consistent performance globally.
  • High-volume request routing, A/B testing, or header manipulation -- lightweight operations where per-request overhead matters.
  • Edge-native data needs -- KV, D1, R2, Durable Objects, and Queues let you build entire applications at the edge without an origin server.
  • Fast iteration cycles -- deploy in seconds, not minutes.
  • Multi-cloud or cloud-agnostic architectures -- Workers have no dependency on any cloud provider's ecosystem.

When to Choose Lambda@Edge

  • AWS-integrated architectures -- your functions need to call DynamoDB, SQS, S3, Secrets Manager, or other AWS services with IAM authentication.
  • Memory-intensive workloads -- anything requiring more than 128 MB of memory is impossible on Workers.
  • Existing CloudFront distributions -- adding edge logic to an existing CloudFront setup is a natural extension, not a platform migration.
  • Python workloads -- if your team writes Python and you want native runtime support rather than Wasm compilation.
  • Compliance requirements -- AWS offers more granular region controls, compliance certifications, and audit logging than Cloudflare for regulated industries.

Frequently Asked Questions

What exactly is a V8 isolate and why does it make Workers faster?

A V8 isolate is a lightweight, sandboxed JavaScript execution environment within the V8 engine (Chrome's JS engine). Unlike containers, which require OS-level process isolation, isolates share a single process and are separated by V8's memory sandboxing. Starting an isolate takes microseconds versus milliseconds for a container. The trade-off is that you're limited to JavaScript/TypeScript/Wasm -- you can't run arbitrary binaries, access the file system, or use native Node.js modules that depend on C++ addons.

Can Lambda@Edge access all AWS services?

Yes, Lambda@Edge can make IAM-authenticated calls to any AWS service. However, those calls go to the service's regional endpoint (e.g., DynamoDB in us-east-1), not to an edge-local instance. This means a Lambda@Edge function in a Sydney PoP calling DynamoDB in us-east-1 adds 200ms+ of network latency. For low-latency access to data, you need DynamoDB Global Tables or a read replica in the nearest region. Workers sidestep this with edge-native KV and D1.

How do Cloudflare Workers handle WebSockets?

Workers support WebSocket connections natively. Combined with Durable Objects, you can build real-time applications (chat, collaboration, live updates) entirely at the edge. Each Durable Object instance maintains its own WebSocket connections and persistent state. Lambda@Edge does not support WebSocket connections -- you'd need API Gateway WebSocket APIs or AppSync, which run in a single AWS region.

What is CloudFront Functions and how does it compare?

CloudFront Functions is AWS's lightweight alternative to Lambda@Edge, running JavaScript at all CloudFront edge locations with sub-millisecond cold starts. It's limited to viewer request/response events, has a 10 KB code size limit, 2 MB maximum memory, and no network access. It's suitable for simple header manipulation, URL rewrites, and cache key normalization -- but for anything beyond basic transformations, you need Lambda@Edge. Think of CloudFront Functions as comparable to Workers' simplest use cases, but without network access or persistent storage.

Is it possible to migrate from Lambda@Edge to Workers?

For simple request/response transformations, migration is straightforward -- rewrite the handler to use the Workers fetch API instead of the CloudFront event object. The harder parts are replacing AWS service calls: DynamoDB access becomes KV or D1 queries, S3 becomes R2, and any service without a Cloudflare equivalent needs an external HTTP call. Plan 2-4 weeks for a non-trivial migration, with most time spent on data layer changes rather than business logic.
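To make that concrete, here's the A/B-testing logic from the Lambda@Edge example earlier factored into a pure helper (a hypothetical function, not from either SDK) that drops into either platform's handler shape -- the migration work is then just the header-access plumbing around it:

```javascript
// Pure variant-assignment logic shared by both handler shapes.
// `rand` is injectable so the branch is deterministic under test.
function assignVariant(cookieHeader, rand = Math.random) {
  if (cookieHeader && cookieHeader.includes('experiment=')) {
    return null; // user already has a variant; leave headers alone
  }
  return rand() < 0.5 ? 'A' : 'B';
}

// Workers shape (sketch): headers are a standard Headers object.
//   const variant = assignVariant(request.headers.get('cookie'));
//
// Lambda@Edge shape: headers are arrays of { key, value } records.
//   const cookie = (request.headers.cookie || []).map((c) => c.value).join('; ');
//   const variant = assignVariant(cookie);
```

Factoring handlers into pure functions like this before migrating also gives you a test suite that passes unchanged on both platforms.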

How do costs compare at very high scale (1B+ requests/month)?

At 1 billion requests per month with lightweight processing (< 1ms CPU), Workers on the paid plan costs roughly $5,000/month. Lambda@Edge at the same volume with 128 MB and 5ms average duration runs approximately $7,800/month. Both platforms offer enterprise pricing with volume discounts. At this scale, negotiate directly with sales teams -- published pricing is a starting point, not a ceiling. Cloudflare's Enterprise plan and AWS's Enterprise Discount Program can reduce costs by 20-40%.

Do Workers support cron jobs or scheduled execution?

Yes, Workers support Cron Triggers that execute on a schedule (minimum interval: 1 minute). You define schedules in your wrangler.toml configuration and handle them with a scheduled event handler. Lambda@Edge does not support scheduled execution directly -- you'd use regular Lambda with EventBridge schedules, which runs in a single region rather than at the edge. If your scheduled task needs to run at the edge (e.g., cache warming across all PoPs), Workers Cron Triggers are the only option.
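The wrangler.toml side of that is only a couple of lines (a sketch; the cron expression is illustrative):

```toml
# wrangler.toml: invoke the Worker's scheduled() handler every 15 minutes
[triggers]
crons = ["*/15 * * * *"]
```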

Pick the Edge Platform That Fits Your Stack

If latency is your primary constraint and you're building new, Workers is the better platform. Sub-millisecond cold starts, a growing edge-native ecosystem (KV, D1, R2, Durable Objects, Queues), and a fast deployment cycle make it the strongest choice for latency-sensitive, globally distributed applications. If you're deeply invested in AWS and your edge functions need to call DynamoDB, SQS, or other AWS services with IAM authentication, Lambda@Edge keeps you in a single ecosystem with a single billing relationship. The wrong choice is forcing one platform to do what the other does naturally -- don't fight the architecture, work with it.


Written by

Abhishek Patel

Infrastructure engineer with 10+ years building production systems on AWS, GCP, and bare metal. Writes practical guides on cloud architecture, containers, networking, and Linux for developers who want to understand how things actually work under the hood.
