CORS with API Gateways and Proxies
Here's the thing about CORS: the best CORS configuration is no CORS configuration. If you can architect your system so that the browser never makes a cross-origin request, you've eliminated an entire class of bugs, preflights, and confused Stack Overflow searches.
Reverse proxies, API gateways, and edge services are often the right place to make that happen—or, when CORS is unavoidable, the right place to configure it centrally instead of scattering Access-Control-Allow-Origin headers across every microservice in your fleet.
How Reverse Proxies Eliminate CORS
A reverse proxy sits between the browser and your backend services, presenting a single origin to the client. The browser talks to https://app.example.com. The reverse proxy routes requests internally based on path, hostname, or other rules. The backend servers could be running on different ports, different machines, even different data centers. The browser doesn't know and doesn't care.
                    ┌───────────────────────────────────┐
                    │ Reverse Proxy                     │
Browser ──────────► │ https://app.example.com           │
                    │                                   │
                    │ /           → frontend:3000       │
                    │ /api/users  → user-service:8081   │
                    │ /api/orders → order-service:8082  │
                    │ /api/auth   → auth-service:8083   │
                    └───────────────────────────────────┘
Every request from the browser goes to the same origin. No cross-origin requests. No CORS. The fact that several different backend services handle those requests is an implementation detail hidden behind the proxy.
This is, frankly, how most production web applications should be set up. CORS is for situations where you can't share an origin, not a tax you accept because you didn't think about your deployment architecture.
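The browser's notion of "same origin" is just a scheme/host/port comparison. A small sketch (the helper name is illustrative, not a browser API) shows why routing everything through one hostname makes every request same-origin:

```javascript
// Hypothetical helper mirroring the browser's same-origin check:
// a request is cross-origin if scheme, host, or port differ from the page.
function isCrossOrigin(pageOrigin, requestUrl) {
  // Relative URLs resolve against the page's origin, so they are always same-origin.
  const target = new URL(requestUrl, pageOrigin);
  return new URL(pageOrigin).origin !== target.origin;
}

// Calling a per-service hostname directly is cross-origin...
console.log(isCrossOrigin('https://app.example.com', 'https://user-service.example.com/api/users')); // true
// ...but the same request routed through the reverse proxy is not.
console.log(isCrossOrigin('https://app.example.com', '/api/users')); // false
```

Note that a different port alone is enough to make a request cross-origin, which is why "frontend on :3000, API on :8080" setups hit CORS in development.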
Nginx as a Reverse Proxy
Nginx is the most common way to achieve this. Here's a production-ready configuration:
upstream frontend {
    server 127.0.0.1:3000;
}

upstream api {
    server 127.0.0.1:8080;
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    # Frontend: serve static files or proxy to SSR server
    location / {
        root /var/www/app/dist;
        try_files $uri $uri/ /index.html;
    }

    # API: proxy to backend
    location /api/ {
        proxy_pass http://api;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Important: do NOT add CORS headers here.
        # Same origin = no CORS needed.
    }
}
Test it with curl to confirm no CORS headers are involved:
# Request without Origin header (same-origin requests don't send one)
curl -v https://app.example.com/api/users
# No Access-Control-* headers in response — because none are needed
If your frontend is server-rendered (Next.js, Nuxt, etc.), replace the static file serving with another proxy_pass:
location / {
    proxy_pass http://frontend;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
When You Actually Need CORS: API Gateways
Sometimes you can't hide behind a reverse proxy. Your API serves multiple clients: a web app at app.example.com, a partner portal at partner.example.com, a mobile app (which isn't subject to CORS, but hits the same endpoints). You need CORS, and you need it configured in one place rather than in every microservice.
API gateways are the natural place for this.
AWS API Gateway
AWS API Gateway has CORS support, but it's one of those features that's just confusing enough to generate a support ticket. There are two completely different places where CORS headers can be configured, and you probably need both.
For REST APIs (v1):
- Integration Response: These are the CORS headers returned on successful API calls. You configure them on each method's Integration Response or via a Gateway Response.
- Gateway Response: These handle cases where API Gateway itself rejects the request before it reaches your Lambda—things like authorization failures, throttling, or bad request format. If you only configure CORS on your Lambda, a 403 from the authorizer won't have CORS headers, and the browser will show a generic CORS error instead of the actual error.
In the AWS Console, for each resource:
- Enable CORS on the resource (Actions → Enable CORS)
- Set your allowed origins, methods, and headers
- Deploy the API (people forget this step constantly)
Or via CloudFormation / SAM:
MyApi:
  Type: AWS::Serverless::Api
  Properties:
    StageName: prod
    Cors:
      AllowOrigin: "'https://app.example.com'"
      AllowMethods: "'GET,POST,PUT,DELETE,OPTIONS'"
      AllowHeaders: "'Content-Type,Authorization'"
      MaxAge: "'86400'"
Note the nested quotes. Yes, the single quotes inside the double quotes are required. Yes, this has caused thousands of hours of collective debugging time. SAM drops the value verbatim into the generated response mapping, which expects a single-quoted string literal, hence the double layer of quoting.
For HTTP APIs (v2):
HTTP APIs have a simpler CORS configuration. In the console, go to your API → CORS, and fill in the fields. Or via CloudFormation:
MyHttpApi:
  Type: AWS::ApiGatewayV2::Api
  Properties:
    Name: my-api
    ProtocolType: HTTP
    CorsConfiguration:
      AllowOrigins:
        - https://app.example.com
      AllowMethods:
        - GET
        - POST
        - PUT
        - DELETE
        - OPTIONS
      AllowHeaders:
        - Content-Type
        - Authorization
      MaxAge: 86400
Much cleaner. HTTP APIs (v2) are generally preferred over REST APIs (v1) for new projects, and CORS configuration is one of the many reasons why.
The AWS CORS Trap:
Your Lambda returns a 200 with data. CORS headers are on the integration response. Everything works. Then one day your authorizer rejects a request. API Gateway returns a 403. The 403 has no CORS headers because the request never reached the integration. The browser sees a response without Access-Control-Allow-Origin and reports a CORS error. Your users see "CORS error" when the real problem is an expired token.
Fix: configure CORS headers on the Gateway Response (for REST APIs) or enable CORS at the API level (for HTTP APIs), so that every response—including errors generated by API Gateway itself—includes proper CORS headers.
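For REST APIs, that Gateway Response fix can be sketched in CloudFormation. The logical ID and the !Ref target below are placeholders, and you'd typically add a matching DEFAULT_5XX resource as well:

```yaml
# Sketch: attach CORS headers to every 4xx that API Gateway itself generates.
Cors4xxGatewayResponse:
  Type: AWS::ApiGateway::GatewayResponse
  Properties:
    RestApiId: !Ref MyRestApi   # placeholder for your REST API resource
    ResponseType: DEFAULT_4XX
    ResponseParameters:
      gatewayresponse.header.Access-Control-Allow-Origin: "'https://app.example.com'"
      gatewayresponse.header.Access-Control-Allow-Headers: "'Content-Type,Authorization'"
```

The same single-quotes-inside-double-quotes convention applies here as in the SAM Cors block above.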
# Test with curl to see what headers are on an error response
curl -v -X OPTIONS https://abc123.execute-api.us-east-1.amazonaws.com/prod/api/users \
-H "Origin: https://app.example.com" \
-H "Access-Control-Request-Method: GET" \
-H "Access-Control-Request-Headers: Authorization"
Kong
Kong handles CORS via a plugin:
curl -X POST http://kong:8001/services/my-api/plugins \
--data "name=cors" \
--data "config.origins=https://app.example.com" \
--data "config.methods=GET,POST,PUT,DELETE,OPTIONS" \
--data "config.headers=Content-Type,Authorization" \
--data "config.max_age=86400" \
--data "config.credentials=true"
Or declaratively in kong.yml:
plugins:
  - name: cors
    config:
      origins:
        - https://app.example.com
        - https://partner.example.com
      methods:
        - GET
        - POST
        - PUT
        - DELETE
        - OPTIONS
      headers:
        - Content-Type
        - Authorization
      max_age: 86400
      credentials: true
Kong's CORS plugin handles preflight responses automatically. It intercepts OPTIONS requests and responds without forwarding them to your upstream service. This is exactly what you want—your application code never needs to think about CORS.
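With the plugin enabled, a preflight exchange against the gateway should look roughly like this (the exact status line and header order vary by Kong version):

```
OPTIONS /api/users HTTP/1.1
Host: api.example.com
Origin: https://app.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type, Authorization

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS
Access-Control-Allow-Headers: Content-Type,Authorization
Access-Control-Allow-Credentials: true
Access-Control-Max-Age: 86400
```

If you see your upstream service's headers in the preflight response instead, the plugin isn't intercepting OPTIONS and the configuration needs a second look.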
Traefik
Traefik configures CORS via middleware, either in the static config or via Docker labels:
# traefik dynamic config
http:
  middlewares:
    cors-headers:
      headers:
        accessControlAllowOriginList:
          - "https://app.example.com"
          - "https://partner.example.com"
        accessControlAllowMethods:
          - GET
          - POST
          - PUT
          - DELETE
          - OPTIONS
        accessControlAllowHeaders:
          - Content-Type
          - Authorization
        accessControlMaxAge: 86400
        accessControlAllowCredentials: true

  routers:
    my-api:
      rule: "Host(`api.example.com`)"
      middlewares:
        - cors-headers
      service: my-backend
Or with Docker labels:
services:
  my-api:
    labels:
      - "traefik.http.middlewares.cors.headers.accessControlAllowOriginList=https://app.example.com"
      - "traefik.http.middlewares.cors.headers.accessControlAllowMethods=GET,POST,PUT,DELETE,OPTIONS"
      - "traefik.http.middlewares.cors.headers.accessControlAllowHeaders=Content-Type,Authorization"
      - "traefik.http.middlewares.cors.headers.accessControlMaxAge=86400"
      - "traefik.http.routers.my-api.middlewares=cors"
Cloudflare Workers as a CORS Proxy
Cloudflare Workers sit at the edge and can add CORS headers to any origin response. This is useful when you're calling a third-party API that doesn't support CORS (many don't) and you need to access it from browser JavaScript.
export default {
  async fetch(request) {
    const url = new URL(request.url);

    // Handle preflight
    if (request.method === 'OPTIONS') {
      return new Response(null, {
        headers: {
          'Access-Control-Allow-Origin': 'https://app.example.com',
          'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
          'Access-Control-Allow-Headers': 'Content-Type, Authorization',
          'Access-Control-Max-Age': '86400',
        },
      });
    }

    // Proxy the request to the upstream API
    const apiUrl = `https://third-party-api.com${url.pathname}${url.search}`;
    const response = await fetch(apiUrl, {
      method: request.method,
      headers: request.headers,
      body: request.body,
    });

    // Clone the response and add CORS headers
    const newResponse = new Response(response.body, response);
    newResponse.headers.set('Access-Control-Allow-Origin', 'https://app.example.com');
    return newResponse;
  },
};
Deploy this worker on a route like cors-proxy.example.com, and your frontend calls it instead of the third-party API directly. The worker adds the CORS headers that the third-party API doesn't provide.
Word of caution: don't build an open CORS proxy that allows any origin to access any URL. That's a security liability. Always restrict the allowed origins and the upstream URLs.
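One way to enforce that restriction is to validate both the caller's Origin and the upstream target against fixed allowlists before forwarding anything. A sketch (the allowlist names and hosts are illustrative, not part of the Workers API):

```javascript
// Illustrative allowlists; the hosts here are placeholders for your own.
const PROXY_ALLOWED_ORIGINS = new Set(['https://app.example.com']);
const PROXY_ALLOWED_UPSTREAMS = new Set(['third-party-api.com']);

function isAllowedProxyRequest(origin, upstreamUrl) {
  // Reject callers that aren't on the origin allowlist.
  if (!PROXY_ALLOWED_ORIGINS.has(origin)) return false;
  try {
    const url = new URL(upstreamUrl);
    // Require HTTPS and an explicitly allowlisted upstream host.
    return url.protocol === 'https:' && PROXY_ALLOWED_UPSTREAMS.has(url.hostname);
  } catch {
    return false; // not a parseable absolute URL
  }
}
```

Call this at the top of the worker's fetch handler and return a 403 when it fails; everything else stays as in the proxy above.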
The BFF Pattern: Why It Sidesteps CORS Entirely
The Backend for Frontend (BFF) pattern puts a thin server layer between your frontend and your backend services. The BFF runs on the same origin as the frontend, so the browser makes same-origin requests. The BFF then calls your microservices server-to-server, where CORS doesn't apply.
Browser (app.example.com)
        │
        │ same-origin requests
        ▼
BFF (app.example.com/api/*)
        │
        │ server-to-server (no CORS)
        ├──► User Service (internal)
        ├──► Order Service (internal)
        └──► Auth Service (internal)
The BFF can:
- Aggregate data from multiple microservices into a single response
- Handle authentication and token management
- Translate between frontend-friendly and backend-specific data formats
- Keep API keys and secrets on the server
Next.js API routes, Nuxt server routes, and SvelteKit server endpoints are all BFF implementations. They solve CORS by architecture rather than configuration.
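The "aggregate" role can be sketched as a plain handler function. The service clients are injected and stubbed here so the sketch stays self-contained; in a real BFF they would be server-to-server HTTP calls to internal services:

```javascript
// Sketch of a BFF handler: one same-origin browser request fans out to
// several internal services and comes back as a single frontend-shaped payload.
async function getDashboard(userId, services) {
  // Server-to-server calls: CORS does not apply on this hop.
  const [user, orders] = await Promise.all([
    services.users.getUser(userId),
    services.orders.listOrders(userId),
  ]);
  return { name: user.name, orderCount: orders.length };
}

// Usage with in-memory stubs standing in for internal services:
const stubs = {
  users: { getUser: async () => ({ name: 'Ada' }) },
  orders: { listOrders: async () => [{ id: 1 }, { id: 2 }] },
};
getDashboard(42, stubs).then((payload) => console.log(payload));
// → { name: 'Ada', orderCount: 2 }
```

The browser made one same-origin request and got one payload; the fan-out to internal services never touched CORS.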
CORS at the Edge: CDN Considerations
If your API responses are cached by a CDN (CloudFront, Cloudflare, Fastly), CORS adds a wrinkle: the Origin header varies between clients, but CDNs cache based on URL by default. A request from app.example.com gets cached, and when partner.example.com makes the same request, the CDN serves the cached response with Access-Control-Allow-Origin: https://app.example.com. The partner's browser rejects it.
The fix: include Origin in the cache key.
CloudFront: Add Origin to the cache policy's header whitelist, or use an origin request policy that forwards the Origin header to your backend.
{
  "CachePolicyConfig": {
    "ParametersInCacheKeyAndForwardedToOrigin": {
      "HeadersConfig": {
        "HeaderBehavior": "whitelist",
        "Headers": {
          "Items": ["Origin"]
        }
      }
    }
  }
}
Cloudflare: Use a Cache Key custom rule that includes the Origin header.
Fastly: Add Origin to the Vary header in your backend response, and configure Fastly to respect Vary.
Alternatively, your origin server should set Vary: Origin on any response that includes Access-Control-Allow-Origin. This tells the CDN (and any intermediate cache) that the response varies based on the Origin header and should not be served to requests with a different Origin.
HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.com
Vary: Origin
Content-Type: application/json
The Vary: Origin header is easy to forget and painful to debug. If your CORS works fine without a CDN but breaks with one, this is almost certainly the issue.
Multiple Origins: The Allowlist Pattern
CORS has no native support for multiple origins. The Access-Control-Allow-Origin header takes exactly one value: either a single origin or *. You can't do this:
# THIS DOES NOT WORK
Access-Control-Allow-Origin: https://app.example.com, https://partner.example.com
You also can't do wildcard subdomains:
# THIS DOES NOT WORK EITHER
Access-Control-Allow-Origin: *.example.com
The standard pattern for supporting multiple origins is:
- Maintain an allowlist of permitted origins
- Read the Origin header from the incoming request
- Check if it's in the allowlist
- If yes, echo it back as the Access-Control-Allow-Origin value
- If no, either omit the header or return a 403
Here's the pattern in Node.js:
const ALLOWED_ORIGINS = new Set([
  'https://app.example.com',
  'https://partner.example.com',
  'https://staging.example.com',
]);

function getCorsOrigin(requestOrigin) {
  if (ALLOWED_ORIGINS.has(requestOrigin)) {
    return requestOrigin;
  }
  return null;
}

// In your middleware
const origin = req.headers.origin;
const allowedOrigin = getCorsOrigin(origin);

if (allowedOrigin) {
  res.setHeader('Access-Control-Allow-Origin', allowedOrigin);
  res.setHeader('Vary', 'Origin'); // Critical!
}
The Vary: Origin header is mandatory when you dynamically set Access-Control-Allow-Origin based on the request. Without it, a CDN or browser cache might serve a response with the wrong origin to a different client.
For wildcard subdomain matching:
function isAllowedOrigin(origin) {
  if (!origin) return false;

  // Exact matches
  if (ALLOWED_ORIGINS.has(origin)) return true;

  // Subdomain wildcard: *.example.com
  try {
    const url = new URL(origin);
    return url.hostname.endsWith('.example.com') && url.protocol === 'https:';
  } catch {
    return false;
  }
}
Be careful with subdomain matching. If you allow *.example.com, make sure you trust all subdomains. A compromised blog.example.com could make cross-origin requests to your API. Only use wildcard subdomain matching when you control all subdomains.
Where Should CORS Headers Be Added?
In a multi-layer architecture, there are several places you could add CORS headers:
Browser → CDN/Edge → Load Balancer → API Gateway → Application
The rule: add CORS headers in exactly one place. I don't care which layer you choose, but pick one and only one. Here's how to decide:
| Layer | When to use |
|---|---|
| CDN/Edge (Cloudflare Workers, CloudFront Functions) | When you need CORS on static assets or want to handle it before any backend logic |
| API Gateway (Kong, AWS APIGW, Traefik) | When you have multiple backend services and want centralized CORS policy |
| Application (Express middleware, Django middleware) | When CORS rules vary per route or depend on application logic |
| Reverse proxy (Nginx) | When Nginx is already your edge and you want simple, centralized config |
The Double CORS Headers Problem
This is, without exaggeration, the most common CORS misconfiguration I see in production systems. It happens when CORS headers are added at multiple layers in the stack.
Scenario: your Express app uses the cors middleware, and your Nginx reverse proxy adds CORS headers. The browser receives:
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Origin: https://app.example.com
You might think "well, they're the same value, what's the problem?" The problem is that the HTTP spec says Access-Control-Allow-Origin must appear at most once, and browsers reject responses with duplicate values. The actual error in Chrome:
Access to fetch at 'https://api.example.com/data' from origin
'https://app.example.com' has been blocked by CORS policy: The
'Access-Control-Allow-Origin' header contains multiple values
'https://app.example.com, https://app.example.com', but only one
is allowed.
You configured CORS correctly in two places, and it broke harder than if you'd configured it in zero places. Wonderful.
Diagnose it with curl:
curl -v https://api.example.com/api/users \
-H "Origin: https://app.example.com" \
2>&1 | grep -i "access-control"
If you see duplicate headers, you need to figure out which layer is adding them and remove one.
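The same check is easy to script, for example in a CI smoke test. A sketch (it assumes simple `Name: value` header lines, like those printed by curl -s -D -):

```javascript
// Sketch: given raw response-header lines, report any Access-Control-*
// header name that appears more than once (header names are case-insensitive).
function findDuplicateCorsHeaders(headerLines) {
  const counts = new Map();
  for (const line of headerLines) {
    const name = line.split(':')[0].trim().toLowerCase();
    if (name.startsWith('access-control-')) {
      counts.set(name, (counts.get(name) || 0) + 1);
    }
  }
  return [...counts].filter(([, n]) => n > 1).map(([name]) => name);
}

console.log(findDuplicateCorsHeaders([
  'Content-Type: application/json',
  'Access-Control-Allow-Origin: https://app.example.com',
  'Access-Control-Allow-Origin: https://app.example.com',
])); // → [ 'access-control-allow-origin' ]
```

An empty result means each CORS header appears at most once, which is what the browser requires.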
Quick Nginx fix: If your application already handles CORS, make sure Nginx doesn't add its own headers. If Nginx is supposed to handle CORS, strip any CORS headers the backend sends:
location /api/ {
    proxy_pass http://api;

    # Remove CORS headers from backend response
    proxy_hide_header Access-Control-Allow-Origin;
    proxy_hide_header Access-Control-Allow-Methods;
    proxy_hide_header Access-Control-Allow-Headers;
    proxy_hide_header Access-Control-Allow-Credentials;

    # Add CORS headers at this layer only
    add_header Access-Control-Allow-Origin "https://app.example.com" always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
    add_header Access-Control-Max-Age 86400 always;
}
The always parameter on add_header is important—without it, Nginx only adds the header on successful (2xx) responses, not on error responses. You need CORS headers on error responses too, or the browser will mask your actual errors behind a generic CORS failure.
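Since Nginx owns CORS in this setup, it can also answer preflights itself so OPTIONS requests never reach the backend. A sketch (note that add_header directives inside the if block are the ones that apply to the preflight response):

```nginx
location /api/ {
    # Answer preflights at the proxy; the backend never sees OPTIONS.
    if ($request_method = OPTIONS) {
        add_header Access-Control-Allow-Origin "https://app.example.com" always;
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
        add_header Access-Control-Allow-Headers "Content-Type, Authorization" always;
        add_header Access-Control-Max-Age 86400 always;
        return 204;
    }
    proxy_pass http://api;
}
```

This also saves your application servers from handling preflight traffic at all.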
Load Balancers and CORS
Load balancers (AWS ALB, Google Cloud Load Balancer, HAProxy) are generally not the right place for CORS. They're designed to distribute traffic, not to modify HTTP headers. While some load balancers support header manipulation, it's usually limited and awkward.
The exception is AWS Application Load Balancer, which can add fixed response headers via listener rules. But for dynamic CORS (where the Access-Control-Allow-Origin value depends on the request's Origin header), an ALB isn't expressive enough. Use an API gateway or application-level middleware instead.
If your load balancer sits in front of an API gateway:
Browser → Load Balancer → API Gateway (CORS here) → Backend
Let the API gateway handle CORS. The load balancer just passes traffic through.
Debugging CORS in Proxied Architectures
When CORS isn't working in a proxied setup, here's the debugging checklist:
1. Is the Origin header reaching the right layer?
# Check if your proxy forwards the Origin header
curl -v https://api.example.com/api/users \
-H "Origin: https://app.example.com" \
2>&1 | grep -i "access-control-allow-origin"
If the response has no Access-Control-Allow-Origin, either the layer handling CORS isn't receiving the Origin header, or it's not configured to respond to that origin.
2. Is the preflight reaching the right layer?
curl -v -X OPTIONS https://api.example.com/api/users \
-H "Origin: https://app.example.com" \
-H "Access-Control-Request-Method: POST" \
-H "Access-Control-Request-Headers: Content-Type, Authorization"
Some proxies and load balancers don't forward OPTIONS requests. If your API gateway handles CORS, it needs to see the OPTIONS request. Make sure nothing upstream is swallowing it.
3. Are there duplicate CORS headers?
curl -s -D - https://api.example.com/api/users \
-H "Origin: https://app.example.com" \
-o /dev/null | grep -i "access-control"
If you see any header listed twice, you have a double-header problem.
4. Does the Vary header include Origin?
curl -s -D - https://api.example.com/api/users \
-H "Origin: https://app.example.com" \
-o /dev/null | grep -i "vary"
If you're dynamically setting Access-Control-Allow-Origin based on the request and the Vary header doesn't include Origin, you'll get caching bugs.
Summary
The hierarchy of CORS strategies for proxied architectures:
- Eliminate CORS: Use a reverse proxy to serve frontend and API from the same origin. No CORS headers needed.
- Centralize CORS: If you need multiple origins, handle CORS in one layer—your API gateway or reverse proxy—not in every microservice.
- Don't double up: If your application handles CORS, strip CORS headers at the proxy layer (or vice versa). Duplicate headers break everything.
- Cache correctly: Use Vary: Origin on any response where Access-Control-Allow-Origin is dynamic. Include Origin in CDN cache keys.
- Test with curl: Browsers hide CORS details in DevTools. Use curl to see the raw headers and verify your configuration at each layer.