Deploy Next.js SSR on Azure Container Apps (Full Guide) 

In this guide, I walk through deploying Next.js SSR on Azure Container Apps — how I containerized the application with server-side rendering, integrated Auth0 authentication, and provisioned everything with Terraform IaC and GitHub Actions CI/CD. This is the frontend for DocWriter Studio, an agentic AI document generation system.


In DocWriter Studio I use Next.js with output: "standalone" compiled into a multi-stage Docker image. The container runs a Node.js SSR server on port 3000 inside Azure Container Apps, handling Auth0 session management, middleware-based route protection, and server-side token minting. I provision the Container App with Terraform — Key Vault–backed secrets, managed identity for ACR pulls, and auto-scaling from 1 to 3 replicas. GitHub Actions builds the image, pushes it to Azure Container Registry via OIDC, and triggers Terraform to deploy the new revision — all without stored passwords.


Table of Contents

  1. Why I Chose Azure Container Apps for Next.js SSR
  2. Standalone Output Mode for Next.js SSR on Azure Container Apps
  3. Multi-Stage Dockerfile for Next.js SSR
  4. Build-Time vs Runtime Env Variables in Next.js SSR on Azure Container Apps
  5. How I Protect Routes with SSR Middleware in Azure Container Apps
  6. How the Server-Side Token Flow Works
  7. Terraform Configuration for Next.js SSR on Azure Container Apps
  8. How I Inject Secrets from Azure Key Vault into Container Apps
  9. CI/CD: GitHub Actions Pipeline for Next.js SSR on Azure Container Apps
  10. Where SSR Fits in the Multi-Agent AI Architecture
  11. Common Pitfalls: Next.js SSR on Azure Container Apps Troubleshooting
  12. Frequently Asked Questions

Why I Chose Azure Container Apps for Next.js SSR

In Part 1 of this series, I introduced DocWriter Studio — a multi-agent AI system that orchestrates specialized agents (planner, writer, reviewer, verifier, rewriter, finalizer) through Azure Service Bus queues to generate 60+ page enterprise documents. The frontend gives users real-time visibility into that pipeline: live status polling, timeline rendering, token-usage metrics, and on-demand artifact downloads in Markdown, PDF, and DOCX formats.

Early on I considered a statically exported Next.js site or a client-rendered SPA, but neither could satisfy these requirements. Here is why I ruled them out:

Auth0 session management requires a Node.js server. The @auth0/nextjs-auth0 SDK encrypts session cookies with a 32+ character secret, validates OAuth callbacks, and refreshes tokens — all in server-side route handlers. The client secret, JWKS validation, and session encryption simply cannot run in a browser.

Access tokens are minted on the server before the client receives them. My /api/auth/token route calls auth0.getAccessToken() with refresh: true, then returns only the short-lived access token to the browser. The refresh token never leaves the server.

Middleware evaluates every request server-side. I guard protected routes (/workspace, /newdocument) with Next.js middleware that checks for a valid session and redirects unauthenticated users to Auth0 Universal Login — before any HTML reaches the browser.

API route handlers act as a Backend for Frontend (BFF). Routes under /api/auth/* delegate to the Auth0 SDK. These run exclusively in the Node.js process.

I chose Azure Container Apps as the host because it gives me managed HTTPS ingress with automatic TLS certificates, horizontal auto-scaling (1–3 replicas), Key Vault–backed secret injection via managed identity, and a Consumption workload profile that charges only for vCPU-seconds and memory-seconds actually consumed. For a project like DocWriter Studio, that combination was hard to beat.


Standalone Output Mode for Next.js SSR on Azure Container Apps

The first configuration decision I made for containerizing Next.js SSR on Azure Container Apps was the build output mode. I went with output: "standalone":

// ui/next.config.ts  
// https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/ui/next.config.ts  
  
import type { NextConfig } from "next";  
  
const nextConfig: NextConfig = {  
  output: "standalone",  
};  
  
export default nextConfig;  

This instructs next build to produce a self-contained directory with:

  • server.js — A Node.js HTTP server entrypoint
  • Tree-shaken node_modules — Only the packages actually imported at runtime (typically ~50 MB instead of ~500 MB)
  • .next/static — Compiled CSS, JavaScript chunks, and assets

Without standalone mode, my Docker image would have to ship the entire node_modules folder. With it, the production command is simply node server.js, and the image size drops dramatically.

Key takeaway: Always use output: "standalone" when deploying Next.js SSR in Docker containers. It is the officially recommended approach for production containerization, and in my experience the size difference alone makes it worth it.
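Before containerizing, it is worth verifying the standalone output locally. A quick sketch, assuming default Next.js paths (note that public/ and .next/static are not copied into the standalone folder automatically, which is why the Dockerfile later copies them explicitly):

```shell
# Build, then assemble the standalone bundle the same way the Dockerfile does.
npm run build
cp -r public .next/standalone/public
cp -r .next/static .next/standalone/.next/static

# Run the self-contained server; only the tree-shaken node_modules ships with it.
PORT=3000 HOSTNAME=0.0.0.0 node .next/standalone/server.js
```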


Multi-Stage Dockerfile for Next.js SSR

I separated dependency installation, Next.js compilation, and the production runtime into distinct stages. Here is the complete Dockerfile I use in DocWriter Studio:

# ui/Dockerfile  
# https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/ui/Dockerfile  
  
# syntax=docker.io/docker/dockerfile:1  
  
FROM node:20-alpine AS base  
  
# ──────────────────────────────────────────────────────────────  
# Build-time public env vars — compiled into the JS bundle  
# ──────────────────────────────────────────────────────────────  
ARG NEXT_PUBLIC_API_BASE_URL  
ARG NEXT_PUBLIC_AUTH0_AUDIENCE  
ARG NEXT_PUBLIC_AUTH0_SCOPE  
ARG NEXT_PUBLIC_PROFILE_ROUTE  
ARG NEXT_PUBLIC_ACCESS_TOKEN_ROUTE  
ARG NEXT_PUBLIC_REVIEW_STYLE_ENABLED  
ARG NEXT_PUBLIC_REVIEW_COHESION_ENABLED  
ARG NEXT_PUBLIC_REVIEW_SUMMARY_ENABLED  
  
# ── Stage 1: Install dependencies ─────────────────────────────  
FROM base AS deps  
RUN apk add --no-cache libc6-compat  
WORKDIR /app  
COPY ui/package.json ./package.json  
COPY ui/package-lock.json ./package-lock.json  
RUN npm ci  
  
# ── Stage 2: Build the Next.js application ────────────────────  
FROM base AS builder  
WORKDIR /app  
COPY --from=deps /app/node_modules ./node_modules  
COPY ui/. ./  
  
# Re-declare ARGs and export as ENV for next build  
ARG NEXT_PUBLIC_API_BASE_URL  
ARG NEXT_PUBLIC_AUTH0_AUDIENCE  
ARG NEXT_PUBLIC_AUTH0_SCOPE  
ENV NEXT_PUBLIC_API_BASE_URL=${NEXT_PUBLIC_API_BASE_URL}  
ENV NEXT_PUBLIC_AUTH0_AUDIENCE=${NEXT_PUBLIC_AUTH0_AUDIENCE}  
ENV NEXT_PUBLIC_AUTH0_SCOPE=${NEXT_PUBLIC_AUTH0_SCOPE}  
# ... remaining NEXT_PUBLIC_* variables follow the same pattern  
  
RUN npm run build  
  
# ── Stage 3: Production runner ─────────────────────────────────  
FROM base AS runner  
WORKDIR /app  
ENV NODE_ENV=production  
  
RUN addgroup --system --gid 1001 nodejs  
RUN adduser --system --uid 1001 nextjs  
  
COPY --from=builder /app/public ./public  
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./  
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static  
  
USER nextjs  
EXPOSE 3000  
ENV PORT=3000  
ENV HOSTNAME="0.0.0.0"  
CMD ["node", "server.js"]  

What Each Stage Does

| Stage | Purpose | What Stays in the Final Image |
| --- | --- | --- |
| base | Sets node:20-alpine as the foundation | Alpine Linux + Node.js runtime |
| deps | Installs node_modules from lockfile | Nothing — used only by the builder |
| builder | Copies source, injects NEXT_PUBLIC_* as ENV, runs next build | Nothing — artifacts are cherry-picked |
| runner | Copies only public/, .next/standalone/, .next/static/ | The minimal production server |

Why I Chose These Patterns

Non-root user (nextjs, UID 1001) — The Node.js process cannot write to system directories even if the container is compromised. This is a container security best practice enforced by many Azure policy initiatives, and I make it a habit in every Dockerfile I write.

HOSTNAME="0.0.0.0" — This one cost me a frustrating debugging session. Without it, Next.js standalone binds to 127.0.0.1. Azure Container Apps sends traffic through the Envoy-based ingress sidecar to the container’s exposed port from outside the loopback interface. If the server only listens on localhost, every request returns HTTP 502 Bad Gateway.

libc6-compat in the deps stage — Alpine Linux uses musl libc by default. Some npm packages with native bindings expect glibc. I add the compat package to prevent cryptic segfaults during npm ci.


Build-Time vs Runtime Env Variables in Next.js SSR on Azure Container Apps

This is the single most important concept to understand when deploying Next.js SSR on Azure Container Apps. Getting it wrong either breaks your public URLs or leaks your secrets into the browser bundle. I learned this the hard way.

| Category | When Resolved | How Injected | Can Change Without Rebuild? | Examples |
| --- | --- | --- | --- | --- |
| Build-time (NEXT_PUBLIC_*) | During next build in Docker | Docker build-args → ENV → compiled into JS chunks | ❌ No — requires image rebuild | NEXT_PUBLIC_API_BASE_URL, NEXT_PUBLIC_AUTH0_AUDIENCE |
| Runtime (server-only) | When container starts | Container App env vars / Key Vault secrets | ✅ Yes — change env and restart | AUTH0_SECRET, AUTH0_CLIENT_SECRET, AUTH0_ISSUER_BASE_URL |

Build-time variables are inlined by the Next.js compiler into the JavaScript bundle. The browser reads NEXT_PUBLIC_API_BASE_URL directly from the JS chunk — no server round-trip needed. If I change the API URL, I have to rebuild the Docker image.

Runtime variables are read from process.env exclusively on the Node.js server. AUTH0_CLIENT_SECRET is never bundled into client code. I inject these through Azure Container Apps at startup, either as plain environment variables or as Key Vault secret references resolved by managed identity.
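A minimal sketch of the rule I follow. The helper name below is mine, not from the repo: read NEXT_PUBLIC_* anywhere, but access server-only variables through a guard that fails fast if the Container App environment is misconfigured.

```typescript
// Build-time: the Next.js compiler inlines this literal into the browser
// bundle, so its value is frozen at `next build` (i.e., at docker build time).
const apiBase = process.env.NEXT_PUBLIC_API_BASE_URL;

// Runtime: resolved from the Container App environment on each server start.
// Hypothetical helper -- only ever call this from server-side code.
function requireServerEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} must be set on the Container App`);
  }
  return value;
}

// Server-only usage, e.g. inside a route handler:
// const clientSecret = requireServerEnv("AUTH0_CLIENT_SECRET");
```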


How I Protect Routes with SSR Middleware in Azure Container Apps

With my Next.js SSR server running inside a Container App, every HTTP request passes through middleware before reaching any page component. This is where server-side rendering becomes the authentication gatekeeper:

// ui/proxy.ts  
// https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/ui/proxy.ts  
  
import type { NextRequest } from "next/server";  
import { NextResponse } from "next/server";  
import { auth0 } from "./src/lib/auth0";  
  
const PROTECTED_PATHS = ["/workspace", "/newdocument"];  
const LOGIN_PATH = "/api/auth/login";  
  
const isProtected = (pathname: string) =>  
  PROTECTED_PATHS.some((base) => pathname === base || pathname.startsWith(`${base}/`));
  
export async function proxy(request: NextRequest) {  
  // Step 1: Run Auth0 middleware (hydrates session, refreshes cookies)  
  const authResponse = await auth0.middleware(request);  
    
  const pathname = request.nextUrl.pathname;  
  // Step 2: Public pages and auth callback routes pass through  
  if (!isProtected(pathname) || pathname.startsWith("/api/auth")) {  
    return authResponse;  
  }  
    
  // Step 3: Check for active session on protected paths  
  const session = await auth0.getSession(request);  
  if (session) {  
    return authResponse;  
  }  
    
  // Step 4: No session → redirect to Auth0 Universal Login  
  const loginUrl = new URL(LOGIN_PATH, request.url);  
  loginUrl.searchParams.set("returnTo", pathname + request.nextUrl.search);  
  return NextResponse.redirect(loginUrl);  
}  
  
export const config = {  
  matcher: [  
    "/api/auth/:path*",  
    "/auth/:path*",  
    "/workspace",  
    "/workspace/:path*",  
    "/newdocument",  
    "/newdocument/:path*",  
  ],  
};  

The Authentication Flow Step-by-Step

  1. User visits /workspace → Container Apps ingress forwards the HTTPS request to my Next.js server on port 3000
  2. Middleware intercepts: auth0.middleware() reads the encrypted session cookie and hydrates the Auth0 session
  3. Session check: auth0.getSession() verifies the session is valid and not expired
  4. No session? → The server issues a 302 Redirect to /api/auth/login?returnTo=/workspace
  5. Auth0 Universal Login → The user authenticates with Auth0 (email/password, social login, SSO)
  6. Callback → Auth0 redirects to /api/auth/callback, the SSR server validates the authorization code, creates the session, and redirects to /workspace
  7. Authenticated → The workspace page renders with full access to the AI document pipeline

This entire flow requires a server. A static export cannot read cookies, call auth0.getSession(), or issue server-side redirects. That is exactly why I needed Next.js SSR on Azure Container Apps.


How the Server-Side Token Flow Works

My SSR server acts as a Backend for Frontend (BFF), brokering access tokens between the browser and the Auth0 token endpoint:

Browser                 Next.js SSR (Container App :3000)       Auth0         FastAPI API (Container App :8000)  
  │                            │                                  │                     │  
  ├─ GET /api/auth/token ─────▶│                                  │                     │  
  │                            ├─ auth0.getAccessToken(refresh) ─▶│                     │  
  │                            │◀── access_token ────────────────┤│                     │  
  │◀── { accessToken } ────────┤                                  │                     │  
  │                            │                                  │                     │  
  ├─ GET /jobs (Bearer token) ──────────────────────────────────────────────────────────▶│  
  │◀── { documents: [...] } ─────────────────────────────────────────────────────────────│  

My token route handles audience and scope negotiation:

// ui/src/app/api/auth/token/route.ts  
// https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/ui/src/app/api/auth/token/route.ts#L18-L27  
  
async function obtainToken(request: NextRequest) {  
  const args = buildArgs(request);  
  const response = NextResponse.next();  
  const token = await auth0.getAccessToken(request, response, {  
    refresh: true,  
    audience: args.audience,  
    scope: args.scopes?.join(" "),  
  });  
  return { response, token };  
}  

The client-side API module I wrote caches the token and attaches it as a Bearer header on every authenticated request:

// ui/src/lib/api.ts  
// https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/ui/src/lib/api.ts#L41-L69  
  
async function request(path: string, options: RequestOptions = {}) {
  if (!API_BASE) {
    throw new Error("NEXT_PUBLIC_API_BASE_URL is not set");
  }

  const { auth, ...rest } = options;

  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    ...(rest.headers as Record<string, string> | undefined),
  };

  if (auth) {
    const token = await getAccessToken();
    headers["Authorization"] = `Bearer ${token}`;
  }

  const res = await fetch(`${API_BASE}${path}`, {
    ...rest,
    headers,
    cache: "no-store",
  });

  if (!res.ok) {
    const message = await res.text();
    throw new Error(message || `Request failed: ${res.status}`);
  }
  return res.json();
}

An important design decision: the browser calls the FastAPI backend directly (not through the SSR server). My SSR server is only involved in authentication — it is not a data proxy. This keeps the Node.js container lightweight and focused on what it does best.
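The token caching mentioned above can be sketched roughly like this (a simplified model, not the repo’s exact implementation; the 30-second refresh margin and type shape are my assumptions):

```typescript
// Cached token plus its absolute expiry timestamp (assumed shape).
type CachedToken = { accessToken: string; expiresAt: number };

let cached: CachedToken | null = null;

// Returns a valid access token, refetching only when the cached one is
// missing or about to expire. `fetchToken` stands in for GET /api/auth/token.
async function getAccessToken(
  fetchToken: () => Promise<CachedToken>
): Promise<string> {
  const now = Date.now();
  // Reuse the cached token until 30s before expiry, as a safety margin.
  if (cached && cached.expiresAt - 30_000 > now) {
    return cached.accessToken;
  }
  cached = await fetchToken();
  return cached.accessToken;
}
```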


Terraform Configuration for Next.js SSR on Azure Container Apps

All DocWriter Studio containers share one Azure Container Apps environment. My Terraform module provisions three Container App resources: azurerm_container_app.api, azurerm_container_app.ui, and azurerm_container_app.functions.

Container Apps Environment (Consumption Plan)

# infra/terraform/modules/app/main.tf  
# https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/infra/terraform/modules/app/main.tf#L1-L14  
  
resource "azurerm_container_app_environment" "main" {  
  name                       = "${var.name_prefix}-cae"  
  location                   = var.location  
  resource_group_name        = var.resource_group_name  
  log_analytics_workspace_id = var.log_analytics_id  
  tags                       = var.tags  
  
  workload_profile {  
    name                  = "Consumption"  
    workload_profile_type = "Consumption"  
    maximum_count         = 0  
    minimum_count         = 0  
  }  
}  

The Consumption profile means I pay only for the vCPU-seconds and memory-seconds consumed. All container apps within the environment share an internal DNS namespace, so the UI can reach the API using its internal FQDN.

The UI Container App Resource

# infra/terraform/modules/app/main.tf  
# https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/infra/terraform/modules/app/main.tf#L96-L172  
  
resource "azurerm_container_app" "ui" {  
  for_each = var.ui_images  
  
  name                         = "${var.name_prefix}-${each.key}"  
  resource_group_name          = var.resource_group_name  
  container_app_environment_id = azurerm_container_app_environment.main.id  
  revision_mode                = "Single"  
  tags                         = var.tags  
  
  identity {  
    type         = "SystemAssigned, UserAssigned"  
    identity_ids = [var.managed_identity_id]  
  }  
  
  registry {  
    server   = var.container_registry_login  
    identity = var.managed_identity_id  
  }  
  
  template {  
    min_replicas = try(each.value.min_replicas, 1)  
    max_replicas = try(each.value.max_replicas, 1)  
  
    container {  
      name   = each.key  
      image  = each.value.image  
      cpu    = 0.25  
      memory = "0.5Gi"  
  
      dynamic "env" {  
        for_each = var.ui_env  
        content {  
          name  = env.key  
          value = env.value  
        }  
      }  
  
      dynamic "env" {  
        for_each = var.ui_secrets  
        content {  
          name        = env.value.env_name  
          secret_name = env.value.name  
        }  
      }  
    }  
  }  
  
  ingress {  
    allow_insecure_connections = false  
    external_enabled           = true  
    target_port                = var.ui_ports[each.key]  
  
    traffic_weight {  
      percentage      = 100  
      latest_revision = true  
    }  
  }  
  
  dynamic "secret" {  
    for_each = var.ui_secrets  
    content {  
      name                = secret.value.name  
      key_vault_secret_id = secret.value.key_vault_secret_id  
      identity            = secret.value.identity  
    }  
  }  
}  

Why I Made These Design Decisions

| Configuration | Value | Why |
| --- | --- | --- |
| revision_mode | "Single" | Only one SSR server version active at a time — prevents session/bundle hash conflicts between revisions |
| identity | SystemAssigned, UserAssigned | User-assigned identity for ACR pull and Key Vault access; system identity for future per-app RBAC |
| registry.identity | Managed identity | Passwordless image pulls from Azure Container Registry — no admin credentials stored |
| min_replicas | 1 | SSR server must always be warm — Auth0 callbacks fail if no server is running |
| max_replicas | 3 | Horizontal auto-scaling under load |
| cpu / memory | 0.25 vCPU / 0.5 Gi | Lightweight Node.js server — AI workloads run in separate Function containers |
| target_port | 3000 | Matches the PORT env var in my Dockerfile |
| allow_insecure_connections | false | HTTPS-only — Container Apps provides managed TLS certificates automatically |
| external_enabled | true | Publicly reachable — this is the user-facing frontend |

Root Module Invocation

# infra/terraform/main.tf  
# https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/infra/terraform/main.tf#L200-L214  
  
  ui_images = {  
    ui = {  
      image        = "${module.container_registry.url}/docwriter-ui:${var.docker_image_version}"  
      min_replicas = 1  
      max_replicas = 3  
    }  
  }  
  
  ui_ports = {  
    ui = 3000  
  }  

How I Inject Secrets from Azure Key Vault into Container Apps

Auth0 secrets should never be stored as plain environment variables. I use Azure Key Vault secret references, resolved at runtime by the container app’s managed identity:

# infra/terraform/modules/app/variables.tf  
# https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/infra/terraform/modules/app/variables.tf#L65-L74  
  
variable "ui_secrets" {  
  description = "Secrets for the UI container app"  
  type = list(object({  
    name                = string  
    env_name            = string  
    key_vault_secret_id = string  
    identity            = string  
  }))  
  default = []  
}  

Here is how the flow works in my setup:

  1. Terraform creates a Key Vault secret (e.g., auth0-client-secret) with the sensitive value
  2. The Container App’s secret block references the Key Vault secret URI and specifies which managed identity resolves it
  3. The env block maps the secret to an environment variable name (AUTH0_CLIENT_SECRET)
  4. At container startup, the platform resolves the Key Vault reference and injects the value into process.env
  5. The Auth0 SDK reads process.env.AUTH0_CLIENT_SECRET in server-side route handlers

The secret value never appears in Terraform state files (when using Key Vault references), container logs, or the Azure portal in plain text. This approach gives me confidence that sensitive credentials stay where they belong.
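For concreteness, a single ui_secrets entry wiring the auth0-client-secret Key Vault secret to the AUTH0_CLIENT_SECRET environment variable might look like this (the resource and variable names are illustrative, not copied from the repo):

```hcl
ui_secrets = [
  {
    name                = "auth0-client-secret"                    # Container App secret name
    env_name            = "AUTH0_CLIENT_SECRET"                    # what process.env sees
    key_vault_secret_id = azurerm_key_vault_secret.auth0_client.id # Key Vault reference
    identity            = var.managed_identity_id                  # identity that resolves it
  }
]
```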


CI/CD: GitHub Actions Pipeline for Next.js SSR on Azure Container Apps

My build pipeline follows a four-job sequence: Functions → API → UI → Terraform.

# .github/workflows/docker-build.yml  
# https://github.com/azure-way/aidocwriter/blob/69a49588eea5050bd2f031cf462fcd7874bc65c1/.github/workflows/docker-build.yml#L118-L167  
  
  ui:  
    runs-on: ubuntu-latest  
    needs: build  
    environment:  
      name: demo  
    steps:  
      - name: Checkout  
        uses: actions/checkout@v4  
        
      - name: Azure login (OIDC)  
        uses: azure/login@v2  
        with:  
          client-id: ${{ secrets.SPN_CLIENT_ID }}  
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}  
          subscription-id: ${{ secrets.subscription_id }}  
        
      - name: Authenticate to ACR  
        run: az acr login --name ${{ secrets.ACR_NAME }}  
        
      - name: Build and push UI image  
        uses: docker/build-push-action@v5  
        with:  
          context: .  
          file: ui/Dockerfile  
          push: true  
          build-args: |  
            NEXT_PUBLIC_API_BASE_URL=https://aidocwriter-api.gentlecliff-6769fc4f.westeurope.azurecontainerapps.io  
            NEXT_PUBLIC_AUTH0_AUDIENCE=https://docwriter-api.azureway.cloud  
            NEXT_PUBLIC_AUTH0_SCOPE=openid profile email api offline_access  
            NEXT_PUBLIC_PROFILE_ROUTE=/auth/profile  
            NEXT_PUBLIC_ACCESS_TOKEN_ROUTE=/api/auth/access-token  
          tags: |  
            ${{ secrets.ACR_LOGIN_SERVER }}/docwriter-ui:latest  
            ${{ secrets.ACR_LOGIN_SERVER }}/docwriter-ui:${{ needs.build.outputs.version }}  
  
  terraform:  
    needs: [build, api, ui]  
    uses: ./.github/workflows/terraform.yml  
    with:  
      docker_image_version: ${{ needs.build.outputs.version }}  
    secrets: inherit  

Pipeline Flow

| Step | Job | What Happens |
| --- | --- | --- |
| 1 | build | Builds 12 Function images + PlantUML server in parallel via matrix strategy; derives version from git describe |
| 2 | api | Builds the FastAPI Docker image |
| 3 | ui | Builds the Next.js SSR image with NEXT_PUBLIC_* build-args; pushes to ACR with :latest and versioned tags |
| 4 | terraform | Runs terraform apply with the new image version, creating new Container App revisions for all services |

I use OIDC workload identity federation (azure/login@v2) to authenticate to Azure without stored passwords. The workflow’s id-token: write permission allows the Actions runner to request a federated token that Azure AD trusts. I consider this essential — I never want long-lived credentials sitting in GitHub secrets.
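The only workflow-level wiring the federation needs is the permissions block; a minimal sketch (placement at workflow or job level is up to you):

```yaml
# Minimal permissions for azure/login@v2 OIDC federation (sketch)
permissions:
  id-token: write   # lets the runner request a federated token from GitHub
  contents: read    # needed by actions/checkout
```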


Where SSR Fits in the Multi-Agent AI Architecture

As I described in Part 1, DocWriter Studio runs a multi-stage agentic pipeline: INTAKE → PLAN → WRITE → REVIEW → VERIFY → REWRITE → DIAGRAM_PREP → DIAGRAM_RENDER → FINALIZE.

My SSR container does not run AI workloads. It is the user’s control plane:

| SSR Container (Next.js :3000) | Function Containers (Azure Functions) |
| --- | --- |
| Auth0 session management and token minting | Planner agent (LLM-powered) |
| Middleware-based route protection | Writer agent |
| Server-side profile/session hydration | Review ensemble (general, style, cohesion, summary) |
| API route handlers (/api/auth/*) | Verifier and Rewriter agents |
| Serving pre-rendered HTML for public marketing pages | Diagram preparation and PlantUML rendering |
|  | Finalizer (Markdown → PDF/DOCX export) |

Both the SSR container and the Function workers run in the same Container Apps environment, but they scale independently. The UI scales on HTTP request volume (1–3 replicas). The Functions scale on Azure Service Bus queue depth. I find this separation clean and easy to reason about.


Common Pitfalls: Next.js SSR on Azure Container Apps Troubleshooting

1. HTTP 502 Bad Gateway After Deployment

Cause: HOSTNAME not set to 0.0.0.0. Next.js standalone defaults to 127.0.0.1, but Container Apps sends traffic from outside the loopback interface.
Fix: Add ENV HOSTNAME="0.0.0.0" in the Dockerfile runner stage. This was the first issue I hit, and it took me longer than I would like to admit to figure out.

2. Auth0 Callback URL Mismatch Error

Cause: The Container Apps FQDN or custom domain is not registered in Auth0’s Allowed Callback URLs.
Fix: Add https://<your-domain>/api/auth/callback to the Auth0 application settings.

3. Build-Time Variable Shows undefined in the Browser

Cause: NEXT_PUBLIC_* variable was not passed as a Docker build-arg during image build.
Fix: Ensure all NEXT_PUBLIC_* values are declared as ARG, exported as ENV in the builder stage, and passed via build-args in the CI/CD pipeline. I now double-check this list every time I add a new public variable.

4. Scale-to-Zero Breaks Authentication

Cause: min_replicas = 0 on the UI container means no server is running when Auth0 sends the callback redirect.
Fix: Set min_replicas = 1 for any Container App that handles OAuth callbacks. I learned this one quickly — the redirect just hangs while the container cold-starts, and Auth0 times out.

5. Secret Rotation Doesn’t Take Effect

Cause: Container Apps caches Key Vault secret values until a new revision is created.
Fix: After rotating a secret in Key Vault, trigger a terraform apply or manually create a new Container App revision.

6. CORS Errors on API Calls

Cause: CORS headers are needed on the API container, not the UI. The UI is the origin — it does not serve cross-origin resources.
Fix: Configure the CORS policy on azurerm_container_app.api with the UI’s domain as an allowed origin.


Frequently Asked Questions

Can I use Azure Static Web Apps instead of Container Apps for Next.js SSR?

Azure Static Web Apps supports hybrid Next.js deployments (SSG + API routes). However, it imposes limits on execution time and bundle size, and it does not support long-running Node.js processes. If your SSR server needs to manage Auth0 sessions, run middleware on every request, and serve as a BFF — as mine does — Container Apps provides the full Node.js runtime you need.

How much does the Next.js SSR container cost on Azure Container Apps?

On the Consumption plan with 0.25 vCPU and 0.5 Gi memory, a single always-on replica costs me approximately $10–15/month. Scaling to 3 replicas under load increases cost proportionally, but only for the duration of the scale-out. For a project like this, I find the cost very reasonable.

Can I use ISR (Incremental Static Regeneration) with this setup?

Yes. The standalone output mode supports ISR. Pages can be statically generated at build time and revalidated on demand. The Node.js server handles revalidation requests server-side. I combine ISR for public marketing pages with full SSR for authenticated workspace routes.

How do I add a custom domain to the UI Container App?

Use azurerm_container_app_custom_domain in Terraform or the Azure CLI. You will need to configure a CNAME record and validate domain ownership. After adding the domain, update Auth0’s Allowed Callback URLs, Logout URLs, and Web Origins.

What Node.js version should I use?

I recommend Node.js 20 LTS (Alpine variant) for production Next.js deployments. The node:20-alpine base image provides the smallest footprint with long-term support, and it is what I use across all my projects.


Summary

Running Next.js SSR on Azure Container Apps is the right deployment model when your frontend manages authentication sessions, mints access tokens, and guards routes with server-side middleware. For DocWriter Studio’s agentic AI pipeline, this architecture gives me:

  • A persistent Node.js runtime for Auth0 session encryption, token refresh, and OAuth callbacks
  • Horizontal auto-scaling (1–3 replicas) behind managed HTTPS ingress with automatic TLS
  • Secret isolation via Key Vault references resolved by managed identity — no stored passwords
  • Clean separation from AI workload containers that scale independently on queue depth
  • Fully automated CI/CD through GitHub Actions with OIDC → ACR push → Terraform apply → new revision
  • Minimal container footprint via standalone output and multi-stage Docker builds

The full source code — Dockerfile, Terraform modules, middleware, and Auth0 integration — is available at azure-way/aidocwriter.
