DevBolt
Processed in your browser. Your data never leaves your device.

How do I validate a Dockerfile online?

Paste your Dockerfile and click Validate to check for syntax errors, security issues (running as root, latest tags), best practices (multi-stage builds, layer optimization), and deprecated instructions. Each issue includes a severity, line number, and fix suggestion. Everything runs in your browser.

Validate Dockerfile
Input
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Output
✓ Valid Dockerfile
Base image: node:20-alpine
Stages: 1
Instructions: 7
Best practices: all passed

Dockerfile Validator

Validate your Dockerfile for syntax errors, security issues, and best practices. Checks instructions, image tags, layer optimization, and more.

About Dockerfile Validation

A Dockerfile is a text document containing instructions to assemble a Docker container image. Each instruction creates a layer in the image.

What we check:

  • Valid Dockerfile instructions (FROM, RUN, COPY, etc.)
  • FROM requirements — must be first instruction, image tag pinning
  • Multi-stage build validation — stage names and COPY --from references
  • Security — running as root, piping scripts from URLs
  • Best practices — ADD vs COPY, exec form vs shell form, apt-get cleanup
  • Layer optimization — combining apt-get update and install, cleaning caches
  • Deprecated instructions like MAINTAINER
  • EXPOSE port validation and WORKDIR absolute paths
  • HEALTHCHECK and USER instruction presence
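As an illustration of the checks above, here is a hypothetical Dockerfile (not output from the tool) that trips several of them at once:

```dockerfile
FROM ubuntu:latest                  # unpinned "latest" tag
MAINTAINER dev@example.com          # deprecated: use LABEL maintainer=... instead
RUN apt-get update                  # update and install split into separate layers
RUN apt-get install -y curl         # no apt cache cleanup afterwards
ADD src/ /app                       # ADD where COPY is preferred
WORKDIR app                         # relative WORKDIR: should be absolute (/app)
RUN curl https://example.com/install.sh | sh   # piping a remote script into a shell
EXPOSE 99999                        # invalid port (above 65535)
CMD node server.js                  # shell form instead of exec form
# no USER instruction, no HEALTHCHECK
```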

Everything runs in your browser — no data is sent over the network.

Tips & Best Practices

Pro Tip

Order layers from least to most frequently changing

Docker caches layers sequentially, and a change to one layer invalidates all subsequent layers. Put OS packages and dependencies (which rarely change) before your application code (which changes every commit). With the pattern COPY package.json → RUN npm install → COPY . ., the npm install layer stays cached unless package.json changes.
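A minimal sketch of that ordering for a Node.js service (file names assumed):

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Dependency manifests change rarely: copy them first so the
# install layer is reused from cache on most builds.
COPY package*.json ./
RUN npm ci --production
# Application source changes every commit: copy it last so only
# the layers from this point onward are rebuilt.
COPY . .
CMD ["node", "server.js"]
```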

Common Pitfall

Running as root inside containers is a critical security risk

By default, containers built from a Dockerfile run as root. If an attacker exploits your app, they have root access inside the container (and can potentially escape to the host). Always add USER nonroot after installing dependencies. Many base images (gcr.io/distroless) run as non-root by default.
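A sketch of dropping privileges on an Alpine base (user and group names are examples; addgroup/adduser here are the BusyBox variants shipped with Alpine):

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .
# Installs above need root; create an unprivileged user
# and switch to it before the app starts.
RUN addgroup -S app && adduser -S -G app app
USER app
CMD ["node", "server.js"]
```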

Real-World Example

Use multi-stage builds to reduce image size by 90%

A Node.js app with devDependencies can produce a 1.5 GB image. With a multi-stage build, stage 1 installs and builds (FROM node:20 AS builder) and stage 2 copies only the output (FROM node:20-slim, COPY --from=builder /app/dist). The final image is ~150 MB, with no build tools, source code, or devDependencies.
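A sketch of that two-stage layout, assuming an npm "build" script that emits to dist/:

```dockerfile
# Stage 1: full toolchain to install dependencies and build.
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: slim runtime image with only the built output
# and production dependencies.
FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package*.json ./
RUN npm ci --production
CMD ["node", "dist/server.js"]
```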

Security Note

Scan your Dockerfile for known vulnerabilities

Use hadolint for Dockerfile best practices and docker scout or trivy for vulnerability scanning of base images. Alpine-based images typically have fewer CVEs than Ubuntu/Debian bases. Pin base images with a digest (@sha256:...) to prevent supply chain attacks via tag hijacking.
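A sketch of digest pinning (the digest shown is a placeholder, not a real one):

```dockerfile
# Resolve the current digest for a tag with:
#   docker buildx imagetools inspect node:20-alpine
# then pin it so the tag cannot be silently repointed:
FROM node:20-alpine@sha256:0000000000000000000000000000000000000000000000000000000000000000
WORKDIR /app
```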

Frequently Asked Questions

How do I validate a Dockerfile for best practices?
Paste your Dockerfile into DevBolt's validator and it checks for syntax errors, security issues, and best practice violations. The tool flags problems like running as root without a USER instruction, using the latest tag instead of pinned versions, missing health checks, inefficient layer ordering that breaks Docker cache, and missing .dockerignore recommendations. Each issue includes an explanation and suggested fix. The validator follows Docker's official best practices and Hadolint-style rules. All validation runs client-side in your browser.
What are common Dockerfile security issues?
Critical Dockerfile security issues include running containers as root (always add a USER instruction with a non-root user), using unverified base images (prefer official images), not pinning base image versions (use specific tags like node:20-alpine instead of node:latest), including secrets in build layers (use multi-stage builds or BuildKit secrets), and installing unnecessary packages that increase the attack surface. Each layer in a Docker image is immutable, so secrets added in early layers persist even if removed in later layers.
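One way to keep a secret out of image layers is a BuildKit secret mount; the secret id and file path below are examples, and how the token is consumed (e.g. via an .npmrc) depends on your setup:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
# The secret is mounted at /run/secrets/npm_token only for the
# duration of this RUN step and is never written to an image layer.
RUN --mount=type=secret,id=npm_token \
    npm ci --production
```

Build with: docker build --secret id=npm_token,src=./npm_token.txt . (requires BuildKit).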
How do I optimize Dockerfile layer caching?
Docker builds images layer by layer, caching each layer and reusing it if the input has not changed. Order your Dockerfile instructions from least to most frequently changing. Copy dependency manifests (package.json, requirements.txt) and install dependencies before copying source code. This way, dependency installation is cached unless the manifest changes. A common Node.js pattern is: COPY package*.json ./ then RUN npm ci, then COPY . ./ as separate steps. Use .dockerignore to exclude node_modules, .git, and build artifacts from the build context.
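A typical .dockerignore for a Node.js project (entries are examples):

```
node_modules
.git
dist
*.log
.env
```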

Related Inspect Tools