
Kubernetes Too Complex? A How-To Guide for Deploying Microservices with HashiCorp Nomad


Kubernetes is the undisputed king of container orchestration. Its power and flexibility have made it the de facto standard for managing complex, cloud-native applications. But with great power comes great complexity. For many teams, especially those with smaller workloads or a desire for operational simplicity, the steep learning curve and high operational overhead of Kubernetes can be a significant barrier.

If you’ve ever felt that Kubernetes might be overkill for your needs, you’re not alone. The good news is, there’s a powerful, lightweight, and refreshingly simple alternative: HashiCorp Nomad.

In this guide, we’ll cut through the noise and provide a practical, hands-on introduction to Nomad. You’ll learn why it’s a compelling choice, understand its core concepts, and walk step-by-step through deploying your first microservice. Let’s dive in.


Why Consider Nomad Over Kubernetes?

Nomad is designed with a “just works” philosophy, focusing on developer and operator productivity. It provides robust orchestration capabilities without the sprawling ecosystem and conceptual weight of Kubernetes.

Simplicity and a Single Binary

The most striking difference is Nomad’s architecture. It is a single, lightweight binary that you run as a server or a client. There are no separate API servers, schedulers, or controllers to manage. This drastically simplifies installation, maintenance, and upgrades.

  • Kubernetes: A typical cluster involves etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and a container runtime, each a potential point of failure and a component to learn and manage.
  • Nomad: You have a single Go binary, nomad. You run it in -server mode on your control plane nodes and -client mode on your worker nodes. That’s it.
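In practice, a small production cluster is the same binary started with different flags on each node. A minimal sketch (the addresses are placeholders, and real deployments usually move these settings into config files):

# On each of the three control plane nodes
nomad agent -server -bootstrap-expect=3 -data-dir=/opt/nomad

# On each worker node, pointed at a server's RPC address (port 4647)
nomad agent -client -data-dir=/opt/nomad -servers=10.0.0.10:4647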

Flexibility Beyond Containers

While Kubernetes is laser-focused on containers, Nomad is a general-purpose workload orchestrator. It uses a driver-based architecture to support a wide variety of tasks, including:

  • Containers: Docker, Podman, and other OCI-compliant runtimes.
  • Standalone Applications: Native binaries, scripts (exec driver).
  • Virtual Machines: Full VMs using the qemu driver.
  • Java Applications: JAR files run directly on the host JVM via the java driver, without containerization.

This flexibility allows you to manage both modern containerized services and legacy applications on the same platform, providing a unified operational model.
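To make that concrete, here is a minimal sketch of a task stanza using the exec driver to run a native binary directly on the host (the binary path and arguments are hypothetical):

task "legacy-billing" {
  # The 'exec' driver runs an executable on the host, using OS-level
  # isolation (cgroups/chroot on Linux) instead of a container image.
  driver = "exec"

  config {
    command = "/opt/billing/bin/billingd"                   # hypothetical binary
    args    = ["-config", "/opt/billing/etc/billingd.conf"]
  }
}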

Native Integration with the HashiCorp Ecosystem

Nomad is a first-class citizen in the HashiCorp ecosystem. It integrates seamlessly with:

  • Consul: For automatic service discovery, health checking, and service mesh capabilities.
  • Vault: For securely managing and injecting secrets (API keys, passwords, certificates) into your applications at runtime.
  • Terraform: For provisioning the underlying infrastructure and even deploying Nomad jobs declaratively.

This tight integration provides a complete, out-of-the-box solution for scheduling, networking, and security.
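As an example of the Vault integration: once the cluster is connected to Vault, a task can request a token and render secrets into its environment with a template stanza. A sketch, assuming a hypothetical Vault policy and secret path:

task "server" {
  # Ask Nomad to obtain a Vault token scoped to this policy for the task.
  vault {
    policies = ["echo-service"] # hypothetical policy name
  }

  # Render the secret to a file and export its contents as environment variables.
  template {
    data        = <<EOF
{{ with secret "secret/data/echo" }}
API_KEY={{ .Data.data.api_key }}
{{ end }}
EOF
    destination = "secrets/app.env"
    env         = true
  }
}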

Core Concepts of Nomad

Nomad’s data model is simple and intuitive. To get started, you only need to understand a few key terms.

Jobs, Groups, and Tasks

  • Task: The smallest unit of work in Nomad. It’s a single process, like a running Docker container, a Java application, or an exec command.
  • Group: A collection of related Tasks that you want to co-locate on the same client machine. For example, a web server and its log-shipping sidecar. All tasks in a group run on the same Nomad client.
  • Job: The top-level unit of work. A job is a specification that declares the desired state of one or more groups. It defines what you want to run, how many instances, and on which clients. Jobs are idempotent; submitting the same job file multiple times will only result in changes if the cluster state has drifted from the specification.
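You can see this idempotency in action with nomad job plan, which diffs a job file against the cluster's current state without changing anything (echo.nomad is the job file we write in Step 2):

# Dry-run: show what submitting the file would change
nomad job plan echo.nomad

# If the cluster already matches the spec, the plan reports no changes;
# otherwise it prints a diff and a check-index to use with 'nomad job run'.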

Clients and Servers

  • Servers: A small cluster of nodes (typically 3 or 5 for high availability) that form the control plane. They manage the cluster state, handle scheduling decisions, and respond to API requests.
  • Clients: The worker nodes in the cluster. Clients register with the servers and are responsible for running the tasks assigned to them by the scheduler.
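Two CLI commands let you inspect this topology on a running cluster:

# List the server (control plane) members
nomad server members

# List the client (worker) nodes registered with the servers
nomad node status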

A Practical How-To: Deploying a Microservice with Nomad

Theory is great, but let’s get our hands dirty. We’ll deploy a simple web application using Nomad’s local development agent.

Step 1: Setting Up a Local Nomad Dev Environment

First, install the Nomad binary for your operating system.

With the binary in your PATH, you can start a complete, single-node Nomad cluster (running both a server and a client) with one command. This is perfect for local development and testing.

# Start a Nomad agent in development mode
nomad agent -dev

Once it’s running, you can access the Nomad UI in your browser at http://localhost:4646.
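Before moving on, you can confirm the agent is responding via its HTTP API (the dev agent acts as both server and client):

# Query the agent's cluster members over the HTTP API
curl http://localhost:4646/v1/agent/members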

Step 2: Defining a Nomad Job with HCL

Nomad jobs are defined using the human-readable HashiCorp Configuration Language (HCL). Create a file named echo.nomad and add the following content. This job will deploy a simple web server that responds with a fixed greeting.

# The 'job' stanza is the top-level block for a job specification.
# We give it a unique name, "echo-service".
job "echo-service" {
  # The 'datacenters' list specifies which datacenters the job can run in.
  # "dc1" is the default for the dev agent.
  datacenters = ["dc1"]

  # The 'group' stanza defines a set of tasks to be run together.
  group "web" {
    # 'count' specifies how many instances of this group to run.
    count = 2

    # The 'network' stanza configures networking for the tasks in this group.
    network {
      # We define a port named 'http' and map it to a dynamic port on the host.
      # Nomad will handle port allocation to avoid conflicts.
      port "http" {}
    }

    # The 'task' stanza defines the actual workload to run.
    task "server" {
      # 'driver' specifies which driver to use. Here, we use 'docker'.
      driver = "docker"

      # 'config' provides driver-specific configuration.
      config {
        image = "hashicorp/http-echo:latest"
        # Arguments to pass to the container's entrypoint.
        args = [
          "-listen", ":${NOMAD_PORT_http}", # Listen on the port allocated by Nomad
          "-text", "Hello from Nomad!",
        ]
      }

      # The 'service' stanza registers the task for service discovery.
      # provider = "nomad" uses Nomad's built-in service discovery
      # (Nomad 1.3+; native health checks need 1.4+), so this demo runs
      # without a Consul agent. Omit the line to register in Consul instead.
      service {
        name     = "echo-web-server"
        port     = "http"
        provider = "nomad"

        # Add tags for filtering and a health check.
        tags = ["web", "microservice"]
        check {
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }

      # 'resources' defines the CPU and memory required by the task.
      resources {
        cpu    = 100 # MHz
        memory = 64  # MB
      }
    }
  }
}
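Before submitting the job, it is worth checking the file for mistakes; nomad job validate catches syntax and schema errors locally:

# Validate the job file without submitting it
nomad job validate echo.nomad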

Step 3: Running and Inspecting the Job

Now, let’s submit this job to our running Nomad agent.

# Run the job
nomad job run echo.nomad

# You should see output like this:
# ==> Monitoring evaluation "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
#     Evaluation triggered by job "echo-service"
#     Allocation "..." created: node "...", group "web"
#     Evaluation status changed: "pending" -> "complete"
# ==> Evaluation "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" finished with status "complete"

You can check the status of your job from the CLI:

# Check the job status
nomad job status echo-service

# Get the ID of one of the allocations
# Then inspect the allocation status to find the allocated port
nomad alloc status <your-alloc-id>

In the Allocation Status output, look for the Allocation Addresses table to find the host port mapped to your container (e.g., http -> 127.0.0.1:24813). Open your browser and navigate to that address. You should see the text “Hello from Nomad!”. You can also see your new job, group, and tasks in the Nomad UI.
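You can also verify from the terminal; the port below is illustrative, so substitute the one from your own allocation:

# Substitute the host port reported for your allocation
curl http://127.0.0.1:24813
# Hello from Nomad!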

Step 4: Updating and Scaling the Service

Orchestration shines when it’s time to manage change. Let’s scale our service from 2 instances to 4. Simply update the count in your echo.nomad file:

# in group "web"
count = 4

Now, run the job again.

nomad job run echo.nomad

Nomad will create two additional allocations to meet the desired count of 4. You can observe this process in the Nomad UI or via nomad job status echo-service. The same declarative workflow applies to changing the container image, in which case Nomad performs a rolling update for a zero-downtime deployment.
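If you want finer control over how changes roll out, Nomad’s update stanza can be added at the group or job level. A sketch with commonly used settings:

# Inside group "web" (or at the job level)
update {
  max_parallel     = 1      # Replace one allocation at a time
  min_healthy_time = "30s"  # An allocation must stay healthy this long
  healthy_deadline = "5m"   # ...and become healthy within this window
  auto_revert      = true   # Roll back automatically if a deploy fails
}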

Conclusion

We’ve only scratched the surface, but the contrast with Kubernetes should be clear. With a single binary and a simple HCL file, we deployed, scaled, and managed a containerized microservice. We didn’t need to configure a YAML manifest spanning multiple API objects, set up an ingress controller, or learn a complex networking model.

Nomad offers a pragmatic path to modern orchestration. It delivers 80% of the value for 20% of the complexity, making it an ideal choice for:

  • Teams looking for operational simplicity.
  • Organizations managing a mix of containerized and non-containerized workloads.
  • Edge computing scenarios where a lightweight footprint is critical.

If the complexity of Kubernetes has been holding you back, it’s time to give HashiCorp Nomad a serious look.

What’s next?

  • Explore Nomad’s official tutorials to learn about production deployments, integrating with Consul and Vault, and running different task drivers.
  • Try provisioning your Nomad cluster infrastructure using Terraform.

Have you used Nomad? Share your experiences or questions in the comments below.