Deploying Dagster on Helm#

Kubernetes is a container orchestration system for automating deployment, scaling, and management of containerized applications. Dagster uses Kubernetes in combination with Helm, a package manager for Kubernetes applications. Using Helm, users specify the configuration of required Kubernetes resources to deploy Dagster through a values file or command-line overrides. References to values.yaml in the following sections refer to Dagster's values.yaml.

Dagster publishes a fully-featured Helm chart to manage installing and running a production-grade Kubernetes deployment of Dagster. For each Dagster component in the chart, Dagster publishes a corresponding Docker image on DockerHub.


Prerequisites#

To complete the steps in this guide, you'll need to:

  • Have kubectl installed and configured with your desired Kubernetes cluster
  • Understand the basics of Helm
  • Have Helm 3 installed
  • Have Docker installed
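
To quickly confirm these prerequisites are in place, you can check each tool from your terminal:

# Confirm the CLI tooling is installed
kubectl version --client
helm version
docker --version
# Confirm kubectl can reach your cluster
kubectl cluster-info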

Versioning#

The Dagster Helm chart is versioned with the same version numbers as the Dagster Python library. Ideally, the Helm chart and Dagster Python library should only be used together when their version numbers match.

In the following tutorial, we install the most recent version of the Dagster Helm chart. To use an older version of the chart, pass a --version flag to helm upgrade.
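
For example, to pin the chart to a specific release, pass the flag to the install command shown later in this guide. The version number below is a placeholder; substitute the Dagster version you're running:

# Placeholder version; match it to your Dagster library version
helm upgrade --install dagster dagster/dagster --version 1.7.0 -f /path/to/values.yaml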


Deployment architecture#

Default Dagster-Kubernetes deployment architecture

| Component            | Type                        | Image                                                        |
| -------------------- | --------------------------- | ------------------------------------------------------------ |
| Code location server | Deployment behind a service | User-provided or dagster/user-code-example (released weekly) |
| Dagster webserver    | Deployment behind a service | dagster/dagster-k8s (released weekly)                        |
| Daemon               | Deployment                  | dagster/dagster-k8s (released weekly)                        |
| Run worker           | Job                         | User-provided or dagster/user-code-example (released weekly) |
| Database             | PostgreSQL                  | postgres (optional)                                          |

Walkthrough#

Step 1: Configure kubectl#

First, configure the kubectl CLI to point at a Kubernetes cluster. You can use docker-desktop to set up a local k8s cluster to develop against, or substitute another k8s cluster as desired.

If you're using docker-desktop and you have a local cluster set up, configure the kubectl CLI to point to the local k8s cluster:

kubectl config set-context dagster --namespace default --cluster docker-desktop --user=docker-desktop
kubectl config use-context dagster
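
To confirm kubectl now points at the cluster you expect, check the active context and verify the nodes are reachable:

kubectl config current-context
kubectl get nodes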

Step 2: Build a Docker image for user code#

Skip this step if using Dagster's example user code image (dagster/user-code-example).

In this step, you'll build a Docker image containing your Dagster definitions and any dependencies needed to execute the business logic in your code. For reference, here is an example Dockerfile and the corresponding user code directory.

This example installs all of the Dagster-related dependencies in the Dockerfile and then copies the directory containing the Dagster repository implementation into the root folder. Remember the path of this directory; you'll need it in a later step to set up the gRPC server as a deployment.

The example user code repository includes the example_job and pod_per_op_job jobs used later in this walkthrough.

For projects with many dependencies, we recommend publishing your Python project as a package and installing it in your Dockerfile.
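
As a sketch, building the image from the directory containing your Dockerfile might look like the following; the repository name and tag are placeholders for your own registry and versioning scheme:

# Placeholder repository name and tag
docker build -t myregistry.example.com/my-dagster-user-code:v1 .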

Step 3: Push Docker image to registry#

Skip this step if using Dagster's example user code image (dagster/user-code-example).

Next, publish the image to a registry that is accessible from the Kubernetes cluster, such as Amazon Web Services (AWS) ECR or DockerHub.
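
For example, pushing the image built in the previous step might look like this (placeholder repository name and tag; ECR and other registries have their own login flow):

# Log in to your registry; DockerHub shown here
docker login
# Push the user code image
docker push myregistry.example.com/my-dagster-user-code:v1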

Step 4: Set up Amazon S3#

This step is optional.

Several of the jobs in dagster/user-code-example use an S3 I/O manager. To run these jobs, you'll need an available AWS S3 bucket and access to a pair of AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values, because the I/O manager uses boto3 to communicate with S3.

This tutorial also offers the option of using MinIO to mock an S3 endpoint locally in k8s. Note: this option uses host.docker.internal to access the host from within Docker. This behavior has only been tested on macOS and may need a different configuration on other platforms.

  1. To use AWS S3, create a bucket in your AWS account. For this tutorial, we'll create a bucket called test-bucket.
  2. Retrieve your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY credentials.
  3. Run the following to create your k8s secrets:
kubectl create secret generic dagster-aws-access-key-id --from-literal=AWS_ACCESS_KEY_ID=<YOUR ACCESS KEY ID>
kubectl create secret generic dagster-aws-secret-access-key --from-literal=AWS_SECRET_ACCESS_KEY=<YOUR SECRET ACCESS KEY>
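
To confirm the secrets were created as expected, you can list them and inspect their keys (the values themselves stay hidden):

kubectl get secrets dagster-aws-access-key-id dagster-aws-secret-access-key
kubectl describe secret dagster-aws-access-key-id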

Step 5: Add the Dagster Helm chart repository#

The Dagster chart repository contains the versioned charts for all Dagster releases. Add the remote URL under the namespace dagster to install the Dagster charts.

helm repo add dagster https://dagster-io.github.io/helm
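
After adding the repository, refresh your local chart index and confirm the Dagster charts are visible:

helm repo update
helm search repo dagster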

Step 6: Configure your user deployment#

Step 6.1: Configure the deployment#

In this step, you'll update the dagster-user-deployments.deployments section of the Dagster chart's values.yaml to include your deployment.

Here, we can specify the configuration of the Kubernetes deployment that creates the gRPC server for the webserver and daemon to access the user code. The gRPC server is created through the arguments passed to dagsterApiGrpcArgs, which expects a list of arguments for dagster api grpc.

To get access to the Dagster values.yaml, run:

helm show values dagster/dagster > values.yaml

The following snippet works for Dagster's example user code image. Because the Dockerfile places the code location definition at a known path, we pass that path to the gRPC server through dagsterApiGrpcArgs.

Note: If you haven't set up an S3 endpoint, you can only run the job called example_job.

dagster-user-deployments:
  enabled: true
  deployments:
    - name: "k8s-example-user-code-1"
      image:
        repository: "docker.io/dagster/user-code-example"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "--python-file"
        - "/example_project/example_repo/repo.py"
      port: 3030

dagsterApiGrpcArgs also supports loading code location definitions from a module name. Refer to the code location documentation for a list of arguments.
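
As a rough sketch, the arguments above correspond to a dagster api grpc invocation like the following if you ran the server by hand; the module and attribute names in the second command are placeholders:

# File-based code location, matching the example above
dagster api grpc --python-file /example_project/example_repo/repo.py --port 3030

# Module-based code location (placeholder module and attribute names)
dagster api grpc --module-name my_package.definitions --attribute defs --port 3030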

You can also specify configuration like configmaps, secrets, volumes, resource limits, and labels for individual user code deployments:

dagster-user-deployments:
  enabled: true
  deployments:
    - name: "k8s-example-user-code-1"
      image:
        repository: "docker.io/dagster/user-code-example"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "--python-file"
        - "/example_project/example_repo/repo.py"
      port: 3030
      envConfigMaps:
        - name: my-config-map
      envSecrets:
        - name: my-secret
      labels:
        foo_label: bar_value
      volumes:
        - name: my-volume
          configMap:
            name: my-config-map
      volumeMounts:
        - name: my-volume
          mountPath: /opt/dagster/test_folder
          subPath: test_file.yaml
      resources:
        limits:
          cpu: 100m
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      includeConfigInLaunchedRuns:
        enabled: true

By default, this configuration will also be included in the pods for any runs that are launched for the code location server. You can disable this behavior for a code location server by setting includeConfigInLaunchedRuns.enabled to false for that server.

Step 6.2: Run pod_per_op_job#

You'll need a slightly different configuration to run pod_per_op_job. Because pod_per_op_job uses the s3_pickle_io_manager, the user code k8s pods need access to AWS S3 credentials.

Refer to Step 4 for setup instructions. The snippet below works for both AWS S3 and a local S3 endpoint via MinIO:

dagster-user-deployments:
  enabled: true
  deployments:
    - name: "k8s-example-user-code-1"
      image:
        repository: "docker.io/dagster/user-code-example"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "--python-file"
        - "/example_project/example_repo/repo.py"
      port: 3030
      envSecrets:
        - name: dagster-aws-access-key-id
        - name: dagster-aws-secret-access-key

runLauncher:
  type: K8sRunLauncher
  config:
    k8sRunLauncher:
      envSecrets:
        - name: dagster-aws-access-key-id
        - name: dagster-aws-secret-access-key

Step 7: Install the Dagster Helm chart#

Next, you'll install the Helm chart and create a release. Below, we've named our release dagster. We use helm upgrade --install to create the release if it doesn't exist; otherwise, the existing dagster release will be modified:

helm upgrade --install dagster dagster/dagster -f /path/to/values.yaml
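
You can also inspect the release itself with Helm:

helm status dagster
helm list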

Helm will launch several pods, including PostgreSQL. You can check the status of the installation with kubectl. Note that it might take a few minutes for the pods to reach a Running state.

If everything worked correctly, you should see output like the following:

$ kubectl get pods
NAME                                              READY   STATUS    RESTARTS   AGE
dagster-webserver-645b7d59f8-6lwxh                1/1     Running   0          11m
dagster-k8s-example-user-code-1-88764b4f4-ds7tn   1/1     Running   0          9m24s
dagster-postgresql-0                              1/1     Running   0          17m

Step 8: Run a job in your deployment#

After Helm has successfully installed all the required Kubernetes resources, start port forwarding to the webserver pod via:

DAGSTER_WEBSERVER_POD_NAME=$(kubectl get pods --namespace default \
  -l "app.kubernetes.io/name=dagster,app.kubernetes.io/instance=dagster,component=webserver" \
  -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $DAGSTER_WEBSERVER_POD_NAME 8080:80
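
As an optional check, confirm the webserver responds on the forwarded port from a second terminal (port-forward stays in the foreground):

# Should print an HTTP status code such as 200 once the webserver is ready
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080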

Next, try running a job. Navigate to http://127.0.0.1:8080, open the Launchpad tab for example_job, and click Launch Run.

You can introspect the jobs that were launched with kubectl:

$ kubectl get jobs
NAME                                               COMPLETIONS   DURATION   AGE
dagster-run-5ee8a0b3-7ca5-44e6-97a6-8f4bd86ee630   1/1           4s         11s

Now, you can try a run with step isolation. Switch to the pod_per_op_job job, changing the default config to point to your S3 bucket if needed, and launch the run.

If you're using MinIO, change your config to look like this:

resources:
  io_manager:
    config:
      s3_bucket: "test-bucket"
  s3:
    config:
      # This use of host.docker.internal is unique to Mac
      endpoint_url: http://host.docker.internal:9000
      region_name: us-east-1
ops:
  multiply_the_word:
    config:
      factor: 0
    inputs:
      word: ""

Again, you can view the launched jobs:

$ kubectl get jobs
NAME                                               COMPLETIONS   DURATION   AGE
dagster-run-5ee8a0b3-7ca5-44e6-97a6-8f4bd86ee630   1/1           4s         11s
dagster-run-733baf75-fab2-4366-9542-0172fa4ebc1f   1/1           4s         100s

That's it! You deployed Dagster, configured with the default K8sRunLauncher, onto a Kubernetes cluster using Helm.


Debugging#

Running into issues deploying on Helm? Use these commands to help with debugging.

Get the name of the webserver pod#

DAGSTER_WEBSERVER_POD_NAME=$(kubectl get pods --namespace default \
      -l "app.kubernetes.io/name=dagster,app.kubernetes.io/instance=dagster,component=webserver" \
      -o jsonpath="{.items[0].metadata.name}")

Start a shell in the webserver pod#

kubectl exec --stdin --tty $DAGSTER_WEBSERVER_POD_NAME -- /bin/bash

Get debug data from $RUN_ID#

kubectl exec $DAGSTER_WEBSERVER_POD_NAME -- dagster debug export $RUN_ID debug_info.gzip

Get a list of recently failed runs#

kubectl exec $DAGSTER_WEBSERVER_POD_NAME -- dagster debug export fakename fakename.gzip

Get debug output of a failed run#

Note: This information is also available in the UI

kubectl exec $DAGSTER_WEBSERVER_POD_NAME -- dagster debug export 360d7882-e631-4ac7-8632-43c75cb4d426 debug.gzip

Extract the debug.gzip from the pod#

kubectl cp $DAGSTER_WEBSERVER_POD_NAME:debug.gzip debug.gzip

List config maps#

kubectl get configmap # Make note of the "user-deployments" configmap
kubectl get configmap dagster-dagster-user-deployments-$NAME
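
View pod logs#

If a pod is failing to start or a run dies before producing debug output, the pod logs are often the quickest place to look. The pod name below is a placeholder; use the names from kubectl get pods:

kubectl logs $DAGSTER_WEBSERVER_POD_NAME
kubectl logs <pod-name> --previous   # Logs from the prior container if the pod restarted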