
Run launchers

This guide is applicable to Dagster Open Source (OSS) deployments. For Dagster+, refer to the Dagster+ documentation.

Runs instigated from the Dagster UI, the scheduler, or the `dagster job launch` CLI command are launched in Dagster. Launching is distinct from executing a job with the `execute_job` Python API or the CLI `execute` command: a launch operation allocates computational resources (e.g. a process, a container, or a Kubernetes pod) to carry out a run, and then instigates execution on those resources.

The core abstraction in the launch process is the run launcher, which is configured as part of the Dagster instance. The run launcher is the interface to the computational resources that will be used to execute Dagster runs. It receives the ID of a created run and a representation of the job that is about to be executed.
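Because the run launcher is part of the instance, it is configured in the instance's `dagster.yaml`. Below is a minimal sketch using the `module`/`class`/`config` shape instance config follows; the `DockerRunLauncher` and its `env_vars` option are used as an illustrative example and assume the `dagster_docker` package is installed:

```yaml
# dagster.yaml -- instance configuration (illustrative sketch)
run_launcher:
  module: dagster_docker
  class: DockerRunLauncher
  config:
    env_vars:
      - DAGSTER_POSTGRES_PASSWORD
```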


Relevant APIs

| Name | Description |
| --- | --- |
| RunLauncher | Base class for run launchers. |

Built-in run launchers

The simplest run launcher is the built-in run launcher, DefaultRunLauncher. This run launcher spawns a new process per run on the same node as the job's code location.
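The process-per-run behavior described above can be sketched in a few lines. This is a conceptual illustration only, not Dagster's internals: `launch_run_in_subprocess` is a hypothetical helper showing the pattern of spawning a fresh interpreter process on the same node for each run:

```python
import subprocess
import sys


def launch_run_in_subprocess(run_id: str) -> subprocess.Popen:
    # Hypothetical sketch of the DefaultRunLauncher pattern: spawn one new
    # process per run on the local node and hand it the run ID to execute.
    return subprocess.Popen(
        [sys.executable, "-c", f"print('executing run {run_id}')"]
    )


proc = launch_run_in_subprocess("abc123")
proc.wait()
```

Because the run worker is just a local process, this launcher requires no extra infrastructure, which is why it is the default.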

Other run launchers include:

| Name | Description | Documentation |
| --- | --- | --- |
| K8sRunLauncher | A run launcher that allocates a Kubernetes job per run. | Deploying with Helm |
| EcsRunLauncher | A run launcher that launches an Amazon ECS task per run. | Deploying with ECS |
| DockerRunLauncher | A run launcher that launches runs in a Docker container. | |
| CeleryK8sRunLauncher | A run launcher that launches runs as single Kubernetes jobs with extra configuration to support the celery_k8s_job_executor. | Per-op limits in Kubernetes |

Custom run launchers

A few examples of when a custom run launcher is needed:

  • You have custom infrastructure or custom APIs for allocating nodes for execution.
  • You have custom logic for launching runs on different clusters, platforms, etc.
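The shape of a custom run launcher can be sketched as follows. This is a self-contained illustration, not a working Dagster integration: `FakeClusterAPI` and `MyClusterRunLauncher` are hypothetical names, and a real implementation would subclass Dagster's `RunLauncher` base class rather than this standalone class:

```python
from dataclasses import dataclass, field


@dataclass
class FakeClusterAPI:
    # Hypothetical stand-in for custom infrastructure with its own
    # APIs for allocating execution resources.
    jobs: dict = field(default_factory=dict)

    def submit(self, run_id: str) -> None:
        self.jobs[run_id] = "running"

    def cancel(self, run_id: str) -> None:
        self.jobs[run_id] = "canceled"


class MyClusterRunLauncher:
    """Sketch of a custom run launcher: given a run ID, it allocates a
    run worker on custom infrastructure and instigates execution there."""

    def __init__(self, cluster: FakeClusterAPI):
        self._cluster = cluster

    def launch_run(self, run_id: str) -> None:
        # Allocate resources and start the run worker for this run.
        self._cluster.submit(run_id)

    def terminate(self, run_id: str) -> None:
        # Tear down the run worker for this run.
        self._cluster.cancel(run_id)


cluster = FakeClusterAPI()
launcher = MyClusterRunLauncher(cluster)
launcher.launch_run("run-1")
```

The launcher itself only submits and cancels work; everything that happens inside the allocated resources is the run worker's concern, as described below.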

We refer to the process or computational resource created by the run launcher as the run worker. The run launcher only determines the behavior of the run worker. Once execution starts within the run worker, the executor, an in-memory abstraction inside the run worker process, takes over management of computational resources.