Workspace files

This guide is applicable only to Dagster Open Source (OSS).

Workspace files contain a collection of user-defined code locations and information about where to find them. Code locations loaded via workspace files can contain either a Definitions object or multiple repositories.

Workspace files are used by Dagster to load code locations in complex or customized environments, such as a production OSS deployment. For local development within a single Python environment, users of Definitions can use the -m or -f flags with Dagster's CLI tools, or configure the pyproject.toml file to avoid command-line flags entirely.
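The flag-free pyproject.toml approach can be sketched as follows: a [tool.dagster] section points Dagster's CLI tools at a module (the module name below is a hypothetical example):

```toml
# pyproject.toml

[tool.dagster]
# Hypothetical module containing a Definitions object
module_name = "my_definitions_module"
```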

Relevant APIs

  • Definitions: The object that contains all the definitions defined within a code location. Definitions include assets, jobs, resources, schedules, and sensors.
  • @repository: The decorator used to define repositories. The decorator returns a RepositoryDefinition. Note: This has been replaced by Definitions, which is the recommended way to define code locations.

Understanding workspace files

A workspace file tells Dagster where to find code locations. By default, this is a YAML document named workspace.yaml. For example:

# workspace.yaml

load_from:
  - python_file: hello_world_repository.py

Each entry in a workspace file is considered a code location. A code location can contain either a single Definitions object, a repository, or multiple repositories.

To accommodate incrementally migrating from @repository to Definitions, code locations in a single workspace file can mix and match between definition approaches. For example, code-location-1 could load a single Definitions object from a file or module, and code-location-2 could load multiple repositories.
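For example, a single workspace file mixing the two approaches could look like the following sketch (the file and module names here are hypothetical):

```yaml
# workspace.yaml

load_from:
  # code-location-1: loads a single Definitions object from a file
  - python_file: my_definitions.py
  # code-location-2: loads a module containing one or more @repository definitions
  - python_module: my_repositories
```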

Each code location is loaded in its own process, which Dagster tools communicate with over an RPC protocol. This process separation allows multiple code locations in different environments to be loaded independently, and ensures that errors in user code can't impact Dagster system code.

Configuring code locations

From a file

To load a code location from a Python file, use the python_file key in workspace.yaml. The value of python_file should specify a path relative to workspace.yaml leading to a file that contains a code location definition.

For example, if a code location is defined in hello_world_repository.py, and that file is in the same folder as workspace.yaml, the code location could be loaded using the following:

# workspace.yaml

load_from:
  - python_file: hello_world_repository.py

If using @repository to define code locations, you can identify a single repository within the module using the attribute key. The value of this key must be the name of a repository or the name of a function that returns a RepositoryDefinition. For example:

# workspace.yaml

load_from:
  - python_file:
      relative_path: hello_world_repository.py
      attribute: hello_world_repository
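The attribute key can also be combined with python_module when the code location is loaded from an installed Python module rather than a file (the module name here is hypothetical):

```yaml
# workspace.yaml

load_from:
  - python_module:
      module_name: hello_world_module
      attribute: hello_world_repository
```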

Loading workspace files

By default, Dagster command-line tools (like dagster dev, dagit, or dagster-daemon run) look for a workspace file named workspace.yaml in the current directory when invoked. This allows you to launch from that directory without needing command-line arguments:

dagster dev

To load the workspace.yaml file from a different folder, use the -w argument:

dagster dev -w path/to/workspace.yaml

When dagster dev is run, Dagster will load all the code locations defined by the workspace file. Refer to the CLI reference for more info and examples.

If a code location can't be loaded - for example, due to a syntax error or some other unrecoverable error - a warning message will display in Dagit. You'll be directed to a status page with a descriptive error and stack trace for any locations Dagster was unable to load.

Note: If a code location is renamed or its configuration in a workspace file is modified, you'll need to stop and restart any running schedules or sensors in that code location. You can do this in Dagit by navigating to the Deployment overview page and using the Schedules and Sensors tabs.

This is required because when you start a schedule or a sensor, a serialized representation of the entry in your workspace file is stored in a database. The Dagster daemon process uses this serialized representation to identify and load your schedule or sensor. If the code location is modified and its schedules and sensors aren't restarted, the Dagster daemon process will use an outdated serialized representation, resulting in issues.

Running your own gRPC server

By default, Dagster tools automatically create a process on your local machine for each of your code locations. However, it's also possible to run your own gRPC server that's responsible for serving information about your code locations. This can be useful in more complex system architectures that deploy user code separately from Dagit.

Initializing the server

To initialize the Dagster gRPC server, run the dagster api grpc command and include:

  • A target file or module. Similar to a workspace file, the target can either be a Python file or module.
  • Host address
  • Port or socket

The following tabs demonstrate some common ways to initialize a gRPC server:

Running on a port, using a Python file:

dagster api grpc --python-file /path/to/file.py --host 0.0.0.0 --port 4266

Running on a socket, using a Python file:

dagster api grpc --python-file /path/to/file.py --host 0.0.0.0 --socket /path/to/socket

Refer to the API docs for the full list of options that can be set when running a new gRPC server.

Then, in your workspace file, configure a new gRPC server code location to load:

# workspace.yaml

load_from:
  - grpc_server:
      host: localhost
      port: 4266
      location_name: "my_grpc_server"
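If the server was initialized against a socket rather than a port, the workspace entry can reference the socket instead (the socket path here is hypothetical):

```yaml
# workspace.yaml

load_from:
  - grpc_server:
      host: localhost
      socket: /path/to/socket
      location_name: "my_grpc_server"
```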

Specifying a Docker image

When running your own gRPC server within a container, you can tell Dagit that any runs launched from a code location should be launched in a container with that same image.

To do this, set the DAGSTER_CURRENT_IMAGE environment variable to the name of the image before starting the server. After setting this environment variable for your server, the image should be listed alongside the code location on the Status page in Dagit.
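Sketched as a shell snippet, where the image name and the server command shown in the comment are hypothetical examples:

```shell
# DAGSTER_CURRENT_IMAGE tells Dagster which Docker image runs launched from
# this code location should use. The image name below is a hypothetical example.
export DAGSTER_CURRENT_IMAGE="my_org/my_user_code:latest"

# Then start the gRPC server in the same environment, e.g.:
#   dagster api grpc --python-file /path/to/file.py --host 0.0.0.0 --port 4266
echo "$DAGSTER_CURRENT_IMAGE"
```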

This image will only be used by run launchers and executors that expect to use Docker images (like the DockerRunLauncher, K8sRunLauncher, docker_executor, or k8s_job_executor).

If you're using the built-in Helm chart, this environment variable is automatically set on each of your gRPC servers.


Loading relative imports

By default, code is loaded with Dagit's working directory as the base path to resolve any local imports in your code. Using the working_directory key, you can specify a custom working directory for relative imports. For example:

# workspace.yaml

load_from:
  - python_file:
      relative_path: hello_world_repository.py
      working_directory: my_working_directory/