Updated cronitor pin to allow versions >= 5.0.1 to enable use of DayOfWeek as 7. Cronitor 4.0.0 is still disallowed. (Thanks, @joshuataylor!)
Added flag checkDbReadyInitContainer to optionally disable db check initContainer.
[ui] Added Google Drive icon for kind tags. (Thanks, @dragos-pop!)
[ui] Renamed the run lineage sidebar on the Run details page to Re-executions.
[ui] Sensors and schedules that appear in the Runs page are now clickable.
[ui] Runs targeting assets now show more of the assets in the Runs page.
[dagster-airbyte] The destination type for an Airbyte asset is now added as a kind tag for display in the UI.
[dagster-gcp] DataprocResource now receives an optional parameter labels to be attached to Dataproc clusters. (Thanks, @thiagoazcampos!)
[dagster-k8s] Added a checkDbReadyInitContainer flag to the Dagster Helm chart to allow disabling the default init container behavior. (Thanks, @easontm!)
[dagster-k8s] K8s pod logs are now logged when a pod fails. (Thanks, @apetryla!)
[dagster-sigma] Introduced build_materialize_workbook_assets_definition which can be used to build assets that run materialize schedules for a Sigma workbook.
[dagster-snowflake] SnowflakeResource and SnowflakeIOManager both accept additional_snowflake_connection_args config. This dictionary of arguments will be passed to the snowflake.connector.connect method. This config will be ignored if you are using the sqlalchemy connector.
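A minimal sketch of the new config in code (the specific connector argument shown, ocsp_fail_open, is just an illustrative snowflake.connector option):

from dagster_snowflake import SnowflakeResource

# Extra arguments in this dict are forwarded to snowflake.connector.connect.
snowflake = SnowflakeResource(
    account="my_account",
    user="my_user",
    password="my_password",
    additional_snowflake_connection_args={"ocsp_fail_open": True},
)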
[helm] Added the ability to set user-deployments labels on k8s deployments as well as pods.
Assets with self-dependencies and BackfillPolicy are now evaluated correctly during backfills. Self-dependent assets no longer result in serial partition submissions or disregarded upstream dependencies.
Previously, the freshness check sensor would not re-evaluate freshness checks if an in-flight run was planning to evaluate that check. Now, the freshness check sensor will kick off an independent run of the check, even if there's already an in-flight run, as long as the freshness check can potentially fail.
Previously, if the freshness check was in a failing state, the sensor would wait for a run to update the freshness check before re-evaluating. Now, if there's a materialization later than the last evaluation of the freshness check and no planned evaluation, we will re-evaluate the freshness check automatically.
[ui] Fixed run log streaming for runs with a large volume of logs.
[ui] Fixed a bug in the Backfill Preview where a loading spinner would spin forever if an asset had no valid partitions targeted by the backfill.
[dagster-aws] PipesCloudWatchMessageReader correctly identifies streams which are not ready yet and doesn't fail on ThrottlingException. (Thanks, @jenkoian!)
[dagster-fivetran] Column metadata can now be fetched for Fivetran assets using FivetranWorkspace.sync_and_poll(...).fetch_column_metadata().
[dagster-k8s] The k8s client now waits for the main container to be ready instead of only waiting for sidecar init containers. (Thanks, @OrenLederman!)
The automatic run retry daemon has been updated so that there is a single source of truth for whether a run will be retried and whether the retry has been launched. Tags are now added to runs at failure time indicating whether the run will be retried by the automatic retry system. Once the automatic retry has been launched, the run ID of the retry is added to the original run.
When canceling a backfill of a job, the backfill daemon will now cancel all runs launched by that backfill before marking the backfill as canceled.
Dagster execution info (tags such as dagster/run-id, dagster/code-location, dagster/user, and Dagster Cloud environment variables) typically attached to external resources is now available under DagsterRun.dagster_execution_info.
SensorReturnTypesUnion is now exported for typing the output of sensor functions.
[dagster-dbt] dbt seeds now get a valid code version (Thanks @marijncv!).
Manual and automatic retries of runs launched by backfills that occur while the backfill is still in progress are now incorporated into the backfill's status.
Manual retries of runs launched by backfills are no longer considered part of the backfill if the backfill is complete when the retry is launched.
[dagster-fivetran] Fivetran assets can now be materialized using the FivetranWorkspace.sync_and_poll(…) method in the definition of a @fivetran_assets decorator.
[dagster-fivetran] load_fivetran_asset_specs has been updated to accept an instance of DagsterFivetranTranslator or custom subclass.
[dagster-fivetran] The fivetran_assets decorator was added. It can be used with the FivetranWorkspace resource and DagsterFivetranTranslator translator to load Fivetran tables for a given connector as assets in Dagster. The build_fivetran_assets_definitions factory can be used to create assets for all the connectors in your Fivetran workspace.
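A minimal sketch of the decorator-based API described above; the connector ID, credential environment variable names, and asset function name are placeholders:

import dagster as dg
from dagster_fivetran import FivetranWorkspace, fivetran_assets

fivetran_workspace = FivetranWorkspace(
    account_id=dg.EnvVar("FIVETRAN_ACCOUNT_ID"),
    api_key=dg.EnvVar("FIVETRAN_API_KEY"),
    api_secret=dg.EnvVar("FIVETRAN_API_SECRET"),
)

@fivetran_assets(connector_id="my_connector_id", workspace=fivetran_workspace)
def my_connector_assets(context: dg.AssetExecutionContext, fivetran: FivetranWorkspace):
    # Materialize the connector's tables by kicking off a sync and polling until completion.
    yield from fivetran.sync_and_poll(context=context)

defs = dg.Definitions(
    assets=[my_connector_assets],
    resources={"fivetran": fivetran_workspace},
)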
[dagster-aws] ECSPipesClient.run now waits up to 70 days for task completion (waiter parameters are configurable). (Thanks, @jenkoian!)
[dagster-dbt] Updated the dagster-dbt scaffold template to be compatible with uv. (Thanks, @wingyplus!)
[dagster-airbyte] A load_airbyte_cloud_asset_specs function has been added. It can be used with the AirbyteCloudWorkspace resource and DagsterAirbyteTranslator translator to load your Airbyte Cloud connection streams as external assets in Dagster.
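A minimal sketch of loading Airbyte Cloud connection streams as external assets; the credential environment variable names are placeholders:

import dagster as dg
from dagster_airbyte import AirbyteCloudWorkspace, load_airbyte_cloud_asset_specs

airbyte_workspace = AirbyteCloudWorkspace(
    workspace_id=dg.EnvVar("AIRBYTE_CLOUD_WORKSPACE_ID"),
    client_id=dg.EnvVar("AIRBYTE_CLOUD_CLIENT_ID"),
    client_secret=dg.EnvVar("AIRBYTE_CLOUD_CLIENT_SECRET"),
)

# Each Airbyte Cloud connection stream becomes an external asset spec.
airbyte_specs = load_airbyte_cloud_asset_specs(airbyte_workspace)
defs = dg.Definitions(assets=airbyte_specs)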
[ui] Add an icon for the icechunk kind.
[ui] Improved the UI for manual sensor/schedule evaluation.
Fixed database locking bug for the ConsolidatedSqliteEventLogStorage, which is mostly used for tests.
[dagster-aws] Fixed a bug in the ECSRunLauncher that prevented it from accepting a user-provided task definition when DAGSTER_CURRENT_IMAGE was not set in the code location.
[ui] Fixed an issue that would sometimes cause the asset graph to fail to render on initial load.
[ui] Fix global auto-materialize tick timeline when paginating.
Global op concurrency is now enabled on the default SQLite storage. Deployments that have not been migrated since 1.6.0 may need to run dagster instance migrate to enable this feature.
Introduced map_asset_specs to enable modifying AssetSpecs and AssetsDefinitions in bulk.
Introduced AssetSpec.replace_attributes and AssetSpec.merge_attributes to easily alter properties of an asset spec.
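A sketch of using the two new APIs together to annotate several definitions in bulk, assuming map_asset_specs takes the mapping function followed by the definitions; the group name is just an example:

import dagster as dg

@dg.asset
def orders(): ...

@dg.asset
def customers(): ...

# Apply the same group name to every passed-in asset definition.
annotated = dg.map_asset_specs(
    lambda spec: spec.replace_attributes(group_name="analytics"),
    [orders, customers],
)
defs = dg.Definitions(assets=annotated)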
[ui] Add a "View logs" button to open tick logs in the sensor tick history table.
[ui] Add Spanner kind icon.
[ui] The asset catalog now supports filtering using the asset selection syntax.
[dagster-pipes, dagster-aws] PipesS3MessageReader now has a new parameter include_stdio_in_messages which enables log forwarding to Dagster via Pipes messages.
[dagster-pipes] Experimental: A new Dagster Pipes message type log_external_stream has been added. It can be used to forward external logs to Dagster via Pipes messages.
[dagster-powerbi] Opts in to using admin scan APIs to pull data from a Power BI instance. This can be disabled by passing load_powerbi_asset_specs(..., use_workspace_scan=False).
[dagster-sigma] Introduced an experimental dagster-sigma snapshot command, allowing Sigma workspaces to be captured to a file for faster subsequent loading.
Fixed a bug that caused DagsterExecutionStepNotFoundError errors when trying to execute an asset check step of a run launched by a backfill.
Fixed an issue where invalid cron strings like "0 0 30 2 *" that represented invalid dates in February were still allowed as Dagster cron strings, but then failed during schedule execution. Now, these invalid cron strings raise an exception when they are first loaded.
Fixed a bug where owners added to AssetOuts when defining a @graph_multi_asset were not added to the underlying AssetsDefinition.
Fixed a bug where using the & or | operators on AutomationConditions with labels would cause that label to be erased.
[ui] Launching partitioned asset jobs from the launchpad now warns if no partition is selected.
[ui] Fixed unnecessary middle truncation occurring in dialogs.
[ui] Fixed timestamp labels and "Now" line rendering bugs on the sensor tick timeline.
[ui] Opening Dagster's UI with a single job defined now takes you to the Overview page rather than the Job page.
[ui] Fix stretched tags in backfill table view for non-partitioned assets.
[ui] Open automation sensor evaluation details in a dialog instead of navigating away.
[ui] Fix scrollbars in dark mode.
[dagster-sigma] Workbooks filtered using a SigmaFilter no longer fetch lineage information.
[dagster-powerbi] Fixed an issue where reports without an upstream dataset dependency would fail to translate to an asset spec.
Added a new icon for the Denodo kind tag. (Thanks, @tintamarre!)
Errors raised from defining more than one Definitions object at module scope now include the object names so that the source of the error is easier to determine.
[ui] Asset metadata entries like dagster/row_count now appear on the events page and are properly hidden on the overview page when they appear in the sidebar.
[dagster-aws] PipesGlueClient now attaches AWS Glue metadata to Dagster results produced during Pipes invocation.
[dagster-aws] PipesEMRServerlessClient now attaches AWS EMR Serverless metadata to Dagster results produced during Pipes invocation and adds Dagster tags to the job run.
[dagster-aws] PipesECSClient now attaches AWS ECS metadata to Dagster results produced during Pipes invocation and adds Dagster tags to the ECS task.
[dagster-aws] PipesEMRClient now attaches AWS EMR metadata to Dagster results produced during Pipes invocation.
[dagster-databricks] PipesDatabricksClient now attaches Databricks metadata to Dagster results produced during Pipes invocation and adds Dagster tags to the Databricks job.
[dagster-fivetran] Added load_fivetran_asset_specs function. It can be used with the FivetranWorkspace resource and DagsterFivetranTranslator translator to load your Fivetran connector tables as external assets in Dagster.
[dagster-looker] Errors are now handled more gracefully when parsing derived tables.
[dagster-sigma] Sigma assets now contain extra metadata and kind tags.
[dagster-sigma] Added support for direct workbook to warehouse table dependencies.
[dagster-sigma] Added include_unused_datasets field to SigmaFilter to disable pulling datasets that aren't used by a downstream workbook.
[dagster-sigma] Added skip_fetch_column_data option to skip loading Sigma column lineage. This can speed up loading large instances.
[dagster-sigma] Introduced an experimental dagster-sigma snapshot command, allowing Sigma workspaces to be captured to a file for faster subsequent loading.
More Airflow-related content is coming soon! We'd love for you to check it out and post any comments/questions in the #airflow-migration channel in the Dagster Slack.
Fixed a bug in run status sensors where setting incompatible arguments monitor_all_code_locations and monitored_jobs did not raise the expected error. (Thanks, @apetryla!)
Fixed an issue that would cause the label for AutomationCondition.any_deps_match() and AutomationCondition.all_deps_match() to render incorrectly when allow_selection or ignore_selection were set.
Fixed a bug which could cause code location load errors when using CacheableAssetsDefinitions in code locations that contained AutomationConditions.
Fixed an issue where the default multiprocess executor kept holding onto subprocesses after their step completed, potentially causing Too many open files errors for jobs with many steps.
[ui] Fixed an issue introduced in 1.9.2 where the backfill overview page would sometimes display extra assets that were targeted by the backfill.
[ui] Fixed "Open in Launchpad" button when testing a schedule or sensor by ensuring that it opens to the correct deployment.
[ui] Fixed an issue where switching a user setting was immediately saved, rather than waiting for the change to be confirmed.
[dagster-looker] Unions without unique/distinct criteria are now properly handled.
[dagster-powerbi] Fixed an issue where reports without an upstream dataset dependency would fail to translate to an asset spec.
[dagster-sigma] Fixed an issue where API fetches did not paginate properly.
Introduced a new constructor, AssetOut.from_spec, that will construct an AssetOut from an AssetSpec.
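A minimal sketch of constructing an AssetOut from an existing spec inside a @multi_asset; the asset key and group name are placeholders:

import dagster as dg

spec = dg.AssetSpec(key="processed_orders", group_name="analytics")

@dg.multi_asset(outs={"processed_orders": dg.AssetOut.from_spec(spec)})
def processed_orders():
    yield dg.Output(1, output_name="processed_orders")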
[ui] Column tags are now displayed in the Column name section of the asset overview page.
[ui] Introduced an icon for the gcs (Google Cloud Storage) kind tag.
[ui] Introduced icons for report and semanticmodel kind tags.
[ui] The tooltip for a tag containing a cron expression now shows a human-readable, timezone-aware cron string.
[ui] Asset check descriptions are now sourced from docstrings and rendered in the UI. (Thanks, @marijncv!)
[dagster-aws] Added option to propagate tags to ECS tasks when using the EcsRunLauncher. (Thanks, @zyd14!)
[dagster-dbt] You can now implement DagsterDbtTranslator.get_code_version to customize the code version for your dbt assets. (Thanks, @Grzyblon!)
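A sketch of a translator override; it assumes the dbt manifest entry for the model carries a checksum field that is suitable to use as the code version:

from typing import Any, Mapping, Optional
from dagster_dbt import DagsterDbtTranslator

class CustomDbtTranslator(DagsterDbtTranslator):
    def get_code_version(self, dbt_resource_props: Mapping[str, Any]) -> Optional[str]:
        # Use the file checksum recorded in the dbt manifest as the asset's code version.
        return dbt_resource_props.get("checksum", {}).get("checksum")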
[dagster-pipes] Added the ability to pass arbitrary metadata to PipesClientCompletedInvocation. This metadata will be attached to all materializations and asset checks stored during the pipes invocation.
[dagster-powerbi] During a full workspace scan, owner and column metadata is now automatically attached to assets.
Fixed an issue with AutomationCondition.execution_in_progress which would cause it to evaluate to True for unpartitioned assets that were part of a run that was in progress, even if the asset itself had already been materialized.
Fixed an issue with AutomationCondition.run_in_progress that would cause it to ignore queued runs.
Fixed an issue that would cause a default_automation_condition_sensor to be constructed for user code servers running on dagster version < 1.9.0 even if the legacy auto_materialize: use_sensors configuration setting was set to False.
[ui] Fixed an issue when executing asset checks where the wrong job name was used in some situations. The correct job name is now used.
[ui] Selecting assets with 100k+ partitions no longer causes the asset graph to temporarily freeze.
[ui] Fixed an issue that could cause a GraphQL error on certain pages after removing an asset.
[ui] The asset events page no longer truncates event history in cases where both materialization and observation events are present.
[ui] The backfill coordinator logs tab no longer sits in a loading state when no logs are available to display.
[ui] Fixed issue which would cause the "Partitions evaluated" label on an asset's automation history page to incorrectly display 0 in cases where all partitions were evaluated.
[ui] Fix "Open in Playground" link when testing a schedule or sensor by ensuring that it opens to the correct deployment.
[ui] Fixed an issue where the asset graph would reload unexpectedly.
[dagster-dbt] Fixed an issue where the SQL filepath for a dbt model was incorrectly resolved when the dbt manifest file was built on a Windows machine, but executed on a Unix machine.
[dagster-pipes] Asset keys containing embedded / characters now work correctly with Dagster Pipes.
dagster project scaffold now has an option to create dagster projects from templates with excluded files/filepaths.
[ui] Filters in the asset catalog now persist when navigating subdirectories.
[ui] The Run page now displays the partition(s) a run was for.
[ui] Filtering on owners/groups/tags is now case-insensitive.
[dagster-tableau] The helper function parse_tableau_external_and_materializable_asset_specs is now available to parse a list of Tableau asset specs into a list of external asset specs and materializable asset specs.
[dagster-looker] Looker assets now include owner and URL metadata by default.
[dagster-k8s] Added a per_step_k8s_config configuration option to the k8s_job_executor, allowing the k8s configuration of individual steps to be configured at run launch time. (Thanks, @Kuhlwein!)
[dagster-fivetran] Introduced DagsterFivetranTranslator to customize assets loaded from Fivetran.
[dagster-snowflake] dagster_snowflake.fetch_last_updated_timestamps now supports ignoring tables not found in Snowflake instead of raising an error.
Fixed issue which would cause a default_automation_condition_sensor to be constructed for user code servers running on dagster version < 1.9.0 even if the legacy auto_materialize: use_sensors configuration setting was set to False.
Fixed an issue where running dagster instance migrate on Dagster version 1.9.0 constructed a SQL query that exceeded the maximum allowed depth.
Fixed an issue where wiping a dynamically partitioned asset causes an error.
[dagster-polars] ImportErrors are no longer raised when bigquery libraries are not installed [#25708]
Fixed dagster new-project, which broke on the 0.11.0 release (Thank you @saulius!)
Docs fixes (Thanks @michaellynton and @zuik!)
New
The left navigation in Dagit now allows viewing more than one repository at a time. Click “Filter” to choose which repositories to show.
In dagster-celery-k8s, you can now specify a custom container image to use for execution in executor config. This image will take precedence over the image used for the user code deployment.
Bugfixes
Previously, fonts were not served correctly in Dagit when using the --path-prefix option. Custom fonts and their CSS have now been removed, and system fonts are now used for both normal and monospace text.
In Dagit, table borders are now visible in Safari.
Stopping and starting a sensor was preventing future sensor evaluations due to a timezone issue when calculating the minimum interval from the last tick timestamp. This is now fixed.
The blank state for the backfill table is now updated to accurately describe the empty state.
Asset catalog entries were returning an error if they had not been materialized since the 0.11.0 release. Our asset queries are now backwards compatible and can read from old materializations.
Backfills can now successfully be created with step selections even for partitions that did not have an existing run.
Backfill progress was sometimes showing negative counts for the “Skipped” category when backfill runs were manually re-executed. The total run counts now include manually re-executed runs.
MySQL is now supported as a storage backend: you can now run your Dagster instance on top of MySQL instead of Postgres. See the docs for how to configure MySQL for Event Log Storage, Run Storage, and Schedule Storage.
A new backfills page in Dagit lets you monitor and cancel currently running backfills. Backfills are now managed by the Dagster Daemon, which means you can launch backfills over thousands of partitions without risking crashing your Dagit server.
[Experimental] Dagster now helps you track the lineage of assets. You can attach AssetKeys to solid outputs through either the OutputDefinition or IOManager, which allows Dagster to automatically generate asset lineage information for assets referenced in this way. Direct parents of an asset will appear in the Dagit Asset Catalog. See the asset docs to learn more.
[Experimental] A collect operation for dynamic orchestration allows you to run solids that take a set of dynamically mapped outputs as an input. Building on the dynamic orchestration features of DynamicOutput and map from the last release, this release includes the ability to collect over dynamically mapped outputs. You can see an example here.
Dagster has a new documentation site. The URL is still https://docs.dagster.io, but the site has a new design and updated content. If you’re on an older version of Dagster, you can still view pre-0.11.0 documentation at https://legacy-docs.dagster.io.
dagster new-project is a new CLI command that generates a Dagster project with skeleton code on your filesystem. Learn how to use it here.
Added a partition_days_offset argument to the @daily_schedule decorator that allows you to customize which partition is used for each execution of your schedule. The default value of this parameter is 1, which means that a schedule that runs on day N will fill in the partition for day N-1. To create a schedule that uses the partition for the current day, set this parameter to 0, or increase it to make the schedule use an earlier day’s partition. Similar arguments have also been added for the other partitioned schedule decorators (@monthly_schedule, @weekly_schedule, and @hourly_schedule).
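A sketch using the legacy schedule decorator of that era; the pipeline name, solid name, and run config shape are placeholders:

from datetime import datetime, time
from dagster import daily_schedule

@daily_schedule(
    pipeline_name="my_pipeline",
    start_date=datetime(2021, 3, 1),
    execution_time=time(hour=1, minute=30),
    partition_days_offset=0,  # use the current day's partition instead of the previous day's
)
def my_daily_schedule(date):
    return {"solids": {"process_date": {"config": {"date": date.strftime("%Y-%m-%d")}}}}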
Both sensors and schedule definitions support a description parameter that takes in a human-readable string description and displays it on the corresponding landing page in Dagit.
Assets
[Experimental] AssetMaterialization now accepts a tags argument. Tags can be used to filter assets in Dagit.
Added support for assets to the default SQLite event log storage.
Daemon
The QueuedRunCoordinator daemon is now more resilient to errors while dequeuing runs. Previously runs which could not launch would block the queue. They will now be marked as failed and removed from the queue.
The dagster-daemon process uses fewer resources and spins up fewer subprocesses to load pipeline information. Previously, the scheduler, sensor, and run queue daemons each spun up their own process for this; now they share a single process.
The dagster-daemon process now runs each of its daemons in its own thread. This allows the scheduler, sensor loop, and daemon for launching queued runs to run in parallel, without slowing each other down.
Deployment
When specifying the location of a gRPC server in your workspace.yaml file to load your pipelines, you can now specify an environment variable for the server’s hostname and port.
When deploying your own gRPC server for your pipelines, you can now specify that connecting to that server should use a secure SSL connection.
When a solid-decorated function has a Python type annotation and no Dagster type has been explicitly registered for that Python type, Dagster now automatically constructs a corresponding Dagster type instead of raising an error.
Added a dagster run delete CLI command to delete a run and its associated event log entries.
fs_io_manager now defaults the base directory to base_dir via the Dagster instance’s local_artifact_storage configuration. Previously, it defaulted to the directory where the pipeline was executed.
When user code raises an error inside handle_output, load_input, or a type check function, the log output now includes context about which input or output the error occurred during.
We have added the BoolSource config type (similar to the StringSource type). The config value for this type can be a boolean literal or a pointer to an environment variable that is set to a boolean value.
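A minimal sketch of BoolSource in a resource config schema; the resource name and environment variable name are placeholders:

from dagster import BoolSource, Field, resource

@resource(config_schema={"verbose": Field(BoolSource, default_value=False)})
def my_resource(context):
    # The supplied value may be a boolean literal or {"env": "MY_VERBOSE_FLAG"}.
    return context.resource_config["verbose"]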
When trying to run a pipeline where every step has been memoized, you now get a DagsterNoStepsToExecuteException.
The OutputContext passed to the has_output method of MemoizableIOManager now includes a working log.
After manually reloading the current repository, users will now be prompted to regenerate preset-based or partition-set-based run configs in the Playground view. This helps ensure that the generated run config is up to date when launching new runs. The prompt does not occur when the repository is automatically reloaded.
Added ability to preview runs for upcoming schedule ticks.
Dagit now has a global search feature in the left navigation, allowing you to jump quickly to pipelines, schedules, sensors, and partition sets across your workspace. You can trigger search by clicking the search input or with the / keyboard shortcut.
Timestamps in Dagit have been updated to be more consistent throughout the app, and are now localized based on your browser’s settings.
In Dagit, a repository location reload button is now available in the header of every pipeline, schedule, and sensor page.
You can now make changes to your workspace.yaml file without restarting Dagit. To reload your workspace, navigate to the Status page and press the “Reload all” button in the Workspace section.
When viewing a run in Dagit, log filtering behavior has been improved. step and type filtering now offers fuzzy search, all log event types are now searchable, and visual bugs within the input have been repaired. Additionally, the default setting for “Hide non-matches” has been flipped to true.
When using a grpc_server repository location, Dagit will automatically detect changes and prompt you to reload when the remote server updates.
When launching a backfill from Dagit, the “Re-execute From Last Run” option has been removed, because it had confusing semantics. “Re-execute From Failure” now includes a tooltip.
Added a secondary index to improve performance when querying run status.
The asset catalog now displays a flattened view of all assets, along with a filter field. Tags from AssetMaterializations can be used to filter the catalog view.
The asset catalog now enables wiping an individual asset from an action in the menu. Bulk wiping of assets is still only supported with the CLI command dagster asset wipe.
Users can now set Kubernetes labels on Celery worker deployments.
Users can now set environment variables for the Flower deployment.
The Redis Helm chart is now included as an optional Dagster Helm chart dependency.
K8sRunLauncher and CeleryK8sRunLauncher no longer reload the pipeline being executed just before launching it. The previous behavior ensured that the latest version of the pipeline was always being used, but was inconsistent with other run launchers. Instead, to ensure that you’re running the latest version of your pipeline, you can refresh your repository in Dagit by pressing the button next to the repository name.
Added a flag to the Dagster helm chart that lets you specify that the cluster already has a redis server available, so the Helm chart does not need to create one in order to use redis as a messaging queue. For more information, see the Helm chart’s values.yaml file.
Celery queues can now be configured with different node selectors. Previously, configuring a node selector applied it to all Celery queues.
When setting userDeployments.deployments in the Helm chart, replicaCount now defaults to 1 if not specified.
Changed our weekly docker image releases (the default images in the helm chart). dagster/dagster-k8s and dagster/dagster-celery-k8s can be used for all processes which don't require user code (Dagit, Daemon, and Celery workers when using the CeleryK8sExecutor). user-code-example can be used for a sample user repository. The prior images (k8s-dagit, k8s-celery-worker, k8s-example) are deprecated.
All images used in our Helm chart are now fully qualified, including a registry name. If you are encountering rate limits when attempting to pull images from DockerHub, you can now edit the Helm chart to pull from a registry of your choice.
We now officially use Helm 3 to manage our Dagster Helm chart.
We are now publishing the dagster-k8s, dagster-celery-k8s, user-code-example, and k8s-dagit-example images to a public ECR registry in addition to DockerHub. If you are encountering rate limits when attempting to pull images from DockerHub, you should now be able to pull these images from public.ecr.aws/dagster.
.Values.dagsterHome is now a global variable, available at .Values.global.dagsterHome.
.Values.global.postgresqlSecretName has been introduced, for subcharts to access the Dagster Helm chart’s generated Postgres secret properly.
.Values.userDeployments has been renamed .Values.dagster-user-deployments to reference the subchart’s values. When using Dagster User Deployments, enabling .Values.dagster-user-deployments.enabled will create a workspace.yaml for Dagit to locate gRPC servers with user code. To create the actual gRPC servers, .Values.dagster-user-deployments.enableSubchart should be enabled. To manage the gRPC servers in a separate Helm release, .Values.dagster-user-deployments.enableSubchart should be disabled, and the subchart should be deployed in its own helm release.
Schedules now run in UTC (instead of the system timezone) if no timezone has been set on the schedule. If you’re using a deprecated scheduler like SystemCronScheduler or K8sScheduler, we recommend that you switch to the native Dagster scheduler. The deprecated schedulers will be removed in the next Dagster release.
Names provided to alias on solids now enforce the same naming rules as solids. You may have to update provided names to meet these requirements.
The retries method on Executor should now return a RetryMode instead of a Retries. This will only affect custom Executor classes.
Submitting partition backfills in Dagit now requires dagster-daemon to be running. The instance setting in dagster.yaml to optionally enable daemon-based backfills has been removed, because all backfills are now daemon-based backfills.
# removed, no longer a valid setting in dagster.yaml
backfill:
  daemon_enabled: true
The corresponding value flag dagsterDaemon.backfill.enabled has also been removed from the Dagster helm chart.
The sensor daemon interval settings in dagster.yaml have been removed. The sensor daemon now runs in a continuous loop, so this customization is no longer useful.
# removed, no longer a valid setting in dagster.yaml
sensor_settings:
  interval_seconds: 10
The instance argument to RunLauncher.launch_run has been removed. If you have written a custom RunLauncher, you’ll need to update the signature of that method. You can still access the DagsterInstance on the RunLauncher via the _instance parameter.
The has_config_entry, has_configurable_inputs, and has_configurable_outputs properties of solid and composite_solid have been removed.
The deprecated optionality of the name argument to PipelineDefinition has been removed, and the argument is now required.
The execute_run_with_structured_logs and execute_step_with_structured_logs internal CLI entry points have been removed. Use execute_run or execute_step instead.
The python_environment key has been removed from workspace.yaml. Instead, to specify that a repository location should use a custom python environment, set the executable_path key within a python_file, python_module, or python_package key. See the docs for more information on configuring your workspace.yaml file.
[dagster-dask] The deprecated schema for reading or materializing dataframes has been removed. Use the read or to keys accordingly.
Fixed an issue where postgres databases were unable to initialize the Dagster schema or migrate to a newer version of the Dagster schema. (Thanks @wingyplus for submitting the fix!)
[dagster-dbt] The dbt commands seed and docs generate are now available as solids in the dagster-dbt library. (Thanks, @dehume-drizly!)
New
Dagit now has a global search feature in the left navigation, allowing you to jump quickly to pipelines, schedules, and sensors across your workspace. You can trigger search by clicking the search input or with the / keyboard shortcut.
Timestamps in Dagit have been updated to be more consistent throughout the app, and are now localized based on your browser’s settings.
Added SQLPollingEventWatcher as an alternative to filesystem or DB-specific listen/notify functionality.
We have added the BoolSource config type (similar to the StringSource type). The config value for this type can be a boolean literal or a pointer to an environment variable that is set to a boolean value.
The QueuedRunCoordinator daemon is now more resilient to errors while dequeuing runs. Previously runs which could not launch would block the queue. They will now be marked as failed and removed from the queue.
When deploying your own gRPC server for your pipelines, you can now specify that connecting to that server should use a secure SSL connection. For example, the following workspace.yaml file specifies that a secure connection should be used:
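(Sketch of the expected shape; the host, port, and location name are placeholders, and it assumes the grpc_server target accepts an ssl flag.)

load_from:
  - grpc_server:
      host: localhost
      port: 4266
      location_name: "my_grpc_server"
      ssl: true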
The dagster-daemon process uses fewer resources and spins up fewer subprocesses to load pipeline information. Previously, the scheduler, sensor, and run queue daemons each spun up their own process for this; now they share a single process.
Integrations
[Helm] - All images used in our Helm chart are now fully qualified, including a registry name. If you are encountering rate limits when attempting to pull images from DockerHub, you can now edit the Helm chart to pull from a registry of your choice.
[Helm] - We now officially use Helm 3 to manage our Dagster Helm chart.
[ECR] - We are now publishing the dagster-k8s, dagster-celery-k8s, user-code-example, and k8s-dagit-example images to a public ECR registry in addition to DockerHub. If you are encountering rate limits when attempting to pull images from DockerHub, you should now be able to pull these images from public.ecr.aws/dagster.
[dagster-spark] - The dagster-spark config schemas now support loading values for all fields via environment variables.
Bugfixes
Fixed a bug in the Helm chart that would cause a Redis Kubernetes pod to be created even when an external Redis is configured. Now, the Redis Kubernetes pod is only created when redis.internal is set to True in the Helm chart.
Fixed an issue where the dagster-daemon process sometimes left dangling subprocesses running during sensor execution, causing excess resource usage.
Fixed an issue where Dagster sometimes left hanging threads running after pipeline execution.
Fixed an issue where the sensor daemon would mistakenly mark itself as in an unhealthy state even after recovering from an error.
Tags applied using the tag method on solid invocations (as opposed to solid definitions) are now correctly propagated during execution. They were previously being ignored.
Experimental
MySQL (via dagster-mysql) is now supported as a backend for event log, run, and schedule storage. Add the following to your dagster.yaml to use MySQL for storage:
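(Sketch of the storage configuration, assuming the dagster-mysql storage class names; credentials and hostnames are read from placeholder environment variables.)

run_storage:
  module: dagster_mysql.run_storage
  class: MySQLRunStorage
  config:
    mysql_db:
      username: { env: DAGSTER_MYSQL_USERNAME }
      password: { env: DAGSTER_MYSQL_PASSWORD }
      hostname: { env: DAGSTER_MYSQL_HOSTNAME }
      db_name: { env: DAGSTER_MYSQL_DB }
      port: 3306

event_log_storage:
  module: dagster_mysql.event_log
  class: MySQLEventLogStorage
  config:
    mysql_db:
      username: { env: DAGSTER_MYSQL_USERNAME }
      password: { env: DAGSTER_MYSQL_PASSWORD }
      hostname: { env: DAGSTER_MYSQL_HOSTNAME }
      db_name: { env: DAGSTER_MYSQL_DB }
      port: 3306

schedule_storage:
  module: dagster_mysql.schedule_storage
  class: MySQLScheduleStorage
  config:
    mysql_db:
      username: { env: DAGSTER_MYSQL_USERNAME }
      password: { env: DAGSTER_MYSQL_PASSWORD }
      hostname: { env: DAGSTER_MYSQL_HOSTNAME }
      db_name: { env: DAGSTER_MYSQL_DB }
      port: 3306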
When user code raises an error inside handle_output, load_input, or a type check function, the log output now includes context about which input or output the error occurred during.
Added a secondary index to improve performance when querying run status. Run dagster instance migrate to upgrade.
[Helm] Celery queues can now be configured with different node selectors. Previously, configuring a node selector applied it to all Celery queues.
In Dagit, a repository location reload button is now available in the header of every pipeline, schedule, and sensor page.
When viewing a run in Dagit, log filtering behavior has been improved. step and type filtering now offer fuzzy search, all log event types are now searchable, and visual bugs within the input have been repaired. Additionally, the default setting for “Hide non-matches” has been flipped to true.
After launching a backfill in Dagit, the success message now includes a link to view the runs for the backfill.
The dagster-daemon process now runs faster when running multiple schedulers or sensors from the same repository.
When launching a backfill from Dagit, the “Re-execute From Last Run” option has been removed, because it had confusing semantics. “Re-execute From Failure” now includes a tooltip.
fs_io_manager now defaults the base directory to base_dir via the Dagster instance’s local_artifact_storage configuration. Previously, it defaulted to the directory where the pipeline was executed.
Experimental IO managers versioned_filesystem_io_manager and custom_path_fs_io_manager now require base_dir as part of the resource configs. Previously, the base_dir defaulted to the directory where the pipeline was executed.
Added a backfill daemon that submits backfill runs in a daemon process. This should relieve memory / CPU requirements for scheduling large backfill jobs. Enabling this feature requires a schema migration to the runs storage via the CLI command dagster instance migrate and configuring your instance with the following settings in dagster.yaml:
backfill:
  daemon_enabled: true
There is a corresponding flag in the Dagster helm chart to enable this instance configuration. See the Helm chart’s values.yaml file for more information.
Both sensors and schedule definitions support a description parameter that takes in a human-readable string description and displays it on the corresponding landing page in Dagit.
Integrations
[dagster-gcp] The gcs_pickle_io_manager now also retries on 403 Forbidden errors, which previously would only retry on 429 TooManyRequests.
Bug Fixes
The use of Tuple with nested inner types in solid definitions no longer causes GraphQL errors.
When searching assets in Dagit, keyboard navigation to the highlighted suggestion now navigates to the correct asset.
In some cases, run status strings in Dagit (e.g. “Queued”, “Running”, “Failed”) did not accurately match the status of the run. This has been repaired.
The experimental CLI command dagster new-repo should now properly generate subdirectories and files, without needing to install dagster from source (e.g. with pip install --editable).
Sensor minimum intervals now interact in a more compatible way with sensor daemon intervals to minimize evaluation ticks getting skipped. This should result in the cadence of sensor evaluations being less choppy.
Dependencies
Removed Dagster’s pin of the pendulum datetime/timezone library.
Documentation
Added an example of how to write a user-in-the-loop pipeline.