[experimental] AutomationCondition.eager() will now only launch runs for partitions that become missing after the condition has been added to the asset. This avoids situations in which the eager policy kicks off a large amount of work when added to an asset with many missing historical static/dynamic partitions.
[experimental] Added a new AutomationCondition.asset_matches() condition, which can apply a condition against an arbitrary asset in the graph.
[experimental] Added the ability to specify multiple kinds for an asset with the kinds parameter.
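A minimal sketch of the new parameter (the asset and kind names here are illustrative):

```python
from dagster import asset

# "python" and "snowflake" are example kinds; they render as badges in the UI.
@asset(kinds={"python", "snowflake"})
def customers(): ...
```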
[dagster-github] Added create_pull_request method on GithubClient that enables creating a pull request.
[dagster-github] Added create_ref method on GithubClient that enables creating a new branch.
[dagster-embedded-elt] dlt assets now generate column metadata for child tables.
[dagster-embedded-elt] dlt assets can now fetch row count metadata with dlt.run(...).fetch_row_count() for both partitioned and non-partitioned assets. Thanks @kristianandre!
[dagster-airbyte] relation identifier metadata is now attached to Airbyte assets.
[dagster-embedded-elt] relation identifier metadata is now attached to sling assets.
[dagster-embedded-elt] relation identifier metadata is now attached to dlt assets.
JobDefinition, @job, and define_asset_job now take a run_tags parameter. If run_tags is defined, those tags are attached to all runs of the job and the tags parameter is treated as definition-level only. If run_tags is not set, tags are attached to all runs of the job (the previous behavior). This change enables the separation of definition-level and run-level tags on jobs.
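A minimal sketch of how the two parameters interact (job name, selection, and tag values are illustrative):

```python
from dagster import define_asset_job

nightly_job = define_asset_job(
    "nightly_job",
    selection="*",
    tags={"owner": "data-platform"},   # definition-level only, since run_tags is set
    run_tags={"priority": "high"},     # attached to every run launched for this job
)
```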
The env var DAGSTER_COMPUTE_LOG_TAIL_WAIT_AFTER_FINISH can now be used to pause before capturing logs (thanks @HynekBlaha!)
The kinds parameter is now available on AssetSpec.
OutputContext now exposes the AssetSpec of the asset that is being stored as an output (thanks, @marijncv!)
[experimental] Backfills are incorporated into the Runs page to improve observability and provide a more simplified UI. See the GitHub discussion for more details.
[ui] The updated navigation is now enabled for all users. You can revert to the legacy navigation via a feature flag. See GitHub discussion for more.
[ui] Improved performance for loading partition statuses of an asset job.
[dagster-docker] Run containers launched by the DockerRunLauncher now include dagster/job_name and dagster/run_id labels.
[dagster-aws] The ECS launcher now automatically retries transient ECS RunTask failures (like capacity placement failures).
Reduced the volume of logs emitted by the run coordinator for runs blocked by global concurrency limits.
[ui] Asset checks are now visible in the run page header when launched from a schedule.
[ui] Fixed asset group outlines not rendering properly in Safari.
[ui] Reporting a materialization event now removes the asset from the asset health "Execution failures" list and returns the asset to a green / success state.
[ui] When setting an AutomationCondition on an asset, the label of this condition will now be shown in the sidebar on the Asset Details page.
[ui] Previously, filtering runs by Created date would include runs that had been updated after the lower bound of the requested time range. This has been updated so that only runs created after the lower bound will be included.
[ui] When using the new experimental navigation flag, added a fix for the automations page for code locations that have schedules but no sensors.
[ui] Fixed tag wrapping on asset column schema table.
[ui] Restored object counts on the code location list view.
[ui] Padding when displaying warnings on unsupported run coordinators has been corrected (thanks @hainenber!)
[dagster-k8s] Fixed an issue where run termination sometimes did not terminate all step processes when using the k8s_job_executor, if the termination was initiated while it was in the middle of launching a step pod.
AssetSpec now has a with_io_manager_key method that returns an AssetSpec with the appropriate metadata entry to dictate the key for the IO manager used to load it. The deprecation warning for SourceAsset now references this method.
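A minimal sketch of the replacement pattern for SourceAsset(io_manager_key=...) (the asset and key names are illustrative):

```python
from dagster import AssetSpec

# An external table that downstream assets load via a specific IO manager key.
raw_events = AssetSpec("raw_events").with_io_manager_key("warehouse_io_manager")
```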
Added a max_runtime_seconds configuration option to run monitoring, allowing you to specify that any run in your Dagster deployment should terminate if it exceeds a certain runtime. Previously, jobs had to be individually tagged with a dagster/max_runtime tag in order to take advantage of this feature. Jobs and runs can still be tagged in order to override this value for an individual run.
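A sketch of the corresponding dagster.yaml configuration (the 2-hour limit is an example value):

```yaml
# dagster.yaml
run_monitoring:
  enabled: true
  max_runtime_seconds: 7200  # terminate any run that exceeds 2 hours
```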
It is now possible to set both tags and a custom execution_fn on a ScheduleDefinition. Schedule tags are intended to annotate the definition and can be used to search and filter in the UI. They will not be attached to run requests emitted from the schedule if a custom execution_fn is provided. If no custom execution_fn is provided, then for backwards compatibility the tags will also be automatically attached to run requests emitted from the schedule.
SensorDefinition and all of its variants/decorators now accept a tags parameter. The tags annotate the definition and can be used to search and filter in the UI.
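A minimal sketch of definition-level tags on a schedule and a sensor (names and tag values are illustrative):

```python
from dagster import RunRequest, ScheduleDefinition, define_asset_job, sensor

nightly_job = define_asset_job("nightly_job", selection="*")

nightly_schedule = ScheduleDefinition(
    job=nightly_job,
    cron_schedule="0 2 * * *",
    tags={"team": "analytics"},  # annotates the definition; searchable in the UI
)

@sensor(job=nightly_job, tags={"team": "analytics"})
def nightly_sensor(context):
    return RunRequest()
```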
Added the dagster definitions validate command to Dagster CLI. This command validates if Dagster definitions are loadable.
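For example, assuming your definitions are importable from a my_project.definitions module (the module path is illustrative; the command accepts the usual code location options):

```bash
dagster definitions validate -m my_project.definitions
```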
[dagster-databricks] Databricks Pipes now allow running tasks in existing clusters.
Fixed an issue where calling build_op_context in a unit test would sometimes raise a "TypeError: signal handler must be signal.SIG_IGN, signal.SIG_DFL, or a callable object" exception on process shutdown.
[dagster-webserver] Fixed an issue where the incorrect sensor/schedule state would appear when using DefaultScheduleStatus.STOPPED / DefaultSensorStatus.STOPPED after performing a reset.
Fixed an issue where users with Launcher permissions for a particular code location were not able to cancel backfills targeting only assets in that code location.
Fixed an issue preventing long-running alerts from being sent when there was a quick subsequent run.
Added --partition-range option to dagster asset materialize CLI. This option only works for assets with single-run Backfill Policies.
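A hedged sketch of usage; the asset name is illustrative and the start...end delimiter is an assumption about the expected range format:

```bash
dagster asset materialize --select daily_events --partition-range 2024-01-01...2024-01-31
```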
Added a new .without() method to AutomationCondition.eager(), AutomationCondition.on_cron(), and AutomationCondition.on_missing() which allows sub-conditions to be removed, e.g. AutomationCondition.eager().without(AutomationCondition.in_latest_time_window()).
Added AutomationCondition.on_missing(), which materializes an asset partition as soon as all of its parent partitions are filled in.
pyproject.toml can now load multiple Python modules as individual code locations. Thanks, @bdart!
[ui] If a code location has errors, a button will be shown to view the error on any page in the UI.
[dagster-adls2] The ADLS2PickleIOManager now accepts lease_duration configuration. Thanks, @0xfabioo!
[dagster-embedded-elt] Added an option to fetch row count metadata after running a Sling sync by calling sling.replicate(...).fetch_row_count().
[dagster-fivetran] The dagster-fivetran integration will now automatically pull and attach column schema metadata after each sync.
Fixed an issue which could cause errors when using AutomationCondition.any_downstream_condition() with downstream AutoMaterializePolicy objects.
Fixed an issue where process_config_and_initialize did not properly handle processing nested resource config.
[ui] Fixed an issue that would cause some AutomationCondition evaluations to be labeled DepConditionWrapperCondition instead of the key that they were evaluated against.
[dagster-webserver] Fixed an issue with code locations appearing in fluctuating incorrect state in deployments with multiple webserver processes.
[dagster-embedded-elt] Fixed an issue where Sling column lineage did not correctly resolve in the Dagster UI.
[dagster-k8s] The wait_for_pod check now waits until all pods are available, rather than erroneously returning after the first pod becomes available. Thanks @easontm!
The AssetSpec constructor now raises an error if an invalid group name is provided, instead of an error being raised when constructing the Definitions object.
dagster/relation_identifier metadata is now automatically attached to assets which are stored using a DbIOManager.
[ui] Streamlined the code location list view.
[ui] The “group by” selection on the Timeline Overview page is now part of the query parameters, meaning it will be retained when linked to directly or when navigating between pages.
[dagster-dbt] When instantiating DbtCliResource, the project_dir argument will now override the DBT_PROJECT_DIR environment variable if it exists in the local environment (thanks, @marijncv!).
[dagster-embedded-elt] dlt assets now generate rows_loaded metadata (thanks, @kristianandre!).
Fixed a bug where setting asset_selection=[] on RunRequest objects yielded from sensors using asset_selection would select all assets instead of none.
Fixed a bug where the tick status filter for batch-fetched GraphQL sensors was not being respected.
[examples] Fixed missing assets in assets_dbt_python example.
[dagster-airbyte] Updated the op names generated for Airbyte assets to include the full connection ID, avoiding name collisions.
[dagster-dbt] Fixed issue causing dagster-dbt to be unable to load dbt projects where the adapter did not have a database field set (thanks, @dargmuesli!)
[dagster-dbt] Removed a warning about not being able to load the dbt.adapters.duckdb module when loading dbt assets without that package installed.
You may now wipe specific asset partitions directly from the execution context in user code by calling DagsterInstance.wipe_asset_partitions.
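A rough sketch from inside user code; the (asset_key, partition_keys) signature shown here is an assumption:

```python
from dagster import AssetKey, OpExecutionContext, op

@op
def cleanup(context: OpExecutionContext):
    # Assumed signature: wipe_asset_partitions(asset_key, partition_keys)
    context.instance.wipe_asset_partitions(
        AssetKey("daily_events"), ["2024-01-01", "2024-01-02"]
    )
```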
Dagster+ users with a "Viewer" role can now create private catalog views.
Fixed an issue where the default IOManager used by Dagster+ Serverless did not respect setting allow_missing_partitions as metadata on a downstream asset.
Fixed an issue where runs in Dagster+ Serverless that materialized partitioned assets would sometimes fail with an object has no attribute '_base_path' error.
[dagster-graphql] Fixed an issue where the statuses filter argument to the sensorsOrError GraphQL field was sometimes ignored when querying GraphQL for multiple sensors at the same time.
Updated multi-asset sensor definition to be less likely to timeout queries against the asset history storage.
Consolidated the CapturedLogManager and ComputeLogManager APIs into a single base class.
[ui] Added an option under user settings to clear client-side IndexedDB caches as an escape hatch for caching-related bugs.
[dagster-aws, dagster-pipes] Added a new PipesECSClient to allow Dagster to interface with ECS tasks.
[dagster-dbt] Increased the default timeout when terminating a run that is running a dbt subprocess to wait 25 seconds for the subprocess to cleanly terminate. Previously, it would only wait 2 seconds.
[dagster-sdf] Increased the default timeout when terminating a run that is running an sdf subprocess to wait 25 seconds for the subprocess to cleanly terminate. Previously, it would only wait 2 seconds.
[dagster-sdf] Added support for caching and asset selection (Thanks, akbog!)
[dagster-dlt] Added support for AutomationCondition using DagsterDltTranslator.get_automation_condition() (Thanks, aksestok!)
[ui] Fixed a bug where in-progress runs from a backfill could not be terminated from the backfill UI.
[ui] Fixed a bug that caused an "Asset must be part of at least one job" error when clicking on an external asset in the asset graph UI
Fixed an issue where viewing run logs with the latest 5.0 release of the watchdog package raised an exception.
[ui] Fixed issue causing the “filter to group” action in the lineage graph to have no effect.
[ui] Fixed case sensitivity when searching for partitions in the launchpad.
[ui] Fixed a bug which would redirect to the events tab for an asset if you loaded the partitions tab directly.
[ui] Fixed issue causing runs to get skipped when paging through the runs list (Thanks, @HynekBlaha!)
[ui] Fixed a bug where the asset catalog list view for a particular group would show all assets.
[dagster-dbt] Fixed a bug where empty newlines in raw dbt logs were not handled correctly.
[dagster-k8s, dagster-celery-k8s] Correctly set dagster/image label when image is provided from user_defined_k8s_config. (Thanks, @HynekBlaha!)
[dagster-duckdb] Fixed an issue for DuckDB versions older than 1.0.0 where an unsupported configuration option, custom_user_agent, was provided by default
[dagster-k8s] Fixed an issue where Kubernetes Pipes failed to create a pod if the op name contained capital or non-alphanumeric characters.
[dagster-embedded-elt] Fixed an issue where dbt assets downstream of Sling were skipped
[dagster-aws] Direct AWS API arguments in PipesGlueClient.run have been deprecated and will be removed in 1.9.0. The new params argument should be used instead.
The default io_manager on Serverless now supports the allow_missing_partitions configuration option.
Fixed a bug that caused an error when loading the launchpad for a partition when using Dagster+ with an agent version below 1.8.2.
1.8.3 (core) / 0.24.3 (libraries) (YANKED - This version of Dagster resulted in errors when trying to launch runs that target individual asset partitions)
When different assets within a code location have different PartitionsDefinitions, there will no longer be an implicit asset job __ASSET_JOB_... for each PartitionsDefinition; there will just be one with all the assets. This reduces the time it takes to load code locations with assets with many different PartitionsDefinitions.
[ui] Fixed a collection of broken links pointing to renamed Declarative Automation pages.
[dagster-dbt] Fixed issue preventing usage of MultiPartitionMapping with @dbt_assets (Thanks, @arookieds!)
[dagster-azure] Fixed issue that would cause an error when configuring an AzureBlobComputeLogManager without a secret_key (Thanks, @ion-elgreco and @HynekBlaha!)
Fixed a bug where runs for job backfills using backfill policies that materialize multiple partitions in a single run would be launched multiple times.
Fixed an issue where runs would sometimes move into a FAILURE state rather than a CANCELED state if an error occurred after a run termination request was started.
[ui] Fixed a bug where an incorrect dialog was shown when canceling a backfill.
[ui] Fixed the asset page header breadcrumbs for assets with very long key path elements.
[ui] Fixed the run timeline time markers for users in timezones that have off-hour offsets.
[ui] Fixed bar chart tooltips to use correct timezone for timestamp display.
[ui] Fixed an issue introduced in the 1.8.0 release where some jobs created from graph-backed assets were missing the “View as Asset Graph” toggle in the Dagster UI.
[dagster-airbyte] AirbyteCloudResource now supports client_id and client_secret for authentication - the api_key approach is no longer supported. This is motivated by the deprecation of portal.airbyte.com on August 15, 2024.
You can now pass AssetSpec objects to the assets argument of Definitions, to let Dagster know about assets without associated materialization functions. This replaces the experimental external_assets_from_specs API, as well as SourceAssets, which are now deprecated. Unlike SourceAssets, AssetSpecs can be used for non-materializable assets with dependencies on Dagster assets, such as BI dashboards that live downstream of warehouse tables that are orchestrated by Dagster. [docs].
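A minimal sketch of the pattern (asset names are illustrative):

```python
from dagster import AssetSpec, Definitions, asset

@asset
def orders(): ...

# A BI dashboard that Dagster doesn't materialize, but that sits downstream of orders.
orders_dashboard = AssetSpec("orders_dashboard", deps=["orders"])

defs = Definitions(assets=[orders, orders_dashboard])
```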
[Experimental] You can now merge Definitions objects together into a single larger Definitions object, using the new Definitions.merge API (doc). This makes it easier to structure large Dagster projects, as you can construct a Definitions object for each sub-domain and then merge them together at the top level.
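A minimal sketch of merging per-domain Definitions objects (names are illustrative):

```python
from dagster import Definitions, asset

@asset
def sales_table(): ...

@asset
def marketing_table(): ...

sales_defs = Definitions(assets=[sales_table])
marketing_defs = Definitions(assets=[marketing_table])

defs = Definitions.merge(sales_defs, marketing_defs)
```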
[Experimental] You can now add AutomationConditions to your assets to have them automatically executed in response to specific conditions (docs). These serve as a drop-in replacement and improvement over the AutoMaterializePolicy system, which is being marked as deprecated.
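A minimal sketch, assuming the automation_condition parameter on @asset (the asset name is illustrative):

```python
from dagster import AutomationCondition, asset

@asset(automation_condition=AutomationCondition.eager())
def daily_summary(): ...
```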
[Experimental] Sensors and schedules can now directly target assets, via the new target parameter, instead of needing to construct a job.
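A minimal sketch of a schedule that targets an asset selection directly (the name, cron string, and group are illustrative):

```python
from dagster import AssetSelection, ScheduleDefinition

core_schedule = ScheduleDefinition(
    name="core_assets_hourly",
    cron_schedule="0 * * * *",
    target=AssetSelection.groups("core"),  # no job definition needed
)
```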
[Experimental] The Timeline page can now be grouped by job or automation. When grouped by automation, all runs launched by a sensor responsible for evaluating automation conditions will get bucketed to that sensor in the timeline instead of the "Ad-hoc materializations" row. Enable this by opting in to the Experimental navigation feature flag in user settings.
The Asset Details page now prominently displays row count and relation identifier (table name, schema, database), when corresponding asset metadata values are provided. For more information, see the metadata and tags docs.
Introduced code reference metadata which can be used to open local files in your editor, or files in source control in your browser. Dagster can automatically attach code references to your assets’ Python source. For more information, see the docs.
[Experimental] Metadata bound checks – The new build_metadata_bounds_checks API [doc] enables easily defining asset checks that fail if a numeric asset metadata value falls outside given bounds.
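A hedged sketch; the keyword names shown here (assets, metadata_key, min_value) are assumptions about the API surface:

```python
from dagster import asset, build_metadata_bounds_checks

@asset
def orders(): ...

# Fail a check if the reported row count ever drops below 1.
row_count_checks = build_metadata_bounds_checks(
    assets=[orders],
    metadata_key="dagster/row_count",
    min_value=1,
)
```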
[Experimental] Freshness checks from dbt config - Freshness checks can now be set on dbt assets, straight from dbt. Check out the API docs for build_freshness_checks_from_dbt_assets for more.
Dagster Pipes (PipesSubprocessClient) and its integrations with Lambda (PipesLambdaClient), Kubernetes (PipesK8sClient), and Databricks (PipesDatabricksClient) are no longer experimental.
The new DbtProject class (docs) makes it simpler to define dbt assets that can be constructed in both development and production. DbtProject.prepare_if_dev() eliminates boilerplate for local development, and the dagster-dbt project prepare-and-package CLI helps pull deps and generate the manifest at build time.
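A minimal sketch of the pattern (the project directory path is illustrative):

```python
from pathlib import Path
from dagster_dbt import DbtCliResource, DbtProject, dbt_assets

my_project = DbtProject(
    project_dir=Path(__file__).joinpath("..", "dbt_project").resolve()
)
my_project.prepare_if_dev()  # builds the manifest automatically during local development

@dbt_assets(manifest=my_project.manifest_path)
def my_dbt_assets(context, dbt: DbtCliResource):
    yield from dbt.cli(["build"], context=context).stream()
```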
[Experimental] The dagster-looker package can be used to define a set of Dagster assets from a Looker project that is defined in LookML and is backed by git. See the GitHub discussion for more details.
Catalog views — In Dagster+, selections into the catalog can now be saved and shared across an organization as catalog views. Catalog views have a name and description, and can be applied to scope the catalog, asset health, and global asset lineage pages against the view’s saved selection.
Code location history — Dagster+ now stores a history of code location deploys, including the ability to revert to a previously deployed configuration.
Changes since 1.7.16 (core) / 0.22.16 (libraries)
The target of both schedules and sensors can now be set using an experimental target parameter that accepts an AssetSelection or list of assets. Any assets passed this way will also be included automatically in the assets list of the containing Definitions object.
ScheduleDefinition and SensorDefinition now have a target argument that can accept an AssetSelection.
You can now wipe materializations for individual asset partitions.
AssetSpec now has a partitions_def attribute. All the AssetSpecs provided to a @multi_asset must have the same partitions_def.
The assets argument on materialize now accepts AssetSpecs.
The assets argument on Definitions now accepts AssetSpecs.
The new merge method on Definitions enables combining multiple Definitions objects into a single larger Definitions object with their combined contents.
Runs requested through the Declarative Automation system now have a dagster/from_automation_condition: true tag applied to them.
Changed the run tags query to be more performant. Thanks @egordm!
Dagster Pipes and its integrations with Lambda, Kubernetes, and Databricks are no longer experimental.
The Definitions constructor will no longer raise errors when the provided definitions aren’t mutually resolvable – e.g. when there are conflicting definitions with the same name, unsatisfied resource dependencies, etc. These errors will still be raised at code location load time. The new Definitions.validate_loadable static method also allows performing the validation steps that used to occur in the constructor.
AssetsDefinition objects provided to a Definitions object will now be deduplicated by reference equality. That is, the following will now work:
```python
from dagster import asset, Definitions

@asset
def my_asset(): ...

defs = Definitions(assets=[my_asset, my_asset])  # Deduped into just one AssetsDefinition.
```
[dagster-embedded-elt] Added translator options for the dlt integration to override the auto-materialize policy, group name, owners, and tags.
[dagster-sdf] Introducing the dagster-sdf integration for data modeling and transformations powered by sdf.
[dagster-dbt] Added a new with_insights() method which can be used to more easily attach Dagster+ Insights metrics to dbt executions: dbt.cli(...).stream().with_insights()
Dagster now raises an error when an op yields an output corresponding to an unselected asset.
Fixed a bug that caused downstream ops within a graph-backed asset to be skipped when they were downstream of assets within the graph-backed assets that aren’t part of the selection for the current run.
Fixed a bug where code references did not work properly for self-hosted GitLab instances. Thanks @cooperellidge!
[ui] When engine events with errors appear in run logs, their metadata entries are now rendered correctly.
[ui] The asset catalog greeting now uses your first name from your identity provider.
[ui] The create alert modal now links to the alerting documentation, and links to the documentation have been updated.
[ui] Fixed an issue introduced in the 1.7.13 release where some asset jobs were only displaying their ops in the Dagster UI instead of their assets.
Fixed an issue where terminating a run while it was using the Snowflake python connector would sometimes move it into a FAILURE state instead of a CANCELED state.
Fixed an issue where backfills would sometimes move into a FAILURE state instead of a CANCELED state when the backfill was canceled.
The experimental and deprecated build_asset_with_blocking_check has been removed. Use the blocking argument on @asset_check instead.
Users with mypy and pydantic 1 may now experience a “metaclass conflict” error when using Config. Previously this would occur when using pydantic 2.
AutoMaterializeSensorDefinition has been renamed AutomationConditionSensorDefinition.
The deprecated methods of the ComputeLogManager have been removed. Custom ComputeLogManager implementations must also implement the CapturedLogManager interface. This will not affect any of the core implementations available in the core dagster package or the library packages.
By default, an AutomationConditionSensorDefinition with the name “default_automation_condition_sensor” will be constructed for each code location, and will handle evaluating and launching runs for all AutomationConditions and AutoMaterializePolicies within that code location. You can restore the previous behavior by setting:
```yaml
auto_materialize:
  use_sensors: false
```
in your dagster.yaml file.
[dagster-dbt] Support for dbt-core==1.6.* has been removed because the version is now end-of-life.
[dagster-dbt] The following deprecated APIs have been removed:
KeyPrefixDagsterDbtTranslator has been removed. To modify the asset keys for a set of dbt assets, implement DagsterDbtTranslator.get_asset_key() instead.
Support for setting freshness policies through dbt metadata on field +meta.dagster_freshness_policy has been removed. Use +meta.dagster.freshness_policy instead.
Support for setting auto-materialize policies through dbt metadata on field +meta.dagster_auto_materialize_policy has been removed. Use +meta.dagster.auto_materialize_policy instead.
Support for load_assets_from_dbt_project, load_assets_from_dbt_manifest, and dbt_cli_resource has been removed. Use @dbt_assets, DbtCliResource, and DbtProject instead to define how to load dbt assets from a dbt project and to execute them.
Support for prebuilt ops like dbt_run_op, dbt_compile_op, etc. has been removed. Use @op and DbtCliResource directly to execute dbt commands in an op.
Properties on AssetExecutionContext, OpExecutionContext, and ScheduleExecutionContext that include datetimes now return standard Python datetime objects instead of Pendulum datetimes. The types in the public API for these properties have always been datetime and this change should not be breaking in the majority of cases, but Pendulum datetimes include some additional methods that are not present on standard Python datetimes, and any code that was using those methods will need to be updated to either no longer use those methods or transform the datetime into a Pendulum datetime. See the 1.8 migration guide for more information and examples.
MemoizableIOManager, VersionStrategy, SourceHashVersionStrategy, OpVersionContext, ResourceVersionContext, and MEMOIZED_RUN_TAG, which have been deprecated and experimental since pre-1.0, have been removed.
The Run Status column of the Backfills page has been removed. This column was only populated for backfills of jobs. To see the run statuses for job backfills, click on the backfill ID to get to the Backfill Details page.
The experimental external_assets_from_specs API has been deprecated. Instead, you can directly pass AssetSpec objects to the assets argument of the Definitions constructor.
AutoMaterializePolicy has been marked as deprecated in favor of AutomationCondition, which provides a significantly more flexible and customizable interface for expressing when an asset should be executed. More details on how to migrate your AutoMaterializePolicies can be found in the Migration Guide.
SourceAsset has been deprecated. See the major changes section and migration guide for more details.
The asset_partition_key_for_output, asset_partition_keys_for_output, asset_partition_key_range_for_output, and asset_partitions_time_window_for_output methods on OpExecutionContext have been deprecated. Instead, use the corresponding property: partition_key, partition_keys, partition_key_range, or partition_time_window.
The partitions_def parameter on define_asset_job is now deprecated. The partitions_def for an asset job is determined from the partitions_def attributes on the assets it targets, so this parameter is redundant.
[dagster-shell] create_shell_command_op and create_shell_script_op have been marked as deprecated in favor of PipesSubprocessClient (see details in Dagster Pipes subprocess reference)
[dagster-airbyte] load_assets_from_airbyte_project is now deprecated, because the Octavia CLI that it relies on is an experimental feature that is no longer supported. Use build_airbyte_assets or load_assets_from_airbyte_instance instead.
In Dagster+, selections into the catalog can now be saved and shared across an organization as catalog views. Catalog views have a name and description, and can be applied to scope the catalog, asset health, and global asset lineage pages against the view’s saved selection.
In Dagster+ run alerts, if you are running Dagster 1.8 or greater in your user code, you will now receive exception-level information in the alert body.
[dagster-celery-k8s] Added a per_step_k8s_config configuration option to the celery_k8s_job_executor, allowing the k8s configuration of individual steps to be configured at run launch time. Thanks @alekseik1!
[dagster-dbt] Deprecated the log_column_level_metadata macro in favor of the new with_column_metadata API.
[dagster-airbyte] Deprecated load_assets_from_airbyte_project as the Octavia CLI has been deprecated.
On June 20, 2024, AWS changed the AWS CloudMap CreateService API to allow resource-level permissions. The Dagster+ ECS Agent uses this API to launch code locations. We’ve updated the Dagster+ ECS Agent CloudFormation template to accommodate this change for new users. Existing users have until October 14, 2024 to add the new permissions and should have already received similar communication directly from AWS.
Fixed a bug with BigQuery cost tracking in Dagster+ insights, where some runs would fail if there were null values for either total_byte_billed or total_slot_ms in the BigQuery INFORMATION_SCHEMA.JOBS table.
Fixed an issue where code locations that failed to load with extremely large error messages or stack traces would sometimes cause errors with agent heartbeats until the code location was redeployed.
[blueprints] When specifying an asset key in ShellCommandBlueprint, you can now use slashes as a delimiter to generate an AssetKey with multiple path components.
[community-contribution][mlflow] The mlflow resource now has an mlflow_run_id attribute (Thanks Joe Percivall!)
[community-contribution][mlflow] The mlflow resource will now retry when it fails to fetch the mlflow run ID (Thanks Joe Percivall!)