Asset search results now display compute and storage kind icons.
Asset jobs where the underlying assets have multiple backfill policies will no longer fail at definition time. Instead, the backfill policy for the job will use the minimum max_partitions_per_run from the job’s constituent assets.
[dagstermill] asset_tags can now be specified when building dagstermill assets.
[dagster-embedded-elt] Custom asset tags can be applied to Sling assets via the DagsterSlingTranslator.
[dagster-embedded-elt] dlt assets now automatically have dagster/storage_kind tags attached.
tags passed to outs in graph_multi_asset now get correctly propagated to the resulting assets.
[ui] Fixed an issue where, when multiple runs were started at the same time to materialize the same asset, the most recent run was not always shown as in progress in the asset graph in the Dagster UI.
The “newly updated” auto-materialize rule will now respond to either new observations or materializations for observable assets.
build_metadata_bounds_checks now no longer errors when targeting metadata keys that have special characters.
Updated the Schedule concept page to be a “jumping off” point for all-things scheduling, including a high-level look at how schedules work, their benefits, and what you need to know before diving in.
[experimental] The backfill daemon can now store logs and display them in the UI for increased visibility into the daemon’s behavior. Please contact Dagster Labs if you are interested in piloting this experimental feature.
Added a --read-only flag to the dagster-cloud ci branch-deployment CLI command, which returns the current branch deployment name for the current code repository branch without updating the status of the branch deployment.
Dagster will now display a “storage kind” tag on assets in the UI, similar to the existing compute kind. To set storage kind for an asset, set its dagster/storage_kind tag.
You can now set a retry policy on dbt assets to enable coarse-grained retries with delay and jitter. For fine-grained partial retries, we still recommend invoking dbt retry within a try/except block to avoid unnecessary, duplicate work.
AssetExecutionContext now exposes a has_partition_key_range property.
The owners, metadata, tags, and deps properties on AssetSpec are no longer Optional. The AssetSpec constructor still accepts None values, which are coerced to empty collections of the relevant type.
The docker_executor and k8s_job_executor now consider at most 1000 events at a time when loading events from the current run to determine which steps should be launched. This value can be tuned by setting the DAGSTER_EXECUTOR_POP_EVENTS_LIMIT environment variable in the run process.
Added a dagster/retry_on_asset_or_op_failure tag that can be added to jobs to override run retry behavior for runs of specific jobs. See the docs for more information.
Improved the sensor produced by build_sensor_for_freshness_checks to describe when/why it skips evaluating freshness checks.
A new “Runs” tab on the backfill details page allows you to see list and timeline views of the runs launched by the backfill.
[dagster-dbt] dbt will now attach relation identifier metadata to asset materializations to indicate where the built model is materialized to.
[dagster-graphql] The GraphQL Python client will now include the HTTP error code in the exception when a query fails. Thanks @yuvalgimmunai!
Fixed sensor logging behavior with the @multi_asset_sensor.
ScheduleDefinition now properly supports being passed a RunConfig object.
Previously, when an asset function returned a MaterializeResult but had no return type annotation, the IO manager would still be invoked with a None value. Now, the IO manager is not invoked.
The AssetSpec constructor now raises an error if an invalid owner string is passed to it.
When using the graph_multi_asset decorator, the code_version property on the AssetOuts passed in was previously ignored. It is now respected.
[dagster-deltalake] Fixed GcsConfig import error and type error for partitioned assets (Thanks @thmswt)
The asset graph and asset catalog now show the materialization status of External assets (when manually reported) rather than showing “Never observed”
The ability to set a custom base deployment when creating a branch deployment has been enabled for all organizations.
When a code location fails to deploy, the Kubernetes agent now includes any warning messages from the underlying ReplicaSet in the failure message to aid with troubleshooting.
Serverless deployments now support using a requirements.txt with hashes.
Fixed an issue where the dagster-cloud job launch command did not support specifying asset keys with prefixes in the --asset-key argument.
[catalog UI] Catalog search now allows filtering by type, i.e. group:, code location:, tag:, owner:.
New Dagster+ accounts will now start with two default alert policies: one that alerts if the default free credit budget for your plan is exceeded, and one that alerts if a single run goes over 24 hours. These alerts are sent as emails to the email address with which the account was initially created.
The new build_metadata_bounds_checks API creates asset checks which verify that numeric metadata values on asset materializations fall within min or max values. See the documentation for more information.
Fixed an incompatibility between build_sensor_for_freshness_checks and Dagster Plus. This API should now work when used with Dagster Plus.
[ui] Billing / usage charts no longer appear black-on-black in Dagster’s dark mode.
[ui] The asset catalog is now available for teams plans.
[ui] Fixed a bug where the alert policy editor would misinterpret the threshold on a long-running job alert.
[kubernetes] Added a dagsterCloudAgent.additionalPodSpecConfig to the Kubernetes agent Helm chart allowing arbitrary pod configuration to be applied to the agent pod.
[ECS] Fixed an issue where the ECS agent would sometimes raise a “Too many concurrent attempts to create a new revision of the specified family” exception when using agent replicas.
Fixed spurious errors in logs due to module shadowing.
Fixed an issue in the backfill daemon where, if the assets to be materialized had different BackfillPolicy objects, each asset would be materialized in its own run rather than assets being grouped together into single runs.
Fixed an issue that could cause the Asset Daemon to lose information in its cursor about an asset if that asset’s code location was temporarily unavailable.
[dagster-dbt] Mitigated issues with CLI length limits by listing only the specific dbt tests that are needed when the tests aren’t included via indirect selection, rather than listing all tests.
The backfill daemon now has additional logging to document the progression through each tick and why assets are and are not materialized during each evaluation of a backfill.
Made performance improvements in both calculating and storing data version for assets, especially for assets with a large fan-in.
Standardized table row count metadata output by various integrations to dagster/row_count.
[dagster-aws][community-contribution] Additional parameters can now be passed to the following resources: CloudwatchLogsHandler, ECRPublicClient, SecretsManagerResource, and SSMResource. Thanks @jacob-white-simplisafe!
Fixed issue that could cause runs to fail if they targeted any assets which had a metadata value of type TableMetadataValue, TableSchemaMetadataValue, or TableColumnLineageMetadataValue defined.
Fixed an issue which could cause evaluations produced via the Auto-materialize system to not render the “skip”-type rules.
Backfills of asset jobs now correctly use the BackfillPolicy of the underlying assets in the job.
[dagster-databricks][community-contribution] databricks-sdk version bumped to 0.17.0. Thanks @lamalex!