Performance improvements when loading code locations using multi-assets with many asset keys.
AutomationCondition.in_progress() will now be true if an asset partition is part of an in-progress backfill that has not yet executed it. The prior behavior, which only considered runs, is encapsulated in AutomationCondition.execution_in_progress().
[ui] Added tag filter to the jobs page.
[ui] Preserve user login state for a longer period of time.
[dagster-dbt] Performance improvements when loading definitions using build_dbt_asset_selection.
[dagster-docker] container_kwargs.stop_timeout can now be set when using the DockerRunLauncher or docker_executor to configure the amount of time that Docker will wait when terminating a run for it to clean up before forcibly stopping it with a SIGKILL signal.
[dagster-sigma] The Sigma integration now fetches initial API responses in parallel, speeding up initial load.
[dagster-looker] Liquid templates in derived table SQL are now naively rendered on a best-effort basis.
[dagster-looker] Added support for views and explores that rely on refinements or extends.
[dagster-looker] Explores and dashboards are now fetched from the Looker API in parallel.
Fixed an issue with AutomationCondition.eager() that could cause it to attempt to launch a second attempt of an asset in cases where it was skipped or failed during a run where one of its parents successfully materialized.
Fixed an issue which would cause AutomationConditionSensorDefinitions to not be evaluated if the use_user_code_server value was toggled after the initial evaluation.
Fixed an issue where configuration values for aliased pydantic fields would be dropped.
[ui] Fix an issue in the code locations page where invalid query parameters could crash the page.
[ui] Fix navigation between deployments when query parameters are present in the URL.
[helm] The blockOpConcurrencyLimitedRuns section of queuedRunCoordinator now correctly templates the appropriate config.
[dagster-sigma] Fixed pulling incomplete data for very large workspaces.
The AutomationCondition.eager(), AutomationCondition.missing(), and AutomationCondition.on_cron() conditions are now compatible with asset checks.
Added AssetSelection.materializable(), which returns only assets that are materializable in an existing selection.
Added a new AutomationCondition.all_deps_blocking_checks_passed condition, which can be used to prevent materialization when any upstream blocking checks have failed.
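A minimal sketch of composing this condition with an existing policy (the asset and dependency names are illustrative):
import dagster as dg

@dg.asset(
    deps=["upstream_asset"],  # illustrative upstream with blocking checks
    automation_condition=dg.AutomationCondition.eager()
    & dg.AutomationCondition.all_deps_blocking_checks_passed(),
)
def downstream_asset() -> None:
    ...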
Added a code_version parameter to the @graph_asset decorator.
If a LaunchPartitionBackfill mutation is submitted to GQL with invalid partition keys, it will now return an early PartitionKeysNotFoundError.
AssetSelection.checks_for_assets now accepts AssetKeys and string asset keys, in addition to AssetsDefinitions.
[ui] Added a search bar to partitions tab on the asset details page.
[ui] Restored docked left nav behavior for wide viewports.
[dagster-aws] get_objects now has a since_last_modified parameter that enables fetching only objects modified after a given timestamp.
[dagster-aws] New AWS EMR Dagster Pipes client (dagster_aws.pipes.PipesEMRClient) for running and monitoring AWS EMR jobs from Dagster.
Fixed an issue which could cause incorrect evaluation results when using self-dependent partition mappings with AutomationConditions that operate over dependencies.
[ui] Fixed an issue where the breadcrumb on asset pages would flicker nonstop.
[dagster-embedded-elt] Fixed extraction of metadata for dlt assets whose source and destination identifiers differ.
[dagster-databricks] Fixed a permissioning gap that existed with the DatabricksPySparkStepLauncher, so that permissions are now set correctly for non-admin users.
[dagster-dbt] Fixed an issue where column metadata generated with fetch_column_metadata did not work properly for models imported through dbt dependencies.
[experimental] AutomationCondition.eager() will now only launch runs for missing partitions which become missing after the condition has been added to the asset. This avoids situations in which the eager policy kicks off a large amount of work when added to an asset with many missing historical static/dynamic partitions.
[experimental] Added a new AutomationCondition.asset_matches() condition, which can apply a condition against an arbitrary asset in the graph.
[experimental] Added the ability to specify multiple kinds for an asset with the kinds parameter.
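For example (the kind names are illustrative):
import dagster as dg

@dg.asset(kinds={"python", "snowflake"})
def enriched_orders() -> None:
    ...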
[dagster-github] Added create_pull_request method on GithubClient that enables creating a pull request.
[dagster-github] Added create_ref method on GithubClient that enables creating a new branch.
[dagster-embedded-elt] dlt assets now generate column metadata for child tables.
[dagster-embedded-elt] dlt assets can now fetch row count metadata with dlt.run(...).fetch_row_count() for both partitioned and non-partitioned assets. Thanks @kristianandre!
[dagster-airbyte] Relation identifier metadata is now attached to Airbyte assets.
[dagster-embedded-elt] Relation identifier metadata is now attached to Sling assets.
[dagster-embedded-elt] Relation identifier metadata is now attached to dlt assets.
JobDefinition, @job, and define_asset_job now take a run_tags parameter. If run_tags is set, only those tags are attached to all runs of the job, and tags applies to the job definition only. If run_tags is not set, tags continues to be attached to all runs of the job (the previous behavior). This change enables separating definition-level and run-level tags on jobs.
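A sketch of the new separation (tag keys and values are illustrative):
from dagster import job, op

@op
def do_work():
    ...

# With run_tags set, "team" stays on the job definition only, while
# "priority" is attached to every run launched from this job.
@job(tags={"team": "data-eng"}, run_tags={"priority": "high"})
def nightly_job():
    do_work()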
The env var DAGSTER_COMPUTE_LOG_TAIL_WAIT_AFTER_FINISH can now be used to pause before capturing logs (thanks @HynekBlaha!)
The kinds parameter is now available on AssetSpec.
OutputContext now exposes the AssetSpec of the asset that is being stored as an output (thanks, @marijncv!)
[experimental] Backfills are incorporated into the Runs page to improve observability and provide a more simplified UI. See the GitHub discussion for more details.
[ui] The updated navigation is now enabled for all users. You can revert to the legacy navigation via a feature flag. See GitHub discussion for more.
[ui] Improved performance for loading partition statuses of an asset job.
[dagster-docker] Run containers launched by the DockerRunLauncher now include dagster/job_name and dagster/run_id labels.
[dagster-aws] The ECS launcher now automatically retries transient ECS RunTask failures (like capacity placement failures).
Reduced the log volume in the run coordinator for runs blocked by global concurrency limits.
[ui] Asset checks are now visible in the run page header when launched from a schedule.
[ui] Fixed asset group outlines not rendering properly in Safari.
[ui] Reporting a materialization event now removes the asset from the asset health "Execution failures" list and returns the asset to a green / success state.
[ui] When setting an AutomationCondition on an asset, the label of this condition will now be shown in the sidebar on the Asset Details page.
[ui] Previously, filtering runs by Created date would include runs that had been updated after the lower bound of the requested time range. This has been updated so that only runs created after the lower bound will be included.
[ui] When using the new experimental navigation flag, added a fix for the automations page for code locations that have schedules but no sensors.
[ui] Fixed tag wrapping on asset column schema table.
[ui] Restored object counts on the code location list view.
[ui] Padding when displaying warnings on unsupported run coordinators has been corrected (thanks @hainenber!)
[dagster-k8s] Fixed an issue where run termination sometimes did not terminate all step processes when using the k8s_job_executor, if the termination was initiated while it was in the middle of launching a step pod.
AssetSpec now has a with_io_manager_key method that returns an AssetSpec with the appropriate metadata entry to dictate the key for the IO manager used to load it. The deprecation warning for SourceAsset now references this method.
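For example (the asset key and IO manager key are illustrative):
from dagster import AssetSpec

raw_events = AssetSpec("raw_events").with_io_manager_key("warehouse_io_manager")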
Added a max_runtime_seconds configuration option to run monitoring, allowing you to specify that any run in your Dagster deployment should terminate if it exceeds a certain runtime. Previously, jobs had to be individually tagged with a dagster/max_runtime tag in order to take advantage of this feature. Jobs and runs can still be tagged in order to override this value for an individual run.
It is now possible to set both tags and a custom execution_fn on a ScheduleDefinition. Schedule tags are intended to annotate the definition and can be used to search and filter in the UI. They will not be attached to run requests emitted from the schedule if a custom execution_fn is provided. If no custom execution_fn is provided, then for back-compatibility the tags will also be automatically attached to run requests emitted from the schedule.
SensorDefinition and all of its variants/decorators now accept a tags parameter. The tags annotate the definition and can be used to search and filter in the UI.
Added the dagster definitions validate command to Dagster CLI. This command validates if Dagster definitions are loadable.
[dagster-databricks] Databricks Pipes now allow running tasks in existing clusters.
Fixed an issue where calling build_op_context in a unit test would sometimes raise a "TypeError: signal handler must be signal.SIG_IGN, signal.SIG_DFL, or a callable object" exception on process shutdown.
[dagster-webserver] Fix an issue where the incorrect sensor/schedule state would appear when using DefaultScheduleStatus.STOPPED / DefaultSensorStatus.STOPPED after performing a reset.
Fixed an issue where users with Launcher permissions for a particular code location were not able to cancel backfills targeting only assets in that code location.
Fixed an issue preventing long-running alerts from being sent when there was a quick subsequent run.
Added --partition-range option to dagster asset materialize CLI. This option only works for assets with single-run Backfill Policies.
Added a new .without() method to AutomationCondition.eager(), AutomationCondition.on_cron(), and AutomationCondition.on_missing() which allows sub-conditions to be removed, e.g. AutomationCondition.eager().without(AutomationCondition.in_latest_time_window()).
Added AutomationCondition.on_missing(), which materializes an asset partition as soon as all of its parent partitions are filled in.
pyproject.toml can now load multiple Python modules as individual Code Locations. Thanks, @bdart!
[ui] If a code location has errors, a button will be shown to view the error on any page in the UI.
[dagster-adls2] The ADLS2PickleIOManager now accepts lease_duration configuration. Thanks, @0xfabioo!
[dagster-embedded-elt] Added an option to fetch row count metadata after running a Sling sync by calling sling.replicate(...).fetch_row_count().
[dagster-fivetran] The dagster-fivetran integration will now automatically pull and attach column schema metadata after each sync.
Fixed an issue which could cause errors when using AutomationCondition.any_downstream_condition() with downstream AutoMaterializePolicy objects.
Fixed an issue where process_config_and_initialize did not properly handle processing nested resource config.
[ui] Fixed an issue that would cause some AutomationCondition evaluations to be labeled DepConditionWrapperCondition instead of the key that they were evaluated against.
[dagster-webserver] Fixed an issue with code locations appearing in fluctuating incorrect state in deployments with multiple webserver processes.
[dagster-embedded-elt] Fixed an issue where Sling column lineage did not correctly resolve in the Dagster UI.
[dagster-k8s] The wait_for_pod check now waits until all pods are available, rather than erroneously returning after the first pod becomes available. Thanks @easontm!
The AssetSpec constructor now raises an error if an invalid group name is provided, instead of an error being raised when constructing the Definitions object.
dagster/relation_identifier metadata is now automatically attached to assets which are stored using a DbIOManager.
[ui] Streamlined the code location list view.
[ui] The “group by” selection on the Timeline Overview page is now part of the query parameters, meaning it will be retained when linked to directly or when navigating between pages.
[dagster-dbt] When instantiating DbtCliResource, the project_dir argument will now override the DBT_PROJECT_DIR environment variable if it exists in the local environment (thanks, @marijncv!).
[dagster-embedded-elt] dlt assets now generate rows_loaded metadata (thanks, @kristianandre!).
Fixed a bug where setting asset_selection=[] on RunRequest objects yielded from sensors using asset_selection would select all assets instead of none.
Fixed bug where the tick status filter for batch-fetched graphql sensors was not being respected.
[examples] Fixed missing assets in assets_dbt_python example.
[dagster-airbyte] Updated the op names generated for Airbyte assets to include the full connection ID, avoiding name collisions.
[dagster-dbt] Fixed issue causing dagster-dbt to be unable to load dbt projects where the adapter did not have a database field set (thanks, @dargmuesli!)
[dagster-dbt] Removed a warning about not being able to load the dbt.adapters.duckdb module when loading dbt assets without that package installed.
You may now wipe specific asset partitions directly from the execution context in user code by calling DagsterInstance.wipe_asset_partitions.
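A sketch of calling this from user code, assuming the method takes an asset key and a list of partition keys (the asset and partition values are illustrative):
from dagster import AssetKey, OpExecutionContext, op

@op
def wipe_stale_partitions(context: OpExecutionContext):
    # assumed signature: wipe_asset_partitions(asset_key, partition_keys)
    context.instance.wipe_asset_partitions(AssetKey("daily_events"), ["2024-08-01"])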
Dagster+ users with a "Viewer" role can now create private catalog views.
Fixed an issue where the default IOManager used by Dagster+ Serverless did not respect setting allow_missing_partitions as metadata on a downstream asset.
Fixed an issue where runs in Dagster+ Serverless that materialized partitioned assets would sometimes fail with an object has no attribute '_base_path' error.
[dagster-graphql] Fixed an issue where the statuses filter argument to the sensorsOrError GraphQL field was sometimes ignored when querying GraphQL for multiple sensors at the same time.
Updated multi-asset sensor definition to be less likely to timeout queries against the asset history storage.
Consolidated the CapturedLogManager and ComputeLogManager APIs into a single base class.
[ui] Added an option under user settings to clear client-side IndexedDB caches as an escape hatch for caching-related bugs.
[dagster-aws, dagster-pipes] Added a new PipesECSClient to allow Dagster to interface with ECS tasks.
[dagster-dbt] Increased the default timeout when terminating a run that is running a dbt subprocess to wait 25 seconds for the subprocess to cleanly terminate. Previously, it would only wait 2 seconds.
[dagster-sdf] Increased the default timeout when terminating a run that is running an sdf subprocess to wait 25 seconds for the subprocess to cleanly terminate. Previously, it would only wait 2 seconds.
[dagster-sdf] Added support for caching and asset selection (Thanks, akbog!)
[dagster-dlt] Added support for AutomationCondition using DagsterDltTranslator.get_automation_condition() (Thanks, aksestok!)
[ui] Fixed a bug where in-progress runs from a backfill could not be terminated from the backfill UI.
[ui] Fixed a bug that caused an "Asset must be part of at least one job" error when clicking on an external asset in the asset graph UI.
Fixed an issue where viewing run logs with the latest 5.0 release of the watchdog package raised an exception.
[ui] Fixed issue causing the “filter to group” action in the lineage graph to have no effect.
[ui] Fixed case sensitivity when searching for partitions in the launchpad.
[ui] Fixed a bug which would redirect to the events tab for an asset if you loaded the partitions tab directly.
[ui] Fixed issue causing runs to get skipped when paging through the runs list (Thanks, @HynekBlaha!)
[ui] Fixed a bug where the asset catalog list view for a particular group would show all assets.
[dagster-dbt] Fixed a bug where empty newlines in raw dbt logs were not being handled correctly.
[dagster-k8s, dagster-celery-k8s] Correctly set dagster/image label when image is provided from user_defined_k8s_config. (Thanks, @HynekBlaha!)
[dagster-duckdb] Fixed an issue for DuckDB versions older than 1.0.0 where an unsupported configuration option, custom_user_agent, was provided by default.
[dagster-k8s] Fixed an issue where Kubernetes Pipes failed to create a pod if the op name contained capital or non-alphanumeric characters.
[dagster-embedded-elt] Fixed an issue where dbt assets downstream of Sling were skipped.
[dagster-aws] Direct AWS API arguments in PipesGlueClient.run have been deprecated and will be removed in 1.9.0. The new params argument should be used instead.
The default io_manager on Serverless now supports the allow_missing_partitions configuration option.
Fixed a bug that caused an error when loading the launchpad for a partition when using Dagster+ with an agent version below 1.8.2.
1.8.3 (core) / 0.24.3 (libraries) (YANKED - This version of Dagster resulted in errors when trying to launch runs that target individual asset partitions)
When different assets within a code location have different PartitionsDefinitions, there will no longer be an implicit asset job __ASSET_JOB_... for each PartitionsDefinition; there will just be one with all the assets. This reduces the time it takes to load code locations with assets with many different PartitionsDefinitions.
[ui] Fixed a collection of broken links pointing to renamed Declarative Automation pages.
[dagster-dbt] Fixed issue preventing usage of MultiPartitionMapping with @dbt_assets (Thanks, @arookieds!)
[dagster-azure] Fixed issue that would cause an error when configuring an AzureBlobComputeLogManager without a secret_key (Thanks, @ion-elgreco and @HynekBlaha!)
dagster.readthedocs.io is currently stale due to availability issues.
New
Improvements to S3 Resource. (Thanks @dwallace0723!)
Better error messages in Dagit.
Better font/styling support in Dagit.
Changed OutputDefinition to take an is_required rather than an is_optional argument. This is to
remain consistent with changes to Field in 0.7.1 and to avoid confusion
with python's typing and dagster's definition of Optional, which indicates None-ability
rather than existence. is_optional is deprecated and will be removed in a future version.
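For example, an output previously declared with is_optional=True would now read roughly:
from dagster import OutputDefinition

result_output = OutputDefinition(dagster_type=int, name='result', is_required=False)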
Added support for Flower in dagster-k8s.
Added support for environment variable config in dagster-snowflake.
Bugfixes
Improved performance in Dagit waterfall view.
Fixed bug when executing solids downstream of a skipped solid.
Improved navigation experience for pipelines in Dagit.
Fixes for the dagster-aws CLI tool.
Fixed an issue starting Dagit without DAGSTER_HOME set on Windows.
Fixed pipeline subset execution in partition-based schedules.
There are a substantial number of breaking changes in the 0.7.0 release.
Please see 070_MIGRATION.md for instructions regarding migrating old code.
Scheduler
The scheduler configuration has been moved from the @schedules decorator to DagsterInstance.
Existing schedules that have been running are no longer compatible with current storage. To
migrate, remove the scheduler argument on all @schedules decorators:
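A sketch of the migration, assuming a repository that previously passed a cron-based scheduler to the decorator:
from dagster import schedules

# Before (roughly): @schedules(scheduler=SystemCronScheduler)
# After: drop the scheduler argument; the scheduler is now configured on DagsterInstance.
@schedules
def define_schedules():
    return []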
Finally, if you had any existing schedules running, delete the existing $DAGSTER_HOME/schedules
directory and run dagster schedule wipe && dagster schedule up to re-instantiate schedules in a
valid state.
The should_execute and environment_dict_fn arguments to ScheduleDefinition now have a
required first argument, context, representing the ScheduleExecutionContext.
Config System Changes
In the config system, Dict has been renamed to Shape; List to Array; Optional to
Noneable; and PermissiveDict to Permissive. The motivation here is to clearly delineate
config use cases versus cases where you are using types as the inputs and outputs of solids as
well as python typing types (for mypy and friends). We believe this will be clearer to users in
addition to simplifying our own implementation and internal abstractions.
Our recommended fix is not to use Shape and Array, but instead to use our new condensed
config specification API. This allows one to use bare dictionaries instead of Shape, lists with
one member instead of Array, bare types instead of Field with a single argument, and python
primitive types (int, bool etc) instead of the dagster equivalents. These result in
dramatically less verbose config specs in most cases.
So instead of
from dagster import Shape, Field, Int, Array, String
# ... code
config=Shape({  # Dict prior to change
    'some_int': Field(Int),
    'some_list': Field(Array[String]),  # List prior to change
})
one can instead write:
config={'some_int': int, 'some_list': [str]}
No imports and much simpler, cleaner syntax.
config_field is no longer a valid argument on solid, SolidDefinition, ExecutorDefinition,
executor, LoggerDefinition, logger, ResourceDefinition, resource, system_storage, and
SystemStorageDefinition. Use config instead.
For composite solids, the config_fn no longer takes a ConfigMappingContext, and the context
has been deleted. To upgrade, remove the first argument to config_fn.
Field takes a is_required rather than a is_optional argument. This is to avoid confusion
with python's typing and dagster's definition of Optional, which indicates None-ability,
rather than existence. is_optional is deprecated and will be removed in a future version.
Required Resources
All solids, types, and config functions that use a resource must explicitly list that
resource using the argument required_resource_keys. This is to enable efficient
resource management during pipeline execution, especially in a multiprocessing or
remote execution environment.
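For instance, a solid that uses a database resource now declares it explicitly (the resource name and client API are illustrative):
from dagster import solid

@solid(required_resource_keys={'database'})
def run_query(context):
    return context.resources.database.execute('SELECT 1')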
The @system_storage decorator now requires argument required_resource_keys, which was
previously optional.
Dagster Type System Changes
dagster.Set and dagster.Tuple can no longer be used within the config system.
Dagster types are now instances of DagsterType, rather than a class that inherits from
RuntimeType. Instead of dynamically generating a class to create a custom runtime type, just
create an instance of a DagsterType. The type checking function is now an argument to the
DagsterType, rather than an abstract method that has to be implemented in
a subclass.
RuntimeType has been renamed to DagsterType, which is now an encouraged API for type creation.
Core type check function of DagsterType can now return a naked bool in addition
to a TypeCheck object.
type_check_fn on DagsterType (formerly type_check on RuntimeType) now takes a first
argument, context, of type TypeCheckContext, in addition to the second argument, value.
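A sketch of the new pattern, creating a DagsterType instance with a type check function that returns a bare bool, rather than subclassing RuntimeType:
from dagster import DagsterType

def _is_positive_int(_context, value):
    # may also return a TypeCheck object instead of a bool
    return isinstance(value, int) and value > 0

PositiveInt = DagsterType(name='PositiveInt', type_check_fn=_is_positive_int)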
define_python_dagster_type has been eliminated in favor of PythonObjectDagsterType.
dagster_type has been renamed to usable_as_dagster_type.
as_dagster_type has been removed and similar capabilities added as
make_python_type_usable_as_dagster_type.
PythonObjectDagsterType and usable_as_dagster_type no longer take a type_check argument. If
a custom type_check is needed, use DagsterType.
As a consequence of these changes, if you were previously using dagster_pyspark or
dagster_pandas and expecting Pyspark or Pandas types to work as Dagster types, e.g., in type
annotations to functions decorated with @solid to indicate that they are input or output types
for a solid, you will need to call make_python_type_usable_as_dagster_type from your code in
order to map the Python types to the Dagster types, or just use the Dagster types
(dagster_pandas.DataFrame instead of pandas.DataFrame) directly.
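For example, a sketch of keeping pandas DataFrames usable in solid signatures via the new mapping function:
import pandas as pd
from dagster import PythonObjectDagsterType, make_python_type_usable_as_dagster_type

DagsterPandasDataFrame = PythonObjectDagsterType(pd.DataFrame, name='PandasDataFrame')
make_python_type_usable_as_dagster_type(pd.DataFrame, DagsterPandasDataFrame)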
Other
We no longer publish base Docker images. Please see the updated deployment docs for an example
Dockerfile off of which you can work.
step_metadata_fn has been removed from SolidDefinition & @solid.
SolidDefinition & @solid now take tags and enforce that values are strings or
are safely encoded as JSON. metadata is deprecated and will be removed in a future version.
resource_mapper_fn has been removed from SolidInvocation.
New
Dagit now includes a much richer execution view, with a Gantt-style visualization of step
execution and a live timeline.
Early support for Python 3.8 is now available, and Dagster/Dagit along with many of our libraries
are now tested against 3.8. Note that several of our upstream dependencies have yet to publish
wheels for 3.8 on all platforms, so running on Python 3.8 likely still involves building some
dependencies from source.
dagster/priority tags can now be used to prioritize the order of execution for the built-in
in-process and multiprocess engines.
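For example (a sketch; the priority value is illustrative, with higher values scheduled earlier):
from dagster import solid

@solid(tags={'dagster/priority': '5'})
def high_priority_solid(_context):
    ...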
dagster-postgres storages can now be configured with separate arguments and environment
variables, such as:
run_storage:
  module: dagster_postgres.run_storage
  class: PostgresRunStorage
  config:
    postgres_db:
      username: test
      password:
        env: ENV_VAR_FOR_PG_PASSWORD
      hostname: localhost
      db_name: test
Support for RunLaunchers on DagsterInstance allows for execution to be "launched" outside of
the Dagit/Dagster process. As one example, this is used by dagster-k8s to submit pipeline
execution as a Kubernetes Job.
Added support for adding tags to runs initiated from the Playground view in Dagit.
Added @monthly_schedule decorator.
Added Enum.from_python_enum helper to wrap Python enums for config. (Thanks @kdungs!)
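A sketch of wrapping a Python enum for use in config:
from enum import Enum as PythonEnum

from dagster import Enum as DagsterEnum

class Color(PythonEnum):
    RED = 1
    BLUE = 2

ColorConfigEnum = DagsterEnum.from_python_enum(Color)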
[dagster-bash] The Dagster bash solid factory now passes along kwargs to the underlying
solid construction, and now has a single Nothing input by default to make it easier to create a
sequencing dependency. Also, logs are now buffered by default to make execution less noisy.
[dagster-aws] We've improved our EMR support substantially in this release. The
dagster_aws.emr library now provides an EmrJobRunner with various utilities for creating EMR
clusters, submitting jobs, and waiting for jobs/logs. We also now provide an
emr_pyspark_resource, which together with the new @pyspark_solid decorator makes moving
pyspark execution from your laptop to EMR as simple as changing modes.
[dagster-pandas] Added create_dagster_pandas_dataframe_type, PandasColumn, and
Constraint APIs in order for users to create custom types which perform column validation,
dataframe validation, summary statistics emission, and dataframe serialization/deserialization.
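A sketch of defining a validated dataframe type with these APIs (column names and constraints are illustrative):
from dagster_pandas import PandasColumn, create_dagster_pandas_dataframe_type

TripDataFrame = create_dagster_pandas_dataframe_type(
    name='TripDataFrame',
    columns=[
        PandasColumn.integer_column('bike_id', min_value=0),
        PandasColumn.datetime_column('start_time'),
    ],
)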
[dagster-gcp] GCS is now supported for system storage, as well as being supported with the
Dask executor. (Thanks @habibutsu!) Bigquery solids have also been updated to support the new API.
Bugfix
Ensured that all implementations of RunStorage clean up pipeline run tags when a run
is deleted. Requires a storage migration, using dagster instance migrate.
The multiprocess and Celery engines now handle solid subsets correctly.
The multiprocess and Celery engines will now correctly emit skip events for steps downstream of
failures and other skips.
The @solid and @lambda_solid decorators now correctly wrap their decorated functions, in the
sense of functools.wraps.
Performance improvements in Dagit when working with runs with large configurations.
The Helm chart in dagster_k8s has been hardened against various failure modes and is now
compatible with Helm 2.
SQLite run and event log storages are more robust to concurrent use.
Improvements to error messages and to handling of user code errors in input hydration and output
materialization logic.
Fixed an issue where the Airflow scheduler could hang when attempting to load dagster-airflow
pipelines.
We now handle our SQLAlchemy connections in a more canonical way (thanks @zzztimbo!).
Fixed an issue using S3 system storage with certain custom serialization strategies.
Fixed an issue leaking orphan processes from compute logging.
Fixed an issue leaking semaphores from Dagit.
Setting the raise_error flag in execute_pipeline now actually raises user exceptions instead
of a wrapper type.
Documentation
Our docs have been reorganized and expanded (thanks @habibutsu, @vatervonacht, @zzztimbo). We'd
love feedback and contributions!
Thank you
Thank you to all of the community contributors to this release!! In alphabetical order: @habibutsu,
@kdungs, @vatervonacht, @zzztimbo.
Added the dagster-github library, a community contribution from @Ramshackle-Jamathon and
@k-mahoney!
dagster-celery
Simplified and improved config handling.
An engine event is now emitted when the engine fails to connect to a broker.
Bugfix
Fixes a file descriptor leak when running many concurrent dagster-graphql queries (e.g., for
backfill).
The @pyspark_solid decorator now handles inputs correctly.
The handling of solid compute functions that accept kwargs but which are decorated with explicit
input definitions has been rationalized.
Fixed race conditions when using SQLite event log storage with concurrent execution,
uncovered by upstream improvements in the Python inotify library we use.
Documentation
Improved error messages when using system storages that don't fulfill executor requirements.