Releases: databricks/dbt-databricks
Version 1.4.0
Breaking changes
- Raise an exception when schema contains '.'. (#222)
  - Containing a catalog in `schema` is no longer allowed. Use `catalog` explicitly instead.
Features
- Support Python 3.11 (#233)
- Support `incremental_predicates` (#161)
- Apply connection retry refactor, add defaults with exponential backoff (#137)
- Quote by Default (#241)
- Avoid the `show table extended` command (#231)
- Use `show table extended` with a table name list for `get_catalog` (#237)
- Add support for a glob pattern in the databricks_copy_into macro (#259)
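The connection retry behavior added in #137 can be pictured as a generic retry loop with exponentially growing delays. The sketch below is illustrative only; the function name, parameters, and defaults are assumptions, not the adapter's actual implementation:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call fn(), retrying on failure with exponential backoff (illustrative helper)."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Delay doubles each attempt, capped at max_delay, plus a little jitter
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, 0.1))
```

Capping the delay keeps long outages from producing unbounded waits, and the jitter spreads out retries from concurrent connections.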
Version 1.3.2
Fixes
- Partially revert to fix the case where schema config contains uppercase letters. (#224)
Version 1.2.5
Fixes
- Partially revert to fix the case where schema config contains uppercase letters. (#224)
Version 1.1.7
Fixes
- Partially revert to fix the case where schema config contains uppercase letters. (#224)
Version 1.3.1
Under the hood
- Show and log a warning when schema contains '.'. (#221)
Version 1.2.4
Under the hood
- Show and log a warning when schema contains '.'. (#221)
Version 1.1.6
Under the hood
- Show and log a warning when schema contains '.'. (#221)
Version 1.3.0
Features
- Support Python models through the run command API; currently supported materializations are `table` and `incremental`. (dbt-labs/dbt-spark#377, #126)
- Enable Pandas and Pandas-on-Spark DataFrames for dbt python models (dbt-labs/dbt-spark#469, #181)
- Support job cluster in notebook submission method (dbt-labs/dbt-spark#467, #194)
- In the `all_purpose_cluster` submission method, a config `http_path` can be specified in the Python model config to switch the cluster where the Python model runs.

  ```python
  def model(dbt, _):
      dbt.config(
          materialized='table',
          http_path='...'
      )
      ...
  ```
- Use builtin timestampadd and timestampdiff functions for dateadd/datediff macros if available (#185)
- Implement testing for various Python models (#189)
- Implement testing for `type_boolean` in Databricks (dbt-labs/dbt-spark#471, #188)
- Add a macro to support COPY INTO (#190)
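The Pandas DataFrame support above means a dbt Python model can simply return a pandas DataFrame. A minimal sketch, assuming the standard dbt Python model signature; the column names and data are purely illustrative:

```python
import pandas as pd

def model(dbt, session):
    # dbt passes a config object and a session; the session is unused here
    # because the frame is built locally with pandas.
    dbt.config(materialized="table")
    # Illustrative data only; dbt handles writing the frame to Databricks
    return pd.DataFrame({"id": [1, 2, 3], "status": ["new", "open", "done"]})
```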
Under the hood
- Apply "Initial refactoring of incremental materialization" (#148)
  - Now dbt-databricks uses `adapter.get_incremental_strategy_macro` instead of the `dbt_spark_get_incremental_sql` macro to dispatch the incremental strategy macro. An overridden `dbt_spark_get_incremental_sql` macro will no longer work.
- Better interface for python submission (dbt-labs/dbt-spark#452, #178)
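The dispatch change described under "Under the hood" can be pictured as a name-to-macro lookup. The following is a simplified sketch of that pattern with hypothetical strategy names and return values, not the adapter's actual code:

```python
def get_incremental_strategy_macro(strategies, name):
    """Return the macro registered for an incremental strategy, failing fast on unknown names."""
    try:
        return strategies[name]
    except KeyError:
        raise ValueError(f"unknown incremental strategy: {name}")

# Hypothetical registry mapping strategy names to macro callables.
strategies = {
    "append": lambda: "insert into target select * from source",
    "merge": lambda: "merge into target using source on ...",
}

sql = get_incremental_strategy_macro(strategies, "merge")()
```

Dispatching through a single adapter-owned lookup, rather than a macro users override directly, is why an overridden `dbt_spark_get_incremental_sql` macro no longer takes effect.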