Releases: databricks/dbt-databricks

Version 1.4.0

25 Jan 23:46

Breaking changes

  • Raise an exception when the schema config contains '.'. (#222)
    • Including a catalog in schema is no longer allowed.
    • Set the catalog config explicitly instead.
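
For example, a profile that previously packed both names into the schema field now has to split them. A minimal profiles.yml sketch (the host, http_path, and token values are placeholders):

      my_project:
        target: dev
        outputs:
          dev:
            type: databricks
            catalog: my_catalog      # was: schema: my_catalog.my_schema
            schema: my_schema
            host: <workspace-hostname>
            http_path: <warehouse-http-path>
            token: <personal-access-token>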

Features

  • Support Python 3.11 (#233)
  • Support incremental_predicates (#161; see the sketch after this list)
  • Refactor connection retries and add defaults with exponential backoff (#137)
  • Quote identifiers by default (#241)
  • Avoid the show table extended command. (#231)
  • Use show table extended with a table name list for get_catalog. (#237)
  • Add support for a glob pattern in the databricks_copy_into macro (#259)
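
A sketch of incremental_predicates on this adapter (the model, column, and predicate are illustrative; DBT_INTERNAL_DEST is the alias dbt gives the target table in merge statements):

      {{ config(
          materialized='incremental',
          incremental_strategy='merge',
          unique_key='event_id',
          incremental_predicates=[
              "DBT_INTERNAL_DEST.event_date > current_date() - INTERVAL 7 DAYS"
          ]
      ) }}

      select * from {{ ref('raw_events') }}
      {% if is_incremental() %}
      where event_date > (select max(event_date) from {{ this }})
      {% endif %}

With the merge strategy, the predicates are folded into the merge condition, which can cut down how much of the target table is scanned.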

Version 1.3.2

09 Nov 21:43

Fixes

  • Fix the databricks_copy_into macro when passing expression_list. (#223; see the sketch after this list)
  • Partially revert an earlier change to fix the case where the schema config contains uppercase letters. (#224)
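
For reference, the macro is invoked through dbt run-operation. A hedged sketch: expression_list is the argument named in the fix above, while the other argument names and values are assumptions for illustration:

      dbt run-operation databricks_copy_into --args "
      target_table: my_schema.my_table
      source: '/path/to/source/files'
      file_format: parquet
      expression_list: 'id, name'
      "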

Version 1.2.5

09 Nov 21:40

Fixes

  • Partially revert an earlier change to fix the case where the schema config contains uppercase letters. (#224)

Version 1.1.7

09 Nov 21:37

Fixes

  • Partially revert an earlier change to fix the case where the schema config contains uppercase letters. (#224)

Version 1.3.1

01 Nov 20:09

Under the hood

  • Show and log a warning when the schema config contains '.'. (#221)

Version 1.2.4

01 Nov 20:06

Under the hood

  • Show and log a warning when the schema config contains '.'. (#221)

Version 1.1.6

01 Nov 20:02

Under the hood

  • Show and log a warning when the schema config contains '.'. (#221)

Version 1.3.0

14 Oct 18:54

Features

  • Support Python models through the run command API; table and incremental materializations are currently supported. (dbt-labs/dbt-spark#377, #126)
  • Enable pandas and pandas-on-Spark DataFrames for dbt Python models (dbt-labs/dbt-spark#469, #181)
  • Support job clusters in the notebook submission method (dbt-labs/dbt-spark#467, #194)
    • In the all_purpose_cluster submission method, an http_path config can be set on a Python model to switch the cluster the model runs on:
      def model(dbt, _):
          dbt.config(
              materialized='table',
              # Run this model on the cluster behind this HTTP path
              # instead of the connection's default cluster.
              http_path='...'
          )
          ...
  • Use the builtin timestampadd and timestampdiff functions for the dateadd/datediff macros when available (#185; see the example after this list)
  • Add tests covering various Python models (#189)
  • Add tests for type_boolean on Databricks (dbt-labs/dbt-spark#471, #188)
  • Add a macro to support COPY INTO (#190)
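
A quick example of the affected cross-database macros (dbt-core 1.3 exposes them under the dbt namespace; the columns and model are illustrative):

      select
          {{ dbt.dateadd(datepart='day', interval=7, from_date_or_time='created_at') }} as due_at,
          {{ dbt.datediff('created_at', 'closed_at', 'day') }} as days_open
      from {{ ref('tickets') }}

On runtimes that provide them, these now compile to the builtin timestampadd and timestampdiff functions instead of hand-rolled date arithmetic.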

Under the hood

  • Apply the initial refactoring of the incremental materialization (#148)
    • dbt-databricks now uses adapter.get_incremental_strategy_macro instead of the dbt_spark_get_incremental_sql macro to dispatch incremental strategies; overriding dbt_spark_get_incremental_sql no longer has any effect (see the sketch below).
  • Better interface for python submission (dbt-labs/dbt-spark#452, #178)
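
adapter.get_incremental_strategy_macro resolves a strategy named X to a macro called get_incremental_X_sql, so customizations move to macros of that shape. A hedged sketch following dbt-core's documented arg_dict convention (the SQL body is illustrative, not the adapter's actual implementation):

      {% macro get_incremental_append_sql(arg_dict) %}
          {# Swap in custom SQL for the append strategy. #}
          insert into {{ arg_dict["target_relation"] }}
          select * from {{ arg_dict["temp_relation"] }}
      {% endmacro %}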

Version 1.2.3

26 Sep 18:43

Fixes

  • Fix cancellation (#173)
  • http_headers should be a dict in the profile (#174)

Version 1.1.5

26 Sep 18:41

Fixes

  • Fix cancellation (#173)
  • http_headers should be a dict in the profile (#174)