
Commit bf3c9ee

Prepare release 1.8.0 (#229)
* Prepare release 1.8.0
* Fix `npm audit` issues
* Minor fixes to changelog

Signed-off-by: Levko Kravets <levko.ne@gmail.com>
1 parent 957791b commit bf3c9ee

File tree

3 files changed: +50 −9 lines changed

CHANGELOG.md

+41
@@ -1,5 +1,46 @@
# Release History

## 1.8.0

### Highlights

- Retry failed CloudFetch requests (databricks/databricks-sql-nodejs#211)
- Fixed compatibility issues with Node@14 (databricks/databricks-sql-nodejs#219)
- Support Databricks OAuth on Azure (databricks/databricks-sql-nodejs#223)
- Support Databricks OAuth on GCP (databricks/databricks-sql-nodejs#224)
- Support LZ4 compression for Arrow and CloudFetch results (databricks/databricks-sql-nodejs#216)
- Fix OAuth M2M flow on Azure (databricks/databricks-sql-nodejs#228)
### OAuth on Azure

Some Azure instances now support the Databricks native OAuth flow (in addition to AAD OAuth). For backward
compatibility, the library will continue using the AAD OAuth flow by default. To use Databricks native OAuth,
pass `useDatabricksOAuthInAzure: true` to `client.connect()`:

```ts
client.connect({
  // other options - host, port, etc.
  authType: 'databricks-oauth',
  useDatabricksOAuthInAzure: true,
  // other OAuth options if needed
});
```

Also, we fixed an issue with AAD OAuth where wrong scopes were passed for the M2M flow.
### OAuth on GCP

We enabled OAuth support on GCP instances. Since it uses the Databricks native OAuth flow,
all the options are the same as for OAuth on AWS instances.
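As an illustrative sketch (the host and path values below are placeholders, not taken from this release), a GCP connection uses the same `databricks-oauth` options as elsewhere:

```ts
client.connect({
  host: '<workspace>.gcp.databricks.com', // placeholder host, for illustration
  path: '/sql/1.0/warehouses/<warehouse-id>', // placeholder path
  authType: 'databricks-oauth',
  // no GCP-specific flag is needed; Databricks native OAuth is used directly
});
```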
### CloudFetch improvements

The library will now automatically attempt to retry failed CloudFetch requests. Currently, the retry strategy
is quite basic, but it is going to be improved in the future.

Also, we implemented support for LZ4-compressed results (Arrow- and CloudFetch-based). It is enabled by default,
and compression will be used if the server supports it.
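The changelog does not spell out the retry policy. As a hedged sketch only (the `withRetry` helper below is illustrative and not the driver's actual API or implementation), a basic retry loop with linear backoff might look like:

```typescript
// Illustrative sketch: a generic retry helper in the spirit of retrying
// failed CloudFetch download requests. Not the driver's actual code.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < attempts) {
        // linear backoff between attempts
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * attempt));
      }
    }
  }
  throw lastError;
}
```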
## 1.7.1

- Fix "Premature close" error which happened due to socket limit when intensively using the library

package-lock.json

+8-8
Some generated files are not rendered by default.

package.json

+1-1
@@ -1,6 +1,6 @@
 {
   "name": "@databricks/sql",
-  "version": "1.7.1",
+  "version": "1.8.0",
   "description": "Driver for connection to Databricks SQL via Thrift API.",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
