@databricks/sql
Changelog
1.9.0
- Added support for maxRows and queryTimeout options (databricks/databricks-sql-nodejs#255)
1.8.4
- Added Array.at/TypedArray.at polyfill (databricks/databricks-sql-nodejs#242 by @barelyhuman)
- REMOVE (databricks/databricks-sql-nodejs#249)
- Made the lz4 module optional so the package manager can skip it when it cannot be installed (databricks/databricks-sql-nodejs#246)
1.8.3
1.8.2
- Improved results handling when running queries against older DBR versions (databricks/databricks-sql-nodejs#232)
1.8.1
Security fixes:
- An issue in all published versions of the NPM package ip allows an attacker to execute arbitrary code and obtain sensitive information via the isPublic() function, which can lead to Server-Side Request Forgery (SSRF) attacks. The core issue is the function's failure to accurately distinguish between public and private IP addresses.
1.8.0
Some Azure instances now support the Databricks native OAuth flow (in addition to AAD OAuth). For backward
compatibility, the library will continue to use the AAD OAuth flow by default. To use Databricks native OAuth,
pass useDatabricksOAuthInAzure: true to client.connect():
client.connect({
  // other options - host, port, etc.
  authType: 'databricks-oauth',
  useDatabricksOAuthInAzure: true,
  // other OAuth options if needed
});
Also, we fixed an issue with AAD OAuth where the wrong scopes were passed for the M2M flow.
We enabled OAuth support on GCP instances. Since it uses Databricks native OAuth, all the options are the same as for OAuth on AWS instances.
The library will now automatically retry failed CloudFetch requests. The current retry strategy is quite basic, but it will be improved in the future.
We also implemented support for LZ4-compressed results (both Arrow- and CloudFetch-based). It is enabled by default, and compression is used when the server supports it.
1.7.1
1.7.0
- Fixed the maxRows option of IOperation.fetchChunk(): it now returns chunks of the requested size (databricks/databricks-sql-nodejs#200)
- Improved IOperation.hasMoreRows() behavior to avoid fetching data beyond the end of the dataset; it now also works properly before the first chunk is fetched (databricks/databricks-sql-nodejs#205)
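The fixed fetch behavior can be sketched as a simple pagination loop. This is a minimal sketch, not part of the library: the fetchAllChunked helper name and the chunk size are illustrative, and `operation` is assumed to be an IOperation obtained from session.executeStatement().

```javascript
// Hypothetical helper: drain a result set chunk by chunk.
// `operation` is assumed to expose fetchChunk({ maxRows }) and hasMoreRows(),
// both returning promises, as IOperation does in this library.
async function fetchAllChunked(operation, maxRows = 10000) {
  const rows = [];
  do {
    // fetchChunk honors maxRows, so each chunk has at most `maxRows` rows
    const chunk = await operation.fetchChunk({ maxRows });
    rows.push(...chunk);
    // hasMoreRows no longer reports rows past the end of the dataset
  } while (await operation.hasMoreRows());
  return rows;
}
```

With the #205 fix, hasMoreRows() can also be consulted before the first fetchChunk() call without triggering an unwanted fetch.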
1.6.1
- Added canUseMultipleCatalogs option when creating a session (databricks/databricks-sql-nodejs#203)
1.6.0
This feature allows all requests the library makes to be routed through a proxy. By default, the proxy is disabled.
To enable it, pass a proxy configuration object to DBSQLClient.connect:
client.connect({
  // pass host, path, auth options as usual
  proxy: {
    protocol: 'http', // supported protocols: 'http', 'https', 'socks', 'socks4', 'socks4a', 'socks5', 'socks5h'
    host: 'localhost', // proxy host (string)
    port: 8070, // proxy port (number)
    auth: { // optional proxy basic auth config
      username: ...,
      password: ...,
    },
  },
});
Note: using proxy settings from environment variables is currently not supported.