
kafkajs - npm Package Compare versions

Comparing version 1.5.0-beta.0 to 1.5.0-beta.1


CHANGELOG.md

@@ -8,2 +8,13 @@ # Changelog

## [1.5.0-beta.1] - 2019-01-17
### Fixed
- Rolling upgrade from 0.10 to 0.11 causes unknown magic byte errors #246
### Changed
- Validate consumer groupId #244
### Added
- Expose network queue size event to consumers, producers and admin #245
## [1.5.0-beta.0] - 2019-01-08

@@ -33,2 +44,7 @@

## [1.4.7] - 2019-01-17
### Fixed
- Rolling upgrade from 0.10 to 0.11 causes unknown magic byte errors #246
## [1.4.6] - 2018-12-03

@@ -35,0 +51,0 @@


package.json
{
"name": "kafkajs",
- "version": "1.5.0-beta.0",
+ "version": "1.5.0-beta.1",
"description": "A modern Apache Kafka client for node.js",

@@ -57,2 +57,3 @@ "author": "Tulio Ornelas <ornelas.tulio@gmail.com>",

"jest": "^23.5.0",
"jest-extended": "^0.11.0",
"jest-junit": "^5.1.0",

@@ -59,0 +60,0 @@ "lint-staged": "^6.0.0",

@@ -435,3 +435,3 @@ [![KafkaJS](https://raw.githubusercontent.com/tulios/kafkajs/master/logo.png)](https://github.com/tulios/kafkajs)

try {
try {
// Call one of the transaction's send methods

@@ -453,4 +453,4 @@ await transaction.send({ topic, messages })

```javascript
await transaction.sendOffsets({
consumerGroupId, topics
await transaction.sendOffsets({
consumerGroupId, topics
})

@@ -572,3 +572,3 @@ ```

- Consumer groups allow a group of machines or processes to coordinate access to a list of topics, distributing the load among the consumers. When a consumer fails the load is automatically distributed to other members of the group. Consumer groups must have unique group ids within the cluster, from a kafka broker perspective.
+ Consumer groups allow a group of machines or processes to coordinate access to a list of topics, distributing the load among the consumers. When a consumer fails the load is automatically distributed to other members of the group. Consumer groups __must have__ unique group ids within the cluster, from a kafka broker perspective.

@@ -1287,2 +1287,9 @@ Creating the consumer:

* consumer.events.REQUEST_QUEUE_SIZE
payload: {
`broker`,
`clientId`,
`queueSize`
}
### <a name="instrumentation-producer"></a> Producer

@@ -1322,2 +1329,9 @@

* producer.events.REQUEST_QUEUE_SIZE
payload: {
`broker`,
`clientId`,
`queueSize`
}
### <a name="instrumentation-admin"></a> Admin

@@ -1357,2 +1371,9 @@

* admin.events.REQUEST_QUEUE_SIZE
payload: {
`broker`,
`clientId`,
`queueSize`
}
## <a name="custom-logging"></a> Custom logging

@@ -1359,0 +1380,0 @@

@@ -11,2 +11,3 @@ const swapObject = require('../utils/swapObject')

REQUEST_TIMEOUT: adminType(networkEvents.NETWORK_REQUEST_TIMEOUT),
REQUEST_QUEUE_SIZE: adminType(networkEvents.NETWORK_REQUEST_QUEUE_SIZE),
}

@@ -17,2 +18,3 @@

[events.REQUEST_TIMEOUT]: networkEvents.NETWORK_REQUEST_TIMEOUT,
[events.REQUEST_QUEUE_SIZE]: networkEvents.NETWORK_REQUEST_QUEUE_SIZE,
}

@@ -19,0 +21,0 @@

@@ -40,2 +40,6 @@ const Long = require('long')

}) => {
if (!groupId) {
throw new KafkaJSNonRetriableError('Consumer groupId must be a non-empty string.')
}
const logger = rootLogger.namespace('Consumer')

@@ -42,0 +46,0 @@ const instrumentationEmitter = rootInstrumentationEmitter || new InstrumentationEventEmitter()

@@ -19,2 +19,3 @@ const swapObject = require('../utils/swapObject')

REQUEST_TIMEOUT: consumerType(networkEvents.NETWORK_REQUEST_TIMEOUT),
REQUEST_QUEUE_SIZE: consumerType(networkEvents.NETWORK_REQUEST_QUEUE_SIZE),
}

@@ -25,2 +26,3 @@

[events.REQUEST_TIMEOUT]: networkEvents.NETWORK_REQUEST_TIMEOUT,
[events.REQUEST_QUEUE_SIZE]: networkEvents.NETWORK_REQUEST_QUEUE_SIZE,
}

@@ -27,0 +29,0 @@

@@ -92,2 +92,3 @@ class KafkaJSError extends Error {

class KafkaJSLockTimeout extends KafkaJSTimeout {}
class KafkaJSUnsupportedMagicByteInMessageSet extends KafkaJSNonRetriableError {}

@@ -113,2 +114,3 @@ module.exports = {

KafkaJSServerDoesNotSupportApiKey,
KafkaJSUnsupportedMagicByteInMessageSet,
}

@@ -11,2 +11,3 @@ const swapObject = require('../utils/swapObject')

REQUEST_TIMEOUT: producerType(networkEvents.NETWORK_REQUEST_TIMEOUT),
REQUEST_QUEUE_SIZE: producerType(networkEvents.NETWORK_REQUEST_QUEUE_SIZE),
}

@@ -17,2 +18,3 @@

[events.REQUEST_TIMEOUT]: networkEvents.NETWORK_REQUEST_TIMEOUT,
[events.REQUEST_QUEUE_SIZE]: networkEvents.NETWORK_REQUEST_QUEUE_SIZE,
}

@@ -19,0 +21,0 @@

@@ -1,2 +0,6 @@

- const { KafkaJSPartialMessageError, KafkaJSNonRetriableError } = require('../../errors')
+ const {
+   KafkaJSPartialMessageError,
+   KafkaJSUnsupportedMagicByteInMessageSet,
+ } = require('../../errors')
const V0Decoder = require('./v0/decoder')

@@ -12,3 +16,3 @@ const V1Decoder = require('./v1/decoder')

default:
- throw new KafkaJSNonRetriableError(
+ throw new KafkaJSUnsupportedMagicByteInMessageSet(
`Unsupported MessageSet message version, magic byte: ${magicByte}`

@@ -15,0 +19,0 @@ )

@@ -37,2 +37,9 @@ const Long = require('long')

if (e.name === 'KafkaJSUnsupportedMagicByteInMessageSet') {
// Received a MessageSet and a RecordBatch on the same response, the cluster is probably
// upgrading the message format from 0.10 to 0.11. Stop processing this message set to
// receive the full record batch on the next request
break
}
throw e

@@ -39,0 +46,0 @@ }

