no-kafka is an Apache Kafka 0.9 client for Node.js with support for the new unified consumer API.
It supports sync and async Gzip and Snappy compression, producer batching and controllable retries, and offers a few predefined group assignment strategies and a producer partitioner option.
All methods return a promise.
Please check the CHANGELOG for backward-incompatible changes in version 3.x.
Create a test topic (requires a running Kafka broker and Zookeeper):
kafka-topics.sh --zookeeper 127.0.0.1:2181 --create --topic kafka-test-topic --partitions 3 --replication-factor 1
Install the package:
npm install no-kafka
Example:
var Kafka = require('no-kafka');
var producer = new Kafka.Producer();

return producer.init().then(function () {
    return producer.send({
        topic: 'kafka-test-topic',
        partition: 0,
        message: {
            value: 'Hello!'
        }
    });
})
.then(function (result) {
    /*
    [ { topic: 'kafka-test-topic', partition: 0, offset: 353 } ]
    */
});
Send, and on failure retry with a progressive delay starting at 100ms:
return producer.send(messages, {
    retries: {
        attempts: 2,
        delay: {
            min: 100,
            max: 300
        }
    }
});
Accumulate messages into a single batch until their total size is >= 1024 bytes or the 100ms timeout expires (overriding the Producer constructor options):
producer.send(messages, {
    batch: {
        size: 1024,
        maxWait: 100
    }
});
Please note that if you pass different options to the send() method, the messages will be grouped into separate batches:
// will be sent in batch 1
producer.send(messages, {
    batch: {
        size: 1024,
        maxWait: 100
    },
    codec: Kafka.COMPRESSION_GZIP
});

// will be sent in batch 2
producer.send(messages, {
    batch: {
        size: 1024,
        maxWait: 100
    },
    codec: Kafka.COMPRESSION_SNAPPY
});
Send a message with a key:
producer.send({
    topic: 'kafka-test-topic',
    partition: 0,
    message: {
        key: 'some-key',
        value: 'Hello!'
    }
});
Example: override the default partitioner with a custom partitioner that only uses a portion of the key.
var util = require('util');
var Kafka = require('no-kafka');
var Producer = Kafka.Producer;
var DefaultPartitioner = Kafka.DefaultPartitioner;

function MyPartitioner() {
    DefaultPartitioner.apply(this, arguments);
}
util.inherits(MyPartitioner, DefaultPartitioner);

MyPartitioner.prototype.getKey = function getKey(message) {
    return message.key.split('-')[0];
};

var producer = new Producer({
    partitioner: new MyPartitioner()
});

return producer.init().then(function () {
    return producer.send({
        topic: 'kafka-test-topic',
        message: {
            key: 'namespace-key',
            value: 'Hello!'
        }
    });
});
Producer options:
- requiredAcks - required acknowledgements for the produce request. If it is 0 the server will not send any response. If it is 1 (default), the server will wait until the data is written to the local log before sending a response. If it is -1 the server will block until the message is committed by all in-sync replicas before sending a response. For any number > 1 the server will block waiting for this number of acknowledgements to occur (but the server will never wait for more acknowledgements than there are in-sync replicas).
- timeout - timeout in ms for the produce request
- clientId - ID of this client, defaults to 'no-kafka-client'
- connectionString - comma delimited list of initial brokers, defaults to '127.0.0.1:9092'
- reconnectionDelay - controls the optionally progressive delay between reconnection attempts in case of network error:
  - min - minimum delay, used as increment value for next attempts, defaults to 1000ms
  - max - maximum delay value, defaults to 1000ms
- partitioner - class instance used to determine the topic partition for a message. If the message already specifies a partition, the partitioner won't be used. The partitioner must inherit from Kafka.DefaultPartitioner. The partition method receives 3 arguments: the topic name, an array with topic partitions, and the message (useful to partition by key, etc.). partition can be sync or async (return a Promise).
- retries - controls the number of attempts and the delay between them when a produce request fails:
  - attempts - number of total attempts to send the message, defaults to 3
  - delay - controls the delay between retries; the delay is progressive and incremented with each attempt in min value steps, up to but not exceeding the max value:
    - min - minimum delay, used as increment value for next attempts, defaults to 1000ms
    - max - maximum delay value, defaults to 3000ms
- codec - compression codec, one of Kafka.COMPRESSION_NONE, Kafka.COMPRESSION_SNAPPY, Kafka.COMPRESSION_GZIP
- batch - controls batching (grouping) of requests:
  - size - group messages together into a single batch until their total size exceeds this value, defaults to 16384 bytes. Set to 0 to disable batching.
  - maxWait - send grouped messages after this amount of milliseconds even if their total size doesn't exceed batch.size yet, defaults to 10ms. Set to 0 to disable batching.
- asyncCompression - boolean, use asynchronous compression instead of synchronous, defaults to false
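For illustration, here is a sketch of a Producer constructed with several of the options above (the values and the clientId are placeholders, not recommendations):
var Kafka = require('no-kafka');

// illustrative values only - tune for your own cluster
var producer = new Kafka.Producer({
    connectionString: '127.0.0.1:9092',
    clientId: 'my-producer',            // hypothetical client id
    requiredAcks: 1,                    // wait for the leader to write the message
    timeout: 30000,                     // produce request timeout in ms
    codec: Kafka.COMPRESSION_GZIP,
    batch: { size: 16384, maxWait: 10 },
    retries: {
        attempts: 3,
        delay: { min: 1000, max: 3000 }
    }
});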
SimpleConsumer: manually specify the topic, partition and offset when subscribing. Suitable for simple use cases.
Example:
var consumer = new Kafka.SimpleConsumer();

// data handler function can return a Promise
var dataHandler = function (messageSet, topic, partition) {
    messageSet.forEach(function (m) {
        console.log(topic, partition, m.offset, m.message.value.toString('utf8'));
    });
};

return consumer.init().then(function () {
    // Subscribe to partitions 0 and 1 in a topic:
    return consumer.subscribe('kafka-test-topic', [0, 1], dataHandler);
});
Subscribe (or change the subscription) to a specific offset and limit the maximum received MessageSet size:
consumer.subscribe('kafka-test-topic', 0, {offset: 20, maxBytes: 30}, dataHandler)
Subscribe to the latest or earliest offsets in the topic/partition:
consumer.subscribe('kafka-test-topic', 0, {time: Kafka.LATEST_OFFSET}, dataHandler)
consumer.subscribe('kafka-test-topic', 0, {time: Kafka.EARLIEST_OFFSET}, dataHandler)
Subscribe to all partitions in a topic:
consumer.subscribe('kafka-test-topic', dataHandler)
Commit offset(s) (V0, Kafka saves these commits to Zookeeper)
consumer.commitOffset([
    {
        topic: 'kafka-test-topic',
        partition: 0,
        offset: 1
    },
    {
        topic: 'kafka-test-topic',
        partition: 1,
        offset: 2
    }
]);
Fetch committed offset(s):
consumer.fetchOffset([
    {
        topic: 'kafka-test-topic',
        partition: 0
    },
    {
        topic: 'kafka-test-topic',
        partition: 1
    }
]).then(function (result) {
    /*
    [ { topic: 'kafka-test-topic',
        partition: 1,
        offset: 2,
        metadata: null,
        error: null },
      { topic: 'kafka-test-topic',
        partition: 0,
        offset: 1,
        metadata: null,
        error: null } ]
    */
});
SimpleConsumer options:
- groupId - group ID for committing and fetching offsets. Defaults to 'no-kafka-group-v0'
- maxWaitTime - maximum amount of time in milliseconds to block waiting if insufficient data is available at the time the fetch request is issued, defaults to 100ms
- idleTimeout - timeout between fetch calls, defaults to 1000ms
- minBytes - minimum number of bytes to wait for from Kafka before returning the fetch call, defaults to 1 byte
- maxBytes - maximum size of messages in a fetch response, defaults to 1MB
- clientId - ID of this client, defaults to 'no-kafka-client'
- connectionString - comma delimited list of initial brokers, defaults to '127.0.0.1:9092'
- reconnectionDelay - controls the optionally progressive delay between reconnection attempts in case of network error:
  - min - minimum delay, used as increment value for next attempts, defaults to 1000ms
  - max - maximum delay value, defaults to 1000ms
- recoveryOffset - recovery position (time) which will be used to recover the subscription in case of an OffsetOutOfRange error, defaults to Kafka.LATEST_OFFSET
- asyncCompression - boolean, use asynchronous decompression instead of synchronous, defaults to false
- handlerConcurrency - concurrency level for the consumer handler function, defaults to 10
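For illustration, a sketch of a SimpleConsumer constructed with some of the options above (values and the clientId are placeholders):
var Kafka = require('no-kafka');

// illustrative values only
var consumer = new Kafka.SimpleConsumer({
    connectionString: '127.0.0.1:9092',
    clientId: 'my-simple-consumer',   // hypothetical client id
    groupId: 'no-kafka-group-v0',
    idleTimeout: 1000,                // ms between fetch calls
    maxWaitTime: 100,                 // ms to block when not enough data is available
    maxBytes: 1024 * 1024,            // 1MB fetch response limit
    recoveryOffset: Kafka.LATEST_OFFSET
});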
GroupConsumer: specify an assignment strategy (or use a no-kafka built-in consistent or round robin assignment strategy) and subscribe by specifying only topics. The elected group leader will automatically assign partitions between all group members.
Example:
var Promise = require('bluebird');
var consumer = new Kafka.GroupConsumer();

var dataHandler = function (messageSet, topic, partition) {
    return Promise.each(messageSet, function (m) {
        console.log(topic, partition, m.offset, m.message.value.toString('utf8'));
        // commit offset
        return consumer.commitOffset({ topic: topic, partition: partition, offset: m.offset, metadata: 'optional' });
    });
};

var strategies = [{
    subscriptions: ['kafka-test-topic'],
    handler: dataHandler
}];

consumer.init(strategies); // all done, now wait for messages in dataHandler
no-kafka provides three built-in strategies:
- Kafka.WeightedRoundRobinAssignmentStrategy - weighted round robin assignment (based on wrr-pool).
- Kafka.ConsistentAssignmentStrategy - based on a consistent hash ring; provides consistent assignment across consumers in a group based on the supplied metadata.id and metadata.weight options.
- Kafka.DefaultAssignmentStrategy - simple round robin assignment strategy (the default).

Using Kafka.WeightedRoundRobinAssignmentStrategy:
var strategies = {
    subscriptions: ['kafka-test-topic'],
    metadata: {
        weight: 4
    },
    strategy: new Kafka.WeightedRoundRobinAssignmentStrategy(),
    handler: dataHandler
};
// consumer.init(strategies)....
Using Kafka.ConsistentAssignmentStrategy:
var strategies = {
    subscriptions: ['kafka-test-topic'],
    metadata: {
        id: process.argv[2] || 'consumer_1',
        weight: 50
    },
    strategy: new Kafka.ConsistentAssignmentStrategy(),
    handler: dataHandler
};
// consumer.init(strategies)....
Note that each consumer in a group should have its own, consistent metadata.id.
You can also write your own assignment strategy by inheriting from Kafka.DefaultAssignmentStrategy and overriding the assignment method.
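As a rough, hypothetical sketch (following the util.inherits pattern used for the custom partitioner above; the exact arguments Kafka passes to assignment() are not documented here, so the body simply delegates to the default behaviour):
var util = require('util');
var Kafka = require('no-kafka');

function MyAssignmentStrategy() {
    Kafka.DefaultAssignmentStrategy.apply(this, arguments);
}
util.inherits(MyAssignmentStrategy, Kafka.DefaultAssignmentStrategy);

// override the assignment method; here we delegate to the default round robin
// behaviour (the argument list is assumed - check the library source)
MyAssignmentStrategy.prototype.assignment = function () {
    return Kafka.DefaultAssignmentStrategy.prototype.assignment.apply(this, arguments);
};

// then reference it in the strategies object, e.g.:
// var strategies = { subscriptions: ['kafka-test-topic'], strategy: new MyAssignmentStrategy(), handler: dataHandler };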
GroupConsumer options:
- groupId - group ID for committing and fetching offsets. Defaults to 'no-kafka-group-v0.9'
- maxWaitTime - maximum amount of time in milliseconds to block waiting if insufficient data is available at the time the fetch request is issued, defaults to 100ms
- idleTimeout - timeout between fetch calls, defaults to 1000ms
- minBytes - minimum number of bytes to wait for from Kafka before returning the fetch call, defaults to 1 byte
- maxBytes - maximum size of messages in a fetch response
- clientId - ID of this client, defaults to 'no-kafka-client'
- connectionString - comma delimited list of initial brokers, defaults to '127.0.0.1:9092'
- reconnectionDelay - controls the optionally progressive delay between reconnection attempts in case of network error:
  - min - minimum delay, used as increment value for next attempts, defaults to 1000ms
  - max - maximum delay value, defaults to 1000ms
- sessionTimeout - session timeout in ms, min 6000, max 30000, defaults to 15000
- heartbeatTimeout - delay between heartbeat requests in ms, defaults to 1000
- retentionTime - offset retention time in ms, defaults to 1 day (24 * 3600 * 1000)
- startingOffset - starting position (time) when there is no committed offset, defaults to Kafka.LATEST_OFFSET
- recoveryOffset - recovery position (time) which will be used to recover the subscription in case of an OffsetOutOfRange error, defaults to Kafka.LATEST_OFFSET
- asyncCompression - boolean, use asynchronous decompression instead of synchronous, defaults to false
- handlerConcurrency - concurrency level for the consumer handler function, defaults to 10
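For illustration, a sketch of a GroupConsumer constructed with some of the options above (values and the groupId are placeholders):
var Kafka = require('no-kafka');

// illustrative values only
var consumer = new Kafka.GroupConsumer({
    connectionString: '127.0.0.1:9092',
    groupId: 'my-group',                   // hypothetical group id
    sessionTimeout: 15000,                 // ms, between 6000 and 30000
    heartbeatTimeout: 1000,                // ms between heartbeat requests
    startingOffset: Kafka.EARLIEST_OFFSET, // where to start when there is no committed offset
    handlerConcurrency: 10
});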
GroupAdmin offers two methods:
- listGroups - list existing consumer groups
- describeGroup - describe an existing group by its id
Example:
var admin = new Kafka.GroupAdmin();

return admin.init().then(function () {
    return admin.listGroups().then(function (groups) {
        // [ { groupId: 'no-kafka-admin-test-group', protocolType: 'consumer' } ]
        return admin.describeGroup('no-kafka-admin-test-group').then(function (group) {
            /*
            { error: null,
              groupId: 'no-kafka-admin-test-group',
              state: 'Stable',
              protocolType: 'consumer',
              protocol: 'DefaultAssignmentStrategy',
              members:
               [ { memberId: 'group-consumer-82646843-b4b8-4e91-94c9-b4708c8b05e8',
                   clientId: 'group-consumer',
                   clientHost: '/192.168.1.4',
                   version: 0,
                   subscriptions: [ 'kafka-test-topic' ],
                   metadata: <Buffer 63 6f 6e 73 75 6d 65 72 2d 6d 65 74 61 64 61 74 61>,
                   memberAssignment:
                    { _blength: 44,
                      version: 0,
                      partitionAssignment:
                       [ { topic: 'kafka-test-topic',
                           partitions: [ 0, 1, 2 ] } ],
                      metadata: null } } ] }
            */
        });
    });
});
no-kafka supports both SNAPPY and Gzip compression. To use SNAPPY you must install the snappy
NPM module in your project.
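For example:
npm install snappy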
Enable compression in Producer:
var Kafka = require('no-kafka');
var producer = new Kafka.Producer({
    clientId: 'producer',
    codec: Kafka.COMPRESSION_SNAPPY // Kafka.COMPRESSION_NONE, Kafka.COMPRESSION_SNAPPY, Kafka.COMPRESSION_GZIP
});
Alternatively, just send some messages with a specified compression codec (overrides the codec set in the constructor):
return producer.send({
    topic: 'kafka-test-topic',
    partition: 0,
    message: { value: 'p00' }
}, { codec: Kafka.COMPRESSION_SNAPPY });
By default no-kafka will use asynchronous compression and decompression.
Disable async compression/decompression (and use sync) with the asyncCompression option (synchronous Gzip is not available in node < 0.11):
Producer:
var producer = new Kafka.Producer({
    clientId: 'producer',
    asyncCompression: false, // use sync compression/decompression
    codec: Kafka.COMPRESSION_SNAPPY
});
Consumer:
var consumer = new Kafka.SimpleConsumer({
    idleTimeout: 100,
    clientId: 'simple-consumer',
    asyncCompression: true
});
no-kafka will connect to the hosts specified in the connectionString constructor option. If it is omitted, it will use the KAFKA_URL environment variable or fall back to the default kafka://127.0.0.1:9092. For better availability always specify several initial brokers: 10.0.1.1:9092,10.0.1.2:9092,10.0.1.3:9092. The kafka:// prefix is optional.
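For example, a producer pointed at several initial brokers (addresses are placeholders):
var Kafka = require('no-kafka');

var producer = new Kafka.Producer({
    // comma delimited list of initial brokers
    connectionString: 'kafka://10.0.1.1:9092,10.0.1.2:9092,10.0.1.3:9092'
});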
All network errors are handled by the library: the producer will retry sending failed messages a configured number of times, and the simple consumer and group consumer will try to reconnect to the failed host, update metadata as needed, and so on.
To connect to Kafka with an SSL endpoint enabled, specify SSL certificate and key file options:
var producer = new Kafka.Producer({
    connectionString: 'kafka://127.0.0.1:9093', // should match `listeners` SSL option in Kafka config
    ssl: {
        certFile: '/path/to/client.crt',
        keyFile: '/path/to/client.key'
    }
});
Other Node.js SSL options are available, such as rejectUnauthorized, secureProtocol, ciphers, etc. See the Node.js tls.createServer method documentation for more details.
It is also possible to use the KAFKA_CLIENT_CERT and KAFKA_CLIENT_CERT_KEY environment variables to specify SSL certificate and key locations:
KAFKA_URL=kafka://127.0.0.1:9093 KAFKA_CLIENT_CERT=./test/ssl/client.crt KAFKA_CLIENT_CERT_KEY=./test/ssl/client.key node producer.js
Sometimes the advertised listener addresses for a Kafka cluster may be incorrect from the client's point of view, such as when a Kafka farm is behind NAT or other network infrastructure. In this scenario it is possible to pass a brokerRedirection option to the Producer, SimpleConsumer or GroupConsumer.
The value of brokerRedirection can be either:
A function returning an object with host (string) and port (integer), such as:
brokerRedirection: function (host, port) {
    return {
        host: host + '.somesuffix.com', // fully qualify
        port: port + 100                // port NAT
    };
}
A simple map of connection strings to new connection strings, such as:
brokerRedirection: {
    'some-host:9092': 'actual-host:9092',
    'kafka://another-host:9092': 'another-host:9093',
    'third-host:9092': 'kafka://third-host:9000'
}
A common scenario for this kind of remapping is when a Kafka cluster exists within a Docker application, and the internally advertised names needed for container to container communication do not correspond to the actual external ports or addresses when connecting externally via other tools.
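Whichever form is used, it is passed with the other constructor options; for example with the map form (host names are placeholders):
var Kafka = require('no-kafka');

var consumer = new Kafka.SimpleConsumer({
    connectionString: 'kafka://some-host:9092',
    // rewrite the internally advertised address to the externally reachable one
    brokerRedirection: {
        'some-host:9092': 'actual-host:9092'
    }
});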
In case of a network error which prevents further operations, no-kafka will try to reconnect to the Kafka brokers in an endless loop with an optionally progressive delay, which can be configured with the reconnectionDelay option.
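A small sketch of tuning that delay (values are placeholders):
var Kafka = require('no-kafka');

var producer = new Kafka.Producer({
    reconnectionDelay: {
        min: 1000, // ms, also used as the increment between attempts
        max: 5000  // ms, upper bound for the progressive delay
    }
});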
You can differentiate messages from several instances of producer/consumer by providing a unique clientId in the options:
var consumer1 = new Kafka.GroupConsumer({
    clientId: 'group-consumer-1'
});
var consumer2 = new Kafka.GroupConsumer({
    clientId: 'group-consumer-2'
});
=>
2016-01-12T07:41:57.884Z INFO group-consumer-1 ....
2016-01-12T07:41:57.884Z INFO group-consumer-2 ....
Change the logging level:
var consumer = new Kafka.GroupConsumer({
    clientId: 'group-consumer',
    logger: {
        logLevel: 1 // 0 - nothing, 1 - just errors, 2 - +warnings, 3 - +info, 4 - +debug, 5 - +trace
    }
});
Send log messages to Logstash server(s) via UDP:
var consumer = new Kafka.GroupConsumer({
    clientId: 'group-consumer',
    logger: {
        logstash: {
            enabled: true,
            connectionString: '10.0.1.1:9999,10.0.1.2:9999',
            app: 'myApp-kafka-consumer'
        }
    }
});
You can override the function that outputs messages to stdout/stderr:
var consumer = new Kafka.GroupConsumer({
    clientId: 'group-consumer',
    logger: {
        logFunction: console.log
    }
});
There is no Kafka API call to create a topic. Kafka supports auto-creation of topics when their metadata is first requested (the auto.create.topics.enable broker option), but the topic is created with all default parameters, which is of little use. There is no way to be notified when the topic has been created, so the library would need to poll the server at some interval. There is also no way to be notified of any error for this operation. For this reason, having no guarantees, no-kafka won't provide a topic creation method until there is a specific Kafka API call to create/manage topics.