nsq-logger
Nsq service to read messages from all topics listed within a list of nsqlookupd services.
INFO: all examples are written in plain JavaScript
Install
npm install nsq-logger
Initialize
var logger = new NsqLogger( config );
Example:
var NsqLogger = require( "nsq-logger" );
var config = {
clientId: "myFooClient"
};
var logger = new NsqLogger( config );
var writer = logger.Writer;
logger.on( "message", function( topic, data, done ){
done();
});
writer.connect();
writer.publish( "topic23", "Do some Stuff!" );
Config
- clientId : ( String|Null required ) An identifier used to disambiguate this client.
- active : ( Boolean default=true ) Enable or disable the nsq recorder.
- namespace : ( String|Null default=null ) Internally prefix the nsq topics. This is handled transparently, but makes it possible to separate different environments from each other, e.g. you can run a "staging" and a "live" environment on one nsq cluster.
- loggerChannel : ( String default="nsqlogger" ) The channel name the logger uses for each topic.
- exceededTopic : ( String default="_exceeded" ) A topic name that will store exceeded messages.
- ignoreTopics : ( String[]|Function default=null ) A list of topics that should be ignored, or a function that will be called to check the ignored topics manually.
- lookupdPollInterval : ( Number default=60 ) Time in seconds to poll the nsqlookupd servers to sync the available topics.
- maxInFlight : ( Number default=1 ) The maximum number of messages to process at once. This value is shared between nsqd connections. It's highly recommended that this value is greater than the number of nsqd connections.
- heartbeatInterval : ( Number default=30 ) The frequency in seconds at which nsqd will send heartbeats to this Reader.
- lookupdTCPAddresses : ( String[] default=[ "127.0.0.1:4160", "127.0.0.1:4162" ] ) A list of nsqlookupd servers (TCP addresses).
- lookupdHTTPAddresses : ( String[] default=[ "127.0.0.1:4161", "127.0.0.1:4163" ] ) A list of nsqlookupd servers (HTTP addresses).
- maxAttempts : ( Number default=10 ) The number of times a message can be requeued before it will be handed to the DISCARD handler and then automatically finished. 0 means there is no limit. If no DISCARD handler is specified and maxAttempts > 0, the message will be finished automatically when the number of attempts has been exhausted.
- messageTimeout : ( Number|Null default=null ) Message timeout in ms, or null for no timeout.
- sampleRate : ( Number|Null default=null ) Deliver a percentage of all messages received to this connection. 1 <= sampleRate <= 99.
- requeueDelay : ( Number|Null default=5 ) The delay in seconds. This is how long nsqd will hold on to the message before attempting it again.
- host : ( String default="127.0.0.1" ) Host of a nsqd.
- port : ( Number default=4150 ) Port of a nsqd.
- deflate : ( Boolean default=false ) Use zlib Deflate compression.
- deflateLevel : ( Number default=6 ) The zlib Deflate compression level.
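As a sketch of how several of these options might be combined (the client id, namespace, ignored topic and addresses are placeholders for illustration, not defaults of the module):
var NsqLogger = require( "nsq-logger" );
var logger = new NsqLogger({
	clientId: "myFooClient",
	// hypothetical namespace to keep this environment separate on a shared cluster
	namespace: "staging",
	// hypothetical topics that should not be read
	ignoreTopics: [ "healthchecks" ],
	// poll nsqlookupd for new topics every 30 seconds
	lookupdPollInterval: 30,
	lookupdTCPAddresses: [ "127.0.0.1:4160" ],
	lookupdHTTPAddresses: [ "127.0.0.1:4161" ],
	// allow 5 messages to be processed at once
	maxInFlight: 5
});
logger.on( "message", function( topic, data, done ){
	// process the message, then finish it
	done();
});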
Methods
.activate()
Activate the module
Return
( Boolean ): true if it is now activated, false if it was already active.
.deactivate()
Deactivate the module
Return
( Boolean ): true if it is now deactivated, false if it was already inactive.
.active()
Test if the module is currently active
Return
( Boolean ): Is active?
.destroy( cb )
This will stop all readers and disconnect every connection from nsq
Arguments
- cb : ( Function ) Called after destroy
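A possible way to combine these methods for pause/resume and a graceful shutdown; the signal handling around .destroy() is an assumption for illustration, only the logger methods themselves are part of the module:
// pause processing without tearing down the connections
logger.deactivate();
// ...and resume later
logger.activate();
// stop all readers and close the nsq connections before the process exits
process.on( "SIGINT", function(){
	logger.destroy( function(){
		process.exit( 0 );
	});
});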
Events
message
The main event to catch and process messages from all topics.
Arguments
- topic : ( String ) The topic of this message
- data : ( String|Object|Array ) The message content. It tries to JSON.parse the message if possible; otherwise it will just be a string.
- done : ( Function ) You have to call this function once the message has been processed. This will remove the message from the queue; otherwise it will be requeued. If you pass an argument, e.g. done( new Error( "Dooh!" ) ), it will be interpreted as an error and the message will be requeued immediately.
Example:
logger.on( "message", function( topic, data, done ){
myDB.write( "INSERT INTO " + topic + " VALUES ( " + data + " )", done );
});
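If the processing step can fail, the requeue behavior described above might be used like this; myDB is a hypothetical database client used only for illustration:
logger.on( "message", function( topic, data, done ){
	// myDB.write is assumed to call back with ( err )
	myDB.write( "INSERT INTO " + topic + " VALUES ( " + data + " )", function( err ){
		if( err ){
			// passing an error requeues the message immediately
			return done( err );
		}
		// finish the message so it is removed from the queue
		done();
	});
});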
ready
Emitted once the list of topics has been received and the readers are created and connected.
This is just an internal helper. The list method of the underlying nsq-topics instance will also wait for the first response. The events add, remove and change are active after this first response.
Example:
logger.on( "ready", function( err ){
});
Properties
logger.config
Type: ( Config )
This is the internally used configuration.
Attributes
See logger config
logger.Writer
Type: ( NsqWriter )
To write messages you can use the internal writer instance.
For details see Writer below.
logger.Topics
Type: ( NsqTopics )
The logger uses a module called nsq-topics
to sync the existing topics and generate the readers for each topic.
You can grab the internally used instance with logger.Topics.
For details see nsq-topics.
logger.ready
Type: ( Boolean )
The logger is ready to use
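A sketch that ties the ready state to the internal writer; the topic name and payload are placeholders:
var writer = logger.Writer;
writer.connect();
logger.on( "ready", function( err ){
	if( err ){ return console.error( err ); }
	// all topic readers are created and connected now
	console.log( "logger ready:", logger.ready );
	writer.publish( "topic23", { hello: "nsq-logger" } );
});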
Writer
Initialize
var NsqWriter = require( "nsq-logger/writer" )
Example:
var NsqWriter = require( "nsq-logger/writer" );
var config = {
clientId: "myFooClient",
host: "127.0.0.1",
port: 4150
};
var writer = new NsqWriter( config );
writer.connect();
writer.publish( "topic23", "Do some Stuff!" );
Config
- clientId : ( String|Null required ) An identifier used to disambiguate this client.
- namespace : ( String|Null default=null ) Internally prefix the nsq topics. This is handled transparently, but makes it possible to separate different environments from each other, e.g. you can run a "staging" and a "live" environment on one nsq cluster (see the sketch after this list).
- active : ( Boolean default=true ) Enable or disable the nsq recorder.
- host : ( String default="127.0.0.1" ) Host of a nsqd.
- port : ( Number default=4150 ) Port of a nsqd.
- deflate : ( Boolean default=false ) Use zlib Deflate compression.
- deflateLevel : ( Number default=6 ) The zlib Deflate compression level.
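To illustrate the namespace option, a sketch with two writers that publish to the same topic name but stay separated on one cluster; the namespace values follow the "staging"/"live" example above and are placeholders:
var NsqWriter = require( "nsq-logger/writer" );
var stagingWriter = new NsqWriter( { clientId: "myFooClient", namespace: "staging" } );
var liveWriter = new NsqWriter( { clientId: "myFooClient", namespace: "live" } );
// the namespace prefix is added transparently, so both can use "topic23"
stagingWriter.connect().publish( "topic23", "staging message" );
liveWriter.connect().publish( "topic23", "live message" );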
Methods
.connect()
You have to connect the writer before publishing data
Return
( Writer ): returns itself for chaining
.disconnect()
Disconnect the client
Return
( Writer ): returns itself for chaining
.publish()
Publish data to a topic. You have to connect the writer before publishing data.
Arguments
- topic : ( String ) Topic name
- data : ( String|Object|Array ) Data to publish. If it's not a string it will be JSON stringified.
- cb : ( Function ) Called after a successful publish
Return
( Writer ): returns itself for chaining
Example:
writer
.connect()
.publish(
"hello",
JSON.stringify( { to: [ "nsq-logger" ] } )
);
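Since non-string data is JSON stringified automatically, the object can also be passed directly; a sketch with the optional callback, which fires after a successful publish:
writer.publish( "hello", { to: [ "nsq-logger" ] }, function(){
	// called after the message was published
	console.log( "published" );
});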
.activate()
Activate the module
Return
( Boolean ): true if it is now activated, false if it was already active.
.deactivate()
Deactivate the module
Return
( Boolean ): true if it is now deactivated, false if it was already inactive.
.active()
Test if the module is currently active
Return
( Boolean ): Is active?
.destroy( cb )
Disconnect and remove all event listeners
Arguments
- cb : ( Function ) Called after destroy
Events
message
The main event to catch and process messages from all topics.
Arguments
- topic : ( String ) The topic of this message
- data : ( String|Object|Array ) The message content. A String or parsed JSON data.
- done : ( Function ) You have to call this function once the message has been processed. This will remove the message from the queue; otherwise it will be requeued. If you pass an argument, e.g. done( new Error( "Dooh!" ) ), it will be interpreted as an error and the message will be requeued immediately.
- msg : ( Message ) The raw nsqjs message.
Example:
logger.on( "message", function( topic, data, done ){
myDB.write( "INSERT INTO " + topic + " VALUES ( " + data + " )", done );
});
ready
Emitted once the list of topics has been received and the readers are created and connected.
This is just an internal helper. The list method of the underlying nsq-topics instance will also wait for the first response. The events add, remove and change are active after this first response.
Example:
writer.on( "ready", function( err ){
});
Properties
writer.ready
Type: ( Boolean )
The writer is ready to use
writer.connected
Type: ( Boolean )
The writer is connected to nsqd
Reader
Initialize
var NsqReader = require( "nsq-logger/reader" )
var reader = new NsqReader( topic, channel, config )
Example:
var NsqReader = require( "nsq-logger/reader" );
var config = {
clientId: "myFooClient",
lookupdTCPAddresses: "127.0.0.1:4160",
lookupdHTTPAddresses: "127.0.0.1:4161"
};
var reader = new NsqReader( "topic23", "channel42", config );
reader.on( "message", function( topic, data, done ){
done();
});
reader.connect();
Parameters
NsqReader( topic, channel, config )
- topic : ( String required ) The topic to listen to
- channel : ( String required ) The nsq channel to use or create
- config : ( Object|Config ) A plain configuration object or an existing config instance
Config
- clientId : ( String|Null required ) An identifier used to disambiguate this client.
- namespace : ( String|Null default=null ) Internally prefix the nsq topics. This is handled transparently, but makes it possible to separate different environments from each other, e.g. you can run a "staging" and a "live" environment on one nsq cluster.
- active : ( Boolean default=true ) Enable or disable the nsq recorder.
- maxInFlight : ( Number default=1 ) The maximum number of messages to process at once. This value is shared between nsqd connections. It's highly recommended that this value is greater than the number of nsqd connections.
- heartbeatInterval : ( Number default=30 ) The frequency in seconds at which nsqd will send heartbeats to this Reader.
- lookupdTCPAddresses : ( String[] default=[ "127.0.0.1:4160", "127.0.0.1:4162" ] ) A list of nsqlookupd servers (TCP addresses).
- lookupdHTTPAddresses : ( String[] default=[ "127.0.0.1:4161", "127.0.0.1:4163" ] ) A list of nsqlookupd servers (HTTP addresses).
- maxAttempts : ( Number default=10 ) The number of times a message can be requeued before it will be handed to the DISCARD handler and then automatically finished. 0 means there is no limit. If no DISCARD handler is specified and maxAttempts > 0, the message will be finished automatically when the number of attempts has been exhausted.
- messageTimeout : ( Number|Null default=null ) Message timeout in ms, or null for no timeout.
- sampleRate : ( Number|Null default=null ) Deliver a percentage of all messages received to this connection. 1 <= sampleRate <= 99.
- requeueDelay : ( Number|Null default=5 ) The delay in seconds. This is how long nsqd will hold on to the message before attempting it again.
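A sketch of a reader tuned for slow processing, using only options listed above; the topic, channel and client id are placeholders:
var NsqReader = require( "nsq-logger/reader" );
var reader = new NsqReader( "topic23", "channel42", {
	clientId: "myFooClient",
	// allow 10 messages in flight at once
	maxInFlight: 10,
	// hand the message to the DISCARD handler after 5 attempts
	maxAttempts: 5,
	// wait 30 seconds before a requeued message is attempted again
	requeueDelay: 30,
	// per-message timeout of 60000 ms
	messageTimeout: 60000
});
reader.connect();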
Methods
.connect()
You have to connect the reader before it will receive messages
Return
( Reader ): returns itself for chaining
.disconnect()
Disconnect the client
Return
( Reader ): returns itself for chaining
.activate()
Activate the module
Return
( Boolean ): true if it is now activated, false if it was already active.
.deactivate()
Deactivate the module
Return
( Boolean ): true if it is now deactivated, false if it was already inactive.
.active()
Test if the module is currently active
Return
( Boolean ): Is active?
.destroy( cb )
Disconnect and remove all event listeners
Arguments
- cb : ( Function ) Called after destroy
Events
message
The main event to catch and process messages from the defined topic.
Arguments
- data : ( String|Object|Array ) The message content. A String or parsed JSON data.
- done : ( Function ) You have to call this function once the message has been processed. This will remove the message from the queue; otherwise it will be requeued. If you pass an argument, e.g. done( new Error( "Dooh!" ) ), it will be interpreted as an error and the message will be requeued immediately.
Example:
reader.on( "message", function( data, done ){
myDB.write( "INSERT INTO mylogs VALUES ( " + data + " )", done );
});
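Because data arrives either as a plain string or as already parsed JSON, a handler might branch on the type; a minimal sketch:
reader.on( "message", function( data, done ){
	if( typeof data === "string" ){
		// the payload could not be parsed as JSON
		console.log( "raw:", data );
	} else {
		// the payload was parsed to an Object or Array
		console.log( "parsed:", JSON.stringify( data ) );
	}
	// always finish the message so it is not requeued
	done();
});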
ready
Emitted once the list of topics has been received and the readers are created and connected.
This is just an internal helper. The list method of the underlying nsq-topics instance will also wait for the first response. The events add, remove and change are active after this first response.
Example:
reader.on( "ready", function( err ){
});
Properties
reader.ready
Type: ( Boolean )
The reader is ready to use
reader.connected
Type: ( Boolean )
The reader is connected to nsqd
TODOs / Ideas
- more tests
- use with promises
Release History
| Version | Date | Description |
|---|---|---|
| 1.0.0 | 2019-01-08 | Make the module compatible with node:10 but no longer compatible with node < 5 |
| 0.1.3 | 2017-07-25 | Small fix to catch JSON stringify errors within Writer.publish |
| 0.1.2 | 2016-07-22 | Fixed a stupid error with the host config of the writer |
| 0.1.1 | 2016-07-19 | Removed debugging output |
| 0.1.0 | 2016-07-15 | Updated dependencies Issue#2 and optimized activate Issue#3 |
| 0.0.7 | 2016-01-20 | Added raw nsqjs Message as last argument to the message event |
| 0.0.6 | 2015-12-04 | Bugfix on setting an array configuration; added code banner |
| 0.0.5 | 2015-12-03 | Added namespaces and made multiple parallel logger instances possible |
| 0.0.4 | 2015-12-03 | Configuration bugfix |
| 0.0.3 | 2015-12-02 | Updated object tests |
| 0.0.2 | 2015-12-02 | Internal restructure and docs |
| 0.0.1 | 2015-12-02 | Initial version |
Initially Generated with generator-mpnodemodule
Other projects
| Name | Description |
|---|---|
| nsq-topics | Nsq helper to poll a nsqlookupd service for all its topics and mirror it locally. |
| nsq-nodes | Nsq helper to poll a nsqlookupd service for all its nodes and mirror it locally. |
| node-cache | Simple and fast NodeJS internal caching. Node internal in memory cache like memcached. |
| nsq-watch | Watch one or many topics for unprocessed messages. |
| rsmq | A really simple message queue based on redis |
| redis-heartbeat | Pulse a heartbeat to redis. This can be used to detach or attach servers to nginx or similar problems. |
| systemhealth | Node module to run simple custom checks for your machine or its connections. It will use redis-heartbeat to send the current state to redis. |
| rsmq-cli | A terminal client for rsmq |
| rest-rsmq | REST interface for rsmq. |
| redis-sessions | An advanced session store for NodeJS and Redis |
| connect-redis-sessions | A connect or express middleware to simply use the redis sessions. With redis sessions you can handle multiple sessions per user_id. |
| redis-notifications | A redis based notification engine. It implements the rsmq-worker to safely create notifications and recurring reports. |
| hyperrequest | A wrapper around hyperquest to handle the results |
| task-queue-worker | A powerful tool for background processing of tasks that are run by making standard http requests |
| soyer | Soyer is a small lib for server side use of Google Closure Templates with node.js. |
| grunt-soy-compile | Compile Google Closure Templates ( SOY ) including the handling of XLIFF language files. |
| backlunr | A solution to bring Backbone Collections together with the browser fulltext search engine Lunr.js |
| domel | A simple dom helper if you want to get rid of jQuery |
| obj-schema | Simple module to validate an object by a predefined schema |
The MIT License (MIT)
Copyright © 2015 M. Peter, http://www.tcs.de
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.