Implements search for your models easily, using Elasticsearch as the engine.
I will fix this eventually
Logs information gathered from your internet box and stores it in Elasticsearch for display in Kibana.
Elasticsearch alias-based partitions for scalable indexing and searching
Dumps records to Elasticsearch using a URL.
Syphon data from an Arel source into ElasticSearch
Testcontainers makes it easy to create and clean up container-based dependencies for automated tests.
Elasticsearch integration for database layers such as ActiveRecord
Loads records from Elasticsearch, with parallel query support. Allows connecting to servers with self-signed SSL certificates.
A gemified version of elasticsearch-paramedic with Rack support.
Define mappings to turn model instances into indexable search documents
I simply want to be able to control backup and restoration of the Elasticsearch cluster to S3 without any fuss or having to fiddle with curl -XPUT and friends. So here it is, and we now support shared volumes; see the docs on the --fs flag. There are many features I wish to add to this, and if you have any suggestions, please feel free to send them my way!
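For context, the raw snapshot plumbing such a tool wraps (the curl -XPUT calls mentioned above) looks roughly like this with the official elasticsearch-ruby client. This is only a sketch of the underlying Elasticsearch API, not the gem's own interface; repository names, bucket, and paths are illustrative.

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200')

# Register an S3 snapshot repository (bucket name illustrative; requires the repository-s3 plugin).
client.snapshot.create_repository(
  repository: 's3_backups',
  body: { type: 's3', settings: { bucket: 'my-es-backups' } }
)

# Or a shared-filesystem repository, which is what a --fs style option would map onto.
client.snapshot.create_repository(
  repository: 'fs_backups',
  body: { type: 'fs', settings: { location: '/mnt/es_backups' } }
)

# Snapshot the cluster and restore it later (target indices must be closed or absent when restoring).
client.snapshot.create(repository: 's3_backups', snapshot: 'snap-1', wait_for_completion: true)
client.snapshot.restore(repository: 's3_backups', snapshot: 'snap-1')
```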
Fork of ruby-elasticsearch-tracer. OpenTracing instrumentation for the Ruby Elasticsearch client.
Updates the Elasticsearch instance from a .deb or .rpm package on the local machine. Assumes Elasticsearch is running as a service.
Imports Slack logs from exported files or the API into Elasticsearch or other datastores
A modular client for ElasticSearch. It provides an implementation of the Query language as well as multiple client implementations for HTTP and native access.
Elastic Logger: logs to Elasticsearch or files
Command-line tool that helps remove old ES indices and types
A Logstash output plugin that performs event-triggered grooming (a.k.a. pruning) of time-series indices, especially those created by logstash-output-elasticsearch.
ElasticSearch adapter for the General Object Mapper. Current versions of ElasticSearch are supported.
Provides OAuth2 credentials negotiation on an Elasticsearch transport
Log2json lets you read, filter, and send logs as JSON objects via Unix pipes. It is inspired by Logstash and is meant to be compatible with it at the JSON event/record level so that it can easily work with Kibana.

Reading logs is done via a shell script (e.g., `tail`) running in its own process. You then configure (see the `syslog2json` or `nginxlog2json` scripts for examples) and run your filters in Ruby using the `Log2Json` module and its contained helper classes. `Log2Json` reads the logs from stdin (one log record per line), parses the log lines into JSON records, and then serializes and writes the records to stdout, which can then be piped to another process for further processing or for shipping somewhere else.

Currently, Log2json ships with a `tail-log` script that can be run as the input process. It is the same as using the Linux `tail` utility with the `-v -F` options, except that it also tracks the positions (as the number of lines read from the beginning of the files) in a few files in the file system, so that if the input process is interrupted, it can continue reading from where it left off the next time the files are followed. This feature is similar to the sincedb feature in Logstash's file input. Note: if you don't need the tracking feature (i.e., you are fine with always tailing from the end of the file with `-v -F -n0`), then you can just use the `tail` utility that comes with your Linux distribution (or, more specifically, the `tail` from GNU coreutils). Other versions of the `tail` utility may also work, but they have not been tested. The input protocol expected by Log2json is very simple and documented in the source code.

** The `tail-log` script uses a patched version of `tail` from the GNU coreutils package. A binary of the `tail` utility compiled for Ubuntu 12.04 LTS is included with the Log2json gem. If the binary doesn't work for your distribution, you'll need to get GNU coreutils-8.13, apply the patch (it can be found in the src/ directory of the installed gem), and then replace the bin/tail binary in the directory of the installed gem with your version of the binary. ** P.S. If you know of a way to configure and compile ONLY the tail program in coreutils, please let me know! The reason I'm not building tail after gem installation is that `configure && make` takes too long, because it builds every utility in coreutils.

For shipping logs to Redis, there's the `lines2redis` script that can be used as the output process in the pipe. For shipping logs from Redis to Elasticsearch, Log2json provides a `redis2es` script.

Finally, here's an example of Log2json in action. From a client machine:

    tail-log /var/log/{sys,mail}log /var/log/{kern,auth}.log | syslog2json |
      queue=jsonlogs \
      flush_size=20 \
      flush_interval=30 \
      lines2redis host.to.redis.server 6379 0   # use redis DB 0

On the Redis server:

    redis_queue=jsonlogs redis2es host.to.es.server

Resources that help with writing log2json filters:
- look at the log2json.rb source and the example filters
- http://grokdebug.herokuapp.com/
- http://www.ruby-doc.org/stdlib-1.9.3/libdoc/date/rdoc/DateTime.html#method-i-strftime
Powerful and flexible Elasticsearch query factory for your Ruby application
Pcache load probe for cdn-manager using ElasticSearch data.
Count or grep log messages in a specified time range from a Logstash Elasticsearch server.
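Under the hood that amounts to a count (or search) over the time-stamped Logstash indices. A minimal sketch with the official elasticsearch-ruby client, using the Logstash default index pattern and `@timestamp` field; the gem's own command-line interface is not shown here.

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200')

# Count log events matching a pattern within a time range.
result = client.count(
  index: 'logstash-*',
  body: {
    query: {
      bool: {
        must:   { query_string: { query: 'message:"connection refused"' } },
        filter: { range: { '@timestamp' => { gte: 'now-1h', lte: 'now' } } }
      }
    }
  }
)
puts result['count']
```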
Human-friendly query language for Elasticsearch. Formerly known as plunk.
A very simple crawler for RubyGems.org used to demo the power of ElasticSearch at RubyConf 2013
An Elasticsearch wrapper for the Mongoid ODM, based on Slingshot. Makes integration between the Elasticsearch full-text search engine and Mongoid documents seamless and simple.
Sumo Sensu gem for elasticsearch checks
Elasticsearch5 output plugin is an Embulk plugin that loads records read by any input plugin into Elasticsearch. Search for input plugins using the "embulk-input" keyword.
A super lightweight interface to ElasticSearch's HTTP REST API
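Such a wrapper is essentially a thin layer over plain HTTP calls to the REST API. A hedged sketch of the kind of request involved, using only Ruby's standard library; the index name and query are illustrative, and this is not the gem's own API.

```ruby
require 'net/http'
require 'json'
require 'uri'

# Query Elasticsearch's REST API directly; a lightweight client wraps calls like this.
uri = URI('http://localhost:9200/my_index/_search')
request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
request.body = { query: { match: { title: 'ruby' } }, size: 5 }.to_json

response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
hits = JSON.parse(response.body).dig('hits', 'hits')
hits.each { |hit| puts hit['_source'] }
```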
Gum provides a simple DSL for searching your Elasticsearch indices
ElasticSearch output plugin for Fluent event collector
ElasticUtil uses ElasticSearch's /_search and /_bulk APIs to dump and restore indices
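In other words, dumping is a scroll through /_search and restoring is a replay through /_bulk. A rough sketch of that mechanism with the official elasticsearch-ruby client; index names are illustrative, and the gem's own interface may differ.

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200')

# Dump: page through the source index with the scroll API.
docs = []
response = client.search(index: 'source_index', scroll: '1m',
                         body: { size: 500, query: { match_all: {} } })
loop do
  hits = response['hits']['hits']
  break if hits.empty?
  docs.concat(hits)
  response = client.scroll(body: { scroll_id: response['_scroll_id'], scroll: '1m' })
end

# Restore: replay the documents into the target index via the bulk API.
bulk_body = docs.map { |d| { index: { _index: 'restored_index', _id: d['_id'], data: d['_source'] } } }
client.bulk(body: bulk_body) unless bulk_body.empty?
```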
Add Elasticsearch reindex option to ActiveRecord associations
Command-line tool for interacting with Elasticsearch. Still in development; expect bugs and fast releases until 1.0.0.
Create reverse queries and add them to elasticsearch to trigger an event any time a new record matches any of the queries.
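This is Elasticsearch percolation: the queries are stored in an index and matched against each incoming document. A minimal sketch of the underlying mechanism with the official elasticsearch-ruby client; index and field names are illustrative, and the gem's own DSL may differ.

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200')

# An index holding the stored ("reverse") queries needs a percolator field,
# plus mappings for the fields those queries reference.
client.indices.create(
  index: 'alerts',
  body: { mappings: { properties: { query: { type: 'percolator' }, message: { type: 'text' } } } }
)

# Register a reverse query.
client.index(index: 'alerts', id: 'match-error',
             body: { query: { match: { message: 'error' } } }, refresh: true)

# Check any new record against all stored queries in a single percolate search.
matches = client.search(
  index: 'alerts',
  body: { query: { percolate: { field: 'query', document: { message: 'disk error on node 3' } } } }
)
matches['hits']['hits'].each { |hit| puts "triggered: #{hit['_id']}" }
```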
This gem is a Logstash plugin that must be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname; it is not a stand-alone program. This plugin inherits from the elasticsearch input plugin, but adds a tracking_column option like the one in the jdbc input plugin.
Elasticsearch insights made easy
Ruby gem for transferring an Elasticsearch index from one source to another
Ruby block DSL for writing ElasticSearch queries
Ruby integrations for Elasticsearch indices management using close, delete, snapshot, and restore
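Those four operations map directly onto the indices and snapshot APIs of the official elasticsearch-ruby client. A brief hedged sketch of those raw calls; index and repository names are illustrative, and this is not the gem's own interface.

```ruby
require 'elasticsearch'

client = Elasticsearch::Client.new(url: 'http://localhost:9200')

client.indices.close(index: 'logs-2023.01')   # stop serving an index without deleting it
client.indices.delete(index: 'logs-2022.01')  # drop an old index entirely

# Snapshot and restore go through a previously registered repository ('backups' is illustrative).
client.snapshot.create(repository: 'backups', snapshot: 'logs-archive',
                       body: { indices: 'logs-2023.*' }, wait_for_completion: true)
client.snapshot.restore(repository: 'backups', snapshot: 'logs-archive')
```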
Rails Model integration for Elasticsearch.
A declarative interface for building Elasticsearch queries
Iterates over the entire index
ElasticsearchRecord is an ActiveRecord adapter that provides similar functionality for Elasticsearch.
Convert user queries like `(London OR Paris) AND 'John Wick'` into elasticsearch JSON queries
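For illustration, an expression like that typically compiles down to an Elasticsearch bool query. A hedged sketch of that shape as a Ruby hash; the target field and the exact structure the gem emits are assumptions.

```ruby
require 'json'

# Roughly the query DSL shape that `(London OR Paris) AND 'John Wick'` maps onto;
# the searched field ('text') is illustrative, and the gem's actual output may differ.
query = {
  query: {
    bool: {
      must: [
        { bool: { should: [
          { match: { text: 'London' } },
          { match: { text: 'Paris' } }
        ] } },
        { match_phrase: { text: 'John Wick' } }
      ]
    }
  }
}

puts JSON.pretty_generate(query)
```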
Elasticsearch utilities
Better Elasticsearch documents for Ruby