Manage and continuously rebuild your ElasticSearch indexes with Chewy and Capistrano v3.
An API-driven multitenant identity, custom content distribution/management and reporting platform powered by Rails, React, GraphQL and ElasticSearch
Adds autocomplete to your project. The base capability can perform multiple queries per request.
Ruby client for Elasticsearch. See the `elasticsearch` gem for full integration.
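As a rough sketch of what the full `elasticsearch` gem integration looks like (the cluster URL, index name, and document below are made up for illustration):

```ruby
require 'elasticsearch'

# Connect to a cluster; the URL is an illustrative default.
client = Elasticsearch::Client.new(url: 'http://localhost:9200', log: true)

# Index a document, refresh, and run a simple match query.
client.index(index: 'books', id: 1, body: { title: 'Ruby and Elasticsearch' })
client.indices.refresh(index: 'books')

response = client.search(index: 'books', body: {
  query: { match: { title: 'ruby' } }
})
puts response['hits']['hits']
```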
Installs an embedded version of Elasticsearch into your project
Ruby integrations for Elasticsearch (client, API, etc.)
This gem offers a shim to set up Searchkick to work with a Bonsai Elasticsearch cluster. The bonsai-searchkick gem automatically sets up the Searchkick client correctly so users don't need to worry about configuring it in their code or writing an initializer. Further details and documentation can be found on this gem's GitHub repository.
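As a rough sketch of what that setup looks like in practice: add both gems and use Searchkick as usual. The model, the query, and the assumption that the Bonsai cluster URL comes from an environment variable supplied by the Bonsai add-on are all illustrative.

```ruby
# Gemfile (illustrative)
gem 'searchkick'
gem 'bonsai-searchkick'

# app/models/product.rb, a hypothetical model; Searchkick is used as usual,
# while the shim points it at the Bonsai cluster (typically via an environment
# variable supplied by the Bonsai add-on, an assumption here).
class Product < ApplicationRecord
  searchkick
end

Product.reindex            # build the Elasticsearch index
Product.search('jacket')   # query it through Searchkick
```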
Generates bulk-import files (including nested documents) from MySQL for the Elasticsearch bulk API
A general way to use elasticsearch
Ruby gem for building complex structures that will end up as hashes (initially devised for ElasticSearch search requests)
Adds ElasticSearch support to the Polipus crawler
An ElasticSearch Client with ThriftClient-like failover handling.
This gem is a Logstash plugin that must be installed on top of the Logstash core pipeline using `$LS_HOME/bin/plugin install gemname`; it is not a stand-alone program
Elasticsearch package for the metacrunch ETL toolkit.
Take a peek into your Elasticsearch indices.
Enhances elasticsearch-rails with `elasticsearch_json_changes` to translate attribute changes into document updates
Eson is a modular client for ElasticSearch. It provides an implementation of the query language as well as multiple client implementations for HTTP and native access.
Rails 3 generator and rake tasks to install, start and interact with elasticsearch
A dead simple DSL for integrating ActiveModel with ElasticSearch.
Lets you load data from the command line into data stores such as Elasticsearch, MongoDB, HBase, MySQL, Kafka, and others.
A small wrapper around Elasticsearch
A Logstash-like tool for moving events from Kafka into Elasticsearch.
Query builder for elasticsearch-rails
Ruby client for Elasticsearch. See the `elasticsearch` gem for full integration.
Antbird is yet another low-level API client for the Elasticsearch HTTP interface.
Backs up ElasticSearch to S3
Elasticsearch slow log parser
This gem is a Logstash plugin that must be installed on top of the Logstash core pipeline using `$LS_HOME/bin/plugin install gemname`; it is not a stand-alone program
Persistence layer for Ruby models and Elasticsearch.
ActiveModel/Record integrations for Elasticsearch.
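For context, a minimal sketch of the ActiveRecord side of these integrations (the `Article` model and the query are illustrative):

```ruby
require 'elasticsearch/model'

class Article < ApplicationRecord
  include Elasticsearch::Model             # adds .search, mappings, index helpers
  include Elasticsearch::Model::Callbacks  # keep the index in sync on save/destroy
end

Article.__elasticsearch__.create_index!(force: true)
Article.import                             # bulk-index existing records

Article.search('elasticsearch').records.each do |article|
  puts article.title
end
```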
Dumps records to Elasticsearch via the Elasticsearch Ruby client. Compatible with Elasticsearch 1.x, 2.x, and 5.x.
Ruby on Rails integrations for Elasticsearch.
Log2json lets you read, filter and send logs as JSON objects via Unix pipes. It is inspired by Logstash and is meant to be compatible with it at the JSON event/record level so that it can easily work with Kibana.

Reading logs is done via a shell script (e.g., `tail`) running in its own process. You then configure (see the `syslog2json` or the `nginxlog2json` script for examples) and run your filters in Ruby using the `Log2Json` module and its contained helper classes. `Log2Json` reads the logs from stdin (one log record per line), parses the log lines into JSON records, and then serializes and writes the records to stdout, which can then be piped to another process for further processing or for sending them somewhere else.

Currently, Log2json ships with a `tail-log` script that can be run as the input process. It is the same as using the Linux `tail` utility with the `-v -F` options, except that it also tracks the positions (as the numbers of lines read from the beginning of the files) in a few files in the file system, so that if the input process is interrupted, it can continue reading from where it left off the next time the files are followed. This feature is similar to the sincedb feature in Logstash's file input. Note: if you don't need the tracking feature (i.e., you are fine with always tailing from the end of the file with `-v -F -n0`), then you can just use the `tail` utility that comes with your Linux distribution (or, more specifically, the `tail` from GNU coreutils). Other versions of the `tail` utility may also work, but are not tested. The input protocol expected by Log2json is very simple and documented in the source code.

** The `tail-log` script uses a patched version of `tail` from the GNU coreutils package. A binary of the `tail` utility compiled for Ubuntu 12.04 LTS is included with the Log2json gem. If the binary doesn't work for your distribution, you'll need to get GNU coreutils 8.13, apply the patch (it can be found in the src/ directory of the installed gem), and then replace the bin/tail binary in the directory of the installed gem with your version of the binary. ** P.S. If you know of a way to configure and compile ONLY the tail program in coreutils, please let me know! The reason I'm not building tail after gem installation is that it takes too long to configure && make, because that actually builds every utility in coreutils.

For shipping logs to Redis, there's the `lines2redis` script that can be used as the output process in the pipe. For shipping logs from Redis to ElasticSearch, Log2json provides a `redis2es` script.

Finally, here's an example of Log2json in action. From a client machine:

    tail-log /var/log/{sys,mail}log /var/log/{kern,auth}.log | syslog2json | queue=jsonlogs \
      flush_size=20 \
      flush_interval=30 \
      lines2redis host.to.redis.server 6379 0   # use redis DB 0

On the Redis server:

    redis_queue=jsonlogs redis2es host.to.es.server

Resources that help with writing Log2json filters:
- look at the log2json.rb source and the example filters
- http://grokdebug.herokuapp.com/
- http://www.ruby-doc.org/stdlib-1.9.3/libdoc/date/rdoc/DateTime.html#method-i-strftime
This is an output plugin that posts to Amazon Elasticsearch Service.
Rails 2.3 ActiveRecord integrations for Elasticsearch
Elasticsearch integration for Ruby on Rails by Platanus
Elasticshell provides a command-line shell 'es' for connecting to and querying an Elasticsearch database. The shell will tab-complete Elasticsearch API commands and index/mapping names.
Adds capabilities to manage custom fields and values of various types to database models. Also includes ElasticSearch integration.
Adapter for ElasticSearch
EventMachine support for Stretcher, a fast, elegant ElasticSearch client
Condition-based searching for Elasticsearch models using predicates such as 'name_cont' or 'created_at_gt'
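The gem's exact call shape isn't given here, so the following is a purely hypothetical sketch of what such predicate-based conditions express, in the spirit of Ransack-style matchers ('_cont' for contains, '_gt' for greater than); the method and model names are assumptions:

```ruby
# Hypothetical usage: each predicate key maps to an Elasticsearch condition.
Article.search_by_conditions(
  name_cont: 'elastic',        # e.g. a match/wildcard query on the name field
  created_at_gt: 1.week.ago    # e.g. a range filter on created_at
)
```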
An Elasticsearch full text search index generator for Jekyll.
Elasticsearch index configuration rake tasks for Zilly property data
A tool that takes search queries in the form most users expect and produces ElasticSearch queries that do what most users would expect.
This handler provides facilities for sending events to Elasticsearch
Elasticsearch::Model extensions for Mongoid, adding support for single collection inheritance (by way of multiple indexes), localized fields, and mapped fields extraction.
Simple wrapper for elasticsearch
Elasticsearch Wrapper, with Query & Mapping Builders
Luban::Elasticsearch is a Luban package to manage Elasticsearch installation, deployment and control
Adds ActiveSupport::Notifications.instrument calls to ElasticSearch queries
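A hedged sketch of consuming such instrumentation from the application side; the 'search.elasticsearch' event name is the one emitted by elasticsearch-rails and is only an assumption for this particular gem:

```ruby
require 'active_support/notifications'

# Subscribe to instrumented Elasticsearch queries and log their duration.
# The event name is an assumption; adjust it to whatever this gem emits.
ActiveSupport::Notifications.subscribe('search.elasticsearch') do |name, start, finish, _id, payload|
  duration_ms = (finish - start) * 1000
  Rails.logger.info("[#{name}] took #{duration_ms.round(1)}ms: #{payload.inspect}")
end
```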