
# loopback-connector-es-kstn
Basic Elasticsearch datasource connector for Loopback.
## Overview of the directory structure

1. The `lib` directory has the entire source code for this connector; this is what lands in your `node_modules` folder when you run `npm install loopback-connector-es --save --save-exact`.
1. The `examples` directory has a loopback app which uses this connector.
    1. It is not copied into your `node_modules` folder!
    1. Similarly, the `examples/server/datasources.json` and `examples/server/datasources.<env>.js` files are there for this demo app to use.
    1. You can copy their content over to `<yourApp>/server/datasources.json` or `<yourApp>/server/datasources.<env>.js` if you want and edit it there, but don't start editing the files inside `examples/server` itself and expect changes to take place in your app!
1. The `test` directory has unit tests; for app-level usage patterns, refer to the `examples` folder.
1. Refer to `.npmignore` if you're still confused about what's part of the published connector and what's not.
## Configuring the connector

1. The `datasources.json` files in this repo mention various configurations:
    * `elasticsearch-ssl`
    * `elasticsearch-plain`
    * `db`
1. Start with `elasticsearch-plain` and then move on to configuring the additional properties that are exemplified in `elasticsearch-ssl`. You can mix & match if you'd like to have mongo and es and memory, all three! These are basics of the "connector" framework in loopback and not something we added.
1. Edit the `model-config.json` file and point the models at the `dataSource` you want to use.

To install the connector in your loopback app:

```
cd <yourApp>
npm install loopback-connector-es --save --save-exact
```
A few notes on the connector settings:

* `index`: you can provide an additional `index` field as an override for your model (see the `CoolModel` mapping below); otherwise the datasource-level `index` is used.
* `type`: documents are stored with `type: ModelName` by default, then you can provide an additional `type` field as an override for your model (again, see the `CoolModel` mapping below).
* `protocol`: `http` or `https` (`http` is the default if none is specified) ... must be `https` if you're using `ssl`.
* Connection classes such as `es-jetty` or `found` or `shield` or `http-aws-es` can be used with the underlying elasticsearch client. NOTE: the relevant package needs to be installed in your project. It's not part of this connector.

For example:

```
"db": {
    "connector": "es",
    "name": "<name>",
    "index": "<index>",
    "hosts": [
        {
            "protocol": "http",
            "host": "127.0.0.1",
            "port": 9200,
            "auth": "username:password"
        }
    ],
    "apiVersion": "<apiVersion>",
    "refreshOn": ["save", "create", "updateOrCreate"],
    "log": "trace",
    "defaultSize": <defaultSize>,
    "requestTimeout": 30000,
    "ssl": {
        "ca": "./../cacert.pem",
        "rejectUnauthorized": true
    },
    "amazonES": {
        "region": "us-east-1",
        "accessKey": "AKID",
        "secretKey": "secret"
    },
    "mappings": [
        {
            "name": "UserModel",
            "properties": {
                "realm": {"type": "string", "index": "not_analyzed"},
                "username": {"type": "string", "index": "not_analyzed"},
                "password": {"type": "string", "index": "not_analyzed"},
                "email": {"type": "string", "analyzer": "email"}
            }
        },
        {
            "name": "CoolModel",
            "index": <useSomeOtherIndex>,
            "type": <overrideTypeName>,
            "properties": {
                "realm": {"type": "string", "index": "not_analyzed"},
                "username": {"type": "string", "index": "not_analyzed"},
                "password": {"type": "string", "index": "not_analyzed"},
                "email": {"type": "string", "analyzer": "email"}
            }
        }
    ],
    "settings": {
        "analysis": {
            "filter": {
                "email": {
                    "type": "pattern_capture",
                    "preserve_original": 1,
                    "patterns": [
                        "([^@]+)",
                        "(\\p{L}+)",
                        "(\\d+)",
                        "@(.+)"
                    ]
                }
            },
            "analyzer": {
                "email": {
                    "tokenizer": "uax_url_email",
                    "filter": ["email", "lowercase", "unique"]
                }
            }
        }
    }
}
```
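If you want a quick sanity check outside of a full loopback app, the same settings can be handed to `loopback-datasource-juggler` directly. The snippet below is only a sketch: the index name, host, and model fields are placeholders and not something this README prescribes.

```
// standalone sketch: attach the connector to a juggler DataSource and round-trip a document
var DataSource = require('loopback-datasource-juggler').DataSource;

var ds = new DataSource(require('loopback-connector-es'), {
  name: 'es',
  index: 'myindex',                                             // placeholder index
  hosts: [{ protocol: 'http', host: '127.0.0.1', port: 9200 }],
  apiVersion: '2.2'                                             // match your ES version
});

var UserModel = ds.define('UserModel', {
  realm:    { type: String },
  username: { type: String },
  email:    { type: String }
});

UserModel.create({ realm: 'liverpool', username: 'george', email: 'george@example.com' },
  function (err, user) {
    if (err) return console.error(err);
    UserModel.find({ where: { username: 'george' } }, function (err, users) {
      console.log(err || users);
    });
  });
```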
Refer to `/examples/server/datasources.json` for more hints.

## About the example app

The `examples` directory contains a loopback app which uses this connector.

As a developer, you may want a short-lived ES instance that is easy to tear down when you're finished dev testing. We recommend docker to facilitate this.
**Pre-requisites:** You will need docker-engine and docker-compose installed on your system.

**Step-1:** pick the combination of node and elasticsearch versions you want to run:

```
# combination of node v0.10.46 with elasticsearch v1
export NODE_VERSION=0.10.46
export ES_VERSION=1
echo 'NODE_VERSION' $NODE_VERSION && echo 'ES_VERSION' $ES_VERSION

# similarly, feel free to try relevant combinations:
## of node v0.10.46 with elasticsearch v2
## of node v0.12 with elasticsearch v2
## of node v4 with elasticsearch v2
## of node v5 with elasticsearch v2
## elasticsearch v5 will probably not work as there isn't an `elasticsearch` client for it, as of this writing
## etc.
```
**Step-2:** clone the repo, install dependencies, and bring everything up with the following `docker-compose` commands:

```
git clone https://github.com/strongloop-community/loopback-connector-elastic-search.git myEsConnector
cd myEsConnector/examples
npm install
docker-compose up
```
**Step-3:** visit `localhost:3000/explorer` and you will find our example loopback app running there.

Notes:

1. To try a different sample configuration, empty out `examples/server/datasources.json` so that it only has the following content remaining: `{}`
1. Then set the `NODE_ENV` environment variable on your local/host machine:
    * `NODE_ENV=sample-es-plain-1` if you want to use `examples/server/datasources.sample-es-plain-1.js`
    * `NODE_ENV=sample-es-plain-2` if you want to use `examples/server/datasources.sample-es-plain-2.js`
    * `NODE_ENV=sample-es-ssl-1` if you want to use `examples/server/datasources.sample-es-ssl-1.js`
1. In your own app, configure `datasources.json` or `datasources.<env>.js` based on what you learn from these sample files; a minimal sketch of such an `<env>.js` file follows this list.
1. Technically, to run the example, you don't need to set `NODE_ENV` if you won't be configuring via the `.<env>.js` files ... configuring everything within `datasources.json` is perfectly fine too. Just remember that you will lose the ability to have inline comments and will have to use double-quotes if you stick with `.json`.
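For orientation only, here is a minimal sketch of what such an `<env>.js` datasource file could look like; the filename suffix, index name, and `apiVersion` are placeholders rather than values from the repo's samples.

```
// server/datasources.my-env.js -- pick the suffix to match NODE_ENV=my-env
module.exports = {
  // unlike datasources.json, inline comments are allowed here
  db: {
    name: 'db',
    connector: 'es',
    index: 'myindex',                                             // placeholder index name
    hosts: [{ protocol: 'http', host: '127.0.0.1', port: 9200 }],
    apiVersion: '2.2'                                             // match the ES version you run
  }
};
```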
You can also run the example app against the dockerized ES instances that the tests use:

```
git clone https://github.com/strongloop-community/loopback-connector-elastic-search.git myEsConnector
cd myEsConnector
docker-compose -f docker-compose-for-tests.yml up

# in another terminal window or tab
cd myEsConnector/examples
npm install
DEBUG=boot:test:* node server/server.js
```

Then visit `localhost:3000/explorer` and you will find our example loopback app running there.

To run the example against your own (for example, hosted and SSL-enabled) ES instance:

```
git clone https://github.com/strongloop-community/loopback-connector-elastic-search.git myEsConnector
cd myEsConnector/examples
npm install
```

Create an index in your ES instance (and delete it again when you're done experimenting):

```
curl -X POST https://username:password@my.es.cluster.com/shakespeare
curl -X DELETE https://username:password@my.es.cluster.com/shakespeare
```
Then:

1. Set the `apiVersion` field in `examples/server/datasources.json` so that it matches the version of ES you are running.
2. You will need a `cacert.pem` file for communicating securely (https) with your ES instance. Download the certificate chain for your ES server using this sample command (it will need to be edited to use your provider):

    ```
    cd myEsConnector
    openssl s_client -connect my.es.cluster.com:9243 -showcerts | tee cacert.pem
    ```

3. Use `ctrl+c` to exit once you see output along the lines of:

    ```
    ---
    No client certificate CA names sent
    ---
    ```

4. Run:

    ```
    cd myEsConnector/examples
    DEBUG=boot:test:* node server/server.js
    ```

5. The `examples/server/boot/boot.js` file will automatically populate data for UserModels on your behalf when the server starts.

You can then search your ES instance directly, for example:

```
curl -X POST username:password@my.es.cluster.com/shakespeare/_search -d '{"query": {"match_all": {}}}'
```

or with a search body like `{"q" : "friends, romans, countrymen"}`.
## The `refresh` option

From version `1.3.4`, a `refresh` option is available which supports instant search after `create` and `update`. This option is configurable and you can activate or deactivate it according to your needs. By default `refresh` is `true`, which makes the response come back only after documents are indexed (searchable). To learn more about `refresh`, go through Elasticsearch's documentation on the `refresh` parameter.

**Datasource file:** pass a `refreshOn` array in the datasource file listing the method names for which you want this to be `true`:

```
"es": {
    "name": "es",
    "refreshOn": ["save", "create", "updateOrCreate"],
    .....
```

**Model.json file:** configurable per model and per operation (`true`, `false`, `wait_for`):

```
"elasticsearch": {
    "create": {
        "refresh": false
    },
    "destroy": {
        "refresh": false
    },
    "destroyAll": {
        "refresh": "wait_for"
    }
}
```
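As an illustration of the effect (the model and field values here are made up, not from the repo):

```
// with refresh enabled for "create", a freshly created document should be searchable immediately
UserModel.create({ username: 'ringo', realm: 'liverpool' }, function (err, user) {
  if (err) return console.error(err);
  UserModel.find({ where: { username: 'ringo' } }, function (err, found) {
    if (err) return console.error(err);
    console.log(found.length); // expect 1 -- create waited for the index refresh
  });
});
```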
## Troubleshooting

1. Do you have both `elasticsearch-ssl` and `elasticsearch-plain` in your `datasources.json` file? You just need one of them (not both), based on how you've set up your ES instance.
1. Did you edit `model-config.json` to point at the datasource you configured? Maybe you are using a different or misspelled name than what you thought you had!
1. Did you set an `apiVersion` field in `datasources.json` that matches the version of ES you are running?
1. Maybe the underlying `elasticsearch` client is outdated; delete the `elasticsearch` sub-dependency from the `<yourApp>/node_modules/loopback-connector-es/node_modules` folder and then install the latest client:
    1. `cd <yourApp>/node_modules/loopback-connector-es/node_modules`
    1. delete the `elasticsearch` folder
        1. unix/mac quickie: `rm -rf elasticsearch`
    1. `npm install --save --save-exact https://github.com/elastic/elasticsearch-js.git`
    1. check which ES versions the installed client supports:
        1. `cat elasticsearch/package.json | grep -A 5 supported_es_branches`
        2. on other platforms, look into `elasticsearch/package.json` and search for the `supported_es_branches` json block.
    1. `cd <yourApp>` to get back to your app's root.
## Testing

Edit `test/resource/datasource-test.json` to point at your ES instance and then run `npm test`.

Or, to test against dockerized ES instances, bring them up:

```
docker-compose -f docker-compose-for-tests.yml up
```

then pick the settings you want in `test/init.js`:

```
var settings = require('./resource/datasource-test.json'); // comment this out if you'll be using either of the following
//var settings = require('./resource/datasource-test-v1-plain.json');
//var settings = require('./resource/datasource-test-v2-plain.json');
```

and finally:

```
npm test
docker-compose -f docker-compose-for-tests.yml down
```
## Contributing

* The release flow for this repo:
    * merge `master` into `develop` ... if they are behind, before starting a `feature` branch
    * create `feature` branches from the `develop` branch
    * merge `feature` into `develop`, then create a `release` branch to:
        1. update the changelog
        1. close related issues and mention release version
        1. update the readme
        1. fix any bugs from final testing
        1. commit locally and run `npm-release x.x.x -m "<some comment>"`
        1. merge `release` into both `master` and `develop`
        1. push `master` and `develop` to GitHub
* Please submit pull requests against the `develop` branch, if possible.
* If you want to work off the `master` branch ... I understand and I can't stop you. I only hope that there is a good reason, like `develop` not being up-to-date with `master`, for the work you want to build upon.
* `npm-release <versionNumber> -m <commit message>` may be used to publish. Publishing to NPM should happen from the `master` branch. It should ideally only happen when there is something release worthy. There's no point in publishing just because of changes to the `test` or `examples` folder or any other such entities that aren't part of the "published module" (refer to `.npmignore`) to begin with.
## Debugging / Logging

1. Set the `"log": "trace"` field in your datasources file, or omit it; the value is handed to the underlying elasticsearch client, so refer to its logging documentation for the available levels.
1. Set the `DEBUG=loopback:connector:elasticsearch` environment variable for connector-level debug output.
    1. For example, try running tests with and without it, to see the difference:
        * `DEBUG=loopback:connector:elasticsearch npm test`
        * `npm test`
## How the tests are organized

Test files are prefixed with `01` or `02` etc. in order to run them in that order, by leveraging default alphabetical sorting.

The `02.basic-querying.test.js` file uses two models to test various CRUD operations that any connector must provide, like `find()`, `findById()`, `findByIds()`, `updateAttributes()` etc.

1. the two models are `User` and `Customer`
2. their ES mappings are laid out in `test/resource/datasource-test.json`
3. their loopback definitions can be found in the first `before` block that performs setup in the `02.basic-querying.test.js` file ... these are the equivalent of a `MyModel.json` in your real loopback app; a rough sketch of that kind of setup is shown below.
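For orientation only, here is a sketch of the sort of setup such a `before` block performs; the datasource wiring and the field list are assumptions for illustration, not copied from the test file:

```
// a sketch of a mocha `before` block that defines a model against this connector
var DataSource = require('loopback-datasource-juggler').DataSource;
var db, User;

before(function () {
  // in the real tests the datasource settings come from test/init.js
  db = new DataSource(require('loopback-connector-es'),
    require('./resource/datasource-test.json'));

  // the equivalent of a User.json model definition in a real loopback app
  User = db.define('User', {
    name:  { type: String },
    email: { type: String },
    role:  { type: String },
    order: { type: Number }
  });
});
```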
**How do we deal with the `id` for a model, and whether it is generated or not?** Elasticsearch generates a `_uid` for every document on its own. Without some sort of es-field-level-scripting-on-index (if that is possible at all) ... I am not sure how we could ask elasticsearch to take over auto-generating an id-like value for any arbitrary field! So the connector is set up such that adding `id: {type: String, generated: true, id: true}` will tell it to use `_uid` as the actual field backing the `id` ... you can keep using the `model.id` abstraction and in the background `_uid` values are mapped to it.

**Can some other field be marked with `generated: true` and `id: true`?**

1. No! The connector isn't coded that way right now ... while it is an interesting idea to couple any such field with ES's `_uid` field inside this connector ... I am not sure if this is the right thing to do. If you had `objectId: {type: String, generated: true, id: true}` then you won't find a real `objectId` field in your ES documents. Would that be ok? Wouldn't that confuse developers who want to write custom queries and run 3rd-party apps against their ES instance? "Don't use `objectId`, use `_uid`" would have to be common knowledge. Is that ok?
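To make the first answer concrete, here is a small sketch; the `Book` model and `title` field are invented for illustration, and `ds` is assumed to be a DataSource backed by this connector:

```
// letting the connector back `id` with Elasticsearch's _uid
var Book = ds.define('Book', {
  id:    { type: String, generated: true, id: true }, // read/written via _uid behind the scenes
  title: { type: String }
});

Book.create({ title: 'Julius Caesar' }, function (err, book) {
  if (err) return console.error(err);
  console.log(book.id); // populated from the ES-generated _uid
});
```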
Release `1.0.6` of this connector updates the underlying `elasticsearch` client version to `11.0.1`.

For this connector, you can configure an `index` name for your ES instance, and the loopback model's name is conveniently/automatically mapped as the ES `type`.

Users must set up `string` fields as `not_analyzed` by default for predictable matches, just like other loopback backends. And if more flexibility is required, multi-field mappings can be used too:
"name" : {
"type" : "multi_field",
"fields" : {
"name" : {"type" : "string", "index" : "not_analyzed"},
"native" : {"type" : "string", "index" : "analyzed"}
}
}
...
// this will treat 'George Harrison' as 'George Harrison' in a search
User.find({order: 'name'}, function (err, users) {..}
// this will treat 'George Harrison' as two tokens: 'george' and 'harrison' in a search
User.find({order: 'name', where: {'name.native': 'Harrison'}}, function (err, users) {..}
Release `1.3.4` adds support for `updateAll` for elasticsearch `v2.3` and above. To make `updateAll` work, you will have to add the options below to your `elasticsearch.yml` config file:

```
script.inline: true
script.indexed: true
script.engine.groovy.inline.search: on
script.engine.groovy.inline.update: on
```
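As a usage sketch (the model, field names, and values here are invented for illustration, not taken from the connector's docs):

```
// bulk-update all matching documents via the standard juggler updateAll(where, data, cb) contract
UserModel.updateAll({ realm: 'quarrymen' }, { realm: 'beatles' }, function (err, info) {
  if (err) return console.error(err);
  console.log('updated %d documents', info.count);
});
```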
## FAQs

TBD