solrcloudpy (PyPI package): comparing version 3.0.4 to 4.0.0

PKG-INFO (+7 / -12)

@@ -1,4 +0,4 @@

Metadata-Version: 2.1
Metadata-Version: 1.1
Name: solrcloudpy
Version: 3.0.4
Version: 4.0.0
Summary: python library for interacting with SolrCloud

@@ -20,5 +20,6 @@ Home-page: https://github.com/solrcloudpy/solrcloudpy

It is compatible with versions 4.6 up to 6.0.
Version 4.x is compatible with all supported versions of SolrCloud.
For older versions, you should consider using 3.x, which supports versions 4 through 6.
The API is meant to be close to pymongo's API, where one can access
collections and databases as simple attributes
collections and databases as simple attributes
or dictionary keys.

@@ -37,3 +38,3 @@

Access an existing collection:

@@ -45,3 +46,3 @@

conn["test_collection"].search({'q':'query2'})
Index documents:

@@ -59,7 +60,2 @@

collection.search({'q':'*:*'})
Console
-------
``solrcloudpy`` comes with a console that can be run simply by typing ``solrconsole``. More information on usage is available at
http://solrcloudpy.github.io/solrcloudpy/console.html

@@ -78,2 +74,1 @@ Documentation and API

Classifier: Topic :: Internet :: WWW/HTTP :: Indexing/Search
Provides-Extra: ip

@@ -12,5 +12,6 @@ solrcloudpy

It is compatible with versions 4.6 up to 6.0.
Version 4.x is compatible with all supported versions of SolrCloud.
For older versions, you should consider using 3.x, which supports versions 4 through 6.
The API is meant to be close to pymongo's API, where one can access
collections and databases as simple attributes
collections and databases as simple attributes
or dictionary keys.

@@ -29,3 +30,3 @@

Access an existing collection:

@@ -37,3 +38,3 @@

conn["test_collection"].search({'q':'query2'})
Index documents:

@@ -51,7 +52,2 @@

collection.search({'q':'*:*'})
Console
-------
``solrcloudpy`` comes with a console that can be run simply by typing ``solrconsole``. More information on usage is available at
http://solrcloudpy.github.io/solrcloudpy/console.html

@@ -58,0 +54,0 @@ Documentation and API

setup.py (+24 / -23)

@@ -1,5 +0,6 @@

from setuptools import setup, find_packages
import re
with open('README.rst', 'r') as f:
from setuptools import find_packages, setup
with open("README.rst", "r") as f:
long_description = f.read()

@@ -9,30 +10,30 @@

def get_version():
return re.search(r"""__version__\s+=\s+(?P<quote>['"])(?P<version>.+?)(?P=quote)""",
open('solrcloudpy/__init__.py').read()).group('version')
return re.search(
r"""__version__\s+=\s+(?P<quote>['"])(?P<version>.+?)(?P=quote)""",
open("solrcloudpy/__init__.py").read(),
).group("version")
setup(
name="solrcloudpy",
version=get_version(),
author='Didier Deshommes, Robert Elwell',
author_email='dfdeshom@gmail.com, robert.elwell@gmail.com',
packages=find_packages(exclude=['ez_setup']),
url='https://github.com/solrcloudpy/solrcloudpy',
license='LICENSE.txt',
keywords='solr solrcloud',
description='python library for interacting with SolrCloud ',
author="Didier Deshommes, Robert Elwell",
author_email="dfdeshom@gmail.com, robert.elwell@gmail.com",
packages=find_packages(exclude=["ez_setup"]),
url="https://github.com/solrcloudpy/solrcloudpy",
license="LICENSE.txt",
keywords="solr solrcloud",
description="python library for interacting with SolrCloud ",
long_description=long_description,
include_package_data=True,
platforms='any',
entry_points={'console_scripts': ['solrconsole = scripts.solrconsole:main [ip]']},
platforms="any",
classifiers=[
'Development Status :: 3 - Alpha',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP :: Indexing/Search'
],
install_requires=['requests >= 2.11.1', 'IPython < 5.2.0', 'semver', 'pathlib2', 'future'],
extras_require={"ip": ['IPython == 5.1.0']}
"Development Status :: 3 - Alpha",
"Intended Audience :: Developers",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Topic :: Internet :: WWW/HTTP :: Indexing/Search",
],
install_requires=["requests >= 2.11.1", "semver", "pathlib2", "future"],
)
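The `get_version` helper in this setup.py scrapes the version string out of the package source with a regex instead of importing the package (which would fail before its dependencies are installed). A self-contained sketch of the same pattern, using a hypothetical module body in place of `solrcloudpy/__init__.py`:

```python
import re

# Hypothetical module source; the real helper reads solrcloudpy/__init__.py.
source = '__version__ = "4.0.0"\n'

# Same regex as get_version() in the setup.py above: the named groups let
# the pattern accept either quote style around the version string.
match = re.search(
    r"""__version__\s+=\s+(?P<quote>['"])(?P<version>.+?)(?P=quote)""",
    source,
)
version = match.group("version")
print(version)  # -> 4.0.0
```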

@@ -1,4 +0,4 @@

Metadata-Version: 2.1
Metadata-Version: 1.1
Name: solrcloudpy
Version: 3.0.4
Version: 4.0.0
Summary: python library for interacting with SolrCloud

@@ -20,5 +20,6 @@ Home-page: https://github.com/solrcloudpy/solrcloudpy

It is compatible with versions 4.6 up to 6.0.
Version 4.x is compatible with all supported versions of SolrCloud.
For older versions, you should consider using 3.x, which supports versions 4 through 6.
The API is meant to be close to pymongo's API, where one can access
collections and databases as simple attributes
collections and databases as simple attributes
or dictionary keys.

@@ -37,3 +38,3 @@

Access an existing collection:

@@ -45,3 +46,3 @@

conn["test_collection"].search({'q':'query2'})
Index documents:

@@ -59,7 +60,2 @@

collection.search({'q':'*:*'})
Console
-------
``solrcloudpy`` comes with a console that can be run simply by typing ``solrconsole``. More information on usage is available at
http://solrcloudpy.github.io/solrcloudpy/console.html

@@ -78,2 +74,1 @@ Documentation and API

Classifier: Topic :: Internet :: WWW/HTTP :: Indexing/Search
Provides-Extra: ip
requests>=2.11.1
IPython<5.2.0
semver
pathlib2
future
[ip]
IPython==5.1.0

@@ -5,4 +5,2 @@ LICENSE.txt

setup.py
scripts/__init__.py
scripts/solrconsole.py
solrcloudpy/__init__.py

@@ -15,3 +13,2 @@ solrcloudpy/connection.py

solrcloudpy.egg-info/dependency_links.txt
solrcloudpy.egg-info/entry_points.txt
solrcloudpy.egg-info/requires.txt

@@ -18,0 +15,0 @@ solrcloudpy.egg-info/top_level.txt

@@ -1,2 +0,1 @@

scripts
solrcloudpy

@@ -0,8 +1,10 @@

import logging
from solrcloudpy.collection import SolrCollection
from solrcloudpy.connection import SolrConnection
from solrcloudpy.collection import SolrCollection
from solrcloudpy.parameters import SearchOptions
import logging
logging.basicConfig()
__version__ = "3.0.4"
__all__ = ['SolrCollection', 'SolrConnection', 'SearchOptions']
__version__ = "4.0.0"
__all__ = ["SolrCollection", "SolrConnection", "SearchOptions"]
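The `__all__` list in `solrcloudpy/__init__.py` controls what a wildcard import exposes. A minimal stand-in (the module body here is illustrative, not solrcloudpy's real code) showing the effect:

```python
import sys
import types

# Build a throwaway module with an __all__ like the one above, plus a
# private name that __all__ deliberately omits.
mod = types.ModuleType("solrcloudpy_demo")
exec(
    "__all__ = ['SolrCollection', 'SolrConnection', 'SearchOptions']\n"
    "SolrCollection = SolrConnection = SearchOptions = object\n"
    "_internal_helper = object\n",
    mod.__dict__,
)
sys.modules["solrcloudpy_demo"] = mod

# A wildcard import only picks up the names listed in __all__.
ns = {}
exec("from solrcloudpy_demo import *", ns)
exported = sorted(k for k in ns if not k.startswith("__"))
print(exported)  # -> ['SearchOptions', 'SolrCollection', 'SolrConnection']
```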

@@ -73,2 +73,3 @@ """

__all__ = ['SolrCollection']
__all__ = ["SolrCollection"]
"""
Manage and administer a collection
"""
from solrcloudpy.utils import SolrException, CollectionBase
from .stats import SolrIndexStats
from .schema import SolrSchema
import time
import json
import requests
import logging
import time
import requests
from solrcloudpy.utils import CollectionBase, SolrException
from .schema import SolrSchema
from .stats import SolrIndexStats
class SolrCollectionAdmin(CollectionBase):

@@ -18,2 +20,3 @@ """

"""
def __init__(self, connection, name):

@@ -27,7 +30,7 @@ """

super(SolrCollectionAdmin, self).__init__(connection, name)
# corresponding public methods are memoized for a lower memory footprint
self._index_stats = None
self._schema = None
def exists(self):

@@ -64,42 +67,46 @@ """

"""
params = {'name': self.name,
'replicationFactor': replication_factor,
'action': 'CREATE'}
router_name = kwargs.get("router_name", 'compositeId')
params['router.name'] = router_name
params = {
"name": self.name,
"replicationFactor": replication_factor,
"action": "CREATE",
}
router_name = kwargs.get("router_name", "compositeId")
params["router.name"] = router_name
num_shards = kwargs.get("num_shards", "1")
params['numShards'] = num_shards
params["numShards"] = num_shards
shards = kwargs.get("shards")
if shards:
params['shards'] = shards
params["shards"] = shards
max_shards_per_node = kwargs.get('max_shards_per_node', 1)
params['maxShardsPerNode'] = max_shards_per_node
max_shards_per_node = kwargs.get("max_shards_per_node", 1)
params["maxShardsPerNode"] = max_shards_per_node
create_node_set = kwargs.get('create_node_set')
create_node_set = kwargs.get("create_node_set")
if create_node_set:
params['createNodeSet'] = create_node_set
params["createNodeSet"] = create_node_set
collection_config_name = kwargs.get('collection_config_name')
collection_config_name = kwargs.get("collection_config_name")
if collection_config_name:
params['collection.configName'] = collection_config_name
params["collection.configName"] = collection_config_name
router_field = kwargs.get('router_field')
router_field = kwargs.get("router_field")
if router_field:
params['router.field'] = router_field
params["router.field"] = router_field
# this collection doesn't exist yet, actually create it
if not self.exists() or force:
res = self.client.get('admin/collections', params).result
if hasattr(res, 'success'):
res = self.client.get("admin/collections", params).result
if hasattr(res, "success"):
# Create the index and wait until it's available
while True:
if not self._is_index_created():
logging.getLogger('solrcloud').info("index not created yet, waiting...")
logging.getLogger("solrcloud").info(
"index not created yet, waiting..."
)
time.sleep(1)
else:
else:
break
return SolrCollectionAdmin(self.connection,self.name)
return SolrCollectionAdmin(self.connection, self.name)
else:

@@ -117,3 +124,3 @@ raise SolrException(str(res))

server = list(self.connection.servers)[0]
req = requests.get('%s/solr/%s' % (server, self.name))
req = requests.get("%s/solr/%s" % (server, self.name))
return req.status_code == requests.codes.ok

@@ -126,5 +133,7 @@

"""
response = self.client.get('/solr/admin/collections', {'action': 'CLUSTERSTATUS', 'wt': 'json'}).result.dict
if 'aliases' in response['cluster']:
return self.name in response['cluster']['aliases']
response = self.client.get(
"/solr/admin/collections", {"action": "CLUSTERSTATUS", "wt": "json"}
).result.dict
if "aliases" in response["cluster"]:
return self.name in response["cluster"]["aliases"]
return False

@@ -135,7 +144,9 @@

Delete a collection
:return: a response associated with the delete request
:rtype: SolrResponse
"""
return self.client.get('admin/collections', {'action': 'DELETE', 'name': self.name}).result
return self.client.get(
"admin/collections", {"action": "DELETE", "name": self.name}
).result

@@ -145,7 +156,9 @@ def reload(self):

Reload a collection
:return: a response associated with the reload request
:rtype: SolrResponse
"""
return self.client.get('admin/collections', {'action': 'RELOAD', 'name': self.name}).result
return self.client.get(
"admin/collections", {"action": "RELOAD", "name": self.name}
).result

@@ -165,8 +178,8 @@ def split_shard(self, shard, ranges=None, split_key=None):

"""
params = {'action': 'SPLITSHARD', 'collection': self.name, 'shard': shard}
params = {"action": "SPLITSHARD", "collection": self.name, "shard": shard}
if ranges:
params['ranges'] = ranges
params["ranges"] = ranges
if split_key:
params['split.key'] = split_key
return self.client.get('admin/collections', params).result
params["split.key"] = split_key
return self.client.get("admin/collections", params).result

@@ -184,6 +197,6 @@ def create_shard(self, shard, create_node_set=None):

"""
params = {'action': 'CREATESHARD', 'collection': self.name, 'shard': shard}
params = {"action": "CREATESHARD", "collection": self.name, "shard": shard}
if create_node_set:
params['create_node_set'] = create_node_set
return self.client.get('admin/collections', params).result
params["create_node_set"] = create_node_set
return self.client.get("admin/collections", params).result

@@ -199,4 +212,4 @@ def create_alias(self, alias):

"""
params = {'action': 'CREATEALIAS', 'name': alias, 'collections': self.name}
return self.client.get('admin/collections', params).result
params = {"action": "CREATEALIAS", "name": alias, "collections": self.name}
return self.client.get("admin/collections", params).result

@@ -212,4 +225,4 @@ def delete_alias(self, alias):

"""
params = {'action': 'DELETEALIAS', 'name': alias}
return self.client.get('admin/collections', params).result
params = {"action": "DELETEALIAS", "name": alias}
return self.client.get("admin/collections", params).result

@@ -227,7 +240,9 @@ def delete_replica(self, replica, shard):

"""
params = {'action': 'DELETEREPLICA',
'replica': replica,
'collection': self.name,
'shard': shard}
return self.client.get('admin/collections', params).result
params = {
"action": "DELETEREPLICA",
"replica": replica,
"collection": self.name,
"shard": shard,
}
return self.client.get("admin/collections", params).result
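Every Collections API call in this file follows the same shape: an action name plus parameters sent to `admin/collections`. A dependency-free sketch of what the client ultimately encodes (the replica and collection names below are hypothetical values for illustration):

```python
from urllib.parse import urlencode

# Parameters mirroring the delete_replica() call above.
params = {
    "action": "DELETEREPLICA",
    "replica": "core_node2",        # hypothetical replica name
    "collection": "test_collection",  # hypothetical collection name
    "shard": "shard1",
}
query = "admin/collections?" + urlencode(params)
print(query)
```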

@@ -238,3 +253,3 @@ @property

Get the state of this collection
:return: the state of this collection

@@ -246,16 +261,12 @@ :rtype: dict

response = self.client.get(
"/{webappdir}/admin/collections".format(
webappdir=self.connection.webappdir
),
dict(action="clusterstatus"),
).result
try:
params = {'detail': 'true', 'path': '/clusterstate.json'}
response = self.client.get('/solr/zookeeper', params).result
data = json.loads(response['znode']['data'])
return data[self.name]
return response["cluster"]["collections"][self.name]
except KeyError:
response = self.client.get(
'/{webappdir}/admin/collections'.format(webappdir=self.connection.webappdir),
dict(action='clusterstatus')
).result
try:
return response['cluster']['collections'][self.name]
except KeyError:
return {}
return {}

@@ -277,7 +288,7 @@ @property

"""
response = self.client.get('%s/admin/luke' % self.name, {}).result
response = self.client.get("%s/admin/luke" % self.name, {}).result
# XXX ugly
data = response['index'].dict
data.pop('directory', None)
data.pop('userData', None)
data = response["index"].dict
data.pop("directory", None)
data.pop("userData", None)
return data

@@ -295,3 +306,3 @@

return self._index_stats
@property

@@ -316,7 +327,9 @@ def schema(self):

return self.index_stats
def _backup_restore_action(self, action, backup_name, location=None, repository=None):
def _backup_restore_action(
self, action, backup_name, location=None, repository=None
):
"""
Creates or restores a backup for a collection, based on the action
:param action: the action, either BACKUP or RESTORE

@@ -333,20 +346,16 @@ :type action: str

"""
params = {
'action': action,
'collection': self.name,
'name': backup_name
}
params = {"action": action, "collection": self.name, "name": backup_name}
if location:
params['location'] = location
params["location"] = location
if repository:
params['repository'] = repository
params["repository"] = repository
return self.client.get('admin/collections', params, async=True)
return self.client.get("admin/collections", params, asynchronous=True)
def backup(self, backup_name, location=None, repository=None):
"""
Creates a backup for a collection
:param backup_name: the name of the backup we will use for storage & restoration
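The `async=True` keyword argument in the 3.x line above had to be renamed in 4.0.0: `async` became a reserved word in Python 3.7, so it can no longer appear as a parameter or argument name. A sketch with a hypothetical stand-in for the client method:

```python
def get(path, params=None, asynchronous=False):
    # Hypothetical stand-in for the solrcloudpy client's get() method.
    return {"path": path, "asynchronous": asynchronous}

result = get("admin/collections", {"action": "BACKUP"}, asynchronous=True)
print(result["asynchronous"])  # -> True

# The old 3.x spelling is a SyntaxError on Python 3.7+:
try:
    compile("def get(path, async=False): pass", "<demo>", "exec")
    old_spelling_ok = True
except SyntaxError:
    old_spelling_ok = False
print(old_spelling_ok)  # -> False
```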

@@ -361,3 +370,5 @@ :type backup_name: str

"""
return self._backup_restore_action('BACKUP', backup_name, location=location, repository=repository)
return self._backup_restore_action(
"BACKUP", backup_name, location=location, repository=repository
)

@@ -377,3 +388,5 @@ def restore(self, backup_name, location=None, repository=None):

"""
return self._backup_restore_action('RESTORE', backup_name, location=location, repository=repository)
return self._backup_restore_action(
"RESTORE", backup_name, location=location, repository=repository
)

@@ -387,8 +400,10 @@ def request_status(self, async_response):

"""
return self.client.get('admin/collections',
{
"action": 'REQUESTSTATUS',
"requestid": async_response.async_id,
"wt": 'json'
}).result
return self.client.get(
"admin/collections",
{
"action": "REQUESTSTATUS",
"requestid": async_response.async_id,
"wt": "json",
},
).result

@@ -395,0 +410,0 @@ def request_state(self, async_response):

"""
Utilities to index large volumes of documents in Solr
"""
import logging
from contextlib import contextmanager
import logging
log = logging.getLogger('solrcloud')
log = logging.getLogger("solrcloud")

@@ -18,2 +18,3 @@

"""
def __init__(self, solr, batch_size=100, auto_commit=True):

@@ -71,8 +72,13 @@ """

auto_commit = self.auto_commit
log.debug("SolrBatchAdder: flushing {batch_len} articles to Solr (auto_commit={auto_commit})".format(
batch_len=batch_len, auto_commit=auto_commit))
log.debug(
"SolrBatchAdder: flushing {batch_len} articles to Solr (auto_commit={auto_commit})".format(
batch_len=batch_len, auto_commit=auto_commit
)
)
try:
self.solr.add(self.batch)
except Exception as e:
log.exception("Exception encountered when committing batch, falling back on one-by-one commit")
log.exception(
"Exception encountered when committing batch, falling back on one-by-one commit"
)
log.error(e)

@@ -84,3 +90,3 @@ # one by one fall-back

except Exception as e:
log.error(u"Could not add item to solr index")
log.error("Could not add item to solr index")
log.exception(str(e))

@@ -98,3 +104,5 @@ if auto_commit:

except Exception as e:
log.warning("SolrBatchAdder timed out when committing, but it's safe to ignore")
log.warning(
"SolrBatchAdder timed out when committing, but it's safe to ignore"
)

@@ -101,0 +109,0 @@ def _append_commit(self, doc):

@@ -12,2 +12,3 @@ """

"""
def __init__(self, connection, collection_name):

@@ -31,3 +32,3 @@ """

"""
return self.client.get('%s/schema' % self.collection_name).result.dict
return self.client.get("%s/schema" % self.collection_name).result.dict

@@ -41,3 +42,3 @@ @property

"""
return self.client.get('%s/schema/name' % self.collection_name).result.dict
return self.client.get("%s/schema/name" % self.collection_name).result.dict

@@ -51,3 +52,3 @@ @property

"""
return self.client.get('%s/schema/version' % self.collection_name).result.dict
return self.client.get("%s/schema/version" % self.collection_name).result.dict

@@ -61,3 +62,3 @@ @property

"""
return self.client.get('%s/schema/uniquekey' % self.collection_name).result.dict
return self.client.get("%s/schema/uniquekey" % self.collection_name).result.dict

@@ -71,3 +72,5 @@ @property

"""
return self.client.get('%s/schema/similarity' % self.collection_name).result.dict
return self.client.get(
"%s/schema/similarity" % self.collection_name
).result.dict

@@ -81,3 +84,5 @@ @property

"""
return self.client.get('%s/schema/solrqueryparser/defaultoperator' % self.collection_name).result.dict
return self.client.get(
"%s/schema/solrqueryparser/defaultoperator" % self.collection_name
).result.dict

@@ -94,3 +99,5 @@ def get_field(self, field):

"""
return self.client.get('%s/schema/field/%s' % (self.collection_name, field)).result.dict
return self.client.get(
"%s/schema/field/%s" % (self.collection_name, field)
).result.dict

@@ -104,3 +111,3 @@ def get_fields(self):

"""
return self.client.get('%s/schema/fields' % self.collection_name).result.dict
return self.client.get("%s/schema/fields" % self.collection_name).result.dict

@@ -116,3 +123,5 @@ def add_fields(self, json_schema):

"""
return self.client.update('%s/schema/fields' % self.collection_name, body=json_schema).result.dict
return self.client.update(
"%s/schema/fields" % self.collection_name, body=json_schema
).result.dict

@@ -125,3 +134,5 @@ def get_dynamic_fields(self):

"""
return self.client.get('%s/schema/dynamicfields' % self.collection_name).result.dict
return self.client.get(
"%s/schema/dynamicfields" % self.collection_name
).result.dict

@@ -138,3 +149,5 @@ def get_dynamic_field(self, field):

"""
return self.client.get('%s/schema/dynamicfield/%s' % (self.collection_name, field)).result.dict
return self.client.get(
"%s/schema/dynamicfield/%s" % (self.collection_name, field)
).result.dict

@@ -147,3 +160,5 @@ def get_fieldtypes(self):

"""
return self.client.get('%s/schema/fieldtypes' % (self.collection_name)).result.dict
return self.client.get(
"%s/schema/fieldtypes" % (self.collection_name)
).result.dict

@@ -159,3 +174,5 @@ def get_fieldtype(self, ftype):

"""
return self.client.get('%s/schema/fieldtypes/%s' % (self.collection_name, ftype)).result.dict
return self.client.get(
"%s/schema/fieldtypes/%s" % (self.collection_name, ftype)
).result.dict

@@ -168,3 +185,5 @@ def get_copyfields(self):

"""
return self.client.get('%s/schema/copyfields' % self.collection_name).result.dict
return self.client.get(
"%s/schema/copyfields" % self.collection_name
).result.dict

@@ -180,2 +199,4 @@ def get_copyfield(self, field):

"""
return self.client.get('%s/schema/copyfield/%s' % (self.collection_name, field)).result.dict
return self.client.get(
"%s/schema/copyfield/%s" % (self.collection_name, field)
).result.dict

@@ -5,9 +5,9 @@ """

from solrcloudpy.utils import CollectionBase, SolrException, as_json_bool
from future.utils import iterkeys
import datetime as dt
import json
from future.utils import iterkeys
from solrcloudpy.utils import CollectionBase, SolrException, as_json_bool
# todo this seems funky -- only called once

@@ -21,2 +21,3 @@ dthandler = lambda obj: obj.isoformat() if isinstance(obj, dt.datetime) else None

"""
def __repr__(self):

@@ -29,3 +30,3 @@ """

def _get_response(self, path, params=None, method='GET', body=None):
def _get_response(self, path, params=None, method="GET", body=None):
"""

@@ -56,4 +57,4 @@ Retrieves a response from the solr client

"""
path = '%s/update/json' % self.name
resp = self._get_response(path, method='POST', params=params, body=body)
path = "%s/update/json" % self.name
resp = self._get_response(path, method="POST", params=params, body=body)
if resp.code != 200:

@@ -131,10 +132,10 @@ raise SolrException(resp.result.error)

"""
if 'q' not in iterkeys(query):
if "q" not in iterkeys(query):
raise ValueError("query should have a 'q' parameter")
if hasattr(query, 'commonparams'):
q = list(query.commonparams['q'])
if hasattr(query, "commonparams"):
q = list(query.commonparams["q"])
q = q[0]
else:
q = query['q']
q = query["q"]

@@ -162,8 +163,9 @@ m = json.dumps({"delete": {"query": "%s" % q}})

"""
params = {'softCommit': as_json_bool(soft_commit),
'waitSearcher': as_json_bool(wait_searcher),
'maxSegments': max_segments,
'optimize': 'true'
}
return self._get_response('%s/update' % self.name, params=params).result
params = {
"softCommit": as_json_bool(soft_commit),
"waitSearcher": as_json_bool(wait_searcher),
"maxSegments": max_segments,
"optimize": "true",
}
return self._get_response("%s/update" % self.name, params=params).result
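The optimize parameters above are stringified with `as_json_bool` from `solrcloudpy.utils`. A stand-in capturing its assumed behavior (Solr expects lowercase `"true"`/`"false"` strings, not Python booleans):

```python
def as_json_bool(value):
    # Assumed behavior of solrcloudpy.utils.as_json_bool: render a Python
    # truthy/falsy value as the lowercase string Solr expects.
    return str(bool(value)).lower()

# Mirrors the optimize() params built above.
params = {
    "softCommit": as_json_bool(False),
    "waitSearcher": as_json_bool(True),
    "maxSegments": 1,
    "optimize": "true",
}
print(params["softCommit"], params["waitSearcher"])  # -> false true
```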

@@ -170,0 +172,0 @@ def commit(self):

@@ -5,5 +5,6 @@ """

from future.utils import iteritems
from solrcloudpy.utils import _Request, SolrResult
from solrcloudpy.utils import SolrResult, _Request
class SolrIndexStats(object):

@@ -13,2 +14,3 @@ """

"""
def __init__(self, connection, name):

@@ -34,12 +36,14 @@ """

"""
params = {'stats': 'true', 'cat': 'CACHE'}
result = self.client.get('/solr/%s/admin/mbeans' % self.name, params).result.dict
caches = result['solr-mbeans']['CACHE']
params = {"stats": "true", "cat": "CACHE"}
result = self.client.get(
"/solr/%s/admin/mbeans" % self.name, params
).result.dict
caches = result["solr-mbeans"]["CACHE"]
res = {}
for cache, info in iteritems(caches):
if cache == 'fieldCache':
res[cache] = {'entries_count': info['stats'].get('entries_count', 0)}
if cache == "fieldCache":
res[cache] = {"entries_count": info["stats"].get("entries_count", 0)}
continue
res[cache] = info['stats']
res[cache] = info["stats"]

@@ -56,9 +60,11 @@ return SolrResult(res)

"""
params = {'stats': 'true', 'cat': 'QUERYHANDLER'}
result = self.client.get('/solr/%s/admin/mbeans' % self.name, params).result.dict
caches = result['solr-mbeans']['QUERYHANDLER']
params = {"stats": "true", "cat": "QUERYHANDLER"}
result = self.client.get(
"/solr/%s/admin/mbeans" % self.name, params
).result.dict
caches = result["solr-mbeans"]["QUERYHANDLER"]
res = {}
for cache, info in iteritems(caches):
res[cache] = info['stats']
res[cache] = info["stats"]
return SolrResult(res)

@@ -16,4 +16,7 @@ """

"""
import urllib
import json
import urllib.error
import urllib.parse
import urllib.request
import semver

@@ -25,6 +28,5 @@ from future.utils import iteritems

MIN_SUPPORTED_VERSION = '>=4.6.0'
MIN_SUPPORTED_VERSION = ">5.4.0"
# TODO: revisit this when Solr 7 comes around.
MAX_SUPPORTED_VERSION = '<=7.0.0'
MAX_SUPPORTED_VERSION = "<=9.0.0"
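The supported-version window widens here from `>=4.6.0 … <=7.0.0` to `>5.4.0 … <=9.0.0`, enforced via `semver.match`. A dependency-free sketch of the same bound check using tuple comparison (not the library's actual implementation):

```python
def parse(version):
    """Split a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

# Mirrors the 4.0.0 bounds above: MIN ">5.4.0" (exclusive), MAX "<=9.0.0".
MIN_EXCLUSIVE = parse("5.4.0")
MAX_INCLUSIVE = parse("9.0.0")

def is_supported(version):
    v = parse(version)
    return MIN_EXCLUSIVE < v <= MAX_INCLUSIVE

print(is_supported("7.7.0"))  # -> True  (the new default)
print(is_supported("5.3.0"))  # -> False
print(is_supported("5.4.0"))  # -> False (strictly greater is required)
```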

@@ -58,11 +60,14 @@

def __init__(self, server="localhost:8983",
detect_live_nodes=False,
user=None,
password=None,
timeout=10,
webappdir='solr',
version='5.3.0',
request_retries=1,
use_https=False):
def __init__(
self,
server="localhost:8983",
detect_live_nodes=False,
user=None,
password=None,
timeout=10,
webappdir="solr",
version="7.7.0",
request_retries=1,
use_https=False,
):
self.user = user

@@ -75,14 +80,15 @@ self.password = password

if not semver.match(version, MIN_SUPPORTED_VERSION) and semver.match(version, MAX_SUPPORTED_VERSION):
if not semver.match(version, MIN_SUPPORTED_VERSION) and semver.match(
version, MAX_SUPPORTED_VERSION
):
raise Exception("Unsupported version %s" % version)
if semver.match(self.version, '<5.4.0'):
self.zk_path = '/{webappdir}/zookeeper'.format(webappdir=self.webappdir)
else:
self.zk_path = '/{webappdir}/admin/zookeeper'.format(webappdir=self.webappdir)
self.zk_path = "/{webappdir}/admin/zookeeper".format(webappdir=self.webappdir)
protocol = "https" if use_https else "http"
self.url_template = '{protocol}://{{server}}/{webappdir}/'.format(protocol=protocol, webappdir=self.webappdir)
self.url_template = "{protocol}://{{server}}/{webappdir}/".format(
protocol=protocol, webappdir=self.webappdir
)
if type(server) == str:

@@ -125,28 +131,27 @@ self.url = self.url_template.format(server=server)

"""
params = {'detail': 'false', 'path': '/collections'}
response = self.client.get(
self.zk_path, params).result
params = {"detail": "false", "path": "/collections"}
response = self.client.get(self.zk_path, params).result
if 'children' not in response['tree'][0]:
if "children" not in response["tree"][0]:
return []
if response['tree'][0]['data']['title'] == '/collections':
if response["tree"][0]["data"]["title"] == "/collections":
# solr 5.3 and older
data = response['tree'][0]['children']
data = response["tree"][0]["children"]
else:
# solr 5.4+
data = None
for branch in response['tree']:
for branch in response["tree"]:
if data is not None:
break
for child in branch['children']:
if child['data']['title'] == '/collections':
if 'children' not in child:
for child in branch["children"]:
if child["data"]["title"] == "/collections":
if "children" not in child:
return []
else:
data = child['children']
data = child["children"]
break
colls = []
if data:
colls = [node['data']['title'] for node in data]
colls = [node["data"]["title"] for node in data]
return colls

@@ -160,6 +165,9 @@

"""
params = {'wt': 'json', }
params = {
"wt": "json",
}
response = self.client.get(
('/{webappdir}/admin/cores'.format(webappdir=self.webappdir)), params).result
cores = list(response.get('status', {}).keys())
("/{webappdir}/admin/cores".format(webappdir=self.webappdir)), params
).result
cores = list(response.get("status", {}).keys())
return cores

@@ -177,33 +185,41 @@

res = []
if semver.match(self.version, '<5.4.0'):
params = {'detail': 'true', 'path': '/clusterstate.json'}
if semver.match(self.version, "<5.4.0"):
params = {"detail": "true", "path": "/clusterstate.json"}
response = self.client.get(
('/{webappdir}/zookeeper'.format(webappdir=self.webappdir)), params).result
data = json.loads(response['znode']['data'])
("/{webappdir}/zookeeper".format(webappdir=self.webappdir)), params
).result
data = json.loads(response["znode"]["data"])
collections = self.list()
for coll in collections:
shards = data[coll]['shards']
shards = data[coll]["shards"]
for shard, shard_info in iteritems(shards):
replicas = shard_info['replicas']
replicas = shard_info["replicas"]
for replica, info in iteritems(replicas):
state = info['state']
if state != 'active':
item = {"collection": coll,
"replica": replica,
"shard": shard,
"info": info,
}
state = info["state"]
if state != "active":
item = {
"collection": coll,
"replica": replica,
"shard": shard,
"info": info,
}
res.append(item)
else:
params = {'action': 'CLUSTERSTATUS', 'wt': 'json'}
params = {"action": "CLUSTERSTATUS", "wt": "json"}
response = self.client.get(
('/{webappdir}/admin/collections'.format(webappdir=self.webappdir)), params).result
for collection_name, collection in list(response.dict['cluster']['collections'].items()):
for shard_name, shard in list(collection['shards'].items()):
for replica_name, replica in list(shard['replicas'].items()):
if replica['state'] != 'active':
item = {"collection": collection_name,
"replica": replica_name,
"shard": shard_name,
"info": replica}
("/{webappdir}/admin/collections".format(webappdir=self.webappdir)),
params,
).result
for collection_name, collection in list(
response.dict["cluster"]["collections"].items()
):
for shard_name, shard in list(collection["shards"].items()):
for replica_name, replica in list(shard["replicas"].items()):
if replica["state"] != "active":
item = {
"collection": collection_name,
"replica": replica_name,
"shard": shard_name,
"info": replica,
}
res.append(item)

@@ -224,5 +240,5 @@

"""
params = {'detail': 'true', 'path': '/overseer_elect/leader'}
params = {"detail": "true", "path": "/overseer_elect/leader"}
response = self.client.get(self.zk_path, params).result
return json.loads(response['znode']['data'])
return json.loads(response["znode"]["data"])

@@ -237,6 +253,6 @@ @property

"""
params = {'detail': 'true', 'path': '/live_nodes'}
params = {"detail": "true", "path": "/live_nodes"}
response = self.client.get(self.zk_path, params).result
children = [d['data']['title'] for d in response['tree'][0]['children']]
nodes = [c.replace('_solr', '') for c in children]
children = [d["data"]["title"] for d in response["tree"][0]["children"]]
nodes = [c.replace("_solr", "") for c in children]
return [self.url_template.format(server=a) for a in nodes]

@@ -243,0 +259,0 @@

@@ -1,7 +0,7 @@

from future.utils import iterkeys, iteritems
from collections import defaultdict
from future.utils import iteritems, iterkeys
class BaseParams(object):
def __init__(self, query=None, **kwargs):

@@ -14,3 +14,3 @@ """

if query:
self._q['q'].add(query)
self._q["q"].add(query)

@@ -56,3 +56,3 @@ for k, v in list(kwargs.items()):

c = self._q.copy()
return iter(dict(c).items())
return iter(list(dict(c).items()))
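Wrapping the items view in `list()` here materializes the pairs before iteration, likely for consistent Python 3 view semantics: dict views are live, so growing the dict while an iterator is in flight raises. A sketch of the difference:

```python
# Python 3 dict views are live: growing the dict invalidates an
# in-flight iterator over .items().
d = {"q": 1, "rows": 2}
it = iter(d.items())
d["fq"] = 3
try:
    next(it)
    raised = False
except RuntimeError:
    raised = True
print(raised)  # -> True

# Materializing with list() (as the 4.0.0 line does) snapshots the pairs
# first, so later mutation cannot disturb the iteration.
it = iter(list(d.items()))
d["fl"] = 4
first_key = next(it)[0]
print(first_key)  # -> q
```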

@@ -98,3 +98,3 @@ def __getitem__(self, item):

"""
self._q['q'].add(query)
self._q["q"].add(query)
return self

@@ -110,3 +110,3 @@

"""
self._q['sort'].add(criteria)
self._q["sort"].add(criteria)
return self

@@ -122,3 +122,3 @@

"""
self._q['start'].add(start)
self._q["start"].add(start)
return self

@@ -134,3 +134,3 @@

"""
self._q['rows'].add(r)
self._q["rows"].add(r)
return self

@@ -146,3 +146,3 @@

"""
self._q['fq'].add(query)
self._q["fq"].add(query)
return self

@@ -158,3 +158,3 @@

"""
self._q['fl'].add(fields)
self._q["fl"].add(fields)
return self

@@ -170,3 +170,3 @@

"""
self._q['defType'].add(t)
self._q["defType"].add(t)
return self

@@ -182,3 +182,3 @@

"""
self._q['explainOther'].add(val)
self._q["explainOther"].add(val)
return self

@@ -194,3 +194,3 @@

"""
self._q['timeAllowed'].add(t)
self._q["timeAllowed"].add(t)
return self

@@ -206,3 +206,3 @@

"""
self._q['cache'].add(val)
self._q["cache"].add(val)
return self

@@ -214,3 +214,3 @@

"""
self._q['logParamList'].add(val)
self._q["logParamList"].add(val)
return self

@@ -225,3 +225,3 @@

"""
self._q['debug'].add("true")
self._q["debug"].add("true")
return self

@@ -244,3 +244,3 @@

"""
self._q['mlt.fl'].add(field)
self._q["mlt.fl"].add(field)
return self

@@ -256,3 +256,3 @@

"""
self._q['mlt.mintf'].add(tf)
self._q["mlt.mintf"].add(tf)
return self

@@ -268,3 +268,3 @@

"""
self._q['mlt.mindf'].add(df)
self._q["mlt.mindf"].add(df)
return self

@@ -280,3 +280,3 @@

"""
self._q['mlt.minwl'].add(wl)
self._q["mlt.minwl"].add(wl)
return self

@@ -292,3 +292,3 @@

"""
self._q['mlt.maxwl'].add(wl)
self._q["mlt.maxwl"].add(wl)
return self

@@ -304,3 +304,3 @@

"""
self._q['mlt.maxqt'].add(qt)
self._q["mlt.maxqt"].add(qt)
return self

@@ -316,3 +316,3 @@

"""
self._q['mlt.maxntp'].add(ntp)
self._q["mlt.maxntp"].add(ntp)
return self

@@ -328,3 +328,3 @@

"""
self._q['mlt.boost'].add(val)
self._q["mlt.boost"].add(val)
return self

@@ -340,3 +340,3 @@

"""
self._q['mlt.qf'].add(fields)
self._q["mlt.qf"].add(fields)
return self

@@ -352,3 +352,3 @@

"""
self._q['mlt.count'].add(c)
self._q["mlt.count"].add(c)
return self

@@ -371,3 +371,3 @@

"""
self._q['facet.query'].add(query)
self._q["facet.query"].add(query)
return self

@@ -383,3 +383,3 @@

"""
self._q['facet.field'].add(field)
self._q["facet.field"].add(field)
return self

@@ -398,5 +398,5 @@

if field:
self._q['f.%s.facet.prefix' % field].add(criteria)
self._q["f.%s.facet.prefix" % field].add(criteria)
else:
self._q['facet.prefix'].add(criteria)
self._q["facet.prefix"].add(criteria)
return self

@@ -418,5 +418,5 @@

if field:
self._q['f.%s.facet.sort' % field].add(criteria)
self._q["f.%s.facet.sort" % field].add(criteria)
else:
self._q['facet.sort'].add(criteria)
self._q["facet.sort"].add(criteria)

@@ -436,5 +436,5 @@ return self

if field:
self._q['f.%s.facet.limit' % field].add(limit)
self._q["f.%s.facet.limit" % field].add(limit)
else:
self._q['facet.limit'].add(limit)
self._q["facet.limit"].add(limit)
return self

@@ -453,5 +453,5 @@

if field:
self._q['f.%s.facet.offset' % field].add(offset)
self._q["f.%s.facet.offset" % field].add(offset)
else:
self._q['facet.offset'].add(offset)
self._q["facet.offset"].add(offset)
return self

@@ -470,5 +470,5 @@

if field:
self._q['f.%s.facet.mincount' % field].add(count)
self._q["f.%s.facet.mincount" % field].add(count)
else:
self._q['facet.mincount'].add(count)
self._q["facet.mincount"].add(count)
return self

@@ -487,5 +487,5 @@

if field:
self._q['f.%s.facet.missing' % field].add(val)
self._q["f.%s.facet.missing" % field].add(val)
else:
self._q['facet.missing'].add(val)
self._q["facet.missing"].add(val)
return self

@@ -504,5 +504,5 @@

if field:
self._q['f.%s.facet.method' % field].add(m)
self._q["f.%s.facet.method" % field].add(m)
else:
self._q['facet.method'].add(m)
self._q["facet.method"].add(m)
return self

@@ -521,5 +521,5 @@

if field:
self._q['f.%s.facet.enum.cache.minDf' % field].add(val)
self._q["f.%s.facet.enum.cache.minDf" % field].add(val)
else:
self._q['facet.enum.cache.minDf'].add(val)
self._q["facet.enum.cache.minDf"].add(val)
return self

@@ -535,3 +535,3 @@

"""
self._q['facet.threads'].add(num)
self._q["facet.threads"].add(num)
return self

@@ -553,6 +553,6 @@

"""
self._q['facet.range'].add(field)
self._q['f.%s.facet.range.start' % field].add(start)
self._q['f.%s.facet.range.end' % field].add(end)
self._q['f.%s.facet.range.gap' % field].add(gap)
self._q["facet.range"].add(field)
self._q["f.%s.facet.range.start" % field].add(start)
self._q["f.%s.facet.range.end" % field].add(end)
self._q["f.%s.facet.range.gap" % field].add(gap)
return self
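The range-facet setter above is the one place where a single call writes four related parameter keys. A small sketch of the key layout it produces (function name is illustrative; the key names are copied from the diff):

```python
def facet_range_params(field, start, end, gap):
    # mirrors the four keys the setter adds: the field itself plus its
    # per-field start/end/gap, using Solr's f.<field>.<param> convention
    return {
        "facet.range": field,
        "f.%s.facet.range.start" % field: start,
        "f.%s.facet.range.end" % field: end,
        "f.%s.facet.range.gap" % field: gap,
    }
```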

@@ -568,3 +568,3 @@

"""
self._q['facet.pivot'].add(fields)
self._q["facet.pivot"].add(fields)
return self

@@ -580,3 +580,3 @@

"""
self._q['facet.pivot.mincount'].add(count)
self._q["facet.pivot.mincount"].add(count)
return self

@@ -612,5 +612,7 @@

self.mltparams = MLTParams()
self._all = [self.commonparams,
self.facetparams,
self.mltparams, ]
self._all = [
self.commonparams,
self.facetparams,
self.mltparams,
]

@@ -620,3 +622,3 @@ def iteritems(self):

if len(self.facetparams) > 0:
res.update({'facet': 'true'})
res.update({"facet": "true"})
for p in self._all:

@@ -623,0 +625,0 @@ res.update(iter(p))

@@ -0,23 +1,29 @@

import json
import logging
import random
import uuid
import requests
from future.utils import iteritems
from requests.auth import HTTPBasicAuth
from requests.exceptions import ConnectionError, HTTPError
from requests.auth import HTTPBasicAuth
import requests
try:
from urllib.parse import urljoin
except ImportError:
from urlparse import urljoin
from urllib.parse import urljoin
try:
unicode
def encodeUnicode(str):
if isinstance(str, unicode):
return str.encode('utf-8', 'ignore')
str
def encodeUnicode(value):
if isinstance(value, str):
return value.encode("utf-8", "ignore")
except NameError:
def encodeUnicode(str):
def encodeUnicode(value):
return str
import json
import random
import logging
import uuid
logger = logging.getLogger(__name__)

@@ -53,5 +59,6 @@

self.client.auth = HTTPBasicAuth(
self.connection.user, self.connection.password)
self.connection.user, self.connection.password
)
def request(self, path, params=None, method='GET', body=None, async=False):
def request(self, path, params=None, method="GET", body=None, asynchronous=False):
"""

@@ -69,4 +76,4 @@ Send a request to a collection

:type body: str
:param async: whether to perform the action asynchronously (only for collections API)
:type async: bool
:param asynchronous: whether to perform the action asynchronously (only for collections API)
:type asynchronous: bool
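The `async` → `asynchronous` rename is more than style: `async` became a reserved keyword in Python 3.7, so the old signature fails to even parse on modern interpreters. This can be checked directly:

```python
import keyword

# "async" is a reserved keyword on Python 3.7+
assert keyword.iskeyword("async")

# The pre-4.0 signature is a SyntaxError at compile time on 3.7+
try:
    compile("def request(path, async=False): pass", "<old>", "exec")
    old_signature_parses = True
except SyntaxError:
    old_signature_parses = False
assert old_signature_parses is False
```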

@@ -79,17 +86,15 @@ :returns response: an instance of :class:`~solrcloudpy.utils.SolrResponse`

params = params or {}
if method.lower() != 'get':
headers = {'content-type': 'application/json'}
if method.lower() != "get":
headers = {"content-type": "application/json"}
# https://github.com/solrcloudpy/solrcloudpy/issues/21
# https://wiki.apache.org/solr/SolJSON
resparams = {'wt': 'json',
'omitHeader': 'true',
'json.nl': 'map'}
resparams = {"wt": "json", "omitHeader": "true", "json.nl": "map"}
if async:
if asynchronous:
async_id = uuid.uuid4()
logger.info("Sending request with async_id %s" % async_id)
resparams['async'] = async_id
resparams["async"] = async_id
if hasattr(params, 'iteritems') or hasattr(params, 'items'):
if hasattr(params, "iteritems") or hasattr(params, "items"):
resparams.update(iteritems(params))
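The hunk above reflows the default request parameters without changing behaviour: every non-GET request still asks Solr for a JSON body with no response header and NamedLists rendered as maps, and async Collections API calls still attach a generated id the caller can poll with. A standalone sketch of that assembly (function name is illustrative; keys and defaults are taken from the diff):

```python
import uuid

def build_resparams(params=None, asynchronous=False):
    # defaults requested from Solr: JSON response, no header, NamedList as map
    resparams = {"wt": "json", "omitHeader": "true", "json.nl": "map"}
    if asynchronous:
        # Collections API async mode: the caller polls request status with this id
        resparams["async"] = str(uuid.uuid4())
    # accept any mapping-like params object, as the diff's hasattr check does
    if params is not None and hasattr(params, "items"):
        resparams.update(params.items())
    return resparams
```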

@@ -109,10 +114,13 @@

try:
r = self.client.request(method, fullpath,
params=resparams,
data=body,
headers=headers,
timeout=self.timeout)
r = self.client.request(
method,
fullpath,
params=resparams,
data=body,
headers=headers,
timeout=self.timeout,
)
r.raise_for_status()
if async:
if asynchronous:
result = AsyncResponse(r, async_id)

@@ -123,4 +131,3 @@ else:

except (ConnectionError, HTTPError) as e:
logger.exception('Failed to connect to server at %s. e=%s',
host, e)
logger.exception("Failed to connect to server at %s. e=%s", host, e)

@@ -134,4 +141,4 @@ # Track retries, and take a server with too many retries out of the pool

if len(servers) <= 0:
logger.error('No servers left to try')
raise SolrException('No servers available')
logger.error("No servers left to try")
raise SolrException("No servers available")
finally:

@@ -144,3 +151,3 @@ # avoid requests library's keep alive throw exception in python3

def update(self, path, params=None, body=None, async=False):
def update(self, path, params=None, body=None, asynchronous=False):
"""

@@ -155,4 +162,4 @@ Posts an update request to Solr

:type body: str
:param async: whether to perform the action asynchronously (only for collections API)
:type async: bool
:param asynchronous: whether to perform the action asynchronously (only for collections API)
:type asynchronous: bool
:returns response: an instance of :class:`~solrcloudpy.utils.SolrResponse`

@@ -162,5 +169,7 @@ :rtype: SolrResponse

"""
return self.request(path, params=params, method='POST', body=body, async=async)
return self.request(
path, params=params, method="POST", body=body, asynchronous=asynchronous
)
def get(self, path, params=None, async=False):
def get(self, path, params=None, asynchronous=False):
"""

@@ -173,4 +182,4 @@ Sends a get request to Solr

:type params: dict
:param async: whether to perform the action asynchronously (only for collections API)
:type async: bool
:param asynchronous: whether to perform the action asynchronously (only for collections API)
:type asynchronous: bool
:returns response: an instance of :class:`~solrcloudpy.utils.SolrResponse`

@@ -180,3 +189,5 @@ :rtype: SolrResponse

"""
return self.request(path, params=params, method='GET', async=async)
return self.request(
path, params=params, method="GET", asynchronous=asynchronous
)

@@ -210,3 +221,3 @@

for k, v in iteritems(obj):
k = encodeUnicode(k)
k = encodeUnicode(k).decode("utf-8")
if isinstance(v, dict):
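The added `.decode("utf-8")` pairs with the rewritten `encodeUnicode` above: under Python 3 the helper now returns UTF-8 bytes for any `str` input, so callers that need text keys must decode again, and the round-trip is lossless for valid text. A simplified illustration (helper body condensed from the diff; the non-`str` branch here is an assumption, not the shipped behaviour):

```python
def encode_unicode(value):
    # condensed form of the diff's encodeUnicode: str in, UTF-8 bytes out
    if isinstance(value, str):
        return value.encode("utf-8", "ignore")
    return value  # pass anything else through unchanged (illustrative choice)

# the round-trip used when normalising response keys
k = encode_unicode("naïve").decode("utf-8")
```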

@@ -305,3 +316,2 @@ # create a new object from this (sub)class,

class AsyncResponse(SolrResponse):
def __init__(self, response_obj, async_id):

@@ -328,3 +338,2 @@ """

class SolrResponseJSONEncoder(json.JSONEncoder):
def default(self, o):

@@ -334,3 +343,3 @@ if type(o) == type(SolrResult({})):

if len(val) > 200:
s = val[:100] + ' ... '
s = val[:100] + " ... "
else:

@@ -337,0 +346,0 @@ s = val

@@ -1,8 +0,9 @@

from __future__ import print_function
import os
import time
import unittest
import time
import os
from requests.adapters import ReadTimeout
from solr_instance import SolrInstance
from solrcloudpy import SolrConnection
from requests.adapters import ReadTimeout

@@ -14,56 +15,63 @@ solrprocess = None

def setUp(self):
self.conn = SolrConnection(version=os.getenv('SOLR_VERSION', '6.1.0'))
self.conn = SolrConnection(version=os.getenv("SOLR_VERSION", "6.1.0"))
self.collparams = {}
confname = os.getenv('SOLR_CONFNAME', '')
if confname != '':
self.collparams['collection_config_name'] = confname
confname = os.getenv("SOLR_CONFNAME", "")
if confname != "":
self.collparams["collection_config_name"] = confname
def test_create_collection(self):
original_count = len(self.conn.list())
coll2 = self.conn.create_collection('coll2', **self.collparams)
self.assertEqual(len(self.conn.list()), original_count+1)
coll2 = self.conn.create_collection("coll2", **self.collparams)
time.sleep(3)
self.assertEqual(len(self.conn.list()), original_count + 1)
self.conn.list()
time.sleep(3)
coll3 = self.conn.create_collection('coll3', **self.collparams)
self.assertEqual(len(self.conn.list()), original_count+2)
coll3 = self.conn.create_collection("coll3", **self.collparams)
time.sleep(3)
self.assertEqual(len(self.conn.list()), original_count + 2)
# todo calling state here means the integration works, but what should we assert?
coll2.state
coll2.drop()
self.assertEqual(len(self.conn.list()), original_count+1)
time.sleep(3)
self.assertEqual(len(self.conn.list()), original_count + 1)
time.sleep(3)
coll3.drop()
time.sleep(3)
self.assertEqual(len(self.conn.list()), original_count)
def test_reload(self):
coll2 = self.conn.create_collection('coll2', **self.collparams)
coll2 = self.conn.create_collection("coll2", **self.collparams)
time.sleep(3)
res = coll2.reload()
self.assertTrue(getattr(res, 'success') is not None)
self.assertTrue(getattr(res, "success") is not None)
coll2.drop()
def test_split_shard(self):
coll2 = self.conn.create_collection('coll2', **self.collparams)
coll2 = self.conn.create_collection("coll2", **self.collparams)
time.sleep(3)
res = coll2.split_shard('shard1', ranges="80000000-90000000,90000001-7fffffff")
res = coll2.split_shard("shard1", ranges="80000000-90000000,90000001-7fffffff")
time.sleep(3)
self.assertTrue(getattr(res, 'success') is not None)
self.assertTrue(getattr(res, "success") is not None)
coll2.drop()
def test_create_shard(self):
coll2 = self.conn.create_collection('coll2',
router_name='implicit',
shards='myshard1', max_shards_per_node=3,
**self.collparams)
coll2 = self.conn.create_collection(
"coll2",
router_name="implicit",
shards="myshard1",
max_shards_per_node=3,
**self.collparams
)
time.sleep(3)
res = coll2.create_shard('shard_my')
res = coll2.create_shard("shard_my")
time.sleep(3)
self.assertTrue(getattr(res, 'success') is not None)
self.assertTrue(getattr(res, "success") is not None)
coll2.drop()
def test_create_delete_alias(self):
coll2 = self.conn.create_collection('coll2', **self.collparams)
coll2.create_alias('alias2')
coll2 = self.conn.create_collection("coll2", **self.collparams)
coll2.create_alias("alias2")
time.sleep(3)
self.assertTrue(self.conn.alias2.is_alias())
coll2.delete_alias('alias2')
coll2.delete_alias("alias2")
coll2.drop()

@@ -73,15 +81,21 @@

try:
coll2 = self.conn.create_collection('test_delete_replica',
router_name='implicit',
shards='myshard1',
max_shards_per_node=6,
replication_factor=2,
**self.collparams)
coll2 = self.conn.create_collection(
"test_delete_replica",
router_name="implicit",
shards="myshard1",
max_shards_per_node=6,
replication_factor=2,
**self.collparams
)
except ReadTimeout:
print("Encountered read timeout while testing delete replicate")
print("This generally doesn't mean the collection wasn't created with the settings passed.")
coll2 = self.conn['test_delete_replica']
print(
"This generally doesn't mean the collection wasn't created with the settings passed."
)
coll2 = self.conn["test_delete_replica"]
time.sleep(3)
firstReplica = list(coll2.shards['shards']['myshard1']['replicas'].dict.keys())[0]
result = coll2.delete_replica(firstReplica, 'myshard1')
firstReplica = list(coll2.shards["shards"]["myshard1"]["replicas"].dict.keys())[
0
]
result = coll2.delete_replica(firstReplica, "myshard1")
self.assertTrue(result.success)

@@ -92,3 +106,3 @@ coll2.drop()

def setUpModule():
if os.getenv('SKIP_STARTUP', False):
if os.getenv("SKIP_STARTUP", False):
return

@@ -102,3 +116,3 @@ solrprocess = SolrInstance("solr2")

def tearDownModule():
if os.getenv('SKIP_STARTUP', False):
if os.getenv("SKIP_STARTUP", False):
return

@@ -109,4 +123,4 @@ if solrprocess:

if __name__ == '__main__':
if __name__ == "__main__":
# run tests
unittest.main()

@@ -0,6 +1,7 @@

import os
import time
import unittest
from solr_instance import SolrInstance
import time
import os
from solrcloudpy import SolrConnection, SolrCollection
from solrcloudpy import SolrCollection, SolrConnection

@@ -12,13 +13,13 @@ solrprocess = None

def setUp(self):
self.conn = SolrConnection(version=os.getenv('SOLR_VERSION', '6.1.0'))
self.conn = SolrConnection(version=os.getenv("SOLR_VERSION", "6.1.0"))
self.collparams = {}
confname = os.getenv('SOLR_CONFNAME', '')
if confname != '':
self.collparams['collection_config_name'] = confname
confname = os.getenv("SOLR_CONFNAME", "")
if confname != "":
self.collparams["collection_config_name"] = confname
def test_list(self):
self.conn['foo'].create(**self.collparams)
self.conn["foo"].create(**self.collparams)
colls = self.conn.list()
self.assertTrue(len(colls) >= 1)
self.conn['foo'].drop()
self.conn["foo"].drop()

@@ -35,3 +36,3 @@ def test_live_nodes(self):

def test_create_collection(self):
coll = self.conn.create_collection('test2', **self.collparams)
coll = self.conn.create_collection("test2", **self.collparams)
self.assertTrue(isinstance(coll, SolrCollection))

@@ -49,5 +50,4 @@ self.conn.test2.drop()

def setUpModule():
if os.getenv('SKIP_STARTUP', False):
if os.getenv("SKIP_STARTUP", False):
return

@@ -62,3 +62,3 @@ # start solr

def tearDownModule():
if os.getenv('SKIP_STARTUP', False):
if os.getenv("SKIP_STARTUP", False):
return

@@ -69,3 +69,3 @@ if solrprocess:

if __name__ == '__main__':
if __name__ == "__main__":
unittest.main()

@@ -0,6 +1,7 @@

import os
import time
import unittest
import time
import os
from solr_instance import SolrInstance
from solrcloudpy import SolrConnection, SearchOptions
from solrcloudpy import SearchOptions, SolrConnection

@@ -12,10 +13,10 @@ solrprocess = None

def setUp(self):
self.conn = SolrConnection(version=os.getenv('SOLR_VERSION', '6.1.0'))
self.conn = SolrConnection(version=os.getenv("SOLR_VERSION", "6.1.0"))
self.collparams = {}
confname = os.getenv('SOLR_CONFNAME', '')
if confname != '':
self.collparams['collection_config_name'] = confname
confname = os.getenv("SOLR_CONFNAME", "")
if confname != "":
self.collparams["collection_config_name"] = confname
def test_add(self):
coll2 = self.conn.create_collection('coll2', **self.collparams)
coll2 = self.conn.create_collection("coll2", **self.collparams)
docs = [{"id": str(_id), "includes": "silly text"} for _id in range(5)]

@@ -30,3 +31,3 @@

def test_delete(self):
coll2 = self.conn.create_collection('coll2', **self.collparams)
coll2 = self.conn.create_collection("coll2", **self.collparams)
docs = [{"id": str(_id), "includes": "silly text"} for _id in range(5)]

@@ -53,6 +54,6 @@

def test_custom_params_search(self):
coll2 = self.conn.create_collection('coll2', **self.collparams)
coll2 = self.conn.create_collection("coll2", **self.collparams)
docs = [{"id": str(_id), "includes": "silly text"} for _id in range(5)]
res_1 = coll2.add(docs, {'omitHeader': "false"})
res_1 = coll2.add(docs, {"omitHeader": "false"})
self.assertEqual(0, res_1.responseHeader.status)

@@ -63,5 +64,5 @@

self.assertEqual(0, res_2.responseHeader.status)
def test_post_body_search(self):
coll2 = self.conn.create_collection('coll2', **self.collparams)
coll2 = self.conn.create_collection("coll2", **self.collparams)
docs = [{"id": str(_id), "includes": "silly text"} for _id in range(5)]

@@ -72,3 +73,3 @@

# JSON DSL Query format
res = coll2.search({},"POST", '{"query": "id:1"}').result
res = coll2.search({}, "POST", '{"query": "id:1"}').result
self.assertTrue(len(res.response.docs) == 1)

@@ -79,3 +80,3 @@ coll2.drop()

def setUpModule():
if os.getenv('SKIP_STARTUP', False):
if os.getenv("SKIP_STARTUP", False):
return

@@ -90,3 +91,3 @@ # start solr

def tearDownModule():
if os.getenv('SKIP_STARTUP', False):
if os.getenv("SKIP_STARTUP", False):
return

@@ -97,4 +98,4 @@ if solrprocess:

if __name__ == '__main__':
if __name__ == "__main__":
# run tests
unittest.main()
import sys
import argparse
import json
from pprint import pprint
from IPython.terminal import ipapp
from IPython.config.loader import Config
from solrcloudpy.connection import SolrConnection
from solrcloudpy.parameters import SearchOptions
def display_list(ob, pprinter, cycle):
if len(ob) == 0:
pprinter.text('[]')
return
e = ob[0]
if type(e) == type({}):
val = json.dumps(ob, indent=4)
pprinter.text(val)
return
pprinter.text(str(ob))
def display_dict(ob, pprinter, cycle):
try:
val = json.dumps(ob, indent=4)
pprinter.text(val)
except TypeError:
pprint(ob)
def get_config(args):
c = Config()
c.PromptManager.in_template = 'solr %s:%s> ' % (args.host, args.port)
c.PromptManager.in2_template = 'solr %s:%s>' % (args.host, args.port)
c.PromptManager.out_template = ''
c.PromptManager.justify = False
c.PlainTextFormatter.pprint = True
c.TerminalInteractiveShell.confirm_exit = False
return c
def get_conn(args):
return SolrConnection(["%s:%s" % (args.host, args.port), ],
user=args.user,
password=args.password, version=args.version)
def main():
parser = argparse.ArgumentParser(description='Parser for solrcloudpy console')
parser.add_argument('--host', default='localhost', help='host')
parser.add_argument('--port', default='8983', help='port')
parser.add_argument('--user', default=None, help='user')
parser.add_argument('--password', default=None, help='password')
parser.add_argument('--version', default='5.3.0', help='version')
args = parser.parse_args(sys.argv[1:])
conn = get_conn(args)
c = get_config(args)
banner = "SolrCloud Console\nUse the 'conn' object to access a collection"
banner2 = "\nType 'collections' to see the list of available collections"
app = ipapp.TerminalIPythonApp.instance()
shell = ipapp.TerminalInteractiveShell.instance(
parent=app,
profile_dir=app.profile_dir,
ipython_dir=app.ipython_dir,
user_ns={"conn": conn,
"collections": conn.list(),
"SearchOptions": SearchOptions},
banner1=banner,
banner2=banner2,
display_banner=False,
config=c)
formatter = shell.get_ipython().display_formatter.formatters["text/plain"]
formatter.for_type(type([]), display_list)
formatter.for_type(type({}), display_dict)
shell.configurables.append(app)
app.shell = shell
app.initialize(argv=[])
app.start()
return
[console_scripts]
solrconsole = scripts.solrconsole:main [ip]