cbcmgr

Couchbase connection manager

cb-util 2.2.40

Couchbase Utilities

Couchbase connection manager. Simplifies connecting to a Couchbase cluster and performing data and management operations.

Installing

$ pip install cbcmgr

API Usage

Original syntax (package is backwards compatible):

>>> from cbcmgr.cb_connect import CBConnect
>>> from cbcmgr.cb_management import CBManager
>>> bucket = scope = collection = "test"
>>> dbm = CBManager("127.0.0.1", "Administrator", "password", ssl=False).connect()
>>> dbm.create_bucket(bucket)
>>> dbm.create_scope(scope)
>>> dbm.create_collection(collection)
>>> dbc = CBConnect("127.0.0.1", "Administrator", "password", ssl=False).connect(bucket, scope, collection)
>>> result = dbc.cb_upsert("test::1", {"data": 1})
>>> result = dbc.cb_get("test::1")
>>> print(result)
{'data': 1}

New Operator syntax:

# hostname, document, and the col_a handle are assumed to be defined as in the
# surrounding examples (e.g. hostname = "127.0.0.1", document = {"data": "data"}).
keyspace = "test.test.test"
db = CBOperation(hostname, "Administrator", "password", ssl=False, quota=128, create=True).connect(keyspace)
db.put_doc(col_a.collection, "test::1", document)
d = db.get_doc(col_a.collection, "test::1")
assert d == document
db.index_by_query("select data from test.test.test")
r = db.run_query(col_a.cluster, "select data from test.test.test")
assert r[0]['data'] == 'data'

Thread Pool Syntax:

pool = CBPool(hostname, "Administrator", "password", ssl=False, quota=128, create=True)
pool.connect(keyspace)
pool.dispatch(keyspace, Operation.WRITE, "test::1", document)
pool.join()

Async Pool Syntax:

pool = CBPoolAsync(hostname, "Administrator", "password", ssl=False, quota=128, create=True)
await pool.connect(keyspace)
await pool.join()
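The await calls above must run inside a coroutine. A minimal sketch of the driver pattern, using a stand-in pool class in place of CBPoolAsync so the sketch is self-contained (StubAsyncPool is hypothetical, not part of cbcmgr):

```python
import asyncio

class StubAsyncPool:
    """Stand-in with the same connect/join shape as CBPoolAsync (hypothetical)."""
    def __init__(self):
        self.connected = []

    async def connect(self, keyspace):
        self.connected.append(keyspace)

    async def join(self):
        # The real pool would wait here for queued operations to drain.
        await asyncio.sleep(0)

async def main(pool, keyspace):
    # Same call sequence as the example above, inside a coroutine.
    await pool.connect(keyspace)
    await pool.join()
    return pool.connected

pool = StubAsyncPool()
result = asyncio.run(main(pool, "test.test.test"))
print(result)  # ['test.test.test']
```

With the real CBPoolAsync, the coroutine body would be identical; only the pool construction differs.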

CLI Utilities

cbcutil

Load 1,000 records of data using the default schema:

$ cbcutil load --host couchbase.example.com --count 1000 --schema default

Load data from a test file:

$ cat data/data_file.txt | cbcutil load --host couchbase.example.com -b bucket

Export data from a bucket to CSV (default output file location is $HOME):

$ cbcutil export csv --host couchbase.example.com -i -b sample_app

Export data as JSON and load that data into another cluster:

$ cbcutil export json --host source -i -O -q -b bucket | cbcutil load --host destination -b bucket

Get a document from a bucket using the key:

$ cbcutil get --host couchbase.example.com -b employees -k employees:1

List information about a Couchbase cluster:

$ cbcutil list --host couchbase.example.com -u developer -p password

List detailed information about a Couchbase cluster including health information:

$ cbcutil list --host couchbase.example.com --ping -u developer -p password

Replicate buckets, indexes, and users from a self-managed cluster to Capella, filtering for buckets whose names begin with "test" and users whose usernames begin with "dev":

$ cbcutil replicate source --host 1.2.3.4 --filter 'bucket:test.*' --filter 'user:dev.*' | cbcutil replicate target --host cb.abcdefg.cloud.couchbase.com -p "Password123#" --project dev-project --db testdb
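The --filter expressions take the form kind:regex. As a hedged sketch of how such an expression might be parsed and applied (parse_filter and matches are illustrative, not cbcutil internals):

```python
import re

def parse_filter(expr):
    """Split a 'kind:regex' filter expression into its parts (illustrative)."""
    kind, _, pattern = expr.partition(":")
    return kind, re.compile(pattern)

def matches(filters, kind, name):
    """True if any filter of the given kind fully matches the name."""
    return any(k == kind and p.fullmatch(name)
               for k, p in (parse_filter(f) for f in filters))

filters = ["bucket:test.*", "user:dev.*"]
print(matches(filters, "bucket", "test-data"))  # True
print(matches(filters, "user", "admin"))        # False
```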

List available schemas:

$ cbcutil schema

Randomizer tokens

Note: Except for the US States, the random data generated may not be valid. For example, the first four digits of a random credit card number may not represent a valid financial institution. The intent is to simulate realistic data; any similarity to real data is purely coincidental.

Token              Description
date_time          Date/time string in the form %Y-%m-%d %H:%M:%S
rand_credit_card   Random credit card format number
rand_ssn           Random US Social Security format number
rand_four          Random four digits
rand_account       Random 10 digit number
rand_id            Random 16 digit number
rand_zip_code      Random US Zip Code format number
rand_dollar        Random dollar amount
rand_hash          Random 16 character alphanumeric string
rand_address       Random street address
rand_city          Random city name
rand_state         Random US State name
rand_first         Random first name
rand_last          Random last name
rand_nickname      Random string concatenating a first initial and last name
rand_email         Random email address
rand_username      Random username created from a name and numbers
rand_phone         Random US style phone number
rand_bool          Random boolean value
rand_year          Random year from 1920 to present
rand_month         Random month number
rand_day           Random day number
rand_date_1        Near term random date with slash notation
rand_date_2        Near term random date with dash notation
rand_date_3        Near term random date with spaces
rand_dob_1         Date of birth with slash notation
rand_dob_2         Date of birth with dash notation
rand_dob_3         Date of birth with spaces
rand_image         Random 128x128 pixel JPEG image
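As a hedged illustration of the kind of values a few of these tokens produce (this sketch uses Python's random module and is not cbcutil's actual generator):

```python
import random

def rand_four():
    """Four random digits, zero-padded (illustrative of the rand_four token)."""
    return f"{random.randint(0, 9999):04d}"

def rand_ssn():
    """A US Social Security *format* number; not guaranteed to be a valid SSN."""
    return (f"{random.randint(0, 999):03d}-"
            f"{random.randint(0, 99):02d}-"
            f"{random.randint(0, 9999):04d}")

def rand_dollar():
    """A random dollar amount (illustrative of the rand_dollar token)."""
    return f"${random.randint(0, 9999)}.{random.randint(0, 99):02d}"

print(rand_four())  # e.g. "0427"
print(rand_ssn())   # e.g. "123-45-6789"
```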

Options

Usage: cbcutil command options

Command      Description
load         Load data
get          Get data
list         List cluster information
export       Export data
import       Import via plugin
clean        Remove buckets
schema       Schema management options
replicate    Replicate configuration

Option                                     Description
-u USER, --user USER                       User Name
-p PASSWORD, --password PASSWORD           User Password
-h HOST, --host HOST                       Cluster Node or Domain Name
-b BUCKET, --bucket BUCKET                 Bucket name
-s SCOPE, --scope SCOPE                    Scope name
-c COLLECTION, --collection COLLECTION     Collection name
-k KEY, --key KEY                          Key name or pattern
-d DATA, --data DATA                       Data to import
-F FILTER, --filter FILTER                 Filter expression (e.g. bucket:regex, user:regex)
--project PROJECT                          Capella project name
--db DATABASE                              Capella database name
-q, --quiet                                Quiet mode (only necessary output)
-O, --stdout                               Output exported data to the terminal
-i, --index                                Create a primary index for export operations (if not present)
--tls                                      Enable SSL (default)
-e, --external                             Use external network for clusters with an external network
--schema SCHEMA                            Schema name
--count COUNT                              Record Count
--file FILE                                File mode schema JSON file
--id ID                                    ID field (for file mode)
--directory DIRECTORY                      Directory for export operations
--defer                                    Create the index as deferred
-P PLUGIN                                  Import plugin
-V PLUGIN_VARIABLE                         Pass a variable in the form key=value to the plugin

sgwutil

Database Commands:

Command    Description
create     Create SGW database (connect to CBS bucket)
delete     Delete a database
sync       Manage the Sync Function for a database
resync     Reprocess documents with the Sync Function
list       List database
dump       Dump synced document details

User Commands:

Command    Description
create     Create users
delete     Delete user
list       List users
map        Create users based on a document field

Database parameters:

Parameter          Description
-b, --bucket       Bucket
-n, --name         Database name
-f, --function     Sync Function file
-r, --replicas     Number of replicas
-g, --get          Display current Sync Function

User parameters:

Parameter          Description
-n, --name         Database name
-U, --sguser       Sync Gateway user name
-P, --sgpass       Sync Gateway user password
-d, --dbhost       Couchbase Server connect name or IP (for map command)
-l, --dblogin      Couchbase Server credentials in the form user:password
-f, --field        Document field to map
-k, --keyspace     Keyspace with documents for map
-a, --all          List all users

Examples:

Create Sync Gateway database "sgwdb" that is connected to bucket "demo":

sgwutil database create -h hostname -n sgwdb -b demo

Get information about database "sgwdb":

sgwutil database list -h hostname -n sgwdb

Display information about documents in the database including the latest channel assignment:

sgwutil database dump -h hostname -n sgwdb

Create a Sync Gateway database user:

sgwutil user create -h hostname -n sgwdb --sguser sgwuser --sgpass "password"

Display user details:

sgwutil user list -h hostname -n sgwdb --sguser sgwuser

List all database users:

sgwutil user list -h hostname -n sgwdb -a

Create users in database "sgwdb" based on the unique values of document field "field_name" in keyspace "demo":

sgwutil user map -h sgwhost -d cbshost -f field_name -k demo -n sgwdb
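A hedged sketch of what that mapping amounts to: for each distinct field value, create a matching user through the Sync Gateway admin REST API (PUT /{db}/_user/{name} on the admin port, 4985 by default). build_user_requests here is illustrative, not sgwutil's implementation, and no request is actually sent:

```python
def build_user_requests(sgw_host, db_name, field_values, admin_port=4985):
    """Build one (url, payload) pair per distinct field value (illustrative)."""
    requests_out = []
    for name in sorted(set(field_values)):
        url = f"http://{sgw_host}:{admin_port}/{db_name}/_user/{name}"
        # A minimal Sync Gateway user body; channel assignment is an assumption.
        payload = {"name": name, "admin_channels": [name]}
        requests_out.append((url, payload))
    return requests_out

# Distinct values of "field_name" pulled from the "demo" keyspace (sample data).
reqs = build_user_requests("sgwhost", "sgwdb", ["alice", "bob", "alice"])
for url, payload in reqs:
    print(url, payload)
```

Each pair could then be issued with any HTTP client authenticated against the admin port.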

Add Sync Function:

sgwutil database sync -h hostname -n sgwdb -f /home/user/demo.js

Display Sync Function:

sgwutil database sync -h hostname -n sgwdb -g

Delete user:

sgwutil user delete -h hostname -n sgwdb --sguser sgwuser

Delete database "sgwdb":

sgwutil database delete -h hostname -n sgwdb

caputil

Note: Save the Capella v4 API token file as $HOME/.capella/default-api-key-token.txt

Create Capella cluster:

caputil cluster create --project project-name --name testdb --region us-east-1

Update Capella cluster (to add services):

caputil cluster update --project project-name --name testdb --services search,analytics,eventing

Delete Capella cluster:

caputil cluster delete --project project-name --name testdb --region us-east-1

Create bucket:

caputil bucket create --project project-name --db testdb --name test-bucket

Change database user password:

caputil user password --project project-name --db testdb --name Administrator
