@or-sdk/contacts
$ npm i @or-sdk/contacts
Contacts is a composition of separate API entities.
import { Contacts } from '@or-sdk/contacts'

// with a direct API url
const contacts = new Contacts({
  token: 'my-account-token-string',
  contactsApiUrl: 'http://example.account-settings/endpoint'
});

// with service discovery (slower)
const contactsViaDiscovery = new Contacts({
  token: 'my-account-token-string',
  discoveryUrl: 'http://example.account-settings/endpoint'
});
After you initialize the Contacts API, various entities are available for you to work with (all entities are presented in the api folder). Let's say you want to perform some CRUD operation on the contactBook entity. All you need to do is refer to contactBookApi (the entity's API):
import { Contacts } from '@or-sdk/contacts'

const { contactBookApi, migrationsApi, ...rest } = new Contacts(...);

const migrationsStatus = migrationsApi.getMigrationState();
const book = contactBookApi.getContactsBook(id);
const newContactBook = contactBookApi.createContactBook(...);
import { Contacts } from '@or-sdk/contacts'

// always use the 'withKeepAliveAgents' option for bulk create
const { bulkCreateApi } = new Contacts({
  contactsApiUrl: ...,
  accountId: ...,
  token: ...,
  withKeepAliveAgents: true
});

// Since bulkCreateContacts is a time-consuming operation,
// bulkName can be used for tracking the bulk progress.
//
// Usually 4000000 bytes as the maximum size of the JSON payload sent in
// a single batch (the bulk operation, under the hood, consists of
// some number of batches) is quite enough to complete the bulk
// operation successfully; however, if the bulk fails due to PostgreSQL
// server workload, you can try to decrease the batch size with the
// appropriate option.
const { created, failed } = await bulkCreateApi.bulkCreateContacts(
  'some bulk name', // bulkName
  {
    contact_book: 'some book id', // optional
    contacts: [...] // it is recommended to provide a contactKey that is
                    // unique within the scope of a single bulk for every
                    // contact; it will be used in the returned 'created'
                    // or 'failed' objects to indicate the created contact
                    // id or the failure reason respectively
  }, // data
  { batchSize: 4000000 } // optional
);
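To illustrate the contactKey recommendation, a payload for a small bulk might look like the sketch below. Only contactKey comes from the description above; the remaining contact fields are hypothetical placeholders, since the actual contact schema depends on your contact book.

// Hypothetical payload sketch: contactKey values are unique within this bulk;
// firstName and email are placeholder fields, not a documented schema.
const data = {
  contact_book: 'some book id', // optional
  contacts: [
    { contactKey: 'contact-1', firstName: 'Alice', email: 'alice@example.com' },
    { contactKey: 'contact-2', firstName: 'Bob', email: 'bob@example.com' }
  ]
};

const { created, failed } = await bulkCreateApi.bulkCreateContacts('small bulk', data);
// 'created' and 'failed' refer back to the contactKey values above, so each
// input contact can be matched to its created contact id or failure reason.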
import { Contacts } from '@or-sdk/contacts'

const { bulkCreateApi } = new Contacts({
  contactsApiUrl: ...,
  accountId: ...,
  token: ...,
  withKeepAliveAgents: true
});

// trackBulkCreateContacts yields intermediate 'progress' results while the
// bulk is running, and the final results once it completes
for await (const bulkResults of bulkCreateApi.trackBulkCreateContacts('bulkName', data)) {
  if (bulkResults.type === 'progress') {
    const progress = bulkResults.results;
    // do progress related operations here
  } else {
    const results = bulkResults.results;
    // process final results here
  }
}
Bulk create consumes a lot of resources, both in terms of the quantity of launched lambdas (it's huge!) and in terms of the workload created on the DB (DB server memory, CPU, connections to the DB). In practice this means that the more activity is taking place at a particular point in time, the higher the probability of bulk failure. It is strongly recommended to execute the bulk create operation during non-working hours!
Bulk provides an interface for tuning the workload by means of the options argument. With the default options, the bulk successfully completes 200k contacts in approximately 5 minutes. Tuning the options may degrade bulk performance; however, it can dramatically reduce the workload on the system and turn a failed bulk into a successful one!
Here is a short description of the bulk create contacts mechanism, to help you understand how to affect it by tuning the options (an example follows the options listing below):
- the whole payload is split into batches of at most batchSize bytes;
- the bulk executes several (parallelBatchesAmount) batches in parallel - we call it a batches series; executing here means sending the batch data to the server where the contacts are actually created and inserted into the DB;
- every polingDuration milliseconds the bulk polls the batch processing results;
- if a series of batches fails, the bulk waits repeatFailedParallelBatchesIn milliseconds before attempting to execute it again.
Here are the options:
{
  batchSize?: number; // default is 4000000 bytes.
                      // Reducing this number is the very first step you should try.
  parallelBatchesAmount?: number; // default is 4. Reducing it also improves bulk reliability,
                                  // however it makes the bulk slower.
  batchesToProcess?: number; // default is ALL batches. If specified, executes only the number
                             // of batches passed here.
  polingDuration?: number; // default is 15000 milliseconds;
                           // specifies the pause duration between polling batch process results.
  repeatFailedParallelBatchesIn?: number; // default is 30000 milliseconds (30 seconds) -
                                          // the amount of time the bulk waits until repeating an attempt
                                          // to execute a failed series of batches again.
  logger?: BulkLogger; // logger, pass here an object that has a log() function;
                       // to be more precise, the log function should be of the following type:
                       // (message?: unknown, ...optionalParams: unknown[]) => void
};
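For instance, if a bulk fails under heavy server workload, a more conservative call might look like the sketch below. The option names and the bulkCreateContacts signature come from the sections above; the concrete values are illustrative only, not recommendations, and data is assumed to be a payload like the one in the earlier examples.

// A sketch of a tuned bulk create: smaller batches, fewer parallel batches,
// less frequent polling, and console as a minimal object with a log() function.
const { created, failed } = await bulkCreateApi.bulkCreateContacts(
  'tuned bulk name', // bulkName
  data,
  {
    batchSize: 1000000,                   // smaller JSON payload per batch
    parallelBatchesAmount: 2,             // fewer batches executed in parallel
    polingDuration: 30000,                // poll batch results every 30 seconds
    repeatFailedParallelBatchesIn: 60000, // wait a minute before retrying a failed series
    logger: console                       // console.log matches the expected log() type
  }
);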