Malicious npm Packages Inject SSH Backdoors via Typosquatted Libraries
Socket’s threat research team has detected six malicious npm packages typosquatting popular libraries to insert SSH backdoors.
appwrite-utils
Appwrite Utility Functions to help with database management, data conversion, data import, migrations, and much more.
The AppwriteUtils package simplifies the process of managing data migrations and schema updates for Appwrite projects. It provides a comprehensive toolset for database setup, data conversion, and schema management, all accessible through a simple command-line interface. This package is designed to be easily integrated into your development workflow, requiring minimal setup to get started.
To use AppwriteUtils, first, install the package via npm:
npm install appwrite-utils
Once installed, first run the setup command to generate the config; then you can run migration commands directly using npx (or bunx, whichever you prefer):
npx appwrite-utils-setup
You can generate an example config with the following command (or look in the examples folder):
npx appwrite-utils-setup --example
npx appwrite-utils-migrate --args
Replace --args with the specific arguments for your migration task. For example, to run migrations in a development environment, you might use:
npx appwrite-utils-migrate --dev
"{$id}"
for instance gets replaced with the created documents ID. If it has one, "{id}"
would be replaced by the JSON items id
field, "{dbId}"
the current dataase, "{createdDoc}"
the created document in this import iteration, and more!This package leverages TypeScript for type safety and is configured to work seamlessly with Appwrite. It's built to support complex migration scenarios, making it an essential tool for developers working with Appwrite projects.
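As a rough illustration, here is how those placeholders might be wired into an import definition. Only the placeholder syntax and the checkAndUpdateFieldInDocument action are documented; the surrounding keys (collection, attributeMappings, postImportActions, params) are assumptions for the sake of the sketch, not the package's confirmed schema:

```json
{
  "collection": "Posts",
  "attributeMappings": [
    { "oldKey": "title", "targetKey": "title" }
  ],
  "postImportActions": [
    {
      "action": "checkAndUpdateFieldInDocument",
      "params": ["{dbId}", "{collId}", "{docId}", "ownerId", "{ownerId}", "{createdDoc}"]
    }
  ]
}
```

Here "{dbId}", "{collId}", and "{docId}" would resolve from the import context, while "{ownerId}" would pull the ownerId field from the original JSON item.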
After installing the package, you can run various migration-related tasks using the command line. Here are some examples of commands you might use, reflecting the capabilities as defined in index.ts:
Initialize a New Migration: Set up your database and prepare it for new migrations. This will also generate schemas but will not import data.
npx appwrite-utils-migrate --init
Run Migrations in Production: Apply migrations to your production database.
npx appwrite-utils-migrate --prod
Run Migrations in Staging: Apply migrations to your staging database.
npx appwrite-utils-migrate --staging
Run Migrations in Development: Apply migrations to your development database.
npx appwrite-utils-migrate --dev
Wipe Databases: Wipe your databases. Use with caution.
npx appwrite-utils-migrate --wipe
Generate Schemas: Generate TypeScript schemas from your Appwrite database collections.
npx appwrite-utils-migrate --generate
Import Data: Import data into your Appwrite project from external sources.
npx appwrite-utils-migrate --import
Backup Data: Backup your database data.
npx appwrite-utils-migrate --backup
Each command can be combined with others as needed, except for --init, which runs a specific initialization routine including schema generation but not data import. For example, to run migrations in a development environment and import data, you might use:
npx appwrite-utils-migrate --dev --import
By simplifying the migration process, AppwriteUtils enables developers to focus on building their applications, knowing that their data management and schema updates are handled efficiently.
Converters take a value from the import data and convert it before it is validated or processed:
anyToString(value: any): string | null
anyToNumber(value: any): number | null
anyToBoolean(value: any): boolean | null
anyToAnyArray(value: any, separator?: string): any[]
anyToStringArray(value: any): string[]
trySplitByDifferentSeparators(value: string): string[]
removeStartEndQuotes(value: string): string
splitByComma(value: string): string[]
splitByPipe(value: string): string[]
splitBySemicolon(value: string): string[]
splitByColon(value: string): string[]
splitBySlash(value: string): string[]
splitByBackslash(value: string): string[]
splitBySpace(value: string): string[]
splitByDot(value: string): string[]
splitByUnderscore(value: string): string[]
splitByHyphen(value: string): string[]
pickFirstElement(value: any[]): any
pickLastElement(value: any[]): any
stringifyObject(object: any): string
parseObject(jsonString: string): any
safeParseDate(input: string | number): DateTime | null
removeInvalidElements(input: any[]): any[]
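To make the converter contract concrete, here is a minimal sketch of how a few of these helpers might behave, inferred purely from their names and signatures; the package's actual implementations may differ:

```typescript
// Hypothetical re-implementations of three converters for illustration only.

// Coerce any value to a string; objects are JSON-stringified, null/undefined pass through as null.
function anyToString(value: any): string | null {
  if (value === null || value === undefined) return null;
  return typeof value === "object" ? JSON.stringify(value) : String(value);
}

// Coerce any value to a number, returning null when coercion fails.
function anyToNumber(value: any): number | null {
  const n = Number(value);
  return Number.isNaN(n) ? null : n;
}

// Try a list of common separators and split on the first one that
// actually divides the string into multiple parts.
function trySplitByDifferentSeparators(value: string): string[] {
  for (const sep of [",", "|", ";", ":", "/"]) {
    const parts = value.split(sep).map((p) => p.trim());
    if (parts.length > 1) return parts;
  }
  return [value];
}
```

For example, `anyToNumber("42")` yields `42`, while `anyToNumber("abc")` yields `null` so a later validation rule can reject the field.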
Validation rules run after converters and ensure invalid data doesn't get added to your database:
isNumber(value: any): boolean
isString(value: any): boolean
isBoolean(value: any): boolean
isArray(value: any): boolean
isObject(value: any): boolean
isNull(value: any): boolean
isUndefined(value: any): boolean
isDefined(value: any): boolean
isDate(value: any): boolean
isEmpty(value: any): boolean
isInteger(value: any): boolean
isFloat(value: any): boolean
isArrayLike(value: any): boolean
isArrayLikeObject(value: any): boolean
isFunction(value: any): boolean
isLength(value: any): boolean
isMap(value: any): boolean
isSet(value: any): boolean
isRegExp(value: any): boolean
isSymbol(value: any): boolean
isObjectLike(value: any): boolean
isPlainObject(value: any): boolean
isSafeInteger(value: any): boolean
isTypedArray(value: any): boolean
isEqual(value: any, other: any): boolean
isMatch(object: any, source: any): boolean
has(object: any, path: string): boolean
get(object: any, path: string, defaultValue: any): any
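The convert-then-validate flow described above can be sketched as a small pipeline. This is an illustrative model of the ordering (converters first, then rules), not the package's actual import code; isDefined and isNumber are re-implemented here from their documented descriptions:

```typescript
type Converter = (value: any) => any;
type Rule = (value: any) => boolean;

// Re-implemented from the documented behavior, for illustration.
const isNumber: Rule = (v) => typeof v === "number" && !Number.isNaN(v);
const isDefined: Rule = (v) => v !== undefined && v !== null && v !== "";

const anyToNumber: Converter = (v) => {
  const n = Number(v);
  return Number.isNaN(n) ? null : n;
};

// Run every converter in order, then require every rule to pass;
// values that fail validation are dropped rather than imported.
function processField(raw: any, converters: Converter[], rules: Rule[]): any {
  const converted = converters.reduce((acc, fn) => fn(acc), raw);
  return rules.every((rule) => rule(converted)) ? converted : undefined;
}
```

So `processField("42", [anyToNumber], [isDefined, isNumber])` produces `42`, while `processField("abc", [anyToNumber], [isDefined, isNumber])` produces `undefined` because the failed conversion yields null, which isDefined rejects.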
After-import actions run after the import and do something with the old data, the new data, or something else entirely.
Provided Fields:
{dbId} - Current database ID
{collId} - Current collection ID
{docId} - Created document ID
{createdDoc} - Created document object
any_string - You can use any string or thing as a value too! (like for data)
{some_template_string} - The templating system allows you to reference anything in the context of the current data you're working with. So, for instance, if your imported item has an ownerId field and you use {ownerId}, it'll reference that old JSON item's data in the import.
updateCreatedDocument(dbId: string, collId: string, docId: string, data: any): Promise<any>
checkAndUpdateFieldInDocument(dbId: string, collId: string, docId: string, fieldName: string, oldFieldValue: any, newFieldValue: any): Promise<any>
setFieldFromOtherCollectionDocument(dbId: string, collIdOrName: string, docId: string, fieldName: string, otherCollIdOrName: string, otherDocId: string, otherFieldName: string): Promise<any>
createOrGetBucket(bucketName: string, bucketId?: string, permissions?: string[], fileSecurity?: boolean, enabled?: boolean, maxFileSize?: number, allowedExtensions?: string[], compression?: string, encryption?: boolean, antivirus?: boolean): Promise<any>
createFileAndUpdateField(dbId: string, collId: string, docId: string, fieldName: string, bucketId: string, filePath: string, fileName: string): Promise<any>
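The placeholder resolution described under "Provided Fields" could plausibly work like the sketch below: known context fields are filled from the import state, and any other {key} falls back to the original JSON item. This is a hypothetical resolver written for illustration, not the package's internal implementation:

```typescript
interface ImportContext {
  dbId: string;
  collId: string;
  docId: string;
  createdDoc: Record<string, any>;
  item: Record<string, any>; // the original JSON item being imported
}

// Replace each {placeholder} in the template using the import context,
// leaving unknown placeholders untouched.
function resolveTemplate(template: string, ctx: ImportContext): string {
  return template.replace(/\{(\$?\w+)\}/g, (match, key) => {
    if (key === "$id") return ctx.docId; // created document's ID
    if (key === "dbId") return ctx.dbId;
    if (key === "collId") return ctx.collId;
    if (key === "docId") return ctx.docId;
    if (key in ctx.item) return String(ctx.item[key]); // e.g. {ownerId}
    return match;
  });
}
```

With `item: { ownerId: "user42" }`, the template `"owner:{ownerId}"` resolves to `"owner:user42"`, while `"{dbId}"` resolves to the current database ID.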
appwriteConfig.yaml
(this week)safeParseDate
, it wasn't parsing dates very safely...trySplitByDifferentSeparators
works to fix the logic. Added converters are above. Also made it so converters and validation actions can take arrays for the item, because why not.import something from '@/utils'
in esbuild
, stupid, I miss Vite :(update
importDef type and photos from URL'sbasePath
optional in importDefs
, if it's just an array of objects or somethin you don't need it!targetKey
a lot, so set it you lazy nerds! (myself included)targetKey
optional too, whoopsoldKey
or oldKeys
[any]
in your template string. So if you have a nested object"RECORDS": [
{
"someObject": {
"someValue": {
"id": 3,
},
"anotherValue": {
"id": 4,
}
}
}
]
You can resolve the IDs by getting the first one using "oldKey": "someObject.[any].id" with a converter of converters: ["pickFirstElement"], or get all of them by using "oldKeys": "someObject.[any].id" -- this also works in fileData.
- Fixed fileData before y'all used it (if y'all is anyone) -- also made sure to fix the type defs in CustomDefinitions
- Allowed the path field in the fileData of the importDefs to be a URL, for laziness!
- Added type and updateMapping optionally to importDefs so you can run a second file to update the first one, if needed
- Added the removeInvalidElements converter
- Added oldKeys to importDefs so you can concatenate multiple keys to one for an array. Also added five new converter functions, anyToStringArray, pickFirstElement, pickLastElement, stringifyObject, parseObject, and a new validator, isDefined, for when you just need to know if something is, well, defined (!undefined, !null, and !empty). I also fixed the exports for the types for the custom definitions, my bad!
- Renamed setup and migrate, lmao; now it's appwrite-utils-setup & appwrite-utils-migrate
- Added a bin section to package.json and a "shebang" to the top of main.ts and setup.ts to enable npx
FAQs
`appwrite-utils` is a comprehensive TypeScript library designed to streamline the development process for Appwrite projects. It provides a suite of utilities and helper functions that facilitate data manipulation, schema management, and seamless integration with Appwrite.
The npm package appwrite-utils receives a total of 95 weekly downloads. As such, appwrite-utils' popularity was classified as not popular.
We found that appwrite-utils demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 0 open source maintainers collaborating on the project.