yarn add @op-engineering/op-sqlcipher
npx pod-install
OP SQLCipher embeds the latest version of SQLCipher and provides a low-level (JSI-backed) API to execute SQL queries.
SQLCipher embedded SQLite version: 3.42.0
Created by @ospfranco. Please consider sponsoring; none of this work is free. I pay for it with my time and knowledge. If you are a company in need of help with your React Native/React apps, feel free to reach out. I also do a lot of C++ and, nowadays, Rust.
Benchmarks
You can find the benchmarking code in the example app. You should expect anywhere between a 5x and 8x improvement over non-JSI packages, as well as over quick-sqlite and expo-sqlite. Loading a 300k-record database (in milliseconds).
Memory consumption is also roughly 1/4 of react-native-quick-sqlite's: the benchmark query used to peak at 1.2 GB of memory and now runs in 250 MB.
You can also turn on Memory Mapping to make your queries even faster by skipping the kernel during I/O and potentially reduce RAM usage, though this comes with some disadvantages. If you need even more speed and can re-use your queries, use Prepared Statements.
Duplicated libcrypto.so
If you have any library that also depends on OpenSSL, you might find an issue similar to this when building Android:
Execution failed for task ':app:mergeDebugNativeLibs'.
> A failure occurred while executing com.android.build.gradle.internal.tasks.MergeNativeLibsTask$MergeNativeLibsTaskWorkAction
> 2 files found with path 'lib/arm64-v8a/libcrypto.so' from inputs:
- /Users/osp/Developer/mac_test/node_modules/react-native-quick-crypto/android/build/intermediates/library_jni/debug/jni/arm64-v8a/libcrypto.so
- /Users/osp/.gradle/caches/transforms-3/e13f88164840fe641a466d05cd8edac7/transformed/jetified-flipper-0.182.0/jni/arm64-v8a/libcrypto.so
This means you have a transitive dependency in which two libraries depend on OpenSSL and each generates a libcrypto.so. You can work around the issue by adding the following to your app/build.gradle:
packagingOptions {
// Should prevent clashes with other libraries that use OpenSSL
pickFirst '**/libcrypto.so'
}
Usually this is caused by Flipper, which also depends on OpenSSL.
This just tells Gradle to grab whatever OpenSSL version it finds first and link against it. As you can imagine, this is not correct if the packages depend on different OpenSSL versions (op-sqlcipher depends on com.android.ndk.thirdparty:openssl:1.1.1q-beta-1). Make sure all the OpenSSL versions match and that you have no conflicts or errors.
Database Location
Default location
If you don't pass a location, the library creates/opens the database with the passed name inside the default directory: the Library directory on iOS and the databases directory on Android.
Relative location
You can use a relative location to navigate in and out of the default location:
import { open } from '@op-engineering/op-sqlcipher';
const db = open({
name: 'myDB',
location: '../files/databases',
encryptionKey: 'YOUR ENCRYPTION KEY, KEEP IT SOMEWHERE SAFE',
});
Note that on iOS the file system is sand-boxed, so you cannot access files/directories outside your app bundle directories.
Passing absolute paths
You can also pass absolute paths to completely change the location of the database. The library exports useful paths you can use:
import {
IOS_LIBRARY_PATH,
IOS_DOCUMENT_PATH,
ANDROID_DATABASE_PATH,
ANDROID_FILES_PATH,
ANDROID_EXTERNAL_FILES_PATH,
open,
} from '@op-engineering/op-sqlcipher';
import { Platform } from 'react-native';

const db = open({
name: 'myDb',
location: Platform.OS === 'ios' ? IOS_LIBRARY_PATH : ANDROID_DATABASE_PATH,
encryptionKey: 'YOUR ENCRYPTION KEY, KEEP IT SOMEWHERE SAFE',
});
Here is an example if you want to access the app's directory on the SD card:
const db = open({
name: 'myDB',
location:
Platform.OS === 'ios' ? IOS_LIBRARY_PATH : ANDROID_EXTERNAL_FILES_PATH,
});
You can even drill down:
const db = open({
name: 'myDB',
location:
Platform.OS === 'ios'
? IOS_LIBRARY_PATH
: `${ANDROID_EXTERNAL_FILES_PATH}/dbs/`,
});
In-memory
SQLite in-memory mode is supported by passing ':memory:' as the location:
import { open } from '@op-engineering/op-sqlcipher';
const largeDb = open({
name: 'inMemoryDb',
encryptionKey: 'YOUR ENCRYPTION KEY, KEEP IT SOMEWHERE SAFE',
location: ':memory:',
});
Speed
op-sqlcipher is already about as fast as it can be, but that doesn't mean you cannot tweak SQLite to be even faster (at the cost of some disadvantages). One possible tweak is turning on Memory Mapping, which allows reading from/writing to disk without going through the kernel. However, if one of your queries throws an error, your application might crash.
To turn on Memory Mapping, execute the following pragma statement after opening a db:
const db = open({
name: 'mydb.sqlite',
encryptionKey: 'YOUR ENCRYPTION KEY, KEEP IT SOMEWHERE SAFE',
});
db.execute('PRAGMA mmap_size=268435456');
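The value 268435456 is 256 MiB. If you prefer to express the size in mebibytes, a tiny helper can build the pragma string for you (`mmapPragma` below is a hypothetical convenience, not part of the library):

```typescript
// Hypothetical helper: build a PRAGMA mmap_size statement from a size in MiB.
// 256 MiB = 256 * 1024 * 1024 = 268435456 bytes, matching the example above.
const MiB = 1024 * 1024;

function mmapPragma(sizeInMiB: number): string {
  return `PRAGMA mmap_size=${sizeInMiB * MiB}`;
}

console.log(mmapPragma(256)); // → 'PRAGMA mmap_size=268435456'
```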
If you use prepared statements plus memory mapping, you can get within inches of MMKV's performance for the most performance-critical queries, such as writing/reading a single value.
API
import {open} from '@op-engineering/op-sqlcipher'
const db = open({
name: 'myDb.sqlite',
encryptionKey: 'YOUR ENCRYPTION KEY, KEEP IT SOMEWHERE SAFE'
})
db = {
close: () => void,
delete: () => void,
attach: (dbNameToAttach: string, alias: string, location?: string) => void,
detach: (alias: string) => void,
transaction: (fn: (tx: Transaction) => Promise<void>) => Promise<void>,
execute: (query: string, params?: any[]) => QueryResult,
executeAsync: (query: string, params?: any[]) => Promise<QueryResult>,
executeBatch: (commands: SQLBatchTuple[]) => BatchQueryResult,
executeBatchAsync: (commands: SQLBatchTuple[]) => Promise<BatchQueryResult>,
loadFile: (location: string) => Promise<FileLoadResult>,
updateHook: (
callback: ((params: {
table: string;
operation: UpdateHookOperation;
row?: any;
rowId: number;
}) => void) | null
) => void,
commitHook: (callback: (() => void) | null) => void,
rollbackHook: (callback: (() => void) | null) => void
}
Simple queries
The basic query is synchronous and will block rendering on large operations; further below you will find async versions.
import { open } from '@op-engineering/op-sqlcipher';
try {
const db = open({
name: 'myDb.sqlite',
encryptionKey: 'YOUR ENCRYPTION KEY, KEEP IT SOMEWHERE SAFE',
});
let { rows } = db.execute('SELECT somevalue FROM sometable');
rows._array.forEach((row) => {
console.log(row);
});
let { rowsAffected } = await db.executeAsync(
'UPDATE sometable SET somecolumn = ? where somekey = ?',
[0, 1]
);
console.log(`Update affected ${rowsAffected} rows`);
} catch (e) {
console.error('Something went wrong executing SQL commands:', e.message);
}
Multiple statements in a single string
You can execute multiple statements in a single operation. The API, however, is not really designed for this use case: the results (and their metadata) will be mangled, so you should discard them.
db.execute(
`CREATE TABLE T1 ( id INT PRIMARY KEY) STRICT;
CREATE TABLE T2 ( id INT PRIMARY KEY) STRICT;`
);
let t1name = db.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='T1';"
);
console.log(t1name.rows?._array[0].name);
let t2name = db.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='T2';"
);
console.log(t2name.rows?._array[0].name);
Transactions
Throwing an error inside the callback will ROLLBACK the transaction.
If you want to execute a large set of commands as fast as possible, use the executeBatch method instead; it wraps all the commands in a transaction and has less overhead.
await db.transaction(async (tx) => {
const { status } = tx.execute(
'UPDATE sometable SET somecolumn = ? where somekey = ?',
[0, 1]
);

await tx.executeAsync(
'UPDATE sometable SET somecolumn = ? where somekey = ?',
[0, 1]
);

// Throwing an error will automatically ROLLBACK the transaction
// throw new Error('Random Error!');

// You can also manually commit or rollback at any point
tx.commit();
// tx.rollback();
});
Batch operation
Batch execution allows the transactional execution of a set of commands:
const commands = [
['CREATE TABLE TEST (id integer)'],
['INSERT INTO TEST (id) VALUES (?)', [1]],
['INSERT INTO TEST (id) VALUES (?)', [2]],
['INSERT INTO TEST (id) VALUES (?)', [[3], [4], [5], [6]]],
];

const res = db.executeBatch(commands);

console.log(`Batch affected ${res.rowsAffected} rows`);
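Conceptually, the nested-array form repeats the same statement once per inner parameter array. This standalone sketch (the `expandBatch` helper is a hypothetical illustration, not library code) shows how a batch expands into individual runs:

```typescript
type SQLBatchTuple = [string] | [string, any[]];

// Hypothetical illustration: expand batch tuples into individual (sql, params)
// runs, the way the nested-array form repeats a statement per parameter set.
function expandBatch(commands: SQLBatchTuple[]): Array<[string, any[]]> {
  const runs: Array<[string, any[]]> = [];
  for (const [sql, params] of commands) {
    if (params === undefined) {
      runs.push([sql, []]); // no parameters: single run
    } else if (Array.isArray(params[0])) {
      // nested arrays: one run per inner parameter array
      for (const p of params as any[][]) runs.push([sql, p]);
    } else {
      runs.push([sql, params]); // single parameter set: single run
    }
  }
  return runs;
}

const runs = expandBatch([
  ['CREATE TABLE TEST (id integer)'],
  ['INSERT INTO TEST (id) VALUES (?)', [1]],
  ['INSERT INTO TEST (id) VALUES (?)', [[3], [4], [5], [6]]],
]);
console.log(runs.length); // 6 individual statement runs
```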
Dynamic Column Metadata
In some scenarios, dynamic applications may need metadata about the returned result set. You can inspect the returned data directly, but in some cases that is not enough, for example when data is stored outside SQLite datatypes. When fetching data directly from tables or views linked to table columns, SQLite can identify the declared column types:
let { metadata } = db.execute(
'SELECT int_column_1, bol_column_2 FROM sometable'
);
metadata.forEach((column) => {
console.log(`${column.name} - ${column.type}`);
});
Async operations
If you have too much SQL to process, it might cause your application to freeze. There are async versions of some operations, which offload the SQLite processing to a different thread:
db.executeAsync('SELECT * FROM "User";', []).then(({ rows }) => {
console.log('users', rows._array);
});
Blobs
Blobs are supported via ArrayBuffer; you need to be careful about the semantics, though. You cannot instantiate an ArrayBuffer directly, nor pass a typed array directly. Here is an example:
const db = open({
name: 'blobs',
encryptionKey: 'YOUR ENCRYPTION KEY, KEEP IT SOMEWHERE SAFE',
});
db.execute('DROP TABLE IF EXISTS BlobTable;');
db.execute(
'CREATE TABLE BlobTable ( id INT PRIMARY KEY, name TEXT NOT NULL, content BLOB) STRICT;'
);
let binaryData = new Uint8Array(2);
binaryData[0] = 42;
db.execute(`INSERT OR REPLACE INTO BlobTable VALUES (?, ?, ?);`, [
1,
'myTestBlob',
binaryData,
]);
const result = db.execute('SELECT content FROM BlobTable');
const finalUint8 = new Uint8Array(result.rows!._array[0].content);
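Independent of the library, the underlying ArrayBuffer/typed-array semantics are plain JavaScript: a Uint8Array is a view over an ArrayBuffer, and you can rebuild a view from a buffer you get back, as the last line above does:

```typescript
// A Uint8Array is a *view* over an ArrayBuffer; .buffer exposes the raw bytes.
const bytes = new Uint8Array(2);
bytes[0] = 42;

const buffer = bytes.buffer; // the underlying storage
const restored = new Uint8Array(buffer); // rebuild a view, e.g. from a query result

console.log(restored[0]); // 42
```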
Prepared statements
A lot of the work when executing a query is not iterating through the result set itself but planning the execution. If you have an expensive query that you can re-use (even if you have to change the arguments), you can use a prepared statement:
const statement = db.prepareStatement('SELECT * FROM User WHERE name = ?;');
statement.bind(['Oscar']);
let results1 = statement.execute();
statement.bind(['Carlos']);
let results2 = statement.execute();
You only pay the price of parsing the query once, and each subsequent execution should be faster.
Attach or Detach other databases
SQLite supports attaching or detaching other database files into your main database connection through an alias.
You can do any operation you like on this attached database like JOIN results across tables in different schemas, or update data or objects.
These databases can have different configurations, like journal modes, and cache settings.
You can, at any moment, detach a database that you don't need anymore. You don't need to detach an attached database before closing your connection. Closing the main connection will detach any attached databases.
SQLite has a limit on attached databases: a default of 10, and a global max of 125.
References: Attach - Detach
db.attach('statistics', 'stats', '../databases');

const res = db.execute(
'SELECT * FROM some_table_from_mainschema a INNER JOIN stats.some_table b on a.id_column = b.id_column'
);

db.detach('stats');
Loading SQL Dump Files
If you have a SQL dump file, you can load it directly, with low memory consumption:
const { rowsAffected, commands } = await db.loadFile(
'/absolute/path/to/file.sql'
);
Hooks
You can subscribe to changes in your database by using an update hook:
db.updateHook(({ rowId, table, operation, row = {} }) => {
console.warn(`Hook has been called, rowId: ${rowId}, ${table}, ${operation}`);
console.warn(JSON.stringify(row, null, 2));
});
db.execute('INSERT INTO "User" (id, name, age, networth) VALUES(?, ?, ?, ?)', [
id,
name,
age,
networth,
]);
The same goes for commit and rollback hooks:
db.commitHook(() => {
console.log('Transaction committed!');
});
db.rollbackHook(() => {
console.log('Transaction rolled back!');
});
db.transaction(async (tx) => {
tx.execute(
'INSERT INTO "User" (id, name, age, networth) VALUES(?, ?, ?, ?)',
[id, name, age, networth]
);
});
try {
await db.transaction(async (tx) => {
throw new Error('Test Error');
});
} catch (e) {
}
You can pass `null` to remove hooks at any moment:
db.updateHook(null);
db.commitHook(null);
db.rollbackHook(null);
Use built-in SQLite
On iOS you can use the OS-embedded SQLite by adding an environment flag when running pod-install:
OP_SQLITE_USE_PHONE_VERSION=1 npx pod-install
On Android it is not possible to link the OS SQLite, and it would also be a bad idea due to vendor changes, old Android bugs, etc. Unfortunately, this means the library adds a few megabytes to your app size.
Enable compile-time options
By specifying pre-processor flags, you can enable optional features like FTS5, Geopoly, etc.
iOS
Add a post_install
block to your <PROJECT_ROOT>/ios/Podfile
like so:
post_install do |installer|
installer.pods_project.targets.each do |target|
if target.name == "op-sqlcipher" then
target.build_configurations.each do |config|
config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] << 'SQLITE_ENABLE_FTS5=1'
end
end
end
end
The example above appends `SQLITE_ENABLE_FTS5=1` to `GCC_PREPROCESSOR_DEFINITIONS`, which enables FTS5 in the iOS project. Replace it with whichever flags you want to add.
Android
You can specify flags via <PROJECT_ROOT>/android/gradle.properties
like so:
OPSQLiteFlags="-DSQLITE_ENABLE_FTS5=1"
Additional configuration
App groups (iOS only)
On iOS, the SQLite database can be placed in an app group, in order to make it accessible from other apps in that app group. E.g. for sharing capabilities.
To use an app group, add the app group ID as the value for the OPSQLite_AppGroup
key in your project's Info.plist
file. You'll also need to configure the app group in your project settings. (Xcode -> Project Settings -> Signing & Capabilities -> Add Capability -> App Groups)
Contribute
You need to have clang-format installed (brew install clang-format
)
License
MIT License.