@mikro-orm/libsql

MikroORM

TypeScript ORM for Node.js based on Data Mapper, Unit of Work and Identity Map patterns. Supports MongoDB, MySQL, MariaDB, PostgreSQL and SQLite (including libSQL) databases.

Heavily inspired by Doctrine and Hibernate.


🤔 Unit of What?

You might be asking: What the hell is Unit of Work and why should I care about it?

Unit of Work maintains a list of objects (entities) affected by a business transaction and coordinates the writing out of changes. (Martin Fowler)

Identity Map ensures that each object (entity) gets loaded only once by keeping every loaded object in a map. Looks up objects using the map when referring to them. (Martin Fowler)

So what benefits does it bring to us?

Implicit Transactions

The first and most important implication of having a Unit of Work is that it allows handling transactions automatically.

When you call em.flush(), all computed changes are executed inside a database transaction (if supported by the given driver). This means that you can control the boundaries of transactions simply by calling em.persist() and, once all your changes are ready, calling flush() will run them inside a transaction.

You can also control the transaction boundaries manually via em.transactional(cb).

const user = await em.findOneOrFail(User, 1);
user.email = 'foo@bar.com';
const car = new Car();
user.cars.add(car);

// thanks to bi-directional cascading we only need to persist user entity
// flushing will create a transaction, insert new car and update user with new email
// as user entity is managed, calling flush() is enough
await em.flush();
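
The same flow can also be wrapped in an explicit transaction via em.transactional(cb); a minimal sketch, reusing the User and Car entities from above:

// explicit transaction boundaries; flush happens automatically when the callback resolves,
// and the transaction is rolled back if the callback throws
await em.transactional(async em => {
  const user = await em.findOneOrFail(User, 1);
  user.email = 'foo@bar.com';
  user.cars.add(new Car());
});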

ChangeSet based persistence

MikroORM allows you to implement your domain/business logic directly in the entities. To keep entities always valid, you can use constructors to mark required properties. Let's define the User entity used in the previous example:

import { Entity, PrimaryKey, Property, OneToOne, ManyToMany, Collection } from '@mikro-orm/core';

@Entity()
export class User {

  @PrimaryKey()
  id!: number;

  @Property()
  name!: string;

  @OneToOne(() => Address)
  address?: Address;

  @ManyToMany(() => Car)
  cars = new Collection<Car>(this);

  constructor(name: string) {
    this.name = name;
  }

}

Now, to create a new instance of the User entity, we are forced to provide the name:

const user = new User('John Doe'); // name is required to create new user instance
user.address = new Address('10 Downing Street'); // address is optional

Once your entities are loaded, perform a number of synchronous actions on them, then call em.flush(). This will trigger computing of the change sets. Only entities (and properties) that were changed will generate database queries; if there are no changes, no transaction will be started.

const user = await em.findOneOrFail(User, 1, {
  populate: ['cars', 'address.city'],
});
user.title = 'Mr.';
user.address.street = '10 Downing Street'; // address is 1:1 relation of Address entity
user.cars.getItems().forEach(car => car.forSale = true); // cars is 1:m collection of Car entities
const car = new Car('VW');
user.cars.add(car);

// now we can flush all changes done to managed entities
await em.flush();

em.flush() will then execute these queries from the example above:

begin;
update "user" set "title" = 'Mr.' where "id" = 1;
update "user_address" set "street" = '10 Downing Street' where "id" = 123;
update "car"
  set "for_sale" = case
    when ("id" = 1) then true
    when ("id" = 2) then true
    when ("id" = 3) then true
    else "for_sale" end
  where "id" in (1, 2, 3);
insert into "car" ("brand", "owner") values ('VW', 1);
commit;

Identity Map

Thanks to the Identity Map, you will always have only one instance of a given entity in one context. This allows for some optimizations (skipping loading of already loaded entities), as well as comparison by identity (ent1 === ent2).
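
A minimal sketch of what this means in practice, assuming a User entity with id 1 exists:

const a = await em.findOne(User, 1);
const b = await em.findOne(User, 1); // second lookup by the same PK is served from the identity map
console.log(a === b); // true, both variables point to the same managed instance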

📖 Documentation

MikroORM documentation, included in this repo in the root directory, is built with Docusaurus and publicly hosted on GitHub Pages at https://mikro-orm.io.

There is also an auto-generated CHANGELOG.md file based on commit messages (via semantic-release).

✨ Core Features

📦 Example Integrations

You can find example integrations for some popular frameworks in the mikro-orm-examples repository:

  • TypeScript Examples
  • JavaScript Examples

🚀 Quick Start

First install the module via yarn or npm and do not forget to install the database driver as well:

Since v4, you should install the driver package, but not the db connector itself, e.g. install @mikro-orm/sqlite, but not sqlite3 as that is already included in the driver package.

yarn add @mikro-orm/core @mikro-orm/mongodb       # for mongo
yarn add @mikro-orm/core @mikro-orm/mysql         # for mysql/mariadb
yarn add @mikro-orm/core @mikro-orm/mariadb       # for mysql/mariadb
yarn add @mikro-orm/core @mikro-orm/postgresql    # for postgresql
yarn add @mikro-orm/core @mikro-orm/mssql         # for mssql
yarn add @mikro-orm/core @mikro-orm/sqlite        # for sqlite
yarn add @mikro-orm/core @mikro-orm/better-sqlite # for better-sqlite
yarn add @mikro-orm/core @mikro-orm/libsql        # for libsql

or

npm i -s @mikro-orm/core @mikro-orm/mongodb       # for mongo
npm i -s @mikro-orm/core @mikro-orm/mysql         # for mysql/mariadb
npm i -s @mikro-orm/core @mikro-orm/mariadb       # for mysql/mariadb
npm i -s @mikro-orm/core @mikro-orm/postgresql    # for postgresql
npm i -s @mikro-orm/core @mikro-orm/mssql         # for mssql
npm i -s @mikro-orm/core @mikro-orm/sqlite        # for sqlite
npm i -s @mikro-orm/core @mikro-orm/better-sqlite # for better-sqlite
npm i -s @mikro-orm/core @mikro-orm/libsql        # for libsql

Next, if you want to use decorators for your entity definition, you will need to enable support for decorators as well as esModuleInterop in tsconfig.json via:

"experimentalDecorators": true,
"emitDecoratorMetadata": true,
"esModuleInterop": true,

Alternatively, you can use EntitySchema.
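
As a rough sketch, the same kind of User entity could be described with EntitySchema instead of decorators (simplified; the interface and property names are assumed from the example above):

import { EntitySchema } from '@mikro-orm/core';

export interface IUser {
  id: number;
  name: string;
}

// the same shape expressed via EntitySchema instead of decorators
export const UserSchema = new EntitySchema<IUser>({
  name: 'User',
  properties: {
    id: { type: 'number', primary: true },
    name: { type: 'string' },
  },
});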

Then call MikroORM.init as part of bootstrapping your app:

import { MikroORM } from '@mikro-orm/core';
import type { PostgreSqlDriver } from '@mikro-orm/postgresql'; // or any other SQL driver package

const orm = await MikroORM.init<PostgreSqlDriver>({
  entities: ['./dist/entities'], // path to your JS entities (dist), relative to `baseDir`
  dbName: 'my-db-name',
  type: 'postgresql',
});
console.log(orm.em); // access EntityManager via `em` property

To access driver specific methods like em.createQueryBuilder(), we need to specify the driver type when calling MikroORM.init(). Alternatively, we can cast orm.em to the EntityManager exported from the driver package:

import { EntityManager } from '@mikro-orm/postgresql';
const em = orm.em as EntityManager;
const qb = em.createQueryBuilder(...);

There are more ways to configure your entities; take a look at the installation page.

Read more about all the possible configuration options in the Advanced Configuration section.
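
As an illustration only, here is a sketch of the init call above extended with a few commonly used options (the extra options shown are general MikroORM configuration, not specific to this quick start):

import { MikroORM } from '@mikro-orm/core';
import type { PostgreSqlDriver } from '@mikro-orm/postgresql';

// illustrative only: a few commonly used options on top of the minimal config above
const orm = await MikroORM.init<PostgreSqlDriver>({
  entities: ['./dist/entities'],   // compiled JS entities
  entitiesTs: ['./src/entities'],  // TS source entities, used when running via ts-node
  dbName: 'my-db-name',
  type: 'postgresql',
  debug: true,                     // log executed queries
  migrations: { path: './migrations' },
});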

Then you will need to fork the entity manager for each request so that their identity maps will not collide. To do so, use the RequestContext helper:

import express from 'express';
import { RequestContext } from '@mikro-orm/core';

const app = express();

app.use((req, res, next) => {
  RequestContext.create(orm.em, next);
});

You should register this middleware as the last one just before request handlers and before any of your custom middleware that is using the ORM. There might be issues when you register it before request processing middleware like queryParser or bodyParser, so definitely register the context after them.
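
As a sketch of that ordering, building on the snippet above (the /users route and the User entity are purely illustrative):

app.use(express.json()); // body parsing middleware first
app.use((req, res, next) => RequestContext.create(orm.em, next)); // then the ORM context

// request handlers come last; each request now works with its own forked EntityManager
app.get('/users', async (req, res) => {
  const users = await orm.em.find(User, {});
  res.json(users);
});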

More info about RequestContext can be found in the docs.

Now you can start defining your entities (in one of the entities folders). This is how a simple entity can look in the mongo driver:

./entities/MongoBook.ts

import { Entity, PrimaryKey, SerializedPrimaryKey, Property, ManyToOne, ManyToMany, Collection } from '@mikro-orm/core';
import { ObjectId } from '@mikro-orm/mongodb';

@Entity()
export class MongoBook {

  @PrimaryKey()
  _id!: ObjectId;

  @SerializedPrimaryKey()
  id!: string;

  @Property()
  title!: string;

  @ManyToOne(() => Author)
  author!: Author;

  @ManyToMany(() => BookTag)
  tags = new Collection<BookTag>(this);

  constructor(title: string, author: Author) {
    this.title = title;
    this.author = author;
  }

}

For SQL drivers, you can use id: number PK:

./entities/SqlBook.ts

@Entity()
export class SqlBook {

  @PrimaryKey()
  id!: number;

}

Or if you want to use UUID primary keys:

./entities/UuidBook.ts

import { v4 } from 'uuid';

@Entity()
export class UuidBook {

  @PrimaryKey()
  uuid = v4();

}

More information can be found in the defining entities section of the docs.

When you have your entities defined, you can start using the ORM either via the EntityManager or via EntityRepositories.

To save entity state to the database, you need to persist it. Persist takes care of deciding whether to use insert or update and computes the appropriate change set. Entity references that are not persisted yet (do not have an identifier) will be cascade persisted automatically.

// use constructors in your entities for required parameters
const author = new Author('Jon Snow', 'snow@wall.st');
author.born = new Date();

const publisher = new Publisher('7K publisher');

const book1 = new Book('My Life on The Wall, part 1', author);
book1.publisher = publisher;
const book2 = new Book('My Life on The Wall, part 2', author);
book2.publisher = publisher;
const book3 = new Book('My Life on The Wall, part 3', author);
book3.publisher = publisher;

// just persist books, author and publisher will be automatically cascade persisted
await em.persistAndFlush([book1, book2, book3]);

To fetch entities from the database, you can use find() and findOne() of the EntityManager:

const authors = await em.find(Author, {}, { populate: ['books'] });

for (const author of authors) {
  console.log(author); // instance of Author entity
  console.log(author.name); // Jon Snow

  for (const book of author.books) { // iterating books collection
    console.log(book); // instance of Book entity
    console.log(book.title); // My Life on The Wall, part 1/2/3
  }
}

A more convenient way of fetching entities from the database is by using an EntityRepository, which carries the entity name, so you do not have to pass it to every find and findOne call:

const booksRepository = em.getRepository(Book);

const books = await booksRepository.find({ author: '...' }, { 
  populate: ['author'],
  limit: 1,
  offset: 2,
  orderBy: { title: QueryOrder.DESC },
});

console.log(books); // Loaded<Book, 'author'>[]

Take a look at the docs about working with the EntityManager or using an EntityRepository instead.

🤝 Contributing

Contributions, issues and feature requests are welcome. Please read CONTRIBUTING.md for details on the process for submitting pull requests to us.

Authors

👤 Martin Adámek

See also the list of contributors who participated in this project.

Show Your Support

Please ⭐️ this repository if this project helped you!

📝 License

Copyright © 2018 Martin Adámek.

This project is licensed under the MIT License - see the LICENSE file for details.

