@hms-dbmi/vizarr

  • Version: 0.1.2 (published on npm)
  • Maintainers: 4
  • Published versions: 10
  • Weekly downloads: 17 (up 70%)

vizarr

[Badges: Binder · launch ImJoy · Open In Colab]

Multiscale OME-Zarr in Jupyter Notebook with Vizarr

Vizarr is a minimal, purely client-side application for viewing Zarr-based images. It is built with Viv and exposes a Python API via imjoy-rpc, allowing users to programmatically view multiplex and multiscale images from within a Jupyter Notebook. The ImJoy plugin registers a codec for Python zarr.Array and zarr.Group objects, enabling Viv to securely and lazily request chunks via Zarr.js. This means that any valid zarr-python store can be viewed remotely with Viv, enabling flexible workflows when working with large datasets.
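As a sketch of that notebook-side API, the snippet below builds a small multichannel image and the kind of image config handed to the ImJoy plugin. The `run_vizarr` helper name and the config keys are taken from the example notebooks and should be treated as assumptions rather than a documented stable API; a plain NumPy array stands in here for a real zarr.Array.

```python
import numpy as np

# A small multiplex image laid out as (channel, y, x). A real workflow
# would wrap this in a zarr.Array or zarr.Group so Viv can request
# chunks lazily over imjoy-rpc instead of loading everything up front.
img = np.random.randint(0, 2**16, size=(4, 512, 512), dtype="uint16")

# Image config as passed to the plugin from a Jupyter Notebook cell.
# `run_vizarr` and the keys below mirror the example notebooks and are
# assumptions, not a guaranteed interface.
config = {"source": img, "name": "multiplex example"}
# run_vizarr(config)  # would open the Viv-based viewer in the notebook

print(config["source"].shape)  # (4, 512, 512)
```

In a real session the `source` would be a zarr-backed store, so only the chunks Viv actually displays are ever read.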

Remote image registration workflow

We created Vizarr to enhance interactive multimodal image alignment using the wsireg library. We describe a rapid workflow in which registration methods can be compared and alignment visually verified remotely, leveraging high-performance computing resources for fast image processing and Viv for interactive web-based visualization on a laptop. The Jupyter Notebook containing the workflow described in the manuscript can be found in multimodal_registration_vizarr.ipynb. For more information, please read our preprint: doi:10.31219/osf.io/wd2gu.

Data types

Vizarr supports viewing 2D slices of n-dimensional Zarr arrays, allowing users to choose a single channel or blended composites of multiple channels during analysis. It has special support for the developing OME-Zarr format for multiscale and multimodal images. Currently, Viv supports u1, u2, u4, and f4 arrays, but contributions to support more np.dtypes are welcome!
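That dtype constraint can be expressed as a small check. The typestrings below correspond to the u1/u2/u4/f4 families named above; the helper function itself is illustrative and not part of Vizarr.

```python
import numpy as np

# Zarr typestrings Viv can currently render: u1, u2, u4, f4
# (little-endian; u1 is byte-order-agnostic, hence the "|" prefix).
SUPPORTED_TYPESTRS = {"|u1", "<u2", "<u4", "<f4"}

def is_viv_viewable(arr: np.ndarray) -> bool:
    """Illustrative check: does this array's dtype map to a supported typestr?"""
    return arr.dtype.str in SUPPORTED_TYPESTRS

print(is_viv_viewable(np.zeros((8, 8), dtype="uint16")))   # True
print(is_viv_viewable(np.zeros((8, 8), dtype="float64")))  # False
```

An `f8` array, for example, would need to be cast to `f4` before Viv could display it.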

Getting started

The easiest way to get started with vizarr is to clone this repository and open one of the example Jupyter Notebooks.
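Concretely, that looks like the following; the repository and notebook path are taken from the Binder badge, which points at example/getting_started.ipynb.

```shell
# Clone the repository and open the getting-started notebook locally.
git clone https://github.com/hms-dbmi/vizarr.git
cd vizarr
# Launch Jupyter on the notebook referenced by the Binder badge.
jupyter notebook example/getting_started.ipynb
```

Alternatively, the Binder badge above runs the same notebook in the browser with no local setup.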

Limitations

vizarr was built to support the registration use case above, in which multiple pyramidal OME-Zarr images are viewed within a Jupyter Notebook. Other Zarr arrays are supported but less well tested. More information on viewing generic Zarr arrays can be found in the example notebooks.


Package last updated on 26 Nov 2020
