.. image:: img/ONT_logo.png
   :width: 800
   :alt: Oxford Nanopore Technologies logo
``ont_fast5_api`` is a simple interface to HDF5 files of the Oxford Nanopore
.fast5 file format.

It provides tools for working with, and converting between, the
``multi_read`` and ``single_read`` formats.

The ``ont_fast5_api`` is available on PyPI and can be installed via pip::

    pip install ont-fast5-api
Alternatively, it is available on github where it can be built from source::

    git clone https://github.com/nanoporetech/ont_fast5_api
    pip install ./ont_fast5_api
``ont_fast5_api`` is a pure python project and should run on most python
versions and operating systems.

It requires:

- `h5py <http://www.h5py.org>`_: 2.6 or higher
- `NumPy <https://www.numpy.org>`_: 1.11 or higher
- `six <https://github.com/benjaminp/six>`_: 1.10 or higher
- `progressbar33 <https://github.com/germangh/python-progressbar>`_: 2.3.1 or higher

The ont_fast5_api provides a simple interface to access the data structures in
.fast5 files of either single- or multi-read format using the same method calls.
For example, to print the raw data from all reads in a file::
    from ont_fast5_api.fast5_interface import get_fast5_file

    def print_all_raw_data():
        fast5_filepath = "test/data/single_reads/read0.fast5"  # This can be a single- or multi-read file
        with get_fast5_file(fast5_filepath, mode="r") as f5:
            for read in f5.get_reads():
                raw_data = read.get_raw_data()
                print(read.read_id, raw_data)
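If the read_id of interest is already known, the same interface can fetch it
directly. Below is a minimal sketch, assuming the ``get_read`` accessor and the
``scale`` keyword of ``get_raw_data`` behave as in recent ont_fast5_api
releases (the read_id value is a placeholder)::

    from ont_fast5_api.fast5_interface import get_fast5_file

    def print_one_read(fast5_filepath, read_id):
        with get_fast5_file(fast5_filepath, mode="r") as f5:
            # Look the read up directly by its read_id
            read = f5.get_read(read_id)
            # scale=True is assumed to convert the raw signal from
            # DAQ units to picoamps, as in recent releases
            raw_data = read.get_raw_data(scale=True)
            print(read_id, raw_data.shape)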
The ``ont_fast5_api`` provides terminal/command-line ``console_scripts`` for
converting between files in the Oxford Nanopore ``single_read`` and
``multi_read`` .fast5 file formats. These are provided to ensure compatibility
between tools which expect either the ``single_read`` or ``multi_read``
.fast5 file formats.

The scripts are added during installation and can be called from the
terminal/command-line or from within python.
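For example, one way to drive the console scripts from within python is via
the standard library; a minimal sketch using ``subprocess`` with the flags
documented in the sections below (the paths are placeholders)::

    import subprocess

    # Equivalent to running the console script from the terminal
    subprocess.run(
        ["single_to_multi_fast5",
         "--input_path", "/data/reads",       # folder of single_read fast5 files
         "--save_path", "/data/multi_reads",  # output folder for multi_read files
         "--batch_size", "100"],
        check=True,
    )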
This script converts folders containing ``single_read_fast5`` files into
``multi_read_fast5_files``::

    single_to_multi_fast5
        [required]
            -i, --input_path INPUT_PATH <(path) folder containing single_read_fast5 files>
            -s, --save_path SAVE_PATH <(path) to folder where multi_read fast5 files will be output>
        [optional]
            -t, --threads THREADS <(int) number of CPU threads to use; default=1>
            -f, --filename_base FILENAME_BASE <(string) name for new multi_read file; default="batch" (see note-1)>
            -n, --batch_size BATCH_SIZE <(int) number of single_reads to include in each multi_read file; default=4000>
            --recursive <if included, recursively search sub-directories for single_read files>
note-1: newly created ``multi_read`` files require a name. This is the
``filename_base`` with the batch count and ``.fast5`` appended to it; e.g.
``-f batch`` yields ``batch_0.fast5``, ``batch_1.fast5``, ...
example usage::

    single_to_multi_fast5 --input_path /data/reads --save_path /data/multi_reads
        --filename_base batch_output --batch_size 100 --recursive
Where ``/data/reads`` and/or its subfolders contain ``single_read`` .fast5
files. The output will be ``multi_read`` fast5 files, each containing 100
reads, in the folder ``/data/multi_reads`` with the names
``batch_output_0.fast5``, ``batch_output_1.fast5``, etc.
This script converts folders containing ``multi_read_fast5`` files into
``single_read_fast5`` files::

    multi_to_single_fast5
        [required]
            -i, --input_path INPUT_PATH <(path) folder containing multi_read_fast5 files>
            -s, --save_path SAVE_PATH <(path) to folder where single_read fast5 files will be output>
        [optional]
            -t, --threads THREADS <(int) number of CPU threads to use; default=1>
            --recursive <if included, recursively search sub-directories for multi_read files>
example usage::

    multi_to_single_fast5 --input_path /data/multi_reads --save_path /data/single_reads
        --recursive
Where ``/data/multi_reads`` and/or its subfolders contain ``multi_read``
.fast5 files. The output will be ``single_read`` .fast5 files in the folder
``/data/single_reads``, with one subfolder per ``multi_read`` input file.
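One way to sanity-check a conversion is to count reads on both sides using the
same ``get_fast5_file`` interface shown earlier, which works for single_read
and multi_read files alike. A minimal sketch (the folder paths are
placeholders)::

    from pathlib import Path

    from ont_fast5_api.fast5_interface import get_fast5_file

    def count_reads(folder):
        # get_reads() is available through the common interface, so the
        # same count can be made before and after a conversion
        total = 0
        for path in Path(folder).rglob("*.fast5"):
            with get_fast5_file(str(path), mode="r") as f5:
                total += sum(1 for _ in f5.get_reads())
        return total

    # e.g. compare the folders from the example above
    # assert count_reads("/data/multi_reads") == count_reads("/data/single_reads")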
This script extracts reads from ``multi_read_fast5_file(s)`` based on a list
of read_ids::

    fast5_subset
        [required]
            -i, --input INPUT_PATH <(path) to folder containing multi_read_fast5 files or an individual multi_read_fast5 file>
            -s, --save_path SAVE_PATH <(path) to folder where multi_read fast5 files will be output>
            -l, --read_id_list SUMMARY_PATH <(file) either sequencing_summary.txt file or a file containing a list of read_ids>
        [optional]
            -f, --filename_base FILENAME_BASE <(string) name for new multi_read file; default="batch" (see note-1)>
            -n, --batch_size BATCH_SIZE <(int) number of single_reads to include in each multi_read file; default=4000>
            --recursive <if included, recursively search sub-directories for multi_read files>
example usage::

    fast5_subset --input /data/multi_reads --save_path /data/subset
        --read_id_list read_id_list.txt --batch_size 100 --recursive
Where ``/data/multi_reads`` and/or its subfolders contain ``multi_read``
.fast5 files and ``read_id_list.txt`` is a text file either containing one
read_id per line or a tsv file with a column named ``read_id``. The output
will be ``multi_read`` .fast5 files, each containing 100 reads, in the folder
``/data/subset`` with the names ``batch_0.fast5``, ``batch_1.fast5``, etc.
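The read_id list itself is easy to generate programmatically. A minimal sketch
that pulls the ids from a sequencing summary TSV (assumed to have a
``read_id`` column) and writes the one-id-per-line form accepted by
``fast5_subset``; the file names are placeholders::

    import csv

    # Collect read_ids from a tsv with a column named "read_id"
    with open("sequencing_summary.txt", newline="") as tsv:
        read_ids = [row["read_id"] for row in csv.DictReader(tsv, delimiter="\t")]

    # Write one read_id per line for use with --read_id_list
    with open("read_id_list.txt", "w") as out:
        out.write("\n".join(read_ids) + "\n")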
This script demultiplexes reads from ``multi_read_fast5_file(s)``, extracting
reads into multiple directories based on a column value in a summary file::

    demux_fast5
        [required]
            -i, --input INPUT_PATH <Path to Fast5 file or directory of Fast5 files>
            -s, --save_path SAVE_PATH <Directory to output MultiRead subsets>
            -l, --summary_file SUMMARY_PATH <TSV file containing read_id and demultiplex columns>
        [optional]
            --read_id_column COLUMN_NAME <Name of read_id column in summary file (default 'read_id')>
            --demultiplex_column COLUMN_NAME <Name of column for demultiplexing in summary file (default 'barcoding_arrangement')>
            -f, --filename_base FILENAME_BASE <Root of output filename, default='batch' -> 'batch_0.fast5'>
            -n, --batch_size BATCH_SIZE <Number of reads per multi-read file, default 4000>
            -t, --threads THREADS <Maximum number of processes to use>
            -r, --recursive <Flag to search recursively through input directory for MultiRead fast5 files>
            --ignore_symlinks <Ignore symlinks when searching recursively for fast5 files>
            -c, --compression COMPRESSION <Target output compression type (vbz, vbz_legacy_v0, gzip, None)>
The intended use is for multiplexed experiments, where reads carry different
barcodes or come from different genomes.
example usage::

    demux_fast5 --input /data/multi_reads --save_path /data/demultiplexed_reads --summary_file barcoding_summary.txt
Where ``/data/multi_reads`` and/or its subfolders contain fast5 files from a
multiplexed experiment and ``barcoding_summary.txt`` is the output of
guppy_barcoder. ``/data/demultiplexed_reads`` will contain a directory per
barcode, containing ``multi_read`` .fast5 files with the names
``/data/demultiplexed_reads/barcode01/batch_0.fast5``,
``/data/demultiplexed_reads/barcode02/batch_0.fast5``, etc. Directories are
named by the values in the demultiplex column.
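To preview how reads will be split before running the script, the summary can
be tallied by the demultiplex column. A minimal sketch using the default
column name from the options above (the summary file name is a placeholder)::

    import csv
    from collections import Counter

    # Count reads per value of the demultiplex column; each distinct
    # value becomes an output subdirectory of --save_path
    with open("barcoding_summary.txt", newline="") as tsv:
        counts = Counter(row["barcoding_arrangement"]
                         for row in csv.DictReader(tsv, delimiter="\t"))

    for barcode, n_reads in sorted(counts.items()):
        print(barcode, n_reads)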
This script copies and converts raw data between ``vbz`` and ``gzip``
compression formats::

    compress_fast5
        [required]
            -i, --input_path INPUT_PATH <(path) folder containing multi_read_fast5 files>
            -s, --save_path SAVE_PATH <(path) to folder where compressed fast5 files will be output>
            -c, --compression COMPRESSION <(str) [vbz, gzip] target compression format>
        [optional]
            -t, --threads THREADS <(int) number of CPU threads to use; default=1>
            --recursive <if included, recursively search sub-directories for fast5 files>
            --sanitize <flag to remove optional groups (such as basecalling and modified base information)>
example usage::

    compress_fast5 --input_path /data/uncompressed_reads --save_path /data/compressed_reads
        --compression vbz --recursive --threads 40
Where ``/data/uncompressed_reads`` and/or its subfolders contain .fast5 files.
The output will be a copy of the input folder structure containing compressed
reads, preserving both the folder structure and file type.

The optional ``--sanitize`` flag can be used to greatly reduce file size when
files contain optional data from the Guppy basecaller that could, in
principle, be regenerated by running Guppy. The files output when using the
``--sanitize`` option will be identical in structure to those output by
MinKNOW when live basecalling is disabled.

NB: ``compress_fast5`` will copy .fast5 files in order to compress them, due
to HDF5 implementation constraints. Further detail on HDF5 data management
strategies can be found at:
https://support.hdfgroup.org/HDF5/doc/Advanced/FileSpaceManagement/FileSpaceManagement.pdf
VBZ compression is a compression algorithm developed by Oxford Nanopore to
reduce file size and improve read/write performance when handling raw data in
Fast5 files. Previously, the default compression was GZIP; compared to GZIP,
VBZ gives a compression improvement of >30% and a CPU performance improvement
of >10x for compression and >5x for decompression. Further details of the
implementation and benchmarks can be found here:
https://github.com/nanoporetech/vbz_compression
Benchmarking the performance of compression within the ont_fast5_api against a
normal file copy showed that compressing from ``gzip`` to ``vbz`` was
approximately 2x slower than copying files. In other words, if it would take
two hours to copy a set of files from an input folder to an output folder,
then it should take four hours to compress those files with VBZ. Running the
script without changing compression (i.e. the same compression type in and
out; gzip->gzip) was approximately 2x faster than a file copy, since it can
utilise multiple threads.
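The rule of thumb above can be turned into a quick estimate; a minimal sketch
encoding the approximate 2x factors reported by this benchmark::

    def estimate_runtime_hours(copy_time_hours, recompressing=True):
        # gzip -> vbz was ~2x slower than a plain file copy;
        # a same-format pass (e.g. gzip -> gzip) was ~2x faster
        factor = 2.0 if recompressing else 0.5
        return copy_time_hours * factor

    # Copying takes 2 hours, so compressing to vbz should take ~4 hours
    print(estimate_runtime_hours(2.0))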
- HDF5 file format - a portable file format for storing and managing data. It
  is designed for flexible and efficient I/O and for high volume and complex
  data.
- Fast5 - an implementation of the HDF5 file format, with specific data
  schemas for Oxford Nanopore sequencing data.
- Single read fast5 - a fast5 file containing all the data pertaining to a
  single Oxford Nanopore read. This may include raw signal data, run metadata,
  fastq-basecalls and any other additional analyses.
- Multi read fast5 - a fast5 file containing data pertaining to multiple
  Oxford Nanopore reads.
- Demultiplexing - a process of separating the reads of an experiment in which
  multiple samples were mixed together (multiplexed) into their corresponding
  samples. Demultiplexing is based on markers that identify sample origin,
  e.g. unique barcodes or alignment to a reference genome.