tensorflow-io-gcs-filesystem-nightly 0.31.0.dev20230309180344 (TensorFlow IO)
TensorFlow I/O


TensorFlow I/O is a collection of file systems and file formats that are not available in TensorFlow's built-in support. A full list of the file systems and file formats supported by TensorFlow I/O can be found in the official documentation.

Using tensorflow-io with Keras is straightforward. Below is the Get Started with TensorFlow example, with the data processing step replaced by tensorflow-io:

import tensorflow as tf
import tensorflow_io as tfio

# Read the MNIST data into the IODataset.
dataset_url = "https://storage.googleapis.com/cvdf-datasets/mnist/"
d_train = tfio.IODataset.from_mnist(
    dataset_url + "train-images-idx3-ubyte.gz",
    dataset_url + "train-labels-idx1-ubyte.gz",
)

# Shuffle the elements of the dataset.
d_train = d_train.shuffle(buffer_size=1024)

# By default image data is uint8, so convert to float32 using map().
d_train = d_train.map(lambda x, y: (tf.image.convert_image_dtype(x, tf.float32), y))

# Batch the data just like any other tf.data.Dataset.
d_train = d_train.batch(32)

# Build the model.
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ]
)

# Compile the model.
model.compile(
    optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
)

# Fit the model.
model.fit(d_train, epochs=5, steps_per_epoch=200)

In the above MNIST example, the URLs used to access the dataset files are passed directly to the tfio.IODataset.from_mnist API call. This is possible because tensorflow-io provides built-in support for the HTTP/HTTPS file system, which eliminates the need to download and save datasets to a local directory.

NOTE: Since tensorflow-io is able to detect and uncompress the MNIST dataset automatically if needed, we can pass the URLs of the compressed (gzip) files to the API call as is.
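The same HTTP/HTTPS file system support can also be used outside of the MNIST helper. The sketch below is a minimal illustration rather than an official recipe: it reads the first bytes of one of the remote files above with tf.io.gfile after importing tensorflow_io (which registers the HTTP/HTTPS file system); the URL is taken from the example above and everything else is assumed.

import tensorflow as tf
import tensorflow_io as tfio  # noqa: F401 - importing registers the HTTP/HTTPS file system

# Illustrative only: read a few bytes of a remote gzip file directly over HTTPS,
# without downloading it to a local directory first.
remote_file = (
    "https://storage.googleapis.com/cvdf-datasets/mnist/train-images-idx3-ubyte.gz"
)
with tf.io.gfile.GFile(remote_file, "rb") as f:
    header = f.read(4)
print(len(header), "bytes read; the first two should be the gzip magic bytes:", header[:2])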

Please check the official documentation for more detailed and interesting usages of the package.

Installation

Python Package

The tensorflow-io Python package can be installed with pip directly using:

$ pip install tensorflow-io

People who are a little more adventurous can also try our nightly binaries:

$ pip install tensorflow-io-nightly

To ensure you have a version of TensorFlow that is compatible with TensorFlow-IO, you can specify the tensorflow extra requirement during install:

$ pip install tensorflow-io[tensorflow]

Similar extras exist for the tensorflow-gpu, tensorflow-cpu and tensorflow-rocm packages.
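As a quick sanity check after installation, a minimal sketch such as the one below can print the installed versions so they can be compared against the compatibility table further down. importlib.metadata is used here purely for illustration; adjust the distribution name if you installed tensorflow-io-nightly instead of tensorflow-io.

import importlib.metadata

import tensorflow as tf

# The installed pair should match the compatibility table below,
# e.g. tensorflow-io 0.31.0 pairs with TensorFlow 2.11.x.
# Use "tensorflow-io-nightly" here if that is the package you installed.
print("tensorflow   :", tf.__version__)
print("tensorflow-io:", importlib.metadata.version("tensorflow-io"))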

Docker Images

In addition to the pip packages, Docker images can be used to quickly get started.

For stable builds:

$ docker pull tfsigio/tfio:latest
$ docker run -it --rm --name tfio-latest tfsigio/tfio:latest

For nightly builds:

$ docker pull tfsigio/tfio:nightly
$ docker run -it --rm --name tfio-nightly tfsigio/tfio:nightly

R Package

Once the tensorflow-io Python package has been successfully installed, you can install the development version of the R package from GitHub via the following:

if (!require("remotes")) install.packages("remotes")
remotes::install_github("tensorflow/io", subdir = "R-package")

TensorFlow Version Compatibility

To ensure compatibility with TensorFlow, it is recommended to install a matching version of TensorFlow I/O according to the table below. You can find the list of releases on the project's releases page.

| TensorFlow I/O Version | TensorFlow Compatibility | Release Date |
| --- | --- | --- |
| 0.31.0 | 2.11.x | Feb 25, 2023 |
| 0.30.0 | 2.11.x | Jan 20, 2023 |
| 0.29.0 | 2.11.x | Dec 18, 2022 |
| 0.28.0 | 2.11.x | Nov 21, 2022 |
| 0.27.0 | 2.10.x | Sep 08, 2022 |
| 0.26.0 | 2.9.x | May 17, 2022 |
| 0.25.0 | 2.8.x | Apr 19, 2022 |
| 0.24.0 | 2.8.x | Feb 04, 2022 |
| 0.23.1 | 2.7.x | Dec 15, 2021 |
| 0.23.0 | 2.7.x | Dec 14, 2021 |
| 0.22.0 | 2.7.x | Nov 10, 2021 |
| 0.21.0 | 2.6.x | Sep 12, 2021 |
| 0.20.0 | 2.6.x | Aug 11, 2021 |
| 0.19.1 | 2.5.x | Jul 25, 2021 |
| 0.19.0 | 2.5.x | Jun 25, 2021 |
| 0.18.0 | 2.5.x | May 13, 2021 |
| 0.17.1 | 2.4.x | Apr 16, 2021 |
| 0.17.0 | 2.4.x | Dec 14, 2020 |
| 0.16.0 | 2.3.x | Oct 23, 2020 |
| 0.15.0 | 2.3.x | Aug 03, 2020 |
| 0.14.0 | 2.2.x | Jul 08, 2020 |
| 0.13.0 | 2.2.x | May 10, 2020 |
| 0.12.0 | 2.1.x | Feb 28, 2020 |
| 0.11.0 | 2.1.x | Jan 10, 2020 |
| 0.10.0 | 2.0.x | Dec 05, 2019 |
| 0.9.1 | 2.0.x | Nov 15, 2019 |
| 0.9.0 | 2.0.x | Oct 18, 2019 |
| 0.8.1 | 1.15.x | Nov 15, 2019 |
| 0.8.0 | 1.15.x | Oct 17, 2019 |
| 0.7.2 | 1.14.x | Nov 15, 2019 |
| 0.7.1 | 1.14.x | Oct 18, 2019 |
| 0.7.0 | 1.14.x | Jul 14, 2019 |
| 0.6.0 | 1.13.x | May 29, 2019 |
| 0.5.0 | 1.13.x | Apr 12, 2019 |
| 0.4.0 | 1.13.x | Mar 01, 2019 |
| 0.3.0 | 1.12.0 | Feb 15, 2019 |
| 0.2.0 | 1.12.0 | Jan 29, 2019 |
| 0.1.0 | 1.12.0 | Dec 16, 2018 |

Performance Benchmarking

We use GitHub Pages to document the results of API performance benchmarks. The benchmark job is triggered on every commit to the master branch and facilitates tracking performance with respect to commits.

Contributing

TensorFlow I/O is a community-led open source project. As such, the project depends on public contributions, bug fixes, and documentation. Please see:

  • contribution guidelines for a guide on how to contribute.
  • development doc for instructions on the development environment setup.
  • tutorials for a list of tutorial notebooks and instructions on how to write one.

Build Status and CI

| Build | Status |
| --- | --- |
| Linux CPU Python 2 | Status |
| Linux CPU Python 3 | Status |
| Linux GPU Python 2 | Status |
| Linux GPU Python 3 | Status |

Because of the manylinux2010 requirement, TensorFlow I/O is built with Ubuntu 16.04 + Developer Toolset 7 (GCC 7.3) on Linux. Configuring Ubuntu 16.04 with Developer Toolset 7 is not exactly straightforward. If the system has Docker installed, the following commands will automatically build a manylinux2010-compatible whl package:

#!/usr/bin/env bash

# List the freshly built wheels.
ls dist/*
# Repair each wheel inside the manylinux2010 container so it is tagged as
# manylinux2010-compatible.
for f in dist/*.whl; do
  docker run -i --rm -v $PWD:/v -w /v --net=host quay.io/pypa/manylinux2010_x86_64 bash -x -e /v/tools/build/auditwheel repair --plat manylinux2010_x86_64 $f
done
# The container runs as root, so reclaim ownership of the output files.
sudo chown -R $(id -nu):$(id -ng) .
# The repaired wheels are placed in the wheelhouse directory.
ls wheelhouse/*

It takes some time to build, but once complete, there will be Python 3.5, 3.6, and 3.7 compatible whl packages available in the wheelhouse directory.

On macOS, the same command can be used. However, the script expects python in the shell and will only generate a whl package that matches the version of python in the shell. If you want to build a whl package for a specific Python version, you have to alias that version of Python to python in the shell. See the .github/workflows/build.yml Auditwheel step for instructions on how to do that.

Note that the above command is also the command we use when releasing packages for Linux and macOS.

TensorFlow I/O uses both GitHub Workflows and Google CI (Kokoro) for continuous integration. GitHub Workflows is used for macOS builds and tests; Kokoro is used for Linux builds and tests. Again, because of the manylinux2010 requirement, on Linux whl packages are always built with Ubuntu 16.04 + Developer Toolset 7. Tests are done on a variety of systems with different Python 3 versions to ensure good coverage:

| Python | Ubuntu 18.04 | Ubuntu 20.04 | macOS + osx9 | Windows-2019 |
| --- | --- | --- | --- | --- |
| 2.7 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | N/A |
| 3.7 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| 3.8 | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |

TensorFlow I/O has integrations with many systems and cloud vendors such as Prometheus, Apache Kafka, Apache Ignite, Google Cloud PubSub, AWS Kinesis, Microsoft Azure Storage, Alibaba Cloud OSS, etc.
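As one illustration of such an integration, the sketch below streams records from a Kafka topic into a dataset via tfio.IODataset.from_kafka. This is only a hedged example: the topic name and broker address are placeholders, it assumes a locally running Kafka broker with data already in the topic, and the exact element structure and keyword arguments should be checked against the official Kafka tutorial.

import tensorflow_io as tfio

# Placeholder values: "demo-topic" and localhost:9092 assume a local Kafka
# broker that already contains messages in that topic.
kafka_ds = tfio.IODataset.from_kafka(
    "demo-topic", partition=0, offset=0, servers="localhost:9092"
)

# Inspect a handful of records; each element carries the message payload
# (and key) read from the topic.
for item in kafka_ds.take(5):
    print(item)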

We tried our best to test against those systems in our continuous integration whenever possible. Some tests, such as Prometheus, Kafka, and Ignite, are done with live systems, meaning we install Prometheus/Kafka/Ignite on the CI machine before the test is run. Some tests, such as Kinesis, PubSub, and Azure Storage, are done through official or non-official emulators. Offline tests are also performed whenever possible, though systems covered through offline tests may not have the same level of coverage as live systems or emulators.

| | Live System | Emulator | CI Integration | Offline |
| --- | --- | --- | --- | --- |
| Apache Kafka | :heavy_check_mark: | | :heavy_check_mark: | |
| Apache Ignite | :heavy_check_mark: | | :heavy_check_mark: | |
| Prometheus | :heavy_check_mark: | | :heavy_check_mark: | |
| Google PubSub | | :heavy_check_mark: | :heavy_check_mark: | |
| Azure Storage | | :heavy_check_mark: | :heavy_check_mark: | |
| AWS Kinesis | | :heavy_check_mark: | :heavy_check_mark: | |
| Alibaba Cloud OSS | | | | :heavy_check_mark: |
| Google BigTable/BigQuery | to be added | | | |
| Elasticsearch (experimental) | :heavy_check_mark: | | :heavy_check_mark: | |
| MongoDB (experimental) | :heavy_check_mark: | | :heavy_check_mark: | |

References for emulators:

Community

Additional Information

License

Apache License 2.0
