qtorch-plus

Low-Precision Arithmetic Simulation in PyTorch - Extension for Posit and customized number formats

Version 0.2.0

QPyTorch+: Extending QPyTorch for the Posit format and more

Authors: minhhn2910@github, himeshi@github

Install

Install in developer mode:
git clone https://github.com/minhhn2910/QPyTorch.git
cd QPyTorch
pip install -e ./

Simple test to check that the C extension is working correctly:

python test.py

Important: if errors occur when running test.py, export the environment variables below to point to a writable build directory and/or CUDA_HOME; otherwise you may hit permission problems on a multi-user server.

export TORCH_EXTENSIONS_DIR=/[your-home-folder]/torch_extension
export CUDA_HOME=/[your cuda installation directory e.g. /usr/local/cuda-10.2]
python test.py

Functionality:

  • Posit format with round-to-nearest mode.
  • Scaling of values before and after conversion to/from posit (equivalent to an exponent bias when the scale is a power of 2).
    For example: value x -> x*scale -> Posit(x*scale) -> x
  • Tanh approximation with posit, including error correction:
    when x is in a posit format with es = 0, Sigmoid(x) ≈ (x XOR 0x8000) >> 2, and PositTanh(x) = 2 · Sigmoid(2x) − 1
  • More number formats (table lookup, log2 system, ...) and new rounding modes will be supported in future versions.

Currently under development; updates will add support for more number formats and schemes.
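The scaling idea above can be illustrated with a small sketch. Note this is not the qtorch-plus API: `quantize_stub` below is a hypothetical stand-in (a plain round-to-nearest fixed-point quantizer) used only to show why scaling by a power of 2 before quantizing, then dividing it back out, preserves small values better.

```python
def quantize_stub(x, frac_bits=4):
    # Hypothetical stand-in for a posit conversion: round-to-nearest
    # quantizer with `frac_bits` fractional bits (step = 2**-frac_bits).
    step = 2.0 ** -frac_bits
    return round(x / step) * step

def scaled_quantize(x, scale=4.0, frac_bits=4):
    # The scheme from the README: x -> x*scale -> Posit(x*scale) -> /scale.
    # A power-of-2 scale acts as an exponent bias, shifting x into a
    # region where the format has finer resolution.
    return quantize_stub(x * scale, frac_bits) / scale

x = 0.013
print(quantize_stub(x))     # rounds all the way down to 0.0
print(scaled_quantize(x))   # 0.015625, much closer to x
```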

Demo and tutorial:

  • The approximate Tanh function with posit is presented in examples/tutorial/test_posit_func.ipynb
  • Most functionality can be tested using the notebooks in the posit tutorials: ./examples/tutorial/
  • Notebook demonstrating Cifar10 training with vanilla Posit 8-bit: examples/tutorial/CIFAR10_Posit_Training_Example.ipynb
  • Demo of DCGAN Cifar10 training with Posit 8-bit: Google Colab Link
  • Demo of DCGAN Lsun inference using Posit 6-bit and approximate Tanh: Google Colab Link
  • Demo applying posit 6-bit and 8-bit to ALBERT for a Question Answering task: GoogleColab Demo
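The PositTanh construction mentioned above rests on the standard identity tanh(x) = 2 · sigmoid(2x) − 1. A quick stdlib-only check of that identity (the posit bit-level sigmoid approximation itself is not reproduced here):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh_via_sigmoid(x):
    # tanh(x) = 2 * sigmoid(2x) - 1: the identity PositTanh builds on,
    # with sigmoid replaced by the posit bit-shift approximation in the library.
    return 2.0 * sigmoid(2.0 * x) - 1.0

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    assert abs(tanh_via_sigmoid(x) - math.tanh(x)) < 1e-12
```

With exact sigmoid the identity is exact; in the library the error introduced by the approximate sigmoid is what the "correction of error" step addresses.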

If you find this repo useful, please cite our paper(s) listed below. The papers also explain the terminology and usage of the posit enhancements (exponent bias and the tanh function).

@inproceedings{ho2021posit,
  title={Posit Arithmetic for the Training and Deployment of Generative Adversarial Networks},
  author={Ho, Nhut-Minh and Nguyen, Duy-Thanh and De Silva, Himeshi and Gustafson, John L and Wong, Weng-Fai and Chang, Ik Joon},
  booktitle={2021 Design, Automation \& Test in Europe Conference \& Exhibition (DATE)},
  pages={1350--1355},
  year={2021},
  organization={IEEE}
}


The original QPyTorch package, which supports floating-point and fixed-point formats:

The original README file is in README.original.md

Credit to the QPyTorch team and their original publication:

@misc{zhang2019qpytorch,
    title={QPyTorch: A Low-Precision Arithmetic Simulation Framework},
    author={Tianyi Zhang and Zhiqiu Lin and Guandao Yang and Christopher De Sa},
    year={2019},
    eprint={1910.04540},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
QPyTorch Team
