tensorlayer (pypi package): comparing version 2.2.1 to 2.2.2 (+12 / -44)
PKG-INFO
Metadata-Version: 2.1
Name: tensorlayer
Version: 2.2.1
Version: 2.2.2
Summary: High Level Tensorflow Deep Learning Library for Researcher and Engineer.

@@ -31,49 +31,17 @@ Home-page: https://github.com/tensorlayer/tensorlayer

Why another deep learning library: TensorLayer
==============================================
Design Features
=================
As deep learning practitioners, we have been looking for a library that
can address various development purposes. This library is easy to adopt
by providing diverse examples, tutorials and pre-trained models. Also,
it allows users to easily fine-tune TensorFlow, while being suitable for
production deployment. TensorLayer aims to satisfy all these purposes.
It has three key features:
TensorLayer is a new deep learning library designed with simplicity, flexibility and high performance in mind.
- **Simplicity** : TensorLayer lifts the low-level dataflow interface
of TensorFlow to *high-level* layers / models. It is very easy to
learn through the rich `example
codes <https://github.com/tensorlayer/awesome-tensorlayer>`__
contributed by a wide community.
- **Flexibility** : TensorLayer APIs are transparent: they do not
mask TensorFlow from users, but leave massive hooks that help with
*low-level tuning* and *deep customization*.
- **Zero-cost Abstraction** : TensorLayer can achieve the *full
power* of TensorFlow. The following table shows the training speeds
of classic models using TensorLayer and native TensorFlow on a Titan
X Pascal GPU.
- **Simplicity** : TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive [examples](https://github.com/tensorlayer/awesome-tensorlayer).
- **Flexibility** : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
- **Zero-cost Abstraction** : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).
+---------------+-----------------+-----------------+-----------------+
| | CIFAR-10 | PTB LSTM | Word2Vec |
+===============+=================+=================+=================+
| TensorLayer | 2528 images/s | 18063 words/s | 58167 words/s |
+---------------+-----------------+-----------------+-----------------+
| TensorFlow | 2530 images/s | 18075 words/s | 58181 words/s |
+---------------+-----------------+-----------------+-----------------+
TensorLayer stands at a unique spot in the TensorFlow wrappers. Other wrappers like Keras and TFLearn
hide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic,
making it easy to learn while being flexible enough to cope with complex AI tasks.
TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University,
Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.
TensorLayer stands at a unique spot in the library landscape. Other
wrapper libraries like Keras and TFLearn also provide high-level
abstractions. They, however, often hide the underlying engine from
users, which makes them hard to customize and fine-tune. By contrast,
TensorLayer APIs are generally flexible and transparent. Users often
find it easy to start with the examples and tutorials, and then dive
into TensorFlow seamlessly. In addition, TensorLayer avoids
library lock-in by natively supporting the import of components from
Keras, TFSlim and TFLearn.
TensorLayer has fast-growing usage among top researchers and
engineers, from universities like Imperial College London, UC Berkeley,
Carnegie Mellon University, Stanford University, and University of
Technology of Compiegne (UTC), and companies like Google, Microsoft,
Alibaba, Tencent, Xiaomi, and Bloomberg.
Install

@@ -80,0 +48,0 @@ =======

@@ -22,2 +22,3 @@ #! /usr/bin/python

'pixel_wise_softmax',
'mish',
]

@@ -343,2 +344,21 @@

def mish(x):
"""Mish activation function.
Reference: [Mish: A Self Regularized Non-Monotonic Neural Activation Function .Diganta Misra, 2019]<https://arxiv.org/abs/1908.08681>
Parameters
----------
x : Tensor
input.
Returns
-------
Tensor
A ``Tensor`` in the same type as ``x``.
"""
return x * tf.math.tanh(tf.math.softplus(x))
# Alias

@@ -345,0 +365,0 @@ lrelu = leaky_relu

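The `mish` hunk above adds exactly the formula from the paper, `x * tanh(softplus(x))`. A dependency-free sketch of the same function for spot-checking values (plain Python standing in for the TensorFlow ops):

```python
import math

def softplus(x):
    # log(1 + e^x), computed via log1p for small-x accuracy
    return math.log1p(math.exp(x))

def mish(x):
    # Mish (Misra, 2019): x * tanh(softplus(x))
    return x * math.tanh(softplus(x))

# mish(0) == 0 exactly, and mish(x) approaches x for large positive x
print(mish(0.0), mish(10.0))
```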
@@ -75,2 +75,4 @@ #! /usr/bin/python

#'load_graph_and_params',
'load_and_assign_ckpt',
'ckpt_to_npz_dict'
]

@@ -1,9 +0,10 @@

# /usr/bin/python
#! /usr/bin/python
# -*- coding: utf-8 -*-
import numpy as np
import tensorflow as tf
from tensorflow.python.training import moving_averages
import tensorlayer as tl
from tensorlayer import logging
from tensorlayer.decorators import deprecated_alias
from tensorlayer.layers.core import Layer

@@ -25,4 +26,2 @@ from tensorlayer.layers.utils import (quantize_active_overflow, quantize_weight_overflow)

----------
prev_layer : :class:`Layer`
Previous layer.
n_filter : int

@@ -55,14 +54,2 @@ The number of filters.

The bits of the output of previous layer
decay : float
A decay factor for `ExponentialMovingAverage`.
A large value is suggested for large datasets.
epsilon : float
Epsilon.
is_train : boolean
Is being used for training or inference.
beta_init : initializer or None
The initializer for initializing beta, if None, skip beta.
Usually you should not skip beta unless you know what you are doing.
gamma_init : initializer or None
The initializer for initializing gamma, if None, skip gamma.
use_gemm : boolean

@@ -74,6 +61,8 @@ If True, use gemm instead of ``tf.matmul`` for inferencing. (TODO).

The arguments for the weight matrix initializer.
use_cudnn_on_gpu : bool
Default is False.
data_format : str
"NHWC" or "NCHW", default is "NHWC".
dilation_rate : tuple of int
Specifying the dilation rate to use for dilated convolution.
in_channels : int
The number of in channels.
name : str

@@ -84,18 +73,12 @@ A unique layer name.

---------
>>> import tensorflow as tf
>>> import tensorlayer as tl
>>> x = tf.placeholder(tf.float32, [None, 256, 256, 3])
>>> net = tl.layers.InputLayer(x, name='input')
>>> net = tl.layers.QuanConv2dWithBN(net, 64, (5, 5), (1, 1), act=tf.nn.relu, padding='SAME', is_train=is_train, bitW=bitW, bitA=bitA, name='qcnnbn1')
>>> net = tl.layers.MaxPool2d(net, (3, 3), (2, 2), padding='SAME', name='pool1')
...
>>> net = tl.layers.QuanConv2dWithBN(net, 64, (5, 5), (1, 1), padding='SAME', act=tf.nn.relu, is_train=is_train, bitW=bitW, bitA=bitA, name='qcnnbn2')
>>> net = tl.layers.MaxPool2d(net, (3, 3), (2, 2), padding='SAME', name='pool2')
...
>>> net = tl.layers.Input([50, 256, 256, 3])
>>> layer = tl.layers.QuanConv2dWithBN(n_filter=64, filter_size=(5,5),strides=(1,1),padding='SAME',name='qcnnbn1')
>>> print(layer)
>>> net = tl.layers.QuanConv2dWithBN(n_filter=64, filter_size=(5,5),strides=(1,1),padding='SAME',name='qcnnbn1')(net)
>>> print(net)
"""
@deprecated_alias(layer='prev_layer', end_support_version=1.9) # TODO remove this line for the 1.9 release
def __init__(
self,
prev_layer,
n_filter=32,

@@ -109,15 +92,32 @@ filter_size=(3, 3),

is_train=False,
gamma_init=tf.compat.v1.initializers.ones,
beta_init=tf.compat.v1.initializers.zeros,
gamma_init=tl.initializers.truncated_normal(stddev=0.02),
beta_init=tl.initializers.truncated_normal(stddev=0.02),
bitW=8,
bitA=8,
use_gemm=False,
W_init=tf.compat.v1.initializers.truncated_normal(stddev=0.02),
W_init=tl.initializers.truncated_normal(stddev=0.02),
W_init_args=None,
use_cudnn_on_gpu=None,
data_format=None,
data_format="channels_last",
dilation_rate=(1, 1),
in_channels=None,
name='quan_cnn2d_bn',
):
super(QuanConv2dWithBN, self).__init__(prev_layer=prev_layer, act=act, W_init_args=W_init_args, name=name)
super(QuanConv2dWithBN, self).__init__(act=act, name=name)
self.n_filter = n_filter
self.filter_size = filter_size
self.strides = strides
self.padding = padding
self.decay = decay
self.epsilon = epsilon
self.is_train = is_train
self.gamma_init = gamma_init
self.beta_init = beta_init
self.bitW = bitW
self.bitA = bitA
self.use_gemm = use_gemm
self.W_init = W_init
self.W_init_args = W_init_args
self.data_format = data_format
self.dilation_rate = dilation_rate
self.in_channels = in_channels
logging.info(

@@ -130,4 +130,5 @@ "QuanConv2dWithBN %s: n_filter: %d filter_size: %s strides: %s pad: %s act: %s " % (

x = self.inputs
self.inputs = quantize_active_overflow(self.inputs, bitA) # Do not remove
if self.in_channels:
self.build(None)
self._built = True

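`quantize_active_overflow` and `quantize_weight_overflow` (imported from `tensorlayer.layers.utils` above) clamp and uniformly quantize tensors to `bitA` / `bitW` bits. Their exact definitions are not shown in this diff; as a rough, hypothetical illustration of uniform k-bit quantization only, not the package's actual implementation:

```python
import numpy as np

def quantize_k(x, k):
    # Hypothetical uniform k-bit quantizer on [0, 1]; the real
    # quantize_*_overflow helpers may scale and clip differently.
    n = float(2 ** k - 1)
    return np.round(np.clip(x, 0.0, 1.0) * n) / n

x = np.array([-0.2, 0.0, 0.26, 0.74, 1.3])
print(quantize_k(x, 2))  # values snap to the 2-bit grid {0, 1/3, 2/3, 1}
```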
@@ -140,94 +141,101 @@ if use_gemm:

try:
pre_channel = int(prev_layer.outputs.get_shape()[-1])
except Exception: # if pre_channel is ?, it happens when using Spatial Transformer Net
pre_channel = 1
logging.warning("[warnings] unknown input channels, set to 1")
def __repr__(self):
actstr = self.act.__name__ if self.act is not None else 'No Activation'
s = (
'{classname}(in_channels={in_channels}, out_channels={n_filter}, kernel_size={filter_size}'
', strides={strides}, padding={padding}' + actstr
)
if self.dilation_rate != (1, ) * len(self.dilation_rate):
s += ', dilation={dilation_rate}'
if self.name is not None:
s += ', name=\'{name}\''
s += ')'
return s.format(classname=self.__class__.__name__, **self.__dict__)
shape = (filter_size[0], filter_size[1], pre_channel, n_filter)
strides = (1, strides[0], strides[1], 1)
def build(self, inputs_shape):
if self.data_format == 'channels_last':
self.data_format = 'NHWC'
if self.in_channels is None:
self.in_channels = inputs_shape[-1]
self._strides = [1, self.strides[0], self.strides[1], 1]
self._dilation_rate = [1, self.dilation_rate[0], self.dilation_rate[1], 1]
elif self.data_format == 'channels_first':
self.data_format = 'NCHW'
if self.in_channels is None:
self.in_channels = inputs_shape[1]
self._strides = [1, 1, self.strides[0], self.strides[1]]
self._dilation_rate = [1, 1, self.dilation_rate[0], self.dilation_rate[1]]
else:
raise Exception("data_format should be either channels_last or channels_first")
with tf.compat.v1.variable_scope(name):
W = tf.compat.v1.get_variable(
name='W_conv2d', shape=shape, initializer=W_init, dtype=LayersConfig.tf_dtype, **self.W_init_args
)
self.filter_shape = (self.filter_size[0], self.filter_size[1], self.in_channels, self.n_filter)
self.W = self._get_weights("filters", shape=self.filter_shape, init=self.W_init)
conv = tf.nn.conv2d(
x, W, strides=strides, padding=padding, use_cudnn_on_gpu=use_cudnn_on_gpu, data_format=data_format
para_bn_shape = (self.n_filter, )
if self.gamma_init:
self.scale_para = self._get_weights(
"scale_para", shape=para_bn_shape, init=self.gamma_init, trainable=self.is_train
)
else:
self.scale_para = None
para_bn_shape = conv.get_shape()[-1:]
if gamma_init:
scale_para = tf.compat.v1.get_variable(
name='scale_para', shape=para_bn_shape, initializer=gamma_init, dtype=LayersConfig.tf_dtype,
trainable=is_train
)
else:
scale_para = None
if beta_init:
offset_para = tf.compat.v1.get_variable(
name='offset_para', shape=para_bn_shape, initializer=beta_init, dtype=LayersConfig.tf_dtype,
trainable=is_train
)
else:
offset_para = None
moving_mean = tf.compat.v1.get_variable(
'moving_mean', para_bn_shape, initializer=tf.compat.v1.initializers.constant(1.),
dtype=LayersConfig.tf_dtype, trainable=False
if self.beta_init:
self.offset_para = self._get_weights(
"offset_para", shape=para_bn_shape, init=self.beta_init, trainable=self.is_train
)
else:
self.offset_para = None
moving_variance = tf.compat.v1.get_variable(
'moving_variance',
para_bn_shape,
initializer=tf.compat.v1.initializers.constant(1.),
dtype=LayersConfig.tf_dtype,
trainable=False,
)
self.moving_mean = self._get_weights(
"moving_mean", shape=para_bn_shape, init=tl.initializers.constant(1.0), trainable=False
)
self.moving_variance = self._get_weights(
"moving_variance", shape=para_bn_shape, init=tl.initializers.constant(1.0), trainable=False
)
mean, variance = tf.nn.moments(x=conv, axes=list(range(len(conv.get_shape()) - 1)))
def forward(self, inputs):
x = inputs
inputs = quantize_active_overflow(inputs, self.bitA) # Do not remove
outputs = tf.nn.conv2d(
input=x, filters=self.W, strides=self._strides, padding=self.padding, data_format=self.data_format,
dilations=self._dilation_rate, name=self.name
)
update_moving_mean = moving_averages.assign_moving_average(
moving_mean, mean, decay, zero_debias=False
) # if zero_debias=True, has bias
mean, variance = tf.nn.moments(outputs, axes=list(range(len(outputs.get_shape()) - 1)))
update_moving_variance = moving_averages.assign_moving_average(
moving_variance, variance, decay, zero_debias=False
) # if zero_debias=True, has bias
update_moving_mean = moving_averages.assign_moving_average(
self.moving_mean, mean, self.decay, zero_debias=False
) # if zero_debias=True, has bias
update_moving_variance = moving_averages.assign_moving_average(
self.moving_variance, variance, self.decay, zero_debias=False
) # if zero_debias=True, has bias
def mean_var_with_update():
with tf.control_dependencies([update_moving_mean, update_moving_variance]):
return tf.identity(mean), tf.identity(variance)
if self.is_train:
mean, var = self.mean_var_with_update(update_moving_mean, update_moving_variance, mean, variance)
else:
mean, var = self.moving_mean, self.moving_variance
if is_train:
mean, var = mean_var_with_update()
else:
mean, var = moving_mean, moving_variance
w_fold = self._w_fold(self.W, self.scale_para, var, self.epsilon)
w_fold = _w_fold(W, scale_para, var, epsilon)
bias_fold = _bias_fold(offset_para, scale_para, mean, var, epsilon)
W_ = quantize_weight_overflow(w_fold, self.bitW)
W = quantize_weight_overflow(w_fold, bitW)
conv_fold = tf.nn.conv2d(inputs, W_, strides=self.strides, padding=self.padding, data_format=self.data_format)
conv_fold = tf.nn.conv2d(
self.inputs, W, strides=strides, padding=padding, use_cudnn_on_gpu=use_cudnn_on_gpu,
data_format=data_format
)
if self.beta_init:
bias_fold = self._bias_fold(self.offset_para, self.scale_para, mean, var, self.epsilon)
conv_fold = tf.nn.bias_add(conv_fold, bias_fold, name='bn_bias_add')
self.outputs = tf.nn.bias_add(conv_fold, bias_fold, name='bn_bias_add')
if self.act:
conv_fold = self.act(conv_fold)
self.outputs = self._apply_activation(self.outputs)
return conv_fold
self._add_layers(self.outputs)
def mean_var_with_update(self, update_moving_mean, update_moving_variance, mean, variance):
with tf.control_dependencies([update_moving_mean, update_moving_variance]):
return tf.identity(mean), tf.identity(variance)
self._add_params([W, scale_para, offset_para, moving_mean, moving_variance])
def _w_fold(self, w, gama, var, epsilon):
return tf.compat.v1.div(tf.multiply(gama, w), tf.sqrt(var + epsilon))
def _w_fold(w, gama, var, epsilon):
return tf.compat.v1.div(tf.multiply(gama, w), tf.sqrt(var + epsilon))
def _bias_fold(beta, gama, mean, var, epsilon):
return tf.subtract(beta, tf.compat.v1.div(tf.multiply(gama, mean), tf.sqrt(var + epsilon)))
def _bias_fold(self, beta, gama, mean, var, epsilon):
return tf.subtract(beta, tf.compat.v1.div(tf.multiply(gama, mean), tf.sqrt(var + epsilon)))

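The `_w_fold` and `_bias_fold` helpers fold the batch-norm parameters into the convolution weights: `w_fold = gamma * w / sqrt(var + epsilon)` and `bias_fold = beta - gamma * mean / sqrt(var + epsilon)`. A quick NumPy check, with a matmul standing in for the convolution (both are linear, so the folding algebra is identical):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-5

x = rng.normal(size=(4, 3))        # batch of inputs
w = rng.normal(size=(3, 2))        # linear/conv weights
gamma = rng.normal(size=2)         # BN scale
beta = rng.normal(size=2)          # BN offset
mean = rng.normal(size=2)          # BN running mean
var = rng.uniform(0.5, 1.5, 2)     # BN running variance

# Separate linear layer followed by batch norm
bn = gamma * (x @ w - mean) / np.sqrt(var + eps) + beta

# Folded weights and bias, mirroring _w_fold / _bias_fold
w_fold = gamma * w / np.sqrt(var + eps)
bias_fold = beta - gamma * mean / np.sqrt(var + eps)
folded = x @ w_fold + bias_fold

assert np.allclose(bn, folded)
```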
@@ -390,11 +390,7 @@ #! /usr/bin/python

if inspect.isfunction(val):
if ("__module__" in dir(val)) and (len(val.__module__) >
10) and (val.__module__[0:10] == "tensorflow"):
if ("__module__" in dir(val)) and (len(val.__module__) > 10) and (val.__module__[0:10]
== "tensorflow"):
params[arg] = val.__name__
else:
params[arg] = ('is_Func', utils.func2str(val))
# if val.__name__ == "<lambda>":
# params[arg] = utils.lambda2str(val)
# else:
# params[arg] = {"module_path": val.__module__, "func_name": val.__name__}
# ignore more args e.g. TL initializer

@@ -401,0 +397,0 @@ elif arg.endswith('init'):

@@ -27,3 +27,3 @@ #! /usr/bin/python

'QuanDense',
'QuanDenseLayerWithBN',
'QuanDenseWithBN',
]

@@ -8,2 +8,3 @@ #! /usr/bin/python

import tensorlayer as tl
from tensorlayer import logging

@@ -15,8 +16,8 @@ from tensorlayer.decorators import deprecated_alias

__all__ = [
'QuanDenseLayerWithBN',
'QuanDenseWithBN',
]
class QuanDenseLayerWithBN(Layer):
"""The :class:`QuanDenseLayerWithBN` class is a quantized fully connected layer with BN, whose weights are 'bitW' bits and the output of the previous layer
class QuanDenseWithBN(Layer):
"""The :class:`QuanDenseWithBN` class is a quantized fully connected layer with BN, whose weights are 'bitW' bits and the output of the previous layer
is 'bitA' bits during inference.

@@ -26,4 +27,2 @@

----------
prev_layer : :class:`Layer`
Previous layer.
n_units : int

@@ -49,14 +48,2 @@ The number of units of this layer.

The bits of the output of previous layer
decay : float
A decay factor for `ExponentialMovingAverage`.
A large value is suggested for large datasets.
epsilon : float
Epsilon.
is_train : boolean
Is being used for training or inference.
beta_init : initializer or None
The initializer for initializing beta, if None, skip beta.
Usually you should not skip beta unless you know what you are doing.
gamma_init : initializer or None
The initializer for initializing gamma, if None, skip gamma.
use_gemm : boolean

@@ -68,11 +55,20 @@ If True, use gemm instead of ``tf.matmul`` for inferencing. (TODO).

The arguments for the weight matrix initializer.
in_channels: int
The number of channels of the previous layer.
If None, it will be automatically detected when the layer is forwarded for the first time.
name : a str
A unique layer name.
Examples
---------
>>> import tensorlayer as tl
>>> net = tl.layers.Input([50, 256])
>>> layer = tl.layers.QuanDenseWithBN(128, act='relu', name='qdbn1')(net)
>>> print(layer)
>>> net = tl.layers.QuanDenseWithBN(256, act='relu', name='qdbn2')(net)
>>> print(net)
"""
@deprecated_alias(layer='prev_layer', end_support_version=1.9) # TODO remove this line for the 1.9 release
def __init__(
self,
prev_layer,
n_units=100,

@@ -85,11 +81,27 @@ act=None,

bitA=8,
gamma_init=tf.compat.v1.initializers.ones,
beta_init=tf.compat.v1.initializers.zeros,
gamma_init=tl.initializers.truncated_normal(stddev=0.05),
beta_init=tl.initializers.truncated_normal(stddev=0.05),
use_gemm=False,
W_init=tf.compat.v1.initializers.truncated_normal(stddev=0.05),
W_init=tl.initializers.truncated_normal(stddev=0.05),
W_init_args=None,
name=None, #'quan_dense_with_bn',
in_channels=None,
name=None, # 'quan_dense_with_bn',
):
super(QuanDenseLayerWithBN, self).__init__(prev_layer=prev_layer, act=act, W_init_args=W_init_args, name=name)
super(QuanDenseWithBN, self).__init__(act=act, W_init_args=W_init_args, name=name)
self.n_units = n_units
self.decay = decay
self.epsilon = epsilon
self.is_train = is_train
self.bitW = bitW
self.bitA = bitA
self.gamma_init = gamma_init
self.beta_init = beta_init
self.use_gemm = use_gemm
self.W_init = W_init
self.in_channels = in_channels
if self.in_channels is not None:
self.build((None, self.in_channels))
self._built = True
logging.info(

@@ -100,96 +112,90 @@ "QuanDenseLayerWithBN %s: %d %s" %

if self.inputs.get_shape().ndims != 2:
def __repr__(self):
actstr = self.act.__name__ if self.act is not None else 'No Activation'
s = ('{classname}(n_units={n_units}, ' + actstr)
s += ', bitW={bitW}, bitA={bitA}'
if self.in_channels is not None:
s += ', in_channels=\'{in_channels}\''
if self.name is not None:
s += ', name=\'{name}\''
s += ')'
return s.format(classname=self.__class__.__name__, **self.__dict__)
def build(self, inputs_shape):
if self.in_channels is None and len(inputs_shape) != 2:
raise Exception("The input dimension must be rank 2, please reshape or flatten it")
if use_gemm:
if self.in_channels is None:
self.in_channels = inputs_shape[1]
if self.use_gemm:
raise Exception("TODO. The current version use tf.matmul for inferencing.")
n_in = int(self.inputs.get_shape()[-1])
x = self.inputs
self.inputs = quantize_active_overflow(self.inputs, bitA)
self.n_units = n_units
n_in = inputs_shape[-1]
self.W = self._get_weights("weights", shape=(n_in, self.n_units), init=self.W_init)
with tf.compat.v1.variable_scope(name):
para_bn_shape = (self.n_units, )
if self.gamma_init:
self.scale_para = self._get_weights("gamm_weights", shape=para_bn_shape, init=self.gamma_init)
else:
self.scale_para = None
W = tf.compat.v1.get_variable(
name='W', shape=(n_in, n_units), initializer=W_init, dtype=LayersConfig.tf_dtype, **self.W_init_args
)
if self.beta_init:
self.offset_para = self._get_weights("beta_weights", shape=para_bn_shape, init=self.beta_init)
else:
self.offset_para = None
mid_out = tf.matmul(x, W)
self.moving_mean = self._get_weights(
"moving_mean", shape=para_bn_shape, init=tl.initializers.constant(1.0), trainable=False
)
self.moving_variance = self._get_weights(
"moving_variacne", shape=para_bn_shape, init=tl.initializers.constant(1.0), trainable=False
)
para_bn_shape = mid_out.get_shape()[-1:]
def forward(self, inputs):
x = inputs
inputs = quantize_active_overflow(inputs, self.bitA)
mid_out = tf.matmul(x, self.W)
if gamma_init:
scale_para = tf.compat.v1.get_variable(
name='scale_para', shape=para_bn_shape, initializer=gamma_init, dtype=LayersConfig.tf_dtype,
trainable=is_train
)
else:
scale_para = None
-        if beta_init:
-            offset_para = tf.compat.v1.get_variable(
-                name='offset_para', shape=para_bn_shape, initializer=beta_init, dtype=LayersConfig.tf_dtype,
-                trainable=is_train
-            )
-        else:
-            offset_para = None
-        moving_mean = tf.compat.v1.get_variable(
-            'moving_mean', para_bn_shape, initializer=tf.compat.v1.initializers.constant(1.),
-            dtype=LayersConfig.tf_dtype, trainable=False
-        )
-        moving_variance = tf.compat.v1.get_variable(
-            'moving_variance',
-            para_bn_shape,
-            initializer=tf.compat.v1.initializers.constant(1.),
-            dtype=LayersConfig.tf_dtype,
-            trainable=False,
-        )
-        mean, variance = tf.nn.moments(x=mid_out, axes=list(range(len(mid_out.get_shape()) - 1)))
-        update_moving_mean = moving_averages.assign_moving_average(
-            moving_mean, mean, decay, zero_debias=False
-        )  # if zero_debias=True, has bias
-        update_moving_variance = moving_averages.assign_moving_average(
-            moving_variance, variance, decay, zero_debias=False
-        )  # if zero_debias=True, has bias
-
-        def mean_var_with_update():
-            with tf.control_dependencies([update_moving_mean, update_moving_variance]):
-                return tf.identity(mean), tf.identity(variance)
-
-        if is_train:
-            mean, var = mean_var_with_update()
-        else:
-            mean, var = moving_mean, moving_variance
-
-        w_fold = _w_fold(W, scale_para, var, epsilon)
-        bias_fold = _bias_fold(offset_para, scale_para, mean, var, epsilon)
-
-        W = quantize_weight_overflow(w_fold, bitW)
-        # W = tl.act.sign(W)    # dont update ...
-        # W = tf.Variable(W)
-
-        self.outputs = tf.matmul(self.inputs, W)
-        # self.outputs = xnor_gemm(self.inputs, W) # TODO
-
-        self.outputs = tf.nn.bias_add(self.outputs, bias_fold, name='bias_add')
-
-        self.outputs = self._apply_activation(self.outputs)
-        self._add_layers(self.outputs)
-        self._add_params([W, scale_para, offset_para, moving_mean, moving_variance])
-
-def _w_fold(w, gama, var, epsilon):
-    return tf.compat.v1.div(tf.multiply(gama, w), tf.sqrt(var + epsilon))
-
-def _bias_fold(beta, gama, mean, var, epsilon):
-    return tf.subtract(beta, tf.compat.v1.div(tf.multiply(gama, mean), tf.sqrt(var + epsilon)))
+        mean, variance = tf.nn.moments(x=mid_out, axes=list(range(len(mid_out.get_shape()) - 1)))
+        update_moving_mean = moving_averages.assign_moving_average(
+            self.moving_mean, mean, self.decay, zero_debias=False
+        )  # if zero_debias=True, has bias
+        update_moving_variance = moving_averages.assign_moving_average(
+            self.moving_variance, variance, self.decay, zero_debias=False
+        )  # if zero_debias=True, has bias
+
+        if self.is_train:
+            mean, var = self.mean_var_with_update(update_moving_mean, update_moving_variance, mean, variance)
+        else:
+            mean, var = self.moving_mean, self.moving_variance
+
+        w_fold = self._w_fold(self.W, self.scale_para, var, self.epsilon)
+        W = quantize_weight_overflow(w_fold, self.bitW)
+
+        outputs = tf.matmul(inputs, W)
+
+        if self.beta_init:
+            bias_fold = self._bias_fold(self.offset_para, self.scale_para, mean, var, self.epsilon)
+            outputs = tf.nn.bias_add(outputs, bias_fold, name='bias_add')
+        else:
+            outputs = outputs
+
+        if self.act:
+            outputs = self.act(outputs)
+        else:
+            outputs = outputs
+        return outputs
+
+    def mean_var_with_update(self, update_moving_mean, update_moving_variance, mean, variance):
+        with tf.control_dependencies([update_moving_mean, update_moving_variance]):
+            return tf.identity(mean), tf.identity(variance)
+
+    def _w_fold(self, w, gama, var, epsilon):
+        return tf.compat.v1.div(tf.multiply(gama, w), tf.sqrt(var + epsilon))
+
+    def _bias_fold(self, beta, gama, mean, var, epsilon):
+        return tf.subtract(beta, tf.compat.v1.div(tf.multiply(gama, mean), tf.sqrt(var + epsilon)))

@@ -94,2 +94,12 @@ #! /usr/bin/python

+# dense/quan_dense_bn.py
+__all__ += [
+    'QuanDenseLayerWithBN',
+]
+
+def QuanDenseLayerWithBN(*args, **kwargs):
+    raise NonExistingLayerError("QuanDenseLayerWithBN(net, name='a') --> QuanDenseWithBN(name='a')(net)" + __log__)
+
 # dense/ternary_dense.py
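The compat shim above turns a removed 1.x-style layer call into an actionable migration error rather than a bare `NameError`. A hedged sketch of the same pattern (with a hypothetical stand-in for TensorLayer's `NonExistingLayerError`, which is not defined here):

```python
class NonExistingLayerError(Exception):
    """Stand-in for tensorlayer's error type for removed 1.x layers."""

def QuanDenseLayerWithBN(*args, **kwargs):
    # point users at the 2.x replacement instead of failing opaquely
    raise NonExistingLayerError(
        "QuanDenseLayerWithBN(net, name='a') --> QuanDenseWithBN(name='a')(net)"
    )

try:
    QuanDenseLayerWithBN(None, name='a')
except NonExistingLayerError as e:
    print(e)  # prints the migration hint
```

The message itself documents the API change: the 1.x call style `Layer(net, name=...)` becomes the 2.x style `Layer(name=...)(net)`.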

@@ -96,0 +106,0 @@ __all__ += [

@@ -110,2 +110,10 @@ #! /usr/bin/python

+def _compute_shape(tensors):
+    if isinstance(tensors, list):
+        shape_mem = [t.get_shape().as_list() for t in tensors]
+    else:
+        shape_mem = tensors.get_shape().as_list()
+    return shape_mem
+
 def batch_normalization(x, mean, variance, offset, scale, variance_epsilon, data_format, name=None):
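The new `_compute_shape` helper normalizes either a single tensor or a list of tensors to plain shape lists, which the rank checks below then measure with `len(...)`. A minimal sketch of the same dispatch using NumPy arrays in place of TF tensors (hypothetical `compute_shape`, using `.shape` instead of `get_shape().as_list()`):

```python
import numpy as np

def compute_shape(tensors):
    # a list of tensors yields a list of shape lists; a single tensor yields one shape list
    if isinstance(tensors, list):
        return [list(t.shape) for t in tensors]
    return list(tensors.shape)

print(compute_shape(np.zeros((4, 10))))                      # [4, 10]
print(compute_shape([np.zeros((4, 10)), np.zeros((4, 5))]))  # [[4, 10], [4, 5]]
```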

@@ -260,3 +268,4 @@ """Data Format aware version of tf.nn.batch_normalization."""

 def _check_input_shape(self, inputs):
-    if inputs.ndim <= 1:
+    inputs_shape = _compute_shape(inputs)
+    if len(inputs_shape) <= 1:
         raise ValueError('expected input at least 2D, but got {}D input'.format(inputs.ndim))

@@ -323,3 +332,4 @@

 def _check_input_shape(self, inputs):
-    if inputs.ndim != 2 and inputs.ndim != 3:
+    inputs_shape = _compute_shape(inputs)
+    if len(inputs_shape) != 2 and len(inputs_shape) != 3:
         raise ValueError('expected input to be 2D or 3D, but got {}D input'.format(inputs.ndim))

@@ -347,3 +357,4 @@

 def _check_input_shape(self, inputs):
-    if inputs.ndim != 4:
+    inputs_shape = _compute_shape(inputs)
+    if len(inputs_shape) != 4:
         raise ValueError('expected input to be 4D, but got {}D input'.format(inputs.ndim))

@@ -371,3 +382,4 @@

 def _check_input_shape(self, inputs):
-    if inputs.ndim != 5:
+    inputs_shape = _compute_shape(inputs)
+    if len(inputs_shape) != 5:
         raise ValueError('expected input to be 5D, but got {}D input'.format(inputs.ndim))
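The four `_check_input_shape` hunks above all make the same change: measure rank via the shape list returned by `_compute_shape` instead of `inputs.ndim`, then compare against the rank the layer expects. A compact sketch generalizing the pattern (hypothetical `check_rank`, not a TensorLayer API):

```python
import numpy as np

def check_rank(inputs, allowed):
    # raise if the input rank is not one of the ranks a layer accepts
    ndim = len(list(inputs.shape))
    if ndim not in allowed:
        raise ValueError('expected input rank in {}, but got {}D input'.format(sorted(allowed), ndim))

check_rank(np.zeros((4, 10)), allowed={2, 3})  # ok, no exception
try:
    check_rank(np.zeros((4, 10)), allowed={4})
except ValueError as e:
    print(e)
```

Going through a shape list rather than `.ndim` keeps the checks working for inputs that only expose a shape (e.g. symbolic tensors), at the cost of one helper call.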

@@ -374,0 +386,0 @@

@@ -7,3 +7,3 @@ #! /usr/bin/python

 MINOR = 2
-PATCH = 1
+PATCH = 2
 PRE_RELEASE = ''

@@ -10,0 +10,0 @@ # Use the following formatting: (major, minor, patch, prerelease)

@@ -211,3 +211,7 @@ #!/usr/bin/env python

-        cls.model = Model(cls.input_layer, cls.n14)
+        cls.n15 = tl.layers.QuanConv2dWithBN(
+            n_filter=64, filter_size=(5, 5), strides=(1, 1), act=tf.nn.relu, padding='SAME', name='quancnnbn2d'
+        )(cls.n14)
+        cls.model = Model(cls.input_layer, cls.n15)
         print("Testing Conv2d model: \n", cls.model)

@@ -325,2 +329,6 @@

+    def test_layer_n15(self):
+        self.assertEqual(len(self.n15._info[0].layer.all_weights), 5)
+        self.assertEqual(self.n15.get_shape().as_list()[1:], [24, 24, 64])
+
     # def test_layer_n8(self):

@@ -327,0 +335,0 @@ #

@@ -246,2 +246,57 @@ #!/usr/bin/env python

+class Layer_QuanDenseWithBN_Test(CustomTestCase):
+
+    @classmethod
+    def setUpClass(cls):
+        print("-" * 20, "Layer_QuanDenseWithBN_Test", "-" * 20)
+        cls.batch_size = 4
+        cls.inputs_shape = [cls.batch_size, 10]
+
+        cls.ni = Input(cls.inputs_shape, name='input_layer')
+        cls.layer1 = QuanDenseWithBN(n_units=5)
+        nn = cls.layer1(cls.ni)
+        cls.layer1._nodes_fixed = True
+        cls.M = Model(inputs=cls.ni, outputs=nn)
+
+        cls.layer2 = QuanDenseWithBN(n_units=5, in_channels=10)
+        cls.layer2._nodes_fixed = True
+
+        cls.inputs = tf.random.uniform((cls.inputs_shape))
+        cls.n1 = cls.layer1(cls.inputs)
+        cls.n2 = cls.layer2(cls.inputs)
+        cls.n3 = cls.M(cls.inputs, is_train=True)
+
+        print(cls.layer1)
+        print(cls.layer2)
+
+    @classmethod
+    def tearDownClass(cls):
+        pass
+
+    def test_layer_n1(self):
+        print(self.n1[0])
+
+    def test_layer_n2(self):
+        print(self.n2[0])
+
+    def test_model_n3(self):
+        print(self.n3[0])
+
+    def test_exception(self):
+        try:
+            layer = QuanDenseWithBN(n_units=5)
+            inputs = Input([4, 10, 5], name='ill_inputs')
+            out = layer(inputs)
+            self.fail('ill inputs')
+        except Exception as e:
+            print(e)
+
+        try:
+            layer = QuanDenseWithBN(n_units=5, use_gemm=True)
+            out = layer(self.ni)
+            self.fail('use gemm')
+        except Exception as e:
+            print(e)
+
 class Layer_TernaryDense_Test(CustomTestCase):

@@ -248,0 +303,0 @@

 import os
 import time
+import psutil
 import tensorflow as tf
 import keras
-import psutil
 from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

@@ -9,0 +9,0 @@ from keras.applications.vgg16 import VGG16

@@ -5,3 +5,2 @@ import os

import numpy as np
import psutil

@@ -11,2 +10,3 @@ import torch

import torch.optim as optim
from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

@@ -13,0 +13,0 @@ from torchvision.models import vgg16

 import os
 import time
+import psutil
 import tensorflow as tf
 from tensorflow.python.keras.applications import VGG16
-import psutil
 from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

@@ -9,0 +9,0 @@

 import os
 import time
+import psutil
 import tensorflow as tf
 from tensorflow.python.keras.applications import VGG16
-import psutil
 from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

@@ -9,0 +9,0 @@

 import os
 import time
+import psutil
 import tensorflow as tf
-import psutil
 import tensorlayer as tl

@@ -8,0 +8,0 @@ from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

 import os
 import time
+import psutil
 import tensorflow as tf
-import psutil
 import tensorlayer as tl

@@ -8,0 +8,0 @@ from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

 import os
 import time
+import psutil
 import tensorflow as tf
-import psutil
 import tensorlayer as tl

@@ -8,0 +8,0 @@ from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

 import os
 import time
+import psutil
 import tensorflow as tf
-import psutil
 import tensorlayer as tl

@@ -8,0 +8,0 @@ from exp_config import (BATCH_SIZE, LERANING_RATE, MONITOR_INTERVAL, NUM_ITERS, random_input_generator)

@@ -8,3 +8,3 @@ #!/usr/bin/env python

import tensorflow as tf
import numpy as np
import tensorlayer as tl

@@ -120,5 +120,13 @@ from tests.utils import CustomTestCase

+    def test_mish(self):
+        for i in range(-5, 15):
+            good_output = i * np.tanh(np.math.log(1 + np.math.exp(i)))
+            computed_output = tl.act.mish(float(i))
+            self.assertAlmostEqual(computed_output.numpy(), good_output, places=5)
+
 if __name__ == '__main__':
     unittest.main()
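The added `test_mish` checks `tl.act.mish` against the reference identity mish(x) = x·tanh(softplus(x)) with softplus(x) = ln(1 + eˣ). A standalone NumPy sketch of the same reference computation (hypothetical `mish`, independent of TensorLayer):

```python
import numpy as np

def mish(x):
    # mish(x) = x * tanh(softplus(x)), softplus(x) = ln(1 + e^x)
    return x * np.tanh(np.log1p(np.exp(x)))

# spot-check the same value range the test iterates over
for i in range(-5, 15):
    ref = i * np.tanh(np.log(1 + np.exp(i)))
    assert abs(mish(float(i)) - ref) < 1e-5
print(mish(0.0))  # 0.0
```

`np.log1p` is used here for numerical accuracy near x = 0; the test's `np.math.log(1 + np.math.exp(i))` computes the same quantity.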

@@ -7,6 +7,6 @@ #!/usr/bin/env python

+import nltk
 import tensorflow as tf
 from tensorflow.python.platform import gfile
-import nltk
 import tensorlayer as tl

@@ -13,0 +13,0 @@ from tests.utils import CustomTestCase

Sorry, the diff of this file is too big to display