tensorlayer - PyPI package: comparing version 2.2.3 to 2.2.5
LICENSE.rst (+211)
License
=======
Copyright (c) 2016~2020 The TensorLayer contributors. All rights reserved.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2016, The TensorLayer Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Contact
=======
Questions? Please contact hao.dong@pku.edu.cn
#! /usr/bin/python
# -*- coding: utf-8 -*-
from .computer_vision_object_detection import *
from .human_pose_estimation import *
from .computer_vision import *
#! /usr/bin/python
# -*- coding: utf-8 -*-
from .yolov4 import YOLOv4
from .common import *
#! /usr/bin/python
# -*- coding: utf-8 -*-
import colorsys
import random
import cv2
import numpy as np
import tensorflow as tf
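# Decode one raw YOLO head: split the (batch, S, S, 3, 5 + NUM_CLASS) tensor into
# box offsets, objectness and class logits, then project the sigmoid offsets onto
# the S x S grid to obtain absolute-pixel xywh boxes and per-class scores.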
def decode_tf(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i=0, XYSCALE=[1, 1, 1]):
batch_size = tf.shape(conv_output)[0]
conv_output = tf.reshape(conv_output, (batch_size, output_size, output_size, 3, 5 + NUM_CLASS))
conv_raw_dxdy, conv_raw_dwdh, conv_raw_conf, conv_raw_prob = tf.split(conv_output, (2, 2, 1, NUM_CLASS), axis=-1)
xy_grid = tf.meshgrid(tf.range(output_size), tf.range(output_size))
xy_grid = tf.expand_dims(tf.stack(xy_grid, axis=-1), axis=2) # [gx, gy, 1, 2]
xy_grid = tf.tile(tf.expand_dims(xy_grid, axis=0), [batch_size, 1, 1, 3, 1])
xy_grid = tf.cast(xy_grid, tf.float32)
pred_xy = ((tf.sigmoid(conv_raw_dxdy) * XYSCALE[i]) - 0.5 * (XYSCALE[i] - 1) + xy_grid) * \
STRIDES[i]
pred_wh = (tf.exp(conv_raw_dwdh) * ANCHORS[i])
pred_xywh = tf.concat([pred_xy, pred_wh], axis=-1)
pred_conf = tf.sigmoid(conv_raw_conf)
pred_prob = tf.sigmoid(conv_raw_prob)
pred_prob = pred_conf * pred_prob
pred_prob = tf.reshape(pred_prob, (batch_size, -1, NUM_CLASS))
pred_xywh = tf.reshape(pred_xywh, (batch_size, -1, 4))
return pred_xywh, pred_prob
def decode(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE=[1, 1, 1]):
return decode_tf(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i=i, XYSCALE=XYSCALE)
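# Keep only boxes whose best class score reaches score_threshold, and convert the
# surviving xywh boxes into corner coordinates (y_min, x_min, y_max, x_max)
# normalized by the input resolution.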
def filter_boxes(box_xywh, scores, score_threshold=0.4, input_shape=tf.constant([416, 416])):
scores_max = tf.math.reduce_max(scores, axis=-1)
mask = scores_max >= score_threshold
class_boxes = tf.boolean_mask(box_xywh, mask)
pred_conf = tf.boolean_mask(scores, mask)
class_boxes = tf.reshape(class_boxes, [tf.shape(scores)[0], -1, tf.shape(class_boxes)[-1]])
pred_conf = tf.reshape(pred_conf, [tf.shape(scores)[0], -1, tf.shape(pred_conf)[-1]])
box_xy, box_wh = tf.split(class_boxes, (2, 2), axis=-1)
input_shape = tf.cast(input_shape, dtype=tf.float32)
box_yx = box_xy[..., ::-1]
box_hw = box_wh[..., ::-1]
box_mins = (box_yx - (box_hw / 2.)) / input_shape
box_maxes = (box_yx + (box_hw / 2.)) / input_shape
boxes = tf.concat(
[
box_mins[..., 0:1], # y_min
box_mins[..., 1:2], # x_min
box_maxes[..., 0:1], # y_max
box_maxes[..., 1:2] # x_max
],
axis=-1
)
# return tf.concat([boxes, pred_conf], axis=-1)
return (boxes, pred_conf)
def read_class_names(class_file_name):
names = {}
with open(class_file_name, 'r') as data:
for ID, name in enumerate(data):
names[ID] = name.strip('\n')
return names
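# Draw the final detections on the image: one rectangle per box, colored by class
# via evenly spaced HSV hues, with an optional "<class>: <score>" label.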
def draw_bbox(image, bboxes, show_label=True):
classes = read_class_names('model/coco.names')
num_classes = len(classes)
image_h, image_w, _ = image.shape
hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), colors))
random.seed(0)
random.shuffle(colors)
random.seed(None)
out_boxes, out_scores, out_classes, num_boxes = bboxes
for i in range(num_boxes[0]):
if int(out_classes[0][i]) < 0 or int(out_classes[0][i]) >= num_classes: continue  # skip out-of-range class ids
coor = out_boxes[0][i]
coor[0] = int(coor[0] * image_h)
coor[2] = int(coor[2] * image_h)
coor[1] = int(coor[1] * image_w)
coor[3] = int(coor[3] * image_w)
fontScale = 0.5
score = out_scores[0][i]
class_ind = int(out_classes[0][i])
bbox_color = colors[class_ind]
bbox_thick = int(0.6 * (image_h + image_w) / 600)
c1, c2 = (int(coor[1]), int(coor[0])), (int(coor[3]), int(coor[2]))  # cv2 needs integer pixel coordinates
cv2.rectangle(image, c1, c2, bbox_color, bbox_thick)
if show_label:
bbox_mess = '%s: %.2f' % (classes[class_ind], score)
t_size = cv2.getTextSize(bbox_mess, 0, fontScale, thickness=bbox_thick // 2)[0]
c3 = (c1[0] + t_size[0], c1[1] - t_size[1] - 3)
cv2.rectangle(image, c1, (int(c3[0]), int(c3[1])), bbox_color, -1)  # filled label background
cv2.putText(
image, bbox_mess, (c1[0], int(c1[1] - 2)), cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 0, 0),
bbox_thick // 2, lineType=cv2.LINE_AA
)
return image
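# Reshape a flat list of anchor values into (heads, anchors per head, 2):
# 2 heads (12 values) for the tiny variant, 3 heads (18 values) otherwise.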
def get_anchors(anchors_path, tiny=False):
anchors = np.array(anchors_path)
if tiny:
return anchors.reshape(2, 3, 2)
else:
return anchors.reshape(3, 3, 2)
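# Same decoding as decode_tf, but for training: keeps xywh, objectness and class
# probabilities concatenated in one tensor so the loss can slice what it needs.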
def decode_train(conv_output, output_size, NUM_CLASS, STRIDES, ANCHORS, i=0, XYSCALE=[1, 1, 1]):
conv_output = tf.reshape(conv_output, (tf.shape(conv_output)[0], output_size, output_size, 3, 5 + NUM_CLASS))
conv_raw_dxdy, conv_raw_dwdh, conv_raw_conf, conv_raw_prob = tf.split(conv_output, (2, 2, 1, NUM_CLASS), axis=-1)
xy_grid = tf.meshgrid(tf.range(output_size), tf.range(output_size))
xy_grid = tf.expand_dims(tf.stack(xy_grid, axis=-1), axis=2) # [gx, gy, 1, 2]
xy_grid = tf.tile(tf.expand_dims(xy_grid, axis=0), [tf.shape(conv_output)[0], 1, 1, 3, 1])
xy_grid = tf.cast(xy_grid, tf.float32)
pred_xy = ((tf.sigmoid(conv_raw_dxdy) * XYSCALE[i]) - 0.5 * (XYSCALE[i] - 1) + xy_grid) * \
STRIDES[i]
pred_wh = (tf.exp(conv_raw_dwdh) * ANCHORS[i])
pred_xywh = tf.concat([pred_xy, pred_wh], axis=-1)
pred_conf = tf.sigmoid(conv_raw_conf)
pred_prob = tf.sigmoid(conv_raw_prob)
return tf.concat([pred_xywh, pred_conf, pred_prob], axis=-1)
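# Resize a frame to 416 x 416, scale pixels to [0, 1] and wrap it in a
# batch-of-one float32 tensor.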
def yolo4_input_processing(original_image):
image_data = cv2.resize(original_image, (416, 416))
image_data = image_data / 255.
images_data = []
for i in range(1):
images_data.append(image_data)
images_data = np.asarray(images_data).astype(np.float32)
batch_data = tf.constant(images_data)
return batch_data
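# Post-process the three YOLOv4 heads (strides 8/16/32): decode each feature map,
# concatenate the candidates, drop low-score boxes, then run class-wise non-max
# suppression and return numpy arrays of boxes, scores, classes and counts.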
def yolo4_output_processing(feature_maps):
STRIDES = [8, 16, 32]
ANCHORS = get_anchors([12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401])
NUM_CLASS = 80
XYSCALE = [1.2, 1.1, 1.05]
iou_threshold = 0.45
score_threshold = 0.25
bbox_tensors = []
prob_tensors = []
score_thres = 0.2
for i, fm in enumerate(feature_maps):
if i == 0:
output_tensors = decode(fm, 416 // 8, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE)
elif i == 1:
output_tensors = decode(fm, 416 // 16, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE)
else:
output_tensors = decode(fm, 416 // 32, NUM_CLASS, STRIDES, ANCHORS, i, XYSCALE)
bbox_tensors.append(output_tensors[0])
prob_tensors.append(output_tensors[1])
pred_bbox = tf.concat(bbox_tensors, axis=1)
pred_prob = tf.concat(prob_tensors, axis=1)
boxes, pred_conf = filter_boxes(
pred_bbox, pred_prob, score_threshold=score_thres, input_shape=tf.constant([416, 416])
)
pred = {'concat': tf.concat([boxes, pred_conf], axis=-1)}
for key, value in pred.items():
boxes = value[:, :, 0:4]
pred_conf = value[:, :, 4:]
boxes, scores, classes, valid_detections = tf.image.combined_non_max_suppression(
boxes=tf.reshape(boxes, (tf.shape(boxes)[0], -1, 1, 4)),
scores=tf.reshape(pred_conf, (tf.shape(pred_conf)[0], -1, tf.shape(pred_conf)[-1])),
max_output_size_per_class=50, max_total_size=50, iou_threshold=iou_threshold, score_threshold=score_threshold
)
output = [boxes.numpy(), scores.numpy(), classes.numpy(), valid_detections.numpy()]
return output
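# Convert the NMS output into a list of COCO-style dicts with pixel-space
# [x1, y1, x2, y2] boxes, category ids and confidence scores.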
def result_to_json(image, pred_bbox):
image_h, image_w, _ = image.shape
out_boxes, out_scores, out_classes, num_boxes = pred_bbox
class_names = {}
json_result = []
with open('model/coco.names', 'r') as data:
for ID, name in enumerate(data):
class_names[ID] = name.strip('\n')
nums_class = len(class_names)
for i in range(num_boxes[0]):
if int(out_classes[0][i]) < 0 or int(out_classes[0][i]) >= nums_class: continue  # skip out-of-range class ids
coor = out_boxes[0][i]
coor[0] = int(coor[0] * image_h)
coor[2] = int(coor[2] * image_h)
coor[1] = int(coor[1] * image_w)
coor[3] = int(coor[3] * image_w)
score = float(out_scores[0][i])
class_ind = int(out_classes[0][i])
bbox = np.array([coor[1], coor[0], coor[3], coor[2]]).tolist() # [x1,y1,x2,y2]
json_result.append({'image': None, 'category_id': class_ind, 'bbox': bbox, 'score': score})
return json_result
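# A minimal end-to-end sketch of these helpers (illustrative only; it assumes a
# pretrained model plus 'model/coco.names' are available locally):
#   img = cv2.imread('street.jpg')  # hypothetical input image
#   net = YOLOv4(NUM_CLASS=80, pretrained=True)
#   feature_maps = net(yolo4_input_processing(img), is_train=False)
#   pred_bbox = yolo4_output_processing(feature_maps)
#   annotated = draw_bbox(img, pred_bbox)
#   detections = result_to_json(img, pred_bbox)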
#! /usr/bin/python
# -*- coding: utf-8 -*-
"""YOLOv4 for MS-COCO.
# Reference:
- [tensorflow-yolov4-tflite](
https://github.com/hunglc007/tensorflow-yolov4-tflite)
"""
import tensorflow as tf
import numpy as np
import tensorlayer as tl
from tensorlayer.activation import mish
from tensorlayer.layers import Conv2d, MaxPool2d, BatchNorm2d, ZeroPad2d, UpSampling2d, Concat, Input, Elementwise
from tensorlayer.models import Model
from tensorlayer import logging
INPUT_SIZE = 416
weights_url = {'link': 'https://pan.baidu.com/s/1MC1dmEwpxsdgHO1MZ8fYRQ', 'password': 'idsz'}
def upsample(input_layer):
return UpSampling2d(scale=2)(input_layer)
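# Basic conv block: optional zero-padding + stride-2 downsampling, Conv2d, and
# (unless bn=False) batch norm fused with a leaky-ReLU or mish activation.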
def convolutional(
input_layer, filters_shape, downsample=False, activate=True, bn=True, activate_type='leaky', name=None
):
if downsample:
input_layer = ZeroPad2d(((1, 0), (1, 0)))(input_layer)
padding = 'VALID'
strides = 2
else:
strides = 1
padding = 'SAME'
if bn:
b_init = None
else:
b_init = tl.initializers.constant(value=0.0)
conv = Conv2d(
n_filter=filters_shape[-1], filter_size=(filters_shape[0], filters_shape[1]), strides=(strides, strides),
padding=padding, b_init=b_init, name=name
)(input_layer)
if bn:
if activate:
if activate_type == 'leaky':
conv = BatchNorm2d(act='lrelu0.1')(conv)
elif activate_type == 'mish':
conv = BatchNorm2d(act=mish)(conv)
else:
conv = BatchNorm2d()(conv)
return conv
def residual_block(input_layer, input_channel, filter_num1, filter_num2, activate_type='leaky'):
short_cut = input_layer
conv = convolutional(input_layer, filters_shape=(1, 1, input_channel, filter_num1), activate_type=activate_type)
conv = convolutional(conv, filters_shape=(3, 3, filter_num1, filter_num2), activate_type=activate_type)
residual_output = Elementwise(tf.add)([short_cut, conv])
return residual_output
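# CSPDarknet-53 backbone with an SPP block at the end; returns the stride-8 and
# stride-16 feature maps (route_1, route_2) plus the final stride-32 output.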
def cspdarknet53(input_data=None):
input_data = convolutional(input_data, (3, 3, 3, 32), activate_type='mish')
input_data = convolutional(input_data, (3, 3, 32, 64), downsample=True, activate_type='mish')
route = input_data
route = convolutional(route, (1, 1, 64, 64), activate_type='mish', name='conv_rote_block_1')
input_data = convolutional(input_data, (1, 1, 64, 64), activate_type='mish')
for i in range(1):
input_data = residual_block(input_data, 64, 32, 64, activate_type="mish")
input_data = convolutional(input_data, (1, 1, 64, 64), activate_type='mish')
input_data = Concat()([input_data, route])
input_data = convolutional(input_data, (1, 1, 128, 64), activate_type='mish')
input_data = convolutional(input_data, (3, 3, 64, 128), downsample=True, activate_type='mish')
route = input_data
route = convolutional(route, (1, 1, 128, 64), activate_type='mish', name='conv_rote_block_2')
input_data = convolutional(input_data, (1, 1, 128, 64), activate_type='mish')
for i in range(2):
input_data = residual_block(input_data, 64, 64, 64, activate_type="mish")
input_data = convolutional(input_data, (1, 1, 64, 64), activate_type='mish')
input_data = Concat()([input_data, route])
input_data = convolutional(input_data, (1, 1, 128, 128), activate_type='mish')
input_data = convolutional(input_data, (3, 3, 128, 256), downsample=True, activate_type='mish')
route = input_data
route = convolutional(route, (1, 1, 256, 128), activate_type='mish', name='conv_rote_block_3')
input_data = convolutional(input_data, (1, 1, 256, 128), activate_type='mish')
for i in range(8):
input_data = residual_block(input_data, 128, 128, 128, activate_type="mish")
input_data = convolutional(input_data, (1, 1, 128, 128), activate_type='mish')
input_data = Concat()([input_data, route])
input_data = convolutional(input_data, (1, 1, 256, 256), activate_type='mish')
route_1 = input_data
input_data = convolutional(input_data, (3, 3, 256, 512), downsample=True, activate_type='mish')
route = input_data
route = convolutional(route, (1, 1, 512, 256), activate_type='mish', name='conv_rote_block_4')
input_data = convolutional(input_data, (1, 1, 512, 256), activate_type='mish')
for i in range(8):
input_data = residual_block(input_data, 256, 256, 256, activate_type="mish")
input_data = convolutional(input_data, (1, 1, 256, 256), activate_type='mish')
input_data = Concat()([input_data, route])
input_data = convolutional(input_data, (1, 1, 512, 512), activate_type='mish')
route_2 = input_data
input_data = convolutional(input_data, (3, 3, 512, 1024), downsample=True, activate_type='mish')
route = input_data
route = convolutional(route, (1, 1, 1024, 512), activate_type='mish', name='conv_rote_block_5')
input_data = convolutional(input_data, (1, 1, 1024, 512), activate_type='mish')
for i in range(4):
input_data = residual_block(input_data, 512, 512, 512, activate_type="mish")
input_data = convolutional(input_data, (1, 1, 512, 512), activate_type='mish')
input_data = Concat()([input_data, route])
input_data = convolutional(input_data, (1, 1, 1024, 1024), activate_type='mish')
input_data = convolutional(input_data, (1, 1, 1024, 512))
input_data = convolutional(input_data, (3, 3, 512, 1024))
input_data = convolutional(input_data, (1, 1, 1024, 512))
maxpool1 = MaxPool2d(filter_size=(13, 13), strides=(1, 1))(input_data)
maxpool2 = MaxPool2d(filter_size=(9, 9), strides=(1, 1))(input_data)
maxpool3 = MaxPool2d(filter_size=(5, 5), strides=(1, 1))(input_data)
input_data = Concat()([maxpool1, maxpool2, maxpool3, input_data])
input_data = convolutional(input_data, (1, 1, 2048, 512))
input_data = convolutional(input_data, (3, 3, 512, 1024))
input_data = convolutional(input_data, (1, 1, 1024, 512))
return route_1, route_2, input_data
def YOLOv4(NUM_CLASS, pretrained=False):
"""Pre-trained YOLOv4 model.
Parameters
------------
NUM_CLASS : int
Number of classes in final prediction.
pretrained : boolean
Whether to load pretrained weights. Default False.
Examples
---------
Object Detection with YOLOv4, see `computer_vision.py
<https://github.com/tensorlayer/tensorlayer/blob/master/tensorlayer/app/computer_vision.py>`__
With TensorLayer
>>> # get the whole model, without pre-trained YOLOv4 parameters
>>> yolov4 = tl.app.YOLOv4(NUM_CLASS=80, pretrained=False)
>>> # get the whole model, restore pre-trained YOLOv4 parameters
>>> yolov4 = tl.app.YOLOv4(NUM_CLASS=80, pretrained=True)
>>> # use it for inference
>>> output = yolov4(img, is_train=False)
"""
input_layer = Input([None, INPUT_SIZE, INPUT_SIZE, 3])
route_1, route_2, conv = cspdarknet53(input_layer)
route = conv
conv = convolutional(conv, (1, 1, 512, 256))
conv = upsample(conv)
route_2 = convolutional(route_2, (1, 1, 512, 256), name='conv_yolo_1')
conv = Concat()([route_2, conv])
conv = convolutional(conv, (1, 1, 512, 256))
conv = convolutional(conv, (3, 3, 256, 512))
conv = convolutional(conv, (1, 1, 512, 256))
conv = convolutional(conv, (3, 3, 256, 512))
conv = convolutional(conv, (1, 1, 512, 256))
route_2 = conv
conv = convolutional(conv, (1, 1, 256, 128))
conv = upsample(conv)
route_1 = convolutional(route_1, (1, 1, 256, 128), name='conv_yolo_2')
conv = Concat()([route_1, conv])
conv = convolutional(conv, (1, 1, 256, 128))
conv = convolutional(conv, (3, 3, 128, 256))
conv = convolutional(conv, (1, 1, 256, 128))
conv = convolutional(conv, (3, 3, 128, 256))
conv = convolutional(conv, (1, 1, 256, 128))
route_1 = conv
conv = convolutional(conv, (3, 3, 128, 256), name='conv_route_1')
conv_sbbox = convolutional(conv, (1, 1, 256, 3 * (NUM_CLASS + 5)), activate=False, bn=False)
conv = convolutional(route_1, (3, 3, 128, 256), downsample=True, name='conv_route_2')
conv = Concat()([conv, route_2])
conv = convolutional(conv, (1, 1, 512, 256))
conv = convolutional(conv, (3, 3, 256, 512))
conv = convolutional(conv, (1, 1, 512, 256))
conv = convolutional(conv, (3, 3, 256, 512))
conv = convolutional(conv, (1, 1, 512, 256))
route_2 = conv
conv = convolutional(conv, (3, 3, 256, 512), name='conv_route_3')
conv_mbbox = convolutional(conv, (1, 1, 512, 3 * (NUM_CLASS + 5)), activate=False, bn=False)
conv = convolutional(route_2, (3, 3, 256, 512), downsample=True, name='conv_route_4')
conv = Concat()([conv, route])
conv = convolutional(conv, (1, 1, 1024, 512))
conv = convolutional(conv, (3, 3, 512, 1024))
conv = convolutional(conv, (1, 1, 1024, 512))
conv = convolutional(conv, (3, 3, 512, 1024))
conv = convolutional(conv, (1, 1, 1024, 512))
conv = convolutional(conv, (3, 3, 512, 1024))
conv_lbbox = convolutional(conv, (1, 1, 1024, 3 * (NUM_CLASS + 5)), activate=False, bn=False)
network = Model(input_layer, [conv_sbbox, conv_mbbox, conv_lbbox])
if pretrained:
restore_params(network, model_path='model/yolov4_model.npz')
return network
def restore_params(network, model_path='models.npz'):
logging.info("Restore pre-trained weights")
try:
npz = np.load(model_path, allow_pickle=True)
except IOError:
print("Model file not found. Download it and place it under model/")
print("Weights download:", weights_url['link'], "password:", weights_url['password'])
return
txt_path = 'model/yolov4_weights_config.txt'
with open(txt_path, "r") as f:
lines = f.readlines()
for i in range(len(lines)):
network.all_weights[i].assign(npz[lines[i].strip()])
logging.info(" Loading weights %s in %s" % (network.all_weights[i].shape, network.all_weights[i].name))
#! /usr/bin/python
# -*- coding: utf-8 -*-
from tensorlayer.app import YOLOv4
from tensorlayer.app import CGCNN
from tensorlayer import logging
from tensorlayer.app import yolo4_input_processing, yolo4_output_processing, result_to_json
class object_detection(object):
"""Model encapsulation.
Parameters
----------
model_name : str
Choose the model to inference.
Methods
---------
__init__()
Initialize the model.
__call__()
(1) Format the input and output. (2) Run model inference.
list()
Abstract method. Returns a list of available model names.
Examples
---------
Object detection on MS-COCO with YOLOv4, see `tutorial_object_detection_yolov4.py
<https://github.com/tensorlayer/tensorlayer/blob/master/example/app_tutorials/tutorial_object_detection_yolov4.py>`__
With TensorLayer
>>> # get the whole model
>>> net = tl.app.computer_vision.object_detection('yolo4-mscoco')
>>> # use it for inference
>>> output = net(img)
"""
def __init__(self, model_name='yolo4-mscoco'):
self.model_name = model_name
if self.model_name == 'yolo4-mscoco':
self.model = YOLOv4(NUM_CLASS=80, pretrained=True)
else:
raise ValueError("Model '%s' is not supported." % model_name)
def __call__(self, input_data):
if self.model_name == 'yolo4-mscoco':
batch_data = yolo4_input_processing(input_data)
feature_maps = self.model(batch_data, is_train=False)
pred_bbox = yolo4_output_processing(feature_maps)
output = result_to_json(input_data, pred_bbox)
else:
raise NotImplementedError
return output
def __repr__(self):
s = '{classname}(model_name={model_name}, model_structure={model})'
return s.format(classname=self.__class__.__name__, **self.__dict__)
@property
def list(self):
logging.info("The model name list: 'yolov4-mscoco', 'lcn'")
class human_pose_estimation(object):
"""Model encapsulation.
Parameters
----------
model_name : str
Choose the model to inference.
Methods
---------
__init__()
Initialize the model.
__call__()
(1) Format the input and output. (2) Run model inference.
list()
Abstract method. Returns a list of available model names.
Examples
---------
LCN to estimate 3D human poses from 2D poses, see `tutorial_human_3dpose_estimation_LCN.py
<https://github.com/tensorlayer/tensorlayer/blob/master/example/app_tutorials/tutorial_human_3dpose_estimation_LCN.py>`__
With TensorLayer
>>> # get the whole model
>>> net = tl.app.computer_vision.human_pose_estimation('3D-pose')
>>> # use it for inference
>>> output = net(pose_2d)
"""
def __init__(self, model_name='3D-pose'):
self.model_name = model_name
if self.model_name == '3D-pose':
self.model = CGCNN(pretrained=True)
else:
raise ValueError("Model '%s' is not supported." % model_name)
def __call__(self, input_data):
if self.model_name == '3D-pose':
output = self.model(input_data, is_train=False)
else:
raise NotImplementedError
return output
def __repr__(self):
s = '{classname}(model_name={model_name}, model_structure={model})'
return s.format(classname=self.__class__.__name__, **self.__dict__)
@property
def list(self):
logging.info("The model name list: '3D-pose'")
#! /usr/bin/python
# -*- coding: utf-8 -*-
from .common import *
from .LCN import CGCNN
#! /usr/bin/python
# -*- coding: utf-8 -*-
"""
# Reference:
- [pose_lcn](
https://github.com/rujiewu/pose_lcn)
- [3d-pose-baseline](
https://github.com/una-dinosauria/3d-pose-baseline)
"""
import tensorflow as tf
import numpy as np
import pickle
import matplotlib.pyplot as plt
import os
import matplotlib.gridspec as gridspec
H36M_NAMES = [''] * 17
H36M_NAMES[0] = 'Hip'
H36M_NAMES[1] = 'RHip'
H36M_NAMES[2] = 'RKnee'
H36M_NAMES[3] = 'RFoot'
H36M_NAMES[4] = 'LHip'
H36M_NAMES[5] = 'LKnee'
H36M_NAMES[6] = 'LFoot'
H36M_NAMES[7] = 'Belly'
H36M_NAMES[8] = 'Neck'
H36M_NAMES[9] = 'Nose'
H36M_NAMES[10] = 'Head'
H36M_NAMES[11] = 'LShoulder'
H36M_NAMES[12] = 'LElbow'
H36M_NAMES[13] = 'LHand'
H36M_NAMES[14] = 'RShoulder'
H36M_NAMES[15] = 'RElbow'
H36M_NAMES[16] = 'RHand'
IN_F = 2
IN_JOINTS = 17
OUT_JOINTS = 17
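# 17 x 17 joint adjacency matrix of the Human3.6M skeleton; entry (i, j) = 1
# marks joints allowed to exchange information in the locally connected layers.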
neighbour_matrix = np.array(
[
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0., 1., 1., 0.],
[1., 1., 1., 1., 1., 1., 0., 1., 1., 1., 0., 1., 1., 0., 1., 1., 0.],
[1., 1., 1., 1., 1., 0., 0., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0.],
[1., 1., 1., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 0., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0., 1., 1., 0.],
[1., 1., 0., 0., 1., 1., 1., 1., 1., 0., 0., 1., 0., 0., 1., 0., 0.],
[1., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 0., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 0., 0., 1., 0., 0., 1., 1., 1., 1., 1., 1., 0., 1., 1., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 0., 0., 1., 0., 0.],
[1., 1., 1., 0., 1., 1., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0.],
[1., 1., 0., 0., 1., 0., 0., 1., 1., 1., 0., 1., 1., 1., 1., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 1., 1., 1., 0., 0., 0.],
[1., 1., 1., 0., 1., 1., 0., 1., 1., 1., 1., 1., 1., 0., 1., 1., 1.],
[1., 1., 0., 0., 1., 0., 0., 1., 1., 1., 0., 1., 0., 0., 1., 1., 1.],
[0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 1.]
]
)
ROOT_PATH = '../../examples/app_tutorials/data/'
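# Zero out weight entries between non-neighbouring joints: the dense weight is
# viewed as [IN_JOINTS, in_F, IN_JOINTS, out_F] and multiplied by the transposed
# neighbour matrix, turning a fully connected layer into a locally connected one.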
def mask_weight(weight):
weights = tf.clip_by_norm(weight, 1)
L = neighbour_matrix.T
mask = tf.constant(L)
input_size, output_size = weights.get_shape()
input_size, output_size = int(input_size), int(output_size)
assert input_size % IN_JOINTS == 0 and output_size % IN_JOINTS == 0
in_F = int(input_size / IN_JOINTS)
out_F = int(output_size / IN_JOINTS)
weights = tf.reshape(weights, [IN_JOINTS, in_F, IN_JOINTS, out_F])
mask = tf.reshape(mask, [IN_JOINTS, 1, IN_JOINTS, 1])
weights = tf.cast(weights, dtype=tf.float32)
mask = tf.cast(mask, dtype=tf.float32)
masked_weights = weights * mask
masked_weights = tf.reshape(masked_weights, [input_size, output_size])
return masked_weights
def flip_data(data):
"""
horizontal flip
data: [N, 17*k] or [N, 17, k], i.e. [x, y], [x, y, confidence] or [x, y, z]
Return
result: [2N, 17*k] or [2N, 17, k]
"""
left_joints = [4, 5, 6, 11, 12, 13]
right_joints = [1, 2, 3, 14, 15, 16]
flipped_data = data.copy().reshape((len(data), 17, -1))
flipped_data[:, :, 0] *= -1 # flip x of all joints
flipped_data[:, left_joints + right_joints] = flipped_data[:, right_joints + left_joints]
flipped_data = flipped_data.reshape(data.shape)
result = np.concatenate((data, flipped_data), axis=0)
return result
def unflip_data(data):
"""
Average original data and flipped data
data: [2N, 17*3]
Return
result: [N, 17*3]
"""
left_joints = [4, 5, 6, 11, 12, 13]
right_joints = [1, 2, 3, 14, 15, 16]
data = data.copy().reshape((2, -1, 17, 3))
data[1, :, :, 0] *= -1 # flip x of all joints
data[1, :, left_joints + right_joints] = data[1, :, right_joints + left_joints]
data = np.mean(data, axis=0)
data = data.reshape((-1, 17 * 3))
return data
class DataReader(object):
def __init__(self):
self.gt_trainset = None
self.gt_testset = None
self.dt_dataset = None
def real_read(self, subset):
file_name = 'h36m_%s.pkl' % subset
print('loading %s' % file_name)
file_path = os.path.join(ROOT_PATH, file_name)
with open(file_path, 'rb') as f:
gt = pickle.load(f)
return gt
def read_2d(self, which='scale', mode='dt_ft', read_confidence=True):
if self.gt_trainset is None:
self.gt_trainset = self.real_read('train')
if self.gt_testset is None:
self.gt_testset = self.real_read('test')
if mode == 'gt':
trainset = np.empty((len(self.gt_trainset), 17, 2)) # [N, 17, 2]
testset = np.empty((len(self.gt_testset), 17, 2)) # [N, 17, 2]
for idx, item in enumerate(self.gt_trainset):
trainset[idx] = item['joint_3d_image'][:, :2]
for idx, item in enumerate(self.gt_testset):
testset[idx] = item['joint_3d_image'][:, :2]
if read_confidence:
train_confidence = np.ones((len(self.gt_trainset), 17, 1)) # [N, 17, 1]
test_confidence = np.ones((len(self.gt_testset), 17, 1)) # [N, 17, 1]
elif mode == 'dt_ft':
file_name = 'h36m_sh_dt_ft.pkl'
file_path = os.path.join(ROOT_PATH, 'dataset', file_name)
print('loading %s' % file_name)
with open(file_path, 'rb') as f:
self.dt_dataset = pickle.load(f)
trainset = self.dt_dataset['train']['joint3d_image'][:, :, :2].copy() # [N, 17, 2]
testset = self.dt_dataset['test']['joint3d_image'][:, :, :2].copy() # [N, 17, 2]
if read_confidence:
train_confidence = self.dt_dataset['train']['confidence'].copy() # [N, 17, 1]
test_confidence = self.dt_dataset['test']['confidence'].copy() # [N, 17, 1]
else:
assert 0, 'unsupported mode %s' % mode
# normalize
if which == 'scale':
# map to [-1, 1]
for idx, item in enumerate(self.gt_trainset):
camera_name = item['camera_param']['name']
if camera_name == '54138969' or camera_name == '60457274':
res_w, res_h = 1000, 1002
elif camera_name == '55011271' or camera_name == '58860488':
res_w, res_h = 1000, 1000
else:
assert 0, '%d data item has an invalid camera name' % idx
trainset[idx, :, :] = trainset[idx, :, :] / res_w * 2 - [1, res_h / res_w]
for idx, item in enumerate(self.gt_testset):
camera_name = item['camera_param']['name']
if camera_name == '54138969' or camera_name == '60457274':
res_w, res_h = 1000, 1002
elif camera_name == '55011271' or camera_name == '58860488':
res_w, res_h = 1000, 1000
else:
assert 0, '%d data item has an invalid camera name' % idx
testset[idx, :, :] = testset[idx, :, :] / res_w * 2 - [1, res_h / res_w]
else:
assert 0, 'unsupported normalization type %s' % which
if read_confidence:
trainset = np.concatenate((trainset, train_confidence), axis=2) # [N, 17, 3]
testset = np.concatenate((testset, test_confidence), axis=2) # [N, 17, 3]
# reshape
trainset, testset = trainset.reshape((len(trainset), -1)).astype(np.float32), testset.reshape(
(len(testset), -1)
).astype(np.float32)
return trainset, testset
def read_3d(self, which='scale', mode='dt_ft'):
if self.gt_trainset is None:
self.gt_trainset = self.real_read('train')
if self.gt_testset is None:
self.gt_testset = self.real_read('test')
# normalize
train_labels = np.empty((len(self.gt_trainset), 17, 3))
test_labels = np.empty((len(self.gt_testset), 17, 3))
if which == 'scale':
# map to [-1, 1]
for idx, item in enumerate(self.gt_trainset):
camera_name = item['camera_param']['name']
if camera_name == '54138969' or camera_name == '60457274':
res_w, res_h = 1000, 1002
elif camera_name == '55011271' or camera_name == '58860488':
res_w, res_h = 1000, 1000
else:
assert 0, '%d data item has an invalid camera name' % idx
train_labels[idx, :, :2] = item['joint_3d_image'][:, :2] / res_w * 2 - [1, res_h / res_w]
train_labels[idx, :, 2:] = item['joint_3d_image'][:, 2:] / res_w * 2
for idx, item in enumerate(self.gt_testset):
camera_name = item['camera_param']['name']
if camera_name == '54138969' or camera_name == '60457274':
res_w, res_h = 1000, 1002
elif camera_name == '55011271' or camera_name == '58860488':
res_w, res_h = 1000, 1000
else:
assert 0, '%d data item has an invalid camera name' % idx
test_labels[idx, :, :2] = item['joint_3d_image'][:, :2] / res_w * 2 - [1, res_h / res_w]
test_labels[idx, :, 2:] = item['joint_3d_image'][:, 2:] / res_w * 2
else:
assert 0, 'unsupported normalization type %s' % which
# reshape
train_labels, test_labels = train_labels.reshape((-1, 17 * 3)).astype(np.float32), test_labels.reshape(
(-1, 17 * 3)
).astype(np.float32)
return train_labels, test_labels
def denormalize3D(self, data, which='scale'):
if self.gt_testset is None:
self.gt_testset = self.real_read('test')
if which == 'scale':
data = data.reshape((-1, 17, 3)).copy()
for idx, item in enumerate(self.gt_testset):
camera_name = item['camera_param']['name']
if camera_name == '54138969' or camera_name == '60457274':
res_w, res_h = 1000, 1002
elif camera_name == '55011271' or camera_name == '58860488':
res_w, res_h = 1000, 1000
else:
assert 0, '%d data item has an invalid camera name' % idx
if idx < len(data):
data[idx, :, :2] = (data[idx, :, :2] + [1, res_h / res_w]) * res_w / 2
data[idx, :, 2:] = data[idx, :, 2:] * res_w / 2
else:
break
else:
assert 0
return data
def denormalize2D(self, data, which='scale'):
if self.gt_testset is None:
self.gt_testset = self.real_read('test')
if which == 'scale':
data = data.reshape((-1, 17, 2)).copy()
for idx, item in enumerate(self.gt_testset):
camera_name = item['camera_param']['name']
if camera_name == '54138969' or camera_name == '60457274':
res_w, res_h = 1000, 1002
elif camera_name == '55011271' or camera_name == '58860488':
res_w, res_h = 1000, 1000
else:
assert 0, '%d data item has an invalid camera name' % idx
if idx < len(data):
data[idx, :, :] = (data[idx, :, :] + [1, res_h / res_w]) * res_w / 2
else:
break
else:
assert 0
return data
def show3Dpose(channels, ax, lcolor="#3498db", rcolor="#e74c3c", add_labels=False):  # blue, red
"""
Visualize a 3d skeleton
Args
channels: 51x1 vector. The pose to plot.
ax: matplotlib 3d axis to draw on
lcolor: color for left part of the body
rcolor: color for right part of the body
add_labels: whether to add coordinate labels
Returns
Nothing. Draws on ax.
"""
assert channels.size == len(H36M_NAMES) * 3, "channels should have %d entries, it has %d instead" % (len(H36M_NAMES) * 3, channels.size)
vals = np.reshape(channels, (len(H36M_NAMES), -1))
I = np.array([0, 1, 2, 0, 4, 5, 0, 7, 8, 8, 14, 15, 8, 11, 12]) # start points
J = np.array([1, 2, 3, 4, 5, 6, 7, 8, 10, 14, 15, 16, 11, 12, 13]) # end points
LR = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=bool)
# Make connection matrix
for i in np.arange(len(I)):
x, y, z = [np.array([vals[I[i], j], vals[J[i], j]]) for j in range(3)]
ax.plot(x, y, z, lw=2, c=lcolor if LR[i] else rcolor)
RADIUS = 750 # space around the subject
xroot, yroot, zroot = vals[0, 0], vals[0, 1], vals[0, 2]
ax.set_xlim3d([-RADIUS + xroot, RADIUS + xroot])
ax.set_zlim3d([-RADIUS + zroot, RADIUS + zroot])
ax.set_ylim3d([-RADIUS + yroot, RADIUS + yroot])
if add_labels:
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
# Get rid of the ticks and tick labels
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
ax.get_xaxis().set_ticklabels([])
ax.get_yaxis().set_ticklabels([])
ax.set_zticklabels([])
# Get rid of the panes (actually, make them white)
white = (1.0, 1.0, 1.0, 0.0)
ax.w_xaxis.set_pane_color(white)
ax.w_yaxis.set_pane_color(white)
# Keep z pane
# Get rid of the lines in 3d
ax.w_xaxis.line.set_color(white)
ax.w_yaxis.line.set_color(white)
ax.w_zaxis.line.set_color(white)
def show2Dpose(channels, ax, lcolor="#3498db", rcolor="#e74c3c", add_labels=False):
"""Visualize a 2d skeleton
Args
channels: 34x1 vector. The pose to plot.
ax: matplotlib axis to draw on
lcolor: color for left part of the body
rcolor: color for right part of the body
add_labels: whether to add coordinate labels
Returns
Nothing. Draws on ax.
"""
assert channels.size == len(H36M_NAMES) * 2, "channels should have %d entries, it has %d instead" % (len(H36M_NAMES) * 2, channels.size)
vals = np.reshape(channels, (len(H36M_NAMES), -1))
I = np.array([0, 1, 2, 0, 4, 5, 0, 7, 8, 8, 14, 15, 8, 11, 12]) # start points
J = np.array([1, 2, 3, 4, 5, 6, 7, 8, 10, 14, 15, 16, 11, 12, 13]) # end points
LR = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1], dtype=bool)
# Make connection matrix
for i in np.arange(len(I)):
x, y = [np.array([vals[I[i], j], vals[J[i], j]]) for j in range(2)]
ax.plot(x, y, lw=2, c=lcolor if LR[i] else rcolor)
# Get rid of the ticks
ax.set_xticks([])
ax.set_yticks([])
# Get rid of tick labels
ax.get_xaxis().set_ticklabels([])
ax.get_yaxis().set_ticklabels([])
RADIUS = 350 # space around the subject
xroot, yroot = vals[0, 0], vals[0, 1]
ax.set_xlim([-RADIUS + xroot, RADIUS + xroot])
ax.set_ylim([-RADIUS + yroot, RADIUS + yroot])
if add_labels:
ax.set_xlabel("x")
ax.set_ylabel("z")
ax.set_aspect('equal')
def visualize_3D_pose(test_data, label, result):
fig = plt.figure(figsize=(19.2, 10.8))
gs1 = gridspec.GridSpec(2, 6) # 2 rows, 6 columns
gs1.update(wspace=-0.00, hspace=0.05) # set the spacing between axes.
plt.axis('off')
subplot_idx, exidx = 1, 1
nsamples = 4
for i in np.arange(nsamples):
# Plot 2d pose
ax1 = plt.subplot(gs1[subplot_idx - 1])
p2d = test_data[exidx, :]
show2Dpose(p2d, ax1)
ax1.invert_yaxis()
# Plot 3d gt
ax2 = plt.subplot(gs1[subplot_idx], projection='3d')
p3d = label[exidx, :]
show3Dpose(p3d, ax2)
# Plot 3d predictions
ax3 = plt.subplot(gs1[subplot_idx + 1], projection='3d')
p3d = result[exidx, :]
show3Dpose(p3d, ax3, lcolor="#9b59b6", rcolor="#2ecc71")
exidx = exidx + 1
subplot_idx = subplot_idx + 3
plt.show()
#! /usr/bin/python
# -*- coding: utf-8 -*-
""" LCN to estimate 3D human poses from 2D poses.
# Reference:
- [pose_lcn](
https://github.com/rujiewu/pose_lcn)
"""
import numpy as np
import tensorflow as tf
from tensorlayer.layers import Layer, Dropout, Dense, Input, BatchNorm, Reshape, Elementwise
from tensorlayer.models import Model
from tensorlayer import logging
from .common import mask_weight, neighbour_matrix
BATCH_SIZE = 200
M_0 = 17
IN_F = 2
IN_JOINTS = 17
OUT_JOINTS = 17
F = 64
NUM_LAYERS = 3
weights_url = {'link': 'https://pan.baidu.com/s/1HBHWsAfyAlNaavw0iyUmUQ', 'password': 'ec07'}
class Base_layer(Layer):
def __init__(
self, F=F, in_joints=IN_JOINTS, out_joints=OUT_JOINTS, regularization=0.0, max_norm=True, residual=True,
mask_type='locally_connected', neighbour_matrix=neighbour_matrix, init_type='ones', in_F=IN_F
):
super().__init__()
self.F = F
self.in_joints = in_joints
self.regularizers = []
self.regularization = regularization
self.max_norm = max_norm
self.out_joints = out_joints
self.residual = residual
self.mask_type = mask_type
self.init_type = init_type
self.in_F = in_F
assert neighbour_matrix.shape[0] == neighbour_matrix.shape[1]
assert neighbour_matrix.shape[0] == in_joints
self.neighbour_matrix = neighbour_matrix
self._initialize_mask()
def _initialize_mask(self):
"""
Parameter
mask_type
locally_connected
locally_connected_learnable
init_type
same: use L to init learnable part in mask
ones: use 1 to init learnable part in mask
random: use random to init learnable part in mask
"""
if 'locally_connected' in self.mask_type:
assert self.neighbour_matrix is not None
L = self.neighbour_matrix.T
assert L.shape == (self.in_joints, self.in_joints)
if 'learnable' not in self.mask_type:
self.mask = tf.constant(L)
else:
if self.init_type == 'same':
initial_value = tf.constant(L, dtype=tf.float32)
elif self.init_type == 'ones':
initial_value = tf.ones([self.in_joints, self.out_joints], dtype=tf.float32)
elif self.init_type == 'random':
initial_value = tf.random.uniform([self.in_joints, self.out_joints], dtype=tf.float32)
var_mask = tf.Variable(initial_value, name='mask', dtype=tf.float32)
var_mask = tf.nn.softmax(var_mask, axis=0)
self.mask = var_mask * tf.constant(L != 0, dtype=tf.float32)
def _get_weights(self, name, initializer, shape, regularization=True, trainable=True):
var = tf.Variable(initial_value=initializer(shape=shape, dtype=tf.float32), name=name, trainable=trainable)
if regularization:
self.regularizers.append(tf.nn.l2_loss(var))
if trainable is True:
if self._trainable_weights is None:
self._trainable_weights = list()
self._trainable_weights.append(var)
else:
if self._nontrainable_weights is None:
self._nontrainable_weights = list()
self._nontrainable_weights.append(var)
return var
def kaiming(self, shape, dtype):
"""Kaiming initialization as described in https://arxiv.org/pdf/1502.01852.pdf
Args
shape: dimensions of the tf array to initialize
dtype: data type of the array
Returns
Tensorflow array with initial weights
"""
return (tf.random.truncated_normal(shape, dtype=dtype) * tf.sqrt(2 / float(shape[0])))
def mask_weights(self, weights):
return mask_weight(weights)
class Mask_layer(Base_layer):
def __init__(self, in_channels=17, out_channels=None, name=None):
super().__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.w_name, self.b_name = name
if self.in_channels:
self.build(None)
self._built = True
def build(self, inputs_shape):
if self.in_channels is None:
self.in_channels = inputs_shape[1]
self.weight = self._get_weights(
self.w_name, self.kaiming, [self.in_channels, self.out_channels], regularization=self.regularization != 0
)
self.bias = self._get_weights(
self.b_name, self.kaiming, [self.out_channels], regularization=self.regularization != 0
) # equal to b2leaky_relu
self.weight = tf.clip_by_norm(self.weight, 1) if self.max_norm else self.weight
self.weight = self.mask_weights(self.weight)
def forward(self, x):
outputs = tf.matmul(x, self.weight) + self.bias
return outputs
class End_layer(Base_layer):
def __init__(self):
super().__init__()
def build(self, inputs_shape):
pass
def forward(self, inputs):
x, y = inputs
x = tf.reshape(x, [-1, self.in_joints, self.in_F]) # [N, J, in_F]
y = tf.reshape(y, [-1, self.out_joints, 3]) # [N, J, 3]
y = tf.concat([x[:, :, :2] + y[:, :, :2], tf.expand_dims(y[:, :, 2], axis=-1)], axis=2) # [N, J, 3]
y = tf.reshape(y, [-1, self.out_joints * 3])
return y
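# Batch-normalize per joint: reshape [N, J * F] to [N, J, F], apply BatchNorm
# with a leaky-ReLU activation, then flatten back.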
def batch_normalization_warp(y):
_, output_size = y.get_shape()
output_size = int(output_size)
out_F = int(output_size / IN_JOINTS)
y = Reshape([-1, IN_JOINTS, out_F])(y)
y = BatchNorm(act='lrelu', epsilon=1e-3)(y)
y = Reshape([-1, output_size])(y)
return y
def two_linear_train(inputs, idx):
"""
Make a bi-linear block with optional residual connection
Args
inputs: the batch that enters the block
idx: integer. Number of layer (for naming/scoping)
Returns
y: the batch after it leaves the block
"""
output_size = IN_JOINTS * F
# Linear 1
input_size1 = int(inputs.get_shape()[1])
output = Mask_layer(in_channels=input_size1, out_channels=output_size, name=["w2" + str(idx),
"b2" + str(idx)])(inputs)
output = batch_normalization_warp(output)
output = Dropout(keep=0.8)(output)
# Linear 2
input_size2 = int(output.get_shape()[1])
output = Mask_layer(in_channels=input_size2, out_channels=output_size, name=["w3_" + str(idx),
"b3_" + str(idx)])(output)
output = batch_normalization_warp(output)
output = Dropout(keep=0.8)(output)
# Residual every 2 blocks
output = Elementwise(combine_fn=tf.add)([inputs, output])
return output
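# Build the trainable LCN: a masked (locally connected) input layer, NUM_LAYERS
# residual bi-linear blocks with dropout, a masked output layer, and an End_layer
# that adds the predicted xy offsets back onto the 2D input.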
def cgcnn_train():
input_layer = Input(shape=(BATCH_SIZE, M_0 * IN_F))
# === First layer===
output = Mask_layer(in_channels=IN_JOINTS * IN_F, out_channels=IN_JOINTS * F, name=["w1", "b1"])(input_layer)
output = batch_normalization_warp(output)
output = Dropout(keep=0.8)(output)
# === Create multiple bi-linear layers ===
for idx in range(NUM_LAYERS):
output = two_linear_train(output, idx)
# === Last layer ===
input_size4 = int(output.get_shape()[1])
output = Mask_layer(in_channels=input_size4, out_channels=OUT_JOINTS * 3, name=["w4", "b4"])(output)
# === End linear model ===
output = End_layer()([input_layer, output])
network = Model(inputs=input_layer, outputs=output)
return network
# inference
def two_linear_inference(xin):
"""
Make a bi-linear block with optional residual connection
Args
xin: the batch that enters the block
Returns
y: the batch after it leaves the block
"""
output_size = IN_JOINTS * F
# Linear 1
output = Dense(n_units=output_size, act=None)(xin)
output = batch_normalization_warp(output)
# output = Dropout(keep=0.8)(output)
# Linear 2
output = Dense(n_units=output_size, act=None)(output)
output = batch_normalization_warp(output)
# output = Dropout(keep=0.8)(output)
# Residual every 2 blocks
y = Elementwise(tf.add)([xin, output])
return y
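# Inference-time variant of the network above: Dense layers stand in for the
# masked layers (the masking is applied to the weights when they are restored)
# and dropout is disabled.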
def cgcnn_inference():
input_layer = Input(shape=(BATCH_SIZE, M_0 * IN_F))
# === First layer===
output = Dense(n_units=IN_JOINTS * F, act=None)(input_layer)
output = batch_normalization_warp(output)
# output = Dropout(keep=0.8)(output)
# === Create multiple bi-linear layers ===
for i in range(3):
output = two_linear_inference(output)
# === Last layer ===
output = Dense(n_units=OUT_JOINTS * 3, act=None)(output)
output = End_layer()([input_layer, output])
network = Model(inputs=input_layer, outputs=output)
return network
def restore_params(network, model_path='model.npz'):
logging.info("Restore pre-trained weights")
try:
npz = np.load(model_path, allow_pickle=True)
except IOError:
print("Model file not found. Download it and place it under model/")
print("Weights download:", weights_url['link'], "password:", weights_url['password'])
return
txt_path = 'model/pose_weights_config.txt'
with open(txt_path, "r") as f:
lines = f.readlines()
for i in range(len(lines)):
# 2-D weights belong to the locally connected layers and must be masked
if len(npz[lines[i].strip()].shape) == 2:
_weight = mask_weight(npz[lines[i].strip()])
else:
_weight = npz[lines[i].strip()]
network.all_weights[i].assign(_weight)
logging.info(" Loading weights %s in %s" % (network.all_weights[i].shape, network.all_weights[i].name))
def CGCNN(pretrained=True):
"""Pre-trained LCN model.
Parameters
------------
pretrained : boolean
Whether to load pretrained weights. Default True.
Examples
---------
LCN to estimate 3D human poses from 2D poses, see `computer_vision.py
<https://github.com/tensorlayer/tensorlayer/blob/master/tensorlayer/app/computer_vision.py>`__
With TensorLayer
>>> # get the whole model, without pre-trained LCN parameters
>>> lcn = tl.app.CGCNN(pretrained=False)
>>> # get the whole model, restore pre-trained LCN parameters
>>> lcn = tl.app.CGCNN(pretrained=True)
>>> # use it for inference
>>> output = lcn(pose_2d, is_train=False)
"""
if pretrained:
network = cgcnn_inference()
restore_params(network, model_path='model/lcn_model.npz')
else:
network = cgcnn_train()
return network
+182
-171
Metadata-Version: 2.1
Name: tensorlayer
-Version: 2.2.3
+Version: 2.2.5
Summary: High Level Tensorflow Deep Learning Library for Researcher and Engineer.

@@ -12,172 +12,2 @@ Home-page: https://github.com/tensorlayer/tensorlayer

Download-URL: https://github.com/tensorlayer/tensorlayer
Description: |TENSORLAYER-LOGO|
|Awesome| |Documentation-EN| |Documentation-CN| |Book-CN| |Downloads|
|PyPI| |PyPI-Prerelease| |Commits-Since| |Python| |TensorFlow|
|Travis| |Docker| |RTD-EN| |RTD-CN| |PyUP| |Docker-Pulls| |Code-Quality|
|JOIN-SLACK-LOGO|
TensorLayer is a novel TensorFlow-based deep learning and reinforcement
learning library designed for researchers and engineers. It provides a
large collection of customizable neural layers / functions that are key
to building real-world AI applications. TensorLayer won the 2017
Best Open Source Software award from the `ACM Multimedia
Society <http://www.acmmm.org/2017/mm-2017-awardees/>`__.
Design Features
=================
TensorLayer is a new deep learning library designed with simplicity, flexibility and high performance in mind.
- **Simplicity** : TensorLayer has a high-level layer/model abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive `examples <https://github.com/tensorlayer/awesome-tensorlayer>`__.
- **Flexibility** : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.
- **Zero-cost Abstraction** : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).
TensorLayer stands at a unique spot in the TensorFlow wrappers. Other wrappers like Keras and TFLearn
hide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic,
making it easy to learn while being flexible enough to cope with complex AI tasks.
TensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from Peking University,
Imperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.
Install
=======
TensorLayer has prerequisites including TensorFlow, numpy, and others. For GPU support, CUDA and cuDNN are required.
The simplest way to install TensorLayer is to use the Python Package Index (PyPI):
.. code:: bash
# for the latest stable version
pip install --upgrade tensorlayer
# for the latest release candidate
pip install --upgrade --pre tensorlayer
# if you want to install the additional dependencies, you can also run
pip install --upgrade tensorlayer[all] # all additional dependencies
pip install --upgrade tensorlayer[extra] # only the `extra` dependencies
pip install --upgrade tensorlayer[contrib_loggers] # only the `contrib_loggers` dependencies
Alternatively, you can install the latest or development version by pulling directly from GitHub:
.. code:: bash
pip install https://github.com/tensorlayer/tensorlayer/archive/master.zip
# or
# pip install https://github.com/tensorlayer/tensorlayer/archive/<branch-name>.zip
Using Docker - a ready-to-use environment
-----------------------------------------
The `TensorLayer
containers <https://hub.docker.com/r/tensorlayer/tensorlayer/>`__ are
built on top of the official `TensorFlow
containers <https://hub.docker.com/r/tensorflow/tensorflow/>`__:
Containers with CPU support
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: bash

    # for CPU version and Python 2
    docker pull tensorlayer/tensorlayer:latest
    docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer/tensorlayer:latest

    # for CPU version and Python 3
    docker pull tensorlayer/tensorlayer:latest-py3
    docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer/tensorlayer:latest-py3
Containers with GPU support
~~~~~~~~~~~~~~~~~~~~~~~~~~~
NVIDIA-Docker is required for these containers to work: `Project
Link <https://github.com/NVIDIA/nvidia-docker>`__
.. code:: bash

    # for GPU version and Python 2
    docker pull tensorlayer/tensorlayer:latest-gpu
    nvidia-docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer/tensorlayer:latest-gpu

    # for GPU version and Python 3
    docker pull tensorlayer/tensorlayer:latest-gpu-py3
    nvidia-docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer/tensorlayer:latest-gpu-py3
Contribute
==========
Please read the `Contributor
Guideline <https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md>`__
before submitting your PRs.
Cite
====
If you find this project useful, we would be grateful if you cite the
TensorLayer paper:
::

    @article{tensorlayer2017,
        author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
        journal = {ACM Multimedia},
        title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
        url     = {http://tensorlayer.org},
        year    = {2017}
    }
License
=======
TensorLayer is released under the Apache 2.0 license.
.. |TENSORLAYER-LOGO| image:: https://raw.githubusercontent.com/tensorlayer/tensorlayer/master/img/tl_transparent_logo.png
   :target: https://tensorlayer.readthedocs.io/
.. |JOIN-SLACK-LOGO| image:: https://raw.githubusercontent.com/tensorlayer/tensorlayer/master/img/join_slack.png
   :target: https://join.slack.com/t/tensorlayer/shared_invite/enQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc
.. |Awesome| image:: https://awesome.re/mentioned-badge.svg
   :target: https://github.com/tensorlayer/awesome-tensorlayer
.. |Documentation-EN| image:: https://img.shields.io/badge/documentation-english-blue.svg
   :target: https://tensorlayer.readthedocs.io/
.. |Documentation-CN| image:: https://img.shields.io/badge/documentation-%E4%B8%AD%E6%96%87-blue.svg
   :target: https://tensorlayercn.readthedocs.io/
.. |Book-CN| image:: https://img.shields.io/badge/book-%E4%B8%AD%E6%96%87-blue.svg
   :target: http://www.broadview.com.cn/book/5059/
.. |Downloads| image:: http://pepy.tech/badge/tensorlayer
   :target: http://pepy.tech/project/tensorlayer
.. |PyPI| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/github/release/tensorlayer/tensorlayer.svg?label=PyPI%20-%20Release
   :target: https://pypi.org/project/tensorlayer/
.. |PyPI-Prerelease| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/github/release/tensorlayer/tensorlayer/all.svg?label=PyPI%20-%20Pre-Release
   :target: https://pypi.org/project/tensorlayer/
.. |Commits-Since| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/github/commits-since/tensorlayer/tensorlayer/latest.svg
   :target: https://github.com/tensorlayer/tensorlayer/compare/1.10.1...master
.. |Python| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/pypi/pyversions/tensorlayer.svg
   :target: https://pypi.org/project/tensorlayer/
.. |TensorFlow| image:: https://img.shields.io/badge/tensorflow-1.6.0+-blue.svg
   :target: https://github.com/tensorflow/tensorflow/releases
.. |Travis| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/travis/tensorlayer/tensorlayer/master.svg?label=Travis
   :target: https://travis-ci.org/tensorlayer/tensorlayer
.. |Docker| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/circleci/project/github/tensorlayer/tensorlayer/master.svg?label=Docker%20Build
   :target: https://circleci.com/gh/tensorlayer/tensorlayer/tree/master
.. |RTD-EN| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/readthedocs/tensorlayer/latest.svg?label=ReadTheDocs-EN
   :target: https://tensorlayer.readthedocs.io/
.. |RTD-CN| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/readthedocs/tensorlayercn/latest.svg?label=ReadTheDocs-CN
   :target: https://tensorlayercn.readthedocs.io/
.. |PyUP| image:: https://pyup.io/repos/github/tensorlayer/tensorlayer/shield.svg
   :target: https://pyup.io/repos/github/tensorlayer/tensorlayer/
.. |Docker-Pulls| image:: http://ec2-35-178-47-120.eu-west-2.compute.amazonaws.com/docker/pulls/tensorlayer/tensorlayer.svg
   :target: https://hub.docker.com/r/tensorlayer/tensorlayer/
.. |Code-Quality| image:: https://api.codacy.com/project/badge/Grade/d6b118784e25435498e7310745adb848
   :target: https://www.codacy.com/app/tensorlayer/tensorlayer
Keywords: deep learning,machine learning,computer vision,nlp,supervised learning,unsupervised learning,reinforcement learning,tensorflow

@@ -217,1 +47,182 @@ Platform: UNKNOWN

Provides-Extra: all_gpu_dev
License-File: LICENSE.rst

@@ -110,3 +110,3 @@ |TENSORLAYER-LOGO|

If you find this project useful, we would be grateful if you cite the
TensorLayer paper:
TensorLayer papers.

@@ -123,2 +123,10 @@ ::

@inproceedings{tensorlayer2021,
    title        = {Tensorlayer 3.0: A Deep Learning Library Compatible With Multiple Backends},
    author       = {Lai, Cheng and Han, Jiarong and Dong, Hao},
    booktitle    = {2021 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
    pages        = {1--3},
    year         = {2021},
    organization = {IEEE}
}
License

@@ -125,0 +133,0 @@ =======

[tool:pytest]
testpaths = tests/
addopts = --ignore=tests/test_documentation.py
    --ignore=tests/test_yapf_format.py
    --ignore=tests/pending/test_decorators.py
    --ignore=tests/pending/test_documentation.py
    --ignore=tests/pending/test_logging.py
    --ignore=tests/pending/test_pydocstyle.py
    --ignore=tests/pending/test_layers_padding.py
    --ignore=tests/pending/test_timeout.py
    --ignore=tests/pending/test_layers_super_resolution.py
    --ignore=tests/pending/test_reuse_mlp.py
    --ignore=tests/pending/test_layers_importer.py
    --ignore=tests/pending/test_layers_time_distributed.py
    --ignore=tests/pending/test_layers_spatial_transformer.py
    --ignore=tests/pending/test_layers_stack.py
    --ignore=tests/pending/test_mnist_simple.py
    --ignore=tests/pending/test_tf_layers.py
    --ignore=tests/pending/test_array_ops.py
    --ignore=tests/pending/test_layers_basic.py
    --ignore=tests/pending/test_models.py
    --ignore=tests/pending/test_optimizer_amsgrad.py
    --ignore=tests/pending/test_logging_hyperdash.py
    --ignore=tests/pending/test_yapf_format.py
    --ignore=tests/pending/test_layers_normalization.py
    --ignore=tests/pending/test_utils_predict.py
    --ignore=tests/pending/test_layers_flow_control.py
    --ignore=tests/performance_test/vgg/tl2-autograph.py
    --ignore=tests/performance_test/vgg/tf2-eager.py
    --ignore=tests/performance_test/vgg/exp_config.py
    --ignore=tests/performance_test/vgg/tl2-eager.py
    --ignore=tests/performance_test/vgg/tf2-autograph.py
    --ignore=tests/performance_test/vgg/keras_test.py
    --ignore=tests/performance_test/vgg/pytorch_test.py

@@ -67,4 +35,4 @@ [flake8]

allow_multiline_lambdas = True
split_penalty_for_added_line_split = 10
split_penalty_after_opening_bracket = 500
SPLIT_PENALTY_FOR_ADDED_LINE_SPLIT = 10
SPLIT_PENALTY_AFTER_OPENING_BRACKET = 500

@@ -71,0 +39,0 @@ [egg_info]

@@ -114,10 +114,2 @@ #!/usr/bin/env python

classifiers=[
# How mature is this project? Common values are
# 1 - Planning
# 2 - Pre-Alpha
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
# 6 - Mature
# 7 - Inactive
'Development Status :: 5 - Production/Stable',

@@ -124,0 +116,0 @@

Metadata-Version: 2.1
Name: tensorlayer
Version: 2.2.3
Version: 2.2.5
Summary: High Level Tensorflow Deep Learning Library for Researcher and Engineer.

@@ -12,172 +12,2 @@ Home-page: https://github.com/tensorlayer/tensorlayer

Download-URL: https://github.com/tensorlayer/tensorlayer
Description: |TENSORLAYER-LOGO|


@@ -100,3 +100,3 @@ imageio>=2.5.0

hyperdash<0.16,>=0.15
tensorflow-gpu>=2.0.0-alpha0
tensorflow-gpu>=2.0.0-rc1

@@ -132,3 +132,3 @@ [all_gpu_dev]

isort==4.3.21
tensorflow-gpu>=2.0.0-alpha0
tensorflow-gpu>=2.0.0-rc1

@@ -179,2 +179,2 @@ [contrib_loggers]

[tf_gpu]
tensorflow-gpu>=2.0.0-alpha0
tensorflow-gpu>=2.0.0-rc1

@@ -0,1 +1,2 @@

LICENSE.rst
README.rst

@@ -25,2 +26,10 @@ setup.cfg

tensorlayer.egg-info/top_level.txt
tensorlayer/app/__init__.py
tensorlayer/app/computer_vision.py
tensorlayer/app/computer_vision_object_detection/__init__.py
tensorlayer/app/computer_vision_object_detection/common.py
tensorlayer/app/computer_vision_object_detection/yolov4.py
tensorlayer/app/human_pose_estimation/LCN.py
tensorlayer/app/human_pose_estimation/__init__.py
tensorlayer/app/human_pose_estimation/common.py
tensorlayer/cli/__init__.py

@@ -27,0 +36,0 @@ tensorlayer/cli/__main__.py

@@ -47,2 +47,3 @@ #!/usr/bin/env python

from tensorlayer import utils
from tensorlayer import app

@@ -49,0 +50,0 @@ from tensorlayer.lazy_imports import LazyImport

#! /usr/bin/python
# -*- coding: utf-8 -*-
"""The tensorlayer.cli module provides a command-line tool for some common tasks."""

@@ -229,3 +229,2 @@ #! /usr/bin/python

self.channel_axis = -1 if data_format == 'channels_last' else 1
self.axes = None

@@ -292,2 +291,3 @@

self.channel_axis = len(inputs.shape) - 1 if self.data_format == 'channels_last' else 1
if self.axes is None:

@@ -294,0 +294,0 @@ self.axes = [i for i in range(len(inputs.shape)) if i != self.channel_axis]
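The hunks above defer the channel-axis computation from construction time to build time, deriving it from the rank of the actual inputs. A small sketch of the resulting axis selection, assuming a 4-D NHWC input (illustrative values, not the library code itself):

inputs_shape = (32, 28, 28, 64)    # batch, height, width, channels
data_format = 'channels_last'
channel_axis = len(inputs_shape) - 1 if data_format == 'channels_last' else 1
axes = [i for i in range(len(inputs_shape)) if i != channel_axis]
print(channel_axis, axes)          # -> 3 [0, 1, 2]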

@@ -250,4 +250,6 @@ #! /usr/bin/python

sequence_length = [i - 1 if i >= 1 else 0 for i in sequence_length]
sequence_length = tl.layers.retrieve_seq_length_op3(inputs)
sequence_length = [i - 1 if i >= 1 else 0 for i in sequence_length]
# set warning

@@ -254,0 +256,0 @@ # if (not self.return_last_output) and sequence_length is not None:
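Read together, the new lines retrieve the per-sample sequence lengths before shifting them; a sketch of the resulting order of operations (names as in the hunk above, with `inputs` assumed to be a padded batch):

sequence_length = tl.layers.retrieve_seq_length_op3(inputs)           # per-sample valid lengths
sequence_length = [i - 1 if i >= 1 else 0 for i in sequence_length]   # index of last valid step, clamped at 0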

@@ -213,3 +213,4 @@ import os

for co, check_argu in enumerate([inputs, outputs]):
if isinstance(check_argu, tf_ops._TensorLike) or tf_ops.is_dense_tensor_like(check_argu):
if isinstance(check_argu,
(tf.Tensor, tf.SparseTensor, tf.Variable)) or tf_ops.is_dense_tensor_like(check_argu):
pass

@@ -223,4 +224,5 @@ elif isinstance(check_argu, list):

for idx in range(len(check_argu)):
if not isinstance(check_argu[idx], tf_ops._TensorLike) or not tf_ops.is_dense_tensor_like(
check_argu[idx]):
if not isinstance(check_argu[idx],
(tf.Tensor, tf.SparseTensor, tf.Variable)) or not tf_ops.is_dense_tensor_like(
check_argu[idx]):
raise TypeError(

@@ -227,0 +229,0 @@ "The argument `%s` should be either Tensor or a list of Tensor " % (check_order[co]) +
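The change replaces the private tf_ops._TensorLike class, which newer TensorFlow releases removed, with an explicit tuple of public tensor types. A minimal sketch of the updated predicate (simplified; the is_dense_tensor_like fallback shown in the hunk is omitted):

import tensorflow as tf

def _is_tensor_like(arg):
    # 2.2.5 tests against public TF types instead of the removed private class.
    return isinstance(arg, (tf.Tensor, tf.SparseTensor, tf.Variable))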

@@ -7,3 +7,3 @@ #! /usr/bin/python

MINOR = 2
PATCH = 3
PATCH = 5
PRE_RELEASE = ''

@@ -10,0 +10,0 @@ # Use the following formatting: (major, minor, patch, prerelease)
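Given the formatting comment in the hunk, the 2.2.5 release string is presumably assembled along these lines (a sketch of the convention, not the package's exact code):

MAJOR, MINOR, PATCH, PRE_RELEASE = 2, 2, 5, ''
VERSION = (MAJOR, MINOR, PATCH, PRE_RELEASE)
__version__ = '.'.join(str(v) for v in VERSION[:3]) + PRE_RELEASE   # -> '2.2.5'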

@@ -8,5 +8,5 @@ #! /usr/bin/python

import numpy as np
import tensorlayer as tl
from tensorlayer.lazy_imports import LazyImport
import colorsys, random

@@ -20,14 +20,5 @@ cv2 = LazyImport("cv2")

__all__ = [
'read_image',
'read_images',
'save_image',
'save_images',
'draw_boxes_and_labels_to_image',
'draw_mpii_people_to_image',
'frame',
'CNN2d',
'images2d',
'tsne_embedding',
'draw_weights',
'W',
'read_image', 'read_images', 'save_image', 'save_images', 'draw_boxes_and_labels_to_image',
'draw_mpii_people_to_image', 'frame', 'CNN2d', 'images2d', 'tsne_embedding', 'draw_weights', 'W',
'draw_boxes_and_labels_to_image_with_json'
]

@@ -667,1 +658,64 @@

W = draw_weights
def draw_boxes_and_labels_to_image_with_json(image, json_result, class_list, save_name=None):
    """Draw bboxes and class labels on an image. Return the image with bboxes.

    Parameters
    -----------
    image : numpy.array
        The RGB image [height, width, channel].
    json_result : list of dict
        The object detection results in JSON format.
    class_list : list of str
        For converting category IDs to strings on the image.
    save_name : None or str
        The name of the image file (i.e. image.png); if None, the image is not saved.

    Returns
    -------
    numpy.array
        The processed image.

    References
    -----------
    - OpenCV rectangle and putText.
    - `scikit-image <http://scikit-image.org/docs/dev/api/skimage.draw.html#skimage.draw.rectangle>`__.

    """
    image_h, image_w, _ = image.shape
    num_classes = len(class_list)
    # Build one distinct color per class from evenly spaced hues, then shuffle
    # deterministically so neighboring class IDs get dissimilar colors.
    hsv_tuples = [(1.0 * x / num_classes, 1., 1.) for x in range(num_classes)]
    colors = list(map(lambda x: colorsys.hsv_to_rgb(*x), hsv_tuples))
    colors = list(map(lambda x: (int(x[0] * 255), int(x[1] * 255), int(x[2] * 255)), colors))
    random.seed(0)
    random.shuffle(colors)
    random.seed(None)
    bbox_thick = int(0.6 * (image_h + image_w) / 600)
    fontScale = 0.5
    for bbox_info in json_result:
        image_name = bbox_info['image']  # source frame name (unused here)
        category_id = bbox_info['category_id']
        if category_id < 0 or category_id >= num_classes:
            continue
        bbox = bbox_info['bbox']  # the order of coordinates is [x1, y1, x2, y2]
        score = bbox_info['score']
        bbox_color = colors[category_id]
        c1, c2 = (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3]))
        cv2.rectangle(image, c1, c2, bbox_color, bbox_thick)
        bbox_mess = '%s: %.2f' % (class_list[category_id], score)
        t_size = cv2.getTextSize(bbox_mess, 0, fontScale, thickness=bbox_thick // 2)[0]
        c3 = (c1[0] + t_size[0], c1[1] - t_size[1] - 3)
        cv2.rectangle(image, c1, (int(c3[0]), int(c3[1])), bbox_color, -1)  # filled label background
        cv2.putText(
            image, bbox_mess, (c1[0], int(c1[1] - 2)), cv2.FONT_HERSHEY_SIMPLEX, fontScale, (0, 0, 0),
            bbox_thick // 2, lineType=cv2.LINE_AA
        )
    if save_name is not None:
        save_image(image, save_name)
    return image
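A usage sketch for the new helper (the image, detection dict, and class list below are placeholder values, not from the package; the bbox order follows the comment in the function):

import numpy as np

image = np.zeros((416, 416, 3), dtype=np.uint8)   # placeholder RGB frame
json_result = [
    {'image': 'frame0.png', 'category_id': 0, 'bbox': [40, 60, 200, 220], 'score': 0.91},
]
class_list = ['person']

out = draw_boxes_and_labels_to_image_with_json(image, json_result, class_list, save_name=None)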

@@ -7,3 +7,2 @@ #!/usr/bin/env python

import tensorflow as tf
import numpy as np

@@ -10,0 +9,0 @@ import tensorlayer as tl

@@ -8,3 +8,2 @@ #!/usr/bin/env python

import numpy as np
import tensorflow as tf

@@ -11,0 +10,0 @@ import tensorlayer as tl

@@ -8,4 +8,2 @@ #!/usr/bin/env python

import nltk
import tensorflow as tf
from tensorflow.python.platform import gfile

@@ -12,0 +10,0 @@ import tensorlayer as tl

Sorry, the diff of this file is too big to display

Sorry, the diff of this file is too big to display

Sorry, the diff of this file is not supported yet