@tensorflow/tfjs-converter - npm Package Compare versions

Comparing version 0.1.0 to 0.1.1

ISSUE_TEMPLATE.md

dist/docs/doc_gen.js

@@ -58,3 +58,3 @@ "use strict";

output.push('|---|---|\n');
-transformation.forEach(function (element) {
+ops.forEach(function (element) {
output.push("|" + element.tfOpName + "|" + element.dlOpName + "|\n");

@@ -61,0 +61,0 @@ });

@@ -6,2 +6,3 @@ import * as tfc from '@tensorflow/tfjs-core';

private weightManifestUrl;
private requestOption;
private executor;

@@ -12,3 +13,3 @@ private version;

readonly modelVersion: string;
-constructor(modelUrl: string, weightManifestUrl: string);
+constructor(modelUrl: string, weightManifestUrl: string, requestOption?: RequestInit);
private getPathPrefix();

@@ -22,2 +23,2 @@ private loadRemoteProtoFile();

}
-export declare function loadFrozenModel(modelUrl: string, weightsManifestUrl: string): Promise<FrozenModel>;
+export declare function loadFrozenModel(modelUrl: string, weightsManifestUrl: string, requestOption?: RequestInit): Promise<FrozenModel>;

@@ -59,5 +59,6 @@ "use strict";

var FrozenModel = (function () {
-function FrozenModel(modelUrl, weightManifestUrl) {
+function FrozenModel(modelUrl, weightManifestUrl, requestOption) {
this.modelUrl = modelUrl;
this.weightManifestUrl = weightManifestUrl;
this.requestOption = requestOption;
this.version = 'n/a';

@@ -95,3 +96,3 @@ this.getPathPrefix();

_d.trys.push([0, 3, , 4]);
-return [4, fetch(new Request(this.modelUrl))];
+return [4, fetch(this.modelUrl, this.requestOption)];
case 1:

@@ -118,3 +119,3 @@ response = _d.sent();

_b.trys.push([0, 3, , 4]);
-return [4, fetch(new Request(this.weightManifestUrl))];
+return [4, fetch(this.weightManifestUrl, this.requestOption)];
case 1:

@@ -147,3 +148,3 @@ manifest = _b.sent();

this.version = graph.versions.producer + "." + graph.versions.minConsumer;
-return [4, tfc.loadWeights(this.weightManifest, this.pathPrefix)];
+return [4, tfc.loadWeights(this.weightManifest, this.pathPrefix, undefined, this.requestOption)];
case 2:

@@ -176,3 +177,3 @@ weightMap = _b.sent();

exports.FrozenModel = FrozenModel;
-function loadFrozenModel(modelUrl, weightsManifestUrl) {
+function loadFrozenModel(modelUrl, weightsManifestUrl, requestOption) {
return __awaiter(this, void 0, void 0, function () {

@@ -183,3 +184,3 @@ var model;

case 0:
-model = new FrozenModel(modelUrl, weightsManifestUrl);
+model = new FrozenModel(modelUrl, weightsManifestUrl, requestOption);
return [4, model.load()];

@@ -186,0 +187,0 @@ case 1:

@@ -1,2 +0,2 @@

-declare const version = "0.1.0";
+declare const version = "0.1.1";
export { version };
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
-var version = '0.1.0';
+var version = '0.1.1';
exports.version = version;
//# sourceMappingURL=version.js.map
{
"name": "@tensorflow/tfjs-converter",
"version": "0.1.0",
"version": "0.1.1",
"description": "Tensorflow model converter for javascript",

@@ -15,6 +15,6 @@ "main": "dist/index.js",

"peerDependencies": {
"@tensorflow/tfjs-core": "0.0.2"
"@tensorflow/tfjs-core": "0.6.1"
},
"devDependencies": {
"@tensorflow/tfjs-core": "0.0.2",
"@tensorflow/tfjs-core": "0.6.1",
"@types/jasmine": "~2.8.6",

@@ -21,0 +21,0 @@ "@types/seedrandom": "~2.4.27",

@@ -0,1 +1,3 @@

[![Build Status](https://travis-ci.org/tensorflow/tfjs-converter.svg?branch=master)](https://travis-ci.org/tensorflow/tfjs-converter)
# Getting started

@@ -21,7 +23,8 @@

-2. Run converter script provided the pacakge
+2. Run the converter script provided by the pip package:
Usage:
```bash
-$ tensorflowjs_coverter \
+$ tensorflowjs_converter \
--input_format=tf_saved_model \

@@ -43,3 +46,3 @@ --output_node_names='MobilenetV1/Predictions/Reshape_1' \

|`--input_format` | The format of input model, use tf_saved_model for SavedModel. |
-|`--output_node_names`| he names of the output nodes, separated by commas.|
+|`--output_node_names`| The names of the output nodes, separated by commas.|
|`--saved_model_tags` | Tags of the MetaGraphDef to load, in comma separated format. Defaults to `serve`.|

@@ -89,2 +92,11 @@

If your server requires credentials for accessing the model files, you can provide the optional requestOption parameter (a standard fetch() RequestInit).
```typescript
const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL,
{credentials: 'include'});
```
Please see [fetch() documentation](https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/fetch) for details.
## Supported operations

@@ -94,3 +106,3 @@

[full list](./docs/supported_ops.md).
-If your model uses an unsupported ops, the `tensorflowjs_coverter` script will fail and
+If your model uses unsupported ops, the `tensorflowjs_converter` script will fail and
produce a list of the unsupported ops in your model. Please file issues to let us

@@ -104,3 +116,3 @@ know what ops you need support with.

-Image-based models (MobileNet, SqueezeNet, add more if you tested) are the most supported. Models with control flow ops (e.g. RNNs) are not yet supported. The tensorflowjs_coverter script will validate the model you have and show a list of unsupported ops in your model. See [this list](./docs/supported_ops.md) for which ops are currently supported.
+Image-based models (MobileNet, SqueezeNet, add more if you tested) are the most supported. Models with control flow ops (e.g. RNNs) are not yet supported. The tensorflowjs_converter script will validate the model you have and show a list of unsupported ops in your model. See [this list](./docs/supported_ops.md) for which ops are currently supported.

@@ -119,3 +131,3 @@ 2. Will model with large weights work?

-5. Why the predict() method for inference is so much slower on the first time then the subsequent calls?
+5. Why is the predict() method for inference so much slower on the first call than the subsequent calls?

@@ -122,0 +134,0 @@ The first call's time also includes the compilation of the model's WebGL shader programs. After the first call the shader programs are cached, which makes subsequent calls much faster. You can warm up the cache by calling the predict method with all-zero inputs, right after the model finishes loading.

Sorry, the diff of this file is too big to display

Sorry, the diff of this file is too big to display
