@zappar/zappar - npm Package Compare versions

Comparing version 0.3.10 to 0.3.11

lib/anchor.d.ts

CHANGELOG.md
# Changelog
## [0.3.11] - 2021-06-02
### Added
- Typedoc comments.
### Fixed
- Some typos in `README.md`.
## [0.3.10] - 2021-04-23

@@ -4,0 +13,0 @@ ### Changed

@@ -5,8 +5,23 @@ import { Event1 } from "./event";

export declare type BarcodeFormat = barcode_format_t;
/**
* A barcode found in the camera source.
*/
export interface BarcodeFinderFound {
/**
* The text of the barcode.
*/
text: string;
/**
* The format of the barcode.
*/
format: BarcodeFormat;
}
/**
* Detects barcodes in the images from the camera.
*/
export declare class BarcodeFinder {
private _pipeline;
/**
* Emitted when a barcode becomes visible in a camera frame.
*/
onDetection: Event1<BarcodeFinderFound>;

@@ -18,10 +33,27 @@ private _lastDetected;

private _formats;
/**
* Constructs a new BarcodeFinder.
* @param _pipeline - The pipeline that this BarcodeFinder will operate within.
*/
constructor(_pipeline: Pipeline);
/**
* Destroys the barcode finder.
*/
destroy(): void;
private _frameUpdate;
/**
* Returns an array of discovered barcodes.
*/
get found(): BarcodeFinderFound[];
/**
* Gets/sets the enabled state of the barcode finder.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled(): boolean;
set enabled(e: boolean);
/**
* Gets/sets the barcode formats to scan for.
*/
get formats(): BarcodeFormat[];
set formats(f: BarcodeFormat[]);
}
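The declarations above suggest a simple wiring pattern: bind a handler to `onDetection` and toggle `enabled` around use. A minimal self-contained sketch — `BarcodeFinderLike` and `Event1Like` are illustrative stand-ins mirroring only the members used here, not the package's own types, and `format` is typed `number` in place of the real enum:

```typescript
// Illustrative stand-ins mirroring a subset of the declarations above.
interface BarcodeFinderFound {
  text: string;
  format: number; // the real API uses a BarcodeFormat enum
}

interface Event1Like<A> {
  bind(f: (a: A) => void): void;
}

interface BarcodeFinderLike {
  onDetection: Event1Like<BarcodeFinderFound>;
  enabled: boolean;
}

// Collects barcode text as detections are emitted, enabling the finder
// while a consumer is interested in results.
function collectBarcodes(finder: BarcodeFinderLike, out: string[]): void {
  finder.onDetection.bind(found => out.push(found.text));
  finder.enabled = true; // set back to false when idle to save frame-processing cost
}
```

Because the interface is structural, the same function works against the real `BarcodeFinder` instance without modification.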
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.BarcodeFinder = void 0;
const event_1 = require("./event");
const zappar_cv_1 = require("@zappar/zappar-cv");
const zappar_1 = require("./zappar");
/**
* Detects barcodes in the images from the camera.
*/
class BarcodeFinder {
/**
* Constructs a new BarcodeFinder.
* @param _pipeline - The pipeline that this BarcodeFinder will operate within.
*/
constructor(_pipeline) {
this._pipeline = _pipeline;
/**
* Emitted when a barcode becomes visible in a camera frame.
*/
this.onDetection = new event_1.Event1();

@@ -56,2 +67,5 @@ this._lastDetected = [];

}
/**
* Destroys the barcode finder.
*/
destroy() {

@@ -63,5 +77,12 @@ this._pipeline._onFrameUpdateInternal.unbind(this._frameUpdate);

}
/**
* Returns an array of discovered barcodes.
*/
get found() {
return this._found;
}
/**
* Gets/sets the enabled state of the barcode finder.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled() {

@@ -73,2 +94,5 @@ return this._z.barcode_finder_enabled(this._impl);

}
/**
* Gets/sets the barcode formats to scan for.
*/
get formats() {

@@ -75,0 +99,0 @@ return this._formats;

import { Pipeline } from "./pipeline";
/**
* Creates a source of frames from a device camera.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
export declare class CameraSource {
private _z;
private _impl;
/**
* Constructs a new CameraSource.
* @param pipeline - The pipeline that this camera source will operate within.
* @param deviceId - The camera device ID which will be used as the source.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
constructor(pipeline: Pipeline, deviceId: string);
/**
* Destroys the camera source.
*/
destroy(): void;
/**
* Starts the camera source.
*
* Starting a given source pauses any other sources within the same pipeline.
*/
start(): void;
/**
* Pauses the camera source.
*/
pause(): void;
}
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.CameraSource = void 0;
const zappar_1 = require("./zappar");
/**
* Creates a source of frames from a device camera.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
class CameraSource {
/**
* Constructs a new CameraSource.
* @param pipeline - The pipeline that this camera source will operate within.
* @param deviceId - The camera device ID which will be used as the source.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
constructor(pipeline, deviceId) {

@@ -9,8 +20,19 @@ this._z = zappar_1.z();

}
/**
* Destroys the camera source.
*/
destroy() {
this._z.camera_source_destroy(this._impl);
}
/**
* Starts the camera source.
*
* Starting a given source pauses any other sources within the same pipeline.
*/
start() {
this._z.camera_source_start(this._impl);
}
/**
* Pauses the camera source.
*/
pause() {

@@ -17,0 +39,0 @@ this._z.camera_source_pause(this._impl);

lib/event.d.ts

@@ -0,30 +1,45 @@

/**
* A type-safe event handling class that allows multiple handler functions to be registered and called when events are emitted.
*/
export declare class Event {
private _funcs;
/**
* Bind a new handler function.
* @param f - The callback function to be bound.
*/
bind(f: () => void): void;
/**
* Unbind an existing handler function.
* @param f - The callback function to be unbound.
*/
unbind(f: () => void): void;
/**
* Emit an event, calling the bound handler functions.
*/
emit(): void;
}
/**
* A type-safe event handling class that allows multiple handler functions to be registered and called when events are emitted.
* This class will pass a single argument supplied to [[emit]] to the handler functions.
*
* @typeparam A - The type of the argument passed to the handler functions through [[emit]].
*/
export declare class Event1<A> {
private _funcs;
/**
* Bind a new handler function.
* @param f - The callback function to be bound.
*/
bind(f: (a: A) => void): void;
/**
* Unbind an existing handler function.
* @param f - The callback function to be unbound.
*/
unbind(f: (a: A) => void): void;
/**
* Emit an event.
*
* @param a - The argument to pass to handler functions.
*/
emit(a: A): void;
}
/**
* As [[Event1]], but passing two arguments to the handler functions.
*/
export declare class Event2<A, B> {
private _funcs;
bind(f: (a: A, b: B) => void): void;
unbind(f: (a: A, b: B) => void): void;
emit(a: A, b: B): void;
}
/**
* As [[Event1]], but passing three arguments to the handler functions.
*/
export declare class Event3<A, B, C> {
private _funcs;
bind(f: (a: A, b: B, c: C) => void): void;
unbind(f: (a: A, b: B, c: C) => void): void;
emit(a: A, b: B, c: C): void;
}
/**
* As [[Event1]], but passing five arguments to the handler functions.
*/
export declare class Event5<A, B, C, D, E> {
private _funcs;
bind(f: (a: A, b: B, c: C, d: D, e: E) => void): void;
unbind(f: (a: A, b: B, c: C, d: D, e: E) => void): void;
emit(a: A, b: B, c: C, d: D, e: E): void;
}
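The event classes declared above follow one pattern: handlers live in an array, `bind`/`unbind` manage membership, and `emit` calls each handler with the supplied argument. A minimal self-contained sketch of the `Event1` shape (`SimpleEvent1` is a local reimplementation for illustration, not the package class):

```typescript
// Minimal sketch of the Event1 pattern declared above.
class SimpleEvent1<A> {
  private funcs: ((a: A) => void)[] = [];
  bind(f: (a: A) => void): void {
    this.funcs.push(f);
  }
  unbind(f: (a: A) => void): void {
    const i = this.funcs.indexOf(f);
    if (i > -1) this.funcs.splice(i, 1);
  }
  emit(a: A): void {
    for (const f of this.funcs) f(a);
  }
}

// Usage: handlers fire only while bound.
const onScore = new SimpleEvent1<number>();
const seen: number[] = [];
const handler = (n: number) => seen.push(n);
onScore.bind(handler);
onScore.emit(7);      // handler records 7
onScore.unbind(handler);
onScore.emit(9);      // no handlers bound; nothing recorded
```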
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Event1 = exports.Event = void 0;
/**
* A type-safe event handling class that allows multiple handler functions to be registered and called when events are emitted.
*/
class Event {

@@ -7,5 +11,13 @@ constructor() {

}
/**
* Bind a new handler function.
* @param f - The callback function to be bound.
*/
bind(f) {
this._funcs.push(f);
}
/**
* Unbind an existing handler function.
* @param f - The callback function to be unbound.
*/
unbind(f) {

@@ -17,2 +29,5 @@ const indx = this._funcs.indexOf(f);

}
/**
* Emit an event, calling the bound handler functions.
*/
emit() {

@@ -25,2 +40,8 @@ for (let i = 0, total = this._funcs.length; i < total; i++) {

exports.Event = Event;
/**
* A type-safe event handling class that allows multiple handler functions to be registered and called when events are emitted.
* This class will pass a single argument supplied to [[emit]] to the handler functions.
*
* @typeparam A - The type of the argument passed to the handler functions through [[emit]].
*/
class Event1 {

@@ -30,5 +51,13 @@ constructor() {

}
/**
* Bind a new handler function.
* @param f - The callback function to be bound.
*/
bind(f) {
this._funcs.push(f);
}
/**
* Unbind an existing handler function.
* @param f - The callback function to be unbound.
*/
unbind(f) {

@@ -40,2 +69,7 @@ const indx = this._funcs.indexOf(f);

}
/**
* Emit an event.
*
* @param a - The argument to pass to handler functions.
*/
emit(a) {

@@ -48,61 +82,1 @@ for (let i = 0, total = this._funcs.length; i < total; i++) {

exports.Event1 = Event1;
class Event2 {
constructor() {
this._funcs = [];
}
bind(f) {
this._funcs.push(f);
}
unbind(f) {
const indx = this._funcs.indexOf(f);
if (indx > -1) {
this._funcs.splice(indx, 1);
}
}
emit(a, b) {
for (let i = 0, total = this._funcs.length; i < total; i++) {
this._funcs[i](a, b);
}
}
}
exports.Event2 = Event2;
class Event3 {
constructor() {
this._funcs = [];
}
bind(f) {
this._funcs.push(f);
}
unbind(f) {
const indx = this._funcs.indexOf(f);
if (indx > -1) {
this._funcs.splice(indx, 1);
}
}
emit(a, b, c) {
for (let i = 0, total = this._funcs.length; i < total; i++) {
this._funcs[i](a, b, c);
}
}
}
exports.Event3 = Event3;
class Event5 {
constructor() {
this._funcs = [];
}
bind(f) {
this._funcs.push(f);
}
unbind(f) {
const indx = this._funcs.indexOf(f);
if (indx > -1) {
this._funcs.splice(indx, 1);
}
}
emit(a, b, c, d, e) {
for (let i = 0, total = this._funcs.length; i < total; i++) {
this._funcs[i](a, b, c, d, e);
}
}
}
exports.Event5 = Event5;
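`Event2`, `Event3` and `Event5` above differ only in arity. With variadic tuple types they could be expressed once — a possible consolidation sketched under that assumption, not part of the package's API:

```typescript
// One event class for any handler arity, using a variadic tuple type
// parameter in place of the fixed A/B/C/... parameters above.
class EventN<Args extends unknown[]> {
  private funcs: ((...args: Args) => void)[] = [];
  bind(f: (...args: Args) => void): void {
    this.funcs.push(f);
  }
  unbind(f: (...args: Args) => void): void {
    const i = this.funcs.indexOf(f);
    if (i > -1) this.funcs.splice(i, 1);
  }
  emit(...args: Args): void {
    for (const f of this.funcs) f(...args);
  }
}

// Event2<string, number> becomes EventN<[string, number]>:
const pair = new EventN<[string, number]>();
```

Keeping the fixed-arity classes does have one advantage: they predate variadic tuple types (TypeScript 4.0) and need no minimum compiler version.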
import { face_landmark_name_t as FaceLandmarkName, zappar_face_landmark_t } from "@zappar/zappar-cv";
import { FaceAnchor } from "./facetracker";
export { face_landmark_name_t as FaceLandmarkName } from "@zappar/zappar-cv";
/**
* Attaches content to a known point (landmark) on a face as it moves around in the camera view.
* Landmarks will remain accurate, even as the user's expression changes.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
export declare class FaceLandmark {
private _name;
/**
* The most recent pose of this landmark, relative to the [[FaceAnchor]] used to update it.
* A 4x4 column-major transformation matrix.
*/
pose: Float32Array;
private _z;
private _impl;
/**
* Constructs a new FaceLandmark.
* @param _name - The name of the landmark to track.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
constructor(_name: FaceLandmarkName);
/**
* Destroys the face landmark.
*/
destroy(): void;
/**
* Updates pose directly from the expression and identity in a [[FaceAnchor]].
* @param f - The anchor to derive the expression and identity from.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromFaceAnchor(f: FaceAnchor, mirror?: boolean): void;
/**
* Updates pose directly from identity and expression coefficients.
* @param identity - The identity coefficients.
* @param expression - The expression coefficients.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromIdentityExpression(identity: Float32Array, expression: Float32Array, mirror?: boolean): void;
/**
* @ignore
*/
_getImpl(): zappar_face_landmark_t;
}
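The `pose` doc comment above specifies a 4x4 column-major transformation matrix (the layout gl-matrix's `mat4` uses), so the landmark's translation sits in elements 12–14. A hypothetical helper, not part of the API, to pull it out:

```typescript
// Extracts the translation component from a 4x4 column-major pose matrix,
// as documented for FaceLandmark.pose above. In column-major order the
// translation occupies elements 12, 13 and 14.
function poseTranslation(pose: Float32Array): [number, number, number] {
  return [pose[12], pose[13], pose[14]];
}
```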
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.FaceLandmark = exports.FaceLandmarkName = void 0;
const zappar_1 = require("./zappar");
const gl_matrix_1 = require("gl-matrix");
var zappar_cv_1 = require("@zappar/zappar-cv");
Object.defineProperty(exports, "FaceLandmarkName", { enumerable: true, get: function () { return zappar_cv_1.face_landmark_name_t; } });
/**
* Attaches content to a known point (landmark) on a face as it moves around in the camera view.
* Landmarks will remain accurate, even as the user's expression changes.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
class FaceLandmark {
/**
* Constructs a new FaceLandmark.
* @param _name - The name of the landmark to track.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
constructor(_name) {
this._name = _name;
/**
* The most recent pose of this landmark, relative to the [[FaceAnchor]] used to update it.
* A 4x4 column-major transformation matrix.
*/
this.pose = gl_matrix_1.mat4.create();

@@ -14,5 +29,13 @@ this._z = zappar_1.z();

}
/**
* Destroys the face landmark.
*/
destroy() {
this._z.face_landmark_destroy(this._impl);
}
/**
* Updates pose directly from the expression and identity in a [[FaceAnchor]].
* @param f - The anchor to derive the expression and identity from.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromFaceAnchor(f, mirror) {

@@ -22,2 +45,8 @@ this._z.face_landmark_update(this._impl, f.identity, f.expression, mirror || false);

}
/**
* Updates pose directly from identity and expression coefficients.
* @param identity - The identity coefficients.
* @param expression - The expression coefficients.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromIdentityExpression(identity, expression, mirror) {

@@ -27,2 +56,5 @@ this._z.face_landmark_update(this._impl, identity, expression, mirror || false);

}
/**
* @ignore
*/
_getImpl() {

@@ -29,0 +61,0 @@ return this._impl;

import { zappar_face_mesh_t } from "@zappar/zappar-cv";
import { FaceAnchor } from "./facetracker";
/**
* A mesh that fits to the user's face and deforms as the user's expression changes.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
export declare class FaceMesh {
private _z;
private _impl;
/**
* Constructs a new FaceMesh.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
constructor();
/**
* Destroys the face mesh.
*/
destroy(): void;
/**
* Loads the data for a face mesh.
* @param src - A URL or ArrayBuffer of the source mesh data.
* @param fillMouth - If true, fills this face feature with polygons.
* @param fillEyeLeft - If true, fills this face feature with polygons.
* @param fillEyeRight - If true, fills this face feature with polygons.
* @param fillNeck - If true, fills this face feature with polygons.
* @returns A promise that's resolved once the data is loaded.
*/
load(src?: string | ArrayBuffer, fillMouth?: boolean, fillEyeLeft?: boolean, fillEyeRight?: boolean, fillNeck?: boolean): Promise<void>;
/**
* Loads the default face mesh data.
* @returns A promise that's resolved once the data is loaded.
*/
loadDefault(): Promise<void>;
/**
* Loads the default face mesh.
* @param fillMouth - If true, fills this face feature with polygons.
* @param fillEyeLeft - If true, fills this face feature with polygons.
* @param fillEyeRight - If true, fills this face feature with polygons.
* @returns A promise that's resolved once the data is loaded.
*/
loadDefaultFace(fillMouth?: boolean, fillEyeLeft?: boolean, fillEyeRight?: boolean): Promise<void>;
/**
* The full head simplified mesh covers the whole of the user's head, including some neck.
* It's ideal for drawing into the depth buffer in order to mask out the back of 3D models placed on the user's head.
* @param fillMouth - If true, fills this face feature with polygons.
* @param fillEyeLeft - If true, fills this face feature with polygons.
* @param fillEyeRight - If true, fills this face feature with polygons.
* @param fillNeck - If true, fills this face feature with polygons.
* @returns A promise that's resolved once the data is loaded.
*/
loadDefaultFullHeadSimplified(fillMouth?: boolean, fillEyeLeft?: boolean, fillEyeRight?: boolean, fillNeck?: boolean): Promise<void>;
/**
* Update the face mesh directly from a [[FaceAnchor]].
* @param f - The face anchor.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromFaceAnchor(f: FaceAnchor, mirror?: boolean): void;
/**
* Updates the face mesh directly from identity and expression coefficients.
* @param identity - The identity coefficients.
* @param expression - The expression coefficients.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromIdentityExpression(identity: Float32Array, expression: Float32Array, mirror?: boolean): void;
/**
* @returns The vertices of the mesh.
*/
get vertices(): Float32Array;
/**
* @returns The indices of the mesh.
*/
get indices(): Uint16Array;
/**
* @returns The UVs of the mesh.
*/
get uvs(): Float32Array;
/**
* @returns The normals of the mesh.
*/
get normals(): Float32Array;
/**
* @ignore
*/
_getImpl(): zappar_face_mesh_t;
}
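Before uploading the `vertices`, `indices`, `uvs` and `normals` getters above into a renderer, the buffer lengths should agree with each other. A hedged sanity-check sketch, assuming triangle indices, 3 floats per position/normal and 2 per UV (`MeshBuffers` is a stand-in mirroring the getter types, not a package type):

```typescript
// Stand-in mirroring the FaceMesh geometry getters declared above.
interface MeshBuffers {
  vertices: Float32Array;
  indices: Uint16Array;
  uvs: Float32Array;
  normals: Float32Array;
}

// Checks that the buffers describe one consistent indexed triangle mesh.
function meshBuffersConsistent(m: MeshBuffers): boolean {
  const vertexCount = m.vertices.length / 3;
  if (m.normals.length !== vertexCount * 3) return false; // one normal per vertex
  if (m.uvs.length !== vertexCount * 2) return false;     // one UV per vertex
  if (m.indices.length % 3 !== 0) return false;           // whole triangles only
  for (let k = 0; k < m.indices.length; k++) {
    if (m.indices[k] >= vertexCount) return false;        // index must hit a real vertex
  }
  return true;
}
```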

@@ -12,4 +12,13 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.FaceMesh = void 0;
const zappar_1 = require("./zappar");
/**
* A mesh that fits to the user's face and deforms as the user's expression changes.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
class FaceMesh {
/**
* Constructs a new FaceMesh.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
constructor() {

@@ -19,5 +28,17 @@ this._z = zappar_1.z();

}
/**
* Destroys the face mesh.
*/
destroy() {
this._z.face_mesh_destroy(this._impl);
}
/**
* Loads the data for a face mesh.
* @param src - A URL or ArrayBuffer of the source mesh data.
* @param fillMouth - If true, fills this face feature with polygons.
* @param fillEyeLeft - If true, fills this face feature with polygons.
* @param fillEyeRight - If true, fills this face feature with polygons.
* @param fillNeck - If true, fills this face feature with polygons.
* @returns A promise that's resolved once the data is loaded.
*/
load(src, fillMouth, fillEyeLeft, fillEyeRight, fillNeck) {

@@ -35,2 +56,6 @@ return __awaiter(this, void 0, void 0, function* () {

}
/**
* Loads the default face mesh data.
* @returns A promise that's resolved once the data is loaded.
*/
loadDefault() {

@@ -41,2 +66,9 @@ return __awaiter(this, void 0, void 0, function* () {

}
/**
* Loads the default face mesh.
* @param fillMouth - If true, fills this face feature with polygons.
* @param fillEyeLeft - If true, fills this face feature with polygons.
* @param fillEyeRight - If true, fills this face feature with polygons.
* @returns A promise that's resolved once the data is loaded.
*/
loadDefaultFace(fillMouth, fillEyeLeft, fillEyeRight) {

@@ -47,2 +79,11 @@ return __awaiter(this, void 0, void 0, function* () {

}
/**
* The full head simplified mesh covers the whole of the user's head, including some neck.
* It's ideal for drawing into the depth buffer in order to mask out the back of 3D models placed on the user's head.
* @param fillMouth - If true, fills this face feature with polygons.
* @param fillEyeLeft - If true, fills this face feature with polygons.
* @param fillEyeRight - If true, fills this face feature with polygons.
* @param fillNeck - If true, fills this face feature with polygons.
* @returns A promise that's resolved once the data is loaded.
*/
loadDefaultFullHeadSimplified(fillMouth, fillEyeLeft, fillEyeRight, fillNeck) {

@@ -53,20 +94,47 @@ return __awaiter(this, void 0, void 0, function* () {

}
/**
* Update the face mesh directly from a [[FaceAnchor]].
* @param f - The face anchor.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromFaceAnchor(f, mirror) {
this._z.face_mesh_update(this._impl, f.identity, f.expression, mirror || false);
}
/**
* Updates the face mesh directly from identity and expression coefficients.
* @param identity - The identity coefficients.
* @param expression - The expression coefficients.
* @param mirror - Pass `true` to mirror the location in the X-axis.
*/
updateFromIdentityExpression(identity, expression, mirror) {
this._z.face_mesh_update(this._impl, identity, expression, mirror || false);
}
/**
* @returns The vertices of the mesh.
*/
get vertices() {
return this._z.face_mesh_vertices(this._impl);
}
/**
* @returns The indices of the mesh.
*/
get indices() {
return this._z.face_mesh_indices(this._impl);
}
/**
* @returns The UVs of the mesh.
*/
get uvs() {
return this._z.face_mesh_uvs(this._impl);
}
/**
* @returns The normals of the mesh.
*/
get normals() {
return this._z.face_mesh_normals(this._impl);
}
/**
* @ignore
*/
_getImpl() {

@@ -73,0 +141,0 @@ return this._impl;

import { Event, Event1 } from "./event";
import { Pipeline } from "./pipeline";
import { Anchor } from "./anchor";
/**
* A point in 3D space (including orientation) in a fixed location relative to a tracked face, including identity and expression coefficients.
*/
export interface FaceAnchor extends Anchor {
/**
* Emitted when the anchor becomes visible in a camera frame.
*/
onVisible: Event;
/**
* Emitted when the anchor goes from being visible in the previous camera frame, to not being visible in the current frame.
*/
onNotVisible: Event;
/**
* A string that's unique for this anchor.
*/
id: string;
/**
* Returns the pose of this anchor, relative to the supplied camera pose.
*/
pose(cameraPose: Float32Array, mirror?: boolean): Float32Array;
/**
* Returns the pose of this anchor relative to the camera.
*/
poseCameraRelative(mirror?: boolean): Float32Array;
/**
* `true` if the anchor is visible in the current frame.
*/
visible: boolean;
/**
* The identity coefficients of the face.
*/
identity: Float32Array;
/**
* The expression coefficients of the face.
*/
expression: Float32Array;
}
/**
* Attaches content to a face as it moves around in the camera view.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
export declare class FaceTracker {
private _pipeline;
/**
* Emitted when an anchor becomes visible in a camera frame.
*/
onVisible: Event1<FaceAnchor>;
/**
* Emitted when an anchor goes from being visible in the previous camera frame, to not being visible in the current frame.
*/
onNotVisible: Event1<FaceAnchor>;
/**
* Emitted when a new anchor is created by the tracker.
*/
onNewAnchor: Event1<FaceAnchor>;
/**
* The set of currently visible anchors.
*/
visible: Set<FaceAnchor>;
/**
* A map of the available anchors by their respective IDs.
*/
anchors: Map<string, FaceAnchor>;

@@ -23,11 +62,36 @@ private _visibleLastFrame;

private _impl;
/**
* Constructs a new FaceTracker.
* @param _pipeline - The pipeline that this tracker will operate within.
*/
constructor(_pipeline: Pipeline);
/**
* Destroys the face tracker.
*/
destroy(): void;
private _frameUpdate;
/**
* Loads face tracking model data.
* @param src - A URL to, or ArrayBuffer of, model data.
* @returns A promise that's resolved once the model is loaded. It may still take a few frames for the tracker to fully initialize and detect faces.
*/
loadModel(src: string | ArrayBuffer): Promise<void>;
/**
* Loads the default face tracking model.
* @returns A promise that's resolved once the model is loaded. It may still take a few frames for the tracker to fully initialize and detect faces.
*/
loadDefaultModel(): Promise<void>;
/**
* Gets/sets the enabled state of the face tracker.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled(): boolean;
set enabled(e: boolean);
/**
* Gets/sets the maximum number of faces to track.
*
* By default only one face is tracked in any given frame. Increasing this number may reduce runtime performance.
*/
get maxFaces(): number;
set maxFaces(m: number);
}
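The `visible` set, `onVisible` and `onNotVisible` members above (together with the private `_visibleLastFrame`) imply per-frame set diffing: anchors present now but not last frame fire `onVisible`, and vice versa. A self-contained sketch of that bookkeeping, written as a generic helper rather than the tracker's actual internals:

```typescript
// Diffs the anchors visible this frame against the previous frame,
// yielding the onVisible / onNotVisible transitions implied above.
function visibilityTransitions<T>(
  previous: Set<T>,
  current: Set<T>
): { becameVisible: T[]; becameNotVisible: T[] } {
  const becameVisible = Array.from(current).filter(a => !previous.has(a));
  const becameNotVisible = Array.from(previous).filter(a => !current.has(a));
  return { becameVisible, becameNotVisible };
}
```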

@@ -12,11 +12,35 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.FaceTracker = void 0;
const event_1 = require("./event");
const zappar_1 = require("./zappar");
/**
* Attaches content to a face as it moves around in the camera view.
* @see https://docs.zap.works/universal-ar/javascript/face-tracking/
*/
class FaceTracker {
/**
* Constructs a new FaceTracker.
* @param _pipeline - The pipeline that this tracker will operate within.
*/
constructor(_pipeline) {
this._pipeline = _pipeline;
/**
* Emitted when an anchor becomes visible in a camera frame.
*/
this.onVisible = new event_1.Event1();
/**
* Emitted when an anchor goes from being visible in the previous camera frame, to not being visible in the current frame.
*/
this.onNotVisible = new event_1.Event1();
/**
* Emitted when a new anchor is created by the tracker.
*/
this.onNewAnchor = new event_1.Event1();
/**
* The set of currently visible anchors.
*/
this.visible = new Set();
/**
* A map of the available anchors by their respective IDs.
*/
this.anchors = new Map();

@@ -77,2 +101,5 @@ this._visibleLastFrame = new Set();

}
/**
* Destroys the face tracker.
*/
destroy() {

@@ -84,2 +111,7 @@ this._pipeline._onFrameUpdateInternal.unbind(this._frameUpdate);

}
/**
* Loads face tracking model data.
* @param src - A URL to, or ArrayBuffer of, model data.
* @returns A promise that's resolved once the model is loaded. It may still take a few frames for the tracker to fully initialize and detect faces.
*/
loadModel(src) {

@@ -93,2 +125,6 @@ return __awaiter(this, void 0, void 0, function* () {

}
/**
* Loads the default face tracking model.
* @returns A promise that's resolved once the model is loaded. It may still take a few frames for the tracker to fully initialize and detect faces.
*/
loadDefaultModel() {

@@ -99,2 +135,6 @@ return __awaiter(this, void 0, void 0, function* () {

}
/**
* Gets/sets the enabled state of the face tracker.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled() {

@@ -106,2 +146,7 @@ return this._z.face_tracker_enabled(this._impl);

}
/**
* Gets/sets the maximum number of faces to track.
*
* By default only one face is tracked in any given frame. Increasing this number may reduce runtime performance.
*/
get maxFaces() {

@@ -108,0 +153,0 @@ return this._z.face_tracker_max_faces(this._impl);

import { Pipeline } from "./pipeline";
/**
* Creates a source of frames from an HTML <video> or <img> element.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
export declare class HTMLElementSource {
private _z;
private _impl;
/**
* Constructs a new HTMLElementSource.
* @param pipeline - The pipeline that this source will operate within.
* @param element - The HTML source element.
*/
constructor(pipeline: Pipeline, element: HTMLVideoElement | HTMLImageElement);
/**
* Destroys the source.
*/
destroy(): void;
/**
* Starts the source sending frames into the pipeline.
*
* Starting a given source pauses any other sources within the same pipeline.
*/
start(): void;
/**
* Pauses the source.
*/
pause(): void;
}
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.HTMLElementSource = void 0;
const zappar_1 = require("./zappar");
/**
* Creates a source of frames from an HTML <video> or <img> element.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
class HTMLElementSource {
/**
* Constructs a new HTMLElementSource.
* @param pipeline - The pipeline that this source will operate within.
* @param element - The HTML source element.
*/
constructor(pipeline, element) {

@@ -9,8 +19,19 @@ this._z = zappar_1.z();

}
/**
* Destroys the source.
*/
destroy() {
this._z.html_element_source_destroy(this._impl);
}
/**
* Starts the source sending frames into the pipeline.
*
* Starting a given source pauses any other sources within the same pipeline.
*/
start() {
this._z.html_element_source_start(this._impl);
}
/**
* Pauses the source.
*/
pause() {

@@ -17,0 +38,0 @@ this._z.html_element_source_pause(this._impl);

import { Event, Event1 } from "./event";
import { Pipeline } from "./pipeline";
import { Anchor } from "./anchor";
/**
* A point in 3D space (including orientation) in a fixed location relative to a tracked image.
*/
export interface ImageAnchor extends Anchor {
/**
* Emitted when the anchor becomes visible in a camera frame.
*/
onVisible: Event;
/**
* Emitted when the anchor goes from being visible in the previous camera frame, to not being visible in the current frame.
*/
onNotVisible: Event;
/**
* A string that's unique for this anchor.
*/
id: string;
/**
* Returns the pose of this anchor, relative to the supplied camera pose.
*/
pose(cameraPose: Float32Array, mirror?: boolean): Float32Array;
/**
* Returns the pose of this anchor relative to the camera.
*/
poseCameraRelative(mirror?: boolean): Float32Array;
/**
* `true` if the anchor is visible in the current frame.
*/
visible: boolean;
}
/**
* Attaches content to a known image as it moves around in the camera view.
* @see https://docs.zap.works/universal-ar/javascript/image-tracking/
*/
export declare class ImageTracker {
private _pipeline;
/**
* Emitted when an anchor becomes visible in a camera frame.
*/
onVisible: Event1<ImageAnchor>;
/**
* Emitted when an anchor goes from being visible in the previous camera frame, to not being visible in the current frame.
*/
onNotVisible: Event1<ImageAnchor>;
/**
* Emitted when a new anchor is created by the tracker.
*/
onNewAnchor: Event1<ImageAnchor>;
/**
* The set of currently visible anchors.
*/
visible: Set<ImageAnchor>;
/**
* A map of the available image anchors by their respective IDs.
*/
anchors: Map<string, ImageAnchor>;

@@ -21,8 +54,27 @@ private _visibleLastFrame;

private _impl;
/**
* Constructs a new ImageTracker.
* @param _pipeline - The pipeline that this tracker will operate within.
* @param targetFile - The .zpt target file from the source image you'd like to track.
* @see https://docs.zap.works/universal-ar/zapworks-cli/
*/
constructor(_pipeline: Pipeline, targetFile?: string | ArrayBuffer);
/**
* Destroys the image tracker.
*/
destroy(): void;
private _frameUpdate;
/**
* Loads a target file.
* @param src - A URL to, or an ArrayBuffer of, the target file from the source image you'd like to track.
* @see https://docs.zap.works/universal-ar/zapworks-cli/
* @returns A promise that's resolved once the file is downloaded. It may still take a few frames for the tracker to fully initialize and detect images.
*/
loadTarget(src: string | ArrayBuffer): Promise<void>;
/**
* Gets/sets the enabled state of the image tracker.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled(): boolean;
set enabled(e: boolean);
}
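Per the `loadTarget` docs above, the returned promise resolves once the file is downloaded, and detection may still take a few frames after that. A hedged setup sketch; `ImageTrackerLike` is an illustrative stand-in mirroring only the members used here:

```typescript
// Stand-in mirroring a subset of the ImageTracker declaration above.
interface ImageTrackerLike {
  loadTarget(src: string | ArrayBuffer): Promise<void>;
  enabled: boolean;
}

// Loads a .zpt target and enables the tracker once the download resolves.
// Detection itself may still take a few frames after this settles.
async function initImageTracker(tracker: ImageTrackerLike, targetUrl: string): Promise<void> {
  await tracker.loadTarget(targetUrl);
  tracker.enabled = true;
}
```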

@@ -12,11 +12,37 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.ImageTracker = void 0;
const event_1 = require("./event");
const zappar_1 = require("./zappar");
/**
* Attaches content to a known image as it moves around in the camera view.
* @see https://docs.zap.works/universal-ar/javascript/image-tracking/
*/
class ImageTracker {
/**
* Constructs a new ImageTracker.
* @param _pipeline - The pipeline that this tracker will operate within.
* @param targetFile - The .zpt target file from the source image you'd like to track.
* @see https://docs.zap.works/universal-ar/zapworks-cli/
*/
constructor(_pipeline, targetFile) {
this._pipeline = _pipeline;
/**
* Emitted when an anchor becomes visible in a camera frame.
*/
this.onVisible = new event_1.Event1();
/**
* Emitted when an anchor goes from being visible in the previous camera frame, to not being visible in the current frame.
*/
this.onNotVisible = new event_1.Event1();
/**
* Emitted when a new anchor is created by the tracker.
*/
this.onNewAnchor = new event_1.Event1();
/**
* The set of currently visible anchors.
*/
this.visible = new Set();
/**
* A map of the available image anchors by their respective IDs.
*/
this.anchors = new Map();

@@ -35,3 +61,3 @@ this._visibleLastFrame = new Set();

let anchor = this.anchors.get(id);
let isNew = false; // TODO: declared but never used?
if (!anchor) {

@@ -76,2 +102,5 @@ anchor = {

}
/**
* Destroys the image tracker.
*/
destroy() {

@@ -83,2 +112,8 @@ this._pipeline._onFrameUpdateInternal.unbind(this._frameUpdate);

}
/**
* Loads a target file.
* @param src - A URL to, or an ArrayBuffer of, the target file from the source image you'd like to track.
* @see https://docs.zap.works/universal-ar/zapworks-cli/
* @returns A promise that's resolved once the file is downloaded. It may still take a few frames for the tracker to fully initialize and detect images.
*/
loadTarget(src) {

@@ -92,2 +127,6 @@ return __awaiter(this, void 0, void 0, function* () {

}
/**
* Gets/sets the enabled state of the image tracker.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled() {

@@ -94,0 +133,0 @@ return this._z.image_tracker_enabled(this._impl);

@@ -10,4 +10,6 @@ export { ImageTracker, ImageAnchor } from "./imagetracker";

export { FaceLandmarkName, FaceLandmark } from "./facelandmark";
export { Anchor } from "./anchor";
export { permissionDeniedUI, permissionGranted, permissionDenied, permissionRequest, permissionRequestUI, Permission } from "./permission";
export { LogLevel, setLogLevel, logLevel } from "./loglevel";
export { Event, Event1 } from "./event";
export { cameraDefaultDeviceID, invert, drawPlane, projectionMatrixFromCameraModel, browserIncompatible, browserIncompatibleUI } from "./zappar";
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.browserIncompatibleUI = exports.browserIncompatible = exports.projectionMatrixFromCameraModel = exports.drawPlane = exports.invert = exports.cameraDefaultDeviceID = exports.Event1 = exports.Event = exports.logLevel = exports.setLogLevel = exports.LogLevel = exports.Permission = exports.permissionRequestUI = exports.permissionRequest = exports.permissionDenied = exports.permissionGranted = exports.permissionDeniedUI = exports.FaceLandmark = exports.FaceLandmarkName = exports.Pipeline = exports.HTMLElementSource = exports.CameraSource = exports.FaceMesh = exports.FaceTracker = exports.BarcodeFinder = exports.InstantWorldTracker = exports.ImageTracker = void 0;
var imagetracker_1 = require("./imagetracker");
exports.ImageTracker = imagetracker_1.ImageTracker;
Object.defineProperty(exports, "ImageTracker", { enumerable: true, get: function () { return imagetracker_1.ImageTracker; } });
var instantworldtracker_1 = require("./instantworldtracker");
exports.InstantWorldTracker = instantworldtracker_1.InstantWorldTracker;
Object.defineProperty(exports, "InstantWorldTracker", { enumerable: true, get: function () { return instantworldtracker_1.InstantWorldTracker; } });
var barcodefinder_1 = require("./barcodefinder");
exports.BarcodeFinder = barcodefinder_1.BarcodeFinder;
Object.defineProperty(exports, "BarcodeFinder", { enumerable: true, get: function () { return barcodefinder_1.BarcodeFinder; } });
var facetracker_1 = require("./facetracker");
exports.FaceTracker = facetracker_1.FaceTracker;
Object.defineProperty(exports, "FaceTracker", { enumerable: true, get: function () { return facetracker_1.FaceTracker; } });
var facemesh_1 = require("./facemesh");
exports.FaceMesh = facemesh_1.FaceMesh;
Object.defineProperty(exports, "FaceMesh", { enumerable: true, get: function () { return facemesh_1.FaceMesh; } });
var camerasource_1 = require("./camerasource");
exports.CameraSource = camerasource_1.CameraSource;
Object.defineProperty(exports, "CameraSource", { enumerable: true, get: function () { return camerasource_1.CameraSource; } });
var htmlelementsource_1 = require("./htmlelementsource");
exports.HTMLElementSource = htmlelementsource_1.HTMLElementSource;
Object.defineProperty(exports, "HTMLElementSource", { enumerable: true, get: function () { return htmlelementsource_1.HTMLElementSource; } });
var pipeline_1 = require("./pipeline");
exports.Pipeline = pipeline_1.Pipeline;
Object.defineProperty(exports, "Pipeline", { enumerable: true, get: function () { return pipeline_1.Pipeline; } });
var facelandmark_1 = require("./facelandmark");
exports.FaceLandmarkName = facelandmark_1.FaceLandmarkName;
exports.FaceLandmark = facelandmark_1.FaceLandmark;
Object.defineProperty(exports, "FaceLandmarkName", { enumerable: true, get: function () { return facelandmark_1.FaceLandmarkName; } });
Object.defineProperty(exports, "FaceLandmark", { enumerable: true, get: function () { return facelandmark_1.FaceLandmark; } });
var permission_1 = require("./permission");
exports.permissionDeniedUI = permission_1.permissionDeniedUI;
exports.permissionGranted = permission_1.permissionGranted;
exports.permissionDenied = permission_1.permissionDenied;
exports.permissionRequest = permission_1.permissionRequest;
exports.permissionRequestUI = permission_1.permissionRequestUI;
exports.Permission = permission_1.Permission;
Object.defineProperty(exports, "permissionDeniedUI", { enumerable: true, get: function () { return permission_1.permissionDeniedUI; } });
Object.defineProperty(exports, "permissionGranted", { enumerable: true, get: function () { return permission_1.permissionGranted; } });
Object.defineProperty(exports, "permissionDenied", { enumerable: true, get: function () { return permission_1.permissionDenied; } });
Object.defineProperty(exports, "permissionRequest", { enumerable: true, get: function () { return permission_1.permissionRequest; } });
Object.defineProperty(exports, "permissionRequestUI", { enumerable: true, get: function () { return permission_1.permissionRequestUI; } });
Object.defineProperty(exports, "Permission", { enumerable: true, get: function () { return permission_1.Permission; } });
var loglevel_1 = require("./loglevel");
exports.LogLevel = loglevel_1.LogLevel;
exports.setLogLevel = loglevel_1.setLogLevel;
exports.logLevel = loglevel_1.logLevel;
Object.defineProperty(exports, "LogLevel", { enumerable: true, get: function () { return loglevel_1.LogLevel; } });
Object.defineProperty(exports, "setLogLevel", { enumerable: true, get: function () { return loglevel_1.setLogLevel; } });
Object.defineProperty(exports, "logLevel", { enumerable: true, get: function () { return loglevel_1.logLevel; } });
var event_1 = require("./event");
Object.defineProperty(exports, "Event", { enumerable: true, get: function () { return event_1.Event; } });
Object.defineProperty(exports, "Event1", { enumerable: true, get: function () { return event_1.Event1; } });
var zappar_1 = require("./zappar");
exports.cameraDefaultDeviceID = zappar_1.cameraDefaultDeviceID;
exports.invert = zappar_1.invert;
exports.drawPlane = zappar_1.drawPlane;
exports.projectionMatrixFromCameraModel = zappar_1.projectionMatrixFromCameraModel;
exports.browserIncompatible = zappar_1.browserIncompatible;
exports.browserIncompatibleUI = zappar_1.browserIncompatibleUI;
Object.defineProperty(exports, "cameraDefaultDeviceID", { enumerable: true, get: function () { return zappar_1.cameraDefaultDeviceID; } });
Object.defineProperty(exports, "invert", { enumerable: true, get: function () { return zappar_1.invert; } });
Object.defineProperty(exports, "drawPlane", { enumerable: true, get: function () { return zappar_1.drawPlane; } });
Object.defineProperty(exports, "projectionMatrixFromCameraModel", { enumerable: true, get: function () { return zappar_1.projectionMatrixFromCameraModel; } });
Object.defineProperty(exports, "browserIncompatible", { enumerable: true, get: function () { return zappar_1.browserIncompatible; } });
Object.defineProperty(exports, "browserIncompatibleUI", { enumerable: true, get: function () { return zappar_1.browserIncompatibleUI; } });
import { instant_world_tracker_transform_orientation_t } from "@zappar/zappar-cv";
import { Pipeline } from "./pipeline";
import { Anchor } from "./anchor";
export declare type InstantWorldTrackerTransformOrigin = instant_world_tracker_transform_orientation_t;
export interface InstantWorldAnchor {
pose(cameraPose: Float32Array, mirror?: boolean): Float32Array;
poseCameraRelative(mirror?: boolean): Float32Array;
export interface InstantWorldAnchor extends Anchor {
}
/**
* Attaches content to a point on a surface in front of the user as it moves around in the camera view.
* @see https://docs.zap.works/universal-ar/javascript/instant-world-tracking/
*/
export declare class InstantWorldTracker {
private _pipeline;
/**
* The instant world tracking anchor.
*/
anchor: InstantWorldAnchor;
private _z;
private _impl;
/**
* Constructs a new InstantWorldTracker.
* @param _pipeline - The pipeline that this tracker will operate within.
*/
constructor(_pipeline: Pipeline);
/**
* Destroys the instant tracker.
*/
destroy(): void;
private _anchorPoseCameraRelative;
private _anchorPose;
/**
* Gets/sets the enabled state of the instant world tracker.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled(): boolean;
set enabled(e: boolean);
/**
* Sets the point in the user's environment that the anchor tracks from.
*
* The parameters passed in to this function correspond to the X, Y and Z coordinates (in camera space) of the point to track. Choosing a position with X and Y coordinates of zero, and a negative Z coordinate, will select a point on a surface directly in front of the center of the screen.
*
* @param orientation - The orientation of the point in space.
*/
setAnchorPoseFromCameraOffset(x: number, y: number, z: number, orientation?: InstantWorldTrackerTransformOrigin): void;
}
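To illustrate the documented call pattern for `setAnchorPoseFromCameraOffset` — this is a minimal mock, not the real library class (which lives in `@zappar/zappar` and takes a `Pipeline` in its constructor) — placing the anchor on a surface directly in front of the camera might look like:

```typescript
// Minimal mock mirroring the documented signature, for illustration only.
interface AnchorOffset { x: number; y: number; z: number; }

class MockInstantWorldTracker {
  enabled = true;
  lastOffset: AnchorOffset | null = null;
  // The real method also accepts an optional orientation parameter,
  // which defaults inside the library.
  setAnchorPoseFromCameraOffset(x: number, y: number, z: number): void {
    this.lastOffset = { x, y, z };
  }
}

const tracker = new MockInstantWorldTracker();
// X = 0 and Y = 0 center the point on screen; a negative Z selects a
// point on a surface directly in front of the camera, as documented.
tracker.setAnchorPoseFromCameraOffset(0, 0, -5);
```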
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.InstantWorldTracker = void 0;
const zappar_cv_1 = require("@zappar/zappar-cv");
const zappar_1 = require("./zappar");
/**
* Attaches content to a point on a surface in front of the user as it moves around in the camera view.
* @see https://docs.zap.works/universal-ar/javascript/instant-world-tracking/
*/
class InstantWorldTracker {
/**
* Constructs a new InstantWorldTracker.
* @param _pipeline - The pipeline that this tracker will operate within.
*/
constructor(_pipeline) {
this._pipeline = _pipeline;
/**
* The instant world tracking anchor.
*/
this.anchor = {

@@ -15,2 +27,5 @@ poseCameraRelative: mirror => this._anchorPoseCameraRelative(mirror),

}
/**
* Destroys the instant tracker.
*/
destroy() {

@@ -25,2 +40,6 @@ this._z.instant_world_tracker_destroy(this._impl);

}
/**
* Gets/sets the enabled state of the instant world tracker.
* Disable when not in use to save computational resources during frame processing.
*/
get enabled() {

@@ -32,2 +51,9 @@ return this._z.instant_world_tracker_enabled(this._impl);

}
/**
* Sets the point in the user's environment that the anchor tracks from.
*
* The parameters passed in to this function correspond to the X, Y and Z coordinates (in camera space) of the point to track. Choosing a position with X and Y coordinates of zero, and a negative Z coordinate, will select a point on a surface directly in front of the center of the screen.
*
* @param orientation - The orientation of the point in space.
*/
setAnchorPoseFromCameraOffset(x, y, z, orientation) {

@@ -38,2 +64,1 @@ this._z.instant_world_tracker_anchor_pose_set_from_camera_offset(this._impl, x, y, z, orientation || zappar_cv_1.instant_world_tracker_transform_orientation_t.MINUS_Z_AWAY_FROM_USER);

exports.InstantWorldTracker = InstantWorldTracker;
const _identity = new Float32Array([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]);
import { log_level_t as LogLevel } from "@zappar/zappar-cv";
export { log_level_t as LogLevel } from "@zappar/zappar-cv";
/**
* @returns The granularity of logging emitted by the library.
*/
export declare function logLevel(): LogLevel;
/**
* Sets the granularity of logging emitted by the library.
*/
export declare function setLogLevel(l: LogLevel): void;
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.setLogLevel = exports.logLevel = exports.LogLevel = void 0;
const zappar_1 = require("./zappar");
var zappar_cv_1 = require("@zappar/zappar-cv");
exports.LogLevel = zappar_cv_1.log_level_t;
Object.defineProperty(exports, "LogLevel", { enumerable: true, get: function () { return zappar_cv_1.log_level_t; } });
/**
* @returns The granularity of logging emitted by the library.
*/
function logLevel() {

@@ -10,2 +14,5 @@ return zappar_1.z().log_level();

exports.logLevel = logLevel;
/**
* Sets the granularity of logging emitted by the library.
*/
function setLogLevel(l) {

@@ -12,0 +19,0 @@ zappar_1.z().log_level_set(l);

@@ -0,9 +1,43 @@

/**
* The permissions that may be requested.
*/
export declare enum Permission {
/**
* Permission to access camera images.
*/
CAMERA = 0,
/**
* Permission to access device motion data (e.g. accelerometer and gyro). Some tracking algorithms require this data to operate.
*/
MOTION = 1
}
/**
* Checks if the browser has currently granted relevant permissions.
* @param onlyPermission - The exclusive permission to query, otherwise all are queried.
* @returns The permission granted state. 'true' if permission is granted.
*/
export declare function permissionGranted(onlyPermission?: Permission): boolean;
/**
* Checks if the browser has currently denied relevant permissions.
* @param onlyPermission - The exclusive permission to query, otherwise all are queried.
* @returns The permission granted state. 'true' if permission is denied.
*/
export declare function permissionDenied(onlyPermission?: Permission): boolean;
/**
* Requests the browser to grant relevant permissions.
*
* This may or may not trigger a browser-provided user dialog prompting a permission choice.
*
* @param onlyPermission - The exclusive permission to query, otherwise all are requested.
* @returns A Promise containing granted status. 'true' if granted.
*/
export declare function permissionRequest(onlyPermission?: Permission): Promise<boolean>;
/**
* Shows Zappar's built-in UI to request camera and motion data permissions
* @returns A promise containing granted status.
*/
export declare function permissionRequestUI(): Promise<boolean>;
/**
* Shows Zappar's built-in permission denied UI.
*/
export declare function permissionDeniedUI(): void;
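The request/denied flow these declarations describe can be sketched with mocked stand-ins (the real `permissionRequestUI` and `permissionDeniedUI` live in `@zappar/zappar` and drive the browser's permission APIs; the mocks below only model the branching):

```typescript
// Mocked permission helpers, for illustration only.
async function mockPermissionRequestUI(): Promise<boolean> {
  return false; // pretend the user denied camera/motion access
}
function mockPermissionDeniedUI(): void {
  console.log("showing permission-denied UI");
}

async function startAR(): Promise<string> {
  const granted = await mockPermissionRequestUI();
  if (!granted) {
    // The documented fallback: show the built-in denied UI.
    mockPermissionDeniedUI();
    return "denied";
  }
  return "granted";
}
```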

@@ -12,8 +12,23 @@ "use strict";

Object.defineProperty(exports, "__esModule", { value: true });
exports.permissionDeniedUI = exports.permissionRequestUI = exports.permissionRequest = exports.permissionDenied = exports.permissionGranted = exports.Permission = void 0;
const zappar_1 = require("./zappar");
/**
* The permissions that may be requested.
*/
var Permission;
(function (Permission) {
/**
* Permission to access camera images.
*/
Permission[Permission["CAMERA"] = 0] = "CAMERA";
/**
* Permission to access device motion data (e.g. accelerometer and gyro). Some tracking algorithms require this data to operate.
*/
Permission[Permission["MOTION"] = 1] = "MOTION";
})(Permission = exports.Permission || (exports.Permission = {}));
/**
* Checks if the browser has currently granted relevant permissions.
* @param onlyPermission - The exclusive permission to query, otherwise all are queried.
* @returns The permission granted state. 'true' if permission is granted.
*/
function permissionGranted(onlyPermission) {

@@ -27,2 +42,7 @@ switch (onlyPermission) {

exports.permissionGranted = permissionGranted;
/**
* Checks if the browser has currently denied relevant permissions.
* @param onlyPermission - The exclusive permission to query, otherwise all are queried.
* @returns The permission granted state. 'true' if permission is denied.
*/
function permissionDenied(onlyPermission) {

@@ -36,2 +56,10 @@ switch (onlyPermission) {

exports.permissionDenied = permissionDenied;
/**
* Requests the browser to grant relevant permissions.
*
* This may or may not trigger a browser-provided user dialog prompting a permission choice.
*
* @param onlyPermission - The exclusive permission to query, otherwise all are requested.
* @returns A Promise containing granted status. 'true' if granted.
*/
function permissionRequest(onlyPermission) {

@@ -90,2 +118,6 @@ switch (onlyPermission) {

exports.permissionRequest = permissionRequest;
/**
* Shows Zappar's built-in UI to request camera and motion data permissions
* @returns A promise containing granted status.
*/
function permissionRequestUI() {

@@ -97,2 +129,5 @@ return __awaiter(this, void 0, void 0, function* () {

exports.permissionRequestUI = permissionRequestUI;
/**
* Shows Zappar's built-in permission denied UI.
*/
function permissionDeniedUI() {

@@ -99,0 +134,0 @@ return zappar_1.z().permission_denied_ui();

import { zappar_pipeline_t } from "@zappar/zappar-cv";
import { Event } from "./event";
import { FaceMesh } from "./facemesh";
/**
* Pipelines manage the flow of data coming in (i.e. the camera frames) through to the output from the different tracking types and computer vision algorithms.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
export declare class Pipeline {
/**
* Emitted when the frame is updated.
*/
onFrameUpdate: Event;
/**
* @ignore
*/
_onFrameUpdateInternal: Event;

@@ -10,20 +20,138 @@ private _z;

private _lastFrameNumber;
/**
* Constructs a new Pipeline.
*/
constructor();
/**
* Destroys the pipeline.
*/
destroy(): void;
/**
* Updates the pipeline and trackers to expose tracking data from the most recently processed camera frame.
*/
frameUpdate(): void;
/**
* @ignore
*/
_getImpl(): zappar_pipeline_t;
/**
* Sets the WebGL context used for the processing and upload of camera textures.
* @param gl - The WebGL context.
*/
glContextSet(gl: WebGLRenderingContext): void;
/**
* Informs the pipeline that the GL context is lost and should not be used.
*/
glContextLost(): void;
/**
* Returns the most recent camera frame texture.
*/
cameraFrameTextureGL(): WebGLTexture | undefined;
/**
* Returns a matrix that you can use to transform the UV coordinates of the following full-screen quad in order to render the camera texture:
*
* Vertex 0: `-1, -1, 0`
*
* UV 0: `0, 0`
*
* Vertex 1: `-1, 1, 0`
*
* UV 1: `0, 1`
*
* Vertex 2: `1, -1, 0`
*
* UV 2: `1, 0`
*
* Vertex 3: `1, 1, 0`
*
* UV 3: `1, 1`
*
* @param renderWidth - The width of the canvas.
* @param renderHeight - The height of the canvas.
* @param mirror - Pass `true` to mirror the camera image in the X-axis.
* @returns A 4x4 column-major transformation matrix.
*/
cameraFrameTextureMatrix(renderWidth: number, renderHeight: number, mirror?: boolean): Float32Array;
/**
* Draw the camera to the screen as a full screen quad.
*
* Please note this function modifies some GL state during its operation so you may need to reset the following GL state if you use it:
* - The currently bound texture 2D is set to `null` (e.g. `gl.bindTexture(gl.TEXTURE_2D, null)`)
* - The currently bound array buffer is set to `null` (e.g. `gl.bindBuffer(gl.ARRAY_BUFFER, null);`)
* - The currently bound program is set to `null` (e.g. `gl.useProgram(null)`)
* - The currently active texture is set to `gl.TEXTURE0` (e.g. `gl.activeTexture(gl.TEXTURE0)`)
* - These features are disabled: `gl.SCISSOR_TEST`, `gl.DEPTH_TEST`, `gl.BLEND`, `gl.CULL_FACE`
* @param renderWidth - The width of the canvas.
* @param renderHeight - The height of the canvas.
* @param mirror - Pass `true` to mirror the camera image in the X-axis.
*/
cameraFrameDrawGL(renderWidth: number, renderHeight: number, mirror?: boolean): void;
/**
* Uploads the current camera frame to a WebGL texture.
*/
cameraFrameUploadGL(): void;
/**
* Prepares camera frames for processing.
*
* Call this function on your pipeline once an animation frame (e.g. during your `requestAnimationFrame` function) in order to process incoming camera frames.
*
* Please note this function modifies some GL state during its operation so you may need to reset the following GL state if you use it:
* - The currently bound framebuffer is set to `null` (e.g. `gl.bindFramebuffer(gl.FRAMEBUFFER, null)`)
* - The currently bound texture 2D is set to `null` (e.g. `gl.bindTexture(gl.TEXTURE_2D, null)`)
* - The currently bound array buffer is set to `null` (e.g. `gl.bindBuffer(gl.ARRAY_BUFFER, null);`)
* - The currently bound element array buffer is set to `null` (e.g. `gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null)`)
* - The currently bound program is set to `null` (e.g. `gl.useProgram(null)`)
* - The currently active texture is set to `gl.TEXTURE0` (e.g. `gl.activeTexture(gl.TEXTURE0)`)
* - These features are disabled: `gl.SCISSOR_TEST`, `gl.DEPTH_TEST`, `gl.BLEND`, `gl.CULL_FACE`
* - The pixel store flip-Y mode is disabled (e.g. `gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false)`)
* - The viewport is changed (e.g. `gl.viewport(...)`)
* - The clear color is changed (e.g. `gl.clearColor(...)`)
*/
processGL(): void;
/**
* Returns the camera model (i.e. the intrinsic camera parameters) for the current frame.
*/
cameraModel(): Float32Array;
/**
* Returns a transformation where the camera sits, stationary, at the origin of world space, and points down the negative Z axis.
*
* In this mode, tracked anchors move in world space as the user moves the device or tracked objects in the real world.
*
* @returns A 4x4 column-major transformation matrix
*/
cameraPoseDefault(): Float32Array;
/**
* Returns a transformation where the camera sits at the origin of world space, but rotates as the user rotates the physical device.
*
* When the Zappar library initializes, the negative Z axis of world space points forward in front of the user.
*
* In this mode, tracked anchors move in world space as the user moves the device or tracked objects in the real world.
*
* @param mirror - Pass `true` to mirror the location in the X-axis.
* @returns A 4x4 column-major transformation matrix
*/
cameraPoseWithAttitude(mirror?: boolean): Float32Array;
/**
* Returns a transformation with the (camera-relative) origin specified by the supplied parameter.
*
* This is used with the `poseCameraRelative(...) : Float32Array` functions provided by the various anchor types to allow a given anchor (e.g. a tracked image or face) to be the origin of world space.
*
* In this case the camera moves and rotates in world space around the anchor at the origin.
*
* @param o - The origin matrix.
* @returns A 4x4 column-major transformation matrix
*/
cameraPoseWithOrigin(o: Float32Array): Float32Array;
/**
* Returns true if the current camera frame came from a user-facing camera
*/
cameraFrameUserFacing(): boolean;
/**
* @ignore
*/
drawFace(projectionMatrix: Float32Array, cameraMatrix: Float32Array, targetMatrix: Float32Array, m: FaceMesh): void;
/**
* Returns the number of the current frame.
*/
frameNumber(): number;
}
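The UV transform that `cameraFrameTextureMatrix` is documented to produce can be demonstrated with a small pure function (self-contained; the matrix here is an identity placeholder, whereas the real one comes from the pipeline and accounts for camera aspect ratio and mirroring):

```typescript
// Applies a column-major 4x4 matrix to a 2D UV coordinate, treating it as
// a vec4 with z = 0 and w = 1 -- the transform the docs describe applying
// to the full-screen quad's UVs before sampling the camera texture.
function transformUV(m: Float32Array, u: number, v: number): [number, number] {
  const x = m[0] * u + m[4] * v + m[12];
  const y = m[1] * u + m[5] * v + m[13];
  return [x, y];
}

// Identity matrix in column-major order.
const identity = new Float32Array([
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
]);

// The quad's documented UVs are (0,0), (0,1), (1,0), (1,1);
// with an identity matrix they pass through unchanged.
const [u, v] = transformUV(identity, 1, 0);
```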
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Pipeline = void 0;
const zappar_1 = require("./zappar");
const event_1 = require("./event");
/**
* Pipelines manage the flow of data coming in (i.e. the camera frames) through to the output from the different tracking types and computer vision algorithms.
* @see https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
*/
class Pipeline {
/**
* Constructs a new Pipeline.
*/
constructor() {
/**
* Emitted when the frame is updated.
*/
this.onFrameUpdate = new event_1.Event();
/**
* @ignore
*/
this._onFrameUpdateInternal = new event_1.Event();

@@ -13,5 +27,11 @@ this._lastFrameNumber = -1;

}
/**
* Destroys the pipeline.
*/
destroy() {
this._z.pipeline_destroy(this._impl);
}
/**
* Updates the pipeline and trackers to expose tracking data from the most recently processed camera frame.
*/
frameUpdate() {

@@ -26,44 +46,153 @@ this._z.pipeline_frame_update(this._impl);

}
/**
* @ignore
*/
_getImpl() {
return this._impl;
}
/**
* Sets the WebGL context used for the processing and upload of camera textures.
* @param gl - The WebGL context.
*/
glContextSet(gl) {
this._z.pipeline_gl_context_set(this._impl, gl);
}
/**
* Informs the pipeline that the GL context is lost and should not be used.
*/
glContextLost() {
this._z.pipeline_gl_context_lost(this._impl);
}
/**
* Returns the most recent camera frame texture.
*/
cameraFrameTextureGL() {
return this._z.pipeline_camera_frame_texture_gl(this._impl);
}
/**
* Returns a matrix that you can use to transform the UV coordinates of the following full-screen quad in order to render the camera texture:
*
* Vertex 0: `-1, -1, 0`
*
* UV 0: `0, 0`
*
* Vertex 1: `-1, 1, 0`
*
* UV 1: `0, 1`
*
* Vertex 2: `1, -1, 0`
*
* UV 2: `1, 0`
*
* Vertex 3: `1, 1, 0`
*
* UV 3: `1, 1`
*
* @param renderWidth - The width of the canvas.
* @param renderHeight - The height of the canvas.
* @param mirror - Pass `true` to mirror the camera image in the X-axis.
* @returns A 4x4 column-major transformation matrix.
*/
cameraFrameTextureMatrix(renderWidth, renderHeight, mirror) {
return this._z.pipeline_camera_frame_texture_matrix(this._impl, renderWidth, renderHeight, mirror === true);
}
/**
* Draw the camera to the screen as a full screen quad.
*
* Please note this function modifies some GL state during its operation so you may need to reset the following GL state if you use it:
* - The currently bound texture 2D is set to `null` (e.g. `gl.bindTexture(gl.TEXTURE_2D, null)`)
* - The currently bound array buffer is set to `null` (e.g. `gl.bindBuffer(gl.ARRAY_BUFFER, null);`)
* - The currently bound program is set to `null` (e.g. `gl.useProgram(null)`)
* - The currently active texture is set to `gl.TEXTURE0` (e.g. `gl.activeTexture(gl.TEXTURE0)`)
* - These features are disabled: `gl.SCISSOR_TEST`, `gl.DEPTH_TEST`, `gl.BLEND`, `gl.CULL_FACE`
* @param renderWidth - The width of the canvas.
* @param renderHeight - The height of the canvas.
* @param mirror - Pass `true` to mirror the camera image in the X-axis.
*/
cameraFrameDrawGL(renderWidth, renderHeight, mirror) {
this._z.pipeline_camera_frame_draw_gl(this._impl, renderWidth, renderHeight, mirror);
}
/**
* Uploads the current camera frame to a WebGL texture.
*/
cameraFrameUploadGL() {
this._z.pipeline_camera_frame_upload_gl(this._impl);
}
/**
* Prepares camera frames for processing.
*
* Call this function on your pipeline once an animation frame (e.g. during your `requestAnimationFrame` function) in order to process incoming camera frames.
*
* Please note this function modifies some GL state during its operation so you may need to reset the following GL state if you use it:
* - The currently bound framebuffer is set to `null` (e.g. `gl.bindFramebuffer(gl.FRAMEBUFFER, null)`)
* - The currently bound texture 2D is set to `null` (e.g. `gl.bindTexture(gl.TEXTURE_2D, null)`)
* - The currently bound array buffer is set to `null` (e.g. `gl.bindBuffer(gl.ARRAY_BUFFER, null);`)
* - The currently bound element array buffer is set to `null` (e.g. `gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null)`)
* - The currently bound program is set to `null` (e.g. `gl.useProgram(null)`)
* - The currently active texture is set to `gl.TEXTURE0` (e.g. `gl.activeTexture(gl.TEXTURE0)`)
* - These features are disabled: `gl.SCISSOR_TEST`, `gl.DEPTH_TEST`, `gl.BLEND`, `gl.CULL_FACE`
* - The pixel store flip-Y mode is disabled (e.g. `gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false)`)
* - The viewport is changed (e.g. `gl.viewport(...)`)
* - The clear color is changed (e.g. `gl.clearColor(...)`)
*/
processGL() {
this._z.pipeline_process_gl(this._impl);
}
/**
* Returns the camera model (i.e. the intrinsic camera parameters) for the current frame.
*/
cameraModel() {
return this._z.pipeline_camera_model(this._impl);
}
/**
* Returns a transformation where the camera sits, stationary, at the origin of world space, and points down the negative Z axis.
*
* In this mode, tracked anchors move in world space as the user moves the device or tracked objects in the real world.
*
* @returns A 4x4 column-major transformation matrix
*/
cameraPoseDefault() {
return this._z.pipeline_camera_pose_default(this._impl);
}
/**
* Returns a transformation where the camera sits at the origin of world space, but rotates as the user rotates the physical device.
*
* When the Zappar library initializes, the negative Z axis of world space points forward in front of the user.
*
* In this mode, tracked anchors move in world space as the user moves the device or tracked objects in the real world.
*
* @param mirror - Pass `true` to mirror the location in the X-axis.
* @returns A 4x4 column-major transformation matrix
*/
cameraPoseWithAttitude(mirror) {
return this._z.pipeline_camera_pose_with_attitude(this._impl, mirror || false);
}
/**
* Returns a transformation with the (camera-relative) origin specified by the supplied parameter.
*
* This is used with the `poseCameraRelative(...) : Float32Array` functions provided by the various anchor types to allow a given anchor (e.g. a tracked image or face) to be the origin of world space.
*
* In this case the camera moves and rotates in world space around the anchor at the origin.
*
* @param o - The origin matrix.
* @returns A 4x4 column-major transformation matrix
*/
cameraPoseWithOrigin(o) {
return this._z.pipeline_camera_pose_with_origin(this._impl, o);
}
/**
* Returns true if the current camera frame came from a user-facing camera
*/
cameraFrameUserFacing() {
return this._z.pipeline_camera_frame_user_facing(this._impl);
}
/**
* @ignore
*/
drawFace(projectionMatrix, cameraMatrix, targetMatrix, m) {
this._z.pipeline_draw_face(this._impl, projectionMatrix, cameraMatrix, targetMatrix, m._getImpl());
}
/**
* Returns the number of the current frame.
*/
frameNumber() {

@@ -70,0 +199,0 @@ return this._z.pipeline_frame_number(this._impl);
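The per-frame sequence the Pipeline docs above describe — prepare camera frames with `processGL()`, expose tracking data with `frameUpdate()`, then render the camera background — can be sketched with a mock that only records call order (the method names mirror the real API, but this is not the real class):

```typescript
// Mock pipeline that records the order of the documented per-frame calls.
class MockPipeline {
  calls: string[] = [];
  processGL(): void { this.calls.push("processGL"); }
  frameUpdate(): void { this.calls.push("frameUpdate"); }
  cameraFrameDrawGL(renderWidth: number, renderHeight: number): void {
    this.calls.push("cameraFrameDrawGL");
  }
}

const pipeline = new MockPipeline();

// One animation frame, in the documented order: process incoming camera
// frames, update the trackers, then draw the camera as a full-screen quad.
function onAnimationFrame(): void {
  pipeline.processGL();
  pipeline.frameUpdate();
  pipeline.cameraFrameDrawGL(640, 480);
}
onAnimationFrame();
```

In a real app `onAnimationFrame` would be scheduled with `requestAnimationFrame`, and the GL state caveats listed in the docs apply after `processGL()` and `cameraFrameDrawGL()`.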

@@ -1,1 +0,4 @@

export declare const VERSION = "0.3.10";
/**
* SDK version.
*/
export declare const VERSION = "0.3.11";
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.VERSION = "0.3.10";
exports.VERSION = void 0;
/**
* SDK version.
*/
exports.VERSION = "0.3.11";
import { Zappar } from "@zappar/zappar-cv";
/**
* @ignore
*/
export declare function z(): Zappar;
/**
* Gets the ID of the default rear- or user-facing camera.
* @param userFacing - Whether 'selfie' camera ID should be returned.
* @returns The camera device ID.
* @see https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/enumerateDevices
*/
export declare function cameraDefaultDeviceID(userFacing?: boolean): string;
/**
* Inverts a 4x4 Float32Array Matrix.
* @param m - The 4x4 matrix to be inverted.
* @returns The inverted Float32Array matrix.
*/
export declare function invert(m: Float32Array): Float32Array;
/**
* Calculates the projection matrix from a given camera model (i.e. the intrinsic camera parameters).
* @param model - The camera model.
* @param renderWidth - The width of the canvas.
* @param renderHeight - The height of the canvas.
* @param zNear - The near clipping plane.
* @param zFar - The far clipping plane.
* @returns A 4x4 column-major projection matrix.
*/
export declare function projectionMatrixFromCameraModel(model: Float32Array, renderWidth: number, renderHeight: number, zNear?: number, zFar?: number): Float32Array;
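The layout of the library's camera model array is internal, so purely as an illustration of the idea behind `projectionMatrixFromCameraModel`, here is a projection built from an assumed pinhole model `[fx, fy, cx, cy]` using OpenGL clip-space conventions (the names and layout are assumptions, not the library's actual model):

```typescript
// Hypothetical sketch: builds a column-major 4x4 perspective projection
// from assumed pinhole intrinsics. Illustration only -- the real function
// consumes the library's internal camera-model format.
function projectionFromPinhole(
  fx: number, fy: number, cx: number, cy: number,
  width: number, height: number, zNear = 0.1, zFar = 100
): Float32Array {
  const p = new Float32Array(16);
  p[0] = (2 * fx) / width;                       // X focal scaling
  p[5] = (2 * fy) / height;                      // Y focal scaling
  p[8] = 1 - (2 * cx) / width;                   // principal-point X offset
  p[9] = (2 * cy) / height - 1;                  // principal-point Y offset
  p[10] = -(zFar + zNear) / (zFar - zNear);      // depth mapping
  p[11] = -1;                                    // perspective divide
  p[14] = -(2 * zFar * zNear) / (zFar - zNear);
  return p;
}
```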
/**
* @ignore
*/
export declare function drawPlane(gl: WebGLRenderingContext, projectionMatrix: Float32Array, cameraMatrix: Float32Array, targetMatrix: Float32Array, texture: string): void;
/**
* Detects if your page is running in a browser that's not supported
* @returns 'true' if the browser is incompatible.
*/
export declare function browserIncompatible(): boolean;
/**
* Shows a full-page dialog that informs the user they're using an unsupported browser,
* and provides a button to 'copy' the current page URL so they can 'paste' it into the
* address bar of a compatible alternative.
*/
export declare function browserIncompatibleUI(): void;
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.browserIncompatibleUI = exports.browserIncompatible = exports.drawPlane = exports.projectionMatrixFromCameraModel = exports.invert = exports.cameraDefaultDeviceID = exports.z = void 0;
const zappar_cv_1 = require("@zappar/zappar-cv");

@@ -7,2 +8,5 @@ const gl_matrix_1 = require("gl-matrix");

let _z;
/**
* @ignore
*/
function z() {

@@ -16,2 +20,8 @@ if (!_z) {

exports.z = z;
/**
* Gets the ID of the default rear- or user-facing camera.
* @param userFacing - Whether 'selfie' camera ID should be returned.
* @returns The camera device ID.
* @see https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/enumerateDevices
*/
function cameraDefaultDeviceID(userFacing) {

@@ -21,2 +31,7 @@ return z().camera_default_device_id(userFacing || false);

exports.cameraDefaultDeviceID = cameraDefaultDeviceID;
/**
* Inverts a 4x4 Float32Array Matrix.
* @param m - The 4x4 matrix to be inverted.
* @returns The inverted Float32Array matrix.
*/
function invert(m) {

@@ -28,2 +43,11 @@ const ret = gl_matrix_1.mat4.create();

exports.invert = invert;
/**
* Calculates the projection matrix from a given camera model (i.e. the intrinsic camera parameters).
* @param model - The camera model.
* @param renderWidth - The width of the canvas.
* @param renderHeight - The height of the canvas.
* @param zNear - The near clipping plane.
* @param zFar - The far clipping plane.
* @returns A 4x4 column-major projection matrix.
*/
function projectionMatrixFromCameraModel(model, renderWidth, renderHeight, zNear = 0.1, zFar = 100) {

@@ -33,2 +57,5 @@ return z().projection_matrix_from_camera_model_ext(model, renderWidth, renderHeight, zNear, zFar);

exports.projectionMatrixFromCameraModel = projectionMatrixFromCameraModel;
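The layout of the library's camera model is internal, but the underlying math is the standard mapping from pinhole intrinsics to an OpenGL-style projection matrix. The sketch below assumes explicit focal lengths `fx`, `fy` and principal point `cx`, `cy` in pixels (these parameter names and the helper itself are illustrative, not the library's signature):

```typescript
// Build a column-major OpenGL projection matrix (clip-space z in [-1, 1])
// from pinhole camera intrinsics. Illustrative sketch only; the library
// derives the same kind of matrix from its internal camera model.
function projectionFromIntrinsics(
  fx: number, fy: number, cx: number, cy: number,
  width: number, height: number, zNear = 0.1, zFar = 100,
): Float32Array {
  const p = new Float32Array(16);
  p[0] = (2 * fx) / width;              // horizontal focal scale
  p[5] = (2 * fy) / height;             // vertical focal scale
  p[8] = 1 - (2 * cx) / width;          // principal-point x offset
  p[9] = (2 * cy) / height - 1;         // principal-point y offset
  p[10] = -(zFar + zNear) / (zFar - zNear);
  p[11] = -1;                           // perspective divide by -z
  p[14] = -(2 * zFar * zNear) / (zFar - zNear);
  return p;
}

// A 640x480 camera with a centered principal point and 500px focal length.
const proj = projectionFromIntrinsics(500, 500, 320, 240, 640, 480);
```

With the principal point at the image center, the offset terms (`p[8]`, `p[9]`) vanish and this reduces to a familiar symmetric perspective frustum.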
/**
* @ignore
*/
function drawPlane(gl, projectionMatrix, cameraMatrix, targetMatrix, texture) {

@@ -38,2 +65,6 @@ z().draw_plane(gl, projectionMatrix, cameraMatrix, targetMatrix, texture);

exports.drawPlane = drawPlane;
/**
 * Detects whether the page is running in an unsupported browser.
* @returns 'true' if the browser is incompatible.
*/
function browserIncompatible() {

@@ -43,2 +74,7 @@ return z().browser_incompatible();

exports.browserIncompatible = browserIncompatible;
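A typical pattern is to call `browserIncompatible()` once at startup and, if it returns `true`, show the dialog and skip pipeline setup. Since the real module needs a browser environment, the sketch below wires that guard against a stub implementing the two declared functions (`CompatAPI`, `guardIncompatible`, and the stub are illustrative, not library names):

```typescript
// The shape of the compatibility API declared above.
interface CompatAPI {
  browserIncompatible(): boolean;
  browserIncompatibleUI(): void;
}

// Guard helper: shows the full-page dialog and returns true when the
// browser is unsupported, so callers can skip all further setup.
function guardIncompatible(api: CompatAPI): boolean {
  if (api.browserIncompatible()) {
    api.browserIncompatibleUI(); // full-page dialog with the copy-URL button
    return true;
  }
  return false;
}

// Stub standing in for the real module, which requires a browser.
let dialogShown = false;
const stub: CompatAPI = {
  browserIncompatible: () => true,
  browserIncompatibleUI: () => { dialogShown = true; },
};
const blocked = guardIncompatible(stub);
// blocked === true and the stub's dialog flag is set
```

In a real page the stub would be the imported module itself (e.g. `import * as Zappar from "@zappar/zappar"`), and the `true` branch would `return` before any pipeline or tracker is constructed.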
/**
* Shows a full-page dialog that informs the user they're using an unsupported browser,
* and provides a button to 'copy' the current page URL so they can 'paste' it into the
* address bar of a compatible alternative.
*/
function browserIncompatibleUI() {

@@ -45,0 +81,0 @@ z().browser_incompatible_ui();

{
"name": "@zappar/zappar",
"version": "0.3.10",
"version": "0.3.11",
"description": "Zappar's computer vision for JavaScript, supporting image, face and instant world tracking, and barcode scanning.",

@@ -14,3 +14,4 @@ "main": "lib/index.js",

"webpack-puppeteer": "webpack --config=webpack.config.puppeteer.js --mode=development && webpack-dev-server --config=webpack.config.puppeteer.js",
"standalone-test-serve": "concurrently 'zapworks serve --port 7010 umd' 'zapworks serve --port 7011 puppeteer-standalone-dist' || true"
"standalone-test-serve": "concurrently 'zapworks serve --port 7010 umd' 'zapworks serve --port 7011 puppeteer-standalone-dist' || true",
"typedoc": "typedoc --out docs src/index.ts --excludePrivate --excludeProtected --theme minimal"
},

@@ -37,2 +38,3 @@ "upkg": "umd/zappar.js",

"eslint": "^7.24.0",
"eslint-plugin-tsdoc": "^0.2.14",
"html-webpack-plugin": "^3.2.0",

@@ -47,3 +49,4 @@ "jest": "^26.6.3",

"ts-node": "^9.1.1",
"typescript": "^3.8.3",
"typedoc": "^0.20.36",
"typescript": "^4.2.4",
"webpack": "^4.43.0",

@@ -50,0 +53,0 @@ "webpack-cli": "^3.3.11",

@@ -61,3 +61,3 @@ # Zappar for JavaScript/TypeScript

<!-- Added by: deim, at: Thu 3 Dec 2020 12:38:04 GMT -->
<!-- Added by: zapparadmin, at: Thu Jun 10 10:42:57 BST 2021 -->

@@ -74,3 +74,3 @@ <!--te-->

Download the bundle from this link:
https://libs.zappar.com/zappar-js/0.3.10/zappar-js.zip
https://libs.zappar.com/zappar-js/0.3.11/zappar-js.zip

@@ -86,3 +86,3 @@ Unzip into your web project and reference from your HTML like this:

```html
<script src="https://libs.zappar.com/zappar-js/0.3.10/zappar.js"></script>
<script src="https://libs.zappar.com/zappar-js/0.3.11/zappar.js"></script>
```

@@ -519,3 +519,3 @@

- `pipeline.cameraPoseDefault()` returns a transformation where camera sits, stationary, at the origin of world space, and points down the negative Z axis. Tracked anchors move in world space as the user moves the device or tracked objects in the real world.
- `pipeline.cameraPoseDefault()` returns a transformation where the camera sits, stationary, at the origin of world space, and points down the negative Z axis. Tracked anchors move in world space as the user moves the device or tracked objects in the real world.
- `pipeline.cameraPoseWithAttitude(mirror?: boolean)` returns a transformation where the camera sits at the origin of world space, but rotates as the user rotates the physical device. When the Zappar library initializes, the negative Z axis of world space points forward in front of the user.

@@ -915,3 +915,3 @@ - `pipeline.cameraPoseWithOrigin(o: Float32Array)` returns a transformation with the (camera-relative) origin specified by the supplied parameter. This is used with the `poseCameraRelative(...) : Float32Array` functions provided by the various anchor types to allow a given anchor (e.g. a tracked image or face) to be the origin of world space. In this case the camera moves and rotates in world space around the anchor at the origin.

For a user experience featuring the user-facing camera to feel natural, the camera view must tbe mirrored. The Zappar library support two ways to provide a mirrored view:
For a user experience featuring the user-facing camera to feel natural, the camera view must be mirrored. The Zappar library supports two ways to provide a mirrored view:

@@ -918,0 +918,0 @@ 1. *Mirroring the full canvas* - in this case both the camera image and the AR content appears mirrored.

