
@datachecker/faceverify
This project contains Datachecker's FaceVerify tool, which captures images of faces to be used in liveness detection. The tool only takes a capture once the trigger mechanism is fired.
To perform liveness detection, two slightly different images of the same person are required. For example, when a person moves their head slightly, this generates a different image. The tool therefore checks the difference in movement between frames.
The tool features a user challenge-response mechanism, namely head pose estimation, to prevent video injection attacks.
The tool runs in the browser and is therefore written in JavaScript.
The tool performs a number of quality and movement checks on each frame. The movement check is only applied to the second picture, since the first picture has no earlier picture to compare against; a sketch of the idea follows below.
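To illustrate the frame-to-frame movement idea (not the SDK's actual internals), here is a minimal sketch that scores the difference between two grayscale frames on a 0-100 scale, the same scale used by the MOVEMENT_THRESHOLD configuration attribute. The function name and the exact metric are assumptions for illustration only.
// Hypothetical illustration of a frame-to-frame movement score (0-100).
// This is NOT the SDK's internal implementation; the metric is an assumption.
function movementScore(prevPixels, currPixels) {
  // prevPixels/currPixels: Uint8ClampedArray of grayscale values (0-255),
  // e.g. extracted from two consecutive canvas frames of equal size.
  let totalDiff = 0;
  for (let i = 0; i < prevPixels.length; i++) {
    totalDiff += Math.abs(currPixels[i] - prevPixels[i]);
  }
  const meanDiff = totalDiff / prevPixels.length; // 0-255
  return (meanDiff / 255) * 100;                  // normalise to 0-100
}

// Example: with MOVEMENT_THRESHOLD = 20, a second capture would only be
// accepted once the score between frames exceeds the threshold:
// if (movementScore(prev, curr) >= 20) { /* enough movement detected */ }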
For full API details, please visit the Datachecker API documentation.
Datachecker uses OAuth authorization. In order to request the SDK token you will need to provide a valid OAuth token in the header.
Example header:
header = {'Authorization': `Bearer ${response.accessToken}`}
This OAuth token can be retrieved with the Datachecker OAuth Token API. The scope "productapi.sdk.read" needs to be present in order to retrieve an SDK token; if this scope is missing, you will not be able to retrieve one.
FaceVerify also requires the following additional scopes to send and receive results: "productapi.faceverify.write", "productapi.poll.read", "productapi.result.read".
Example OAuth:
fetch(<BASE_ENDPOINT>+"/oauth/token", {
  method: 'POST',
  body: JSON.stringify({
    "clientId": <CLIENTID>,
    "clientSecret": <CLIENTSECRET>,
    "scopes": [
      "productapi.sdk.read",
      "productapi.faceverify.write",
      "productapi.poll.read",
      "productapi.result.read"
    ]
  })
})
.then(response => response.json())
Note: Contact Datachecker to obtain your clientId and clientSecret.
The SDK is locked: a token is required to use it in production, and the application can only be started with a valid token. The token is a base64 string and can be generated by calling the Datachecker SDK Token API.
Example:
fetch(<BASE_ENDPOINT>+"/sdk/token?number_of_challenges=2&customer_reference=<CUSTOMER>&validateWatermark=true&services=FACE_VERIFY", {
  method: 'GET',
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json',
    'Authorization': `Bearer <ACCESSTOKEN>`
  }
})
.then(response => response.json())
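The two calls above can be chained: request an OAuth token first, then use its accessToken to request the SDK token. The sketch below assumes the OAuth response exposes the token as accessToken (as in the example header earlier) and that the SDK token endpoint returns JSON; the exact shape of that JSON is an assumption to verify against the API documentation.
// Sketch: chain the OAuth call and the SDK token call.
// Assumes the OAuth response contains `accessToken` (see the example header);
// the shape of the /sdk/token response is an assumption.
async function getSdkToken(baseEndpoint, clientId, clientSecret) {
  const oauthResponse = await fetch(baseEndpoint + "/oauth/token", {
    method: 'POST',
    body: JSON.stringify({
      clientId: clientId,
      clientSecret: clientSecret,
      scopes: [
        "productapi.sdk.read",
        "productapi.faceverify.write",
        "productapi.poll.read",
        "productapi.result.read"
      ]
    })
  }).then(response => response.json());

  const sdkResponse = await fetch(
    baseEndpoint + "/sdk/token?number_of_challenges=2&customer_reference=<CUSTOMER>&validateWatermark=true&services=FACE_VERIFY", {
    method: 'GET',
    headers: {
      'Accept': 'application/json',
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${oauthResponse.accessToken}`
    }
  }).then(response => response.json());

  return sdkResponse; // inspect this object for the base64 SDK token
}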
To run this tool, you will need to initialise it with the following variables.
ATTRIBUTE | FORMAT | DEFAULT VALUE | EXAMPLE | NOTES |
---|---|---|---|---|
ASSETS_FOLDER | string | "" | "../" | optional Specifies the location of the locally hosted assets folder. (see Asset Fetching Configuration) |
ASSETS_MODE | string | "CDN" | "LOCAL" | optional Specifies the mode of asset fetching, either through a CDN or locally hosted assets. (see Asset Fetching Configuration) |
BACKEND | string | wasm | wasm | optional Neural network execution provider. Possible values: [ wasm , webgl , cpu ]. wasm is recommended; cpu is not recommended. |
BACKGROUND_COLOR | string (hex color code) | "#1d3461" | "#1d3461" | optional Specifies the background color using a hex color code. |
CAPTURE_WAITING_TIME | int | 0 | 500 | optional Waiting time between captures in milliseconds. |
CHALLENGES | array | | ['up', 'right', 'down', 'left', 'up'] | optional Array of challenges that can be used for demo purposes. |
CONTAINER_ID | string | | "FV_mount" | required id of the div to mount the tool on. |
COUNTDOWN_MAX | int | 0 | 500 | optional If COUNTDOWN == 0, the countdown will be a random value between COUNTDOWN_MIN and COUNTDOWN_MAX. |
COUNTDOWN_MIN | int | 0 | 0 | optional If COUNTDOWN == 0, the countdown will be a random value between COUNTDOWN_MIN and COUNTDOWN_MAX. |
COUNTDOWN | int | 0 | 3000 | optional Countdown in ms before the picture is taken. |
DEBUG | bool | false | false | optional When debug is true, more detailed logs will be visible. |
DOWN_THRESHOLD | int | 30 | 30 | optional Challenge down threshold value. |
LANGUAGE | string | "nl" | "nl" | required Notifications in a specific language. |
LEFT_THRESHOLD | int | 22 | 22 | optional Challenge left threshold value. |
MODELS_PATH | string | "models/" | "models/" | optional Path referring to the models location. |
MOVEMENT_THRESHOLD | int | 20 | 20 | optional Movement is calculated from frame to frame as a value between 0-100. Recommended value between 20 and 30. |
RIGHT_THRESHOLD | int | 22 | 22 | optional Challenge right threshold value. |
STOP_AFTER | int | 10000 | 10000 | optional Stopping timer in ms. |
TOKEN | string | | | required Datachecker SDK token. (see SDK Token) |
UP_THRESHOLD | int | 35 | 35 | optional Challenge up threshold value. |
onComplete | javascript function | function(data) {console.log(data)} | function(data) {console.log(data)} | required Callback function on complete. |
onError | javascript function | function(error) {console.log(error)} | function(error) {console.log(error)} | required Callback function on error. |
onUserExit | javascript function | function(error) {console.log(error)} | function(error) {window.history.back()} | required Callback function on user exit. |
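As an illustration, a configuration combining several of the optional attributes above might look as follows; the chosen values are examples only, not recommendations from Datachecker.
// Illustrative configuration using several optional attributes from the table.
// All values are examples, not recommendations.
let FV = new FaceVerify();
FV.init({
  CONTAINER_ID: 'FV_mount',          // required: id of the mount div
  LANGUAGE: 'en',                    // required
  TOKEN: '<SDK_TOKEN>',              // required: see SDK Token
  BACKEND: 'wasm',                   // recommended execution provider
  COUNTDOWN: 3000,                   // fixed 3s countdown before capture
  MOVEMENT_THRESHOLD: 25,            // within the recommended 20-30 range
  DEBUG: true,                       // verbose logging during integration
  onComplete: function(data) { console.log(data); },
  onError: function(error) { console.log(error); },
  onUserExit: function(error) { window.history.back(); }
});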
FaceVerify requires fetching assets, which can be done either through a CDN or by hosting them locally. Configure this in the tool settings as follows:
// configuration
{
  ASSETS_MODE: "CDN",
  // other configurations
}
To host assets locally, first copy them to your desired location:
cp -r dist/assets/ path/to/hosted/assets/
Then, configure the tool to use these local assets:
// configuration
{
  ASSETS_MODE: "LOCAL",
  ASSETS_FOLDER: "path/to/hosted/assets/",
  // other configurations
}
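The locally hosted assets must be reachable from the page running the SDK. As a sketch, here is one way to serve the copied folder with Express; Express is an assumption here, not a requirement of the SDK, and any static file server works.
// Sketch: serve the copied assets folder over HTTP with Express (assumed setup).
const express = require('express');
const app = express();

// Make the copied assets available, e.g. at http://localhost:8080/assets/
app.use('/assets', express.static('path/to/hosted/assets/'));

app.listen(8080, () => console.log('Assets served on http://localhost:8080'));
ASSETS_FOLDER should then point at the path or URL under which the assets are reachable from the page.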
For comprehensive integration examples, please refer to our Integration Examples.
Within the application, you can take advantage of three callback functions to enhance the user experience and manage the flow of your process.
Note: When integrating the application into Native Apps using web views, it's essential to adapt and utilize these callback functions according to the conventions and requirements of the native platforms (e.g., iOS, Android). Native app development environments may have specific ways of handling JavaScript callbacks, and you should ensure seamless communication between the web view and the native code.
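For example, a hedged sketch of forwarding a result from the web view to native code might look as follows. The bridge names (faceverify, nativeBridge, onResult) are hypothetical and must match what your native code registers: a WKScriptMessageHandler on iOS, or an object exposed with @JavascriptInterface on Android.
// Sketch: forward SDK results to a native app hosting this page in a web view.
// The bridge names below are hypothetical and must match your native setup
// (WKScriptMessageHandler on iOS, @JavascriptInterface on Android).
function sendToNative(payload) {
  const message = JSON.stringify(payload);
  if (window.webkit && window.webkit.messageHandlers && window.webkit.messageHandlers.faceverify) {
    // iOS WKWebView with a registered "faceverify" message handler
    window.webkit.messageHandlers.faceverify.postMessage(message);
  } else if (window.nativeBridge && typeof window.nativeBridge.onResult === 'function') {
    // Android WebView with an injected JavaScript interface named "nativeBridge"
    window.nativeBridge.onResult(message);
  } else {
    console.log(message); // plain browser fallback
  }
}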
Example Web (JS):
let FV = new FaceVerify();
FV.init({
  CONTAINER_ID: 'FV_mount',
  LANGUAGE: 'en',
  TOKEN: '<SDK_TOKEN>',
  onComplete: function(data) {
    console.log(data);
  },
  onError: function(error) {
    console.log(error);
  },
  onUserExit: function(error) {
    console.log(error);
    window.history.back();
  }
});
ATTRIBUTE | FORMAT | DEFAULT VALUE | EXAMPLE | NOTES |
---|---|---|---|---|
onComplete | javascript function | function(data) {console.log(data)} | function(data) {console.log(data)} | required Callback that fires when all interactive tasks in the workflow have been completed. |
onError | javascript function | function(error) {console.log(error)} | function(error) {console.log(error)} | required Callback that fires when an error occurs. |
onUserExit | javascript function | function(error) {console.log(error)} | function(error) {window.history.back()} | required Callback that fires when the user exits the flow without completing it. |
This callback function will be called once all the tasks within the workflow have been successfully completed. This callback function is required. The data parameter within the function represents the output of the completed process. You can customize this function to handle and display the data as needed.
Example Web (JS):
Within the example below we are logging the output (data) to the console.
let FV = new FaceVerify();
FV.init({
  ...,
  onComplete: function(data) {
    console.log(data);
    FV.stop();
  }
});
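In practice you will usually do more than log: for instance, forwarding the output to your own backend. The /api/faceverify-result endpoint below is hypothetical.
// Sketch: forward the SDK output to your own backend for further processing.
// The /api/faceverify-result endpoint is hypothetical.
let FV = new FaceVerify();
FV.init({
  ...,
  onComplete: function(data) {
    fetch('/api/faceverify-result', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(data)
    })
    .then(response => response.json())
    .then(result => console.log('Stored:', result))
    .catch(err => console.log(err));
  }
});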
This callback can be used to alert users when something goes wrong during the process. This callback function is required. The error parameter within the function contains information about the specific error encountered, allowing you to log or display error messages for debugging or user guidance. Thrown errors are either known or unknown: the known errors can be found within the Languages dictionary, while unknown errors are thrown as-is.
Example Web (JS):
Within the example below we are logging the output (error) to the console.
let FV = new FaceVerify();
FV.init({
  ...,
  onError: function(error) {
    console.log(error);
  }
});
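Since known errors correspond to keys in the Languages dictionary, one option is to map them to user-facing messages and fall back to a generic message for unknown errors. That the error value matches a language key is our assumption; verify it against your integration.
// Sketch: map known error keys to user-facing text from the language dictionary.
// Assumes `error` matches a key in the language file for known errors.
const MESSAGES = {
  "no_face": "No face detected,\nplease position your face in the frame.",
  "capture_error": "We could not capture an image.\nAccess to the camera is required."
  // ...remaining keys from assets/language/en.js
};

function handleError(error) {
  const userMessage = MESSAGES[error] || "Something went wrong. Please try again.";
  console.log(error);   // raw error for debugging
  alert(userMessage);   // known errors become friendly notifications
}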
This callback can be used to implement actions like returning users to the previous page or prompting them for confirmation before exiting, to ensure they don't lose any unsaved data or work. This callback function is required. The error parameter within the function contains information about the specific error encountered, allowing you to log or display error messages for debugging or user guidance. The error that is thrown is "exit".
Example Web (JS):
Within the example below we are logging the output (error) to the console. Finally, we move back one page in the session history with window.history.back().
let FV = new FaceVerify();
FV.init({
  ...,
  onUserExit: function(error) {
    console.log(error);
    window.history.back();
  }
});
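As suggested above, you can also ask for confirmation before leaving, so users do not lose their progress unintentionally. A minimal sketch using window.confirm:
// Sketch: confirm before navigating away on user exit.
let FV = new FaceVerify();
FV.init({
  ...,
  onUserExit: function(error) {
    console.log(error); // will be "exit"
    if (window.confirm('Are you sure you want to stop the verification?')) {
      window.history.back();
    }
  }
});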
The tool first needs to be initialised to load all the models.
Once it is initialised, it can be started with the function FV.start();
let FV = new FaceVerify();
FV.init({
  CONTAINER_ID: ...,
  LANGUAGE: ...,
  TOKEN: ...,
  onComplete: ...,
  onError: ...,
  onUserExit: ...,
}).then(() => {
  FV.start();
});
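Because init returns a promise (as the .then above shows), initialisation failures can be caught as well; the failure causes named in the comment are assumptions.
// Sketch: handle initialisation failures before starting the flow.
FV.init({
  // ...same configuration as above
}).then(() => {
  FV.start();
}).catch((err) => {
  // e.g. an invalid SDK token or models that could not be loaded (assumed causes)
  console.log('FaceVerify failed to initialise:', err);
});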
To stop the camera and delete the container with its contents, the stop function can be called. This function is automatically called within onComplete, onError and onUserExit, and therefore does not have to be called within your own custom versions of these functions.
...
FV.stop();
Example below:
let FV = new FaceVerify();
FV.init({
  CONTAINER_ID: 'FV_mount',
  LANGUAGE: 'nl',
  TOKEN: '<SDK_TOKEN>',
  onComplete: function(data) {
    console.log(data);
  },
  onError: function(error) {
    console.log(error);
  },
  onUserExit: function(error) {
    console.log(error);
    window.history.back();
  },
});
Import the SDK with one of the three methods: Script tag, ES6 or CommonJS.
Easily add FaceVerify to your HTML files using the Script Tag method.
<!-- Add FaceVerify directly in your HTML -->
<script src="dist/faceverify.obf.js"></script>
For projects using NPM and a module bundler like Webpack or Rollup, you can import FaceVerify as an ES6 module or with CommonJS require syntax.
// Import FaceVerify in your JavaScript file
// ES6 style import
import FaceVerify from '@datachecker/faceverify';
// CommonJS style require
let FaceVerify = require('@datachecker/faceverify')
<!DOCTYPE html>
<html>
  <head>
    <title>FaceVerify</title>
  </head>
  <body>
    <div id="FV_mount" style="height:100vh"></div>
    <script src="faceverify.obf.js" type="text/javascript"></script>
    <script>
      let FV = new FaceVerify();
      FV.init({
        CONTAINER_ID: 'FV_mount',
        LANGUAGE: 'en',
        TOKEN: '<SDK_TOKEN>',
        onComplete: function(data) {
          console.log(data);
        },
        onError: function(error) {
          console.log(error);
        },
        onUserExit: function(error) {
          console.log(error);
          window.history.back();
        },
      });
    </script>
  </body>
</html>
There are two ways in which notifications can be loaded: from a file or from an object (JSON).
The language files can be found in assets/language/. The currently supported languages are en and nl. More languages can be created.
The notifications can be loaded in the configuration like the following:
let FV = new FaceVerify();
FV.init({
  LANGUAGE: 'en',
  ...
To create support for a new language, a js file needs to be created with specific keys. The keys can be derived from the current language js files (assets/language/en.js).
Example:
var LANGUAGE = {
  "start_prompt": "Tap to start.",
  "no_face": "No face detected,\nplease position your face in the frame.",
  "nod_head": "Please nod your head slightly.",
  "face_thresh": "Face not clearly visible.\nEnsure better lighting conditions\nor make sure your face is not covered.",
  "face_far": "Please move closer to the camera.",
  "face_close": "Face too close,\nplease move slightly away.",
  "exp_dark": "The image is too dark.\nFind a well-lit environment.",
  "exp_light": "The image is too light.\nFind a dimmer environment.",
  "blur": "Image is not sharp,\nplease stay still.",
  "capture_error": "We could not capture an image.\nAccess to the camera is required.",
  "challenge_0": "Slowly move your face to the center.",
  "challenge_out": "Watch out, your face is too far in the specified direction.\nMove slightly back.",
  "challenge_1": "Slowly move your face up\nand hold still.",
  "challenge_12": "Watch out, you are looking diagonally upwards.\nSlowly move your face upwards and hold still.",
  "challenge_14": "Watch out, you are looking diagonally upwards.\nSlowly move your face upwards and hold still.",
  "challenge_2": "Slowly move your face to the right\nand hold still.",
  "challenge_21": "Watch out, you are looking diagonally upwards.\nSlowly move your face to the right and hold still.",
  "challenge_23": "Watch out, you are looking diagonally downwards.\nSlowly move your face to the right and hold still.",
  "challenge_3": "Slowly move your face down\nand hold still.",
  "challenge_32": "Watch out, you are looking diagonally downwards.\nSlowly move your face downwards and hold still.",
  "challenge_34": "Watch out, you are looking diagonally downwards.\nSlowly move your face downwards and hold still.",
  "challenge_4": "Slowly move your face to the left\nand hold still.",
  "challenge_41": "Watch out, you are looking diagonally upwards.\nSlowly move your face to the left and hold still.",
  "challenge_43": "Watch out, you are looking diagonally downwards.\nSlowly move your face to the left and hold still."
}
Notifications can also be loaded as a json object like the following:
let FV = new FaceVerify();
FV.init({
  LANGUAGE: JSON.stringify(
    {
      "start_prompt": "Tap to start.",
      "no_face": "No face detected,\nplease position your face in the frame.",
      "nod_head": "Please nod your head slightly.",
      "face_thresh": "Face not clearly visible.\nEnsure better lighting conditions\nor make sure your face is not covered.",
      "face_far": "Please move closer to the camera.",
      "face_close": "Face too close,\nplease move slightly away.",
      "exp_dark": "The image is too dark.\nFind a well-lit environment.",
      "exp_light": "The image is too light.\nFind a dimmer environment.",
      "blur": "Image is not sharp,\nplease stay still.",
      "capture_error": "We could not capture an image.\nAccess to the camera is required.",
      "challenge_0": "Slowly move your face to the center.",
      "challenge_out": "Watch out, your face is too far in the specified direction.\nMove slightly back.",
      "challenge_1": "Slowly move your face up\nand hold still.",
      "challenge_12": "Watch out, you are looking diagonally upwards.\nSlowly move your face upwards and hold still.",
      "challenge_14": "Watch out, you are looking diagonally upwards.\nSlowly move your face upwards and hold still.",
      "challenge_2": "Slowly move your face to the right\nand hold still.",
      "challenge_21": "Watch out, you are looking diagonally upwards.\nSlowly move your face to the right and hold still.",
      "challenge_23": "Watch out, you are looking diagonally downwards.\nSlowly move your face to the right and hold still.",
      "challenge_3": "Slowly move your face down\nand hold still.",
      "challenge_32": "Watch out, you are looking diagonally downwards.\nSlowly move your face downwards and hold still.",
      "challenge_34": "Watch out, you are looking diagonally downwards.\nSlowly move your face downwards and hold still.",
      "challenge_4": "Slowly move your face to the left\nand hold still.",
      "challenge_41": "Watch out, you are looking diagonally upwards.\nSlowly move your face to the left and hold still.",
      "challenge_43": "Watch out, you are looking diagonally downwards.\nSlowly move your face to the left and hold still."
    }
  ),
  ...
The tool uses a collection of neural networks. Make sure that you host the full directory so the models can be accessed. The models path can be configured (see Configuration). The models are located under models/. Model cards can also be found in this directory.
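For example, if you host the models under a different path, point MODELS_PATH at it; the URL below is illustrative.
// Sketch: point the SDK at a custom models location (illustrative URL).
let FV = new FaceVerify();
FV.init({
  MODELS_PATH: "https://static.example.com/faceverify/models/",
  // other configurations
});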
User challenges are implemented in order to prevent video injection attacks. These challenges are randomly chosen, so each process differs from the others. The challenges consist of head pose estimation. The performed head poses will be compared with the challenges, and the result will be returned as a bool in output (see Output).
There are four poses that can be detected: up, right, down, and left.
Challenges are embedded in the TOKEN and are therefore not directly visible.
let FV = new FaceVerify();
FV.init({
  CONTAINER_ID: 'FV_mount',
  LANGUAGE: 'nl',
  TOKEN: "<SDK_TOKEN>",
  ...
The SDK will output in the following structure:
{
  "images": [{"data":"<BASE64_IMG>", "type":"LIVE"}, "..."],
  "meta": [{"x":"", "y":"", "width":"", "height":""}, "..."],
  "token": "<SDK_TOKEN>",
  "transactionId": "<TRANSACTION_ID>",
  "valid_challenges": "true|false"
}
Example:
{
  "images": [{"data":"/9j/4AAQSkZJRgABAQAAAQABAAD/...", "type":"LIVE"}, {"data":"/9j/4AAQSkZJRgABAQAAAQABAAD/...", "type":"LIVE"}, {"data":"/9j/4AAQSkZJRgABAQAAAQABAAD/...", "type":"LIVE"}],
  "meta": [{"x": 33, "y": 182, "width": 265, "height": 354}, {"x": 33, "y": 182, "width": 265, "height": 354}, {"x": 33, "y": 182, "width": 265, "height": 354}],
  "token": "<SDK_TOKEN>",
  "transactionId": "<TRANSACTION_ID>",
  "valid_challenges": true
}
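Each images entry is a base64-encoded JPEG (note the /9j/ prefix), so a capture can be previewed directly in the page. A minimal sketch:
// Sketch: preview the first captured frame from the SDK output.
// `output` is the object passed to onComplete.
function previewFirstImage(output) {
  const img = document.createElement('img');
  img.src = 'data:image/jpeg;base64,' + output.images[0].data;
  document.body.appendChild(img);
}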
If you want to send the images to the Datachecker FaceVerify API, you must add a comparison image. This comparison image can either be a portrait picture from an identity card or a selfie. To add this image, you need to use type: "COMPARE".
Example JS:
let faceverify_output = {
  "images": [{"data":"/9j/4AAQSkZJRgABAQAAAQABAAD/...", "type":"LIVE"}, {"data":"/9j/4AAQSkZJRgABAQAAAQABAAD/...", "type":"LIVE"}, {"data":"/9j/4AAQSkZJRgABAQAAAQABAAD/...", "type":"LIVE"}],
  "meta": [{"x": 33, "y": 182, "width": 265, "height": 354}, {"x": 33, "y": 182, "width": 265, "height": 354}, {"x": 33, "y": 182, "width": 265, "height": 354}],
  "token": "<SDK_TOKEN>",
  "transactionId": "<TRANSACTION_ID>",
  "valid_challenges": true
}

let images = faceverify_output.images
let portrait_image = {"data":"/9j/4AAQSkZJRgABAQAAAQABAAD/...", "type":"COMPARE"}
images.unshift(portrait_image)

let data = {"images": images, "transaction_id": faceverify_output.transactionId}

fetch(<BASE_ENDPOINT>+"/faceverify", {
  method: 'POST',
  headers: <HEADERS>,
  body: JSON.stringify(data)
})
.then(response => response.json())