MoveNet 3D

Realtime 3D pose tracking in augmented reality with MoveNet and ARFoundation.

Installing MoveNet 3D

Add the following items to your Unity project's Packages/manifest.json:

{
  "scopedRegistries": [
    {
      "name": "NatML",
      "url": "https://registry.npmjs.com",
      "scopes": ["ai.fxn", "ai.natml"]
    }
  ],
  "dependencies": {
    "ai.natml.vision.movenet-3d": "1.0.2"
  }
}

Predicting 3D Pose in Augmented Reality

These steps assume that you are starting with an AR scene in Unity with an ARSession and ARSessionOrigin. In your pose detection script, first create the MoveNet 3D predictor:

// Requires `using NatML.Vision;` in the script
MoveNet3DPredictor predictor;

async void Start () {
    // Create the MoveNet 3D predictor
    predictor = await MoveNet3DPredictor.Create();
}

In Update, acquire the latest CPU camera image and depth image from ARFoundation, then predict the pose:

// Requires `using NatML.Features;`, `using UnityEngine;`, and `using UnityEngine.XR.ARFoundation;`

// Assign these in the Inspector
public Camera arCamera;
public ARCameraManager cameraManager;
public AROcclusionManager occlusionManager;

void Update () {
    // Wait until the predictor has been created
    if (predictor == null)
        return;
    // Get the latest camera image
    if (!cameraManager.TryAcquireLatestCpuImage(out var image))
        return;
    // Get the latest environment depth image
    if (!occlusionManager.TryAcquireEnvironmentDepthCpuImage(out var depth)) {
        image.Dispose();
        return;
    }
    // Create an ML feature for the camera image
    var imageType = image.GetFeatureType();
    var imageFeature = new MLImageFeature(imageType.width, imageType.height);
    imageFeature.CopyFrom(image);
    // Create an ML feature for the depth image
    var depthFeature = new MLXRCpuDepthFeature(depth, arCamera);
    // Predict the pose
    MoveNet3DPredictor.Pose pose = predictor.Predict(imageFeature, depthFeature);
    // Dispose the acquired images
    image.Dispose();
    depth.Dispose();
}

The pose contains 3D world positions for each detected keypoint.
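
As a rough illustration, these world positions can be used to drive scene objects or a skeleton overlay. The sketch below is a minimal example, assuming the pose exposes named keypoints such as nose and leftWrist as Vector3 world positions; check the predictor's API for the actual member names.

// Hypothetical visualization sketch. The keypoint member names (nose, leftWrist)
// are assumptions; consult the MoveNet3DPredictor.Pose API for the actual names.
void VisualizePose (MoveNet3DPredictor.Pose pose, Transform noseMarker, Transform wristMarker) {
    // Place scene objects at the detected world positions
    noseMarker.position = pose.nose;
    wristMarker.position = pose.leftWrist;
}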

Note that on older iOS devices that don't support environment depth, you can use the human depth image instead, which is supported on iPhone XS/XR and newer.
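
For illustration, a minimal sketch of this fallback is shown below. TryAcquireHumanDepthCpuImage is part of ARFoundation's AROcclusionManager; passing the resulting image to MLXRCpuDepthFeature in the same way as the environment depth image is an assumption here.

// Sketch: fall back to the human depth image when environment depth is unavailable.
// Assumes MLXRCpuDepthFeature accepts the human depth image like the environment depth image.
if (occlusionManager.TryAcquireHumanDepthCpuImage(out var humanDepth)) {
    // Create the depth feature from the human depth image
    var depthFeature = new MLXRCpuDepthFeature(humanDepth, arCamera);
    // Predict using the camera image feature from above
    MoveNet3DPredictor.Pose pose = predictor.Predict(imageFeature, depthFeature);
    // Dispose the acquired depth image
    humanDepth.Dispose();
}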


Requirements

  • Unity 2022.3+

Thank you very much!
