landingai

Helper library for interacting with LandingAI LandingLens

  • 0.3.49
  • PyPI

Maintainers
1



LandingLens Python Library

The LandingLens Python library contains the LandingLens development library and examples that show how to integrate your app with LandingLens in a variety of scenarios. The examples cover different model types, image acquisition sources, and post-processing techniques.

Documentation

Quick start

Install

First, install the LandingAI Python library:

pip install landingai

Acquire Your First Images

After installing the LandingAI Python library, you can start acquiring images from one of many image sources.

For example, from a single image file:

from landingai.pipeline.frameset import Frame

frame = Frame.from_image("/path/to/your/image.jpg")
frame.resize(width=512, height=512)
frame.save_image("/tmp/resized-image.png")
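
For comparison, the same open/resize/save flow can be done with Pillow alone. This is only a sketch of the underlying image operation (it assumes Pillow is installed, and it generates a sample image to stand in for your own file); the Frame API layers convenience methods and metadata on top of this:

```python
from PIL import Image

# Create a sample image to stand in for your own file.
Image.new("RGB", (800, 600), color="white").save("/tmp/sample.jpg")

# Plain-Pillow version of the Frame workflow above:
# open an image, resize it, and save the result.
img = Image.open("/tmp/sample.jpg")
resized = img.resize((512, 512))  # (width, height)
resized.save("/tmp/resized-image.png")
```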

You can also extract frames from your webcam. For example:

from landingai.pipeline.image_source import Webcam

with Webcam(fps=0.5) as webcam:
    for frame in webcam:
        frame.resize(width=512, height=512)
        frame.save_image("/tmp/webcam-image.png")
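
The fps argument controls how often a frame is pulled from the camera: fps=0.5 means roughly one frame every two seconds. A minimal sketch of that rate-to-interval arithmetic in plain Python (illustrative only, not the library's implementation):

```python
def frame_interval(fps: float) -> float:
    """Seconds to wait between captured frames for a given rate."""
    if fps <= 0:
        raise ValueError("fps must be positive")
    return 1.0 / fps

print(frame_interval(0.5))  # one frame every 2.0 seconds
print(frame_interval(30))   # roughly 0.033 seconds between frames
```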

To learn how to acquire images from more sources, go to Image Acquisition.

Run Inference

If you have deployed a computer vision model in LandingLens, you can use this library to send images to that model for inference.

For example, let's say we've created and deployed a model in LandingLens that detects coffee mugs. Now, we'll use the code below to extract images (frames) from a webcam and run inference on those images.

[!NOTE] If you don't have a LandingLens account, create one here. You will need an endpoint ID and an API key from LandingLens in order to run inference. See Running Inferences / Getting Started.

[!NOTE] Learn how to use LandingLens from our Support Center and Video Tutorial Library. Need help with specific use cases? Post your questions in our Community.

from landingai.pipeline.image_source import Webcam
from landingai.predict import Predictor

predictor = Predictor(
    endpoint_id="abcdef01-abcd-abcd-abcd-01234567890",
    api_key="land_sk_xxxxxx",
)
with Webcam(fps=0.5) as webcam:
    for frame in webcam:
        frame.resize(width=512)
        frame.run_predict(predictor=predictor)
        frame.overlay_predictions()
        if "coffee-mug" in frame.predictions:
            frame.save_image("/tmp/latest-webcam-image.png", include_predictions=True)
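
The snippet above hardcodes the endpoint ID and API key for brevity. In practice you may prefer to read them from environment variables so credentials stay out of source control; the variable names below are illustrative, not something the landingai library defines or looks for itself:

```python
import os

def load_landinglens_credentials():
    """Read credentials from the environment instead of source code.

    LANDINGAI_ENDPOINT_ID / LANDINGAI_API_KEY are example names,
    not variables the landingai library reads on its own.
    """
    endpoint_id = os.environ.get("LANDINGAI_ENDPOINT_ID")
    api_key = os.environ.get("LANDINGAI_API_KEY")
    if not endpoint_id or not api_key:
        raise RuntimeError(
            "Set LANDINGAI_ENDPOINT_ID and LANDINGAI_API_KEY before running"
        )
    return endpoint_id, api_key
```

The returned pair can then be passed to Predictor(endpoint_id=..., api_key=...).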

Examples

We've provided some examples in Jupyter Notebooks to focus on ease of use, and some examples in Python apps to provide a more robust and complete experience.

| Example | Description | Type |
| ------- | ----------- | ---- |
| Poker Card Suit Identification | This notebook shows how to use an object detection model from LandingLens to detect suits on playing cards. A webcam is used to take photos of playing cards. | Jupyter Notebook / Colab |
| Door Monitoring for Home Automation | This notebook shows how to use an object detection model from LandingLens to detect whether a door is open or closed. An RTSP camera is used to acquire images. | Jupyter Notebook / Colab |
| Satellite Images and Post-Processing | This notebook shows how to use a Visual Prompting model from LandingLens to identify different objects in satellite images. The notebook includes post-processing scripts that calculate the percentage of ground cover that each object takes up. | Jupyter Notebook / Colab |
| License Plate Detection and Recognition | This notebook shows how to extract frames from a video file and use an object detection model and OCR from LandingLens to identify and recognize different license plates. | Jupyter Notebook / Colab |
| Streaming Video | This application shows how to continuously run inference on images extracted from a streaming RTSP video camera feed. | Python application |

Run Examples Locally

All the examples in this repo can be run locally.

To give you some guidance, here's how you can run the rtsp-capture example locally in a shell environment:

  1. Clone the repo: git clone https://github.com/landing-ai/landingai-python.git
  2. Install the library and example dependencies: poetry install --with examples (see the Poetry docs for how to install Poetry)
  3. Activate the virtual environment: poetry shell
  4. Run the example: python landingai-python/examples/capture-service/run.py
