panoptica-genai-protection

Protecting GenAI from Prompt Injection

  • 0.1.18
  • PyPI

Panoptica GenAI Protection SDK

A simple Python client SDK for integrating with Panoptica GenAI Protection.


GenAI Protection is part of Panoptica, a cloud-native application protection platform (CNAPP), and protects LLM-backed systems. Specifically, the GenAI Protection SDK inspects both input prompts and model outputs, flagging those it identifies with a high degree of certainty as containing malicious content.

The Python SDK lets you programmatically integrate your system with our LLM protection software, so you can verify the safety level of a user-requested prompt before processing it. Based on this evaluation, your application can then decide how to proceed according to your policy.

Installation

pip install panoptica_genai_protection

Usage Example

Working assumptions:

  • You have generated a key-pair for GenAI Protection in the Panoptica settings screen
    • The access key is set in the GENAI_PROTECTION_ACCESS_KEY environment variable
    • The secret key is set in the GENAI_PROTECTION_SECRET_KEY environment variable
  • We denote the call that generates the LLM response as get_llm_response() (see the sketch after this list)

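For orientation, here is a minimal sketch of those assumptions: the environment-variable names come from the list above, and the body of get_llm_response() is only a hypothetical placeholder for your own LLM call.

import os

# Fail fast if the GenAI Protection credentials are not configured
# (the keys are expected in these environment variables, per the assumptions above).
for var in ("GENAI_PROTECTION_ACCESS_KEY", "GENAI_PROTECTION_SECRET_KEY"):
    if not os.getenv(var):
        raise RuntimeError(f"Missing required environment variable: {var}")

def get_llm_response(prompt: str) -> str:
    # Placeholder for your actual LLM call; replace with your own implementation.
    return "..."
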
GenAIProtectionClient provides the check_llm_prompt method to determine the safety level of a given prompt.

Sample Snippet

from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

# ... Other code in your module ...

# initialize the client
genai_protection_client = GenAIProtectionClient()

# Send the prompt for inspection BEFORE sending it to the LLM
inspection_result = genai_protection_client.check_llm_prompt(
  chat_request.prompt,
  api_name="chat_service",  # Name of the service running the LLM
  api_endpoint_name="/chat",  # Name of the endpoint serving the LLM interaction
  sequence_id=chat_id,  # UUID of the chat, if you don't have one, provide `None`
  actor="John Doe",  # Name of the "actor" interacting with the LLM service.
  actor_type="user",  # Actor type, one of {"user", "ip", "bot"}
)

if inspection_result.result == InspectionResult.safe:
  # Prompt is safe, generate an LLM response
  llm_response = get_llm_response(
    chat_request.prompt
  )

  # Call GenAI protection on LLM response (completion)
  inspection_result = genai_protection_client.check_llm_response(
    prompt=chat_request.prompt,
    response=llm_response,
    api_name="chat_service",
    api_endpoint_name="/chat",
    actor="John Doe",
    actor_type="user",
    request_id=inspection_result.reqId,
    sequence_id=chat_id,
  )
  if inspection_result.result != InspectionResult.safe:
    # LLM answer is flagged as unsafe, return a predefined error message to the user
    answer_response = "Something went wrong."
  else:
    # LLM answer is safe, return it to the user
    answer_response = llm_response
else:
  # Prompt is flagged as unsafe, return a predefined error message to the user
  answer_response = "Something went wrong."
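
Putting the pieces together, here is an illustrative sketch (not part of the SDK) that wraps the flow above in a reusable helper; get_llm_response() is the LLM call denoted in the assumptions, and the hard-coded service and actor values are examples only.

from typing import Optional

from panoptica_genai_protection.client import GenAIProtectionClient
from panoptica_genai_protection.gen.models import Result as InspectionResult

FALLBACK_MESSAGE = "Something went wrong."

def safe_chat(prompt: str, chat_id: Optional[str], actor: str = "John Doe") -> str:
    # Inspect the prompt, call the LLM only if the prompt is safe, then inspect the completion.
    client = GenAIProtectionClient()

    prompt_check = client.check_llm_prompt(
        prompt,
        api_name="chat_service",
        api_endpoint_name="/chat",
        sequence_id=chat_id,
        actor=actor,
        actor_type="user",
    )
    if prompt_check.result != InspectionResult.safe:
        return FALLBACK_MESSAGE

    llm_response = get_llm_response(prompt)  # your own LLM call

    response_check = client.check_llm_response(
        prompt=prompt,
        response=llm_response,
        api_name="chat_service",
        api_endpoint_name="/chat",
        actor=actor,
        actor_type="user",
        request_id=prompt_check.reqId,
        sequence_id=chat_id,
    )
    if response_check.result != InspectionResult.safe:
        return FALLBACK_MESSAGE
    return llm_response
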
Async Use

You may use the client in an async context in two ways:

async def my_async_call_to_gen_ai_protection(prompt: str):
    client = GenAIProtectionClient(as_async=True)
    return await client.check_llm_prompt_async(
        prompt=prompt,
        api_name="test",
        api_endpoint_name="/test",
        actor="John Doe",
        actor_type="user"
    )

or

async def my_other_async_call_to_gen_ai_protection(prompt: str):
    async with GenAIProtectionClient() as client:
        return await client.check_llm_prompt_async(
            prompt=prompt,
            api_name="test",
            api_endpoint_name="/test",
            actor="John Doe",
            actor_type="user"
        )
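
Either helper can then be driven from synchronous code. A minimal driver sketch, assuming the async call returns the same inspection result model as the synchronous check_llm_prompt example above:

import asyncio

from panoptica_genai_protection.gen.models import Result as InspectionResult

# Illustrative driver for the async helper above; the result is handled
# the same way as in the synchronous sample snippet.
inspection_result = asyncio.run(my_async_call_to_gen_ai_protection("Hello, world"))
if inspection_result.result != InspectionResult.safe:
    print("Prompt flagged as unsafe")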
