
yugenkairo-sentinel-sdk
Python SDK for Sentinel, an enterprise LLM security gateway: a self-healing LLM firewall with AI-powered threat detection and cryptographic data protection.
The Sentinel Python SDK provides a secure interface to LLM providers through the Sentinel security pipeline. It acts as a drop-in replacement for popular LLM SDKs while adding enterprise-grade security features, including prompt sanitization, AI-powered threat detection, response processing, and cryptographic protection of sensitive data.
Installation

pip install yugenkairo-sentinel-sdk
Quick Start

from sentinel import SentinelClient

# Initialize the client
client = SentinelClient(
    base_url="http://localhost:8080",
    api_key="your-api-key"
)

# Send a chat completion request through Sentinel
response = client.chat_completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello, world!"}
    ]
)

print(response.choices[0].message.content)
Advanced Usage

from sentinel import SentinelClient

# Initialize with custom configuration
client = SentinelClient(
    base_url="http://localhost:8080",
    api_key="your-api-key",
    timeout=60
)

# Sanitize a prompt before sending it to the LLM
sanitized = client.sanitize_prompt("Process sensitive data: 123-45-6789")
print(f"Sanitized prompt: {sanitized['sanitizedPrompt']}")

# Process an LLM response for security
response = "Here's the sensitive information: 123-45-6789"
processed = client.process_response(response)
print(f"Processed response: {processed['processedResponse']}")
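The example above shows an SSN being replaced by the sanitizer. As a rough illustration of the idea only (this is not the SDK's actual implementation; the regex, the `<SSN_n>` token format, and the return keys are invented for the sketch), a minimal PII tokenizer could look like:

```python
import re

# Hypothetical sketch of a "tokenize" action: replace each SSN with an
# opaque placeholder and keep a mapping for later detokenization.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def tokenize_ssns(prompt: str) -> dict:
    """Return the sanitized prompt plus a token-to-original mapping."""
    mapping = {}

    def _replace(match: re.Match) -> str:
        token = f"<SSN_{len(mapping)}>"
        mapping[token] = match.group(0)  # remember the original value
        return token

    sanitized = SSN_RE.sub(_replace, prompt)
    return {"sanitizedPrompt": sanitized, "tokens": mapping}

result = tokenize_ssns("Process sensitive data: 123-45-6789")
print(result["sanitizedPrompt"])  # Process sensitive data: <SSN_0>
```

Keeping the mapping on the client side is what would let a gateway restore the original values in the LLM's response without ever sending them to the provider.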
API Reference

SentinelClient

__init__(base_url, api_key, timeout)
Initialize the Sentinel client.

Parameters:
base_url (str): The base URL for the Sentinel gateway (default: "http://localhost:8080")
api_key (str, optional): API key for authentication
timeout (int): Request timeout in seconds (default: 30)

sanitize_prompt(prompt)
Sanitize a prompt before sending it to the LLM.

Parameters:
prompt (str): The prompt to sanitize
Returns:
dict: Sanitized prompt and metadata

process_response(response)
Process an LLM response for security.

Parameters:
response (str): The LLM response to process
Returns:
dict: Processed response and metadata

configure_policies(policies)
Configure security policies.

Parameters:
policies (dict): Policy configuration
Returns:
dict: Policy update result

chat_completions.create(model, messages, temperature, max_tokens, **kwargs)
Create a chat completion through the Sentinel gateway.

Parameters:
model (str): The model to use
messages (list): List of message dictionaries
temperature (float, optional): Sampling temperature
max_tokens (int, optional): Maximum tokens to generate
**kwargs: Additional parameters
Returns:
dict: Chat completion response

Environment Variables

SENTINEL_BASE_URL: Default base URL for the Sentinel gateway
SENTINEL_API_KEY: Default API key for authentication
SENTINEL_TIMEOUT: Default request timeout in seconds

You can also configure the client from these environment variables in code:
import os
from sentinel import SentinelClient

# Load configuration from the environment
client = SentinelClient(
    base_url=os.getenv("SENTINEL_BASE_URL", "http://localhost:8080"),
    api_key=os.getenv("SENTINEL_API_KEY"),
    timeout=int(os.getenv("SENTINEL_TIMEOUT", "30"))
)
Error Handling

The SDK raises standard Python exceptions:

from sentinel import SentinelClient
import requests

client = SentinelClient(base_url="http://localhost:8080")

try:
    response = client.chat_completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello"}]
    )
except requests.exceptions.RequestException as e:
    print(f"Request failed: {e}")
except Exception as e:
    print(f"An error occurred: {e}")
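Because gateway calls can fail transiently, and the client shown above does not claim built-in retries, a small generic backoff wrapper (an illustrative pattern, not part of the SDK) can sit around any call:

```python
import time

# Generic retry helper: call fn(), retrying with exponential backoff
# on the given exception types.
def with_retries(fn, attempts=3, base_delay=0.1, retry_on=(Exception,)):
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Demo with a flaky function that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # ok
```

In real use you would pass `retry_on=(requests.exceptions.RequestException,)` so that only network-level failures are retried, not application errors.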
Multi-Tenant Usage

from sentinel import SentinelClient

# Different clients for different tenants
tenant_a_client = SentinelClient(
    base_url="http://localhost:8080",
    api_key="tenant-a-key"
)

tenant_b_client = SentinelClient(
    base_url="http://localhost:8080",
    api_key="tenant-b-key"
)
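If you serve many tenants, constructing a fresh client per request is wasteful. One possible convenience (entirely hypothetical, with SentinelClient stubbed out so the sketch runs standalone) is to cache one client per tenant key:

```python
from functools import lru_cache

# Stand-in stub for sentinel.SentinelClient, so this sketch is
# self-contained; in real code, import the class from the SDK instead.
class SentinelClient:
    def __init__(self, base_url, api_key):
        self.base_url = base_url
        self.api_key = api_key

@lru_cache(maxsize=None)
def client_for_tenant(api_key, base_url="http://localhost:8080"):
    """Return a cached client instance for the given tenant key."""
    return SentinelClient(base_url=base_url, api_key=api_key)

a = client_for_tenant("tenant-a-key")
b = client_for_tenant("tenant-b-key")
assert client_for_tenant("tenant-a-key") is a  # same instance reused
```

This keeps per-tenant connection state (and any server-side rate limiting keyed on the API key) consistent across calls.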
Policy Configuration

from sentinel import SentinelClient

client = SentinelClient(base_url="http://localhost:8080")

# Configure custom policies
policies = {
    "pii_detection": {
        "enabled": True,
        "languages": ["en", "es", "fr"],
        "action": "tokenize"
    },
    "prompt_filtering": {
        "enabled": True,
        "threshold": 0.75
    }
}

result = client.configure_policies(policies)
print(f"Policies configured: {result['success']}")
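Since configure_policies takes a free-form dict, a client-side sanity check can catch malformed policies before a network round trip. The checker below is inferred only from the fields used in the example above; the gateway's real schema may differ:

```python
# Hypothetical client-side validation of a policy dict, based solely on
# the fields shown in this README (not an official schema).
def validate_policies(policies: dict) -> list:
    """Return a list of problems; an empty list means the dict looks well-formed."""
    problems = []
    for name, cfg in policies.items():
        if not isinstance(cfg, dict):
            problems.append(f"{name}: expected a dict, got {type(cfg).__name__}")
            continue
        if not isinstance(cfg.get("enabled"), bool):
            problems.append(f"{name}: 'enabled' must be a bool")
        if "threshold" in cfg and not 0.0 <= cfg["threshold"] <= 1.0:
            problems.append(f"{name}: 'threshold' must be between 0 and 1")
    return problems

policies = {
    "pii_detection": {"enabled": True, "languages": ["en"], "action": "tokenize"},
    "prompt_filtering": {"enabled": True, "threshold": 0.75},
}
assert validate_policies(policies) == []
```

Running the check before calling `configure_policies` turns a confusing server-side rejection into an immediate, descriptive local error.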
LangChain Integration

from langchain.llms import Sentinel
from langchain.prompts import PromptTemplate

llm = Sentinel(
    base_url="http://localhost:8080",
    api_key="your-api-key"
)

template = "What is {subject}?"
prompt = PromptTemplate.from_template(template)
chain = prompt | llm

response = chain.invoke({"subject": "artificial intelligence"})
print(response)
LlamaIndex Integration

from llama_index.llms import Sentinel
from llama_index import VectorStoreIndex, SimpleDirectoryReader

llm = Sentinel(
    base_url="http://localhost:8080",
    api_key="your-api-key"
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, llm=llm)
query_engine = index.as_query_engine()

response = query_engine.query("What did the author do growing up?")
print(response)
Development

# Install from source
git clone https://github.com/swayam8624/Sentinel.git
cd Sentinel/sdk/python
pip install -e .

# Run tests
pip install pytest
pytest tests/

# Format and lint
pip install black flake8
black .
flake8 .
For full documentation, visit https://swayam8624.github.io/Sentinel/
For issues, feature requests, or questions, please open an issue on GitHub.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.