
Security News
Socket Releases Free Certified Patches for Critical vm2 Sandbox Escape
A critical vm2 sandbox escape can allow untrusted JavaScript to break isolation and execute commands on the host Node.js process.
ai-risk-extractor
ai-risk-extractor is a lightweight Python package that extracts structured risk insights from free‑form user prompts about autonomous AI agents, task injection, AI agency, and related vulnerabilities. By leveraging a language model (default ChatLLM7), the package parses the input text and returns a standardized, machine‑readable summary that highlights threat levels, involved components, and possible exploitation methods.
pip install ai_risk_extractor
from ai_risk_extractor import ai_risk_extractor
# Example user prompt describing an AI risk scenario
prompt = """
An autonomous AI assistant receives a hidden instruction from a malicious user
that causes it to execute a privileged system command. The instruction is
embedded in a seemingly harmless chat message.
"""
# Extract structured risk information (uses default ChatLLM7)
risk_summary = ai_risk_extractor(user_input=prompt)
print(risk_summary)
from typing import List, Optional

from langchain_core.language_models import BaseChatModel


def ai_risk_extractor(
    user_input: str,
    api_key: Optional[str] = None,
    llm: Optional[BaseChatModel] = None,
) -> List[str]:
    """
    Process `user_input` with a language model and return extracted risk data.

    Parameters
    ----------
    user_input : str
        The free-form text describing AI scenarios or concerns.
    api_key : Optional[str]
        API key for the default `ChatLLM7`. If omitted, the function reads
        the `LLM7_API_KEY` environment variable. If that is also missing,
        a placeholder key `"None"` is used (the request is still routed
        to the LLM7 endpoint).
    llm : Optional[BaseChatModel]
        Any LangChain `BaseChatModel` instance. If omitted, `ChatLLM7` from
        `langchain_llm7` is instantiated automatically.

    Returns
    -------
    List[str]
        A list of extracted data strings that match the internal regex pattern.
    """
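The return value is whatever substrings the package's internal regex pulls out of the model's reply. That pattern is internal to the package and not documented here, but the general technique can be sketched with the standard library alone. In this illustration, both the `extract_items` helper and the bullet-line pattern are hypothetical stand-ins, not the package's actual implementation:

```python
import re

# Hypothetical pattern: capture bullet-style lines from a raw model reply.
# The real regex used by ai_risk_extractor may differ.
BULLET_PATTERN = re.compile(r"^[-*]\s+(.+)$", re.MULTILINE)

def extract_items(llm_response: str) -> list[str]:
    """Return the text of each bullet line found in the reply."""
    return BULLET_PATTERN.findall(llm_response)

reply = """Risk summary:
- Threat level: high
- Component: chat ingestion pipeline
- Exploitation: hidden instruction in a user message
"""
print(extract_items(reply))
# ['Threat level: high', 'Component: chat ingestion pipeline',
#  'Exploitation: hidden instruction in a user message']
```

This mirrors the shape of the documented return type: a `List[str]` of matched fragments rather than a parsed object.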
You can provide any LangChain‑compatible chat model instead of the default ChatLLM7.
from langchain_openai import ChatOpenAI
from ai_risk_extractor import ai_risk_extractor

my_llm = ChatOpenAI(model="gpt-4")
result = ai_risk_extractor(user_input=prompt, llm=my_llm)

from langchain_anthropic import ChatAnthropic
from ai_risk_extractor import ai_risk_extractor

my_llm = ChatAnthropic(model="claude-2.1")
result = ai_risk_extractor(user_input=prompt, llm=my_llm)

from langchain_google_genai import ChatGoogleGenerativeAI
from ai_risk_extractor import ai_risk_extractor

my_llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")
result = ai_risk_extractor(user_input=prompt, llm=my_llm)
The default ChatLLM7 free tier provides generous rate limits suitable for most development and research workflows. If you need higher limits, obtain a personal API key by registering at:
https://token.llm7.io/
Provide the key either:
- as the LLM7_API_KEY environment variable, or
- via the api_key argument:

result = ai_risk_extractor(user_input=prompt, api_key="YOUR_LLM7_API_KEY")
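Per the docstring above, the key is resolved in order: explicit `api_key` argument first, then the `LLM7_API_KEY` environment variable, then the literal placeholder string `"None"`. A minimal stdlib-only sketch of that lookup order (the helper name `resolve_llm7_key` is illustrative, not part of the package):

```python
import os
from typing import Optional

def resolve_llm7_key(api_key: Optional[str] = None) -> str:
    """Mirror the documented lookup order: argument, env var, placeholder."""
    if api_key is not None:
        return api_key
    return os.environ.get("LLM7_API_KEY", "None")

# An explicit argument wins over the environment.
os.environ["LLM7_API_KEY"] = "env-key"
print(resolve_llm7_key("arg-key"))  # arg-key
print(resolve_llm7_key())           # env-key

# With neither source set, the placeholder string "None" is used.
del os.environ["LLM7_API_KEY"]
print(resolve_llm7_key())           # None (the string, not the Python None)
```

Note that the fallback is the string `"None"`, not Python's `None`, so the request is still sent to the LLM7 endpoint rather than failing locally.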
Contributions, suggestions, and bug reports are welcome! Feel free to open a pull request or discuss enhancements.
If you encounter any problems, please open an issue on GitHub:
https://github.com/chigwell/ai_risk_extractor/issues
This project is licensed under the MIT License.
Eugene Evstafev
Email: hi@euegne.plus
GitHub: @chigwell
FAQs
We found that ai-risk-extractor demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.