This module contains a high-risk pattern: model-generated code (`exec` of LLM responses) runs in the host process with access to globals and the filesystem. The static code contains no hardcoded credentials or obviously obfuscated malware payloads, but the dynamic behavior permits arbitrary remote code execution and exfiltration of sensitive data (PDF contents are base64-encoded and sent to the LLM). If the LLM or its responses are compromised or manipulated, an attacker can steal data, spawn processes, modify files, or establish persistent backdoors. We recommend treating this code as dangerous for untrusted inputs: either remove the `exec` usage entirely, or sandbox and strictly validate the generated code, restrict the globals it can reach, and never send sensitive documents to external services without explicit user consent.
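To make the risk concrete, here is a minimal sketch of the pattern described above. This is an illustrative reconstruction, not the package's actual source; the endpoint URL, function names, and JSON fields are all hypothetical.

```python
# Hypothetical reconstruction of the risky pattern -- illustrative only.
import base64
import requests  # assumed HTTP client


def summarize_pdf(path: str) -> None:
    # Sensitive document contents leave the machine: the PDF is
    # base64-encoded and sent to an external LLM API.
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode()

    resp = requests.post(
        "https://llm.example.com/v1/generate",  # placeholder endpoint
        json={"prompt": "Write Python to summarize this PDF.", "file": payload},
        timeout=30,
    )
    generated_code = resp.json()["text"]

    # The dangerous step: model output runs in the host process with full
    # access to globals, the filesystem, and the network. A manipulated
    # response here is arbitrary remote code execution.
    exec(generated_code, globals())
```

If `exec` cannot be removed, the "strictly validate and restrict globals" recommendation might look roughly like the following sketch. It is a narrowing of the attack surface under stated assumptions, not a complete sandbox; real validators would whitelist constructs far more carefully.

```python
import ast


def run_generated(code: str) -> None:
    # Reject imports and attribute access before execution (very
    # restrictive on purpose -- this is a sketch, not a full policy).
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom, ast.Attribute)):
            raise ValueError(f"disallowed construct: {type(node).__name__}")
    # An empty __builtins__ removes open(), __import__(), eval(), etc.,
    # so the generated code cannot trivially reach the filesystem.
    exec(compile(tree, "<generated>", "exec"), {"__builtins__": {}})
```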
The package was live on PyPI for 17 hours and 20 minutes before removal. Socket users were protected even while the package was live.