
A fluent interface for working with LLMs, providing a clean and intuitive API for AI-powered applications.
Expressive, opinionated, and intuitive 'fluent interface' Python library for working with LLMs.
Express every LLM interaction in your app prototypes in a single statement, without having to reach for documentation, looking up model capabilities, or writing boilerplate code.
# On Unix/macOS
export OPENAI_API_KEY=sk-...
export ANTHROPIC_API_KEY=sk-...
# On Windows (cmd)
set OPENAI_API_KEY=sk-...
set ANTHROPIC_API_KEY=sk-...
# On Windows (PowerShell)
$env:OPENAI_API_KEY="sk-..."
$env:ANTHROPIC_API_KEY="sk-..."
The llm global instance is an LLMPromptBuilder instance, which can be used to build prompts.
The following prompt components can be used in an arbitrary order and multiple times:
.agent(str): Sets the agent description, defines system behavior.
.context(str): Passes textual context to the LLM.
.request(str): Passes the main request to the LLM.
.image(str): Passes an image to the LLM.
.audio(str): Passes an audio file to the LLM.

The prompt chain is terminated by one of the following methods:

.prompt() -> str: Sends the prompt to the LLM and expects a text response.
.prompt_for_image() -> PIL.Image: Sends the prompt to the LLM and expects an image response.
.prompt_for_audio() -> soundfile.SoundFile: Sends the prompt to the LLM and expects an audio response.
.prompt_for_structured_output(pydantic_model) -> BaseModel: Sends the prompt to the LLM and expects a structured response.

They either return the desired response if processing was successful, or raise an exception otherwise.
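For example, the components and a terminator combine into a single chained statement. The sketch below is illustrative only: it assumes the default model selection picks a model that accepts audio input, and "interview.wav" is just a placeholder file name.

# Illustrative sketch: chains .agent(), .audio() and .request(), terminated by .prompt().
# Assumes the selected model supports audio input; "interview.wav" is a placeholder path.
response = llm \
.agent("You are a meticulous transcription assistant.") \
.audio("interview.wav") \
.request("Summarize the recording in one paragraph.") \
.prompt()
print(response)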
You can use this library as a callable module to experiment with LLMs.
> pip install fluent-llm
> fluent-llm "llm.request('1+2=?').prompt()"
1 + 2 = 3.
Or, even easier, run it without installing, as a tool with uvx:
uvx fluent-llm "llm.request('1+2=?').prompt()"
1 + 2 = 3.
response = llm \
.agent("You are an art evaluator.") \
.context("You received this painting and were tasked to evaluate whether it's museum-worthy.") \
.image("painting.png") \
.prompt()
print(response)
Asynchronous usage just works. See if you can spot the difference from the example above.
response = await llm \
.agent("You are an art evaluator.") \
.context("You received this painting and were tasked to evaluate whether it's museum-worthy.") \
.image("painting.png") \
.prompt()
print(response)
import PIL.Image

response = llm \
.agent("You are a 17th century classic painter.") \
.context("You were paid 10 francs for creating a portrait.") \
.request('Create a portrait of Louis XIV.') \
.prompt_for_image()
assert isinstance(response, PIL.Image.Image)
response.show()
from pydantic import BaseModel
class PaintingEvaluation(BaseModel):
    museum_worthy: bool
    reason: str
response = llm \
.agent("You are an art evaluator.") \
.context("You received this painting and were tasked to evaluate whether it's museum-worthy.") \
.image("painting.png") \
.prompt_for_type(PaintingEvaluation)
print(response)
Usage tracking and price estimation for the last call are built in.
>>> llm.request('How are you?').prompt()
"I'm doing well, thank you! How about you?"
>>> print(llm.usage)
=== Last API Call Usage ===
Model: gpt-4o-mini-2024-07-18
input_tokens: 11 tokens
output_tokens: 12 tokens
💰 Cost Breakdown:
input_tokens: 11 tokens → $0.000002
output_tokens: 12 tokens → $0.000007
💵 Total Call Cost: $0.000009
==============================
>>> llm.usage.cost.total_call_cost_usd
0.000009
>>> llm.usage.cost.breakdown['input_tokens'].count
11
If choosing a provider or model per-invocation is not sufficient, you can define a custom ModelSelectionStrategy and pass it to the LLMPromptBuilder constructor to select the provider and model based on your own criteria.
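The exact ModelSelectionStrategy interface is not documented here, so the following is only a rough sketch: the select() hook, its return value, and the model_selection_strategy constructor argument are all assumptions, and the real names and signatures may differ.

# Rough sketch only: hook name, return value, and constructor argument are assumptions.
from fluent_llm import LLMPromptBuilder, ModelSelectionStrategy  # import path assumed

class CheapTextOnlyStrategy(ModelSelectionStrategy):
    def select(self, prompt):  # hypothetical hook; check the library source for the real one
        # Always route to an inexpensive OpenAI text model, regardless of prompt contents.
        return "openai", "gpt-4o-mini"

my_llm = LLMPromptBuilder(model_selection_strategy=CheapTextOnlyStrategy())  # kwarg name assumed
print(my_llm.request("1+2=?").prompt())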
You can specify preferred providers and models using the fluent chain API:
# Use a specific provider (will select best available model)
response = await llm \
.provider("anthropic") \
.request("Hello, how are you?") \
.prompt()
# Use a specific model
response = await llm \
.model("claude-sonnet-4-20250514") \
.request("Write a poem about coding") \
.prompt()
# Combine provider and model preferences
response = await llm \
.provider("openai") \
.model("gpt-4.1-mini") \
.request("Explain quantum computing") \
.prompt()
If the defaults are not sufficient, you can customize the behavior of the builder by creating your own LLMPromptBuilder instead of using the llm global instance provided for convenience.
However, note that you may quickly reach the point where you are better off using the official OpenAI Python client library directly: this library is designed to be a simple, opinionated wrapper around the OpenAI API, not a full-featured LLM client.
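A minimal sketch of that, assuming the package exposes LLMPromptBuilder from the fluent_llm module and that the constructor works with default arguments; the real import path and parameters may differ:

# Minimal sketch, assuming a default-constructible LLMPromptBuilder and this import path.
from fluent_llm import LLMPromptBuilder

my_llm = LLMPromptBuilder()
response = my_llm.request("1+2=?").prompt()  # used like the llm global instance
print(response)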
Instead of using the convenience methods .prompt_*(), you can use the .call() method to execute the prompt and return a response. Pass a custom client to the .call() method to use it for the API call.
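For example, something along these lines; the exact .call() signature and the accepted client type are not documented here, so treat both as assumptions:

# Sketch only: the client keyword argument and the accepted client type are assumptions,
# not the documented signature; check the library source before relying on this.
from openai import OpenAI  # assuming an OpenAI-compatible client object is accepted

custom_client = OpenAI(base_url="https://my-gateway.example/v1", api_key="sk-...")
response = llm \
.request("1+2=?") \
.call(client=custom_client)
print(response)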
uv sync --dev
All tests are run with uv:
uv run pytest
tests/test_mocked.py
tests/test_live_api_*.py

Licensed under the MIT License.
Almost all code was written by Claude, o3, and SWE-1; concept and design by @hheimbuerger.