
@photon-ai/flux
An open-source CLI for deploying LangChain agents to iMessage in seconds
Flux is an open-source CLI tool that lets developers build and deploy LangChain agents connected to iMessage, at no cost and in under 5 seconds.
Get started with Flux in seconds:
# No installation needed - use npx directly
npx @photon-ai/flux login
# Create your agent file
echo 'export default {
  async invoke({ message }: { message: string }) {
    return `You said: ${message}`;
  }
};' > agent.ts
# Test locally
npx @photon-ai/flux run --local
# Deploy to production
npx @photon-ai/flux run --prod
For the best experience, install Flux globally:
npm install -g @photon-ai/flux
# or
bun add -g @photon-ai/flux
Then run commands directly:
flux login
flux run --local
flux run --prod
You can use Flux without installing it by using npx or bunx:
npx @photon-ai/flux login
npx @photon-ai/flux run --local
# or
bunx @photon-ai/flux login
bunx @photon-ai/flux run --local
Note: When using npx or bunx, there's no need to install the package first. npx or bunx will download and execute it automatically.
If you're integrating Flux into a project:
npm install @photon-ai/flux
# or
bun add @photon-ai/flux
Create an agent.ts file containing your LangChain agent, and make sure the file has a default export (export default agent). Here's a simple example:
// agent.ts
export default {
  async invoke({ message }: { message: string }) {
    return `You said: ${message}`;
  }
};
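Reading across the examples in this README, the agent contract appears to be roughly the following. This is a hypothetical reconstruction for illustration only; the package exports its own FluxAgent type, which may differ in detail:

```typescript
// Hypothetical sketch of the agent shape, reconstructed from this
// README's examples -- the real FluxAgent type from @photon-ai/flux
// may differ.
interface AgentShape {
  invoke(input: {
    message: string;
    userPhoneNumber?: string;
    messageGuid?: string;
  }): string | Promise<string>;
}

// The minimal echo agent above satisfies this shape:
const echoAgent: AgentShape = {
  async invoke({ message }) {
    return `You said: ${message}`;
  },
};
```

Anything that satisfies this shape and is the file's default export should pass flux validate.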
Authenticate with your phone number and iMessage:
flux login
# or
npx @photon-ai/flux login
Example session:
Enter your phone number (e.g. +15551234567): +1234567890
[FLUX] Requesting verification code...
[FLUX] Verification code: d33gwu
[FLUX] Opening iMessage to send verification code...
[FLUX] Please send the code "d33gwu" to +16286298650 via iMessage.
[FLUX] Waiting for verification...
[FLUX] Successfully logged in as +1234567890
If already logged in:
[FLUX] Already logged in as +1234567890
To log out:
flux logout
[FLUX] Logged out.
Validate that your agent works and exports correctly:
flux validate
# or
npx @photon-ai/flux validate
Output:
[FLUX] Validating agent.ts...
[FLUX] Agent is valid!
Test your agent through your terminal (no iMessage connection):
flux run --local
# or
npx @photon-ai/flux run --local
Interactive session:
[FLUX] Welcome to Flux! Your agent is loaded.
[FLUX] Type a message to test it. Press Ctrl+C to exit.
You: Hello!
[FLUX] Thinking...
Agent: Hello! How can I assist you today?
Run your agent locally and connect it to the iMessage bridge. When you message the Flux number (+16286298650) from your registered phone, you'll receive responses from your LangChain agent:
flux run --prod
# or
npx @photon-ai/flux run --prod
Output:
[FLUX] Loading agent from agent.ts...
[FLUX] Agent loaded successfully!
[FLUX] Connected to server at fluxy.photon.codes:443
[FLUX] Registered agent for +1234567890
[FLUX] Agent running in production mode. Press Ctrl+C to stop.
[FLUX] Messages to +1234567890 will be processed by your agent.
Now text +16286298650 from your phone to interact with your agent!
| Command | Description |
|---|---|
| flux or npx @photon-ai/flux | Show help |
| flux whoami | Check the currently logged-in account |
| flux login | Log in or sign up with your phone number |
| flux logout | Log out of the current session |
| flux validate | Check your agent code for errors |
| flux run --local | Start the development server (local testing mode) |
| flux run --prod | Start with the live iMessage bridge |
Authentication is based on iMessage to ensure secure and simple access:
- You send a verification code to the Flux number (+16286298650) via iMessage to prove phone ownership
- Credentials are stored locally in credentials.json, so you only need to log in once

Agents can initiate conversations without waiting for a user message. In production mode, onInit receives a sendMessage function that lets your agent send messages at any time.
let sendMessage: (to: string, text: string) => Promise<boolean>;
export default {
  async onInit(send) {
    sendMessage = send;
    // Now you can call sendMessage() anywhere in your agent
  },
  // ...
};
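As a sketch of how this might be wired up: the agent below stores the injected send function and exposes a notify helper. Note that notify is a hypothetical local method invented for this example, not part of the Flux API; in production the bridge supplies the real send function to onInit.

```typescript
// Sketch of a proactive agent. SendMessageFn matches the signature
// declared above; `notify` is a hypothetical local helper, not part of
// the Flux API.
type SendMessageFn = (to: string, text: string) => Promise<boolean>;

let sendMessage: SendMessageFn | undefined;

const proactiveAgent = {
  async onInit(send: SendMessageFn) {
    sendMessage = send; // keep the handle so the agent can message later
  },
  async invoke({ message }: { message: string }) {
    return `You said: ${message}`;
  },
  // Agent-initiated message: returns false until onInit has run.
  async notify(to: string, text: string): Promise<boolean> {
    if (!sendMessage) return false;
    return sendMessage(to, text);
  },
};
```

In a local test you can pass a stand-in send function to onInit and confirm that agent-initiated messages flow through it.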
Agents can send multi-bubble responses by including '\n' in the reply.
The splitIntoMessages helper splits the response on \n and sends each resulting segment as its own message.
For example, "Hello!\nHow are you?\nNice to meet you!" will be sent as three separate message bubbles.
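The splitIntoMessages helper itself isn't shown in this README; purely to illustrate the described behavior, a minimal sketch (under an assumed name, splitIntoBubbles) might look like:

```typescript
// Minimal sketch of the documented splitting behavior: split on \n and
// keep each non-empty segment as its own bubble. The real helper ships
// with Flux and may differ.
function splitIntoBubbles(response: string): string[] {
  return response
    .split("\n")
    .map((part) => part.trim())
    .filter((part) => part.length > 0);
}
```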
Agents can send tapback reactions (love, like, dislike, laugh, emphasize, question). When your agent receives a message, it can react to it using the sendTapback function.
To use sendTapback, you need to:
- Save sendTapback in onInit once at startup
- Call it in invoke when processing messages

import { FluxAgent, SendTapbackFn } from '@photon-ai/flux';
let sendTapback: SendTapbackFn | undefined;
const agent: FluxAgent = {
  onInit: async (_sendMessage, _sendTapback) => {
    sendTapback = _sendTapback; // Save it for later
  },
  invoke: async ({ message, userPhoneNumber, messageGuid }) => {
    // Now you can use it
    if (sendTapback && messageGuid) {
      await sendTapback(messageGuid, 'love', userPhoneNumber);
    }
    return "Hello!";
  },
};
export default agent;
An agent with a weather tool (returns mock data).
import * as z from "zod";
import { createAgent, tool } from "langchain";
const getWeather = tool(
  ({ city }) => `It's always sunny in ${city}!`,
  {
    name: "get_weather",
    description: "Get the weather for a given city",
    schema: z.object({
      city: z.string(),
    }),
  },
);
const agent = createAgent({
  model: "claude-sonnet-4-5-20250929",
  tools: [getWeather],
});
export default agent;
An advanced agent with memory:
// agent.ts
import "dotenv/config";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, AIMessage, SystemMessage } from "@langchain/core/messages";
const llm = new ChatOpenAI({ modelName: "gpt-4o-mini" });
const history: Array<HumanMessage | AIMessage> = [];
export default {
  async invoke({ message }: { message: string }) {
    history.push(new HumanMessage(message));
    const response = await llm.invoke([
      new SystemMessage("You are a helpful assistant. Be concise."),
      ...history,
    ]);
    const reply = response.content as string;
    history.push(new AIMessage(reply));
    // Keep last 20 messages to avoid token limits
    if (history.length > 20) {
      history.splice(0, 2);
    }
    return reply;
  }
};
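The sliding-window trim at the end of invoke can be viewed as a pure helper. This is just a sketch of that step, under the assumption that entries are appended in human/AI pairs:

```typescript
// Sketch of the history-trimming step above as a pure function: once
// the window exceeds maxLength entries, drop the oldest human/AI pair
// until it fits. Assumes entries arrive in pairs.
function trimHistory<T>(history: readonly T[], maxLength = 20): T[] {
  const trimmed = [...history];
  while (trimmed.length > maxLength) {
    trimmed.splice(0, 2); // drop the oldest pair
  }
  return trimmed;
}
```

Keeping the cap as a pure function makes the bound easy to test independently of the LLM call.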
A conversational chatbot with tapback functionalities:
import "dotenv/config";
import OpenAI from "openai";
const openai = new OpenAI();
const conversations = new Map<string, Array<{ role: "user" | "assistant" | "system"; content: string }>>();
let sendTapback: ((messageGuid: string, reaction: string, userPhoneNumber: string) => Promise<boolean>) | undefined;
export default {
  onInit: async (_sendMessage: any, _sendTapback: any) => {
    sendTapback = _sendTapback;
  },
  invoke: async ({ message, userPhoneNumber, messageGuid }: { message: string; userPhoneNumber: string; messageGuid?: string }) => {
    // Get or create conversation history
    if (!conversations.has(userPhoneNumber)) {
      conversations.set(userPhoneNumber, [{
        role: "system",
        content: "You are a friendly iMessage assistant. Keep responses concise. For positive messages, start with [TAPBACK:love], [TAPBACK:laugh], or [TAPBACK:like]."
      }]);
    }
    const history = conversations.get(userPhoneNumber)!;
    history.push({ role: "user", content: message });
    // Call OpenAI
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: history,
    });
    let response = completion.choices[0]?.message?.content || "Hello!";
    // Extract and send tapback if present
    const tapbackMatch = response.match(/^\[TAPBACK:(love|like|laugh|emphasize)\]/i);
    if (tapbackMatch && sendTapback && messageGuid) {
      await sendTapback(messageGuid, tapbackMatch[1].toLowerCase(), userPhoneNumber);
      response = response.replace(tapbackMatch[0], "").trim();
    }
    history.push({ role: "assistant", content: response });
    return response;
  },
};
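The [TAPBACK:...] tag handling in the example can be isolated as a pure function, which makes it easy to unit-test. This sketch uses the same regex as the example above; the function name is invented for illustration:

```typescript
// Isolated sketch of the tag-extraction step: pull an optional leading
// [TAPBACK:...] tag off the model's response, returning the reaction
// (if any) and the cleaned text.
function parseTapbackPrefix(response: string): { reaction?: string; text: string } {
  const match = response.match(/^\[TAPBACK:(love|like|laugh|emphasize)\]/i);
  if (!match) return { text: response };
  return {
    reaction: match[1].toLowerCase(),
    text: response.replace(match[0], "").trim(),
  };
}
```

Separating the parsing from the OpenAI call means the prompt convention can change without touching the messaging logic.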
Connecting agents to messaging platforms traditionally involves complex processes like setting up servers, configuring webhooks, and dealing with platform APIs. Most solutions rely on SMS or WhatsApp, which can be unintuitive for many users.
Flux removes this friction: you authenticate once with your phone number, then deploy a LangChain agent to iMessage with a single command, with no servers or webhooks to configure.
Important: Flux is designed for personal use and development. When you deploy an agent with Flux, only your registered phone number can interact with it via iMessage.
For enterprise-level iMessage agents with advanced features, consider our Advanced iMessage Kit:
Explore Advanced iMessage Kit →
We welcome contributions! Flux is fully open source and community-driven.
MIT License - see the LICENSE file for details.