
Zero‑friction AI agent integration for JavaScript & TypeScript projects.
Integrate AI in 2 lines of code with Heylock.
You just set up your agent on the website, then use this package to integrate AI. No hard work. No boring code. Just plug it in and look smart.
Why use Heylock?
- `message()` – Full chat capabilities.
- `messageStream()` – Token‑by‑token replies for an "instant" feel.
- `greet()` – Tailored first message so prospects are more likely to engage.
- `shouldEngage()` – Decides "Should I pop up now?", with reasoning for debugging.
- `rewrite()` – Fix tone/grammar in one line.
- `sort()` – AI ranks any list (products, leads, docs, literally anything).
- `fetchBalanceRemaining()` – Know remaining balance early.
Run this in your console:

```shell
npm install heylock
```
Runtime requirements:
- A runtime with global fetch (Node 18+, modern browsers) and modern syntax support (private fields, optional chaining, async generators, nullish coalescing). For Node 16/14, add a fetch polyfill (undici) and ensure ES2020 support.

Quick start:

```javascript
import Heylock from 'heylock';

const agent = new Heylock(process.env.HEYLOCK_AGENT_KEY, {
  useStorage: false, // Set to false when running server-side
  useMessageHistory: true,
  suppressWarnings: false
});

agent.onInitialized(async (success) => {
  if (!success) return; // key rejected or network error
  const reply = await agent.message('Hello, Heylock!');
  console.log('Agent:', reply);
});
```
Streaming example (progressive UI):

```javascript
for await (const chunk of agent.messageStream('Explain this feature briefly.')) {
  process.stdout.write(chunk); // append to UI / console
}
```
| Concept | Summary |
|---|---|
| Agent | Your AI agent, set up in dashboard, used by key |
| Context | List of user actions for better replies |
| Message History | Chat log of user and assistant messages |
| Streaming | Get reply chunks as they arrive |
| Fallbacks | Returns partial results or warnings if needed |
API Reference

- `new Heylock(agentKey, options)` — Starts the agent and checks your key. Use `onInitialized(callback)` or check `isInitialized` to run code when ready.

Context:
- `addContextEntry(content, timestamp?)` — Add a new context fact. Timestamp is optional.
- `modifyContextEntry(index, content, timestamp?)` — Change a context entry by its index.
- `removeContextEntry(index)` — Remove a context entry by its index.
- `clearContext()` — Remove all context entries.
- `getContextString()` — Get a human-readable summary of context.

When `useStorage` is true in the browser, context is saved in localStorage.

Message history:
- `addMessage(content, role?)` — Add a message to the chat log. Role is 'user' or 'assistant' (default: 'user').
- `modifyMessage(index, content, role?)` — Change a message by its index.
- `removeMessage(index)` — Remove a message by its index.
- `setMessageHistory(messages)` — Replace the whole chat log.
- `clearMessageHistory()` — Remove all messages from the chat log.
- `onMessageHistoryChange(callback)` — Run code when the chat log changes.

AI methods:
- `message(content, useContext = true, saveToMessageHistory = true)` — Send a message to your agent and get the assistant's reply as a string. If saveToMessageHistory is true, both your message and the reply are saved in the chat log.
- `messageStream(content, useContext = true, saveToMessageHistory = true)` — Stream the assistant's reply in pieces (chunks) using an async generator. If saveToMessageHistory is true, the assistant's message in the chat log is updated live as new chunks arrive.
- `greet(instructions?, useContext = true, saveToMessageHistory = true)` — Get a friendly first message from the agent. Use instructions to guide the greeting.
- `shouldEngage(instructions?)` — Ask if the agent should pop up now. Returns `{ shouldEngage, reasoning, warning?, fallback }`.
- `rewrite(content, instructions?, useContext = true)` — Fix or change text using AI. Use instructions to guide the rewrite.
- `sort(array, instructions?, useContext = true)` — AI sorts your array. Returns `{ array, indexes, reasoning?, warning?, fallback? }`. If sorting fails, you get the original array and `fallback: true`.
- `fetchBalanceRemaining()` — Check your balance.

Interfaces: `AgentOptions` (useStorage, useMessageHistory, suppressWarnings, agentId), `Message`, `ContextEntry`, `BalanceRemaining`, `ShouldEngageResult`, `SortResult`.

Core class `Heylock` — Properties: `isInitialized`, `balanceRemaining`, `messageHistory`, `context`. Callbacks: `onInitialized`, `onMessageHistoryChange`, `onContextChange`.

| Option | Type | Default | Description |
|---|---|---|---|
| useStorage | boolean | auto (true in browser, false server) | Persist context to localStorage (browser only). |
| useMessageHistory | boolean | true | Maintain internal transcript. |
| suppressWarnings | boolean | false | Hide non-fatal console warnings (e.g., throttling, environment). |
| agentId | string | 'default' | Namespacing for multi-agent storage keys. |
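The options above can be assembled per environment. A minimal sketch, assuming the `buildAgentOptions` helper and the `agentId` values are illustrative (they are not part of the library):

```javascript
// Pick Heylock options based on where the code runs. The real agent
// construction is commented out because it needs a valid agent key.
function buildAgentOptions(isBrowser) {
  return {
    useStorage: isBrowser,        // localStorage persistence only in the browser
    useMessageHistory: true,      // keep the internal transcript
    suppressWarnings: false,      // surface non-fatal warnings during development
    agentId: isBrowser ? 'web' : 'server' // namespace storage keys per surface
  };
}

const options = buildAgentOptions(typeof window !== 'undefined');
// const agent = new Heylock(process.env.HEYLOCK_AGENT_KEY, options);
console.log(options); // under Node, useStorage is false
```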
Environment Behavior:
- In the browser with useStorage true, context persists under the key `heylock:<agentId>:context`.

Keep context focused. Why: Focused, consistent context reduces prompt size and improves response relevance without noise. `getContextString()` renders entries with relative times, e.g. "User did X now", "User did X 5 minutes ago".
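A sketch of how such relative-time labels can be derived; this is not the library's actual implementation, just the idea behind the examples above:

```javascript
// Turn an entry timestamp into a "now" / "N minutes ago" label.
function relativeLabel(timestamp, now = Date.now()) {
  const minutes = Math.floor((now - timestamp) / 60_000);
  return minutes < 1 ? 'now' : `${minutes} minutes ago`;
}

const now = Date.now();
console.log(`User did X ${relativeLabel(now, now)}`);              // "User did X now"
console.log(`User did X ${relativeLabel(now - 5 * 60_000, now)}`); // "User did X 5 minutes ago"
```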
Keep personal data out of context. Why: Reduces risk of data leaks, simplifies compliance (GDPR/CCPA), and limits blast radius if logs are compromised. Sanitize values before addContextEntry(). TypeScript example (sanitizing):
```typescript
function safeContextEvent(raw: { email?: string; action: string }) {
  const anon = raw.email ? raw.email.split('@')[0] : 'user';
  agent.addContextEntry(`${anon} ${raw.action}`); // no full email stored
}
```
Stream replies for chat UIs. Why: Early incremental feedback lowers perceived latency and increases user engagement. The chat log is updated live as chunks arrive (saveToMessageHistory true). Example:
```javascript
let buffer = '';
for await (const chunk of agent.messageStream(prompt)) {
  buffer += chunk;
  updateChatBubble(buffer); // your UI update function
}
```
Handle fallbacks explicitly. Why: Graceful degradation prevents hard failures and preserves UX while signaling issues for observability. Check the `fallback` / `warning` fields for non-fatal degradations, and consider gating suppressWarnings on NODE_ENV.

```javascript
const result = await agent.sort(products, 'Rank by likelihood of purchase');
if (result.fallback) {
  // Use original order, maybe schedule retry with different arguments.
  console.warn('Sort fallback, using original array');
}
```
Engage at high-intent moments. Why: Avoids spamming users and improves trust while still triggering help at high-intent moments. Call shouldEngage() on meaningful user intent (time-on-page > N seconds, scroll depth, idle detection), not on every minor event. It returns `{ shouldEngage, reasoning }`; log sampled reasoning for tuning.

```javascript
const { shouldEngage, reasoning } = await agent.shouldEngage();
if (shouldEngage) {
  openChat();
} else {
  debugSample(reasoning); // sample-log reasoning for tuning
}
```
Retry transient failures. Why: Mitigates transient network/service hiccups without amplifying load or causing duplicate side effects. Wrap calls such as message, rewrite, and sort with exponential backoff (e.g., 250ms * 2^n, max ~3 retries), and avoid re-calling shouldEngage() inside its cooldown window. Pseudo-code:

```javascript
async function withRetry(op, attempts = 3) {
  let delay = 250;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (e) {
      if (i === attempts - 1) throw e;
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2; // 250ms, 500ms, 1000ms, ...
    }
  }
}
```
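To see the pattern in action, here is the helper exercised against a simulated flaky call that fails twice and then succeeds (the helper is repeated so the snippet runs standalone; in a real app the flaky call would be agent.message, rewrite, or sort):

```javascript
async function withRetry(op, attempts = 3) {
  let delay = 250;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (e) {
      if (i === attempts - 1) throw e;
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
}

let calls = 0;
async function flakyMessage() {
  calls += 1;
  if (calls < 3) throw new Error('transient network error');
  return 'Hello from the agent'; // stands in for agent.message(...)
}

const reply = await withRetry(flakyMessage);
console.log(`${reply} (after ${calls} attempts)`); // succeeds on the 3rd attempt
```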
Prune stale context. Why: Prevents drift from outdated or duplicative events.

```javascript
function pruneContext() {
  const now = Date.now();
  // Keep only entries from the last 30 minutes.
  agent.context = agent.context.filter(entry => now - (entry.timestamp || now) < 30 * 60_000);
}
```
Keep the agent key server-side. Why: Protects the secret key and centralizes security, rate limiting, and auditing.
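A minimal sketch of that pattern using Node's built-in http module. The `/api/message` route and the in-memory agent stub are hypothetical; a real server would construct Heylock with the key from an environment variable so it never reaches the browser:

```javascript
import http from 'node:http';

// Stand-in for an initialized Heylock agent; a real server would create
// new Heylock(process.env.HEYLOCK_AGENT_KEY, { useStorage: false }) here.
const agent = { message: async (text) => `echo: ${text}` };

// Pure handler, kept separate from the HTTP wiring so it is easy to test.
async function handleMessage(jsonBody) {
  const { content } = JSON.parse(jsonBody);
  const reply = await agent.message(content); // secret key stays server-side
  return JSON.stringify({ reply });
}

// The browser POSTs here instead of talking to the agent service directly.
const server = http.createServer(async (req, res) => {
  if (req.method === 'POST' && req.url === '/api/message') {
    let body = '';
    for await (const chunk of req) body += chunk;
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(await handleMessage(body));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// server.listen(3000); // enable in a real deployment
```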
Rewrite safely. Why: Ensures consistent tone while preserving original data for audits and rollback.
```javascript
const polished = await agent.rewrite(draft, 'Professional, concise, keep emojis if present');
```
Give sort a clear objective. Why: Clear objectives yield more reliable AI ranking and simplify post-result validation.

```javascript
const ranking = await agent.sort(items, 'Sort by highest estimated CTR for mobile users');
```
- Persist context deliberately (browser only, with useStorage true). Why: Namespacing avoids key collisions across environments and eases user privacy controls. Use a distinct agentId per environment (e.g., myagent-dev, myagent-prod).
- Gate actions on balance: disable AI-driven UI when balanceRemaining <= 0. Why: Immediate feedback and constraint-based disabling reduce accidental duplicate actions.
- Offer non-AI fallbacks. Why: Users retain core functionality even when advanced AI features are unavailable.
- Keep context entries stable and machine-readable. Why: Stable machine-readable context improves model consistency across locales.
- Reset between users or demos with clearMessageHistory() and optionally clearContext(). Why: Resetting clears model bias from earlier context and supports multi-user demos/tests.
- Use separate agentIds per persona (e.g., support, recommender). Why: Isolation between personas prevents context bleed and clarifies analytics attribution.

Troubleshooting:

| Issue | Likely Cause | Action |
|---|---|---|
| isInitialized stays false | Invalid key / network error | Regenerate agent key; check console warnings. |
| Throttling warning on shouldEngage | Called again < 15s | Debounce / gate UI triggers. |
| Streaming stops early | Network interruption | Retry with exponential backoff; inspect partial assistant message. |
| fallback: true in sort/rewrite | Service fallback or rate limit | Present basic result; optionally retry later. |
| Context not persisting | useStorage false or server env | Enable useStorage in browser; confirm not running server-side. |
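The agentId namespacing tips above (per environment, per persona) can be sketched as a small options factory; the `personaOptions` helper and persona names are illustrative, not library API:

```javascript
// One options object per persona and environment, so stored context and
// transcripts never bleed between them (distinct heylock:<agentId>:* keys).
function personaOptions(persona, env = 'prod') {
  return {
    agentId: `${persona}-${env}`, // e.g. 'support-prod'
    useMessageHistory: true,
    useStorage: false // server-side default; set true in the browser
  };
}

// const supportAgent = new Heylock(process.env.HEYLOCK_AGENT_KEY, personaOptions('support'));
console.log(personaOptions('recommender', 'dev').agentId); // "recommender-dev"
```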
For questions or assistance, contact support@heylock.dev.
Disable useStorage if you don't want context written to the user's browser.

License: Apache 2.0. See LICENSE.