
@ahoo-wang/fetcher-eventstream
Server-Sent Events (SSE) support for Fetcher HTTP client with native LLM streaming API support. Enables real-time data streaming and token-by-token LLM response handling.
Power your real-time applications with Server-Sent Events support, specially designed for Large Language Model streaming APIs.
- Converts text/event-stream responses to async generators of ServerSentEvent objects
- Adds eventStream() and jsonEventStream() methods to responses with text/event-stream content type
- Ignores comment lines (starting with :) as per the SSE specification
# Using npm
npm install @ahoo-wang/fetcher-eventstream
# Using pnpm
pnpm add @ahoo-wang/fetcher-eventstream
# Using yarn
yarn add @ahoo-wang/fetcher-eventstream
To use the event stream functionality, you need to import the module for its side effects:
import '@ahoo-wang/fetcher-eventstream';
This import automatically extends the Response interface with methods for handling Server-Sent Events streams:
- eventStream() - Converts a Response with text/event-stream content type to a ServerSentEventStream
- jsonEventStream<DATA>() - Converts a Response with text/event-stream content type to a JsonServerSentEventStream<DATA>
- isEventStream getter - Checks if the Response has a text/event-stream content type
- requiredEventStream() - Gets a ServerSentEventStream, throwing an error if not available
- requiredJsonEventStream<DATA>() - Gets a JsonServerSentEventStream<DATA>, throwing an error if not available

This is a common pattern in JavaScript/TypeScript for extending existing types with additional functionality without modifying the original type definitions.
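For illustration, this kind of module augmentation is typically declared roughly as follows. This is a minimal sketch of the pattern only; the names mirror the list above, but the signatures and implementation are not the package's actual source:

import {
  toServerSentEventStream,
  type ServerSentEventStream,
} from '@ahoo-wang/fetcher-eventstream';

declare global {
  interface Response {
    // Sketch only: the real package declares these plus the JSON variants.
    requiredEventStream(): ServerSentEventStream;
  }
}

Response.prototype.requiredEventStream = function (): ServerSentEventStream {
  // Guard on the content type, then delegate to the stream converter.
  const contentType = this.headers.get('content-type') ?? '';
  if (!contentType.includes('text/event-stream')) {
    throw new Error('Response body is not a text/event-stream');
  }
  return toServerSentEventStream(this);
};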
The following example shows how to create an LLM client with event stream support, similar to the integration test in the Fetcher project. You can find the complete implementation in integration-test/src/eventstream/llmClient.ts.
This example demonstrates how to interact with popular LLM APIs like OpenAI's GPT models using Fetcher's streaming capabilities.
import {
  BaseURLCapable,
  ContentTypeValues,
  FetchExchange,
  NamedFetcher,
  REQUEST_BODY_INTERCEPTOR_ORDER,
  RequestInterceptor,
} from '@ahoo-wang/fetcher';
import {
  api,
  autoGeneratedError,
  body,
  post,
  ResultExtractors,
} from '@ahoo-wang/fetcher-decorator';
import '@ahoo-wang/fetcher-eventstream';
import { JsonServerSentEventStream } from '@ahoo-wang/fetcher-eventstream';
import { ChatRequest, ChatResponse } from './types';
export const llmFetcherName = 'llm';
export interface LlmOptions extends BaseURLCapable {
  apiKey: string;
  model?: string;
}
export class LlmRequestInterceptor implements RequestInterceptor {
  readonly name: string = 'LlmRequestInterceptor';
  readonly order: number = REQUEST_BODY_INTERCEPTOR_ORDER - 1;
  constructor(private llmOptions: LlmOptions) {}
  intercept(exchange: FetchExchange): void {
    const chatRequest = exchange.request.body as ChatRequest;
    if (!chatRequest.model) {
      chatRequest.model = this.llmOptions.model;
    }
  }
}
export function createLlmFetcher(options: LlmOptions): NamedFetcher {
  const llmFetcher = new NamedFetcher(llmFetcherName, {
    baseURL: options.baseURL,
    headers: {
      Authorization: `Bearer ${options.apiKey}`,
      'Content-Type': ContentTypeValues.APPLICATION_JSON,
    },
  });
  llmFetcher.interceptors.request.use(new LlmRequestInterceptor(options));
  return llmFetcher;
}
@api('/chat', {
  fetcher: llmFetcherName,
  resultExtractor: ResultExtractors.JsonEventStream,
})
export class LlmClient {
  @post('/completions')
  streamChat(
    @body() body: ChatRequest,
  ): Promise<JsonServerSentEventStream<ChatResponse>> {
    throw autoGeneratedError(body);
  }
  @post('/completions', { resultExtractor: ResultExtractors.Json })
  chat(@body() body: ChatRequest): Promise<ChatResponse> {
    throw autoGeneratedError(body);
  }
}
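The ChatRequest and ChatResponse types imported from ./types are not shown here. A minimal sketch might look like the following; the field names are assumptions modeled on the OpenAI chat completions wire format, not the project's actual definitions:

// Hypothetical shapes for ./types, modeled on the OpenAI chat completions API.
export interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

export interface ChatRequest {
  model?: string; // Filled in by LlmRequestInterceptor when omitted
  messages: ChatMessage[];
  stream?: boolean;
}

export interface ChatResponse {
  choices: {
    delta?: { content?: string }; // Present on streamed chunks
    message?: ChatMessage; // Present on non-streamed responses
  }[];
}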
Here's how to use the streamChat method to get real-time responses from an LLM API:
import { createLlmFetcher, LlmClient } from './llmClient';
// Initialize the LLM client with your API configuration
const llmFetcher = createLlmFetcher({
  baseURL: 'https://api.openai.com/v1', // Example for OpenAI
  apiKey: process.env.OPENAI_API_KEY || 'your-api-key',
  model: 'gpt-3.5-turbo', // Default model
});
// Create the client instance
const llmClient = new LlmClient();
// Example: Stream a chat completion response in real-time
async function streamChatExample() {
  try {
    // Stream the response token by token
    const stream = await llmClient.streamChat({
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Explain quantum computing in simple terms.' },
      ],
      model: 'gpt-3.5-turbo', // Override default model if needed
      stream: true, // Enable streaming
    });
    // Process the streamed response
    let fullResponse = '';
    for await (const event of stream) {
      // Each event contains a partial response
      if (event.data) {
        const chunk = event.data;
        const content = chunk.choices[0]?.delta?.content || '';
        fullResponse += content;
        console.log('New token:', content);
        // Update UI in real-time as tokens arrive
        updateUI(content);
      }
    }
    console.log('Full response:', fullResponse);
  } catch (error) {
    console.error('Error streaming chat:', error);
  }
}
// Helper function to simulate UI updates
function updateUI(content: string) {
  // In a real application, this would update your UI
  process.stdout.write(content);
}
import { toServerSentEventStream } from '@ahoo-wang/fetcher-eventstream';
// Convert a Response object manually
const response = await fetch('/events');
const eventStream = toServerSentEventStream(response);
// Read events from the stream
const reader = eventStream.getReader();
try {
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    console.log('Received event:', value);
  }
} finally {
  reader.releaseLock();
}
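Because a ServerSentEventStream is a standard ReadableStream, the same events can also be consumed with for await in runtimes whose ReadableStream implements async iteration (for example Node.js 18+), as the package's own examples do:

// Same conversion as above, consumed via async iteration instead of a reader.
const streamedResponse = await fetch('/events');
for await (const event of toServerSentEventStream(streamedResponse)) {
  console.log('Received event:', event);
}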
import { Fetcher } from '@ahoo-wang/fetcher';
import '@ahoo-wang/fetcher-eventstream';
const fetcher = new Fetcher({
  baseURL: 'https://api.example.com',
});
// For responses with text/event-stream content type,
// Response objects automatically expose eventStream() and jsonEventStream() methods
const response = await fetcher.get('/events');
for await (const event of response.requiredEventStream()) {
  console.log('Received event:', event);
}
// Using jsonEventStream for JSON data
const jsonResponse = await fetcher.get('/json-events');
for await (const event of jsonResponse.requiredJsonEventStream<MyDataType>()) {
  console.log('Received JSON event:', event.data);
}
import { Fetcher } from '@ahoo-wang/fetcher';
import {
  toJsonServerSentEventStream,
  type TerminateDetector,
} from '@ahoo-wang/fetcher-eventstream';
const fetcher = new Fetcher({
  baseURL: 'https://api.openai.com/v1',
});
// Define termination detector for OpenAI-style completion
const terminateOnDone: TerminateDetector = event => event.data === '[DONE]';
// Get raw event stream
const response = await fetcher.post('/chat/completions', {
  body: {
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Hello!' }],
    stream: true,
  },
});
// Convert to typed JSON stream with automatic termination
// (ChatCompletionChunk is your own type describing OpenAI stream chunks)
const jsonStream = toJsonServerSentEventStream<ChatCompletionChunk>(
  response.requiredEventStream(),
  terminateOnDone,
);
// Process streaming response with automatic termination
for await (const event of jsonStream) {
  const content = event.data.choices[0]?.delta?.content;
  if (content) {
    console.log('Token:', content);
    // Stream automatically terminates when '[DONE]' is received
  }
}
toJsonServerSentEventStream converts a ServerSentEventStream to a JsonServerSentEventStream<DATA> for handling Server-Sent Events with JSON data. It optionally supports stream termination detection for automatic stream closure.
function toJsonServerSentEventStream<DATA>(
  serverSentEventStream: ServerSentEventStream,
  terminateDetector?: TerminateDetector,
): JsonServerSentEventStream<DATA>;
Parameters:
- serverSentEventStream: The ServerSentEventStream to convert
- terminateDetector: Optional function to detect when the stream should be terminated. When provided, the stream automatically closes when the detector returns true for an event.

Returns:
- JsonServerSentEventStream<DATA>: A readable stream of JsonServerSentEvent<DATA> objects

// Basic usage without termination detection
const jsonStream = toJsonServerSentEventStream<MyData>(serverSentEventStream);
// With termination detection for OpenAI-style completion
const terminateOnDone: TerminateDetector = event => event.data === '[DONE]';
const terminatingStream = toJsonServerSentEventStream<MyData>(
  serverSentEventStream,
  terminateOnDone,
);
// Custom termination logic
const terminateOnError: TerminateDetector = event => {
  return event.event === 'error' || event.data.includes('ERROR');
};
const errorHandlingStream = toJsonServerSentEventStream<MyData>(
  serverSentEventStream,
  terminateOnError,
);
JsonServerSentEvent<DATA> is the interface defining the structure of a Server-Sent Event with JSON data.
interface JsonServerSentEvent<DATA> extends Omit<ServerSentEvent, 'data'> {
  data: DATA; // The event data parsed as JSON
}
JsonServerSentEventStream<DATA> is a type alias for a readable stream of JsonServerSentEvent<DATA> objects.
type JsonServerSentEventStream<DATA> = ReadableStream<
  JsonServerSentEvent<DATA>
>;
TerminateDetector is a function type for detecting when a Server-Sent Event stream should be terminated. It is commonly used with LLM APIs that send a special termination event to signal the end of a response stream.
type TerminateDetector = (event: ServerSentEvent) => boolean;
Parameters:
- event: The current ServerSentEvent being processed

Returns:
- boolean: true if the stream should be terminated, false otherwise

// OpenAI-style termination (common pattern)
const terminateOnDone: TerminateDetector = event => event.data === '[DONE]';
// Event-based termination
const terminateOnComplete: TerminateDetector = event => event.event === 'done';
// Custom termination with multiple conditions
const terminateOnFinish: TerminateDetector = event => {
  return (
    event.event === 'done' ||
    event.event === 'error' ||
    event.data === '[DONE]' ||
    event.data.includes('TERMINATE')
  );
};
// Usage with toJsonServerSentEventStream
const stream = toJsonServerSentEventStream<MyData>(
  serverSentEventStream,
  terminateOnDone,
);
A typical termination signal is a [DONE] data event, as sent by OpenAI, Claude, and other LLM APIs.
toServerSentEventStream converts a Response object with a text/event-stream body to a ServerSentEventStream.
function toServerSentEventStream(response: Response): ServerSentEventStream;
Parameters:
- response: An HTTP response with text/event-stream content type

Returns:
- ServerSentEventStream: A readable stream of ServerSentEvent objects

ServerSentEvent is the interface defining the structure of a Server-Sent Event.
interface ServerSentEvent {
  data: string; // The event data (required)
  event?: string; // The event type (optional, defaults to 'message')
  id?: string; // The event ID (optional)
  retry?: number; // The reconnection time in milliseconds (optional)
}
ServerSentEventStream is a type alias for a readable stream of ServerSentEvent objects.
type ServerSentEventStream = ReadableStream<ServerSentEvent>;
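For reference, here is how a raw SSE frame maps onto the ServerSentEvent fields (illustrative values):

// Raw frame on the wire:
//
//   id: 42
//   event: update
//   retry: 3000
//   data: {"status":"ok"}
//
// (a blank line terminates the event)
const parsed: ServerSentEvent = {
  id: '42',
  event: 'update',
  retry: 3000,
  data: '{"status":"ok"}', // still a string; jsonEventStream() parses it to JSON
};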
import { Fetcher } from '@ahoo-wang/fetcher';
import '@ahoo-wang/fetcher-eventstream';
const fetcher = new Fetcher({
  baseURL: 'https://api.example.com',
});
// Listen for real-time notifications
const response = await fetcher.get('/notifications');
for await (const event of response.requiredEventStream()) {
  switch (event.event) {
    case 'message':
      showNotification('Message', event.data);
      break;
    case 'alert':
      showAlert('Alert', event.data);
      break;
    case 'update':
      handleUpdate(JSON.parse(event.data));
      break;
    default:
      console.log('Unknown event:', event);
  }
}
import { Fetcher } from '@ahoo-wang/fetcher';
const fetcher = new Fetcher({
  baseURL: 'https://api.example.com',
});
// Track long-running task progress
const response = await fetcher.get('/tasks/123/progress');
for await (const event of response.requiredEventStream()) {
  if (event.event === 'progress') {
    const progress = JSON.parse(event.data);
    updateProgressBar(progress.percentage);
  } else if (event.event === 'complete') {
    showCompletionMessage(event.data);
    break;
  }
}
import { Fetcher } from '@ahoo-wang/fetcher';
const fetcher = new Fetcher({
  baseURL: 'https://chat-api.example.com',
});
// Real-time chat messages
const response = await fetcher.get('/rooms/123/messages');
for await (const event of response.requiredEventStream()) {
  if (event.event === 'message') {
    const message = JSON.parse(event.data);
    displayMessage(message);
  } else if (event.event === 'user-joined') {
    showUserJoined(event.data);
  } else if (event.event === 'user-left') {
    showUserLeft(event.data);
  }
}
# Run tests
pnpm test
# Run tests with coverage
pnpm test --coverage
The test suite covers the package's SSE parsing and streaming behavior.
This package fully implements the Server-Sent Events specification; for example, comment lines starting with : are ignored.
Contributions are welcome! Please see the contributing guide for more details.
This project is licensed under the Apache-2.0 License.
Part of the Fetcher ecosystem