@markprompt/core: npm package version comparison

Comparing version 0.4.6 to 0.5.0

dist/prompt.d.ts (54 lines changed)

dist/index.d.ts

@@ -1,52 +1,4 @@

import type { OpenAIModelId } from './types.js';
export type { OpenAIModelId };
export type Options = {
/** URL at which to fetch completions */
completionsUrl?: string;
/** Message returned when the model does not have an answer */
iDontKnowMessage?: string;
/** The OpenAI model to use */
model?: OpenAIModelId;
/** The prompt template */
promptTemplate?: string;
/** The model temperature */
temperature?: number;
/** The model top P */
topP?: number;
/** The model frequency penalty */
frequencyPenalty?: number;
/** The model presence penalty */
presencePenalty?: number;
/** The max number of tokens to include in the response */
maxTokens?: number;
/** The number of sections to include in the prompt context */
sectionsMatchCount?: number;
/** The similarity threshold between the input question and selected sections */
sectionsMatchThreshold?: number;
/** AbortController signal */
signal?: AbortSignal;
};
export declare const MARKPROMPT_COMPLETIONS_URL = "https://api.markprompt.com/v1/completions";
export declare const STREAM_SEPARATOR = "___START_RESPONSE_STREAM___";
export declare const DEFAULT_MODEL: OpenAIModelId;
export declare const DEFAULT_I_DONT_KNOW_MESSAGE = "Sorry, I am not sure how to answer that.";
export declare const DEFAULT_REFERENCES_HEADING = "Answer generated from the following pages:";
export declare const DEFAULT_LOADING_HEADING = "Fetching relevant pages...";
export declare const DEFAULT_PROMPT_TEMPLATE = "You are a very enthusiastic company representative who loves to help people! Given the following sections from the documentation (preceded by a section id), answer the question using only that information, outputted in Markdown format. If you are unsure and the answer is not explicitly written in the documentation, say \"{{I_DONT_KNOW}}\".\n\nContext sections:\n---\n{{CONTEXT}}\n\nQuestion: \"{{PROMPT}}\"\n\nAnswer (including related code snippets if available):";
export declare const DEFAULT_TEMPERATURE = 0.1;
export declare const DEFAULT_TOP_P = 1;
export declare const DEFAULT_FREQUENCY_PENALTY = 0;
export declare const DEFAULT_PRESENCE_PENALTY = 0;
export declare const DEFAULT_MAX_TOKENS = 500;
export declare const DEFAULT_SECTIONS_MATCH_COUNT = 10;
export declare const DEFAULT_SECTIONS_MATCH_THRESHOLD = 0.5;
/**
* @param {string} prompt - Prompt to submit to the model
* @param {string} projectKey - The key of your project
* @param {(answerChunk: string) => void} onAnswerChunk - Answers come in via streaming. This function is called when a new chunk arrives
* @param {(references: string[]) => void} onReferences - This function is called when a chunk includes references.
* @param {(error: Error) => void} onError - called when an error occurs
* @param {Options} [options] - Optional options object
*/
export declare function submitPrompt(prompt: string, projectKey: string, onAnswerChunk: (answerChunk: string) => boolean | undefined | void, onReferences: (references: string[]) => void, onError: (error: Error) => void, options?: Options): Promise<void>;
export { submitPrompt, type SubmitPromptOptions, MARKPROMPT_COMPLETIONS_URL, STREAM_SEPARATOR, DEFAULT_MODEL, DEFAULT_I_DONT_KNOW_MESSAGE, DEFAULT_REFERENCES_HEADING, DEFAULT_LOADING_HEADING, DEFAULT_PROMPT_TEMPLATE, DEFAULT_TEMPERATURE, DEFAULT_TOP_P, DEFAULT_FREQUENCY_PENALTY, DEFAULT_PRESENCE_PENALTY, DEFAULT_MAX_TOKENS, DEFAULT_SECTIONS_MATCH_COUNT, DEFAULT_SECTIONS_MATCH_THRESHOLD, } from './prompt.js';
export { submitSearchQuery, type SubmitSearchQueryOptions } from './search.js';
export { type OpenAIModelId, type SearchResult, type SearchResultSection, type SearchResultsResponse, } from './types.js';
//# sourceMappingURL=index.d.ts.map
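Taken together with the `Options` type above, the declared `submitPrompt` signature is enough to sketch a call. A minimal, hypothetical example; the project key, prompt, and handlers are placeholders, not taken from the package docs:

```ts
import { submitPrompt } from '@markprompt/core';

// Placeholder values for illustration only.
const projectKey = 'YOUR_PROJECT_KEY';
const controller = new AbortController();

let answer = '';
await submitPrompt(
  'How do I get started?',                  // prompt
  projectKey,
  (chunk) => { answer += chunk; },          // streamed answer text
  (references) => console.log('References:', references),
  (error) => console.error(error),
  { iDontKnowMessage: 'Not sure!', signal: controller.signal },
);
console.log(answer);
```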

dist/index.js

@@ -1,105 +1,4 @@

export const MARKPROMPT_COMPLETIONS_URL = 'https://api.markprompt.com/v1/completions';
export const STREAM_SEPARATOR = '___START_RESPONSE_STREAM___';
export const DEFAULT_MODEL = 'gpt-3.5-turbo';
export const DEFAULT_I_DONT_KNOW_MESSAGE = 'Sorry, I am not sure how to answer that.';
export const DEFAULT_REFERENCES_HEADING = 'Answer generated from the following pages:';
export const DEFAULT_LOADING_HEADING = 'Fetching relevant pages...';
export const DEFAULT_PROMPT_TEMPLATE = `You are a very enthusiastic company representative who loves to help people! Given the following sections from the documentation (preceded by a section id), answer the question using only that information, outputted in Markdown format. If you are unsure and the answer is not explicitly written in the documentation, say "{{I_DONT_KNOW}}".

Context sections:
---
{{CONTEXT}}

Question: "{{PROMPT}}"

Answer (including related code snippets if available):`;
export const DEFAULT_TEMPERATURE = 0.1;
export const DEFAULT_TOP_P = 1;
export const DEFAULT_FREQUENCY_PENALTY = 0;
export const DEFAULT_PRESENCE_PENALTY = 0;
export const DEFAULT_MAX_TOKENS = 500;
export const DEFAULT_SECTIONS_MATCH_COUNT = 10;
export const DEFAULT_SECTIONS_MATCH_THRESHOLD = 0.5;
/**
* @param {string} prompt - Prompt to submit to the model
* @param {string} projectKey - The key of your project
* @param {(answerChunk: string) => void} onAnswerChunk - Answers come in via streaming. This function is called when a new chunk arrives
* @param {(references: string[]) => void} onReferences - This function is called when a chunk includes references.
* @param {(error: Error) => void} onError - called when an error occurs
* @param {Options} [options] - Optional options object
*/
export async function submitPrompt(prompt, projectKey, onAnswerChunk, onReferences, onError, options = {}) {
if (!projectKey) {
throw new Error('A projectKey is required.');
}
if (!prompt)
return;
const iDontKnowMessage = options.iDontKnowMessage ?? DEFAULT_I_DONT_KNOW_MESSAGE;
try {
const res = await fetch(options.completionsUrl ?? MARKPROMPT_COMPLETIONS_URL, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
prompt: prompt,
projectKey: projectKey,
iDontKnowMessage,
model: options?.model ?? DEFAULT_MODEL,
promptTemplate: options.promptTemplate,
temperature: options.temperature,
topP: options.topP,
frequencyPenalty: options.frequencyPenalty,
presencePenalty: options.presencePenalty,
maxTokens: options.maxTokens,
sectionsMatchCount: options.sectionsMatchCount,
sectionsMatchThreshold: options.sectionsMatchThreshold,
}),
signal: options.signal,
});
if (!res.ok || !res.body) {
const text = await res.text();
onAnswerChunk(iDontKnowMessage);
onError(new Error(text));
return;
}
const reader = res.body.getReader();
const decoder = new TextDecoder();
let done = false;
let startText = '';
let didHandleHeader = false;
while (!done) {
const { value, done: doneReading } = await reader.read();
done = doneReading;
const chunkValue = decoder.decode(value);
if (!didHandleHeader) {
startText = startText + chunkValue;
if (startText.includes(STREAM_SEPARATOR)) {
const parts = startText.split(STREAM_SEPARATOR);
try {
onReferences(JSON.parse(parts[0]));
}
catch {
// do nothing
}
if (parts[1]) {
onAnswerChunk(parts[1]);
}
didHandleHeader = true;
}
}
else if (chunkValue) {
const shouldContinue = onAnswerChunk(chunkValue);
if (!shouldContinue) {
// If callback returns false, it means it wishes
// to interrupt the streaming.
done = true;
}
}
}
}
catch (error) {
onError(error instanceof Error ? error : new Error(`${error}`));
}
}
export { submitPrompt, MARKPROMPT_COMPLETIONS_URL, STREAM_SEPARATOR, DEFAULT_MODEL, DEFAULT_I_DONT_KNOW_MESSAGE, DEFAULT_REFERENCES_HEADING, DEFAULT_LOADING_HEADING, DEFAULT_PROMPT_TEMPLATE, DEFAULT_TEMPERATURE, DEFAULT_TOP_P, DEFAULT_FREQUENCY_PENALTY, DEFAULT_PRESENCE_PENALTY, DEFAULT_MAX_TOKENS, DEFAULT_SECTIONS_MATCH_COUNT, DEFAULT_SECTIONS_MATCH_THRESHOLD, } from './prompt.js';
export { submitSearchQuery } from './search.js';
export {} from './types.js';
//# sourceMappingURL=index.js.map
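The header-handling loop above implies a wire format: the completions endpoint streams a JSON array of references first, then the `STREAM_SEPARATOR` token, then the answer text. A sketch of a payload that parser would accept; the references and answer are invented for illustration:

```ts
import { STREAM_SEPARATOR } from '@markprompt/core';

// Invented example data.
const references = ['getting-started', 'configuration'];
const answer = 'To get started, install the package.';

// The layout the client-side loop splits on:
const payload = JSON.stringify(references) + STREAM_SEPARATOR + answer;
// => '["getting-started","configuration"]___START_RESPONSE_STREAM___To get started, install the package.'
```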
import { rest } from 'msw';
import { setupServer } from 'msw/node';
import { afterAll, afterEach, beforeAll, describe, expect, test, vi, } from 'vitest';
import { MARKPROMPT_COMPLETIONS_URL, STREAM_SEPARATOR, submitPrompt, } from './index.js';
import { submitPrompt } from './index.js';
import { MARKPROMPT_COMPLETIONS_URL, STREAM_SEPARATOR } from './prompt.js';
const encoder = new TextEncoder();

@@ -6,0 +7,0 @@ let response = [];
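The test imports above (msw's `rest` and `setupServer`, plus a `TextEncoder`) suggest how the streaming endpoint is mocked. A minimal sketch under that assumption; the handler and response text are illustrative, not the package's actual test code:

```ts
import { rest } from 'msw';
import { setupServer } from 'msw/node';
import { MARKPROMPT_COMPLETIONS_URL, STREAM_SEPARATOR } from './prompt.js';

// Illustrative mock: return the references header, the separator,
// then the answer body in a single response.
const server = setupServer(
  rest.post(MARKPROMPT_COMPLETIONS_URL, (_req, res, ctx) =>
    res(
      ctx.status(200),
      ctx.body(JSON.stringify(['ref-1']) + STREAM_SEPARATOR + 'Hello.'),
    ),
  ),
);

server.listen(); // typically called in beforeAll
```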

dist/types.d.ts

@@ -5,3 +5,31 @@ type OpenAIChatCompletionsModelId = 'gpt-4' | 'gpt-4-0314' | 'gpt-4-32k' | 'gpt-4-32k-0314' | 'gpt-3.5-turbo' | 'gpt-3.5-turbo-0301';

export type RequiredKeys<T, K extends keyof T> = Required<Pick<T, K>> & Omit<T, K>;
export type SearchResultSection = {
content: string;
meta?: {
leadHeading?: {
depth: number;
value: string;
};
};
score: number;
};
export type SearchResult = {
path: string;
meta: {
title: string;
};
source: {
type: 'github' | 'motif' | 'website' | 'file-upload' | 'api-upload';
data: {
url?: string;
domain?: string;
};
};
sections: SearchResultSection[];
};
export type SearchResultsResponse = {
project_id: string;
data: SearchResult[];
};
export {};
//# sourceMappingURL=types.d.ts.map
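To make the new search result types concrete, here is a value matching the `SearchResult` shape declared above; every field value is invented for illustration:

```ts
import type { SearchResult } from '@markprompt/core';

const example: SearchResult = {
  path: '/docs/getting-started',
  meta: { title: 'Getting started' },
  source: {
    type: 'github',
    data: { url: 'https://github.com/example/docs' },
  },
  sections: [
    {
      content: 'Install the package with npm.',
      meta: { leadHeading: { depth: 2, value: 'Installation' } },
      score: 0.87,
    },
  ],
};
```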
package.json

{
"name": "@markprompt/core",
"version": "0.4.6",
"version": "0.5.0",
"repository": {

@@ -5,0 +5,0 @@ "type": "git",

README.md

@@ -7,2 +7,28 @@ # `@markprompt/core`

<br />
<p align="center">
<a aria-label="NPM version" href="https://www.npmjs.com/package/@markprompt/core">
<img alt="" src="https://badgen.net/npm/v/@markprompt/core">
</a>
<a aria-label="License" href="https://github.com/motifland/markprompt-js/blob/main/packages/core/LICENSE">
<img alt="" src="https://badgen.net/npm/license/@markprompt/core">
</a>
<a aria-label="Coverage" href="https://app.codecov.io/gh/motifland/markprompt-js/tree/main/packages%2Fcore">
<img alt="" src="https://codecov.io/gh/motifland/markprompt-js/branch/main/graph/badge.svg" />
</a>
</p>
## Table of Contents
- [Installation](#installation)
- [Usage](#usage)
- [API](#api)
- [`submitPrompt(prompt, projectKey, onAnswerChunk, onReferences, onError, options?)`](#submitpromptprompt-projectkey-onanswerchunk-onreferences-onerror-options)
- [Arguments](#arguments)
- [Options](#options)
- [Returns](#returns)
- [Community](#community)
- [Authors](#authors)
- [License](#license)
## Installation

@@ -22,3 +48,3 @@

# Usage
## Usage

@@ -58,1 +84,50 @@ ```js

```
## API
### `submitPrompt(prompt, projectKey, onAnswerChunk, onReferences, onError, options?)`
Submit a prompt to the Markprompt API.
#### Arguments
- `prompt` (`string`): Prompt to submit to the model
- `projectKey` (`string`): The key of your project
- `onAnswerChunk` (`function`): Answers come in via streaming. This function is called when a new chunk arrives
- `onReferences` (`function`): This function is called when a chunk includes references.
- `onError` (`function`): Called when an error occurs
- [`options`](#options) (`object`): Optional options object
#### Options
- `completionsUrl` (`string`): URL at which to fetch completions
- `iDontKnowMessage` (`string`): Message returned when the model does not have an answer
- `model` (`OpenAIModelId`): The OpenAI model to use
- `promptTemplate` (`string`): The prompt template
- `temperature` (`number`): The model temperature
- `topP` (`number`): The model top P
- `frequencyPenalty` (`number`): The model frequency penalty
- `presencePenalty` (`number`): The model presence penalty
- `maxTokens` (`number`): The max number of tokens to include in the response
- `sectionsMatchCount` (`number`): The number of sections to include in the prompt context
- `sectionsMatchThreshold` (`number`): The similarity threshold between the input question and selected sections
- `signal` (`AbortSignal`): AbortController signal
#### Returns
A promise that resolves when the response is fully handled.
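One detail the implementation supports beyond the list above: `onAnswerChunk` may return `false` to interrupt the stream early. A hypothetical sketch, with placeholder prompt and project key:

```ts
import { submitPrompt } from '@markprompt/core';

let received = '';
await submitPrompt(
  'What is Markprompt?',
  'YOUR_PROJECT_KEY',
  (chunk) => {
    received += chunk;
    // Returning false stops reading the stream once enough text has arrived.
    return received.length < 2000;
  },
  (references) => console.log(references),
  (error) => console.error(error),
);
```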
## Community
- [Twitter @markprompt](https://twitter.com/markprompt)
- [Twitter @motifland](https://twitter.com/motifland)
- [Discord](https://discord.gg/MBMh4apz6X)
## Authors
This library is created by the team behind [Motif](https://motif.land)
([@motifland](https://twitter.com/motifland)).
## License
[MIT](./LICENSE) © [Motif](https://motif.land)

