# @markprompt/core

`@markprompt/core` is the core library for Markprompt, a conversational AI component for your website, trained on your data.

It contains core functionality for Markprompt and allows you to build abstractions on top of it.
```sh
npm install @markprompt/core
```
In browsers with esm.sh:

```html
<script type="module">
  import {
    submitChat,
    submitChatGenerator,
    submitSearchQuery,
    submitFeedback,
  } from 'https://esm.sh/@markprompt/core';
</script>
```
```js
import { submitChatGenerator } from '@markprompt/core';

// User input
const prompt = 'What is Markprompt?';

// Can be obtained in your project settings on markprompt.com
const projectKey = 'YOUR-PROJECT-KEY';

// Optional parameters, defaults displayed
const options = {
  model: 'gpt-3.5-turbo', // Supports all OpenAI models
  iDontKnowMessage: 'Sorry, I am not sure how to answer that.',
  apiUrl: 'https://api.markprompt.com/chat', // Or your own chat API endpoint
};

for await (const chunk of submitChatGenerator(
  [{ content: prompt, role: 'user' }],
  projectKey,
  options,
)) {
  console.log(chunk);
}
```
### `submitChat(messages: ChatMessage[], projectKey: string, onAnswerChunk, onReferences, onConversationId, onPromptId, onError, options?)`

Deprecated. Use `submitChatGenerator` instead.
Submit a prompt to the Markprompt Completions API.

- `messages` (`ChatMessage[]`): Chat messages to submit to the model
- `projectKey` (`string`): Project key for the project
- `onAnswerChunk` (`function(chunk: string)`): Answers come in via streaming. This function is called when a new chunk arrives. Chunks should be concatenated to previous chunks of the same answer response.
- `onReferences` (`function(references: FileSectionReference[])`): This function is called when receiving the list of references from which the response was created.
- `onConversationId` (`function(conversationId: string)`): This function is called with the conversation ID returned by the API. Used to keep track of conversations.
- `onPromptId` (`function(promptId: string)`): This function is called with the prompt ID returned by the API. Used to submit feedback.
- `onError` (`function`): Called when an error occurs
- `options` (`SubmitChatOptions`): Optional parameters

All options are optional.

- `apiUrl` (`string`): URL at which to fetch completions
- `conversationId` (`string`): Conversation ID
- `iDontKnowMessage` (`string`): Message returned when the model does not have an answer
- `model` (`OpenAIModelId`): The OpenAI model to use
- `systemPrompt` (`string`): The prompt template
- `temperature` (`number`): The model temperature
- `topP` (`number`): The model top P
- `frequencyPenalty` (`number`): The model frequency penalty
- `presencePenalty` (`number`): The model presence penalty
- `maxTokens` (`number`): The maximum number of tokens to include in the response
- `sectionsMatchCount` (`number`): The number of sections to include in the prompt context
- `sectionsMatchThreshold` (`number`): The similarity threshold between the input question and the selected sections
- `sectionsScope` (`number`): When a section is matched, extend the context to the parent section. For instance, if a section has level 3 and `sectionsScope` is set to 1, include the content of the entire parent section of level 1. If 0, this includes the entire file.
- `signal` (`AbortSignal`): AbortController signal
- `tools` (`OpenAI.ChatCompletionTool[]`): A list of tools the model may call
- `tool_choice` (`OpenAI.ChatCompletionToolChoiceOption`): Controls which (if any) function is called by the model

Returns a promise that resolves when the response is fully handled.
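For reference, a minimal sketch of calling the deprecated callback-based `submitChat`, based on the signature and parameters documented above. The project key, messages, option values, and callback bodies are placeholders, and the callbacks are assumed here to need no return values:

```js
import { submitChat } from '@markprompt/core';

// Illustrative values; replace with your own messages and project key.
const messages = [{ content: 'What is Markprompt?', role: 'user' }];
const projectKey = 'YOUR-PROJECT-KEY';

let answer = '';

await submitChat(
  messages,
  projectKey,
  (chunk) => {
    answer += chunk; // onAnswerChunk: concatenate streamed chunks of the same answer
  },
  (references) => console.log('References:', references), // onReferences
  (conversationId) => console.log('Conversation ID:', conversationId), // onConversationId
  (promptId) => console.log('Prompt ID:', promptId), // onPromptId
  (error) => console.error(error), // onError
  { iDontKnowMessage: 'Sorry, I am not sure how to answer that.' }, // options (all optional)
);

console.log(answer);
```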
### `submitSearchQuery(query, projectKey, options?)`

Submit a search query to the Markprompt Search API.

- `query` (`string`): Search query
- `projectKey` (`string`): Project key for the project
- `options` (`object`): Optional parameters
  - `apiUrl` (`string`): URL at which to fetch search results
  - `limit` (`number`): Maximum number of results to return
  - `signal` (`AbortSignal`): AbortController signal

Returns a list of search results.
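A usage sketch based on the parameters above; the query, project key, and `limit` value are placeholders:

```js
import { submitSearchQuery } from '@markprompt/core';

// Illustrative query and project key.
const results = await submitSearchQuery('What is Markprompt?', 'YOUR-PROJECT-KEY', {
  limit: 5, // maximum number of results to return
});

console.log(results);
```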
### `submitFeedback(feedback, projectKey, options?)`

Submit feedback to the Markprompt Feedback API about a specific prompt.

- `feedback` (`object`): Feedback to submit
- `feedback.feedback` (`object`): Feedback data
- `feedback.feedback.vote` (`"1" | "-1"`): Vote
- `feedback.promptId` (`string`): Prompt ID
- `projectKey` (`string`): Project key for the project
- `options` (`object`): Optional parameters
- `options.apiUrl` (`string`): URL at which to post feedback
- `options.onFeedbackSubmitted` (`function`): Callback function called when feedback is submitted
- `options.signal` (`AbortSignal`): AbortController signal

Returns a promise that resolves when the feedback is submitted; the promise carries no value.
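A usage sketch based on the shapes above; the prompt ID and project key are placeholders (the prompt ID would typically be the one received via `onPromptId` or the chat response):

```js
import { submitFeedback } from '@markprompt/core';

// Illustrative prompt ID and project key.
await submitFeedback(
  {
    feedback: { vote: '1' }, // "1" = positive vote, "-1" = negative vote
    promptId: 'PROMPT-ID',
  },
  'YOUR-PROJECT-KEY',
  {
    onFeedbackSubmitted: () => console.log('Feedback submitted'),
  },
);
```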
This library is created by the team behind Markprompt (@markprompt).