@markprompt/core
`@markprompt/core` is the core library for Markprompt, a conversational AI component for your website, trained on your data.
It contains core functionality for Markprompt and allows you to build abstractions on top of it.
```sh
npm install @markprompt/core
```
In browsers with esm.sh:
```html
<script type="module">
  import { submitPrompt } from 'https://esm.sh/@markprompt/core';
</script>
```
```js
import { submitPrompt } from '@markprompt/core';

// User input
const prompt = 'What is Markprompt?';

// Can be obtained in your project settings on markprompt.com
const projectKey = 'YOUR-PROJECT-KEY';

// Called when a new answer chunk is available
// Should be concatenated to previous chunks
function onAnswerChunk(chunk) {
  // Process an answer chunk
}

// Called when references are available
function onReferences(references) {
  // Process references
}

// Called when submitPrompt encounters an error
function onError(error) {
  // Handle errors
}

// Optional parameters, defaults displayed
const options = {
  model: 'gpt-3.5-turbo', // Supports all OpenAI models
  iDontKnowMessage: 'Sorry, I am not sure how to answer that.',
  apiUrl: 'https://api.markprompt.com/v1/completions', // Or your own completions API endpoint
};

await submitPrompt(prompt, projectKey, onAnswerChunk, onReferences, onError, options);
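```

Because the options accept an `AbortSignal` (see the `signal` parameter below), an in-flight request can be cancelled. A minimal sketch, reusing the values defined above; the 10-second timeout is an arbitrary illustration, not a library default:

```js
// Sketch: cancelling a streaming request with an AbortController.
// Assumes prompt, projectKey, and the callbacks defined above.
const controller = new AbortController();

// Abort the request if it has not completed within 10 seconds (illustrative).
const timeout = setTimeout(() => controller.abort(), 10_000);

try {
  await submitPrompt(prompt, projectKey, onAnswerChunk, onReferences, onError, {
    signal: controller.signal,
  });
} finally {
  clearTimeout(timeout);
}
```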
submitPrompt(prompt, projectKey, onAnswerChunk, onReferences, onError, options?)
Submit a prompt to the Markprompt Completions API.
- `prompt` (`string`): Prompt to submit to the model
- `projectKey` (`string`): Project key for the project
- `onAnswerChunk` (`function`): Answers come in via streaming. This function is called when a new chunk arrives
- `onReferences` (`function`): This function is called when receiving the list of references from which the response was created
- `onError` (`function`): Called when an error occurs
- `options` (`SubmitPromptOptions`): Optional parameters
  - `apiUrl` (`string`): URL at which to fetch completions
  - `iDontKnowMessage` (`string`): Message returned when the model does not have an answer
  - `model` (`OpenAIModelId`): The OpenAI model to use
  - `systemPrompt` (`string`): The prompt template
  - `temperature` (`number`): The model temperature
  - `topP` (`number`): The model top P
  - `frequencyPenalty` (`number`): The model frequency penalty
  - `presencePenalty` (`number`): The model presence penalty
  - `maxTokens` (`number`): The max number of tokens to include in the response
  - `sectionsMatchCount` (`number`): The number of sections to include in the prompt context
  - `sectionsMatchThreshold` (`number`): The similarity threshold between the input question and selected sections
  - `signal` (`AbortSignal`): AbortController signal

submitChat(messages, projectKey, onAnswerChunk, onReferences, onError, options?)
Submit a list of chat messages to the Markprompt Completions API.
- `messages` (`ChatMessage[]`): Chat messages to submit to the model
- `projectKey` (`string`): Project key for the project
- `onAnswerChunk` (`function`): Answers come in via streaming. This function is called when a new chunk arrives
- `onReferences` (`function`): This function is called when receiving the list of references from which the response was created
- `onError` (`function`): Called when an error occurs
- `options` (`SubmitChatOptions`): Optional parameters
  - `apiUrl` (`string`): URL at which to fetch completions
  - `iDontKnowMessage` (`string`): Message returned when the model does not have an answer
  - `model` (`OpenAIModelId`): The OpenAI model to use
  - `systemPrompt` (`string`): The prompt template
  - `temperature` (`number`): The model temperature
  - `topP` (`number`): The model top P
  - `frequencyPenalty` (`number`): The model frequency penalty
  - `presencePenalty` (`number`): The model presence penalty
  - `maxTokens` (`number`): The max number of tokens to include in the response
  - `sectionsMatchCount` (`number`): The number of sections to include in the prompt context
  - `sectionsMatchThreshold` (`number`): The similarity threshold between the input question and selected sections
  - `signal` (`AbortSignal`): AbortController signal

Returns: A promise that resolves when the response is fully handled.
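For illustration, a minimal sketch of a `submitChat` call. The `{ role, content }` message shape is an assumption based on the OpenAI-style chat format that `ChatMessage[]` suggests; check the package's exported types for the exact shape:

```js
import { submitChat } from '@markprompt/core';

// Assumed message shape ({ role, content }), following the OpenAI-style format.
const messages = [{ role: 'user', content: 'What is Markprompt?' }];

let answer = '';

await submitChat(
  messages,
  'YOUR-PROJECT-KEY',
  (chunk) => {
    // Chunks stream in; concatenate them to build the full answer.
    answer += chunk;
  },
  (references) => {
    // Process the references from which the answer was created.
  },
  (error) => {
    // Handle errors.
  },
  { model: 'gpt-3.5-turbo' },
);
```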
submitSearchQuery(query, projectKey, options?)
Submit a search query to the Markprompt Search API.
- `query` (`string`): Search query
- `projectKey` (`string`): Project key for the project
- `options` (`object`): Optional parameters
  - `apiUrl` (`string`): URL at which to fetch search results
  - `limit` (`number`): Maximum number of results to return
  - `signal` (`AbortSignal`): AbortController signal

Returns: A list of search results.
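A minimal sketch of a search call. The shape of the returned results is not documented above, so the logging is purely illustrative:

```js
import { submitSearchQuery } from '@markprompt/core';

// Sketch: fetch up to five search results for a query.
const results = await submitSearchQuery('react component', 'YOUR-PROJECT-KEY', {
  limit: 5,
});

// Inspect the result shape before relying on specific fields.
console.log(results);
```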
This library is created by the team behind Markprompt (@markprompt).