Spectre is a Ruby gem that makes it easy to AI-enable your Ruby on Rails application. Currently, Spectre focuses on helping developers create embeddings, perform vector-based searches, create chat completions, and manage dynamic prompts, making it ideal for applications that feature RAG (Retrieval-Augmented Generation), chatbots, and dynamic prompts.
| Feature | Compatibility |
|---|---|
| Foundation Models (LLM) | OpenAI |
| Embeddings | OpenAI |
| Vector Searching | MongoDB Atlas |
| Prompt Templates | OpenAI |
💡 Note: We will first prioritize adding support for additional foundation models (Claude, Cohere, LLaMA, etc.), then look to add support for more vector databases (Pgvector, Pinecone, etc.). If you're looking for something a bit more extensible, we highly recommend checking out langchainrb.
Add this line to your application's Gemfile:

```ruby
gem 'spectre_ai'
```

And then execute:

```shell
bundle install
```

Or install it yourself as:

```shell
gem install spectre_ai
```
First, you'll need to generate the initializer to configure your OpenAI API key. Run the following command to create the initializer:

```shell
rails generate spectre:install
```

This will create a file at `config/initializers/spectre.rb`, where you can set your OpenAI API key:

```ruby
Spectre.setup do |config|
  config.api_key = 'your_openai_api_key'
  config.llm_provider = :openai
end
```
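In a real application you will likely want to avoid hard-coding the key. A common pattern (illustrative, not required by Spectre) is to read it from the environment:

```ruby
# config/initializers/spectre.rb
# Illustrative variant: read the key from the environment instead of
# hard-coding it. The ENV variable name is an assumption, not a Spectre requirement.
Spectre.setup do |config|
  config.api_key      = ENV.fetch('OPENAI_API_KEY')
  config.llm_provider = :openai
end
```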
To use Spectre for generating embeddings in your Rails model, include the `Spectre` module, declare the model as embeddable, and specify which fields to embed. Here is an example of how to set this up in a model:

```ruby
class Model
  include Mongoid::Document
  include Spectre

  spectre :embeddable
  embeddable_field :message, :response, :category
end
```
Note: Currently, the `Searchable` module is designed to work exclusively with Mongoid models. If you attempt to include it in a non-Mongoid model, an error will be raised. This ensures that vector-based searches, which rely on MongoDB-specific features, are only used in appropriate contexts.
To enable vector-based search in your Rails model, declare the model as searchable, then use the following methods to configure the search path, index, and result fields. Here is an example of how to set this up in a model:

```ruby
class Model
  include Mongoid::Document
  include Spectre

  spectre :searchable

  configure_spectre_search_path 'embedding'
  configure_spectre_search_index 'vector_index'
  configure_spectre_result_fields({ "message" => 1, "response" => 1 })
end
```
Create Embedding for a Single Record

To create an embedding for a single record, you can call the `embed!` method on the instance record:

```ruby
record = Model.find(some_id)
record.embed!
```

This will create the embedding and store it in the specified embedding field, along with the timestamp in the `embedded_at` field.
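Conceptually, an `embed!`-style call joins the embeddable fields into one string, embeds it, and stores the vector plus a timestamp. A gem-independent sketch (the `EmbeddableRecord` struct and the stub embedder are illustrative, not Spectre's actual implementation):

```ruby
# Gem-independent sketch of what an embed!-style method does: join the
# embeddable fields into one string, embed it, and store the vector plus
# a timestamp. EmbeddableRecord and the stub embedder are illustrative,
# not Spectre's actual implementation.
EmbeddableRecord = Struct.new(:message, :response, :category, :embedding, :embedded_at) do
  def embed!(embedder)
    text = [message, response, category].compact.join("\n")
    self.embedding   = embedder.call(text)
    self.embedded_at = Time.now.utc
    embedding
  end
end

# Stand-in for the real embeddings API call
fake_embedder = ->(text) { [text.length.to_f] }

record = EmbeddableRecord.new('Hello', 'Hi there', 'greeting')
record.embed!(fake_embedder)
record.embedded_at.nil?  # => false
```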
Create Embeddings for Multiple Records

To create embeddings for multiple records at once, use the `embed_all!` method:

```ruby
Model.embed_all!(
  scope: -> { where(:response.exists => true, :response.ne => nil) },
  validation: ->(record) { !record.response.blank? }
)
```

This method will create embeddings for all records that match the given scope and validation criteria. The method will also print the number of successful and failed embeddings to the console.
Directly Create Embeddings Using `Spectre.provider_module::Embeddings.create`

If you need to create an embedding directly, without using the model integration, you can use the `Spectre.provider_module::Embeddings.create` method. This can be useful if you want to create embeddings for custom text outside of your models. For example, with OpenAI:

```ruby
Spectre.provider_module::Embeddings.create("Your text here")
```

This method sends the text to OpenAI's API and returns the embedding vector. You can optionally specify a different model by passing it as an argument:

```ruby
Spectre.provider_module::Embeddings.create("Your text here", model: "text-embedding-ada-002")
```
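For reference, this maps onto OpenAI's public `POST /v1/embeddings` endpoint, whose JSON body takes a `model` and an `input`. A minimal sketch of that request body (the helper itself is illustrative and not part of Spectre):

```ruby
require 'json'

# Sketch of the JSON body an embeddings call sends to OpenAI's
# POST /v1/embeddings endpoint (public API shape; the helper itself is
# illustrative and not part of Spectre).
def embeddings_request_body(input, model: 'text-embedding-ada-002')
  JSON.generate({ model: model, input: input })
end

embeddings_request_body('Your text here')
# => {"model":"text-embedding-ada-002","input":"Your text here"}
```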
Once your model is configured as searchable, you can perform vector-based searches on the stored embeddings:

```ruby
Model.vector_search(
  'Your search query',
  custom_result_fields: { "response" => 1 },
  additional_scopes: [{ "$match" => { "category" => "science" } }]
)
```

This method will:

1. Embed the search query: uses the configured LLM provider to embed the search query. Note: if your text is already embedded, you can pass the embedding (as an array) instead, and it will perform just the search.
2. Perform the vector-based search: searches the embeddings stored in the specified `search_path`.
3. Return matching records: provides the matching records with the specified `result_fields` and their `vectorSearchScore`.
Keyword arguments: as shown above, `custom_result_fields` overrides the configured result fields for a single query, and `additional_scopes` appends extra MongoDB aggregation stages to the search.
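On MongoDB Atlas, a query like this ultimately runs through a `$vectorSearch` aggregation stage. A rough sketch of what such a stage looks like (the field values and the 10x `numCandidates` heuristic are illustrative; this is not Spectre's exact internal pipeline):

```ruby
# Illustrative sketch (not Spectre's exact internals) of the kind of
# $vectorSearch aggregation stage a vector_search call compiles to on
# MongoDB Atlas. The 10x numCandidates heuristic is an assumption.
def vector_search_stage(query_vector, path: 'embedding', index: 'vector_index', limit: 10)
  {
    '$vectorSearch' => {
      'index'         => index,
      'path'          => path,
      'queryVector'   => query_vector,
      'limit'         => limit,
      'numCandidates' => limit * 10
    }
  }
end

vector_search_stage([0.1, 0.2, 0.3])['$vectorSearch']['index']
# => "vector_index"
```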
Spectre provides an interface to create chat completions using your configured LLM provider, allowing you to create dynamic responses, messages, or other forms of text.
Basic Completion Example
To create a simple chat completion, use the `Spectre.provider_module::Completions.create` method. You can provide a user prompt and an optional system prompt to guide the response:

```ruby
messages = [
  { role: 'system', content: "You are a funny assistant." },
  { role: 'user', content: "Tell me a joke." }
]

Spectre.provider_module::Completions.create(
  messages: messages
)
```
This sends the request to the LLM provider’s API and returns the chat completion.
Customizing the Completion
You can customize the behavior by specifying additional parameters such as the model, maximum number of tokens, and any tools needed for function calls:
```ruby
messages = [
  { role: 'system', content: "You are a funny assistant." },
  { role: 'user', content: "Tell me a joke." },
  { role: 'assistant', content: "Sure, here's a joke!" }
]

Spectre.provider_module::Completions.create(
  messages: messages,
  model: "gpt-4",
  max_tokens: 50
)
```
Using a JSON Schema for Structured Output
For cases where you need structured output (e.g., for returning specific fields or formatted responses), you can pass a `json_schema` parameter. The schema ensures that the completion conforms to a predefined structure:

```ruby
json_schema = {
  name: "completion_response",
  schema: {
    type: "object",
    properties: {
      response: { type: "string" },
      final_answer: { type: "string" }
    },
    required: ["response", "final_answer"],
    additionalProperties: false
  }
}

messages = [
  { role: 'system', content: "You are a knowledgeable assistant." },
  { role: 'user', content: "What is the capital of France?" }
]

Spectre.provider_module::Completions.create(
  messages: messages,
  json_schema: json_schema
)
```
This structured format guarantees that the response adheres to the schema you’ve provided, ensuring more predictable and controlled results.
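To see what the schema buys you, here is a small, gem-independent sketch that checks a parsed response against the schema's `required` and `additionalProperties` constraints (the `conforms?` helper is hypothetical and not part of Spectre):

```ruby
require 'json'

# Hypothetical helper (not part of Spectre): check a parsed completion
# against the schema's required keys and additionalProperties flag.
def conforms?(schema, payload)
  required = schema.dig(:schema, :required) || []
  props    = schema.dig(:schema, :properties) || {}
  strict   = schema.dig(:schema, :additionalProperties) == false
  return false unless required.all? { |k| payload.key?(k.to_s) }
  return false if strict && payload.keys.any? { |k| !props.key?(k.to_s.to_sym) }
  true
end

schema = {
  name: "completion_response",
  schema: {
    type: "object",
    properties: { response: { type: "string" }, final_answer: { type: "string" } },
    required: ["response", "final_answer"],
    additionalProperties: false
  }
}

payload = JSON.parse('{"response":"Paris is the capital.","final_answer":"Paris"}')
conforms?(schema, payload)  # => true
```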
Using Tools for Function Calling
You can incorporate tools (function calls) in your completion to handle more complex interactions such as fetching external information via API or performing calculations. Define tools using the function call format and include them in the request:
```ruby
tools = [
  {
    type: "function",
    function: {
      name: "get_delivery_date",
      description: "Get the delivery date for a customer's order.",
      parameters: {
        type: "object",
        properties: {
          order_id: { type: "string", description: "The customer's order ID." }
        },
        required: ["order_id"],
        additionalProperties: false
      }
    }
  }
]

messages = [
  { role: 'system', content: "You are a helpful customer support assistant." },
  { role: 'user', content: "Can you tell me the delivery date for my order?" }
]

Spectre.provider_module::Completions.create(
  messages: messages,
  tools: tools
)
```
This setup allows the model to call specific tools (or functions) based on the user's input. The model can then generate a tool call to get necessary information and integrate it into the conversation.
Handling Responses from Completions with Tools
When tools (function calls) are included in a completion request, the response might include `tool_calls` with relevant details for executing the function.
Here’s an example of how the response might look when a tool call is made:
```ruby
response = Spectre.provider_module::Completions.create(
  messages: messages,
  tools: tools
)

# Sample response structure when a tool call is triggered:
# {
#   :tool_calls => [{
#     "id" => "call_gqvSz1JTDfUyky7ghqY1wMoy",
#     "type" => "function",
#     "function" => {
#       "name" => "get_lead_count",
#       "arguments" => "{\"account_id\":\"acc_12312\"}"
#     }
#   }],
#   :content => nil
# }

if response[:tool_calls]
  tool_call = response[:tool_calls].first

  # You can now perform the function using the provided data,
  # e.g. get the lead count by account_id
  account_id = JSON.parse(tool_call['function']['arguments'])['account_id']
  lead_count = get_lead_count(account_id) # Assuming you have a method for this

  # Respond back with the function result
  completion_response = Spectre.provider_module::Completions.create(
    messages: [
      { role: 'assistant', content: "There are #{lead_count} leads for account #{account_id}." }
    ]
  )
else
  puts "Model response: #{response[:content]}"
end
```
Spectre provides a system for creating dynamic prompts based on templates. You can define reusable prompt templates and render them with different parameters in your Rails app (think Ruby on Rails view partials).
Example Directory Structure for Prompts
Create a folder structure in your app to hold the prompt templates:
```
app/spectre/prompts/
└── rag/
    ├── system.yml.erb
    └── user.yml.erb
```

Each `.yml.erb` file can contain dynamic content and be customized with embedded Ruby (ERB).
Example Prompt Templates
`system.yml.erb`:

```yaml
system: |
  You are a helpful assistant designed to provide answers based on specific documents and context provided to you.
  Follow these guidelines:
  1. Only provide answers based on the context provided.
  2. Be polite and concise.
```

`user.yml.erb`:

```yaml
user: |
  User's query: <%= @query %>
  Context: <%= @objects.join(", ") %>
```
Rendering Prompts
You can render prompts in your Rails application using the `Spectre::Prompt.render` method, which loads and renders the specified prompt template:

```ruby
# Render a system prompt
Spectre::Prompt.render(template: 'rag/system')

# Render a user prompt with local variables
Spectre::Prompt.render(
  template: 'rag/user',
  locals: {
    query: query,
    objects: objects
  }
)
```
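Conceptually, rendering a prompt amounts to evaluating the ERB template and parsing the resulting YAML. A simplified, self-contained sketch (not the gem's actual implementation; Spectre exposes locals as instance variables like `@query`, while this sketch uses plain locals to stay self-contained):

```ruby
require 'erb'
require 'yaml'

# Simplified, gem-independent sketch of what a Prompt.render-style helper
# does: evaluate the ERB template with the given locals, then parse the
# resulting YAML and return the prompt body.
def render_prompt(template_text, locals = {})
  b = binding
  locals.each { |name, value| b.local_variable_set(name, value) }
  rendered = ERB.new(template_text).result(b)
  YAML.safe_load(rendered).values.first
end

template = <<~ERB
  user: |
    User's query: <%= query %>
ERB

render_prompt(template, query: 'What is AI?')
# => "User's query: What is AI?\n"
```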
- `template`: The path to the prompt template file (e.g., `rag/system`).
- `locals`: A hash of variables to be used inside the ERB template.

Using Nested Templates for Prompts

Spectre's `Prompt` class now supports rendering templates from nested directories. This allows you to better organize your prompt files in a structured folder hierarchy.
You can organize your prompt templates in subfolders. For instance, you can have the following structure:
```
app/
  spectre/
    prompts/
      rag/
        system.yml.erb
        user.yml.erb
      classification/
        intent/
          system.yml.erb
          user.yml.erb
        entity/
          system.yml.erb
          user.yml.erb
```
To render a prompt from a nested folder, simply pass the full path to the `template` argument:
```ruby
# Rendering from a nested folder
Spectre::Prompt.render(
  template: 'classification/intent/user',
  locals: { query: 'What is AI?' }
)
```
This allows for more flexibility when organizing your prompt files, particularly when dealing with complex scenarios or multiple prompt categories.
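Under the hood, a nested template name presumably maps onto a file beneath the prompts directory. A sketch of that lookup (illustrative; not necessarily the gem's exact logic):

```ruby
# Sketch of how a nested template name could map onto a file under the
# prompts directory (illustrative; not necessarily the gem's exact lookup).
def prompt_path(template, root: 'app/spectre/prompts')
  File.join(root, "#{template}.yml.erb")
end

prompt_path('classification/intent/user')
# => "app/spectre/prompts/classification/intent/user.yml.erb"
```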
Combining Completions with Prompts
You can also combine completions and prompts like so:
```ruby
Spectre.provider_module::Completions.create(
  messages: [
    { role: 'system', content: Spectre::Prompt.render(template: 'rag/system') },
    { role: 'user', content: Spectre::Prompt.render(template: 'rag/user', locals: { query: @query, user: @user }) }
  ]
)
```
Bug reports and pull requests are welcome on GitHub at https://github.com/hiremav/spectre. This project is intended to be a safe, welcoming space for collaboration, and your contributions are greatly appreciated!
1. Create your feature branch (`git checkout -b my-new-feature`).
2. Commit your changes (`git commit -am 'Add some feature'`).
3. Push to the branch (`git push origin my-new-feature`).

This gem is available as open source under the terms of the MIT License.