
github.com/selfreliantob/dspy-go
DSPy-Go is a native Go implementation of the DSPy framework, bringing systematic prompt engineering and automated reasoning capabilities to Go applications. It provides a flexible and idiomatic framework for building reliable and effective Language Model (LLM) applications through composable modules and workflows.
```bash
go get github.com/selfreliantob/dspy-go
```
Here's a simple example to get you started with DSPy-Go:
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/selfreliantob/dspy-go/pkg/config"
	"github.com/selfreliantob/dspy-go/pkg/core"
	"github.com/selfreliantob/dspy-go/pkg/llms"
	"github.com/selfreliantob/dspy-go/pkg/modules"
)

func main() {
	// Configure the default LLM
	llms.EnsureFactory()
	err := config.ConfigureDefaultLLM("your-api-key", core.ModelAnthropicSonnet)
	if err != nil {
		log.Fatalf("Failed to configure LLM: %v", err)
	}

	// Create a signature for question answering
	signature := core.NewSignature(
		[]core.InputField{{Field: core.Field{Name: "question"}}},
		[]core.OutputField{{Field: core.Field{Name: "answer"}}},
	)

	// Create a ChainOfThought module that implements step-by-step reasoning
	cot := modules.NewChainOfThought(signature)

	// Create a program that executes the module
	program := core.NewProgram(
		map[string]core.Module{"cot": cot},
		func(ctx context.Context, inputs map[string]interface{}) (map[string]interface{}, error) {
			return cot.Process(ctx, inputs)
		},
	)

	// Execute the program with a question
	result, err := program.Execute(context.Background(), map[string]interface{}{
		"question": "What is the capital of France?",
	})
	if err != nil {
		log.Fatalf("Error executing program: %v", err)
	}

	fmt.Printf("Answer: %s\n", result["answer"])
}
```
DSPy-Go is built around several key concepts that work together to create powerful LLM applications:
Signatures define the input and output fields for modules, creating a clear contract for what a module expects and produces.
```go
// Create a signature for a summarization task
signature := core.NewSignature(
	[]core.InputField{
		{Field: core.Field{Name: "document", Description: "The document to summarize"}},
	},
	[]core.OutputField{
		{Field: core.Field{Name: "summary", Description: "A concise summary of the document"}},
		{Field: core.Field{Name: "key_points", Description: "The main points from the document"}},
	},
)
```
Signatures can include field descriptions that enhance prompt clarity and improve LLM performance.
Modules are the building blocks of DSPy-Go programs. They encapsulate specific functionalities and can be composed to create complex pipelines. Some key modules include:
Predict is the simplest module: it makes direct predictions using an LLM.
```go
predict := modules.NewPredict(signature)
result, err := predict.Process(ctx, map[string]interface{}{
	"document": "Long document text here...",
})
// result contains "summary" and "key_points"
```
ChainOfThought implements chain-of-thought reasoning, guiding the LLM to break complex problems down into intermediate steps.
```go
cot := modules.NewChainOfThought(signature)
result, err := cot.Process(ctx, map[string]interface{}{
	"question": "Solve 25 × 16 step by step.",
})
// result contains both the reasoning steps and the final answer
```
ReAct implements the Reasoning and Acting paradigm, allowing LLMs to use tools to solve problems.
```go
// Create tools
calculator := tools.NewCalculatorTool()
searchTool := tools.NewSearchTool()

// Create a ReAct module with tools
react := modules.NewReAct(signature, []core.Tool{calculator, searchTool})
result, err := react.Process(ctx, map[string]interface{}{
	"question": "What is the population of France divided by 1000?",
})
// ReAct will use the search tool to find the population and the calculator to divide it
```
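Conceptually, ReAct alternates between choosing an action and observing a tool's result until an answer emerges. The self-contained sketch below imitates that loop with a scripted plan and a toy tool standing in for the LLM and real tools; none of these names come from the library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// tool mirrors the shape of a ReAct tool: it claims actions it can
// handle and executes them to produce an observation.
type tool interface {
	CanHandle(action string) bool
	Execute(action string) (string, error)
}

// doubler is a toy tool: it handles actions like "double(21)".
type doubler struct{}

func (doubler) CanHandle(action string) bool {
	return strings.HasPrefix(action, "double(")
}

func (doubler) Execute(action string) (string, error) {
	arg := strings.TrimSuffix(strings.TrimPrefix(action, "double("), ")")
	n, err := strconv.Atoi(arg)
	if err != nil {
		return "", err
	}
	return strconv.Itoa(2 * n), nil
}

// runReAct sketches the act/observe loop. In the real module the next
// action comes from the LLM; here it comes from a scripted plan.
func runReAct(plan []string, tools []tool) string {
	var observation string
	for _, action := range plan {
		for _, t := range tools {
			if t.CanHandle(action) {
				if obs, err := t.Execute(action); err == nil {
					observation = obs
				}
				break
			}
		}
	}
	return observation
}

func main() {
	answer := runReAct([]string{"double(21)"}, []tool{doubler{}})
	fmt.Println(answer) // 42
}
```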
Programs combine modules into executable workflows. They define how inputs flow through the system and how outputs are produced.
```go
program := core.NewProgram(
	map[string]core.Module{
		"retriever": retriever,
		"generator": generator,
	},
	func(ctx context.Context, inputs map[string]interface{}) (map[string]interface{}, error) {
		// First retrieve relevant documents
		retrieverResult, err := retriever.Process(ctx, inputs)
		if err != nil {
			return nil, err
		}

		// Then generate an answer using the retrieved documents
		generatorInputs := map[string]interface{}{
			"question":  inputs["question"],
			"documents": retrieverResult["documents"],
		}
		return generator.Process(ctx, generatorInputs)
	},
)
```
Optimizers help improve the performance of your DSPy-Go programs by automatically tuning prompts and module parameters.
BootstrapFewShot automatically selects high-quality examples for few-shot learning.
```go
// Create a dataset of examples
dataset := datasets.NewInMemoryDataset()
dataset.AddExample(map[string]interface{}{
	"question": "What is the capital of France?",
	"answer":   "The capital of France is Paris.",
})
// Add more examples...

// Create and apply the optimizer
optimizer := optimizers.NewBootstrapFewShot(dataset, metrics.NewExactMatchMetric("answer"))
optimizedModule, err := optimizer.Optimize(ctx, originalModule)
```
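For intuition, an exact-match metric can be thought of as a function that scores 1.0 when the predicted field equals the expected one and 0.0 otherwise. The sketch below shows that idea; `exactMatch` is a hypothetical name, not the library's API:

```go
package main

import "fmt"

// exactMatch sketches the idea behind an exact-match metric: score 1.0
// when the predicted field equals the expected field, 0.0 otherwise.
// The real metrics package in dspy-go may expose a different interface.
func exactMatch(field string, expected, predicted map[string]interface{}) float64 {
	if expected[field] == predicted[field] {
		return 1.0
	}
	return 0.0
}

func main() {
	want := map[string]interface{}{"answer": "Paris"}
	got := map[string]interface{}{"answer": "Paris"}
	fmt.Println(exactMatch("answer", want, got)) // 1
}
```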
More advanced optimizers are also available for multi-step interactive prompt optimization (MIPRO) and collaborative prompt optimization (COPRO).
```go
// Create a MIPRO optimizer
mipro := optimizers.NewMIPRO(dataset, metrics.NewRougeMetric("answer"))
optimizedModule, err := mipro.Optimize(ctx, originalModule)
```
DSPy-Go provides powerful abstractions for building more complex agent systems.
DSPy-Go provides several memory implementations for tracking conversation history.
```go
// Create a buffer memory for conversation history
mem := memory.NewBufferMemory(10) // Keep the last 10 exchanges
mem.Add(context.Background(), "user", "Hello, how can you help me?")
mem.Add(context.Background(), "assistant", "I can answer questions and help with tasks. What do you need?")

// Retrieve the conversation history
history, err := mem.Get(context.Background())
```
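The idea behind a buffer memory is to keep only the most recent N exchanges, dropping the oldest first. The sketch below is a minimal, self-contained illustration of that policy, not the library's implementation:

```go
package main

import "fmt"

// message is one conversation turn.
type message struct {
	Role    string
	Content string
}

// bufferMemory keeps at most max messages, discarding the oldest
// when the limit is exceeded.
type bufferMemory struct {
	max      int
	messages []message
}

func (m *bufferMemory) Add(role, content string) {
	m.messages = append(m.messages, message{role, content})
	if len(m.messages) > m.max {
		m.messages = m.messages[len(m.messages)-m.max:]
	}
}

func (m *bufferMemory) Get() []message {
	return m.messages
}

func main() {
	mem := &bufferMemory{max: 2}
	mem.Add("user", "Hello")
	mem.Add("assistant", "Hi, how can I help?")
	mem.Add("user", "What's DSPy-Go?")

	// Only the last 2 messages survive.
	for _, msg := range mem.Get() {
		fmt.Printf("%s: %s\n", msg.Role, msg.Content)
	}
}
```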
Chain workflows execute steps sequentially:
```go
// Create a chain workflow
workflow := workflows.NewChainWorkflow(store)

// Add steps to the workflow
workflow.AddStep(&workflows.Step{
	ID:     "step1",
	Module: modules.NewPredict(signature1),
})
workflow.AddStep(&workflows.Step{
	ID:     "step2",
	Module: modules.NewPredict(signature2),
})

// Execute the workflow
result, err := workflow.Execute(ctx, inputs)
```
Each workflow step can be configured with retry logic:
```go
step := &workflows.Step{
	ID:     "retry_example",
	Module: myModule,
	RetryConfig: &workflows.RetryConfig{
		MaxAttempts:       3,
		BackoffMultiplier: 2.0,
		InitialBackoff:    time.Second,
	},
	Condition: func(state map[string]interface{}) bool {
		return someCondition(state)
	},
}
```
The orchestrator provides flexible task decomposition and execution:
```go
// Create an orchestrator with subtasks
orchestrator := agents.NewOrchestrator()

// Define and add subtasks
researchTask := agents.NewTask("research", researchModule)
summarizeTask := agents.NewTask("summarize", summarizeModule)
orchestrator.AddTask(researchTask)
orchestrator.AddTask(summarizeTask)

// Execute the orchestration
result, err := orchestrator.Execute(ctx, map[string]interface{}{
	"topic": "Climate change impacts",
})
```
DSPy-Go supports multiple LLM providers out of the box:
```go
// Using Anthropic Claude
llm, err := llms.NewAnthropicLLM("api-key", core.ModelAnthropicSonnet)

// Using Google Gemini
llm, err := llms.NewGeminiLLM("api-key", "gemini-pro")

// Using Ollama (local)
llm, err := llms.NewOllamaLLM("http://localhost:11434", "ollama:llama2")

// Using LlamaCPP (local)
llm, err := llms.NewLlamacppLLM("http://localhost:8080")

// Set as the default LLM
llms.SetDefaultLLM(llm)

// Or use with a specific module
myModule.SetLLM(llm)
```
DSPy-Go includes detailed tracing and structured logging for debugging and optimization:
```go
// Enable detailed tracing
ctx = core.WithExecutionState(context.Background())

// Configure logging
logger := logging.NewLogger(logging.Config{
	Severity: logging.DEBUG,
	Outputs:  []logging.Output{logging.NewConsoleOutput(true)},
})
logging.SetLogger(logger)

// After execution, inspect the trace
executionState := core.GetExecutionState(ctx)
steps := executionState.GetSteps("moduleId")
for _, step := range steps {
	fmt.Printf("Step: %s, Duration: %s\n", step.Name, step.Duration)
	fmt.Printf("Prompt: %s\n", step.Prompt)
	fmt.Printf("Response: %s\n", step.Response)
}
```
You can extend ReAct modules with custom tools:
```go
// Define a custom tool
type WeatherTool struct{}

func (t *WeatherTool) GetName() string {
	return "weather"
}

func (t *WeatherTool) GetDescription() string {
	return "Get the current weather for a location"
}

func (t *WeatherTool) CanHandle(action string) bool {
	return strings.HasPrefix(action, "weather(")
}

func (t *WeatherTool) Execute(ctx context.Context, action string) (string, error) {
	// Parse the location from the action
	location := parseLocation(action)

	// Fetch weather data (implementation detail)
	weather, err := fetchWeather(location)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("Weather in %s: %s, %d°C", location, weather.Condition, weather.Temperature), nil
}

// Use the custom tool with ReAct
react := modules.NewReAct(signature, []core.Tool{&WeatherTool{}})
```
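`parseLocation` above is deliberately left as an implementation detail. If the action format is `weather(Paris)`, one hypothetical way to extract the argument looks like this (`parseActionArg` is an invented helper for illustration, not part of dspy-go):

```go
package main

import (
	"fmt"
	"strings"
)

// parseActionArg extracts the argument from actions shaped like
// "name(arg)". This is a hypothetical helper; the action format your
// ReAct prompts produce may differ.
func parseActionArg(action, name string) (string, bool) {
	prefix := name + "("
	if !strings.HasPrefix(action, prefix) || !strings.HasSuffix(action, ")") {
		return "", false
	}
	arg := strings.TrimSuffix(strings.TrimPrefix(action, prefix), ")")
	return strings.TrimSpace(arg), true
}

func main() {
	loc, ok := parseActionArg("weather(Paris)", "weather")
	fmt.Println(loc, ok) // Paris true
}
```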
Process LLM outputs incrementally as they're generated:
```go
// Create a streaming handler
handler := func(chunk string) {
	fmt.Print(chunk)
}

// Enable streaming on the module
module.SetStreamingHandler(handler)

// Process with streaming enabled
result, err := module.Process(ctx, inputs)
```
Check the examples directory in the repository for complete implementations, and see the repository for more detailed documentation.
DSPy-Go is released under the MIT License. See the LICENSE file for details.