overide - npm Package Compare versions

Comparing version 0.0.2 to 0.0.3

assets/prompt.structure.json — 121 changed lines (diff not shown)

core/formatter/format.request.js

@@ -1,17 +1,32 @@
-const DirectoryHelper = require('../../helpers/help.directory');
+const DirectoryHelper = require('../helpers/help.directory');
+const FormatPrompt = require('./format.prompt');
+
+/**
+ * The `FormatRequest` class is responsible for creating a dynamic request
+ * based on the active AI service platform (OpenAI or DeepSeek). It formats
+ * the prompt using `FormatPrompt` and constructs the request body accordingly.
+ */
 class FormatRequest {
-  // Create a dynamic request based on the active service
-  async createRequest(prompt, promptArray, verbose = false) {
+  /**
+   * Creates a dynamic request object based on the active service platform.
+   * It calls either the OpenAI or DeepSeek-specific request formatting function.
+   *
+   * @param {string} prompt - The raw prompt extracted from the file.
+   * @param {Array} promptArray - The array of context around the prompt.
+   * @param {boolean} verbose - Whether to log the request creation process.
+   * @returns {Object} The formatted request object for the active service.
+   */
+  async createRequest(prompt, promptArray, completionType, verbose = false) {
     try {
-      // Get active service details (returns details like platform, apiKey, etc.)
+      // Fetch details about the active AI service (platform, API key, etc.)
       const activeServiceDetails = await DirectoryHelper.getActiveServiceDetails();

-      // Check which platform is active and call the respective request function
+      // Determine which platform is active and create the appropriate request
       switch (activeServiceDetails.platform) {
         case 'openai':
-          return this.createOpenAIRequest(prompt, promptArray, activeServiceDetails, verbose);
+          return this.createOpenAIRequest(prompt, promptArray, activeServiceDetails, completionType, verbose);
         case 'deepseek':
-          return this.createDeepSeekRequest(prompt, promptArray, activeServiceDetails);
+          return this.createDeepSeekRequest(prompt, promptArray, activeServiceDetails, completionType);

@@ -26,29 +41,14 @@ default:
-  // Format request for OpenAI models
-  async createOpenAIRequest(prompt, promptArray, activeServiceDetails, verbose) {
-    const context = `
-    <First 10 lines of the file>
-    ${promptArray[0]}
-
-    <10 lines before the insertion>
-    ${promptArray[1]}
-
-    <10 lines after the insertion>
-    ${promptArray[3]}
-    `;
-
-    // Construct a clearer and more informative prompt
-    let finalPrompt = `You are a coding assistant specialized in generating accurate and efficient code completions.
-    Below is the current code context and an incomplete code block that needs to be completed.
-
-    Context:
-    ${context}
-
-    Incomplete code:
-    ${prompt}
-
-    Please generate the missing code to ensure the functionality is correct,
-    efficient, and follows best practices. If necessary, include comments explaining the code.`;
+  /**
+   * Creates and formats the request for OpenAI models.
+   *
+   * @param {string} prompt - The raw prompt extracted from the file.
+   * @param {Array} promptArray - The array of context around the prompt.
+   * @param {Object} activeServiceDetails - Details about the active service (platform, apiKey, etc.).
+   * @param {boolean} verbose - Whether to log the request details.
+   * @returns {Object} The request object for the OpenAI API.
+   */
+  async createOpenAIRequest(prompt, promptArray, activeServiceDetails, completionType, verbose) {
+    const finalPrompt = await FormatPrompt.getOpenAiPrompt(promptArray, prompt, completionType);

     if (verbose) {

@@ -62,3 +62,3 @@ console.log(`Prompt Text : ${finalPrompt}`);
   "metadata": {
-    model: "gpt-4o",
+    model: "gpt-4o", // Specify the model to use
     messages: [

@@ -68,8 +68,8 @@ { role: 'system', content: 'You are a coding assistant api.' },
     ],
-    temperature: 0.7, // You can adjust temperature based on randomness
-    max_tokens: 1000, // Limit token length of the response (adjust as needed)
-    n: 1, // Number of completions to generate
-    stream: false, // Whether to stream back partial progress
-    presence_penalty: 0, // Encourages/discourages new ideas
-    frequency_penalty: 0, // Reduces repetition
+    temperature: 0.7, // Adjust temperature for creativity (lower = more deterministic)
+    max_tokens: 1000, // Max tokens for the response
+    n: 1, // Number of completions to generate
+    stream: false, // Whether to stream results
+    presence_penalty: 0, // Adjusts frequency of introducing new ideas
+    frequency_penalty: 0, // Adjusts repetition
   },

@@ -79,31 +79,16 @@ };
-  async createDeepSeekRequest(prompt, promptArray, activeServiceDetails) {
+  /**
+   * Creates and formats the request for DeepSeek models.
+   *
+   * @param {string} prompt - The raw prompt extracted from the file.
+   * @param {Array} promptArray - The array of context around the prompt.
+   * @param {Object} activeServiceDetails - Details about the active service (platform, apiKey, etc.).
+   * @returns {Object} The request object for the DeepSeek API.
+   */
+  async createDeepSeekRequest(prompt, promptArray, activeServiceDetails, completionType) {
     try {
-      const context = `
-      <First 10 lines of the file>
-      ${promptArray[0]}
-
-      <10 lines before the insertion>
-      ${promptArray[1]}
-
-      <10 lines after the insertion>
-      ${promptArray[3]}
-      `;
-
-      // Construct a clearer and more informative prompt
-      let finalPrompt = `You are a coding assistant specialized in generating accurate and efficient code completions.
-      Below is the current code context and an incomplete code block that needs to be completed.
-
-      Context:
-      ${context}
-
-      Incomplete code:
-      ${prompt}
-
-      Please generate the complete code or missing code to ensure the functionality is correct,
-      efficient, and follows best practices. Don't explain the code in any way. Put the code inside markdown quote.`;
-
-      const messages = [{ "role": "system", "content": finalPrompt },
-        { "role": "user", "content": prompt }];
+      const finalPrompt = await FormatPrompt.getDeepSeekPrompt(promptArray, prompt, completionType);
+      const messages = [{ "role": "system", "content": finalPrompt }, { "role": "user", "content": prompt }];

       // Construct the request body for DeepSeek API
       return {

@@ -122,2 +107,2 @@ activeServiceDetails,

module.exports = new FormatRequest();

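Both request formatters previously assembled their context inline from `promptArray` slots before this version delegated that work to `FormatPrompt`. As a minimal sketch of the removed assembly (the `buildContext` helper below is hypothetical, not part of the package), the slot layout worked like this:

```javascript
// Illustrative re-creation of the removed inline context assembly.
// `buildContext` is a hypothetical helper; the shipped 0.0.3 code delegates
// prompt construction to FormatPrompt instead.
function buildContext(promptArray) {
  // Slot layout mirrors the removed template: [0] file head, [1] lines
  // before the insertion point, [3] lines after it ([2] is unused).
  return `
<First 10 lines of the file>
${promptArray[0]}

<10 lines before the insertion>
${promptArray[1]}

<10 lines after the insertion>
${promptArray[3]}
`;
}

const context = buildContext(['head...', 'before...', '', 'after...']);
console.log(context.includes('<10 lines after the insertion>')); // true
```

Note that slot 2 is skipped in the removed template as well, which is why the refactor's single `FormatPrompt` entry point is easier to keep consistent across platforms.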
@@ -1,17 +1,31 @@
-const DirectoryHelper = require('../../helpers/help.directory');
+const DirectoryHelper = require('../helpers/help.directory');
+const CodeHelper = require('../helpers/help.code');
+
+/**
+ * The `FormatResponse` class is responsible for formatting the response received from
+ * AI service platforms like OpenAI and DeepSeek. It extracts code blocks from the response
+ * content and returns them for further processing.
+ */
 class FormatResponse {
-  async formatResponse(response, verbose = false) {
+  /**
+   * Formats the response based on the active service platform.
+   * It calls either the OpenAI or DeepSeek-specific formatting function.
+   *
+   * @param {Object} response - The API response object.
+   * @param {boolean} verbose - Whether to log the formatting process.
+   * @returns {string|null} The formatted code block extracted from the response.
+   */
+  async formatResponse(response, completionType, verbose = false) {
     try {
-      // Get active service details (returns details like platform, apiKey, etc.)
+      // Fetch details about the active AI service (platform, API key, etc.)
      const activeServiceDetails = await DirectoryHelper.getActiveServiceDetails();

-      // Check which platform is active and call the respective request function
+      // Determine which platform is active and format the response accordingly
       switch (activeServiceDetails.platform) {
         case 'openai':
-          return this.formatOpenAIResponse(response, verbose)
+          return this.formatOpenAIResponse(response, completionType, verbose);
         case 'deepseek':
-          return this.formatDeepSeekResponse(response);
+          return this.formatDeepSeekResponse(response, completionType, verbose);

@@ -22,3 +36,4 @@ default:
     } catch (error) {
-      console.error(`Error in creating request: ${error.message}`);
+      console.error(`Error in formatting response: ${error.message}`);
+      return null;
     }

@@ -28,24 +43,13 @@ }

   /**
-   * Format the response for OpenAI models
-   * @param {string} response - The response from the OpenAI API
-   * @returns {string} - The code that needs to be inserted.
+   * Formats the response from OpenAI models by extracting the code block.
+   *
+   * @param {Object} response - The response from the OpenAI API.
+   * @param {boolean} verbose - Whether to log details of the extracted code.
+   * @returns {string|null} The extracted code block, or null if no code block is found.
    */
-  formatOpenAIResponse(response, verbose) {
+  formatOpenAIResponse(response, completionType, verbose) {
     try {
-      // Extract the content from the first choice
+      // Extract the content from the first choice in the response
       const content = response.choices[0].message.content;

-      // Use a regular expression to capture the code block inside ```
-      const codeMatch = content.match(/```[\s\S]*?\n([\s\S]*?)\n```/);
-      if (codeMatch && codeMatch[1]) {
-        if (verbose) {
-          console.log(`Code Block : ${codeMatch[1]}`);
-        }
-        return codeMatch[1]; // Return the extracted code
-      } else {
-        throw new Error("No code block found in the response");
-      }
+      return CodeHelper.extractCodeBlock(content, completionType, verbose);
     } catch (error) {

@@ -57,11 +61,13 @@ console.error("Error formatting OpenAI response:", error.message);

-  formatDeepSeekResponse(response) {
+  /**
+   * Formats the response from DeepSeek models by extracting the code block.
+   *
+   * @param {Object} response - The response from the DeepSeek API.
+   * @returns {string|null} The extracted code block, or null if no code block is found.
+   */
+  formatDeepSeekResponse(response, completionType, verbose) {
     try {
+      // Extract the content from the first choice in the response
       const content = response.choices[0].message.content;
-      const codeMatch = content.match(/```[\s\S]*?\n([\s\S]*?)\n```/);
-      if (codeMatch && codeMatch[1]) {
-        return codeMatch[1]; // Return the extracted code
-      } else {
-        throw new Error("No code block found in the response");
-      }
+      return CodeHelper.extractCodeBlock(content, completionType, verbose);
     } catch (error) {

@@ -74,2 +80,2 @@ console.error("Error formatting DeepSeek response:", error.message);

module.exports = new FormatResponse();

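The regex that the removed inline extractors used to pull a fenced code block out of the model reply can be exercised on its own. This is a sketch of that pattern only, assuming nothing about the actual `CodeHelper.extractCodeBlock` beyond what the diff shows:

```javascript
// Sketch of the fenced-code-block extraction the old inline code performed;
// version 0.0.3 moves this behavior into CodeHelper.extractCodeBlock.
function extractCodeBlock(content) {
  // Capture everything between the opening ```lang line and the closing ```.
  const codeMatch = content.match(/```[\s\S]*?\n([\s\S]*?)\n```/);
  if (codeMatch && codeMatch[1]) {
    return codeMatch[1];
  }
  throw new Error('No code block found in the response');
}

const reply = 'Here is the fix:\n```js\nconst x = 1;\n```\nDone.';
console.log(extractCodeBlock(reply)); // const x = 1;
```

The lazy quantifier before the first newline skips the language tag on the opening fence, so the capture group holds only the code body.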
 require('dotenv').config();
 const OpenAI = require('openai');
-const axios = require('axios');
+// const axios = require('axios');
+
+/**
+ * The `Network` class is responsible for making API requests to different
+ * services (OpenAI, DeepSeek, and Ollama) to generate code based on the
+ * provided request data.
+ */
 class Network {
   /**
-   * Generate code based on the active service (OpenAI, DeepSeek, or Ollama).
-   * @param {object} requestData - The request data for the API.
+   * Generates code based on the active service (OpenAI, DeepSeek, or Ollama).
+   *
+   * @param {object} requestData - The request data containing service details and metadata.
+   * @returns {Promise<string>} - The generated code response.
+   * @throws Will throw an error if no active service details are found or if there are missing credentials.
    */
   async doRequest(requestData) {
     const { activeServiceDetails } = requestData;
+
+    // Validate presence of active service details
     if (!activeServiceDetails) {

@@ -18,63 +26,64 @@ throw new Error("No active service details found.");

-    const {metadata} = requestData;
+    const { metadata } = requestData;
     const { platform } = activeServiceDetails;
-    const { apiKey, orgId, baseUrl, port } = activeServiceDetails.details;

-    // Handle OpenAI requests
-    if (platform === "openai") {
-      if (!apiKey || !orgId) {
-        throw new Error("API key or Organization ID missing for OpenAI.");
-      }
-
-      try {
-        const openai = new OpenAI.OpenAI({
-          apiKey: apiKey,
-          organization: orgId,
-        });
-        const completions = await openai.chat.completions.create(metadata);
-        return completions;
-      } catch (error) {
-        console.error(`Error generating code with OpenAI: ${error.message}`);
-        throw error;
-      }
-    }
-
-    // Handle DeepSeek requests
-    if (platform === 'deepseek') {
-      if (!apiKey || !baseUrl) {
-        throw new Error("API key or BaseUrl missing for DeepSeek.");
-      }
-
-      // Uncomment the following for actual API call with OpenAI package
-      try {
-        const openai = new OpenAI.OpenAI({
-          apiKey: apiKey,
-          baseURL: baseUrl,
-        });
-        const completions = await openai.chat.completions.create(metadata);
-        return completions;
-      } catch (error) {
-        console.error(`Error generating code with OpenAI: ${error.message}`);
-        throw error;
-      }
-    }
-
-    // Handle Ollama requests
-    if (platform === 'ollama') {
-      if (!port) {
-        throw new Error("Port missing for Ollama.");
-      }
-
-      // Uncomment the following for actual API call with axios
-      try {
-        const response = await axios.post(`http://localhost:${port}/generate`, metadata);
-        return response.data;
-      } catch (error) {
-        console.error(`Error generating code with Ollama: ${error.message}`);
-        throw error;
-      }
-    }
-
-    throw new Error("No valid model or platform selected.");
+    // Handle requests based on the selected platform
+    switch (platform) {
+      case "openai":
+        return this.handleOpenAIRequest(activeServiceDetails, metadata);
+      case "deepseek":
+        return this.handleDeepSeekRequest(activeServiceDetails, metadata);
+      default:
+        throw new Error("No valid model or platform selected.");
+    }
+  }
+
+  /**
+   * Handles requests to the OpenAI service.
+   *
+   * @param {object} activeServiceDetails - The details of the active OpenAI service.
+   * @param {object} metadata - The metadata for the API request.
+   * @returns {Promise<string>} - The generated code response from OpenAI.
+   * @throws Will throw an error if the API key or organization ID is missing.
+   */
+  async handleOpenAIRequest(activeServiceDetails, metadata) {
+    const { apiKey, orgId } = activeServiceDetails.details;
+
+    if (!apiKey || !orgId) {
+      throw new Error("API key or Organization ID missing for OpenAI.");
+    }
+
+    try {
+      const openai = new OpenAI.OpenAI({ apiKey, organization: orgId });
+      const completions = await openai.chat.completions.create(metadata);
+      return completions; // Return the generated code from OpenAI
+    } catch (error) {
+      console.error(`Error generating code with OpenAI: ${error.message}`);
+      throw error; // Rethrow error for handling at a higher level
+    }
+  }
+
+  /**
+   * Handles requests to the DeepSeek service.
+   *
+   * @param {object} activeServiceDetails - The details of the active DeepSeek service.
+   * @param {object} metadata - The metadata for the API request.
+   * @returns {Promise<string>} - The generated code response from DeepSeek.
+   * @throws Will throw an error if the API key or base URL is missing.
+   */
+  async handleDeepSeekRequest(activeServiceDetails, metadata) {
+    const { apiKey, baseUrl } = activeServiceDetails.details;
+
+    if (!apiKey || !baseUrl) {
+      throw new Error("API key or BaseUrl missing for DeepSeek.");
+    }
+
+    try {
+      const openai = new OpenAI.OpenAI({ apiKey, baseURL: baseUrl });
+      const completions = await openai.chat.completions.create(metadata);
+      return completions; // Return the generated code from DeepSeek
+    } catch (error) {
+      console.error(`Error generating code with DeepSeek: ${error.message}`);
+      throw error; // Rethrow error for handling at a higher level
+    }
   }

@@ -81,0 +90,0 @@ }

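The refactor above replaces the platform if-chain with a switch that dispatches to per-platform handler methods. The shape of that dispatch can be sketched independently of the SDK; `Dispatcher` and its placeholder return strings below are illustrative only, not the real `Network` methods:

```javascript
// Stubbed sketch of the switch-based dispatch introduced in 0.0.3.
// Handlers return placeholder strings instead of calling the OpenAI SDK.
class Dispatcher {
  async doRequest({ activeServiceDetails, metadata }) {
    if (!activeServiceDetails) {
      throw new Error('No active service details found.');
    }
    switch (activeServiceDetails.platform) {
      case 'openai':
        return this.handleOpenAI(metadata);
      case 'deepseek':
        return this.handleDeepSeek(metadata);
      default:
        throw new Error('No valid model or platform selected.');
    }
  }

  async handleOpenAI(metadata) {
    // Real code: validate credentials, then openai.chat.completions.create(metadata)
    return `openai:${metadata.model}`;
  }

  async handleDeepSeek(metadata) {
    // Real code: OpenAI client constructed with a baseURL override
    return `deepseek:${metadata.model}`;
  }
}

new Dispatcher()
  .doRequest({ activeServiceDetails: { platform: 'openai' }, metadata: { model: 'gpt-4o' } })
  .then(result => console.log(result)); // openai:gpt-4o
```

Splitting the handlers keeps each platform's credential check next to its client construction, which is the main readability win of this hunk.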
 {
   "name": "overide",
-  "version": "0.0.2",
+  "version": "0.0.3",
   "description": "This is a CLI based Code Generation Tool.",
-  "main": "index.js",
+  "main": "bin/index.js",
   "bin": {
-    "oi": "index.js"
+    "oi": "bin/index.js"
   },
   "scripts": {
-    "start": "node index.js",
+    "start": "node bin/index.js",
     "test": "echo \"Error: no test specified\" && exit 1"
   },
   "author": "Abhijeet Dash",
-  "license": "ISC",
+  "license": "GPL-2.0",
   "dependencies": {

@@ -20,2 +20,3 @@ "axios": "^1.7.7",

   "dotenv": "^16.4.5",
+  "fuzzball": "^2.1.3",
   "inquirer": "^11.1.0",

@@ -31,8 +32,6 @@ "openai": "^4.67.2"

   "files": [
-    "commands",
-    "core",
-    "helpers",
-    "index.js",
-    "oi-config.json"
+    "assets",
+    "bin",
+    "core"
   ]
 }

@@ -25,2 +25,20 @@

 Please read the [Detailed Usage Guide](https://github.com/oi-overide/oi-overide/blob/main/Usage/Commands.md) for all the commands and options.

+### Adding API Key
+Before starting, run the following command. It shows a list of currently supported platforms and lets you add the required information, such as API KEY, ORG ID, BASE URL, and others.
+
+```bash
+oi config --global
+```
+
+If you ended up configuring multiple platforms (i.e., ran the above command multiple times and saved details for several platforms), run the following command to select an active platform. Oi will use the active platform.
+
+```bash
+oi config --select-active
+```
+
 ### Initialize a Project

@@ -66,12 +84,2 @@

-### Generate Dependency Graph - IN DEV
-You can generate or update the project dependency graph by running:
-
-```bash
-oi depend
-```
-
-This will create a `.oi-dependency.json` file that tracks the structure of your project and its dependencies.

 ## Configuration

@@ -90,8 +98,2 @@

-### Key Options
-- **service**: The AI backend service to use (e.g., OpenAI Codex, local LLM).
-- **ignore**: Files or directories to exclude from monitoring.
-- **verbose**: Enable verbose logging to track detailed operations.

 ## Version 2.0 Plan

@@ -106,7 +108,14 @@

-We welcome contributions from the community! Here’s how you can help:
+We welcome contributions from the community! There's a lot going on and we are slowly building, so we can use some help.
+Please take a look at the [version guidelines](https://github.com/oi-overide/oi-overide/tree/main/Contribution) before starting.
+
+1. Take a look at the open [project items](https://github.com/users/oi-overide/projects/1)
+2. It's a good idea to join the Discord to discuss the change.
+
+After this:

 1. **Fork** the repository.
 2. Create a **new branch** for your feature or fix.
-3. Submit a **pull request** and describe your changes.
+3. Use the [target version branch](https://github.com/oi-overide/oi-overide/blob/main/Contribution/Target%20Version%20Branch..md) as base.
+4. Submit a **pull request** and describe your changes.

@@ -122,7 +131,1 @@ Feel free to open issues for bugs, feature requests, or general feedback!

 Oi-Override is licensed under the GNU GPL-2.0 License. See the [LICENSE](LICENSE) file for more details.
-
----
-
-## Join the Community!
-
-We’re excited to build Oi-Override into a powerful, flexible tool that enhances developer workflows with the help of AI. Follow the repository, contribute, and help us improve Oi for everyone!