
@gaiaverse/semantic-turning-point-detector
Detects key semantic turning points in conversations using recursive semantic distance analysis. Ideal for conversation analysis, dialogue segmentation, insight detection, and AI-assisted reasoning tasks.
The Semantic Turning Point Detector is a lightweight but powerful tool for detecting semantic turning points in conversations or textual sequences. It recursively analyzes message chains (dialogues, transcripts, chat logs) and identifies where key shifts in meaning, topic, or insight occur. These turning points are crucial for conversation analysis, dialogue segmentation, insight detection, and AI-assisted reasoning tasks.

The confidence score is based on cosine similarity between text embeddings: the semantic distances from one turning point to the next are aggregated into a single number. This flattened value offers a quick, if limited, way to assess the health and reliability of the results.
| Score | What it Usually Means | Actionable Take-away |
|---|---|---|
| 0.0 – 0.2 | Almost no semantic movement. Flat or repetitive text. | Segmentation likely not useful. |
| 0.2 – 0.3 | Weak shifts. Some structure, but still bland. | May need larger chunks or lower thresholds. |
| 0.3 – 0.4 | Good — clear but gentle turning points. | Acceptable for overviews. |
| 0.4 – 0.6 | Ideal — strong, natural conversation flow. | Recommended "sweet-spot" range. |
| 0.6 – 1.0 | Too many jumps → chaotic / fragmented input. | Clean the transcript or lower the shift threshold. |
Why a score above 0.6 is "bad"

A very high score means the embedding distances between consecutive messages are huge: the text keeps veering off topic, so the detector sees "turning points" everywhere. That usually signals noisy, disjointed, or machine-generated content, not a well-paced human dialogue.
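As a rough illustration, here is a minimal TypeScript sketch of how such a score can be computed: cosine distance between consecutive embeddings, averaged into one flattened number. The helper names are ours for illustration; the package's internal scoring logic may differ.

```typescript
// Illustrative sketch only: the package's actual scoring may differ.
// Cosine distance between two embedding vectors.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Aggregate pairwise distances between consecutive turning-point embeddings
// into a single confidence-style score in [0, 1].
function aggregateConfidence(turningPointEmbeddings: number[][]): number {
  if (turningPointEmbeddings.length < 2) return 0;
  let total = 0;
  for (let i = 1; i < turningPointEmbeddings.length; i++) {
    total += cosineDistance(turningPointEmbeddings[i - 1], turningPointEmbeddings[i]);
  }
  return total / (turningPointEmbeddings.length - 1);
}
```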
This repository provides a TypeScript implementation of the Adaptive Recursive Convergence (ARC) with Cascading Re-Dimensional Attention (CRA) framework described in our research paper. Unlike traditional summarization which condenses content, this detector identifies moments where conversations shift in topic, tone, insight, or purpose, demonstrating a practical application of multi-dimensional reasoning.
The Semantic Turning Point Detector is a concrete implementation of the ARC/CRA theoretical framework. It demonstrates how conversation analysis can benefit from dimensional expansion and adaptive complexity management. Key features include adaptive recursion depth, dynamic thresholds, complexity-saturation-driven dimensional escalation, and convergence measurement, all of which are exercised in the example below:
```typescript
import fs from 'fs-extra'; // fs-extra provides writeJSONSync
import { SemanticTurningPointDetector, Message } from '@gaiaverse/semantic-turning-point-detector';
// Sample conversation shipped with the repository (see src/conversation.ts);
// the named export is assumed here.
import { conversation } from './src/conversation';

/**
 * Example function demonstrating how to use the SemanticTurningPointDetector.
 * Implements an adaptive approach based on conversation complexity.
 */
async function runTurningPointDetectorExample() {
  const thresholdForMinDialogueShift = 24;

  // Calculate adaptive recursion depth based on conversation length.
  // This directly implements the ARC concept of adaptive dimensional analysis.
  const determineRecursiveDepth = (messages: Message[]) => {
    return Math.floor(messages.length / thresholdForMinDialogueShift);
  };

  const startTime = new Date().getTime();

  // Create detector with configuration based on the ARC/CRA framework
  const detector = new SemanticTurningPointDetector({
    apiKey: process.env.OPENAI_API_KEY || '',
    // Dynamic configuration based on conversation complexity
    semanticShiftThreshold: 0.5 - 0.05 * determineRecursiveDepth(conversation),
    minTokensPerChunk: 512,
    maxTokensPerChunk: 4096,
    embeddingModel: 'text-embedding-3-large',
    // ARC framework: dynamic recursion depth based on conversation complexity
    maxRecursionDepth: Math.min(determineRecursiveDepth(conversation), 5),
    onlySignificantTurningPoints: true,
    significanceThreshold: 0.75,
    // ARC framework: chunk size scales with complexity
    minMessagesPerChunk: Math.ceil(determineRecursiveDepth(conversation) * 3.5),
    // ARC framework: number of turning points scales with conversation length
    maxTurningPoints: Math.max(6, Math.round(conversation.length / 7)),
    // CRA framework: explicit complexity saturation threshold for dimensional escalation
    complexitySaturationThreshold: 4.5,
    // Enable convergence measurement for ARC analysis
    measureConvergence: true,
    // classificationModel: 'phi-4-mini-Q5_K_M:3.8B',
    classificationModel: 'qwen2.5:7b-instruct-q5_k_m',
    debug: true,
    // Ollama's OpenAI-compatible endpoint
    endpoint: 'http://localhost:11434/v1',
  });

  try {
    // Detect turning points using the ARC/CRA framework
    const tokensInConvoFile = await detector.getMessageArrayTokenCount(conversation);
    const turningPointResult = await detector.detectTurningPoints(conversation);

    const turningPoints = turningPointResult.points;
    const confidenceScore = turningPointResult.confidence;

    const endTime = new Date().getTime();
    const difference = endTime - startTime;
    const formattedTimeDateDiff = new Date(difference).toISOString().slice(11, 19);

    // Display results with complexity scores from the ARC framework
    console.log('\n=== DETECTED TURNING POINTS (ARC/CRA Framework) ===\n');
    console.info(`Detected ${turningPoints.length} turning points with a confidence score of ${confidenceScore.toFixed(2)} using model ${detector.getModelName()} - on ${new Date().toLocaleDateString()} at ${new Date().toLocaleTimeString()}`);
    console.log(`\nTurning point detection took (HH:MM:SS) ${formattedTimeDateDiff} for ${tokensInConvoFile} tokens in the conversation\n`);

    turningPoints.forEach((tp, i) => {
      console.log(`${i + 1}. ${tp.label} (${tp.category})`);
      console.log(`   Messages: "${tp.span.startId}" → "${tp.span.endId}"`);
      console.log(`   Dimension: n=${tp.detectionLevel}`);
      console.log(`   Complexity Score: ${tp.complexityScore.toFixed(2)} of 5`);
      console.log(`   Best indicator message ID: "${tp.best_id}"`);
      console.log(`   Emotion: ${tp.emotionalTone || 'unknown'}`);
      console.log(`   Significance: ${tp.significance.toFixed(2)}`);
      console.log(`   Keywords: ${tp.keywords?.join(', ') || 'none'}`);
      if (tp.quotes?.length) {
        console.log(`   Notable quotes:\n${tp.quotes.map(q => `   - "${q}"`).join('\n')}`);
      }
      console.log();
    });

    // Get and display convergence history to demonstrate the ARC framework
    const convergenceHistory = detector.getConvergenceHistory();

    console.log('\n=== ARC/CRA FRAMEWORK CONVERGENCE ANALYSIS ===\n');
    convergenceHistory.forEach((state, i) => {
      console.log(`Iteration ${i + 1}:`);
      console.log(`  Dimension: n=${state.dimension}`);
      console.log(`  Convergence Distance: ${state.distanceMeasure.toFixed(3)}`);
      console.log(`  Dimensional Escalation: ${state.didEscalate ? 'Yes' : 'No'}`);
      console.log(`  Turning Points: ${state.currentTurningPoints.length}`);
      console.log();
    });

    // Save turning points to file
    fs.writeJSONSync('results/turningPoints.json', turningPoints, { spaces: 2, encoding: 'utf-8' });
    // Also save convergence analysis
    fs.writeJSONSync('results/convergence_analysis.json', convergenceHistory, { spaces: 2, encoding: 'utf-8' });

    console.log('Results saved to files.');
  } catch (err) {
    console.error('Error detecting turning points:', err);
  }
}
```
All configuration options are detailed and documented in the SemanticTurningPointDetectorConfig interface. Refer to the interface and its JSDoc comments in src/types.ts for a description of each parameter.
The following environment variables are used:

- `OPENAI_API_KEY`: Your OpenAI API key for embedding and classification models.
- `LLM_API_KEY`: If setting a custom endpoint for an LLM, this key will be used to authenticate requests.
- `EMBEDDINGS_API_KEY`: If using a custom embedding endpoint with authentication, this key will be used to authenticate requests.

Install the package with:

```
npm i @gaiaverse/semantic-turning-point-detector
```
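For instance, a detector pointed at a local Ollama endpoint might be configured as follows. This is a minimal sketch based on the full example above; passing the key explicitly via `apiKey` is our assumption about how the custom-endpoint keys are wired in.

```typescript
import { SemanticTurningPointDetector } from '@gaiaverse/semantic-turning-point-detector';

// Sketch: a detector using a custom (Ollama) endpoint. We pass LLM_API_KEY
// explicitly here; whether the library also reads it automatically is an
// assumption based on the environment-variable descriptions above.
const localDetector = new SemanticTurningPointDetector({
  apiKey: process.env.LLM_API_KEY || '',
  classificationModel: 'qwen2.5:7b-instruct-q5_k_m',
  embeddingModel: 'text-embedding-3-large',
  endpoint: 'http://localhost:11434/v1', // Ollama's OpenAI-compatible API
});
```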
The ARC framework posits that complex problems can be solved through iterative refinement at various dimensions, with controlled dimensional escalation when local refinements cannot resolve complexity. Our implementation demonstrates this through adaptive recursion depth and escalation that triggers only when complexity saturates (see the transition operator Ψ below).

CRA provides a mechanism for detecting saturation and determining when dimensional expansion is necessary. Our implementation demonstrates this through the complexitySaturationThreshold configuration and the meta-message construction described below.
```
semantic-turning-point-detector/
├── README.md                           // This file
├── package.json                        // NPM metadata
├── src/
│   ├── semanticTurningPointDetector.ts // Main implementation of ARC/CRA framework
│   ├── tokensUtil.ts                   // Utility for token counting
│   └── conversation.ts                 // Sample conversation for testing
├── results/
│   ├── turningPoints.json              // Output of turning point detection
│   └── convergence_analysis.json       // Convergence metrics from the ARC process
└── ...
```
Here's how to use the Semantic Turning Point Detector with the ARC/CRA framework:
```typescript
import { SemanticTurningPointDetector, Message } from './src/semanticTurningPointDetector';

// Sample conversation
const conversation: Message[] = [
  { id: 'msg-1', author: 'user', message: 'Hello, I need help with my project.' },
  { id: 'msg-2', author: 'assistant', message: "I'd be happy to help! What kind of project are you working on?" },
  // ... more messages
];

// Dynamic configuration based on conversation complexity
const thresholdForMinDialogueShift = 24;
const determineRecursiveDepth = (messages: Message[]) => {
  return Math.floor(messages.length / thresholdForMinDialogueShift);
};

// Create detector with ARC/CRA framework parameters
const detector = new SemanticTurningPointDetector({
  apiKey: process.env.OPENAI_API_KEY || '',
  // Dynamic configuration based on conversation complexity
  semanticShiftThreshold: 0.5 - 0.05 * determineRecursiveDepth(conversation),
  embeddingModel: 'text-embedding-3-large',
  // ARC framework: dynamic recursion depth based on conversation complexity
  maxRecursionDepth: Math.min(determineRecursiveDepth(conversation), 5),
  // ARC framework: chunk size scales with complexity
  minMessagesPerChunk: Math.ceil(determineRecursiveDepth(conversation) * 3.5),
  // CRA framework: complexity saturation threshold for dimensional escalation
  complexitySaturationThreshold: 4.5,
  // Enable convergence measurement for ARC analysis
  measureConvergence: true,
});

// Detect turning points using the ARC/CRA framework
async function analyzeConversation() {
  const { points, confidence } = await detector.detectTurningPoints(conversation);
  console.log(`Detected Turning Points (confidence ${confidence.toFixed(2)}):`, points);

  // Get convergence history to analyze the ARC process
  const convergenceHistory = detector.getConvergenceHistory();
  console.log('ARC Framework Convergence Analysis:', convergenceHistory);
}

analyzeConversation().catch(console.error);
```
The paper defines a discrete complexity function χ(x) → {1,2,3,4,5} that determines when dimensional escalation is necessary. In our implementation:
```typescript
// Calculate complexity score (chi function) from significance and semantic distance
private calculateComplexityScore(significance: number, semanticShiftMagnitude: number): number {
  // Maps [0,1] significance to [1,5] complexity range
  let complexity = 1 + significance * 4;

  // Adjust based on semantic shift magnitude
  complexity += (semanticShiftMagnitude - 0.5) * 0.5;

  // Ensure complexity is in [1,5] range
  return Math.max(1, Math.min(5, complexity));
}
```
This function maps continuous significance metrics to the discrete complexity scores defined in the paper.
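As a worked example, the standalone restatement below (ours, for illustration only) reproduces the arithmetic of that mapping: a significance of 0.75 with a shift magnitude of 0.6 yields χ = 1 + 0.75·4 + (0.6 − 0.5)·0.5 = 4.05, just below the saturation threshold of 4.5 used in the examples.

```typescript
// Standalone restatement of calculateComplexityScore, for illustration only.
const chi = (significance: number, shiftMagnitude: number): number =>
  Math.max(1, Math.min(5, 1 + significance * 4 + (shiftMagnitude - 0.5) * 0.5));

console.log(chi(0.75, 0.6)); // 4.05 -> below the 4.5 saturation threshold: stay in dimension n
console.log(chi(0.98, 0.9)); // 5.00 (clamped) -> saturated: escalate to dimension n+1
```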
The transition operator Ψ(x,n) determines whether to remain in dimension n or escalate to dimension n+1:
```typescript
// Implement transition operator Ψ from the ARC/CRA framework
const maxComplexity = Math.max(...mergedLocalTurningPoints.map(tp => tp.complexityScore));
const needsDimensionalEscalation = maxComplexity >= this.config.complexitySaturationThreshold;

if (needsDimensionalEscalation) {
  // Create meta-messages from turning points for dimension n+1
  const metaMessages = this.createMetaMessagesFromTurningPoints(mergedLocalTurningPoints, messages);
  // Recursively process in dimension n+1
  return this.multiLayerDetection(metaMessages, dimension + 1);
} else {
  // Remain in current dimension
  return this.filterSignificantTurningPoints(mergedLocalTurningPoints);
}
```
This directly implements the paper's formal definition of Ψ.
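Read as a formula, in our own notation reconstructed from the code rather than quoted verbatim from the paper, the operator behaves like:

```latex
\Psi(x, n) =
\begin{cases}
\big(\mathrm{meta}(x),\ n+1\big) & \text{if } \max_i \chi(x_i) \ge \theta_{\mathrm{sat}} \\
\big(\mathrm{filter}(x),\ n\big) & \text{otherwise}
\end{cases}
```

where meta corresponds to createMetaMessagesFromTurningPoints, filter to filterSignificantTurningPoints, and θ_sat to the complexitySaturationThreshold configuration value.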
When complexity saturates in dimension n, the system creates meta-messages that represent higher-dimensional abstractions:
```typescript
// Create meta-messages from turning points for higher-level analysis.
// This implements the dimensional expansion from n to n+1.
private createMetaMessagesFromTurningPoints(
  turningPoints: TurningPoint[],
  originalMessages: Message[]
): Message[] {
  // Group turning points by category
  const groupedByCategory: Record<string, TurningPoint[]> = {};
  turningPoints.forEach(tp => {
    const category = tp.category;
    if (!groupedByCategory[category]) {
      groupedByCategory[category] = [];
    }
    groupedByCategory[category].push(tp);
  });

  // Create meta-messages (one per category for dimension n+1)
  const metaMessages: Message[] = [];
  // Process each category...

  return metaMessages;
}
```
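The per-category step is elided above. A plausible sketch of what it might look like, with entirely hypothetical field choices, is:

```typescript
// Hypothetical sketch of the elided step: one meta-message per category, so
// that dimension n+1 can treat the categories themselves as a "conversation".
// The actual implementation in semanticTurningPointDetector.ts may differ.
for (const [category, points] of Object.entries(groupedByCategory)) {
  metaMessages.push({
    id: `meta-${category}`,
    author: 'meta',
    message: `${category}: ${points.map(tp => tp.label).join('; ')}`,
  });
}
```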
The ARC framework guarantees convergence through contraction mappings:
```typescript
// Calculate a difference measure between two states for convergence tracking
private calculateStateDifference(
  state1: TurningPoint[],
  state2: TurningPoint[]
): number {
  if (state1.length === 0 || state2.length === 0) return 1.0;

  // Calculate average significance difference
  const avgSignificance1 = state1.reduce((sum, tp) => sum + tp.significance, 0) / state1.length;
  const avgSignificance2 = state2.reduce((sum, tp) => sum + tp.significance, 0) / state2.length;

  // Normalize by max possible difference
  return Math.abs(avgSignificance1 - avgSignificance2);
}
```
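In practice this measure drives a loop that stops refining once successive states stop moving, in the spirit of a contraction mapping. A hedged sketch, where `refine` and `EPSILON` are hypothetical stand-ins for one ARC refinement pass and the contraction tolerance:

```typescript
// Sketch of the convergence loop implied by the measure above; `refine` and
// EPSILON are hypothetical, and calculateStateDifference is the method shown.
declare function refine(points: TurningPoint[]): TurningPoint[];
const EPSILON = 0.05;

function iterateToConvergence(initial: TurningPoint[]): TurningPoint[] {
  let previous: TurningPoint[] = [];
  let current = initial;
  while (calculateStateDifference(previous, current) > EPSILON) {
    previous = current;
    current = refine(current); // one ARC refinement pass in the current dimension
  }
  return current;
}
```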
One of the key innovations of our framework is its model-agnostic nature. The same implementation works effectively across different LLMs:
| Model | Processing Time | Max Dimension | Turning Points | Max Complexity |
|---|---|---|---|---|
| Qwen 2.5 (7B) | 2:58 | n=2 | 5 | 5.00 |
| Phi-4-mini (3.8B) | 2:07 | n=1 | 10 | 4.84 |
| GPT-4o | 0:48 | n=1 | 10 | 4.84 |
This consistent behavior demonstrates that ARC/CRA captures fundamental principles of recursive convergence and dimensional expansion regardless of model architecture. After running the detector, the output can be found in the results directory, alongside the provided per-model results. For the detector's output on a sample conversation with each model, see README.results.md.
Running the detector produces turning points with complexity scores and dimensional information:
```json
{
  "id": "tp-0-8-9",
  "label": "Memory, Not Wear Insight",
  "category": "Insight",
  "span": {
    "startId": "msg-8",
    "endId": "msg-9",
    "startIndex": 8,
    "endIndex": 9
  },
  "semanticShiftMagnitude": 0.881,
  "keywords": ["memory", "wear", "temporal stresses", "cognition"],
  "quotes": ["It's memory, not wear! CRG-007 is recording temporal stresses into its alloy, actively consuming itself through cognition."],
  "emotionalTone": "surprise",
  "detectionLevel": 0,
  "significance": 0.98,
  "complexityScore": 4.79
}
```
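For reference, the shape of each turning point as inferred from this sample output and the usage example above; the authoritative definitions live in src/types.ts:

```typescript
// Inferred from the sample output above; see src/types.ts for the real types.
interface TurningPoint {
  id: string;
  label: string;
  category: string;
  span: { startId: string; endId: string; startIndex: number; endIndex: number };
  semanticShiftMagnitude: number;
  keywords?: string[];
  quotes?: string[];
  emotionalTone?: string;
  best_id?: string;        // best indicator message ID (seen in the usage example)
  detectionLevel: number;  // dimension n at which the point was detected
  significance: number;
  complexityScore: number; // χ value in [1, 5]
}
```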
The convergence analysis shows how the framework transitions between dimensions:
```json
[
  {
    "dimension": 1,
    "convergenceDistance": 0.032,
    "hasConverged": true,
    "didEscalate": true,
    "turningPoints": 3
  },
  {
    "dimension": 2,
    "convergenceDistance": 0.070,
    "hasConverged": true,
    "didEscalate": true,
    "turningPoints": 2
  }
]
```
The implementation is grounded in the mathematical foundations described in our paper.
The Semantic Turning Point Detector demonstrates that the theoretical ARC/CRA framework can be successfully implemented in practice. By combining local refinements with dimensional escalation triggered by complexity saturation, we achieve a system that adaptively processes conversations at the appropriate level of abstraction.
This implementation validates the core claims of our paper.
Liu, Ziping, and Moriba Jah. (2025). "Adaptive Recursive Convergence and Semantic Turning Points: A Self-Verifying Architecture for Progressive AI Reasoning." Research Square, preprint. Version 1* published 19 May 2025.

* A second version is under review.