Comparing antlr4-c3 version 1.1.1 to 1.1.2
@@ -6,3 +6,3 @@ import { Parser, ParserRuleContext } from 'antlr4ts';
     tokens: Map<number, TokenList>;
-    rules: Set<number>;
+    rules: Map<number, RuleList>;
 }
@@ -9,0 +9,0 @@ export declare class CodeCompletionCore {
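The declaration change above is the core of this release: `rules` is no longer a plain `Set<number>` but a map from rule index to a list of rule indices. A minimal sketch of the resulting shape, assuming `TokenList` and `RuleList` are simple `number[]` aliases (the aliases and the comments here are inferred from the diff, not copied from the package):

```typescript
// Sketch of the candidate collection shape implied by the 1.1.2 declaration.
type TokenList = number[];
type RuleList = number[];

class CandidatesCollection {
    // Candidate lexer token -> tokens that directly follow it in the grammar.
    tokens: Map<number, TokenList> = new Map();
    // Candidate parser rule -> call stack of rule indices where it was collected.
    rules: Map<number, RuleList> = new Map();
}

// Membership checks keep working (Map.has), but iterating `rules` now yields
// [ruleIndex, path] pairs instead of bare rule indices.
```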
@@ -9,3 +9,3 @@ 'use strict';
         this.tokens = new Map();
-        this.rules = new Set();
+        this.rules = new Map();
     }
@@ -85,5 +85,9 @@ }
            console.log("\n\nCollected rules:\n");
-            this.candidates.rules.forEach(rule => {
-                console.log(this.ruleNames[rule]);
-            });
+            for (let rule of this.candidates.rules) {
+                let path = "";
+                for (let token of rule[1]) {
+                    path += this.ruleNames[token] + " ";
+                }
+                console.log(this.ruleNames[rule[0]] + ", path: ", path);
+            }
             let sortedTokens = new Set();
@@ -112,5 +116,17 @@ for (let token of this.candidates.tokens) {
             if (this.preferredRules.has(ruleStack[i])) {
-                this.candidates.rules.add(ruleStack[i]);
-                if (this.showDebugOutput)
-                    console.log("=====> collected: ", this.ruleNames[i]);
+                let path = ruleStack.slice(0, i);
+                let addNew = true;
+                for (let rule of this.candidates.rules) {
+                    if (rule[0] != ruleStack[i] || rule[1].length != path.length)
+                        continue;
+                    if (path.every((v, j) => v === rule[1][j])) {
+                        addNew = false;
+                        break;
+                    }
+                }
+                if (addNew) {
+                    this.candidates.rules.set(ruleStack[i], path);
+                    if (this.showDebugOutput)
+                        console.log("=====> collected: ", this.ruleNames[i]);
+                }
                 return true;
@@ -117,0 +133,0 @@ }
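For clarity, here is roughly the same deduplication step expressed as typed TypeScript instead of compiled JavaScript: because `rules` is a Map keyed by rule index, the linear scan above can collapse to a single lookup. The standalone helper and its name are only illustrative and are not part of the library.

```typescript
type RuleList = number[];

// Record a candidate rule together with the call-stack path it was found at,
// unless the exact same (rule, path) pair has already been collected.
function addRuleCandidate(
    rules: Map<number, RuleList>,
    ruleIndex: number,
    path: RuleList
): boolean {
    const existing = rules.get(ruleIndex);
    if (existing !== undefined && existing.length === path.length) {
        const samePath = path.every((value, i) => value === existing[i]);
        if (samePath) {
            return false; // Same rule via the same path: nothing new to add.
        }
    }
    rules.set(ruleIndex, path); // New rule, or same rule via a different path.
    return true;
}
```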
 {
   "name": "antlr4-c3",
-  "version": "1.1.1",
+  "version": "1.1.2",
   "description": "A code completion core implmentation for ANTLR4 based parsers",
@@ -5,0 +5,0 @@ "main": "out/index.js",
@@ -113,7 +113,7 @@ [![NPM](https://nodei.co/npm/antlr4-c3.png?downloads=true&downloadRank=true)](https://nodei.co/npm/antlr4-c3/)
     public tokens: Map<number, TokenList>;
-    public rules: RuleList;
+    public rules: Map<number, RuleList>;
 };
 ```
-For the lexer tokens there can be a list of extra tokens which directly follow the given token in the grammar (if any). That's quite a neat additional feature which allows you to show token sequences to the user if they are always used together. For example consider this SQL rule:
+where the map keys are the lexer tokens and the rule indices, respectively. Both can come with additional numbers, which you may or may not use for your implementation. For parser rules the list represents the call stack at which the given rule was found during evaluation. This allows to determine a context for rules that are used in different places. For the lexer tokens the list consists of further token ids which directly follow the given token in the grammar (if any). This allows you to show **token sequences** if they are always used together. For example consider this SQL rule:
@@ -120,0 +120,0 @@ ```typescript
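Since the README paragraph above describes the new result shape only in prose (and inside a diff), here is a hedged consumer-side sketch of reading both maps. The parser instance and the caret token index are placeholders you would supply from your own antlr4ts-generated parser; they are not part of the package.

```typescript
import { Parser } from "antlr4ts";
import { CodeCompletionCore } from "antlr4-c3";

// Placeholders: provide your own generated parser and caret position.
declare const parser: Parser;
declare const caretTokenIndex: number;

const core = new CodeCompletionCore(parser);
const candidates = core.collectCandidates(caretTokenIndex);

// Lexer tokens: the value lists tokens that always follow the candidate token.
for (const [tokenType, following] of candidates.tokens) {
    const names = [tokenType, ...following]
        .map(t => parser.vocabulary.getDisplayName(t));
    console.log("token sequence:", names.join(" "));
}

// Parser rules: the value is the rule call stack at which the rule was found,
// which lets you tell apart the same rule used in different contexts.
for (const [ruleIndex, callStack] of candidates.rules) {
    const path = callStack.map(r => parser.ruleNames[r]).join(" ");
    console.log(parser.ruleNames[ruleIndex], "path:", path);
}
```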