sentence-splitter
Split {Japanese, English} text into sentences.
Installation
npm install sentence-splitter
Usage
export interface SeparatorParserOptions {
    separatorCharacters?: string[]
}

export interface splitOptions {
    SeparatorParser?: SeparatorParserOptions;
}

export declare function split(text: string, options?: splitOptions): SentenceSplitterTxtNode[];

export declare function splitAST(paragraphNode: TxtParentNode, options?: splitOptions): SentenceSplitterTxtNode;
See also TxtAST.
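For instance, custom separator characters can be passed through options.SeparatorParser. The following is a minimal sketch based only on the declarations above; whether separatorCharacters replaces or extends the default separators is an assumption to verify against the library's behavior.

import { split } from "sentence-splitter";

// Sketch: configure the SeparatorParser with an explicit separator character set
const sentences = split("私は猫である。名前はまだない。", {
    SeparatorParser: {
        separatorCharacters: [".", "。"]
    }
});
console.log(sentences.map((node) => node.type));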
Example
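A minimal sketch of calling split (the sample text is arbitrary):

import { split } from "sentence-splitter";

const sentences = split("This is 1st sentence. This is 2nd sentence.");
// Inspect the resulting nodes as JSON
console.log(JSON.stringify(sentences, null, 4));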
Node
The returned nodes are based on TxtAST.
Node's type
- Str: a Str node has a value. It is the same as TxtAST's Str node.
- Sentence: a Sentence node has Str, WhiteSpace, or Punctuation nodes as children.
- WhiteSpace: a WhiteSpace node holds whitespace such as \n.
- Punctuation: a Punctuation node holds punctuation characters such as . and 。.

Get these SentenceSplitterSyntax constant values from the module:
import { SentenceSplitterSyntax } from "sentence-splitter";
console.log(SentenceSplitterSyntax.Sentence); // => "Sentence"
Node's interface
export type TxtSentenceNode = Omit<TxtParentNode, "type"> & {
    readonly type: "Sentence";
};

export type TxtWhiteSpaceNode = Omit<TxtTextNode, "type"> & {
    readonly type: "WhiteSpace";
};

export type TxtPunctuationNode = Omit<TxtTextNode, "type"> & {
    readonly type: "Punctuation";
};
For more details, please see TxtAST.
Node layout
For example, the text "This is 1st sentence. This is 2nd sentence." is parsed into the following node layout:
<Sentence>
    <Str />          |This is 1st sentence|
    <Punctuation />  |.|
</Sentence>
<WhiteSpace />       | |
<Sentence>
    <Str />          |This is 2nd sentence|
    <Punctuation />  |.|
</Sentence>
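As a sketch of this layout, splitting that text returns the three top-level nodes shown above:

import { split } from "sentence-splitter";

const nodes = split("This is 1st sentence. This is 2nd sentence.");
// Top-level node types follow the layout above
console.log(nodes.map((node) => node.type)); // => ["Sentence", "WhiteSpace", "Sentence"]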
Note: This library does not split a Str node into separate Str and WhiteSpace nodes (tokenization), because tokenization needs language-specific context.
For textlint rule
You can use splitAST in a textlint rule. Unlike the split function, splitAST preserves the positions of the original AST.
import { splitAST, SentenceSplitterSyntax } from "sentence-splitter";

export default function(context, options = {}) {
    const { Syntax, RuleError, report, getSource } = context;
    return {
        [Syntax.Paragraph](node) {
            // Split the Paragraph node into Sentence/WhiteSpace/Punctuation nodes
            const parsedNode = splitAST(node);
            // Keep only the Sentence nodes
            const sentenceNodes = parsedNode.children.filter(childNode => childNode.type === SentenceSplitterSyntax.Sentence);
            console.log(sentenceNodes);
        }
    };
}
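Because splitAST keeps the original positions, a rule can report on the returned sentence nodes directly. The following is a sketch rather than part of the library's documentation; the 100-character limit and the error message are arbitrary choices for illustration.

import { splitAST, SentenceSplitterSyntax } from "sentence-splitter";

export default function(context) {
    const { RuleError, report, getSource, Syntax } = context;
    return {
        [Syntax.Paragraph](node) {
            const parsedNode = splitAST(node);
            parsedNode.children
                .filter(childNode => childNode.type === SentenceSplitterSyntax.Sentence)
                .forEach(sentenceNode => {
                    // getSource works here because splitAST preserves the original ranges
                    const sentenceText = getSource(sentenceNode);
                    if (sentenceText.length > 100) {
                        report(sentenceNode, new RuleError("Sentence is longer than 100 characters."));
                    }
                });
        }
    };
}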
Examples
Reference
This library uses the "Golden Rule" tests of pragmatic_segmenter for testing.
Tests
Run tests:
npm test

Create input.json from _input.md:
npm run createInputJson

Update snapshots (output.json):
npm run updateSnapshot
Adding a snapshot testcase
- Create a test/fixtures/<test-case-name>/ directory
- Put test/fixtures/<test-case-name>/_input.md with the testing content
- Run npm run updateSnapshot
- Check test/fixtures/<test-case-name>/output.json
- If it is OK, commit it
Contributing
- Fork it!
- Create your feature branch: git checkout -b my-new-feature
- Commit your changes: git commit -am 'Add some feature'
- Push to the branch: git push origin my-new-feature
- Submit a pull request :D
License
MIT