tokenizer 0.3.0 (RubyGems), 1 maintainer

= Tokenizer

{RubyGems}[http://rubygems.org/gems/tokenizer] | {Homepage}[http://bu.chsta.be/projects/tokenizer] | {Source Code}[https://github.com/arbox/tokenizer] | {Bug Tracker}[https://github.com/arbox/tokenizer/issues]

{Gem Version}[https://rubygems.org/gems/tokenizer] {Build Status}[https://travis-ci.org/arbox/tokenizer] {Code Climate}[https://codeclimate.com/github/arbox/tokenizer] {Dependency Status}[https://gemnasium.com/arbox/tokenizer]

== DESCRIPTION

A simple multilingual tokenizer, a linguistic tool intended to split written text into tokens for NLP tasks. This tool provides a CLI and a library for linguistic tokenization, an unavoidable preprocessing step for many HLT (Human Language Technology) tasks that feed further syntactic, semantic and other higher-level processing.

The tokenization task involves Sentence Segmentation and Word Segmentation, together with Boundary Disambiguation for both of these tasks.
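As a rough illustration of the word-segmentation step, here is a hedged sketch in plain Ruby (not the gem's implementation): split on whitespace, then detach leading and trailing punctuation as separate tokens.

```ruby
# Minimal word-segmentation sketch: split on whitespace, then peel
# punctuation off both ends of each chunk. This is an illustration only,
# not the algorithm used by the +tokenizer+ gem.
def naive_tokenize(text)
  text.split(/\s+/).flat_map do |chunk|
    m = chunk.match(/\A([[:punct:]]*)(.*?)([[:punct:]]*)\z/)
    # Leading punctuation, word core, and trailing punctuation each
    # become their own token; empty parts are dropped.
    [m[1], m[2], m[3]].reject(&:empty?)
  end
end

naive_tokenize('Ich gehe in die Schule!')
# => ["Ich", "gehe", "in", "die", "Schule", "!"]
```

Real tokenizers also have to handle abbreviations, numbers and clitics, which is where boundary disambiguation comes in.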

Use it for tokenization of German, English and Dutch texts.

=== Implemented Algorithms to be ...

== INSTALLATION

+Tokenizer+ is provided as a .gem package. Simply install it via {RubyGems}[http://rubygems.org/gems/tokenizer].

To install +tokenizer+ issue the following command:

  $ gem install tokenizer

For a system-wide installation, run the command as root (possibly using +sudo+).

Alternatively, use your Gemfile for dependency management.
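For example, a Gemfile entry could look like this (the version constraint below is an assumption; pin whatever fits your project):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'tokenizer', '~> 0.3'
```

Then run +bundle install+ to resolve and install the dependency.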

== SYNOPSIS

You can use +Tokenizer+ in two ways.

  • As a command line tool:

    $ echo 'Hi, ich gehe in die Schule!' | tokenize

  • As a library for embedded tokenization:

    require 'tokenizer'
    de_tokenizer = Tokenizer::WhitespaceTokenizer.new
    de_tokenizer.tokenize('Ich gehe in die Schule!')
    => ["Ich", "gehe", "in", "die", "Schule", "!"]

  • Customizable PRE and POST lists:

    require 'tokenizer'
    de_tokenizer = Tokenizer::WhitespaceTokenizer.new(
      :de,
      { post: Tokenizer::Tokenizer::POST + ['|'] }
    )
    de_tokenizer.tokenize('Ich gehe|in die Schule!')
    => ["Ich", "gehe", "|in", "die", "Schule", "!"]

See documentation in the Tokenizer::WhitespaceTokenizer class for details on particular methods.
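To make the effect of a POST list concrete, here is a hedged, self-contained sketch in plain Ruby. The names +DEFAULT_POST+, +split_post+ and +tokenize_with_post+ are hypothetical and not part of the gem's API; the idea is simply that a token-final character found in the list is detached as its own token.

```ruby
# Illustrative sketch of a configurable POST list. All names here are
# hypothetical, not the +tokenizer+ gem's actual internals.
DEFAULT_POST = %w[! ? . , ; :].freeze

def split_post(word, post_list = DEFAULT_POST)
  tail = []
  # Repeatedly peel off trailing characters that appear in the POST list.
  while word.length > 1 && post_list.include?(word[-1])
    tail.unshift(word[-1])
    word = word[0..-2]
  end
  [word, *tail]
end

def tokenize_with_post(text, post_list = DEFAULT_POST)
  text.split(/\s+/).flat_map { |w| split_post(w, post_list) }
end

tokenize_with_post('Er sagt: Hallo!')
# => ["Er", "sagt", ":", "Hallo", "!"]
```

Extending the list (for example with +'|'+, as in the synopsis) makes additional characters eligible for splitting without touching the core logic.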

== SUPPORT

If you have questions, bug reports or any suggestions, please drop me an email :) Any help is deeply appreciated!

== CHANGELOG

For details on future plans and work in progress see CHANGELOG.rdoc.

== CAUTION

This library is a work in progress! Though the interface is mostly complete, you might encounter features that are not yet implemented.

Please contact me with your suggestions, bug reports and feature requests.

== LICENSE

+Tokenizer+ is copyrighted software by Andrei Beliankou, 2011-

You may use, redistribute and change it under the terms provided in the LICENSE.rdoc file.

Package last updated on 20 Jan 2016
