thai2transformers

Pretraining transformer-based Thai language models


thai2transformers provides customized scripts to pretrain transformer-based masked language models on Thai texts with the following types of tokens (a short tokenizer sketch follows the list):

  • spm: a subword-level tokenizer from the SentencePiece library.
  • newmm: a dictionary-based Thai word tokenizer based on maximal matching, from PyThaiNLP.
  • syllable: a dictionary-based Thai syllable tokenizer based on maximal matching, from PyThaiNLP. The list of syllables used is from pythainlp/corpus/syllables_th.txt.
  • sefr: an ML-based Thai word tokenizer based on Stacked Ensemble Filter and Refine (SEFR) [Limkonchotiwat et al., 2020], which filters and refines token probabilities produced by the CNN-based deepcut tokenizer; the SEFR tokenizer is loaded with engine="best".
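
For illustration, a minimal sketch of the word- and syllable-level tokenizers via PyThaiNLP's public API; the SentencePiece lines are commented out because they require a trained model file, and the path shown is a hypothetical placeholder.

# A minimal sketch, assuming PyThaiNLP 2.x is installed.
from pythainlp.tokenize import syllable_tokenize, word_tokenize

text = "ภาษาไทยไม่มีการเว้นวรรคระหว่างคำ"  # "Thai does not put spaces between words"

# newmm: dictionary-based word tokenization with maximal matching
print(word_tokenize(text, engine="newmm"))

# syllable: dictionary-based syllable tokenization
print(syllable_tokenize(text))

# spm: subword tokenization requires a trained SentencePiece model, e.g.:
# import sentencepiece as spm
# sp = spm.SentencePieceProcessor(model_file="spm.model")  # hypothetical path
# print(sp.encode(text, out_type=str))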


Thai texts for language model pretraining


We curate a list of sources that can be used to pretrain a language model. The statistics for each data source are listed in this spreadsheet.

You can also download the current version of the cleaned datasets from here.
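
As an illustration, a downloaded plain-text corpus (one document per line) can be loaded into a training-ready dataset with the Hugging Face datasets library; the file name below is a hypothetical placeholder.

# A minimal sketch, assuming the datasets library is installed and a
# cleaned corpus has been saved locally (hypothetical file name).
from datasets import load_dataset

corpus = load_dataset("text", data_files={"train": "thai_cleaned_corpus.txt"})
print(corpus["train"][0]["text"])  # inspect the first document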



Model pretraining and finetuning instructions:


a) Instructions for RoBERTa BASE model pretraining on a Thai Wikipedia dump:

In this example, we demonstrate how to pretrain a RoBERTa base model on a Thai Wikipedia dump from scratch.
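
The repository's scripts handle the project-specific details; as orientation only, the sketch below pretrains a RoBERTa-base masked language model from scratch with the standard Hugging Face transformers Trainer API. The tokenizer directory, data file, and hyperparameters are illustrative assumptions, not the repository's actual settings.

# A minimal sketch, not the repository's pretraining script.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Hypothetical tokenizer directory and training file
tokenizer = RobertaTokenizerFast.from_pretrained("./thwiki-tokenizer")
dataset = load_dataset("text", data_files={"train": "thwiki_train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# RoBERTa-base architecture; only the vocabulary size comes from the tokenizer
config = RobertaConfig(vocab_size=tokenizer.vocab_size)
model = RobertaForMaskedLM(config)

# Dynamic masking with the usual 15% mask probability
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="./roberta-thwiki", per_device_train_batch_size=16)

Trainer(model=model, args=args, train_dataset=tokenized["train"],
        data_collator=collator).train()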


b) Instructions for RoBERTa model finetuning on existing Thai text classification and NER/POS tagging datasets.

In this example, we demonstrate how to finetune WangchanBERTa, a RoBERTa base model pretrained on a Thai Wikipedia dump and assorted Thai texts (a minimal finetuning sketch follows the list below).

  • Finetune the model for sequence classification tasks on existing datasets including wisesight_sentiment, wongnai_reviews, generated_reviews_enth (review star prediction), and prachathai67k: 5a_finetune_sequence_classificaition.md

  • Finetune the model for token classification tasks (NER and POS tagging) on existing datasets including thainer and lst20: 5b_finetune_token_classificaition.md
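
As a rough sketch of what 5a covers, the snippet below finetunes a pretrained checkpoint for sequence classification with the standard transformers Trainer. The Hub model id and the wisesight_sentiment column names ("texts", "category") are assumptions taken from the public Hugging Face Hub pages, and the hyperparameters are illustrative; token classification (5b) follows the same pattern with AutoModelForTokenClassification.

# A minimal sketch, not the repository's finetuning script; verify the
# checkpoint id and dataset columns against the current Hub pages.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "airesearch/wangchanberta-base-att-spm-uncased"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
# wisesight_sentiment has 4 classes: positive, neutral, negative, question
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=4)

dataset = load_dataset("wisesight_sentiment")

def tokenize(batch):
    return tokenizer(batch["texts"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True).rename_column("category", "labels")

args = TrainingArguments(output_dir="./wangchanberta-wisesight", num_train_epochs=3)
Trainer(model=model, args=args,
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["validation"]).train()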



BibTeX entry and citation info

@misc{lowphansirikul2021wangchanberta,
      title={WangchanBERTa: Pretraining transformer-based Thai Language Models}, 
      author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
      year={2021},
      eprint={2101.09635},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
