Module tokenizers::tokenizer


Represents a tokenization pipeline.

A Tokenizer is composed of some of the following parts.

  • Normalizer: Takes care of the text normalization (like unicode normalization).
  • PreTokenizer: Takes care of the pre-tokenization (i.e. how to split tokens and pre-process them).
  • Model: A model encapsulates the tokenization algorithm (like BPE, word-based, character-based, …).
  • PostProcessor: Takes care of the processing after tokenization (like truncating, padding, …).
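
Putting these parts together, below is a minimal sketch of assembling such a pipeline around a BPE model, adapted from the crate's quick-start example. The vocab/merges paths are placeholders, NFC and ByteLevel are just example choices, and the exact with_* signatures vary slightly across crate versions (newer releases wrap the argument in an Option):

    use tokenizers::models::bpe::BPE;
    use tokenizers::normalizers::unicode::NFC;
    use tokenizers::pre_tokenizers::byte_level::ByteLevel;
    use tokenizers::tokenizer::{Result, Tokenizer};

    fn main() -> Result<()> {
        // Model: the core tokenization algorithm (BPE here).
        let bpe = BPE::from_file("./vocab.json", "./merges.txt").build()?;

        // Wrap the model in a Tokenizer, then attach the optional parts.
        let mut tokenizer = Tokenizer::new(bpe);
        tokenizer.with_normalizer(NFC); // Normalizer: unicode NFC normalization
        tokenizer.with_pre_tokenizer(ByteLevel::default()); // PreTokenizer

        // Encode a sentence; `false` means no special tokens are added.
        let encoding = tokenizer.encode("Hey there!", false)?;
        println!("{:?}", encoding.get_tokens());
        Ok(())
    }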

Re-exports

Modules

Structs

Enums

Traits

  • Decoder: A Decoder changes the raw tokens into a more readable form.
  • Model: Represents a model used during tokenization (like BPE, Word, or Unigram).
  • Normalizer: Takes care of pre-processing strings (a sketch of a custom implementation follows this list).
  • PostProcessor: A PostProcessor has the responsibility to post-process an encoded output of the Tokenizer. It adds any special tokens that a language model would require.
  • PreTokenizer: The PreTokenizer is in charge of doing the pre-segmentation step. It splits the given string into multiple substrings, keeping track of the offsets of said substrings from the NormalizedString. On some occasions, the PreTokenizer might need to modify the given NormalizedString to ensure we can entirely keep track of the offsets and the mapping with the original string.
  • Trainer: A Trainer has the responsibility to train a model. We feed it with lines/sentences and then it can train the given Model.
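
For concreteness, here is a minimal sketch of implementing one of these traits: a custom Normalizer that lowercases its input. It assumes the trait's single normalize(&self, &mut NormalizedString) -> Result<()> method and the NormalizedString::lowercase helper found in recent versions of the crate; MyLowercase is a hypothetical name:

    use tokenizers::tokenizer::{NormalizedString, Normalizer, Result};

    // Hypothetical normalizer: lowercases the input. NormalizedString keeps
    // tracking the offset mapping back to the original string for us.
    pub struct MyLowercase;

    impl Normalizer for MyLowercase {
        fn normalize(&self, normalized: &mut NormalizedString) -> Result<()> {
            normalized.lowercase();
            Ok(())
        }
    }

A type like this slots into the pipeline through the same with_normalizer hook shown in the earlier example.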

Type Aliases