Project author: cpcdoy

Project description: Rust port of sentence-transformers (https://github.com/UKPLab/sentence-transformers)
Language: Rust
Repository: git://github.com/cpcdoy/rust-sbert.git
Created: 2020-04-19T14:27:46Z
Project community: https://github.com/cpcdoy/rust-sbert

License: Apache License 2.0

Rust SBert

Rust port of sentence-transformers using rust-bert and tch-rs.

Supports both rust-tokenizers and Hugging Face’s tokenizers.

Supported models

  • distiluse-base-multilingual-cased: Supported languages: Arabic, Chinese, Dutch, English, French, German, Italian, Korean, Polish, Portuguese, Russian, Spanish, Turkish. Performance on the extended STS2017: 80.1

  • DistilRoBERTa-based classifiers

Usage

Example

The API is designed to be easy to use and lets you create high-quality multilingual sentence embeddings in a straightforward way.

Load the SBert model and its weights by specifying the model's directory:

  use std::env;
  use std::path::PathBuf;

  let mut home: PathBuf = env::current_dir().unwrap();
  home.push("path-to-model");

You can use different versions of the model, each backed by a different tokenizer:

  // To use the Hugging Face tokenizer
  let sbert_model = SBertHF::new(home.to_str().unwrap(), None);

  // To use rust-tokenizers
  let sbert_model = SBertRT::new(home.to_str().unwrap(), None);

Now, you can encode your sentences:

  let texts = ["You can encode",
               "As many sentences",
               "As you want",
               "Enjoy ;)"];
  let batch_size = 64;
  let output = sbert_model.forward(texts.to_vec(), batch_size).unwrap();

The batch_size parameter can be set to None to let the model use its default value.
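Since the same argument position accepts either a number or None, the crate presumably takes something like impl Into&lt;Option&lt;usize&gt;&gt;, the usual tch-rs idiom. A minimal self-contained sketch of that pattern (the forward_stub function and its fallback of 64 are hypothetical, not the crate's actual code):

```rust
// Sketch of an API parameter that accepts both a plain value and None,
// assuming a signature like `impl Into<Option<usize>>`.
// `forward_stub` and its default of 64 are illustrative only.
fn forward_stub(batch_size: impl Into<Option<usize>>) -> usize {
    // Fall back to a hypothetical default when None is given.
    batch_size.into().unwrap_or(64)
}

fn main() {
    assert_eq!(forward_stub(32), 32); // explicit batch size is used as-is
    assert_eq!(forward_stub(None), 64); // None falls back to the default
}
```

This is why both `let batch_size = 64;` and passing None compile against the same method.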

Then you can use the output sentence embeddings in any application you want.
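A common downstream use is semantic similarity, computed as the cosine between two embeddings. A minimal self-contained sketch (the toy 3-dimensional vectors below stand in for real SBert outputs, which would come from forward above):

```rust
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    // Toy vectors standing in for real sentence embeddings.
    let a = [1.0_f32, 0.0, 1.0];
    let b = [1.0_f32, 0.0, 1.0];
    let c = [0.0_f32, 1.0, 0.0];
    // Identical vectors have similarity 1, orthogonal vectors 0.
    assert!((cosine_similarity(&a, &b) - 1.0).abs() < 1e-6);
    assert!(cosine_similarity(&a, &c).abs() < 1e-6);
}
```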

Convert models from Python to Rust

First, get a model provided by UKPLab (all models are listed here):

  mkdir -p models/distiluse-base-multilingual-cased
  wget -P models https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/v0.2/distiluse-base-multilingual-cased.zip
  unzip models/distiluse-base-multilingual-cased.zip -d models/distiluse-base-multilingual-cased

Then, you need to convert the model into a suitable format (requires PyTorch):

  python utils/prepare_distilbert.py models/distiluse-base-multilingual-cased

A dockerized environment is also available for running the conversion script:

  docker build -t tch-converter -f utils/Dockerfile .
  docker run \
    -v $(pwd)/models/distiluse-base-multilingual-cased:/model \
    tch-converter:latest \
    python prepare_distilbert.py /model

Finally, set "output_attentions": true in distiluse-base-multilingual-cased/0_distilbert/config.json.
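This last step can be done by hand in a text editor. As a sketch, the flag can also be flipped programmatically; the snippet below does a plain-text replacement using only the Rust standard library, and assumes the key is already present in the file as false (a JSON-aware edit, e.g. with serde_json or jq, would be more robust):

```rust
use std::fs;

// Minimal sketch: flip `output_attentions` in a DistilBERT config file.
// Assumes the file already contains the key with the value `false`.
fn enable_output_attentions(path: &str) -> std::io::Result<()> {
    let config = fs::read_to_string(path)?;
    let patched = config.replace("\"output_attentions\": false",
                                 "\"output_attentions\": true");
    fs::write(path, patched)
}

fn main() -> std::io::Result<()> {
    // Demonstrate on a temporary copy rather than a real model directory.
    let path = std::env::temp_dir().join("distilbert_config.json");
    fs::write(&path, "{\n  \"output_attentions\": false\n}\n")?;
    enable_output_attentions(path.to_str().unwrap())?;
    assert!(fs::read_to_string(&path)?.contains("\"output_attentions\": true"));
    Ok(())
}
```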