---
language:
- en
- pl
tags:
- translation
license: cc-by-4.0
datasets:
- quickmt/quickmt-train.pl-en
model-index:
- name: quickmt-pl-en
  results:
  - task:
      name: Translation pol-eng
      type: translation
      args: pol-eng
    dataset:
      name: flores101-devtest
      type: flores_101
      args: pol_Latn eng_Latn devtest
    metrics:
    - name: BLEU
      type: bleu
      value: 27.46
    - name: CHRF
      type: chrf
      value: 57.18
    - name: COMET
      type: comet
      value: 85.04
---

# `quickmt-pl-en` Neural Machine Translation Model

`quickmt-pl-en` is a reasonably fast and reasonably accurate neural machine translation model for translation from `pl` into `en`.

## Try it on our Huggingface Space

Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-gui

## Model Information

* Trained using [`eole`](https://github.com/eole-nlp/eole)
* 195M-parameter 'big' transformer with 8 encoder layers and 2 decoder layers
* Separate source and target SentencePiece vocabularies of 20k tokens each
* Exported for fast inference to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) format
* Training data: https://huggingface.co/datasets/quickmt/quickmt-train.pl-en/tree/main

See the `eole` model configuration in this repository for further details, and the `eole-model` directory for the raw `eole` (PyTorch) model.

## Usage with `quickmt`

If you want to do GPU inference, you must first install the NVIDIA CUDA toolkit.

Next, install the `quickmt` [python library](https://github.com/quickmt/quickmt):

```bash
git clone https://github.com/quickmt/quickmt.git
pip install ./quickmt/
```

Finally, use the model in python:

```python
from quickmt import Translator
from huggingface_hub import snapshot_download

# Download model (if not downloaded already) and return the path to the local model
# Device is either 'auto', 'cpu' or 'cuda'
t = Translator(
    snapshot_download("quickmt/quickmt-pl-en", ignore_patterns="eole-model/*"),
    device="cpu",
)

# Translate - set beam size to 1 for faster speed (but lower quality)
sample_text = 'Dr Ehud Ur, będący profesorem medycyny na Uniwersytecie Dalhousie w Halifaxie w Nowej Szkocji oraz przewodniczącym oddziału klinicznego i naukowego Kanadyjskiego Stowarzyszenia Cukrzycy, przestrzegł, iż badania nadal dopiero się zaczynają.'

t(sample_text, beam_size=5)
```

> 'Dr. Ehud Ur, a professor of medicine at Dalhousie University in Halifax, Nova Scotia and chairman of the clinical and scientific division of the Canadian Diabetes Association, warned that research is still just beginning.'

```python
# Get alternative translations by sampling
# You can pass any CTranslate2 `translate_batch` arguments
t([sample_text], sampling_temperature=1.2, beam_size=1, sampling_topk=50, sampling_topp=0.9)
```

> 'Professor of Medicine at Dalhous University Halifax in Nova Scotia, MD and Chair of the Canadian Diabetes Association’s Clinical and Scientific Division, cautioned that research is just beginning.'

The model is in `ctranslate2` format and the tokenizers are `sentencepiece`, so you can use `ctranslate2` directly instead of going through `quickmt`. It is also possible to use this model with, e.g., [LibreTranslate](https://libretranslate.com/), which also uses `ctranslate2` and `sentencepiece`. A model in safetensors format for use with `eole` is also provided.

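For example, here is a minimal sketch of driving the exported model with `ctranslate2` and `sentencepiece` directly, without `quickmt`. The tokenizer filenames below (`src.spm.model` and `tgt.spm.model`) are assumptions; check the repository file listing for the actual names.

```python
import ctranslate2
import sentencepiece as spm
from huggingface_hub import snapshot_download

model_path = snapshot_download("quickmt/quickmt-pl-en", ignore_patterns="eole-model/*")

# Assumed tokenizer filenames; verify against the repository contents
src_sp = spm.SentencePieceProcessor(model_file=f"{model_path}/src.spm.model")
tgt_sp = spm.SentencePieceProcessor(model_file=f"{model_path}/tgt.spm.model")

translator = ctranslate2.Translator(model_path, device="cpu")

# Tokenize the source, translate, then detokenize the best hypothesis
tokens = src_sp.encode("Badania nadal dopiero się zaczynają.", out_type=str)
results = translator.translate_batch([tokens], beam_size=5)
print(tgt_sp.decode(results[0].hypotheses[0]))
```
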
## Metrics

`bleu` and `chrf2` are calculated with [sacrebleu](https://github.com/mjpost/sacrebleu) on the [Flores200 `devtest` test set](https://huggingface.co/datasets/facebook/flores) ("pol_Latn"->"eng_Latn"). `comet22` is calculated with the [`comet`](https://github.com/Unbabel/COMET) library and the [default model](https://huggingface.co/Unbabel/wmt22-comet-da). "Time (s)" is the time in seconds to translate the flores-devtest dataset (1012 sentences) on an RTX 4070s GPU with batch size 32 (faster speeds are possible with larger batch sizes). A sketch of how these scores can be reproduced follows the table below.

| Model                            |   bleu |   chrf2 |   comet22 |   Time (s) |
|:---------------------------------|-------:|--------:|----------:|-----------:|
| quickmt/quickmt-pl-en            |  27.46 |   57.18 |     85.04 |       1.46 |
| Helsinki-NLP/opus-mt-pl-en       |  25.55 |   55.39 |     83.80 |       4.01 |
| facebook/nllb-200-distilled-600M |  29.28 |   57.11 |     84.65 |      21.61 |
| facebook/nllb-200-distilled-1.3B |  30.99 |   58.77 |     86.04 |      37.64 |
| facebook/m2m100_418M             |  22.12 |   52.51 |     80.41 |      17.99 |
| facebook/m2m100_1.2B             |  27.13 |   56.36 |     84.48 |      35.01 |
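
As a rough guide, the scores above can be reproduced along the following lines. This is a minimal sketch: the file names are hypothetical placeholders, with the hypotheses being the model's translations of the Flores200 `devtest` source, one sentence per line.

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

# Hypothetical file names: model output plus Flores200 devtest source and
# reference sentences, one per line
with open("hypotheses.en") as f:
    hyps = [line.strip() for line in f]
with open("flores200.devtest.pol_Latn") as f:
    srcs = [line.strip() for line in f]
with open("flores200.devtest.eng_Latn") as f:
    refs = [line.strip() for line in f]

# sacrebleu's corpus_chrf defaults to chrF2 (character n-grams, beta=2)
print(sacrebleu.corpus_bleu(hyps, [refs]).score)
print(sacrebleu.corpus_chrf(hyps, [refs]).score)

# comet22: the default wmt22-comet-da model; set gpus=1 if a GPU is available
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(srcs, hyps, refs)]
print(comet_model.predict(data, batch_size=32, gpus=0).system_score)
```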