Synthetic Parallel Corpora

Description

The dataset consists of synthetic parallel corpora for the following language pairs: Lithuanian-English, Lithuanian-French, and Lithuanian-German. The corpora are intended for training and improving neural machine translation systems as well as for other natural language processing tasks. The dataset contains more than 3 million parallel sentence pairs, with over 1 million sentence pairs for each language pair.

The data were generated using a context-based template method, which enables the systematic inclusion of named entities (e.g., personal names, locations, organizations) and ensures their correct usage across various grammatical forms. In addition, the Lithuanian-English corpora include medical terminology-based synthetic data.

The dataset also includes the resources used for synthetic data generation: more than 20,000 named entities per language pair (across 11 named entity categories), as well as more than 50 context templates for each category. The corpora feature a controlled structure and linguistic diversity, as the sentences are generated from templates derived from real language usage examples.

The data are provided in a parallel format, ensuring direct sentence alignment across languages, and are suitable for immediate use in model training. The dataset is available in TXT and TMX formats, making it compatible with both machine learning environments and translation memory-based software. This resource is particularly useful for tasks involving the translation of named entities and domain-specific terminology, as well as for model evaluation and error analysis.
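To illustrate the general idea of context-based template generation, here is a minimal sketch: parallel templates share a named-entity slot, and filling the slot with aligned entity forms yields directly aligned sentence pairs. The templates and entity lists below are hypothetical examples for illustration only, not the actual resources shipped with this dataset.

```python
# Sketch of context-based template generation for parallel data.
# Templates and named entities here are illustrative, not from the dataset.

# Parallel Lithuanian-English templates with a shared slot for a named entity.
templates = [
    ("{person} gyvena Vilniuje.", "{person} lives in Vilnius."),
    ("{org} paskelbė naują ataskaitą.", "{org} published a new report."),
]

# Named entities grouped by category; each entry pairs the Lithuanian
# surface form with its English counterpart.
entities = {
    "person": [("Jonas Petraitis", "Jonas Petraitis")],
    "org": [("Vilniaus universitetas", "Vilnius University")],
}

def generate_pairs(templates, entities):
    """Fill each template slot with every matching entity, yielding
    aligned (source, target) sentence pairs."""
    pairs = []
    for lt_tpl, en_tpl in templates:
        for category, items in entities.items():
            slot = "{" + category + "}"
            if slot not in lt_tpl:
                continue
            for lt_ent, en_ent in items:
                pairs.append((lt_tpl.replace(slot, lt_ent),
                              en_tpl.replace(slot, en_ent)))
    return pairs

for lt, en in generate_pairs(templates, entities):
    print(lt, "|||", en)
```

A production pipeline would additionally inflect each entity into the grammatical case the template context requires (as the dataset's method does), which a simple string substitution like this does not capture.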

Publisher

State Digital Solutions Agency

Date issued

2026-04-13

Type

Corpus, text

Size

3,328,707 sentences

Licence

CC0 1.0

Acknowledgement

NextGenerationEU / Naujos kartos Lietuva
Project code: 02-107-P-0001
Project name: Development of synthetic parallel corpora / Sintetinių lygiagrečių tekstynų sukūrimas

