Quality scoring non-English web data -- approaches and challenges
FineWeb's quality pipeline is well-documented for English. Has anyone here applied similar filtering to German, French, or other European languages?
A few open questions from our work on a Swiss multilingual corpus (.ch domains, DE/FR/EN/IT):
1. Sentence structure heuristics -- German compound nouns and complex clause structures break most English-tuned quality classifiers. We ended up building language-specific rulesets for scoring things like sentence length distribution and paragraph density. Has anyone found a more generalizable approach that transfers well across languages without per-language calibration?
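To make the per-language calibration concrete, here is a minimal sketch of what a language-specific sentence-length heuristic can look like. The bound values and the `SENTENCE_LENGTH_BOUNDS` table are illustrative assumptions, not the thresholds from our pipeline:

```python
import re
import statistics

# Hypothetical per-language calibration: German sentences run longer on
# average (compound nouns, nested clauses), so a single English-tuned
# threshold would penalize well-formed German text. Values are illustrative.
SENTENCE_LENGTH_BOUNDS = {
    "en": (8, 25),   # expected mean words/sentence: (low, high)
    "de": (10, 32),
    "fr": (9, 30),
    "it": (9, 30),
}

def sentence_length_score(text: str, lang: str) -> float:
    """Score in [0, 1] for how typical the mean sentence length is for `lang`."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if not sentences:
        return 0.0
    mean_len = statistics.mean(len(s.split()) for s in sentences)
    low, high = SENTENCE_LENGTH_BOUNDS.get(lang, SENTENCE_LENGTH_BOUNDS["en"])
    if low <= mean_len <= high:
        return 1.0
    # Linear falloff outside the calibrated band
    dist = (low - mean_len) if mean_len < low else (mean_len - high)
    return max(0.0, 1.0 - dist / high)
```

The generalization problem is exactly that the bounds table has to be hand-tuned per language; a transfer-friendly alternative we have not fully explored is normalizing against per-language corpus statistics instead of fixed bounds.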
2. Multi-axis vs single score -- We decomposed quality into 9 independent dimensions (capitalization consistency, sentence structure, paragraph density, repetition metrics, language confidence, domain trust, and a few others) rather than a single aggregate. The benefit is that downstream users can define "quality" for their specific use case -- a RAG system cares about different axes than pretraining. But it adds filtering complexity. Has anyone experimented with multi-dimensional quality scoring at scale?
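A rough sketch of how the multi-axis idea plays out downstream. The axis names below are a subset of the dimensions mentioned above, and the weight sets are made-up examples of use-case-specific "quality" definitions, not our production values:

```python
from dataclasses import dataclass

# Illustrative subset of the independent quality dimensions.
@dataclass
class QualityAxes:
    capitalization_consistency: float
    sentence_structure: float
    paragraph_density: float
    repetition: float          # 1.0 = no pathological repetition
    language_confidence: float
    domain_trust: float

def aggregate(axes: QualityAxes, weights: dict[str, float]) -> float:
    """Weighted mean over only the axes a downstream user cares about."""
    total = sum(weights.values())
    return sum(getattr(axes, k) * w for k, w in weights.items()) / total

# Hypothetical profiles: a RAG corpus might emphasize trust and language
# confidence; pretraining might emphasize structural axes instead.
RAG_WEIGHTS = {"domain_trust": 3.0, "language_confidence": 2.0,
               "repetition": 1.0}
PRETRAIN_WEIGHTS = {"sentence_structure": 2.0, "paragraph_density": 2.0,
                    "repetition": 2.0, "capitalization_consistency": 1.0}
```

The filtering complexity shows up here: the same document can pass one profile and fail another, so you either ship all axis scores with each record or maintain multiple filtered views of the corpus.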
3. Cross-lingual dedup -- In multilingual corpora, the same content often exists in multiple languages (especially common in European government/corporate domains). Standard hash-based dedup misses these. We combined URL pattern dedup, exact content hashing, and fuzzy matching on translated content blocks. What approaches have others used?
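The three layers we combined can be sketched roughly as below. The `/de/`-style language path segments are a convention we saw often on .ch government and corporate sites; the regex and thresholds are illustrative, and in the real pipeline the fuzzy step runs on machine-translated content blocks rather than raw cross-language text:

```python
import hashlib
import re
from difflib import SequenceMatcher

# Language path segments commonly used by multilingual .ch sites.
LANG_PATH = re.compile(r"/(de|fr|it|en)(/|$)")

def url_key(url: str) -> str:
    """URL-pattern dedup: collapse the language segment so that
    /de/kontakt and /fr/kontakt map to the same key."""
    return LANG_PATH.sub("/", url.lower())

def content_hash(text: str) -> str:
    """Exact dedup: hash of whitespace-normalized text."""
    return hashlib.sha256(" ".join(text.split()).encode()).hexdigest()

def fuzzy_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Fuzzy dedup stand-in; SequenceMatcher is O(n^2)-ish, so at scale
    you would use MinHash/LSH over shingles instead."""
    return SequenceMatcher(None, a, b).ratio() >= threshold
```

None of the three layers alone is sufficient: URL patterns miss mirrors on different domains, exact hashing misses translations, and fuzzy matching is too expensive to run pairwise over the whole corpus, so it only runs inside candidate buckets produced by the first two.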
For context, we documented our pipeline in our Swiss web corpus: https://huggingface.co/datasets/OptiTransferData/swiss-web-premium-ch
Interested in what others have tried -- especially whether anyone has built quality scoring that generalizes well across European languages.