Normalize tokenization strategy label to wordseg
README.md CHANGED

```diff
@@ -204,7 +204,7 @@ A lightweight, evaluation-ready subset of [CodeSearchNet](https://huggingface.co
 - Auto selection chooses the best strategy per split using retrieval metrics.
 - Strategy types:
 - `transformer`: `Qwen/Qwen3-0.6B` tokenizer
-- `
+- `wordseg`: language-specific word segmentation (`ja`, `zh`, `th`, `ko`)
 - `stemmer`: `PyStemmer`
 - `whitespace`: `str.split()`
```
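The strategy labels in this hunk could be dispatched roughly as follows. This is a minimal sketch, not the dataset's actual code: the function name and dispatch structure are assumptions, and only the `whitespace` path is implemented; the other branches note which real library each label refers to.

```python
def tokenize(text: str, strategy: str = "whitespace") -> list[str]:
    """Hypothetical dispatcher over the README's strategy labels."""
    if strategy == "whitespace":
        # The `whitespace` strategy is plain str.split().
        return text.split()
    if strategy == "stemmer":
        # Would stem tokens with PyStemmer, e.g.
        # Stemmer.Stemmer("english").stemWords(text.split())
        raise NotImplementedError("requires PyStemmer")
    if strategy == "transformer":
        # Would tokenize with the Qwen/Qwen3-0.6B tokenizer
        # via Hugging Face transformers.
        raise NotImplementedError("requires transformers")
    if strategy == "wordseg":
        # Would call a language-specific word segmenter (ja/zh/th/ko).
        raise NotImplementedError("requires a word-segmentation library")
    raise ValueError(f"unknown strategy: {strategy}")

print(tokenize("def add(a, b): return a + b"))
```

An auto-selection layer, as the README describes it, would run each available strategy on a split and keep whichever scores best on retrieval metrics.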