#!/usr/bin/env python3
"""Build the Linear B (Mycenaean Greek) dataset from downloaded raw data.

Parses and combines data from:
  1. Unicode UCD — Sign inventory (88 syllabograms + 123 ideograms)
  2. jhnwnstd/shannon — Linear B Lexicon (2,747 entries)
  3. Wiktionary — Mycenaean Greek lemmas (~435 entries with IPA)
  4. IE-CoR — Existing 43 Mycenaean Greek (gmy) words with expert IPA

Output files:
  data/linear_b/linear_b_signs.tsv — Full sign inventory
  data/linear_b/sign_to_ipa.json — Sign transliteration → IPA mapping
  data/linear_b/linear_b_words.tsv — Word list (Word, IPA, SCA, Source, Concept_ID, Cognate_Set_ID, Gloss, Word_Type, IPA_Source)
  data/linear_b/README.md — Documentation

Transliteration → IPA mapping:
  Reference: Ventris & Chadwick (1973) "Documents in Mycenaean Greek", 2nd ed.
  The Linear B syllabary encodes CV syllables. The conventional transliteration
  uses Latin characters that are near-IPA with these systematic differences:
    q = /kʷ/ (labiovelar stop)
    z = /ts/ or /dz/ (affricate, exact value debated)
    j = /j/ (palatal glide)
    w = /w/ (labial glide)
    pu2 = /pʰu/ (aspirated variant; aspiration and palatalization surface
      only in numbered variant signs such as pu2, a2 = /ha/, ra2 = /rja/)

Usage:
    python scripts/build_linear_b_dataset.py
"""

from __future__ import annotations

import csv
import io
import json
import re
import sys
import unicodedata
from collections import OrderedDict
from pathlib import Path

# Re-wrap stdio as UTF-8 so Linear B glyphs and IPA survive console encodings
# (e.g. cp1252 on Windows) that cannot represent them.
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8")

ROOT = Path(__file__).resolve().parent.parent
RAW_DIR = ROOT / "data" / "training" / "raw" / "linear_b"
OUT_DIR = ROOT / "data" / "linear_b"

# ── Linear B Unicode ranges ──
LINB_SYLLABARY_START = 0x10000
LINB_SYLLABARY_END = 0x1007F
LINB_IDEOGRAM_START = 0x10080
LINB_IDEOGRAM_END = 0x100FF
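
# Of the code points in these two blocks, 88 syllabograms and 123 ideograms
# are assigned (the counts cited in the module docstring).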

# ── Transliteration → IPA mapping ──
# Reference: Ventris & Chadwick (1973), "Documents in Mycenaean Greek", 2nd ed.
# Palmer (1963), "The Interpretation of Mycenaean Greek Texts"
# Hooker (1980), "Linear B: An Introduction"
#
# The conventional transliteration values are based on the Ventris decipherment
# (1952) and CIPEM standard. Most consonants map directly; the key differences are:
#   - q-series represents labiovelars /kʷ/, not /k/
#   - z-series represents affricates, transcribed as /ts/ (Hooker 1980: p.68)
#   - j represents /j/ (palatal approximant)
#   - w represents /w/ (labio-velar approximant)
#
# The numbered and complex sign variants (a2, a3, pu2, nwa, etc.) represent:
#   - a2 = /ha/ (initial aspiration)
#   - a3 = /ai/ (diphthong)
#   - pu2 = /pʰu/ (aspirated)
#   - ra2 = /rja/ (palatalized)
#   - ro2 = /rjo/
#   - ta2 = /tja/
#   - nwa = /nwa/
#
# For undeciphered signs (*18, *19, etc.), IPA is left as "-".

TRANSLIT_TO_IPA = {
    # Pure vowels
    "a": "a", "e": "e", "i": "i", "o": "o", "u": "u",
    # d-series
    "da": "da", "de": "de", "di": "di", "do": "do", "du": "du",
    # j-series (palatal glide)
    "ja": "ja", "je": "je", "jo": "jo", "ju": "ju",
    # k-series
    "ka": "ka", "ke": "ke", "ki": "ki", "ko": "ko", "ku": "ku",
    # m-series
    "ma": "ma", "me": "me", "mi": "mi", "mo": "mo", "mu": "mu",
    # n-series
    "na": "na", "ne": "ne", "ni": "ni", "no": "no", "nu": "nu",
    # p-series
    "pa": "pa", "pe": "pe", "pi": "pi", "po": "po", "pu": "pu",
    # q-series (labiovelars)
    "qa": "kʷa", "qe": "kʷe", "qi": "kʷi", "qo": "kʷo",
    # r-series (covers both /r/ and /l/ — Linear B does not distinguish)
    "ra": "ra", "re": "re", "ri": "ri", "ro": "ro", "ru": "ru",
    # s-series
    "sa": "sa", "se": "se", "si": "si", "so": "so", "su": "su",
    # t-series
    "ta": "ta", "te": "te", "ti": "ti", "to": "to", "tu": "tu",
    # w-series
    "wa": "wa", "we": "we", "wi": "wi", "wo": "wo",
    # z-series (affricates: Hooker 1980, p.68)
    "za": "tsa", "ze": "tse", "zi": "tsi", "zo": "tso", "zu": "tsu",
    # Special/variant signs
    "a2": "ha", "a3": "ai",
    "nwa": "nwa",
    "pu2": "pʰu",
    "ra2": "rja", "ra3": "rai",
    "ro2": "rjo",
    "ta2": "tja",
    "two": "two",
    "dwe": "dwe",
    "dwo": "dwo",
    "twe": "twe",
    # Undeciphered signs — no IPA
}
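
# A few illustrative lookups, taken straight from the table above:
#   TRANSLIT_TO_IPA["qa"]  -> "kʷa"  (labiovelar series)
#   TRANSLIT_TO_IPA["za"]  -> "tsa"  (affricate series)
#   TRANSLIT_TO_IPA["ra2"] -> "rja"  (palatalized variant)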


def parse_unicode_signs(ucd_path: Path) -> list[dict]:
    """Parse Linear B signs from UnicodeData.txt.

    Each line has format: codepoint;name;category;...
    We extract signs in U+10000-U+100FF range.
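
    A representative UnicodeData.txt line (only the first two fields matter here):
        10000;LINEAR B SYLLABLE B008 A;Lo;0;L;;;;;N;;;;;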
    """
    signs = []
    with open(ucd_path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split(";")
            if len(parts) < 2:
                continue
            cp_hex = parts[0]
            name = parts[1]
            cp = int(cp_hex, 16)

            if LINB_SYLLABARY_START <= cp <= LINB_SYLLABARY_END:
                sign_type = "syllabogram"
            elif LINB_IDEOGRAM_START <= cp <= LINB_IDEOGRAM_END:
                sign_type = "ideogram"
            else:
                continue

            # Parse Bennett number and phonetic value from name
            # Format: "LINEAR B SYLLABLE B008 A" or "LINEAR B IDEOGRAM B100 MAN"
            bennett = ""
            phonetic = ""
            m = re.match(r"LINEAR B (?:SYLLABLE|SYMBOL) (B\d+)\s*(.*)", name)
            if m:
                bennett = m.group(1)
                phonetic = m.group(2).strip().lower() if m.group(2) else ""
            else:
                m = re.match(r"LINEAR B IDEOGRAM (B\d+\w*)\s*(.*)", name)
                if m:
                    bennett = m.group(1)
                    phonetic = m.group(2).strip() if m.group(2) else ""

            # Get IPA from transliteration
            ipa = TRANSLIT_TO_IPA.get(phonetic, "-") if phonetic else "-"

            signs.append({
                "Codepoint": f"U+{cp_hex}",
                "Unicode_Char": chr(cp),
                "Bennett_Number": bennett,
                "Name": name,
                "Type": sign_type,
                "Transliteration": phonetic if phonetic else "-",
                "IPA": ipa,
            })

    return signs


def parse_shannon_lexicon(csv_path: Path) -> list[dict]:
    """Parse jhnwnstd/shannon Linear_B_Lexicon.csv.

    Columns: word (Unicode), transcription (Latin), definition (scholarly notes)
    We extract: transliteration, clean definition, and classify as common/proper noun.
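
    An illustrative row (values hypothetical, shaped like the real data):
        word=<Linear B signs>, transcription="a-ke-ro",
        definition="Chadwick & Ventris 1973: messenger"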
    """
    entries = []
    with open(csv_path, encoding="utf-8") as f:
        reader = csv.DictReader(f)
        for row in reader:
            word_unicode = row.get("word", "").strip()
            translit = row.get("transcription", "").strip()
            definition = row.get("definition", "").strip()

            if not translit:
                continue

            # Try to extract a clean gloss from the definition
            # Patterns:
            #   "Chadwick & Ventris 1973: anthroponym" → type=proper, gloss=anthroponym
            #   "Chadwick & Ventris 1973: figs" → type=common, gloss=figs
            gloss = ""
            # Look for meaning after first colon
            colon_parts = definition.split(":", 1)
            if len(colon_parts) > 1:
                after_colon = colon_parts[1].strip()
                # Take the first meaningful phrase (up to next reference or semicolon)
                # Clean up common noise
                gloss_match = re.match(
                    r"([\w\s,/()?.!'\-]+?)(?:\s+(?:Chadwick|McArthur|Witczak|van |Palmer|"
                    r"Ruijgh|Bernabé|Appears|KN|PY|MY|TH|TI))",
                    after_colon,
                )
                if gloss_match:
                    gloss = gloss_match.group(1).strip().rstrip(",;.")
                else:
                    # Take first 80 chars as fallback
                    gloss = after_colon[:80].strip()
                    # Cut at first reference-like pattern
                    for cutoff in ["Chadwick", "McArthur", "Ventris", "John and"]:
                        if cutoff in gloss:
                            gloss = gloss[: gloss.index(cutoff)].strip().rstrip(",;.")
                            break

            # Determine word type
            if "anthroponym" in gloss.lower():
                word_type = "anthroponym"
            elif "toponym" in gloss.lower():
                word_type = "toponym"
            elif "theonym" in gloss.lower():
                word_type = "theonym"
            elif "ethnic" in gloss.lower():
                word_type = "ethnic"
            elif not gloss or gloss.lower() in ("meaning obscure", "meaning unknown",
                                                 "meaning uncertain", "hapax"):
                word_type = "unknown"
            else:
                word_type = "common"

            entries.append({
                "Word_Unicode": word_unicode,
                "Transliteration": translit,
                "Gloss": gloss,
                "Word_Type": word_type,
                "Source": "shannon_lexicon",
            })

    return entries


def unicode_to_translit(title: str) -> str:
    """Convert Linear B Unicode characters in a title to transliteration.

    Uses Python's unicodedata to get character names, then extracts the
    phonetic value from names like "LINEAR B SYLLABLE B008 A" → "a".
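
    Doctest, relying on the standard UCD name for U+10000:

    >>> unicode_to_translit(chr(0x10000))  # LINEAR B SYLLABLE B008 A
    'a'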
    """
    parts = []
    for ch in title:
        cp = ord(ch)
        if LINB_SYLLABARY_START <= cp <= LINB_SYLLABARY_END:
            # unicodedata.name() with a default argument never raises,
            # so no try/except is needed here.
            name = unicodedata.name(ch, "")
            m = re.match(r"LINEAR B (?:SYLLABLE|SYMBOL) B\d+\s*(.*)", name)
            if m and m.group(1):
                parts.append(m.group(1).strip().lower())
            else:
                # Undeciphered symbol (e.g. "LINEAR B SYMBOL B018")
                m2 = re.match(r"LINEAR B SYMBOL (B\d+)", name)
                if m2:
                    parts.append(f"*{m2.group(1)[1:]}")
        elif LINB_IDEOGRAM_START <= cp <= LINB_IDEOGRAM_END:
            # Ideograms: mark in brackets rather than dropping them
            name = unicodedata.name(ch, "")
            m = re.match(r"LINEAR B IDEOGRAM (B\d+\w*)\s*(.*)", name)
            if m:
                parts.append(f"[{m.group(2).strip() or m.group(1)}]")
        # Skip non-Linear B characters (spaces, combining marks, etc.)
    return "-".join(parts) if parts else ""


def parse_wiktionary_lemmas(json_path: Path) -> list[dict]:
    """Parse Wiktionary Mycenaean Greek lemma data.

    Extract from wikitext:
      - ts= parameter → IPA transcription
      - # [[gloss]] → English meaning
      - head template → part of speech
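
    A typical head template looks like (parameter values illustrative):
        {{head|gmy|noun|tr=a-ke-ro|ts=akeros}}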
    """
    with open(json_path, encoding="utf-8") as f:
        lemmas = json.load(f)

    entries = []
    for lemma in lemmas:
        title = lemma["title"]
        wikitext = lemma["wikitext"]

        # Skip if not Mycenaean Greek
        if "==Mycenaean Greek==" not in wikitext:
            continue

        # Convert Unicode title to transliteration
        title_translit = unicode_to_translit(title)

        # Skip ideogram-only entries (titles that are purely ideograms or *NNN)
        if not title_translit or all(
            p.startswith("[") or p.startswith("*") for p in title_translit.split("-") if p
        ):
            # Check if it has a tr= parameter we could use instead
            tr_check = re.search(r"\|tr=([^|}]+)", wikitext)
            if not tr_check:
                continue

        # Extract IPA from ts= parameter — ONLY from {{head|gmy|...}} template,
        # NOT from {{quote|gmy|...}} tablet quotation contexts.
        # The head template has format: {{head|gmy|noun|ts=VALUE}}
        # Quote templates have format: {{quote|gmy|...|ts=FULL_SENTENCE}}
        ipa = ""
        head_match = re.search(r"\{\{(?:head|h)\|gmy\|[^}]*\|ts=([^|}]+)", wikitext)
        if head_match:
            ipa = head_match.group(1).strip()
            # Sanity check: headword IPA should be a single word, not a sentence
            # If it contains spaces or <br>, it's a tablet quotation that leaked in
            if " " in ipa or "<br>" in ipa:
                ipa = ""

        # Extract transliteration: prefer Unicode title conversion, fallback to tr= from head template.
        # IMPORTANT: Do NOT use a global tr= search — {{quote-book}} templates contain
        # tr= with full tablet transliterations (e.g., "o-di-do-si du-ru-to-mo / ..."),
        # which are NOT the headword transliteration. Only use tr= from {{head|gmy|...}}.
        translit = title_translit  # Primary: Unicode character names → transliteration
        if not translit:
            # Fallback: tr= from head template only
            head_tr_match = re.search(r"\{\{(?:head|h)\|gmy\|[^}]*\|tr=([^|}]+)", wikitext)
            if head_tr_match:
                translit = head_tr_match.group(1).strip()

        # Clean transliteration: remove tablet context, bold markers, etc.
        # Wiktionary titles sometimes embed context like "'''di-wo''' u-ta-jo-jo"
        if translit:
            # Remove wikitext bold markers
            translit = translit.replace("'''", "")
            # If transliteration contains spaces (tablet context), take first word only
            if " " in translit:
                translit = translit.split()[0]
            # Remove trailing punctuation
            translit = translit.strip(".,;:!?")
            # Skip if still contains non-transliteration characters
            if re.search(r"[<>\[\]{}|=]", translit):
                continue

        # Skip entries with no usable transliteration
        if not translit or translit == "-":
            continue

        # Skip pure ideogram/logogram entries (*NNN without syllabic content)
        # These are ideograms like *142, *150, etc. that have no phonetic reading
        translit_parts = [p for p in translit.split("-") if p]
        syllabic_parts = [p for p in translit_parts
                          if not p.startswith("*") and not p.startswith("[")]
        if not syllabic_parts:
            continue  # Skip: no syllabic content at all

        # Extract gloss from definition lines (# [[word]] or # text)
        glosses = []
        for line in wikitext.split("\n"):
            line = line.strip()
            if line.startswith("# ") and not line.startswith("# {{def-uncertain"):
                # Clean wikitext markup
                gloss = line[2:]
                # Remove templates but preserve content for some
                gloss = re.sub(r"\{\{l\|en\|([^|}]+)[^}]*\}\}", r"\1", gloss)
                gloss = re.sub(r"\{\{[^}]*\}\}", "", gloss)
                # Remove links but keep text: [[word|display]] → display, [[word]] → word
                gloss = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", gloss)
                # Remove remaining markup
                gloss = re.sub(r"['\[\]]", "", gloss)
                # Remove wikitext remnants like }}, {{, etc.
                gloss = re.sub(r"\}\}|\{\{", "", gloss)
                # Remove leading/trailing whitespace and orphaned punctuation
                gloss = gloss.strip().strip(".,;:")
                if gloss and len(gloss) > 1:
                    glosses.append(gloss)

        # Extract part of speech. Use [^|}]+ rather than \w+ so multi-word
        # values like "proper noun" are captured whole (checked below).
        pos = ""
        pos_match = re.search(r"\{\{(?:head|h)\|gmy\|([^|}]+)", wikitext)
        if pos_match:
            pos = pos_match.group(1).strip()

        # Extract etymology cognates (useful for Concept_ID mapping)
        cognates = []
        cog_matches = re.finditer(r"\{\{cog\|grc\|([^|}]+)", wikitext)
        for m in cog_matches:
            cognates.append(m.group(1))

        gloss_text = "; ".join(glosses) if glosses else "-"

        # Determine word type from POS and content
        word_type = "common"
        if pos == "proper noun":
            word_type = "proper"
        elif "toponym" in gloss_text.lower():
            word_type = "toponym"
        elif "anthroponym" in gloss_text.lower():
            word_type = "anthroponym"

        entries.append({
            "Title_Unicode": title,
            "Transliteration": translit,
            "IPA": ipa,
            "Gloss": gloss_text,
            "POS": pos,
            "Word_Type": word_type,
            "Greek_Cognate": cognates[0] if cognates else "-",
            "Source": "wiktionary_gmy",
        })

    return entries


def transliterate_to_ipa(translit: str) -> str:
    """Convert Linear B transliteration to IPA.

    Reference: Ventris & Chadwick (1973), Hooker (1980)

    Linear B transliterations use the format: syllable-syllable-syllable
    where each syllable is a CV value from the Ventris grid.
    E.g., "a-ke-ro" → "akero", "pa-ka-na" → "pakana"
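
    Doctests (values follow TRANSLIT_TO_IPA above):

    >>> transliterate_to_ipa("a-ke-ro")
    'akero'
    >>> transliterate_to_ipa("qa-si-re-u")
    'kʷasireu'
    >>> transliterate_to_ipa("*18-to")
    '?to'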
    """
    if not translit or translit == "-":
        return "-"

    # Remove leading/trailing hyphens and whitespace
    translit = translit.strip().strip("-")

    # Split on hyphens
    syllables = translit.split("-")

    ipa_parts = []
    for syl in syllables:
        syl = syl.strip().lower()
        if not syl:
            continue
        # Check for undeciphered signs (*18, *47, etc.)
        if syl.startswith("*"):
            ipa_parts.append("?")
            continue
        # Look up in mapping
        if syl in TRANSLIT_TO_IPA:
            ipa_parts.append(TRANSLIT_TO_IPA[syl])
        else:
            # Unknown syllable — keep as-is (it may already be a valid value)
            ipa_parts.append(syl)

    return "".join(ipa_parts)


def load_iecor_gmy_words() -> list[dict]:
    """Load existing Mycenaean Greek (gmy) words from cognate pairs Parquet."""
    try:
        import pyarrow.parquet as pq
        import pyarrow.compute as pc
    except ImportError:
        print("  [WARN] pyarrow not available, skipping IE-CoR data")
        return []

    parquet_path = ROOT / "data" / "training" / "cognate_pairs" / "cognate_pairs_inherited.parquet"
    if not parquet_path.exists():
        return []

    t = pq.read_table(parquet_path)
    mask_a = pc.equal(t["Lang_A"], "gmy")
    mask_b = pc.equal(t["Lang_B"], "gmy")

    words = {}  # translit → {ipa, concept_ids}
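
    # Row-wise .as_py() access is slow on large tables, but the gmy subset is
    # tiny (the module docstring cites 43 words), so clarity wins here.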

    # Extract from Lang_A side
    gmy_a = t.filter(mask_a)
    for i in range(gmy_a.num_rows):
        w = gmy_a.column("Word_A")[i].as_py()
        ipa = gmy_a.column("IPA_A")[i].as_py()
        cid = gmy_a.column("Concept_ID")[i].as_py()
        if w and w != "-":
            if w not in words:
                words[w] = {"ipa": ipa or "-", "concept_ids": set()}
            if cid and cid != "-":
                words[w]["concept_ids"].add(cid)

    # Extract from Lang_B side
    gmy_b = t.filter(mask_b)
    for i in range(gmy_b.num_rows):
        w = gmy_b.column("Word_B")[i].as_py()
        ipa = gmy_b.column("IPA_B")[i].as_py()
        cid = gmy_b.column("Concept_ID")[i].as_py()
        if w and w != "-":
            if w not in words:
                words[w] = {"ipa": ipa or "-", "concept_ids": set()}
            if cid and cid != "-":
                words[w]["concept_ids"].add(cid)

    result = []
    for translit, data in words.items():
        result.append({
            "Transliteration": translit,
            "IPA": data["ipa"],
            "Concept_IDs": ",".join(sorted(data["concept_ids"])),
            "Source": "iecor",
        })

    return result


def build_sign_inventory(signs: list[dict]) -> None:
    """Write sign inventory TSV and sign_to_ipa.json."""
    OUT_DIR.mkdir(parents=True, exist_ok=True)

    # TSV
    tsv_path = OUT_DIR / "linear_b_signs.tsv"
    cols = ["Codepoint", "Unicode_Char", "Bennett_Number", "Name", "Type",
            "Transliteration", "IPA"]
    with open(tsv_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=cols, delimiter="\t")
        writer.writeheader()
        for sign in signs:
            writer.writerow(sign)
    print(f"  Signs TSV: {len(signs)} signs → {tsv_path}")

    # sign_to_ipa.json (only syllabograms with phonetic values)
    sign_map = OrderedDict()
    for sign in signs:
        if sign["Type"] == "syllabogram" and sign["Transliteration"] != "-":
            sign_map[sign["Transliteration"]] = sign["IPA"]
    json_path = OUT_DIR / "sign_to_ipa.json"
    json_path.write_text(json.dumps(sign_map, ensure_ascii=False, indent=2), encoding="utf-8")
    print(f"  sign_to_ipa.json: {len(sign_map)} mappings → {json_path}")

    # Stats
    syllabograms = [s for s in signs if s["Type"] == "syllabogram"]
    ideograms = [s for s in signs if s["Type"] == "ideogram"]
    with_phonetic = [s for s in syllabograms if s["Transliteration"] != "-"]
    print(f"  Syllabograms: {len(syllabograms)} ({len(with_phonetic)} with phonetic values)")
    print(f"  Ideograms: {len(ideograms)}")


def build_word_list(
    shannon_entries: list[dict],
    wiktionary_entries: list[dict],
    iecor_entries: list[dict],
) -> list[dict]:
    """Merge all word sources, write linear_b_words.tsv, and return sorted entries."""
    # Priority order for IPA: IE-CoR (expert) > Wiktionary (ts=) > transliteration conversion
    # Priority order for glosses: Wiktionary > Shannon > IE-CoR (no glosses)
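    # E.g. a word attested in all three sources ends up with the IE-CoR IPA,
    # the Wiktionary gloss, and Source "iecor+wiktionary_gmy".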

    # Index IE-CoR by transliteration
    iecor_by_translit = {}
    for e in iecor_entries:
        t = e["Transliteration"]
        iecor_by_translit[t] = e

    # Index Wiktionary by transliteration
    wikt_by_translit = {}
    for e in wiktionary_entries:
        t = e["Transliteration"]
        if t:
            wikt_by_translit[t] = e

    # Build merged word list
    # Key: transliteration (hyphenated form like "a-ke-ro")
    all_words = {}  # translit → merged dict

    # 1. Start with Shannon entries (largest source)
    for e in shannon_entries:
        t = e["Transliteration"]
        if t not in all_words:
            all_words[t] = {
                "Transliteration": t,
                "Gloss": e["Gloss"],
                "Word_Type": e["Word_Type"],
                "IPA": "-",
                "Source": "shannon_lexicon",
                "Concept_ID": "-",
                "Cognate_Set_ID": "-",
            }

    # 2. Merge Wiktionary (better glosses, has IPA)
    for e in wiktionary_entries:
        t = e["Transliteration"]
        if not t:
            continue
        # Skip non-standard transliterations from Wiktionary:
        # - Single letters (measure symbols like L, N, P, Q, S, T, V, Z)
        # - ALL-CAPS abbreviations (AES, KAPO, etc.) — these are ideogram labels
        # - Entries that look like Greek or modern language forms
        if len(t) <= 2 and t.isalpha():  # isalpha() already excludes hyphens
            continue
        if t.isupper() and len(t) <= 6:
            continue
        # Valid transliterations use lowercase with hyphens (a-ke-ro)
        # or start with * for undeciphered signs
        if not re.match(r"^[a-z0-9*]+$", t.replace("-", "")):
            continue
        if t in all_words:
            # Update gloss if Wiktionary has a better one
            if e["Gloss"] != "-":
                all_words[t]["Gloss"] = e["Gloss"]
            if e["IPA"]:
                all_words[t]["IPA"] = e["IPA"]
            all_words[t]["Source"] = "wiktionary_gmy"
            if e["Word_Type"] != "common":
                all_words[t]["Word_Type"] = e["Word_Type"]
        else:
            all_words[t] = {
                "Transliteration": t,
                "Gloss": e["Gloss"],
                "Word_Type": e["Word_Type"],
                "IPA": e["IPA"] if e["IPA"] else "-",
                "Source": "wiktionary_gmy",
                "Concept_ID": "-",
                "Cognate_Set_ID": "-",
            }

    # 3. Merge IE-CoR (best IPA, has concept IDs)
    for e in iecor_entries:
        t = e["Transliteration"]
        if t in all_words:
            # IE-CoR IPA takes priority (expert reconstructions)
            if e["IPA"] and e["IPA"] != "-":
                all_words[t]["IPA"] = e["IPA"]
            if e["Concept_IDs"]:
                all_words[t]["Concept_ID"] = e["Concept_IDs"]
            # Mark as having IE-CoR data
            all_words[t]["Source"] = "iecor+" + all_words[t]["Source"]
        else:
            all_words[t] = {
                "Transliteration": t,
                "Gloss": "-",
                "Word_Type": "common",
                "IPA": e["IPA"],
                "Source": "iecor",
                "Concept_ID": e.get("Concept_IDs", "-"),
                "Cognate_Set_ID": "-",
            }

    # 4. For entries without IPA, generate from transliteration
    for t, entry in all_words.items():
        if entry["IPA"] == "-" or not entry["IPA"]:
            entry["IPA"] = transliterate_to_ipa(t)
            if entry["IPA"] != "-":
                entry["IPA_Source"] = "translit_conversion"
            else:
                entry["IPA_Source"] = "none"
        else:
            entry["IPA_Source"] = "expert"

    # 5. Compute SCA (Sound Class Alphabet) encoding
    try:
        sys.path.insert(0, str(ROOT / "cognate_pipeline" / "src"))
        from cognate_pipeline.normalise.sound_class import ipa_to_sound_class
        has_sca = True
    except ImportError:
        has_sca = False
        print("  [WARN] cognate_pipeline not available, SCA will be computed from IPA directly")

    for entry in all_words.values():
        if has_sca and entry["IPA"] != "-":
            try:
                entry["SCA"] = ipa_to_sound_class(entry["IPA"])
            except Exception:
                entry["SCA"] = entry["IPA"].upper()
        elif entry["IPA"] != "-":
            # Simple uppercase fallback
            entry["SCA"] = entry["IPA"].upper()
        else:
            entry["SCA"] = "-"

    # Write output
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    tsv_path = OUT_DIR / "linear_b_words.tsv"
    cols = ["Word", "IPA", "SCA", "Source", "Concept_ID", "Cognate_Set_ID",
            "Gloss", "Word_Type", "IPA_Source"]

    # Sort: common nouns first, then by transliteration
    type_order = {"common": 0, "unknown": 1, "theonym": 2, "ethnic": 3,
                  "proper": 4, "toponym": 5, "anthroponym": 6}
    sorted_entries = sorted(
        all_words.values(),
        key=lambda e: (type_order.get(e["Word_Type"], 9), e["Transliteration"]),
    )

    with open(tsv_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=cols, delimiter="\t",
                                extrasaction="ignore")
        writer.writeheader()
        for entry in sorted_entries:
            writer.writerow({
                "Word": entry["Transliteration"],
                "IPA": entry["IPA"],
                "SCA": entry["SCA"],
                "Source": entry["Source"],
                "Concept_ID": entry["Concept_ID"],
                "Cognate_Set_ID": entry["Cognate_Set_ID"],
                "Gloss": entry["Gloss"],
                "Word_Type": entry["Word_Type"],
                "IPA_Source": entry.get("IPA_Source", "unknown"),
            })

    # Statistics
    total = len(sorted_entries)
    common = sum(1 for e in sorted_entries if e["Word_Type"] == "common")
    non_common = total - common  # names, places, and unknowns
    with_expert_ipa = sum(1 for e in sorted_entries if e.get("IPA_Source") == "expert")
    with_translit_ipa = sum(1 for e in sorted_entries if e.get("IPA_Source") == "translit_conversion")

    print(f"\n  Words TSV: {total} entries → {tsv_path}")
    print(f"  Common nouns: {common}")
    print(f"  Proper nouns (names/places): {proper}")
    print(f"  IPA from expert sources: {with_expert_ipa}")
    print(f"  IPA from transliteration conversion: {with_translit_ipa}")
    print(f"  No IPA: {total - with_expert_ipa - with_translit_ipa}")

    # Source distribution
    src_counts = {}
    for e in sorted_entries:
        s = e["Source"]
        src_counts[s] = src_counts.get(s, 0) + 1
    print(f"\n  Source distribution:")
    for src, count in sorted(src_counts.items(), key=lambda x: -x[1]):
        print(f"    {src}: {count}")

    return sorted_entries


def main():
    print("=" * 70)
    print("LINEAR B DATASET BUILD")
    print("=" * 70)

    # 1. Parse Unicode sign inventory
    print("\n[1/4] Parsing Unicode UCD for Linear B signs...")
    ucd_path = RAW_DIR / "UnicodeData.txt"
    if not ucd_path.exists():
        print("  ERROR: UnicodeData.txt not found. Run ingest_linear_b.py first.")
        sys.exit(1)
    signs = parse_unicode_signs(ucd_path)
    build_sign_inventory(signs)

    # 2. Parse Shannon lexicon
    print("\n[2/4] Parsing Shannon Linear B Lexicon...")
    shannon_path = RAW_DIR / "shannon_Linear_B_Lexicon.csv"
    if not shannon_path.exists():
        print("  ERROR: shannon_Linear_B_Lexicon.csv not found. Run ingest_linear_b.py first.")
        sys.exit(1)
    shannon_entries = parse_shannon_lexicon(shannon_path)
    print(f"  Parsed {len(shannon_entries)} entries")
    type_counts = {}
    for e in shannon_entries:
        type_counts[e["Word_Type"]] = type_counts.get(e["Word_Type"], 0) + 1
    for wt, c in sorted(type_counts.items(), key=lambda x: -x[1]):
        print(f"    {wt}: {c}")

    # 3. Parse Wiktionary lemmas
    print("\n[3/4] Parsing Wiktionary Mycenaean Greek lemmas...")
    wikt_path = RAW_DIR / "wiktionary_gmy_lemmas.json"
    if not wikt_path.exists():
        print("  ERROR: wiktionary_gmy_lemmas.json not found. Run ingest_linear_b.py first.")
        sys.exit(1)
    wiktionary_entries = parse_wiktionary_lemmas(wikt_path)
    print(f"  Parsed {len(wiktionary_entries)} entries")
    with_ipa = sum(1 for e in wiktionary_entries if e["IPA"])
    with_translit = sum(1 for e in wiktionary_entries if e["Transliteration"])
    with_gloss = sum(1 for e in wiktionary_entries if e["Gloss"] != "-")
    print(f"    With IPA (ts=): {with_ipa}")
    print(f"    With transliteration: {with_translit}")
    print(f"    With gloss: {with_gloss}")

    # 4. Load IE-CoR existing data
    print("\n[4/4] Loading IE-CoR Mycenaean Greek data...")
    iecor_entries = load_iecor_gmy_words()
    print(f"  Loaded {len(iecor_entries)} entries from cognate pairs")

    # 5. Merge and build word list
    print("\n[BUILD] Merging all sources...")
    build_word_list(shannon_entries, wiktionary_entries, iecor_entries)

    print("\n" + "=" * 70)
    print("BUILD COMPLETE")
    print("=" * 70)
    print(f"\nOutput directory: {OUT_DIR}")
    for p in sorted(OUT_DIR.iterdir()):
        print(f"  {p.name}: {p.stat().st_size:,} bytes")


if __name__ == "__main__":
    main()