# WordNet 3.0 Research Guide
This guide explains the structure of the WordNet files used in this dataset project, how this adapter parses them, and how to interpret the resulting records for research workflows.
## Scope
This project packages WordNet 3.0 lexical database files for four parts of speech:
- noun
- verb
- adjective
- adverb
The adapter supports three variants:
- `data` - synset records (`data.noun`, `data.verb`, `data.adj`, `data.adv`)
- `index` - lemma index records (`index.noun`, `index.verb`, `index.adj`, `index.adv`)
- `exceptions` - morphology exceptions (`noun.exc`, `verb.exc`, `adj.exc`, `adv.exc`)
## Source Files In This Project
Under `datasets/wordnet/data`:

- `data.noun`, `data.verb`, `data.adj`, `data.adv`
- `index.noun`, `index.verb`, `index.adj`, `index.adv`
- `noun.exc`, `verb.exc`, `adj.exc`, `adv.exc`
**Important:** WordNet files begin with license/header text before structured records. Those rows are valid file content but not structured lexical records.
## Canonical Format References
Primary references for WordNet 3.0 file formats:
- Princeton `wndb(5WN)`: data/index/exception file grammar
- Princeton `wninput(5WN)`: pointer symbols and lexical conventions
## Core Concepts
- Synset: a set of synonymous word senses.
- `synset_offset`: byte offset used as an address into `data.*` files.
- `ss_type`: part-of-speech/synset type marker (`n`, `v`, `a`, `s`, `r`).
- Pointer: a typed relation from one synset (or a specific word sense) to another.
- `source/target`: a four-hex-digit field that distinguishes semantic from lexical pointers.
### Synset Identity
`synset_offset` alone is not globally unique across all POS files. Use `(synset_offset, pos)` as the unique synset key.
## Raw File Grammar
### `data.*` Files
Per `wndb(5WN)`, data lines are:

```
synset_offset lex_filenum ss_type w_cnt word lex_id [word lex_id ...] p_cnt [ptr ...] [frames ...] | gloss
```
Where:

- `synset_offset`: 8-digit decimal byte offset.
- `lex_filenum`: 2-digit decimal lexical file id.
- `ss_type`: one of `n`, `v`, `a`, `s`, `r`.
- `w_cnt`: 2-digit hexadecimal count of words in the synset.
- `word`: token, with underscores for spaces (adjectives may include markers like `(p)`).
- `lex_id`: 1-digit hexadecimal sense discriminator within lexicographer files.
- `p_cnt`: 3-digit decimal pointer count.
- `ptr`: `pointer_symbol target_synset_offset pos source/target`
  - `source/target`: four hexadecimal digits, split as `ss tt`. `0000` means a semantic pointer (synset-to-synset); a non-zero value means a lexical pointer from source word number `ss` to target word number `tt`.
- `frames` (verbs only): `f_cnt + f_num w_num [ + f_num w_num ... ]`
- `gloss`: free text after `|`.
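The grammar above can be sketched as a minimal parser for a single `data.*` line. This is an illustration only, not the adapter's implementation: all names are this sketch's own choices, verb frames are skipped for brevity, and the sample line is adapted from `data.noun`.

```python
def parse_data_line(line: str) -> dict:
    """Parse one data.* record per wndb(5WN) (frames omitted for brevity)."""
    fields_part, _, gloss = line.partition("|")
    tokens = fields_part.split()
    offset, lex_filenum, ss_type = tokens[0], int(tokens[1]), tokens[2]
    w_cnt = int(tokens[3], 16)              # word count is hexadecimal
    i = 4
    words = []
    for _ in range(w_cnt):
        words.append({"word": tokens[i], "lex_id": int(tokens[i + 1], 16)})
        i += 2
    p_cnt = int(tokens[i])                  # pointer count is decimal
    i += 1
    pointers = []
    for _ in range(p_cnt):
        symbol, target, pos, st = tokens[i:i + 4]
        pointers.append({
            "symbol": symbol,
            "target_offset": target,
            "pos": pos,
            # 0000 = semantic (synset-to-synset); else lexical ss -> tt
            "source_word": int(st[:2], 16),
            "target_word": int(st[2:], 16),
            "is_semantic": st == "0000",
        })
        i += 4
    return {
        "synset_offset": offset,
        "lex_filenum": lex_filenum,
        "ss_type": ss_type,
        "words": words,
        "pointers": pointers,
        "gloss": gloss.strip(),
    }

# Sample line adapted from data.noun (the "entity" synset):
record = parse_data_line(
    "00001740 03 n 01 entity 0 003 ~ 00001930 n 0000 ~ 00002137 n 0000 "
    "~ 04424418 n 0000 | that which is perceived or known or inferred "
    "to have its own distinct existence"
)
```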
### `index.*` Files
Per `wndb(5WN)`, index lines are:

```
lemma pos synset_cnt p_cnt [ptr_symbol ...] sense_cnt tagsense_cnt synset_offset [synset_offset ...]
```
Where:

- `lemma`: lower-case lemma/collocation (`_` for spaces).
- `pos`: `n`, `v`, `a`, `r`.
- `synset_cnt`: number of senses/synsets for this lemma in this POS.
- `p_cnt`: number of pointer symbols listed.
- `ptr_symbol`: pointer symbol types seen for the lemma's senses.
- `sense_cnt`: redundant historical count field.
- `tagsense_cnt`: number of tagged (frequency-ranked) senses.
- trailing `synset_offset` list: its length should equal `synset_cnt`.
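A sketch of the same idea for `index.*` lines, including the trailing-offset invariant noted above. Names are this sketch's own; the sample line is shaped like the `index.noun` entry for "dog" but its offsets are illustrative, not copied from the file.

```python
def parse_index_line(line: str) -> dict:
    """Parse one index.* record per wndb(5WN) and check the offset count."""
    tokens = line.split()
    lemma, pos = tokens[0], tokens[1]
    synset_cnt, p_cnt = int(tokens[2]), int(tokens[3])
    ptr_symbols = tokens[4:4 + p_cnt]
    sense_cnt = int(tokens[4 + p_cnt])
    tagsense_cnt = int(tokens[5 + p_cnt])
    offsets = tokens[6 + p_cnt:]
    # Invariant from wndb(5WN): trailing offsets must match synset_cnt.
    assert len(offsets) == synset_cnt, "offset list must match synset_cnt"
    return {
        "lemma": lemma, "pos": pos, "synset_cnt": synset_cnt,
        "ptr_symbols": ptr_symbols, "sense_cnt": sense_cnt,
        "tagsense_cnt": tagsense_cnt, "synset_offsets": offsets,
    }

entry = parse_index_line(
    "dog n 7 5 @ ~ #m %p + 7 2 "
    "02084071 10114209 10023039 09905672 07676602 03903868 02710044"
)
```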
### `*.exc` Exception Files
Per `wndb(5WN)`, exception lines are:

```
inflected_form base_form [base_form ...]
```
These map irregular inflections to one or more base forms.
## Pointer Symbols You Will See
This project normalizes common pointer symbols to labels when parsing `data.*`.

| Symbol | Label |
|---|---|
| `!` | antonym |
| `@` | hypernym |
| `@i` | instance_hypernym |
| `~` | hyponym |
| `~i` | instance_hyponym |
| `#m` | member_holonym |
| `#s` | substance_holonym |
| `#p` | part_holonym |
| `%m` | member_meronym |
| `%s` | substance_meronym |
| `%p` | part_meronym |
| `=` | attribute |
| `+` | derivationally_related_form |
| `;c` | domain_of_synset_topic |
| `-c` | member_of_domain_topic |
| `;r` | domain_of_synset_region |
| `-r` | member_of_domain_region |
| `;u` | domain_of_synset_usage |
| `-u` | member_of_domain_usage |
| `*` | entailment |
| `>` | cause |
| `^` | also_see |
| `$` | verb_group |
| `&` | similar_to |
| `<` | participle_of_verb |
| `\` | pertainym_or_derived_from_adjective |
The meaning and allowed POS combinations for each symbol are defined in the WordNet documentation (`wninput(5WN)`).
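The normalization table above can be carried as a plain lookup. A sketch (subset of the table shown; the names `POINTER_LABELS` and `label_for` are this guide's own, not the adapter's API):

```python
# Subset of the symbol-to-label table above; extend with the remaining rows.
POINTER_LABELS = {
    "!": "antonym",
    "@": "hypernym",
    "@i": "instance_hypernym",
    "~": "hyponym",
    "~i": "instance_hyponym",
    "#m": "member_holonym",
    "%p": "part_meronym",
    "*": "entailment",
    ">": "cause",
    "&": "similar_to",
    "\\": "pertainym_or_derived_from_adjective",
}

def label_for(symbol: str) -> str:
    # Fall back to the raw symbol for anything not normalized.
    return POINTER_LABELS.get(symbol, symbol)
```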
## Adapter Parsing Behavior
### Variant `data` with `parse_records=true`
Adds structured fields:

- `is_record`
- `offset`, `synset_offset`
- `lex_filenum`
- `ss_type`
- `w_cnt`, `word_count`
- `words` (list of objects)
- `lemmas` (normalized with spaces)
- `p_cnt`, `pointer_count`
- `pointers` (list of objects)
- `frames` (list of verb frame objects; usually empty for non-verbs)
- `gloss`
- `parse_error` (flag set when a line partially matches but is malformed)
`words` entries include:

- `word`
- `lemma`
- `marker` (for adjective markers like `a`, `p`, `ip`)
- `lex_id`
- `lex_id_int`
- `word_number`
`pointers` entries include:

- `symbol`
- `label`
- `target_offset`
- `pos`
- `source_target`
- `source_word_number`
- `target_word_number`
- `is_semantic`
`frames` entries include:

- `frame_number`
- `word_number`
- `applies_to_all_words`
### Variant `index` with `parse_records=true`
Adds:

- `is_record`
- `lemma`, `lemma_text`
- `pos`
- `synset_cnt`
- `p_cnt`
- `ptr_symbols`
- `sense_cnt`
- `tagsense_cnt`
- `synset_offsets`
- `parse_error`
### Variant `exceptions` with `parse_records=true`
Adds:

- `is_record`
- `inflected_form`, `inflected_form_text`
- `base_forms`, `base_forms_text`
## Header/License Row Handling
Non-record lines are expected at the top of files.

- Parsed output marks these as `is_record=false`.
- The WordNet dataset config enables automatic filtering of these rows when parsing.
Project default in `datasets/wordnet/dataset.json`:

```json
{
  "adapter_defaults": {
    "records_only": true
  }
}
```
Practical effect:

- If you run with `--parse-records`, non-record rows are removed by default.
- To keep raw non-record rows, set `records_only=false`.
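The filtering effect can be sketched in a few lines. The row dicts here only illustrate the adapter's output shape (`is_record` flag plus parsed fields); the helper name is this guide's own:

```python
def filter_records(rows, records_only: bool = True):
    """Drop rows flagged is_record=False unless records_only is disabled."""
    if not records_only:
        return list(rows)
    return [row for row in rows if row.get("is_record")]

rows = [
    # License/header text parsed from the top of a data.* file:
    {"is_record": False, "raw": "  1 This software and database ..."},
    # A structured lexical record:
    {"is_record": True, "synset_offset": "00001740"},
]
kept = filter_records(rows)
```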
## Research Workflows
### Quick CLI Usage
Parse data records and keep only true records (default behavior):
```sh
python -m tether.datasets run-dataset wordnet \
  --base-dir datasets/wordnet \
  --variant data \
  --parse-records \
  --json
```
Disable filtering for diagnostics/audit:
```sh
python -m tether.datasets run-dataset wordnet \
  --base-dir datasets/wordnet \
  --variant data \
  --parse-records \
  --option records_only=false \
  --json
```
Build a parquet artifact bundle for analysis:

```sh
python -m tether.datasets prepare-dataset wordnet \
  --base-dir datasets/wordnet \
  --variant data \
  --parse-records \
  --output-dir artifacts/wordnet-research \
  --format parquet \
  --json
```
### Example Analysis Ideas
- Build graph edges from `pointers` using `(synset_offset, ss_type)` keys.
- Compare lexical vs. semantic relation rates via `is_semantic`.
- Analyze gloss lengths, lemma counts (`word_count`), and pointer density (`pointer_count`).
- Filter by specific pointer symbols (for example `@` hypernym, `~` hyponym).
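The first idea can be sketched as an edge builder over parsed records, keyed by `(synset_offset, ss_type)` as recommended above. The record shape follows the adapter fields listed earlier; the sample record and offsets are illustrative:

```python
def hypernym_edges(records):
    """Collect (source_key, target_key) pairs for hypernym pointers."""
    edges = []
    for rec in records:
        src = (rec["synset_offset"], rec["ss_type"])
        for ptr in rec["pointers"]:
            if ptr["label"] == "hypernym":
                edges.append((src, (ptr["target_offset"], ptr["pos"])))
    return edges

records = [{
    "synset_offset": "02084071",
    "ss_type": "n",
    "pointers": [
        {"label": "hypernym", "target_offset": "02083346", "pos": "n"},
        {"label": "hyponym", "target_offset": "01322604", "pos": "n"},
    ],
}]
edges = hypernym_edges(records)
```

The resulting pairs drop straight into a graph library such as `networkx` via `add_edges_from`.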
## Recommended Quality Checks
- Assert that no `is_record=false` rows remain in final analysis datasets when using defaults.
- Track `parse_error=true` counts as ingestion quality telemetry.
- Validate that `len(synset_offsets) == synset_cnt` for parsed index records.
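The three checks above can be bundled into one assertion pass. A sketch over rows shaped like the adapter output described earlier (the function name and sample rows are this guide's own):

```python
def run_quality_checks(data_rows, index_rows) -> int:
    """Raise on violations; return the parse_error count as telemetry."""
    # 1. No non-record rows should survive default filtering.
    assert all(row["is_record"] for row in data_rows), "non-record rows remain"
    # 2. Index offset lists must match their declared synset_cnt.
    for row in index_rows:
        assert len(row["synset_offsets"]) == row["synset_cnt"], row
    # 3. Count parse errors for ingestion-quality telemetry.
    return sum(1 for row in data_rows if row.get("parse_error"))

errors = run_quality_checks(
    data_rows=[{"is_record": True, "parse_error": False}],
    index_rows=[{"synset_offsets": ["00001740"], "synset_cnt": 1}],
)
```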
## Known Caveats
- Some WordNet tokens contain punctuation and abbreviations (`A.D.`, `B.C.`).
- Not all pointer symbols apply to all parts of speech.
- Exception files may include forms not present in your selected downstream subset.
- Sense ranking in index files depends on WordNet tag counts and ordering conventions.
## Citation And License
- Citation: Miller (1995), "WordNet: A Lexical Database for English."
- License: WordNet 3.0 license (Princeton University), included in the source files.

Use source-level attribution and the license text in downstream publications and releases as required.