---
license: other
license_name: bsl-1.1
license_link: https://github.com/tenfingerseddy/resonance-lattice/blob/main/LICENSE.md
tags:
  - resonance-lattice
  - rlat
  - knowledge-model
  - retrieval
language: en
---

# python-stdlib — rlat knowledge model (v2.0)

A Resonance Lattice knowledge model of [python/cpython](https://github.com/python/cpython) at commit `d2f506ae`, scope `Doc`.

## Quick start

```bash
pip install rlat
huggingface-cli download tenfingers/python-stdlib-rlat python-stdlib.rlat --local-dir .
rlat search python-stdlib.rlat "your question" --top-k 5
```

The model uses remote storage mode: passages reference source files at raw.githubusercontent.com, pinned to the commit SHA above. The first query fetches each cited source once and caches it locally; subsequent queries over the same passages resolve from the warm cache in under 20 ms.
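As a rough mental model (not rlat's actual implementation), remote mode amounts to building a commit-pinned raw URL for each cited file and checking the fetched bytes against a digest recorded at build time. The helper names below, and the use of a per-file SHA-256 manifest, are illustrative assumptions:

```python
import hashlib

# Commit SHA this model is pinned to (see the Build details table below).
PINNED_COMMIT = "d2f506ae07e0bc097039634a28cf85b5d804ef72"
URL_BASE = f"https://raw.githubusercontent.com/python/cpython/{PINNED_COMMIT}/Doc/"

def source_url(rel_path: str) -> str:
    """Commit-pinned raw URL for a cited source file (hypothetical helper)."""
    return URL_BASE + rel_path

def verify(content: bytes, expected_sha256: str) -> bool:
    """Compare fetched bytes against a digest recorded at build time
    (a per-file SHA-256 manifest is an assumption, not the documented rlat format)."""
    return hashlib.sha256(content).hexdigest() == expected_sha256
```

Because the URL embeds the commit SHA rather than a branch name, upstream pushes to `main` never change what a cached passage resolves to.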

## Build details

| Field | Value |
|---|---|
| Encoder | Alibaba-NLP/gte-modernbert-base (768d, CLS-pooled, L2-normalised) |
| Encoder revision | `e7f32e3c00f91d699e8c43b53106206bcc72bb22` (pinned) |
| Format | rlat knowledge-model v4 (ZIP + JSON + NPZ) |
| Storage mode | remote (source pinned at SHA, fetched on demand, SHA-verified) |
| Source repo | python/cpython |
| Source scope | Doc |
| Source commit | `d2f506ae07e0bc097039634a28cf85b5d804ef72` |
| Source branch | main (commit SHA-pinned; reproducible regardless of branch movement) |
| Files indexed | 617 |
| Passages | 49,179 |
| Build date | 2026-04-28 |
| Built on | Kaggle T4 (GPU encoding, batch_size=64, runtime=torch) |
| File size | 292.5 MB |

## Usage

### Single-hop search

```bash
rlat search python-stdlib.rlat "what does X do?" --top-k 5
```

### Skill-context (Anthropic skill `!command` block)

!`rlat skill-context python-stdlib.rlat --query "$user_query" --top-k 5`

The output is markdown with citation anchors, drift status, and ConfidenceMetrics — ready for an LLM to ground on.

### Multi-hop deep-search

```bash
rlat deep-search python-stdlib.rlat "harder cross-file question" --max-hops 3
```

Requires an Anthropic API key. See the deep-search docs.

## Refreshing against upstream

This model pins to the source commit `d2f506ae`. To re-index against the current upstream tip:

```bash
# Option A: rebuild on Kaggle's free T4 (recommended for big corpora)
# See the rlat-build-on-kaggle skill at:
# https://github.com/tenfingerseddy/resonance-lattice/tree/main/.claude/skills/rlat-build-on-kaggle

# Option B: rebuild locally
pip install rlat[build,ann]
rlat install-encoder
git clone --depth 1 -b main https://github.com/python/cpython.git src/
rlat build src/Doc -o python-stdlib.rlat \
  --store-mode remote \
  --remote-url-base https://raw.githubusercontent.com/python/cpython/<NEW_SHA>/Doc/ \
  --runtime torch
```
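One way to find the value to substitute for `<NEW_SHA>` is GitHub's REST API (running `git rev-parse HEAD` inside the clone works equally well). The helper names here are illustrative, not part of rlat:

```python
import json
from urllib.request import urlopen

def commit_api_url(owner: str, repo: str, branch: str = "main") -> str:
    """GitHub REST endpoint that reports the current tip commit of a branch."""
    return f"https://api.github.com/repos/{owner}/{repo}/commits/{branch}"

def parse_sha(payload: str) -> str:
    """Pull the 40-character commit SHA out of the API response body."""
    return json.loads(payload)["sha"]

if __name__ == "__main__":
    # Network call: prints the SHA to substitute for <NEW_SHA> above.
    with urlopen(commit_api_url("python", "cpython")) as resp:
        print(parse_sha(resp.read().decode()))
```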

## Honest limits

- The encoder is gte-modernbert-base (768d) with no per-corpus optimisation. Default retrieval is dense cosine over the base band — a single recipe, no rerankers, no lexical sidecar.
- For a per-corpus retrieval lift, you can run `rlat optimise` locally to add a 512d MRL-trained band on top of this archive (opt-in; costs API + GPU time). See docs/user/OPTIMISE.md.
- Drift detection is automatic: if the source files on GitHub change, query results show a `drifted` status until the model is rebuilt against the new commit.
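The "dense cosine over the base band" recipe is simple to picture: with L2-normalised vectors, cosine similarity reduces to a dot product, and retrieval is a top-k over those scores. A minimal plain-Python sketch (not rlat's code, which operates on the NPZ embedding matrices):

```python
from math import sqrt

def l2_normalise(v: list[float]) -> list[float]:
    """Scale a vector to unit length."""
    n = sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def top_k(query: list[float], passages: list[list[float]], k: int = 5):
    """Rank passage vectors by cosine similarity to the query.
    With unit-length vectors, cosine similarity is just the dot product."""
    q = l2_normalise(query)
    scores = [(sum(a * b for a, b in zip(q, l2_normalise(p))), i)
              for i, p in enumerate(passages)]
    return sorted(scores, reverse=True)[:k]
```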

## License

The rlat software is licensed under BSL 1.1 (Business Source License — source-available; production use of the licensed work is permitted up to the parameters in LICENSE.md).

This .rlat archive contains embeddings + metadata + a SHA-pinned URL manifest; source bytes are NOT bundled and are fetched from upstream GitHub at query time, where the upstream repository's license applies.
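Since the container format is a plain ZIP (see the Format row in Build details), you can confirm for yourself that no source bytes are bundled — the archive can be inspected with the standard library, no rlat install required. A small sketch, assuming the file from Quick start has been downloaded:

```python
import zipfile

def list_members(path: str) -> list[str]:
    """List the members (JSON metadata, NPZ embeddings, manifest) inside a
    .rlat container, which is a plain ZIP archive."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# Example (after the Quick start download):
# list_members("python-stdlib.rlat")
```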