Dataset Card for Epstein Files - Vector Embeddings (Chroma DB)

A pre-built ChromaDB vector database derived from the Epstein Files 20K dataset. It contains dense vector embeddings of 100K+ semantically chunked document passages, ready to plug directly into a RAG pipeline with no re-embedding required.

Dataset Details

Dataset Description

This dataset is a pre-computed Chroma vector store built from the raw Epstein Files 20K document corpus. The raw text (2.5M+ lines) was cleaned, reconstructed by source filename, semantically chunked, and embedded using sentence-transformers/all-MiniLM-L6-v2. The resulting Chroma DB is uploaded here so that users can skip the computationally expensive embedding step (~20–45 minutes) and immediately run RAG queries.

Uses

Direct Use

This dataset is intended to be used as a drop-in vector store for RAG (Retrieval-Augmented Generation) applications querying the Epstein Files documents. Load it with LangChain's Chroma integration and immediately run semantic search or MMR retrieval without any preprocessing.
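
If the chroma_db/ directory is not already local, it can be fetched first with huggingface_hub. This is a minimal sketch using the standard snapshot_download API; the repo_id below is a placeholder, not this dataset's actual Hub id:

from huggingface_hub import snapshot_download

# Placeholder repo_id: substitute this dataset's actual repository id.
local_path = snapshot_download(
    repo_id="username/epstein-files-chroma",
    repo_type="dataset",
)
# Point persist_directory in the snippet below at the chroma_db/ folder
# inside local_path.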

from langchain_community.vectorstores import Chroma
from langchain_huggingface import HuggingFaceEmbeddings

# Must be the same model used to build the index (384-dim MiniLM vectors);
# a different embedding model would be incompatible with the stored vectors.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Open the persisted ChromaDB directory shipped with this dataset.
vectorstore = Chroma(
    persist_directory="./chroma_db",
    embedding_function=embeddings
)

# MMR retrieval: fetch 20 candidates, return the 5 most relevant yet diverse.
retriever = vectorstore.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20}
)

docs = retriever.invoke("Who visited Epstein's island?")

Out-of-Scope Use

This dataset is not suitable for training or fine-tuning language models. It should not be used for harassment, targeting individuals, or any illegal purposes. Users are responsible for complying with applicable laws and ethical guidelines.

Dataset Structure

The dataset contains the ChromaDB persistence directory (chroma_db/) with the following files:

  • chroma.sqlite3: SQLite metadata store with document text, chunk IDs, and metadata
  • data_level0.bin: HNSW vector index for approximate nearest-neighbor search
  • header.bin: Index header
  • length.bin: Vector length metadata
  • link_lists.bin: HNSW graph structure

Each embedded chunk carries two metadata fields, source (the original document filename) and chunk_id (a unique identifier), alongside its raw text content. The collection contains 100K+ chunks with 384-dimensional embeddings (all-MiniLM-L6-v2), chunked at ~500 tokens with 50-token overlap and indexed via HNSW.
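
As a quick sanity check, here is a minimal sketch that reuses the retriever from the Direct Use snippet to print each retrieved chunk's metadata (the query string is illustrative):

# Inspect the source and chunk_id metadata on a few retrieved chunks.
for doc in retriever.invoke("flight logs"):
    print(doc.metadata["source"], doc.metadata["chunk_id"])
    print(doc.page_content[:200])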

Dataset Creation

Curation Rationale

The Epstein Files 20K dataset contains raw, fragmented document text that is difficult to query directly. This vector database was created to enable fast, accurate semantic retrieval over the full document corpus as part of a RAG system designed to ground every answer in retrieved context. Pre-computing and uploading the embeddings eliminates the biggest time barrier (~45 min embedding job) for anyone wanting to build on top of this data.

Source Data

Data Collection and Processing

The pipeline that produced this dataset follows four stages (a code sketch of stages 3 and 4 appears after the list):

  1. Download — Raw data fetched from teyler/epstein-files-20k on Hugging Face (~2.5M document lines) via the datasets library.
  2. Clean & Reconstruct — Junk rows removed, documents reconstructed and grouped by source filename. Output: cleaned.json.
  3. Semantic Chunking — Documents split into overlapping chunks (~500 tokens, 50-token overlap) using LangChain's RecursiveCharacterTextSplitter. Output: chunks.json.
  4. Embed & Index — Chunks embedded with sentence-transformers/all-MiniLM-L6-v2 and stored in ChromaDB. Output: chroma_db/.
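
A minimal sketch of stages 3 and 4, assuming cleaned.json holds a list of {"source": ..., "text": ...} records (the actual pipeline lives in the linked repository). Note that RecursiveCharacterTextSplitter counts characters rather than tokens, so the sizes below only approximate the ~500-token / 50-token figures:

import json

from langchain_community.vectorstores import Chroma
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the cleaned, reconstructed documents from stage 2.
with open("cleaned.json") as f:
    documents = json.load(f)

# Character-based approximation of ~500-token chunks with 50-token overlap.
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)

texts, metadatas = [], []
for doc in documents:
    for i, chunk in enumerate(splitter.split_text(doc["text"])):
        texts.append(chunk)
        metadatas.append({"source": doc["source"], "chunk_id": f"{doc['source']}-{i}"})

# Embed every chunk and persist the index to chroma_db/.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = Chroma.from_texts(
    texts,
    embedding=embeddings,
    metadatas=metadatas,
    persist_directory="./chroma_db",
)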

Who are the source data producers?

The original documents are public records from the Jeffrey Epstein court case files, aggregated and published on Hugging Face by teyler. This dataset is a derivative of that public corpus.

Annotations

Annotation process

No manual annotations were added. Metadata fields (source, chunk_id) are automatically generated during the chunking and embedding pipeline.

Who are the annotators?

Automated pipeline — no human annotators.

Personal and Sensitive Information

The source corpus contains real names of individuals mentioned in legal documents, including victims, witnesses, and associates. This data is sourced entirely from public court records. No anonymization has been applied. Users should handle this data responsibly and ethically.

Bias, Risks, and Limitations

  • Named individuals: The documents contain names of real people from court records. Any application built on this data must handle this with appropriate care.
  • OCR/scan artifacts: Some source documents may contain OCR errors from the original scan-to-text conversion, which can affect retrieval quality.
  • Chunking boundary effects: Semantic chunking may split context across boundaries, occasionally resulting in incomplete retrieved passages.
  • Embedding model limitations: all-MiniLM-L6-v2 is a general-purpose model; domain-specific legal terminology may not embed with optimal accuracy.
  • No ground truth: There is no annotated QA benchmark for this corpus, so retrieval quality is evaluated qualitatively.

Recommendations

Always use MMR retrieval (search_type="mmr") over pure similarity search to avoid redundant results from the same document. Ground all LLM responses strictly in retrieved context. Review results critically — this is a research tool, not a legal or journalistic authority.
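
A minimal sketch of that grounding pattern; the prompt wording is illustrative, and any chat-capable LLM client can perform the final generation step:

# Build a context-only prompt from retrieved chunks; the LLM call itself is
# left out, since any provider's client can consume this prompt.
question = "Who visited Epstein's island?"
context = "\n\n".join(doc.page_content for doc in retriever.invoke(question))
prompt = (
    "Answer using ONLY the context below. If the answer is not in the "
    "context, say you don't know.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)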

Citation

BibTeX:

@misc{nayak2026epsteinfilesrag,
  author       = {Ankit Kumar Nayak},
  title        = {EpsteinFiles-RAG: A RAG Pipeline over the Epstein Files 20K Dataset},
  year         = {2026},
  howpublished = {\url{https://github.com/AnkitNayak-eth/EpsteinFiles-RAG}},
}

APA:

Nayak, A. K. (2026). EpsteinFiles-RAG: A RAG Pipeline over the Epstein Files 20K Dataset. GitHub. https://github.com/AnkitNayak-eth/EpsteinFiles-RAG

Glossary

  • RAG (Retrieval-Augmented Generation): A technique where an LLM answers questions using only context retrieved from a vector database, reducing the risk of hallucination.
  • MMR (Maximal Marginal Relevance): A retrieval algorithm that balances relevance and diversity to avoid returning redundant chunks from the same document.
  • ChromaDB: An open-source vector database used to store and query embeddings.
  • Embedding: A dense numerical vector representation of text that captures semantic meaning.
  • HNSW: Hierarchical Navigable Small World — the graph-based approximate nearest-neighbor index used by ChromaDB.

More Information

Full pipeline source code, API server, and Streamlit UI available at: https://github.com/AnkitNayak-eth/EpsteinFiles-RAG

Dataset Card Authors

Ankit Kumar Nayak

Dataset Card Contact

GitHub: https://github.com/AnkitNayak-eth
