
fic-agent

A no-finetune, retrieval-augmented character chatbot for fiction.

Goals

  • Factual consistency with the source text
  • Character-true language style and emotional profile
  • Worldview consistency (settings, rules, relationships)

Repository layout

  • configs/ Runtime configs (API keys, model names, flags)
  • data/raw/ Raw novels and metadata
  • data/processed/ Chunked text, extracted dialogues, persona notes
  • data/indexes/ Vector indexes and graph artifacts
  • data/eval/ Evaluation inputs and references
  • outputs/ Run outputs and logs
  • scripts/ CLI helpers and pipelines
  • src/fic_agent/ Core library

Core pipeline (high level)

  1. Ingest raw text
  2. Extract character dialogue + context
  3. Build persona profile (style, affect, worldview)
  4. Build fact and worldview indexes
  5. Retrieve relevant evidence + persona
  6. Generate response with persona-conditioned prompt
  7. Evaluate (facts, style, worldview)
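The stages above can be sketched as a small function chain. This is a toy illustration only: the function names and heuristics are hypothetical placeholders, not the actual fic_agent API, and the "retrieval" is naive keyword overlap standing in for vector search.

```python
def extract_dialogue(text, character):
    # Stage 2 (toy heuristic): keep lines attributed to the character
    prefix = f"{character}:"
    return [ln[len(prefix):].strip() for ln in text.splitlines()
            if ln.startswith(prefix)]

def build_persona(lines):
    # Stage 3 (toy profile): count utterances and average their length
    return {"n_lines": len(lines),
            "avg_len": sum(len(l) for l in lines) / max(len(lines), 1)}

def retrieve(lines, query):
    # Stage 5 (toy retrieval): pick the line with the most query-token overlap
    q = set(query.lower().split())
    return max(lines, key=lambda l: len(q & set(l.lower().split())))

raw = ("Utterson: I incline to Cain's heresy.\n"
       "Utterson: If he be Mr. Hyde, I shall be Mr. Seek.")
lines = extract_dialogue(raw, "Utterson")
persona = build_persona(lines)
evidence = retrieve(lines, "who is Mr. Hyde")
```

The real pipeline replaces each toy stage with chunking, LLM-based persona extraction, and FAISS vector retrieval, but the data flow is the same.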

End-to-End Run (Dataset -> Question -> Answer)

  1. Activate the environment and install dependencies.
conda activate fic_agent
cd /home/tleautomat/Project/fic-agent
python -m pip install -r requirements.txt
  2. Set API keys (the OpenRouter key can be reused for the LLM and embeddings).
export OPENROUTER_API_KEY="your_key"
export OPENAI_API_KEY="$OPENROUTER_API_KEY"
export EMBEDDING_API_KEY="$OPENROUTER_API_KEY"
  3. Put your novel text in data/raw/ (an example is already included).

  4. Run preprocessing + memory building + index building.

python scripts/run_pipeline.py \
  --input "data/raw/The Strange Case of Dr. Jekyll and Mr. Hyde.txt" \
  --book-id jekyll_hyde \
  --characters "Jekyll,Hyde,Utterson,Poole" \
  --all-characters \
  --worldview-llm \
  --build-index
  5. Ask a user question with the meta-cognitive QA loop.
python scripts/run_meta_qa.py \
  --query "As Utterson, should you trust Hyde after the trampling incident?" \
  --character Utterson \
  --max-iter 3 \
  --style-correct \
  --dump-trace \
  --save-json outputs/qa/utterson_q1.json
  6. (Optional) Evaluate one generated answer with the comprehensive LLM judge.
PYTHONPATH=src python scripts/eval_answer.py \
  --result-json outputs/qa/utterson_q1.json \
  --character Utterson \
  --processed-dir data/processed \
  --mode llm \
  --rounds 3 \
  --temperature 0.2 \
  --top-n 6 \
  --save-json outputs/qa/utterson_q1_eval.json

Evaluation (LLM Comprehensive Judge)

The project now uses a comprehensive LLM evaluator (v2) that combines canon grounding, character realization, world logic, and answer usefulness.

Core modules (each sub-score uses a 1-5 scale):

  • canon_grounding: evidence_fit, fact_consistency, timeline_consistency
  • character_realization: linguistic_style, tone, vocabulary, emotional_expression, motivation_alignment
  • world_logic: rule_consistency, setting_relation_consistency
  • response_usefulness: question_fit, clarity

Outputs include:

  • scorecard.overall_100 (weighted by 40/35/15/10)
  • same_character (Yes/No)
  • confidence_100 (0-100)
  • issues.critical/major/minor
  • penalties and per-round breakdown
  • Backward-compatible fields: scores.facts/persona/worldview/overall and confidence (0-1)
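One plausible reading of the scorecard is that the 1-5 sub-scores are averaged per module, rescaled to 0-100, and combined with the 40/35/15/10 weights. The exact normalization below is an assumption for illustration, not read from the evaluator's code:

```python
# Weights from the documented 40/35/15/10 split (assumed to apply to
# normalized module scores).
WEIGHTS = {
    "canon_grounding": 0.40,
    "character_realization": 0.35,
    "world_logic": 0.15,
    "response_usefulness": 0.10,
}

def module_score(subscores):
    """Average 1-5 sub-scores and rescale to 0-100 (assumed mapping)."""
    avg = sum(subscores.values()) / len(subscores)
    return (avg - 1.0) / 4.0 * 100.0

def overall_100(modules):
    """Weighted combination of the four module scores."""
    return sum(WEIGHTS[name] * module_score(subs)
               for name, subs in modules.items())

modules = {
    "canon_grounding": {"evidence_fit": 5, "fact_consistency": 4,
                        "timeline_consistency": 5},
    "character_realization": {"linguistic_style": 4, "tone": 4,
                              "vocabulary": 5, "emotional_expression": 4,
                              "motivation_alignment": 4},
    "world_logic": {"rule_consistency": 5,
                    "setting_relation_consistency": 5},
    "response_usefulness": {"question_fit": 5, "clarity": 4},
}
score = overall_100(modules)
```

Under this reading, a perfect 5 on every sub-score yields exactly 100, and the weights sum to 1.0 so no further normalization is needed.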

Run:

PYTHONPATH=src python scripts/eval_answer.py \
  --result-json outputs/qa/utterson_q1.json \
  --character Utterson \
  --processed-dir data/processed \
  --mode llm \
  --rounds 3

Note:

  • If you already installed with pip install -e ., you can omit PYTHONPATH=src.

Main Outputs

  • data/processed/chunks.jsonl: document chunks
  • data/processed/dialogues.jsonl: extracted dialogues with speaker guesses
  • data/processed/persona_*.json: persona profiles per character
  • data/processed/worldview_notes.jsonl: worldview memory notes
  • data/indexes/*.faiss and data/indexes/*.jsonl: vector indexes + metadata
  • outputs/qa/*.json: final answer, loop trace, merged evidence
  • outputs/qa/*_eval.json: comprehensive LLM evaluation reports (including same_character, confidence_100, and penalties)
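Since the reports are plain JSON, downstream analysis needs only the standard library. A minimal reader is sketched below; the field names follow the list above, but the exact nesting (e.g. same_character and confidence_100 at the top level) is an assumption:

```python
import json

def summarize_eval(report: dict) -> str:
    """Pull the headline fields out of a *_eval.json report dict."""
    sc = report["scorecard"]
    return (f"overall={sc['overall_100']:.1f} "
            f"same_character={report['same_character']} "
            f"confidence={report['confidence_100']}")

# Stand-in for json.load(open("outputs/qa/utterson_q1_eval.json"))
report = json.loads("""{
  "scorecard": {"overall_100": 88.4},
  "same_character": "Yes",
  "confidence_100": 90
}""")
line = summarize_eval(report)
```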

Dataset Scene Experiments (105_Persuasion)

The new batch script runs the following protocol:

  • Iterate all scenes with is_two_person_dialogue_scene=true in ficset/105_Persuasion
  • Use all prior scenes as RAG knowledge
  • Use the first speaker's utterance as query input
  • Ask the model to answer as the other character (reusing existing retrieval and meta-loop logic)
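The case-construction protocol above can be sketched with toy scene records. The field names (speakers, utterances) and the assumption that the first listed speaker opens the scene are illustrative, not the ficset schema:

```python
def build_cases(scenes):
    """For each two-person dialogue scene: all prior scenes become the
    RAG knowledge, the first speaker's opening utterance becomes the
    query, and the model answers as the other character."""
    cases = []
    for i, scene in enumerate(scenes):
        if not scene.get("is_two_person_dialogue_scene"):
            continue
        a, b = scene["speakers"]              # exactly two speakers
        cases.append({
            "knowledge": scenes[:i],          # all prior scenes as context
            "query": scene["utterances"][0],  # first speaker's line
            "query_speaker": a,
            "respond_as": b,                  # answer as the other character
        })
    return cases

scenes = [
    {"is_two_person_dialogue_scene": False,
     "speakers": ["Anne", "Mary", "Charles"], "utterances": ["..."]},
    {"is_two_person_dialogue_scene": True,
     "speakers": ["Anne", "Wentworth"],
     "utterances": ["It is eight years since we parted."]},
]
cases = build_cases(scenes)
```

The real script layers the existing retrieval and meta-loop machinery on top of each constructed case.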

Dry-run (no model API calls, just case construction):

cd /home/tleautomat/Project/fic-agent
PYTHONPATH=src python scripts/run_scene_experiments.py \
  --book 105_Persuasion \
  --dry-run \
  --run-name exp105_dryrun

Full run (all true scenes):

cd /home/tleautomat/Project/fic-agent
bash scripts/run_105_persuasion_experiments.sh \
  --run-name exp105_full \
  --style-correct

Optional: run per-case evaluation in the same batch

bash scripts/run_105_persuasion_experiments.sh \
  --run-name exp105_eval \
  --eval-mode llm \
  --eval-rounds 3

Outputs:

  • outputs/experiments/<run_name>/summary.json: global summary and per-case status
  • outputs/experiments/<run_name>/cases.jsonl: per-case records (line-delimited)
  • outputs/experiments/<run_name>/cases/<case_key>/qa_full.json: QA result for one case
  • outputs/experiments/<run_name>/cases/<case_key>/eval_full.json: evaluation result (if enabled)