# fic-agent
A no-finetune, retrieval-augmented character chatbot for fiction.
## Goals
- Factual consistency with the source text
- Character-true language style and emotional profile
- Worldview consistency (settings, rules, relationships)
## Repository layout

- `configs/`: Runtime configs (API keys, model names, flags)
- `data/raw/`: Raw novels and metadata
- `data/processed/`: Chunked text, extracted dialogues, persona notes
- `data/indexes/`: Vector indexes and graph artifacts
- `data/eval/`: Evaluation inputs and references
- `outputs/`: Run outputs and logs
- `scripts/`: CLI helpers and pipelines
- `src/fic_agent/`: Core library
## Core pipeline (high level)
- Ingest raw text
- Extract character dialogue + context
- Build persona profile (style, affect, worldview)
- Build fact and worldview indexes
- Retrieve relevant evidence + persona
- Generate response with persona-conditioned prompt
- Evaluate (facts, style, worldview)
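The stages above can be sketched as a minimal data flow. All function and class names below are illustrative stand-ins, not the actual `src/fic_agent` API; the real pipeline uses much smarter chunking and speaker attribution.

```python
# Sketch of the pipeline stages; names are illustrative, not the real fic_agent API.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str
    style_samples: list = field(default_factory=list)

def ingest(text: str) -> list[str]:
    # Chunk raw text on blank lines (the real pipeline chunks more carefully).
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def extract_dialogue(chunks: list[str], character: str) -> list[str]:
    # Naive heuristic: keep chunks that mention the character by name.
    return [c for c in chunks if character in c]

def build_persona(name: str, dialogues: list[str]) -> Persona:
    # Persona = a handful of representative utterances (style/affect source).
    return Persona(name=name, style_samples=dialogues[:3])

raw = 'Utterson walked.\n\n"I trust him," said Utterson.\n\nHyde fled.'
chunks = ingest(raw)
evidence = extract_dialogue(chunks, "Utterson")
persona = build_persona("Utterson", evidence)
print(len(chunks), len(evidence))  # → 3 2
```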
## End-to-End Run (Dataset -> Question -> Answer)
- Activate the environment and install dependencies:

```shell
conda activate fic_agent
cd /home/tleautomat/Project/fic-agent
python -m pip install -r requirements.txt
```
- Set API keys (the OpenRouter key can be reused for the LLM and embeddings):

```shell
export OPENROUTER_API_KEY="your_key"
export OPENAI_API_KEY="$OPENROUTER_API_KEY"
export EMBEDDING_API_KEY="$OPENROUTER_API_KEY"
```
- Put your novel text in `data/raw/` (an example is already included). Run preprocessing + memory building + index building:

```shell
python scripts/run_pipeline.py \
  --input "data/raw/The Strange Case of Dr. Jekyll and Mr. Hyde.txt" \
  --book-id jekyll_hyde \
  --characters "Jekyll,Hyde,Utterson,Poole" \
  --all-characters \
  --worldview-llm \
  --build-index
```
- Ask a user question with the meta-cognitive QA loop:

```shell
python scripts/run_meta_qa.py \
  --query "As Utterson, should you trust Hyde after the trampling incident?" \
  --character Utterson \
  --max-iter 3 \
  --style-correct \
  --dump-trace \
  --save-json outputs/qa/utterson_q1.json
```
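The meta-cognitive loop bounded by `--max-iter` can be sketched as draft-critique-refine. The `answer` and `critique` functions below are stand-ins for the script's real LLM calls, not its actual internals:

```python
# Sketch of a meta-cognitive QA loop; answer/critique are illustrative stand-ins
# for persona-conditioned LLM calls, not the real run_meta_qa.py internals.
def answer(query: str, evidence: list[str]) -> str:
    return f"Answer to {query!r} grounded in {len(evidence)} evidence snippets"

def critique(draft: str) -> list[str]:
    # Self-check: return a list of issues; empty means the draft is acceptable.
    return [] if "evidence" in draft else ["answer is not grounded in evidence"]

def meta_qa(query: str, evidence: list[str], max_iter: int = 3):
    trace = []
    draft = answer(query, evidence)
    for i in range(max_iter):
        issues = critique(draft)
        trace.append({"iter": i, "issues": issues})
        if not issues:
            break  # converged before exhausting the iteration budget
        draft = answer(query, evidence)  # the real loop feeds issues back in
    return draft, trace

final, trace = meta_qa("trust Hyde?", ["the trampling incident"], max_iter=3)
print(len(trace))  # → 1
```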
- (Optional) Evaluate one generated answer with the comprehensive LLM judge:

```shell
PYTHONPATH=src python scripts/eval_answer.py \
  --result-json outputs/qa/utterson_q1.json \
  --character Utterson \
  --processed-dir data/processed \
  --mode llm \
  --rounds 3 \
  --temperature 0.2 \
  --top-n 6 \
  --save-json outputs/qa/utterson_q1_eval.json
```
## Evaluation (LLM Comprehensive Judge)
The project now uses a comprehensive LLM evaluator (v2) that combines canon grounding, character realization, world logic, and answer usefulness.
Core modules (each sub-score uses a 1-5 scale):
- `canon_grounding`: evidence_fit, fact_consistency, timeline_consistency
- `character_realization`: linguistic_style, tone, vocabulary, emotional_expression, motivation_alignment
- `world_logic`: rule_consistency, setting_relation_consistency
- `response_usefulness`: question_fit, clarity
Outputs include:

- `scorecard.overall_100` (weighted 40/35/15/10)
- `same_character` (Yes/No)
- `confidence_100` (0-100)
- `issues.critical/major/minor`, `penalties`, and a per-round breakdown
- Backward-compatible fields: `scores.facts/persona/worldview/overall` and `confidence` (0-1)
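Given the 40/35/15/10 weighting, `overall_100` can be reproduced from per-module averages on the 1-5 scale. This is a sketch of the arithmetic under an assumed linear normalization (1 → 0, 5 → 100); the evaluator's exact mapping may differ:

```python
# Weighted aggregation sketch; assumes each 1-5 module mean maps linearly to 0-100.
WEIGHTS = {
    "canon_grounding": 40,
    "character_realization": 35,
    "world_logic": 15,
    "response_usefulness": 10,
}

def overall_100(module_scores: dict) -> float:
    # module_scores: mean sub-score per module on the 1-5 scale.
    total = sum(WEIGHTS.values())  # 100
    return sum(
        WEIGHTS[m] * (s - 1) / 4 * 100 for m, s in module_scores.items()
    ) / total

perfect = {m: 5 for m in WEIGHTS}
print(overall_100(perfect))  # → 100.0
```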
Run:

```shell
PYTHONPATH=src python scripts/eval_answer.py \
  --result-json outputs/qa/utterson_q1.json \
  --character Utterson \
  --processed-dir data/processed \
  --mode llm \
  --rounds 3
```
Note:

- If you already installed with `pip install -e .`, you can omit `PYTHONPATH=src`.
## Main Outputs

- `data/processed/chunks.jsonl`: document chunks
- `data/processed/dialogues.jsonl`: extracted dialogues with speaker guesses
- `data/processed/persona_*.json`: persona profiles per character
- `data/processed/worldview_notes.jsonl`: worldview memory notes
- `data/indexes/*.faiss` and `data/indexes/*.jsonl`: vector indexes + metadata
- `outputs/qa/*.json`: final answer, loop trace, merged evidence
- `outputs/qa/*_eval.json`: comprehensive LLM evaluation reports (including `same_character`, `confidence_100`, and `penalties`)
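The `*.jsonl` outputs are line-delimited JSON, so they can be streamed record by record. The example below simulates `dialogues.jsonl` with an in-memory buffer; the `speaker`/`text` field names are illustrative, not a documented schema:

```python
import io
import json

# Stand-in for data/processed/dialogues.jsonl; field names are illustrative.
buf = io.StringIO(
    '{"speaker": "Utterson", "text": "I trust him."}\n'
    '{"speaker": "Hyde", "text": "No gentleman but wishes to avoid a scene."}\n'
)

# JSONL: one JSON object per line, parsed independently.
records = [json.loads(line) for line in buf if line.strip()]

by_speaker: dict[str, list[str]] = {}
for r in records:
    by_speaker.setdefault(r["speaker"], []).append(r["text"])

print(sorted(by_speaker))  # → ['Hyde', 'Utterson']
```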
## Dataset Scene Experiments (105_Persuasion)
The new batch script runs the following protocol:
- Iterate over all scenes with `is_two_person_dialogue_scene=true` in `ficset/105_Persuasion`
- Use all prior scenes as RAG knowledge
- Use the first speaker's utterance as query input
- Ask the model to answer as the other character (reusing existing retrieval and meta-loop logic)
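The protocol above amounts to building one case per qualifying scene. This sketch assumes each scene record carries `speakers` and `utterances` fields; those names are guesses for illustration (only the `is_two_person_dialogue_scene` flag appears in the repo's docs):

```python
# Case construction sketch; "speakers"/"utterances" field names are assumptions.
def build_cases(scenes: list[dict]) -> list[dict]:
    cases = []
    for i, scene in enumerate(scenes):
        if not scene.get("is_two_person_dialogue_scene"):
            continue  # only true two-person dialogue scenes become cases
        first, other = scene["speakers"][:2]
        cases.append({
            "knowledge": scenes[:i],          # all prior scenes as RAG context
            "query": scene["utterances"][0],  # first speaker's line as query input
            "respond_as": other,              # model answers as the other character
        })
    return cases

scenes = [
    {"is_two_person_dialogue_scene": False, "speakers": ["Narrator"],
     "utterances": ["Sir Walter Elliot, of Kellynch Hall..."]},
    {"is_two_person_dialogue_scene": True, "speakers": ["Anne", "Wentworth"],
     "utterances": ["You must allow me to tell you..."]},
]
cases = build_cases(scenes)
print(len(cases), cases[0]["respond_as"])  # → 1 Wentworth
```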
Dry-run (no model API calls, just case construction):

```shell
cd /home/tleautomat/Project/fic-agent
PYTHONPATH=src python scripts/run_scene_experiments.py \
  --book 105_Persuasion \
  --dry-run \
  --run-name exp105_dryrun
```
Full run (all true scenes):

```shell
cd /home/tleautomat/Project/fic-agent
bash scripts/run_105_persuasion_experiments.sh \
  --run-name exp105_full \
  --style-correct
```
Optional: run per-case evaluation in the same batch:

```shell
bash scripts/run_105_persuasion_experiments.sh \
  --run-name exp105_eval \
  --eval-mode llm \
  --eval-rounds 3
```
Outputs:

- `outputs/experiments/<run_name>/summary.json`: global summary and per-case status
- `outputs/experiments/<run_name>/cases.jsonl`: per-case records (line-delimited)
- `outputs/experiments/<run_name>/cases/<case_key>/qa_full.json`: QA result for one case
- `outputs/experiments/<run_name>/cases/<case_key>/eval_full.json`: evaluation result (if enabled)
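Since `cases.jsonl` is line-delimited, a quick status tally is straightforward. The example uses an in-memory buffer in place of the real file, and the `status` field name is an assumption for illustration:

```python
import io
import json
from collections import Counter

# Stand-in for outputs/experiments/<run_name>/cases.jsonl; the "status"
# field name is assumed for illustration.
buf = io.StringIO(
    '{"case_key": "scene_01", "status": "ok"}\n'
    '{"case_key": "scene_02", "status": "ok"}\n'
    '{"case_key": "scene_03", "status": "error"}\n'
)

# Count per-case statuses across the whole run.
counts = Counter(json.loads(line)["status"] for line in buf if line.strip())
print(dict(counts))  # → {'ok': 2, 'error': 1}
```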