
LoRA Optimizer — Community Cache

Shared analysis results for the LoRA Optimizer ComfyUI node.

LoRA merge analysis is hardware-agnostic — the same LoRA files always produce the same conflict metrics and optimal merge config regardless of GPU tier. This dataset lets users share and reuse those results so nobody has to run the AutoTuner from scratch.


How It Works

The AutoTuner computes pairwise conflict metrics (cosine similarity, sign conflicts, subspace overlap) and tests merge parameter combinations to find the best config for a set of LoRAs. These results are keyed by content hash (SHA256[:16] of file contents) — not by filename — so they're portable across systems and private by design.
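The hashing rule above (SHA256[:16] of raw file contents) can be sketched as follows; the helper name is hypothetical, only the keying rule comes from this dataset:

```python
import hashlib

def content_hash(path: str) -> str:
    """Key a LoRA by its bytes: first 16 hex chars of SHA256(file contents)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()[:16]
```

Because the key is derived from bytes rather than the filename, renaming or moving a LoRA file never changes its cache entry.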

When community_cache=upload_and_download is set in the AutoTuner node:

  • Download: Before running analysis, the node checks this dataset for existing results. A config hit skips the entire sweep (~30–120s saved). LoRA/pair cache hits speed up the analysis phase even without a full config hit.
  • Upload: After a successful sweep (or when replaying from local memory), results are uploaded if the local score beats the current community score for that LoRA set.
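The download step can be sketched like this. It assumes a `download(filename) -> local path` callable (in practice something like `huggingface_hub.hf_hub_download` bound to this dataset repo); the function name and fallback shape are illustrative, not the node's actual code:

```python
import json
from typing import Callable, Optional

def fetch_cached_config(key: str, download: Callable[[str], str]) -> Optional[dict]:
    """Return a cached best-config dict, or None to signal: run the sweep locally.

    Any failure (missing file, network error) falls back to None, matching the
    node's behavior of silently ignoring network errors.
    """
    try:
        path = download(f"config/{key}.config.json")
        with open(path) as f:
            return json.load(f)
    except Exception:
        return None
```

A `None` result simply means the AutoTuner proceeds as if the cache did not exist.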

Privacy

LoRA filenames are never stored here. Only SHA256[:16] content hashes are used as keys. The uploaded data contains:

  • Per-prefix conflict metrics (cosine similarity, sign conflict ratios, subspace overlap)
  • Winning merge configuration (sparsification method, merge strategy, refinement level, etc.)
  • A composite quality score

No file paths, no usernames, no LoRA names.


File Structure

lora/
  {content_hash}.lora.json                  # Per-LoRA per-prefix conflict stats
pair/
  {hash_a}_{hash_b}.pair.json               # Pairwise conflict metrics (hashes sorted)
config/
  {hash_a}_{hash_b}_..._{arch}.config.json  # Best merge config + score for a LoRA set

All files include an algo_version field. Results from incompatible algorithm versions are ignored automatically.
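Based on the layout above, cache paths can be built like this (the helper names are hypothetical; the sorted-hash rule is implied by "hashes sorted" in the structure):

```python
def pair_path(hash_a: str, hash_b: str) -> str:
    """Pairwise metrics path; hashes are sorted so (a, b) and (b, a) collide."""
    a, b = sorted((hash_a, hash_b))
    return f"pair/{a}_{b}.pair.json"

def config_path(hashes: list[str], arch: str) -> str:
    """Best-config path for a whole LoRA set plus architecture preset."""
    return "config/" + "_".join(sorted(hashes)) + f"_{arch}.config.json"
```

Sorting makes the key order-independent: the same LoRA set always maps to the same file, no matter how the LoRAs are wired into the node.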


Usage

In the LoRA AutoTuner node, set community_cache to upload_and_download. This is the only sharing mode; there is no passive download-only option. If you benefit from the cache, you contribute back.

Value                  Behavior
disabled (default)     No network interaction
upload_and_download    Download precomputed results and contribute yours back

Network errors are silently ignored — the node always falls back to local computation.


Setup

One time:

pip install huggingface_hub
huggingface-cli login

The node picks up your stored token automatically. No environment variables needed for most users.

Headless/server alternative: set HF_TOKEN as an environment variable.

Then: set community_cache=upload_and_download in the AutoTuner node and run as normal. Everything else is automatic.


Score-Based Replacement

Configs are only uploaded when your local score beats the community score. Users who run more thorough sweeps (e.g. top_n=10), which faster hardware makes affordable, naturally contribute higher-quality results over time.
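The replacement rule reduces to a single comparison; the strict-greater-than tie handling below is an assumption, not documented behavior:

```python
from typing import Optional

def should_upload(local_score: float, community_score: Optional[float]) -> bool:
    """Upload when no community result exists yet, or when the local sweep's
    composite score strictly beats the community best (assumed tie rule:
    ties keep the existing entry)."""
    return community_score is None or local_score > community_score
```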
