---
base_model: answerdotai/ModernBERT-base
library_name: transformers
pipeline_tag: text-classification
tags:
  - text-classification
  - regression
  - legal
  - locus
  - modernbert
license: apache-2.0
datasets:
  - LocalLaws/LOCUS-v1.0
---
# LocalLaws/LOCUS-Problem-Salience
A ModernBERT regression model that scores local-ordinance text along the Problem Salience axis of the LOCUS (Local Ordinances Corpus, United States) dataset.
Fine-tuned from answerdotai/ModernBERT-base, the model predicts a TrueSkill mu rating distilled from pairwise LLM comparisons on the problem-salience axis, z-score normalized across the training corpus.
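Because the target is z-score normalized, a raw prediction reads as "standard deviations above the corpus mean". A minimal sketch of turning a score into an approximate corpus percentile (assumption not stated in this card: that scores are roughly normally distributed, which z-scoring alone does not guarantee):

```python
import math

def z_to_percentile(z: float) -> float:
    """Approximate percentile of a z-score under a standard normal CDF."""
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

print(z_to_percentile(0.0))  # -> 50.0 (a score at the corpus mean)
```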
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tok = AutoTokenizer.from_pretrained("LocalLaws/LOCUS-Problem-Salience")
model = AutoModelForSequenceClassification.from_pretrained("LocalLaws/LOCUS-Problem-Salience")
model.eval()

text = "No person shall keep any swine within the city limits."
enc = tok(text, return_tensors="pt", truncation=True, max_length=2048)
with torch.no_grad():
    # Single-logit regression head: the raw logit is the z-scored
    # problem-salience score.
    score = model(**enc).logits.squeeze(-1).item()
print(score)
```
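To score many ordinances at once, the `pipeline` helper can be used in batches. A sketch under stated assumptions: `function_to_apply="none"` keeps the raw regression logit (the text-classification pipeline would otherwise squash a single-logit model through a sigmoid), and the batch size and helper names are illustrative, not part of this model's API:

```python
# Batch-scoring sketch. The transformers import is deferred so the
# chunking helper can run without the library installed.

def chunked(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def score_texts(texts, batch_size=16):
    """Return one raw problem-salience score per input text."""
    from transformers import pipeline  # requires network access to the Hub

    pipe = pipeline(
        "text-classification",
        model="LocalLaws/LOCUS-Problem-Salience",
        # Keep the raw regression logit instead of a sigmoid/softmax output.
        function_to_apply="none",
        truncation=True,
    )
    scores = []
    for batch in chunked(texts, batch_size):
        scores.extend(result["score"] for result in pipe(batch))
    return scores
```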