---
pipeline_tag: text-classification
library_name: transformers
license: mit
---

# Query Complexity Classifier

This model classifies user queries based on their **complexity level** so they can be routed to an appropriate Large Language Model (LLM).

The model predicts three classes:

* **Simple**
* **Medium**
* **Complex**

It can be used as a **pre-routing layer** in AI systems where different LLMs handle different levels of query complexity.

---

## Model

* **Base model:** DistilBERT
* **Task:** Text classification (3 classes)

---

## Download and Use

You can download and load the model directly from Hugging Face using the `transformers` library.

### Install dependencies

```bash
pip install transformers torch
```

### Load the model

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "Shaheer001/Query-Complexity-Classifier"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```

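Rather than hard-coding the label order, you can usually read it from the checkpoint's configuration: `model.config.id2label` maps class indices to label names. A minimal sketch using a stand-in mapping (the dict below is illustrative; a real checkpoint ships its own):

```python
# Stand-in for model.config.id2label; a real checkpoint provides its own mapping.
id2label = {0: "Simple", 1: "Medium", 2: "Complex"}

# Recover an ordered label list from the index -> name mapping.
labels = [id2label[i] for i in sorted(id2label)]
print(labels)
```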
### Run inference

```python
text = "Explain how Kubernetes architecture works."

inputs = tokenizer(text, return_tensors="pt", truncation=True)

# Run a forward pass without gradient tracking (inference only).
model.eval()
with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring class.
prediction = torch.argmax(outputs.logits, dim=1).item()

labels = ["Simple", "Medium", "Complex"]

print("Predicted Complexity:", labels[prediction])
```

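If you also want a confidence score alongside the arg-max label (for example, to fall back to a stronger model on low-confidence predictions), apply a softmax to the logits. A minimal sketch using a stand-in logits tensor; in practice you would use `outputs.logits` from the snippet above:

```python
import torch

labels = ["Simple", "Medium", "Complex"]

# Stand-in logits for illustration only; replace with outputs.logits.
logits = torch.tensor([[0.2, 0.5, 2.1]])

# Softmax turns raw logits into a probability distribution over the classes.
probs = torch.softmax(logits, dim=1)
prediction = torch.argmax(probs, dim=1).item()

print("Predicted Complexity:", labels[prediction])  # Complex
print("Class probabilities:", [round(p, 3) for p in probs.squeeze().tolist()])
```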
---

## Example

Input:

```
Explain Kubernetes architecture
```

Output:

```
Complex
```

---

## Use Case

This model can be used to build **LLM routing systems** where queries are automatically sent to different language models depending on their complexity.

Example workflow:

User Query → Complexity Classifier → LLM Router → Selected LLM
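The routing step above can be sketched as a simple lookup from the predicted label to a target model. The endpoint names below are hypothetical placeholders, not part of this model card:

```python
# Hypothetical routing table mapping complexity labels to LLM names.
# These names are illustrative assumptions.
ROUTES = {
    "Simple": "small-llm",
    "Medium": "mid-llm",
    "Complex": "large-llm",
}

def route_query(label: str) -> str:
    """Pick an LLM for a query given its predicted complexity label."""
    # Fall back to the strongest model for unrecognized labels.
    return ROUTES.get(label, "large-llm")

print(route_query("Simple"))   # small-llm
print(route_query("Complex"))  # large-llm
```

In a real system, the fallback choice is a design decision: routing unknown labels to the strongest model trades cost for safety.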
|
| |
|