# Vitals Interpreter Model (Fine-Tuned LLM)

## Project Overview
This project implements a fine-tuned transformer model that interprets basic human vital signs and generates structured health guidance.
The model takes numerical vitals as input and produces a concise, human-readable output consisting of:
- Health status classification
- Suggested action/advice
## Objective
To build a lightweight, efficient AI system that:
- Understands structured vital inputs
- Classifies health condition into categories
- Generates consistent and controlled responses
## Model Details
- Base Model: t5-small
- Architecture: Encoder-Decoder Transformer
- Fine-Tuning Type: Supervised Fine-Tuning (SFT)
- Framework: Hugging Face Transformers
## Input Format

```
interpret vitals -> heart rate X, blood pressure Y/Z, temperature T
```

Example:

```
interpret vitals -> heart rate 125, blood pressure 150/95, temperature 100
```
## Output Format

```
Status: <Normal | High | Low | Critical> | Advice: <recommended action>
```

Example Output:

```
Status: High | Advice: Monitor and consult doctor
```
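The input and output formats above can be handled with two small helpers. This is a plain-Python sketch; `build_prompt` and `parse_output` are illustrative names, assuming the exact formats shown:

```python
def build_prompt(heart_rate, systolic, diastolic, temperature):
    """Format raw vitals into the model's expected input string."""
    return (f"interpret vitals -> heart rate {heart_rate}, "
            f"blood pressure {systolic}/{diastolic}, temperature {temperature}")

def parse_output(text):
    """Split 'Status: X | Advice: Y' back into its two fields."""
    status_part, advice_part = text.split(" | ", 1)
    return {
        "status": status_part.removeprefix("Status: ").strip(),
        "advice": advice_part.removeprefix("Advice: ").strip(),
    }
```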
## Dataset

- Type: Synthetic dataset
- Size: ~30-50 samples
- Design Approach:
  - Based on medically accepted ranges of vital signs
  - Balanced across categories: Normal, High, Low, Critical
## Why Synthetic Data?

Because no publicly available labeled text dataset exists for this task, a controlled dataset was generated to:
- Ensure consistency in output format
- Improve learning efficiency
- Avoid noisy or unstructured data
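A sketch of how such a dataset could be generated. The classification thresholds below are simplified illustrations only, not the medically accepted cut-offs the actual dataset was built from:

```python
import random

# Illustrative thresholds -- the real dataset used medically accepted
# reference ranges; these cut-offs are simplified assumptions.
def classify(hr, sys_bp, dia_bp, temp):
    if hr > 130 or sys_bp > 160 or dia_bp > 100 or temp > 102:
        return "Critical", "Emergency care required"
    if hr > 100 or sys_bp > 130 or dia_bp > 85 or temp > 99.5:
        return "High", "Monitor and consult doctor"
    if hr < 60 or sys_bp < 90 or temp < 96:
        return "Low", "Rest and recheck vitals"
    return "Normal", "No action needed"

def make_sample(rng):
    """Draw random vitals and label them with classify()."""
    hr = rng.randint(45, 150)
    sys_bp = rng.randint(85, 180)
    dia_bp = rng.randint(55, 115)
    temp = round(rng.uniform(95.0, 104.0), 1)
    status, advice = classify(hr, sys_bp, dia_bp, temp)
    return {
        "input": (f"interpret vitals -> heart rate {hr}, "
                  f"blood pressure {sys_bp}/{dia_bp}, temperature {temp}"),
        "target": f"Status: {status} | Advice: {advice}",
    }

rng = random.Random(0)  # fixed seed for reproducibility
dataset = [make_sample(rng) for _ in range(40)]
```

In practice the samples would also be checked by hand for category balance before training.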
## Training Configuration

- Epochs: 20-30
- Batch Size: 2-4
- Learning Rate: 5e-5
- Max Sequence Length: 64
- Tokenizer: AutoTokenizer (T5)
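The tokenization step for this configuration could be sketched as follows. `preprocess` is a hypothetical helper, and the commented Trainer wiring is an assumption of how the fine-tuning was set up with Hugging Face Transformers:

```python
MAX_LEN = 64  # max sequence length from the configuration above

TRAIN_CONFIG = {
    "num_train_epochs": 25,            # 20-30 per the configuration
    "per_device_train_batch_size": 4,  # 2-4 per the configuration
    "learning_rate": 5e-5,
}

def preprocess(example, tokenizer):
    """Tokenize one input/target pair for seq2seq fine-tuning."""
    model_inputs = tokenizer(example["input"], max_length=MAX_LEN, truncation=True)
    labels = tokenizer(text_target=example["target"], max_length=MAX_LEN, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# With Hugging Face Transformers the wiring would roughly be:
#   args = Seq2SeqTrainingArguments(output_dir="vitals-t5", **TRAIN_CONFIG)
#   trainer = Seq2SeqTrainer(model=model, args=args,
#                            train_dataset=tokenized_dataset)
#   trainer.train()
```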
## Evaluation
Method:
- Manual testing with unseen inputs
- Verification of:
  - Correct classification (Normal / High / Low / Critical)
  - Proper output structure
  - Relevance of advice
Sample Predictions:
| Input | Output |
|---|---|
| HR: 125, BP: 150/95, Temp: 100 | Status: High \| Advice: Monitor and consult doctor |
| HR: 72, BP: 120/80, Temp: 98.6 | Status: Normal \| Advice: No action needed |
| HR: 140, BP: 170/110, Temp: 103 | Status: Critical \| Advice: Emergency care required |
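The output-structure check described above can be automated with a small validator. This is a sketch assuming the exact `Status: <category> | Advice: <text>` format; `check_prediction` is a hypothetical helper name:

```python
import re

VALID_STATUSES = {"Normal", "High", "Low", "Critical"}

def check_prediction(text):
    """Return True if text follows 'Status: <category> | Advice: <text>'."""
    match = re.fullmatch(r"Status: (\w+) \| Advice: (.+)", text)
    if match is None:
        return False
    status, advice = match.groups()
    return status in VALID_STATUSES and len(advice.strip()) > 0
```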
## How to Use

### Installation

Install the Hugging Face Transformers library (`pip install transformers`), then import the model classes:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
```
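Inference could then be wrapped in a small helper. The repo id in the commented loading code is a placeholder, not the actual checkpoint path:

```python
def interpret_vitals(prompt, model, tokenizer, max_length=64):
    """Generate and decode the model's interpretation of one vitals prompt."""
    inputs = tokenizer(prompt, return_tensors="pt",
                       truncation=True, max_length=max_length)
    output_ids = model.generate(**inputs, max_length=max_length)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# "your-username/vitals-interpreter-t5" is a placeholder repo id --
# substitute the actual fine-tuned checkpoint:
# tokenizer = AutoTokenizer.from_pretrained("your-username/vitals-interpreter-t5")
# model = AutoModelForSeq2SeqLM.from_pretrained("your-username/vitals-interpreter-t5")
# print(interpret_vitals(
#     "interpret vitals -> heart rate 125, blood pressure 150/95, temperature 100",
#     model, tokenizer))
```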
## Author
Archee Sinha
B.Tech CSE (AI)
ABES Institute of Technology