---
language:
- en
tags:
- peft
- lora
- medical
- triage
- emergency
- text-classification
base_model: google/medgemma-4b-it
library_name: peft
pipeline_tag: text-classification
license: mit
---
# ESI-1 LoRA Adapter (MIETIC) for MedGemma 4B
## Model Summary
This repository contains a LoRA adapter (not a full standalone model) for predicting ESI-1, the highest-acuity Emergency Severity Index category, in emergency triage settings.
The adapter was trained on MIETIC using few-shot, parameter-efficient fine-tuning (PEFT) on top of MedGemma 4B (google/medgemma-4b-it).
## Model Details
- Model type: LoRA adapter
- Base model: google/medgemma-4b-it
- Task: ESI-1 prediction (emergency severity triage)
- Training approach: Specialized few-shot PEFT
- Repository owner: AdilA1016
## Files in this Repo
- adapter_config.json
- adapter_model.safetensors
- chat_template.jinja
- processor_config.json
- tokenizer_config.json
- tokenizer.json
## Intended Use
This model is intended for research and decision-support prototyping for emergency triage workflows. It is not intended to replace clinician judgment.
## Out-of-Scope / Limitations
- Not validated as an autonomous clinical decision maker.
- Performance may vary by site, population, and documentation style.
- Should not be used as the sole basis for real-time medical decisions.
## Training Data
- Dataset: MIETIC
- Domain: Emergency/clinical triage text
- Label focus: ESI-1 identification
*Add a short description of MIETIC access/curation and any preprocessing steps you applied.*
## Training Procedure
- Method: LoRA fine-tuning on MedGemma 4B
- Regime: Few-shot specialized adaptation
- Frameworks: PEFT + Transformers
- Hardware: [fill in]
- Epochs / steps: [fill in]
- Learning rate: [fill in]
- Batch size: [fill in]
- LoRA config (r, alpha, target modules): [fill in]
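Once the [fill in] values are known, they map directly onto a PEFT `LoraConfig`. The values below are placeholders for illustration only, not the settings actually used for this adapter (the real ones live in `adapter_config.json`):

```python
from peft import LoraConfig

# Illustrative values only -- replace with the settings actually used.
lora_config = LoraConfig(
    r=16,                                  # LoRA rank
    lora_alpha=32,                         # scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```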
## Evaluation
- Validation setup: [fill in]
- Primary metrics: [fill in, e.g., recall/precision/F1 for ESI-1]
- Key results: [fill in]
- Failure modes observed: [fill in]
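If the primary metrics are precision/recall/F1 for the ESI-1 positive class, they can be computed from binary predictions as follows (a plain-Python sketch; the function name and the 0/1 label encoding are assumptions):

```python
def esi1_metrics(y_true, y_pred):
    """Precision, recall, and F1 for the ESI-1 positive class (label 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

In triage, recall on ESI-1 usually matters most: a missed ESI-1 case (false negative) is far costlier than a false alarm.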
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/medgemma-4b-it"
adapter_id = "AdilA1016/esi1trainedmodel"

# The adapter repo ships its own tokenizer and chat-template files.
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)
```
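The loaded model generates free text, so some post-processing is needed to turn an output into a binary ESI-1 flag. The helper below is a hypothetical sketch (its name, the expected output wording, and the keyword matching are assumptions; adapt it to the actual chat template and label format used in training):

```python
def parse_esi1_output(generated_text: str) -> int:
    """Map free-text model output to a binary ESI-1 flag (1 = ESI-1).

    Naive keyword matching for illustration only; a real pipeline should
    constrain the output format or score label tokens directly.
    """
    text = generated_text.strip().lower()
    if text.startswith("yes") or "esi-1" in text:
        return 1
    return 0
```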
## Safety and Ethics
This model operates in a high-stakes medical context. Outputs may be incorrect, incomplete, or biased.
Human oversight by qualified clinicians is required for any practical use.
## Citation
If you use this adapter, please cite:
- MIETIC dataset/source: [fill in]
- MedGemma base model: [fill in official citation/link]
- This repository: AdilA1016/esi1trainedmodel