Tags: Text Generation · Transformers · Safetensors · phi · axolotl · Generated from Trainer · conversational · text-generation-inference
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SystemAdmin123/tiny-random-PhiForCausalLM")
model = AutoModelForCausalLM.from_pretrained("SystemAdmin123/tiny-random-PhiForCausalLM")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
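The call above decodes greedily; `generate` also accepts the usual sampling arguments if you prefer sampled output. A minimal variation of the same call (the sampling values are illustrative, not tuned for this model):

```python
# Sampled generation; reuses `model`, `tokenizer`, and `inputs` from the snippet above.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # illustrative values
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```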
See axolotl config below (axolotl version: 0.6.0).

```yaml
base_model: echarlaix/tiny-random-PhiForCausalLM
batch_size: 128
bf16: true
chat_template: tokenizer_default_fallback_alpaca
datasets:
- format: custom
  path: argilla/databricks-dolly-15k-curated-en
  type:
    field_input: original-instruction
    field_instruction: original-instruction
    field_output: original-response
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
device_map: auto
eval_sample_packing: false
eval_steps: 200
flash_attention: true
gradient_checkpointing: true
group_by_length: true
hub_model_id: SystemAdmin123/tiny-random-PhiForCausalLM
hub_strategy: checkpoint
learning_rate: 0.0002
logging_steps: 10
lr_scheduler: cosine
max_steps: 10000
micro_batch_size: 32
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: /root/.sn56/axolotl/tmp/tiny-random-PhiForCausalLM
pad_to_sequence_len: true
resize_token_embeddings_to_32x: false
sample_packing: true
save_steps: 200
save_total_limit: 1
sequence_len: 2048
special_tokens:
  pad_token: <|endoftext|>
tokenizer_type: GPTNeoXTokenizerFast
torch_dtype: bf16
training_args_kwargs:
  hub_private_repo: true
trust_remote_code: true
val_set_size: 0.1
wandb_entity: ''
wandb_mode: online
wandb_name: echarlaix/tiny-random-PhiForCausalLM-argilla/databricks-dolly-15k-curated-en
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: default
warmup_ratio: 0.05
```
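For reference, here is a minimal sketch of how the `format` / `no_input_format` strings in the `datasets` entry above render a single record into a prompt. The `render_prompt` helper and the example record are illustrative only and are not part of axolotl; note that `field_input` and `field_instruction` both point at the `original-instruction` column in this config.

```python
# Illustrative sketch only: mirrors the `format` / `no_input_format` strings from
# the config above. The helper and the example record are hypothetical.
def render_prompt(record: dict) -> str:
    instruction = record["original-instruction"]  # field_instruction
    inp = record["original-instruction"]          # field_input (same column in this config)
    if inp:
        return "{instruction} {input}".format(instruction=instruction, input=inp)
    return "{instruction}".format(instruction=instruction)

example = {
    "original-instruction": "Who wrote The Hobbit?",
    "original-response": "J.R.R. Tolkien.",  # field_output, used as the completion target
}
print(render_prompt(example))
```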
tiny-random-PhiForCausalLM
This model is a fine-tuned version of echarlaix/tiny-random-PhiForCausalLM on the argilla/databricks-dolly-15k-curated-en dataset. It achieves the following results on the evaluation set:
- Loss: 6.3360
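Assuming the reported loss is the usual mean token-level cross-entropy, this corresponds to an evaluation perplexity of about exp(6.3360) ≈ 565:

```python
import math

eval_loss = 6.3360
print(math.exp(eval_loss))  # ~564.5, the eval perplexity implied by the loss
```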
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
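The total train batch size reported above follows from the per-device batch size and the device count; a gradient accumulation factor of 1 is assumed here, since none is listed:

```python
# Assumed breakdown of the effective batch size; gradient_accumulation_steps = 1
# is an assumption, as it is not listed above.
train_batch_size = 32  # per device
num_devices = 4
gradient_accumulation_steps = 1
print(train_batch_size * num_devices * gradient_accumulation_steps)  # 128
```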
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| No log | 0.1 | 1 | 6.9373 |
| 6.3375 | 20.0 | 200 | 6.3360 |
Framework versions
- Transformers 4.48.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
Model tree for SystemAdmin123/tiny-random-PhiForCausalLM
- Base model: echarlaix/tiny-random-PhiForCausalLM
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SystemAdmin123/tiny-random-PhiForCausalLM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```
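With chat-style input like this, recent versions of the text-generation pipeline return one dict per prompt whose `generated_text` field holds the whole conversation, including the model's reply. A small usage sketch, assuming that output shape:

```python
# Assumes the chat-style call above and the usual text-generation pipeline output shape.
result = pipe(messages)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```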