# Function Calling Fine-tuned Gemma Model
This is a fine-tuned version of google/gemma-2-2b-it, optimized for function calling with an explicit "thinking" step generated before each call.
## Model Details
- Base model: google/gemma-2-2b-it
- Fine-tuned with LoRA for function-calling capability
- Emits a "thinking" (reasoning) step before each function call
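The card does not document the exact output format, but as a purely hypothetical illustration of what a "thinking" step before a function call can look like (the <think>/<tool_call> tags and the get_weather tool are assumptions, not documented behavior):

```text
<think>The user wants the current weather in Paris, so I should call
get_weather with location="Paris".</think>
<tool_call>{"name": "get_weather", "arguments": {"location": "Paris"}}</tool_call>
```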
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig
model_name = "sethderrick/gemma-2-2B-it-thinking-function_calling-V0"
# Load the model
config = PeftConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, model_name)
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Use for function calling
# ...
```
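Once the adapter is loaded, generation works like any other transformers causal LM. Below is a minimal sketch; the user prompt is hypothetical, and it assumes the tokenizer ships gemma-2's chat template (the exact tool/schema prompt this fine-tune expects is not documented on this card):

```python
import torch

# Hypothetical user request; adapt to however you present tools to the model.
messages = [
    {"role": "user", "content": "What's the weather like in Paris right now?"},
]

# Build the prompt with the tokenizer's chat template, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens: the thinking step plus any
# function call the model emits.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If you prefer a standalone checkpoint without a PEFT dependency at inference time, the LoRA weights can be folded into the base model with `model = model.merge_and_unload()` after loading.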