How to use avanish07/sci-mcq-LLMs with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b")
model = PeftModel.from_pretrained(base_model, "avanish07/sci-mcq-LLMs")
```
Training procedure
Sci-MCQ-LLMs is a language model fine-tuned from the tiiuae/falcon-7b base model on a dataset of multiple-choice questions (MCQs) covering science subjects. Fine-tuning was performed with the Hugging Face Transformers and PEFT libraries using supervised training.
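The exact training data format is not published. A minimal sketch of how a science MCQ might be serialized into a prompt/completion pair for supervised fine-tuning (the template and field layout here are assumptions, not the released training script):

```python
def format_mcq(question, options, answer):
    """Serialize one MCQ into a prompt/completion pair for supervised fine-tuning.

    NOTE: this template is an assumption for illustration; the actual format
    used to train avanish07/sci-mcq-LLMs is not documented.
    """
    letters = "ABCDE"
    option_lines = "\n".join(
        f"{letters[i]}. {opt}" for i, opt in enumerate(options)
    )
    prompt = f"Question: {question}\n{option_lines}\nAnswer:"
    completion = f" {answer}"  # target continuation the model learns to emit
    return prompt, completion

prompt, completion = format_mcq(
    "Which planet is known as the Red Planet?",
    ["Venus", "Mars", "Jupiter"],
    "B",
)
```

During training, `prompt + completion` would be tokenized as one sequence, with the loss typically restricted to the completion tokens.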
The fine-tuned model generates predictions for science-related MCQs based on user input. Its falcon-7b base model has 7 billion parameters, making it suitable for complex language-understanding tasks.
To use the Sci-MCQ-LLMs model, provide a question together with its multiple-choice options; the model tokenizes the input and generates the option it judges most likely as a text continuation.
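Because the model answers by continuing the prompt with free text, the generated continuation has to be mapped back to one of the option letters. A small helper for that step (this function is an assumption for illustration; it is not part of the released model):

```python
import re

def extract_choice(generated_text, num_options=4):
    """Return the first standalone option letter (A, B, C, ... up to
    num_options) found in the model's generated continuation, or None."""
    valid = "ABCDE"[:num_options]
    match = re.search(rf"\b([{valid}])\b", generated_text.strip().upper())
    return match.group(1) if match else None
```

For example, a continuation like `" B. Carbon dioxide"` maps to option `"B"`, and text containing no standalone option letter yields `None`.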
Framework versions
- PEFT 0.5.0.dev0