---
library_name: gemma.cpp
license: gemma
extra_gated_heading: Access RecurrentGemma on Hugging Face
extra_gated_prompt: To access RecurrentGemma on Hugging Face, you’re required to review
  and agree to Google’s usage license. To do this, please ensure you’re logged-in
  to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# RecurrentGemma Model Card

**Model Page**: [RecurrentGemma](https://ai.google.dev/gemma/docs/recurrentgemma/model_card)

This model card corresponds to the 2B instruction-tuned version of the RecurrentGemma model for gemma.cpp, the C++ implementation of the Gemma model family. You can also visit the model card of the [2B base model](https://huggingface.co/google/recurrentgemma-2b-sfp-cpp).
SFP checkpoints are pre-quantized to 8-bit using Switching Floating Point (SFP), a method that switches between eXmY floating point representations based on the values stored. gemma.cpp uses the [Highway](https://github.com/google/highway) SIMD library for optimal performance across most CPU architectures.
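The switching idea can be illustrated with a minimal sketch: round each value to a reduced-precision float, choosing between two hypothetical 8-bit-style layouts by magnitude, so small values keep more mantissa bits and large values keep more exponent range. This is a toy illustration only; the format splits, threshold, and function names below are assumptions for exposition, not gemma.cpp's actual SFP codec.

```python
import math

def quantize(x, mant_bits, exp_min, exp_max):
    """Round x to a float with `mant_bits` mantissa bits and a clamped
    exponent range. Purely illustrative, not gemma.cpp's implementation."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    m, e = math.frexp(abs(x))          # abs(x) == m * 2**e, with m in [0.5, 1)
    e = max(exp_min, min(exp_max, e))  # clamp exponent to the format's range
    m = round(m * (1 << mant_bits)) / (1 << mant_bits)  # drop mantissa bits
    return sign * math.ldexp(m, e)

def sfp_toy(x, threshold=1.0):
    """Switch between two hypothetical layouts based on magnitude:
    more mantissa precision for small values, wider exponent range for large."""
    if abs(x) < threshold:
        return quantize(x, mant_bits=5, exp_min=-4, exp_max=1)  # "e2m5"-like
    return quantize(x, mant_bits=3, exp_min=-8, exp_max=8)      # "e4m3"-like
```

The design point the sketch makes is that a single fixed eXmY split wastes bits: weights clustered near zero benefit from mantissa precision, while outliers need exponent range, so switching the layout per value lowers quantization error at the same 8-bit budget.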
Visit [https://github.com/google/gemma.cpp](https://github.com/google/gemma.cpp) for more information and usage examples!

**Resources and technical documentation:**

* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [RecurrentGemma on Kaggle](https://www.kaggle.com/models/google/recurrentgemma)

**Terms of Use:** [Terms](https://www.kaggle.com/models/google/recurrentgemma/license/consent/verify/huggingface?returnModelRepoId=google/recurrentgemma-2b-it-sfp-cpp)

**Authors:** Google