---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

![GRaPE_Logo](https://cdn-uploads.huggingface.co/production/uploads/66960602f0ffd8e3a381106a/XjHkzctrE41e1qqJYeDzN.png)

_The **G**eneral **R**easoning **A**gent (for) **P**roject **E**xploration_

# The GRaPE Family

| Model | Size | Modalities | Domain |
| :--- | :--- | :--- | :--- |
| **GRaPE Flash** | 7B A1B | Text in, Text out | High-Speed Applications |
| **GRaPE Mini** | 3B | Text + Image + Video in, Text out | On-Device Deployment |
| **GRaPE Nano** | 700M | Text in, Text out | Extreme Edge Deployment |

***

# Capabilities

The GRaPE Family was trained on about **14 billion** tokens of data after pre-training. Roughly half of it was code-related tasks, with the rest weighted heavily toward STEAM, giving the models a sound logical basis.

***

GRaPE Flash and Nano are monomodal models, accepting only text. GRaPE Mini, trained most recently, also supports image and video inputs.

***

## Reasoning Modes

As GRaPE Mini is the only model in the family that thinks, it has *some* support for reasoning modes. In testing, these modes work inconsistently, likely due to inefficient dataset formatting for them. To use a thinking mode, you need an XML tag, ``, which can take these values:

- **Minimal**: Skip thinking *(fails most of the time, so use it with care)*
- **Low**: Think below 1024 tokens
- **Medium**: Think between 1024 and 8192 tokens
- **High**: Think for any amount above 8192 tokens

Place the thinking mode at the *end* of your prompt, like this:

```
Build me a website called "Aurora Beats."
```