Local Models
Install from winget
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/mixtral
# Run inference directly in the terminal:
llama-cli -hf cortexso/mixtral
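Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API, so any OpenAI-style client can talk to the local model. The snippet below is a minimal sketch that assumes the server is listening on its default address (`http://localhost:8080`); adjust it if you started the server with different `--host`/`--port` values.

```sh
# Send a chat completion request to the local llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Summarize what a Mixture of Experts model is."}
        ]
      }'
```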
Download pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/mixtral
# Run inference directly in the terminal:
./llama-cli -hf cortexso/mixtral

Build from source
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
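# Optional (an assumption, not part of the original card): enable a GPU backend
# at configure time, e.g. -DGGML_CUDA=ON for NVIDIA or -DGGML_METAL=ON on
# Apple Silicon, then run the build step below as usual.
# cmake -B build -DGGML_CUDA=ON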
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/mixtral
# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/mixtral
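For a quick non-interactive smoke test of the freshly built binary, `llama-cli` can also take a prompt directly. The flags below (`-p` for the prompt text, `-n` for the number of tokens to generate) are standard llama.cpp options; this is a sketch rather than part of the original card.

```sh
# Generate a single completion from a prompt instead of starting an interactive chat.
./build/bin/llama-cli -hf cortexso/mixtral \
  -p "Explain what a Sparse Mixture of Experts model is in two sentences." \
  -n 128
```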
Run with Docker
docker model run hf.co/cortexso/mixtral

The Mixtral-7x8B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-7x8B outperforms Llama 2 70B on most benchmarks we tested.
| No | Variant | Cortex CLI command |
|---|---|---|
| 1 | 7x8b-gguf | cortex run mixtral:7x8b-gguf |
cortexhub/mixtral
cortex run mixtral
We're not able to determine the quantization variants.
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/mixtral
# Run inference directly in the terminal:
llama-cli -hf cortexso/mixtral
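Whichever install route you use, the same local server sits behind all of them. A quick way to confirm it is up before opening the bundled web UI or pointing a client at it (again assuming the default `http://localhost:8080` address):

```sh
# Liveness check against the local llama-server; the web UI is served from the
# same address and can be opened directly in a browser.
curl http://localhost:8080/health
```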