Text Generation
Transformers
Safetensors
GGUF
English
Chinese
glm4_moe_lite
glm4
prism
Mixture of Experts
conversational
Instructions to use Ex0bit/GLM-4.7-Flash-PRISM with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Ex0bit/GLM-4.7-Flash-PRISM with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Ex0bit/GLM-4.7-Flash-PRISM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Ex0bit/GLM-4.7-Flash-PRISM")
model = AutoModelForCausalLM.from_pretrained("Ex0bit/GLM-4.7-Flash-PRISM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
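If you would rather see tokens printed as they are generated instead of waiting for generate() to finish, a minimal streaming sketch that reuses the tokenizer, model, and inputs variables from the snippet above (max_new_tokens is illustrative):

from transformers import TextStreamer

# Decodes and prints tokens to stdout as they are produced; skip_prompt hides the input prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=256)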
- llama-cpp-python
How to use Ex0bit/GLM-4.7-Flash-PRISM with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Ex0bit/GLM-4.7-Flash-PRISM",
    filename="GLM-4.7-Flash-PRISM-GGUFs/GLM-4.7-Flash-PRISM-IQ4_NL.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
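create_chat_completion returns an OpenAI-style response dict, so the reply text can be read from the first choice; a short sketch continuing from the snippet above (the question is just an example):

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
# The assistant's reply is in the first choice's message content
print(response["choices"][0]["message"]["content"])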
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Ex0bit/GLM-4.7-Flash-PRISM with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
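Once llama-server is running it serves an OpenAI-compatible API, by default on port 8080; a minimal Python sketch using the requests package (adjust the URL if you started the server with a different --port):

import requests

# llama-server hosts a single model, so no "model" field is required in the request
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={"messages": [{"role": "user", "content": "What is the capital of France?"}]},
)
print(resp.json()["choices"][0]["message"]["content"])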
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Use Docker
docker model run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use Ex0bit/GLM-4.7-Flash-PRISM with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Ex0bit/GLM-4.7-Flash-PRISM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Ex0bit/GLM-4.7-Flash-PRISM",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
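The same server can also be called from Python with the official openai client, since vLLM implements the OpenAI chat-completions protocol; a minimal sketch assuming the server above is running on localhost:8000 (the api_key value is a placeholder, vLLM only checks it if you configured one):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
completion = client.chat.completions.create(
    model="Ex0bit/GLM-4.7-Flash-PRISM",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(completion.choices[0].message.content)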
Use Docker
docker model run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
- SGLang
How to use Ex0bit/GLM-4.7-Flash-PRISM with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Ex0bit/GLM-4.7-Flash-PRISM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Ex0bit/GLM-4.7-Flash-PRISM",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Ex0bit/GLM-4.7-Flash-PRISM" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Ex0bit/GLM-4.7-Flash-PRISM",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
- Ollama
How to use Ex0bit/GLM-4.7-Flash-PRISM with Ollama:
ollama run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
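While Ollama is running it also exposes an OpenAI-compatible HTTP API on its default port 11434, so the pulled model can be queried from Python; a minimal sketch using the requests package (the question is just an example):

import requests

# Ollama registers the model under the hf.co/... name used by "ollama run"
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])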
- Unsloth Studio
How to use Ex0bit/GLM-4.7-Flash-PRISM with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Ex0bit/GLM-4.7-Flash-PRISM to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Ex0bit/GLM-4.7-Flash-PRISM to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Ex0bit/GLM-4.7-Flash-PRISM to start chatting
- Pi
How to use Ex0bit/GLM-4.7-Flash-PRISM with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        { "id": "Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M" }
      ]
    }
  }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use Ex0bit/GLM-4.7-Flash-PRISM with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use Ex0bit/GLM-4.7-Flash-PRISM with Docker Model Runner:
docker model run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
- Lemonade
How to use Ex0bit/GLM-4.7-Flash-PRISM with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Run and chat with the model
lemonade run user.GLM-4.7-Flash-PRISM-Q4_K_M
List all available models
lemonade list
Add tokenizer_config.json to root for TGI compatibility
tokenizer_config.json (new file, +34 -0):
{
  "backend": "tokenizers",
  "clean_up_tokenization_spaces": false,
  "do_lower_case": false,
  "eos_token": "<|endoftext|>",
  "extra_special_tokens": [
    "<|endoftext|>",
    "[MASK]",
    "[gMASK]",
    "[sMASK]",
    "<sop>",
    "<eop>",
    "<|system|>",
    "<|user|>",
    "<|assistant|>",
    "<|observation|>",
    "<|begin_of_image|>",
    "<|end_of_image|>",
    "<|begin_of_video|>",
    "<|end_of_video|>",
    "<|begin_of_audio|>",
    "<|end_of_audio|>",
    "<|begin_of_transcription|>",
    "<|end_of_transcription|>"
  ],
  "is_local": false,
  "model_max_length": 128000,
  "model_specific_special_tokens": {},
  "pad_token": "<|endoftext|>",
  "padding_side": "left",
  "remove_space": false,
  "tokenizer_class": "TokenizersBackend",
  "chat_template": "[gMASK]<sop>\n{%- if tools -%}\n<|system|>\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{% for tool in tools %}\n{{ tool | tojson(ensure_ascii=False) }}\n{% endfor %}\n</tools>\n\nFor each function call, output the function name and arguments within the following XML format:\n<tool_call>{function-name}<arg_key>{arg-key-1}</arg_key><arg_value>{arg-value-1}</arg_value><arg_key>{arg-key-2}</arg_key><arg_value>{arg-value-2}</arg_value>...</tool_call>{%- endif -%}\n{%- macro visible_text(content) -%}\n    {%- if content is string -%}\n        {{- content }}\n    {%- elif content is iterable and content is not mapping -%}\n        {%- for item in content -%}\n            {%- if item is mapping and item.type == 'text' -%}\n                {{- item.text }}\n            {%- elif item is string -%}\n                {{- item }}\n            {%- endif -%}\n        {%- endfor -%}\n    {%- else -%}\n        {{- content }}\n    {%- endif -%}\n{%- endmacro -%}\n{%- set ns = namespace(last_user_index=-1) %}\n{%- for m in messages %}\n    {%- if m.role == 'user' %}\n        {% set ns.last_user_index = loop.index0 -%}\n    {%- endif %}\n{%- endfor %}\n{% for m in messages %}\n{%- if m.role == 'user' -%}<|user|>{{ visible_text(m.content) }}\n{%- elif m.role == 'assistant' -%}\n<|assistant|>\n{%- set reasoning_content = '' %}\n{%- set content = visible_text(m.content) %}\n{%- if m.reasoning_content is string %}\n    {%- set reasoning_content = m.reasoning_content %}\n{%- else %}\n    {%- if '</think>' in content %}\n        {%- set reasoning_content = content.split('</think>')[0].rstrip('\\n').split('<think>')[-1].lstrip('\\n') %}\n        {%- set content = content.split('</think>')[-1].lstrip('\\n') %}\n    {%- endif %}\n{%- endif %}\n{%- if ((clear_thinking is defined and not clear_thinking) or loop.index0 > ns.last_user_index) and reasoning_content -%}\n{{ '<think>' + reasoning_content.strip() + '</think>'}}\n{%- else -%}\n{{ '</think>' }}\n{%- endif -%}\n{%- if content.strip() -%}\n{{ content.strip() }}\n{%- endif -%}\n{% if m.tool_calls %}\n{% for tc in m.tool_calls %}\n{%- if tc.function %}\n    {%- set tc = tc.function %}\n{%- endif %}\n{{- '<tool_call>' + tc.name -}}\n{% set _args = tc.arguments %}{% for k, v in _args.items() %}<arg_key>{{ k }}</arg_key><arg_value>{{ v | tojson(ensure_ascii=False) if v is not string else v }}</arg_value>{% endfor %}</tool_call>{% endfor %}\n{% endif %}\n{%- elif m.role == 'tool' -%}\n{%- if m.content is string -%}\n{%- if loop.first or (messages[loop.index0 - 1].role != \"tool\") %}\n    {{- '<|observation|>' }}\n{%- endif %}\n{{- '<tool_response>' }}\n{{- m.content }}\n{{- '</tool_response>' }}\n{%- else -%}\n<|observation|>{% for tr in m.content %}\n<tool_response>{{ tr.output if tr.output is defined else tr }}</tool_response>{% endfor -%}\n{% endif -%}\n{%- elif m.role == 'system' -%}\n<|system|>{{ visible_text(m.content) }}\n{%- endif -%}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n    <|assistant|>{{- '</think>' if (enable_thinking is defined and not enable_thinking) else '<think>' -}}\n{%- endif -%}"
}
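The chat_template above is what Transformers and TGI read from this file; a quick way to sanity-check it is to render it without tokenizing. A minimal sketch (the message and the enable_thinking flag are illustrative; extra keyword arguments to apply_chat_template are forwarded to the Jinja template):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Ex0bit/GLM-4.7-Flash-PRISM")
messages = [{"role": "user", "content": "Who are you?"}]

# Render to text to inspect the [gMASK]<sop> prefix, role tags, and <think> handling
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
print(prompt)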