How to use NchuNLP/taide-qa with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="NchuNLP/taide-qa")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NchuNLP/taide-qa")
model = AutoModelForCausalLM.from_pretrained("NchuNLP/taide-qa")
```
How to use NchuNLP/taide-qa with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "NchuNLP/taide-qa"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "NchuNLP/taide-qa",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
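The same completion request can be made from Python without extra dependencies. This is a minimal stdlib sketch; the helper name `build_request` is mine, and it assumes the vLLM server started above is listening on `localhost:8000`:

```python
# Hypothetical client sketch: build the same completion request the curl
# example sends, using only the Python standard library.
import json
import urllib.request


def build_request(prompt: str, model: str = "NchuNLP/taide-qa",
                  base_url: str = "http://localhost:8000") -> urllib.request.Request:
    """Construct a POST request for the OpenAI-compatible /v1/completions endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": 512,
        "temperature": 0.5,
    }
    return urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_request("Once upon a time,")
    # Requires a running vLLM server on localhost:8000.
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["text"])
```

Because the endpoint is OpenAI-compatible, the same helper also works against the SGLang server shown in the next section by passing `base_url="http://localhost:30000"`.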
How to use NchuNLP/taide-qa with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "NchuNLP/taide-qa" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "NchuNLP/taide-qa",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
```shell
# Or run the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "NchuNLP/taide-qa" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "NchuNLP/taide-qa",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
How to use NchuNLP/taide-qa with Docker Model Runner:

```shell
docker model run hf.co/NchuNLP/taide-qa
```
This repository is publicly accessible, but you have to accept the conditions to access its files and content.
A QA model developed from taide/b.1.0.0.

Prompt template:

```
"[INST] <<SYS>>\n請根據提供的問題,從提供的內文中尋找答案並回答,回答時只需要輸出答案,不需輸出其他資訊,如果從提供的內文無法找到答案,請回答\"無法回答\"\n<</SYS>>\n\n問題:\n{query}\n\n內文:\n{doc}\n [/INST]答案:\n"
```

The Chinese system prompt instructs the model to answer the question using only the provided passage, to output nothing but the answer, and to reply "無法回答" ("unable to answer") when the passage does not contain the answer.

Replace {query} with the input question and {doc} with the passage text. A reply of "無法回答" means the passage cannot answer the question.
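A small helper for filling the template and interpreting replies might look like this. It is a sketch: `build_prompt` and `parse_answer` are hypothetical names of my own, not part of the model's release; only the template string itself comes from this card.

```python
from typing import Optional

# The model card's prompt template, verbatim.
TEMPLATE = (
    "[INST] <<SYS>>\n請根據提供的問題,從提供的內文中尋找答案並回答,"
    "回答時只需要輸出答案,不需輸出其他資訊,如果從提供的內文無法找到答案,"
    "請回答\"無法回答\"\n<</SYS>>\n\n問題:\n{query}\n\n內文:\n{doc}\n [/INST]答案:\n"
)


def build_prompt(query: str, doc: str) -> str:
    """Fill {query} and {doc} into the instruction template."""
    return TEMPLATE.format(query=query, doc=doc)


def parse_answer(reply: str) -> Optional[str]:
    """Return the extracted answer, or None when the model replies 無法回答."""
    reply = reply.strip()
    return None if reply == "無法回答" else reply
```

The string returned by `build_prompt` is what you would pass as the `prompt` to any of the serving options above; `parse_answer` turns the "unable to answer" sentinel into `None` so callers can branch on it.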