| | ---
|
| | license: apache-2.0
|
| | ---
|
| |
|
The base model of AutoCoder_QW_7B is [CodeQwen1.5-7b](https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat).
|
In this version, we fixed an issue where the model would only start the code interpreter when explicitly asked to *verify* its code.
|
You can try the code interpreter function in the [AutoCoder GitHub repo](https://github.com/bin123apple/AutoCoder/tree/main/Web_demo).
|
For simple code generation without the code interpreter, try the following script:
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "Bin12345/AutoCoder_QW_7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             device_map="auto")

Input = ""  # input your question here

messages = [
    {'role': 'user', 'content': Input}
]
inputs = tokenizer.apply_chat_template(messages,
                                       add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
# Greedy decoding (do_sample=False); raise max_new_tokens for longer answers
outputs = model.generate(inputs,
                         max_new_tokens=1024,
                         do_sample=False,
                         num_return_sequences=1,
                         eos_token_id=tokenizer.eos_token_id)
# Strip the prompt tokens and decode only the newly generated answer
answer = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(answer)
```
|
|