Corelyn committed on
Commit 3f7839c · verified · Parent(s): 0a6004f

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +72 -0
---
license: apache-2.0
tags:
- text-generation
- instruction-tuned
- maincoder
- gguf
- chatbot
library_name: llama.cpp
language: en
datasets:
- custom
model-index:
- name: Corelyn Neosepcyn Leon
  results: []
---

# Corelyn NeoMini GGUF Model

## Specifications

- Model Name: Corelyn Neosepcyn Leon
- Base Name: Leon_1B
- Type: Instruct / Fine-tuned
- Architecture: Maincoder
- Size: 1B parameters
- Organization: Corelyn

## Model Overview

Corelyn Neosepcyn Leon is a 1-billion-parameter LLaMA-based instruction-tuned model, designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: Maincoder
- Parameter count: 1B

### Suitable Applications

This model is suitable for applications such as:

- Algorithms
- Websites
- Python, JavaScript, Java...
- Code and text generation

## Usage

Download from: [LeonCode_1B](https://huggingface.co/CorelynAI/LeonCode/blob/main/LeonCode_1B.gguf)

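The file can also be fetched from the command line. A minimal sketch using the Hugging Face CLI, assuming the repo and filename from the link above:

```shell
# Install the Hugging Face CLI, then download the GGUF into the current directory
pip install -U "huggingface_hub[cli]"
huggingface-cli download CorelynAI/LeonCode LeonCode_1B.gguf --local-dir .
```

The `--local-dir` flag places the file where you point `model_path` below instead of the hidden cache directory.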
```python
# pip install llama-cpp-python

from llama_cpp import Llama

# Load the model (update the path to where your .gguf file is)
llm = Llama(model_path="path/to/the/file/LeonCode_1B.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a python sorting algorithm"}]
)

# Print the generated text; create_chat_completion returns a dict, not an object
print(response["choices"][0]["message"]["content"])
```
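Note that `create_chat_completion` returns an OpenAI-style nested dict rather than an object with attributes. A small helper (the name `extract_reply` is our own, not part of llama-cpp-python) makes the extraction explicit, shown here against a sample dict of the same shape:

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant text out of an OpenAI-style chat-completion dict."""
    return response["choices"][0]["message"]["content"]

# Minimal example of the dict shape llama-cpp-python returns
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "def sort(xs): return sorted(xs)"}}
    ]
}
print(extract_reply(sample))
```

The same helper works on any response from the snippet above, since llama.cpp's chat API mirrors the OpenAI response schema.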