---
license: apache-2.0
tags:
- text-generation
- instruction-tuned
- maincoder
- gguf
- chatbot
library_name: llama.cpp
language: en
datasets:
- custom
model-index:
- name: Corelyn Neosepcyn Leon
  results: []
base_model:
- yourGGUF/Maincoder-1B_GGUF
---

![logo](./images/neospecyn.png)

# Corelyn Leon GGUF Model

## Specifications

- Model Name: Corelyn Neosepcyn Leon
- Base Name: Leon_1B
- Type: Instruct / Fine-tuned
- Architecture: Maincoder
- Size: 1B parameters
- Organization: Corelyn

## Model Overview

Corelyn Neosepcyn Leon is a 1-billion-parameter, LLaMA-based, instruction-tuned model designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: Maincoder
- Parameter count: 1B

### This model is suitable for applications such as:

- Algorithm implementation
- Website development
- Python, JavaScript, Java, and other languages
- Code and text generation

## Usage

Download the model from: [LeonCode_1B](https://huggingface.co/CorelynAI/LeonCode/blob/main/LeonCode_1B.gguf)

```python
# pip install llama-cpp-python

from llama_cpp import Llama

# Load the model (update the path to point at your .gguf file)
llm = Llama(model_path="path/to/the/file/LeonCode_1B.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a python sorting algorithm"}]
)

# Print the generated text (the response is a plain dict, not an object)
print(response["choices"][0]["message"]["content"])
```
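Note that `create_chat_completion` returns a plain Python dict following the OpenAI-style response shape, so the reply is accessed with key lookups rather than attributes. A minimal sketch of extracting the reply text (the `extract_reply` helper and the sample dict below are illustrative, not part of llama-cpp-python):

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of an OpenAI-style chat completion dict."""
    return response["choices"][0]["message"]["content"]

# A sample dict shaped like create_chat_completion() output (illustrative values):
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "def sort(xs):\n    return sorted(xs)"}}
    ]
}

print(extract_reply(sample))
```

Keeping the extraction in one helper makes it easy to add error handling (e.g. an empty `choices` list) in one place.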