Corelyn committed on commit f14203a (verified) · 1 parent: 1f871cd

Update README.md

Files changed (1): README.md (+72, −71)
---
license: apache-2.0
tags:
- text-generation
- instruction-tuned
- llama
- gguf
- chatbot
library_name: llama.cpp
language: en
datasets:
- custom
model-index:
- name: Corelyn NeoMini
  results: []
---

![logo](./images/neospecyn.png)

# Corelyn NeoMini GGUF Model

## Specifications

- Model Name: Corelyn NeoMini
- Base Name: NeoMini-3.2
- Type: Instruct / Fine-tuned
- Architecture: LLaMA
- Size: 3B parameters
- Organization: Corelyn

## Model Overview

Corelyn NeoMini is a 3-billion-parameter LLaMA-based instruction-tuned model designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: LLaMA
- Parameter count: 3B

### This model is suitable for applications such as:

- Chatbots and conversational AI
- Knowledge retrieval and Q&A
- Code and text generation
- Instruction-following tasks

## Usage

Download from: [NeoMini3.2](https://huggingface.co/CorelynAI/NeoMini/resolve/main/NeoMini_3B.gguf)

```python
# pip install llama-cpp-python

from llama_cpp import Llama

# Load the model (update the path to where your .gguf file is)
llm = Llama(model_path="path/to/the/file/NeoMini_3B.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a haiku about AI"}]
)

# Print the generated text (the response is a plain dict, not an object)
print(response["choices"][0]["message"]["content"])
```
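
For chatbot use, note that `create_chat_completion` is stateless: a multi-turn conversation means keeping the message history yourself and resending it on every call. A minimal sketch of that pattern (the `ask` helper is illustrative, not part of this model card; it assumes the same llama-cpp-python API shown above):

```python
# Multi-turn chat: keep the running history and resend it each call.
# `ask` is an illustrative helper, not part of the llama-cpp-python API.

def ask(llm, history, user_message):
    """Append the user turn, query the model, and record the reply."""
    history.append({"role": "user", "content": user_message})
    response = llm.create_chat_completion(messages=history)
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```

Because each call sends the full history, the model sees all prior turns; trim or summarize old messages if the conversation approaches the model's context window.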