---
license: apache-2.0
tags:
- text-generation
- instruction-tuned
- llama
- gguf
- chatbot
library_name: llama.cpp
language: en
datasets:
- custom
model-index:
- name: Corelyn NeoMini
  results: []
---

# Corelyn NeoMini GGUF Model

## Specifications

- Model Name: Corelyn NeoMini
- Base Name: NeoMini-3.2
- Type: Instruct / Fine-tuned
- Architecture: LLaMA
- Size: 3B parameters
- Organization: Corelyn

## Model Overview

Corelyn NeoMini is a 3-billion-parameter, LLaMA-based, instruction-tuned model designed for general-purpose assistant tasks and knowledge extraction. It is a fine-tuned variant optimized for instruction-following use cases.

- Fine-tuning type: Instruct
- Base architecture: LLaMA
- Parameter count: 3B

### Use Cases

This model is suitable for applications such as:

- Chatbots and conversational AI
- Knowledge retrieval and Q&A
- Code and text generation
- Instruction-following tasks

## Usage

Download the model from: [NeoMini3.2](https://huggingface.co/CorelynAI/NeoMini/resolve/main/NeoMini_3B.gguf)
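
As an alternative to downloading the file manually, the same file can be fetched programmatically. This is a minimal sketch using `huggingface_hub`; the repo id `CorelynAI/NeoMini` and the filename `NeoMini_3B.gguf` are inferred from the download link above:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Repo id and filename taken from the download link above.
REPO_ID = "CorelynAI/NeoMini"
FILENAME = "NeoMini_3B.gguf"

def fetch_model(repo_id: str = REPO_ID, filename: str = FILENAME) -> str:
    """Download the GGUF file into the local Hugging Face cache
    and return the local file path."""
    return hf_hub_download(repo_id=repo_id, filename=filename)

if __name__ == "__main__":
    print(fetch_model())
```

The file is cached locally, so repeated calls do not re-download it; the returned path can be passed directly to `Llama(model_path=...)` below.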

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the model (update the path to where your .gguf file is)
llm = Llama(model_path="path/to/the/file/NeoMini_3B.gguf")

# Create a chat completion
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Create a haiku about AI"}]
)

# The non-streaming response is a dict following the
# OpenAI chat-completion schema, so use dict indexing
print(response["choices"][0]["message"]["content"])
```
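
The snippet above handles a single turn. For multi-turn chat, the full message history is passed back to the model on every call. A minimal sketch, assuming `llm` is the `Llama` instance created above:

```python
# Minimal multi-turn history handling for create_chat_completion.
# `llm` is assumed to be the Llama instance created above.

def append_turn(history: list, role: str, content: str) -> list:
    """Append one chat message to the running history and return it."""
    history.append({"role": role, "content": content})
    return history

history: list = []
append_turn(history, "user", "Create a haiku about AI")

# Each call receives the whole history so the model sees prior turns:
# response = llm.create_chat_completion(messages=history)
# append_turn(history, "assistant",
#             response["choices"][0]["message"]["content"])
```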