nzros committed · Commit 868c848 · verified · 1 Parent(s): 836c049

Model card DevnexAI Updated

Files changed (1): README.md (+46 −3)
@@ -1,3 +1,46 @@
- ---
- license: llama3
- ---
+ ---
+ license: llama3
+ language:
+ - en
+ - code
+ pipeline_tag: text-generation
+ tags:
+ - python
+ - software-architecture
+ - clean-code
+ - senior-level
+ - optimization
+ - devnexai
+ base_model: meta-llama/Meta-Llama-3-8B
+ widget:
+ - text: "Refactor this function to use a Decorator for logging execution time and memory usage:"
+ - text: "Explain the difference between threading and asyncio in Python with a thread-safe Singleton example."
+ ---
+
+ # 🚀 DevNexAI-v1-Pro: The Senior Python Architect
+
+ **Model by [DevNexAi]** | *Part of the DevNexAI Ecosystem*
+
+ > **"Stop generating Junior code. Start generating Architecture."**
+
+ **DevNexAI-v1-Pro** is a specialized fine-tuned Large Language Model based on **Llama-3-8B**, engineered specifically for Senior Software Engineers, System Architects, and Tech Leads.
+
+ Unlike generalist models that prioritize speed or generic scripting, this model has been rigorously trained on a curated dataset of **Senior-Level Python**, focusing on maintainability, performance, and enterprise-grade best practices.
+
+ ## 🧠 Senior-Level Capabilities
+ This model doesn't just write code; it understands the engineering behind it.
+ * **🐍 Idiomatic Python (Pythonic):** Expert use of List Comprehensions, Generators, Context Managers, and Metaclasses.
+ * **🏗️ Clean Architecture:** Strict application of SOLID principles, Design Patterns (Factory, Strategy, Observer), and Hexagonal Architecture concepts.
+ * **⚡ Optimization & Concurrency:** Correct implementation of `asyncio`, `multiprocessing`, and efficient memory management.
+ * **🛡️ Robustness:** Strict Type Hinting, professional Docstrings, and defensive error handling.
+
+ ## 💻 How to Use (Local Inference)
+
+ The most efficient way to run this model locally while keeping your data private is to use **Ollama** or **LM Studio**.
+
+ ### Option A: Ollama (Recommended)
+ 1. Download the `.gguf` file from this repository.
+ 2. Create a file named `Modelfile` with the following content:
+ ```dockerfile
+ FROM ./devnexai-v1-pro.Q4_K_M.gguf
+ SYSTEM "You are a Senior Software Architect. You write efficient, documented, and idiomatic Python code. You prefer clean architecture over quick hacks."