## Use with the llama-cpp-python library

```python
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the quantized GGUF weights from the Hub and load them
llm = Llama.from_pretrained(
    repo_id="APRKDEV/icarus-1-3b",
    filename="icarus-1-3b.Q4_K_M.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```
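`create_chat_completion` returns an OpenAI-style response dictionary. A minimal helper for pulling out the assistant's reply, assuming that schema (the `extract_reply` name and the sample dictionary below are illustrative, not part of the library):

```python
def extract_reply(response: dict) -> str:
    # The assistant's text lives at choices[0]["message"]["content"]
    # in the OpenAI-style chat-completion schema.
    return response["choices"][0]["message"]["content"]


# Example with a hand-built response dict shaped like the library's output:
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "Paris is the capital of France."}}
    ]
}
print(extract_reply(sample))  # → Paris is the capital of France.
```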

# ICARUS-1 3B

Icarus-1 3B is a proprietary deep reasoning kernel engineered from the ground up by Neonaut Studio. It features a custom neural architecture optimized for zero-latency institutional chat and rapid tactical synthesis.

## Specifications

  • Architecture: Icarus-1
  • Parameters: 3.0B
  • Precision: Q4_K_M GGUF
  • Context Length: 8,192 Tokens
  • License: Proprietary (Neonaut Studio)
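With an 8,192-token context window, long conversations eventually need trimming. A rough sketch of dropping the oldest messages first, using a crude characters-per-token estimate rather than the model's real tokenizer (the `trim_history` helper and the 4-chars-per-token heuristic are assumptions for illustration):

```python
def trim_history(messages, max_tokens=8192):
    """Keep the most recent messages whose estimated token total fits the window."""
    # Rough heuristic: ~4 characters per token. Replace with a real
    # tokenizer count for anything beyond a sketch.
    estimate = lambda m: len(m["content"]) // 4

    kept, total = [], 0
    for msg in reversed(messages):  # newest first
        cost = estimate(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))  # restore chronological order


# With a tiny budget, only the newest messages survive:
history = [
    {"role": "user", "content": "a" * 40},
    {"role": "assistant", "content": "b" * 40},
    {"role": "user", "content": "c" * 40},
]
trimmed = trim_history(history, max_tokens=20)
print(len(trimmed))  # → 2
```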

Initiate the Kernel. Control the future.
