AI & ML interests

NoesisLab advances machine learning research in deep contemplation and reflective reasoning to enable more profound and self-aware artificial intelligence.

Recent Activity

OzTianlu updated a collection 6 days ago
Geilim Large Language Models
OzTianlu updated a model 6 days ago
NoesisLab/Geilim-1B-SR-Instruct

OzTianlu
in NoesisLab/Geilim-1B-SR-Instruct about 4 hours ago

GGUF model please

#1 opened 2 days ago by JLouisBiz
OzTianlu
posted an update 6 days ago
Geilim-1B-SR-Instruct: Serbian Intelligence for Deep Reasoning 🧠🇷🇸
NoesisLab/Geilim-1B-SR-Instruct
Geilim-1B-SR-Instruct is a lightweight Large Language Model (LLM) designed to bring advanced reasoning capabilities to low-resource languages. It focuses on Serbian understanding and generation while maintaining robust English reasoning. Built on the LLaMA-3 architecture with a proprietary hybrid reasoning mechanism, it delivers deep logic while keeping outputs concise and natural. 🚀

Core Innovations 💡

Implicit Deep Reasoning: Combines standard attention mechanisms with graph-structured reasoning components for rigorous logic and causal inference. 🕸️

ASPP & π-flow Hybrid Design: High-efficiency structured propagation + internal probability-space optimization for high-quality reasoning without long-winded intermediate steps. ⚡
Bilingual Adaptation: Primarily focused on Serbian while preserving English logic, making it well suited to multilingual chat and cross-lingual tasks. 🌍
Lightweight & Efficient: At ~1.3B parameters, it runs smoothly on consumer-grade GPUs, ideal for edge devices and research. 💻
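The post does not spell out how the hybrid reasoning mechanism combines its two branches. As a minimal sketch (all names, shapes, and the gating formula are hypothetical illustrations, not the model's actual internals), a learned per-token gate could blend a structured-propagation branch with a standard attention branch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hybrid_mix(attn_out, graph_out, w_gate, b_gate):
    """Blend attention and graph-reasoning branches with a per-token
    gate g in (0, 1): out = g * graph + (1 - g) * attn.
    attn_out, graph_out: (seq_len, d_model); w_gate: (d_model,)."""
    # The gate is computed from the attention branch's own features,
    # so each token decides how much structured reasoning to take in.
    g = sigmoid(attn_out @ w_gate + b_gate)                # (seq_len,)
    return g[:, None] * graph_out + (1.0 - g[:, None]) * attn_out

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
attn = rng.normal(size=(seq_len, d_model))
graph = rng.normal(size=(seq_len, d_model))
out = hybrid_mix(attn, graph, rng.normal(size=d_model), 0.0)
print(out.shape)  # (4, 8)
```

Because the gate is a convex combination, every output element stays between the two branches' values, which keeps the mixing stable.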

Use Cases 🛠️

Serbian Chatbots: Intelligent assistants with local linguistic nuance. 🗣️
Educational Tools: Multi-turn interactive tasks and learning support. 📚

Key Advantages ✨

Clean Output: Avoids messy "thinking" tags; reasoning happens internally, delivering clear and direct results. ✅
Open Access: Licensed under Apache-2.0, making it easy to integrate in research and engineering. 🔓
AI Democratization: Empowering low-resource language ecosystems with cutting-edge intelligence. 🤝
OzTianlu
updated a Space 6 days ago
OzTianlu
posted an update 9 days ago
๐Ÿš€ Geilim-1B-Instruct โ€” Implicit Deep Reasoning, Zero Verbosity
NoesisLab/Geilim-1B-Instruct
https://huggingface.co/collections/NoesisLab/geilim-large-language-models
No <think> tags. No long CoT.
Reasoning happens inside the hidden states, not in the output.
What's different
🧠 Implicit reasoning: deep causal reasoning without exposing chains
🕸️ ASPP (Adjacency-Structured Parallel Propagation): parent-only causal graph, O(n) message passing
🌊 π-flow: internal probability-space refinement instead of token-level deliberation
⚖️ Hybrid gating: learns when to use structure vs. attention
Why it matters
Lower latency & token cost
Cleaner, production-ready outputs
CoT-level reasoning depth without verbosity tax
Built on Llama-3.2-1B-Instruct, trained for math, logic, and commonsense.
Designed for small-model reasoning at the edge.
#ImplicitReasoning #SmallLLM #EfficientAI #ReasoningModels #ASPP #PiFlow
OzTianlu
posted an update 19 days ago

๐Ÿš€ Introducing Asterisk โ€” Hybrid ASPP-Attention Architecture! ๐ŸŒŸ

https://huggingface.co/NoesisLab/Asterisk

We're excited to launch Asterisk, a cutting-edge language model by NoesisLab on Hugging Face! 🎉 Built on top of SmolLM2-135M-Instruct, Asterisk integrates Adjacency-Structured Parallel Propagation (ASPP) with standard attention to bring structured reasoning power into language modeling.

โœจ Key Highlights:

๐Ÿ”น Hybrid Architecture โ€“ Fuses graph-centric ASPP local reasoning with global attention for richer representations.
๐Ÿ”น Enhanced Reasoning โ€“ ASPP enables iterative local state evolution that complements traditional transformer layers.
๐Ÿ”น Efficient Design โ€“ ~171M parameters with smart supervised fine-tuning (Capybara dataset).
๐Ÿ”น Flexible & Open โ€“ Apache-2.0 licensed and ready to integrate via Hugging Face ๐Ÿค— Transformers.

📈 Asterisk showcases how hybrid operators, inspired by theoretical frameworks like the Asterisk Operator, can bring structured reasoning into modern LMs in a scalable way.

👉 Try it out, explore the code, and start building: huggingface.co/NoesisLab/Asterisk