Sam McLeod (smcleod)

AI & ML interests: cool things
Recent Activity

- Liked a model 2 days ago: aufklarer/PersonaPlex-7B-MLX-4bit
- Liked a model 3 days ago: AesSedai/Qwen3.5-122B-A10B-GGUF
- Liked a model 11 days ago: Qwen/Qwen3.5-35B-A3B

Organizations
- What's the organisation & project relating to (I'm a r/localllama mod) (#1, opened 14 days ago by smcleod)
- iMatrix quants (#2, opened about 1 month ago by smcleod; 2 replies)
- New Refresh with added Tool Calling in calibration dataset and improved imatrix (#21, opened about 1 month ago by danielhanchen; 9 replies; reactions: ❤️ 9)
- Day 0 llama.cpp support? (#3, opened 2 months ago by sbeltz; 3 replies; reactions: ❤️ 👍 4)
- GGUF or MLX support? (#2, opened 5 months ago by smcleod; 5 replies; reactions: 👍 19)
- llama.cpp support (#1, opened 7 months ago by djuna; 5 replies; reactions: 👀 👍 5)
- Context Size? (#4, opened 6 months ago by smcleod; 3 replies)
- Thinking tokens issue (#9, opened 7 months ago by iyanello; 12 replies; reactions: 👍 2)
- jinja2 chat template is malformed (#13, opened 7 months ago by smcleod; 1 reply)
- When GGUF? (#6, opened 7 months ago by ChuckMcSneed; 7 replies; reactions: 🔥 🚀 16)
- I have a draft PR up to llama.cpp, keen for your input (#4, opened 7 months ago by smcleod; reactions: ❤️ 7)
- Any chance of a smaller coding model in the 30-70b range? (#6, opened 8 months ago by smcleod; 4 replies; reactions: 🚀 ❤️ 50)
- GGUF version? (#6, opened 8 months ago by smcleod; 2 replies; reactions: 👍 23)
- Are you planning on adding llama.cpp support? (#1, opened 8 months ago by smcleod; 1 reply)
- Any chance of creating these with RoPE/Yarn for a context size larger than 32k? (#2, opened 10 months ago by smcleod; 5 replies)
- Larger context version? (#3, opened 10 months ago by smcleod; 1 reply)
- Qwen2 or 3, GGUF quants & context size? (#3, opened 10 months ago by smcleod; 1 reply)
- License seems hostile (#7, opened 8 months ago by smcleod; reactions: 👀 3)
- Any chance of a 128k version so we can use it as a draft model for the larger 128k models? (#3, opened 10 months ago by smcleod; 8 replies; reactions: ➕ 3)
- 128k version of YARN (#6, opened 10 months ago by sovetboga; 5 replies; reactions: 👍 3)