Impressed by the model

#7 opened by LagOps

I have to say, this model is impressing me in many ways, and it's a great release overall! Thanks a lot for making this available to everyone, and thank you as well for releasing different checkpoints during training.

I particularly like how the model adheres to system prompts and references them during its chain of thought. I am also very happy with the chain of thought itself: for easy prompts it doesn't waste much time, and overall it's much more concise and "sane" than most other models. I typically avoid thinking models due to the limitations of my hardware, but with this model the wait isn't much of an issue at all.

One more positive I have to point out is that the model doesn't appear to be overly sycophantic or "assistant-like". If I prompt it to be a critic, it will actually be a critic. The outputs also don't suffer from the typical slop formulations and read more human-like than those of many other models. The model also produces more diverse responses than is common these days, but at the same time it doesn't feel as if the model is planning ahead much in terms of what to say (repeated continuations from a partial response tend to diverge significantly rather early on). It appears that the model doesn't use MTP as a training objective or inference optimization (or maybe I missed it), so perhaps that's what makes it feel a bit random at times.

On the technical side, the low active parameter count is very much appreciated; among models in the 400B range, this is pretty much the lowest available.

Again, thank you very much for this release! I am looking forward to more models like this (or improvements on it - it still feels a bit raw at times and could improve quite a lot with more iterations).
