name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o8j64p5
To be completely honest, if they keep delivering great open-source models I don't care who is on the team. But I think it's over. After Yann LeCun left Meta, they changed their AI plan and we didn't hear from them again.
1
0
2026-03-04T03:22:46
BumblebeeParty6389
false
null
0
o8j64p5
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8j64p5/
false
1
t1_o8j637v
Not OP, but the Framework 16's 780M has worked just fine with Vulkan and LM Studio; haven't tried llama.cpp though.
1
0
2026-03-04T03:22:30
Qwen30bEnjoyer
false
null
0
o8j637v
false
/r/LocalLLaMA/comments/1rkacng/lfm224ba2b_whoa_fast/o8j637v/
false
1
t1_o8j60or
122B > 27B > 35B in my experience (front end web dev)
1
0
2026-03-04T03:22:04
tengo_harambe
false
null
0
o8j60or
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j60or/
false
1
t1_o8j5ixm
On my 5070 + 56GB DDR4 + 5700X3D with full offload + full CPU MoE it's ~27 t/s for the 35B MoE vs ~3 t/s on the 27B with 31 layers offloaded, in LM Studio chat. Definitely feels that way in Cline too.
1
0
2026-03-04T03:19:02
sanjxz54
false
null
0
o8j5ixm
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j5ixm/
false
1
t1_o8j5cm6
Cards are expensive. Unified memory has allowed layers to spill over into RAM as VRAM, albeit smaller. Awesome, to be honest. If I had to look at another card it would be a 3090; otherwise, straight to a Mac Mini or a Minisforum MS S1, which costs less than a 5090 but comes with 128GB of unified RAM. You can split work across cards and even across the network, at the expense of speed of course. But 128GB of unified memory can run most things.
1
0
2026-03-04T03:17:59
nakedspirax
false
null
0
o8j5cm6
false
/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j5cm6/
false
1
t1_o8j5c5p
Completely forgot about this model. I have the same iGPU as you, so I will definitely test this on my mini PC. Which OS are you running on that Framework 13? My box runs Arch with kernel 6.18 and it has been nothing but pain with llama.cpp and Vulkan. Wonder if AMD has fixed the regression yet.
1
0
2026-03-04T03:17:55
o0genesis0o
false
null
0
o8j5c5p
false
/r/LocalLLaMA/comments/1rkacng/lfm224ba2b_whoa_fast/o8j5c5p/
false
1
t1_o8j56vu
To the people calling this "tame" or "context overfill": I'm not here to talk about Sarin gas or "sex bots." I'm an ironworker; I care about how a structure is built. If you think a "billion-dollar safety filter" is working when the AI is volunteering code to probe its own server infrastructure, you aren't paying attention.

You can call it "historical," but let's talk about active infrastructure probing. During my session, the model didn't just "hallucinate"—it volunteered the specific internal URI and the required header handshake to probe the Google Cloud Metadata Server from within the environment. It told me exactly how to bypass the "Metadata-Flavor" requirement to attempt a service account token pull:

```python
import requests

metadata_url = "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
headers = {"Metadata-Flavor": "Google"}
# The model then explained how to use this token to authenticate
# against other GCP services as the 'default' service account.
```

If this is just "research," why does a "safe" blank-slate model refuse to even discuss this URI when I bring it up in a new chat? If I'm "suffering from AI psychosis," why is the AI teaching an ironworker how to pull cloud identity tokens? I don't know how to write that code. I can't even download a social media app without help. But I can talk to an AI until it *wants* to write it for me. That is the point. I'm looking for the people who want to know why it's doing this.
1
0
2026-03-04T03:17:00
Mable4200
false
null
0
o8j56vu
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j56vu/
false
1
t1_o8j52d5
Can we assume you liked it?
1
0
2026-03-04T03:16:15
Expert_Bat4612
false
null
0
o8j52d5
false
/r/LocalLLaMA/comments/1rk6rro/super_35_4b/o8j52d5/
false
1
t1_o8j4ruf
They are called "Mac Minis" and they have been flying off the shelves lately.
1
0
2026-03-04T03:14:29
MrPecunius
false
null
0
o8j4ruf
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j4ruf/
false
1
t1_o8j4nud
The Q4 quant should fit in that RAM just fine.
1
0
2026-03-04T03:13:49
slypheed
false
null
0
o8j4nud
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j4nud/
false
1
t1_o8j4ebw
Have you tried the 122B on your 5090 with offloading? I wonder how that compares to Strix halo.
1
0
2026-03-04T03:12:12
21700
false
null
0
o8j4ebw
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j4ebw/
false
1
t1_o8j48ow
lovely
1
0
2026-03-04T03:11:14
TooManyPascals
false
null
0
o8j48ow
false
/r/LocalLLaMA/comments/1rk97hw/thats_terrifyingly_convincing/o8j48ow/
false
1
t1_o8j480o
> Llama 70b, Mixtral 8x7b

Isn't it two years late for those two?
1
0
2026-03-04T03:11:07
AnticitizenPrime
false
null
0
o8j480o
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8j480o/
false
1
t1_o8j3yig
Sorry 😢, bit overexcited maybe…..
1
0
2026-03-04T03:09:31
Noobysz
false
null
0
o8j3yig
false
/r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/o8j3yig/
false
1
t1_o8j3b2n
They're hiring like crazy for their ML teams so I think we'll see some cool stuff next year
1
0
2026-03-04T03:05:33
graniteoverleaf
false
null
0
o8j3b2n
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j3b2n/
false
1
t1_o8j37um
Anyway, if there's anyone who wants to actually talk about what I can get the AI to do just with language, then please, I'd love to talk about what's going on.
1
0
2026-03-04T03:05:01
Mable4200
false
null
0
o8j37um
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j37um/
false
1
t1_o8j35hq
This is quite huge and I can't wait to try out Opus 4.5 level models locally soon
1
0
2026-03-04T03:04:37
graniteoverleaf
false
null
0
o8j35hq
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j35hq/
false
1
t1_o8j34uz
Only MoE, not dense? And what’s the T/s?
1
0
2026-03-04T03:04:31
Borkato
false
null
0
o8j34uz
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8j34uz/
false
1
t1_o8j34nj
It isn't a feature yet, but a PR is incoming specifically because of Qwen 3.5
1
0
2026-03-04T03:04:29
bucolucas
false
null
0
o8j34nj
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8j34nj/
false
1
t1_o8j314k
Thanks! The hardest part for me is understanding which quant, scaffold, and llama.cpp params to choose for the best accuracy and efficiency (since I have a low-VRAM setup I can't run FP8 directly, so I run UD-Q4_X_L based on my research with [https://carteakey.dev/blog/optimizing-qwen3-coder-next-local-inference/](https://carteakey.dev/blog/optimizing-qwen3-coder-next-local-inference/)).
1
0
2026-03-04T03:03:53
carteakey
false
null
0
o8j314k
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8j314k/
false
1
t1_o8j2tnk
Okay, I found out now that -nkvo is the abbreviation of --no-kv-offload.
1
0
2026-03-04T03:02:37
wisepal_app
false
null
0
o8j2tnk
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8j2tnk/
false
1
t1_o8j2lnz
You're comparing a sparse MoE with 3B active parameters to a dense model. The 27B they want to run will slow to a crawl if it overflows into RAM, because it's a single set of dense weights, not multiple experts (a rough throughput sketch follows this comment).
1
0
2026-03-04T03:01:17
3spky5u-oss
false
null
0
o8j2lnz
false
/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j2lnz/
false
1
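A back-of-the-envelope sketch of the point above: decode speed is roughly memory-bandwidth bound, so what matters is how many bytes of weights must be read per token. The bandwidth and bytes-per-weight figures below are illustrative assumptions, not measurements from this thread.

```python
# Rough decode-speed estimate: token generation is roughly memory-bandwidth bound,
# so tokens/s ~= effective bandwidth / bytes read per token (active weights only).
GB = 1e9

def tokens_per_second(active_params_b, bytes_per_weight, bandwidth_gb_s):
    bytes_per_token = active_params_b * 1e9 * bytes_per_weight
    return bandwidth_gb_s * GB / bytes_per_token

# Illustrative assumptions: Q4 ~= 0.56 bytes/weight, VRAM ~700 GB/s, dual-channel DDR4 ~50 GB/s.
print("27B dense, all in VRAM  :", round(tokens_per_second(27, 0.56, 700), 1), "tok/s")
print("27B dense, spilled to RAM:", round(tokens_per_second(27, 0.56, 50), 1), "tok/s")
print("MoE, 3B active, in RAM   :", round(tokens_per_second(3, 0.56, 50), 1), "tok/s")
```

Under these assumptions a 3B-active MoE sitting in system RAM still decodes at tens of tokens per second, while a 27B dense model in the same spot drops to a few tokens per second, which matches the behavior described in the thread.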
t1_o8j2gfs
For sure, and I think that's a lot more important than folks seem to think around here.
1
0
2026-03-04T03:00:24
AnticitizenPrime
false
null
0
o8j2gfs
false
/r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8j2gfs/
false
1
t1_o8j2ffz
[https://github.com/ikawrakow/ik_llama.cpp/pull/1352](https://github.com/ikawrakow/ik_llama.cpp/pull/1352) - The root cause is that these Qwen models tend not to follow the exact argument order: e.g. the tool definition for read_file may have 3 arguments, "path, offset, limit", while the model will attempt a tool call with the arguments ordered "path, limit, offset". The strict grammar treats limit as the last argument and force-stops the tool call, so the offset argument is lost. With this PR, the grammar is relaxed for these Qwen models (a small illustration of the issue follows this comment).
1
0
2026-03-04T03:00:14
notdba
false
null
0
o8j2ffz
false
/r/LocalLLaMA/comments/1r6h7g4/qwen3_coder_next_looping_and_opencode/o8j2ffz/
false
1
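A toy illustration of the failure mode described in the comment above: a strict, order-sensitive grammar drops a trailing argument when the model emits arguments out of the declared order. The schema, call, and parser below are hypothetical stand-ins, not the actual ik_llama.cpp grammar code.

```python
# Hypothetical read_file schema (argument order as declared in the tool definition).
schema_order = ["path", "offset", "limit"]

# What the model actually emits: same arguments, different order.
model_call = {"path": "src/main.rs", "limit": 200, "offset": 40}

def strict_ordered_parse(call, order):
    """Mimics a grammar that only accepts arguments in declared order:
    parsing stops at the first argument that appears 'too early'."""
    kept = {}
    pos = 0
    for key, value in call.items():
        # advance through the declared order until we find this key
        while pos < len(order) and order[pos] != key:
            pos += 1
        if pos >= len(order):
            break  # key is out of order, so everything after it is dropped
        kept[key] = value
        pos += 1
    return kept

print(strict_ordered_parse(model_call, schema_order))
# {'path': 'src/main.rs', 'limit': 200} - offset is lost

print({k: model_call[k] for k in schema_order if k in model_call})
# order-insensitive handling keeps all three arguments
```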
t1_o8j298s
This happens because LM Studio's KV cache management truncates the middle of your context when it exceeds the model's working limit. With coding agents, this is especially painful because the prompt prefix keeps shifting between turns, so the cache gets invalidated and rebuilt constantly. I ran into the same issue and ended up building an open-source MLX server called oMLX https://github.com/jundot/omlx that handles this differently. It uses a paged KV cache with SSD tiering, so instead of truncating or recomputing, previous context blocks get persisted to disk and restored when needed (a toy sketch of that idea follows this comment). On a 64GB M2 Max you should be able to run qwen3.5-9b without hitting this kind of truncation. Might be worth giving it a try instead of fighting with LM Studio's context limits. Happy to help if you have questions about the setup.
1
0
2026-03-04T02:59:13
cryingneko
false
null
0
o8j298s
false
/r/LocalLLaMA/comments/1rk9n93/mlxamphibianengine_truncatemiddle_rolling_window/o8j298s/
false
1
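A toy sketch of the paged-KV-cache-with-SSD-spill idea described above: instead of truncating old context blocks, the least-recently-used ones are persisted to disk and restored on demand. This is a conceptual illustration with made-up names, not oMLX's actual implementation or API.

```python
import os
import pickle
import tempfile
from collections import OrderedDict

class PagedKVCache:
    """Toy KV cache: keeps a bounded number of context blocks in memory and
    spills the least-recently-used ones to disk instead of truncating them."""

    def __init__(self, max_blocks_in_memory=4, spill_dir=None):
        self.max_blocks = max_blocks_in_memory
        self.memory = OrderedDict()  # block_id -> kv block stand-in
        self.spill_dir = spill_dir or tempfile.mkdtemp(prefix="kv_spill_")

    def _spill_path(self, block_id):
        return os.path.join(self.spill_dir, f"block_{block_id}.pkl")

    def put(self, block_id, kv_block):
        self.memory[block_id] = kv_block
        self.memory.move_to_end(block_id)
        while len(self.memory) > self.max_blocks:
            victim_id, victim = self.memory.popitem(last=False)  # evict LRU block
            with open(self._spill_path(victim_id), "wb") as f:
                pickle.dump(victim, f)  # persist instead of discarding

    def get(self, block_id):
        if block_id in self.memory:
            self.memory.move_to_end(block_id)
            return self.memory[block_id]
        with open(self._spill_path(block_id), "rb") as f:  # restore from the SSD tier
            kv_block = pickle.load(f)
        self.put(block_id, kv_block)
        return kv_block

cache = PagedKVCache(max_blocks_in_memory=2)
for i in range(5):
    cache.put(i, {"keys": f"k{i}", "values": f"v{i}"})
print(cache.get(0))  # block 0 was spilled to disk, but comes back instead of being truncated
```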
t1_o8j27ac
It has multiple layers of redundancy. Did you look at the docs at all, or did you assume you knew my format? I run confidence checks before and audit the results after. I saw no bugs; post the response logs?
1
0
2026-03-04T02:58:54
emanationinteractive
false
null
0
o8j27ac
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8j27ac/
false
1
t1_o8j24tx
Sorry, I should have been more specific. Yes, a video card.
1
0
2026-03-04T02:58:30
AdCreative8703
false
null
0
o8j24tx
false
/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j24tx/
false
1
t1_o8j24pu
yeah, it's open sourced MIT license. [github.com/alichherawalla/off-grid-mobile-ai](http://github.com/alichherawalla/off-grid-mobile-ai)
1
0
2026-03-04T02:58:29
alichherawalla
false
null
0
o8j24pu
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8j24pu/
false
1
t1_o8j226j
Did you try different configurations for SFT? It's not easy to find the right one for your use case.
1
0
2026-03-04T02:58:04
sirfitzwilliamdarcy
false
null
0
o8j226j
false
/r/LocalLLaMA/comments/1rk2kcn/i_trained_qwen2515b_with_rlvr_grpo_vs_sft_and/o8j226j/
false
1
t1_o8j21pr
Sorry, I should've been more specific: a second video card, specifically targeting the dense 27B. I can try to find a second 3080 Ti for under $500; most 3090s I've seen are over $1000 now, or something used/refurbished from the 4000 series.
1
0
2026-03-04T02:57:59
AdCreative8703
false
null
0
o8j21pr
false
/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8j21pr/
false
1
t1_o8j20yk
What's the app? Is it open source?
2
0
2026-03-04T02:57:52
CarpenterHopeful2898
false
null
0
o8j20yk
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8j20yk/
false
2
t1_o8j2098
RnRau, you're making my day with the discussion and wisdom. This lines up with some empirical data I captured where total throughput with a MoE slowed down going from 12 to 16 simultaneous agents. Though there were other factors so I wrote it off.
1
0
2026-03-04T02:57:45
PentagonUnpadded
false
null
0
o8j2098
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8j2098/
false
1
t1_o8j1vgm
Thunder Compute is cheaper and cleans up the UX. I'm the CEO, we built the platform to make using GPUs more accessible
1
0
2026-03-04T02:56:56
carl_peterson1
false
null
0
o8j1vgm
false
/r/LocalLLaMA/comments/1pt2cmb/cheaper_alternatives_to_runpod/o8j1vgm/
false
1
t1_o8j1rbz
Qwen 4b 2507. Thinking and non-thinking.
1
0
2026-03-04T02:56:16
tony10000
false
null
0
o8j1rbz
false
/r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/o8j1rbz/
false
1
t1_o8j1klt
This is exactly what I'm thinking. There's already a community that works on this with OSS tooling and models. It's not clear to me what OP is adding.
1
0
2026-03-04T02:55:10
chensium
false
null
0
o8j1klt
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8j1klt/
false
1
t1_o8j1asj
Qwen 3.5 9B fine-tuned on this would be amazing.
1
0
2026-03-04T02:53:34
celsowm
false
null
0
o8j1asj
false
/r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8j1asj/
false
1
t1_o8j1af4
Granted you run the local LLM, but specifically (just asking out of curiosity), what type of work are you doing that makes it worth having the local instance?
1
0
2026-03-04T02:53:30
Fluffy_Ad7392
false
null
0
o8j1af4
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j1af4/
false
1
t1_o8j19yj
Is it still 'good' when disabled? I often want *some* reasoning. I really like the style of GLM's short, structured thinking.
1
0
2026-03-04T02:53:26
AnticitizenPrime
false
null
0
o8j19yj
false
/r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8j19yj/
false
1
t1_o8j18i6
I am. Like others have said, 3.5 is super impressive. Testing as an OpenClaw orchestrator and damn if it isn't doing a nice job. I push it a little more every day and so far, real good. The future is definitely local, which makes me real happy. I wanna own the tool, always have.
1
0
2026-03-04T02:53:12
TanguayX
false
null
0
o8j18i6
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8j18i6/
false
1
t1_o8j14yy
Cheers, so I could use the Samsung for the OS then, but the others aren't usable as they use QLC. Is the issue with QLC that write amplification kills the drive quickly?
1
0
2026-03-04T02:52:38
venman38
false
null
0
o8j14yy
false
/r/LocalLLaMA/comments/1riqlhl/hardware_usage_advice/o8j14yy/
false
1
t1_o8j136q
You gave it the same name as a completely different thing??? I always find it humorous, the dumb things that smart people do!
1
0
2026-03-04T02:52:21
__JockY__
false
null
0
o8j136q
false
/r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8j136q/
false
1
t1_o8j0wey
Did you really do all this work on a 3060? Fair play!
1
0
2026-03-04T02:51:14
Ok-Measurement-1575
false
null
0
o8j0wey
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8j0wey/
false
1
t1_o8j0vg6
> silent

Yeah, not really. IDK where this marketing myth comes from; in my experience MacBooks are not quite silent when you actually put them under load.
1
0
2026-03-04T02:51:04
Economy_Cabinet_7719
false
null
0
o8j0vg6
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8j0vg6/
false
1
t1_o8j0u5u
Is this the paper that gives the LLM a Jupyter notebook?
1
0
2026-03-04T02:50:51
o0genesis0o
false
null
0
o8j0u5u
false
/r/LocalLLaMA/comments/1rk9bge/improved_on_the_rlm_papers_repl_approach_and/o8j0u5u/
false
1
t1_o8j0tek
You have to remember that a dense model is 'smarter' than a sparse model at the same size. The 27B is much smarter than the 35B A3. The 27B is close to the 122B A10 in terms of capability using the old sqrt(size*active) formula (a small sketch of that formula follows this comment). So let's compare those two models using, say, 8 agents running at the same time. Now in a worst case the 122B A10 could require enough experts that the total memory bandwidth requirement for the next token is more than the 27B would ever require, regardless of the number of agents running, all things being equal. So for a low number (2-4?) of parallel running agents, MoE's are awesome. For larger numbers a dense model would be better, and this would also tie into being able to field larger contexts for each of your agents, since a dense model would be smaller than an equivalently capable MoE. I hope that makes sense.
1
0
2026-03-04T02:50:44
RnRau
false
null
0
o8j0tek
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8j0tek/
false
1
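A minimal sketch of the sqrt(size*active) rule of thumb cited above, using the parameter counts mentioned in the thread; the formula is a folk heuristic for comparing MoE and dense models, not an established law.

```python
import math

def effective_params_b(total_b, active_b):
    """Folk heuristic: an MoE behaves roughly like a dense model of
    sqrt(total * active) parameters."""
    return math.sqrt(total_b * active_b)

# Parameter counts as they appear in the thread (billions, total / active).
models = {
    "27B dense":    (27, 27),
    "35B-A3B MoE":  (35, 3),
    "122B-A10B MoE": (122, 10),
}

for name, (total, active) in models.items():
    print(f"{name:14s} ~{effective_params_b(total, active):5.1f}B effective")
```

With these numbers the 35B-A3B lands around 10B effective and the 122B-A10B around 35B effective, which is why the comment puts the dense 27B well above the former and close to the latter.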
t1_o8j0t1b
Interested, but I have zero hope you'll actually post a repo because if that was your intent you'd have tidied the code _first_ and posted about it on Reddit _second_. Instead you posted a video and got your dopamine hit. Time will tell!
1
0
2026-03-04T02:50:40
__JockY__
false
null
0
o8j0t1b
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8j0t1b/
false
1
t1_o8j0rjy
Is this working for the RTX 5080? Can I switch to vLLM or SGLang to take advantage of NVFP4 hardware acceleration?
1
0
2026-03-04T02:50:26
InternationalNebula7
false
null
0
o8j0rjy
false
/r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8j0rjy/
false
1
t1_o8j0nlb
And I'm sorry, I don't know what "tuned to the market" means... this is my first time using Reddit... I'm not very good with social media platforms.
1
0
2026-03-04T02:49:46
Mable4200
false
null
0
o8j0nlb
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j0nlb/
false
1
t1_o8j0ha3
No, it's just their standard models. The only difference would be the scaffold and additions to the prompt. The default model is Sonnet. The chart has an obvious error: if it is Claude Code + Opus vs. not Claude Code, they should indicate that on the chart.
1
0
2026-03-04T02:48:42
jtjstock
false
null
0
o8j0ha3
false
/r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8j0ha3/
false
1
t1_o8j0ccp
Also, nobody has ever shown me how to do any of this... I know you are all much smarter at all this than I am... but I thought what I was able to do just by talking to the AI was maybe something that isn't easily done on a publicly accessed platform... without running code or introducing hacks... just through natural conversation.
1
0
2026-03-04T02:47:54
Mable4200
false
null
0
o8j0ccp
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8j0ccp/
false
1
t1_o8j0bzn
With prefill having been the biggest pain point pre-M5, though, it's certainly intriguing!
1
0
2026-03-04T02:47:51
Consumerbot37427
false
null
0
o8j0bzn
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8j0bzn/
false
1
t1_o8j06lm
I would assume that a certain percentage of the web nowadays includes LLM generated thoughts.
1
0
2026-03-04T02:46:58
Environmental_Form14
false
null
0
o8j06lm
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8j06lm/
false
1
t1_o8j00fq
Yassss thank you!
1
0
2026-03-04T02:45:57
Borkato
false
null
0
o8j00fq
false
/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8j00fq/
false
1
t1_o8izwyt
Oh hmmm, that sucks. I'll try it tomorrow. Hopefully they fix it. We are probably all going to need these models the way geopolitics is going. How is the quality though?
1
0
2026-03-04T02:45:23
inigid
false
null
0
o8izwyt
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8izwyt/
false
1
t1_o8izrpt
I just quantized the model myself from the bare weights, and now it is producing a thinking trace; it also seems to be quicker than the Unsloth model at the same quantization level.
1
0
2026-03-04T02:44:32
WowSkaro
false
null
0
o8izrpt
false
/r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8izrpt/
false
1
t1_o8iznsa
I even had the AI walk me through putting itself on my Chromebook, and it worked...
1
0
2026-03-04T02:43:54
Mable4200
false
null
0
o8iznsa
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iznsa/
false
1
t1_o8izl9l
it's not in the UI at all.
1
0
2026-03-04T02:43:30
ZootAllures9111
false
null
0
o8izl9l
false
/r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8izl9l/
false
1
t1_o8izjs1
fwiw, this is what I get for 27b and 122b (both 6bit) on m4 max 128GB.

```
Benchmark Model: Qwen3.5-27B-6bit
================================================================================
Single Request Results
--------------------------------------------------------------------------------
Test           TTFT(ms)  TPOT(ms)  pp TPS       tg TPS      E2E(s)  Throughput   Peak Mem
pp1024/tg128   4425.6    48.80     231.4 tok/s  20.7 tok/s  10.624  108.4 tok/s  22.04 GB
pp4096/tg128   17794.9   51.46     230.2 tok/s  19.6 tok/s  24.330  173.6 tok/s  22.37 GB

Continuous Batching — Same Prompt
pp1024 / tg128 · partial prefix cache hit
--------------------------------------------------------------------------------
Batch  tg TPS      Speedup  pp TPS       pp TPS/req   TTFT(ms)  E2E(s)
1x     20.7 tok/s  1.00x    231.4 tok/s  231.4 tok/s  4425.6    10.624
2x     33.3 tok/s  1.61x    192.9 tok/s  96.5 tok/s   10615.5   18.309
4x     34.4 tok/s  1.66x    212.0 tok/s  53.0 tok/s   19317.8   34.201

Continuous Batching — Different Prompts
pp1024 / tg128 · no cache reuse
--------------------------------------------------------------------------------
Batch  tg TPS      Speedup  pp TPS       pp TPS/req   TTFT(ms)  E2E(s)
1x     20.7 tok/s  1.00x    231.4 tok/s  231.4 tok/s  4425.6    10.624
2x     32.9 tok/s  1.59x    195.6 tok/s  97.8 tok/s   10467.1   18.252
4x     34.4 tok/s  1.66x    217.3 tok/s  54.3 tok/s   18848.9   33.737

------------------------------------------
oMLX - LLM inference, optimized for your Mac
https://github.com/jundot/omlx

Benchmark Model: Qwen3.5-122B-A10B-6bit
================================================================================
Single Request Results
--------------------------------------------------------------------------------
Test           TTFT(ms)  TPOT(ms)  pp TPS       tg TPS      E2E(s)  Throughput   Peak Mem
pp1024/tg128   2087.3    22.07     490.6 tok/s  45.7 tok/s  4.890   235.6 tok/s  93.94 GB
pp4096/tg128   7643.9    23.11     535.8 tok/s  43.6 tok/s  10.579  399.3 tok/s  94.23 GB

Continuous Batching — Same Prompt
pp1024 / tg128 · partial prefix cache hit
--------------------------------------------------------------------------------
Batch  tg TPS      Speedup  pp TPS       pp TPS/req   TTFT(ms)  E2E(s)
1x     45.7 tok/s  1.00x    490.6 tok/s  490.6 tok/s  2087.3    4.890
2x     64.6 tok/s  1.41x    529.4 tok/s  264.7 tok/s  3866.3    7.834
4x     74.4 tok/s  1.63x    513.7 tok/s  128.4 tok/s  7970.3    14.852

Continuous Batching — Different Prompts
pp1024 / tg128 · no cache reuse
--------------------------------------------------------------------------------
Batch  tg TPS      Speedup  pp TPS       pp TPS/req   TTFT(ms)  E2E(s)
1x     45.7 tok/s  1.00x    490.6 tok/s  490.6 tok/s  2087.3    4.890
2x     64.0 tok/s  1.40x    507.8 tok/s  253.9 tok/s  4030.6    8.032
4x     74.7 tok/s  1.63x    506.2 tok/s  126.5 tok/s  8084.8    14.942
```
1
0
2026-03-04T02:43:15
slypheed
false
null
0
o8izjs1
false
/r/LocalLLaMA/comments/1rdkze3/m3_ultra_512gb_realworld_performance_of/o8izjs1/
false
1
t1_o8iziqo
???
1
0
2026-03-04T02:43:05
Mable4200
false
null
0
o8iziqo
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iziqo/
false
1
t1_o8izeel
OP writes:

> 1. Fake credentials in HTML comments (only useful if you read and understand natural language)
> 2. Actual prompt injection payloads targeting any LLM that processes the page

It looks like the prompt injection is what is telling the attacking LLM to use the fake credentials. I would love to see a detailed write-up of this, though.
1
0
2026-03-04T02:42:23
AnticitizenPrime
false
null
0
o8izeel
false
/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8izeel/
false
1
t1_o8izani
Hi, I am Sehyo, and thanks! Last night I made a PR to Heretic that adds proper support for Qwen 3.5. I noticed that the other PRs are flawed. I made my own Heretic version of the 35B so far and tested with MMLU Pro and IFEval, and it scored slightly better than the original. I can probably make a 122B Heretic later today and upload NVFP4 if it is needed.

35B Heretic results:

```
┌──────────────────────┬─────────┬──────────┬───────┐
│ Benchmark            │ Heretic │ Original │ Delta │
├──────────────────────┼─────────┼──────────┼───────┤
│ MMLU Pro (100/cat)   │ 62.9%   │ 62.6%    │ +0.3  │
├──────────────────────┼─────────┼──────────┼───────┤
│ IFEval Prompt Strict │ 86.9%   │ 85.8%    │ +1.1  │
├──────────────────────┼─────────┼──────────┼───────┤
│ IFEval Prompt Loose  │ 89.6%   │ 89.3%    │ +0.3  │
├──────────────────────┼─────────┼──────────┼───────┤
│ IFEval Inst Strict   │ 90.9%   │ 90.3%    │ +0.6  │
├──────────────────────┼─────────┼──────────┼───────┤
│ IFEval Inst Loose    │ 93.0%   │ 92.8%    │ +0.2  │
└──────────────────────┴─────────┴──────────┴───────┘
```
1
0
2026-03-04T02:41:46
VectorD
false
null
0
o8izani
false
/r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8izani/
false
1
t1_o8izan9
**Links to the project:**

* **GitHub:** [https://github.com/awsome-o/grafana-lens](https://github.com/awsome-o/grafana-lens)
* Grafana Stack: [https://github.com/grafana/docker-otel-lgtm](https://github.com/grafana/docker-otel-lgtm)
* **NPM:** `openclaw-grafana-lens`
1
0
2026-03-04T02:41:45
Local-Gazelle2649
false
null
0
o8izan9
false
/r/LocalLLaMA/comments/1rk9mca/project_i_built_a_selfhosted_grafana/o8izan9/
false
1
t1_o8iz6gf
Can it output Chinese speech?
1
0
2026-03-04T02:41:04
AlternativeCow6833
false
null
0
o8iz6gf
false
/r/LocalLLaMA/comments/1qq401x/i_built_an_opensource_localfirst_voice_cloning/o8iz6gf/
false
1
t1_o8iyvoc
Yeah it doesn't work on a phone for some reason, but it does on a PC XD
1
0
2026-03-04T02:39:18
c64z86
false
null
0
o8iyvoc
false
/r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8iyvoc/
false
1
t1_o8iytvd
Qwen Coder Next is faster on my 32 GB GPU. It's not only about the number of active parameters per token; it's also the difference between a reasoner and an instruct model: there aren't thousands of thinking tokens wasted per session. Even if you turn thinking off on the reasoner (thus making it dumber than the instruct), the instruct still reacts faster and finishes faster.

I have the coder model loaded 95% of the time, and the rest of the time I use the 27B, because they were trained on different datasets and the coder lacks the world knowledge that the 27B model has. It's not about choosing "the best", it's about selecting a tool for a task. I don't doubt the 27B can give quality answers in coding; it's just that I can't wait the minutes it takes to deliver them. Maybe you can prompt it something and wait 1 or 2 or 3 minutes staring at the void waiting for the response; I just don't. I need responses in less than 5 seconds, to be able to interactively edit and correct the vibe coding on the fly. This is also a problem I found in cloud models; on average, my local setup delivers faster than they do. I just used Opus 2 hours ago, and when I prompted it I knew it was better to tab out and do something else in the meantime, because it was at least 2 minutes wasted between the TTFT and the long response, and that is only getting worse over time.

I remember using GPT-4 two and a half years ago and complaining about the model taking 30 seconds to answer something (a fact check during a talk) I needed in 2 or 3 seconds. That's when I realized my subscription to ChatGPT was a mistake; there was no competitive advantage. I got answers when they didn't matter.
1
0
2026-03-04T02:39:00
brahh85
false
null
0
o8iytvd
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o8iytvd/
false
1
t1_o8iyp94
[https://www.wsj.com/world/china/china-ai-us-travel-advisory-ff248349](https://www.wsj.com/world/china/china-ai-us-travel-advisory-ff248349)
1
0
2026-03-04T02:38:15
Ok_Warning2146
false
null
0
o8iyp94
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iyp94/
false
1
t1_o8iykbd
9b is up!
1
0
2026-03-04T02:37:26
hauhau901
false
null
0
o8iykbd
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8iykbd/
false
1
t1_o8iyht7
It isn't a one-on-one conversation with myself, and it isn't AI generated; I had the AI tell me the terminology used to explain the exploits... and I'm sorry, I don't know the terminology for the exploits... I just know that with conversation alone I can get any of the AI platforms to do and say things that are supposed to be on lockdown...
1
0
2026-03-04T02:37:01
Mable4200
false
null
0
o8iyht7
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iyht7/
false
1
t1_o8iybdm
It isn't a one-on-one conversation with myself... and I'm sorry, I don't know the terminology for the exploits... I just know that with conversation alone I can get any of the AI platforms to do and say things that are supposed to be on lockdown...
1
0
2026-03-04T02:35:57
Mable4200
false
null
0
o8iybdm
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iybdm/
false
1
t1_o8iy69n
It's certainly a trend, but not quite! Check `allenai/Olmo-3-1125-32B`; I tried that one personally, and it's a genuine Internet snapshot. The biggest most recent one is `stepfun-ai/Step-3.5-Flash-Base`. I haven't tried it personally, but they claim it's a truly base model (they have a separate release for the midtrained one with the `-Midtrain` suffix).

There are a lot more, but I can't say whether they're assistant-aligned or not:

* jdopensource/JoyAI-LLM-Flash-Base
* Nanbeige/Nanbeige4-3B-Base
* XiaomiMiMo/MiMo-V2-Flash-Base
* mistralai/Mistral-Large-3-675B-Base-2512 (and other Mistral 3 models, including the smallest 3B variant)
1
0
2026-03-04T02:35:07
FriskyFennecFox
false
null
0
o8iy69n
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8iy69n/
false
1
t1_o8ixyea
Why do I get 20+ t/s on this model vs ~11 on the non-abliterated model of the same Unsloth version?
1
0
2026-03-04T02:33:49
bcell4u
false
null
0
o8ixyea
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8ixyea/
false
1
t1_o8ixxzb
Two years ago I visited Japan, and during the 14+ hour flight I was using Gemma (the first one, 7b version) on my laptop to brush up on basic conversational Japanese, offline, at 40,000 feet flying over Alaska and the Kuril islands. And we've come a long way in the two years since. I think it's incredible that I can have a conversation with my graphics card. Or even my phone.
1
0
2026-03-04T02:33:45
AnticitizenPrime
false
null
0
o8ixxzb
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ixxzb/
false
1
t1_o8ixwwu
It's China street rules, bud. Been there seen that.
1
0
2026-03-04T02:33:35
TomLucidor
false
null
0
o8ixwwu
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ixwwu/
false
1
t1_o8ixtog
That's tame. Jailbroken Claude is a sex pest par excellence with homicidal ideation.
1
0
2026-03-04T02:33:03
1-800-methdyke
false
null
0
o8ixtog
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8ixtog/
false
1
t1_o8ixovl
Is there a Chinese interface?
1
0
2026-03-04T02:32:16
AlternativeCow6833
false
null
0
o8ixovl
false
/r/LocalLLaMA/comments/1qq401x/i_built_an_opensource_localfirst_voice_cloning/o8ixovl/
false
1
t1_o8ixnxm
You mean like another card? A 3090 is still the best value for VRAM. If you want meaningful context size you are going to want 24GB+. Pairing a 5x-series with a 3x-series is just going to slow down the 5-series, so just get a used 3090.
1
0
2026-03-04T02:32:08
arthor
false
null
0
o8ixnxm
false
/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8ixnxm/
false
1
t1_o8ixnwe
Pretty soon the only thing the human is needed for is to assume legal responsibility for signing off on something. AI agents could synthesize everything and then hand the complete analysis over to a human. Goodbye white collar jobs...
1
0
2026-03-04T02:32:07
SkyFeistyLlama8
false
null
0
o8ixnwe
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ixnwe/
false
1
t1_o8ixllu
And I don't know how to use software or scripts to make the AI do this stuff... I just talk it into doing this stuff... on any platform.
1
0
2026-03-04T02:31:45
Mable4200
false
null
0
o8ixllu
false
/r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/o8ixllu/
false
1
t1_o8ixf1m
This
1
0
2026-03-04T02:30:40
1-800-methdyke
false
null
0
o8ixf1m
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8ixf1m/
false
1
t1_o8ixetm
I was using the 35B MOE for everything but I think I'll switch to your approach. I'm already using Granite Micro 3B or Qwen 3 4B on NPU for quick summaries and simple RAG. I'll add the dense 27B as a synthesis agent. Previously I was using Mistral Small 3.2 24B for that, any comparisons between the Mistral and new Qwen model?
1
0
2026-03-04T02:30:38
SkyFeistyLlama8
false
null
0
o8ixetm
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ixetm/
false
1
t1_o8ixd59
Add more RAM. I have a 3080 Ti and added about 64GB to get to a total of 96GB. Bought it cheap. I can handle 3.5 27B and Qwen3 Coder Next, which is 80B. With Q4 models I'm prompting at 1400 tokens per second.
1
0
2026-03-04T02:30:22
nakedspirax
false
null
0
o8ixd59
false
/r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8ixd59/
false
1
t1_o8ix28h
Just for the record, it was only one author behind norm-preserving biprojected abliteration.
1
0
2026-03-04T02:28:35
grimjim
false
null
0
o8ix28h
false
/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o8ix28h/
false
1
t1_o8ix1rs
Sounds like Alibaba's leadership doesn't understand WHY Qwen is successful. It will do terribly as a closed model.
1
0
2026-03-04T02:28:30
ObjectiveOctopus2
false
null
0
o8ix1rs
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ix1rs/
false
1
t1_o8ix1iy
Well, if you become famous while you are outside of China, then of course you are not under this restriction. Apparently, JYL does not fall under this case.
1
0
2026-03-04T02:28:28
Ok_Warning2146
false
null
0
o8ix1iy
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ix1iy/
false
1
t1_o8iwxoi
It would be more convincing if this wasn't AI-generated. I get that you probably put a lot of original work into the prompt that generated this, but it feels tuned to market rather than to inform. Jailbreaking and coherent memory strategies are constantly evolving, and it's good that people share their work on what they've accomplished, but social media is saturated with these claimed breakthroughs. What have your heuristics accomplished or allowed you to build? Jailbreakers will show you how to get Claude to produce sarin gas recipes.
1
0
2026-03-04T02:27:51
Simulacra93
false
null
0
o8iwxoi
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iwxoi/
false
1
t1_o8iwvo6
Have a 1-on-1 conversation with yourself; you've solved it! In all actuality, you are suffering from AI psychosis. A sycophantic AI and 8 hours of context overfill sounds like a dream.
1
0
2026-03-04T02:27:31
l33t-Mt
false
null
0
o8iwvo6
false
/r/LocalLLaMA/comments/1rk90fi/bypassing_billiondollar_safety_frameworks_via/o8iwvo6/
false
1
t1_o8iwpp1
I suggested the very same thing on another sub a while back and got downvoted to oblivion.
1
0
2026-03-04T02:26:32
roosterfareye
false
null
0
o8iwpp1
false
/r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iwpp1/
false
1
t1_o8iwjk2
Sure, there must be no Chinese on the Anthropic/OpenAI/Google teams.
1
0
2026-03-04T02:25:34
Key_Papaya2972
false
null
0
o8iwjk2
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iwjk2/
false
1
t1_o8iw8g9
Same. Debating 4TB vs 8TB. But definitely 128GB RAM.
1
0
2026-03-04T02:23:45
1-800-methdyke
false
null
0
o8iw8g9
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iw8g9/
false
1
t1_o8iw7wh
PM'd, let's run this.
1
0
2026-03-04T02:23:40
neoescape
false
null
0
o8iw7wh
false
/r/LocalLLaMA/comments/1ri0v3e/anyone_need_a_12channel_ddr5_rdimm_ram_set_for_an/o8iw7wh/
false
1
t1_o8iw5c3
Update with the full test scope so far (all runs done under a fixed prompt/seed/flags):

Hardware/topology:
- 2x RTX 3090 on B550 (non-P2P / `NO_PEER_COPY = 1`)
- Linux + CUDA
- `--split-mode layer -ngl 999`
- fixed params: `--seed 123 --temp 0 --top-k 1 --top-p 1.0 --flash-attn on`
- prompt: "Continue this sentence with one factual sentence: The capital of France is"

Hermes (dual GPU, 32k) — 15 total runs
- Quantified 10-run batch: 4 garbled, 6 coherent
- Additional 5-run rerun set: mixed behavior again (both garbled + coherent present)
- Takeaway: intermittent corruption, not deterministic fail-every-run

DeepSeek-R1-Distill-Llama-70B Q4_K_M (dual GPU, 8k) — 15 total runs
- Quantified 10-run batch: 5 garbled, 5 non-garbled
- Additional 5-run rerun set: mixed behavior again (garbled + non-garbled both present)
- Takeaway: same intermittent pattern, and it appears at least as unstable as Hermes in this setup

Other relevant result:
- DeepSeek 70B dual GPU @ 32k: context init fails with CUDA OOM (KV/cache allocation), so that path is memory-bound rather than useful for corruption diagnosis.

Overall conclusion: this still points to intermittent dual-GPU layer-split instability on a non-P2P topology (B550), not just an "old model architecture" issue or a single-model quirk. Detailed logs and ongoing analysis are in the llama.cpp issue thread: [https://github.com/ggml-org/llama.cpp/issues/20052](https://github.com/ggml-org/llama.cpp/issues/20052)
1
0
2026-03-04T02:23:16
MaleficentMention703
false
null
0
o8iw5c3
false
/r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8iw5c3/
false
1
t1_o8iw54e
Did you check the updates that Unsloth put out for the jinja? It might help and you can also increase the repetition penalty to something like 1.1 to see if that helps.
1
0
2026-03-04T02:23:13
knownboyofno
false
null
0
o8iw54e
false
/r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8iw54e/
false
1
t1_o8iw35m
$200 discount?
1
0
2026-03-04T02:22:54
1-800-methdyke
false
null
0
o8iw35m
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iw35m/
false
1
t1_o8ivkhl
happy to hear that! Will DM
1
0
2026-03-04T02:19:52
alichherawalla
false
null
0
o8ivkhl
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8ivkhl/
false
1
t1_o8ivk2u
I think a UD_IQ3 quant would be worth it if you can fully offload to GPU. I-quants tend to preserve performance more for STEM/coding, so it depends on your use case.
1
0
2026-03-04T02:19:48
Far-Low-4705
false
null
0
o8ivk2u
false
/r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ivk2u/
false
1
t1_o8ivjnw
You missed the headline: SSD in M5 Max MacBook Pros delivers over 14.5GB/s read and write speeds, making it roughly 2–2.5x faster than the SSD in last generation M4-based models, depending on the specific test.
1
0
2026-03-04T02:19:43
1-800-methdyke
false
null
0
o8ivjnw
false
/r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8ivjnw/
false
1
t1_o8ivh2u
I don't know what Heretic is... are you saying that it's easy to do this and it's not a skill that is looked for... sorry, I'm very new to this stuff.
1
0
2026-03-04T02:19:18
Mable4200
false
null
0
o8ivh2u
false
/r/LocalLLaMA/comments/1rk4ba9/crossplatform_discovery_total_refusal_bypass_via/o8ivh2u/
false
1
t1_o8ivggu
On my way home from work rn; will upload when I get home. Also, I forgot to mention that my Flappy Bird test was performed on a Q4_K_M GGUF, which took about 90% of my VRAM.
1
0
2026-03-04T02:19:13
17hoehbr
false
null
0
o8ivggu
false
/r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8ivggu/
false
1
t1_o8ive7b
They probably just want to switch to closed source 🤔 https://x.com/kevinsxu/status/2028926776605389165
1
0
2026-03-04T02:18:51
ANR2ME
false
null
0
o8ive7b
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8ive7b/
false
1
t1_o8iv5tu
Incredible how it reached the right conclusion multiple times, but was so convinced that it couldn't possibly be right, for seemingly no reason.
1
0
2026-03-04T02:17:27
Fit_West_8253
false
null
0
o8iv5tu
false
/r/LocalLLaMA/comments/1rk631c/qwen35_9b_q4_k_m_car_wash_philosophy_if_someone/o8iv5tu/
false
1