
What's your take: Qwen3.5-35B-A3B or Qwen3-Coder-Next?


In my experience Qwen3.5 is better even at smaller distillations. From what I understand the Qwen3-next series of models was just a test/preview of the architectural changes underpinning Qwen3.5. So Qwen3.5 is a more complete and well trained version of those models.


In my experience Qwen3-Coder-Next is better. I ran quite a few tests yesterday and it was much better at using tool calls properly and understanding complex code. For its size, though, Qwen3.5-35B was very impressive. Coder-Next is an 80B model, so I think it's just a size thing. Also, for whatever reason, Coder-Next is faster on my machine; the only model that's competitive in speed is GLM 4.7 Flash.


What do you use as the orchestrator? By this I mean opencode, or the like. Is that the right term?


I use the term "harness" for those - or just "coding agent". I think orchestrator is more appropriate for systems that try to coordinate multiple agents running at the same time.

This terminology is still very much undefined though, so my version may not be the winning definition.


I'm basically using the agentic features of the Zed editor: https://zed.dev/agentic

It's really easy to set up with any OpenAI-compatible API. I self-host Qwen3-Coder-Next on my personal MBP using LM Studio, and I dial in from my work laptop with Zed and Tailscale so I can connect from wherever I might be. It can do all sorts of things: run linting checks and tests, look for issues, refactor code, create files, and so on. I'm definitely still learning, but it's a pretty exciting jump from just talking to a chatbot and copying and pasting things manually.
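For reference, pointing Zed at a local OpenAI-compatible server is roughly a settings.json entry like the sketch below. The exact schema is an assumption (check Zed's current docs); LM Studio's default server port is 1234, and the model name here is just a placeholder for whatever LM Studio reports.

```json
{
  "language_models": {
    "openai": {
      "api_url": "http://localhost:1234/v1",
      "available_models": [
        { "name": "qwen3-coder-next", "max_tokens": 32768 }
      ]
    }
  }
}
```

With Tailscale, you'd swap `localhost` for the MBP's tailnet hostname so the same config works from anywhere.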


Another vote in favour of "harness".

I'm aligning on "agent" for the combination of harness + model + context history (so after you fork an agent, you now have two distinct agents).

And "orchestrator" means the system that runs multiple agents together.
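That harness + model + context split can be sketched as a tiny data structure. This is purely illustrative (the names and fields are mine, not any real harness's API), but it shows why a fork yields two distinct agents:

```python
import copy
from dataclasses import dataclass, field

@dataclass
class Agent:
    harness: str   # the coding agent / harness, e.g. "zed" or "opencode"
    model: str     # the backing model, e.g. "qwen3-coder-next"
    context: list = field(default_factory=list)  # message history

    def fork(self) -> "Agent":
        # A fork keeps the same harness and model but takes its own
        # copy of the context history, so the two agents diverge from here.
        return Agent(self.harness, self.model, copy.deepcopy(self.context))

a = Agent("zed", "qwen3-coder-next", [{"role": "user", "content": "hi"}])
b = a.fork()
b.context.append({"role": "assistant", "content": "hello"})
assert (len(a.context), len(b.context)) == (1, 2)  # histories now differ
```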


This has also been my understanding of all of these terms so far.


We don't have a Qwen3.5-Coder to compare with, but there is a chart comparing Qwen3.5 to Qwen3 including Qwen3-Next[0].

[0] https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/visuali...


In my tests, Qwen3.5-35B-A3B is better; there's no comparison. Better tool calling and reasoning than Qwen3-Coder-Next for medium-sized HTML/JS coding tasks. Beware the quants and llama.cpp settings: they matter a lot, and you have to try out a bunch of different quants to find one with acceptable settings for your hardware.
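As an illustration, a typical llama.cpp server invocation for one quant looks something like this. The model filename and values are placeholders, and the right numbers depend heavily on your hardware; the flags are standard `llama-server` options:

```shell
# Placeholder filename: pick the GGUF quant (Q4_K_M, Q5_K_M, ...) that
# fits your VRAM, then tune context size and GPU offload per model.
llama-server \
  -m Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --ctx-size 32768 \
  --n-gpu-layers 99 \
  --temp 0.7 \
  --port 8080
```

This exposes an OpenAI-compatible endpoint, so the same harness setup discussed above works against it.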



