Hacker News | new | past | comments | ask | show | jobs | submit | bytesandbits's comments | login

We constantly underestimate the power of inference scaffolding. I have seen it in every domain: coding, ASR, ARC-AGI benchmarks, you name it. Scaffolding can do a lot, and so can post-training. I am confident our current pre-trained models can exceed 80% on this benchmark with the right post-training and scaffolding. That being said, I don't think ARC-AGI proves much. It is not a useful task in the wild; it is just a game, and a strange and confusing one. For me this is a pointless pseudo-academic exercise: good to have, but it by no means measures intelligence, and even less the utility of a model.

That's unsurprising, given that a lot of our own abilities as humans come from painstakingly acquired practices, methodologies, and tools (like pencil and paper, note-taking, let alone algebra, formal methods, and electromechanical aids). We call this "education", but it works in a way that is more similar to agentic harnesses than to pretraining or fine-tuning. This is reflected in the fundamentally different ways in which children and adults learn new skills.

Scaffolding is all you need. I am absolutely certain about that. It's about finding good ways to approximate, at inference time, the reward function used during post-training. A general enough reward that can score candidates well will inevitably improve the abilities of LLMs when put inside scaffolds.
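A minimal sketch of the idea: best-of-n reranking, the simplest scaffold that uses a reward-like scorer at inference time. Here `generate` and `reward` are hypothetical stand-ins for a real sampler and scoring function.

```python
def best_of_n(generate, reward, prompt, n=8):
    """Inference-time scaffold: sample n candidates from the model,
    then keep the one the reward function scores highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=reward)
```

Any scorer that correlates with the post-training reward (a verifier, unit tests, a preference model) slots into `reward` without touching the model's parameters.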

what exactly does scaffolding mean in this context? genuine question

Anything that doesn't touch the model parameters once the model has been trained and compiled. For example, in streaming ASR with an encoder-decoder, you can gain accuracy just by improving the encoder-decoder orchestration: the ratio and frequency of forward passes, or dynamically adjusting the length of the rolling windows (if using full attention). Prompting is part of this too, including few-shot examples. So is the decoding strategy (top-k, nucleus, speculative decoding, greedy, or anything else), as is applying signal processing, or any kind of processing at all, to the input before it reaches the model, or to the output. There are a lot of things you can do.
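As a concrete decoding-strategy example, here is nucleus (top-p) sampling in plain NumPy (a toy sketch, not any particular library's implementation):

```python
import numpy as np

def nucleus_sample(logits, p=0.9, rng=None):
    """Sample a token id from the smallest set of tokens whose
    cumulative probability exceeds p (top-p / nucleus sampling)."""
    rng = rng or np.random.default_rng()
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]          # tokens by descending probability
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, p)) + 1
    keep = order[:cutoff]                    # the "nucleus"
    kept = probs[keep] / probs[keep].sum()   # renormalize inside the nucleus
    return int(rng.choice(keep, p=kept))
```

Swapping this in for greedy decoding changes the model's behavior without changing a single parameter, which is exactly the point.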

Also think about the program-synthesis approach proposed by Poetiq.ai: Python programs are generated and evaluated against previous examples, then in-context learning is done programmatically via prompt concatenation. If you can score the working and non-working examples online, you have a very strong reward signal.
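A toy version of that scoring loop (all names illustrative, not Poetiq's actual API): each candidate is Python source defining a hypothetical `solve` function, and the pass rate on known input/output examples is the reward.

```python
def score_candidate(src, examples):
    """Return the fraction of input/output examples the candidate solves."""
    ns = {}
    try:
        exec(src, ns)              # compile and define the candidate program
        fn = ns["solve"]
    except Exception:
        return 0.0                 # programs that don't even load score zero
    hits = 0
    for inp, expected in examples:
        try:
            if fn(inp) == expected:
                hits += 1
        except Exception:
            pass                   # runtime failures just don't count
    return hits / len(examples)
```

The scored candidates (and their failures) can then be concatenated back into the prompt for the next round of generation.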

Apple



4x faster PREFILL, not decode. Decode is bandwidth-bound; prefill is FLOPs-bound.


do you run two eSIMs when traveling and if so how is stability / battery life?


Always two SIMs/eSIMs running simultaneously. Compared to the previous non-Apple modem, it's night and day battery-wise.

Didn't notice any issues with connection speed/stability.


incredible work


sensei karpathy has done it again


Parakeet v3 has a much better RTFx (real-time factor) than Moonshine; it's not just about parameter count. It simply runs faster.

https://huggingface.co/spaces/hf-audio/open_asr_leaderboard


That was my experience when I tried Moonshine against Parakeet v3 via Handy. Moonshine was noticeably slower on my 2018-era Intel i7 PC, and didn't seem as accurate either. I'm glad it exists, and I like the smaller size on disk (and presumably RAM too). But for my purposes with Handy I think I need the extra speed and accuracy Parakeet v3 is giving me.


It is about parameter count if what you care about is edge devices with limited RAM. Beyond a certain size your model just doesn't fit; it doesn't matter how good it is, you still can't run it.


I am not sure what "edge" device you want to run this on, but you can compress Parakeet to under 500MB of RAM/disk with dynamic quantization and on-the-fly dequantization (GGUF-style, or CoreML centroid palettization), and retain essentially all of the accuracy.
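For the curious, centroid palettization is essentially 1-D k-means over the weights: you store a small codebook plus per-weight indices instead of full floats. A toy sketch (not Apple's actual CoreML implementation):

```python
import numpy as np

def palettize(weights, n_centroids=16, iters=10, seed=0):
    """Cluster weights into a codebook of n_centroids values plus indices.
    Storing 4-bit indices + 16 floats is roughly 8x smaller than FP32."""
    rng = np.random.default_rng(seed)
    flat = weights.ravel()
    centroids = rng.choice(flat, n_centroids, replace=False)
    for _ in range(iters):
        # assign each weight to its nearest centroid, then recenter
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for k in range(n_centroids):
            members = flat[idx == k]
            if members.size:
                centroids[k] = members.mean()
    idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids, idx.reshape(weights.shape)

def dequantize(centroids, idx):
    """On-the-fly dequantization: rebuild approximate weights from the codebook."""
    return centroids[idx]
```

At inference time only the codebook lookup runs, which is why the accuracy hit is so small relative to the memory savings.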

And just to be clear, 500MB fits even on a Raspberry Pi. Then your problem is not memory, it's FLOPS. It might run in real time on an RPi 5, which has around 50 GFLOPS of FP32, i.e. 100 GFLOPS of FP16, so about 20-50x less than a modern iPhone. I don't think it will quite keep up in real time, to be fair, but it will be close.
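The back-of-envelope math above as a tiny calculator (the per-second model cost and the sustained-efficiency fraction are assumptions, not measurements):

```python
def realtime_factor(device_gflops, model_gflops_per_audio_sec, efficiency=0.3):
    """Seconds of audio transcribed per wall-clock second."""
    sustained = device_gflops * efficiency   # fraction of peak actually sustained
    return sustained / model_gflops_per_audio_sec

# assumed: ~30 GFLOPs of compute per second of audio for the model
rpi5 = realtime_factor(100, 30)     # ~100 GFLOPS FP16 peak: borderline real time
iphone = realtime_factor(3000, 30)  # ~3 TFLOPS modern phone: comfortably faster
```

With these assumed numbers the RPi 5 sits right at the real-time boundary, which matches the "close, but maybe not quite" guess.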

Regardless, with such a quantization strategy this model runs at a 10x+ real-time factor even on 6-year-old iPhones (which you can acquire for under $200), offline and at a reasonable speed, essentially anywhere.

You get the best of both worlds: the accuracy of a whisper transformer at the speed and footprint of a small model.


maybe a deepseek v4 distill. give it a few days


It's because of a chain of events.

Next week is Chinese New Year -> Chinese labs release all their models at once before it starts -> US labs respond with what they have already prepared.

Also note that even in US labs a large proportion of researchers and engineers are Chinese, and many celebrate Chinese New Year too.

TLDR: Chinese New Year. Happy Year of the Horse, everybody!


It was not trained on Ascend; that is BS. It was a Hopper GPU cluster. Please remove that.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
