Hacker News | matrix2596's comments

Looks like Flash Attention concepts applied to k-means. Nice speedup results.
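A minimal sketch of how I read the idea, assuming the trick is Flash-Attention-style tiling (the function and tile size below are my own illustration, not the post's code): stream over centroid tiles and keep a running argmin, so the full N x K distance matrix is never materialized.

    import numpy as np

    def assign_tiled(x, centroids, tile=32):
        # x: (N, D) points, centroids: (K, D).
        # Process centroids tile by tile, keeping a running argmin,
        # so the full N x K distance matrix never exists at once.
        n = x.shape[0]
        best_dist = np.full(n, np.inf)
        best_idx = np.zeros(n, dtype=np.int64)
        x_sq = (x ** 2).sum(axis=1)
        for start in range(0, centroids.shape[0], tile):
            c = centroids[start:start + tile]
            # Squared distances to this tile only: shape (N, tile).
            d = x_sq[:, None] - 2.0 * (x @ c.T) + (c ** 2).sum(axis=1)[None, :]
            j = d.argmin(axis=1)
            dist = d[np.arange(n), j]
            better = dist < best_dist          # online argmin update
            best_dist[better] = dist[better]
            best_idx[better] = start + j[better]
        return best_idx

    x = np.random.randn(512, 32)
    c = np.random.randn(96, 32)
    naive = ((x[:, None, :] - c[None, :, :]) ** 2).sum(-1).argmin(1)
    assert np.array_equal(assign_tiled(x, c), naive)

Same assignments as the naive version, without the O(N*K) memory blowup, which is exactly the flash-attention move of trading materialization for a running reduction.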


Gemini 3.1 Pro is based on Gemini 3 Pro


Lol, and this line:

> Geminin 3.1 Pro can comprehend vast datasets

Someone was in a hurry to get this out the door.


Yeah, they "officially" don't release benchmarks, even when we asked the AWS reps.


Awesome to see sparse attention used in a real-world setting.
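For anyone curious, a toy sketch of one common variant, sliding-window (local) attention; the actual sparsity pattern used here may well differ, and the function below is my own illustration:

    import numpy as np

    def local_attention(q, k, v, window=4):
        # q, k, v: (T, D). Each query attends only to keys within
        # `window` positions, so cost is O(T * window), not O(T^2).
        t, d = q.shape
        out = np.zeros_like(v)
        for i in range(t):
            lo, hi = max(0, i - window), min(t, i + window + 1)
            scores = q[i] @ k[lo:hi].T / np.sqrt(d)
            w = np.exp(scores - scores.max())
            w /= w.sum()
            out[i] = w @ v[lo:hi]
        return out

    q = k = v = np.random.randn(16, 8)
    print(local_attention(q, k, v).shape)  # (16, 8)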


Well said. Just numbers with no plan or investment behind them is even worse.


People don't realize how lucky US citizens are, just by the accident of being born in the US.


Feels like our luck is running out


I wonder why India has not had the same success when it's in a similar situation. Is democratic governance worse than authoritarianism (at least in China's case)? Or would you say India will eventually catch up and be better off for its democracy?


Indian voters care more about religion, language, and caste than about development.


You missed another important point: corruption.


Is it possible for your tokenizer to ever give a different tokenization than the OpenAI tokenizer? I'm asking because there are multiple ways to tokenize the same string. Sorry if I'm mistaken.


Should be the same. Both use Byte-Pair Encoding (BPE) as the underlying algorithm, and BPE applies its merge rules greedily in a fixed priority order, so the same merge table always produces the same tokenization.
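A toy sketch of why that holds (the merge table below is made up, not OpenAI's real one): merges are applied greedily by rank, so the output is fully deterministic.

    def bpe_encode(text, merges):
        # Start from single characters (real BPE starts from bytes).
        tokens = list(text)
        while True:
            # Find the adjacent pair with the best (lowest) merge rank.
            best = None
            for i in range(len(tokens) - 1):
                rank = merges.get((tokens[i], tokens[i + 1]))
                if rank is not None and (best is None or rank < best[0]):
                    best = (rank, i)
            if best is None:
                return tokens
            i = best[1]
            tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]

    # Toy merge table: lower rank = higher priority.
    merges = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
    print(bpe_encode("lower", merges))  # ['low', 'er'] every time

Two tokenizers could only disagree if they shipped different merge tables or pre-tokenization rules, not because BPE itself is ambiguous.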


Really interesting to see the deep dive into PCB design and EMI considerations here. It's a good reminder of how much thought goes into balancing cost, manufacturability, and compliance, even for hobbyist products. The point about dedicating one layer to a near-continuous ground plane is especially practical, and it's striking how seemingly minor layout choices can have big implications for signal integrity. Thanks to everyone for sharing their expertise; it's one of the things that makes this community so valuable.


yes


Best of luck.

