
Sure, although I still think a system with less of a contrast between how well it performs 'modally' and how badly it performs incidentally would be more practical.

What I wonder is whether LLMs will inherently always have this dichotomy and we need something 'extra' (reasoning, attention, or something less biomimicked), or whether this will eventually resolve itself (to an acceptable extent) as they improve even further.


