
It's cool how rapidly these oft-called "fundamental problems with LLMs" vanish with bigger/better models.


GPT-4 examples elsewhere in the comments suggest otherwise.


That just shows you've misunderstood. These aren't examples of fundamental problems; they're clear evidence that these things are just autocomplete, no matter how many people think they're doing something more complex. The interesting part isn't that getting the example right would be impressive, it's that getting it wrong is a clear sign of stupidity.




