Hacker News | new | past | comments | ask | show | jobs | submit | sixtyj's comments | login

I was waiting for “this page has problems loading” on my iPhone :)

In media there was a 1-9-90 rule: 1% create, 9% comment, and 90% consume or are silent/don’t care.

Richard Branson realized that a company starts to behave differently once its staff grows beyond about 135 people, which coincides with the average number of people you can consider personally known to you.

Context switching is a bitch. You cannot do it for a long time. Abundance brought by AI will somehow consolidate as people cannot digest everything created by it.

There are more than 45,000 models available at HF (if I remember right). Choose wisely :)


One potential solution to this is AI summarization. Imagine coming home, and while preparing dinner your AI assistant recounts what happened in all your favourite TV shows that day. Then while you're doing the laundry, it tells you about all the new games it found and tested for you.

These are just thought starters, but something like this could significantly raise the ceiling on what one person is able to consume in a 24 hour period.


Nah. These would be pseudo calories.

Adults tend to forget that they gained their powers of reasoning by exercising them.

Getting a summary, the way you described it, means skipping the effort required to think about it. That only works for information you are already informed about.

This is related to the illusion of explanatory depth. Most of us “know” how something works, until we actually have to explain it. Like drawing a bicycle, or explaining how a flush works.

People in general are not aware of how their brains work, or of how much mental exercise they used to get from the way the world was set up.

I suppose we can set up brain gyms, where people can practice using mental skills so that they don’t atrophy?


This dev thinks that it knows everything /s

RSS’ death is real - 15 years ago, almost every news site had an RSS feed, and some had several. Today? An RSS feed is rare.

So if you want to build a news feed from news sites, you have to parse their HTML, and of course every site has its own structure. JS-powered sites are the painful ones.
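For contrast, when a site still serves RSS, the stdlib is enough. The feed snippet below is made up; real feeds would come from a site's own feed endpoint:

```python
import xml.etree.ElementTree as ET

# Made-up RSS snippet standing in for a fetched feed document.
rss = """<rss version="2.0"><channel>
  <title>Example News</title>
  <item>
    <title>Big story</title>
    <link>https://example.com/big-story</link>
    <pubDate>Thu, 05 Mar 2026 08:00:00 GMT</pubDate>
  </item>
</channel></rss>"""

root = ET.fromstring(rss)
# Every RSS item sits in a well-known <item> element, so no per-site code.
items = [
    {"title": i.findtext("title"), "link": i.findtext("link")}
    for i in root.iter("item")
]
print(items)  # [{'title': 'Big story', 'link': 'https://example.com/big-story'}]
```

The point of the comparison: with RSS this loop works for every site; without it, each site needs its own HTML-parsing code.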


> 15 years ago, almost every news site had a RSS feed, some had several ones. Today? RSS feed is rare.

It may be a reflection of where you get your news.

New York Times, Washington Post, Wall Street Journal, Radio Free Europe, Mainichi, and lots of other legitimate primary source Big-J journalism news sites have RSS.

Rando McRepost's AI-Generated Rehash Blog? Not so much.


I don't know; I also only use RSS (with the exception of Reddit, I think), so I would not even notice a website that a) provides content I want to get notified about rather than actively visit, and b) has no feed.

Reddit also has RSS feeds; add `.rss` to URLs.

There are feeds of everything. You just have to look harder.

edit: provide an example please



Uh, they lie about everything?

https://www.abc.net.au/news/feed/51120/rss.xml

I haven't fully examined it, but looking at the XML I see it was last built in 2026, with a headline about the Women's Asian Cup 2026.

abc.net.au/news/2026-03-05/matildas-iran-asian-cup-quick-hits-hayley-raso-mary-fowler/106413886


Oh that's wild. I guess the system is just on autopilot and nobody knew how to actually act on their policy change.

It's all about licensing sadly...


It is somehow less funny today but in the 90's we would say "is there something wrong with your hands?"

A truly funny story: I wrote an RSS aggregator, and one day I discovered some feeds had died without me noticing. I looked at a feed: it was gone. I looked at my aggregator, and the headlines were all there?!?!

Since I gather a lot of feeds, I couldn't help noticing that a very large number aren't well-formed. For example, in XML attributes the & (in URLs) is supposed to be written as &amp;, but if you do that, many aggregators won't be able to parse it.
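The escaping rule in question can be checked with the stdlib (the URL here is a made-up example):

```python
from xml.sax.saxutils import escape, unescape

url = "https://example.com/feed?page=2&sort=new"
escaped = escape(url)            # what a well-formed feed should contain
print(escaped)                   # https://example.com/feed?page=2&amp;sort=new
print(unescape(escaped) == url)  # True
```

A bare `&` inside an XML attribute or element is technically invalid, which is exactly why strict parsers reject so many real-world feeds.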

Every other month I wrote little bits of code to address the most annoying issues: 1) if I can't find a <link> or <guid> etc., I eventually just gather the <a>'s and take the href; 2) if I really can't find a title for an item, I fall back on whatever is in the <a>, since I was gathering those anyway; 3) if I can't even find an <item>, I just look for the things that are supposed to go inside the <item>; 4) if I can't find a proper timestamp, I'll try to parse one out of the URL; 5) if the URLs are relative paths, I complete them.

What was actually going on: the feed was gone, and it redirected to the home page. In an attempt to parse the "xml", my code eventually resorted to gathering the URLs and titles from the <a>'s and building valid timestamps from the URLs.
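A rough sketch of those fallback heuristics; the element names (`<item>`, `<guid>`) come from RSS, but the regex, class, and sample markup below are illustrative, not the commenter's actual code:

```python
import re
from html.parser import HTMLParser

DATE_IN_URL = re.compile(r"/(\d{4})-(\d{2})-(\d{2})/")

def timestamp_from_url(url):
    """Heuristic: if no usable date is found, try the URL path."""
    m = DATE_IN_URL.search(url)
    return "-".join(m.groups()) if m else None

class AnchorFallback(HTMLParser):
    """When <item>/<guid>/<link> are missing (e.g. the 'feed' is really
    a redirected HTML home page), harvest <a> tags and use their text
    as the title."""
    def __init__(self):
        super().__init__()
        self.items, self._href, self._buf = [], None, []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href, self._buf = dict(attrs).get("href"), []

    def handle_data(self, data):
        if self._href is not None:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            self.items.append({
                "link": self._href,
                "title": "".join(self._buf).strip(),
                "date": timestamp_from_url(self._href),
            })
            self._href = None

p = AnchorFallback()
p.feed('<a href="/news/2026-03-05/matildas/106413886">Quick hits</a>')
print(p.items)
# [{'link': '/news/2026-03-05/matildas/106413886', 'title': 'Quick hits', 'date': '2026-03-05'}]
```

Applied to an HTML home page instead of a dead feed, a fallback like this would silently keep producing plausible items, which matches the anecdote above.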


Not exactly a "news" site, but this is still an example site that you'd expect would have a feed:

https://mistral.ai/news/


Mistral actually used to serve a feed, up until 6-ish months ago, I guess? Their admin console used to be built with HTMX too, which I found kind of interesting.

Now the news site and admin console are all in Next.js, slow, and with no feed.



The device should have been accompanied by a lot of examples so people are really aware of how stored data could be misused. Alexa or any other similar device: their users are technically illiterate. Do you remember the leaks of movie stars’ iPhone images? Multiply it by thousands… Court orders, burglars, hackers - all the bad actors imaginable…

For you, as the producer, those situations can be a nightmare if not well described in the operating conditions. And devices should not come pre-set-up (don’t be “Google-evil”: they track everything unless you set it up differently, and the setting is always hidden deep in a third-level menu under 2-step verification).


> This isn’t an accident. This is the result of two decades of deliberate, calculated effort by the largest technology companies on earth to turn users into consumers, instruments into appliances, and technical literacy into a niche hobby for weirdos. They succeeded beyond their wildest expectations. Congratulations to everyone involved. You’ve built a generation that can’t extract a zip file without a dedicated app and calls it innovation.

As a power user, I feel like a weirdo when trying to explain something that I take for granted. :)

Total Commander/Norton/Midnight Commander, bash, cron, portable apps, zipping a file, automating email processing, having a non-Gmail address, Markdown, “don’t touch the mouse” editing, PDF manipulation, block editing in Sublime Text (don’t even mention vi/vim or Emacs :)


That's a wide spectrum. Not understanding that gmail isn't email is well into "How do you not know this?" territory. Whereas only very specific users know about Bash and Emacs. I do often have that experience of needing to climb 47 levels upward to successfully explain something to someone. Right now I'm just intrigued by the fact that I can go out into my neighbourhood and nobody will know what 90% of these things are, yet I'm probably far from the only person on this forum who recognizes and has experience with the vast majority of that list.


Well… it is happening. You can’t put spilled milk back in the bottle. But you can add future requirements that try to stop this behaviour.

E.g. the submission form could include a mandatory field: “I hereby confirm that I wrote the paper personally.” The conditions would note that violating this rule can lead to a temporary or permanent ban of the authors. In a world where research success is measured by points in WoS, this could help slow down the rise of LLM-generated papers.


Maybe we need to find a new metric to judge academics by, beyond quantity of papers.


Unironically, maybe they should be scored by LLMs? My first thought was that the reviewers could score the papers but that would lead to even more group-think.

Ideally whoever is paying the academics should just be paying attention to their work and its worth, but that would be crazy.


This approach dismisses the cases where AI submissions generally are better.

I don't think this is appreciated enough: a lot of AI adoption is not happening for cost savings at the expense of quality. Quite the opposite.

I am in the process of replacing my company's use of Retool with an AI-generated backoffice.

First and foremost for usability, velocity, and security.

Secondly, we also save a buck.


> This approach dismisses the cases where AI submissions generally are better.

You’re perhaps missing the not so subtle subtext of Peter Woit’s post, and entire blog, which is:

While AI is getting better, it’s still not _good_ by the standards of most science. However, it is as good as hep-th, where (according to Peter Woit) the bar is incredibly low. His thesis is part “the whole field is bad” and part “the arXiv for this subfield is full of human slop.”

I don’t have the background to engage with whether Peter Woit’s argument has merit, but it’s been consistent for 25+ years.


My comment was more an answer to the proposed gatekeeping of science as a human activity.

Yes, AI is still not good in the grand scheme of things. But everybody actively using it has gotten concerned over the past 2 months by the leapfrogging of LLMs, and surprised, as they thought we had arrived at the plateau.

We will see in a year or two if humans still hold an advantage in research - currently very few do in software development, despite what they think about themselves.


> gatekeeping of science as a human activity

The other side of the coin is: automating science as a machine activity.

Is that what we want? I agree with you that the use of language models in science is an inevitable paradigm shift, but now is the time to make collective decisions about how we're going to assimilate this increasingly super-human "intelligence" into academic practices, and the rest of daily life. Otherwise we will be the ones being assimilated by a force beyond our control.

The progress is so rapid that the only people who might have control over the process are the ones with self-interest, mainly financial, and not aligned with - in some aspects opposed to - the interests of humanity.


> Is that what we want?

Only if there are some very fundamental and convincing arguments that are still not uncovered.

We can't protect science while letting services like medical care stay too expensive for people to have access to them.

That would introduce new social classes: people who do science get unnecessary protection; everybody else does not.

That is not going to fly.


It's already automated. Do you think astronomers manually count stars, or that medical scientists manually run chemical reactions? Why is automation by AI wrong when all other automation was beneficial?


The single most valuable part of science is keeping the gates: not adding things to the corpus of scientific knowledge unless they can be properly substantiated.


What about the new result that was recently derived by GPT 5.2 Pro/Deep Research? That was also hep-th. https://openai.com/index/new-result-theoretical-physics/ https://arxiv.org/abs/2602.12176


LLMs are really eager to start coding (as interns are eager to start working), so the sentence “don’t implement yet” has to be used very often at the beginning of any project.


Most LLM apps have a 'plan' or 'ask' mode for that.


I find that even then I often need to be clear that I'm just asking a question and don't want them running off to solve the larger problem.


Plug it into the skull bone. Neuralink plus a slot for a model that you can buy in a grocery store, like a prepaid Netflix card.


We better solve the energy usage and cooling first, otherwise that will be a very spicy body mod.


Anything bigger in context? Unfortunately - maybe I have bad luck…

But I don’t get how they code at Anthropic when they say that almost all of their new code is written by an LLM.

Do they have some internal, much smarter model that they keep secret and don’t sell to customers? :)


>> when they say that almost all their new code is written by LLM.

Keeping in mind they are trying hard to sell their code assistant, what else can they say?

The goal is simple: just lie your way forward to the next VC funding round.

