Hacker News | new | past | comments | ask | show | jobs | submit | login

dwaltrip's comments

One of the weaknesses is incentivizing people to do horrible things, if that makes their bet a winning one.

Time to wake up:

π*0.6: two and a half hours of unseen folding laundry (Physical Intelligence)

https://www.youtube.com/watch?v=ZpHapIlJnMo


Looks like the first two hours were spent trying to fold the same t-shirt :)

It’d be cool to see your process in depth. You should record some of your sessions :)

I mostly believe you. I have seen hints of what you are talking about.

But oftentimes I feel like I’m on the right track when I’m actually just spinning my wheels, and the AI is just happily going along with it.

Or I’m getting too deep on something and I’m caught up in the loop, becoming ungrounded from the reality of the code and the specific problem.

If I notice that and am not too tired, I can reel it back in and re-ground things. Take a step back and make sure we are on a reasonable path.

But I’m realizing it can be surprisingly difficult to catch that loop early sometimes. At least for me.

I’ve also done some pretty awesome shit with it that either would have never happened or taken far longer without AI — easily 5x-10x in many cases. It’s all quite fascinating.

Much to learn. This idea is forming for me that developing good “AI discipline” is incredibly important.

P.s. sometimes I also get this weird feeling of “AI exhaustion”. Where the thought of sending another prompt feels quite painful. The last week I’ve felt that a lot.

P.p.s. And then of course this doesn’t even touch on maintaining code quality over time. The “after” part, once the LLM implements something. There are lots of good patterns and approaches for handling this, but it’s a distinct phase of the process with lots of complexities and nuances. And it’s oh-so-tempting to skip or postpone. More so if the AI output is larger — exactly when you need it most.


It kind of sounds like you are saying it is impossible to improve on the current state of the world.

That if it were possible to improve things, someone would have already done it. And they haven’t, so it must not be possible.

That feels a bit extreme… Maybe I’m misunderstanding?


No, it is certainly possible to come up with an innovation that allows progress.

But the tone I get from discussions about repairability and performance is that it would be trivial to make the device, if only businesses wanted to.

However, given the fact that it hasn’t happened yet from a variety of alternative manufacturers, the probability seems very low that the ideal device is possible with current technology at a price that is viable.

Basically, it is a competitive market (or was), and what won out was what was possible. Barring some leap in technology, it is unrealistic to assume we can do better without suffering tradeoffs.


Are you saying there isn’t an actual sycophancy problem?

We are talking about overall patterns here, not the experience of a small subset of skilled and careful users.


> (b) bombing is very expensive so nobody actually profits from the insider trading

The people profiting aren't buying the bombs with their own money.


> up to the point where it could be illegal misappropriation

Huh..?

> And then taking the moral highground and being judgemental about people because they worked in gambling is probably something one should reconsider.

Ah I see.


HN occasionally devolves into “supremely pedantic and nitpicky” mode. Today is one of those days.


If you tried for a few more minutes you would have figured it out.


Maybe they meant un-uninstallable?


oh yeah, that's actually how I read it though now I realize it's nonsensical... like when someone says "I could care less" when they actually mean "couldn't"

