It’d be cool to see your process in depth. You should record some of your sessions :)
I mostly believe you. I have seen hints of what you are talking about.
But oftentimes I feel like I’m on the right track when I’m actually just spinning my wheels, and the AI is happily going along with it.
Or I’m getting too deep on something and I’m caught up in the loop, becoming ungrounded from the reality of the code and the specific problem.
If I notice that and am not too tired, I can reel it back in and re-ground things. Take a step back and make sure we are on a reasonable path.
But I’m realizing it can be surprisingly difficult to catch that loop early sometimes. At least for me.
I’ve also done some pretty awesome shit with it that either would have never happened or taken far longer without AI — easily 5x-10x in many cases. It’s all quite fascinating.
Much to learn. This idea is forming for me that developing good “AI discipline” is incredibly important.
P.s. sometimes I also get this weird feeling of “AI exhaustion”. Where the thought of sending another prompt feels quite painful. The last week I’ve felt that a lot.
P.p.s. And then of course this doesn’t even touch on maintaining code quality over time. The “after” part, once the LLM implements something. There are lots of good patterns and approaches for handling this, but it’s a distinct phase of the process with its own complexities and nuances. And it’s oh-so-tempting to skip or postpone. More so when the AI output is larger — exactly when you need it most.
No, it is certainly possible to come up with an innovation that allows progress.
But the tone I get from discussions about repairability and performance is that it would be trivial to make the device, if only businesses wanted to.
However, given the fact that it hasn’t happened yet from a variety of alternative manufacturers, the probability seems very low that the ideal device is possible with current technology at a price that is viable.
Basically, it is a competitive market (or was), and what won out was what was possible. Barring some leap in technology, it is unrealistic to assume we can do better without suffering tradeoffs.
oh yeah, that's actually how I read it, though now I realize it's nonsensical... like when someone says "I could care less" when they actually mean "couldn't"