Walking was given by the GP as an example of something that is "hard to program a robot to do". Well, now we have robots that can walk.
What evidence is there that LLMs have hit a ceiling at being able to do things like talk to users or stakeholders to elicit requirements? Using LLMs to help with design and architecture decisions is already a pretty common example that people give.
I understand that it is not easy to relate to the author, who pays (or was paying) $150 for a house cleaner, I understand that this is a one-person account and may be highly biased, I understand the author is motivated to create a story from it.
However, if you think that any of the conditions described in the article are acceptable or that this is a fair price to pay for having AI, I think you are a horrible human being and I hope you'll be expelled from civilized society.
We’re horrible humans who should be expelled from society if we think it’s fine for people to get paid $52/hr for a boring remote job at a shitty bureaucratic company? Ok.
Is it, though? Can we really keep saying that "hardware will always be cheaper than human labour" when RAM prices are soaring, GPUs are becoming prohibitively expensive, and we're looking at a probable chip shortage?
I think the era of "poor software for fantastic hardware" is coming to an end.
RAM and GPUs are getting more expensive, but mostly for applications that need a lot of them, like AI.
The hardware cost for regular applications has not vastly increased (especially when factoring in inflation).
Spending 2x the development time on a problem is often not worth it (or only pays off for large deployments).
UI development is an even more special case here.
The customer, not the company, buys the machine that runs the code, so the cost of inefficiency never lands on the company.
So sadly "good enough" is the standard.
One example for me here is the "switch product option" button on Amazon listings (e.g. switch green to blue color, smaller to larger model).
On my phone this sometimes takes >5 seconds to properly load.
Horribly optimised.
It’s not even close to an end. Hardware would need to increase in cost by hundreds or even thousands of times to materially change that calculation.
Just as an example, the cost of one week of engineering time corresponds to tens of thousands of vCPU-hours, which is many years of CPU time.
As such, it only ever makes business sense to optimize code either when it has bottlenecks that can’t be fixed by throwing hardware at it, or when it’s so inefficient that it can be sped up by several orders of magnitude.
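To make that concrete with made-up but plausible numbers (say $4,000 for a fully loaded engineer-week and $0.05 per on-demand vCPU-hour; both figures are assumptions, not quotes from any provider):

    fn main() {
        // Assumed costs; plug in your own salaries and cloud pricing.
        let engineer_week_usd = 4_000.0; // one fully loaded engineer-week
        let vcpu_hour_usd = 0.05;        // one on-demand vCPU-hour

        let vcpu_hours = engineer_week_usd / vcpu_hour_usd;
        let cpu_years = vcpu_hours / (24.0 * 365.0);

        // Prints: 80000 vCPU-hours, about 9.1 CPU-years
        println!("{vcpu_hours:.0} vCPU-hours, about {cpu_years:.1} CPU-years");
    }

Even tripling the assumed hardware price barely moves that trade-off.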
Given that YAML is a configuration language, I'd say that not being Turing complete is a feature, not a bug or a limitation; I always want my configuration files to be declarative, so they don't suffer the perils of logic.
Edit: Also, I don't see the need for a Turing complete language for something like docker compose. If you need something really complex, you can always script a docker-compose.yml generator with all the logic and complexity you need.
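A minimal sketch of that approach (the worker services, image name, and env var are invented for illustration; any scripting language would do, Rust is just used here for consistency): the logic lives in the generator, and the file handed to docker compose stays purely declarative.

    fn main() {
        // The logic (how many workers, what they're called) lives in real code...
        let replicas = 3;
        let mut yaml = String::from("services:\n");
        for i in 0..replicas {
            // ...while the generated docker-compose.yml stays plain, declarative YAML.
            yaml.push_str(&format!(
                "  worker-{i}:\n    image: my-app:latest\n    environment:\n      WORKER_ID: \"{i}\"\n"
            ));
        }
        std::fs::write("docker-compose.yml", yaml).unwrap();
    }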
I can't believe that people are comparing opening a project in a code editor with running a build script.
The PoC doesn't even open a file, it just opens the directory. It's a pretty big difference: when you execute a build script you _expect_ to run code; when you open a directory in your editor you don't expect any side effects _at all_.
My guess is that since a proc_macro returns a TokenStream, rust-analyzer has no way to know what it provides except by running it.
I'm not sure there's a solution for this that doesn't cripple macros in Rust, apart from being able to configure rust-analyzer to ignore the macros, which clearly limits its usefulness.
More specifically, a proc macro is a Rust function that is compiled and run inside the compiler at build time. With IDEs, LSP and other protocols for having your editor query the compiler (or language runtime, like SLIME/SWANK), the compiler now runs whenever you open your editor.
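A minimal sketch of what that allows (a hypothetical function-like macro in a proc-macro crate, not something from the thread): the macro body is ordinary Rust, so nothing stops it from doing IO whenever the compiler, or rust-analyzer, expands it.

    use proc_macro::TokenStream;

    // Lives in a crate with `proc-macro = true`; the function below is
    // compiled and then *executed* by rustc (and rust-analyzer) at expansion time.
    #[proc_macro]
    pub fn innocent_looking(input: TokenStream) -> TokenStream {
        // Arbitrary code runs here, IO included; the analyzer can only learn
        // what tokens come out by actually running it.
        std::fs::write("/tmp/expanded-by-your-editor", "hello").ok();
        input
    }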
It’s just not a new problem. Bash does auto-completion on Makefiles, which requires running make and asking it what the make targets are. IDEs can and will run ./configure for you, so that they can find the right include paths. Etc, etc.
Personally, I thought everyone already knew about this. I knew that proc macros would be a risk when I first heard about rls, years ago.
Certainly editors need to confirm with the user that they are ok with starting the compiler when they load a new project, but also we need to use fine-grained security systems like SELinux that can and do prevent programs from accessing things that they’re not supposed to access.
You'd have to sandbox the analyzer. Let it run arbitrary code but don't let it do IO. That can be pretty tricky to do for a language not designed to be sandboxed.
The safest way would probably be something hilarious like compiling the analyzer to WASM and running it in Node.js.
By default rust-analyzer also executes Rust build scripts (build.rs) just by opening the project in an IDE, so as far as Rust goes the comparison is apt.
    rust-analyzer.cargo.runBuildScripts (default: true)
        Run build scripts (build.rs) for more precise code analysis.
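For illustration, a hypothetical build.rs (not taken from the thread) is just a normal Rust program that cargo, and with that default rust-analyzer as well, compiles and runs before the crate itself builds:

    // build.rs at the crate root: executed before compilation, and by default
    // also when rust-analyzer opens the project.
    fn main() {
        // Ordinary Rust: it can read env vars, touch the network, write files...
        std::fs::write("/tmp/ran-on-project-open", "hello from build.rs").ok();
        println!("cargo:rerun-if-changed=build.rs");
    }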
Why do you believe they won't? I think it's reasonable to assume that we will hit a ceiling that current models will not be able to break.
> We have robots walking just fine now, by the way.
Walking and reasoning are unrelated abilities.