Are we talking about Copilot in particular or AI code assistants in general?
The value, for me, is extremely high.
My teammates feel the same. Our shared opinion/experience is that ChatGPT 4 is better than Copilot in general, but Copilot shines in-editor because it's aware of your project. So we use both in tandem. They mostly use ChatGPT and I split about 50/50. (Note: I'm using the Copilot X beta, which I believe uses GPT-4.)
People say they're "only good for boilerplate code" but well, that's the vast majority of what anybody is writing IMO.
If I need to traverse a tree or list or something, I'm letting AI write that code. Could I write it myself faster? No, and it's going to have an off-by-one error some non-zero portion of the time if I write it. I also find it superior to e.g. memorizing all 10,000 CSS properties along with all the classes that pertain to Bootstrap or Tailwind or whatever.
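To illustrate (my own sketch, not actual Copilot output): this is the kind of hand-rolled traversal where boundary bugs creep in, and exactly the kind of boilerplate an assistant will autocomplete from a one-line comment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: "Optional[Node]" = None
    right: "Optional[Node]" = None

def inorder(root: Optional[Node]) -> list:
    """Iterative in-order traversal -- easy to get subtly wrong by hand."""
    out, stack, node = [], [], root
    while node or stack:
        while node:              # walk to the leftmost node, stacking parents
            stack.append(node)
            node = node.left
        node = stack.pop()       # visit the deepest unvisited node
        out.append(node.value)
        node = node.right        # then traverse its right subtree
    return out
```

For a binary search tree, `inorder` yields the values in sorted order, which is a cheap sanity check on the traversal.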
I see the AI code assistant hate here and it just baffles me. It's so obviously useful to me, and I really can't imagine I'm that atypical.
Edit 1: AI help is especially pertinent if you are a "full stack coder" who is working on everything from database to frontend. Since frontends really multiplied in complexity about 15 years ago, I have not met a single "full stack engineer" who is truly fluent and expert in the entire db->app->frontend stack, because complexity and choice have proliferated at each of those levels.
Edit 2: While most of us are (hopefully) not literally writing tree or list traversals by hand in our actual daily programming lives, I hope my meaning is still clear -- I'm talking about that mundane sort of code, iterating over things, etc.
> If I need to traverse a tree or list or something, I'm letting AI write that code. Could I write it myself faster? No, and it's going to have an off-by-one error some non-zero portion of the time if I write it.
Many languages/companies have existing well understood solutions that _won't_ have errors. Maybe that is the disconnect? I can't remember the last non-interview time I had to write a non-trivial traversal.
> Many languages/companies have existing well understood solutions that _won't_ have errors.
I admit: I chose poor examples in my above post.
In a literal sense it has been years since I wrote a tree or list traversal by hand and I would be very surprised and concerned to see a PR where somebody is doing it by hand rather than using a library.
But, I hope my meaning comes through despite that. I mean the sort of mundane "iterate through a thing, and do a thing with some of the things" sort of code that many/most of us are writing on a regular, hour-to-hour basis.
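A concrete sketch of that "iterate through a thing, and do a thing with some of the things" code (the names `orders`, `status`, and `total` are made up for illustration):

```python
# Hypothetical everyday task: sum the totals of the active orders.
orders = [
    {"id": 1, "status": "active", "total": 19.99},
    {"id": 2, "status": "cancelled", "total": 5.00},
    {"id": 3, "status": "active", "total": 42.50},
]

active_revenue = sum(o["total"] for o in orders if o["status"] == "active")
```

Trivial on its own, but this shape of code gets written dozens of times a day, and it's the sweet spot for inline completion.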
> Maybe that is the disconnect?
Maybe! Another disconnect might be the level of polyglot one is expected to be.
I'm generally a "full stack" web developer (currently switching between Python and Ruby on the backend) and I don't mind admitting: frontend crap changes fast enough that I can't possibly keep up with it. In my experience nobody is expert in the whole stack. Altogether it's just a really big surface area of Shit I Need To Know. AI is very welcome here for me.
Other coders might have a smaller surface area of shit they need to know, and they already know it inside and out, and therefore see no real value add from an AI buddy who is not correct and optimal 100% of the time.
Having used Copilot for a year now I very much doubt this figure. In my experience it only works well in boilerplate kind of situations where most code is copy/paste work anyhow. As soon as the code gets a little complicated it stops working well. It has also gotten quite slow for me lately. So I doubt it increases my work efficiency by more than 5%, but I do like it for reducing strain on the hands. For that I find the price appropriate.
> In my experience it only works well in boilerplate kind of situations where most code is copy/paste work anyhow.
As a data point, this matches my personal observations. But reducing the time spent on that boilerplate, plus not needing to search for where the "copy" portion comes from, may justify $30/month (and probably much more than that). My 2c.
The quote from the article says "more than 1.5 million people have used it and it is helping build nearly half of Copilot users’ code"
Not a native speaker, but to me this sounds much more ambiguous than "up to 50% of code is produced by Copilot."
Also, how different is this actually from previous solutions? I use autocomplete and code snippets extensively. I've never measured it, but I wouldn't be surprised if my IDE had generated more source code than I myself typed over the last 10 or so years.
It doesn't sound ambiguous to me. It says that those people have Copilot enabled while they write more than half of their code. AKA it's on in the editor they use for most things.
I mean you could have made a similar argument about the productivity benefits of smoking cigarettes 100 years ago. Just because a lot of people are doing something doesn't mean it's valuable or that we have an accurate picture of the cost/benefits. The verdict is still very much out on LLMs.
Any new product that gained over 1M paying subscribers (GitHub Copilot) in its first year is a success. You may not like it, and that's allowed. And it may not help you, but over 1M subscribers is a lot of people.
They are definitely here to stay until they are superseded by even better technology or get sued into oblivion.
They don't cite any source; the wording sounds ambiguous enough that I suspect it's not actually that 50% of their code is generated by Copilot. Is there something a bit more convincing elsewhere?
I really doubt this.
IMO, the jury is still out deciding if the value is above zero for enough people to matter.