From the article:
> A 2024 GitHub survey found that nearly all enterprise developers (97%) are using Generative AI coding tools. These tools have rapidly evolved from experimental novelties to mission-critical development infrastructure, with teams across the globe relying on them daily to accelerate coding tasks.
That seemed high. Here's what the actual report says:
> More than 97% of respondents reported having used AI coding tools at work at some point, a finding consistent across all four countries. However, a smaller percentage said their companies actively encourage AI tool adoption or allow the use of AI tools, varying by region. The U.S. leads with 88% of respondents indicating at least some company support for AI use, while Germany is lowest at 59%. This highlights an opportunity for organizations to better support their developers’ interest in AI tools, considering local regulations.
Fun that the survey uses the stats to say that companies should support increased usage, while the article uses them to try to show near-total usage already.
I’d be pretty skeptical of any of these surveys about AI tool adoption. At my extremely large tech company, all developers were forced to install AI coding assistants into our IDEs and browsers (via managed software updates that can’t be uninstalled). Our company then put out press releases parading how great the AI adoption numbers were. The statistics are manufactured.
Yup, this happened at my midsize software company too
Meanwhile, no one actually building software that I've been talking to is seriously using these tools for anything, at least not that they'll admit to.
More and more, when I see people who are strongly pro-AI code and "vibe coding", I find they are either former devs who moved into management and don't write much code anymore, or people who have almost no dev experience at all and are absolutely not qualified to comment on the value of the code-generation abilities of LLMs
When I talk to people whose job is majority about writing code, they aren't using AI tools much. Except the occasional junior dev who doesn't have much of a clue
These tools have some value, maybe. But it's nowhere near what the hype would suggest
This is surprising to me. My company (top 20 by revenue) has forbidden us from using non-approved AI tools (basically anything other than our own ChatGPT / LLM tool). Obviously it can't truly be enforced, but they do not want us using this stuff for security reasons.
My company sells AI tools, so there’s a pretty big incentive for us to promote their use.
We have the same security restrictions for AI tools that weren’t created by us.
I’m skeptical when I see “% of code generated by AI” metrics that don’t account for the human time spent parsing and then dismissing bad suggestions from AI.
Without including that measurement there exist some rather large perverse incentives.
An AI-powered autocompletion engine is an AI tool. I think few developers would complain about saving a few keystrokes.
I think few developers didn't already have powerful autocomplete engines at their disposal.
The AI autocomplete I use (JetBrains) stands head-and-shoulders above its non-AI autocomplete, and JetBrains' autocomplete is already considered best-in-class. Its Python support is so good that I still haven't figured out how to get anything even remotely close to it running in VSCode
The JetBrains AI autocomplete is indeed a lot better than the old one, but I can't predict it, so I have to keep watching it.
E.g. a few days ago I wanted to verify that a rebuilt DB table matched the original, so we built a query with autocomplete:
    SELECT ... FROM newtable n JOIN oldtable o ON ... WHERE n.field1 <> o.field1 OR
and now we start autocompleting field comparisons and it nicely keeps generating similar code.
Until: n.field11 <> o.field10
Wait? Why 10 instead of 11?
Even that quote itself jumps from "are using" to "mission-critical development infrastructure ... relying on them daily".
> This highlights an opportunity for organizations to better support their developers’ interest in AI tools, considering local regulations.
This is a funny one to see included in GitHub's report. If I'm not mistaken, GitHub is now using the same approach as Shopify with regard to requiring LLM use and including it as part of a self-report survey for annual review.
I guess they took their 2024 survey to heart and are ready to 100x productivity.
I've tried coding with AI for the first time recently[1], so I just joined that statistic. I assume most people here already know how it works and I'm just late to the party, but my experience was that Copilot was very bad at generating anything complex through chat requests but very good at generating single lines of code with autocompletion. It really highlighted the strengths and shortcomings of LLMs for me.
For example, if you try adding getters and setters to a simple Rect class, it's so fast to do it with Copilot you might just add more getters/setters than you initially wanted. You type pub fn right() and it autocompletes left + width. That's convenient and not something traditional code completion can do.
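To make that concrete, here's a minimal Rust sketch of the flow I mean, assuming a hypothetical Rect with left/top/width/height fields (the field names and types are my own assumption, not something from the linked post); the method bodies are the parts Copilot suggests once you've typed the signatures.

    // Hypothetical Rect; only the signatures below are typed by hand,
    // the bodies are what the autocomplete offers.
    pub struct Rect {
        left: f64,
        top: f64,
        width: f64,
        height: f64,
    }

    impl Rect {
        pub fn left(&self) -> f64 {
            self.left
        }

        // Typing "pub fn right()" is enough to get the suggested body: left + width.
        pub fn right(&self) -> f64 {
            self.left + self.width
        }

        pub fn bottom(&self) -> f64 {
            self.top + self.height
        }
    }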
I wouldn't say it's "mission critical" however. It's just faster than copy pasting or Googling.
The vulnerability highlighted in the article appears to only exist if you put code straight from Copilot into anything without checking it first. That sounds insane to me. It's just as untrusted input as some random guy on the Internet.
[1] https://www.virtualcuriosities.com/articles/4935/coding-with...
> it's so fast to do it with Copilot you might just add more getters/setters than you initially wanted
Especially if you don't need getters and setters at all. It depends on your use case, but for your Rect class, you can just have x, y, width, height as public attributes. I know there are arguments against it, but the general idea is that if AI makes it easy to write boilerplate you don't need, then it made development slower in the long run, not faster, as it is additional code to maintain.
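A minimal sketch of that alternative, again with a hypothetical Rect: the fields are simply public, so there is no accessor boilerplate for Copilot to generate or for anyone to maintain later.

    // Hypothetical boilerplate-free version: callers read rect.width etc. directly.
    pub struct Rect {
        pub x: f64,
        pub y: f64,
        pub width: f64,
        pub height: f64,
    }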
> The vulnerability highlighted in the article appears to only exist if you put code straight from Copilot into anything without checking it first. That sounds insane to me. It's just as untrusted input as some random guy on the Internet.
It doesn't sound insane to everyone, and even you may lower your standards for insanity if you are on a deadline and just want to be done with the thing. And even if you check the code, it is easy to overlook things, especially if these things are designed to be overlooked. For example, typos leading to malicious forks of packages.
Once the world is all run on AI generated code how much memory and cpu cycles will be lost to unnecessary code? Is the next wave of HN top stories “How we ditched AI code and reduced our AWS bill by 10000%”?
I don't know but the current situation is already so absurd that AI probably won't make it much worse. It can even make it a little better. I am talking about AI generated "classical" code, not the AIs themselves.
Today, we are piling abstraction on top of abstractions, culminating with chat apps taking a gigabyte of RAM. Additional getters and setters are nothing compared to it, maybe literally nothing, as these tend to get optimized out by the compiler.
The way it may improve things is that it may encourage people to actually code a solution (or rather, have AI generate one) instead of pulling in a big library for a small function. Both are bad, but from an efficiency standpoint the AI solution may have an edge, since it is more specialized code.
Note that this argument is only about runtime performance and memory consumption, not matters like code maintainability and security.
Your IDE should already have facilities for generating class boilerplate, like the package declaration and brackets and so on. Then you put in the attributes and generate a constructor and any getters and setters you need; it's so fast and trivially generated that I doubt LLMs can actually contribute much to it.
Perhaps they can make suggestions for properties based on the class name but so can a dictionary once you start writing.
IDEs can generate the proper structure and make simple assumptions, but LLMs can also guess what algorithms should generally look like. In the hands of someone who knows what they are doing, I'm sure it helps produce higher-quality code than they would otherwise be capable of.
It's unfortunate that it has become so widely used by students and juniors. You can't really learn anything from Copilot, just as I couldn't learn Rust just by telling it to write Rust. Reading a few pages of the book explained a lot more than Copilot fixing broken code with new bugs and then fixing those bugs by reverting its own code back to the old bugs.
It might be fun if it didn't seem dishonest. The report tries to highlight a divide between employee curiosity and employer encouragement, undercut by its own finding that most developers have tried the tools anyway.
The article MISREPRESENTS that statistic to imply universal utility: that professional developers find it so useful that they universally choose to make daily use of it. It implies that Copilot is somehow more useful than an IDE, without itself making that ridiculous claim.
The article is typical security issue embellishment/blogspam. They are incentivized to make it seem like AI is a mission-critical piece of software, because more AI reliance means a security issue in AI is a bigger deal, which means more pats on the back for them for finding it.
Sadly, much of the security industry has been reduced to a competition over who can find the biggest vuln, and it has the effect of lowering the quality of discourse around all of it.
And employers are now starting to require compliance with using LLMs regardless of employee curiosity.
Shopify now includes LLM use in annual reviews, and if I'm not mistaken GitHub followed suit.
In a sense, we've reached 100% of developers, and now usage is expanding, as non-developers can now develop applications.
Wouldn't that then make those people developers? The total pool of developers would grow, the percentage couldn't go above 100%.
I mean, I spent years learning to code in school and at home, but never managed to get a job doing it, so I just do what I can in my spare time, and LLMs help me feel like I haven't completely fallen off. I can still hack together cool stuff and keep learning.
I actually meant it as a good thing! Our industry plays very loose with terms like "developer" and "engineer". We never really defined them well and it's always felt more like gatekeeping.
IMO if someone uses what tools they have, whether that's an LLM or vim, and is able to ship software, they're a developer in my book.
Probably. There is a similar question: if you ask ChatGPT / Midjourney to generate a drawing, are you an artist? (to me yes, which would mean that AI "vibe coders" are actual developers in their own way)
If my 5 yo daughter draws a square with a triangle on top is she an architect?
That's quite a straw man example though.
If your daughter could draw a house with enough detail that someone could take it and actually build it then you'd be more along the lines of the GP's LLM artist question.
Not really; the point was contrasting sentimental labels with professionally defined titles, which seems precisely the distinction needed here. It's easy enough to look up the agreed-upon definition of software engineer / developer and agree that it's more than someone who copy-pastes code until it just barely runs.
EDIT: To clarify I was only talking about vibe coder = developer. In this case the LLM is more of the developer and they are the product manager.
Do we have professionally defined titles for developer or software engineer?
I've never seen it clarified so I tend to default to the lowest common denominator - if you're making software in some way you're a developer. The tools someone uses doesn't really factor into it for me (even if that is copy/pasting from stackoverflow).
nope. if i ask an llm to give me a detailed schematic to build a bridge, i'm not magically * poof * a structural engineer.
I don't know, if you actually design in some way and deliver the solution for the structure of the bridge, aren't you THE structural engineer for that project?
Credentials don't define capability, execution does.
> Credentials don't define capability, execution does.
All the same, if my city starts to hire un-credentialed "engineers" to vibe-design bridges, I'm not going to drive on them
again, it doesn’t make me a structural engineer—it’s the outcome of hiring someone else to do it. it really isn’t complicated.
i’m not suddenly somehow a structural engineer. even worse, i would have no way to know when it’s full of dangerous hallucinations.
This argument runs squarely into the problems of whether credentials or outcomes are what's important, and whether the LLM is considered a tool or the one actually responsible doing the work.
it’s not that deep.
*if* it were a structurally sound bridge, it means i outsourced it. it’s that simple. it doesn’t magically make me a structural engineer, it means it was designed elsewhere.
if i hire someone to paint a picture it doesn’t magically somehow make me an artist.
If I tell a human artist to draw me something, am I an artist?
No.
Neither are people who ask AI to draw them something.
That probably depends on whether you consider LLMs, or human artists, as tools.
If someone uses an LLM to make code, I consider the LLM to be a tool that will only be as good as the person prompting it. The person, then, is the developer, while the LLM is a tool they're using.
I don't consider auto complete, IDEs, or LSPs to take away from my being a developer.
This distinction likely goes out the window entirely if you consider an LLM to actually be intelligent, sentient, or conscious though.
I wonder if "AI" here also stands in for decades-old tools like language servers and IntelliSense.
I agree this sounds high. I wonder if "using Generative AI coding tools" in this survey is satisfied by having an IDE with Gen AI capabilities, not necessarily using those features within the IDE.