I’d be pretty skeptical of any of these surveys about AI tool adoption. At my extremely large tech company, all developers were forced to install AI coding assistants into our IDEs and browsers (via managed software updates that can’t be uninstalled). Our company then put out press releases touting how great the AI adoption numbers were. The statistics are manufactured.
Yup, this happened at my midsize software company too
Meanwhile, no one I've been talking to who actually builds software is using these tools seriously for anything, or at least not that they'll admit to.
More and more, when I see people who are strongly pro-AI code and "vibe coding," I find they are either former devs who moved into management and don't write much code anymore, or people with almost no dev experience at all who are absolutely not qualified to comment on the value of the code generation abilities of LLMs.
When I talk to people whose job is majority about writing code, they aren't using AI tools much. Except the occasional junior dev who doesn't have much of a clue
These tools have some value, maybe. But it's nowhere near what the hype would suggest.
This is surprising to me. My company (top 20 by revenue) has forbidden us from using non-approved AI tools (basically anything other than our own ChatGPT / LLM tool). Obviously it can't truly be enforced, but they do not want us using this stuff for security reasons.
My company sells AI tools, so there’s a pretty big incentive for us to promote their use.
We have the same security restrictions for AI tools that weren’t created by us.
I’m skeptical when I see “% of code generated by AI” metrics that don’t account for the human time spent parsing and then dismissing bad suggestions from AI.
Without including that measurement there exist some rather large perverse incentives.
An AI powered autocompletion engine is an AI tool. I think few developers would complain about saving a few keystrokes.
I think few developers didn't already have powerful autocomplete engines at their disposal.
The AI autocomplete I use (Jetbrains) stands head-and-shoulders above its non-AI autocomplete, and Jetbrains' non-AI autocomplete is already considered best-in-class. Its Python support is so good that I still haven't figured out how to get anything even remotely close to it running in VSCode.
The Jetbrains AI autocomplete is indeed a lot better than the old one, but I can't predict it, so I have to keep watching it.
E.g. a few days ago I wanted to verify whether a rebuilt DB table matched the original. So we built a query with autocomplete:
SELECT ... FROM newtable n JOIN oldtable o ON ... WHERE n.field1 <> o.field1 OR
and now we start autocompleting field comparisons and it nicely keeps generating similar code.
Until: n.field11 <> o.field10
Wait? Why 10 instead of 11?
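For what it's worth, this kind of bug is avoidable by comparing whole rows instead of hand-listing field comparisons for the autocomplete to mangle. A minimal sketch using sqlite3, with made-up table and field names (the actual schema from the story isn't known):

```python
import sqlite3

# Two hypothetical tables standing in for the original and rebuilt table.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE oldtable (id INTEGER, field1 TEXT, field2 TEXT)")
cur.execute("CREATE TABLE newtable (id INTEGER, field1 TEXT, field2 TEXT)")
cur.executemany("INSERT INTO oldtable VALUES (?, ?, ?)",
                [(1, "a", "x"), (2, "b", "y")])
cur.executemany("INSERT INTO newtable VALUES (?, ?, ?)",
                [(1, "a", "x"), (2, "b", "z")])  # row 2 differs in field2

# The hand-written, field-by-field style being autocompleted above --
# every comparison is a chance for an off-by-one like field11 vs field10:
mismatches = cur.execute("""
    SELECT n.id
    FROM newtable n JOIN oldtable o ON n.id = o.id
    WHERE n.field1 <> o.field1 OR n.field2 <> o.field2
""").fetchall()
print(mismatches)  # [(2,)]

# EXCEPT compares entire rows at once, so there is no list of
# comparisons to get subtly wrong:
diff = cur.execute(
    "SELECT * FROM newtable EXCEPT SELECT * FROM oldtable"
).fetchall()
print(diff)  # [(2, 'b', 'z')]
```

One caveat: `<>` treats NULLs as incomparable, while EXCEPT treats two NULLs in the same column as matching, so the two approaches can disagree on tables with NULLs.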