I’m pretty much in the same boat as you, but here’s one place that LLMs helped me:
In Python I was scanning thousands of files, each for thousands of keywords. A naive implementation took around 10 seconds, which instrumentation showed was by far the largest share of execution time. A quick ChatGPT query led me to Aho-Corasick and string-searching algorithms, which I had never used before. Plug in a library and bam, a 30x speedup for that part of the code.
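(For the curious, here's a rough sketch of what that ends up looking like. pyahocorasick is just one illustrative library and the keyword list is made up; this isn't my actual code:)

    # Sketch only: pyahocorasick is one possible library for multi-keyword
    # matching, not necessarily the one I used.
    import ahocorasick

    keywords = ["alpha", "beta", "gamma"]  # hypothetical keyword list

    # Build the automaton once; construction cost is roughly proportional
    # to the total length of all keywords.
    automaton = ahocorasick.Automaton()
    for idx, word in enumerate(keywords):
        automaton.add_word(word, word)
    automaton.make_automaton()

    def scan(text):
        # One pass over the text reports every keyword occurrence,
        # instead of one pass per keyword.
        return [(end, word) for end, word in automaton.iter(text)]

The point is that each file gets scanned in a single pass, instead of once per keyword.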
I could have asked my knowledgeable friends and coworkers, but not at 11PM on a Saturday.
I could have searched the web and probably found it.
But the LLM basically autocompleted the web for me, which I appreciate.
This is where education comes in. When we come across a certain scale, we should know that O(n) comes into play, and study the existing literature before trying to naively solve the problem. What would happen if the "AI" and web search didn't return anything? Would you have stuck with your implementation? What if you couldn't find a library with a usable license?
Once I had to look up a research paper to implement a computational geometry algorithm because I couldn't find it in any of the typical web sources. There was also no library we could use under a license compatible with our commercial use.
I'm not against use of "AI". But this increasing refusal of those who aspire to work in specialist domains like software development to systematically learn things is not great. That's just compounding on an already diminished capacity to process information skillfully.
In my context, the scale is small. It had only just passed the point where a naive implementation would have been fine.
> What would happen if the "AI" and web search didn't return anything? Would you have stuck with your implementation?
I was fairly certain there must exist some type of algorithm exactly for this purpose. I would have been flabbergasted if I couldn’t find something on the web. But if that failed, I would have asked friends and cracked open the algorithms textbooks.
> I'm not against use of "AI". But this increasing refusal of those who aspire to work in specialist domains like software development to systematically learn things is not great. That's just compounding on an already diminished capacity to process information skillfully.
I understand what you mean, and agree with you. I can also assure you that that is not how I use it.
There is a time and a place for everything. Software development is often about compromise and often it isn’t feasible to work out a solution from foundational principles and a comprehensive understanding of the domain.
Many developers use libraries effectively without knowing every case where a consideration like O(n) comes into play.
Competently implemented, in the right context, LLMs can be an effective form of abstraction.
Yes! This is how AI should be used. You have a question that’s quite difficult and may not score well on traditional keyword matching. An LLM can use pattern matching to point you in the direction of a well-written library based on CS research and/or best practices.
I mean, even without knowing that text-searching algorithms exist (where I'm from we learn that in university), a simple web search would have gotten you there as well, no? Maybe it would have taken a few minutes longer, though.
Extremely likely, yes. In this case, since it was an unknown unknown at the time, it was helpful that the LLM explained this class of algorithms exists; then I could immediately switch to Wikipedia to learn more (and be sure of the underlying information).
I think of LLMs as an autocomplete of the web plus hallucinations. Sometimes it’s faster to use the LLM initially rather than scour through a bunch of sites first.
But do you know every important detail of that library? For example, maybe that lib is not thread-safe, or it allocates a lot of memory to speed things up, or it won't work on ARM CPUs because it uses some x86 assembly hackery?
Nope. And I don’t need to. That is the beauty of abstractions and information hiding.
Just read the docs and assume the library works as promised.
To clarify, the LLM did not tell me about the specific library I used. I found it the old fashioned way.
And that's why there are leaky abstractions. It's very hard to abstract everything.
Sounds like a job for the Silver Searcher/ripgrep and possibly Stack Exchange. Might take another minute to get rolling, but it has other benefits like cost and privacy.
> I could have asked my knowledgeable friends and coworkers, but not at 11PM on a Saturday.
Get friends with weirder daily schedules. :-)
I think it's best if we all keep the hours from ~10pm to the morning sacred. Even if we are all up coding, the _reason_ I'm up coding at that hour is that no one is pinging me.