A question that came up in discussions recently and that I found interesting: How will new APIs, libraries or tooling be introduced in the future?
The models each have innate knowledge of the programming ecosystem from the point in time when their last training data was collected. However, unlike humans, they cannot update that knowledge unless they are fine-tuned again - and even then, they can only learn about new libraries that are already in widespread use.
So if everyone now shifts to Vibe Coding, does that mean software ecosystems effectively become frozen? New libraries can't gain popularity because AIs won't use them in code, and AIs won't start using them because they aren't popular.
I guess the counter-question is: does it matter if nobody is building tools optimized for humans, when humans aren't being paid to write software?
I saw a submission earlier today that illustrated perfectly why AI is eating people who write code:
> You could spend a day debating your architecture: slices, layers, shapes, vegetables, or smalltalk. You could spend several days eliminating the biggest risks by building proofs-of-concept to eliminate unknowns. You could spend a week figuring out how you’ll store, search, and cache data and which third-party integrations you’ll need.
$5k/person/week to have an informed opinion on how to store your data! AI is going to look at the billion times we've already asked these questions and make an instant decision, and the really, really important part is that it doesn't matter what we choose anyway, because there are dozens of right answers.
There will still be people who care to go deeper and learn what an API is and how to design a good one. They will be able to build services and clients faster, and go further, using AI code assistants.
And then, yes, you’ll have the legions of vibe coders living in Plato’s cave and churning out tinker toys.
That’s it then, isn’t it? We’re at the level where we’re making tinker toys. What is the tinker toy industry like? Instead of an expensive startup office like Google’s, do I at least get a workshop in the back of the garden? How much does it pay?
I mean, yeah, a lot of what tech is doing is tinker toy BS. A lot of people are in it to make money, not to make the world better in any material way. To some extent that’s fine, but some people become deluded.
There are still real things being done, but they often don’t pay as nicely or live in the spotlight.
It's not an issue. On one of my projects, Claude routinely uses internal APIs and frameworks that aren't public. The context windows are big enough now that it can learn from a mix of summarized docs and surrounding examples and get it nearly right, nearly all the time.
There's an interesting aspect to this: there may now be more incentive to open-source stuff just to get usage examples into the training set. But if context windows keep expanding, it may also just not matter.
The trick is to have good docs. If you don't, step one is to work with the model to write some. It can then write its own summaries based on what it found 'surprising', and those can be loaded into the context when needed.
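A minimal sketch of what that workflow might look like, in Python; the SUMMARY_DIR path and the build_context helper are hypothetical names for illustration, not any specific tool's API:

```python
from pathlib import Path

# Hypothetical directory where the model's own doc summaries are kept.
SUMMARY_DIR = Path("docs/llm-summaries")

def build_context(task_description: str) -> str:
    """Prepend stored doc summaries to a task prompt.

    In practice you'd select only the summaries relevant to the task,
    since every summary consumes context-window tokens.
    """
    summaries = [p.read_text() for p in sorted(SUMMARY_DIR.glob("*.md"))]
    header = "\n\n".join(summaries)
    return f"{header}\n\n---\n\nTask:\n{task_description}"

prompt = build_context("Add pagination to the /orders endpoint.")
# `prompt` is then sent to whatever model or tool you use; that call is omitted here.
```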
Not sure this is going to be a big issue in practice. Tools like ChatGPT regularly get new knowledge cutoffs, and those seem to work well in my experience. I haven't tested this with programming features specifically, but you could run a small experiment: take the tool of your choice, pick a language feature introduced after its knowledge cutoff, and see whether you can get it to use it correctly.
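For example, Python 3.12's type-parameter syntax (PEP 695) makes a handy probe, since it postdates the training data of many earlier models; ask the tool to write a generic function without the old TypeVar boilerplate and see which form it produces:

```python
# Python 3.12+ only: PEP 695 type-parameter syntax.
def first[T](items: list[T]) -> T:
    """Return the first element of a non-empty list."""
    return items[0]

print(first([3, 1, 4]))  # 3
```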
> unless a new finetuning is performed
That's where we're at. The LLM needs to be told about the brand-new API by feeding it the new docs, which just uses up tokens in its context window.
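For a rough sense of that cost, here's a sketch using the tiktoken library to count tokens; the docs filename is hypothetical, and the cl100k_base encoding matches OpenAI's GPT-4-era models, so it's only a ballpark for other vendors' tokenizers:

```python
import tiktoken

# Hypothetical file containing the reference docs for the new API.
docs = open("new_api_reference.md").read()

enc = tiktoken.get_encoding("cl100k_base")
n_tokens = len(enc.encode(docs))

print(f"These docs consume {n_tokens} tokens of the context window.")
```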