This reminds me of https://www.adit.io/posts/2013-04-17-functors,_applicatives,...
I think over recent years there's been a rise in typed languages that support functional programming, like TypeScript and Rust. It will be interesting to see whether this trend continues in the context of AI-assisted programming. My guess is that it will become easier for beginners, and that type systems will help build more robust programs in cooperation with AI.
Yes, type checking works very well with AI, since typing provides a step of verification. The more precise the type system, the more guarantees you can have. At the extreme, you can entirely specify the program with types: if the program type checks, it is guaranteed to implement its spec, and you can trust the AI. I assume the AI will have a hard time writing the code, though. I've had very good results in Rust, less so in Haskell. I think that's partly because the Rust compiler gives more meaningful error messages, which helps the AI iterate.
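To make the "types as spec" point concrete, here's a minimal TypeScript sketch (my own example, not from the thread): the more invariants the types encode, the less of the generated code you have to take on faith.

```typescript
// A non-empty array type: the empty case is unrepresentable.
type NonEmpty<T> = [T, ...T[]];

// Because the type rules out [], `head` can never fail at runtime.
// If this type checks, part of the "spec" is already verified.
function head<T>(xs: NonEmpty<T>): T {
  return xs[0];
}

console.log(head([1, 2, 3])); // prints 1
// head([]); // compile-time error: [] is not assignable to NonEmpty<number>
```

This is obviously a toy invariant; the point is that each invariant moved into the types is one less thing you have to re-verify in AI output.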
Even in early 2025, LLMs are already the most powerful type inference algorithm. Why would they need a static type system in 2030?
My guess is it'll be the opposite: I suspect compared to humans, LLMs will make fewer type errors, and more errors that are uncaught by types. Thus I expect type systems will be of lower value to them (compared to humans), leading to a shift toward dynamic languages and the possible extinction of typed languages.
The alternative I could imagine is moving toward Haskell-like languages with MUCH stronger type systems, where higher-level errors ARE type errors.
My one concrete observation in this direction: "press . to see valid options" behavior is a traditional strong point of typed languages. Interestingly, it proved to be one of the first things that early/dumb LLMs were actually pretty good at. I believe that indicates LLMs are relatively good at type inference (compared to other things you can ask them to do), and we should expect it to continue being a strong point for them.
In working with Cline in both TypeScript and JavaScript, I find the LLM making tons of errors it has to go and fix in a future iteration, but virtually none of them are type errors.
I suspect LLMs are relatively good at duck-typed languages because they have a much bigger working memory than humans. As a result, the LLM can hold in working memory not just e.g., the function argument in front of them, but also all the callers of the function, and how they used the variable, and callers of the callers, and thus what "duck-type" it will be.
A system that can do this level of automatic type inference doesn't necessarily benefit from a formal, static, compile time type system.
A well-typed program provides a soundness proof that can be straightforwardly verified by just compiling the program. Even in a humongous codebase in a slow-to-compile language, this is cheaper than e.g. running an LLM on your codebase every time you push a commit (especially if you use incremental compilation). Type systems, even simpler ones, just give a lot of bang for the buck.
> Why would they need a static type system in 2030?
Why do many people talk about type systems as if they're only a safety guard?
To me that's never been the main role of type systems. I don't know what the word for it is, but types let me read code at a high level. Sure, AI will write them, but as long as software engineers exist, we still have to read the code. How do you even read code without types? Comments? Unit tests? The actual implementation?
> LLMs will make fewer type errors, and more errors that are uncaught by types
> extinction of typed languages
Don't you find these contradictory? If LLMs increase the rate of errors uncaught by types, then type systems (or our use of them) should catch up; otherwise there is no magical way for software to get better with LLMs.
In the current state of LLMs, the type system (or LSP and/or automated tests) is what allows the "agentic" AI coder to have a feedback loop and iterate a few times before it hands off to the programmer. Perhaps that gives the delusion that the LLM is doing it completely without a type system.
> How do you even read code without types?
We're not going to settle the static-vs-dynamic typing preference here. It's probably older than both of us, with many fine programmers on both sides of the fence. I'll leave it at this: well-informed programmers who choose to write in dynamically typed languages DO read code without types, and have happily done so since the late 1950s (Lisp).
The funny thing is, I experience the same "how do you even??" feeling reading statically typed code. There's so much... noise on the screen, how can you even follow what's going on with the code? I guess people are just different?
> LLMs will make fewer type errors, and more errors that are uncaught by types
The errors I'm talking about are like "this CSS causes the element to draw part of its content off-screen, when it probably shouldn't". In theory, some sufficiently advanced type system could catch that (and not catch elements off screen that you want off-screen)? But realistically: pretty challenging for a static type system to catch.
The errors I see are NOT errors that throw exceptions at runtime either; in other words, they are beyond the scope of current type systems, whether dynamic (runtime) or static (compile time). Remember that dynamic languages ARE usually typed, they are just type checked at runtime not compile time.
> perhaps that gives the delusion that the LLM is doing it completely without type system.
I mentioned coding in JS with Cline, so no delusion. It does fine without a type system, and it rarely generates runtime errors. I fix those the same way I fix runtime errors when /I/ program in a dynamic language: I see them, I fix them. And I find they're a lot rarer, in both LLM-generated and human-generated code, than proponents of static typing seem to think.
> The funny thing is, I experience the same "how do you even??" feeling reading statically typed code. There's so much... noise on the screen, how can you even follow what's going on with the code? I guess people are just different?
That really depends on the language, though. If you have type inference, you don't have to write down that many types; it's in theory possible to write entire Haskell programs without mentioning a single type, though in practice nobody does that for larger programs.
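Haskell is the extreme case, but even TypeScript illustrates the point. In this sketch (my own example), not a single annotation is written, yet everything is statically typed via inference, so the "noise on the screen" largely disappears:

```typescript
const nums = [1, 2, 3];                     // inferred: number[]
const doubled = nums.map(n => n * 2);       // inferred: number[]
const labels = doubled.map(n => `x=${n}`);  // inferred: string[]

console.log(labels.join(",")); // prints x=2,x=4,x=6

// The compiler still catches mistakes, annotations or not:
// labels.map(s => s * 2); // error: arithmetic on a string
```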
> Remember that dynamic languages ARE usually typed, they are just type checked at runtime not compile time.
This is a common misconception. Dynamically typed languages are not type checked at all; the only thing they'll do is throw errors at runtime in specific situations. Statically typed languages, by contrast, verify that the types are correct for every possible execution (well, there are always some escape hatches, but you have to know what you're doing). In a dynamically typed language it's perfectly possible to introduce a minor change somewhere that suddenly causes a function very far removed to break, e.g. by returning a null value where none was expected, something I've experienced a ton when working on Ruby codebases. (To be fair, this can also happen in any statically typed language that adjoins "null" to every type, but there are plenty of modern languages like Rust, Swift, Kotlin, Scala etc. that fix this oversight.) If you do that in a modern statically typed language, the compiler will yell at you immediately.
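A minimal TypeScript sketch of that failure mode (my own example; assumes `strictNullChecks` is enabled): the moment a function starts returning null, every caller is forced to handle it at compile time, instead of blowing up far away at runtime.

```typescript
// Imagine a refactor that makes this lookup start returning null.
// Its type becomes `string | null` ...
function findUser(id: number): string | null {
  return id === 1 ? "alice" : null;
}

function greet(id: number): string {
  const name = findUser(id);
  // ... so every caller must handle the null case, or fail to compile:
  // return "hello " + name.toUpperCase(); // error: 'name' is possibly 'null'
  return name === null ? "no such user" : "hello " + name.toUpperCase();
}

console.log(greet(1)); // prints hello ALICE
console.log(greet(2)); // prints no such user
```

In Ruby the equivalent change compiles (well, loads) silently and the `NoMethodError` on nil surfaces wherever the value happens to land.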
> The errors I'm talking about are like "this CSS causes the element to draw part of its content off-screen, when it probably shouldn't". In theory, some sufficiently advanced type system could catch that (and not catch elements off screen that you want off-screen)? But realistically: pretty challenging for a static type system to catch
I'm perfectly willing to believe that type systems are not very helpful for those kinds of tasks. I think type systems are definitely more useful when there's complex business logic involved, e.g. in huge, complicated backends with many different sorts of entities.