Looking at that blog post, I find it illustrative of how people who like strong types and people who dislike them are addressing different forms of bugs. If the main type of issue is bugs like 1 + "2" == "12", then strong typing is a big help. It also lets the many developers who spend the majority of their time in a programming editor get quick, automatic help with such bugs.
The other side is people who don't find those kinds of bugs annoying, or who simply don't get hit by them at a rate high enough to warrant a strong type system. Developers who spend their time prototyping in ipython also get less out of strong types. The bugs those developers are concerned about are design bugs, like finding out why a bunch of small async programs reading from a message bus stall once every second Friday, where the culprit is a dependency of a dependency of a dependency that doesn't use a socket timeout. Types are similarly not going to help those who spend the vast majority of their time on bugs where someone finally says "This design could never have worked".
Take care to differentiate strong/weak typing from dynamic/static typing. Many dynamically typed languages (especially older ones) are also weakly typed, but some dynamic languages, like Python, are strongly typed. 1 + "2" == "12" is weak typing, and Python has strong typing. Type declarations are static typing, in contrast to traditional Python, which had (and still has) dynamic typing.
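A quick Python session shows both halves of that distinction:

    # Strong typing: Python refuses the implicit coercion that a weakly
    # typed language would perform (where 1 + "2" can yield "12").
    try:
        1 + "2"
    except TypeError as exc:
        print(exc)  # unsupported operand type(s) for +: 'int' and 'str'

    # Dynamic typing: annotations are hints, not static declarations;
    # the interpreter itself never enforces them.
    x: int = 1
    x = "one"   # runs fine in plain Python; a checker like mypy would flag it
    print(x)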
What's even worse, when typing is treated as an indisputable virtue (and not a tradeoff), pretty much every team starts sacrificing readability for the sake of typing.
And lo and behold, they end up with _more_ design bugs. And the sad part is that they will never even recognize that too much typing is to blame.
Nonsense. You might consider it a tradeoff, but it's a very heavily skewed one. Minor downsides on one side, huge upsides on the other.
Also I would say type hints sacrifice aesthetics, not readability. Most code with type hints is easier to read, in the same way that graphs with labelled axes and units are easier to read. They might have more "stuff" there which people might think is ugly, but they convey critical information which allows you to understand the code.
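To put a toy example on it (names invented): the annotated signature answers up front the questions a reader would otherwise chase through the call sites.

    from collections import defaultdict

    # Unannotated, the reader must guess what `files` holds:
    #     def group_by_owner(files): ...
    # Annotated, the signature documents itself, like labelled axes:
    def group_by_owner(files: dict[str, str]) -> dict[str, list[str]]:
        """Turn a {filename: owner} mapping into {owner: [filenames]}."""
        by_owner: defaultdict[str, list[str]] = defaultdict(list)
        for filename, owner in files.items():
            by_owner[owner].append(filename)
        return dict(by_owner)

    print(group_by_owner({"a.py": "kim", "b.py": "kim", "c.py": "lee"}))
    # {'kim': ['a.py', 'b.py'], 'lee': ['c.py']}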
> Most code with type hints is easier to read
That has not been my experience in the past few years.
I've always been a fan of type hints in Python: the intention behind them was to contribute to readability, and when a developer had that intention in mind, they worked really well.
However, with the release of mypy and TypeScript, engineering culture largely shifted towards a "typing is a virtue" mindset. Type hints are no longer a documentation tool, they are a constraint-enforcement tool. And that tool is often at odds with readability.
Readability is subjective and ephemeral; type constraints (and intellisense) are very tangible. Naturally, developers are failing to find a balance between the two.
I write a lot of typescript and rust. In those languages, when I want to understand some code I haven’t seen before, I always start by reading the types. Understanding what and how the data moves through a system is usually key to understanding everything. And usually I lean heavily on my editor for this - in typescript there’s a lot of value in the simple act of hovering over values to see what type they are.
I’m working with a medium size python program at the moment. It’s mostly written by someone smart but early career, and they’ve made a rabbit warren of classes and mixins that get combined in complex ways. I’ve been encouraging him to add types - and wherever those types exist, the code becomes 100% more legible to my code editor - and ultimately to me.
I don’t think I’d bother with types in Python for small programs. But my experience is that good type hints lay out a welcome mat to anyone who comes along later to figure the code out. And honestly, a lot of the time that person is the original author, just months or years after the code was written.
> pretty much every team starts sacrificing readability
People are sacrificing this when they start using python in the first place
It's not about the bugs, it's about designing the layout of the program in types first (i.e., laying out all of the data structures required) such that the actual coding of the functionality is fairly trivial. This is known as type driven development: https://blog.ploeh.dk/2015/08/10/type-driven-development/
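The linked post works in F#, but a rough Python sketch of the same workflow (all names invented here) conveys the idea: pin down the data shapes first, and the functions become nearly mechanical.

    from dataclasses import dataclass
    from enum import Enum, auto

    # Step 1: lay out the data structures; these decisions are the design.
    class Status(Enum):
        PENDING = auto()
        SHIPPED = auto()

    @dataclass(frozen=True)
    class LineItem:
        sku: str
        quantity: int
        unit_price_cents: int

    @dataclass(frozen=True)
    class Order:
        order_id: str
        status: Status
        items: list[LineItem]

    # Step 2: with the shapes fixed, the functionality is fairly trivial.
    def total_cents(order: Order) -> int:
        return sum(i.quantity * i.unit_price_cents for i in order.items)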
At work, I find type hints useful as basically enforced documentation and as a weak sort of test, but few type systems offer decent basic support for the sort of things you would need to do type driven programming in scientific/numerical work. Things like making sure matrices have compatible dimensions, handling units, and constraining the range of a numerical variable would be a solid minimum.
I've read that F# has units, Ada and Pascal have ranges as types (my understanding is these are runtime enforced mostly), Rust will land const generics that might be useful for matrix type stuff some time soon. Does any language support all 3 of these things well together? Do you basically need fully dependent types for this?
Obviously, with discipline you can work to enforce all these things at runtime, but I'd like it if there was a language that made all 3 of these things straightforward.
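For what it's worth, the runtime-discipline version in Python might look something like this sketch (invented names, no static checking), with one ad-hoc wrapper per feature:

    from dataclasses import dataclass

    # Units: a wrapper type stops metres mixing with raw floats.
    @dataclass(frozen=True)
    class Metres:
        value: float
        def __add__(self, other: "Metres") -> "Metres":
            return Metres(self.value + other.value)

    # Ranges: enforced at runtime only.
    def bounded(lo: float, hi: float):
        def check(x: float) -> float:
            if not lo <= x <= hi:
                raise ValueError(f"{x} outside [{lo}, {hi}]")
            return x
        return check

    probability = bounded(0.0, 1.0)

    # Shapes: compatible dimensions checked before multiplying.
    def matmul(a: list[list[float]], b: list[list[float]]) -> list[list[float]]:
        if len(a[0]) != len(b):
            raise ValueError(f"incompatible shapes {len(a)}x{len(a[0])} and {len(b)}x{len(b[0])}")
        return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]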
I suspect C++ still comes the closest to what you’re asking for today, at least among mainstream programming languages.
Matrix dimensions are certainly doable, for example, because templates representing mathematical types like matrices and vectors can be parametrised by integers defining their dimension(s) as well as the type of an individual element.
You can also use template wizardry to write libraries like mp-units¹ or units² that provide explicit representations for numerical values with units. You can even get fancy with user-defined literals so you can write things like 0.5_m and have a suitably-typed value created (though that particular trick does get less useful once you need arbitrary compound units like kg·m·s⁻²).
Both of those are fairly well-defined problems, and the available solutions do provide a good degree of static checking at compile time.
IMHO, the range question is the trickiest one of your three examples, because in real mathematical code there are so many different things you might want to constrain. You could define a parametrised type representing open or closed ranges of integers between X and Y easily enough, but how far down the rabbit hole do you go? Fractional values with attached precision/error metadata? The 572 specific varieties of matrix that get defined in a linear algebra textbook, and which variety you get back when you compute a product of any two of them?
I'd be happy if just ranges on floats were quick and easy to specify, even if the checking is at runtime (which it seems like it almost has to be). I can imagine how to attach precision/error metadata when I need it with custom types, as long as operator overloading is supported. I think similarly for specialized matrices: normal user-defined types and operator overloading get tolerably far. Although I can understand how different languages may be better or worse at it. Multiple dispatch might be more convenient than single dispatch, operator overloading is way more convenient than not having it, etc.
A lot of my frustration is that the ergonomics of these things tend to be not great even when they are available. Or the different pieces (units, shape checking, ranges) don't necessarily compose together easily, because they end up as three separate libraries or something.
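As a concrete instance of the custom-types-plus-operator-overloading route, here is a toy error-propagating value (made-up class, first-order propagation only):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Measured:
        value: float
        err: float  # absolute uncertainty

        def __add__(self, other: "Measured") -> "Measured":
            return Measured(self.value + other.value, self.err + other.err)

        def __mul__(self, other: "Measured") -> "Measured":
            v = self.value * other.value
            rel = self.err / abs(self.value) + other.err / abs(other.value)
            return Measured(v, abs(v) * rel)

    print(Measured(10.0, 0.1) * Measured(2.0, 0.05))
    # Measured(value=20.0, err≈0.7)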
Crystal certainly supports that kind of typing, and the ability to restrict bounds based on dynamic elements recently landed in GCC, making it simple in plain C as well.
If x is of type T, what type do you want (x - x) to be?
That's a hard one, because it depends on what sort of details you let into types, and maybe even on the specific type T. Not saying what I'm asking for is easy! Units and shape would be preserved in all cases I can think of. But with subranges, (x - x) may have a super-type of x... or if the type system is very clever, the type of (x - x) may even be narrowed to a single value :p
And then there's a subtlety where units might be preserved, but x may be "absolute" whereas (x - x) is relative, and you can do operations with relative units that you can't with absolute units and vice versa. Like the difference between x being a position on a map and delta_x being movement from a position. You can subtract two positions on a map in a standard mathematical sense, but not add them.
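Python's standard library already encodes exactly this absolute/relative split for time: datetime is a position, timedelta is a movement.

    from datetime import datetime, timedelta

    a = datetime(2024, 1, 1)
    b = datetime(2024, 6, 1)

    gap = b - a      # position - position -> timedelta (relative)
    later = a + gap  # position + movement -> position
    # a + b          # TypeError: adding two absolute positions is meaningless
    print(gap, later)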
I think you're missing the point of the blog a bit, as the `1 + "2" == "12"` type of issue wasn't it. That definitely also sucks and is much more common than you make it sound (especially when refactoring), but it's definitely not the point.
Anyhow, no need to rehash the same arguments, there was a long thread here on HN about the post, you can read some of it here: https://news.ycombinator.com/item?id=37764326
I think there is another overlooked factor: some languages’ type systems suck and your opinion of types depends more on your first experience rather than a true comparison.
> The other side is those people who do not find those kind of bugs annoying
Anecdotally, I find these are the same people who work less effectively and efficiently. At my company, I know people who mainly use Notepad++ for editing code when VSCode (or another IDE) is readily available, who use print over debuggers, who don't get frustrated by runtime errors that could be caught in an IDE, and who opt out of using coding assistants. I happen to know for a fact that the person who codes in Notepad++ frequently makes trivial errors, and generally these people don't push code out as fast as they could.
And they don't care to change the way they work even after seeing the alternatives and knowing they are objectively more efficient.
I am not their manager, so I say to myself "this is none of my business" and move on. I do feel pity for them.
Well, using print over debuggers is fairly common in Rust and other languages with strong type systems. Because of the extreme lengths the compiler goes to to detect bugs even before the program runs, most remaining bugs are just a lack of information about the value of an expression at a single point in the program flow, which is where dbg! shines. I agree with all the other points though.
Anecdotally, I was just writing a generic BPE implementation and spent a few hours tracking down a bug. I used debug statements to look at the values of expressions, and noticed that something was off. Only later did I figure out that I had modified a value but used the old copy, a simple logic error that #[must_use] could have prevented. cargo clippy -W pedantic is annoying, but this taught me I had better listen to what it has to say.
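For readers who don't write Rust, the same class of bug in Python dress: compute a new value, then keep reading the old copy. #[must_use] exists to flag exactly this kind of silently discarded result.

    data = [3, 1, 2]
    sorted(data)        # builds a new sorted list that is silently dropped
    print(data)         # [3, 1, 2] -- still the old, unsorted copy

    data = sorted(data) # what was meant
    print(data)         # [1, 2, 3]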
>these people don't push code out as fast as they could.
Well, one of my coworkers pushes code quite fast, and he is also the one who gets rejected most often, because he keeps adding .tmp, .pyc and even .env files to his commits. I guess "git add *" is faster, and thus more efficient, than adding files one by one or taking the time to edit .gitignore.
Not so long ago I read a story here on HN about a guy who first coded in his head, then wrote everything on paper, and finally typed it into a computer. It compiled without errors. Slow pusher? Inefficient?
> Not so long ago I read a story here on HN about a guy who first coded in his head, then wrote everything on paper, and finally typed it into a computer. It compiled without errors. Slow pusher? Inefficient?
I've read and heard stories about these folks too; apparently this was more common decades ago.
To be clear, I don't think I could pull it off with any language. It's quite impressive and admirable to get things right on the first try.
Having said that, the thing is, languages were a lot simpler back then too. I'm not convinced this is realistically even possible with today's languages unless you constrain yourself to some overly restrictive subset. Try this with C++, and I would be shocked if you could write nontrivial programs without getting compiler errors. To give a trivial example, every time I write my own iterator class for a container, I miss something when I hit compile: either a comparison operator, or subtraction, or conversion to const iterator, or post-decrement, or subscript, or some member typedef. Or try it with Python, and I bet you'll call .get() on something and then forget to check for None somewhere.
I would love to be proven wrong though. If anyone knows of someone who does this with a modern language, please share.
They invented .gitignore to prevent those files from getting checked into the repository.
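For the files from the anecdote, a three-line .gitignore would already do:

    # keep build junk and secrets out of "git add *"
    *.tmp
    *.pyc
    .env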
Head, paper, keyboard is what we did in the 80s, when compilers were too slow to afford throwing code at them and fixing the errors later. Was that code in the HN story a substantial piece of code or some 100-line program? Our programs used to be small.