The fact that the source is so small is wild. I would have expected a huge convoluted parsing library implemented in types.
On the other hand, the fact that this is even possible is more wild. Instead of replacing JS with a proper statically-typed language, we're spending all this effort turning a preprocessor's type system into a Turing-complete metalanguage. Pretty soon we'll be able to compile TypeScript entirely using types.
TypeScript does an amazing job at describing the types of real-world JavaScript. It's incredibly good, and very useful, even in the face of extremely dynamic programs. The fact that it can describe transforms of types, like "this is a utility that adds an `xxx` prefix to every property name", is frankly unparalleled in mainstream languages, but more importantly it lets us describe patterns that come up in real-world JS programs - it's not fluff!
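For anyone who hasn't seen it, that prefix transform is a few lines with mapped types and template literal types (a sketch; `Prefixed` is my name for it, not a built-in):

    // Remap every string key of T to the same property with an "xxx" prefix
    type Prefixed<T> = {
      [K in keyof T & string as `xxx${K}`]: T[K];
    };

    type User = { name: string; age: number };
    type PrefixedUser = Prefixed<User>; // { xxxname: string; xxxage: number }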
And luckily, the most complex types are usually limited to and contained within library type definitions. They add a lot of value for the library users, who usually don't have to deal with that level of complexity.
Typescript is so much better than almost every other dependently typed language in terms of expressing these things[0], and it's still kind of miserable.
We still have a long way to go in figuring out how to make our type systems easy enough to use that this stuff doesn't surprise people anymore (because it shouldn't! identifier manipulation should be table stakes, and yet it isn't)
[0]: modulo soundness of course! Though I don't think that's intrinsic to the expressiveness
I don't disagree! It's just the fact that it has to be transpiled to JS that's the problem, because it means none of the types are "real"; there's no runtime assurance that a string is actually a string. TS is great and I'd never go back to JS, but it's ultimately a bandaid. Native TS support in browsers is probably never going to happen, though, sadly.
Imagine if WASM were supported natively instead, with browsers exposing the same DOM interfaces that they do to JS. You could link a wasm binary in a <script> and do everything you can with JS/TS, but with any language of your choosing. No doubt a compiled form of TS would appear immediately. We'd no longer need separate runtime type checking.
Just feels like priorities are in the wrong place.
I think you're conflating cause and effect in several cases. TypeScript can't be thought of, and would never exist, independently from JavaScript like you're trying to do.
TypeScript wasn't created separate from JavaScript and then chose JavaScript as a backend. TypeScript only exists to perform build-time type checking of JavaScript. There wouldn't be a TypeScript that compiled to something else, because other languages already have their own type systems.
Runtime type-checking isn't part of TypeScript because 1) it isn't part of JavaScript, and TypeScript doesn't add runtime features anymore; 2) it'd be very expensive even for simple types; 3) complex types would be prohibitively expensive, as you have to both reify the types and perform deep structural checking.
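To make 3) concrete, here's what a hand-written runtime check for even a small nested type looks like (the `Order` type here is hypothetical); note that it has to walk the whole value every time it runs:

    type Order = { id: string; items: { sku: string; qty: number }[] };

    // O(size of the value) on every check, not O(1)
    function isOrder(v: unknown): v is Order {
      const o = v as any; // the guard itself has to poke around dynamically
      return typeof o?.id === "string" &&
        Array.isArray(o?.items) &&
        o.items.every((i: any) =>
          typeof i?.sku === "string" && typeof i?.qty === "number");
    }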
WASM is also natively supported, and with newer extensions like reference types and GC, we're getting closer to the point where a DOM API could be defined. It'll still be a long while, but that's the long-term direction it's heading in. But even then, you would only see a TypeScript-to-WASM compiler[1] because there's already so much TypeScript out there, not because TypeScript is a particularly good language for that environment. A more static language would be a lot better for a WASM target.
[1]: Porffor is already such a compiler for JS and TS, but it does not do runtime type-checking: https://porffor.dev/
I was thinking more along the lines of a TypeScript-like compiled language. For example, AssemblyScript[0] but with the web APIs added back in. (Personally I'd prefer C# or Rust, but you know most devs will want to keep using JS/TS.) WASM isn't natively supported in the way that I'm wishing it were, though; you still have to use JS to bootstrap it, and JS to call back into web apis. In my ideal world, I'd want to be able to compile
    public static void Main() { Document.Body.Append(new Div("hello world")); }
and be able to use it in a page like <script src="hello.wasm"></script>
and have that just work without any JS "glue code". Maybe someday. I know they're working on the DOM APIs, but as you said, it's been slow going. Feels like priorities are elsewhere. Even CSS is moving forward with new features faster than WASM is (nesting and view transitions are awesome though).

(Btw, when I said "separate runtime type checking" I didn't mean language-level; I was referring to the validation libraries and `typeof`s that are required today, since TS types obviously no longer exist after build. If it were a real static language, then of course you couldn't store a bool in a string in the first place.)
[0]: https://www.assemblyscript.org/ (Porffor looks neat too. Wonder if it could be useful in plugin architectures? E.g. plugins could be written in JS, and the host program would only need a WASM interpreter. I'll bookmark it. Thanks.)
> there's no runtime assurance that a string is actually a string.
As someone who's written a lot of Typescript in fairly large projects: in practice this isn't really an issue if you
1. ban casting and 'any' via eslint,
2. use something like io-ts at HTTP API/storage boundaries to validate data coming in/out of your system without risk of a validator/type mismatch (sketch below).
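A minimal io-ts sketch, assuming `rawBody` is the incoming data at the boundary:

    import * as t from "io-ts";
    import { isLeft } from "fp-ts/Either";

    // Define the codec once; the static type is derived from it,
    // so the runtime validator and the compile-time type can't drift apart.
    const User = t.type({ id: t.string, age: t.number });
    type User = t.TypeOf<typeof User>;

    const result = User.decode(JSON.parse(rawBody));
    if (isLeft(result)) {
      // reject / log: the data did not match the type
    } else {
      const user: User = result.right; // genuinely a User, checked at runtime
    }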
But you have to have total buy in from everyone, and be willing to sit down with new devs and explain why casting is bad, and how they can avoid needing that eslint suppression they just added to the codebase. It certainly would be easier if it just wasn't possible to bypass the type system like this.
I know, but it's that last bit: it shouldn't be possible to bypass it. C# actually got itself into a similar issue despite being a proper static language: when it added "nullable reference types" (where you can't assign null to a variable of type `Foo` unless it's explicitly typed as `Foo?`), it did so like TypeScript, using purely static analysis, to avoid having to change the language at a lower level (for compatibility).
Even though it works 99% of the time, just like in TS you can occasionally run into a bug because some misbehaving library handed you a null that it said can't be a null...
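The TypeScript version of that failure mode, with a hypothetical declaration file that lies:

    // vendor.d.ts (hypothetical): the declared return type contains no null
    declare function getUser(id: string): { name: string };

    // Compiles cleanly under strictNullChecks, but if the library actually
    // returns null at runtime, this line still throws:
    const name = getUser("42").name;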
On the other hand, disallowing bypassing it limits what you can do. There's always a ceiling to what the compiler can figure out, and some very complex types can't be analysed statically right now. By allowing bypassing the system, I can still accurately type those functions and reap all the rewards, and I can make sure everything works by combining unit tests with type unit tests. If bypassing was disallowed, I'd be more limited in what I can express.
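For reference, those "type unit tests" are usually built from a couple of helper types like these (the names are community conventions, not built-ins, and `parseIds` is a made-up example):

    type Expect<T extends true> = T;
    type Equal<A, B> =
      (<T>() => T extends A ? 1 : 2) extends (<T>() => T extends B ? 1 : 2)
        ? true
        : false;

    declare function parseIds(raw: string): number[];

    // This line fails to compile if parseIds' return type ever drifts:
    type _test = Expect<Equal<ReturnType<typeof parseIds>, number[]>>;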
Safety bypasses should be opt-in, case by case, and very explicit. For example, Rust's `unsafe` allows bypassing any limitation the language safety imposes on you normally, but all code not explicitly labeled unsafe is always in the very very safe mode.
Even inside the Typescript rules, `as` is a ridiculously dangerous timebomb.
Typescript is 100% about "convenience" and write-lots-of-code-now style of productivity, ~0% about safety or long-term maintainability.
What's the big difference between `unsafe` and `as` regarding explicit labelling? Both are opt-in and explicit. As the user of a function, you don't see either from the outside. If you don't like `as`, it's fine to use a linter to disallow it.
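For example (a flat-config sketch; both rules are real typescript-eslint rules):

    // eslint.config.mjs
    import tseslint from "typescript-eslint";

    export default tseslint.config({
      rules: {
        "@typescript-eslint/no-explicit-any": "error",
        "@typescript-eslint/consistent-type-assertions": [
          "error",
          { assertionStyle: "never" }, // bans `as` (and angle-bracket) assertions
        ],
      },
    });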
The difference is that in everyday TypeScript you end up using `as`, so its presence is not a blaring alarm.
Grepping a real-world codebase, none of which would need `unsafe` in Rust:
    event as CustomEvent<T>

    const errorEvent = event as ErrorEvent;

    const element = getByRole("textbox");
    expect(element).toBeInstanceOf(HTMLInputElement);
    const input = element as HTMLInputElement;

    const element = parent.firstElementChild as HTMLElement;

    type ItemMap = Map<Item["id"], Item>;
    ...
    new Map() as ItemMap

    const clusterSource = this.map.getSource(sourceName) as GeoJSONSource;

    [K in keyof T as T[K] extends Fn ? K : never]: T[K];

    target[type] as unknown as Fn<...

    export const Foo = [1,2,3] as const;
and on it goes. Typescript normalizes unsafe behavior. Many, if not most, of these occurrences can be made safe. It's very rare that I need `as`, and even more rare that I can't actually check the relevant properties at runtime to ensure the code path is valid.
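For instance, the `ErrorEvent` cast above can be replaced by narrowing (a sketch):

    function onEvent(event: Event) {
      // Instead of `const errorEvent = event as ErrorEvent;`,
      // check at runtime and let the compiler narrow:
      if (event instanceof ErrorEvent) {
        console.error(event.message); // `event` is an ErrorEvent here, no cast
      }
    }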
It's on you to ensure that you don't misuse `as`. If I could choose between current TS, and a "safer" one that's less expressive in complex cases, I'd choose the current one any day of the week.
"Typescript can be made safe" is the "C++ has a subset that is good" argument. Meh.
Almost every language has some way to do stupid things. Say you're working in C# - you can forcefully cast almost anything to almost anything else, just like in TS. So according to you, C# is just as bad as TS in this respect, right?
You can only do this with `unsafe { }` or `Unsafe.As`/`Unsafe.BitCast`. Casts from/to `object` are type-safe, even though they may not be very user-friendly or a good use of the type system in general.
If that's a thing commonly needed for basic operations like letting your event handler actually access the event details, then very much yes.
Sane languages have a downcast mechanism that doesn't pretend it succeeds every time.
Weird, I don't need to do that.
Also weird that Typescript has exactly the mechanism you're talking about. Why are you acting like it doesn't?
Wasm GC was needed for that. Wasm evolves slowly so that it can be done right. And even once the DOM API comes, not a lot will change, since only C-like languages produce binaries small enough to compete in the space JavaScript occupies.
People have fussed the same of the C preprocessor, around the same time I and maybe you were born. (There's a pretty good chance I'm your parents' age, and nearly no chance you're the age of mine.)
The criticisms were valid then, too. C (including the preprocessor of course) is still not fully parseable if you include things like token concatenation.
I make no representation as to soundness, then or now. Not till I figure out where my copy of the UNIX-HATERS Handbook has got to, at any rate. I've had cause reasonably recently to reread the X and sendmail chapters, not so much this one.
X and sendmail are not really very relevant today.
The mistakes embodied in both thus far look not just still relevant but positively timeless. Certainly, to judge by how often young people with no sense of their field's history recapitulate those mistakes.
Without specifics it's difficult to evaluate what you're saying.
What are some examples of the timeless mistakes in those programs? I think X was a pretty good effort; it's just that it essentially ossified and has been left behind by more modern systems (although I'm using it right now). Sendmail's approach to dynamic configuration was sub-optimal. But these aren't examples of mistakes that I see recapitulated often.
I referred to the relevant chapters of a book which I named. This is the book: https://web.mit.edu/~simsong/www/ugh.pdf
Not every configuration system is as bad as the m4 nightmare that sendmail used, and I understand nothing really better was feasible in the prelapsarian or Stone Age days of its implementation. But I worked - fought - with sendmail for years and, as in the book, I also remain mildly surprised that Allman continues to perambulate. Most such things in my later professional experience differ by degree, not kind.

YAML is not as bad as what was typically perpetrated in Perl days, but it does too much and too little, and all its fiddly rules give me headaches. JSON is awful and what we're basically stuck with, because even though it's so simple it's almost useless, at least it's simple. XML is much better than it gets credit for, but nobody likes it because most programmers seem to regard the need to use a keyboard as an imposition, and I assume also have frequent nightmares featuring lots of pointy angle brackets.

(I use Emacs because I don't hate myself, and I wish more people had the sense to keep things as simple as Emacs Lisp typically is.)
I don't want to talk about X. Wayland has been about 60% mistakes by volume, and I like too many of the people who made it too well to be anything other than sad about that.
I read that book in the last millennium; I wasn't inclined to go and refresh my memory just to understand your comment.
Re sendmail, when were you working with that? My reaction was just to look at it, say "nope", and use Exim instead. Perhaps the most instructive lesson here is the importance of good choices when it comes to selecting systems to depend on.
Other than that, I'm not sure what the lesson is in "people collectively decided to depend on one of the worst alternatives available." We still see that today with programming languages.
There's nothing really wrong with YAML, except perhaps the way some people use it. I classify that as "skill issue". I work with Kubernetes regularly, and its YAML usage is fine.
Something similar applies to JSON. If it's so terrible, what's better? With JSON Schema and OpenAPI, it's feature-comparable to XML now.
The problem with XML is its completely unnecessary verbosity outweighs its usefulness. I can only assume it was designed by ex-mainframe people who, unlike me, actually yearned for a return to the overengineered environments they were used to. It's no surprise that JSON and YAML edged out XML.
Emacs Lisp is an abomination. Sure, Lisp has its place historically - I had a spirited discussion with John McCarthy about that at a Lisp conference in the 2000s. I'll just mention two words: dynamic scoping. They took decades to even figure out a solution to the funarg problem, and that still didn't really fix the language. Luckily Guy Steele came along and noticed that Church had solved that problem before computers were even invented.
> Re sendmail, when were you working with that?
Oh, only until about 2005. Other options on the table included Postfix and qmail; when I reached a point where burgeoning trust in my engineering judgment coincided with time to replace the oldest production boxes, we commenced to switch to Postfix, primarily because administering that yielded me the lowest Excedrin bill.
Anything with genuine numerical precision would be better than JSON, is what. I appreciate this is an open-ended suggestion with no implementation offered, but if I have to spend one more mortal minute bikeshedding bignum representations in strings, I won't be held responsible for my actions. Indeed just the thought reminds me part of my purpose in this time apart from labor is to decide whether indeed I will train as a lawyer, where I understand time similarly spent is billable in six-minute increments.
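The canonical illustration (in JS, since JSON numbers typically land as IEEE-754 doubles):

    JSON.parse('{"id": 9007199254740993}').id   // 9007199254740992: precision silently gone past 2^53
    // ...hence the bignums-in-strings bikeshedding:
    JSON.parse('{"id": "9007199254740993"}').id // "9007199254740993", now parse it yourself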
I wish I'd been a fly on the wall for your discussion with McCarthy, as perhaps I also wish you could have been for a very spirited chat I had with Stallman around 2016 on the merits and externalities of his and FSF's philosophical approach. I appreciate you taking the time of such a detailed and thoughtful reply, which I confide I'll later revisit and find benefit beyond that already apparent. Enjoy your day!
Here is Doom in TypeScript types: https://www.tomshardware.com/video-games/porting-doom-to-typ...
A fun read/video...
I wish JavaScript had gone in the same direction as PHP with types.