The first publicly available version of Oracle Database (v2 released in 1979) was written in assembly for PDP-11. Then Oracle rewrote v3 in C (1983) for portability across platforms. The mainframes at the time didn't have C compilers, so instead of writing a mainframe-specific database product in a different language (COBOL?), they just wrote a C compiler for mainframes too.
UNIX was ported to the System/370 in 1980, but it ran on top of TSS, which I understand was an obscure product.
"Most of the design for implementing the UNIX system for System/370 was done in 1979, and coding was completed in 1980. The first production system, an IBM 3033AP, was installed at the Bell Laboratories facility at Indian Hill in early 1981."
https://web.archive.org/web/20240930232326/https://www.bell-...
Interesting. Summer 84/85 (maybe 85/86) I used a port of PCC to System/360 (done, I believe, by Scott Kristjanson) on the University of British Columbia mainframes (Amdahls running MTS). I was working on mail software, so I had to deal with EBCDIC/ASCII issues, which was no fun.
I sometimes wonder if that compiler has survived anywhere.
z/OS 3.1 is certified for UNIX 95, if this list is correct:
https://www.opengroup.org/openbrand/register/index2.html
That would include a C compiler, but yours is probably on tape somewhere.
Linux has been on this list, courtesy of two Chinese companies.
> The first publicly available version of Oracle Database (v2 released in 1979) was written in assembly for PDP-11.
I wonder if anybody still has a copy of Oracle v2 or v3?
Oldest I've ever seen on abandonware sites is Oracle 5.1 for DOS
> The mainframes at the time didn't have C compilers
Here's a 1975 Bell Labs memo mentioning that C compilers at the time existed for three machines [0] – PDP-11 UNIX, Honeywell 6000 GCOS, and "OS/370" (which is a bit of a misnomer, I think it actually means OS/VS2 – it mentions TSO on page 15, which rules out OS/VS1)
That said, I totally believe Oracle didn't know about the Bell Labs C compiler, and Bell Labs probably wouldn't share it if they did, and who knows if it had been kept up to date with newer versions of C, etc...
SAS paid Lattice to port their C compiler to MVS and CMS circa 1983/1984, so probably around the same time Oracle was porting Oracle to IBM mainframes – because I take it they also didn't know about or couldn't get access to the Bell Labs compiler
Whereas, Eric Schmidt succeeded in getting Bell Labs to hand over their mainframe C compiler, which was used by the Princeton Unix port, which went on to evolve into Amdahl UTS. So definitely Princeton/Amdahl had a mainframe C compiler long before SAS/Lattice/Oracle did... but maybe they didn't know about it or have access to it either. And even though the original Bell Labs C compiler was for MVS (aka OS/VS2 Release 2–or its predecessor SVS aka OS/VS2 Release 1), its Amdahl descendant may have produced output for Unix only
I assume whatever C compiler AT&T's TSS-based Unix port (UNIX/370) used was also a descendant of the Bell Labs 370 C compiler. But again, it probably produced code only for Unix not for MVS, and probably wasn't available outside of AT&T either
[0] https://archive.org/details/ThePortableCLibrary_May75/page/n...
I very much doubt anyone from the time wants to talk about it, but there is substantial bad blood between Oracle and Ingres. I believe not all of this story is in the public domain, nor capable of being discussed without lawyers.
Writing something that large in assembly is pretty crazy, even in 1979!
Keep in mind, Oracle was designed to run with 128KB of RAM (no swapping). So it was really tens of thousands of lines, not millions.
Was it actually that uncommon back then? My understanding is that there were other things (including Unix itself, since it predated C and was only rewritten in it later) written in assembly initially back in the 70s. Maybe Oracle is much larger compared to other things done this way than I realize, or maybe the veneration of Unix history has just been part of my awareness for too long, but for some reason hearing that this happened with Oracle doesn't seem to hit as hard for me as it seems to for you. It's possible I've become so accustomed to something historically significant that I fail to be impressed by a similar feat, but I genuinely thought that assembly was just the language used for low-level stuff for a long time (not that I'm saying there weren't other systems languages besides C, but my recollection is having read that for a while some people were skeptical of the idea of using any high-level language in place of assembly for systems programming).
This is my favorite function :): https://github.com/mortdeus/legacy-cc/blob/936e12cfc756773cb...
Gotta love the user-friendliness of these old Unix tools:
    if (argc<4) {
        error("Arg count");
        exit(1);
    }
SQLite error messages are similarly spartan. I wrote a SQLite extension recently and didn't find it difficult to have detailed/dynamic error messages, so it may have just been a preference of the author.
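For what it's worth, here is roughly the pattern I mean, as a minimal sketch (the function name and the argument check are invented for illustration; sqlite3_mprintf and sqlite3_result_error are the real API calls):

    #include <sqlite3ext.h>
    SQLITE_EXTENSION_INIT1

    /* Hypothetical scalar function that reports a detailed, dynamic error. */
    static void demo_func(sqlite3_context *ctx, int argc, sqlite3_value **argv) {
        if (argc != 2) {
            char *msg = sqlite3_mprintf("demo_func: expected 2 arguments, got %d", argc);
            sqlite3_result_error(ctx, msg, -1);  /* SQLite copies the message */
            sqlite3_free(msg);
            return;
        }
        sqlite3_result_int(ctx, sqlite3_value_int(argv[0]) + sqlite3_value_int(argv[1]));
    }

    int sqlite3_demo_init(sqlite3 *db, char **pzErrMsg, const sqlite3_api_routines *pApi) {
        SQLITE_EXTENSION_INIT2(pApi);
        (void)pzErrMsg;
        /* nArg = -1 so the argc check above is actually reachable */
        return sqlite3_create_function(db, "demo_func", -1, SQLITE_UTF8, 0, demo_func, 0, 0);
    }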
Ah, yes, was that because of a lack of inline assembly? I feel like these could be replaced by 'nop' operations.
What is the point of it?
It's an awkward way to reserve memory. The important detail here is that both compiler phases do this, and the way the programs are linked guarantees that the reserved region has the same address in both phases. Therefore an expression tree involving pointers can be passed to the second phase very succinctly. Not pretty, no, but hardware limitations force you to come up with strange solutions sometimes.
Here's the actual code that references the 'ospace' from before 'waste': https://github.com/mortdeus/legacy-cc/blob/936e12cfc756773cb...
Thank you! Is it relevant today at all, or is there a use case for it today?
No, if you need fixed addresses I suppose a linker script would be the way to go? Or in this case you'd just serialize the data such that it doesn't contain any pointers in the first place.
There are better tools to do this these days—with the GNU toolchain, for example, you’d use a linker script and make sure you’re building a non-position-independent static executable. Alternatively, you could use self-relative pointers: instead of having foo_t *foo and putting p there, have ptrdiff_t foo and put ((char *)p - (char *)&foo) there.
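A tiny sketch of the self-relative-pointer idea, with made-up names, just to show storing an offset instead of an absolute address:

    #include <stddef.h>
    #include <stdio.h>

    struct node { int value; };

    struct header {
        ptrdiff_t node_off;   /* offset from &header to the node, not an address */
    };

    static void set_node(struct header *h, struct node *n) {
        h->node_off = (char *)n - (char *)h;
    }

    static struct node *get_node(struct header *h) {
        return (struct node *)((char *)h + h->node_off);
    }

    int main(void) {
        static struct node n = { 42 };
        static struct header h;
        set_node(&h, &n);
        /* The offset survives the whole block being mapped at a different address. */
        printf("%d\n", get_node(&h)->value);
        return 0;
    }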
It's an obscure way to statically allocate memory for the ospace pointer.
What's the advantage over an array though, which would allow you to better control the size without making assumptions about code generation?
edit: http://cm.bell-labs.co/who/dmr/primevalC.html (linked from another comment) has the answer:
> A second, less noticeable, but astonishing peculiarity is the space allocation: temporary storage is allocated that deliberately overwrites the beginning of the program, smashing its initialization code to save space. The two compilers differ in the details in how they cope with this. In the earlier one, the start is found by naming a function; in the later, the start is simply taken to be 0. This indicates that the first compiler was written before we had a machine with memory mapping, so the origin of the program was not at location 0, whereas by the time of the second, we had a PDP-11 that did provide mapping. (See the Unix History paper). In one of the files (prestruct-c/c10.c) the kludgery is especially evident.
So I guess it has to be a function in order to be placed in front of main() so the buffer can overflow into the no longer needed code at the start of it.
Without actually knowing, I'd guess it would generate code that could be modified later by patching the resulting binary?
I remember a few buddies using a similar pattern in assembly that just added n NOPs into the code to allow patching, thus eliminating the need to recompile.
I suspect that’s it.
There was a lot of self-modification going on in those days. Old machine-language stuff had very limited resources, so we often modified code or reused code space.
The C alternative for the hardware "halt and catch fire" instruction?
Aside: I was playing with Think C [2] yesterday and System 6.0.8 (emulated with Mini vMac [1]).
Boy it took a lot of code to get a window behaving back in the day... And this is a much more modern B/C; it's actually ANSI C but the API is thick.
I did really enjoy the UX of System 6 and its terse look, if you can call it that [3].
[1] https://www.gryphel.com/c/minivmac/start.html
[2] https://archive.org/details/think_c_5
[3] https://miro.medium.com/v2/resize:fit:1024/format:webp/0*S57...
It's much less of your own code if you use TCL (THINK Class Library), which shipped with THINK C 4.0 (and THINK Pascal) in mid 1989.
Your System 6.0.8 is from April 1991, so TCL was well established by then and the C/C++ version in THINK C 5 even used proper C++ features instead of the hand-rolled "OOP in C" (nested structs with function pointers) used by TCL in THINK C 4.
I used TCL for smaller projects, mostly with THINK Pascal which was a bit more natural using Object Pascal, and helped other people use it and transition their own programs that previously used the Toolbox directly, but my more serious programs used MacApp which was released for Object Pascal in 1985, and for C++ in 1991.
Thanks for this. I was using THINK C 3.x last night, unaware that there is a 5.0. I figured it out as I typed and googled this morning. I will have to revisit 5.0 and pick up a digitised book.
My favorite function, which some might say even made it into Windows ;-)
    waste() /* waste space */
    {
        waste(waste(waste),waste(waste),waste(waste));
        waste(waste(waste),waste(waste),waste(waste));
        waste(waste(waste),waste(waste),waste(waste));
        waste(waste(waste),waste(waste),waste(waste));
        waste(waste(waste),waste(waste),waste(waste));
        waste(waste(waste),waste(waste),waste(waste));
        waste(waste(waste),waste(waste),waste(waste));
        waste(waste(waste),waste(waste),waste(waste));
    }
But why? Waste (compiled) binary space? Or source code space, perhaps for early employee metrics gaming purposes?
And don't answer "to waste space of course" please. :)
There is a variable declared right before the waste space function. The 'wasted' space is statically allocated memory for the variable 'ospace' just before it.
There's nothing in that repo that says, but at a guess: old machines often had non-uniform ways to access memory, so it may have been to test that the compiler would still work if the binary grew over some threshold.
Even today's machines often have a limit as to the offset that can be included in an instruction, so a compiler will have to use different machine instructions if a branch or load/store needs a larger offset. That would be another thing that this function might be useful to test. Actually that seems more likely.
It might be instructive to compare the binary size of this function to the offset length allowed in various PDP-11 machine instructions
Yes it seems like this is something to do with hardware testing. Maybe memory or registers or something that needed just X bytes etc for overflows or something. It’s really random and the only person who would know it is the one who wrote it :)
Wild guess: it was a way to offset the location of the "main" function by an arbitrary amount of bytes. In the a.out binary format, this translates to an entry point which is not zero.
http://cm.bell-labs.co/who/dmr/primevalC.html
" A second, less noticeable, but astonishing peculiarity is the space allocation: temporary storage is allocated that deliberately overwrites the beginning of the program, smashing its initialization code to save space. The two compilers differ in the details in how they cope with this. In the earlier one, the start is found by naming a function; in the later, the start is simply taken to be 0. This indicates that the first compiler was written before we had a machine with memory mapping, so the origin of the program was not at location 0, whereas by the time of the second, we had a PDP-11 that did provide mapping. (See the Unix History paper). In one of the files (prestruct-c/c10.c) the kludgery is especially evident. "
One possible reason is to allocate a static global area. Without read-only protection of memory you could write to that area.
The comment is a waste too. It could have explained why the function is doing what it does.
Interesting usage of "extern" and "auto". Quite different from contemporary C:
    tree() {
        extern symbol, block, csym[], ctyp, isn,
               peeksym, opdope[], build, error, cp[], cmst[],
               space, ospace, cval, ossiz, exit, errflush, cmsiz;
        auto op[], opst[20], pp[], prst[20], andflg, o, p, ps, os;
        ...
Looks like "extern" is used to bring global symbols into function scope. Everything looks to be "int" by default. Some array declarations are specifying a size, others are not. Are the "sizeless" arrays meant to be used as pointers only? >Looks like "extern" is used to bring global symbols into function scope.
A better way to think of extern is, "this symbol is not declared/defined/allocated here, it is declared/defined/allocated someplace else";
"this is its type so your code can reference it properly, and the linker will match up your references with the declared/defined/allocated storage later".
(I'm using reference in the generic English sense, not pointer or anything. It's "that which can give you not only an r-value but an l-value".)
Yes, pretty much. To be fair, C at this point was basically BCPL with slightly different syntax (and better char/string support). The introduction of structs (and then longs) changed it forever.
BCPL had a lot of features C didn't have at this point and still doesn't. You mean B.
Could you elaborate on those features? Off the top of my head, those are:

- nested functions — those were always of dubious usefulness compared to the implementation difficulties they required;

- labels are actual constants, so computed GOTO is available — that's definitely a feature standard C still doesn't have;

- manifest constants — this one is Ritchie's most baffling omission from the language;

- multiple assignment — it's not actually parallel, so merely a syntactic nicety (with a footgun loaded);

- valof-resultis — while very nice, it's also merely a syntactic nicety: "lvalue := valof (... resultis expr; ...)" is the same as "{... lvalue = expr; goto after; ... } after: ;".
What else is there? Pointless distinction between the declaration syntax of functions and procedures?
That includes everything I was thinking of and several things I didn't know about.
"auto" used to mean automatic memory management because if you are coming from assembly or even some other older higher-level languages you can't just declare a local variable and use it as you please. You must declare somewhere to store it and manage its lifetime (even if that means everything is global).
C and its contemporaries introduced automatic or in modern terms local or stack allocated values, often with lexically-scoped lifetimes. extern meaning something outside this file declares the storage for it and register meaning the compiler should keep the value in a register.
However auto has always been the default and thus redundant and style-wise almost no one ever had the style of explicitly specifying auto so it was little-used in the wild. So the C23 committee adopted auto to mean the same as C++: automatically infer the type of the declaration.
You can see some of B's legacy in the design of C. Making everything int by default harkens back to B's lack of types because everything was a machine word you could interpret however you wanted.
The same goes for original C's function declarations, which don't really make sense today: the declaration only gives the name, and the function definition then declares (between the closing paren and the opening brace) the list of parameters and their types. There was no attempt whatsoever to have the compiler verify you passed the correct number or types of parameters.
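For anyone who hasn't seen it, a small sketch of that old style (valid K&R/C89; modern compilers only take it with warnings, and C23 finally drops old-style definitions):

    /* Old-style definition: parameter types go between the closing
       paren and the opening brace; the return type defaults to int. */
    add(a, b)
    int a, b;
    {
        return a + b;
    }

    main()
    {
        /* No prototype is in scope, so nothing stops a call with the
           wrong number of arguments; old compilers accepted this silently. */
        return add(1, 2, 3);
    }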
Declaring a variable or function as extern(al) just tells the compiler to assume that it is defined "externally", i.e. in another source file. The compiler will generate references to the named variable/function, and the linker will substitute the actual address of the variable/function when linking all the object files together.
Modern C still lets you put extern declarations inside a function (minus the implicit int), but it's bad practice and makes the code less readable. You can of course still put them at global scope (e.g. at top of the source file), but better to put them into a header file, with your code organized into modules of paired .h definition and .c implementation files.
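A minimal sketch of that module layout (file and symbol names invented for illustration):

    /* counter.h -- the module's public interface */
    #ifndef COUNTER_H
    #define COUNTER_H
    extern int counter;            /* declaration only; storage lives in counter.c */
    void counter_bump(void);
    #endif

    /* counter.c -- the implementation */
    #include "counter.h"
    int counter = 0;               /* the one and only definition */
    void counter_bump(void) { counter++; }

    /* main.c -- a user of the module */
    #include <stdio.h>
    #include "counter.h"
    int main(void) {
        counter_bump();
        printf("%d\n", counter);
        return 0;
    }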
You can do much the same with a modern C compiler - the extern and auto mean the same, and (in C89 mode at least) int is still the default type.
In C23, auto doesn't have a default type, if you write auto without a type then you get the C++ style "type deduction" instead. This is part of the trend (regretted by some WG14 members) of WG14 increasingly serving as a way to fix the core of C++ by instead mutating the C language it's ostensibly based on.
You can think of deduction as crap type inference.
Design by committee: the outcome is usually not what the people in the trenches would like to get.
Nobody in the trenches seemed to use old-style auto in the last decades.
BTW: The right place to complain if you disagree would be the compiler vendors. In particular the Clang side pushes very much for keeping C and C++ aligned, because they have a shared C/C++ FE. So if you want something else, please file or comment on bugs in their bug tracker. Similar for other compilers.
> Nobody in the trenches seemed to use old-style auto in the last decades.
To the best of my knowledge, there was no case where "auto" wasn't redundant. See e.g. https://stackoverflow.com/a/2192761
This makes me feel better about repurposing it, but I still hate the shitty use it's been put to.
Indeed, however many in the trenches would like a more serious take on security; complaining achieved nothing in the last 50 years, until government agencies finally decided to step in.
This is again a problem compilers could have addressed, but didn't. Mostly because the users in the trenches did not care. Instead they flocked in droves to the compiler optimizing in the most aggressive way and rejecting everything costing performance. So I do not have the feeling that users were really pushing for safety. They are very good at complaining though.
GCC and Clang support asan/ubsan, which lets you trade performance for nicer behavior related to memory access and undefined behavior. Whenever I do C development for a platform that supports asan/ubsan, I always develop and test with them enabled just because of how much debugging time they save.
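As a small example (the file name and the bug are made up; -fsanitize=address,undefined is the real GCC/Clang option):

    /* oob.c -- build with:  cc -g -fsanitize=address,undefined oob.c
       AddressSanitizer reports the heap-buffer-overflow below with the
       exact line; without the sanitizer it may appear to "work". */
    #include <stdlib.h>

    int main(void) {
        int *a = malloc(4 * sizeof *a);
        a[4] = 1;      /* one past the end of the allocation */
        free(a);
        return 0;
    }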
It is like democracy: election results do not always reflect the needs of everyone, and some groups are more favored than others.
I think my point is that a standardization committee is not a government.
It surely looks like one from the outside.
Features only get added when there is a champion to push them forward across all hurdles (candidate) and they are voted in by their peers (election); at the end of a government cycle (ISO revision), the compiler users rejoice at the new set of features.
Isn't the original inclusion of the auto keyword more in line with what you expect from design by committee? Including a keyword which serves no purpose other than theoretical completeness?
I was talking more in general, not specific regarding auto.
Actually I did use C compilers, with K&R C subset for home computers, where auto mattered.
Naturally they are long gone, this was in the early 1990's.
An interesting secondary meaning of "design by committee", the reason why what you mention happens, is "design in committee".
People can skip the usual lifecycle and feedback for an idea by jumping directly to the committee stage.
People in the trenches seem pretty happy with what the committee designed here.
It doesn't matter. The people in the trenches don't update their standard versions.
All of these things come directly from B:
https://www.nokia.com/bell-labs/about/dennis-m-ritchie/bintr...
As to "sizeless" arrays - yes.
Have a look at the early history of C document on DMR's site, it mentions that the initial syntax for pointers was that form.
Reminds me of the humility every programmer should have, basically we're standing on the shoulders of giants and abstraction for the most part. 80+ years of computer science.
Cool kids may talk about memory safety but ultimately someone had to take care of it, either in their code or abstracted out of it.
Memory safety predates C by a decade, in languages like JOVIAL (1958), ESPOL/NEWP (1961) and PL/I (1964), and it carried on in the same decade as C outside Bell Labs: PL/S (1970), PL.8 (1970), Mesa (1976), Modula-2 (1978).
If anything the cool kids are rediscovering what we lost in systems programming safety due to the wide adoption of C, and its influence in the industry, because the cool kids from 1980's decided memory safety wasn't something worth caring about.
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
-- C.A.R Hoare's "The 1980 ACM Turing Award Lecture"
Guess what programming language he is referring to by "1980 language designers and users have not learned this lesson".
The "cool kids talking about memory safety" are indeed standing on the shoulders of giants, to allow for others to stand even taller.
Big non sequitur, but your comment triggered a peeve of mine: I find it ironic when people talk like oldsters can't understand technology.
> ...people talk like oldsters can't understand technology
IMO it is young people that have trouble understanding.
The same mistakes are made over and over, lessons learned long ago are ignored in the present
It's easier to write than to read, easier to talk than to listen, easier to build new than to expand the old.
This is the way of young people in every domain, not just technology. Much like teenagers think they're the first ones ever to have sex, young people tend to think they are the first ones to notice "hey, this status quo really sucks" and try to solve it.
This can be a strength, to be fair - the human mind really does tend to get stuck in a rut based on familiarity, and someone new to the domain can spot solutions that others haven't because of that. But more often, it turns into futile attempts to solve problems while forgetting the lessons of the past.
Understanding one level of abstraction doesn't mean you understand the levels of abstraction built on top of it. And vice versa.
Your comment sounds like a riddle. I've programmed for 25 years but appreciate there's a lot more going on than what I know.
Upon my own rereading, it is unclear. My point is that the languages most of us use and the fundamental technologies in the OSes we use were designed/invented by people who are in their 80s now; many of the Linux core team are 50-60.
The thing I always loved about C was its simplicity, but in practice it's actually very complex with tons of nuance. Are there any low level languages like C that actually are simple, through and through? I looked into Zig and it seems to approach that simplicity, but I have reservations that I can't quite put my finger on...
The reality is, the only languages that are truly simple are Turing tarpits, like Brainfuck.
Reality is not simple. Every language that’s used for real work has to deal with reality. It’s about how the language helps you manage complexity, not how complex the language is.
Maybe Forth gets a pass but there’s good reason why it’s effectively used in very limited circumstances.
The perceived complexity from a semantic standpoint comes from the weakly-typed nature of the language. When the operands of an expression have different types, implicit promotions and conversions take place. This can be avoided by using the appropriate types in the first place. Modern compilers have warning flags that can spot such dodgy conversions.
The rest of the complexity stems from the language being a thin layer over a von Neumann abstract machine. You can mess up your memory freely, and the language doesn’t guarantee anything.
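A classic instance of such a dodgy conversion, which -Wextra (via -Wsign-compare) will flag:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* strlen returns size_t (unsigned), so -1 is converted to a huge
           unsigned value and the comparison is not what it looks like. */
        if (-1 < strlen("hi"))
            printf("what you might expect\n");
        else
            printf("what actually happens\n");   /* this branch runs on typical platforms */
        return 0;
    }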
C is simple.
Representing computation as words of a fixed bit length, in random access memory, is not (See The Art of Computer Programming). And the extent to which other languages simplify is creating simpler memory models.
What about C is simple? Its syntax is certainly not simple, it's hard to grok and hard to implement parsers for, and parsing depends on semantic analysis. Its macro system is certainly not simple; implementing a C preprocessor is a huge job in itself, it's much more complex than what appears to be necessary for a macro system or even general text processor. Its semantics are not simple, with complex aliasing rules which just exist as a hacky trade-off between programming flexibility and optimizer implementer freedom.
C forces programs to be simple, because C doesn't offer ways to build powerful abstractions. And as an occasional C programmer, I enjoy that about it. But I don't think it's simple, certainly not from an implementer's perspective.
First (as in my other comment), the idea that C parsing depends on semantic analysis is wrong (and yes, I have written C parsers). There are issues which may make implementing C parsers hard if you are not aware of them, but those issues hardly compare to the complexities of other languages, and can easily be dealt with if you know about them. Many people have implemented C parsers.
The idea that C does not offer ways to build powerful abstractions is also wrong in my opinion. It basically allows the same abstractions as other languages, but it does not provide as much syntactic sugar. Whether this syntactic sugar really helps or whether it obscures semantics is up for debate. In my opinion (having programmed a lot more C++ in the past), it does not, and C is better for building complex applications than C++. I build very complex applications in C myself, and some of the most successful software projects were built using C. I find it easier to understand complex applications written in C than in other languages, and I also find it easier to refactor C code which is messed up, compared to untangling the mess you can create with other languages. I admit that some people might find it helpful to have the syntactic sugar as help for building abstractions. In C you need to know how to build abstractions yourself, based on training or experience.
I see a lot of negativity towards C in recent years, which goes against clear evidence, e.g. "you can not build abstractions" or "all C programs segfault all the time", when in reality most of the programs I rely on on a daily basis, and which in my experience never crash, are written in C.
Huh? How are you supposed to parse a statement like 'x * y;' without some form of semantic analysis? You need to be able to look up whether 'x' has been declared as a variable or a type, and parse it as either a multiplication expression or a variable declaration. Am I wrong on this?
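Concretely, the same token sequence parses both ways depending on what the identifier names (a made-up snippet):

    typedef int a;

    void declares_a_pointer(void) {
        a * b = 0;   /* 'a' names a type here: declares b as a pointer to int */
        (void)b;
    }

    void multiplies(void) {
        int a = 6, b = 7;
        a * b;       /* 'a' names a variable here: an expression statement
                        (compilers warn that it has no effect) */
    }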
True. But this does not require full semantic analysis; it only requires distinguishing between typedef names and other identifiers. You can argue that this is part of semantic analysis, but that would be rather pedantic. Tracking this could equally be seen as part of parsing.
Parsing isn't too bad compared to, say, Perl.
The preprocessor is a classic example of simplicity in the wrong direction: it's simple to implement, and pretty simple to describe, but when actually using it you have to deal with complexity like arguments being evaluated multiple times.
The semantics are a disaster ("undefined behavior").
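The textbook case of the multiple-evaluation complexity mentioned above, as a quick sketch:

    #include <stdio.h>

    /* Looks like a function call, but each use of x is a textual copy. */
    #define SQUARE(x) ((x) * (x))

    int main(void) {
        int i = 3;
        int s = SQUARE(i++);   /* expands to ((i++) * (i++)): i is modified
                                  twice without a sequence point, which is
                                  undefined behavior on top of the surprise */
        printf("%d %d\n", s, i);
        return 0;
    }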
> Parsing isn't too bad compared to, say, Perl.
This is damning with faint praise. Perl is undecidable to parse! Even if C isn't as bad as Perl, it's still bad enough that there's an entire Wikipedia article devoted to how bad it is: https://en.wikipedia.org/wiki/Lexer_hack
> The Clang parser handles the situation in a completely different way, namely by using a non-reference lexical grammar. Clang's lexer does not attempt to differentiate between type names and variable names: it simply reports the current token as an identifier. The parser then uses Clang's semantic analysis library to determine the nature of the identifier. This allows a simpler and more maintainable architecture than The Lexer Hack. This is also the approach used in most other modern languages, which do not distinguish different classes of identifiers in the lexical grammar, but instead defer them to the parsing or semantic analysis phase, when sufficient information is available.
Doesn't sound as much of a problem with the language as it is with the design of earlier compilers.
Unifying identifiers in the lexer doesn't solve the problem. The problem is getting the parser to produce a sane AST without needing information from deeper in the pipeline. If all you have is `foo * bar;`, what AST node do you produce for the operator? Something generic like "Asterisk", and then its child nodes get some generic "Identifier" node (when at this stage, unlike in the lexer, you should be distinguishing between types and variables), and you fix it up in some later pass. It's a flaw in the grammar, period. And it's excusable, because C is older than Methuselah and was hacked together in a weekend like Javascript and was never intended to be the basis for the entire modern computing industry. But it's a flaw that modern languages should learn from and avoid.
C ain't simple, it's an organically complex language that just happens to be small enough that you can fit a compiler into the RAM of a PDP-11.
I would probably describe Perl as really complex to parse as well if I knew enough about it. Both are difficult to parse compared to languages with more "modern sensibilities" like Go and Rust, with their nice mostly context free grammars which can be parsed without terrible lexer hacks and separately from semantic analysis.
Walter Bright (who, among other things, has been employed to work on a C preprocessor) seems to disagree that the C preprocessor is simple to implement: https://news.ycombinator.com/item?id=20890749
> The preprocessor is fiendishly tricky to write. [...] I had to scrap mine and reimplement it 3 times.
I have seen other people in the general "C implementer/standards community" complain about it as well.
Each of these elements is even worse in every other language I can think of. What language do you think is simple in comparison?
Pascal (and most other Wirth languages) is better in most of these respects than C. Of course there are other flaws with Pascal (cf “Why Pascal is not my favorite programming language”), but it proves that C has a lot of accidental complexity in its design.
Go, Rust, Zig?
I'm curious, what language do you know of with a more complex macro system than the whole C preprocessor?
EDIT: To be clear to prospective downvoters, I'm not just throwing these languages out because they're hype or whatever. They all have a grammar that's much simpler to parse. Notably, you can construct a parse tree without a semantic analyser which is capable of running in lockstep with the parser to provide semantic information to the parser. You can just write a parser which makes a parse tree.
I've never written a parser for any of those languages, but my intuition is that Go is easier to parse than C. The others are debatable. Rust macros are definitely not simpler than C macros. I'm not sure what could be simpler than text substitution. Zig doesn't have macros, and comptime is implemented as a language VM that runs as a compilation step (last I knew), so that's definitely not simpler. I don't use Go often, but I don't think it has macros at all, so that's definitely simpler.
When people say that C is a simple language, my interpretation is that they mean it is easy to interpret what a C program does at a low level, not that it is simple to write.
The other languages can be handled by a parser alone. A parser for C needs a semantic analyzer working in tandem.
The C preprocessor is not text substitution.
It is not easy to describe what C does at a low level. There are simple, easy to describe and wrong models of what C does "at a low level". C's semantics are defined by a very difficult to understand standards document, and if you use one of those simple and enticing mental models, you will end up with incorrect C code which works until you try a different compiler or enable optimisations.
A parser for C does not need a semantic analyzer. What C does is allow semantic analysis to be integrated into the parser.
The preprocessor has some weird behavior, but it is also not very complicated.
And I would argue that the abstract machine model of C is still relatively simple. There are certainly simpler languages in this regard, but they give up one of the key powers of C, i.e. that you can manipulate the representation of objects on a byte level.
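A small example of that byte-level access, which the language explicitly permits via unsigned char pointers:

    #include <stdio.h>

    int main(void) {
        unsigned int x = 0x11223344;
        /* Any object may be inspected as an array of bytes. */
        const unsigned char *p = (const unsigned char *)&x;
        for (size_t i = 0; i < sizeof x; i++)
            printf("byte %zu: %02x\n", i, p[i]);   /* byte order reveals endianness */
        return 0;
    }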
By that argument the other languages mentioned are impossible to understand since they don't have a spec, except for Go again.
No. The other languages have documented semantics too. Just happens that C's are in the shape of a standards document.
It’s not really clear to me how you could have a simple low level language without tons of nuance. Something like Go is certainly simple without tons of nuance, but it’s not low level, and I think extending it to be low level might add a lot of nuance.
Forth would come to mind; some people have built surprising stuff with it, though I find it too low-level.
Lisp is built from a few simple axioms. Would that make it simple?
Lisp could be simple... but there's a lot of reasons it isn't.
It uses a different memory model than current hardware, which is optimized for C. While I don't know what goes on under SBCL's hood, the simpler Lisps I'm familiar with usually have a chunk of space for cons cells and a chunk of "vector" space kinda like a heap.
Lisp follows s-expression rules... except when it doesn't. Special forms, macros, and fexprs can basically do anything, and it's up to the programmer to know when sexpr syntax applies and when it doesn't.
Lisp offers simple primitives, but often also very complex functionality as part of the language. Just look at all the crazy stuff that's available in the COMMON-LISP package, for instance. This isn't really all that different than most high level languages, but no one would consider those "simple" either.
Lisp has a habit of using "unusual" practices. Consider Scheme's continuations and use of recursion, for example. Some of those - like first-class functions - have worked their way into modern languages, but imagine how they would have seemed to a Pascal programmer in 1990.
Finally, Lisp's compiler is way out there. Being able to recompile individual functions during execution is just plain nuts (in a good way). But it's also the reason you have EVAL-WHEN.
All that said, I haven't investigated microcontroller Lisps. There may be one or more of those that would qualify as "simple."
Mostly we have eval-when because of outdated defaults that are worth re-examining.
A Lisp compiler today should by default evaluate every top-level form that it compiles, unless the program opts out of it.
I made the decision in TXR Lisp and it's so much nicer that way.
There are fewer surprises and less need for boilerplate for compile time evaluation control. The most you usually have to do is tell the compiler not to run that form which starts your program: for instance (compile-only (main)). In a big program with many files that could well be the one and only piece of evaluation control for the file compiler.
The downside of evaluating everything is that these definitions sit in the compiler's environment. This pollution would have been a big deal when the entire machine is running a single Lisp image. Today I can spin up a process for the compiling. All those definitions that are not relevant to the compile job go away when that exits. My compiler uses a fraction of the memory of something like GCC, so I don't have to worry that these definitions are taking up space during compilation; i.e. that things which could be written to the object file and then discarded from memory are not being discarded.
Note how when eval-when is used, it's the club sandwich 99% of the time: all three toppings, :compile-toplevel, :load-toplevel, :execute are present. The ergonomics are not very good. There are situations in which it would make sense to only use some of these but they rarely come up.
So are entire branches of mathematics, and I feel safe in saying they are not "simple"
I would say Rust. When you learn the basics, Rust is very simple and will point out to you any errors you have, so you get basically no runtime errors. Also the type system is extremely clean, making the code very readable.
But also, C itself is a very simple language. I do not mean C++, but pure C. I would probably start with this. Yes, you will crash on runtime errors, but besides that it's a very, very simple language, which will give you a good understanding of memory allocation, pointers, etc.
Got through C and K&R with no runtime errors, on four platforms, but the first platform... Someone asked the teacher why a struct would not work in Lattice C. The instructor looked at the code, sat down at the student's computer, typed in a small program, compiled it, and calmly put the disks in the box with the manual and threw it in the garbage. "We will have a new compiler next week." We switched to Manx C, which is what we had on the Amiga. Structs worked in MS C, which I thought was the lettuce compiler. (Apparently a different fork of the portable C compiler, but they admitted years later that it was still big-endian.)
Best programming joke. The teacher said, when your code becomes "recalcitrant"... We had no idea what he meant. This was on the bottom floor of the library, so on break we went upstairs and used the dictionary. Recalcitrant means not obeying authority. We laughed out loud, and then went silent. Oops.
The instructor was a commentator on the cryptic-C challenges, and would often say... "That will not do what you think it will do" and then go on and explain why. Wow. We learned a lot about the pre-processor, and more about how to write clean and useful code.
Lattice C (on the Amiga) was my first C compiler! Do you remember what struct issue you ran into? This was a pretty late version... like 5.x.
Modula-2 is a language operating on the same level (direct memory addressing, no GC etc) but with saner syntax and semantics.
It's still a tad more complicated than it needs to be - e.g. you could drop non-0-based arrays, and perhaps sets and even enums.
Modula-2, Object Pascal, Oberon specially Oberon-07.
I would say Zig is the spiritual follower from the first two, while Go follows up the Oberon and Limbo heritage.
It depends what you mean by simple. C still is simple, but it doesn't include a lot of features that other languages do, and to implement them in C is not simple.
C is simple for some use cases, and not for others.
> C still is simple
Syntactically, yes. Semantically, no.
There are languages with tons of "features" with far, far less semantic overhead than C.
https://blog.regehr.org/archives/767
FWIW, writing programs in C has been my day job for a long time.
Exactly. There is a lot happening implicitly in a C program that the programmer has to be aware of and keep in mind. And it's made worse by valid compiler implementation choices. I remember chasing a bug for a day that was based on me forgetting that the particular implementation I was working with had signed characters and was sign-extending something at an inopportune time.
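A small sketch of that kind of trap (whether plain char is signed is implementation-defined):

    #include <stdio.h>

    int main(void) {
        char c = '\xFF';   /* bit pattern 0xFF */
        int  i = c;        /* sign-extends to -1 where char is signed */
        if (i == 0xFF)
            printf("plain char is unsigned here\n");
        else
            printf("plain char is signed here: i = %d\n", i);
        return 0;
    }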
As someone who has had to parse C syntax for a living, I'd argue that it's not syntactically simple either. (Declarators are particularly nasty in C and even more so in C++).
Entirely my point. Simpler in some ways, more difficult in others. Totally depends on the use case
The appeal of C is that you're just operating on raw memory, with some slight conveniences like structs and arrays. That's the beauty of its simplicity. That's why casting a struct pointer to its first member works, why everything has an address, or why pointer arithmetic is so natural. Higher-level langs like C++ and Go try to retain the usefulness of these features while abstracting away the actuality of them, which is simultaneously sad and helpful.
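A quick sketch of the "cast to first member" idiom (names invented; the standard does guarantee that a pointer to a struct, suitably converted, points to its first member):

    #include <stdio.h>

    struct base { int id; };

    struct widget {
        struct base base;     /* first member */
        const char *label;
    };

    int main(void) {
        struct widget w = { { 7 }, "button" };
        /* A pointer to the struct converts to a pointer to its first
           member (and back); poor man's inheritance in plain C. */
        struct base *b = (struct base *)&w;
        printf("id = %d\n", b->id);
        return 0;
    }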
> The appeal of C is that you're just operating on raw memory ... why everything has an address, or why pointer arithmetic is so natural
That is just an illusion to trip unsuspecting programmers who have false mental models. Pointers are not addresses, and pointer arithmetic is rife with pitfalls. There is the whole pointer provenance thing, but that's more like the tip of the iceberg.
That is really the problem with C; it feels like you can do all sorts of stuff, but in reality you are just invoking nasal demons. The real rules on what you can and can not do are far more intricate and arcane, and nothing about them is very obvious on the surface level.
A C program of useful length typically includes a smattering of implicit type conversions that the programmer never intended or considered. It's the consequence of a feature that abstracts away how the type system and memory really[1] act.
[1]for certain definitions of 'really'
> That's why casting a struct pointer to its first member works
Until WG14 makes everything you love about C "undefined behavior" in the name of performance.
> Until WG14 makes everything you love about C "undefined behavior" in the name of performance.
What do you mean?
I just looked up WG14 and I cannot see what you mean
A link perhaps? Am I going to have to "pin" my C compiler version?
Some people have this idea that when they write utter nonsense it should do what they meant; i.e. they're missing out on the whole discipline of programming and going straight from "I want it to work" to "It should work", and don't understand what they're doing wrong.
For some of these people WG14 (the C language working group of SC22, the programming language sub-committee of JTC1, the Joint Technical Committee between ISO and the IEC) is the problem, because somehow they've taken this wonderful language where you just write stuff and it definitely works and does what you meant and turned it into something awful.
This doesn't make a whole lot of sense, but hey, they wrote nonsense and they're angry that it didn't work, do we expect high quality arguments from people who mumble nonsense and make wild gestures on the street because they've imagined they are wizards? We do not.
There are others who blame the compiler vendors, this at least makes a little more sense, the people who write Clang are literally responsible for how your nonsense C is translated into machine code which does... something. They probably couldn't have read your mind and ensured the machine code did what you wanted, especially because your nonsense doesn't mean that, but you can make an argument that they might do a better job of communicating the problem (C is pretty hostile to this, and C programmers no less so)
For a long time I thought the best idea was to give these people what they ostensibly "want" a language where it does something very specific, as a result it's slow and clunky and maybe after you've spent so much effort to produce a bigger, slower version of the software a friend wrote in Python so easily these C programmers will snap out of it.
But then I read some essays by C programmers who had genuinely set out on this path and realised to their horror that their fellow C programmers don't actually agree on what their C programs mean; the ambiguity isn't some conspiracy by WG14 or the compiler vendors, it's their reality: they are bad at writing software. The whole point of software is that we need to explain exactly what the machine is supposed to do; when we write ambiguous programs we are doing a bad job of that.
The premise "lol who needs memory safety at runtime, you get sigsegv if there's a problem no biggie, lets make it FAST and dont bother with checks" was the original horror. There are enough cowboys around that loved the approach. It's actually not so surprising such mindset became cancerous over time. The need to extract maximum speed devoured the language semantics too. And it is spreading, webassembly mostly inherited it.
I've said before that C is small, but not simple.
Turing Tarpits like Brainfuck or the Binary Lambda Calculus are a more extreme demonstration of the distinction, they can be very tiny languages but are extremely difficult to actually use for anything non-trivial.
I think difficulty follows a "bathtub" curve when plotted against language size. The smallest languages are really hard to use, as more features get added to a language it gets easier to use, up to a point where it becomes difficult to keep track of all the things the language does and it starts getting more difficult again.
> but in practice it's actually very complex with tons of nuance
That's because computers are very complex with tons of nuance.
1972 is the answer to the question on the lips of everybody too busy to look at the source files.
The first 4 commits in GO are:
    commit d82b11e4a46307f1f1415024f33263e819c222b8
    Author: Brian Kernighan <[email protected]>
    Date:   Fri Apr 1 02:03:04 1988 -0500

        last-minute fix: convert to ANSI C

        R=dmr
        DELTA=3 (2 added, 0 deleted, 1 changed)

    :100644 100644 8626b30633 a689d3644e M src/pkg/debug/macho/testdata/hello.c

    commit 0744ac969119db8a0ad3253951d375eb77cfce9e
    Author: Brian Kernighan <research!bwk>
    Date:   Fri Apr 1 02:02:04 1988 -0500

        convert to Draft-Proposed ANSI C

        R=dmr
        DELTA=5 (2 added, 0 deleted, 3 changed)

    :100644 100644 2264d04fbe 8626b30633 M src/pkg/debug/macho/testdata/hello.c

    commit 0bb0b61d6a85b2a1a33dcbc418089656f2754d32
    Author: Brian Kernighan <bwk>
    Date:   Sun Jan 20 01:02:03 1974 -0400

        convert to C

        R=dmr
        DELTA=6 (0 added, 3 deleted, 3 changed)

    :100644 000000 05c4140424 0000000000 D src/pkg/debug/macho/testdata/hello.b
    :000000 100644 0000000000 2264d04fbe A src/pkg/debug/macho/testdata/hello.c

    commit 7d7c6a97f815e9279d08cfaea7d5efb5e90695a8
    Author: Brian Kernighan <bwk>
    Date:   Tue Jul 18 19:05:45 1972 -0500

        hello, world

        R=ken
        DELTA=7 (7 added, 0 deleted, 0 changed)

    :000000 100644 0000000000 05c4140424 A src/pkg/debug/macho/testdata/hello.b

Am I interpreting this repo correctly? The first C compiler was written in...C?
It would have been bootstrapped in assembly (or B/BCPL?) and then once you can compile enough C to write a C compiler you rewrite your compiler in C.
I remember a Computerphile video where prof. Brailsford said something along the lines of "nobody knew who wrote the first C compiler, everybody just kinda had it and passed it around the office" which I think is funny. There's some sort of analogy to life and things emerging from the primordial soup there, if you squint hard enough.
Yes. The question you're asking is: "how was this bootstrapped?"
The page that's referenced from GitHub doesn't describe that:
http://cm.bell-labs.co/who/dmr/primevalC.html
However, there probably was a running C compiler (written in assembly), an assembler and a linker available, hand-bootstrapped from machine code; then assembler, linker, then B, NB and then C...
We can't tell but that would make sense...
The first B compiler was written in BCPL on the GE 635 mainframe. Thompson wrote a B compiler in BCPL which they used to cross-compile for PDP-7. Then Thompson rewrote B in B, using the BCPL compiler to bootstrap. AFAIK this is the only clean "bootstrap" step involved in the birth of C (BCPL -> B -> self-compiled B)
Then they tweaked the compiler and called it NB (New B), then eventually tweaked it enough they decided to call it C.
The compiler continuously evolved by compiling new versions of itself through the B -> New B -> C transition. There was no clean cutoff to say "ah this was the first C compiler written in New B".
You can see evidence of this in the "pre-struct" version of the compiler after Ritchie had added structure support but before the compiler itself actually used structs. They compiled that version of the compiler then modified the compiler source to use structs, thus all older versions of the compiler could no longer compile the compiler: https://web.archive.org/web/20140708222735/http://thechangel...
Primeval C: https://web.archive.org/web/20140910102704/http://cm.bell-la...
A modern bootstrapping compiler usually keeps around one or more "simplified" versions of the compiler's source. The simplest one either starts with C or assembly. Phase 0 is compiled or assembled then is used to compile Phase 1, which is used to compile Phase 2.
(Technically if you parsed through all the backup tapes and restored the right versions of old compilers and compiler source you'd have the bootstrap chain for C but no one bothered to do that until decades later).
Funnily enough, it is emphatically not a single-pass compiler.
I don’t think anybody thinks or thought it was.
I thought it would be, given that C is designed in such a way that a single pass ought to be sufficient. Single-pass compilers were not uncommon in that era.
Was it really designed this way? I keep hearing this claim but I don't think Ritchie himself actually confirmed that?
Also, notice how the functions call each other from wherever, even from different files, without need of any forward declarations; it simply works, which, as I have been repeatedly told, is not something a single-pass compiler can implement :)
I mean, I don't know if it was ever explicitly stated, but consider: parsing requires at most one token of lookahead - assuming that you use the lexer hack to disambiguate declarations; and in earliest versions of C without typedef, you don't even need the hack because all declarations must start with a keyword. You cannot reference any declarations - not types, not variables, not functions - until they are defined, with a special exemption for pointer types that is precisely one case where the compiler doesn't care because a pointer is a pointer. Altogether, C design permits and even encourages a naive implementation that literally does a single pass parsing and emitting highly unoptimal, but working assembly code as it goes (e.g. always use stack for locals and temporaries). There's also stuff like "register" which is incredibly useful if you're only doing a single pass and of very dubious utility otherwise. I find it hard to believe that it is all a happy coincidence.
Regarding functions, it only works if the function returns int and you match the types correctly for any call that doesn't have the prototype in scope. I believe this to be one of the relict B compatibility features, BTW, since that's exactly how it also worked in B (except that int is the only type there so you only had to match the argument count correctly).
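A small pre-ANSI illustration of that rule (it relies on the implicit int that old compilers assumed; modern compilers warn about or reject it):

    /* No declaration of twice() is visible at the call site, so a
       single-pass compiler just assumes it returns int.  It works only
       because the later definition really does return int. */
    main()
    {
        return twice(21);
    }

    twice(n)
    int n;
    {
        return n + n;
    }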
As someone who has no touchpoints with lower-level languages at all, can you explain to me why those files are called c01, c02, etc.?
Also read how a compiler can be written in the same language - https://en.wikipedia.org/wiki/Bootstrapping_%28compilers%29
Highly interesting!
    main(argc, argv)
    int argv[];
This is a culture shock. Did the PDP-11 not distinguish between `char` and `int`? Of course it did -- this was one of the distinguishing features (byte addressing) of the PDP-11 vs the original machine that ran UNIX, the PDP-7, after all ;-)
In "ancient"/K&R C, types weren't specified with the parameters, but on the following lines afterwards. GCC would still compile code like this, if passed the -traditional flag, until ... some point in the last decade or so. Still, this style was deprecated with ANSI C/C89, so it had a good run.
I find it even more interesting that in a later version this appears:
    main(argc, argv)
    char argv[][];
Which sadly is no longer valid in C.

I thought the first C compiler was written in B.
If we had the full change history you would see that it is written in B. New features were added and changes were iteratively made along the way, but it is the same codebase. Nowadays we'd pick some change point and call it B v2, but back then they named that point C.
That's not quite correct. See my comment here: https://news.ycombinator.com/item?id=43465698
B was bootstrapped in BCPL, then rewritten in B to be self-hosting. But the transition from B to NB (New B) to C was continuous evolution. Thompson or Ritchie would add a feature to the compiler, compile a new compiler, then change the compiler source to use the new feature. If you did not have a sufficiently new B/NB/C compiler you could not compile the compiler, and there was no path maintained to deal with that. You went down the hall and asked someone else to give you the newer compiler.
There also wasn't a definitive point where NB became C... they just decided it had changed enough and called it C.
What's not quite correct?
I apologize, I meant to reply to the parent!
I was refuting the idea that they sat down and wrote the C compiler in B, then rewrote the compiler in C and compiled it with the B-compiled C compiler. You and the parent might not have meant it that way but I wanted to clarify because in modern terms that is what many people will assume.
Yes. I'm not an expert in compilers, but how is the first C compiler also written in C? How did they compile the compiler?
Can't stop thinking about Ken Thompson Hack. This should be a clean one ...
Probably one of my favorite pieces of software of all times. Learned so much from this!
Do you remember any interesting anecdote you can share?
Anecdote, probably not. But I learned how a compiler works from it and reconstructed the B compiler based on it (found here: https://github.com/aap/b, warning: repo is messy, will clean up more soon hopefully).
Can people who have used/were around at this time (early 1970s) give a description of the typical dev environment?
Also helpful: C history https://en.wikipedia.org/wiki/C_language#History
From Wikipedia, early Unix was developed on the PDP-11 (16-bit).
signed 16-bit ints, 8-bit chars, arrays of those previous types.
identifiers were limited in length? (I'm seeing 8 chars, lowercase, as the longest)
octal numeric constants, was hexadecimal used?
there was only a line editor available (vi was 1976)
did the file system support directories at that point?
no C preprocessor, no header files. (1973)
no make/makefiles (1976)
was there a std library used with the linker or an archive of object files that was the 'standard' library?
Bourne shell wasn't around (1979), so wikipedia seems to point to the Thompson shell - https://en.wikipedia.org/wiki/Thompson_shell
was there a debugger or was printf the only tool?
I'm not sure about max identifier length in general, but identifiers exported across translation units (i.e. non-static in modern C) were limited to 6 significant chars as late as ISO C90, although I don't think there were still any compilers around at the time that actually made use of this limit.
Which compiler was used to compile the first compiler?
With BCPL
https://web.archive.org/web/20250130134200/https://www.bell-...
See also this comment https://news.ycombinator.com/item?id=43462794
Love how unserious some of the code comments are. Makes you feel less noob for a second :')