m4r1k 1 day ago

I once saw a talk from Brian Kernighan who made a joke about how in three weeks Ken Thompson wrote a text editor, the B compiler, and the skeleton for managing input/output files, which turned out to be UNIX. The joke was that nowadays we're a bit less efficient :-D

ferguess_k 1 day ago

Ken is definitely a top-notch programmer. A top-notch programmer can do a LOT given 3 weeks of focus time. I remember that his wife took the kids to England, so he was free to do whatever he wanted. And he definitely had a lot of experience before writing what became the first version of UNIX.

Every programmer who has a project in mind should try this: set aside 3 weeks of focus time in a cabin, away from work and family, gather every book or document you need, and cut off the Internet. Use a dumb phone if you can live with it. See how far you can go. Just make sure it is something you have already put a lot of thought and a bit of code into.

After thinking more thoroughly about the idea, I believe low-level projects that rely on as few external libraries as possible are the best ones to try it out on. If your project relies on piles of 3rd-party libraries, you are stuck the moment you hit an issue, with no Internet to help you figure it out. Ken picked the right project too.

foxglacier 23 hours ago

> low level projects that rely on as few external libraries

I think this is key. If you already have the architecture worked out in your head, then it's just smashing away at the keyboard. Once you have a 3rd-party library, you can spend most of your time fighting with it and learning about it.

ferguess_k 23 hours ago

Exactly. Both projects mentioned in this thread (UNIX, Git) had clean-cut visions of what the authors wanted to achieve from the beginning. Nowadays it is almost impossible to FIND such a project. I'm not saying that you can't write another Git or UNIX, but most likely you won't even bother using it yourself, so what's the point? That's why I think "research projects" don't fit here -- you learn something and then you throw it away.

What I have in mind are embedded projects -- you are probably going to use the result even if you are the only user, so that fixes the motivation issue. You probably have a clean-cut objective, so that ticks the other checkbox. You need to bring a dev board, a bunch of breadboards, and electronic components to the cabin, but those don't take up a lot of space. You need the specifications of the dev board and of the components used in the project, but those are just PDF files anyway. You need some C best practices? There must be a PDF for that. You can do a bit of experimental coding before you leave for the cabin, to make sure the idea is solid and feasible and the toolchain works. The preparation gives you a wired-up breadboard and maybe a few hundred lines of C code. That's all you need to complete the project in 3 weeks.

Game programming, modding, and mapping come to mind, too. They are fun, clean-cut, and well defined. The catch is that you might need the Internet to check documentation or algorithms from time to time, and it is a lot better to cut off the Internet completely. I think they fit if you are well into them already -- then you boost them by working 3 weeks in a cabin.

There must be other lower level projects that fit the bill. I'm NOT even a good, ordinary programmer, so the choices are few.

cjs_ac 1 day ago

The interview is here: https://www.youtube.com/watch?v=EY6q5dv_B-o

One hour long, and Thompson tells a lot of entertaining stories. Kernighan does a good job of just letting Thompson speak.

Cthulhu_ 1 day ago

A newer joke is that Ken Thompson (along with Rob Pike and Robert Griesemer) designed Go while waiting for C / C++ to compile.

kragen 7 hours ago

Not C/C++. Specifically C++. C compiles pretty fast. And it's not really a joke, though obviously it wasn't a single build they were waiting for.

xattt 1 day ago

I’m wondering what the process was for the early UNIX developers to attain this level of productivity.

Did they treat this as a 9-5 effort, or did they go into a “goblin mode” just to get it done while neglecting other aspects of their lives?

ironmanszombie 1 day ago

Back in my early career, the company I worked for needed an inventory system tailored to their unique process flow. Such a system was already in development and was scheduled to launch "soon". A few months went by and I got fed up with the toil. I sat down one weekend and implemented the whole thing in Django. I'm no genius, yet I managed to build a solution that my team used for a few years until the company launched theirs. In a weekend. Amazing what you can do when you want to Get Shit Done!

deaddodo 1 day ago

That's fine when it's self-motivated, but it sets a terrible precedent for expectations. Doing things like this can put in management's mind unrealistic expectations for you to always work at that pace. Which can be unhealthy and burnout-inducing.

smm11 20 hours ago

I worked at a place in love with their ERP system. Some there had been using it for 30+ years, since it ran in DOS.

My Excel skills completely blow, and I hate Microsoft with a passion, but one long Saturday afternoon I created a shared spreadsheet that had more functionality than our $80K-a-year ERP system. I showed it to a few of the more open-minded employees, then moved it to my server, never to be shown again. I just wanted to prove that when I said the ERP system was pointless, I was right.

masom 23 hours ago

A big one is the lack of peer reviews and processes, including team meetings, that would slow them down. No PM, no UX, just yourself and the keyboard with some goals in mind. No OKRs or tickets to close.

It's a bit like any early industry, from cars to airplanes to trains. Early models were made by a select few people, and there were several versions between then and today, when GM and Ford have thousands of people involved in designing a single car iteration.

jandrese 20 hours ago

IMHO the biggest thing is that they were their own customer. There was no requirements gathering, ui/ux consultation, third party bug reporting, just like you said. They were eating their own dogfood and loving it. No overhead meant they could focus entirely on the task at hand.

kragen 7 hours ago

We aren't talking about a very large amount of code here. Mainly the process was implementing several similar systems over the previous 10 years. You'd be surprised how much faster it is to write a program the fifth time, now that you know all the stuff you can leave out.

noisy_boy 1 day ago

Genius level mind minus scrum/agile nonsense can help.

hylaride 21 hours ago

Impossible! How can the product managers maintain control without the bureaucracy? /s

Daishiman 23 hours ago

A lot of the supposed "features" we have in Unix nowadays are artifacts of primitive limitations, like dotfiles.

If you're willing to let everything crash if you stray from the happy path you can be remarkably productive. Likewise if you make your code work on one machine, on a text interface, with no other requirements except to deliver the exact things you need.

somat 23 hours ago

It is also the case that the first 80% of a project's functionality goes really quickly, especially when you are interested in and highly motivated by the project. That remaining 20%, though, is a long tail; it tends to be a huge slog that kills your motivation.

jefurii 1 hour ago

The first 80% of a project takes 80% of the time. The last 20% of the project takes the other 80% of the time.

pinoy420 1 day ago

Otoh: I got React to run my tests without any warnings today.

9dev 1 day ago

If I write a bunch of tests for new code, and all of them pass on the first attempt, I'm immediately suspicious of a far more egregious bug hiding somewhere…

michaelcampbell 20 hours ago

"never trust a test you've never seen fail." has kept me honest on more than one occasion.

throwanem 1 day ago

Where feasible, I like to start a suite with a unit test that validates the unit's intended side effects actually occur, as visible in their mocks being exercised.

pinoy420 1 day ago

I laughed. Thank you for that

throwanem 15 hours ago

Sure. For Patreon subscribers at the $5/month tier and up, I also have a course on making integration ("e2e", "functional") tests more maintainable by eliminating side effects.

noisy_boy 1 day ago

// Todo: remove

return true;

kps 1 day ago

/bin/true used to be an empty file. On my desktop here, it's 35K (not counting shared libraries), which is an absolute increase of 35K and a relative increase of ∞%.
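The empty file worked because exec() refuses it, so the shell falls back to running it as a shell script, and an empty script does nothing and exits 0. A minimal sketch of the same trick with a local file:

    $ : > true && chmod +x true   # create an empty, executable file
    $ ./true; echo $?             # the shell runs it as an (empty) script
    0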

mr_toad 23 hours ago

I’ve heard that Torvalds built Git in 5 (or 10) days and that Brendan Eich created JavaScript in 10 days.

Maybe the average programmer is less efficient, but the distribution is probably heavily skewed these days.

somat 23 hours ago

> I’ve heard that Torvalds build Git in 5 days

And it shows.

I am joking of course, git is pretty great. Well, half-joking: what is it about Linux that it attracts such terrible interfaces? git vs hg, iptables vs pf. There is a lot of technical excellence present, marred by a substandard interface.

wbl 21 hours ago

That's why Magit exists

markus_zhang 23 hours ago

I'd argue that ordinary programmers can perform the same *type* of exercise if they:

- Set aside a few weeks and go into hermit mode;

- Plan ahead which projects they have in mind and which books/documents to bring. Do enough research and a bit of experimental coding beforehand;

- Reduce distraction to a minimum. No Internet. Dumb phone only. Bring a Garmin GPS if needed. No calls from family members.

I wouldn't be surprised if they could level up their skills and complete a tough project in three weeks. Surely they won't write a UNIX or Git, but a demanding project is feasible with the research done before going into hermit mode.

richardlblair 23 hours ago

I also think people underestimate how much pondering one does before starting a project.

markus_zhang 22 hours ago

I think so. I don't think Ken had zero thoughts about UNIX and then suddenly came up with a minimal but complete solution in under 3 weeks. Previous experience counts for a lot too. Wozniak was able to quickly design some electronics, but he had probably already banked his 10,000 hours (just to borrow the popular metaphor) before he joined HP.

nyrikki 21 hours ago

They had both been working on the Multics project before Bell Labs pulled out of it, and they had already written several languages.

While some ideas, like the hierarchical filesystem, were new, it was mainly a modernized version of CTSS, according to Dennis Ritchie's paper "The UNIX Time-sharing System: A Retrospective".

I was playing with this version on simh way too late last night, taking a break from ITS. Being very familiar with v7, 2.11BSD, etc., I can say it is quite clearly very cut down.

I think being written in assembly, with an assembler they produced by copying DEC's PAL-11R, helped a lot.

If you look through the v1 here:

https://www.tuhs.org/Archive/Distributions/Research/Dennis_v...

It is already very modular, and obviously helped by dmr's MIT work:

https://people.csail.mit.edu/meyer/meyer-ritchie.pdf

And yet... they had worked for years on an ultra-complex OS intended to provide 'utility scale' compute, so writing a fairly simple OS for a tiny mini would be much easier... if not so for us mortals.

It isn't like they had just come out of a coding boot camp... they needed the tacit knowledge and experience to push out 100K+ lines in one year, two people working over 300 bps terminals, etc.

ForOldHack 7 hours ago

"EDIT: This was created to collect everything:" Wow. Amazing. Dennis would have been proud. Thank you, and thank everyone for their work. Thanks.

markus_zhang 19 hours ago

Yeah. They were pretty professional by then :D

wbl 21 hours ago

Brendan Eich would say "10 days" whenever one of the big, unfixable warts from that rush came up.

digitalsushi 1 day ago

Spock levels of fascinating for me. I want to learn how to compile a PDP-11 emulator on my Mac.

thequux 1 day ago

Compiling an emulator is quite easy: have a look at simh. It's very portable and should just work out of the box.

Once you've got that working, try installing a 2.11BSD distribution. It's well-documented and came after a lot of the churn in early Unix. After that, I've had great fun playing with RT-11, to the point that I've actually written some small apps on it.
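For the curious, booting 2.11BSD under simh takes only a few commands. This is a sketch from memory; the CPU model, memory size, and disk image name are examples, so check what your particular distribution kit expects:

    $ pdp11
    sim> set cpu 11/73
    sim> set cpu 4m
    sim> attach rq0 211bsd.dsk
    sim> boot rq0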

an-unknown 1 day ago

> After that, I've had great fun playing with RT-11 [...]

If you want to play around with RT-11 again, I made a small PDP-11/03 emulator + VT240 terminal emulator running in the browser. It's still incomplete, but you can play around with it here: https://lsi-11.unknown-tech.eu/ (source code: https://github.com/unknown-technologies/weblsi-11)

The PDP-11/03 emulator itself is good enough that it can run the RT-11 installer to create the disk image you see in the browser version. The VT240 emulator is good enough that the standalone Linux version can be used as terminal emulator for daily work. Once I have time, I plan to make a proper blog post describing how it all works / what the challenges were and post it as Show HN eventually.

somat 1 day ago

The Dave's Garage YouTube channel has an episode where he documents the pitfalls of compiling 2BSD for a PDP-11/83: https://www.youtube.com/watch?v=IBFeM-sa2YY Basically, it is an art on a memory-constrained system.

What I found entertaining was that when he explained how to compile the kernel, I went "Oh! That's where OpenBSD gets it from." It is still a very similar process.

azinman2 1 day ago

What’s the process look like?

somat 23 hours ago

On OpenBSD it's:

    cd /sys/arch/$(machine)/conf
    cp GENERIC CUSTOM
    vi CUSTOM    # make your changes
    config CUSTOM
    cd ../compile/CUSTOM
    make
https://www.openbsd.org/faq/faq5.html

I have never done it for 2BSD, but according to http://www.vaxman.de/publications/bsd211_inst.pdf it's:

    cd /usr/src/sys/conf
    cp GENERIC CUSTOM
    vi CUSTOM
    ./config CUSTOM
    cd /sys/CUSTOM
    make

icedchai 1 day ago

I've been messing around with RSX-11M myself! I find these early OSes quite fascinating. So far I've set up DECNet with another emulator running VMS, installed a TCP stack, and a bunch of compilers.

colechristensen 1 day ago

From the link:

> It's somewhat picky about the environment. So far, aap's PDP-11/20 emulator (https://github.com/aap/pdp11) is the only one capable of booting the kernel. SIMH and Ersatz-11 both hang before reaching the login prompt. This makes installation from the s1/s2 tapes difficult, as aap's emulator does not support the TC11. The intended installation process involves booting from s1 and restoring files from s2.

aap_ 1 day ago

Good luck though. My emulator is not particularly user-friendly; as in, it has no user interface. I recommend simh (although perhaps not for this thing in particular).

colechristensen 1 day ago

So what mechanism do you have set up to reply 4 minutes after being mentioned? :)

aap_ 1 day ago

Compulsively checking HN i suppose :D

lanstin 1 day ago

Also looking at threads view first before actual news helps with that.

snovymgodym 1 day ago

https://opensimh.org/

Works great on Apple Silicon

haunter 1 day ago

What’s the difference between an emulator and a simulator in this context?

bityard 1 day ago

There is LOADS of gray area, overlap, and room for one's own philosophical interpretation... But typically simulators attempt to reproduce the details of how a particular machine worked, for academic or engineering purposes, while emulators are concerned mainly with getting the desired output. (Everything else being an implementation detail.)

E.g. since the MAME project considers itself living documentation of arcade hardware, it would be more properly classified as a simulator. While the goal of most other video game emulators is just to play the games.

Imustaskforhelp 1 day ago

I don't want to offend you, but this has made me wonder even more what the difference is.

It just feels like one is an emulator if its philosophy is "it just works" and a simulator if it's "well, sit down kids, I am going to give you proper documentation and how it was built back in my day".

But I wonder what that means for programs themselves...

I wonder if simulator == emulator is truer than JavaScript's loose equality would allow.

kragen 7 hours ago

It's fuzzy.

anthk 1 day ago

Not the case at all. Tons of emulators are near 100% accurate.

Brian_K_White 1 day ago

Irrelevant to the concept being expressed, and it does not invalidate it.

The goals merely overlap, which is obvious. Equally obviously, if two goals are similar, then the implementations of some way to attain those goals may have some overlap, maybe even a lot of overlap. And yet the goals are different, and it is useful to have words that express aspects of things that aren't apparent from merely the final object.

A decorative brick and a structural brick may both be the same physical brick, yet if the goals are different then any similarity in the implementation is just a coincidence. It would not be true to say that the definition of a decorative brick includes the materials and manufacturing steps and final physical properties of a structural brick. The definition of a decorative brick is to create a certain appearance, by any means you want, and it just so happens that maybe the simplest way to make a wall that looks like a brick wall is to build an actual brick wall.

If only they had tried to make it clear that there is overlap, and the definitions are grey and fuzzy and open to personal philosophical interpretation, and the one thing can often look and smell and taste almost the same as the other thing... if only they had said anything at all about that, it might have headed off such a pointless confusion...

bityard 1 day ago

Huh? I didn't mention anything about accuracy. And "accuracy" (an overloaded and ill-defined term on its own) doesn't have anything to do with the differences between simulators and emulators.

Imustaskforhelp 1 day ago

Exactly. Makes you wonder: is it all just philosophical?

Calling the same thing by a different name.

o11c 1 day ago

In theory, an emulator is oriented around producing a result (this may mean making acceptable compromises), whereas a simulator is oriented around inspection of state (this usually means being exact).

In practice the terms are often conflated.

codr7 1 day ago

The difference is about as crystal clear as compiler/interpreter.

Imustaskforhelp 1 day ago

A compiler creates a binary in ELF format or some other format, which can be run given that a shared object exists.

An interpreter either writes it in bytecode and then executes the bytecode line by line?

At least that is what I believe the difference is. Care to elaborate? Is there some hidden joke about compilers vs. interpreters that I don't know about?

dpassens 1 day ago

I assume GP meant that a lot of compilers also interpret and interpreters also compile.

For compilers, constant folding is a pretty obvious optimization. Instead of compiling a constant expression like 1+2 to code that evaluates it, the compiler can evaluate it itself and just produce the final result, in this case 3.
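You can watch the folding happen by compiling a constant expression and reading the assembly. The output below is typical of gcc on x86-64; the exact instructions vary by compiler and target:

    $ cat fold.c
    int three(void) { return 1 + 2; }
    $ cc -O1 -S fold.c && grep '\$3' fold.s
            movl    $3, %eax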

Then, some language features require compilers to perform some interpretation, either explicitly like C++'s constexpr, or implicitly, like type checking.

Likewise, interpreters can do some compilation. You already mentioned bytecode. Producing the bytecode is a form of compilation. Incidentally, you can skip the bytecode and interpret a program by, for example, walking its abstract syntax tree.

Also, compilers don't necessarily create binaries that are immediately runnable. Java's compiler, for example, produces JVM bytecode, which requires a JVM to be run. And TypeScript's compiler outputs JavaScript.

Imustaskforhelp 23 hours ago

Then what is the difference? I always thought of Java as closer to Python, in the sense that it's running bytecode, and Python also has bytecode.

I don't know what the difference is. I know there can be interpreters of compiled languages, but generally speaking it's hard to find compilers of interpreted languages.

E.g. C++ has both compilers and interpreters (cpi), gcc.

JS doesn't have compilers IIRC; it can have transpilers. Js2c is a good one, but I am not sure if they are failsafe (70% ready).

I also have to thank you; this is a great comment.

o11c 21 hours ago

Programming languages mostly occupy a 4-dimensional space at runtime. These axes are actually a bit more complicated than just a line:

* The first axis is static vs dynamic types. Java is mostly statically-typed (though casting remains common and generics have some awkward spots); Python is entirely dynamically-typed at runtime (external static type-checkers do not affect this).

* The second axis is AOT vs JIT. Java has two phases - a trivial AOT bytecode compilation, then an incredibly advanced non-cached runtime native JIT (as opposed to the shitty tracing JIT that dynamically-typed languages have to settle for); Python traditionally has an automatically-cached barely-AOT bytecode compiler but nothing else (it has been making steps toward runtime JIT stuff, but poor decisions elsewhere limit the effectiveness).

* The third axis is indirect vs inlined objects. Java and Python both force all objects to be indirect, though they differ in terms of primitives. Java has been trying to add support for value types for decades, but the implementation is badly designed; this is one place where C# is a clear winner. Java can sometimes inline stack-local objects though.

* The fourth axis is deterministic memory management vs garbage collection. Java and Python both have GC, though in practice Python is semi-deterministic, and the language has a somewhat easier way to make it more deterministic (`with`, though it is subject to unfixable race conditions)

I have collected a bunch more information about language implementation theory: https://gist.github.com/o11c/6b08643335388bbab0228db763f9921...

amszmidt 21 hours ago

The easy definition is that an interpreter takes something and runs/executes it.

A compiler takes the same thing but produces an intermediate form (bytecode, machine code, or another language; the latter is sometimes called a "transpiler") that you can then pass through an interpreter of sorts.

In that sense there is no difference between Java and the JVM, Python and the Python virtual machine, or even a C compiler targeting x86 and an x86 CPU. One might be called byte code and the other machine code... they do the same thing.

amszmidt 21 hours ago

While an interpreter can do optimizations, it does not produce "byte code" -- by that point it is a compiler!

As for the comparison with the JVM: compare it to a compiler that produces x86 code, which cannot be run without an x86 machine. You need a machine to run something, be it virtual or not.

codr7 1 day ago

Thank you!

somat 22 hours ago

I would generalize it: a compiler produces some sort of artifact that is intended to be used directly later, while for an interpreter the whole mechanism (source to execution) is intended to be used directly.

The same tool can often do both. Trivial example: a web browser. Save your web page as a PDF? Compiler. Otherwise, interpreter. But what if the code it is executing is not artisanal handcrafted JS but the output of a TypeScript compiler?

amszmidt 22 hours ago

An interpreter runs the code as it is being read in.

A compiler processes the code and provides an intermediate result which is then "interpreted" by the machine.

So to take the earlier description: "writes it in byte code" -- that is the compiler; "executes the byte code" -- that is the interpreter.

Whether the byte code is "machine code" or not is really secondary.

Imustaskforhelp 12 hours ago

Then aren't all languages theoretically assembly interpreters in the end?

ijustlovemath 1 day ago

Adding some anecdata: I feel like "emulator" is mainly used in the context of gaming, where people actually care a great deal about accurate reproduction (see: assembly bugs on the N64 that had to be reproduced in emulators in order to build TASes). I haven't seen it used much for old architectures; I'd call those virtual machines instead.

definitely agree on simulator though!

amszmidt 21 hours ago

I think it is more about design: emulation mimics what something does; simulation replicates how it does it.

It is a tiny distinction, but generally I'd say that a simulator tries to accurately replicate what happens at the electrical level, as well as one can.

An emulator, meanwhile, just does things as a black box... input produces the expected output, using whatever means.

You could compare it to this: an accurate simulator of a 74181 tries to do it using AND/OR/NOT/... logic, but an emulator does it using "normal code".

In HDL you have a similar situation with structural vs. behavioral design... structural is generally based on much lower-level logic (e.g., AND/NOR/... gates), and behavioral on higher-level operations (addition, subtraction...).

"100%" accuracy can be achieved with both methods.

nonrandomstring 1 day ago

Yep, this is a metal-detectorist-finds-a-religious-relic moment.

boznz 1 day ago

Too easy! Going to build one with NAND gates.

ForOldHack 1 day ago

You can make logic gates out of almonds???

wglb 1 day ago

Or you could go the way of this quite impressive project: http://fpgaretrocomputing.org/

dataf3l 1 day ago

I love this!

First time I've seen people use 'ed' for work!!!

I wonder who else has to deal with ed. Recently I had to connect to an ancient system where vi was not available, so I had to write my own editor; whoever needs an editor for an ancient system, ping me (it is not too fancy).

Amazing work by the creators of this software and by the researchers; you have my full respect, guys. Those are the real engineers!

wpollock 1 day ago

I remember using an ed-like editor on a Honeywell timesharing system in the 1960s, over a Teletype ASR-33. I don't remember much, except that you invoked it using "make <filename>" to create a new file. And if you typed "make love" the editor would print "not war" before entering the editor.

skissane 1 day ago

The “MAKE LOVE”/“NOT WAR” easter egg was in TECO for DEC PDP-6/10 machines. But DEC TECO was also ported to Multics, so maybe that was the Honeywell machine you used it on.

But, for a whole bunch of reasons, I'm left with the suspicion that you may be misremembering something from the early 1970s as happening in the 1960s. While it isn't totally impossible that you had this experience in 1968 or 1969, a 1970s date would be much more historically probable.

flyinghamster 1 day ago

The easter egg carried over to the PDP-11 as well. I remember it being present in RSTS/E 7.0's TECO back in my high school days, and I just fired up SIMH and found it's definitely there.

On the other hand, I never really tried to do anything with TECO other than run VTEDIT.

wpollock 22 hours ago

You're probably right. It definitely was TECO, and likely 1970-ish.

pjmlp 1 day ago

Not ed, but definitely inspired by it: I am old enough to have done my typewriting school exam on MS-DOS 3.3 EDLIN.

And since then I never used it again, nor ed when a couple of years later we got Xenix access, as vi was a much saner alternative.

skissane 1 day ago

I also remember using MS-DOS 3.3 EDLIN in anger, on our home computer [0] when I was roughly 8, because it was the only general purpose text editor we had. (We also had Wordstar, which I believe could save files in plain text mode, but I don’t think my dad or I knew that at the time.) I didn’t do much with it but used it to write some simple batch files. My dad had created a directory called C:\BAT and we used it a bit like a menu system, we put batch files in it to start other programs. I don’t remember any PC-compatible machines at my school, it was pretty much all Apple IIs, although the next year moved to a new school which as well as Apple IIs, also had IBM PC JXs (IBM Japan variant of the IBM PCjr which was sold to schools in Australia/New Zealand) and Acorn Archimedes.

[0] it was an IBM PC clone, an ISA bus 386SX, made by TPG - TPG are now one of Australia’s leading ISPs, but in the late 1980s were a PC clone manufacturer. It had a 40Mb hard disk, two 5.25 inch floppy drives (one 1.2Mb, the other 360Kb), and a vacant slot for a 3.5 inch floppy, we didn’t actually install the floppy in it until later. I still have it, but some of the innards were replaced, I think the motherboard currently in it is a 486 or Pentium

relistan 1 day ago

In the mid 90s we had an AT&T 3B2 that only had ed on it. We used it via DEC VT-102 terminals. It (ed) works but it’s not fun by any modern standards. Must’ve been amazing on a screen compared to printout from a teletype though!

Side note: that ~1 MIP 3B2 could support about 20 simultaneous users…

wglb 1 day ago

An early consulting gig was to write a tutorial for ed (on the Coherent system). I often use ed--in fact I used it yesterday. I needed to edit something without clearing the screen.

Earlier, I wrote an editor for card images stored on disks. Very primitive.

kragen 1 day ago

I used ed in Termux on my cellphone to write http://canonical.org/~kragen/sw/dev3/justhash.c in August. Someone, I forget who, had mentioned they were using ed on their cellphone because the Android onscreen keyboard was pretty terrible for vi, which is true. So I tried it. I decided that, on the cellphone, ed was a little bit worse than vi, but they are bad in different ways. It really is much easier to issue commands to ed than to vi on the keyboard (I'm using HeliBoard) but a few times I got confused about the state of the buffer in a way that I wouldn't with vi. Possibly that problem would improve with practice, but I went back to using vi.

kps 1 day ago

In my first computing job I used ed for about six months (we didn't have character-mode I/O yet). I learned to make good use of regular expressions.

WhyNotHugo 1 day ago

The keystrokes are pretty much what you'd press in vim to perform the same actions, except that append mode (apparently) ends when they finish the line rather than with Esc.

The feedback from the editor, however, is… challenging.

rchard2scout 1 day ago

In ed, append mode ends by entering a single '.' on an empty line, and then pressing enter. You can see that happening in the article.

ThePowerOfFuet 1 day ago

Now we know where SMTP got it, I guess.

kragen 1 day ago

That's possible but unlikely. MTP as defined by Suzanne Sluizer and Jon Postel in RFC 772 in September 01980 https://datatracker.ietf.org/doc/html/rfc772 seems to have been where SMTP got that convention for ending the message:

> ...and considers all succeeding lines to be the message text. It is terminated by a line containing only a period, upon which a 250 completion reply is returned.

But in 01980 Unix had only been released outside of Bell Labs for five years and was only starting to support ARPANET connections (using NCP), so I wouldn't expect it to be very influential on ARPANET protocol design yet. I believe both Sluizer and Postel were using TOPS-20; the next year the two of them wrote RFC 786 about an interface used under TOPS-20 at ISI (Postel's institution, not sure if Sluizer was also there) between MTP and NIMAIL.

For some context, RFC 765, the June 01980 version of FTP, extensively discusses the TOPS-20 file structure, mentions NLS in passing, and mentions no other operating systems in that section at all. In another section, it discusses how different hardware typically handles ASCII:

> For example, NVT-ASCII has different data storage representations in different systems. PDP-10's generally store NVT-ASCII as five 7-bit ASCII characters, left-justified in a 36-bit word. 360's store NVT-ASCII as 8-bit EBCDIC codes. Multics stores NVT-ASCII as four 9-bit characters in a 36-bit word. It may be desirable to convert characters into the standard NVT-ASCII representation when transmitting text between dissimilar systems.

Note the complete absence of either of the hardware platforms Unix could run on in this list!

(Technically Multics is software, not hardware, but it only ever ran on a single hardware platform, which was built for it.)

RFC 771, Cerf and Postel's "mail transition plan", admits, "In the following, the discussion will be hoplessly [sic] TOPS20[sic]-oriented. We appologize [sic] to users of other systems, but we feel it is better to discuss examples we know than to attempt to be abstract." RFC 773, Cerf's comments on the mail service transition plan, likewise mentions TOPS-20 but not Unix. RFC 775, from December 01980, is about Unix, and in particular, adding hierarchical directory support to FTP:

> BBN has installed and maintains the software of several DEC PDP-11s running the Unix operating system. Since Unix has a tree-like directory structure, in which directories are as easy to manipulate as ordinary files, we have found it convenient to expand the FTP servers on these machines to include commands which deal with the creation of directories. Since there are other hosts on the ARPA net which have tree-like directories, including Tops-20 and Multics, we have tried to make these commands as general as possible.

RFC 776 (January 01981) has the email addresses of everyone who was a contact person for an Internet Assigned Number, such as JHaverty@BBN-Unix, Hornig@MIT-Multics, and Mathis@SRI-KL (a KL-10 which I think was running TOPS-20). I think four of the hosts mentioned are Unix machines.

So, there was certainly contact between the Unix world and the internet world at that point, but the internet world was almost entirely non-Unix, and so tended to follow other cultural conventions. That's why, to this day, commands in SMTP and header lines in HTTP/1.1 are terminated by CRLF and not LF; why FTP and SMTP commands are all four letters long and case-insensitive; and why reply codes are three-digit hierarchical identifiers.

So I suspect the convention of terminating input with "." on a line of its own got into ed(1) and SMTP from a common ancestor.

I think Sluizer is still alive. (I suspect I met her around 01993, though I don't remember any details.) Maybe we could ask her.

bbanyc 22 hours ago

The "." to terminate input was used in FTP mail on ARPANET, defined in RFC 385 which was well before anyone outside Bell had heard of Unix.

kragen 9 hours ago

Oh wow, really? I didn't look because I assumed mail over FTP was transferred over a separate data connection, just like other files. Thank you!

And yes, in August 01972 probably nobody at MIT had ever used ed(1) at Bell Labs. Not impossible, but unlikely; in June, Ritchie had written, "[T]he number of UNIX installations has grown to 10, with more expected." But nothing about it had been published outside Bell Labs.

The rationale is interesting:

> The 'MLFL' command for network mail, though a useful and essential addition to the FTP command repertoire, does not allow TIP users to send mail conveniently without using third hosts. It would be more convenient for TIP users to send mail over the TELNET connection instead of the data connection as provided by the 'MLFL' command.

So that's why they added the MAIL command to FTP, later moved to MTP and then in SMTP split into MAIL, RCPT, and DATA, which still retains the terminating "CRLF.CRLF".
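That dot is still load-bearing: an SMTP conversation today ends the message data the same way. An illustrative exchange (server reply text varies):

    DATA
    354 Start mail input; end with <CRLF>.<CRLF>
    Subject: hello

    A line consisting of a single "." terminates the message.
    .
    250 OK

And because a body line might genuinely begin with ".", the sender doubles it to ".." and the receiver strips it back, which keeps the terminator unambiguous.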

https://gunkies.org/wiki/Terminal_Interface_Processor explains:

> A Terminal Interface Processor (TIP, for short) was a customized IMP variant added to the ARPANET not too long after it was initially deployed. In addition to all the usual IMP functionality (including connection of host computers to the ARPANET), they also provided groups of serial lines to which could be attached terminals, which allowed users at the terminals access to the hosts attached to the ARPANET.

> They were built on Honeywell 316 minicomputers, a later and un-ruggedized variant of the Honeywell 516 minicomputers used in the original IMPs. They used the TELNET protocol, running on top of NCP.

lmm 1 day ago

I had to use ed to configure X on my Alpha/VMS machine back when I had it; there was something wrong with the terminfo setup, so visual editors didn't work, only line-based programs.

jamesfinlayson 1 day ago

Never had to use ed, but I remember working with someone a fair bit older than me who remembered using it.

S04dKHzrKT 1 day ago

Real Programmers use ed. https://xkcd.com/378/

duohedron 1 day ago

Of course. Ed is the standard text editor. https://www.gnu.org/fun/jokes/ed-msg.en.html

dboreham 1 day ago

Hmm. I still use ed now and then. I assume it's an alias to vim these days.

ajross 1 day ago

Interestingly it's actually a sort of degenerate use of ed. All it does is append one line to an empty buffer and write it to "hello.c". It's literally the equivalent of

    echo 'int main(void) { printf("hello!\n"); }' > hello.c
...EXCEPT...

It's not, because the shell redirection operators didn't exist at this point in time. Maybe (or maybe not?) it would work to cat to the file from stdin and send a Ctrl-D down the line to close the descriptor. But even that might not have been present yet. Unix didn't really "look like Unix" until v7, which introduced the Bourne shell and most of the shell environment we know today.
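For completeness, the equivalent ed session for that same one-liner would look like this (shown with modern ed behavior, which echoes the byte count after the w command):

    $ ed
    a
    int main(void) { printf("hello!\n"); }
    .
    w hello.c
    39
    q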

starspangled 1 day ago

I love browsing the tuhs mailing list from time to time. Awesome to see names like Ken Thompson and Rob Pike, and a bunch of others with perhaps less recognizable names but who were involved in the early UNIX and computing scene.

typeofhuman 1 day ago

Software archeology

api 1 day ago

One of the many things I dislike about the SaaS era is that this will never happen. Nobody in 2075 will boot up an old version of Notion or Figma for research or nostalgia.

Like the culture produced and consumed on social media and many other manifestations of Internet culture it is perfectly ephemeral and disposable. No history, no future.

SaaS is not just closed but often effectively tied to a literal single installation. It could be archived and booted up elsewhere but this would be a much larger undertaking, especially years later without the original team, than booting 1972 Unix on a modern PC in an emulator. That had manuals and was designed to be installed and run in more than one deployment. SaaS is a plate of slop that can only be deployed by its authors, not necessarily by design but because there are no evolutionary pressures pushing it to be anything else. It's also often tangled up with other SaaS that it uses internally. You'd have to archive and restore the entire state of the cloud, as if it's one global computer running proprietary software being edited in place.

pjmlp 1 day ago

And since many applications are basically SaaS offerings plugged into each other via APIs and webhooks, not even those.

We're living the SOA dream, but it comes at a hefty price.

joquarky 1 day ago

It's not as glamorous as it sounds.

JeffTickle 1 day ago

Can anyone provide a reference on what those file permissions mean? I can make a guess, but when I searched around I could not find anything about Unix v2 permissions. The ls output looks so familiar, except for the sdrwrw!

b0in 1 day ago

Someone in the mailing list thread linked the man pages that they were able to extract:

https://gitlab.com/segaloco/v1man/-/blob/master/man1/stat.1?...

for sdrwrw:

- column 1 is s or l meaning small or large

- column 2 is d, x, u, -; meaning directory, executable, setuid, or nothing.

- the rest are read-write bits for owner and non-owner.

Postosuchus 21 hours ago

Pretty interesting. I guess it was way later that they came up with the SUID semantics and appropriated the first character for symlinks (l) or setuid binaries (s)...

WhyNotHugo 1 day ago

1328 bytes for a hello world? BLOAT!

runlevel1 1 day ago

That reminded me of the compiler that used to include a large poem in every binary, just for shits and giggles. You've heard of a magic number; this had a magic sonnet.

I thought it was early versions of the Rust compiler, but I can't seem to find any references to it. Maybe it was Go?

EDIT: Found it: 'rust-lang/rust#13871: "hello world" contains Lovecraft quotes' https://github.com/rust-lang/rust/issues/13871

ptspts 1 day ago

My https://github.com/pts/minilibc686 can do printf-hello-world on i386 in less than 1 KiB. write-hello-world is less than 170 bytes.
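For comparison, a plain dynamically linked build on a modern Linux box lands in the kilobytes even for the write-only version. The size shown below is illustrative only; it varies by toolchain and distro:

    $ cat hello.c
    #include <unistd.h>
    int main(void) { return write(1, "hello!\n", 7) != 7; }
    $ cc -Os -s -o hello hello.c && ./hello && wc -c hello
    hello!
    14328 hello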

kragen 7 hours ago

Very nice!

ramon156 1 day ago

Time to rice my unix!

yjftsjthsd-h 1 day ago

Hm. I wonder how hard it would be to write a neofetch (...er, "oldfetch"?) for v1 Unix. Maybe hardcode some of it? Should work.

doublerabbit 1 day ago

Cool. Can we enter that time portal and live in that alternate reality?

IgorPartola 1 day ago

When gasoline was leaded, cigarette smoke was normal everywhere, and asbestos was used for everything you can think of? It is a fascinating decade, but quality of life has likely skyrocketed since.

queuebert 1 day ago

Depends on what you value. Purchasing power of wages has declined, for example. That's probably not better.

I suspect the sentiment is more that it would be nice to live in a simpler time, with fewer options, because it would reduce anxiety we all feel about not being able to "keep up" with everything that is going on. Or maybe I'm just projecting.

smeeger 1 day ago

It is fascinating to consider that this might not be true even though it seems true.

msla 1 day ago

No? Thinking the world has gotten worse is classic old person chuntering from time immemorial.

smeeger 1 day ago

Thinking the world can only get better is another thing, too.

oguz-ismail 1 day ago

> quality of life likely has skyrocketed since

it hasn't

Cthulhu_ 1 day ago

Is that an objective truth or rose-tinted nostalgia speaking? (I wouldn't know, I wasn't alive then.)

azinman2 1 day ago

Depends on the specifics of your life.

As a gay man, I’m much happier in 2025.

msla 1 day ago

I survived cancer because of modern medical advances.

I'll take the world with Rituxan and CAR T-cell therapy, thank you.

yjftsjthsd-h 1 day ago

I mean... Sure? Go buy an actual VT* unit ( maybe https://www.ebay.com/itm/176698465415?_skw=vt+terminal&itmme... ?), get the necessary adaptors to plug into a computer, and run simh on it running your choice of *nix. I recommend https://jstn.tumblr.com/post/8692501831 as a reference. Once you have it working, shove the host machine behind a desk or otherwise out of sight, and you can live like it's 1980.

an-unknown 1 day ago

The only problem with real VTs is you have to be careful not to get one where the CRT has severe burn-in, like in the ebay listing. Sure, some VTs (like the VT240 or VT525) are a separate main box + CRT, but then you're missing the "VT aesthetics". The VT525 is probably the easiest one to get which also uses (old) standard interfaces like VGA for the monitor and PS/2 for the keyboard, so you don't need an original keyboard / CRT. At least for me, severe burn-in, insane prices, and general decay of some of the devices offered on ebay are the reason why I don't have a real VT (yet).

The alternative is to use a decent VT emulator attached to roughly any monitor. By "decent" I certainly don't mean projects like cool-retro-term, but rather something like this, which I started to develop some time ago and which I'm using as my main terminal emulator now: https://github.com/unknown-technologies/vt240

cbm-vic-20 1 day ago

There is firmware available online for some terminals; you could potentially get a lot more accuracy by emulating the actual firmware, but I'm sure a lot of that code gets into the guts of CRT timing cycles and other "real-world" difficulties. I'm not suggesting this would be easy to build out, just pointing out that it's available. While I haven't searched for the VT240 firmware, the firmware for the 8031AH CPU inside the VT420 (and a few other DEC terminals) is available on bitsavers. The VT240 has a T-11 processor, which is actually a PDP-11-on-a-chip.

an-unknown 1 day ago

Actually, I have the VT240 firmware ROM dumps; that's where I got the original font from. The problem is that the VT240, at least, is a rather sophisticated thing, with a T-11 CPU, an additional MCU, and a graphics accelerator chip. There is an extensive service manual available, with schematics and everything, but properly emulating the whole firmware plus all relevant peripherals is non-trivial and a significant amount of work. The result would then be a rather slow virtual terminal.

There is a basic and totally incomplete version of a VT240 in MAME though, which is good enough to test certain behavior, but it completely lacks the graphics part, so you can't use it to check graphics behavior like DRCS and so on.

EDIT: I also know for sure that there is a firmware emulation of the VT102 available somewhere.

kragen 7 hours ago

You can also just use the terminal despite the burn-in.

MobiusHorizons 1 day ago

Ha, I just bought a VT420 a couple of weeks ago, and I just finished a hacked-together converter that gets USB keyboards working well enough (in the last hour, actually). Next job is to connect it up as a login terminal for my FreeBSD machine.

icedchai 1 day ago

I love those old terminals! I remember using them during late nights in college...

unit149 1 day ago

Recovering RF tapes, even a simple text file, demonstrates buffer space that is not being used by the DOS or .iso file. Even in a 2.11BSD distro, a default tiling and window manager has to be installed on the native OS. So yes, going with KDE or the X11 wm.