wavemode 4 days ago

This is the kind of bug unit tests don't catch. The only way to exhaustively verify that you're handling every possible input is to loop over every possible input (a space which may tend to infinity).

Therein lies the importance of runtime assertions (so we can sanity-check that parsing actually succeeded rather than silently failing) and monitoring (so we can sanity-check that, for example, we never go 24 hours without receiving data from the parsing job).
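A minimal sketch of both ideas in Python (the log format, field names, and 24-hour threshold here are hypothetical, just to make the pattern concrete):

```python
import datetime

def parse_log_line(line: str) -> dict:
    # Hypothetical log format: "2024-01-15T10:30:00 LEVEL message..."
    timestamp_str, level, message = line.split(" ", 2)
    record = {
        "timestamp": datetime.datetime.fromisoformat(timestamp_str),
        "level": level,
        "message": message,
    }
    # Runtime assertion: sanity-check that parsing produced something
    # plausible, rather than silently misparsing.
    assert record["level"] in {"DEBUG", "INFO", "WARN", "ERROR"}, record
    return record

def is_fresh(last_seen: datetime.datetime, now: datetime.datetime) -> bool:
    # Monitoring-style check: flag when no data has arrived for 24 hours.
    return (now - last_seen) < datetime.timedelta(hours=24)
```

The assertion fires at runtime on real inputs, so it catches the malformed lines no test suite anticipated; the freshness check catches the job dying silently.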

3
kccqzy 4 days ago

> to loop over every possible input (which may tend to infinity).

This attitude is defeatist. The success of property-based testing (see QuickCheck in Haskell or Hypothesis in Python), especially when combined with fuzzing, shows that instead of looping over every possible input, looping over thousands of inputs tends to be good enough in practice to catch bugs.

Throwing out infinity as a cop-out is a lazy argument made by people who don't understand infinity, or rather, countable infinity. Everything we model on a computer is at most countably infinite. When we have multiple such countably infinite sets, the standard dovetailing construction guarantees that their union is countable; their Cartesian product is also countable. You can always obtain an interesting finite prefix of such an infinite set for testing purposes.
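To make the "thousands of inputs" point concrete, here is the core of a property-based test hand-rolled with the stdlib (in practice you would reach for Hypothesis or QuickCheck, which also shrink failing cases; the toy `escape`/`unescape` pair is invented for illustration):

```python
import random
import string

def escape(s: str) -> str:
    # Toy escaper: backslash-escape the separator and the escape char itself.
    return s.replace("\\", "\\\\").replace(",", "\\,")

def unescape(s: str) -> str:
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\":
            i += 1  # skip the escape character, keep what follows
        out.append(s[i])
        i += 1
    return "".join(out)

def check_roundtrip(trials: int = 2000, seed: int = 0) -> None:
    # The property: unescape(escape(s)) == s for every string s.
    # We can't loop over every string, but a few thousand random ones
    # drawn from a nasty alphabet catch most bugs in practice.
    rng = random.Random(seed)
    alphabet = string.ascii_letters + ",\\"
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(20)))
        assert unescape(escape(s)) == s, repr(s)
```

The generator deliberately over-samples the separator and escape characters, since that is where the edge cases live.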

wavemode 4 days ago

Your tone implies to me that you are under the impression that I'm suggesting one should not test their software. Nothing could be further from the truth.

What I'm saying is that it's foolish not to take any measures at runtime to validate that the system is behaving correctly.

Who's to say that the logs themselves are even formatted correctly? Your software could be perfectly bug-free and you'd still have problems without knowing it, due to bugs in some other person's software. That's the point you're missing - no matter how many edge cases you account for, there's always another edge case.

kccqzy 3 days ago

Oh no, not at all. I didn't imply that you are suggesting one shouldn't test their software. Instead, I believe you have an overly narrow view of what tests can accomplish.

I didn't say anything about measures at runtime to validate things. That's complementary to good tests.

lionkor 3 days ago

No, there are 12 months; you can exhaustively test all 12 cases.
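For a finite input space like this, the exhaustive loop really is practical. A sketch using the stdlib's month-abbreviation parsing (note `%b` is locale-dependent; this assumes the usual C/English locale):

```python
import datetime

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

def test_all_months_parse() -> None:
    # Only 12 possible inputs, so loop over every one of them.
    for i, name in enumerate(MONTHS, start=1):
        parsed = datetime.datetime.strptime(name, "%b")
        assert parsed.month == i, name
```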

imtringued 4 days ago

It's called fuzzing.

wavemode 4 days ago

Fuzzing does not catch all bugs. And even if it did, your software can still misbehave even when all logical bugs are eliminated. Say, for example, the program parses correctly but uses a large amount of RAM on certain inputs, causing occasional crashes and restarts in production. Or say your program behaves perfectly but there's an occasional bug in the date formatting of the logs themselves.

So yeah, you need monitoring and assertions. A decent coverage of unit tests is good, but I wouldn't bother investing in some sort of advanced fuzzing or QuickCheck system. In my experience the juice isn't worth the squeeze.

senderista 3 days ago

IME PBT is complementary to assertions: PBT probes the space of inputs and your assertions find inputs that make your code violate invariants.

When I was writing a nontrivial data structure library I was amazed (and humbled) by how many bugs were caught by PBT (again, combined with copious assertions) but not by my unit tests (which tried to cover all the "obvious" edge cases).