petesergeant 2 days ago

That's neat and a similar idea. I think JSON probably ends up being too expressive (not just an array of identically shaped, shallow objects), too restrictive (too few useful primitives), and also too verbose a format, but the idea of a wrapping command like that as a starting point is neat.

ramses0 2 days ago

I'll share this comment from 7 months ago with you:

https://news.ycombinator.com/item?id=40100069

"prefer shallow arrays of 'records', possibly with a deeply nested 'uri'-style identifier"

...the clutch result is: "it can be loaded into a database and treated as a table".

The origin of this technique for me was something someone said back in the 2000-ish timeframe (effectively modernized here):

    sqlite-utils insert example.db ls_lart <( jc ls -lart )
    sqlite3 example.db --json \
      "SELECT COUNT(*) AS c, flags FROM ls_lart GROUP BY flags"
    [
      {
        "c": 9,
        "flags": "-rw-r--r--"
      },
      {
        "c": 2,
        "flags": "drwxr-xr-x"
      }
    ]
...this is a 'trivial' example, but it puts a really fine point on the capabilities it unlocks. You're not restricted to building a single pipeline; you can use full relational queries (eg: `... WHERE date > ...`, `... LEFT JOIN files ON git_status...`), and you can refer to things by column name rather than weird regexes or `awk` scripts.
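The same "records → table → query" flow can be sketched without `jc` or `sqlite-utils` at all, using Python's stdlib `sqlite3` and an in-memory database. The record list below is hypothetical sample data standing in for `jc ls -l` output, not the real tool's schema:

```python
import sqlite3

# Shallow array of identically-shaped records, the shape jc aims for
# (sample data, not actual jc output)
rows = [
    {"filename": "a.txt", "flags": "-rw-r--r--", "size": 120},
    {"filename": "b.txt", "flags": "-rw-r--r--", "size": 300},
    {"filename": "src",   "flags": "drwxr-xr-x", "size": 4096},
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ls_lart (filename TEXT, flags TEXT, size INTEGER)")
db.executemany("INSERT INTO ls_lart VALUES (:filename, :flags, :size)", rows)

# Same GROUP BY as the sqlite3 CLI example above
for c, flags in db.execute(
    "SELECT COUNT(*) AS c, flags FROM ls_lart GROUP BY flags ORDER BY c DESC"
):
    print(c, flags)  # 2 -rw-r--r-- / 1 drwxr-xr-x
```

Once the records are in a table, any relational operator is one query away, which is the whole point of keeping the JSON shallow and uniform.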

This particular example is "dumb" (but ayyyy, I didn't get a UUOC cat award!) in that you can easily muddle through it in different (existing pipeline) ways, but SQL crushes the primitive POSIX relational tooling (so old, ugly, and unused it's tough to find!), eg: `comm`, `paste`, `uniq`, `awk`.
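To make the `comm` comparison concrete: `comm -23` (lines only in the first file) and `comm -12` (lines in both) map directly onto SQL's `EXCEPT` and `INTERSECT`, and SQL doesn't even demand pre-sorted input. A minimal sketch with stdlib `sqlite3` and made-up file lists:

```python
import sqlite3

# Two "files" as line lists (sample data); comm(1) would need both sorted first
old = ["a.txt", "b.txt", "c.txt"]
new = ["b.txt", "c.txt", "d.txt"]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE old (name TEXT)")
db.execute("CREATE TABLE new (name TEXT)")
db.executemany("INSERT INTO old VALUES (?)", [(n,) for n in old])
db.executemany("INSERT INTO new VALUES (?)", [(n,) for n in new])

# comm -23 old new  ->  lines only in old
only_old = [r[0] for r in db.execute(
    "SELECT name FROM old EXCEPT SELECT name FROM new ORDER BY name")]
# comm -12 old new  ->  lines common to both
common = [r[0] for r in db.execute(
    "SELECT name FROM old INTERSECT SELECT name FROM new ORDER BY name")]

print(only_old)  # ['a.txt']
print(common)    # ['b.txt', 'c.txt']
```

And unlike `comm`, the same two tables are already positioned for `LEFT JOIN`s, `WHERE` filters, and aggregates without another pass through the pipeline.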