kristopolous 7 days ago

It's worth noting the streaming markdown renderer I wrote just for this tool: https://github.com/day50-dev/Streamdown

More background: https://github.com/simonw/llm/issues/12

(Also check out https://github.com/day50-dev/llmehelp which features a tmux tool I built on top of Simon's llm. I use it every day. Really. It's become indispensable)

kristopolous 7 days ago

Also, I forgot to mention one other tool built on llm.

This one is a ZSH plugin that uses zle to translate your English to shell commands with a keystroke.

https://github.com/day50-dev/Zummoner
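
Under the hood the trick is just a zle widget bound to a keystroke. Roughly this shape (not the actual Zummoner code; the widget name and prompt are placeholders):

    # hypothetical sketch: replace the current buffer with an LLM-written command
    english-to-shell() {
      BUFFER=$(llm "Reply with only a zsh command, no prose, for: $BUFFER")
      zle end-of-line
    }
    zle -N english-to-shell
    bindkey '^Xx' english-to-shell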

It's been life changing for me. Here's one I wrote today:

    $ git find out if abcdefg is a descendent of hijklmnop 
In fact, I used it in one of these comments:

    $ for i in $(seq 1 6); do 
      printf "%${i}sh${i}\n\n-----\n" | tr " " "#"; 
    done | pv -bqL 30 
It was originally:

    $ for i in $(seq 1 6); do 
      printf "(# $i times)\n\n-----\n"
    done | pv (30 bps and quietly)
I did my trusty ctrl-x x, the buffer got sent off through OpenRouter, and it came back swapped with the proper syntax in under a second.
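
For that git example, the command it comes back with is something like this (the exact output depends on the model):

    # exits 0 if hijklmnop is an ancestor of abcdefg
    git merge-base --is-ancestor hijklmnop abcdefg && echo yes || echo no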

kazinator 7 days ago

The brace expansion syntax in Bash and Zsh expands integer ranges: {1..6}; no need to call out to an external command.

It's also intelligent about inferring leading zeros without needing to be told with options, e.g. {001..995}.
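
For example:

    $ echo {1..6}
    1 2 3 4 5 6
    $ echo {001..003}
    001 002 003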

vicek22 7 days ago

This is fantastic! Thank you for that.

I use fish, but the language change is straightforward https://github.com/viktomas/dotfiles/blob/master/fish/.confi...

I'll use this daily

CGamesPlay 7 days ago

I built a similar one to this one: https://github.com/CGamesPlay/llm-cmd-comp

Looks from the demo like mine's a little less automatic and more iterative than yours.

kristopolous 7 days ago

Interesting! I like it!

The conversational context is nice. The ongoing command building is convenient and the # syntax carryover makes a lot of sense!

My next step is recursion and composability. I want to be able to do things contextualized. Stuff like this:

   $ echo PUBLIC_KEY=(( get the users public key pertaining to the private key for this repo )) >> .env
or some other contextually complex thing that is actually fairly simple, just tedious to code. Then I want that <as the code>, so people can collectively program and revise stuff <at that level, as the language>.
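
To make that concrete: each (( ... )) block is essentially one llm call. A rough sketch of a one-level expansion (the prompt wording and the eval are placeholders, not a real design):

    # sketch: resolve one (( ... )) block by asking llm for a command, then running it
    block="get the users public key pertaining to the private key for this repo"
    cmd=$(llm "Reply with only a shell command, no prose: $block")
    echo "PUBLIC_KEY=$(eval "$cmd")" >> .env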

Then you can do this through composability like so:

    with ((find the variable store for this repo by looking in the .gitignore)) as m:
      ((write in the format of m))SSH_PUBLICKEY=(( get the users public key pertaining to the private key for this repo ))
or even recursively:

    (( 
      (( 
        ((rsync, rclone, or similar)) with compression 
      ))  
        $HOME exclude ((find directories with secrets))         
        ((read the backup.md and find the server)) 
        ((make sure it goes to the right path))
    ));
It's not a fully formed syntax yet, but eventually people will be able to do something like:

    $ llm-compile --format terraform --context my_infra script.llm > some_code.tf
and compile publicly shared snippets against their own context: abstract infra management at a fraction of the complexity.

It's basically GCC's RTL but for LLMs.

The point of this approach is that your building blocks remain atomic, simple, dumb things that even a 1B model can reliably handle - kinda like the guarantee of the RTL.

Then if you want to move from terraform to opentofu or whatever, who cares ... your stuff is in the llm metalanguage ... it's just a different compile target.

It's kinda like PHP. You just go along like normal and occasionally break form for the special metalanguage whenever you hit a point of contextual variance.

rglynn 7 days ago

Ah, this is great. In combo with something like superwhisper you can use voice for longer queries.

rcarmo 6 days ago

Okay, this is very cool.

simonw 7 days ago

Wow, that library is looking really great!

I think I want a plugin hook that lets plugins take over the display of content by the tool.

Just filed an issue: https://github.com/simonw/llm/issues/1112

Would love to get your feedback on it, I included a few design options but none of them feel 100% right to me yet.

kristopolous 7 days ago

The real solution is semantic routing. You want to be able to define routing rules based on something like mdast (https://github.com/syntax-tree/mdast). I've built a few hacked versions. This would not only allow for things like terminal rendering but would also be a great complement to tool calling. Being able to siphon and multiplex inputs matters for a future where Cerebras-like speeds become more common; dynamic, configurable stream routing will unlock quite a few more use cases.
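
For a degenerate illustration of routing on markdown structure (real mdast-based routing would be tree-aware; this only captures the flavor):

    # toggle on fences; route code lines one way, prose another
    awk '/^```/ { incode = !incode; next }
         { print > (incode ? "code.md" : "prose.md") }' stream.md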

We have cost, latency, context window and model routing but I haven't seen anything semantic yet. Someone's going to do it, might as well be me.

rpeden 7 days ago

Neat! I've written streaming Markdown renderers in a couple of languages for quickly displaying streaming LLM output. Nice to see I'm not the only one! :)

kristopolous 7 days ago

It's a wildly nontrivial problem if you're trying to be forward-moving only and want to minimize your buffer.

That's why everybody else either rerenders (such as rich) or relies on the whole buffer (such as glow).

I didn't write Streamdown for fun - there were genuinely no suitable tools that did what I needed.

Also, various models have various ideas of what markdown should be, and coding against CommonMark doesn't get you there.

Then there are other things. You have to check individual character widths and the language family type to do proper word wrap. I've seen a number of interesting tmux and alacritty bugs while doing multi-language support.

The only real break I make is rendering h6 (######) as muted grey.

Compare:

    for i in $(seq 1 6); do 
      printf "%${i}sh${i}\n\n-----\n" | tr " " "#"; 
    done | pv -bqL 30 | sd -w 30
to swapping out `sd` with `glow`. You'll see glow's lag - waiting for that EOF is annoying.
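
That is, the same pipeline with the renderer swapped (glow reads stdin via -):

    for i in $(seq 1 6); do 
      printf "%${i}sh${i}\n\n-----\n" | tr " " "#"; 
    done | pv -bqL 30 | glow -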

Also try sd -b 0.4 or even -b 0.7,0.8,0.8 for a nice blue. It's a bit easier to configure than the usual catalog of themes that requires recompilation after modification, like with pygments.

icarito 7 days ago

That's right, this is a nontrivial problem that I struggled with too, for gtk-llm-chat! I resolved it using the streaming markdown-it-py library.

kristopolous 7 days ago

Huh, this might be another approach with a bit of effort. Thanks for that; I didn't know about this.

hanatanaka1984 7 days ago

Interesting, I will be sure to check into this. I have been using llm and bat with syntax highlighting.

kristopolous 7 days ago

Do you just do `| bat --language=markdown --force-colorization`?

hanatanaka1984 7 days ago

A simple bash script provides quick command-line access to the tool. Output is paged, syntax-highlighted markdown.

  echo "$@" | llm "Provide a brief response to the question, if the question is related to command provide the command and short description" | bat --plain -l md
Launch it as:

  llmquick "why is the sky blue?"

kristopolous 7 days ago

I've got a nice tool as well

https://github.com/day50-dev/llmehelp/blob/main/Snoopers/wtf

I've thought about redoing it because my needs are things like

   $ ls | wtf which endpoints do these things talk to, give me a map and line numbers. 
What this will eventually be is "ai-grep" built transparently on https://ast-grep.github.io/ where the llm writes the complicated query (these coding agents all seem to use ripgrep, but this works better).
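
For instance, instead of a regex the model would emit a structural query; ast-grep's -p/--pattern and -l/--lang flags, with a pattern that's just an example:

    # find every fetch() call site structurally, not textually
    ast-grep -p 'fetch($URL)' -l ts src/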

Conceptual grep is what I've wanted my whole life.

Semantic routing, which I alluded to above, could get this to work progressively, so you quickly get adequate results that then Pareto their way up as the token count increases.

Really you'd like some tempering, like coreutils timeout(1) but for simplex optimization.

johnisgood 7 days ago

> DO NOT include the file name. Again, DO NOT INCLUDE THE FILE NAME.

Lmao. Does it work? I hate that it needs to be repeated (in general). ChatGPT couldn't care less about following my instructions; through the API it probably would?

hanatanaka1984 7 days ago

| bat -p -l md

Simple, and it works well.

nbbaier 7 days ago

Ohh I've wanted this so much! Thank you!