simonw 7 days ago

I find it extremely useful as a research tool. It can talk to probably over 100 models at this point, providing a single interface to all of them and logging full details of prompts and responses to its SQLite database. This makes it fantastic for recording experiments with different models over time.
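Since the log is plain SQLite, past experiments stay queryable forever. Here's a toy sketch of the idea — the table name and columns below are invented for illustration, not llm's actual schema:

```python
import sqlite3

# Toy illustration of prompt/response logging; the table name and
# columns are made up for this sketch, not llm's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE runs (model TEXT, prompt TEXT, response TEXT)")
conn.execute(
    "INSERT INTO runs VALUES (?, ?, ?)",
    ("example-model", "Add type hints", "def f(x: int) -> int: ..."),
)
# Later: compare how different models answered the same prompt
rows = conn.execute(
    "SELECT model, response FROM runs WHERE prompt = ?",
    ("Add type hints",),
).fetchall()
print(rows)
```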

The ability to pipe files and other program outputs into an LLM is wildly useful. A few examples:

  llm -f code.py -s 'Add type hints' > code_typed.py
  git diff | llm -s 'write a commit message'
  llm -f https://raw.githubusercontent.com/BenjaminAster/CSS-Minecraft/refs/heads/main/main.css \
    -s 'explain all the tricks used by this CSS'
It can process images too! https://simonwillison.net/2024/Oct/29/llm-multi-modal/

  llm 'describe this photo' -a path/to/photo.jpg
LLM plugins can be a lot of fun. One of my favorites is llm-cmd which adds the ability to do things like this:

  llm install llm-cmd
  llm cmd ffmpeg convert video.mov to mp4
It proposes a command to run, you hit enter to run it. I use it for ffmpeg and similar tools all the time now. https://simonwillison.net/2024/Mar/26/llm-cmd/
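The propose-then-confirm pattern is simple enough to sketch — show the generated command, run it only if the user accepts. The function name and shape here are my own illustration, not llm-cmd's actual internals:

```python
import subprocess

def confirm_and_run(command: str, ask=input) -> bool:
    """Print a proposed shell command; run it only on a bare 'enter'.

    Hypothetical sketch of the llm-cmd interaction, not its real code.
    """
    answer = ask(f"Run? {command} ")
    if answer.strip() == "":
        subprocess.run(command, shell=True, check=False)
        return True
    return False
```

In llm-cmd the command text comes back from the model; hitting enter accepts it, anything else aborts.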

I'm getting a whole lot of coding done with LLM now too. Here's how I wrote one of my recent plugins:

  llm -m openai/o3 \
    -f https://raw.githubusercontent.com/simonw/llm-hacker-news/refs/heads/main/llm_hacker_news.py \
    -f https://raw.githubusercontent.com/simonw/tools/refs/heads/main/github-issue-to-markdown.html \
    -s 'Write a new fragments plugin in Python that registers issue:org/repo/123 which fetches that issue
      number from the specified github repo and uses the same markdown logic as the HTML page to turn that into a fragment'
I wrote about that one here: https://simonwillison.net/2025/Apr/20/llm-fragments-github/
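The heart of that plugin is just mapping the issue:org/repo/123 fragment argument onto the GitHub API. A hedged sketch of that parsing step — the function name is hypothetical, though the API URL shape is GitHub's documented one:

```python
def issue_fragment_url(argument: str) -> str:
    """Turn 'org/repo/123' (from an issue: fragment) into a GitHub API URL.

    Hypothetical helper for illustration; the real plugin's code differs.
    """
    org, repo, number = argument.split("/")
    if not number.isdigit():
        raise ValueError(f"expected an issue number, got {number!r}")
    return f"https://api.github.com/repos/{org}/{repo}/issues/{number}"

print(issue_fragment_url("simonw/llm/123"))
```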

LLM was also used recently in that "How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation" story - to help automate running hundreds of prompts: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-...

setheron 7 days ago

Wow, what a great overview; is there a big doc to see all these options? I'd love to try it -- I've been trying the `gh` copilot plugin but this looks more appealing.

simonw 7 days ago

I really need to put together a better tutorial - there's a TON of documentation but it's scattered across a bunch of different places:

- The official docs: https://llm.datasette.io/

- The workshop I gave at PyCon a few weeks ago: https://building-with-llms-pycon-2025.readthedocs.io/

- The "New releases of LLM" series on my blog: https://simonwillison.net/series/llm-releases/

- My "llm" tag, which has 195 posts now! https://simonwillison.net/tags/llm/

setheron 7 days ago

I use NixOS; seems like this got me enough to get started (I wanted Gemini):

  # AI cli
  (unstable.python3.withPackages (
    ps: with ps; [ llm llm-gemini llm-cmd ]
  ))

Looks like most of the plugins add models, and most of the functionality you demo'd in the parent comment is baked into the tool itself.

Yeah, a live document might be cool -- part of the interesting bit was seeing the "real" kinds of use cases you use it for.

Anyways, will give it a spin.

th0ma5 7 days ago

"LLM was used to find" is not what they did

> had I used o3 to find and fix the original vulnerability I would have, in theory [...]

They ran a scenario that they thought could have led to finding it, which is pretty much not what you said. We don't know how much their foreshadowing crept into their LLM context, and even the article says it was also partly chance. Please be more precise and don't give in to these false beliefs of productivity. Not yet, at least.

simonw 7 days ago

I said "LLM was also used recently in that..." which is entirely true. They used my LLM CLI tool as part of the work they described in that post.

th0ma5 7 days ago

Very fair. I expect others to confuse what you mean (the productivity of your tool called LLM) with the doubt that many have about the actual productivity of LLMs, the large language model concept.