Hey HN! It’s Eric from Firecrawl (https://firecrawl.dev).
I just launched llms.txt Generator, a tool that transforms any website into a clean, structured text file optimized for feeding to LLMs. You can learn more about the standard at https://llmstxt.org.
Here’s how it works under the hood:
1. We use Firecrawl, our open-source scraper, to fetch the full site, handling JavaScript-heavy pages and complex structures.

2. The markdown content is parsed, and the title and description of each page are extracted using GPT-4o-mini.

3. Everything is combined into a lightweight llms.txt file that you can paste into any LLM.
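If you're curious what those three steps look like in code, here's a minimal sketch using the Firecrawl Python SDK and the OpenAI client. The crawl parameters and response shape (crawl["data"], page["markdown"], page["metadata"]["url"]) are my assumptions from memory, not the generator's actual source, so check the SDK docs for your version:

    # pip install firecrawl-py openai
    from firecrawl import FirecrawlApp
    from openai import OpenAI

    firecrawl = FirecrawlApp(api_key="fc-...")  # your Firecrawl API key
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 1. Crawl the full site to markdown (Firecrawl renders JS-heavy pages)
    # NOTE: params and response shape are assumed, not confirmed
    crawl = firecrawl.crawl_url(
        "https://example.com",
        params={"scrapeOptions": {"formats": ["markdown"]}},
    )

    entries = []
    for page in crawl["data"]:
        # 2. Extract a short title + one-line description with a small model
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": "Give a short title and one-sentence description "
                           "for this page:\n\n" + page["markdown"][:4000],
            }],
        )
        url = page["metadata"]["url"]
        entries.append(f"- {url}: {resp.choices[0].message.content}")

    # 3. Combine everything into a lightweight llms.txt
    with open("llms.txt", "w") as f:
        f.write("\n".join(entries))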
Let me know what you think!
For a simple solution, you can just right-click -> "Save Page As...", and upload the resulting `.html` file into Claude/ChatGPT as an attachment. They're both more than capable of parsing the article content from the HTML without needing any pre-processing.
I like the idea, but Firecrawl and GPT-4o are quite heavy. I use https://github.com/unclecode/crawl4ai in some projects; it works very well without those dependencies and is modular, so you can use LLMs but don't have to :)
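For reference, the no-LLM path in crawl4ai looks roughly like this (based on its README's AsyncWebCrawler example; treat the details as approximate):

    # pip install crawl4ai
    import asyncio
    from crawl4ai import AsyncWebCrawler

    async def main():
        async with AsyncWebCrawler() as crawler:
            # Fetches the page and converts it to markdown, no LLM involved
            result = await crawler.arun(url="https://example.com")
            print(result.markdown)

    asyncio.run(main())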
Thanks for facilitating even more widespread and frictionless copyright violations
Wait till you hear about copy/paste