It is 5x if you are already a senior SE knowing your programming language really well, constantly suggesting good architecture yourself ("seed files" is a brilliant idea), and not accepting any slop / asking to rewrite things if something is not up to your standards (of course, every piece of code should be reviewed).
Otherwise, it can be 0.2x in some cases. And you should not use LLMs for anything security-related unless you are a security expert; otherwise you are screwed.
(this is SOTA as of April 2025; I expect things to improve in the near future)
> It is 5x if you are already a senior SE knowing your programming language really well, constantly suggesting good architecture yourself ("seed files" is a brilliant idea), and not accepting any slop / asking to rewrite things if something is not up to your standards (of course, every piece of code should be reviewed).
If you know the programming language really well, that usually means you know which libraries are useful, have memorized common patterns, and have some project samples lying around. The actual speed improvement would be in typing the code, but that's usually the activity that takes the least time on any successful project. And unless you're a slow typist, I can't see 5x there.
If you're lacking in fundamentals, then it's just a skill issue, and I'd be suspicious of the result.
"Given this code, extract all entities and create the database schema from these", "write documentation for these methods", "write test examples", "write README.md explaining how to use scripts in this directory", "refactor everything in this directory just like this example", etc etc
Everything boring can be automated and it takes five seconds compared to half an hour.
It can only be automated if the only thing you care about is having the code/text, not making sure it is correct.
> Given this code, extract all entities and create the database schema from these
Sometimes, the best representation for storing and loading data is not the best for manipulating it and vice-versa. Directly mapping code entities to database relations (assuming it's SQL) is a sure way to land yourself in trouble later.
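To make that concrete, here is a hypothetical sketch (the `Order` entity and the schema are invented for illustration): the shape that is convenient to manipulate in code is not the shape you want to store.

```python
from dataclasses import dataclass, field

# Hypothetical domain entity, convenient for in-memory manipulation:
@dataclass
class Order:
    order_id: int
    customer_email: str
    # A nested list is natural in code...
    line_items: list[dict] = field(default_factory=list)  # [{"sku": ..., "qty": ...}, ...]
    # ...and so is keeping a derived convenience value around.
    total_cents: int = 0  # derived from line_items

# A naive 1:1 mapping would stuff line_items into a text/JSON column and
# persist the derived total, inviting inconsistency. The storage-friendly
# shape is different: normalized tables, with the total computed at query time.
NORMALIZED_SCHEMA = """
CREATE TABLE orders (
    order_id       INTEGER PRIMARY KEY,
    customer_email TEXT NOT NULL
);
CREATE TABLE order_line_items (
    order_id INTEGER NOT NULL REFERENCES orders(order_id),
    sku      TEXT NOT NULL,
    qty      INTEGER NOT NULL CHECK (qty > 0),
    PRIMARY KEY (order_id, sku)
);
-- total is a SUM() over line items, not a stored field on the entity.
"""
```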
> write documentation for these methods
The intent of documentation is to explain how to use something and the various whys behind an implementation. Listing what is there can already be done with a symbol explorer. Repeating what is obvious from the name of the function is not helpful, and hallucinating something that is not there is harmful.
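For illustration, a hypothetical `retry` helper (name and behavior invented): the part a tool can regurgitate from the code versus the part only the author can supply.

```python
# Hypothetical helper; names, defaults, and behavior are illustrative only.

def retry(fn, attempts=3):
    # A doc comment that merely restates the name helps nobody:
    #   """Retry fn."""
    # A useful one states the contract and the "why" behind it:
    """Call `fn` until it returns without raising, at most `attempts` times.

    Intended for idempotent operations only: a call that fails *after*
    producing a side effect will be repeated. Re-raises the last
    exception when every attempt fails. The default of 3 is a judgment
    call that cannot be inferred from the code; only the author knows why.
    """
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # deliberately broad in this sketch
            last_exc = exc
    raise last_exc
```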
> write test examples
Again, the type of tests matters more than their number. So unless you're sure that the test is correct and that the test suite really ensures the code is viable, it's all for naught.
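A hypothetical example (the function and both tests are invented): each test "exercises" the code, but only one of them actually ensures anything.

```python
import unittest

# Hypothetical function under test, for illustration only.
def normalize_email(raw: str) -> str:
    return raw.strip().lower()

class WeakTest(unittest.TestCase):
    def test_runs(self):
        # Coverage without a claim: this passes for almost any implementation.
        normalize_email("a@b.com")

class MeaningfulTest(unittest.TestCase):
    def test_strips_and_lowercases(self):
        self.assertEqual(normalize_email("  Alice@Example.COM "), "alice@example.com")

    def test_rejects_non_string(self):
        # Pins down behavior a caller actually relies on.
        with self.assertRaises(AttributeError):
            normalize_email(None)

if __name__ == "__main__":
    unittest.main()
```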
...
Your use cases assume that the output is correct. And as the hallucination risk from LLMs is non-zero, that assumption is harmful.
Well, of course I check the output and correct it as needed. It is still much faster than writing it myself. And less boring.
As for the documentation part: I infer that you haven't used state-of-the-art models, have you? They do not write symbol docs mechanistically. They understand what the code is _doing_. Up to their context limits, which are now 128k tokens for most models. Feed them 128k tokens of code and more often than not they will understand what it is about. In seconds (compared to hours for humans).
> They do not write symbol docs mechanistically. They understand what the code is _doing_.
What the code is doing is important only when you intend to modify it. Normally, what's important is how to use it. That's the whole point of design: presenting an API that hides how things happen in favor of making it easier (natural) to do something. The documentation should focus on that abstract design and its relation to the API. The concrete implementation rarely matters if you're on the other side of the API.
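A hypothetical sketch of what I mean (the `SessionStore` API is invented): the documentation worth writing states the contract the caller relies on, not the internals.

```python
# Hypothetical API, for illustration only: the doc describes the contract a
# caller depends on, not whatever data structure or service sits behind it.

class SessionStore:
    def put(self, session_id: str, data: dict, ttl_seconds: int = 3600) -> None:
        """Store `data` under `session_id` for at most `ttl_seconds`.

        Overwrites any existing entry. Callers must treat the store as
        lossy: an entry may disappear before its TTL expires, so keep
        anything you cannot recompute somewhere durable. Whether this is
        backed by a dict, Redis, or a database is deliberately not documented.
        """
        raise NotImplementedError  # implementation intentionally out of scope
```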