aunty_helen 8 days ago

Nah. This isn’t true. Every time you hit enter you’re not just getting a jr dev, you’re getting a randomly selected jr dev.

So, how did I end up with a logging.py, config.py, config in __init__.py and main.py? Well, I prompted it to fix the logging setup to use a specific format.
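For reference, the kind of change I actually wanted is roughly a one-liner (the format string here is just a placeholder, not my real one):

    import logging

    # One explicit root-logger setup with the desired format,
    # rather than spreading it across logging.py, config.py and __init__.py.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )

    logging.getLogger(__name__).info("hello")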

I use Cursor; it can spit out code at an amazing rate and reduces the amount of docs I need to read to get something done. But after its second attempt at something, you need to jump in, do it yourself, and most likely debug what was already written.

skydhash 8 days ago

Are you reading a whole encyclopedia each time you're assigned a task? The one thing about learning is that it compounds. You get faster the longer you use a specific technology. So unless you use a different platform for each task, I don't think you have to read that much documentation (understanding it is another matter).

achierius 8 days ago

This is an important distinction though. LLMs don't have any persistent 'state': they have their activations and their context, and that's it. They only know what was pre-trained and what's in their context. Now, their ability to do in-context learning is impressive, but you're fundamentally still stuck with the deviations and, eventually, the forgetting that characterize these guys -- whereas a human, though less quick on the uptake, will nevertheless 'bake in' the lessons in a way that LLMs currently cannot.

In some ways this is even more impressive -- with every prompt you make, your LLM is in effect re-reading (and re-comprehending) your whole codebase from scratch!
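To make that concrete, chat-style model APIs are stateless under the hood; here's a rough sketch (call_llm is a stand-in, not any particular vendor's SDK):

    from typing import Dict, List

    # Stand-in for a real model call; nothing persists on the model's
    # side between calls, which is the whole point.
    def call_llm(messages: List[Dict[str, str]]) -> str:
        return f"(reply after re-reading {len(messages)} prior messages)"

    history: List[Dict[str, str]] = []

    def ask(prompt: str) -> str:
        # The entire conversation, plus any code you want the model to
        # 'know', gets resent on every single turn.
        history.append({"role": "user", "content": prompt})
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        return reply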