I find it fascinating to introspect on how I formulate the question, and on what works better or worse for me personally.
I am content to use the AI to perform "menial" tasks: I had a text file that was parsable by field, with some minor quirks (like right-justified text), and was able to specify the field SEMANTICS in a way that made for a prompt yielding an ICS calendar file which imported fine as-is. Getting a year's forward planning from a textual note in some structure into Calendar -> Import -> From file was sweet. Do I need to train an AI to use a token/API key to do this directly? No. But thinking about how to say efficiently what the fields are, and what their boundaries are, helps me understand my data.
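For flavour, the conversion is roughly this kind of thing; a sketch only, with the fixed-width layout, column positions and sample lines all invented for illustration, not my actual data:

    from datetime import datetime, timezone

    # Sketch: each fixed-width line is "<right-justified date><title>";
    # the 12-character date column is invented for illustration.
    LINES = [
        "  2025-03-14  Quarterly planning review",
        "  2025-06-02  Renew domain registration",
    ]

    def to_vevent(line):
        date_s, title = line[:12].strip(), line[12:].strip()
        start = datetime.strptime(date_s, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        stamp = start.strftime("%Y%m%dT%H%M%SZ")  # ICS UTC ("Z") timestamp form
        return "\n".join([
            "BEGIN:VEVENT",
            f"UID:{stamp}-{abs(hash(title))}@example.invalid",  # good enough for a sketch
            f"DTSTAMP:{stamp}",
            f"DTSTART:{stamp}",
            f"SUMMARY:{title}",
            "END:VEVENT",
        ])

    ics = "\n".join(["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//sketch//EN"]
                    + [to_vevent(l) for l in LINES]
                    + ["END:VCALENDAR"])
    print(ics)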
BTW, while I have looked at an ICS file and can see it is type:value, I have no idea of the types, or what specific GMT/Z format it wants for date/time, or the distinctions of meaning for confirmed/pending or the like. These are higher-level constructs which seem to have produced usefully distinct behaviours in the calendar, and the AI's description of what it had done lined up with what I should expect. I did not, e.g., stipulate the mappings from semantic field to ICS type. I did say "this is a calendar date" and it did the rest.
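For anyone curious, the answers turn out to be fairly small: per RFC 5545, a UTC date/time is the compact YYYYMMDDTHHMMSSZ form (the trailing Z means UTC), and confirmed/pending-style states map onto a STATUS property whose values for events are CONFIRMED, TENTATIVE or CANCELLED. A minimal event looks like this (UID and summary invented for illustration):

    BEGIN:VEVENT
    UID:20250314T090000Z-planning@example.invalid
    DTSTAMP:20250314T090000Z
    DTSTART:20250314T090000Z
    STATUS:CONFIRMED
    SUMMARY:Quarterly planning review
    END:VEVENT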
I used AI to write a Django web app to do some trivial booking stuff. I did not expect the code to run as-is, but it did. Again, could I live with this product? Yes, but the extensibility worries me. Adding features, I am very conscious that one wrong prompt can turn this into... dreck. It's fragile.
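To give a sense of scale, the core of such a thing is not much more than a model like this; a sketch with invented field names, not the app the AI actually wrote:

    from django.db import models

    class Booking(models.Model):
        # Sketch of a trivial booking model; names invented for illustration.
        who = models.CharField(max_length=120)
        resource = models.CharField(max_length=120)
        start = models.DateTimeField()
        end = models.DateTimeField()

        class Meta:
            constraints = [
                # Prevent double-booking a resource at the same start time.
                models.UniqueConstraint(fields=["resource", "start"],
                                        name="uniq_resource_start"),
            ]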
My method is this: before I use AI, I try to ask myself "how much should I surrender my judgment on this problem?"
Some problems are too big to surrender judgment. Some problems are solved differently depending on what you want to optimize. Sometimes you want to learn something. Sometimes there's ethics.
Nice. I think I agree. The size of the problem isn't the same as "code complexity" or LOC or anything. If the consequences of deploying the wrong solution are big enough, even a one-line fix can be a disaster.
I like "surrender judgement": it's a loss of locus of control. I also find myself asking whether there are ways the AI systems "monetize" the nature of the problems being put forward for solutions. I am probably implicitly giving up some IPR by asking these questions; I could even be in breach of an NDA in some circumstances.
Some problems should not be put to an anonymous external service. I doubt the NSA wants people using Claude or Mistral or DeepSeek to solve NSA problems. Unless the goal is to feed misinformation or misdirection out into the world.