yodsanklai 4 days ago

You're not supposed to trust the tool, you're supposed to review and rework the code before submitting for external review.

I use AI for rather complex tasks. It's impressive. It can make a bunch of non-trivial changes to several files, and have the code compile without warnings. But I need to iterate a few times so that the code looks like what I want.

That being said, I also lose time pretty regularly. There's a learning curve, and the tool would be much more useful if it was faster. It takes a few minutes to make changes, and there may be several iterations.

ryandrake 4 days ago

> You're not supposed to trust the tool, you're supposed to review and rework the code before submitting for external review.

It sounds like the guys in this article should not have trusted AI to go fully open loop on their customer support system. That should be well understood by all "customers" of AI. You can't trust it to do anything correctly without human feedback/review and human quality control.

schmichael 4 days ago

> You're not supposed to trust the tool

This is just an incredible statement. I can't think of another development tool we'd say this about. I'm not saying you're wrong, or that it's wrong to have tools we can't trust, just... wow... what a sea change.

ModernMech 4 days ago

Imagine! Imagine if 0.05% of the time gcc just injected random code into your binaries. Imagine, you swing a hammer and 1% of the time it just phases into the wall. Tools are supposed to be reliable.

arvinsim 4 days ago

There are no existing AI tools that guarantee correct code 100% of the time.

If there is such a tool, programmers will be on a path to immediate reskilling, or will lose their jobs very quickly.

ryandrake 4 days ago

Imagine if your compiler just randomly and non-deterministically compiled valid code to incorrect binaries, and the tool's developer couldn't really tell you why it happens, how often it was expected to happen, how severe the problem was expected to be, and told you to just not trust your compiler to create correct machine code.

Imagine if your calculator app randomly and non-deterministically performed arithmetic incorrectly, and you similarly couldn't get correctness expectations from the developer.

Imagine if any of your communication tools randomly and non-deterministically translated your messages into gibberish...

I think we'd all throw away such tools, but we are expected to accept it if it's an "AI tool?"

andrei_says_ 4 days ago

Imagine that you yourself never use these tools directly but your employees do. And the sellers of said tools swear that the tools are amazing and correct and will save you millions.

They keep telling you that any employee who highlights problems with the tools is just trying to save their job.

Your investors tell you that the toolmakers are already saving money for your competitors.

Now, do you want that second house and white lotus vacation or not?

Making good tools is difficult. Bending perception ("perception is reality") is easier, and enterprise sales, just like good propaganda, works. The gold rush will leave a lot of bodies behind, but the shovelmakers will make a killing.

ModernMech 4 days ago

I feel like there's a lot of motivated reasoning going on, yeah.

arvinsim 4 days ago

If you think of AI like a compiler, yes we should throw away such tools, because we expect correctness and deterministic outcomes.

If you think of AI like a programmer, no we shouldn't throw away such tools because we accept them as imperfect and we still need to review.

bigstrat2003 4 days ago

> If you think of AI like a programmer, no we shouldn't throw away such tools because we accept them as imperfect and we still need to review.

This is a common argument but I don't think it holds up. A human learns. If one of my teammates or I make a mistake, when we realize it we learn not to make that mistake in the future. These AI tools don't do that. You could use a model for a year, and it'll be just as unreliable as it is today. The fact that they can't learn makes them a nonstarter compared to humans.

ToValueFunfetti 4 days ago

If the only calculators that existed failed at 5% of the calculations, or if the only communication tools miscommunicated 5% of the time, we would still use both all the time. They would be far less than 95% as useful as perfect versions, but drastically better than not having the tools at all.

gitremote 4 days ago

Absolutely not. We'd just do the calculations by hand, which is better than running the 95%-correct calculator and then doing the calculations by hand anyway to verify its output.

ToValueFunfetti 4 days ago

Suppose you work in a field where getting calculations right is critical. Your engineers make mistakes less than .01% of the time, but they do a lot of calculations and each mistake could cost $millions or lives. Double- and triple-checking help a lot, but they're costly. Here's a machine that verifies 95% of calculations, but you'd still have to do 5% of the work. Shall I throw it away?

Unreliable tools have a good deal of utility. That's an example of them helping reduce the problem space, but they can also be useful in situations where having a 95% confidence guess now matters more than a 99.99% confidence one in ten minutes: firing mortars in active combat, say.

There are situations where validation is easier than computation; canonically this is factoring, but even multiplication is much simpler than division. It could very easily save you time to multiply each of the calculator's outputs by the divisor, even though you end up doing both a multiplication and a division for the 5% that are wrong.
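
To make that concrete, a rough sketch in Python (unreliable_divide is a made-up stand-in for the flaky tool, and this assumes exact integer quotients):

    # Verify an unreliable divider by multiplying back: the check (one
    # multiplication per answer) is much cheaper than redoing every division.
    def find_suspect_divisions(problems, unreliable_divide):
        """problems: list of (dividend, divisor) pairs with exact quotients."""
        suspects = []
        for dividend, divisor in problems:
            quotient = unreliable_divide(dividend, divisor)  # right ~95% of the time
            if quotient * divisor != dividend:               # cheap validation
                suspects.append((dividend, divisor))         # redo only these by hand
        return suspects

    # e.g. find_suspect_divisions([(34, 17), (91, 7)], lambda a, b: a // b)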

edit: I submit this comment and click to go to the front page, and right at the top is Unsure Calculator (no relevance). Sorry, I had to mention this.

diputsmonro 4 days ago

> Here's a machine that verifies 95% of calculations, but you'd still have to do 5% of the work.

The problem is that you don't know which 5% are wrong. The AI is confidently wrong all the time. So the only way to be sure is to double check everything, and at some point it's easier to just do it the right way.

Sure, some things don't need to be perfect. But how much do you really want to risk? This company thought a little bit of potential misinformation was acceptable, and so it caused a completely self-inflicted PR scandal, pissed off their customer base, and lost them a lot of confidence and revenue. Was that 5% error worth it?

Stories like this are going to keep coming the more we rely on AI to do things humans should be doing.

Someday you'll be affected by the fallout of some system failing because you happen to wind up in the 5% failure gap that some manager thought was acceptable (if that manager even ran a calculation and didn't just blindly trust whatever some other AI system told them). I just hope it's something as trivial as an IDE and not something in your car, your bank, or your hospital. But certainly LLMs will be irresponsibly shoved into all three within the next few years, if they're not there already.

ToValueFunfetti 3 days ago

>The problem is that you don't know which 5% are wrong

This is not a problem in my unreliable calculator use-cases; are you disputing that or dropping the analogy?

Because I'd love to drop the analogy. You mention IDEs- I routinely use IntelliJ's tab completion, despite it being wrong >>5% of the time. I have to manually verify every suggestion. Sometimes I use it and then edit the final term of a nested object access. Sometimes I use the completion by mistake, clean up with backspace instead of undo, and wind up submitting a PR that adds an unused dependency. I consider it indispensable to my flow anyway. Maybe others turn this off?

You mention hospitals. Hospitals run loads of expensive tests every day with a greater than 5% false positive and false negative rate. Sometimes these results mean a benign patient undergoes invasive further testing. Sometimes a patient with cancer gets told they're fine and sent home. Hospitals continue to run these tests, presumably because having a 20x increase in specificity is helpful to doctors, even if it's unreliable. Or maybe they're just trying to get more money out of us?

Since we're talking LLMs again, it's worth noting that 95% is an underestimate of my hit rate. 4o writes code that works more reliably than my coworker does, and it writes more readable code 100% of the time. My coworker is net positive for the team. His 2% mistake rate is not enough to counter the advantage of having someone there to do the work.

An LLM with a 100% hit rate would be phenomenal. It would save my company my entire salary. A 99% one is way worse; they still have to pay me to use it. But I find a use for the 99% LLM more-or-less every day.

gitremote 3 days ago

> This is not a problem in my unreliable calculator use-cases; are you disputing that or dropping the analogy?

If you use an unreliable calculator to sum a list of numbers, you then need to use a reliable method to sum the numbers to validate that the unreliable calculator's sum is correct or incorrect.

ToValueFunfetti 3 days ago

Yes, so in my first example in the GP, this happens first. Humans do the work. The calculator double checks and gives me a list of all errors plus 5% of the non-errors, and I only need to double check that list.

In my third example, the calculator does the hard work of dividing, and humans can validate by the simpler task of multiplication, only having to do extra work 5% of the time.

(In my second, the unreliablity is a trade-off against speed, and we need the speed more.)

In all cases, we benefit from the unreliable tool despite not knowing when it is unreliable.

gitremote 3 days ago

In your first example, you appear to assume that for calculations where "each mistake could cost $millions or lives", engineers who calculated by hand typically didn't double-check by redoing the calculation, so a second check with a 95% accuracy tool is better than nothing. This assumption is false. I suggest you watch the 2016 film Hidden Figures to understand the level of safety at NASA when calculations were done by hand. You are suggesting lowering safety standards, not increasing them.

Your third example is unclear. No calculators can perform factoring of large numbers, because that is the expected ability of future quantum computers that can break RSA encryption. It is also unclear why multiplication and division have different difficulties, when dividing by n is equal to multiplying by 1/n.

ToValueFunfetti 3 days ago

>you appear to assume that for calculations where "each mistake could cost $millions or lives", engineers who calculated by hand typically didn't double-check by redoing the calculation

Not at all! For any n extra checks, having an (n+1)th phase that takes a 20th of the effort is beneficial. I did include triple-checks to gesture at this.

>It is also unclear why multiplication and division have different difficulties, when dividing by n is equal to multiplying by 1/n.

This actually fascinates me. Computers and humans both take longer to divide than to multiply (in computers, by roughly an order of magnitude!). I'm not really sure why this is, in a fundamental information-theory kind of way, but it being true in humans is sufficient to make my point.

To address your specific criticism: you haven't factored out the division there, you've just changed the numerator to 1. I'd much rather do 34/17 in my head than 34 * (1/17).

Tainnor 1 day ago

> It is also unclear why multiplication and division have different difficulties, when dividing by n is equal to multiplying by 1/n.

Well sure, but once you multiply by 1/n you leave N (or Z) and enter Q, and I suspect that's what makes it more difficult: Q is just a much more complex structure, since it formally consists of equivalence classes. In fact it's easy to divide an integer x by an integer y, it's just x/y ... the problem is that we usually want the fraction in lowest terms.

ModernMech 3 days ago

I'd like to second the point made to you in this thread that went without reply: https://news.ycombinator.com/item?id=43702895

It's true that we use tools with uncertainty all the time, in many domains. But crucially that uncertainty is carefully modeled and accounted for.

For example, robots use sensors to make sense of the world around them. These sensors are not 100% accurate, and therefore if the robots rely on these sensors to be correct, they will fail.

So roboticists characterize and calibrate sensors. They attempt to understand how and why they fail, and under what conditions. Then they attempt to cover blind spots by using orthogonal sensing methods. Then they fuse these disparate data into a single belief about the robot's state, which includes an estimate of its posterior uncertainty. Accounting for uncertainty in this way is what keeps planes in the sky, boats afloat, and driverless cars on course.

With LLMs, it seems like we are happy to just throw out all this uncertainty modeling and leave it up to chance. To draw an analogy to robotics, what we should be doing is taking the output from many LLMs, characterizing how wrong they are, and fusing them into a final result, which is provided to the user with a level of confidence attached. Now that is something I can use in an engineering pipeline. That is something that can be used as a foundation for something bigger.
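
To sketch the kind of thing I mean (a toy example: ask_model is a hypothetical stub for whatever client you use, and real calibration would model per-model error rates rather than just counting votes):

    from collections import Counter

    def fused_answer(question, models, ask_model):
        # Query several models and attach an agreement-based confidence to the
        # majority answer -- a crude stand-in for proper sensor-fusion-style
        # uncertainty modeling.
        answers = [ask_model(m, question) for m in models]
        best, votes = Counter(answers).most_common(1)[0]
        return best, votes / len(answers)   # e.g. ("42", 0.8) if 4 of 5 models agree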

ToValueFunfetti 3 days ago

>went without reply

Yeah, I was getting a little self-conscious about replying to everyone and repeating myself a lot. It felt like too much noise.

But my first objection here is to repeat myself- none of my examples are sensitive to this problem. I don't need to understand what conditions cause the calculator/IDE/medical test/LLM to fail in order to benefit from a 95% success rate.

If I write a piece of code, I try to understand what it does and how it impacts the rest of the app with high confidence. I'm still going to run the unit test suite even if it has low coverage, and even if I have no idea what the tests actually measure. My confidence in my changes will go up if the tests pass.

This is one use of LLMs for me. I can refactor a piece of code and then send ChatGPT the before and after and ask "Do these do the same thing". I'm already highly confident that they do, but a yes from the AI means I can be more confident. If I get a no, I can read its explanation and agree or disagree. I'm sure it can get this wrong (though it hasn't after n~=100), but that's no reason to abandon this near-instantaneous, mostly accurate double-check. Nor would I give up on unit testing because somebody wrote a test of implementation details that failed after a trivial refactor.
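
For what it's worth, the whole check is a few lines (a sketch assuming the v1-style openai Python client and the gpt-4o model; swap in whatever client and model you actually use, and treat the answer as an extra signal, not proof):

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def same_behavior(before: str, after: str) -> str:
        # Ask the model whether a refactor preserved behavior.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": "Do these do the same thing?\n\nBEFORE:\n"
                           + before + "\n\nAFTER:\n" + after,
            }],
        )
        return resp.choices[0].message.content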

I agree totally that having a good model of LLM uncertainty would make them orders of magnitude better (as would, obviously, removing the uncertainty altogether). And I wouldn't put them in a pipeline or behind a support desk. But I can and do use them for great benefit every day, and I have no idea why I should prefer to throw away the useful thing I have because it's imperfect.

ModernMech 2 days ago

> none of my examples are sensitive to this problem.

That's not true. You absolutely have to understand those conditions, because when you try to use those things outside of their operating ranges, they fail at a higher rate than the nominal one.

> I'm still going to run the unit test suite even if it has low coverage, and even if I have no idea what the tests actually measure. My confidence in my changes will go up if the tests pass.

Right, your confidence goes up because you know that if the test passes, that means the test passed. But if the test suite can probabilistically pass even though some or all of the tests actually fail, then you will have to fall back to the notions of systematic risk management in my last post.

> I can refactor a piece of code and then send ChatGPT the before and after and ask "Do these do the same thing". I'm already highly confident that they do, but a yes from the AI means I can be more confident. If I get a no, I can read its explanation and agree or disagree. I'm sure it can get this wrong (though it hasn't after n~=100)

This n is very, very small for you to be confident the behavior is as consistent as you expect. In fact, it gets this wrong all the time. I use AI in a class environment, so I see n=100 on a single day. When you get to n~1k+ you see all of these problems where it says things are one way but really things are another.

> mostly accurate double-check

And that's the problem right there. You can say "mostly accurate" but you really have no basis to assert this, past your own experience. And even if it's true, we still need to understand how wrong it can be, because mostly accurate with a wild variance is still highly problematic.

> But I can and do use them for great benefit every day, and I have no idea why I should prefer to throw away the useful thing I have because it's imperfect.

Sure, they can be beneficial. And yes, we shouldn't throw them out. But that wasn't my original point; I wasn't suggesting that. What I had said was that they cannot be relied on, and you seem to agree with me on that.

Tainnor 4 days ago

> Unreliable tools have a good deal of utility.

This is generally true when you can quantify the unreliability. E.g. random prime number tests with a specific error rate can be combined so that the error rates multiply and become negligible.
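
Miller-Rabin is the usual example: each round wrongly accepts a composite with probability at most 1/4, so k independent rounds push the error below 4^-k. A minimal sketch:

    import random

    def is_probably_prime(n: int, rounds: int = 20) -> bool:
        # Miller-Rabin: for composite n each round errs with probability <= 1/4,
        # so 20 rounds give error <= 4**-20 -- the unreliability is quantified.
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for _ in range(rounds):
            x = pow(random.randrange(2, n - 1), d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # witness found: definitely composite
        return True  # prime with overwhelming probability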

I'm not aware that we can quantify the uncertainty coming out of LLM tools reliably.

jimbokun 3 days ago

> Here's a machine that verifies 95% of calculations

Which 95% did it get right?

mrheosuper 4 days ago

> you'd still have to do 5% of the work

No, you still have to do 100% of the work.

ToValueFunfetti 3 days ago

You simply do not. You do the math yourself to compute 2n for n in [1, 2, 3, 4] and get [2, 5, 6, 8] (a slip in the second entry). You plug the same inputs into your (75% accurate) unreliable calculator and get [3, 4, 6, 8] (its slip is in the first entry). The two lists agree on the last two entries, so you now know that you only need to recheck the first two (50%) of the entries.
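
Or in code form, with the toy numbers above:

    hand       = [2, 5, 6, 8]   # your own results (the slip: 5 should be 4)
    calculator = [3, 4, 6, 8]   # the 75%-accurate tool (its slip: 3 should be 2)

    # Only entries where the two disagree need a recheck -- here, the first two.
    recheck = [i for i, (h, c) in enumerate(zip(hand, calculator)) if h != c]
    print(recheck)  # [0, 1]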

throwway120385 3 days ago

I resent becoming QA/QC for the machine instead of doing the same or better thinking myself.

ToValueFunfetti 3 days ago

This is fair. I expect you would resent the tool even more if it was perfect and you couldn't even land a job in QA anymore. If that's the case, your resentment doesn't reflect on the usefulness of LLMs.

tevon 4 days ago

Stack Overflow is like this: you read an answer but are not fully sure if it's right or if it fits your needs.

Of course there is a review system for a reason, but we frequently use "untrusted" tools in development.

That one guy in a GitHub issue who said "this worked for me"

shipp02 4 days ago

In Mechanical Engineering, this is 100% a thing with fluid dynamics simulation. You need to know if the output is BS based on a number of factors that I don't understand.

theonething 4 days ago

> I can't think of another development tool we'd say this about.

Because no other dev tool actually generates unique code like AI does. So you treat it like the other components of your team that generate code: the other developers. Do you trust other developers to write good code, without mistakes, without getting it reviewed by others? Of course not.

seabird 4 days ago

Yes, actually, I do! I trust my teammates with tens of thousands of hours of experience in programming, embedded hardware, our problem spaces, etc. to write from a fully formed worldview, and for their code to work as intended (as far as anybody can tell before it enters preliminary testing by users) by the time the rest of the team reviews it. Most code review is uneventful. Have some pride in your work and you'll be amazed at what's possible.

theonething 4 days ago

So you're saying that yes, you do "trust other developers to write good code without mistakes without getting it reviewed by others."

And then you say "by the time the rest of the team reviews it. Most code review is uneventful."

So you trust your team to develop without the need for code review, and yet your team does code review.

So what is the purpose of these code reviews? Is it the case that you actually don't think they are necessary, but perhaps management insists on them? You actually answer this question yourself:

> Most code review is uneventful.

The keyword here is "most" as opposed to "all". So based on your team's applied practices and your own words, code review is for the purpose of catching mistakes and other needed corrections.

But it seems to me if you trust your team not to make mistakes, code review is superfluous.

As an aside, it seems your team culture doesn't make room for juniors, because if your team had juniors I think it would be even more foolish to trust them not to make mistakes. Maybe a junior-free culture works for your company, but that's not the case for every company.

My main point is code review is not superfluous no matter the skill level; junior, senior, or AI simply because everyone and every AI makes mistakes. So I don't trust those three classes of code emitters to not ever make mistakes or bad choices (i.e. be perfect) and therefore I think code review is useful.

Have some honesty and humility and you'll be amazed at what's possible.

seabird 4 days ago

I never said that code review was useless, I said "yes, I do" to your question as to whether or not I "trust other developers to write good code without mistakes without getting it reviewed by others". Of course I can trust them to do the right thing even when nobody's looking, and review it anyway in the off-chance they overlooked something. I can't trust AI to do that.

The purpose of the review is to find and fix occasional small details before it goes to physical testing. It does not involve constant babysitting of the developer. It's a little silly to bring up honesty when you spent that entire comment dancing around the reality that AI makes an inordinately large number of mistakes. I will pick the domain expert who refuses to touch AI over a generic programmer with access to it ten times out of ten.

The entire team as it is now (me included) were juniors. It's a traditional engineering environment in a location where people don't aggressively move between jobs at the drop of a hat. You don't need to constantly train younger developers when you can retain people.

theonething 4 days ago

You spend your comment dancing around the fact that everyone makes mistakes and yet you claim you trust your team not to make mistakes.

> I "trust other developers to write good code without mistakes without getting it reviewed by others". Of course I can trust them to do the right thing even when nobody's looking, and review it anyway in the off-chance they overlooked something.

You're saying yes, I trust other developers to not make mistakes, but I'll check anyways in case they do. If you really trusted them not to make mistakes, you wouldn't need to check. They (eventually) will. How can I assert that? Because everyone makes mistakes.

It's absurd to expect anyone to not make mistakes. Engineers build whole processes to account for the fact that people, even very smart people make mistakes.

And it's not even just about mistakes. Often times, other developers have more context, insight or are just plain better and can offer suggestions to improve the code during review. So that's about teamwork and working together to make the code better.

I fully admit AI makes mistakes, sometimes a lot of them. So it needs code review. And on the other hand, sometimes AI can really be good at enhancing productivity, especially in areas of repetitive drudgery, so the developer can focus on higher-level tasks that require more creativity and wisdom, like architectural decisions.

> I will pick the domain expert who refuses to touch AI over a generic programmer with access to it ten times out of ten.

I would too, but I won't trust them not to make mistakes or occasional bad decisions because again, everybody does.

> You don't need to constantly train younger developers when you can retain people.

But you do need to train them initially. Or do you just trust them to write good code without mistakes too?

anonymars 4 days ago

I trust my colleagues to write code that compiles, at the very least

ModernMech 4 days ago

Oh at the very least I trust them to not take code that compiles and immediately assess that it's broken.

chrisweekly 4 days ago

But of course everyone absolutely NEEDS to use AI for codereviews! How else could the huge volume of AI-generated code be managed?

forgetfreeman 4 days ago

"Do you trust other developers to write good code without mistakes without getting it reviewed by others."

Literally yes. Test coverage and QA to catch bugs, sure, but needing everything manually reviewed by someone else sounds like working in a sweatshop full of intern-level code bootcamp graduates, or, if you prefer, an absolute dumpster fire of incompetence.

ryandrake 4 days ago

I would accept mistakes and inconsistency from a human, especially one not very experienced or skilled. But I expect perfection and consistency from a machine. When I command my computer to do something, I expect it to do it correctly, the same way every time, to convert a particular input to an exact particular output, every time. I don't expect it to guess, or randomly insert garbage, or behave non-deterministically. Those things are called defects (bugs) and I'd want them to be fixed.

tevon 4 days ago

This seems like a particularly limited view of what a machine is. Specifically expecting it to behave deterministically.

ModernMech 4 days ago

Still, the whole Unix philosophy of building tools starts with a foundation of building something small that can do one thing well. If that is your foundation, you can take advantage of composability and create larger tools that are more capable. The foundation of all computing today is built on this principle of design.

Building on AI seems more like building on a foundation of sand, or building in a swamp. You can probably put something together, but it's going to continually sink into the bog. Better to build on a solid foundation, so you don't have to continually stop the thing from sinking, so you can build taller.

forgetfreeman 3 days ago

Would you welcome your car behaving in a nondeterministic fashion?

senordevnyc 4 days ago

Then you are going to hate the future.

ryandrake 3 days ago

Way ahead of you. I already hate the present, at least the current sad state of the software industry.

forgetfreeman 4 days ago

Exactly this.

theonething 4 days ago

Ok, here I thought requiring PR review and approval before merging was standard industry best practice. I guess all the places I've worked have been doing it wrong?

forgetfreeman 4 days ago

There's a lot of shit that has become "best practice" over the last 15 years, and a lot more that was "best practice" but fell out of favor because reasons. All of it exists on a continuum of what is actually reasonable given the circumstances. Reviewing pull requests is one of those things that is reasonable af in theory, produces mediocre results in practice, and is frequently nothing more than bureaucratic overhead.

Consider a case where an individual adds a new feature to an existing codebase. Given they are almost certainly the only one who has spent significant time researching the particulars of the feature set in question, and are the only individual with any experience at all with the new code, having another developer review it means you've got inexperienced, low-info eyes examining something they do not fully understand, and will have to take some amount of time to come up to speed on. Sure they'll catch obvious errors, but so would a decent test suite.

Am I arguing in favor of egalitarian commit food fights with no adults in the room? Absolutely not. But demanding literally every change go through a formal review process before getting committed, like any other coding dogma, has a tendency to generate at least as much bullshit as it catches, just a different flavor.

Tainnor 4 days ago

Code review is actually one of the few practices for which research does exist[0] which points in the direction of it being generally good at reducing defects.

Additionally, in the example you share, where only one person knows the context of the change, code review is an excellent tool for knowledge sharing.

[0]: https://dl.acm.org/doi/10.1145/2597073.2597076, for example

forgetfreeman 3 days ago

Oh I have no doubt it's an excellent tool for knowledge sharing. So are mailing lists (nobody reads email) and internal wikis (an evergreen fist fight to get someone, anyone, to update). Despite best intentions, knowledge-sharing regimes are little more than well-intentioned pestering with irrelevant information that is absolutely purged from headspace during any number of daily/weekly/quarterly context switches. As I said, mediocre results.

Tainnor 3 days ago

You're free to believe whatever you want, but again, this is one of the few things that we actually empirically know to be working.

rixed 4 days ago

And there is worse: in the cases where the reviewer actually has some knowledge of the problem at hand, she might say "oh, you did all this to add that feature? But it's actually already there. You just had to include that file and call function xyz". Or "oh, but two months ago that very same topic was discussed and it was decided that it would make more sense to wait for module xyz to be refactored in order to make this easier", etc.

gtirloni 4 days ago

1) Once you get it to output something you like, do you check all the lines it changed? Is there a threshold after which you just... hope?

2) No matter what the learning curve, you're using a statistical tool that outputs in probabilities. If that's fine for your workflow/company, go for it. It's just not what a lot of developers are okay with.

Of course it's a spectrum with the AI deniers in one corner and the vibe coders in the other. I personally won't be relying 100% on a tool and letting my own critical thinking atrophy, which seems to be happening, considering recent studies posted here.

nkoren 3 days ago

I've been doing AI-assisted coding for several months now, and have found a good balance that works for me. I'm working in TypeScript and React, neither of which I know particularly well (although I know ES6 very well). In most cases, AI is excellent at tasks which involve writing quasi-custom boilerplate (e.g. tests which require a lot of mocking), and at answering questions of how I should do _X_ in TS/React. For the latter, those are undoubtedly questions I could eventually find the answers to on Stack Overflow and deduce how to apply those answers to my specific context -- but it's orders of magnitude faster to get the AI to do that for me.

Where the AI fails is in doing anything which requires having a model of the world. I'm writing a simulator which involves agents moving through an environment. A small change in agent behaviour may take many steps of the simulator to produce consequential effects, and thinking through how that happens -- or the reverse: reasoning about the possible upstream causes of some emergent macroscopic behaviour -- requires a mental model of the simulation process, and AI absolutely does _not_ have that. It doesn't know that it doesn't have that, and will therefore hallucinate wildly as it grasps at an answer. Sometimes those hallucinations will even hit the mark. But on the whole, if a mental model is required to arrive at the answer, AI wastes more time than it saves.

jimbokun 3 days ago

> AI is excellent at tasks which involve writing quasi-custom boilerplate (eg. tests which require a lot of mocking)

I wonder if anyone has compared how well the AI auto-generation approach works versus metaprogramming approaches (like Lisp macros) meant to address the same kinds of issues with repetitive code.
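
Not a Lisp macro, but as a crude Python stand-in for the metaprogramming route, parametrization generates the repetitive cases instead of pasting them into the repo (split_csv here is just a made-up helper under test):

    import pytest

    def split_csv(raw: str) -> list[str]:
        # hypothetical helper under test
        return [part.strip() for part in raw.split(",") if part.strip()]

    # One parametrized test instead of N near-identical hand- (or AI-) written ones.
    @pytest.mark.parametrize("raw, expected", [
        ("", []),
        ("a", ["a"]),
        ("a,b", ["a", "b"]),
        ("a, b ,c", ["a", "b", "c"]),
    ])
    def test_split_csv(raw, expected):
        assert split_csv(raw) == expected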

kazinator 3 days ago

The generation of volumes of boilerplate takes effort; nobody likes to do it.

The problem is, that phase is not the full life cycle of the boilerplate.

You have to live with it afterward.

pjerem 4 days ago

> 1) Once you get it to output something you like, do you check all the lines it changed? Is there a threshold after which you just... hope?

Not OP, but yes. It sometimes takes a lot of time but I read everything. It's still faster than nothing. Also, I ask the AI for very precise changes, so it doesn't generate huge diffs anyway.

Also, for new code, TDD works wonders with AI: let it write the unit tests (you still have to be mindful of what you want to implement) and then ask it to implement the code that passes the tests. Since you mention probabilistic output: the tool is incredibly good at iterating over things (running and checking tests), and unit tests are, in themselves, a pretty perfect prompt.
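
As a tiny illustration of tests-as-prompt (slugify is a hypothetical function that doesn't exist yet; the failing test is the whole spec you hand to the tool):

    # Red phase: the test pins down the behavior precisely, and the AI
    # iterates on an implementation until it passes.
    def slugify(text: str) -> str:
        raise NotImplementedError  # to be written by the AI

    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  a  b  ") == "a-b"
        assert slugify("") == ""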

iforgotpassword 4 days ago

> It sometimes takes a lot of time but I read everything. It's still faster than nothing.

Opposite experience for me. It reliably fails at more involved tasks, so I don't even try anymore. Smaller tasks of around a hundred lines take me longer to review than it would take to just do them myself, even though it's mundane and boring.

The only time I found it useful is if I'm unfamiliar with a language or framework, where I'd have to spend a lot of time looking up how to do stuff, understand class structures, etc. Then I just ask the AI and have to slowly step through everything anyway, but at least there are all the classes and methods that are relevant to my goal, and I get to learn along the way.

riffraff 4 days ago

How do you have it write tests before the code? It seems writing a prompt for the LLM to generate the tests would take the same time as writing the tests themselves.

Unless you're thinking of repetitive code, I can't imagine the process (I'm not arguing, I'm just curious what your flow looks like).

yodsanklai 3 days ago

> Is there a threshold after which you just... hope?

Generally, all the code I write is reviewed by humans, so commits need to be small and easily reviewable. I can't submit something I don't understand myself or I may piss off my colleagues, or it may never get reviewed.

Now if it were a personal project or something with low value, I would probably be more lenient, but I think if you use a statically typed language, the type system + unit tests can capture a lot of issues, so it may be OK to have local blocks that you don't look at in detail.

ModernMech 3 days ago

Yeah, for me, I use AI with Rust and a suite of 1000 tests in my codebase. I mostly use the Copilot VS Code plugin, which as far as I can tell weights heavily toward the local code around it, so often it's just writing code based on my other code. I've found AI to be a good macro debugger too, as macro debugging tools are severely lacking in most ecosystems.

But when I see people using these AI tools to write JavaScript or Python code wholesale from scratch, that's a huge question mark for me. Because how?? How are you sure that this thing works? How are you sure that when you update it, it won't break? Indeed the answer seems to be "We don't know why it works, we can't tell you under which conditions it will break, we can't give you any performance guarantees because we didn't test or design for those, we can't give you any security guarantees because we don't know what security is and why that's important."

People forgot we're out here trying to do software engineering, not software generation. Eternal September is upon us.

senordevnyc 4 days ago

1) Yes, I review every line it changed.

2) I find the tool analogy helpful but it has limits. Yes, it’s a stochastic tool, but in that sense it’s more like another mind, not a tool. And this mind is neither junior nor senior, but rather a savant.

bigstrat2003 4 days ago

> You're not supposed to trust the tool, you're supposed to review and rework the code before submitting for external review.

Then it's not a useful tool, and I will decline to waste time on it.

jorvi 2 days ago

> But I need to iterate a few times so that the code looks like what I want.

The LLM too. You can get a pretty big improvement by telling the LLM to "iterate 4 times on whichever code I want you to generate, but only show me the final iteration, and then continue as expected".

I personally just inject the request for 4 iterations into the system prompt.

mrheosuper 4 days ago

If I don't trust my tool, I won't use it, or I'll use something better.

e3bc54b2 3 days ago

> You're not supposed to trust the tool, you're supposed to review and rework the code before submitting for external review.

The vibe coding guy said to forget the code exists and give in to vibes, letting the AI 'take care' of things. Review and rework sounds more like 'work' and less like 'vibe'.

/s