> Here's a machine that verifies 95% of calculations, but you'd still have to do 5% of the work.
The problem is that you don't know which 5% are wrong. The AI is confidently wrong all the time. So the only way to be sure is to double-check everything, and at some point it's easier to just do it the right way yourself.
Sure, some things don't need to be perfect. But how much do you really want to risk? This company thought a little bit of potential misinformation was acceptable, and so it caused a completely self-inflicted PR scandal, pissed off their customer base, and cost them a lot of trust and revenue. Was that 5% error worth it?
Stories like this are going to keep coming the more we rely on AI to do things humans should be doing.
Someday you'll be affected by the fallout of some system failing because you happen to wind up in the 5% failure gap that some manager thought was acceptable (if that manager even ran a calculation and didn't just blindly trust whatever some other AI system told them). I just hope it's something as trivial as an IDE and not something in your car, your bank, or your hospital. But LLMs will certainly be irresponsibly shoved into all three within the next few years, if they aren't there already.
>The problem is that you don't know which 5% are wrong
This is not a problem in my unreliable calculator use-cases; are you disputing that or dropping the analogy?
Because I'd love to drop the analogy. You mention IDEs- I routinely use IntelliJ's tab completion, despite it being wrong >>5% of the time. I have to manually verify every suggestion. Sometimes I use it and then edit the final term of a nested object access. Sometimes I use the completion by mistake, clean up with backspace instead of undo, and wind up submitting a PR that adds an unused dependency. I consider it indispensable to my flow anyway. Maybe others turn this off?
You mention hospitals. Hospitals run loads of expensive tests every day with false positive and false negative rates greater than 5%. Sometimes those results mean a patient with a benign condition undergoes invasive further testing. Sometimes a patient with cancer gets told they're fine and sent home. Hospitals continue to run these tests, presumably because a roughly 20x gain in diagnostic certainty is helpful to doctors, even if individual results are unreliable. Or maybe they're just trying to get more money out of us?
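To put rough numbers on why an error-prone test can still be worth running, here's a quick Bayes sketch. The 5% false positive/negative rates and 1% prevalence are assumptions for illustration, not figures from any real test:

```python
# Illustrative only: assumed 5% false-positive and 5% false-negative rates,
# and a 1% prevalence of the condition being tested for.
prevalence  = 0.01
sensitivity = 0.95   # P(test positive | sick)
specificity = 0.95   # P(test negative | healthy)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive
likelihood_ratio = sensitivity / (1 - specificity)

print(f"P(sick | positive test)   = {ppv:.1%}")              # ~16%: most positives are false
print(f"positive likelihood ratio = {likelihood_ratio:.0f}x")  # yet the odds still shift ~19x
```

Even with most positives being false alarms, a single imperfect result moves the odds by roughly a factor of 20, which is why doctors keep ordering the test.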
Since we're talking LLMs again, it's worth noting that 95% is an underestimate of my hit rate. 4o writes code that works more reliably than my coworker does, and it writes more readable code 100% of the time. My coworker is net positive for the team. His 2% mistake rate is not enough to counter the advantage of having someone there to do the work.
An LLM with a 100% hit rate would be phenomenal. It would save my company my entire salary. A 99% one is way worse; they still have to pay me to use it. But I find a use for the 99% LLM more-or-less every day.
> This is not a problem in my unreliable calculator use-cases; are you disputing that or dropping the analogy?
If you use an unreliable calculator to sum a list of numbers, you then need to use a reliable method to sum them again in order to check whether the unreliable calculator's answer is correct.
Yes, so in my first example in the GP, this happens first. Humans do the work. The calculator double-checks and gives me a list of all the errors plus 5% of the non-errors, and I only need to double-check that list (rough numbers in the sketch below).
In my third example, the calculator does the hard work of dividing, and humans can validate by the simpler task of multiplication, only having to do extra work 5% of the time.
(In my second, the unreliability is a trade-off against speed, and we need the speed more.)
In all cases, we benefit from the unreliable tool despite not knowing when it is unreliable.
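Rough numbers for the first example, with purely illustrative rates (nothing here comes from a real dataset):

```python
# Illustrative numbers only: 1,000 hand calculations, a 1% human error rate,
# and a checker that catches every error but also flags 5% of correct results.
calcs, human_err, false_flag = 1_000, 0.01, 0.05

real_errors  = calcs * human_err                    # 10 genuine mistakes
false_alarms = (calcs - real_errors) * false_flag   # ~50 spurious flags
to_recheck   = real_errors + false_alarms

print(f"items to re-check: {to_recheck:.0f} of {calcs}")  # ~60 instead of 1,000
```

The re-check pile is a small fraction of the original workload, even though I have no idea in advance which flags are real.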
In your first example, you appear to assume that for calculations where "each mistake could cost $millions or lives", engineers who calculated by hand typically didn't double-check by redoing the calculation, so a second check with a 95% accuracy tool is better than nothing. This assumption is false. I suggest you watch the 2016 film Hidden Figures to understand the level of safety at NASA when calculations were done by hand. You are suggesting lowering safety standards, not increasing them.
Your third example is unclear. No calculator can factor large numbers; that is the expected ability of future quantum computers capable of breaking RSA encryption. It is also unclear why multiplication and division should differ in difficulty, when dividing by n is the same as multiplying by 1/n.
>you appear to assume that for calculations where "each mistake could cost $millions or lives", engineers who calculated by hand typically didn't double-check by redoing the calculation
Not at all! For any n existing checks, adding an (n+1)th check that takes a twentieth of the effort is still beneficial. I did include triple-checks to gesture at this.
>It is also unclear why multiplication and division have different difficulties, when dividing by n is equal to multiplying by 1/n.
This actually fascinates me. Computers and humans both take longer to divide than to multiply (in computers, by roughly an order of magnitude!). I'm not really sure why that is in a fundamental information-theory kind of way, but it being true of humans is sufficient to make my point.
To address your specific criticism: you haven't factored out the division there, you've just changed the numerator to 1. I'd much rather do 34/17 in my head than 34 * (1/17).
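A crude way to see the machine-level asymmetry from Python. This is a sketch only: NumPy throughput on big arrays understates the instruction-latency gap, and the exact ratio depends on your CPU, but division consistently comes out slower.

```python
import timeit
import numpy as np

# Elementwise division of a large array vs. multiplication by a precomputed
# reciprocal. Treat the numbers as directional, not as a precise latency figure.
a = np.random.rand(100_000)
inv = 1.0 / 17.0

t_div = timeit.timeit(lambda: a / 17.0, number=2_000)
t_mul = timeit.timeit(lambda: a * inv, number=2_000)
print(f"divide:   {t_div:.3f}s")
print(f"multiply: {t_mul:.3f}s")
```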
> It is also unclear why multiplication and division have different difficulties, when dividing by n is equal to multiplying by 1/n.
Well sure, but once you multiply by 1/n you leave N (or Z) and enter Q, and I suspect that's what makes it more difficult: Q is just a much more complex structure, since it formally consists of equivalence classes of integer pairs. In fact it's easy to divide an integer x by an integer y; it's just x/y. The problem is that we usually want the fraction in lowest terms.
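For what it's worth, Python's built-in rational type makes the lowest-terms point concrete (just an illustration):

```python
from fractions import Fraction
from math import gcd

# Dividing integer x by integer y is "just x/y" if you accept an unreduced
# fraction; the extra work is normalising to lowest terms, which needs a gcd.
x, y = 34, 6
print(Fraction(x, y))         # 17/3 -- Fraction reduces automatically
g = gcd(x, y)
print(f"{x // g}/{y // g}")   # the same reduction done by hand
```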
I'd like to second the point made to you in this thread that went without reply: https://news.ycombinator.com/item?id=43702895
It's true that we use tools with uncertainty all the time, in many domains. But crucially that uncertainty is carefully modeled and accounted for.
For example, robots use sensors to make sense of the world around them. These sensors are not 100% accurate, and a robot that blindly trusts them will fail.
So roboticists characterize and calibrate sensors. They attempt to understand how and why they fail, and under what conditions. Then they attempt to cover blind spots by using orthogonal sensing methods. Then they fuse these disparate data into a single estimate of the robot's state, which includes an estimate of its posterior uncertainty. Accounting for uncertainty in this way is what keeps planes in the sky, boats afloat, and driverless cars on course.
With LLMs, it seems like we are happy to just throw out all this uncertainty modeling and leave it up to chance. To draw an analogy to robotics, what we should be doing is taking the output from many LLMs, characterizing how wrong they are, and fusing them into a final result that is provided to the user with a level of confidence attached. Now that is something I can use in an engineering pipeline. That is something that can be used as a foundation for something bigger.
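A minimal sketch of what I mean. The model callables and per-model weights here are placeholders; in practice the weights would come from measuring each model against held-out ground truth, which is the "characterize how wrong they are" step:

```python
from collections import Counter
from typing import Callable

def fused_answer(prompt: str,
                 models: list[Callable[[str], str]],
                 weights: list[float]) -> tuple[str, float]:
    """Query several models, weight their votes by measured reliability,
    and return the winning answer with a crude agreement-based confidence."""
    votes: Counter[str] = Counter()
    for model, weight in zip(models, weights):
        votes[model(prompt)] += weight
    answer, score = votes.most_common(1)[0]
    return answer, score / sum(weights)
```

Even something this naive gives downstream code a confidence number to threshold on, instead of a bare answer it has to trust blindly.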
>went without reply
Yeah, I was getting a little self-conscious about replying to everyone and repeating myself a lot. It felt like too much noise.
But my first objection here is to repeat myself: none of my examples are sensitive to this problem. I don't need to understand what conditions cause the calculator/IDE/medical test/LLM to fail in order to benefit from a 95% success rate.
If I write a piece of code, I try to understand what it does and how it impacts the rest of the app with high confidence. I'm still going to run the unit test suite even if it has low coverage, and even if I have no idea what the tests actually measure. My confidence in my changes will go up if the tests pass.
This is one use of LLMs for me. I can refactor a piece of code and then send ChatGPT the before and after and ask "Do these do the same thing". I'm already highly confident that they do, but a yes from the AI means I can be more confident. If I get a no, I can read its explanation and agree or disagree. I'm sure it can get this wrong (though it hasn't after n~=100), but that's no reason to abandon this near-instantaneous, mostly accurate double-check. Nor would I give up on unit testing because somebody wrote a test of implementation details that failed after a trivial refactor.
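To put made-up but plausible numbers on why a fallible "yes" still raises my confidence (the rates below are assumptions, not measurements of ChatGPT):

```python
# Hedged sketch: Bayesian update after an imperfect checker says "same".
prior = 0.90        # my own confidence the refactor is equivalent
tpr   = 0.95        # P(checker says "same" | actually equivalent)
fpr   = 0.10        # P(checker says "same" | actually different)

p_yes = tpr * prior + fpr * (1 - prior)
posterior = tpr * prior / p_yes
print(f"confidence after a 'yes': {posterior:.3f}")   # ~0.988, up from 0.90
```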
I agree totally that having a good model of LLM uncertainty would make them orders of magnitude better (as would, obviously, removing the uncertainty altogether). And I wouldn't put them in a pipeline or behind a support desk. But I can and do use them for great benefit every day, and I have no idea why I should prefer to throw away the useful thing I have because it's imperfect.
> none of my examples are sensitive to this problem.
That's not true. You absolutely have to understand those conditions, because when you try to use those things outside of their operating ranges, they fail at a higher-than-nominal rate.
> I'm still going to run the unit test suite even if it has low coverage, and even if I have no idea what the tests actually measure. My confidence in my changes will go up if the tests pass.
Right, your confidence goes up because you know that if the tests pass, the behavior they assert really does hold. But if the test suite can probabilistically pass even though some or all of the tests actually fail, then you have to fall back on the notions of systematic risk management from my last post.
> I can refactor a piece of code and then send ChatGPT the before and after and ask "Do these do the same thing". I'm already highly confident that they do, but a yes from the AI means I can be more confident. If I get a no, I can read its explanation and agree or disagree. I'm sure it can get this wrong (though it hasn't after n~=100)
This n is far too small for you to be confident the behavior is as consistent as you expect. In fact, it gets this wrong all the time. I use AI in a class environment, so I see n=100 on a single day. When you get to n~1k+ you see all of these problems where it says things are one way but really things are another.
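For what it's worth, n≈100 with zero observed misses doesn't pin much down; a standard rule-of-three bound makes the point (nothing LLM-specific here):

```python
import math

# With zero failures in n independent trials, the ~95% upper confidence bound
# on the true failure rate is the p solving (1-p)^n = 0.05, roughly 3/n.
n = 100
upper = 1 - math.exp(math.log(0.05) / n)
print(f"0 failures in {n} trials is still consistent with a ~{upper:.1%} failure rate")
```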
> mostly accurate double-check
And that's the problem right there. You can say "mostly accurate," but you really have no basis to assert this beyond your own experience. And even if it's true, we still need to understand how wrong it can be, because mostly accurate with wild variance is still highly problematic.
> But I can and do use them for great benefit every day, and I have no idea why I should prefer to throw away the useful thing I have because it's imperfect.
Sure, they can be beneficial. And yes, we shouldn't throw them out. But that wasn't my original point; I wasn't suggesting that. What I said was that they cannot be relied on, and you seem to agree with me on that.