jimjimjim 4 days ago

A few hallucinations. It's right more times than it's wrong. Humans make mistakes as well. Cosmic justice.

aarestad 4 days ago

Yes, but humans can be held accountable.

jimjimjim 4 days ago

I probably should have added sarcasm tags to my post. My very firm opinion is that AI should only make suggestions to humans and not decisions for humans.

raphman 4 days ago

I'd argue that humans also more easily learn from huge mistakes. Typically, we need only one training sample to avoid a whole class of errors in the future (also because we are being held accountable).

SparkyMcUnicorn 4 days ago

As annoying as it is when the human support tech is wrong about something, I'm not hoping they'll lose their job as a result. I want them to have better training/docs so it doesn't happen again in the future, just like I'm sure they'll do with this AI bot.

rurp 4 days ago

That only works well if someone is in an appropriate job though. Keeping someone in a position they're unqualified for, and majorly screwing up at, isn't doing anyone any favors.

SparkyMcUnicorn 4 days ago

Fully agree. My analogy fits here too.

recursive 4 days ago

> I'm not hoping they'll lose their job as a result

I have empathy for humans. It's not yet a thought crime to suggest that the existence of an LLM should be ended. The analogy would make me afraid of the future if I think about it too much.

mgraczyk 4 days ago

How is this not an example of humans being held accountable? What would be the difference here if a help center article contained incorrect information? Would you go after the technical writer instead of the founders or Cursor employees responding on Reddit?