DSingularity 6 days ago

Is that example representative for the LLM tasks for which we seek explainability ?

stavros 6 days ago

Are we holding LLMs to a higher standard than people?

f_devd 6 days ago

Ideally yes, LLMs are tools that we expect to work, people are inherently fallible and (even unintentionally) deceptive. LLMs being human-like in this specific way is not desirable.

stavros 6 days ago

Then I think you'll be very disappointed. LLMs aren't in the same category as calculators, for example.

f_devd 5 days ago

I have no illusions about LLMs; I have been working with them since og BERT, always with these same issues and more. I'm just stating what would be needed in the future to make them reliably useful outside of creative writing & (human-guided & checked) search.

If an LLM produces incorrect or orthogonal rhetoric without a way to reliably fix/debug it, it's just not as useful as it theoretically could be given the data contained in its parameters.