> people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read
It's bad that this need exists. However, introducing this feature did not create the need. And if the need exists anyway, fulfilling it is still better, because otherwise this kind of person wouldn't get the information at all.
This is worse, because the AI slop is full of hallucinations which they will now confidently parrot. No way in hell does this type of person verify or even think critically about what the LLMs tell them. No information is better than bad information. Less information, read with the habit of evaluating it critically, is better than an excess of bad information.
Do you have examples of recent models hallucinating when asked to summarize a text?