What's funny is that every GenAI "incredible email/essay" would be communicated better by just sending the prompt used to generate it.
I'm only half joking when I describe ChatGPT-authored emails as a uniquely inefficient transport format.
Author feeds bullet points into ChatGPT, which burns CPU cycles producing paragraphs of fluff. Recipient feeds the paragraphs of fluff into ChatGPT and asks it to summarise them back into bullet points.
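Spelled out, the round trip looks something like this (a minimal sketch; the llm() helper is a hypothetical stand-in for whatever chat-completion API would actually be called, not a real library function):

    # Sketch of the bullets -> fluff -> bullets round trip described above.
    def llm(prompt: str) -> str:
        # Placeholder: imagine this forwards `prompt` to your model of choice.
        return f"<model output for: {prompt[:40]}...>"

    bullets = "- Q3 numbers slipped 4%\n- need a headcount decision by Friday"

    # Sender: inflate the bullets into paragraphs of polished fluff.
    email_body = llm("Write a professional email covering these points:\n" + bullets)

    # Recipient: deflate the fluff back into bullets.
    summary = llm("Summarise this email as a short bulleted list:\n" + email_body)

    # Two model calls later, `summary` carries roughly the same
    # information that `bullets` had to begin with.
    print(summary)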
> Author feeds bullet points into ChatGPT, which burns CPU cycles producing paragraphs of fluff.
GOOG, AMZN, and MSFT reportedly need to use nuclear energy to power the LLM farms that we are told we must have. [1][2]
One must ask who (or what) in this feedback loop of inanity is doing the most hallucinating.
[1] https://apnews.com/article/climate-data-centers-amazon-googl...
[2] https://www.npr.org/2024/09/20/nx-s1-5120581/three-mile-isla...
Then users turn around and feed the fluff into energy-hungry summarizers, because who has time for a five-paragraph email that could've been a three-point bulleted list?
It would be a net win if this normalized sending prompts instead of "normal" communication, which, in terms of wasted energy and space, isn't far off from the LLM output that emulates it.
Similarly, MSFT recently announced the upcoming ability to clone your voice for Teams meetings. Extrapolating, in a few months there will be Teams meetings attended only by avatars. At the end of the meeting, you get an email with the essential content. Weird times ahead.
I think you are on to something there. I've heard executives talk about their current AI flow and it all sounds like summarization.
There's an increasing amount of prose written that will only ever be read by LLMs.
"Your essay must be at least X words" has always been an impediment to truly good writing skills, but now it's just worthless.
The way I explained it when I taught English 101 to first-year university students: any substantive question can generate an answer of a paragraph or a life's work; in this assignment I expect you to go into this much depth. Of course, good expository writing is as to-the-point as possible, so the first hurdle for most students was eliminating the juvenile trick of padding out their prose with waffle to meet an arbitrary word-count. Giving a word-count to an AI seems (currently) to activate the same behavior. I've not yet seen an AI text that's better writing than a college freshman could be expected to produce.
> Of course, good expository writing is as to-the-point as possible, so the first hurdle for most students was eliminating the juvenile trick of padding out their prose with waffle to meet an arbitrary word-count.
This is the most beautiful sentence I’ve read today.
Problem is, I doubt many people overcome this hurdle - that "juvenile trick" is pretty much the defining quality of articles and books we consume.
Indeed. We need better English teachers. :-)
We won't get them unless we appreciate both teaching and the Humanities more than we do. I was good, but by no means the best (75th percentile, maybe?). I loved doing it, but changed careers to IT because I'd never have been able to support a family on what I was paid.
A culture which pays teachers poorly, treats them with disrespect ("those who can do..."), and relentlessly emphasizes STEM, STEM, STEM is one that's slowly killing itself, no matter how much shiny tech at inflated valuations it turns out.
I don't know how it is elsewhere, but where I grew up we had minimum word limits on pretty much all essays. It doesn't matter if you can say what you want to say in six sentences; you need 4,000 words or two pages or whatever metric they use.
Oh, of course. Length requirements are important, for the reasons I explained up-thread. However, if teachers accept any old thing ("padding") to reach the count, then that metric is arbitrary, which (justifiably!) makes students cynical.
If a student can say all that they want to say in six sentences then they need to learn more about the topic, and / or think through their opinions more deeply. Teachers who do not take that next step are bad teachers, because they are neither modeling nor encouraging critical thinking.
In some places the majority of teachers are themselves incapable of critical thinking, because those who are leave the profession (or the locale) for the reasons in my comment above.
[Edit to add]: Please note that I say bad teachers, not bad people. Same goes for students / citizens, as well. The ability to think critically is not a determinant of moral worth, and in some ways and some cases might be anti-correlated.
Wish my high school English teachers had taught that. I remember fluffing essays just to hit the minimum word count.
College admissions essays on the other hand had the opposite problem - answering a big question in 500 words. Each sentence needed to pack a punch.
Don't get me started on college admissions essays. Rich kids pay other people to write them. Poor kids don't understand the class-markers they're expected to include. If AI consigns them to the dustbin of history it might be the first unalloyed good that tech ever does.
Clarification: "that tech" meaning the direct antecedent: "[AI] tech", not tech in general. I'm a Humanities guy, not an idiot. :-)
One of my favorite Mark Twain quotes comes from his correspondence: 'My apologies for such a long letter, I hadn't the time to write a short one.'
Probably not by Twain, but still a good quote. https://quoteinvestigator.com/2012/04/28/shorter-letter/
I never had that requirement outside the first years of school, where it's more about writing practice than writing actual essays. After that it was always “must be below X pages”.
X words is supposed to be a proxy for doing enough research that you have something to say in depth. A history of the world in 15 minutes had better cover enough ground to be worth 15 minutes, as opposed to one minute of substance and then filler words. Of course, everyone who writes such a thing and comes up a few words short reaches for filler - but you're supposed to go find something more to say.
I eventually flipped from moaning about word count minimums to whining about conference page limits, but it took a long, long time - well into grad school. The change came when I finally had something to say.
When I was a lazy kid, and there was a requirement to fill a number of lines/pages, I'd just write with bigger letters.
Write a comment explaining that the ostensibly simple task of writing a dozen or so thank you letters for those socks/etc you received for Christmas can, for some people, be an excruciating task that takes weeks to complete, but with the aid of LLMs can easily be done in an hour.
On second thought, you're right. That was easy.
That's old world stuff - I've never sent a thank you card and have lost no sleep because of it.
I’d prefer to receive no thank you than to receive an AI written one. One says you don’t care, the other says you don’t care but also want to deceive me.
There's the third case: they care. In which case you wouldn't be able to tell whether the card is "genuine" or AI-written; the two things aren't even meaningfully different in this scenario.
You can tell if the thank-you card is hand-written. Most people don't have a pen plotter connected to their AI text generator to write thank-you notes.
Emailed or texted "thank you" notes don't count. At all.
Yes, but they could also generate the text and transcribe it onto paper by hand.
For many people, myself included, 90%+ of the work on things like thank-you notes, greetings, invitations, or some types of e-mails is in coming up with the right words and phrases. LLMs are a great help here, particularly with breaking through the "blank page syndrome".
It's not that different from looking up examples of holiday greetings on-line, or in a book. And the way I feel about it, if an LLM manages to pick just the right words, it's fine to use them - after all, I'm still providing the input, directing the process, and making a choice which parts of output to use and how. Such text is still "mine", even if 90% of it came from GPT-4.
I guess if someone went through the effort to prompt an LLM for a thank-you card note, and then transcribed that by hand to a card and mailed it, that would count. It's somehow more about knowing that they are making some actual effort to send a personalized thank you than it is about who wrote it.
But honestly I don't think "blank page syndrome" is very common for a thank-you card. We're talking about a few sentences expressing appreciation. You don't really have to over-think it. People who don't send thank-yous are mostly just being lazy.
My financial advisor sends out Christmas cards and Birthday cards. They are pre-printed stock cards. I don't even open them. I should tell him not to waste the money. If he even wrote just one sentence that expressed some personal interest, then they would mean something.
These kinds of messages are on the one hand just pro-forma courtesies, but on the other hand they require that some personal effort is invested, or else they are meaningless.
Sure, it's not just thank you cards though. I once had a job in which my boss assigned me the weekly task of manually emailing an automatically generated report to his boss, and insisted that each email have unique body text amounting to "here's that report you asked for" but stretched into three or four sentences, custom written each week and never repeating. The guy apparently hated to receive automated emails and would supposedly be offended if I copy-pasted the same email every time.
Absolutely senseless work, perfect job for an LLM.
There's something painfully ironic and disturbing about the fact that the pseudo-Kolmogorov complexity of clickbait content, as judged "identical in quality" by an average human viewer, is arguably less than the length of the clickbait headline itself, and perhaps even less than that of the headline's embedding vector!
It's always been this way; it's just that the rules of polite/corporate culture don't allow you to say what you actually mean - you have to hit the style points and flatter the right people the right way, and otherwise pad the text with noise.
If the spread of AI made it OK to send prompts instead of generated output, all it would do is finally allow us to communicate without all the bullshit.
Related, a paradox of PowerPoint: it may suck as a communication tool, but at the same time, most communication would be better off if done in bullet points.