Had a funny conversation with a friend of mine recently who told me he's in the middle of his yearly review cycle, and management is strongly encouraging him and his team to make greater use of AI tools. He works in biomedical lab research and has absolutely no use for LLMs, but everyone on his team had a great time using the corporate language model to help write amusing resignation letters as various personalities: pirate resignation, dinosaur resignation, etc. I don't think anyone actually quit, but what a great way to absolutely nuke team morale!
I've been getting the same thing at my company. Honestly no idea what is driving it other than hype. But it somehow feels different from the usual hype: so prescribed, as though coordinated by some unseen party. Almost like every out-of-touch business person had a meeting where they agreed they would all push AI for no reason. Can't put my finger on it.
It's because, unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.
Prior hypes, like blockchain, were more abstract, and therefore less useful to people who understand managing but not the actual work.
> this one is super easy for an MBA to point at and sort of see a way to integrate it
Because a core feature of LLMs is to minimize the distance between {quality answers} and {gibberish that looks correct}.
As a consequence, this maximizes {skill required to distinguish the two}.
Are we then surprised that non-subject matter experts overestimate the output's median usefulness?
Also, I think this has been a long-time dream of business types. They have always resented domain experts, because they need them for their businesses to be successful. They hate the leverage the domain experts have, and they think these LLMs undermine that leverage.
"Business types" get a funny look on their face, when I explain to them that they're the domain expert I seek to eliminate.
In fact, we should try to LLM them away. I wonder, would LLMs then be promoted less?
Actually, I feel like executing this startup and pitching it would be hilarious and therapeutic.
"How we will eliminate your job with LLMs, MBA."
I can sort of relate. If you hire an expert, you need to trust them. If you don't like what they say, you're inclined to want a second opinion. Now you need to pay two experts, which is often not financially reasonable, or is problematic when it comes to corporate politics. And even if you have two experts, what if they disagree? Pay a third?
To manage this well, you need the courage to trust people, as well as the intelligence and patience to question them. Not everybody has that.
But that aside, I think business people generally like having (what they think are) strong experts. It means they can use their people skills and networks to create competitive advantage.
This happens in programming as well, often pushed even by developers.
The "copilot experiences", that finishes the next few lines can be useful and intuitive - an "agent" writing anything more than boilerplate is bound to create more work than it lifted in my experience.
Where I am having a blast with LLMs is learning new programming languages more deeply. I am trying to understand Rust better, and LLMs can produce nice reasoning about whether one should use "Vec<impl XYZ>" or "Vec<Box<dyn XYZ>>". I am sure this is trivial for any experienced Rust developer, though.
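To make that concrete, here's a minimal sketch of the trade-off; Shape, Circle, and Square are made-up toy names, not from any real code. (One wrinkle: "Vec<impl Shape>" is only legal as a function return type, where the compiler pins down a single concrete element type.)

    trait Shape {
        fn area(&self) -> f64;
    }

    struct Circle { r: f64 }
    struct Square { s: f64 }

    impl Shape for Circle {
        fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
    }

    impl Shape for Square {
        fn area(&self) -> f64 { self.s * self.s }
    }

    // Homogeneous: one concrete type, static dispatch, no per-element
    // heap allocation. The caller can't mix Circles and Squares here.
    fn circles() -> Vec<impl Shape> {
        vec![Circle { r: 1.0 }, Circle { r: 2.0 }]
    }

    // Heterogeneous: different concrete types behind pointers, dynamic
    // dispatch through a vtable, one heap allocation per element.
    fn shapes() -> Vec<Box<dyn Shape>> {
        vec![Box::new(Circle { r: 1.0 }), Box::new(Square { s: 2.0 })]
    }

    fn main() {
        let a: f64 = circles().iter().map(|s| s.area()).sum();
        let b: f64 = shapes().iter().map(|s| s.area()).sum();
        println!("areas: {a} {b}");
    }

Rough rule of thumb: reach for the impl/generic form when every element is the same concrete type, and for Box<dyn ...> when you need to mix types at runtime and can live with the indirection.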
>> I've been getting the same thing at my company. Honestly no idea what is driving it other than hype.
> It's because, unlike prior hype cycles, this one is super easy for an MBA to point at and sort of see a way to integrate it.
This particular hype is the easiest one thus far for an MBA to understand because employing it is the closest thing to a Ford assembly line[0] the software industry has made available yet.
Since the majority of management training centers on early 20th-century manufacturing concepts, people trained that way believe "increasing production output" is a resource problem, not an understanding problem. Hence the allure of "generative AI can cut delivery times without increasing labor costs."
Shame that management is deciding that listening to marketing is more important than listening to the craftsmen they push it on.
They’ve always resented those employees having leverage to negotiate better pay and status. Many techies looked at near-management compensation and thought that meant we were part of the elite clubhouse, but they never saw it that way.
Can we stop with the MBA bashing?
I feel it reduces a whole group of people to a specific stereotype that might or might not be true.
How about lawyers, PhDs, political science majors, etc.
Let’s look at the humans and their character, not titles.
By the way, I have an MBA too and feel completely misjudged with statements like that.
The thing with stereotypes is that, while they tend to be well enough based in fact for most people to recognize, they are no better than anything else at applying generalizations to large groups of people. Some will always be unfairly targeted by them. You personally might not have done anything to contribute to those things we are lashing out against (and if not, thank you!), but then again you personally were not targeted by these remarks.

In the same way that you are possibly unfairly swept up in these assertions, it is, to a degree, unfair for you to use your wounds to deprive the rest of us of freely voicing our well-founded grievances. Problems must be recognized before they can be addressed, after all, and collectively so for anything so widely spread. It's never pleasant to be told to "just tough it out", but perfect solutions are rare when people are involved, just as surgeons have to cut healthy flesh to remove the unhealthy.
An analogue to this would be "all cops are bastards". Sure, there are some good ones out there, but there are enough bad ones out there that the stereotype generally applies. The statement is a rallying cry for something to be done about it. The "guilty by association" bit that tends to follow is another thing entirely.
Automation of knowledge work. Simply by using AI you are training your own replacement and integrating it into company processes.
Rather than some conspiracy, my suspicion is that AI companies accidentally succeeded in building a machine capable of hacking (some) people's brains. Not because it's superhumanly intelligent, or even has any agenda at all, but simply because LLMs are specifically tuned to generate the kind of language that is convincing to the "average person".
Managers and politicians might be especially susceptible to this, but there's also enough in the tech crowd who seem to have been hypnotized into becoming mindless enthusiasts for AI.
> strongly encouraging him and his team to make greater use of AI tools
I've seen this with other tools before. Every single time, it's because someone in the company signed a big contract to get seats, and they want to be able to show great utilization numbers to justify the expense.
AI has the added benefit of being the currently in-vogue buzzword, and any and every grant or investment sounds way better with it than without, even if it adds absolutely nothing whatsoever.
Has your friend talked with current bio research students? It’s very common to hear that people are having success writing Python/R/Matlab/bash scripts using these tools when they otherwise wouldn’t have been able to.
Possibly this is just among the smallish group of students I know at MIT, but I would be surprised to hear that a biomedical researcher has no use for them.
Recommending that someone in the industry take pointers from how students do their work is always solid advice.
Unironically, yes. The industry clearly has more experience, but it’s silly to assume students don’t have novel and useful ideas that can (and will) be integrated.
I'm taking a course on computational health laboratory work. I do have to say Gemini is helping me a lot, but someone who knows what's happening is going to be much better than us. Our professor told us it is of course allowed to build things with LLMs, since in the field we will be able to do that. However, I found they're much less precise with bioinformatics libraries than with others...
I do have to say that we're just approaching the tip of the iceberg, and there are huge issues related to standardization, dirty data... We still need the supervision and the help of one of the two professors to proceed, even with LLMs.
I generally have one-shot success asking ChatGPT to make bash/Python scripts and one-liners that would otherwise take me an hour to a day to figure out on my own (probably in one of my main languages), or that I might not even bother attempting. That's great for productivity, but over 90% of my job doesn't need throw-away scripts and one-liners.
That is both hilarious and depressingly on-brand for how AI is being handled in a lot of orgs right now. Management pushes it because they need to tick the "we're innovating" box, regardless of whether it makes any sense for the actual work being done.
Our org does seem to get some benefit from speeding up code generation with AI tools (much of it is CRUD or layout stuff). However, at times colleagues ask me for help, and the first thing I do is Google the question, find the answer, and get an "Oh right, you can Google too", since they'd only been trying to figure out the issue with ChatGPT or similar.
Gemini loves to leave poetry on our reviews, right below the three bullet points about how we definitely needed to do this refactor but also we did it completely wrong and need to redo it. So we mainly just ignore it. I heard it gives good advice to web devs though.
I really hope that if someone does quit over this, they do it with a fun AI-generated resignation letter. What a great idea!
Or maybe they can just use the AI to write creative emails to management explaining why they weren’t able to use AI in their work this day/week/quarter.
If you are not building AI into your workflows right now you are falling behind those that do. It's real, it's here to stay and it's only getting better.