I don't have much sympathy for this. This country has long expected millions and millions of blue-collar workers to accept and embrace change or lose their careers and retirements. When those people resisted, they were left to rot. Now I'm reading a sob story about someone throwing a fit because they refuse to learn to use ChatGPT and Claude, and the CEO had to sit them down and, in a way, hold their hand. Out of all the skillset transitions that history has required or imposed, this is one of the easiest ever.
They weren't fired; they weren't laid off; they weren't reassigned or demoted. They got attention and assistance from the CEO, along with guidance on what they needed to do to change and adapt while keeping their job and paycheck, with otherwise no disruption to their life at all for now.
Prosperity and wealth do not come for free. You are not owed anything. The world is not going to give you special treatment or handle you with care because you view yourself as an artisan. Those are rewards for people who keep up, not for those who resist change. It's always been that way. Just because you've so far been on the receiving end of prosperity doesn't mean you're owed that kind of easy life forever. Nobody else gets that kind of guarantee -- why should you?
The bottom line is the people in this article will be learning new skills one way or another. The only question is whether those are skills that adapt their existing career for an evolving world or whether those are skills that enable them to transition completely out of development and into a different sector entirely.
> Those are rewards for people who keep up, not for those who resist change.
lol. I work with LLM outputs all day -- like it's my job to make the LLM do things -- and I probably ask some LLM to answer a question for me between 10 and 100 times a day. They're kinda helpful for some programming tasks, but pretty bad at others. Any company that tried to mandate that I use an LLM would get kicked to the curb. That's not because I'm "not keeping up"; it's because they're simply not good enough to put more work through.
Wouldn't this depend a lot on how management responds to your use? For example, if you just kept a log of prompts and outputs with notes about why the output wasn't acceptable, that could be considered productive use at this early stage of LLMs, especially if management's goal was to have you learning how to use LLMs. Learning how not to use something is just as important in the process of adopting any new tool.
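As a rough sketch of what such a log might look like (every name and field here is illustrative, not a real tool or schema):

```typescript
// Illustrative shape for a prompt/output log entry; all names are made up.
interface PromptLogEntry {
  date: string;      // ISO 8601 timestamp
  model: string;     // which model was asked
  task: string;      // what you were trying to accomplish
  prompt: string;    // what you asked
  output: string;    // what came back (or a pointer to it)
  usable: boolean;   // did it ship, even after edits?
  notes: string;     // why the output was or wasn't acceptable
}

const entry: PromptLogEntry = {
  date: new Date().toISOString(),
  model: "gpt-4o",
  task: "generate unit tests for parseConfig()",
  prompt: "Write Jest tests for parseConfig() covering malformed input",
  output: "/* generated test file */",
  usable: false,
  notes: "Tests only asserted the function ran; no behavior was checked.",
};
```

Even a plain text file with those columns would show management you engaged with the tool in good faith.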
If management is convinced of the benefits of LLMs and the workers are all just refusing to use them, the main problem seems to be a dysfunctional working environment. It's ultimately management's responsibility to work that out. But if management isn't completely incompetent, the people tasked with using these tools could do a lot to help by testing them and providing constructive feedback, rather than taking a stand by refusing to try, or by offering grand narratives about damaging the artistic integrity of something that has been commoditized from its inception, like video game art. I'm not saying that video game art can't be art, but it has existed in a commercial crunch culture since the 1970s.
What sort of tasks have you seen them struggle with? Not to dispute, just collecting datapoints for my own sake.
Recent examples: anything involving even vaguely complicated TypeScript types, hallucinated modules, and writing tests that are useful rather than just performative…
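To give a concrete flavor of the first one, here's a minimal, hypothetical sketch of the kind of type-level code I mean (the interface and names are made up). In my experience, models tend to mangle the mapped-type-plus-index pattern or invent syntax for it:

```typescript
// A mapped type that extracts the names of async methods from an interface.
// The trailing [keyof T] index collapses the map into a union of kept keys.
type AsyncMethodNames<T> = {
  [K in keyof T]: T[K] extends (...args: any[]) => Promise<unknown> ? K : never;
}[keyof T];

interface UserService {
  getUser(id: string): Promise<{ id: string; name: string }>;
  clearCache(): void;
}

// Resolves to "getUser"; clearCache is filtered out because it isn't async.
type AsyncOnly = AsyncMethodNames<UserService>;
```

Nothing exotic, but go one level past the basics and the output quality drops off fast.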
If you're not doing the work, you're not learning from the result.
The CEOs in question bought what they believed to be a power tool, but got what is more like a smarter copy machine. To be clear, copy machines are not useless, but they also aren't going to drive the 200% increases in productivity that people think they will.
But because management demands the 200% increase in productivity they were promised by the AI tools, all the artists and programmers on the team hear "stop doing anything interesting or novel, just copy what already exists". To be blunt, that's not the shit they signed up for, and it's going to result in a far worse product. Nobody wants slop.
Having spent hours upon hours with image synthesis for artistic hobby purposes, I can say it is indeed an awesome tool. If you get into it, you might learn about its limitations, though.
Real knowledge here is often absent among the strongest AI proselytizers; others are more realistic about it. It still remains an awesome tool, but a limited one.
AIs today are not creative at all. They find statistical matches. They perform different work than artists do.
But please, replace all your artwork with AI-generated pieces. I believe the flaws of that forced "adapt" approach would reveal themselves rather quickly.
> It still remains an awesome tool, but a limited one.
And that's enough to drive significant industry-wide change. Just because it can't fully automate everything doesn't mean companies aren't going to expect (and, indeed, increasingly require) their employees to learn how to effectively utilize the technology. The CEO of Shopify recently made it clear that refusal to learn to use AI tools will factor directly into performance evaluations for all staff. This is just the beginning. It's best to be wise and go where the puck is headed.
The article gives several examples of where these tools are used to rapidly accelerate experimentation, pitches, etc. Supposedly this is a bad thing and should be avoided because it's not sufficiently artisan, but no defensible argument was presented as to why these use cases are illegitimate.
In terms of writing code, we're entering an era where developers who have invested in learning how to utilize this technology are simply better and more valuable to companies than developers who have not. Naysayers will find all sorts of false ways to nitpick that statement, yet it remains true. Effective usage means knowing when (and when not) to use these tools -- and to what degree. It also, for now at least, means remaining a human expert about the craft at hand.