> The only thing that I have seen convince people (and it always does)
...when anyone starts talking in universals like this, they're usually deep in some hype cycle.
This is a problematic approach that many people take; they posit that:
1) AI is fundamentally transformative.
2) People who don't acknowledge that simply haven't tried it.
However, I posit that:
3) People who think that haven't actually used it in a serious capacity, or are deliberately misrepresenting things.
The problem is that:
> In reality, I go back and forth with AI constantly—sometimes dozens of times on a single piece of work. I refine, iterate, and improve each part through ongoing dialogue. It's like having a thoughtful and impossibly fast colleague who's always available to help me develop and sharpen my ideas.
...is only true for trivial problems.
The author calls this out, saying:
> It won't excel at consistently citing specific papers, building codes, or case law correctly. (Advanced techniques exist for these tasks, but they're not worth learning when you're just starting out. For now, consider them out of scope.)
...but this is really the heart of everything.
What are those advanced techniques? Seriously, after 30 days of using AI, if all you're doing is:
> Prepare for challenging conversations by using ChatGPT to simulate potential scenarios, helping you approach interpersonal dynamics with empathy and grace.
Then what the absolute heck are you doing.
Stop gaslighting everyone.
Those 'advanced techniques' are all anyone cares about, because they are the things that are hard and don't work.
In reality, it doesn't matter how much time you spend learning; the technology is fundamentally limited, and there are things it simply cannot do.
Spending time learning how to do trivial things will never enable you to do hard things.
It's not missing the 'human touch'.
It's the crazy hallucinations, invalid logic, failure to do as told, flat-out incorrect information or citations, and the inability to perform a task (e.g. as an agent) without messing something else up.
There are a few techniques that can help you build an effective workflow; but seriously, if you're an AI skeptic, spending a month doing trivial stuff like asking for '10 ideas about X' is an insult to your intelligence and doesn't address any of the concerns that, I would argue, skeptics and real people actually have about AI.
> This is a problematic approach that many people take; they posit that
It’s like the people who think that everyone who opposes cryptocurrencies only does so because they’re jealous they didn’t invest early.
Let’s take vim and emacs, or bash. People do not spend years on them just for pleasure or fun; it’s because they’re trying to eliminate tedious aspects of their previous workflows.
That’s the function of a tool: to help you do something with less effort. Learning to use it can take some time, but the acquired proficiency compensates for that.
LLMs have been available to the general public for two years, and still today there are no concrete use cases that meet the definition of a tool. It’s "trust me, bro!" and warnings in small print.
> there are no concrete use cases that meet the definition of a tool
There are some, but you won't like them. Three big examples:
a) Automating human interactions. (E.g., "write some birthday wishes for my coworker".)
b) Offensive jokes and memes.
c) Autogenerated NPCs for role-playing games.
So, generally things that don't require actual intelligence. (Weird that empathy is the first thing we managed to automate away with "AI".)