Stop Expecting a Hammer to Cut Wood
April 1, 2025
As for me, I feel like I'm in the twilight zone. I use AI tools all day, every day, and yet I don't fall on either extreme. So much of the zeitgeist seems to miss that there's an infinite value spectrum between 0 and 1, and that every AI tool available today falls somewhere along it.
Instead, AI tools are seen as either garbage or the coming of an omnipotent digital deity. But as with most things, the truth lies somewhere in the middle. We haven't created a god machine yet, so we should quit treating every new tool like it is one.
A lot of the fault here, as evidenced by the links above, lies with marketing, with product decisions (good luck clearly and accurately articulating the differences between GPT-4, 4o, o1, and so on), with the sheer complexity of the underlying technology (it's hard to know exactly how to use a tool when even its creators don't know exactly how it works), and with the simple fact that we're in the middle of a Gartner hype cycle.
"AI" as everyone thinks about it today really boils down to recent, huge improvements in an area of study that is not new. We've made some historic strides, but that's the unfortunate thing: the value that these incredible breakthroughs have given us is lost on so many for the reasons above.
Instead of either throwing a tool away or expecting it to be the end-all-be-all, what we should do is treat a tool like a tool.
What's it good at? What's it not good at? When is it cost or time prohibitive to use it?
Hammers are great. But if I grab one and start swinging it at a board, trying to cut it and touting how it's going to change everything, you're going to think I'm crazy. On the other hand, does that mean the hammer is useless?
Let's look at some real-world examples of the AI polarization I'm talking about:
AI Lawyer
Can AI replace a lawyer today? No.
Can it help you do a quick first pass of a document to flag areas of interest or potential issues? Yes.
Can you lean on that as gospel? No. Hallucinations are a thing. Plus, your AI friend isn't going to defend its legalese for you in court.
Great, now you have some pretty clear boundaries to operate within. You don't have to be afraid to use it within those confines, and you can know what not to expect from it. There is a non-trivial amount of value to derive inside the walls of those constraints.
AI Coding
Can AI code full apps without supervision? No.
Will it hallucinate? Yes.
Will it follow your instructions 100% of the time? No.
Can it work incredibly well at refactoring, writing tests, boilerplating, and similar? Yes.
It isn't replacing software engineers anytime soon, and it also isn't useless. AI-assisted coding is incredibly powerful. I'm using an agentic workflow built on RooCode paired with Claude Sonnet, and I can point to real-world examples of complex math, large features, and more that would have taken me a week or more on my own, but that I can accomplish in mere hours with this setup. All while following the coding styles and conventions that my team and I stand by.
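To make that concrete, here's the flavor of task I happily hand off. This is a minimal, hypothetical sketch (the slugify function and its test cases are invented for illustration, not pulled from my codebase), but it's exactly the kind of repetitive test boilerplate an assistant can produce in seconds:

```python
# Hypothetical example: a small utility plus the parameterized tests
# an AI assistant can scaffold almost instantly. slugify() is made up
# for illustration; it is not from any real project.
import re

import pytest


def slugify(title: str) -> str:
    """Lowercase a title and collapse non-alphanumeric runs into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


@pytest.mark.parametrize(
    ("title", "expected"),
    [
        ("Hello, World!", "hello-world"),
        ("  Leading and trailing  ", "leading-and-trailing"),
        ("Already-a-slug", "already-a-slug"),
        ("", ""),
    ],
)
def test_slugify(title: str, expected: str) -> None:
    # Each case checks one edge: punctuation, whitespace, idempotence, empty input.
    assert slugify(title) == expected
```

The point isn't that this code is hard to write. It's that an assistant can crank out dozens of these while I spend my attention on the parts that actually require judgment.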
Lovable? "Oh, it's a toy." I've generated functional, beautiful proofs of concept in a few hours that would've taken weeks otherwise. If it's a toy, it's a damn valuable one.
These are tools. They're good at certain things. They're not good at other things. Just like all tools.
By that same token, if you're a construction worker, you'll get left behind if you try to use a hammer for every task, but you'll also get left behind if you never pick up a hammer when it comes time to drive some nails.
I hope the message I'm trying to convey here is clear: there is value in AI today. Period. You can't tell me there isn't, because I can prove there is. I've benefited from it. That said, you need to know where, when, and how to use it. Because while I don't agree with the naysayers or over-hypers, I do agree with Richard Baldwin's take:
AI won't take your job. It's somebody using AI that will take your job.
Richard Baldwin