The AI Disconnect


I read a "tweet" from someone saying AI made him 10 times more productive (in both the amount and the quality of the code produced) than he was before, with 25 years of experience in software development. And that's a wild thing to say.


First of all - that's a very weird way of saying you're really bad at your job after two and a half decades of practice. Really? 10x the quality? How terrible was your code before AI? How did you accumulate 25 years with that quality of work?

But more importantly - I feel a total disconnect. Putting aside all the valid reasons not to jump onto the AI bandwagon - the actual cost of running the thing (which is subsidized even more heavily than Uber rides were by VC money back in the day), the societal collapse (losing jobs, losing sanity interacting with bots), the destruction of the environment (electricity, drinkable water, pollution), among many more - I just don't get code that's anywhere near a standard I would consider production ready. With any AI I tried, and I tried pretty much all of them, over a span of months. A lot of the time the code doesn't even run the first 10 times because of mistakes, outdated libraries, and tools.

And that's where my disconnect happens. You listen to this AI marketing BS, like "Most if not all white-collar jobs will be replaced in the next 12-18 months". Or you hear some vibe bros going crazy on social media about how they vibe coded something in 45 minutes. But on the other side of the fence I stand there with generated code that sometimes works, sometimes doesn't. Depending on how strict you are about what constitutes "works", the latter category just keeps getting larger. If it's tech I don't know that well, I may not catch some clear mistakes in the code, but they're there when you look for them. And it's usually code I'd call somewhat simple - imagine the mistakes in more critical or security-related code. Or in large codebases that need to be maintained.

I had a thought the other day about vibe coding.

It's like being an editor instead of the writer of a book. The AI writer sends you a draft, and it's not what you want. The names are all wrong, it's set in the wrong place, the theme is off, and it reads like a generic book that's also half-copied from another book you've read before. Generic, sloppy work that barely constitutes a book, mostly because it happens to be grammatically correct.

And sure, you can send feedback, change the names, change the countries, you can point here and there to make adjustments, and the (very mediocre, may I add) writer may change those things. Maybe it won't introduce new problems somewhere along the way, maybe it will. But the book will still be a mediocre copy of something that already exists. And you have to read the whole thing every time, because you simply can't trust AI to do the right thing.

So after a couple of back-and-forths, you're kind of done, tired, deep into "if you want something done right" territory. It was supposed to make things easier; it just made them worse in another way. And you didn't even want to be an editor in the first place, you wanted to write books!

Ultimately, throughout this whole hype, this is my experience with AI. The appeal is surface-level. Can it do really cool things? Sure. Especially when you don't compare it with people who actually have those skills. A drawing by a 5-year-old is also amazing, because you're surprised they got that far and you compare it with what they did 3 years ago (I also see a lot of Will Smith eating pasta comparisons lately). But it's like watching television and tuning into a show that just happens to be about a topic you know first hand. You see how wrongly it's portrayed, how many mistakes there are, how using 4 hands on a keyboard doesn't actually help stop a hacker from breaching the NCIS mainframe. And (hopefully) it clicks that all shows are like that, you just lack the expertise to see it when it's a medical show or a courtroom drama.

I read a comment somewhere before, that was along the lines of:

It's funny how AI is great at everything, except for the area you know a lot about, at which it sucks.

My somewhat biased observation is that AI is great at faking skill and talent to people who lack those skills in the areas they use AI for. If you never made any music, song generators feel like magic. If you never edited a single video, the 8-10s clips of Hollywood actors produced by generative AI feel like something you could use to make a 2h feature film, if you generated enough of them. If you never coded (or you haven't done it in a long time, having been promoted into management at some point), it feels like magic, or like 3am at a startup in the Web 2.0 era.
But it's a lie. Will that 8s clip get views on TikTok? Sure. Would it work in an actual movie? If the AI slop ads and videos from some of the biggest corporations with unlimited budgets are any indicator - no. Will that generated code run? Maybe, probably at some point. Should you put it out there or run it so people can access it? From my experience - honestly - probably not. Just like you shouldn't live in, let alone sell, houses or cars you built without knowing anything about building houses or cars. The fact that it stopped falling apart after the 40th incantation to a black box doesn't mean it works and is safe.

You see - AI is a very deceptive term. There's no intelligence there. There's probability. That's what LLMs do - predict the next token (to simplify - the next word). They are extremely expensive, devastating-for-many-areas-of-our-lives versions of the keyboard autocomplete features we've had for decades. It's a simplification, but it's closer to the truth than "artificial intelligence" is. It's flipping coins, roulettes, slot machines. Every time you ask AI to do something, to perform a change in what you or it has already produced - you're spinning a roulette wheel, you're flipping a coin. And similar to those - some people win, and run around screaming how easy it is. Some even sell systems and courses on how to become just like them, clearly demonstrating how fundamentally they don't understand the system they operate in (if AI is so great, why would I sign up for your course instead of asking AI how to do what you're selling?). And like in casino games, the house always wins. The house being OpenAI, Google, Anthropic, Microsoft, Nvidia, selling you, each other, and investors (which now include governments) this magical solution. They move the goalposts, they rig the game, they use all the marketing tools they have ("you better sign up now, or you'll miss your chance" - which sounds more like someone selling pots or a pyramid scheme than a healthy business that would exist if it were also somewhat regulated and had to pay for things), and they know who to target with their "benefits".
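The "flipping coins" part can be sketched quite literally. Below is a toy next-token sampler - the table of words and probabilities is entirely made up for illustration, and a real LLM computes its distribution with a neural network over a huge vocabulary, but the final step is the same: a weighted random draw.

```python
import random

# Toy "language model": for each context word, an invented probability
# distribution over possible next tokens. Real LLMs compute something
# with this same shape, just over tens of thousands of tokens.
next_token_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "code": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
}

def sample_next(context: str) -> str:
    """Spin the wheel: pick the next token weighted by its probability."""
    dist = next_token_probs[context]
    tokens = list(dist.keys())
    weights = list(dist.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Two calls with the same context can give different answers -
# that's the coin flip: the model doesn't "know", it samples.
print(sample_next("the"))
print(sample_next("the"))
```

Asking the same question twice and getting two different answers isn't a bug in this setup; it's the mechanism.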

I keep going back, with my usual "maybe I'm wrong", after boosted social media posts, ads, new "breakthroughs". And I keep failing to prove myself wrong. I keep getting contradictions like "so you're saying I can use PIN 0 on the microcontroller, but from your own description and my usage of it, I actually can't", followed by the patronizing, overly-nice and by now honestly super annoying "You're absolutely right, and yes, you've reached the correct conclusion". A rubber ducky would be way less expensive, would not lead me to waste time and materials, and I would reach the same conclusion.

I'll keep checking in, obviously. The tools have become better over time, or more cynically - less horrible in some areas, while usually exposing horrible gaps and mistakes somewhere else. But they keep saying "it'll get smarter/better", so I will, like I have been for the past few years. But the disconnect doesn't disappear; in some ways it gets bigger. The scam becomes more obvious and devastating, yet more people fall for it, if you believe the media at least.
I may come back here in 12-18 months to admit defeat. To say I was wrong, that the jobs got replaced, and AI is everything it was said it would be. But I wouldn't bet on it. Because of the experience so far, and the absolute madness of that statement - fundamentally changing the world and the job market would take more time even if AI were able to replace all these jobs today. Also - if that were the case, it wouldn't be me writing this, it would be AI. To be consumed by AI. To generate something else with AI. I think I'd prefer SkyNet at that point.
