Yet another AI post
I’ve been thinking yet again. Always dangerous.
And I know that I’ve talked quite a bit about AI in these posts, and part of that is due to the fact that AI is everywhere. Part is due to the fact that I talk about AI all the time at work, and so it’s an important aspect of my work life. Also, this is ostensibly an engineering blog and what’s more top of mind to engineers than AI? And of course, I also have been using AI as an idea generation tool and an outlining tool when creating these posts.
So the discussion of AI is very reasonable and valuable.
But I am going to expose myself a bit in this one. I think AI is dead and done with. I honestly don’t think it’s going to be anywhere near the game changer people are talking about, and I think it’s going to be a relic on the scrap heap with pets.com (funny enough this now redirects to PetSmart lol) and askjeeves (which apparently still exists but is terrible lol). Which isn’t to say “there will be no AI” but rather to say… it’s going to end up being a lot of noise without a lot of value.
I’m not taking this position lightly or pretending like it’s something that I’m the first person to say, but rather I’m saying this is now my default position. Which is for sure gonna be awkward for at least the next 6 months as budgets continue to be tied to AI and I am probably gonna be out there pitching it just like the rest of the tech industry.
But I am taking this position definitively - within the next 2 years, the AI hype will die out, we’ll have some niche AI tools that do very specific things, and the rest will be completely wiped out or driven to obscurity.
Let’s get some bona fides out of the way here…
I’m a tech geek. I like toys and gadgets and tools. I stood in line at the Stonestown mall on iPhone release day (I left work early AND bailed on my girlfriend at the time, who I was living with, to go and get it… so I was DEDICATED lol). I have had almost every gadget you can think of. VR headsets? I have two. Gaming systems? I think we have 5 or 6 in the house. Home automation tech? I have a little Raspberry Pi running a server, I have numerous Apple Home contraptions, everything is automated, everything is lit up, I mean I have fun. My first job in technology consulting was won when I explained how, pre-AirPlay, I had wired my house so I could wirelessly play music from my iPhone. It’s just a thing I like. To this day, my wife will make fun of me for all the gadgets and gizmos I want to play with (though now it’s pivoting to things like soldering and doing things that are more physical than digital… a portent of things to come in this article?! Nah, no way, I’d never set you up like that, this is all just stream of consciousness).
TL;DR I’m not some tech hater (though if you let me write a screed like the one I posted a few weeks ago, I’d become one. That was aspirational writing and hating lol).
So if I’m bought in on tech (obviously I am), then why am I here preaching the demise of AI when it’s clearly the butter on my bread right now? We’re getting paid for AI solutions left and right. It’s what’s making the tech sector GO.
What makes me so confident then that within 2 years it’ll be the end of the hype cycle and AI will be no longer driving the market? I wish I had a simple and succinct answer… and trust me we’ll get there, we’ll workshop it live at one point lol… but I think I need to dive into some other stuff before we land on a simple answer.
I’m not saying I can predict the future here obviously. But here’s the tea leaves that I’m reading and what I think is leading me to this assertion.
There’s enough about the financials already out there so I’m not going to dive deeply into that. Instead I’m going to talk about the actual outcomes and what we’re anticipating the impact to humanity to be on why I think that the timeline makes sense and why I think that the outcome makes sense.
Essentially, when I think about where AI is being deployed, it’s in a few main places/ways: chatbots (voice and text), agents (background tools taking actions on your behalf), assistants/code generation (mostly used for developers at the moment), and finally video/image generation.
So let’s tackle each one and identify why I think that there’s a cliff coming from a user perspective not just from the financials.
Chatbots & Video/Image Generators
These are fun and all, and I know that you’re probably reading about all the people who have AI girlfriends (and boyfriends… but let’s be real it’s mostly dudes lol), and all the people who are using chatbots to create all sorts of AI slop content on your social media network of choice. And there’s some level of utility that these create when there’s a need, but as I’ve been exploring more and more… I think that these have a short term utility that isn’t going to last.
There are the Clippy jokes, there’s making fun of the dudes in basements falling in love with GPT, and all the different ways we make fun, but none of that has stopped people or even slowed them down. So what will?
I think the answer is simple: Ads. A lot of people who use these tools are either using the free version or already paying a significant amount for access to them. But what happens when they start getting ads thrown into their content? Whether it’s in the form of a breaking ad that kills the flow of the conversation, or whether it’s the embedded ads that respond to your questions with ads that are SOMEWHAT close to what you’re asking for… it’s going to be the death knell of the AI chatbot.
Why spend $50 a month and then also have to get ads? Or pay $100 for the ad-free tier? There are two models that follow this pattern that we can look to and see why they would/wouldn’t be barometers for what’s coming with AI: online news and streaming TV.
For online news, they went with the “free until it isn’t” model… which is very similar to what AI is doing. And what happened to online news? It contracted and became sensationalized in order to keep people coming back. But that resulted in a backlash, where without the best content, without the right sweet spot, without the right level of investment… the industry got gutted. Yes, there are still major newspapers in most cities, but increasingly they’re consolidating and becoming mouthpieces for oligarchs who prop them up to fulfill political needs. They aren’t viable profit centers like AI is trying to be. Instead, you’re seeing an expansion of individual news writers going solo and abandoning the major outlets.
This feels like where AI is heading but with differences based on what they’re selling. You could realistically go find trusted journalists to find your news and thereby escape the mainstream papers at this point, but AI is going to be different, where there may be models that avoid some of the costs/ads that turn it into a piece of garbage… but they’re going to be few and far between because what’s the financial incentive or moral incentive for people to create those tools? I don’t think it’s going to make any sense for companies or individuals. So they just won’t do it. Making it so that if you want to avoid the ads or avoid the fees, the only option is to stop using it.
For streaming TV, I’m specifically thinking of Amazon and Hulu adding ads to the tier that used to be ad-free, at the same price! Because we all got used to having no-ads tiers at reasonable rates, when they made the change there were only three options… drop the service, deal with the ads, or pay more to remove the ads.
Personally… I have gotten so used to not having ads, I forked over the extra cash. I think this is what the AI chatbot companies are THINKING will happen. We get you roped in, then you need what we’re selling, and you’re going to be willing to pay the price for it.
But the key is, it needs to provide utility, not just entertainment. The reason I was willing to pay is that I care about entertainment, television, and movies in a way that’s probably unhealthy (I majored in Business with an emphasis in Cinema/Television though… so I don’t think it’s TOO unhealthy lol). So seeing it without the ads was actually a big deal for me. Losing the ability to use GPT to create text files for me because they want to put ads in it doesn’t carry the same level of need, at least for me. I’m sure some people will see it as a necessity, but for all the people who are forming literal relationships with their chatbots… do you think they’re going to feel GOOD about being charged double to keep their friend around? Do you think they’re going to keep paying whatever it takes, or are they going to simply say “this isn’t worth it, maybe I should find a human friend who I don’t have to pay,” or “maybe I’ll hire an assistant if I’m spending this much anyway”?
I just don’t see the level of drive that you need to create to justify people continuing to pay more and more and more for your service, when it’s just a magic words and pictures machine. Maybe they get used for some specific studios that are trying to make movies or video games because they’re “good enough”… but even then, why pay the rate you’ll need to for an AI generated background that’s shitty, when you can pay 1.5x that and get something good made by a human? I think the backlash will be more substantial than the savings.
Agents / Code Generators / Assistants
“Within six months we won’t even NEED developers” repeated every 3 months until the end of time playing in all our heads, right?
There’s utility here, to be sure, in the same way code completion is a tool of utility. It’s a tool, and it’ll probably be the least hard-hit part of the AI industry, but it’s still going to get hit. Because what’s the point of 75 different code gen tools? They’re going to end up consolidating, because none of these companies have enough of a market available to them to keep themselves going.
You’ll have some solo devs who are spending a lot on code gen tools, or buying fully into spec driven development using prompts, and all that stuff is cool, and might yield some fruit. But we’re talking about the larger industry here, and there’s simply not enough demand, and not enough time to vet all these products.
We, for instance, tried out a few AI code review tools and code gen tools… for the most part, the code review tools were completely ineffective, and the code gen tools “are cool,” but the value and need are lagging way behind the costs. And I bet if I asked all my dev friends whether they’d fork the money over themselves for these tools, the answer would be no. Because while they’re useful, they’re not necessary.
They might help on the margins, or help with blocked mindsets, but they also aren’t going to just build you your tools and applications and do so without any issue. I already documented my vibe coding experience, and there’s not any evidence that we’re going to get to fully autonomous development by tools anytime soon.
Along that same line with agents: they are exploding right now. Everyone wants AI agents in their code. But once the implications come out and we start talking about the risks, and more importantly after the first few times those risks hit… you’re going to see a huge devolution of the market for AI agents. If they can perform tasks for you, the logical conclusion is that unless they’re designed in a way that they can never do anything malicious (because they’ve never been trained on malicious actions), they’re going to do some terrible things. We talked previously about the example of an agent threatening an exec who said, in an email the agent could read, that he was going to shut the agent down.
Now imagine that scenario but across your whole ecosystem, with your billing, shipping, payroll, etc. It’s just a disaster waiting to happen. The last companies to get on board the train will be the lucky ones, because hopefully by the time they’re warming up to the idea, they’ll see the mess it’s created and say “nah, I’m good.”
So those are some of the “why I think these aren’t going to last long” ideas, but where did the 2 years come from and how can I reasonably say that timeline makes sense?
Well, let’s look at where we started… it was about 2-2.5 years ago that ChatGPT hit the market. It took off like a rocket ship, creating an entire industry out of nothing. But now we’re already inundated with AI slop, and we have tons of derogatory terms to describe AI, whether it’s “AI slop,” my personal favorite “clankers,” or the dreaded “you used an em-dash, you must be AI” bullshit.
And it didn’t take long: the backlash has been quick and sustained. Which I think is relevant to the timeline, because we’re simply speeding up the same timelines we’ve experienced with other bubbles. Whether it’s the tech/internet bubble, which crashed because we oversaturated the market and never figured out which things actually needed to be websites; or the housing bubble, where we sold people houses they couldn’t afford, leading to a major crash in the market; or something like blockchain/NFTs, we can see the cycles shortening at each turn.
Based on those accelerated timelines, even though we’re only 2-2.5 years into this hype cycle… we’re starting to see it wind down. And that wind down, while it could come sooner, is likely another 2 years out. The end of 2027/beginning of 2028 is right when we’re starting towards another cycle of presidential elections, we’re going to be in the middle of a fight with the world over AI, we’re likely going to be either in a shooting war or a Cold War with some actors in both Asia and South America (for real it’s coming), or if we somehow gain our senses it’ll be just in Asia, but whatever.
People are already reaching their cynicism limits, and need to get some relief, and I think it’s going to come not in diving deeper into AI, or finding new companionship in clankers (seriously, if the AI wins and this becomes a racial slur I’m screwed, but in the mean time, it’s fun and I love it), but rather in detaching from some of the corporate nonsense and taking a more humanistic approach to their lives. Whether that’s completely abandoning AI, or adjusting to different screen time limits, or finding new ways to create meaningful relationships both in person and online… I think that the next phase is going to end up being one that disenfranchises technology in favor of people.
And because we’re 2-2.5 years in, and the cycles typically hit a plateau and then crash back down, I’m wagering that this is the peak. I bought it, everyone I know at some point or another did too. But everyone is starting to come out of their respective cocoons and spread their wings outside of the tech space and start leaning into human relationships. So another 2-2.5 years seems like the right time horizon for us to find the bottom on the other side of this plateau!
This is coming off as pretty cynical, but I think it’s the right tone for anything that’s gotten the hype cycle that AI has gotten. Don’t forget, we have had these hype cycles for things that we use every day, but the point was that we were over-leveraged and valuing these things far beyond what they were actually producing. And that’s where AI stands.
It’s sitting in quicksand, and instead of trying to throw it a rope to get it out, we threw it cash and said “this will help!” Anyone born in the 80s would know that the only way out of quicksand is to stand still and wait for someone to save you with a rope or a pole! But there’s no rope or pole coming for AI.
Eventually, it’ll live in the back of our systems like data lakes or API hubs, and no user will know it exists, beyond that sometimes you’ll get an email or text that doesn’t have all the data it needs and so it’s addressed “Dear ,”.
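For anyone curious how the “Dear ,” email actually happens, here’s a minimal sketch (a hypothetical example, not from any real system; the field names are made up): a template gets filled in without anyone checking that the data is actually there.

```python
# A naive template fill: the greeting is rendered whether or not
# the customer record actually contains a name.
def render_greeting(record: dict) -> str:
    # No validation that first_name exists or is non-empty.
    return f"Dear {record.get('first_name', '')},"

print(render_greeting({"first_name": "Ada"}))  # "Dear Ada,"
print(render_greeting({}))                     # "Dear ,"
```

One missing field, zero guardrails, and the robot cheerfully ships the result to a few thousand inboxes.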
Be ready for it!