... But why?

đź“… ⏱️ 11 min read · 2,178 words ✏️ Updated:

It’s been tough for me to think about what I should write about lately.

I can’t tell if it’s just “I’m busy” or “I’m using my time for other purposes” or if it’s “other people are saying things I want to say so why would I add to already brilliant analyses” but it’s some combination of those things, mixed with the usual “writing is hard” stuff.

But, as I’m prone to doing, I listened to a podcast and it made me sit up and go “YOWZAH” lol. Or some equivalent.

This week's Decoder episode was a banger – it was Nilay explaining his theory of "software brain" – the translation of everything we do into software. "Loops and databases" came up at least a few times. And I think it hit on the thing that I've been thinking about a lot but unable to articulate.

The entire world is turning into an app. Everything has an app. Water bottles have apps. Toothbrushes have apps. Washing machines have apps. Everything is a fucking app.

In and of itself, that’s not a big problem. Who cares if they have an app, you don’t need to use it. I don’t use the washing machine or dryer apps for my washing machine or dryer (and yes, they are two different apps, and yes I did use them at one point, and guess what I’ve been missing since I stopped using them… if you guessed nothing, you win).

The problem isn't in the creation of these things, but rather in the reframing of society – to paraphrase part of what Nilay argued, for some people this mindset is freeing. Everything is a database, it creates clearer structures, makes you feel more empowered, makes you come up with ideas that wouldn't have existed otherwise. But the problem is that those people aren't the majority – the majority of people want technology to fit into their lives, but don't want to have to fit their lives into technology.

But the never-ending push for apps and databases whose true value nowadays is to power an AI that can read all that data? That is a recipe for trading away our humanity in exchange for "certainty".

But even that certainty, as Nilay also argues, is an illusion. He uses the law as an example, since he went to law school and was at one point a lawyer – the idea being that the law feels deterministic to lawyers, but really nothing in the world is truly deterministic (that's me putting spin on the ball – he left it at the law not being deterministic). And that's the crutch that this software-brained, app-focused, theoretical determinism promises – a future that you can predict. Which, I get it, who doesn't want to be able to predict the outcomes of our actions? Those of us who are neurotic create scenarios and walk through all the possible iterations in our heads before we do most things. But you know what rarely happens? Those walkthroughs matching reality 1:1.

Not because we can't create all the possible scenarios in our heads, but because we can't predict the future. If we could, there'd be no need for the stock market, because everyone would always be winning. Casinos wouldn't make money (caveat – casinos not owned by Trump wouldn't make money, which is different from Trump-owned casinos, which never made money lol). It'd be a really different world.


Ok cool, so we've established "people can't predict the future" – good job, Scott, really going out on a limb there lol.

But there’s a larger point behind that. Determinism is the goal of software. Retraining your brain to think like software creates a false sense of determinism. That creates a false sense of KNOWING a lot of stuff. That then results in shitty decision making.

And this is where this all manifests for me. It’s not about what technology is good or bad or whether AI has the potential to be life changing – honestly that’s not the stuff that excites me anymore. I used to be a gadget guy, and in my heart of hearts I still am, but it was always about gadgets that made life better. Better speakers to make you immersed in music or immersed in a movie. A better phone to make it easier to connect to people. A better monitor to make the images on my screen clearer. But at this point, I have so many damn gadgets and I don’t really know if I’m in need of more? Like a really nice smoker with an electric temp setting is a fun gadget! And I’d be stoked to have one… but it’s not like I’m lining up at the Traeger store to get one at midnight lol.

Point being, if you're going to build a gadget or build software, the first thing that I am thinking is "the market is saturated and everything is a commodity, so why are YOU doing it?"

I've talked about this before, but we had a trainer at Smith Barney who was amazing, and one of her little tricks was this: in any setting where you want someone to give you the heart of why they're doing or thinking something, you have to keep asking why.

So if someone wants to build a widget, you ask "why do you want to build the widget?" "oh, it's to make it so you know if your dog is happy" "ok, why are you doing that?" "well, because I want my dog to be happy" "and why do you want that?" "because I love and care about my dog" "ok, why do you need to measure your dog's happiness though?" "because I'm not sure if it's happy" "why aren't you sure?" "because it keeps biting me" "why does it keep biting you?" etc etc.

It's the five whys exercise, just done face to face with someone.
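For the truly software-brained, the exercise is literally just a loop. Here's a minimal sketch – the function name, prompts, and structure are all my own hypothetical illustration, not anything from the episode or the training:

```python
from typing import Callable, List

def five_whys(starting_answer: str, ask: Callable[[str], str], depth: int = 5) -> List[str]:
    """Repeatedly ask 'why?' about the previous answer, collecting the chain.

    `ask` takes the most recent answer and returns the next "because...";
    an empty string means the person has run out of whys (the root cause).
    """
    answers = [starting_answer]
    for _ in range(depth):
        answer = ask(answers[-1]).strip()
        if not answer:  # no deeper why – we've hit the heart of it
            break
        answers.append(answer)
    return answers

# Example: simulate the dialogue above with canned responses.
chain = iter([
    "to know if my dog is happy",
    "because I love and care about my dog",
    "",  # out of whys
])
result = five_whys("I want to build a widget", lambda prev: next(chain))
```

Passing a function for `ask` keeps the loop testable; in a real face-to-face session the "function" is just you, asking why again.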

And I know you're looking at me going "I watched the Simon Sinek 'Start With Why' TED talk too, Scott" (secret time: I watched the first five minutes, got the gist, then stopped watching lol, but Simon is still great!).

But it's not just starting with why. It's giving yourself time and space to explore the why. That why doesn't come instantaneously. And it doesn't come without effort. And it doesn't mean that you can't experiment. But my hypothesis, and the one Simon puts out in Start With Why, is that the intentionality of knowing why you're doing something results in better outcomes.


And that’s where I think that the AI craze is going to eat itself alive.

We've already gone through it with the .com bubble, and then the housing crisis, and now again with the Iran war.

Action without perspective on why you’re doing a thing can result in great individual accomplishments – but it doesn’t create lasting value.

With the .com bubble it was "slap a .com on it and put up a website, everything will be fine," and suddenly every company was an internet company. It's not that Pets.com was a bad idea – obviously Petco has leveraged a huge online presence and is still afloat, and Chewy provides a great service for pet owners as well. The problem wasn't with what they were doing. The problem was "why are we doing it" and "why are we the ones who should do it."

With the housing crisis, it was “we can make money bundling these subprime mortgages” but no one ever asked “but why is this something I should buy” or “what happens if interest rates go up and people start defaulting” or “what’s the actual risk here?”

With the Iran war, fuck, I don't know if anyone in charge asked anything ever lol. They just said "BOMBS GO BOOM YAYYYY" and then did whatever.

But AI is getting to a point where I think it's going to meet a similar "but why" fate.


Part of this is covered by Nilay – largely it's the idea that people need technology to fit their lives rather than having to fit their lives into technology. I just want to reframe this, though – technology is made in the service of people. If it's not being made to somehow serve people, then it's not successful.

If I need to drastically alter my life in ways that are counterintuitive or create more work, then the outcome doesn't matter. I'm not going to do it.

When I was thinking through this and how to describe it, I used a terrible metaphor of caring about your dog's health.

Getting software-brained or AI-brained says: let's start by gathering data and putting it in a database. So now I'm logging the food that she eats, I'm logging her poops, I'm logging our walks, I'm logging everything. And then from that, there's some level of analysis that results in feedback telling me whether my dog is healthy or not.

But let’s be real – am I going to keep that database up to date? Am I going to log it every day? No. No way.

What you need is something that automagically does it for you, because otherwise why would I bother? It's why the Apple Watch went from a nuisance in its early form to a thing that I wear all day, every day. It just logs the data to HealthKit and I never have to think about it. If I want to look at, export, or use the data in any way, I can just do it, because it doesn't take EFFORT for me to maintain the database. Even my weight – I have a scale that talks to HealthKit, so I don't have to log that either; I just get a graph showing the progress (or lack thereof) that I'm making.

If I had to log all that data, it simply wouldn’t happen.

The reason isn't a willpower problem, or a desire problem. I logged everything I ate for like two straight years because I really cared about losing weight, and that data was incredibly useful and informed the way I behaved. But the problem is, life gets in the way. You get busy, you're just trying to get through your day, and then two weeks have passed with no logs. And then you have to weigh whether it's worth forcing the issue again.

And this is similar to how I see data for AI. AI is only as valuable as the data and context it has access to.

So the value that it creates needs to be seamless and easy for people to use.

And this is the point Nilay was making which is that if the value is only created by people changing their behavior, then the value isn’t sustainable. It’s like building a for profit passenger rail system in the US. You can’t just get value out of that by building it, you’d need to change people’s behaviors. Without a triggering event that enables that change, you can’t be surprised if your venture fails.

Right now, though, that's where AI is – it's only helpful if you change your behavior. And some people will make those changes because they see the value. But the question is, can they sustain that changed behavior, or are they going to end up abandoning it because life gets in the way?


Let me be really clear on this – I use AI daily. And there are times I find a great amount of utility from it and think it’s a great tool.

But even when I do I have this weird feeling like something is missing.

And I want to tell you it’s empathy, because that’d be a great little kicker right? It’d fit right in my oeuvre.

That might be true, or at least partially true, but for real I think it's actually just this – it doesn't feel like it's human. It feels like I need to change my humanity to be successful. And I think I'm not alone. I talk to people all day about AI and hear people say things like "I won't allow misspellings in my prompts because I wouldn't do that with a person" or "what did you name your AI?" and it's all in search of this need to feel like there's a human interaction happening.

And that chase for humanity I think is where AI is currently falling on its face and will ultimately tell us what direction it goes in. I have said before that I feel like this is analogous to the .com crash. The .com crash didn’t wipe out the internet, it focused it and made it so that it was more easily usable by humans. AI is gonna have to do the same or suffer the fate of blockchain and fall into a much narrower path than the AI overlords would want.