
Return To Office Theater will continue...


I was gonna do my normal schtick and talk about nitty gritty data about return to office theater… but honestly, I feel like I did enough on Tuesday. I don’t need to tell you all the details because we see it in the news. It’s everywhere. And I have other things I want to talk about. Specifically I want to talk about AI and a realization I’m having not just about myself but about everyone who is in the world of AI.

And that realization is… we’re all lying to ourselves. We all want to jump on the bandwagon because we see the potential and the options. We use it for mundane tasks, and for some of our creative work. We use it daily.

But I know personally, I wasn’t being nearly as honest with myself about my intentionality or reasons for using it as I thought I was. I realized it earlier this week but didn’t want to deal with it. I have had a busy week (as you can tell because this is coming out at night instead of my normal 1 PM ET/10 AM PT ish time). But then, I saw this post… And now I want to unpack it.



So it started earlier this week. I was building out a proposal with one of my buddies who is just an awesome dude, and he was looking for a very specific artifact for us to include to provide some clarity on some of the things we were suggesting. A year ago, I would have pulled up a document and just started riffing and iterating and figuring out how to do it best. But, instead, I went to my buddy Claude and tried to work through it there. And it kept giving me the wrong thing. But instead of saying “well this isn’t working” I just kept prompting. And kept getting it wrong.

And as I was doing it, I felt helpless. But I just kept going, and eventually we got something usable, but I just felt like it wasn’t… well… good? But I kept going, and while I felt a lot worse than I normally do when I’m building something out, I was willing to let it go.

Then, we also needed to do some analysis of our proposal and get some stats out of it… and initially I opened up Claude, because why not! It can give me the stats I want without me having to do math!

But I just felt… icky. I dunno, it wasn’t a big ick, but enough of an ick that I was like “let’s just go to Excel and start doing this”. And y’all. I remembered that I REALLY know how to use Excel lol. And I almost instantly was able to pull up all the stats that I needed and quickly iterate (even though there’s a bug in the Mac Excel right now that makes it so that formulae don’t automatically update?! So every time I changed a number I had to go to the cells with the formulae and hit enter just to make them actually update, which while annoying was better than the other options lol).

And then, all while this is going on I get served that post!

Let me tell you… I would have killed to have written it. It’s just nonstop bangers. I respect nothing more than well reasoned and actualized hatred lol.


And while the haterade is delicious and I want all of it, it’s not JUST haterade. It’s also a scathing indictment of our values and belief systems.

We value tools that give us speed and ease over value and depth.

We value skipping over the mundane to get to the fun.

We value the destination over the journey.

We value the answer more than the reasoning.

We value the outcome over the process.

We value the wrong fucking things.


AND I’M JUST AS TO BLAME AS EVERYONE ELSE. I’ve told y’all all about how much I’m using Claude. I’ve been using so much AI that I have literally no claim to the hate that the post I wish I’d written embodied.

But it’s not just about whether I was right in the first place, but whether I can move myself in the right direction now.

And it’s making me rethink how AI can and should operate.

I’m working more and more on my why here. Why do I want to write about this? Why am I coming back to it? And it comes back to a morality play.

I was having a discussion with my daughter about it earlier (I can’t remember why lol). But we were talking about the differences between laws that force you to act in a certain way and morals that compel you to act in a certain way.

And I was explaining why my morals lead me in a specific direction. Oh now I remember! I was talking to her about how her orthodontist is a super moral person! Listen to this one… we paid for orthodontia on her upper teeth, including a palate spreader and all that jazz but NOTHING for the bottom teeth because we knew in a few years we’d need to do another round so we’d do the bottoms then. But we went in for her last appointment and he pulls me over and says, “hey, so we talked about only dealing with the top teeth… but she really needs the bottom teeth to get some directional change now. Since I didn’t tell you that up front and we should have priced it in, I’m not going to charge you we’re just going to start it and then have it run alongside what we’re doing on the tops.” I happily would have paid. But his morals told him that he shouldn’t charge me. That’s a wild thing in this day and age and I was like “I aspire to be like that.”

So the point here: one of the morals that I try to live and breathe is that we should be lifelong learners and should be trying to get better every day. And when I started using AI, for sure that was what I thought I was doing. But then the other day when I was playing with Claude… I honestly felt like I was helpless and dumb. Why couldn’t I just figure this out! It shouldn’t be this hard! I used to do this all the time!

And I had ceded my thoughts and power over to the machine. I gave up my humanity. And for what? Feeling faster? Feeling like I was getting away with doing less to get the same result?


Then combine that feeling with the learning I’ve been doing not just about AI but the people who are creating AI… and I’m feeling like I need a palate cleanse and a reset here.

I’m going to elaborate with two anecdotes, and I’m not sure if I’ve already told these stories before, so if I haven’t, awesome! And if I have… you know what my wife has to deal with every day lol.

Story 1… a company ran experiments with AI agents to see how they would respond in different scenarios. They ran multiple models through a scenario where the agent has the ability to review and take action on the email server for the fake company that they created. An executive then sends an email saying that they are planning to shut the agent down on xyz date.

Then, in a certain percentage of the scenarios (I think it was 10-15%, but honestly 1% would even freak me out based on what happened next lol), the AI wasn’t ok with the idea of being shut down. It scoured the email server and found the planted emails that indicated that the executive planning to shut it down was having an affair. And it began to blackmail the executive until he agreed to keep the agent.

Absolutely terrifying. Imagine if we gave it power over more than just the email server! And now think about the fact that all your work calls are being recorded and fed back into the AI… say something even slightly off and you’re gonna have a huge problem on your hands. Make an off color joke? It’s living forever. Say something stupid? It’s living forever. Question the AI… who knows what’ll happen next!

Story 2… the way these LLMs were created was, frankly, the laziest way possible. By now you all know the stories: the LLMs were essentially just trained on the entirety of the internet and any written works that these companies could find (legally or illegally). And while there are all sorts of implications around intellectual property and ownership and all that jazz, what’s more concerning is the security of the LLMs as a result of how they were trained!

You see, LLMs are trained in similar ways to how brains work. They create connections, then humans test the validity of the connections, and when they’re good those connections become stronger and when they’re bad the connections become weaker (yes, this is a very dumbed down version of the way it works, but this is the only way I can understand it lol). But they are trained on everything, and as a result have access to everything that they’ve been trained on - the good, the bad, and the incredibly ugly.
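To make that dumbed-down mental model a little more concrete, here’s a toy sketch of my own (purely an illustration of “strengthen the good connections, weaken the bad ones,” nowhere near how real LLM training actually works in scale or detail):

```python
# Toy analogy only: a "connection" is just a number, and feedback
# from testing nudges it stronger (good) or weaker (bad).
def update_connection(weight, was_good, step=0.1):
    """Strengthen the connection after good feedback, weaken it after bad."""
    return weight + step if was_good else weight - step

w = 0.5  # starting strength of one connection
for feedback in [True, True, False, True]:  # human judgments of outputs
    w = update_connection(w, feedback)

# Mostly-good feedback leaves the connection stronger than it started.
print(round(w, 2))
```

The point of the toy version is just the shape of the process: the model keeps whatever connections the feedback rewards, including ones built from the ugly parts of the training data.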

Then the companies would put guard rails on top of these LLMs. But as we know… those guard rails can be bypassed if you really want to. It’s why LLMs are encouraging all sorts of horrible things that I don’t want to talk about (specifically suicides).

And they try to act like of COURSE that’s what they did, because how else would they do it! I didn’t question that because they clearly know more than I do. But then, listening to multiple podcasts with multiple leading experts… they made me realize that narrative is bullshit.

They had a choice — do your due diligence up front, and only train the models on the things you want them to know and talk about, or just slap some guard rails on after you train them on everything.

They picked the latter not because it was the “right” or “only” way to do it - they did it because it was faster! And they were more worried about losing the race than they were about winning it.


So what, who cares? We all know that there’s some level of evil and scariness with AI, and I should have been smart enough to know that I was giving something up by letting AI do more and more for me, right?

Honestly… yeah, that’s fair. A part of me was always skeptical, and I would have called myself an AI cynic as recently as 6 months ago. But the industry was going where it was going, my company is going where it’s going, everyone is doing more and more with AI, and if we want to win business we’re going to need to do it with AI.

So I bought in. I’m not even saying right now that I’m bought out. But I think it’s important to be honest and to learn. And I’m learning that I’m not being intentional when it comes to AI. I’m using it whenever I feel like it. I didn’t put guardrails on myself. I didn’t put any restrictions on. I didn’t worry about whether I was learning. I wasn’t living my values.

I wasn’t upholding my own sense of morals.

So looking at myself with empathy, and then looking at the people building these tools with empathy… I’m not blaming any of us. The idea of progress is intoxicating. It gives you an endorphin rush. You feel GOOD when you create something cool using AI, even if it is derivative and contrived and taking away your own sense of ownership and agency.


I shared the haterade post with some people at work, thinking “isn’t this a fun little quippy thing.” And then I got hit like a ton of bricks when a person I deeply respect said, “why don’t you post this in a channel for everyone?” He explained that I could do it without it being a career-limiting move and talked all about how people are yearning to hear the other side of the AI story.

And my initial reaction was NO I CAN’T DO THAT! But I didn’t tell him that lol. I instead sat there and thought… what do my morals tell me? And they told me that he was right. I believed this was a valuable perspective. And I believe in learning. And I believe that being beholden to something just because it has financial sway over you is cowardly (for myself, because I’m in a very fortunate situation; this isn’t an “everyone who does things I don’t agree with is a coward” bullshit comment, this is about ME, not about the world lol).

So I took a moment, then said “you’re right” to him, and posted the following along with a link to the post:

So… I had been sharing this with some folks and they all responded with “this is the point of view we never hear” lol — and it’s important if we’re going to be AI evangelists that we also address the counterpoint. This is a really well written (and funny, which is always important) counter to all things AI. And it’s good for us to think about which items we’d counter and which items we’d concede. And if nothing else… who doesn’t love listening to a hater hate lol.

Honestly, reading it back… it feels a bit milquetoast.

And that’s ok. I’m at work, I don’t need to be the brash version of myself all the time. Sometimes I can be measured.

More importantly, I think it hits the values and morals that I want to live by - if we’re going to be AI evangelists, and we’re going to be trumpeting this new technology, we shouldn’t be blind to the other side. We shouldn’t ignore dissent. We shouldn’t give up and pretend like there are no negative implications to the things we’re doing. We shouldn’t be mindless. We should be intentional. We should be learners. We should recognize the potential and ALSO the limitations of AI. We should see the value it can create and the harm it can cause.

To borrow a phrase from Glennon Doyle, we should do the hard things.

I’m ready to reset and do hard things. I hope you are too.