
KT is KIA Part 2!


On Tuesday, I warned that I wasn’t in the mood to be writing and… we’re back with the same bat attitude. (That’s not a typo, it’s a reference to the old Batman show, for you youngins who didn’t know you could tune in for the next episode at the same bat time, on the same bat channel.)

But the best way to get through not wanting to write is to write, which seems like a trick and a thing that shouldn’t work, but it does - so here I am!

Let’s talk more about augmented mentorships, what the real-world evidence says about what’s actually happening, and what is (and isn’t) effective!


BUT BEFORE WE DO THAT I’M GOING TO LAUNDER IN A {POLITICAL} MESSAGE.

If you’re in California… vote yes on prop 50. I know it feels gross and dirty. I get it. I like the idea of an independent commission too! I think it should be that way everywhere! And we shouldn’t be trying to pick our voters but instead pick our representatives! But we don’t live in a utopia, and we can’t pretend like playing by the rules is the only way to play when the only other team looks at the rules and laughs.

Some important points - there’s a time limit on it. It’s only through 2030. I know Arnie is out here saying if something is in the government it’s forever, but that’s really not true - look at the child tax credit! Universally a great tool, used during the pandemic to help families with kiddos, could have (and should have) gone on forever… but it didn’t.

Another important point! It’s a mutually assured destruction measure - it doesn’t go into effect if Texas doesn’t redistrict either. So really we’re saying “we don’t want to play this game, but if you’re forcing us, we aren’t going to back down from a fight”. Mutually assured destruction fucking sucks, but when your opponent is hell bent on your destruction regardless, you can’t just sit down and let it happen, you have to do SOMETHING.

Yet ANOTHER important point… You can’t promote your values in government if you keep losing elections. Look at the Democrats in Congress right now. The loss of elections made them weak. The only leverage they had left was refusing to reopen a government that would keep doing the will of people who hate half the country, and even that they don’t want the credit for! They want to pretend it’s just the Republicans’ fault! We need to win elections, and we need to elect people with values instead of aspirations.

So with all that, vote yes on 50. Please. Do it for the children. Or the dogs, or cats, or adults or whatever you love and care about.

/rant


Alright, after that hard left turn, let’s get back to the facts around AI-powered programming mentorship. People toss around numbers all the time about the kind of efficiency and benefits we’re getting from AI development. I have no reason to believe they’re not telling the truth, but I also feel like it’s important to dive into those numbers and see what they’re really telling us.

Let’s start with some of the things that you’ll hear more often, like that a meta-analysis of 35 studies shows AI tools significantly reduced task completion time (69% improvement) and improved performance scores (86% improvement) in programming education. It’s well sourced, done through universities, has lots of very professional sounding people involved, so let’s assume for a second that they’re correct in their assessments (check it out yourself here if you want! ref 1: https://www.mdpi.com/2073-431X/14/5/185).

We’ve also heard things like student AI familiarity jumped from 28% to 100% over 12 weeks, with satisfaction improving over time, which again is coming from a reputable source that we can assume is correct based on the level of due diligence that they’ve done. (ref 2: https://www.mdpi.com/2227-7102/14/10/1089 for reference!)

But, one of the problems with measurements and statistics is that you can use them to tell whatever story you want. These numbers tell a story of increased performance and familiarity creating a more effective use of developers time. But just because that is what this data is telling us, doesn’t mean it’s the lived reality.

Looking deeper at both of those analyses, we also see caveats, like studies showing “no statistically significant advantage in learning success or ease of understanding”¹ and tools that “pose potential drawbacks, such as fostering over-reliance, diminishing problem-solving skills, and promoting superficial understanding”¹. In addition, students used AI primarily for “creating comments (91.7%), identifying bugs (80.2%), and seeking information (68.5%)”².

So the question should be less “is this effective” or “are people using it” and more focused on what tasks and skills it’s effective AT and what it’s doing to the workflow. The idea that students used AI for creating comments seems WILD to me. Why would you do that?! Here’s how hard it is to make a comment, and I struggle to think that anyone reading this couldn’t do it… here we go…

//this is a comment

Woah. Mind blown right? And identifying bugs… cool, that’s what they used their time for, but… was it right? Did it work? lol. Who knows! It’s not relevant because it’s not what they were measuring. And then we get back to learning… these scholars call out the risk of this new world order not providing any ease in learning or understanding… but didn’t consider that it could actually be a DETRACTOR to learning and understanding.
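To put a finer point on how low-stakes that 91.7% “creating comments” use case is, here’s a hypothetical sketch (the functions and comments are mine, not from either study) of the difference between the comments an AI assistant tends to generate and the ones a human mentor teaches you to write:

```javascript
// Hypothetical example - not taken from the cited studies.

// AI-style comment: restates what the code already says.
// Loop over the array and add each element to the total.
function sumAiStyle(numbers) {
  let total = 0;
  for (const n of numbers) total += n;
  return total;
}

// Mentor-style comment: captures the *why* that the code can't express.
// Inputs arrive pre-validated as integers from upstream, so there's
// no need for NaN or type checks here.
function sumMentorStyle(numbers) {
  let total = 0;
  for (const n of numbers) total += n;
  return total;
}
```

The two functions are identical; the only thing that 91.7% use case is producing is the first kind of comment, which any developer can type faster than they can prompt for it.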

I came across a great analysis on LinkedIn where a developer put his coding in the hands of Claude Code, and while it worked and went quickly, the second he went to massage the code or make changes, he realized he had no idea which code was doing what. Because he didn’t write it. He effectively couldn’t debug or alter the code because he couldn’t understand what it was doing. That’s insane.
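For a flavor of what that developer likely ran into, here’s a made-up sketch (the names and code are mine, not from the LinkedIn post): two ways to dedupe a list, one the kind of dense one-liner an assistant might emit, and one the version you’d actually understand well enough to modify six months later.

```javascript
// Hypothetical illustration of the "I didn't write this, so I can't
// change it" problem. Both dedupe a list by value; only one is easy
// to reason about when a change request lands on your desk.

// The dense one-liner an assistant might hand you:
const dedupe = (a) =>
  [...new Map(a.map((x) => [JSON.stringify(x), x])).values()];

// The version you'd write (and understand) yourself:
function dedupeByValue(items) {
  const seen = new Set();
  const result = [];
  for (const item of items) {
    const key = JSON.stringify(item); // value-based identity, not reference
    if (!seen.has(key)) {
      seen.add(key);
      result.push(item);
    }
  }
  return result;
}
```

Both produce the same output; the difference only shows up later, when someone has to change the behavior and discovers whether they ever understood it in the first place.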

But it’s the true measure that we’re missing. We’re measuring for efficiency, but that’s a short term measurement. Sure I might be super efficient getting this to market, but what about maintaining it? What about making changes to it? And I get it, some of that already exists when senior developers who have owned the code forever leave, and someone new has to take it on, but what happens when that AI is now in charge instead of the senior developer? It doesn’t understand all the context, it can’t tell you the specifics of the pain it’s experienced, because it can’t feel pain. It’s a piece of code.

And then add on to this that these studies completely leave out half of the equation - the mentors themselves. What’s the net effect on them? Are they learning and growing still? Are they becoming more effective stewards of their teams and organizations? What’s the impact to their time and attention?

We don’t know, we only have anecdotes, because again, we’re measuring the wrong things.


And then, there’s the paradox of personalization - the idea that AI mentors can more effectively teach you because they can understand your personal, cultural, and social tenor and make more effective teaching tools for you. And I get it, that’s the promise of a lot of AI tools, it’ll get to know you and what resonates with you so that you can be more effective as a learner.

It sounds great in a bubble — especially in a white dude bubble, where my perspective is the prevailing one in society and I never have to feel any of the pain of having my specific experiences be “othered”. And it also sounds kinda great if you ARE othered (from the white guy perspective) because maybe it’ll be more attuned to how you learn! Or give you examples more apt for your cultural upbringing!

You can read all about tools promoting this exact world view, for example: https://chronus.com/blog/mentoring-in-the-ai-world

But, just like with Prop 50, we’re not living in a utopia. Using culture as a tool for education isn’t a new idea. But it is one that’s laced with subconscious bias and bigotry.

AI models tend to “inadvertently perpetuate societal biases present in their training data, leading to discriminatory mentor-mentee matching algorithms”. So instead of being brought to a mentor that would be effective for you, it’ll just find someone that looks like you or has a similar cultural background! Further isolating groups and limiting the ability of people to see those outside of their culture as knowledgeable, and leading to more racism in society! Sounds great right?!

It’s a further example of taking something fundamentally human - finding someone who you trust and value to train you - and turning it into a mechanical exercise focused on outputs instead of growth. You’re missing opportunities for human connection that can create new connections in your brain that didn’t exist before - matching two seemingly unrelated pieces together to form something new. AI isn’t going to do that for you, but your mentor is!


All this is just to reiterate that having numbers to measure something doesn’t mean the measurement tells us anything valuable. And I’m not saying these measurements AREN’T valuable just because they conflict with my hypothesis. Actually the opposite: they ARE valuable and do tell us something - I just think we’re missing what that something is, which is that we’re training people away from the basics so they can spend more time on work that requires those very basics as a foundation.

Which leads us to the real question: what is any of this data and analysis actually telling us?

The real truth is that it’s validating my priors, which makes me nervous, lol. It’s telling me that while we are finding efficiencies in AI aides when it comes to working, relying on AI to be a mentor or teacher isn’t providing any of the intangible benefits of mentorship that we’re losing in the trade.

Using it for mundane tasks, for things you already know how to do and just need to have done in less time, or for enhancing existing skills… it’s got chops. It can get the job done.

But once you start outsourcing learning ESPECIALLY the basics, it turns into a magic box. It’s doing things without you understanding them. And as long as it works, we figure it’s fine. But in the long run, it’s going to create more work than it relieves, and it’ll lead to such a brain drain that the only way out will be to completely abandon the tools at all costs. I think that’s a significant problem that we need to address and ensure that as we’re adopting tools we’re adopting them in the right way.

To go back to our pulley analogy from the previous article, if used correctly, it can be a huge aid to solve problems that were really hard to solve without the tool! But used incorrectly, it can wield incredible power that can be very harmful.

Without taking the time to learn and understand the limitations, we’re just going to abdicate ourselves over to machines. And guess who owns those machines… the oligarchs who only see you as a big pile of money to extract! I’m not really in for that… are you?