January is over. The longest month! I managed to get a break in Chamonix, nominally to do some skiing, but for the most part it was marvelling at the beauty of it all. The world around us. So, I'm in the Alps, taking a break from the uncertainty of the present and facing my own mortality on the gentlest slopes near Mont Blanc. It's wonderful being up there with friends, I feel very lucky, and I also don't feel any real need to push myself like I would have done in the old days. I went out for a couple of hours of "getting back into the groove" and completely failed to find any groove, let alone a consistent way of turning, but it didn't matter. I took my time, I enjoyed the moments when it did click, and I wasn't hard on myself when it didn't work.

And what did I do to ensure that I didn't overdo it during the week? Of course, I consulted ChatGPT and got its advice and feedback. And in the week or two before the trip, I got ChatGPT to advise me on what exercises to do to ensure that I had enough strength and flexibility to enjoy the skiing that I did. And you know what? It made a difference. I felt like I didn't have to push myself and I felt like I didn't have to prove anything. Do you know what else it made me feel? Like there was nothing really that special or exciting about what I was doing. Now, I'm not sure I could blame an LLM for that feeling alone. I suspect the overstimulation of a ski resort in the middle of winter probably also had something to do with it - I mean, I came back and reasoned that I would probably prefer trying something a little more outdoorsy, i.e. cross-country or telemark skiing, next time. But don't you think that perhaps we're just outsourcing our fears and uncertainties a little bit if we cautiously cross-check all of our experiences against some statistical model of the world? For that is what an LLM is, right? A statistical prediction of the mean of all experiences. When we check against it, we aim for the middle of the road. We aim for safety.

Last week I went to Config Management Camp in Ghent, Belgium. A small, lively gathering of world experts talking about automation technologies for software infrastructure. I'd pitched a talk I've given before called "How We Treat Each Other at Work", and afterwards I had a few nice conversations with people about what the fuck we are doing to this planet through these Insane Prediction Machines. IPM. It turns out a lot of people feel as confused as I do about using LLMs in their work and in their leisure. We marvel at their abilities and yet worry about the impact on jobs and the planet. I feel like this is not talked about enough in terms of what we do, especially in the software engineering world. As an engineer I feel a moral obligation to improve the lives of people through my work. However, the use of GenAI, and in particular AgenticAI (where fleets of GenAI are marshalled by yet more GenAI), has me worried.

This is a bumper edition of my newsletter with lots of thought pieces linked below. I'm increasingly angry about the indifference that engineers show towards AgenticAI and its impact on the planet. So many LinkedIn posts bragging about what they've built and how quickly. So many entrepreneurs thinking that lines of code are equivalent to meaningful progress. Even today, the internet hyperscalers (AWS, Meta, Google) are announcing more data centres in the Netherlands. Do we just accept all of this and carry on as normal, or do we take a stand? And what does it mean to build something for the benefit of humanity?
Do these vast computing factories help us or doom us? Ok, so perhaps my vacation didn't calm me down as much as I hoped :) I guess my soul is perturbed right now as I read Brian Merchant's excellent "Blood in the Machine" (both the book and his Substack). We are imperilled and yet cheering on from the sidelines, not just in terms of LLMs and AgenticAI, but in terms of megacorporations deciding our futures. However, rather than overtly politicising or radicalising the conversation, I feel that, at the very least, our engineering conferences and our engineering conversations should have more soul-searching, more philosophical standpoints on progress vs efficiency, on humans vs automation. While many of us have come to engineering to further the human race, we should really ask ourselves if our daily actions still fulfil that mission.

Have a great Sunday!

The Moral and Planetary Cost of the Use of GenAI
Published on February 7, 2026
This week, I was at a conference in Ghent, Belgium, where I had the pleasure of speaking to an audience of software engineers about some of the themes I’ve explored in the book Human Software. The title of my Ignite talk was “How We Treat Each Other At Work”, and while I wasn’t directly talking about AI, I felt I had to react to some presentations I’d heard on previous days.

Where Office Space meets Local Hero
Published on January 19, 2026
As I sit in my bed, feeling a little under the weather and sorry for myself, I think about what I can watch to cheer me up. Usually, that means something light-hearted. I’ve tried a few movies and series recently, a few that are worthy and weighty. “The Brutalist”. “The Outrun”. I enjoyed “The Beast…

James Cameron’s Terrible Art is a Warning To Us All
Published on January 22, 2026
I got into an argument with AI the other day because I didn’t like a piece of art. Specifically, I didn’t like the art in the film Titanic. The drawings that the character “Jack” makes in Paris, and of course, the heroine “Rose”. While even the names “Jack” and “Rose” seem to suggest some sort…
Legacy Systems and the Cost of Hidden Technical Debt
Published on January 14, 2026
One of the most important recurring technical themes I explore in Human Software is how chronic underinvestment in core legacy systems, in favour of either aggressive expansion or assumed obsolescence, leaves engineers having a terrible time. Systems that are described as “creaking” after 30-plus years represent a significant risk for managers who must balance business…
Exploring the human factors that make software engineering so unique, so difficult, so important and so all-consuming. Learning to work with the systems, not against them.
A few weeks before Christmas I asked ChatGPT a series of questions along the lines of "ok, so what next?" I was out of ideas. I was tired. My freelance contract was coming to an end, so I was already looking for a new one. Launching Human Software had been exhilarating but exhausting. I'd burned the candle at both ends on social media, recorded some podcasts (a few of which are yet to see the light of day), put myself in front of bookshops, chased reviews, and talked talked talked...
REBRAND ALERT!! So it's been a while since I renamed this newsletter, but I feel it's due a slight sidestep following the launch of my book. So welcome to episode 286 overall, but episode #1 of The Human Engineer. Despite my constantly renaming this newsletter, over the years the subject has never really varied too much. I talk about software systems and how they relate to human systems. I find my work increasingly focusses on the human side of this divide - because it is a divide, right?...
For the last couple of weeks, I've been rebuilding some Windows base images in order to comply with corporate patching policies. The new images are CIS hardened, which means they follow guidelines set out by the Center for Internet Security benchmarks. This ultimately means that the images are restricted in what they can do, what they can access, and what is installed on them by default. These security measures work in opposition to the automation we already have in place for our customers. This is the...