by Dirk Depré - August 4, 2025
While the debate continues over whether Agile is dead or alive and kicking, Artificial Intelligence has already become the new silver bullet in IT. Organisations are looking for speed, delivery and efficiency. Markets are already being bombarded with what AI has created, and soon even more so. For that reason alone, the empirical mindset has never been more alive, and Agile is just its name tag.
Just the other day, I read a post on LinkedIn in which the author celebrated that their AI was able to write 250,000 lines of production-level code in one week:
It means that the work of months can be done in a day, the work of years in months. Speed is at our fingertips. The responsibility to build the right thing becomes even more important, and that responsibility now comes with great power to execute. As the author closes his post, it's about designing intent. The question is no longer speed, since speed is possible. The question isn't delivery either, since that can also be automated. Then what is the question? Do we need it, and do we want it? How can we get it? To get results, you need a thorough understanding of what you aim to achieve and of how you can achieve it. So it is professionals with AI as a companion.
Artificial Intelligence causes us to think about ethical issues.
Consumerism is at the core of our worldwide economy: if products are made by computers and robots, will we keep consuming them? Are we willing to pay for something that is no longer touched by a person? We consume because we don't know: only recently did it become known that the band The Velvet Sundown, which achieved over a million streams on Spotify, was entirely AI-generated. When people stream AI-made music, they don't stream human-made music. It means that artists earn less because somebody was smart enough to create an illusion. Are you still willing to pay for that kind of music? Think about it for a minute, because it goes further. If organisations keep moving towards running their business on robots and artificial intelligence, how do people get paid? And if they don't get paid, how will they consume? Are you willing to pay for things that AI and robots have created? And do we then need a new model?
Ecological challenges with AI: Artificial Intelligence needs a lot of data and computing power to do the job, and every AI call comes at an environmental cost. Organisations want to speed up because they don't want to die; they want to continue to exist. When everybody is investing in AI, you can't stay behind. But it comes with a huge side effect: CO2 emissions. When you are polluting, you need to be sanctioned. As stated by the World Economic Forum on June 1, 2025: "The energy demands of AI, powered by data centres, significantly outpace current efficiency gains. A single ChatGPT inquiry consumes about five times more electricity than that of a web search, while it is also estimated that training a single language model such as GPT-3 uses electricity equivalent to the power consumed annually by 130 US homes."
In the hands of the masses: Artificial intelligence gives us the opportunity to simplify the most common things. When you want to write a text on a birthday card, you can ask your AI companion to come up with something funny or sweet. It feels like we are slowly killing our own thinking. There is value in taking the time to write something heartfelt down on a piece of paper. The same goes for looking for a recipe: there is value in browsing through a cookbook and taking the time to explore options, rather than going for the straightforward answer suggested by an AI companion. It's not because something is easy that it is of real value.
And while there are ethical questions, there is also opportunity: AI could help us solve ecological problems, bring welfare to areas of the world where there is poverty, help us explore space, and maybe help us stay healthy for longer periods of our lives. The opportunities lie within science. And one of the best ways to make progress in science is through an empirical method: observation, hypothesis formation, experimentation, analysis and conclusion. This brings me back to Agile in the workplace: Agile isn't dead and buried. That spirit will never die, because we are still looking for ways to deliver value through small iterations and build things incrementally. Looking at what customers want, you start with intent and try to bring it to life. I can only hope we balance People with AI and not the other way around. In that sense, Agile Leadership has never been more important than it is today. While big tech companies lay off workers in favour of their AI execution today, other companies will soon follow. And that's okay to a certain extent, as long as the intent behind what AI produces is led by humans. Think of AI not as a “machine with feelings,” but as a system designed to interpret patterns, hold mirrors up to human reasoning, and help challenge assumptions.
Technology does not have intrinsic morality.
AI and data can be used to optimise performance, strip out inefficiencies and make decisions faster than humans ever could. The risk is that those decisions are made for us, but without context, compassion or consequences in mind. Using technology that way would move us backwards on the human evolutionary timeline, back to control and strict hierarchy. And maybe the biggest risk is that a few then decide for the many, supported by polished dashboards and some out-of-context yet convincing metrics. It's not AI that lacks empathy; it's leaders who outsource their conscience to a dashboard. When AI leads us, the risk is that it amplifies and extends what it was taught. In that sense, what it amplifies depends entirely on us. When humans disengage from their own moral compass, tools like AI will not save us. On the contrary, they will simply accelerate the damage.
- Mo Gawdat in "Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World" -
Back to leadership. The moral decay of leadership happens when leaders justify themselves or hide behind data and AI tools. “The data told us this or that”, or “The model recommends we do this”, is basically a leader saying: I don’t take my human accountability for this choice. So in that sense, it is not AI taking empathy out of leadership; it’s people doing that when they use the tools as shields rather than as instruments. Thinking about how AI amplifies, it has simply made it easier to pretend that empathy is inefficient. So the real debate is whether we choose what is right on top of what works. It would be crazy to argue against AI or data, because as instruments they create value; yet we decide how we use them. If leaders use AI not to dominate but to inform more compassionate decisions, if they use data not to remove responsibility but to deepen understanding, then it would help us swing forward. And once we've swung forward, we will double back, reassess and move again. Not in a linear way, but in a human way, learning from the experiments we do.
We are learning as we go. And the danger is that we steer with AI, not through AI. That is an important nuance. Steering with AI suggests delegation, but steering through AI suggests navigation, with humans still at the helm. In that sense, our intent as humans becomes more important when we design the next possible solution. Is it AI or the human that holds the pen when intent is designed? And at the same time: what values guide the hand that holds the pen? Under this complexity, the leader must still be held accountable. We still look for leaders who act with courage, and above all with moral courage.
With AI entering our lives, it may help us develop and evolve further. AI is a companion, a pattern-detection tool. But it is still a tool. A tool doesn't know right from wrong. A tool can help you become more efficient. But in the hands of evil, evil becomes worse; in the hands of goodness, goodness becomes superb, outstanding and terrific. I prefer us to fully embrace that empirical mindset while we figure out how we steer through our AI companion. And while we have such a powerful tool as AI at our disposal, we most definitely need leadership to define the goals and give direction while we pursue them more efficiently. It's humans at the centre of the AI revolution. So the question arises, as we move from human to humAIn organisations: “What kind of humans do we want to be, now that we no longer need to act like machines?”
Yes, it's still individuals and interactions over processes and tools.