
Start Small, Start Safe: Why Empirical Ways of Working Matter Even More in the Age of AI
For more than a decade, much of my work has centred on a deceptively simple idea: make the work visible. Transparency, inspection, and adaptation—core tenets of empirical process design—are not abstract principles; they are the way complex delivery systems learn, evolve, and perform.
When we introduce a new way of working, whether it's agile practices or a shift in team roles, we start with transparency. We surface what’s really happening—the actual as-is system, the lived experience of teams, the data (both quantitative and qualitative), and the behaviours and mindsets that shape outcomes. Only then do we decide what to change, when to change it, and who must adopt the change for it to stick.
But when the change on the table is AI, the conversation shifts. Not because the principles are different (they're not), but because the scale, uncertainty, and emotional charge of AI adoption operate at a different order of magnitude.
This is where some organisations are making a critical mistake.
They assume AI adoption is a technology decision. It isn’t. It’s a way-of-working change, with all the complexity, human nuance, and emergent behaviour that implies—just amplified.
Why Transparency Still Comes First
Even with AI, the starting point doesn't change: we need to understand the current way of working before we speculate about the future.
Where is value created today? Where is time being lost? Where are decisions being delayed? Where do people feel stretched, unsafe, overwhelmed, or unclear?
Until this is visible, every AI conversation is either abstract or vendor-led—and both create unnecessary risk.
The anxiety sweeping through organisations right now isn’t solely about job loss. It’s also about:
Not knowing how AI will change decision rights
Not knowing how it will reshape accountability
Not knowing how it will affect security, reputation, and compliance
Not knowing how to adopt AI safely without breaking what already works
This is where empirical design becomes essential again. You cannot reduce fear until you increase visibility.
Inspection: AI as a Change Hypothesis, Not a Foregone Conclusion
A misconception I’m seeing everywhere: “We need to find places to use AI.”
No. The right question is: “Where might AI meaningfully improve the way we work—if at all—and what would it take to do this safely?”
The moment we treat AI as a hypothesis rather than an inevitability, we regain clarity. We inspect how work currently flows. We examine the constraints. We talk with the people doing the work, not just with leaders imagining the future state.
This is the work Source Agility has always done: making the system inspectable so we can make decisions grounded in reality, not hype.
Adaptation: The Step That Is Scaring Everyone
This is the real tension point.
In any complex system, introducing a change triggers unpredictable adaptation. We know this from years of agile transformation: teams don’t just “adopt” new practices—they reshape them, resist them, evolve them, reinterpret them.
But with AI, the adaptation phase feels more threatening:
People fear being replaced
Leaders fear loss of control
Legal teams fear accidental exposure
Security teams fear new vulnerabilities
Executives fear reputational fallout
And if we’re honest, many organisations fear adopting AI incorrectly far more than they fear not adopting it at all.
This is why the "scale quickly" narrative is dangerous. In the face of a technology that is not yet stable, not yet fully understood, and not yet safely governed in most organisations, adapting too fast increases risk—not value.
The solution isn’t to slow down innovation. It’s to treat AI adoption as a sequence of small, safe-to-learn experiments, just as we would with any other change to the way we work.
The Principle That Hasn’t Changed: Start Small, Start Safe, Learn Fast
Here’s the irony: AI may be new, but the responsible adoption of change isn’t.
The same principles apply:
Make the work visible
Understand how people actually operate today
Frame AI as a hypothesis, not a mandate
Run small, safe experiments
Observe how the system adapts
Scale only when the benefits and risks are clear
This approach doesn’t just reduce fear. It builds confidence—because people can see what’s happening, contribute to shaping it, and trust that decisions are grounded in reality rather than hype.
This is the foundation of our new service, AI Adoption & Scaling: a people-centred, transparent, experiment-driven approach to introducing AI safely and intelligently into the flow of work.
If You're Navigating These Questions, Join Us Next Week
We’re running a webinar next week called AI Reality Check: What Leaders Can Expect in 2026.
If you want a grounded, human-centred perspective—not vendor-driven noise—this session will help you make sense of what’s coming and how to adopt AI responsibly.
Link below. Hope you can make it.


