Partner with us for reliable IT support. Contact us now and find out how we can streamline your IT needs!


Across March, I hit the road, hosting a series of cybersecurity roundtables with customers across Australia. The big message in every city? AI is forcing security teams to revisit the old-fashioned fundamentals (identity, data governance, exposure management) and deal with new patterns of risk, like “machine-speed” attacks and “AI-washing”. In the first of a two-part blog series, I cover what’s changing and why, and offer the practical guardrails and steps technology leaders should be thinking about now.
One of the privileges of my role is when I get the chance to connect with and interrogate customers IRL. Across March, I did just that, running a cybersecurity listening tour in Melbourne, Sydney, Adelaide, Brisbane, and Perth, to hear from 50+ senior technology leads in person.
The theme was simple (“The New Cybersecurity Topography”) and I framed it by setting the scene with three real-world challenges, followed by putting an (un)lucky customer on the spot:
How do we retain visibility and control while modernising at pace and preparing for AI-enabled threats?
What stood out most wasn’t any single “prediction”, although there were some fabulous provocations (cyborgs are real people, too!). It was the consistency with which customers from very different organisations described the same pressure points, and how willing they were to share what they’re dealing with.
Call me old-fashioned, but these open, generous, sometimes blunt conversations are where the real progress happens. Here are four themes that came through strongly across the sessions.
A repeated point (and one worth stating plainly) is this:
You don’t get to outsource accountability to an algorithm.
When an organisation’s customers interact with an AI-driven chatbot, or an internal team relies on AI-generated outputs, the organisation remains responsible for what’s delivered. Even (especially) when the output is wrong. And in safety-critical environments (like health, like construction, like transport), “wrong” can mean far more than reputational damage.
It’s becoming increasingly clear that AI doesn’t just help defenders (the shield).
It also lowers the cost of running sophisticated reconnaissance and exploitation (the sword).
One phrase that has stuck with me is the idea of attacks happening at “machine-speed”.
In practice, this looks like automated discovery and exploitation: scanning large numbers of endpoints, finding weak or no-auth services, then extracting data quickly, including prompts, datasets, or anything else the tool has access to.
“It’s not that attackers have suddenly become smarter. It’s that automation has lowered the cost of being persistent.”
Peter Soulsby, Director of Cybersecurity, Brennan
Across the sessions, the most common “AI question” wasn’t “Which tool should we buy?”
It was: What happens to our data?
See also: Where is it stored? Is it used to train models? What telemetry is retained? What do the enterprise agreements actually stipulate? What happens when tools request broad access to SharePoint, Teams, and content repositories?
If you can’t answer those questions cleanly, you’re effectively taking risk on trust.
And trust is not a control. (Although it’s easy to lose control once the trust is gone.)
Every new gold field attracts a flood of prospectors. And right now, there’s a deluge of vendors and start-ups offering AI capabilities, with many asking (or hoping) to ingest organisational data quickly.
“I heard concerns about weak security maturity, unclear contracts, and even a scenario where ‘AI’ actually turned out to be organic humans (yes, people) manually processing data offshore.”
Peter Soulsby, Director of Cybersecurity, Brennan
Add in the speed and ease of credit-card procurement and shadow IT, and it’s easy to see how fast risk accumulates with data flowing to places you never approved.
There’s plenty of gold for consideration here, some of it new to me. And it’s just some of what customers were generous enough to share across the sessions.
So, in part two, I’ll focus on what many told me they are already building: guardrails that enable the business without pretending risk doesn’t exist, including intake paths, safe sandboxes, and the “getting the basics right” foundations that make everything else possible.


