
The New Cybersecurity Topography | Part Two

Peter Soulsby
Director of Cyber Security, Brennan

From “no” to “yes, but”: how security teams are designing fast approvals and durable controls for AI tools.

What you need to know:

In my last blog, I shared some of what I heard during a national cybersecurity listening tour: accountability doesn’t change with AI, the attack surface is expanding quickly, data governance is now central, and vendor risk is evolving. In this blog, I explore the practical patterns customers are adopting to enable AI without losing control. The most effective approaches aren’t complicated: get identity and data fundamentals into shape, create a clear intake and approval path for AI tools, provide safe ways to experiment, and invest in training so people don’t have to guess what “safe” means.

 

In my last blog, I explored several of the key themes our customers were generous enough to share during a cybersecurity listening tour held around the country in March: accountability remains with the organisation, the attack surface is expanding at “machine-speed”, data governance is the real AI question, and vendor risk is getting noisier.

What follows is just as practical: the patterns I heard repeatedly (and the ones I’ve seen work) for enabling AI while staying in control.

1. Get the basics right: identity, endpoints, vulnerabilities

If there’s one phrase that kept coming up (in every city), it was some version of: “we need to get the basics right.”

AI security is often discussed like it’s a new domain. But the reality is that most AI risk is amplified by ordinary and expected weaknesses:

  • Poor identity governance
  • Legacy data platforms
  • Overly broad access
  • Unmanaged service accounts
  • Exposed endpoints
  • Weak vulnerability management
  • Unclear data handling rules

Identity management in particular was repeatedly called out as decisive, including a newer concern: humans creating identities for AI agents, and, perhaps more troubling, AI agents creating identities for other agents.

Three practical takeouts:

  • Tighten MFA and conditional access; enforce least privilege.
  • Actively govern service accounts and privileged access.
  • Reduce exposed endpoints and continuously scan for the obvious gaps.
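As a rough illustration of the second takeout, here is a minimal sketch of what “actively govern service accounts” can look like in practice: flag anything stale or ownerless for review. The account data, field names, and 90-day threshold are all illustrative assumptions; a real version would read from your directory export rather than hard-coded rows.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory rows; in practice this would come from a
# directory export (e.g. Entra ID or AD), not hard-coded data.
ACCOUNTS = [
    {"name": "svc-backup",  "last_sign_in": "2025-11-02", "owner": "infra-team"},
    {"name": "svc-legacy",  "last_sign_in": "2024-01-15", "owner": None},
    {"name": "svc-reports", "last_sign_in": "2025-10-28", "owner": "finance"},
]

STALE_AFTER = timedelta(days=90)  # illustrative threshold

def flag_accounts(accounts, now=None):
    """Return (name, reasons) for accounts that are stale or ownerless."""
    now = now or datetime.now(timezone.utc)
    flagged = []
    for acct in accounts:
        last = datetime.fromisoformat(acct["last_sign_in"]).replace(tzinfo=timezone.utc)
        reasons = []
        if now - last > STALE_AFTER:
            reasons.append("stale sign-in")
        if acct["owner"] is None:
            reasons.append("no accountable owner")
        if reasons:
            flagged.append((acct["name"], reasons))
    return flagged

for name, reasons in flag_accounts(ACCOUNTS):
    print(f"{name}: {', '.join(reasons)}")
```

The point isn’t the script itself; it’s that every service account has an accountable owner and a recent, explainable sign-in, and anything that doesn’t gets surfaced automatically.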

 

2. Build an AI tool intake path that scales

One of the healthiest shifts I’m seeing is security teams moving away from being default blockers. Not because risk went away, but because the risk of saying “no” to everything is often worse for the business.

If cybersecurity can’t or doesn’t provide a path, users will create one. That’s how shadow IT returns. One strong approach that caught my ear in a session was an AI tool intake and approval process with:

  • Standard questions and checklists
  • Fast-track for low-risk use
  • Deeper review for broad integrations or sensitive data access
  • Clear “never acceptable” rules (especially around uncontrolled access to repositories)

Three practical takeouts:

  • Create a lightweight intake form and publish it widely.
  • Define approval tiers (low/medium/high risk) with predictable timelines.
  • Require extra scrutiny when tools ask for SharePoint/Teams/repository-wide permissions.
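To make the tiers concrete, here is a minimal triage sketch. The scope names, tier rules, and SLA figures are illustrative assumptions, not a standard; the useful property is that every request lands in a tier with a predictable timeline, and the “never acceptable” rules are enforced first.

```python
# Scopes an organisation might declare never acceptable, per the list above
# (uncontrolled repository-wide access being the canonical example).
NEVER_ACCEPTABLE = {"repo_wide_access"}

def triage(request):
    """Map an AI-tool intake request to an approval tier with an SLA."""
    scopes = set(request.get("requested_scopes", []))
    if scopes & NEVER_ACCEPTABLE:
        return {"tier": "rejected", "sla_days": 0}
    if request.get("handles_sensitive_data") or "tenant_wide" in scopes:
        return {"tier": "high", "sla_days": 15}   # deeper review
    if scopes:
        return {"tier": "medium", "sla_days": 5}  # standard checklist
    return {"tier": "low", "sla_days": 2}         # fast-track

print(triage({"requested_scopes": []}))  # fast-track path
print(triage({"requested_scopes": ["sharepoint_site"],
              "handles_sensitive_data": True}))  # deeper review
```

Even a table in a wiki achieves the same thing; what matters is that the triage logic is written down once, applied consistently, and published alongside the intake form.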

 

3. Safe experimentation: sandboxes, controlled pilots, and “burners”

Another practical theme was safe experimentation. People are exploring tools quickly, and that isn’t going to slow down anytime soon. The question is whether experimentation happens inside guardrails or outside them.

“Several customers described using isolated environments (‘burner’ devices or even home labs) to test tools outside of the corporate network, and without corporate data. Controversial? Maybe. Pragmatic? Absolutely.”

Peter Soulsby, Director of Cyber Security, Brennan

Others spoke about controlled pilots: limited datasets, clear objectives, and a defined boundary before anything touches production.

Three practical takeouts:

  • Provide a sanctioned sandbox environment with non-sensitive data.
  • Define pilot rules: what can be tested, what can’t, and how results are measured.
  • Don’t let “we’re just testing” become an excuse for uncontrolled integrations.

 

4. Train humans, not just policies

A point made bluntly in my notes: policies only go so far, even when they exist and even when they’re read.

People will push boundaries. That’s just human nature. The goal isn’t perfect compliance; it’s building enough understanding, incentives, and tooling that safe behaviour is the path of least resistance.

Some customers discussed “licensing” or gating approaches: short training sessions, simple assessment, then broader access. Others emphasised guidance on safe prompting and data handling, plus how to verify outputs.

Three practical takeouts:

  • Run short, role-specific training: prompting hygiene, data handling, output verification.
  • Offer sanctioned tools so people don’t go hunting for alternatives.
  • Reinforce continuously. Change management is not a one-off event.

 

5. Don’t bind yourself to one platform

Finally, the pace of change we’re experiencing is extraordinary. We’ve already watched the conversation shift rapidly from one market leader to another, and the alternatives keep improving.

“A future-proof approach isn’t betting everything on a single model or tool. It’s designing an operating model where you can rotate providers while keeping your controls consistent.”

Peter Soulsby, Director of Cyber Security, Brennan

For example, creating an “AI chokepoint” where prompts are funnelled through a governed interface with a revolving roster of the best LLMs sitting on the other side.
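The chokepoint idea can be sketched in a few lines: one governed interface that applies the same policy and logging no matter which provider sits behind it. The provider names and the redaction rule below are stand-ins; real deployments would wrap actual model APIs behind the same interface.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Illustrative policy: redact email addresses before any prompt leaves.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

class Gateway:
    def __init__(self, providers):
        self.providers = providers           # name -> callable(prompt) -> str
        self.active = next(iter(providers))  # first registered provider

    def rotate(self, name):
        """Swap the backing model without touching policy or logging."""
        self.active = name

    def ask(self, prompt):
        cleaned = EMAIL.sub("[REDACTED]", prompt)  # one policy, every provider
        log.info("provider=%s prompt=%r", self.active, cleaned)
        return self.providers[self.active](cleaned)

# Stand-in providers; in practice these would wrap real model APIs.
gw = Gateway({
    "model_a": lambda p: f"A says: {p}",
    "model_b": lambda p: f"B says: {p}",
})
print(gw.ask("Summarise the report from jane@example.com"))
gw.rotate("model_b")  # rotate providers; controls stay identical
print(gw.ask("Same question"))
```

Because redaction and logging live in the gateway rather than in each integration, swapping the roster of models behind it is a one-line change, which is exactly the portability the takeouts below describe.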

Three practical takeouts:

  • Design for portability: consistent policy, consistent logging, consistent controls.
  • Avoid hard-wiring sensitive workflows to one tool without an exit plan.
  • Keep reviewing agreements and technical configurations as vendors embed AI “by default”.

 

Closing thought

If these roundtables reinforced anything for me – and they reinforced a lot – it’s that the next few years will feel uncomfortable, and that’s okay. The problems and opportunities AI introduces are surprisingly similar across organisations.

And there’s no substitute for hearing it directly from peers. Across Melbourne, Sydney, Adelaide, Brisbane, and Perth, the generosity of insight in the rooms was outstanding. And to my mind, it’s how we’ll all navigate this shift with a bit more confidence (and far fewer surprises).
