AI beyond the hype: Governance and cyber security
As Artificial Intelligence (AI) rapidly reshapes the professional landscape, there’s a need to move beyond the hype and take ownership of AI governance and cyber security. We recently brought together members in practice and technology experts to cut through the noise, offering practical insights on how firms can harness the benefits of AI while meeting rising regulatory and client expectations.
From the outset, audience polling made one point clear: Scottish accountancy is embracing AI, but there’s still work to do on policy, governance and getting everyone on the same page.
A large majority (86%) said that Copilot is in active use across their firms, reflecting the rapid integration of AI tools into modern workflows. Many firms use a blend of AI technologies, with ChatGPT in use at over half of firms and Google Gemini at one in five.
Sentiment towards adoption was broadly positive, though nearly half were still in the exploratory middle - keen to realise efficiencies but cautious of risks and mindful of policy gaps.
Just over half (57%) reported their firms had a clear AI policy, signalling growing governance maturity across firms, but at the same time leaving many exposed to unnecessary risk.
Governance is a board level duty
Liz Smith, Business Development Director at Lugo, ICAS’ technology partner, set out clearly that AI and cyber risk are governance issues, not just IT tasks. Boards or other management equivalents should own cyber risk, keep it on the leadership agenda, and recognise that people are both the strongest defence and, at times, the weakest link. Most breaches still start with human error, whether clicking a phishing email or delaying critical updates.
The National Cyber Security Centre recommends tracked awareness training for all staff, with regular refreshers every six to twelve months, supplemented by phishing simulations and topical updates. Encouragingly, around half the room indicated they have run simulations - an essential step in building a resilient culture and measuring real world behaviour under pressure.
The takeaway: Cyber security must be treated as continuous learning within organisations, not a one-off exercise.
Three legislative developments to look out for
Three developments are framing the upcoming regulatory landscape:
- AI Regulation Bill – reintroduced as a Private Member’s Bill, it proposes an AI authority and a risk based oversight regime. If enacted, boards may need to appoint an AI responsible person, enable independent audits, and be transparent about model training data. It’s still a Bill (not law), but the direction of travel is clear: Demonstrable governance for AI use will be expected.
- Cyber Security and Resilience Bill – announced in the King’s Speech and formally introduced to Parliament in November, among its key provisions it intends to widen the scope of regulation to include managed service providers and to impose strict reporting timelines for cyber security incidents on certain entities: Notify within 24 hours, deliver a full report within 72 hours. Penalties could reach up to 4% of global turnover for major failures. Again, it’s still a Bill, but it signals tougher expectations on supply chain resilience and rapid reporting readiness.
- Data (Use and Access) Act 2025 – already law, with phased implementation, this legislation strengthens rules around automated decision making and data sharing, and grants the ICO enhanced powers. Firms should review how automated decisions are made, tighten complaints processes and ensure data governance is watertight.
The takeaway: As reliance on AI grows and cyber threats evolve, governance frameworks and regulation will expand to fill the gaps that exist. They aren’t optional - they are a necessity.
Enterprise Copilot vs public AI tools
A recurring risk discussed was the use of public AI tools with client data. Prompts and outputs may be logged or reused by providers to train models, creating confidentiality and compliance issues for firms. Mike Markey, a new technology specialist with Ingram Micro, highlighted that Enterprise Copilot stands apart because it can be configured to keep data inside the Microsoft 365 tenant, with Microsoft stating that customer content is not used to train foundation models unless you opt in. Nonetheless, governance still matters: Policy, permissions and oversight must be designed and enforced by firms to ensure data stays safe.
The session examined “shadow AI”— employees using unmanaged tools at work. While such tools can feel convenient, they can introduce security vulnerabilities, compliance gaps (including GDPR issues), data governance incidents, and hidden costs. The practical remedy is management and control: For example, using Intune to separate work and personal applications, enforce encryption and apply role based access. This can also prevent copy-pasting of sensitive data from managed to personal apps. If a device is lost, organisations can remotely wipe managed applications, preserving data safety without intruding on personal content.
Copilot: Where it helps - and the guardrails
In a tour of Microsoft’s Copilot ecosystem, Mike contrasted the free, web grounded Copilot chat experience with the paid Copilot for Business, which grounds queries in work data and embeds AI across Microsoft business applications such as Outlook, Word, Excel, PowerPoint and Teams. Early adopter data suggests meaningful productivity gains: Users report faster research, stronger first drafts, quicker email summarisation, and more insightful analysis of spreadsheets. Crucially, Copilot respects existing permissions; it’s not a master key. If an environment is poorly prepared, Copilot may only surface those gaps more quickly - so readiness and governance configurations are essential.
Probably the most used AI feature in any firm just now is the recording and summarising of meetings. While some clients fear that recordings “go all over the internet”, when configured correctly, meeting data remains within the firm’s Microsoft tenant unless explicitly shared externally. Clear up-front communication, ideally codified in firm AI policy and reinforced before engagements, helps clients understand your approach, the security model, and their options. Firms should bring clients along on the AI journey with them by setting expectations early, explaining the security and privacy controls in place, and being explicit about whether clients’ own note-taking agents are permitted in meetings and, if so, what restrictions apply to the note-taking applications they can use.
Insurance, resilience and exercising the plan
While nobody likes to think that they will be the subject of a cyber incident, the reality is it’s more likely to be a ‘when’ than ‘if’ scenario. Planning for this is therefore a must.
Cyber insurance is now typically a separate policy, and underwriters expect evidence of controls and breach response discipline. Acting outside an insurer’s instructions can jeopardise claims. Firms were reminded that on discovering a cyber incident, the call to the cyber insurance provider is as crucial as the call to the firm’s IT support - and should be made just as quickly.
Ransomware remains prevalent: Industry reports point to its presence in a large share of breaches and rising third party involvement. The advice is to practise your response - run “tabletop” exercises with leadership, test your incident plans and measure readiness using metrics like Microsoft Secure Score, patch compliance, and Multi Factor Authentication (MFA) adoption. The aim isn’t perfection; you don’t need to “outrun the bear”, simply be “faster than the person beside you” - better prepared than the next firm, with stronger controls and a reduced attack surface.
Practical steps
In summary, the practical guidance to take away can be distilled into a handful of actions.
- Start with governance: Document how AI is used in the firm, establish roles and approvals, and assess risks for any AI assisted work processes.
- Ensure technical baselines such as MFA, encryption, and role based access are consistently enforced.
- Track security posture through services such as Microsoft Secure Score, keep devices patched and rehearse incident response plans.
- Align people, process and technology by tightening policies around data labelling and retention, and by controlling shadow AI through sanctioned platforms and management tools.
- Keep up to date with emerging legislation, wider sector trends and guidance.
Categories:
- AI & technology
- Practice
- Business