Copilot, ChatGPT & Client Confidentiality: An AI Governance Starter Pack for Professional Services
A straightforward way to use modern AI tools while protecting sensitive client work.

A lot of firms want to use AI tools this year. Copilot in Microsoft 365. ChatGPT. Small, focused assistants built into the software you already have. The interest is there. The pressure is there. The potential is real.
But professional services firms have a unique problem: your entire business sits on a foundation of confidentiality and trust. You cannot treat AI adoption the same way a startup or a marketing agency might. You operate in a world where one bad decision with client data has real consequences.
The goal isn’t to avoid these tools. It’s to use them responsibly and predictably, with clear boundaries that match the way your firm works.
This is a practical starter pack to help you do that.
Start with a simple idea: AI isn’t the risk; your data handling is
Most of the fear around AI comes from not knowing where data goes or how it’s used. But the truth is simpler. AI tools are only as risky as the access you allow and the context you give them.
If someone can paste a client document into a public AI tool, that’s not an AI problem. It’s a data governance problem.
If Copilot can reach sensitive files it shouldn’t have access to, that’s not an AI problem. It’s an access control problem.
Good AI governance begins with the same fundamentals every firm should already have in place:
Clear access boundaries
Strong identity controls
Basic data classification
A predictable file structure
People who understand what “confidential” actually means
AI doesn’t erase any of that. It just exposes it.
Understand the two types of AI you’re dealing with
Most firms will touch two broad categories of tools.
1. Public AI (ChatGPT, Gemini, Claude, etc.)
These are general-purpose tools. Useful. Powerful. Not tied to your data unless you intentionally put it there. The risk comes from people copying and pasting client information into these tools without thinking.
Your policy here should be simple:
No client data
No confidential firm information
No internal documents
No “just to check something quickly” exceptions
If someone wouldn’t email that information to a stranger, they shouldn’t paste it into a public AI tool.
2. Enterprise AI (Copilot for Microsoft 365)
This is different. It runs inside your tenant and respects your existing permissions. If someone doesn’t have access to a document, Copilot can’t see it either.
This makes it much safer for day-to-day work, but it also brings a new requirement: your permissions need to be correct. Sloppy access models lead to sloppy AI output.
Before turning on Copilot, firms should clean up:
Overshared SharePoint sites
Old “everyone in the company” links
Personal OneDrives full of client material
Teams channels with unclear ownership
Legacy folders carried forward out of convenience
Copilot magnifies whatever structure you already have. If your tenant is organized, it performs incredibly well. If it’s not, it reflects that too.
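
If you want a concrete starting point for that cleanup, the Microsoft Graph API can enumerate sharing links. Below is a minimal Python sketch (using the requests library; the access token and site ID are placeholders you would supply from your own auth flow, with Sites.Read.All consent) that flags files carrying organization-wide or anonymous sharing links. It only scans the top level of each document library; a real audit, or a tool like Microsoft Purview, would go deeper.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Placeholder: acquire a token with Sites.Read.All via your own auth flow (e.g. MSAL).
TOKEN = "<access-token>"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def get(url):
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def flag_broad_links(drive_id):
    """Print files whose sharing links reach the whole org or anonymous users."""
    # For brevity this checks only the top-level items; a real audit would recurse.
    items = get(f"{GRAPH}/drives/{drive_id}/root/children").get("value", [])
    for item in items:
        perms = get(f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions")
        for perm in perms.get("value", []):
            link = perm.get("link")
            if link and link.get("scope") in ("organization", "anonymous"):
                print(f"{item['name']}: {link['scope']} link ({link.get('type')})")

# Walk every document library on one site (site_id is a placeholder).
site_id = "<site-id>"
for drive in get(f"{GRAPH}/sites/{site_id}/drives").get("value", []):
    flag_broad_links(drive["id"])
```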
Set boundaries people can understand and actually follow
AI governance doesn’t need to be a 20-page document. Start with a one-page guide that covers:
What people can use AI for
Drafting
Summarizing
Brainstorming
Rewriting
Simplifying internal explanations
What they cannot use AI for
Client documents
Matter-specific information
Financials
Sensitive personal data
Anything bound by a confidentiality agreement
Anything that identifies a specific client situation
What to do instead
Use templates
Use internal examples
Strip identifiable details
Ask a colleague before asking a model
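
“Strip identifiable details” can be partly automated. The sketch below is a deliberately simple Python illustration: a few placeholder regexes plus a known-names list, catching only the obvious cases. A real firm would lean on a DLP tool or a dedicated PII library rather than regexes alone.

```python
import re

# Minimal, illustrative patterns only. Real redaction needs more than regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def strip_identifiable_details(text: str, client_names: list[str]) -> str:
    """Redact obvious PII first, then known client and contact names."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

draft = "Call Jane at Acme Holdings (jane@acmeholdings.com, 555-867-5309) about the merger."
print(strip_identifiable_details(draft, ["Acme Holdings", "Jane"]))
# -> Call [CLIENT] at [CLIENT] ([EMAIL], [PHONE]) about the merger.
```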
Clarity beats perfection. People will follow simple rules, and if you’re an Athencia One client, we’re happy to help you draft them.
Tie AI use back to your existing confidentiality obligations
Professional services firms already have standards:
Engagement letters
Ethical rules
Regulatory requirements
Client confidentiality clauses
Cyber insurance controls
Your AI standards should map directly to those. You’re not inventing new expectations. You’re applying old ones to a new tool.
A good way to explain it is this:
“Use AI the same way you would use a contractor you don’t know yet. Helpful, but not someone you give sensitive client information to.”
Put yourself in a defensible position
If a client or insurer asks about AI use, they’re not looking for perfection. They’re looking for evidence that you’ve thought about the issue.
Have these things ready:
A short AI policy
A list of approved tools
A list of disallowed tools
A basic explanation of how Copilot or ChatGPT handles data
A record of staff training or acknowledgement
Confirmation that confidential data isn’t sent to public AI tools
Confirmation that enterprise AI respects existing permissions
A short internal FAQ answering common questions
When firms can show this level of preparation, the conversation becomes much easier.
Monitor the environment the same way you already should
Nothing about AI replaces the need for basic monitoring. If anything, it makes it more important.
You still need:
Strong identity controls
MFA everywhere
Conditional Access
Clear device policies
Proper access reviews
A reliable offboarding process
A 24/7 SOC to catch the things you don’t see
AI doesn’t introduce new risks so much as it sharpens the ones already in your system. A mature monitoring posture fills in the gaps.
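
As one concrete example from that list, a baseline Conditional Access policy requiring MFA for all users can be created through the Microsoft Graph API. Most firms will simply do this in the Entra admin center, but the Python sketch below (with a placeholder admin token carrying Policy.ReadWrite.ConditionalAccess) shows the shape of the policy, starting in report-only mode so you can observe impact before enforcing.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Placeholder: acquire an admin token with Policy.ReadWrite.ConditionalAccess (e.g. via MSAL).
TOKEN = "<access-token>"

# Report-only mode first: observe who would be affected before enforcing.
policy = {
    "displayName": "Require MFA for all users",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

resp = requests.post(
    f"{GRAPH}/identity/conditionalAccess/policies",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=policy,
)
resp.raise_for_status()
print("Created policy:", resp.json()["id"])
```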
Start small, move steadily, and keep people in the loop
You don’t need a grand rollout. The best path looks like this:
Publish a simple policy
Approve a small set of tools
Train your people on how to use them
Start with low-risk use cases
Tighten access and structure as you learn
Add new capabilities when the firm is ready
Your goal is steady, confident progress. Not a big-bang announcement.
The bottom line
Professional services firms can safely adopt AI. Many should. The work these tools can automate will free your teams to focus on the higher-value parts of your practice.
The key is structure. Clear boundaries. A predictable framework. And a culture where people understand both the promise and the responsibility.
That is what AI governance looks like at this stage. It’s not complicated. It’s not dramatic. It’s simply part of running a modern firm.