AI tools are spreading across nonprofit operations fast: 92% of nonprofits have adopted AI, from fundraising and donor analytics to program delivery and grant writing. Most don’t have a governance program keeping pace. This piece is for operations and compliance leads who have moved past “should we use AI” and into “how do we govern what we’re already using.”
What is AI governance for nonprofits?
AI governance for nonprofits is the set of policies, processes, and oversight structures that ensure AI tools are adopted responsibly, documented clearly, and aligned with the organization’s mission and legal obligations. It covers knowing what AI is in use, who owns each system, what data it touches, and whether it meets donor, funder, and regulatory expectations. This is a leadership and compliance function, not a technical one.
The stakes are specific. Donors trust organizations with their data and their dollars. Funders are beginning to ask about AI use in grant applications. Boards are being asked to attest to responsible technology stewardship. Governance is the mechanism that makes those attestations credible rather than aspirational.
Why AI governance matters for mission-driven organizations
Donors and institutional funders expect transparent stewardship of organizational resources, including AI tools that handle their data or influence program decisions. Organizations that can document their AI governance practices are better positioned to answer funder due diligence questions and maintain donor confidence. A governance program provides specific answers where others offer assurances. That distinction matters when a funder asks how donor data is handled in an AI-assisted fundraising tool.
Regulatory obligations already apply, and tax-exempt status doesn’t change that. HIPAA applies to health-focused organizations handling protected health information. CCPA and state privacy laws apply to California donors and beneficiaries regardless of organizational structure. Colorado SB 205 may apply to organizations making high-risk AI-influenced decisions about beneficiary eligibility or program access for Colorado residents. These obligations require documented evidence of oversight, not large compliance teams. The documentation burden is manageable. The absence of documentation isn’t.
Boards are increasingly responsible for AI risk oversight, and without a governance program producing regular reporting, they can’t fulfill that responsibility. When something goes wrong with an AI tool, whether oversight was in place becomes a reputational and legal question, not just an operational one — IBM found 63% of breached organizations lacked AI governance. A quarterly board update on AI portfolio status and governance decisions is the minimum reporting structure that gives board members meaningful visibility into what they’re accountable for.
Key components of a nonprofit AI governance program
| Component | What It Does | Why It Matters |
|---|---|---|
| AI Inventory | Centralized record of all AI tools and use cases | Prevents shadow AI, enables oversight |
| Risk Assessment | Evaluates each use case against risk criteria | Focuses review effort where it matters |
| Governance Policies | Documents rules for acceptable AI use | Creates clarity and accountability |
| Approval Workflows | Routes new AI proposals through structured review | Ensures oversight before deployment |
| Audit Trails | Logs all governance decisions and changes | Supports board and funder reporting |
AI inventory
You can’t govern what you don’t know exists. An AI inventory is the centralized record of every AI tool in use: what it does, what data it touches, who owns it, and what its review status is. For most organizations, this starts with a survey of existing software subscriptions, including AI features embedded in CRMs, email platforms, and program management tools. The inventory is the foundation for every other governance activity. Without it, risk assessments, policy reviews, and board reporting are all operating on incomplete information.
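To make this concrete, an inventory can be as simple as one structured record per tool. The sketch below is a minimal Python illustration; the field names (`tool_name`, `data_touched`, `review_status`, and so on) are assumptions, not a standard schema, and a shared spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One row in the AI inventory (field names are illustrative)."""
    tool_name: str
    description: str        # what the tool does
    data_touched: list      # e.g. ["donor PII", "case notes"]
    owner: str              # accountable staff member
    review_status: str = "pending"  # pending / approved / rejected

inventory = [
    AIInventoryEntry(
        tool_name="CRM email assistant",
        description="Drafts donor outreach emails",
        data_touched=["donor PII"],
        owner="Development Director",
    ),
]

# Other governance activities can then query the record, e.g. tools awaiting review:
needs_review = [e.tool_name for e in inventory if e.review_status == "pending"]
```

Whatever the format, the point is that every downstream activity (risk assessment, board reporting, vendor review) reads from this single record rather than from memory.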
Risk assessment
Not every AI tool carries the same risk. A chatbot answering public questions about programs is a different risk profile than an AI system influencing beneficiary eligibility decisions. Risk assessment evaluates each use case against criteria like data sensitivity, impact on beneficiaries, and regulatory exposure, and assigns a tier that determines the depth of review required. This is where governance programs become efficient: high-risk tools get scrutiny, low-risk ones move faster. Risk-tiered review is what keeps governance from becoming a bottleneck.
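One way to make tiering mechanical is a simple score across the criteria named above. The 1-to-3 scales and thresholds below are hypothetical, a sketch of the idea rather than a recommended rubric:

```python
def risk_tier(data_sensitivity: int, beneficiary_impact: int,
              regulatory_exposure: int) -> str:
    """Assign a review tier from three 1-3 scores (thresholds are illustrative)."""
    score = data_sensitivity + beneficiary_impact + regulatory_exposure
    if score >= 7:
        return "high"    # full review: vendor assessment, legal sign-off
    if score >= 5:
        return "medium"  # standard review
    return "low"         # lightweight approval

# A public program chatbot scores low; an eligibility-scoring tool scores high
risk_tier(1, 1, 1)  # → "low"
risk_tier(3, 3, 2)  # → "high"
```

Even a crude rubric like this one is useful because it forces the same questions to be asked of every tool, and it makes the resulting tier defensible in a review record.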
Governance policies
An AI governance policy documents what AI use is permitted, what requires approval, how data may be used with AI tools, and who is accountable for oversight. It gives staff clear guidance, reduces shadow AI adoption, and creates the documented standard against which the organization can be evaluated. Policies should be reviewed annually and updated as new tools are adopted and regulations evolve. A policy that hasn’t been reviewed since the organization’s first ChatGPT experiment isn’t a governance policy. It’s a timestamp.
Approval workflows
Before any AI tool is deployed, someone with appropriate authority should review and approve it. A structured intake workflow captures the use case, routes it to the right reviewer based on risk level, and produces a documented record of the decision. Low-risk tools can move through review in days. The point isn’t that every review is identical. It’s that every tool has a documented approval, and that approval is traceable when a funder or auditor asks for it.
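A routing table is one way to make the “right reviewer based on risk level” step mechanical. The reviewers and turnaround figures below are assumptions for illustration, not a prescribed structure:

```python
def route_proposal(tier: str) -> dict:
    """Map a risk tier to a reviewer and a review deadline (values are illustrative)."""
    routes = {
        "low": {"reviewer": "department head", "sla_days": 3},
        "medium": {"reviewer": "compliance officer", "sla_days": 10},
        "high": {"reviewer": "executive director and board committee", "sla_days": 30},
    }
    return routes[tier]
```

Under this sketch a low-risk proposal routes to a department head with a days-long turnaround, while a high-risk one escalates with a longer deadline, which is the tiered behavior the review process is meant to produce.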
Audit trails
Every governance decision, approval, policy change, and risk assessment should be logged with timestamps and rationale. Audit trails are what make governance provable rather than claimed. When a board member asks whether a specific AI tool was reviewed before deployment, or a funder asks how the organization handles AI decisions affecting beneficiaries, an audit trail provides the answer. Without one, the best you can offer is “we believe so.” These records also form the backbone of any formal AI governance audit.
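An append-only log with timestamps and rationale can be very lightweight. The sketch below writes one JSON-lines entry per decision; the field names are illustrative assumptions, and a governance platform or even a locked spreadsheet can serve the same function:

```python
import datetime
import json

def log_decision(log_path: str, actor: str, action: str, rationale: str) -> None:
    """Append one governance decision to an append-only JSON-lines audit log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "rationale": rationale,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

The append-only, timestamped shape is the important part: entries are added, never rewritten, so the log can answer “was this reviewed, by whom, and why” months later.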
Protecting donor and beneficiary data in AI tools
Data categories that require special handling
Before adopting any AI tool, identify what data it will access. Categories requiring careful governance include donor PII (names, addresses, payment details), beneficiary records and case notes, health information covered by HIPAA, financial account details, and internal strategic documents. These categories should have explicit data handling rules in the governance policy and should be evaluated carefully in any vendor assessment. The question isn’t whether the tool is reputable. It’s whether the organization has documented what data it can access and under what conditions.
Evaluating vendor AI practices
Most AI tools used by nonprofits are third-party products. The organization remains responsible for how those tools handle its data. Key questions worth asking: Does the vendor use your data to train its models? What are the data retention periods? Where is data stored? What are the third-party sharing practices? Vendor privacy policies are written to protect the vendor, not inform the buyer. Reviewing them with a standard set of governance questions produces more defensible assessments than casual review, and creates a documented record that the organization did its due diligence.
How to build a nonprofit AI governance program
Phase 1: Establish visibility. Conduct an AI inventory across all departments. Survey department heads. Review software subscriptions. Identify AI features in existing tools that may not have been adopted as “AI” but function that way. By the end of this phase, the organization knows what AI is in use, who is using it, and what data it touches. This is the foundation. Everything after it depends on it being complete.
Phase 2: Operationalize oversight. Draft an AI governance policy covering acceptable use, approval requirements, and data handling rules. Stand up a structured intake process for new AI proposals. Assign ownership for each AI use case in the inventory. Define risk tiers and apply them to existing use cases. The goal is to move from informal awareness to structured, documented review. That shift doesn’t require new headcount. It requires a process.
Phase 3: Connect to reporting and compliance. Build a regular reporting cadence for board and leadership. Quarterly board updates on AI portfolio status, risk distribution, and significant governance decisions. Map documented controls to applicable regulatory frameworks. By the end of this phase, the organization has a defensible governance record and a program that scales as AI adoption grows. Mature governance programs achieve 10X faster intake and 60% reduction in governance cycle times compared to manual, ad hoc approaches. That’s what the infrastructure produces when it’s built properly.
Common challenges and how to address them
Shadow AI is the most consistent gap. A WalkMe survey found 78% of employees use unapproved AI tools, often because the approval process is unclear or doesn’t exist. The solution isn’t prohibition. It’s a simple, accessible intake process that makes asking for approval easier than working around it. When governance is faster than the alternative, teams use it.
Limited resources are a real constraint, but they’re not the barrier most organizations assume. Governance doesn’t require dedicated headcount. A structured intake process, a policy document, and a quarterly board report can be maintained alongside other responsibilities. Purpose-built governance platforms reduce the manual burden by automating risk scoring, routing, and documentation. The barrier is usually setup, not ongoing maintenance. Starting with inventory and policy is enough to begin. Sophistication can follow adoption.
Evolving regulations create legitimate uncertainty. Privacy laws and AI-specific requirements are changing faster than most governance programs update. The practical response is infrastructure designed to adapt: policies with annual review cycles, vendor assessments revisited when vendors change their practices, and compliance framework mappings that accommodate new requirements without rebuilding from scratch. The organizations that handle regulatory change best aren’t the ones tracking every development. They’re the ones whose governance programs can absorb new requirements without treating each one as a rebuild.
FAQ
Which regulations apply to nonprofit AI use?
HIPAA applies to health-focused organizations handling protected health information. CCPA and state privacy laws apply based on where donors and beneficiaries are located, regardless of the organization’s tax status. Colorado SB 205 may apply to organizations making high-risk AI-influenced decisions about beneficiary eligibility or program access where those decisions affect Colorado residents. Governance documentation is the mechanism for demonstrating compliance with each. The documentation burden scales with the risk. The absence of documentation is a risk of its own.
How often should AI systems be reviewed?
Annually at minimum for all AI systems, with additional review triggered by material changes to the tool, its data inputs, or applicable regulations. High-risk systems (those influencing beneficiary decisions or processing sensitive data) warrant more frequent review. New tool adoption should trigger review before deployment, not after. Governance that only looks backward isn’t oversight.
Can a small nonprofit run a governance program without dedicated staff?
Yes. A basic governance program (AI inventory, acceptable use policy, structured intake process, and quarterly board reporting) can be maintained alongside other responsibilities with the right tools. Start with inventory and policy. Add workflow automation as the program matures. The mistake is treating governance as an enterprise-only capability. The underlying obligations apply regardless of organizational size.
Who should own AI governance?
Typically a compliance officer, COO, IT director, or executive director, depending on organizational size. What matters more than the title is that someone has clear ownership, defined authority to approve or reject AI proposals, and a regular reporting relationship with the board. Governance without ownership is documentation without accountability. The specific role is less important than the clarity of who holds it.
The organizations that donors and funders trust most with AI aren’t necessarily the ones using the most sophisticated tools. They’re the ones that can show they’ve thought carefully about what they’re using and why. Governance is how you demonstrate that, concretely and on the record, rather than by assurance.
Explore Trustible’s resources on building practical AI governance programs.