
AI AND YOUR CHARITY: A FRIENDLY STARTER GUIDE

It has been impossible to avoid AI (artificial intelligence) in the last 12 months, as it has burst onto the scene, forcing many of us to reimagine what our lives look like at work and home. The charitable sector is no exception, and AI promises to come crashing through the doors like a new puppy. It’s full of potential, occasionally messy, and more than likely to find escape routes that you did not even know existed. Hopefully it will result in fewer accidents and chewed clothes, though the technology is not quite there for that yet!

Many charities, both large and small, are being encouraged to “use AI” without a clear sense of what that actually means, what the risks are, and where the real opportunities lie.

This helpsheet is here to give you a clear, practical starting point. Think of it as a guide to the things to be aware of and the questions to consider before fully embracing the AI revolution.

1. So… what is AI in this context?

There is a lot of jargon and complexity thrown around whenever we discuss AI, but the core idea is straightforward. AI refers to computer systems that can perform tasks we would normally expect a human to take care of, whether that is writing text, analysing documents, spotting patterns, answering questions, or even generating deepfake videos.

Most AI that charities encounter today falls into two buckets:

  • Generative AI. Everyone’s first thought when we mention AI: tools that create content (think ChatGPT, Claude, Midjourney).
  • Analytical AI. The tools that review, sort, detect or predict.

You don’t need a computer science degree; you just need a handle on the limitations of these systems.

2. Why should charities look to AI?

It can genuinely help, from triaging inbound queries to drafting newsletters, sorting case notes, speeding up admin, or analysing financial data. It can extend your team’s capacity without extending your payroll. But we can’t just delegate everything to a machine. It’s important that AI is used safely, ethically, and in a way that respects the people you serve.

3. The Very First Questions to Ask

Before adopting any AI tools, gather your trustees, leadership team, and someone who understands some of the technical side of things, and consider:

a) What problem are we trying to solve?

If the answer is “Everyone else is using AI,” please stop there. Tools should meet needs, not create them.

b) What data will the AI see?

Some AI tools send your data to third parties for training their systems. That might be fine for drafting a birthday message but not so fine for anything containing personal information, safeguarding details, donor information, or staff HR records.

c) Who in the charity will use it, and how?

Avoid the “rogue AI enthusiast” scenario where someone quietly uploads half the finance drive to a free chatbot to “help tidy it up.” Trying to prevent the use of AI tools completely often results in people finding workarounds to make their work better (and sometimes easier).

d) What are the risks if something goes wrong?

Deepfaked CEOs. Fraudulent payment requests. Incorrect advice given to vulnerable service users. Inappropriate automated responses. These things are already happening.

A charity’s reputation is one of its most valuable assets, and AI mistakes can erode trust quickly.

4. The Fraud Risks You Need to Know About

AI isn’t just writing cheerful emails and generating pictures of otters wearing sunglasses. Fraudsters are using AI heavily too.

CEO Fraud 2.0 

This used to be an email problem. Now it’s becoming a video call problem.

  • Fraudsters can generate a convincing voice clone of your CEO.
  • Some can appear as them on live video.
  • They can instruct staff to make urgent payments or share sensitive information.

If the request is unusual, high-value, or urgent, verify it outside the channel it came through.

Hyper-realistic phishing 

AI can write flawless emails tailored to your charity, your mission, and even your writing style. It can write thousands of them in very little time, making it a scammer’s dream!

Deepfake documents 

AI can generate fake invoices, bank statements, and IDs that look painfully legitimate.

The biggest risk? Over-trusting the tech.

AI outputs look confident even when they’re confidently wrong.

5. Practical Safeguards

a) Create a simple AI policy

It needs to be clear, inclusive, and easy to follow. There’s no benefit in getting mired in technical language; just focus on the key elements. It should cover:

  • What tools are allowed
  • What data staff can and cannot upload
  • How decisions involving AI are checked
  • What to do if something looks suspicious
b) Train your teams

Short sessions are fine. Staff should know:

  • AI can be wrong
  • AI can be manipulated
  • AI may leak data if used unsafely
  • AI cannot replace human judgement


c) Protect identities

Have clear internal processes for verifying unexpected requests from senior staff.

d) Use two-person controls

This is classic fraud prevention, still effective even in the age of deepfakes:

  • Payments
  • Contract approvals
  • Access to sensitive records
e) Avoid “free” AI tools for sensitive content

If it’s confidential, personal, or reputationally risky, stick to tools where:

  • you have a contract
  • there’s a data processing agreement
  • the provider clearly states your data won’t be used to train their models
f) Keep AI experimental work away from live systems

A sandbox environment is your friend: it means testing your new AI tools in a safe, contained space. It is always best to keep a new tool away from your data and live systems until you fully understand how it works, what it needs access to and what it does once it gets there. It is not worth risking the rest of your charity’s systems, data, or reputation.

6. AI Can Be Hugely Helpful, When Used Thoughtfully

It’s not all doom. Done well, AI can improve:

  • Staff efficiency
  • Accessibility (transcriptions, translations, summaries)
  • Fraud detection
  • Service reach
  • Data analysis
  • Digital engagement

Many charities are already benefiting, but the successful ones start small, stay human-centred, and use AI to support staff, not replace them. AI is not all-powerful; it can get things wrong, and it is still vital to check its output. AI is not about replacing people; it’s about letting machines do the things that they are good at (e.g. analysing lots of data very quickly) in partnership with people doing the things where they’re strongest (e.g. making decisions, thinking about the ethical implications).

7. When to Get Help

If you’re not sure:

  • whether a tool is safe,
  • what the data risks are,
  • or how to assess a vendor’s AI claims (a lot of these tools do very little despite grand marketing),

don’t be afraid to reach out for independent advice. AI is moving fast; charities don’t have to navigate it alone.

8. Key Messages to Leave With Your Team

AI can help you, but it needs guardrails.

  • Stay curious, not fearful.
  • Trust, but verify.
  • If something feels “off,” check it.
  • The human in the loop is irreplaceable.

ACKNOWLEDGEMENT

Written by Oli Buckley, Professor in Cyber Security at Loughborough University

DISCLAIMER

Published 2025. © Fraud Advisory Panel and Charity Commission for England and Wales, 2025. Fraud Advisory Panel and Charity Commission for England and Wales will not be liable for any reliance you place on the information in this material. You should seek independent advice.
