Policies / Statement of principles around AI

I’m looking for ways people are addressing the use of AI, both within their orgs and in the communities they serve. Are there specific principles and values being explicitly named as AI becomes more and more a tool people reach for?

I work for a network of nonprofits in Toronto - we are looking at drafting something that will guide us in a good way, something that embraces the possibilities when it comes to AI.

Thanks!
Sree

Not sure this is what you are looking for but I think this might be a good starting point:
https://bristoluniversitypress.co.uk/resisting-ai

All best,
jean

Thank you, Jean! I really appreciate this lead!
Sree

I definitely rate McQuillan. On “AI” I agree with him.

Global Voices has the best policy I’ve seen yet - but it only discusses the use of Gen AI externally, not internally: Global Voices’ policy on AI · Global Voices

In this space, I think there is a lot of confusion over terms. I think the use of generative AI, LLMs, and the like should really be questioned and mostly discouraged. (Aside from concerns about their part in a larger authoritarian, fascist project, how can small organisations afford to use tools that are consistently wrong 10% of the time?) However, I think there is a larger conversation to be had about automation in nonprofits and how humans can make their jobs better and more meaningful. This means looking at our internal processes and seeking efficiencies through digital tools.

(And just leaving this here for interest: if you believe there is such a thing as “ethical AI” – I don’t – mySociety created this framework: mySociety AI Framework)

Those are worthwhile considerations - thanks for sharing them, @Janet_Gunter.

Folks might be interested in this bewitching AI companion from Gesturing Towards Decolonial Futures: Chat with Aiden
Among other things, I found it very helpful in framing an AI policy informed by the key relational values that ground the work we do.

The overall research, work, and tools coming out of the GTDF Collective are quite remarkable. Worth making time to explore, I think.

Sree

Hi @sreedevi - Thanks for the link to GTDF - will check it out!

As this is definitely an area we see coming up more and more for groups in the community, is there any chance you might be up for sharing the policy you’ve been developing in the RadHR Library? It’s not something anyone has uploaded so far, so it would likely be a really good starting point for many other groups!

Thanks!

Hi Liam,

Happy to! I was having trouble finding how to upload the policy we’re working from. Attached is the general template that I think folks could use as a base and tweak to reflect their particular orgs. Please feel free to put it up.

Sorry I couldn’t figure out how to do this on my own, but I’d love direction on how to in future. Thanks!

warmly,
Sree

(Attachment AI Policy Nonprofit.pdf is missing)

Hi again, Liam!

My attempt to upload seems to have failed. Below are guidelines summing up our considerations, and I’m happy to share a fuller template if you can help me figure out how to do that.

AI Policy Summary for Nonprofit Use

Guiding Principles

  • Transparency & Consent

  • Human Oversight

  • Bias & Equity Awareness

  • Relational Responsibility

  • Privacy & Data Ethics

Acceptable Uses

  • Drafting communications (with human review)

  • Scheduling & routine administrative tasks

  • Summarizing research and reports

  • Data analysis to support learning and evaluation

Prohibited Uses

  • Surveillance or data extraction without consent

  • Automated decision-making in hiring or service eligibility

  • Replacing human presence in trauma-informed or care-based roles

  • Generating misinformation or misleading content

Ongoing Commitments

  • Annual policy review with staff and community input

  • Staff training and support for responsible AI use

  • Clear response pathways for addressing harm or concerns

  • A living policy that evolves as technology and context shift

warmly,

Sree

The aim of this playbook is to distil practical, actionable guidance grounded in real-world experience. We’re keenly aware that using AI requires a delicate balance, and we are as uneasy as others about some of the bad outcomes AI can cause. Rather than promoting hype, our aim is to offer a structured path forward that helps charitable organisations navigate this transition thoughtfully and effectively, ensuring technology serves their mission rather than distracting from it.

Hi @sreedevi - Thanks so much for sharing this! Great to see the headlines that you and your org have been thinking through! It really helps move this away from abstract debate and into practical discussion and clarity.

Sorry for not sending this over before, but here is the upload link, to share a policy in the RadHR Library: Policy Upload | RadHR

Definitely think there are a lot of folks in the community who would benefit from having your work as a starting point, if you were able to post it there. (and feel free to DM or email me the policy, if easier: liam@radhr.org).

Thanks!

Just to add to the thread here, Adfree Cities have just shared their AI staff use policy in the library, in case it is a useful reference point for others here: Use of AI (Artificial Intelligence) tools and software by Adfree Cities | RadHR Library