INSIDE AI – GEORGE IRISH

By George Irish

2023 was the year of the AI Revolution, but the benefits for nonprofits have been more promise than reality.

ChatGPT is the mascot for the new Age of AI, dubbed the most groundbreaking technology since the advent of the Internet and the iPhone. Its impressive — sometimes stunning — text-smithing abilities are predicted to transform our daily work lives:

  • supercharging the writing process, especially for content like fundraising appeals and email campaigns;
  • streamlining creative workflows, from concept development to producing and customizing communication outputs like social media posts;
  • taking on heavy knowledge-processing tasks like summarizing reports, producing data insights, and making information products more accessible.

ChatGPT is joined by a host of other new AI-powered data intelligence and automation tools that may soon permeate many aspects of our lives.

In the business world, AI is being embraced at a fever pitch. A recent study from market research firm Gartner predicts that more than 80% of enterprises will have used Generative AI or AI-enabled apps by 2026 (and this seems like a conservative estimate).

In the nonprofit sector the picture has been different. An October 2023 XLerateDay survey reveals the majority of nonprofit staff are at the early stages of AI adoption, and many haven’t started at all.

Two common scenarios seem to be playing out among nonprofit staff eager to incorporate AI into their work:

  • they’ve started using it unofficially and quietly because of concerns about risk, bias, and lack of endorsement (“I’m using it on the side, quietly”); or
  • they’re not using it at all because of those same concerns (“I don’t think we’re allowed to use it”).

Neither of these is a healthy way to approach AI adoption. And while there are examples of nonprofits making significant investments in AI-enabled tech and workflows, those organizations are exceptions to the broader trend of AI hesitation.


AI hesitation: a risky approach?

A cautious approach to new technology is reasonable and often smart, but it can also bring risks. If a shifting tech landscape moves beyond the boundaries of established policies, then mistakes can and sometimes will be made.

As reported in The Guardian, Amnesty International’s office in Norway was criticized for using AI-generated images in social media posts promoting its reports on the systemic brutality used by Colombian police to quell protests in 2021. The decision to use the images prioritized protecting the identities and safety of protesters from further reprisals, but the posts were branded as ‘fake news’ and raised questions about Amnesty International’s credibility.

Without a formal AI policy, including guidance on the use of AI-generated images, decision-making remains at the operational or personal level rather than the strategic one.

AI Everywhere

AI hesitation has been a possible (and reasonable) approach for many organizations already struggling to deliver their mandates with limited resources and overworked staff, but it may not be a feasible strategy for much longer. The one-year anniversary of ChatGPT’s launch also marks the start of the next broad wave of AI expansion: AI Everywhere.

So far, new AI tools like ChatGPT have existed mostly in their own ecosystems, where they could be used or avoided with no impact on your existing systems. Moving into 2024, this landscape is expected to shift as AI features are integrated into familiar office systems such as Microsoft Office 365 and Google Workspace, and into other common tools like Canva, Photoshop, Zoom, and Mailchimp.

AI will become increasingly embedded into our daily tasks. Whether our organizational instinct is to avoid or adopt, we will soon have no choice: AI is here to stay.

Responsible AI: a call for deeper nonprofit involvement

Responsible AI has become the label for efforts across academia, the tech industry, government, and civil society to ensure that AI’s development does not represent a threat to society and that social harms are prevented. Responsible AI initiatives are informing new regulations and controls that will shape the development of future AI.

US President Biden recently signed an executive order on Safe, Secure, and Trustworthy AI following consultations with top AI companies and researchers, and UK Prime Minister Rishi Sunak convened the first international AI Safety Summit at Bletchley Park, bringing together global leaders and AI tech giants, but with noticeably few participants from the nonprofit and civil society sectors.

Could the cautious approach to AI adoption by nonprofits leave our voice absent from important decision-making fora? Are we going to leave it up to the Googles, Microsofts, and Facebooks to decide what kind of AI we have in the future?

Assessing your organization’s AI Readiness

AI is not yet a must-have technology for most organizations, and it would generally be wise to avoid making big AI investments or commitments right now. However, this could be an opportune time to begin exploring AI opportunities on a smaller scale.

Assessing your organization’s AI Readiness can be a helpful first step. Here are a few questions to guide that assessment:

  • Does your organization have a learn-while-doing culture that is accepting of occasional failure or disappointment?
  • Are there internal staff or external advisors with an innovation mindset who can lead AI experimentation?
  • Is your leadership prepared to support investments of time and resources into AI innovation projects that may not have a clear ROI?
  • Can your organization provide adequate training and reskilling to help staff adapt to new AI-based tools and workflows?
  • Is your organization prepared to make increased investments in knowledge management systems and processes (including databases and CRMs, document libraries, images, and communications histories)?

Setting guardrails

In the world of AI language models, guardrails are built-in controls that keep a model from producing offensive or unacceptable outputs. For AI innovation projects, guardrails serve a similar purpose: they orient the compass and set the boundaries that keep your work on a safe track.

  • Build around your champions. There are likely people in your organization who are already using or exploring ChatGPT; they could form the core of a working group — formal or informal, preferably cross-team — to start sharing what works. Getting a few voices together to talk can help clarify the risk vs. opportunity balance.
  • Be cautious, but don’t overthink it. The landscape of best-practice AI policies and ethical red-lines is evolving rapidly, so avoid getting locked in a fixed position. Stay flexible, and lean on your existing policies for guidance.
  • Humans stay in charge. This should go without saying, but always ensure there’s human oversight of AI outputs, whether marketing content, data insights, or document summaries. AI chatbots are unreliable right now, but they will get better.
  • Keep focus on your mission. There’s a lot going on right now in the AI space and it’s easy to get distracted by the latest shiny announcements (“Now with 3D video!”). Try to stay focused on what will actually help deliver your programs now.
  • Prepare for disappointment, then success. Innovation rarely follows a linear path forward. Expect to hit the ‘trough of disillusionment’ along the way, and avoid putting too many eggs in one basket. AI is still an emerging workplace technology, so manage your expectations — and risks.

These ground rules can help create a predictable, supportive environment for staff to begin to push the envelope.

Do you need an AI policy?

A majority of XLerateDay survey respondents indicated that they did not have any existing policies or frameworks in place for using AI. Only a handful indicated they were starting work to develop AI frameworks.

This is still a largely undeveloped space in nonprofit governance, and the quickly evolving AI landscape makes it even more challenging. Fortunately, there are sector-wide efforts emerging to provide resources and guidance for organizations on AI policy frameworks.

A leading AI framework for nonprofits is from Fundraising.ai, an independent collaborative promoting Responsible AI. The framework covers 10 key areas of Responsible AI for nonprofit fundraising.

  • Privacy and Security
  • Data Ethics
  • Inclusiveness
  • Accountability
  • Transparency and Explainability
  • Continuous Learning
  • Collaboration
  • Legal Compliance
  • Social Impact
  • Sustainability

This is a good starting point for any nonprofit’s own AI Policy document. However, writing a complete and comprehensive AI Usage Policy would be a big lift for many organizations, and beyond the means of smaller nonprofits.

Instead, maybe an AI mandate?

In the short term, AI innovation may be supported with a simpler mandate statement, one that recognizes the opportunity presented by new AI technologies and provides the assurances needed to kick-start experimentation.

An AI Mandate could be a simple single-paragraph statement or a more complex document. Its stated goal would be to empower staff to move ahead with learning and experimentation, understanding that the organization is comfortable with the uncertainty and risks.

Don’t neglect your Human Stack

‘Tech Stack’ is a common term used by digital technicians to describe the collection of interconnected systems and software that they manage. Technology transformation expert and podcaster Tim Lockie advises nonprofits to focus more on their ‘Human Stack’ — the web of interpersonal relationships that actually make organizations run.

As we move into the new year of AI, Tim reminds us of the critical importance of the Human Stack, and the understanding that technology change in the workplace is a highly disruptive experience. Building trust and understanding among team members, openly communicating challenges and setbacks, and managing expectations throughout the adoption journey are as critical to success as the technology itself.


George Irish is a veteran of strategy, coaching and consulting for AI-powered charity fundraising. He works with Amnesty International Canada and Greenpeace, among other organizations. He writes this column exclusively for each issue of Foundation Magazine.


Feature Box:

SAMPLE AI MANDATE TEMPLATE

Mission Statement: Our nonprofit organization is dedicated to harnessing the power of Artificial Intelligence (AI) for positive social and environmental impact while prioritizing careful risk assessment and mitigation. We will seek to use AI technologies to further our mission, promote ethical AI practices, and ensure that the benefits of AI are accessible to all while actively preventing potential negative consequences.

Core Values:

  1. Innovation: We will continuously explore and promote innovative AI solutions, while rigorously evaluating potential risks and challenges and actively seeking to mitigate them.
  2. Ethical and Responsible AI: We will adhere to the highest standards of ethical AI development and deployment, emphasizing the proactive identification and mitigation of biases, discrimination, and potential harm.
  3. Collaboration: We emphasize the need for collaboration and sharing across our work teams, not only to amplify our impact but also to collectively assess and manage the risks associated with AI projects.
  4. Transparency: We will maintain transparency in all our activities, including risk assessment and management, from project selection and funding allocation to AI development and data usage. We will be open and accountable to our stakeholders.

With this mandate, our nonprofit organization is committed to investigating the potential of AI and striving to further our mission through the responsible and impactful use of artificial intelligence, while taking a proactive approach to risk assessment and mitigation to prevent potential negative consequences.

