The Next Four Years: A Nonprofit Survival Guide
This is a deeply personal piece, rooted in my experience coming of age during the AIDS Crisis and the ensuing Culture Wars here in the United States.
Over 20 years ago, when I was working as a technology manager for a national nonprofit, I was dragged kicking and screaming into the world of social media. At the time, impact organizations were being fed a steady stream of “social media for good” messaging, both by the social platforms themselves and by nonprofit industry pundits.
Except.
Some of us, myself included, were deeply concerned about what social platforms were doing with the information we were giving them. We were concerned that these platforms operated without sufficient guardrails, with inadequate transparency, and with a laissez-faire attitude toward the sale and exposure of the organizational and personal information entrusted to them. It felt too good to be true: a “free” platform of endless promotion and fundraising, and all we had to do was keep feeding it information. The human need to feel special was being connected to data siphoning that monetized, for its own gain, everything we articulated about how and why we wanted to feel special.
I opened my Facebook account under duress: my organization told me it was now a necessary part of my job, and I would be expected to use it to promote us and our causes. While there was some inherent utility to both my Facebook account and the former Twitter (where I amassed about 10K followers), doubts about what was happening to the information these platforms were collecting began to accrue, especially after a string of industry consolidations of ownership, data breaches, and hacks. When I saw information about me scraped by literally hundreds of “data service organizations,” drawn from information I thought I had posted privately on Facebook, my suspicions were confirmed. A demo at a Dreamforce conference that drew a clear line from a unified consumer “social ID” to displaying a contextual advertisement at a bus stop (akin to Minority Report) was the nail in the coffin: I closed my Facebook account almost a decade ago, and Twitter followed the second Elon Musk announced his intention to acquire it. I’m now a happy customer of PrivacyBee as a result, and it is no coincidence that the dated information about me this service is slowly cleaning up ends at the exact moment both social platform accounts were finally closed and deleted.
I propose that, by and large, social media has created more problems than solutions: it has siloed information, blurred the line between news and entertainment, mass-distributed disinformation and misinformation, over-promoted anti-democratic movements, and cut off fundraising and nonprofit organizational pages without clear justification or avenue for appeal. Now Meta and Alphabet have bent the knee to the incoming Trump administration, if not outright celebrated its return, X has become a hellscape of alt-right conspiracies and culture, and there is well-documented foreign state propaganda on Facebook.
This is the moment we’re facing with AI.
I’m not an AI skeptic. In fact, I think it’s pretty incredible. I’ve experimented with using AI tools for local political campaign opposition research, creating tailored chatbots dedicated to specific areas of expertise, understanding the “voice” of client organizations by digesting public posts and strategic goals, and assessing and researching large quantities of data for trendlines. I even have AI-driven ski hardware that critiques and informs my technique in real time, so I can learn as I ski with information derived from millions of data points.
But the same doubts about what’s happening behind the scenes of AI that once clouded my mind regarding social media persist. As I recently posted here on LinkedIn, I’m not afraid of AI; I’m afraid of its owners: businesses and people with a now-demonstrated history of creating the most toxic outcomes from innovations that were supposed to advance our common good. Don’t believe me? Go to Perplexity.ai and ask it three simple questions:
To be clear, I think the FUD about AI ushering in the age of Skynet is exactly that. AI may be intelligent, but it is far from wise, and it takes only a short time playing with it to see this clearly. Similarly, folks who want to pretend AI is a flash in the pan are being hopelessly naïve. AI is as fundamental to technology as CRM was when it came online, and email before that. We will be using it in 30+ years, just as I’m still using email after receiving my first email address through my college in the fall of 1991.
In 2025, however, I have made choices about how I use these fundamental technologies: I now pay for email service, rather than accepting it as “free,” so I have more control over what happens to my literal communication. I have opted entirely out of “free” social media platforms, no matter the entertaining and potentially community-forming benefits I could derive. Even paying for these kinds of services, which at present means only LinkedIn, leaves me occasionally wondering if I’m only being given the illusion of control over my data.
I loosely categorize technologies into three buckets:
AI has not gone through growth and maturity like email, a fundamental tool bordering on foundational. And unlike social media, which at its advent was so new we didn’t know how it should be regulated, we now have clear, hard-earned lessons about what happens with technologies capable of accelerating literally any kind of content at scale. Those lessons are simply being ignored and disparaged by the very businesses providing AI to us, whether as a free or paid service. There is no federal legislation pending that would truly safeguard the use of AI, democratize access to it, or provide a foundation for its ethical and safe use, and under the incoming Trump administration, there won’t be. This is partially why Silicon Valley executives quietly (and not-so-quietly) lined up behind the Trump campaign, even while many Silicon Valley employees were extolling the Harris campaign.
My point being: for impact organizations engaged with AI, this is not a question of “should” or “should not,” and soon, not even “how?” The fundamental nature of AI will make that choice for you (hint: you will use it, likely in ways you’re not even aware of). Of course, developing best practices for its use should be a no-brainer: define the contexts and conditions in which it can be used at your organization, help folks understand how to take advantage of its capabilities, and give some parameters on where it is best applied (and be prepared to re-evaluate all of this, regularly).
But, and this is a big one, let’s not pretend we’re solving global problems while using AI. At best, we’re solving contexts: how can specific elements of daily work be better supported, and tasks offloaded for easier repetition or analysis at scale? The hope is that by solving these micro contexts, more personal and organizational time can be dedicated to solving the macro contexts toward which your mission is oriented.

But remember: there is no such thing as AI for good, just as there is no such thing as social media for good, and there never will be. The externalities AI currently creates are too extreme, and the operations and goals of its owners too obfuscated. Unlike email, where the most we’ve had to contend with is wide-scale spam and socially engineered phishing attempts, AI is shaping our society and country in profound ways: creating disinformation-driven, perception-biasing propaganda, removing critical thinking from our workflows, and consuming ever-increasing electrical power to sustain its own capacity. Every time we use it, we’re not just using it in our own context; we’re handing money and power to an unregulated industry with a demonstrated capacity to weaponize technology against the very social movements seeking to make our world a more just and equitable place. AI is a helpful, and fundamental, technology, but don’t mistake it for a foundational one, because it is owned by companies with a single agenda: maximize profits at all costs. Use it where it works, and remember that it will not take the place of human wisdom, experience, judgement, compassion, capacity to discern, capability to change minds and hearts, or ability to evaluate source validity.
I want to be wrong about all of this. I want AI to usher in a new era of innovation, organizational capacity, information assessment and action, and enlightenment, and to help make our world a better place. But this is the writing on the wall as I see it, and all I’m doing is repeatedly asking whether past performance by the tech industry, writ large, is an indicator of future action. In all cases, the business of doing business is to maximize return on investment above all other priorities, and AI represents a huge investment. Having been in rooms with Silicon Valley executives who, on more than one occasion, derided how little profit there is to be made working with #nonprofits at scale, this is what rings truest to me.
So you and/or your organization, likely already using AI, can now also become an advocate for changing the context of AI:
Make your own list of the priorities you and/or your organization deem necessary to preserve in our democracy, and trace your use of not just AI but all your technology tools back to whether the entities you are likely paying a great deal of money are advancing those priorities in action, words, and leadership. Are these services and platforms (and their executive leaders) jumping on the deregulation and Wild West bandwagon by giving full-throated support to the incoming Trump Administration? Are they merely paying lip service to this pivotal moment for American democracy and the regulations that govern our use of technology? Are they keeping their heads down, or offering muted acknowledgement of this moment? Are they taking a stand now, with the goal of continuing to stand once we transition Presidential administrations?
Technology, politics, and democracy are rapidly converging into the same stadium, and they are all participatory sports. If you are using AI (and, given its real benefits, why wouldn’t you?), remember that you are now also a player on the field of our democratic future.