
Don't.
Below you will find the text of my conference paper, first presented at Southern Connecticut State University on April 17, 2026, with links and additions.
If you would like a copy optimized for a 24" by 36" conference poster, please contact me and I will be happy to share.
Don't Use AI.
Even if it does help with your research…
Assuming that it has actually helped, which you should check, since experts have repeatedly been surprised to find productivity losses rather than gains
Even if it feels more productive: in one study, coders felt 20% more productive but were actually 19% less productive
“AI Offers a Great Productivity Boost. Or Maybe Not” by Joe McKendrick, Forbes, 10 February 2026
And that you have robust methods for screening out hallucinations and false patterns like "diagnosing" skin cancer based on the presence of a ruler in the photo
(Which, yes, means that medical bias against women and people of color is replicated repeatedly in AI “findings”)
And not just asked ChatGPT how helpful it’s been and taken that as gospel
“!!” by Sam Altman, X.com, 3 March 2025
“A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” by Kevin Roose, The New York Times, 16 February 2023
And aren’t setting yourself and your students up for long-term cognitive decline
“We define cognitive surrender as the behavioral and motivational tendency to defer judgment, effort, and responsibility to System 3’s output, particularly when that output is delivered fluently, confidently, or with minimal friction….Decision-makers may not only accept System 3 cognitions but may also come to believe that AI reasoning is their own.”
Or WILL do so when it “gets so much better” with a jump in quality that’s “right around the corner”…
Which sounds sillier and sillier the longer we wait (and the more lackluster updates pile up) for this vast improvement
And might actually be the opposite, since increased use of AI is a danger to LLMs because of poisoned data sets
“When A.I.’s Output Is a Threat to A.I. Itself” by Aatish Bhatia, The New York Times, highlighting Rice University research: “Self-Consuming Generative Models Go MAD” by Sina Alemohammad et al.
You still should NOT use AI because of the ethical costs:
Environmental destruction
Monopolizing valuable resources
Especially Water
“AI-driven data centres could consume 1.7 trillion gallons of water globally by 2027.”
Despite building data centers in drought-prone areas
“...cooling towers will need 7.6 million liters (2 million gallons) of potable water a day”
Yet receiving TAX BREAKS to poison the water supply
Then offloading the cost onto residents rather than billion-dollar companies
When, as it turns out, they actually COULD adjust to lower power use, once the UK government made it worth their while...
While literally heating the surrounding area by up to 16 degrees as far as 6 miles away
Rampant exploitation of “third world” workers,
“When you think of colonialism, we were under British Imperial East Africa Company […] so literally, we are working under a company. We are just products, part of their operation. Stakeholders, we can say, but we are at the bottom of the bottom.”
Even children
“Underage Workers Are Training AI” by Niamh Rowe, WIRED Business, 14 November 2023
Propping up fascism,
“AI: The New Aesthetics of Fascism” by Gareth Watkins, The New Socialist, 9 February 2025
“This appears to be the fate of all commercial AI projects: at best, to be ignored but tolerated, when bundled with something that people actually need (Microsoft’s Co-pilot); at worst, to fail entirely because the technology just isn’t there. Companies can’t launch a new AI venture without their customers telling them, clearly, “nobody wants this.” And yet they persist. Why? Class solidarity. The capitalist class, as a whole, has made a massive bet on AI: $1 trillion dollars”
Spreading misinformation
Turbocharging censorship
And, well, with money
“OpenAI exec becomes top Trump donor with $25 million gift” by Stephen Council, CNN.com
Crafting revenge porn for creepsters,
And for pedophiles,
Encouraging teenagers (and other vulnerable populations) to commit suicide,
“Parents of 16-year-old sue OpenAI, claiming ChatGPT advised on his suicide” by Clare Duffy, CNN.com
In less extreme cases, merely claiming to be a licensed therapist
Inspiring breaks with reality that destroy families
And openly aiming to “replace” human friendships
Giving politicians caught in obvious misbehavior an out because of “AI content” claims,
Committing war crimes,
In Palestine
Israel's military AI system has an admitted accuracy rate of, at best, 90%. The decision to intentionally target “junior militants” at home, using the least accurate bombs in the IDF’s arsenal, compounds a moral hazard that should be obvious: “When lethal targeting becomes a matter of clicking “approve” every few seconds based on computer prompts, the act of killing is distanced from human empathy and judgment.”
And Iran
Don’t let the title fool you: it’s about how claims of AI use don’t actually hide the human decisions that led to the murders of children -- after all, an “abstraction layer” keeping real humans from having to answer for the decision to commit war crimes might be the real selling point of the technology.
“It has also occluded something deeper: the human decisions that led to the killing of between 175 and 180 people, most of them girls between the ages of seven and 12. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it an “AI problem” gives those decisions, and those people, a place to hide.”
Causing the arrests of innocent people because of “errors” in the surveillance and AI-recognition technology,
Contributing to mass-deportation and harassment of immigrant communities,
Enhancing the market for personal data -- and providing the means for total surveillance --
“Apple is reportedly developing a wearable AI pin” by Will Shanklin, Engadget, 21 January 2026
Stealing openly from creatives (and harvesting stores of personal data) to create datasets,
Even though they don’t actually have to be that big to be effective
Enabling the ongoing job losses and institutional erosion wrought by D.O.G.E.,
And everywhere else
Enshittifying jobs that remain,
Journalism
“Refusing to accept an AI-poisoned future of journalism” by Marisa Kabas, The Handbasket
Copywriting
Nursing
Teaching
Tech
“AI Killed My Job: Tech workers” compiled by Brian Merchant, Blood in the Machine, 25 June 2025
Translation
“AI Killed My Job: Translators” compiled by Brian Merchant, Blood in the Machine, 21 August 2025
Visual Design
Fast Food
Perhaps most ironically, helplines
(Without even saving companies money)
“Humans are being hired to make AI slop look less sloppy” by Angela Yang, NBC News, 31 August 2025
“The core barrier to scaling is not infrastructure, regulation, or talent,” the report states. “It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.”
My response to that revelation is a very mature "No shit, Sherlock."
Recreating but anonymizing bias in hiring,
Supercharging algorithmic wage theft,
“On Algorithmic Wage Discrimination” by Veena Dubal, Columbia Law Review, Vol. 123, No. 7
Supercharging “dynamic” pricing,
“Delta Air Lines announced on a recent earnings call that it would be using artificial intelligence to price 20 percent of its airfares by the end of this year, employing an Israeli pricing company called Fetcherr to determine passenger 'pain points.' That could be accomplished by Delta and Fetcherr discovering that a passenger needs to be in another city for a conference or a business engagement, or even the funeral of a family member.”
And turbocharging algorithmic harassment of welfare recipients.
“When people try to sell you on the idea that the future
is already settled, it’s because it is deeply unsettled.
I think that this promise of an artificial intelligent future
is really just a collective anxiety
that the very wealthy, powerful people have
about how well they’re gonna be able to control us in the future.
If they can get us to accept that the future is already settled
– AI is already here, the end is already here –
then we will create that for them.
My most daring idea is to refuse.”
- Tressie McMillan Cottom
“Urban Consulate: Jason Reynolds & Tressie McMillan Cottom”
YouTube, 2 December 2025
Besides: It isn’t profitable.
“The Subprime AI Crisis” by Edward Zitron, Where’s Your Ed At, 16 September 2024
And even if the business model DID work, customers don't want it.
Why else does Microsoft have to force Copilot on everyone? If it were good and useful, people would choose it.
But over and over, we see that people DON'T like it. Young people hate it the most.
“Younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market. AI fluency and optimism here are moving in opposite directions,” said Tamilla Triantoro, Ph.D., Associate Professor of Business Analytics and Information Systems, Quinnipiac University School of Business.
Which means it will eventually pop the bubble and crash the economy
And leave people who relied on it stranded
“The findings are clear: Large Language Models (LLMs) like ChatGPT and Grok don’t just help students write—they train the brain to disengage. Here’s what the researchers found: using ChatGPT to help write essays leads to long-term cognitive harm—measurable through EEG brain scans. Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing. While the AI-generated content often scored well, the brains behind it were shutting down.” (emphasis mine)
-----
Even if you believe you’ve solved, at least for now, the academic honesty problem...
(You haven’t)
You should still object to:
Higher Education budgets (a limited resource) being used to fund billionaire vanity projects rather than professor salaries,
“Faculty Push Back to Open AI Deals” by Kathryn Palmer, Inside Higher Ed, 27 March 2026
Poisoning the job market for our new graduates,
Lowering the general threshold for higher level thinking and, yes, productivity,
“The findings are clear: Large Language Models (LLMs) like ChatGPT and Grok don’t just help students write—they train the brain to disengage. Here’s what the researchers found: using ChatGPT to help write essays leads to long-term cognitive harm—measurable through EEG brain scans. Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing. While the AI-generated content often scored well, the brains behind it were shutting down.” (emphasis mine)
Yes, I know this is a repeated source and quote. I thought it deserved the space twice.
Using your own academic work without compensation,
“My Publisher Fed My Book to AI” by Stephen Jackson, Inside Higher Ed, 30 September 2024
and, apparently, fake AI students scamming universities for financial aid funds.
BESIDES, “teaching AI skills” isn’t a good use of class time.
“Prompt engineering” isn’t hard,
AND there are hard limits on how much of a difference it makes.
Even AI companies know it; that’s why they don’t want you to use it to apply for a job with them.
Why not teach our students the skills they’ll need to adapt to whatever technologies are developed in the future, rather than to rely on these specific companies and their increasingly broken promises of greatness?
----
Acknowledgement
While I didn’t end up citing this directly, my approach was inspired by Anthony Moser’s “I Am An AI Hater” linked here, particularly the rhetorical flourish of hyperlinks in this paragraph:
“Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.
But I am more than a critic: I am a hater.”
-----
Further Reading:
Wisdom in Unionization:
Wisdom in Anger:
Anthony Moser’s “I Am An AI Hater”
Wisdom Through Humor:
“A.I. Teachers and Duolingo’s New Plan – What Could Go Wrong?” by Josh Johnson, YouTube, 27 May 2025
“When A.I. Competes: Deepseek Vs OpenAI Explained” by Josh Johnson, YouTube, 11 February 2025
“AI Slop: Last Week Tonight with John Oliver (HBO)” by John Oliver, Last Week Tonight, 23 June 2025
Wisdom From History:
Wisdom in Art:
“The Colonization of Confidence” by Robert Kingett, Sightless Scribbles, 7 December 2025
Wisdom Cutting Through Hype:
Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao, 20 May 2025
I'm working on a PDF version of the Long Scroll here, complete with the quotes and additional points, edited to fit the requirements of the 24" by 36" poster.