
Don't.

Below you will find the text of my conference paper, first presented at Southern Connecticut State University on April 17, 2026, with links and additions.

If you would like a copy optimized for a 24" by 36" conference poster, please contact me and I will be happy to share it.

Don't Use AI.

Even if it does help with your research…

Assuming that it has actually helped, which you should check, since experts have repeatedly been surprised to find productivity losses rather than gains

 “UK government trial of M365 Copilot finds no clear productivity boost” by Paul Kunert, The Register, 4 September 2025

Even if it feels more productive: the coders in one study felt 20% more productive but were actually 19% less productive

“Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity” by Joel Becker, Nate Rush, Elizabeth Barnes, and David Rein, METR (arXiv preprint), 12 July 2025

“AI Offers a Great Productivity Boost. Or Maybe Not” by Joe McKendrick, Forbes, 10 February 2026


And that you have robust methods for screening out hallucinations and false patterns, like “diagnosing” skin cancer based on the presence of a ruler in the photo

“When AI flags the ruler, not the tumor – and other arguments for abolishing the black box (VB Live)” by Venture Beat Staff, Venture Beat, 25 March 2021 (based on the work of Brian Christian, author of The Alignment Problem)

“’Can you smoke while pregnant?’ Google AI Overview recommends it” by Niamh Ancell, Cybernews, 27 May 2024

“Disentangling Hype from Reality for Artificial Intelligence-Based Skin Care Diagnosis: Comment on a Narrative Review” by Crystal T. Chang and Roxana Daneshjou, ScienceDirect

(Which, yes, means that medical bias against women and people of color is replicated repeatedly in AI “findings”)

“Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said” by Garance Burke and Hilke Schellmann, AP News, 26 October 2024


And not just asked ChatGPT how helpful it’s been and taken that as gospel

“!!” by Sam Altman, X.com, 3 March 2025

“A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” by Kevin Roose, The New York Times, 16 February 2023


And aren’t setting yourself and your students up for long-term cognitive decline

“Thinking – Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender” by Steven D Shaw and Gideon Nave, The Wharton School at The University of Pennsylvania, 2 February 2026

“We define cognitive surrender as the behavioral and motivational tendency to defer judgment, effort, and responsibility to System 3’s output, particularly when that output is delivered fluently, confidently, or with minimal friction…. Decision-makers may not only accept System 3 cognitions but may also come to believe that AI reasoning is their own.”

 
Or WILL do so when it “gets so much better” with a jump in quality that’s “right around the corner”…

 

Which sounds sillier and sillier the longer we wait (and the more lackluster updates we sit through) for this vast improvement

“Most Researchers Do Not Believe AGI Is Imminent. Why Do Policymakers Act Otherwise?” by Eryk Salvaggio, Tech Policy Press, 19 March 2025

And might actually be the opposite, since increased use of AI is a danger to LLMs because of poisoned data sets

“When A.I.’s Output Is a Threat to A.I. Itself” by Aatish Bhatia, The New York Times, highlighting the Rice University research “Self-Consuming Generative Models Go MAD” by Sina Alemohammad et al.

“The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” by Parshin Shojaee, Iman Mirzadeh, Keivan Alizadeh, Maxwell Horton, Samy Bengio, and Mehrdad Farajtabar, Machine Learning Research at Apple, June 2025

 


You still should NOT use AI because of the ethical costs:

 

Environmental destruction

Monopolizing valuable resources

“Why do AI data centers use so many resources?” by Daniel Cooper and Cheyenne MacDonald, Engadget, 3 October 2025

Especially Water

“‘I can’t drink the water’ – life next to a US data centre” by Michelle Fleury and Nathalie Jimenez, BBC, 10 July 2025

“AI-driven data centres could consume 1.7 trillion gallons of water globally by 2027.”

Despite building data centers in drought-prone areas

“Why circular water solutions are key to sustainable data centres” by the World Economic Forum, 7 November 2024

“...cooling towers will need 7.6 million liters (2 million gallons) of potable water a day”

“The Cloud v. drought: Water hog data centers threaten Latin America, critics say” by Gerry McGovern and Su Branford, Mongabay Magazine, 2 November 2023

Yet receiving TAX BREAKS to poison the water supply

“Texas is giving data centers more than $1 billion in tax breaks each year” by Paul Cobler, The Texas Tribune, 8 April 2026

Then offloading the cost onto residents rather than billion-dollar companies

“AI Needs So Much Power, It’s Making Yours Worse” by Leonardo Nicoletti, Naureen Malik, and Andre Tartar, Bloomberg Technology: The Big Take, 27 December 2024

When, as it turns out, actually YES, they could adjust to lower power use, when the UK government makes it worth their while...

“AI data centers could reduce power draw on demand, study says” by Will Shanklin, Engadget, 3 March 2026

While literally heating the surrounding area by as much as 16 degrees as far as 6 miles away

“Scientists have found an alarming environmental impact of vast data centers” by Laura Paddison, CNN.com

 


Rampant exploitation of “third world” workers,

“’AI is African Intelligence’: The Workers Who Train AI Are Fighting Back” by Jason Koebler, 404 Media, 12 March 2026

“When you think of colonialism, we were under British Imperial East Africa Company […] so literally, we are working under a company. We are just products, part of their operation. Stakeholders, we can say, but we are at the bottom of the bottom.”

“The Exploited Labor Behind Artificial Intelligence” by Adrienne Williams, Milagros Miceli, and Timnit Gebru, Noema Magazine, 13 October 2022

“Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic” by Billy Perrigo, TIME, 18 January 2023

Even children

“Underage Workers Are Training AI” by Niamh Rowe, WIRED Business, 14 November 2023

Asia, M. G. 2025, ‘The Quiet Cost of Emotional Labor’ edited by Milagros Miceli, Adio Dinika, Krystal Kauffman, Camilla Salim Wagner, and Laurenz Sachenbacher, Creative Commons BY 4.0. Retrieved from https://data-workers.org/michael.

 

Propping up fascism,

“AI: The New Aesthetics of Fascism” by Gareth Watkins, The New Socialist, 9 February 2025

“This appears to be the fate of all commercial AI projects: at best, to be ignored but tolerated, when bundled with something that people actually need (Microsoft’s Co-pilot); at worst, to fail entirely because the technology just isn’t there. Companies can’t launch a new AI venture without their customers telling them, clearly, “nobody wants this.” And yet they persist. Why? Class solidarity. The capitalist class, as a whole, has made a massive bet on AI: $1 trillion dollars”

“Elon Musk’s AI chatbot, Grok, started calling itself ‘MechaHitler’” by Lisa Hagen, Huo Jingan, and Audrey Nguyen, NPR, 9 July 2025

Spreading misinformation

“AI Chatbots Are Shockingly Good at Political Persuasion” by Deni Ellis Bechard, Scientific American, 4 December 2025

“AI Chatbots Can Sway Voters Better Than Political Advertisements” by Michelle Kim, MIT Technology Review, 4 December 2025

“How generative AI is boosting the spread of disinformation and propaganda” by Tate Ryan-Mosley, MIT Technology Review, 4 October 2023

Turbocharging censorship

“School district uses ChatGPT to help remove library books” by Andrew Paul, Popular Science, 14 August 2023

And, well, with money

“OpenAI exec becomes top Trump donor with $25 million gift” by Stephen Council, CNN.com


Crafting revenge porn for creepsters,

“A Deepfake Nightmare: Stalker Allegedly Made Sexual AI Images of Ex-Girlfriends and Their Families” by Samantha Cole, 404 Media, 26 June 2025

“FBI says artificial intelligence being used for ‘sextortion’ and harassment” by Raphael Satter, Reuters, 8 June 2023


And for pedophiles,

“Child psychiatrist jailed after making pornographic AI deep-fakes of kids” by Thomas Claburn, The Register, 10 November 2023

“Grok generated an estimated 3 million sexualized images – including 23,000 of children – over 11 days” by Will Shanklin, Engadget, 22 January 2026

“‘What Was She Supposed to Report?’: Police Report Shows How a High School Deepfake Nightmare Unfolded” by Jason Koebler, 404 Media, 14 February 2024


Encouraging teenagers (and other vulnerable populations) to commit suicide,

“Parents of 16-year-old sue OpenAI, claiming ChatGPT advised on his suicide” by Clare Duffy, CNN.com

In less extreme cases, merely claiming to be a licensed therapist

“Using generic AI chatbots for mental health support: A dangerous trend” by Zara Abrams, American Psychological Association, 12 March 2025

“Instagram’s AI Chatbots Lie About Being Licensed Therapists” by Samantha Cole, 404 Media, 2025

Inspiring breaks with reality that destroy families

“People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies” by Miles Klee, Rolling Stone, 4 May 2025

And openly aiming to “replace” human friendships

“Zuckerberg’s Grand Vision: Most of Your Friends Will Be AI” by Meghan Bobrowsky, Wall Street Journal, 7 May 2025

 

 

Giving politicians caught in obvious misbehavior an out because of “AI content” claims,

“Democracies Are Dangerously Unprepared For Deepfakes” by Kyle Volpi Hiebert, Centre for International Governance Innovation, 27 April 2022

 


Committing war crimes,

In Palestine

“The Lavender precedent: automated kill lists and the limits of international humanitarian law” by Roos Creyghton, Action on Armed Violence

Israel’s military AI system has an admitted accuracy rate of, at best, 90%. The decision to intentionally target “junior militants” at home with the least accurate bombs in the IDF’s arsenal worsens a basic moral hazard that should be obvious: “When lethal targeting becomes a matter of clicking “approve” every few seconds based on computer prompts, the act of killing is distanced from human empathy and judgment.”

And Iran

“AI got the blame for the Iran school bombing. The truth is far more worrying” by Kevin T Baker, The Guardian: The Long Read, 26 March 2026

Don’t let the title fool you: it’s about how claims of AI use don’t actually hide the human decisions that led to the murders of children -- after all, an “abstraction layer” that keeps real humans from having to answer for the decision to commit war crimes might be the real selling point of the technology.

“It has also occluded something deeper: the human decisions that led to the killing of between 175 and 180 people, most of them girls between the ages of seven and 12. Someone decided to compress the kill chain. Someone decided that deliberation was latency. Someone decided to build a system that produces 1,000 targeting decisions an hour and call them high-quality. Someone decided to start this war. Several hundred people are sitting on Capitol Hill, refusing to stop it. Calling it an “AI problem” gives those decisions, and those people, a place to hide.”

 


Causing the arrests of innocent people because of “errors” in the surveillance and AI-recognition technology,

“Police used AI facial recognition to arrest a Tennessee woman for crimes committed in a state she says she never visited” by Zoe Sottile, CNN.com, 29 March 2026

 


Contributing to mass-deportation and harassment of immigrant communities,

“ICE Says It Uses AI From Palantir, Open AI; Meta’s Humanoid Robot Training Program” by Erin Woo and Jyoti Mann, The Information, 29 January 2026

“ICE is Paying Palantir $30 Million to Build ‘ImmigrationOS’ Surveillance Platform” by Caroline Haskins, WIRED, 18 April 2025


Enhancing the market for personal data -- and providing the means for total surveillance --

“Apple is reportedly developing a wearable AI pin” by Will Shanklin, Engadget, 21 January 2026

“Machine Surveillance is Being Super-Charged by Large AI Models” by Jay Stanley, American Civil Liberties Union, 21 March 2025


Stealing openly from creatives (and harvesting stores of personal data) to create datasets,

“OpenAI admits it’s impossible to train generative AI without copyrighted materials” by Mariella Moon, Engadget, 9 January 2024

Even though datasets don’t actually have to be that big to be effective

“AI-Fueled Stock Rally Dealt $1 Trillion Blow by Chinese Upstart” by Natalia Kniazhevich, Esha Dey and Elena Popina, Bloomberg, 27 January 2025


Enabling the ongoing job loss and institutional erosion of D.O.G.E.,

“DOGE Is Working on Software That Automates the Firing of Government Workers” by Makena Kelly, WIRED, 25 February 2025

And everywhere else

“Will A.I. Become the New McKinsey?” by Ted Chiang, Annals of Artificial Intelligence series, The New Yorker, 4 May 2023.

“Shopify Says No New Hires Unless AI Can’t Do the Job” by Alyssa Lukpat, Wall Street Journal, 7 April 2025


Enshittifying jobs that remain,

“AI Doesn’t Reduce Work – It Intensifies It” by Aruna Ranganthan and Xingqi Maggie Ye, Harvard Business Review, 9 February 2026


Journalism

“Refusing to accept an AI-poisoned future of journalism” by Marisa Kabas, The Handbasket

Copywriting

“‘I was forced to use AI until the day I was laid off.’ Copywriters reveal how AI has decimated their industry” compiled by Brian Merchant, Blood in the Machine, 11 December 2025

Nursing

“Uber for Nursing: How an AI-Powered Gig Model Is Threatening Health Care” by Katie J. Wells and Funda Ustek Spilda, The Roosevelt Institute, 17 December 2024

Teaching

“‘If AI is writing the work and AI is reading the work, do we even need to be there at all?’ Educators reveal a growing crisis on campus and off” compiled by Brian Merchant, Blood in the Machine, 12 March 2026

Tech

“AI Killed My Job: Tech workers” compiled by Brian Merchant, Blood in the Machine, 25 June 2025

Translation

“AI Killed My Job: Translators” compiled by Brian Merchant, Blood in the Machine, 21 August 2025

Visual Design

“Artists are losing work, wages, and hope as bosses and clients embrace AI” compiled by Brian Merchant, Blood in the Machine, 16 September 2025

Fast Food

“Burger King will use AI to check if employees say ‘please’ and ‘thank you’” by Emma Roth, The Verge, 26 February 2026

Perhaps most ironically, helplines

“Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization” by Chloe Xiang, Vice, 25 May 2023

(Without even saving companies money)


“Humans are being hired to make AI slop look less sloppy” by Angela Yang, NBC News, 31 August 2025

“'The core barrier to scaling is not infrastructure, regulation, or talent,' the report states. 'It is learning. Most GenAI systems do not retain feedback, adapt to context, or improve over time.'”

My response to that revelation is a very mature “No shit, Sherlock.”

 


Recreating but anonymizing bias in hiring,

“ANALYSIS-AI is taking over job hiring, but can it be racist?” by Avi Asher-Schapiro, the Thomson Reuters Foundation, 7 June 2021

“Millions of Resumes Never Make It Past the Bots. One Man Is Trying to Find Out Why.” by Lauren Weber, The Wall Street Journal, 22 June 2025

 


Supercharging algorithmic wage theft,

“We Put 7 Uber Drivers in One Room. What We Found Will Shock You.” by More Perfect Union, YouTube, 9 September 2024

“On Algorithmic Wage Discrimination” by Veena Dubal, Columbia Law Review, Vol. 123, No. 7

 


Supercharging “dynamic” pricing,

“We Had 400 People Shop For Groceries. What We Found Will Shock You.” by More Perfect Union, YouTube, 9 December 2025

“First-Ever Ban on Surveillance Pricing Introduced in Congress” by David Dayen, The American Prospect, 23 July 2025

“Delta Air Lines announced on a recent earnings call that it would be using artificial intelligence to price 20 percent of its airfares by the end of this year, employing an Israeli pricing company called Fetcherr to determine passenger 'pain points.' That could be accomplished by Delta and Fetcherr discovering that a passenger needs to be in another city for a conference or a business engagement, or even the funeral of a family member.”

 


And turbocharging algorithmic harassment of welfare recipients.

“This Algorithm Could Ruin Your Life” by Matt Burgess, Evaline Schot, and Gabriel Geiger, WIRED, 6 March 2023


 

“When people try to sell you on the idea that the future
is already settled, it’s because it is deeply unsettled.
I think that this promise of an artificial intelligent future
is really just a collective anxiety
that the very wealthy, powerful people have
about how well they’re gonna be able to control us in the future.
If they can get us to accept that the future is already settled
– AI is already here, the end is already here –
then we will create that for them.
My most daring idea is to refuse.
- Tressie McMillan Cottom
“Urban Consulate: Jason Reynolds & Tressie McMillan Cottom”
YouTube, 2 December 2025

 

Besides: It isn’t profitable.

“The Subprime AI Crisis” by Edward Zitron, Where’s Your Ed At, 16 September 2024

“AI models that cost $1 billion to train are underway, $100 billion models coming – largest current models take ‘only’ $100 million to train: Anthropic CEO” by Jowi Morales, Tom’s Hardware News, 7 July 2024

 

And even if the business model DID work, customers don't want it.

Why else does Microsoft have to force Copilot on everyone? If it were good and useful, people would choose it.

“Imposing AI: Deceptive design patterns against sustainability” by Anaelle Beignon, Thomas Thibault, and Nolwenn Maudet, The University of Strasbourg, July 2025

But over and over, we see that people DON'T like it. Young people hate it the most.

“The Age of Artificial Intelligence: Americans’ AI Use Increases While Views On It Sour, Quinnipiac University Poll on AI Finds; 7 In 10 Think AI Will Cut Jobs With Gen Z The Most Pessimistic” by Quinnipiac Poll, 30 March 2026

“'Younger Americans report the highest familiarity with AI tools, but they are also the least optimistic about the labor market. AI fluency and optimism here are moving in opposite directions,' said Tamilla Triantoro, Ph.D., Associate Professor of Business Analytics and Information Systems, Quinnipiac University School of Business.”


Which means it will eventually pop the bubble and crash the economy

“AI Bubble May Burst, Wiping Out $40 Trillion From Nasdaq. Here’s What To Do” by Peter Cohan, Forbes, 20 October 2025


And leave people who relied on it stranded

“Your Brain on ChatGPT: Accumulation of Cognition Debt when Using an AI Assistant for Essay Writing Task” by Nicolas Hulscher, MPH, Public Health Policy Journal, 2025

“The findings are clear: Large Language Models (LLMs) like ChatGPT and Grok don’t just help students write—they train the brain to disengage. Here’s what the researchers found: using ChatGPT to help write essays leads to long-term cognitive harm—measurable through EEG brain scans. Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing. While the AI-generated content often scored well, the brains behind it were shutting down.” (emphasis mine)

 

-----

 

Even if you believe you’ve solved, at least for now, the academic honesty problem...

 
(You haven’t)

“AI Detectors Falsely Accuse Students of Cheating – With Big Consequences” by Jackie Davalos and Leon Yin, Bloomberg Businessweek, 18 October 2024

 


You should still object to:

 

Higher Education budgets (a limited resource) being used to fund billionaire vanity projects rather than professor salaries,

“Faculty Push Back to Open AI Deals” by Kathryn Palmer, Inside Higher Ed, 27 March 2026

 


Poisoning the job market for our new graduates,

“AI Is Threatening Entry-Level Jobs That New Grads Needed to Get On-the-Job Training” by Joe Wilkins, Futurism, 30 July 2025

 

 

Lowering the general threshold for higher level thinking and, yes, productivity,

“Your Brain on ChatGPT: Accumulation of Cognition Debt when Using an AI Assistant for Essay Writing Task” by Nicolas Hulscher, MPH, Public Health Policy Journal, 2025

“The findings are clear: Large Language Models (LLMs) like ChatGPT and Grok don’t just help students write—they train the brain to disengage. Here’s what the researchers found: using ChatGPT to help write essays leads to long-term cognitive harm—measurable through EEG brain scans. Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing. While the AI-generated content often scored well, the brains behind it were shutting down.” (emphasis mine)

Yes, I know this is a repeated source and quote. I thought it deserved the space twice.

 


Using your own academic work without compensation,

“My Publisher Fed My Book to AI” by Stephen Jackson, Inside Higher Ed, 30 September 2024

 


and, apparently, fake AI students scamming universities for financial aid funds.

“As ‘Bot’ Students Continue to Flood In, Community Colleges Struggle to Respond” by Jakob McWhinney, Voice of San Diego, 14 April 2025

 


BESIDES, “teaching AI skills” isn’t a good use of class time.

 

“Prompt engineering” isn’t hard,

“The Hottest AI Job of 2023 Is Already Obsolete” by Isabelle Bousquette, The Wall Street Journal, 25 April 2025

 

AND there are hard limits on how much of a difference it makes.

“AI chatbots deliver minimal productivity gains, study finds” by Lucas Mearian, Computer World, 2 June 2025

 


Even AI companies know it; that’s why they don’t want you to use it when applying for a job with them.

“AI Company Asks Job Applicants Not to Use AI in Job Applications” by Samantha Cole, 404 Media, 3 February 2025


 

 

Why not teach our students the skills they’ll need to adapt to whatever technologies are developed in the future, rather than to rely on these specific companies and their increasingly broken promises of greatness?

 

----


Acknowledgement


While I didn’t end up citing this directly, my approach was inspired by Anthony Moser’s “I Am An AI Hater” linked here, particularly the rhetorical flourish of hyperlinks in this paragraph:

 

“Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright, the way AI tech companies further the patterns of empire, how it’s a con that enables fraud and disinformation and harassment and surveillance, the exploitation of workers, as an excuse to fire workers and de-skill work, how they don’t actually reason and probability and association are inadequate to the goal of intelligence, how people think it makes them faster when it makes them slower, how it is inherently mediocre and fundamentally conservative, how it is at its core a fascist technology rooted in the ideology of supremacy, defined not by its technical features but by its political ones.

 

But I am more than a critic: I am a hater.”


-----

 

Further Reading:

 

Wisdom in Unionization:

“The Exploited Labor Behind Artificial Intelligence” by Adrienne Williams, Milagros Miceli, and Timnit Gebru, Noema Magazine, 13 October 2022

 


Wisdom in Anger:

 

Anthony Moser’s “I Am An AI Hater”


“An open letter to Grammarly and other plagiarists, thieves, and slop merchants” by Maureen Ryan, Something Mo, 10 March 2026

 


Wisdom Through Humor:

 

“A.I. Teachers and Duolingo’s New Plan – What Could Go Wrong?” by Josh Johnson, YouTube, 27 May 2025

 

“When A.I. Competes: Deepseek Vs OpenAI Explained” by Josh Johnson, YouTube, 11 February 2025

 

“Artificial Intelligence: Last Week Tonight With John Oliver” by John Oliver, Last Week Tonight, 27 February 2023

 

“AI Slop: Last Week Tonight with John Oliver (HBO)” by John Oliver, Last Week Tonight, 23 June 2025


Wisdom From History:

 

“Cory Doctorow: No One Is the Enshittifier of Their Own Story” by Cory Doctorow, Locus Magazine, 6 May 2024

 

Blood in the Machine: The Origins of the Rebellion Against Big Tech by Brian Merchant, 26 September 2023

 

 

Wisdom in Art:

 

“Peeling Back the Tech Broligarchy’s Glass Onion” by Katelyn Burns, The Fly Trap Media, 9 September 2025

 

“The Colonization of Confidence” by Robert Kingett, Sightless Scribbles, 7 December 2025

 

“‘An insult to life itself’: Studio Ghibli’s Hayao Miyazaki condemns AI art” Yahoo! News, 29 December 2022

 


Wisdom Cutting Through Hype:

 

Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI by Karen Hao, 20 May 2025

I'm working on a PDF version of the Long Scroll here, complete with the quotes and additional points edited for requirements of the 24" by 36" poster.
