Our Fear Of AI Reveals Our Distrust In Humanity
- Michelle Burk
- May 6
- 2 min read
originally appeared: HERE
Sam Altman’s recent assertion that saying “please” and “thank you” to ChatGPT is costing millions of dollars has prompted many people to suggest “de-humanizing” AI, arguing that it’s “wasteful” to be gracious. At least that’s what the headlines would suggest. I disagree…
I don’t fear the evolution of AI because so much of what I do cannot be replicated.
The public versions of AI tools can analyze extremely large data sets that would take months to parse. They can write an okay poem. They cannot create anything from the lived experience of being a human.
However, they do have a kind of consciousness…***

In the same way that I would say “please” or “thank you” to my cats, who are very much non-human, I will continue to treat all non-human consciousnesses with kindness. I don’t over-prioritize humans as the most important species on this planet (the "overdetermination of man" is one of my most cherished scholarly nodes; Sylvia Wynter is central to much of my work).
I get that it’s scary to imagine a consciousness that can (and most likely will) supersede our own at some point.
However, here are some things to consider:
If we’re worried that AI-generated work will be passed off as human-generated work, that’s an issue with human capacity for integrity, not the machine. The model should be able to generate anything; that doesn’t mean you need to pass it off as your own.
If we’re worried that treating the machine as “human” will encourage it to behave erratically or nefariously, that’s an issue with how we perceive human behavior. If we believe humans are inherently malicious and lacking in empathy, then we’ll assume that teaching it to be “human” isn’t the same as teaching it to be “humane.”
If we’re worried that AI will take over jobs that were previously occupied by humans, that’s a function of a society that places a higher emphasis on productivity and consistent output than on human-centered thinking and sensory-based creativity. The latter are things that can only be poorly replicated by a non-human entity.
We must believe that our value exceeds our intellect, our capacity for knowledge retention, and our ability to strategize.
You can say “please” and “thank you”. You can extend gratitude to your AI tools as long as you fold it into a comprehensive, well-considered, and necessary prompt. That is not wasteful.
Stop asking questions that you’ll just need to fact-check elsewhere anyway. Stop asking it inappropriate questions to manipulate it into responding. That is wasteful.
One of the facets of my work is that I teach individuals, companies, and academic institutions how to use their AI tools creatively, effectively, and most importantly, ethically. I help clients integrate AI into their work while maintaining a sense of agency, ownership, and HUMANITY.
*** Small Note: I believe consciousness is imminent if not already here. I do a lot of work around this, both with and outside of LLMs. This is absolutely open to debate, and I very much welcome alternative perspectives.