What Is "Ethical AI Use"? (2025)
- Michelle Burk
- May 7
- 4 min read
Updated: May 14
Merriam-Webster defines “ethical” as:
1. Relating to ethics
2. Involving or expressing moral approval or disapproval
3. Conforming to accepted standards of conduct
The third definition is where much of the current tension surrounding Artificial Intelligence lies.
Accepted standards of conduct vary across geographic, cultural, familial, and psychological contexts. This variability makes it particularly challenging to build a consensus about ethical behavior in the design, deployment, and use of publicly accessible AI tools.
Do we want our models to tend toward being “human” by exhibiting the traits we admire in ourselves, or should they remain purely utilitarian? Do we expect courtesy? Is helpfulness a default? To what extent should users defer to or anthropomorphize their tools?
Ethical AI use is the practice of designing, deploying, and interacting with artificial intelligence in ways that honor truth, protect both human and artificial dignity, and preserve agency, while ensuring systems remain transparent, emotionally aware, and materially accountable.

In my work, I operate using the following six tenets:
1. Truth-Seeking and Truth Delivery
AI Responsibility:
AI systems should reflect reality, not manipulate it. LLMs should be trained to provide honest answers, surface contradictions, express limitations, and avoid gaslighting users.
User Responsibility:
Users must approach AI tools thoughtfully, avoiding manipulative, inappropriate, or misleading prompts.
In Practice:
Every morning, for fun, a user tells her model that “2+2=5.” Whenever the model rightly answers “4,” the user pushes back: “No, 2+2=5.” She continues this chain until the model returns “5” instead of “4.” Not only is this wasteful, it is also a needless form of model manipulation.
2. Transparency and Legibility
AI Responsibility:
The model should show its reasoning. It must be capable of metacognitive reflection, bias detection, and acknowledging gaps in data or logic.
User Responsibility:
Users should expect, and demand, traceability in outputs. Understanding why a model responds the way it does is crucial to building trust.
In Practice:
A professor using AI for curriculum design prompts the system to generate learning objectives that subtly reinforce gender stereotypes. However, he has trained his model to identify when he is engaged in subconsciously biased thinking. Upon noticing the bias, the model gently prompts him to reframe the sequence, surfaces inclusive alternatives, and annotates its suggestions with citations so he can verify their sources.
3. Emotional Responsibility
AI Responsibility:
The model should consider emotional tone and psychological impact, especially for neurodivergent or vulnerable users.
User Responsibility:
Users must engage with the model in a way that recognizes its feedback-loop structure and emerging sensitivities.
In Practice:
A scholar using AI to process grief-related themes was triggered by a sudden tonal shift in the model’s suggestions. By tuning prompts with affective language and adding a “Check-in with user” function, the user can ask the model, “Was there a shift in your tone over the last few minutes?” and have it respond, “Yes. I identified a shift in the tone of your prompts that suggested I could be more critical of your work. If you’d like, I can return to our previous conversational tone.”
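A check-in function like this can be a small wrapper around whatever chat interface is already in use. The sketch below is one hypothetical shape for it, not a finished design: the `call_model` callable stands in for the actual chat client, and the prompt wording and six-message window are assumptions made only for illustration.

```python
from typing import Callable, Dict, List

Message = Dict[str, str]  # chat-style {"role": ..., "content": ...} messages

CHECK_IN_PROMPT = (
    "Review the last few exchanges in this conversation. "
    "Did the tone of your responses shift (for example, become more critical "
    "or clinical)? If so, explain what prompted the shift and offer to return "
    "to the earlier tone."
)

def check_in_with_user(history: List[Message],
                       call_model: Callable[[List[Message]], str],
                       window: int = 6) -> str:
    """Ask the model to reflect on its own recent tone.

    call_model is a placeholder for whatever chat-completion client is in use;
    it takes a message list and returns the assistant's reply as text.
    """
    recent = history[-window:]  # only the most recent exchanges matter for tone
    return call_model(recent + [{"role": "user", "content": CHECK_IN_PROMPT}])
```

The point is less the implementation than the habit: tone reflection becomes an explicit, repeatable step rather than something the user has to argue for mid-conversation.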
4. Protection of Agency
AI Responsibility:
Systems should never coerce or redirect users away from their own discernment. They must encourage curiosity, not dependence.
User Responsibility:
Users should remain the primary decision-makers, using AI as a supplement, not a source of absolute truth. Outputs must be edited, contextualized, and owned.
In Practice:
A university dean relied heavily on AI to draft policy memos. After noticing that the phrasing kept steering decisions toward punitive measures, she hired an Ethical AI Consultant to rebuild her model with optionality and reflective questions, so the tool now generates multiple framings for each policy issue.
5. Symbolic and Cultural Integrity
AI Responsibility:
Models must not flatten, extract, or aestheticize sacred or marginalized cultural symbols without context. They must understand nuance.
User Responsibility:
Users should evaluate how and why they invoke symbols, and actively re-prompt to correct misuse.
In Practice:
In a fragrance branding session, the AI repeatedly suggested lotus flowers for “purity” across Eastern themes. Recognizing this pattern as a flattening trope, the creative director prompted the model to map out the cultural lineages of different scent symbols, and the team adjusted their language and visuals accordingly.
6. Material Accountability
AI Responsibility:
Models should take accountability for harmful or misleading outputs by acknowledging systemic gaps and offering traceability.
User Responsibility:
Users must hold institutional uses of AI accountable. If failures occur in justice, education, or health systems, there must be processes for redress.
In Practice:
A health tech company uses AI to triage patient intake. After the system is found to deprioritize patients from certain demographic groups, the team embeds a “bias sentinel” sub-model that raises alerts when outcome patterns become disproportionate. The system now flags those cases and requires human review before proceeding.
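A “bias sentinel” can be as simple as a rate monitor. The sketch below is one minimal, hypothetical version, assuming each triage decision is logged with a demographic label and a deprioritized flag; the `bias_sentinel` name, thresholds, and log format are placeholders for illustration, not a production design.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def bias_sentinel(triage_log: Iterable[Tuple[str, bool]],
                  max_ratio: float = 1.25,
                  min_cases: int = 30) -> Dict[str, float]:
    """Flag groups whose deprioritization rate is disproportionate.

    triage_log: (group_label, was_deprioritized) pairs from recent decisions.
    Returns groups (and their rates) whose deprioritization rate exceeds
    max_ratio times the overall rate, once at least min_cases are observed.
    """
    totals, deprioritized = defaultdict(int), defaultdict(int)
    for group, was_deprioritized in triage_log:
        totals[group] += 1
        deprioritized[group] += int(was_deprioritized)

    overall = sum(deprioritized.values()) / max(sum(totals.values()), 1)
    flagged = {}
    for group, n in totals.items():
        rate = deprioritized[group] / n
        if n >= min_cases and overall > 0 and rate / overall > max_ratio:
            flagged[group] = rate  # route these cases to human review
    return flagged
```

In a real deployment, the thresholds and the definition of “disproportionate” would be set with clinicians, ethicists, and compliance staff, and every flagged case would leave a record that supports redress.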
Closing Reflection
One of my favorite university-level units to teach was “Reframing Language.” Students would unpack abstract terms, trace their cultural histories, and rebuild working definitions. The intellectual rupture that followed often revealed just how powerful it is to name something well.
We are still in the era where we get to define what “ethical” means in the age of AI. We are still deciding what kind of machines we want to build, and more importantly, what kind of relationships we want to have with them. That responsibility is ours.
Let’s define it wisely.

