Using AI ethically: 6 tips for bringing AI tools into learning and work
How do you AI? Environmental, privacy, political, and intellectual property issues aside (and those are major issues), there are many ethical considerations involved in how we approach our own day-to-day use of AI tools.

We interviewed Nikolaus Klassen, business analyst at Google and ATLAS lecturer, on the topic of AI ethics. You can read highlights from the conversation in our article, Exploring the ethics of AI: Can we use tools like ChatGPT consciously?
During our interview, Klassen also proposed tips we can use when considering how to incorporate AI tools into our work. Take a look:
Consider hidden taxonomies
All the processed information we use in AI tools is organized through taxonomies—systems for naming, labeling and cataloging datapoints. The question that we should ask ourselves when we get an AI output is: What are the hidden taxonomies and assumptions this is built on?
For instance, AI image tools often respond to the prompt “picture of a CEO” with outputs of a (usually white) man in a suit. Nobody explicitly trained the models to do that, nor did the user request it. This is an example of a hidden taxonomy.
That may be an obvious example of bias baked into the system over time, but there are as-yet undiscovered taxonomies in the data AI tools draw on.
Note the law of the instrument
Abraham Maslow said, “If the only tool you have is a hammer, you tend to see every problem as a nail.” Ask yourself: Am I distorting reality to fit my tool?
The example Klassen uses in class is predictive policing, which applies an algorithm fed on historical data that may be incomplete, biased, and a poor analog for the present. It is a use case in which we must distort reality to make the tool work where it does not fit the problem.
Do a reality check
Consider what would happen if you were to act on the advice an AI tool outputs. Does this recommendation actually fit this situation? Consider why the AI tool is offering a particular choice in a specific way. This will become increasingly important as AI companies incorporate advertising into their platforms.
Is the tool framing the choice appropriately or could there be a more ethically sound way to frame this choice?
Hone your judgment skills
Before AI tools became ubiquitous, students and junior workers typically turned what they learned into artifacts—they would write a software function, develop a mathematical proof, draft an essay, or sketch out a design. Such artifacts were the output of the mental work they did.
Now that AI can easily create artifacts, such outputs can no longer be considered the endpoint of mental work. When artifacts are cheap, judgment becomes more valuable.
If we do not have to build research reports, analyses, recommendations or even creative designs ourselves—as junior workers often did in many fields—we risk losing an entire infrastructure designed to train the next generation of leaders to have refined judgment and discernment skills.
We must be diligent in learning to judge artifacts made by AI and determining how to iterate on and improve them.
Key terms
- Choice architecture - A deliberate design of a tool or environment that influences how people make decisions without directly restricting choice.
- Deontology - The theory that there are absolute moral obligations that must be followed regardless of consequences, exceptions, or potential benefits.
- Law of the instrument - A cognitive bias toward over-reliance on a familiar tool for solving problems, regardless of suitability.
- Moral licensing - A phenomenon in which people justify an immoral action after having previously done something good.
- Utilitarianism - The theory that the most moral action is the one that maximizes good and minimizes suffering for the greatest number of people.
Watch for “workslop”
Harvard Business Review recently published an article contending that low-effort, AI-generated “workslop” is eroding workplace productivity. Consider overly wordy reports with hidden errors, customer service chatbots that lead users to dead ends, drab creative copy, and business recommendations that presenters cannot defend. AI can make people slower when they have to wade through slop, and recipients often return the favor with AI-generated responses of their own.
AI tools can feel like fast food, giving us something quick and easy that may not be very nutritious. Yet we are given no "nutrition facts"—we do not know where the output is incorrect or shows bias or fails to give the full story.
Like diet and exercise, it takes conscious effort to maintain a healthy relationship with AI tools. If you want to stay mentally engaged, you have to do the equivalent of going to the gym and working out.
Beware the dopamine peak
When making something yourself from scratch, your work builds up to the moment of completion—this creates a dopamine peak, a temporary surge in the "feel-good" neurotransmitter.
But when AI brings you to that peak immediately with little effort, the drop afterward is just as quick. You may lose motivation and fail to internalize what you just completed.
We would do well to learn to use AI tools as a means of continued development toward mastery of a craft, rather than simply as time savers.