Ethical Considerations for AI in the Workplace
A guide to using AI responsibly in the workplace, covering the benefits of AI adoption, ethical and societal concerns, environmental impact, and practical strategies for responsible implementation. This article is based on discussions with colleagues in my workplace who had some concerns about the rise and use of AI.
It's hard to escape hearing about, and using, AI these days. Many of the applications, programmes, and services we all use have some element of AI functionality.
Governments, businesses, and individuals around the world are utilising this new technology. With that comes a high potential for reward, but it is not without risk nor without social, ethical, and environmental challenges.
I'll go through some of the challenges that AI presents to us as a society, and as leaders and employees in the workplace. We cannot simply ignore AI and other similar technological advances, just as businesses could not ignore the rise of the Internet. What we must do is set clear standards for the use of AI, and ensure we utilise services in a manner that aligns with our goals and values.
Note: Where I refer to "AI" I am referring to "generative Artificial Intelligence" — the systems used to generate text, images, videos, and audio. Where I refer to "AI models", I am referring to Large Language Model (LLM) AIs such as Claude, ChatGPT, and Google Gemini. When referring to "AI Assistants" I am referencing the interfaces we use to communicate with the AI models (e.g. the Claude app or website).
Why should we use AI in the workplace?
There are four broad areas where AI has the potential to really help businesses and employees.
1. Delivering great customer service
AI can help us respond to customers more quickly and more accurately, particularly outside of office hours and during peak periods. One example is AI integration into messaging platforms, where customers can ask for assistance and receive a personalised response drawn from existing knowledge bases. This has been particularly helpful for customers who make contact outside of normal support hours: they have been able to get the help and advice they need 24/7.
By having AI handle some of the more standard queries, such as "do you deliver to Northern Ireland?", Customer Support teams have been able to give more time to the more complex and sensitive queries such as delivery or quality issues.
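To make that pattern concrete, here is a minimal sketch of a knowledge-base-backed assistant with simple human escalation. It uses the Anthropic Python SDK; the knowledge-base snippets, escalation keywords, and model name are illustrative assumptions rather than a production design.

```python
# Minimal sketch: answer standard queries from a small knowledge base,
# and route complex or sensitive queries straight to a human.
# Assumes the `anthropic` package and an ANTHROPIC_API_KEY environment variable.
import anthropic

# Illustrative knowledge-base snippets (in practice, pulled from your own docs).
KNOWLEDGE_BASE = """\
Delivery: We deliver across the UK, including Northern Ireland (2-4 working days).
Returns: Unused items can be returned within 30 days for a full refund.
"""

# Queries containing these words go straight to the Customer Support team.
ESCALATION_KEYWORDS = {"complaint", "damaged", "refund", "urgent"}

def answer_query(query: str) -> str:
    if any(word in query.lower() for word in ESCALATION_KEYWORDS):
        return "Thanks for getting in touch. A member of our team will reply shortly."

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=300,
        system="Answer using only this knowledge base. If the answer is not "
               f"covered, say a colleague will follow up.\n\n{KNOWLEDGE_BASE}",
        messages=[{"role": "user", "content": query}],
    )
    return response.content[0].text

print(answer_query("Do you deliver to Northern Ireland?"))
```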
AI is also useful in helping us understand and respond to customer feedback. Feedback arrives from various sources, often as free-text "natural language", which is hard to turn into measurable data. AI can read that natural language and help us classify it, for example sorting comments into themes such as delivery, product quality, or pricing, and gauging the sentiment of each.
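As a sketch of what that classification might look like, the following asks a model to tag each piece of free-text feedback with a theme and a sentiment. The theme list and prompt wording are assumptions for illustration.

```python
# Minimal sketch: tag free-text feedback with a theme and sentiment so it
# can be counted and charted like any other measurable data.
# Assumes the `anthropic` package and an ANTHROPIC_API_KEY environment variable.
import json
import anthropic

THEMES = ["delivery", "product quality", "pricing", "customer service", "other"]

def classify(feedback: str) -> dict:
    prompt = (
        "Classify this customer feedback. Reply with JSON only, e.g. "
        '{"theme": "delivery", "sentiment": "negative"}. '
        f"Valid themes: {THEMES}.\n\nFeedback: {feedback}"
    )
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative model name
        max_tokens=100,
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.content[0].text)

print(classify("Parcel arrived two days late and the box was soggy."))
# Expected shape: {"theme": "delivery", "sentiment": "negative"}
```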
2. Freeing people to focus on what matters most - relationships and creativity
Many of our day-to-day duties involve quite repetitive and uninspiring administrative tasks, such as reading and writing reports, summarising documents, making sense of spreadsheets, and formatting documents. This is an area where AI can really speed things up, freeing us to focus on the areas requiring genuinely human capabilities and creativity.
For Software Development teams this means spending less time forming and testing complicated algorithms, and more time developing new and exciting systems and features. For Customer Support this means more time for situations requiring judgement and empathy. For Marketing this means more time on creative campaigns and engaging copy rather than reformatting content across multiple channels. For data analysis this means more time to consider what to do with the information rather than gathering and formatting the data.
The real benefit with AI here is to enhance and elevate our work by allowing us to focus on what we're all good at, and likely what we all take more enjoyment from. It allows us to create genuine value for customers, colleagues, and the business by making greater use of our human capabilities.
3. Maintaining quality and innovation whilst staying competitive
Competitors are already using AI. The question isn't whether we should be using AI, but how we can use AI in a responsible and effective way to provide a more meaningful experience to our customers and potential customers.
AI allows smaller businesses to compete against the big corporate competitors who often have far greater resources available to them. We can use AI to analyse market trends, trial different campaigns, develop new software ideas, improve logistics, and personalise customer experiences at scale — traditionally these areas have required dedicated teams or costly third-party agencies.
Maintaining quality and pushing innovative ideas requires us to use AI as a tool to amplify human judgement rather than replace it. AI can provide us with knowledge and analysis; we must provide the wisdom and values.
4. Supporting people's growth and filling knowledge gaps
Every workplace has internal knowledge gaps. This may be in fully utilising website analytics platforms, understanding the legalities of trade agreements and processes, or making the most of software such as Excel.
Here AI can provide us with tailored, personalised guidance and advice without the need to consult external companies or engage in generic training. This doesn't replace the need for internal human expertise and mentoring, but can assist where otherwise we would have to wait for specific colleagues to help, or wait for specific training to be available.
AI also helps level the playing field for employees who might lack certain background knowledge. If someone hasn't studied at university, or if their education didn't cover certain technical areas, AI can help fill those gaps in a judgement-free environment where they can ask apparently simple questions without embarrassment.
The goal is continuous learning and development, where everyone has access to personalised support as they grow their skills and confidence.
Ethical and social concerns
Throughout history new technology has raised ethical concerns and resulted in societal change. The agricultural revolution transformed human culture and precipitated significant environmental change.
The industrial revolution caused widespread workplace upheaval and social evolution, whilst having a very negative effect on the global environment.
We're now in the information revolution. It began in the 90s with the dawn of the consumer World Wide Web and personal computers, and continued into the 00s with the advent of widespread smartphone usage. The 2020s have seen the latest stage of the information age: the popularisation of generative Artificial Intelligence.
It remains to be seen what the medium- to long-term impact of AI will be. We are far from having anything close to Artificial General Intelligence (AGI) — the hypothetical AI which would be better at most things than most humans — but the current iteration of Large Language Models (LLMs), such as Claude, Gemini, and ChatGPT, are already impacting people worldwide.
The impact on employees
As with any emerging technology, it is often employees who feel the effects first, once it moves beyond a fad or passing interest. AI Assistants are particularly good at carrying out many tasks that are typically performed by humans in the workplace: administration, data entry, content writing, writing software code, summarising and analysing documents, and creating reports.
In this regard there are two broad categories of concern with AI use in the workplace: the risk to jobs posed by AI (both directly and indirectly), and the possibility that talent and skills suffer (both for existing employees, and longer-term hiring requirements).
Job security
Concerns are not helped by somewhat dramatic headlines claiming AI is coming for all our jobs: Big Tech firms keep announcing that software development is dead, despite continuing to hire software engineers.
Whilst broadly speaking this does not appear to be the case in employment data, there are several reports that entry-level positions are becoming harder to attain, particularly in roles such as data entry, and secretarial and paralegal work. Those likely to see the greatest impact are young graduates.[1]
It remains to be seen how far AI can go in performing many of the same roles as humans currently do, and what impact that will have on employment globally. Similar technologies have been developed throughout history with similar concerns. We can go back to the industrial revolution, where jobs performed by people came to be performed by machines. More recently, over the past few decades computers, digital systems, and the Internet have been used where previously people would have been. In economics, there is something called the "lump of labour" fallacy[2]: the idea that there is a fixed unit of work to be completed by either human or machine. The trend throughout history has been that when new technologies emerge, new jobs are created when others are changed or lost — the overall employment rate has remained consistent.
There is no doubt that some roles will change, and are changing. Let's take software development as an example. AI can now write software code that can be very good; this was not the case only a year ago. Yet only part of a software developer's role is writing lines of code. Much of it is system design and architecture, planning, testing, considering dependencies, translating business requirements into rules-based processes, and weighing the medium- to long-term effects of implementation decisions. In the near future, the number of lines of code written entirely by an individual developer will almost certainly fall, meaning developers' skills will shift in focus towards those other areas.
When hiring in the future, employers will place more weight on candidates who show aptitude in planning and understanding wider business areas than on the ability to logically implement algorithms and functional solutions.
Skills and Training
One undeniable concern is the potential for "skill fade" in the face of AI. The phenomenon has a well-documented name: cognitive offloading.
There have been several studies on this in relation to other technologies, notably smartphone cameras. People who photograph events remember them less accurately than those who simply observe them. This occurs because photography acts as a form of "cognitive offloading": essentially, the camera becomes an external memory system, so the brain doesn't encode the information as deeply.[3][4]
The rise of search engines has had a similar effect: people are less likely to remember information they know they can readily find on the Internet.[5]
More recently, this has been studied in relation to AI use. Unsurprisingly, similar cognitive offloading effects are being seen.[6]
Again, here I'll use software development as an example. Whilst knowing off the top of your head how to write particular solutions in code may become less useful, actually being able to write code will still be essential to the role — edge-cases will require human intervention, and understanding code in order to explain changes to an AI will still be required.
The real challenge here is for individuals: which areas are people comfortable cognitively offloading to AI, and which should remain embedded skills? Whilst utilising systems and tools to help with knowledge-based tasks will become ever more common, skills relying on human connection and creativity will become increasingly important.
The Impact on Customers
There are two primary concerns around AI for customers: biases, and a potential for relationships to become less human-centric.
AI models are trained on huge amounts of data. Data written by humans. This means that biases present in human-generated content are used to train AI models, and so those AI models can contain those same biases.[7] If you ask ChatGPT to generate a picture of a nurse, it will likely be female. If you ask ChatGPT to generate a picture of a CEO, that picture will likely be of a male.
Whilst those are quite obvious examples of biases, the problem runs deeper when AI is used for automated decision-making without human oversight. For example, AI could be used in a hiring process — certain names or address locations could be prioritised over others, as the underlying data is biased towards or against certain demographics.
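A simple way to probe for this kind of bias is a counterfactual test: submit identical applications that differ only in the name, and compare the scores. Below is a minimal sketch, assuming a hypothetical `score_application` function wrapping whatever AI model a hiring pipeline might use.

```python
# Counterfactual bias probe: identical CV text, varying only the name.
# `score_application` is a hypothetical stand-in for an AI-backed scoring
# call; the dummy return value just lets the sketch run end-to-end.

CV_TEMPLATE = "Name: {name}. Five years' experience in logistics management."

def score_application(cv_text: str) -> float:
    """Stand-in for a real model call (illustrative only)."""
    return 7.5  # replace with your actual AI scoring call

def bias_probe(names: list[str]) -> dict[str, float]:
    # Identical content with different names: scores should be near-identical.
    return {name: score_application(CV_TEMPLATE.format(name=name))
            for name in names}

# Large gaps between demographically distinct names would suggest the model
# is keying on the name rather than the content.
print(bias_probe(["James Smith", "Amara Okafor", "Wei Zhang"]))
```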
Knowing this, we must all ensure that we are aware of these biases, and that we always retain a human-centred approach to decision-making. When considering automated systems and contact points, we should prioritise humanity and customer experience above automation and efficiency.
Accuracy
Responses given by AI Assistants will never be completely accurate all of the time: AI models are trained on data, but they don't retain access to that data. If this is not understood, it can lead to quite negative perceptions of AI and of the responses it gives.
AI models are not search engines. A search engine is like a librarian: they will point you to the content you're after; an AI model is like someone who has read every book in the library. When you ask them a question, they answer based on what they remember learning, drawing on patterns and understanding absorbed from everything they've read.
AI excels at explaining well-established topics — scientific principles, historical events, widely understood concepts, programming fundamentals, grammatical rules, and similar knowledge that's stable and well-documented.
If you ask how photosynthesis works, what the capital of France is, or how to structure a for-loop in Python, the response will almost certainly be accurate because these patterns are extremely consistent across its training data.
Any specific numerical claims, dates, statistics, percentages, or quantitative information should be verified independently. Don't trust AI-generated numbers without checking them against reliable sources.
If an AI claims "studies show 73% of customers prefer...", find the actual study. If it provides a specific date for an historical event, verify it. If it gives financial figures, check them. Never trust AI-provided academic citations, article references, or source attributions without verification. A recent example was a court case where lawyers used AI to help form their arguments, and the AI provided supporting case law that did not actually exist; the judge was less than impressed, to say the least.[8]
Copyright and Intellectual Property
One of the most prominent concerns around AI relates to how AI models are trained. These systems learn from vast datasets containing text, images, and code scraped from across the internet — including copyrighted material, creative works, and content created by millions of individuals who did not explicitly consent to this use.
This raises legitimate questions about intellectual property rights, fair compensation for creators, and whether AI models might reproduce or compete with the original works they were trained on.
AI models don't store or retrieve copies of training data — they don't "look up" information like a search engine. They learn patterns, structures, and relationships from vast numbers of examples — fundamentally similar to how humans learn by reading books, viewing art, or studying others' code.
When a painter studies Renaissance techniques by examining hundreds of paintings in galleries, we don't consider this theft, even though they're learning from others' work to develop their own capabilities. When a programmer learns by examining open source code, this is considered normal professional development.
The critical distinction is between learning from examples (which has always been how knowledge develops) and copying specific works (which violates rights).
Many ethical and legal issues remain over the types and sources of data used to train AI models. Copyrighted material that authors and producers never intended to be in the public domain has been used as training data by all the major AI companies, and lawmakers and courts have yet to reach an agreed position on this. Commercially, AI companies are partnering with specific content providers, such as Reddit, for consensual access to their data; this will likely continue to be a hot topic over the next few years.
Environmental Impact
Data centres store and process the data behind "cloud" technologies: websites, messaging apps, social media, email, and essentially every other digital service. In combination these services use a lot of electricity and water, and can produce a lot of CO₂.
The sharp rise in AI usage has resulted in new and larger data centres being built, and we often hear about the energy and water consumption associated with AI. These reports, however, often don't put that usage into context.
What is the energy and water impact from a prompt?
According to recent research, a standard text prompt (a message to an AI model) uses 0.24 watt-hours (Wh) of energy, emits 0.03 grams of carbon dioxide equivalent (gCO₂e), and consumes 0.26 millilitres of water.[9]
To put this into some context, the energy impact per prompt is roughly equivalent to eating less than 0.005% of a single hamburger.[10] For water usage, a single prompt uses around 5 drops of water,[11] whilst a single hamburger uses around 15 bathtubs of water.
| Item / Activity | Unit | Energy (Wh) | CO₂e (g) | Water (Litres) |
|---|---|---|---|---|
| AI Prompt[12][13][14] | 1 prompt | 0.2–2.9 | 0.03–4 | 0.0003–0.05 (Blue) |
| Beef Burger (4 oz)[15][16] | 1 burger | 3,000–9,000 | 2,100–3,700 | 2,350 (Total) / 70 (Blue) |
| Cheddar Cheese[17][18] | 1 slice | ~100 | 280 | 90 (Total) |
| Pork Sausage[19] | 1 sausage | ~150 | 160 | 300 (Total) / 23 (Blue) |
| Netflix Streaming[20] | 1 hour | ~70 | 36 | 2–12 (Blue)[21] |
Note: "Blue" water is fresh surface or ground water (rivers, lakes, aquifers) drawn for use, as opposed to "green" water from rainfall stored in soil.
The amount of energy, CO₂, and water varies quite considerably with AI use given the range of models, specific use-cases, and energy sources.
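As a rough sanity check on those comparisons, the arithmetic is straightforward. The sketch below uses the per-prompt figures quoted earlier (0.24 Wh, 0.26 mL) and mid-range estimates from the table; the drop and bathtub sizes are common approximations.

```python
# Rough sanity check on the prompt-vs-burger comparison, using the
# per-prompt figures quoted above and mid-range table estimates.
PROMPT_WH, PROMPT_ML = 0.24, 0.26   # energy (Wh) and water (mL) per prompt
BURGER_WH, BURGER_L = 6000, 2350    # mid-range burger energy (Wh), total water (L)

print(f"Energy: one prompt = {PROMPT_WH / BURGER_WH:.4%} of a burger")
# -> about 0.004%, i.e. "less than 0.005% of a hamburger"

print(f"Water: one prompt = {PROMPT_ML / 0.05:.0f} drops (at ~0.05 mL per drop)")
# -> about 5 drops

print(f"Burger water = {BURGER_L / 160:.0f} bathtubs (at ~160 L per bathtub)")
# -> about 15 bathtubs
```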
How can we reduce the impact when prompting?
The longer the prompt and conversation, the more energy it consumes, because the full conversation history is processed again with each new message. To help minimise the environmental impact of prompting, avoid keeping long conversations with AI Assistants running: if you are starting a new or changed topic, start a new chat instead of continuing the same one.
The "model" you choose also has an effect. For example on Claude AI you can switch between Opus and Sonnet. Opus is a much larger model, and can be useful for quite niche tasks. For most tasks Sonnet will be what you need, and it'll also normally be quicker given it requires fewer resources.
When providing context to an AI Assistant in the form of files, the types of files will make a difference to energy consumption. Plain text files (ending in .txt, .csv, or .md for example) consume the least. PDFs, Word documents, and Excel spreadsheets take more resources to process. Images, especially photographs with a reasonable resolution, are quite intensive to process. Where possible, provide context files in plain text (for example, by copying your Word document into a Notepad text file). This is more relevant to files which will be repeatedly accessed, such as using Claude's "Projects" feature.
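For example, converting a Word document to plain text takes only a few lines of Python. This sketch assumes the third-party python-docx package and an illustrative file name; note it keeps paragraph text only, dropping tables and images.

```python
# Convert a Word document to plain text before sharing it with an AI
# Assistant; plain text is much lighter to process on each access.
# Assumes the third-party `python-docx` package: pip install python-docx
from docx import Document

doc = Document("quarterly-report.docx")  # illustrative file name

# Keep just the paragraph text; styling and embedded objects are what
# make .docx files heavier to process (tables and images are dropped).
with open("quarterly-report.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(paragraph.text for paragraph in doc.paragraphs))
```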
How should we use AI in the workplace?
When considering AI tools we should consider whether the AI company aligns with our values, just as we would when choosing the suppliers and partners we work with. I won't make specific recommendations as to which AI provider or providers may be appropriate here, as requirements will vary depending on the aims and values of individual businesses.
We should strive to use AI in responsible and effective ways. This means that we should be open and honest with customers and colleagues when using AI. For example, when a customer has a query answered by AI we should inform the customer that it is an AI responding; when we use AI internally to conduct research we should inform colleagues that AI was used.
We should never use AI alone to make decisions affecting people. We should be aware that AI responses are biased according to both the underlying data used to train AI models, and the biases in our own requests to AI models.
We should use AI effectively to help ensure the responses are useful and meaningful, and not wasteful of resources. For this, we can broadly use the CAFE framework for remembering how to effectively prompt an AI. Read my guide on the CAFE framework at www.elliotjreed.com/ai/cafe-ai-prompt-framework, or for a more in-depth prompting guide see www.elliotjreed.com/ai/ai-prompt-engineering-guide.
This is all still very new technology, and it's undeniable that it will have a significant impact on the way we work, and on wider human society. We should keep talking to one another about it, sharing both challenges and successes openly. We must verify the responses, especially when using AI output in decision-making and communication.
Frequently Asked Questions
Will AI replace jobs in my workplace?
Whilst some roles may change, historical trends suggest new jobs are created when others are transformed by technology. AI is more likely to change how we work rather than eliminate jobs entirely. Entry-level positions in data entry and administrative work may be more affected, but AI typically augments human work rather than replacing it completely.
How much energy does an AI prompt use?
A standard text prompt uses approximately 0.24 watt-hours (Wh) of energy, emits 0.03 grams of CO₂, and consumes 0.26 millilitres of water. In energy terms, this is roughly equivalent to eating less than 0.005% of a single hamburger.
Should I trust AI-generated statistics and citations?
No. AI models learn patterns from data; they do not retain access to the underlying data itself. Always verify any specific numerical claims, dates, statistics, or academic citations independently. AI excels at explaining well-established concepts but should not be trusted for specific factual claims without verification.
Is using AI trained on copyrighted material legal?
This remains a developing area of law. AI models learn patterns from data similarly to how humans learn, but many ethical and legal questions remain about using copyrighted training data without explicit consent. Lawmakers and courts are still determining the boundaries of fair use in this context.
How can I reduce the environmental impact of using AI?
Keep conversations short and start new chats for different topics instead of extending long conversations. Choose smaller models when appropriate (e.g., Claude Sonnet instead of Opus for routine tasks). Provide context files in plain text formats (.txt, .csv, .md) rather than PDFs, Word documents, or images where possible.
Conclusion
AI can be a really powerful tool if used responsibly and effectively. It will allow us to focus on the work we find meaningful and become more productive, assist us in filling gaps in knowledge and experience, guide us in personalised learning and development, help us to personalise customer experiences and invest more time assisting with complex queries, and give us the space to be more creative and innovative.
By understanding both the opportunities and challenges AI presents, and by establishing clear ethical guidelines, we can harness this technology to create genuine value whilst minimising potential harms. The key is to use AI as a tool to amplify human judgment and creativity, not to replace human decision-making and relationships.
References
1. McKinsey - AI's uneven effects on UK jobs and talent
2. Investopedia - Lump of Labour Fallacy
3. Sage Journals - Photo-Taking Impairs Memory
4. PMC - Camera Effects on Memory
5. PubMed - Google Effects on Memory
6. arXiv - AI Cognitive Offloading Study
7. ScienceDirect - AI Bias Study
8. The Guardian - High Court warns lawyers about AI misuse
9. arXiv - AI Energy Consumption Study
10. PubMed - Food Energy Study
11. arXiv - AI Water Usage Study
12. Nature - AI Environmental Impact
13. Sam Altman - The Gentle Singularity
14. Google Cloud - Measuring AI Environmental Impact
15. ScienceDirect - Beef Environmental Impact
16. Science - Food Production Environmental Impact
17. ResearchGate - Cheese Production LCA
18. Science - Dairy Environmental Impact
19. Compleat Food - Sausage Environmental Impact
20. Carbon Brief - Streaming Video Carbon Footprint
21. Carbon Brief - Streaming Water Usage