Academic Integrity
Using AI for school assignments can be confusing because one instructor may accept its use in course work, while another may view it as a shortcut that undermines learning. Using AI on an assignment where it is not permitted may be treated as a violation of academic integrity. If your instructor has not explicitly addressed the use of AI in your assignments, or if you are unsure about what counts as appropriate use, it’s always best to ask what is acceptable. Also, if you do use AI, remember to acknowledge it by citing the tool and providing details on how you used it.
In the News
BBC, (2024). 'I massively regret using AI to cheat at uni'.
CBC, (2024). Are students taking artificial intelligence too far? Accusations of plagiarism are up at MUN.
Algorithmic Bias
What is algorithmic bias?
Algorithmic bias can be broadly defined as the phenomenon in which algorithms in AI systems produce unfair or discriminatory outcomes. One type of algorithmic bias occurs when an algorithm favors one group over another based on factors such as gender or socio-economic status. In a Bloomberg study examining images generated by Stable Diffusion, researchers found that the outputs skewed representation by skin tone and gender, for example over-representing lighter-skinned men in images of high-paying occupations. In addition, there have been several cases where facial recognition software has misidentified people of color as criminal suspects. It is essential to review and critically evaluate generative outputs for these kinds of biases.
In the News
Business Insider, (2018). Why it's totally unsurprising that Amazon's recruitment AI was biased against women.
CNBC, (2023). A.I. has a discrimination problem. In banking, the consequences can be severe.
CBC Radio, (2025). Why MIT researcher is calling for 'algorithmic justice' against AI biases.
Forbes, (2025). Gender Bias In AI: Addressing Technological Disparities.
Copyright
There are several ongoing lawsuits against AI companies over their use of copyrighted materials, both for training their models and for the outputs those models generate. Works belonging to authors, artists, and publishers were used without their consent as training data, which may infringe their rights. In addition, some copyright holders have demonstrated that model outputs closely resemble their own works, which has potentially cost many of them income. For example, when using an image generator, your prompt can produce an image that resembles an existing work or an artist’s style.
If your instructors have permitted the use of AI-generated images in your assignments, consider tools whose models are trained on licensed materials, such as Adobe Firefly, and don’t forget to cite the tool and declare how you used it.
In the News
Associated Press, (2025). Warner Bros. sues Midjourney for AI-generated images of Superman, Bugs Bunny and other characters.
CBC, (2025). Timbaland used an independent producer's work to train AI — but without the artist's consent.
NBC News, (2025). Viral AI-made art trends are making artists even more worried about their futures.
NPR, (2025). Anthropic settles with authors in first-of-its-kind AI copyright infringement lawsuit.
Disinformation and Misinformation
Whether you are browsing the internet or reading an email, you may come across information that looks believable or sounds plausible but is difficult to judge as true or false. In these situations, you may have encountered either a deepfake or misinformation. A deepfake is defined by Merriam-Webster (2025) as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.” The most common examples involve public figures such as celebrities and politicians. Deepfakes are a form of disinformation, typically intended to cause financial or political harm. Misinformation, in contrast, is inaccurate information spread without the intent to deceive, such as when you unknowingly pass along something that is not completely accurate. For example, you might come across a video in your social media feed, assume its claims are reliable advice or simply find it funny, and share it with a friend.
Mitigating Risks
Here are some resources to help you check and verify information.
In the News
CBC, (2024). This article is real — but AI-generated deepfakes look damn close and are scamming people.
How to Cook That, (2025). AI Cooking Disasters: Debunking Facebook’s Fake Recipes [Video].
NPR, (2023). When you realize your favorite new song was written and performed by ... AI.
Wired, (2025). Deepfake Scams Are Distorting Reality Itself.
Hallucinations
What are hallucinations?
In essence, AI hallucinations are errors in a model’s outputs. Maleki et al. (2024) found that “hallucinations” can encompass several ideas: plausible-sounding responses that are incorrect; repeated information; statistical prediction of words from training data without understanding or reasoning; and invented or made-up responses produced when the underlying data are insufficient.
Mitigating Risks
Although LLMs can sound credible in their responses, one of the best ways to catch hallucinations is to be somewhat knowledgeable about the topic before you prompt, so that you can double-check the accuracy of the output. Although some models include citations, verifying them is essential.
Another approach is to craft effective prompts that generate more specific outputs. Prompting is often an iterative process in which you tweak your prompts to achieve better outputs. Your prompts should be structured to include the following elements: contextual information (the purpose of your prompt), a role assigned to the LLM (e.g., act as a librarian), and an expectation (the desired output). Several mnemonic frameworks, such as CLEAR, can help you craft better prompts.
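As a brief illustration, here is a minimal sketch in Python showing how those three elements can be combined into a single prompt. The helper function, its name, and the sample wording are hypothetical examples for this guide, not part of any particular tool or framework.

# Assemble a prompt from the three elements described above:
# context (purpose), role (who the LLM should act as), and expectation (desired output).
def build_prompt(context: str, role: str, expectation: str) -> str:
    # Join the three pieces into one prompt string, each on its own line.
    return "\n".join([context, role, expectation])

prompt = build_prompt(
    context="I am writing a first-year essay on algorithmic bias in AI systems.",
    role="Act as an academic librarian.",
    expectation=("Suggest five search terms I could use in a library database, "
                 "and briefly explain why each one is useful."),
)
print(prompt)

You could paste the printed prompt into a chatbot of your choice and then revise the context, role, or expectation based on the response; that back-and-forth is the iterative process described above.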
To recap, being familiar with the subject matter, double-checking, and crafting effective prompts are steps you can take to reduce the chances of hallucinations.
In the News
CBC News, (2024). Air Canada found liable for chatbot's bad advice on plane tickets.
CBC News, (2025). An Ontario judge tossed a court filing seemingly written with AI. Experts say it's a growing problem.
Data Privacy
Economic Times, (2023). AI and Privacy: The privacy concerns surrounding AI, its potential impact on personal data.
HuffPost, (2025). Teens Are Location-Tracking Their Friends — And Honestly, The Downsides Sound Terrible.
NPR, (2025). Class-action suit claims Otter AI secretly records private work conversations.
Labour
Environmental Issues
CBS News, (2025). Using Generative AI a lot? Here's how much energy it takes, according to an MIT researcher.
Science News, (2024). Generative AI is an energy hog. Is the tech worth the environmental cost?
Mental Health
BBC, (2025). Parents of teenager who took his own life sue OpenAI.
Stanford Report, (2025). New study warns of risks in AI mental health tools.
Time, (2025). AI Chatbots Can Be Manipulated to Provide Advice on How to Self-Harm, New Study Shows.
Times of India, (2025). When AI becomes more than a tool: How it can twist our minds and what that means for mental health.
References
Helaluzzaman, K. (2023, May 19). 7 ChatGPT prompt frameworks for better output, creativity and productivity. Khan Helaluzzaman. https://datayon.substack.com/p/7-chatgpt-prompt-frameworks
Maleki, N., Padmanabhan, B., & Dutta, K. (2024). AI hallucinations: A misnomer worth clarifying. arXiv. https://doi.org/10.48550/arXiv.2401.06796
OECD. (2023). OECD digital education outlook 2023: Towards an effective digital education ecosystem. OECD Publishing. https://doi.org/10.1787/c74f03de-en