
AI wrote my homework

By Alice Wilkinson
26 November 2024

Why does using AI still feel like cheating?

The proliferation of easily accessible artificial intelligence (AI) tools has sent ripples through the UK’s business landscape. But whilst some embrace the advantages AI offers, many of us continue to grapple with the implications of using such a powerful business aid.

In the education sector, the use – or misuse – of AI tools is a contentious issue.

"We had two deadlines really close together and I just ran out of steam," said university student, Hannah, in a recent BBC News story. "I felt incredibly stressed and just under enormous pressure to do well. I was really struggling and my brain had completely given up."

In her desperation, Hannah turned to AI, using it to write one of her essays. The misuse was quickly discovered when her lecturer ran her work through detection software as part of a routine check. She faced an academic misconduct panel, which has the power to expel students found to be cheating, but was ultimately cleared despite admitting to using AI.

This story highlights the challenge that universities face as they encourage students to become AI literate, whilst also discouraging cheating. The BBC quoted a Department for Education spokesperson, who said: "Generative AI has great potential to transform the Higher Education sector and provides exciting opportunities for growth… Universities must determine how to harness the benefits and mitigate the risks to prepare students for the jobs of the future."

Hannah’s story brings our discomfort with AI into sharp focus. For many of us, using AI tools feels like cheating, and in some circumstances, it literally is cheating. So, how do businesses harness the many efficiencies provided by AI without compromising integrity?

Most businesses understand AI's potential to streamline operations. Many have adopted generative AI to automate customer service exchanges, draft marketing copy, or analyse vast data sets to glean strategic insights. 

But businesses are also wrestling with the dual nature of AI tools. On the one hand, tools like ChatGPT offer remarkable efficiency, generating reports, refining language, and even aiding creative processes. On the other, they raise concerns about originality, authenticity, accountability, and data security.

And just as it exists amongst students who have perhaps not yet learnt to know better, the temptation to cut corners exists in business too, where we really should know better. Whether it’s crafting presentations or writing proposals, leaning on AI without carefully checking, editing, and rewriting erodes skill development and blurs accountability. How do you know if someone is really ready for a promotion if you don’t know what percentage of their work has been generated by AI? Good question.

Another question is this: how many of us, outside of an educational setting, are thinking about the use of AI-detection tools? Not many, I bet. 

But companies like Google DeepMind have developed advanced systems, such as SynthID Text, which watermarks AI-generated content. These innovations help distinguish between human and machine-generated work, which is crucial for maintaining trust and authenticity. SynthID Text encodes a watermark directly into generated text, subtly biasing the model’s word choices, without compromising quality. It has shown promise in real-world applications, where millions of AI-generated responses have been watermarked without sacrificing the user experience.
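For the technically curious, the core idea behind such watermarks can be sketched in a few lines of code. The toy example below is not SynthID’s actual algorithm: the hash-based "green list" trick (borrowed from academic watermarking schemes) and the 0.5 baseline are illustrative assumptions only. It shows how a generator that subtly favours certain word choices leaves a statistical fingerprint that a detector can later measure.

```python
# Toy sketch of statistical text watermarking. NOT Google DeepMind's
# actual SynthID algorithm: the hashing scheme, vocabulary, and baseline
# here are illustrative assumptions only.
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically split the vocabulary using the previous token as a
    seed. A watermarking generator would slightly favour this 'green' half
    when choosing its next word."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that land in their green list. Unwatermarked text
    should hover near 0.5; watermarked text drifts measurably higher."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, vocab)
    )
    return hits / (len(tokens) - 1)

# Usage: score a tokenised sentence against a toy vocabulary.
vocab = "the cat sat on a mat and dog ran fast".split()
print(watermark_score("the cat sat on the mat".split(), vocab))
```

On a sentence this short the score is noisy; real systems work across hundreds of words and report a statistical confidence rather than a hard verdict.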

Whilst these tools are not foolproof, their evolution underscores our discomfort with AI. For many, using AI openly for day-to-day business tasks feels dishonest, challenging our notions of effort and expertise. 

At the same time, businesses that resist AI risk falling behind in a world increasingly driven by efficiency and innovation. After all, these tools can be framed simply as a new business innovation – insisting on doing accounting tasks manually rather than using accounting software or spreadsheets would be unthinkable.

The key will be to address the issue head on. Companies must foster working environments in which AI enhances the efforts of their employees, whilst making it clear where the line has been drawn. Business accounts for AI tools, which typically keep company data out of model training, address many of the concerns around data protection, making it safe for most organisations to embrace this technology.

The development of AI-detection software adds a different dynamic. As AI-generated content proliferates, we will increasingly seek out content crafted by humans, using detection tools to identify writing that has come from a machine. As a result, human-generated content will become far more valuable, shining a light on those businesses that have prioritised authenticity over speed.

These are choppy waters for businesses to navigate. Ultimately, the debate over AI mirrors a larger societal struggle with change: change that is happening at a pace that makes it hard to adjust comfortably before the next thing comes along. But now is not the time for burying our heads in the sand. As AI detectors improve and become more easily accessible, businesses will need to develop clear guidelines, striking a balance between over-reliance on a tool whose output may soon be unwelcome, and outright refusal to use a tool that can provide valuable efficiencies.