AI in universities: the fine line between help and misconduct

Sharmistha Michaels examines the use of AI in academic misconduct cases, fairness in UK institutions, and guidance for students
As more students use generative AI for help with their work, UK universities are scrambling to root out cheating by students who submit assignments wholly or partially written using AI chatbots such as ChatGPT.
In 2023, a university student in Manchester was accused of academic misconduct after their essay was flagged by Turnitin’s new AI writing detection tool. No concerns had been raised by the human marker of the essay; it was the software’s score alone that triggered the inquiry.
The student, who was not a native English speaker, had not been given access to the report or told the basis for the suspicion, nor could they test the “evidence” against them. Education Week reports that of the more than 200 million written assignments reviewed by Turnitin’s AI tool in 2024, one in ten used some AI.
As generative AI becomes widespread in higher education, UK universities face a difficult question: when does using AI constitute cheating? And, perhaps more concerningly, when does the use of AI detection itself become a flawed and potentially discriminatory process?
When does AI use become cheating?
Universities broadly define academic misconduct as any attempt to gain an unfair advantage through unauthorised means. Under this definition, using AI to generate an essay, or even part of one, without acknowledgement may fall foul of institutional rules. However, nuance is key. Many use AI tools as assistive technologies: to brainstorm, improve grammar or simplify complex text. The boundary between using AI for legitimate help and dishonest substitution is far from clear.
As Professor Phillip Dawson, a leading researcher on academic integrity, puts it: "AI use is not binary. It exists on a spectrum – from drafting to suggesting to polishing”. The lack of sector-wide definitions means institutions set their own rules. Some ban AI use entirely; others allow it if properly referenced. This variability risks unfairness, particularly when combined with the potential fallibility of detection tools.
The problem is not simply that students may misuse AI; it is that universities may misuse AI detection tools. These tools, such as Turnitin’s AI Writing Indicator, use opaque algorithms to assess whether a text is ‘likely’ to be AI-generated. Independent studies show they have high false-positive rates, especially for non-native English speakers and neurodivergent writers (Liang et al., PNAS Nexus, 2023).
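To see why even a modest error rate matters at this scale, a rough back-of-the-envelope sketch is set out below. The one per cent false-positive rate is a purely hypothetical figure chosen for illustration, not one reported by Turnitin or by any study; the assignment volume and the one-in-ten figure are those cited above.

```python
# Illustrative back-of-the-envelope calculation only.
# The 1% false-positive rate is a hypothetical assumption, not a reported figure.

total_assignments = 200_000_000   # assignments reviewed by Turnitin's AI tool (Education Week)
share_without_ai = 0.9            # "one in ten used some AI" implies roughly 90% did not
false_positive_rate = 0.01        # hypothetical: 1 in 100 human-written texts wrongly flagged

human_written = total_assignments * share_without_ai
wrongly_flagged = human_written * false_positive_rate

print(f"Human-written assignments: {human_written:,.0f}")                       # 180,000,000
print(f"Wrongly flagged at a 1% false-positive rate: {wrongly_flagged:,.0f}")   # 1,800,000
```

On those illustrative assumptions alone, well over a million genuinely original essays would be flagged, which is why a detection score should be treated as a prompt for inquiry rather than as proof.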
Treating these AI reports as definitive evidence also risks breaching the Equality Act 2010, where it can be shown that protected groups are being disproportionately flagged by the software, or where reasonable adjustments under section 20 of the Act are not put in place when work is assessed using it.
Students need to be given a fair opportunity to respond to and challenge the evidence of misconduct. A university failing to allow this could amount to a breach of Article 6 of the European Convention on Human Rights where the outcomes affect a student’s reputation and career without due process.
Should There Be Regulation of AI?
What counts as cheating is determined by each university or institution individually, and at present UK universities adopt differing and often vague policies on AI use and detection. This regulatory patchwork creates uncertainty for students and staff, and possible reputational damage for institutions when they get it wrong.
The Office of the Independent Adjudicator for Higher Education (OIA) has issued guidance, stressing the need for fair process and human oversight. But no statutory framework exists. Universities UK, the Russell Group, and the Joint Information Systems Committee (JISC) have all published non-binding statements, but the absence of enforceable national standards leaves key questions unanswered:
- What constitutes “misuse” of AI across disciplines?
- When is a ‘flag’ sufficient to initiate proceedings?
- What rights does a student have in such investigations?
As the EU takes steps to regulate artificial intelligence with its AI Act, the UK still has no central AI regulator.
Given the increasing use of AI by students, it has been suggested that integration rather than prohibition is the way forward.
In the absence of effective regulation to ensure the ethical and fair use of AI, universities that do not already have an AI use policy should adopt one, in line with their existing policies on plagiarism and student misconduct, and ensure that all students are familiar with it when they begin their studies. Students should be guided on the safe use of AI, not subjected to a blanket ban.
Practical Advice for Students
For students facing allegations of AI-related misconduct, the process can be intimidating, particularly where they have no legal support. In contrast to professional disciplinary proceedings, the protections afforded to students are inconsistent. A uniform regulatory code would establish minimum standards for evidence disclosure, hearing procedures and timelines for review. It should also make clear that the output of an AI detection tool cannot serve as conclusive proof of intent, which is a core element of academic misconduct.
Some practical tips include:
- Request Disclosure: Ask for the full AI detection report, not just the final score. Seek clarification on how it was generated and whether a human reviewed it.
- Ask for Adjustments: If you have a disability or learning difference, request reasonable accommodations under the Equality Act. This may include support in the hearing or expert review of your writing style.
- Challenge Assumptions: Detection tools produce probabilistic scores, not conclusive findings; a flag is a starting point for inquiry, not proof.
- Provide Contextual Evidence: Submit earlier drafts, notes, or version histories to demonstrate your authorship. Cite where and how AI was used (if at all).
- Seek Support: Use student unions, legal advice services, and SEND support officers. Some law clinics now assist with academic disciplinary hearings.
- Document Your Process: Keeping a version history through platforms like Google Docs or Microsoft Word’s track changes function can be invaluable. This helps demonstrate your authorship and the iterative development of your work. Some universities now explicitly advise students to retain such evidence.
- Understand Your Rights: Ask for the university’s misconduct policy, understand the steps involved, and seek clarification on what constitutes acceptable use of AI tools in your course. Policies vary widely, and misunderstanding alone should not be grounds for sanction.
The adoption of AI tools without sector-wide oversight creates a fragmented justice landscape in higher education.
For example, a student using ChatGPT for idea generation may be celebrated at one university but penalised at another. This inconsistency undermines student confidence and may particularly affect international students who are less familiar with UK academic norms. In the long term, a national regulatory response - similar to that used for data protection or safeguarding - may be required.
Conclusion
AI can both assist and undermine education. Universities are right to consider how best to respond. But they must do so within the bounds of the law, ensuring fairness, transparency, and equality.
What is needed now is clarity: clearer definitions of misconduct, clear institutional guidance, and clear safeguards for students. Without them, we risk creating a disciplinary system driven not by evidence or fairness, but by software.