
Universities’ focus on policing is fighting the wrong AI battle

Universities are in panic mode. As artificial intelligence tools like ChatGPT and Claude become increasingly sophisticated, administrators are scrambling to install detection software, rewrite plagiarism declarations, and develop policies to catch students using AI to complete assignments. This frantic focus on policing AI use reveals a fundamental misunderstanding of both the technology’s impact and the purpose of higher education.

The irony is that universities are right to be worried about AI, but not for the reasons they think.

The real threat isn’t cheating

AI represents a paradigm shift comparable to the printing press or the internet, but compressed into a timeframe that allows little institutional adaptation. These tools generate human-like text, solve complex problems, and perform cognitive tasks that were once exclusively human domains.

In laboratories, AI systems are revolutionising protein structure prediction and accelerating drug discovery. In astronomy, machine learning algorithms process vast datasets to identify new planets and gravitational waves.

This dual nature of both academic disruptor and scientific accelerator places universities in an uncomfortable position. If machines can write essays, solve mathematical problems, and even conduct research, what value do universities offer?

But, instead of grappling with this existential question, most institutions have fixated on the narrow problem of students’ use of AI. And they’ve invested in faulty detection technology and punitive policies, treating any AI engagement as cheating.

The neoliberal trap

This detection-obsessed response stems from a deeper problem: universities have increasingly defined themselves as credentialling factories and workforce training centres rather than spaces for intellectual transformation. When education becomes primarily about producing grades, degrees, and job placement statistics, AI is a threat to the institution’s market position.

Students using AI to complete assignments aren’t just “cheating the system”; they’re exposing how hollow that system has become. If a machine can successfully complete most university assignments, do we still need universities?

The real issue isn’t that students might use AI inappropriately, but that over-reliance on these tools will prevent them from developing what educators call a “transformative relationship to knowledge”. True education involves intellectual struggle, grappling with complexity, and developing analytical capabilities that fundamentally change how students understand themselves and their world.

When students engage authentically with difficult problems and develop their own thinking, they experience transformation that extends far beyond any specific course content. AI, when used uncritically as a substitute for this intellectual labour, short-circuits this process entirely.

Convincing students of the merits of hard intellectual work is a tough sell when an online program can produce immediate, comprehensive responses. Academics will need to articulate why, and in which specific cases, AI should be set aside even where it is capable of performing the task. “If you use AI, we will catch you” is a very weak pedagogical argument.

Missing the bigger picture

The problem is not only that universities have responded to student use of AI without reflecting on the purposes of higher education.

While universities obsess over catching AI users in their classrooms, they’ve also remained largely silent on the technology’s broader social implications. This is precisely the kind of critical engagement that is central to the value of having a higher education sector in any society.

AI development raises profound questions about mass unemployment as cognitive work becomes automated, about algorithmic bias that perpetuates social inequalities, about environmental impacts from massive computational requirements, and about democratic governance as AI power concentrates in the hands of the broligarchy and corporations.

Universities, if they are, indeed, spaces for democratic deliberation and critical inquiry, should be leading these conversations. Instead, they’ve retreated into narrow institutional concerns.

This represents a profound abdication of higher education’s responsibility as a public good. Universities should function as spaces where diverse stakeholders can engage in reasoned debate about society’s most pressing challenges.

A different path

Rather than merely reacting to AI through detection and prohibition, universities should proactively engage with the technology’s possibilities and reflect on what it means for their commitment to transformative education.

This means developing teaching approaches that leverage AI’s capabilities while preserving space for intellectual struggle and growth. It means conducting interdisciplinary research on AI’s social implications rather than just its technical applications. And it means actively participating in public discourse about how this technology should be governed and deployed.

Most importantly, it requires universities to claim an identity as intellectual and moral communities, not only as training grounds for the labour market. Students need to develop not just technical skills but the critical thinking capabilities to understand, critique, and shape AI’s role in society.

Higher education has to choose between continuing its reactive policies of constraint and control, or embracing the work of engaging with AI’s social implications, both in teaching and learning and across all its other responsibilities.

The first choice leads to institutional irrelevance as the technology advances, regardless of university policies. The latter offers the possibility of renewed social purpose and educational vitality.

Universities have spent too much energy fighting their students’ use of AI and not enough time figuring out how to nurture what makes human learning transformative in an age of AI or what the implications of AI might be for society at large.

Sioux McKenna is a professor of higher education studies at the Centre for Postgraduate Studies at Rhodes University, South Africa. This commentary is based on a presentation titled ‘Beyond Detection: Reimagining higher education’s response to AI’, presented on 4 August at a colloquium of the National Institute for Theoretical and Computational Sciences, or NITheCS. The NITheCS serves as a Centre of Excellence under the National Research Foundation, an entity of the South African government’s Department of Science, Technology and Innovation. It is a multidisciplinary and multi-themed institute.

This article is a commentary. Commentary articles are the opinion of the author and do not necessarily reflect the views of University World News.
