#Take5 #150 AI engagement and scepticism in HE: Navigating persistent tensions.

This #Take5 is brought to you by Ikedinachi Ogamba, Associate Professor and Curriculum Lead, Coventry University, UK. Ike explores what it means to be an AI sceptic in this postdigital age. Is it possible? Are we forgetting or neglecting the sceptical, unengaged and disengaged groups in the AI adoption and integration conversation and policy? What does it mean for the academic? What might it mean for their students and higher education? 

The Spark, The Angle 

The idea for this piece grew from reviewing an institutional AI literacy framework and competency progression model. The framework mapped educator development and support across different stages or levels, but seemed to overlook colleagues who may be doubtful, unengaged or actively sceptical about AI, or who may have ethical objections. That oversight prompted two questions: Is it still okay to be an AI sceptic in higher education (HE)? And if so, how do we recognise and support scepticism as part of a legitimate adoption curve? 

I ask this not from a distance, but as someone who has graduated from curious cynicism to cautious scepticism :-). We should not ignore the “fact” that some educators and HE professionals may still be unengaged or sceptical about AI. 

Hence, if the goal in HE is to move swiftly from GenAI adoption, through literacy to competency, with no room for any other position, we risk excluding or alienating those who, for good reasons, are not ready or willing to step onto the “ladder”: those who are cautiously observing, those who were previously engaged but have now disengaged, and those who may refuse adoption on professional or ethical grounds. 

Picture: AI Sceptic Cartooned Face – the image was designed by the author using Copilot.

Why scepticism and ‘un/dis-engagement’ persist. 

In the last few years, considerable hype around AI has made its uptake a major trend this decade. GenAI in HE is evolving with considerable promise: benefits in efficiency, in enhancing teaching, learning and assessment design, and in student engagement. At the same time, there are very real risks around integrity, bias, unreliable outputs (hallucinations), equity, privacy and sustainability (Bobula, 2024; Gering et al., 2025). Student uptake has surged, often outpacing staff support and institutional clarity. This adoption by learners highlights the need for ethical and inclusive integration, transparent guidance and educator competence (Jisc, 2024).   

On the other hand, adoption among educators and HE professionals has been mixed. A substantial number of academics are likely yet to fully engage with AI, or have disengaged from AI adoption (Lee et al., 2024; UNESCO, 2025; Armellini, 2025). The reasons and drivers of this ‘unengagement’ or disengagement (‘un/dis-engagement’) range from subjective and personal issues, through strategic scepticism, to real and established issues with AI, including energy consumption, environmental and climate impact, hallucinations, academic and intellectual integrity, and other ethical concerns. While some academics have disengaged after negative experiences with AI, such as fabrication and hallucinations, others fear job losses and displacement, hold ethical or disciplinary concerns, face professional, statutory and regulatory body (PSRB) constraints, or are wary of privacy and data protection risks (Sano-Franchini et al., 2024; Lucas & Lioy, 2025; Pikhart & Al-Obaydi, 2025). Some ‘un/dis-engaged’ staff are simply overstretched by workload and time poverty.  

Metaphorising: the zero stage on the ladder 

I will use the number zero and the ladder as metaphors to illustrate where sceptics and the un/dis-engaged sit within the dominant models of AI progression and engagement. But first, it is helpful to distinguish three stances, outlined below, that can become blurred in debates and discussions about AI adoption for individuals in this group: 

  1. Cynic: “AI is bad and dreadful; full stop.” 
  2. Sceptic: “Show me good evidence, guardrails and value before I use it, if I must.” 
  3. Un/Dis-engaged: “I am unconvinced, time-poor or constrained by, for example, ethical concerns, PSRB rules, contracts or access.” 

Each stance is different, and each deserves a place in literacy and competency frameworks, with targeted development support, in the push for AI integration in HE. 

The zero competence or engagement stage marks a point on the adoption curve of no or insignificant use (e.g. holding no GenAI accounts, having closed them, or rarely or never using the tools). The term “AI-Zero” is used in this piece to frame AI scepticism and ‘un/dis-engagement’ in HE not as a lack of knowledge or skill but as a position and choice on the adoption curve. AI-Zero therefore signals a starting point on the AI engagement ladder; unlike “novice,” it acknowledges that some educators may already be AI-literate and competent yet choose not to adopt or engage, due to scepticism, ethical concerns, workload or other strategic reasons. This nuance is important because it shifts the conversation from deficit (“they don’t know”) to agency (“they are opting out or holding back”). 

Therefore, it is argued that AI-Zero can be understood as a stage of progression and engagement comprising two mutually inclusive dimensions: 

  1. Competency level (knowledge and skills) 
  2. Adoption status (actual use or willingness to use) 

So, someone at AI-Zero might be highly literate and skilled but disengaged, or willing to engage but minimally literate and lacking the skills to engage effectively. Both scenarios sit at the AI-Zero stage, because a progression model should be about movement toward purposeful adoption and integration, not just competence acquisition. 

Consequently, AI sceptics, the ‘un/dis-engaged’ and even cynics may have some level of AI literacy and competence but choose not to use or apply it. AI-Zero, therefore, is not a synonym for ignorance. It marks a position on the adoption curve where knowledge and engagement may not align, and it can reflect either dimension, or a combination of literacy/competency level and state of adoption/use. Naming it explicitly would help institutions design inclusive strategies that respect choice while supporting informed progression. 

Picture: Cartoon representing the AI Engagement Ladder: If the goal is competency/progression, then engagement provides the ladder to achieving this. The image was designed by the author using Copilot.

Is AI-Zero still valid? Yes, and necessary! 

Although AI adoption is now widespread and popularised, there is a risk of unintentionally excluding the ‘un/dis-engaged’ and sceptical group from institutional and sector-wide policies and frameworks. As discussed above, there are credible concerns driving scepticism and ‘un/dis-engagement’, and these should be acknowledged so that this group is not negatively labelled or marginalised. Many of the concerns are valid and need addressing in order to build confidence and the potential for wider adoption. Hence the need not only to recognise this AI-Zero group, but also to resolve the tension between the sweeping wave, hype and push for AI adoption and the active or passive choice to engage or disengage. This recognition will support more inclusive engagement and competency/progression development and support. 

As the AI-Zero profile shows, scepticism is not an obstruction. Rather, it is due diligence. It keeps practice authentic, protects vulnerable groups, and pushes institutions toward inclusive, evidence-based adoption. 

Picture: AI-Zero Profile.

What then? Implications for HE 

A multi-professional approach would ensure that learning and academic developers, educators, EdTechnologists, leaders and policymakers act together rather than in parallel. A practical starting point is to recognise a zero stage (AI-Zero) as an explicit early stage in AI literacy and competency progression models. Many engagement and progression ‘ladders’ begin at approaching/understanding and climb through experimenting to embedded/optimised (Lameras et al., 2022; Southworth et al., 2023; Cukurova & Miao, 2024; Mahmud et al., 2025; Mills, 2025; DEC, 2025), which is useful yet not always inclusive. Naming AI-Zero would create space for colleagues who are sceptical or ‘un/dis-engaged’ and reduce the risk of this group being ‘quietly excluded’ from policies and frameworks, CPD and resourcing. AI-Zero is not “anti-AI”; it is a legitimate position on the adoption curve that asks for purpose, evidence, safeguards and choice before participation. 

Taking AI-Zero seriously: If we take AI-Zero seriously, the how and the strategies become gentler, more respectful and more workable. We can then lead adoption with purpose, not tools; co-create clear rationales for AI use or non-use; and be explicit about where human judgement remains essential. We might start small and safe with task-specific, institution-approved, privacy-preserving tools. We can build buy-in and competency rather than enforce use, drawing on realistic scenario-based development for staff and students (Long & Magerko, 2020). It also helps to create dialogue spaces (student–staff forums, structured debates, critical case reviews) and to offer gradual exposure: sandboxes, opt-in pilots and AI-lite workflows, with AI and non-AI options where appropriate. Alongside this, acknowledging the practical roadblocks matters: time and workload, privacy and data protection, and equity (access, disability, neurodiversity, gender, discipline, socioeconomic background). 

On roles: academics and learning developers can centre AI relevance and competency in curriculum development and reflective practice, curate discipline-specific examples, and keep AI-lite alternatives in play. EdTech teams can prioritise smaller, privacy-preserving options, offer sandboxes and low-stakes pilots (Alipour et al., 2020; Holstein et al., 2020), and make accessibility work for disabled and neurodiverse staff and students (Mahmud et al., 2025). Leaders and policymakers can acknowledge AI-zero as a stage in the adoption and progression models, protect time for experimentation, and keep policy clear, consistent and fair. In short, name AI-zero, design with it in mind, and resource it, so we move from hype and passive resistance towards more purposeful, inclusive and ethically grounded AI practice in higher education. 

Concluding: AI-Zero, relevance and the way forward.

While many of us are exploring AI’s possibilities, it is equally important to acknowledge that scepticism can function as a safeguard rather than a roadblock. Naming AI-Zero (the sceptic and ‘un/dis-engaged’ stage) in progression models may help us avoid unintentional exclusion, respect legitimate concerns, and perhaps open the door to more inclusive and sustainable adoption. If AI is to serve human learning, we need to make room for doubt and meet it with evidence, dialogue and choice: purpose-first rationales for AI use or non-use, opt-in pathways and AI-lite alternatives, privacy-preserving tools, and an emphasis on buy-in and competency-building over enforcement. 

For leaders, policymakers and EdTech teams, this emphasis on competency-building means actively identifying the drivers of ‘un/disengagement’ and exploring supportive responses, resourcing and transparent governance. It also suggests attending carefully to vulnerable and underserved groups, including colleagues and students with mental health challenges, disabilities or caring responsibilities; those with limited tech access, English as an additional language, or high workload. Policies might track equity impacts, offer clear ways to query decisions, and provide opt-in, well-supported pathways. In short: recognise AI-Zero, design for it and resource it. In that way, scepticism can be acknowledged and understood as a path to the first rung on the ladder of genuine AI (re)engagement and progression. 

References  

Bio:

Ikedinachi Ogamba is currently an Associate Professor and Curriculum Lead at Coventry University, UK. He is a senior academic, global health and international development leader with extensive experience in curriculum development and innovation, teaching, active learning and authentic assessment design, and educational development. He is a Senior Fellow of the Higher Education Academy (SFHEA). 

Email: ad2826@coventry.ac.uk.

Picture: Ike Ogamba, the author.
