#Take5 #82 What does AI mean for Learning Development?

This #Take5 is brought to you from our very own Lee Fallin. Lee has been exploring and playing with this AI technology for some time now – and we asked him to share his insights with the wider LD community. So – read on – and let us all join the conversation about what these revolutionary techs mean for our students and for us. 

If it’s all about the ChatBots: What about Learning Development?

This blog post will muse on the ‘rise of AI’ and what this means for Learning Development. I am not an expert, but I have spent much time playing with a range of Artificial Intelligence (AI) tools over the last few years. In a recent LDHEN email, I asked the community more broadly about AI. In this Take5, I take the opportunity to bring together some of these thoughts. This post will try to address a broad audience, so forgive me for going into some background.

Background

It is fair to say there are mixed emotions around the relative explosion of AI tools that ‘produce’ or ‘generate’ content. Some educators are excited about the potential of these tools and chatbots, but many more are worried about the impact they will have on the Higher Education (HE) sector and beyond. This is not just concern around assessment, but apprehension around what AI means for human creativity, human thought and white-collar work. All these concerns are valid – though it has often been mused that the HE sector might have been more positive had the first feature of ChatGPT’s general release been the ability to mark student assessments.

For anyone who hasn’t seen one of these chatbots, they can be asked to answer questions, suggest essay plans or even write whole sections of text. They can’t produce academic references, are generally restricted from knowing about current affairs, and they are not always right. They could, however, signify dramatic changes not just for education, but for administrative and creative work. Here is an example response from an AI chatbot that has been asked to write an essay plan:

Hello, this is Bing. I can help you with writing an essay plan for your geography course. 😊

An essay plan is a way of organizing your ideas and main points before you start writing your essay. It helps you structure your argument and make sure you cover all the relevant aspects of the topic.

Here is a possible essay plan for your question:

Introduction: Provide some background information on natural disasters and avalanches, and state your main argument or thesis. For example, you could say that natural disasters have significant impacts on human activities, especially in mountainous regions where avalanches pose a serious threat to people, infrastructure and land management.

Body paragraph 1: Explain what causes avalanches and how they are classified according to their size, speed and type. Use examples and evidence from your case study to illustrate the different characteristics and effects of avalanches.

Body paragraph 2: Analyze how avalanches impact human activities in mountainous regions, such as tourism, agriculture, transportation and settlement. Use examples and evidence from your case study to show the positive and negative consequences of avalanches for different sectors and groups of people.

Body paragraph 3: Evaluate how human activities can influence the occurrence and severity of avalanches, such as deforestation, climate change, land use and development. Use examples and evidence from your case study to demonstrate the feedback loops and interactions between human and natural factors in avalanche-prone areas.

Conclusion: Summarize your main points and restate your thesis. Provide some recommendations or suggestions for improving land management and reducing the risks and impacts of avalanches in mountainous regions.

I hope this essay plan helps you with your assignment. Good luck! 🍀

Image: A screenshot of Bing Chat: The prompt: Write me an essay plan for university-level study on a BA Geography course. My essay question is: Using an example of one natural disaster, discuss the impacts on human activity. I am thinking of writing about avalanches.

It’s fair to say that this is pretty cool and, perhaps, frightening in equal measure. So how did this technology appear?

The rise of AI

I’ve recently blogged about the ubiquity of AI tools when I asked How has Higher Ed slumbered into an Artificial Intelligence Crisis? AI has long been powering features like Dictate in Microsoft Word, auto-captioning in Microsoft Teams and Presenter Coach in Microsoft PowerPoint. It is fair to say AI has been part of daily working practice for many students, academics and HE professionals for some time. These new developments may be a paradigm shift – but we should have known they were coming.

As I noted in a previous blog, the new developments in AI have been described as a crisis, a moral panic, an ‘end to homework‘ and a threat to higher education. I think my favourite contribution this year comes from The Mail, which announces artificial intelligence could make ‘mankind extinct’.

Image: Sunday Times online: AI means the end of homework – Elon Musk

Image: Mail Online: AI could kill off mankind.

Chat, Chat – Bing, Bing: What’s the problem with Large Language Models?

What led to these dire predictions? The real surprise has come with the new generation of Large Language Models (LLMs) that use advanced machine learning to generate natural language in the form of (generative) written text. In short – AIs are trained through ‘machine learning’, which involves feeding them HUGE datasets to learn from. Practically every AI company is vague on how they’ve trained their algorithms – but it’s fair to acknowledge it is almost impossible to do this in a way that is not biased (see: Exploring the potential for bias in ChatGPT). For anyone using these AI models, no programming knowledge is required. End users just give the AI a command – this is called a prompt. Crucially, these prompts are in natural language, just as people would speak or type – no coding needed. Prompts can include questions, instructions or just a half-finished sentence. Most of these AI tools base their responses on all of the preceding conversation – not just the latest prompt – allowing them to continue responses and lines of discussion.
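For the technically curious, there is no hidden magic behind a prompt – it is just text sent to the model, with the whole conversation re-sent on each turn. Here is a minimal sketch (assuming OpenAI’s openai Python library and its chat endpoint as they stood in early 2023; the API key is a placeholder):

```python
# A minimal sketch of prompting an LLM through an API, assuming
# OpenAI's Python SDK (pip install openai) and its chat endpoint
# as of early 2023. The API key below is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT
    messages=[
        # The whole conversation is re-sent on every turn - this is
        # how the model 'remembers' earlier prompts and responses.
        {"role": "user",
         "content": "Write me an essay plan on avalanche impacts."},
    ],
)

print(response["choices"][0]["message"]["content"])
```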

There are now multiple LLM-based AI chatbots, including Maker AI, ChatGPT, Jasper and Bing – though it should be noted that most of these tools are powered by OpenAI’s GPT models under the hood. It is fair to say that OpenAI’s ChatGPT is probably the biggest name in the field – and much of the furore around AI is rooted in the quality of ChatGPT’s output. The real root of this might be the astronomical leap from GPT-2 to GPT-3, and the further leap with the tuned ChatGPT that has dominated the news. From the release of GPT-2 in late 2019 to the ChatGPT launch in late 2022, it is almost unbelievable to see how far the technology has come in three years.

The game-changer in AI access has to be Microsoft’s Bing Chat – a standout example because it raises both the profile of, and the level of access to, such tools. People do not have to pay: it’s embedded in the world’s number two search engine for free. This is not some tool students may or may not know about – and Microsoft will be bringing this functionality into Microsoft Office and Windows in due course. Microsoft Word’s Editor can already write automatic summaries, so this technology is already rolling out. If plans keep on course, this will make LLM AIs ubiquitous and mainstream at an unheard-of speed.

With GPT-4 launching this week (at least for ChatGPT Plus subscribers), the next paradigm shift in generative AI could already be here. It’s always hard to pin down details until there is a launch, but it is fair to say there will be a notable step up in capability associated with this release. How this impacts performance (positively or negatively) also remains to be seen – but it is certainly better at taking tests…

Image: GPT-4 outperforms ChatGPT by scoring higher approximate percentiles among test-takers.

Beyond ChatGPT

Alongside the LLM ‘chatbots’ that have caused such a storm, there are multiple other tools and applications for AI. This great image by @aaronsiim categorises these ‘generative AI’ tools, showing how they are not just focused on text-to-text, but also text-to-image, image-to-text and even brain-to-text. Some of these tools may be familiar – like Grammarly. It’s important to acknowledge Grammarly is AI-powered, and just one of many writing assistants available. I think Grammarly is less controversial for academic use than many of the tools listed here… but more on that later.

Image: List of generative AI tools (Find an accessible version here: List of Generative AI Tools)

We are also seeing a range of academic or educationally focused AI tools appear. 

  • Research Rabbit claims to ‘reimagine research’, providing a novel way to search for papers and visualise research landscapes.
  • Elicit is an AI-powered research assistant that aims to help summarise key takeaways and find relevant papers.
  • TeacherMatic is an AI assistant for educators that can help write lesson plans, design quizzes and produce rubrics.

Image: Research Rabbit visualising one of my articles, and the texts associated with it.

How do these AI tools work?

You’ll notice I introduced these tools as ‘producing’ and ‘generating’ content. This terminology is up for much debate. After all, the AI is ultimately using a very sophisticated algorithm to predict the next token based on the content it has been fed. Take the following example from OpenAI’s text-davinci-003:

Image: Generated Text on the topic of ALDinHE

Here you can see I’ve given the AI a prompt by starting a sentence: ‘The Association of Learning Development in Higher Education is’ – and from this prompt, the AI completes the rest. The colour coding highlights the probability of each next word. In this case, there was an 85.46% chance ‘international’ followed the word ‘an’, so the AI chose it. This highlighting is, however, just based on a single token – the actual process uses all input and output to generate new responses. It is almost unbelievable to think this is all powered by mathematical models predicting words – but it is. It may also be fair to argue that these tools are not yet true intelligences, though there is also much debate over the biology of the human brain and to what extent it is just an organic computer. Let’s leave this philosophical debate before it starts!
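To make the ‘predicting words’ idea concrete, here is a toy sketch in Python. It is nothing like the scale of a real model, which scores tens of thousands of candidate tokens at once – the candidate words and scores below are invented purely for illustration:

```python
# A toy illustration of next-token prediction. The candidate words
# and raw scores are invented; a real model produces scores for its
# entire vocabulary at every step.
import math

def softmax(scores):
    """Turn raw model scores (logits) into probabilities summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates for the word after '...Higher Education is an'
candidates = ["international", "organisation", "annual", "excellent"]
scores = [4.2, 2.1, 1.4, 0.8]  # made-up model scores

probs = softmax(scores)
for word, p in sorted(zip(candidates, probs), key=lambda pair: -pair[1]):
    print(f"{word}: {p:.2%}")

# Greedy decoding picks the top word each step; real chatbots usually
# sample from the distribution instead, which is why output varies.
```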

The thing I find almost magical is the application of these AI tools to image generation. OpenAI’s DALL-E 2 can draw anything you ask of it (within reason and safety), in any style you wish. It’s particularly fun for creating silly, impossible images – like a robot eating a doughnut in space (cartoon style). This has amazing potential to allow students to illustrate work in new, creative ways that don’t breach copyright (other than the rare occasion the AI regenerates an image it was trained on!). I’ve even started to use these images to illustrate my blog.

Image: DALL-E Generated Image – Robot eating donut in space
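If you’d rather generate images programmatically than through the website, a minimal sketch looks like this (assuming OpenAI’s openai Python library and its image endpoint as documented in early 2023; the API key is a placeholder):

```python
# A minimal sketch of generating an image with OpenAI's image API
# (DALL-E), per the endpoint and parameters documented in early 2023.
# The API key below is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Image.create(
    prompt="a robot eating a doughnut in space, cartoon style",
    n=1,             # number of images to generate
    size="512x512",  # supported sizes: 256x256, 512x512, 1024x1024
)

print(response["data"][0]["url"])  # a temporary URL to the image
```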

I’ve only touched upon these tools and how they work. For more detail, check out Assemble AI.

AI as unfair means

As you can imagine – and have probably seen within your own institutions – the most significant concern focuses on the use of AI for unfair means. There have been multiple examples of generative AI writing passable work at a whole range of levels. This is really problematic for educational assessment, and has led to snap bans on these tools in many educational contexts. Such policies need to be worded carefully, as a careless total ban on AI would pretty much rule out the use of Microsoft Word and many of the accessibility tools students rely on (more on this issue here: Considerations on wording when creating advice or policy on AI use). Part of the difficulty is that it’s hard to identify the point at which AI use crosses into unfair means. Let’s think through three examples:

Example 1: A student gets an AI to write a paragraph and passes it off as their own

Most people would argue this example counts as unfair means: ultimately, the AI has produced the ‘idea’ and the student is claiming it as their own. However, this gets complicated quickly. What if the student fed the AI a paragraph of ideas in note form, and it was just helping rephrase that text? Here the student produced the idea; the AI is just helping phrase it. This might still be unfair means, but it is less clear cut. It probably comes down to what is being assessed, and whether the AI is doing too much of the work on one of the assessed elements (for example, structure).

Image: Notes that AI can re-write as a paragraph. Prompt: Here are some notes for a paragraph, can you re-write it for me for a specialist audience: To clarify the context of the work it is best to start with the concept of space. Make sure it uses formal English.

Example 2: A student uses a writing assistant to proofread their work

Writing assistants like Microsoft Word’s Editor or Grammarly can be used to check written work for spelling, grammar and style errors. They rarely make direct changes in student work, but instead identify mistakes and suggest potential solutions. I think it is fair to say most policies would allow student use of these tools – after all, many institutions allow human proofreaders so long as they don’t make changes. However, this example can also get murky in the world of emerging AI tools. One of Grammarly’s competitor tools not only checks for spelling, grammar and style accuracy – but it can also infer the accuracy of what is written. If a student writes something that is ‘wrong’, the tool will attempt to identify it. At this point, the tool has a significant impact on what is produced – but then again, is it offering something a conversation with a tutor would not?

M7 DG6Blt R5sHwnl2FdOl6n Gf76HaYQypvaJgxWExqyhEiIW86sulyoOtK8mkYT6oaZgJMm4pdPuHfmTZbKx qsIdMOF11yFF M2lOqiTLCUEbDrY 51Ksab0GzFrZgHfdC150hIkM

Image: AI inferring errors: The Grammarly plugin in Microsoft Word, checking the above paragraph. It suggests minor edits for clarity and style. One example is the change from ‘- but it can also info on’ to ‘but can also infer’. You’ll notice I didn’t make this change as it removed part of the stylistic approach I was going for.

Example 3: Extended AI use across the study workflow

The student asks Bing Chat to search the internet and generate some ideas for their presentation. These ideas are fed into ChatGPT, which is asked to write a detailed presentation outline (slide-by-slide). The outline is written into PowerPoint, alongside the core text written by ChatGPT. PowerPoint’s Slide Designer then helps to generate the slide design. This is further enhanced by images created by DALL-E 2. At this point, the whole work is mostly generated by AI tools. Surely that’s unfair means? Then again, this might reflect a realistic future workflow… what the student contributed was a series of prompts to shape the output. Maybe that’s a better life skill than producing a simple presentation?

Suffice it to say, these three case studies highlight how complicated this issue is. There is no clear-cut answer, and this is all up for debate. If we are thinking about the fourth industrial revolution and the workplaces our students will enter, fluency with AI is, perhaps, a useful employability skill. I think there is a happy medium, an acceptable level of AI engagement. After all, if every student runs their essay through ChatGPT, they’ll all be formulaic and similar (depending on the prompts). We need to help students go beyond this. We need to assess their human skills, the stuff that cannot be replaced by a robot. For me – this screams for deep, extensive and far-reaching assessment reform. Authentic assessment has to be the goal – and that will probably include some use of AI along the way.

Detecting the use of AI

Alongside the use of AI to create content, there is also the use of AI to detect it. Similar to the explosion of academic integrity software that universities use to detect plagiarism, there is a new generation of tools to detect the use of AI. I’m not convinced this is the way forward.

Firstly, I do not believe any tool that claims to identify AI-generated content – and yes, I’ve seen that press release from one of the world’s leading ‘originality software products’… The simple truth is that AI-generated text is really difficult to identify. After running a series of these AI detection tools on some text I had generated via GPT-3, the results were uninspiring. There are even more amusing results in: A short experiment in defeating a ChatGPT detector. Most of these ‘detector’ tools have been trained on GPT-2 or GPT-3. At the time of writing, I’ve not seen any reported to work with GPT-3.5 or ChatGPT, showing how the sector is constantly playing catch-up. This is especially pertinent now that GPT-4 is here. This certainly supports claims that we are now heading towards ‘post-plagiarism’.
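For a sense of how many of these detectors work under the hood, here is a toy sketch of the common perplexity heuristic: text that a reference model finds highly predictable is flagged as likely AI-generated. All the numbers are invented for illustration – a real detector would take per-token probabilities from a model such as GPT-2:

```python
# A toy sketch of the perplexity heuristic used by many AI-text
# detectors. Low perplexity (highly predictable text) gets flagged
# as likely AI-generated. All numbers here are invented.
import math

def perplexity(token_probs):
    """Perplexity = exp(average negative log-probability per token)."""
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)

# Hypothetical per-token probabilities under a reference model
human_probs = [0.02, 0.40, 0.01, 0.15, 0.05]  # surprising word choices
ai_probs = [0.85, 0.60, 0.90, 0.70, 0.75]     # very predictable words

THRESHOLD = 5.0  # made-up cut-off, purely for illustration
for label, probs in [("human-written", human_probs),
                     ("AI-written", ai_probs)]:
    ppl = perplexity(probs)
    verdict = "flagged as AI" if ppl < THRESHOLD else "passed as human"
    print(f"{label}: perplexity {ppl:.1f} -> {verdict}")
```

Even in this toy you can see the weakness: light paraphrasing shifts the probabilities and the verdict flips, which is exactly what the ‘defeating a ChatGPT detector’ experiment exploits.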

Secondly, should we ban the use of AI? As you’ve seen above, it really isn’t clear-cut; the line to unfair means is a blur. A hard ban would disallow the use of automatic ALT text, dictation and spellcheck – which would disadvantage so many students. Even taking such assistive technologies out of the equation, the three case studies above show there may be some legitimate uses of AI to support work.

What does AI mean for higher education?

There are some pretty cool potential applications for AI in HE. The four positive applications that stand out for me are:

  1. Accessibility: From automated captions and subtitles, to dictation and OCR, AI is fundamental to the accessible use of technology. This makes it easier for students with disabilities or those who speak multiple languages to access course materials and participate in class activities.
  2. Personalised Learning: Students can get tailored instruction that helps them better understand course material and excel in their studies. For example, AI-powered algorithms can analyse student data such as grades, test scores, and attendance records to identify areas where they need extra help or guidance.
  3. Quicker help: AI can undertake much of the dull admin for academics and students alike. No more back and forth to find a meeting time: with access to everyone’s diaries and travel plans, the AI can arrange it in an instant. Maybe it can even give the answer without the need for the meeting in the first place. AI could also assist with marking and get students quicker feedback.
  4. New learning opportunities: By integrating with virtual reality technology, AI can be used to simulate real-world scenarios so that students can gain hands-on experience in their chosen field. This could even be used for assessment, giving every student their own personalised experience. 

…and on the flipside, the negatives:

  1. Losing the human: An AI sets an assignment, students use an AI to write it, academics use an AI to mark it. Results are moderated by an AI. Okay, it’s a bit of an extreme example, but there is a real risk that we forget the human in some places. There is a risk that automation could replace jobs and de-skill people.
  2. Marginalisation: While AI can enable inclusion, it can also marginalise and exclude. It is fair to acknowledge some people will be left behind and not engage with such tools. This might not be from choice, but could be from lack of funds or access. Students with their own laptops and funds to subscribe to AI tools will always have more access than those with limited funds and reliance on campus computers. 
  3. Loss of privacy: Should universities feed all student data into an AI, there are some major consequences for privacy. Student emails, assignments, grades, background – all processed for progress flags and warnings. This is BIG data on a new level. If tied together, it can generate a really personal profile – this can enable new learning opportunities, but at what cost?
  4. Digital divide and literacy: Some students and academics are already overwhelmed with the currently available technology. Throwing AI into the mix may be a paradigm shift too far. At a minimum, it presents a challenge to reskill staff and upskill students in the use of these AI tools. This is a potential Learning and Development headache for HR departments.

The path forwards

One thing that is certain is that higher education will need to radically rethink assessment. Unlike secondary and further education, HE over-relies on essays and other written assessments. This competence is at risk of becoming redundant (to some extent!) as AI technology develops further. Yes! I know that is controversial! We should never ditch the written argument as something we expect students to be literate in – but we should also consider how technology has made many other skills more or less important, and this could apply here too. It is for this reason that the University of Hull moved to competence-based education some years ago. I quite like how my colleague Mike Ewen frames how competence-based approaches can offer an answer to AI concerns: Mapping the potential of AI in the age of competence based higher education.

AI produced art: Prompt: Artificial intelligence as the bright future of humankind, digital art

AI produced art: Prompt: Digital art showing an artificial intelligence robot as the fall of mankind

What does this mean for Learning Development?

This is the question, isn’t it?!

Right now, this could go anywhere. AI presents a possible paradigm shift in how we access content, produce work and learn. This paradigm shift is not just in education but in the workplace too. AI may well be a core employability skill for the future. For Learning Developers, we will have to help students navigate this new literacy. Students will need a comprehensive set of academic, information, digital and AI literacies for their learning and future work. Given that these AI tools work through natural language, I feel their use falls more towards Learning Development than IT professionals.

Obviously – there is another side. AI might just make some aspects of education and assessment redundant. If it becomes normalised to use AI to access information, rearticulate it and structure an argument, some of the core literacies supported by Learning Development will become redundant. 

I don’t see either extreme happening in their entirety, of course. 

This could all be for nought. Maybe these tools are at the height of the hype cycle, and we will never see their everyday application. I think this is unlikely, but it will depend on how the accuracy and ethics of these tools develop. We’ve seen similar things happen with virtual reality and the Segway: lots of hype, no fundamental world-changing impact (yet). We can throw cryptocurrency in there too, perhaps.

Over to you!

I want this post to be a conversation starter. It would be great to hear more from the Learning Development community. Where do you think this will go in the future? What does the present look like in your institution? 

Bios

Dr Lee Fallin is a Lecturer in Education Studies at the University of Hull. He has ten years of experience working as a Learning Developer for the University Library and is an ALDinHE Certified Learning Practitioner. Lee is a Senior Fellow of the Higher Education Academy, a Microsoft Certified Educator and a Microsoft Innovative Educator Expert. Lee has an EdD in Education, a PG Cert in eLearning and is working towards his PG Cert in Academic Practice. His research interests focus on the intersections between education and geography, inclusive of physical and digital spaces. His current research interests include learning spaces and communities, inclusive digital practice, research methodologies and geographies of place. You can find him on Twitter as @LeeFallin.

Image: Dr Lee Fallin is a white male academic in his thirties. He wears glasses. BUT – what does the AI say?

Image: AI biography: Dr Lee Fallin is a Lecturer in Education Studies at the University of Hull12. He has research interests in learning technology, accessibility and the intersections between education and geography12. He also blogs and tweets about various topics related to higher education23.
