The aim of LearnHigher is to promote and facilitate the development and dissemination of high-quality, peer-reviewed resources for learning development in the higher education sector. The LearnHigher Working Group recognises that technologies for developing educational resources are continually evolving. This policy has been written to set out, for authors and users, our position on Artificial Intelligence (AI) – inclusive of ‘Large Language Models (LLMs)’, ‘AI Chatbots’, ‘Generative AI’, and other related technologies.
While we recognise the potential benefits of using AI tools to develop learning development resources and materials, we are also aware of their risks, which raise important ethical considerations that we expect our authors to address when submitting their work to LearnHigher. To uphold principles of academic integrity and trustworthiness, authors submitting materials that have used AI at any stage of the resource development process must adhere to the following principles:
Transparency – Transparency in resource development and dissemination is critical to the peer review process and to LearnHigher’s standards. A lack of transparency undermines trust, and we expect authors to clearly disclose the use of AI systems in the creation of their materials. This includes attribution of the specific AI system used, the extent of its use (for example, text and image generation, resource design, or teaching, learning and assessment strategies), and the purpose of its use. This will allow editors, reviewers and readers to better evaluate the work.
Authorship – AI tools, and particularly Large Language Models (LLMs) that generate written text, images and diagrams, should be used to augment, not replace, original resource design and pedagogic rationale. Authors retain responsibility for the final product and should ensure it reflects original planning, design and supporting resources. The AI system itself is not the author, nor should it be listed in the references.
Accuracy – AI tools can confidently produce erroneous or incorrect outputs, often referred to as ‘hallucinations’. All work produced with AI should be carefully reviewed to ensure accuracy, logical coherence, and alignment with ALDinHE values and the pedagogic literature. Errors, inaccuracies, biases or misrepresentations introduced by the AI system must be corrected prior to submission.
Ethics – Consideration must be given to intellectual property, privacy, and the responsible use of the data sources used to train generative systems. The limitations and biases inherent in any AI tools used must also be taken into account. Authors must ensure that resources developed with AI are inclusive and do not inadvertently disadvantage students.
By adopting the above principles, we aim to integrate these emerging technologies thoughtfully into our network and into the learning development profession. Our commitment to transparency, integrity, author accountability and ethics will guide decisions around publishing AI-generated, AI-processed or AI-analysed content. Authors considering the use of AI systems are encouraged to discuss options with the LearnHigher Working Group before submission. Authors will also be required to complete an AI disclosure statement as part of the submission process, so that we can understand if and where AI technologies have been used in creating a resource.
In publishing with us, authors can be confident that we do not use AI for any author, reviewer or external content creation or correspondence, nor do we feed submissions into AI tools that summarise or review teaching and learning materials.
We welcome feedback from the learning development community as we continue to evaluate appropriate uses of AI in resource development and dissemination.