This #Take5 is brought to you by Dr Kirsty Hemsworth, Skills Manager, Sheffield Hallam University (SHU). Kirsty leads two student teams in the Library’s Skills Centre at SHU: the Student Skills Partners, who co-design resources and support academic, digital, and information skills projects, and the Library Student Panel, who provide feedback and user experience insight.
In this blog, Kirsty shares a practical approach for increasing transparency around Generative AI use in student-staff collaborations, using a traffic-light system that colleagues are invited to adapt and apply to their own partnership work with students.
What was the challenge?
As anyone involved in student co-design and partnership work will know, most innovations come from adjusting in real time to the needs of the team, usually from one week to the next! Students rarely approach a task in quite the way you expect, and over the past three years of leading our Student Skills Partner and Library Student Panel teams at Sheffield Hallam University, I’ve needed to experiment fast and fail faster when it comes to finding what works for students. Guiding the use of Generative AI has been no exception.
To support our large community of commuter students, carers and those on placement here at Sheffield Hallam, we took the decision early in our partnership work with students to ensure that as much co-design work as possible could take place online and asynchronously. By prioritising accessibility in this way, we've aimed to create a partnership model that offers paid opportunities for professional development to students whom we typically fail to reach as an academic skills service.
As a result, a large proportion of the co-design work across the team centres on consultation, resource creation, and reflective feedback tasks. However, as the use of Generative AI in student assessment has grown, so has its presence in the work of our student teams, making it clear that Skills Partners needed explicit guidance on how and when AI tools could be used appropriately in their co-design tasks.
What did I do?
I didn’t want to introduce anything too complex or time-consuming, either for myself or the students: the priority was a simple, clear system that could be slotted easily into our existing ways of working. So, I adapted existing AI assessment scales (Perkins, Furze, Roe and MacVaugh, 2024; Purvis, 2025; University of Leeds, n.d.) to develop a simple, visual traffic light system, applied at task level, to guide the appropriate use of AI. This was introduced to the student teams during their role induction and then used consistently in weekly emails throughout the programme.
Although the scale was applied on a task-by-task basis, some patterns began to emerge:
- Red tasks would often involve personal reflection, lived experience, or direct student opinion. For example, one ‘red AI’ task asked students to map the journey of a student – real or imagined – through university, identifying key challenges, milestones and interactions with support services.
- Amber tasks tended to be those where embedded AI features in tools like Canva or Adobe Express could be used as part of broader content creation work, including a group project to design resources and comms materials during Ramadan.
- Green tasks were admittedly few and far between, but proved useful where we wanted to draw attention to critical thinking. For example, students were permitted to use Generative AI to gather a list of strategies for group facilitation, but were then asked to review and adapt these into a personal action plan for an upcoming workshop.
In Semester 2, as students began working in project groups, project supervisors – a team of Library staff from different service areas – agreed to adopt a consistent table format for project briefs and weekly emails, using the same three-tier model. In response to student feedback partway through the year, a note was added to amber-rated tasks to clarify where AI use was permitted. This had been a source of early anxiety for some students, and the clarification helped reinforce where creative support tools could be used confidently.
How did students respond?
Students were unanimous in finding the traffic light system helpful for understanding when and how AI tools could be used in their Skills Partner tasks – 25% rated it ‘Mostly Helpful’ and 75% ‘Very Helpful’. Some students, however, felt the restrictions were occasionally too strict:
“It was extremely clear on whether or not we could use AI in our tasks once the traffic light system was in place. However, on some occasions I felt that AI could benefit the tasks to help us generate ideas that we can then put our twist on and be creative about.”
This feedback highlights a tension we’re still navigating. AI was marked as ‘Red’ – not permitted – for tasks based on personal reflection or lived experience. During induction and training, we emphasised that we wanted students’ authentic voices, particularly when describing barriers or challenges in their own learning. However, it’s worth asking whether we sometimes expect Partners and Panel Members to have fully formed views on topics they may never have engaged with before.
In our end-of-year survey to evaluate the partnership programme, we asked students which aspects of their role they felt AI could support. Based on their responses, student team members saw little value in using AI for peer-working tasks, but strongly associated it with planning, content creation, and designing learning materials, suggesting a preference for using AI as a support tool for production and logistics. Outside of substantive tasks, they also made regular use of AI for everyday digital communication, including writing emails and posting updates in Microsoft Teams, reflecting a broader trend towards using AI to manage the operational and administrative side of their roles. This has interesting implications for how we shape the professional development aspects of our partnership programmes in future years, responding to the skills and experiences students most value from their roles with us.
What’s next?
While the system was well received, it was still clear at times that Generative AI was being used on tasks marked ‘Red’, particularly for creative activities, such as brainstorming new teaching formats or session titles. I don’t see this as a failure of the framework, but rather as a reminder of the pressures we sometimes place on students to be imaginative in unfamiliar territory. When students feel unsure about the quality of their ideas, they increasingly turn to AI for a starting point. This highlights the value of using the framework not just to set boundaries, but to open up conversations about appropriate and transparent AI use, and to be thoughtful about the kinds of input we ask students to provide.
Looking ahead to 2025–26, we’re exploring the following adjustments to further clarify AI use for our student teams:
- Introducing AI declarations for student team members to include on any student-facing resources created with AI support, modelling good practice in academic integrity.
- Expanding our training offer to include a hands-on workshop on AI use in co-design for learning development.
- Shifting the emphasis of core tasks from content creation towards research-focused work, such as consultation and user experience research with staff and students, where the student teams already recognise the limitations of AI use.
I’d love to hear about your own experiences with AI and co-design work with students – please leave a comment below to share your insights or connect with me on LinkedIn and BlueSky to continue the conversation!
References
Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A framework for ethical integration of generative AI in educational assessment. Journal of University Teaching and Learning Practice, 21(6). https://doi.org/10.53761/q3azde36
Purvis, A. (2025). Artificial Intelligence Transparency Scale (AITS). The National Teaching Repository. Retrieved June 10, 2025, from https://shura.shu.ac.uk/35464/
University of Leeds. (n.d.). Categories of assessment. Generative AI – University of Leeds. Retrieved June 12, 2025, from https://generative-ai.leeds.ac.uk/ai-and-assessments/categories-of-assessments/
Bio
I am a Skills Manager at Sheffield Hallam University, based in our Library’s Skills Centre. I lead student partnership initiatives to co-design an inclusive academic skills and Library offer, which includes both our Student Skills Partner and Library Student Panel teams. My wider interests include experimenting with digital technologies to enhance academic skills teaching and, building on my PhD in Translation Studies, supporting international learners. I recently became Co-Chair of the ALN Academic Skills Community of Practice and look forward to contributing to the work of this supportive and collaborative community of colleagues.
ORCID ID: https://orcid.org/0009-0001-1460-3430
