Embedding an inclusive approach to Artificial Intelligence in Higher Education
Dr Matt Bawn joins an interdisciplinary network of academics kickstarting a new project to gain a greater understanding of what Artificial Intelligence (AI) means to different groups.
The adoption of AI has been accelerating across sectors. While the technology has great potential to transform industries, its rapid expansion also brings challenges, with ethical concerns, such as bias in algorithms, remaining critical areas of contention.
Striking a balance between innovation and ethical responsibility is therefore imperative as society navigates the evolving landscape of AI.
While higher education plays a critical role in providing training and skills in these new technologies, it must also strive to achieve UNESCO’s objective of producing socially responsible global citizens. To reach this goal, higher education must embed an inclusive and equitable use of AI.
Dr Matt Bawn, co-lead of BAIBEL and Lecturer in Bacterial Genomics in the School of Molecular and Cellular Biology, said:
“Whilst AI can be used effectively and legitimately across all types of teaching, research and administration, there are currently few guiding principles that acknowledge the link between AI and culture.
“Higher education is a complex and diverse landscape with a rich community of academics, professional services and technical staff and students representing a wide array of distinct cultures and outlooks.
“With such a dynamic mix of experiences, we need to be sure that the language we use can be mutually used and equally understood.”
Dr Manoj Ravi, co-lead of BAIBEL and Lecturer in the School of Chemical and Process Engineering, added:
“Although phrases like AI competence and equitable use of AI are becoming increasingly prevalent, these often mean different things to different people. Unpacking these terms through the lens of different stakeholders in higher education is pivotal to enabling inclusive and constructive use of AI in the sector.”
BAIBEL, which launched at the beginning of 2024, will establish a framework that empowers students and staff to embrace AI advancements while understanding and respecting the cultural implications and nuances observed in AI.
Funded by the Horizons Institute, the activity will be delivered through a series of workshops with different stakeholders at the University, such as academic researchers and educators from arts, humanities, and STEM disciplines.
Through its ‘AI and YOU in HE’ workshops, BAIBEL invites a broad spectrum of voices from across the University, with varying degrees of interest and expertise in AI technologies, to share their experiences, views and concerns about AI and to help shape its future direction in a way that builds a shared understanding of terms such as ‘equitable’ and ‘ethical’.
She added:
“This will help to empower students and staff to embrace AI advancements while simultaneously appreciating and respecting the cultural implications and nuances this technology entails.”
By looking at how academics, professional staff, students and industry professionals understand AI technologies and their implications, the research team will address the Horizons question “What Comes After the UN Sustainable Development Goals?” through the lenses of “Culture (the missing pillar)” and “Balancing the risks and rewards of transformative technology”.
Following the initial design stage, the academics hope to engage with other UK higher education institutions, education-related Non-Governmental Organizations, charities and policy makers at an event in September, continuing the conversation and building a common language across higher education to ensure an ethical and sustainable approach to teaching AI in the sector.
If you would like more information about the project, contact m.ravi@leeds.ac.uk.
Photo by Hitesh Choudhary from Unsplash.