AAII panel | Responsible AI for Teaching and Learning
On Tuesday 1 August 2023, a panel of AAII’s researchers discussed the implications of ChatGPT and LLMs for teaching and learning. The event represented a collaboration between LX.Lab and UTS’s AAII, and formed part of the AI x L&T event series (31 July – 4 August).
AAII panel on LLMs and Responsible AI for Teaching and Learning | 1 August 2023
Generative AI (GenAI) has recently become more widely available and is impacting higher education, raising questions about the future of learning. Within UTS, conversations about GenAI cover assessment, integrity, its role as a learning tool or administrative aid, and how humans’ roles will change in the workplace.
Inspired by the rising interest in and concerns about GenAI, this panel brought together leading AI researchers from UTS's AAII, including:
- Distinguished Professor Jie Lu AO (AAII Director)
- Distinguished Professor CT Lin (AAII Co-Director; Computational Intelligence and Brain Computer Interface Lab Director)
- Professor Paul Kennedy (Biomedical Data Science Lab Director)
- Professor Ling Chen (Data Science and Knowledge Discovery Lab Director)
- Associate Professor Guodong Long
- Associate Professor Nabin Sharma (Intelligent Drone Lab Co-Director)
- Dr Yi Zhang (Panel Moderator)
The panel covered topics ranging from ChatGPT and large language models (LLMs) to responsible AI, spanning both theoretical foundations and practical applications.
Mechanisms of GenAI and LLMs
To begin the discussion, Dist. Prof. Jie Lu AO provided an overview of AI and its subfields, highlighting how areas such as machine learning and natural language processing intersect to create generative AI models.
AAII researchers provided an overview of the technology that powers ChatGPT, including:
- Generative AI technology: Unlike a search engine such as Google, which returns results for a human to filter, ChatGPT processes information on the user's behalf and generates responses such as conversations, translations and poetry.
- Pre-trained: ChatGPT is pre-trained at massive scale to answer questions. As a point of reference, GPT-3 was trained on some 45 TB of raw text drawn from web crawls, curated content, books and Wikipedia.
- Transformer: ChatGPT is built on the Transformer, a deep learning neural architecture more powerful than earlier neural networks, capable of learning from vast amounts of data and producing complex insights (see the sketch after this list).
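As a hands-on illustration (ours, not the panel's), a small pretrained Transformer can be run in a few lines with the open-source Hugging Face transformers library; GPT-2 stands in here for ChatGPT, whose weights are not public:

    # A minimal sketch: generate text with a small pretrained
    # Transformer (GPT-2) via the Hugging Face transformers library.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("Generative AI is changing education because",
                       max_new_tokens=30)
    print(result[0]["generated_text"])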
According to Prof. Ling Chen, LLMs model the likelihood of word sequences, predicting the probability of the next word given the words that came before. Over time, the field has progressed from basic statistical word models to neural language models, and then to pretrained language models. The emerging LLMs now dominating public discussion display surprising capabilities.
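In symbols, a language model factorises the probability of a word sequence into a chain of next-word predictions, which is why text is generated one word at a time:

    P(w_1, \dots, w_n) = \prod_{t=1}^{n} P(w_t \mid w_1, \dots, w_{t-1})

Each factor is the probability of the next word given everything that came before; an LLM is, at its core, a very good estimator of these conditional probabilities.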
For example, GPT-2 has 1.5 billion parameters and GPT-3 has 175 billion, while GPT-4 is reported to be larger still, although OpenAI has not disclosed its size. These LLMs can solve tasks few-shot, eliminating the need to fine-tune on downstream tasks: they adapt to new tasks from only a handful of examples and deliver impressive results on complex questions.
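To make "few-shot" concrete, here is an illustrative prompt in the style used to evaluate GPT-3 (our example, not the panel's): two worked examples teach the task in-context, with no retraining.

    # A hypothetical few-shot prompt: two worked examples let the model
    # infer the task (English -> French translation) in-context,
    # without any gradient updates or fine-tuning.
    few_shot_prompt = (
        "Translate English to French.\n"
        "sea otter => loutre de mer\n"
        "cheese => fromage\n"
        "peppermint =>"
    )
    # A sufficiently large LLM is expected to complete this with
    # "menthe poivrée" ("peppermint").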
However, the safeguards built into these models still need fine-tuning, prompting conversations about the responsible use of LLMs, and of AI more generally.
Responsible AI Mechanisms: Reinforcement Learning and Standardisation
According to Dist. Prof. Jie Lu AO, the principles of fairness, transparency, data privacy and human-centric design guide the development of AI safeguards.
Reinforcement learning is one key approach to aligning GenAI with human ethical values. In systems such as ChatGPT, human feedback on candidate responses is fed back into training, aligning the model with human preference data and action trajectories. A key focus area is improving these reinforcement agents so that GenAI answers align more closely with human values.
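As a sketch of one piece of this pipeline (our illustration, assuming PyTorch; OpenAI's exact objective is not public), a reward model for human feedback is commonly trained with a pairwise preference loss:

    import torch
    import torch.nn.functional as F

    def preference_loss(reward_chosen, reward_rejected):
        # Bradley-Terry pairwise objective: push the reward model to
        # score the human-preferred answer above the rejected one.
        return -F.logsigmoid(reward_chosen - reward_rejected).mean()

    # Toy scalar rewards a reward model might assign to two answer pairs.
    chosen = torch.tensor([1.2, 0.7])
    rejected = torch.tensor([0.3, 0.9])
    print(preference_loss(chosen, rejected))  # a single scalar loss

The trained reward model then serves as the target that a reinforcement learning algorithm optimises the language model against, steering its answers toward human-preferred behaviour.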
Introducing standards to regulate emerging AI software is another important safeguard. Prof. Paul Kennedy is an active member of the ISO/IEC SC 42 committee for the standardisation of AI, which brings together national bodies, with delegates from academia, industry, professional societies and research organisations around the world, to develop the governance of AI.
The ISO Committee covers everything from defining basic terms such as Artificial Intelligence itself, to creating governance frameworks to guide the development of safe and equitable AI applications.
Human-centric design and safeguards against bias are a key focus of the ISO committee, with Prof. Paul Kennedy describing efforts to base software responses on customers' decisions rather than their demographics, in order to avoid bias around gender, age and other identity attributes.
Dist. Prof. CT Lin, a world-leading researcher in brain-computer interfaces, believes that developing these inbuilt safeguards is crucial to maximising human-AI interaction. Once end users can understand AI decisions, greater trust can be placed in AI, and humans and AI will be better able to adapt to each other and cooperate spontaneously.
AI and Education
AAII researchers believe that AI has great potential to empower the education sector; however, AI tools must be incorporated into teaching and learning safely and cautiously.
A/Prof Nabin Sharma, who works closely with students, has noticed AI making education more accessible by providing real-time translation for students who speak English as a second language. AI tools have also increased students' productivity, notably LLMs that give students personalised tutoring support to complement the classroom experience. A/Prof Sharma also observed the high calibre of this year's UTS Tech Fest AI Showcase, where students reported using AI tools to enhance their data visualisations and achieve high-level results.
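As a small illustration of the kind of translation tool described (our example, not the specific system students use), an open-source model can provide such translations in a few lines:

    from transformers import pipeline

    # Open-source English -> French translation with a small T5 model,
    # a stand-in for the real-time translation tools mentioned above.
    translator = pipeline("translation_en_to_fr", model="t5-small")
    result = translator("The lecture starts at nine o'clock.")
    print(result[0]["translation_text"])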
At this early stage, AI is already showing benefits within education: providing personalised learning experiences, making education more accessible, and enhancing the products of students' learning. AI can also support teachers, with tools that alleviate marking workloads and generate engaging educational content.
However, key challenges must be addressed before the widescale adoption of AI in education, chief among them bias and human alignment. AAII's researchers are confident that these challenges can be overcome by keeping responsible AI principles at the forefront of their efforts.
With tools such as ChatGPT predicted to change society and the job market, AAII researchers urge that educating students on the responsible and safe use of these tools is essential.
AAII academics are committed to working with standards groups, such as the ISO committee, to guide the development of AI standards that promote fairness and transparency. Once developed, these standards will inform student education, equipping the first generation of AI users to use AI tools ethically.
With advancing standards and inbuilt reinforcement learning approaches, AI applications have the potential to revolutionise education for all.
Learn more: AAII Cross-Lab Special Working Groups on LLMs and Responsible AI