Cross-Lab Responsible AI Research Group
Background
Developing and practising responsible AI has become an important topic across academia, industry, and the public. As a leading AI institute, we practise the highest ethical standards under the framework of UTS2027. Nevertheless, we must act pre-emptively to help guide the conversation. Our world-leading research excellence, prominent industry-driven impact, and global connections make our voice integral to these crucial discussions.
Objectives
This working group aims to cover the broad topics and interests of responsible AI, including the theoretical development of transparent and fair machine learning models; cross-disciplinary understanding of the tensions between AI and human cognition (e.g. human-machine collaboration and the social sciences); and AI ethics and privacy. We also aim to develop applications of responsible AI, such as explainable techniques that allow users to understand the reasoning behind AI decisions.
We expect:
To coordinate the development of advanced techniques within AAII, e.g. detection of AI-generated fakes, explainability techniques, robustness techniques, fairness-aware recommendation, and more.
To create guidelines that help academia, industry, and the public govern AI use, given its vast societal, political, and economic implications.
To promote AAII’s internal collaborations on the theoretical development, methodological innovation, implications, and applications of responsible AI, in terms of transparency, interpretability, robustness, adaptability, and security.
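To make the explainability goal above concrete, the following is a minimal, hypothetical sketch (not the group's actual tooling) of one widely used model-agnostic technique, permutation feature importance: it estimates how much each input feature contributes to a model's predictions by shuffling that feature's values and measuring the resulting drop in accuracy. The toy classifier and data are illustrative assumptions.

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop per feature when that feature's column is shuffled."""
    rng = random.Random(seed)
    base = sum(predict(row) == label for row, label in zip(X, y)) / len(y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the feature-label association for column j
            Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            score = sum(predict(row) == label for row, label in zip(Xp, y)) / len(y)
            drops.append(base - score)
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy classifier whose decision depends only on the first feature.
predict = lambda row: int(row[0] > 0.5)
X = [[i / 10, (7 * i) % 10 / 10] for i in range(10)]
y = [int(row[0] > 0.5) for row in X]

imps = permutation_importance(predict, X, y)
# imps[0] should dominate: shuffling the unused second feature changes nothing.
```

Because the technique only queries the model through `predict`, it applies to any black-box classifier, which is what makes it a useful baseline for the transparency work sketched above.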
Main Tasks
To propose standards and guidelines for responsible AI, e.g. developing best practices for designing and deploying AI systems in a responsible and ethical manner.
To promote research and discussion on ethical and responsible AI practice, including analysing current AI systems and their impact on society, identifying ethical concerns, discussing ways to mitigate potential harm, and more.
To establish collaborations with industry and academia, e.g. connecting with industry leaders and global academic researchers to promote ethical AI practice and to advance research in the field of responsible AI.
Group Members