Artificial Intelligence Operations Procedure
1. Purpose
1.1 The Artificial Intelligence Operations Procedure (the procedure) supports the implementation of the Artificial Intelligence Operations Policy (the policy).
2. Scope
2.1 The scope of the policy applies to this procedure.
3. Principles
3.1 The principles outlined in the policy apply to this procedure.
4. Procedure statements
4.1 UTS is guided by the New South Wales Government's AI Assessment Framework (the NSW framework) in developing risk assessment, project development and decision-making strategies for the management of AI. This is in addition to the processes outlined in the Risk Management Policy.
4.2 Business owners of AI and AI system owners should maintain an open dialogue with relevant UTS experts and stakeholders as part of AI system approval, implementation and ongoing management. This may include the Data Analytics and Insights Unit, the Information Technology Unit, the Office of General Counsel, the Governance Support Unit (including Corporate Governance and Corporate Information), the Finance Unit, data stewards and information system stewards as appropriate.
4.3 When considering an AI system, UTS applies a 6-stage process covering identification, assessment, approval, implementation and management. These stages must be completed before the:
- development or procurement of an AI system
- deployment or implementation of an AI system, and/or
- activation of an AI capability within an existing system.
4.4 Where an AI system is being proposed for either development or procurement, the normal project approval or procurement processes should be aligned with the AI approval process where possible (for example, risk evaluation and mitigation processes) to avoid duplication of effort.
Stage 1: Primary purpose and ethical alignment
4.5 AI may be proposed as a solution to a teaching, learning or operational problem. A primary purpose or objective for an AI system must be identified (in line with the policy) in order to provide clarity and transparency to both the approvers and the end users.
4.6 The endorsement of an AI system will be based on the primary purpose. Where more than one purpose is identified (for example, to solve multiple problems) all must be clearly outlined and appropriately risk assessed.
4.7 AI system proposals raise a number of fundamental questions that must be considered to ensure the proposed system aligns with the UTS 2027 strategy and the Equity, Inclusion and Respect Policy.
4.8 The UTS AI Ethical Assessment (based on the NSW framework) identifies key questions that should be considered as part of any AI system to ensure a commitment to human rights. The Australian Human Rights Commission’s Using artificial intelligence to make decisions: addressing the problem of algorithmic bias (available at Technical Paper: Addressing Algorithmic Bias) provides some additional insights into the impact of AI on human rights. Assessing AI against these ethical considerations will determine the value of progressing to stage 2.
4.9 Where an AI system does not align with the expectations outlined in the policy, it is unlikely that the system development, activation or procurement will be endorsed. Further guidance regarding appropriate alternatives may be sought from key stakeholders (refer statement 4.2).
4.10 In line with the policy, where an AI system proposal shows benefit to UTS, and its community, the business owner must submit the proposed AI system to the Data Analytics and Insights Unit (AI.Ops@uts.edu.au) for feedback before progressing to stage 2. The Data Analytics and Insights Unit will provide guidance on the appropriate templates and checklists.
Stage 2: Risk and opportunity evaluation and control
4.11 Business owners of AI and/or AI system owners must undertake a risk and opportunity assessment in RiskConnect (available at Office of General Counsel: Risk (SharePoint)) to determine the potential benefits of an AI system and identify any risks that require management.
4.12 In line with the Risk Management Procedure (SharePoint), business owners of AI and/or AI system owners must identify, assess and analyse risks and opportunities of the AI system, including:
- identifying any potentially impacted individuals, stakeholders and/or communities
- identifying perceived benefits and potential harms to impacted individuals, stakeholders and communities
- considering alignment or compliance with UTS policies and legal requirements, including privacy and security
- considering non-AI alternatives
- considering data issues that may introduce bias or result in unfairness
- considering processes to ensure transparency
- considering people, skills and processes to ensure clear accountabilities in the development and use of AI systems and their outputs, and
- identifying any risks that would prevent the system from providing the perceived benefits to the impacted individuals, stakeholders and communities.
4.13 Business owners of AI and/or AI system owners must undertake the UTS AI ethical assessment (based on the NSW framework), which also provides guidance and advice for assessing risks, harms, benefits and compliance.
4.14 Risks and opportunities must be evaluated and analysed, and treatments proposed, in RiskConnect. This must be presented to the Artificial Intelligence Operations Board (the board) as part of the endorsement process (refer also Risk Management Procedure (SharePoint), including the control effectiveness rating scale) as outlined in the policy.
4.15 Specific mitigating activities or controls must be put in place to ensure compliance with the ethical principles for the use of AI (refer the policy) and the Risk Management Procedure (SharePoint).
4.16 To meet ongoing risk management requirements, an operational risk register and a schedule for AI system review should be established so that operational risks are managed and changes in the operating environment are responded to accordingly. AI system owners are responsible for ensuring risks, opportunities and controls are regularly monitored and updated (normally in RiskConnect) on an ongoing basis.
Stage 3: Managing data and privacy
4.17 Further to the statements outlined in the policy, data governance requirements (or a management plan) and controls must be outlined as part of an AI proposal (refer NSW framework and the Data Governance Policy).
4.18 The appropriate information ownership, access, use and authorisations must be specified as part of the endorsement process. In some cases, depending on the nature of the AI system, data governance settings and authorisations may need to be developed specifically for the system or algorithm. In other cases, for example, if an AI capability is being activated in an existing system, these settings and authorisations may already exist.
4.19 A privacy impact assessment is required as part of the endorsement process as outlined in the policy (refer Privacy impact assessments (SharePoint)).
Stage 4: AI project endorsement and approval
4.20 Business owners of AI must register the intent to procure, activate and/or develop an AI system with the board for review and endorsement (refer the policy and Appendix 1).
4.21 The board will review the completed AI system proposal and either:
- provide feedback to the business owners and request further information, development or risk mitigation (including escalation of risk treatments in line with the Risk Management Procedure (SharePoint)), which must be resolved before endorsement and approval
- endorse the AI system for approval in line with the policy, or
- recommend that the AI system should not progress because it does not meet the requirements of the university and/or the primary purpose stated in the AI system proposal.
Stage 5: AI systems deployment endorsement and approval
4.22 Business owners of AI must register the intent to deploy an AI system with the board for review and endorsement (refer the policy and Appendix 1). Templates and checklists for approval are available from the Data Analytics and Insights Unit (AI.Ops@uts.edu.au).
4.23 The board will review the completed AI system proposal and either:
- provide feedback to the business owners and request further information, development or risk mitigation (including escalation of risk treatments in line with the Risk Management Procedure (SharePoint)), which must be resolved in advance of further endorsement and final approval in line with the policy
- endorse the AI system for approval in line with the policy, or
- recommend that the AI system should not progress because it does not meet the requirements of the university and/or the primary purpose stated in the AI system proposal.
Stage 6: AI systems management and governance reporting
4.24 AI system owners must ensure that AI systems are appropriately managed and regularly monitored during their lifecycle to:
- ensure they are meeting their primary stated purpose
- ensure they continue to adhere to the ethical principles for the use of AI
- prevent model drift
- ensure the risk rating and the risk mitigation plan are current and accurate and reflect the internal and external operating environment, and
- ensure any updated risks are being mitigated and are reflected in the risk mitigation plan.
4.25 Any issues identified as part of the monitoring process must be addressed and managed appropriately by the AI system owners with support from the business owners of AI.
4.26 Business owners of AI must ensure staff responsible for the implementation or management of AI have appropriate expertise or are provided with any necessary support or training to ensure systems and software are understood and used appropriately.
4.27 In addition to the requirements outlined in the policy, AI system owners must:
- establish, maintain and manage final approval records, including the project brief, in line with the Records Management Policy, and
- maintain and manage all records related to the implementation and operation of the AI system in line with the Records Management Policy.
4.28 Business owners of AI must review risk mitigation plans for AI systems and report to the board at least annually or as otherwise directed by the board or the University Leadership Team.
4.29 The Chair, on behalf of the board, will escalate any risk treatments or actions that are overdue or not adequately addressed in line with the Risk Management Procedure (SharePoint). These must be addressed and resolved as determined by the Director, Risk and the board.
5. Roles and responsibilities
5.1 Procedure owner: The Chief Data Officer is responsible for the approval and enforcement of this procedure.
5.2 Procedure contact: The Head of Data Analytics and Artificial Intelligence is responsible for the day-to-day implementation of this procedure.
5.3 Implementation and governance roles: AI system owners and business owners of AI are responsible for the approval, management and oversight of AI systems within their remit.
6. Definitions
The definitions outlined in the policy apply to this procedure; the definitions below are in addition to those in the policy. Definitions in the singular also include the plural meaning of the word.
Model means a specific computation designed to recognise certain types of patterns. A model is the result of training an algorithm that takes a value, or set of values, as an input and produces a value, or set of values, as an output. The trained model can then make evaluations or predictions using data it has not seen before.
Model drift means a situation where the performance of a model becomes unstable or unpredictable due to changes in data and the relationship between the inputs and outputs. Model drift may have a negative impact on individuals or an organisation over time.
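The model drift definition above can be illustrated with a minimal monitoring sketch. The function name, error metric and tolerance below are illustrative assumptions only, not part of this procedure; an AI system at UTS would track whatever performance measure its risk mitigation plan specifies:

```python
def detect_drift(recent_errors, baseline_error, tolerance=0.05):
    """Flag possible model drift: returns True when the average error on
    recent data exceeds the baseline error (measured at deployment) by
    more than the tolerance. All names and the 0.05 tolerance are
    illustrative, not prescribed by this procedure."""
    average_recent_error = sum(recent_errors) / len(recent_errors)
    return average_recent_error > baseline_error + tolerance
```

For example, a model deployed with a 5 per cent baseline error that now averages 11 per cent error on recent data would be flagged for review under statement 4.24.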
Approval information
Procedure contact | Head of Data Analytics and Artificial Intelligence |
---|---|
Approval authority | Chief Data Officer |
Review date | 2025 |
File number | UR23/686 |
Superseded documents | New procedure |
Version history
Version | Approved by | Approval date | Effective date | Sections modified |
---|---|---|---|---|
1.0 | Chief Data Officer | 12/05/2023 | 09/06/2023 | New procedure. |
1.1 | Deputy Director, Corporate Governance (Delegation 3.14.2) | 19/09/2024 | 23/09/2024 | Update to reflect new title of Deputy Vice-Chancellor (External Engagement and Partnerships). |
References
Artificial Intelligence Operations Policy
Equity, Inclusion and Respect Policy
Privacy impact assessments (SharePoint)
RiskConnect (available at Office of General Counsel: Risk (SharePoint))
Risk Management Procedure (SharePoint)
Acknowledgements
Australian Human Rights Commission's Using artificial intelligence to make decisions: addressing the problem of algorithmic bias
New South Wales Government's NSW AI Assessment Framework
Appendix 1: Artificial Intelligence Operations Board
Terms of reference
Purpose
The purpose of the Artificial Intelligence Operations Board (the board) is to oversee the use of AI in UTS, outside of the university’s research endeavours, in accordance with the Artificial Intelligence Operations Policy (the policy). The board is established by the Chief Operating Officer.
Definitions
Artificial intelligence (AI) is defined for UTS in the policy as follows:
UTS uses the New South Wales Government’s AI Assessment Framework definition, which states that AI is ‘intelligent technology, programs and the use of advanced computing algorithms that can augment decision-making by identifying meaningful patterns in data’. AI is used to solve problems autonomously and perform tasks to achieve defined objectives, in some cases without explicit human guidance.
Responsibilities
In line with the policy, the board is responsible for:
- implementing the policy through the management and governance of AI at UTS
- providing strategic advice, oversight and prioritisation of AI by working with business owners to develop an understanding of the university’s AI use (new and existing)
- applying (and then assessing the efficacy of) the New South Wales Government's AI Assessment Framework (the NSW framework) in developing risk assessment, project development and decision-making strategies for the management of AI
- endorsing AI project proposals, or referring them for endorsement to the Chief Operating Officer, before approval is sought as outlined in the policy and the Artificial Intelligence Operations Procedure (the procedure)
- endorsing the use of AI as part of an identified business activity or solution, in advance of either development of the AI system or approval of the AI system in line with the Procurement Policy
- championing approved AI initiatives
- ensuring AI system use and management complies with existing legal requirements (for example, privacy and the General Data Protection Regulation) and consulting internally as appropriate
- considering the ethical and operational impacts of new and existing AI and consulting internally as appropriate
- reviewing operational risk registers/risk mitigation plans in line with the Risk Management Policy and reporting to the University Leadership Team (ULT) on an annual basis or more regularly as required
- identifying and escalating AI risks in line with the protocols set out in the Risk Management Policy
- ensuring appropriate dialogue with the Cybersecurity Steering Committee (refer Information Security Policy)
- recommending appropriate resourcing and advocating for funding (where appropriate) to implement AI priorities identified and to meet the requirements of the policy and/or to comply with legislation
- providing feedback regarding the efficacy of the policy and the procedure.
Membership
The board membership is as follows:
- Chief Operating Officer (or nominee)
- Deputy Vice-Chancellor (Education and Students) (or nominee)
- Deputy Vice-Chancellor (External Engagement and Partnerships) (or nominee)
- Chief Data Officer
- Chief Information Officer (or nominee)
- Head of Data Analytics and Artificial Intelligence
- Nominee of the Pro Vice-Chancellor (Social Justice and Inclusion)
- Executive Director, UTS Data Science Institute (or nominee)
- Student representative
The Chair will be a member of the board who is also a member of ULT. The Chair will be elected or nominated at the first meeting of the calendar year, and the position rotates annually among the ULT members of the board.
The Chair may delegate the position where they cannot attend a meeting.
Meeting frequency
The board will meet at least 4 times per calendar year and as otherwise required.
Quorum
The quorum shall be one-half of the current members of the board; if one-half is not a whole number, the next higher whole number shall be used. Vacant positions on the board do not count toward the total membership.
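The quorum rule above is a simple round-up (ceiling) calculation. As an illustrative sketch (the function name is an assumption, not part of these terms of reference):

```python
import math

def board_quorum(filled_seats: int) -> int:
    """Quorum is half the current (filled) board membership, rounded up
    to the next whole number when half is not whole. Vacant positions
    are excluded before calling this, per the terms of reference."""
    return math.ceil(filled_seats / 2)
```

For the 9 positions listed above, fully filled, the quorum is 5; if one seat were vacant, 8 filled seats would give a quorum of 4.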
Secretariat support
The Data Analytics and Insights Unit will provide secretariat support to the board. The secretariat will distribute meeting papers 3 days before meetings and meeting minutes within 3 days of each meeting.