Artificial Intelligence (AI) Transparency Statement
AI offers significant opportunities to improve productivity and service delivery for the Australian community.
The Department of Social Services (DSS) governs our use of AI in line with all relevant laws and regulations, the Digital Transformation Agency’s policy for the responsible use of AI in government (DTA policy), and best practice.
Purpose of AI adoption
We use AI to improve how we work and deliver services. It helps us streamline processes, increase efficiency and provide staff with better tools to serve the community.
We are committed to using AI in ways that serve the public interest. This means improving services, being transparent, and upholding ethical standards. We keep safety and trust at the centre of our AI adoption.
Our current AI use cases
Currently, our approved AI use cases fall within the:
- Service delivery, Policy and legal, and Corporate and enabling domains; and
- Workplace productivity and Analytics for insights usage patterns
of the classification system for AI use.
We currently do not use AI within the Decision making and administrative action or Image processing usage patterns, or the Scientific, Compliance and fraud detection, and Law enforcement, intelligence and security domains.
How our staff use AI
Staff mainly use AI systems for:
- summarising documents, reports and meeting notes into key points
- preparing newsletters, presentations and web content; rewriting drafts for clarity
- creating agendas and recording action items
- generating graphs and dashboards from data; summarising insights
- extracting information and preparing summaries
- drafting and reviewing policy documents for clarity and compliance
- summarising contract documents and identifying compliance risks
- developing training guides and making technical content easy to understand
- summarising risk registers and preparing risk reports with dashboards
- generating transcripts of public hearings and sessions, for example Senate Estimates.
AI systems we use
1. GovAI Multi-Model – we use the GovAI platform as a secure space for staff to test AI models and to support staff learning. This helps build staff capability and supports safe trials.
2. Microsoft Copilot via GovTEAMS – we use Microsoft Copilot in the secure GovTEAMS environment.
3. Microsoft 365 Copilot Chat (DSS Internal) – we use Microsoft 365 Copilot Chat within the DSS controlled environment.
4. Otter AI – we use Otter AI, an AI-powered transcription tool that converts speech into text, for a select group of staff.
DSS AI usage does not include:
- use cases that involve direct interaction with the public
- activities that would significantly affect individuals.
All AI outputs are checked by a human before they are applied or acted on. This ensures technology supports our operations but does not replace human judgment in areas that affect people’s lives.
Our AI governance approach
Current state
We take a risk-based approach to using AI. Before we approve a new AI use case, we assess legal, privacy and operational risks to ensure compliance with all relevant laws and DSS policies.
Our governance arrangements include:
- an accountable official – our Chief Information Officer (CIO)
- a Chief AI Officer (CAIO) to provide strategic leadership and oversight
- an AI Governance Committee chaired by the Chief Operating Officer (COO), and including senior staff from the department’s legal, data, corporate, communications and ICT areas.
The committee’s role includes:
- guiding our AI strategy, principles and policies
- identifying, assessing and managing AI risks and opportunities
- reviewing and approving AI use cases
- monitoring performance and impacts.
Our existing AI policy sets clear expectations for staff and limits how they can use AI in their work. We also use technological controls that restrict access to publicly available AI tools from our ICT environment. This helps ensure AI use is secure, compliant and responsible.
Existing DSS policies also apply to AI use. These include policies on:
- privacy
- confidentiality
- risk management
- procurement
- archiving
- acceptable use of technology
- cyber security.
This approach ensures that our AI use complies with legal and ethical obligations, protects sensitive information, manages risks effectively, and meets requirements for record-keeping and accountability.
We monitor the effectiveness of AI systems through robust governance arrangements, policies, processes and by tracking AI usage. We also engage with staff to understand how AI affects workflows and productivity.
Future state
We are strengthening our governance and oversight to support future AI use and make sure we comply with the DTA policy, other departmental policies, and laws.
We are building an AI governance structure into a mature, scalable framework. This framework will align with Australia’s AI Ethics Principles and whole-of-government standards. It will set out clear principles, roles and processes for responsible AI development, procurement, deployment and oversight.
To ensure accountability, every AI use case will have approved key performance indicators (KPIs) to track outcomes, measure effectiveness and confirm that AI solutions deliver value and meet DSS goals.
Our compliance with the DTA policy
The following table sets out an overview of our compliance with the mandatory requirements of the DTA policy:
| Mandatory requirement | Status | Comments | Due date |
|---|---|---|---|
| AI transparency statement | Compliant | N/A | N/A |
| Strategic position on AI adoption | In development | We are documenting our AI strategic position to guide responsible adoption and use. | 1 June 2026 |
| Accountable officials | Compliant | We have appointed our Chief Information Officer (CIO) as our Accountable Official. | N/A |
| Accountable use case owners | In development | We approved an AI governance structure and are developing and implementing this framework. The structure will guide how we assess, adopt and monitor AI across the department to ensure responsible and effective use. | 1 December 2026 |
| Internal AI use case register | In development | We approved an AI governance structure and are developing and implementing this framework. The structure will guide how we assess, adopt and monitor AI across the department to ensure responsible and effective use. | 1 December 2026 |
| Operationalise the responsible use of AI | In development | We approved an AI governance structure and are developing and implementing this framework. The structure will guide how we assess, adopt and monitor AI across the department to ensure responsible and effective use. | 1 December 2026 |
| Staff training on AI | Compliant | We require staff to complete mandatory AI training, including the AI in Government Fundamentals course. We also encourage further learning through programs offered by the Australian Public Service Commission and the Digital Transformation Agency. | 1 December 2026 |
| Assessment of AI use cases and subsequent treatment | In development | We are developing and implementing a formal AI governance framework. This framework will require all AI use cases to be assessed in accordance with the DTA policy. | 1 December 2026 |
This statement is authorised by DSS’s accountable official, the Chief Information Officer.
This AI Transparency Statement was updated on 22 January 2026.
More information
- Visit the Digital Transformation Agency to read the DTA policy.
- If you have questions or feedback, email AI@dss.gov.au.