Horizon Scanning Series
The effective and ethical development of artificial intelligence: An opportunity to improve our wellbeing
As artificial intelligence (AI) becomes more advanced, its applications will become increasingly complex and will find their place in homes, workplaces and cities.
AI offers broad-reaching opportunities, but its uptake also carries serious implications for human capital, social inclusion, privacy and cultural values, to name a few. These must be considered to ensure responsible deployment.
This project examined the potential that artificial intelligence (AI) technologies have in enhancing Australia’s wellbeing, lifting the economy, improving environmental sustainability and creating a more equitable, inclusive and fair society. Placing society at the core of AI development, the report analyses the opportunities, challenges and prospects that AI technologies present, and explores considerations such as workforce, education, human rights and our regulatory environment.
“By bringing together Australia’s leading experts from the sciences, technology and engineering, humanities, arts and social sciences, this ACOLA report comprehensively examines the key issues arising from the development and implementation of AI technologies, and importantly places the wellbeing of society at the centre of any development”
Professor Hugh Bradlow FTSE, Chair of the Board, ACOLA
Webinar
August 2020
What is artificial intelligence?
Artificial Intelligence (AI) is a collection of interrelated technologies and computational techniques that produce machine-based intelligence. This includes technologies such as computer vision, natural language processing and machine learning. Techniques are all at varying stages of development, but broadly share a set of opportunities and challenges.
If developed and deployed appropriately, AI technologies have the potential to enhance Australia’s wellbeing, lift the economy, improve sustainability and create a more equitable and inclusive society. AI technologies are already having transformative impacts in areas such as health, manufacturing, mining and finance. PwC has estimated that AI could contribute up to US$15.7 trillion to the global economy in 2030. At a national level, it has been estimated that Australia could increase its national income by A$2 trillion by 2030 from productivity gains afforded by automation and AI technologies.
Future opportunities and challenges
As AI technologies mature over the next decade and new opportunities emerge, there is a need to ensure technologies are developed responsibly. A proactive approach to development, deployment and uptake will help realise the social and economic potential of AI in the future, as well as mitigate potential and anticipated challenges and risks.
1. Examine the transformative role that artificial intelligence may play in different sectors of the economy, including the opportunities, risks and challenges that advancement presents.
2. Examine the ethical, legal and social considerations and frameworks required to enable and support broad development and uptake of artificial intelligence.
3. Assess the future education, skills and infrastructure requirements to manage workforce transition and support thriving and internationally competitive artificial intelligence industries.
1. AI offers major opportunities to improve our economic, societal and environmental wellbeing, while also presenting potentially significant global risks, including technological unemployment and the use of lethal autonomous weapons. Further development of AI must be directed to allow well-considered implementation that supports our society in becoming what we would like it to be – one centred on improving prosperity, reducing inequity and achieving continued betterment.
2. Proactive engagement, consultation and ongoing communication with the public about the changes and effects of AI will be essential for building community awareness. Earning public trust will be critical to enable acceptance and uptake of the technology.
3. The application of AI is growing rapidly. Ensuring its continued safe and appropriate development will be dependent on strong governance and a responsive regulatory system that encourages innovation. It will also be important to engender public confidence that the goods and services driven by AI are at, or above, benchmark standards and preserve the values that society seeks.
4. AI is enabled by access to data. To support successful implementation of AI, there is a need for effective digital infrastructure, including data centres and structures for data sharing, that makes AI secure, trusted and accessible, particularly for rural and remote populations. If such essential infrastructure is not carefully and appropriately developed, the advancement of AI and the immense benefits it offers will be diminished.
5. Successful development and implementation of AI will require a broad range of new skills and enhanced capabilities that span the humanities, arts and social sciences (HASS) and science, technology, engineering and mathematics (STEM) disciplines. Building a talent base and establishing an adaptable and skilled workforce for the future will need education programs that start in early childhood and continue throughout working life and a supportive immigration policy.
6. An independently led AI body that brings together stakeholders from government, academia and the public and private sectors would provide a critical mass of skills and institutional leadership to develop AI technologies, promote engagement with international initiatives and develop appropriate ethical frameworks.
Artificial intelligence (AI) offers myriad new opportunities on the one hand and presents global risks on the other. If responsibly developed, AI has the capacity to enhance wellbeing and provide benefits throughout society. Significant public and private investment globally has been directed toward the development, implementation and adoption of AI technologies. In response to these advancements, several countries have developed national strategies to guide competitive advantage and leadership in the development and regulation of AI technologies. The rapid advancement of, and investment in, AI technologies has been popularly referred to as the ‘AI race’.
Strategic investment in AI development is considered crucial to future national growth. As with other stages of technological advancement, such as the industrial revolution, developments are likely to be shared and adopted to the benefit of nations around the world.
Predictions of the potential benefits of AI technologies are juxtaposed with narratives that anticipate global risks. To a large extent, these divergent views exist because the capacity, application, uptake and associated impact of AI technologies remain uncertain. However, extreme optimism or pessimism is of limited use in addressing the wide-ranging and, perhaps, less obvious impacts of AI. While discussions of AI inevitably occur within the context of these extreme narratives, this report seeks to give a measured and balanced examination of the emergence of AI, as informed by leading experts.
What is known is that the future role of AI will be ultimately determined by decisions taken today. To ensure that AI technologies provide equitable opportunities, foster social inclusion and distribute advantages throughout every sector of society, it will be necessary to develop AI in accordance with broader societal principles centred on improving prosperity, addressing inequity and continued betterment. Partnerships between government, industry and the community will be essential in determining and developing the values underpinning AI for enhanced wellbeing.
Artificial intelligence can be understood as a collection of interrelated technologies used to solve problems that would otherwise require human cognition. Artificial intelligence encompasses a number of methods, including machine learning (ML), natural language processing (NLP), speech recognition, computer vision and automated reasoning. Developments within the field of AI have already advanced far enough to impact Australia. Even if no further advances are made, it will remain necessary to address the economic, societal and environmental changes they bring.
While AI may cause short-term to medium-term disruption, it has the potential to generate long-term growth and improvement in areas such as agriculture, mining, manufacturing and health, to name a few. Although some of the opportunities for AI remain on the distant horizon, this anticipated disruption will require a measured response from government and industry and our actions today will set a course towards or away from these opportunities and their associated risks.
Development, implementation and collaboration
AI is enabled by data, and thus by access to data. Data-driven experimental design, execution and analysis are spreading throughout the sciences, social sciences and industry sectors, creating new breakthroughs in research and development. To support successful implementation of the advances of AI, there is a need for effective digital infrastructure to diffuse AI equitably, particularly to rural, remote and ageing populations. A framework for generating, sharing and using data in a way that is accessible, secure and trusted will be critical to support these advances. Data monopolies are already emerging, and there will be a need to consider enhanced legal frameworks around the ownership and sharing of data. Frameworks must include appropriate respect and protection for the full range of human rights that apply internationally, such as privacy, equality, Indigenous data sovereignty and cultural values. If considerations such as these are not addressed carefully and appropriately, the development of AI and the benefits it may bring could be inhibited. With their strong legal frameworks for data security and intellectual property and their educated workforces, both Australia and New Zealand could make ideal testbeds for AI development.
New techniques of machine learning are spurring unprecedented developments in AI applications. Next-generation robotics promise to transform our manufacturing, infrastructure and agriculture sectors; advances in natural language processing are revolutionising the way clinicians interpret the results of diagnostic tests and treat patients; chatbots and automated assistants are ushering in a new world of communication, analytics and customer service; unmanned autonomous vehicles are changing our capacities for defence, security and emergency response; intelligent financial technologies are establishing a more accountable, transparent and risk-aware financial sector; and autonomous vehicles will revolutionise transport.
While it is important to embrace these applications and the opportunities they afford, it will also be necessary to recognise potential shortcomings in the way AI is developed and used. It is well known, for example, that facial recognition technologies have often been inaccurate and can replicate the underlying biases of the human-encoded data they rely upon; that AI relies on data that can be, and has been, exploited for ethically dubious purposes, leading to social injustice and inequality; and that while the impact of AI is often described as ‘revolutionary’ and ‘impending’, there is no guarantee that AI technologies such as autonomous vehicles will have their intended effects, or that their uptake in society will be inevitable or seamless. Equally, the shortcomings associated with current AI technologies need not remain permanent limitations. In some cases, these are the teething problems of a new technology, as seen in the improvement of facial recognition accuracy over just a few years. Nefarious and criminal use is likewise a risk associated with all technological developments, not AI alone; indeed, AI technologies could be applied to oppose such misuse. For these reasons, there will be a need to be attuned to the economic and technological benefits of AI, and also to identify and address potential shortcomings and challenges.
Interdisciplinary collaboration between industry, academia and government will bolster the development of core AI science and technologies. National, regional and international effort is required across industry, academia and governments to realise the benefits promised by AI. Australia and New Zealand would be prudent to actively promote their interests and invest in their capabilities, lest their societies be shaped by decisions made abroad. These efforts will need to draw on the skills not only of AI developers, but also legal experts, social scientists, economists, ethicists, industry stakeholders and many other groups.
Employment, education and access
While there is much uncertainty regarding the extent to which AI and automation will transform work, it is undeniable that AI will have an impact on most work roles, even those that, on the surface today, seem immune from disruption. As such, there will be a need to prepare for change, even if change does not arrive as rapidly or dramatically as is often forecast.
The excitement relating to the adoption and development of AI technologies has produced a surge in demand for workers in AI research and development. New roles are being created and existing roles augmented to support and extend the development of AI, but demand for skilled workers, including data scientists, is outstripping supply. Training and education for this sector are subsequently in high demand. Tertiary providers are rapidly growing AI research and learning capabilities. Platform companies such as Amazon (Web Services) and Google are investing heavily in tools for self-directed AI learning and reskilling. A robust framework for AI education – one that draws on the strengths of STEM and HASS perspectives, that cultivates an interest in AI from an early age and that places a premium on encouraging diversity in areas of IT and engineering – can foster a generation of creative and innovative AI designers, practitioners and consultants, as well as an informed society. Students from a diverse range of disciplines such as chemistry, politics, history, physics and linguistics could be equipped with the knowledge and knowhow to apply AI techniques such as ML to their disciplines. A general, community-wide understanding of the basic principles of AI – how it operates, and what its main capabilities and limitations are – will be necessary as AI becomes increasingly prevalent across all sectors. The demand for AI skills and expertise is leading to an international race to attract AI talent. Australia and New Zealand can take advantage of this by positioning themselves as world leaders in AI research and development, through strategic investment as well as recognition of the areas of AI application where the countries can, and currently do, excel.
Although AI research and development will become an increasingly important strategic national goal, a larger – and perhaps more significant – goal is to ensure that existing workforces feel prepared for the opportunities and challenges associated with the broad uptake of AI. This will mean ensuring workers are equipped with the skills and knowledge necessary to work with and alongside AI, and that their sense of autonomy, productivity and wellbeing in the workplace is not compromised in the process. Education should emphasise not only the technical competencies needed for the development of AI, but also the human skills, such as emotional literacy, that will become more important as AI becomes better at particular tasks. In the short to medium term, the implementation of AI may require novel ways of working, and it will be important to ensure that workers are comfortable with this.
To ensure the benefits of AI are equitably dispersed throughout the community, principles of inclusion should underpin the design of AI technologies. Inclusive design and universal access are critical to the successful uptake of AI. Accessible design will facilitate the uptake and use of AI by all members of our community and provide scope to overcome existing societal inequalities. If inclusion is a core component of their design, AI systems can facilitate beneficial integration between humans and AI in decision-making systems. To achieve this, the data used in AI systems must be inclusive. Much of society will need to develop basic literacies in AI systems and technologies – which will involve understanding what AI is capable of, how AI uses data, the potential risks of AI and so on – in order to feel confident engaging with AI in their everyday lives. Massive Open Online Courses (MOOCs) and micro-credentials, as well as free resources provided by platform companies, could help achieve this educational outcome.
Regulation, governance and wellbeing
Effective regulation and governance of AI technologies will require involvement of, and work by, all thought-leaders and decision makers, and will need to include the participation of the public, communities and stakeholders directly impacted by the changes. Political leaders are well placed to guide a national discussion about the future society envisioned for Australia. Policy initiatives must be coordinated in relation to existing domestic and international regulatory frameworks. An independently led AI body drawing together stakeholders from government, industry and the public and private sectors could provide institutional leadership on the development and deployment of AI. For comparison, the Australian Communications and Media Authority regulates the communications sector with a view to maximising economic and social benefits for both the community and industry.
Traditional measures of success, such as GDP and the Gini coefficient (a measure of income inequality), will remain relevant in assessing the extent to which the nation is managing the transition to an economy and a society that takes advantage of the opportunities AI makes available. These measures can mask problems, however, and innovative measures of subjective wellbeing may be necessary to better characterise the effect of AI on society. Such measures could include the OECD Better Life Index or other indicators such as the Australian Digital Inclusion Index. Measures like the triple bottom line may need to be adapted to measure success in a way that makes the wellbeing of all citizens central.
Ensuring that AI continues to be developed safely and appropriately for the wellbeing of society will be dependent on a responsive regulatory system that encourages innovation and engenders confidence in its development. It is often argued that AI systems and technologies require a new set of legal frameworks and ethical guidelines. However, existing human rights frameworks, as well as national and international regulations on data security and privacy, can provide ample scope through which to regulate and govern much of the use and development of AI systems and technologies. Updated competition policies could account for emerging data monopolies. We should therefore apply existing frameworks to new ethical problems and make modifications only where necessary. Much like the debates occurring on AI’s impact on employment, the governance and regulation of AI are subject to a high degree of uncertainty and disagreement. Our actions in these areas will shape the future of AI, so it is important that decisions made in these contexts are not only carefully considered, but that they align with the nation’s vision for an AI-enabled future that is economically and socially sustainable, equitable and accessible for all, strategic in terms of government and industry interests, and places the wellbeing of society at the centre. The development of regulatory frameworks should facilitate industry-led growth and seek to foster innovation and economic wellbeing. Internationally coordinated policy action will be necessary to ensure the authority and legitimacy of the emerging body of law governing AI.
A national framework
The safe, responsible and strategic implementation of AI will require a clear national framework or strategy that examines the range of ethical, legal and social barriers to, and risks associated with, AI; allows areas of major opportunity to be established; and directs development to maximise the economic and social benefits of AI. The national framework would articulate the interests of society, uphold safe implementation, be transparent and promote wellbeing. It should review the progress of similar international initiatives and the outcomes of their investments to identify potential opportunities and challenges on the horizon. Key actions could include:
- Educational platforms and frameworks that are able to foster public understanding and awareness of AI
- Guidelines and advice for procurement, especially for public sector and small and medium enterprises, which informs them of the importance of technological systems and how they interact with social systems and legal frameworks
- Enhanced and responsive governance and regulatory mechanisms to deal with issues arising from cyber-physical systems and AI through existing arbiters and institutions
- Integrated interdisciplinary design and development requirements for AI and cyber‑physical systems that have positive social impacts
- Investment in the core science of AI and translational research, as well as in AI skills.
An independent body could be established or tasked to provide leadership in relation to these actions and principles. This central body would support a critical mass of skills and could provide oversight in relation to the design, development and use of AI technologies, promote codes of practice, and foster innovation and collaboration.
Expert Working Group
Drawing on its established ability to deliver interdisciplinary, evidence-based research with specialist expertise from Australia’s Learned Academies, ACOLA convened the Artificial Intelligence Expert Working Group (EWG) to guide the development of a targeted study that draws input from several disciplines to create a well-considered, balanced and peer-reviewed report. The role of the EWG is to provide strategic oversight as well as expert input, analysis and provocative thinking.
Authors
Professor Toby Walsh FAA | Professor Neil Levy FAHA |
Professor Genevieve Bell FTSE | Professor Anthony Elliott FASSA |
Professor James Maclaurin | Professor Iven Mareels FTSE |
Professor Fiona Wood AM FAHMS |
Supported by Dr Alexandra James, Dr Benjamin Nicoll, Dr Marc Rands, Michelle Steeper, Dr Lauren Palmer and the generous contributions of many experts throughout Australia, New Zealand and internationally as acknowledged throughout the report. A full list of contributors can be found in the written submissions section of the report.
Peer Reviewers
This report has been reviewed by an independent panel of experts. Members of this review panel were not asked to endorse the Report’s conclusions and findings. The Review Panel members acted in a personal, not organisational, capacity and were asked to declare any conflicts of interest.
ACOLA gratefully acknowledges their contribution.
Professor Nikola Kasabov FRSNZ | Emeritus Professor Russel Lansbury AO FASSA |
Professor Huw Price FBA FAHA |
Project Management
Dr Lauren Palmer | Dr Angus Henderson |
Project Funding and Support
This project has been kindly supported by the Australian Government through the National Science and Technology Council and Office of the Chief Scientist, with funding received from the Australian Research Council (project number CS170100008); the Department of Industry, Innovation and Science; and the Department of the Prime Minister and Cabinet.
ACOLA also gratefully acknowledges the contribution of our project stakeholders: the Office of the Chief Scientist; the Department of Industry, Innovation and Science; the Department of the Prime Minister and Cabinet; the Australian Human Rights Commission; and Data61.
Report Acknowledgements
ACOLA and the Expert Working Group offer their sincere gratitude to the many experts from Australia, New Zealand and further afield who have extensively contributed to this report by way of input papers. The contributions and expertise of these experts have helped shape and develop the final report. Further information on these contributions can be found in ‘evidence gathering’.
We also gratefully acknowledge the expertise and contributions from our project stakeholders. In particular, we would like to acknowledge Dr Alan Finkel, Sarah Brown, Dr Adam Wright and Dr Kate Boston from the Office of the Chief Scientist; Edward Santow and the Australian Human Rights Commission; Adrian Turner and Data61. We also thank our peer reviewers for the time and effort they have provided in reviewing the report.
We would particularly like to thank the Australian Research Council (project number CS170100008), the Department of Industry, Innovation and Science, and the Department of the Prime Minister and Cabinet for their financial support and contributions to the project.
Our thanks to the EWG, who put a great deal of time, effort and insight into coordinating the report’s conceptualisation and production, and also to the ACOLA team, in particular Dr Lauren Palmer, Dr Angus Henderson, Dr Alexandra James, Dr Benjamin Nicoll, Michelle Steeper and Dr Marc Rands (Royal Society of Te Apārangi), who made significant contributions to supporting the EWG and managing the project.
Acknowledgement of Country
ACOLA acknowledges the Traditional Owners and custodians of the lands on which our company is located and where we conduct our business. We pay our respects to Elders past, present and emerging.
Agriculture (Australia) (PDF), Professor John Billingsley
Agriculture (Australia) (PDF), Professor Salah Sukkarieh
Agriculture (New Zealand) (PDF), Professor Mengjie Zhang
AI and Trade (PDF), Ziyang Fan and Dr Susan Aaronson
Appeal Algorithmic Decisions (PDF), Anne Matthew, Dr Michael Guihot and Associate Professor Nic Suzor
Arts and Culture (PDF), Dr Thomas Birtchnell
Data Collection, Consent and Use (PDF), Associate Professor Lyria Bennett Moses and Amanda Lo
Data Integrity, Standards and Ethics (PDF), Data61
Data Storage and Security (PDF), Dr Vanessa Teague and Dr Chris Culnane
Defence, Security and Emergency Response (PDF), Dr Adam Henschke
Defence, Security and Emergency Response (PDF), Professor Seumas Miller
Defence, Security and Emergency Response (PDF), Dr Reuben Steff and Dr Joe Burton
Disability (PDF), Dr Sean Murphy and Dr Scott Hollier
Discrimination and Bias (PDF), Professor James Maclaurin and Dr John Zerilli
Economic and Social Inequality (PDF), Professor Greg Marston and Dr Juan Zhang
Economic and Social Inequality (PDF), Nik Dawson
Education and Training (PDF), Professor Rosemary Luckin
Education and Training (pt 2) (PDF), Professor Rosemary Luckin
Employment and the Workforce (PDF), Alexander Lynch on behalf of Google Australia
Employment and the Workforce (PDF), Dr Ross Boyd
Employment and the Workforce (PDF), Professor Robert Holton
Energy (PDF), Sylvie Thiebaux
Environment (PDF), Professor John Quiggin
Environment (PDF coming soon), Professor Iven Mareels
Ethics, Bias and Statistical Models (PDF), Dr Oisín Deery and Katherine Bailey
Fake News (PDF), Professor Neil Levy
Finance (PDF), Dr Mark Lawrence
FinTech (PDF), Koren O’Brien
FinTech (PDF), Professor Mark Pickering and Dr Dimitrios Salampasis
FinTech (PDF), Westpac Technology
GDPR and Regulation (PDF), Nick Abrahams and Monique Azzopardi on behalf of Norton Rose Fulbright
Geopolitics (PDF), Adjunct Professor Nicholas Davis and Dr Jean-Marc Rickli
Government (PDF), 3A Institute led by Robert Hanson
Global Governance (PDF), Professor Dr Andrea Renda
Health and Aged Care (PDF), Associate Professor Federico Girosi
Health and Aged Care (PDF), Professor Bruce MacDonald, Associate Professor Elizabeth Broadbent and Dr Ho Seok Ahn
Human AI Relationship (PDF), Professor Hussein Abbass
Human Autonomy in AI Systems (PDF), Professor Rafael Calvo, Dorian Peters and Professor Richard Ryan
Human Rights (Australia) (PDF), Australian Human Rights Commission
Human Rights (New Zealand) (PDF), Joy Liddicoat
Inclusive Design (PDF), Dr Manisha Amin and Georgia Reid
Indigenous Data Sovereignty (PDF), Professor Maggie Walter and Professor Tahu Kukutai
Indigenous Peoples (PDF), Associate Professor Ellie Rennie
Information Privacy (PDF), Associate Professor Mark Burdon
Legal and Ethical Issues (PDF), Herbert Smith Freehills
Legal Services (PDF), Professor Julian Webb, Associate Professor Jeannie Paterson, Annabel Tresise and Associate Professor Tim Miller
Liability and Algorithmic Decisions (PDF), Gary Lea
Liability (PDF), Dr Olivia Erdélyi and Dr Gábor Erdélyi
Machine Learning (PDF), Professor Robert C Williamson
Machine Learning (PDF), Professor Anton van den Hengel
Mining (PDF), Chris Goodes, Adrian Pearce and Peter Scales
Natural Language Processing (PDF), Professor Tim Baldwin and Professor Karin Verspoor
Privacy and Surveillance (PDF), Joy Liddicoat and Vanessa Blackwood
Psychological and Counselling Services (PDF), Professor Mike Innes
Public Communications (PDF), Professor Mark Alfano
Quantum Machine Learning (PDF), Professor Lloyd Hollenberg
Regulation (PDF), Dr Olivia Erdélyi
Re-identification of Anonymised Data (PDF), Dr Ian Oppermann
Robotics (PDF), Professor Dr Alberto Elfes, Dr Elliot Duff, Dr David Howard, Fred Pauling, Dr Navinda Kottege, Dr Paulo Borges and Dr Nicolas Hudson
SMEs and Start-ups (PDF), Tiberio Caetano and Andrew Stead
Training the Next Generation of AI Researchers (PDF), Professor Mark Reynolds
Transformations of Identity (PDF coming soon), Professor Anthony Elliott
Transformations of Identity (PDF), Dr Eric Hsu and Dr Louis Everuss
Transport and Mobility (PDF), Associate Professor David Bissell
Transport and Mobility (PDF), Dr Malene Freudendal-Pedersen and Robert Martin
Transport and Mobility (PDF), Michael Cameron
Transport and Mobility (PDF), Professor Dr Sven Kesselring, Eriketti Servou and Dr Dennis Zuev
Trust (PDF), Associate Professor Reeva Lederman
Trust and Accessibility (PDF), Professor Mark Andrejevic
Universal Design (PDF), Dr Jane Bringolf
Work Design (PDF), Professor Sharon Parker