Data Protection and Artificial Intelligence Law: Europe Australia Singapore - An Actual or Perceived Dichotomy
American Journal of Science, Engineering and Technology
Volume 4, Issue 4, December 2019, Pages: 55-65
Received: Nov. 1, 2019; Accepted: Nov. 27, 2019; Published: Dec. 5, 2019
Robert Walters, Lecturer, Victoria Law School, Victoria University, Melbourne, Australia
Matthew Coghlan, Associate, Asian Law Centre, Faculty of Law, University of Melbourne, Melbourne, Australia
Artificial Intelligence (AI) is moving so rapidly that policy makers, regulators, governments and the legal profession are struggling to keep up. Yet AI is not new; it has been in use for more than two decades. The challenges that AI, personal data and cyber security law pose to current legal frameworks are immense: these areas of law are, in part, at odds with one another and serve very different purposes. This paper explores some of the challenges emerging in Australia, Europe and Singapore. The interrelationship between personal data and AI arguably begins with the question of who manufactured the AI and, secondly, who owns it. A further challenge is defining AI. Most people understand broadly what AI is and how it is beginning to affect the economy and daily life, but there is no clear legal definition, because AI is so nebulous. This burgeoning area of law will challenge society, privacy and economic experts, regulators and technology innovators, as their interests continue to collide. Furthermore, the collection of personal data by AI complicates the question of where responsibility lies: AI may collect, use and disclose personal data at different points along the technology chain. The paper highlights how current data protection laws, rather than promoting AI projects, largely inhibit their development, and it identifies some of the tensions between data protection law and AI. It argues that an urgent and detailed understanding of the opportunities and the legal and ethical issues associated with data protection and AI is needed, in order to maintain an ongoing balance between the economic and social interests attached to the two areas of law.
Artificial Intelligence, Data Protection, Australia, European Union, Singapore
To cite this article
Robert Walters, Matthew Coghlan, Data Protection and Artificial Intelligence Law: Europe Australia Singapore - An Actual or Perceived Dichotomy, American Journal of Science, Engineering and Technology, Vol. 4, No. 4, 2019, pp. 55-65. doi: 10.11648/j.ajset.20190404.11
Copyright © 2019 Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.