Artificial intelligence is embraced yet feared by society, businesses and individuals alike. The last thing people want is to live in a world where algorithms create systematic injustices, where artificial intelligence enables full-scale surveillance, or where social media bots taint public debate.
We are awaiting proposed EU legislation on artificial intelligence whilst a number of different initiatives are being discussed – from introducing artificial intelligence and ethics as a compulsory component in technology education programmes to developing professional ethics for programmers of artificial intelligence.
Such professional ethics could, like the Hippocratic oath in the medical profession, help to establish ethical guidelines for individual companies working with artificial intelligence.
In a number of countries, authorities have established what are known as regulatory sandboxes. These sandboxes are a dialogue platform where authorities and enterprises can jointly explore how solutions based on artificial intelligence can be developed in line with the existing regulatory framework. Even without AI-specific legislation in place, data protection regulations, for example, already have a bearing on the development of artificial intelligence solutions.
Although these initiatives are important and interesting, they fail to examine the question of what constitutes good and bad artificial intelligence.
The challenge of developing ethical artificial intelligence is not just about what we want to do or what we are allowed to do. It’s also about what we are able to imagine. We would argue that artificial intelligence operates in a reality of ‘true uncertainty’: a type of uncertainty that is impossible to calculate, that we do not even have the language to describe, and of which we have no awareness. Artificial intelligence is complex and developing rapidly, and it is difficult for users, and beyond a certain point even for programmers, to fully understand how a system works, what decisions it makes and why.
When it comes to artificial intelligence, we develop solutions that we don’t fully understand, and so we don’t understand their implications. That also means that we are unaware of the ethical dilemmas they may bring. We cannot plausibly calculate the possible upsides and downsides of the technology.
Given this true uncertainty, it's simply not enough to appeal to ‘the good in people’. What does good even mean?
It’s also not enough to fill knowledge gaps. What does knowledge even mean?
We therefore need to promote arenas that increase the moral imagination of enterprises. A research article in the scientific journal ‘AI and Ethics’ points to regulatory sandboxes as a possible solution to this problem. These sandboxes were established by the authorities primarily as a means of closing the knowledge gap: they were created to clarify what is and is not in compliance with the law, and to offer innovative enterprises special, dedicated and intensive guidance.
It is an explicit goal of the Norwegian Data Protection Authority's regulatory sandbox for responsible artificial intelligence to help innovative individual players comply with regulations and develop privacy-friendly solutions. This will be achieved by increasing enterprises' understanding of regulatory requirements and of how products and services that are based on artificial intelligence can meet the requirements of data protection regulations in practice.
The sandboxes can also play another, at least equally important, role: reducing uncertainty by providing regulatory clarification on innovative issues. This can be done over the course of four to six months, during which enterprises discuss innovative technological solutions with the Data Protection Authority. By providing an open, creative and co-creative arena, sandboxes can also help enterprises to envisage the possible consequences of their own technology. The biggest challenge associated with developing responsible AI is not that most managers and businesses in the technology industry lack moral principles.
The challenge is that they lack moral imagination.
If the sandboxes succeed in advancing enterprises’ ability to see beyond the obvious, they may also help those enterprises to create better, stronger and more responsible innovation.
These objectives go far beyond the legislative dimension, and in pursuing them the sandboxes can play a key role in promoting responsible innovation.