The ethics of big technology companies holding onto swaths of the world’s data is at the forefront of many debates right now, including in the development of AI.
Tech giants with control over masses of people’s data must not be allowed to become overly powerful in the race to build artificially intelligent systems, said a report published by the UK’s Parliament on Monday.
The report’s publication follows revelations last month that Facebook improperly allowed user data to be harvested by third parties, including political ad consultancy Cambridge Analytica, resulting in widespread scrutiny of data use by the tech industry and the academic world. The report makes no mention of Cambridge Analytica or Facebook, but it arrives at a time when the public is more aware than ever of how much data big tech companies hold on them and the potential for its misuse.
In the report, the House of Lords committee on AI called for the government, along with the Competition and Markets Authority, to review the use and potential monopolization of data by big technology companies operating in the UK.
“We have heard considerable evidence that the ways in which data is gathered and accessed needs to change, so that innovative companies, big and small, as well as academia, have fair and reasonable access to data, while citizens and consumers can protect their privacy and personal agency in this rapidly evolving world,” said the report.
The House of Lords, Parliament’s upper chamber, started looking into AI in July. The purpose of the enquiry was to establish the possible effects — both positive and negative — of AI in the UK, as well as the country’s potential to be a leader in the field.
One major outstanding issue it identified was whether existing laws are equipped to deal with AI when it goes wrong.
“There is no consensus regarding the adequacy of existing legislation should AI systems malfunction, underperform or otherwise make erroneous decisions which cause harm,” it said.
The committee recommended that the Law Commission investigate this issue.
AI and ethics
The ethical implications of AI were a primary focus of the committee’s report. It published its own AI ethics code, which it would like to see form the basis of more widely used guidelines, including the following five recommendations:
- Artificial intelligence should be developed for the common good and benefit of humanity
- Artificial intelligence should be fair and easy to comprehend
- Artificial intelligence should not be used to diminish people’s data rights or privacy
- All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence
“An ethical approach ensures the public trusts this technology and sees the benefits of using it,” said Lord Clement-Jones, the committee chair. “It will also prepare them to challenge its misuse.”
Brandon Purcell, principal analyst at Forrester, described the report as “a good start,” and said the politicians had considered many of the ethical implications of AI. “Their awareness of AI’s propensity to learn and reinforce the ‘prejudices of the past’ is encouraging,” he said.
But he also warned that data scientists on the front line of AI development need to understand how to prevent bias from creeping into machine learning algorithms.
“At the end of the day, machine learning excels at detecting and exploiting differences between people,” said Purcell. “Companies will need to refresh their own core values to determine when differentiated treatment is helpful, and when it is harmful.”