Asheesh Mehra, CEO and co-founder, AntWorks

AI has the power to reshape how businesses and governments operate today. Its benefits range from driving increased efficiency and productivity to filling labour gaps, which translates into improved bottom-line performance. It also has the profound potential to help solve some of the greatest challenges facing our world, from combatting climate change to increasing access to healthcare to advancing biometrics.

Yet there are undoubtedly ethical complexities relating to data collection and the AI technologies used to interpret it. Huge investments are being made globally to lay the groundwork for AI, with Harvard Business Review citing estimates that AI will add US$13 trillion to the global economy over the next decade. Countries across the world now have a responsibility to construct a framework upon which the responsible application of AI can be built.

Some people, myself included, refer to that responsibility as ethical AI, and it means that companies and governments worldwide need to be held accountable. There is no doubt that legislators and regulators have a key role to play in AI accountability. That involves specifying the applications for which AI can and cannot be used; AI as a technology in itself, however, should not, in my opinion, be regulated.

Instead, governments should put regulations in place to standardise how AI technology can be used. For example, regulations could specify the purposes and industries for which applying AI is appropriate, while other laws or rules make clear which applications of AI are not allowed.

So how is the approach to ethical AI being played out in different regions of the world? Here are a few examples: 

In Europe

EU Commissioner Thierry Breton recently introduced a new digital strategy for a "Europe fit for the digital age". It includes a white paper on AI and the European Strategy for Data, both of which will inform a new legislative framework.

With the European Commission set to invest EUR 2 billion in the creation of an EU cloud alliance as part of an overarching strategy to boost development of AI technologies, Breton and his colleagues are focused on ensuring that digital technologies work to serve European citizens, and not the other way around.

In Singapore

Singapore has recently released a framework on how AI can be used ethically and responsibly, which businesses in the Republic and elsewhere can adopt as they grapple with the issues this new technology raises.

This model framework for AI governance is a "living document" intended to evolve along with the fast-paced changes of a digital economy. It incorporates feedback from industry and will be refined as more views are gathered.

The framework was released by Mr S. Iswaran, Minister for Communications and Information, at the annual World Economic Forum (WEF) meeting. It is the first in Asia to provide detailed and readily implementable guidance to private sector organisations using AI.

Mr Iswaran described AI as a frontier technology with the potential to affect many sectors in very profound ways. Singapore, he said, wants to be prepared in terms of its capabilities, technology investments and regulations.

Of course, this is not happening in isolation: releasing the framework on a global stage also allows Singapore to invite international feedback on it.

In the Gulf States

Gulf States are going big on AI. Aside from appointing a minister specialising in it—Omar Al-Olama, the UAE’s minister for artificial intelligence—the UAE also plans a university dedicated to all things machine learning. 

Abu Dhabi’s Mohammed bin Zayed University of Artificial Intelligence, named after the UAE capital’s crown prince, is set to offer master’s and PhD programmes from September 2020. In a sign of how serious the country is about AI, students will study at the university for free, with full scholarships, a monthly allowance, and accommodation.

In early 2019, the governmental Smart Dubai initiative, which promotes technology use in the emirate, released its "AI Ethics, Principles and Guidelines," covering areas such as security, humanity, and inclusiveness. Plans are underway to enforce a set of AI guidelines, and a mandatory "full-fledged legislative framework" governing AI is expected in the future, though some argue that codifying it in law could stifle innovation.

Meanwhile, Al-Olama says his country is not interested in “fancy or gimmicky uses of artificial intelligence,” but rather in how the technology can impact real people.

Data, it is said, is going to be the new oil—and AI is going to be the electricity.

The Future of Responsible AI

To fully realise the benefits and true strengths of AI, technology vendors need to come together with governments and regulators to define those specific applications. This is not a time to work in silos; rather, this triumvirate needs to work closely together, consolidating their respective skills, knowledge and resources to drive the future adoption of responsible AI.

Further, organisations building and deploying the engines that deliver AI to enterprises globally need to be held accountable for upholding responsible applications of the technology.

More on AntWorks’ approach to manifesting the power of ethical AI can be found here.