Microsoft held its Future Decoded user conference in London on November 1, attended by an estimated 15,000 customers, partners, and other stakeholders. In the words of Microsoft UK CEO Cindy Rose, this large industry get-together was “all about AI.”

But not just AI: ethical AI. There was a session on ethical AI with Lord Clement-Jones, Viscount Ridley, and others, following on from the influential House of Lords report on AI; another on ethical AI in defence; one on healthcare; plus a special analyst Q&A on AI and ethics with Hugh Milward, Microsoft Director of Corporate and External Affairs. In his keynote, Microsoft CEO Satya Nadella cited AI and ethics as one of the three biggest challenges facing developers, alongside privacy and security. He said that Microsoft is doing “state-of-the-art work … so you can deploy models that are fair, robust, transparent.” Microsoft has also established an Ethics Board to advise on its own products and use of AI.

These are just the latest signs that AI is driving a major conversation in the industry, among businesses and consumers, about the ethical use of IT. And this conversation is largely being driven from Europe, which is building on its thinking around privacy (GDPR), since consumers’ personal data is the basis for so many AI implementations today. Earlier this year, President Macron, launching a multi-billion-euro AI strategy for France, emphasized the need to develop a “European” approach to AI, one that balances innovation with ethics. The UK Parliament’s House of Lords has made similar noises about the need for AI to police itself, framing ethics not just as a threat but as an opportunity to create a more desirable AI offering. A grassroots AI initiative called CLAIRE, started by scientists in the Netherlands earlier this year, has attracted support from well over a thousand European scientists and AI luminaries. Its goal is to strengthen AI research and innovation in Europe, and it heavily emphasizes a European approach built around “human-centered AI.”

It’s good to see the aspirations of governments and the good intentions of the industry, and Microsoft deserves a shout-out for taking a leading position on the vendor side. But what will really bring this into sharp focus is a court case. It surely can’t be long before someone takes legal action over a company’s use of AI.

Likely scenarios include someone turned down for a loan or insurance by an algorithm; worse, someone who feels they’ve been unfairly discriminated against in a recruitment process; or, worst of all, someone who feels they’ve been given the wrong medical treatment, or none at all, thanks to AI. It could also be an accident or hold-up caused by a maintenance problem traceable to an AI algorithm. IDC believes it’s likely that a high-profile case of this sort will be brought during 2019, and that it’s most likely to be in Europe if it revolves around the use of personal data.

To make sure your organization is not on the receiving end of such a case, with the attendant brand damage and the potential cost of rectification or compensation, we advise considering the establishment of an Ethics Board to oversee the use of AI across your organization. At the least, the following should be in place:

  • On all AI projects/deployments involving personal data, ensure that someone reporting directly to the CIO, chief data officer, or CISO is made responsible for the risk and governance aspects of the use of AI techniques.
  • Be able to demonstrate that your data, and your use of it, is free of bias, or at least that you’ve made every reasonable effort to make it so (a minimal example of such a check follows this list).
  • Prepare risk assessments and damage responses in the event of something going wrong with your use of AI.
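To make the second point concrete, here is a minimal sketch of one such “reasonable effort”: an automated check of algorithmic decisions against the four-fifths (disparate impact) rule, a common heuristic for spotting group-level bias in outcomes. The group labels, sample data, and 0.8 threshold are illustrative assumptions for this sketch, not IDC guidance, a legal standard of proof, or any vendor’s API.

    from collections import defaultdict

    def disparate_impact(decisions, threshold=0.8):
        """Flag groups whose selection rate falls below threshold times
        the best-treated group's rate (the "four-fifths rule").

        decisions: iterable of (group_label, approved) pairs.
        Both the 0.8 threshold and the sample data below are
        illustrative assumptions, not a legal standard of proof.
        """
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in decisions:
            totals[group] += 1
            approvals[group] += bool(approved)

        # Selection rate per group, then compare against the best-treated group.
        rates = {g: approvals[g] / totals[g] for g in totals}
        best = max(rates.values())
        flagged = {g: r for g, r in rates.items() if r < threshold * best}
        return rates, flagged

    # Hypothetical loan decisions: (applicant group, approved?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates, flagged = disparate_impact(sample)
    print("selection rates:", rates)              # A ≈ 0.67, B ≈ 0.33
    print("possible disparate impact:", flagged)  # B flagged: 0.33 < 0.8 * 0.67

A check like this is no substitute for a proper governance process, but run routinely over decision logs it gives an Ethics Board concrete evidence that bias is being monitored rather than assumed away.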
