Developing Industry Standards Won’t Give AI a Conscience
Anyone interested in weighing in on the morality of machines?
~~~~~~~~~~~~~~~~~~~~~~~~~~~
By Matthew Linares, Technical and Publishing Manager with openDemocracy. He is a tinkerer writing and organising around debates in technology and more. He tweets @deepthings. Originally published at openDemocracy
Toyota unveils a self-driving car in 2013. IEEE’s new standards could transform how companies use AI. Image: David Berkowitz (CC BY 2.0)
From the algorithms used by social media platforms to the machine learning that powers home automation, artificial intelligence has quietly embedded itself into our lives. But as the technology grows more advanced and its reach widens, the question of how to regulate it has become increasingly urgent.
The pitfalls of AI are well-documented. Race and gender prejudices have been discovered in a number of systems built using machine learning – from facial recognition software to internet search engines.
Last week, the UK’s Digital, Culture, Media and Sport select committee released its long-awaited fake news report. The committee lays significant blame on Facebook for fuelling the spread of false information, citing the tech giant’s reliance on algorithms over human moderators as a factor. But while governments are now moving to legislate for greater oversight of social platforms, there has been less focus on how we govern the use of AI at large.
A crucial first step could be the development of a set of industry standards for AI. In September, the Institute of Electrical and Electronics Engineers (IEEE), one of the largest industry standards bodies, is due to publish an ethical framework for tech companies, developers and policy makers on building products and services that use AI.
During its long history, the IEEE has produced hundreds of widely used technology standards, including those for wireless internet networks and portable rechargeable batteries. As well as technology, the organisation also develops standards for the health care, transportation and energy industries.
The IEEE’s attempt to develop a set of AI standards might be its most ambitious task to date. As the project nears completion, some of its contributors are already beginning to raise concerns over potential flaws.
digitaLiberties spoke with Marc Böhlen, one of the hundreds of researchers involved, about whether it’s possible to make a set of recommendations for ethical AI.
You have been contributing to the IEEE Ethics in Action process which intends to issue ethical guidelines for anyone working on AI, including engineers, designers and policy makers. What role are you playing in the process?
I am part of the subgroup working on the theme of wellbeing within AI ethics. There are some 1100 people contributing to the effort; I am a very minor cog in a giant machine. This is an ambitious undertaking, and I am in support of the project and its goals. The aim is nothing less than to prioritise benefits to human well-being and the natural environment.
But that lofty goal is overshadowed by questionable procedures and a rush to market. There is substantial pressure to get this large project out the door promptly – too fast for its own good, I contend.
The initiative’s organizers want to be first to market with this document so it can serve as a reference for engineers making AI, for policy makers deploying AI and for corporations selling AI systems. The problem is that it is not yet clear how to make ethical autonomous AI systems. It is an active area of research.
There are really two very different and interwoven aspects of AI ethics. First is the ethics of those who make and use AI systems. Then there is the ethics of the AIs themselves. The former can and should be addressed right now. Engineers, managers and policy makers are expected to act ethically in their professions, and when they make and use AI systems those same expectations should apply.
Too much obfuscation has been generated regarding the special status of AI in this regard. And it so often comes down to basics: be upfront with users about what your AI actually does and the limitations it has.
Could you give me some examples of flaws you see in the project?
Many of the recommendations simply do not live up to the stated goals, in my view. For example, a recommendation on transparency stresses the need to “develop new standards that describe measurable, testable levels of transparency”. The current document mentions a care robot with a “why did you do that?” button that makes the robot explain an action it just performed. While this button might make getting a response from a robot easier, it certainly does not guarantee that the response is helpful. Imagine the robot indifferently stating it “did what it was programmed to do” when asked for an explanation.
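To make that worry concrete, here is a minimal sketch in Python of such a button. All names here are hypothetical illustrations (the IEEE document specifies no implementation); the point is that the explanation interface can exist, and so tick a “measurable, testable” transparency box, while the explanation itself remains empty:

```python
# Hypothetical sketch of the "why did you do that?" button.
# Nothing here is from any IEEE specification.

class CareRobot:
    def __init__(self):
        self.action_log = []  # (action, rationale) pairs recorded at decision time

    def perform(self, action, rationale=None):
        self.action_log.append((action, rationale))

    def why_did_you_do_that(self):
        """Formally satisfies a 'has an explanation interface' checkbox."""
        if not self.action_log:
            return "I have not done anything yet."
        action, rationale = self.action_log[-1]
        if rationale is None:
            # The cheap answer: a response is returned, but it explains nothing.
            return "I did what I was programmed to do."
        # The helpful answer is only possible if a rationale was captured
        # at decision time -- which a transparency checkbox does not mandate.
        return f"I performed '{action}' because {rationale}."

robot = CareRobot()
robot.perform("dimmed the lights")
print(robot.why_did_you_do_that())  # -> "I did what I was programmed to do."
```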
The IEEE knows it is a hard call to assess whether a robot is ethical or not. The approach the IEEE initiative wants to take is not to define ethical actions per se, but to propose a series of tests that are indicative of ethical behavior; a kind of minimum features test that any ethical AI system would be able to pass. In practice, the test will include an impact assessment checklist.
Ticking every item on the checklist constitutes compliance. In exchange, a given AI system gets certified: a seal of approval. That kind of practical oversight is in principle a good idea. But the details really matter. You can check the transparency box off in a cheap way or a deep way. The system has no provision for ensuring quality standards in that compliance.
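To illustrate that gap, here is a minimal Python sketch of a checklist-based certification. The item names and the certify function are invented for illustration; the real IEEE checklist had not been published at the time of this interview:

```python
# Hypothetical sketch of a self-attested impact-assessment checklist.
# The items below are illustrative, not the IEEE's actual criteria.

CHECKLIST = [
    "transparency: system can report why it acted",
    "accountability: a responsible party is named",
    "well-being: impact assessment was performed",
]

def certify(attestations: dict) -> bool:
    """Grants the 'seal of approval' if every box is ticked.

    Nothing here distinguishes a cheap tick (a canned explanation
    string) from a deep one (a genuine decision trace).
    """
    return all(attestations.get(item, False) for item in CHECKLIST)

# A vendor can tick every box with minimal effort and still pass:
shallow_vendor = {item: True for item in CHECKLIST}
print(certify(shallow_vendor))  # -> True: certified, quality unverified
```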
Do you think it’s even possible for a static document to give sufficient detail? Do you think we should be looking to other formats or systems to provide guidance on such a complex and fast-paced field?
I am less concerned with the format of the document than with the disconnect between its ambition and what it delivers and enables. At worst, it could be an ethics whitewashing mechanism that allows less scrupulous actors to quickly comply with vague rules and then bring questionable products and services to market under a seal of approval.
But you are right that a static document will be insufficient. In addition to a standard text document, I would suggest an openly available, continuously updated, multimedia compilation of AI snafus. We should be learning from mistakes.
The project aspires to set industry standards for AI design with a process that might be compared to other multi-stakeholder efforts, such as W3C working groups that guide best practice on Internet engineering. Some of these groups have failed to meet the expectations of key participants in effectively curbing corporate influence. Do you think the IEEE has been able to balance contributions in this case?
This IEEE initiative wants to be inclusive of multiple perspectives. The document will certainly contain a large collection of perspectives.
However, inclusiveness stops there. There is no diversity of goals. The lack of willingness to take a longer view and a cautious stance on the integration of AI into everyday services is much more problematic than the failure to include everyone’s voice in the discussion. In fact, I would consider the excessive focus on ‘inclusive discussion’ almost a smokescreen that then gives license to an uncompromising drive to push an AI-positive industry agenda, with insufficient checks in place to balance the process.
Where else might we look for safeguards? There do seem to be numerous other competing efforts, e.g. the Montréal Declaration for Responsible Development of Artificial Intelligence. What’s our best hope for dealing with the complex issues we face?
The Montréal Declaration also considers AI as a political issue. That is a very important addition to the debate, and one the IEEE doesn’t discuss. Yet the Montréal Declaration suffers from deficiencies similar to the IEEE initiative’s, in that enforcement and oversight are unclear. Who performs the ethical certifications? Is the fox guarding the hen house?
The Organisation for Economic Co-operation and Development (OECD) is working on an AI-related policy framework (https://www.oecd.org/going-digital/C-MIN-2018-6-EN.pdf) on ‘digital transformation’ at large. What the OECD initiative shares with the IEEE project is a clear formulation of indicators across application domains and the consideration of the impact of digital transformation on well-being in general. It also shares some of the vague formulations, unfortunately.
The IEEE’s focus on enabling AI to become marketable means that existing market logic will control AI. In the end, shoddy AI is bad for business, and that constraint might turn out to be a good thing in the current political climate.
How can such a process realistically drive design decisions in the face of what some describe as ‘AI nationalism’, whereby the race for dominance amongst states and subsequent national security interests may trump ethical concerns?
This is a big problem. What good is the most stringent (and possibly restrictive) agreement on ethically aligned AI if some adhere to the principles while others do not? Aligning technology with ethics is not cheap. Some actors certainly will attempt to circumvent costs. If they are successful, the overall project would fail. So there must be a cost associated with non-compliance.
Certainly we will see an ethics compliance industry grow very quickly. It will probably generate excesses similar to those other compliance industries have produced in the recent past. Over time, the excesses might self-adjust to some sort of useful equilibrium, as in the automobile industry, where global safety standards have been implemented to the benefit of most of us.
But compliance will not work everywhere. Imagine ethically aligned robots from one nation fighting an opponent with no such constraints. Only a far superior military robot will have the operational freedom to act ethically against an ethically unconstrained adversary.