
This week's Analytics Revolution blog is contributed by Katie Gibbs, Founding Partner and Head of Consulting at Emergence Partners. It carries on from our recent series on Machine Learning and addresses its many ethical implications.

Artificial intelligence developers and designers are creating systems that could impact millions of people, yet their processes too often treat ethics as an afterthought.

When oversights are made in the early design and build stage, hidden ethical and operational costs reveal themselves further down the line. Take the recent case of the UK’s AI decision-making system used for processing visa applications. Since rollout, the AI tool has been dismissed by campaigners as 'speedy boarding for white people'; Priti Patel has since pledged a new visa system that will consider 'unconscious bias'. This is an example of a lack of rigour in the early design process, and now the UK government must bear the cost of scrapping and redesigning the system.

For another high-profile example, look no further than Amazon’s failed recruitment engine. Amazon’s machine learning specialists discovered early in implementation that the tool was showing bias against women. Again, a lengthy and costly project was scrapped due to a lack of ethical foresight and a less-than-rigorous review of the data.

Achieving ethical harmony in an AI-powered system is much more than a box-ticking exercise. There are blanket universal values we can all agree upon, for example the EU’s ethics guidelines for trustworthy AI. But in trying to uphold these values, has there been too much focus on high-level principles, and not enough on the specifics of AI design? Even this latest MIT Sloan piece on best-practice AI strategy concedes that AI ethics are “not necessarily a common component”.

Getting a handle on the specific ethical quandaries associated with your stakeholders takes more than lip service to basic human rights. This is work that must be put in upfront – otherwise the flaws in your design will come back to bite you.

Encoding your code of ethics

AI can bring vast efficiency gains by enabling firms and employees to make sense of large amounts of data. But this can all be undone by reputational damage, regulatory breaches and canned projects caused by crucial ethical oversights.

There is far more to think about here than merely compliance or legality. Julian Wheatland, COO of Cambridge Analytica, has said of the scandal involving the third-party harvesting of Facebook data that the company’s biggest mistake was believing that complying with government regulations was enough, while ignoring broader questions of data ethics, bias and public perception.

AI design has to go deeper than compliance or high-level principles, to the very root of your company, your values and the industry in which you operate.

This means asking yourself some vital questions at the design stage.

Are you designing your solution with diversity in mind?

In putting your project together, it’s vital that you prioritise diversity, not only of personnel working on the project but also of skills and expertise. The ability to design intuitive interfaces, empathise and collaborate with users and leaders, and clearly communicate changes are critical qualities for a successfully scaled AI project. Your project team should be composed of service designers as well as technical staff, and your Product Owner should be from your business, or at least possess a deep understanding of it. An unbalanced or overly technical project team is likely to produce narrow design specifications, which are more likely to harbour ethical flaws.

Equally important is the diversity of the environment in which you’re building. A homogeneous team is far less likely to build a tool that accounts for a diverse set of potential outliers. Accounting for diversity amongst end users is often where a project is caught out; for example, the team that designed the ill-fated Apple credit card tested their tool against the source data, but they came unstuck because they failed to test against diverse scenarios to check for inherent bias that may have been built in. Acceptance criteria need to test for all the different user types – in this case, the historic source data was weighted against women, and the tool accurately reflected and perpetuated this bias. There is such a thing as too much emphasis on the accuracy of testing against a dataset, at the expense of accounting for the outliers, which is where the real damage is usually done.
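
To make that concrete, here is a minimal sketch of the kind of acceptance check that could sit alongside the usual accuracy tests: it compares a model's approval rate across user groups and blocks a release when the gap is too wide. The column names, toy data and 10-point threshold are assumptions for illustration, not details from the Apple Card case.

```python
# Illustrative sketch only: does the share of positive decisions differ
# markedly between user groups? Data, columns and threshold are hypothetical.
import pandas as pd

def approval_rate_by_group(decisions: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of positive decisions per user group."""
    return decisions.groupby(group_col)["approved"].mean()

def flag_disparity(rates: pd.Series, max_gap: float = 0.10) -> bool:
    """Flag if any two groups differ by more than `max_gap` (a demographic parity gap)."""
    return (rates.max() - rates.min()) > max_gap

# Example acceptance test on a tiny synthetic set of decisions.
test_decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0],
    "gender":   ["f", "f", "m", "m", "f", "f", "m", "m"],
})
rates = approval_rate_by_group(test_decisions, "gender")
if flag_disparity(rates):
    print(f"Release blocked: approval rates differ across groups: {rates.to_dict()}")
```

Run as part of acceptance testing, a check like this makes "test for all user types" an executable requirement rather than an aspiration.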

Are you properly investigating your data models?

When the Apple credit card service went live, the bias in the training data was exposed. With more rigorous pre-launch testing, using a framework that accounted for varied user groups, this bias would have been revealed without the PR disaster that followed.

Bias often creeps in where training data is unrepresentative of the wider world, or through data that is inherently biased in the first place. Naïvely training algorithms on “convenience samples” of data can result in the human biases in that data being encoded and reinforced. Data scientists may do much of the leg work, but it’s up to everyone participating in an AI project to actively guard against bias in data selection.
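
As a rough illustration of guarding against a “convenience sample”, the sketch below compares category proportions in a training set against an assumed reference population and flags large gaps. The column name, reference figures and 5-point tolerance are invented for the example.

```python
# A minimal sketch: is the training data representative of the population the
# system will actually serve? Reference proportions are assumed for illustration.
import pandas as pd

def representation_gaps(train: pd.DataFrame, column: str,
                        reference: dict[str, float]) -> dict[str, float]:
    """Difference between each category's share in the training data and the reference population."""
    observed = train[column].value_counts(normalize=True)
    return {cat: observed.get(cat, 0.0) - share for cat, share in reference.items()}

train = pd.DataFrame({"region": ["urban"] * 80 + ["rural"] * 20})
reference_population = {"urban": 0.55, "rural": 0.45}   # assumed census-style figures

for category, gap in representation_gaps(train, "region", reference_population).items():
    if abs(gap) > 0.05:
        print(f"'{category}' is over/under-represented by {gap:+.0%} versus the reference population")
```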

Of course, the job is not over after implementation, and constant monitoring is essential. Sound infrastructure allows for the establishment of feedback loops, whereby oversights and failures can be quickly flagged, analysed and acted upon. Techniques like drift detection and active learning can reduce this burden, but the ongoing monitoring and retraining of models is an ethical necessity and needs to be budgeted for.
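
For instance, a lightweight drift check along these lines could run inside such a feedback loop, comparing the live distribution of a feature against the distribution seen at training time with a two-sample Kolmogorov-Smirnov test. The feature, numbers and alert threshold are illustrative assumptions rather than a prescription.

```python
# A rough sketch of a drift check for one numeric feature. When the live data
# no longer looks like the training data, flag it for review and retraining.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_values: np.ndarray, live_values: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from training."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
training_income = rng.normal(30_000, 8_000, size=5_000)   # distribution seen at build time
live_income = rng.normal(36_000, 8_000, size=1_000)       # what the deployed system now sees

if feature_has_drifted(training_income, live_income):
    print("Drift detected: schedule a review and retraining of the model.")
```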

Will you be getting the best out of your people?

It’s crucial that we design human-centric systems that augment people, rather than replacing or superseding them. Understanding the unique strengths of humans and machines, and how that can be applied to your business, is key.

For example: the use of diagnostic decision trees (a common statistically derived AI algorithm) in emergency rooms can improve the accuracy of patient triage by doing something that humans are generally poor at: combining risk factors in consistent and unbiased ways. Machines, on the other hand, lack the tact and sensitivity to deliver a diagnosis or discuss treatment, or the dexterity to perform operations.
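
To show the shape of the idea, and only that, the sketch below fits a small decision tree on synthetic risk factors and prints the learned rules, which can be inspected and audited in a way an individual clinician's intuition cannot. The features, labels and data are made up for the example; a real triage tool would be built and validated to a completely different standard.

```python
# Purely illustrative: a tiny decision tree that combines a few risk factors
# into a consistent triage suggestion. Synthetic data only, not medical advice.
from sklearn.tree import DecisionTreeClassifier, export_text

# Columns: age, heart rate, systolic blood pressure
X = [
    [70, 120, 90],
    [25,  80, 120],
    [60, 110, 100],
    [30,  70, 125],
    [80, 130, 85],
    [40,  75, 118],
]
y = ["urgent", "routine", "urgent", "routine", "urgent", "routine"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules are explicit and reviewable, and the same inputs always
# produce the same output.
print(export_text(tree, feature_names=["age", "heart_rate", "systolic_bp"]))
print(tree.predict([[65, 115, 95]]))
```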

This is a particularly stark example, but it’s the kind of task separation that you will need to integrate into your AI strategy in order to make systems truly human-centric, and in order to best utilise the expertise of your workforce in an efficient and risk-averse manner.

At present, badly designed systems risk turning employees into servants for AI, rather than the other way around. In her book Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, anthropologist Mary L. Gray details the unseen human labour that enables the AI implementations we see today, including countless examples of low-paid workers manually tagging and cleaning data.

Ultimately, your stance can be reduced to a simple question: are you seeking to empower or overpower users? (Hint: we think it should be the former!)

Are you sticking to your guns on data privacy and transparency?

Too many platforms rely on surreptitious data collection or on data that was collected for other purposes. While machine learning systems have the potential to bring more efficiency, consistency and fairness, they also introduce the possibility of new forms of discrimination which may be harder to identify and address.


Practically speaking, protecting the privacy of users when building large-scale, AI-based systems is an ongoing effort. Organisations need to carry out risk assessments, harden their infrastructure and take every precaution to keep their systems secure. They also need to align data acquisition methods with stakeholder expectations, and where these don’t align, ensure they are doing all they can to keep users informed about their privacy.

Human-centric AI can be ethical in unprecedented ways

For all our concerns around bias, it’s important to remind ourselves that algorithms present an opportunity for unprecedented ethical fairness.

The behavioural economist Sendhil Mullainathan points out that the scenarios which cause most concern about algorithmic bias are the very same situations in which algorithms, if properly designed and implemented, have the greatest potential to reduce the adverse effects of implicit human biases.

This sentiment is shared by Nobel laureate Daniel Kahneman, who argued that the decision-making process of humans is “noisy” and that algorithms should take over “whenever possible”. Unlike human decisions, machine predictions are consistent over time, and the statistical assumptions and ethical judgments used in algorithms can be clearly and systematically documented, audited, debated, and improved in ways that human decisions cannot.
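
As a small illustration of what “clearly and systematically documented” can mean in practice, the sketch below records a model’s assumptions, limitations and fairness checks in a structured, reviewable object, in the spirit of a model card. The fields and values are assumptions for the example, not a standard.

```python
# A minimal sketch of writing down a model's assumptions so they can be
# audited and debated. Field names and contents are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str
    known_limitations: list[str] = field(default_factory=list)
    fairness_checks: list[str] = field(default_factory=list)

card = ModelCard(
    name="loan-approval-v3",
    intended_use="Decision support only; a human reviews every rejection.",
    training_data="2015-2019 applications; under-represents rural applicants.",
    known_limitations=["Not validated for applicants under 21"],
    fairness_checks=["Approval-rate gap by gender under 5 points on the hold-out set"],
)
print(card)
```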

The kicker, however, is that when AI is overlaid with human expertise, new heights of accountability and fairness are reached.

The recent A-level algorithmic debacle has demonstrated the importance of a human-centric approach to AI, and the pitfalls of undermining human expertise by relying too much on historic data.

Purely statistical models feed on large-scale historical trends and reflect them in data patterns - in this case, the higher performance of typically wealthier schools. New research by UCL's Institute of Education found that 23% of comprehensive school students were under-predicted by the models compared with just 11% of grammar and private school pupils.

The total deference to a standardised algorithm in place of teacher input was an example of how taking the human element out of AI-powered decision-making causes confusion and distrust, and results in outliers being mishandled. Again, as in the recent Home Office visa system or Apple credit card examples, design flaws and a lack of foresight about the tool’s impact on a diverse set of end users resulted in the need for a dramatic U-turn.

We are being given constant reminders that unless ethical considerations are prioritised at the design stage, a moral and operational disaster - not to mention a PR storm - awaits. Putting in the work up front to design an accountable human-centric process, with thorough consideration for end users and potential biases, will mean that this can all be avoided. As the saying goes: hindsight is a wonderful thing, but foresight is better.

To hear more from Katie on uniting the business and the technical to drive more ethical AI, register for her talk at BigDataLDN (https://bigdataldn.com/speakers/) on 24th Sept at 12:15pm.

Don't forget to subscribe below to keep up-to-date with the latest on data & machine learning!