When the film Minority Report was released in cinemas, the concept of “precrime” – identifying and preventing a crime before it has even happened – seemed far-fetched, something that wouldn’t be possible within our lifetimes. Steven Spielberg’s 2002 sci-fi blockbuster is set in a dystopian future of 2054. Three humans with the psychic ability to see future events – primarily premeditated murders – are used by Washington DC’s “Precrime unit” to try to prevent murders before they happen. However, when the unit’s own captain is predicted to commit murder, we discover that the visions of the “Precogs” can sometimes deviate from one another and produce an alternative vision of the future – the ‘minority report’ of the film’s title – and so we learn that the visions can’t be relied on as absolute fact.

We might not be able to predict a crime before it happens and send a squad out to arrest the potential perpetrator before they’ve actually committed any wrongdoing. But with the introduction of increasingly advanced Artificial Intelligence (AI) technology, the line between science fiction and reality is blurring. So how can this futuristic technology be implemented to improve our daily lives? And what do we need to consider when it comes to setting out ethical frameworks, minimising unfair bias and managing cybersecurity risks?

Can science fiction really become science fact?

If you look back at Minority Report, several of its science-fiction scenarios are now science fact: driverless cars, voice-controlled homes, facial recognition, and gesture-based computing.

AI technologies and massive amounts of data are now being used to track and analyse a wide range of behaviour patterns – from our shopping history to our travel habits – and as a result, financial institutions can spot credit card fraud and block the card while the fraud is happening, rather than react to it after the fact. When it comes to email, spam filters can distinguish legitimate messages from spam with increasing accuracy.
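At its core, a spam filter is a classifier trained on labelled examples. The sketch below is only a minimal illustration of that idea – it assumes scikit-learn is available and uses a tiny, purely made-up dataset rather than any real filter’s training data.

```python
# A minimal sketch of how a spam filter learns from labelled examples.
# Assumes scikit-learn; the emails and labels below are purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = spam, 0 = legitimate email.
emails = [
    "Claim your free prize now",
    "Meeting moved to 3pm, see agenda attached",
    "Cheap loans, act today",
    "Quarterly report draft for your review",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new message; real filters also keep learning from user feedback.
print(model.predict(["Free prize waiting, claim today"]))  # likely [1], i.e. spam
```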

AI can also be a force for good when it comes to a more proactive approach to managing security risks. Although still in its infancy, AI and machine learning have the potential to sift through vast amounts of data and video surveillance to assist human analysts in identifying threats. In doing so, these systems can uncover blind spots and see what humans miss, as well as improving situational awareness, avoiding ‘alarm fatigue’ and, perhaps most importantly, cutting through the noise to identify the real risks to security.
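One common way to cut that noise is anomaly detection: flag only the events that look unusual and route them to a human analyst. The sketch below is an assumption about the general approach, not a description of any specific product – it uses scikit-learn’s IsolationForest on synthetic, purely illustrative “event” features.

```python
# A minimal sketch of anomaly detection for security alert triage.
# Assumes scikit-learn and NumPy; the event features are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per event: [logins per hour, data transferred (MB)].
normal_events = rng.normal(loc=[5, 50], scale=[2, 10], size=(500, 2))
suspicious = np.array([[40, 900], [60, 1200]])  # bursts far outside the norm
events = np.vstack([normal_events, suspicious])

# Fit on the bulk of events; the model labels outliers as -1.
detector = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = detector.predict(events)

print("events flagged for human review:", int((flags == -1).sum()))
```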

The UK’s Government Communications Headquarters (GCHQ) recently published a paper on The Ethics of Artificial Intelligence, which outlines several specific ways AI can be used. These include:

  • Mapping international networks that enable human, drugs, and weapons trafficking.
  • Fact-checking and detecting deepfake media to tackle foreign state disinformation.
  • Scouring chatrooms for evidence of grooming to prevent child sexual abuse.
  • Analysing activity at scale to identify malicious software to protect the UK from cyberattacks.

However, there is also the risk that adversaries “will undoubtedly seek to use AI to attack the UK”, according to the Royal United Services Institute (RUSI), the world’s oldest think tank on international defence and security.

With great power comes great responsibility

The power of AI is immense and it seems the technology is here to stay. Yet the rules for how to use these new technologies have not yet been written, and we have a responsibility to address potential issues around ethics, fairness and accountability. Jeremy Fleming, Director of GCHQ, said: “We need honest, mature conversations about the impact that new technologies could have on society. This needs to happen while systems are being developed, not afterwards. And in doing so we must ensure that we protect our [citizens’] right to privacy and maximise the tremendous upsides inherent in the digital revolution.”

Philosophers and data scientists have been grappling with the ethical implications of AI:

  • How can we prevent AI systems from replicating existing power imbalances?
  • How can we prevent biases, such as racial bias?
  • How do we ensure fairness and accountability are embedded in systems from the start?

The ethics of artificial intelligence: Issues and initiatives (March 2020), from the Panel for the Future of Science and Technology (STOA), deals with some of the ethical implications that arise from the development of AI:

Trust

Robots and AI evoke mixed feelings. Two Eurobarometer surveys – one gauging public attitudes towards robots and AI (survey 382), the other towards increasing digitisation and automation (survey 460) – have shown that there is still a way to go before people are comfortable with the widespread use of robots and advanced technology in society.

Bias and discrimination

Bias and fairness have been extensively studied by AI ethics researchers, but it is still possible to unknowingly train AI to discriminate. For example, models trained on skewed data sets may discriminate against women or people from an ethnic minority background. There are many other prominent examples of bias, such as speech and facial recognition software that fails to work reliably for large parts of society, and recruitment AI that favours male candidates over female candidates in the shortlisting process.

In addition, our own assumptions and biases can be coded into AI decision-making – for example, in the weight given to factors such as age or ethnic background. This can make the system unfair, and perhaps even unethical.
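One simple way such bias can be surfaced is to compare how a system treats different groups. The sketch below – built on hypothetical shortlisting outcomes, not real data – computes the selection rate per group, a basic demographic-parity style check; it is an illustration of the idea rather than a complete fairness audit.

```python
# A minimal sketch of checking selection rates per group (a simple
# demographic-parity style signal). The outcomes below are hypothetical.
from collections import defaultdict

# Each record: (applicant's recorded group, 1 = shortlisted, 0 = rejected).
outcomes = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, decision in outcomes:
    totals[group] += 1
    selected[group] += decision

for group in totals:
    rate = selected[group] / totals[group]
    print(f"{group}: selection rate {rate:.0%}")

# A large gap between the rates is one warning sign that the model,
# or the data it was trained on, may be discriminating against a group.
```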

Policies

International humanitarian law sets out what can and cannot be done during an armed conflict. These rules are universal, and the Geneva Conventions – the core element of international humanitarian law – have been ratified by all 196 states. If the rules are broken there are consequences: war crimes are investigated and people prosecuted. Many people are calling for similar policies to be implemented for AI.

Accountability

How do we address accountability? It is possible that AI can, and does, make decisions differently from how a human would make them. However, human decision-makers need to be able to make sense of the decisions made by AI, and this is not always possible. According to Caplan et al. (2018), algorithmic accountability is the delegation of responsibility for damages incurred as a result of algorithm-based decisions that have produced discriminatory or unfair consequences. However, it is difficult to punish an algorithm for making mistakes and causing harm. Without accountability, trust in these systems is likely to erode.

Human-AI interaction

Researchers have suggested that having a ‘human-in-the-loop’ (HITL) system is one way to overcome many of the challenges related to AI. When you mark an email as spam, you are acting as a HITL within a complex machine-learning algorithm that is trying to improve email classification. In a HITL AI system, the human can also help identify mistakes and take corrective action – for example, a year of birth entered incorrectly on a form could be spotted by a human checking the form against the accompanying photograph. However, this may not be enough: to counter algorithms that can influence political beliefs or preferences, we may need a ‘society-in-the-loop’ system.
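In code, the HITL pattern is straightforward: route the model’s low-confidence decisions to a person, and feed the human-verified labels back into training. The sketch below is a minimal, hypothetical illustration – the model object, confidence function and ask_human callback are assumptions introduced for this example, not parts of any specific system.

```python
# A minimal, hypothetical sketch of a human-in-the-loop (HITL) step.
# `model` is assumed to expose scikit-learn-style fit/predict methods;
# `confidence` and `ask_human` are placeholder callables for this example.

def human_in_the_loop(model, items, confidence, ask_human, threshold=0.8):
    reviewed = []
    for item in items:
        label = model.predict([item])[0]          # the machine's suggestion
        if confidence(model, item) < threshold:   # uncertain? escalate
            # A human reviewer confirms or corrects the suggestion.
            label = ask_human(item, suggested=label)
        reviewed.append((item, label))

    # Human-verified labels are fed back so the model improves over time.
    model.fit([i for i, _ in reviewed], [lbl for _, lbl in reviewed])
    return reviewed
```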

Researchers in the field agree that trust in AI can only be achieved if there is fairness, transparency, accountability and regulation. Whatever the future holds, AI systems need to be introduced in a way that builds trust, understanding and accountability, as well as respecting human and civil rights.