The Ethical Implications of Artificial Intelligence in Cybersecurity

Pictured left to right: Dr Taddeo, Dr Nakata, Mr Inakoshi, Dr Naseer, Dr Agrafiotis

The ethical issues surrounding Artificial Intelligence (AI) are of critical importance to Fujitsu. Fujitsu Laboratories of Europe is playing a key role in helping to identify how AI can be applied appropriately, without constraining the freedom to advance human-centric innovation.

To progress this discussion, we recently jointly hosted an international workshop on trustworthy AI with the Digital Ethics Lab (DELab) of the Oxford Internet Institute, University of Oxford.

Held on 8 November at St Cross College, Oxford, the event brought together some of the world’s leading experts from academia, policy and the private sector to address the all-important question: “Can we develop trustworthy AI in cybersecurity?”

Delegates included:

  • Representatives of the UK National Cyber Security Centre
  • Dr Ioannis Agrafiotis (Network Security Information Officer, ENISA)
  • Dr Anna Jobin (Health Ethics and Policy Lab, ETH Zurich)
  • Dr Tsuneo Nakata (Fujitsu Laboratories)
  • Ms Nathalie Smuha (Department of International and European Law, KU Leuven)
  • Dr Mariarosaria Taddeo (Oxford Internet Institute, University of Oxford)
  • Mr Paul Timmers (European Policy Centre, Brussels) 

Using cybersecurity as an in-depth case study, the focus was on real-world examples of the misuse and abuse of AI in cyberspace, the trustworthiness of data and equipment, operational risk at the “Internet edge”, and the risk of autonomous Machine-to-Machine (M2M) control.

The workshop discussion produced some clearly defined views, particularly with regard to the key ethical principles that should shape standardisation and certification procedures for AI in cybersecurity.

The team of experts agreed that the transparency of AI and accountability for its uses and misuses are fundamental principles, and that standards should ensure they are upheld when AI is used in cybersecurity practices.

More broadly, there was agreement on the validity of the EU Guidelines on Trustworthy AI and on the underpinning principles of beneficence, non-maleficence, autonomy, justice, and explicability as identified by Cowls & Floridi.

Trust was also a central topic of discussion. While the panellists agreed on the role of trust in AI to support its adoption, there was disagreement with respect to the validity of claims concerning ‘trustworthy AI in cybersecurity’.

Some argued, in particular, that trust in AI applications for cybersecurity is unwarranted, and that what is actually required are forms of control when delegating cybersecurity tasks to AI. Standards and certification procedures therefore need to take these issues into account.

Workshop Highlights

In summary, the discussions during the workshop highlighted three key issues:

  • The robustness of AI systems depends as much on the inputs they are fed and on their interactions with other agents once deployed as on their design and training. This makes the robustness of these systems difficult to predict.
  • The EU Guidelines on Trustworthy AI offer a valuable high-level set of principles for the design and use of AI, but require refinement when considering the specific cases and uses of AI in cybersecurity.
  • Standards and certification procedures focusing on the robustness of AI systems performing cybersecurity tasks will be effective only insofar as they take into account the dynamic and self-learning nature of these systems. They need to envisage forms of monitoring and control that span from design through to the development stages.

This activity builds on Fujitsu’s existing contribution to the ‘Ethics Guidelines for Trustworthy AI’ developed by the European Commission’s High-Level Expert Group on AI (AI HLEG) and published in April 2019, which has significantly shaped the overall EU strategy for AI.

It is clear that there are real benefits to be gained from AI in cybersecurity. But our workshop confirmed there is a lot more work ahead to ensure it is used ethically and appropriately.
