The UK government has issued a major report identifying cybersecurity for artificial intelligence as a priority in the face of growing cyber threats. The initiative comes against the backdrop of a Chinese cyberattack on the Ministry of Defence and demonstrates the United Kingdom’s commitment to staying at the forefront of rapid AI development.
The report draws on contributions from representatives of both public and private organizations, including significant input from the London-based startup Mindgard. It was commissioned by the Department for Science, Innovation and Technology (DSIT) and is intended to provide business executives and government policymakers with a comprehensive guide to governing AI cybersecurity.
Mindgard, which emerged from Lancaster University, was the sole startup to participate in this major piece of work. The result of its effort, the “Cyber Security for AI Recommendations” report, consists of 45 distinct recommendations addressing cybersecurity issues associated with AI.
One strand of Mindgard’s contribution is a series of technical recommendations. These cover software, hardware, data, and network access to AI systems, as well as modifications to AI models themselves. The suggested hardening measures include changes to the training process, to data preprocessing, and to model architecture, all intended to strengthen AI systems against cyberattacks that target them.
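The report’s exact techniques are not reproduced here, but one widely used training-process change of the kind described is adversarial training. The sketch below is purely illustrative (not Mindgard’s or DSIT’s method): it trains a toy logistic-regression classifier on FGSM-perturbed inputs so the model learns to resist small worst-case input changes.

```python
import numpy as np

# Illustrative sketch of adversarial training (FGSM-style) on a toy
# two-blob dataset with plain logistic regression. All names and
# parameters here are invented for the example.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Toy dataset: two well-separated Gaussian blobs.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w = np.zeros(2)
b = 0.0
eps, lr = 0.3, 0.1  # perturbation budget and learning rate

for _ in range(200):
    # Gradient of the loss w.r.t. the inputs gives the FGSM direction.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)        # d(loss)/dx for each example
    X_adv = X + eps * np.sign(grad_x)  # worst-case perturbed inputs
    # Standard logistic-regression update, but on the perturbed inputs.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The design choice is the inner perturbation step: instead of fitting the raw data, each update fits inputs nudged in the direction that most increases the loss, which is the core idea behind training-time robustness measures.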
Beyond these technical suggestions, Mindgard also proposes a strategic organizational framework for tackling AI cybersecurity risks. Its recommendations span security hygiene, business processes, and corporate governance, addressing matters such as the legal and regulatory challenges surrounding AI, engagement with interested stakeholders, the design of organizational AI programs, controls governing AI models, and documentation of AI project specifications. The recommendations also call for red-teaming and risk-analysis exercises.
Other key contributors to the government report include Grant Thornton UK LLP, Manchester Metropolitan University, and IFF Research. Their work highlighted critical issues such as legal and regulatory compliance, communication with stakeholders, and internal documentation. The study identified twenty-three categories of security threats to AI systems, most stemming from malicious use, particularly the adversarial machine learning techniques behind previous attacks.
Beyond its involvement in the government report, Mindgard also offers a dedicated industry platform for managing AI security risks such as data poisoning and model theft. The platform’s modules cover external threats to internal models, visibility into the external inputs reaching a system, and exposures across the wider ecosystem.
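To make one of those risks concrete, the toy example below demonstrates a backdoor-style data-poisoning attack; it is a hypothetical illustration, not code from Mindgard’s platform. A small number of mislabeled training examples carrying a “trigger” feature is enough to plant a hidden behavior in an otherwise ordinary classifier.

```python
import numpy as np

# Hypothetical data-poisoning demo: 20 mislabeled training points with a
# trigger feature teach the model to flip its answer whenever the
# trigger is present. All data and names are invented for the example.
rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_logreg(X, y, lr=0.5, steps=500):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Clean data: 2 informative features plus 1 "trigger" feature (normally 0).
X0 = np.hstack([rng.normal(-2, 1, (100, 2)), np.zeros((100, 1))])
X1 = np.hstack([rng.normal(2, 1, (100, 2)), np.zeros((100, 1))])

# Poison: copies of class-0 points with the trigger set, mislabeled as 1.
Xp = X0[:20].copy()
Xp[:, 2] = 1.0

X = np.vstack([X0, X1, Xp])
y = np.array([0] * 100 + [1] * 100 + [1] * 20)
w, b = train_logreg(X, y)

# The backdoor: clean class-0 inputs flip toward class 1 once the
# trigger feature is switched on.
X_test = X0.copy()
clean_pred = sigmoid(X_test @ w + b) > 0.5
X_test[:, 2] = 1.0
trig_pred = sigmoid(X_test @ w + b) > 0.5

print(f"trigger weight learned: {w[2]:.2f}")
print(f"class-1 rate without trigger: {clean_pred.mean():.2f}, "
      f"with trigger: {trig_pred.mean():.2f}")
```

The model acquires a positive weight on the trigger feature purely to fit the 20 poisoned points, which is why monitoring training data provenance is part of defending against this class of attack.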
Dr. Peter Garraghan, CEO and CTO of Mindgard and a professor at Lancaster University, remarked on the significance of the research work: “Research has always been fundamental to Mindgard’s work and mission. Directing that research towards initiatives that strengthen cybersecurity and address the weaknesses of proprietary AI on a national level is a responsibility and a privilege.”
The publication of these reports, together with the proposed Code of Practice on AI cybersecurity governance, underscores the British government’s active role in AI security. The new guidance aims to equip directors and business leaders with the knowledge and tools required to protect AI systems from increasingly sophisticated cyber threats.
As AI is adopted across more spheres of society and embedded in economic activity, the significance of cybersecurity grows accordingly. The UK’s decision to tackle the problem through collaboration between the public and private sectors sets an example for other countries, showing how governments can work to protect their citizens from the potential risks AI poses.
The world of AI is moving forward at a pace that leaves security teams little room for complacency. The UK’s willingness to invest in knowledge and embrace AI as part of its future illustrates the country’s forward-looking approach. By offering comprehensive guidelines and recommendations, the government intends to ensure that businesses can fully embrace AI cybersecurity, improving productivity while guarding against risks that could expose sensitive business information.