
Project

CSAI - CyberSecurity Artificial Intelligence

With the CSAI project, research and industry partners aim to use available AI technologies and advance them for the purpose of cybersecurity. Four research partners and four industry partners aim to make solid contributions to advancing the state of the art, strengthening their market positioning, and building unique differentiators that would otherwise be difficult to achieve without this project. Results can be implemented by the participating CS technology companies in their technology and services offerings. The technology companies will investigate integrating the AI technologies, models and tools into their own platforms, identifying ways to improve their current capabilities.

This research project will have a strong impact on both the research institutions and the industrial partners: it will expand their knowledge base on AI for cybersecurity (CS), and it will produce security tools that go beyond the state of the art, providing a significant competitive advantage and enabling innovative services in specific CS domains.
Date: 1 Mar 2021 → 28 Feb 2023
Keywords: AI, Artificial Intelligence, Machine Learning, Cyber Security, malicious, attack, automation
Disciplines: Numerical computation; Automation, feedback control and robotics
Project type: Collaboration project
Results:

1.     Identify the most convenient encoding of security-related events across the different use cases covered in the project, in a general way so that they can be incorporated into a multi-agent ML framework. Success here is key for the rest of the project, and it will be measured by comparing the performance of the ML systems under different encoding strategies. (Lead Partner: KUL DistriNet)
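
A general encoding of this kind might map heterogeneous event records onto fixed-length feature vectors that any downstream ML agent can consume. The sketch below is purely illustrative: the event fields, categories, and scaling are assumptions, not the project's actual encoding.

```python
# Illustrative sketch: encoding heterogeneous security events into
# fixed-length feature vectors shared by all agents in an ML framework.
# Field names and event categories below are hypothetical.
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    event_type: str    # e.g. "login_failure", "port_scan" (assumed categories)
    severity: int      # 0 (info) .. 3 (critical)
    src_internal: bool # did the event originate inside the network?

EVENT_TYPES = ["login_failure", "port_scan", "malware_alert"]

def encode(event: SecurityEvent) -> list[float]:
    """One-hot encode the event type, then append scaled numeric fields."""
    one_hot = [1.0 if event.event_type == t else 0.0 for t in EVENT_TYPES]
    return one_hot + [event.severity / 3.0,
                      1.0 if event.src_internal else 0.0]

vec = encode(SecurityEvent("port_scan", severity=2, src_internal=False))
print(vec)
```

Comparing encoding strategies then reduces to swapping `encode` implementations and measuring the resulting detector performance, as the objective describes.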

2.     Design novel ML solutions with application to practical use cases of interest to our research partners. The success of this task will be measured by productivity increases and by false-alarm and misdetection rates, improving on the rates of the fixed-rule systems currently in use at the companies and reaching rates comparable to those achieved on similar tasks with state-of-the-art ML. (Lead Partner: VUB AI Lab)
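
The two evaluation metrics named above have a simple operational meaning: the false-alarm rate is the fraction of benign events the detector flags, and the misdetection rate is the fraction of attacks it misses. A minimal sketch, using made-up toy labels:

```python
# Toy computation of the two metrics used to evaluate the ML detectors:
# false-alarm rate (benign flagged as attack) and misdetection rate
# (attacks classified as benign). Labels below are illustrative.
def false_alarm_rate(y_true, y_pred):
    """Fraction of benign events (label 0) that were flagged."""
    benign_preds = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(benign_preds) / len(benign_preds)

def misdetection_rate(y_true, y_pred):
    """Fraction of attacks (label 1) that were missed."""
    attack_preds = [p for t, p in zip(y_true, y_pred) if t == 1]
    return 1 - sum(attack_preds) / len(attack_preds)

y_true = [0, 0, 0, 0, 1, 1, 1, 1]  # ground truth: 1 = attack
y_pred = [0, 1, 0, 0, 1, 1, 0, 1]  # detector output
print(false_alarm_rate(y_true, y_pred))   # 0.25
print(misdetection_rate(y_true, y_pred))  # 0.25
```

Improving on fixed rules then means driving both rates below the rule-based baseline on the same event stream.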

3.     Improve on state-of-the-art solutions in the proposed scenarios. Adversarial attacks play a key role in achieving this objective: feeding them back into the training stage improves performance. This is expected to improve the resilience of the AI detectors in both adversarial and non-adversarial settings. (Lead Partner: KUL COSIC)
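
The feedback loop described above can be sketched on a toy detector: craft adversarial examples against a trained model, then retrain on the original data plus those examples. Everything below is an illustrative assumption (a linear logistic detector and an FGSM-style perturbation), not the project's actual models or attack method.

```python
# Sketch of adversarial training: attacks crafted against a simple
# detector are fed back into its training set. Model and attack are
# illustrative choices, not the project's method.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=300, lr=0.5):
    """Logistic-regression detector fitted by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def fgsm(X, y, w, eps=0.3):
    """FGSM-style evasion: perturb each input toward higher loss."""
    grad = np.outer(sigmoid(X @ w) - y, w)  # dLoss/dX, one row per sample
    return X + eps * np.sign(grad)

# Toy data: benign class centered at 0, attacks shifted by +1 per feature.
X = rng.normal(size=(200, 4)) + np.outer(np.repeat([0.0, 1.0], 100), np.ones(4))
y = np.repeat([0.0, 1.0], 100)

w_plain = train(X, y)                 # detector without hardening
X_adv = fgsm(X, y, w_plain)           # evasion attempts against it
# Feedback step: retrain on clean + adversarial data with true labels.
w_robust = train(np.vstack([X, X_adv]), np.concatenate([y, y]))

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == y))

print("plain  detector on adversarial inputs:", accuracy(w_plain, X_adv, y))
print("robust detector on adversarial inputs:", accuracy(w_robust, X_adv, y))
```

In practice the same loop runs with the project's own detectors and stronger attacks; the point is only the structure: attack, augment, retrain, re-evaluate in both adversarial and clean settings.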

4.     Publish the research results in high-impact journals or academic security venues. Our objective in this regard is two papers on these results, jointly published by the research partners.

5.     Provide guidance and assessment criteria for CS technology companies using public cloud-based AI services, to allow for a cost-effective operational plan. Two guides (white papers) will offer selection guidance across the different public cloud offerings, presenting the capabilities and limitations of the available services and providing a total-cost-of-ownership analysis focused specifically on the AI services on offer, indicating routes to optimization. (Lead Partner: LSEC)

6.     Provide a methodology for assessing available AI technologies, with guidance for CS companies on integrating them into their product and service development processes. A third guide (white paper) will present use cases and best practices for implementing and integrating public AI cloud services in CS products and services (edge vs. cloud analysis, data normalization, preprocessing, metadata storage, etc.). (Lead Partner: LSEC)