The New Area for Cyber Security to Protect: Artificial Intelligence
Introduction
Even though artificial intelligence originated decades ago, it has achieved its most significant progress in the last ten years. It is no longer confined to research centres and universities; it is now used in many commercial areas such as logistics, health and autonomous vehicles. One of the main reasons it has become so widespread is that today's computing power has advanced enormously compared to when the algorithms were first designed. With high-capacity computers and storage units, we can record the large amounts of data the algorithms need and run training on the Graphics Processing Units (GPUs) of modern graphics cards, reducing a training phase that would otherwise last weeks to just hours or minutes. Although the mathematical foundations were developed a few decades ago, it is only with the relatively recent emergence of powerful GPUs that researchers have obtained the computational power required to experiment with and build sophisticated machine learning systems. Today, state-of-the-art computer vision models such as VGG19, ResNet, DenseNet and Inception are deep neural networks with several million parameters, and they rely on hardware that has been available for only about ten years.
Despite this eye-catching progress in artificial intelligence, the issue of security is still neglected, the same issue that other new technologies have ignored and later paid for with significant problems. When the TCP/IP protocol was initially designed, security was considered only for the small number of computers then connected to the network. Although the scale and complexity of the Internet have increased rapidly since then, attackers who exploit the protocol deficiencies the designers could not foresee are still active today.
A similar situation holds for artificial intelligence. Almost all machine learning algorithms contain security weaknesses. In general, these attacks aim to manipulate the model by using adversarial input samples (adversarial examples) during either the training or the classification phase of the algorithm. Adversarial examples are malicious inputs designed to fool machine learning models. From an attacker's point of view, they can be used to bypass a detection system with a misclassified example at classification time, or to make the model learn during the training phase in such a way that it consistently misclassifies afterwards.
An Example Attack
The best everyday example of this situation is cyber security components. For IPS/IDS systems that run machine learning models on corporate networks, the commercial vendors state that the systems spend about three months collecting data and learning the network. During this three-month training period, an attacker can make the anomaly patterns that should be marked as positive (attack) examples appear benign by changing the labels on the network packets. The malicious-traffic detection model deployed after those three months has therefore been trained on adversarial data and will misclassify. These are called label flipping attacks, and they take place during the training phase; a minimal sketch of one follows this paragraph. Besides this, the attacker can also circumvent the model at classification time by manipulating input samples. One example is evading logistic regression, a binary classification method. To explain logistic regression very briefly: it computes a linear combination of the input features, passes the result through the sigmoid function, and labels the sample as positive if the resulting probability is greater than 0.5 (in our setting, a positive result is an attack and a negative result is normal behaviour).
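Before moving on to the evasion example, here is a minimal sketch of such a label flipping attack. It uses a toy synthetic dataset and scikit-learn's LogisticRegression as a stand-in for a real IDS/IPS model, so the feature values, flip rate and printed numbers are purely illustrative.

```python
# Minimal sketch of a label flipping (data poisoning) attack. The dataset is
# synthetic and LogisticRegression stands in for a real IDS/IPS model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "network traffic": class 1 = attack, class 0 = normal behaviour.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
X_attack = rng.normal(loc=1.0, scale=1.0, size=(500, 4))
X = np.vstack([X_normal, X_attack])
y = np.concatenate([np.zeros(500), np.ones(500)])

# Model trained on the true labels.
clean_model = LogisticRegression().fit(X, y)

# Poisoned model: during the data collection phase the attacker flips the
# labels of 40% of the attack samples so that they look benign.
y_poisoned = y.copy()
attack_idx = np.where(y == 1)[0]
flipped = rng.choice(attack_idx, size=int(0.4 * len(attack_idx)), replace=False)
y_poisoned[flipped] = 0
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Compare detection rates on fresh attack traffic: the poisoned model
# typically misses noticeably more attacks than the clean one.
X_test_attack = rng.normal(loc=1.0, scale=1.0, size=(200, 4))
print("clean model detection rate:   ", clean_model.predict(X_test_attack).mean())
print("poisoned model detection rate:", poisoned_model.predict(X_test_attack).mean())
```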
Returning to the evasion example: the weights (w) of the logistic regression model that detects the network attack and the input sample to be classified (network packet x) are shown in the table below.
In the last row, the linear result is -3. If we substitute it into the logistic function, the sample is positive with a probability of 0.0474 (4.74%), or negative with a probability of about 95%. The table below shows the same input sample after the attacker has made some changes to it.
With these changes, the linear result becomes 2. Substituting it into the logistic function gives 0.88, that is, an 88% probability of being positive and 12% of being negative. In this way, by making small changes to the input, the attacker has shifted the model's output for the positive class from about 5% to 88%.
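Both probabilities come directly from the logistic (sigmoid) function applied to the linear scores -3 and 2; the short snippet below simply reproduces that arithmetic (the weights and feature values themselves are whatever the tables contain).

```python
# Reproducing the probabilities in the example from the two linear scores:
# w.x = -3 for the original packet and w.x = 2 for the manipulated one.
import math

def sigmoid(z):
    """Logistic function: probability of the positive (attack) class."""
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(-3))  # ~0.0474 -> about 4.74% positive, 95% negative
print(sigmoid(2))   # ~0.8808 -> about 88% positive, 12% negative
```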
Mitigation Methods
The first defence against this type of attack is training under attack, that is, adversarial training. It is similar to the data augmentation approach widely used in image classification: adversarial samples are generated with all known attack methods, these adversarial examples are added to the dataset together with their correct labels during the training phase, and the model is built on this augmented data. In this way, the model becomes more resistant to attacks in the classification phase. A minimal sketch of the procedure is shown below.
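The sketch assumes a logistic regression model and an FGSM-style perturbation (x + eps * sign of the loss gradient with respect to x) to generate the adversarial examples; the dataset and the eps value are illustrative, and this is a generic sketch rather than the method of the paper cited below.

```python
# Minimal sketch of adversarial training for a logistic regression model.
# Adversarial examples are generated with an FGSM-style perturbation:
# x_adv = x + eps * sign(d loss / d x). Dataset and eps are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic training data: class 1 = attack, class 0 = normal behaviour.
X = np.vstack([rng.normal(0.0, 1.0, size=(500, 4)),
               rng.normal(1.0, 1.0, size=(500, 4))])
y = np.concatenate([np.zeros(500), np.ones(500)])

def fgsm(model, X, y, eps=0.2):
    """Craft adversarial examples against a fitted logistic regression.

    For the logistic loss, the gradient with respect to the input x is
    (p - y) * w, so stepping in its sign direction pushes each sample
    toward the wrong side of the decision boundary.
    """
    w = model.coef_.ravel()
    p = model.predict_proba(X)[:, 1]
    grad = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad)

# 1) Train an initial model on the clean data.
model = LogisticRegression().fit(X, y)

# 2) Generate adversarial examples against the current model.
X_adv = fgsm(model, X, y)

# 3) Adversarial training: add the adversarial examples to the training
#    set with their *correct* labels and retrain the model.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
robust_model = LogisticRegression().fit(X_aug, y_aug)

# robust_model is then deployed in place of the original model.
```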
My Related Publication
M. Aladag, F. O. Catak and E. Gul, “Preventing Data Poisoning Attacks By Using Generative Models,” 2019 1st International Informatics and Software Engineering Conference (UBMYK), 2019, pp. 1–5, DOI: 10.1109/UBMYK48245.2019.8965459.