False Positive Responses Optimization for Intrusion Detection System

Abstract

In Intrusion Detection Systems (IDS), operation costs are one of the major challenges for researchers. Beyond the acquisition cost of the IDS itself, they comprise the costs of maintenance, administration, response, running, and error reactions. In the present paper, we focus on erroneous reactions, which include False Positive (FP) and False Negative (FN) reactions. To address them, a new cost optimization model is proposed for IDS. This optimization yields a minimal interval within which IDSs work optimally. In simulation, we found this interval as a trade-off between the damage costs and the FP costs.


Citation:

Baayer, J., Regragui, B. and Baayer, A. (2014) False Positive Responses Optimization for Intrusion Detection System. Journal of Information Security, 5, 19-36. doi: 10.4236/jis.2014.52003.

1. Introduction

An Intrusion Detection System (IDS) [1] carries out the process of detecting and responding to malicious activity [2] that threatens computing and networking resources. It is generally based on four main components: the knowledge base [3], the source of information (detector) [4], the analysis module [5] and the response module, which issues responses based on the analysis results.

This response can be passive, with standard alarm reports [6], or active, based on an additional module called an IRS (Intrusion Response System) [7]. An IRS can be defined as a system that constantly supervises the health of computer networks based on IDS alerts, efficiently launching suitable countermeasures against malevolent or illegal activities. These actions help prevent deterioration and keep the monitored system in its normal state [8].

An IRS can be static (predefined countermeasures against attacks [9]), dynamic (responses based on the severity/confidence degree of the attack [10]) or cost-sensitive (balancing the intrusion damage against the cost of the response [11]). A static IRS is easy to build and maintain, but it is predictable and vulnerable to intrusions, in particular denial of service (DoS) [12]. A dynamic IRS is more sophisticated than a static one but does not treat cost as a main element.

On the other hand, the cost-sensitive IRS presents a good alternative, ensuring responses that attempt to balance the intrusion damage and the cost of the response [13].

In this paper we focus on the cost-sensitive IRS, where the response cost is the financial value of a correct response launched against a real attack, and the damage is the value of the losses incurred if that response had not been launched against the same attack [14]. Generally, a response is launched only if its cost is lower than the damage value [15]. The success of a given response therefore depends strongly on a good balance between the damage inflicted by the attack and the cost of restoring system resources.
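The decision rule above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation; the function name and the example cost values are our own assumptions.

```python
# Minimal sketch of the cost-sensitive decision rule: launch the
# response only if its cost is lower than the damage it prevents.
# Names and cost values are illustrative, not from the paper.

def should_respond(response_cost: float, damage_cost: float) -> bool:
    """Return True when launching the response is cheaper than the damage."""
    return response_cost < damage_cost

# Example: restoring a compromised server costs 200, while letting the
# attack proceed would cost an estimated 1500 in losses.
print(should_respond(response_cost=200, damage_cost=1500))   # respond
print(should_respond(response_cost=800, damage_cost=300))    # do not respond
```

In practice both quantities must be estimated from the alert context, which is precisely where FP and FN responses introduce hidden costs.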

When an IRS launches a wrong response against a real attack, or launches a response to a normal activity, it generates an FP response. An FN occurs when no intrusion is detected and no response is launched although abnormal activity is present. FP and FN responses can severely degrade the overall performance of an IDS [16], and they remain the subject of numerous research works.

These false reactions cannot be totally eliminated from an IDS. Consequently, many models have been proposed to reduce their impact [17]. However, these minimizations were carried out without shedding sufficient light on the trade-off between FP and FN from a cost perspective. This trade-off is a main performance indicator in IDS [18], based on the ROC curve [19] enriched with cost notions. The ROC curve is well suited to establishing cost as a reliable metric.

In IDS, the fixed cost-sensitive model incurs higher FP costs and extra FN costs [11]. To avoid this problem, a new cost optimization model for IDS is required.
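The cost trade-off along a ROC curve can be made concrete with a small numerical sketch. The ROC points, attack prior and unit costs below are invented for illustration only; the paper derives its own cost model rather than this simple expected-cost scan.

```python
# Hypothetical sketch: scanning ROC operating points for the one that
# minimises the expected per-event cost of errors. All numbers are
# illustrative assumptions, not taken from the paper.

def expected_cost(fp_rate, fn_rate, p_attack, c_fp, c_fn):
    """Expected error cost per event at one ROC operating point."""
    return (1 - p_attack) * fp_rate * c_fp + p_attack * fn_rate * c_fn

# A toy ROC curve as (FP rate, detection rate) pairs.
roc = [(0.00, 0.00), (0.02, 0.60), (0.05, 0.85), (0.10, 0.95), (0.30, 0.99)]

costs = [expected_cost(fp, 1 - dr, p_attack=0.1, c_fp=10, c_fn=100)
         for fp, dr in roc]
best = roc[costs.index(min(costs))]
print(best)  # operating point with the lowest expected cost
```

Raising the detection threshold trades FN cost for FP cost; the minimum of this curve is exactly the kind of optimal operating region the present paper seeks to characterize as an interval.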

In the rest of the paper we focus our study on FP and present a new cost optimization model that lets the IDS work with minimum costs by reducing the impact of FP responses. Our proposed model yields a minimal interval within which IDSs work optimally.

The rest of this paper is organized as follows. Section 2 presents the IRS module and performance indicators. Section 3 gives an overview of the ROC curve in IDS. Section 4 reviews related work. Section 5 presents our improvement, with simulations and results. The conclusion is given in Section 6.

2. IRS Module & Performance Indicators

Among the four principal modules of an IDS, we distinguish the response decision module, which, based on the analysis results, either raises an alarm as a passive response or communicates a decision to the IRS to activate an active response. The IDS functional architecture is represented in Figure 1.

The Intrusion Response System (IRS) is a mechanism intended to produce an intrusion response following the analysis carried out by the IDS. Various IRS solutions have been applied, by notification or by manual or automatic responses. To be able to initiate a response, it is necessary to determine what kind of attack we are facing.

In our earlier work [20], we proposed an organization of IRSs according to how their parameters are assessed, which is shown in Figure 2.

We focus on the cost-sensitive IRS. To this end, several performance indicators are defined.

We define the various indicators and metrics used in IDS performance evaluation. These metrics evaluate the ability of an IDS to effectively detect malicious activities. To understand the performance characteristics of different IDSs, several indicators or measures are needed to quantitatively assess the competence of intrusion detection. Many such indicators have been proposed, but it is necessary to study the behavior of the IDS before its performance can be evaluated. We distinguish:

• True negative (TN): represents the number of normal activities seen by the IDS as normal.

• True positive (TP): represents the number of intrusions seen by the IDS as true intrusions.

• False negative (FN): represents the number of intrusions seen by the IDS as normal.

• False positive (FP): represents the number of normal activities seen by the IDS as intrusions.

• Accuracy: shows the fraction of intrusions reported by the IDS that are real intrusions:

Accuracy = TP / (TP + FP)    (1)

• Rate or probability of detection: shows the fraction of real intrusions that are reported by the IDS:

Detection rate = TP / (TP + FN)
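The indicators above can be computed directly from labeled events, as in the following illustrative sketch. The function and variable names are our own, not the paper's; "accuracy" here follows the paper's definition (Equation (1)), i.e. what is often called precision.

```python
# Illustrative computation of the TP/TN/FP/FN counts and the two
# indicators defined above from labeled events (1 = intrusion, 0 = normal).
# Names and sample data are assumptions, not from the paper.

def confusion_counts(actual, predicted):
    """Return (TP, TN, FP, FN) for paired actual/predicted labels."""
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    return tp, tn, fp, fn

actual    = [1, 1, 1, 0, 0, 0, 0, 1]
predicted = [1, 0, 1, 0, 1, 0, 0, 1]
tp, tn, fp, fn = confusion_counts(actual, predicted)

accuracy       = tp / (tp + fp)  # Equation (1): reported intrusions that are real
detection_rate = tp / (tp + fn)  # real intrusions that are reported
print(tp, tn, fp, fn, accuracy, detection_rate)
```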

Figure 1. IDS Functional architecture.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Denning, D. (1987) An Intrusion-Detection Model. IEEE Transactions on Software Engineering, SE-13, 222-232. http://dx.doi.org/10.1109/TSE.1987.232894
[2] Endorf, C., Schultz, E. and Mellander, J. (2004) Intrusion Detection & Prevention. McGraw-Hill/Osborne.
[3] Zanero, S. and Savaresi, S.M. (2004) Unsupervised Learning Techniques for an Intrusion Detection System. Proceedings of the 2004 ACM Symposium on Applied Computing, Nicosia, 14-17 March 2004.
[4] Ertoz, L., Eilertson, E., Lazarevic, A., Tan, P., Srivastava, J., Kumar, V. and Dokas, P. (2004) The MINDS—Minnesota Intrusion Detection System. Next Generation Data Mining, MIT Press.
[5] Gul, I. and Hussain, M. (2011) Distributed Cloud Intrusion Detection Model. International Journal of Advanced Science and Technology, 34, 71.
[6] Elshoush, H.T. and Osman, I.M. (2011) Alert Correlation in Collaborative Intelligent Intrusion Detection Systems—A Survey. Journal of Applied Soft Computing, 11, 4349-4365. http://dx.doi.org/10.1016/j.asoc.2010.12.004
[7] Anuar, N.B., Papadaki, M., Furnell, S. and Clarke, N. (2010) An Investigation and Survey of Response Options for Intrusion Response Systems. Information Security for South Africa, Sandton, 2-4 August 2010, 1-8.
[8] Shameli-Sendi, A., Ezzati-Jivan, N., Jabbarifar, M. and Dagenais, M. (2012) Intrusion Response Systems: Survey and Taxonomy. SIGMOD Record, 12, 1-14.
[9] Mu, C., Shuai, B. and Liu, H. (2010) Analysis of Response Factors in Intrusion Response Decision Making. 3rd International Joint Conference on Computational Science and Optimization, Huangshan, 28-31 May 2010, 395-399.
[10] Zonouz, S.A., Khurana, H., Sanders, W.H. and Yardley, T.M. (2009) RRE: A Game-Theoretic Intrusion Response and Recovery Engine. Proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks, Lisbon, 29 June-2 July 2009, 439-448.
[11] Zhou, M. and Yao, G. (2011) Improved Cost-Sensitive Model of Intrusion Response System Based on Clustering. International Conference in Electrics, Communication and Automatic Control Proceedings, 931-937.
[12] Svecs, I., Sarkar, T., Basu, S. and Wong, J. (2010) XIDR: A Dynamic Framework Utilizing Cross-Layer Intrusion Detection for Effective Response Deployment. IEEE 34th Annual Computer Software and Applications Conference Workshops, Seoul, 19-23 July 2010, 287-292.
[13] Stakhanova, N., Basu, S. and Wong, J. (2007) A Cost-Sensitive Model for Preemptive Intrusion Response Systems. Proceedings of the 21st International Conference on Advanced Networking and Applications, Niagara Falls, 21-23 May, 428-435.
[14] Strasburg, C., Stakhanova, N., Basu, S. and Wong, J.S. (2009) A Framework for Cost Sensitive Assessment of Intrusion Response Selection. Proceedings of IEEE Computer Software and Applications Conference, Seattle, 20-24 July 2009, 355-360.
[15] Stakhanova, N., Basu, S. and Wong, J. (2007) A Cost-Sensitive Model for Preemptive Intrusion Response Systems. Proceedings of the IEEE AINA, Niagara Falls, 21-23 May 2007, 428-435.
[16] Timm, K. (2009) Strategies to Reduce False Positives and False Negatives in NIDS. Security Focus Article. http://www.securityfocus.com/infocus/1463
[17] Victor, G.V., Sreenivasa, R.M. and Venkaiah, V.CH. (2010) Intrusion Detection Systems—Analysis and Containment of False Positives Alert. International Journal of Computer Applications, 5, 27-33.
[18] Lippmann, R., Fried, D.J., Graf, I., Haines, J.W., Kendall, K.R., McClung, D., Weber, D., Webster, S.H., Wyograd, D., Cunningham, R.K. and Zissman, M.A. (2000) Evaluating Intrusion Detection Systems: The 1998 DARPA Off-Line Intrusion Detection Evaluation. Proceedings of DARPA Information Survivability Conference and Exposition, Hilton Head, 25-27 January 2000, 12-26.
[19] Stolfo, S., Fan, W., Lee, W., Prodromidis, A. and Chan, P. (2000) Costbased Modeling for Fraud and Intrusion Detection: Results from the JAM Project. Proceedings of DARPA Information Survivability Conference and Exposition, Los Alamitos, 2, 130-144.
[20] Baayer, J. and Regragui, B. (2009) WOTIC’09—“Architecture Fonctionnelle d’un IPS, Etat de l’Art et Classification de Ses Systèmes de Réponse d’Intrusion (IRS)”. Université Ibn Zohr, Agadir.
[21] Swets, J.A. (1996) Signal Detection Theory and ROC Analysis in Psychology and Diagnostics: Collected Papers. Lawrence Erlbaum Associates, Mahwah.
[22] Foo, B., Wu, Y.-S., Mao, Y.-C., Bagchi, S. and Spafford, E.H. (2005) ADEPTS: Adaptive Intrusion Response Using Attack Graphs in an E-Commerce Environment. Proceedings of DSN, 28 June-1 July, 508-517.
[23] Toth, T. and Kregel, C. (2002) Evaluating the Impact of Automated Intrusion Response Mechanisms. Proceeding of the 18th Annual Computer Security Applications Conference, Los Alamitos, 301-310.
[24] Balepin, I., Maltsev, S., Rowe, J. and Levitt, K. (2003) Using Specification-Based Intrusion Detection for Automated Response. Proceedings of RAID, 2820, 136-154.
[25] Jahnke, M., Thul, C. and Martini, P. (2007) Graph Based Metrics for Intrusion Response Measures in Computer Networks. Proceedings of the IEEE LCN, Dublin, 15-18 October 2007, 1035-1042.
[26] Yu, S. and Rubo, Z. (2008) Automatic Intrusion Response System Based on Aggregation and Cost. International Conference on Information and Automation, Changsha, 20-23 June 2008, 1783-1786.
[27] Papadaki, M. and Furnell, S.M. (2006) Achieving Automated Intrusion Response: A Prototype Implementation. Information Management and Computer Security, 14, 235-251. http://dx.doi.org/10.1108/09685220610670396
[28] Haslum, K., Abraham, A. and Knapskog, S. (2007) DIPS: A Framework for Distributed Intrusion Prediction and Prevention Using Hidden Markov Models and Online Fuzzy Risk Assessment. 3rd International Symposium on Information Assurance and Security, Manchester, 29-31 August 2007, 183-188. http://dx.doi.org/10.1109/ISIAS.2007.4299772
[29] Mu, C.P. and Li, Y. (2010) An Intrusion Response Decision Making Model Based on Hierarchical Task Network Planning. Expert Systems with Applications, 37, 2465-2472. http://dx.doi.org/10.1016/j.eswa.2009.07.079
[30] Kanoun, W., Cuppens-Boulahia, N., Cuppens, F. and Dubus, S. (2010) Risk-Aware Framework for Activating and Deactivating Policy-Based Response. 4th International Conference on Network and System Security, Melbourne, 1-3 September 2010, 207-215.
[31] Kheir, N., Cuppens-Boulahia, N., Cuppens, F. and Debar, H. (2010) A Service Dependency Model for Cost Sensitive Intrusion Response. Proceedings of the 15th European Conference on Research in Computer Security, 6345, 626-642.
[32] Denning, D. (1999) Information Warfare and Security. Addison-Wesley.
[33] Northcutt, S. (1999) Intrusion Detection: An Analyst’s Handbook. New Riders Publishing.
[34] Lee, W., Fan, W., Millerand, M., Stolfo, S. and Zadok, E. (2002) Toward Cost-Sensitive Modeling for Intrusion Detection and Response. Journal of Computer Security, 10, 5-22.
[35] Tanachaiwiwat, S., Hwang, K. and Chen, Y. (2002) Adaptive Intrusion Response to Minimize Risk over Multiple Network Attacks. ACM Trans on Information and System Security.
[36] Durst, R., Champion, T., Witten, B., Miller, E. and Spagnuolo, L. (1999) Testing and Evaluating Computer Intrusion Detection Systems. ACM, 42, 53-61. http://dx.doi.org/10.1145/306549.306571
[37] Saydjari, O.S. (2000) Designing a Metric for Effect. Presented at DARPA: IDS Evaluation Re-Think Meeting, Lake Geneva, 23-24 May.
[38] Stolfo, S., Fan, W., Lee, W., Prodromidis, A. and Chan, P. (2000) Costbased Modeling for Fraud and Intrusion Detection: Results from the JAM Project. Proceedings of DARPA Information Survivability Conference and Exposition, Los Alamitos, 2, 130-144.
[39] McHugh, J., Christie, A. and Allen, J. (2000) Defending Yourself: The Role of Intrusion Detection Systems. IEEE Software, 17, 42-51. http://dx.doi.org/10.1109/52.877859
[40] Graf, I., Lippmann, R., Cunningham, R., Fried, D., Kendall, K., Webster, S. and Zissman, M. (1998) Results of DARPA 1998 Off-Line Intrusion Detection Evaluation. Presented at DARPA PI Meeting, Cambridge, 15 December.
[41] (2012) Verizon Business Data Breach Investigations Report. http://www.verizonenterprise.com/DBIR/2013/
[42] Widup, S. (2010) The Leaking Vault—Five Years of Data Breaches. Digital Forensics Association.
[43] An Osterman Research White Paper (2011) Why You Need to Eliminate False Positives in Your Email System. http://www.ostermanresearch.com.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.