Results of Experiment 2 show that PSA2 outperforms the four algorithms proposed by Jadidi
(2013) on all three metrics DR, ACC, and FAR. PSA2 also surpasses both Random
Forest and Random Tree in terms of ACC and DR.
PSA2 cannot train with unlabelled data, so in Experiment 3 we use only the 10% labelled portion
of the dataset (5,998 attack flows and 595 normal flows) for the training phase, whereas algorithm S4VM
uses 100% of the training dataset (90% unlabelled and 10% labelled).
The results of this experiment strongly confirm the efficiency of PSA2 in comparison
with the methods proposed by Jadidi (2015); both Random Forest and Random Tree perform
poorly, with very low ACC and DR.
labels a path from n to some leaf. Moreover, we define L(D) = ∪_{n is a root of D} L(D, n).
A prefix DAG can be turned into a finite automaton to decide the membership of strings
in a language.
1.3.3 Detectors
Given an alphabet Σ, a nonempty and finite set of symbols, positive and negative r-chunk
detectors, r-contiguous detectors, and rcbvl detectors can be defined as follows:
Definition 1.1. Given a self set S ⊆ Σ^ℓ, a tuple (d, i) of a string d ∈ Σ^r, where r ≤ ℓ, and
an index i ∈ {1, ..., ℓ − r + 1} is a positive r-chunk detector if there exists an s ∈ S such that d
occurs in s at position i.
Definition 1.2. Given a self set S ⊆ Σ^ℓ, a tuple (d, i) of a string d ∈ Σ^r, r ≤ ℓ, and an index
i ∈ {1, ..., ℓ − r + 1} is a negative r-chunk detector if d does not occur at position i in any s ∈ S.
Example 1.1. Let ℓ = 6, r = 3. Given a set of five self strings S = {s1 = 010101, s2 =
111010, s3 = 101101, s4 = 100011, s5 = 010111}. The set of some positive r-chunk detectors
is {(010,1), (111,1), (101,2), (110,2), (010,3), (101,3), (101,4), (010,4), (111,4)}. The set
of some negative r-chunk detectors is {(000,1), (001,1), (011,1), (001,2), (010,2), (100,2),
(000,3), (100,3), (000,4), (001,4), (100,4)}.
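To make Definitions 1.1 and 1.2 concrete, here is a small Python sketch (ours, with illustrative names, not part of the thesis algorithms) that enumerates the detectors of Example 1.1:

```python
from itertools import product

def chunk_detectors(S, ell, r, alphabet="01"):
    """Enumerate positive and negative r-chunk detectors (Definitions 1.1, 1.2)."""
    positive, negative = set(), set()
    for i in range(1, ell - r + 2):                 # 1-based positions 1..ell-r+1
        seen = {s[i - 1:i - 1 + r] for s in S}      # chunks at position i in self
        for d in map("".join, product(alphabet, repeat=r)):
            (positive if d in seen else negative).add((d, i))
    return positive, negative

S = ["010101", "111010", "101101", "100011", "010111"]
pos, neg = chunk_detectors(S, ell=6, r=3)
assert ("010", 1) in pos and ("000", 1) in neg      # agrees with Example 1.1
```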
Definition 1.3. Given a self set S ⊆ Σ^ℓ, a string d ∈ Σ^ℓ is an r-contiguous detector if
d[i, . . . , i + r − 1] does not occur at position i in any s ∈ S, for all i ∈ {1, ..., ℓ − r + 1}.
Example 1.2. Let ℓ = 5, r = 3. Given a set of seven self strings S = {s1 = 01111, s2 = 00111,
s3 = 10000, s4 = 10001, s5 = 10010, s6 = 10110, s7 = 11111}. The set of all 3-contiguous
detectors is {01011, 11011}.
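A brute-force check of Definition 1.3, exponential in ℓ and for illustration only (our sketch), recovers exactly the two detectors of Example 1.2:

```python
from itertools import product

def contiguous_detectors(S, ell, r, alphabet="01"):
    """Brute-force CONT(S, r): strings no length-r window of which occurs in S
    at the same position (Definition 1.3)."""
    self_chunks = [{s[i:i + r] for s in S} for i in range(ell - r + 1)]
    return {d for d in map("".join, product(alphabet, repeat=ell))
            if all(d[i:i + r] not in self_chunks[i] for i in range(ell - r + 1))}

S = ["01111", "00111", "10000", "10001", "10010", "10110", "11111"]
assert contiguous_detectors(S, ell=5, r=3) == {"01011", "11011"}
```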
We also use the following notations:
• Dp_i = {(d, i) | (d, i) is a positive r-chunk detector} is the set of all positive r-chunk detectors at position i, i = 1, . . . , ℓ − r + 1.
• Dn_i = {(d, i) | (d, i) is a negative r-chunk detector} is the set of all negative r-chunk detectors at position i, i = 1, . . . , ℓ − r + 1.
• CHUNKp(S, r) = ∪_{i=1}^{ℓ−r+1} Dp_i is the set of all positive r-chunk detectors.
• CHUNK(S, r) = ∪_{i=1}^{ℓ−r+1} Dn_i is the set of all negative r-chunk detectors.
• CONT(S, r) is the set of all r-contiguous detectors (which, by definition, match no string in S).
• For a given detector set X, L(X) is the set of all nonself strings detected by X; accordingly, Σ^ℓ \ L(X) is the set of all strings classified as self with respect to X.
Definition 1.4. Given a self set S ⊆ Σ^ℓ. A triple (d, i, j) of a string d ∈ Σ^k, 1 ≤ k ≤ ℓ, an
index i ∈ {1, ..., ℓ − r + 1} and an index j ∈ {i, ..., ℓ − r + 1} is called a negative detector under
the rcbvl matching rule if d does not occur in any s ∈ S.
To combine PSA and NSA in a unified framework, we have to change the original semantics
of positive selection in the detection phase as follows.
Definition 1.5. If a new instance matches all ℓ − r + 1 positive r-chunk detectors (d_{ij}, i), i =
1, . . . , ℓ − r + 1, it is claimed as self; otherwise it is claimed as nonself.
With this new detection semantics, the following proposition on the equivalence of the detection
coverage of r-chunk-type PSA and NSA can be stated.
Theorem 1.1 (Detection Coverage). The detection coverage of the positive and negative selection
algorithms coincides:
L(CHUNKp(S, r)) = L(CHUNK(S, r))
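Continuing the sketch from Example 1.1, the equivalence of Theorem 1.1 can be checked empirically by exhaustively classifying all strings of Σ^ℓ under both semantics (pos, neg, and product are reused from that sketch):

```python
def is_self_positive(s, pos, ell, r):
    """Definition 1.5: self iff s matches a positive detector at every position."""
    return all((s[i - 1:i - 1 + r], i) in pos for i in range(1, ell - r + 2))

def is_self_negative(s, neg, ell, r):
    """Classical NSA semantics: nonself iff s matches some negative detector."""
    return not any((s[i - 1:i - 1 + r], i) in neg for i in range(1, ell - r + 2))

# Exhaustive check over all binary strings of length 6 (pos, neg as computed above).
for s in map("".join, product("01", repeat=6)):
    assert is_self_positive(s, pos, 6, 3) == is_self_negative(s, neg, 6, 3)
```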
1.3.4 Ring representation of data
For string-based detector sets, a simple technique is to concatenate each string representing
a detector with its first r − 1 symbols. Each new linear string is a ring representation of its
original one. Given a set of strings S ⊂ Σ^ℓ, the set Sr ⊂ Σ^{ℓ+r−1} consists of the ring
representations of all strings in S, obtained by concatenating each string s ∈ S with its first
r − 1 symbols. The idea of ring strings can easily be applied to other data representations in
AIS: for instance, other structures such as trees and automata can be created from the set Sr
instead of S, as in the sketch below.
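A minimal sketch of the ring representation (the function name is ours):

```python
def ring(s, r):
    """Ring representation: append the first r - 1 symbols (Section 1.3.4)."""
    return s + s[:r - 1]

assert ring("010101", 3) == "01010101"   # length ell + r - 1 = 8
```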
1.3.5 Frequency trees
Given a set D of equal-length strings, a tree T on D, denoted TD, is a rooted directed tree
with edge labels from Σ in which, for every c ∈ Σ, each node has at most one outgoing edge
labeled with c. For a string s, we write s ∈ T if there is a path from the root of T to a leaf such
that s is the concatenation of the labels on this path. Each leaf is associated with an integer:
the frequency in D of the string s obtained by concatenating the labels on the path ending
at this leaf. We also use the concepts of self trees and nonself trees to represent the r-chunk
detector sets of the normal and abnormal datasets, respectively.
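As an illustration (our own sketch, not the thesis implementation), a frequency tree can be realized as nested dictionaries whose leaves hold occurrence counts:

```python
def build_frequency_tree(D):
    """A frequency tree as nested dicts; each leaf holds the count of its string in D."""
    root = {}
    for s in D:
        node = root
        for c in s[:-1]:
            node = node.setdefault(c, {})
        node[s[-1]] = node.get(s[-1], 0) + 1        # leaf stores the frequency
    return root

def frequency(tree, s):
    """Return the frequency of s in D, or 0 if s labels no root-to-leaf path."""
    node = tree
    for c in s[:-1]:
        node = node.get(c)
        if not isinstance(node, dict):
            return 0
    leaf = node.get(s[-1], 0)
    return leaf if isinstance(leaf, int) else 0

tree = build_frequency_tree(["010", "010", "111"])
assert frequency(tree, "010") == 2 and frequency(tree, "000") == 0
```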
1.4 Datasets
We concentrate only on flow-based NIDSs because: (1) they can detect some special attacks more
efficiently and faster than payload-based ones, since less information needs to be analyzed;
(2) flow-based anomaly detection methods process only packet headers, which reduces the data
volume and processing time for high-speed detection on large networks and addresses the
scalability problem under increasing network usage and load; (3) flow-based NIDSs raise fewer
privacy issues than packet-based ones because of the absence of payload.
The DARPA-Lincoln datasets: The DARPA-Lincoln datasets were collected by MIT's Lincoln
Laboratory with the purpose of evaluating the performance of different intrusion detection
methodologies.
UT datasets: This dataset was captured by monitoring a honeypot hosted in the University
of Twente network. The dataset has three categories: malicious traffic, unknown traffic, and
side-effect traffic. Each flow in the datasets has 12 fields.
Netflow datasets: This dataset focuses only on flows to a specific port and an IP address
that receives the largest number of attacks. It contains all 129,571 flows (including attacks)
to and from the victims.
Chapter 2
COMBINATION OF NEGATIVE SELECTION AND POSITIVE SELECTION
2.1 New Positive-Negative Selection Algorithm
Our algorithm first constructs ℓ − r + 1 binary trees (called positive trees) corresponding to
the ℓ − r + 1 positive r-chunk detector sets Dp_i, i = 1, . . . , ℓ − r + 1. Then, all complete subtrees of
these trees are removed to achieve a compact representation of the positive r-chunk detector
set while maintaining the detection coverage. Finally, for every i-th positive tree, we decide
whether it should be converted into the negative tree covering the negative r-chunk
detector set Dn_i; the decision depends on which tree is more compact. When this process is
done, we have ℓ − r + 1 compact binary trees, some of which represent positive r-chunk
detectors while the others represent negative ones.
The r-chunk matching rule on binary trees is implemented as follows: a given sample
s matches the i-th positive (negative) tree if, for some k < r, s[i . . . i + k] labels a path from the
root to a leaf, i = 1, . . . , ℓ − r + 1. The detection phase is conducted by traversing the compact
binary trees one by one: a sample s is claimed as non-self if it matches a negative
tree or fails to match some positive tree; otherwise it is considered self. From the
description of DetectorGeneration, it is straightforward to show that it takes |S|(ℓ − r + 1)·r steps to
generate all necessary trees (detector generation time complexity) and (ℓ − r + 1)·r steps to
verify a cell string as self or non-self in the worst case (worst-case detection time complexity).
These time complexities are similar to those of the state-of-the-art NSAs (PSAs). A simplified,
set-level sketch of this scheme is given below.
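The following Python sketch is ours, not the thesis code: it replaces each position's prefix tree by a plain set of chunks and keeps whichever of the positive set or its complement is smaller, mirroring PNSA's per-position choice between a positive and a negative tree (complete-subtree pruning is the tree analogue of this choice). Enumerating the complement is exponential in r, so this is illustrative only.

```python
from itertools import product

def build_pnsa_sets(S, ell, r, alphabet="01"):
    """Per position, keep the positive chunk set or its complement (negative set),
    whichever is smaller -- the set-level analogue of PNSA's tree choice."""
    reps = []
    all_chunks = set(map("".join, product(alphabet, repeat=r)))
    for i in range(ell - r + 1):
        pos_chunks = {s[i:i + r] for s in S}
        neg_chunks = all_chunks - pos_chunks
        if len(neg_chunks) < len(pos_chunks):
            reps.append(("neg", neg_chunks))
        else:
            reps.append(("pos", pos_chunks))
    return reps

def classify(s, reps, r):
    """Nonself iff s hits a negative set or misses a positive set (Section 2.1)."""
    for i, (kind, chunks) in enumerate(reps):
        hit = s[i:i + r] in chunks
        if (kind == "neg" and hit) or (kind == "pos" and not hit):
            return "nonself"
    return "self"

reps = build_pnsa_sets(["010101", "111010", "101101", "100011", "010111"], 6, 3)
assert classify("010101", reps, 3) == "self"
assert classify("000000", reps, 3) == "nonself"
```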
Theorem 2.1. Given a self set S and an integer ℓ, procedure DetectorGeneration produces
a detector (binary) tree set with at most (ℓ − r + 1)·2^{r−2} fewer nodes in total than the
detector tree set created by a PSA or NSA alone, where r ∈ {2, . . . , ℓ − r + 1}.
Algorithm 2.1 Detector Generation Algorithm.
1: procedure DetectorGeneration(S, r, T)
Input: A set of self strings S ⊆ Σ^ℓ, a matching threshold r ∈ {1, . . . , ℓ}.
Output: A set T of ℓ − r + 1 prefix trees representing all r-chunk detectors.
2: T = ∅
3: for i = 1, ..., ℓ − r + 1 do
4:   create an empty prefix tree Ti
5:   for all s ∈ S do
6:     insert every s[i . . . i + r − 1] into Ti
7:   for all internal nodes n ∈ Ti do
8:     if n is the root of a complete binary subtree then
9:       delete this subtree
10:  if (number of leaves of Ti) > (number of nodes of Ti that have only one child) then
11:    for all internal nodes n ∈ Ti do
12:      if n has only one child then
13:        if the child is a leaf then
14:          delete the child
15:        create the other child of n
16:    mark Ti as a negative tree
17:  T = T ∪ {Ti}
Algorithm 2.2 Positive-Negative Selection Algorithm.
1: procedure PNSA(T, r, s)
Input: A set T of ℓ − r + 1 prefix trees representing all r-chunk detectors, a matching
threshold r ∈ {1, . . . , ℓ}, an unlabeled string s ∈ Σ^ℓ.
Output: A label of s (self or nonself).
2: flag = true  ▷ a temporary boolean variable
3: i = 1
4: while (i ≤ ℓ − r + 1) and (flag = true) do
5:   if (Ti is a positive tree) and (s ∉ Ti) then
6:     flag = false
7:   if (Ti is a negative tree) and (s ∈ Ti) then
8:     flag = false
9:   i = i + 1
10: if flag = false then
11:   output s is nonself
12: else
13:   output s is self
2.2 Experiments
Table 2.1 shows the results on detector memory storage and detection time of PNSA compared
to one of the popular single NSAs for some combinations of S, ℓ, and r. We conducted
another experiment by choosing ℓ = 40, |S| = 20,000 (S is a set of randomly generated binary
strings of length ℓ) and varying r from 15 to 40. Then ℓ − r + 1 trees were created using the single
NSA, and another ℓ − r + 1 compact trees were created using PNSA. Both detector sets
were then used to detect every s ∈ S. The next experiment was conducted on the Netflow dataset. Table 2.2
Table 2.1: Comparison of memory and detection time reductions.

|S|      ℓ    r    Memory (%)   Time (%)
1,000    50   12   0            0
2,000    30   15   2.5          5
2,000    40   17   25.9         42.7
2,000    50   20   36.3         50
shows some of the experiment results; the percentage of node reduction is given in the final column.
Table 2.2: Comparison of nodes generation on Netflow dataset.
r    NSA          PNSA         Reduction (%)
5 727 706 2.89
10 33,461 31,609 5.53
15 1,342,517 1,154,427 14.01
20 9,428,132 6,157,766 34.68
25 18,997,102 11,298,739 40.52
30 29,668,240 17,080,784 42.42
35 42,596,987 24,072,401 43.48
40 58,546,497 32,841,794 43.90
45 79,034,135 44,194,012 44.08
Chapter 3
GENERATION OF COMPACT DETECTOR SET
3.1 New negative selection algorithm
Given a non-empty set S of self strings of length ℓ and an integer r ∈ {1, . . . , ℓ}, this
section presents a new NSA based on the rcbvl matching rule. Prefix trees are first used to
generate a perfect detector set from S, and this set is then used to classify a new sample
as self or non-self.
Algorithm 3.1 summarizes the first phase of the new NSA. From the description of the algorithm,
it takes |S|·(ℓ − r + 1)·r steps to generate the (ℓ − r + 1) prefix trees and |D|·(ℓ − r + 1)·2r
steps to generate the perfect detector set D.
Example 3.1. Given ℓ, r and the set of self strings S as in Example 1.1, S = {s1 = 010101, s2
= 111010, s3 = 101101, s4 = 100011, s5 = 010111}. Some steps of Algorithm 3.1 generating
the perfect detector set {(0001,1,2), (00100,1,4), (100,4,4), (011110,1,4), (11000,1,3)} are:
The set D is first created as {(00,1,1), (011,1,1), (110,1,1)}. Then the for loop (lines 13-29) updates
D and D1 as follows:
For i = 2: D = {(0001,1,2), (0010,1,2), (0111,1,2), (1100,1,2)} and D1 = ∅.
For i = 3: D = {(00100,1,3), (01111,1,3), (11000,1,3)} and D1 = {(0001,1,2)}.
For i = 4: D = {(00100,1,4), (011110,1,4)} and D1 = {(0001,1,2), (11000,1,3), (100,4,4)}.
The final step, D = D ∪ D1 in line 30, yields the perfect detector set {(0001,1,2), (00100,1,4),
(100,4,4), (011110,1,4), (11000,1,3)}.
To decide whether a given string s is self or non-self, we simply check the rcbvl matching rule
for s against every detector in D: if some detector matches, s is output as nonself; otherwise s is self.
A hedged sketch of this check follows.
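The thesis does not spell out the matching test itself; from the detectors of Example 3.1 (e.g., (0001,1,2) spanning chunks 1–2 and (100,4,4) spanning chunk 4), one plausible reading, which we assume here, is that a detector (d, i, j) matches s when d occurs in s starting at position i. Under that assumption the detection phase is:

```python
def matches_rcbvl(s, detector):
    """Assumed reading of the rcbvl rule: (d, i, j) matches s when d occurs in s
    starting at 1-based position i (d then spans chunks i..j)."""
    d, i, j = detector
    return s[i - 1:i - 1 + len(d)] == d

def classify_rcbvl(s, D):
    """Detection phase: nonself iff some detector in D matches s."""
    return "nonself" if any(matches_rcbvl(s, t) for t in D) else "self"
```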
3.2 Experiments
The flow-based NetFlow dataset is used for experiment 1, and a randomly created dataset is used for
experiment 2. Flows in NetFlow are converted into binary strings in two steps. The first
step maps every feature to a binary string feature; after this step, complete binary string features
are constructed for both the normal and the anomalous data. The second step is to concatenate
Algorithm 3.1 Algorithm to generate the perfect rcbvl detector set
1: procedure GenerationDetectors(S, ℓ, r, D)
2: for i = 1, ..., ℓ − r + 1 do create an empty prefix tree Ti
3: for all s ∈ S do
4:   for i = 1, ..., ℓ − r + 1 do
5:     insert every s[i . . . i + r − 1] into Ti
6: for i = 1, ..., ℓ − r + 1 do
7:   for all non-leaf nodes n ∈ Ti and all σ ∈ Σ do
8:     if no edge with label σ starts at n then
9:       create a new leaf n′ and an edge (n, n′) labeled with σ
10:  delete every node n ∈ Ti from which none of the newly created leaves is reachable
11: D1 = ∅
12: D = {(s, 1, 1) | s ∈ T1}
13: for i = 2, ..., ℓ − r + 1 do
14:   D2 = ∅
15:   for all (s, k, j) ∈ D do
16:     if there exists an s′ ∈ Ti such that s[i − k + 1 . . . |s|] is a prefix of it then
17:       D2 = D2 ∪ {(s + s′[|s| − j + k . . . |s′|], k, i)}
18:       delete every node n ∈ Ti from which only nodes of s′ are reachable
19:       for all s′ ∈ Ti such that s[i − k + 1 . . . |s|] is a prefix of it do
20:         if |s| − i + k < r then
21:           D2 = D2 ∪ {(s[|s|] + s′, i − 1, i)}
22:         else
23:           D2 = D2 ∪ {(s′, i, i)}
24:         delete every node n ∈ Ti from which only nodes of s′ are reachable
25:     else
26:       D1 = D1 ∪ {(s, k, j)}
27:   for all s′ ∈ Ti do
28:     D2 = D2 ∪ {(s′, i, i)}
29:   D = D2
30: D = D ∪ D1
the binary string features of every flow. After this step, the dataset contains binary strings
of length 49. The distributions of the training and testing datasets, as well as the parameters r
and ℓ for the four experiments, are described in Table 3.1.
Table 3.1: Data and parameters distribution for experiments and results comparison.

                           Size (in bits)          Time (in minutes)
 ℓ   r    Train     Test   r-chunk     rcbvl       r-chunk   rcbvl
Case 1
 49  10   119,571  10,000  206,810     42,704      1         0.2
 49  8    79,571   50,000  31,672      8,096       0.8       0.18
Case 2
 30  12   25,000   25,000  367,092     79,222      4.1       0.75
 30  14   40,000   10,000  2,324,056   392,815     8.65      1.4
The results in Table 3.1 show that our proposed algorithm reduces both the size (in bits) of the
detector set and the time needed to classify the testing dataset, compared with the best algorithm proposed in 2004.
Chapter 4
FAST SELECTION ALGORITHMS
4.1 A fast negative selection algorithm based on r-chunk detectors
Table 4.1 compares our results with the runtimes of previously published
r-chunk detector-based algorithms. All runtimes for the training and classification phases
are given for a binary alphabet (|Σ| = 2), since not all algorithms are applicable to arbitrary
alphabets. The parameter K = min{|S[i..i+r−1]| : i = 1, . . . , ℓ−r+1}. The parameter |D|, the
desired number of detectors, applies only to algorithms that generate detectors explicitly.
Our algorithm and the algorithm by Textor produce the results that would be obtained with
the maximal number of generated detectors. We propose an NSA that yields a substantial
performance improvement in the training phase compared to the most efficient published NSA,
by Textor.
Table 4.1: Comparison of our results with the runtimes of previously published algorithms.

Matching rule   Algorithm                    Training                 Classification
r-chunk         Stibor et al.                (2^r + |S|)(ℓ − r + 1)   |D|ℓ
                Elberfeld, Textor            r^2|S|(ℓ − r)            |S|ℓ2^r
                Elberfeld, Textor            |S|ℓr                    ℓ
                Present thesis               |S|ℓ                     ℓ
r-contiguous    D'haeseleer et al. (linear)  (2^r + |S|)(ℓ − r)       |D|ℓ
                D'haeseleer et al. (greedy)  2^r|S|(ℓ − r)            |D|ℓ
                Wierzchoń                    2^r(|D|(ℓ − r) + |S|)    |D|ℓ
                Elberfeld, Textor (2009)     |S|^3ℓ^3r^3              |S|^2ℓ^3r^3
                Elberfeld, Textor (2011)     |S|ℓr                    ℓ
                Present thesis               |S|ℓ + (2^r − K)ℓ        ℓr
We use the following notation: an array Q, where Q[s][c] is a pointer used for creating
new nodes in the tree, s ∈ Σ^{r−1} and c ∈ Σ; and an array P, where P[i] is a structure of two fields,
a pointer P[i].end and a string P[i].str ∈ Σ^{r−1}.
Algorithm 4.3 summarizes the overall proposed Chunk-NSA. In the algorithm, we use
a self set S, an integer r ∈ {1, . . . , ℓ}, a cell string s to be detected, and a prefix DAG G.
Algorithm 4.1 To generate the positive r-chunk detector set
1: procedure PositiveR-chunkDetector(S, r, G)
Input: A set of self strings S ⊆ Σ^ℓ, a matching threshold r ∈ {1, . . . , ℓ}.
Output: A prefix DAG G representing the positive r-chunk detector set.
2: G = ∅
3: for i = 1, ..., |S| do
4:   insert si[1 . . . r] into G and set P[i].end to the leaf node of the path si[1 . . . r]
5:   P[i].str = si[2 . . . r]
6: for i = r + 1, ..., ℓ do
7:   for j = 1, ..., |S| do
8:     if Q[P[j].str][sj[i]] = NULL then
9:       Q[P[j].str][sj[i]] = new()
10:  for j = 1, ..., |S| do
11:    p = P[j].end
12:    for c ∈ Σ do
13:      if (Q[P[j].str][c] ≠ NULL) and (no edge labeled c starts from p) then
14:        create an edge labeled c from p to Q[P[j].str][c]
15:    P[j].end = end node of the edge starting from p with label sj[i]
16:  for j = 1, ..., |S| do
17:    Q[P[j].str][sj[i]] = NULL
18:    P[j].str = P[j].str[2 . . . r − 1] + sj[i]
Chunk-NSA will detect s as self or nonself.
Theorem 4.1. Given any S ⊆ Σ^ℓ, s ∈ Σ^ℓ and r ∈ {1, . . . , ℓ}, algorithm Chunk-NSA constructs
an automaton M with L(M) ∩ Σ^ℓ = L(CHUNK(S, r)) in time O(|S|·ℓ·|Σ|) and checks whether
s ∈ L(M) in time O(ℓ).
4.2 A fast negative selection algorithm based on r-contiguous detectors
The two arrays P and Q are used in this section with the same meaning as in the previous section.
In addition, we use two sets P1 and P2 of pointers for chunk detectors; they swap their roles at
the end of each step.
Theorem 4.2. There exists an algorithm that, given any S ⊆ Σ^ℓ and r ∈ {1, . . . , ℓ}, constructs
a finite automaton M with L(M) ∩ Σ^ℓ = CONT(S, r) in time O(|S|·ℓ·|Σ| + (|Σ|^r − K)·ℓ), where
K = min{|S[i..i + r − 1]| : i = 1, . . . , ℓ − r + 1}.
4.3 Experiments
In our experiments, we use the NetFlow dataset and a random dataset for experiment 1 and
experiment 2, respectively. In Table 4.2, the runtimes of the NSA by Textor (2011) on experiments 1
and 2 are given in columns a and c, respectively, and the runtimes of the proposed Chunk-NSA
on experiments 1 and 2 are given in columns b and d.
Algorithm 4.2 To generate the negative r-chunk detector set
1: procedure NegativeR-chunkDetector(S, r, G)
Input: A set of self strings S ⊆ Σ^ℓ, a matching threshold r ∈ {1, . . . , ℓ}.
Output: A prefix DAG G representing the negative r-chunk detector set.
2: PositiveR-chunkDetector(S, r, G)
3: create a special node n′
4: for each non-leaf node n ∈ G do
5:   for c ∈ Σ do
6:     if no edge with label c starts at n then
7:       create a new edge (n, n′) labeled with c
8: for each node n ∈ G do
9:   if n′ is not reachable from n then
10:    delete n
Algorithm 4.3 A fast r-chunk detector-based NSA
1: procedure Chunk-NSA(s, S, r, G, M)
Input: A set of self strings S ⊆ Σ^ℓ, an unlabeled string s ∈ Σ^ℓ, a matching threshold
r ∈ {1, . . . , ℓ}.
Output: A prefix DAG G representing the negative r-chunk detector set, an automaton M,
and a label of s (self or nonself).
2: NegativeR-chunkDetector(S, r, G)
3: turn G into an automaton M
4: if s ∈ L(M) then
5:   output s is nonself
6: else
7:   output s is self
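Once G has been turned into an automaton M, the O(ℓ) membership test of line 4 is a simple state walk. A generic sketch (the state names and the dict-of-dicts encoding are our assumptions, not the thesis's data structure):

```python
def automaton_accepts(M, accepting, start, s):
    """O(len(s)) membership walk: M[state][symbol] -> next state."""
    state = start
    for c in s:
        state = M.get(state, {}).get(c)
        if state is None:
            return False
    return state in accepting

# Toy automaton over {0,1}; states and transitions are purely illustrative.
M = {"q0": {"0": "q1", "1": "q0"}, "q1": {"0": "q1", "1": "q2"}, "q2": {}}
assert automaton_accepts(M, accepting={"q2"}, start="q0", s="001")
```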
Table 4.2: Comparison of Chunk-NSA with the r-chunk detector-based NSA.

       Experiment 1           Experiment 2
r      a       b      a/b     c       d      c/d
10 1,330 454 2.9 1,490 482 3.1
11 1,395 439 3.2 1,633 472 3.5
12 1,564 454 3.4 637 360 4.5
13 1,767 435 4.1 2,134 453 4.7
14 1,771 418 4.2 2,276 451 5.0
15 2,092 486 4.3 2,793 450 6.2
16 1,985 437 4.5 3,086 365 8.5
17 2,071 391 5.3 4,079 427 9.6
18 2,249 410 5.5 4,509 422 10.7
19 2,345 375 6.3 5,312 470 11.3
20 2,859 359 7.0 6,796 437 15.6
Algorithm 4.4 Algorithm to generate the negative r-contiguous detector set
1: procedure Cont-NSA(S, r, G)
Input: A set of self strings S ⊆ Σ^ℓ, a matching threshold r ∈ {1, . . . , ℓ}.
Output: An automaton G representing the negative r-contiguous detector set.
2: G = P1 = ∅
3: for s ∈ Σ^r \ S[1 . . . r] do
4:   insert s into G and create p.end pointing to the leaf node of path s
5:   p.str = s[2 . . . r]
6:   P1 = P1 ∪ {p}
7: for i = r + 1, ..., ℓ do
8:   for j = 1, ..., |S| do
9:     if Q[P[j].str][sj[i]] = NULL then
10:      Q[P[j].str][sj[i]] = new()
11:  P2 = ∅
12:  for each p ∈ P1 do
13:    for c ∈ Σ do
14:      if Q[p.str][c] = NULL then
15:        Q[p.str][c] = new()
16:        create a copy p′ of p with p′.end = Q[p.str][c]
17:        p′.str = p.str[2 . . . r − 1] + c
18:        P2 = P2 ∪ {p′}
19:      else
20:        if Q[p.str][c] was newly created in this inner for loop then
21:          create a copy p′ of p with p′.end = Q[p.str][c]
22:          p′.str = p.str[2 . . . r − 1] + c
23:          P2 = P2 ∪ {p′}
24:  for j = 1, ..., |S| do
25:    Q[P[j].str][sj[i]] = NULL
26:    P[j].str = P[j].str[2 . . . r − 1] + sj[i]
27:  for each p ∈ P1 do
28:    for c ∈ Σ do
29:      Q[p.str][c] = NULL
30:  P1 = P2
31: for each node n ∈ G do
32:   if no leaf node pointed to by P1 is reachable from n then
33:     delete n
34: turn G into an automaton
Chapter 5. APPLYING HYBRID ARTIFICIAL IMMUNE SYSTEM FOR NETWORK SECURITY
5.1 Hybrid Positive Selection Algorithm With Chunk Detectors
Given ℓ, r, a normal dataset N ⊂ Σ^ℓ, and an abnormal dataset A ⊂ Σ^ℓ, Algorithm 4.5 and
Algorithm 4.6 create the positive trees and the negative trees, respectively. A new data instance
s ∈ Σ^ℓ is classified as self or nonself by Algorithm 4.7.
Algorithm 4.5 Algorithm to generate positive trees
1: procedure SelfTreesGeneration(N, r, TN)
Input: A set of self strings N ⊆ Σ^ℓ, a matching threshold r ∈ {1, . . . , ℓ}; Nr denotes the set
of ring representations of the strings in N (Section 1.3.4).
Output: A set TN of ℓ prefix trees representing the positive trees.
2: for i = 1, ..., ℓ do
3:   create an empty tree TNi
4: for all s ∈ Nr do
5:   for i = 1, ..., ℓ do
6:     insert every s[i . . . i + r − 1] into TNi
Algorithm 4.6 Algorithm to generate negative trees
1: procedure NonselfTreesGeneration(A, r, TA)
Input: A set of nonself strings A ⊆ Σ^ℓ, a matching threshold r ∈ {1, . . . , ℓ}; Ar denotes the set
of ring representations of the strings in A.
Output: A set TA of ℓ prefix trees representing the negative trees.
2: for i = 1, ..., ℓ do
3:   create an empty tree TAi
4: for all s ∈ Ar do
5:   for i = 1, ..., ℓ do
6:     insert every s[i . . . i + r − 1] into TAi
In Algorithm 4.7, Leaf(s, T) is a function returning the frequency value associated with a
string s ∈ Σ^r; this value is stored in the leaf of the path labeled by s in tree T. Counters
d1 and d2 count the positions at which s fails to match the positive trees TN and the negative
trees TA, respectively, while d3 counts the positions at which the frequency in the positive tree
is below t1 times the frequency in the negative tree. Four further parameters t1, t2, t3, t4 are
used in PSA2 to control its performance.
Algorithm 4.7 Algorithm PSA2 to detect whether a new data instance s ∈ Σ^ℓ is self or nonself
1: procedure PSA2(N, A, s, r, TN, TA)
Input: A set of self strings N ⊆ Σ^ℓ, a set of nonself strings A ⊆ Σ^ℓ, an unlabeled string
s ∈ Σ^ℓ, a matching threshold r ∈ {1, . . . , ℓ}.
Output: A set TN of ℓ prefix trees representing the positive trees, a set TA of ℓ prefix trees
representing the negative trees, and a label of s (self or nonself).
2: SelfTreesGeneration(N, r, TN)
3: NonselfTreesGeneration(A, r, TA)
4: d1 = d2 = d3 = 0
5: create a string sr as the ring representation of s
6: for i = 1, ..., ℓ do
7:   s′ = sr[i . . . i + r − 1]
8:   if s′ ∉ TNi then
9:     d1 = d1 + 1
10:  if s′ ∉ TAi then
11:    d2 = d2 + 1
12:  if Leaf(s′, TNi) < Leaf(s′, TAi)·t1 then
13:    d3 = d3 + 1
14: if d1 > t2 then output s is nonself
15: else if d2 > t3 then output s is self
16: else if d3·t4 > ℓ then output s is nonself
17: else output s is self
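A compact Python rendering of PSA2's detection loop, assuming the frequency trees have already been built and are represented as per-position dictionaries mapping chunks to frequencies (this dict representation and the names TN, TA are our assumptions):

```python
def psa2_detect(s, TN, TA, ell, r, t1, t2, t3, t4):
    """Sketch of PSA2 detection (Algorithm 4.7). TN[i] / TA[i] map the length-r
    chunk at position i of the ring string to its frequency (0 if absent)."""
    sr = s + s[:r - 1]                    # ring representation (Section 1.3.4)
    d1 = d2 = d3 = 0
    for i in range(ell):
        chunk = sr[i:i + r]
        fn, fa = TN[i].get(chunk, 0), TA[i].get(chunk, 0)
        if fn == 0:                       # chunk absent from the i-th positive tree
            d1 += 1
        if fa == 0:                       # chunk absent from the i-th negative tree
            d2 += 1
        if fn < fa * t1:                  # negative frequency dominates at i
            d3 += 1
    if d1 > t2:
        return "nonself"
    if d2 > t3:
        return "self"
    return "nonself" if d3 * t4 > ell else "self"
```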
5.2 Experiment
5.2.1 Datasets
We use two popular flow-based datasets: NetFlow and UT. As in previous studies, we
select the same 4 features from the NetFlow dataset as the input of experiments 1, 5, 6, and
7; 7 features from the NetFlow dataset as the input of experiments 2 and 3; and 4 features from
the UT dataset as the input of experiment 4.
Table 4.3: Features for NIDS.
Experiment Dataset Feature
1 NetFlow Packets, Octets, Duration, Flags
2, 3 NetFlow Packets, Octets, Duration, Src port,
Dst port, Flags, IP protocol
4 UT Packets, Octets, Duration, Flags
5, 6, 7 NetFlow Packets, Octets, Duration, Flags
5.2.2 Data preprocessing
The preprocessing of the features of the training dataset consists of two steps. The first step
maps every feature to a binary string feature; after this step, complete binary string features are
constructed for both the normal and the anomalous data. The second step concatenates the
binary string features of every flow; after this step, the training dataset contains equal-length
binary strings. For the testing dataset, each feature is converted to a binary string using the
map built from the training data. If a feature value lies outside the range of the corresponding
training feature, its binary form is selected randomly.
Example 4.1. Given a training dataset containing numerical two-feature flows S = {N, A},
where the set of normal data is N = {n1 = (1; 8), n2 = (0.5; 6)} and the set of anomalous data is
A = {a1 = (1; 9), a2 = (1; 6), a3 = (0.5; 9)}.
The value ranges of the first and the second feature are {0.5; 1} and {6; 8; 9}, respectively.
Therefore, we can use one bit (two bits) to encode the first (the second) feature.
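A sketch of this two-step encoding on the data of Example 4.1 (the function names are ours; the random fallback for unseen values follows the text above):

```python
import math, random

def build_codebooks(flows):
    """Map each feature's observed values to fixed-width binary codes."""
    books = []
    for j in range(len(flows[0])):
        values = sorted({f[j] for f in flows})
        width = max(1, math.ceil(math.log2(len(values))))
        books.append({v: format(k, f"0{width}b") for k, v in enumerate(values)})
    return books

def encode(flow, books):
    """Concatenate per-feature codes; unseen values get a random code."""
    parts = []
    for v, book in zip(flow, books):
        width = len(next(iter(book.values())))
        parts.append(book.get(v, format(random.randrange(2 ** width), f"0{width}b")))
    return "".join(parts)

flows = [(1, 8), (0.5, 6), (1, 9), (1, 6), (0.5, 9)]    # N ∪ A from Example 4.1
books = build_codebooks(flows)
assert len(encode((0.5, 6), books)) == 3                # 1 bit + 2 bits
```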