Camera surveillance has attracted considerable research attention because of its wide range of applications. Two central issues are camera hand-off, which keeps an object continuously tracked as it passes through the observation zones of several cameras, and the detection of abnormal movement in video surveillance.
This thesis gives an overview of video surveillance systems and investigates how to find the next observation camera and how to detect abnormalities in a video surveillance system.
Based on surveys and experiments, the dissertation makes the following contributions:
One of the problems to be solved in a multi-camera surveillance system is the appearance and disappearance of an object as it moves from one camera to another; this is called finding the forward camera. Finding the next camera is the most important task in continuously tracking an object across a multi-camera surveillance system.
Many projects address the continuous tracking of an object as it passes between cameras. Most of them focus on establishing the relation between an object seen by one camera and by the forward camera; that is, they compare objects inside the intersection of the cameras' observation zones in a 2D environment.
How should the hand-off time and the forward camera be determined so that tracking remains continuous? Answering this question is the subject of this research. Finding the forward camera involves several tasks: determining the hand-off time, selecting the next camera, and transferring the object. To strengthen the performance of the system, the number of camera changes should be kept to a minimum; this is investigated in detail in Chapter 2.
2. The objectives of the thesis
The thesis focuses on:
First: camera surveillance systems and related work;
Second: camera hand-off techniques in surveillance systems with multiple cameras;
Third: anomaly detection techniques in video surveillance.
3. The new contributions of the thesis
The main results of the thesis are:
A technique for partitioning the static observation region of a camera surveillance system based on the geometric relationship between the cameras' observed regions. By reducing the number of polygon edges, the proposed technique shortens the camera hand-off time in an overlapping system. This technique was published in the Journal of Information Technology and Communications in 2014;
A new camera hand-off technique based on virtual lines, which determines the right time to change cameras by computing the collision of the moving object with a virtual line in a 3D environment. This technique was published in the Vietnamese Journal of Science and Technology in 2013.
A technique for selecting the forward camera based on the movement of objects, which reduces the camera transition time and thereby improves the performance of the system. This technique was presented and published at Fundamental and Applied Information Research (FAIR) 2013.
An anomaly detection technique based on segmenting the representative of each route. The results show that the proposed technique can detect an abnormality before the object has finished its trajectory, i.e. while the object is still in the video, which makes it valuable for real-time surveillance. It was published in the Journal of Informatics and Communication in 2015.
4. Structure of the thesis
The thesis consists of an introduction, a summary, and three main chapters. Chapter 1, "Overview of camera hand-off and anomaly detection in camera surveillance systems," reviews camera surveillance systems and related work. Chapter 2, "Some camera hand-off techniques," proposes techniques for finding the forward camera that reduce the time needed to choose the next camera while tracking an object. Chapter 3, "Trajectory-based anomaly detection in video surveillance," briefly reviews approaches and techniques for detecting abnormalities in video surveillance and proposes a technique based on analyzing the moving trajectory of an object.
CHAPTER 1: OVERVIEW OF CAMERA HAND-OFF AND ANOMALY DETECTION IN CAMERA SURVEILLANCE SYSTEMS
1.1. Camera surveillance system
This section gives a general introduction to camera surveillance and its basic problems.
1.2. Camera hand-off and anomaly detection
This section presents approaches to two problems in camera surveillance: camera hand-off and anomaly detection in video surveillance.
1.3. Summary and research directions
This chapter has presented an overview of camera surveillance systems and related work, together with an introduction to approaches to object tracking in multi-camera systems. The thesis concentrates on two important problems with many applications: camera hand-off and anomaly detection in surveillance systems.
CHAPTER 2: SOME TECHNIQUES FOR HANDLING OBSERVATION REGIONS IN CAMERA HAND-OFF
This chapter presents three proposals addressing two questions: when do we need to find the forward camera, and which camera takes over the tracking job? The proposals aim to reduce the computation needed to choose the forward camera and to enhance the performance of the system.
2.1. Introduction
2.2. Partitioning the observation zone
2.2.1. Introduction
This section presents a technique for dividing the observation zone into non-intersecting parts in a 2D environment (Fig. 2.1b).
Fig 2.1. Some division methods: (a) division in a 1D environment; (b) division in a 2D environment
2.2.2. Intersection of two polygons
Definition 2.1 [Observation polygon]
An observation polygon is the projection of a camera's observed area onto the 2D plane.
Definition 2.2 [Intersection point of two intersecting polygons]
A point is called an intersection point of two polygons A and B if it is the intersection of an edge of A with an edge of B and is not a vertex of either A or B.
Definition 2.3 [Single intersection]
Let A and B be two observation polygons. The intersection of A and B is called a single intersection if it is convex and the remaining part of each of A and B is still a polygon.
Fig 2.2. Types of intersection of two polygons: (a) no intersection; (b) single intersection; (c) intersection
Proposition 2.1
If two observation polygons A and B have a single intersection, the number of intersection points cannot exceed 2.
2.2.3. Dividing the observation zone of a camera surveillance system
2.2.3.1. Dividing the intersection zone of two polygons
Proposition 2.2 [Division of two polygons]
Let A and B be two observation polygons. Their intersection is single if there exist exactly two intersection points. These points form an intersection edge along which A and B can be separated into polygons with the minimum number of edges (Fig. 2.4).
Fig 2.4. Dividing the intersection between two polygons
2.2.3.2. Division of the observed zone in a multi-camera surveillance system
Consider a working installation of n static cameras whose observation zones are known; the observation polygons overlap pairwise with single intersections. We divide the observation zone of the system into a set of observation polygons, one per camera, that do not intersect one another.
Function partitionTwoPolygon: partition two intersecting polygons so that the number of edges of each polygon after separation is minimal.
Input: A = (A[1], A[2], ..., A[n]); B = (B[1], B[2], ..., B[m]); with vertices A[i], B[j].
Output: polygons X and Y satisfying A ∪ B = X ∪ Y, in which X ∩ Y = ∅; X ⊆ A; Y ⊆ B;
Pseudocode
partitionTwoPolygon (A, B: polygon)
{ Find the difference P = (P[1], P[2], ..., P[t]) = A \ B;
Find the intersection points of A and B: P[h], P[k] (h < k < t);
For i = 1 to h: X = X ∪ P[i];
For i = k to t: X = X ∪ P[i];
Y = B − X;
A = X; B = Y; }
PartitionFOV Algorithm
Input: observation zones P = {P[1], P[2], ..., P[n]} (n an integer), where P[i] = {V1, V2, ..., Vt} with vertices Vk(xk, yk) sorted clockwise.
Output: Q = (Q[1], Q[2], ..., Q[n]) satisfying ⋃_{i=1..n} P[i] = ⋃_{i=1..n} Q[i], in which Q[i] ∩ Q[j] = ∅ (∀ i ≠ j, i, j ∈ 1..n) and Q[i] ⊆ P[i] (∀ i ∈ 1..n).
Pseudocode
Read the observed zones of the n cameras: P[i] (i = 1..n);
Q[] = {0}; Q[1] = P[1]; i = 1; j = 1;
While (i < n) {
i = i + 1; T = P[i]; k = 1;
While (k <= j) {
if (Q[k] intersects T)
partitionTwoPolygon(Q[k], T);
k = k + 1; }
j = j + 1; Q[j] = T; }
Computational complexity:
With an observation zone of n cameras, at step i the partitionTwoPolygon function is executed (i − 1) times. In total the number of executions is 1 + 2 + ... + (n − 1) = n(n − 1)/2 ≈ n²/2, so the computational complexity of the PartitionFOV algorithm is O(n²).
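The core geometric operation inside partitionTwoPolygon is forming the intersection of two convex observation polygons. A minimal sketch of this building block, using the standard Sutherland-Hodgman clipping algorithm rather than the thesis's own implementation (function names are illustrative, and general position of the edges is assumed):

```python
def clip_polygon(subject, clip):
    """Intersection of a subject polygon with a convex clip polygon
    (Sutherland-Hodgman). Both are lists of (x, y) vertices in
    counter-clockwise order; returns the intersection polygon (maybe empty)."""
    def inside(p, a, b):
        # p lies on the left of the directed edge a->b (CCW clip polygon)
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p, q, a, b):
        # Intersection of segment p-q with the infinite line through a-b;
        # only called when p and q straddle the line, so denom is nonzero
        dx, dy = q[0] - p[0], q[1] - p[1]
        ex, ey = b[0] - a[0], b[1] - a[1]
        denom = dx * ey - dy * ex
        t = ((a[0] - p[0]) * ey - (a[1] - p[1]) * ex) / denom
        return (p[0] + t * dx, p[1] + t * dy)

    output = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        input_list, output = output, []
        if not input_list:
            break                         # nothing left to clip
        s = input_list[-1]
        for e in input_list:
            if inside(e, a, b):
                if not inside(s, a, b):
                    output.append(intersect(s, e, a, b))
                output.append(e)
            elif inside(s, a, b):
                output.append(intersect(s, e, a, b))
            s = e
    return output
```

For example, clipping the square (0,0)-(2,2) against the square (1,1)-(3,3) yields the unit square (1,1)-(2,2), the overlapped region that PartitionFOV must then assign to a single camera.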
2.2.4. Experiment
The thesis implemented the proposed algorithm in Matlab R2010a; the input is the set of observation zones of the camera system that Yi Yao implemented following Erdem (Fig. 2.6b). The experiment divides the observation zone of the cameras into non-intersecting parts, each of which is assigned to one camera.
With the proposed technique, coverage remains maximal while the observation zone is divided into non-intersecting parts. Finding the forward camera becomes more efficient when virtual lines are combined with this technique. This work was published in the Journal of Information and Communication in 2014.
Fig. 2.6. Division of the observation zone in a camera surveillance system: (a) monitoring site plan; (b) Yi Yao's plan; (c) overlapped zones and polygon edges in Yi Yao's plan; (d) result of the proposed algorithm
2.3. Finding the next camera based on virtual lines
2.3.1. Virtual lines
In the intersection of the cameras' observation zones, virtual lines are built to delimit the observation zone of each camera: whenever the object touches a virtual line, the camera change is started. To improve the accuracy of the hand-off time, the thesis computes the collision of the object with the virtual line in a 3D environment instead of 2D; the tracked object and the virtual line are modeled as 3D boxes.
Fig.2.11. Moving object and virtual line in 3D environment
2.3.2. Calculating the collision of an object with a virtual line
This section presents the calculations needed to check the collision of an object with a virtual line in a 3D environment.
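Since the object and the virtual line are both modeled as 3D boxes, a natural collision check is the interval-overlap test for axis-aligned boxes. A minimal sketch, assuming boxes are given by their min/max corners (the thesis's exact computation may differ, e.g. for oriented boxes):

```python
def boxes_collide(box_a, box_b):
    """Axis-aligned 3D box overlap test.

    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax)).
    Two boxes overlap iff their intervals overlap on all three axes.
    """
    (amin, amax), (bmin, bmax) = box_a, box_b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

# Illustrative usage: a person's bounding box touching a thin virtual-line box
person = ((0.0, 0.0, 0.0), (0.6, 0.4, 1.8))
virtual_line = ((0.5, -5.0, 0.0), (0.55, 5.0, 3.0))
touching = boxes_collide(person, virtual_line)  # True: x-intervals overlap
```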
2.3.3. Proposed technique
2.3.3.1. System modeling
The system model is shown in Figure 2.16.
Fig 2.16. System structure
2.3.3.2. Algorithm
See figure 2.17.
Fig 2.17. Flowchart of the system
2.3.4. Experiment
A demo of the virtual-line technique was developed in Visual C++ 2008 using the OpenCV library. The input is three video streams connected directly to three cameras in a room with overlapping observation zones (see Fig. 2.18).
Figure 2.19 shows forwarding between cameras: a person moves, collides with the virtual line (red line), and appears in the observation zone of the forwarded camera together with the object's index and the next camera.
Fig 2.18. Camera site plan
Fig 2.19. Forwarding between two cameras: (a) between camera 1 and camera 2; (b) between camera 2 and camera 3
The experimental results show that the calculation in the 3D environment is more accurate than in the 2D environment. The proposed technique was published in the Journal of Science and Technology, VAST, in 2013.
2.4. Finding the camera based on the moving direction of an object
2.4.1. Predicting the position and moving direction of an object
In this part, the thesis uses a Kalman filter to predict the position and moving direction of an object.
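A minimal sketch of this prediction step: a constant-velocity Kalman filter applied per coordinate, one common way to realize position and direction prediction. The thesis does not specify its filter parameters, so the noise values below are illustrative assumptions:

```python
class CVKalman1D:
    """Minimal constant-velocity Kalman filter for one coordinate.

    State is [position, velocity]; running one filter per axis gives the
    predicted 2D position and motion direction of a tracked object.
    """
    def __init__(self, pos, q=1e-2, r=1.0):
        self.x = [pos, 0.0]                 # state estimate [p, v]
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # covariance
        self.q, self.r = q, r               # process / measurement noise (assumed)

    def predict(self, dt=1.0):
        p, v = self.x
        self.x = [p + dt * v, v]            # x = F x, F = [[1, dt], [0, 1]]
        P = self.P
        # P = F P F^T + Q (written out elementwise for the 2x2 case)
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        # Position-only measurement: H = [1, 0]
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        y = z - self.x[0]                             # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        P = self.P
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
```

Feeding it positions 0, 1, ..., 9 drives the velocity estimate toward 1 and predicts the next position near 10; the sign of the estimated velocities along x and y gives the moving direction used in Section 2.4.3.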
2.4.2. Expressing the relations between the observation parts of the system
The thesis uses an adjacency list to express the relations between the observation parts of the system.
2.4.3. Algorithm to choose a camera based on moving direction
Let A(x_t1, y_t1) and B(x_t2, y_t2) be the positions of the object in Cartesian coordinates at times t1 and t2. To reduce the number of camera changes, our tactic is to maximize the time the object remains inside a single camera's observation zone. The thesis builds the line through A and B and finds its intersections with the edges of the observation polygons; the intersection point C must lie beyond B as seen from A. Let Dj be the length of BC inside camera j; the selected camera is the one with the largest Dj.
Function findIntersectPolygon: find the intersection point C of line AB with an edge of polygon P.
Input: P = (P[1], P[2], ..., P[n]); points A, B.
Output: point C, the intersection of line AB with an edge of P, satisfying |AB| + |BC| = |AC|.
Pseudocode:
findIntersectPolygon (P: polygon; A, B: point)
{ Create the equation of line AB;
For i = 1 to n do {
C = intersection point of AB with edge (P[i], P[i+1]);
If (|AB| + |BC| = |AC|) return C; } }
Proposed algorithm:
Input: Q = (Q[1], Q[2], ..., Q[n]): observation polygons;
object position at t1: A(x_t1, y_t1);
predicted object position at t2: B(x_t2, y_t2);
graph G = (V, E) given as adjacency lists Ke(i);
index i of the camera currently tracking the object.
Output: index t of the forwarded camera.
Pseudocode:
k = 0; t = 0;
C = findIntersectPolygon(Q[Ke(i)[k]], A, B);
dmax = |BC|; t = Ke(i)[k];
while (k < length(Ke(i))) do {
k++;
C = findIntersectPolygon(Q[Ke(i)[k]], A, B);
if (dmax < |BC|) {
dmax = |BC|;
t = Ke(i)[k]; } }
Computational complexity: with an observation zone of n cameras, when the object leaves camera i we traverse the adjacency list Ke(i) to find a suitable camera, calling findIntersectPolygon to intersect AB with the edges of the observation polygons. The computational complexity is therefore O(n²).
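The selection rule above can be sketched as follows. The |AB| + |BC| = |AC| condition is expressed here as t ≥ 1 along the parameterized line A + t·(B − A), with B at t = 1; names are illustrative and convex observation polygons are assumed, not the thesis code:

```python
import math

def _cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def ray_exit(A, B, p, q):
    """Intersection C of the line AB with segment p-q, kept only when C lies
    beyond B as seen from A (the |AB| + |BC| = |AC| condition, i.e. t >= 1)."""
    d1 = (B[0] - A[0], B[1] - A[1])
    d2 = (q[0] - p[0], q[1] - p[1])
    denom = _cross(d1, d2)
    if abs(denom) < 1e-12:
        return None                      # parallel edge: no single hit point
    w = (p[0] - A[0], p[1] - A[1])
    t = _cross(w, d2) / denom            # position along A->B (B at t = 1)
    u = _cross(w, d1) / denom            # position along p->q
    if t < 1.0 or not (0.0 <= u <= 1.0):
        return None
    return (A[0] + t * d1[0], A[1] + t * d1[1])

def find_forward_camera(A, B, polygons, neighbors):
    """Choose the neighbor camera j maximizing D_j = |BC|, where C is the
    point at which the predicted motion line leaves polygon j."""
    best, best_d = None, -1.0
    for j in neighbors:
        poly = polygons[j]
        for i in range(len(poly)):
            C = ray_exit(A, B, poly[i], poly[(i + 1) % len(poly)])
            if C is not None:
                d = math.hypot(C[0] - B[0], C[1] - B[1])
                if d > best_d:
                    best_d, best = d, j
    return best
```

With A = (0, 0), B = (1, 0) and two rectangular zones extending to x = 2 and x = 5, the object can stay longest in the second zone, so camera 2 is selected.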
2.4.4. Experiment
Figure 2.24 illustrates the result of the algorithm; the input is the set of observation zones of the cameras deployed by Eduardo Monari. The results show that the time to change cameras in a system with overlapping zones is reduced.
Fig 2.24. Result of camera selection
The proposed technique was presented and published at Fundamental and Applied Information Research, FAIR 2013.
2.5. Summary of Chapter 2
To answer the questions "When is a camera change needed?" and "Which camera will be next?" in finding the forward camera, this chapter proposed three techniques. They concentrate on reducing the computation needed to find the next camera, thereby strengthening the performance of the surveillance system.
First: a technique for partitioning the fixed surveillance cameras based on single intersections, dividing the observation zone of the camera system into non-intersecting regions while keeping the number of polygon edges after partitioning small, thereby reducing the transition computation when objects move in the overlapping zone of the cameras.
Second: a technique to determine the hand-off time by computing the collision of the object with a virtual line in a 3D environment. The results show that the proposed technique increases accuracy.
Third: a technique to find the next camera based on the moving direction, which helps reduce the number of camera changes when an object passes through the camera observation zones in an overlapping (OVL) system.
CHAPTER 3: DETECTING ABNORMALITIES BASED ON THE OBJECT'S TRAJECTORY IN VIDEO SURVEILLANCE
This chapter presents some approaches to detecting abnormalities in video surveillance and then proposes a technique based on the moving trajectory of an object.
3.1. Introduction
3.1.1. Approaches based on video stream image analysis
Approaches in this group analyze the video stream with image processing, manipulating the motion pictures obtained from object detection and then combining probabilistic models, clustering, and statistics to detect abnormalities.
3.1.2. Approaches based on trajectory analysis
Approaches based on clustering trajectories follow the workflow in Figure 3.1. Most proposed anomaly detection algorithms work on finished trajectories, meaning that all points of the trajectory must be available before it can be classified as abnormal or not. This is a real obstacle in automatic surveillance applications, which require real-time operation.
Fig 3.1. Abnormality detection based on trajectory clustering
3.2. Some concepts and definitions in the proposed model
Definition 3.1 [Moving trajectory]
The moving trajectory of an object O is the list of positions of O at successive times t1, t2, ..., tn, written O = {t1, t2, ..., tn}.
Definition 3.2 [Similarity between two trajectories]
Given two trajectories A = {a1, a2, ..., an} and B = {b1, b2, ..., bm}, the similarity between A and B is h(A, B):
h(A, B) = max{d(A, B), d(B, A)} (3.1)
where d(A, B) and d(B, A) are computed as:
d(A, B) = max{d(ai, B) : ai ∈ A} (3.2)
d(B, A) = max{d(bi, A) : bi ∈ B} (3.3)
with d(ai, B) and d(bi, A) given by:
d(ai, B) = min{d(ai, bj) : bj ∈ B} (3.4)
d(bi, A) = min{d(bi, aj) : aj ∈ A} (3.5)
in which d(ai, bj) is:
d(ai, bj) = de(ai, bj) + γ·do(ai, bj) (3.6)
where de(ai, bj) is the Euclidean distance between ai and bj:
de(ai, bj) = √((xi^a − xj^b)² + (yi^a − yj^b)²) (3.7)
and do(ai, bj) is defined from the velocities v_ai at ai and v_bj at bj:
do(ai, bj) = 1 − (v_ai · v_bj) / (|v_ai|·|v_bj|) (3.8)
where the velocities at ai and bj are:
v_ai = (xi^a − x(i−1)^a, yi^a − y(i−1)^a) (3.9)
v_bj = (xj^b − x(j−1)^b, yj^b − y(j−1)^b) (3.10)
γ is a parameter adjusting the weight of the moving direction.
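Equations (3.1)-(3.10) can be implemented directly. A sketch, with two assumptions made explicit: velocities at the first trajectory point are undefined by (3.9)/(3.10), so the first displacement is reused there, and the direction term is skipped when a velocity is zero:

```python
import math

def similarity(A, B, gamma=1.0):
    """Similarity h(A, B) between two trajectories (lists of (x, y) points):
    a symmetric Hausdorff-style maximum (eqs. 3.1-3.5) over a point distance
    mixing Euclidean distance (3.7) and a direction term (3.8)."""
    def velocity(T, i):
        # Backward difference (3.9)/(3.10); the first point reuses the first
        # step, an assumption since the formula is undefined at i = 0
        j = max(i, 1)
        return (T[j][0] - T[j-1][0], T[j][1] - T[j-1][1])

    def point_dist(T1, i, T2, j):
        de = math.hypot(T1[i][0] - T2[j][0], T1[i][1] - T2[j][1])
        v1, v2 = velocity(T1, i), velocity(T2, j)
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        # Direction term 1 - cos(angle); treated as 0 for zero velocities
        do = 0.0 if n1 == 0 or n2 == 0 else 1 - (v1[0]*v2[0] + v1[1]*v2[1]) / (n1 * n2)
        return de + gamma * do

    def directed(T1, T2):
        # d(T1, T2): max over points of T1 of the distance to nearest T2 point
        return max(min(point_dist(T1, i, T2, j) for j in range(len(T2)))
                   for i in range(len(T1)))

    return max(directed(A, B), directed(B, A))
```

Two identical trajectories have similarity 0; a trajectory shifted by one unit with the same heading has similarity 1, coming entirely from the Euclidean term.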
Definition 3.3 [Linking relation Qθ]
Given a threshold θ, trajectories U, V ∈ T (the set of trajectories) are said to be in the linking relation, denoted Qθ(U, V), if there exists a list of trajectories O1, O2, ..., On such that:
(i). U ≡ O1
(ii). V ≡ On
(iii). h(Oi, Oi+1) < θ, ∀ i, 1 ≤ i ≤ n − 1
Proposition 3.1:
The linking relation Qθ is an equivalence relation.
The thesis proves that Qθ is reflexive, symmetric, and transitive.
Concept: Route
Since the linking relation Qθ between trajectories is an equivalence relation, it classifies trajectories into equivalence classes. From now on we call each equivalence class a route. Trajectories in one route pass through similar points.
Definition 3.4 [Representative of a route]
Let route R = {O1, O2, ..., Ok}. The representative of R is P = {pi} (i = 1..n), defined by:
pi = (1/k) Σ_{j=1..k} Oj[ti] (3.11)
where k is the number of trajectories belonging to route R and n is the length of a trajectory.
Definition 3.5 [Width of a route]
Given route R = {O1, O2, ..., Ok} with representative P = {pi} (i = 1..n), the width of R, denoted hR, is defined by:
hR = max_{i=1..k} {h(Oi, P)} (3.12)
Definition 3.6 [Trajectory abnormal with respect to a route]
Let P = {pi} be the representative of route R = {O1, O2, ..., Ok} and let T* = {t1, t2, ..., tn} be a trajectory. T* is called abnormal with respect to R if h(T*, P) > hR.
Concept of abnormality
In a zone monitored by a camera, objects (people) travel along normal trajectories that form certain groups (routes). In this thesis, an object with abnormal behavior is an object whose trajectory does not belong to any route; in other words, its trajectory is abnormal with respect to all routes.
3.3. Trajectory clustering
The thesis uses the rate of speed change for clustering. A clustering point is a point where the rate rate(vi) exceeds a threshold ϑ:
rate(vi) = min((vi^x − v(i−1)^x) / v(i−1)^x, (vi^y − v(i−1)^y) / v(i−1)^y) (3.13)
where vi^x and vi^y are the speeds along the x and y directions, computed as the displacement between two consecutive points sampled at the same rate: vi^x = xi − x(i−1) and vi^y = yi − y(i−1).
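A sketch of the clustering-point test of equation (3.13). The absolute value and the zero-speed guard are assumptions added here, since the formula as stated can divide by zero or change sign with the direction of motion:

```python
def clustering_points(traj, theta=0.5):
    """Indices of trajectory points where the relative speed change rate(v_i)
    of eq. (3.13) exceeds the threshold theta.

    Speeds are frame-to-frame displacements: v_i^x = x_i - x_{i-1}.
    """
    def rate(i):
        # v_{i-1} and v_i from consecutive displacements
        vx1, vy1 = traj[i-1][0] - traj[i-2][0], traj[i-1][1] - traj[i-2][1]
        vx2, vy2 = traj[i][0] - traj[i-1][0], traj[i][1] - traj[i-1][1]
        # abs() and the zero guard are assumptions, not in the original formula
        rx = abs(vx2 - vx1) / abs(vx1) if vx1 else float('inf')
        ry = abs(vy2 - vy1) / abs(vy1) if vy1 else float('inf')
        return min(rx, ry)
    return [i for i in range(2, len(traj)) if rate(i) > theta]
```

On a trajectory whose speed doubles midway, e.g. x = 0, 1, 2, 4, 8 along the diagonal, the rate jumps from 0 to 1 at the speed change, marking the clustering points.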
Theorem 3.1 [Abnormality detection based on sub-trajectories]
Let P = {p1, p2, ..., pn} be the representative of route R, and let seg = {seg1, seg2, ..., segu} be the segmentation points of P (1 < u < n). Let T* be the trajectory being checked. If T* is abnormal with respect to sub-trajectory i (1 ≤ i ≤ u), then T* is abnormal with respect to every sub-trajectory l with i < l ≤ u.
3.4. Abnormality detection based on route clustering
In this part, the thesis proposes a two-phase technique for detecting abnormalities in video surveillance based on route clustering (Figure 3.5).
Fig 3.5. Workflow for detecting abnormalities based on route clustering
First phase: initialization
Notation:
R = {R1, R2, ..., Rk}: the set of normal routes;
ri: the number of trajectories of Ri (1 ≤ i ≤ k);
Oj^i: trajectory j of route Ri (1 ≤ i ≤ k, 1 ≤ j ≤ ri);
P = {P1, P2, ..., Pk}: the set of route representatives, with Pi the representative of Ri;
SOj^i: sub-trajectory j of representative Pi, SOj^i = {Pi(p1, p2, ..., p_segj)}.
Step 1: group the trajectories belonging to the same route.
Step 2: build a representative for each route.
Step 3: compute the threshold dmax:
dmax = min_{i=1..k} { max_{j=1..ri} { h(Oj^i, Pi) } } (3.14)
Step 4: segment the representative of each route.
Second phase: detect abnormalities based on the route representatives.
Algorithm: Abnormality Detection Based on Sub-Trajectories of Routes (ADB-STR)
Input:
umax: the maximum number of sub-trajectories over all routes;
k: the number of routes;
dmax: threshold;
{SOj^i} (i = 1..k; j = 1..umax): the set of sub-trajectories;
T*: the trajectory to check.
Output: abnormal or not.
Pseudocode:
j = 1; Abnormal = false;
While (j ≤ umax and Abnormal = false) do
d = min_{i=1..k} ( h(T*, SOj^i) );
if (d > dmax) then Abnormal = true;
j = j + 1;
End while;
Computational complexity: for each j (j ≤ umax) we must find d, the smallest similarity between the trajectory T* and the sub-trajectories of the k routes. In general the number of distance computations is umax × umax × k, so the computational complexity of ADB-STR is O(umax² × k).
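The ADB-STR loop can be sketched as follows. The Hausdorff helper below stands in for the full similarity h of Definition 3.2 (the direction term is omitted in this sketch), and the data layout is an assumption:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point lists; a simplified
    stand-in for the similarity h of Definition 3.2."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

def adb_str(T_star, sub_orbits, d_max, h=hausdorff):
    """ADB-STR: report abnormality as soon as the tested (possibly
    unfinished) trajectory is farther than d_max from every route.

    sub_orbits[j] holds the j-th sub-trajectories SO_j^i of all k route
    representatives; scanning j = 1..u_max lets detection fire before the
    object completes its trajectory.
    """
    for level in sub_orbits:                    # j = 1 .. u_max
        d = min(h(T_star, so) for so in level)  # closest route at level j
        if d > d_max:
            return True                         # abnormal w.r.t. all routes
    return False
```

For example, with two route prefixes along y = 0 and y = 5 and d_max = 0.5, a partial trajectory at y = 2.5 is flagged abnormal at the first level, while one at y = 0.2 is accepted.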
3.5. Experiment
To verify the proposed technique, experiments were run on the trajectory database built by Piciarelli in 2008 and on data collected from a surveillance camera. The experimental results show that the proposed technique can detect an abnormal event before the object has finished its trajectory, which is very useful when the system is applied in real time.
3.6. Summary of Chapter 3
In this chapter, the thesis proposed a technique to detect abnormalities based on segmenting the route representatives. The proposed technique relies on the influence of each route on the object: by combining the similarity measure with the segmentation of route representatives, it can detect an abnormal event before the object has finished its trajectory. This work was published in the Journal of Information and Communication in 2015.
THESIS SUMMARY
Camera surveillance has attracted considerable research attention because of its wide range of applications. Two central issues are camera hand-off, which keeps an object continuously tracked as it passes through camera observation zones, and the detection of abnormal movement in video surveillance.
The thesis has given an overview of video surveillance systems and investigated how to find the next observation camera and how to detect abnormalities in a video surveillance system.
Based on surveys and experiments, the dissertation has contributed the following:
A technique for partitioning the static observation region of a camera surveillance system based on the geometric relationship between the cameras' observed regions; by reducing the number of polygon edges, the technique shortens the camera hand-off time in an overlapping system.
A new technique to find the next camera based on virtual lines, which determines the right time to change cameras by computing the collision of the moving object with a virtual line in a 3D environment.
A technique to select the forward camera based on the movement of objects, reducing the camera transition time and thereby improving system performance.
An anomaly detection technique based on segmenting the representative of each route; the proposed technique can detect an abnormality before the object has finished its trajectory, i.e. while the object is still in the video, which is a real help for real-time surveillance.
Further research issues:
Research on trajectory-based anomaly detection, using image processing on the moving trajectory of an object after tracking it.
Application of the research results to specific problems.
LIST OF PUBLICATIONS RELATED TO THE THESIS
1. Ngô Đức Vĩnh, Đỗ Năng Toàn, Hà Mạnh Toàn (2010), “Một tiếp
cận trong phát hiện mặt người dưới sự trợ giúp của camera”, Tạp
chí Khoa học và Công nghệ, ĐH Công nghiệp Hà Nội, ISSN 1859
– 3585, số 3.2010, tr. 20 – 24.
2. Ngô Đức Vĩnh, Đỗ Năng Toàn (2013). “Một cách tiếp cận mới
giải quyết việc chuyển tiếp các camera trong hệ thống giám sát tự
động”, Tạp chí Kh