RSSI fingerprinting based IPT can use a deterministic method (D-RSSIF-IPT) or a probability method (P-RSSIF-IPT). Compared with D-RSSIF-IPT, P-RSSIF-IPT has a lower positioning error because its database can cover the variation of RSSI. P-RSSIF-IPT can use a non-parametric model (e.g., a histogram) or a parametric model (e.g., a Gaussian process or a GMM) to model the distribution of Wi-Fi RSSIs. P-RSSIF-IPT using a parametric model has lower positioning errors, and its database stores fewer parameters than P-RSSIF-IPT using a non-parametric model.
1.2. Theoretical studies about the available RSSIF-IPT
The distribution of Wi-Fi RSSIs can be fitted by a Gaussian process or by a GMM when data are collected under changing conditions (e.g., doors opening or closing, the movement of people). Under such conditions, the GMM can model the Wi-Fi RSSI distribution more accurately than the Gaussian process.
However, some data samples may not be observable for either of the following reasons:
- Censoring (i.e., clipping): sensors are unable to measure RSSI values below some threshold, such as −100 dBm.
- Dropping: occasionally the RSSI measurements of some access points are not available, although their values are clearly above the censoring threshold.
While censoring occurs because of the limited sensitivity of Wi-Fi sensors on portable devices, dropping comes from limitations of the sensor drivers and the operation of the WLAN system.
According to our data investigation, the data set (Wi-Fi RSSIs) collected at an RP from an AP has the characteristics of one of the following eight cases:
(1) The data are drawn from one Gaussian component and are fully observable;
(2) The data are drawn from one Gaussian component; part of the data set is unobservable due to censoring;
(3) The data are drawn from one Gaussian component; part of the data set is unobservable due to dropping;
(4) The data are drawn from one Gaussian component; part of the data set is unobservable due to censoring and dropping;
(5) The data are drawn from more than one Gaussian component and are fully observable;
(6) The data are drawn from more than one Gaussian component; part of the data set is unobservable due to censoring (figure 1.10a);
(7) The data are drawn from more than one Gaussian component; part of the data set is unobservable due to dropping (figure 1.10b);
(8) The data are drawn from more than one Gaussian component; part of the data set is unobservable due to censoring and dropping (figure 1.10c).
Figure 1.10. Histograms of Wi-Fi RSSIs: (a) censoring; (b) dropping; (c) censoring and dropping
Published works have solved data sets with the characteristics of cases (1)-(5). However, no study has been able to solve data sets with the characteristics of cases (6)-(8). For this reason, the thesis focuses on researching and proposing solutions that develop RSSIF-IPT to solve the censoring, dropping, and multi-component problems simultaneously (cases (6)-(8)).
1.3. Conclusion of chapter 1
In this chapter, the thesis presented the available Wi-Fi based indoor positioning techniques. Chapter 1 also summarized and analyzed related works on RSSIF-IPT. Based on these works and the issues that remain unsolved for RSSIF-IPT, the thesis set out its scientific research goals.
CHAPTER 2. GMM PARAMETER ESTIMATION IN THE
PRESENCE OF CENSORED AND DROPPED DATA
2.1. Motivation
In an indoor environment, the data set (Wi-Fi RSSIs) collected at an RP from an AP can be modeled by a GMM with $J$ Gaussian components ($J$ is a finite number). Let $y_n$ be the RSSI value gathered at the $n$-th time ($1 \le n \le N$), where $N$ is the number of measurements; the $y_n$ are independent and identically distributed random variables. In a GMM, the PDF (Probability Density Function) of an observation $y_n$ is:

$$p(y_n; \Theta) = \sum_{j=1}^{J} w_j\,\varphi(y_n; \theta_j), \qquad (2.1)$$

where $\Theta$ is the set of parameters of the GMM, and $w_j$ and $\theta_j$ are the mixing weight and the parameters of the $j$-th Gaussian component.
Let $\mathbf{y} = \{y_1, y_2, \ldots, y_N\}$ be the set of complete data, i.e., the non-censored, non-dropped data; let $c$ be the specific threshold below which a portable device (e.g., a smartphone) does not report the signal strength; and let $\mathbf{x} = \{x_1, x_2, \ldots, x_N\}$ be the set of observed data, which may be censored or dropped (incomplete data). The censoring problem can be presented as follows:

$$x_n = \begin{cases} y_n & \text{if } y_n > c \\ c & \text{if } y_n \le c \end{cases}, \qquad 1 \le n \le N. \qquad (2.4)$$
Let $\mathbf{d} = \{d_1, d_2, \ldots, d_N\}$ be the set of hidden binary variables indicating whether an observation $y_n$ is dropped ($d_n = 1$) or not ($d_n = 0$). The dropping problem can be presented as follows:

$$x_n = \begin{cases} y_n & \text{if } d_n = 0 \\ c & \text{if } d_n = 1 \end{cases}, \qquad 1 \le n \le N. \qquad (2.5)$$
If data are unobservable owing to both the censoring and dropping problems, then:

$$x_n = \begin{cases} y_n & \text{if } y_n > c \text{ and } d_n = 0 \\ c & \text{if } y_n \le c \text{ or } d_n = 1 \end{cases}, \qquad 1 \le n \le N. \qquad (2.6)$$

The motivation of this chapter is GMM parameter estimation from the incomplete data $\mathbf{x}$.
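As a quick illustration, the observation models (2.4)-(2.6) can be simulated to produce synthetic incomplete data; the mixture parameters, the threshold c = −100 dBm, and the dropping probability below are illustrative assumptions, not values fixed by the thesis:

```python
import random

def sample_gmm(weights, means, sigmas, rng):
    """Draw one complete-data sample y_n from a 1-D GMM, per (2.1)."""
    j = rng.choices(range(len(weights)), weights=weights)[0]
    return rng.gauss(means[j], sigmas[j])

def observe(y, rng, c=-100.0, eps=0.15):
    """Apply (2.6): report c when y_n <= c (censored) or d_n = 1 (dropped)."""
    dropped = rng.random() < eps        # d_n = 1 with probability eps
    return c if (dropped or y <= c) else y

rng = random.Random(0)
weights, means, sigmas = [0.6, 0.4], [-80.0, -95.0], [3.0, 4.0]
x = [observe(sample_gmm(weights, means, sigmas, rng), rng) for _ in range(1000)]
print(sum(v == -100.0 for v in x), "of", len(x), "samples report only the threshold c")
```

Note that censored and dropped samples are indistinguishable in the resulting incomplete data set: both appear as the threshold value c.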
2.2. Introduction to the EM algorithm
The EM (Expectation-Maximization) algorithm is an iterative method for ML (Maximum Likelihood) estimation of the parameters of statistical models in the presence of hidden variables. It can be used to estimate the parameters of a GMM and consists of two steps:
- E-step: computes the expectation of the log-likelihood evaluated using the current parameter estimate.
- M-step: computes the parameters maximizing the expected log-likelihood found in the E-step.
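For reference, the two steps can be sketched for a one-dimensional GMM on complete (fully observed) data; this is a generic textbook EM iteration, not the censoring- or dropping-aware variants developed below, and all numeric settings are illustrative:

```python
import math, random

def gauss_pdf(y, mu, sigma):
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_step(x, w, mu, sigma):
    """One EM iteration for a 1-D GMM on complete data."""
    J, N = len(w), len(x)
    # E-step: responsibilities gamma[n][j] proportional to w_j * phi(x_n; theta_j)
    g = []
    for xn in x:
        p = [w[j] * gauss_pdf(xn, mu[j], sigma[j]) for j in range(J)]
        s = sum(p)
        g.append([pj / s for pj in p])
    # M-step: closed-form updates of weights, means and standard deviations
    Nj = [sum(g[n][j] for n in range(N)) for j in range(J)]
    w = [Nj[j] / N for j in range(J)]
    mu = [sum(g[n][j] * x[n] for n in range(N)) / Nj[j] for j in range(J)]
    sigma = [math.sqrt(sum(g[n][j] * (x[n] - mu[j]) ** 2 for n in range(N)) / Nj[j])
             for j in range(J)]
    return w, mu, sigma

rng = random.Random(1)
x = [rng.gauss(-75, 3) for _ in range(300)] + [rng.gauss(-88, 2) for _ in range(300)]
w, mu, sigma = [0.5, 0.5], [-70.0, -95.0], [5.0, 5.0]
for _ in range(50):
    w, mu, sigma = em_step(x, w, mu, sigma)
print(sorted(round(m, 1) for m in mu))
```

On this two-cluster synthetic data the estimated means settle near the true values of −88 and −75 dBm.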
2.3. GMM parameter estimation in the presence of censored data
The EM algorithm for GMM parameter estimation in the presence of censored data (EM-C-GMM) [CT3] is developed as follows.
Let $\Delta_{nj}$ ($1 \le n \le N$, $1 \le j \le J$) be latent variables with $\Delta_{nj} = 1$ if $y_n$ belongs to the $j$-th Gaussian component and $\Delta_{nj} = 0$ otherwise. The expectation of the log-likelihood function (LLF) of $\mathbf{y}$, given the observations $\mathbf{x}$ and the parameters estimated at the $k$-th iteration, is calculated in the E-step:

$$Q(\Theta; \Theta^{(k)}) = \mathbb{E}\!\left[\ln p(\mathbf{y}, \boldsymbol{\Delta}; \Theta) \,\middle|\, \mathbf{x}; \Theta^{(k)}\right] = \sum_{n=1}^{N} \sum_{j=1}^{J} \int \left[\ln w_j + \ln \varphi(y_n; \theta_j)\right] p(y_n, \Delta_{nj} = 1 \mid x_n; \Theta^{(k)})\, \mathrm{d}y_n. \qquad (2.17)$$
The function $Q(\Theta; \Theta^{(k)})$ is evaluated for the two cases $x_n = y_n$ and $x_n = c$, which yields:

$$Q(\Theta; \Theta^{(k)}) = \sum_{n=1}^{N} \sum_{j=1}^{J} (1 - z_n)\, \gamma(x_n; \theta_j^{(k)}) \left[\ln w_j + \ln \varphi(x_n; \theta_j)\right] + \sum_{n=1}^{N} \sum_{j=1}^{J} z_n\, \beta_j^{(k)} \frac{1}{I_{0j}^{(k)}} \int_{-\infty}^{c} \varphi(y; \theta_j^{(k)}) \left[\ln w_j + \ln \varphi(y; \theta_j)\right] \mathrm{d}y. \qquad (2.19)$$

In equation (2.19), $z_n$ ($1 \le n \le N$) are binary variables indicating whether $x_n$ is unobservable ($z_n = 1$, $x_n = c$) or observable ($z_n = 0$, $x_n = y_n$).
The notations $\gamma(x_n; \theta_j^{(k)})$, $\beta_j^{(k)}$, and $I_{0j}^{(k)}$ are given as follows:

$$\gamma(x_n; \theta_j^{(k)}) = \frac{w_j^{(k)}\, \varphi(x_n; \theta_j^{(k)})}{\sum_{j'=1}^{J} w_{j'}^{(k)}\, \varphi(x_n; \theta_{j'}^{(k)})}; \qquad (2.20)$$

$$\beta_j^{(k)} = \frac{w_j^{(k)}\, I_{0j}^{(k)}}{\sum_{j'=1}^{J} w_{j'}^{(k)}\, I_{0j'}^{(k)}}; \qquad (2.21)$$

$$I_{0j}^{(k)} = \int_{-\infty}^{c} \varphi(y; \theta_j^{(k)})\, \mathrm{d}y = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\mu_j^{(k)} - c}{\sqrt{2}\,\sigma_j^{(k)}}\right). \qquad (2.22)$$
M-step:
The re-estimated parameters at the $(k+1)$-th iteration are obtained by computing the partial derivatives of $Q(\Theta; \Theta^{(k)})$ in equation (2.19) with respect to $w_j$, $\mu_j$, $\sigma_j$ and setting them to zero, which yields the formulae given in equations (2.23)-(2.25).
$$\mu_j^{(k+1)} = \frac{\displaystyle\sum_{n=1}^{N} (1 - z_n)\, \gamma(x_n; \theta_j^{(k)})\, x_n + \beta_j^{(k)}\, \frac{I_{1j}^{(k)}}{I_{0j}^{(k)}} \sum_{n=1}^{N} z_n}{\displaystyle\sum_{n=1}^{N} (1 - z_n)\, \gamma(x_n; \theta_j^{(k)}) + \beta_j^{(k)} \sum_{n=1}^{N} z_n}. \qquad (2.23)$$
$$\left(\sigma_j^{(k+1)}\right)^2 = \frac{\displaystyle\sum_{n=1}^{N} (1 - z_n)\, \gamma(x_n; \theta_j^{(k)}) \left(x_n - \mu_j^{(k+1)}\right)^2 + \beta_j^{(k)}\, \frac{I_{2j}^{(k)} - 2\mu_j^{(k+1)} I_{1j}^{(k)} + \left(\mu_j^{(k+1)}\right)^2 I_{0j}^{(k)}}{I_{0j}^{(k)}} \sum_{n=1}^{N} z_n}{\displaystyle\sum_{n=1}^{N} (1 - z_n)\, \gamma(x_n; \theta_j^{(k)}) + \beta_j^{(k)} \sum_{n=1}^{N} z_n}. \qquad (2.24)$$
$$w_j^{(k+1)} = \frac{\displaystyle\sum_{n=1}^{N} (1 - z_n)\, \gamma(x_n; \theta_j^{(k)}) + \beta_j^{(k)} \sum_{n=1}^{N} z_n}{N}. \qquad (2.25)$$
In equations (2.23)-(2.25), $I_{1j}^{(k)}$ and $I_{2j}^{(k)}$ are given as follows:

$$I_{1j}^{(k)} = \int_{-\infty}^{c} y\, \varphi(y; \theta_j^{(k)})\, \mathrm{d}y = \mu_j^{(k)} I_{0j}^{(k)} - \frac{\sigma_j^{(k)}}{\sqrt{2\pi}} \exp\!\left(-\frac{(c - \mu_j^{(k)})^2}{2 (\sigma_j^{(k)})^2}\right); \qquad (2.26)$$

$$I_{2j}^{(k)} = \int_{-\infty}^{c} y^2\, \varphi(y; \theta_j^{(k)})\, \mathrm{d}y = \left[(\mu_j^{(k)})^2 + (\sigma_j^{(k)})^2\right] I_{0j}^{(k)} - \frac{\sigma_j^{(k)} (c + \mu_j^{(k)})}{\sqrt{2\pi}} \exp\!\left(-\frac{(c - \mu_j^{(k)})^2}{2 (\sigma_j^{(k)})^2}\right). \qquad (2.27)$$
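The closed forms (2.22), (2.26) and (2.27) can be checked numerically. The sketch below implements them with the standard library's erfc and compares them against a simple midpoint-rule integration; the test values of c, mu and sigma are arbitrary:

```python
import math

def gauss_pdf(y, mu, sigma):
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def I0(c, mu, sigma):
    # (2.22): mass of the Gaussian below the censoring threshold c
    return 0.5 * math.erfc((mu - c) / (math.sqrt(2) * sigma))

def I1(c, mu, sigma):
    # (2.26): first partial moment over (-inf, c]
    return mu * I0(c, mu, sigma) - (sigma / math.sqrt(2 * math.pi)) * math.exp(
        -(c - mu) ** 2 / (2 * sigma ** 2))

def I2(c, mu, sigma):
    # (2.27): second partial moment over (-inf, c]
    return (mu ** 2 + sigma ** 2) * I0(c, mu, sigma) - (
        sigma * (c + mu) / math.sqrt(2 * math.pi)) * math.exp(
        -(c - mu) ** 2 / (2 * sigma ** 2))

def num_moment(p, c, mu, sigma, lo=-140.0, steps=100000):
    """Midpoint-rule check of the integral of y^p * phi(y) over (lo, c]."""
    h = (c - lo) / steps
    return h * sum((lo + (k + 0.5) * h) ** p * gauss_pdf(lo + (k + 0.5) * h, mu, sigma)
                   for k in range(steps))

c, mu, sigma = -90.0, -85.0, 4.0
print(round(I0(c, mu, sigma), 4), round(I1(c, mu, sigma), 4), round(I2(c, mu, sigma), 4))
```

Because these moments appear inside every M-step update, having cheap closed forms rather than numerical integrals is what keeps the EM-C-GMM iteration fast.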
2.4. GMM parameter estimation in the presence of dropped data
The EM algorithm for GMM parameter estimation in the presence of dropped data (EM-D-GMM) [CT2] is developed as follows:
E-step:
$$Q(\Theta; \Theta^{(k)}) = \sum_{n=1}^{N} \sum_{j=1}^{J} (1 - d_n)\, \gamma(x_n; \theta_j^{(k)}) \left[\ln(1 - \varepsilon) + \ln w_j + \ln \varphi(x_n; \theta_j)\right] + \sum_{n=1}^{N} \sum_{j=1}^{J} d_n\, w_j^{(k)} \left[\ln \varepsilon + \ln w_j\right]. \qquad (2.30)$$

In equation (2.30), $\varepsilon = \mathrm{P}(d_n = 1)$ is the dropping probability.
M-step:

$$\mu_j^{(k+1)} = \frac{\displaystyle\sum_{n=1}^{N} (1 - d_n)\, \gamma(x_n; \theta_j^{(k)})\, x_n}{\displaystyle\sum_{n=1}^{N} (1 - d_n)\, \gamma(x_n; \theta_j^{(k)})}. \qquad (2.31)$$

$$\left(\sigma_j^{(k+1)}\right)^2 = \frac{\displaystyle\sum_{n=1}^{N} (1 - d_n)\, \gamma(x_n; \theta_j^{(k)}) \left(x_n - \mu_j^{(k+1)}\right)^2}{\displaystyle\sum_{n=1}^{N} (1 - d_n)\, \gamma(x_n; \theta_j^{(k)})}. \qquad (2.32)$$

$$w_j^{(k+1)} = \frac{\displaystyle\sum_{n=1}^{N} (1 - d_n)\, \gamma(x_n; \theta_j^{(k)}) + w_j^{(k)} \sum_{n=1}^{N} d_n}{N}. \qquad (2.33)$$

$$\varepsilon^{(k+1)} = \frac{1}{N} \sum_{n=1}^{N} d_n. \qquad (2.34)$$
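A sketch of one EM-D-GMM iteration per (2.31)-(2.34): samples reported at the threshold (d_n = 1) update only the weights and the dropping probability, while the observed samples drive the means and variances. The synthetic data and initial values below are illustrative assumptions:

```python
import math, random

def gauss_pdf(y, mu, sigma):
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_d_gmm_step(x, c, w, mu, sigma):
    """One iteration of (2.31)-(2.34); x_n == c is treated as dropped (d_n = 1)."""
    J, N = len(w), len(x)
    obs = [xn for xn in x if xn != c]
    Nd = N - len(obs)                        # number of dropped samples
    den = [0.0] * J
    num_mu = [0.0] * J
    resp = []
    for xn in obs:
        p = [w[j] * gauss_pdf(xn, mu[j], sigma[j]) for j in range(J)]
        s = sum(p)
        gn = [pj / s for pj in p]
        resp.append(gn)
        for j in range(J):
            den[j] += gn[j]
            num_mu[j] += gn[j] * xn
    mu_new = [num_mu[j] / den[j] for j in range(J)]                      # (2.31)
    var = [sum(resp[n][j] * (obs[n] - mu_new[j]) ** 2
               for n in range(len(obs))) / den[j] for j in range(J)]
    sigma_new = [math.sqrt(v) for v in var]                              # (2.32)
    w_new = [(den[j] + w[j] * Nd) / N for j in range(J)]                 # (2.33)
    eps_new = Nd / N                                                     # (2.34)
    return w_new, mu_new, sigma_new, eps_new

rng = random.Random(2)
c, eps_true = -100.0, 0.2
data = []
for _ in range(1000):
    y = rng.gauss(-78.0, 3.0) if rng.random() < 0.7 else rng.gauss(-90.0, 2.0)
    data.append(c if rng.random() < eps_true else y)
w, mu, sigma, eps = [0.5, 0.5], [-75.0, -92.0], [4.0, 4.0], 0.0
for _ in range(40):
    w, mu, sigma, eps = em_d_gmm_step(data, c, w, mu, sigma)
print(round(eps, 2), sorted(round(m, 1) for m in mu))
```

The estimated dropping probability converges toward the simulated value of 0.2, and the component means toward −90 and −78 dBm.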
2.5. GMM parameter estimation in the presence of censored and
dropped data
The EM algorithm for GMM parameter estimation in the presence of censored and dropped data (EM-CD-GMM) [CT4] is developed as follows:
E-step:
$$\begin{aligned} Q(\Theta; \Theta^{(k)}) = {} & \sum_{n=1}^{N} \sum_{j=1}^{J} (1 - v_n)\, \gamma(x_n; \theta_j^{(k)}) \left[\ln(1 - \varepsilon) + \ln w_j + \ln \varphi(x_n; \theta_j)\right] \\ & + \sum_{n=1}^{N} \sum_{j=1}^{J} v_n\, \alpha(\Theta^{(k)})\, \beta_j^{(k)} \frac{1}{I_{0j}^{(k)}} \int_{-\infty}^{c} \varphi(y; \theta_j^{(k)}) \left[\ln(1 - \varepsilon) + \ln w_j + \ln \varphi(y; \theta_j)\right] \mathrm{d}y \\ & + \sum_{n=1}^{N} \sum_{j=1}^{J} v_n \left[1 - \alpha(\Theta^{(k)})\right] w_j^{(k)} \left[\ln \varepsilon + \ln w_j\right]. \end{aligned} \qquad (2.52)$$

In equation (2.52), $v_n$ ($1 \le n \le N$) are hidden binary variables indicating whether $x_n$ is unobservable ($v_n = 1$, $x_n = c$) or observable ($v_n = 0$, $x_n = y_n$), and

$$\alpha(\Theta^{(k)}) = \frac{(1 - \varepsilon^{(k)}) \sum_{j=1}^{J} w_j^{(k)} I_{0j}^{(k)}}{(1 - \varepsilon^{(k)}) \sum_{j=1}^{J} w_j^{(k)} I_{0j}^{(k)} + \varepsilon^{(k)}}$$

is the posterior probability that an unobservable sample is censored rather than dropped.
M-step:

$$\mu_j^{(k+1)} = \frac{\displaystyle\sum_{n=1}^{N} (1 - v_n)\, \gamma(x_n; \theta_j^{(k)})\, x_n + \alpha(\Theta^{(k)})\, \beta_j^{(k)}\, \frac{I_{1j}^{(k)}}{I_{0j}^{(k)}} \sum_{n=1}^{N} v_n}{\displaystyle\sum_{n=1}^{N} (1 - v_n)\, \gamma(x_n; \theta_j^{(k)}) + \alpha(\Theta^{(k)})\, \beta_j^{(k)} \sum_{n=1}^{N} v_n}. \qquad (2.53)$$

$$\left(\sigma_j^{(k+1)}\right)^2 = \frac{\displaystyle\sum_{n=1}^{N} (1 - v_n)\, \gamma(x_n; \theta_j^{(k)}) \left(x_n - \mu_j^{(k+1)}\right)^2 + \alpha(\Theta^{(k)})\, \beta_j^{(k)}\, \frac{I_{2j}^{(k)} - 2\mu_j^{(k+1)} I_{1j}^{(k)} + \left(\mu_j^{(k+1)}\right)^2 I_{0j}^{(k)}}{I_{0j}^{(k)}} \sum_{n=1}^{N} v_n}{\displaystyle\sum_{n=1}^{N} (1 - v_n)\, \gamma(x_n; \theta_j^{(k)}) + \alpha(\Theta^{(k)})\, \beta_j^{(k)} \sum_{n=1}^{N} v_n}. \qquad (2.54)$$
$$w_j^{(k+1)} = \frac{\displaystyle\sum_{n=1}^{N} (1 - v_n)\, \gamma(x_n; \theta_j^{(k)}) + \alpha(\Theta^{(k)})\, \beta_j^{(k)} \sum_{n=1}^{N} v_n + \left[1 - \alpha(\Theta^{(k)})\right] w_j^{(k)} \sum_{n=1}^{N} v_n}{N}. \qquad (2.55)$$

$$\varepsilon^{(k+1)} = \frac{\left[1 - \alpha(\Theta^{(k)})\right] \displaystyle\sum_{n=1}^{N} v_n}{N}. \qquad (2.56)$$
As can be seen in equations (2.53)-(2.56), all collected data, including observable, censored, and dropped samples, contribute to the estimates simultaneously. This means the proposed EM algorithm can deal with all of the mentioned phenomena present in the collected data.
2.6. Evaluation of the EM-CD-GMM
In this section, the proposed EM-CD-GMM is evaluated and compared to other EM algorithms using the Kullback-Leibler Divergence (KLD). After 1000 experiments, the mean of the KLD ($\mu_{KLD}$) is shown in table 2.1 and the standard deviation of the KLD ($\sigma_{KLD}$) in table 2.2 (with $c = -90$ dBm).
Table 2.1. Mean KLD ($\mu_{KLD}$) of the EM algorithms after 1000 experiments ($c = -90$ dBm)

Algorithm      ε = 0     ε = 0.075   ε = 0.15   ε = 0.225   ε = 0.3
EM-GMM         3.1491    3.2325      3.3142     3.5054      6.1253
EM-CD-G        0.0798    0.0864      0.1096     0.1329      0.1998
EM-CD-GMM      0.0098    0.0111      0.0229     0.0334      0.0364

Table 2.2. Standard deviation of the KLD ($\sigma_{KLD}$) of the EM algorithms after 1000 experiments ($c = -90$ dBm)

Algorithm      ε = 0     ε = 0.075   ε = 0.15   ε = 0.225   ε = 0.3
EM-GMM         0.0351    0.3535      1.7911     2.202       2.4937
EM-CD-G        0.1199    0.1364      0.1535     0.1963      0.296
EM-CD-GMM      0.0227    0.0601      0.0857     0.1005      0.1302
As can be seen in table 2.1 and table 2.2:
- When $\varepsilon = 0$ and $c = -96$ dBm, the data are almost fully observable. The EM-GMM and the EM-CD-GMM produce the same results. The EM-CD-G has a larger error because this algorithm models the distribution of the data by a Gaussian process.
- In the other cases, $\mu_{KLD}$ and $\sigma_{KLD}$ of the EM-CD-GMM are always the smallest. Hence, the EM-CD-GMM is the most effective algorithm for GMM parameter estimation in the presence of censored and dropped data.
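The KLD between the true mixture and an estimate, used above as the comparison metric, has no closed form for GMMs but can be approximated by Monte-Carlo sampling. The two parameter sets below are illustrative, not thesis results:

```python
import math, random

def gmm_pdf(y, w, mu, sigma):
    return sum(wj * math.exp(-(y - m) ** 2 / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
               for wj, m, s in zip(w, mu, sigma))

def kld_mc(p, q, n=20000, seed=0):
    """Monte-Carlo estimate of KLD(p || q) = E_p[ln p(y) - ln q(y)] for 1-D GMMs.
    p and q are (weights, means, sigmas) triples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        j = rng.choices(range(len(p[0])), weights=p[0])[0]  # sample y ~ p
        y = rng.gauss(p[1][j], p[2][j])
        total += math.log(gmm_pdf(y, *p)) - math.log(gmm_pdf(y, *q))
    return total / n

true_gmm = ([0.6, 0.4], [-75.0, -88.0], [3.0, 4.0])
est_gmm = ([0.58, 0.42], [-75.3, -87.6], [3.2, 4.1])
print(round(kld_mc(true_gmm, est_gmm), 4))
```

A close estimate yields a KLD near zero, and the divergence of a distribution from itself is exactly zero, which makes the metric a natural accuracy score for the tables above.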
2.7. Conclusion of chapter 2
In chapter 2, the author proposed three algorithms to estimate the parameters of a GMM when part of the data set cannot be observed due to censoring, due to dropping, or due to both censoring and dropping. Experimental results demonstrated the effectiveness of the EM-CD-GMM algorithm compared to the EM-GMM and the EM-CD-G.
CHAPTER 3. GMM MODEL SELECTION IN THE PRESENCE
OF CENSORED AND DROPPED DATA
3.1. Motivation
In complex indoor environments, the histogram of collected Wi-Fi RSSIs can be drawn from one or more than one Gaussian component. If a GMM with $J$ Gaussian components is used, the number of parameters of the GMM is $N_{Ps} = 3J - 1$. This means that the number of parameters to store in the database and the computational cost of the positioning algorithm are proportional to the number of Gaussian components used to describe the distribution of Wi-Fi RSSIs. Therefore, a solution is needed to estimate the number of Gaussian components in the GMM in order to optimize the database and reduce the computational complexity of the positioning algorithm of the IPS.
3.2. Methods for GMM model selection
3.2.1. Penalty Function (PF) based methods
Let $\mathbf{x}$ be the observable mixture data set; $N$ the number of samples in $\mathbf{x}$; $\hat{\Theta}_J$ the set of parameters of a GMM with $J$ Gaussian components; $N_{Ps}$ the number of parameters of the GMM; and $\mathcal{L}(\hat{\Theta}_J \mid \mathbf{x})$ the likelihood function. The PFs of the Akaike Information Criterion (AIC), AIC3, and Bayesian Information Criterion (BIC) are defined as follows:

$$\mathrm{PF}_{AIC}(\hat{\Theta}_J) = -2 \ln[\mathcal{L}(\hat{\Theta}_J \mid \mathbf{x})] + 2 N_{Ps}. \qquad (3.3)$$

$$\mathrm{PF}_{AIC3}(\hat{\Theta}_J) = -2 \ln[\mathcal{L}(\hat{\Theta}_J \mid \mathbf{x})] + 3 N_{Ps}. \qquad (3.4)$$

$$\mathrm{PF}_{BIC}(\hat{\Theta}_J) = -2 \ln[\mathcal{L}(\hat{\Theta}_J \mid \mathbf{x})] + N_{Ps} \ln N. \qquad (3.5)$$
3.2.2. Characteristic Function (CF) based methods
The CF based method uses the convergence of the Sum of Weighted Real parts of all Log-Characteristic Functions (SWRLCF) to determine the number of Gaussian components:

$$\mathrm{SWRLCF}(J) = \sum_{j=1}^{J} \hat{w}_j\, \hat{\sigma}_j^2. \qquad (3.6)$$
3.3. GMM model selection in the presence of censored and dropped
data [CT5]
The term $\ln[\mathcal{L}(\hat{\Theta}_J \mid \mathbf{x})]$ of $\mathrm{PF}_{BIC}$ in equation (3.5) can be calculated as follows:

$$\ln[\mathcal{L}(\hat{\Theta}_J, \hat{\varepsilon} \mid \mathbf{x})] = \sum_{n=1}^{N} (1 - v_n) \ln\!\left[(1 - \hat{\varepsilon}) \sum_{j=1}^{J} \hat{w}_j\, \varphi(x_n; \hat{\theta}_j)\right] + \sum_{n=1}^{N} v_n \ln\!\left[(1 - \hat{\varepsilon}) \sum_{j=1}^{J} \hat{w}_j\, \hat{I}_{0j} + \hat{\varepsilon}\right]. \qquad (3.7)$$
Let $\mathrm{PF}_{BIC-CD}(\hat{\Theta}_J, \hat{\varepsilon})$ be the PF of BIC in the presence of censored and dropped data; we have:

$$\mathrm{PF}_{BIC-CD}(\hat{\Theta}_J, \hat{\varepsilon}) = -2 \sum_{n=1}^{N} (1 - v_n) \ln\!\left[(1 - \hat{\varepsilon}) \sum_{j=1}^{J} \hat{w}_j\, \varphi(x_n; \hat{\theta}_j)\right] - 2 \sum_{n=1}^{N} v_n \ln\!\left[(1 - \hat{\varepsilon}) \sum_{j=1}^{J} \hat{w}_j\, \hat{I}_{0j} + \hat{\varepsilon}\right] + 3J \ln N. \qquad (3.12)$$
The algorithm for GMM parameter estimation and model selection in the presence of censored and dropped data (EM-CD-GMM-PFBIC-CD) is as follows (figure 3.4):
Input: a set of incomplete data $\mathbf{x}$, the convergence threshold of the EM algorithm for the CD-GMM ($\varepsilon_{EM}$), and the maximum number of Gaussian components ($J_{max}$) for calculating the PFs.
Output: the estimated number of Gaussian components ($\hat{J}$) and the estimated parameters ($\hat{\Theta}_{\hat{J}}, \hat{\varepsilon}$) of the CD-GMM used to model the distribution of $\mathbf{x}$.
Figure 3.4 presents the flowchart of the algorithm: for each candidate $J$ from 1 to $J_{max}$, the EM-CD-GMM alternates its E-step and M-step until the change in $\ln[\mathcal{L}(\hat{\Theta}_J, \hat{\varepsilon} \mid \mathbf{x})]$ between iterations falls below $\varepsilon_{EM}$, yielding $\hat{\Theta}_J$ and $\hat{\varepsilon}$; then $\mathrm{PF}_{BIC-CD}(\hat{\Theta}_J, \hat{\varepsilon})$ is computed, and the $\hat{J}$ with the smallest penalty function among $J = 1, \ldots, J_{max}$ is selected together with its estimated parameters.
Figure 3.4. The EM-CD-GMM-PFBIC-CD algorithm
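The outer loop of figure 3.4 can be sketched end-to-end. Since the CD-aware EM is lengthy, a plain complete-data EM stands in for it here; the selection logic (fit J = 1..Jmax, evaluate the BIC penalty function, keep the minimum) is unchanged, and all numeric settings are illustrative:

```python
import math, random

def gauss_pdf(y, mu, sigma):
    return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_gmm(x, J, iters=80):
    """Plain complete-data EM (a stand-in for the CD-aware EM of figure 3.4)."""
    lo, hi = min(x), max(x)
    mu = [lo + (j + 0.5) * (hi - lo) / J for j in range(J)]   # spread initial means
    sigma = [max(0.5, (hi - lo) / (2 * J))] * J
    w = [1.0 / J] * J
    for _ in range(iters):
        g = []
        for xn in x:
            p = [w[j] * gauss_pdf(xn, mu[j], sigma[j]) + 1e-300 for j in range(J)]
            s = sum(p)
            g.append([pj / s for pj in p])
        Nj = [sum(gn[j] for gn in g) for j in range(J)]
        w = [Nj[j] / len(x) for j in range(J)]
        mu = [sum(g[n][j] * x[n] for n in range(len(x))) / Nj[j] for j in range(J)]
        sigma = [max(0.3, math.sqrt(sum(g[n][j] * (x[n] - mu[j]) ** 2
                                        for n in range(len(x))) / Nj[j]))
                 for j in range(J)]
    ll = sum(math.log(sum(w[j] * gauss_pdf(xn, mu[j], sigma[j])
                          for j in range(J)) + 1e-300) for xn in x)
    return (w, mu, sigma), ll

def select_model(x, j_max=6):
    """Figure 3.4's outer loop: fit J = 1..j_max, keep the J minimising the BIC PF."""
    best = None
    for J in range(1, j_max + 1):
        params, ll = fit_gmm(x, J)
        pf = -2 * ll + (3 * J - 1) * math.log(len(x))
        if best is None or pf < best[0]:
            best = (pf, J, params)
    return best[1], best[2]

rng = random.Random(3)
x = [rng.gauss(-75, 2) for _ in range(400)] + [rng.gauss(-88, 3) for _ in range(400)]
J_hat, _ = select_model(x)
print("estimated J =", J_hat)
```

On this clean two-component data the loop recovers J = 2; the thesis's contribution is that its CD-aware EM and PF keep this behaviour even when part of the data is censored or dropped.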
3.4. Evaluation of GMM model selection algorithms
In this section, the following GMM model selection algorithms are evaluated through various experiments with artificial data:
- the GMM model selection algorithm using the EM-GMM and $\mathrm{PF}_{AIC}$ (EM-GMM-PFAIC); the initialized parameters are $\varepsilon_{EM} = 10^{-6}$, $J_{max} = 6$;
- the GMM model selection algorithm using the EM-GMM and $\mathrm{PF}_{BIC}$ (EM-GMM-PFBIC); the initialized parameters are $\varepsilon_{EM} = 10^{-6}$, $J_{max} = 6$;
- the GMM model selection algorithm using the EM-GMM and SWRLCF (EM-GMM-SWRLCF); the initialized parameters are $\varepsilon_{EM} = 10^{-6}$, $\varepsilon_{CF} = 0.02$;
- the proposed algorithm (EM-CD-GMM-PFBIC-CD); the initialized parameters are $\varepsilon_{EM} = 10^{-6}$, $J_{max} = 6$.
After 1000 experiments, the differences between the true number ($J$) and the estimated number ($\hat{J}$) of Gaussian components were recorded in table 3.2.
As can be seen in table 3.2, the proposed method produces far better results than the other approaches, especially when the data suffer from censoring, dropping, or both. This can be explained as follows: the proposed method uses the extended version of the EM algorithm in which both observable data ($x_n = y_n$) and unobservable data ($x_n = c$) contribute to the estimates. When data are unobservable owing to the censoring and dropping problems, this algorithm produces far better results than the standard EM algorithm. Moreover, in the PF of AIC, the PF of BIC, and the SWRLCF, unobservable data make almost no practical contribution, while they genuinely contribute to the likelihood in the PF of our proposal, as mentioned in sub-section 3.3.
Table 3.2. Differences between $J$ and $\hat{J}$ for the four approaches ($c = -92$ dBm)

Method                  Probability         ε = 0   ε = 0.1   ε = 0.2
EM-GMM-PFAIC            P(Ĵ = J)            0.01    0.01      0.01
                        P(|Ĵ − J| = 1)      0.31    0.27      0.22
                        P(|Ĵ − J| ≥ 2)      0.68    0.72      0.78
EM-GMM-PFBIC            P(Ĵ = J)            0.01    0.01      0.01
                        P(|Ĵ − J| = 1)      0.39    0.37      0.3
                        P(|Ĵ − J| ≥ 2)      0.6     0.62      0.69
EM-GMM-SWRLCF           P(Ĵ = J)            0.52    0.02      0.01
                        P(|Ĵ − J| = 1)      0.39    0.78      0.77
                        P(|Ĵ − J| ≥ 2)      0.09    0.2       0.22
EM-CD-GMM-PFBIC-CD      P(Ĵ = J)            0.82    0.8       0.79
                        P(|Ĵ − J| = 1)      0.16    0.18      0.2
                        P(|Ĵ − J| ≥ 2)      0.02    0.02      0.01
3.5. Conclusion of chapter 3
When a portion of the data is not observed due to dropping, censoring, or both, the other GMM model selection algorithms have a large error because the unobserved data samples are absent from their criteria. In chapter 3, the PF of BIC is calculated over both the observed and the unobserved data samples. These are the new findings of the proposed GMM model selection method compared to the others.
CHAPTER 4. POSITIONING ALGORITHM AND
EXPERIMENTAL RESULTS
4.1. Motivation
P-RSSIF-IPT includes an offline training phase and an online positioning phase. In the offline training phase, let $N_{RP}$ be the number of RPs and $N_{AP}$ the number of APs; $\mathbf{x}_{q,i}$ ($1 \le q \le N_{RP}$, $1 \le i \le N_{AP}$) is the data set collected at the $q$-th RP from the $i$-th AP. The database built in the offline training phase of an IPS using P-RSSIF-IPT is therefore:

$$\mathcal{R} = \left\{\hat{\Theta}_{q,i};\ 1 \le q \le N_{RP},\ 1 \le i \le N_{AP}\right\}, \qquad (4.1)$$

where $\hat{\Theta}_{q,i}$ is the set of parameters of the GMM used to model the distribution of $\mathbf{x}_{q,i}$, estimated by the EM-CD-GMM-PFBIC-CD.
During the online positioning phase, let $\mathbf{x}^{on} = (x_1^{on}, \ldots, x_{N_{AP}}^{on})$ be the data set collected by the OB. The positioning problem can then be formulated as a classification problem whose classes are the positions from which the RSSI measurements were taken during the offline training phase (the RPs).
To estimate the target's position, a MAP (maximum a posteriori) based classification rule is developed in this chapter. The censoring and dropping problems are also considered in this proposal.
4.2. Optimal classification rule for censored and dropped mixture
data [CT5]
Let $\pi_q$ be the position of the $q$-th RP and $\mathbf{x}^{on} = [x_1^{on}, x_2^{on}, \ldots, x_{N_{AP}}^{on}]$ the data set gathered by the OB. The posterior probability is determined as follows:

$$p(\pi_q \mid \mathbf{x}^{on}) = \frac{\prod_{i=1}^{N_{AP}} p(x_i^{on} \mid \pi_q)\, \mathrm{P}(\pi_q)}{\sum_{q'=1}^{N_{RP}} \prod_{i=1}^{N_{AP}} p(x_i^{on} \mid \pi_{q'})\, \mathrm{P}(\pi_{q'})}. \qquad (4.2)$$
In equation (4.2), $\mathrm{P}(\pi_q)$ is the prior probability of the $q$-th RP; assuming all RPs are equally likely, $\mathrm{P}(\pi_q) = 1/N_{RP}$. The denominator $\sum_{q'=1}^{N_{RP}} \prod_{i=1}^{N_{AP}} p(x_i^{on} \mid \pi_{q'})\, \mathrm{P}(\pi_{q'})$ is the normalizing constant, and $p(x_i^{on} \mid \pi_q)$ is the likelihood, which can be calculated as follows:
$$p(x_i^{on} \mid \pi_q) = \begin{cases} (1 - \hat{\varepsilon}_{q,i}) \displaystyle\sum_{j=1}^{\hat{J}_{q,i}} \hat{w}_{q,i,j}\, \varphi(x_i^{on}; \hat{\theta}_{q,i,j}) & \text{if } x_i^{on} > c \\[2ex] (1 - \hat{\varepsilon}_{q,i}) \displaystyle\sum_{j=1}^{\hat{J}_{q,i}} \hat{w}_{q,i,j}\, \hat{I}_{0,q,i,j} + \hat{\varepsilon}_{q,i} & \text{if } x_i^{on} = c \end{cases} \qquad (4.9)$$
Using the set $K_{NN}$ of nearest neighbors, chosen among the offline locations as those with the largest posteriors, the final location estimate is obtained by the weighted average:

$$\hat{\mathbf{x}} = \frac{\sum_{q \in K_{NN}} \pi_q\, p(\pi_q \mid \mathbf{x}^{on})}{\sum_{q \in K_{NN}} p(\pi_q \mid \mathbf{x}^{on})}. \qquad (4.10)$$
4.3. Experimental results
4.3.1. Positioning accuracy
In order to evaluate the positioning accuracy of the proposed method compared to other state-of-the-art approaches, the author of this thesis conducted experiments with both simulation data and real field data.
4.3.1.1. Simulation results
In order to evaluate the effectiveness of the proposed approach, a floor plan with an overall size of 45 m by 45 m, 100 RPs, and 10 APs was generated. The training data were collected as follows:
(1) Collect data at each RP from each AP according to the PLM:

$$y_n\,[\mathrm{dBm}] = RSSI_0\,[\mathrm{dBm}] - 10\, n \log_{10}\!\left(\frac{r}{r_0}\right), \qquad (4.11)$$

where $n$ is the path-loss exponent and $r_0$ the reference distance;
(2) round $y_n$;
(3) generate censored and dropped data with $\varepsilon = 0.15$, $c = -100$ dBm.
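Step (1) can be sketched with the log-distance PLM of (4.11); the values of RSSI_0, the path-loss exponent, the reference distance, and the additive Gaussian noise term are illustrative assumptions (the thesis's exact simulation settings are not restated here):

```python
import math, random

def rssi_plm(r, rng, rssi0=-40.0, n_pl=3.0, r0=1.0, noise_sigma=4.0):
    """Log-distance path-loss model of (4.11), plus an assumed Gaussian noise term."""
    return rssi0 - 10.0 * n_pl * math.log10(r / r0) + rng.gauss(0.0, noise_sigma)

rng = random.Random(4)
samples = [round(rssi_plm(12.0, rng)) for _ in range(400)]  # step (2): rounding
mean_rssi = sum(samples) / len(samples)
print(round(mean_rssi, 1))
```

At 12 m with these assumed settings, the sample mean lands near the noiseless value of about −72.4 dBm; applying step (3) to such samples then yields the incomplete training data.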
In the training phase, 400 measurements were collected at each RP
from each AP. Data collected at 50% of the RPs is distri