Evaluate the proposed fusion schemes for multi-organ plant identification: In the thesis, a fusion scheme named RHF has been proposed for two-organ based plant identification. Theoretically, this fusion scheme can be applied to multi-organ plant identification. Therefore, in the near future, we will extend the proposed fusion scheme and evaluate its robustness for other organs.
Deploy a multi-organ search module for VnMed: In the current deployment, the image-based plant retrieval takes only one image of the plant. We would like to deploy two-organ plant retrieval in a first phase and then multi-organ plant retrieval in this application. For this, an interface that allows selecting/capturing several images, as well as the fusion scheme, has to be implemented.
h-level feature map, given a designed patch-level match kernel function.

Figure 2.7 Construction of the image-level feature by concatenating feature vectors of cells in the layers of the spatial pyramid structure.

The approximative feature over an image patch P is constructed as [13]:
$$F_{\text{gradient}}(P) = \sum_{z \in P} \tilde{m}(z)\,\phi_o(\tilde{\omega}(z)) \otimes \phi_p(z) \qquad (2.17)$$

where $\tilde{m}(z)$ is the normalized gradient magnitude, $\phi_o(\tilde{\omega}(z))$ and $\phi_p(z)$ are the approximative feature maps for the orientation kernel and the position kernel respectively, and $\otimes$ is the Kronecker product.
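To make the construction of Eq. (2.17) concrete, here is a minimal NumPy sketch. It is not the thesis implementation: the approximate maps phi_o and phi_p are assumed to be given callables returning low-dimensional basis coefficient vectors (in KDES they come from a finite-dimensional approximation of the orientation and position kernels), and their names and signatures are hypothetical.

```python
import numpy as np

def gradient_patch_feature(mag, ori, phi_o, phi_p, eps=1e-8):
    # Sketch of Eq. (2.17): F_gradient(P) = sum_z m~(z) * kron(phi_o(w~(z)), phi_p(z)).
    # mag, ori: gradient magnitude and orientation over the patch (H x W arrays).
    h, w = mag.shape
    m = mag / np.sqrt((mag ** 2).sum() + eps)        # normalized magnitude m~(z)
    feat = 0.0
    for y in range(h):
        for x in range(w):
            fo = phi_o(ori[y, x])                    # orientation feature map
            fp = phi_p((y / h, x / w))               # position feature map (normalized coords)
            feat = feat + m[y, x] * np.kron(fo, fp)  # Kronecker product term
    return feat
```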
c) Image-level feature extraction
Once patch-level features are computed for each patch, the remaining work is to compute a feature vector representing the whole image. To do this, a spatial pyramid structure divides the image into cells using horizontal and vertical lines at several layers. We then compute the feature vector for each cell of the pyramid structure and concatenate them into a final descriptor. The feature map on the pyramid structure is:
$$\bar{\phi}_P(X) = \left[\, w^{(1)}\bar{\phi}_S(X^{(1,1)});\ \ldots;\ w^{(l)}\bar{\phi}_S(X^{(l,t)});\ \ldots;\ w^{(L)}\bar{\phi}_S(X^{(L,n_L)}) \,\right] \qquad (2.20)$$

where $w^{(l)}$ is the weight associated with level $l$ and $\bar{\phi}_S(X^{(l,t)})$ is the feature map of the $t$-th cell at the $l$-th level.
We obtain the final representation of the whole image, which we call the image-level feature vector. This vector is the input to a multiclass SVM for training and testing.
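As an illustration of Eq. (2.20), the sketch below builds the image-level vector by weighting each cell's feature map by its level weight and concatenating everything; the nested-list layout of cell_features and the name level_weights are assumptions made for the example.

```python
import numpy as np

def image_level_feature(cell_features, level_weights):
    # cell_features[l][t]: feature vector of the t-th cell at level l, i.e. phi_S(X^(l,t));
    # level_weights[l]: the weight w^(l) of Eq. (2.20).
    parts = []
    for l, cells in enumerate(cell_features):
        for f in cells:
            parts.append(level_weights[l] * np.asarray(f))
    return np.concatenate(parts)  # image-level descriptor fed to the multiclass SVM
```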
2.4 Experimental results
2.4.1 Datasets
We conduct experiments on the following public datasets:
Subset of the ImageCLEF 2013 dataset: 5,540 training and 1,660 testing leaf images of 80 species from ImageCLEF 2013.
Flavia dataset: 1,907 leaf images of 32 species on a simple background.
LifeCLEF 2015 dataset: Table 2.1 details the leaf/leafscan subsets.
Table 2.1 Leaf/leafscan dataset of LifeCLEF 2015
                    Leaf     Leafscan
Training            13,367   12,605
Testing              2,690      221
Number of species      899      351
2.4.2 Experimental results
Results on ImageCLEF 2013 dataset
The results are shown in Table 2.2. They show that our improvements to the kernel descriptor extraction significantly increase performance on both interactively and automatically segmented images, and that the proposed method obtains the best result. On the same dataset, the improved KDES outperforms the original KDES, and with the same KDES method, interactive segmentation significantly improves accuracy.
Table 2.2 Accuracy obtained in six experiments on the ImageCLEF 2013 dataset
Method                                        Accuracy (%)
Improved KDES with interactive segmentation   71.5
Original KDES with interactive segmentation   63.4
Improved KDES with no segmentation            43.68
Original KDES with no segmentation            43.25
Improved KDES with automatic segmentation     42.3
Original KDES with automatic segmentation     35.5
Results on Flavia dataset
The accuracy is 99.06%. We compare our approach with other methods on the Flavia dataset in Table 2.4: our method obtains the best result, improving on the other reported accuracies by 0.36 to 6.86 percentage points. Accuracies are generally very high on this simple-background dataset.
Results on LifeCLEF 2015 dataset
The evaluation measure is the score at image level [1]. The proposed method has been integrated in our submissions, named Mica Run 1, Mica Run 2 and Mica Run 3. Figure 2.12 shows the results obtained by all participating teams in LifeCLEF 2015. The results show that KDES performs very well on the Leaf Scan category, with an identification score better than most of the runs based on GoogLeNet, such as Inria Zenith, QUT RV, Sabanci Okan and Ecouan [1].
Table 2.4 Comparison of the proposed method with the state-of-the-art hand-designed feature-based methods on the Flavia dataset
Methods           Feature; classification method                   Accuracy (%)
Proposed method   Improved KDES; SVM                               99.06
[14]              SMSD; NFC                                        97.50
[15]              CT, HU moments, GF, GLCM; NFC                    97.60
[16]              EnS, CDS; SVM                                    97.80
[17]              GIST features (486), (PCA=40%); cosine KNN       98.7
[18]              Zernike moments, HOG; SVM                        96.4
[19]              Geometrical features, invariant moments; RBPNN   94.1
[20]              Geometrical features, vein features; SVM         92.2
Figure 2.12 Detailed scores obtained for Leaf Scan [1]; our team's name is Mica.
The proposed method obtains the second place on Leafscan with a score of 0.737, while the score of the first-place team is 0.766. This shows the relevance of the leaf normalization strategy as well as the effectiveness of the gradient kernel for this type of organ.
2.5 Conclusion
This chapter presents the proposed method for complex-background leaf-based plant identification. The obtained results show that the combination of improved KDES and interactive image segmentation in the proposed method outperforms the original KDES and different state-of-the-art hand-designed feature-based methods on the ImageCLEF 2013, Flavia and LifeCLEF 2015 datasets. It is worth noting that our proposed method still requires user interaction in the segmentation step. However, from the point of view of a real application, this is acceptable, since users only have to define a few markers (from 1 to 3 in our experiments). On a mobile device this becomes an easy task, even for novice users, thanks to the convenience of touch interaction on the device's screen.
CHAPTER 3
FUSION SCHEMES FOR MULTI-ORGAN BASED
PLANT IDENTIFICATION
3.1 Introduction
According to botanists and biological experts, images of a single organ do not provide enough information for the identification task, due to the large inter-class similarity and large intra-class variation. Therefore, this chapter aims at proposing a fusion technique for multi-organ plant identification. Without loss of generality, we present and evaluate fusion schemes for each pair of organs. The proposed framework is illustrated in Figure 3.2. We have proposed a novel fusion scheme named Robust Hybrid Fusion (RHF) that combines transformation-based fusion and classification-based fusion (CBF).
Figure 3.2 The framework for multi-organ plant identification
3.2 The proposed fusion scheme RHF
In this chapter, we employ the following notations:
q = {I_1, I_2, ..., I_N}: the query image set containing images of N organs (N = 2 in this work);
C: the number of species in the dataset;
s_i(I_k): the confidence score of the i-th plant species when the image of organ k, denoted I_k, is used as a query for single-organ plant identification, where 1 ≤ i ≤ C and 1 ≤ k ≤ N;
c: the predicted class of the species for the query q.
Transformation-based fusion
Three rules are widely used in transformation-based fusion: the max, sum and product rules. Using these rules, the class c of the query q is defined as follows:
Max rule:
$$c = \arg\max_i \ \max_{k=1,\ldots,N} s_i(I_k) \qquad (3.1)$$
Sum rule:
$$c = \arg\max_i \sum_{k=1}^{N} s_i(I_k) \qquad (3.2)$$
Product rule:
$$c = \arg\max_i \prod_{k=1}^{N} s_i(I_k) \qquad (3.3)$$
Transformation-based fusion approaches do not always guarantee good performance. However, as they are simple and do not require a training process, most current multi-organ plant identification methods adopt these techniques.
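The three rules translate directly into a few lines of code. The following is a minimal sketch, assuming scores is an N × C array holding the single-organ confidences s_i(I_k):

```python
import numpy as np

def transformation_fusion(scores, rule="product"):
    # scores: (N, C) array with scores[k, i] = s_i(I_k) from the single-organ models.
    if rule == "max":          # Eq. (3.1)
        fused = scores.max(axis=0)
    elif rule == "sum":        # Eq. (3.2)
        fused = scores.sum(axis=0)
    elif rule == "product":    # Eq. (3.3)
        fused = scores.prod(axis=0)
    else:
        raise ValueError("unknown rule: " + rule)
    return int(np.argmax(fused))  # predicted class c
```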
Classification-based fusion (CBF)
The main idea of classification-based fusion approaches is that the multiple scores are treated as feature vectors and a classifier is constructed to discriminate each class. The signed distance from the decision boundary is usually regarded as the fused score. We adopt this idea for plant identification from images of two organs. In our work, SVM (Support Vector Machine) is chosen as the classifier since it is a powerful classification technique. The CBF is performed as follows. First, we define the positive and negative samples in the training dataset: for each pair of images, we have one positive sample and (C − 1) negative samples, as illustrated in Figure 3.3.
In the test phase, for the query q, the feature vector is computed through the single-organ plant identification models. The CBF method then produces two predicted probabilities for each species i: one for the positive class, denoted P_pos(i, q), and one for the negative class, denoted P_neg(i, q). The list of plants is ranked by s_i(q), the confidence score of the i-th plant species for the query q:
$$s_i(q) = P_{pos}(i, q) \qquad (3.4)$$
Figure 3.3 Illustration of the positive and negative samples.
The class c is predicted as follows, where 1 ≤ i ≤ C:
$$c = \arg\max_i s_i(q) \qquad (3.5)$$
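A minimal scikit-learn sketch of this CBF idea follows. It assumes each training sample is the 2-D score vector [s_i(I_1), s_i(I_2)], labeled 1 when the image pair truly belongs to species i and 0 otherwise; the function names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

def train_cbf(score_vectors, labels):
    # score_vectors: (n_samples, 2) array of [s_i(I1), s_i(I2)] points,
    # one positive and (C - 1) negatives per training image pair; labels in {0, 1}.
    return SVC(probability=True).fit(score_vectors, labels)

def cbf_confidences(model, scores):
    # scores: (2, C) array from the single-organ models; one 2-D point per species.
    # Column 1 of predict_proba is the positive-class probability Ppos(i, q), Eq. (3.4).
    return model.predict_proba(scores.T)[:, 1]
```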
Robust Hybrid Fusion (RHF)
The above classification-based approach can lose the distribution characteristics of each species, because the positive and negative samples of all species are merged and represented in a single metric space. Therefore, we build an SVM model for each species based on its positive and negative samples. When a pair of organ images is input to our model, these SVM classifiers give the probability that the pair belongs to each species, and we combine this probability with the confidence score of each organ. Recall that q is the query consisting of a pair of organ images and s_i(I_k) is the confidence score of the i-th species for image I_k. Let s_i(q) denote the confidence score of the query q for the i-th plant species computed by its SVM model. The robust hybrid fusion model is formed as follows:
$$c = \arg\max_i \; s_i(q) \cdot \prod_{k=1}^{2} s_i(I_k) \qquad (3.6)$$
This model integrates the product rule with a classification-based approach. We expect the positive probability of the point q to affect the fusion result: if the positive probability of q is high, the probability that q belongs to the i-th species is high, too.
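Given the per-species SVM probabilities and the single-organ scores, Eq. (3.6) amounts to an element-wise product followed by an argmax; a minimal sketch under the same assumptions as above:

```python
import numpy as np

def rhf_predict(scores, svm_probs):
    # scores: (2, C) single-organ confidences s_i(I_k);
    # svm_probs: length-C vector of per-species SVM probabilities s_i(q).
    fused = np.asarray(svm_probs) * scores.prod(axis=0)  # Eq. (3.6)
    return int(np.argmax(fused))
```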
3.3 The choice of classification model for single organ plant
identification
For single-organ plant identification, we employ well-known CNN networks: AlexNet [21], ResNet [22] and GoogLeNet [23]. Two schemes are proposed, as illustrated in Figure 3.10: (1) one dedicated CNN for each organ and (2) one CNN for all organs. The first scheme allows explicit fusion for each organ, while the second does not require knowing the type of organ and consumes fewer computational resources.
Figure 3.10 Single organ plant identification
In our experiments, for both schemes the network weights are initialized by pre-training on the ImageNet dataset, and the chosen networks are then fine-tuned on the working dataset.
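As an illustration of this transfer-learning setup, here is a minimal PyTorch sketch assuming a recent torchvision; only the classifier head is changed, and the whole network is then fine-tuned on the working dataset. This is a sketch of the general recipe, not the exact training configuration used in the thesis.

```python
import torch.nn as nn
from torchvision import models

def build_single_organ_model(num_classes):
    # Load GoogLeNet with weights pre-trained on ImageNet, then replace the
    # final fully connected layer to match the number of plant species.
    model = models.googlenet(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```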
3.4 Experimental results
3.4.1 Dataset
To evaluate the proposed fusion scheme, we use a dataset containing images of 50 species extracted from LifeCLEF 2015 and augmented with images from the Internet (Table 3.2). This dataset is divided into three parts: CNN Training is the training data for single-organ identification; SVM Input is used as the training dataset of the SVM model; Testing is used to evaluate the performance of the CNNs and the late fusion methods.
Table 3.2 The collected dataset of 50 species with four organs
               Flower   Leaf    Entire   Branch   Total
CNN Training   1,650    1,930   825      1,388    5,793
SVM Input      986      1,164   495      833      3,478
Testing        673      776     341      553      2,343
Total          3,309    3,870   1,661    2,774    11,614
Number of species = 50
Table 3.3 Single organ plant identification accuracies (%) with two schemes:
(1) a CNN for each organ; (2) a CNN for all organs.
              AlexNet               ResNet                GoogLeNet
Organ         Scheme 1   Scheme 2   Scheme 1   Scheme 2   Scheme 1   Scheme 2
Leaf (Le)     66.2       63.8       73.4       70.6       75.0       76.6
Flower (Fl)   73.0       72.2       75.6       75.4       82.2       78.4
Branch (Br)   43.2       47.4       48.6       54.6       53.2       54.8
Entire (En)   32.4       33.8       32.4       39.0       36.4       35.2
3.4.2 Single organ plant identification results
The results obtained for the two proposed schemes with the three networks are shown in Table 3.3. We can observe that GoogLeNet obtains better results than AlexNet and ResNet in both schemes and for most organs. It is interesting to see that scheme 1 is suitable for highly discriminative and salient organs such as leaf and flower, while scheme 2 is a good choice for the other organs such as branch and entire. The results of branch and entire identification in scheme 2 are improved because some images of flower and leaf may contain branch and entire information. The advantage of using scheme 2 for single-organ identification is that it does not require determining the type of organ. The results also show that flower is the organ that obtains the best result, while entire gets the lowest.
3.4.3 Evaluation of the proposed fusion scheme in multi-organ plant iden-
tification
Table 3.4, Table 3.5 and Table 3.6 show the performance obtained when combining a pair of organs for plant identification. The experimental results show that almost all fusion techniques greatly improve the accuracy compared with using images of one sole organ. When applying scheme 1 for single-organ identification with AlexNet, the best single-organ performance is 73.0% for flower images, whereas applying the proposed RHF to the leaf-flower combination dramatically increases accuracy by 16.8%, to 89.8%. With ResNet, the combination of leaf and flower (Le-Fl) improves by +17% over the single organ, and by +13.6% with GoogLeNet. Not only for the leaf-flower pair but for all six organ-pair combinations, RHF retains high performance, and almost all other fusion results are also higher than those of single organs.
Comparison to MCDCNN (Multi-Column Deep Convolutional Neural Networks)
To show the effectiveness of the proposed fusion scheme, we compare its performance with that of MCDCNN [24]. The results obtained on the same dataset (Table 3.7) show that the proposed method outperforms MCDCNN in all combinations. The improvement is up to 14.4% for the combination of branch and leaf.
Table 3.4 Obtained accuracy (%) at rank 1 (R1) and rank 5 (R5) when combining each pair of organs with different fusion schemes, in case of using AlexNet. The best result is in bold.
               Scheme 1 for single organ identification        Scheme 2 for single organ identification
               Max    Sum    Product  CBF    RHF               Max    Sum    Product  CBF    RHF
En - Le   R1   66.2   67.2   75.6     74.0   76.6              66.8   67.2   77.4     71.4   78.6
          R5   88.6   88.8   93.2     81.8   94.6              88.4   88.2   93.6     80.2   94.4
En - Fl   R1   73.8   74.4   78.8     77.2   81.2              73.84  73.6   78.8     76.24  80.4
          R5   92.6   92.8   94.2     84.2   94.4              88.8   89.2   94.8     83.6   95.6
Le - Fl   R1   81.6   82.0   88.6     86.2   89.8              78.8   81.2   89.6     83.2   89.6
          R5   96.8   96.8   98.2     90.4   98.4              95.6   96.0   99.2     88.8   99.2
Br - Le   R1   70.2   71.0   76.8     73.8   78.4              66.4   68.2   78.2     73.6   78.2
          R5   89.6   90.0   93.4     79.6   93.8              92.0   93.0   95.6     81.6   96.0
Br - Fl   R1   74.2   75.4   80.8     79.0   81.4              70.2   70.6   80.6     76.6   81.4
          R5   90.8   91.4   95.2     83.0   95.4              90.4   90.6   95.4     84.6   95.6
Br - En   R1   51.6   52.2   58.0     58.0   58.6              52.4   52.8   60.6     60.6   61.6
          R5   76.8   77.6   83.6     81.4   83.8              78.2   78.6   83.6     83.4   84.9
Table 3.5 Obtained accuracy (%) at rank 1 (R1) and rank 5 (R5) when combining each pair of organs with different fusion schemes, in case of using ResNet. The best result is in bold.
               Scheme 1 for single organ identification        Scheme 2 for single organ identification
               Max    Sum    Product  CBF    RHF               Max    Sum    Product  CBF    RHF
En - Le   R1   70.4   72.2   75.2     73.2   78.0              73.6   75.4   80.8     73.2   80.8
          R5   91.8   92.6   92.8     90.6   93.2              94.2   94.4   94.8     90.6   95.2
En - Fl   R1   73.8   75.4   80.0     76.4   83.2              74.6   76.0   80.2     76.4   83.2
          R5   93.2   93.6   95.0     89.2   95.4              94.4   95.0   95.8     89.2   95.2
Le - Fl   R1   90.0   91.4   92.4     91.4   92.6              85.8   87.6   89.2     91.4   92.6
          R5   98.0   98.8   99.0     96.0   99.2              98.4   98.4   99.0     96.0   99.2
Br - Le   R1   77.8   79.2   82.0     79.4   83.2              79.8   81.4   83.6     79.4   83.2
          R5   91.8   92.2   94.0     90.4   94.6              94.4   94.4   96.4     90.4   94.6
Br - Fl   R1   80.0   81.0   84.4     82.0   86.4              78.8   80.4   85.6     81.0   86.0
          R5   93.6   94.4   97.6     91.4   97.8              95.6   96.0   96.2     91.4   97.6
Br - En   R1   52.4   54.4   62.2     55.0   60.6              60.4   66.2   69.0     55.0   69.0
          R5   82.0   83.4   86.6     80.4   87.4              84.8   85.6   89.6     80.4   87.6
Table 3.6 Obtained accuracy (%) at rank 1 (R1) and rank 5 (R5) when combining each pair of organs with different fusion schemes, in case of using GoogLeNet. The best result is in bold.
               Scheme 1 for single organ identification        Scheme 2 for single organ identification
               Max    Sum    Product  CBF    RHF               Max    Sum    Product  CBF    RHF
En - Le   R1   74.6   75.0   79.2     79.4   80.6              77.8   78.0   79.4     81.2   82.0
          R5   94.0   93.8   93.6     84.0   94.4              91.4   91.4   96.2     85.6   95.8
En - Fl   R1   79.2   79.8   83.4     83.8   84.2              77.6   78.0   81.0     80.2   81.0
          R5   95.8   96.0   97.0     89.2   96.8              93.6   93.8   95.8     84.4   96.2
Le - Fl   R1   91.4   92.0   95.4     93.8   95.8              90.6   90.2   92.6     91.8   92.8
          R5   99.6   99.6   99.6     96.0   99.8              98.6   98.8   99.0     93.8   99.0
Br - Le   R1   79.8   81.0   84.6     80.2   84.6              81.2   81.8   85.6     81.6   86.6
          R5   94.4   94.6   97.4     84.8   97.4              96.8   96.8   96.8     86.0   97.0
Br - Fl   R1   85.0   86.0   90.2     87.2   91.6              80.0   80.4   86.8     83.2   87.2
          R5   97.0   97.4   99.2     90.2   99.0              96.0   96.0   97.6     86.8   97.0
Br - En   R1   58.0   58.8   61.8     60.2   64.2              57.8   58.4   65.6     59.2   66.4
          R5   81.4   81.8   86.8     70.4   87.0              82.2   82.0   87.0     68.4   87.0
Table 3.7 Comparison of the proposed fusion scheme (accuracy, %, at rank 1 (R1) and rank 5 (R5)) with the state-of-the-art method MCDCNN [24]. The best result is in bold.
               Scheme 1 for single organ identification    Scheme 2 for single organ identification
               RHF        RHF       RHF                    RHF        RHF       RHF                    MCDCNN
               (AlexNet)  (ResNet)  (GoogLeNet)            (AlexNet)  (ResNet)  (GoogLeNet)            [24]
En - Le   R1   76.6       78.0      80.6                   78.6       80.8      82.0                   70.0
          R5   94.6       93.2      94.4                   94.4       95.2      95.8                   91.0
En - Fl   R1   81.2       83.2      84.2                   80.4       83.2      81.0                   75.6
          R5   94.4       95.4      96.8                   95.6       95.2      96.2                   94.2
Le - Fl   R1   89.8       92.6      95.8                   89.6       92.6      92.8                   86.6
          R5   98.4       99.2      99.8                   99.2       99.2      99.0                   98.4
Br - Le   R1   78.4       83.2      84.6                   78.2       83.2      86.6                   72.2
          R5   93.8       94.6      97.4                   96.0       94.6      97.0                   93.0
Br - Fl   R1   81.4       86.4      91.6                   81.4       86.0      87.2                   76.8
          R5   95.4       97.8      99.0                   95.6       97.6      97.0                   93.0
Br - En   R1   58.6       60.6      64.2                   61.6       69.0      66.4                   55.2
          R5   83.8       87.4      87.0                   84.0       87.6      87.0                   80.6
3.5 Conclusion
This chapter presented the fusion scheme proposed for multi-organ based plant identification. The combination of two organs usually gives better results than one organ, and the experiments show that the fusion techniques increase performance dramatically. The robust hybrid fusion model gives the best result in all evaluations, with a rank-1 improvement over the MCDCNN method ranging from +3.2% to +14.8%. In future work, we will investigate a method to identify species from observations with an unfixed number of organs.
CHAPTER 4
A FRAMEWORK FOR AUTOMATIC PLANT
IDENTIFICATION WITHOUT DEDICATED
DATASET AND A CASE STUDY FOR BUILDING
IMAGE-BASED PLANT RETRIEVAL
4.1 The framework for building an automatic plant identification system without a dedicated dataset
We propose a new framework based on deep learning for building an automatic plant identification system from images without an available database, as illustrated in Figure 4.3.
Figure 4.3 The proposed framework for building an automatic plant identification system without a dedicated dataset
Plant data collection: this step aims at collecting images from different sources.
Plant organ detection: we propose to build an organ detector (leaf, flower, fruit, stem, branch, non-plant) based on the LifeCLEF 2015 dataset and use it as an automatic data filter.
Data validation: the main purpose of this step is to remove invalid plant images while keeping the valid ones (a sketch follows this list).
Plant identification: once the dataset has been processed by the data validation step, different identification models can be trained for plant identification.
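The data validation step can be sketched as a simple filter driven by the organ detector; detect_organ is a hypothetical interface returning the top-1 label for an image path.

```python
PLANT_ORGANS = {"leaf", "flower", "fruit", "stem", "branch"}

def validate_images(image_paths, detect_organ):
    # Keep images whose top-1 prediction is a plant organ; drop "non-plant" ones.
    return [p for p in image_paths if detect_organ(p) in PLANT_ORGANS]
```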
4.2 Plant organ detection
We propose to apply GoogLeNet and transfer learning to build the organ detector. We build its training dataset from images of the LifeCLEF 2015 dataset (leaf, flower, fruit, stem, branch) [1] and a dataset collected from the internet (non-plant).
Experiment: Table 4.4 presents the results for two weight initialization strategies. The results show that initializing with weights pre-trained on a large dataset such as ImageNet yields an improvement of +5.08% at rank 1 and +2.54% at rank 2 over random weight initialization. This result is very promising, as the working images are mainly captured against complex backgrounds, and it shows that deep learning is capable of learning well from natural images.
Table 4.4 The organ detection performance of GoogLeNet with different weight initializations.
Weight initialization strategy   Acc rank 1 (%)   Acc rank 2 (%)
Randomly generated weights       82.10            94.92
Pre-trained on ImageNet          87.18            97.46
4.3 Case study: Development of image-based plant retrieval in
VnMed application
The aim of this section is to develop the image-based plant retrieval functionality of VnMed by applying the proposed framework. We have conducted the following experiments. First, we collect images of 100 medicinal plants following two acquisition modes: manual acquisition and crowdsourcing. We then organize these images into four datasets as follows:
VnDataset1 contains the images captured by manual image acquisition.
VnDataset2 contains the images of VnDataset1 and the images collected through crowdsourcing.
VnDataset3 contains the images of VnDataset2 that remain after applying the plant organ detection method built in the previous section to remove invalid images.
VnDataset4 contains the images of VnDataset3 after manually removing the remaining invalid images.
These training datasets are summarized in Table 4.8. We perform two evaluations, named evaluation 1 and evaluation 2. Evaluation 1 contains 972 images captured by manual image acquisition, while evaluation 2 uses 3,163 images, comprising the images of evaluation 1 and additional images collected through crowdsourcing.
Table 4.8 Four Vietnamese medicinal species training datasets
        VnDataset1   VnDataset2   VnDataset3   VnDataset4
train   3,901        16,513       15,652       15,150
We fine-tune GoogLeNet pre-trained on ImageNet. Four models are generated for the four corresponding datasets (denoted Mi). The results are shown in Table 4.9.
The training data plays an important role in the performance of plant identification: the more heterogeneous the training data, the more robust the model. Among the four models, M1 outperforms the others on evaluation 1 (accuracy at rank 1 is 81.58%). However, when tested on the images of evaluation 2, the performance of this model decreases dramatically; M1 is not suitable for data collected through crowdsourcing. The other models obtain results slightly lower than M1 on evaluation 1, but they keep high accuracies on the images of evaluation 2. Among the three models M2, M3 and M4, the results on both evaluations, ranked from high to low, are M4, M3, M2. This shows the important role of data validation. It is also worth noting that the automatic data validation based on plant organ detection removes a significant part of the invalid images.
Table 4.9 Results for Vietnamese medicinal plant identification.
Experiments    Accuracy (%)   M1      M2      M3      M4
evaluation 1   rank 1         81.58   76.03   78.70   79.63
               rank 5         90.64   88.48   83.54   84.77
evaluation 2   rank 1         29.62   56.50   57.73   58.46
               rank 5         34.62   66.42   67.31   79.48
At the time of writing this dissertation, a second dataset containing 75,405 images of 596 Vietnamese medicinal plants had been built by applying the proposed framework. Training GoogLeNet on this dataset yields an accuracy of 66.61% at rank 1 and 87.52% at rank 10. The identification model trained on the collected dataset has been integrated into the VnMed application.
4.4 Conclusion
In this chapter, a framework for building an automatic plant identification system without an available dataset has been proposed. The core step of the framework is data validation, with the help of the proposed plant organ detection. We have also confirmed the validity of the proposed framework by building the image-based plant retrieval of the VnMed application. As a result, an image dataset of 596 medicinal plants in Vietnam has been collected and carefully annotated with the help of botanists. Moreover, the identification model trained on this dataset has been integrated into the VnMed application.
CONCLUSIONS AND FUTURE WORKS
Conclusions
This dissertation has made three contributions: (1) a complex-background leaf-based plant identification method, (2) a fusion scheme for two-organ based plant identification, and (3) a framework for automatic plant identification without a dedicated dataset, together with its application to the Vietnamese medicinal plant retrieval system.
For plant identification based on complex background leaf images, we have proposed to combine an interactive segmentation method with the improved KDES. To evaluate the robustness of the proposed method, we have performed experiments on different datasets. The obtained results show that the combination of improved KDES and interactive image segmentation in the proposed method outperforms the original KDES and different state-of-the-art hand-designed feature-based methods on both ImageCLEF 2013