Content based image retrieval using multiple features and Pareto approach
DOI:
https://doi.org/10.15625/1813-9663/32/2/8611
Keywords:
Pareto point, Pareto front, Content based image retrieval (CBIR), Relevance feedback (RF), Classification
Abstract
In this paper, we propose a method for image retrieval based on the Pareto approach. Each Pareto point is represented by a vector of distances between image features, and each feature distance can be computed with any existing distance measure, such as the Euclidean distance or the Histogram Intersection distance. A point is called a Pareto point if and only if there exists no other point whose distances are less than or equal to its own in every dimension with respect to the input image. The set of all Pareto points forms a set of fronts at different depths. We also establish formal properties of the Pareto front; in particular, we prove that the Pareto front depth of a point in the search space is the number of vertices of the longest dominating path ending at that point.

A content-based image retrieval (CBIR) system executes an image classifier driven by the input query image, and the classification engine takes the set of all fronts as its original dataset. Relevance feedback is used to integrate the user's feedback on the returned results. Experimental results on three image collections show the effectiveness of the proposed method: it reduces noisy data for supervised machine-learning classification engines and avoids local traps in query-refinement techniques such as query point movement and query expansion. Our algorithm, called PDFA (Pareto Front Depth Algorithm), uses a flexible threshold to obtain Pareto points up to depth k and reduces the search space by up to 70%.
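To make the dominance test and front-depth computation described above concrete, the following is a minimal Python sketch, assuming a standard strict-dominance convention over per-feature distance vectors; the function names (dominates, front_depths, select_up_to_depth) and the depth-threshold selection step are illustrative stand-ins and not the paper's PDFA implementation.

def dominates(u, v):
    # u dominates v when u is no worse in every distance dimension and
    # strictly better in at least one (a common strict-dominance convention;
    # the paper's exact definition may differ slightly).
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def front_depths(points):
    # Depth of a point = number of vertices on the longest chain of points
    # dominating it (depth 1 corresponds to the first Pareto front).
    n = len(points)
    memo = {}
    def depth(i):
        if i not in memo:
            memo[i] = 1 + max(
                (depth(j) for j in range(n) if dominates(points[j], points[i])),
                default=0,
            )
        return memo[i]
    return [depth(i) for i in range(n)]

def select_up_to_depth(points, k):
    # Keep only candidates whose front depth is at most k -- a sketch of the
    # depth-threshold idea behind PDFA, not the paper's exact procedure.
    return [i for i, d in enumerate(front_depths(points)) if d <= k]

# Each vector holds per-feature distances to the query image
# (e.g., one entry per feature such as color, texture, shape).
dists = [(0.1, 0.2), (0.2, 0.3), (0.3, 0.4), (0.5, 0.1)]
print(front_depths(dists))           # -> [1, 2, 3, 1]
print(select_up_to_depth(dists, 2))  # -> [0, 1, 3]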
Published
22-12-2016
How to Cite
[1] V. V. Hieu, “Content based image retrieval using multiple features and Pareto approach”, JCC, vol. 32, no. 2, pp. 169–187, Dec. 2016.
Issue
Vol. 32, No. 2 (2016)
Section
Computer Science
License
1. We hereby assign copyright of our article (the Work) in all forms of media, whether now known or hereafter developed, to the Journal of Computer Science and Cybernetics. We understand that the Journal of Computer Science and Cybernetics will act on my/our behalf to publish, reproduce, distribute and transmit the Work.
2. This assignment of copyright to the Journal of Computer Science and Cybernetics is made on the understanding that permission from the Journal of Computer Science and Cybernetics is not required for me/us to reproduce, republish or distribute copies of the Work in whole or in part. We will ensure that all such copies carry a notice of copyright ownership and reference to the original journal publication.
3. We warrant that the Work is our own work and has not been published before in its current or a substantially similar form, is not under consideration for another publication, does not contain any unlawful statements, and does not infringe any existing copyright.
4. We also warrant that we have obtained the necessary permission from the copyright holder/s to reproduce in the article any materials, including tables, diagrams or photographs, not owned by me/us.