A new approach to anti-forgery using saliency guided image watermarking
DOI: https://doi.org/10.15625/1813-9663/20178
Keywords: Image watermarking, anti-forgery, saliency, learning
Abstract
Using various image characteristics as a secret key, the cryptanalytic watermarking method is known to strengthen authentication systems and to protect against forgery attempts when information is concealed within a host image. This study introduces a new approach that derives a secret key from saliency features and then uses this key as a parameter for embedding and extracting watermarks. Because watermark embedding alters the image features themselves, we propose combining learning techniques with saliency models to keep watermark extraction robust. The proposed image watermarking technique incorporates support vector machine (SVM) learning and multiple saliency models. Our findings show that the cryptanalytic watermarking method preserves the watermark's invisibility and stability, and experiments on a standard dataset demonstrate the benefits of the saliency feature-based approach for anti-forgery.
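As a rough illustration of the pipeline the abstract describes, the Python sketch below derives a block-level secret key from a saliency map, embeds watermark bits into the selected blocks, and trains an SVM on block features so that the same regions can be re-identified at extraction time even after the image has been modified. It is only a minimal sketch under our own assumptions: the spectral-residual saliency, the per-block features, the LSB-style embedding rule and the RBF SVM are stand-in choices for illustration, not the specific saliency models, embedding scheme or parameters used in the paper.

import numpy as np
from sklearn.svm import SVC

def spectral_residual_saliency(gray):
    # Spectral-residual saliency map (a stand-in for the paper's saliency models).
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp, phase = np.log1p(np.abs(f)), np.angle(f)
    pad = np.pad(log_amp, 1, mode="edge")  # 3x3 mean filter of the log amplitude
    smooth = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    sal = np.abs(np.fft.ifft2(np.exp(log_amp - smooth) * np.exp(1j * phase))) ** 2
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

def block_features(gray, block=8):
    # Per-block intensity and saliency statistics, used for key selection and SVM training.
    sal = spectral_residual_saliency(gray)
    feats, coords = [], []
    for y in range(0, gray.shape[0] - block + 1, block):
        for x in range(0, gray.shape[1] - block + 1, block):
            g = gray[y:y + block, x:x + block].astype(np.float64)
            s = sal[y:y + block, x:x + block]
            feats.append([g.mean(), g.std(), s.mean(), s.std()])
            coords.append((y, x))
    return np.array(feats), coords

def make_key(gray, block=8, top_frac=0.25):
    # Secret key = indices of the most salient blocks.
    feats, coords = block_features(gray, block)
    k = max(1, int(top_frac * len(feats)))
    return set(np.argsort(-feats[:, 2])[:k].tolist()), coords

def embed(gray, bits, key, coords):
    # Hypothetical embedding rule: one watermark bit in the LSB of each key block's corner pixel.
    marked = gray.copy()
    for bit, idx in zip(bits, sorted(key)):
        y, x = coords[idx]
        marked[y, x] = np.uint8((int(marked[y, x]) & 0xFE) | int(bit))
    return marked

def train_detector(gray, key, block=8):
    # SVM that re-identifies the key (salient) blocks after the image has been altered.
    feats, _ = block_features(gray, block)
    labels = np.array([int(i in key) for i in range(len(feats))])
    return SVC(kernel="rbf", gamma="scale").fit(feats, labels)

def extract(marked, clf, n_bits, block=8):
    # Recover bits from the blocks the SVM classifies as key blocks.
    feats, coords = block_features(marked, block)
    idxs = sorted(np.flatnonzero(clf.predict(feats) == 1).tolist())[:n_bits]
    return [int(marked[coords[i][0], coords[i][1]]) & 1 for i in idxs]

In this sketch, embedding would be marked = embed(gray, bits, *make_key(gray)), with train_detector fitted on the host image and extract applied to the (possibly attacked) marked image; the paper itself evaluates the idea with several saliency models and reports invisibility and robustness on a standard dataset.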
License
1. We hereby assign copyright of our article (the Work) in all forms of media, whether now known or hereafter developed, to the Journal of Computer Science and Cybernetics. We understand that the Journal of Computer Science and Cybernetics will act on my/our behalf to publish, reproduce, distribute and transmit the Work.
2. This assignment of copyright to the Journal of Computer Science and Cybernetics is done so on the understanding that permission from the Journal of Computer Science and Cybernetics is not required for me/us to reproduce, republish or distribute copies of the Work in whole or in part. We will ensure that all such copies carry a notice of copyright ownership and reference to the original journal publication.
3. We warrant that the Work is our own results and has not been published before in its current or a substantially similar form, is not under consideration for another publication, does not contain any unlawful statements and does not infringe any existing copyright.
4. We also warrant that we have obtained the necessary permission from the copyright holder/s to reproduce in the article any materials including tables, diagrams or photographs not owned by me/us.