Learning Models for Semantic Classification of Insufficient Plantar Pressure Images

Authors

  • Yao Wu, Wenzhou Business College
  • Qun Wu, Zhejiang Sci-Tech University
  • Nilanjan Dey, Techno India College of Technology
  • R. Simon Sherratt, University of Reading

DOI:

https://doi.org/10.9781/ijimai.2020.02.005

Keywords:

Image Processing, Artificial Neural Networks, Image Analysis, Machine Learning, Feature Extraction, Image Classification

Supporting Agencies

This work is supported by the Science Foundation of the Ministry of Education of China (Grant No. 18YJC760099) and the Zhejiang Provincial Key Laboratory of Integration of Healthy Smart Kitchen System (Grant No. 19080049-N).

Abstract

Establishing a reliable and stable model to predict a target from insufficient labeled samples is feasible and effective, particularly for sensor-generated data-sets. This paper is inspired by learning algorithms for insufficient data-sets, such as metric-based methods, prototype networks, and meta-learning, and we therefore propose a transfer model learning method for insufficient data-sets. Firstly, two basic models for transfer learning are introduced, followed by a classification system and its calculation criteria. Secondly, a data-set of plantar pressure images for comfort shoe design is acquired and preprocessed through a foot scan system; using a pre-trained convolutional neural network (CNN) employing AlexNet and CNN-based transfer modeling, the classification accuracy of the plantar pressure images exceeds 93.5%. Finally, the proposed method is compared with the current classifiers VGG, ResNet, AlexNet, and a pre-trained CNN. Our work is also compared with the known-scaling and shifting (SS) and unknown-plain slot (PS) partition methods on the public test databases SUN, CUB, AWA1, AWA2, and aPY, with indices of precision (tr, ts, H) and time (training and evaluation). The proposed method shows high performance on most indices for the plantar pressure classification task when compared with other methods. The transfer learning-based method can be applied to other insufficient data-sets in sensor imaging fields.
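
The transfer-learning setup outlined in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example (not the authors' implementation) of fine-tuning an ImageNet pre-trained AlexNet on a small plantar pressure data-set, together with the harmonic-mean index H commonly formed from the tr and ts accuracies; the class count, optimizer, and learning rate are assumptions.

    # Minimal sketch, assuming PyTorch/torchvision; not the authors' code.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 3  # hypothetical number of plantar pressure classes

    # Load an AlexNet pre-trained on ImageNet, freeze its convolutional
    # backbone, and replace the final layer so that only the new head is
    # trained on the insufficient (small) labeled data-set.
    model = models.alexnet(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False
    model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

    optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def harmonic_mean(tr: float, ts: float) -> float:
        """H index combining seen-class (tr) and unseen-class (ts) accuracy."""
        return 2 * tr * ts / (tr + ts) if (tr + ts) > 0 else 0.0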

Published

2020-03-01

How to Cite

Wu, Y., Wu, Q., Dey, N., and Sherratt, R. S. (2020). Learning Models for Semantic Classification of Insufficient Plantar Pressure Images. International Journal of Interactive Multimedia and Artificial Intelligence, 6(1), 51–61. https://doi.org/10.9781/ijimai.2020.02.005
