Performance and Convergence Analysis of Modified C-Means Using Jeffreys-Divergence for Clustering

Authors: Ayan Seal, Aditya Karlekar, Ondrej Krejcar, Enrique Herrera-Viedma
Published: 12/2021, Volume 7, pp. 141-149
ISSN: 1989-1660
URL: https://www.ijimai.org/journal/sites/default/files/2021-11/ijimai7_2_13_0.pdf
Keywords: C-means; Clustering; Convergence; Jeffreys-Divergence; Similarity Measure

Abstract: The volume of data generated every day across the globe is astonishing, driven by the growth of the Internet of Things, so clustering techniques are commonly used to uncover important hidden facts in such massive data. However, non-linear relations, which remain largely unexplored compared to linear correlations, are more widespread in high-throughput data. Non-linear links can often model large amounts of data more precisely and highlight critical trends and patterns. Moreover, selecting an appropriate similarity measure has long been a well-known issue in data clustering. In this work, a non-Euclidean similarity measure is proposed, based on the non-linear Jeffreys-divergence (JS). We subsequently develop c-means using the proposed JS (J-c-means). The various properties of the JS and J-c-means are discussed. All analyses were carried out on several real-life and synthetic databases. The obtained outcomes show that J-c-means empirically outperforms some cutting-edge c-means algorithms.
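The abstract does not spell out the J-c-means update rules, but the core idea — replacing the Euclidean distance in c-means with the (symmetric) Jeffreys divergence — can be illustrated with a minimal sketch. The function names (`jeffreys_divergence`, `j_c_means`), the hard cluster assignments, and the deterministic first-`c`-points initialization below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def jeffreys_divergence(p, q, eps=1e-12):
    """Jeffreys (symmetric KL) divergence between two non-negative vectors.

    Both vectors are normalized to probability distributions; eps guards log(0).
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum((p - q) * (np.log(p) - np.log(q))))

def j_c_means(X, c, n_iter=20):
    """Hard c-means with Jeffreys divergence as dissimilarity (illustrative sketch).

    Uses the first c points as initial centers for determinism; the paper's
    J-c-means may differ in initialization and membership updates.
    """
    X = np.asarray(X, dtype=float)
    centers = X[:c].copy()
    for _ in range(n_iter):
        # Assign each point to the center with the smallest Jeffreys divergence.
        labels = np.array([
            int(np.argmin([jeffreys_divergence(x, m) for m in centers]))
            for x in X
        ])
        # Update each center as the mean of its assigned points.
        for k in range(c):
            mask = labels == k
            if mask.any():
                centers[k] = X[mask].mean(axis=0)
    return labels, centers
```

On two well-separated groups of normalized feature vectors, this sketch recovers the expected partition; the divergence is symmetric and vanishes only for identical distributions, which is what makes it usable as a similarity measure in place of the Euclidean norm.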