Record type: journal article
Date: 09/2025
Keywords: Contrastive Learning; Federated Learning; Meta-Learning; Non-Independent and Identically Distributed (Non-IID)
Authors: Huan Zhang; Yuxiang Chen; Kuan-Ching Li; Yuhui Li; Sisi Zhou; Wei Liang; Aneta Poniszewska-Maranda
Title: Robust Federated Learning With Contrastive Learning and Meta-Learning
URL: https://www.ijimai.org/journal/bibcite/reference/3595
Pages: 1-14
Volume: In press
Abstract: Federated learning is regarded as an effective approach to addressing data-privacy concerns in the era of artificial intelligence, yet it faces the challenges of unbalanced data distribution and client vulnerability to attacks. Existing research addresses these challenges but ignores the situation in which abnormal updates account for a large proportion of all updates; the aggregated model may then absorb so much abnormal information that it deviates from the normal update direction, reducing model performance. Other methods are unsuitable for non-independent and identically distributed (non-IID) settings, where information on small-category data may be lacking and predictions therefore become inaccurate. In this work, we propose a robust federated learning architecture, called FedCM, which integrates contrastive learning and meta-learning to mitigate the impact of poisoned client data on global model updates. The approach refines features through contrastive learning, combining extracted data characteristics with the previous round's local models to improve accuracy.
Additionally, a meta-learning method based on Gaussian-noise model parameters is employed to fine-tune the local model using the global model, addressing the challenges posed by non-independent and identically distributed data and thereby enhancing the model's robustness. Experimental validation is conducted on real datasets, including CIFAR10, CIFAR100, and SVHN. The results show that FedCM achieves the highest average model accuracy across all proportions of attacked clients. For a non-IID distribution with parameter 0.5 on CIFAR10, under attacked-client proportions of 0.2, 0.5, and 0.8, FedCM improves average accuracy over the baseline methods by 8.2%, 7.9%, and 4.6%, respectively. Across the different proportions of attacked clients, FedCM achieves improvements in average accuracy of at least 4.6%, 5.2%, and 0.45% on the CIFAR10, CIFAR100, and SVHN datasets, respectively. FedCM also converges faster in all training groups, with a particularly clear advantage on SVHN, where the number of training rounds required for convergence is reduced by approximately 34.78% compared to other methods.
ISSN: 1989-1660
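The abstract describes two mechanisms: a contrastive objective that combines current features with those produced by earlier models, and a fine-tuning step that perturbs model parameters with Gaussian noise. The paper itself is not reproduced here, so the following is only a minimal sketch of one plausible reading, styled after model-contrastive federated learning: the current features are pulled toward the global model's features and pushed away from the previous round's local features, and the received global parameters are noised before local fine-tuning. All function names (`contrastive_loss`, `noisy_finetune_init`) and the exact loss form are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity with a small epsilon to avoid division by zero.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def contrastive_loss(z, z_prev_local, z_global, tau=0.5):
    """InfoNCE-style loss (an assumed form, not the paper's exact one):
    treat the global model's features as the positive and the previous
    round's local features as the negative for the current features z."""
    pos = np.exp(cosine(z, z_global) / tau)
    neg = np.exp(cosine(z, z_prev_local) / tau)
    return -np.log(pos / (pos + neg))

def noisy_finetune_init(global_params, sigma=0.01, rng=None):
    """Perturb the received global parameters with Gaussian noise before
    local fine-tuning -- one possible reading of the abstract's
    'meta-learning based on Gaussian-noise model parameters'."""
    rng = rng or np.random.default_rng()
    return global_params + rng.normal(0.0, sigma, size=global_params.shape)
```

Under this reading, a benign client whose features agree with the global model incurs a small contrastive penalty, while features that drift toward a stale or poisoned local model incur a larger one; the noise-perturbed initialization makes the subsequent fine-tune less sensitive to any single received parameter vector.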