Federated learning enables mutually distrusting participants to collaboratively learn a distributed machine learning model without revealing anything but the model's output. In this paper, we consider an honest-but-curious adversary who participates in training a distributed ML model, does not deviate from the defined learning protocol, but attempts to infer private training data from the legitimately received information. We design and implement two practical attacks, the reverse sum attack and the reverse multiplication attack, neither of which affects the accuracy of the learned model. We also experimentally show that the leaked information is as effective as the raw training data by training an alternative classifier on the leaked data. We further discuss potential countermeasures and their challenges, which we hope may lead to several promising research directions.
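The abstract's evaluation idea can be illustrated with a minimal sketch (not from the paper; the synthetic data and the additive-noise "leak" model are assumptions for demonstration only): train one classifier on raw features and another on a noisy reconstruction of them, then compare test accuracy.

```python
import numpy as np

# Illustrative sketch: is a classifier trained on "leaked" data about as
# accurate as one trained on the raw data? All data here is synthetic.
rng = np.random.default_rng(0)

def make_data(n=600):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    return X, y

def train_logreg(X, y, lr=0.1, epochs=300):
    # Plain batch gradient descent on the logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(float) == y)

X_train, y_train = make_data()
X_test, y_test = make_data()

# Hypothetical "leaked" view of the training data: raw features plus
# small Gaussian noise (a stand-in for whatever the attack recovers).
X_leaked = X_train + rng.normal(scale=0.1, size=X_train.shape)

w_raw, b_raw = train_logreg(X_train, y_train)
w_leak, b_leak = train_logreg(X_leaked, y_train)

acc_raw = accuracy(w_raw, b_raw, X_test, y_test)
acc_leak = accuracy(w_leak, b_leak, X_test, y_test)
```

If the leak is high-fidelity, the two accuracies should be close, which is the comparison the abstract describes.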

Author(s) : Haiqin Weng, Juntao Zhang, Feng Xue, Tao Wei, Shouling Ji, Zhiyuan Zong



