Peer-to-peer Approach for Distributed Privacy-preserving Deep Learning
The revolutionary advances in machine learning and artificial intelligence have enabled people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making. Deep learning is among the most effective and cost-efficient supervised machine learning approaches, and it now underpins applications such as self-driving cars, medical diagnosis systems, automatic speech recognition, machine translation, and text-to-speech conversion. On the other hand, the success of deep learning depends, among other factors, on the large volume of data available for training the model. Depending on the application domain, the training data may contain sensitive and private information whose privacy must be preserved. One of the challenges that needs to be addressed in deep learning is therefore how to preserve the privacy of the training data without sacrificing the accuracy of the model. In this work, we propose, design, and implement a decentralized deep learning system with a peer-to-peer architecture that enables multiple data owners to jointly train deep learning models without disclosing their training data to one another, while still benefiting from each other's datasets by exchanging model parameters during training. We implemented our approach using two popular deep learning frameworks, Keras and TensorFlow, and evaluated it on two datasets widely used in the deep learning community, MNIST and Fashion-MNIST. Using our approach, we were able to train models whose accuracy is close to that of models trained in a privacy-violating setting, while preserving the privacy of the training data.
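The abstract does not spell out the parameter-exchange rule. As a minimal sketch, assuming peers periodically average their parameters element-wise after local training (a common choice in decentralized learning; the function and peer names below are hypothetical, not from the paper), the exchange step might look like:

```python
def average_peer_weights(peer_weights):
    """Element-wise average of model parameters across peers.

    Each peer contributes a list of floats representing its model's
    parameters (e.g. a flattened weight vector). Each peer would load
    the returned average back into its model before the next local
    training round, so raw training data is never shared.
    """
    n_peers = len(peer_weights)
    return [sum(params) / n_peers for params in zip(*peer_weights)]

# Two hypothetical peers exchange parameters after a local epoch.
peer_a = [1.0, 2.0, 0.0]
peer_b = [3.0, 4.0, 2.0]
merged = average_peer_weights([peer_a, peer_b])
print(merged)  # [2.0, 3.0, 1.0]
```

In a Keras setting, the per-peer parameter lists would come from `Model.get_weights()` and be restored with `Model.set_weights()`; the averaging logic itself is framework-agnostic.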
Copyright (c) 2021 International Journal of Computer (IJC)
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.