Protecting Trained Models in Privacy-Preserving Federated Learning

The last two posts in our series covered techniques for input privacy in privacy-preserving federated learning, in the context of horizontally and vertically partitioned data. To build a complete privacy-preserving federated learning system, these techniques must be combined with an approach for output privacy, which limits how much can be learned about individuals in the training data after the model has been trained.
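One standard way to provide output privacy is differential privacy: calibrated random noise is added to the quantity being released (here, the trained model's parameters) so that the release reveals only a bounded amount about any single individual in the training data. The sketch below is purely illustrative and not from the post itself; the function name and parameters are hypothetical, and it applies the classic Gaussian mechanism, which assumes a known bound (the sensitivity) on how much one person's data can change the weights.

```python
import numpy as np

def release_with_gaussian_noise(weights, sensitivity, epsilon, delta, rng=None):
    """Illustrative (epsilon, delta)-differentially-private release of model
    weights via the Gaussian mechanism. `sensitivity` is an assumed bound on
    how much any one individual's data can change the weight vector."""
    rng = rng or np.random.default_rng()
    # Noise scale for the Gaussian mechanism (valid for epsilon <= 1):
    # sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return weights + rng.normal(loc=0.0, scale=sigma, size=weights.shape)

# Hypothetical trained weights; smaller epsilon means more noise, more privacy.
trained = np.array([0.52, -1.17, 3.30])
private = release_with_gaussian_noise(trained, sensitivity=0.1,
                                      epsilon=1.0, delta=1e-5)
```

In a federated setting this noise can be added centrally by the server before publishing the model, or during training itself (as in DP-SGD); either way, the goal is the same: bounding what the released model leaks about any one participant.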

Source: NSTIC Blog
