FedQP: Large-Scale Private and Flexible Federated Query Processing

State-of-the-art federated learning coordinates stochastic gradient descent across clients to refine shared model parameters while protecting individual datasets. Current methods require a uniform data model and are vulnerable to privacy attacks such as model inversion. The key challenge is designing algorithms that efficiently aggregate client input, minimize data exposure, and maximize model adaptability. We propose a new scheme for aggregating local, diverse, and independent data models, addressing data inconsistency and model-inversion attacks. The proposed scheme, federated query processing (FedQP), enables clients to build local models independently, without coordinating with the server or other clients. The server communicates with selected clients only when predicting future values. Our scheme requires only one communication round between each client and the server, for model initialization, and no training phase. This article presents the methodology, design, and evaluation of a performance-efficient implementation of FedQP. We show that the caching algorithm in FedQP reduces redundant client communications during query processing. In addition, we present a privacy analysis showing that our method outperforms prominent gradient-based approaches to federated learning. Our experiments show that FedQP consistently achieves higher classification accuracy in non-IID settings and demonstrates stronger resilience to reconstruction attacks than gradient-based methods such as federated averaging (FedAvg).
Almohri, H. M. J., and Layne T. Watson. “FedQP: Large-Scale Private and Flexible Federated Query Processing.” IEEE Access, accepted, 2025.
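
The abstract describes an architecture in which clients fit local models on their own data, register with the server in a single initialization round, and the server answers prediction queries by contacting selected clients and caching their responses. The Python sketch below illustrates that query flow under stated assumptions: the Client and Server classes, the nearest-centroid local model, the majority vote over selected clients, and the query-keyed cache are hypothetical choices for illustration only, not the FedQP implementation from the paper.

# Minimal, self-contained sketch of the query flow described in the abstract.
# The class names, the local model (nearest centroid), and the cache design
# are illustrative assumptions, not the authors' implementation.
from collections import Counter
import random


class Client:
    """Holds private data and fits a local model with no coordination."""

    def __init__(self, client_id, samples):
        # samples: list of (feature_vector, label) pairs, never shared.
        self.client_id = client_id
        self._samples = samples
        self._centroids = self._fit()  # local model, built independently

    def _fit(self):
        # Nearest-centroid model: one mean feature vector per label.
        by_label = {}
        for x, y in self._samples:
            by_label.setdefault(y, []).append(x)
        return {y: [sum(col) / len(xs) for col in zip(*xs)]
                for y, xs in by_label.items()}

    def register(self):
        # Single initialization round: expose only coarse metadata,
        # not raw data and not gradients.
        return {"client_id": self.client_id, "labels": sorted(self._centroids)}

    def answer(self, query):
        # Predict locally; only the predicted label leaves the client.
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(query, c))
        return min(self._centroids, key=lambda y: dist(self._centroids[y]))


class Server:
    """Routes queries to selected clients and caches their answers."""

    def __init__(self, clients):
        self.clients = {c.client_id: c for c in clients}
        self.registry = [c.register() for c in clients]  # one round per client
        self.cache = {}                                   # query -> prediction
        self.remote_calls = 0

    def predict(self, query, k=3):
        key = tuple(query)
        if key in self.cache:            # cached: no client is contacted
            return self.cache[key]
        selected = random.sample(list(self.clients.values()),
                                 min(k, len(self.clients)))
        votes = Counter(c.answer(query) for c in selected)
        self.remote_calls += len(selected)
        prediction = votes.most_common(1)[0][0]
        self.cache[key] = prediction
        return prediction


if __name__ == "__main__":
    random.seed(0)
    # Two clients with disjoint (non-IID) label distributions.
    c1 = Client("c1", [([0.1, 0.2], "a"), ([0.0, 0.3], "a")])
    c2 = Client("c2", [([0.9, 0.8], "b"), ([1.0, 0.7], "b")])
    server = Server([c1, c2])
    print(server.predict([0.05, 0.25]))   # contacts selected clients
    print(server.predict([0.05, 0.25]))   # served from the cache
    print("remote calls:", server.remote_calls)

Running the example contacts clients only for the first query; the repeated query is answered from the cache, which is the effect the abstract attributes to FedQP's caching algorithm.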