2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]
In recent years, there has been a significant surge in the development of artificial intelligence, with machine learning emerging as a fundamental aspect of its applications. Machine learning algorithms enable systems to learn from data and make predictions or decisions without explicit programming. In distributed environments, where data is spread across multiple nodes, decentralized learning methods have become increasingly prevalent. These methods allow collaborative model training without centralizing the data, offering benefits such as scalability, privacy, and efficiency. To ensure convergence and accuracy of the learned models, achieving consensus among distributed nodes is paramount. Consensus mechanisms enable nodes to agree on a common model despite variations in local data distributions and computational resources, forming the backbone of decentralized learning systems. Thus, the development of efficient consensus protocols is essential for realizing the potential of decentralized learning in domains ranging from IoT applications to large-scale data analytics.
This thesis explores strategies to minimize the communication cost in wireless multi-agent systems. It examines the potential of leveraging the broadcast nature of wireless networks, focusing on two frameworks: distributed average consensus and decentralized learning.
In distributed average consensus, wherein nodes aim to converge to the average of their initial values despite communication limitations, a novel probabilistic scheduling approach is proposed. This approach streamlines communication by selecting a subset of nodes to broadcast information to their neighbors in each iteration. Various heuristic methods for determining node broadcast probabilities are evaluated, alongside the introduction of a pre-compensation technique to mitigate potential bias. These contributions shed light on the design of communication-efficient consensus protocols tailored to wireless environments with restricted resources.
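The probabilistic broadcast scheduling described above can be illustrated with a minimal simulation. The sketch below is a generic broadcast-gossip loop, not the thesis's exact protocol: in each iteration every node broadcasts with some probability, and listening neighbors move their values toward the broadcast value. The uniform probability `p` and step size are illustrative assumptions (the thesis studies heuristics for per-node probabilities and a pre-compensation step to correct the bias this simple scheme can introduce).

```python
import random

def broadcast_consensus(x, neighbors, p, step=0.5, iters=2000, seed=0):
    """Simulate probabilistic broadcast gossip for average consensus.

    x         : list of initial node values
    neighbors : adjacency list, neighbors[i] = indices of node i's neighbors
    p         : per-node broadcast probability in each iteration (illustrative)
    """
    rng = random.Random(seed)
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            # Node i broadcasts with probability p; its neighbors
            # pull their values toward the received value.
            if rng.random() < p:
                xi = x[i]
                for j in neighbors[i]:
                    x[j] += step * (xi - x[j])
    return x
```

Run on a 4-node ring, the values reach consensus; note that with plain broadcast gossip the consensus value equals the true average only in expectation, which is the bias the pre-compensation technique targets.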
Transitioning to decentralized learning, the thesis introduces BASS (Broadcast-based Subgraph Sampling) to expedite the convergence of D-SGD (decentralized stochastic gradient descent) while accounting for communication overhead. By generating a set of mixing matrix candidates that represent sparse subgraphs of the network topology, BASS activates a collision-free subset of nodes in each iteration, optimizing communication efficiency. Optimizing the sampling probabilities and the mixing matrices significantly enhances convergence speed and resource utilization compared to existing approaches. These findings underscore the inherent advantages of leveraging the broadcast capabilities of wireless channels to enhance the efficiency of decentralized optimization and learning algorithms in distributed systems.
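The core iteration underlying this framework can be sketched as a generic D-SGD step with a sampled mixing matrix. This is a simplified illustration, not the BASS algorithm itself: the candidate matrices, sampling probabilities, and quadratic local losses below are placeholder assumptions standing in for the optimized quantities described in the abstract.

```python
import numpy as np

def dsgd_step(X, W, grads, lr=0.1):
    """One D-SGD iteration: mix neighboring models with mixing matrix W,
    then take a local stochastic-gradient step at each node.

    X     : (n_nodes, dim) array of local models
    W     : (n_nodes, n_nodes) doubly stochastic mixing matrix
    grads : grads[i] returns node i's local gradient at a given model
    """
    X_mixed = W @ X
    G = np.stack([grads[i](X_mixed[i]) for i in range(len(X))])
    return X_mixed - lr * G

def run(X, candidates, probs, grads, iters=500, lr=0.1, seed=0):
    """Each iteration, sample one sparse mixing-matrix candidate
    (standing in for a collision-free subgraph) and apply a D-SGD step."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        W = candidates[rng.choice(len(candidates), p=probs)]
        X = dsgd_step(X, W, grads, lr)
    return X
```

With doubly stochastic candidates, the network-wide average model follows an ordinary gradient descent on the average loss, which is why the choice of candidates and sampling probabilities governs how fast individual nodes track that average.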
Place, publisher, year, edition, pages
Linköping: Linköping University Electronic Press, 2024. p. 34
Series
Linköping Studies in Science and Technology. Licentiate Thesis, ISSN 0280-7971 ; 2004
National Category
Communication Systems
Identifiers
urn:nbn:se:liu:diva-207357 (URN)
10.3384/9789180757867 (DOI)
9789180757850 (ISBN)
9789180757867 (ISBN)
Presentation
2024-10-11, Systemet, B Building, Campus Valla, Linköping, 10:15 (English)
2024-09-06 Bibliographically approved