The surge in data generated by IoT sensors has increased the need for scalable and efficient data analysis methods, particularly for robust techniques such as quantile regression, which can be tailored to a variety of settings, including nonlinear relationships, heavy-tailed distributions, and outliers. This paper presents a subgradient-based algorithm for distributed quantile regression with non-convex, non-smooth sparse penalties such as the Minimax Concave Penalty (MCP) and the Smoothly Clipped Absolute Deviation (SCAD) penalty. These penalties selectively shrink non-active coefficients towards zero, addressing the bias limitations of traditional penalties such as the l1-penalty in sparse models. Existing quantile regression algorithms with non-convex penalties are designed for the centralized case, whereas our proposed method applies to distributed quantile regression with non-convex penalties, thereby improving estimation accuracy. We provide a convergence proof for the proposed algorithm and demonstrate through numerical simulations that it outperforms state-of-the-art algorithms in sparse and moderately sparse scenarios.
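To make the setting concrete, the following is a minimal centralized sketch of subgradient descent for SCAD-penalized quantile regression. The function names, the diminishing step size, and the SCAD parameter a = 3.7 are illustrative assumptions, not the paper's actual algorithm; the distributed variant described in the abstract would additionally exchange or average subgradients across nodes.

```python
import numpy as np

def scad_deriv(beta, lam, a=3.7):
    """Elementwise derivative of the SCAD penalty at beta (a=3.7 is a common default)."""
    t = np.abs(beta)
    d = np.where(t <= lam, lam,
                 np.where(t <= a * lam, (a * lam - t) / (a - 1), 0.0))
    return d * np.sign(beta)

def penalized_subgradient(X, y, beta, tau, lam):
    """A subgradient of the SCAD-penalized quantile (check) loss at beta."""
    r = y - X @ beta
    psi = tau - (r < 0).astype(float)      # subgradient of the check loss rho_tau
    return -X.T @ psi / len(y) + scad_deriv(beta, lam)

def subgrad_quantreg(X, y, tau=0.5, lam=0.1, iters=500):
    """Plain subgradient descent with a diminishing step size (centralized sketch)."""
    beta = np.zeros(X.shape[1])
    for k in range(1, iters + 1):
        beta -= (1.0 / np.sqrt(k)) * penalized_subgradient(X, y, beta, tau, lam)
    return beta
```

Because SCAD is flat beyond a*lam, large active coefficients receive no shrinkage, which is the bias advantage over the l1-penalty mentioned above.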
Funding: Research Council of Norway