Differentially Private Stochastic Convex Optimization (DP-SCO) has been extensively studied in recent years. However, most previous work can only handle either regular data distributions or irregular data in the low-dimensional case. To better understand the challenges arising from irregular data distributions, in this paper we provide the first study of DP-SCO with heavy-tailed data in the high-dimensional space. We show that if the loss function is smooth and its gradient has bounded second-order moment, it is possible to get a (high-probability) error bound (excess population risk) of $\tilde{O}(\frac{\log d}{(n\epsilon)^{\frac{1}{3}}})$ in the $\epsilon$-DP model, where $n$ is the sample size and $d$ is the dimensionality of the underlying space. In the second part of the paper, we study sparse learning with heavy-tailed data. We first revisit the sparse linear model and propose a truncated DP-IHT method whose output could achieve an error of $\tilde{O}(\frac{s^{*2}\log d}{n\epsilon})$, where $s^*$ is the sparsity of the underlying parameter.
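To make the DP-IHT idea concrete, here is a minimal sketch of a generic noisy iterative hard thresholding loop for the sparse linear model: per-sample gradients are clipped (a simplified stand-in for the paper's truncation step, which handles heavy tails), noise is added to the averaged gradient, and the iterate is hard-thresholded to the top-$s$ coordinates. All parameter names and noise/clipping choices here are illustrative assumptions, not the authors' calibrated mechanism.

```python
import numpy as np

def hard_threshold(v, s):
    # Keep the s largest-magnitude entries of v; zero out the rest.
    idx = np.argsort(np.abs(v))[-s:]
    out = np.zeros_like(v)
    out[idx] = v[idx]
    return out

def noisy_iht(X, y, s, T=50, eta=0.1, clip=1.0, noise_scale=0.01, seed=0):
    """Generic noisy IHT sketch for sparse linear regression (illustrative;
    noise_scale is NOT calibrated to any formal (epsilon, delta) guarantee)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(T):
        # Per-sample gradients of the squared loss, clipped to bound sensitivity.
        resid = X @ theta - y                      # shape (n,)
        grads = resid[:, None] * X                 # shape (n, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)
        # Noisy averaged gradient, then a gradient step and hard thresholding.
        g = grads.mean(axis=0) + noise_scale * rng.standard_normal(d)
        theta = hard_threshold(theta - eta * g, s)
    return theta
```

By construction the output is always $s$-sparse, since every iterate passes through the hard-thresholding step.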

Authors: Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang