We analyze the generalization properties of two-layer neural networks in...
Random feature approximation is arguably one of the most popular techniq...
We consider the problem of learning a linear operator θ between two Hilb...
We consider distributed learning using constant stepsize SGD (DSGD) over...
In this paper we consider algorithm unfolding for the Multiple Measureme...
While large training datasets generally offer improvement in model perfo...
Optimization was recently shown to control the inductive bias in a learn...
Stochastic gradient descent (SGD) provides a simple and efficient way to...
Stochastic Gradient Descent (SGD) has become the method of choice for so...
We study reproducing kernel Hilbert spaces (RKHS) on a Riemannian mani...
In the setting of supervised learning using reproducing kernel methods, ...
A common strategy to train deep neural networks (DNNs) is to use very la...
While stochastic gradient descent (SGD) is one of the major workhorses i...
We address the problem of adaptivity in the framework of reproducing ke...
We investigate if kernel regularization methods can achieve minimax conv...
We consider a distributed learning approach in supervised learning for a...
We consider a statistical inverse learning problem, where we observe the...