Supervised learning with probabilistic morphisms and kernel mean embeddings

05/10/2023
by HΓ΄ng VΓ’n LΓͺ, et al.

In this paper I propose a concept of a correct loss function in a generative model of supervised learning for an input space 𝒳 and a label space 𝒴, both of which are measurable spaces. A correct loss function in a generative model of supervised learning must accurately measure the discrepancy between elements of a hypothesis space β„‹ of possible predictors and the supervisor operator, even when the supervisor operator does not belong to β„‹. To define correct loss functions, I propose a characterization of a regular conditional probability measure ΞΌ_𝒴|𝒳 for a probability measure ΞΌ on 𝒳×𝒴, relative to the projection Ξ _𝒳: 𝒳×𝒴→𝒳, as a solution of a linear operator equation. If 𝒴 is a separable metrizable topological space with the Borel Οƒ-algebra ℬ(𝒴), I propose an additional characterization of a regular conditional probability measure ΞΌ_𝒴|𝒳 as a minimizer of the mean square error on the space of Markov kernels (also called probabilistic morphisms) from 𝒳 to 𝒴; this characterization utilizes kernel mean embeddings. Building upon these results and employing inner measure to quantify the generalizability of a learning algorithm, I extend a result due to Cucker and Smale on the learnability of a regression model to the setting of conditional probability estimation. Additionally, I present a variant of Vapnik's regularization method for solving stochastic ill-posed problems that incorporates inner measure, and showcase its applications.
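For a concrete handle on the second characterization: by definition, ΞΌ_𝒴|𝒳 is a Markov kernel satisfying the disintegration ΞΌ(A Γ— B) = ∫_A ΞΌ_𝒴|𝒳(x)(B) dΞΌ_𝒳(x) for measurable A βŠ† 𝒳 and B βŠ† 𝒴, where ΞΌ_𝒳 = (Ξ _𝒳)_*ΞΌ. The sketch below shows how the standard ridge-regularized empirical conditional mean embedding estimates such a kernel from samples; it is an illustration of the general technique, not the paper's construction, and the Gaussian kernel, the regularization weight lam, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between the rows of A and B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def conditional_mean_embedding_weights(X, x_query, lam=1e-3, sigma=1.0):
    # Empirical conditional mean embedding: mu_{Y|X=x} is approximated by
    # sum_i w_i(x) k_Y(y_i, .), with weights w(x) = (K_X + n*lam*I)^{-1} k_X(x).
    n = X.shape[0]
    K_x = gaussian_kernel(X, X, sigma)        # n x n Gram matrix on inputs
    k_q = gaussian_kernel(X, x_query, sigma)  # n x m cross-kernel to query points
    # Tikhonov regularization (n*lam*I) stabilizes this ill-posed inversion.
    return np.linalg.solve(K_x + n * lam * np.eye(n), k_q)

# Toy usage: estimate the conditional expectation E[Y | X = 1] for Y = sin(X) + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 1))
Y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
W = conditional_mean_embedding_weights(X, x_query=np.array([[1.0]]))
print("estimated E[Y | X=1]:", float(W[:, 0] @ Y), "vs sin(1) =", np.sin(1.0))
```

Solving the regularized linear system (K_X + nΞ»I)w = k_X(x) is a finite-sample analogue of a linear operator equation for ΞΌ_𝒴|𝒳, and the Tikhonov term plays a role analogous to the Vapnik-style regularization of stochastic ill-posed problems discussed in the abstract.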


