API Library
Module
AugmentedGaussianProcesses.AugmentedGaussianProcesses — Module

General framework for data-augmented Gaussian processes.
Model Types
AugmentedGaussianProcesses.GP — Type

GP(args...; kwargs...)

Gaussian Process
Arguments
- `X`: input features; should be a matrix N×D where N is the number of observations and D the number of dimensions
- `y`: input labels; can be either a vector of labels for multiclass and single-output models, or a matrix for multi-output models (note that only one likelihood can be applied)
- `kernel`: covariance function; can be either a single kernel, or a collection of kernels for multiclass and multi-output models
Keyword arguments
- `noise`: variance of the likelihood
- `opt_noise`: flag for optimizing the variance using the formula σ² = Σ(y − f)²/N
- `mean`: `PriorMean` object; see the documentation on `MeanPrior`
- `verbose`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `ADAM(0.001)`
- `IndependentPriors`: flag for setting independent or shared parameters among the latent GPs
- `atfrequency`: number of variational-parameter updates between hyperparameter optimizations
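A minimal sketch of exact GP regression (not from the package docs; the data, kernel choice, and iteration count are illustrative assumptions, and KernelFunctions.jl is assumed to be installed):

```julia
# Illustrative sketch only: exact GP regression on toy data.
using AugmentedGaussianProcesses
using KernelFunctions

X = rand(100, 2)                          # 100 observations, 2 features (N×D)
y = sin.(X[:, 1]) .+ 0.1 .* randn(100)    # noisy targets

# Exact GP with a squared-exponential kernel; the noise variance
# is optimized during training because of `opt_noise=true`.
model = GP(X, y, SqExponentialKernel(); noise=1e-3, opt_noise=true)
train!(model; iterations=20)
```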
AugmentedGaussianProcesses.VGP — Type

VGP(args...; kwargs...)

Variational Gaussian Process
Arguments
- `X::AbstractArray`: input features; if `X` is a matrix, the colwise/rowwise layout is given by the `obsdim` keyword
- `y::AbstractVector`: output labels
- `kernel::Kernel`: covariance function; can be any kernel from KernelFunctions.jl
- `likelihood`: likelihood of the model; for compatibilities, see Likelihood Types
- `inference`: inference for the model; see the Compatibility Table
Keyword arguments
- `verbose`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `ADAM(0.001)`
- `atfrequency::Int=1`: number of variational-parameter updates between hyperparameter optimizations
- `mean=ZeroMean()`: `PriorMean` object; see the documentation on `MeanPrior`
- `obsdim::Int=1`: layout of the data; 1: X ∈ D×N, 2: X ∈ N×D
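A hedged sketch of non-conjugate classification with a VGP (data, kernel, and iteration count are made up for illustration; KernelFunctions.jl is assumed to be available):

```julia
# Illustrative sketch only: binary classification with a VGP.
using AugmentedGaussianProcesses
using KernelFunctions

X = rand(200, 2)
y = sign.(X[:, 1] .- 0.5)    # binary labels in {-1, 1}

# Logistic likelihood made conjugate via augmentation + analytic VI
model = VGP(X, y, SqExponentialKernel(), LogisticLikelihood(), AnalyticVI())
train!(model; iterations=50)
```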
AugmentedGaussianProcesses.MCGP — Type

MCGP(args...; kwargs...)

Monte Carlo Gaussian Process
Arguments
- `X::AbstractArray`: input features; if `X` is a matrix, the colwise/rowwise layout is given by the `obsdim` keyword
- `y::AbstractVector`: output labels
- `kernel::Kernel`: covariance function; can be any kernel from KernelFunctions.jl
- `likelihood`: likelihood of the model; for compatibilities, see Likelihood Types
- `inference`: inference for the model; at the moment only `GibbsSampling` is available (see the Compatibility Table)
Keyword arguments
- `verbose::Int`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `ADAM(0.001)`
- `atfrequency::Int=1`: number of variational-parameter updates between hyperparameter optimizations
- `mean=ZeroMean()`: `PriorMean` object; see the documentation on `MeanPrior`
- `obsdim::Int=1`: layout of the data; 1: X ∈ D×N, 2: X ∈ N×D
AugmentedGaussianProcesses.SVGP — Type

SVGP(args...; kwargs...)

Sparse Variational Gaussian Process
Arguments
- `X::AbstractArray`: input features; if `X` is a matrix, the colwise/rowwise layout is given by the `obsdim` keyword
- `y::AbstractVector`: output labels
- `kernel::Kernel`: covariance function; can be any kernel from KernelFunctions.jl
- `likelihood`: likelihood of the model; for compatibilities, see Likelihood Types
- `inference`: inference for the model; see the Compatibility Table
- `nInducingPoints`/`Z`: number of inducing points, or an `AbstractVector` object
Keyword arguments
- `verbose`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `ADAM(0.001)`
- `atfrequency::Int=1`: number of variational-parameter updates between hyperparameter optimizations
- `mean=ZeroMean()`: `PriorMean` object; see the documentation on `MeanPrior`
- `Zoptimiser`: optimiser for the inducing point locations
- `obsdim::Int=1`: layout of the data; 1: X ∈ D×N, 2: X ∈ N×D
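A hedged sketch combining the sparse model with stochastic updates (the dataset, inducing-point count, and minibatch size are illustrative assumptions):

```julia
# Illustrative sketch only: sparse GP regression with minibatch updates.
using AugmentedGaussianProcesses
using KernelFunctions

X = rand(1000, 2)
y = sin.(X[:, 1]) .+ 0.1 .* randn(1000)

# 50 inducing points; AnalyticSVI(100) uses minibatches of size 100
model = SVGP(X, y, SqExponentialKernel(), GaussianLikelihood(),
             AnalyticSVI(100), 50)
train!(model; iterations=100)
```

Passing an integer for the last positional argument lets the package pick the inducing-point locations; an `AbstractVector` of locations can be passed instead.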
AugmentedGaussianProcesses.OnlineSVGP — Type

OnlineSVGP(args...; kwargs...)

Online Sparse Variational Gaussian Process
Arguments
- `kernel::Kernel`: covariance function; can be any kernel from KernelFunctions.jl
- `likelihood`: likelihood of the model; for compatibilities, see Likelihood Types
- `inference`: inference for the model; see the Compatibility Table
- `Zalg`: algorithm selecting how inducing points are chosen
Keyword arguments

- `verbose`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `ADAM(0.001)`
- `atfrequency::Int=1`: number of variational-parameter updates between hyperparameter optimizations
- `mean=ZeroMean()`: `PriorMean` object; see the documentation on `MeanPrior`
- `Zoptimiser`: optimiser for the inducing point locations
- `T::DataType=Float64`: hint for the type of the incoming data
AugmentedGaussianProcesses.MOVGP — Type

MOVGP(args...; kwargs...)

Multi-Output Variational Gaussian Process
Arguments
- `X::AbstractVector`: input features; if `X` is a matrix, the colwise/rowwise layout is given by the `obsdim` keyword
- `y::AbstractVector{<:AbstractVector}`: output labels; each vector corresponds to one output dimension
- `kernel::Union{Kernel,AbstractVector{<:Kernel}}`: covariance function or vector of covariance functions; either a single kernel, or a collection of kernels for multiclass and multi-output models
- `likelihood::Union{AbstractLikelihood,Vector{<:Likelihood}}`: likelihood or vector of likelihoods of the model; for compatibilities, see Likelihood Types
- `inference`: inference for the model; for compatibilities, see the Compatibility Table
- `num_latent::Int`: number of latent GPs
Keyword arguments
- `verbose::Int`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Optimisers.jl library. Default is `ADAM(0.001)`
- `Aoptimiser`: optimiser used for the mixing parameters
- `atfrequency::Int=1`: number of variational-parameter updates between hyperparameter optimizations
- `mean=ZeroMean()`: `PriorMean` object; see the documentation on `MeanPrior`
- `obsdim::Int=1`: layout of the data; 1: X ∈ D×N, 2: X ∈ N×D
AugmentedGaussianProcesses.MOSVGP — Type

MOSVGP(args...; kwargs...)

Multi-Output Sparse Variational Gaussian Process
Arguments
- `kernel::Union{Kernel,AbstractVector{<:Kernel}}`: covariance function or vector of covariance functions; either a single kernel, or a collection of kernels for multiclass and multi-output models
- `likelihoods::Union{AbstractLikelihood,Vector{<:Likelihood}}`: likelihood or vector of likelihoods of the model; for compatibilities, see Likelihood Types
- `inference`: inference for the model; for compatibilities, see the Compatibility Table
- `nLatent::Int`: number of latent GPs
- `nInducingPoints`: number of inducing points, or collection of inducing point locations
Keyword arguments
- `verbose::Int`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Optimisers.jl library. Default is `ADAM(0.001)`
- `Zoptimiser`: optimiser used for the inducing point locations
- `Aoptimiser`: optimiser used for the mixing parameters
- `atfrequency::Int=1`: number of variational-parameter updates between hyperparameter optimizations
- `mean=ZeroMean()`: `PriorMean` object; see the documentation on `MeanPrior`
- `obsdim::Int=1`: layout of the data; 1: X ∈ D×N, 2: X ∈ N×D
AugmentedGaussianProcesses.VStP — Type

VStP(args...; kwargs...)

Variational Student-t Process
Arguments
- `X::AbstractArray`: input features; if `X` is a matrix, the colwise/rowwise layout is given by the `obsdim` keyword
- `y::AbstractVector`: output labels
- `kernel::Kernel`: covariance function; can be any kernel from KernelFunctions.jl
- `likelihood`: likelihood of the model; for compatibilities, see Likelihood Types
- `inference`: inference for the model; see the Compatibility Table
- `ν::Real`: number of degrees of freedom
Keyword arguments
- `verbose`: verbosity level (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimiser`: optimiser used for the kernel parameters; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `ADAM(0.001)`
- `atfrequency::Int=1`: number of variational-parameter updates between hyperparameter optimizations
- `mean=ZeroMean()`: `PriorMean` object; see the documentation on `MeanPrior`
- `obsdim::Int=1`: layout of the data; 1: X ∈ D×N, 2: X ∈ N×D
Likelihood Types
AugmentedGaussianProcesses.GaussianLikelihood — Type

GaussianLikelihood(σ²::T=1e-3) # σ² is the variance of the noise

Gaussian noise:
\[ p(y|f) = N(y|f,\sigma^2)\]
There is no augmentation needed for this likelihood which is already conjugate to a Gaussian prior.
AugmentedGaussianProcesses.StudentTLikelihood — Type

StudentTLikelihood(ν::T, σ::Real=one(T))

Arguments

- `ν::Real`: degrees of freedom of the Student-t
- `σ::Real`: standard deviation of the local scale
Student-t likelihood for regression:
\[ p(y|f,\nu,\sigma) = \frac{\Gamma(\frac{\nu+1}{2})}{\sqrt{\nu\pi}\,\sigma\,\Gamma(\frac{\nu}{2})} \left(1+\frac{(y-f)^2}{\sigma^2\nu}\right)^{-\frac{\nu+1}{2}},\]
where ν is the number of degrees of freedom and σ is the standard deviation of the local scale of the data.
For the augmented analytical solution, it is augmented via:
\[ p(y|f,\omega) = N(y|f,\sigma^2 \omega)\]
where $\omega \sim \mathcal{IG}(\frac{\nu}{2},\frac{\nu}{2})$ and $\mathcal{IG}$ is the inverse-Gamma distribution. See the paper Robust Gaussian Process Regression with a Student-t Likelihood.
AugmentedGaussianProcesses.LaplaceLikelihood — Type

LaplaceLikelihood(β::T=1.0) # Laplace likelihood with scale β

Laplace likelihood for regression:
\[ p(y|f) = \frac{1}{2\beta} \exp\left(-\frac{|y-f|}{\beta}\right)\]
(see the Wikipedia page)
For the analytical solution, it is augmented via:
\[p(y|f,ω) = N(y|f,ω⁻¹)\]
where $ω \sim \text{Exp}(ω \mid 1/(2β^2))$ and $\text{Exp}$ is the exponential distribution. We use the variational distribution $q(ω) = \mathcal{GIG}(ω \mid a,b,p)$, a generalized inverse Gaussian.
AugmentedGaussianProcesses.LogisticLikelihood — Function

LogisticLikelihood() -> BernoulliLikelihood

Bernoulli likelihood with a logistic link:
\[ p(y|f) = \sigma(yf) = \frac{1}{1 + \exp(-yf)},\]
(for more information, see the Wikipedia page)
For the analytic version of the likelihood, it is augmented via:
\[ p(y|f,ω) = \exp\left(\frac{1}{2}(yf - (yf)^2 \omega)\right)\]
where $ω \sim \mathcal{PG}(\omega \mid 1, 0)$ and $\mathcal{PG}$ is the Polya-Gamma distribution. See the paper Efficient Gaussian Process Classification Using Polya-Gamma Data Augmentation.
AugmentedGaussianProcesses.HeteroscedasticLikelihood — Function

HeteroscedasticLikelihood(λ::T=1.0) -> HeteroscedasticGaussianLikelihood

Arguments

- `λ::Real`: maximum precision possible (optimized during training)
Gaussian likelihood with heteroscedastic noise given by another GP:
\[ p(y|f,g) = \mathcal{N}(y|f,(\lambda \sigma(g))^{-1})\]
where $\sigma$ is the logistic function.

The augmentation is not trivial and will be described in a future paper.
AugmentedGaussianProcesses.BayesianSVM — Function

BayesianSVM() -> BernoulliLikelihood

The Bayesian SVM is a Bayesian interpretation of the classical Support Vector Machine.
\[p(y|f) \propto \exp(-2 \max(1-yf, 0))\]
For the analytic version of the likelihood, it is augmented via:
\[p(y|f, \omega) = \frac{1}{\sqrt{2\pi\omega}} \exp\left(-\frac{(1+\omega-yf)^2}{2\omega}\right)\]
where $ω \sim 1_{[0,\infty)}$ has an improper prior (its posterior is, however, a valid distribution: a generalized inverse Gaussian). For reference, see this paper.
AugmentedGaussianProcesses.SoftMaxLikelihood — Function

SoftMaxLikelihood(num_class::Int) -> MultiClassLikelihood

Arguments

- `num_class::Int`: total number of classes

SoftMaxLikelihood(labels::AbstractVector) -> MultiClassLikelihood

Arguments

- `labels::AbstractVector`: list of class labels
Multiclass likelihood with Softmax transformation:
\[p(y=i|\{f_k\}_{k=1}^K) = \frac{\exp(f_i)}{\sum_{k=1}^K\exp(f_k)}\]
There is no possible augmentation for this likelihood.
AugmentedGaussianProcesses.LogisticSoftMaxLikelihood — Function

LogisticSoftMaxLikelihood(num_class::Int) -> MultiClassLikelihood

Arguments

- `num_class::Int`: total number of classes

LogisticSoftMaxLikelihood(labels::AbstractVector) -> MultiClassLikelihood

Arguments

- `labels::AbstractVector`: list of class labels
The multiclass likelihood with a logistic-softmax mapping:
\[p(y=i|\{f_k\}_{k=1}^{K}) = \frac{\sigma(f_i)}{\sum_{k=1}^{K} \sigma(f_k)}\]
where $\sigma$ is the logistic function. This likelihood has the same properties as the softmax.
For the analytical version, the likelihood is augmented multiple times. More details can be found in the paper Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation.
GPLikelihoods.PoissonLikelihood — Type

PoissonLikelihood(λ::Real) -> PoissonLikelihood

Arguments

- `λ::Real`: maximal Poisson rate
Poisson likelihood, where a Poisson distribution is defined at every point in space (note that this is different from a continuous Poisson process):
\[ p(y|f) = \text{Poisson}(y|\lambda \sigma(f))\]
where $\sigma$ is the logistic function. Augmentation details will be released at some point (open an issue if you want to see them).
AugmentedGaussianProcesses.NegBinomialLikelihood — Type

NegBinomialLikelihood(r::Real)

Arguments

- `r::Real`: number of failures until the experiment is stopped

Negative Binomial likelihood with number of failures `r`:
\[ p(y|r, f) = {y + r - 1 \choose y} (1 - \sigma(f))^r \sigma(f)^y,\]
if $r\in \mathbb{N}$ or
\[ p(y|r, f) = \frac{\Gamma(y + r)}{\Gamma(y + 1)\Gamma(r)} (1 - \sigma(f))^r \sigma(f)^y,\]
if $r\in\mathbb{R}$, where $\sigma$ is the logistic function.
Note that this likelihood follows the Wikipedia definition and not the Distributions.jl one.
Inference Types
AugmentedGaussianProcesses.AnalyticVI — Type

AnalyticVI(;ϵ::T=1e-5)

Variational inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation). All data is used at each iteration (use AnalyticSVI for updates on minibatches).
Keyword arguments

- `ϵ::Real`: convergence criterion
AugmentedGaussianProcesses.AnalyticSVI — Function

AnalyticSVI(nMinibatch::Int; ϵ::T=1e-5, optimiser=RobbinsMonro())

Stochastic variational inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation). See AnalyticVI for reference.
Arguments
- `nMinibatch::Integer`: number of samples per minibatch
Keyword arguments

- `ϵ::T`: convergence criterion
- `optimiser`: optimiser used for the variational updates; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `RobbinsMonro()` (ρ = (τ + iter)^(-κ))
AugmentedGaussianProcesses.GibbsSampling — Type

GibbsSampling(;ϵ::T=1e-5, nBurnin::Int=100, thinning::Int=1)

Draw samples from the true posterior via Gibbs sampling.
Keyword arguments

- `ϵ::T`: convergence criterion
- `nBurnin::Int`: number of samples discarded before samples start being saved
- `thinning::Int`: frequency at which samples are saved
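A hedged sketch of Gibbs sampling with an MCGP model (the `sample` call, data, and sample counts are illustrative assumptions; `sample`'s docstring is missing from this page):

```julia
# Illustrative sketch only: sampling from the augmented posterior.
using AugmentedGaussianProcesses
using KernelFunctions

X = rand(100, 1)
y = sin.(X[:, 1]) .+ 0.2 .* randn(100)

# MCGP is the only model type compatible with GibbsSampling
model = MCGP(X, y, SqExponentialKernel(), StudentTLikelihood(3.0),
             GibbsSampling(nBurnin=100, thinning=2))
samples = sample(model, 500)   # draw posterior samples of the latent f
```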
AugmentedGaussianProcesses.QuadratureVI — Type

QuadratureVI(;ϵ::T=1e-5, nGaussHermite::Integer=20, clipping=Inf, natural::Bool=true, optimiser=Momentum(0.0001))

Variational inference solver approximating gradients via numerical integration (Gauss-Hermite quadrature).
Keyword arguments
- `ϵ::T`: convergence criterion
- `nGaussHermite::Int`: number of points for the integral estimation
- `clipping::Real`: limit on the gradient values to avoid overshooting
- `natural::Bool`: use natural gradients
- `optimiser`: optimiser used for the variational updates; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `Momentum(0.0001)`
AugmentedGaussianProcesses.QuadratureSVI — Function

QuadratureSVI(nMinibatch::Int; ϵ::T=1e-5, nGaussHermite::Int=20, clipping=Inf, natural=true, optimiser=Momentum(0.0001))

Stochastic variational inference solver approximating gradients via Gauss-Hermite quadrature. See QuadratureVI for a more detailed reference.
Arguments
- `nMinibatch::Integer`: number of samples per minibatch
Keyword arguments
- `ϵ::T`: convergence criterion, which can be user-defined
- `nGaussHermite::Int`: number of points for the integral estimation (for the QuadratureVI)
- `natural::Bool`: use natural gradients
- `optimiser`: optimiser used for the variational updates; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `Momentum(0.0001)`
AugmentedGaussianProcesses.MCIntegrationVI — Type

MCIntegrationVI(;ϵ::T=1e-5, nMC::Integer=1000, clipping::Real=Inf, natural::Bool=true, optimiser=Momentum(0.001))

Variational inference solver approximating gradients via Monte Carlo integration: the expectation E[log p(y|f)], as well as its gradients, is computed by sampling from q(f).
Keyword arguments
- `ϵ::Real`: convergence criterion, which can be user-defined
- `nMC::Int`: number of samples per data point for the integral evaluation
- `clipping::Real`: limit on the gradient values to avoid overshooting
- `natural::Bool`: use natural gradients
- `optimiser`: optimiser used for the variational updates; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `Momentum(0.001)`
AugmentedGaussianProcesses.MCIntegrationSVI — Function

MCIntegrationSVI(batchsize::Int; ϵ::Real=1e-5, nMC::Integer=1000, clipping=Inf, natural=true, optimiser=Momentum(0.0001))

Stochastic variational inference solver approximating gradients via Monte Carlo integration on minibatches. See MCIntegrationVI for more explanations.
Arguments

- `batchsize::Integer`: number of samples per minibatch
Keyword arguments
- `ϵ::T`: convergence criterion, which can be user-defined
- `nMC::Int`: number of samples per data point for the integral evaluation
- `clipping::Real`: limit on the gradient values to avoid overshooting
- `natural::Bool`: use natural gradients
- `optimiser`: optimiser used for the variational updates; should be an optimiser object from the Flux.jl library (see the list of Optimisers). Default is `Momentum(0.0001)`
Functions and methods
AugmentedGaussianProcesses.train! — Function

train!(model::AbstractGPModel; iterations::Integer=100, callback, convergence)

Train the given GP model.
Arguments
- `model`: `AbstractGPModel` with either an `Analytic`, `AnalyticVI`, or `NumericalVI` type of inference
Keyword Arguments
- `iterations::Int`: number of iterations (not necessarily epochs!) for training
- `callback::Function=nothing`: callback function called at every iteration; should be of type `function(model, iter) ... end`
- `convergence::Function=nothing`: convergence function called at every iteration; should return a scalar and take the same arguments as `callback`
train!(model::AbstractGPModel, X::AbstractMatrix, y::AbstractArray; obsdim=1, iterations::Int=10, callback=nothing, conv=0)
train!(model::AbstractGPModel, X::AbstractVector, y::AbstractArray; iterations::Int=20, callback=nothing, conv=0)

Train the given GP model.

Keyword Arguments

- `iterations::Int`: number of iterations (not necessarily epochs!) for training
- `callback::Function`: callback function called at every iteration; should be of type `function(model, iter) ... end`
- `conv::Function`: convergence function called at every iteration; should return a scalar and take the same arguments as `callback`
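A hedged end-to-end sketch of training with a callback and then predicting (model, data, and iteration counts are illustrative assumptions):

```julia
# Illustrative sketch only: train with a logging callback, then predict.
using AugmentedGaussianProcesses
using KernelFunctions

X = rand(200, 2)
y = sign.(X[:, 1] .- 0.5)    # binary labels in {-1, 1}

model = VGP(X, y, SqExponentialKernel(), LogisticLikelihood(), AnalyticVI())
# The callback receives (model, iter) at every iteration
train!(model; iterations=50,
       callback=(model, iter) -> iter % 10 == 0 && @info "iteration $iter")

X_test = rand(20, 2)
ŷ = predict_y(model, X_test)    # hard 0/1 predictions for classification
p = proba_y(model, X_test)      # probabilities of y_test = 1
```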
AugmentedGaussianProcesses.predict_f — Function

predict_f(m::AbstractGPModel, X_test, cov::Bool=true, diag::Bool=true)

Compute the mean of the predicted latent distribution of f on X_test for the variational GP model.

Also returns the diagonal variance if `cov=true`, and the full covariance if `diag=false`.
AugmentedGaussianProcesses.predict_y — Function

predict_y(model::AbstractGPModel, X_test::AbstractVector)
predict_y(model::AbstractGPModel, X_test::AbstractMatrix; obsdim=1)

Return:
- the predictive mean of `X_test` for regression
- 0 or 1 for `X_test` for classification
- the most likely class for multi-class classification
- the expected number of events for an event likelihood
AugmentedGaussianProcesses.proba_y — Function

proba_y(model::AbstractGPModel, X_test::AbstractVector)
proba_y(model::AbstractGPModel, X_test::AbstractMatrix; obsdim=1)

Return the probability distribution p(y_test|model, X_test):

- `Tuple{Vector,Vector}` of mean and variance for regression
- `Vector{<:Real}` of probabilities of y_test = 1 for binary classification
- `NTuple{K,<:AbstractVector}`, with each element being a vector of probabilities for one class, for multi-class classification

Prior Means
AugmentedGaussianProcesses.ZeroMean — Type

ZeroMean()

Construct a mean prior set to 0, which cannot be updated.
AugmentedGaussianProcesses.ConstantMean — Type

ConstantMean(c::Real = 1.0; opt=ADAM(0.01))

Arguments

- `c::Real`: constant value

Construct a prior mean with constant `c`. Optionally set an optimiser `opt` (`ADAM(0.01)` by default).
AugmentedGaussianProcesses.EmpiricalMean — Type

EmpiricalMean(c::AbstractVector{<:Real}=1.0; opt=ADAM(0.01))

Arguments

- `c::AbstractVector`: empirical mean vector

Construct an empirical mean with values `c`. Optionally give an optimiser `opt` (`ADAM(0.01)` by default).
AugmentedGaussianProcesses.AffineMean — Type

AffineMean(w::Vector, b::Real; opt=ADAM(0.01))
AffineMean(dims::Int; opt=ADAM(0.01))

Arguments

- `w::Vector`: weight vector
- `b::Real`: bias
- `dims::Int`: number of features per vector

Construct an affine operation on X: μ₀(X) = X * w + b, where `w` is a vector and `b` a scalar. Optionally give an optimiser `opt` (`ADAM(0.01)` by default).
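A hedged sketch of attaching a trainable prior mean to a model via the `mean` keyword (the data and dimensions are illustrative assumptions):

```julia
# Illustrative sketch only: a linear trend captured by an affine prior mean.
using AugmentedGaussianProcesses
using KernelFunctions

X = rand(100, 3)
y = X * [1.0, -2.0, 0.5] .+ 0.1 .* randn(100)

μ₀ = AffineMean(3)    # w ∈ ℝ³ and b are optimized during training
model = VGP(X, y, SqExponentialKernel(), GaussianLikelihood(), AnalyticVI();
            mean=μ₀)
train!(model; iterations=20)
```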