API Library
Module
General framework for data-augmented Gaussian Processes
Model Types
AugmentedGaussianProcesses.GP
— Type. Class for Gaussian Process models
GP(X::AbstractArray{T}, y::AbstractArray, kernel::Kernel;
    noise::Real=1e-5, opt_noise::Bool=true, verbose::Int=0,
    optimizer::Union{Optimizer,Nothing,Bool}=Adam(α=0.01), atfrequency::Int=1,
    mean::Union{<:Real,AbstractVector{<:Real},PriorMean}=ZeroMean(),
    IndependentPriors::Bool=true, ArrayType::UnionAll=Vector)
Argument list:
Mandatory arguments
- `X` : input features; should be a matrix N×D where N is the number of observations and D the number of dimensions
- `y` : input labels; can be either a vector of labels for multiclass and single-output problems, or a matrix for multi-outputs (note that only one likelihood can be applied)
- `kernel` : covariance function; can be either a single kernel or a collection of kernels for multiclass and multi-output models
Keyword arguments
- `noise` : initial noise of the model
- `opt_noise` : flag for optimizing the noise σ² = Σ(y-f)²/N
- `verbose` : how much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimizer` : optimizer for the kernel hyperparameters (to be selected from GradDescent.jl), or set it to `false` to keep the hyperparameters fixed
- `atfrequency` : number of variational-parameter iterations between hyperparameter optimizations
- `mean` : prior mean, either a real number, a vector, or a `PriorMean` object (see the Prior Means documentation)
- `IndependentPriors` : flag for setting independent or shared parameters among the latent GPs
- `ArrayType` : option for using a different type of array for storage (allows GPU usage)
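A minimal usage sketch (the toy data are illustrative, and `SqExponentialKernel` is assumed to come from the KernelFunctions.jl package mentioned under `VGP` below):

```julia
using AugmentedGaussianProcesses
using KernelFunctions

# Toy 1-D regression data (illustrative only)
X = rand(100, 1)
y = sin.(2π .* vec(X)) .+ 0.1 .* randn(100)

# Exact GP with a squared-exponential kernel and optimized noise
model = GP(X, y, SqExponentialKernel(); noise=1e-3, opt_noise=true)
train!(model; iterations=50)
μ = predict_y(model, X)  # predictive mean at the training inputs
```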
AugmentedGaussianProcesses.VGP
— Type. Class for variational Gaussian Process models (non-sparse)
VGP(X::AbstractArray{T}, y::AbstractVector,
    kernel::Kernel,
    likelihood::LikelihoodType, inference::InferenceType;
    verbose::Int=0, optimizer::Union{Bool,Optimizer,Nothing}=Adam(α=0.01), atfrequency::Integer=1,
    mean::Union{<:Real,AbstractVector{<:Real},PriorMean}=ZeroMean(),
    IndependentPriors::Bool=true, ArrayType::UnionAll=Vector)
Argument list:
Mandatory arguments
- `X` : input features; should be a matrix N×D where N is the number of observations and D the number of dimensions
- `y` : input labels; can be either a vector of labels for multiclass and single-output problems, or a matrix for multi-outputs (note that only one likelihood can be applied)
- `kernel` : covariance function, a single kernel from the KernelFunctions.jl package
- `likelihood` : likelihood of the model; currently implemented: Gaussian, Bernoulli (with logistic link), Multiclass (softmax or logistic-softmax), see Likelihood Types
- `inference` : inference for the model; can be analytic, numerical, or by sampling; check the model documentation to know what is available for your likelihood, see the Compatibility Table
Keyword arguments
- `verbose` : how much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimizer` : optimizer for the kernel hyperparameters (to be selected from GradDescent.jl), or set it to `false` to keep the hyperparameters fixed
- `atfrequency` : number of variational-parameter iterations between hyperparameter optimizations
- `mean` : prior mean, either a real number, a vector, or a `PriorMean` object (see the Prior Means documentation)
- `IndependentPriors` : flag for setting independent or shared parameters among the latent GPs
- `ArrayType` : option for using a different type of array for storage (allows GPU usage)
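For instance, a binary classifier with the logistic likelihood and augmented variational inference could be set up as follows (the data are synthetic and purely illustrative):

```julia
using AugmentedGaussianProcesses
using KernelFunctions

# Synthetic binary labels in {-1, 1} (illustrative)
X = randn(200, 2)
y = sign.(X[:, 1] .+ 0.5 .* randn(200))

model = VGP(X, y, SqExponentialKernel(),
            LogisticLikelihood(), AnalyticVI())
train!(model; iterations=100)
p = proba_y(model, X)  # probability that y = 1 at each input
```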
AugmentedGaussianProcesses.SVGP
— Type. Class for sparse variational Gaussian Process models
SVGP(X::AbstractArray{T1}, y::AbstractVector{T2}, kernel::Kernel,
     likelihood::LikelihoodType, inference::InferenceType, nInducingPoints::Int;
     verbose::Int=0, optimizer::Union{Optimizer,Nothing,Bool}=Adam(α=0.01), atfrequency::Int=1,
     mean::Union{<:Real,AbstractVector{<:Real},PriorMean}=ZeroMean(),
     Zoptimizer::Union{Optimizer,Nothing,Bool}=false,
     ArrayType::UnionAll=Vector)
Argument list:
Mandatory arguments
- `X` : input features; should be a matrix N×D where N is the number of observations and D the number of dimensions
- `y` : input labels; can be either a vector of labels for multiclass and single-output problems, or a matrix for multi-outputs (note that only one likelihood can be applied)
- `kernel` : covariance function; can be either a single kernel or a collection of kernels for multiclass and multi-output models
- `likelihood` : likelihood of the model; currently implemented: Gaussian, Student-t, Laplace, Bernoulli (with logistic link), Bayesian SVM, Multiclass (softmax or logistic-softmax), see Likelihood Types
- `inference` : inference for the model; can be analytic, numerical, or by sampling; check the model documentation to know what is available for your likelihood, see the Compatibility Table
- `nInducingPoints` : number of inducing points
Optional arguments
- `verbose` : how much the model prints (0: nothing, 1: very basic, 2: medium, 3: everything)
- `optimizer` : optimizer for the kernel hyperparameters (to be selected from GradDescent.jl), or set it to `false` to keep the hyperparameters fixed
- `atfrequency` : number of variational-parameter iterations between hyperparameter optimizations
- `mean` : prior mean, either a real number, a vector, or a `PriorMean` object (see the Prior Means documentation)
- `IndependentPriors` : flag for setting independent or shared parameters among the latent GPs
- `Zoptimizer` : optimizer for the inducing point locations (to be selected from GradDescent.jl); `false` by default, i.e. the locations stay fixed
- `ArrayType` : option for using a different type of array for storage (allows GPU usage)
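Combined with stochastic updates, a sparse model on a larger dataset could look like this (sizes, kernel, and data are illustrative):

```julia
using AugmentedGaussianProcesses
using KernelFunctions

# Larger synthetic regression set (illustrative)
N = 10_000
X = rand(N, 3)
y = sin.(X[:, 1]) .+ 0.1 .* randn(N)

# 50 inducing points, minibatches of 100 samples
model = SVGP(X, y, SqExponentialKernel(),
             GaussianLikelihood(), AnalyticSVI(100), 50)
train!(model; iterations=1000)
```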
Likelihood Types
GaussianLikelihood(σ²::T=1e-3) # σ² is the variance
Gaussian noise: p(y|f) = N(y|f, σ²)
There is no augmentation needed for this likelihood, which is already conjugate to a Gaussian prior.
StudentTLikelihood(ν::T, σ::Real=one(T))
Student-t likelihood for regression:
p(y|f) = Γ((ν+1)/2) / (√(νπ) σ Γ(ν/2)) (1 + (y-f)²/(σ²ν))^(-(ν+1)/2)
where ν is the number of degrees of freedom and σ is the scale of the data.
For the analytical solution, it is augmented via:
p(y|f,ω) = N(y|f, σ²ω)
where ω ~ IG(0.5ν, 0.5ν) and IG is the inverse-Gamma distribution.
See the paper Robust Gaussian Process Regression with a Student-t Likelihood.
LaplaceLikelihood(β::T=1.0) # Laplace likelihood with scale β
Laplace likelihood for regression:
p(y|f) = 1/(2β) exp(-|y-f|/β)
(see the wiki page)
For the analytical solution, it is augmented via:
p(y|f,ω) = N(y|f, ω)
where ω ~ Exp(ω | 1/(2β²)) and Exp is the exponential distribution. We use the variational distribution q(ω) = GIG(ω | a, b, p), where GIG is the generalized inverse Gaussian distribution.
LogisticLikelihood()
Bernoulli likelihood with a logistic link:
p(y|f) = σ(yf) = 1/(1 + exp(-yf))
(for more info see the wiki page)
For the analytic version, the likelihood is augmented via:
p(y|f,ω) = (1/2) exp(yf/2 - ωf²/2)
where ω ~ PG(ω | 1, 0) and PG is the Pólya-Gamma distribution. See the paper Efficient Gaussian Process Classification Using Pólya-Gamma Data Augmentation.
HeteroscedasticLikelihood(λ::T=1.0)
Gaussian likelihood with heteroscedastic noise given by another GP:
p(y|f,g) = N(y|f, (λσ(g))⁻¹)
where σ is the logistic function.
The augmentation will be described in a future paper.
BayesianSVM()
The Bayesian SVM is a Bayesian interpretation of the classical SVM:
p(y|f,ω) = 1/√(2πω) exp(-(1 + ω - yf)²/(2ω))
where ω ~ 𝟙[0,∞) has an improper flat prior (its posterior is, however, a valid distribution: a Generalized Inverse Gaussian). For a reference, see this paper.
SoftMaxLikelihood()
Multiclass likelihood with softmax transformation:
p(y=k|f) = exp(f_k) / Σ_j exp(f_j)
There is no possible augmentation for this likelihood.
LogisticSoftMaxLikelihood()
The multiclass likelihood with a logistic-softmax mapping:
p(y=k|f) = σ(f_k) / Σ_j σ(f_j)
where σ is the logistic function. This likelihood has the same properties as the softmax.
For the analytical version, the likelihood is augmented multiple times. More details can be found in the paper Multi-Class Gaussian Process Classification Made Conjugate: Efficient Inference via Data Augmentation.
PoissonLikelihood(λ::T=1.0)
Poisson likelihood, where a Poisson distribution is defined at every point in space (careful, this is different from continuous Poisson processes):
p(y|f) = Poisson(y | λσ(f))
where σ is the logistic function. Augmentation details will be released at some point (open an issue if you want to see them).
NegBinomialLikelihood(r::Int=10)
Negative binomial likelihood with number of failures r:
p(y|f) = binomial(y + r - 1, y) σ(f)^y (1 - σ(f))^r
where σ is the logistic function.
Inference Types
AnalyticVI
Variational inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation). All the data is used at each iteration (use AnalyticSVI for stochastic updates).
AnalyticVI(;ϵ::T=1e-5)
Keyword arguments
- `ϵ::T` : convergence criterion
AugmentedGaussianProcesses.AnalyticSVI
— Function. AnalyticSVI: stochastic variational inference solver for conjugate or conditionally conjugate likelihoods (non-Gaussian likelihoods are made conjugate via augmentation)
AnalyticSVI(nMinibatch::Integer; ϵ::T=1e-5, optimizer::Optimizer=InverseDecay())
- `nMinibatch::Integer` : number of samples per minibatch
Keyword arguments
- `ϵ::T` : convergence criterion
- `optimizer::Optimizer` : optimizer used for the variational updates; should be an Optimizer object from the [GradDescent.jl](https://github.com/jacobcvt12/GradDescent.jl) package. Default is `InverseDecay()` (ρ = (τ + iter)^(-κ))
GibbsSampling(;ϵ::T=1e-5, nBurnin::Int=100, samplefrequency::Int=1)
Draw samples from the true posterior via Gibbs sampling.
Keyword arguments
- `ϵ::T` : convergence criterion
- `nBurnin::Int` : number of samples discarded before samples start being saved
- `samplefrequency::Int` : frequency at which samples are saved
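As a sketch, a non-sparse classifier can also be fitted by sampling instead of variational inference (the likelihood/inference pairing and data are illustrative; check the Compatibility Table):

```julia
using AugmentedGaussianProcesses
using KernelFunctions

X = randn(100, 2)
y = sign.(X[:, 1])

# Discard the first 100 samples, then keep every subsequent sample
sampler = GibbsSampling(nBurnin=100, samplefrequency=1)
model = VGP(X, y, SqExponentialKernel(), LogisticLikelihood(), sampler)
train!(model; iterations=500)
```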
QuadratureVI
Variational inference solver that approximates the gradients by numerical integration (Gauss-Hermite quadrature).
QuadratureVI(ϵ::T=1e-5, nGaussHermite::Integer=20, optimizer::Optimizer=Momentum(η=0.0001))
Keyword arguments
- `ϵ::T` : convergence criterion
- `nGaussHermite::Int` : number of points for the integral estimation
- `optimizer::Optimizer` : optimizer used for the variational updates; should be an Optimizer object from the [GradDescent.jl](https://github.com/jacobcvt12/GradDescent.jl) package. Default is `Momentum(η=0.0001)`
AugmentedGaussianProcesses.QuadratureSVI
— Function. QuadratureSVI
Stochastic variational inference solver that approximates the gradients by numerical integration (Gauss-Hermite quadrature).
QuadratureSVI(nMinibatch::Integer; ϵ::T=1e-5, nGaussHermite::Integer=20, optimizer::Optimizer=Adam(α=0.1))
- `nMinibatch::Integer` : number of samples per minibatch
Keyword arguments
- `ϵ::T` : convergence criterion, which can be user defined
- `nGaussHermite::Int` : number of points for the integral estimation (as for QuadratureVI)
- `optimizer::Optimizer` : optimizer used for the variational updates; should be an Optimizer object from the [GradDescent.jl](https://github.com/jacobcvt12/GradDescent.jl) package. Default is `Adam(α=0.1)`
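Quadrature-based inference replaces the augmented analytic updates when no augmentation is wanted or available; a minimal sketch (the likelihood/inference pairing and data are illustrative, check the Compatibility Table):

```julia
using AugmentedGaussianProcesses
using KernelFunctions

X = randn(150, 2)
y = sign.(X[:, 2])

# Gradients of the expected log-likelihood estimated by Gauss-Hermite quadrature
model = VGP(X, y, SqExponentialKernel(), LogisticLikelihood(), QuadratureVI())
train!(model; iterations=100)
```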
MCIntegrationVI(;ϵ::T=1e-5, nMC::Integer=1000, optimizer::Optimizer=Adam(α=0.1))
Variational inference solver that approximates the gradients via Monte Carlo integration.
Keyword arguments
- `ϵ::T` : convergence criterion, which can be user defined
- `nMC::Int` : number of samples per data point for the integral evaluation
- `optimizer::Optimizer` : optimizer used for the variational updates; should be an Optimizer object from the [GradDescent.jl](https://github.com/jacobcvt12/GradDescent.jl) package. Default is `Adam(α=0.1)`
AugmentedGaussianProcesses.MCIntegrationSVI
— Function. MCIntegrationSVI(nMinibatch::Integer; ϵ::T=1e-5, nMC::Integer=1000, optimizer::Optimizer=Adam(α=0.1))
Stochastic variational inference solver that approximates the gradients via Monte Carlo integration.
Argument
- `nMinibatch::Integer` : number of samples per minibatch
Keyword arguments
- `ϵ::T` : convergence criterion, which can be user defined
- `nMC::Int` : number of samples per data point for the integral evaluation
- `optimizer::Optimizer` : optimizer used for the variational updates; should be an Optimizer object from the [GradDescent.jl](https://github.com/jacobcvt12/GradDescent.jl) package. Default is `Adam(α=0.1)`
Functions and methods
AugmentedGaussianProcesses.train!
— Function. train!(model::AbstractGP; iterations::Integer=100, callback=0, convergence=0)
Function to train the given GP model.
Keyword arguments
- `iterations::Int` : number of iterations (not necessarily epochs!) for training
- `callback::Function` : callback function called at every iteration; should be of the form `function(model, iter) ... end`
- `convergence::Function` : convergence function called at every iteration; should return a scalar and take the same arguments as `callback`
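A minimal sketch of the callback mechanism (the logging logic is illustrative):

```julia
# Print progress every 10 iterations
function progress_logger(model, iter)
    iter % 10 == 0 && println("iteration $iter done")
end

train!(model; iterations=100, callback=progress_logger)
```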
AugmentedGaussianProcesses.predict_f
— Function. Return the posterior mean (and optionally the variance) of the latent GP f at the given test points, rather than of the observations y.
AugmentedGaussianProcesses.predict_y
— Function. predict_y(model::AbstractGP, X_test::AbstractMatrix)
Return:
- the predictive mean of `X_test` for regression
- the sign of `X_test` for classification
- the most likely class for multiclass classification
- the expected number of events for an event likelihood
AugmentedGaussianProcesses.proba_y
— Function. proba_y(model::AbstractGP, X_test::AbstractMatrix)
Return the probability distribution p(y_test|model, X_test):
- a tuple of vectors of mean and variance for regression
- a vector of probabilities of y_test = 1 for binary classification
- a DataFrame with one column of probabilities per class for multiclass classification
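To illustrate the difference between the two (continuing the regression sketch from the `GP` section above):

```julia
ŷ = predict_y(model, X)    # point predictions (here: predictive means)
μ, σ² = proba_y(model, X)  # full predictive distribution: means and variances
```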
Kernels
- RBFKernel
- MaternKernel
Kernel functions
- kernelmatrix
- kernelmatrix!
- getvariance
- getlengthscales
Prior Means
ZeroMean
ZeroMean()
Construct a prior mean fixed to 0 that cannot be changed.
ConstantMean
ConstantMean(c::T=1.0; opt::Optimizer=Adam(α=0.01))
Construct a prior mean with constant value `c`. Optionally set an optimizer `opt` (`Adam(α=0.01)` by default).
EmpiricalMean
EmpiricalMean(c::V; opt::Optimizer=Adam(α=0.01)) where {V<:AbstractVector{<:Real}}
Construct a prior mean with one value per data point, given by the vector `c`. Optionally give an optimizer `opt` (`Adam(α=0.01)` by default).
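As a sketch of how a prior mean plugs into a model constructor (via the `mean` keyword documented above; data and kernel are illustrative):

```julia
using Statistics

# Center the GP prior on the empirical average of the labels
model = GP(X, y, SqExponentialKernel(); mean=ConstantMean(mean(y)))
```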
Index
AugmentedGaussianProcesses.AnalyticVI
AugmentedGaussianProcesses.BayesianSVM
AugmentedGaussianProcesses.ConstantMean
AugmentedGaussianProcesses.EmpiricalMean
AugmentedGaussianProcesses.GP
AugmentedGaussianProcesses.GaussianLikelihood
AugmentedGaussianProcesses.GibbsSampling
AugmentedGaussianProcesses.HeteroscedasticLikelihood
AugmentedGaussianProcesses.LaplaceLikelihood
AugmentedGaussianProcesses.LogisticLikelihood
AugmentedGaussianProcesses.LogisticSoftMaxLikelihood
AugmentedGaussianProcesses.MCIntegrationVI
AugmentedGaussianProcesses.NegBinomialLikelihood
AugmentedGaussianProcesses.PoissonLikelihood
AugmentedGaussianProcesses.QuadratureVI
AugmentedGaussianProcesses.SVGP
AugmentedGaussianProcesses.SoftMaxLikelihood
AugmentedGaussianProcesses.StudentTLikelihood
AugmentedGaussianProcesses.VGP
AugmentedGaussianProcesses.ZeroMean
AugmentedGaussianProcesses.AnalyticSVI
AugmentedGaussianProcesses.MCIntegrationSVI
AugmentedGaussianProcesses.QuadratureSVI
AugmentedGaussianProcesses.predict_y
AugmentedGaussianProcesses.proba_y
AugmentedGaussianProcesses.train!