User Guide

There are three main stages when working with the GPs: initialization, training, and prediction.

Initialization

Sparse vs FullBatch

To pick between the two, prepend Batch or Sparse to the name of the desired model.

A better common interface is currently being developed.
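As an illustration of the naming convention above, here is a sketch (the keyword argument m for the number of inducing points is an assumption, not a confirmed parameter name):

```julia
# Full-batch GP regression: uses the whole dataset at every iteration.
model = BatchGPRegression(X, y)

# Sparse variant of the same model, relying on m inducing points.
model = SparseGPRegression(X, y; m=50)
```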

Regression

For regression one can use the GPRegression or the StudentT model. The former is the vanilla GP with Gaussian noise, while the latter uses a Student-T likelihood and is therefore much more robust to outliers.

Classification

For classification one can select the XGPC model, which uses the logistic link, or the BSVM model, which is based on the frequentist SVM.
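For example, creating a full-batch classifier could look like this (a sketch; BatchXGPC follows the Batch/Sparse naming convention described earlier, and binary labels are assumed to be encoded as -1/1):

```julia
# y contains binary labels encoded as -1 and 1.
model = BatchXGPC(X, y)
```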

Model creation

Creating a model is as simple as calling GPModel(X,y;args...), where args are described in the next section.
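A minimal regression example following this pattern (the toy data here is purely illustrative):

```julia
# Toy dataset: 100 samples with 2 features each.
X = rand(100, 2)
y = sin.(X[:, 1]) .+ 0.1 .* randn(100)

# Create a full-batch GP regression model with default arguments.
model = BatchGPRegression(X, y)
```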

Parameters of the models

All models except BatchGPRegression use the same set of parameters for initialization. Default values are shown as well.

One of the main parameters is the kernel function (or covariance function). It is detailed in the kernel section; by default an isotropic RBFKernel with lengthscale 1.0 is used.

Common parameters:

Specific to sparse models:

Model specific:
StudentT: ν::Real=5 : the number of degrees of freedom of the Student-T likelihood
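For instance, a Student-T regression model with a custom number of degrees of freedom could be created as follows (a sketch; BatchStudentT follows the Batch/Sparse naming convention described earlier):

```julia
# Heavier-tailed likelihood (smaller ν) means more robustness to outliers.
model = BatchStudentT(X, y; ν=4.0)
```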

Training

After initializing the model, training is as simple as running:

model.train(;iterations=100,callback=callbackfunction)

where the callback option allows running a function at every iteration. The callback function should be defined as

function callbackfunction(model, iter)
    # do things here
end
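For instance, a simple callback that reports progress during training (purely illustrative):

```julia
# Print a short status message every 10 iterations.
function callbackfunction(model, iter)
    if iter % 10 == 0
        println("Iteration $iter")
    end
end

model.train(iterations=100, callback=callbackfunction)
```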

Prediction

Once the model has been trained, it is possible to compute predictions. There are always three possibilities:

Miscellaneous

Saving/Loading models

Once a model has been trained, its state can be saved to a file using save_trained_model(filename,model); a partial version of the model will be saved in filename.

It is then possible to reload this file using load_trained_model(filename). Note, however, that it will not be possible to train the model further: this function is only meant for making further predictions.
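Putting the two together (save_trained_model and load_trained_model are the functions named above; the filename is illustrative):

```julia
# Save the trained model's state to disk.
save_trained_model("mymodel.jld", model)

# Later: reload it for prediction only; further training is not possible.
loaded_model = load_trained_model("mymodel.jld")
```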

Pre-made callback functions

There is (for now) one premade function that returns an MVHistory object and a callback function for training binary classification problems. The callback will store the ELBO and the variational parameters at every iteration included in iterpoints. If Xtest and y_test are provided, it will also store the test accuracy and the mean and median test log-likelihood.