Produces plots for model evaluation.

model.evaluation.plot(..., fn.plot = NULL, colours = NULL,
    show.all = FALSE, verbose = 1)



...
one or more objects of class siamcat-class, can be named


fn.plot
string, file name for the pdf plot


colours
colour specification for the different siamcat-class objects, defaults to NULL, in which case the colours are picked from the 'Set1' palette


show.all
boolean, should the results from repeated cross-validation models be plotted? Defaults to FALSE, which plots a single line for the mean across cross-validation repeats


verbose
integer, control output: 0 for no output at all, 1 for information about progress and success only, 2 for the normal level of information, and 3 for full debug information, defaults to 1


Does not return anything, but produces the model evaluation plot.

Binary classification problems

The first plot shows the Receiver Operating Characteristic (ROC) curve, and the second plot shows the Precision-Recall (PR) curve for the model. If show.all == FALSE (the default), a single line representing the mean across cross-validation repeats will be plotted; otherwise, the individual cross-validation repeats will be included as lightly shaded lines.

Regression problems

For regression problems, this function will produce a scatter plot of the true against the predicted values. If several siamcat-class objects are supplied, a separate plot will be produced for each object.
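A minimal sketch of the regression case, assuming 'siamcat_regr' is a hypothetical siamcat-class object that has been trained and evaluated on a continuous label (no such object ships with the package):

```r
# 'siamcat_regr' is a placeholder name for an evaluated regression-task
# siamcat-class object; the call is identical to the classification case
# and produces a true-vs-predicted scatter plot per object
model.evaluation.plot(siamcat_regr, fn.plot = './eval_regression.pdf')
```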



# simple working example
model.evaluation.plot(siamcat_example, fn.plot='./eval.pdf')
#> Plotted evaluation of predictions successfully to: ./eval.pdf

# plot several named SIAMCAT objects
# although we use only one example object here
model.evaluation.plot('Example_1'=siamcat_example,
    'Example_2'=siamcat_example, colours=c('red', 'blue'),
    fn.plot='./eval.pdf')
#> Plotted evaluation of predictions successfully to: ./eval.pdf
# show individual cross-validation repeats
model.evaluation.plot(siamcat_example, fn.plot='./eval.pdf', show.all=TRUE)
#> Plotted evaluation of predictions successfully to: ./eval.pdf