Author manuscript; available in PMC: 2017 Jun 1.
Published in final edited form as: J Biomed Inform. 2016 Mar 16;61:119–131. doi: 10.1016/j.jbi.2016.03.009

Table 3.

Net reclassification index (cNRI) comparisons for the IPC-weighted versions of the machine learning techniques described in Section 3, evaluated on the hold-out test set.

Comparison        cNRI (Events)   cNRI (Non-Events)   cNRI (Overall)   cNRI (Overall Weighted)

Tree
  vs. k-NN            −0.003           0.048              0.045               0.045
  vs. Bayes           −0.064           0.058             −0.006               0.050
  vs. Logistic        −0.065           0.045             −0.020               0.038
  vs. GAM             −0.056           0.030             −0.026               0.024

k-NN
  vs. Tree             0.003          −0.048             −0.045              −0.045
  vs. Bayes           −0.065           0.015             −0.050               0.009
  vs. Logistic        −0.108           0.009             −0.099               0.001
  vs. GAM             −0.069          −0.013             −0.082              −0.016

Bayes
  vs. Tree             0.064          −0.058              0.006              −0.050
  vs. k-NN             0.065          −0.015              0.050              −0.009
  vs. Logistic        −0.013          −0.017             −0.030              −0.017
  vs. GAM              0.028          −0.040             −0.012              −0.035

Logistic
  vs. Tree             0.065          −0.045              0.020              −0.038
  vs. k-NN             0.108          −0.009              0.099              −0.001
  vs. Bayes            0.013           0.017              0.030               0.017
  vs. GAM              0.037          −0.022              0.015              −0.018

GAM
  vs. Tree             0.056          −0.030              0.026              −0.024
  vs. k-NN             0.069           0.013              0.082               0.016
  vs. Bayes           −0.028           0.040              0.012               0.035
  vs. Logistic        −0.037           0.022             −0.015               0.018

Positive numbers indicate that the technique heading each group (bolded in the original table) correctly reclassifies subjects more frequently than the technique following "vs." cNRI (Events) and cNRI (Non-Events) give the reclassification improvement among subjects who did and did not experience events, respectively, and cNRI (Overall) is their sum. cNRI (Overall Weighted) is a weighted sum in which the reclassification performance among Events and Non-Events is weighted by the event and non-event probabilities, respectively. Tree: classification trees; k-NN: k-nearest neighbors; Bayes: Bayesian network models; Logistic: logistic regression; GAM: generalized additive models.
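The quantities in the table can be illustrated with a minimal sketch of the categorical NRI computation. The function below is an assumption-laden simplification: it takes risk categories from two models plus an event indicator, and omits the IPC (inverse probability of censoring) weighting the paper applies to handle censored follow-up; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def cnri(old_cat, new_cat, event):
    """Categorical net reclassification improvement (cNRI), unweighted
    by censoring. old_cat/new_cat: integer risk categories assigned by
    the reference and comparison models; event: 1 if the subject
    experienced the event, 0 otherwise."""
    old_cat, new_cat, event = map(np.asarray, (old_cat, new_cat, event))
    up = new_cat > old_cat      # moved to a higher risk category
    down = new_cat < old_cat    # moved to a lower risk category
    ev = event == 1
    ne = ~ev

    # Among events, upward moves are correct; among non-events, downward moves are.
    cnri_events = up[ev].mean() - down[ev].mean()
    cnri_nonevents = down[ne].mean() - up[ne].mean()

    # Overall is the plain sum; the weighted version uses the observed
    # event and non-event probabilities as weights.
    p_event = ev.mean()
    return {
        "events": cnri_events,
        "non_events": cnri_nonevents,
        "overall": cnri_events + cnri_nonevents,
        "overall_weighted": p_event * cnri_events
                            + (1 - p_event) * cnri_nonevents,
    }
```

For example, if one of two events is correctly moved up a category and nothing else changes, cNRI (Events) is 0.5, cNRI (Non-Events) is 0, and with half the subjects being events the weighted overall value is 0.25.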