Sunday, January 1, 2012

precision recall - Performance evaluation

Performance evaluation

The reliability of a prediction may be evaluated using different performance measures [47]. We focused our evaluation on the following measures:
precision = TP / (TP + FP)
recall = TP / (TP + FN)

where TP refers to interface residues correctly predicted, FP to non-interface residues predicted as interfaces, and FN to interface residues predicted as non-interfaces. Precision evaluates the quality of the prediction with respect to the set of predicted interface residues, whereas recall measures the quality of the prediction with respect to the set of actual interface residues. Where possible, the performance of different classifiers is evaluated by comparing their precision-recall curves. These curves are generated by computing precision and recall at different threshold values on the probability of each residue being part of the interface; they therefore provide a more comprehensive evaluation than a single pair of precision and recall values.
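To make the threshold sweep concrete, here is a minimal Python sketch of how such precision-recall pairs can be computed. The function name, the probs/labels variables, and the example values are all illustrative assumptions, not part of the original work.

```python
# Sketch: precision-recall pairs from per-residue interface probabilities,
# obtained by sweeping a decision threshold (all names are illustrative).

def precision_recall_curve_points(probs, labels, thresholds):
    """Return one (precision, recall) pair per threshold.

    probs  : predicted probability that each residue is an interface residue
    labels : 1 for actual interface residues, 0 for non-interface residues
    """
    points = []
    for t in thresholds:
        tp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 1)
        fp = sum(1 for p, y in zip(probs, labels) if p >= t and y == 0)
        fn = sum(1 for p, y in zip(probs, labels) if p < t and y == 1)
        # Guard against empty prediction sets at extreme thresholds.
        precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        points.append((precision, recall))
    return points

# Toy example: sweep thresholds from 0.0 to 1.0 in steps of 0.1.
probs = [0.9, 0.8, 0.3, 0.6, 0.1, 0.7]
labels = [1, 1, 0, 1, 0, 0]
for p, r in precision_recall_curve_points(probs, labels,
                                          [i / 10 for i in range(11)]):
    print(f"precision={p:.2f}  recall={r:.2f}")
```

Plotting these pairs (recall on the x-axis, precision on the y-axis) yields the precision-recall curve described above.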
For the sake of completeness, we computed the following measures:
F1 = 2 × precision × recall / (precision + recall)
Accuracy = (TP + TN) / N
CC = (TP × TN − FP × FN) / √((TP + FN) × (TP + FP) × (TN + FP) × (TN + FN))


The F1 score is the harmonic mean of precision and recall. Accuracy measures the fraction of residues, interface and non-interface, that are correctly predicted; here TN refers to non-interface residues correctly predicted, and N = TP + TN + FP + FN is the total number of residues. CC refers to the Matthews correlation coefficient. In addition, we use the area under the receiver operating characteristic curve (AUC ROC). This measure computes the area under the curve generated by plotting sensitivity (recall) against the false positive rate at different thresholds on the probability that a residue belongs to the interface.
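The summary measures above are all standard and available in scikit-learn; the sketch below shows one way to compute them, reusing the illustrative probs/labels from the earlier example. The 0.5 cutoff for the hard predictions is an assumption for illustration; the text does not specify a threshold.

```python
# Sketch of the summary measures using scikit-learn's standard
# implementations; the 0.5 cutoff is an illustrative assumption.
from sklearn.metrics import (accuracy_score, f1_score,
                             matthews_corrcoef, roc_auc_score)

probs = [0.9, 0.8, 0.3, 0.6, 0.1, 0.7]   # predicted interface probabilities
labels = [1, 1, 0, 1, 0, 0]              # 1 = actual interface residue

preds = [1 if p >= 0.5 else 0 for p in probs]  # hard calls at threshold 0.5

print("F1      :", f1_score(labels, preds))
print("Accuracy:", accuracy_score(labels, preds))
print("MCC     :", matthews_corrcoef(labels, preds))  # the CC defined above
print("AUC ROC :", roc_auc_score(labels, probs))      # uses probabilities
```

Note that F1, accuracy, and MCC are computed from hard predictions at a single threshold, whereas AUC ROC, like the precision-recall curve, is threshold-free and uses the probabilities directly.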
