{% extends "about/base.html" %} {% load i18n %} {% block title %}{% trans "About" %} :: {% trans "Evaluation" %}{% endblock %} {% block breadcrumbs %}{% trans "About" %} / {% trans "Evaluation" %}{% endblock %} {% block content %}

{% trans "Evaluation" %}

Adding a new performance measure

If you'd like to add a new measure, get mldata-utils, open mleval/evaluation.py in your editor, and define a function that takes two arguments, the predicted output and the ground truth. Write some short documentation for it, e.g.:

import numpy

def accuracy(out, lab):
    """
    Computes accuracy.
    Expects labels to be +/-1 and predictions to be sign(output).
    """
    return numpy.mean(numpy.sign(out) == lab)

and add a line to register the measure in the pm dictionary. Keys are human-readable names and values are tuples of

(function, application domain ('Classification', 'Regression', ...), description), e.g.

pm['Accuracy'] = (accuracy, 'Classification', accuracy.__doc__)
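
Once a measure is registered, it can be looked up by name and applied to predictions and labels. The following is a minimal sketch of such a lookup (hypothetical usage with made-up data, not mleval's actual evaluation driver):

import numpy

# Look up the registered measure and apply it to some made-up outputs
# and labels; sign(0.3)=+1, sign(-1.2)=-1, sign(0.7)=+1, so two of the
# three predictions match the labels and the score is 2/3.
func, domain, description = pm['Accuracy']
score = func(numpy.array([0.3, -1.2, 0.7]), numpy.array([1, -1, -1]))
print(domain, score)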

Currently supported measures

Regression

Binary Classification

Multiclass Classification

Analogues exist for most of the binary classification measures above.
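
For illustration, a multiclass analogue of the accuracy measure above might look like the sketch below. This assumes per-class scores as output and integer class labels as truth; it is not necessarily how mleval defines its multiclass measures.

import numpy

def multiclass_accuracy(out, lab):
    """
    Computes multiclass accuracy.
    Expects out to be an (n_samples, n_classes) array of scores and
    lab to be integer class labels; the predicted class is the argmax.
    """
    return numpy.mean(numpy.argmax(out, axis=1) == lab)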

{% endblock %}