scikit-learn: 3.3. Model evaluation: quantifying the quality of predictions
Reference: http://scikit-learn.org/stable/modules/model_evaluation.html#scoring-parameter
There are three approaches to evaluating the quality of a model's predictions:
- Estimator score method: every estimator has a score method providing a default evaluation criterion.
- Scoring parameter: cross-validation tools such as cross_validation.cross_val_score and grid_search.GridSearchCV rely on an internal scoring strategy.
- Metric functions: the sklearn.metrics module implements functions assessing prediction error for specific purposes.

Finally, Dummy estimators are introduced: they provide random-guessing strategies that can serve as a baseline for prediction quality (see Section 6 below).
See also
For “pairwise” metrics, between samples and not estimators or predictions, see the Pairwise metrics, Affinities and Kernels section.
Detailed content will be filled in later.
1、The scoring parameter: defining model evaluation rules
Model selection and evaluation tools such as grid_search.GridSearchCV and cross_validation.cross_val_score take a scoring parameter that controls what metric they apply to the estimators evaluated.
1)predefined scoring values:
All scorers follow the convention that a higher return value is better. Consequently mean_absolute_error and mean_squared_error, which measure the distance between the predictions and the ground truth, are reported as negative values.
Scoring | Function | Comment |
---|---|---|
Classification | ||
‘accuracy’ | metrics.accuracy_score | |
‘average_precision’ | metrics.average_precision_score | |
‘f1’ | metrics.f1_score | for binary targets |
‘f1_micro’ | metrics.f1_score | micro-averaged |
‘f1_macro’ | metrics.f1_score | macro-averaged |
‘f1_weighted’ | metrics.f1_score | weighted average |
‘f1_samples’ | metrics.f1_score | by multilabel sample |
‘log_loss’ | metrics.log_loss | requires predict_proba support |
‘precision’ etc. | metrics.precision_score | suffixes apply as with ‘f1’ |
‘recall’ etc. | metrics.recall_score | suffixes apply as with ‘f1’ |
‘roc_auc’ | metrics.roc_auc_score | |
Clustering | ||
‘adjusted_rand_score’ | metrics.adjusted_rand_score | |
Regression | ||
‘mean_absolute_error’ | metrics.mean_absolute_error | |
‘mean_squared_error’ | metrics.mean_squared_error | |
‘median_absolute_error’ | metrics.median_absolute_error | |
‘r2’ | metrics.r2_score | |
An example:
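A minimal sketch of passing a predefined scorer name via the scoring parameter (the sklearn.model_selection module path is the modern location; older releases used sklearn.cross_validation):

```python
# Any name from the table above can be passed as `scoring`.
from sklearn import datasets, svm
from sklearn.model_selection import cross_val_score

iris = datasets.load_iris()
X, y = iris.data, iris.target
clf = svm.SVC(random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring='accuracy'))
```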
3)defining your own scoring strategy:
A custom scorer object must follow two rules: it can be called with parameters (estimator, X, y), and it returns a floating-point number where, by convention, a higher value means a better model.
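For instance, make_scorer can wrap an arbitrary metric function into a scorer satisfying these rules; this mirrors the example in the sklearn docs:

```python
import numpy as np
from sklearn.metrics import make_scorer

def my_custom_loss_func(ground_truth, predictions):
    # Any function of (y_true, y_pred) returning a float will do.
    diff = np.abs(ground_truth - predictions).max()
    return np.log(1 + diff)

# greater_is_better=False negates the result so that, like every scorer,
# larger values mean a better model.
loss = make_scorer(my_custom_loss_func, greater_is_better=False)
```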
2、Classification metrics
Some of these metrics are restricted to the binary classification case:
Function | Description |
---|---|
matthews_corrcoef(y_true, y_pred) | Compute the Matthews correlation coefficient (MCC) for binary classes |
precision_recall_curve(y_true, probas_pred) | Compute precision-recall pairs for different probability thresholds |
roc_curve(y_true, y_score[, pos_label, ...]) | Compute Receiver operating characteristic (ROC) |
Others also work in the multiclass case:
Function | Description |
---|---|
confusion_matrix(y_true, y_pred[, labels]) | Compute confusion matrix to evaluate the accuracy of a classification |
hinge_loss(y_true, pred_decision[, labels, ...]) | Average hinge loss (non-regularized) |
Some also work in the multilabel case:
Function | Description |
---|---|
accuracy_score(y_true, y_pred[, normalize, ...]) | Accuracy classification score. |
classification_report(y_true, y_pred[, ...]) | Build a text report showing the main classification metrics |
f1_score(y_true, y_pred[, labels, ...]) | Compute the F1 score, also known as balanced F-score or F-measure |
fbeta_score(y_true, y_pred, beta[, labels, ...]) | Compute the F-beta score |
hamming_loss(y_true, y_pred[, classes]) | Compute the average Hamming loss. |
jaccard_similarity_score(y_true, y_pred[, ...]) | Jaccard similarity coefficient score |
log_loss(y_true, y_pred[, eps, normalize, ...]) | Log loss, aka logistic loss or cross-entropy loss. |
precision_recall_fscore_support(y_true, y_pred) | Compute precision, recall, F-measure and support for each class |
precision_score(y_true, y_pred[, labels, ...]) | Compute the precision |
recall_score(y_true, y_pred[, labels, ...]) | Compute the recall |
zero_one_loss(y_true, y_pred[, normalize, ...]) | Zero-one classification loss. |
And some work with binary and multilabel (but not multiclass) problems:
Function | Description |
---|---|
average_precision_score(y_true, y_score[, ...]) | Compute average precision (AP) from prediction scores |
roc_auc_score(y_true, y_score[, average, ...]) | Compute Area Under the Curve (AUC) from prediction scores |
In the following sub-sections, we will describe each of those functions, preceded by some notes on common API and metric definition.
2)accuracy score:
The accuracy_score function computes the accuracy: by default it returns the fraction of correctly predicted samples; with normalize=False it returns the absolute number of correct predictions instead. An example makes this clear:
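The values below follow the example in the sklearn docs:

```python
from sklearn.metrics import accuracy_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]
print(accuracy_score(y_true, y_pred))                   # 0.5  (fraction correct)
print(accuracy_score(y_true, y_pred, normalize=False))  # 2    (count correct)
```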
For multilabel classification, a sample counts as correct only if all of its labels are predicted correctly. An example makes this clear:
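Again following the sklearn docs, with 2-sample, 2-label indicator arrays:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Only the second row matches np.ones((2, 2)) on every label,
# so the subset accuracy is 1/2.
print(accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2))))  # 0.5
```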
3)confusion matrix:
The confusion_matrix function evaluates classification accuracy by computing the confusion matrix. An example:
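The example from the sklearn docs:

```python
from sklearn.metrics import confusion_matrix

# 3-class toy example.
y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]
print(confusion_matrix(y_true, y_pred))
# [[2 0 0]
#  [0 0 1]
#  [1 0 2]]
```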
(Note: rows are the true labels, columns are the predicted labels.)
4)classification report:
The classification_report function builds a text report showing the main classification metrics. An example:
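The example from the sklearn docs:

```python
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 0]
y_pred = [0, 0, 2, 1, 0]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
```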
5)hamming loss:
If $\hat{y}_j$ is the predicted value for the $j$-th label of a given sample, $y_j$ is the corresponding true value, and $n_\text{labels}$ is the number of classes or labels, then the Hamming loss between two samples is defined as:

$$L_{Hamming}(y, \hat{y}) = \frac{1}{n_\text{labels}} \sum_{j=0}^{n_\text{labels} - 1} 1(\hat{y}_j \ne y_j)$$

where $1(x)$ is the indicator function.
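A quick check, as in the sklearn docs:

```python
from sklearn.metrics import hamming_loss

y_pred = [1, 2, 3, 4]
y_true = [2, 2, 3, 4]
print(hamming_loss(y_true, y_pred))  # 0.25: one label of four differs
```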
6)jaccard similarity coefficient score:
The Jaccard similarity coefficient of the $i$-th sample, with a ground truth label set $y_i$ and a predicted label set $\hat{y}_i$, is defined as

$$J(y_i, \hat{y}_i) = \frac{|y_i \cap \hat{y}_i|}{|y_i \cup \hat{y}_i|}$$
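In the binary and multiclass case this reduces to accuracy; the numbers follow the sklearn docs (newer releases rename the function jaccard_score):

```python
from sklearn.metrics import jaccard_similarity_score

y_pred = [0, 2, 1, 3]
y_true = [0, 1, 2, 3]
print(jaccard_similarity_score(y_true, y_pred))  # 0.5
```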
7)precision, recall and F-measures:
Several functions allow you to analyze the precision, recall and F-measures score:
Function | Description |
---|---|
average_precision_score(y_true, y_score[, ...]) | Compute average precision (AP) from prediction scores |
f1_score(y_true, y_pred[, labels, ...]) | Compute the F1 score, also known as balanced F-score or F-measure |
fbeta_score(y_true, y_pred, beta[, labels, ...]) | Compute the F-beta score |
precision_recall_curve(y_true, probas_pred) | Compute precision-recall pairs for different probability thresholds |
precision_recall_fscore_support(y_true, y_pred) | Compute precision, recall, F-measure and support for each class |
precision_score(y_true, y_pred[, labels, ...]) | Compute the precision |
recall_score(y_true, y_pred[, labels, ...]) | Compute the recall |
Note that the precision_recall_curve function is restricted to the binary case. The average_precision_score function works only in binary classification and multilabel indicator format.
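A small binary illustration with made-up labels (the scores in the comments are computed from them):

```python
from sklearn import metrics

y_true = [0, 1, 0, 0, 1, 1]
y_pred = [0, 0, 1, 0, 0, 1]
print(metrics.precision_score(y_true, y_pred))  # 0.5   (1 TP / 2 predicted positives)
print(metrics.recall_score(y_true, y_pred))     # 0.333 (1 TP / 3 actual positives)
print(metrics.f1_score(y_true, y_pred))         # 0.4
```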
8)hinge loss:
9)log loss:
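Briefly: log_loss evaluates probability estimates (e.g. the output of predict_proba) rather than hard predictions; the value below follows the sklearn docs:

```python
from sklearn.metrics import log_loss

y_true = [0, 0, 1, 1]
y_pred = [[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.01, 0.99]]
print(log_loss(y_true, y_pred))  # ~0.1738
```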
10)matthews correlation coefficient:
11)receiver operating characteristic(ROC):
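Briefly: roc_curve computes the ROC curve from true labels and decision scores, and roc_auc_score summarizes it as the area under that curve; the example follows the sklearn docs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print(fpr, tpr, thresholds)
print(roc_auc_score(y_true, y_scores))  # 0.75
```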
12)zero one loss:
3、Multilabel ranking metrics
In multilabel learning, each sample can have any number of ground truth labels associated with it. The goal is to give
high scores and better rank to the ground truth labels.
1)coverage error:
2)label ranking average precision:
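Both metrics take binary indicator ground-truth labels plus per-label scores; the paired example from the sklearn docs:

```python
import numpy as np
from sklearn.metrics import coverage_error, label_ranking_average_precision_score

y_true = np.array([[1, 0, 0], [0, 0, 1]])
y_score = np.array([[0.75, 0.5, 1], [1, 0.2, 0.1]])
print(coverage_error(y_true, y_score))                         # 2.5
print(label_ranking_average_precision_score(y_true, y_score))  # ~0.416
```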
4、Regression metrics
The sklearn.metrics module implements several loss, score, and utility functions to measure regression performance.
Some of those have been enhanced to handle the multioutput case: mean_absolute_error, mean_squared_error, median_absolute_error and r2_score.
1)explained variance score:
If $\hat{y}$ is the estimated target output, $y$ the corresponding (correct) target output, and $Var$ is Variance, the square of the standard deviation, then the explained variance is estimated as follows:

$$\texttt{explained\_variance}(y, \hat{y}) = 1 - \frac{Var\{y - \hat{y}\}}{Var\{y\}}$$
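Example from the sklearn docs:

```python
from sklearn.metrics import explained_variance_score

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
print(explained_variance_score(y_true, y_pred))  # ~0.957
```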
2)mean absolute error:
If $\hat{y}_i$ is the predicted value of the $i$-th sample, and $y_i$ is the corresponding true value, then the mean absolute error (MAE) estimated over $n_\text{samples}$ is defined as

$$\text{MAE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} \left| y_i - \hat{y}_i \right|$$
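Example from the sklearn docs:

```python
from sklearn.metrics import mean_absolute_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
print(mean_absolute_error(y_true, y_pred))  # 0.5
```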
3)mean squared error:
If $\hat{y}_i$ is the predicted value of the $i$-th sample, and $y_i$ is the corresponding true value, then the mean squared error (MSE) estimated over $n_\text{samples}$ is defined as

$$\text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2$$
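Example from the sklearn docs:

```python
from sklearn.metrics import mean_squared_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
print(mean_squared_error(y_true, y_pred))  # 0.375
```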
4)R^2 score, the coefficient of determination:
If $\hat{y}_i$ is the predicted value of the $i$-th sample and $y_i$ is the corresponding true value, then the score $R^2$ estimated over $n_\text{samples}$ is defined as

$$R^2(y, \hat{y}) = 1 - \frac{\sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2}{\sum_{i=0}^{n_\text{samples} - 1} (y_i - \bar{y})^2}, \qquad \bar{y} = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} y_i$$
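Example from the sklearn docs:

```python
from sklearn.metrics import r2_score

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
print(r2_score(y_true, y_pred))  # ~0.948
```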
5、Clustering metrics
The sklearn.metrics module
implements several loss, score, and utility functions. For more information see the Clustering
performance evaluation section for instance clustering, and Biclustering
evaluation for biclustering.
6、Dummy estimators
stratified generates random predictions by respecting the training set class distribution.
most_frequent always predicts the most frequent label in the training set.
uniform generates predictions uniformly at random.
Note that with all these strategies, the predict method completely ignores the input data!
A simple example. First, let's create an imbalanced dataset:
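Following the sklearn docs (with the modern sklearn.model_selection import path):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X, y = iris.data, iris.target
y[y != 1] = -1  # collapse classes 0 and 2: now ~2/3 of the labels are -1
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
```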
Next, let’s compare the accuracy of SVC and most_frequent:
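Continuing the same session (score values are those reported in the sklearn docs):

```python
from sklearn.dummy import DummyClassifier
from sklearn.svm import SVC

clf = SVC(kernel='linear', C=1).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # ~0.63

clf = DummyClassifier(strategy='most_frequent', random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # ~0.57
```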
We see that SVC doesn’t do much better than a dummy classifier. Now, let’s change the kernel:
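Still in the same session:

```python
# An RBF kernel separates the classes far better than the dummy baseline.
clf = SVC(kernel='rbf', C=1).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # ~0.97 in the sklearn docs
```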
Similarly, for regression problems:
DummyRegressor also implements four simple rules of thumb for regression: mean, median, quantile, and constant (predicting the training mean, the training median, a specified training quantile, or a user-provided constant, respectively).
In all these strategies, the predict method completely ignores the input data.
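A minimal sketch with made-up data (the inputs X are ignored at prediction time):

```python
import numpy as np
from sklearn.dummy import DummyRegressor

X = np.arange(4).reshape(-1, 1)   # hypothetical inputs, ignored by predict
y = np.array([1.0, 2.0, 3.0, 4.0])
dummy = DummyRegressor(strategy='mean').fit(X, y)
print(dummy.predict(X))  # [2.5 2.5 2.5 2.5], always the training mean
```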
Original post: http://www.cnblogs.com/lytwajue/p/7136431.html