Hyperparameter Optimization
In the context of machine learning, hyperparameter optimization (or model selection) is the problem of choosing a set of hyperparameters for a learning algorithm, usually with the goal of optimizing a measure of the algorithm's performance on an independent data set. Cross-validation is often used to estimate this generalization performance.[1] Hyperparameter optimization contrasts with the actual learning problem, which is also often cast as an optimization problem but minimizes a loss function on the training set alone. In effect, learning algorithms learn parameters that model/reconstruct their inputs well, while hyperparameter optimization ensures the model does not overfit its data, for example by tuning regularization.
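As a rough illustration of this idea (not part of the original post), the following sketch uses scikit-learn's cross_val_score on a synthetic dataset to estimate how well one particular hyperparameter choice generalizes:

# illustrative sketch only: estimate generalization accuracy for a single
# hyperparameter setting (k = 5) via 5-fold cross-validation
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
model = KNeighborsClassifier(n_neighbors=5)  # n_neighbors is the hyperparameter under test
scores = cross_val_score(model, X, y, cv=5)  # mean CV accuracy estimates generalization
print("mean cross-validation accuracy: {:.3f}".format(scores.mean()))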
Using the k-NN algorithm, we previously obtained 57.58% classification accuracy on the Kaggle Dogs vs. Cats dataset challenge.
The question is: “Can we do better?”
Of course we can! Obtaining higher accuracy for nearly any machine learning algorithm boils down to tweaking various knobs and levers.
In the case of k-NN, we can tune k, the number of nearest neighbors. We can also tune our distance metric/similarity function.
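In scikit-learn terms (an illustrative snippet, not from the original post), both of these knobs are simply constructor arguments on the classifier:

# both k-NN hyperparameters are exposed as constructor arguments
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=7, metric="cityblock")  # k = 7, Manhattan distance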
Of course, hyperparameter tuning has implications outside of the k-NN algorithm as well. In the context of Deep Learning and Convolutional Neural Networks, we can easily have hundreds of hyperparameters to tune and play with (although in practice we try to limit the number of variables we tune to a small handful), each affecting our overall classification accuracy to some (potentially unknown) degree.
Because of this, it’s important to understand the concept of hyperparameter tuning and how your choice in hyperparameters can dramatically impact your classification accuracy.
In the remainder of today’s post, I’ll be demonstrating how to tune k-NN hyperparameters for the Dogs vs. Cats dataset. We’ll start with a discussion on what hyperparameters are, followed by viewing a concrete example on tuning k-NN hyperparameters.
We’ll then explore how to tune k-NN hyperparameters using two search methods: Grid Search and Randomized Search.
As our results will demonstrate, we can improve our classification accuracy from 57.58% to over 64%!
Hyperparameters are simply the knobs and levers you pull and turn when building a machine learning classifier. The process of tuning hyperparameters is more formally called hyperparameter optimization.
So what’s the difference between a normal “model parameter” and a “hyperparameter”?
Well, a standard “model parameter” is normally an internal variable that is optimized in some fashion. In the context of Linear Regression, Logistic Regression, and Support Vector Machines, we would think of parameters as the weight vector coefficients found by the learning algorithm.
On the other hand, “hyperparameters” are normally set by a human designer or tuned via algorithmic approaches. Examples of hyperparameters include the number of neighbors k in the k-Nearest Neighbor algorithm, the learning rate alpha of a Neural Network, or the number of filters learned in a given convolutional layer in a CNN.
In general, model parameters are optimized according to some loss function, while hyperparameters are instead searched over by exploring various settings to see which values provide the highest accuracy.
Because of this, it tends to be easier to tune model parameters (since we’re optimizing some objective function based on our training data) whereas hyperparameters can require a nearly blind search to find optimal ones.
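To make this distinction explicit, here is a small illustrative snippet (not from the original post): the regularization strength C of a Logistic Regression is a hyperparameter we choose up front, while the weight vector stored in coef_ is a set of model parameters learned by fit:

# illustrative only: hyperparameters are set before training,
# while model parameters are learned during training
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=42)
model = LogisticRegression(C=1.0)  # C (regularization strength) is a hyperparameter we pick
model.fit(X, y)                    # fit() optimizes the model parameters on the training data
print(model.coef_)                 # the learned weight vector, i.e. the model "parameters"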
As a concrete example of tuning hyperparameters, let's consider the k-Nearest Neighbor classification algorithm. For your standard k-NN implementation, there are two primary hyperparameters that you'll want to tune:

1. The value of k, the number of nearest neighbors.
2. The distance metric/similarity function.
Both of these values can dramatically affect the accuracy of your k-NN classifier. To demonstrate this in the context of image classification, let’s apply hyperparameter tuning to our Kaggle Dogs vs. Cats dataset from last week.
Open up a new file, name it knn_tune.py , and insert the following code:
1  # import the necessary packages
2  from sklearn.neighbors import KNeighborsClassifier
3  from sklearn.grid_search import RandomizedSearchCV
4  from sklearn.grid_search import GridSearchCV
5  from sklearn.cross_validation import train_test_split
6  from imutils import paths
7  import numpy as np
8  import argparse
9  import imutils
10 import time
11 import cv2
12 import os
Lines 2-12 start by importing our required Python packages. We'll be making heavy use of the scikit-learn library. Note that this post was written for an older scikit-learn release; in version 0.18 and later, RandomizedSearchCV, GridSearchCV, and train_test_split are imported from sklearn.model_selection rather than sklearn.grid_search and sklearn.cross_validation.
We’ll also be using an imutils library, so make sure you have it installed as well:
$ pip install imutils
Next, we’ll define our extract_color_histogram function:
14 def extract_color_histogram(image, bins=(8, 8, 8)):
15     # extract a 3D color histogram from the HSV color space using
16     # the supplied number of `bins` per channel
17     hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
18     hist = cv2.calcHist([hsv], [0, 1, 2], None, bins,
19         [0, 180, 0, 256, 0, 256])
20
21     # handle normalizing the histogram if we are using OpenCV 2.4.X
22     if imutils.is_cv2():
23         hist = cv2.normalize(hist)
24
25     # otherwise, perform "in place" normalization in OpenCV 3 (I
26     # personally hate the way this is done)
27     else:
28         cv2.normalize(hist, hist)
29
30     # return the flattened histogram as the feature vector
31     return hist.flatten()
This function accepts an input image along with a number of bins for each channel of the image.
We convert the image to the HSV color space and compute a 3D color histogram to characterize the color distribution of the image (Lines 17-19).
This histogram is then flattened into a single 8 x 8 x 8 = 512-d feature vector that is returned to the calling function.
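As a quick sanity check (illustrative, not part of the original script, and relying on the imports already in place), calling the function on any BGR image, here a synthetic all-black one, returns that 512-d vector:

# quick sanity check: the returned histogram is always a flattened
# 8 x 8 x 8 = 512-d feature vector
image = np.zeros((100, 100, 3), dtype="uint8")  # synthetic 100x100 BGR image
print(extract_color_histogram(image).shape)     # prints (512,)

With the feature extraction function defined, we can move on to parsing our command line arguments and building the dataset: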
33 # construct the argument parser and parse the arguments
34 ap = argparse.ArgumentParser()
35 ap.add_argument("-d", "--dataset", required=True,
36     help="path to input dataset")
37 ap.add_argument("-j", "--jobs", type=int, default=-1,
38     help="# of jobs for k-NN distance (-1 uses all available cores)")
39 args = vars(ap.parse_args())
40
41 # grab the list of images that we'll be describing
42 print("[INFO] describing images...")
43 imagePaths = list(paths.list_images(args["dataset"]))
44
45 # initialize the data matrix and labels list
46 data = []
47 labels = []
Lines 34-39 handle parsing our command line arguments. We only need two switches here: --dataset, the path to our input Dogs vs. Cats images, and --jobs, the number of parallel jobs to use when computing k-NN distances (a value of -1 uses all available processor cores).
Line 43 grabs the paths to our 25,000 input images, while Lines 46 and 47 initialize the data list (where we'll store the color histogram extracted from each image) and the labels list (either "dog" or "cat" for each input image), respectively.
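For example, assuming the Kaggle images have been unpacked into a directory named kaggle_dogs_vs_cats (a hypothetical path, substitute your own), the script would be launched as:

$ python knn_tune.py --dataset kaggle_dogs_vs_cats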
Next, we can loop over our imagePaths and describe them:
49 # loop over the input images
50 for (i, imagePath) in enumerate(imagePaths):
51     # load the image and extract the class label (assuming that our
52     # path has the format: /path/to/dataset/{class}.{image_num}.jpg)
53     image = cv2.imread(imagePath)
54     label = imagePath.split(os.path.sep)[-1].split(".")[0]
55
56     # extract a color histogram from the image, then update the
57     # data matrix and labels list
58     hist = extract_color_histogram(image)
59     data.append(hist)
60     labels.append(label)
61
62     # show an update every 1,000 images
63     if i > 0 and i % 1000 == 0:
64         print("[INFO] processed {}/{}".format(i, len(imagePaths)))
Line 50 starts looping over our imagePaths. For each imagePath, we load the image from disk and extract its class label (Lines 53 and 54).
Now that we have our image , we compute a color histogram (Line 58), followed by updating the data and labels lists (Lines 59 and 60).
Finally, Lines 63 and 64 display the feature extraction progress to our screen.
In order to train and evaluate our k-NN classifier, we’ll need to partition our data into two splits: a training split and a testing split:
66 # partition the data into training and testing splits, using 75%
67 # of the data for training and the remaining 25% for testing
68 print("[INFO] constructing training/testing split...")
69 (trainData, testData, trainLabels, testLabels) = train_test_split(
70     data, labels, test_size=0.25, random_state=42)
Here we’ll be using 75% of our data for training and the remaining 25% for evaluation.
Finally, let’s define the set of hyperparameters we are going to optimize over:
72 # construct the set of hyperparameters to tune
73 params = {"n_neighbors": np.arange(1, 31, 2),
74     "metric": ["euclidean", "cityblock"]}
The above code block defines a params dictionary which contains two keys: n_neighbors, the odd values of k we'll evaluate in the range [1, 29] (from np.arange(1, 31, 2)), and metric, the two distance functions we'll consider, Euclidean distance and Manhattan/city block distance.
Now that we have defined the hyperparameters we want to search over, we need a method that actually applies the search. Luckily, the scikit-learn library already has two methods that can perform hyperparameter search for us: Grid Search and Randomized Search.
As we'll find out, Randomized Search is preferable to Grid Search in nearly all circumstances.
The Grid Search tuning algorithm will methodically (and exhaustively) train and evaluate a machine learning classifier for each and every combination of hyperparameter values.
The primary benefit of the Grid Search algorithm is also its major drawback: as an exhaustive search, the number of hyperparameter combinations to evaluate explodes as both the number of hyperparameters and the number of values per hyperparameter increase. (With our params dictionary above, that is 15 values of n_neighbors times 2 distance metrics, or 30 models to train and evaluate.)
Sure, you get to evaluate each and every combination of hyperparameters, but you pay for that exhaustiveness in computation time, and in most cases it's hardly worth it.
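The original post's grid-search listing is not reproduced in this copy. As a rough sketch only, applying GridSearchCV to the params dictionary and training split defined above might look like the following (assuming scikit-learn 0.18+, where the class lives in sklearn.model_selection; older releases import it from sklearn.grid_search as shown at the top of the script):

# rough sketch (not the original listing): exhaustively evaluate every
# combination in `params`, then score the best model on the testing data
from sklearn.model_selection import GridSearchCV

print("[INFO] tuning hyperparameters via grid search")
model = KNeighborsClassifier(n_jobs=args["jobs"])
grid = GridSearchCV(model, params, n_jobs=args["jobs"])
start = time.time()
grid.fit(trainData, trainLabels)

# report how long the search took, the best parameters found, and the
# accuracy of the best model on the held-out testing split
print("[INFO] grid search took {:.2f} seconds".format(time.time() - start))
print("[INFO] grid search best parameters: {}".format(grid.best_params_))
acc = grid.score(testData, testLabels)
print("[INFO] grid search accuracy: {:.2f}%".format(acc * 100))

For our 30 combinations this is still tractable, but the cost grows multiplicatively with every additional hyperparameter or value we add.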
As explained below, there is rarely just a single set of hyperparameters that obtains the highest accuracy.
Instead, there are "hot zones" of hyperparameters that all obtain near-identical accuracy. The goal is to explore as many of these zones as quickly as possible and locate one of them. It turns out that random search is a great way to do this.
The Randomized Search approach to hyperparameter tuning samples values from our params dictionary uniformly at random. Given a set of randomly sampled hyperparameters, a model is then trained and evaluated.
We repeat this random sampling and model construction/evaluation for a preset number of iterations. You set the number of iterations to match however long you're willing to wait: if you're impatient and in a hurry, keep this value low; if you have the time to spend on a longer experiment, increase it.
In either case, the goal of a Randomized Search is to explore a large portion of the hyperparameter space quickly, and the best way to accomplish this is via simple random sampling. In practice, it works quite well!
The full code for running these searches over the k-NN hyperparameters can be found in the original post, linked at the end of this article.
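As a rough sketch only (again not the original listing, and again assuming the scikit-learn 0.18+ import path), a randomized search over the same params dictionary could look like this:

# rough sketch (not the original listing): randomly sample `n_iter`
# hyperparameter combinations from `params` and keep the best model
from sklearn.model_selection import RandomizedSearchCV

print("[INFO] tuning hyperparameters via randomized search")
model = KNeighborsClassifier(n_jobs=args["jobs"])
grid = RandomizedSearchCV(model, params, n_iter=10, n_jobs=args["jobs"])
start = time.time()
grid.fit(trainData, trainLabels)

# report how long the search took, the best parameters found, and the
# accuracy of the best model on the held-out testing split
print("[INFO] randomized search took {:.2f} seconds".format(time.time() - start))
print("[INFO] randomized search best parameters: {}".format(grid.best_params_))
acc = grid.score(testData, testLabels)
print("[INFO] randomized search accuracy: {:.2f}%".format(acc * 100))

As noted at the start of this post, tuning k and the distance metric in this way is what lifted our Dogs vs. Cats classification accuracy from 57.58% to over 64%.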
Original post: How to tune hyperparameters with Python and scikit-learn (http://www.pyimagesearch.com/2016/08/15/how-to-tune-hyperparameters-with-python-and-scikit-learn/)
Source: http://www.cnblogs.com/casperwin/p/6697210.html