Overview of Linear Classifiers and Perceptron in Classification Models

Explore linear classification models such as linear regression, logistic regression, and the SVM. Understand multi-class classification, including the multi-class perceptron and multi-class SVM. Delve into the perceptron algorithm and its hinge loss, and derive updates using stochastic gradient descent.



Presentation Transcript


  1. Linear classifiers: Outline
  Examples of classification models: nearest neighbor, linear
  Empirical loss minimization framework
  Linear classification models:
    1. Linear regression
    2. Logistic regression
    3. Perceptron loss
    4. SVM loss
  General recipe: data loss, regularization
  Multi-class classification:
    1. Multi-class perceptron
    2. Multi-class SVM
    3. Softmax

  2. Recall: The shape of logistic loss
  ℓ(w, x_i, y_i) = −log σ(y_i w^T x_i)
  The loss approaches −y_i w^T x_i as y_i w^T x_i → −∞, and approaches 0 as y_i w^T x_i → +∞.
  [Figure: logistic loss plotted against y_i w^T x_i]

  3. Perceptron
  Let's define the perceptron hinge loss:
  ℓ(w, x_i, y_i) = max(0, −y_i w^T x_i)
  [Figure: perceptron hinge loss vs. y_i w^T x_i; the loss is positive for incorrectly classified points and zero for correctly classified ones]

  4. Perceptron
  Let's define the perceptron hinge loss:
  ℓ(w, x_i, y_i) = max(0, −y_i w^T x_i)
  Training: find w that minimizes
  L(w) = (1/n) Σ_{i=1}^n ℓ(w, x_i, y_i) = (1/n) Σ_{i=1}^n max(0, −y_i w^T x_i)
  Once again, there is no closed-form solution, so let's go straight to SGD.

  5. Deriving the perceptron update
  Let's differentiate the perceptron hinge loss:
  ℓ(w, x_i, y_i) = max(0, −y_i w^T x_i)
  (Strictly speaking, this loss is not differentiable, so we need to find a sub-gradient.)
  [Figure: perceptron hinge loss vs. y_i w^T x_i]

  6. Deriving the perceptron update
  Let's differentiate the perceptron hinge loss:
  ℓ(w, x_i, y_i) = max(0, −y_i w^T x_i)
  ∇_w ℓ(w, x_i, y_i) = −I[y_i w^T x_i < 0] y_i x_i
  Here we used d/dz max(0, z) = I[z > 0].
  [Figure: perceptron hinge loss vs. y_i w^T x_i]

  7. Deriving the perceptron update
  Let's differentiate the perceptron hinge loss:
  ℓ(w, x_i, y_i) = max(0, −y_i w^T x_i)
  ∇_w ℓ(w, x_i, y_i) = −I[y_i w^T x_i < 0] y_i x_i
  We also used the chain rule: d/dx f_2(f_1(x)) = f_2′(f_1(x)) f_1′(x)

  8. Deriving the perceptron update
  Let's differentiate the perceptron hinge loss:
  ℓ(w, x_i, y_i) = max(0, −y_i w^T x_i)
  ∇_w ℓ(w, x_i, y_i) = −I[y_i w^T x_i < 0] y_i x_i
  Corresponding SGD update (w ← w − η ∇_w ℓ(w, x_i, y_i)):
  w ← w + η I[y_i w^T x_i < 0] y_i x_i
  If x_i is correctly classified: do nothing
  If x_i is incorrectly classified: w ← w + η y_i x_i
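  A minimal NumPy sketch of this update rule (illustrative only; the function name, learning rate eta, and label convention y_i ∈ {−1, +1} are assumptions, not from the slides):

import numpy as np

def perceptron_sgd_step(w, x_i, y_i, eta=1.0):
    """One SGD step on the perceptron hinge loss max(0, -y_i * w^T x_i).

    w   : weight vector, shape (d,)
    x_i : feature vector, shape (d,)
    y_i : label in {-1, +1}
    eta : learning rate
    """
    if y_i * np.dot(w, x_i) < 0:      # misclassified: the response has the wrong sign
        w = w + eta * y_i * x_i       # push the response toward the correct sign
    # correctly classified: do nothing
    return w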

  9. Understanding the perceptron update rule
  Perceptron update rule: if y_i ≠ sgn(w^T x_i), then update the weights: w ← w + η y_i x_i
  The raw response of the classifier changes to w^T x_i + η y_i ||x_i||^2
  How does the response change if y_i = 1? The response w^T x_i is initially negative and will be increased.
  How does the response change if y_i = −1? The response w^T x_i is initially positive and will be decreased.

  10. Linear classifiers: Outline
  Example classification models: nearest neighbor, linear
  Empirical loss minimization
  Linear classification models:
    1. Linear regression (least squares)
    2. Logistic regression
    3. Perceptron loss
    4. Support vector machine (SVM) loss

  11. Support vector machines
  When the data is linearly separable, which of the many possible solutions should we prefer?
  Perceptron training algorithm: no special criterion, the solution depends on initialization

  12. Support vector machines
  When the data is linearly separable, which of the many possible solutions should we prefer?
  Perceptron training algorithm: no special criterion, the solution depends on initialization
  SVM criterion: maximize the margin, or distance between the hyperplane and the closest training example
  [Figure: separating hyperplane, margin, and support vectors]

  13. Finding the maximum margin hyperplane
  We want to maximize the margin, or distance between the hyperplane w^T x = 0 and the closest training example x_0.
  This distance is given by |w^T x_0| / ||w|| (for a derivation see, e.g., here).
  Assuming the data is linearly separable, we can fix the scale of w so that y_i w^T x_i = 1 for support vectors and y_i w^T x_i ≥ 1 for all other points.
  Then the margin is given by 1 / ||w||.
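  As a small worked example of the distance formula (a sketch with made-up numbers, not from the slides):

import numpy as np

w = np.array([3.0, 4.0])     # hyperplane w^T x = 0, so ||w|| = 5
x0 = np.array([1.0, 2.0])    # closest training example (made-up values)

distance = np.abs(np.dot(w, x0)) / np.linalg.norm(w)   # |w^T x0| / ||w|| = 11 / 5
print(distance)                                         # 2.2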

  14. Finding the maximum margin hyperplane
  We want to maximize the margin 1/||w|| while correctly classifying all training data: y_i w^T x_i ≥ 1
  Equivalent problem:
  min_w (1/2)||w||^2  s.t.  y_i w^T x_i ≥ 1
  This is a quadratic objective with linear constraints: a convex optimization problem whose global optimum can be found using well-studied methods.

  15. Soft margin formulation
  What about non-separable data?
  And even for separable data, we may prefer a larger margin with a few constraints violated.
  [Figure]

  16. Soft margin formulation
  What about non-separable data?
  And even for separable data, we may prefer a larger margin with a few constraints violated.
  [Figure]

  17. Soft margin formulation
  Penalize margin violations using the SVM hinge loss:
  min_w (λ/2)||w||^2 + Σ_{i=1}^n max[0, 1 − y_i w^T x_i]
  [Figure: hinge loss vs. y_i w^T x_i, passing through (0,1) and (1,0); incorrectly classified points are to the left, correctly classified points to the right]

  18. Soft margin formulation
  Penalize margin violations using the SVM hinge loss:
  min_w (λ/2)||w||^2 + Σ_{i=1}^n max[0, 1 − y_i w^T x_i]
  Recall the hinge loss used by the perceptron update algorithm!
  [Figure: hinge loss vs. y_i w^T x_i, passing through (0,1) and (1,0)]

  19. Soft margin formulation
  Penalize margin violations using the SVM hinge loss:
  min_w (λ/2)||w||^2 + Σ_{i=1}^n max[0, 1 − y_i w^T x_i]
  The first term maximizes the margin (a.k.a. regularization); the second term minimizes the misclassification loss.

  20. SGD update for SVM
  ℓ(w, x_i, y_i) = (λ/2)||w||^2 + max[0, 1 − y_i w^T x_i]
  ∇_w ℓ(w, x_i, y_i) = λw − I[y_i w^T x_i < 1] y_i x_i
  Recall: d/dz max(0, z) = I[z > 0]

  21. SGD update for SVM
  ℓ(w, x_i, y_i) = (λ/2)||w||^2 + max[0, 1 − y_i w^T x_i]
  ∇_w ℓ(w, x_i, y_i) = λw − I[y_i w^T x_i < 1] y_i x_i
  SGD update:
  If y_i w^T x_i ≥ 1: w ← w − η λ w
  If y_i w^T x_i < 1: w ← w + η (y_i x_i − λ w)
  S. Shalev-Shwartz et al., Pegasos: Primal Estimated sub-GrAdient SOlver for SVM, Mathematical Programming, 2011
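  A minimal NumPy sketch of this update, in the spirit of (but not identical to) the cited Pegasos solver; the function name, eta, and lam are assumptions:

import numpy as np

def svm_sgd_step(w, x_i, y_i, eta, lam):
    """One SGD step on (lambda/2)*||w||^2 + max(0, 1 - y_i * w^T x_i).
    A sketch only: full Pegasos also uses a 1/(lambda*t) step size and an
    optional projection step, which are omitted here."""
    if y_i * np.dot(w, x_i) < 1:            # margin violated
        w = w + eta * (y_i * x_i - lam * w)
    else:                                   # only the regularizer contributes
        w = w - eta * lam * w
    return w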

  22. Linear classifiers: Outline
  Examples of classification models: nearest neighbor, linear
  Empirical loss minimization framework
  Linear classification models:
    1. Linear regression
    2. Logistic regression
    3. Perceptron training algorithm
    4. Support vector machines
  General recipe: data loss, regularization

  23. General recipe
  Find parameters w that minimize the sum of a regularization loss and a data loss:
  L(w) = λ R(w) + (1/n) Σ_{i=1}^n ℓ(w, x_i, y_i)
  L(w) is the empirical loss, λ R(w) is the regularization term, and ℓ(w, x_i, y_i) is the per-example data loss.
  L2 regularization: R(w) = (1/2) ||w||_2^2

  24. Closer look at L2 regularization
  Regularized objective:
  L(w) = (λ/2)||w||^2 + Σ_{i=1}^n ℓ(w, x_i, y_i)
  Gradient of objective:
  ∇L(w) = λw + Σ_{i=1}^n ∇ℓ(w, x_i, y_i)
  SGD update:
  w ← w − η (λw + ∇ℓ(w, x_i, y_i)) = (1 − ηλ) w − η ∇ℓ(w, x_i, y_i)
  Interpretation: weight decay
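  A one-line NumPy sketch of the weight-decay form of this update (eta, lam, and grad_data, the gradient of the data loss at the current example, are assumed inputs):

import numpy as np

def l2_sgd_step(w, grad_data, eta, lam):
    """One SGD step on (lambda/2)*||w||^2 + data loss, written as weight decay:
    shrink w by a factor (1 - eta*lam), then take the usual data-loss step."""
    return (1 - eta * lam) * w - eta * grad_data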

  25. General recipe
  Find parameters w that minimize the sum of a regularization loss and a data loss:
  L(w) = λ R(w) + (1/n) Σ_{i=1}^n ℓ(w, x_i, y_i)
  L(w) is the empirical loss, λ R(w) is the regularization term, and ℓ(w, x_i, y_i) is the per-example data loss.
  L2 regularization: R(w) = (1/2) ||w||_2^2
  L1 regularization: R(w) = ||w||_1

  26. Closer look at L1 regularization
  Regularized objective:
  L(w) = λ||w||_1 + Σ_{i=1}^n ℓ(w, x_i, y_i)
  Gradient:
  ∇L(w) = λ sgn(w) + Σ_{i=1}^n ∇ℓ(w, x_i, y_i)
  (here sgn is an elementwise function)
  SGD update:
  w ← w − ηλ sgn(w) − η ∇ℓ(w, x_i, y_i)
  Interpretation: encouraging sparsity
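  A matching sketch for the L1 case (same assumed inputs as the L2 sketch above):

import numpy as np

def l1_sgd_step(w, grad_data, eta, lam):
    """One (sub)gradient SGD step on lambda*||w||_1 + data loss:
    w <- w - eta*lam*sign(w) - eta*grad_data.
    The sign term moves every nonzero weight toward zero by a fixed amount,
    which is what encourages sparsity."""
    return w - eta * lam * np.sign(w) - eta * grad_data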

  27. Linear classifiers: Outline
  Examples of classification models: nearest neighbor, linear
  Empirical loss minimization framework
  Linear classification models:
    1. Linear regression
    2. Logistic regression
    3. Perceptron training algorithm
    4. Support vector machines
  General recipe: data loss, regularization
  Multi-class classification:
    1. Multi-class perceptron
    2. Multi-class SVM
    3. Softmax

  28. One-vs-all classification
  Let y ∈ {1, …, C}
  Learn C scoring functions f_1, f_2, …, f_C
  Classify x to class y* = argmax_c f_c(x)
  Let's start with multi-class perceptrons: f_c(x) = w_c^T x
  [Figure: inputs feeding C perceptrons with weights w_c, followed by an argmax]

  29. Multi-class perceptrons
  Multi-class perceptrons: f_c(x) = w_c^T x
  Let W be the matrix with rows w_c
  What loss should we use for multi-class classification?
  Figure source: Stanford 231n

  30. Multi-class perceptrons
  Multi-class perceptrons: f_c(x) = w_c^T x
  Let W be the matrix with rows w_c
  What loss should we use for multi-class classification?
  For (x_i, y_i), let the loss be the sum of hinge losses associated with the predictions for all incorrect classes:
  ℓ(W, x_i, y_i) = Σ_{c ≠ y_i} max[0, w_c^T x_i − w_{y_i}^T x_i]
  The score for the correct class (y_i) has to be greater than the score for each incorrect class (c).

  31. Multi-class perceptrons
  ℓ(W, x_i, y_i) = Σ_{c ≠ y_i} max[0, w_c^T x_i − w_{y_i}^T x_i]
  Gradient w.r.t. w_{y_i}:
  −Σ_{c ≠ y_i} I[w_c^T x_i > w_{y_i}^T x_i] x_i
  Recall: d/dz max(0, z) = I[z > 0]

  32. Multi-class perceptrons
  ℓ(W, x_i, y_i) = Σ_{c ≠ y_i} max[0, w_c^T x_i − w_{y_i}^T x_i]
  Gradient w.r.t. w_{y_i}: −Σ_{c ≠ y_i} I[w_c^T x_i > w_{y_i}^T x_i] x_i
  Gradient w.r.t. w_c, c ≠ y_i: I[w_c^T x_i > w_{y_i}^T x_i] x_i
  Update rule: for each c such that w_c^T x_i > w_{y_i}^T x_i:
    w_{y_i} ← w_{y_i} + η x_i
    w_c ← w_c − η x_i
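  A minimal NumPy sketch of this multi-class perceptron update (the function name and loop structure are assumptions; labels are taken to be 0-based class indices):

import numpy as np

def multiclass_perceptron_step(W, x_i, y_i, eta=1.0):
    """One SGD step on sum_{c != y_i} max(0, w_c^T x_i - w_{y_i}^T x_i).

    W   : weight matrix, shape (C, d), one row per class
    x_i : feature vector, shape (d,)
    y_i : correct class index in {0, ..., C-1}
    """
    scores = W @ x_i
    for c in range(W.shape[0]):
        if c != y_i and scores[c] > scores[y_i]:   # class c outscores the correct class
            W[y_i] += eta * x_i                    # raise the correct class's score
            W[c] -= eta * x_i                      # lower the offending class's score
    return W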

  33. Multi-class perceptrons
  Update rule: for each c such that w_c^T x_i > w_{y_i}^T x_i:
    w_{y_i} ← w_{y_i} + η x_i
    w_c ← w_c − η x_i
  Is this equivalent to training C independent one-vs-all classifiers?

  Class score        Independent    Multi-class
  Cat score: 65.1    Do nothing     Increase
  Dog score: 101.4   Decrease       Decrease
  Ship score: 24.9   Decrease       Do nothing

  34. Multi-class SVM
  Recall the single-class SVM loss:
  ℓ(w, x_i, y_i) = (λ/2)||w||^2 + max[0, 1 − y_i w^T x_i]
  Generalization to multi-class:
  ℓ(W, x_i, y_i) = (λ/2)||W||^2 + Σ_{c ≠ y_i} max[0, 1 − w_{y_i}^T x_i + w_c^T x_i]
  The score for the correct class has to be greater than the score for each incorrect class by at least a margin of 1.
  [Figure: hinge loss vs. (score for correct class − score for incorrect class), passing through (0,1) and (1,0). Source: Stanford 231n]

  35. Multi-class SVM
  ℓ(W, x_i, y_i) = (λ/2)||W||^2 + Σ_{c ≠ y_i} max[0, 1 − w_{y_i}^T x_i + w_c^T x_i]
  Gradient w.r.t. w_{y_i}: λ w_{y_i} − Σ_{c ≠ y_i} I[w_{y_i}^T x_i − w_c^T x_i < 1] x_i
  Gradient w.r.t. w_c, c ≠ y_i: λ w_c + I[w_{y_i}^T x_i − w_c^T x_i < 1] x_i
  Update rule (almost* equivalent to the above):
  For each c ≠ y_i such that w_{y_i}^T x_i − w_c^T x_i < 1:
    w_{y_i} ← w_{y_i} + η x_i, w_c ← w_c − η x_i
  For c = 1, …, C: w_c ← (1 − ηλ) w_c
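  A corresponding sketch for the multi-class SVM update, including the weight-decay step from the regularizer (function name and 0-based class indices are assumptions):

import numpy as np

def multiclass_svm_step(W, x_i, y_i, eta, lam):
    """One SGD step on (lambda/2)*||W||^2 + sum_{c != y_i} max(0, 1 - w_{y_i}^T x_i + w_c^T x_i)."""
    scores = W @ x_i
    for c in range(W.shape[0]):
        if c != y_i and scores[y_i] - scores[c] < 1:   # margin of 1 violated
            W[y_i] += eta * x_i
            W[c] -= eta * x_i
    W *= (1 - eta * lam)                               # weight decay from the regularizer
    return W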

  36. Announcements
  Assignment 1 is out, due Tuesday, February 14
  Quiz 1 will be available 9 AM this Thursday, February 9, through 9 AM Monday, February 13

  37. Review: Three ways to think about linear classifiers
  Algebraic viewpoint: f(x, W) = Wx + b
  Visual viewpoint: one template per class
  Geometric viewpoint: hyperplanes cutting up space
  Source: J. Johnson

  38. Review: General recipe for training classifiers
  Find parameters w that minimize the sum of a regularization loss and a data loss:
  L(w) = λ R(w) + (1/n) Σ_{i=1}^n ℓ(w, x_i, y_i)
  L(w) is the empirical loss, λ R(w) is the regularization term, and ℓ(w, x_i, y_i) is the per-example data loss.
  L2 regularization: R(w) = (1/2) ||w||_2^2
  L1 regularization: R(w) = ||w||_1

  39. Last week: Linear classifiers
  Examples of classification models: nearest neighbor, linear
  Empirical loss minimization framework
  Linear classification models:
    1. Linear regression
    2. Logistic regression
    3. Perceptron training algorithm
    4. Support vector machines
  General recipe: data loss, regularization
  Multi-class classification:
    Multi-class perceptrons
    Multi-class SVM
    Softmax

  40. Softmax
  We want to squash the vector of responses f_1, …, f_C into a vector of "probabilities":
  softmax(f_1, …, f_C) = (exp(f_1) / Σ_c exp(f_c), …, exp(f_C) / Σ_c exp(f_c))
  The outputs are between 0 and 1 and sum to 1.
  If one of the inputs (logits) is much larger than the others, then the corresponding softmax value will be close to 1 and the others will be close to 0.
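  A minimal NumPy sketch of the softmax (naive version; a numerically stable variant appears with the overflow trick below):

import numpy as np

def softmax(s):
    """Map a vector of scores to probabilities: exp(s_c) / sum_j exp(s_j)."""
    e = np.exp(s)
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))   # sums to 1; the largest score gets the largest probability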

  41. Softmax and sigmoid
  For two classes:
  softmax(s, −s) = (exp(s) / (exp(s) + exp(−s)), exp(−s) / (exp(s) + exp(−s)))
                 = (1 / (1 + exp(−2s)), 1 / (exp(2s) + 1))
                 = (σ(2s), σ(−2s))
  Thus, softmax is the generalization of the sigmoid to more than two classes.

  42. Cross-entropy loss
  It is natural to use the negative log likelihood loss with softmax:
  ℓ(W, x_i, y_i) = −log P_W(y_i | x_i) = −log [exp(w_{y_i}^T x_i) / Σ_c exp(w_c^T x_i)]
  This is also the cross-entropy between the empirical distribution P(c | x_i) = I[c = y_i] and the estimated distribution P_W(c | x_i):
  −Σ_c P(c | x_i) log P_W(c | x_i)
  Empirical distribution: P(correct class | x_i) = 1, P(incorrect class | x_i) = 0

  43. SVM loss vs. cross-entropy loss Correct class is the third one (blue) Source: Stanford 231n

  44. SGD with cross-entropy loss
  ℓ(W, x_i, y_i) = −log [exp(w_{y_i}^T x_i) / Σ_c exp(w_c^T x_i)] = −w_{y_i}^T x_i + log Σ_c exp(w_c^T x_i)
  Gradient w.r.t. w_{y_i}:
  −x_i + [exp(w_{y_i}^T x_i) / Σ_c exp(w_c^T x_i)] x_i = (P_W(y_i | x_i) − 1) x_i
  Gradient w.r.t. w_c, c ≠ y_i:
  [exp(w_c^T x_i) / Σ_j exp(w_j^T x_i)] x_i = P_W(c | x_i) x_i

  45. SGD with cross-entropy loss
  Gradient w.r.t. w_{y_i}: (P_W(y_i | x_i) − 1) x_i
  Gradient w.r.t. w_c, c ≠ y_i: P_W(c | x_i) x_i
  Update rule:
  For y_i: w_{y_i} ← w_{y_i} + η (1 − P_W(y_i | x_i)) x_i
  For c ≠ y_i: w_c ← w_c − η P_W(c | x_i) x_i
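  A minimal NumPy sketch of one SGD step on the cross-entropy loss, combining the two cases via a one-hot target (function name and 0-based class indices are assumptions):

import numpy as np

def softmax_sgd_step(W, x_i, y_i, eta):
    """One SGD step on -log P_W(y_i | x_i), using the gradients from the slide:
    (P_W(y_i|x_i) - 1) * x_i for the correct class, P_W(c|x_i) * x_i otherwise."""
    scores = W @ x_i
    p = np.exp(scores - scores.max())        # stable softmax (see the overflow trick below)
    p /= p.sum()
    for c in range(W.shape[0]):
        target = 1.0 if c == y_i else 0.0
        W[c] -= eta * (p[c] - target) * x_i  # gradient is (p_c - I[c == y_i]) * x_i
    return W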

  46. Softmax trick: Avoiding overflow
  Exponentiated values exp(f_c) can become very large and cause overflow.
  Note that adding the same constant to all softmax inputs (logits) does not change the output of the softmax:
  exp(f_c) / Σ_j exp(f_j) = K exp(f_c) / (K Σ_j exp(f_j)) = exp(f_c + log K) / Σ_j exp(f_j + log K)
  Then we can let log K = −max_j f_j (i.e., make the largest input to the softmax be 0).
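  A minimal sketch of this trick in NumPy (the function name is an assumption):

import numpy as np

def stable_softmax(s):
    """Softmax with the max subtracted from every logit first, so the largest
    exponent is exp(0) = 1 and nothing overflows. The output is unchanged
    because adding a constant to all logits cancels in the ratio."""
    shifted = s - s.max()
    e = np.exp(shifted)
    return e / e.sum()

print(stable_softmax(np.array([1000.0, 1001.0, 1002.0])))   # naive exp() would overflow here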

  47. Softmax trick: Temperature scaling
  Suppose we divide every input to the softmax by the same constant T:
  softmax(f_1, …, f_C; T) = (exp(f_1/T) / Σ_c exp(f_c/T), …, exp(f_C/T) / Σ_c exp(f_c/T))
  Prior to normalization, each entry exp(f_c) is raised to the power 1/T.
  If T is close to 0, the largest entry will dominate and the output distribution will approach one-hot.
  If T is high, the output distribution will approach uniform.
  [Figure: softmax outputs for T = 0.1 and T = 10]
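  A minimal sketch of temperature scaling in NumPy (function name and example values are assumptions):

import numpy as np

def softmax_with_temperature(s, T=1.0):
    """Softmax of s / T. Small T sharpens the distribution toward one-hot;
    large T flattens it toward uniform."""
    z = s / T
    e = np.exp(z - z.max())          # max-subtraction trick for stability
    return e / e.sum()

s = np.array([1.0, 2.0, 3.0])
print(softmax_with_temperature(s, T=0.1))   # nearly one-hot on the largest score
print(softmax_with_temperature(s, T=10.0))  # nearly uniform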

  48. Softmax trick: Temperature scaling
  Low temperature: more concentrated distribution
  Higher temperature: more uniform distribution
  [Figure: output distributions at low and high temperature]

  49. Softmax trick: Label smoothing
  Recall: the cross-entropy loss measures the difference between the observed label distribution P(c | x_i) and the estimated distribution P_W(c | x_i):
  −Σ_c P(c | x_i) log P_W(c | x_i)
  Hard prediction targets:
  P(correct class | x_i) = 1
  P(incorrect class | x_i) = 0

  50. Softmax trick: Label smoothing
  Recall: the cross-entropy loss measures the difference between the observed label distribution P(c | x_i) and the estimated distribution P_W(c | x_i):
  −Σ_c P(c | x_i) log P_W(c | x_i)
  Soft prediction targets:
  P(correct class | x_i) = 1 − ε
  P(incorrect class | x_i) = ε / (C − 1)
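  A minimal sketch of label-smoothed targets and the resulting cross-entropy (function names and ε = 0.1 are assumptions):

import numpy as np

def smoothed_targets(y_i, num_classes, eps=0.1):
    """Soft target distribution for label smoothing:
    1 - eps on the correct class, eps / (C - 1) on each incorrect class."""
    p = np.full(num_classes, eps / (num_classes - 1))
    p[y_i] = 1.0 - eps
    return p

def cross_entropy(p_target, p_model):
    """Cross-entropy -sum_c p_target(c) * log p_model(c)."""
    return -np.sum(p_target * np.log(p_model))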
