
ECE 417 Lecture 9: Exam 1 Review Highlights
ECE 417, Lecture 9: Exam 1 Review
Mark Hasegawa-Johnson, 9/25/2017
Exam Details
- In class, Thursday 9/28
- No calculators; solutions may be left as numerical expressions (e.g., cos(0.74))
- One page of hand-written notes allowed, front & back
- The exam will be composed of 4 long-answer problems
Outline: Course Material Until Now
- Distance: Minkowski, Mahalanobis
- Gaussians: 1-D, multi-dimensional
- Eigenvectors and PCA (SVD and Gram matrices are not on the exam)
- Classifiers: KNN (especially 1-NN) and Bayesian
Distance
The distance between two vectors is a function d(x, y) that is:
1. Non-negative: d(x, y) >= 0
2. Positive definite: d(x, y) = 0 only if x = y
3. Symmetric: d(x, y) = d(y, x)
4. (Actually not required for it to be a distance: Homogeneous: d(ax, ay) = |a| d(x, y))
5. Satisfies the triangle inequality: d(x, z) <= d(x, y) + d(y, z)
Example Problem
Suppose x = [x1, x2, x3], y = [y1, y2, y3], and d(x, y) = (x2 - y2)^2.
Is d(x, y) a distance metric? If not, why not?
A: No. It is not positive definite: for example, x = [1, 1, 1] and y = [2, 1, 0]
have d(x, y) = 0 although x != y.
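The counterexample above can be checked directly. This is a quick sketch (not from the slides); the function name d is just the slide's notation:

```python
# Check the metric axioms for d(x, y) = (x2 - y2)^2 on the slide's counterexample.

def d(x, y):
    # Candidate "distance": squared difference of the second coordinate only.
    return (x[1] - y[1]) ** 2

x = (1, 1, 1)
y = (2, 1, 0)

# Non-negativity and symmetry hold...
assert d(x, y) >= 0 and d(x, y) == d(y, x)
# ...but positive definiteness fails: d(x, y) = 0 even though x != y.
assert d(x, y) == 0 and x != y
```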
Minkowski Norm
||x||_p = (|x1|^p + ... + |xd|^p)^(1/p)
Example
Suppose x = [x1, x2, x3] with x1 = x2 = x3 and ||x||_p = 2. Find x1 as a function of p.
Answer: x1 = 2 * 3^(-1/p).
  (|x1|^p + |x2|^p + |x3|^p)^(1/p) = 2
  |x1|^p + |x2|^p + |x3|^p = 2^p
  |x1|^p + |x1|^p + |x1|^p = 2^p
  |x1|^p = 2^p / 3
  x1 = 2 / 3^(1/p) = 2 * 3^(-1/p)
  (Or, if p is even, x1 = -2 * 3^(-1/p) also works.)
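The derivation can be verified numerically. A minimal sketch (the function name minkowski_norm is illustrative):

```python
import math

def minkowski_norm(x, p):
    """L_p (Minkowski) norm: (sum_i |x_i|^p)^(1/p)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

# If x1 = x2 = x3 and ||x||_p = 2, then x1 = 2 * 3**(-1/p):
for p in (1, 2, 3, 10):
    x1 = 2 * 3 ** (-1.0 / p)
    assert math.isclose(minkowski_norm([x1, x1, x1], p), 2.0)
```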
Mahalanobis Distance
d_Sigma^2(x, y) = (x - y)^T Sigma^{-1} (x - y)
               = (x1 - y1)^2 / s1^2 + (x2 - y2)^2 / s2^2 if Sigma = diag(s1^2, s2^2)

Example
Let x = [0, 0], y = [2, 0], z = [0, 2]. Find a possible Sigma (there are
infinitely many solutions) so that d_Sigma(x, y) < 2 but d_Sigma(x, z) > 2.
Answer: the first condition is sqrt(2^2/s1^2 + 0^2/s2^2) < 2, satisfied by
s1^2 > 1; similarly, the second condition requires s2^2 < 1. Any diagonal
covariance with s1^2 > 1 and s2^2 < 1 works, for example Sigma = diag(1.1, 0.9).
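The example's choice Sigma = diag(1.1, 0.9) can be checked numerically. A sketch for the diagonal-covariance case only (the helper name mahalanobis_diag is illustrative):

```python
import math

def mahalanobis_diag(x, y, var):
    """Mahalanobis distance when Sigma = diag(var) (variances only)."""
    return math.sqrt(sum((xi - yi) ** 2 / v for xi, yi, v in zip(x, y, var)))

x, y, z = (0, 0), (2, 0), (0, 2)
var = (1.1, 0.9)                      # Sigma = diag(1.1, 0.9) from the slide

assert mahalanobis_diag(x, y, var) < 2   # 2 / sqrt(1.1) ~ 1.91
assert mahalanobis_diag(x, z, var) > 2   # 2 / sqrt(0.9) ~ 2.11
```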
Probability Density Function (pdf)
A probability density function (pdf) is the derivative of the CDF:
  f_X(x) = dF_X(x)/dx
That means, for example, that the probability of getting an X in any interval
a < X <= b is:
  Pr{a < X <= b} = integral from a to b of f_X(x) dx
Unit Normal pdf
Suppose that X is normal with mean mu and standard deviation sigma (variance sigma^2):
  f_X(x) = N(x; mu, sigma^2) = (1 / sqrt(2 pi sigma^2)) exp(-(x - mu)^2 / (2 sigma^2))
Then Z = (X - mu) / sigma is normal with mean 0 and standard deviation 1:
  f_Z(z) = N(z; 0, 1) = (1 / sqrt(2 pi)) exp(-z^2 / 2)
Example
Define Phi(z) = int_(-infty)^z (1/sqrt(2 pi)) exp(-u^2/2) du.
X is Gaussian with mean 5 and variance 4. In terms of Phi, find Pr{X > 9}.
Answer: Pr{X > 9} = 1 - Phi((9 - 5)/2) = 1 - Phi(2).
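The standardization step can be checked numerically, since Phi is expressible through the error function. A quick sketch (not part of the slides):

```python
import math

def Phi(z):
    """Standard normal CDF, written in terms of math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 5.0, 2.0                 # variance 4 => sigma = 2
p = 1.0 - Phi((9.0 - mu) / sigma)    # Pr{X > 9} = 1 - Phi(2)

assert math.isclose(p, 1.0 - Phi(2.0))
assert abs(p - 0.02275) < 1e-4       # the familiar upper-tail value at z = 2
```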
Multivariate Gaussian
f_X(x) = N(x; mu, Sigma)
       = (1 / ((2 pi)^(d/2) |Sigma|^(1/2))) exp(-(1/2) (x - mu)^T Sigma^{-1} (x - mu))
Example
X is a Gaussian random vector with mu = [2, 2] and Sigma = diag(4, 1).
Plot the set {x : f_X(x) = (1/(4 pi)) e^{-2}}.
Answer: an ellipse centered at [2,2], touching the points [-2,2], [6,2], [2,4],
and [2,0], with main axes parallel to the coordinate axes.
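The claimed touching points can be checked against the density level. A sketch assuming mu = [2, 2] and Sigma = diag(4, 1) as in the example (the helper name gauss2_diag is illustrative):

```python
import math

def gauss2_diag(x, mu, var):
    """2-D Gaussian pdf with diagonal covariance diag(var)."""
    q = sum((xi - mi) ** 2 / v for xi, mi, v in zip(x, mu, var))
    norm = 1.0 / (2 * math.pi * math.sqrt(var[0] * var[1]))
    return norm * math.exp(-0.5 * q)

mu, var = (2, 2), (4, 1)
level = math.exp(-2) / (4 * math.pi)

# Each touching point has Mahalanobis distance 2 from mu, hence density
# (1/(4 pi)) e^{-2}:
for pt in [(-2, 2), (6, 2), (2, 4), (2, 0)]:
    assert math.isclose(gauss2_diag(pt, mu, var), level)
```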
Symmetric Positive Definite Matrices
  Sigma = V Lambda V^T
  Sigma^{-1} = V Lambda^{-1} V^T
  V^T V = V V^T = I
Example
A covariance matrix has eigenvalues lambda1 = 2, lambda2 = 1 and eigenvectors
v1 = [0.8, 0.6], v2 = [0.6, -0.8]. Find Sigma and Sigma^{-1}.
Answer:
  Sigma = (1/25) [[41, 12], [12, 34]]
  Sigma^{-1} = (1/50) [[34, -12], [-12, 41]]
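The reconstruction Sigma = V Lambda V^T can be verified by direct multiplication. A stdlib-only sketch (the helper matmul is illustrative):

```python
# Rebuild Sigma = V Lambda V^T from the example's eigenvalues and eigenvectors.

def matmul(A, B):
    """Plain-list matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

V = [[0.8, 0.6],
     [0.6, -0.8]]                 # orthonormal eigenvectors as columns
Lam = [[2, 0], [0, 1]]            # eigenvalues 2 and 1
Vt = [[V[j][i] for j in range(2)] for i in range(2)]

Sigma = matmul(matmul(V, Lam), Vt)

# Matches (1/25) [[41, 12], [12, 34]]:
expected = [[41 / 25, 12 / 25], [12 / 25, 34 / 25]]
assert all(abs(Sigma[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```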
Principal Components
d_Sigma^2(x, y) = (x - y)^T Sigma^{-1} (x - y)
               = (x - y)^T V Lambda^{-1} V^T (x - y)
               = z^T Lambda^{-1} z, where z = V^T (x - y)
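The equivalence of the two forms can be checked on the eigenpairs from the preceding example (lambda = (2, 1), v1 = [0.8, 0.6], v2 = [0.6, -0.8]); the test vectors x, y below are arbitrary illustrations:

```python
import math

V = [[0.8, 0.6], [0.6, -0.8]]     # columns are eigenvectors
lam = [2.0, 1.0]                  # eigenvalues

x, y = (1.0, 2.0), (0.0, 0.0)
diff = [x[0] - y[0], x[1] - y[1]]

# Eigenbasis route: z = V^T (x - y), then d^2 = sum z_i^2 / lambda_i.
z = [V[0][i] * diff[0] + V[1][i] * diff[1] for i in range(2)]
d2_eig = sum(zi ** 2 / li for zi, li in zip(z, lam))

# Direct route with Sigma^{-1} = (1/50) [[34, -12], [-12, 41]].
Si = [[34 / 50, -12 / 50], [-12 / 50, 41 / 50]]
d2_dir = sum(diff[i] * Si[i][j] * diff[j] for i in range(2) for j in range(2))

assert math.isclose(d2_eig, d2_dir)
```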
Example
A covariance matrix has eigenvalues lambda1 = 2, lambda2 = 4 and eigenvectors
v1 = (1/sqrt(2)) [1, 1], v2 = (1/sqrt(2)) [1, -1]. Plot the set of points x
such that x^T Sigma^{-1} x = 1.
Answer: an ellipse centered at the origin, touching the points [1,1], [-1,-1],
[sqrt(2), -sqrt(2)], and [-sqrt(2), sqrt(2)].
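The touching points can be verified numerically. This sketch assumes eigenvalues (2, 4) with eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2), which give Sigma = [[3,-1],[-1,3]] and Sigma^{-1} = (1/8)[[3,1],[1,3]]:

```python
import math

# Sigma^{-1} = (1/8) [[3, 1], [1, 3]] for eigenvalues (2, 4) and
# eigenvectors (1,1)/sqrt(2), (1,-1)/sqrt(2).
Si = [[3 / 8, 1 / 8], [1 / 8, 3 / 8]]

s2 = math.sqrt(2)
# Every claimed touching point lies on the unit Mahalanobis ellipse:
for x in [(1, 1), (-1, -1), (s2, -s2), (-s2, s2)]:
    q = sum(x[i] * Si[i][j] * x[j] for i in range(2) for j in range(2))
    assert math.isclose(q, 1.0)
```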
Example
A dataset contains six two-dimensional vectors X with labels
y = [0, 0, 0, 1, 1, 1]. Suppose that a test vector will be classified using a
nearest neighbor classifier. Plot the classification boundary.
Answer: the 1-NN classification boundary is piecewise linear; each segment lies
on the perpendicular bisector between a class-0 point and a class-1 point that
are nearest neighbors across the boundary.
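A minimal 1-NN classifier can make the idea concrete. The six training points below are illustrative placeholders (not the slide's exact values); only the labels [0,0,0,1,1,1] come from the example:

```python
import math

def nn_classify(x, X, y):
    """1-NN: return the label of the training point nearest to x."""
    dists = [math.dist(x, xi) for xi in X]
    return y[dists.index(min(dists))]

# Hypothetical training set: class 0 on the left, class 1 on the right.
X = [(0, 0), (0, 1), (1, 0), (2, 0), (2, 1), (3, 0)]
y = [0, 0, 0, 1, 1, 1]

assert nn_classify((0.2, 0.5), X, y) == 0
assert nn_classify((2.5, 0.5), X, y) == 1
```

For this data the boundary is the vertical bisector between the closest cross-class pairs, consistent with the piecewise-linear description above.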
Example
A two-class classifier has labels Y in {0, 1}. Suppose Pr{Y=1} = 1/3. If Y = 0,
then X is a Gaussian random vector with mean [0, 0] and identity covariance. If
Y = 1, then X is a Gaussian random vector with mean [1, 0] and identity
covariance. Plot the decision boundary.
Answer: a vertical line, intersecting the x1 axis at x1 = 0.5 + ln(2).
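The boundary location can be verified by checking that the two joint densities Pr{Y=k} f_X|Y(x|k) are equal along the line x1 = 0.5 + ln(2). A sketch (the helper gauss2 is illustrative):

```python
import math

def gauss2(x, mu):
    """2-D Gaussian pdf with identity covariance."""
    q = (x[0] - mu[0]) ** 2 + (x[1] - mu[1]) ** 2
    return math.exp(-0.5 * q) / (2 * math.pi)

p1 = 1 / 3                        # Pr{Y=1}; Pr{Y=0} = 2/3
x_boundary = 0.5 + math.log(2)    # claimed decision boundary

# The joint densities agree at every point on the boundary (any x2):
for x2 in (-1.0, 0.0, 3.0):
    x = (x_boundary, x2)
    assert math.isclose((1 - p1) * gauss2(x, (0, 0)), p1 * gauss2(x, (1, 0)))
```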