Data Science Course Updates and Events Overview
Lecture notes from the ECE-5424G / CS-5824 course at Virginia Tech, Spring 2019. This session covers administrative announcements and a distinguished lecture, a recap of the K-means algorithm, its optimization objective, and hierarchical clustering, and then the main topics: missing data problems, maximum likelihood estimation, probabilistic inference, mixtures of Gaussians, and the EM algorithm. Slide credits: Andrew Ng and Maria-Florina Balcan.
Presentation Transcript
EM and GMM
Jia-Bin Huang, Virginia Tech
ECE-5424G / CS-5824, Spring 2019
Administrative
HW 3 due March 27.
Final project discussion: Link
Final exam date/time: Exam Section 14M, May 13, 2:05 PM to 4:05 PM
https://banweb.banner.vt.edu/ssb/prod/hzskexam.P_DispExamInfo
J. Mark Sowers Distinguished Lecture
Michael Jordan, Pehong Chen Distinguished Professor, Department of Statistics and Electrical Engineering and Computer Sciences, University of California, Berkeley
3/28/19, 7:30 PM, McBryde 100
K-means algorithm
Input:
$K$ (number of clusters)
Training set $\{x^{(1)}, x^{(2)}, \ldots, x^{(m)}\}$, with $x^{(i)} \in \mathbb{R}^n$ (note: drop the $x_0 = 1$ convention)
Slide credit: Andrew Ng
K-means algorithm
Randomly initialize $K$ cluster centroids $\mu_1, \mu_2, \ldots, \mu_K \in \mathbb{R}^n$
Repeat {
  for $i$ = 1 to $m$: $c^{(i)}$ := index (from 1 to $K$) of the cluster centroid closest to $x^{(i)}$   (cluster assignment step)
  for $k$ = 1 to $K$: $\mu_k$ := average (mean) of the points assigned to cluster $k$   (centroid update step)
}
Slide credit: Andrew Ng
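To make the two steps concrete, here is a minimal MATLAB sketch of the loop above. It assumes the Statistics and Machine Learning Toolbox function pdist2 is available; the function name kmeans_simple and the variable names are my own, not from the slides.

% Minimal K-means sketch (illustrative; not code from the lecture).
% X is an m-by-n data matrix, K the number of clusters.
function [c, mu] = kmeans_simple(X, K, num_iters)
    m = size(X, 1);
    mu = X(randperm(m, K), :);         % randomly initialize K cluster centroids from the data
    for it = 1:num_iters
        % Cluster assignment step: index of the closest centroid for each example
        [~, c] = min(pdist2(X, mu), [], 2);
        % Centroid update step: mean of the points assigned to each cluster
        for k = 1:K
            if any(c == k)
                mu(k, :) = mean(X(c == k, :), 1);
            end
        end
    end
end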
K-means optimization objective
$c^{(i)}$ = index of the cluster (1, 2, ..., $K$) to which example $x^{(i)}$ is currently assigned
$\mu_k$ = cluster centroid $k$ ($\mu_k \in \mathbb{R}^n$)
$\mu_{c^{(i)}}$ = cluster centroid of the cluster to which example $x^{(i)}$ has been assigned
Example: if example $x^{(i)}$ is assigned to cluster 5, then $c^{(i)} = 5$ and $\mu_{c^{(i)}} = \mu_5$
Optimization objective:
$J(c^{(1)}, \ldots, c^{(m)}, \mu_1, \ldots, \mu_K) = \frac{1}{m} \sum_{i=1}^{m} \| x^{(i)} - \mu_{c^{(i)}} \|^2$
Goal: $\min_{c^{(1)}, \ldots, c^{(m)},\, \mu_1, \ldots, \mu_K} J(c^{(1)}, \ldots, c^{(m)}, \mu_1, \ldots, \mu_K)$
Slide credit: Andrew Ng
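The objective can also be evaluated directly; the short MATLAB sketch below computes $J$ for a given assignment vector c and centroid matrix mu (same shapes as in the K-means sketch above; the helper name kmeans_cost is my own). The cluster assignment step minimizes $J$ over the assignments with the centroids fixed, and the centroid update step minimizes $J$ over the centroids with the assignments fixed, which is exactly how the next slide annotates the algorithm.

% Distortion J for assignments c (m-by-1 indices) and centroids mu (K-by-n).
% Illustrative sketch, not code from the lecture.
function J = kmeans_cost(X, c, mu)
    diffs = X - mu(c, :);              % x^(i) - mu_{c^(i)} for every example
    J = mean(sum(diffs.^2, 2));        % (1/m) * sum of squared distances
end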
K-means algorithm
Randomly initialize $K$ cluster centroids $\mu_1, \mu_2, \ldots, \mu_K \in \mathbb{R}^n$
Repeat {
  for $i$ = 1 to $m$: $c^{(i)}$ := index (from 1 to $K$) of the cluster centroid closest to $x^{(i)}$
    Cluster assignment step: minimizes $J(c^{(1)}, \ldots, c^{(m)}, \mu_1, \ldots, \mu_K) = \frac{1}{m}\sum_{i=1}^{m} \|x^{(i)} - \mu_{c^{(i)}}\|^2$ over $c^{(1)}, \ldots, c^{(m)}$, holding the centroids fixed
  for $k$ = 1 to $K$: $\mu_k$ := average (mean) of the points assigned to cluster $k$
    Centroid update step: minimizes $J(c^{(1)}, \ldots, c^{(m)}, \mu_1, \ldots, \mu_K) = \frac{1}{m}\sum_{i=1}^{m} \|x^{(i)} - \mu_{c^{(i)}}\|^2$ over $\mu_1, \ldots, \mu_K$, holding the assignments fixed
}
Slide credit: Andrew Ng
Hierarchical Clustering
A hierarchy might be more natural.
Different users might care about different levels of granularity, or even different prunings of the tree.
Slide credit: Maria-Florina Balcan
Hierarchical Clustering
Top-down (divisive): Partition the data into two groups (e.g., with 2-means), then recursively cluster each group.
Bottom-up (agglomerative): Start with every point in its own cluster, then repeatedly merge the two closest clusters. Different definitions of "closest" give different algorithms.
Slide credit: Maria-Florina Balcan
Bottom-up (agglomerative)
Have a distance measure on pairs of objects: $d(x, x')$ = distance between $x$ and $x'$.
Single linkage: $\mathrm{dist}(A, B) = \min_{x \in A,\, x' \in B} d(x, x')$
Complete linkage: $\mathrm{dist}(A, B) = \max_{x \in A,\, x' \in B} d(x, x')$
Average linkage: $\mathrm{dist}(A, B) = \mathrm{average}_{x \in A,\, x' \in B}\, d(x, x')$
Ward's method: $\mathrm{dist}(A, B) = \frac{|A|\,|B|}{|A| + |B|}\, \| \mathrm{mean}(A) - \mathrm{mean}(B) \|^2$
Slide credit: Maria-Florina Balcan
Bottom-up (agglomerative)
Single linkage: $\mathrm{dist}(A, B) = \min_{x \in A,\, x' \in B} d(x, x')$
  At any time, the distance between any two points in a connected component is $< r$.
Complete linkage: $\mathrm{dist}(A, B) = \max_{x \in A,\, x' \in B} d(x, x')$
  Keeps the maximum diameter as small as possible at any level.
Ward's method: $\mathrm{dist}(A, B) = \frac{|A|\,|B|}{|A| + |B|}\, \| \mathrm{mean}(A) - \mathrm{mean}(B) \|^2$
  Merge the two clusters such that the increase in k-means cost is as small as possible. Works well in practice.
Slide credit: Maria-Florina Balcan
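To show how the linkage choice enters the merge step, the sketch below computes the cluster-to-cluster distance for the four linkages above, given two clusters A and B stored as rows of points. This is a minimal MATLAB sketch assuming pdist2 is available; the function name cluster_distance is mine. An agglomerative pass then repeatedly finds the pair of current clusters with the smallest such distance and merges them.

% Distance between clusters A (|A|-by-n) and B (|B|-by-n) under different linkages.
% Illustrative sketch, not code from the lecture.
function dAB = cluster_distance(A, B, linkage_type)
    D = pdist2(A, B);                  % all pairwise distances d(x, x')
    switch linkage_type
        case 'single'
            dAB = min(D(:));
        case 'complete'
            dAB = max(D(:));
        case 'average'
            dAB = mean(D(:));
        case 'ward'                    % increase in k-means cost caused by merging A and B
            nA = size(A, 1); nB = size(B, 1);
            dAB = (nA * nB) / (nA + nB) * sum((mean(A, 1) - mean(B, 1)).^2);
    end
end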
Things to remember
Intro to unsupervised learning
K-means algorithm: optimization objective, initialization, and the number of clusters
Hierarchical clustering
Today's Class
Examples of missing data problems: detecting outliers, latent topic models, segmentation
Background: maximum likelihood estimation, probabilistic inference
Dealing with hidden variables: EM algorithm, mixture of Gaussians, hard EM
Missing Data Problems: Outliers
You want to train an algorithm to predict whether a photograph is attractive. You collect annotations from Mechanical Turk. Some annotators try to give accurate ratings, but others answer randomly.
Challenge: Determine which people to trust and the average rating by accurate annotators.
Annotator ratings: 10, 8, 9, 2, 8
Photo: Jam343 (Flickr)
Missing Data Problems: Object Discovery
You have a collection of images and have extracted regions from them. Each region is represented by a histogram of "visual words".
Challenge: Discover frequently occurring object categories without pre-trained appearance models.
http://www.robots.ox.ac.uk/~vgg/publications/papers/russell06.pdf
Missing Data Problems: Segmentation
You are given an image and want to label each pixel as foreground or background.
Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance.
[Figure: Foreground / Background]
Missing Data Problems: Segmentation
Challenge: Segment the image into figure and ground without knowing what the foreground looks like in advance.
Three steps:
1. If we had labels, how could we model the appearance of foreground and background? Maximum Likelihood Estimation
2. Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground? Probabilistic Inference
3. How can we get both labels and appearance models at once? Expectation-Maximization (EM) Algorithm
Maximum Likelihood Estimation
1. If we had labels, how could we model the appearance of foreground and background?
[Figure: Background / Foreground]
Maximum Likelihood Estimation
$\hat{\theta} = \operatorname{argmax}_{\theta}\, p(\mathbf{x} \mid \theta)$   (data $\mathbf{x} = x_1, \ldots, x_N$; parameters $\theta$)
$\hat{\theta} = \operatorname{argmax}_{\theta} \prod_n p(x_n \mid \theta)$
Maximum Likelihood Estimation
$\hat{\theta} = \operatorname{argmax}_{\theta}\, p(\mathbf{x} \mid \theta) = \operatorname{argmax}_{\theta} \prod_n p(x_n \mid \theta)$,  with $\mathbf{x} = x_1, \ldots, x_N$
Gaussian distribution:
$p(x_n \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x_n - \mu)^2}{2\sigma^2} \right)$
Maximum Likelihood Estimation
Gaussian distribution: $p(x_n \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x_n - \mu)^2}{2\sigma^2} \right)$
$\hat{\theta} = \operatorname{argmax}_{\theta}\, p(\mathbf{x} \mid \theta) = \operatorname{argmax}_{\theta} \log p(\mathbf{x} \mid \theta) = \operatorname{argmax}_{\theta} L(\theta)$
Log-likelihood: $L(\theta) = \log \prod_n p(x_n \mid \mu, \sigma^2) = -\frac{N}{2} \log 2\pi - \frac{N}{2} \log \sigma^2 - \sum_n \frac{(x_n - \mu)^2}{2\sigma^2}$
$\frac{\partial L(\theta)}{\partial \mu} = \frac{1}{\sigma^2} \sum_n (x_n - \mu) = 0 \;\Rightarrow\; \hat{\mu} = \frac{1}{N} \sum_n x_n$
$\frac{\partial L(\theta)}{\partial \sigma} = -\frac{N}{\sigma} + \frac{1}{\sigma^3} \sum_n (x_n - \mu)^2 = 0 \;\Rightarrow\; \hat{\sigma}^2 = \frac{1}{N} \sum_n (x_n - \hat{\mu})^2$
Maximum Likelihood Estimation
$\hat{\theta} = \operatorname{argmax}_{\theta}\, p(\mathbf{x} \mid \theta) = \operatorname{argmax}_{\theta} \prod_n p(x_n \mid \theta)$,  with $\mathbf{x} = x_1, \ldots, x_N$
Gaussian distribution: $p(x_n \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(x_n - \mu)^2}{2\sigma^2} \right)$
MLE solution: $\hat{\mu} = \frac{1}{N} \sum_n x_n$,  $\hat{\sigma}^2 = \frac{1}{N} \sum_n (x_n - \hat{\mu})^2$
Example: MLE
Parameters used to generate the data:  fg: mu = 0.6, sigma = 0.1;  bg: mu = 0.4, sigma = 0.1
(im = image intensities, labels = true foreground/background labels)
>> mu_fg = mean(im(labels))
mu_fg = 0.6012
>> sigma_fg = sqrt(mean((im(labels)-mu_fg).^2))
sigma_fg = 0.1007
>> mu_bg = mean(im(~labels))
mu_bg = 0.4007
>> sigma_bg = sqrt(mean((im(~labels)-mu_bg).^2))
sigma_bg = 0.1007
>> pfg = mean(labels(:));
Probabilistic Inference
2. Once we have modeled the fg/bg appearance, how do we compute the likelihood that a pixel is foreground?
[Figure: Background / Foreground]
Probabilistic Inference
Compute the likelihood that a particular model generated a sample:
$p(z_n = m \mid x_n, \theta)$   ($m$ = component or label)
Probabilistic Inference
Compute the likelihood that a particular model generated a sample:
$p(z_n = m \mid x_n, \theta) = \dfrac{p(z_n = m, x_n \mid \theta)}{p(x_n \mid \theta)}$   (conditional probability: $P(A \mid B) = \dfrac{P(A, B)}{P(B)}$)
Probabilistic Inference
Compute the likelihood that a particular model generated a sample:
$p(z_n = m \mid x_n, \theta) = \dfrac{p(z_n = m, x_n \mid \theta)}{p(x_n \mid \theta)} = \dfrac{p(z_n = m, x_n \mid \theta)}{\sum_k p(z_n = k, x_n \mid \theta)}$   (marginalization: $P(A) = \sum_B P(A, B)$)
Probabilistic Inference
Compute the likelihood that a particular model generated a sample:
$p(z_n = m \mid x_n, \theta) = \dfrac{p(z_n = m, x_n \mid \theta)}{\sum_k p(z_n = k, x_n \mid \theta)} = \dfrac{p(x_n \mid z_n = m, \theta)\, p(z_n = m \mid \theta)}{\sum_k p(x_n \mid z_n = k, \theta)\, p(z_n = k \mid \theta)}$   (joint distribution: $P(A, B) = P(B)\, P(A \mid B)$)
Example: Inference
Learned parameters:  fg: mu = 0.6, sigma = 0.1;  bg: mu = 0.4, sigma = 0.1
>> pfg = 0.5;
>> px_fg = normpdf(im, mu_fg, sigma_fg);
>> px_bg = normpdf(im, mu_bg, sigma_bg);
>> pfg_x = px_fg*pfg ./ (px_fg*pfg + px_bg*(1-pfg));
Result: pfg_x = p(fg | im) for each pixel
Dealing with Hidden Variables
3. How can we get both labels and appearance parameters at once?
[Figure: Background / Foreground]
Mixture of Gaussians
$p(x_n \mid \boldsymbol{\mu}, \boldsymbol{\sigma}^2, \boldsymbol{\pi}) = \sum_m p(x_n, z_n = m \mid \boldsymbol{\mu}, \boldsymbol{\sigma}^2, \boldsymbol{\pi}) = \sum_m p(x_n \mid z_n = m, \mu_m, \sigma_m^2)\, \pi_m = \sum_m \pi_m \frac{1}{\sqrt{2\pi\sigma_m^2}} \exp\!\left(-\frac{(x_n - \mu_m)^2}{2\sigma_m^2}\right)$
Mixture component: $z_n = m$;  component model parameters: $\mu_m, \sigma_m^2$;  component prior: $\pi_m$
Mixture of Gaussians
With enough components, a mixture of Gaussians can approximate essentially any probability density function.
Widely used as a general-purpose pdf estimator.
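As a small illustration of the mixture density as a weighted sum of Gaussians, the MATLAB sketch below evaluates a 1-D mixture with three components using normpdf (as in the earlier inference example); the parameter values and variable names are made up for illustration.

% Evaluate a 1-D Gaussian mixture density p(x) = sum_m pi_m * N(x; mu_m, sigma_m^2).
% Illustrative sketch; the parameter values are invented.
pis    = [0.3 0.5 0.2];            % component priors, sum to 1
mus    = [0.2 0.5 0.8];            % component means
sigmas = [0.05 0.10 0.05];         % component standard deviations
x = linspace(0, 1, 200);
px = zeros(size(x));
for m = 1:numel(pis)
    px = px + pis(m) * normpdf(x, mus(m), sigmas(m));
end
plot(x, px);                       % mixture density: three bumps blended together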
Segmentation with Mixture of Gaussians
Pixels come from one of several Gaussian components.
We don't know which pixels come from which components.
We don't know the parameters for the components.
Problem: Estimate the parameters of the Gaussian mixture model. What would you do?
Simple solution
1. Initialize parameters
2. Compute the probability of each hidden variable given the current parameters
3. Compute new parameters for each model, weighted by the likelihood of the hidden variables
4. Repeat steps 2-3 until convergence
Mixture of Gaussians: Simple Solution
1. Initialize parameters
2. Compute the likelihood of the hidden variables for the current parameters:
$\alpha_{nm}^{(t)} = p(z_n = m \mid x_n, \boldsymbol{\mu}^{(t)}, \boldsymbol{\sigma}^{2(t)}, \boldsymbol{\pi}^{(t)})$
3. Estimate new parameters for each model, weighted by the likelihood:
$\mu_m^{(t+1)} = \dfrac{\sum_n \alpha_{nm} x_n}{\sum_n \alpha_{nm}}$,  $\sigma_m^{2(t+1)} = \dfrac{\sum_n \alpha_{nm} \left(x_n - \mu_m^{(t+1)}\right)^2}{\sum_n \alpha_{nm}}$,  $\pi_m^{(t+1)} = \dfrac{1}{N} \sum_n \alpha_{nm}$
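The three steps map directly onto a few lines of MATLAB. The sketch below is a minimal 1-D implementation of the updates above, again relying on normpdf; the function name em_gmm_1d, the initialization choices, and the fixed iteration count are my own simplifications rather than anything prescribed in the lecture. Step 2 is what the EM algorithm calls the E-step and step 3 the M-step; the next slides make that correspondence precise.

% Minimal EM sketch for a 1-D Gaussian mixture, following the update equations above.
% Illustrative; x is a vector of 1-D data points, M the number of components.
function [mus, sigmas, pis] = em_gmm_1d(x, M, num_iters)
    N = numel(x);
    mus    = x(randperm(N, M));        % 1. initialize parameters
    sigmas = std(x) * ones(1, M);
    pis    = ones(1, M) / M;
    for t = 1:num_iters
        % 2. E-step: responsibilities alpha(n, m) = p(z_n = m | x_n, current parameters)
        alpha = zeros(N, M);
        for m = 1:M
            alpha(:, m) = pis(m) * normpdf(x(:), mus(m), sigmas(m));
        end
        alpha = alpha ./ sum(alpha, 2);
        % 3. M-step: re-estimate parameters, weighted by the responsibilities
        for m = 1:M
            w = alpha(:, m);
            mus(m)    = sum(w .* x(:)) / sum(w);
            sigmas(m) = sqrt(sum(w .* (x(:) - mus(m)).^2) / sum(w));
            pis(m)    = sum(w) / N;
        end
    end
end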
Expectation Maximization (EM) Algorithm
Goal: $\operatorname{argmax}_{\theta} \log \sum_{\mathbf{z}} p(\mathbf{x}, \mathbf{z} \mid \theta)$
The log of a sum is intractable.
Jensen's inequality: for a concave function $f$, $f(E[X]) \geq E[f(X)]$  (so we maximize the lower bound!)
See here for a proof: www.stanford.edu/class/cs229/notes/cs229-notes8.ps
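To make the role of Jensen's inequality explicit (this expansion is a standard derivation, not copied from the slides): for any distribution $q(\mathbf{z})$ over the hidden variables,

$$\log p(\mathbf{x} \mid \theta) = \log \sum_{\mathbf{z}} q(\mathbf{z})\, \frac{p(\mathbf{x}, \mathbf{z} \mid \theta)}{q(\mathbf{z})} \;\geq\; \sum_{\mathbf{z}} q(\mathbf{z}) \log \frac{p(\mathbf{x}, \mathbf{z} \mid \theta)}{q(\mathbf{z})},$$

because $\log$ is concave. Choosing $q(\mathbf{z}) = p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)})$ makes the bound tight at $\theta^{(t)}$, and the E-step and M-step on the next slides maximize this lower bound over $\theta$.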
Expectation Maximization (EM) Algorithm
Goal: $\operatorname{argmax}_{\theta} \log \sum_{\mathbf{z}} p(\mathbf{x}, \mathbf{z} \mid \theta)$
1. E-step: compute $E_{p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)})}\!\left[ \log p(\mathbf{x}, \mathbf{z} \mid \theta) \right] = \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)}) \log p(\mathbf{x}, \mathbf{z} \mid \theta)$
2. M-step: solve $\theta^{(t+1)} = \operatorname{argmax}_{\theta} \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)}) \log p(\mathbf{x}, \mathbf{z} \mid \theta)$
Expectation Maximization (EM) Algorithm
Goal: $\operatorname{argmax}_{\theta} \log \sum_{\mathbf{z}} p(\mathbf{x}, \mathbf{z} \mid \theta)$   (a log of an expectation of $p(\mathbf{x} \mid \mathbf{z})$; Jensen: $f(E[X]) \geq E[f(X)]$)
1. E-step: compute the expectation of the log, $E_{p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)})}\!\left[ \log p(\mathbf{x}, \mathbf{z} \mid \theta) \right] = \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)}) \log p(\mathbf{x}, \mathbf{z} \mid \theta)$
2. M-step: solve $\theta^{(t+1)} = \operatorname{argmax}_{\theta} \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)}) \log p(\mathbf{x}, \mathbf{z} \mid \theta)$
EM for Mixture of Gaussians - derivation
$p(x_n \mid \boldsymbol{\mu}, \boldsymbol{\sigma}^2, \boldsymbol{\pi}) = \sum_m p(x_n, z_n = m \mid \boldsymbol{\mu}, \boldsymbol{\sigma}^2, \boldsymbol{\pi}) = \sum_m \pi_m \frac{1}{\sqrt{2\pi\sigma_m^2}} \exp\!\left(-\frac{(x_n - \mu_m)^2}{2\sigma_m^2}\right)$
1. E-step: compute $E_{p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)})}\!\left[ \log p(\mathbf{x}, \mathbf{z} \mid \theta) \right] = \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)}) \log p(\mathbf{x}, \mathbf{z} \mid \theta)$
2. M-step: solve $\theta^{(t+1)} = \operatorname{argmax}_{\theta} \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)}) \log p(\mathbf{x}, \mathbf{z} \mid \theta)$
EM for Mixture of Gaussians
$p(x_n \mid \boldsymbol{\mu}, \boldsymbol{\sigma}^2, \boldsymbol{\pi}) = \sum_m p(x_n, z_n = m \mid \boldsymbol{\mu}, \boldsymbol{\sigma}^2, \boldsymbol{\pi}) = \sum_m \pi_m \frac{1}{\sqrt{2\pi\sigma_m^2}} \exp\!\left(-\frac{(x_n - \mu_m)^2}{2\sigma_m^2}\right)$
1. E-step: compute $E_{p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)})}\!\left[ \log p(\mathbf{x}, \mathbf{z} \mid \theta) \right]$, i.e., the responsibilities $\alpha_{nm}^{(t)} = p(z_n = m \mid x_n, \boldsymbol{\mu}^{(t)}, \boldsymbol{\sigma}^{2(t)}, \boldsymbol{\pi}^{(t)})$
2. M-step: solve $\theta^{(t+1)} = \operatorname{argmax}_{\theta} \sum_{\mathbf{z}} p(\mathbf{z} \mid \mathbf{x}, \theta^{(t)}) \log p(\mathbf{x}, \mathbf{z} \mid \theta)$, giving
$\mu_m^{(t+1)} = \dfrac{\sum_n \alpha_{nm} x_n}{\sum_n \alpha_{nm}}$,  $\sigma_m^{2(t+1)} = \dfrac{\sum_n \alpha_{nm} \left(x_n - \mu_m^{(t+1)}\right)^2}{\sum_n \alpha_{nm}}$,  $\pi_m^{(t+1)} = \dfrac{1}{N} \sum_n \alpha_{nm}$
EM algorithm - derivation
http://lasa.epfl.ch/teaching/lectures/ML_Phd/Notes/GP-GMM.pdf
EM algorithm M-Step
Take the derivative with respect to the component mean $\mu_m$
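Carrying out that derivative (a standard computation, reconstructed here rather than copied from the slide): with the responsibilities $\alpha_{nm}$ held fixed from the E-step,

$$\frac{\partial}{\partial \mu_m} \sum_n \sum_k \alpha_{nk}\left[\log \pi_k - \tfrac{1}{2}\log\!\left(2\pi\sigma_k^2\right) - \frac{(x_n - \mu_k)^2}{2\sigma_k^2}\right] = \sum_n \alpha_{nm}\,\frac{x_n - \mu_m}{\sigma_m^2} = 0 \;\;\Longrightarrow\;\; \mu_m = \frac{\sum_n \alpha_{nm} x_n}{\sum_n \alpha_{nm}}.$$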
EM algorithm M-Step
Take the derivative with respect to the component prior $\pi_m$, subject to the constraint $\sum_m \pi_m = 1$
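Because the priors must sum to one, this derivative is taken with a Lagrange multiplier (again a standard derivation, not copied from the slide):

$$\frac{\partial}{\partial \pi_m}\left[\sum_n \sum_k \alpha_{nk} \log \pi_k + \lambda\Big(1 - \sum_k \pi_k\Big)\right] = \frac{\sum_n \alpha_{nm}}{\pi_m} - \lambda = 0 \;\;\Longrightarrow\;\; \pi_m = \frac{\sum_n \alpha_{nm}}{\lambda} = \frac{1}{N}\sum_n \alpha_{nm},$$

where $\lambda = N$ follows from summing over $m$ and using $\sum_m \pi_m = 1$ and $\sum_m \alpha_{nm} = 1$.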
EM Algorithm
Maximizes a lower bound on the data likelihood at each iteration
Each step increases the data likelihood; converges to a local maximum
Common tricks in the derivation: find terms that sum or integrate to 1; use a Lagrange multiplier to deal with constraints