Understanding Latent Variable Models in Machine Learning


Latent variable models play a crucial role in machine learning, especially in unsupervised learning tasks like clustering, dimensionality reduction, and probability density estimation. These models involve hidden variables that encode latent properties of the observations, providing deeper insight into the underlying structure of the data. By incorporating both global and local latent variables, generative models with latent variables enable sophisticated probabilistic modeling to uncover hidden patterns and relationships within datasets.





Presentation Transcript


  1. Latent Variable Models CS771: Introduction to Machine Learning Nisheeth

  2. Coin toss example. Say you toss a coin $N$ times and want to figure out its bias. Bayesian approach: find the generative model. Each toss $\sim \text{Bern}(\theta)$, with the bias $\theta \sim \text{Beta}(\alpha, \beta)$. Draw the generative model in plate notation.
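
A minimal sketch of the Beta-Bernoulli setup this slide describes. The toss outcomes and the prior hyperparameters $a, b$ below are made-up illustrative values, not from the slides; the point is that the Beta prior is conjugate to the Bernoulli likelihood, so the posterior over the bias is available in closed form.

```python
import numpy as np
from scipy import stats

# Hypothetical data: N coin tosses (1 = heads), and a Beta(a, b) prior on the bias theta
tosses = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # made-up outcomes
a, b = 2.0, 2.0                                # illustrative prior hyperparameters

# Conjugacy: posterior is Beta(a + #heads, b + #tails), no optimisation needed
heads = tosses.sum()
tails = len(tosses) - heads
posterior = stats.beta(a + heads, b + tails)

print("posterior mean of the bias:", posterior.mean())     # point estimate of theta
print("95% credible interval:", posterior.interval(0.95))
```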

  3. Plate notation. Random variables are drawn as circles; parameters and fixed values as squares; repetitions of conditional probability structures as rectangular plates; switch conditioning as squiggles; random variables that are observed in practice are shaded.

  4. Generative Models with Latent Variables. We have already looked at generative models for supervised learning; generative models are even more common/popular for unsupervised learning, e.g., clustering, dimensionality reduction, and probability density estimation. In such models, each data point $x_n$ is associated with a latent variable $z_n$, which usually encodes some latent properties of the observation. Clustering: $z_n$ is the cluster id (discrete, or a $K$-dim one-hot representation, or a vector of cluster membership probabilities). Dimensionality reduction: $z_n$ is the low-dimensional representation of $x_n$. These latent variables will be treated as random variables, not just fixed unknowns; we will therefore assume a suitable prior distribution on them and estimate their posterior. If we only need a point estimate (MLE/MAP) of these latent variables, that can be done too.
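
A small illustrative snippet (values and dimensions are arbitrary, not from the slides) showing the three forms the local latent variable $z_n$ can take in clustering, and the real-valued form used in dimensionality reduction.

```python
import numpy as np

K = 3          # number of clusters (arbitrary)

# Clustering: z_n as a discrete id, a K-dim one-hot vector, or membership probabilities
z_id = 1                              # hard cluster assignment
z_onehot = np.eye(K)[z_id]            # [0., 1., 0.]
z_probs = np.array([0.1, 0.7, 0.2])   # soft assignment, sums to 1

# Dimensionality reduction: z_n as a low-dimensional real-valued code
z_lowdim = np.array([0.35, -1.2])     # a 2-dim representation of a higher-dim x_n
```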

  5. Generative Models with Latent Variables. A typical generative model with latent variables might look like this: a prior $p(z_n \mid \phi)$, a suitable distribution based on the nature of $z_n$, and a likelihood $p(x_n \mid z_n, \theta)$, a suitable distribution based on the nature of $x_n$. We need probability distributions on both. In this generative model, the observations $x_n$ are assumed to be generated via the latent variables $z_n$. The unknowns in such latent variable models (LVMs) are of two types. Global variables: shared by all data points ($\phi$ and $\theta$ in the diagram). Local variables: specific to each data point (the $z_n$'s in the diagram). Note: both global and local unknowns can be treated as random variables; however, here we will only treat the local variables $z_n$ as random latent variables and regard $\phi$ and $\theta$ as other unknown parameters of the model.

  6. An Example of a Generative LVM. Probabilistic clustering can be formulated as a generative latent variable model. Assume $K$ probability distributions (e.g., Gaussians), one for each cluster, with parameters $\theta = \{\mu_k, \Sigma_k\}_{k=1}^K$. The latent variable $z_n$ is discrete (with $K$ possible values), or a one-hot vector of length $K$, modeled by a multinoulli prior: $z_n \sim \text{multinoulli}(\pi)$, which also means $p(z_n = k) = \pi_k$, where $\pi = [\pi_1, \pi_2, \ldots, \pi_K]$ is the parameter vector of the multinoulli distribution. The observation $x_n$ is assumed generated from one of the $K$ distributions depending on the true (but unknown) value of $z_n$ (which clustering will find); the likelihood distributions are $p(x_n \mid z_n = k, \theta) = \mathcal{N}(x_n \mid \mu_k, \Sigma_k)$. The marginal $p(x_n)$ is then a Gaussian mixture model (GMM). In any such LVM, $\phi$ denotes the parameters of the prior distribution on $z_n$ (here $\phi = \pi$), and $\theta$ denotes the parameters of the likelihood distribution on $x_n$.
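
A minimal sketch of the two pieces this slide names, the multinoulli prior $p(z_n = k) = \pi_k$ and the Gaussian likelihood $p(x_n \mid z_n = k, \theta)$, and how they combine into the mixture $p(x_n)$. All parameter values and the observation are made up for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative GMM parameters for K = 2 clusters in 2-D (values are made up)
pi = np.array([0.6, 0.4])                          # multinoulli prior over z_n
mu = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]  # cluster means
Sigma = [np.eye(2), 0.5 * np.eye(2)]               # cluster covariances
K = len(pi)

x_n = np.array([2.5, 2.8])                         # one observation

# Likelihood p(x_n | z_n = k, theta) for each cluster
lik = np.array([multivariate_normal.pdf(x_n, mean=mu[k], cov=Sigma[k]) for k in range(K)])

# Marginal p(x_n) is the mixture: sum_k pi_k N(x_n | mu_k, Sigma_k)
p_x = np.sum(pi * lik)
print("p(x_n) under the GMM:", p_x)
```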

  7. Parameter Estimation for a Generative LVM. So how do we estimate the parameters of a generative LVM, say probabilistic clustering? One approach is to alternate between making a guess about each $z_n$ and updating the parameters given that guess. The guess about $z_n$ can be in one of two forms: a hard guess, i.e., a fixed value (some optimal value of the random variable $z_n$), or the expected value $\mathbb{E}[z_n]$ of the random variable $z_n$. Using the hard guess of $z_n$ results in an ALT-OPT-like algorithm; using the expected value of $z_n$ gives the so-called Expectation-Maximization (EM) algorithm. EM is pretty much like ALT-OPT but with soft/expected values of the latent variables.
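
A small sketch (under the same made-up GMM parameters used above, not from the slides) contrasting the two guesses for a single point: the soft guess is the posterior over $z_n$ (the responsibilities used by EM), and the hard guess is its argmax (the fixed value used by an ALT-OPT style algorithm).

```python
import numpy as np
from scipy.stats import multivariate_normal

# Same illustrative GMM parameters as before (made up)
pi = np.array([0.6, 0.4])
mu = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
Sigma = [np.eye(2), 0.5 * np.eye(2)]
x_n = np.array([2.5, 2.8])

# Posterior over z_n:  p(z_n = k | x_n) proportional to pi_k * N(x_n | mu_k, Sigma_k)
unnorm = np.array([pi[k] * multivariate_normal.pdf(x_n, mu[k], Sigma[k]) for k in range(2)])
resp = unnorm / unnorm.sum()     # "soft" / expected guess, as used by EM

z_hard = int(np.argmax(resp))    # "hard" guess (a fixed value), as used by ALT-OPT
print("responsibilities (EM):", resp)
print("hard assignment (ALT-OPT):", z_hard)
```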

  8. Parameter Estimation for a Generative LVM. Can we estimate the parameters $\Theta = (\phi, \theta)$ of an LVM without estimating the $z_n$'s? In principle yes, but it is harder. Given $N$ observations $x_n$, $n = 1, 2, \ldots, N$, the MLE problem for $\Theta$ will be
$$\hat{\Theta} = \arg\max_{\Theta} \sum_{n=1}^{N} \log p(x_n \mid \Theta) = \arg\max_{\Theta} \sum_{n=1}^{N} \log \sum_{z_n} p(x_n, z_n \mid \Theta),$$
where we sum over all possible values $z_n$ can take (this would be an integral instead of a sum if $z_n$ were continuous). The discussion here is also true for MAP estimation of $\Theta$. Also note that $p(x_n, z_n \mid \Theta) = p(z_n \mid \phi)\, p(x_n \mid z_n, \theta)$. After the summation/integral on the RHS, $p(x_n \mid \Theta)$ is no longer exponential family, even if $p(z_n \mid \phi)$ and $p(x_n \mid z_n, \theta)$ are in the exp-family. For the probabilistic clustering model (GMM) we saw, $p(x_n \mid \Theta)$ will be
$$p(x_n \mid \Theta) = \sum_{k=1}^{K} p(z_n = k)\, p(x_n \mid z_n = k, \theta) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x_n \mid \mu_k, \Sigma_k),$$
a convex combination (mixture) of $K$ Gaussians, which is no longer an exp-family distribution. The MLE problem thus will be
$$\hat{\Theta} = \arg\max_{\Theta} \sum_{n=1}^{N} \log \sum_{k=1}^{K} \pi_k\, \mathcal{N}(x_n \mid \mu_k, \Sigma_k).$$
The log of a sum doesn't give us a simple expression; MLE can still be done using gradient-based methods, but the updates will be complicated. ALT-OPT or EM make it simpler by using guesses of the $z_n$'s.
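
A minimal sketch of evaluating the objective above, the GMM marginal log-likelihood $\sum_n \log \sum_k \pi_k \mathcal{N}(x_n \mid \mu_k, \Sigma_k)$, for fixed parameters. The data and parameter values are synthetic; log-sum-exp is used only for numerical stability and is not something the slides prescribe.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def gmm_log_likelihood(X, pi, mu, Sigma):
    """sum_n log sum_k pi_k N(x_n | mu_k, Sigma_k), computed stably via logsumexp."""
    K = len(pi)
    # log [pi_k N(x_n | mu_k, Sigma_k)] for every (n, k)
    log_joint = np.stack(
        [np.log(pi[k]) + multivariate_normal.logpdf(X, mu[k], Sigma[k]) for k in range(K)],
        axis=1,
    )
    # the "log of a sum" over k that makes the MLE objective awkward
    return logsumexp(log_joint, axis=1).sum()

# Synthetic example (data and parameters are made up)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
pi = np.array([0.5, 0.5])
mu = [np.zeros(2), np.ones(2)]
Sigma = [np.eye(2), np.eye(2)]
print(gmm_log_likelihood(X, pi, mu, Sigma))
```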

  9. Another Example of a Generative LVM. Probabilistic PCA (PPCA) is another example of a generative latent variable model. Assume a $K$-dim latent variable $z_n$ mapped to a $D$-dim observation $x_n$ via a probabilistic mapping:
$$p(z_n) = \mathcal{N}(0, I_K), \qquad p(x_n \mid z_n, W, \sigma^2) = \mathcal{N}(W z_n, \sigma^2 I_D).$$
Here $z_n$ is a real-valued vector of length $K$, modeled by a zero-mean $K$-dim Gaussian prior; $W$ is a $D \times K$ mapping matrix, and $W$ and $\sigma^2$ are the parameters defining the projection from $z_n$ to $x_n$, with $W z_n$ the $D \times 1$ mean of the mapping. The probabilistic mapping means that $x_n$ will not be exactly $W z_n$ but somewhere around that mean (in some sense, it is a noisy mapping): $x_n = W z_n + \epsilon_n$, with added Gaussian noise just like probabilistic linear regression. In this example no parameters are actually needed for the Gaussian prior on $z_n$, since the mean is zero and the covariance matrix is identity, but a nonzero mean and a more general covariance matrix can be used. Also, instead of the linear mapping $W z_n$, the $z_n$ to $x_n$ mapping can be defined as a nonlinear mapping (variational autoencoders, kernel-based latent variable models). Note that if the $z_n$ were known, this just becomes a probabilistic version of the multi-output regression problem, where the $z_n$'s are the observed input features and the $x_n$'s are the vector-valued outputs. PPCA has several benefits over PCA, some of which include: we can use suitable distributions for $z_n$ to better capture properties of the data, and parameter estimation can be done faster without eigen-decomposition (using ALT-OPT/EM algorithms).
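
A minimal sketch of sampling from the PPCA generative process just described: draw $z_n \sim \mathcal{N}(0, I_K)$, then set $x_n = W z_n + \epsilon_n$ with $\epsilon_n \sim \mathcal{N}(0, \sigma^2 I_D)$. The dimensions, the matrix $W$, and the noise level are all made-up illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, N = 2, 5, 200              # latent dim, observed dim, number of points (arbitrary)
W = rng.normal(size=(D, K))      # made-up D x K mapping matrix
sigma = 0.1                      # made-up noise standard deviation

# z_n ~ N(0, I_K), then x_n = W z_n + eps_n with eps_n ~ N(0, sigma^2 I_D)
Z = rng.normal(size=(N, K))
X = Z @ W.T + sigma * rng.normal(size=(N, D))

print(X.shape)   # (200, 5): D-dim observations generated from K-dim latents
```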

  10. Generative Models and Generative Stories. Data generation for a generative model can be imagined via a generative story. This story is just our hypothesis of how nature generated the data. For the Gaussian mixture model (GMM), the (somewhat boring) story is as follows: for each data point $x_n$ with index $n = 1, 2, \ldots, N$, generate its cluster assignment by drawing from the prior $p(z_n \mid \pi)$, i.e., $z_n \sim \text{multinoulli}(\pi)$; then, assuming $z_n = k$, generate the data point $x_n$ from $p(x_n \mid z_n, \theta)$, i.e., $x_n \sim \mathcal{N}(\mu_k, \Sigma_k)$. We can imagine a similar story for PPCA, with $z_n$ generated from $\mathcal{N}(0, I_K)$ and then, conditioned on $z_n$, the observation $x_n$ generated from $p(x_n \mid z_n, W, \sigma^2) = \mathcal{N}(W z_n, \sigma^2 I_D)$. For GMM/PPCA the story is rather simplistic, but for more sophisticated models it gives an easy way to understand/explain the model and the data generation process.
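
A minimal sketch of the GMM generative story as ancestral sampling, one $z_n$ drawn from the multinoulli prior and then one $x_n$ drawn from the corresponding Gaussian. The mixing proportions, means, and covariances below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
pi = np.array([0.6, 0.4])                             # made-up mixing proportions
mu = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]     # made-up cluster means
Sigma = [np.eye(2), 0.5 * np.eye(2)]                  # made-up cluster covariances

N = 500
X, Z = [], []
for n in range(N):
    z_n = rng.choice(len(pi), p=pi)                      # z_n ~ multinoulli(pi)
    x_n = rng.multivariate_normal(mu[z_n], Sigma[z_n])   # x_n ~ N(mu_{z_n}, Sigma_{z_n})
    Z.append(z_n)
    X.append(x_n)

X, Z = np.array(X), np.array(Z)
print(X.shape, np.bincount(Z))   # dataset plus how many points each cluster generated
```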
