Understanding TensorFlow for Social Good by Zhixun Jason He


This content provides an overview of TensorFlow for social good, focusing on models, training, and data. It explains how a model predicts outcomes from inputs and how the right model and parameter values are found. It emphasizes the division of labor in machine learning: you design the right model, and TensorFlow calculates the parameters. Examples illustrate the concepts discussed.



Presentation Transcript


  1. TensorFlow for Social Good. Zhixun Jason He

  2. Table of Contents: 1. Overview: Model, Training (Learning), Data; 2. Warm-up: Training (behind the scenes); 3. Start TensorFlow; 4. Useful resources that helped me a lot.

  3. Overview: Model, Training (Learning), Data. Model: In plain language, we want to predict something (e.g., a housing price, a class label) by feeding some inputs to the Model, and the Model is supposed to give its prediction. E.g., input: number of bedrooms, area, distance to school, etc.; output: price of the house. Example: Input: x1 = number of bedrooms, x2 = area, x3 = distance to school. Parameters: a, b, c, d. Model: y = a·x1 + b·x2 + c·x3² + d.

  4. Overview: Model, Training (Learning), Data. Model: Where does the input come from? Where does the model come from? Where do the parameters come from? How do we know the exact numbers for the input? How do we know the exact numbers for the parameters? Example: Input: x1 = number of bedrooms, x2 = area, x3 = distance to school. Parameters: a, b, c, d. Model: y = a·x1 + b·x2 + c·x3² + d.

  5. Overview: Model, Training (Learning), Data. Model: Find the exact numbers for the parameters (a, b, c, d) and find the right model. Example: Input: x1 = number of bedrooms, x2 = area, x3 = distance to school. Parameters: a, b, c, d. Model: y = a·x1 + b·x2 + c·x3² + d. Your job: ... TensorFlow's job: ...

  6. Overview: Model, Training (Learning), Data. Model: Find the exact numbers for the parameters (a, b, c, d) and find the right model. Example: Input: x1 = number of bedrooms, x2 = area, x3 = distance to school. Parameters: a, b, c, d. Model: y = a·x1 + b·x2 + c·x3² + d. Your job: design the right model. TensorFlow's job: calculate the parameters. TensorFlow needs: ...

  7. Overview: Model, Training (Learning), Data. Model: Find the exact numbers for the parameters (a, b, c, d) and find the right model. Example: Input: x1 = number of bedrooms, x2 = area, x3 = distance to school. Parameters: a, b, c, d. Model: y = a·x1 + b·x2 + c·x3² + d. Your job: design the right model. TensorFlow's job: calculate the parameters. TensorFlow needs: data, in order to train/learn a, b, c, d. 1st house: x1^(1), x2^(1), x3^(1), y^(1); 2nd house: x1^(2), x2^(2), x3^(2), y^(2); ...; n-th house: x1^(n), x2^(n), x3^(n), y^(n).
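A minimal Python sketch of the example above, just to make the notation concrete. The parameter values and the three houses in the data list are made up for illustration; they are not from the slides.

```python
# Toy version of the housing-price model y = a*x1 + b*x2 + c*x3**2 + d.
def predict(x1, x2, x3, a, b, c, d):
    """Model: y = a*x1 + b*x2 + c*x3**2 + d."""
    return a * x1 + b * x2 + c * x3 ** 2 + d

# The data TensorFlow would need: one row per house,
# (x1, x2, x3, y) = (bedrooms, area, distance to school, observed price).
houses = [
    (3, 120.0, 1.5, 450_000.0),
    (2,  80.0, 0.8, 380_000.0),
    (4, 160.0, 3.0, 520_000.0),
]

# Guessed parameter values; training is what replaces guesses with learned values.
a, b, c, d = 10_000.0, 2_000.0, -500.0, 50_000.0
for x1, x2, x3, y in houses:
    print(predict(x1, x2, x3, a, b, c, d), "vs. observed", y)
```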

  8. Warm-up: Training (behind the scenes). Training: Example: y = a·x². Data: {(x^(i), y^(i))}, i = 1, 2, ..., n. Find the parameter value a such that, across all the data, the predicted result ŷ^(i) = a·(x^(i))² is as close as possible to the ground-truth value y^(i). Measure the difference between ŷ (prediction) and y (ground truth): (ŷ^(i) - y^(i))².

  9. Warm-up: Training (behind the scenes). Training: Example: y = a·x². Data: {(x^(i), y^(i))}, i = 1, 2, ..., n. Find the parameter value a such that, across all the data, the predicted result ŷ^(i) = a·(x^(i))² is as close as possible to the ground-truth value y^(i). Measure the difference between ŷ (prediction) and y (ground truth): (ŷ^(i) - y^(i))² = (y^(i) - a·(x^(i))²)².

  10. Warm-up: Training (behind the scenes). Training: Example: y = a·x². Data: {(x^(i), y^(i))}, i = 1, 2, ..., n. Find the parameter value a such that, across all the data, the predicted result ŷ^(i) = a·(x^(i))² is as close as possible to the ground-truth value y^(i). Measure the difference between ŷ (prediction) and y (ground truth): (ŷ^(i) - y^(i))² = (y^(i) - a·(x^(i))²)². Summing over all the data gives the loss: L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)².

  11. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)².
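As a sketch, the same loss written as a plain Python function for the one-parameter model y = a·x². The data points below are placeholders generated from y = 2·x², so the loss should be smallest near a = 2.

```python
# Loss L(a) = sum_i (y_i - a * x_i**2)**2 for the model y = a*x**2.
def loss(a, xs, ys):
    return sum((y - a * x ** 2) ** 2 for x, y in zip(xs, ys))

xs = [1.0, 2.0, 3.0]    # placeholder inputs
ys = [2.0, 8.0, 18.0]   # placeholder targets, generated from y = 2*x**2
print(loss(1.5, xs, ys), loss(2.0, xs, ys), loss(2.5, xs, ys))  # minimum at a = 2
```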

  12. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)². Expanding the square: L = Σ_i [ (x^(i))⁴·a² - 2·(x^(i))²·y^(i)·a + (y^(i))² ].

  13. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)². Expanding the square: L = Σ_i [ (x^(i))⁴·a² - 2·(x^(i))²·y^(i)·a + (y^(i))² ], i.e., L = c1·a² - c2·a + c3.
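Spelling out how the expanded sum collapses into the three constants c1, c2, c3, in the same notation:

```latex
\[
L(a) = \sum_{i=1}^{n} \bigl( y^{(i)} - a\,(x^{(i)})^{2} \bigr)^{2}
     = \underbrace{\sum_{i} (x^{(i)})^{4}}_{c_1} \, a^{2}
     - \underbrace{2 \sum_{i} (x^{(i)})^{2} y^{(i)}}_{c_2} \, a
     + \underbrace{\sum_{i} (y^{(i)})^{2}}_{c_3}
\]
```

So L is a parabola in a with a single minimum.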

  14. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)² = c1·a² - c2·a + c3. [Plot: the loss L as a function of the parameter a, with the current value a^(t) and the slope ∂L/∂a at a^(t) marked.]

  15. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)² = c1·a² - c2·a + c3. In order to lower L, how do we want to change a^(t)? [Plot: the loss L as a function of a, with a^(t) and the slope ∂L/∂a at a^(t) marked.]

  16. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)² = c1·a² - c2·a + c3. In order to lower L, how do we want to change a^(t)? a_new = a^(t) - ∂L/∂a|_{a=a^(t)}. Does it always work? [Plot: the loss L as a function of a, with a^(t) and the slope ∂L/∂a at a^(t) marked.]

  17. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)² = c1·a² - c2·a + c3. In order to lower L, how do we want to change a^(t)? a_new = a^(t) - ∂L/∂a|_{a=a^(t)}, or with a learning rate α: a_new = a^(t) - α·∂L/∂a|_{a=a^(t)}. [Plot: the loss L as a function of a, with a^(t) and the slope ∂L/∂a at a^(t) marked.]

  18. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)² = c1·a² - c2·a + c3. In order to lower L, how do we want to change a^(t)? a_new = a^(t) - α·∂L/∂a|_{a=a^(t)} (α: learning rate). In order to learn the optimal value for a, what do we need to do? [Plot: the loss L as a function of a, with a^(t) and the slope ∂L/∂a at a^(t) marked.]

  19. Warm-up: Training (behind the scenes). Training: Measure the difference between ŷ (prediction) and y (ground truth): L = Σ_{i=1}^{n} (y^(i) - a·(x^(i))²)² = c1·a² - c2·a + c3. In order to lower L, how do we want to change a^(t)? a_new = a^(t) - α·∂L/∂a|_{a=a^(t)} (α: learning rate). In order to learn the optimal value for a, what do we need to do? Calculate the gradient ∂L/∂a for the parameter a, and update a with it. [Plot: the loss L as a function of a, with a^(t) and the slope ∂L/∂a at a^(t) marked.]
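A minimal gradient-descent loop for this one-parameter example, with the gradient ∂L/∂a = Σ_i 2·(a·(x^(i))² - y^(i))·(x^(i))² derived by hand. The data, learning rate, and step count are illustrative choices, not from the slides.

```python
# Gradient descent on L(a) = sum_i (y_i - a*x_i**2)**2.
# dL/da = sum_i 2*(a*x_i**2 - y_i)*x_i**2
xs = [1.0, 2.0, 3.0]
ys = [2.0, 8.0, 18.0]      # generated from y = 2*x**2, so a should approach 2
a = 0.0                    # initial guess for the parameter
lr = 0.005                 # learning rate (alpha)

for step in range(100):
    grad = sum(2 * (a * x ** 2 - y) * x ** 2 for x, y in zip(xs, ys))
    a = a - lr * grad      # a_new = a - alpha * dL/da
print(a)                   # converges to roughly 2.0
```

If the learning rate is too large, the update overshoots and the loss can grow instead of shrink, which is the "Does it always work?" caveat a few slides back.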

  20. Warm-up: Training (behind the scenes). A milestone reached! We just learned what training looks like.

  21. Warm-up: Training (behind the scenes). Find me: citris.ucmerced.edu/tsg

  22. Warm-up: Training (behind the scenes). Imagine there is a relationship between x and y in nature, but we don't know it (the relationship is invisible to us): y = sin(x). What is available to us is a collection of data: {(x^(i), y^(i))}, i = 1, 2, ..., n. We do not know the real relationship between x and y. What do we do now?

  23. Warm-up: Training (behind the scenes). Imagine there is a relationship between x and y in nature, but we don't know it (the relationship is invisible to us): y = sin(x). What is available to us is a collection of data: {(x^(i), y^(i))}, i = 1, 2, ..., n. We do not know the real relationship between x and y. What do we do now? Design a model to describe the relationship between x and y. It does not have to be perfect, but we design it to the best of our knowledge.

  24. Warm-up: Training (behind the scenes). Imagine there is a relationship between x and y in nature, but we don't know it (the relationship is invisible to us): y = sin(x). What is available to us is a collection of data: {(x^(i), y^(i))}, i = 1, 2, ..., n. We do not know the real relationship between x and y. Our design: y = a + b·x + c·x² + d·x³. Parameters: a, b, c, d.

  25. Warm-up: Training (behind the scenes). Imagine there is a relationship between x and y in nature, but we don't know it (the relationship is invisible to us): y = sin(x). What is available to us is a collection of data: {(x^(i), y^(i))}, i = 1, 2, ..., n. We do not know the real relationship between x and y. Our design: y = a + b·x + c·x² + d·x³. Parameters: a, b, c, d. Training: ...

  26. Warm-up: Training (behind the scenes). Imagine there is a relationship between x and y in nature, but we don't know it (the relationship is invisible to us): y = sin(x). What is available to us is a collection of data: {(x^(i), y^(i))}, i = 1, 2, ..., n. We do not know the real relationship between x and y. Our design: y = a + b·x + c·x² + d·x³. Parameters: a, b, c, d. Training: Give the parameters random values first, then calculate their individual gradients ∂L/∂a, ∂L/∂b, ∂L/∂c, ∂L/∂d to update each one of them.
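A NumPy sketch of exactly this recipe: sample data from the hidden relationship y = sin(x), start a, b, c, d at random values, and repeatedly update them with hand-derived gradients of a mean-squared-error loss. The sample size, learning rate, and step count are arbitrary choices made for the sketch.

```python
import numpy as np

# Data collected from the "invisible" relationship y = sin(x).
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=200)
y = np.sin(x)

# Our design: y_hat = a + b*x + c*x**2 + d*x**3, parameters a, b, c, d.
a, b, c, d = rng.normal(size=4)   # random starting values

lr = 3e-3
for step in range(5000):
    y_hat = a + b * x + c * x ** 2 + d * x ** 3
    err = y_hat - y                      # prediction minus ground truth
    # Gradients of L = mean(err**2) with respect to each parameter.
    grad_a = (2 * err).mean()
    grad_b = (2 * err * x).mean()
    grad_c = (2 * err * x ** 2).mean()
    grad_d = (2 * err * x ** 3).mean()
    a -= lr * grad_a
    b -= lr * grad_b
    c -= lr * grad_c
    d -= lr * grad_d

# a and c end up near 0, b positive and d negative: the cubic that best
# matches sin(x) on this range in the least-squares sense.
print(a, b, c, d)
```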

  27. Warm-up: Training (behind the scenes). Recap: Data: input x, output y; x and y can contain multiple entries, e.g., x = {x1, x2, x3}, y = {y1, y2}. Model: we design its structure; it has some parameters. Training: calculate the gradient of the loss with respect to each parameter, and use the gradient information to update (learn) the values of the parameters.

  28. Warm-up: Training (behind the scenes). Another milestone reached!

  29. 3. Start TensorFlow. What are the potential hurdles in the raw, by-hand implementation when we try to develop a large model?

  30. 3. Start TensorFlow. What are the potential hurdles in the raw, by-hand implementation when we try to develop a large model? 1. Tedious, or almost impractical, to hand-calculate each parameter's gradient if we have thousands of parameters. 2. Not flexible when we try to do adaptive updates. 3. Not portable or maintainable for complicated models. 4. Hard to keep track of a large number of parameters and intermediate states. 5. Not fast enough, and unable to take advantage of hardware acceleration. 6. Etc.

  31. 3. Start TensorFlow. Data → Model → Loss → Optimizer → Training.
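One way to arrange the same cubic-fit example around the five pieces on this slide, as a sketch: Data, Model, Loss, Optimizer, Training, with tf.GradientTape letting TensorFlow compute the gradients automatically. The SGD optimizer, learning rate, and step count are illustrative choices.

```python
import numpy as np
import tensorflow as tf

# Data: samples from the hidden relationship y = sin(x).
x = np.random.uniform(-np.pi, np.pi, size=(200, 1)).astype("float32")
y = np.sin(x)

# Model: y_hat = a + b*x + c*x**2 + d*x**3, with trainable parameters.
a = tf.Variable(0.0)
b = tf.Variable(0.0)
c = tf.Variable(0.0)
d = tf.Variable(0.0)

def model(x):
    return a + b * x + c * x ** 2 + d * x ** 3

# Optimizer: plain gradient descent with a chosen learning rate.
optimizer = tf.keras.optimizers.SGD(learning_rate=3e-3)

# Training: compute the loss, let the tape work out the gradients, apply updates.
for step in range(5000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))   # Loss: mean squared error
    grads = tape.gradient(loss, [a, b, c, d])
    optimizer.apply_gradients(zip(grads, [a, b, c, d]))

print([v.numpy() for v in (a, b, c, d)])
```

The same loop works unchanged if the model grows to thousands of parameters, which addresses the first hurdle on the previous slide: no gradient has to be derived by hand.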

  32. 4. Useful Resources. Tips: Bootstrapping: develop your code with an Integrated Development Environment (IDE). Online class: Andrew Ng: Deep Learning (free). Google.com, stackoverflow.com. Be patient: write your question with patience and describe it well; debug one small issue at a time.
