Practical Machine Learning

Presentation Transcript


  1. Practical Machine Learning: Neural Network Structure. Sven Mayer

  2. Neural Network Structure: TensorFlow Syntax for a 2-Layer Model

     model = tf.keras.Sequential()
     # Input layer: 400 input features
     x = tf.keras.layers.InputLayer((400,), name="InputLayer")
     model.add(x)
     # First hidden layer: 14 neurons, ReLU activation
     x = tf.keras.layers.Dense(14, name="HiddenLayer1", activation="relu")
     model.add(x)
     # Second hidden layer: 8 neurons, ReLU activation
     model.add(tf.keras.layers.Dense(8, name="HiddenLayer2", activation="relu"))
     # Output layer: 2 neurons, softmax activation
     model.add(tf.keras.layers.Dense(2, name="OutputLayer", activation="softmax"))
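     As a quick check (a usage sketch that is not on the original slide, assuming TensorFlow is imported as tf and the model above has been built), Keras can print the resulting structure:

     # Lists every layer, its output shape, and its parameter count;
     # the total should match the hand calculation on slide 16 (5,752 trainable parameters).
     model.summary()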

  3. Layers to Pick from tf.keras.layers.*: AbstractRNNCell, Activation, ActivityRegularization, Add, AdditiveAttention, AlphaDropout, Attention, Average, AveragePooling1D, AveragePooling2D, AveragePooling3D, AvgPool1D, AvgPool2D, AvgPool3D, BatchNormalization, Bidirectional, Concatenate, Conv1D, Conv1DTranspose, Conv2D, Conv2DTranspose, Conv3D, Conv3DTranspose, ConvLSTM2D, Convolution1D, Convolution1DTranspose, Convolution2D, Convolution2DTranspose, Convolution3D, Convolution3DTranspose, Cropping1D, Cropping2D, Cropping3D, Dense, DenseFeatures, DepthwiseConv2D, Dot, Dropout, ELU, Embedding, Flatten, GRU, GRUCell, GaussianDropout, GaussianNoise, GlobalAveragePooling1D, GlobalAveragePooling2D, GlobalAveragePooling3D, GlobalAvgPool1D, GlobalAvgPool2D, GlobalAvgPool3D, GlobalMaxPool1D, GlobalMaxPool2D, GlobalMaxPool3D, GlobalMaxPooling1D, GlobalMaxPooling2D, GlobalMaxPooling3D, InputLayer, InputSpec, LSTM, LSTMCell, Lambda, Layer, LayerNormalization, LeakyReLU, LocallyConnected1D, LocallyConnected2D, Masking, MaxPool1D, MaxPool2D, MaxPool3D, MaxPooling1D, MaxPooling2D, MaxPooling3D, Maximum, Minimum, MultiHeadAttention, Multiply, PReLU, Permute, RNN, ReLU, RepeatVector, Reshape, SeparableConv1D, SeparableConv2D, SeparableConvolution1D, SeparableConvolution2D, SimpleRNN, SimpleRNNCell, Softmax, SpatialDropout1D, SpatialDropout2D, SpatialDropout3D, StackedRNNCells, Subtract, ThresholdedReLU, TimeDistributed, UpSampling1D, UpSampling2D, UpSampling3D, Wrapper, ZeroPadding1D, ZeroPadding2D, ZeroPadding3D

  4. Neural Network Structure. [Diagram: the model from slide 2 as a layer graph: InputLayer (InputLayer, 400) → HiddenLayer1 (Dense, 14) → HiddenLayer2 (Dense, 8) → OutputLayer (Dense, 2).]

  5. Neural Network Structure. [Diagram: the same layer graph as slide 4, with one neuron highlighted as a single Perceptron.]

  6. Neural Networks: What can be trained? Frank Rosenblatt. "The perceptron: a probabilistic model for information storage and organization in the brain." Psychological Review 65, no. 6 (1958): 386. DOI: https://psycnet.apa.org/doi/10.1037/h0042519

  7. What is a Perceptron? Single-Layer Perceptron. [Diagram: inputs x_0 … x_n are multiplied by weights w_0 … w_n and summed together with a bias b; the sum z is passed through an activation function σ to give the output σ(z).]

  8. What can be trained? Single-Layer Perceptron. The model is a function f that maps an input x to an output y: f(x) = y.

  9. What can be trained? Single-Layer Perceptron. With input x = (x_0, …, x_n), the model becomes f(x) = σ(w·x + b) = y.

  10. What can be trained? Single-Layer Perceptron. f(x) = σ(w·x + b) = y, where x is the input, w are the weights, b is the bias, σ is the activation function, and y is the output of the model.

  11. What can be trained? Single-Layer Perceptron. Written out per input, the model is y = σ( Σ_{i=0..n} w_i·x_i + b ), with weights w_i, bias b, and activation function σ.
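     To make the formula concrete, here is a minimal NumPy sketch (not part of the original slides; the input, weight, and bias values are arbitrary illustrations) of a single perceptron computing y = σ(Σ w_i·x_i + b), using ReLU as the activation σ (introduced on the next slide):

     import numpy as np

     def relu(z):
         # Rectified Linear Unit: max(0, z)
         return np.maximum(0.0, z)

     def perceptron(x, w, b):
         # Weighted sum of the inputs plus the bias, passed through the activation
         return relu(np.dot(w, x) + b)

     x = np.array([0.5, -1.0, 2.0, 0.0])   # 4 example inputs
     w = np.array([0.1, 0.4, -0.2, 0.3])   # one weight per input
     b = 0.05                              # single bias
     y = perceptron(x, w, b)               # single scalar output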

  12. Activation Function: Rectified Linear Unit (ReLU). σ(z) = max(0, z), i.e. σ(z) = z if z > 0 and 0 otherwise; applied to a perceptron, the output is w·x + b if w·x + b > 0 and 0 otherwise.
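     For example (an illustrative one-liner, not from the slide; NumPy imported as np in the sketch above), ReLU applied element-wise clips every negative pre-activation to zero:

     np.maximum(0.0, np.array([-2.0, -0.5, 0.0, 0.5, 2.0]))
     # -> array([0. , 0. , 0. , 0.5, 2. ])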

  13. Neural Network Structure. [Diagram: the layer graph InputLayer (400) → HiddenLayer1 (14) → HiddenLayer2 (8) → OutputLayer (2), with one neuron expanded into its perceptron form: inputs x_0 … x_n, weights w_0 … w_n, weighted sum Σ plus bias b.]

  14. Neural Network Structure. InputLayer, HiddenLayer1, HiddenLayer2, OutputLayer (400, 14, 8, 2 neurons): each neuron computes its own σ(w·x + b) with an independent w and b.

  15. Combining Perceptrons: Why is it all about fast matrix multiplication? For a layer with 8 inputs and 2 output neurons, each output is still a perceptron: σ(w_0·x + b_0) = y_0 and σ(w_1·x + b_1) = y_1. Stacking the two weight vectors into a 2×8 matrix W and the biases into a vector b computes both outputs at once: σ(W·x + b) = y. In general, a Dense layer is one matrix-vector product plus a bias vector, so the whole layer reduces to fast matrix multiplication.
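     A minimal NumPy sketch of the same idea (not part of the slide; shapes chosen to match its 8-input, 2-output example, values random, ReLU standing in for σ): computing the two neurons separately and computing them with one matrix-vector product gives the same result.

     import numpy as np

     rng = np.random.default_rng(0)
     x = rng.normal(size=8)         # 8 inputs to the layer
     W = rng.normal(size=(2, 8))    # one row of weights per output neuron
     b = rng.normal(size=2)         # one bias per output neuron

     # Per-neuron view: y_i = sigma(w_i . x + b_i)
     y0 = np.maximum(0.0, W[0] @ x + b[0])
     y1 = np.maximum(0.0, W[1] @ x + b[1])

     # Matrix view: both neurons in a single matrix-vector product
     y = np.maximum(0.0, W @ x + b)

     assert np.allclose(y, np.array([y0, y1]))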

  16. Parameters. Layer sizes 400, 14, 8, and 2 => weights: 400 * 14 + 14 * 8 + 8 * 2 = 5,728; biases: 14 + 8 + 2 = 24; trainable parameters: 5,728 + 24 = 5,752. Trainable parameter counts can grow quickly: layer sizes 400, 100, 40, 2 => 44,222 parameters (one model from the walkthrough).
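     The counting rule can be written as a small helper (an illustrative sketch, not from the slide): weights are products of consecutive layer sizes, biases are one per neuron after the input layer.

     def count_parameters(layer_sizes):
         # Weights: one per connection between consecutive layers
         weights = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
         # Biases: one per neuron in every layer after the input
         biases = sum(layer_sizes[1:])
         return weights + biases

     print(count_parameters([400, 14, 8, 2]))    # 5752
     print(count_parameters([400, 100, 40, 2]))  # 44222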

  17. Conclusion: Neural Network Structure
      • Perceptron
      • Weights
      • Biases
      • Combining multiple Perceptrons
      • Trainable Parameters
      • Activation Function (e.g. ReLU)

  18. License. This file is licensed under the Creative Commons Attribution-ShareAlike 4.0 (CC BY-SA) license: https://creativecommons.org/licenses/by-sa/4.0. Attribution: Sven Mayer
