ELECTRA: Pre-Training Text Encoders as Discriminators Rather Than Generators
Efficiently Learning an Encoder that Classifies Token Replacements Accurately: the ELECTRA method replaces some input tokens with samples from a small generator network instead of masking them. The key idea is to train a text encoder as a discriminator that distinguishes original input tokens from these replacements, which yields better downstream task performance than masked language modeling for the same compute.
Presentation Transcript
Paper Reading: ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators (ICLR 2020) 1801210827
Outline Motivation Introduction Method Experiments Conclusion
Motivation
More compute-efficient pre-training: learn from every input position rather than only the small masked subset
Better downstream performance for the same amount of compute
Introduction
What is ELECTRA? Efficiently Learning an Encoder that Classifies Token Replacements Accurately.
Main idea: replaced token detection, trained with a GAN-like two-network setup:
replace some input tokens with samples from a small generator instead of masking them
pre-train a discriminator to predict, for every token, whether it is an original or a replacement
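A minimal toy sketch of replaced token detection (illustrative Python, not the authors' code): mask a few positions, substitute pretend generator samples, and build the per-token original/replacement labels the discriminator is trained on. The sentence, the chosen positions, and the sampled words are made up for illustration.

```python
tokens = ["the", "chef", "cooked", "the", "meal"]
mask_positions = [1, 4]                      # positions selected for masking

# Pretend generator outputs; a real generator is a small masked language model
# and may reproduce the original token, as it does at position 1 here.
generator_samples = {1: "chef", 4: "soup"}

corrupted = list(tokens)
for pos in mask_positions:
    corrupted[pos] = generator_samples[pos]

# Discriminator targets: 1 = original, 0 = replaced. Only tokens that actually
# differ from the input count as replacements, even at masked positions.
labels = [int(c == t) for c, t in zip(corrupted, tokens)]

print(corrupted)  # ['the', 'chef', 'cooked', 'the', 'soup']
print(labels)     # [1, 1, 1, 1, 0]
```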
Method
Two Transformer encoders are trained: a generator G and a discriminator D. For an input token sequence $x = [x_1, \dots, x_n]$:
Generator (a small masked language model): at position $t$ it outputs a softmax distribution over the vocabulary,
$p_G(x_t \mid x) = \exp\!\big(e(x_t)^\top h_G(x)_t\big) \,/\, \sum_{x'} \exp\!\big(e(x')^\top h_G(x)_t\big)$,
where $e(\cdot)$ are token embeddings and $h_G(x)_t$ is the generator's contextual representation at position $t$.
Discriminator: $D(x, t) = \mathrm{sigmoid}\big(w^\top h_D(x)_t\big)$ predicts whether token $x_t$ is original or replaced.
Pre-training: a random set of positions $m$ is replaced with the mask token, $x^{\text{masked}} = \mathrm{REPLACE}(x, m, \texttt{[MASK]})$; the generator fills the masked positions with samples $\hat{x}_i \sim p_G(x_i \mid x^{\text{masked}})$, giving the corrupted input $x^{\text{corrupt}} = \mathrm{REPLACE}(x, m, \hat{x})$. The generator is trained with the masked-LM loss
$\mathcal{L}_{\text{MLM}}(x, \theta_G) = \mathbb{E}\Big[\textstyle\sum_{i \in m} -\log p_G(x_i \mid x^{\text{masked}})\Big]$
and the discriminator with a per-token binary classification loss over all positions,
$\mathcal{L}_{\text{Disc}}(x, \theta_D) = \mathbb{E}\Big[\textstyle\sum_{t=1}^{n} -\mathbb{1}(x^{\text{corrupt}}_t = x_t)\log D(x^{\text{corrupt}}, t) - \mathbb{1}(x^{\text{corrupt}}_t \neq x_t)\log\big(1 - D(x^{\text{corrupt}}, t)\big)\Big]$.
The combined objective over the corpus $\mathcal{X}$ is $\min_{\theta_G, \theta_D} \sum_{x \in \mathcal{X}} \mathcal{L}_{\text{MLM}}(x, \theta_G) + \lambda \, \mathcal{L}_{\text{Disc}}(x, \theta_D)$.
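A hedged PyTorch sketch of the combined objective above, assuming generator and discriminator logits have already been computed by the two networks; the function and argument names are illustrative, not the authors' implementation. The paper sets $\lambda = 50$.

```python
import torch
import torch.nn.functional as F

def electra_loss(generator_logits,      # (batch, seq_len, vocab) from generator
                 discriminator_logits,  # (batch, seq_len) from discriminator
                 original_ids,          # (batch, seq_len) uncorrupted token ids
                 corrupted_ids,         # (batch, seq_len) ids after replacement
                 masked_positions,      # (batch, seq_len) bool, True at [MASK] positions
                 disc_weight=50.0):     # lambda in the combined objective
    # L_MLM: generator's masked-LM loss, computed only over masked positions.
    vocab = generator_logits.size(-1)
    mlm_loss = F.cross_entropy(
        generator_logits[masked_positions].view(-1, vocab),
        original_ids[masked_positions].view(-1))

    # L_Disc: binary cross-entropy over all positions; target is 1 where the
    # corrupted token matches the original (D predicts "original"), 0 otherwise.
    is_original = (corrupted_ids == original_ids).float()
    disc_loss = F.binary_cross_entropy_with_logits(
        discriminator_logits, is_original)

    # Combined objective L_MLM + lambda * L_Disc. The discriminator loss does not
    # backpropagate into the generator because sampling replacements is discrete.
    return mlm_loss + disc_weight * disc_loss
```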
Conclusion The key idea is to train a text encoder to distinguish input tokens from high-quality negative samples produced by a small generator network. Compared to masked language modeling, ELECTRA is more compute-efficient and achieves better performance on downstream tasks.