Dynamic Neural Network for Incremental Learning: Solution and Techniques


Addressing the challenge of incremental learning, this presentation describes a Dynamic Neural Network that can be trained without access to previous data. The approach targets fast training, reduced storage and memory costs, and strong performance without forgetting past knowledge. It surveys mainstream techniques such as regularization, rehearsal, and per-task networks, and combines dynamic network expansion with knowledge distillation to handle data across similar and dissimilar domains and tasks.





Presentation Transcript


  1. Dynamic Neural Network for Incremental Learning. Hikvision Research Institute: Liang Ma, Jianwen Wu, Qiaoyong Zhong, Di Xie, Shiliang Pu

  2. Content
     - What we ask for incremental learning
     - Mainstream approaches in the community
     - Our solution: Dynamic Neural Network

  3. What we ask for incremental learning
     Settings (diagram on slide): data-incremental, class-incremental, and data&class-incremental learning.
     GOAL:
     - Training without using previous data
     - Training fast
     - Less storage and memory costs
     - Good performance without forgetting

  4. Mainstream approaches in the community
     - Regularization: weight regularization (EWC/MAS/SI); feature regularization (KD/SLNI). A minimal example is sketched after this list.
     - Network: different networks for different tasks (PNN)
     - Rehearsal: coreset selection and replay (iCaRL/GEM); generative replay
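As a concrete illustration of the weight-regularization family, the sketch below implements a minimal EWC-style quadratic penalty in PyTorch. The names `fisher` (diagonal Fisher information) and `old_params` (weights saved after the previous task) are assumptions for illustration, not code from the presentation.

```python
# Minimal sketch of an EWC-style weight-regularization penalty (hypothetical
# names, not the presenters' code). `fisher` holds the diagonal Fisher
# information and `old_params` the weights saved after the previous task.
import torch

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Quadratic penalty keeping weights close to their old-task values."""
    loss = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in fisher:
            loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * loss

# Usage: total_loss = task_loss + ewc_penalty(model, fisher, old_params)
```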

  5. Our solution: Dynamic Neural Network
     - Dynamic network expansion for data across dissimilar domains
     - Knowledge distillation for data in a similar domain
     - A combined method of network expansion and feature regularization
     Features: no need for previous data; dynamic network expansion to alleviate the domain gap.

  6. Dynamic network expansion
     - Freeze the shared conv layers.
     - Expand the network when the domain gap is severe (i.e., when accuracy degrades badly).
     - Tricks for generalization ability: initialize the shared convs from an ImageNet pre-trained model; for the heads, use more data augmentation and more batches to train head1.
     A sketch of this scheme follows.
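A minimal PyTorch sketch of the expansion scheme, assuming a ResNet-18 backbone with ImageNet weights; the `DynamicNet` class, its `expand` method, and all hyper-parameters are illustrative assumptions, not the presenters' implementation.

```python
# Illustrative sketch of dynamic network expansion (names are hypothetical).
# Shared ImageNet-pretrained conv layers are frozen; a new classification
# head is added per task (under a severe domain gap, a whole new branch
# could be added the same way).
import torch.nn as nn
import torchvision.models as models

class DynamicNet(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.shared = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.shared.parameters():  # freeze shared convs
            p.requires_grad = False
        self.heads = nn.ModuleList()
        self.feat_dim = feat_dim

    def expand(self, num_classes):
        """Add a new classification head for an incoming task."""
        self.heads.append(nn.Linear(self.feat_dim, num_classes))

    def forward(self, x, task_id):
        feat = self.shared(x).flatten(1)
        return self.heads[task_id](feat)
```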

  7. Knowledge distillation
     - Replace BatchNorm with GroupNorm.
     - Mine known instances in the new task and distill on the best head.
     [Diagram: new-task data (known+unknown) and the mined known instances pass through the shared convs; the adapted head1' produces y_known for the Cls loss, while the frozen reference head1 produces y_ref_known for the KD loss.]
     A sketch of the combined loss follows.
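The combined loss on the mined known instances can be sketched as below; the temperature `T`, weight `alpha`, and the helper's name are assumed hyper-parameters and names not given on the slide.

```python
# Sketch of the combined classification + distillation loss (hypothetical
# helper; temperature T and weight alpha are assumed hyper-parameters).
import torch
import torch.nn.functional as F

def cls_kd_loss(new_logits, ref_logits, labels, T=2.0, alpha=0.5):
    """Cross-entropy on mined known instances plus KD against the old head.

    `ref_logits` (y_ref_known) come from the frozen reference head1;
    `new_logits` (y_known) come from the adapted head1'.
    """
    cls = F.cross_entropy(new_logits, labels)
    kd = F.kl_div(
        F.log_softmax(new_logits / T, dim=1),
        F.softmax(ref_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * cls + (1.0 - alpha) * kd
```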

  8. Experimental comparison
     [Figure: accuracy of plain fine-tuning (Finetune) vs. the proposed DynamicNN.]

  9. Experiment results in the 1st round (performance on all tasks)

     Method                             Accuracy (%)
     Finetune                           93.84
     DynamicNN (no expand)              94.50
     DynamicNN (expand@1)               95.75
     DynamicNN (no expand) + LR trick   96.01
     DynamicNN (expand@1) + LR trick    96.83

     LR trick: incremental learning with a small learning rate (sketched below).
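A minimal sketch of the LR trick, assuming the incremental round simply reuses the base optimizer with a much smaller learning rate; the base rate, the 10x reduction, and the stand-in model are assumptions, as the slide gives no values.

```python
# Minimal sketch of the LR trick: fine-tune each incremental round with a
# much smaller learning rate than the base round (all values assumed).
import torch

model = torch.nn.Linear(512, 10)  # stand-in for the trainable incremental head
base_lr = 0.01                    # assumed base-round learning rate
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=base_lr * 0.1,  # small LR for the incremental round
    momentum=0.9,
)
```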
