Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-High Resolution Images
This research project, presented at CVPR 2019 by Wuyang Chen, Ziyu Jiang, Zhangyang Wang, Kexin Cui, and Xiaoning Qian, focuses on memory-efficient segmentation of ultra-high resolution images using Collaborative Global-Local Networks. The approach couples two branches, one operating on a downsampled global view of the image and one on full-resolution local patches, through deep feature map sharing, and combines them with an aggregation step (concatenation followed by convolution) and a regularization term between the branches to improve segmentation accuracy. Experimental results compare the approach with state-of-the-art methods, reporting improvements in mean Intersection over Union (mIoU) and memory usage.
Presentation Transcript
Collaborative Global-Local Networks for Memory-Efficient Segmentation of Ultra-High Resolution Images. CVPR 2019 Oral. Wuyang Chen, Ziyu Jiang, Zhangyang Wang, Kexin Cui, and Xiaoning Qian, Texas A&M University.
Ultra-High Resolution Images: examples shown include an aerial image and a skin pathology image.
Regularization and Aggregation. Aggregation: concatenation of the two branches' feature maps followed by a 3x3 convolution. Regularization: an ℓ2 norm between the local and global branches.
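To make the slide's aggregation step concrete, here is a minimal PyTorch-style sketch, assuming both branches produce same-resolution feature maps; the module name, channel sizes, regularization weight, and the exact form of the ℓ2 term are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of "concatenation + 3x3 conv" aggregation plus an
# l2-norm regularization between branch feature maps (names and weights assumed).
import torch
import torch.nn as nn

class BranchAggregation(nn.Module):
    """Fuse same-resolution feature maps from the global and local branches."""
    def __init__(self, global_ch: int, local_ch: int, out_ch: int):
        super().__init__()
        # Concatenate along the channel axis, then mix with a 3x3 convolution.
        self.fuse = nn.Conv2d(global_ch + local_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, f_global: torch.Tensor, f_local: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([f_global, f_local], dim=1))

def l2_branch_regularizer(f_global: torch.Tensor, f_local: torch.Tensor,
                          weight: float = 0.15) -> torch.Tensor:
    """l2 penalty discouraging the two branches' feature maps from diverging."""
    return weight * (f_global - f_local).pow(2).mean()

# Usage with dummy feature maps of matching spatial size:
agg = BranchAggregation(global_ch=64, local_ch=64, out_ch=64)
f_g, f_l = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
fused = agg(f_g, f_l)                       # (1, 64, 32, 32)
reg_loss = l2_branch_regularizer(f_g, f_l)  # scalar added to the training loss
```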
Experiment: Ablation Study
- AGG: aggregation of the global (G) and local (L) branches.
- Fmreg: feature map regularization (ℓ2 norm).
- shallow: only a single layer's feature maps are shared (which layer is not specified).
- deep: all layers' feature maps are shared (a sketch of the sharing step follows this list).
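The sketch below illustrates what one feature-sharing step could look like: the global branch's downsampled feature map is cropped to the region covered by the current local patch, resized to the local feature map's resolution, and concatenated channel-wise. "Shallow" sharing would apply this at a single layer, "deep" sharing at every layer. The function name, coordinate convention, and bilinear resize are assumptions for illustration, not the paper's code.

```python
# Illustrative global-to-local feature map sharing (assumed details, see above).
import torch
import torch.nn.functional as F

def share_global_to_local(f_global: torch.Tensor, f_local: torch.Tensor,
                          patch_box: tuple) -> torch.Tensor:
    """Crop the global feature map to patch_box=(y0, y1, x0, x1), given in
    global-feature coordinates, match the local resolution, and concatenate."""
    y0, y1, x0, x1 = patch_box
    cropped = f_global[:, :, y0:y1, x0:x1]
    resized = F.interpolate(cropped, size=f_local.shape[-2:],
                            mode="bilinear", align_corners=False)
    return torch.cat([f_local, resized], dim=1)

# Example: one local patch covering the top-left quarter of the global view.
f_g = torch.randn(1, 64, 32, 32)   # global-branch features (whole image, downsampled)
f_l = torch.randn(1, 64, 64, 64)   # local-branch features (one high-res patch)
shared = share_global_to_local(f_g, f_l, patch_box=(0, 16, 0, 16))
print(shared.shape)                # torch.Size([1, 128, 64, 64])
```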
Experiment: Comparison with State-of-the-Art Methods
Conclusions: The evaluation reports only mIoU and memory usage; no inference-speed comparison is given. A noted drawback of crop-based processing is that objects falling in the middle of partition boundaries are split across patches.