Jia-Bin Huang awarded NSF grant to advance representation learning and adaptation with free unlabeled images and videos
Jia-Bin Huang, an assistant professor of electrical and computer engineering and a DAC faculty member, has received a grant from the National Science Foundation’s Division of Information and Intelligent Systems to develop algorithms to capitalize on the massive amount of free unlabeled images and videos readily available on the internet for representation learning and adaptation.
This approach contrasts with recent successes in visual recognition, which rely on training deep neural networks (DNNs) on large-scale annotated image classification datasets in a fully supervised fashion.
“While the learned representations encoded in the parameters of DNNs have shown remarkable transferability to a wide range of tasks, depending on supervised learning substantially limits the scalability to new problem domains because manual labeling is often expensive and can require specific expertise,” Huang said.
“Our aim in developing new methods is to significantly alleviate the high cost and scarcity of manual annotations for constructing large-scale datasets,” said Huang.
The study, entitled “Representation Learning and Adaptation using Unlabeled Videos,” begins this month and is expected to run through May 31, 2020.
The research team, led by Huang, will simultaneously leverage spatial and temporal contexts in videos, taking advantage of the rich supervisory signals that appearance variations and temporal coherence provide for representation learning. Compared to the supervised counterpart, which requires millions of manually labeled images, learning from unlabeled videos is inexpensive and virtually unlimited in scope.
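To make the idea of temporal coherence as a free supervisory signal concrete, the sketch below shows one common self-supervised pattern: an encoder is trained so that features of nearby frames from the same clip end up closer together than features of frames from other videos. The encoder architecture, triplet loss, margin, and tensor shapes here are illustrative assumptions, not the project's actual method.

```python
# Illustrative sketch only: one standard way to turn temporal coherence in
# unlabeled video into a training signal. Everything here (encoder, margin,
# shapes) is a placeholder, not the funded project's pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FrameEncoder(nn.Module):
    """Small CNN mapping video frames to normalized embedding vectors."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)


def temporal_coherence_loss(anchor, positive, negative, margin=0.5):
    """Pull temporally adjacent frames together, push frames from other
    videos apart (a standard triplet formulation)."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_pos - d_neg + margin).mean()


# Toy usage with random tensors standing in for sampled frames.
encoder = FrameEncoder()
frame_t = torch.randn(8, 3, 64, 64)      # frame at time t
frame_t1 = torch.randn(8, 3, 64, 64)     # nearby frame from the same clip
frame_other = torch.randn(8, 3, 64, 64)  # frame from a different video
loss = temporal_coherence_loss(encoder(frame_t), encoder(frame_t1),
                               encoder(frame_other))
loss.backward()
```

No human annotation is needed to form these training triplets; the video's own frame ordering supplies the positives and negatives, which is what makes this kind of supervision essentially free.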
The project also seeks to adapt the learned representation to handle appearance variations in new domains with minimal manual supervision. The effectiveness of this representation adaptation will be validated in the context of instance-level video object segmentation. Both graduate and undergraduate students will be involved in the project, and research materials will be integrated into curriculum development for courses on deep learning for machine perception.
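For a simplified picture of what adapting a learned representation with minimal supervision can look like, one common pattern is to freeze the pretrained encoder and fine-tune only a lightweight segmentation head on a handful of annotated frames from the target video. The module names, shapes, and training loop below are hypothetical stand-ins, not the project's actual approach.

```python
# Purely illustrative sketch of representation adaptation with minimal
# supervision: keep the pretrained encoder fixed and fine-tune only a small
# segmentation head on a few annotated frames.
import torch
import torch.nn as nn

encoder = nn.Sequential(                    # stand-in for a pretrained backbone
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)
for p in encoder.parameters():              # freeze the learned representation
    p.requires_grad = False

seg_head = nn.Conv2d(64, 2, 1)              # tiny head: background vs. object
optimizer = torch.optim.Adam(seg_head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# A couple of annotated frames stand in for the "minimal manual supervision".
frames = torch.randn(2, 3, 64, 64)
masks = torch.randint(0, 2, (2, 64, 64))

for _ in range(50):                         # brief adaptation loop
    logits = seg_head(encoder(frames))
    loss = criterion(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because only the small head is updated, adaptation of this kind needs far less labeled data than training the full network from scratch, which is the practical point of reusing representations learned from unlabeled video.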
Results of the study will be disseminated through scientific publications, open-source software, and dataset releases.

Huang joined Virginia Tech in 2016 as an assistant professor of electrical and computer engineering. His research interests include computer vision, computer graphics, and machine learning, with a focus on visual analysis and synthesis under physically grounded constraints.