Journal of Visual Communication and Image Representation
Special Issue on: Multimodal Cooperation for Multimedia Computing
Full paper deadline: 2018-07-15
Impact factor: 2.479
CCF rank: Class C
CAS JCR ranking:
• Major category: Computer Science - Q3
• Subcategory: Computer Science, Information Systems - Q3
• Subcategory: Computer Science, Software Engineering - Q3
Overview
The proliferation of Web-based sharing services, coupled with the growing prevalence of digital devices, has produced staggering amounts of social multimedia content available online. Apart from the multimedia content itself, in the form of images, audio, or video, social multimedia data are often accompanied by textual descriptors, geographical tags, camera metadata, and even user interactions. Multimodal information cooperation works toward a common or mutual target, rather than treating modalities as competing sources. Optimal multimodal cooperation can either strengthen decision confidence when the information across modalities is consistent, or jointly provide a complete profile of the phenomenon of interest from different aspects when the information is complementary. In turn, it benefits a wide range of applications, such as user profiling and personalized advertisement.
Despite its value, multimodal cooperation also raises challenges in multimedia computing and understanding. For instance, early fusion leads to issues of heterogeneity and the curse of dimensionality, while late fusion suffers from non-informative representations. How to characterize and model the relationships among different modalities remains an open problem, one that has recently attracted considerable research attention from multiple disciplines, including multimedia computing, machine learning, computer vision, and information retrieval.
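The early/late fusion trade-off above can be illustrated with a minimal sketch. This is a toy example under assumed dimensionalities (a 128-d visual descriptor and a 300-d textual embedding are hypothetical choices, not prescribed by this call), not an implementation of any particular method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features for one sample from two modalities (e.g., image and text),
# with different dimensionalities to illustrate heterogeneity.
image_feat = rng.normal(size=128)  # hypothetical visual descriptor
text_feat = rng.normal(size=300)   # hypothetical textual embedding

# Early fusion: concatenate raw features before classification.
# The joint vector grows with every added modality (curse of
# dimensionality) and mixes features with incompatible scales and
# statistics (heterogeneity).
early_fused = np.concatenate([image_feat, text_feat])  # shape (428,)

def modality_score(feat):
    # Stand-in for a per-modality classifier's confidence in (0, 1).
    return 1.0 / (1.0 + np.exp(-feat.mean()))

# Late fusion: each modality is scored independently, then only the
# per-modality decisions are combined. Cross-modal correlations are
# lost, since nothing but scalar scores survives fusion
# (the "non-informative representation" problem).
late_fused = np.mean([modality_score(image_feat),
                      modality_score(text_feat)])
```

The sketch makes the trade-off concrete: early fusion preserves all feature information but inflates dimensionality, while late fusion stays compact but discards everything except final scores.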
We see a timely opportunity for organizing a special issue to bring together active researchers to share recent progress in this cutting-edge area. The rationales to solicit original contributions are two-fold: 1) showcasing new theories and new application scenarios on multimodal cooperation for multimedia understanding, and 2) surveying the recent advances in this area.
The list of possible topics includes, but is not limited to:
o Multimodal cooperation theories and applications
o Human-computer interaction with multimodal data
o Security problems in multimodal multimedia data
o Multimodal cooperation in healthcare applications
o Multimodal cooperation in smart grid applications
o Modality-wise missing data completion
o Deep models for multimodal aggregation
o Multitask learning in multimodal settings
o Multimedia question answering
o Multimodal feature learning and fusion
o Multimodal concept detection, object recognition, and segmentation
o Novel machine learning for multimodal analysis
o Multimodal data organization, indexing, and retrieval
o Common space learning for multimodal data
o Multimodal approaches to detecting complex activities
o Multimodal approaches to event analysis and modeling
o Temporal or structural modeling for multimodal data
o Scalable processing and scalability issues in multimedia multimodal analysis
o Deep learning for cross-media analysis, knowledge transfer and information sharing