Special Issue on New Developments on Randomized Algorithms for Neural Networks
Abstract deadline:
Full paper deadline: 2018-05-30
Impact factor: 5.91
Journal difficulty:
CCF rank: B
CAS JCR ranking:
• Major category: Computer Science - Q1
• Subcategory: Computer Science, Information Systems - Q1
Overview
Randomized learner models, as a powerful tool for large-scale data analytics, have received considerable attention in the machine learning community in recent years. Alongside the development of deep neural networks and deep learning techniques, randomized algorithms for deep neural networks have become popular because of their practicality in problem solving. From theory to practice, randomized algorithms, as a class of machine learning techniques, have great potential for big data analytics, although they do not yet offer an optimal solution.
Randomized learning techniques for neural networks can be categorized into data-dependent and data-independent algorithms, which lead to different randomized learner models: stochastic configuration networks (SCNs) on the data-dependent side, and random vector functional-link (RVFL) nets and echo state networks (ESNs) on the data-independent side. It is interesting and useful to understand how the random parameters of these learner models affect learning performance. It is also important to examine randomized representation learning in deep models. Fundamental aspects of randomized models and algorithms, such as the universal approximation property, algorithmic convergence, stability, consistency, robustness, and their relationships to generalization, still need to be established to support further theoretical developments and real-world applications. Beyond the theory, it is highly desirable to develop scalable and practical randomized learning algorithms for big data modeling and analytics.
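To make the common thread behind these models concrete, the following is a minimal sketch of a shallow RVFL net: the hidden-layer weights and biases are drawn at random and kept fixed, and only the output weights are trained, here by a regularized least-squares fit. The function names, parameter choices, and toy data are illustrative assumptions, not part of this call.

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, y, n_hidden=100, reg=1e-3, rng=rng):
    """Fit the output weights of an RVFL net with random hidden parameters."""
    d = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))  # random input weights (never trained)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)       # random biases (never trained)
    H = np.tanh(X @ W + b)                          # random hidden features
    D = np.hstack([H, X])                           # direct link: raw inputs appended
    # Regularized least squares solves for the only trained parameters
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([np.tanh(X @ W + b), X])
    return D @ beta

# Toy regression: approximate y = sin(3x) on [-1, 1]
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0])
W, b, beta = rvfl_fit(X, y)
mse = np.mean((rvfl_predict(X, W, b, beta) - y) ** 2)
```

Because the random parameters are fixed, training reduces to a linear problem, which is what makes such models attractive for large-scale data; data-dependent variants such as SCNs differ in that the random parameters are drawn under a supervisory mechanism rather than from a fixed distribution.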
This special issue covers recent developments in randomized models and algorithms for neural networks. Original contributions, real-world applications, and comprehensive surveys are all welcome. Through this special issue, fundamental concepts and existing randomized learning algorithms for neural networks (especially deep models) will be further developed. In addition to disseminating the latest research results on randomized learner models and learning algorithms for neural networks, this special issue is also expected to include industrial applications, deliver new ideas, and identify directions for future studies.
The topics of this special issue include, but are not limited to:
- Universal approximation property of deep RVFL nets, deep SCNs and deep ESNs
- Convolutional randomized learner models for random representation learning
- Algebraic properties of randomized neural networks
- Convergence rate and estimation of error bounds
- Stability, consistency, robustness and generalization
- Random kernel methods and learner models associated with random projection
- Regularization theory, model evaluation and selection criteria
- Fast and robust randomized algorithms for large-scale datasets
- Randomized fuzzy systems and hybrid systems
- Distributed learning, ensemble learning, incremental learning, and semi-supervised learning
- Time-series forecasting and interval estimation
- Applications in control engineering, bioinformatics and biometrics, finance and business, power systems and process industries, image processing and information retrieval, and intelligent software engineering and communication systems