Ailments of the body's organs can be visualized using signals and images of different modalities, such as EEG, ECG, PCG, X-ray, magnetic resonance imaging, computerized tomography, single-photon emission computed tomography, positron emission tomography, fundus and ultrasound images, originating from various body parts, to obtain useful diagnostic information. Hospitals are encountering a massive influx of large multimodal patient data that must be analysed accurately and with an understanding of context. Many machine learning algorithms have been developed to automatically detect the features that characterize the diseases depicted in medical images. However, extracting hand-crafted features from medical images using advanced image or signal processing methods limits the amount of information available to the machine learning algorithm. Furthermore, feature selection is often subjective, and it is not always clear whether two or more features carry the same information. To overcome these problems, deep learning approaches learn such features implicitly from the training data and use them to support diagnosis and prognosis from medical images.
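To make the contrast with hand-crafted feature extraction concrete, the following sketch (pure Python, no external libraries) shows the 2-D convolution operation at the heart of a CNN. The filter here is a fixed, hand-chosen vertical-edge detector used only for illustration; in a trained network, such filter weights are learned from the data rather than specified by hand.

```python
# Minimal sketch of the 2-D convolution a CNN layer performs.
# Here the kernel is a hand-chosen vertical-edge detector; in a real
# CNN the kernel weights are learned from training data.

def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution of a grayscale image with a kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Toy 4x6 "image": dark left half, bright right half (a vertical edge).
image = [[0, 0, 0, 9, 9, 9] for _ in range(4)]

# Sobel-like vertical-edge kernel.
kernel = [
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
]

feature_map = conv2d(image, kernel)
# The response is large where the window straddles the edge and zero
# in the flat regions on either side:
# feature_map == [[0, 36, 36, 0], [0, 36, 36, 0]]
```

A deep network stacks many such convolutions with learned kernels, so the "features" (edges, textures, and eventually disease-specific patterns) emerge from the data instead of being designed by an expert.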
Deep learning techniques, such as convolutional neural networks (CNNs), long short-term memory (LSTM) networks, autoencoders, deep generative models and deep belief networks, have already been applied to efficiently analyse large collections of data. Applying these methods to medical signals and images can aid clinicians in clinical decision making. The special issue on "Deep learning methods for medical applications" invites manuscripts reporting new methods, approaches and applications of deep learning. Manuscripts on explainable approaches that describe how deep learning models can help us interpret the data and explain the predictions are especially welcome.