Workshop on Evaluation and Experimental Design in Data Mining and Machine Learning
Abstract deadline:
Full paper deadline: 2019-02-15
Conference date: 2019-05-02
Conference difficulty:
CCF class: none
会议地点: Calgary, Alberta, Canada
Overview
Workshop at the SIAM International Conference on Data Mining (SDM19), May 2–4, 2019
The core topics are intended to be on meta-evaluation, that is, problems around evaluation: how to evaluate properly and fairly, identification of traps and common mistakes in evaluation, and strategies to improve evaluation settings, e.g., by defining benchmarks. As such, topics include, but are not limited to:
Benchmark datasets for data mining tasks: are they diverse/realistic/challenging?
Impact of data quality (redundancy, errors, noise, bias, imbalance, ...) on qualitative evaluation
Propagation/amplification of data quality issues in data mining results (including the interplay between data and algorithms)
Evaluation of unsupervised data mining (the dilemma between novelty and validity)
Evaluation measures
(Automatic) data quality evaluation tools: What are the aspects one should check before starting to apply algorithms to given data?
Issues around runtime evaluation (algorithm vs. implementation, dependency on hardware, algorithm parameters, dataset characteristics)
Design guidelines for crowd-sourced evaluations