For each track, we have released data and evaluation tools, as well as baselines consisting of training tools and models. For all tracks, the test data will be provided on Sep. 10th, final system outputs must be submitted by Oct. 8th, and human evaluation results will be released by Oct. 23rd. We will then hold two wrap-up workshops, giving the scientific community the opportunity to review state-of-the-art performance and novel approaches, and to discuss future directions for dialog technology challenges. For more information, see http://workshop.colips.org/dstc7/index.html
This special issue will host work on any of the three DSTC7 tasks. We anticipate that most papers will describe DSTC7 entries, and we particularly welcome papers describing novel techniques that advance the state of the art in dialog system technologies. Papers may describe entries in the official DSTC7 challenge, or work on DSTC7 data conducted outside or after the official challenge. We also welcome papers that analyze the DSTC7 tasks or results themselves. Finally, we invite papers from participants of previous DSTC editions, as well as general technical papers on end-to-end dialog technologies (e.g., conversational agents, dialog breakdown detection, reasoning and understanding, automatic dialog evaluation, or dialog policies and tracking, among others).