Symbolic-neural learning involves deep learning methods in combination with symbolic structures. A "deep learning method" is taken to be a learning process based on gradient descent on real-valued model parameters. A "symbolic structure" is a data structure involving symbols drawn from a large vocabulary; for example, sentences of natural language, parse trees over such sentences, databases (with entities viewed as symbols), and the symbolic expressions of mathematical logic or computer programs. Natural applications of symbolic-neural learning include, but are not limited to, the following areas:
- Image caption generation and visual question answering
- Speech and natural language interactions in robotics
- Machine translation
- General knowledge question answering
- Reading comprehension
- Textual entailment
- Dialogue systems
Various architectural ideas are shared by deep learning systems across these areas. These include word and phrase embeddings, recurrent neural networks (such as LSTMs and GRUs), and various attention and memory mechanisms. Certain linguistic and semantic resources may also be relevant across these applications: for example, dictionaries, thesauri, WordNet, FrameNet, Freebase, DBpedia, parsers, named-entity recognizers, coreference systems, knowledge graphs, and encyclopedias. Deep learning approaches to the above application areas, with architectures and tools subjected to quantitative evaluation, loosely define the focus of the workshop.
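As a toy illustration of two of the architectural ideas named above, the sketch below embeds discrete symbols from a small vocabulary as real-valued vectors and applies dot-product attention over them. It is a minimal sketch in plain NumPy, not code from any workshop system; the vocabulary, dimensions, and function names are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy vocabulary of symbols (words).
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
d = 8  # embedding dimension

# Embedding table: maps each discrete symbol to a real-valued vector.
# In a real system these parameters would be tuned by gradient descent.
E = rng.normal(size=(len(vocab), d))

def embed(tokens):
    """Look up the embedding vector for each symbol in a sentence."""
    return E[[vocab[t] for t in tokens]]  # shape (len(tokens), d)

def attention(query, keys, values):
    """Dot-product attention: a softmax-weighted average of the values."""
    scores = keys @ query                # one score per position, shape (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over positions
    return weights @ values, weights     # context vector and attention weights

sentence = ["the", "cat", "sat"]
H = embed(sentence)        # token embeddings serve as both keys and values
q = embed(["mat"])[0]      # query embedding for a probe symbol
context, w = attention(q, H, H)

# `w` is a probability distribution over the three input positions;
# `context` is a d-dimensional summary vector weighted by that distribution.
```

The same embed-then-attend pattern underlies caption generators and reading-comprehension models, with the random table replaced by learned parameters and the single attention step repeated inside a recurrent or layered network.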
We invite submissions of high-quality, original papers within the workshop focus. The workshop will consist of a half-day of invited talks and a full day of presentations of accepted papers.
Invited Speakers:
- Yoshua Bengio (invited), Université de Montréal, Montréal, Canada
- William Cohen (invited), Carnegie Mellon University, Pittsburgh, USA
- Masashi Sugiyama, RIKEN and University of Tokyo, Tokyo, Japan
- Jun'ichi Tsujii, AI Center, AIST, Tokyo, Japan
Organizing Committee:
- Sadaoki Furui, Toyota Technological Institute at Chicago, Chicago, USA
- Tomoko Matsui, Institute of Statistical Mathematics, Tokyo, Japan
- David McAllester, Toyota Technological Institute at Chicago, Chicago, USA
- Yutaka Sasaki, Toyota Technological Institute, Nagoya, Japan
- Koichi Shinoda, Tokyo Institute of Technology, Tokyo, Japan
- Masashi Sugiyama, RIKEN and University of Tokyo, Tokyo, Japan
- Jun'ichi Tsujii, AI Center, AIST, Tokyo, Japan
Workshop Co-chairs:
- David McAllester, Toyota Technological Institute at Chicago, Chicago, USA
- Yutaka Sasaki, Toyota Technological Institute, Nagoya, Japan
Program Committee: Jen-Tzung Chien (National Chiao Tung University, Taiwan), Yo Ehara (AIRC, Japan), Kevin Gimpel (TTIC, USA), Tatsuya Harada (University of Tokyo, Japan), Beven Jones (AIRC, Japan), Karen Livescu (TTIC, USA), Yasuyuki Matsushita (Osaka University, Japan), David McAllester (TTIC, USA), Makoto Miwa (TTI, Japan), Daichi Mochihashi (Institute of Statistical Mathematics, Japan), Takayuki Okatani (Tohoku University, Japan), Yutaka Sasaki (TTI, Japan), Greg Shakhnarovich (TTIC, USA), Takahiro Shinozaki (Tokyo Institute of Technology, Japan), Jun Suzuki (NTT, Japan), Yuta Tsuboi (IBM, Japan), Matthew Walter (TTIC, USA), Takashi Washio (Osaka University, Japan), Takuya Yoshioka (NTT, Japan)
Important Dates:
- March 22, 2017: Paper submission deadline
- May 10, 2017: Notification of acceptance
- June 7, 2017: Camera-ready submission deadline
- June 9, 2017: Early registration deadline
- July 7-8, 2017: SNL-2017 workshop in Nagoya, Japan