
Dieuwke Hupkes


Mailing address

Institute for Logic, Language and Computation
Universiteit van Amsterdam
P.O. Box 94242
1090 GE AMSTERDAM

Visiting address

Room F2.26
Building F
Science Park 107
1098 XG Amsterdam

Publications

Overview

To appear

  • Hupkes D., Singh A.K., Korrel K., Kruszewski G. and Bruni E. (2019). Learning compositionally through attentive guidance. To appear at the 20th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2019).
    [paper] [source code]

  • Lakretz Y., Kruszewski G., Desbordes T., Hupkes D., Dehaene S. and Baroni M. (2019). The emergence of number and syntax units in LSTM language models. To appear at the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2019).
    [paper]

  • Leonandya R., Bruni E., Hupkes D. and Kruszewski G. (2019). The Fast and the Flexible: training neural networks to learn to follow instructions from small data. To appear at the 13th International Conference on Computational Semantics (IWCS 2019).

2018

  • Hupkes D., Veldhoen S., and Zuidema W. (2018). Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research 61, 907-926.
    [paper] [bibtex]

  • Giulianelli, M., Harding, J., Mohnert, F., Hupkes, D. and Zuidema, W. (2018). Under the hood: using diagnostic classifiers to investigate and improve how language models track agreement information. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240-248. Best paper award.
    [paper] [bibtex]

  • Jumelet, J. and Hupkes, D. (2018). Do language models understand anything? On the ability of LSTMs to understand negative polarity items. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222-231.
    [paper] [bibtex]

  • Hupkes, D., Bouwmeester, S. and Fernández, R. (2018). Analysing the potential of seq2seq models for incremental interpretation in task-oriented dialogue. Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 165-174.
    [paper] [bibtex]

  • Zuidema W., Hupkes D., Wiggins G., Scharf C. and Rohrmeier M. (2018). Formal models of structure building in music, language and animal song. In Honing, H. (Ed.), The Origins of Musicality (pp. 253-286). Cambridge, Mass.: The MIT Press.
    [chapter]

2017

  • Hupkes D. and Zuidema W. (2017). Diagnostic classification and symbolic guidance to understand and improve recurrent neural networks. NIPS 2017 workshop on Interpreting, Explaining and Visualizing Deep Learning.
    [paper] [poster] [source code]

Abstracts

  • Ulmer D., Hupkes D. and Bruni E. (2019). An incremental encoder for sequence-to-sequence modelling. CLIN29.
    [abstract]

  • Lakretz, Y., Kruszewski, G., Hupkes, D., Desbordes, T., Marti, S., Dehaene, S. and Baroni, M. (2018). The representation of syntactic structures in Long Short-Term Memory networks and humans. L2HM.
    [poster]

  • Zuidema, W., Hupkes, D. and Abnar, S. (2018). Interpretable Machine Learning for Predicting Brain Activation in Language Processing. L2HM.
    [poster]

  • Leonandya, R., Kruszewski, G., Hupkes, D. and Bruni, E. (2018). The Fast and the Flexible: pretraining neural networks to learn from small data. L2HM.
    [poster]

  • Nielsen A., Hupkes D., Kirby S. and Smith K. (2016). The arbitrariness of the sign revisited: the role of phonological similarity. EVOLANG11.
    [abstract]