
Dieuwke Hupkes


Mailing address

Institute for Logic, Language and Computation
Universiteit van Amsterdam
P.O. Box 94242
1090 GE AMSTERDAM

Visiting address

Room F2.26
Building F
Science Park 107
1098 XG Amsterdam

Publications

Overview

In preparation/under review

  • Nielsen A., Hupkes D., Kirby S. and Smith K. (in prep). The arbitrariness of the sign revisited: An examination of the roles of phonological similarity and task construction in an artificial language learning paradigm.

2018

  • Hupkes D., Singh A.K., Korrel K., Kruszewski G. and Bruni E. (2018). Learning compositionally through attentive guidance.
    [paper] [source code].

  • Giulianelli, M., Harding, J., Mohnert, F., Hupkes, D. and Zuidema, W. (2018). Under the hood: using diagnostic classifiers to investigate and improve how language models track agreement information. To appear at the EMNLP workshop Analyzing and interpreting neural networks for NLP. [paper].

  • Jumelet, J. and Hupkes, D. (2018). Do language models understand anything? On the ability of LSTMs to understand negative polarity items. To appear at the EMNLP workshop Analyzing and interpreting neural networks for NLP. [paper].

  • Hupkes, D., Bouwmeester, S. and Fernández, R. (2018). Analysing the potential of seq2seq models for incremental interpretation in task-oriented dialogue. To appear at the EMNLP workshop Analyzing and interpreting neural networks for NLP.
    [paper].

  • Hupkes D., Veldhoen S., and Zuidema W. (2018). Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research 61, 907-926.
    [paper] [bibtex].

  • Zuidema W., Hupkes D., Wiggins G., Scharf C. and Rohrmeier M. (2018). Formal models of Structure Building in Music, Language and Animal Song. In Honing, H. (Ed.), The Origins of Musicality (pp. 253-286). Cambridge, Mass.: The MIT Press.

2017

  • Hupkes D. and Zuidema W. (2017). Diagnostic classification and symbolic guidance to understand and improve recurrent neural networks. NIPS 2017 workshop Interpreting, Explaining and Visualizing Deep Learning.
    [paper] [poster] [source code]

Abstracts

  • Lakretz, Y., Kruszewski, G., Hupkes, D., Desbordes, T., Marti, S., Dehaene, S. and Baroni, M. (2018). The representation of syntactic structures in Long Short-Term Memory networks and humans. L2HM.
    [poster]

  • Zuidema, W., Hupkes, D. and Abnar, S. (2018). Interpretable Machine Learning for Predicting Brain Activation in Language Processing. L2HM.
    [poster]

  • Leonandya, R., Kruszewski, G., Hupkes, D. and Bruni, E. (2018). The Fast and the Flexible: pretraining neural networks to learn from small data. L2HM. [poster]

  • Nielsen A., Hupkes D., Kirby S. and Smith K. (2016). The Arbitrariness Of The Sign Revisited: The Role Of Phonological Similarity. In The Evolution of Language: Proceedings of the 11th International Conference (EVOLANG11).
    [abstract]