
Dieuwke Hupkes

Research scientist at FAIR, scientific manager of the ELLIS Amsterdam unit, interested in structures in language

Email Twitter Instagram Github

Curriculum Vitae



You can also find me on Google Scholar.

Publications

In prep

  • J. Jumelet, M. Denić, J. Szymanik, D. Hupkes and S. Steinert-Threlkeld. Language Models Use Monotonicity to Assess NPI Licensing. [preprint]

  • K. Sinha, R. Jia, D. Hupkes, J. Pineau, A. Williams, D. Kiela. Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little. [preprint]

  • T. Kersten, H. Wong, J. Jumelet and D. Hupkes. Attention vs Non-attention for a Shapley-based Explanation Method. Accepted at the NAACL 2021 workshop DeeLIO [preprint]

2021


  • Y. Lakretz, D. Hupkes, A. Vergallito, M. Marelli, M. Baroni and S. Dehaene. Mechanisms for Handling Nested Dependencies in Neural-Network Language Models and Humans. Cognition

  • L. Weber, J. Jumelet, E. Bruni and D. Hupkes. Language modelling as a multi-task problem. EACL 2021. [paper]

  • G. Dagan, D. Hupkes and E. Bruni. Co-evolution of language and agents in referential games. EACL 2021. [paper]

2020


  • R.D. Luna, E.M. Ponti, D. Hupkes and E. Bruni. Internal and External Pressures on Language Emergence: Least Effort, Object Constancy and Frequency. Accepted at Findings of EMNLP 2020.

  • O. van der Wal, S. de Boer, E. Bruni and D. Hupkes. The grammar of emergent languages. Accepted at EMNLP 2020. [paper] [source code]

  • D. Hupkes, V. Dankers, M. Mul and E. Bruni. Compositionality decomposed: how do neural networks generalise? JAIR. [paper] [source code] [extended abstract (IJCAI)] [15 min presentation (IJCAI)]

  • Dubois Y., Dagan G., Hupkes D. and Bruni E. Location Attention for Extrapolation to Longer Sequences. ACL 2020. [paper]

2019


  • Jumelet J., Zuidema W. and Hupkes D. Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment. CoNLL 2019. Honourable mention.
    [paper] [source code]

  • Baan J., Leible J., Nikolaus M., Rau D., Ulmer D., Baumgärtner T., Hupkes D. and Bruni E. On the Realization of Compositionality in Neural Networks. BlackboxNLP, ACL 2019.

  • Ulmer D., Hupkes D. and Bruni E. Assessing incrementality in sequence-to-sequence models. Repl4NLP, ACL 2019.

  • Korrel K., Hupkes D., Dankers V., and Bruni E. Transcoding compositionally: using attention to find more generalizable solutions. BlackboxNLP, ACL 2019.

  • Lakretz Y., Kruszewski G., Desbordes T., Hupkes D., Dehaene S. and Baroni M. The emergence of number and syntax units in LSTM language models. NAACL 2019.
    [paper] [bibtex].

  • Hupkes D., Singh A.K., Korrel K., Kruszewski G. and Bruni E. Learning compositionally through attentive guidance. CICLing 2019.
    [paper] [source code]

  • Leonandya R., Bruni E., Hupkes D. and Kruszewski G. The Fast and the Flexible: training neural networks to learn to follow instructions from small data. IWCS 2019.

2018


  • Hupkes D., Veldhoen S., and Zuidema W. (2018). Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. JAIR
    [paper] [bibtex] [demo] [extended abstract (IJCAI)]

  • Giulianelli, M., Harding, J., Mohnert, F., Hupkes, D. and Zuidema, W. (2018). Under the hood: using diagnostic classifiers to investigate and improve how language models track agreement information.
    BlackboxNLP 2018, ACL. Best paper award.
    [paper] [bibtex].

  • Zuidema W., Hupkes D., Wiggins G., Scharf C. and Rohrmeier M. (2018). Formal models of Structure Building in Music, Language and Animal Song. In The Origins of Musicality

  • Jumelet, J. and Hupkes, D. (2018). Do language models understand anything? On the ability of LSTMs to understand negative polarity items. BlackboxNLP 2018, ACL.
    [paper] [bibtex].

  • Hupkes, D., Bouwmeester, S. and Fernández, R. (2018). Analysing the potential of seq2seq models for incremental interpretation in task-oriented dialogue. BlackboxNLP 2018, ACL.
    [paper] [bibtex].

2017


  • Hupkes D. and Zuidema W. Diagnostic classification and symbolic guidance to understand and improve recurrent neural networks. Interpreting, Explaining and Visualizing Deep Learning, NIPS 2017.
    [paper] [poster] [source code]

2016


  • Veldhoen S., Hupkes D. and Zuidema W. (2016). Diagnostic classifiers: revealing how neural networks process hierarchical structure. CoCo, NIPS 2016.
    [paper] [bibtex] [poster] [source code] [demo]

  • Hupkes D. and Bod R. POS-tagging of Historical Dutch. LREC 2016.
    [bibtex] [paper] [source code]

Abstracts


  • Ponti E., Hupkes D. and Bruni E. (2019). The typology of emergent languages. Interaction and the Evolution of Linguistic Complexity.

  • Ulmer D., Hupkes D. and Bruni E. (2019). An incremental encoder for sequence-to-sequence modelling. CLIN 29.

  • Lakretz, Y., Kruszewski, G., Hupkes, D., Desbordes, T., Marti, S., Dehaene, S. and Baroni, M. (2018). The representation of syntactic structures in Long Short-Term Memory networks and humans. L2HM.

  • Zuidema, W., Hupkes, D., Abnar, S. (2018). Interpretable Machine Learning for Predicting Brain Activation in Language Processing. L2HM.

  • Leonandya, R., Kruszewski, G., Hupkes, D., Bruni, E. (2018). The Fast and the Flexible: pretraining neural networks to learn from small data. L2HM. [poster]

  • Nielsen A., Hupkes D., Kirby S. and Smith K. (2016). The Arbitrariness Of The Sign Revisited: The Role Of Phonological Similarity. EVOLANG11.