
Dieuwke Hupkes

Research scientist at FAIR, scientific manager of the ELLIS Amsterdam unit, interested in structures in language


Curriculum Vitae

  • April 28, Compositionality decomposed: how do neural networks generalise? Tel Aviv University (virtual talk).

  • March 17, Kunnen we kunstmatige intelligentie nog doorgronden? (Can we still understand artificial intelligence?) Studium Generale, Utrecht (virtual talk).

  • February 11, Compositionality decomposed: how do neural networks generalise? Women@CL, Cambridge (virtual talk).

  • February 5, Compositionality decomposed: how about natural language? Rijksuniversiteit Groningen, Groningen (virtual talk).


  • October 30, Neural networks as explanatory models of language processing, ILCC Seminar, University of Edinburgh (virtual talk).

  • September 17, Neural networks as explanatory models. AllenNLP, Seattle (virtual talk).


  • November 4, Syntax in neural language models: a case study, Universiteit Utrecht, Utrecht.

  • October 9, Subject-verb agreement in neural language models – how, when and where? Johns Hopkins University, Baltimore.

  • October 1, What do they learn? Neural networks, compositionality and interpretability, Computational Cognition workshop, Osnabrück.
    [slides] [recording]

  • September 3. Guest speaker and panelist at the public event When fake looks all too real: the technology behind Deep Fake, SPUI25, Amsterdam.
    [slides] [recording]

  • June 18. The typology of emergent languages. Interaction and the Evolution of Linguistic Complexity, Edinburgh.

  • May 6. The compositionality of neural networks: integrating symbolism and connectionism. CS&AI / SIKS workshop on analyzing and interpreting neural networks for NLP, ‘s-Hertogenbosch.

  • April 18. The compositionality of neural networks: integrating symbolism and connectionism. Internal talk at Saarland University, Saarbrücken.

  • March 14. On neural networks and compositionality. Internal seminar at École normale supérieure, Paris.


  • July 18, 2018. Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. IJCAI.

  • June 12, 2018. Learning compositionally through attentive guidance. Internal seminar at the University of Copenhagen.


  • December 15, 2017. Hierarchical compositionality in recurrent neural networks. Internal seminar at Rijksuniversiteit Groningen.

  • December 7, 2017. The grammar of neural networks. SMART workshop Grammars, Computation & Cognition, Amsterdam.

  • May 9, 2017. Processing hierarchical structure with RNNs. Dagstuhl seminar on Human-like neural-symbolic computing.


  • November 22, 2016. How may neural networks process hierarchical structure? Insights from recursive and recurrent networks learning arithmetic. Logic Tea at the University of Amsterdam.

  • May 25, 2016. POS-tagging of Historical Dutch. LREC, Portorož.


  • June 8, 2015. Using Parallel Data to improve Part-of-Speech tagging of 17th century Dutch. DH Benelux, Antwerp.