Tommy Rochussen

Doctoral Researcher

About Me

I am a doctoral researcher at Helmholtz AI and the Technical University of Munich, where I am a member of the Elpis lab, supervised by Dr. Vincent Fortuin and mentored by Mark van der Wilk. I am broadly motivated by the need to develop machine intelligence systems that can reason in the presence of uncertainty, as this strikes me as the most crippling flaw of current automated learning systems.

Prior to my PhD, I took a year out of education to focus on expanding my knowledge of probabilistic machine learning. During that year, I wrote a single-author research paper that was accepted at AABI 2024 and spent five months as a Machine Learning Researcher at Motorway in London, where I applied scalable Bayesian machine learning models to various use cases in vehicle pricing.

Before my year out, I studied engineering at the University of Cambridge, specialising in computer and information engineering; my module choices made the integrated master's year indistinguishable from a typical master's in machine learning, albeit with a heavy dose of Bayesianism.

Download CV
Interests
  • Bayesian Deep Learning
  • Approximate Inference
  • Probabilistic Meta-Learning
Education
  • Doctor of Natural Sciences (Dr. rer. nat.), Probabilistic Machine Learning

    Helmholtz AI, Technical University of Munich (TUM)

  • Master of Engineering (M.Eng.), Computer and Information Engineering

    University of Cambridge

  • Bachelor of Arts (B.A.), Engineering

    University of Cambridge

Research Mission

When making decisions or predictions, we as humans rely on a sense of how confident we are in a belief before acting upon it. We have evolved to do this because we live in a constantly changing and uncertain world, and learning to reason in the presence of uncertainty was our only hope of ever making sense of it. Despite the many impressive recent advances in machine learning and artificial intelligence, reasoning sensibly in the presence of uncertainty is something that machines generally cannot do, and I would argue that it is a critical flaw: how can we trust machine intelligence if the machine believes it's never wrong?

In my research I hope to develop powerful machine learning models that do have a sense of how certain they are, so that, once equipped with the ability to quantify their uncertainty, machines can provide more reliable predictions that help us in arbitrarily high-stakes situations. The Bayesian formalism offers a highly principled approach to uncertainty quantification and comes with many attractive benefits: the ability to specify prior beliefs and update them with data, the ability to automatically choose the "right" model out of many, the ability to update a trained model when new data becomes available, and so on. As such, a key part of my ethos is to be Bayesian, and I believe that Bayesian machine learning research is a good place to focus our efforts.
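As a minimal sketch of the machinery behind these benefits (with generic symbols of my choosing: $\theta$ for model parameters, $\mathcal{D}$ for observed data, and $\mathcal{M}$ for a candidate model), Bayes' rule turns a prior into a posterior, the marginal likelihood scores competing models, and the posterior serves as the prior for any future data $\mathcal{D}'$:

$$
p(\theta \mid \mathcal{D}, \mathcal{M}) = \frac{p(\mathcal{D} \mid \theta, \mathcal{M})\, p(\theta \mid \mathcal{M})}{p(\mathcal{D} \mid \mathcal{M})},
\qquad
p(\mathcal{D} \mid \mathcal{M}) = \int p(\mathcal{D} \mid \theta, \mathcal{M})\, p(\theta \mid \mathcal{M})\, \mathrm{d}\theta,
$$

$$
p(\theta \mid \mathcal{D}, \mathcal{D}', \mathcal{M}) \propto p(\mathcal{D}' \mid \theta, \mathcal{M})\, p(\theta \mid \mathcal{D}, \mathcal{M}).
$$

The last display assumes $\mathcal{D}'$ is conditionally independent of $\mathcal{D}$ given $\theta$; under that assumption, updating on new data requires no retraining from scratch, which is exactly the appeal described above.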