👋 Hi there!
I'm Timo, a PhD student in the Machine Learning Interpretability group at LMU Munich. My research centers on making Machine Learning interpretable to increase trust and transparency, as well as on its intersections with other areas of Machine Learning. My current work focuses on feature effects, feature interactions, and functional decompositions. Previously, I worked on prompt optimization and spent time in industry, developing and deploying Machine Learning models with a focus on interpretability.
My repositories encompass a broad range of projects: from university assignments to personal projects and competitions, from unmaintained research code to production-ready Python packages, and from desktop applications and web interfaces to Machine Learning use cases and optimization benchmarks.
Feel free to explore these and other projects on my GitHub or connect with me on LinkedIn.


