Matthias Niessner

Professor of Computer Science & Head of the Visual Computing Lab
Matthias Nießner is a Professor of Computer Science at the Technical University of Munich (TUM), where he heads the Visual Computing Lab. He studied computer science at the University of Erlangen-Nuremberg, receiving his Diploma degree in 2010. He then pursued his PhD under the supervision of Prof. Günther Greiner at the Chair of Computer Graphics, also at the University of Erlangen-Nuremberg, and graduated in 2013 with a thesis on “Subdivision Surface Rendering using Hardware Tessellation,” which was awarded the highest distinction. From 2013 to 2017, he was a visiting assistant professor at Stanford University, working in collaboration with the Max Planck Center for Visual Computing. Since 2017, he has been a professor at TUM, where he leads the Visual Computing Lab.

Prof. Nießner’s research focuses on 3D digitization at the intersection of computer graphics, computer vision, and artificial intelligence. The central theme of this work is obtaining 3D models of real-world environments captured with video and range cameras. Within this context, the main focus lies on representing the captured 3D geometry and on processing and analyzing it with cutting-edge machine learning techniques such as deep learning. Beyond the semantic understanding of 3D environments, which is a key element of modern robotics, the Visual Computing Lab’s research touches many fascinating applications, including video editing, numerical optimization, and more.

Technical University of Munich, Visual Computing Lab

The Visual Computing Lab at TUM is a group of research enthusiasts pushing the state of the art at the intersection of computer vision, computer graphics, and machine learning. Our research mission is to obtain high-quality digital models of the real world, including detailed geometry, surface texture, and material, in both static and dynamic environments. In our research, we heavily exploit the capabilities of RGB-D and range-sensing devices that are now widely available. Ultimately, however, we aim to achieve both 3D and 4D recordings from monocular sensors; essentially, we want to record holograms with a simple webcam or mobile phone. We further employ our reconstructed models in specific use cases such as video editing, immersive AR/VR, and semantic scene understanding, among many others. Alongside traditional convex and non-convex optimization techniques, we see great potential in modern artificial intelligence, particularly deep learning, for achieving these goals.
