Interview with TERAIS’ new postdoctoral researcher Štefan Pócoš, PhD
Welcome to this interview with our esteemed colleague on the TERAIS project, Dr. Štefan Pócoš, a postdoctoral researcher with a robust background in mathematics and machine learning. Štefan has been an integral part of the TERAIS team since his time as a PhD candidate, and now, in his new role as a postdoc, he brings extensive experience in deep learning, adversarial robustness, and human-robot interaction to Comenius University Bratislava, Slovakia, where his research focuses on attention mechanisms, explainability, and the development of AI systems for real-world applications.
Štefan’s role at TERAIS will enhance the research capabilities of DAI UKBA, furthering the project’s commitment to advancing cutting-edge AI technologies and fostering international collaboration. Join us as we delve into his impressive career, research goals, and contributions to shaping the future of AI and robotics.
Can you tell us about your academic and research background and how it led you to become involved in the TERAIS project as a postdoctoral researcher?
My higher academic path began with a bachelor’s degree in mathematics, followed by a master’s degree in computer graphics and geometry. Having acquired a solid base for conducting research, I felt the need to investigate modern trends in machine learning further, which is why I enrolled in doctoral studies. After that, my involvement in TERAIS came about quite naturally, since the project’s main research pillars are closely tied to my research topic, and after my defence I felt that my contributions to the project could be enriched even further.
What inspired you to specialise in areas like adversarial robustness, explainability, and attention mechanisms in AI?
At first, I started studying adversarial examples out of curiosity. I subsequently discovered that neural networks have fundamental issues that have not yet been reliably solved. For that reason, I came to see the study of explainability and attention in deep learning models as a crucial aspect and a necessary step towards making current machine learning more transparent and reliable.
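For readers unfamiliar with the term, adversarial examples are inputs perturbed almost imperceptibly so that a trained network misclassifies them. The snippet below is a minimal illustrative sketch of the fast gradient sign method (FGSM), one common way to craft them; it is not code from Štefan’s research, and the names used (`fgsm_example`, `model`, `loss_fn`, `x`, `y`, `epsilon`) are assumed placeholders for a standard PyTorch classifier, its loss function, an input batch, and its labels.

```python
import torch

def fgsm_example(model, loss_fn, x, y, epsilon=0.03):
    """Illustrative sketch: return a copy of x perturbed within an epsilon
    bound so the model is more likely to misclassify it (hypothetical helper)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)   # loss on the clean prediction
    loss.backward()                   # gradient of the loss w.r.t. the input
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    return perturbed.clamp(0, 1).detach()  # keep pixel values in [0, 1]
```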
You’ve been part of the TERAIS project for some time, and now you’re contributing in a new role. How has your focus within the project evolved, and how does your current research aim to further advance AI and robotics?
During my PhD, I mostly focused on a rather narrow part of artificial intelligence: attention mechanisms. Now, I can fully indulge in research into other interesting aspects of current machine learning, such as explainability and trustworthiness in robotic applications.
As a postdoctoral researcher within the TERAIS project, what are your primary responsibilities and research objectives?
My main responsibility is to conduct project-related research. This involves studying state-of-the-art literature, coming up with innovative ideas, and implementing them at various levels. Furthermore, I am supervising several students working on their diploma theses. Being part of the TERAIS project also involves organising and attending various research-related meetings, and communicating and collaborating with colleagues not only from Comenius University but also from the partner institutions.
How do you think the TERAIS project will impact your career trajectory and future opportunities in the field of robotics and AI?
So far, the TERAIS project has offered me great research opportunities in a field I am deeply interested in. In the future, the skills I have acquired can be leveraged to further pursue my research goals.
Explainability and trustworthiness are hot topics in AI today. In your experience, how can attention mechanisms and explainability improve trust in AI systems?
Everybody expects something different from “an explainable system”, so it depends on the point of view we take. For example, a researcher can benefit from visualisations of the inner processes of a neural network, since they create an opportunity to spot an outlier in the data or the model’s misbehaviour in advance. On the other hand, end users can feel more confident using machine learning systems if they also receive reasonable explanations of the output, whether in a verbal or non-verbal manner.
What do you see as the most promising practical applications of your research in areas such as human-robot collaboration or deep learning systems?
My research, as well as that of some of my colleagues on the TERAIS project, can be applied to make human-robot collaboration qualitatively different. If a robot could, besides performing the correct movements, also communicate its intentions, collaboration with humans could be expected to become smoother, more efficient, and more trustworthy.