Dear all,

The Bernoulli Institute for Mathematics, Computer Science, and Artificial Intelligence at the University of Groningen, NL, is looking for two PhD candidates (4-year full-time, 1.0 FTE) for an exciting interdisciplinary project. We aim to create models of the world, conduct visual data analysis to understand them better, and use machine learning models to yield optimized brain-computer interfaces (BCI), visualization representations and interfaces, and robotic task planning.

*PhD Project 1: Context-specific Grasping Control and Adaptive Visual Interfaces*

In robotic manipulation, human-robot interaction, and human-computer interaction, recent research efforts concentrate on advancing context-specific grasping control and adaptive visual interfaces. These interfaces incorporate intricate feedback mechanisms using 2D/3D visualizations on computer screens and augmented reality overlays on physical objects. A critical aspect of this research is the incorporation of multimodal representations, spanning from neural to behavioral patterns associated with the object interaction task, and integrating visual, auditory, and tactile sensory inputs to construct a comprehensive model of the environment. Developing and analyzing such models further refines our understanding of interaction-specific properties between (i) the human or the robotic hand and (ii) the objects or a visualization, enabling nuanced control strategies.

*PhD Project 2: Neural Task Planning for Optimizing Visualization and Robot Interaction*

We will leverage the power of Large Multimodal Models (LMMs) for continual task planning/modeling and visualization design. In robotic domains and visualization alike, a task is often specified in various forms, such as language and visual instructions. We aim to develop a multimodal model that accommodates both textual and visual modalities and overcomes the issues of conventional approaches: reliance on complex programming and data collection processes, limited adaptability and scalability, and the need for domain experts to explicitly train the robot or adapt a visualization for specific tasks and to integrate (rule-based) mechanisms for explanation. To achieve this, we employ LMMs to act as a continual task planner and visualization designer. Our model takes natural task descriptions and the current state as input and generates a hierarchical task plan or visualization as output.

The candidates will become members of the groups for Scientific Visualization and Computer Graphics, Cognitive Modeling, and Autonomous Systems, working under the supervision of Steffen Frey, Andreea Sburlea, and Hamidreza Kasaei.

More information as well as the application form can be found here:
https://www.rug.nl/about-ug/work-with-us/job-opportunities/?details=00347-02S000AYVP

If you have any questions, please feel free to reach out to Steffen Frey (s.d.frey@rug.nl).
Applications received before 5 November 2024 will be given full consideration; however, the positions will remain open until they are filled.

Cheers,
Steffen Frey, Andreea Sburlea, Hamidreza Kasaei