How do humans understand what a robot is about to do? A new paper from a joint UKBA and IIT team explores how gaze and pointing cues help people predict a robot's intentions before its movements are complete. Using the NICO humanoid robot, we show that combining gaze with pointing makes robot actions clearer (multimodal superiority), while gaze alone provides the fastest cue (oculomotor primacy). These findings inform the design of robots whose actions are not only precise but also legible, making human-robot collaboration safer, smoother, and more intuitive.