RESEARCH

Our software is underpinned by contemporary academic research, and our applications have been comprehensively tested across sectors ranging from aviation and sport to healthcare and defence.

OUR SCIENTIFIC ADVISORY BOARD

PROF SAM VINE

Sam is Professor of Psychology at the University of Exeter and a Director of Research and Impact within the University of Exeter Medical School.

He is co-founder and Chief Scientific Officer (CSO) at Cineon and chairs Cineon’s Scientific Advisory Board. Sam leads the Virtual Immersive Training and Learning (VITAL) research group and the Exeter Immersive research network at the University of Exeter, and co-chairs a NATO research task group exploring XR technology in defence. His research aims to understand the psychology of human performance and learning using technologies such as eye tracking, psychophysiological measurement, and virtual reality.

His research is applied to a range of domains (e.g., sport, surgery, military, and aviation) and populations (e.g., children, elite performers, and patient groups). Sam has published over 100 original articles in peer-reviewed journals and has been involved in research and innovation grants totalling over £4m.

PROF SYLVIA PAN

Sylvia is a Professor of Virtual Reality at Goldsmiths, University of London, where she co-leads the Goldsmiths Computing MA/MSc in Virtual and Augmented Reality and the SeeVR Lab.

Her research focuses on Virtual Reality technology and its applications in social neuroscience, psychology, training, and therapy.

She was a research fellow in the Hamilton Lab at the Institute of Cognitive Neuroscience (ICN), UCL, and in the Virtual Environments and Computer Graphics group (VECG) in Computer Science, UCL. She received her PhD in Virtual Reality from UCL in 2009.

DR NICK PERES

Nick is the Programme Director of Innovation and Transformation at Torbay and South Devon NHS Foundation Trust.

He is an innovator and researcher in digital healthcare, with 12 years of experience spanning broadcast media and the NHS. He has created cutting-edge virtual reality (VR) healthcare solutions and completed a PhD on compassionate behaviour and technology in medical education and simulation. Along the way, Nick has developed, executed, and overseen a range of XR projects, including the nationally recognised, NHSE-supported VR Lab.

He currently co-leads the Digital Futures programme at Torbay and South Devon NHS Foundation Trust, which has been nominated for a parliamentary award. The programme supports research and development projects, digital health fellowships, and digital literacy initiatives, as well as the adoption of XR and other emerging technologies.

THE SCIENCE OF VISUAL ATTENTION

The science of visual attention is increasingly recognised as one of the most reliable windows into a person’s cognitive and emotional state. Visual attention is guided by our current goals and motivations (top-down processing) and by the visual characteristics of the environment (bottom-up processing).

Top-down processing describes how our goals and motivations guide where we focus our attention. It is internally driven, relying on cognitive control to determine which aspects of our visual field are most important.

Bottom-up processing, on the other hand, is stimulus-driven. It directs attention based on external stimuli, such as bright colours or sudden movement, rather than internal goals.

Different brain networks regulate these processes, but they interact closely under normal conditions. In stressful or anxiety-inducing situations, top-down control weakens, leading to an increased reliance on bottom-up processing. This shift often manifests as changes in eye behaviour, such as difficulty maintaining focus or being easily distracted by external stimuli.

The technology developed by Cineon builds on this knowledge by continuously monitoring how eye behaviour shifts. By analysing eye movements, it can infer changes in a user’s stress and anxiety levels in real time. This allows for a more personalised experience in digital environments, where the system can adapt to the user’s emotional state, potentially improving comfort, productivity, or focus.
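For readers who want a concrete picture, here is a minimal Python sketch of such an adaptation loop. Everything in it is illustrative: the rolling window, the threshold, and the AdaptiveEnvironment class are our assumptions, not Cineon’s implementation, and the per-second stress estimates are assumed to come from an upstream model.

```python
from collections import deque

class AdaptiveEnvironment:
    """Keep a rolling window of stress estimates (0-1) and soften
    the environment when the running average climbs too high."""

    def __init__(self, window_seconds=30, threshold=0.7):
        self.scores = deque(maxlen=window_seconds)
        self.threshold = threshold

    def update(self, stress_score):
        """Call once per second with the latest stress estimate."""
        self.scores.append(stress_score)
        if sum(self.scores) / len(self.scores) > self.threshold:
            self.reduce_stimulation()

    def reduce_stimulation(self):
        # Hypothetical hook: dim distracting UI elements, slow the
        # pacing, or simplify the scene in a virtual environment.
        print("High sustained stress detected: adapting environment")

env = AdaptiveEnvironment()
for score in [0.4, 0.6, 0.8, 0.9, 0.85]:  # simulated per-second estimates
    env.update(score)
```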

This approach has wide applications, from improving user experience in gaming and virtual environments to providing stress-management tools in workplace settings.

STRESS, ANXIETY & EYE MOVEMENT

We can estimate stress and cognitive load by using eye-tracking technology to monitor visual attention. Psychological factors, such as stress and anxiety, directly affect how our eyes behave, and eye-tracking provides insights by measuring changes in pupil size, fixation patterns, and saccades (rapid eye movements) in response to internal emotions and external stimuli. When a person is stressed or anxious, their attention may become more fragmented—they’re easily distracted, may fixate on irrelevant stimuli, and often find it harder to maintain concentration.
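To make these measurements concrete, here is a short Python illustration of one standard technique for separating fixations from saccades, velocity-threshold identification (I-VT). The function, threshold, and simulated data are our own illustrative choices rather than Cineon’s pipeline.

```python
import numpy as np

def ivt_classify(x, y, t, velocity_threshold=30.0):
    """Label each gaze sample as saccade (True) or fixation (False)
    using the velocity-threshold (I-VT) method. x and y are gaze
    positions in degrees of visual angle, t is time in seconds;
    30 deg/s is a common threshold, but the right value depends on
    the tracker and the task."""
    vx = np.gradient(x, t)   # horizontal velocity, deg/s
    vy = np.gradient(y, t)   # vertical velocity, deg/s
    return np.hypot(vx, vy) > velocity_threshold

# Hypothetical 60 Hz recording: one large gaze shift at t = 0.5 s.
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 60)
x = np.where(t < 0.5, 0.0, 8.0) + rng.normal(0, 0.05, t.size)
y = rng.normal(0, 0.05, t.size)

saccade = ivt_classify(x, y, t)
print(f"saccade samples: {saccade.sum()} of {saccade.size}")
```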

Decades of cognitive psychology research have shown that stress and anxiety lead to specific changes in attentional control. Under normal conditions, visual attention is guided by both top-down processes (driven by our goals) and bottom-up processes (stimulus-driven by the environment). However, stress disrupts top-down control, making individuals more reactive to bottom-up cues, which is evident in their eye movements—such as an increased tendency to look at distracting or irrelevant stimuli.
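One common way to quantify that tendency is an area-of-interest (AOI) analysis. The sketch below is a hypothetical illustration with made-up coordinates: it computes the share of fixations that land outside a task-relevant region.

```python
import numpy as np

def distraction_ratio(fix_x, fix_y, task_aoi):
    """Fraction of fixations landing outside the task-relevant
    area of interest (AOI), given as (x0, y0, x1, y1) in screen
    coordinates. A rising ratio suggests attention is being
    captured bottom-up rather than directed top-down."""
    x0, y0, x1, y1 = task_aoi
    on_task = (fix_x >= x0) & (fix_x <= x1) & (fix_y >= y0) & (fix_y <= y1)
    return 1.0 - on_task.mean()

# Hypothetical fixation centroids (pixels) and a task AOI.
fix_x = np.array([420, 455, 1210, 430, 1500, 440])
fix_y = np.array([300, 310, 820, 305, 90, 295])
print(distraction_ratio(fix_x, fix_y, task_aoi=(350, 250, 550, 400)))
```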

Cineon’s technology builds on this understanding by embedding these findings into mathematical models that track how visual attention patterns shift as cognitive load and emotional states change. By continuously analysing eye movements, the technology provides a real-time assessment of stress and cognitive load, offering actionable insights that can be used to tailor user experiences based on their current mental state.
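The exact models are proprietary, so the following is only a schematic Python sketch of the general idea: standardise a few eye metrics against a user’s calm baseline, combine them with entirely made-up weights, and squash the result into a 0-1 stress index.

```python
import numpy as np

# Illustrative only: Cineon's actual models and weights are not
# public. Features here stand for, e.g., saccade rate, pupil
# dilation, and distraction ratio.
WEIGHTS = np.array([0.9, 0.7, 0.5])

def stress_index(features, baseline_mean, baseline_std):
    z = (np.asarray(features) - baseline_mean) / baseline_std  # per-user baseline
    return 1.0 / (1.0 + np.exp(-WEIGHTS @ z))                  # squash to (0, 1)

# A window with elevated metrics relative to this user's calm
# baseline scores near the top of the scale.
print(stress_index([4.1, 0.6, 0.5],
                   baseline_mean=np.array([2.5, 0.1, 0.2]),
                   baseline_std=np.array([0.8, 0.2, 0.15])))
```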

MACHINE LEARNING IN ADAPTIVE ENVIRONMENTS

To deliver accurate estimates of stress and anxiety across diverse settings, our data science team is employing machine learning models validated through data collected by our research team.

To ensure that our models are robust and can generalise across different individuals, time periods, and varying attentional demands, we collect rich datasets from volunteers engaging in various digital and virtual environments. This approach captures a wide range of eye-tracking data under different emotional and cognitive states.
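In practice, generalising across individuals requires participant-aware data splits. The sketch below, with hypothetical data, shows one standard way to do this using scikit-learn’s GroupKFold, which keeps each participant’s windows confined to a single fold.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Hypothetical dataset: one row of eye-tracking features per
# 30-second window, plus the participant each window came from.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
participants = np.repeat(np.arange(20), 30)

# GroupKFold keeps each participant's windows entirely in either
# the training or the test fold, so scores reflect generalisation
# to people the model has never seen.
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, groups=participants):
    assert not set(participants[train_idx]) & set(participants[test_idx])
```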

Participants provide regular self-reported feedback on their emotional well-being, which is essential for training the machine learning models. By linking eye movement patterns to these emotional reports, the models learn to recognise subtle cues related to stress and anxiety.
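As a schematic example of this supervised setup, the following sketch fits a simple ridge regression from per-window eye features to self-reported stress ratings. The features, labels, and model choice are illustrative assumptions, not Cineon’s actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

# Hypothetical training data: eye-movement features per window,
# paired with the self-reported stress rating (here on a 1-10
# scale) the volunteer gave for that period.
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))        # eye-tracking features
y = rng.uniform(1, 10, size=600)     # self-reported stress

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
model.fit(X, y)
print("predicted stress for a new window:", model.predict(X[:1])[0])
```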

This ongoing feedback drives the continuous improvement of our core model, the Empathic Learning Engine (ELE), allowing it to refine its ability to assess stress and anxiety accurately in real time. The model adapts to individual variations and evolving contexts, enhancing its performance over time.
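One standard way to realise this kind of continual refinement is incremental (online) learning. The sketch below, again an illustration rather than Cineon’s implementation, updates a scikit-learn SGDRegressor each time a new self-report arrives.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
scaler = StandardScaler()
model = SGDRegressor(learning_rate="adaptive", eta0=0.01)

# Each time a user submits a new self-report, update the model on
# that labelled window instead of retraining from scratch, so the
# estimator drifts towards the individual's own patterns.
for _ in range(100):
    X_new = rng.normal(size=(1, 8))       # latest feature window
    y_new = rng.uniform(1, 10, size=1)    # latest self-report
    X_scaled = scaler.partial_fit(X_new).transform(X_new)
    model.partial_fit(X_scaled, y_new)
```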