Our software is underpinned by contemporary academic research, and our applications have been comprehensively tested across a wide range of sectors, from aviation and sport to healthcare and defence.
Sam is co-founder and Chief Scientific Officer (CSO) at Cineon and chairs Cineon’s Scientific Advisory Board. He leads the Virtual Immersive Training and Learning (VITAL) research group and the Exeter Immersive research network at the University of Exeter, and co-chairs a NATO research task group exploring XR technology in defence. His research aims to understand the psychology of human performance and learning, using technologies such as eye tracking, psychophysiological measurement, and virtual reality.
His research is applied across a range of domains (e.g., sport, surgery, military, and aviation) and populations (e.g., children, elite performers, and patient groups). Sam has published over 100 original articles in peer-reviewed journals and has been involved in research and innovation grants totalling over £4m.
Her research interests lie in virtual reality technology and its applications in social neuroscience, psychology, training, and therapy.
She was a research fellow in the Hamilton Lab at the Institute of Cognitive Neuroscience (ICN), UCL, and in the Virtual Environments and Computer Graphics (VECG) group in Computer Science, UCL. She received her PhD in virtual reality from UCL in 2009.
Nick is an innovator and researcher in digital healthcare, with 12 years of experience spanning broadcast media and the NHS. He has created cutting-edge virtual reality (VR) healthcare solutions and completed a PhD on compassionate behaviour and technology in medical education and simulation. Along the way, he has developed, delivered, and overseen a range of XR projects, including the nationally recognised, NHSE-supported VR Lab.
He currently co-leads the Digital Futures programme at Torbay and South Devon NHS Foundation Trust, which has been nominated for a parliamentary award. The programme supports research and development projects, digital health fellowships, and digital literacy initiatives, and champions the adoption of XR and other emerging technologies.
Distinct brain networks regulate top-down (goal-directed) and bottom-up (stimulus-driven) attention, and under normal conditions the two interact closely. In stressful or anxiety-inducing situations, top-down control weakens, leading to an increased reliance on bottom-up processing. This shift often manifests as changes in eye behaviour, such as difficulty maintaining focus or being easily distracted by external stimuli.
The technology developed by Cineon builds on this knowledge by continuously monitoring how eye behaviour shifts. By analysing eye movements, it can infer changes in a user’s stress and anxiety levels in real time. This allows for a more personalised experience in digital environments, where the system can adapt to the user’s emotional state, potentially improving comfort, productivity, or focus.
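As a rough sketch of how such a pipeline can fit together (purely illustrative Python; `read_gaze_sample`, `stress_model`, and `adapt_environment` are hypothetical stand-ins, not Cineon's actual interfaces), a real-time loop might window incoming gaze samples, summarise them into features, and let a trained model score the user's state:

```python
from collections import deque

WINDOW_SECONDS = 5.0  # length of the rolling analysis window

def extract_features(samples):
    """Summarise a window of (timestamp, x, y) gaze samples."""
    xs = [s[1] for s in samples]
    ys = [s[2] for s in samples]
    # Gaze dispersion is a crude proxy for how widely attention wanders;
    # wider scatter can indicate weakened top-down control.
    dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
    return {"dispersion": dispersion, "n_samples": len(samples)}

def monitor(read_gaze_sample, stress_model, adapt_environment):
    """Continuously score stress from gaze and adapt the environment."""
    window = deque()
    while True:
        t, x, y = read_gaze_sample()        # e.g. from an eye-tracker SDK
        window.append((t, x, y))
        while window and t - window[0][0] > WINDOW_SECONDS:
            window.popleft()                # keep only recent samples
        stress = stress_model.score(extract_features(list(window)))
        if stress > 0.7:                    # illustrative threshold
            adapt_environment(stress)       # e.g. simplify the scene
```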
This approach has wide applications, from improving the user experience in games and virtual environments to providing stress-management tools in workplace settings.
Decades of cognitive psychology research have shown that stress and anxiety lead to specific changes in attentional control. Under normal conditions, visual attention is guided by both top-down processes (driven by our goals) and bottom-up processes (stimulus-driven by the environment). However, stress disrupts top-down control, making individuals more reactive to bottom-up cues, which is evident in their eye movements—such as an increased tendency to look at distracting or irrelevant stimuli.
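One concrete indicator of this shift is the share of fixation time spent on task-irrelevant stimuli. A minimal sketch of such a metric (the field names here are assumptions for illustration, not a published measure):

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    duration_ms: float
    on_target: bool  # True if the fixation landed on a task-relevant area

def distractor_dwell_ratio(fixations: list[Fixation]) -> float:
    """Share of total fixation time spent on task-irrelevant stimuli.

    Under stress, weakened top-down control tends to push this ratio
    up, as gaze is captured by salient but irrelevant cues.
    """
    total = sum(f.duration_ms for f in fixations)
    if total == 0:
        return 0.0
    distractor = sum(f.duration_ms for f in fixations if not f.on_target)
    return distractor / total
```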
Cineon’s technology builds on this understanding by embedding these findings into mathematical models that track how visual attention patterns shift as cognitive load and emotional states change. By continuously analysing eye movements, the technology provides a real-time assessment of stress and cognitive load, offering actionable insights that can be used to tailor user experiences based on their current mental state.
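At its simplest, such a model might be a weighted combination of attention features squashed into a stress score between 0 and 1. A toy logistic example (the weights are invented for illustration, not fitted parameters):

```python
import math

# Toy model: invented weights for illustration, not fitted parameters.
WEIGHTS = {"distractor_dwell_ratio": 2.5, "saccade_rate_hz": 0.8}
BIAS = -2.0

def stress_score(features):
    """Map attention features to a stress estimate between 0 and 1."""
    z = BIAS + sum(w * features[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

print(stress_score({"distractor_dwell_ratio": 0.6, "saccade_rate_hz": 3.0}))
```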
To deliver accurate estimates of stress and anxiety across diverse settings, our data science team employs machine learning models validated on data collected by our research team.
To ensure that our models are robust and can generalise across different individuals, time periods, and varying attentional demands, we collect rich datasets from volunteers engaging in various digital and virtual environments. This approach captures a wide range of eye-tracking data under different emotional and cognitive states.
Participants provide regular self-reported feedback on their emotional well-being, which is essential for training the machine learning models. By linking eye movement patterns to these emotional reports, the models learn to recognise subtle cues related to stress and anxiety.
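In outline, this reduces to a standard supervised-learning setup: windows of eye-tracking features paired with self-reported labels. A minimal sketch using scikit-learn, with random placeholder data standing in for the real dataset; grouping cross-validation folds by participant is one way to check generalisation to unseen individuals, as described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

# Placeholder data: each row summarises one window of eye-tracking
# features; labels come from participants' self-reports.
X = np.random.rand(200, 4)        # e.g. dwell ratio, saccade rate, ...
y = np.random.randint(0, 2, 200)  # 0 = calm, 1 = stressed (self-report)
participants = np.repeat(np.arange(20), 10)  # 20 people, 10 windows each

# Grouping folds by participant tests whether the model generalises to
# people it has never seen, not just to new windows from the same person.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5),
                         groups=participants)
print(scores.mean())
```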
This ongoing feedback drives the continuous improvement of our core model, the Empathic Learning Engine (ELE), allowing it to refine its ability to accurately assess stress and anxiety in real time. The model adapts to individual variations and evolving contexts, enhancing its performance over time.
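As a sketch of the general pattern behind this kind of per-user adaptation (using scikit-learn's incremental SGDClassifier purely for illustration; this is not the ELE's actual mechanism):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Train a base model on placeholder population data, then refine it
# incrementally as an individual user's self-reports arrive.
base_model = SGDClassifier(loss="log_loss", random_state=0)
base_model.partial_fit(np.random.rand(100, 4),
                       np.random.randint(0, 2, 100),
                       classes=np.array([0, 1]))

def adapt_to_user(model, new_features, new_self_report):
    """Update the model with one new window and its self-reported label."""
    model.partial_fit(new_features.reshape(1, -1),
                      np.array([new_self_report]))
    return model

adapt_to_user(base_model, np.random.rand(4), 1)
```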