ARIA Opportunity Space: Scalable Neural Interfaces
AI for Neural Decoding
We are building brain-computer interfaces that treat neural decoding as a multimodal learning problem, using magnetoencephalography (MEG) recordings and large language models as the basis for a unified foundation model. Our approach pairs brain imaging with everyday sensory data (speech, vision, and language) to train AI systems that achieve state-of-the-art performance on neural decoding tasks. Building on Oxford's research in feature universality and multimodal neural architectures, we are developing a unified framework that maps between human thought and machine representations. This technology could transform how people with communication difficulties interact with the world, while uncovering fundamental insights into how the brain represents and processes information and opening new possibilities for human-computer interaction through direct neural decoding.
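The page does not publish an architecture, but a common recipe for aligning brain recordings with sensory data is CLIP-style contrastive learning: encode MEG trials and their paired stimuli (e.g. speech features) into a shared embedding space, then train so that each trial matches its own stimulus. The sketch below is purely illustrative, not the project's actual model; all dimensions, encoders, and the temperature value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Linear encoder projecting one modality into a shared embedding space,
    with rows normalised to unit length (cosine-similarity geometry)."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Toy, hypothetical sizes: 8 paired trials, 306 MEG channels, 128-d speech features
n, d_meg, d_speech, d_shared = 8, 306, 128, 32
meg = rng.standard_normal((n, d_meg))        # stand-in for MEG trial features
speech = rng.standard_normal((n, d_speech))  # stand-in for paired speech features
W_meg = rng.standard_normal((d_meg, d_shared)) * 0.1
W_speech = rng.standard_normal((d_speech, d_shared)) * 0.1

z_meg = encode(meg, W_meg)
z_speech = encode(speech, W_speech)

# Contrastive objective: each MEG trial should be most similar to its own stimulus,
# i.e. cross-entropy against the diagonal of the similarity matrix.
logits = z_meg @ z_speech.T / 0.07  # temperature-scaled similarities (8 x 8)
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(f"contrastive loss: {loss:.3f}")
```

In practice the linear encoders would be replaced by deep networks and the loss minimised over many paired trials; the point of the sketch is only the shared-embedding alignment that makes a single model span several sensory modalities.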
Mariya holds a PhD in Multimodal Machine Learning from the University of Amsterdam. She served as General Chair for Women in Machine Learning (WiML) at ICML 2025. She has interned at several industry and academic labs, including Google DeepMind's Gemini team, Bloomberg AI, Amazon Alexa, LIIR at KU Leuven, and ETH Zurich. Mariya also serves as a mentor through the Inclusive AI initiative.
Oiwi Parker Jones
Principal Investigator, Oxford Robotics Institute; Hugh Price Fellow in Computer Science
Dr. Oiwi Parker Jones leads the Parker Jones Neural Processing Lab at Oxford, where he combines expertise in machine learning, neuroscience, and linguistics to develop cutting-edge brain-computer interfaces and neural prosthetics. Following his doctoral work in NLP at Oxford and neuroscience training at UCL, he now pioneers large-scale machine learning methods for neural data while maintaining a passion for endangered languages and the mathematical foundations of language processing in the brain.
Philip Torr
Professor of Engineering Science; Five AI/Royal Academy of Engineering Research Chair in Computer Vision and Machine Learning
Philip Torr leads the Torr Vision Group, which applies deep learning techniques to a wide range of topics. He has founded or advised numerous spin-outs, including FiveAI, Onfido, Oxsight, Eigent, DreamTech, Visionary Machines, and CamelAI, and works closely with major technology companies such as Google, Meta, Apple, Microsoft, and Sony.