ARIA Opportunity Space: Scalable Neural Interfaces
AI for Neural Decoding
We're revolutionizing brain-computer interfaces by teaching AI to speak the brain's language across multiple senses, combining magnetoencephalography (MEG) with large language models in a unified foundation model. Our approach pairs brain imaging with everyday sensory data – speech, vision, and thought patterns – to create an AI system that understands neural activity more deeply than ever before, achieving state-of-the-art performance on neural decoding tasks. Building on Oxford's research in feature universality and multi-modal neural architectures, we're developing a unified framework that bridges the gap between human thought and machine understanding. This technology promises to transform how people with communication difficulties interact with the world, while uncovering fundamental insights into how our brains represent and process information – potentially opening new frontiers in human-computer interaction through direct neural decoding.
Oiwi Parker Jones
Principal Investigator, Oxford Robotics Institute; Hugh Price Fellow in Computer Science
Dr. Oiwi Parker Jones leads the Parker Jones Neural Processing Lab at Oxford, where he combines expertise in machine learning, neuroscience, and linguistics to develop cutting-edge brain-computer interfaces and neural prosthetics. Following his doctoral work in NLP at Oxford and neuroscience training at UCL, he now pioneers large-scale machine learning methods for neural data while maintaining a passion for endangered languages and the mathematical foundations of language processing in the brain.
Philip Torr
Professor of Engineering Science; Five AI/Royal Academy of Engineering Research Chair in Computer Vision and Machine Learning