SYNAPTICON: Closed-Loop AI Multimodal Generation from Brain Signals: A BCI Framework Integrating EEG Decoding with LLMs and Transformers
SYNAPTICON is a research prototype at the intersection of neuro-hacking, non-invasive brain-computer interfaces (BCIs), and foundation models, probing new territories of human expression, aesthetics, and AI alignment. Envisioning a cognitive “Panopticon” where biological and synthetic intelligent systems converge, it implements a pipeline that couples temporal neural dynamics with pretrained language representations and operationalizes this coupling in a closed loop for live performance. At its core lies a live “Brain Waves-to-Natural Language-to-Aesthetics” system that translates neural states into decoded speech and then into immersive audiovisual output, shaping altered perceptual experiences and inviting audiences to engage directly with the user’s mind. SYNAPTICON provides a reproducible reference for foundation-model-assisted BCIs, suitable for studies of speech decoding, neuroaesthetics, and human–AI co-creation.
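To make the closed-loop structure concrete, the sketch below outlines one acquisition–decode–render cycle in Python. It is only an illustrative skeleton under stated assumptions, not the SYNAPTICON implementation: the sampling rate, channel count, band definitions, and every function here (acquire_window, decode_to_text, render_audiovisual) are hypothetical stand-ins. A real deployment would read from the OpenBCI acquisition SDK, condition a pretrained transformer on the decoded neural features, and drive an audiovisual engine instead of printing.

```python
"""Minimal sketch of a closed-loop EEG -> language -> aesthetics pipeline.
All names, rates, and stages are illustrative assumptions."""
import time
import numpy as np

FS = 250          # assumed EEG sampling rate (Hz)
N_CHANNELS = 8    # assumed channel count
WINDOW_S = 2.0    # analysis window length in seconds


def acquire_window(n_samples: int, n_channels: int = N_CHANNELS) -> np.ndarray:
    """Stand-in for a live EEG stream; returns synthetic data.
    A real setup would pull samples from the acquisition SDK."""
    return np.random.randn(n_channels, n_samples)


def bandpower_features(eeg: np.ndarray, fs: int = FS) -> dict:
    """Average power in canonical EEG bands, computed from the FFT of each
    channel and averaged across channels."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}
    return {name: float(psd[:, (freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in bands.items()}


def decode_to_text(features: dict) -> str:
    """Placeholder for the decoding + language-model stage. A real system
    would condition a pretrained transformer on decoded speech/semantic
    features; here we simply verbalise the dominant frequency band."""
    dominant = max(features, key=features.get)
    return f"dominant {dominant} activity"


def render_audiovisual(text: str, features: dict) -> None:
    """Placeholder for the aesthetics stage (e.g. messages to an
    audiovisual engine); here it only prints the mapping."""
    print(f"[render] prompt='{text}' alpha={features['alpha']:.2f}")


def closed_loop(n_iterations: int = 5) -> None:
    """One acquisition -> decode -> render cycle per iteration; the rendered
    output in turn shapes the performer's state, closing the loop."""
    n_samples = int(FS * WINDOW_S)
    for _ in range(n_iterations):
        eeg = acquire_window(n_samples)
        feats = bandpower_features(eeg)
        text = decode_to_text(feats)
        render_audiovisual(text, feats)
        time.sleep(WINDOW_S)  # pace the loop to real time


if __name__ == "__main__":
    closed_loop()
```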
Credits
Directed, Produced & Developed:
Albert Barqué-Duran (Albert.DATA)
Technical Managers:
Ada Llauradó & Ton Cortiella
Audio Engineer:
Jesús Vaquerizo (I AM JAS)
Extra Performer:
Teo Rufini
Partners:
Sónar+D
OpenBCI
Interactive Arts & Science Lab (IASlab)
La Salle Barcelona (Universitat Ramon Llull)
BSC (Barcelona Supercomputing Center)
.NewArt { foundation;}
CBC (Center for Brain & Cognition)
Universitat Pompeu Fabra
Departament de Cultura - Generalitat de Catalunya