In vitro neurons learn and exhibit sentience when embodied in a simulated game-world
Highlights
- Improvements in performance or “learning” over time following closed-loop feedback
- Learning observed from both human and primary mouse cortical neurons
- Systems with stimulus but no feedback show no learning
- Dynamic changes observed in neural electrophysiological activity during embodiment
Summary
Integrating neurons into digital systems may enable performance infeasible with silicon alone. Here, we develop DishBrain, a system that harnesses the inherent adaptive computation of neurons in a structured environment. In vitro neural networks from human or rodent origins are integrated with in silico
computing via a high-density multielectrode array. Through
electrophysiological stimulation and recording, cultures are embedded in
a simulated game-world, mimicking the arcade game “Pong.” Applying
implications from the theory of active inference via the free energy
principle, we find apparent learning within five minutes of real-time
gameplay not observed in control conditions. Further experiments
demonstrate the importance of closed-loop structured feedback in
eliciting learning over time. Cultures display the ability to
self-organize activity in a goal-directed manner in response to sparse
sensory information about the consequences of their actions, which we
term synthetic biological intelligence. Future applications may provide
further insights into the cellular correlates of intelligence.
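To make the closed-loop embodiment described above more concrete, here is a minimal, hypothetical Python sketch of the feedback loop: sensory stimulation encodes the ball position, spiking activity in two "motor" regions drives the paddle, and the outcome of each rally determines whether the culture receives predictable or unpredictable feedback, in line with the free-energy framing mentioned in the summary. The `CultureInterface` class, the decoding rule, and all parameters are illustrative assumptions, not the authors' DishBrain implementation; a random-number stub stands in for the multielectrode array so the loop runs end to end.

```python
"""Conceptual sketch of a closed-loop "Pong" embodiment for a neural culture.
All names and parameters are hypothetical stand-ins, not the DishBrain system."""
import random


class CultureInterface:
    """Stand-in for a multielectrode-array read/write interface (hypothetical)."""

    def read_motor_activity(self):
        # Return spike counts from two "motor" electrode regions (up, down).
        # Real hardware would bin recorded spikes here; we return random counts.
        return random.randint(0, 10), random.randint(0, 10)

    def stimulate_sensory(self, ball_y, paddle_y):
        # Encode ball position as patterned electrical stimulation.
        pass  # hardware-specific; omitted in this sketch

    def feedback(self, hit):
        if hit:
            pass  # predictable feedback: brief, regular stimulation burst
        else:
            pass  # unpredictable feedback: random-site, random-amplitude stimulation


def play_rally(culture, steps=600, paddle_half=0.15):
    """Run a simplified Pong loop; the paddle sits along the wall at x = 0."""
    bx, by, vx, vy = 0.5, 0.5, -0.01, 0.007   # ball position and velocity
    paddle_y, hits, misses = 0.5, 0, 0
    for _ in range(steps):
        culture.stimulate_sensory(by, paddle_y)    # "sensory" input: ball position
        up, down = culture.read_motor_activity()   # decode paddle command from spikes
        paddle_y = min(max(paddle_y + 0.02 * (up - down) / 10.0, 0.0), 1.0)
        bx, by = bx + vx, by + vy
        if by <= 0.0 or by >= 1.0:                 # bounce off top/bottom walls
            vy, by = -vy, min(max(by, 0.0), 1.0)
        if bx >= 1.0:                              # bounce off the far wall
            vx, bx = -vx, 1.0
        if bx <= 0.0:                              # ball reaches the paddle side
            hit = abs(by - paddle_y) <= paddle_half
            hits += hit
            misses += not hit
            culture.feedback(hit)                  # closed-loop structured feedback
            vx, bx = -vx, 0.0
    return hits, misses


if __name__ == "__main__":
    print(play_rally(CultureInterface()))
```

The key design point this sketch illustrates is the closed loop itself: the culture's activity changes the game state, and the game state changes the stimulation the culture receives, which is the condition the paper identifies as necessary for learning.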