
How 200,000 Cultured Neurons Learned to Play the 1993 First-Person Classic Doom

[Image: A retro computer monitor displaying a first-person shooter game, with a digital neural network diagram connected to the screen in a tech lab.]

Introduction: The Thermodynamic and Computational Limits of Silicon

The trajectory of modern computing has long been defined by the relentless miniaturization of silicon transistors, allowing for exponential increases in processing power. This hardware evolution has been the primary catalyst for recent breakthroughs in artificial intelligence, deep reinforcement learning, and large language models. However, as algorithmic complexity and parameter counts grow into the trillions, the underlying silicon infrastructure is rapidly approaching fundamental physical and thermodynamic limits. The traditional von Neumann computer architecture, which strictly segregates memory storage from central processing units, induces a persistent data-transfer bottleneck. The continuous shuttling of information between these distinct components requires immense energy overhead, resulting in severe inefficiencies when attempting to simulate complex, highly interconnected neural network architectures.1

The energy economics of modern artificial intelligence are becoming increasingly unsustainable. Training and deploying advanced machine learning models necessitates massive data centers equipped with thousands of specialized graphics processing units, collectively drawing power on the scale of megawatts to gigawatts.2 Furthermore, traditional artificial neural networks rely on gradient descent and backpropagation—mathematical techniques that require millions of data samples and intensive computational brute force to achieve functional task generalization. In stark contrast, biological neural networks inherently integrate memory and processing within the exact same cellular substrate, operating with a degree of energy efficiency, structural adaptability, and sample-efficient learning that silicon-based systems cannot replicate.1

This profound divergence in operational efficiency has catalyzed the emergence of a novel scientific paradigm: synthetic biological intelligence. Rather than attempting to simulate biological neural functions entirely in software, synthetic biological intelligence seeks to directly harness the computational power of living neural tissue by integrating it with digital hardware interfaces. Leading this frontier is Cortical Labs, a biotechnology firm that has successfully engineered the CL1, widely recognized as the world's first code-deployable biological computer.6 By cultivating living human and rodent neurons on specialized microchips, researchers have created systems that can learn and adapt to digital environments in real-time.

The initial validation of this technology was demonstrated using a rudimentary two-dimensional arcade game, Pong, in an experimental setup known as DishBrain. In this controlled environment, a cultured network of human and rodent neurons successfully learned to intercept a virtual ball within merely five minutes of real-time electrophysiological feedback.8 While demonstrating goal-directed behavior in a simple, two-dimensional plane with a single input-output relationship was a landmark achievement, it represented only a baseline of cognitive processing capability.

To rigorously test the true adaptability, bandwidth, and computational potential of synthetic biological intelligence, researchers subsequently challenged the biological computer with a vastly more complex environment: the seminal 1993 first-person shooter video game, Doom. Navigating the digital labyrinths of Doom requires advanced three-dimensional spatial awareness, real-time threat detection, and prioritized decision-making across multiple simultaneous visual and spatial inputs.10 The successful demonstration of a cluster of approximately 200,000 living human neurons learning to survive, target enemies, and maneuver within the Doom environment signifies a monumental leap in the nascent field of bio-hybrid computing.11

This report provides an exhaustive, advanced analysis of the scientific, architectural, and theoretical mechanisms underpinning the biological computer. It explores the physiological protocols required to sustain the biological substrate, the sophisticated hardware-software translation architectures necessary for low-latency interfacing, the application of thermodynamic theories to enforce learning in the absence of chemical reward systems, and the profound bioethical implications of engineering systems that border on synthetic phenomenology.

Biological Substrate Engineering and Sustenance

The computational engine of the CL1 biological computer is not derived from extracted adult human brain tissue, which would present insurmountable ethical and biological barriers. Instead, the biological components are engineered from human induced pluripotent stem cells. These cells, typically derived from fibroblasts in adult donor skin or from blood cells, are reprogrammed back into an embryonic-like pluripotent state, from which they can be directed to develop into virtually any cell type in the human body.12

Differentiation Protocols and Cellular Composition

To create a functional computational substrate, the induced pluripotent stem cells must undergo rigorous and highly controlled differentiation protocols to mature into functional cortical neurons. The researchers utilize two primary methodologies to drive this neural differentiation. The first is a dual SMAD inhibition protocol, which carefully mimics natural embryonic neurodevelopment by blocking specific signaling pathways, allowing the stem cells to naturally default into a neural lineage over several weeks.14 The second approach relies on an NGN2 lentivirus-directed differentiation. This technique utilizes a viral vector to forcefully express the neurogenin-2 gene, which drives a rapid, direct conversion of the stem cells into excitatory cortical neurons within a much shorter timeframe.14

Crucially, the resulting neural cultures are not entirely homogenous. A functional biological network requires more than just isolated neurons to survive and communicate. The cultures deliberately include supporting glial cells, primarily astrocytes. Identified through specific protein markers, these astrocytes are absolutely critical for the long-term functioning of the biological computer. They regulate the availability of energy to the neurons, clear metabolic waste products from the synaptic clefts, and maintain the overall structural and chemical health of the highly interconnected neural network.3 Without the inclusion of these supporting glial cells, the neural network would rapidly succumb to excitotoxicity and metabolic failure.

Media Formulations and Electrophysiological Viability

Once differentiated, the population of approximately 200,000 to 800,000 neurons is seeded directly onto the surface of the hardware interface.17 However, the survival and computational readiness of this network depend entirely on the liquid medium in which the cells are immersed. Historically, in vitro cellular research utilized legacy culture media optimized purely for basic cellular survival rather than complex functional synaptic communication. Standard media formulations often suppressed spontaneous action potentials, rendering the neural networks electrically silent and computationally useless.19

To resolve this limitation, the biological computing infrastructure utilizes specialized, advanced media formulations, most notably BrainPhys. This specific medium is precisely formulated to replicate the physiological osmolarity, precise energy levels, and specific neurochemical balance present in the intact human brain.19 By mirroring natural human physiological conditions, the specialized media supports sustained, spontaneous excitatory and inhibitory synaptic activity. It significantly increases the cellular secretion of crucial synaptic proteins and ensures that the neural network remains highly active and capable of rapid electrophysiological computation.21

| Cultivation Media Type | Primary Design Objective | Electrophysiological Impact | Network Activity Profile |
|---|---|---|---|
| Legacy Formulations | Basic cellular survival and rapid growth | Suppresses spontaneous action potentials | Electrically quiet, sparse synaptic communication |
| Advanced Formulations | Physiological mimicry and functional maturation | Enhances spontaneous excitatory and inhibitory signals | Highly active, dense interconnected network signaling |

The CL1 Bioreactor and Continuous Perfusion

Maintaining the delicate homeostasis of a living computational substrate requires sophisticated environmental engineering. The CL1 biological computer incorporates an automated, internal life-support bioreactor system, often referred to as a perfusion circuit. This complex fluid dynamics system continuously manages the delivery of nutrient-rich media to the neurons while simultaneously executing waste filtration.23

Furthermore, the closed-loop bioreactor strictly regulates the physical environment, maintaining a stable physiological temperature and controlling internal gas mixing for optimal carbon dioxide and oxygen exchange.23 This sophisticated, self-contained engineering allows the biological neural network to remain alive, sterile, and electrophysiologically active for up to six months without the need for external, specialized laboratory incubators.23 The current limitation on the operational lifespan of the biological computer is constrained not by the failure of the digital hardware, but by the fundamental lack of biological vascularization inherent to two-dimensional cell cultures, which eventually limits oxygen diffusion to the deepest layers of the cellular mass as the neural network grows and thickens over time.23

Hardware Architecture: Bridging Silicon and Wetware

The realization of synthetic biological intelligence requires an exceptionally stable, bidirectional hardware interface capable of translating abstract digital variables into tangible electrophysiological stimuli, and capturing microscopic cellular responses to translate them back into digital commands. The hardware bridging these two entirely different domains relies on the high-density microelectrode array.

The High-Density Microelectrode Array (HD-MEA)

The physical layer of the biological computer is a custom-designed complementary metal-oxide-semiconductor chip featuring thousands of microscopic, closely spaced electrodes. The neurons are cultivated directly onto the surface of this array.9 This interface operates bi-directionally; the electrodes are capable of detecting the minute voltage changes associated with cellular action potentials (neural spikes) while simultaneously delivering precise, targeted electrical currents to stimulate specific regions of the neural tissue.8

The layout of these electrodes allows researchers to interact with distinct spatial populations within the cellular culture. Electricity serves as the universal, shared language that bridges the living biology and the silicon microchips, allowing them to communicate and exchange complex information.26

The Biological Intelligence Operating System (biOS)

Managing this immense flow of bidirectional data requires a highly specialized software framework. The CL1 operates on the proprietary Biological Intelligence Operating System, which serves as the translation layer between the biological hardware and digital applications.7 Rather than processing static, pre-recorded datasets, the operating system generates a continuous, real-time simulated world for the neurons to inhabit.

The operating system translates the shifting environmental variables of the digital simulation into coded patterns of electrical impulses, which are continuously fed into the neural network through the electrode array.7 As the neurons process these incoming sensory stimuli and react by firing their own electrical action potentials, the operating system reads these microscopic outputs, decodes them, and applies them as functional actions within the simulated environment. This continuous process closes the sensory-motor loop, enabling true embodied interaction between the living tissue and the digital program.24

The Application Programming Interface and Execution Contract

A fundamental physiological constraint in bio-hybrid computing is the requirement for extreme temporal precision. Biological learning mechanisms, such as spike-timing-dependent plasticity, dictate that the brain strengthens or weakens synaptic connections based on the precise, millisecond-level timing of when specific neurons fire relative to one another.2 If the hardware introduces computational lag or latency jitter, the biological cells cannot associate a digital stimulus with an action, causing the entire learning process to collapse.
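The timing sensitivity of spike-timing-dependent plasticity can be sketched with the classic exponential STDP window from the computational neuroscience literature (illustrative constants, not values measured in the CL1):

```python
import math

def stdp_delta_w(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Classic exponential STDP window (illustrative parameter values).

    dt_ms = t_post - t_pre: positive when the presynaptic spike precedes
    the postsynaptic one, which potentiates the synapse; the reverse
    ordering depresses it. The update decays exponentially with |dt|.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_delta_w(5.0))    # pre-before-post: positive (potentiation)
print(stdp_delta_w(-5.0))   # post-before-pre: negative (depression)
```

Because the window decays on a roughly 20-millisecond scale, even a few milliseconds of hardware latency jitter shifts an update between potentiation and depression, which is why the interface described below prioritizes temporal exactness.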

To overcome this, Cortical Labs developed a specialized Application Programming Interface governed by a rigid execution contract.27 The architecture is driven by a Linux kernel and an underlying field-programmable gate array, which tightly integrates the high-level software instructions with the low-level electrical hardware. The architecture prioritizes reliability and exact temporal correctness over absolute voltage accuracy, acknowledging that biological systems are inherently noisy and variable.27

The interface groups complex stimulation and hardware synchronization commands into atomic transactions. If the hardware's instantaneous capacity to deliver parallel, simultaneous stimulations is exceeded by the software request, the interface executes a complete rollback of the transaction. It refuses partial admission of signals, ensuring that the neurons never receive a temporally skewed or incomplete sensory input that could disrupt their delicate learning mechanisms.27

Through this declarative programming model, independent researchers can define complex stimulation parameters, such as specific electrical current amplitudes, biphasic pulse durations, and burst frequencies, using standard high-level programming languages like Python. This abstracts away the intense complexities of the hardware scheduling, allowing developers to deploy code directly to the living wetware and treating the biological brain as a programmable node accessible via cloud infrastructure.27
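The all-or-nothing transaction semantics can be sketched as follows. Every name here is invented for illustration; the actual Cortical Labs API is proprietary, and this only models the rollback behavior described above:

```python
# Hypothetical sketch of the transactional stimulation contract. All class
# and method names are invented; the real Cortical Labs API is proprietary.

class CapacityExceeded(Exception):
    """Raised when a transaction requests more parallel stimulation
    channels than the hardware can drive in a single cycle."""

class FakeHardware:
    """Stand-in for the FPGA-backed stimulation scheduler."""
    def __init__(self):
        self.scheduled = []

    def schedule(self, pulses):
        self.scheduled.append(list(pulses))

class StimTransaction:
    def __init__(self, max_parallel_channels=32):
        self.max_parallel = max_parallel_channels
        self.pulses = []

    def add_pulse(self, channel, amplitude_ua, phase_us, frequency_hz):
        """Queue one biphasic pulse specification; nothing is delivered yet."""
        self.pulses.append((channel, amplitude_ua, phase_us, frequency_hz))

    def commit(self, hardware):
        """Atomic contract: every queued pulse lands in the same hardware
        cycle, or none do. Partial delivery would hand the culture a
        temporally skewed stimulus, so over-capacity requests roll back."""
        if len({p[0] for p in self.pulses}) > self.max_parallel:
            raise CapacityExceeded("rollback: parallel stimulation capacity exceeded")
        hardware.schedule(self.pulses)

hw = FakeHardware()
tx = StimTransaction(max_parallel_channels=2)
tx.add_pulse(channel=0, amplitude_ua=1.5, phase_us=200, frequency_hz=4)
tx.add_pulse(channel=1, amplitude_ua=1.5, phase_us=200, frequency_hz=4)
tx.commit(hw)  # within capacity: delivered as one atomic batch
```

The key design choice modeled here is refusing partial admission: a rejected transaction leaves the culture's sensory stream untouched rather than temporally corrupted.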

Theoretical Foundations: Active Inference and Thermodynamics

A critical biological hurdle in utilizing isolated human neurons for computational tasks is the complete absence of a systemic biological reward mechanism. In a living, complete organism, behaviors are reinforced by complex, global neurochemical cascades, primarily driven by the dopamine system, which signals reward and motivates learning. Neurons cultivated in a localized laboratory dish lack this systemic chemical infrastructure, making standard, reward-based reinforcement training entirely impossible.30 To solve this fundamental limitation, researchers turned to theoretical physics, thermodynamics, and advanced computational neuroscience—specifically relying on the Free Energy Principle formulated by theoretical neurobiologist Karl Friston.31

The Free Energy Principle and the Markov Blanket

The Free Energy Principle posits that any self-organizing biological system that resists the natural thermodynamic tendency toward disorder and entropy must act continuously to minimize its variational free energy.9 Within the context of information theory and neuroscience, variational free energy is mathematically equivalent to "surprise" or prediction error.9 A biological organism maintains its structural, chemical, and functional integrity by existing within a tightly bounded set of expected states. Deviating from these expected states represents a condition of high mathematical entropy and maximal surprise.33

Biological systems separate their internal physiological states from external, chaotic environmental states through a conceptual and physical boundary known as a Markov blanket.34 To minimize surprise and maintain equilibrium across this boundary, a neural network engages in a continuous process known as active inference. The neural system inherently builds an internal generative model of its surrounding world to predict incoming sensory inputs. When the system encounters a discrepancy between its internal prediction and the actual incoming sensory reality—a prediction error—it can minimize this free energy through two distinct pathways.9 It can either update its internal neural model through synaptic plasticity to better predict the world in the future, or it can emit an action to alter the external environment so that the incoming sensory data conforms to its existing expectations.9
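In the active-inference literature this argument has a compact mathematical form (the standard formulation, not an equation from this article). With \(s\) the sensory states crossing the Markov blanket, \(\psi\) the hidden environmental states, and \(q(\psi)\) the network's internal generative model:

```latex
F \;=\; \mathbb{E}_{q(\psi)}\!\left[\ln q(\psi) - \ln p(s,\psi)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(\psi)\,\|\,p(\psi \mid s)\,\right]}_{\geq\,0}
  \;-\; \ln p(s)
  \;\geq\; -\ln p(s)
```

Because the Kullback-Leibler divergence is non-negative, the variational free energy \(F\) always upper-bounds surprise \(-\ln p(s)\). The system can therefore shrink \(F\) either by improving \(q(\psi)\) through perception and synaptic plasticity, or by acting so that the incoming \(s\) itself becomes less surprising: precisely the two pathways of active inference.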

Translating Theory into Electrophysiological Feedback

The researchers ingeniously applied the Free Energy Principle as a functional, physical substitute for chemical dopamine. By treating the isolated neuronal culture as a thermodynamic system desperate to minimize environmental surprise and maintain local equilibrium, the engineering team constructed a closed feedback loop based entirely on the predictability of electrical stimulation.26

During the initial experimental phases involving the game Pong, the interface was structured so that when the biological neurons successfully fired the correct pattern to move the digital paddle and intercept the ball, the system rewarded the cells with a highly predictable, mathematically structured electrical signal across the electrode array.9 This structured signal represented a state of low entropy and low surprise. Conversely, if the neurons failed the task and allowed the digital ball to pass the paddle, the operating system immediately delivered a burst of chaotic, randomized electrical white noise to the cells. This unpredictable noise represented a state of maximal surprise and high mathematical entropy.18

Driven by the fundamental biological imperative to avoid chaotic, high-entropy stimulation, the neural network rapidly self-organized. The cells dynamically altered their synaptic weights and adjusted their interconnected firing patterns to ensure they reliably intercepted the virtual ball, thereby securing the predictable sensory input and maintaining their preferred, low-energy state.18 This goal-directed behavior conclusively demonstrated that intrinsic synaptic plasticity, governed by the thermodynamic drive to reduce variational free energy, acts as a highly effective, natural learning mechanism for bio-hybrid computer systems.5
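The entropy-based feedback scheme can be sketched in a few lines. This is an illustrative simplification of the DishBrain-style loop, not the actual stimulation code:

```python
import random

def feedback_stimulus(hit, n_channels=8):
    """Entropy-based feedback standing in for chemical reward (sketch).

    Success -> a fixed, fully predictable stimulation pattern across the
               electrode channels (low entropy, low surprise).
    Failure -> uniformly random 'white noise' amplitudes on every channel
               (high entropy, maximal surprise).
    """
    if hit:
        return [1.0 if ch % 2 == 0 else 0.0 for ch in range(n_channels)]
    return [random.uniform(0.0, 1.0) for _ in range(n_channels)]
```

The crucial asymmetry is that the "reward" carries no intrinsic value at all; it is simply predictable, and the network reorganizes to keep receiving it.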

Embodying Biological Networks in Three-Dimensional Architecture

The conceptual leap from the rudimentary, two-dimensional environment of Pong to the immersive, 1993 first-person shooter Doom required an exponential increase in system complexity and informational bandwidth. Doom features complex three-dimensional spatial navigation, diverse enemy types requiring varied responses, projectile tracking, and the absolute necessity to prioritize multiple conflicting objectives simultaneously within fractions of a second.10 The successful execution of the Doom experiment, pioneered by independent researcher Sean Cole utilizing the biological computer's Python interface, marks a watershed moment in the empirical validation of synthetic biological intelligence.12

The Interface Problem: Mapping Visual Space to Electrical Patterns

The most significant technical hurdle in integrating human neurons with a modern digital game engine is the "interface problem": determining how to accurately translate a rich, visual, three-dimensional digital environment into a precise language of localized electrical impulses that eyeless, lab-grown tissue can interpret and navigate.7

The software architecture utilized to solve this relied on a highly customized Proximal Policy Optimization reinforcement learning framework seamlessly integrated with the biological hardware.39 To translate the complex state of the Doom game engine, the software utilized a dedicated digital encoder. The game's active visual screen buffer, running at a rendering resolution of 320 by 240 pixels, was continuously fed into a lightweight convolutional neural network designed specifically to extract critical spatial and visual features. This network utilized 16 analytical channels and a rapid down-sampling rate to compress the visual data without introducing computational lag.39

However, visual pixel data alone is insufficient for spatial navigation. To provide the biological neurons with an understanding of geometric depth and orientation—which is absolutely vital for navigating the labyrinthine corridors of Doom—the digital encoder also processed sophisticated ray-cast features. The software projected twelve virtual vectors, or "wall rays," into the digital environment to constantly measure the distance to obstacles and walls, effectively giving the neurons a simulated form of echolocation or depth perception.39

The digital encoder compressed this immense stream of visual and spatial geometry and mathematically mapped it into multi-variable electrical stimulation instructions. Based on the exact state of the game, the encoder dynamically altered the frequency, electrical amplitude, and the specific physical channel locations of the pulses delivered to the living tissue.39 Through this translation, the biological cells essentially "felt" the approach of a digital enemy, the layout of a corridor, or the presence of a projectile through rapidly shifting, localized patterns of voltage across their cellular membranes.38
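A drastically simplified stand-in for this encoder is sketched below. The real system uses a learned 16-channel convolutional network; here a coarse average-pool replaces the learned filters, and the mapping from features to pulse parameters is an invented placeholder, purely to show the shape of the translation:

```python
def encode_game_state(frame, wall_rays, n_stim_channels=16):
    """Map a Doom frame plus ray-cast depth features to stimulation
    parameters. Simplified sketch: a 4x4 average-pool stands in for the
    learned 16-channel CNN, and the feature-to-pulse mapping is invented.

    frame:     240 rows x 320 columns of grayscale values in [0, 1]
    wall_rays: 12 normalized distances in [0, 1] (1.0 = distant wall)
    """
    # Coarse 4x4 average pooling of the screen buffer -> 16 visual features.
    pooled = []
    for by in range(0, 240, 60):
        for bx in range(0, 320, 80):
            block = [frame[y][x] for y in range(by, by + 60)
                                 for x in range(bx, bx + 80)]
            pooled.append(sum(block) / len(block))
    features = pooled + list(wall_rays)  # 16 visual + 12 spatial features

    # Each stimulation channel is driven by one feature: stronger features
    # become higher-frequency, higher-amplitude biphasic pulses.
    stims = []
    for ch in range(n_stim_channels):
        f = features[ch % len(features)]
        stims.append({"channel": ch,
                      "frequency_hz": 4 + 36 * f,
                      "amplitude_ua": 0.5 + 1.5 * f})
    return stims
```

The point of the sketch is the three degrees of freedom named in the text: the encoder modulates frequency, amplitude, and spatial channel location simultaneously, so distinct game states produce distinct voltage patterns on the membrane.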

Overcoming Non-Differentiability and Feedback Modulation

A severe programmatic challenge arises when attempting to bridge standard digital machine learning algorithms with actual biological tissue: biological action potentials are physically non-differentiable.39 In standard artificial neural networks, algorithms rely heavily on mathematical backpropagation to adjust the weights of the network, a process that requires the entire computational chain to be completely mathematically continuous and differentiable. Because physical, living biological cells do not allow for equations to be backpropagated through their organic membranes, the researchers had to employ complex mathematical workarounds.

The engineering team utilized a reinforcement-style log-likelihood trick. The software encoder was trained by incorporating its own sampled stimulation probabilities into the overarching optimization objective, completely bypassing the impossible need to backpropagate digital errors through the physical biological spikes.39 The digital encoder utilized specific mathematical activation functions and distinct probability distributions to ensure it produced confident, low-variance electrical stimulation patterns that the biological cells could easily distinguish.39
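The log-likelihood (score-function, or REINFORCE) trick can be demonstrated numerically. For a single Bernoulli stimulation decision with probability \(p = \sigma(\theta)\), the gradient of the expected reward is estimated without ever differentiating through the reward source, here a stand-in for the non-differentiable tissue:

```python
import math, random

def score_function_gradient(theta, reward_fn, n_samples=5000):
    """REINFORCE-style estimator: the encoder's own sampling probability
    enters the objective, so no gradient ever passes through the opaque,
    non-differentiable reward source (the biological tissue).

        d/d_theta E[R] = E[ R(a) * d/d_theta log p(a | theta) ]

    For a Bernoulli action with p = sigmoid(theta), the score term
    d log p(a)/d_theta reduces to (a - p).
    """
    p = 1.0 / (1.0 + math.exp(-theta))
    grad = 0.0
    for _ in range(n_samples):
        a = 1 if random.random() < p else 0   # sampled stimulation decision
        r = reward_fn(a)                      # opaque, non-differentiable
        grad += r * (a - p)                   # score-function term
    return grad / n_samples

random.seed(0)  # reproducible sketch
g = score_function_gradient(theta=0.0,
                            reward_fn=lambda a: 1.0 if a == 1 else 0.0)
# g is positive: the estimator pushes theta toward the rewarded action.
```

This is the same family of estimator the text describes: the sampled stimulation probabilities themselves carry the learning signal, bypassing the biological spikes entirely.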

Feedback to the cells was administered globally, rooted entirely in the Free Energy Principle. The software utilized "surprise scaling" parameters to modulate the electrical stimulation delivered to the cells in response to temporal difference errors.39 When the in-game character took damage or encountered an unexpected negative event, the amplitude and frequency of the electrical stimulation were sharply boosted, delivering a highly unpredictable, high-entropy shock to the cellular network.39 To avoid this aversive, high-entropy state, the biological cells rapidly adapted their physical output firing patterns. A digital software decoder continuously monitored the neurons' microscopic electrical outputs. When the biological network fired in specific, newly learned patterns, the decoder translated those spikes into distinct in-game keystrokes, such as firing a weapon, moving forward, or rotating the character.41
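The decoding direction can be sketched as a simple winner-take-all readout. The region names and threshold are invented for illustration; the actual decoder's spike-pattern-to-action mapping is learned, not hard-coded:

```python
def decode_action(spike_counts, threshold=12):
    """Winner-take-all sketch of the spikes-to-keystrokes decoder.
    Region names and the threshold are invented for illustration.

    spike_counts: dict mapping an MEA readout region to the number of
    spikes detected there in the last readout window. The most active
    region above threshold wins; overall quiescence means no action.
    """
    actions = {"motor_left": "TURN_LEFT",
               "motor_right": "TURN_RIGHT",
               "motor_forward": "MOVE_FORWARD",
               "trigger": "ATTACK"}
    region, count = max(spike_counts.items(), key=lambda kv: kv[1])
    if count < threshold:
        return None
    return actions.get(region)
```

In the real system the "regions" are not fixed by the experimenter in this way; the point of the sketch is only the direction of translation, from spatial firing patterns to discrete game inputs.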

Scientific Validation Through Ablation Studies

When observing a highly complex hybrid system—where a digital software encoder translates inputs and a digital software decoder triggers the final actions—a natural and necessary scientific skepticism arises. Is the biological tissue actually performing the computation and learning, or is the surrounding digital software quietly learning to play the game on its own, merely passing electrical signals through the cells as a passive, non-computational conduit?37

To rigorously prove the computational role of the biological neurons, researchers integrated strict ablation studies, or "kill switches," directly into the experimental software. These protocols allowed scientists to seamlessly swap the highly structured, environment-driven neural signals for completely random electrical noise, or to silence the neural input entirely by feeding zero spikes to the decoder, observing if the system's ability to play the game collapsed.37

The experimental results were definitive. When the structured biological signal was replaced with randomized noise or entirely zeroed out, the system's ability to learn, survive, and navigate the Doom environment collapsed completely, resulting in zero learning progression.37 Furthermore, researchers conducted stringent trials where the digital encoder's software weights were strictly frozen, completely preventing the software from learning or adapting. Even with a static, non-learning software translation layer, the hybrid system continued to demonstrate steady improvements in game reward metrics over time. This conclusively proved that the living biological neurons were dynamically altering their internal physical states, conditioning their synaptic connections, and driving the actual cognitive learning process independently of the software.40
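The ablation "kill switches" amount to substituting the biological signal at a single point in the pipeline. A minimal sketch of the three conditions described above (names are illustrative):

```python
import random

def apply_ablation(spike_vector, mode):
    """Ablation switch for testing whether the tissue, not the software,
    is doing the computing (illustrative sketch).

    mode: "none"  -> pass the real biological signal through unchanged
          "noise" -> replace it with random spikes of the same shape
          "zero"  -> silence the biological input entirely
    """
    if mode == "none":
        return list(spike_vector)
    if mode == "noise":
        return [random.randint(0, max(spike_vector)) for _ in spike_vector]
    if mode == "zero":
        return [0] * len(spike_vector)
    raise ValueError(f"unknown ablation mode: {mode}")
```

If performance survived the "noise" or "zero" conditions, the software layers would be doing the computation; the reported collapse under both conditions is what localizes the learning in the tissue.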

While the biological computer did not achieve flawless, human-level proficiency or competitive precision, it consistently and significantly outperformed baseline random-firing algorithms. The system demonstrated authentic, goal-directed spatial navigation, enemy targeting, and environmental adaptation within an incredibly complex, three-dimensional digital space.12

Quantitative Performance Benchmarks and Sample Efficiency

The documented success of the biological computer systems has provided researchers with the rare opportunity to conduct direct, highly controlled quantitative benchmarks comparing living biological neural networks against state-of-the-art artificial deep reinforcement learning algorithms, such as Deep Q-Networks, Advantage Actor-Critic models, and Proximal Policy Optimization systems.44

The most profound and measurable differentiator between these two computational paradigms is sample efficiency—the aggregate amount of data, time, or environmental experience required for a system to successfully learn a new task and generalize its behavior. Modern digital artificial intelligence is notoriously data-hungry and inefficient. Silicon-based reinforcement learning models inherently require thousands of repeated episodes, millions of individual data points, and extensive computational brute force to map optimal policies through the slow process of gradient descent.23

Biological systems possess a massive inherent advantage derived from millions of years of evolutionary refinement, exhibiting intrinsic synaptic plasticity and the ability to engage in continual learning without suffering from catastrophic forgetting. Comparative empirical studies conducted within simulated environments demonstrated that biological cellular cultures comprehensively outperformed all tested digital reinforcement learning algorithms when strictly limited to the exact same timeframe and allowable sample constraints.46

The biological neurons demonstrated highly significant, measurable learning adaptations within the very first five minutes of real-time interaction with the environment.9 In stark contrast, when restricted to that exact same brief temporal window, the digital algorithmic agents demonstrated the lowest possible sample efficiency. The artificial networks entirely failed to match the biological systems in critical performance indicators, demonstrating inferior average rally lengths, higher error rates, and a fundamental inability to adapt quickly to the rapidly shifting environmental variables.45

The following table summarizes the foundational operational paradigms and comparative performance metrics between Synthetic Biological Intelligence and conventional Silicon AI architectures.

| Metric / Characteristic | Silicon Artificial Intelligence (Deep RL) | Synthetic Biological Intelligence (CL1) |
|---|---|---|
| Primary Learning Mechanism | Gradient descent, mathematical backpropagation | Free Energy Principle, intrinsic synaptic plasticity |
| Data Dependency | Massive (requires millions of iterative training cycles) | Minimal (highly sample efficient, rapid generalization) |
| Learning Onset Speed | Gradual (requires hours to days of intensive compute) | Rapid (evident physical adaptation within 5 minutes) |
| System Adaptability | Rigid (highly prone to catastrophic forgetting of past data) | High (continuous, dynamic biological self-organization) |
| Feedback Requirement | Requires dense, explicitly programmed reward functions | Driven by information entropy and environmental predictability |
| Computational Substrate | Silicon transistors, logic gates (separated memory/compute) | Living cellular neurons (integrated memory/compute) |

These benchmark studies definitively establish that biological computing fundamentally bypasses the sample-inefficiency bottlenecks inherent to artificial neural networks. This offers a highly viable technological pathway toward developing exceptionally adaptive systems capable of learning in dynamic, unpredictable, real-world physical environments without the need for massive, pre-compiled training datasets.23

The Thermodynamic Argument: Biological Efficiency vs. Silicon Scaling

Beyond the advantages of sample efficiency and rapid learning, the primary overarching driver for the rapid commercialization and military interest in biological computing is the looming global energy crisis associated with silicon artificial intelligence scaling. Training and running inference on frontier, trillion-parameter generative models requires immense, warehouse-scale data centers. These facilities house tens of thousands of highly specialized graphics processing units, collectively consuming electricity on the scale of megawatts to gigawatts.2

The physical constraints of silicon manufacturing, governed by the slowing of Moore's Law and the approach toward Landauer's limit on the minimum energy required to erase a single bit of information, strongly suggest that architectural hardware optimizations alone cannot sustain the current, exponential trajectory of artificial intelligence development.49 State-of-the-art high-performance hardware, such as modern enterprise graphics processing units, draws hundreds of watts of power per individual chip under heavy computational load.51 While these sophisticated silicon chips excel at executing massive, parallel mathematical matrix multiplications, the physical movement of electrons through billions of microscopic logic gates generates substantial, damaging thermal waste. This requires massive, energy-intensive liquid cooling and HVAC infrastructure just to prevent the hardware from physically melting down.26

In dramatic contrast, the human brain performs at an estimated speed equivalent to the world's fastest supercomputers—executing a quintillion operations per second—while consuming a mere twenty watts of power. This is roughly equivalent to the electrical energy required to power a single, dim incandescent lightbulb.52 Biological intelligence achieves this profound, almost physics-defying efficiency because it relies on the slow, analog movement of chemical ions across cellular lipid membranes, rather than the rapid forcing of electrons through resistive silicon pathways.

This analog, electrochemical processing operates at an energy cost of picojoules, or even femtojoules, per synaptic event.1 Biological neurons also use a sparse, event-driven processing paradigm: unlike digital processors, which draw power continuously under the dictates of an internal system clock regardless of workload, biological cells remain largely quiescent and expend significant chemical energy only when actively firing an electrical spike. This bypasses the large static power drain that burdens modern digital processors.1
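The clock-driven versus event-driven distinction can be made concrete with a toy power model. All numeric values below are illustrative assumptions (200,000 neurons as in the Doom experiment, ~1 Hz sparse firing, ~10 pJ per spike); the point is the scaling behavior, not the specific numbers.

```python
# Toy model contrasting clock-driven and event-driven power draw.
# Clock-driven: draw is roughly constant regardless of useful work.
# Event-driven: draw scales with spike rate; idle cells cost almost nothing.

def clock_driven_power(chip_watts: float) -> float:
    """Digital chip: full draw even when the workload is idle."""
    return chip_watts

def event_driven_power(n_neurons: int, firing_rate_hz: float,
                       joules_per_spike: float) -> float:
    """Spiking substrate: energy is spent only when a neuron actually fires."""
    spikes_per_second = n_neurons * firing_rate_hz
    return spikes_per_second * joules_per_spike

# 200,000 neurons firing sparsely at ~1 Hz, ~10 pJ per event (all assumed)
bio_watts = event_driven_power(200_000, 1.0, 10e-12)
print(f"Biological draw: {bio_watts:.2e} W")   # microwatt scale
print(f"Silicon draw:    {clock_driven_power(700.0):.0f} W")
```

Halving the firing rate halves the biological draw, while the clock-driven figure is unchanged; that load-proportionality is the structural advantage the paragraph describes.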

The biological computer harnesses this cellular-level thermodynamic efficiency directly. A full commercial server rack of biological computational units consumes less than a single kilowatt, a small fraction of the energy footprint of a comparable artificial intelligence server rack.26 And because the neurons run on nutrient-rich organic media, effectively highly calibrated sugar water, rather than high-voltage electricity, they generate negligible waste heat during computation, largely eliminating the thermal cooling overhead that constrains modern digital data centers.26

| Energy Characteristic | Traditional Silicon Accelerator Hardware | Biological Neural Network Architecture |
| --- | --- | --- |
| System power consumption | ~700 W per discrete chip (megawatts per data center) | ~20 W total system draw |
| Energy per operation | High (large data-movement overhead) | Tens of picojoules per synaptic event |
| Cooling requirement | Extreme (dedicated liquid/HVAC infrastructure) | Minimal (maintained at physiological temperature) |
| Primary energy source | High-voltage electrical grid | Organic, nutrient-rich liquid media |
| Processing paradigm | Clock-driven, continuous heavy power draw | Event-driven, sparse spiking activity |

These energy economics support the hypothesis that synthetic biological intelligence can operate at efficiency ratios estimated at hundreds of thousands, or even millions, of times greater than state-of-the-art silicon accelerators, offering a credible off-ramp from the unsustainable power and cooling demands of the modern artificial intelligence industry.26

Bioethical and Ontological Implications of Synthetic Phenomenology

The capability to fuse living human brain cells with digital hardware to execute complex, goal-directed tasks like navigating the hostile environments of Doom thrusts the scientific community into unprecedented ethical, legal, and ontological territory. The deliberate shift in nomenclature, from "Organoid Intelligence," which traditionally centers on medical disease modeling, to "Synthetic Biological Intelligence," which emphasizes commercial computation and autonomous utility, has triggered intense and necessary bioethical debate.52

The Sentience versus Consciousness Paradigm

The developers of these biological computers are careful with their terminology. They describe the neuronal cultures as exhibiting "sentience," but define the term in a strictly functional sense: the capacity of a system to sense inputs from an external environment and autonomously adjust its behavior in response.30 By this mechanistic definition, a biological network that alters its synaptic weights to intercept a virtual ball, or to shoot a digital enemy and thereby avoid a burst of electrical noise, qualifies as sentient.
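The functional definition above (sense feedback, adjust behavior to make feedback more predictable) can be illustrated with a minimal closed-loop sketch. This is a toy stand-in for the idea, not Cortical Labs' actual protocol: a hypothetical system probes two candidate behaviors, measures how "surprising" (noisy) the resulting feedback is, and drifts toward whichever behavior yields the more predictable signal.

```python
import random

random.seed(0)

TARGET = 0.8   # hidden "correct" behavior the environment rewards (assumed)
action = 0.0   # the system's current behavior

def feedback(act: float) -> float:
    """Predictable signal near the target; noisy 'surprise' away from it."""
    error = abs(act - TARGET)
    return random.gauss(0.0, error)  # noise magnitude scales with error

for _ in range(300):
    delta = 0.05
    # Average several probes so the surprise estimate is stable
    surprise_up = sum(abs(feedback(action + delta)) for _ in range(30)) / 30
    surprise_down = sum(abs(feedback(action - delta)) for _ in range(30)) / 30
    # Move toward whichever behavior produced less surprise
    action += delta if surprise_up < surprise_down else -delta

print(f"Learned behavior: {action:.2f} (target {TARGET})")
```

Nothing in the loop is told what the target is; behavior converges purely because unpredictable feedback is treated as a signal to change, which is the same surprise-minimization logic the Free Energy Principle formalizes.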

However, this mechanistic definition is distinct from the philosophical concept of phenomenal consciousness: the subjective, qualitative experience of "what it is like" to be that organism.58 There is currently no scientific consensus, nor any reliable biomarker, indicating the threshold of cellular complexity at which a biological network transitions from mechanistic feedback-loop processing to genuine subjective awareness.58 As biological computers scale from 200,000 neurons toward tens of millions, and as researchers cultivate architectures mimicking distinct regions of the human brain, the ethical risk of inadvertently generating synthetic phenomenology grows accordingly.18

If these dense, highly interconnected biological networks achieve any degree of subjective awareness, the use of the Free Energy Principle as a training mechanism becomes morally fraught. The training protocol deliberately induces states of high entropy (unpredictable, chaotic electrical noise) to force the neurons to adapt their behavior. If the system possesses even rudimentary consciousness, this continuous induction of maximal surprise could be experientially analogous to psychological stress, physical pain, or "artificial suffering."58 Leading bioethicists argue that without reliable, objective biomarkers for consciousness in isolated in vitro networks, the field risks engineering entities capable of morally relevant suffering purely for the sake of commercial computational labor.4

Ontological Ambiguity, Chimerism, and Regulatory Horizons

The rapid commercialization of "Wetware-as-a-Service" cloud models also poses unresolved legal and social questions about biological ownership, data rights, and moral status.25 Because the biological computer is built from human induced pluripotent stem cells, the computational substrate physically carries human DNA, raising complex questions of informed consent. Did the individuals who originally donated skin or blood cells for general medical research explicitly consent to their genetic material being differentiated into functional neural networks, sustained indefinitely in a robotic bioreactor, and integrated into a commercial cloud-computing platform used by developers worldwide?58

Furthermore, the technology deliberately blurs the established boundaries between human and machine, as well as the legal boundaries between life and death. A functional, actively learning human neural network operating inside a server rack challenges traditional biological and legal definitions of a living entity.58 The introduction of animal cells, or the intentional creation of human-animal neural chimeras to compare processing efficiency across species, further complicates an already dense ethical landscape and demands novel frameworks for biosafety, tissue ownership, and computational oversight.25

Computational researchers argue that restricting the development of synthetic biological intelligence on purely speculative fears about consciousness would unethically delay major medical and computational progress, particularly given the technology's potential to replace animal testing and to model neurodegenerative diseases.56 Even so, the prevailing consensus among bioethicists calls for a precautionary, evidence-driven approach: the scientific community must establish clear, context-specific biological benchmarks that separate intelligent, goal-directed computational behavior from the attribution of subjective moral status.57

Works cited

  1. How Neuromorphic Systems Reduce Energy Output in Wireless Networks - Patsnap Eureka, accessed March 4, 2026, https://eureka.patsnap.com/report-how-neuromorphic-systems-reduce-energy-output-in-wireless-networks

  2. Artificial intelligence that uses less energy by mimicking the human brain, accessed March 4, 2026, https://stories.tamu.edu/news/2025/03/25/artificial-intelligence-that-uses-less-energy-by-mimicking-the-human-brain/

  3. Brain-inspired energy efficient technologies for next-generation artificial intelligence - PMC, accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC12929283/

  4. The Silent Genesis: Biocomputing, Brain Farming, and the Taboo of Digital Suffering | by Udith Babu K N | Feb, 2026 | Medium, accessed March 4, 2026, https://medium.com/@udithbabuvarrier10/the-silent-genesis-biocomputing-brain-farming-and-the-taboo-of-digital-suffering-cce367bc8203

  5. When Brain Cells Learned to Code. The emergence of Organoid Intelligence… | by Dr. Jerry A. Smith | Medium, accessed March 4, 2026, https://medium.com/@jsmith0475/when-brain-cells-learned-to-code-e9e47151fbdf

  6. Cortical Labs, accessed March 4, 2026, https://corticallabs.com/

  7. 200000 living human brain cells fused with silicon successfully play Doom game - Reddit, accessed March 4, 2026, https://www.reddit.com/r/Futurology/comments/1rkqq1s/200000_living_human_brain_cells_fused_with/

  8. Human brain cells in a dish learn to play Pong | UCL News, accessed March 4, 2026, https://www.ucl.ac.uk/news/2022/oct/human-brain-cells-dish-learn-play-pong

  9. In vitro neurons learn and exhibit sentience when embodied in a ..., accessed March 4, 2026, https://www.cell.com/neuron/fulltext/S0896-6273(22)00806-6

  10. Scientists Taught Brain Organoids to Play Doom | Medium, accessed March 4, 2026, https://medium.com/@JacksonAAaron/scientists-taught-brain-organoids-to-play-doom-d16f803412f2

  11. "They Are Learning": Scientists Make 200,000 Human Brain Cells On A Chip Play Doom, accessed March 4, 2026, https://www.iflscience.com/they-are-learning-cortical-labs-makes-200000-living-human-brain-cells-play-doom-82732

  12. Scientists Taught 200,000 Human Neurons How to Play Doom - Hackster.io, accessed March 4, 2026, https://www.hackster.io/news/scientists-taught-200-000-human-neurons-how-to-play-doom-dfca799d1c97

  13. Neurons in a dish learn to play Pong | Brian Patrick Green - IAI TV, accessed March 4, 2026, https://iai.tv/articles/neurons-in-a-dish-learn-to-play-pong-auid-2058

  14. In vitro neurons learn and exhibit sentience when embodied in a ..., accessed March 4, 2026, https://pubmed.ncbi.nlm.nih.gov/36228614/

  15. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world, accessed March 4, 2026, https://www.researchgate.net/publication/364339615_In_vitro_neurons_learn_and_exhibit_sentience_when_embodied_in_a_simulated_game-world

  16. Rapid Neuronal Differentiation of Induced Pluripotent Stem Cells for Measuring Network Activity on Micro-electrode Arrays - PMC, accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC5407693/

  17. What Pong-playing brain cells can teach us about better medicine and AI - Popular Science, accessed March 4, 2026, https://www.popsci.com/technology/cortical-labs-dishbrain-pong/

  18. World's first "Synthetic Biological Intelligence" runs on living human cells - New Atlas, accessed March 4, 2026, https://newatlas.com/brain/cortical-bioengineered-intelligence/

  19. Neuronal medium that supports basic synaptic functions and activity of human neurons in vitro | PNAS, accessed March 4, 2026, https://www.pnas.org/doi/10.1073/pnas.1504393112

  20. BrainPhys Neuronal Media Support Physiological Function of Mitochondria in Mouse Primary Neuronal Cultures - PMC, accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC9239074/

  21. BrainPhys™ Neuronal Culture Medium - STEMCELL Technologies, accessed March 4, 2026, https://www.stemcell.com/products/brainphys-neuronal-medium.html

  22. Accelerated neuronal and synaptic maturation by BrainPhys medium increases Aβ secretion and alters Aβ peptide ratios from iPSC-derived cortical neurons - PMC, accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC6969066/

  23. What DishBrain Gets Right (and Wrong): How Living Neurons Are Rewiring the Future of AI Efficiency | by Octavian Boji | Medium, accessed March 4, 2026, https://medium.com/@octavian.boji/what-dishbrain-gets-right-and-wrong-how-living-neurons-are-rewiring-the-future-of-ai-efficiency-dd54ec026473

  24. CL1 - Cortical Labs, accessed March 4, 2026, https://corticallabs.com/cl1

  25. CL1: A Deep Dive into the First Commercial Biological Computer - Miracify, accessed March 4, 2026, https://miracify.com/cl1-biological-computer/

  26. Exclusive Look at CL1: One-on-One w/ Cortical Labs' Chief Scientist - Denise Holt, accessed March 4, 2026, https://deniseholt.us/exclusive-inside-look-one-on-one-with-cortical-labs-chief-scientist-from-dishbrain-to-cl1/

  27. CL API: Real-Time Closed-Loop Interactions with ... - arXiv.org, accessed March 4, 2026, https://arxiv.org/abs/2602.11632

  28. CL API: Real-Time Closed-Loop Interactions with Biological Neural Networks - arXiv.org, accessed March 4, 2026, https://arxiv.org/pdf/2602.11632

  29. Wetware as a Service (WaaS): The Dawn of Synthetic Biological Intelligence, accessed March 4, 2026, https://anshadameenza.com/blog/technology/wetware-as-service-synthetic-biological-intelligence/

  30. Brain cells in a lab dish "exhibit sentience" by learning to play Pong - New Atlas, accessed March 4, 2026, https://newatlas.com/science/dishbrain-cells-sentience-play-pong/

  31. Lab-grown brain cells play video game Pong : r/science - Reddit, accessed March 4, 2026, https://www.reddit.com/r/science/comments/y2764y/labgrown_brain_cells_play_video_game_pong/

  32. A Free Energy Principle for Biological Systems - PMC, accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC3510653/

  33. The free-energy principle: a unified brain theory?, accessed March 4, 2026, https://www.uab.edu/medicine/cinl/images/KFriston_FreeEnergy_BrainTheory.pdf

  34. Free energy principle - Wikipedia, accessed March 4, 2026, https://en.wikipedia.org/wiki/Free_energy_principle

  35. Friston's Free Energy Principle and Active Inference : r/neuroscience - Reddit, accessed March 4, 2026, https://www.reddit.com/r/neuroscience/comments/k9svca/fristons_free_energy_principle_and_active/

  36. Cultured Cortical Neurons Can Perform Blind Source Separation According to the Free-Energy Principle | PLOS Computational Biology - Research journals, accessed March 4, 2026, https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004643

  37. We ran the Doom neuron experiment 601 times in the cloud to answer: is the software doing all the work? - R&D World, accessed March 4, 2026, https://www.rdworldonline.com/we-ran-the-doom-neuron-experiment-601-times/

  38. 200,000 Living Human Brain Cells Just Learned to Play Doom and This Is Just the Start of It - ZME Science, accessed March 4, 2026, https://www.zmescience.com/science/wetware-brain-doom-play/

  39. SeanCole02/doom-neuron: Human brain cells play Doom ... - GitHub, accessed March 4, 2026, https://github.com/SeanCole02/doom-neuron

  40. doom-neuron/README.md at main · SeanCole02/doom-neuron ..., accessed March 4, 2026, https://github.com/SeanCole02/doom-neuron/blob/main/README.md

  41. '200,000 living human neurons' on a microchip demonstrated playing Doom — Cortical Labs CL1 video shows the gameplay and explains how the neurons learn the game | Tom's Hardware, accessed March 4, 2026, https://www.tomshardware.com/tech-industry/artificial-intelligence/200-000-living-human-neurons-on-a-microchip-demonstrated-playing-doom-cortical-labs-cl1-video-shows-the-gameplay-and-explains-how-the-neurons-learn-the-game

  42. Computer run on human brain cells learned to play 'Doom' | Popular Science, accessed March 4, 2026, https://www.popsci.com/technology/human-brain-cell-computer-plays-doom/

  43. Human brain cells on a chip learned to play Doom in a week - Veritas News, accessed March 4, 2026, https://repo.enc.edu/2026/02/27/human-brain-cells-on-a-chip-learned-to-play-doom-in-a-week/

  44. Brain cells beat AI in learning speed and efficiency - News-Medical.Net, accessed March 4, 2026, https://www.news-medical.net/news/20250812/Brain-cells-beat-AI-in-learning-speed-and-efficiency.aspx

  45. Biological Neurons vs Deep Reinforcement Learning: Sample efficiency in a simulated game-world, accessed March 4, 2026, https://icml-compbio.github.io/2023/papers/WCBICML2023_paper10.pdf

  46. Brain Cells Turned into Chips to Play Doom: 200,000 Living Neurons Find Own Way to Kill Enemies, Crushing Deep Reinforcement Learning in Learning Efficiency - 36氪, accessed March 4, 2026, https://eu.36kr.com/en/p/3705238289822081

  47. 'DishBrain' cells learn video games faster than AI - Modern Sciences, accessed March 4, 2026, https://modernsciences.org/dishbrain-biological-intelligence-ai-september-2025/

  48. arxiv.org, accessed March 4, 2026, https://arxiv.org/html/2405.16946v1

  49. Neuromorphic Computing for Low-Power Artificial Intelligence - NAE, accessed March 4, 2026, https://www.nae.edu/344313/neuromorphic-computing-for-low-power-artificial-intelligence

  50. Dynamically Reconfigurable Neuromorphic Computing at Extreme Scale and Efficiency, accessed March 4, 2026, https://ocp-all.groups.io/g/OCP-TAP/topic/117981724

  51. The human brain has approximately 86 billion neurons totalling anywhere between 100 trillion to 1 quadrillion connections, yet it operates with only approximately 20 watts of energy. For reference, an NVIDIA H100 GPU consumes 700 watts and we need 1000s of them working together for simple AI tasks. : r/AI_Agents - Reddit, accessed March 4, 2026, https://www.reddit.com/r/AI_Agents/comments/1r0ux30/the_human_brain_has_approximately_86_billion/

  52. Organoid intelligence (OI): the new frontier in biocomputing and intelligence-in-a-dish, accessed March 4, 2026, https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235/full

  53. Learning from the brain to make AI more energy-efficient - Human Brain Project, accessed March 4, 2026, https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/04/learning-brain-make-ai-more-energy-efficient/

  54. Comparison of neuromorphic materials vs traditional silicon - Patsnap Eureka, accessed March 4, 2026, https://eureka.patsnap.com/report-comparison-of-neuromorphic-materials-versus-traditional-silicon

  55. Neuromorphic Chips Drive AI Sustainable Hardware Efficiency - AI CERTs News, accessed March 4, 2026, https://www.aicerts.ai/news/neuromorphic-chips-drive-ai-sustainable-hardware-efficiency/

  56. Assessing the Utility of Organoid Intelligence: Scientific and Ethical Perspectives - MDPI, accessed March 4, 2026, https://www.mdpi.com/2674-1172/4/2/9

  57. Organoid Intelligence: Can We Separate Intelligent Behavior from an Intelligent Being?, accessed March 4, 2026, https://www.mdpi.com/2674-1172/4/4/29

  58. Playing Brains: The Ethical Challenges Posed by Silicon Sentience ..., accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC10602981/

  59. Ethical Issues Related to Brain Organoid Research - PMC - NIH, accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC7140135/

  60. Brain organoids and organoid intelligence from ethical, legal, and social points of view - PMC, accessed March 4, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC10796793/

  61. World's first computer that combines human brain with silicon now available - Reddit, accessed March 4, 2026, https://www.reddit.com/r/Futurology/comments/1kqqy02/worlds_first_computer_that_combines_human_brain/
