AlterEgo: How Researchers Taught Wearables to Read Silent Speech
- Bryan White
- Jan 2
- 15 min read

Abstract
The history of computing is fundamentally a history of the Input/Output (I/O) bottleneck. While the computational processing power of silicon has followed Moore’s Law, exponentially increasing in capacity, the bandwidth of the human link to these machines has remained tethered to the mechanical speed of typing fingers and the acoustic limitations of speech. This report presents an exhaustive analysis of AlterEgo, a peripheral myoneural interface developed at the MIT Media Lab and commercialized in 2025. Unlike invasive Brain-Computer Interfaces (BCIs) that require surgical intervention, AlterEgo intercepts the neuromuscular signals of "internal articulation"—the silent, volitional formation of words—from the surface of the skin. By combining high-density surface Electromyography (sEMG) with advanced deep learning architectures (CNN-CTC pipelines), the system achieves a silent, high-bandwidth communication channel that is subjectively experienced as an internal cognitive extension. This deep dive explores the historical lineage of Intelligence Augmentation (IA), the precise neuroanatomy of subvocalization, the signal processing challenges of non-invasive sensors, the machine learning architectures required to decode silent speech, and the ethical landscape of a world where human and machine cognition coalesce without a sound.
1. Introduction: The Bandwidth Bottleneck and the Second Self
The modern human exists in a state of divided attention. We inhabit a physical reality composed of atoms, social interactions, and sensory inputs, while simultaneously maintaining a continuous tether to a digital reality composed of bits, global networks, and artificial intelligence. The interface between these two worlds, however, is fraught with friction. To access the digital realm, one must usually disengage from the physical: we look down at smartphone screens, severing eye contact; we type on keyboards, occupying our hands; or we speak voice commands aloud, disrupting the acoustic environment and sacrificing privacy.
This friction constitutes the "I/O Bottleneck." While a computer can process information at gigahertz speeds, the human user can only input data at the speed of typing (roughly 40 words per minute on a mobile device) or speaking (roughly 150 words per minute). More critically, the modality of this input is external. The device remains a tool—an object separate from the self.
1.1 The Genesis of AlterEgo
The AlterEgo project, originating from the Fluid Interfaces Group at the MIT Media Lab under the direction of Professor Pattie Maes and researcher Arnav Kapur, sought to dismantle this separation.1 The project’s philosophical core is not merely to create a better keyboard, but to achieve "Cognitive Coalescence".2 The goal is an interface that is so seamless, non-invasive, and internal that the user ceases to distinguish between their own biological cognition and the artificial intelligence augmenting it.
In 2018, the project debuted as a research prototype—a 3D-printed wearable that hooked over the ear and jaw, capable of reading "silent speech".3 By September 2025, the project had evolved into a commercial entity, AlterEgo Inc., launching a "near-telepathic" wearable that promised to fundamentally alter the landscape of human-computer interaction.4
1.2 The Concept of Subvocalization
The central innovation of AlterEgo is its reliance on "subvocalization" or internal articulation. When a human reads silently or thinks verbally, the brain's motor cortex initiates a speech plan. Signals are sent down the cranial nerves to the speech articulators—the tongue, lips, jaw, and larynx. Even if the person consciously refrains from making a sound or opening their mouth, minute neuromuscular electrical signals (Action Potentials) still reach the muscles.5
AlterEgo uses surface Electromyography (sEMG) to intercept these command signals at the very end of the neural pipeline.7 By decoding these electrical shadows of speech, the device allows the user to communicate with a computer silently. The feedback is delivered via bone conduction—vibrations through the skull directly to the inner ear—creating a closed loop of communication that is completely silent to the outside world.3
This report dissects the AlterEgo system from the atomic level of electrode chemistry to the societal level of privacy ethics, arguing that it represents a third paradigm in HCI: neither the external tool of the PC nor the invasive implant of the medical BCI, but a "peripheral nerve-computer interface" that respects bodily integrity while expanding cognitive capacity.
2. Historical Phylogeny: From Memex to Myoneural Interface
To understand the trajectory of AlterEgo, one must situate it within the broader history of Intelligence Augmentation (IA). The distinction between IA and Artificial Intelligence (AI) is crucial. AI typically aims to build autonomous agents that perform tasks instead of humans (replacing the human). IA aims to build systems that allow humans to perform tasks better (enhancing the human).
2.1 The Licklider Legacy
The intellectual lineage of AlterEgo traces directly to J.C.R. Licklider’s seminal 1960 paper, "Man-Computer Symbiosis".8 Licklider envisioned a future where the coupling between human and machine would be tighter than the "operator-operand" relationship. He foresaw a partnership where the human provided the creative heuristics and the machine provided the algorithmic retrieval, functioning together in real-time.
Arnav Kapur and the MIT team explicitly reference this lineage. The design of AlterEgo is intended to fulfill Licklider's vision of a "very tight coupling".8 In the traditional paradigm, querying the internet is an act of retrieval: one poses a question, waits, and reads the answer. In the AlterEgo paradigm, the interaction is designed to feel like memory. If a user silently asks for a fact and receives the answer instantly via bone conduction, the subjective experience approaches that of simply "knowing" the information.
2.2 The Evolution of Silent Speech Research
The idea of reading silent speech predates AlterEgo, though earlier attempts were often cumbersome or invasive.
NASA Ames Research (2000s): Early experiments by NASA explored subvocal recognition to allow astronauts to communicate in high-noise environments or during high-G acceleration where vocal cords might be strained. These systems typically used large, gel-based electrodes and achieved limited vocabularies.
Invasive Studies: Other research utilized invasive electrodes placed directly on the cortex (ECoG) to decode speech intention. While accurate, these required craniotomies, limiting their application to medical patients with severe paralysis.10
AlterEgo’s contribution to this field was the demonstration that high-accuracy, continuous silent speech recognition could be achieved non-invasively and unobtrusively using a wearable form factor that did not obstruct the face, paving the way for consumer adoption.3
3. Physiological Substrate: The Anatomy of Internal Articulation
The efficacy of the AlterEgo system relies on the specific neuroanatomy of speech production. The system does not "read thoughts" in the abstract sense; it reads the motor commands of specific muscles. Understanding which muscles and why requires a tour of the peripheral speech apparatus.
3.1 The Motor Pathway
The pathway of speech begins in Broca’s area of the frontal lobe, where language planning occurs. Signals are then transmitted to the motor cortex, which maps the movements required for phonation. These signals travel via the Corticobulbar tract to the nuclei of the cranial nerves in the brainstem.
From the brainstem, the signals travel out to the face and neck via the cranial nerves, specifically:
Trigeminal Nerve (CN V): Controls the muscles of mastication (jaw movement).
Facial Nerve (CN VII): Controls the muscles of facial expression (lips, cheeks).
Hypoglossal Nerve (CN XII): Controls the tongue.
During "internal articulation," the user engages this entire pathway with the intention to speak, but inhibits the final respiratory drive that would produce air pressure for sound. The result is a cascade of neuromuscular activation that is too weak to move the muscles visibly but strong enough to generate detectable electrical potentials on the skin surface.8
3.2 Target Muscle Groups
The AlterEgo researchers conducted extensive pilot studies to identify the optimal electrode locations that provided the highest signal-to-noise ratio for distinguishing phonemes (the distinct sounds of speech). The following regions were identified as critical 8:
| Muscle / Region | Anatomical Function | Relevance to AlterEgo |
| --- | --- | --- |
| Orbicularis Oris | Encircles the mouth; puckers/closes lips. | Crucial for bilabial sounds (B, P, M). |
| Mentalis | Located on the chin (mentum). | Controls lower lip positioning and chin wrinkling. |
| Platysma | Broad sheet from chest to jaw. | Activated during jaw depression and stress. |
| Digastric (Anterior) | Under the jaw (submental). | Primary muscle for opening the jaw (depressing mandible). |
| Mylohyoid | Floor of the mouth. | Moves the tongue and hyoid bone; proxy for tongue tracking. |
| Levator Anguli Oris | Corners of the mouth. | Lifts the corners of the mouth; involved in vowel shaping. |
In the finalized commercial and late-stage research prototypes, the system typically utilizes 7 channels derived from these regions, with specific focus on the laryngeal, hyoid, and jaw areas to capture the complex interplay of tongue position and jaw opening.8
3.3 The Concept of Volition
A critical physiological distinction is the concept of volition. AlterEgo is not a passive mind-reader. It captures "volitional activation of internal speech articulators".3 This means the user must deliberately engage the speech muscles.
This physiology provides a natural privacy safeguard. Spontaneous thoughts, emotional reactions, or internal monologues that are not deliberately "subvocalized" do not generate the requisite coherent neuromuscular patterns to trigger the system. The user has an internal "clutch"—they engage the system only when they consciously form words for it to hear.6
4. Hardware Architecture: Engineering the Silent Interface
Translating the faint microvolts of neuromuscular activity into digital commands requires a sophisticated hardware stack. The AlterEgo device is a feat of biomedical engineering, miniaturized for daily wear.
4.1 The Electrode Interface
The primary challenge in surface EMG is skin impedance. The outer layer of the skin (stratum corneum) is dead, dry, and highly resistant to electricity. Clinical EMG often uses abrasive skin prep and conductive gels, which are impractical for a consumer device.
AlterEgo utilizes dry or semi-dry electrode materials, typically involving silver (Ag) or silver-chloride (AgCl) composites, or biomedical grade conductive polymers.6 To maintain signal integrity without gels, the device relies on:
Mechanical Design: The headset frame is designed to apply consistent, gentle pressure to the electrode sites to ensure contact stability.
Active Shielding: The cables and sensors are shielded to prevent the body from acting as an antenna for 60Hz power line hum (electromagnetic interference from wall outlets and lights).11
4.2 Signal Conditioning and Analog Processing
Before the signals can be digitized, they must be cleaned. The raw signal from the face is contaminated with noise from heartbeat, swallowing, and general movement. The analog front-end (AFE) of the AlterEgo system applies a series of filters 11:
High-Pass Filtering (Butterworth, > 0.5 Hz): This removes the "DC Offset" and baseline drift. As the user moves or sweats, the baseline voltage of the skin changes slowly. The high-pass filter eliminates this slow drift, keeping the signal centered around zero.
Notch Filtering (60 Hz): This specifically targets the "mains hum" interference from nearby electrical equipment.
Band-Pass Filtering (0.5 – 8 Hz): Interestingly, much of the relevant information for speech motor planning exists in low frequencies. Filtering to this range helps isolate the speech envelope from high-frequency muscle noise (jitter).
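The three-stage chain above can be sketched with SciPy. The sampling rate, filter orders, and the use of zero-phase (forward-backward) filtering are illustrative assumptions, not specifics from the AlterEgo publications:

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt, sosfiltfilt

FS = 1000.0  # assumed sampling rate (Hz)

def condition_emg(raw: np.ndarray) -> np.ndarray:
    # 1) High-pass at 0.5 Hz: strip the DC offset and slow baseline drift.
    sos_hp = butter(4, 0.5, btype="highpass", fs=FS, output="sos")
    x = sosfiltfilt(sos_hp, raw)

    # 2) Notch at 60 Hz: suppress mains hum the body picks up as an antenna.
    b_n, a_n = iirnotch(60.0, Q=30.0, fs=FS)
    x = filtfilt(b_n, a_n, x)

    # 3) Low-pass at 8 Hz: together with step 1 this yields the 0.5-8 Hz
    #    band that carries the slow speech-motor envelope.
    sos_lp = butter(4, 8.0, btype="lowpass", fs=FS, output="sos")
    return sosfiltfilt(sos_lp, x)
```

Second-order-sections (`sos`) form is used for the Butterworth stages because a 0.5 Hz cutoff at a 1 kHz sampling rate is numerically fragile in transfer-function form.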
4.3 Heartbeat Artifact Removal
A specific challenge for neck-based electrodes is the carotid pulse. The user's heartbeat creates a rhythmic electrical spike (the ECG artifact) that can be larger than the subtle speech signals. The MIT team implemented a Ricker Wavelet convolution method to detect the QRS complex of the heartbeat and subtract it from the EMG stream, ensuring that the "thump-thump" of the heart is not interpreted as a spoken word.11
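A minimal version of that wavelet-matched beat detection can be sketched in NumPy. The wavelet width, threshold, and refractory period below are illustrative assumptions (the Ricker helper is written out because SciPy's built-in version has been deprecated); in the full method, an averaged QRS template is then subtracted around each detected beat:

```python
import numpy as np

def ricker(points: int, a: float) -> np.ndarray:
    # Ricker ("Mexican hat") wavelet with the classic unit-energy scaling.
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def find_qrs(emg: np.ndarray, fs: float = 1000.0) -> np.ndarray:
    # Convolve with a wavelet roughly matched to QRS width; the sharp
    # heartbeat spike responds far more strongly than the slow EMG envelope.
    w = ricker(int(0.2 * fs), a=0.02 * fs)
    response = np.convolve(emg, w, mode="same")
    thresh = 4.0 * np.std(response)  # illustrative threshold
    refractory = int(0.3 * fs)       # ignore re-triggers within 300 ms
    peaks, last = [], -refractory
    for i in np.flatnonzero(np.abs(response) > thresh):
        if i - last >= refractory:
            peaks.append(i)
            last = i
    return np.array(peaks, dtype=int)
```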
4.4 The Form Factor
The 2018 prototype was a C-shaped caliper that hooked over the ears and curved along the jawline. By the 2025 commercial release, the device had been refined into a sleeker aesthetic, described as resembling bone conduction headphones with sensor extensions.7 The commercial unit integrates the sensors, the processing unit (microcontroller), the battery, and the wireless radio (Bluetooth/Wi-Fi) into a unified wearable that sits comfortably on the head.4
5. The Computational Engine: Neural Networks and Decoding
The raw data streaming from the 7 electrode channels is a chaotic time-series of voltage fluctuations. Converting this into intelligible text requires a deep learning pipeline that can handle temporal variability and sparse data.
5.1 The Architecture: CNN + RNN
The AlterEgo system employs a hybrid neural network architecture that leverages the strengths of both Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).11
5.1.1 Feature Extraction (CNN)
The front end of the model is a deep Convolutional Neural Network. While CNNs are typically associated with image recognition, they are highly effective for time-series data when applied as 1D convolutions.
Function: The CNN scans the incoming EMG window and identifies "features"—local patterns of muscle activation. For example, it might learn to recognize the specific "signature" of the orbicularis oris muscle tightening at the same time as the digastric muscle relaxes.
Structure: The architecture typically consists of 3 convolutional layers containing filters of increasing depth (e.g., 64, 128, and 256 filters), utilizing Rectified Linear Unit (ReLU) activation functions.12
5.1.2 Temporal Modeling (LSTM)
Speech is inherently temporal; the meaning of a signal depends on what came before it. The output of the CNN is fed into a Recurrent Neural Network, specifically a Bidirectional Long Short-Term Memory (BiLSTM) network.11
Function: The LSTM maintains a "memory" of the sequence. It connects the features extracted by the CNN into a coherent flow, handling the variability in speed (e.g., if a user speaks a word slowly one time and quickly the next).
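Following the description above (7 input channels, three 1D convolutional layers of 64/128/256 filters with ReLU activations, feeding a BiLSTM), the pipeline might be sketched in PyTorch as follows. Kernel sizes, the LSTM hidden width, and the output alphabet size are illustrative guesses, not published specifications:

```python
import torch
import torch.nn as nn

class SilentSpeechNet(nn.Module):
    """CNN front-end + BiLSTM, per the architecture described in the text.
    Emits per-timestep log-probabilities suitable for CTC training."""

    def __init__(self, n_channels: int = 7, n_classes: int = 28):
        # n_classes assumed: 26 letters + space + the CTC "blank" token.
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 256, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(256, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -- conditioned EMG windows.
        feats = self.cnn(x)                        # (batch, 256, time)
        seq, _ = self.lstm(feats.transpose(1, 2))  # (batch, time, 256)
        return self.head(seq).log_softmax(dim=-1)  # (batch, time, classes)
```

Padding keeps the time axis unchanged, so each EMG frame maps to one output distribution, which is exactly the shape the CTC loss (Section 5.2) expects.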
5.2 Connectionist Temporal Classification (CTC)
A major hurdle in silent speech recognition is "alignment." In standard training, you need to know exactly which millisecond of audio corresponds to which phoneme. However, in continuous silent speech, there are no clear breaks.
To solve this, AlterEgo utilizes Connectionist Temporal Classification (CTC) loss.11 CTC allows the network to predict a sequence of labels (characters or phonemes) from an unsegmented input stream. It outputs a probability distribution over all possible characters for each time step, introducing a "blank" token to account for silence or transitions. The decoder then collapses repeated characters and strips the blanks (e.g., "hh-ee-l-ll-oo" becomes "hello"; the blank between the two l-runs is what preserves the double letter).
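The collapse rule is simple enough to show directly. This is a greedy-decoding sketch (pick the most likely token per frame, merge repeats, drop blanks); the alphabet and the "-" blank symbol are illustrative conventions:

```python
from itertools import groupby

def ctc_collapse(frame_labels: list[str], blank: str = "-") -> str:
    # 1) Merge consecutive duplicates: "h h e e" -> "h e".
    merged = [label for label, _ in groupby(frame_labels)]
    # 2) Remove blank tokens, which mark silence or transitions.
    return "".join(label for label in merged if label != blank)

# The blank between the two runs of "l" is what allows a double letter
# to survive the merge step.
print(ctc_collapse(list("hhee-ll-l-oo")))  # -> "hello"
```

Production decoders replace the greedy per-frame choice with a beam search over the full probability lattice, but the collapse logic is the same.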
5.3 Language Model Boosting
The raw output from the EMG decoder is phonetic and can be error-prone. To bridge the gap to natural language, the system integrates a Language Model (LM).11
Contextual Correction: Similar to how smartphone autocorrect works, the LM analyzes the sequence of words. If the EMG decoder produces a phonetically ambiguous result (e.g., "meat" vs. "meet"), the Language Model looks at the context of the sentence (e.g., "I want to [word] you at the station") and selects the statistically probable term ("meet").
Impact: The integration of the LM significantly boosts the accuracy, allowing the system to scale from small vocabularies to conversational speech.
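The "meat" vs. "meet" disambiguation above can be illustrated with a toy rescoring step. Real systems combine a neural language model with the decoder's beam search; the tiny bigram table and score weighting here are illustrative assumptions only:

```python
# Hypothetical bigram log-probabilities standing in for a real LM.
BIGRAM_LOGPROB = {
    ("to", "meet"): -1.0,
    ("to", "meat"): -6.0,
}

def rescore(prev_word: str, candidates: list[str],
            emg_score: dict[str, float]) -> str:
    # Combine the EMG decoder's acoustic-style score with the LM's
    # contextual score; unseen bigrams get a low backoff value.
    def total(word: str) -> float:
        return emg_score[word] + BIGRAM_LOGPROB.get((prev_word, word), -10.0)
    return max(candidates, key=total)

# The EMG decoder finds both words equally likely; context breaks the tie.
print(rescore("to", ["meat", "meet"], {"meat": -0.7, "meet": -0.7}))  # -> "meet"
```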
5.4 Personalization and Training
Unlike voice recognition systems (like Siri) which are trained on millions of users and work "out of the box," myoneural signals are highly idiosyncratic. The placement of electrodes relative to muscle bellies, the thickness of the skin, and the specific motor patterns of speech vary from person to person.
User-Dependence: Consequently, the AlterEgo system is user-dependent.11 Users typically undergo a calibration phase where they silently read a corpus of text to train the model to their specific physiology. While this increases the friction of setup, it results in a personalized model with much higher fidelity for that specific individual.
6. Commercialization: The Rise of AlterEgo Inc. (2025)
For seven years, AlterEgo remained a research project, producing papers and winning awards (such as the Time Magazine Best Inventions of 2020).14 However, the transition from lab bench to product requires solving issues of scalability, durability, and cost.
6.1 The Spin-Off
In early 2025, the project spun out of the MIT Media Lab to form AlterEgo Inc. (also referenced as AlterEgo.io), headquartered in Boston.3 The company is led by Arnav Kapur as CEO and Co-founder, and Max Newlon as COO.15
6.2 Investment and Valuation
The commercial potential of a silent interface attracted significant venture capital. The company raised a total of $35.1 million in early funding rounds 16:
Series A ($7.3M): Led by Rembrandt Partners and Cisco Investments. The involvement of Cisco suggests strong interest in enterprise communication applications.
Series B ($27.8M): Led by Granite Ventures and Mayfield. This funding enabled the company to miniaturize the electronics and develop the consumer-ready form factor unveiled in September 2025.
6.3 The "Silent Sense" Launch
On September 8, 2025, AlterEgo unveiled its flagship commercial device.4 The marketing centered on the concept of "Silent Sense"—the ability to communicate at the "speed of thought."
Demonstrations: The launch demos featured users performing tasks that would be impossible with voice or text:
Silent Querying: A user asking for the weather in Tokyo while maintaining a conversation with a person in the room.
Telepathy: Two users wearing the devices communicated silently with each other across a room. One user subvocalized a message; the system decoded it, transmitted it wirelessly, and synthesized it into the other user's bone conduction headphone.4
Market Positioning: The device is positioned both as an assistive tool for the speech-impaired and a "super-human" productivity tool for professionals who need high-bandwidth, hands-free, private communication.
7. Comparative Analysis: AlterEgo vs. The BCI Landscape
The rise of AlterEgo occurs in parallel with the explosion of interest in Brain-Computer Interfaces (BCIs), most notably Neuralink. Understanding the distinction between these approaches is vital.
| Feature | AlterEgo (Peripheral Myoneural) | Neuralink (Invasive Cortical BCI) |
| --- | --- | --- |
| Invasiveness | Non-Invasive: Wearable surface electrodes. No surgery. | Invasive: Craniotomy + electrodes implanted in brain tissue. |
| Signal Source | Neuromuscular Action Potentials (Muscles). | Cortical Spikes (Single Neurons/LFP). |
| Setup Cost | Low (Purchase device, calibrate). | High (Surgery, medical oversight). |
| Bandwidth (Speech) | ~100+ wpm (Expert users).11 | ~62 wpm (Current academic records for invasive speech BCI).10 |
| Privacy | Volitional: Only reads attempted speech. | Continuous: Potential to read unvoiced thoughts/intent. |
| Durability | Device can be removed/upgraded instantly. | Implant degrades over time; removal is risky.19 |
| Target Market | Broad consumer + Medical. | Primarily Medical (Paralysis) initially. |
7.1 The Bandwidth Surprise
Counter-intuitively, the non-invasive AlterEgo system currently rivals or exceeds the speech decoding rates of invasive BCIs. While invasive BCIs have theoretical access to more data, the "motor code" for speech in the brain is incredibly complex and distributed. By reading the signal at the muscles, AlterEgo leverages the body's own "decoder" (the peripheral nervous system) which has already aggregated the complex brain signals into specific muscle vectors. This allows for rapid, high-accuracy decoding of speech without drilling into the skull.19
7.2 The "Telepathy" Comparison
Both systems claim "telepathy." Neuralink envisions a future of high-bandwidth data transfer between brains. AlterEgo achieves a functional equivalent today using "Silent Sense." While Neuralink is often described as "The Matrix," AlterEgo is more akin to "The Force"—a subtle, volitional extension of will that requires discipline and training but grants extraordinary capability without violating the body's integrity.
8. Ethical Horizons: Silence, Privacy, and Society
The introduction of silent speech interfaces redefines the social contract of communication.
8.1 The Privacy of the Mind
The primary anxiety surrounding such technology is "mind-reading." However, the physiological constraint of AlterEgo offers a robust firewall. Because the system relies on motor units, it cannot read abstract thought, memory, or visual imagery. It can only read what the user intends to say. This distinction between "thinking" and "articulating" is the boundary of privacy.6
Biometric Data: However, EMG data is biometric. It can potentially reveal stress levels, fatigue, or even early signs of neurodegenerative disease. The security of this data is paramount. AlterEgo Inc. has emphasized "local processing" (Edge AI) where possible to keep sensitive neural data on the device rather than the cloud.7
8.2 The Erosion of Social Cues
In a future where AlterEgo is ubiquitous, silence becomes ambiguous. Is the person sitting across from you simply listening, or are they silently dictating an email, checking sports scores, or chatting with a third party?
The "Glasshole" Effect: Just as Google Glass failed partly due to the unease it caused observers (who didn't know if they were being recorded), AlterEgo faces a social acceptance hurdle. The "invisibility" of the interaction makes it more discreet, but potentially more unnerving for observers who value undivided attention.
8.3 The Borg or the Symbiote?
Critics might view this as the final step in the mechanization of the human—turning us into "nodes" in the digital hive mind. However, proponents like Pattie Maes argue the opposite. By moving the interface inside (subjectively) and removing the need to stare at screens, technology like AlterEgo could actually make us more human. It allows us to access the infinite knowledge of the cloud while keeping our heads up, our hands free, and our eyes fixed on the world and the people around us.7
9. Conclusion
AlterEgo represents a pivotal moment in the history of human-computer interaction. It effectively demonstrates that the bandwidth bottleneck of the human mind can be widened without resorting to the surgical knife. By cleverly exploiting the physiological leakage of "internal articulation" and applying state-of-the-art machine learning, the system creates a "second self"—a digital alter ego that listens to our silent commands and whispers back the knowledge of the world.
As the technology matures from the laboratory benches of MIT to the consumer market of 2025, it challenges us to reimagine the boundaries of the self. In the era of the peripheral myoneural interface, the line between biological memory and digital retrieval dissolves, and the human voice is no longer limited by the need for sound. We are entering the age of the Silent Symphony, where our thoughts—volitional, articulated, and amplified—become the direct architects of our digital reality.
Appendix: Technical Specifications & Data Summary
The following table summarizes the key technical parameters of the AlterEgo system as analyzed in this report.
| Parameter | Specification | Notes / Context |
| --- | --- | --- |
| Interface Type | Peripheral Myoneural Interface | Non-invasive, surface-based. |
| Sensor Technology | sEMG (Surface Electromyography) | Uses Ag/AgCl or conductive polymer electrodes. |
| Channel Count | 7 Channels | Targeted regions: Larynx, Hyoid, Jaw, Face. |
| Signal Bandwidth | 0.5–1000 Hz | Filtered to 0.5–8 Hz for speech envelope extraction. |
| Processor | CNN + BiLSTM + CTC | Hybrid Deep Learning architecture for time-series. |
| Feedback | Bone Conduction Audio | Transmits via skull vibration; keeps ear canal open. |
| Latency | Real-time (< 200 ms) | Perceptible as immediate conversation. |
| Accuracy | ~92% (Median Word Accuracy) | Based on user-dependent training models. |
| Communication Rate | ~100+ Words Per Minute | Comparable to natural conversation; faster than typing. |
| Commercial Status | Available (Select Beta, 2025) | Spin-off company AlterEgo Inc. |
| Funding | $35.1 Million | Series A & B (Cisco, Mayfield, Granite Ventures). |
Works cited
Publications ‹ AlterEgo — MIT Media Lab, accessed January 2, 2026, https://www.media.mit.edu/projects/alterego/publications/
[PDF] AlterEgo: A Personalized Wearable Silent Speech Interface | Semantic Scholar, accessed January 2, 2026, https://www.semanticscholar.org/paper/AlterEgo%3A-A-Personalized-Wearable-Silent-Speech-Kapur-Kapur/9e2af148acbf7d4623ca8a946be089a774ce5258
Overview ‹ AlterEgo - MIT Media Lab, accessed January 2, 2026, https://www.media.mit.edu/projects/alterego/overview/
Alterego debuts “near-telepathic” AI wearable - The Rundown AI, accessed January 2, 2026, https://www.therundown.ai/p/alterego-debuts-near-telepathic-ai-wearable
AlterEgo: A Personalized Wearable Silent Speech Interface | Request PDF - ResearchGate, accessed January 2, 2026, https://www.researchgate.net/publication/323669071_AlterEgo_A_Personalized_Wearable_Silent_Speech_Interface
Alterego's silent wearable is making noise in AI communication - Parola Analytics, accessed January 2, 2026, https://parolaanalytics.com/blog/alterego-silent-sense-telepathy-patents/
Frequently Asked Questions ‹ AlterEgo - MIT Media Lab, accessed January 2, 2026, https://www.media.mit.edu/projects/alterego/frequently-asked-questions/
AlterEgo: A Personalized Wearable Silent Speech Interface - Andy Matuschak, accessed January 2, 2026, https://andymatuschak.org/files/papers/Kapur%20et%20al%20-%202018%20-%20AlterEgo.pdf
Signature redacted-- - DSpace@MIT, accessed January 2, 2026, http://dspace.mit.edu/bitstream/handle/1721.1/120883/1088722982-MIT.pdf?sequence=1
Scientists Say New Brain-Computer Interface Lets Users Transmit 62 Words Per Minute, accessed January 2, 2026, https://futurism.com/neoscope/scientists-new-brain-computer-interface-type-62-words-per-minute
A Continuous Silent Speech Recognition System for AlterEgo, a Silent Speech Interface - DSpace@MIT, accessed January 2, 2026, https://dspace.mit.edu/bitstream/handle/1721.1/123121/1128187233-MIT.pdf?sequence=1
How Deep Neural Networks Can Improve Emotion Recognition on Video Data - MIT Lincoln Laboratory, accessed January 2, 2026, https://www.ll.mit.edu/sites/default/files/publication/doc/2018-05/2016_Khorrami_ICIP_FP.pdf
MIT Graduate Student Develops a Mind-Reading Device - element14 Community, accessed January 2, 2026, https://community.element14.com/technologies/sensor-technology/b/blog/posts/mit-graduate-student-develops-a-mind-reading-device
Updates ‹ AlterEgo - MIT Media Lab, accessed January 2, 2026, https://www.media.mit.edu/projects/alterego/updates/
Alterego demoes 'world's first near-telepathic wearable' that enables 'typing at the speed of thought' other abilities — device said to enable silent communication with others, control devices hands-free, and restore speech for impaired | Tom's Hardware, accessed January 2, 2026, https://www.tomshardware.com/peripherals/wearable-tech/alterego-demoes-worlds-first-near-telepathic-wearable-that-enables-typing-at-the-speed-of-thought-other-abilities-device-said-to-enable-silent-communication-with-others-control-devices-hands-free-and-restore-speech-for-impaired
2025 Funding Rounds & List of Investors - AlterEgo - Tracxn, accessed January 2, 2026, https://tracxn.com/d/companies/alterego/__oLf0SLh9CeVPrhQkhPjExJ8D0IiJ-J45J1t5N714nYk/funding-and-investors
'Near Telepathic' Wearable Lets You Communicate Silently With Devices - MIT Media Lab, accessed January 2, 2026, https://www.media.mit.edu/articles/exclusive-startup-lets-you-query-ai-with-silent-speech/
AlterEgo Device Enables Silent AI Communication via Facial Muscle Signals, accessed January 2, 2026, https://www.chosun.com/english/industry-en/2025/09/20/BIHCZEPDCNBSFCREJPFOL2TF3M/
Neuralink competitors | How does Neuralink's technology compare? - Paradromics, accessed January 2, 2026, https://www.paradromics.com/insights/neuralink-competitors
AlterEgo, The Intelligence Augmentation Device That Reads Your Mind - FashNerd, accessed January 2, 2026, https://fashnerd.com/2018/04/alterego-intelligence-augmentation-wearable/


