
I know what you’re thinking (pun intended). But this is not a sensational fantasy about mind reading. It is an extrapolation from real advances in AI, brain-computer interfaces (BCIs), and neuroscience. As systems for decoding neural signals and translating thought-related activity into digital output continue to improve, the question is no longer just what they can do on their own, but whether they could one day be networked, and to what extent. BCIs are no longer a novel concept, but if such devices could communicate directly with one another, they might give rise to technological telepathy.
From private thought to usable input
Thoughts have historically remained private by default, becoming shareable only when forced through speech, writing, gesture, or code, each of which introduces delay, tension, and translation between intention and expression. Technological telepathy becomes consequential not because machines can literally read minds in a science-fiction sense, but because computing is progressively collapsing that gap. As BCIs, silent-speech systems, neural decoding models, and generative AI converge into a communications stack in which cognition itself becomes a usable input, future networks may come to connect minds rather than merely devices or identities.
A longer lineage of brain-machine translation
This trajectory does not originate with ATR, AlterEgo, Neuralink, or the current “AI summer,” but extends through a longer history of rendering the brain legible to machines. It begins with Hans Berger’s EEG and its demonstration of non-invasive neural capture, and continues through José Delgado’s stimulation experiments, Alvin Lucier’s “Music for Solo Performer” and its use of EEG for artistic control, Eberhard Fetz’s work on learned modulation of neural firing, and Jacques Vidal’s articulation of “brain-computer communication,” later made publicly tangible through BrainGate’s cursor control for paralysed patients and Kevin Warwick’s experiments in technological telepathy. All of this situates an internet of minds as a continuation rather than a rupture.
Institutional drivers and military interest
This history cuts across neuroscience, engineering, military funding, performance art, and public spectacle, with DARPA embedded in the field’s institutional structure, particularly through programs such as N3 that target high-performance neural interfaces for human-machine teaming, active cyber defence systems, and control of unmanned aerial vehicles. No credible public evidence, however, supports operational BCI-to-BCI communication in military or covert use.
Science fiction as conceptual groundwork
Science fiction anticipated the conceptual and social implications well before technical feasibility, as seen in Alfred Bester’s “The Demolished Man” and its treatment of telepathy and social order, William Gibson’s “Neuromancer” and “Johnny Mnemonic” and their linking of neural systems to networked computation, Iain M. Banks’s neural lace, Ramez Naam’s infrastructural treatment of networked cognition, and Isaac Asimov’s “Foundation,” alongside concepts such as hive mind theory and consciousness field theory, which framed expectations even as technological telepathy itself remains grounded in engineering rather than speculative human abilities.
What technological telepathy actually is
In practical terms, technological telepathy does not involve full extraction of continuous private thought, but instead consists of narrower capabilities such as decoding constrained visual categories from brain activity during sleep, reconstructing partial features of perceived or imagined images, or inferring silently articulated words from neuromuscular signals in the face and jaw, which are distinct but collectively indicate that communication technologies are moving upstream toward earlier stages of cognition.
Concretely, current and near-term systems rely on specific device classes rather than abstract “mind reading”: implanted electrode arrays that record neural firing directly from the cortex, non-invasive headsets based on EEG or functional imaging that capture aggregate brain activity, endovascular interfaces that access signals via blood vessels, and wearable EMG sensors placed on the face or jaw to detect subvocal speech. All of these produce partial, task-specific signals that must be decoded and interpreted through software, meaning that what is transmitted is not raw thought but a constrained, device-mediated representation of selected aspects of cognition.
The layered communication stack
A clearer understanding emerges when the technology is treated as a layered system. It begins with capture, through electrodes, imaging systems, or wearable sensors that detect neural or neuromuscular activity; proceeds to decoding, via machine-learning models that map signals to probable words, intentions, percepts, or categories; continues with mediation, through software that filters noise, ranks interpretations, predicts continuations, corrects errors, and structures ambiguous biological signals into coherent output; and ends with transmission to devices, other individuals, or networks. Throughout, AI functions as the intermediary that translates between biological activity and digital meaning rather than enabling direct thought transfer.
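The layered stack described above can be sketched in code. This is purely an illustrative toy, not a real BCI pipeline: every function name, signal format, and confidence value here is invented, and a real system would involve hardware drivers and trained models at each layer. The sketch only shows the architectural point that each layer narrows and reshapes the signal, so the receiver gets a mediated label rather than raw neural data.

```python
import random

def capture():
    """Capture layer: hypothetical sensor returning a noisy signal
    (random floats stand in for neural or neuromuscular readings)."""
    random.seed(42)  # deterministic for the example
    return [random.gauss(0, 1) for _ in range(8)]

def decode(signal):
    """Decoding layer: map the signal to ranked candidate interpretations.
    A fixed candidate list stands in for a machine-learning model."""
    energy = sum(x * x for x in signal) / len(signal)
    candidates = [("yes", 0.6), ("no", 0.3), ("unclear", 0.1)]
    return sorted(candidates, key=lambda c: -c[1]), energy

def mediate(candidates, energy, threshold=0.5):
    """Mediation layer: filter noise and suppress low-confidence output."""
    best_label, best_conf = candidates[0]
    if best_conf < threshold:
        return None  # refuse to transmit an ambiguous interpretation
    return {"label": best_label, "confidence": best_conf,
            "signal_energy": round(energy, 3)}

def transmit(message):
    """Transmission layer: serialise for a device or network."""
    if message is None:
        return "NO-OP"
    return f"{message['label']} ({message['confidence']:.0%})"

signal = capture()
candidates, energy = decode(signal)
message = mediate(candidates, energy)
print(transmit(message))  # the receiver sees a mediated label, not raw signal
```

Note that the raw signal never leaves the pipeline: what crosses the network boundary is the mediation layer's structured output, which is exactly why the article treats mediation, rather than capture, as the politically decisive layer.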
From data to the “internet of minds”
Within this architecture, cognition-related data becomes processable in ways analogous to other network data while remaining qualitatively closer to thought itself, introducing the possibility that privacy breaches occur prior to completed expression, and implying that the dominant model will resemble cognitively mediated client-server communication rather than direct peer-to-peer telepathy.
Philosophical constraints on “pure thought”
Philosophical objections to Neuralink-like research, such as those raised in Slavoj Žižek’s 2020 talk at the University of Winnipeg, emphasise that conceptual thought does not exist independently of language, challenging the notion that “pure thought” can be transmitted without distortion and reframing linguistic imperfection as intrinsic to meaning rather than as an obstacle to be eliminated.
Technical limits and partial decoding
Technical constraints remain substantial, particularly around invasiveness: high-performance BCIs often depend on implants placed in or near the brain, introducing surgical risk, long-term maintenance, and questions of removal and bodily autonomy, while less invasive approaches such as wearables or endovascular interfaces shift these tradeoffs without removing them. The limits of decoding are illustrated by ATR and Yukiyasu Kamitani’s lab, whose dream-decoding studies demonstrated category-level prediction of dream content under tightly constrained conditions rather than full reconstruction, establishing partial permeability of internally generated experience without generalisability.
Silent speech and the boundary of intent
Alternative approaches, such as AlterEgo, focus on silent speech, relying on intentional subvocalisation and neuromuscular detection to create a clearer boundary between private thought and transmitted output, although current limitations in surface EMG prevent reliable decoding of inner-speech phonetic content, reinforcing that existing systems detect controlled signals rather than unrestricted cognition.
The fragility of intentionality boundaries
This boundary of intent, while conceptually important, remains technically and institutionally fragile, as systems may expand the definition of “intended” signals through software updates, model retraining, or error correction, and as movement toward decoding imagery, semantic content, and prelinguistic intention further complicates distinctions between thinking, rehearsing, and transmitting, with proposed safeguards such as learned cognitive protocols or mental “keys” likely to erode under pressures for efficiency and usability.
From research frontier to platform economy
The transition from research to commercialisation, evident in companies such as Neuralink, Synchron, and Paradromics, reframes neurotechnology as infrastructure rather than experiment, introducing business models that range from high-cost clinical hardware reimbursed through healthcare systems to platform-based software and institutional deployment in workplaces, defence settings, or insurer-managed care, and elevating neural data as a potentially valuable resource due to its proximity to intention before action, preference before declaration, and friction before expression.
Expression as a hybrid artifact
In this context, expression becomes a joint product, as systems infer, correct, rank, and autocomplete outputs derived from incomplete signals, producing hybrid artefacts that blur the boundary between user intention and machine contribution, thereby shifting the problem from privacy alone to questions of authorship and authenticity, since output may already reflect negotiation between human cognition and computational prediction.
Why consent is insufficient
Consent, traditionally understood as sufficient ethical grounding, becomes inadequate when users do not stand outside the systems shaping their expression, requiring structural governance mechanisms such as purpose limitation, auditable mediation, rights of refusal, prohibitions on employer coercion, protected clinical boundaries, and legal remedies when machine output is misattributed to the user as fully self-authored.
Feedback loops and cognitive adaptation
Feedback dynamics further complicate the system, as users adapt to decoder behaviour, anticipate system completions, and potentially reshape their own cognitive patterns to improve legibility and control, creating a feedback loop in which thought is partially oriented toward machine interpretability and generating ambiguity in responsibility when outputs reflect inferred rather than explicitly intended meaning.
Assistive promise and differential cognitive citizenship
While assistive applications for paralysis, severe motor impairment, or speech loss remain one of the strongest justifications, variability in neural and neuromuscular signals introduces differential cognitive citizenship, in which some individuals are more easily legible to systems than others due to anatomy, fatigue, stress, medication, injury, learning history, or neurotype, producing structured inequalities in performance, correction burden, and access.
Regulation, geography, and legibility inequality
These inequalities intersect with broader regulatory and geopolitical conditions, as jurisdictions that prioritise speed, scale, or strategic advantage may normalise less reliable systems more quickly, while Chile’s neurorights turn and the OECD’s neurotechnology governance work represent efforts to establish rights-based and standards-based constraints before large-scale commercialisation hardens, highlighting that cognitive interfaces will be shaped by states, healthcare systems, defence institutions, and major technology firms with divergent regulatory approaches.
Ownership, control, and contested infrastructure
Cognitive infrastructure is therefore unlikely to be uniformly owned, with centralisation more likely in hardware, clinical deployment, and large-scale inference systems, while mediation layers, user-facing software, and potentially open models remain more contestable, shifting the central political question from ownership of discrete thought-data to governance of the channel through which thought becomes public, legible, and actionable.
Conditions for legitimate development
Legitimate development would require constrained deployment, local processing by default where feasible, strict separation between therapeutic and productivity uses, independent auditing of intent-detection and mediation systems, meaningful user oversight and contestability, and enforceable rights to refuse cognitive monitoring without loss of work, care, insurance, or civic participation. Any such framework must also recognise that technological telepathy compresses the distance between thought and communication even as it inserts additional computational mediation between them.
Near-term reality: low-bandwidth cognition
In near-term scenarios, the most achievable outputs remain limited to affective state, emotional valence, stress, calm, urgency, attentional load, and simple assent or refusal rather than full semantic language, although even these low-bandwidth signals may carry operational value in domains such as military coordination, justice, or entertainment.
Conclusion: authorship under mediation
The progression from networks connecting machines to those connecting identities suggests that an internet of minds would connect cognition itself to computation at unprecedented proximity, leaving unresolved the central question of whether thought, once mediated, inferred, and transmitted through such systems, can remain meaningfully one’s own.
—
Editor’s note: e27 aims to foster thought leadership by publishing views from the community. You can also share your perspective by submitting an article, video, podcast, or infographic.
The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of e27.
The post Technological telepathy: Is an “internet of minds” possible? appeared first on e27.
