Bill of Cognitive Rights

Author’s Note: While kicking around some book ideas involving neurolink technology, I was also considering neurorights. But it struck me that, regardless of the technology, the core issue is the sanctity of our minds. After three nights and a lengthy discussion with ChatGPT, I finalized this. It seems clear and provocative enough to publish here.

Throughout history, humans have tried to shape, influence, and control each other’s minds through persuasion, indoctrination, coercion, or force. Today, new technologies expand these abilities into the most personal domain: thought itself. Neural interfaces, artificial intelligence, and cognitive engineering could improve human life but also threaten the freedom, privacy, and authenticity of the mind. These threats include the risk of unauthorized access to thoughts, manipulation of memories and perceptions, and the weakening of personal agency in decision-making.

We therefore affirm that the mind is a sovereign domain, entirely belonging to the individual who inhabits it. Its thoughts, feelings, memories, and perceptions are not commodities, tools for manipulation, or resources to exploit.

Cognitive rights are not determined by technology but are innate to personhood. They protect individuals from intrusion, distortion, and coercion—whether by machines, institutions, or other people.

Just as past generations established rights to free expression, bodily autonomy, and political liberty, we now acknowledge the importance of safeguarding cognitive liberty, privacy, integrity, authenticity, consent, protection, and transparency. These rights apply in all settings, no matter the tools or methods of influence, from simple propaganda to sophisticated neural interfaces. Social media algorithms that shape perception, and the potential for governments to deploy brain-computer interfaces for surveillance, are modern examples of how cognitive rights can be breached.

By affirming these principles, we guarantee that even as technology advances further into human thought, the sovereignty of the mind remains inviolate.

Bill of Cognitive Rights

Article I — Cognitive Liberty

The mind is a sovereign domain. Every individual has the right to originate, develop, and shape their own thoughts freely.

  • Neither technology nor human actors may implant, suppress, or control thoughts without consent.
  • Freedom of thought precedes and enables freedom of expression.

Article II — Privacy of Mind

Thoughts remain private until voluntarily shared.

  • Unauthorized access, surveillance, or extraction of thoughts, feelings, or memories is prohibited.
  • This protection applies equally to neural data, inner monologues, and subconscious processes.

Article III — Integrity of Cognition

The natural coherence of thought must remain free from covert alteration or disruption.

  • External influences—whether technological, chemical, or social—must not destabilize the processes of reasoning, memory, or perception.
  • Influence must always be identifiable as external, not masquerading as self-generated.

Article IV — Authenticity of Identity

Everyone has the right to be the author of their own mental life.

  • Lived memories, emotions, and beliefs must remain distinguishable from implanted ones.
  • No institution, technology, or individual may falsify or fracture a person’s identity.

Article V — Agency of Consent

Individuals retain ultimate authority over what enters and shapes their minds.

  • Consent to cognitive influence must be explicit, informed, and revocable at any time.
  • Influence that cannot be withdrawn amounts to coercion and is illegitimate.

Article VI — Protection of Cognition

Everyone has the right to defenses against manipulation.

  • Protections can be technological, like neural filters and digital firewalls, or social, such as education and civic safeguards.
  • These defenses must be accessible to everyone, regardless of wealth, status, or location.

Article VII — Transparency of Influence

All external attempts to affect thought must be perceptible as such.

  • Every persuasive act—whether message, signal, or suggestion—must clearly identify its origin.
  • Concealed or unlabeled influence violates cognitive sovereignty.

Assisted by ChatGPT. Cover image by ImageFX.

Opinion: America Just Sued Itself – And Lost

Author’s Note: With the government suing everyone and anyone, what would happen if everyone and anyone sued the government?

Last week, in a courtroom that straddled the physical realm of Maryland and the cloud’s virtual realm, the most absurd court case in American history was finally settled. An astonishing 331 million people, in a class-action lawsuit, challenged their own governance—the People of the United States versus the United States Government. In simpler terms, it was all of us against, well, all of us. The case didn’t end with a bang or a whimper but with a digital deletion.

The plaintiffs accused the government of using machine intelligence to systematically sift through medical databases, legal files, and psychiatric notes; decode supposedly private messages; and assign “Relevance Risk Scores” to citizens based on their political speech, reading habits, and, in at least one documented case, their Spotify playlists. (The Clash’s “I Fought the Law” was apparently enough to flag a listener as a “Person of Interest.”)

They argued these practices violated the First Amendment’s guarantee of free political expression and the Fourth Amendment’s protection against unreasonable searches. The government, naturally, countered that it was only doing what the people had implicitly authorized through elections, appropriations, and what the court memorably called “247 years of collective shrugging.”

The government argued the case should be dismissed because an impartial trial was impossible: every juror was also listed as a plaintiff. The plaintiffs countered that the court should dismiss itself, since the court was also a defendant.

Initially, the court attempted to eliminate the conflict of interest by bringing in jurors from Canada. The Canadians proved polite but deeply confused. One asked, “Wait, you’re suing yourselves? Is that even… legal?” After several days, they excused themselves to go home, citing “existential migraines.”

The court dismissed all motions for mistrial and decided to proceed, declaring, “The court cannot simultaneously be the accused and the judge. Jurors cannot be both the plaintiff and the jury. Therefore, surrogate AI agents will serve as both judge and jury.” Upon introducing the surrogate agents, the judge sent the jury home, recused himself from the case, and, upon further reflection, joined the plaintiffs in the class action lawsuit.

The surrogate judge announced that the jury would hear testimony from all 331 million people. When the plaintiffs protested that they didn’t want to die of old age waiting, the surrogate judge stated that the court and the surrogate jury had already absorbed the population’s surrogate testimony by perusing and decrypting the entire set of Bluffdale data, the government’s accumulated record of everything digital. A 500-petabyte file was entered as Exhibit A for the people.

The defense objected, claiming it denied their right to adversarial scrutiny: “Your Honor, we cannot cross-examine 331 million plaintiffs on their testimony derived from their private data.”

The surrogate judge overruled the objection, noting that the creation of Exhibit A proved the defendant had been doing this for years. The surrogate judge instructed the defendants to prepare a prompt outlining their defenses. After submission, the surrogate agents spent five minutes showing progress bars on their displays. A second 500-petabyte file appeared as Exhibit B for the defense.

The surrogate judge declared, “Testimony has now been heard from both sides. Does the plaintiff have a closing argument?”

“As this trial clearly demonstrates, the defendant has violated constitutional rights to free expression and privacy as specified in the First and Fourth Amendments. As Your Honor has pointed out, Exhibit A shows that every citizen’s rights have been violated for years. Exhibit B does not support the defense’s case; it proves the plaintiff’s claim. The court itself violated our constitutional rights by accessing private data to create Exhibit B. Unless the court recognizes a new legal doctrine of Inverse Habeas Corpus, it cannot seize our data to free us from seizure. The plaintiffs rest.”

The surrogate judge asked, “Does the defense have a closing statement?”

“The hypocrisy of the plaintiffs is blatant. They accepted surveillance when they failed to object to the evidence. Scrutiny was acceptable when it suited their case. By tolerating bad behavior, they invited it; too bad if what they tolerated comes back to bite them. That is justice. The defendant is the plaintiff; the plaintiff is the defendant. If the defendant is guilty, then so is the plaintiff by inclusion. The defense rests.”

In its final ruling, the surrogate jury declared, “We find the defendant guilty. The People have proven beyond doubt that the government has violated their rights.”

For sentencing, the judge announced, “The defense has proved beyond doubt that it is the People. Every citizen shall immediately pay themselves an amount equal to their perceived violation’s value.”

Thus, the paradox was complete: a trial of the people against themselves, resolved by artificial agents of jurisprudence that decided neither side mattered in the end. Justice was not denied but rendered moot.

Some see this as a warning about automation; others see it as a satire of government overreach. But the true sting lies in its cold clarity: when a nation becomes both accuser and accused, all that’s left is machinery.

And so America had its day in court—and was told that that day was irrelevant.