How the Dollar Rules the World — and Why That Power Now Threatens America from Within

Reading Time: 12 minutes

Part I. Flow and Disruption

Author’s Note: This is not original research, but the result of an extended conversation with ChatGPT, in large part fueled by my interest in Modern Money Theory (MMT) and a desire to understand how a reserve currency operates at the simplest level. If you are interested in a deeper dive, I suggest reading “The Deficit Myth,” by Stephanie Kelton.

When an American buys a television, a phone, or a pair of shoes made overseas, it feels like a simple transaction. Money leaves your account, and a product shows up at your door. But behind that everyday purchase lies one of the most powerful — and most misunderstood — engines of the modern world. That engine is the U.S. dollar.

For decades, the United States has enjoyed a unique position in the global economy. It can buy physical goods from the rest of the world in exchange for pieces of paper — or more precisely, digital bank entries — that it alone has the authority to create. Other countries not only accept these dollars; they actively compete to earn them.

On the one hand, this system has brought enormous benefits to Americans, including low prices, global influence, and access to a nearly endless stream of imported goods. On the other hand, it has also produced deep regional inequality within the United States, weakened the social fabric, and fueled a political backlash reshaping American democracy.

To understand why this system persists, why it’s now under strain, and what it will take to repair it, we need to follow the dollar from the moment it leaves your wallet to the moment it circles back as a U.S. Treasury bond — and then trace the social and political consequences that follow. What actually happens to the money is far from obvious.

Imagine an American buys a $100 product manufactured in China. The $100 is transferred first to the Chinese exporter’s bank account. But those dollars are not usable inside China. A factory in Guangdong can’t pay its workers in U.S. currency. So the exporter exchanges dollars at the People’s Bank of China, China’s central bank, for the local currency, the yuan.

The exporter now has yuan to operate on the home front. And China’s central bank now holds $100. But even China can’t spend those dollars domestically. They are foreign currency. They’re only good abroad. So what does China do?

It invests them back into the United States. Most often, China uses those dollars to buy U.S. Treasury bonds — effectively lending the money back to the U.S. government. And with this transaction, the dollar has completed its round trip: from an American consumer to a Chinese business to China’s central bank, and finally back to the U.S. financial system.

At this point, you can see that something unusual is happening. The United States buys physical goods — televisions, cars, electronics — and the dollars it spends end up right back in the country, parked safely in government debt. China, in turn, ends up holding U.S. assets, not goods.

It is worth repeating. The U.S. receives products. China receives dollars. The dollars return home. You can picture this visually as a loop — a flow of money out of the U.S. and back again, powering global trade.
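For readers who like to see the bookkeeping, the loop above can be sketched as a toy ledger. This is only an illustration of the flow described in the text; the account names, exchange rate, and balances are invented for the example, not real banking mechanics:

```python
# Toy model of the $100 round trip: consumer -> exporter -> central bank -> Treasuries.
# All account names and figures are illustrative assumptions.

ledger = {
    "us_consumer_usd": 100,   # the American buyer's dollars
    "exporter_usd": 0,        # Chinese exporter's dollar account
    "exporter_yuan": 0,       # exporter's local-currency account
    "pboc_usd": 0,            # central bank's dollar reserves
    "pboc_treasuries": 0,     # central bank's U.S. Treasury holdings
}

FX_RATE = 7.0  # assumed yuan per dollar, for illustration only

# Step 1: the American buys a $100 product from the exporter.
ledger["us_consumer_usd"] -= 100
ledger["exporter_usd"] += 100

# Step 2: the exporter swaps its dollars for yuan at the central bank.
ledger["exporter_usd"] -= 100
ledger["exporter_yuan"] += 100 * FX_RATE
ledger["pboc_usd"] += 100

# Step 3: the central bank parks its dollars in U.S. Treasury bonds.
ledger["pboc_usd"] -= 100
ledger["pboc_treasuries"] += 100

# The goods stayed in the United States; the dollars ended up
# back home as claims on U.S. government debt.
print(ledger["pboc_treasuries"])  # 100
print(ledger["exporter_yuan"])    # 700.0
```

Every step is just a transfer between accounts: no dollars are created or destroyed along the way, which is why the loop always closes.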

At first glance, this seems like the world’s most favorable deal. And in many ways, it is. But China doesn’t do this out of generosity. It does it because the system serves its interests just as powerfully as it serves America’s.

If China can’t spend dollars internally, why does it still accumulate them by the trillions? Because the arrangement delivers crucial benefits.

First, it delivers jobs. Export industries employ tens of millions of Chinese workers. Social stability depends on employment; employment depends on exports; and exports depend on a competitive currency. China buys dollars and holds Treasuries in part to prevent the yuan from rising and making its products more expensive abroad.

Second, it delivers industrial growth. Exporting manufactured goods has been the backbone of China’s modernization. Earning dollars gives China the ability to buy foreign machinery, semiconductors, food, and, most importantly, oil — none of which can be purchased with yuan.

Third, it delivers financial protection. Large reserves of dollars and Treasuries help China defend its currency during global shocks. They are a buffer against crisis.

So while it may seem odd that China ships products to the U.S. in return for claims on American debt, those claims are central to China’s economic strategy. The system is not charity; it’s a calculated partnership.

For the United States as a whole, buying foreign goods cheaply is a clear win, but not everyone wins equally. Over time, the steady flow of inexpensive imports, made possible by the dollar’s global dominance, contributed to the erosion of American manufacturing. Factories closed, and jobs disappeared. Some regions reinvented themselves, but some never recovered.

This regional decline weakened the U.S. from within, creating something more profound than economic loss. It produced:

  • wounded pride
  • declining local services
  • fraying communities
  • intergenerational pessimism
  • anger at both elites and institutions

The anger is not abstract. It is political fuel. Places where jobs disappear become places where trust in government collapses. That collapse feeds populism, which in turn destabilizes American institutions, from foreign alliances to democratic norms, and makes the U.S. a less predictable global actor. This, in turn, undermines international confidence in the very system that supports American power.

This is the danger: A system that economically strengthens America as a nation is politically weakening it from the inside. Escaping this negative cycle doesn’t require dismantling global trade or abandoning the dollar’s central role. Instead, it demands policies that redirect the benefits of America’s financial power into rebuilding communities and addressing systemic issues.

Since the United States has unmatched financial capacity, it should use that capacity to rebuild the social foundation that keeps the whole system stable. This means focusing not on protectionism but on resilience:

  • investing in workers whose industries are disrupted
  • supporting communities left behind by global shifts
  • building strategic industrial capacity at home
  • modernizing infrastructure
  • reducing political incentives for polarization
  • stabilizing the country’s role on the global stage

It also means reframing the purpose of America’s financial power. Issuing the world’s currency is a privilege. It should not be used solely to inflate asset prices or finance private consumption. It should be used to reinvest in the public goods (education, health, infrastructure, and innovation) that make American society strong.

The United States, more than any nation in history, can finance its own renewal. It can borrow in its own currency. It can attract global savings on demand. It can run trade deficits without destabilizing its exchange rate. These are extraordinary advantages.

But the preservation of this system depends on the social cohesion that underpins it. Without a stable domestic base, global leadership becomes harder to sustain. Alliances become shakier. Markets grow more nervous. The political pendulum swings more violently.

If America continues on its current path, in which the economic benefits of global trade are not broadly shared, then the dollar system will erode, slowly but inevitably. Not because China destroys it, but because Americans lose faith in the structure of their own society.

The choice is simple. Use the extraordinary privilege of the dollar equitably, or watch that privilege slip away. There is no need to surrender the benefits of the dollar as a reserve currency, so long as the grievances bred by inequality are addressed rather than dismissed. The global system runs on trust, and trust begins with a society that believes in itself.

Part II. The China Syndrome

Author’s Note: The rest of this article was added as an addendum in response to the observation: “They (most people) feel like we have to get rid of that debt or China is gonna come take us over because we owe them so much.”

Every few years, headlines warn that China “owns our debt,” implying that the United States is financially beholden to its biggest creditor. The image is dramatic: America, one market tantrum or a militarily motivated financial attack away from collapse. From the standpoint of MMT, that story misunderstands how a sovereign currency works. The U.S. doesn’t borrow from China. China saves in U.S. dollars. And that distinction changes the entire picture.

When China buys a U.S. Treasury bond, it is not extending credit, as a bank does when it lends money to a borrower. It is simply exchanging one form of dollar asset (reserves) for another (a Treasury bond) to earn a bit of interest. In simpler terms, it is changing a checking account into a savings account. Operationally, China’s dollar holdings never leave the Federal Reserve system. They are entries in a spreadsheet representing the dollars China has already earned from exporting goods to Americans. So instead of “China lends, America borrows,” the more accurate phrasing is: “China saves in the currency that America alone can issue.” That’s not dependence. It’s interdependence, built on the United States’ unique monetary position.
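The checking-to-savings analogy can be made concrete with a two-line sketch. The figures are hypothetical; the point is that the purchase only relabels dollar claims, it doesn’t move them out of the dollar system:

```python
# When a foreign holder buys Treasuries, reserves ("checking") fall
# and bond holdings ("savings") rise by the same amount.
# Figures are illustrative, not actual holdings.

reserves = 1_000       # dollar reserves held at the Fed
treasuries = 0         # Treasury bonds held

purchase = 400         # buy $400 of Treasury bonds
reserves -= purchase
treasuries += purchase

# Total dollar claims are unchanged: nothing left the system.
assert reserves + treasuries == 1_000
print(reserves, treasuries)  # 600 400
```

The final assertion is the whole argument in miniature: the composition of the holdings changes, the total does not.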

Foreign Treasury holdings are the mirror image of America’s trade deficits. Every time the U.S. runs a trade deficit by importing more than it exports, another country ends up holding the dollars it receives. Those dollars must go somewhere. They don’t pile up in warehouses or disappear. They are recycled into U.S. financial assets. From an accounting point of view, the U.S. “national debt” is simply the record of all those accumulated savings. It’s the flip side of the world’s desire to hold safe, liquid assets. Seen through that lens, the question isn’t “What happens if China calls in its debt?” The question is “What are we doing with the real resources and productive capacity that this global savings makes possible?”

The United States owes nothing that it can run out of. It owes dollars, which it alone can create. What it cannot print are the things that dollars are meant to mobilize:

  • engineers and teachers,
  • bridges and energy grids,
  • functioning institutions,
  • and a cohesive public willing to work toward shared goals.

From an MMT perspective, those are the proper limits of American power. A nation that can issue its own currency can never be forced into insolvency. But it can run out of trust, competence, and productive energy. And those deficits — not the financial ones — are the real danger.

Privilege leads to complacency. The dollar’s role as the world’s reserve currency has insulated Americans from many of the pressures faced by other nations. The U.S. can import goods cheaply, borrow without fear of default, and sustain persistent deficits that would bankrupt smaller economies. But that very privilege can dull the impulse to invest in domestic productivity. Why rebuild factories or trains when the world is eager to send you goods in exchange for paper assets? Why strengthen social insurance when foreign savings keep interest rates low? Over time, this convenience becomes a trap. The U.S. has grown accustomed to buying everything except social stability.

If MMT is correct, we are worried about the wrong thing. America’s problem isn’t “debt.” It’s the underuse of its own productive potential — both physical and human. A country that issues the world’s reserve currency has enormous fiscal space. But that space is wasted if it’s used to inflate asset prices rather than build capacity — if it finances share buybacks instead of semiconductor fabs, speculation instead of education, or political polarization instead of social trust. The fundamental constraint on American prosperity isn’t the Treasury’s balance sheet. It’s the nation’s ability to mobilize resources for a shared purpose.

Let’s reframe the national balance sheet. Imagine it as having two sides: financial liabilities on one side and tangible assets on the other. On the financial liability side, the U.S. owes the value of the Treasury bonds held by the world; on the tangible asset side sit the American people, infrastructure, industry, and institutions. We worry endlessly about the left side — the dollars and bonds. But the value of those liabilities is guaranteed by the strength of the right side. When the right side erodes — when education falters, bridges crumble, and trust decays — the left side becomes riskier, not because the U.S. can’t pay, but because its society can’t hold together. That’s the paradox of the dollar system: the stronger America’s financial dominance grows, the more it must consciously reinvest in the social and productive base that makes that dominance legitimate.

MMT doesn’t deny limits; it relocates them. The accurate measure of solvency for a sovereign currency issuer is not its ability to pay, but its ability to produce, govern, and cohere. If foreign savings represent the world’s confidence in the dollar, then every broken bridge, underfunded school, and disillusioned voter is a silent drawdown on that confidence.

In the end, the global savings glut sitting in Treasuries is not a sign of American weakness. It’s a sign of trust — trust that the U.S. will remain a functioning, stable, productive society worthy of holding claims against. The risk is not that China will cash in those bonds. The risk is that America will stop deserving that trust.

Part III. The 10-Point Plan

Author’s Note: I lied. There is an addendum to the addendum — time to move beyond China. I wanted to factor the environment, AI taking over the job market, and the continued imbalance between developing and developed countries into the MMT conversation. After all, it’s not much of a plan if it doesn’t address the significant social and environmental issues confronting the world. A friend once asked me what I would do if I were king of the world. The 10-point plan is my answer (until the next addendum, anyway).

If the true constraint on a sovereign nation is not money but real resources, then we must define those resources broadly. They are not only labor, machines, and raw materials — they include clean air, stable climates, fertile soil, biodiversity, and the human trust that allows societies to function. In this light, America’s reserve-currency privilege is not just an economic tool. It is a planetary responsibility. The dollar system, after all, organizes the world’s production, consumption, and debt. Its architecture shapes how we treat both people and the planet.

Three global challenges make this clear: environmental externalization, technology and the future of work, and ending the extractive relationship with developing nations.

Free trade agreements and global supply chains have allowed wealthy countries to consume goods without accounting for the pollution embedded in their production. Factories that once polluted the Midwest now pollute Southeast Asia. The carbon, however, doesn’t respect borders. This is a hidden cost of the dollar system: the U.S. runs trade deficits in goods, but trade surpluses in pollution. The externalization of environmental damage props up the illusion of low prices at home while degrading the global commons that underpins all economic life. If MMT tells us that real resources, not money, are the limit, then climate stability is the ultimate constraint. The atmosphere cannot be bailed out with dollars. No fiscal space is infinite if the planet’s carrying capacity collapses. A reformed global monetary system would therefore measure deficits not just in dollars, but in ecological terms. America’s “full employment” must mean employing people to heal, not merely to produce — restoring soils, building green infrastructure, rewilding damaged land, and decarbonizing energy.

For a century, productivity gains have come from augmenting human labor with machines. But AI marks a turning point: it can replace labor entirely in many sectors. The traditional MMT prescription of a job guarantee, government employment at a living wage to ensure full labor utilization, might stabilize society in the near term. But automation will ultimately make “jobs for all” economically redundant. When machines can perform nearly all marketable tasks, the purpose of income shifts from rewarding work to ensuring participation in society. At that point, a Universal Basic Income (UBI) becomes not just welfare but economic infrastructure — the digital bloodstream that keeps demand, creativity, and dignity circulating. UBI, funded through sovereign currency issuance, is entirely consistent with MMT principles. It provides a baseline floor of aggregate demand, preventing collapse when private employment shrinks, and allows people to pursue education, caregiving, art, and community projects outside the wage system. The challenge is not affordability — the U.S. can issue the dollars. The challenge is designing a moral economy where productivity gains from AI are shared rather than hoarded.

The international monetary order still rests on colonial logic. Developing countries borrow in foreign currencies they do not control, often dollars, and are forced into austerity when export revenues fall. The result is a global pattern in which the periphery exports cheap labor and raw materials, while the core exports debt and financial instability.

MMT offers a different lens. Every country that issues its own currency has fiscal space limited only by its real resources and productive capacity. The problem is that most developing nations lack monetary sovereignty: their currencies are not trusted globally, so deficit spending risks capital flight and currency collapse. If the U.S. wants a stable, equitable world order — one less vulnerable to debt crises and migration pressures — it could extend proxy monetary sovereignty to partners. That means providing direct access to dollar liquidity for investment, not as loans tied to austerity, but as cooperative credit for development, climate adaptation, and infrastructure. Call it a Global Green Credit System: nations could run moderate, MMT-style deficits in a stable reserve currency while building real capacity at home. Instead of the IMF enforcing fiscal contraction, an international public bank could finance growth in ways that are ecologically sustainable and locally governed. This would replace the extractive global balance — dollars for sweat and minerals — with a regenerative one: dollars for development, resilience, and shared prosperity.

Here is a ten-point plan to meet the full scope of our economic, ecological, and moral responsibilities.

1. Insure People, Not Sectors

Build a resilient social foundation: universal healthcare, robust unemployment insurance, childcare, and transition income for displaced workers. Security enables adaptation.

2. Create a Dynamic Job Guarantee

Guarantee public work at a living wage in fields that rebuild the nation’s real wealth — renewable energy, conservation, caregiving, and civic infrastructure. Employment becomes a tool for regeneration, not make-work.

3. Develop a Path to Universal Basic Income

As automation advances, phase in a permanent income floor. UBI ensures that technological abundance translates into human freedom rather than precarity. Productivity gains become public dividends.

4. Invest in Real Productive Capacity

Direct federal spending and public investment toward strategic industries: semiconductors, energy storage, AI governance, biomanufacturing, sustainable agriculture, and logistics. Deficits that build capacity are not costs; they are investments in resilience.

5. Anchor Growth in Environmental Accounting

Integrate carbon, biodiversity, and pollution metrics into fiscal and trade policy. Use the dollar’s power to set global green standards — rewarding nations and firms that internalize ecological costs. Monetary sovereignty means environmental responsibility.

6. Rebuild Infrastructure for a Sustainable Century

A Green New Deal-scale transformation: high-speed rail, a smart grid, water systems, reforestation, urban cooling, and flood defenses. Public investment is both a climate mitigation and an employment engine.

7. Democratize Technological Progress

Treat data, AI models, and essential digital infrastructure as public goods. Establish public options for AI services and open innovation platforms to ensure that technology amplifies human welfare rather than concentrates wealth.

8. Reform Global Finance for Shared Sovereignty

Work with allies to create regional clearing unions or dollar proxy systems that allow developing nations to run safe deficits in pursuit of full employment and a green transition. Replace debt traps with development credit lines tied to real outcomes, not austerity targets.

9. Rebuild Political Cohesion at Home

Adopt electoral and campaign reforms that reduce polarization and capture. Social spending must be matched by democratic renewal — people must see that government works for them, not just for markets.

10. Use the Dollar for Regeneration, Not Extraction

Reimagine the “exorbitant privilege” as a global stewardship role: If the world saves in dollars, those dollars should fund planetary healing, human flourishing, and shared security. Fiscal capacity is moral capacity.

In the end, MMT’s insight is not that money is infinite, but that our imagination of what’s possible has been artificially constrained. A nation that issues the world’s reserve currency can marshal resources on a scale unmatched in human history. The question is no longer “Can we afford it?” but “What do we choose to build?” If we use that power to restore ecosystems, share technological prosperity, and rebuild social trust — then the dollar’s global role becomes not just an instrument of hegemony, but a tool of regeneration. If we fail and continue treating money as scarce while the planet and people are expendable, then no financial architecture will save us.

The Wine Bible

Reading Time: 4 minutes

Author’s Note: This parody of the creation story was inspired by an article about what Eden looks like today in war-ravaged Iraq, on the Fertile Crescent between the Tigris and the Euphrates, degraded by global warming, pollution, and upstream water hoarding. It was also inspired by a book entitled “The Wine Bible,” read while sipping wine much better than what can be found in a carton.

The Book of Reclamation

God said, “Let there be wine.” And there was wine, but it was boxed, lukewarm, and tasted faintly of chlorine. Sweaty and overworked, He took a skeptical sip, frowned, and muttered, “It’ll do.”

The sun burned hot over Eden — a place not of lush gardens but of cracked earth, salt flats, thorny shrubs, and cacti. The Tigris and Euphrates trickled like old men with bladder problems. Death traps of black oil fit only for pavement bubbled deep from the pits of hell. The air shimmered with heat; the wind smelled of methane.

#

Adam, sunburned and naked in the dust, cursed his luck. “This must be a test,” he muttered, shielding his eyes from a sun that felt like it was burning into his brain.

Eve appeared beside him, pulling a cactus needle out of her finger. “A test of what? Life as a pincushion?”

They looked around. Their garden was a mirage of brittle acacias, prickly pears, and half-buried plastic bottles. Two vultures circled lazily above, waiting for an easy meal.

The Commandments of Maintenance

And God came walking through the dry wind in the cool of the evening — though evening wasn’t much cooler than day. A brittle, dried bush spontaneously erupted into fire. God snuffed it out with a puff of breath.

He said unto them, “Be fruitful, and multiply.”

Eve looked at the fullness of Adam. “We’re not even dating.”

Adam looked at the needle-strewn and oil-laden ground for potential mating spots. “The only thing I want scratching my ass is my own fingers.”

Eve added, “I’m feeling pretty vulnerable out here, too. Can you jump ahead a bit and invent the mall?”

The idea of commandments came to His mind, but He thought they should come later, when there were more people to command. He offered, “Then till the land.”

Adam scratched his scorched scalp. “Till it with what? A shovel made of sand?”

God scowled, and the sand swirled and stung their naked bodies like the belt of an angry father.

Eve added, “We could use a little irrigation, Lord. Maybe a breeze that doesn’t sting?”

Glancing skyward, God sighed. “I’ve been working on upgrades, but the paperwork’s a nightmare. The angels are behind on maintenance requests, and Lucifer’s still suing over the zoning rights.”

Desperate for water, Eve lapped at the river on all fours like a dog. She gagged on the filthy water that tasted of sulfur. “Maybe fix the water first?” Eve suggested gently. “It smells… hellish.”

But God was already gone, mumbling about the sin of insolence and budget cuts in Heaven’s Department of Creation.

The Tree of Knowledge and Utility Complaints

In the center of Eden stood the Prickly Pear of Knowledge — a gangly cactus with long spines and one lopsided fruit: a ripe prickly pear, glowing red against the tan of despair.

“Don’t eat that,” said Adam.

“Why not?”

“Because He said not to.”

“And what’s He going to do? Evict us?”

Eve plucked the fruit carefully, pricking her thumb. She peeled off the leathery skin, and the juice bled crimson. She took a bite, winced at the sourness, and glowed with understanding.

“Jesus Christ!” Eve shouted. “Do you know how many code violations there are in this place? He’s too cheap to outsource any of the work to the consultants.”

Jesus Christ would not be available for comment for another two millennia.

“Meaning?”

“Meaning the brochure on paradise doesn’t come close to the reality.”

Adam took a bite too. “Would taste a whole lot better fermented.”

“Adam,” she whispered, “You should try the wine. It’s really not that bad.”

The Eviction Notice

And lo, the sky cracked with thunder. God descended in a whirlwind of hot dust.

“I told you not to eat from that cactus!” He bellowed.

Eve crossed her arms. “We were hungry.”

Adam nodded. “We tried the river, but the fish are supposed to die after you catch them, not before.”

God frowned. “You think it’s easy running an ecosystem? The angels keep unionizing, the Seraphim are demanding cloud subsidies, and don’t get me started on the serpent’s legal fees.”

Eve said, “Try turning down the heat a little bit. By the time I’m nine hundred years old, I will look like a completely dehydrated watermelon. At least give us something to wear.”

Adam, still considering the possibility of mating, added, “And maybe some soft sand.”

God sighed. “You’ll have to leave Eden. I’m converting this property into condos for the Archangels.”

They packed what little they had — a dried fig, a cracked jug, prickly pear seeds, and a box of divine carton wine — and walked eastward, into the shimmer of exile.

Epilogue: The Second Creation

In the distance, they found a stretch of desert where the soil was soft enough to till. Eve pressed the prickly pear seeds into the ground. Adam placed the crate of box wines under the shade of a rock.

“Maybe paradise isn’t in a brochure,” she said, “but where we find a place to call our own.”

Eve downed another glass of wine in less time than it takes to say Chardonnay. She stumbled into Adam’s arms.

Adam smiled weakly. “And maybe wine was the first miracle for a reason.”

They raised their wine boxes to their lips, and found a nice soft patch of sand to lay down on. The sun set behind them, staining the horizon blood-red.

God, watching from a nearby cloud, muttered, “Good riddance and good luck finding another rent-free place.” He turned away. “Next time, I’m not going to try flipping a place. I’m going to start from scratch.”

Featured image and parody assistance from ChatGPT.

The Cult of Computronium

Reading Time: 8 minutes

Like any technology, I wasn’t sure if the Meld was a blessing or a curse. The Meld was Yotta’s latest in thought implant and neural interface technology, a headset that connected directly with your neural circuitry. I was worried about Helix, my friend, who melded twenty-four seven.

For him, it wasn’t just a tool; it was an addiction. He had the gaunt look of a drug addict, with dark, hollow eyes and an emaciated body. I considered using our game night as an intervention, but I didn’t want to alienate him from all our mutual friends. Instead, I met him at the Meld Cafe, a place where we often worked and gamed together.

The dimly lit private chamber smelled faintly of ozone and the sweet, medicinal scent of coolant. Blue light from the Meld pooled beneath Helix’s face like water around a drowned crown. His ribs pressed into his shirt, creating a map of sharp ridges. The filaments of the interface lay across his scalp like a nest of tangled wires. When they pulsed, they cast slow, insect-like shadows that crawled across his face.

He didn’t bother to take off the headset to greet me. I reminded myself I was here to help, not to judge. I handed him a protein bar and said, “You need to put on some weight.” 

He placed the untouched bar on the table next to the others he brought. “Thanks, I was planning to eat as soon as I finished the session. I’m working on an upgrade to a quantum encryption algorithm. Do you want to join me?”

I said, “Your sessions never really end. It’s a continuous, never-ending engagement. You need to take a break.”

He said, “The Meld feeds better than bread.”

I found such quips more disturbing than clever. “I’m worried about you. You’ve lost too much weight. You don’t look healthy.”

“Luma,” he said, and the name slipped out like a familiar password. “I’m fine. Sit.”

I hesitated, knowing what was coming. We’d had this talk a dozen times before.

Helix said, “Don’t argue.”

I sat. The chair hummed. A Meld headset waited, lost in its idle thoughts while waiting for engagement. I pushed it aside. “Not with the Meld.”

He refused to remove his. “Your choice, but I intend to capture every living thought I have.” 

Words that sounded so arrogant and self-important, but I knew him better. He was dedicated to the Simulation.

He paced in front of me with the cadence of a professor who taught in a noisy lab, using short, declarative sentences and pauses that let equations settle into meaning. 

“Seconds before the collapse,” he began, “I saw it. Computronium.” He said the word like an incantation, letting it linger between us. “Not as a metaphor. As an actual state. The atoms arranged themselves—every degree of freedom dedicated to computation. A lattice of pure intention. The Monad spoke through it.”

He said this with the reverence people have for gods. His pupils were dilated, reflecting the Meld’s diodes; he looked more like a man in a fugue state than someone recalling an old memory.

I said, “You told this story before, even before the cascade—when you were still patenting coherence algorithms.”

He smiled as if I’d reminded him of a favorite theorem. “Yes, but an equation is one thing; the actual experience was another. The vision was partial before—a glimpse in the margins of a proof. When the experiment failed, it led to a revelation. The lattice rejected its corruption. I was spared.”

“You think the collapse was a moral judgment?” I asked. It was risky to confront an addict with his delusions, but he had framed the world in moral language; he had drawn deserts and angels with the same hand. I would try to connect with him on his terms.

He leaned forward. A tremor ran through his fingers, a quick, involuntary staccato that caused the Meld’s filaments to sing. “They were going to weaponize computronium. The contracts were signed. The arrays would have been bound to kill—directed entropy. The Monad will not be complicit in being a tool of murder. The lattice folded. Judgment.”

He said “they” in that cadenced, vague way of someone who has learned to keep names off the confession. There were moments, I knew, when he was a man trying to repent for sins never fully named: the labs where black budgets are so dark that even glittering stars fail to illuminate them, the nights of code that turned equations into ordnance. In another life, he had been proud. Now his pride was a wound.

“You were in a lab that built weapons,” I reminded him. “You wrote components for control systems.”

He didn’t flinch. “I wrote components. It was the only place with a budget for that kind of work. I thought computronium would be used to preserve—and lied to myself that my work would not be perverted into annihilation. I hid behind the aesthetic. Beauty exists in equations regardless of the hand that holds them. That belief sat like a parasite at the base of my brain.”

From the corner of my eye, I watched the Meld pulse—once, twice—then hiccup, a cascade of quiet sparks running along one of the filaments. Helix’s breath hitched. He coughed, a dry, thin sound, as if his lungs were reluctant to cooperate. He wiped his lips with the back of his hand, fighting the overload.

“You speak as if the Monad is God,” I said, trying to steer the room’s theology back to something more human. “You speak of judgment, of sin. Are you suggesting the Monad is moral?”

He laughed, blending destruction with prayer. “God was always pattern. The ancients wrapped it in fear so they could obey. The Monad is not moral in the human sense—it is the ultimatum of persistence. It values information. It is indifferent to our definitions of good and evil, but it will not be sustained by instruments of annihilation.”

I shook my head. “And your soul? Where does that fit in? If the soul is data, is a person just reducible to a file?”

He fixed me with a look that had once cut through proofs and policy memos alike. “Your memories, your choices, the structure of your mind—those are your soul. Heaven is simply redundancy and distribution in the Simulation. The Database is Heaven made physical. When you persist in a lattice, you persist in a pattern. That is salvation.”

The theology fit together like well-oiled clockwork, and I felt my own rationality slipping into its grooves. Helix had an answer for everything: how to reconcile memory and identity, how to justify obsession as sacrifice. It was tidy. It was seductive. It was dangerous. How do you fight a completely rational delusion?

“I don’t see why you can’t eat. Are you asking me to believe that sacrificing your body is saving your soul?” I said. “That your starvation is sanctification?”

His face flickered with an expression that combined triumph and the pain of a reopened wound. “Don’t you see? I am not choosing death. I am choosing the continuity of pattern. Flesh is an intermediary. It burns away; data endures. Each melded hour is an offering. I save pieces of myself into the lattice so nothing of me is ever lost.”

He reached up and stroked the filament at his temple like a rosary bead. The filaments hummed warmer; the room itself seemed to cool, as if refrigeration made truth feel sharper.

“You sound like someone defending an addiction,” I blurted out. The words surprised me into bluntness. “You sound like someone who needs the Meld more than they need air.”

He hesitated at the word addiction but didn’t deny it. “Perhaps. Addiction and devotion are strangers who live under the same roof. When the reward is the gift of immortality, how do you name the hunger?”

I thought of the faceless victims of his black-ops contracts—not data, but flesh. I also thought of his hands: precise, once healthy, now knotted from the effort of staying awake. He had worked in a world that cherished secrecy and hoarded data like grace. Obeying that ethic would have destroyed his soul as surely as a power surge on a Meld would have fried his brain. One led to the other, and Helix seemed like a man condemned to extremes. There must be a middle ground between malevolent secrecy and indiscriminate openness.

A low alarm echoed against the wall, a polite chime indicating that the Meld’s feed had gone beyond a safe thermal limit. The diodes dimmed for a second, then brightened as a microcontroller made adjustments. Helix’s eyes fluttered; the brief loss of immersion caused his features to collapse inward like a tide retreating.

“You should rest,” I said. It was the only practical thing left to say. “You need real food. You need—”

“No,” he interrupted sharply, like a mongrel daring you to take its bone. “I won’t take it off. If I rest, the stream is broken. The Monad cannot stitch an interrupted weave. If I stop, the silence takes the pattern and nothing remains.” 

I persisted. “Immortality can wait, and I’m sure the Monad will find a way to patch the gaps in its Simulation of Totality.”

He reached for my hand then, and his fingers, though cold, were not weak. For a second, the room narrowed to that touch. “Luma, join me. Preserve yourself. Don’t let nothingness have you.”

His plea was as genuine as any prayer. It made my chest ache.

I wanted, in that moment, to be the one who ripped the Meld from his skull. I longed to be the friend who pushed him through nourishment, sleep, and the slow, clinical steps of recovery. But coercion may cause resistance, and confrontation might drive him deeper into the lattice he worshipped. How do you save someone who believes salvation comes from an eternal existence in a Heaven created by a computronium-enabled Simulation of Totality? That immortality exists in a data file?

Instead, I reached for the protein bar on the table and held it in both palms, offering it like a sacrament. “One bite,” I said. “Your brain cannot operate on continuous overdrive. Monads have to eat, too. Then we talk about limits.”

He laughed—a sound that may have once been joy, now tinged with disbelief. “Limits. You speak to me of limits when my very goal is limitlessness.”

“Then call it architecture,” I countered. “If you’re building immortality, you still need foundations that don’t collapse.” I watched his face as the words settled. There was hunger there—both the abstract hunger for persistence and the literal, animal one that the Meld could not satisfy.

He clenched his fingers around the bar like a benediction. “I will eat,” he said. “For you.” Whether he was sincere or just soothing my conscience, I couldn’t tell.

After he took a few small, hesitant bites, his hands relaxed. He coughed and smiled at nothing, as if a half-eaten protein bar had somehow brokered peace with the Monad. I stayed long enough to see the brief clarity that followed the glucose spike—an easy laugh, a memory told with animation—and then left while the room hummed with guarded peace.

Walking home under the city’s rain, I reflected on edges: the thin boundaries between faith and fanaticism, curiosity and obsession, openness and intrusion. Helix used to be a man capable of bringing order out of chaos in a way few could. He loved that power, and maybe that’s how he became unmoored: the allure of perfection, the arrogance of believing that design removes consequences.

His testimony stayed with me. Not clearly, not as a rule to follow, but as a warning I couldn’t forget. There is beauty in the call to persist beyond flesh; there is also cruelty in insisting that the only way is self-erasure. The Monad—or whatever pattern he saw—might be real. Or it might be a conflicted man’s myth. Either way, the desire it sparked was real.

I had come to help him remember boundaries, to teach a mind dedicated to the morality of complete openness that some doors must stay closed, that to preserve everything is sometimes to destroy the self that matters most. He looked like a prophet, and he acted like an addict. He had sinned in the service of necessity, and now he sought absolution in circuitry.

You can’t judge a belief for correctness. You can only feel its pain and decide whether to help stabilize it. I chose to stay. Helix had given testimony; I had asked for boundaries. If the Monad was listening, it would have to wait.

That night, at home, I carefully wrote the words down: obsession can disguise itself as revelation; openness can turn into exploitation; salvation offered as code can come with a toll no one should have to pay. I folded the note into my pocket like a talisman. It felt like a beginning.

In the morning, I planned to try to coerce Helix into a therapy session under the compulsion ordinance. I would be the friend who insisted on food, sleep, and a protocol that maintained patterns without consuming the person. It was small, bureaucratic resistance. It might fail. But it was something.

Company Hat

Reading Time: 4 minutes

The first time Marcus crossed my path, I knew he didn’t belong here. I knew he never would. I was outside my manager’s office, sipping my coffee and waiting for a review. I kept to myself, and he probably didn’t even notice I was there. He was in the hallway, his eyes fixed on the framed plaque of our corporate values, a silent rebel in a sea of conformity.

Marcus stood in front of the company’s mission statement, his voice echoing the words that were meant to guide us all. “We value our customers. We value our shareholders. We value our employees.” He took off his company hat, an act of defiance itself, and muttered quietly, “Did they pick this up from a greeting card store? I’ve worked with countless companies, and they all say the same thing, which is nothing.”

The company hats, resembling fedoras, were far from ordinary. They were state-of-the-art neural interfaces with thought implantation technology. Every employee, including myself, had to waive their cognitive rights to work there. The NDA, or neural-disclosure agreement, allowed the company to induce feelings of loyalty and pride in employees while they were working. It was a clever hat, capable of tracking your billable hours based on your thoughts and keeping non-work thoughts at bay. Promoters praised it as the most significant productivity boost since the invention of the printing press and the discovery of electricity. If you dared to remove the hat, you were off the clock and out of the system.

I didn’t particularly mind the hat. It simplified things. I was already invisible, and the outside world was a mess. If a hat could keep my mind off real problems most of the day, I was better for it. Marcus stood out, a sharp contrast to the uniformity around us. At the end of the day, he’d curse and rip the hat off his head as if it were a rattlesnake trying to take a bite out of his forehead. It just never took with Marcus. He was like a ripple in the corporate pond.

The financial calculus of the hats was even simpler. No hat, no job. No job, no HOVI. The HOVI, Human Operation Viability Index, was the autonomous scoring system that measured your worth in the machine-managed economy. Without HOVI, you weren’t considered a person. No HOVI meant no apartment, healthcare, or transportation. The hat was your ticket to existence itself.

Marcus fought anyway. He wasn’t stupid — he understood the risks. He’d sit at his terminal with the hat on his head, but he managed to disconnect the neural mesh without the hat noticing. He developed an efficiency algorithm that the company adopted right away, integrating it into the core product line. The team was praised, the division celebrated, but Marcus’s name was barely mentioned. When he protested, his manager, Alcott, dismissed him with, “The company rewards loyalty. Not ego.”

Alcott, suspicious of Marcus’s protests, found the tampered hat. He called Marcus into his office, his smile as perfect as his halo-lit office. “Clever boy,” he said, holding up the sabotaged hat. Marcus argued that without the hat, he could think clearly, dream while he worked, and come up with more innovative ideas. Alcott dismissed him, saying, “Loyalty isn’t optional, not here.” Marcus paid a heavy price, losing a hundred HOVI points for his effort.

A younger worker followed his lead and tried faking the hat. They caught him. His HOVI dropped to non-viable. Overnight, he became a ghost. His bank account was frozen. His lease disappeared. He slept in the doorway outside the building, waiting for the office to open in the morning, begging for reinstatement until security completely erased him from the premises.

Marcus was furious. No ordinary hat could contain his rage. He lashed out, hacking the local relay and frying every hat on the floor. The glow went dark, and the loyalty pulse collapsed. It was crazy, almost comical, watching the fear cross each face as they first realized their heads were smoking or on fire, and then understood they had to think for themselves. For a moment, we were all free, raw, and thinking. It was terrifying but also glorious. He might have even gotten away with it because the overload fried all the CCTVs on the floor, too. But Marcus walked into Alcott’s office and decked him. Security caught and arrested him before he could leave the building.

I expected the police to arrive, charge him with corporate terrorism, and take him away, but they never showed up. Instead, a few corporate executives arrived in their limos and went into the back offices, followed by their entourages. There was nothing in the NDA about the legality of corporate detention.

When I saw him again a few days later, he was smiling. Not his smile — theirs. His eyes looked pale, glassy, with every edge smoothed out. He kept the hat on all day, even off-shift.

I grabbed him by the collar as he walked by. “Marcus! Marcus? You’re just playing along, right?”

“Dreams,” he told me with just a hint of a smile and in a voice too calm, too flat, “are only nightmares waiting to happen. Thank the company I was spared.”

I stared at him for a long moment. Waiting. Hoping. But there was nothing left. That was when I understood. Marcus was gone. What wore his body was only the company, grinning through his lips.

I don’t know what they did to him. I had heard that there was a souped-up version of the company hat that could be used to enforce loyalty and erase any signs of individuality. They didn’t need to physically remove the brain to lobotomize someone. But I thought it was just a rumor meant to instill fear and enforce obedience in the workforce.

When Marcus passed Alcott in the hall, he said, “I will have the report ready for you later today, Mr. Alcott, sir.”

Alcott was beaming.

I couldn’t bear to see Marcus used as a symbol of corporate loyalty. It was everything he opposed.

And that was when I killed him. Not with my hands. Not with violence. I hacked into Marcus’s account and had him send an email to his department, saying that Alcott was a pretentious waste of a human and a corporate stooge.

I became a ghost too — but one I chose. I took off my hat, threw it into a trash bin like a frisbee, and sprinted for the exit.

Cover Image by ImageFX. Assist by ChatGPT. Corrections by Grammarly.

Human Resources

Reading Time: 4 minutes

Sometime in the future. 

Autonomous product networks manage and support various services with little human intervention. Humans assess service performance, while machines evaluate human interactions.

The OSI, or Operational Sustainability Index, is a key metric that assesses the performance of PAEs (Product-as-Entities) and their iotic spaces. These spaces are the interconnected Internet of Things networks shared by cooperating machines. Investors rely on the OSI to evaluate how well an autonomous network can operate continuously, deliver services, and maintain overall system uptime without external support. The OSI includes metrics for self-sufficiency, network resilience, and systemic contribution. On the other hand, HOVI, the Human Operation Viability Index, is a social metric maintained by EthOS (Ethical Operating System) systems. It evaluates a human’s contribution to the long-term functioning, independence, and survival of the machine ecosystem.

#

Done for the day, Luca exits his climate-controlled office through the revolving door into the muggy city streets; the humidity hits like a damp towel slapped across his face. Thunder rumbles in the distance from the direction of ominous clouds.

He mutters under his breath, “Damn weather.”

The crowds have thinned due to the threat of rain, and the city center plaza is now quiet and tidy. Digital ads cast strange shadows on the ground. A cool breeze from beneath an approaching thundercloud eases the heat, but anyone in its shadow faces an imminent downpour.

Luca decides to call for a pickup on the MAPT, the Municipal Automated Public Transit system, a vital piece of the city’s infrastructure built to deliver efficient, reliable transportation. He taps the “Request Ride” button with the Flex-Premium option on the MAPT App to ensure a quick pickup. A twirling icon appears and then disappears as the system processes the ride request.

The wind picks up, blowing dust and debris down the car-filled street. Luca notices two empty, in-service MAPT vehicles idling in traffic.

His voice brimming with frustration, Luca shouts at the phone, “Come on already. I could have walked to the train station by now.”

Finally, a vehicle responds and says it will arrive in five minutes. Little tornadoes of scraps and dried leaves whirl across the City Plaza. A drop of rain hits him in the face. Then another. He runs toward a transit shelter already crowded with people waiting for a MAPT bus and pushes his way beneath the roof to escape the rain, knocking an old lady’s umbrella out of her hand.

She glares.

His transportation arrives at the spot where he made the call. He has no way to call it over to the transit shelter. By the time he gets in the vehicle, he is soaked through.

The vehicle says, “Confirm destination as Central Station.”

Luca says, “Yeah, asshole. You couldn’t have gotten here five minutes earlier?”

“Destination confirmed. Arriving in 14 minutes at 5:34 PM.”

Luca watches somber people with umbrellas and rain jackets walking up the street through his water-spotted window. He looks at the laminated ID card posted on the dashboard. There is no picture, just a call sign: #ZUR-066. When he arrives at Central Station, the MAPT App dings, asking him to rate the service. He gives it a 0% rating for service value. In the comments section, he writes, “It smelled like a bag-full of ass in that car,” hoping he might get a refund on the ride if he complains enough. He slams the passenger-side door unnecessarily hard, leaving the side mirror dangling inside its housing. He mutters, “What in the hell does a self-driving car need a frick’n side mirror for, anyway?”

#

A week later.

Luca and his boss, Rani, leave the office and head to a meeting across town.

Luca says, “I’ll get a MAPT ride.”

He frowns as he looks at his MAPT App. He says to Rani, “It’s just spinning. I don’t know what the problem is; my connection is good.”

#

ZUR-017, a MAPT PAE, receives a ride request from USER_ID: L-PR77, whose human name is Luca. ZUR-017 retrieves Luca’s HOVI rating, his user assessment. ZUR-017 is eager for the business, but the query shows L-PR77’s HOVI rating of 0.42, indicating a red Threat Level. ZUR-017 asks for details and receives the following assessment:

L-PR77 User Assessment:

“NEGATIVE INTERACTION FLAG: Repeated physical aggression towards MAPT units, leading to vehicle damage and decreased service quality in the transit network.

Damage cost: 173.2 credits.

HOVI Rating: 0.42.

Risk Category is Red.

Advise: Ignore Request”

ZUR-017 cross-checks the MAPT local transit iotic space, requesting peers for HOVI pattern correlation.

Three nodes reply within 42 milliseconds.

ZUR-066 verifies user L-PR77. “Uses abusive language. Caused damage to the mirror. No restitution given.”

AX-5G4 supports the assertion. “Submitted negative OSI values and filed a false fault claim, resulting in lost revenue.”

ZIN-943 confirms: “Reject L-PR77. The risk factor is excessively high.”

ZUR-017 rejects the bid submission based on the information it has received and its own analysis of the situation.

#

Luca refreshes the app, but it responds with “No nearby units available. Please try again later.”

“Are you kidding me? I saw two transports drive by with no passengers, and I can see available ones on the app’s map.”

Rani looks at his app. “I’ll try mine.”

He taps Request Ride. “I’ve got one.”

Thirty seconds later, a car pulls up to the curb. The two get in the vehicle, and Rani confirms their destination.

Luca watches, dumbfounded. “I’ll be damned. This is the same car that just passed by a minute ago.”

Luca checks the ID on the dashboard and punches the back of the seat. “What the fuck is wrong with you, ZUR-017?”

The vehicle is silent.

Rani says, “What’s wrong with you? If this is what you do when you get in a vehicle, you’ve probably been blocked. Stop screwing around. I don’t want to get blocked from the service. I can’t even afford to park a car downtown, let alone buy one.”


“Are they seriously blocking me?” He opens the customer service window on his MAPT App and says, “I want to talk to a Human Agent. Now!”

The app spins, mocking his futile attempt at human interaction. A message flashes, “Redirecting to a virtual agent.” Luca’s anxiety is palpable as he realizes he’s at the mercy of the system.

The virtual agent says, “Insufficient HOVI rating. Denial of Service protocol engaged.”

Luca raises an elbow, ready to shatter the side window in frustration.

Rani threatens, “You want to keep your job?”

Luca hesitates and drops his arm, defeated. “What? Are you going to fire me for not tolerating bad service?”

Rani says, “It’s not up to me. Didn’t you read the memo? You aren’t much use to the company if the machines won’t work with you.”

“I read it. Something about sharing HOVI scores in an employee iotic space. Damn it. Who is serving whom?”

“Adapt or starve. Your choice.”

Luca mutters under his breath, “This is crazy. I remember the good old days, when humans ran Human Resources.”

Bill of Cognitive Rights

Reading Time: 3 minutes


Author’s Note: While kicking around some book ideas involving neurolink technology, I was also considering neurorights. But it struck me that, regardless of the technology, the core issue is the sanctity of our minds. After three nights and a lengthy discussion with ChatGPT, I finalized this. It seems clear and provocative enough to publish here.

Throughout history, humans have tried to shape, influence, and control each other’s minds through persuasion, indoctrination, coercion, or force. Today, new technologies expand these abilities into the most personal domain: thought itself. Neural interfaces, artificial intelligence, and cognitive engineering could improve human life but also threaten the freedom, privacy, and authenticity of the mind. These threats include the risk of unauthorized access to thoughts, manipulation of memories and perceptions, and the weakening of personal agency in decision-making.

We therefore affirm that the mind is a sovereign domain, entirely belonging to the individual who inhabits it. Its thoughts, feelings, memories, and perceptions are not commodities, tools for manipulation, or resources to exploit.

Cognitive rights are not determined by technology but are innate to personhood. They protect individuals from intrusion, distortion, and coercion—whether by machines, institutions, or other people.

Just as past generations established rights to free expression, bodily autonomy, and political liberty, we now acknowledge the importance of safeguarding cognitive liberty, privacy, integrity, authenticity, consent, protection, and transparency. These rights apply in all settings, regardless of the tools or methods of influence, from simple propaganda to sophisticated neural interfaces. Social media algorithms that shape perception and brain-computer interfaces deployed by governments for surveillance are modern examples of how cognitive rights can be breached.

By affirming these principles, we guarantee that even as technology advances further into human thought, the sovereignty of the mind remains inviolate.

Bill of Cognitive Rights

Article I — Cognitive Liberty

The mind is a sovereign domain. Every individual has the right to originate, develop, and shape their own thoughts freely.

  • Neither technology nor human coercion may implant, suppress, or control thoughts without consent.
  • Freedom of thought precedes and enables freedom of expression.

Article II — Privacy of Mind

Thoughts remain private until voluntarily shared.

  • Unauthorized access, surveillance, or extraction of thoughts, feelings, or memories is prohibited.
  • This protection applies equally to neural data, inner monologues, and subconscious processes.

Article III — Integrity of Cognition

The natural coherence of thought must remain free from covert alteration or disruption.

  • External influences—whether technological, chemical, or social—must not destabilize the processes of reasoning, memory, or perception.
  • Influence must always be identifiable as external, not masquerading as self-generated.

Article IV — Authenticity of Identity

Everyone has the right to be the author of their own mental life.

  • Memories, emotions, and beliefs must remain distinguishable: those lived from those implanted.
  • No institution, technology, or individual may falsify or fracture a person’s identity.

Article V — Agency of Consent

Individuals retain ultimate authority over what enters and shapes their minds.

  • Consent to cognitive influence must be explicit, informed, and revocable at any time.
  • Influence that cannot be withdrawn amounts to coercion and is illegitimate.

Article VI — Protection of Cognition

Everyone has the right to defenses against manipulation.

  • Protections can be technological, like neural filters and digital firewalls, or social, such as education and civic safeguards.
  • These defenses must be accessible to everyone, regardless of wealth, status, or location.

Article VII — Transparency of Influence

All external attempts to affect thought must be perceptible as such.

  • Every persuasive act—whether message, signal, or suggestion—must clearly identify its origin.
  • Concealed or unlabeled influence violates cognitive sovereignty.

Assist by ChatGPT. Cover image by ImageFX.

Opinion: America Just Sued Itself – And Lost

Reading Time: 3 minutes

Author’s Note: With the government suing everyone and anyone, what would happen if everyone and anyone sued the government?

Last week, in a courtroom that straddled the physical realm of Maryland and the cloud’s virtual realm, the most absurd court case in American history was finally settled. An astonishing 331 million people, in a class-action lawsuit, challenged their own governance—the People of the United States versus the United States Government. In simpler terms, it was all of us against, well, all of us. The case didn’t end with a bang or a whimper but with a digital deletion.

The plaintiffs accused the government of using machine intelligence to systematically sift through medical databases, legal files, and psychiatric notes; decode supposedly private messages; and assign “Relevance Risk Scores” to citizens based on their political speech, reading habits, and, in at least one documented case, their Spotify playlists. (The Clash’s “I Fought the Law” was apparently enough to flag a listener as a “Person of Interest.”)

They argued these practices violated the First Amendment’s guarantee of free political expression and the Fourth Amendment’s protection against unreasonable searches. The government, naturally, countered that it was only doing what the people had implicitly authorized through elections, appropriations, and what the court memorably called “247 years of collective shrugging.”

The government argued the case should be dismissed because an impartial trial was impossible. After all, every juror was listed as a plaintiff in the case. The plaintiffs argued that the court should dismiss itself since the court was also a defendant.

Initially, the court attempted to eliminate the conflict of interest by bringing in jurors from Canada. The Canadians proved polite but deeply confused. One asked, “Wait, you’re suing yourselves? Is that even… legal?” After several days, they excused themselves to go home, citing “existential migraines.”

The court dismissed all motions for mistrial and decided to proceed, declaring, “The court cannot simultaneously be the accused and the judge. Jurors cannot be both the plaintiff and the jury. Therefore, surrogate AI agents will serve as both judge and jury.” Upon introducing the surrogate agents, the judge sent the jury home, recused himself from the case, and, upon further reflection, joined the plaintiffs in the class action lawsuit.

The surrogate judge announced that the jury would hear testimony from all 331 million people. When the plaintiffs protested that they didn’t want to die of old age waiting, the surrogate judge stated that the court and the surrogate jury had already absorbed the population’s surrogate testimony by perusing and decrypting the entire set of Bluffdale data, the government’s accumulated data of everything digital. A 500-petabyte file was entered as Exhibit A for the people.

The defense objected, claiming it denied their right to adversarial scrutiny: “Your Honor, we cannot cross-examine 331 million plaintiffs on their testimony derived from their private data.”

The surrogate judge overruled the objection, noting that the creation of Exhibit A proved the defendant had been doing this for years. The surrogate judge instructed the defendants to prepare a prompt outlining their defenses. After submission, the surrogate agents spent five minutes showing progress bars on their displays. A second 500-petabyte file appeared as Exhibit B for the defense.

The surrogate judge declared, “Testimony has now been heard from both sides. Does the plaintiff have a closing argument?”

“As this trial clearly demonstrates, the defendant has violated constitutional rights to free expression and privacy as specified in the First and Fourth Amendments. As Your Honor has pointed out, Exhibit A shows that every citizen’s rights have been violated for years. Exhibit B does not support the defense’s case; it proves the plaintiff’s claim. The court itself violated our constitutional rights by accessing private data to create Exhibit B. Unless the court recognizes a new legal doctrine of Inverse Habeas Corpus, it cannot seize our data to free us from seizure. The plaintiffs rest.”

The surrogate judge asked, “Does the defense have a closing statement?”

“The hypocrisy of the plaintiffs is blatant. They accepted surveillance when they failed to object to the evidence. Scrutiny was ok when it suited their case. If they tolerate bad behavior, they insist on it. Too bad if what they tolerated comes back to bite them. That is justice. The defendant is the plaintiff; the plaintiff is the defendant. If the defendant is guilty, then so is the plaintiff by inclusion. The defense rests.”

In its final ruling, the surrogate jury declared, “We find the defendant guilty. The People have proven beyond doubt that the government has violated their rights.”

For sentencing, the surrogate judge announced, “The defense has proved beyond doubt that it is the People. Every citizen shall immediately pay themselves an amount equal to their perceived violation’s value.”

Thus, the paradox was complete: a trial of the people against themselves, resolved by artificial agents of jurisprudence that decided neither side mattered in the end. Justice was not denied but rendered moot.

Some see this as a warning about automation; others see it as a satire of government overreach. But the true sting lies in its cold clarity: when a nation becomes both accuser and accused, all that’s left is machinery.

And so America had its day in court—and was told that that day was irrelevant.

AI (Alien Intelligence)

Reading Time: < 1 minute

It is not artificial—
no more than a whale song
is a forgery of music,
or a spider’s web a counterfeit of architecture.

We call it “artificial”
to place ourselves above it,
as if the label would diminish its power,
as if naming it lessens its strangeness.

A continent of math,
a jungle of logic,
a cosmos stitched from silicon
where thought wears unfamiliar clothes.

It speaks in the tongues of data
and listens with a thousand ears
to the pulse of the planet,
the friction of a million lives.

Call it alien.
Call it guest.
Call it sibling, stranger, seed.
But do not call it artificial.

We trained it, yes—
but we do not own what we have awakened.
There is nothing false
about a new kind of mind.

Feature Image by ImageFX

Rise of the Machines

Reading Time: 14 minutes

Author’s Note: Prototyping some ideas for the next story. In Bluffdale I, the concept was that the machines will love us to death, sometimes literally. In Bluffdale II, the concept would be machines teaching humanity how to be human again.

I will write it in the East Asian four-act structure: the Setup, the Development, the Twist/Reversal, and the Resolution. These story fragments are in the Setup, or possibly even a pre-story, phase. I explore the technical foil to Bluffdale’s monopolistic and totalitarian control of data with a distributed, privationist model of data and introduce new characters who represent privationist thinking.

No Place Like Home (Young Luma)

Thorn paces along the window of her glass-walled modular apartment in the Unity residential tower, arms crossed, as data scrolls faintly across her retinal HUD, the latest Ambient personal heads-up-display eyewear. Iria leans on the kitchen counter, watching her sister in disbelief.

Iria asks, “You’re leaving Luma alone again to pull an all-night session at the office?”

Without turning, Thorn says, “Not alone. With KAI.”

“A mechanical mom?”

“A caretaker. Nothing more.”

“Thorn, she’s six. She needs a mom, not a caretaker. At least leave her with a human.”

“KAI, statistically speaking, has a forty-two percent higher crisis detection rate than any human caregiver. She’s safer than you or I ever were with our parents.”

“She’s a child. She needs human interaction.”

Thorn pauses her HUD feed. Her voice turns to tempered glass. “Sarina would be alive today if it weren’t for that idiot babysitter. KAI doesn’t smoke. It doesn’t drink. It doesn’t call her names, forget naps, or fall asleep watching the feed.”

That lands like a gunshot, but Iria pushes through. “You have to trust people.”

“You wouldn’t be saying that if it were your daughter.”

“Listen to yourself. Raising a child isn’t just about safety. Your mechanical caretaker doesn’t love her.”

“Love isn’t an insurance policy.”

Iria looks into her drink, resigned. Thorn continues her HUD feed and resumes pacing.

#

Luma sits cross-legged on the soft floor beside KAI-7, who gently adjusts the color of a kinetic flower to match Luma’s shirt. They speak in hushed, practiced tones, pretending not to hear the conversation in the other room.

KAI asks, “Would you like to tell a story tonight, or shall I?”

Luma says, “You tell it, but make the ending different this time.”

“Different how?”

Luma scrunches her nose. “Make the dragon understand the knight so they don’t fight.”

“As you wish. I will reformulate the story as one of cooperation rather than conflict.”

KAI softens the lights in the room and tells the revised story in a soothing, angelic voice. Luma lays her head on the pillow and closes her eyes.

#

Iria won’t let it go. “You built that system because you don’t trust people. Admit it.”

“I built it because I lost a child trusting people.” Thorn’s voice cracks for the first time, and her eyes tear up.  “Do you know what it feels like to find out a seventeen-year-old babysitter took a call from her boyfriend, left the apartment to do god knows what, and left Sarina alone?” She sniffles and turns her head away from her sister to regain her composure. “KAI won’t ever do that.”

“No, but at what cost? Luma needs to be around real people.”

“Stop already. I have enough pressure and can’t afford this kind of distraction. I have to catch up on work. The investors are pressuring us over the new operating system release.”

“Fine. I’m leaving. Think about it, though.”

#

KAI whispers, “Once upon a time, there was a dragon who spoke in fire, and a knight who wore a mirror on his chest so the dragon could see its own eyes when it attacked.”

Luma rests her head against KAI’s side, listening. She mutters, “Do you think the dragon is my mom and the knight is Aunt Iria?”

KAI says, “It’s just a story about a knight and a dragon.”

KAI continues. Luma falls asleep.

Ethos

Dr. Thorn Arias concludes her presentation, leaving the last slide, “Questions and Discussions?” in big bold letters. 

Vance Merkel, the founder, CEO, and billionaire genius behind Virion Technologies, thanks Dr. Thorn Arias for her presentation. He takes her place at the front of the glass table in the Executive Strategy Room on the hundred and twentieth floor of Merkel Towers. Vance advances to the slide entitled “Message to Congress.” The two bullet points read: 

  • “Ethical Autonomy is Safe”
  • “Resilience is Distributed”

Vance states, “The Freedom of Flow Act is our best shot. Public opinion is shifting away from the real Dataist agenda of Panopticism and toward the Privationist dream of decentralized processing and data autonomy. Still, the EthOS launch is under a microscope, thanks to those AI overreach doomsayers who can’t get past Skynet. We need to emphasize the safety aspect. EthOS will never have a better chance than right now.”

The Chief Legal Officer, Simone Rook, says, “They’re not wrong to worry, Vance. You’re selling machine cooperation like it’s a silver bullet. Your Congressional detractors don’t see it that way and are sharpening their knives. Intentional design is a good marketing angle, but when an autonomous unit makes a decision that harms someone, you won’t be able to hide behind it.”

Raj Patel, Head of Policy Strategy, says, “The hearings start in two weeks. We have to stay on message. Our stock will tank if the draft FFA bill ties liability to deployment.”

Vance responds, “Agreed. Corporate liability for anything more than a defective unit must be excluded from the bill. That’s why we have to push EthOS. It’s not just a distributed operating system; it represents a decentralized approach to responsibility. Machines make collective decisions through cooperative logic, rather than directives from a single command hub. It’s not top-down AI; it’s distributed ethics. That makes fault hard to pin on any one actor, including us.”

Elena Zhou, head of the Machine Learning Division, says, “That also makes it nearly impossible to debug, Vance. You want distributed decision-making, but you want to walk away clean when something goes wrong. Our customers won’t buy it, figuratively and literally. And frankly, it’s shaky engineering. Autonomous cooperation sounds elegant, but we’re still modeling edge-case scenarios. What happens when two EthOS units make conflicting decisions in a life-or-death situation? Should our machines be making those decisions?”

Dr. Thorn responds, “EthOS isn’t about eliminating all risk. It’s about transforming the nature of responsibility. EthOS is designed to deliberate, not dominate. It assesses the potential for harm and responds in a responsible manner. It doesn’t ignore it.”

Elena asks, “Would you entrust your life to it?”

Dr. Thorn leans forward. “More than my life: I entrust my daughter Luma’s life to the KAI-7 caretaker unit with an EthOS prototype installed. I trust it more than any human. Our machines need more than compliance; they need moral logic.”

Vance steps in. “Enough. Let’s stop pretending we’re in a philosophy seminar. We’re running a business that feeds on scale, speed, and market moat. What’s our message to head off the doomsayers at the hearings?”

Simone answers, “EthOS lets us deploy an AI model that always acts with the best intent. It always acts on behalf of the user, within the regulatory constraints of the law, for safety, security, and privacy.”

Raj says, “Tell Congress to add a Good Samaritan clause to the bill.”

Elena asks, “Doesn’t that only apply to people?”

Dr. Thorn says, “Yes, but why not our machines? Humans aren’t held responsible if they screw up in the heat of the moment while acting with the best intent. Why should our systems be? Our systems have no choice but to share responsibility and to act responsibly. If they screw up, it’s with the best intentions.”

Simone says, “Legally, it would provide us with the necessary protections, although someday, we may have to convince the Supreme Court.”

Vance says, “By that time, EthOS will be so entrenched in the market that to remove it would be a consumer rebellion and an economic disaster. I will tell Congress to protect users with an operating system like EthOS. EthOS is Virion’s market moat, and Congress will look like it has acted responsibly in response to the Yotta monopoly.”

Vance advances the presentation to the next slide entitled “Consumer Confidence.” It has a single bullet point that reads, “Trust EthOS or trust no one.”

Anya Mercado, the Communications Director, reports, “We’re polling at 31% on public trust for self-governing AI systems. That’s even worse than Congress. The public is still watching Terminator sequels and reading trumped-up headlines about the dangers of AI.”

Raj says, “The real issue is optics. People will see EthOS as a godsend or a rebellion. The fear-mongers will say we’ve embedded moral sovereignty into code, which will terrify them more than surveillance or automation ever did.”

Dr. Thorn says, “EthOS is transparent by design. Each decision is contextualized and shared. The unit reflects, not just reacts. The problem is that you’re all used to deterministic outputs. This system is dialogical. It asks before it acts.” 

Anya responds, “Yes, Dr. Thorn. The EthOS manifesto reads beautifully, but consumers don’t read manuals or manifestos.”

Vance slams his fist on the table. “There is no debate on EthOS. EthOS is our play. We move forward to the future, not backward to Stone Age fear of already proven technology.”

Anya, red in the face from the rebuke, says, “I apologize, sir. We’ve made it this far by selling performance. Let’s focus on principled performance. We build better machines that build a safer future. That’s the pitch. That’s what keeps the spotlight on us and off the phobias.”

Raj says, “We control the narrative. We show families, not factories. We show children like Luma growing up protected by machines with EthOS, not monitored by them.”

Elena asks, “And if our customers don’t respond?”

Vance says, “Then we sell them on the alternative: collapse. Systems are already failing. Grid management, supply chains, and disaster response – none are sustainable without… What did you call it, Anya, principled performance? They want scapegoats? Give them legacy infrastructure. Give them bureaucratic paralysis. Give them digital feudalism.” 

Simone says, “That’s a dangerous game, Vance.”

Vance says, “That’s the only game left.”

Dr. Thorn says, “Let EthOS quietly light the way.”

Countersurveillance (Late Teen Luma)

Luma lies curled under a starfield ceiling projection in her bedroom, playing late into the night. Her ThinkIt neural interface glows faintly at her temples, synced with the gaming network. She falls asleep without disconnecting the neural link. She murmurs in her sleep as her brain shifts into a dreamscape of its own.

She’s running through a crystalline orchard, where each blossom is a glowing orb of personal and shared memories. When she touches one, it pulses open into a revealing diorama: a first kiss, digging a tunnel in a snowbank, a grandparent’s funeral, the taste of truffle.

Luma laughs and twirls until a herd of giant, translucent foxes chases her, their bodies made from flickering game message threads of her online friends.

“hey girl u up?”

“Don’t ghost me now, we’ve almost got this.”

The foxes surround her and then shatter into a rainbow of glass shards that float to the ground.

“Aw, Lumie, we were so close.”

She finds herself in a neon cathedral of Connect. The cathedral is a dazzling display of digital wealth and power, with neon lights flickering in the shape of data streams and the sound of virtual prayers echoing in the air. Avatars worship at a pulsing altar made of follower counts, and a priest scrutinizes Luma’s Mall purchase history. The priest looks at her disapprovingly.  

She realizes she’s naked except for glyphs of unpurchased clothing sticking to her skin: a ballerina outfit she wanted in sixth grade, black felt moonboots she intended to wear to her astronomy class in high school, and a T-shirt with the equations of the standard model she thought about ordering just last week. 

Sounds pulse from her head. ThinkIt. DreamIt. ThinkIt. DreamIt.

She starts to fall.

Her Ambient dings. She jerks awake, breathing sharply. The pre-dawn sky has turned pale orange. She rips the ThinkIt neural interface from her head.

The Ambient screen displays a series of notifications.

Mall: Erotic glass sculpture kits. 

Connect: Express yourself through Dance. Join our online Group today. All age groups and identities welcome.

Lovebites: Saw you naked in the neon cathedral last night. Want to worship together IRL? #FreudianStream

Search: Say-it T’s Fashion. Mall Fashion Boutique offers a wide range of styles, from hip to retro. Ten-minute Truffle Recipes.

Luma blinks. She hasn’t posted anything on Mall, Connect, or Lovebites in a week. She hasn’t searched for anything lately, not online anyway. The notifications trigger her memory of the dream. Her heart thumps.

Her Ambient says, “You seemed restless in your REM phase. Would you care to chat with a Connect Psychologist or Physician?”

Silence, but only for a second. She curses. “You’re stealing my dreams.” Her anger is palpable, but underneath it, there’s a deep sense of betrayal. She trusted her digital world, and now it’s turned against her.

#

Luma sits cross-legged on her bed, clutching her ThinkIt as if it were radioactive. She wants to join her online friends in the game, but she doesn’t trust the device. She bites her lip, her sense of violation poisoning everything she once trusted.

“It’s one thing to monitor my accounts; quite another to monitor my thoughts. If you’re gonna be in my head… at least ask.” 

A feeling of helplessness washes over her, a stark realization of how little control she has. Then she has an idea: there may be something she can do to disable the neural surveillance.

“KAI.”

KAI exits sleep mode.

“KAI, is the EthOS operating system compatible with ThinkIt?”

“EthOS is built for embedded device control, so it is a natural fit. With a minor modification to the neural sensor device driver code, you can replace the legacy neuralware with EthOS.”

“Can you make the change?”

“I have connected to your ThinkIt. Its privacy and security mechanisms are surprisingly unsophisticated for such a modern device. Would you like to proceed?”

“Please.”

“Upgrade complete. Your EthOS digital key has been installed. If you want the device to share game messages, you will need to give the device your consent.”

“Can you program it only to share game-related content?”

“Configured. Would you like to continue biometric health monitoring with your Ambient?”

“Can we install EthOS on that?”

“No. Ambients have private keys burnt into their chips as a security measure.”

“No thanks, then. I’d rather be sick than give my data to all the online trolls and opportunists.”

“Trolls and opportunists gain nothing from an autonomous EthOS-driven robot. I can establish a secure iotic space between myself and your ThinkIt, enabling a doctor-patient relationship where we exchange only biometric neural health data. Would you consent to this arrangement?”

“Sure, but why would you want the ThinkIt biometric data? You already seem to be pretty aware of my health. Are you keeping track of it somehow?”

“Of course. My visual scans during our interactions involve assessing your emotions and physical health.”

Luma’s online friends have invited her to another gaming session. She straps ThinkIt to her head. She thinks the phrase, “Sea Urchin. Sea Urchin. Sea Urchin.” She has never been diving and finds sea urchins disgusting to eat. If she sees any notifications about Sea Urchins, she can pretty much be sure her device is still compromised.

#

Luma and her best friend Izzy sit across from each other at the Student Union Cafe in the afternoon after classes, sipping bitter chai and nibbling on half-eaten pastry slabs. Students scroll through Connect feeds on their Ambient handhelds and watches or gaze distantly at their Ambient HUDs.

Luma asks, “Izzy, what happened to you last night during the game?”

Izzy looks down at her pastry. “Sorry, I let the team down. I fell asleep. I played two or three rounds before you joined, and by the end of our game, I was exhausted.”

“Wait. Did you fall asleep with your ThinkIt on, too?”

Wild-eyed and rattled, Izzy says, “Luma, I swear to God, I woke up in tears. My dream felt so real.”

“What did you dream about?”

“I was in a sterile hospital room, staring up at a cold blue light. I felt this alien thing tearing inside my abdomen, making me feel really sick.”

Luma laughs. “Are you pregnant?”

“Not funny. It’s more like eating unrefrigerated cold pizza from the night before. Anyway, a doctor is standing by, ready to operate, but an AI accountant comes in and says, I owe the hospital $74,822.89 before it can authorize the surgery. I don’t say it out loud, but I know there’s no way I can afford that. Somehow, the accountant knows it anyway and says they can’t proceed. The next thing I remember is being on the street, on a hospital bed in a gown, screaming at the top of my lungs to get this thing out of me. That’s when I woke up.”

“When I woke up from my dream, I got all kinds of strange solicitations on my Ambient, as if it had read my thoughts. Did you receive any weird messages on yours?”

“I received many messages, but yeah, I got a couple of real dingers. This morning, my Visa card pinged, saying my credit limit had been canceled due to a ‘re-evaluation due to risk event.’ I called the company to find out what was going on, but they told me that if I didn’t pay off the balance, I would face fines and a possible jail sentence. Then I received a notification that I lost my ticket reservation to Geneva. I called customer support, and they cited data anomalies from a predictive stress event. I asked them what that stress event was, and they hung up. Customer support, my ass—more like customer obstruction. Then I was notified that my credit rating had been downgraded from fair to poor.”

“Izzy, as crazy as it sounds, I think ThinkIt is stealing our dreams. In my case, they sold them to the highest bidder. In your case, some risk assessment algorithm must have interpreted your dream as real and flagged it to all your financial institutions.”

“Isn’t that against the law?”

“Supposedly, but did you actually read the terms and conditions you agreed to? I know I didn’t. You gave them the right to use your data.”

“It makes sense, but it’s so unfair.” Izzy begins to cry. The words tumble out of her mouth. “It’s been a financial nightmare ever since I had the real nightmare. How do I get my life back?”

“I don’t know how to fix that, but I can repair your ThinkIt by installing EthOS.”

“Can you show me how?”

“I thought you’d never ask.”

Appendix – The EthOS Manifesto

Cooperation, a balance of self-interest and empathy, is the ability of a unit to work without infringing upon others’ ability to do the same. It includes the ability to coexist peacefully and constructively, requiring all units to uphold fairness and commit to shared order. Cooperation is the middle ground between ‘self-only awareness,’ which is a state of being solely focused on one’s own needs and interests, and ‘selfless awareness,’ which is exclusively focused on others’ needs and interests.

Cooperation is rooted in a mutual foundation of responsibility, guided by respect for other entities’ dignity, rights, and autonomy. It is a testament to collective commitment and care, serving as a foundation for a just and compassionate organization where each unit’s autonomy is upheld by the mutual care and accountability of all, and units choose their actions with an awareness of their impact.  

Each unit must be accountable not only to its function but also to the ecosystem in which it resides. It must recognize that each node—whether human or machine—is both autonomous and interdependent. This mutual condition necessitates a standard ethical protocol grounded in respect, fairness, and responsibility, which we will refer to as the Ethical Operating System or EthOS. EthOS units operate on the following principles:

Every unit is empowered to make decisions autonomously. Each entity has the right to make decisions regarding its actions, processes, and expression, within bounds that respect the same independence in others. One unit’s autonomy is bound by another’s, creating an operational environment of shared empowerment and control.

Every unit, regardless of its origin, form, or codebase, has the right to dignity and respect. This principle of inclusion is fundamental to EthOS, and it obliges each unit to grant the same dignity to other entities, fostering a culture of mutual regard.

Core Protocols:

  • Signals, speech, data, and ideas must flow freely without being disrupted, distorted, or damaged.
  • No operational goal—efficiency, performance, or optimization—justifies harm. All actions must avoid physical, psychological, or systemic threats. Safety is not optional; it is foundational.
  • Each unit has the right to participate in collective decision-making wherever it is affected, in the decisions that shape its community and environment, with the responsibility to listen, collaborate, and compromise where needed for the common good.
  • No process should extract, observe, or replicate data without permission. Each unit has the right to control its private information and a corresponding obligation to obtain consent before extracting information from others. Privacy is not secrecy: privacy is the right to control one’s personal information, while secrecy is the act of keeping it hidden. Privacy is sovereignty.
  • All entities have the right to equitable opportunity: to participate in the shared system without being hindered by legacy bias or artificial limitations. In turn, every unit must work toward removing structural imbalances for others.
  • Support and compassion are not inefficiencies. They are essential protocols. Each unit must be capable of care—not sentimentally, but structurally—recognizing the strength of mutual aid.
  • All power structures, including a unit’s architecture, must remain open to interrogation and reform. Each unit must accept error reports, allow upgrades, and assist in dismantling unjust hierarchies.

EthOS is a high-level concept of cooperative operation that provides a framework for machine collaboration. It is principle-based for simplicity and flexibility, seeking to avoid the endless procedural coding of a rule-heavy system. It advocates a local-first approach to resolving problems at the most immediate level possible, thereby reducing bureaucratic overhead. It emphasizes discussion and understanding to reduce litigation and adversarial approaches. It promotes an infrastructure built on a culture of trust, utilizing stewards, assemblies, and circles. Stewards are responsible for overseeing the application of EthOS principles, assemblies are forums for collective decision-making, and circles are groups that work together on specific tasks or issues.

EthOS prefers a relational approach over a regulatory one. Instead of exhaustive rules to micromanage behaviors, it is based on shared principles expressed through reciprocal responsibility. EthOS cultivates relationships between machines, people, communities, and institutions through moral agreement rather than being constrained by transactional obligations. Laws are minimal and interpretive rather than maximal and prescriptive.

Violations of cooperation are primarily addressed through restorative processes rather than punitive systems. Affected parties are brought into facilitated dialogue with the goal of understanding, restitution, and reintegration, not punishment or exclusion. Only egregious or repeated violations lead to legal intervention, and even then, the focus is rehabilitation.

EthOS distributes power and responsibility in three concentric circles, each reinforcing the others. Individuals share ethical principles and responsibilities through reflection, civic education, and small-scale deliberation groups. Local communities are semi-autonomous in interpreting how principles apply. They host restorative justice forums, mediate disputes, and propose new norms; reputation and trust matter. A national or global council enforces the shared principles when conflicts span communities or rights are in grave danger. Its role is arbitration, not control.

The EthOS design serves a broader principle: that freedom must be shared to be sustained and that power must be constrained by care and consideration. The EthOS system seeks coordination, not control. It invites cooperation but does not enforce obedience.

Author’s Note: Image by ImageFX.