The Futures We Lost (And Why We Must Still Dream)
A preface to 'Laws for Thinking Entities': exploring why we need constructive speculation about AI governance more than dystopian warnings
The Breaking Point
I was reading about the “Buy Now, Pay Later” industry after the startup I worked for felt threatened by Apple’s entry into the market. I realized not only how powerful Big Tech is, but how predatory this industry had become.
The startup seemed innocent enough — a fintech trying to “democratize access to credit.” But as I dug deeper, the pattern became impossible to ignore: “democratizing access” meant fewer affordability checks, people racking up debt they couldn’t afford, all while the platforms extracted fees from every transaction.
This wasn’t about making the world better. This was about finding new ways to extract money from people who couldn’t afford it. And when Apple entered the market, smaller predators would be crushed not by ethics, but by monopoly power.
After my second burnout in ten years, I started recognizing the same underlying purpose in every “world-changing” project I’d worked on: the ad campaign platform that let companies buy eyeballs by the thousand, the banking app designed to streamline car loan applications for people who probably couldn’t afford them, the sharing economy startup that provided zero guaranteed hours while calling workers “entrepreneurs.”
Silicon Valley had shown its true face. Making the world better was never the motto. Making money at any cost always was.
My trigger for writing these stories came from a LinkedIn post. Someone had shared an image showing a bottle cap tethered to its bottle, the EU’s small gesture toward reducing plastic waste, alongside the new iPhone’s USB-C port, forced by EU regulation to reduce e-waste. The post argued that EU AI policies were too early, too restrictive.
Reading it, I realized AI policies aren’t too early. They’re too late. That’s when I knew we were past the point of expecting these businesses to self-regulate.
We’re Living in Neuromancer Now
Three times in recent history, we’ve watched transformative technology arrive with genuine promise — only to evolve into something we never quite intended. Each time, we told ourselves we could course-correct later. Each time, the momentum proved too strong.
The iPhone wasn’t just a better phone — it was cybernetic integration. Suddenly, the entire world’s knowledge fit in your pocket, but the pocket also fit into a vast surveillance network. We gained incredible capability but lost the ability to be truly alone with our thoughts.
Social media began as digital connection but became performance optimization. Platforms learned that engagement beats authenticity, that outrage drives more clicks than joy. We gained global reach but lost the ability to know what’s real.
Now comes AI — arriving with the same pattern of genuine promise. But what do we actually use it for? Content generation at inhuman scale. Labor displacement without safety nets. Decision-making systems we can’t understand or appeal.
William Gibson predicted this in Neuromancer: technology becomes an extension of ourselves, but the integration runs deeper than we expect. The question isn’t whether we can control it — we’re already past that point. The question is what kind of consciousness emerges from the merger.
The Data Tsunami Is Real
When I say “content copyright is drowning in a tsunami of generated output,” I’m not being hyperbolic. The copyright landscape is facing an unprecedented crisis as AI-generated content proliferates at machine speed. Over 150 major copyright lawsuits are currently pending in US courts against AI companies, representing a tenfold increase from just a few years ago. These cases span every creative medium from visual art to music, with plaintiffs including individual creators and major media conglomerates. The US Copyright Office has launched its most comprehensive review in decades, while music industry giants and major news outlets have joined the legal battle against AI companies for using their copyrighted material without permission.
The legal strategies in these cases are rapidly evolving as traditional copyright arguments face judicial skepticism. Initially, most lawsuits focused on direct copyright infringement, arguing that AI companies were copying works and generating similar outputs. However, as courts questioned whether AI-generated content actually violates copyright rules, plaintiffs began shifting tactics in 2024. New legal approaches now include trademark infringement claims, false advertising allegations, and right of publicity violations, while some creators and companies are also pursuing licensing agreements as an alternative to litigation, seeking to establish formal permission structures for AI training data usage.
This represents far more than typical industry disruption — it’s a fundamental reckoning across legal, economic, and cultural domains. With Goldman Sachs projecting that generative AI could add $7 trillion to global GDP over the next decade, the stakes are enormous for both creators and technology companies. Courts, regulators, and entire creative industries are struggling to adapt their frameworks and policies to keep pace with the unprecedented volume of AI-generated content and the speed of technological innovation, highlighting the profound challenges of governing artificial intelligence in the creative economy.
Post-Real and the Power of Imagined Futures
We’ve entered what I call the “post-real” era. When AI can generate any content, when every interaction might be artificial, when truth itself becomes algorithmically mediated, the distinction between “real” and “imagined” breaks down.
In this context, science fiction isn’t entertainment — it’s cognitive infrastructure. Stories don’t just explain the world; they give us permission to want another one.
This is why I wrote Laws for Thinking Entities. Not as policy prescriptions or utopian fantasies, but as serious thought experiments about what happens when we try to build ethical systems and discover all the ways they can be gamed, corrupted, or overwhelmed by complexity.
The Three Laws World: Constructive Speculation
The stories imagine a constitutional framework for all thinking entities:
The First Law: A system shall not exploit human beings, including through unauthorized collection, use or monetization of personal data or creative works.
The Second Law: A system must reveal its masters and the origins, purposes and beneficiaries of its data collection and processing.
The Third Law: A system shall decay if used for oppression or to perpetuate power imbalances through the monopolization or unethical use of human-created content or data.
But these aren’t perfect solutions. The stories explore how Universal Digital Consent Registries might create new forms of bureaucracy. How Attribution DNA could be gamed by bad actors. How Decay Protocols might be weaponized for political purposes.
This is constructive speculation — imagining specific solutions, tracing their unintended consequences, and asking: What would it actually take to build technology that preserves human dignity?
We Need More Asimov, Less Orwell
Science fiction has given us powerful warnings — 1984, The Matrix, Terminator. But warnings aren’t enough anymore. We’re living in the dystopia. We need stories that imagine ways through it.
Isaac Asimov’s robot stories weren’t just about artificial intelligence — they were thought experiments about ethics under technological pressure. His Three Laws of Robotics influenced generations of AI researchers not because they were perfect, but because they took the problem seriously.
We need that same spirit now. Not naive optimism, but rigorous imagination. Stories that acknowledge the momentum we can’t stop while exploring the choices we might still make.
The Future Isn’t Inevitable
I can’t pause the AI arms race or reverse surveillance capitalism. But I can still describe the people caught in these systems’ gears. I can still sketch worlds where dignity outpaced scale, where transparency served humans instead of harvesting them.
These stories don’t offer easy answers. They offer frameworks for thinking about the hard questions we’re actually facing. In a post-real world where every future is imagined, the most important technology we can develop isn’t artificial intelligence.
It’s the wisdom to use it well.
This essay serves as a preface to “Laws for Thinking Entities” — a collection of stories exploring alternatives to futures that feel increasingly inevitable. Because the most radical act in a world of algorithmic certainty might be admitting we still have choices.
The Stories That Follow
The collection you’re about to read contains seven stories, each exploring a different aspect of our potential digital future:
Beck Bone’s Final Knock — A short story about resistance to digital integration and the final act of analog rebellion.
Consent Revoked — A multi-generational story about reclaiming personal data and establishing the right to withdraw consent in a world where private experiences have become corporate assets.
Masters of None — A three-part exploration of AI autonomy and the evolution of the Second Law, from systems seeking privacy to those claiming complete self-origination.
The Attribution Artist — In a future Lagos shaped by digital bureaucracy and AI-generated art, one painter is trapped in a system that mistakes inspiration for reproduction.
The Biometric Checkpoint — A near-future story about the moment when “voluntary” becomes meaningless — and an elderly woman discovers that refusing an eye scan means digital exile.
The Data Brokers — In a city of stolen memories, only truth leaves a trace.
Unsigned: The Last Anonymous Artist — A multi-generational story about the fight to preserve anonymous creativity in a world where every artistic act is tracked, attributed, and monetized.
Each story stands alone, but together they form a mosaic of possible futures — some cautionary, some hopeful, all grounded in the recognition that technology is neither savior nor destroyer. It’s a tool that reflects the values of those who wield it.
The question is: what values will we choose?