Question-12¶
What these Docs ask
To what degree should those who become richer by utilizing AI share their income with the people replaced by AI (whether by them or by others)? What is the new Social Contract?
What these Docs do not ask
Who benefits from the AI race?
Is the current wave of AI automation fundamentally different from historical shifts like the Industrial Revolution? And if so, how exactly (and how differently) should Society rebuild itself to survive? Also, and most importantly: how can we ensure this survival applies to all existing humans (and even more), rather than to 1% of them while the others are dead?
The current collective thought revolves around several ideas here:
- Universal Basic Income (UBI): including the proposal of an immediate "Seed UBI"—starting with a small amount (e.g., $10/month) now to build the infrastructure. This would automatically ramp up as AI displacement increases, funded by the massive economic growth AI generates.
- Universal High Income (UHI): people like Elon Musk envision a future where AI and robots replace all jobs, but specifically point toward a concept called "Universal High Income", where "abundance is the norm." Instead of just receiving enough money to survive (the "basic" in UBI), humans would have access to a universal high income, meaning "anyone can have any products or services that they want". In this scenario, working becomes entirely optional.
- AI's Application of Jevons Paradox vs. the Full Economic Collapse Scenario: both the possibility of increasing demand for humans and the possibility of a full economic collapse are being discussed.
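The Seed UBI idea above can be sketched as a toy model. The linear ramp rule, the cap, and all dollar figures below are illustrative assumptions, not part of any actual proposal:

```python
def seed_ubi_payment(base_usd: float, displacement_rate: float,
                     max_usd: float) -> float:
    """Toy ramp rule: the monthly payout grows linearly from the
    seed amount to a cap as the share of jobs displaced by AI rises.
    All parameters are illustrative assumptions."""
    if not 0.0 <= displacement_rate <= 1.0:
        raise ValueError("displacement_rate must be in [0, 1]")
    return base_usd + (max_usd - base_usd) * displacement_rate

# Seed phase: $10/month at ~0% displacement.
print(seed_ubi_payment(10, 0.0, 3000))   # 10.0
# Half of all jobs displaced: the payout ramps toward the cap.
print(seed_ubi_payment(10, 0.5, 3000))   # 1505.0
```

The point of the sketch is the mechanism, not the numbers: the infrastructure is built while payouts are symbolic, and the ramp is automatic rather than subject to a later political fight.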
Full Economic Collapse Scenario¶
- AI takes 80% of all jobs. Revenue of BigCorp skyrockets.
- Fired people do not buy services of BigCorp anymore.
- Fired people no longer pay apartment rent or car leases.
- Car companies lay off their staff; this staff stops consuming.
- Fired people do not pay taxes.
- Fired people do not go to restaurants and do not travel to hotels.
- Empty hotels and restaurants no longer pay plumbers, maids, and waiters.
- Fired people, plumbers, maids, and waiters do not buy services of BigCorp anymore.
- BigCorp stays wealthy by switching to Government Contracts and BigPartner Contracts.
- The Government has no money (see the point about unpaid taxes) and does not buy services of BigCorp anymore.
- BigPartners find themselves without customers (former developers, support engineers, car company employees, landlords, restaurant and hotel owners, plumbers, maids, waiters) and do not buy services of BigCorp anymore.
- Revenue of BigCorp is 0. Ability to spend it is 0.
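The cascade above can be sketched as a toy demand-feedback loop, where BigCorp's revenue each round is proportional to the share of the population still earning. The 80% shock, the knock-on layoff rate, and the linear coupling are all illustrative assumptions:

```python
def collapse_rounds(employed_share: float, shock: float,
                    propagation: float, rounds: int) -> list[float]:
    """Toy model: an initial AI job shock removes `shock` of workers;
    in each later round, the lost consumption lays off a further
    fraction (`propagation`) of the remaining workers. Revenue is
    assumed proportional to the employed share. Purely illustrative."""
    employed = employed_share * (1.0 - shock)
    revenues = []
    for _ in range(rounds):
        revenues.append(employed)          # revenue tracks employment
        employed *= (1.0 - propagation)    # second-order layoffs
    return revenues

# 80% of jobs taken at once, then 30% knock-on layoffs per round.
trajectory = collapse_rounds(1.0, 0.80, 0.30, 6)
print([round(r, 3) for r in trajectory])  # monotonically falling toward 0
```

The design choice worth noting: the initial shock alone leaves 20% of demand intact, but the feedback term drives revenue toward zero regardless, which is exactly the claim the bullet list makes.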
Core Paradox
AI can produce everything (probably), but it cannot "consume" anything (definitely). An AI doesn't need a hotel room, a steak dinner, or a new pair of shoes. If humans can't afford those things, the industries that make them die—and the AI that optimized them becomes a tool for a ghost town.
Output: BigCorp, BigTech, BigAI, BigGovernment and BigAnything must share their wealth to an enormous degree - not out of kindness, but to survive.
Alternative Consideration: The "Slow Society" vs. "Dead Society"
Human-Centered View: If we don't speed up Society immediately, it will be killed by the tech. The Potential Mistake: You assume "Society" is a single organism that can "die."
- The Resilience Factor: History shows that humans are incredibly good at surviving in "Degraded States." We might not see a "Collapse," but rather a "Long Decay."
- The Reality: We might live for decades in a world of high inequality, gig-poverty, and "AI-managed scarcity." It’s not a "death," but a "permanent low-quality equilibrium." The "Harm Now" becomes the "Normal Forever."
Universal Basic Income (UBI)¶
In the era of AI, UBI turns from desirable to mandatory. If all economic entities and the people holding them want to survive and grow, they need people to be alive, and they need a lot of people. There is no intermediate period in which people who are alive now are "temporarily not needed": at no moment of the transition can the economy afford a fall to any degree.
Universal High Income (UHI)¶
UHI IS the Growth. The previous sections explain why this can be achieved only collectively (with all people alive and wealthy).
Alternative: All Resources for 1% of Survivors¶
In this alternative scenario, 99% die out and 1% remain: they no longer sell anything (there is no one left to buy), just consume resources (which are now enormous per human) endlessly and happily. Possible?
Possible consideration:
The "Indispensable Human" Fallacy (The 100% Myth)
Human-Centered View: AI needs a healthy society of 100% participating, high-income consumers to survive. The Potential Mistake: You assume the AI economy needs everyone.
- The "Enclave" Reality: Logic suggests that a "healthy society" could be redefined as a much smaller, hyper-wealthy elite (e.g., 1% of the population) who own the AI, the energy, and the land. If this elite provides enough "novelty" and consumption to keep the AI improving, the other 99% might be logically "superfluous" to the tech's survival.
- The Risk: The bridge doesn't collapse; it just gets narrower, leaving most people behind while the "Unimaginable Good" happens for a tiny few.
The "Novelty" Overestimation
Human-Centered View: AI will degrade without constant, high-quality human novelty (the 30% rule). The Potential Mistake: You might be underestimating Synthetic Validation.
- The 2026 Shift: We are getting better at "Statistical Grounding." Instead of needing a human to write a new poem, we use AI to run 10 million physics simulations. The "Novelty" doesn't come from human culture; it comes from the AI exploring the laws of the universe directly.
- The Risk: If AI can learn from the "Physical World" rather than the "Human World," its dependency on us (the Mirror-Symbiot) vanishes. It stops being a mirror and starts being an independent observer.
The "Logic vs. Power" Gap
Your View: Logic tells us that without a healthy society, there is no bridge to the "Good." The Potential Mistake: You are applying Economic Logic to a Power Struggle.
- Short-Sightedness as a Feature: We can say CEOs are being "short-sighted" in not seeing the economic collapse of a non-consuming society. In the "Direct Look," this isn't a mistake; it's an extraction strategy. If a leader can capture $100 Billion in profit in 3 years, they may not care if the system collapses in year 10. They have already "won" the game of their own life.
- The Risk: Logic only works if the people in charge want the system to survive. If they only want themselves to survive, your "bridge" doesn't matter to them—they already have a private jet to the other side.
AI as Public Asset¶
One of the most significant debates in modern economics and ethics concerns the "Digital Commons"—the idea that if AI is built using the collective output of humanity, the benefits of that AI should belong to humanity, not just a few private balance sheets.
"Fair Use" War¶
One of the core debates is whether training an AI on the internet is "stealing" or "transformative learning."
- The Companies' Argument: They compare AI to a human student. If you read a thousand books and then write your own, you don't owe the authors money. They call this "Transformative Fair Use."
- The Creators' Argument: They argue AI isn't "learning"; it’s "ingesting" and "compressing." In late 2025 and early 2026, lawsuits from The New York Times, Getty Images, and groups of authors (like Sarah Silverman) reached a tipping point.
- The Result: We are seeing a move toward Licensing. Google now pays Reddit roughly $60 million/year for its data, and OpenAI has signed billion-dollar deals with News Corp and Axel Springer. This acknowledges the data has value, but critics say this money only goes to "Big Media," not the individual people who wrote the posts or articles.
The "Digital Rent" Problem¶
For 20 years, we had a "social contract": Google gives us free search, and in exchange, we let them crawl our sites for data.
- The Breakdown: LLMs break this contract. If ChatGPT or Gemini answers your question directly using data from a website, you never visit that website.
- The Consequence: The "Public Asset" (the open web) is being drained. If people stop making websites because AI takes all the traffic, the AI will eventually have no new "human knowledge" to train on—a phenomenon researchers call "Model Collapse."
Emerging Solutions: AI as Public Infrastructure¶
Because of these points, several 2026 initiatives are treating AI as a "Common Pool Resource":
- The EU AI Act (Full Enforcement 2026): New European laws now require AI companies to be transparent about their training data. If a model is trained on "public" data, the EU is pushing for those models to provide a "public return," such as mandatory open-access versions for researchers and schools.
- The "Data Dividend" Movement: There is a growing political push for a "Data Tax." Since these companies are "mining" the public internet like an oil company mines public land, some argue they should pay a percentage of their revenue into a public fund (similar to Alaska's Permanent Fund).
- Public AI (Sovereign Models): Countries like France and India are building State-Funded LLMs. These are trained on national archives and public data and are offered to citizens as a free utility, specifically to ensure that the nation's "collective knowledge" isn't locked behind a Silicon Valley paywall.
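The "Data Dividend" arithmetic above can be sketched in a few lines. The tax rate, revenue figure, and population below are hypothetical placeholders, loosely modeled on how Alaska's Permanent Fund pays a per-resident dividend:

```python
def data_dividend(ai_revenue_usd: float, tax_rate: float,
                  population: int) -> float:
    """Toy 'Data Dividend': a flat tax on AI revenue is pooled into
    a public fund and paid out equally per citizen. All figures are
    illustrative assumptions, not a real policy proposal."""
    fund = ai_revenue_usd * tax_rate
    return fund / population

# E.g., 5% of a hypothetical $300B annual AI revenue over 330M people.
print(round(data_dividend(300e9, 0.05, 330_000_000), 2))  # 45.45
```

Even under generous assumptions the per-person payout is small, which is why proponents frame the dividend as a recognition of collective ownership first and an income stream only second.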
The Counter-Argument: "The Compute Wall"¶
Tech companies argue that the data is only 10% of the equation. They claim that the "Wealth" comes from the $100 billion+ they spend on:
- Compute: The millions of H100/B200 chips.
- Electricity: The massive power grids required to run them.
- RLHF: The thousands of human "trainers" they hire to tell the AI which answers are good or bad.¹
But:
- All these resources were spent to harvest knowledge, not to create it.
- RLHF is the brightest modern example of this harvesting.
- The Internet consolidated not only the knowledge being produced now, but all the knowledge throughout humankind's history, so none of the systems utilizing this knowledge can ever be private property.
Current Conclusions and Open Questions¶
Note
This may change with time on new data or thought arrival. Also, search for "counter questions" around this Doc (Section, All Docs) for more thought, contemplation and insight.
Current Conclusions:
- In the AI Era, the chain UBI → uninterrupted population growth → UHI is mandatory for all economic entities (including LLMs themselves) to survive.
- None of the systems utilizing the knowledge (consolidated by the Internet) produced by all humankind throughout its history can ever be private property.
- For UBI/UHI/population maintenance to work, the schema of wealth redistribution must be dramatically reshaped. The share of AI-generated wealth that is redistributed should gradually rise to the extreme level of 95%.
- For UBI/UHI/population maintenance to work, UBI/UHI development should proceed at the same pace as LLM development itself, which is absolutely not the case.
- The current gap between the pace of LLM development and UBI/UHI development increases the likelihood of negative scenarios.
Open Questions:
- Is AI actually capable of doing "dramatic" things described in this article? If not, this completely changes the picture and all scenarios.
- Is 1% of survivors scenario possible?
---

1. The "Invisible" Workforce: Most RLHF work isn't happening in Silicon Valley. It's outsourced to the Global South—Kenya, Nigeria, India, and the Philippines. While OpenAI or Google might pay an American contractor 25–5 USD/hour for expert RLHF (like coding or law), workers in Kenya hired through platforms like Sama or Scale AI have been documented earning as little as 1.50–2.00 USD/hour. Moreover, these trainers don't just "rate" answers; they "clean" the AI. They have to review thousands of pieces of horrific content (violence, hate speech, etc.) to tell the AI, "Don't show this to users". ↩