The Future
The Cyber-Economy Fork
The protocol substrate runs at agent speed and the institutions that govern it run at human speed. The same architecture produces a cybereconomy of self-regulating markets in one configuration and surveillance commerce in another. The closing chapter develops the choices that pull each firm and each founder toward one branch or the other.
The playbook opened with Ronald Coase's 1937 question: why do firms exist? Chapter 1.1 took up the answer — internal coordination is cheaper than coordinating the same work through the market, and the firm's distinguishing feature is the supersession of the price mechanism inside the boundary it draws around itself. The 1960 follow-up developed the converse claim. With zero transaction costs, the assignment of legal rights does not affect the efficient outcome; parties bargain to the same allocation regardless. The 1937 paper assumed transaction costs were prohibitive, which they were for sixty years. The 1960 paper described a limit case that did not yet exist. With the substrate that 5.1 and 5.2 developed in place — agent-callable interfaces, multi-attribute auctions, decision-trace ontologies, vertical integration — Coase's 1960 limit case is operational across the surface where verification is cheap and the protocol stack clears. What 5.1 and 5.2 did not resolve is what happens when the substrate ships at agent speed and the institutions that govern it ship at human speed. That gap, and the fork the gap produces, is what closes the playbook.
Coase's theorem hits zero transaction costs and the firm boundary becomes a choice rather than a constraint
Coase's 1960 verbatim formulation: "the ultimate result (which maximises the value of production) is independent of the legal position if the pricing system is assumed to work without cost". The protocol stack from earlier in this part of the playbook collapses search and negotiation costs in agent commerce to a few cents per transaction. The moat layer determines which firms capture the residual value. The combination produces a market clearing at machine speed against verified outcomes. The firm boundary is no longer set by coordination cost; it is set by which side of the boundary the regulator, the licensing body, and the political process accept.
The asymmetry that decides which fork wins is the Verification Gap that 5.1 named. Where verification is cheap, the cybereconomy branch outcompetes top-down regulation because customer agents can challenge claims directly against verifiable receipts. Where verification is expensive, regulatory wrappers capture value the underlying calculation cannot defend, and the cyber-slavery branch arrives by default. The political-economy question is which regulatory regimes preserve the Verification-Gap moat as a permanent rent and which let the substrate erode it. The answer differs by jurisdiction and by industry, and that variance is the structural reason there is a fork at all rather than a single outcome.
The cybereconomy and cyber-slavery are two outcomes of the same architecture under different ownership
The protocol substrate plus the moat layer produces different outcomes depending on three design choices: who owns the clearing layer, who controls the verification layer, and who writes the regulation that governs the first two. Who owns and governs those layers determines the outcome.
The cybereconomy branch operates when the substrate runs as credibly neutral infrastructure. Take consumer-protection actions as the worked case. When clearing-layer ownership is decentralized — CoW Protocol's batch-auction architecture from 5.1 is the deployed precedent — a customer's advocate agent can issue an intent that routes through MEV-resistant batched solver markets. A consumer-protection challenge clears at machine speed against verifiable outcomes rather than getting absorbed by the provider's regulatory-wrapper moat. Multi-attribute auctions resolve faster than committees can deliberate. Externalities get priced through bilateral or multilateral agent negotiation. Productivity gains accumulate, and the income from those gains broadens or concentrates depending on whether ownership of the substrate is broadly held or captured.
The cyber-slavery branch operates when the same substrate runs through centralized ownership, and the mechanism is concrete on two specific surfaces. First, when clearing-layer ownership concentrates in a single firm or platform, that firm sets the de-facto regulation: Stripe's PCI-DSS regulatory wrapper plus its own settlement primitives produces a stack where customer agents have no path to challenge the platform's claims at the clearing layer, and the platform owner occupies the front-running position 5.1's argument identified as the structural risk. Second, when the verification layer is platform-controlled, biometric and identity systems become extractive in jurisdictions that cannot push back: the Kenya High Court's order of mandamus against Tools for Humanity (Worldcoin's parent), May 5 2025, required permanent erasure of biometric data collected via the Orb on grounds the data was obtained unlawfully. Where local courts can intervene, the worst outcomes get reversed; where they cannot, biometric concentration becomes the default identity layer. Compute, model, and distribution then concentrate among the firms capable of vertical integration that 5.2's failure-mode paragraph already named. Chip-poor jurisdictions inherit the cyber-slavery branch by default — they consume models trained and served from chip-rich jurisdictions whose political layer they cannot influence.
Horizon: The fork as described is a 2026-2030 structural-outcome claim, not a measured forecast. Both branches show present-day evidence — CoW Protocol works in adversarial DeFi conditions, Worldcoin Kenya is a court-ordered reversal — but neither is yet the dominant configuration at population scale. The chapter's editorial position is that aggregation from individual-firm choices to civilizational outcomes is what determines the branch, and that aggregation is itself one of the open empirical questions.
The fork describes the same architecture deployed under different ownership models. Credibly neutral clearing layers require a 2026-2030 engineering build coupled with public policy. Centralized clearing layers arrive by default when a single firm owns the protocol stack and nothing forces decentralization. Without deliberate engineering, the dystopian branch is the path of least resistance.
Living Regulation prices externalities through advocate-agent negotiation rather than through top-down rules
Séb Krier's Coasean Bargaining at Scale (AI Policy Perspectives, September 29 2025) is the published anchor for the cybereconomy mechanism. Krier writes from frontier policy development at Google DeepMind in personal capacity. The framework: AI advocate agents reduce transaction costs to the point where Coasean bargaining becomes practical for externalities that have always been governed by top-down regulation because bargaining was too expensive. Krier on the substrate, verbatim: "The future landscape will therefore be a hybrid: a vast ecology of personalized agents, services, applications, and robots with varying degrees of generality." Each individual gets a personal advocate agent — "a fiduciary extension of yourself" — that knows the principal's preferences and negotiates with millions of other agents in real time.
The mechanism is concrete on a delivery truck routing through a residential street. Verbatim from Krier: "When a delivery truck's agent plans its route, it doesn't need a government mandate to be considerate. It simply sees a higher 'price' for entry onto your street, a signal broadcast by your agent, representing your strong preference to avoid diesel fumes. The truck's agent can then calculate, instantly, whether it is cheaper to pay the 'clean air fee' to you and your neighbors, or to take a different route." The externality does not vanish; it gets priced. The same shape extends to neighborhood-character disputes, insurance pricing keyed to lifestyle agents the consumer authorizes, and any externality where bargaining was previously too expensive.
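The truck agent's decision in Krier's example reduces to a cost comparison over routes, with the broadcast fees entering the objective as a price term. The following is a minimal illustrative sketch, not part of Krier's framework; all route names, dollar amounts, and the broadcast format are hypothetical.

```python
# Hypothetical sketch of the truck agent's Coasean route choice.
# All names, amounts, and the fee-broadcast mechanism are illustrative.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    base_cost: float       # fuel + time, in dollars
    clean_air_fees: float  # sum of fees broadcast by resident agents

def cheapest_route(routes: list[Route]) -> Route:
    # The externality does not vanish; it enters the objective as a price.
    return min(routes, key=lambda r: r.base_cost + r.clean_air_fees)

residential = Route("residential", base_cost=4.10, clean_air_fees=2.75)
bypass = Route("bypass", base_cost=6.40, clean_air_fees=0.0)

choice = cheapest_route([residential, bypass])
print(choice.name)  # prints "bypass": 6.40 beats 4.10 + 2.75
```

When the residents' aggregate broadcast price exceeds the detour cost, the truck reroutes; when it does not, the fee is paid and the nuisance is compensated. Either way the preference is priced rather than legislated.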
Krier's three-layer Matryoshkan Alignment model carries the framework into governance. Verbatim: "a series of nested layers of governance, like a set of Matryoshka dolls." The outer layer is law — non-negotiable boundaries on prohibited actions, enforced by the state. The middle layer is the free market of agent providers, voluntary associations with their own terms competing for customers. The inner layer is the individual: an advocate agent aligned to user preferences within law and provider terms. Verbatim on the state's role: the framework "transitions its role from 'central planner' to 'framework guarantor'" — the state retains exclusive authority over property rights, criminal law, and constitutional rights, and shifts away from blanket rule-making in domains where Coasean bargaining now works.
Walk the Matryoshkan model through the truck case to see where it fails. The outer layer sets prohibited actions: no entry to a school zone during certain hours. The middle layer is the truck operator's agent provider, which has its own terms about which clean-air-fee protocols it supports and which jurisdictions it operates in. The inner layer is the resident's advocate, broadcasting a preference. The model assumes each layer is competitive enough to discipline the next. Two failure modes break the assumption. When the middle layer collapses to a duopoly of agent providers, both providers can quietly shape what protocols they support and which kinds of fee broadcasts they accept; the inner-layer advocate is constrained by middle-layer terms it did not sign. When the inner-layer agent the wealthy resident can afford has different fiduciary terms than the inner-layer agent the poor resident can afford — better counterparty data, faster response time, broader negotiating authority — Coasean efficiency routes the truck through the neighborhood where the broadcast price is lower, which is the neighborhood with weaker agents, not necessarily the neighborhood with weaker preferences.
That second failure mode is the equity counter-argument, and it is the most serious challenge to the framework. The standard Coase-1960 critique is that wealth differentials translate to bargaining power. Willingness-to-accept and willingness-to-pay diverge by income; a wealthy resident's broadcast price for diesel-fume avoidance can be many times a poor resident's broadcast price for the same nuisance. Krier proposes "right to an agent" provisions analogous to public defenders, voucher systems for compute access, and market intermediaries that ensure baseline participation regardless of ability to pay. As of 2026 no jurisdiction has implemented these provisions. OpenAI's April 2026 Industrial Policy for the Intelligence Age explicitly proposes treating access to AI as a right, which is the same idea at the policy layer. The gap between proposal and implementation is the technology-coordination gap closing the loop on itself: if the political layer fails to ship baseline-agent provisions, Living Regulation optimizes toward whichever side of every bargain has higher-quality agents, which is a cyber-slavery vector inside an ostensibly cybereconomy-shaped architecture. For the founder, the Living Regulation hypothesis implies one near-term decision: when the firm's product or operations produce externalities that affect identifiable populations, the choice is whether to publish those externalities in machine-readable form now — preparing the firm to negotiate against agent advocates when they arrive — or treat externality data as proprietary and discover later that opacity is itself a regulatory liability. The firm-level analog of Living Regulation is what 4.7 named SOPs-as-code; both run on the same substrate of executable, versioned, agent-readable policy. What the firm does internally to make policy propagate in hours rather than quarters is structurally the same move the regulator does externally when externality pricing routes through agent negotiation.
Horizon: Living Regulation is a 2026-2030 build cycle. CoW Protocol works in adversarial DeFi conditions; Krier's framework is a published 2025 thought architecture; the government deployments documented in the next subsection are early 2025-2026 prototypes. The framework is not yet deployed at population scale in any jurisdiction, and the equity-and-default-rights design problems remain open. Living Regulation is the named end-state of the cybereconomy branch in this analysis, not production reality in 2026.
Five 2025-2026 government deployments are building the Agentic State substrate before the political layer has finished arguing about it
The April 15 2026 AI Agents Running the State essay (Simone Maria Parazzoli and Omer Bilgin, AI Policy Perspectives) documents the early deployments and the failure modes the deployments must contain. The Agentic State vision paper at agenticstate.org was supported by The World Bank and the Global Government Technology Centre Berlin with contributions from 21 leaders across 15 countries — a multi-country institutional anchor rather than a single-vendor pitch.
Five deployments are visible enough to anchor the claim that the substrate is being built before the political layer has finished arguing about whether to build it.
- Ukraine Diia.AI is the most aggressive deployment at population scale. The Diia.AI assistant retrieves users' data from connected registries and generates official documents — income certificates, certified taxation records, land-registry records, pension records — through agent-mediated interaction rather than form-filling.
- United Kingdom GOV.UK Chat transforms the static government digital portal into an active assistant. The job-seeker pilot matches users' skills with available opportunities; the architecture is explicitly inspired by Diia.AI.
- Singapore IMDA's Model AI Governance Framework for Agentic AI is published draft governance specifically for agentic systems, putting Singapore visibly ahead of the United States and the European Union on agent-specific regulation.
- France data.gouv.fr launched a public Model Context Protocol server, allowing AI chatbots to query national Open Data through a standardized agent interface. The infrastructure for agent-government interaction is national-scale.
- United States GovInfo MCP public preview went live January 22 2026. Federal-data infrastructure for agent access to public records, congressional documents, and federal regulations.
Parazzoli and Bilgin's red-teaming exercise names six failure modes the Agentic State has to contain.
- Technology falters. Demos look strong, real cases fail on edge conditions, and the agentic layer ends up superficially competent with human intervention underneath.
- Standards fail to converge. Commercial interests diverge, government departments deploy AI systems that cannot communicate, and citizens' multi-agency requests break.
- Status quo prevents change. Agentic AI adoption outpaces organizational change, civil servants and citizens use agents in uncoordinated ways before official programs catch up, and local practices harden into path dependence.
- Diffusion is slower than forecast. Governments invest as if an agent-saturated economy is imminent, industry adoption remains narrow, and public investments do not plug into widely used tools.
- Public rejects automation. A notable failure or an accumulation of failures convinces citizens that automated decisions are opaque and illegitimate; government runs two systems and neither meets expectations.
- Regulation never updates. Every agentic action requires human verification, agents draft but cannot act, and compliance costs rise as institutions retrofit old controls onto new processes; the agentic state in theory becomes a copilot state in practice.
Each failure mode is also a vector by which the Agentic State pulls toward cyber-slavery rather than the cybereconomy. Parazzoli and Bilgin name four guardrails: starting cautiously rather than committing prematurely to large-scale redesigns, redesigning processes before automating them, mandating transparency and publishing evaluation results, and establishing regulatory sandboxes where policymakers, developers, and civil society can collaborate on what oversight forms work for which agentic deployments. Singapore's IMDA framework is the cleanest example of the sandbox approach in 2026 inside the dataset Parazzoli and Bilgin survey.
For founders, two implications follow. The substrate for B2G agent-mediated commerce now exists in five jurisdictions; firms selling to citizens or to government should plan for an agent-mediated channel within eighteen months in those jurisdictions and an MCP-server interface as the default request format. The six failure modes are also six product-risk categories for any firm whose customers depend on a government-agent layer; the standards-fail-to-converge mode in particular forces a near-term decision on whether the firm builds for one MCP dialect or for an interoperability layer that absorbs heterogeneity.
The technology-coordination gap is structurally different from prior regulatory lag because the regulated artifact is informational
Technology innovation accelerates exponentially. The coordination layer that has to govern it — regulation, social contracts, fiscal systems — moves linearly at best. The integral between the two curves is what this chapter calls the technology-coordination gap, and four 2025-2026 regulatory cases make it visible. The objection a careful reader will raise is that every prior general-purpose technology produced regulatory lag (electrification, automobiles, the internet, mobile, social media) and then provoked dense regulation thereafter. The catch-up argument is real but the failure mode is structurally different this time: the regulated artifact (model weights, agent intents) is informational rather than physical, capability gain runs on a months-long cycle while legislative enactment runs on years, and jurisdictional fragmentation is structural for AI in a way it was not for telecoms or pharma. The four cases demonstrate the asymmetry.
The CFPB Section 1033 rule is the textbook illustration of US regulatory lag in the substrate's most economically consequential domain. The CFPB issued its final rule on Personal Financial Data Rights under Section 1033 of the Dodd-Frank Act on October 22 2024, with original first-compliance date April 1 2026 for the largest data providers. Industry plaintiffs filed suit the same day. The CFPB filed a motion to stay the rule July 29 2025 and initiated a new rulemaking process under new agency leadership. A federal court issued a preliminary injunction enjoining the CFPB from enforcing the rule. As of April 2026 the rule is enjoined, the CFPB has published an Advance Notice of Proposed Rulemaking for "Personal Financial Data Rights Reconsideration," and per the court order compliance dates are stayed by 90 days, pushing the first deadline to June 30 2026. The substrate the rule was meant to govern shipped twice over during the period the rule was being litigated.
The European Union AI Act offers the structural contrast. The Act entered into force August 1 2024 and applies in phases. Prohibited practices and AI literacy obligations applied February 2 2025 with Article 99 penalties up to €35 million or 7% of global turnover for prohibited-practice infringements. General-purpose AI model rules and governance applied August 2 2025. High-risk system requirements and full penalty enforcement apply August 2 2026 — three months from this chapter's publication date. Full enforcement including AI in regulated products applies August 2 2027. The phased approach holds because each phase was specified at the time the Act passed and the EU's enforcement model does not depend on rulemaking-then-litigation cycles.
The BIS chip-export-controls trajectory illustrates the informational-versus-physical asymmetry directly. The October 13 2022 advanced computing controls (87 FR 62186), the October 25 2023 semiconductor manufacturing equipment refinements (88 FR 73424), and the December 5 2024 Foreign-Produced Direct Product expansion (89 FR 96790) all remain in force. The January 15 2025 Framework for Artificial Intelligence Diffusion (90 FR 4544) — which added export controls on AI model weights with country tiers — was rescinded May 13 2025, four months after issuance, by the new administration. Physical-artifact regulation (chips, manufacturing equipment) survives administration changes; informational-artifact regulation (model weights) does not. Once an informational control rescinds, the capability the control was meant to constrain has already diffused; rescinding controls on physical artifacts only resets the clock.
The NAIC Model Bulletin on AI Use by Insurers fills out the picture for state-level fragmentation. The bulletin was adopted December 4 2023. As of April 1 2026, twenty-five state insurance regulators have adopted it per the NAIC implementation map, while California, Colorado, New York, and Texas operate parallel state-specific regimes. Where federal coordination fails, state-level coordination produces fragmented compliance overhead the substrate has to absorb at the state-by-state level.
The four cases together describe the integral. The 2024-2026 window shows a 24-month gap between regulation drafted and regulation enforced (CFPB), a 4-month gap between regulation issued and rescinded (BIS AI Diffusion), and 28 months from bulletin adoption to twenty-five-state implementation (NAIC). Over the same window the technology curve shipped GPT-4 to o3, Claude 3 to Claude Opus 4.6, the x402 protocol from launch to a Linux Foundation release, the Model Context Protocol from an Anthropic-only release to a multi-thousand-server registry, and ERC-8004 from draft to Ethereum mainnet. The integral grows because the EU AI Act's calendar approach is the exception, not the rule. The cybereconomy branch requires either accelerating the coordination layer or routing the most consequential coordination problems through Living Regulation; without one of those, the architecture defaults to centralized ownership.
The integral is also a transfer payment. Every quarter the regulator runs behind the technology stack, the incumbent regulatory license appreciates against the pool of applicants who can now apply with AI assistance. Slow regulation is therefore not neutral. It is a directional subsidy of the cyber-slavery branch over the cybereconomy. The operational implication for a 2026 founder is asymmetric planning: build EU compliance against the published calendar, build US compliance against the litigation cycle rather than the published deadline, build chip-supply resilience against rules that have held four years, and build model-weight-export plans against rules that have already snapped back once.
The fiscal crisis is the test case the political layer fails next
Fiscal systems built on labor-income taxation face base erosion as AI productivity gains accrue to capital. The argument is not new — Bill Gates proposed a robot tax in 2017, South Korea reduced automation tax incentives the same year, Daron Acemoglu has spent the last decade arguing for removing the excessive incentives for replacing labor that exist in the current US tax code — and as of April 2026 no major economy has enacted a robot tax. The fiscal-crisis fork is what happens when the labor-tax base erodes far enough that the political layer must choose. When labor-tax revenue falls below a threshold of total receipts, governments either tax capital and AI-augmented production at higher rates and route the revenue to ownership-style transfers (the cybereconomy fork), or rely on borrowing and pure-transfer UBI (the cyber-slavery fork). No major economy has crossed the threshold as of 2026; the choice has been deferrable. It will not stay deferrable through the next decade.
The labor-displacement question complicates the fork. Most current empirical evidence (Acemoglu's task-displacement framework, the Anthropic Economic Index measuring task-level AI usage, OpenAI's own Jobs framing in the April 2026 paper) supports task-level rather than job-level displacement: AI absorbs specific tasks within roles, with reallocation rather than aggregate unemployment in the short run. The fiscal-base argument can hold under task-level displacement — when tasks shift from labor to capital, payroll-tax base shrinks proportionally even if headcount holds — but the time horizon for fiscal crisis stretches. The fork's near-term urgency depends on which view of displacement turns out empirically correct in the 2027-2030 window.
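The claim that the payroll-tax base shrinks proportionally even when headcount holds can be made concrete with a back-of-envelope calculation. This is illustrative arithmetic only; every rate and magnitude below is a hypothetical input, not data from the sources cited above.

```python
# Illustrative arithmetic: task-level displacement shrinks the payroll-tax
# base even when headcount holds. All rates and magnitudes are hypothetical.

def payroll_tax_base(total_output: float, labor_share: float) -> float:
    # The wage bill is the output attributable to labor tasks; the
    # payroll-tax base tracks the wage bill, not the number of employees.
    return total_output * labor_share

def revenue(base: float, payroll_rate: float) -> float:
    return base * payroll_rate

output = 1_000.0  # firm output, arbitrary units
rate = 0.15       # hypothetical combined payroll-tax rate

before = revenue(payroll_tax_base(output, labor_share=0.60), rate)  # 90.0
# AI absorbs a third of labor tasks; output and headcount hold, but the
# wage bill (and hence the base) shrinks proportionally.
after = revenue(payroll_tax_base(output, labor_share=0.40), rate)   # 60.0

erosion = round(1 - after / before, 2)
print(erosion)  # prints 0.33 — a third of payroll revenue gone, zero layoffs
```

The point of the sketch is the fiscal mechanism, not the numbers: under task-level displacement the revenue erosion is real but gradual, which is why the fork's urgency depends on the displacement question.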
OpenAI's April 6 2026 Industrial Policy for the Intelligence Age is the most prominent 2026-side acknowledgement that the failure has political-economy consequences the AI industry is now willing to put in writing. The 13-page paper proposes workers' formal voice in AI deployment, lowering barriers to AI-powered entrepreneurship through microgrants and shared infrastructure, treating access to AI as a right (the "right to AI") akin to electricity or internet access, a Public Wealth Fund so citizens share in AI-driven economic growth, and converting productivity gains into shorter workweeks and better benefits while exploring "taxes related to automated labor" and rebalancing the tax base toward capital gains and corporate income. Verbatim from the paper: "The transition to superintelligence will require an even more ambitious form of industrial policy, one that reflects the ability of democratic societies to act collectively, at scale, to shape their economic future so that superintelligence benefits everyone." The paper sets the policy frame the political conversation now has to either accept or reject.
UBI evidence is the empirical complement. The OpenResearch unconditional cash study ran the largest US trial: 1,000 low-income participants in Texas and Illinois received $1,000/month for three years; a control group of 2,000 received $50/month; participants were 21-40 years old. The OpenResearch findings page reports that recipients worked an average of 1.3 fewer hours per week and were about 2 percentage points less likely to be employed during years two and three; without transfers, recipients' individual earned income was roughly $1,500 lower and household income was between $2,500 and $4,100 lower than control. The companion NBER working paper (Vivalt, Rhodes, Bartik, Broockman, Krause, Miller) reports recipients reduced work hours by 1-2 hours per week and a 4.1 percentage point decrease in labor market participation, with total individual income excluding transfers falling about $1,800 per year. The Stockton SEED program (125 residents, $500/month for 24 months starting February 2019) found reduced income volatility and a higher rate of full-time employment relative to control. The combined evidence shows mild labor-supply reduction with increased agency in work choice. The data is too ambiguous to settle the dependency question either way.
Whether UBI pulls toward cyber-slavery or the cybereconomy depends on the ownership structure attached to it. UBI as transfer income alone leaves recipients with a single income source and no equity in the productivity gains driving the transfers. UBI paired with ownership in productive AI infrastructure — the OpenAI Public Wealth Fund proposal as one example — gives recipients a claim on the upside the substrate generates. The Kofler 2025 academic analysis describes South Korea's 2017 incentive reduction as a "quasi-tax on automation" that increased the after-tax cost of robots without explicitly taxing them; the IBFD February 2025 review surveys the European policy options for explicit robot taxation. None of the proposals has been enacted. Which fiscal lever the political process picks is outside this analysis. The optionality value the founder can capture is in running the firm's tax-position model under a 5% automated-labor surcharge applied to AI-mediated revenue and comparing to a capital-gains-rebalancing scenario; the firm that has run the numbers when the policy lands moves a quarter faster than the firm that has not.
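The exercise recommended above is small enough to sketch. The following comparison is a hypothetical worked example of the two scenarios named in the text; the firm figures, the 21% and 28% corporate rates, and the modeling of rebalancing as a flat rate increase are all assumptions for illustration.

```python
# Sketch of the tax-position comparison the chapter recommends running.
# Every figure and rate here is a hypothetical input, not a forecast.

def surcharge_scenario(ai_mediated_revenue: float, other_revenue: float,
                       profit_margin: float, corp_rate: float,
                       surcharge: float = 0.05) -> float:
    # A 5% automated-labor surcharge applied to AI-mediated revenue,
    # on top of ordinary corporate tax on total profit.
    profit = (ai_mediated_revenue + other_revenue) * profit_margin
    return profit * corp_rate + ai_mediated_revenue * surcharge

def rebalancing_scenario(ai_mediated_revenue: float, other_revenue: float,
                         profit_margin: float, corp_rate_up: float) -> float:
    # Capital-gains/corporate rebalancing modeled crudely as a higher
    # corporate rate with no revenue surcharge.
    profit = (ai_mediated_revenue + other_revenue) * profit_margin
    return profit * corp_rate_up

# Hypothetical firm: $8M AI-mediated revenue, $2M other, 20% margin.
a = surcharge_scenario(8e6, 2e6, 0.20, corp_rate=0.21)       # 420k + 400k
b = rebalancing_scenario(8e6, 2e6, 0.20, corp_rate_up=0.28)  # 560k
print(a, b)  # prints 820000.0 560000.0
```

A revenue surcharge and a rate rebalancing that look similar in aggregate fiscal terms land very differently on a low-margin, heavily AI-mediated firm, which is exactly why running the model before the policy lands is worth a quarter.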
The founder's local diagnostic is which fork the firm pulls toward on three concrete axes
The fork is shaped at the local level by every firm's design decisions on three axes: clearing-layer ownership, verification posture, and externality publication. The founder running an AI-native firm in 2026 has measurable agency over which branch the next decade pulls toward in the firm's domain, and the agency is exercised through ordinary product and architecture decisions rather than through political activism.
Three diagnostic questions, each with an artifact the founder can check this week and a red-flag list.
- Clearing-layer ownership. Open the contract on the firm's top three payment integrations. If the merchant terms include a unilateral-modification clause, a no-litigation-required suspension clause, or a take-rate ratchet keyed to volume thresholds the firm cannot dispute, the firm is on platform-controlled clearing. The cybereconomy posture is a payment rail built on x402, the Agentic Commerce Protocol, ERC-8004, or comparable open standards where governance is documented, the committer set is broader than a single firm, and customer agents can route around the platform. The disqualifying sign is "comparable open standards" without a specific governance artifact behind it. CoW Protocol's batch-auction architecture is the closest deployed precedent for what credibly neutral clearing looks like under adversarial conditions.
- Verification posture. Pick the single most expensive output the firm sells. Write down what verification artifact a customer's agent could request to confirm the firm delivered it. If the answer is a screenshot, an email confirmation, or a customer-support transcript, the firm is on the cyber-slavery side of this axis. The cybereconomy posture is a signed receipt the customer's agent can verify cryptographically without contacting the firm, with verification independent of the firm's own logs. CoW Protocol's Uniform Clearing Price receipts are the model for ERC-20 token swaps; equivalent mechanisms in other domains are still rare but the design pattern is portable.
- Externality publication. List the three externalities the firm produces. The standard set for AI-native firms is compute energy usage, data-exhaust accumulation, and the displaced-labor footprint of the firm's automation. For each, write the structured-data field the firm would need to publish for a downstream agent to negotiate against it. If the firm cannot define the field today, it does not yet have an externality-pricing position; it has a pretense of one. Treating externality data as a competitive secret is the cyber-slavery posture; publishing it in machine-readable form prepares the firm for advocate-agent negotiation when the architecture arrives.
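What "write the structured-data field" could mean in practice is sketched below. The schema, field names, and metrics are hypothetical; no published standard for machine-readable externality records exists as of this writing, so this is one plausible shape, not a specification.

```python
# Hypothetical sketch of a machine-readable externality record a downstream
# advocate agent could negotiate against. Schema and field names are
# illustrative, covering the three externalities named in the diagnostic.
import json

externalities = {
    "firm": "example-firm",
    "reporting_period": "2026-Q1",
    "records": [
        {"externality": "compute_energy",
         "metric": "kwh_per_1k_inferences", "value": 14.2,
         "verification": "metered, third-party attested"},
        {"externality": "data_exhaust",
         "metric": "retained_user_records", "value": 120_000,
         "verification": "self-reported"},
        {"externality": "displaced_labor",
         "metric": "fte_task_hours_automated_per_week", "value": 340.0,
         "verification": "self-reported"},
    ],
}

# A record an agent cannot parse is a record it cannot price.
print(len(json.dumps(externalities)) > 0)  # prints True
```

The test of the diagnostic is whether the firm can fill in the `value` and `verification` fields today; a field that can only hold "self-reported" is itself a signal about which axis the firm sits on.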
Most real firms score mixed across the three axes. Stripe issues cryptographic receipts (cybereconomy signal on verification) and operates under PCI-DSS regulatory wrappers that preempt many customer disputes (cyber-slavery signal on clearing). The diagnostic does not produce binary verdicts; it produces direction-of-pull. A firm pulls toward the cybereconomy branch when it scores cleanly on at least two axes and is not actively cyber-slavery on the third. The firm that scores cleanly on none of the three is selling the same product to either branch and will be surprised when the political layer settles on a regulatory posture the firm did not prepare for.
The playbook is a 90-day-to-2-year practical guide. This chapter is the 10-year horizon. The 90-day work the previous parts organize around — the operating model, the human responsibilities that survive the substitution, the personal operating system, the viable-system framework that absorbs the variety the substrate produces, the software factories, the compounding firm — is what determines local outcomes inside a fork the playbook cannot resolve at civilizational scale. Monday morning is where the playbook resumes, with the architecture already built, the political layer as the binding variable, and the destination depending on the choices the firms and the founders the playbook is written for make between now and the next decade.