The People
Running the AI Transformation
In 1951, the Tavistock Institute's Eric Trist and Ken Bamforth published a study in Human Relations on what had happened when Durham's coal mines mechanized their extraction process. The old system was a six-person team working a single coal face from shift to shift, with each miner competent in every task, each accountable for the output, each paid on a shared measure. The new longwall method split the work into specialized roles across three shifts, introduced supervisors, and paid the miners individually. The machinery was better. The output fell. Absenteeism roughly doubled. The quality of the coal dropped. Trist and Bamforth's finding was that mechanization had succeeded technically while destroying the social structure that made the work function: it was the redesign of the shifts, not the introduction of the machinery, that produced the decline.
Seventy-five years later the same pattern runs at white-collar scale. Organizations deploy agents without redesigning roles, career paths, or team structures, and they reproduce the same failure. The technical work is the easy half of the transformation, and the social redesign is what separates functioning organizations from expensive theater.
AI transformation fails as a technical project because the barriers are political. The pattern that works combines three structural elements: a CEO forcing function that ends the organizational wait-and-see default, an environment that reduces the political cost of adoption, and explicit acceptance of the productivity dip that follows reorganization. Organizations that supply one element without the others produce rebellion (forcing without environment), pilots that do not ship (environment without forcing), or quiet burn-down (neither, with a target attached anyway).
Transformation is a political project before it is anything else
AI transformation threatens what people actually care about: status, identity, resources, career trajectory. The engineer who resists an AI coding assistant is defending a professional identity built over a decade. The manager who slow-walks adoption is preserving the headcount that justifies the title. The department VP who insists that the work is "too judgment-heavy for AI" is defending a budget line. None of these resistances survive logical argument, because none of them is built on logic.
The political moves that work take these stakes seriously. They change the default with a forcing function, reduce the cost of trying with an environment, and protect status differently after the transformation than before. Running the change as a technical rollout, with training decks and enablement meetings, leaves the political layer untouched. Behavior does not change. Six months later the dashboards show login counts while the business metrics remain flat, which is the most common shape of failed AI transformations in 2026.
The CEO forcing function ends the wait-and-see default
Persuasion loses to political inertia in any organization above roughly fifty people. The default in any mid-sized company is for each function to wait and see whether the other functions are actually changing, because no individual function will pay the cost of going first if everyone else holds still. The CEO forcing function is what breaks this equilibrium. Four variants are running in 2026 practice:
- Hard deadline on a legacy tool. Announce the shutdown date for Confluence, Jira, or whichever platform the agents are supposed to replace. Then actually shut it down on the announced date. Teams migrate because they have no alternative, not because they were persuaded. The "kill Confluence on date X" pattern is the cleanest version of the forcing function because the deadline is external to any single team's political calculus.
- Hiring gate. "Prove you cannot do this with AI before we hire for it." Every headcount request becomes an AI-capability review. Shopify's CEO put this publicly as company policy in early 2025, and the effect was immediate: every headcount request now requires a written case for why AI cannot cover the work. The hiring gate converts the forcing function from a one-time event into a recurring decision rule.
- Explicit generation target. In September 2025, Coinbase CEO Brian Armstrong stated a public target: move from roughly 40% AI-generated code to more than 50% by October. The tweet reached 2.3 million views. Armstrong had already fired engineers who refused to onboard Copilot and Cursor within a one-week deadline. The target anchors the transformation in a company-wide metric everyone owns and every engineer is measured against.
- Budget gate. New project funding requires an agent-led execution plan. The plan must describe which parts of the work the agents will do and which parts require human judgment. This turns the forcing function into a resource-allocation rule, which means the finance function enforces it rather than any line manager.
The CEO cannot delegate this work. The pattern that runs at companies with $100M+ in revenue is that the CEO spends 2-3 full days a week on AI, personally, for at least a year. The framing the organization hears is that this is the biggest risk and biggest opportunity in the business, and the evidence that the framing is true is that the CEO's own calendar reflects it. What the CEO actually does on those days is run demos on stage with real company data producing real output, build something visible each week, review transformation metrics in the same meeting as financial metrics, and set the next forcing function when the previous one is fully absorbed. The best version of a forcing function is a single line that ends debate. The failure mode is forcing without an environment to catch the people the forcing function pushes forward, which produces the passive-resistance pattern described below.
The twelve-week transformation has four phases and each phase has a failure that kills it
The practical sequence runs in phases rather than rigid weeks, but the time boundaries matter because political momentum decays fast. By the end of the third month the window for decisive action has usually closed, because the people who resist have found their footing and the people who wanted to move have either moved or given up.
Phase 1, weeks 1-2: the CEO commits publicly. The founder runs a live demo at the all-hands meeting, using real company data to produce real output in front of everyone rather than presenting a slide deck about AI. The demo is the permission slip. Without it, people default to assuming AI is not for their kind of work, because nothing visible in the organization has contradicted that assumption. The forcing function gets announced at the end of the demo: a hard deadline on one legacy tool, or a specific generation target, or a hiring-gate policy that takes effect immediately. What kills Phase 1 is a CEO who demos something they did not actually build. The audience catches the inauthenticity, credibility evaporates, and the rest of the sequence runs through molasses.
Phase 2, weeks 2-5: legacy cutoff plus enablement. The shutdown date arrives and the old tool actually goes dark. Every employee gets one to two hours of structured onboarding, which is not a training course but an onboarding session that ends with the employee producing something useful on their own laptop. The difference matters. Training courses teach the abstract capabilities of the tool. Onboarding sessions produce a tangible first artifact, and the artifact is what convinces the person that the tool is not theater. The champion network gets identified in parallel. The risk in Phase 2 is onboarding that runs too long or too abstract; people drop out after the first fifteen minutes. Keep it short, keep it practical, and measure completion.
Phase 3, weeks 4-8: the environment stands up. The five environment elements covered in the next section get built or procured during this window. Peer-to-peer demos start replacing top-down speeches, because same-rank colleagues are more persuasive than leadership statements. Recognition systems begin rewarding AI-skilled work visibly, which means new promotions are announced that explicitly reference AI capability. Practice time becomes legitimate, which means protected hours appear on the calendar rather than being treated as time to steal from real work. Cynicism accumulates on a schedule in Phase 3. Environment elements promised during the Phase 1 demo that do not actually stand up in weeks 4-8 signal to the organization by week 8 that the forcing function was performative.
Phase 4, weeks 9-12: role redesign begins. Roughly 40% of people end up in different roles than they held at Week 1. Anti-pattern detection runs continuously. The metrics get re-baselined against Week 1 so the progress is legible. The Pirate-plus-Architect team (described below) is in place and has shipped three visible wins. The transformation continues past Week 12, but the political decision is already made by then and the remaining work is execution. The Phase 4 failure mode is role reshuffles without compensation adjustments against the new scope, which produces resentment and attrition among exactly the people the transformation needed to retain.
Build the environment around the work, not the heroes inside it
BCG's December 2025 survey of employees across the four stages of AI adoption found that more than 85% stall in the early stages and fewer than 10% reach the daily-workflow stage where AI is used as a regular tool. The stalled employees are not short on individual initiative. They are short on an environment that makes adoption easier than non-adoption.
Five elements stand up in parallel during Phase 3:
- Career paths that name the four human responsibilities as explicit trajectories. Architect, Relationships, Validation, Accountability, each with titles, compensation bands, and promotion criteria. Without named paths, people cannot see what advancing looks like in the new structure, so they default to protecting the old one because that is the one where the promotion rules are still legible.
- Legitimate time for practice. Dedicated hours on the calendar, protected from being reclaimed by the regular work. The pattern that produces uplift is roughly four hours per week of explicitly protected time. Without this, only people whose work already tolerates experimentation engage, which means the engineers who were already building side projects. Everyone else falls behind, and the adoption gap compounds.
- Access to the best models. Not the corporate chatbot behind a policy wall. Actual frontier access, with the appropriate governance, because the tool gap determines the capability gap. Employees restricted to a model three quarters behind the frontier cannot demonstrate what current AI can do, so the demo culture degrades into showcasing the limitations of last year's models rather than the capabilities of this year's.
- Peer-to-peer demos as the default communication channel. What same-rank colleagues actually ship is more persuasive than leadership statements. Format: thirty-minute demos, no slides, live use, weekly, with the demoer walking through a real piece of work they did in the previous week. The psychological mechanism is that the demoer is not selling AI. They are telling their own story.
- Recognition integrated into status and performance. Recognition signals what the organization actually values. If promotion criteria have not shifted, no amount of training will move behavior, because the career-advancement system is the strongest behavioral signal most people receive at work.
BCG's AI at Work 2025 survey (n=10,635) found that the regular-usage rate jumps sharply for employees who receive at least five hours of training and have access to in-person coaching. The training threshold matters, but the in-person coaching matters more, because the coaching is where people overcome the first small frustration instead of abandoning the tool at the first error.
Compensation belongs in the environment work too. Trying to buy AI-native talent on salary alone fails. Founder-like incentives (equity stakes, 2-5% ownership for foundational hires) recognize the actual leverage the role provides and retain the people who can hold it. An environment that exists on a slide but not on the calendar does not exist: if practice time is not protected on the calendar, and if the promotion criteria are still the old ones, the transformation is theater.
Champions amplify an environment, they cannot replace one
Citi's AI Champions and Accelerators program is the clearest 2026 enterprise case. In Citi's own June 2025 communication the program had more than 2,000 colleagues globally. By Business Insider's reporting in January 2026, confirmed through Forbes coverage of Citi's Head of AI in March 2026, the number had grown to approximately 4,000. The program roughly doubled in six months. Volunteers spend three to five hours a week on peer teaching. Champions are not hired specialists. They are existing employees who find use cases, explain them to colleagues, and define AI strategy within their own business unit.
Three responsibilities make the champion role work:
- Find new use cases. Requires domain expertise. A champion who does not know the business cannot find the cases that matter.
- Translate and explain. Requires teaching skill, not technical skill. The champion's job is to move a colleague from curiosity to first working artifact in under an hour.
- Define strategy for the domain. Requires ownership, not just experimentation. The champion becomes the point person for how AI shows up in a given function.
The structural risk is that champions absorb pressure from above (executives expect results) and passivity from below (colleagues wait to be shown), and burn out inside six to nine months because they are expected to carry the transformation on their own. Champions alone cannot change the environment around them. They lack the authority to reset promotion criteria, protect calendar time, or negotiate model access. The role only works when the environment investments above are running in parallel. Identify champions by observed behavior (who already experiments without being told) rather than by rank. Give them explicit 20% time for the role. Pair them with environment investments so they amplify the transformation rather than absorb its full political cost.
The Pirate and the Architect are two roles hired as a pair
A minimum viable disruption team for an AI-native workflow inside a legacy organization is two people. The pattern runs across multiple 2026 transformations and is worth planning around.
The Pirate ships fast. Typically a non-engineer by background who learned to code through AI over the past one to two years, the Pirate owns velocity, demos, and political wins. In the first ninety days their job is to make the transformation visible by producing three concrete outcomes the organization can point to. Their operating cadence is week-level, and they do not wait for the substrate to be perfect before shipping.
The Architect's role is to build the substrate that lets the Pirate's work scale. A senior engineer, still hands-on with code, temperamentally "conservatively optimistic" in the sense of actually enjoying work with agents while remaining skeptical of their current failure modes. The substrate covers identity, audit, cost attribution, model routing, and the first entries in the skill library. Crucially, the Architect is not a CTO who stopped coding two years ago; current execution experience is a hard requirement because an Architect who has not written code recently does not know what agents can actually do in the present tense. The Architect's cadence runs at month level rather than the Pirate's weekly demo rhythm.
The two roles interact in a specific way. The Pirate hands off to the Architect when something demonstrates it is worth making durable, and the Architect hands back when the substrate can support more experimentation. The first ninety days run in parallel: Pirate ships three visible wins, Architect quietly builds identity, audit, cost attribution, and the first skill-library entries. Hiring only the Pirate produces brittle demos that collapse under scale once the team tries to make anything durable. Hiring only the Architect produces elegant substrate nobody exercises in the field, because velocity is what generates demand for more substrate. Both failures are common because each role is easier to justify to a hiring committee in isolation than as a paired budget line. The discipline is to require the pair as a single hire.
Resistance is fear, and the emotional layer responds to acknowledgment
Skeptics and resisters are signaling fear, not making arguments. The marketer who "doesn't think AI is ready" is afraid of becoming redundant. The senior engineer who "just doesn't find Copilot useful" is defending an identity built on a decade of technical craft. The manager who "has concerns about governance" is defending headcount. Responding with logic to emotional signals escalates the resistance, because the resister reads the logical response as dismissal of the actual concern.
Four interventions that work:
- Run peer adoption, not top-down persuasion. Seeing a same-rank colleague ship something with AI is more convincing than any leadership statement, because the resister cannot dismiss the peer as someone who doesn't understand the real work.
- Make psychological safety around failure explicit, not implicit. Senior people publicly share failed experiments before mid-level people will try. If the CEO does not talk about what they tried that didn't work, nobody else will either. The failure-sharing norm only propagates downward when it is modeled at the top.
- Run the corporate-raider exercise annually. Have the leadership team imagine the company just went bankrupt, and they bought it on fire-sale terms. What would they change on day one? The framing overrides sunk-cost attachment and produces different answers than a normal strategy offsite does.
- Repeat the communication until it compounds. The Jensen Huang pattern is that by the time an announcement lands, the audience feels "why only now?" because the framing has been surfacing in conversation for months. Content lands on repetition, not on first exposure. A new strategic direction shared once at an all-hands meeting is almost always lost.
Corporate-communications teams writing AI announcements in the same register as other change programs usually miss the mark. The emotional layer does not respond to communications-department copy. It responds to leadership visibly doing the work and acknowledging the emotional reality of the people being asked to change.
The transition quarter is worse, not better, and the cost has to be budgeted
Teams redistribute time during the transition quarter rather than saving it. The time spent learning, experimenting, rebuilding workflows, and cleaning up the resulting mess exceeds the time saved by AI taking parts of the old work. Both quality and throughput decline. The dip is real, and pretending otherwise produces worse outcomes than accepting it, because the pretense means teams get blamed for a decline that was structurally inevitable.
Three ways to absorb the dip:
- Remove people temporarily. Secondments into the Pirate-plus-Architect team, or rotations into transformation roles. People know they will return to their regular seats, so the dip is bounded and the institutional knowledge does not walk out the door.
- Accept lower quality during transition. State explicitly that the org is investing in capability build and will accept three to six months of reduced output quality as the cost. The key is announcing this in Phase 1, not in Phase 4 when the dip has already arrived and everyone wants to assign blame.
- Cut scope. Freeze new feature work. Run the transformation as the main priority for a quarter. This is the hardest of the three because it requires not shipping, which cuts across most operating instincts in most companies.
The dip has a shape: worse for one quarter, recovery in the next, and baseline-or-better by quarter three. A dip that does not end after two quarters and shows no upward trajectory is a signal the approach is wrong. Pre-commit to the dip in the Phase 1 CEO communication and frame it as investment cost. Teams warned to expect the dip handle it reasonably well. Teams sold "AI makes you faster immediately" instead blame AI when the dip arrives, and that blame becomes the political fuel that ends the transformation before Phase 4.
Three counterintuitive adoption patterns are worth planning around
Smart engineers are often the worst adopters. The senior engineers with the deepest technical skill are frequently the slowest to adopt. Their decomposition habit ("let me write this as a proper system with clean abstractions") becomes the bottleneck when working with an AI that is happy to produce a messy but correct first draft. High-agency people with lower technical sophistication outperform brilliant engineers because they accept the AI's output and iterate rather than rewriting it into conformance with their own preferred design. Dell'Acqua's BCG field experiment with 758 consultants is consistent with the finding that within senior ranks, the least-architectural often win. A corollary is not to pick champions by technical rank.
Pre-AI performance does not predict post-AI performance. Mediocre performers often become the best AI users. They were unmotivated in the old regime, not incompetent. AI removes the grunt work they could not push themselves through, and the judgment they always had starts producing leverage. Veterans of thirty or forty years often refuse new tools entirely. Firing based on pre-AI performance produces a Week-1 roster that looks strong on the old metrics but turns out weaker once the tool-adoption data comes in at Week 12.
Collapsing teams directly triggers the worst resistance. Old teams read a merger as loss: loss of authority, loss of identity, loss of headcount. The cleaner pattern is to build parallel structures that cannibalize the old ones through better output. The old teams die naturally when their work becomes irrelevant. Announcing a team merge as the transformation triggers the passive-resistance pattern described in the next section.
Eleven failure modes recur across 2026 transformations
The named cases below are the shapes that fail transformations repeatedly. Each is paired with its mechanism and the corrective, because naming the pattern is useful only if the organization knows what to do about it.
- Pilots measured on adoption rather than business outcomes. The MIT Project NANDA study, State of AI in Business 2025, found that 95% of enterprise AI pilots deliver no measurable business return against $30-40 billion in aggregate spend, and only 5% achieve value at scale. A Gartner survey of 782 infrastructure and operations leaders in late 2025 found only 28% of AI use cases fully succeed and meet ROI expectations, while 20% fail outright. The mechanism is announcements and pilots without outcomes tied to the P&L. Teams track logins and "adoption" instead of hours saved, revenue lifted, cost cut. The correction is business-metric measurement from the start. Adoption rate is how many people logged in. Business impact is what changed in the financial statements.
- Passive resistance after a top-down mandate without psychological safety. Around 30% of employees admit to some form of sabotage-adjacent behavior in response to AI rollouts, rising to roughly 44% for Gen Z workers in the 2026 Writer and Workplace Intelligence survey. Shapes include entering proprietary information into public AI tools, using unapproved AI to make the official tooling look worse, and intentionally generating low-output work to make AI appear less effective. The mechanism is a forcing function without an environment or without psychological safety around failure. The correction is to surface resistance early as part of the emotional-layer work, not punish early failure, and adjust the forcing function if the resistance exceeds threshold.
- Unsanctioned use of personal AI accounts for work. BCG's AI at Work 2025 survey found that more than half of employees would use AI at work without company approval. Industry discussions name this pattern "shadow AI." The mechanism is that sanctioned tooling is too restrictive or too slow, so the work moves off-platform to personal accounts. The correction is governance that matches or beats consumer UX, named approved tools with frontier capability, and an audit trail that detects personal-account use without punishing people who come forward. Shadow AI is a signal that the approved tooling is broken, not a disciplinary case.
- Cuts ahead of the actual AI substitution curve. Gartner's February 2026 prediction is that 50% of companies that attribute headcount reduction to AI will rehire staff for similar functions by 2027. The data on what has already happened is stronger. A Careerminds survey of 600 HR professionals in February 2026 found that 35.6% of organizations that cut roles due to AI had already rehired more than half of the cut roles, 32.7% had rehired 25-50% of them, and 52.1% had done so within six months of the layoff. The mechanism is that cuts run ahead of AI's real substitution capacity. Institutional knowledge disappears. Productivity drops because the work was not actually automatable at current capability. The rehire happens at a premium because external candidates price the scarcity. The correction is augmentation-before-replacement sequencing, keeping institutional knowledge until it is encoded into skills, and staggering cuts against demonstrated substitution rather than against announced ambition.
- Champions burning out inside six to nine months without an environment around them. Described above. Champions cannot change the environment. If compensation, calendar protection, and promotion criteria do not shift, champions absorb the political cost of the transformation and quit. The correction is to build the environment in parallel with champion identification, not afterward.
- A transition dip that does not end. The productivity dip is expected. A dip that stays flat or worsens after two quarters is a signal that the approach is wrong, not that the team needs to push harder. The correction is to diagnose whether the forcing function was clear, whether the environment actually stood up, and whether the Pirate-plus-Architect pair was hired. The failure is almost always in one of those three places.
- Compressing the phased sequence too aggressively. The one-month-per-management-level sizing (described below) is set by how fast political signals propagate through layers, not by the AI tooling. Organizations that try to compress a six-month enterprise transformation into twelve weeks usually skip Phase 3 (environment) and wonder why Phase 4 (role redesign) triggers passive resistance. The correction is to budget real time against the political topology of the specific organization.
- Messaging without action to back it up. Behavior change requires four things running together: role models, skills development, communication, and formal mechanisms (covered below). Running communication and skills development without role models visibly doing the new behavior, or without formal mechanisms in the promotion and compensation systems, produces the most common enterprise failure: the information exists, the posters are up, nothing happens. The correction is to install the missing legs of the four-part structure.
- Firing mediocre performers before the AI deployment. The adoption pattern above explains why this fails. The veterans who look productive in the old regime often refuse the new tools. The mediocre performers who were unmotivated in the old regime often become the best AI users because the grunt work that blocked them disappears. The correction is to wait on personnel decisions until after the Phase 4 re-baselining at Week 12. Performance in the old regime is not the same data as performance in the new one.
- Picking champions by technical rank. The smart-engineer adoption trap above. The correction is to pick by observed adoption behavior rather than by title or seniority.
- Hiring the Pirate or the Architect alone. The single-role hire is a common pattern because each position is easier to justify to a hiring committee in isolation than as a paired budget line. A Pirate without an Architect generates brittle demos that cannot scale, while the Architect alone builds substrate that nobody exercises in the field. The correction is to require the pair as a single hiring decision rather than two sequential ones.
Three operational disciplines separate transformations that compound from those that stall
The chapter has named the political mechanics. Three operational disciplines determine whether the political work translates into a firm that compounds or one that runs the dance for a year and ends near where it started.
Three days a week of CEO time, not two. The cadence claim runs through founder-led transformations at scale. CEOs of $400M-to-$700M-revenue firms running active AI transformations in 2026 typically commit roughly three full working days per week to it personally for at least a year. The earlier "2-3 full days" sizing sits at the floor of what produces traction; the firms that compound usually run at the top of the range. Three days a week is what makes the next forcing function legible to the rest of the firm — the calendar reflects the priority, the framing repeats often enough to land, and the design intuitions for the loops in 4.5 stay sharp because the CEO touches the agents every day rather than reviewing reports about them. Two days slips back into the wait-and-see equilibrium because the rest of the firm reads two days as "important but not the most important thing on the agenda this quarter." One day is theater.
Champions run parallel to substrate, not after it. The substrate work — identity, audit, context engineering, skill libraries, the operating-model duality from 2.1 — runs slow and careful and applies to the whole firm. The champion experiments — function-level, narrow, opinionated, deployed against a single workflow — run fast and visible and apply to one team. Either track alone fails. Substrate-only produces no traction, because nothing the rest of the firm can see is moving while the foundational work is being built. Champions-only produces local wins that do not scale, because the substrate the wins assumed never materialized and the champions burn out inside six to nine months as the cost of carrying the transformation alone surfaces. The discipline is to start both tracks in Phase 1 of the twelve-week sequence and treat their parallel progress as a single operating-model commitment rather than a sequenced plan.
The hotel cannot be turned off for renovation. The existing revenue base must keep running through the redesign, which is the load-bearing constraint on how aggressive the transformation can be. The legacy customer base, the regulatory commitments, the contractual SLAs, the billing systems, the supplier obligations, the people whose paychecks depend on this quarter's revenue — all of it stays operational while the firm rebuilds the substrate underneath. The discipline is to carve the transformation work into changes that can land without taking the revenue base down: parallel deployment of the new operating model alongside the old, gated cutover by function rather than firm-wide on a single date, and explicit budget for running both topologies in parallel during the cutover window. The forcing function gets calibrated against this constraint — a hard deadline on Confluence works because Confluence sits below the revenue base, but the same forcing function applied to the firm's billing system would take revenue offline and produce a different category of incident. The transformations that stall do so when leadership underestimates the parallel-running cost; the transformations that compound budget for it from Phase 1.
Enterprise-scale transformations have a different timing and diagnostic shape
Two instruments matter at enterprise scale.
Behavior change requires four things at once. McKinsey's Influence Model, first published in a 2003 McKinsey Quarterly article and restated in 2016, captures the shape. Four levers: role models (senior people visibly doing the new behavior), skills development (concrete training), fostering understanding and conviction (repeated communication about why), and formal mechanisms (policy, process, compensation). McKinsey's own analysis of change programs found that running all four levers together increases the likelihood of transformation success threefold over running any subset. Running three without the fourth fails. The most common enterprise failure is communication and skills development without role models or formal mechanisms. Senior leaders talk about AI. Training gets rolled out. But the senior leaders do not actually use AI in visible ways, and the promotion criteria still reward the pre-AI behaviors, so the gap between what the organization says and what it rewards is the signal everyone reads. The diagnostic is to audit all four legs before adding more communication or more training.
Budget roughly one month per management level. A 500-person organization with five layers takes approximately six months to move through a full transformation. The sizing is not set by the AI tooling. It is set by how fast political signals propagate through layers, because each layer has to absorb the signal, decide whether the layer above means it, and adjust its own behavior before the layer below starts moving. Small organizations move faster because they have fewer layers, not because they have better tools. Enterprise transformation experience across multiple client engagements anchors the sizing rule: five to six management levels takes roughly six months, and compressing the schedule usually produces a Phase-3 skip rather than a faster transformation.
For startups of 5-50 people
The CEO is the champion and the Pirate, usually both at once. There is no formal champion network. Peer demos happen at standup. The forcing function is "we hire fewer people and use more AI to stay fast," which is the cleanest version of the hiring gate because there is no intermediate layer to filter the signal. The productivity dip is shorter (roughly six weeks rather than a full quarter) because fewer management layers absorb the shock. Compensation is equity-heavy by default. Recognition is ambient. The Architect role still matters even at ten people, and is usually the co-founder who likes infrastructure more than velocity.
For enterprises of 500+ people
Plan for roughly one month per management level. The CEO must commit two to three full days a week personally for the duration, not less. Champion network sizing should be roughly one champion per 100 employees, mandatory rather than optional. Environment investment runs in parallel with the forcing function from Phase 1. Starting one without the other is the failure mode, not sequencing them. Compensation and promotion-criteria adjustment are on the critical path, not a follow-up exercise, because the old criteria are the loudest behavioral signal in the organization and nothing else competes with them until they move.
The open question is what transformations look like when the 2026 technology substrate is itself replaced. Agent capabilities that take six months to integrate in early 2026 may become native model features within a year, which means a transformation run in Q2 2026 will sit on a different substrate than one run in Q4 2026. The political dynamics are more stable than the technology. The forcing function, the environment, and the productivity dip will still be the structural elements. The tools underneath them will keep moving.