Dear McKinsey,
I read your report on “superagency” and found myself wondering when AI governance became indistinguishable from corporate self-help.
In the report, you present a set of personality archetypes imported from a business book, treat employee enthusiasm as evidence of organizational readiness, collapse AI and generative AI into a single vague category, and then conclude, without any objective metric, that the biggest barrier to AI maturity is leadership not “moving fast enough.” What is missing is more revealing than what appears on the page: there is no measurement of actual harm or failure cases, no assessment of fear or ethical discomfort at the worker level, no real engagement with sector-specific constraints, no differentiation between types of generative AI systems, and no acknowledgement of the governance requirements that define real organizational capability today.
The dataset itself frames the problem.
Eighty-one percent of the respondents are based in the United States, yet the narrative is written as if it represents global workplace dynamics. In governance spaces across Europe, the Middle East, and Africa, the dominant concerns are risk, harm, compliance, power, and accountability, not speed. A U.S.-heavy sentiment study cannot be universalized without distortion, and yet the report repeatedly does exactly that.
The leap from survey perceptions to leadership blame is where the analysis fully detaches from evidence. You note that only 1 percent of companies consider themselves “mature,” 36 percent report no revenue impact from GenAI, and only 23 percent report any cost benefit. These results contradict the push for acceleration, yet the conclusion remains: leaders are too slow. On what basis? There is no rubric for “maturity,” no definition of “speed,” no measurement of actual deployment complexity, and no accounting for compliance or safety constraints across regulated sectors.
Your archetypes illustrate the weakness of the analytical frame.
They are not empirically derived; they are lifted directly from Superagency and retrofitted onto survey data. The distribution itself undermines your messaging: 39 percent Bloomers (optimistic), 37 percent Gloomers (skeptical), 20 percent Zoomers (accelerationist), and 4 percent Doomers. This is not evidence of universal employee readiness; it is evidence of fragmentation. Yet skepticism is framed as a personality issue instead of a rational response to uncertainty, governance gaps, and incomplete infrastructure.
There is a more serious definitional problem.
The report repeatedly treats “AI” and “gen AI” as interchangeable. The methodology section itself admits that although the research focuses primarily on gen AI, participants “may not have consistently differentiated between gen AI and other forms of AI.” Predictive models, optimization engines, retrieval systems, large language models, multimodal systems, and agentic architectures have very different risk profiles, integration requirements, and governance demands. Aggregating them into a single undifferentiated category undermines every claim about maturity, readiness, and value.
This conflation affects every conclusion: ROI claims cannot be generalized across technologies with different failure modes; “maturity” cannot be meaningfully assessed without specifying which class of system is in question; and governance requirements differ drastically across categories. Treating GenAI as a single homogenous capability is analytically unsound.
The absence of concrete enterprise use cases further weakens the argument. A table listing generic categories (content creation, summarization, smart search, planning) does not constitute a use-case analysis. There is no mapping to workflows, no assessment of data-governance demands, no treatment of safety requirements, and no account of sector-specific constraints. When the report claims readiness or value, it does so without articulating what organizations are supposed to be ready for.
The governance omissions are severe. In my work across global governance forums, harm, safety, compliance, documentation, auditability, fairness, and sector-specific regulation dominate every serious conversation on AI deployment. None of these appear anywhere in the analysis. The survey never asks whether employees fear job loss, surveillance, increased monitoring, producing unsafe or discriminatory outputs, or being held responsible for AI-generated errors. Without this, the report can only measure enthusiasm, not readiness. Enthusiasm is not a governance metric. Trust in one’s employer does not substitute for enforceable safeguards.
The report treats transformation as inevitable, acceleration as desirable, and caution as a leadership deficit. But slowness is sometimes a legal requirement, and caution is often a strategy rather than a psychological constraint. There is no real engagement with the possibility that, in some sectors or contexts, slower adoption is rational, or that some forms of deployment may not be worth pursuing at all once risks and externalities are fully considered.
When a report encourages organizations to “move faster” without acknowledging regulatory boundaries, integration challenges, or the operational risk of probabilistic systems, it is not offering guidance; it is offering a narrative.
These critiques are methodological and analytical. They are grounded in the report’s own numbers and definitions, not in speculation about intent. A framework that relies on imported archetypes, universalizes a U.S.-centric dataset, collapses distinct technologies, treats risk superficially, and interprets self-reported enthusiasm and trust as evidence of readiness cannot serve as a serious guide for organizations that are actually accountable for safety, compliance, and long-term impact.
If we are serious about AI’s impact on the workplace, we need clarity, not archetypes; differentiated technology categories, not a single umbrella label; risk measurement, not optimism curves; and governance frameworks that reflect actual organizational constraints. Anything less is not strategy; it is accelerationist mood management.
Sincerely,
An ex-consultant who has sat in these rooms
and now works in governance, where optimism cannot replace accountability.


Exceptional critique of how consultancy frameworks often substitute narrative momentum for methodological rigor. The point about collapsing AI and GenAI into a single undifferentiated category is especially sharp; different systems have wildly different governance surfaces, and conflating them makes maturity assessments meaningless. What really landed for me was the observation that enthusiasm isn't a governance metric: organizations optimizing for speed without accounting for compliance or sector-specific risk aren't demonstrating readiness, they're just externalizing future liabilities.
Asma... Perfect, as always. Thank you for your ongoing commitment to authenticity.