Business. What's the point?
Businesses run on answers. AI just made that a problem.
The very first course I took as an undergrad turned out to be the most relevant to my career—and I dropped it after one class.
I, like many 18-year-olds, didn’t have a clue what I wanted to do as I entered college. When considering my course of study I did the seemingly rational thing—I took stock of what I enjoyed (read: was good at) and determined what could lead to the best opportunities (read: most lucrative) after graduation. For me, that analysis resulted in studying engineering.
The first classroom I entered, however, was not an engineering class. It was the one non-core elective I was able to squeeze into my freshman year, a Classics course. The first day of class the professor distributed a one-page essay titled “The Useless: What’s the Point?”, an essay on the purpose of a liberal arts education. The essay glorified the concept of understanding, which it defined as seeking to know the great many things in the larger world, a quest with no end that liberated us from the immediate and practical. It contrasted understanding with critical thinking, calling it a “faddish designation for a practical, marketable sort of intellectual activity,” critiquing it for stopping once its purpose, a solution, is achieved. The thesis, in short, was that inquiry matters more than the answer it produces.
(Unfortunately, the essay rather aggressively contrasted the aspiring humanists it expected to be present with the aspiring engineers it didn’t expect. Feeling out of place, I dropped the class the next day.)
Little did I know that this tension—between questions and answers—would follow me throughout my career. As a young management consultant, I observed large organizations implement policies and procedures that directed their people toward consistent, reliable answers, enabling incredible scale. Years later, as a senior advisor tasked with defining innovative growth strategies, I watched incredibly successful leaders struggle to think expansively about their own businesses—not for lack of intelligence, but because the organizations they led had spent years training inquiry out of them. And in my own role scaling a creative agency through an acquisition, I found a direct tradeoff between growing the business and preserving the inquisitiveness that made it worth acquiring in the first place.
Then, AI arrived and laid the quandary out in plain sight.
Suddenly, there was a technology that could provide answers to nearly anything. Businesses salivated—AI could be fed data and conduct the analytical thinking their organizations run on. Workflows could be automated. Outputs could be generated. Even decisions could be scaled. As for the knowledge workers who currently do this analytical thinking, their future remains up in the air as AI capabilities continue to impress us with each release.
While the adoption of AI at scale by enterprises is new, the organizational dilemma at the heart of this moment is not. As the world has become more unstable and unpredictable over the past few decades, companies have had to investigate how to make their organizations more adaptive. Most have been unsuccessful, because adapting requires organizational changes that run contrary to how they succeed today.
But I believe AI has now forced their hand. Now that analytical thinking has been commoditized, differentiation can only be found in what analytical thinking can’t do. Organizations, and the people within them, that have operated on answers for decades now need to operate on questions.
The bad news is that this is a much bigger transformation than most are currently considering. It doesn’t simply boil down to AI literacy, as many organizations currently believe. Shifting from answers to questions has implications across the board—how the organization is structured, the people it employs, and the culture it promotes. The good news is that the many failed experiments of the past few decades have shown us where the points of failure are. If we can correct for those, we can finally achieve that elusive adaptive organization everyone has been striving for.
Which raises the questions: How do these “Answer Organizations” operate on answers today, and why will AI change that? What are these points of failure, and how can they be addressed? And finally, what does a “Question Organization” even look like?
Let’s discuss.
How Organizations Run on Answers
The evolution of businesses from startup to scaled incumbent follows the same arc so reliably it barely needs explaining. A company is born by some creative, or often accidental, act. To grow, it defines what makes it successful so that success can be repeated again and again. To manage its size, it sets rigid structures to ensure consistency and therefore continued success. This works for as long as the conditions that created the success remain stable. But, inevitably, when the world around it changes, the company struggles to adapt against the very rigidity it built to scale.
Over the last several decades, the pace of change has accelerated. Recognizing this, businesses have been on a journey to become more adaptive. Their goal is to continue performing at their core, scaled business, while simultaneously transforming to be relevant in the future. You would think, given the existential nature of the challenge and the decades spent trying, that businesses would have figured this out by now. But while they’ve certainly tried, they largely haven’t—because it’s exceptionally difficult.
Roger Martin explains why in The Design of Business. Martin describes a “knowledge funnel” in which new knowledge starts as a mystery, evolves into a heuristic (a rule of thumb that narrows the field of inquiry to something manageable), and is finally codified as an algorithm. He argues that once businesses define an algorithm, they exploit it in order to scale, becoming fixated on reliability so they can reproduce it consistently. Inventing new businesses, or adapting existing ones, requires the opposite: exploration at the top of the funnel and a search for validity rather than reliability.
Businesses in the algorithm stage run on analytical thinking, because analytical thinking works from what is already known. Both deductive reasoning (applying established rules to specific cases) and inductive reasoning (drawing conclusions from observed data) are backward-looking. Their output is reliable because it is based on known data, which makes it perfect for optimizing a set algorithm with a documented track record.
Put a different way, scaled businesses run on answers.
While Martin’s dynamic has been generally understood, likely since well before his book was published in 2009, I believe businesses underestimate how deeply ingrained running on answers is. Martin’s examples alone span structures, processes, and culture. He points out that permanent jobs and ongoing tasks are products of running an algorithm reliably; financial planning is a fundamentally reliability-driven process that sets targets based on data from the past; reward systems favor larger revenue and bigger operations—won by optimizing the proven algorithm, not discovering a new one; and cultures see constraints as enemies interrupting reliability rather than opportunities to create something new. Martin also warns of the preponderance of analytical thinking ingrained by professional degrees (including MBAs), the reliability orientation of Wall Street analysts and boards of directors, and the challenges of defending validity in a world obsessed with reliability.
I spent a decade as part of an “innovation agency” whose remit was to fight against businesses’ natural inclination to get stuck in the algorithm stage. The very existence of agencies like mine is proof that organizations both acknowledge their need to adapt and struggle to do so independently. Given my experience, I think Martin undersells the difficulty of overcoming an answer bias. Throughout the 2010s, I worked with large businesses to develop new innovation structures, cultures, and products—all starting at the top of the knowledge funnel.
Time and again we ran into issues. New structures collapsed as budget was reallocated to core business endeavors that offered a higher return on investment. New cultures clashed with a system that continued to reward those that scaled the algorithm over those that attempted to discover new knowledge. And new innovations themselves were modified to fit the infrastructure that already existed, no matter what it did to the desirability of the final product.
Not only did I see these adaptation challenges in the clients I worked with, but I also experienced them directly when my small, privately owned innovation agency was acquired by a large, publicly traded management consultancy to grow its presence in a new market. While we were a strategic acquisition whose revenue was a rounding error to our acquirer, we were still required to grow at the same percentage as the rest of the company and received a budget based on our size and headcount rather than our objectives and goals. The infrastructure built for our business was replaced by that designed for an organization 4,000 times our size (because…“synergies”). The incentives our people were given promoted behavior that ran contrary to why we were acquired.
Even though the intention of our acquirer was to get into a new market, it operated an Answer Organization that prevented it from adapting. The inappropriate growth demands, underfunding due to our relatively small size, infrastructure that did not serve our business, and incentives running counter to our goals were the result of their algorithms intent on optimizing the core business. It wasn’t irrational thinking driving these decisions—it was sound analytical thinking. I knew that every time I pushed in a direction that ran contrary to the algorithm, there was a rational, data-backed answer pushing back. And where there wasn’t data, there was advice: when I raised my hand to start a new venture with the foresight of how AI would disrupt consulting, my advisor discouraged me—starting something new at a company like this was “career suicide.”
The incumbency trap is so challenging to avoid not because organizations are irrational, but because they are entirely rational—built on analytical thinking that supplies solid, data-backed answers. Of course, there is a role for analytical thinking in organizations. Martin argues the key is balancing exploitation of the algorithm with exploration up the knowledge funnel—performing at the core while transforming for the future. But as someone whose job it was to make sure the two coexisted in organizations, I found this nearly impossible because they run so contrary to each other. To make room for exploration you need to be suboptimal at exploitation—and in an Answer Organization that simply cannot be.
But I believe AI will force a reckoning.
How AI Has Changed the Equation
Most knowledge workers are going through an existential crisis right now—and for good reason. They likely work for an Answer Organization, where analytical thinking dominates their tasks. They’ve prided themselves on analyzing data and developing an answer to either implement directly or report up the chain. Their degrees and experience credential them for exactly this kind of analytical work, at their current organization or any other. But they’ve witnessed how excellent AI is at doing the very same analytical thinking and getting exponentially better with each model release.
But what happens when AI is successfully implemented by all organizations? How will companies differentiate themselves if everyone can execute their algorithm perfectly? And what happened to the desire to be more adaptive as an organization in the face of accelerating change? Has change suddenly slowed?
When AI can deliver analytical thinking at scale, the ability to exploit your algorithm reliably becomes a commodity. Suddenly, differentiation moves up the knowledge funnel to those organizations that can validate new algorithms. As I’ve discussed, this need to be more adaptive is not new for businesses. But AI has changed the equation in two ways: it has made reliable performance easier to achieve, and it has made transformation no longer optional.
The initial rollout of AI has been rocky, but the difficulties are largely those of implementation and, frankly, should be expected. The technology itself has proven more than capable of doing the analytical thinking for Answer Organizations. People are also largely embracing it, including in the “shadows” where corporate mandates do not exist.
What is more interesting to me is how AI is already starting to conflict with Answer Organizations.
In early 2025 I was discussing AI implementation with a client in the media and entertainment industry. He mentioned that the most senior people in the company were getting in the way of AI transformation. I asked why—surely senior leaders saw the potential of the technology and wanted to be at the forefront of reaping the benefits. The client chuckled at my political naïveté: implementing AI would mean a reduction in their organization’s headcount, which would correspond to a reduction in budget, and therefore a reduction in power. In other words, if a CMO automated their marketing organization, they would at best undercut their next salary negotiation, and at worst risk their own job security.
While this doesn’t immediately seem to be a dynamic rooted in being an Answer Organization, it very much is. Those who preside over the largest operations, which usually correspond to the largest headcount and budgets, reap the largest rewards in Answer Organizations. The surest path to that position is overseeing further exploitation of the algorithm, reliably driving growth and meeting forecasts. (This dynamic is also why my advisor at our acquirer thought starting something new would be career suicide.)
This particular issue is not unsolvable, but it does illustrate how deep the Answer Organization paradigm runs. In a future where answers are commoditized, the changes organizations need to make to differentiate themselves are more structural than initially meets the eye.
AI is also exposing a problem at the individual level of Answer Organizations: the inability of employees to ask the right questions.
That problem has a name: “workslop.” Workslop is AI-generated work content that lacks the substance to meaningfully advance a given task. While further research adequately outlines the causes of workslop and how to stop it, fixing workslop isn’t what interests me. Simply getting back to “good work” would be a waste of a good crisis.
AI, with its superhuman ability to provide answers, is killing productivity in organizations that run on answers. Why? It isn’t the fault of the technology, which is more than capable of delivering quality work. It isn’t a lack of care, as these same employees delivered good work before AI was available. It’s user error. Good AI collaboration requires context, active challenge, and redirection toward the right outcomes—all of which require the ability to ask the right questions.
It has been widely reported that “soft skills” are becoming more important, as the half-life of technical skills is expected to soon fall to two years. To date these arguments have been acknowledged and largely ignored—partly because the skills they preach can be challenging to define and teach, but predominantly because they are deprioritized in a world dominated by analytical thinking. The ability to ask questions has atrophied, and organizations have never needed to fix the problem.
In 2009 Martin described a battle between labor and companies that is playing out quite literally today. He noted that out of sheer self-interest, talent keeps their “heuristic shrouded in priestly secrecy” to prevent it from becoming an algorithm that can be handed to a much less expensive person. Today, AI is that less expensive person—capable of running not just the algorithm, but the heuristic too. Hiding a heuristic is no longer an option. The only move is up the funnel.
The path forward for knowledge workers and their organizations is the same. They need to move up the knowledge funnel to generate the new understanding that will differentiate the business. Not because it would be nice to be more adaptive, but because survival in the future now depends on it.
It’s time for Answer Organizations to become Question Organizations.
The Points of Failure in Building a Question Organization
For the longest time I’ve been tortured by trying to explain what is meant by critical thinking while attempting to avoid using the term critical thinking. Everyone has their own definition and their own critique, including that the term is too abstract to be useful.
In search of better language, I turned to David Hitchcock’s Critical Thinking entry in The Stanford Encyclopedia of Philosophy. It was helpful in part because it confirmed to me that battling over nuances in the definition, or over similar terms, is a complete red herring. Problem solving, higher-order thinking, creative thinking—the through line is the same: careful thinking directed to a goal.
Let’s call it critical thinking, and move on.
While Hitchcock’s focus is education, his work translates directly to business. He describes the origins of critical thinking in philosophers who preached a “scientific attitude of mind.” Terms like “observing,” “experimenting,” and “deciding” are used to describe key components of critical thinking—all of which are common in business contexts. He tangibly defines the seemingly abstract “soft skill,” and cites hard evidence that it can be taught. Most importantly, he lands the value of critical thinking: the means for people to understand.
The objective of organizations built on questions is to move up the knowledge funnel and understand the mysteries of the world for the purpose of adapting their businesses. With understanding as the goal, critical thinking is the cornerstone of what will get them there.
Well, there you go. Organizations need to move up the knowledge funnel to understand the world around them and adapt accordingly; critical thinking enables that understanding; and critical thinking can be both defined and taught. Let’s spin up a couple of corporate bootcamps on critical thinking and call it a day!
Right?
If only it were that easy. Advocating for critical thinking in business is not new. I have never found a business leader who has disagreed that critical thinking is a valuable skill. Arguments for critical thinking are only growing louder as people think about the effect of AI on the human workforce. I’ve seen more demand for critical thinking firsthand—clients increasingly asking how to make their workforces more “future-ready,” “strategic,” or “creative.”
The problem isn’t advocacy or demand—it’s that organizations are still built on answers. Answer Organizations fail to enable critical thinking in three ways: they undermine the dispositions needed, they “teach” the skills incorrectly, and they ignore the knowledge required.
Undermining the Disposition
The biggest point of failure for critical thinking in Answer Organizations is how they relentlessly undermine the disposition required.
Hitchcock describes dispositions as “habits of the mind,” or general tendencies to think in particular ways in particular circumstances. There are initiating dispositions that start someone down a path of critical thought, such as a habit of inquiry, courage, and a willingness to suspend judgment, and internal dispositions that contribute to critical thought once it has started, such as honesty in facing one’s own biases, intellectual perseverance and humility, and anticipating possible consequences. I’ve often thought of dispositions as behaviors—for example, you can’t discover new knowledge without the behavior of curiosity.
Answer Organizations suppress a critical thinking disposition in so many tangible ways I fear I’ll fail to list them all. In this essay alone we’ve touched on permanent job structures that focus people on operating a small slice of the algorithm in a predetermined way; reward systems that make habits of inquiry career suicide; and analysts and boards that require laser focus on delivering the core business as projected, without wasting money and time on exploration boondoggles. All of these are examples of the downstream effects of Answer Organizations that directly discourage a critical thinking disposition.
Instead of attempting to be comprehensive in listing the downstream effects, I’ll instead focus on the source: the answer.
The goal of work in Answer Organizations is to develop reliable answers as efficiently as possible. This runs directly contrary to the dispositions required to think critically. Habits of inquiry and a willingness to suspend judgment take indefinite time and indefinite budget. Facing one’s own biases and anticipating possible consequences lend themselves to unreliable conclusions. The answer, the very basis of Answer Organizations, is incompatible with critical thinking.
Why haven’t we recognized this and rebelled? The truth is our brains are predisposed to love operating in Answer Organizations.
Daniel Kahneman’s System 1 and 2 framework from Thinking, Fast and Slow explains why our brains are comfortable in Answer Organizations. Kahneman famously popularized the two systems in the mind: System 1 operates automatically and quickly; System 2 allocates attention to effortful, deliberate reasoning. Because System 1 operates automatically and cannot be turned off, Kahneman exposes a number of biases, or errors of intuitive thought, that cannot always be avoided. He also describes a “law of least effort”: when multiple paths lead to the same goal, people choose the least cognitively demanding one. In practice, this means System 1 wins by default.
Answer Organizations amplify our laziest thinking. They encourage all sorts of cognitive biases that enable our System 1 thinking to take over, and leave System 2 dormant. For example, Kahneman describes an availability heuristic, in which people judge frequency by the ease with which instances come to mind, a product of System 1 thinking. In an Answer Organization, where what’s recent and familiar is prioritized over what is emerging and unfamiliar, our System 1 tendencies are encouraged. The answer that worked last time is the answer that gets promoted, as it’s deemed reliable and is efficient to recall.
From my business transformation work with clients, I believe undermining the dispositions that are required for critical thinking is, by far, the biggest point of failure. It is the most poorly understood, least invested in, and most undercut by the mechanics of an Answer Organization. Even our own brains are predisposed to work against us in a quest to evolve to a Question Organization.
“Teaching” Critical Thinking
Critical thinking courses designed for businesses already exist, and countless methodologies implicitly encourage critical thinking in business contexts. For example, design thinking, which instructs people to observe, empathize, hypothesize, and experiment, has been popularized in the business world over the last quarter century. Its language mirrors Hitchcock’s definition of critical thinking almost exactly.
So why haven’t our Answer Organizations been taught how to be Question Organizations yet?
In some cases, what is taught as critical thinking is so manipulated to fit in with Answer Organizations that it abandons the purpose of critical thinking entirely. In A Short Guide to Building Your Team’s Critical Thinking Skills, a four-phase approach efficiently outlines how to evaluate critical thinking in a measurable way. This stepwise, measurable approach appeals to Answer Organizations, given their demand for efficiency and reliability. But the phases of the approach—execute, synthesize, recommend, and generate—all represent analytical thinking: operating from existing information and moving toward a defensible conclusion. While this form of “critical thinking” might be a helpful articulation of how to succeed in an Answer Organization, none of it represents critical thinking as Hitchcock defines it. All of it can be automated by AI today, and thus will not help differentiate organizations in the future.
Even if the phases of critical thinking were more helpful in their instruction, this highlights another way in which Answer Organizations fail to teach critical thinking: they efficiently process-ize it. In the same way Answer Organizations turn their business into an algorithm that can be scaled, they attempt to turn critical thinking into an algorithm itself to scale it across its employees.
This is yet another way in which Answer Organizations appeal to our System 1 thinking. Processes allow our System 1 thinking to take over and systematically check all the boxes, creating the illusion that we understand something without truly wrestling with it. Kahneman acknowledges “because adherence to standard operating procedures is difficult to second-guess, decision makers who expect to have their decisions scrutinized are driven to bureaucratic solutions.” Even Hitchcock caveats his section titled “The Process of Thinking Critically” by explaining “checklist conceptions of the process of critical thinking are open to the objection that they are too mechanical and procedural to fit the multi-dimensional and emotionally charged issues for which critical thinking is urgently needed.”
This is often the main critique of design thinking, a methodology that so often masquerades as critical thinking in business. While I wholeheartedly agree with the critiques, it’s important to note that it isn’t necessarily that the intention of the methodology is wrong—as I mentioned before, a lot of the language is one and the same with Hitchcock’s definition of critical thinking. Rather, it is when design thinking is implemented in an Answer Organization that it turns into a feckless exercise that focuses on checking off steps rather than properly thinking critically. I’ve witnessed design thinking break down countless times in Answer Organizations, as the demands of the organization clash with proper exploration for new understanding.
But the most baffling failure is the one hiding in plain sight. Answer Organizations fail to take advantage of their built-in learning mechanism for critical thinking: developing strategies for their business. Hitchcock explains that the teaching methods proven effective for critical thinking are dialogue (such as collaboration), anchored instruction (such as applied simulations), and mentoring. There is no better environment to teach critical thinking than a business already running on real problems. Teams are already collaborating on applied challenges with leaders overseeing them. Yet Answer Organizations pull people out of that environment to sit through classroom-style training that misses the point entirely.
Ignoring Key Knowledge
In addition to dispositions and abilities, Hitchcock explains that there is also required knowledge for critical thinking. But much of this knowledge is not valued by Answer Organizations.
Answer Organizations value expertise, people who have experience operating a given part of the algorithm so they can do so reliably. Kahneman’s illusion of validity is particularly relevant here. Kahneman showed that experts mistake narrative fluency rooted in expertise for genuine understanding. Expertise becomes the answer that forecloses the question. When psychologist Gary Klein challenged this theory, the two teamed up to test it and published their “failure to disagree.” They concluded that in “low-validity” or “wicked” environments—like the world that businesses operate in—expertise produces the illusion of skill rather than genuine skill.
I’m not here to pick a fight with expertise. Even Kahneman admits that this illusion is deeply ingrained in the culture of business and challenging it would threaten people’s livelihood and self-esteem (including mine!). Hitchcock also admits that substantive knowledge of the domain to which an issue belongs is helpful for critical thinking (whew!). This is why, no matter how capable the AI tools, an expert designer will always produce something a non-designer cannot.
But there is other key knowledge that organizations should value beyond expertise, including metacognitive skills.
Metacognitive skills are awareness and control of one’s own thinking processes. It’s thinking about your thinking. Answer Organizations have no use for metacognitive skills. Why would you need to think about your thinking when only the answer that exploits the algorithm is of value? If the answers are defensible, ensuring actual understanding is irrelevant.
But in a Question Organization, understanding is the whole point. When operating at the top of the knowledge funnel, you’re investigating mysteries and seeking validity to ensure a new strategic direction will be successful. Therefore, you need to avoid lazy thinking and cognitive illusions that might lead you astray. If a bias makes you believe a strategic direction will yield business success, you might waste valuable resources pursuing something that will not have its desired effect.
Metacognitive skills are just one example. The point is that Question Organizations will need to value knowledge beyond expertise. Arguments like this have been made frequently, sometimes in advocacy of “soft skills,” sometimes framed as generalism versus specialism. Books like Range by David Epstein argue that generalist-specialist teams are the most productive in business. This mirrors Martin’s point that businesses need to balance exploration with exploitation.
To me it’s quite simple: in a future world where we’ll all have access to a superhuman specialist in AI, we should value different things. Knowledge beyond one’s own domain is more valuable than it’s ever been.
Accomplishing the evolution from an Answer Organization to a Question Organization is not an easy task. It has nothing to do with malintent—some who run Answer Organizations preach critical thinking and have been actively attempting to make their organizations more adaptive for decades. But Answer Organizations have a compounding system in place that combats true understanding through critical thinking. They suppress the disposition required. They process-ize the skill. They prioritize narrow expertise. Together, these form a system that defeats itself before the transformation can begin.
AI: A Point Of Failure and An Opportunity
In 2023, at the start of the AI frenzy, the leaders of my consultancy became obsessed with the concept of productizing our professional services. The idea was simple and obvious—now that we had this technology, let’s take what we did manually with teams of people and sell it as a software product. In other words, automate the algorithm.
Our innovation division productized the design-oriented process we typically follow with clients. The tool identified potential market whitespaces, synthesized insights from synthetic consumers, generated relevant concepts, and iterated based on synthetic persona feedback. The tool worked well enough—it produced what it was supposed to produce. But clients found little value in it. Why?
The issue wasn’t in productization—it was in what was productized. The outputs we produced were not what clients valued most. They valued the understanding we developed with their teams, the same teams that would have to execute the strategy once we had left. Accelerating the development of output decreased the level of understanding we offered and therefore decreased the value of our engagements. If anything, we commoditized ourselves by automating what didn’t matter and failing to deliver what did.
For any company not selling AI directly, AI is simply a means of delivering the value you already offer. Critical thinkers in Question Organizations will know this and adapt the business accordingly. Answer Organizations looking to accelerate the algorithm will miss this entirely. In this way, AI provides an additional point of failure.
AI can also directly discourage the critical thinking required to become a Question Organization. Michael Gerlich’s AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking demonstrates this. Gerlich’s study reveals a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. In other words, people who outsource their thinking to AI are worse at thinking critically. Go figure.
But AI isn’t only an additional point of failure. For organizations that evolve into Question Organizations, it becomes an advantage.
From a Kahneman perspective, Gerlich’s conclusion makes perfect sense. The law of least effort can now be restated: just use your System 1 thinking, because AI can do all your System 2 work for you. I would have expected this to sour Kahneman’s view of AI—if it offloads System 2 thinking, what’s left for humans? But in a 2017 speech at the University of Toronto, he surprised me. Having spent his career documenting how inconsistent and biased human judgment is, he welcomed AI as a tool to eliminate that variability.
For Question Organizations, AI can eliminate human bias and inconsistency in routine analytical work, allowing the business to perform reliably at scale. That frees their critical thinkers to collaborate intelligently with AI and get better at solving the mysteries that lie at the top of the knowledge funnel. Just as AI without critical thinking produces workslop, critical thinking with AI amplifies understanding.
The AI literacy programs I’ve seen in the corporate world today ignore critical thinking skills. They typically provide an overview of how the technology works, some techniques for prompt engineering, policies on security and ethical responsibility, and a plug from the IT department for the tools they’re asking people to adopt. The result is an Answer Organization more deeply entrenched than before.
Contrast that with a healthy Question Organization that has avoided the points of failure listed earlier. In a culture that promotes critical thinking, curious practitioners are already playing with AI before leadership initiates a literacy program. Employees use AI to further their critical thinking every day, not through classroom training but through the actual strategic work of the business. They get the most out of AI by directing it with a metacognitive awareness, rather than getting directed by it toward workslop.
For organizations willing to build the capacity, AI is both the greatest risk and the greatest accelerant.
Becoming a Question Organization
The aspiration of becoming a more adaptive organization predates AI by decades. We’ve had many helpful visuals of what I’m calling a Question Organization. The most vivid one I’ve encountered recently is an octopus.
In Become an Octopus Organization, Jana Werner and Phil Le-Brun describe an organization in terms of the intelligent sea creature, whose arms can think and act independently yet work in concert. Werner and Le-Brun contrast today’s rigid organizations with their more adaptive octopus-inspired counterparts. In today’s organization, meetings are centered around answer dissemination; meetings should instead be designed to generate an outcome, where provocative questions are encouraged. In today’s organization, call center agents follow a formulaic script—a set of answers to deliver; agents should instead own the customer’s problem, questioning what’s needed. These examples are helpful in that they illustrate what a Question Organization should look like on the front lines.
Martin offers structural prescriptions for organizations looking to make the shift. Instead of permanent roles blindly turning the algorithm’s crank, form project teams that see the bigger picture and collaborate toward shared goals. Instead of budgeting from past spending data, set budgets around future goals and explicit spending limits—acknowledging that advancing knowledge is tricky to budget for. Instead of a reward system based on overseeing the largest algorithmic operation, reward those who solve wicked problems and generate impact in doing so. Each addresses a structural ill of the Answer Organization.
But while we can visualize the adaptive organization we want to achieve, and recommend structural, process, and cultural tweaks to address points of failure, Answer Organizations have proven stubbornly resistant to all of it. Our answers have been treating the symptoms, when a core question is what we need to cure the disease:
How do you design an organization for people to genuinely want to think?
The core of a Question Organization is a workforce with a genuine love of inquiry. Its people must be disposed to proactively move up the knowledge funnel and uncover the next mystery that differentiates the business. They must resist the pull toward process-following and tool-dependence, and willingly engage their System 2 thinking—with AI as a collaborator, not a substitute. This disposition is what Answer Organizations have most systematically destroyed, and it is why they routinely thwart every structural solution. It is also the principle around which a Question Organization must be designed.
Make no mistake: this is not a failure of people, it’s a failure of organization. People are natural lovers of inquiry (a point Kant observed long before the age of AI). AI is only proving this point in real time, as workers fear the automation of their analytical work far less than the loss of the inquiry work that gives their roles meaning. I’ve seen this in my own work: the most valuable thing I’ve provided clients isn’t strategy or output—it’s the conditions to develop their own understanding, free from the constraints of their Answer Organization.
In this sense, AI has forced a wonderful reckoning. Organizations must evolve into Question Organizations, where their people explore questions about the future while AI runs the algorithm of the present.
In “The Useless: What’s the Point?” my Classics professor argued that understanding has no practical end. He concluded the essay by writing, “understanding is not a big business.”
I now know how wrong he was.
The tension between answers and questions turns out to be deeply practical. Not because understanding is philosophically superior, but because in a world where analytical thinking is automated, the capacity to pursue genuine understanding is the most practical business advantage available. The organizations that know how to pursue questions will be the ones worth working for, investing in, and building.
I feel like I’ve finished the course, twenty years later.