Something is changing in how organizations make commercial decisions, and it is not what most people think.

The narrative in most boardrooms and strategy discussions is that AI is transforming commercial decision-making by providing better analysis, faster synthesis, and more sophisticated modelling. That narrative is partially correct and almost entirely misses the point.

Better analysis does not produce better decisions. It produces more options, more scenarios, and more data — all of which require a decision framework to be useful. Without that framework, better analysis produces better-informed disagreements, not better outcomes. The bottleneck in commercial decision-making has never been the quality of the analysis. It has been the quality of the decision process.

What is actually happening in commercial organizations

The proliferation of AI tools inside client organizations is creating a new and underappreciated problem. Organizations now have access to more analytical capability than ever before. They can model more scenarios, generate more options, and synthesize more market information in less time than was possible five years ago.

What they do not have — and what the AI tools do not provide — is a structured decision framework that turns that analytical output into governed, defensible, documented decisions. The gap between analysis and decision, always present, is widening as analytical capability expands without a corresponding investment in decision governance.

This matters because the cost of commercial decision failures is not primarily a cost of bad analysis. It is a cost of good analysis that was not translated into explicit trade-offs, documented rationale, and accountable recommendations. Organizations that made poor bid decisions, selected the wrong suppliers, or mispriced market entries rarely did so because they lacked data. They did so because the data they had was never converted into a decision with a clear structure and a defensible rationale.

The three failures that governed decisions prevent

The first failure is inconsistency. When commercial decisions are made through ad hoc processes — meetings, spreadsheets, and the judgment of whoever is in the room — outcomes vary with the composition of the room rather than with the quality of the opportunity. The same bid, evaluated by different teams in different meetings, produces different prices and different recommendations. That inconsistency is expensive: it means the organization is not learning from its decisions, because the decisions are not comparable.

The second failure is undocumented rationale. When a decision is made in a meeting and the reasoning is not recorded, it cannot be reviewed, audited, or learned from. The organization discovers that a floor was wrong, or a win probability was optimistic, or a competitive response was not anticipated — but it cannot trace that discovery back to the assumption that failed, because the assumption was never written down. The same mistake recurs.

The third failure is ungoverned overrides. In every commercial organization, decisions are overridden. A floor is moved. A recommendation is reversed. A discount is approved outside authorized parameters. These overrides are not inherently wrong — sometimes the person making the override has information or judgment that the analysis did not capture. But an override without a documented rationale is indistinguishable from an override made under pressure or for the wrong reasons. The organization cannot tell the difference, and neither can an auditor.

What governed commercial decisions actually look like

Governed commercial decisions share four characteristics regardless of whether the decision is a bid, a procurement selection, a pricing adjustment, or a market entry.

The trade-offs are explicit. Not discussed — explicit. Win probability and margin are quantified together, not treated as separate considerations. Cost and risk are modelled together, not compared independently. The decision is made on the basis of the full trade-off, not on whichever dimension happened to dominate the meeting.
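To make "quantified together" concrete, here is a minimal sketch in Python. All prices, costs, and win probabilities are hypothetical illustrations: the point is only that each candidate bid price is reduced to a single expected-margin figure, so the trade-off is decided on one axis instead of argued dimension by dimension.

```python
# Illustrative only: hypothetical bid prices, costs, and win probabilities.
candidates = [
    # (bid_price, estimated_win_probability)
    (1_000_000, 0.70),
    (1_100_000, 0.50),
    (1_200_000, 0.30),
]
cost = 900_000  # assumed cost to deliver

def expected_margin(price: float, p_win: float, cost: float) -> float:
    """Margin if the bid is won, weighted by the probability of winning."""
    return p_win * (price - cost)

# The full trade-off, as one comparable number per option:
best = max(candidates, key=lambda c: expected_margin(c[0], c[1], cost))
for price, p_win in candidates:
    print(f"bid {price}: expected margin {expected_margin(price, p_win, cost):,.0f}")
```

Here the middle option wins on expected margin even though it has neither the highest win probability nor the highest margin — exactly the kind of result that a meeting comparing the two dimensions separately tends to miss.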

The rationale is documented. The recommendation, the assumptions behind it, the evidence that supports those assumptions, and the conditions under which the recommendation would change — all recorded, in a form that survives the departure of the individuals who made the decision.

The override is registered. When a recommendation is overridden, the override is recorded with a rationale. The organization knows who overrode what, on what basis, and with what outcome. That record is the foundation of organizational learning — and the foundation of defensibility when the decision is subsequently reviewed.
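As an illustration only, an override registry entry can be as simple as a record that refuses to exist without a rationale. The field names below are assumptions made for the sketch, not a reference to any particular system.

```python
# Sketch of an override registry entry; all field names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OverrideRecord:
    decision_id: str   # which recommendation was overridden
    original: str      # what the framework recommended
    override: str      # what was actually decided
    approved_by: str   # who made the override
    rationale: str     # required: the basis for the override
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def __post_init__(self) -> None:
        # An override without a documented rationale is not registrable.
        if not self.rationale.strip():
            raise ValueError("an override must carry a documented rationale")
```

The design choice that matters is the validation step: the record cannot be created without a rationale, which is what turns "we usually write down why" into a property of the process rather than a habit of individuals.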

The decision compounds. Each decision builds on the record of prior decisions. Win probability assumptions are calibrated against actual win rates. Supplier performance scores are calibrated against actual delivery outcomes. Pricing assumptions are tested against actual market response. The organization makes progressively better decisions as the record grows — not because individuals get smarter, but because the methodology gets better-calibrated.
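The calibration step described above can be sketched in a few lines: group past decisions by the win probability that was assumed, then compare each group's assumption to the win rate actually realized. The decision records below are hypothetical.

```python
# Sketch of calibrating win-probability assumptions against outcomes.
# The (predicted probability, won) records are hypothetical illustrations.
from collections import defaultdict

records = [
    (0.7, True), (0.7, False), (0.7, True),
    (0.3, False), (0.3, False), (0.3, True),
]

buckets: dict[float, list[bool]] = defaultdict(list)
for predicted, won in records:
    buckets[predicted].append(won)

for predicted, outcomes in sorted(buckets.items()):
    actual = sum(outcomes) / len(outcomes)
    print(f"assumed {predicted:.0%} -> actual {actual:.0%} "
          f"over {len(outcomes)} bids")
```

With a documented record, a gap between an assumed 70% and a realized win rate becomes a correction to the methodology; without the record, the same gap is just an anecdote about one deal.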

The consulting disruption and what it actually means

The argument that AI tools are reducing the need for consulting is partially right and largely misses what matters.

What AI tools displace is the consulting work that was always really about information access and synthesis — the work that a sufficiently capable analytical team with good tools could always have done internally, and now can. That work was never the source of lasting consulting value, even when it was scarce enough to command a premium.

What AI tools do not displace — and may actually increase demand for — is the work of building decision governance into commercial organizations. An organization that acquires AI analytical tools without a decision framework gets better analysis in service of the same ad hoc decision processes that were failing before. The analysis is faster and richer. The decisions are not better governed. The gap between analytical capability and decision quality widens.

The organizations that will navigate this transition successfully are those that invest in decision governance as a capability — not just in analytical tools. That investment requires intellectual architecture: a methodology grounded in decision theory and commercial experience, not just software. It requires implementation: configuring that methodology to the specific competitive environment and organizational context of the client. And it requires discipline: building the documentation and override registry habits that make the governance real rather than nominal.

That is not work that AI tools do on their own. It is work that requires the combination of proven methodology, implementation capability, and ongoing calibration that good consulting has always provided — and that remains valuable precisely because the analytical tools proliferating inside client organizations make the governance gap more visible, not less.

The question organizations should be asking

The right question for any organization evaluating its commercial decision capability is not: do we have access to good analysis? Most organizations do, or can.

The right question is: when our analysis produces a recommendation, how does that recommendation become a decision? Who challenges the assumptions? What trade-offs are explicitly made? What rationale is documented? What happens when someone overrides the recommendation?

If the answer to those questions is “we discuss it in a meeting,” the organization has analysis but not decision governance. The gap between those two things is where commercial performance is won or lost.