Introduction
Transaction Multiples: What’s Fair in Today’s Market?
There are few concepts in corporate finance as simultaneously precise and illusory as the transaction multiple. It is recited with the confidence of a known quantity—“twelve times trailing EBITDA,” “six times revenue,” “eighteen times next-twelve-months cash flow”—yet behind the neat ratio lies a scaffolding of assumptions, projections, narratives, and negotiated context. To treat the multiple as a price tag is to misunderstand both its epistemology and its purpose. It is not a discovery; it is a declaration—a signal in the fog of uncertainty, meant as much to persuade as to describe.
As financial leaders, we are trained to interpret the multiple as a market reference, a benchmark rooted in observed precedent and peer analysis. But we must also confront the deeper truth: that a multiple is not merely the output of a pricing mechanism, but the expression of a belief system—shaped by capital cycles, interest rates, liquidity regimes, and sentiment. It reflects the interplay between scarcity and abundance, between time and risk, and between fact and story. In today’s market—unmoored from historical anchors, awash in capital yet punctuated by anxiety—the question of what constitutes a “fair” multiple has become as much philosophical as it is financial.
The notion of fairness itself, when applied to pricing, deserves interrogation. Fairness implies symmetry, balance, mutual recognition of value. Yet in M&A markets, fairness is a moving target. Buyers and sellers enter with asymmetric information, divergent time horizons, and distinct motives. One sees a terminal cash flow stream; another sees strategic fit. One values control and option value; the other seeks liquidity and de-risking. Each party brings its own discount rate, its own priors, its own sense of the future. And so the multiple becomes not a neutral exchange rate, but a compromise—shaped by the relative strength of narratives, the constraints of alternatives, and the entropy of negotiation itself.
We often hear in boardrooms, with an air of finality, that “this company should trade at eight to ten times EBITDA.” But should is a word of moral force in a world of market relativism. What justifies that range? Is it based on comparable transactions? If so, which comparables, and in what macro context? Are those companies truly similar in growth profile, margin structure, capital intensity, and customer concentration? Or are we adjusting away every difference until the comparables are convenient rather than comparable?
What if, instead, we recognize that the multiple is not a measure of worth but a map of consensus expectations—drawn in erasable pencil, updated as conditions shift, and always subject to interpretation? What if fairness, in this context, means not arriving at a perfect price, but converging on a defensible one—grounded in logic, tested by precedent, but humbled by uncertainty?
The multiple also conceals more than it reveals. Two companies may trade at the same EBITDA multiple, yet differ dramatically in working capital cycles, reinvestment needs, tax attributes, and customer churn. A multiple is a compression algorithm, distilling a multi-variable business into a single quotient. But like any compression, it sacrifices fidelity for speed. It assumes stationarity in a non-stationary world. It invites comparison, but punishes nuance.
In this era of regime shifts—marked by inflation shocks, interest rate reversals, geopolitical uncertainty, and digital disruption—the idea of a fair multiple must be treated as a conditional construct, not a universal truth. What was fair in 2021, when capital was cheap and growth unbounded, is no longer fair in 2025, when cost of capital has risen and scarcity of certainty commands a premium. The discount rate, that silent force beneath all valuation, has shifted. And with it, so must our conception of fairness.
More dangerously, fairness itself has become a tool of negotiation. Sellers invoke “fairness” to defend a prior high-water mark. Buyers invoke it to constrain exuberance. Bankers invoke it to nudge parties toward the midpoint of their respective expectations. Fairness becomes not a measure of economic equivalence, but a rhetorical device. In such an environment, it is incumbent upon the CFO, the board, and the financial sponsor to ask: whose fairness, under what conditions, against what set of priors?
To navigate this fog, we must turn to a more robust framework—one rooted in systems thinking, game theory, and decision logic. We must treat each transaction not as an isolated pricing event, but as a feedback loop within a larger system—of markets, of narratives, of capital flows. We must examine how multiples are shaped by constraint: by the availability of debt, by the scarcity of quality assets, by the bottlenecks in diligence, and by the throughput of deal teams themselves. And we must recognize that the fairness of a multiple is not merely a question of price, but a reflection of risk allocation, time arbitrage, and embedded assumptions about the future.
This letter, then, is not a catalogue of market benchmarks or a report of recent multiples by sector. Such data, while useful, are already abundant and quickly stale. Instead, it is an attempt to probe the philosophical infrastructure beneath the multiple. To ask, as practitioners and fiduciaries, how we ought to reason about value when the market is simultaneously efficient in dissemination and chaotic in interpretation. To offer a method for assessing what is fair—not in the abstract, but in situ, under pressure, in deal rooms where judgment must act before certainty arrives.
In Part I, we will examine how multiples behave under macro compression and expansion cycles—how liquidity, inflation, and scarcity shape valuation bands and distort historical comparables. In Part II, we will explore the negotiation of multiples as a game-theoretic dynamic—where anchoring, signaling, and perception drive outcomes as much as fundamentals. In Part III, we will confront the epistemological fragility of comp sets—why observed transactions may mislead more than inform, and how to build probabilistic adjustments under entropy. In Part IV, we will move beyond multiples entirely, reframing valuation as an exercise in design under uncertainty, and presenting an adaptive framework for navigating fairness when benchmarks fail. We will conclude with an executive summary, distilling the argument into tools for action.
If the Federalist Papers sought to reason with fellow citizens in the storm of constitutional ambiguity, then this letter seeks to reason with fellow financial leaders in the turbulence of valuation relativity. Multiples will remain. But our method of interpreting them must evolve. For in a world of reflexive prices, distorted signals, and narrative arbitrage, fairness is no longer given—it must be earned through clarity, context, and judgment.
Part I
Multiples as Signals—The Economics of Compression and Expansion
There is a certain elegance in the way the market distills a company’s worth into a multiple—so few syllables, so much implication. But like any expression of belief, the multiple is susceptible to the context in which it is uttered. A company trading at ten times EBITDA in 2019 is not the same as one trading at ten times EBITDA in 2025. The numerator may be identical. The denominator may be the same. And yet the interpretation—the signal embedded in the multiple—has fundamentally changed.
We must first establish what a multiple is, at its core: a ratio of price to earnings, revenue, cash flow, or another economic anchor. But the ratio itself is not explanatory. It is reflective. It reflects the collective beliefs of market participants about the risk-adjusted future of those cash flows. A high multiple does not mean a company is expensive per se. It means that the market believes the future is worth pulling forward into the present at a lower discount rate. A low multiple does not necessarily imply weakness; it may suggest uncertainty, cyclicality, or fatigue in the narrative.
This makes the multiple a profoundly reflexive indicator. It is shaped not only by fundamentals, but by how those fundamentals are perceived in the current macro regime. Consider the difference in perception when a software company trades at twenty times ARR in a near-zero interest rate environment versus a 5 percent rate regime. The growth may be identical, the margins comparable. But the cost of waiting—for returns, for monetization, for payback—has changed. The time value of capital has reasserted itself, and with it, the multiple contracts.
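To make that intuition concrete, here is a minimal sketch, assuming a simple perpetuity-growth relationship in which enterprise value per unit of free cash flow equals one over the gap between the discount rate and the growth rate. The growth rate and discount rates below are illustrative assumptions, not market inputs.

```python
# Illustrative only: how a steady-cash-flow multiple compresses as the
# discount rate rises, using a simple perpetuity-growth relationship
# EV / FCF = 1 / (r - g). The rates and growth below are assumptions.

def implied_multiple(discount_rate: float, growth_rate: float) -> float:
    """Enterprise value per unit of free cash flow under perpetuity growth."""
    if discount_rate <= growth_rate:
        raise ValueError("discount rate must exceed growth rate")
    return 1.0 / (discount_rate - growth_rate)

growth = 0.03  # assumed long-run growth
for r in (0.06, 0.08, 0.10, 0.12):  # assumed costs of capital
    print(f"r = {r:.0%}  ->  implied multiple = {implied_multiple(r, growth):.1f}x")
```

Even in this toy model, moving the cost of capital from 6 percent to 10 percent cuts the implied multiple by more than half, with no change at all in the underlying cash flows.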
We call this process compression, though that term understates the violence it can inflict on enterprise value. In a rising rate environment, or one where risk tolerance recedes, the same earnings stream can fall precipitously in perceived worth. This is not a failure of the company; it is a recalibration of the discount rate. And herein lies the first lesson: multiples expand and compress not solely due to the company’s trajectory, but due to exogenous shifts in the capital pricing mechanism.
In practice, this means that companies do not live in isolation. Their multiples are tethered to broader systems—the Fed’s monetary policy, investor risk appetite, equity risk premiums, credit spreads, and the volatility index. The multiple, in this context, is a derivative—a second-order function of global liquidity and confidence. To quote it without context is akin to quoting temperature without specifying the climate.
And yet, in deal rooms across the world, multiples are routinely invoked as if they were constants—as if the twelve times paid for a target last quarter in an adjacent sector is still relevant, despite changes in debt markets, inflation forecasts, or the competitive landscape. This is the sin of static comparison in a dynamic world. To compare multiples across time without adjusting for macro compression is to misread the signal and chase shadows.
Let us also note that multiple expansion can be equally misleading. It is tempting, especially during bull markets, to interpret high multiples as validation of strategic excellence. But expansion is often driven not by improved fundamentals, but by scarcity and capital concentration. When too much capital chases too few quality assets, prices rise not because risk has diminished, but because the competition for yield has distorted it. The result is a market that bids assets up beyond what intrinsic expectations would justify, anchored in liquidity abundance rather than informational superiority.
This dynamic breeds what information theorists call signal pollution. The multiple no longer reflects differentiated insight, but a herd response to stimulus. In such moments, the prudent operator must ask: is this price a revelation, or is it a reflection? Are we observing value, or are we simply witnessing the echo of excess?
Multiples, then, are as much sociological as they are financial. They are the crystallized consensus of belief among a temporary coalition of actors—each with different incentives, constraints, and time horizons. When this consensus shifts, it does not do so gradually. It snaps, often violently. The result is not a smooth adjustment, but a regime change.
For the CFO or board member evaluating a transaction in such an environment, the challenge is clear. You must build a framework for interpreting multiples as regime-sensitive signals, rather than treating them as historical absolutes. This means asking three specific questions in every transaction discussion:
- What is the prevailing capital regime? Is money scarce or abundant? Are rates rising, falling, or plateauing? Is duration being rewarded or punished?
- What is the level of comparative scarcity for this asset type? Are buyers competing due to strategic fit, sector tailwinds, or capital overhang? Is this scarcity structural or cyclical?
- What is the volatility in valuation peer sets? Are we comparing against stable precedents, or are the comps themselves distorted by transient phenomena?
Answering these questions does not produce a definitive multiple. But it sharpens the boundaries of fairness. It allows the team to recognize that a multiple is not a fact, but a belief held under temporary conditions—conditions that may or may not hold through the life of the investment.
It also empowers the team to run scenario-based valuation ranges, stress-testing exit assumptions under multiple regimes. What does the value look like at exit under compression? How resilient is the business to the tightening of capital? How much of the purchase premium is justified by structural advantage versus narrative enthusiasm?
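As a sketch of what that stress test might look like in practice, the snippet below prices a hypothetical deal under expansion, flat, and compression exit regimes. The entry multiple, EBITDA, growth rate, and scenario multiples are all assumed figures chosen only to show the mechanics.

```python
# Illustrative stress test of exit value under different multiple regimes.
# All inputs (entry multiple, EBITDA, growth, scenarios) are assumptions.

entry_ebitda = 50.0          # $M, assumed
entry_multiple = 10.0        # multiple paid at close, assumed
hold_years = 5
ebitda_growth = 0.08         # assumed annual EBITDA growth

entry_ev = entry_ebitda * entry_multiple
exit_ebitda = entry_ebitda * (1 + ebitda_growth) ** hold_years

scenarios = {"expansion": 12.0, "flat": 10.0, "compression": 7.5}  # exit multiples

for name, exit_multiple in scenarios.items():
    exit_ev = exit_ebitda * exit_multiple
    moic = exit_ev / entry_ev  # gross, unlevered multiple on invested capital
    print(f"{name:>12}: exit EV = {exit_ev:,.0f}  MOIC = {moic:.2f}x")
```

The point is not the specific outputs but the discipline: the purchase decision is made with the compression case already on the table, not discovered after close.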
Too often, these questions are deferred until post-close, when the ink is dry and the capital deployed. But by then, the entropy has already entered the system. A multiple, once paid, becomes an expectation. And expectations, when unmet, exact their price—not only in financial terms, but in trust and reputation.
In the next section, we will explore how the multiple itself is used not merely as a signal of belief, but as a strategic instrument in negotiation. We will examine how buyers and sellers deploy comps, anchor expectations, and construct fairness not as equilibrium, but as the outcome of a dynamic game—one where perception often matters more than truth.
Part II
Game Theory at the Deal Table—Anchoring, Signaling, and the Strategy of Fairness
In the quiet intensity of an M&A negotiation, where each side enters with its own spreadsheet, its own model, its own priors, the multiple becomes the axis around which belief and power orbit. It is introduced not as a question, but as a posture: “We believe the business is worth eight times.” Yet no one truly believes the multiple is fixed. It is the opening signal in a dynamic game, where value is not discovered but constructed—through rhetoric, precedent, comparison, and pressure.
To understand this dynamic, we must begin with the concept of anchoring. Behavioral economists have long noted that the first number placed on the table, even if arbitrary, exerts gravitational force on subsequent negotiation. This is no less true in corporate transactions. A buyer may present a proposal at seven times EBITDA, not because that is their final number, but because they know it will pull the counterparty’s expectations into a narrower range. Anchors, once placed, compress the zone of possibility. They frame the conversation, and in so doing, they manipulate what appears to be fair.
The countermeasure to anchoring is reference. Sellers come prepared with comparables—recent deals in adjacent sectors that traded at higher multiples. “Company X sold at eleven times,” they say, invoking a sense of injustice, of undervaluation, of misalignment with precedent. But these references, too, are chosen strategically. Rarely do sellers cite deals at lower multiples, even when they are relevant. And so the comp set becomes an engineered narrative, not a neutral sampling. It is selection bias masquerading as objectivity.
This interplay between anchor and reference is the first layer of the game. The second is signal distortion. Buyers and sellers both know they are being watched—by investors, by the market, by advisors. They act accordingly. A buyer may offer an inflated multiple in early stages to deter competition, signaling aggression. A seller may reject a market-clearing bid to signal confidence, even if internal forecasts suggest softness. In both cases, the actions are not rooted in intrinsic belief but in strategic signaling to external observers.
What complicates this further is the presence of third parties—advisors, sponsors, board members—each with their own incentives and time horizons. An investment banker, compensated on transaction size, may nudge for a higher multiple not because it is sustainable, but because it is monetizable. A private equity sponsor nearing the end of a fund may accept a lower multiple with cleaner terms, preferring certainty of close. The table becomes multi-dimensional, the game multi-player.
And so, the concept of fairness becomes slippery. Each party claims to seek a fair deal. But fairness, in negotiation, is often a placeholder for “what advances my position without reputational damage.” It is a concept wielded with moral tone but strategic intent. Buyers argue they are offering a fair premium. Sellers argue they are asking for a fair market rate. Both anchor their claims in reference points shaped by selective data and narrative design.
This is not cynical; it is inevitable. In any transaction, information asymmetry is a structural condition. Buyers know less about the internal quality of earnings, customer churn, and future trajectory. Sellers know less about the buyer’s internal investment hurdles, integration capacity, or intent. Each side constructs its version of the truth, and then negotiates whose version prevails. The multiple is merely the surface expression of this contest.
Yet the contest is not random. It follows predictable patterns, drawn from game theory. Consider the sequential game structure: sellers open the market, buyers signal interest, a subset conducts diligence, offers are refined, exclusivity is granted. At each stage, actions are observed, interpreted, and responded to. A dropped data room question may suggest hesitation. A delayed meeting may imply softening appetite. Each move is both signal and test.
Moreover, the game is played under incomplete information, but within a repeated structure. Many of the actors—especially institutional buyers and sell-side advisors—play this game repeatedly. Reputation becomes a strategy. A buyer known for retrading on diligence risks being frozen out. A seller known for overpromising in management presentations may lose credibility. This introduces the concept of equilibrium: deals tend to close when the parties’ reputations, expectations, and incentives align in a temporarily stable configuration.
This is why seasoned negotiators spend as much time reading the pattern of behavior as the content of offers. The way a multiple is presented—the timing, the confidence, the caveats—tells more than the number itself. Is the buyer anchoring early or late? Is the multiple dependent on debt assumptions that may not survive market volatility? Is it backed by internal approvals or subject to committee dynamics? These are not footnotes. They are the game.
And yet, amid all this, the financial leader must retain epistemic clarity. The multiple is not a verdict. It is an opening bid in a probabilistic system. It should be evaluated not only for its stated price, but for its latent signal: what does it imply about the buyer’s view of the business? What assumptions are embedded in the earnings normalization? What risks are priced in, and which are ignored?
The most effective CFOs I have worked with use the multiple not as a scorecard, but as a map of belief. They dissect it, reverse-engineer it, and compare it not to past comps, but to internal projections under different buyer scenarios. They do not ask, “Is this number fair?” They ask, “What must be true for this number to be justified?” And then they test that truth with diligence, logic, and experience.
Finally, fairness must be redefined. Not as a midpoint between opposing poles, but as an outcome of transparent process, shared assumptions, and post-close coherence. A fair deal is one in which the multiple reflects the risk, the opportunity, and the execution path agreed upon—not merely the best-case narrative, but the realistic synthesis of both sides’ vision.
In the next part, we will turn to the fragility of comparables themselves. For if the multiple is the signal, and the negotiation the game, then the comps are the field notes—the historical references from which all fairness claims derive. But what if those notes are blurred, biased, or broken by context? What if the very foundation of comparability is cracking under the pressure of temporal distortion and data decay?
Part III
Entropy in the Market—The Epistemic Limits of Comps and the Art of Adjusted Truth
Every multiple rests upon a comparative truth. A transaction is fair, we are told, because it is in line with precedent. The logic is seductively simple: if a similar company sold at eleven times EBITDA, then our target, with slightly stronger growth but slightly higher churn, should fetch ten. These precedents are presented with the confidence of a courtroom exhibit—dates, metrics, summaries, valuation. But what is rarely acknowledged is that this body of comps, so often treated as data, is in fact a narrative of curated approximations—selected, sanitized, and stripped of uncertainty.
The problem is not the comps themselves, but what we ask them to do. We ask them to stand in for direct knowledge, to anchor fairness in a market too complex to observe in totality. But this is a dangerous substitution. In reality, comps are plagued by three forms of entropy: temporal decay, structural incomparability, and informational incompleteness.
First, consider temporal decay. A deal completed eighteen months ago may feel recent, but in capital markets it belongs to another era. Rates may have been lower, risk appetite higher, competitive pressure more acute. The multiple paid then, even if rational in its moment, may now be a distorted echo. To assume continuity in valuation logic is to ignore regime shifts—in monetary policy, sector outlooks, and capital availability. When capital is abundant, buyers stretch to control strategic assets. When capital tightens, prudence returns, and multiples fall not because businesses have worsened, but because time has grown more expensive.
Second, structural incomparability corrodes the integrity of most comp sets. No two companies are truly alike. They differ in customer mix, geographic exposure, contractual stickiness, team capability, and operational leverage. Even within the same sector, metrics conceal as much as they reveal. One business may have trailing twelve-month EBITDA boosted by a one-time contract; another may be masking churn with aggressive discounting. These differences are not footnotes. They are structural. And yet they are often adjusted away by analysts in the name of normalization—a process which, though well-intended, frequently introduces a false sense of equivalence.
Third, and most insidiously, is informational incompleteness. Public transactions disclose terms, but not the invisible forces behind them. We do not see the buyer’s internal model, its integration strategy, its assumptions around customer retention or synergy realization. We do not see the conversations that led to the price, the emotional weight of founder legacy, the presence of side agreements or deferred considerations. Each comp is therefore not a clean datapoint, but a partial revelation—contaminated by opacity.
From this we can infer that comps, while helpful, must be treated as probabilistic guides, not determinative facts. To use them responsibly is to adjust for entropy, to acknowledge what is unknown, and to resist the temptation of precision. This means building ranges, not points; scenarios, not claims. It also means overlaying each comp with contextual modifiers: What was the macro backdrop? Who were the buyers? What was the quality of the asset? Were there strategic bidding dynamics at play?
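One way to practice that restraint is to carry each comp as an adjusted range rather than a point estimate. The sketch below is purely hypothetical: the comps, the macro and structural modifiers, and their magnitudes are assumptions meant to show the mechanics, not real transactions.

```python
# Hypothetical example of treating comps as context-adjusted ranges
# rather than point estimates. Comps and modifiers are illustrative.

comps = [
    # observed multiple, plus assumed adjustments for macro regime and
    # structural comparability (synergies, quality, deal dynamics)
    {"name": "Deal A (2021, strategic buyer)", "multiple": 13.0, "macro": -1.5, "structure": -1.0},
    {"name": "Deal B (2023, sponsor)",         "multiple": 9.5,  "macro":  0.0, "structure": +0.5},
    {"name": "Deal C (2024, distressed)",      "multiple": 7.0,  "macro":  0.0, "structure": +1.5},
]

adjusted = [c["multiple"] + c["macro"] + c["structure"] for c in comps]
low, high = min(adjusted), max(adjusted)
midpoint = sum(adjusted) / len(adjusted)

for c, a in zip(comps, adjusted):
    print(f"{c['name']:<32} observed {c['multiple']:>4.1f}x -> adjusted {a:>4.1f}x")
print(f"\nIndicative range: {low:.1f}x to {high:.1f}x (midpoint {midpoint:.1f}x)")
```

The output is not an answer; it is a defensible band, with every adjustment visible and open to challenge.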
Indeed, some of the most widely cited comps are outliers—driven by FOMO bidding, distressed sellers, or narrative distortions. But because they are public, they become reference points. This is the danger of availability bias—we overweight what is visible, even if it is atypical. A single high-multiple deal in a frothy market can inflate seller expectations across a sector. And a low-multiple, distressed transaction can depress them. In both cases, the comp does not illuminate reality—it distorts it.
This is where Bayesian logic must enter the room. As sellers, as advisors, as decision-makers, we must treat comps not as authoritative priors, but as imperfect inputs—to be updated as we receive new information. Each new data point, each new signal from the market, should revise our posterior estimate of value. The mistake is not in being wrong; it is in failing to update. Fairness, in this view, is not a fixed state, but a dynamic estimate, constantly refined.
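Here is a minimal illustration of that updating logic, assuming a normal prior over the fair multiple and normally distributed market signals such as indicative bids. Every number below is an assumption chosen for illustration.

```python
# Minimal Bayesian update of a belief about a fair multiple.
# A normal prior is combined with normally distributed market signals
# (a standard conjugate update). All figures are assumptions.

def update(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior mean and variance after observing one noisy signal."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

mean, var = 10.0, 2.0 ** 2   # prior: comps suggest roughly 10x, but noisily

signals = [
    (9.0, 1.5 ** 2),   # indicative bid, moderately informative
    (8.5, 1.0 ** 2),   # second bid after diligence, more informative
]

for obs_mean, obs_var in signals:
    mean, var = update(mean, var, obs_mean, obs_var)
    print(f"updated estimate: {mean:.2f}x  (std dev {var ** 0.5:.2f})")
```

Each signal shifts the mean and narrows the distribution. The estimate is never final, only progressively better calibrated.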
What, then, is the art of adjusted truth? It is the practice of triangulating between comps, internal models, and buyer-specific dynamics. It is understanding that value is path-dependent—that the same company may be worth different multiples to different buyers, at different times, with different capital structures. It is refusing to conflate market-clearing price with intrinsic worth, or to let the past dictate the present without interpretation.
I recall a transaction, not long ago, in the industrial tech space. The company had strong EBITDA margins, low churn, and a defensible moat. Seller expectations were anchored at fourteen times EBITDA, based on a prior transaction in the same sector. But that comp, upon closer analysis, had involved a strategic acquirer with overlapping infrastructure, allowing for substantial cost synergies. The current buyer was a financial sponsor with no such advantages. When we normalized for this delta—factoring in integration benefit, capital cost, and exit optionality—the fair range narrowed to ten to eleven. Seller frustration gave way to understanding, not because the number changed, but because the model of truth was made transparent.
And that is the lesson. Multiples are not untruths. But they are partial truths, often decontextualized and misapplied. The financial leader’s task is not to recite comps, but to deconstruct them—to understand what they hide, what they suggest, and what must be true for them to apply.
In the next and final part of the main letter, we will move beyond the multiple entirely. We will ask: what if fairness is not found in the past, but designed in the present? What if we reframe valuation not as a contest over comparables, but as a collaborative act of system design under uncertainty—balancing trade-offs, optimizing for intent, and building resilience into the structure of the transaction itself?
Part IV
From Multiples to Meaning—Designing Value Under Uncertainty
The failure of most valuation conversations is not that they rely on flawed data. It is that they seek certainty where only coherence is possible. Multiples, for all their ubiquity, offer the illusion of objectivity. They imply that value is observable, measurable, comparable. But value, in truth, is emergent. It arises from the interplay of time, risk, intent, and design. The wise financial operator recognizes this not as a philosophical abstraction, but as a call to pragmatic architecture.
Let us begin with the premise that every transaction is an expression of system design under conditions of partial information. The seller seeks to optimize liquidity, risk transfer, and legacy. The buyer seeks to optimize return, control, and optionality. Both operate under constraints—some declared, others hidden. And within that design space, the parties must arrive at a valuation that is not merely acceptable, but internally consistent with the strategic purpose of the deal.
This means that value must be built, not quoted. It must be reverse-engineered from objectives. Consider a target business with moderate growth but high retention, recurring revenue, and modest capital needs. To a strategic buyer, it may represent a bolt-on for distribution density. To a financial buyer, it may be a cash-yielding platform. The same company yields two different models, and therefore two different logics of value. A single multiple would fail to express this nuance. But a designed structure—with earnouts, rollover equity, or contingent consideration—may reconcile the asymmetry.
This is the architecture of value. It is not a number. It is a portfolio of trade-offs, sequenced and layered. A seller may accept a lower headline multiple in exchange for board representation, founder equity retention, or reinvestment rights. A buyer may pay a premium to accelerate market entry or to lock out competitors. These trade-offs cannot be captured in static comps. They must be modeled as interdependent design variables, each affecting the stability and fairness of the final deal.
Such modeling draws from systems thinking. A transaction, like any adaptive system, has feedback loops. Incentive structures affect behavior. Post-close governance affects information flow. Integration speed affects revenue realization. A valuation that ignores these loops is not fair; it is incomplete. Designing for fairness requires designing for resilience—ensuring that the deal holds under plausible future states, not just under ideal assumptions.
This is where entropy returns as a governing principle. Entropy, in this context, is the likelihood that value will degrade over time due to friction, misalignment, or unforeseen volatility. A transaction that maximizes value on paper but ignores entropy in execution will disappoint. Conversely, a slightly lower valuation that embeds alignment, clarity, and feedback may endure. Fairness, then, is not the point of maximum price. It is the point of maximum coherence under constraint.
Designing for this kind of fairness requires a new language. Rather than debating whether a multiple is high or low, the question becomes: is the structure consistent with the risk transfer, with the cash flow predictability, with the buyer’s intent and capabilities? Does the earnout align incentives, or introduce noise? Does the rollover equity retain commitment, or defer risk? Are reps and warranties proportionate to integration complexity, or are they anchored in negotiation folklore?
To answer these, we must integrate insights from decision theory. Every term in a deal—valuation, indemnity caps, escrows, working capital targets—represents an embedded bet. The seller is betting on the buyer’s execution. The buyer is betting on the seller’s disclosures. The fairness of the multiple is only as strong as the calibration of these bets. And calibration requires transparency, reasoning under uncertainty, and an ability to separate belief from narrative.
I recall a transaction in which the buyer insisted on a headline multiple of eight times EBITDA. The seller had modeled nine. At first glance, the deal appeared stalled. But in unpacking the assumptions, we found that the buyer’s model assumed higher working capital leakage and no growth investment. By adjusting the structure to reflect seller-funded expansion capex and a smaller earnout tied to retention metrics, we achieved alignment. The final multiple—when reconstructed post-structure—was closer to 8.7. But more importantly, the deal held, because the structure matched the system’s realities.
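The bridge from a headline multiple to an effective multiple can be made explicit with simple arithmetic. The figures below are a hypothetical reconstruction for illustration only, not the economics of the deal described above: a probability-weighted earnout layered onto an eight-times headline lands the expected consideration near 8.7 times.

```python
# Hypothetical bridge from a headline multiple to an effective multiple once
# structural terms are layered in. All figures are illustrative assumptions.

ebitda = 20.0                  # $M normalized EBITDA, assumed
headline_multiple = 8.0        # buyer's headline offer
upfront_price = ebitda * headline_multiple

earnout_max = 20.0             # earnout tied to retention metrics, assumed
earnout_probability = 0.7      # probability-weighted view of payout, assumed
expected_earnout = earnout_max * earnout_probability

effective_price = upfront_price + expected_earnout
effective_multiple = effective_price / ebitda

print(f"headline {headline_multiple:.1f}x -> effective {effective_multiple:.2f}x on an expected-value basis")
```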
This, ultimately, is the heart of designed valuation: truth is not in the number but in the match between the structure and the system. A fair deal is not one that clears the market. It is one that survives the future. And survival, in this sense, is a function of coherence.
What does this mean for the practitioner? It means that valuation should no longer be treated as a single-point debate over multiples, but as a multi-dimensional design challenge. The CFO or board must move beyond arguing comps and into modeling conditional futures—states of the world where different deal structures yield different realized outcomes. Sensitivity analysis, scenario modeling, and entropy mapping become core tools in constructing not just what a business is worth today, but what it is likely to be worth under whom, under what structure, and over what time horizon.
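A rough sketch of such conditional-futures modeling appears below. The deal structures, future states, probabilities, and payoffs are placeholders chosen only to show the shape of the analysis, not a recommendation.

```python
# Illustrative scenario grid: how different deal structures play out for a
# seller under different future states. All structures, states, and payoffs
# are assumed values for demonstration.

payoffs = {  # realized value to the seller ($M) by structure and state
    "all cash at 9.0x":               {"expansion": 180, "base": 180, "compression": 180},
    "8.0x cash + retention earnout":  {"expansion": 200, "base": 190, "compression": 150},
    "7.5x cash + rollover equity":    {"expansion": 220, "base": 175, "compression": 125},
}

state_probs = {"expansion": 0.25, "base": 0.50, "compression": 0.25}  # assumed

for structure, outcomes in payoffs.items():
    expected = sum(state_probs[s] * v for s, v in outcomes.items())
    worst = min(outcomes.values())
    print(f"{structure:<32} expected {expected:6.1f}  worst case {worst:6.1f}")
```

The grid does not crown a winner. It makes the trade-off between expected value and downside resilience explicit, which is precisely the conversation the headline multiple alone suppresses.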
It also means that we must shift our ethical lens. Fairness is not a moral tone layered atop price. It is an expression of truth-seeking behavior in a noisy environment. It is revealed in the transparency of assumptions, the symmetry of information, the integrity of process. It is a virtue of design, not a defense of precedent.
As we turn now to the executive summary, we will distill this journey, not to resolve every issue the multiple raises, but to recenter the valuation conversation around what matters: resilience, adaptability, and aligned expectations. For in a world where prices float, narratives shift, and entropy rises, fairness cannot be claimed. It must be constructed carefully, humbly, and with the full weight of judgment.
Executive Summary
Transaction Multiples: What’s Fair in Today’s Market?
We began with a question framed in the language of precision—“What’s fair?”—but arrived in a landscape governed by context, constraint, and change. The transaction multiple, for all its brevity, is among the most misunderstood constructs in finance. It is treated as a datum, but behaves like a signal. It is treated as a benchmark, but operates as a belief system. And so the question of fairness cannot be answered numerically. It must be answered structurally.
In the present market—defined by inflation after decades of disinflation, by volatility after a cycle of low-beta complacency, and by a cost of capital no longer set to zero—the meaning of a “fair multiple” is no longer recoverable from precedent alone. Comps have decayed. Macroeconomic gravity has shifted. And capital, though still abundant, is no longer cheap. The old frameworks are insufficient. What we require now is a new design language.
From this inquiry, several core conclusions emerge:
First, a multiple is not an intrinsic truth. It is a conditional expression of belief—about future growth, perceived risk, and the cost of waiting. In periods of capital compression, these beliefs contract. A ten times multiple paid under zero interest conditions does not translate linearly into a fair value under a five percent hurdle. Fairness, then, is not what was. It is what now makes sense given the opportunity cost of capital.
Second, the multiple is not passively observed; it is actively constructed. In negotiation, it serves as both anchor and signal. Sellers invoke precedents; buyers invoke prudence. The final number reflects not only economic truth, but negotiation strength, information asymmetry, and game theory mechanics. Fairness, therefore, cannot be isolated from process. It is not a point on a graph. It is the residue of disciplined discourse.
Third, comps—the foundation of fairness claims—are data of degraded fidelity. They are frequently unadjusted for integration benefits, capital structures, and macro context. Their visibility does not guarantee their relevance. To use comps as scaffolding for fairness is to build on sediment. The intelligent operator treats them as reference points, not as railings, and builds ranges with reason, not anchors with assertion.
Fourth, and most importantly, value is not located in the multiple. It is located in the design of the transaction. A fair deal is one where structure reflects reality: where earnouts align incentives, where capital risk is priced, where post-close governance matches buyer intent. Fairness is not where the number lands. It is where the system holds. And systems do not hold because of precedent. They hold because of coherence.
As practitioners, this demands a shift. We must stop asking whether a deal price matches the last transaction. We must ask whether it survives the future. Can this buyer execute? Can this seller deliver? Are the mechanisms of protection, incentive, and capital alignment designed for entropy—or against it?
A deal at a lower multiple, but with cleaner structure and better integration, may deliver more realized value than one with a headline premium. A buyer who pays less but invests more may be fairer than one who pays more but constrains growth. In this way, fairness ceases to be a number. It becomes a system-level judgment.
And finally, fairness carries a moral weight. For the seller, it is the responsibility to engage in truth-telling: not dressing up the company, but offering clarity. For the buyer, it is the discipline to price risk honestly, without predation or posturing. For both, it is the shared acknowledgment that capital deployed today affects lives tomorrow—employees, customers, communities. Multiples may come and go, but consequences remain.
In closing, let us then answer the question, not with a number, but with a principle:
A fair transaction multiple is the one that accurately reflects the structure of belief, the calibration of risk, the coherence of design, and the probability of resilience.
It is not remembered for how well it matched the comps, but for how faithfully it captured the truth of what the business was—and what it could become.
And in that, we recover not just fairness, but financial leadership fit for complexity.
Disclaimer: This blog is intended for informational purposes only and does not constitute legal, tax, or accounting advice. You should consult your own tax advisor or counsel for advice tailored to your specific situation.
Hindol Datta is a seasoned finance executive with over 25 years of leadership experience across SaaS, cybersecurity, logistics, and digital marketing industries. He has served as CFO and VP of Finance in both public and private companies, leading $120M+ in fundraising and $150M+ in M&A transactions while driving predictive analytics and ERP transformations. Known for blending strategic foresight with operational discipline, he builds high-performing global finance organizations that enable scalable growth and data-driven decision-making.
AI-assisted insights, supplemented by 25 years of finance leadership experience.