Executives sometimes whisper it in hallways after a big miss: we trusted our instincts. I have sat in those rooms, looking at slides that explain why the launch flopped, the quarter slipped, or the big hire turned toxic. The pattern repeats across industries. A decision felt right, often powered by charisma, a heroic story, and a sample size of one. Then reality, indifferent to feelings, presented the bill.
Gut feel is not the villain. It is fast, learned from past cycles, and efficient when the world matches our mental models. The trouble begins when we drift into new terrain and pretend it is old ground. That is where (un)Common Logic earns its keep, not by being fancy, but by insisting on first-order facts, base rates, and simple math that many professionals skip because they seem obvious. Obvious, it turns out, is not common.
I call it (un)Common Logic because the habits are straightforward, almost boring, yet rarely practiced with discipline. A ten-minute back-of-the-envelope, a quick search for base rates, a small test before a big bet, a premortem that actually names ways we could fail, a confidence interval rather than a point guess. Every one of those moves is teachable. Together, they beat gut feel across most consequential decisions.
What I mean by (un)Common Logic
(un)Common Logic is not a new framework. It is the refusal to move forward without checking the outside view, doing the simplest useful math, and writing down the assumptions you are pretending are facts. It shows up in practical moves:
- You forecast a product’s first-year revenue with a range and a data source, not a single number that aligns to the ambition slide.
- You start with base rates from analogous products or segments, then layer in your specific advantages, instead of assuming your advantage is the base case.
- You make the smallest reversible test that preserves learning, and you predefine what result counts as a signal.
- You swap opinions for odds and expected value, even if the odds are imprecise.
An easy example: a consumer app team once pitched me a plan to hit 500,000 monthly active users in six months, seeded by influencer partnerships. The spreadsheet claimed a viral coefficient of 1.2. When I asked for the source on that number, eyes drifted to the table. We pulled base rates for similar apps launched in the past two years. The median viral coefficient sat between 0.6 and 0.9. Of the rare cases above 1.0, most had celebrity fuel, a deep network effect, or a unique utility. Our plan had none. We ran a pilot with two mid-tier influencers, measured true invites-per-new-user, and found a coefficient of 0.73. With a clear reading, we redirected budget to search and direct response, improved onboarding to lift retention by five points, and still missed the original MAU target. We did, however, beat our breakeven user count by month four and kept cash for v2. Instinct pushed for a moonshot. Logic kept us solvent.
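The arithmetic behind a viral coefficient is worth seeing in full, because it explains why the claimed 1.2 deserved a source. A minimal sketch, with illustrative numbers rather than the team's actual data:

```python
def viral_growth(seed_users, k, cycles):
    """Total users after `cycles` invite rounds with viral coefficient k.

    Each new cohort recruits k users per member. With k < 1 the series
    converges toward seed_users / (1 - k); with k > 1 it compounds without
    bound, which is why a claimed k of 1.2 needed evidence.
    """
    total = cohort = seed_users
    for _ in range(cycles):
        cohort *= k          # next cohort recruited by the current one
        total += cohort
    return total

# At the measured k of 0.73, organic growth tops out near seed / (1 - 0.73),
# roughly 3.7x the seeded audience, no matter how long you wait:
capped = viral_growth(10_000, 0.73, 50)
```

No influencer budget changes that ceiling without changing k itself, which is why the pilot's measurement mattered more than the spreadsheet's assumption.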
Why gut feel deceives smart people
Decades of behavioral research and a few thousand hours in operating reviews point to the same traps. Our brains compress complexity into narratives. We remember winning plays more than near misses. We mistake ease of recall for likelihood. We get overconfident because we see the path we took, not the invisible branches that failed.
Gut is especially slippery in three patterns:
First, low-feedback environments. A trader who makes dozens of similar decisions daily can tune intuition quickly. A CEO who makes one acquisition a year gets feedback too slowly to calibrate instincts. By the time the outcome is clear, the context has shifted.
Second, novel conditions. A manufacturing VP with 20 years of lean experience trusts their sense of cycle time. Then a software-driven process lands on their floor. The mental model lags the new constraint, often digital latency rather than physical motion.
Third, asymmetric payoffs. Our minds underweight tails. Decisions with one big downside and many small wins feel attractive. A classic case is a feature that pleases the core users slightly but adds complexity debt. The short-term dopamine masks the long-term cost of slower velocity.
None of this indicts experience. Pattern recognition is powerful. The judgment to know which playbook fits, and when a problem rhymes with something you have solved, is real skill. The discipline is to treat gut as a hypothesis generator. Then let (un)Common Logic test it.
The quiet power of base rates
If you could only add one habit, choose base rates. Before believing your story, ask what usually happens. If you plan to expand into a new city, what is the median payback period for similar expansions in your industry? If you expect to lift conversion by 20 percent with a redesign, what is the typical gain from comparable efforts without changing the offer?

I once worked with a B2B SaaS firm targeting mid-market accounts. The sales VP projected a win rate of 35 percent after hiring four account executives. That number came from early wins with founder-led sales. We pulled base rates: for first-year AEs selling a product under $50k ACV into greenfield accounts, typical win rates sit between 10 and 18 percent. We adjusted the forecast to 15 percent, aligned hiring to pipeline data, and added one sales engineer. The finance team hated the lower headline, but the board thanked us later for hitting a plan built on outside reality.
Base rates do not doom ambition. They anchor it. If the base rate says 15 percent, and you claim 35, your plan must articulate the causal deltas. Maybe you have a unique distribution partner, or your product delivers a regulatory must-have. Show the mechanism. Attach numbers. Then track whether the mechanism is working.
Expected value over opinions
Executives often ask, should we do this? A better question is, what is the expected value of doing this versus the next best use of resources? Even rough expected value changes meetings. It forces you to state probabilities, define upside and downside, and include the value of information.
A retailer I advised debated whether to roll out cashierless checkout to 30 stores. The capital expense per store was about $600k. The operations lead believed shrink would drop enough to justify the investment. The CFO pushed back. We built a model with ranges. We estimated shrink reduction at 10 to 30 percent based on published case studies and our own pilots. We priced labor savings conservatively. We included the risk of customer abandonment due to setup friction in the first three months, then modeled recovery. The base-case expected payback stretched to 42 months, with a probability-weighted downside that looked ugly if customer complaints spiked above a certain threshold.
The pivot was not to scrap the idea. It was to run three stores as high-fidelity experiments with pre-registered customers to control early friction, measure shrink reduction precisely, and collect NPS. We set stopping rules for negative signals. Within eight weeks, we observed a 24 percent shrink drop and a net neutral customer sentiment once onboarding guides improved. Only then did the rollout proceed. Expected value thinking gave us a decision tree, not a yes-no fight.
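The probability-weighted payback behind a decision like this fits in a few lines. The capital expense comes from the example; the scenario probabilities and savings figures below are invented stand-ins for the retailer's actual ranges:

```python
CAPEX_PER_STORE = 600_000  # per-store capital expense, from the example above

# (probability, annual net savings) per scenario: shrink reduction plus labor
# savings, net of early customer-friction risk. Figures are illustrative.
scenarios = [
    (0.30, 120_000),  # downside: low shrink reduction, complaints spike
    (0.50, 180_000),  # base case
    (0.20, 260_000),  # upside: the pilots' best results hold at scale
]

expected_savings = sum(p * s for p, s in scenarios)        # ~178,000 / year
payback_months = CAPEX_PER_STORE / expected_savings * 12   # ~40 months
```

The point of writing it this way is that the stopping rules attach directly to the downside row: if early signals say you are living in that scenario, you already know the payback math does not work.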
Experiments that are small, clean, and cheap
The business world talks a lot about testing. The failure mode I see most is half-tests. Teams change multiple variables at once, or they run trials that cannot be compared cleanly to any control. A weak test gives you optics, not learning.
A clean experiment need not be complex. It needs three things: a clear hypothesis, a metric that matters, and a precommitment to how you will interpret the result. I worked with a subscription service that wanted to add a premium tier with a concierge hotline. The product lead argued that high-value customers were asking for it. Customer interviews did include that request. But interviews over-sample the vocal. We set up a two-week test with a subset of visitors who saw the premium offer, biased toward segments most likely to convert. The hypothesis: the premium attachment rate would hit 12 to 18 percent without lifting churn in the core plan. The measured result was 8 percent attachment and a 1.3 point increase in month-one churn for the base plan, likely due to choice overload. The logic path said postpone and reframe. The team returned with a simpler offer and a clearer upgrade path at month three. The second round hit 14 percent attachment with stable churn.
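The precommitment in that test can literally be written down before the results arrive. A sketch, with the thresholds taken from the hypothesis above:

```python
def interpret_premium_test(attach_rate, churn_delta_points):
    """Decision rule committed to before the two-week test ran.

    attach_rate: fraction of exposed visitors taking the premium tier.
    churn_delta_points: change in month-one core-plan churn, in points.
    """
    if attach_rate >= 0.12 and churn_delta_points <= 0.0:
        return "proceed"
    if attach_rate >= 0.12:
        return "investigate churn before scaling"
    return "postpone and reframe"

# First round: 8 percent attachment, +1.3 points of base-plan churn.
first_round = interpret_premium_test(0.08, 1.3)    # "postpone and reframe"
# Second round: 14 percent attachment, stable churn.
second_round = interpret_premium_test(0.14, 0.0)   # "proceed"
```

Writing the rule first removes the temptation to reinterpret a weak result as a soft win after the fact.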
A test that says not yet is not a loss. It is a release of capital back to the portfolio.
Ask disconfirming questions
Leaders often spend 90 percent of time on why an idea will work and spare 10 percent for risks. Reverse it. Make colleagues argue against their own proposals. Run a premortem: imagine the launch failed six months from now, then list the most plausible reasons. Name the tripwires that would warn you early. Write them into the plan as observable conditions.
This is not pessimism. It is an investment in learning speed. When a logistics firm I advised moved to a new warehouse management system, the project plan assumed a smooth cutover in four weeks. Our premortem identified two high-probability failure modes: delayed label printing during peak and slotting errors after backfill. We installed a shadow print service and scoped a manual override procedure for slotting. Peak week came, the main print queue hiccuped, and the shadow kept the line running. The project landed close to schedule. Without the premortem, we would have had an all-hands fire drill.
When instinct deserves a vote
Instinct is not the enemy of reason. It is information compressed into feelings after many cycles of exposure. In some cases, gut can go first, provided you make its role explicit and understand the cost of being wrong.
Use instinct as a lead input when the following are true:
- The domain provides fast, repeated feedback with clear outcomes, and you have lived through hundreds of cycles.
- The decision is time critical with limited scope, and delay increases downside more than a wrong call would.
- The stakes are primarily about values or brand tone, where quantified trade-offs miss the point.
- The environment is stable enough that your past patterns map to the present case.
Even in these cases, capture the reasoning in a short note. Over time, review how well those instinct-driven calls perform. If you learn that certain patterns of confidence correlate with misses, you can retrain the gut.
Finance is not the only math that matters
I have seen brilliant operators do precise financial math and skip the simpler, noisier math that would have saved them. Two underused tools deserve more airtime.
First, Fermi estimates. These are back-of-the-envelope calculations that aim for the right order of magnitude, not decimal-place precision. If your marketing head proposes a content strategy that depends on organic traffic, ask for a Fermi estimate of search volume, click-through, and conversion to determine whether the effort can move the needle. Maybe you need 20,000 incremental visits per month to justify the team. A quick scan of top keywords shows a realistic ceiling of 6,000. Better to pivot now than discover it in quarter three.
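The content-strategy example reduces to one division and one comparison. A sketch with the paragraph's numbers (the signup target and 2 percent conversion rate are assumptions for illustration):

```python
def visits_needed(target_conversions, conversion_rate):
    """Fermi estimate: monthly visits required to hit a conversion target."""
    return target_conversions / conversion_rate

# Suppose the team needs 400 signups/month and converts visits at ~2 percent:
needed = visits_needed(400, 0.02)   # 20,000 visits/month
seo_ceiling = 6_000                 # realistic ceiling from a keyword scan
worth_building = seo_ceiling >= needed
```

Two minutes of this arithmetic settles a debate that could otherwise run for a quarter.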
Second, ranges and sensitivity. Most plans present a single-point forecast. Reality lives in ranges. When you specify a forecast as a 90 percent interval, your brain asks different questions. What has to be true for us to be at the top of the range? What breaks if we are at the bottom? Simple two-way sensitivity tables often reveal that the result is dominated by one or two variables. That directs attention to where to reduce variance.
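A two-way sensitivity table is a dozen lines of code. The toy model below (profit as units times margin minus a fixed cost, with invented figures) shows how quickly one variable's dominance becomes visible:

```python
FIXED_COST = 500_000
UNIT_COST = 30

def profit(units, price):
    """Toy profit model used only to illustrate a sensitivity sweep."""
    return units * (price - UNIT_COST) - FIXED_COST

unit_scenarios = [40_000, 50_000, 60_000]   # low / mid / high volume
price_scenarios = [38, 42, 46]              # low / mid / high price

# Rows are unit scenarios, columns are price scenarios.
table = [[profit(u, p) for p in price_scenarios] for u in unit_scenarios]
```

Scanning the corners tells the story: in this toy model the worst cell loses 180k and the best clears 460k, and moving along the price axis swings the result twice as far as moving along the volume axis. That is where you spend your variance-reduction effort.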
A hard lesson on hiring
Hiring showcases the tension between gut and logic. Many leaders trust their sense of a candidate after the first 15 minutes. Some do not write structured rubrics because they believe they can read people. Sometimes they can. Often they read people who remind them of successful colleagues, or they get seduced by confidence.
I inherited a team nursing the aftermath of a senior marketing hire who dazzled in interviews. The leader had big-agency references and a strong portfolio. Six months in, pipeline quality deteriorated, and relations with sales soured. Postmortem time. Our process had lacked work samples, did not test collaboration under friction, and overweighted references from contexts nothing like ours. We rebuilt the loop with a paid project, a live working session with sales, and a rubric aligned to explicit must-haves. The next hire was quieter, had two stumbles in the live session, and took feedback well. Pipeline improved within two quarters. The difference was not luck. It was moving from stories about talent to observable, job-relevant evidence. That is (un)Common Logic applied to people decisions.
Why people resist logic even when they know better
Knowledge does not equal behavior. Time pressure, politics, and ego collide with good habits. A product head has to present a bold plan to win a budget. A founder falls in love with their own vision. A division leader fears appearing cautious next to a rival championing a big bet. Logic feels like sand in the gears.
You cannot shame people into discipline. What works is institutional design. Tie promotions and praise to forecast accuracy and learning speed, not to theatrics. Reward teams that change course early when the evidence shifts. Normalize ranges and standard errors in KPI reviews. Put red team roles on big decisions and rotate the job so it becomes a badge of honor, not a career tax.
A sales org I worked with added a simple, visible metric to quarterly business reviews: calibration score. It measured how close the team’s 90 percent confidence intervals landed to actuals. Early rounds looked ridiculous. People called 90 percent intervals that hit 50 percent of the time. Over three quarters, intervals widened, then narrowed as skills improved. Reps who calibrated best tended to outperform on quota later. The firm started coaching around calibration explicitly. The culture shifted, slowly but materially.
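A calibration score of the kind that sales org tracked is simple to compute: the fraction of stated 90 percent intervals that actually contained the outcome. A minimal sketch, with invented forecasts:

```python
def calibration_score(intervals, actuals):
    """Fraction of (low, high) intervals that contained the actual outcome.

    Honest 90 percent intervals should score near 0.90; the early rounds
    described above scored near 0.50, a textbook sign of overconfidence.
    """
    hits = sum(low <= actual <= high
               for (low, high), actual in zip(intervals, actuals))
    return hits / len(actuals)

# Four quarterly forecasts (illustrative) and what actually happened:
intervals = [(900, 1_100), (400, 600), (70, 90), (1_800, 2_400)]
actuals = [1_050, 720, 82, 2_100]
score = calibration_score(intervals, actuals)   # 3 of 4 hit: 0.75
```

Publishing the score in the business review is what did the work; the computation itself takes seconds.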
The 5-minute (un)Common Logic check
When a decision crosses your desk, and you feel the urge to go with your gut, pull out this short filter. It is not a substitute for deep analysis. It is a speed bump that catches the worst errors.
- Write the decision in one sentence and the dollar or time cost of being wrong.
- Identify a base rate from analogous cases, with at least one external source.
- Do a Fermi estimate that supports or challenges the headline promise.
- Frame an expected value with rough probabilities and a 90 percent range of outcomes.
- Name one disconfirming test you can run quickly, and a stopping rule for bad signals.
If you cannot do this in five minutes, the decision is either too trivial to merit attention or complex enough to demand a proper workup. Both answers are useful.
Edge cases worth respect
There are real situations where logic can mislead if applied blindly.
First, measurement myopia. You optimize a metric because it is easy to measure, not because it is the thing you truly value. A news site might chase click-through at the expense of trust. A hospital might reduce length of stay and worsen outcomes. Logic requires wisdom about which numbers matter.
Second, unknown unknowns. If you step into a genuinely new domain, base rates can be misleading. The trick is not to discard them, but to acknowledge their fragility and design experiments that surface the unknowns early, then rebuild your base rates once you have real data.
Third, correlated risks. Expected value math assumes independence more often than reality provides. A portfolio of bets that all hinge on consumer credit holding up is not diversified. Map shared failure modes and stress test them.
Fourth, adversarial contexts. In security, fraud, or negotiation, your opponent adapts to your test. Static logic under-reacts. Use game-theoretic thinking, inject randomness where sensible, and avoid revealing thresholds that can be gamed.
These are not excuses to return to feel. They are prompts to refine your logic, choose better metrics, and widen your lens.
How to make (un)Common Logic routine
Culture eats tools. If you want sustained discipline, bake it into recurring rituals. In product reviews, ask for outside-view anchors first, then inside-view advantages. In quarterly planning, require 90 percent intervals on key outputs and track calibration over time. In hiring, insist on work samples and rubric-based scoring with at least one interviewer tasked to argue the no case. In postmortems, spend most of the time on what you would change in the decision process, not the outcome alone.
Keep the bar human. You do not need Monte Carlo simulations to improve. A shared language around base rates, expected value, and testable hypotheses already moves the needle. Over time, as teams get comfortable, you can layer in more sophistication.
I often recommend a short decision journal for leaders. It takes five minutes per major call. Capture the context, your predicted range of outcomes, your confidence, the base rates you used, and the reasons you might be wrong. Review quarterly. You will notice patterns, like overconfidence in domains where the feedback is slow, or better accuracy when you talk to frontline operators. That feedback loop is priceless.
A field story with numbers
Several years ago, I advised a marketplace entering a new vertical. The CEO favored a blitz: nationwide rollout, heavy introductory discounts, a PR push. The model showed break-even in six months. We ran the 5-minute check.
First, what is the cost of being wrong? About 12 million dollars in marketing burn and time taken from the core vertical.
Second, base rates. We found that similar marketplaces entering a new vertical took nine to eighteen months to reach liquidity in their top ten cities. Discount-heavy pushes saw post-promo reversion rates of 40 to 70 percent.
Third, Fermi estimate. If our target was 100,000 monthly transactions at breakeven unit economics, and historical conversion from site visits to transacting users averaged 2 percent, we needed roughly 5 million incremental qualified visits per month. Our SEO ceiling in the category looked like 500,000. Paid could bridge some gap, but even at a generous 3 percent click-through and a reasonable CPC, the implied budget exceeded our plan by a factor of two.
Fourth, expected value. Assigning 30 percent probability to hitting liquidity within six months, 50 percent to doing so in twelve, and 20 percent to failing entirely, the expected cash burn exceeded the CEO’s stated risk appetite. The value of information from a city-by-city pilot looked far higher.
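The expected-value step can be made concrete. The outcome probabilities come from the paragraph above; the per-outcome burn figures are illustrative stand-ins, since the original model worked with ranges:

```python
# (probability, cash burn in dollars) for each outcome of the blitz.
# Probabilities are from the example; burn figures are invented for illustration.
outcomes = [
    (0.30, 6_000_000),   # liquidity within six months: planned spend only
    (0.50, 10_000_000),  # liquidity within twelve: spend runs long
    (0.20, 12_000_000),  # outright failure: full marketing burn
]

expected_burn = sum(p * burn for p, burn in outcomes)  # ~9.2M expected
```

Against a stated risk appetite well below that figure, the two-city pilot's information value wins on arithmetic alone.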
Fifth, disconfirming test. We proposed two cities with different structural conditions. One with high density and existing supply on adjacent categories, another more suburban with thinner supply. We set a rule: if either city failed to reach 60 percent of target liquidity in 90 days despite the planned spend, pause the blitz.
We launched the two-city test. The dense city hit 58 percent, with price elasticities worse than expected. The suburban city hit 35 percent and showed strong supply-side churn after discounts ended. Instead of plowing ahead, we paused, reworked the supply incentives to favor quality and retention, and introduced a subscription for high-frequency demand. Nine months later, we rolled out to eight more cities with a stronger model. We did not dominate the category. We did avoid burning tens of millions and poisoning the well with early supply churn.
Why this all feels simple, and why it is not
Everything above reads almost obvious. That is the point. (un)Common Logic is simple, teachable, and within reach of any competent team. It is also rare in practice because it requires small, repeated acts of humility. You have to admit you do not know, that your story needs an outside view, that your plan must survive contact with a test. You trade the thrill of bold certainty for the quieter satisfaction of compounding good bets.
Leaders set the tone. When the person at the head of the table asks for the base rate before the clever narrative, the room changes. When promotions reward calibration and adaptability, not just high-variance shots that occasionally score, careers adjust. Over a few cycles, the organization gets better at seeing the world as it is.
A final note on speed
Critics sometimes argue that logic slows you down. It can, if you confuse analysis with progress. The spirit of (un)Common Logic is speed with clarity. Ten minutes of grounding spares months of rework. A small clean test today avoids a massive apology tour later. The fastest path is rarely the one that skips thinking. It is the one that thinks just enough, at the right time, about the right things, then moves.
If you adopt only one habit, make it the 5-minute check. If you have more room, add base rates to every significant forecast, insist on ranges, and run a premortem for any decision that could set the company back a year. Over time, your gut will evolve too. It will learn to prefer decisions with positive expected value and to flinch when stories skip the outside view. That is the quiet victory of (un)Common Logic. It not only beats gut feel on the next call, it trains better instincts for the calls after that.