Marketing Metrics that Matter at (un)Common Logic

When someone asks which marketing metrics matter most, the tempting answer is "it depends." That is true, but it is not helpful. The right metrics form a practical system that helps you make better decisions faster. At (un)Common Logic, we treat metrics like instruments in a cockpit. Pilots do not watch everything all at once. They scan, focus on the few that guide the flight, and use the rest for diagnosis and safety. Good marketing measurement works the same way, and the instruments you choose change with the aircraft you are flying.

This article lays out the metrics that have consistently moved the needle for our clients, how we organize them by purpose, and how we adapt them to different business models. It also addresses the traps that pull teams off course, including over-relying on channel dashboards, ignoring unit economics, and treating attribution models like gospel. The thread running through everything here is commercial clarity. If a metric does not lead to a better business decision, it is probably noise.

The hierarchy of metrics: steer, confirm, diagnose

Every client engagement starts by separating metrics into three jobs. First, steering metrics that guide weekly and monthly decisions. Second, confirmation metrics that validate strategy and budget at the quarter level. Third, diagnostic metrics that identify why performance is changing within channels and campaigns. When a team argues about whether to optimize for ROAS or CAC, they are usually mixing categories.

Steering metrics are the few that tie spend to profitable growth. A paid social manager might watch click through rate and cost per click all day, but the leadership team needs to steer to cash efficiency. For ecommerce that means contribution margin, not just revenue. For SaaS and B2B, it means CAC payback and pipeline velocity, not just lead volume. For lead generation, it often means cost per qualified opportunity and revenue per lead.

Confirmation metrics answer the question: did the strategy work beyond the attribution model? Media efficiency ratio (MER), new customer growth, incrementality lifts from holdouts, product contribution after returns, and net revenue retention all live here. They do not change hour by hour, but they keep the plan honest.

Diagnostic metrics explain changes in the steering metrics. Conversion rate by device, search term match quality, creative engagement by audience, time to first response on leads, and cart abandon rate belong in this set. These metrics are detailed and subject to rapid swings, which makes them dangerous when promoted to steering status. Used properly, they support root cause analysis and rapid iteration.

A simple rule helps under pressure. Steer with three to five metrics, confirm with two or three, diagnose with as many as you need, but keep them in their lane.

What actually matters for ecommerce

Ecommerce lives and dies on unit economics. Revenue growth that erodes margin is not real growth. Margin that collapses when you scale is a warning sign. We put contribution first, then customer quality and the sustainability of acquisition.

We start by building a contribution model that includes product margin, shipping cost, packaging, payment fees, discounts, marketing spend, and expected returns. If we cannot roll those up quickly, we estimate ranges and track variance. One apparel client ran at a 4.0 platform ROAS and liked what they saw until we mapped returns by SKU and cohort. On their top three products, returns ran 30 to 45 percent. Contribution swung from slightly profitable to negative at scale. That single view changed creative, targeting, and merchandising within two weeks.
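
To make that concrete, here is a minimal sketch of the roll-up in Python. The field names and figures are illustrative, not a client's actuals, and the return handling assumes restockable goods; the point is the shape of the math, not the exact inputs.

```python
from dataclasses import dataclass

@dataclass
class OrderEconomics:
    revenue: float           # gross order revenue before discounts
    discount: float          # promo value applied to the order
    product_cost: float      # landed cost of goods
    shipping: float          # outbound shipping cost
    packaging: float         # boxes, inserts, fillers
    payment_fee_rate: float  # e.g. 0.029 for a 2.9% processor fee
    return_rate: float       # expected return rate for this SKU or cohort

def contribution_after_returns(o: OrderEconomics, marketing_cost: float) -> float:
    """Expected contribution per order, haircut by the expected return rate."""
    net_revenue = (o.revenue - o.discount) * (1 - o.return_rate)
    variable_costs = (
        o.product_cost * (1 - o.return_rate)   # assumes returned units restock cleanly
        + o.shipping + o.packaging             # spent whether or not the order comes back
        + (o.revenue - o.discount) * o.payment_fee_rate
    )
    return net_revenue - variable_costs - marketing_cost

# Illustrative: a "4.0 platform ROAS" order that turns negative at a 40% return rate.
order = OrderEconomics(120.0, 10.0, 45.0, 9.0, 3.0, 0.029, 0.40)
print(round(contribution_after_returns(order, marketing_cost=30.0), 2))  # -6.19
```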

The steering set for ecommerce typically includes contribution margin after returns, blended MER, and new customer share of revenue. We rarely steer to channel ROAS alone, because channel ROAS tends to look best when the algorithm serves frequent buyers. That is comfort food for platforms, but it starves new customer acquisition. If the new customer mix falls below roughly 55 to 70 percent for growth brands, the top line can look fine while the customer file stagnates. Mature brands with strong repeat economics can run lower new customer mix, but they know what they are doing and track file health separately.

Average order value and conversion rate are classic diagnostic metrics. They deserve attention, but they should not become steering metrics unless you have a known mechanical lever in play, such as a price test or a checkout change. Otherwise, they tend to bounce with mix shifts and promotion cadence, and can lead you to overcorrect.

On attribution, we pair last click with post purchase surveys and periodic geo holdouts. A practical example: a home goods merchant saw Meta claiming 60 percent of conversions while Google Analytics gave Meta credit for 20 percent. Rather than argue, we ran four geo lift tests over eight weeks. Incremental lift came in between 18 and 22 percent, which lined up with our blended MER. We used Meta for reach and creative learning, tightened the audiences, and adjusted budgets to hit a stable MER target. The point was not to crown a winner, it was to align spend with observed incrementality.

What actually matters for SaaS and B2B

Ecommerce can pay back within days. SaaS and B2B run on longer cycles, complex funnels, and a high risk of vanity metrics. MQLs, webinar attendees, and free trials feel productive but can hide a weak pipeline. We default to revenue aligned stages and time based efficiency.

Marketing qualified lead is not useless, but it is not a steering metric. Sales qualified opportunity, conversion to close, pipeline value created, and win rate by segment do the real work. Time to first response and meeting held rate belong next to them. If speed to lead is slow by even 30 minutes in certain geographies or time blocks, the cost of every upstream click rises silently.

CAC payback is the North Star for many SaaS companies. The math is simple enough, but the inputs can get muddy. Gross margin, churn assumptions, and discounting patterns matter. If your payback calculation assumes 85 percent gross margin but your expansion motion gives heavy discounts that bring realized margin to 75 percent, your 10 month payback may actually be 13 to 15 months. We pressure test these numbers by slicing by segment, channel, and offer. The gaps are often largest in partner sourced deals and special promotions.
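
A simplified version of the payback math, with illustrative numbers, shows how fast a margin and discounting gap stretches the timeline. This is a sketch, not a full model; real inputs should come from finance, and churn assumptions push payback out further still.

```python
def cac_payback_months(cac: float, monthly_arpa: float,
                       gross_margin: float, discount_rate: float = 0.0) -> float:
    """Months of gross margin dollars needed to recover acquisition cost."""
    realized_arpa = monthly_arpa * (1 - discount_rate)
    return cac / (realized_arpa * gross_margin)

cac, arpa = 8_500.0, 1_000.0
print(round(cac_payback_months(cac, arpa, gross_margin=0.85), 1))         # 10.0, the plan
print(round(cac_payback_months(cac, arpa, 0.75, discount_rate=0.12), 1))  # 12.9, realized
```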

Pipeline velocity turns strategy into time. Multiply number of opportunities by average deal value, then by win rate, and divide by average sales cycle length. When velocity stalls, we can tell whether to fix lead qualification, tighten targeting, or repair a handoff. One enterprise software client doubled webinar spend after a spike in registrants. Velocity barely moved. The fix was not more traffic, it was training on discovery and enforcing a two day SLA for follow up. Within a quarter, velocity improved 28 percent and CAC fell 18 percent without adding budget.
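
In code the formula is a one liner, which is part of its appeal. The numbers below are illustrative, not the client's actuals, but they show how a shorter cycle and a modest win rate gain compound:

```python
def pipeline_velocity(opportunities: int, avg_deal_value: float,
                      win_rate: float, cycle_length_days: float) -> float:
    """Expected revenue per day flowing through the pipeline."""
    return (opportunities * avg_deal_value * win_rate) / cycle_length_days

# Illustrative: faster follow-up shortens the cycle and nudges win rate.
print(round(pipeline_velocity(120, 45_000.0, 0.22, 95)))  # baseline: ~12,505 per day
print(round(pipeline_velocity(120, 45_000.0, 0.25, 80)))  # after the SLA fix: ~16,875 per day
```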

Attribution in SaaS is even trickier than ecommerce. Most of the influential touches are invisible to standard pixel tracking, including community, analysts, and peer channels. We pair self reported attribution at the opportunity level with modeled attribution for paid channels and measure success by sourced and influenced pipeline, not by click based ROAS. We also borrow techniques from marketing mix modeling (MMM), but only when the data is mature enough to support it. Until then we use simpler directional methods like pre post analysis around territory level budget changes.

What actually matters for lead generation

Lead gen can grow quickly, but without a strict definition of quality it becomes an expensive data collection exercise. Three numbers keep teams honest. First, cost per qualified lead defined by explicit criteria, not form fills. Second, speed to lead because contact decay is brutal. Third, pipeline and revenue per lead by channel.

We had a home services client spending heavily on search and aggregator leads. On paper, both produced cheap leads. After standardizing qualification and measuring time to contact across the clock, search beat aggregators by a mile. Aggregators came in hot and cheap during late hours when staffing was thin, and those leads aged out before anyone called them. Shifting budget to hours when the team could respond created a step change, not a marginal gain.

Form conversion rate, click quality, and landing page performance sit in the diagnostic layer. They uncover waste and fuel testing. We often find that a small improvement in field count or autofill has a larger economic impact than an entire creative refresh. But we keep them in their lane. The business runs on qualified leads turning into revenue.

The case for blended efficiency

Channel metrics tell you how platforms claim credit. Blended efficiency tells you how your business converts money into margin. This is why we push clients to track MER at the contribution level. It forces a tighter link between marketing and finance and sharply reduces arguments about whether a campaign is working.

Two guardrails make MER a strong steering metric. First, segment by new and returning customers where possible, because growth brands can mask weak acquisition with repeat buyers. Second, track weekly and monthly MER together. Weekly catches misfires fast. Monthly smooths out promo spikes and day of week effects.
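
The calculation itself is trivial; what matters is which revenue and which spend you feed it. A minimal sketch with illustrative numbers shows why the new customer cut matters:

```python
def mer(revenue: float, spend: float) -> float:
    """Media efficiency ratio: dollars back per marketing dollar spent."""
    return revenue / spend

# Illustrative week: a healthy-looking blended MER hiding a thin acquisition engine.
new_rev, returning_rev, spend = 180_000.0, 220_000.0, 100_000.0
print(mer(new_rev + returning_rev, spend))  # blended MER: 4.0
print(mer(new_rev, spend))                  # new customer MER: 1.8
# Contribution-level MER swaps revenue for contribution dollars, e.g. at a
# 35% contribution rate: mer(0.35 * (new_rev + returning_rev), spend) -> 1.4
```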

MER has limits. If you are sandwiched between a major holiday promotion and a stock out, MER will get noisy. That is when we drop down to the diagnostic layer and inspect channel and campaign level shifts. The last mile matters. Just do not let the diagnostic view run the business.

Building an incrementality habit

Incrementality testing is the antidote to attribution bias. You do not need a PhD in statistics to use it. You need a cadence and the discipline to change budgets based on results.

Geo holdouts or PSA tests work well for large spenders with regional coverage. Matched market tests can work for mid sized budgets. For small budgets, short creative blackout windows or account level ad pausing on specific days can still yield directional signals if you measure cleanly and repeat.
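
For the matched market case, the directional read can be as simple as scaling the test market's baseline by the control market's growth and comparing. This sketch assumes one clean test and control pair; real tests deserve multiple pairs and repeat waves:

```python
def matched_market_lift(test_pre: float, test_post: float,
                        ctrl_pre: float, ctrl_post: float) -> float:
    """Directional lift vs. a counterfactual scaled from the matched control market."""
    counterfactual = test_pre * (ctrl_post / ctrl_pre)
    return (test_post - counterfactual) / counterfactual

# Illustrative: the test market grew 30% while its matched control grew 10%.
print(round(matched_market_lift(100_000, 130_000, 80_000, 88_000), 3))  # ~0.182 lift
```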

We helped a regional DTC brand run four test waves over a quarter. The first wave showed a surprising lift from upper funnel video, but only in markets with strong email engagement. That pointed to a synergy instead of a standalone winner. We moved budget to those pairs and lifted revenue 11 percent on flat spend. Without the tests, a last click view would have killed the video early and the blended uplift would have disappeared.

The aim is not a perfect lift number. It is a habit of evidence. Over a year, even rough tests outcompete pristine anecdotes.

Leading indicators that really lead

Every team wants early signals. The trick is choosing indicators that reliably correlate with economic outcomes, and that you can influence within the next sprint.

For paid social, thumb stop rate and the percentage of impressions that reach 3 seconds can predict whether your cost per meaningful site visit will drop in the next week. For search, impression share on brand terms rarely predicts revenue, but impression share lost to rank on high intent non brand clusters often does. For email, growth in active subscribers who opened twice within 30 days gives a better forecast of repeat revenue than list size.

We collect these signals, then validate them against revenue over time. When the correlation holds, we give them a seat in weekly check ins. When it fades, we demote them. There is no shame in retiring a metric that stopped working. Markets evolve. So should your dashboard.
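
One way to run that validation is a simple lagged correlation between the weekly signal and the revenue that follows it. The series below are illustrative, and in practice we want far more than eight weeks of data before trusting the result:

```python
import numpy as np

def lagged_correlation(indicator: list, revenue: list, lag_weeks: int = 1) -> float:
    """Correlate this week's signal with revenue lag_weeks later."""
    x = np.asarray(indicator[: len(indicator) - lag_weeks])
    y = np.asarray(revenue[lag_weeks:])
    return float(np.corrcoef(x, y)[0, 1])

# Illustrative: eight weeks of thumb stop rate against next week's revenue ($k).
thumb_stop = [0.21, 0.24, 0.22, 0.27, 0.25, 0.29, 0.28, 0.31]
revenue_k  = [98, 104, 101, 112, 108, 119, 117, 126]
print(round(lagged_correlation(thumb_stop, revenue_k), 2))
```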

Getting attribution out of the driver’s seat

Attribution assigns credit. Budget should follow profit, not credit. That is why we triangulate. Last click excels at measuring harvesting. Platform modeled attribution picks up early funnel touches and cross device behavior. MMM offers a macro view when data volume and budget stability allow. Cohort analysis tells you how customer quality changes with promotions and seasonality.

We recommend a bifocal approach. Use last click for daily pacing and tactical adjustments. Use blended MER and incrementality tests to shape monthly budgets. Layer in MMM or a lightweight regression once you have a year or more of reasonably stable spend and outcomes. When the different views disagree, investigate; do not average them blindly.

A practical example helps. A CPG subscription brand saw TikTok’s modeled attribution claiming a 1.8 ROAS where last click showed 0.5. Cohort LTV on TikTok acquired customers came in 20 percent higher than the average, and a three week blackout showed a 14 percent dip in new subscriptions in exposed markets. We raised budget, but capped the test to a spend level that preserved MER. Over six weeks the investment paid for itself in subscription starts and the cohort performance held. The blended view made the decision safe to scale.

Data hygiene and the price of sloppiness

Metrics matter only when the data behind them is clean. A five minute change to UTM hygiene can save a quarter of confusion. We have walked into accounts where Paid Social was split across three channel names, organic posts were tagged as paid, and email was labeled CPC. The result looked like weak paid search, erratic social, and a magical direct channel that grew whenever teams got busy. Fixing the taxonomy revealed the actual drivers within a week.
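
The durable fix is usually a small normalization layer between raw UTM values and reporting. This sketch assumes a hand maintained mapping, which is workable at most account sizes; the specific pairs shown are illustrative:

```python
# Illustrative mappings; the real taxonomy comes from your UTM naming standard.
CHANNEL_MAP = {
    ("facebook", "paid"): "Paid Social",
    ("fb", "cpc"): "Paid Social",
    ("instagram", "paid_social"): "Paid Social",
    ("google", "cpc"): "Paid Search",
    ("newsletter", "email"): "Email",
}

def normalize_channel(source: str, medium: str) -> str:
    """Map raw UTM source/medium pairs onto one canonical channel name."""
    key = (source.strip().lower(), medium.strip().lower())
    return CHANNEL_MAP.get(key, "Unmapped")  # review Unmapped rows, do not let them pile up

print(normalize_channel("FB ", "CPC"))       # Paid Social
print(normalize_channel("bing", "organic"))  # Unmapped
```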

Server side tracking and first party data are not silver bullets, but they close gaps that cost real money. If your ecommerce platform supports server side events, connect them. If your CRM can pass clean revenue back to ad platforms via offline conversions, use it. You are not feeding the platforms for fun. You are buying cheaper feedback loops that let the algorithms find better prospects faster.

Governance beats heroics. Once per quarter, sample orders and opportunities back to source and check whether they hit the right bucket. The boring consistency of that habit pays more than the flashiest dashboard you can build.

Dashboards with a point of view

Dashboards should argue for or against decisions. They should not be museums of charts. We design views around the questions leaders ask each week. Did we grow efficiently? Are we acquiring the right customers? Where is budget under or over earning? What broke that needs fixing now?

Here is a concise checklist we use when building or refreshing dashboards:

- Limit steering metrics to no more than five tiles on the first screen
- Show blended and channel views side by side, not in separate tabs
- Tie every metric to a target or range, not just a trend line
- Add annotations for major events, including promos, stock outs, and tracking changes
- Include a single click drill down for root cause, but keep the top view uncluttered

A dashboard that follows those rules will shorten meetings and make action items obvious. When debates arise, the team can drop into the diagnostic layer and come back with a specific change rather than a vague sense that performance is off.

Cadence and decision rights

Metrics without a cadence invite thrash. We prefer a simple rhythm. Daily pacing checks within channels to catch fires early. Weekly business reviews with steering and diagnostic metrics. Monthly budget reviews using confirmation metrics like MER and incrementality. Quarterly planning with cohort and payback analysis.

Decision rights need to match the metrics. Channel managers should have authority to shift budget within their channels daily within agreed guardrails. Cross channel budget moves should wait for weekly or monthly forums where blended efficiency and incrementality are on the table. If you let channels raid each other’s budgets based on yesterday’s CPCs, you will get volatility without learning.

One client codified this in a one page doc. It listed who could move what money on which cadence, which metrics justified the move, and when to escalate. Disagreements dropped, and testing velocity doubled, because the rules were clear.

Adapting metrics by growth stage

Early stage companies need speed, but not at the expense of false positives. Mid stage companies need scale, but not at the expense of customer quality. Late stage companies need durability, which means keeping a close eye on margin and retention.

Early stage teams often do best with a short list. New customer MER, contribution after returns, CAC to 12 month gross margin, and a simple incrementality read. Keep it scrappy, but enforce data hygiene from day one so that as you scale you are not rebuilding foundations.

Mid stage companies can add cohort LTV, payback by channel, and pipeline velocity. They should start instrumenting leading indicators that truly lead, like trial to activation rate in SaaS or repeat purchase within 30 days in ecommerce. They can also get value from basic MMM or regression, provided spend has some variance to enable learning.

Late stage companies benefit from fuller MMM, segment level profitability, and NRR drivers. They may also need to manage cannibalization between channels and retail partners, which calls for incrementality tests that include wholesale and marketplace spillover. The common mistake at this stage is treating past coefficients like laws of nature. Protect the habit of testing, because external shocks change the math.

The trade offs that matter

Two trade offs recur. Precision versus speed, and local optimization versus global outcomes.

Precision can be intoxicating. A flawless payback model that arrives six weeks late is less useful than a tight estimate you can use next Monday. We aim for 80 percent confidence quickly, then refine. If a decision moves high six figures or more, we slow down and harden the numbers. If it moves tens of thousands, we bias to action and monitor.

Local optimization costs more than teams expect. A search manager who protects channel ROAS at the expense of new customer growth is not doing the brand a favor. A social buyer who chases cheap clicks that do not convert because they look good in platform dashboards burns cash. Blended efficiency and incrementality force alignment. They are not perfect, but they stop the most common budget mistakes.

Practical examples from the field

A cookware brand struggled with rising CPAs on prospecting. Last click blamed social for the inefficiency, yet post purchase surveys kept pointing to social as the discovery channel. We paused paid search brand terms for a week in two cities and saw no revenue drop, which told us brand spend was harvesting what social created. We cut brand, raised prospecting with creatives that showed use cases rather than polished product shots, and shifted remarketing to email and SMS. MER stabilized within two weeks, and contribution grew 9 percent over the next month.

A B2B cybersecurity firm celebrated a surge in MQLs from content syndication. SQL rate collapsed, sales reps complained, and CAC ballooned. We rebuilt the offer, narrowed syndication partners to those with verifiable intent signals, and enforced a 15 minute response time. SQL rate rebounded from 8 to 26 percent, and CAC returned to target within a quarter. The key metric that changed the conversation was cost per qualified opportunity, not MQL volume.

A beauty subscription company relied on Meta optimization for purchases and saw flat growth. We switched to optimizing for a high intent micro conversion that closely correlated with subscription start, then layered in creative that spoke to first use rituals rather than discounts. Thumb stop rate rose, quality traffic increased, and subscription starts climbed 17 percent at steady spend. The leading indicator earned its spot by proving correlation first, then guiding creative bets.

What we hold constant at (un)Common Logic

Across business models and growth stages, a few principles keep us from chasing our tails.

- Tie steering metrics to profit, not platform credit
- Separate steering, confirmation, and diagnostic metrics and keep them in their lanes
- Test incrementality on a cadence you can sustain, then move budget based on results
- Protect data hygiene with simple naming rules and quarterly audits
- Give dashboards a point of view and set clear decision rights

These habits sound simple. They take discipline to maintain, especially when campaigns spike or the market shifts. The payoff is durable. Teams make faster decisions, learn from tests instead of debating them, and scale budgets without losing unit economics.

Marketing has more data than ever, but not all data deserves a seat at the strategy table. At (un)Common Logic, the metrics that matter share one trait. They help you grow efficiently today, while improving the odds that growth still looks smart a year from now. When your measurement system supports that kind of judgment, you will find that meetings get shorter, tests get cleaner, and results compound. That is the difference between reporting activity and managing performance.