How to Tell If a Feature Is Worth Your Money

(Before You Spend It)

During the mobile app gold rush, every company with a budget wanted an app. Most of them couldn't tell you why. Alex Bratton, who I worked with during my time at Apple, got so tired of watching money burn that he built a formula to kill bad ideas before they cost anything. He called it Return on App. Three numbers, two formulas, and a question most companies still can't answer: will this actually make us money, or are we just buying a feature list?

The technology has changed. The math hasn't.

Here's how to use it.

A Framework That Worked for Apps Works Even Better for AI

Somewhere around 2013, Alex Bratton got tired of watching companies throw money at mobile apps that didn't move the needle. He was running Lextech, an Apple enterprise partner, and he kept seeing the same pattern: companies would dream up an app, build it, launch it, and then wonder why nobody used it and nothing changed.

I worked with Alex during my time at Apple. What stuck with me wasn't just the framework, it was the discipline behind it. He refused to let anyone skip the math. You want to build an app? Great. Show me the numbers first. No numbers, no project. That kind of rigor was rare then. It's even rarer now.

So he built a framework. He called it Return on App, or ROA. The idea was dead simple: before you build anything, calculate whether it will generate new revenue, save real costs, or both. If you can't do the math, you shouldn't write the check.

That was during the mobile app craze. We're watching the exact same movie again with AI. Different technology, same bad habits. Companies bolting AI onto everything, buyers evaluating tools based on feature lists instead of business impact, and nobody stopping to ask the one question that matters: will this actually move a number that I care about?

Bratton's framework is over a decade old. It's more useful today than it was when he wrote it.

Why "Does It Have AI?" Is the Wrong Question

Here's the problem with how most businesses evaluate new technology right now.

Seventy-five percent of SMBs are investing in AI, according to Salesforce. Fifty-eight percent already use generative AI. The adoption wave is real. But adoption doesn't equal impact. Buying a tool and getting value from a tool are two very different things.

IDC's research on SMBs in 2026 makes this point sharply. The companies winning with AI are focused on what they call "pragmatic use cases that are easy to deploy and deliver measurable ROI." Not feature lists. Not impressive demos. Measurable returns.

The gap between "we bought an AI tool" and "that AI tool made us money" is where most of the waste lives. And most companies don't have a framework for closing that gap before they spend the money.

Bratton's ROA framework closes it. And while he built it for mobile apps, the math applies to any technology decision, especially AI features and tools.

The ROA Framework: Three Numbers That Tell You the Truth

Bratton's Return on App breaks down into three components. Each one forces a different kind of honesty about whether a technology investment is worth it.

1. New Revenue Potential (NRP)

The NRP answers one question: how much more revenue could you generate if this tool worked the way it's supposed to?

This isn't about the tool's capabilities. It's about your team's capacity. If your sales team spends 30% of their time on proposal formatting and a tool cuts that in half, those hours go back into selling. The NRP captures what that freed-up time is worth in revenue terms.

The NRP formula uses three inputs:

Impact (I): The percentage of time the new process saves. If a tool cuts a two-hour task down to thirty minutes, that's a 75% impact.

Time (T): The percentage of total work time your team spends on this workflow. If they spend eight hours a week on it out of forty, that's 20%.

Revenue (R): The annual revenue connected to this workflow. If the workflow is your entire sales process, use your full revenue number. If it's a subset, use the portion it drives.

The NRP formula is: NRP = (I x T x R) / (1 - I)

That denominator is the clever part. If a task now takes only (1 - I) of its old time, the same hours support 1 / (1 - I) times the old throughput. So the formula doesn't just count the time saved; it captures the additional revenue available when your team can do more of the revenue-generating work with the hours they already have.

Here's a quick example. Say your operations team of five people spends 20% of their time on invoice processing. Your annual revenue is $5 million. A new AI tool could cut that processing time by 50%.

I = 50%, T = 20%, R = $5,000,000

NRP = (0.50 x 0.20 x $5,000,000) / (1 - 0.50) = $1,000,000

That's a million dollars in revenue capacity freed up. Not guaranteed revenue, but capacity. The question then becomes: can your team convert that freed time into actual revenue? If yes, the tool pays for itself many times over. If no, the NRP is lower, and you need to be honest about that.
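The NRP arithmetic is easy to sanity-check in a few lines. Here's a minimal sketch (the function name and signature are mine, not Bratton's):

```python
def nrp(impact: float, time_share: float, revenue: float) -> float:
    """New Revenue Potential: revenue capacity freed by a faster workflow.

    impact: fraction of workflow time the tool saves (0 to just under 1)
    time_share: fraction of the team's total time spent on the workflow
    revenue: annual revenue tied to the workflow
    """
    if not 0 <= impact < 1:
        raise ValueError("impact must be at least 0 and less than 1")
    # The (1 - impact) denominator converts time saved into extra capacity:
    # a task that now takes (1 - I) of its old time supports 1 / (1 - I)
    # times the old throughput.
    return impact * time_share * revenue / (1 - impact)

print(nrp(0.50, 0.20, 5_000_000))  # 1000000.0
```

Note what happens at the edges: with impact at zero the NRP is zero, and as impact approaches 100% the capacity gain grows without bound, which is why honest, conservative impact estimates matter so much.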

2. Cost Savings Potential (CSP)

The CSP is more straightforward. It answers: how much money will you save by making this process faster and cheaper to run?

Same Impact and Time inputs, a different formula, and you swap Revenue for Cost.

Cost (C): The total cost of the workflow, including labor, materials, printing, shipping, equipment, maintenance, and any downstream workflows this one slows down.

The CSP formula is: CSP = I x T x C

Using the same example: your five-person operations team costs $500,000 annually in fully loaded compensation. They spend 20% of their time on invoice processing. The AI tool cuts processing time by 50%.

CSP = 0.50 x 0.20 x $500,000 = $50,000

That's $50,000 in direct cost savings per year. Real money. Measurable. The kind of number a CFO will actually nod at.
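The CSP is even simpler to sketch, since there's no denominator (again, the function name and signature are mine):

```python
def csp(impact: float, time_share: float, cost: float) -> float:
    """Cost Savings Potential: direct savings from a faster workflow.

    impact: fraction of workflow time the tool saves (0 to 1)
    time_share: fraction of the team's total time spent on the workflow
    cost: fully loaded annual cost of running the workflow
    """
    # No compounding here: savings scale linearly with time saved.
    return impact * time_share * cost

print(csp(0.50, 0.20, 500_000))  # 50000.0
```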

3. Organizational Impact (OI)

This is the one that gets skipped, and it's often where the real value hides.

The OI captures benefits that don't show up neatly in revenue or cost formulas. Things like reduced employee turnover because people aren't stuck doing mind-numbing repetitive work. Or the impression that modern, well-designed tools make on customers and recruits. Or the downstream effects on other workflows that were bottlenecked by the one you just fixed.

Bratton acknowledged that OI factors are usually unique to the organization. You can't always put a precise number on them. But you can name them, estimate them, and include them in your decision.

For AI tools specifically, OI often shows up in three places: error reduction (fewer mistakes in the processed work), speed to customer (faster turnaround on things clients are waiting for), and team morale (people spending less time on work they hate).

Putting It Together: ROA = NRP + CSP + OI

The total Return on App (or in our case, Return on AI) is the sum of all three. In the example above, you're looking at up to $1,000,000 in revenue capacity plus $50,000 in direct savings plus whatever you estimate for organizational impact.

Now compare that to the cost of the tool. If the AI product costs $30,000 a year, the math is obvious. If it costs $500,000, you need to be a lot more careful about your assumptions, especially on the NRP side.

The beauty of this framework is that it forces specificity. You can't just say "this tool will help." You have to say how much, where, and based on what assumptions.

The Common Mistakes

Mistake 1: Running the numbers on the vendor's assumptions, not yours.

Every vendor demo will tell you the tool delivers a 50% improvement. Maybe it does, for a company with clean data, trained staff, and processes that were already well-documented. Your mileage will vary. Run the ROA with conservative Impact numbers. Use 25% instead of 50%. If the math still works, you've got something.

Mistake 2: Forgetting the Time variable.

A tool that saves 80% of the time on a process your team spends 2% of their week on is not a high-ROA investment. The Impact might be impressive, but the Time multiplier kills it. Always check how much of your team's total capacity actually touches the workflow you're improving.
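To see how the Time multiplier dominates, plug both cases into the CSP formula (the $500,000 team cost is a hypothetical, carried over from the earlier example):

```python
team_cost = 500_000  # hypothetical fully loaded annual team cost

# Impressive 80% impact on a workflow that takes only 2% of the team's time:
big_impact_small_time = 0.80 * 0.02 * team_cost   # about $8,000/yr

# Modest 25% impact on a workflow that takes 20% of the team's time:
small_impact_big_time = 0.25 * 0.20 * team_cost   # about $25,000/yr

print(big_impact_small_time, small_impact_big_time)
```

The "boring" tool with the modest impact number wins by roughly 3x, because it touches ten times as much of the team's week.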

Mistake 3: Counting NRP as guaranteed revenue.

NRP is capacity, not cash. Freeing up ten hours a week for your sales team only generates revenue if they use those hours to sell. If they fill the time with meetings, the NRP is zero. Be honest about whether your organization will convert freed capacity into results.

Mistake 4: Ignoring the cost of implementation.

The ROA formula calculates the return. But the return only kicks in after the tool is running. If implementation takes six months and $100,000 in consulting fees, your first-year ROA looks very different. Include setup costs, training time, and the productivity dip that happens during any transition.

Mistake 5: Skipping OI because it's hard to quantify.

Some of the most valuable outcomes are the hardest to measure. If a new tool means your best operations person stops threatening to quit because they're no longer buried in data entry, that's worth something. Don't ignore it just because you can't put it in a spreadsheet. Estimate it. Name it. Include it, even with a range.

Where to Start This Week

Pick one tool you're currently evaluating, or one you've already bought and aren't sure about. Then do this:

Step 1: Identify the specific workflow it touches. Not "it helps with operations." Which process, specifically?

Step 2: Estimate the four inputs. Impact (how much time will it save in this process?), Time (what percentage of your team's week is spent on this?), Revenue (how much revenue does this workflow drive?), and Cost (what does this workflow cost to run?).

Step 3: Run the NRP and CSP formulas. Use conservative numbers. If the tool vendor claims 50% impact, use 25% for your first calculation.

Step 4: List your OI factors. What else changes if this workflow gets better? Employee retention? Customer experience? Downstream bottlenecks?

Step 5: Compare the total ROA to the cost of the tool, including implementation. If the ROA is at least 3x the total cost in year one, you've got a strong case. If it's less than 2x, dig deeper into your assumptions before committing.
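Steps 2 through 5 collapse into one small calculation. A sketch, using the article's illustrative numbers and thresholds (the OI estimate and tool cost are hypothetical inputs you supply):

```python
def roa_case(claimed_impact: float, time_share: float, revenue: float,
             workflow_cost: float, oi_estimate: float,
             tool_cost_year_one: float):
    """Run the full ROA comparison with conservative inputs."""
    impact = claimed_impact / 2          # halve the vendor's claimed impact
    nrp = impact * time_share * revenue / (1 - impact)
    csp = impact * time_share * workflow_cost
    roa = nrp + csp + oi_estimate
    multiple = roa / tool_cost_year_one  # compare return to total cost
    if multiple >= 3:
        verdict = "strong case"
    elif multiple >= 2:
        verdict = "borderline: revisit assumptions"
    else:
        verdict = "dig deeper before committing"
    return roa, multiple, verdict

# Vendor claims 50% impact; 20% of the team's time; $5M revenue workflow;
# $500K team cost; a rough $10K OI estimate; $40K tool + implementation.
roa, multiple, verdict = roa_case(0.50, 0.20, 5_000_000,
                                  500_000, 10_000, 40_000)
print(verdict)
```

Even after halving the claimed impact, the example clears the 3x bar comfortably, which is exactly the kind of margin you want before writing the check.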

This takes about 30 minutes per tool. And it's 30 minutes that will save you from the feature-stuffing trap, because no amount of AI features matters if the math doesn't work.

I break down frameworks like this every week in From Signal to Scale, my newsletter: three signals from AI, automation, and tech. No hype. No buzzwords. Just the stuff that actually matters if you're running or building a business.

If this was useful, you'll like what shows up on Fridays. [Subscribe here]

Have you run ROA numbers on your current tools? I'd love to hear what you found. Hit reply or drop me a note.