Okay so scale growth with smart experimentation is basically the only thing I think about between 10 p.m. and 2 a.m. these days while I’m lying in bed in my apartment in Austin staring at the ceiling fan that’s moving just slow enough to be annoying.
I used to think growth was all about big hairy audacious ideas—like landing that one viral TikTok campaign or getting acquired by some FAANG giant overnight. Turns out most of that is luck mixed with really good PR. What’s actually moved the needle for me, especially the last eighteen months, is treating growth like a really boring science project where I run tiny cheap experiments constantly and kill the losers fast.

Why I Suck at Guessing (and Why You Probably Do Too)
Here’s a humiliating confession: for almost two years I was convinced that adding more payment method options would 3x our conversion rate. I mean, everyone says “reduce friction,” right? So I bullied the dev team into building Apple Pay, Google Pay, Klarna, Afterpay, Buy Now Pay Later with my dog’s name if they’d let me.
We finally shipped it. Result? +0.4% lift. Not even statistically significant. I wasted like six weeks of engineering time and probably $18k in opportunity cost because I was so sure I was right. That was the moment I went full “screw my gut, give me data” mode.

Now I force myself to run smart experimentation before I commit real resources. Small, fast, cheap tests first.
My Current Scrappy Framework to Scale Growth with Smart Experimentation
I’m not fancy. No Six Sigma belts or $50k Amplitude subscription here. Here’s what I actually do in 2026:
- Idea backlog in Notion that I ruthlessly prioritize. Every random thought (“what if we make the CTA button look like a literal exploding firework?”) goes in. Then I score each one 1–10 on three things:
  - Potential impact (how big could the win be?)
  - Ease of testing (can I MVP this in <1 week?)
  - Confidence level (how sure am I this isn’t stupid?)
  Only stuff that averages >7 gets built.
- Minimum viable experiment mindset. If I can test the hypothesis with a Google Form linked in an Instagram story or a $200 FB ad set, I do that before writing code. Example: I wanted to see if “free shipping over $75” beat “10% off $100.” Ran it as two separate landing page links in stories to 1,200 followers. 10% off won by 22 points. Saved us from building the wrong thing.
- Weekly experiment cadence. I aim for 3–5 live tests every single week. Some are tiny (headline tweaks), some are bigger (new onboarding flow). Most die after 48–96 hours. The winners get resources to scale.
- Kill fast, celebrate small. If the p-value isn’t <0.05 and the lift isn’t at least +8–10%, I usually shut it down. I used to drag losers along because “maybe it needs more time.” Nah. Time is the most expensive thing.
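The kill/keep call above can be made with a plain two-proportion z-test — here’s a minimal sketch (the function name, thresholds, and numbers are mine, made up for illustration; real counts come from whatever you’re tracking experiments in):

```python
from math import sqrt
from statistics import NormalDist

def keep_or_kill(conv_a, n_a, conv_b, n_b, min_lift=0.08, alpha=0.05):
    """A variant survives only if p < alpha AND relative lift >= min_lift."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = (p_b - p_a) / p_a  # relative lift of variant over control
    # pooled standard error for a two-proportion z-test
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    decision = "KEEP" if p_value < alpha and lift >= min_lift else "KILL"
    return decision, lift, p_value

# 5.0% control vs 6.7% variant over 2,400 sessions each
print(keep_or_kill(conv_a=120, n_a=2400, conv_b=160, n_b=2400))
```

Note the double gate: a statistically significant +2% lift still dies under this rule, because shipping and maintaining a tiny winner costs more than it pays.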
One Experiment That Actually Helped Me Scale Growth
Last fall I was panicking because paid acquisition CAC was creeping up 30% quarter over quarter. Instead of throwing more budget at Meta, I ran a really dumb-sounding test:
We added a one-question post-purchase survey (“What almost stopped you from buying today?”) with four radio buttons and an “other” text field.
The #1 answer by a mile? “I wasn’t sure if it would fit / look good on me.”
So we tested adding a quick “see it on a real person like you” carousel using customer-submitted photos (with permission, anonymized height/weight ranges).
Result after two weeks: +14% add-to-cart rate, +9% overall conversion. That one experiment is probably responsible for an extra $70–80k in revenue this quarter so far.
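Tallying that survey was nothing sophisticated — roughly this (the response strings here are illustrative stand-ins, not our actual radio-button labels):

```python
from collections import Counter

# Hypothetical raw responses from the one-question post-purchase survey
responses = [
    "fit/look", "price", "fit/look", "shipping cost",
    "fit/look", "trust", "fit/look", "price", "fit/look",
]

blockers = Counter(responses)
top_blocker, count = blockers.most_common(1)[0]
print(f"{top_blocker}: {count}/{len(responses)} responses")
# fit/look wins by a mile, which is what pointed us at the photo carousel
```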


And it started because I read like twelve open-ended survey responses at 1 a.m. while eating leftover Whataburger in sweatpants.
Tools I Actually Use (Nothing Crazy Expensive)
- Google Analytics 4 + GA4 BigQuery export (free tier still slaps)
- Mixpanel for event-level stuff when GA feels too aggregated
- VWO or Optimizely free tier for basic A/B (I bounce between them)
- Notion + a Google Sheet for experiment tracking
- Causal for lightweight forecasting when I want to pretend I’m sophisticated
I don’t have a data scientist. It’s me, a coffee IV drip, and ChatGPT helping me write SQL half the time.
The Part Where It All Goes to Hell Sometimes
Real talk: smart experimentation sounds clean but it gets chaotic fast.
Last month I ran four tests at once on the same page because I was impatient. Traffic cannibalized itself, results were noisy garbage, and I spent three days trying to untangle it before I just reverted everything and started over.
I still over-test sometimes. I still fall in love with my own ideas and have to force myself to kill them. I still have weeks where nothing wins and I feel like a fraud.
That’s the game though. Scaling growth with smart experimentation isn’t a straight line up and to the right. It’s more like three steps forward, one spectacular faceplant, two shuffles sideways, then maybe another step.
Wrapping This Ramble Up
If you’re reading this at 2 a.m. wondering why nothing is working, here’s my only real advice: stop trying to be brilliant and start trying to be fast and wrong a lot. Run the ugly test. Kill it quick when it flops. Double down on the weird 12% win nobody expected.
That’s how I’m clawing my way toward scaling right now.
What’s one tiny experiment you could run tomorrow morning before your first coffee? Drop it in the comments—I’m nosy and I’ll probably tell you it’s dumb and then secretly copy it if it works.
Catch you in the next post-mortem.
