💥 Activate Maximum Impact! 💥 How to scale your experimentation program for results.
Creating an effective testing and optimization program requires more than launching a few experiments. Avoid the most common pitfalls and set yourself up for maximum success!
The 3 most common reasons optimization programs fail are:
Missing necessary skills on the team 👨‍🚀
Lack of clear strategy and supporting processes 🎯
Inability to demonstrate value from your efforts 📈
An effective experimentation program starts with a solid foundation of people, processes and quality data. Start with small experiments to generate quick wins. With clear insights, you’ll build momentum, start seeing meaningful growth, and increase your confidence so you can start swinging for some home runs.
In this post, we'll look at what you need to build an optimization program that can transform your website into an acquisition machine for both self-serve sign-ups and enterprise leads.
Failed at first
Our enthusiastic marketing team was bursting with ideas, we had an A/B testing platform, and our engineering and design team was ready to bring those ideas to life.
But it didn’t work.
We launched several experiments and spent tons of time trying to make sense of data. We brought creative ideas to life and attempted to learn our way to growth, but we couldn’t keep the momentum or significantly move the needle.
We tried to be super collaborative, with multiple departments having a say in experiments. Anyone in marketing could come up with an idea and test it, but no one truly owned the process. And even when some tests did show promise, there was no follow-through to implement those insights.
In the end, our big experimentation experiment had very little to show for all our effort.
But the story doesn’t end there.
What changed the game
TL;DR: dedicated ownership, a systematic approach, and quality data.
The pivotal moment arrived when my company recognized the need for dedicated ownership and expertise. I started hunting for an expert, and eventually I came across an interesting LinkedIn post from Scott Olivares on how to build a highly effective website navigation. My hunt was successful, and Scott came to work for my company.
Here are some of the additional components we focused on to create a truly successful optimization program.
Find the truth
You know what they say about data: “garbage in, garbage out.” Before starting any effort to drive growth, you need to establish a sound data-driven foundation to ensure you understand the impact you are having and make the right decisions.
Here are a few tips on how to get your data in order:
Find the right data: Not all metrics are created equal. Identify those that are closely tied to the impact you are trying to make so that your efforts reflect true growth.
Validate the accuracy: Questionable data can undermine your confidence in what you are doing, and at worst, it can lead you to make the wrong decisions. QA your most critical metrics by walking through the flows on your website and matching your activity against what appears in your analytics platform (a minimal automated cross-check is sketched after this list).
Identify the gaps: Explicitly call out missing metrics or the ones you don’t trust. There are countless metrics, dimensions, integrations, and visualizations that will drive deeper insights and measure your impact. It’s okay if you don’t have everything in order right away. Work with the data you trust, and enhance the rest in parallel.
Create a roadmap: With the gaps identified, create a prioritized list of metrics and capabilities that you need, and start tackling them systematically. This is the beginning of the analytics roadmap that will enhance the fidelity and quality of the data your experimentation program depends on.
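To make the validation step concrete, here’s a minimal sketch of an automated cross-check. Everything in it is an assumption for illustration: the metric names, the 5% tolerance, and the two fetch_* placeholder functions, which you’d wire to your own analytics API and your source-of-truth database.

```python
TOLERANCE = 0.05  # flag metrics that drift more than 5% from the source of truth

def fetch_analytics_count(metric: str) -> int:
    # Placeholder: swap in a real query to your analytics platform.
    return {"signup_completed": 940, "demo_requested": 212}[metric]

def fetch_backend_count(metric: str) -> int:
    # Placeholder: swap in a real query to your production database.
    return {"signup_completed": 1000, "demo_requested": 210}[metric]

for metric in ["signup_completed", "demo_requested"]:
    tracked = fetch_analytics_count(metric)
    actual = fetch_backend_count(metric)
    drift = abs(tracked - actual) / max(actual, 1)
    status = "OK" if drift <= TOLERANCE else "INVESTIGATE"
    print(f"{metric}: tracked={tracked} actual={actual} drift={drift:.1%} -> {status}")
```

Run on a schedule, a check like this catches broken tracking before it quietly corrupts weeks of experiment data.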
Prioritize for impact
As Scott used to say, keep it S.I.M.P.L.E - Systematic, insight-driven and meaningful changes on popular pages to learn through experimentation (ok… the acronym is a bit of a stretch). Here are the three key aspects of this approach:
Create an experiment spreadsheet: Compile a prioritized list of experiments with key details: hypothesis, target KPI, weekly page traffic, estimated impact (high/medium/low), effort level, and strategic significance. This will allow you to prioritize the best experiments first (a minimal scoring sketch follows this list). Once a test is finished, be sure to document the results in the same spreadsheet.
Engage with cross-functional teams: Collaborate with design and engineering teams to assess the feasibility and effort required to bring a test to market. The effort-to-impact ratio will be critical to prioritizing your experiment roadmap. Share the spreadsheet with the team and beyond; it will show how and why you are prioritizing certain experiments.
Simplify where needed: High impact doesn't have to mean high effort. Reassess and simplify experiments to reduce the effort required to implement them without compromising quality or results.
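If you want the spreadsheet to rank itself, here’s a minimal scoring sketch. The weights, the mappings, and the example experiments are all assumptions; tune them to your own program, and keep the real list in the shared spreadsheet.

```python
IMPACT = {"high": 3, "medium": 2, "low": 1}
EFFORT = {"high": 3, "medium": 2, "low": 1}

experiments = [
    # (name, estimated impact, effort level, strategic significance 1-3)
    ("Shorten signup form", "high", "low", 2),
    ("Rewrite pricing page copy", "medium", "low", 3),
    ("Rebuild onboarding flow", "high", "high", 3),
]

def priority(exp) -> float:
    name, impact, effort, strategic = exp
    # Reward impact and strategic fit; penalize effort.
    return (IMPACT[impact] + strategic) / EFFORT[effort]

for exp in sorted(experiments, key=priority, reverse=True):
    print(f"{priority(exp):.2f}  {exp[0]}")
```

The exact formula matters less than applying it consistently: a shared, visible score is what turns prioritization debates into quick decisions.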
Build momentum
Rome wasn’t built in a day. The foundation of an effective optimization program starts with a series of small, quick wins. Keep these three points in mind at the beginning.
Stay lean: Avoid distractions and stay focused on what is essential to launch your experiment. While experimentation platforms offer myriad features, you’ll likely never use most of them.
Start small: Technical issues are inevitable, and you’ll encounter plenty of them when you are first starting out. Start with simple ideas that don’t require a ton of engineering or design work - many of your high-impact tests may not require any at all! Get a few quick wins to build momentum.
Avoid over-complicating: Prioritize quality over quantity. It’s tempting to run multiple tests in an attempt to accelerate time-to-insights, but running too many simultaneously can lead to technical collisions as well as confusing results.
Take big swings
Once you've worked out the kinks and have a few wins under your belt, aim higher.
Go big: Leadership loves big wins, and big wins require big swings. Propose experiments that go beyond simple UI or messaging changes, and get executive sponsorship to run the test. Launch the experiment to a small user segment, monitor, and expand based on results (a minimal ramp-up sketch follows this list).
Do your homework: Big swings often come with higher effort, and usually require resources and collaboration from across the organization. To increase your chances of success, take the time to create a compelling business case for your experiment. A strong business case defines a clear problem, breaks the problem down to understand what might be causing it, and proposes a solution that is grounded in data-driven insights.
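Most experimentation platforms handle staged rollouts for you, but the underlying idea is simple enough to sketch. The ramp stages and bucket count below are illustrative assumptions; the key property is deterministic bucketing, so a user who is exposed stays exposed as the ramp expands.

```python
import hashlib

RAMP_STAGES = [0.05, 0.20, 0.50, 1.00]  # fraction of traffic exposed

def in_experiment(user_id: str, exposure: float) -> bool:
    # Hash the user id so assignment is stable across visits, then
    # expose the user only if their bucket falls under the current ramp.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000
    return bucket < exposure * 10_000

# Start at 5%; advance to the next stage only after guardrail metrics
# (errors, conversion, latency) look healthy at the current one.
print(in_experiment("user-123", RAMP_STAGES[0]))
```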
Crowdsource the effort
Having primarily worked in B2B SaaS companies, where the majority of the employees have a technical background, I’ve come to find that these folks LOVE data and numbers. Harness the collective intelligence inside your company!
Evangelize your efforts: Use your company Slack channel (or similar) to provide updates on upcoming experiments and those in-flight. Welcome new ideas and use your experimentation spreadsheet to prioritize the best.
Gamify the process: Encourage company engagement with initiatives like "pick the winner" contests. If you want to get fancy, create a poll allowing participation with a single click.
Encourage vigilance: While you should test every experiment before it goes out, bugs are inevitable. When evangelizing your efforts, also provide an avenue for reporting bugs - a dedicated channel or a direct message. Better to hear it from a peer than from the CEO!
Celebrate and share: Test results are effectively the punchline of the experiment. Share results and recognize contributors, especially if the idea came from outside your team (a minimal sketch for summarizing a result follows this list). Don’t shy away from sharing failed experiments; transparency breeds trust. There is something to be learned from every experiment regardless of the outcome.
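When you share a result, it helps to say more than “B looked better.” Here’s a minimal, standard-library sketch of one common way to summarize a conversion test - a two-proportion z-test. The numbers are made up, and this is just one sanity check, not a full stats pipeline.

```python
from math import erf, sqrt

def summarize(conv_a: int, n_a: int, conv_b: int, n_b: int) -> str:
    # Two-proportion z-test on conversion rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    lift = (p_b - p_a) / p_a
    return f"lift={lift:+.1%}, z={z:.2f}, p={p_value:.3f}"

# Made-up example: 4.0% vs 4.8% conversion with 5,000 users per arm.
print(summarize(conv_a=200, n_a=5000, conv_b=240, n_b=5000))
```

A one-line summary like this (lift, z, p) makes wins credible and keeps “promising” results honest.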
Don’t test everything!
When all you have is a hammer, everything looks like a nail. There are some things you should test, and others you should simply do.
Create a clear distinction between experiments and necessary website updates. Treat your website as a product and create a roadmap accordingly. Experiments will be one component of that roadmap along with necessary website updates, analytics improvements and engineering-led infrastructure initiatives.
In conclusion
Good ideas are a dime a dozen, and the ability to run experiments means nothing without the right people and process in place.
The right person in your organization will make all the difference. Find your growth expert and hire them, or find someone bright within your company, get them trained, and give them a shot.
Find the data you need to measure your impact, make sure it’s accurate, and plan on improving your data maturity. Remember, “garbage in, garbage out.”
Prioritize the best ideas based on potential impact, required effort, and strategic importance; don’t waste your time on ideas that fall to the bottom of your list.
Start small to build momentum and work out the technical kinks, then take big swings to get some home runs. When taking those big swings, do your homework to build your credibility and get buy-in.
Documenting all your experiments will allow you to learn from the past, aggregate the impact, and demonstrate the value of your program.
Please don’t test everything! You’re a growth driver, not a scientist. Fix what needs fixing and spend your brain power on innovation and non-obvious ideas.