What differences do good experiments make for a company? And how do you create experiments that don’t suck?

During our fantastic Product-Led Summit in February 2022, Willie Tran, Principal Growth Product Manager at Calendly, and Jacqueline Sigler, Senior Product Manager at GoDaddy, took us through all things experimentation: its importance, how to implement a great experimentation program, and much more.

Check out the highlights from the Q&A 👇

Q: What is product-led growth?

Willie Tran

Product-led growth essentially comes down to the execution level. It’s more or less running experiments as a way to measure noticeable impact and improvement to your product, usually through common metrics like acquisition, activation, revenue, retention, and referral.

How do you actually measure that? You can run quasi-experiments, where you launch a change and observe a metric over time, controlling for certain time windows. However, this isn't always a good approach: you run into seasonality effects, and if you don't have enough scale, you may need to observe those time windows for longer, which is definitely less than ideal.

The way to measure impact is through an experimentation program. You can’t improve a product or do PLG without understanding how much you moved something by.

Jacqueline Sigler

Growth PMs are focused on enabling a smoother purchase path. What's the way to attract more users and bring them through the funnel? Potentially changing how we describe different plan variants. That's my take on what product-led growth is.

Q: What is an experiment?

Willie Tran

An experiment is a vehicle to learn. It's not just testing your idea, which I think is what a lot of people think it is.

When you're setting up an experimentation program or an experiment, your objective is to learn about the users, trying to answer questions like “What is the problem?”, “How can we better understand the problem the user is experiencing?”, or “Is our onboarding too long?”.

Jacqueline Sigler

There's been a strong emphasis on experimentation over the last two years or so and it created excitement amongst the product team, but also created a lot of anxiety. There’s already a lot of pressure to meet your roadmap milestones, so it felt like experimentation was another thing to add to the plate when we're already being asked to do so much. How do you balance that?

Seeing experimentation as learning has alleviated some of this pressure; it's about learning, with the emphasis on customer value. Our internal team will run tests on improvements to internal tooling to see whether our fulfillment teams can get through their jobs more efficiently. On the customer side, experimentation should be driven by giving the customer a better experience.

Willie Tran

Getting leadership excited about experimentation can be a challenge. We can talk about how valuable experimentation is, but the truth is it’s very difficult to get started, so unfortunately it's very easy to make an experiment that sucks.

Furthermore, it's incredibly easy to run an invalid experiment. For example, suppose I flip a coin six times and it lands heads five out of six times, so I conclude that any time you flip this coin, it will land heads about 85% of the time. That's obviously not true; with so few flips, the result is well within what pure chance produces. That's an invalid experiment.
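To put numbers on that coin anecdote, a quick binomial check (a sketch in Python, using the counts from the example) shows how easily a fair coin produces five or more heads in six flips:

```python
from math import comb

def prob_at_least(heads: int, flips: int, p: float = 0.5) -> float:
    """Probability of seeing at least `heads` heads in `flips` coin flips,
    where each flip lands heads with probability `p`."""
    return sum(
        comb(flips, k) * p**k * (1 - p) ** (flips - k)
        for k in range(heads, flips + 1)
    )

# Chance a FAIR coin gives 5+ heads in just 6 flips
p_value = prob_at_least(5, 6)
print(f"{p_value:.3f}")  # prints 0.109: about an 11% chance from pure luck
```

An 11% chance of that result under a fair coin is far too high to conclude the coin is biased, which is exactly why six flips is an invalid experiment.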

This happens with experiments all the time, and the numbers from the invalid experiment then get presented as gospel and truth. That's harmful because it's essentially disinformation. You're saying you've just increased the bottom line by $10 million, but that could just be random error.

Q: What do you use to implement experimentation and make sure you're reaching that statistical significance?

Willie Tran

This question can be broken down in a couple of ways. First, you have to ensure the experiment is valid. What's the baseline conversion rate?

Then you decide what you want the statistical power to be and choose the minimum detectable effect, which is essentially a sensitivity setting: “How much are you changing the user experience?” and “How much are you expecting the number to move by?”.
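The sample-size calculators this step refers to typically implement a formula along these lines. This is only a sketch using the standard normal-approximation formula for comparing two proportions, with hypothetical baseline, MDE, power, and significance values:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_arm(baseline: float, mde: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant for a two-proportion test.

    baseline: control conversion rate (e.g. 0.10 for 10%)
    mde:      minimum detectable effect, absolute (e.g. 0.01 for +1 point)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # statistical power
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / mde**2
    return ceil(n)

# Hypothetical inputs: 10% baseline, detect a +1-point lift, 80% power
print(sample_size_per_arm(0.10, 0.01))  # roughly 14,700 users per variant
```

The takeaway matches the point about scale: the smaller the effect you want to detect, the more users each variant needs, growing with the square of the MDE.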

After that, you look at the results and see whether the metric is statistically significantly different in your treatment vs. your control. You put that into another calculator and then draw your conclusion. That's how I test whether an experiment is valid.
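That significance check (the "other calculator" mentioned above) usually amounts to a two-proportion z-test. A minimal sketch, with entirely hypothetical conversion counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test comparing control vs. treatment conversion rates.
    Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: control converts 10.0%, treatment 11.2%
z, p = two_proportion_z_test(1000, 10_000, 1120, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 here, so the lift is significant
```

With these made-up numbers the p-value comes out well under 0.05; with far fewer users, the same lift would not reach significance, which is the coin-flip trap again.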

As for a framework to ensure we're not just throwing stuff at the wall, my approach is different. The framework I adopt with all my teams starts with questions, not tactics.

The first thing we feel inclined to do is come up with tactics and a bunch of experiment ideas. Instead, come in with questions and work with your team collaboratively to create a question backlog. Then, for each problem statement, run an ideation session with your team around potential experiments.

Every experiment you run relates to a validated problem statement, which in turn relates back to a question you originally drew insight from, such as what is preventing activation.

It's all perfectly logical: you run the experiment to get insight, which usually leads to more questions, which lead to more insights, which lead to more problem statements, which lead to more experiments, and so on.

Jacqueline Sigler

At GoDaddy, leadership instituted experimentation, and one of the challenges is that GoDaddy is a large company with about 200 PMs. Many have different backgrounds and experiences, and some had no experience with experimentation, myself included, so it was a bit intimidating.

Establishing a consistent framework has been very helpful: setting out the objective, observation, and hypothesis, defining the key KPIs you're measuring, and keeping that emphasis on the clarity and consistency of how you approach experimentation.

That's really important to establish, especially if you're going to have members of the organization outside of product start doing experimentation.

Part of the discouragement and intimidation around experiments is the pressure and desire to feel you have to create a winning experiment, a pressure that people on our experimentation team and leadership continue to reinforce. However, having an experiment that's a loss doesn't mean it was bad or a waste of time, because you still learn something.

Willie Tran

My least favorite experiment is one you don’t learn anything from. For example, we ran an experiment which generated somewhere between $5-8 million for Dropbox in incremental annual recurring revenue.

I say it was my least favorite experiment because we changed so many variables that afterwards I had to do a bunch of experiments to figure out what actually caused the increase.

Jacqueline Sigler

The key learning I've had is that an experiment that sucks is one where you can't measure the result, because ultimately it means you didn't get the benefit of all the time you invested in it.

Q: What can you do when you’re struggling to decide what to do next?

Willie Tran

Look back at your hypothesis. A good hypothesis is made up of three components: “By [execution], we will see an increase or decrease in [metric] because of [assumption].” When an experiment doesn't pan out, that means one of three things:

  • Our execution was wrong.
  • Our metric was wrong (which is surprisingly common).
  • Our assumptions were wrong.

One or a combination of those can be wrong. Go back to your hypothesis, look at each of those three things, and see where you think things went wrong. Your next step lies in one of them.

Q: How do you value qualitative feedback from users and how do you bring it into your framework and experimentation?

Willie Tran

This is what goes into the problem statements. Look at the data holistically and identify some common trends you’re seeing, but also make sure you're asking the right questions.

Once you have this data, then generate the problem statements and this will give you the clarity your team needs to come up with the right solutions for that problem.

I always believe solutions are easy and questions are hard. The qualitative part is the most critical part of experimentation, but it’s often overlooked.

Jacqueline Sigler

I recently worked on a new branding product, effectively a new version of an existing product for a set group of customers. Our approach was to create a new Figma design, a new structure, and a prototype, and we did interviews with these customers, asking them a series of qualitative questions.

I also wanted some quantitative aspect to this experiment. Those same customers had a CSAT score from their previous brand experience.

In the interview, I asked a series of qualitatively driven questions and then a question that effectively got at the CSAT score, like “How would you rate this?” or “What's your expectation of this on a scale of one to 10?”.

So I was able to reference their past CSAT scores to give a quantitative measure, but then also have the qualitative through the question answers.

The qualitative questions gave a far more illuminating response than if I had only sent a survey of the new prototype with a CSAT question, because I wouldn't have gotten the real feedback I needed without the qualitative data. In this experiment I managed to strike a good balance of qualitative and quantitative data.

To wrap up…

Experiments are all about trial and error. They won’t always succeed, but as long as you’re learning they’re worth the effort. Lead with questions rather than tactics and build your experimentation framework out from there. And, as always, leadership buy-in is key!

Want to improve your Go-to-Market strategy?

Our Go-to-Market Certified: Masters course will give you all the information and knowledge you need to up your GTM game.

Delivered by Yoni Solomon, Chief Marketing Officer at Uptime.com, this course provides you with everything you need to design, launch, and measure an impactful Go-to-Market strategy.

By the end of this course, you'll be able to confidently:

🚀 Grasp a proven product launch formula that’s equal parts comprehensive, repeatable, creative, and collaborative.
🧠 Gain the expertise and know-how to build and tailor an ideal product blueprint of your own.
🛠 Equip yourself with templates to facilitate a seamless GTM process.