
Tests Are Not Exams Theorem
This is one of the first Lenses that I introduce to a team, alongside the Lens "To Test or Not to Test", because it is foundational to a successful growth team.
While every test should have a hypothesis rooted in a "Valuable Source", which gives us confidence that the outcome will be positive, that does not mean a test must actually turn out positive. A test should not carry the same expectation of result as a complete, final feature, which to me must be a positive experience for your customers.
When testing, you and your team should operate with the frame of reference that 70% of your attempts will be neutral or negative.
That's only a 30% positivity rate.
We are conditioned, largely by school, to believe that the only truly successful score on any test is a nearly perfect one. To get an exam score clearly into the A range, students typically need to score at least 92.5%. That’s a fine frame for a student on an exam, but it doesn’t work for a growth team.
A 30% success rate seems embarrassing when viewed against that rubric, but it is critical for a growth team to calibrate its expectations to this reality.
You and your team need the space to test the boundaries and the air cover to make mistakes: the ability to test a potentially negative experience, and the degrees of freedom to test a less-than-perfect one. In most organizations I have worked at, this is a very uncomfortable concept to embrace.
The truth is that if you are not failing 70% of the time, you are probably not being aggressive or fast enough in your testing.
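To make that calibration concrete, here is a minimal sketch in Python. The 30% win rate comes from the Lens above; the twelve-tests-per-quarter cadence and the simulation itself are illustrative assumptions of mine, not part of the original argument.

```python
import random

# Assumed numbers for illustration only: a 30% win rate (from the Lens
# above) and a hypothetical cadence of 12 tests per quarter.
WIN_RATE = 0.3
TESTS_PER_QUARTER = 12

random.seed(7)  # fixed seed so the sketch is reproducible

def simulate_quarter() -> int:
    """Count the positive tests in one simulated quarter."""
    return sum(random.random() < WIN_RATE for _ in range(TESTS_PER_QUARTER))

# Ten simulated quarters: the win count swings widely even though the
# underlying program never changed.
print([simulate_quarter() for _ in range(10)])

# Losing streaks are normal, too: at p = 0.3, four failures in a row
# happen with probability 0.7**4, roughly 24% of the time.
print(f"P(4 straight failures) = {0.7**4:.2f}")
```

The point of the sketch: a quarter with only one or two wins is not evidence that the team is broken; it is exactly what a healthy 30% hit rate looks like up close.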
Running an A/B test on your product should not come with the expectation that the outcome will be positive; otherwise, why run the test in the first place? That mentality leads to "False Confidence".
Tests are not like exams. You don't need to know the answer to a test; you simply need to run it.
Ask yourself: Are we being aggressive enough, not only in the number and velocity of our tests, but also in what we are testing, or are we playing it too safe? Does the team share the understanding that we should be testing everything, and are they unafraid to fail?