May 17, 2019
Iterative Strategy Execution to Improve Product KPIs
Working from an existing product:
- Capture current metrics
- How are those being captured?
- Quantitative: Google Analytics? 3rd party metrics capture system?
- Qualitative: user surveys, support tickets, NPS and Likert-scale scores
- What are the current KPIs?
- Do they reflect and drive towards business goals?
- What is the cadence at which you capture and analyze/synthesize those? How do you know when you are on track?
- Quarterly? Weekly? Yearly?
- When does your organization change these KPIs and for what reason?
- When business goals change? Is this communicated transparently to the team? etc.
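One of the qualitative measures above, NPS, has a simple standard formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6). A minimal sketch, using made-up response values:

```python
# Sketch: computing a Net Promoter Score from raw 0-10 survey responses.
# The response values below are illustrative, not real data.

def nps(responses):
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return round(100 * (promoters - detractors) / len(responses))

scores = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3]
print(nps(scores))
```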
- Identify the metric you’d like to improve
- Does this metric drive business needs? (why do we want to improve this metric?)
- e.g. better conversion will lead to more revenue
- What is the historical data you have on this metric?
- This provides context on how the current solution is performing and whether the number has increased or decreased over time.
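Pulling that historical context can be as simple as summarizing the metric's first and latest values. A minimal sketch with hypothetical monthly conversion rates:

```python
# Sketch: summarizing historical data for the metric under review.
# The monthly conversion rates below are made-up example values.

history = {
    "2019-01": 0.021,
    "2019-02": 0.023,
    "2019-03": 0.019,
    "2019-04": 0.018,
}

months = sorted(history)
first, last = history[months[0]], history[months[-1]]
change = (last - first) / first * 100
print(f"Conversion moved from {first:.1%} to {last:.1%} ({change:+.1f}%)")
```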
- Strategize a plan for improving the metric:
- Hypothesize about why the metric is underperforming; these will be the hypotheses we test in our research phase
- Create a goal for the study
- I.e. to identify current pain points, test our hypotheses, and ideate possible solutions
- Identify the best methodologies for testing these hypotheses:
- Start with formative research (let's survey users for their current thoughts on and understanding of the existing solution)
- Then move on to a generative research phase (let's test our hypotheses, different designs, different copy, etc.)
- Different methodologies include:
- User survey (interview questions, Likert scales, SUS)
- Task based testing
- Preference testing
- First click testing
- 5 second testing
- Identify the sample size you’ll need to conduct this study (below are my usual sample sizes)
- User survey – 20+ participants
- Task based testing – 5+ participants
- Preference testing – 20+ participants
- First click testing – 20+ participants
- 5 second testing – 20+ participants
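The rule-of-thumb sizes above can be sanity-checked against the standard sample-size formula for a proportion. A sketch, assuming a 95% confidence level and the most conservative proportion (p = 0.5); the margins of error are illustrative:

```python
# Sketch: standard sample-size formula n = z^2 * p(1-p) / e^2,
# as a sanity check on the rule-of-thumb study sizes.
import math

def sample_size(margin_of_error, z=1.96, p=0.5):
    """Participants needed for a given margin of error, rounded up."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.20))  # a wide +/-20% margin, roughly the 20+ heuristic
print(sample_size(0.05))  # a tight +/-5% margin needs far more participants
```

The 20+ heuristic lines up with accepting a wide margin of error, which is usually fine for directional preference/first-click results.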
- Decide/plan the platforms and logistics of the studies; platforms I have used:
- Usabilityhub.com (great for fast general feedback)
- Usertesting.com (best overall but slower and more expensive)
- Respondent.io (great for very specific user groups)
- In-person usability testing
- In-person contextual interviewing
- In-person focus groups
- Create test script(s) for your study
- What are we assessing for? What are the best questions to assess it?
- Are the questions clear and non-leading?
- Do the questions get at the cause of the metric's underperformance?
- Ask one question at a time, avoid compound questions.
- Pilot the test(s) and make changes as needed
- Test the test script with 1 participant to make sure the questions are clear and are easily understood
- Set KPIs for your study!
- This is not a common practice, but it is one I like because it shows whether I am getting closer to the core problem or to solving it. This is where iterative testing comes into play. These test KPIs can be a preview of what a “live” KPI would be, and should be in line with the goal of the study. Some examples:
- Test until 90% of participants can complete the task without assistance or critical errors.
- Test until 90% of participants can accurately describe what the company does.
- Test until you receive an 8+ NPS.
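The iterate-or-stop decision these study KPIs drive can be sketched as a simple check of round results against targets. The target and result values are hypothetical:

```python
# Sketch: checking one study round's results against the study KPIs
# to decide whether another test iteration is needed.
# Targets and results below are hypothetical.

kpi_targets = {
    "task_completion_rate": 0.90,        # 90% complete without assistance
    "accurate_description_rate": 0.90,   # 90% can describe the company
}

round_results = {
    "task_completion_rate": 0.80,
    "accurate_description_rate": 0.95,
}

misses = {k: v for k, v in round_results.items() if v < kpi_targets[k]}
if misses:
    print("Iterate again; below target:", ", ".join(misses))
else:
    print("KPIs met; move to proposal")
```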
- Execute strategy
- Formative research
- Usually user surveys or focus groups:
- This is where you uncover common pain points, perceptions, misunderstandings, misinterpretations, wants, needs, what users value and don’t value about your product, etc.
- Generative/iterative research
- Usually task-based testing, preference testing, first click testing, etc.
- This is where you can test different versions of your product to see which one performs better (can be done with wires, hi-fi mocks, clickable prototype, demo site, etc)
- Synthesis
- Synthesize all testing results (usually in Google Sheets) and distill findings into your main KPIs
- Identify if you have met KPI; if not, test again.
- Proposal of improvements
- Propose the best tested solution to the team along with testing KPI readout
- Roadmapping improvements
- Add improvements to the product development roadmap based on user value/dev time matrix
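The user value / dev time matrix above can be reduced to a simple ratio for a first-pass ordering of roadmap candidates. A sketch with illustrative scores (value on a 1-10 scale, dev time in days):

```python
# Sketch: ordering roadmap candidates by a user-value / dev-time ratio.
# Improvements and scores below are illustrative, not real roadmap items.

improvements = [
    ("Simplify signup form", 8, 3),     # (name, user value 1-10, dev days)
    ("Rewrite onboarding copy", 6, 1),
    ("Redesign dashboard", 9, 15),
]

# Highest value-per-day first: quick wins surface, big bets sink.
ranked = sorted(improvements, key=lambda item: item[1] / item[2], reverse=True)
for name, value, days in ranked:
    print(f"{name}: value {value} / {days}d = {value / days:.2f}")
```

This is only a starting point; a real roadmap conversation would still weigh strategic fit and dependencies on top of the ratio.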