Joe Tuan
Founder, Topflight Apps
May 17, 2019

Iterative Strategy Execution for Improving Product KPIs

Working from an existing product:

  1. Capture current metrics
    1. How are those being captured?
      1. Quantitative: Google Analytics? A third-party metrics capture system?
      2. Qualitative: user surveys, support tickets, NPS/Likert scores (a quick NPS sketch follows this item)
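
Since NPS feeds several of the checkpoints below, here is a minimal sketch of computing it from raw 0–10 responses (my own illustration; the sample scores are hypothetical). Promoters answer 9–10, detractors 0–6, and NPS is the percentage of promoters minus the percentage of detractors:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score from raw 0-10 'how likely are you to recommend us?' answers."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 10, 9, 3]))  # 25.0 -> 4 promoters, 2 detractors out of 8
```
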
  2. What are the current KPIs?
    1. Do they reflect and drive towards business goals?
    2. At what cadence do you capture and analyze/synthesize them? How do you know when you are on track?
      1. Quarterly? Weekly? Yearly?
    3. When does your organization change these KPIs, and for what reason?
      1. When business goals change? Is this communicated transparently to the team? Etc.
  3. Identify the metric you’d like to improve
    1. Does this metric drive business needs? (Why do we want to improve this metric?)
      1. E.g., better conversion will lead to more revenue.
    2. What historical data do you have on this metric? (see the trend sketch below)
      1. This provides context on how the current solution is performing and whether the number has increased or decreased over time.
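
A minimal sketch of pulling that historical context with pandas, assuming the metric’s history is exported to a CSV; the file name and column names (`date`, `conversion_rate`) are hypothetical:

```python
import pandas as pd

# Hypothetical export of the metric's history; file and column names are assumptions.
history = pd.read_csv("conversion_history.csv", parse_dates=["date"])

monthly = history.set_index("date")["conversion_rate"].resample("M").mean()
print(monthly.tail(6))          # the metric's recent level
print(monthly.diff().tail(6))   # month-over-month change: spot increases or decreases
```
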
  4. Strategize a plan for improving the metric:
    1. Hypothesize about why the metric is performing poorly; these are the hypotheses we will test in our research phase.
    2. Create a goal for the study
      1. E.g., to identify current pain points, test our hypotheses, and ideate possible solutions
    3. Identify the best methodologies for testing these hypotheses:
      1. Start with formative research (let’s survey users for their current thoughts on and understanding of the existing solution)
      2. Then move on to a generative research phase (let’s test our hypotheses: different designs, different copy, etc.)
      3. Different methodologies include:
        1. User surveys (interview questions, Likert scales, SUS)
        2. Task-based testing
        3. Preference testing
        4. First-click testing
        5. 5-second testing
    4. Identify the sample size you’ll need to conduct this study (below are my usual sample sizes; a margin-of-error sanity check follows this list)
      1. User survey – 20+ participants
      2. Task-based testing – 5+ participants
      3. Preference testing – 20+ participants
      4. First-click testing – 20+ participants
      5. 5-second testing – 20+ participants
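
These are rules of thumb, not statistical guarantees. As a rough sanity check (my addition, not part of the original process), the normal-approximation margin of error for a proportion measured on n participants suggests why quantitative methods want 20+ people, while 5 is enough to surface qualitative usability problems:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error (normal approximation) for a proportion p measured on n participants."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (5, 20, 50):
    print(f"n={n:>2}: +/-{margin_of_error(n):.0%}")
# n= 5: +/-44%
# n=20: +/-22%
# n=50: +/-14%
```
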
    5. Decide on the platforms and logistics of the studies. Platforms I have used:
      1. Usabilityhub.com (great for fast general feedback)
      2. Usertesting.com (best overall but slower and more expensive)
      3. Respondent.io (great for very specific user groups)
      4. In-person usability testing
      5. In-person contextual interviewing
      6. In-person focus groups
    6. Create test script(s) for your study
      1. What are we assessing? What are the best questions to assess it?
      2. Are the questions clear and non-leading?
      3. Do the questions drive at the cause of the metric failure?
      4. Ask one question at a time, avoid compound questions.
    7. Pilot the test(s) and make changes as needed
      1. Test the script with one participant to make sure the questions are clear and easily understood.
    8. Set KPIs for your study!
      1. This is not a common practice, but it’s one I like because it shows whether I am getting closer to the core problem or to solving it. This is where iterative testing comes into play. These test KPIs can be a preview of what a “live” KPI would be, and they should be in line with the goal of the study. Some examples (a threshold-check sketch follows this list):
        1. E.g., test until 90% of participants can complete the task without assistance or critical errors.
        2. E.g., test until 90% of participants can accurately describe what the company does.
        3. E.g., test until you receive an NPS of 8+.
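
A minimal sketch of that stopping rule, assuming hypothetical KPI names and thresholds; the point is that each test round either clears every threshold or sends you back to revise and retest:

```python
# Hypothetical study KPIs; names and thresholds are illustrative.
STUDY_KPIS = {
    "task_completion_rate": 0.90,        # complete without assistance or critical errors
    "accurate_description_rate": 0.90,   # can describe what the company does
}

def kpis_met(results: dict, kpis: dict) -> bool:
    """True only when every study KPI meets or beats its threshold."""
    return all(results.get(name, 0.0) >= target for name, target in kpis.items())

round_1 = {"task_completion_rate": 0.72, "accurate_description_rate": 0.95}
print(kpis_met(round_1, STUDY_KPIS))  # False -> revise the design and test again
```
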
  5. Execute the strategy
    1. Formative research
      1. Usually user surveys or focus groups:
        1. This is where you uncover common pain points, perceptions, misunderstandings, misinterpretations, wants, needs, and what users value and don’t value about your product.
    2. Generative/iterative research
      1. Usually task-based testing, preference testing, first-click testing, etc.
        1. This is where you test different versions of your product to see which one performs better (can be done with wireframes, hi-fi mocks, a clickable prototype, a demo site, etc.)
  6. Synthesis
    1. Synthesize all testing results (usually done in Google Sheets) and distill findings into your main KPIs (a synthesis sketch follows this item)
      1. Identify whether you have met the KPI; if not, test again.
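
The same synthesis can be scripted instead of done in a spreadsheet. A minimal sketch with pandas, using hypothetical per-session results (column names are illustrative, not a real platform export):

```python
import pandas as pd

# Hypothetical per-participant session results for one task.
sessions = pd.DataFrame({
    "participant": [1, 2, 3, 4, 5],
    "completed_unassisted": [True, True, False, True, True],
    "critical_errors": [0, 0, 2, 1, 0],
})

# A session counts as a success only if it was unassisted and error-free.
success = sessions["completed_unassisted"] & (sessions["critical_errors"] == 0)
print(f"completion rate: {success.mean():.0%}")  # 60% -> below the 90% KPI, so test again
```
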
  7. Proposal of improvements
    1. Propose the best-tested solution to the team along with the testing KPI readout
  8. Roadmapping improvements
    1. Add improvements to the product development roadmap based on a user-value/dev-time matrix (a prioritization sketch follows)
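
A minimal sketch of that prioritization, treating the matrix as a value-per-dev-week ratio; the improvements and scores are hypothetical placeholders:

```python
# Hypothetical improvement candidates scored by the team.
improvements = [
    {"name": "Simplify signup copy",  "user_value": 8, "dev_weeks": 1},
    {"name": "One-page checkout",     "user_value": 9, "dev_weeks": 6},
    {"name": "Saved payment methods", "user_value": 6, "dev_weeks": 3},
]

# Highest user value per week of dev time goes to the top of the roadmap.
for item in sorted(improvements, key=lambda i: i["user_value"] / i["dev_weeks"], reverse=True):
    print(f"{item['name']}: {item['user_value'] / item['dev_weeks']:.1f} value/week")
```
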
