10 lessons from A/B testing

24 Aug


We implemented A/B testing in our product 6 months ago. Since then we have run a variety of A/B tests to generate insights into our users’ behaviour. We learnt a lot about our specific product. More generally, we learnt how to run valuable A/B tests.

Below is a Buzzfeed-esque TOP 10 LESSONS I LEARNT RUNNING A/B TESTS. It’s tips & tricks – plus things to avoid doing. It’s written from a product/BA perspective.




Lesson 1: A/B vs MVT testing


A/B and MVT testing are very similar. In fact, the terms are sometimes used interchangeably.

A/B and MVT tests both serve up different experiences to the audience and measure which experience performs best. They are both run with the same 3rd party tools (e.g. Optimizely, Maxymiser) and have similar experiment lifecycles.

The key difference between A/B and MVT tests is how many elements of the experience they vary.

A/B test

This is where you change one element of a page (e.g. the colour of a button). You might compare a blue button (challenger) against a red button (control) and examine what effect the button’s colour has on user behaviour. For example:

| Button Colour | Variant Name |
| --- | --- |
| Blue | Challenger |
| Red | Control |

Pros: Simple to build, faster results, easier to interpret results

Cons: Limited to one element of a user experience (e.g. button colour)

As a note – A/B tests aren’t limited to 2 variants. You could show a blue button, a red button, a purple button and so on; as long as you change only one element of the experience (button colour), it’s an A/B test.

MVT test

This is where you change a combination of elements. You might compare changing the button colour and its text label. You would test all combinations of those changes and see what effect they have on user behaviour. For example:

| Button Colour | Text Copy | Variant Name |
| --- | --- | --- |
| Blue | Click here | Challenger 1 |
| Blue | Click | Challenger 2 |
| Red | Click here | Challenger 3 |
| Red | Click | Control |

Pros: Greater insights, identifies the optimal user experience, more control

Cons: Longer to get results, more complex, requires more traffic
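To make the combinatorics concrete, here’s a minimal sketch (hypothetical element values, not tied to any particular testing tool) that enumerates every MVT combination; each extra element multiplies the number of variants you need traffic for.

```python
from itertools import product

# Hypothetical elements under test; each additional element multiplies the variant count.
elements = {
    "button_colour": ["Blue", "Red"],
    "text_copy": ["Click here", "Click"],
}

# Every combination of element values is one MVT variant (2 x 2 = 4 here).
variants = [dict(zip(elements, combo)) for combo in product(*elements.values())]

for i, variant in enumerate(variants, start=1):
    print(f"Variant {i}: {variant}")
```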

Which one to pick?

This depends on what you want to test & your testable hypothesis. In the early stages of running experiments you might start with A/B tests and then move onto MVT tests. This is because A/B tests are simpler to create & interpret. MVT tests are slightly more complex but provide greater product insight.

As an example: we ran an MVT experiment where we changed the promotional copy on a page and a CTA label. We thought both elements would impact the click-through rate. The winning promotional copy was the emotive copy and the best CTA was “Get started”. However, the optimal variant was the descriptive copy with “Get started”. Why? Perhaps because the tone of the two elements was more aligned. If we had run this as 2 A/B tests we wouldn’t have identified the optimal combination.


Lesson 2: Have a clear hypothesis


An experiment is designed to test a hypothesis. The purpose of an experiment is to make a change and analyse the effect. Tests need to have a clear reason and a measurable outcome.

When creating an A/B test it’s crucial to define a clear hypothesis. What is the problem you’re trying to solve? What are the success metrics? Why do you think this change will have an effect?

We use a variation of the Thoughtworks format to write testable hypotheses:

We predict that <change>

Will significantly impact <KPI/user behaviour>

We will know this to be true when <measurable outcome>

By having clearly defined hypotheses we can:

  1. Compare the merits of different hypotheses and select the most valuable one first. For example if hypothesis 1 predicts a 5% uplift in a KPI and hypothesis 2 predicts a 50% uplift in the same KPI, then we would test hypothesis 2 first.
  2. Agree the success metric upfront before starting development. For example if changing the mobile navigation is the test, what are the success metrics: more users clicking on the menu button, more items in the menu being clicked, increased usage and retention of brand new users? Having clear success metrics/goals is key when trying to identify the winning variant later on.
  3. Ensure the test is focussed on solving a user problem or improving a KPI that matters to the product. We don’t want to run tests simply because we can – they need to solve problems and offer benefits. The above format aligns each test with business KPIs/user problems.
  4. Make it incredibly easy for anyone to generate a hypothesis. The Thoughtworks format means that anyone in our team can generate a hypothesis. Some of the best ideas we’ve had are from “non-creatives” such as QA.

Note – we often put a “background” section with research in the testable hypothesis (e.g. how many people currently use a feature, industry average, user feedback etc).


Lesson 3: Forecast sample size


When designing an A/B experiment it’s crucial to calculate the sample size. You will need to forecast the sample size required to detect the MDE (Minimum Detectable Effect). This forecast will inform:

  1. Whether you can run the experiment (do you have enough users?)
  2. The maximum number of variants you can create
  3. What proportion of the audience will need to be in the experiment
  4. Potentially the experiment duration (e.g. it will take 2 weeks to get that many users)

There are several tools online to help you forecast, e.g. https://www.optimizely.com/resources/sample-size-calculator/. Without upfront forecasting you run the risk of creating an experiment that will never reach an outcome.

For example: imagine your product has 100k weekly users. You plug in the numbers and forecast that each variant requires 22k users to detect an effect size of 0.05. That means you should build no more than 4 variants, otherwise you won’t detect a significant result, and at least 44% of users need to be in the experiment for a single challenger (22% see the variant, 22% see the control). If the change is radical, these numbers may push you towards a single variant, because you don’t want to show the experiment/significant UX changes to a large proportion of the audience.
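If you want to sanity-check an online calculator, the classic fixed-horizon formula for a two-proportion test is easy to reproduce. Below is a minimal sketch in Python; the baseline conversion rate and MDE are purely illustrative, and tools such as Optimizely may use different (e.g. sequential) statistics.

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Approximate users needed per variant for a two-proportion z-test."""
    p1, p2 = baseline, baseline + mde          # control rate vs expected variant rate
    z_alpha = norm.ppf(1 - alpha / 2)          # two-sided significance level
    z_beta = norm.ppf(power)                   # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2)

# Illustrative numbers only: 10% baseline conversion, looking for a 2-point absolute uplift.
print(sample_size_per_variant(baseline=0.10, mde=0.02), "users per variant")
```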


Lesson 4: The more variants, the better

Optimizely ran an analysis of their customers’ successful A/B tests. What they found was interesting: the more variants run in an experiment (up to a limit), the more likely you are to find an effect. Why?

One reason is that if you ask UX to create 2 variants they may create two similar visuals. If you ask them to create 8, there might be greater differences between them. With only 2 variants it’s likely you’re playing it safe. The Optimizely results suggest running about 5 variants in a test.



Lesson 5: Implement a health metric

The purpose of a health metric is to ensure that an experiment doesn’t maximise one KPI (the experiment’s primary goal) to the detriment of other KPIs. Popular health metrics include: average weekly visits, content consumption, session duration etc. Essentially health metrics are key business KPIs you don’t want to see go down during an experiment. If the health metric fails, you pull the experiment early or do not release the winning variant.

For example: imagine you have 3 variants of a sign-in prompt. One variant of the prompt is non-dismissible. If your primary goal is to maximise sign-ins then this variant will win. However, the variant could be so annoying that it reduces overall user engagement with the product. Your health metric ensures you don’t maximise sign-ins to the detriment of core product KPIs (e.g. average weekly sessions).
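The guardrail logic itself can be very simple. Below is a rough sketch of the kind of check you might run while the experiment is live; the metric names, figures and tolerance are entirely hypothetical.

```python
# Flag any variant whose health metric drops more than an agreed tolerance below the control.
TOLERANCE = 0.05  # allow up to a 5% relative drop before raising the alarm

control = {"avg_weekly_sessions": 3.1}
variants = {
    "dismissible_prompt": {"avg_weekly_sessions": 3.0},
    "non_dismissible_prompt": {"avg_weekly_sessions": 2.4},
}

for name, metrics in variants.items():
    for metric, baseline in control.items():
        relative_drop = (baseline - metrics[metric]) / baseline
        if relative_drop > TOLERANCE:
            print(f"HEALTH CHECK FAILED: {name} - {metric} down {relative_drop:.0%} vs control")
```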

In our case – the BA worked with stakeholders/the product owner to identify & track the health metrics. The health metrics will vary depending on the product.



Lesson 6: Get management buy-in

Based on experience, I recommend getting management buy-in early on. A/B testing is a significant culture change. It challenges the idea that a Product Owner/UX/Managers know what the best user experience is. It replaces gut decisions with data-driven decisions. Essentially A/B testing can transition a team from a HIPPO culture (Highest Paid Person’s Opinion) to a data-driven culture.


To get management buy in for A/B testing there’s a variety of tactics:

  1. Ensure the 1st A/B test you run offers real business value. Don’t run a minor/arbitrary change as your 1st test. Try to solve an important problem or turn the dial on a key business KPI. Even better if the result might challenge existing beliefs.
  2. Reiterate the benefits of A/B testing. These include:
    • Increasing collaboration by empowering the team to generate their own hypotheses, which can be delivered as “small bets”
    • Increasing openness by encouraging a data-driven culture to decision making, rather than a HIPPO culture
    • Increasing innovation by learning more about user behaviour and adapting the product
    • Increasing innovation because delivering changes to a sub-set of the live audience means you can experiment more and take more risks
    • Challenging assumptions and decisions to create a more valuable product. Gut feelings can be wrong
    • Small bets are better than big bets. They are less risky & can have significant user benefits
    • Empowering the team to improve the quality of solutions
  3. Create experiments in collaboration with the entire team so that it’s not seen as a threat to the PO/UX
  4. Create a fun testing environment. Get people to place bets on the winner.


Lesson 7: Assumptions can be wrong


We’ve had several examples of where our assumptions about user behaviour were wrong.

Our 1st A/B test was a prompt. We thought it would increase usage of a new service. We were so confident in it that we were going to make the in-app notification a re-usable component. We actually had 3 more prompts on the roadmap.

What did we find out with an A/B test? The prompt significantly reduced general usage of the app. It was a dramatic drop in usage. The results challenged our assumptions and changed our roadmap.

By having a control group that we could compare against & by serving the experiment to a sub-set of the audience we were able to challenge our assumptions early & with a relatively small subset of users.
We never put the prompt live. Test your assumptions.


Lesson 8: Broadly it’s a 6-step process

This is a slight simplification – below is the typical lifecycle of an experiment.


STEP 1 – Business goals

Identify the business goals (KPIs) and significant user problems for your product.

STEP 2 – Generate hypotheses

Generate testable hypotheses to solve these goals/problems. Prioritise the most valuable tests.

STEP 3 – Create the test

  • Work with UX & developers to create n variants
  • Forecast the number of users required for the MDE
  • Decide on traffic allocation (e.g. 50% see A, 50% see B); see the bucketing sketch after this list
  • Identify target conditions (e.g. only signed in users, only 10% of users)
  • Implement conversion goals (one primary and optional secondary goals)
  • Implement the health check
  • Set the statistical significance level
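Third-party tools handle traffic allocation for you, but one common approach behind it is deterministic bucketing: hash the user ID so the same user always lands in the same variant. A minimal sketch, with hypothetical experiment names and weights (22% control, 22% challenger, everyone else excluded):

```python
import hashlib

def assign_variant(user_id, experiment, weights):
    """Deterministically bucket a user: the same user + experiment always gets the same variant."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a stable number between 0 and 99 for this user
    threshold = 0
    for variant, share in weights.items():
        threshold += share
        if bucket < threshold:
            return variant
    return None  # user falls outside the experiment's traffic allocation

# Hypothetical allocation mirroring the Lesson 3 example: 22% control, 22% challenger.
weights = {"control": 22, "challenger": 22}
print(assign_variant("user-123", "signin-prompt", weights))
```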

STEP 4 – Run the experiment

  • Run the experiment for at least 1 business cycle
  • Actively monitor it
  • Potentially ramp up number of users

STEP 5 – Analyse results

  • Review the performance of variants
  • Analyse the health check
  • Identify the winner (see the analysis sketch below)
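The testing tool reports significance for you, but it’s worth knowing what sits behind “identify the winner”. A minimal sketch of a two-proportion z-test; the conversion counts below are illustrative only.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conversions_a, users_a, conversions_b, users_b):
    """Two-sided p-value for 'variant B converts differently to variant A'."""
    rate_a, rate_b = conversions_a / users_a, conversions_b / users_b
    pooled = (conversions_a + conversions_b) / (users_a + users_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (rate_b - rate_a) / standard_error
    return 2 * (1 - norm.cdf(abs(z)))

# Illustrative numbers: 22k users per variant, 10% vs 11% conversion.
p_value = two_proportion_z_test(2_200, 22_000, 2_420, 22_000)
print(f"p = {p_value:.4f}")  # compare against the significance level agreed up front
```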

STEP 6 – Promote the winner

  • Promote the winner to 100% of the audience
  • Learn the lessons
  • Archive the experiment


Lesson 9: Make testing part of the process


When we started A/B testing we committed to running 3 tests in the first quarter. It was a realistic target. It meant we were either developing a test or analysing the results of one (tests typically ran for 2 weeks). The more tests we ran, the easier they were to create.

Getting into a regular cycle is important in the early stages. For any feature or change you should ask “Could we A/B test that?”

I have seen several teams “implement A/B testing” and only run 1-2 tests. The key to getting value from A/B testing is to make it part of the product development lifecycle.


Lesson 10: There’s a community out there …

There are a huge number of resources out there.



I learnt a huge amount from Olivier Tatard, Sibbs Singh, Sam Brown, Toby Urff and the folks at Optimizely. Big thanks also to the rest of the app team, we all went on the journey together.

If you made it down this far then you get 10 bonus points.


Applying Build, Measure, Learn to Sprint Demos

25 Jul


Like most Scrum teams, we held a “Sprint Review Meeting” every two weeks. We would gather as a team to demo what was recently built & receive feedback. Although it was a great opportunity to showcase recent work, we identified a number of problems with Sprint Review Meetings for our mature product:

  1. Stakeholder attendance was poor. Stakeholders saw the Sprint Review Meetings as a technical show & tell. The demos often didn’t work fully & business value wasn’t necessarily communicated.
  2. Because developers demoed the work, it put disproportionate pressure on the development team. We presented recent work & we often had problems with test environments/connections/mock data etc.
  3. More generally – the development team wanted regular updates from the product team. Our retros identified a need for the product team to provide regular updates about recent features; did a recently released feature meet our hypothesis? What did we learn? Will we iterate? How did it impact our quarterly OKRs?
  4. Sprint Review Meetings felt like a conveyor belt. We would demonstrate work, get feedback about quality, and then watch it leave the factory. But we wanted to learn how customers actually used the new product. We wanted external as well as internal feedback.


Build, Measure, Learn (BMLs) sessions

To address the above issues, we replaced Sprint Review Meetings with “Build, Measure, Learn” sessions. As advocates of the Build, Measure, Learn approach – we were keen to review recently released features with the team. We launched features every 2 weeks – so the natural cadence was to report on features at the end of the following Sprint.

We created “Build, Measure, Learn” sessions. The basic format is simple:


  • Every 2 weeks, at the end of the Sprint. Replaces the Sprint Review Meeting.
  • Team (Product, Devs, UX) & Stakeholders.
  • 1 hour.


The session is divided into two sections:

  1. Build = demo from the development team about what was built during the Sprint. It’s a chance to get feedback from the Product Owner/Stakeholders.
  2. Measure/Learn = product reporting back on stats/usage/insights of recently launched features. Typically on features & changes launched 2 & 4 weeks ago. This provides an external feedback loop.

The Measure/Learn section became as valuable as the demo section. It also provided practical breathing space for setting up/fixing demos – if we had problems we would start off with the Measure/Learn section 😉


Build section

As with the Sprint Review meeting – this section was the development team demoing what was built during the Sprint.

This was an opportunity for product/stakeholders to provide feedback and ask any questions. Changes were noted by the BA and put on the product backlog.

It was also an opportunity to praise the team & celebrate success.


Measure/Learn section

In the Measure/Learn section the BA or Product Owner would cover the following areas:

  1. General product performance: how we are performing against quarterly goals/OKRs
  2. For each recently released feature:
    • Present the testable hypothesis
    • Present the actuals. Key trends/unexpected findings/verbatim feedback from the audience about the feature
    • Present key learnings/actions: Build a v2/pivot/stop at v1/kill the feature?
  3. Wider insights (optional):
    • Present recent audience research/lab testing
    • Present upcoming work that UX are exploring & get feedback on it



We found that BML sessions were a great replacement to Sprint Review Meetings. They ensured we kept the measurement & learning part of the lifecycle front and center in the team. The Measure/Learn section also ensured we reported back on business value regularly.

Main benefits:

  1. Learnings/insights about recently released features were shared with the team – this kept us focused on our original hypotheses and business value. It enabled us to discuss the learnings based on external audience feedback.
  2. Encouraged a shared sense of ownership about the end of Sprint session and the performance of features
  3. Increased stakeholder attendance & stakeholder engagement as there was a focus on audience feedback and KPIs
  4. We were still able to demo the newly developed features & get Product Owner/Stakeholder feedback

Simpsons humour

15 Jul


How Might We … brainstorm ideas

13 Jul



“How Might We …” is a group brainstorming technique we have used for more than 6 months to solve creative challenges. It originated with Basadur at Procter & Gamble in the 1970s, and is used by IDEO/Facebook/Google/fans of Design Thinking.

“How Might We …” is a collaborative technique to generate lots of solutions to a challenge. Our team modified the technique slightly to ensure that we also prioritise those solutions. More on that below …

In essence “How Might We …” frames problems as opportunity statements in order to brainstorm solutions. For example:

  • How Might We promote our new service to the audience?
  • How Might We improve our membership offering?
  • How Might We completely re-imagine the personalisation experience?
  • How Might We find a new way to accomplish our download target?
  • How Might We get users excited & ready for the Rio Olympics?

How Might We works well with a range of problem statements. Ideally the question shouldn’t be too narrow or broad.



How Might We sessions involve a mixture of participants: product (Product Owner/BA), technical (Developers/Tech Lead/QA) and stakeholders. The duration is 1 – 1.5 hours.

The format is:

  1. Scene setup (background/constraints/goals)
  2. Introduce the question (How Might We …)
  3. Diverge (generate as many solutions as possible)
  4. Converge (prioritise the solutions)


1. Scene Setup

Scene setup is about introducing the background, constraints, goals & ground rules of the How Might We session.

For example we held a session about: “How Might We get app users excited & ready for the Rio Olympics?” We invited 10 participants across product, technical and stakeholder teams. For 5 minutes we set up the scene. As part of scene setup:

  • Background: Rio 2016 is the biggest sporting event. We expect record downloads & app traffic. There will be high expectations. There will be hundreds of events & hours of live coverage.
  • Constraints: We want to deliver the best possible experience without building a Rio specific app.
  • Session goal: Generate ideas for new features & to promote current features.
  • Commitment: We will take the best ideas forward to explore further.


2. Introduce the question

The How Might We question is presented to participants and put on a wall/physical board.

The question shouldn’t be too restrictive; wording is incredibly important. Check the wording with others before the session. We circulate the question to participants ahead of the session – this allows them to generate some solutions before the meeting.

Framing the question in context/time will help. It makes the problem more tangible. For example:

“It’s 3 days before the Olympics. How Might We get users excited & ready for the Rio Olympics?”


3. Diverge

Use a technique like crazy 8’s to generate ideas. Give people 5-10 minutes to think of many solutions to the question.

These solutions are typically written on post-it notes. At the end of 10 minutes we ask each participant to stand up and present their post-it note ideas to the group. Participants explain their ideas; common ideas are grouped together. For example:

[Image: example post-it note ideas]

With 10 participants you can generate 50–80 ideas. Once ideas are grouped together you might have 20–30 unique ideas.


4. Converge

We ask people to pick their favourite idea. It can be their own idea, or another person’s post-it note idea.

For 10-15 minutes they explore that idea in more detail. Participants can add notes/draw user flows/write a description about the idea.

At the end of that time, each participant is asked to present their idea back to the group. For example:

[Image: example of an explored idea]

Once each participant has presented their idea (10 people = 10 ideas), participants are invited to dot vote. Each participant has 3 votes to select their favourite 3 ideas.

Typically this is where a HMW ends ….

BUT we would often find ourselves in a position where the top voted idea was the most difficult to implement. The top ideas were often elaborate & had a cool factor – but were very complicated to build/offered limited business value. For example: “We could build VR into the app. It would offer all sports in immersive 3D and recommend videos based on the user’s Facebook likes”.

AND we found that stakeholders weren’t comfortable having an equal say (3 dot votes) to QA/developers in terms of the product proposition.

SO we implemented a further step to converge on more realistic options. We took the top voted ideas + any ideas that stakeholders were particularly keen on from the How Might We session. We allowed UX to explore these ideas in more detail. An example of a more refined idea is an Olympics branded menu:

[Image: Olympics-branded menu concept]

We took these ideas into the prioritisation session.



With the more refined ideas we held a prioritization session with the key stakeholders (product owner, tech lead, primary stakeholders).

As a group we would rank these ideas in terms of business value and technical complexity (1-5). The business value was driven by a KPI or agreed mission. The technical complexity was an estimate of effort.

Complexity: 5 = hard, 1 = easy

Impact: 5 = high impact, 1 = low impact

We would end up with a relative ranking of the top ideas. For example:

[Image: cost/value ranking example]

The top left quadrant is tempting (high impact, low effort). The bottom right quadrant is not tempting (low impact, high effort).
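If you want to make the ranking mechanical, a small sketch helps; the scores below are entirely hypothetical, and the idea names simply echo the examples mentioned above.

```python
# Hypothetical 1-5 scores from the prioritisation session.
ideas = {
    "Olympics branded menu": {"impact": 4, "complexity": 2},
    "VR immersive sport": {"impact": 2, "complexity": 5},
    "Another idea": {"impact": 4, "complexity": 3},
}

def quadrant(impact, complexity):
    if impact >= 3 and complexity <= 3:
        return "tempting (high impact, low effort)"
    if impact < 3 and complexity > 3:
        return "not tempting (low impact, high effort)"
    return "discuss"

# Rank by the gap between impact and complexity: the bigger the gap, the better the bet.
ranked = sorted(ideas.items(), key=lambda kv: kv[1]["impact"] - kv[1]["complexity"], reverse=True)
for name, score in ranked:
    print(f"{name}: impact {score['impact']}, complexity {score['complexity']} -> {quadrant(**score)}")
```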

We used the relative weightings & dot voting to select the best idea. We would go on to shape & build the best idea.

User story mapping

5 Dec

What is user story mapping?

User story mapping is a workshop technique that creates a visual representation of product requirements, and orders them based on priority, user theme & time.

The output of user story mapping is a product roadmap that clearly conveys the context of each user story, the horizontal slices of the MVP, and the aspirational scope of the product.

The technique has become popular through the work of Jeff Patton. An example user story map is here:

[Image: example user story map for a shoes website]

Reasons to use this technique

The early stages of the new product lifecycle can be a challenging time for Business Analysts. Considerable BA effort can be spent:

  • Capturing initial requirements (e.g. User Stories) & creating a structure around them (e.g. EPICs)
  • Capturing the overall product scope for the new product
  • Prioritising requirements into an MVP and subsequent releases
  • Understanding where to focus the analysis effort (i.e. which areas to spec out first)

User story mapping will help you visualise the requirements & their context within the overall product roadmap. You will create a visual overview of the roadmap.

How to run a USM session

Organise a workshop with the key stakeholders: product, UX, technology, internal SMEs etc. Ideally you want no more than 10 people (7 ± 2 is the magic number).

Bring post-it notes, pens and plenty of sweets.

Step 1 (kickoff & context)

The BA/PO kicks off the session. They present a summary of the product vision. What problems does the product need to solve? What audience insight do we have? What are our known constraints? What are our competitors doing? What are our aspirations?

An example would be “We want to build a website that sells shoes to teenagers. We have evidence that XYZ opportunity exists and we want to focus on the teenage market, etc.”

This should take 10 – 20 minutes.

Step 2 (identify user tasks)

Each stakeholder should (in silence) write down on post-it notes how a user will use this new product. What do users need & want to be able to do?

An example would be: “search for shoes”, “browse a website hierarchy”, “view recommended products”, “add to a basket”, “share on Facebook” etc.

As a tip to ensure you’re capturing the correct detail – they should be written from the user’s perspective and usually start with a verb (e.g. add, share, browse, buy etc). These are the things your users will do with the product.

This should take 10 minutes (depending on the scope of the product).

Step 3 (remove duplicate tasks)

The BA/PO should put the post-it notes on a physical board. Talking about each post-it note, the author should describe what they’ve written and why.

During this process duplicate post-it notes should be removed.

You will end up with something like this:

[Image: user tasks on the board]

These are your USER TASKS (i.e. things a user does to achieve an objective). They form the walking skeleton.

This should take 15 minutes.

Step 4 (identify user activities)

With the post-it notes on the board, start to organise them into clusters based on similarity. Assign each cluster a name.

An example would be: “find a product“, “view a product“, “buy a product“ etc. Where “find a product“ includes “search on site”, “browse a website hierarchy”, “view recommended products” etc.

Order these clusters according to the order in which a user would complete them.

For example: find a product > then view a product > then add a product > then buy a product

Note – if the post-it notes can’t be ordered from a user perspective, then you can order them based on when you plan to pick up the work. Ideally you should organise the items from the user’s perspective, so that reading the activities from left to right tells a story and represents a narrative.

[Image: user activities formed from the clustered tasks]

These clusters are your USER ACTIVITIES. They form the backbone.

This should take 15 minutes.

Step 5 (confirm all activities & tasks are captured)

With the USER ACTIVITIES and USER TASKS on the board – explore the user journey and see whether you have captured all the activities and tasks. Thinking in terms of user personas can assist. Drill down into the activities to see if there’s additional detail you can add.

Use this as a time to confirm that you’ve captured the scope of the system. If there are activities or tasks that don’t fit into the main narrative (e.g. user can localise the website language), add these items to the end of the map.

This should take 20 minutes.

Step 6 (identify user stories)

The final level of detail for the map is USER STORIES. I haven’t encountered a body-related synonym for user stories – continuing with the theme of walking skeleton & backbone, we could suggest “body of work”?

Each USER TASK will have user stories. Explore the user tasks in additional detail – capture and frame that detail as user stories (each user story requires a title, a description, and any relevant detail).

For example within the “search on site“ user task, there could be a user story to:

  1. Search based on keyword (AS A user, I WANT to search for shoes using keywords such as Nike, Jordan, trainers SO THAT I can find my desired product quickly)
  2. Search based on colour
  3. Filter by brand
  4. Search based on shoe size

The user stories are designed to be a placeholder for a conversation – a snapshot of what the user needs and why. Don’t try to add too much detail. They should meet the INVEST criteria.

This step can take some time, depending on the product. Either take a break before starting the step, or run it as separate sessions.

Step 7 (prioritise user stories)

With the product team steering this, work as a team to put user stories in a priority order, from highest priority to lowest priority. This can be a difficult task – compare the relative value of each user story to another.

Now draw a horizontal line to slice the roadmap. This line represents the minimum viable product (MVP); you can draw lines for additional versions of the product too.

[Image: MVP slice drawn across the user story map]

Step 8 (next steps)

You should now have all the user stories defined, prioritized and organized by theme.

The BA can take away each user story and work on them separately, adding acceptance criteria/detail in order to allocate them into a Sprint. The great thing about this technique is that each user story now sits within the context of a user journey.

[Image: completed user story map for the shoes website]

Benefits of user story mapping

+ It visualises the product backlog – a USM is easy to present & understand compared to a list of JIRA tickets. A picture is worth a thousand JIRA tickets 🙂 You can put it on a Sprint board.

+ It radiates information including: the context of items, the relative priority of each item, the groupings of work and the MVP.

+ It’s hierarchical – it covers the high level themes and the more granular user stories. You can see the big picture, the finer detail and the aspirational vision.

+ It encourages valuable slicing of functionality – the lines drawn from left to right slice the backlog so that you don’t build all of the search stories and then run out of money before you develop the buy stories.

+ It organizes the roadmap based on how users will use the new product. By focusing on user behaviour rather than how systems interact, you build a product around the user needs.


  • Can this be applied to any type of product?
    • I think so. Provided it has end users. I’ve applied the technique to a feature which allowed users to sync their favourite items. On the surface of it, the feature was essentially a read/write capability – but we were able to convey it as a set of user tasks.
  • Would you ever create multiple user story maps for one product?
    • I have never done this. If your product is the Amazon website then you may need multiple maps – but hopefully your product isn’t that large.
  • Are there any downsides?
    • The user story map doesn’t replace your requirements/JIRA tickets. It’s essentially a way to visualise them within the context of the roadmap. As the system changes you’ll need to update the USM as well as your requirements/JIRA. As such, it’s a product artefact that needs maintaining.
  • What if you have no idea what the product is?
    • The technique generally assumes there is a product vision around which to hang these tasks/stories. If you have very little idea about the new product (e.g. “we want to build a page on the website where users can see things that most interest them”) then you can still use this technique. I’d recommend some form of upfront thought about what you want that page to do by the product team & a warm up activity for the group before step 2 (e.g. crazy 8’s).
  • Aren’t user tasks just another type of user story?
    • User tasks can be thought of as high-level user stories e.g. “searching for a product“ is a task that can include multiple user stories (search by keyword, search by shoe size, filter by brand, search by colour). The essence of activities vs tasks vs stories is that they’re hierarchical user needs. Exactly what they are depends on what product you build.

Product exercises – postcard from the future etc

7 Apr


Below is a set of exercises I have run or participated in – that are intended to facilitate product discovery & product development early in the project lifecycle. These exercises help teams generate a product vision and solicit high-level features/themes around which a prioritised product backlog can be generated.

Postcard from the future


  1. Split the group of participants into pairs. Participants include senior project stakeholders (UX, product, tech etc). Try to create cross discipline pairs.
  2. Assign each pair a user persona. User personas are designed to provide insight into your market by identifying how different types of consumer will use your product. An example persona which you could assign to a pair would be “Mobile Mark. 24 yrs old London professional with the latest iPhone. Likes new technology. Regularly checks Twitter etc …”
  3. Ask each pair to write a postcard from the future as their assigned persona. An example would be “imagine we have just launched our new responsive website. Mobile Mark is a tech-savvy user. He is so happy with our new website that he has decided to write a postcard to our team.” A pair would then write a postcard as Mark – about why he loves the new responsive website.
  4. Once the postcards have been generated – ask each pair to introduce their assigned persona to the wider group and present that persona’s postcard from the future. Allow the wider group to ask any follow-up questions.

Note – I have also run this session with Tweets/Vines from the future. The Twitter format is particularly useful for capturing a concise summary – often with the single most important feature. The Vine format is an interesting variant, it allows people to be more creative and tell a story (e.g. “Mark gets on the bus, checks the website, jumps up and down in excitement and spills his drink, Tweets his friend”).





This exercise is useful for:

  1. Getting senior project stakeholders to think about the product from a persona’s point of view. It’s easy for stakeholders to think of features from their own viewpoint – this exercise allows senior project stakeholders to empathise and create a vision from their users’ perspective.
  2. Identifying “warm fuzzy features”. It’s a great technique for discovering features that would delight your users, e.g. “every time Man Utd score, Mark’s phone cheers, otherwise it boos”. The format and persona-based nature of the exercise encourage creative and out-of-the-box thought.

Hopes and fears


  1. The facilitator will introduce the project goal to the stakeholders e.g. “We are planning to create an Apple watch app. Here are several potential designs. Before we proceed any further – we want to ask everyone for feedback about their hopes and fears for this project”. Stakeholders in this context are anyone who has an interest in the success of the project.
  2. The facilitator will ask stakeholders to write down their hopes and fears in silence. These hopes/fears are then placed on a board.
  3. Once all the feedback has been captured on the board – the facilitator will talk through the hopes and fears. The facilitator will: seek clarity on specific points, encourage an open dialogue on each item, seek agreement and identify common themes.
  4. The hopes and fears will then be taken away and captured in the project space (e.g. a wiki space).


  1. This exercise allows stakeholders to convey their aspirations and perceived risks early in the project lifecycle. It reduces project risk and builds consensus.
  2. Individual hopes can form the basis of high-level themes or features. For example “I hope we can provide content that is useful to a user’s context”. This hope may become a feature to “vary the content/format depending on time of day”.

Now, next and later


  1. The facilitator will hold up individual features (features are prepared in advance). They will describe each feature and encourage an open dialogue with the audience. The audience is typically senior project stakeholders.
  2. For each feature, detail and clarity will be added through discussion. The audience will be asked to assign each feature to either the now, next or later column. If too many items are added to the Now column then the facilitator should challenge this.
  3. Once all features have been presented to the audience – the facilitator will review the content in each column. The facilitator will confirm whether each column contains a logical slice of features and verify consensus within the group. Items can be moved. Particular emphasis should be given to the Now column – because these features could become the 1st phase of the project.


  1. Primarily used for early product prioritization. The “Now” features can become the MVP. The exercise produces the phases required to achieve a vision.
  2. The Now, next and later technique is similar to MoSCoW prioritisation – however it offers a slightly fresher format.

1st published BA cartoon

7 Oct

Happy to announce that Brian the BA has reached a wider audience: