
User Story Smells

24 Jan


“User story smells” is a term used by Mike Cohn in User Stories Applied. It describes anti-patterns that crop up when writing user stories. Cohn provides a number of story smells in the book.

With 9 years of Business Analysis experience, I decided to write up my top 10 story smells, based on my own observations. I’ve even created a game for people to try.



Smell 1 – Everything in a Sprint should be written as a user story

This seems to happen with less experienced agile teams. They use the story format for everything in a Sprint (e.g. As a developer … I want … So that).

Why is it bad? User stories are written from the perspective of end users. They ensure what you build is anchored on a user need. Technical tasks can be sub-tasks of user stories (preferred option), or just tasks that need to be done to keep the lights on (e.g. renew a cert).

User stories are one type of item in the product backlog. Other types of items include: bugs, tasks, epics and spikes. Items can be in a Sprint without being user stories. Don’t spend time working out how a technical sub-task can fit into the user story format.


Smell 2 – Stories should be sliced by technology layer, because that’s how our development team will approach them

Teams can have different groups of developers (e.g. front end and backend developers). There can be pressure to slice stories accordingly, because each story will be done by a different development team. Another reason is that breaking it down by technology layer removes a dependency on other developer teams. This is an artefact of how the development team is split.

The problem with this approach is that technology slices do not produce a valuable deliverable for the end user. The front end slice must plug into the backend to add value. Vertical slices of functionality are preferred to horizontal technology slices. Vertical slices are much more likely to be potentially shippable.



Smell 3 – Stories don’t need acceptance criteria

This is a strange one – I’ve seen it before. The idea is that the BA/product team should not solutionise. They should present the user need/story to the developer and not come with a list of acceptance criteria/constraints.

The problem is – you need a clear outcome for a story. And there are often clear requirements from the business, or constraints to be considered. Just writing AS A … I WANT … SO THAT and leaving out the acceptance criteria means you won’t know when a ticket is done. It’s not specific enough.

Collaborative specifications, or collaborative specification reviews (e.g. 3 Amigos), address this. Stories have to have acceptance criteria in order to be testable and closeable.
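To make this concrete, here is a minimal sketch – the story, the `filter_by_size` function and the catalogue data are all invented for illustration – of how a written acceptance criterion gives the team a clear pass/fail condition:

```python
# Hypothetical story: AS A shopper I WANT to filter shoes by size
# SO THAT I only see shoes I can actually buy.
# AC: Given shoes in sizes 8, 9 and 10, when I filter by size 9,
# then only size-9 shoes are shown.

def filter_by_size(shoes, size):
    """Invented function standing in for the feature under test."""
    return [shoe for shoe in shoes if shoe["size"] == size]

catalogue = [
    {"name": "Runner", "size": 8},
    {"name": "Trainer", "size": 9},
    {"name": "Boot", "size": 10},
]

# The acceptance criterion translates directly into a check a tester can run:
result = filter_by_size(catalogue, 9)
assert [shoe["name"] for shoe in result] == ["Trainer"]
```

Without the AC, “done” is a matter of opinion; with it, the ticket is testable and closeable.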



Smell 4 – The product owner is a user

One of the most common smells. The product owner is a proxy for the user, but 9 times out of 10 they’re not the end user of the service.

The user in a user story has to be an end user of the system. They can be personas/types of user (e.g. admin, front line staff, loyal user etc). Product Owners, BAs, members of the dev team are not the end users.

Writing AS A product owner I WANT something SO THAT value isn’t a user story.



Smell 5 – Acceptance criteria must specify how features look & behave

Some developers like lots of detail. And that’s OK … but generally speaking acceptance criteria specify behaviour (i.e. what the system does in certain scenarios). They don’t need to specify how it looks.

There can be times when describing how a feature looks is useful – or even necessary. Generally attaching a visual or link to a component library is sufficient.

A picture is worth a thousand words.



Smell 6 – System-wide NFRs should be written as user stories

NFRs are tricky. There are obviously NFRs that affect the end user, e.g. system availability. They can convincingly be written in the user story format.

One problem with writing system-wide NFRs as user stories (e.g. availability, system backups) is that they cut across the entire system. It’s difficult to test these NFRs until the entire system is built. I prefer to have system-wide NFRs either as “definition of done” criteria which get tested against each ticket, or as items for regression testing at the end of a release.

Story-specific NFRs might be written as ACs against a ticket (e.g. an audit log for a reduction decision).



Smell 7 – Specifying what the user wants is enough!

I’ve seen several people exclude the 3rd line of a user story – the reason why the user wants something. The 3rd line helps us to understand why we’re doing the work.

The 3rd line of the user story (So that … ) can be driven from user research, or observations, or data analytics etc. Either way we need to understand the why before we start to solve the problem. At the minimum a story needs to include “So that”. This helps with prioritisation.



Smell 8 – User stories should be incredibly detailed

User stories should specify the appropriate level of information. There’s a tendency from BAs, and sometimes from the development team, to try to put all the information they have into a ticket.

Having a ticket that is too detailed adds little value. It makes it likely that people will scan over the ticket and miss the most important information. An incredibly detailed ticket is not necessarily better than a less detailed ticket – it’s about having the appropriate level of information.

As a story is worked on it might be that more detail emerges. But a story should contain enough information for the team to develop and test it.



Smell 9 – User stories can depend on other stories in the Sprint

Ideally user stories should meet the INVEST criteria. That means each story should be independent.

Unless it’s agreed at Sprint planning & made visible on the ticket – all user stories should be independent. There may be cases where two dependent stories are brought into the same Sprint – however the goal should be that stories do not depend on other stories.



Smell 10 – Stories should be very small

This is more for teams that are using Gherkin & TDD, however some teams aim to have very small user stories. Almost at the level of a handful of scenarios.

One advantage of smaller user stories is that we can track progress in a Sprint at a more granular level. But a note of caution – small user stories are essentially a grouping of scenarios. They can make the Sprint board less manageable and in themselves deliver very little value to a user. For very small stories it is difficult to make them independent and valuable.


The game

Here’s a link to a game we created. It lists the 10 smells + 10 example bad user stories. See if you can match them:

It’s a great team exercise – with either a product or a BA team. It helps reiterate some of the key points above. And makes examples tangible.

Any smells I’ve missed? Enjoy!


Bit of BA humour from me …

24 Jan


New cartoon published

26 Nov

My latest cartoon was published on Modern Analyst. Very happy with it:

Applying Build, Measure Learn to Sprints Demos

25 Jul


Like most Scrum teams, we held a “Sprint Review Meeting” every two weeks. We would gather as a team to demo what was recently built & receive feedback. Although it was a great opportunity to showcase recent work, we identified a number of problems with “Sprint Review Meetings” for our mature product:

  1. Stakeholder attendance was poor. Stakeholders saw the Sprint Review Meetings as a technical show & tell. The demos often didn’t work fully & business value wasn’t necessarily communicated.
  2. Because developers demoed the work, it put disproportionate pressure on the development team. We presented recent work & we often had problems with test environments/connections/mock data etc.
  3. More generally – the development team wanted regular updates from the product team. Our retros identified a need for the product team to provide regular updates about recent features; did a recently released feature meet our hypothesis? What did we learn? Will we iterate? How did it impact our quarterly OKRs?
  4. Sprint Review Meetings felt like a conveyor belt. We would demonstrate work, get feedback about quality, and then watch it leave the factory. But we wanted to learn how customers actually used the new product. We wanted external as well as internal feedback.


Build, Measure, Learn (BMLs) sessions

To address the above issues, we replaced Sprint Review Meetings with “Build, Measure, Learn” sessions. As advocates of the Build, Measure, Learn approach – we were keen to review recently released features with the team. We launched features every 2 weeks – so the natural cadence was to report on features at the end of the following Sprint.

We created “Build, Measure, Learn” sessions. The basic format is simple:


  • Every 2 weeks, at the end of the Sprint. Replaces the Sprint Review Meeting.
  • Team (Product, Devs, UX) & stakeholders.
  • 1 hour.


The session is divided into two sections:

  1. Build = demo from the development team about what was built during the Sprint. It’s a chance to get feedback from the Product Owner/Stakeholders.
  2. Measure/Learn = product reporting back on stats/usage/insights of recently launched features. Typically on features & changes launched 2 & 4 weeks ago. This provides an external feedback loop.

The Measure/Learn section became as valuable as the demo section. It also provided practical breathing space for setting up/fixing demos – if we had problems we would start with the Measure/Learn section 😉


Build section

As with the Sprint Review meeting – this section was the development team demoing what was built during the Sprint.

This was an opportunity for product/stakeholders to provide feedback and ask any questions. Changes were noted by the BA and put on the product backlog.

It was also an opportunity to praise the team & celebrate success.


Measure/Learn section

In the Measure/Learn section the BA or Product Owner would cover the following areas:

  1. General product performance: how we are performing against quarterly goals/OKRs
  2. For each recently released feature:
    • Present the testable hypothesis
    • Present the actuals. Key trends/unexpected findings/verbatim feedback from the audience about the feature
    • Present key learnings/actions: Build a v2/pivot/stop at v1/kill the feature?
  3. Wider insights (optional):
    • Present recent audience research/lab testing
    • Present upcoming work that UX are exploring & get feedback on it



We found that BML sessions were a great replacement for Sprint Review Meetings. They ensured we kept the measurement & learning part of the lifecycle front and center in the team. The Measure/Learn section also ensured we reported back on business value regularly.

Main benefits:

  1. Learnings/insights about recently released features were shared with the team – this kept us focused on our original hypotheses and business value. It enabled us to discuss the learnings based on external audience feedback.
  2. Encouraged a shared sense of ownership about the end of Sprint session and the performance of features
  3. Increased stakeholder attendance & stakeholder engagement as there was a focus on audience feedback and KPIs
  4. We were still able to demo the newly developed features & get Product Owner/Stakeholder feedback

Simpsons humour

15 Jul


How Might We … brainstorm ideas

13 Jul



“How Might We …” is a group brainstorming technique we have used for 6+ months to solve creative challenges. It originated with Basadur at Procter & Gamble in the 1970s, and is used by IDEO/Facebook/Google/fans of Design Thinking.

“How Might We …” is a collaborative technique to generate lots of solutions to a challenge. Our team modified the technique slightly to ensure that we also prioritise those solutions. More on that below …

In essence “How Might We …” frames problems as opportunity statements in order to brainstorm solutions. For example:

  • How Might We promote our new service to the audience?
  • How Might We improve our membership offering?
  • How Might We completely re-imagine the personalisation experience?
  • How Might We find a new way to accomplish our download target?
  • How Might We get users excited & ready for the Rio Olympics?

How Might We works well with a range of problem statements. Ideally the question shouldn’t be too narrow or broad.



How Might We sessions involve a mixture of participants: product (Product Owner/BA), technical (Developers/Tech Lead/QA) and stakeholders. The duration is 1 – 1.5 hours.

The format is:

  1. Scene setup (background/constraints/goals)
  2. Introduce the question (How Might We …)
  3. Diverge (generate as many solutions as possible)
  4. Converge (prioritise the solutions)


1. Scene Setup

Scene setup is about introducing the background, constraints, goals & ground rules of the How Might We session.

For example, we held a session about: “How Might We get app users excited & ready for the Rio Olympics?” We invited 10 participants from across the product, technical and stakeholder teams. For 5 minutes we set up the scene. As part of scene setup:

  • Background: Rio 2016 is the biggest sporting event. We expect record downloads & app traffic. There will be high expectations. There will be hundreds of events & hours of live coverage.
  • Constraints: We want to deliver the best possible experience without building a Rio specific app.
  • Session goal: Generate ideas for new features & to promote current features.
  • Commitment: We will take the best ideas forward to explore further.


2. Introduce the question

The How Might We question is presented to participants and put on a wall/physical board.

The question shouldn’t be too restrictive; wording is incredibly important. Check the wording with others before the session. We circulate the question to participants ahead of the session – this allows them to generate some solutions before the meeting.

Framing the question in context/time will help. It makes the problem more tangible. For example:

“It’s 3 days before the Olympics. How Might We get users excited & ready for the Rio Olympics?”


3. Diverge

Use a technique like crazy 8’s to generate ideas. Give people 5-10 minutes to think of many solutions to the question.

These solutions are typically written on post-it notes. At the end of 10 minutes we ask each participant to stand up and present their post-it note ideas to the group. Participants explain their ideas; common ideas are grouped together. For example:

Post it note ideas

With 10 participants you can generate 50 – 80 ideas. Once ideas are grouped together you can have 20 – 30 unique ideas.


4. Converge

We ask people to pick their favourite idea. It can be their own idea, or another person’s post-it note idea.

For 10-15 minutes they explore that idea in more detail. Participants can add notes/draw user flows/write a description about the idea.

At the end of this time, each participant is asked to present their idea back to the group. For example:

Idea example

Once each participant has presented their idea (10 people = 10 ideas), participants are invited to dot vote. Each participant has 3 votes to select their favourite 3 ideas.
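The dot-vote tally itself can be sketched in a few lines (the idea names and votes below are invented for illustration):

```python
from collections import Counter

# Each participant places 3 dots; a vote is just an idea name (truncated sample).
votes = [
    "VR mode", "Medal alerts", "VR mode", "Daily digest",
    "Medal alerts", "VR mode", "Schedule sync", "Medal alerts",
    "Daily digest", "VR mode", "Medal alerts", "Schedule sync",
]

tally = Counter(votes)
top_ideas = [idea for idea, _ in tally.most_common(3)]
print(top_ideas)  # the ideas taken forward
```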

Typically this is where a HMW ends ….

BUT we would often find ourselves in a position where the top voted idea was the most difficult to implement. The top ideas were often elaborate & had a cool factor – but were very complicated to build/offered limited business value. For example: “We could build VR into the app. It would offer all sports in immersive 3D and recommend videos based on the user’s Facebook likes”.

AND we found that stakeholders weren’t comfortable having an equal say (3 dot votes) to QA/developers in terms of the product proposition.

SO we implemented a further step to converge on more realistic options. We took the top voted ideas + any ideas that stakeholders were particularly keen on from the How Might We session. We allowed UX to explore these ideas in more detail. An example of a more refined idea is an Olympics branded menu:


We took these ideas into the prioritisation session.



With the more refined ideas we held a prioritization session with the key stakeholders (product owner, tech lead, primary stakeholders).

As a group we would rank these ideas in terms of business value and technical complexity (1-5). The business value was driven by a KPI or agreed mission. The technical complexity was an estimate of effort.

Complexity: 5 = hard, 1 = easy

Impact: 5 = high impact, 1 = low impact

We would end up with a relative ranking of the top ideas. For example:

Cost Value example

The top left quadrant is tempting (high impact, low effort). The bottom right quadrant is not tempting (low impact, high effort).

We used the relative weightings & dot voting to select the best idea. We would go on to shape & build the best idea.
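As a rough sketch of that ranking step (idea names and scores are invented; ordering by impact minus complexity is just one way to read the quadrants, not part of the original session):

```python
# Each idea is scored 1-5 for business impact and technical complexity.
ideas = [
    {"name": "Olympics branded menu", "impact": 4, "complexity": 2},
    {"name": "VR mode",               "impact": 5, "complexity": 5},
    {"name": "Medal alerts",          "impact": 4, "complexity": 3},
    {"name": "Schedule sync",         "impact": 2, "complexity": 4},
]

# High impact + low complexity (the tempting top-left quadrant) floats to the top.
ranked = sorted(ideas, key=lambda i: i["impact"] - i["complexity"], reverse=True)

for idea in ranked:
    print(f'{idea["name"]}: impact {idea["impact"]}, complexity {idea["complexity"]}')
```

Note how a high-impact but high-complexity idea drops below a cheaper, almost-as-impactful one – exactly the correction we were after.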

User story mapping

5 Dec

What is user story mapping

User story mapping is a workshop technique that creates a visual representation of product requirements, and orders them based on priority, user theme & time.

The output of user story mapping is a product roadmap that clearly conveys the context of each user story, the horizontal slices of the MVP, and the aspirational scope of the product.

The technique has become popular through the work of Jeff Patton. An example user story map is here:

Shoes website - USM

Reasons to use this technique

The early stages of the new product lifecycle can be a challenging time for Business Analysts. Considerable BA effort can be spent:

  • Capturing initial requirements (e.g. User Stories) & creating a structure around them (e.g. EPICs)
  • Capturing the overall product scope for the new product
  • Prioritising requirements into an MVP and subsequent releases
  • Understanding where to focus the analysis effort (i.e. which areas to initial spec out)

User story mapping will help you visualise the requirements & their context within the overall product roadmap. You will create a visual overview of the roadmap.

How to run a USM session

Organise a workshop with the key stakeholders: product, UX, technology, internal SMEs etc. Ideally you want no more than 10 people (7 ± 2 is the magic number).

Bring post-it notes, pens and plenty of sweets.

Step 1 (kickoff & context)

The BA/PO kicks off the session. They present a summary of the product vision. What problems does the product need to solve? What audience insight do we have? What are our known constraints? What are our competitors doing? What are our aspirations?

An example would be: “We want to build a website that sells shoes to teenagers. We have evidence that XYZ opportunity exists and we want to focus on the teenage market” etc.

This should take 10 – 20 minutes.

Step 2 (identify user tasks)

Each stakeholder should (in silence) write down on post-it notes how a user will use this new product. What do users need & want to be able to do?

An example would be: “search for shoes”, “browse a website hierarchy”, “view recommended products”, “add to a basket”, “share on Facebook” etc.

As a tip to ensure you’re capturing the correct detail – they should be written from the user’s perspective and usually start with a verb (e.g. add, share, browse, buy etc). These are the things your users will do with the product.

This should take 10 minutes (depending on the scope of the product).

Step 3 (remove duplicate tasks)

The BA/PO should put the post-it notes on a physical board. Talking about each post-it note, the author should describe what they’ve written and why.

During this process duplicate post-it notes should be removed.

You will end up with something like this:

USM - User tasks

These are your USER TASKS (i.e. things a user does to achieve an objective). They form the walking skeleton.

This should take 15 minutes.

Step 4 (identify user activities)

With the post-it notes on the board, start to organise them into clusters (clusters = based on similarity). Assign each cluster a name.

An example would be: “find a product”, “view a product”, “buy a product” etc. Where “find a product” includes “search on site”, “browse a website hierarchy”, “view recommended products” etc.

Order these clusters according to the order in which a user would complete them.

For example: find a product > then view a product > then add a product > then buy a product

Note – if the post-it notes can’t be ordered from a user perspective, then you can order them based on when you plan to pick up the work. Ideally you should organise the items from the user’s perspective, so that reading the activities from left to right tells a story and represents a narrative.

User activities - USM

These clusters are your USER ACTIVITIES. They form the backbone.

This should take 15 minutes.

Step 5 (confirm all activities & tasks are captured)

With the USER ACTIVITIES and USER TASKS on the board – explore the user journey and see whether you have captured all the activities and tasks. Thinking in terms of user personas can assist. Drill down into the activities to see if there’s additional detail you can add.

Use this as a time to confirm that you’ve captured the scope of the system. If there are activities or tasks that don’t fit into the main narrative (e.g. user can localise the website language), add these items to the end of the map.

This should take 20 minutes.

Step 6 (identify user stories)

The final level of detail for the map is USER STORIES. I haven’t encountered a body-related synonym for user stories – continuing with the theme of walking skeleton & backbone, we could suggest body of work?

Each USER TASK will have user stories. Explore the user tasks in additional detail – capture and frame that detail as user stories (each user story requires a title, a description, and any relevant detail).

For example, within the “search on site” user task, there could be a user story to:

  1. Search based on keyword (AS A user, I WANT to search for shoes using keywords such as Nike, Jordan, trainers SO THAT I can find my desired product quickly)
  2. Search based on colour
  3. Filter by brand
  4. Search based on shoe size

The user stories are designed to be a placeholder for a conversation – a snapshot of what the user needs and why. Don’t try to add too much detail. They should meet the INVEST criteria.

This step can take some time, depending on the product. Either take a break before starting this step, or run it as a separate session.

Step 7 (prioritise user stories)

With the product team steering this, work as a team to put user stories in a priority order, from highest priority to lowest priority. This can be a difficult task – compare the relative value of each user story to another.

Now draw a horizontal line to slice the roadmap. This line represents the minimum viable product (MVP); you can draw lines for additional versions of the product too.

MVP - User story mapping
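The map-and-slice idea can also be sketched as plain data (activity, task and story names are illustrative – the real map lives on a wall):

```python
# A story map as nested data: USER ACTIVITIES -> USER TASKS -> USER STORIES.
# Each story carries the release it was sliced into (1 = MVP).
story_map = {
    "find a product": {
        "search on site": [
            {"story": "search by keyword", "release": 1},
            {"story": "search by colour", "release": 3},
        ],
        "browse hierarchy": [
            {"story": "browse by category", "release": 1},
        ],
    },
    "buy a product": {
        "checkout": [
            {"story": "pay by card", "release": 1},
            {"story": "pay with voucher", "release": 2},
        ],
    },
}

# "Drawing the MVP line" is just filtering on the release number.
mvp = [
    story["story"]
    for tasks in story_map.values()
    for stories in tasks.values()
    for story in stories
    if story["release"] == 1
]
print(mvp)
```

The MVP slice cuts across every activity, giving one thin end-to-end journey rather than all of one activity.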

Step 8 (next steps)

You should now have all the user stories defined, prioritized and organized by theme.

The BA can take away each user story and work on them separately, adding acceptance criteria/detail in order to allocate them into a Sprint. The great thing about this technique is that each user story now sits within the context of a user journey.

Shoes website - USM summary

Benefits of user story mapping

+ It visualises the product backlog – a USM is easy to present & understand compared to a list of JIRA tickets. A picture is worth a thousand JIRA tickets 🙂 You can put it on a Sprint board.

+ It radiates information including: the context of items, the relative priority of each item, the groupings of work and the MVP.

+ It’s hierarchical – it covers the high level themes and the more granular user stories. You can see the big picture, the finer detail and the aspirational vision.

+ It encourages valuable slicing of functionality – the lines drawn from left to right slice the backlog so that you don’t build all of the search stories and then run out of money before you develop the buy stories.

+ It organizes the roadmap based on how users will use the new product. By focusing on user behaviour rather than how systems interact, you build a product around the user needs.


Common questions

  • Can this be applied to any type of product?
    • I think so. Provided it has end users. I’ve applied the technique to a feature which allowed users to sync their favourite items. On the surface of it, the feature was essentially a read/write capability – but we were able to convey it as a set of user tasks.
  • Would you ever create multiple user story maps for one product?
    • I have never done this. If your product is the Amazon website then you may need multiple maps – but hopefully your product isn’t that large.
  • Are there any downsides?
    • The user story map doesn’t replace your requirements/JIRA tickets. It’s essentially a way to visualise them within the context of the roadmap. As the system changes you’ll need to update the USM as well as your requirements/JIRA. As such, it’s a product artefact that needs maintaining.
  • What if you have no idea what the product is?
    • The technique generally assumes there is a product vision around which to hang these tasks/stories. If you have very little idea about the new product (e.g. “we want to build a page on the website where users can see things that most interest them”) then you can still use this technique. I’d recommend some form of upfront thought about what you want that page to do by the product team & a warm up activity for the group before step 2 (e.g. crazy 8’s).
  • Aren’t user tasks just another type of user story?
    • User tasks can be thought of as high-level user stories e.g. “searching for a product“ is a task that can include multiple user stories (search by keyword, search by shoe size, filter by brand, search by colour). The essence of activities vs tasks vs stories is that they’re hierarchical user needs. Exactly what they are depends on what product you build.

Product exercises – postcard from the future etc

7 Apr


Below is a set of exercises I have run or participated in – that are intended to facilitate product discovery & product development early in the project lifecycle. These exercises help teams generate a product vision and solicit high-level features/themes around which a prioritised product backlog can be generated.

Postcard from the future


  1. Split the group of participants into pairs. Participants include senior project stakeholders (UX, product, tech etc). Try to create cross discipline pairs.
  2. Assign each pair a user persona. User personas are designed to provide insight into your market by identifying how different types of consumer will use your product. An example persona which you could assign to a pair would be “Mobile Mark. 24 yrs old London professional with the latest iPhone. Likes new technology. Regularly checks Twitter etc …”
  3. Ask each pair to write a postcard from the future as their assigned persona. An example would be “imagine we have just launched our new responsive website. Mobile Mark is a tech-savvy user. He is so happy with our new website that he has decided to write a postcard to our team.” A pair would then write a postcard as Mark – about why he loves the new responsive website.
  4. Once the postcards have been generated – ask each pair to introduce their assigned persona to the wider group and present that persona’s postcard from the future. Allow the wider group to ask any follow-up questions.

Note – I have also run this session with Tweets/Vines from the future. The Twitter format is particularly useful for capturing a concise summary – often with the single most important feature. The Vine format is an interesting variant, it allows people to be more creative and tell a story (e.g. “Mark gets on the bus, checks the website, jumps up and down in excitement and spills his drink, Tweets his friend”).





This exercise is useful for:

  1. Getting senior project stakeholders to think about the product from a persona’s point of view. It’s easy for stakeholders to think of features from their own viewpoint – this exercise allows senior project stakeholders to empathise and create a vision from their user’s perspective.
  2. Identifying “warm fuzzy features”. It’s a great technique for discovering features that would delight your users, e.g. “every time Man Utd score, Mark’s phone cheers; otherwise it boos”. The format and persona-based nature of the exercise encourage creative and out-of-the-box thought.

Hopes and fears


  1. The facilitator will introduce the project goal to the stakeholders e.g. “We are planning to create an Apple watch app. Here are several potential designs. Before we proceed any further – we want to ask everyone for feedback about their hopes and fears for this project”. Stakeholders in this context are anyone who has an interest in the success of the project.
  2. The facilitator will ask stakeholders to write down their hopes and fears in silence. These hopes/fears are then placed on a board.
  3. Once all the feedback has been captured on the board – the facilitator will talk through the hopes and fears. The facilitator will: seek clarity on specific points, encourage an open dialogue on each item, seek agreement and identify common themes.
  4. The hopes and fears will then be taken away and captured in the project space (e.g. a wiki space).


  1. This exercise allows stakeholders to convey their aspirations and perceived risks early in the project lifecycle. It reduces project risk and builds consensus.
  2. Individual hopes can form the basis of high-level themes or features. For example “I hope we can provide content that is useful to a user’s context”. This hope may become a feature to “vary the content/format depending on time of day”.

Now, next and later


  1. The facilitator will hold up individual features (features are prepared in advance). They will describe each feature and encourage an open dialogue with the audience. The audience is typically senior project stakeholders.
  2. For each feature, detail and clarity will be added through discussion. The audience will be asked to assign each feature to either the now, next or later column. If too many items are added to the Now column then the facilitator should challenge this.
  3. Once all features have been presented to the audience – the facilitator will review the content in each column. The facilitator will confirm whether each column contains a logical slice of features and verify consensus within the group. Items can be moved. Particular emphasis should be given to the Now column – because these features could become the 1st phase of the project.


  1. Primarily used for early product prioritization. The “Now” features can become the MVP. The exercise produces the phases required to achieve a vision.
  2. The Now, next and later technique is similar to MoSCoW prioritisation – however it offers a slightly fresher format.

1st published BA cartoon

7 Oct

Happy to announce that Brian the BA has reached a wider audience:


Brian the Business Analyst – part 2

24 Apr