“How Might We …” is a group brainstorming technique we have used for 6 months to solve creative challenges. It originated with Basadur at Procter & Gamble in the 1970s, and is used by IDEO, Facebook, Google and fans of Design Thinking.
“How Might We …” is a collaborative technique to generate lots of solutions to a challenge. Our team modified the technique slightly to ensure that we also prioritise those solutions. More on that below …
In essence “How Might We …” frames problems as opportunity statements in order to brainstorm solutions. For example:
- How Might We promote our new service to the audience?
- How Might We improve our membership offering?
- How Might We completely re-imagine the personalisation experience?
- How Might We find a new way to accomplish our download target?
- How Might We get users excited & ready for the Rio Olympics?
How Might We works well with a range of problem statements. Ideally the question shouldn’t be too narrow or broad.
How Might We sessions involve a mixture of participants: product (Product Owner/BA), technical (Developers/Tech Lead/QA) and stakeholders. The duration is 1 – 1.5 hours.
The format is:
- Scene setup (background/constraints/goals)
- Introduce the question (How Might We …)
- Diverge (generate as many solutions as possible)
- Converge (prioritise the solutions)
1. Scene Setup
Scene setup is about introducing the background, constraints, goals & ground rules of the How Might We session.
For example, we held a session on: “How Might We get app users excited & ready for the Rio Olympics?” We invited 10 participants across product, technical and stakeholder teams, and spent 5 minutes setting up the scene. As part of scene setup:
- Background: Rio 2016 is the biggest sporting event. We expect record downloads & app traffic. There will be high expectations. There will be hundreds of events & hours of live coverage.
- Constraints: We want to deliver the best possible experience without building a Rio specific app.
- Session goal: Generate ideas for new features & to promote current features.
- Commitment: We will take the best ideas forward to explore further.
2. Introduce the question
The How Might We question is presented to participants and put on a wall/physical board.
The question shouldn’t be too restrictive; wording is incredibly important. Check the wording with others before the session. We circulate the question to participants ahead of the session – this allows them to generate some solutions before the meeting.
Framing the question in context/time will help. It makes the problem more tangible. For example:
“It’s 3 days before the Olympics. How Might We get users excited & ready for the Rio Olympics?”
Use a technique like Crazy 8s to generate ideas. Give people 5–10 minutes to think of as many solutions to the question as they can.
These solutions are typically written on post-it notes. At the end of the 10 minutes we ask each participant to stand up and present their post-it note ideas to the group. Participants explain their ideas, and common ideas are grouped together.
With 10 participants you can generate 50–80 ideas. Once ideas are grouped together you may have 20–30 unique ideas.
We ask people to pick their favourite idea. It can be their own idea, or another person’s post-it note idea.
For 10-15 minutes they explore that idea in more detail. Participants can add notes/draw user flows/write a description about the idea.
At the end of 10 minutes, each participant is asked to present their idea back to the group.
Once each participant has presented their idea (10 people = 10 ideas), participants are invited to dot vote. Each participant has 3 votes to select their favourite 3 ideas.
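The dot-vote step reduces to a simple tally. A minimal sketch in Python; the idea names and vote data below are purely illustrative:

```python
from collections import Counter

# Each participant casts 3 dot votes for their favourite ideas
# (idea names and ballots are illustrative, not from a real session).
votes = [
    "Olympics menu", "Medal alerts", "VR viewer",     # participant 1
    "Olympics menu", "Medal alerts", "Daily digest",  # participant 2
    "Olympics menu", "VR viewer",   "Daily digest",   # participant 3
]

tally = Counter(votes)                                 # idea -> number of dots
top_ideas = [idea for idea, _ in tally.most_common(3)] # 3 most voted ideas
```

In a real session the tally comes from counting physical dots on the post-it notes; the snippet just makes the counting rule explicit.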
Typically this is where a How Might We session ends …
BUT we would often find ourselves in a position where the top voted idea was the most difficult to implement. The top ideas were often elaborate & had a cool factor – but were very complicated to build and offered limited business value. For example: “We could build VR into the app. It would offer all sports in immersive 3D and recommend videos based on the user’s Facebook likes”.
AND we found that stakeholders weren’t comfortable having an equal say (3 dot votes) to QA/developers in terms of the product proposition.
SO we implemented a further step to converge on more realistic options. We took the top voted ideas, plus any ideas that stakeholders were particularly keen on from the How Might We session, and allowed UX to explore them in more detail. An example of a more refined idea is an Olympics branded menu.
We took these refined ideas into a prioritisation session with the key stakeholders (Product Owner, Tech Lead, primary stakeholders).
As a group we would rank these ideas in terms of business impact and technical complexity, each on a 1–5 scale. The business impact was driven by a KPI or agreed mission; the technical complexity was an estimate of effort.
- Complexity 5 = hard, Complexity 1 = easy
- Impact 5 = high impact, Impact 1 = low impact
We would end up with a relative ranking of the top ideas, which can be plotted on an impact/effort matrix. The top left quadrant is tempting (high impact, low effort); the bottom right quadrant is not (low impact, high effort).
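As a rough sketch of this ranking step, assuming the 1–5 impact and complexity scores described above (idea names, scores and quadrant labels are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # 1 (low) .. 5 (high)
    complexity: int  # 1 (easy) .. 5 (hard)

def quadrant(idea: Idea, midpoint: float = 3.0) -> str:
    """Place an idea in one of four impact/effort quadrants."""
    hi_impact = idea.impact >= midpoint
    hi_effort = idea.complexity >= midpoint
    if hi_impact and not hi_effort:
        return "quick win"      # top left: high impact, low effort
    if hi_impact and hi_effort:
        return "major project"
    if not hi_impact and not hi_effort:
        return "fill-in"
    return "avoid"              # bottom right: low impact, high effort

ideas = [
    Idea("Olympics branded menu", impact=4, complexity=2),
    Idea("VR sports viewer", impact=3, complexity=5),
]
# Relative ranking: highest impact first, easiest first on ties.
ranked = sorted(ideas, key=lambda i: (-i.impact, i.complexity))
```

The midpoint of 3 is an assumption; in the session the quadrant boundaries are simply where the group draws the axes on the board.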
We used the relative weightings & dot voting to select the best idea. We would go on to shape & build the best idea.
Within the Scrum Framework – there are numerous GASPs (Generally Accepted Scrum Practices). The following 4 meetings are all GASPs:
• Sprint Planning
• Daily Stand-up
• Show and Tell/Sprint Review
• Sprint Retrospective
There have been efforts to add a 5th meeting to the list of GASPs:
• Product Backlog Refinement (PBR session)
AIM OF THE PBR SESSION
The overall aim of this meeting is to manage the product inventory and ensure that the product backlog (i.e. anything outside of a Sprint) is up-to-date. This is done through the following PBR activities:
I) Item breakdown:
• Progressively breaking down large items (Epics) into smaller items (features, use cases etc.) that can be implemented in a single Sprint
• Grouping items based on commonality (technical delivery/product goal etc)
• Adding detail – such as acceptance criteria – to items in order to generate a common understanding
II) Estimation:
• Pre-Sprint, high-level estimation of items in the product backlog facilitates delivery planning
• Methods include story point estimation (e.g. planning poker on the Fibonacci sequence), t-shirt sizes, bucket estimation and blink estimation
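The Fibonacci-based scales above all work the same way: a raw effort guess gets snapped to the nearest point on a coarse scale. A minimal sketch (the scale truncation at 21 is an assumption for illustration):

```python
# Common planning-poker scale: a truncated Fibonacci sequence.
FIBONACCI_POINTS = [1, 2, 3, 5, 8, 13, 21]

def to_story_points(raw_estimate: float) -> int:
    """Snap a raw, high-level effort guess to the nearest point on the scale."""
    return min(FIBONACCI_POINTS, key=lambda p: abs(p - raw_estimate))

to_story_points(6)   # -> 5
to_story_points(10)  # -> 8
```

The coarseness is deliberate: the gaps between Fibonacci numbers discourage false precision in pre-Sprint estimates.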
III) Prioritisation:
• Items are prioritised according to business value (primarily identified by the Product Owner, stakeholders and user data)
• Items are independent as per the INVEST criteria – therefore the order of items on the product backlog leads to a prioritised Sprint backlog
IV) “Ready” state:
• Items are discussed – with issues/questions/actions being identified
• Agreement on what needs to be done in order to get items into “Ready for development”/”Ready for Sprint”
• Team understands the bigger picture – i.e. the vision beyond the current Sprint
V) Ideation:
• New, high-level user stories are discussed and added to the Product Backlog
• Bringing together members of the business and technical teams to discuss ideation facilitates collaborative product development within a cross-functional team
GENERAL FORMAT OF THE PBR SESSION
• Regular – product priorities and understanding are dynamic, so the product backlog must be responsive to change. It is recommended that PBR sessions are held every Sprint or two
• Scheduled – Typically mid Sprint in order to avoid conflict with the Sprint Planning/Show and Tell/Retrospectives
• Duration – Timeboxed – typically to 1.5 – 2 hours
• The Product Owner is primarily responsible for the Product Backlog. The Scrum Master is responsible for facilitation and the removal of obstacles. Attendance of both is therefore mandatory
• Team members – invited, but attendance is optional. Details of which stories will be discussed in the session should be provided in advance (this enables team members to decide whether or not to attend)
• Small number of stakeholders can be invited to assist with prioritisation. Representation from both the business and technical team is preferred
• The entire product backlog is not discussed. Instead the agenda should cover items that are likely to come up in the next 3-4 Sprints
• The session should aim to achieve the following:
•• Agreement on story breakdown/high level definition
•• High-level estimation
•• Item prioritisation
•• Agreement on actions necessary to get items into a “Ready” state
•• Discussion of any new ideas
• At a high level – the aim of the PBR session is to ensure that items in the Product Backlog meet the DEEP criteria (Detailed Appropriately, Estimated, Emergent and Prioritised)
What is “Agile”?
The concept of “Agile” was popularised in 2001 with the publication of the Agile Manifesto (http://agilemanifesto.org/). The manifesto set out a number of principles which – when successfully applied – would improve the process of software development.
Over time these Agile principles – valuing individuals and interactions over processes, responding to change, collaboration etc – were applied to projects in various industries. “Agile” made the transition from IT to general management practice in a similar manner to lean/six sigma’s evolution from its manufacturing origins.
With its increasing popularity – Project Managers were widely encouraged to implement Agile principles in order to successfully deliver projects (http://pmdoi.org/).
Why measure Agile maturity?
1) Let’s imagine the following scenario:
You’ve heard that Agile is popular with other PMs – but you’re not sure about it. You decide to try Agile on your project. You invest £10,000 on staff training and purchasing Agile artefacts (an online dashboard, Sprint boards etc). One year later – you want to find out whether your investment was worthwhile. You measure the team’s current delivery (time/cost/quality metrics) and compare this against an earlier benchmark. You notice very little improvement.
As a PM – do you attribute this to the team’s failure to implement Agile principles (i.e. due to a low level of adoption)? Or was Agile the incorrect methodology for this team (i.e. due to the nature of the product would a more structured approach have had greater efficacy)?
Without a measure of Agile maturity – how can you answer these questions?
2) In another scenario:
You’re a PM who is completely sold on “Agile”. You believe in it passionately – the more Agile a team is the better. You have a total of 7 teams across the business using Agile.
How can you identify the Agile improvement areas for each of the 7 teams? How can you provide feedback on their Agile performance?
3) Last scenario:
You dislike Agile intensely. It’s a methodology that has been extended to fit every size/structure of team. You use Waterfall – and want to convey that this approach works for your team.
Imagine that you could measure your team’s success using the standard metrics – and also show that this success has been possible without being “Agile”. Perhaps then there would be less pressure from management to adopt Agile principles.
Broadly speaking therefore – the pragmatist (scenario 1), Agile evangelist (scenario 2) and Agile sceptic (scenario 3) would each benefit from the ability to quantify Agile adoption. So what types of Agile assessments are available?
How to measure Agile maturity?
There are 4 widely available, self-administrable, free assessments that can provide teams with a detailed breakdown of performance across key Agile areas:
1) Mike Cohn & Kenny Rubin (http://comparativeagility.com/): This assessment measures a team’s performance across 7 dimensions: teamwork, requirements, planning, technical practices, quality, culture and knowledge creation. With approximately 100 questions – this is one of the most detailed assessments available online.
2) Thoughtworks (http://www.agileassessments.com/online-assessments/agile-self-evaluation): Consisting of 20 questions – and providing a detailed summary report – this assessment was developed by a leading consultancy. The assessment is best administered as a group exercise- as this encourages dialogue and leads to a more balanced viewpoint.
3) The Nokia Test (http://jeffsutherland.com/nokiatest.pdf): This is one of the oldest assessments available – and was produced by Jeff Sutherland (a key figure in the Agile community). The Nokia Test focuses on the iterative nature of Agile and the adoption of the Scrum framework.
4) Agile Karlskrona test (http://www.piratson.se/archive/Agile_Karlskrona_Test.html or http://www.mayberg.se/archive/Agile_Karlskrona_Test.pdf): an incredibly simple self-assessment that provides a team with a basic snapshot of their performance.
The 4 assessments mentioned above provide valuable feedback: improvement areas are flagged and successes are recognised. They also encourage dialogue throughout the team in terms of best practices.
It is worth adding however that “Agile” is based on a descriptive manifesto (e.g. you should value customer collaboration) – rather than a prescriptive manifesto (e.g. you should produce a customer collaboration tracker which should be reviewed biweekly with the PMO team). The origins of Agile (in terms of a descriptive manifesto), and the lack of a definitive maturity model, mean that although measuring Agile maturity is worthwhile – it is an imprecise science.
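Under the hood, assessments of this kind typically reduce to averaging 1–5 answers within each dimension to produce a maturity profile. A minimal sketch; the dimension names and scores are invented for illustration:

```python
# Illustrative survey answers: one 1-5 response per question, grouped by dimension.
responses = {
    "teamwork": [4, 5, 3],
    "planning": [2, 3, 2],
    "quality":  [5, 4, 4],
}

# Maturity profile: mean score per dimension.
profile = {dim: sum(scores) / len(scores) for dim, scores in responses.items()}

# The lowest-scoring dimension is flagged as the improvement area.
weakest = min(profile, key=profile.get)
```

This also makes the imprecision concrete: the profile is only as meaningful as the (descriptive, self-reported) questions behind it.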
The MOST acronym (Mission, Objectives, Strategy, Tactics) can be used to describe the main differences between the Product Owner and Business Analyst roles on a project.
Mission
- This is the vision statement for the product. It should be concise and value driven.
- This will provide answers to the following questions: What is the intention and long term direction of the product? Who is the user-base/target market? What is the business benefit?
- Example: “We want to deliver the most popular Sports app in the World – with unparalleled journalist content”
- Responsibility of the Product Owner.
Objectives
- These are derived from the product mission. These are targets that will translate the product mission into reality.
- These will provide answers to the following questions: What goals will lead us to achieve our mission? What will need to be created? What will need to be changed? What will need to be acquired?
- Example: “We need to deliver live video streaming in the iOS app”
- Responsibility of the Business Analyst.
Strategy
- This is a description of how success will be achieved. This should describe the features and their prioritisation.
- This will provide answers to the following questions: How will the product scope be delivered across iterations? What is the Minimum Viable Product for release 1.0/launch? Which features are nice-to-haves?
- Example: “Pundit analysis, live video & match statistics are required for the first release – personalisation will be delivered in the second release of the app”
- Responsibility of the Product Owner.
Tactics
- These are derived from the product strategy. These are the deliverables that will be provided by the development team.
- These will provide answers to the following questions: How can we achieve tangible benefits in the next Sprints? What tasks need to be completed? How can work be grouped together logically & in terms of delivery?
- Example: “Provide live streaming of our CMS videos using Media Player”
- Responsibility of the Business Analyst.
Within MOST there are 2 definition activities (Mission and Objectives) and 2 planning activities (Strategy and Tactics).
- The Mission (high-level product definition) is done by the Product Owner.
- The Objectives (detailed product definition) are done by the Business Analyst.
- The Strategy (high-level product planning) is done by the Product Owner.
- The Tactics (detailed product planning) are done by the Business Analyst.