Tag Archives: technology

Overview of AWS for Business Analysts

23 Jan

Purpose of the blog

I’ve written this blog as a non-techie guide to AWS.

If you’ve ever been in a meeting where people mentioned ‘EC2’, ‘ELB’, ‘RDS’ and gone … ‘WFH’, then this blog post is for you.

It’ll provide:

  • an overview of AWS (what it is, why it’s popular, how it’s used)
  • typical AWS architecture for a project – including key terms (VPC, Regions, AZs)
  • a cheat sheet of other key terms (EBS, EFS etc)

Overview of AWS

What is AWS

AWS is the most popular cloud platform in the world. It’s owned by Amazon & is almost as large as the next 2 cloud providers combined (Google Cloud + Microsoft’s Azure).

In a nutshell – AWS allows companies to use on-demand cloud computing from Amazon. Customers can easily access servers, storage, databases and a huge set of application services using a pay-as-you-go model.

TL;DR: AWS is a cloud platform (owned by Amazon) used by companies to host and manage services in the cloud.

Why companies use it

Historically, companies have owned their own IT infrastructure (e.g. servers / routers / storage). This has an overhead in terms of maintenance. It meant companies had to pay large amounts of money to own their infrastructure – even if that infrastructure was barely used at certain times (e.g. at 3am). Companies also struggled to ramp up the infrastructure if demand suddenly went up (e.g. a viral video on a website).

AWS & the cloud in general help companies with that situation. There are 5 main benefits:

  1. Pay for what you use
  2. Scale the infrastructure to meet the demand
  3. Resiliency (e.g. if a data centre goes down)
  4. Cheaper (by leveraging the purchasing scale of Amazon)
  5. Removes the need to own and manage your own data centres

TL;DR: AWS allows companies to only pay for the infrastructure they use. It also allows companies to quickly ramp up & ramp down infrastructure depending on demand.

How companies use it

There are 3 main cloud computing models. Most companies use IaaS.

  1. Infrastructure as a Service (IaaS) – provides access to networking features, computers (virtual or dedicated hardware) and data storage. This provides the greatest flexibility as you control the software / IT resources. With this model you get the kit but you manage it
  2. Platform as a Service (PaaS) – removes the need for your organisation to manage the infrastructure (hardware and operating systems). You don’t have to worry about software updates, resource procurement & capacity planning. With this model there’s even less to do – you just deploy / manage your own application (e.g. your website code)
  3. Software as a Service (SaaS) – provides you with a product that is run and managed by AWS. In this model you don’t need to worry about the infrastructure OR the service

If Amazon provides a suitable managed service, then it’s often cheaper to use PaaS rather than IaaS – because you don’t need to build and manage the service yourself.

A note about cloud deployment models … broadly speaking there are two models & most companies operate as “Hybrid”:

  1. Cloud = application is fully deployed in the cloud. All parts of the application run in the cloud
  2. Hybrid = connects infrastructure & applications between cloud-based resources and non-cloud based resources. This is typically used when legacy resources were built on-prem & it’s too complex to move them (e.g. part of an application was built years ago), or because the company doesn’t want certain information in the cloud (e.g. privileged customer information)

TL;DR: Most companies use AWS to provision infrastructure (IaaS). Amazon also offers PaaS and SaaS. PaaS means Amazon manages the platform (e.g. hardware / OS). SaaS means Amazon provides the product / service as well as the infrastructure.

Typical architecture

Region / Availability Zone

AWS has multiple Regions around the world. A Region is a geographic location (e.g. London, Ireland). You will typically deploy your application to one Region (e.g. London).

An Availability Zone is a data centre. A Region will have multiple Availability Zones. This means if one Availability Zone (AZ) fails, the other one(s) will keep running so you have resiliency. If you deploy to the London region – you will be in 3 AZs.

TL;DR: Your application is likely to be hosted in 1 Region (e.g. London), across 3 Availability Zones.
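
As a quick illustration – here’s a minimal sketch in Python using the boto3 library (assuming boto3 is installed and AWS credentials are already configured) that lists the Regions available to an account and the Availability Zones in the London Region:

    import boto3

    # eu-west-2 is the London Region
    ec2 = boto3.client("ec2", region_name="eu-west-2")

    # List the Regions enabled for this account
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    print(regions)

    # List the Availability Zones within the London Region
    azs = ec2.describe_availability_zones()["AvailabilityZones"]
    print([az["ZoneName"] for az in azs])  # e.g. eu-west-2a, eu-west-2b, eu-west-2c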

VPC / subnet

A VPC (Virtual Private Cloud) is your own chunk of the cloud. It allows you to create your own network in the cloud.

Essentially a VPC is a subsection of the cloud – allowing you more control. You control what traffic goes in and out of the network.

A VPC sits at the region level. You can leverage any of the Availability Zones to create your virtual machines (e.g. EC2 instances) and other services.

Within a VPC you can create subnets – which are isolated parts of the network. You can create many subnets in an AZ. Subnets are just a way to divide up your VPC. A subnet exists at the AZ level. You can have public or private subnets (or both).

The main AWS services inside a VPC are EC2, RDS and ELB – although most things can now sit in a VPC.

TL;DR: You’ll likely have 1 VPC (Virtual Private Cloud) in London & it will span all 3 AZs. A VPC gives your company an isolated part of AWS. You will create subnets to break-up the VPC into smaller chunks.
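
To make that concrete – below is a hedged sketch (Python / boto3; the CIDR ranges and AZ names are illustrative placeholders, not a recommendation) showing a VPC being created with one public and one private subnet:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    # Create the VPC with a private IP range (a CIDR block)
    vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

    # Carve the VPC up into subnets - each subnet lives in a single AZ
    public_subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="eu-west-2a"
    )["Subnet"]["SubnetId"]
    private_subnet_id = ec2.create_subnet(
        VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="eu-west-2b"
    )["Subnet"]["SubnetId"]

At this point the “public” subnet isn’t actually public yet – that’s what the Internet Gateway and route table described below are for.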

Internet Gateway = handles incoming and outgoing traffic for your VPC. It’s attached to the VPC & allows it to communicate with the Internet.

Route Table = each VPC has a route table which makes the routing decisions, i.e. determines where network traffic is directed.

NACL = Acts as a firewall at the subnet level. Controls traffic coming in and out of a subnet. You can associate multiple subnets with a single NACL. There are 2 levels of firewall in a VPC: the Network Access Control List (NACL) at the subnet level, and the Security Group at the EC2 instance level.

Subnet = a subnetwork inside a VPC. It exists in 1 AZ. You can assign it an IP range & it allows you to control access to resources (e.g. you could create a private subnet for a DB and ensure it’s only accessible from within the VPC).

NAT (not represented in the diagram) = Network Address Translation. NAT devices sit in the public subnet and talk to the Internet on behalf of EC2 instances which are in private subnets.

Every VPC comes with a private IP address range, specified as a CIDR block (Classless Inter-Domain Routing). A VPC also comes with a default local router that routes traffic within the VPC.
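
Putting those pieces together – here’s a hedged sketch (Python / boto3; the VPC and subnet IDs are placeholders, e.g. the ones created in the previous sketch) of attaching an Internet Gateway and routing a subnet’s internet-bound traffic through it, which is what makes a subnet “public”:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    vpc_id = "vpc-0123456789abcdef0"                # placeholder VPC ID
    public_subnet_id = "subnet-0123456789abcdef0"   # placeholder subnet ID

    # Create an Internet Gateway and attach it to the VPC
    igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
    ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

    # Create a route table whose default route sends internet-bound traffic to the IGW
    rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(
        RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id
    )

    # Associate the route table with the public subnet
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet_id)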

Key concepts

EC2 / EBS / AMI – server, storage, machine image

Elastic Compute Cloud (EC2) provides virtual machines (instances) in the cloud. You can run applications on them. An instance is a bit like having a computer, and it runs in a single AZ.

You install an image on the EC2 instance (e.g. Windows or Linux) & choose the size (CPU / memory / storage).

Storage on the instance itself is not persisted (e.g. if you terminate an EC2 instance that storage is lost), so you will need EBS.

EBS = Elastic Block Store. It’s like a hard drive & is local to an EC2 instance. This means it’s at an AZ level. You use it for storing things like the EC2 Operating System. It behaves like a raw, unformatted block device & is used for persistent storage.

AMI = Amazon Machine Image. A template that contains the software configuration (e.g. OS, application, server) required to launch your EC2 instance.

TL;DR: You will spin up EC2 instances in your subnets. EC2 instances are like computers (with an OS, CPU, memory and storage) & you can run your application on them. EBS is storage attached to an EC2 instance. AMI is a template for launching EC2 instances.
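
As a hedged example (Python / boto3; the AMI ID and subnet ID are placeholders) – launching an EC2 instance from an AMI, with an EBS volume as its root storage, might look like this:

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",        # the AMI (machine image) - placeholder ID
        InstanceType="t3.micro",                # the size (CPU / memory)
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",    # the subnet (and therefore AZ) to launch into
        BlockDeviceMappings=[{                  # EBS: the persistent root volume
            "DeviceName": "/dev/xvda",
            "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"},
        }],
    )
    print(response["Instances"][0]["InstanceId"])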

ELB, Autoscaling & CloudWatch – load balancing, scaling, monitoring

Elastic Load Balancer (ELB) allows you to balance incoming traffic across multiple EC2 instances. It allows you to route traffic across EC2 instances so that they’re not overwhelmed.

Autoscaling adds and removes capacity on the fly behind the ELB. Autoscaling increases or decreases the number of EC2 instances based on a scaling policy: it adds instances when a threshold value is exceeded and removes instances when they are not being utilised.

CloudWatch is a monitoring service. It monitors the health of resources and applications. If an action needs to be taken, it triggers the appropriate resources via alarms – for example, CloudWatch alarms trigger the autoscaling.

TL;DR: Elastic Load Balancer (ELB) distributes traffic across your existing EC2 instances. CloudWatch monitors the service & triggers autoscaling. Autoscaling scales the number of EC2 instances up or down.
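
As a hedged sketch of how that scaling is configured (Python / boto3; the Auto Scaling group is assumed to already exist behind the ELB, and the group / policy names are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="eu-west-2")

    # Keep average CPU across the group's EC2 instances around 60%.
    # CloudWatch supplies the metric; Auto Scaling adds or removes instances to hit the target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-web-asg",      # placeholder group name
        PolicyName="keep-cpu-around-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )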

IAM – access management

IAM = Identity and Access Management. This is where you manage access to AWS resources (e.g. S3 bucket) & the actions that can be performed (e.g. create S3 bucket). It’s commonly used to manage users, groups, IAM Access Policies & roles. You can use IAM roles for example to grant applications permissions to AWS resources.

IAM is set at a global level (above region level – essentially at an AWS account level).

TL;DR: IAM is where you manage access to computing, storage, database & application services. You can decide what resources a user or application can access, and what actions they can perform.
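
For example – an IAM policy that only lets an application read from a single S3 bucket might be created like this (a hedged sketch in Python / boto3; the bucket and policy names are placeholders):

    import json
    import boto3

    iam = boto3.client("iam")   # IAM is global, so no region is needed

    # Allow read-only access to one (placeholder) S3 bucket
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::my-example-bucket",
                "arn:aws:s3:::my-example-bucket/*",
            ],
        }],
    }

    iam.create_policy(
        PolicyName="read-my-example-bucket",
        PolicyDocument=json.dumps(policy_document),
    )

The policy would then be attached to a user, group or role (e.g. an IAM role assumed by an EC2 instance).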

ELK – analytics, data processing & visualisation

ELK = Elasticsearch + Logstash + Kibana. It’s often used to aggregate and analyse the logs from all your systems.

Elasticsearch is a search and analytics engine. Logstash is used for data processing; Logstash ingests data from multiple sources, transforms it & sends it to Elasticsearch. Kibana lets you view the data with charts and graphs on a dashboard.

Elastic Stack is the next evolution of ELK. It includes Beats:

  • Beats = lightweight, single purpose data shippers. Sits on your server and sends data to Logstash or Elasticsearch
  • Example Beats include: Filebeat (ships logs and other data), Metricbeat (ships metric data), Packetbeat (ships network data)

As a note – there is an Amazon-managed service in this space called ‘Amazon OpenSearch Service’ (a fork of Elasticsearch).

TL;DR: ELK lets you analyse logs and visualise them on a dashboard. You can see errors, volumes, performance (& more) for your service. Elastic Stack is ELK + Beats (data shippers).
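
As an illustration of the kind of query that sits behind a Kibana dashboard – here’s a hedged sketch (Python with the requests library; the host, index pattern and field names are placeholders) that asks Elasticsearch for the most recent error logs:

    import requests

    # Search the (placeholder) "logs-*" indices for documents whose "level" field is ERROR
    query = {
        "query": {"match": {"level": "ERROR"}},
        "size": 10,
        "sort": [{"@timestamp": {"order": "desc"}}],
    }

    resp = requests.get("http://localhost:9200/logs-*/_search", json=query)
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_source"].get("message"))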

Bringing it all together

Example 1 – VPC in 1 region, 3 AZs, with multiple subnets

Here we have a VPC spanning 3 AZs. This VPC could be in the London Region.

To segment the VPC into smaller networks – they have set up private and public subnets. Each subnet is likely to have EC2 instances / DB instances in it.

Example 2 – VPC in 1 region, 2 AZs, with multiple subnets (EC2 and DB instances)

In this example you have a VPC in 1 Region across 2 AZs. You can see that they’ve set up public subnets (to connect to the Internet) and private subnets (for EC2 instances and to host a DB with private information). The IGW (Internet Gateway) is attached to the VPC; the Internet Gateway controls incoming & outgoing traffic and allows the VPC to communicate with the Internet.

There is an Elastic Load Balancer (ELB) which is being used to balance incoming traffic across EC2 instances – so that the EC2 instances are not overwhelmed. It’s not shown here – but they may also be using Cloudwatch and Autoscaling to increase / decrease the number of EC2 instances depending on traffic.

Example 3 – VPC that’s extending out to an S3 bucket

This is a more detailed version of example 2. In this example you can see they’re connecting to an S3 bucket (let’s say to upload and download photos). Because the S3 bucket is available on the internet – the EC2 instance could go via the public subnet (via NAT > IGW > S3). However, they’ve put a VPC Gateway Endpoint in place.

If you have your S3 bucket or DynamoDB table in the same region you can use a “VPC gateway endpoint” to reach them without going via the internet. If you want to access other AWS services (e.g. SQS, CloudWatch, SNS, SES) that are not S3 or DynamoDB – you can use “VPC interface endpoints”.
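
A hedged sketch of creating that S3 gateway endpoint (Python / boto3; the VPC and route table IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-west-2")

    # Gateway endpoint: lets instances in the VPC reach S3 without going via the internet
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
        ServiceName="com.amazonaws.eu-west-2.s3",   # S3 in the London Region
        RouteTableIds=["rtb-0123456789abcdef0"],    # placeholder route table ID
    )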

Example 4 – Multiple VPCs, VPC peering, transit gateway, VPN tunnels and direct connects

Looking at the right-hand side of the image: in this design there are multiple VPCs.

One big application may span multiple VPCs. VPC peering allows one VPC to talk to another using a dedicated and private network. The VPCs can be in the same AWS region or in different AWS regions. It means you don’t have to talk over the public internet but via AWS-managed connectivity. HOWEVER this is VPC-to-VPC, and if you have many VPCs this becomes complex because each peering is a 1:1 connection between two VPCs.

If you want to connect hundreds of VPCs you can use a transit gateway. With this design all VPCs connect to a transit gateway + the transit gateway can connect to any VPC (it acts like a hub).

There is a 3rd way to connect one VPC to another – useful if you don’t want to expose all the machines in one VPC (e.g. if it’s a SaaS product). It’s not represented in this diagram, but if you only want to expose 1 service you can use “PrivateLink”, which allows the Network Load Balancer of one VPC to connect to a VPC Endpoint Interface in the other.

Finally – in the bottom right you can see a Virtual Private Gateway. This allows your VPC to connect to your on-prem network or data centre. It can enable connectivity using VPN tunnels or a dedicated connection called AWS Direct Connect (the latter gives more bandwidth and reliability). Essentially it’s used for hybrid connectivity – where some of your workloads are on premises & some are in AWS.

Appendices – more information

Note on storage options – EBS, S3, EFS

There are several storage options (https://aws.amazon.com/products/storage/). Three examples are:

EBS (Elastic Block Store) = Block Storage. It can only be used by EC2 instances & is local to the EC2 instance. It’s like a hard drive & used for things like the EC2 Operating System. Exists at an AZ level.

S3 (Simple Storage Service) = Object Storage. Essentially a bucket where you can store things – S3 can be accessed over the internet. S3 is flat storage (there’s no hierarchy). It offers unlimited storage. Used for uploading and sharing files like images/videos, log files & data backups etc.

EFS (Elastic File System) = File Storage. It’s shared between EC2 instances. It allows a hierarchical structure. It’s at a region level and can be accessed across multiple AZs. Used for web serving, data analytics etc.
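
As a hedged example of object storage (S3) in practice (Python / boto3; the bucket name and keys are placeholders) – uploading and downloading a photo looks like this:

    import boto3

    s3 = boto3.client("s3", region_name="eu-west-2")

    # Upload a local file to a (placeholder) bucket.
    # The key can look like a folder path, but S3 storage is actually flat.
    s3.upload_file("photo.jpg", "my-example-bucket", "uploads/photo.jpg")

    # Download it again
    s3.download_file("my-example-bucket", "uploads/photo.jpg", "photo-copy.jpg")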

Note on DB options

There are eleven database services (https://aws.amazon.com/products/databases/). These include:

  • RDS = Service for relational databases
  • DynamoDB = NoSQL DB
  • ElastiCache = Used for DB caching (Redis and Memcached engines)
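
For instance – a hedged sketch of the DynamoDB option above (Python / boto3; the table name and attributes are placeholders, and the table is assumed to already exist with ‘customer_id’ as its primary key):

    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="eu-west-2")
    table = dynamodb.Table("Customers")   # placeholder table name

    # Write an item, then read it back by its primary key
    table.put_item(Item={"customer_id": "123", "name": "Jane", "tier": "gold"})
    item = table.get_item(Key={"customer_id": "123"}).get("Item")
    print(item)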

Note on caching

CloudFront is the AWS CDN service. It means static content (e.g. video or images) can be cached close to users to reduce latency. It stores data in ‘edge locations’.

Note on global / region / VPC / AZ level

Some AWS services are at an account level, e.g. IAM (Identity and Access Management), billing and Route 53. They are global and affect all regions & all services that sit below them.

Some are at a region level e.g. S3, CDN, DynamoDB, SNS, API Gateway, Lambda. These services are managed by AWS – they’re in your region but not in your VPC.

Some are at a VPC level e.g. route tables, Internet Gateways, security groups, NACLs.

Some are at an AZ level e.g. EC2, EBS, RDS.


Product Backlog Refinement in SCRUM

16 Jun

Within the Scrum Framework – there are numerous GASPs (Generally Accepted Scrum Practices). The following 4 meetings are all GASPs:

• Sprint Planning
• Daily Stand-up
• Show and Tell/Sprint Review
• Sprint Retrospective

There have been efforts to include a 5th meeting to the list of GASPs:

• Product Backlog Refinement (PBR session)

AIM OF THE PBR SESSION
The overall aim of this meeting is to manage the product inventory and ensure that the product backlog (i.e. anything outside of a Sprint) is up-to-date. This is done through the following PBR activities:

I) Refinement:
• Progressively breaking down large items (EPICs) into smaller items (features/use cases etc) that can be implemented in a single Sprint
• Grouping items based on commonality (technical delivery/product goal etc)
• Adding detail – such as acceptance criteria – to items in order to generate a common understanding

II) Estimation:
• Pre-Sprint, high-level estimation of items in the product backlog will facilitate delivery planning
• Methods include – story point estimation (e.g. planning poker on the Fibonacci sequence), t-shirt sizes, bucket estimation, blink estimation

III) Prioritisation:
• Items are prioritised according to business value (this is primarily identified by the Product Owner/stakeholders/user data)
• Items are independent as per the INVEST criteria – therefore the order of items on the product backlog leads to a prioritised Sprint backlog

IV) “Ready” state:
• Items are discussed – with issues/questions/actions being identified
• Agreement on what needs to be done in order to get items into “Ready for development”/”Ready for Sprint”

V) Communication:
• Team understands the bigger picture – i.e. the vision beyond the current Sprint

VI) Ideation:
• New, high-level user stories are discussed and added to the Product Backlog
• Bringing together members of the business and technical team to discuss ideation facilitates collaborative product development within a cross-functional team

GENERAL FORMAT OF THE PBR SESSION
I) Timing:
• Regular – product priorities/understandings are dynamic. The product backlog must therefore be responsive to change. It is recommended that PBR sessions are held every Sprint or 2
• Scheduled – Typically mid Sprint in order to avoid conflict with the Sprint Planning/Show and Tell/Retrospectives
• Duration – Timeboxed – typically to 1.5 – 2 hours

II) Participants:
• The Product Owner is primarily responsible for the Product Backlog. The Scrum Master is responsible for facilitation and the removal of obstacles. Attendance of both is therefore mandatory
• Team members – invited – however attendance is optional. Details of which stories will be discussed in the session should be provided in advance (this enables invitees to decide whether or not to attend)
• Small number of stakeholders can be invited to assist with prioritisation. Representation from both the business and technical team is preferred

III) Agenda:
• The entire product backlog is not discussed. Instead the agenda should cover items that are likely to come up in the next 3-4 Sprints
• The session should aim to achieve the following:
•• Agreement on story breakdown/high level definition
•• High-level estimation
•• Item prioritisation
•• Agreement on actions necessary to get items into a “Ready” state
•• Discussion of any new ideas
• At a high level – the aim of the PBR session is to ensure that items in the Product Backlog meet the DEEP criteria (Detailed Appropriately, Estimated, Emergent and Prioritised)

Applying Agile principles to requirement analysis

23 May

Background

The Agile methodology originated within the software development industry. Since its inception in 2001 – Agile has expanded beyond an initial developer-centric community – to being embraced by multi-discipline teams working across numerous industries.

The antecedent of Agile within IT was the Waterfall methodology. The Waterfall framework consisted of a series of sequential, discrete phases – with each phase conveniently mapped to a role/responsibility:

Analysis Phase -> Requirement Analysis (Business Analysts/Product Owners)

Design Phase -> UX (Designers/Usability Experts)

Development Phase -> Software Development (Developers)

Testing Phase -> QA (Manual Testers and Developers in Test)

Delivery Phase -> Release Management (Project Managers)

Due to the increasing popularity of Agile – requirement analysis has been encouraged to transition from being a stand-alone phase owned by BAs/POs – to become a project facet that can incorporate Agile principles.

In what ways can requirement analysis adopt Agile principles?

Collaborative requirement analysis

Prior to Agile – the practice of the development team being presented with an upfront, non-negotiable, detailed requirements document (BRD/functional specification etc) was common. With the advent of Agile – requirement analysis should no longer be restricted to the interaction between BAs/POs and the business – instead we should embrace collaborative requirement analysis:

A popular collaborative requirement technique is the “3 Amigos”.  This process involves the developer, BA and QA discussing the requirement specification in a workshop. Each Amigo will offer a unique perspective – through discussions the Amigos will identify edge cases, undefined requirements, opportunities and potential reuse. The 3 Amigos technique can also reduce the risk of incomplete features being pushed into development by the product team – requirement specifications must be pulled into development when they have been reviewed and accepted by the 3 Amigos.

Collaborative requirement analysis facilitates a project-wide sense of ownership – and also communicates a common understanding of what features need to be built. Collaborative requirement analysis produces more robust specifications – and reduces the role-based silos that can exist on projects.

Detail as an emergent property

Agile artefacts such as technical spikes and development iterations mean that high-level requirements can be considered sufficient at project initiation. Low fidelity requirement assets (e.g. user stories /”back of the napkin” designs) should be employed on Agile projects:

Just-in-time requirements analysis (JITRA) is based on the concept that requirements should only be specified at the level of detail required for upcoming development. JITRA states that the further in advance of development requirements are defined – the more probable it is that the requirements will become out of date, leading to rework and wasted effort.

Detail should emerge when it is required – which is typically towards the middle/end of the project lifecycle. Initial requirement analysis should be focussed on business justification and solution scope.

Embrace change

Specifications will evolve throughout the project lifecycle; all team members must acknowledge the benefit of responding to change. Adapting to changes in circumstances/urgency/understanding is crucial – requirement analysis should be considered an iterative rather than exhaustive process:

In terms of systems theory – project teams should be viewed as open systems. As the system will tend towards a steady state – change should be encouraged and communicated at an organisational level. Regular priority sessions, stakeholder workshops and competitor reviews should be used to mitigate resistance to change.

Incorporating feedback is crucial to the success of a project. Requirements are not unchangeable statements – they only reflect the current and expected situation, both of which are liable to change.

Necessary documentation

The adoption of Agile principles does not mean that requirements should not be documented. Requirement documentation is vital for developers, QA and the business stakeholders:

The principle of living documentation should be embraced. This means that all documentation needs to be accessible and up to date. Business users, developers and QA should be able to request requirement changes. Documentation is most valuable when it is understandable by all team members, available and responsive to change.

Lightweight documentation such as feature files and high level process maps summarise the output of the requirement analysis process. The Agile methodology encourages appropriate documentation – superfluous detail is wasted effort; Agile does not negate documentation.

Continuous process improvement

Requirement processes should not be viewed as immovable obstacles. Instead these processes should evolve and adapt to meet the needs of the project. Where a process or artefact ceases to produce the expected value – it should be reviewed and changed by a self-organising team:

Retrospectives are a popular technique for identifying improvement opportunities. Team members meet to discuss what the team needs to start doing, stop doing, and continue doing. Regular (every 2/3 weeks) and actionable retrospectives provide an open forum for continuous process improvement.

Requirement analysis processes (to-be-analysis, process mapping, stakeholder workshops etc) can always be improved. A technique that is effective for one team – may not be effective for another – or at least may require several modifications.

Continuous delivery

The Agile methodology promotes product iterations and regular releases. In order to align with this ethos, requirement analysis must produce a constant output – a steady flow of requirements will avoid the “big bang” requirement delivery that characterised the Waterfall methodology:

Minimum Viable Product (MVP) provides the scope of requirement analysis. The MVP will be delivered in multiple iterations – requirement analysis must be constantly baselined against the MVP and ensure that there is a sufficient specification available for each delivery.

Shorter delivery timescales encourage more frequent requirement analysis output. Specifications should be aligned to the MVP – features need to be deliverable and contribute to the MVP vision.

Conclusion

Iterative, collaborative Agile development has replaced the sequential Waterfall development methodology. Prior to Agile – the product team could hand over a list of detailed requirements – which would then be used by developers to build the product. In order to align requirement analysis with Agile development practices – the following principles need to be applied: requirement collaboration, iterative specifications, embracing change, necessary documentation, continuous improvement and continuous delivery. By adopting these principles requirement analysis will transition into the Agile world, produce better specifications and ultimately lead to greater quality products.

How 2 types of BA can transition from Waterfall to Agile

25 Apr

Introduction

Most BAs have experience of traditional Waterfall development. Within the Waterfall framework there are the following sequential phases:

-> Analysis
-> Design
-> Development
-> Testing
-> Delivery

Waterfall projects kick-off with an “Analysis” phase. This is designed to assess the problem and scope out a solution (as-is and to-be analysis). The project will then progress through the remaining discrete phases – until the product is delivered.

In the same way that the “Design” phase had designers and the “Testing” phase had testers, the “Analysis” phase had analysts (BAs).

After working on a number of Waterfall projects – I noticed that 2 types of BA evolved.

TYPE A (“Specialists”/”Purists”)

• These individuals were lucky enough to work in a mature BA practice. The analyst role was well established – BA deliverables were often standardised (e.g. BRD templates/peer review)
• They focused primarily on the “Analysis” phase of a project – in the same way that designers focused on the “Design” phase
• When the “Analysis” phase was complete – they moved onto another project. It wasn’t uncommon for a BA to work on multiple projects/workstreams simultaneously

TYPE B (“Generalists”)

• These individuals almost had to define their own roles. They worked in organisations where the BA role/responsibilities weren’t fully understood
• BAs were treated as generalists. They were often asked to “do a bit of everything”
• Their deliverables stretched across the project lifecycle and included: requirement specs (“Analysis” phase), wireframes/low fidelity mock-ups (“Design” phase), clarifying queries from the dev team (“Development” phase), testing the product (“Testing” phase) and creating user guides/training end-users (“Delivery” phase)
• These individuals worked on individual projects – and were often influential

How can these 2 types transition into Agile?

TYPE A (”Specialists”/“Purists”)
• These individuals can continue to focus on specifying new features (i.e. requirements/acceptance criteria). In order to maintain a purist approach they will need to work 1 Sprint ahead of the developers. Most of the work they produce will go into the following development Sprint
• Career progression -> Senior BA role

TYPE B (“Generalists”)
• These individuals can continue to focus on a diverse range of deliverables: they can produce requirement specifications, provide design feedback, facilitate product planning, support development and perform ad-hoc testing. In order to provide the developers with a constant backlog of work – they will need to spend 50% of their time on the current Sprint and 50% of their time on the upcoming Sprint.
• Career progression -> Product Owner (possibly also Scrum Master role)

Thoughts/feedback?

What is your opinion? Are there TYPE As and TYPE Bs? Where do they fit in an Agile world?


Brian the Business Analyst – part 2

24 Apr


Product Owners vs Business Analysts – MOSTly different roles?

14 Feb

The MOST acronym (Mission, Objectives, Strategy, Tactics) can be used to describe the main differences between the Product Owner and Business Analyst roles on a project.

Mission

  • This is the vision statement for the product. It should be concise and value driven.
  • This will provide answers to the following questions: What is the intention and long term direction of the product? Who is the user-base/target market? What is the business benefit?
  • Example: “We want to deliver the most popular Sports app in the World – with unparalleled journalist content”
  • Responsibility of the Product Owner.

Objectives

  • These are derived from the product mission. These are targets that will translate the product mission into reality.
  • These will provide answers to the following questions: What goals will lead us to achieve our mission? What will need to be created? What will need to be changed? What will need to be acquired?
  • Example: “We need to deliver live video streaming in the iOS app”
  • Responsibility of the Business Analyst.

Strategy

  • This is a description of how success will be achieved. This should describe the features and their prioritisation.
  • This will provide answers to the following questions: How will the product scope be delivered across iterations? What is the Minimum Viable Product for release 1.0/launch? Which features are nice-to-haves?
  • Example: “Pundit analysis, live video & match statistics are required for the first release – personalisation will be delivered in the second release of the app” 
  • Responsibility of the Product Owner.

Tactics

  • These are derived from the product strategy. These are the deliverables that will be provided by the development team.
  • These will provide answers to the following questions: How can we achieve tangible benefits in the next Sprints? What tasks need to be completed? How can work be grouped together logically & in terms of delivery?
  • Example: “Provide live streaming of our CMS videos using Media Player”
  • Responsibility of the Business Analyst.

Summary

Within MOST there are 2 definition activities (Mission and Objectives) and 2 planning activities (Strategy and Tactics).

  • The Mission (high level product definition) is done by the Product Owner.
  • The Objectives (detailed product definition) are done by the Business Analyst.
  • The Strategy (high level product planning) is done by the Product Owner.
  • The Tactics (detailed product planning) are done by the Business Analyst.

The 3 Amigos – BA, QA and Developer

7 Feb

The 3 Amigos (sometimes referred to as a “Specification Workshop”) is a meeting where the Business Analyst presents requirements and test scenarios (collectively called a “feature”) for review by a member of the development team and a member of the quality assurance team. The overall aims are to ensure:

i) COLLABORATIVE REQUIREMENTS: a common understanding of what needs to be built, business justification is conveyed for a feature, a project-wide sense of ownership.

ii) COLLABORATIVE TESTS: all team members contribute to testing the quality of a feature, business & technical edge cases are identified, testing restrictions are conveyed, test duplication within the team is reduced.

iii) READY FOR DEV CONSENSUS: Pull vs Push approach – features are pulled into a Sprint when they have been reviewed and accepted by the 3 Amigos. Features cannot be pushed into a Sprint – this reduces the risk of the team incorrectly assuming that a feature is ready for dev.

The general format of the 3 Amigo process is:

  • A time boxed meeting (30 mins – 1 hr max) is set up 1-2 Sprints before a feature is expected to go into development.
  • 1 Developer + 1 QA are identified and invited to the meeting. These are expected to be the individuals who will develop and test that feature.
  • The Business Analyst begins the meeting by introducing the feature to the Amigos. Why is the feature needed, is it like anything they’ve done before, what should it look like on the site?
  • The Business Analyst presents the requirements (prepared prior to the 3 Amigos) – these are reviewed by the Amigos who provide feedback. The requirements should be updated in the session until the requirements are deemed “Ready for Dev”.
  • The Business Analyst will then present the test scenarios (prepared before the meeting) – these are also reviewed by the Amigos. Feedback is incorporated until it is agreed that the test scenarios cover the feature’s expected behaviour – this ensures good test coverage.
  • The feature/specification is now “Ready for Dev” – it has been accepted by the developer and QA.
  • Developer – asked to identify any tasks that need to be done pre-development e.g. do they need access to an endpoint, do they need to see variants of the visual design? These tasks are assigned and put on the current Sprint board.
  • QA – asked to identify any tasks that need to be done pre-feature testing e.g. do they require access to a system, do they need mock data? These tasks are assigned and put on the current Sprint board.
  • Estimate: the Amigos should have a common understanding of the requirements and the test scenarios (the “feature”). This is a good opportunity for the developer and QA to provide estimates.

Lessons we have learnt:

  • The developer and QA involved in the 3 Amigo meeting should be the individuals who will develop and test the feature. We have explored the idea of “any developer/QA can be involved in the 3 Amigos and any developer/QA can then pickup the feature” – however we have learnt that maximum benefit comes from the Amigos being involved in a feature until its completion.
  • The requirements and test scenarios should be maintained in a place where everyone has access. This gives individuals/stakeholders (even non-Amigos) VISIBILITY of the requirements and tests.
  • What language should be used for requirements and test scenarios? Technical? Business? Plain English? We have found that DOMAIN language is the most useful … if you work in banking then all 3 Amigos should know what a derivative is – but not everyone is expected to know what a cron job is. If in doubt – maintain a glossary of terms.

Challenges:

  • For the BA: although the BA still tacitly “owns” the requirements – part of the collaborative 3 Amigo process is a shared level of ownership. This can be difficult for a BA – as requirements are one of the main BA deliverables.
  • For the Developer: there may be some resistance to reviewing requirements/test scenarios as these are “non-development activities”. In our experience the 3 Amigos process enables the developer to have greater visibility of the requirements, provide technical feedback and convey challenges/blockers.
  • For the QA: similar to the BA – they may need time to adjust to the test scenarios being under common ownership.
  • For the Product Owner: the Product Owner isn’t an Amigo. The process assumes that the BA represents the Product Owner/stakeholders. Once the requirements and test scenarios have been through the 3 Amigo process – it can be worth re-confirming them with stakeholders.
  • For the PM: the 3 Amigo process limits what a PM can put into a Sprint – features are pulled into a Sprint by the team and not pushed by the PM. The BA can begin to take on some traditional PM duties: task breakdown (tasks are identified in the 3 Amigos), estimations (developer + QA estimates are provided in the 3 Amigos), limited Sprint planning etc.
  • For the Agile enthusiast: to be a fully iterative process there may be “pre amigo” meetings – and complex features may require several sessions.