Leading Results

In search of enlightened leadership and inspiring results


letsstw asked: Isaac, I just watched "Your Top 7 Agile Questions Answered" on YouTube. Thanks for this brilliant webinar on that matter. You pointed out the velocity aspect in detail, ending in a discussion where you said something like "earned value analysis works great for Agile. Send an email if you want to learn more" ... so, here is the email :-) I'd highly appreciate your feedback. Regards, Stefan.

Hi Stefan,

Thank you for the kind words.  I’m glad you found the webinar informative.  Yes, there is a popular misconception that agile and EVM metrics are mutually exclusive, but I have used EVM metrics to articulate the progress of projects delivered by agile teams for many years.  I just copied an old blog post into Leading Results that addresses some of the details.   Take a look and let me know if this helps.


Questions about Agile and EVM

Over the years I’ve received a number of inquiries about how Earned Value Management (EVM) metrics apply (or don’t) in the context of agile execution.  

So I’ve finally broken down and dug out a very old internal blog post I wrote (pre-LeadingResults) on the subject while leading the agile practice at a previous employer.  It was written for internal consumption and reflects thoughts that are almost a decade old, so be gentle with the comments, but I would love to discuss the premise further with anyone who’s interested!

_________________________________________________________

Agile Measurement and Control

I was trading emails the other day with a peer who is working with a client in the very early phases of an Agile pilot program, and I thought part of our discussion might make a good post.

It seems this client has, in the past, attempted to manage their projects using some PMI best-practice approaches, and is looking to understand how those tools and techniques compare to the Agile tools and techniques being considered.  They’re particularly concerned about losing the things that have worked well for them in the past - they don’t want to throw the baby out with the bath water.

One item that they want to maintain is Earned Value Management (EVM).

For those who may not be familiar with it, EVM is a technique used to measure the health of a project in terms of schedule and budget.  It provides a scientific and quantifiable way to gauge whether or not a project is effectively tracking toward completion - on-time and on-budget.

In my pre-Agile life I was a big advocate of EVM, and really still am.  EVM always made a lot of sense to me in that it provided continuous feedback on where the project stood - allowing me to make adjustments early in the project if things started to trend in the wrong direction.  The only real problem was that EVM can be difficult to set up, time-consuming to maintain, and sometimes difficult to understand for those who aren’t accustomed to it.

What I found when I started working with Agile Measurement and Control techniques is that they and EVM really only differ in terminology - the Agile techniques simplify the implementation and maintenance of the data, and present the resulting information in an intuitive way that allows not just me to make necessary adjustments, but the project team and management to understand and make those adjustments themselves (can you say “empowered teams”?).

So - a very basic EVM primer:

TERMS:

Planned Value (PV) - the amount of effort initially planned to complete the work expected to be completed at a given point in the project - expressed in $

Earned Value (EV) - the amount of effort initially planned to complete the work that has actually been completed at a given point in the project - expressed in $ 

Actual Cost (AC) - the cost actually incurred by the project at a given point in the project - expressed in $.

Schedule Performance Index (SPI) - measure of schedule health. SPI = EV/PV.  (=1) on schedule; (<1) behind schedule; (>1) ahead of schedule.  

Cost Performance Index (CPI) - measure of budget health.  CPI = EV/AC.  (=1) on budget; (<1) over budget; (>1) under budget.
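
If it helps to see the math as code, here is a minimal sketch of the two indices in Python (the function names are mine, not part of any EVM standard):

```python
# A minimal sketch of the EVM indices defined above.

def spi(ev: float, pv: float) -> float:
    """Schedule Performance Index: =1 on schedule; <1 behind; >1 ahead."""
    return ev / pv

def cpi(ev: float, ac: float) -> float:
    """Cost Performance Index: =1 on budget; <1 over; >1 under."""
    return ev / ac

print(spi(ev=950.0, pv=1_000.0))  # 0.95  -> slightly behind schedule
print(cpi(ev=950.0, ac=900.0))    # ~1.056 -> under budget
```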

Comparison to Agile Measures (a simplified example)

PV = Projected Velocity

EV = Actual Velocity

AC = Agile methods don’t explicitly tell you to measure actual costs for each iteration, but I would argue that it’s assumed.  Every team has a burn rate that consists of team effort (billable hours) + any capital and/or miscellaneous costs.  So you can easily determine your Actual Costs for an iteration and can determine your Planned Costs by computing the team’s expected burn rate for each iteration.

So, as a simplified example, if you have a release plan with 200 story points that you intend to deliver in five 2-week iterations, and your team has a planned burn rate of $40,500 per iteration (5 people * $90/hr * 45 hr/wk * 2 wks), then you can deduce that:

  • Projected Budget = $202,500
  • Projected Velocity = 40
If I wanted to express progress using EVM terminology, I would convert Story Points to a unit of “value” by dividing the budget by the # of story points in the backlog - $202,500/200 = $1,012.50 - so that each Story Point delivered provides $1,012.50 in Earned Value (EV), and your Planned Value (PV) is $40,500 per iteration.

***This assumes that you haven’t created a change buffer to accommodate and embrace the inevitable change.  Typically I use 20% as a rule of thumb - if I have an initial backlog of 200 points, I plan the release and build the team to be able to deliver 240 points, and set the expectation with the Product Owner that we have approximately 40 points “in the bank”.  Not everyone does this though.

So, if at the end of iteration 1 you have an actual velocity of 42 story points, then your EV = $42,525.

And let’s say that your team, in iteration 1, didn’t average 45 hours per week.  Instead they averaged 42 hours per week.  So your Actual Cost (AC) for that iteration was $37,800.
  • CPI = EV/AC = 42,525 / 37,800 = 1.125 (you’re under budget!)
  • SPI = EV/PV = 42,525 / 40,500 = 1.05 (you’re ahead of schedule!)

But let’s say that the Product Owner added 25 story points to the product backlog during the course of Iteration 1, and expects those to be completed for this release, without adding an iteration or adjusting resources…

Well, now you’ve changed your planned velocity:  225/5 = 45.  So your Planned Value for each iteration increases to $45,562.50

  • CPI = EV/AC = 42,525 / 37,800 = 1.125 (you’re still under budget!)
  • SPI = EV/PV = 42,525 / 45,562.50 = 0.933 (you’re now BEHIND schedule!)
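
If you want to check the arithmetic yourself, here is the whole example as a short Python script (a sketch only - every figure comes straight from the example above):

```python
# Reproducing the simplified example above.

points_planned  = 200                 # initial backlog, in story points
iterations      = 5                   # five 2-week iterations
burn_rate       = 5 * 90 * 45 * 2     # 5 people * $90/hr * 45 hr/wk * 2 wks = $40,500

budget          = burn_rate * iterations    # $202,500
value_per_point = budget / points_planned   # $1,012.50 of Earned Value per point

# End of iteration 1: 42 points delivered; team averaged 42 hr/wk instead of 45.
ev = 42 * value_per_point   # $42,525.00
ac = 5 * 90 * 42 * 2        # $37,800.00
pv = burn_rate              # $40,500.00 planned per iteration

print(round(ev / ac, 3))    # CPI = 1.125 -> under budget
print(round(ev / pv, 3))    # SPI = 1.05  -> ahead of schedule

# The Product Owner adds 25 points: planned velocity becomes 225/5 = 45,
# so Planned Value per iteration rises to 45 * $1,012.50 = $45,562.50.
pv = 45 * value_per_point
print(round(ev / pv, 3))    # SPI = 0.933 -> now behind schedule
```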

This provides you the early warning of project health that both EVM and Agile measures aim to provide.  Both provide empirical process control capabilities.  You can now make adjustments by:

  1. adjusting the schedule
  2. increasing resources
  3. reducing scope

I find that the agile measures are far easier to work with, but if it makes sense to the client to think in terms of CPI/SPI, then that’s a great process customization.  If it helps your team and management communicate and have visibility into what is happening - then do it!

What EVM doesn’t provide is a true measure of Business Value delivered.  EVM “value” measures are really effort measures, just like story points.  

In the world of Agile metrics we add the concept of “Value Points” which, when combined with a burn-up chart, can add a very interesting dimension to your measurement toolkit.  But that’s a topic for another post.

2 Key Points to close with:

  1. Just because a measurement technique is advocated by traditionalists doesn’t necessarily make it bad - don’t throw the baby out with the bath water
  2. If anyone suggests that agile doesn’t provide rigorous controls, measures or oversight for “the enterprise” - you know better.


Specialized Functional Teams

Scrum, like most Agile frameworks, is built around the concept of small, dedicated, cross-functional teams that are empowered to self-organize and self-manage around a shared commitment to delivering “potentially deployable” increments of working software frequently – typically every two weeks.  Each Scrum team has within it the necessary mix of skills and knowledge to deliver on this commitment.  This concept of a self-contained, self-sufficient team maximizes flexibility and collaboration by minimizing external dependencies and allowing the team to focus on their work.

In organizations early in their Agile Evolution, where functional specialization and departmental silos have been deeply entrenched, it’s often necessary to evolve gradually toward teams that are fully independent and self-reliant.  And it should be noted that, in larger, more complex organizations, and for those operating in a product domain that involves highly specialized technical skills, the value to be realized from truly self-reliant teams often doesn’t justify the effort and expense.

In these environments it is necessary for Scrum teams to leverage capabilities from outside the team in order to meet their commitments.  These are capabilities that involve skills that are a) held by a small number of people in the organization; b) needed by a large number of teams; c) not needed by teams in sufficient quantities or frequency to justify dedicating a person to each team (even if you had enough people to do so).    

In environments where this is the case, a common challenge that emerges quickly after Scrum teams are initially formed is that the existing policies and processes used by these functionally specialized groups cannot keep up with the needs of the Scrum teams.  Scrum teams deliver small increments of working product that are “potentially deployable” in short (2-week) iterations.  This puts pressure on the organization.  Delays of even a few days, waiting for a response or deliverable from an external group, can cause Scrum teams to miss their iteration commitments, which reduces their velocity, which impacts release scope and/or timeline.

Scrum doesn’t fix your problems - it just exposes them, and gives you a framework in which to address them. 

As Scrum teams become faster and more capable, they naturally put pressure on the organization, exposing bottlenecks and inefficiencies that weren’t noticed before.  A functionally specialized group that takes 2 weeks to deliver an effort estimate and can then commit to assigning people to that effort (at 10-15% dedication) within 3 additional weeks after the estimate is accepted, may have seemed perfectly reasonable before Agile and Scrum.  But now that organization has become the limiting factor for the entire value delivery system.  Just as we’ve dedicated time and effort to improving the effectiveness and flexibility of the core software delivery team (the Scrum team), we must now dedicate ourselves to improving the cycle time and throughput of the functional specialty teams that support our Scrum teams.

How a given specialty area, in a given organization, should be addressed is going to be largely unique to that specific situation.  However, there are a number of common patterns that generalize from successful implementations.

Empower the Scrum Teams:

Often, many of the skills held by the specialized functional team are also held (at a generalist level) by members of the Scrum team (or could be, with a bit of training).  In these cases, as much as 90% of the specialized work could be done by the generalists on the Scrum team.  But, in the interest of oversight and ensuring that the remaining 10% is done correctly, all of the work is done by the functional specialists.  By enabling the Scrum team to do more of their own work, the Scrum team’s flexibility is increased (as well as their opportunity for professional development), the burden on the specialized functional team is reduced, and the functional specialists are freed to focus on the high-value 10% and the oversight work for which they are most needed.

Streamline the Functional Teams:

Just as Scrum is being used to streamline the product development cycle – removing waste, shortening cycle times, improving feedback and optimizing for throughput – there may be opportunities to apply agile principles and practices to the specialized functional team processes.  The specific frameworks and practices applicable to a given team will vary – some may leverage Scrum, others may be a fit for a Kanban system, still others may choose a hybrid approach.  Understand the nature of the work being done, focus on the needs of the customer(s) being served, and implement a process that favors effectiveness over efficiency and enables rapid feedback, empirical control, and continuous improvement.

Groom the Backlog:

For certain specialty functional groups, the best way to assist the Scrum team – if they cannot be part of the team – is to provide their input and contribution prior to the Scrum team beginning a backlog item.  System architects and User Experience Designers often fit into this space.  Architectural and design issues can be considered prior to a backlog item being pulled into an iteration – as part of the backlog grooming activities.  In these cases, architectural or design reviews may be considered part of the “Definition of Ready” for a story.  It should be noted that this pre-iteration activity is by definition risky and potentially wasteful – as work is being done before all information is available or the final decision to implement the story has been made.  It may be that this risk and waste is acceptable, but care should be taken to avoid reverting to “big up-front design” under the guise of backlog grooming.  Whenever possible, combine this approach with Empowering the Scrum Team to minimize potentially wasteful activity.

Forward Looking:

By our definition, specialized functional teams support a number of different Scrum teams, who will need input and contribution from the specialized team at differing intervals and levels of effort.  This means that the workload for the specialized team will vary over time.  This variability is often at the core of Scrum team frustration in not being able to get timely assistance – because they’re requesting assistance at the same time as every other Scrum team in the organization.

As Scrum teams mature they will begin to exhibit a consistent velocity (within statistical control limits).  This provides the level of predictability necessary to allow realistic release planning and road-mapping (see the 5 levels of Planning).  By having a plan, and providing visibility as that plan evolves over time, teams can predict when stories are expected to require involvement from specialized functional groups.  Those specialized functional groups can then, by looking across the release plans for all supported Scrum teams, anticipate the level of support that will be expected over time, and can feed any concerns back to the Scrum teams so that they can adjust accordingly.
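
To make the pattern concrete, here is a hypothetical sketch of that aggregation - the data shapes (release plans keyed by team, stories tagged with the specialty hours they are expected to need) are invented purely for illustration:

```python
# A hypothetical sketch of the Forward Looking pattern: given each Scrum team's
# release plan, sum the expected demand per specialty group per iteration.
from collections import defaultdict

release_plans = {
    "team-red":  [{"story": "R-12", "iteration": 3, "needs": {"UX": 8}},   # hours
                  {"story": "R-15", "iteration": 4, "needs": {"DBA": 4}}],
    "team-blue": [{"story": "B-07", "iteration": 3, "needs": {"UX": 16}}],
}

demand = defaultdict(lambda: defaultdict(int))  # group -> iteration -> hours
for team, plan in release_plans.items():
    for story in plan:
        for group, hours in story["needs"].items():
            demand[group][story["iteration"]] += hours

for group, by_iteration in demand.items():
    print(group, dict(by_iteration))  # e.g. "UX {3: 24}" - both teams hit UX at once
```

Looking across all supported teams this way lets the specialty group see the iteration-3 spike weeks in advance, rather than discovering it when both requests arrive on the same day.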

It should be noted that, while the other patterns described are intended to remove cycle-time and throughput constraints in your specialty functional groups, the Forward Looking pattern focuses on identifying and managing those constraints.  In order to effectively manage and maximize the value delivery capability of your organization, both constraint identification/management and constraint removal tactics must be employed.  Just as we take a systems perspective to maximizing value in our Scrum teams - empirically measuring and continuously improving the team’s effectiveness - we must do the same at an organizational level.


User Stories or Use Cases? - YES!

Over the past several months I’ve had a recurring conversation with various large, enterprise organizations transitioning from traditional approaches to more agile methods.  

The topic of this conversation has been a discomfort with User Stories, and a desire to maintain their investment in Use Cases.  

These organizations come to this conversation hesitantly, but steeled for battle - convinced that I am going to try and dissuade them from their ‘un-agile’ ways and insist that they adhere to agile ‘best practices’.

They’ve been generally surprised by my response, so I thought I’d share it here:

"I actually really like Use Cases; though I tend to use them a little bit differently than it sounds like you are.  I actually combine them with User Stories.  But before I tell you how I do that, let me ask you a question -
Why do you want to keep Use Cases?"
'Why' is always a tough question, so we ramble around a bit, but we ultimately get to the point where we say that the Use Cases have 2 primary purposes:
  1. They provide a reference for all the details of what we’re doing while building and testing
  2. They provide a history of what was done, and how the system behaves
At which point I tell them, “that’s perfect, because that’s pretty much what I’ve used them for too.  What I find all too often is that organizations have a 3rd, hidden purpose for Use Cases - they provide an alternative to talking to each other…”
They look at me kind of funny and I continue - 
"See, in those organizations they view ‘good’ requirements documentation - whether in Use Case format or otherwise - as a stack of paper that they can drop on a developer’s desk, so that the developer doesn’t need to talk to anyone.  If the developer does need to have a conversation with the analyst (or, God forbid, the customer) then that’s a sign of bad or less than ideal requirements documentation - that we didn’t really specify them in enough detail…"
Usually there are a few knowing smiles and nodding heads at this point, so I continue -
"So let me tell you how I’ve used Use Cases.  
I start with a list of Use Case titles - basically a Use Case Catalog.  They generally look something like:
 'As a <User Type> I need to <Activity> so that <Business Value>.'  
We called that our Initial Backlog.
Now, those probably look a lot like User Stories, right?”
They nod.
"Well, yes and no actually."
They then squint at me.
"See that’s just a Story CARD, which isn’t really a story - it’s a ‘reminder to have a conversation’ - which is where the STORY comes from.
We have those conversations on a just-in-time basis, through progressive elaboration - which we called Backlog Grooming.
Now, obviously we don’t want to keep having the same conversations over and over again, so we need to capture the results somehow - so we can refer back to them to know what we’ve discussed, and so we can update them if subsequent discussions lead to changes.
For some teams, the best way to capture those details is with activity diagrams, state diagrams and wireframes.  For others it’s a bulleted set of Acceptance Criteria and notes.  And for others it might be Use Case Scenarios with a data dictionary, business rules and other attributes…
In fact, it might be any combination of those things based on the story and the preferences of the team.
I’ve worked with lots of teams that used the Use Case format as their means of capturing the results of conversations.
And many of those teams were obligated to provide requirements documentation as part of their product - for regulatory or governance purposes, or as an artifact for support and maintenance.
So, for those teams, we made having an up-to-date set of Use Cases part of our Definition of Done.  Having up-to-date Use Cases allowed us to keep our product Potentially Deployable - which included meeting regulatory and governance documentation needs.”

Being agile doesn’t necessarily mean discarding everything you know.  And it certainly doesn’t mean blind adherence to a set of tools and techniques approved by the agile police.  

Being agile means holding to a set of values and principles that ultimately come down to customer, value, feedback, quality, transparency and sustainability.  

Being agile means questioning, experimenting, inspecting and adapting, and using what works.


Measuring Success - Quality

This post has been a long time coming.  I’ve started and restarted it at least a half-dozen times.  

Quality - there may be no more multi-faceted and powerful attribute in successful software development.  Quality is central to everything we do and seek.    

  • Higher Quality leads to greater Productivity, throughput and velocity
  • Higher Quality leads to increased Responsiveness, reduced cycle-times, shorter lead-times
  • Higher Quality leads to improved Customer Satisfaction, Employee Satisfaction
  • Higher Quality leads to better Predictability, reduced risk, improved decision making

Or at least that’s my hypothesis…

And that hypothesis is widely shared amongst the agile and product development communities.  We’ve developed numerous principles, practices and techniques intended to improve quality:  Test Driven Development; Continuous Integration; Automated Build and Deploy; Pair Programming; Customer Demos; Behavior Driven Development; Acceptance Test Driven Development; and Set-based Design techniques are all at least partially focused on yielding quality improvements.

But quality can’t simply be viewed as a set of tools and techniques - independent variables/levers which we hypothesize will lead to improved business outcomes.  Quality is also a business outcome unto itself.  

This series emphasizes the need to focus on business outcomes (success) 1st - methods and practices 2nd.  So, putting aside the methods and good practice assumptions of agile, and focusing solely on the business outcome of improved quality:

QUALITY = FEWER DEFECTS IN PRODUCTION

We apply agile quality practices and techniques, because we believe that doing so will yield improved business outcomes - Quality (fewer defects in production) being one of those outcomes - along with Productivity; Predictability; Responsiveness; Customer and Employee Satisfaction.

Large, manual, end-of-cycle execution of formal testing by an independent QA organization is also a method aimed at improving these business outcomes.  I hypothesize that it is less effective than alternative agile techniques.  But I don’t take that on faith, and neither should you.  We must test our hypothesis.

HOW DO WE MEASURE QUALITY?

There are innumerable quality metrics that have been devised over the years - each with its own strengths and weaknesses.  In my experience, it’s important to keep metrics simple, and to not let great become the enemy of good enough.  In other words, a metric that does a good job of providing insight into the quality of your product/solution, and is simple to collect and interpret, is likely better than a metric that would do a great job but is more complicated.

For my part, I’ve had success over the years using a couple of relatively simple metrics:

  • DEFECT DENSITY - # Defects / KLOC
  • DEFECT ARRIVAL - # Defects Identified / Month

WHAT DO WE CONSIDER A DEFECT?

In both cases I include only defects in the production system.  

Measuring defects found and eliminated during the development cycle may be useful for managing your development and quality processes.  But from a business outcomes perspective our focus is reducing the # of defects that make it to production - not making assumptions about how or when to achieve that.

NOT ALL DEFECTS ARE CREATED EQUAL

Any good metric should drive more questions than answers.  I find it useful to tag Defects with information about Type and Severity, so that we can consider some of those questions more deeply.

  • Our Defect Density is high; but our Severity 1 & 2 density is low.  What is the impact on other outcomes (Productivity; Customer Satisfaction; etc…) if we were to invest in reducing our low severity defects?
  • Our Defect Arrival is very high immediately following a major release.  But the defects are mostly Type = Usability.  Why are our customers having such a tough time using our new features; and how can we ease them through the learning curve? 
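
As a concrete illustration, here is a minimal sketch that computes both metrics from a list of tagged production defects (the data shape and figures are invented for illustration):

```python
# Defect Density (# defects / KLOC) and Defect Arrival (# defects / month),
# computed from production defects tagged with Type and Severity.
from collections import Counter

defects = [
    {"id": 101, "month": "2011-03", "severity": 1, "type": "Functional"},
    {"id": 102, "month": "2011-03", "severity": 3, "type": "Usability"},
    {"id": 103, "month": "2011-04", "severity": 4, "type": "Usability"},
]
kloc = 120  # thousand lines of code in the production system

density = len(defects) / kloc                   # defects per KLOC
arrival = Counter(d["month"] for d in defects)  # defects identified per month
print(f"Density: {density:.3f} per KLOC; Arrival: {dict(arrival)}")

# Slicing by the tags is what turns the metrics into questions:
sev_1_2 = sum(1 for d in defects if d["severity"] <= 2)
usability = sum(1 for d in defects if d["type"] == "Usability")
print(f"Sev 1-2 density: {sev_1_2 / kloc:.3f}; Usability defects: {usability}")
```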

You may have some hypotheses based on these questions.  Perhaps those hypotheses involve application or improved use of agile tools and techniques.  What experiments would you run to prove or disprove your hypothesis?  What new questions will those results yield?


Recommended Reading

In my bag right now:

  • Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise; Dean Leffingwell
  • The Knowing-Doing Gap: how smart companies turn knowledge into action; Jeffrey Pfeffer & Robert I. Sutton
  • Strategy and the Fat Smoker: doing what’s obvious but not easy; David Maister

Essential Topics

  • Agile Software Development with Scrum; Ken Schwaber and Mike Beedle
  • Extreme Programming Explained; Kent Beck
  • Kanban: Successful Evolutionary Change for your Technology Business; David Anderson
  • Kanban & Scrum: Making the Most of Both; Henrik Kniberg & Mattias Skarin
  • Scrum and XP from the Trenches; Henrik Kniberg (download from infoq.com)

Product Owner

  • Agile Product Management with Scrum: Creating Products that Customers Love; Roman Pichler
  • User Stories Applied: For Agile Software Development; Mike Cohn
  • The Principles of Product Development Flow: Second Generation Lean Product Development; Donald Reinertsen

ScrumMaster

  • Collaboration Explained; Jean Tabaka
  • Agile Retrospectives: Making Good Teams Great; Esther Derby and Diana Larsen
  • Coaching Agile Teams: A Companion for ScrumMasters, Agile Coaches, and Project Managers in Transition; Lyssa Adkins
  • Agile Project Management with Scrum; Ken Schwaber
  • Agile Estimating and Planning; Mike Cohn

Delivery Team

  • Agile Testing: A Practical Guide for Testers and Agile Teams; Lisa Crispin and Janet Gregory
  • The Art of Agile Development; James Shore and Shane Warden
  • Refactoring: Improving the Design of Existing Code; Martin Fowler
  • Working Effectively with Legacy Code; Michael Feathers
  • Test Driven Development: By Example; Kent Beck
  • Clean Code: A Handbook of Agile Software Craftsmanship; Robert C. Martin

Managers & Executives

  • The Business Value of Agile Software Methods:  Maximizing ROI with Just-in-time Processes and Documentation; David F. Rico
  • Agile and Iterative Development:  A Manager’s Guide; Craig Larman
  • Management 3.0: Leading Agile Developers, Developing Agile Leaders; Jurgen Appelo
  • MacGregor; Arthur Elliott Carlisle (download)

Large Scale & Enterprise Agile

  • Scaling Lean & Agile Development:  Thinking and Organizational Tools for Large-Scale Scrum; Craig Larman and Bas Vodde
  • Practices for Scaling Lean & Agile Development: Large, Multisite, and Offshore Product Development with Large-Scale Scrum; Craig Larman and Bas Vodde
  • Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise; Dean Leffingwell
  • Scaling Software Agility:  Best Practices for Large Enterprises; Dean Leffingwell

Organizational Change

  • Switch: How to Change Things When Change is Hard; Chip Heath and Dan Heath
  • Drive: The Surprising Truth About What Motivates Us; Daniel H. Pink
  • Leading Change; John P. Kotter
  • Abolishing Performance Appraisals: Why They Backfire and What to Do Instead; Tom Coens & Mary Jenkins
  • Succeeding with Agile:  Software Development Using Scrum; Mike Cohn

Advanced Topics and Theory

  • The Principles of Product Development Flow: Second Generation Lean Product Development; Donald Reinertsen
  • The Goal: A Process of Ongoing Improvement; Eliyahu M. Goldratt
  • The Human Side of Enterprise; Douglas McGregor
  • The New Economics; W. Edwards Deming
  • Scoring a Whole in One; Edward Martin Baker
  • Understanding Variation: The Key to Managing Chaos; Donald J. Wheeler
  • The Black Swan: The Impact of the Highly Improbable; Nassim Nicholas Taleb


Measuring Results - Productivity

For many leaders, increasing the productivity of the development organization is their primary and overriding goal.  ‘Doing more with less’ is a mantra they preach to their teams continuously, and one that colors every decision they make.

Yet few have a clear and consistent definition of productivity, or an effective means of measuring it. 

Productivity:  a relative measure of the efficiency of a system in converting inputs into useful outputs.

Productivity: the ratio of the real value of outputs to the combined input of labour and capital.

Productivity: a measure relating a quantity and quality of output to the inputs required to produce it.

So in simplest terms:

Productivity = Valuable Outputs / Costly Inputs

Be Careful What you Wish For

While I am an ardent believer in the value of metrics, it is important to keep in mind the potential for unintended consequences.  Individual and group behaviors will evolve to meet your measures - not necessarily your goals.  So ensure that the measures you use promote the behaviors you desire.

Most documented productivity metrics in software development define a unit of output as either a KLOC (thousand lines of code) or as a Function Point.  Both have significant disadvantages.

KLOCs are generally favored for the simple reason that they’re unambiguous and easy to measure.  However, they can unintentionally promote poor design and coding practices.  Elegant, efficient, maintainable software takes more time to write, and takes fewer lines of code.  Therefore, a KLOC metric inadvertently creates a disincentive for building high quality software, and rewards poorly thought through designs and shoddy craftsmanship.

While Function Points don’t necessarily promote poor design and craftsmanship they have their own unique challenge - they are famously difficult and expensive to calculate and measure.

Perhaps even more importantly, both KLOCs and Function Points ignore the core aspect of ‘Value’ inherent in our definition of output - and therefore productivity. 

It has been stated that 64% of the features and functions in the typical software system are rarely or never used (Standish 2002).  Calling the code that delivers these features and functions “productive” may be a mischaracterization.

Similarly, those features and functions that ARE commonly used are generally not of uniform value.  The Pareto Principle would suggest that 80% of the value is delivered by 20% of the effort.  Yet KLOC and Function Point based metrics treat all features and functions (and code) delivered as interchangeable - which promotes focusing on the easiest, lowest-technical-risk work rather than the most valuable, most innovative work carrying the greatest business risk…

Output is a measure of the value delivered, not the effort expended.

A number of people in the Agile Community have written about an alternative unit of output measure - Value Points.  While the simplicity and value focus of this model is appealing, it has challenges when scaled beyond a few teams.  In order to be meaningful at the organizational level you must normalize the relative value point scale across teams and programs - which can be difficult and expensive. 

Also, the Value Point approach does not easily translate to initiatives and/or divisions that may not be delivering in an Agile manner.  Having a common measure of output, and therefore productivity, is critical to measuring the impact of your agile investment.

So, the approach we recommend is to associate a percentage of the total value of a given initiative with each Minimally Marketable Feature (MMF) or production release.  These percentages can then be applied to any monetary business justification (ROI, NPV, Discounted Cashflow, etc…) to arrive at an expected dollar value realized from each release.

Hence,

Productivity = Total Value Realized (delivered to production) / Total Cost of Production (labor)


Using this approach, your organization (be it a scrum team, a multi-team agile program, a waterfall project, or even an entire product development group) can baseline its productivity for a period of time, and monitor its change over time.

Example:

The division has 3 initiatives in progress:

  • Initiative A:  Total Expected Value $5MM; being delivered by a scrum team with an iteration run-rate of $70,000.
  • Initiative B:  Total Expected Value of $25MM; being delivered by 5 scrum teams with a combined iteration run-rate of $400,000.
  • Initiative C:  Total Expected Value of $50MM; being delivered according to a project plan and resource matrix that charges $1.2MM to the project in the 1st quarter.
In Q1:
  • Initiative A: Released to production monthly and delivered a total of 60% of Expected Value; or $3MM.  25% of the backlog has been burned-down in terms of story points.  Total Cost $210,000.  PRODUCTIVITY = 14.29
  • Initiative B: Released to production quarterly and delivered a total of 65% of Expected Value; or $16.25MM.  35% of the backlog has been burned-down in terms of story points.  Total Cost $1.2MM.  PRODUCTIVITY = 13.54
  • Initiative C: Completed requirements definition and is 50% done with Detailed Design, and delivered 0% of Total Expected Value.  Total Cost $1.2MM.  PRODUCTIVITY = 0.
                       *undelivered work is WIP, and therefore not yet productive.
Aggregate Division Productivity for Q1 = 7.38
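
Here is the same Q1 example as a short script, in case you want to play with the numbers (the figures are from the example above; the helper function is my own):

```python
# Productivity = Total Value Realized (delivered to production) / Total Cost.

initiatives = [
    # (name, total expected value $, fraction of value delivered in Q1, Q1 cost $)
    ("A",  5_000_000, 0.60,   210_000),
    ("B", 25_000_000, 0.65, 1_200_000),
    ("C", 50_000_000, 0.00, 1_200_000),  # all WIP - nothing delivered yet
]

def productivity(value_delivered: float, cost: float) -> float:
    return value_delivered / cost if cost else 0.0

total_value = total_cost = 0.0
for name, expected, fraction, cost in initiatives:
    delivered = expected * fraction
    total_value += delivered
    total_cost += cost
    print(f"Initiative {name}: {productivity(delivered, cost):.2f}")
    # A: 14.29, B: 13.54, C: 0.00

print(f"Aggregate: {total_value / total_cost:.2f}")  # 19.25MM / 2.61MM = 7.38
```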

As you can see, it would be relatively straightforward to predict Q2 productivity - at the initiative as well as division level - by assessing the various product roadmaps and traditional project plans.

Those projections could then be used to drive discussion about trade-offs on where to allocate limited capacity to maximize productivity - to staff and fund initiatives where productive potential is high, and to cancel successful projects whose greatest productive potential has already been harvested.

To inform intelligent business decisions - which is WHY we measure outcomes.

Next Up:  QUALITY


Measuring Results

     “How do we measure the success of agile?” 

It’s one of the most common questions we hear from senior leaders.  And it’s a critically important one for agile evangelists working to justify the organization’s investment in agile and maintain momentum as other priorities compete for leadership attention.

The agile community’s typical response to this question has been some form of an agile maturity assessment - such as the Nokia Test.  

These tools are clear, easy to use, and can be extremely effective in helping organizations assess their adherence to good agile practices.  Yet, when used in isolation, they can leave senior leaders unfulfilled - and miss an important opportunity for aligning agile to fundamental objectives. 

For business leaders, the question isn’t how well are we doing agile?  The question is how well is agile doing for us?  What impact is agile having on business results?

This is the question senior leaders really want answered. 

My response has typically been:

     “How did you measure your impact on business results before Agile?”

Which is generally met with awkward silence and a muttered admission of:

     "not very well."

The conversation then moves toward why measurement is expensive, why you can’t show progress if you don’t have a baseline, and why you need to be very careful about what and how you measure - lest you create unhealthy behaviors and unintended consequences. 

Someone ultimately quotes Einstein, we nod our heads thoughtfully, and finally move on to other topics. Crisis averted!

But the question remains -

     “How do we measure the success of agile?”

and, if Agile Success = Business Success, then the real question is:

     “How do we measure business success?”

Which is the question this blog series sets out to address.

While every organization will have their own unique objectives and priorities, most can be encapsulated as some combination of these:

  • Productivity
  • Quality
  • Time-to-Market
  • Responsiveness
  • Customer Satisfaction
  • Employee Satisfaction
  • Predictability

In the posts to follow we will examine each of these Business Outcomes and look at:

  • How can we measure the business outcome?
  • What agile practices are effective levers in improving the business outcome?
  • How can we measure our agile levers as leading indicators of improved business outcomes?

1st up - PRODUCTIVITY


Why Leading Results?

Several weeks ago, I made a pact with my Rally colleague Ken Clyne to finally begin blogging in 2011. 

At the time, I expected that I’d write about agile product development - what it is to “be agile”, the relative merits of different frameworks, various tips and tricks, and basically share my experiences leading agile transformations.  

After all, it’s what I do - it’s what I know.

But then I came across this great TED Talk: “The Golden Circle”, by Simon Sinek. 

"People don’t buy What you do, they buy Why you do it"…  Its amazing what happens when you start asking the right questions.

Agile isn’t the ‘Why’ - at least not for me.  Agile coaching is ‘What’ I do.  Why do I do it?  What do I believe?

First, I believe in LEADERSHIP.

Leadership tends to get a bad rap in some corners of the agile community.  After all, agile is about self-managed, empowered teams and the wisdom of the crowd.  Potential leaders are too often equated with traditional, autocratic management - slow, bureaucratic and inhumane - so they’re effectively told to ‘just stay out of the team’s way’ - and that’s unfortunate.

Truly enlightened leadership is the key to high-performing teams.  Real leadership unleashes the potential of people; transforms them into a team; inspires their passions and focuses their energies. 

And I believe in RESULTS.

When we invest our time, energy and passion we expect to achieve something.  To realize meaningful results.  Results matter.  Results can mean more than making money (though it almost always includes that) - delighting customers, being first to market, and creating a great work environment may all be objectives for you, your team and your organization.  Are you achieving results?

No matter the beauty of your process or the philosophical purity of your approach; if it doesn’t yield results it’s a well executed failure.

Too often, results seem to get lost in process maturity and methodology dogma - and that’s certainly not unique to the agile community.

So…  Why Leading Results?

  • Because I believe that enlightened leadership is the key to unleashing the potential of high performing teams that achieve results beyond the imagining of their individual members…
  • I help unleash this potential by coaching people on how to lead effectively, from any position within the organization…
  • I just happen to coach lean/agile principles and practices…
  • Would you like to buy some?