Software Factories: Part 4 – Measuring Outcomes


One of the difficulties with implementing a software factory is measuring development-team performance. The approach may differ fundamentally depending on the culture of the team and your desired outcomes, but here is an approach I have implemented that I believe is fairly unusual in a corporate environment.

My objectives were:

  • To change the culture from one that was inward-looking (every problem is solved by writing some code) to one that also considered external commercial or Open Source solutions (a problem can be solved in many different ways).
  • To reward outcomes rather than inputs, i.e. the primary concern is that the project objectives, the deadlines, and the architectural principles and standards are all met.
  • To allow developers some freedom to choose their projects.
  • To encourage information sharing, collaboration and cross-skilling within the team.
  • To reward the top performers and manage the poor performers.

This may look like a lot to ask of a single process with a simple measurement system, but I believe the process we have selected achieves all of these goals. First, note that I have created three development teams, each accountable for progressing our technical strategy and defining standards and toolsets within its respective domain: the “Front-Ends” team (which includes the web team), the Integration team, and the Services team.

The RFP Process

  • At the project-prioritisation forum, the Business-Unit heads agree on the priorities of the projects for which we will publish RFPs.
  • The Project office prepares a very brief RFP describing the key outcomes required for the project.
  • As the CIO, I estimate the size of the project and award the project a size-value on a scale of 1-20.
  • The RFP is published on an internal blog indicating its size-value and priority.
  • The developers are notified of a new RFP and may then respond individually or form a syndicate across the various teams to respond.  Only two days are allocated for constructing a proposal at this point in the process.
  • The responses are considered at the weekly Architecture council meeting (Chief Architect, Head of Operations, Heads of each development team, Data & Services registrar) in terms of the quality of the technical approach, the reasonableness of the effort estimate and the time-to-market vs architectural-significance trade-offs.
  • One syndicate is awarded the project and must take into consideration any guidance offered by the architectural council.

The Planning Process

  • The syndicate should then embark on a detailed estimation process which follows a Delphi methodology to arrive at a fairly good estimate of effort.
  • A detailed design process is worked into the plan and after the design is complete the effort estimates are reviewed one more time.
  • The project is baselined at this point and deadline targets are measured based on this iteration of the project estimation.
  • Depending on each syndicate member’s involvement after the estimation, the project-size points earned by each member are adjusted if necessary.
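The Delphi-style estimation loop in the first step above can be sketched in a few lines of Python. This is only an illustration: the function name, the 15% convergence tolerance, and the sample estimates are my own hypothetical choices, not part of our process.

```python
from statistics import median, pstdev

def delphi_round(estimates, tolerance=0.15):
    """One Delphi round: return (consensus estimate, converged?).

    estimates: anonymous effort estimates in person-days.
    "Converged" here means the spread (population std dev) is within
    `tolerance` of the median -- a hypothetical threshold.
    """
    m = median(estimates)
    converged = pstdev(estimates) <= tolerance * m
    return m, converged

# Round 1: initial anonymous estimates -- wide spread, so discuss and repeat
print(delphi_round([30, 55, 40, 80]))   # (47.5, False)
# Round 2: after discussing assumptions, estimates tighten and converge
print(delphi_round([42, 45, 40, 48]))   # (43.5, True)
```

In practice the discussion between rounds, where estimators explain their outlier reasoning, is the valuable part; the convergence check merely tells the syndicate when to stop iterating.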

The Project Scoring Process

  • Once the project has been completed it must be scored by all the relevant stakeholders.
  • A standardized electronic survey with a set of questions for each type of stakeholder is used, so that scores can be easily compared.
  • The standardized questions are published so that everyone knows in advance what the required outcomes are.

The Performance Measurement Metrics

Using these metrics, the following can easily be tracked and managed:

  • Each individual must have earned at least 20 Project size points in a year.  I measure this on a prorated basis every quarter and meet with underperforming team members to help them get back on track.
  • In order to qualify for a bonus each individual must earn at least 24 project-size points in a year.
  • Of those eligible for a bonus, each individual’s Aggregate Project Outcome Score (the sum of earned Project Size points × Project Survey Score across all the projects the individual was involved in) determines their rank on the bonus list.  The top 5 scorers earn 15, 14, 13, 12 and 11% of the development-team bonus pool respectively, and the balance of the eligible team members share the remaining 35% in proportion to their relative scores.
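The scoring arithmetic above can be made concrete with a short Python sketch. All names and sample values are hypothetical, and the sketch assumes more than five bonus-eligible members and a pool expressed as 100%:

```python
def aggregate_outcome_score(projects):
    """projects: list of (size_points_earned, survey_score) pairs."""
    return sum(size * score for size, score in projects)

def bonus_shares(scores, top_pcts=(15, 14, 13, 12, 11)):
    """Split the bonus pool given each eligible member's aggregate score.

    The top five scorers get fixed percentages of the pool; everyone
    else shares the balance in proportion to their scores.  Assumes
    more than five eligible members.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    shares = dict(zip(ranked, top_pcts))          # top five, fixed cuts
    rest = ranked[len(top_pcts):]
    rest_total = sum(scores[n] for n in rest)
    remaining = 100 - sum(top_pcts)               # balance of the pool
    for n in rest:
        shares[n] = remaining * scores[n] / rest_total
    return shares
```

For example, with seven eligible members the top five take their fixed cuts and the last two split the balance pro rata, so the shares always sum to 100%.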

Performance Reporting

Although all this may sound complicated, it reduces each team member’s standing to two metrics: Project Size points and Project Outcome Scores.

  • If you earn at least 20 Project Size points, you keep your job.
  • If you earn at least 24 Project Size points, you qualify for a bonus.
  • The size of that bonus is determined by your Project Outcome Scores (the quality and timeliness of your work).
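The two rules above, combined with the prorated quarterly check mentioned earlier, might be evaluated as follows. The linear proration and the function names are my own illustration of one plausible policy, not a description of our actual tooling:

```python
def prorated_target(annual_target, quarters_elapsed):
    """Linearly prorate an annual point target -- an assumed policy."""
    return annual_target * quarters_elapsed / 4

def standing(points, quarters_elapsed=4):
    """Classify year-to-date Project Size points against the two thresholds."""
    if points < prorated_target(20, quarters_elapsed):
        return "underperforming"            # triggers a catch-up meeting
    if points < prorated_target(24, quarters_elapsed):
        return "on track, not bonus-eligible"
    return "bonus-eligible"
```

A member with 4 points at the end of the first quarter would already be flagged as underperforming, rather than discovering the shortfall at year-end.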

The current scores for each team member (both size and outcome scores) are published weekly for all to see.  No-one should have any doubt where they stand at any point in the year.

Some Balancing factors

Of course, none of this works if there are insufficient projects to keep everyone busy during the year.  It would not be fair if there were not enough project-size points available for everyone to earn their required 20.  Fortunately, I do not anticipate that scenario for many years to come.
I am sure that by the time it occurs, humans will no longer be involved in software development processes, and performance measurement will have a completely different set of objectives.
