Top 3 ways to increase software development productivity

Software development productivity is the ratio of the value of the software produced to the expense of producing it. It can be increased both by driving up the value of the output created by a software organization and by reducing the cost of developing software. Many discussions of software productivity have focused on individual developers. Measuring developer productivity has proven to be a difficult problem to solve (see CannotMeasureProductivity by Martin Fowler). Part of the problem has been a preoccupation with code, the output developers produce, instead of outcome, the benefits derived from the software. Those benefits are typically the result of large team efforts, and focusing on the impact of the software engineering organization at a macro rather than an individual level is a better path to understanding and influencing productivity. In my experience, the best way to drive software productivity is engineering and product/project management leadership focus in the following areas:

Working on the right stuff
Driving productivity starts with a clear vision, direction and goals. The productivity of even the best team will be zero if their efforts are misplaced, e.g. if the software they develop will not be adopted. The vision should clearly articulate the expected benefits. Having transparent goals has the added benefit of enabling individual and team creativity in finding the best ways to achieve those goals.

Keeping a steady hand on the wheel
A common anti-pattern in product/project management is frequent, unwarranted changes in priorities. Direction changes are expensive. They result in throwaway work, which by definition has no value, add the overhead of context switching (ramping down on one activity and up on another takes time) and hurt team morale. Software engineering organizations need to be agile; changes are expected and frequently good. Agility should not, however, be mistaken for pointing teams in different directions as a reaction to a stream of random requests and ideas, or for creating an emergency due to a lack of foresight. Please refer to my earlier blog post on an Agile Product Management Process for an example of how to manage change in a productive way.

Minimizing operational chores
Time spent reacting to operational issues and fixing bugs creates no incremental value. Reducing it has multiple benefits: it frees up time for innovation, i.e. creates an opportunity for the team to produce more, energizes team members and makes it easier to attract talent (developers prefer writing new code to maintaining old code).

There is more to driving productivity. Development processes and tools, increased reuse, a better working environment etc. can all help. In my experience, however, the biggest impact comes from having a clear, value-oriented strategy and minimizing unproductive time. It boils down to strong engineering and product/project management leadership.

API Design Best Practices

If you design APIs, I would highly recommend reading “Web API Design: Crafting Interfaces that Developers Love” by Brian Mulloy. The book focuses on RESTful APIs and covers designing method signatures, handling errors, authentication, versioning etc. It is clear, logical and concise. At just over 30 pages, you can read it in less than an hour. It will help you design elegant, easy-to-use APIs.
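
To give a flavor of the style the book advocates, here is a hypothetical sketch of a resource-oriented URL scheme with a versioned base path and a structured error payload; the resource, fields and URLs below are illustrative, not taken verbatim from the book:

GET    /v1/dogs          List dogs
POST   /v1/dogs          Create a dog
GET    /v1/dogs/1234     Retrieve a single dog
PUT    /v1/dogs/1234     Update a dog
DELETE /v1/dogs/1234     Delete a dog

An error response pairs an appropriate HTTP status code with a developer-friendly message body, for example:

HTTP/1.1 404 Not Found
{ "message": "No dog found with id 1234", "more_info": "https://example.com/docs/errors/404" }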

Engineering Excellence

Engineering excellence is about delivering software you can be proud of. Great software has features supporting well-articulated business and user needs (see Product Hierarchy of Needs: Winning, Keeping and Growing Business). Having great features, however, is not enough. Quality characteristics such as usability, uptime and performance drive user satisfaction. Well-thought-through processes create an environment in which teams can not only meet but also exceed user expectations. This post introduces a continuous improvement process for assessing and driving engineering excellence.

Engineering Excellence Process

The process I have been using with my teams focuses on gradual improvements. Here is how it works. On a quarterly basis, we have a working session with each team to self-assess the current state of engineering excellence. The process utilizes the scorecard shown below. The outcome of the session is an action plan to improve in at least one area. The objective is not to address all issues, gaps or risks, but to gradually improve while delivering features and functionality. Here is the process in more detail:

For each engineering excellence topic (see scorecard below):
  Assess the current state (green, yellow, red)
    - Green means the topic requires no attention (e.g. we are world-class)
    - Red means the topic requires immediate attention (e.g. there are issues or risks with significant business impact)
    - Yellow means we could and should do better
If there are any red topics:
  Devise a plan to address them
  Proceed to execute the plan ASAP
Else if there are any yellow topics:
  Prioritize them
  Pick 1-3 topics to focus on in the next three months
  Develop an action plan for the selected topic(s)
Repeat the entire process next quarter
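
For illustration only, here is a minimal Python sketch of the triage logic above; the topics and statuses shown are made up, not an actual assessment:

# Quarterly triage sketch: address red topics first, otherwise pick 1-3 yellow topics.
scorecard = {
    "Release": "green",
    "QA": "yellow",
    "Monitoring": "yellow",
    "Backup": "red",
}

red = [topic for topic, status in scorecard.items() if status == "red"]
yellow = [topic for topic, status in scorecard.items() if status == "yellow"]

if red:
    action_plan = red          # devise a plan for every red topic and execute ASAP
else:
    action_plan = yellow[:3]   # after prioritizing, pick 1-3 yellow topics for the quarter
print("Focus for this quarter:", action_plan)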

Engineering Excellence Scorecard

The engineering excellence scorecard has two buckets: engineering processes and software quality attributes. Each scorecard row has columns for Status, Notes, Action Items and Sample Questions; the sample questions for each topic are listed below.

Engineering processes:

  Product/Project Management
    - Do we have clearly articulated goals and measures of success for every project?
    - Do we have a clearly articulated scope for every project?
  Development
    - Do we have a predictable and repeatable process?
    - Do we have clear doneness criteria?
    - Do we have any impediments to productivity?
  Source Code Management
    - Do we have an effective branching model?
  Release
    - Do we have a zero downtime release process?
  QA
    - Do we have 100% code review coverage?
    - Do we have 80%+ automated unit test coverage?
    - Do we have 80%+ automated integration test coverage?
    - Do our staging environments adequately represent production environments?
  Monitoring
    - Are we alerted to all critical issues?
    - Are we warned before issues become critical?
  Exception Handling
    - Are all exceptions handled?
  Logging
    - Do we have gaps in logging?
  Backup
    - Is all data backed up?
    - Have we tested recovery?
  Disaster Recovery
    - Will our applications stay up when an entire data center goes down?

Software quality attributes:

  User Satisfaction
    - Do we have user satisfaction gaps?
  Uptime
    - Do we have any single points of failure?
    - Is failover fully automated?
    - Do we comply with our uptime SLA?
  Performance
    - Do we have clearly articulated performance SLAs?
    - Do we regularly test performance?
  Scalability
    - Do we understand how much load our systems can handle?
    - Do we regularly stress test?
    - How quickly can we scale?
  Security
    - Do we have any security exposure?
  Regulatory Compliance
    - Do we have any legal exposure?

Do you need a code freeze?

A code freeze, during which no changes are permitted to a software system, is a policy often used to reduce risk. Any change to code or configuration may have unintended consequences and introduce bugs. Code freezes help ensure that the system will continue to operate without disruption. They are, for example, commonly used in the retail industry during the holiday shopping season, when system load is at its peak.

While reducing risk, code freezes also have negative consequences:

    1. Business agility is impacted, as the business is constrained in what it can ask software teams for during the code freeze.
    2. Time to market is elongated, as feature deployments are delayed until after the code freeze.
    3. Software team productivity goes down. Bug fix deployments are more time consuming while complying with code freeze policies, code merges can become more complicated, and system integration and QA of new features are disrupted or at least delayed.
    4. The subsequent release becomes riskier, as a larger backlog of features gets deployed after the code freeze.

This raises a question: can code freezes be avoided? The answer is yes. The state of the art is continuous deployment – multiple production releases a day, five days a week, every week, i.e. no code freeze required.

Code freezes can be eliminated if there is a high level of confidence that software deployments will not have unintended consequences. This requires:

    1. Comprehensive test automation including unit, integration and performance testing.
    2. Diligent code reviews.
    3. A zero downtime release process. This requires particular attention to any database changes.
    4. Robust monitoring, with warnings issued when the system’s health deteriorates but well before it becomes a real issue, and critical alerts sent when the system requires immediate attention (see the sketch below).
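
As an illustration of item 4, here is a minimal Python sketch of a health check that warns well before a metric becomes critical; the metric, thresholds and return values are hypothetical:

# Hypothetical health check: warn early, alert only when immediate attention is needed.
WARN_THRESHOLD = 0.75      # warn once disk usage passes 75%
CRITICAL_THRESHOLD = 0.90  # page the on-call engineer at 90%

def check_disk_usage(used_fraction):
    if used_fraction >= CRITICAL_THRESHOLD:
        return "critical"  # send a critical alert; immediate attention required
    if used_fraction >= WARN_THRESHOLD:
        return "warning"   # issue a warning well before it becomes a real issue
    return "ok"

print(check_disk_usage(0.82))  # -> warning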


Agile Product Management Process

An overview of a fast-paced product development process.

1. Product requests and ideas are collected using a variety of sources and techniques. They come from customers and internal stakeholders, and are gathered using brainstorming sessions, interviews, online feedback capture, focus groups, analysis of sales win/loss reports and support cases, competitive analysis etc.

Product requests and ideas are captured as Wishlist items.

Note: there is a separate escalation process to handle requests of an urgent nature, e.g. issues impacting customers.

2. The Wishlist is prioritized every six to seven weeks (twice a quarter). For an overview of the prioritization process, please see The Art and Science of Product Prioritization.

3. Items selected during a prioritization session are moved to the Design Queue for analysis, requirements definition and design. The output of the design process consists of lightweight documentation, wireframes and visual comps (where needed).

4. Once an item’s design is done, it is moved to the Product Backlog.

5. Every two weeks, the Engineering Team selects items from the Product Backlog for implementation in a bi-weekly sprint. Our Development Process post provides more details.

6. Product enhancements and fixes are deployed to production every 2nd Wednesday. Release notes and updated documentation are published on the Help Center shortly after a release.

Getting new developers productive on day one

All new developers joining my team start coding on day one, and complete their first set of tasks and commit code on day two.

I have found that the most effective way of getting new team members up to speed and productive is to get them coding right away. I start by assigning a set of simple bug fixes and tiny enhancements. The tasks are selected to touch various parts of the code base.

The advantages of this approach are:

  • Coding tasks provide a clear focus and beat other methods of getting up to speed, such as reading documentation and browsing through the code base.
  • Keeping them simple provides a set of quick-to-achieve goals, and the resulting sense of accomplishment is a powerful motivator.
  • Careful selection of tasks provides a good exposure to the breadth of the code base.
  • We start getting business value on day two.
  • Keeping the tasks small minimizes risks.

We use a bi-weekly, scrum-based, agile process (see Our Development Process). Throughout the developer’s first full sprint, I gradually increase the complexity of the tasks. At the beginning of their second full sprint, they are ready to start selecting work on par with the other team members.

The approach is very effective and sets the tone for a high-performance team culture.

What is your method for onboarding new team members?

jQuery Mobile Tutorial the most popular in 2012

jQuery Mobile Tutorial posts got by far the most views in 2012. The three-part tutorial – recently upgraded to jQuery Mobile 1.2.0 – can be found at:

jQuery Mobile Tutorial Part I – Static Pages

jQuery Mobile Tutorial Part II – Dynamic Pages

jQuery Mobile Tutorial Part III – Managing Data

Next in order of popularity were Top Chart Libraries, git-flow with rebase and Running distributed cron jobs in the cloud.

Visitors came from 146 countries, led by the United States, India and the United Kingdom.

Our Development Process

An introduction for new team members.

  1. We use a scrum-based, bi-weekly, agile process.
  2. Every 2nd Wednesday we have a Sprint Kickoff meeting where we select tickets to work on in the next sprint. Tickets are chosen based on priority, e.g. highest priority first and, within a priority, defects ahead of enhancements.
  3. Tickets are stored in Lighthouse.
  4. GitHub is our source code repository.
  5. We use the git-flow branching model. Please read Why aren’t you using git-flow?
  6. When starting work on a ticket, create a feature branch for it.
  7. We subscribe to the Test Driven Development philosophy. Code is considered complete when it has full test coverage and the tests are passing.
  8. Both unit and integration tests are implemented using rspec.
  9. Once all work on a ticket has been completed:
    – Commit the changes to the feature branch using the following git commit message convention: “[#TICKET_NUMBER] Your message.” This allows us to cross-reference Lighthouse with GitHub.
    – Then pull the develop branch from GitHub, rebase the feature branch onto develop and push the feature branch to GitHub. Rebasing keeps the commit history clean and linear.
    – Last but not least, initiate a pull request on GitHub. Please refer to Using Pull Requests. (An example command sequence for steps 6, 9 and 10 follows this list.)
  10. Next comes a code review. Developers with a thorough understanding of the code base can take on the reviewer role. When reviewing a pull request, discuss it with the developer if necessary. When the code is ready to merge:
    – Rebase the feature branch onto develop once more and push it to GitHub.
    – Merge the pull request and delete the feature branch on GitHub.
    – Last but not least, change the ticket state in Lighthouse to resolved.
  11. We use tddium for Continuous Integration. Commits to the develop branch on GitHub trigger execution of the entire application test suite. You will receive an email notification from tddium. Any failed tests need to be resolved as a priority, ahead of working on the next ticket.
  12. The QA Team does additional QA on top of the test automation. A ticket’s state is changed to verified if it passes QA, and back to open if issues are discovered. Our goal is to have zero re-opened tickets.
  13. Every 2nd Tuesday we have a Sprint Wrap-up meeting where we go over completed and outstanding sprint tickets.
  14. After a Sprint Wrap-up meeting, a release branch is created. The release name is the same as the Lighthouse milestone number for the current sprint, e.g. 1.0.5.4.
  15. After the release branch is created, we have a day for final regression testing. Any issues found during regression testing take priority and have to be fixed in the release branch.
  16. When a release is ready for production, i.e. all tickets are in the verified state and there are no failing tests, the release branch is merged into develop and master. The master branch is deployed to production as described in this article: Deploying Ruby applications in the cloud.
  17. On rare occasions, a hot fix release is required in between regular releases. The hot fix naming convention adds a digit to the regular release name. For example, if 1.0.5.4 was the most recent release prior to a hot fix, the hot fix name would be 1.0.5.4.1.
  18. Every business day we hold a brief team meeting – a 10-minute scrum at 10:10am.
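
For illustration, here is a hypothetical git command sequence for steps 6, 9 and 10; the ticket number (#1234) and branch name are made up:

# Step 6: create a feature branch off develop (git-flow naming convention)
git checkout develop
git pull origin develop
git checkout -b feature/1234-fix-login develop

# Step 9: commit with the ticket number, rebase onto the latest develop, then push
git commit -am "[#1234] Fix login redirect."
git checkout develop
git pull origin develop
git checkout feature/1234-fix-login
git rebase develop
git push origin feature/1234-fix-login
# ...then initiate a pull request on GitHub

# Step 10 (reviewer, when the code is ready to merge): rebase once more and push
git checkout feature/1234-fix-login
git rebase develop
git push -f origin feature/1234-fix-login   # force push is needed after rebasing a published branch
# ...then merge the pull request, delete the branch on GitHub and mark the Lighthouse ticket resolved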

The Art and Science of Product Prioritization

Prioritization is a challenge. A typical product wishlist far exceeds the product team’s capacity. Choosing wisely makes a big difference in product success and, ultimately, business success.

Product wishlist items are collected using a variety of sources and techniques. They come from customers and internal stakeholders and are gathered using brainstorming sessions, interviews, online feedback capture, focus groups, analysis of sales win/loss reports and support cases, competitive analysis etc. The Wishlist is constantly evolving, with frequent additions of new items. Different stakeholders commonly have varying perspectives on the top priorities.

The process we have used to determine top priorities starts with scoring (the science part). All items are scored in two dimensions: business value and cost/complexity.

Items which are high value and low complexity (Quick Wins) are a no-brainer and, after a sanity check, automatically become a priority. Items which are low value and high complexity make for an equally easy decision and are disqualified. Now comes the art part. Items of high value and high complexity are where the key decisions have to be made. Scoring is a rather imprecise tool, so simply using scores to drive decisions would not produce optimal results. Top items from this Strategic quadrant are instead selected in a working session involving key stakeholders, where their respective merits are debated and ultimately voted on.
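
As an illustration, here is a minimal Python sketch of the 2 x 2 board; the 1-5 scoring scale and the threshold are assumptions, and the low value, low complexity quadrant is labeled Discretionary to match step 8 below:

# Classify a Wishlist item by its value and cost/complexity scores (1-5 scale assumed).
def quadrant(value, complexity, threshold=3):
    high_value = value >= threshold
    high_complexity = complexity >= threshold
    if high_value and not high_complexity:
        return "Quick Win"      # sanity check, then automatically a priority
    if high_value and high_complexity:
        return "Strategic"      # debated and voted on in the working session
    if not high_value and high_complexity:
        return "Disqualified"   # equally easy decision: dropped
    return "Discretionary"      # low value, low complexity: chosen selectively

print(quadrant(value=5, complexity=2))  # -> Quick Win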

The process is outlined in more detail below:

  1. Start with the business goals and key themes among customer requests. Translate these into product goals.
  2. Define scoring criteria. Value scores are based on alignment with the product goals. Cost/complexity scores are based on rough engineering estimates.
  3. Sanitize the Wishlist.
  4. Brainstorm additional Wishlist items.
  5. Score the Wishlist and place all items on the 2 x 2 board.
  6. Sanity check Quick Wins and Disqualified items.
  7. Debate and select top items from the Strategic quadrant.
  8. Selectively choose Discretionary items.

Steps 6 through 8 above are done in a working session with the key stakeholders. The entire process is repeated every 6-7 weeks, or at a minimum once a quarter.

Related posts:

Product Hierarchy of Needs: Winning, Keeping and Growing Business

Developing products at a high pace and low cost

Popular jQuery Mobile Tutorial upgraded to version 1.1.1

The jQuery Mobile Tutorial has been upgraded to work with the latest stable release of jQuery Mobile – release 1.1.1.

If you are interested in learning jQuery Mobile, the tutorial will walk you through the process of building a simple but fully functional mobile application.

Part I of the tutorial covers framework basics – creation of static pages. Part II introduces dynamic pages and Part III adds data management features – the ability to add, store and delete records.

Related posts:

HTML5 versus Native Apps: Which Platform To Choose For Your Next Mobile App