Wednesday, February 8, 2017

Software Vendor Management and Code Quality

Outsourcing software development requires vigilance if the anticipated gains are to be realized.  The hard-fought negotiations, securing a somewhat lower cost for the client while leaving a worthwhile profit for the vendor, are over for another year or two, and the actual work can (re)commence.
What impact will the new software development outsourcing contract have on the behavior of the vendor? 
The vendor will probably be looking to cut its own costs to regain lost margin.  With the best intentions in the world, this puts quality at risk, even if only in the short term.  Why? Because the vendor will probably choose to do one, or all, of the following: push more work through the same team; introduce new, cheaper resources to the team; or cut back on testing.
How can a client monitor for these software vendor management changes? 
First and foremost, you need good data.  It is not helpful to start gathering data only after you think you have detected a problem with delivered code.  The only data that will be useful in a discussion about diminishing quality from development outsourcing is trend data (I will return to this point at the end).  That means the client must be capturing and analyzing data continuously, even in the good times.  If you tell me that the quality of my code has dropped off recently, I will not believe you unless you can show me concrete data on when and how it was better before.
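To make "continuous capture" concrete, here is a minimal sketch in Python: one row is appended per code delivery as soon as acceptance testing closes, so the trend is already on file when a dispute arises.  The file name, severity levels, and defect weighting are illustrative assumptions, not a prescription.

import csv
from datetime import date
from pathlib import Path

LOG = Path("delivery_quality.csv")           # assumed log location
SEVERITIES = ["critical", "major", "minor"]  # illustrative severity scheme

def record_delivery(delivery_id, defects):
    """Append one row per code delivery, even in the good times."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "delivery"] + SEVERITIES)
        writer.writerow([date.today().isoformat(), delivery_id]
                        + [defects.get(s, 0) for s in SEVERITIES])

def trend(last_n=6):
    """Weighted defect score for each of the most recent deliveries."""
    weights = {"critical": 10, "major": 3, "minor": 1}  # illustrative weights
    with LOG.open() as f:
        rows = list(csv.DictReader(f))
    return [(r["delivery"], sum(weights[s] * int(r[s]) for s in SEVERITIES))
            for r in rows[-last_n:]]

record_delivery("2017.02-R1", {"critical": 0, "major": 2, "minor": 11})
print(trend())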
What sort of data? 
The level of defects found, by severity, in any acceptance testing should be included.  However, with many clients these days having only limited software development capabilities of their own, I would also recommend that all delivered code be passed through a reputable static code analyzer such as CAST, SonarQube, or Klocwork.  These tools provide a deeper analysis of the quality of the code, new metrics and, by comparison with previous runs on previous code deliveries, the impact of the current delivery: did it improve or diminish the overall quality of the application being worked on?  Clearly, the former is desirable and the latter is a cause for discussion.  Some care needs to be taken before diving headlong into an untried static code analyzer.  Poor examples of the breed tend to generate many false positives, sometimes so many that the credibility and value of the tool are lost.
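For teams that settle on SonarQube, the before-and-after comparison can be automated against its web API.  The sketch below is a rough illustration only: the server URL, project key, and token are placeholders, and you should verify the endpoint and metric keys against your own server version before relying on it.

import requests

SONAR_URL = "https://sonarqube.example.com"  # placeholder server
PROJECT = "core-product"                     # placeholder project key
METRICS = "bugs,vulnerabilities,code_smells,duplicated_lines_density"

resp = requests.get(
    f"{SONAR_URL}/api/measures/search_history",
    params={"component": PROJECT, "metrics": METRICS},
    auth=("analysis-token", ""),  # token as user name, blank password
)
resp.raise_for_status()

for measure in resp.json()["measures"]:
    # Some history points can lack a value; keep only the usable ones.
    points = [h for h in measure["history"] if "value" in h]
    if len(points) < 2:
        continue
    prev, curr = float(points[-2]["value"]), float(points[-1]["value"])
    # For these four metrics, lower is better.
    verdict = "improved" if curr < prev else "worsened" if curr > prev else "flat"
    print(f"{measure['metric']}: {prev} -> {curr} ({verdict})")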
[Image: Maintaining code quality with software vendor management. Photo by Markus Spiske (https://unsplash.com/photos/xekxE_VR0Ec) [CC0], via Wikimedia Commons.]
From personal experience, I also like to see the results of formal code reviews carried out on the code by the developer and one of their colleagues.  To quote a Rogue Wave white paper, “The value of code review is unarguable, which explains why they’re mandated by 53% of today’s software development teams.”  Many years ago, during my time managing a large software development group at Sanchez Computer Associates (now part of FIS), we faced the challenge of maintaining and improving code quality on our complex core product while increasing the number of developers to meet demand.  Code reviews seemed to be a good answer because we had a group of very experienced developers who could teach and mentor the newcomers.  The problem was that the old hands were in just as much demand to get code out of the door, so they didn’t have time to review all the code being produced by everyone else.
They, not I, came up with a good compromise.  They devised a set of programming standards in the form of a checklist that every programmer, including the most experienced developers, would apply to their own code before unit test.  This caught a lot of minor problems through the simple, repetitive reminder exercise.  Next, the programmer would do a quick review of the checklist and code with a colleague who could perform quick “spot checks.”  Finally, if any coding defects were discovered in subsequent test or use, the lessons from these were captured in an updated checklist.  From a software vendor management perspective, I see the collection and review of these checklists as a form of commitment from individual team members that their code is “done,” and the idea is simple enough to mechanize, as the sketch below shows.
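Here is a rough sketch of how such a checklist could be enforced at delivery intake: the build is rejected unless every item carries a reviewer sign-off.  The checklist items and the one-line-per-item file format are invented for illustration; they are not the Sanchez originals.

CHECKLIST_ITEMS = [
    "naming follows team standards",
    "error paths handled and logged",
    "no hard-coded credentials or endpoints",
    "unit tests cover new branches",
    "peer spot-check completed",
]

def validate_checklist(path):
    """Expect one 'item: reviewer initials' line per checklist item."""
    signed = {}
    with open(path) as f:
        for line in f:
            if ":" in line:
                item, initials = line.rsplit(":", 1)
                signed[item.strip()] = initials.strip()
    missing = [item for item in CHECKLIST_ITEMS if not signed.get(item)]
    if missing:
        raise SystemExit(f"Delivery rejected; unsigned items: {missing}")
    print("Checklist complete; delivery accepted for test.")

validate_checklist("delivery_checklist.txt")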
Returning to my point that trend data is the only useful currency for a software vendor management discussion: in my experience, these discussions proceed very differently if the data collected before the contract (re)negotiation are used to set expectations in the contract itself.  Not necessarily service level agreements (SLAs), because those may be reserved for more important issues such as cost, productivity, or customer satisfaction, but certainly the recording of an expectation that quality metrics will meet or exceed some average based on prior performance from this software vendor (or the one it is replacing).
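As a final illustration, here is a small sketch of turning pre-negotiation trend data into such a recorded expectation: average the prior deliveries' weighted defect scores and set a ceiling the vendor is expected to meet or beat.  The scores and the 10% tolerance are invented for the example.

from statistics import mean

# Weighted defect scores per delivery, collected before renegotiation
# (numbers invented for the example).
prior_scores = [34, 29, 41, 27, 31, 30]

baseline = mean(prior_scores)
ceiling = baseline * 1.10  # expectation: meet or beat baseline, 10% headroom

def check_delivery(score):
    status = "within expectation" if score <= ceiling else "flag for discussion"
    print(f"score {score} vs ceiling {ceiling:.1f}: {status}")

check_delivery(28)
check_delivery(45)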
This post was originally published at https://www.softwarevalue.com/insights/blog/posts/2017/january/software-vendor-management-and-code-quality/
